Lessons Learned Building AI-Powered Plugins in Strapi

TL;DR

  • Strapi v5 plugins can share AI-callable tools through a standardized ToolDefinition interface using Zod schemas, enabling a composable plugin ecosystem where each plugin works independently but gains superpowers when combined
  • A convention-based ai-tools service allows a central AI SDK plugin to automatically discover and register tools from any installed plugin at bootstrap -- zero configuration required
  • Tools are namespaced (pluginName__toolName) to avoid collisions and exposed through a three-tier access model: all tools for admin chat, config-whitelisted publicSafe tools for public chat, and non-internal tools via MCP
  • Each plugin ships its own MCP server for standalone use with Claude Desktop, but when the AI SDK plugin is installed, all tools are unified into a single MCP endpoint and admin chat interface
  • This architecture turns Strapi from a headless CMS into a composable AI platform where plugins are both standalone microservices and building blocks of a larger agentic system

The Problem: Plugins as Islands

Strapi's plugin system is powerful. You can build self-contained modules with their own content types, services, controllers, and admin UI. But there's a problem: plugins are islands. They don't talk to each other.

I hit this wall when building a suite of plugins for content operations. I had a plugin that fetches YouTube transcripts, another that creates vector embeddings from those transcripts, another that ingests social media mentions from Octolens, and I wanted an AI assistant that could use all of them. The naive approach would be to hardcode every integration into a single monolithic plugin. But that defeats the entire purpose of a plugin architecture.

What I wanted was something more like Unix pipes -- small, focused tools that do one thing well, but can be composed into something greater. Each plugin should work on its own, but when you install them together, the whole becomes greater than the sum of its parts.

This post walks through the architecture I built to make that happen.

Architecture Overview: The Big Picture

The system is built around four plugins, each with a distinct responsibility:

flowchart TB
    subgraph AI["strapi-plugin-ai-sdk (Orchestrator)"]
        TR[ToolRegistry]
        MCP_CENTRAL[Unified MCP Server]
        CHAT[Admin Chat UI]
        PUB[Public Chat API]
        BUILT_IN[Built-in Tools<br/>searchContent, writeContent,<br/>listContentTypes, sendEmail...]
    end

    subgraph YT["yt-transcript-strapi-plugin"]
        YT_TOOLS[fetchTranscript<br/>searchTranscript<br/>getTranscript<br/>listTranscripts<br/>findTranscripts]
        YT_MCP[Own MCP Server]
    end

    subgraph EMB["yt-embeddings-strapi-plugin"]
        EMB_TOOLS[searchYtKnowledge<br/>listYtVideos<br/>getYtVideoSummary<br/>getVideoTranscriptRange]
        EMB_MCP[Own MCP Server]
    end

    subgraph OCT["strapi-octolens-mentions-plugin"]
        OCT_TOOLS[searchMentions<br/>listMentions<br/>getMention<br/>updateMention]
        OCT_MCP[Own MCP Server]
    end

    YT_TOOLS -->|ai-tools service| TR
    EMB_TOOLS -->|ai-tools service| TR
    OCT_TOOLS -->|ai-tools service| TR
    BUILT_IN --> TR

    TR --> MCP_CENTRAL
    TR --> CHAT
    TR --> PUB

    YT_MCP -.->|standalone| CLAUDE_D[Claude Desktop]
    EMB_MCP -.->|standalone| CLAUDE_D
    OCT_MCP -.->|standalone| CLAUDE_D
    MCP_CENTRAL -->|unified| CLAUDE_D

The key insight: every plugin defines its tools once using a shared interface, then exposes them through multiple channels. The tools are the contract. Everything else -- MCP servers, admin chat, public APIs -- is just plumbing.

The Four Plugins

Before diving into the architecture, here's what each plugin does:

strapi-plugin-ai-sdk -- The orchestrator. Provides an admin chat interface powered by Claude (via Vercel AI SDK), a unified MCP server, public chat API, and the tool discovery system. Ships with 11 built-in tools for content management (CRUD operations, search, email, memory, tasks).

yt-transcript-strapi-plugin -- Fetches and stores YouTube transcripts with timecodes. Provides BM25 full-text search within transcripts, chunked retrieval for long videos, and caches everything in the database. Ships 5 tools.

yt-embeddings-strapi-plugin -- Creates vector embeddings from YouTube transcripts using OpenAI and stores them in Neon PostgreSQL with pgvector. Provides semantic search with context window expansion, LLM-extracted metadata (topics, summaries, key moments). Ships 4 tools. Listens to transcript creation lifecycle events for automatic embedding.

strapi-octolens-mentions-plugin -- Ingests social media mentions from Octolens via webhook. Provides BM25 search, filtering by source/sentiment/keyword, and response tracking. Ships 4 tools.

Each plugin works perfectly on its own. You can install just the transcript plugin and use its MCP server with Claude Desktop to fetch and search YouTube transcripts. But install the AI SDK plugin alongside it, and those transcript tools automatically appear in the admin chat alongside every other plugin's tools.

Tool Standardization: The ToolDefinition Interface

The entire system rests on a single interface. Every plugin that wants to contribute tools to the ecosystem implements the same contract:

interface ToolDefinition {
  name: string;
  description: string;
  schema: z.ZodObject<any>;
  execute: (
    args: any,
    strapi: Core.Strapi,
    context?: { adminUserId?: number }
  ) => Promise<unknown>;
  /** If true, only available in admin chat, not exposed via MCP */
  internal?: boolean;
  /** If true, safe for unauthenticated public chat (read-only) */
  publicSafe?: boolean;
}

This interface is intentionally minimal. Four required properties, two optional flags. Let's break down why each matters:

name -- A camelCase identifier. Will be namespaced later by the discovery system.

description -- Critical for AI tool calling. This is what the LLM reads to decide when and how to use the tool. Write it like API documentation.

schema -- A Zod object schema for input validation. This serves double duty: runtime validation in the execute function, and automatic JSON Schema generation for MCP and the Vercel AI SDK's tool() wrapper.

execute -- An async function that receives the validated args, the Strapi instance, and an optional context (currently just the admin user ID for permission scoping). Returns any serializable value.

internal and publicSafe -- Access control flags. More on these in the three-tier access model section.

Here's a real tool definition from the embeddings plugin:

// yt-embeddings-strapi-plugin/server/src/tools/search-yt-knowledge.ts

import type { Core } from '@strapi/strapi';
import { SearchYtKnowledgeSchema } from '../mcp/schemas';
// formatTime (seconds -> "mm:ss") is a plugin-local helper, omitted here

export const searchYtKnowledgeTool = {
  name: 'searchYtKnowledge',
  description:
    'Semantically search YouTube video transcripts. Returns relevant passages ' +
    'with timestamps, deep links, video topics, and summary.',
  schema: SearchYtKnowledgeSchema,
  execute: async (args: unknown, strapi: Core.Strapi) => {
    const validated = SearchYtKnowledgeSchema.parse(args);

    const results = await strapi
      .plugin('yt-embeddings-strapi-plugin')
      .service('ytEmbeddings')
      .search(validated.query, {
        limit: validated.limit,
        minSimilarity: validated.minSimilarity,
        videoId: validated.videoId,
        topics: validated.topics,
        contextWindowSeconds: validated.contextWindowSeconds,
      });

    if (!results.length) {
      return { results: [], message: 'No relevant content found.' };
    }

    return {
      results: results.map((r: any, i: number) => ({
        rank: i + 1,
        similarity: r.similarity,
        title: r.title,
        topics: r.topics,
        videoSummary: r.videoSummary,
        timestamp: `${formatTime(r.startSeconds)} - ${formatTime(r.endSeconds)}`,
        deepLink: r.deepLink,
        contextText: r.contextText,
      })),
    };
  },
  publicSafe: true,
};

Notice the pattern: the tool validates its own input with Zod, delegates to a Strapi service for the actual work, and returns structured data. The tool definition is a thin adapter between the standardized interface and the plugin's internal service layer.

The Zod schema pulls double duty as both the runtime validator and the source of truth for the MCP/AI SDK input schema:

// Zod schema used by the tool
const SearchYtKnowledgeSchema = z.object({
  query: z.string().min(1).describe('The search query'),
  limit: z.number().min(1).max(20).optional().default(5),
  videoId: z.string().optional().describe('Filter to a specific video'),
  topics: z.array(z.string()).optional().describe('Filter by topics'),
  contextWindowSeconds: z.number().min(0).optional().default(30),
  minSimilarity: z.number().min(0).max(1).optional().default(0.65),
});

The ai-tools Service Convention

Tool standardization is only half the story. The other half is discovery. How does the AI SDK plugin find tools from other plugins without hardcoded imports?

The answer is a convention: any plugin that wants to contribute tools exposes a service named ai-tools with a getTools() method. That's it. No registration, no configuration, no dependency injection framework. Just a naming convention.

Here's the complete ai-tools service from the Octolens plugin:

// strapi-octolens-mentions-plugin/server/src/services/ai-tools.ts

import type { Core } from '@strapi/strapi';
import { tools } from '../tools';

export default ({ strapi }: { strapi: Core.Strapi }) => ({
  getTools() {
    return tools;
  },
});

A few lines. The service imports the canonical tool definitions and returns them. Every contributing plugin implements this exact same pattern. The YouTube transcript plugin, the embeddings plugin, the Octolens plugin -- they all have an identical ai-tools service.

The tools themselves are defined in a tools/ directory in the plugin's server source and exported as an array:

// strapi-octolens-mentions-plugin/server/src/tools/index.ts

export const tools: ToolDefinition[] = [
  searchMentionsTool,
  listMentionsTool,
  getMentionTool,
  updateMentionTool,
];

// yt-transcript-strapi-plugin/server/src/tools/index.ts

export const tools: ToolDefinition[] = [
  fetchTranscriptTool,
  listTranscriptsTool,
  getTranscriptTool,
  searchTranscriptTool,
  findTranscriptsTool,
];

This convention-over-configuration approach means adding a new plugin to the ecosystem requires zero changes to the AI SDK plugin. Install it, restart Strapi, and the tools appear automatically.

Automatic Discovery at Bootstrap

The AI SDK plugin's bootstrap function is where the magic happens. It scans every installed plugin, looks for the ai-tools service, and registers whatever tools it finds:

// strapi-plugin-ai-sdk/server/src/bootstrap.ts

const bootstrap = ({ strapi }: { strapi: Core.Strapi }) => {
  // ... provider and registry initialization ...

  // Initialize tool registry with built-in tools
  const toolRegistry = new ToolRegistry();
  for (const tool of builtInTools) {
    toolRegistry.register(tool);
  }

  // Discover tools from other plugins
  for (const [pluginName, pluginInstance] of Object.entries(strapi.plugins)) {
    if (pluginName === PLUGIN_ID) continue;

    let aiToolsService: any = null;
    try {
      aiToolsService = strapi.plugin(pluginName)?.service?.('ai-tools');
    } catch {
      // Plugin may not support service() yet
    }

    if (!aiToolsService?.getTools) continue;

    const contributed = aiToolsService.getTools();
    if (!Array.isArray(contributed)) continue;

    for (const tool of contributed) {
      if (!tool.name || !tool.execute || !tool.schema) {
        strapi.log.warn(`Invalid tool from ${pluginName}: ${tool.name}`);
        continue;
      }

      // Namespace to avoid collisions
      const safeName = pluginName.replace(/[^a-zA-Z0-9_-]/g, '_');
      const namespacedName = `${safeName}__${tool.name}`;

      if (toolRegistry.has(namespacedName)) {
        strapi.log.warn(`Duplicate tool: ${namespacedName}`);
        continue;
      }

      toolRegistry.register({ ...tool, name: namespacedName });
    }
  }
};

The discovery loop does several important things:

  1. Iterates all plugins except itself
  2. Resolves each plugin's ai-tools service defensively inside a try/catch (Strapi's service API can vary)
  3. Validates each tool has the required properties before registering
  4. Namespaces tool names with a pluginName__toolName pattern using double underscores
  5. Detects duplicates and logs warnings instead of crashing
  6. Catches errors gracefully so one broken plugin doesn't take down the others

The namespace pattern deserves attention. When the Octolens plugin contributes a tool named searchMentions, it gets registered as octolens-mentions__searchMentions. This prevents collisions if two plugins happen to define tools with the same name, and makes it clear in logs and the admin UI which plugin owns each tool.
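The sanitize-and-join step can be sketched in isolation (short plugin names used here for illustration):

```typescript
// Stand-alone sketch of the namespacing step from the bootstrap loop.
function namespaceTool(pluginName: string, toolName: string): string {
  // Replace anything outside [a-zA-Z0-9_-] so the prefix is a safe identifier
  const safeName = pluginName.replace(/[^a-zA-Z0-9_-]/g, '_');
  return `${safeName}__${toolName}`;
}

console.log(namespaceTool('octolens-mentions', 'searchMentions'));
// octolens-mentions__searchMentions
```

Because the sanitized prefix never contains a double underscore, `name.indexOf('__')` later recovers the owning plugin unambiguously.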

sequenceDiagram
    participant Strapi
    participant AI SDK as ai-sdk bootstrap
    participant Registry as ToolRegistry
    participant YT as yt-transcript
    participant EMB as yt-embeddings
    participant OCT as octolens-mentions

    Strapi->>AI SDK: bootstrap()
    AI SDK->>Registry: register(builtInTools)

    AI SDK->>YT: plugin.service('ai-tools')
    YT-->>AI SDK: { getTools() }
    AI SDK->>AI SDK: namespace: yt-transcript__*
    AI SDK->>Registry: register(5 tools)

    AI SDK->>EMB: plugin.service('ai-tools')
    EMB-->>AI SDK: { getTools() }
    AI SDK->>AI SDK: namespace: yt-embeddings__*
    AI SDK->>Registry: register(4 tools)

    AI SDK->>OCT: plugin.service('ai-tools')
    OCT-->>AI SDK: { getTools() }
    AI SDK->>AI SDK: namespace: octolens-mentions__*
    AI SDK->>Registry: register(4 tools)

    Note over Registry: 24 total tools<br/>(11 built-in + 13 contributed)

The ToolRegistry: Internal Tool Sharing

Once discovered, all tools live in the ToolRegistry -- a simple Map wrapper that provides filtered views for different access levels:

// strapi-plugin-ai-sdk/server/src/lib/tool-registry.ts

export class ToolRegistry {
  private readonly tools = new Map<string, ToolDefinition>();

  register(def: ToolDefinition): void {
    this.tools.set(def.name, def);
  }

  /** All registered tools (internal + public) */
  getAll(): Map<string, ToolDefinition> {
    return new Map(this.tools);
  }

  /** Only tools that should be exposed via MCP (non-internal) */
  getPublic(): Map<string, ToolDefinition> {
    const result = new Map<string, ToolDefinition>();
    for (const [name, def] of this.tools) {
      if (!def.internal) result.set(name, def);
    }
    return result;
  }

  /** Only tools marked safe for unauthenticated public chat */
  getPublicSafe(): Map<string, ToolDefinition> {
    const result = new Map<string, ToolDefinition>();
    for (const [name, def] of this.tools) {
      if (def.publicSafe) result.set(name, def);
    }
    return result;
  }
}

The registry is the single source of truth. The admin chat, public chat, and MCP server all read from the same registry -- they just use different filtered views.
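A trimmed-down usage sketch (ToolDefinition reduced to just its flags, so it runs without zod or Strapi) shows how the three views diverge:

```typescript
// Minimal stand-in for ToolDefinition: only the access-control flags matter here.
type ToolDef = { name: string; internal?: boolean; publicSafe?: boolean };

class ToolRegistry {
  private tools = new Map<string, ToolDef>();
  register(d: ToolDef) { this.tools.set(d.name, d); }
  getAll() { return new Map(this.tools); }
  getPublic() { return new Map([...this.tools].filter(([, d]) => !d.internal)); }
  getPublicSafe() { return new Map([...this.tools].filter(([, d]) => d.publicSafe)); }
}

const reg = new ToolRegistry();
reg.register({ name: 'searchContent', publicSafe: true }); // everywhere
reg.register({ name: 'writeContent' });                    // admin + MCP
reg.register({ name: 'saveMemory', internal: true });      // admin only

console.log(reg.getAll().size, reg.getPublic().size, reg.getPublicSafe().size);
// 3 2 1
```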

Three-Tier Access Model

Not every tool should be available everywhere. A tool that writes content should never be accessible from an unauthenticated public chat widget. A tool that saves admin memories shouldn't be exposed via MCP where external clients could abuse it.

The internal and publicSafe flags create a three-tier access model:

flowchart LR
    subgraph TIER1["Tier 1: Admin Chat"]
        direction TB
        ALL["All Tools"]
        A1[searchContent]
        A2[writeContent]
        A3[sendEmail]
        A4[saveMemory]
        A5[yt-transcript__fetchTranscript]
        A6[octolens__updateMention]
        A7["...24 total"]
    end

    subgraph TIER2["Tier 2: MCP Server"]
        direction TB
        PUBLIC["Non-internal Tools"]
        B1[searchContent]
        B2[writeContent]
        B3[yt-transcript__searchTranscript]
        B4[octolens__searchMentions]
        B5["...~20 tools"]
    end

    subgraph TIER3["Tier 3: Public Chat"]
        direction TB
        SAFE["publicSafe + publicToolSources config"]
        C1[searchContent]
        C2[listContentTypes]
        C3[yt-transcript__listTranscripts]
        C4["Only plugins listed in<br/>publicToolSources config"]
    end

    ALL --> PUBLIC --> SAFE

Tier 1 -- Admin Chat (getAll()): Every registered tool. This is the Strapi admin panel chat interface, behind authentication. Admin users get the full power of the system -- content writes, email sending, memory management, and all contributed tools.

Tier 2 -- MCP Server (getPublic()): Everything except tools marked internal: true. This is what external MCP clients like Claude Desktop see. Tools like saveMemory (which store per-admin-user data) are excluded because MCP connections don't have admin user context.

Tier 3 -- Public Chat (getPublicSafe() + config): Tools must pass two gates to appear in public chat. First, the tool must be marked publicSafe: true by its plugin author. Second, for plugin-contributed tools, the plugin's source ID must be explicitly listed in the publicToolSources configuration. This dual-gate approach means a plugin author marking a tool as publicSafe is a declaration that the tool can be safely exposed -- but the Strapi admin decides whether it actually is via config.

Built-in content tools are further wrapped to enforce an allowedContentTypes whitelist:

// Plugin tools gated by publicToolSources config
const sepIndex = name.indexOf('__');
if (sepIndex !== -1) {
  const prefix = name.substring(0, sepIndex);
  if (!allowedSources.has(prefix)) continue; // skip if source not in config
}

// Content tools wrapped to enforce allowed content types
if (CONTENT_TOOLS.has(name)) {
  tools[name] = tool({
    description: def.description,
    inputSchema: zodSchema(def.schema),
    execute: async (args: any) => {
      if (!allowed.has(args.contentType)) {
        return { error: `Content type "${args.contentType}" is not available.` };
      }
      return def.execute(args, strapi);
    },
  });
}
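The source gate can be distilled into a self-contained check (names simplified; in the real code this sits inside the public-chat tool builder):

```typescript
// Simplified sketch of the publicToolSources gate for namespaced tool names.
const allowedSources = new Set(['yt-transcript-strapi-plugin']);

function passesSourceGate(toolName: string): boolean {
  const sepIndex = toolName.indexOf('__');
  if (sepIndex === -1) return true; // built-in tool: no source prefix to gate
  return allowedSources.has(toolName.substring(0, sepIndex));
}

console.log(passesSourceGate('yt-transcript-strapi-plugin__searchTranscript')); // true
console.log(passesSourceGate('strapi-octolens-mentions-plugin__searchMentions')); // false
console.log(passesSourceGate('searchContent')); // true (built-in)
```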

Here's how it's configured in config/plugins.ts:

'ai-sdk': {
  config: {
    anthropicApiKey: env('ANTHROPIC_API_KEY'),
    publicChat: {
      // Which content types the public chat can query
      allowedContentTypes: [
        'api::article.article',
        'api::category.category',
        'api::author.author',
      ],
      // Which plugin tool sources are available in public chat
      // Omit entirely to expose NO plugin tools (safe default)
      publicToolSources: [
        'yt-embeddings-strapi-plugin',  // allow semantic search of YouTube content
        'yt-transcript-strapi-plugin',  // allow transcript browsing
        // 'strapi-octolens-mentions-plugin' is NOT listed -- mentions stay admin-only
      ],
    },
  },
},

The key design decision: no publicToolSources means no plugin tools in public chat. This is the safe default. You must explicitly opt in to exposing each plugin source. A plugin like octolens-mentions might mark its search tools as publicSafe (they're read-only), but if your mentions contain internal business data, you simply don't list it in publicToolSources.

This means a plugin author only needs to think about two boolean flags when defining a tool. The Strapi admin then controls which plugin sources are actually exposed to the public via configuration.

MCP: Tools Shared Externally

Every plugin in this ecosystem ships its own MCP server for standalone use. But when the AI SDK plugin is installed, it creates a unified MCP server that aggregates tools from all plugins into a single endpoint.

The AI SDK's MCP server reads from the ToolRegistry and converts each tool definition into an MCP tool:

// strapi-plugin-ai-sdk/server/src/mcp/server.ts

function toSnakeCase(str: string): string {
  return str
    .replace(/:/g, '__')
    .replace(/-/g, '_')
    .replace(/[A-Z]/g, (letter) => `_${letter.toLowerCase()}`);
}

export function createMcpServer(strapi: Core.Strapi): McpServer {
  // `plugin` is this plugin's own instance; toolRegistry was attached at bootstrap
  const registry = plugin.toolRegistry;

  const server = new McpServer({
    name: 'ai-sdk-mcp',
    version: '1.0.0',
  }, { capabilities: { tools: {} } });

  for (const [name, def] of registry.getPublic()) {
    const mcpName = toSnakeCase(name);

    server.registerTool(mcpName, {
      description: def.description,
      inputSchema: def.schema.shape,
    }, async (args) => {
      const result = await def.execute(args, strapi);
      return {
        content: [{
          type: 'text' as const,
          text: JSON.stringify(result, null, 2),
        }],
      };
    });
  }

  return server;
}

The Zod schema's .shape property -- the raw map of field schemas -- is passed as the MCP input schema; the MCP TypeScript SDK accepts Zod raw shapes and converts them into the JSON Schema the protocol requires. Tool names are converted from camelCase to snake_case per MCP conventions, so yt-transcript__searchTranscript becomes yt_transcript__search_transcript.

This gives you two deployment modes:

flowchart TB
    subgraph STANDALONE["Standalone Mode (per-plugin MCP)"]
        CD1[Claude Desktop] -->|/api/yt-transcript/mcp| YT_S[yt-transcript MCP]
        CD2[Claude Desktop] -->|/api/yt-embeddings/mcp| EMB_S[yt-embeddings MCP]
        CD3[Claude Desktop] -->|/api/octolens/mcp| OCT_S[octolens MCP]
    end

    subgraph UNIFIED["Unified Mode (ai-sdk MCP)"]
        CD4[Claude Desktop] -->|/api/ai-sdk/mcp| AI_MCP[AI SDK MCP Server]
        AI_MCP --> ALL_TOOLS["All tools from all plugins<br/>+ built-in content tools"]
    end

Standalone mode: Install any single plugin, configure its MCP endpoint in Claude Desktop, and you get that plugin's tools. The transcript plugin gives you transcript fetching and search. The Octolens plugin gives you mention search and management. Each is useful on its own.

Unified mode: Install the AI SDK plugin alongside the others. Now /api/ai-sdk/mcp exposes every tool from every plugin through a single connection. Claude Desktop only needs one MCP configuration to access everything.
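As a sketch, a Claude Desktop entry for the unified endpoint might look like the following. The hostname is a placeholder, and this assumes the community mcp-remote proxy package to bridge Claude Desktop's stdio transport to the remote HTTP endpoint:

```json
{
  "mcpServers": {
    "strapi-ai-sdk": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-strapi.example.com/api/ai-sdk/mcp"]
    }
  }
}
```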

Bridging Vercel AI SDK and MCP

A subtle but important detail: the same tool definitions power both the Vercel AI SDK (for the admin chat) and MCP (for external clients). The createTools() function converts ToolDefinitions into Vercel AI SDK format:

// strapi-plugin-ai-sdk/server/src/tools/index.ts

import { tool, zodSchema } from 'ai';

export function createTools(strapi: Core.Strapi, context?: ToolContext): ToolSet {
  // `plugin` is this plugin's own instance; toolRegistry was attached at bootstrap
  const registry = plugin.toolRegistry;
  const tools: ToolSet = {};

  for (const [name, def] of registry.getAll()) {
    tools[name] = tool({
      description: def.description,
      inputSchema: zodSchema(def.schema) as any,
      execute: async (args: any) => def.execute(args, strapi, context),
    });
  }

  return tools;
}

Zod is the bridge. It's natively supported by the Vercel AI SDK (via zodSchema()), and its schemas can be converted to JSON Schema for MCP. One schema definition, two protocol targets, zero duplication.

flowchart LR
    ZOD[Zod Schema] --> AI_SDK["zodSchema()<br/>Vercel AI SDK"]
    ZOD --> MCP_SCHEMA[".shape<br/>MCP JSON Schema"]

    AI_SDK --> ADMIN[Admin Chat]
    AI_SDK --> PUBLIC[Public Chat]
    MCP_SCHEMA --> MCP_SERVER[MCP Server]

Plugin Composition in Practice

Let's trace a real scenario. A user opens the admin chat and asks: "Find mentions about Strapi on Reddit from the last week and check if any of our YouTube videos cover the topics being discussed."

The AI (Claude) has access to all 24 tools. Here's what happens:

sequenceDiagram
    participant User
    participant Claude
    participant OCT as octolens__searchMentions
    participant EMB as yt-embeddings__searchYtKnowledge
    participant YT as yt-transcript__getTranscript

    User->>Claude: "Find Reddit mentions about Strapi<br/>and match with our YouTube content"

    Claude->>OCT: searchMentions({<br/>  query: "Strapi",<br/>  source: "reddit"<br/>})
    OCT-->>Claude: 8 mentions found

    Claude->>EMB: searchYtKnowledge({<br/>  query: "Strapi content management",<br/>  limit: 5<br/>})
    EMB-->>Claude: 3 relevant video segments

    Claude->>YT: getTranscript({<br/>  videoId: "abc123",<br/>  startTime: 120, endTime: 300<br/>})
    YT-->>Claude: Transcript excerpt

    Claude->>User: "I found 8 Reddit mentions. 3 of them<br/>discuss topics covered in your video<br/>'Building with Strapi v5' at 2:00-5:00..."

Three plugins, three independent codebases, working together seamlessly. The user doesn't know or care that the tools come from different plugins. Claude sees a flat list of capabilities and orchestrates them as needed.

How to Add a New Plugin to the Ecosystem

Adding a new plugin to this ecosystem takes three steps, plus an optional fourth for public chat:

Step 1: Define your tools

Create a tools/ directory in your plugin's server source with tool definitions following the ToolDefinition interface:

// my-plugin/server/src/tools/my-tool.ts
import type { Core } from '@strapi/strapi';
import { z } from 'zod';

const MyToolSchema = z.object({
  query: z.string().describe('Search query'),
  limit: z.number().optional().default(10),
});

export const myTool = {
  name: 'myToolName',
  description: 'What this tool does, written for an AI to understand.',
  schema: MyToolSchema,
  execute: async (args: unknown, strapi: Core.Strapi) => {
    const validated = MyToolSchema.parse(args);
    // Your logic here, using strapi services
    return { results: [] };
  },
  publicSafe: true, // or false for write operations
};

Step 2: Export tools from an index

// my-plugin/server/src/tools/index.ts
export const tools = [myTool];

Step 3: Create the ai-tools service

// my-plugin/server/src/services/ai-tools.ts
import { tools } from '../tools';

export default ({ strapi }) => ({
  getTools() {
    return tools;
  },
});

Register this service in your plugin's service index, install the plugin, restart Strapi. The AI SDK plugin discovers your tools automatically at bootstrap and they're immediately available in admin chat and MCP.

Step 4 (optional): Enable in public chat

If your tools are marked publicSafe: true and you want them available in the public chat widget, add your plugin's name to the publicToolSources config:

// config/plugins.ts
'ai-sdk': {
  config: {
    publicChat: {
      publicToolSources: ['my-plugin'],
    },
  },
},

Lessons Learned

Convention over configuration works. The ai-tools service naming convention is simple enough that any plugin author can implement it in minutes. No plugin manifest, no registration API, no dependency graph. Just name your service ai-tools and export getTools().

Zod is the perfect schema language for AI tools. It serves as runtime validator, TypeScript type generator, Vercel AI SDK input schema, and MCP JSON Schema -- all from a single definition. The .describe() method on Zod fields adds context that helps the AI choose the right tool and format arguments correctly.

Namespace early. The pluginName__toolName pattern prevents subtle bugs where two plugins accidentally define tools with the same name. It also makes debugging trivial -- you can always tell which plugin owns a tool from its name.

Design tools for AI consumption. Tool descriptions should explain when to use the tool, not just what it does. Return structured data, not formatted strings. Let the AI decide how to present results to the user.
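A tiny, hypothetical contrast (neither return value is from the actual plugins) shows why structured output wins -- the model can rank, filter, and phrase results itself:

```typescript
// Structured: the model decides how to present this to the user.
const structured = {
  results: [{ rank: 1, title: 'Building with Strapi v5', similarity: 0.87 }],
};

// Pre-formatted: locks in one presentation and is harder for the model to reuse.
const formatted = '1. Building with Strapi v5 (87% match)';

console.log(JSON.stringify(structured));
```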

Defense in depth for public access. The publicSafe flag is a declaration by the plugin author that a tool can be safely exposed. But the Strapi admin must also list the plugin source in publicToolSources config for it to actually appear in public chat. This dual-gate approach means no plugin tools leak into public chat by default -- even if marked publicSafe. The safe default (omitting publicToolSources entirely) exposes zero plugin tools.

What's Next

This architecture opens up interesting possibilities. Since tools follow a standard interface, you could:

  • Build a tool marketplace where community plugins automatically contribute tools to the AI SDK
  • Add local-first AI via Ollama/DeepSeek using the provider architecture (it already supports custom baseURL and provider registration)
  • Create agent-to-agent workflows where the MCP server acts as a sub-agent in a larger agentic system
  • Add tool analytics to the registry -- tracking which tools are called, how often, and with what success rates

The composable plugin architecture turns Strapi from a headless CMS into a platform for building AI-native applications. Each plugin is a capability. The AI SDK plugin is the conductor. And MCP is the universal language that lets any AI client tap into the full orchestra.