Modern AI applications, such as agents, often rely on multiple tools—APIs, databases, search engines, etc.—to perform various tasks. However, these tools work independently of one another, and writing integration code for each of them leads to duplication, version conflicts, unclear permissions, and one-off glue logic whenever you scale or swap tools. Let's see how MCP can help.
What is MCP?
The Model Context Protocol is a standard for connecting AI-powered applications with external systems. It enables them to interact with data stores, development environments, productivity tools, knowledge bases, and other systems. The image below shows some systems that MCP allows your AI agents and assistants to communicate with:
From the image, you can see that MCP enables capabilities such as:
Translating natural language into SQL statements (read, and optionally write) to access organizational data and return results from databases.
Automating business tasks like PayPal transactions (orders, invoices, subscriptions, disputes, analytics) through natural language.
Accessing Slack data (messages, files, channels) to retrieve insights and automate responses and workflows.
Querying and modifying project management tools like Jira (issues, transitions, projects) through natural language to retrieve data and manage workflows.
Using developer tools like Claude Code to understand, modify, and generate code and perform other actions directly in your development environment.
With MCP, you can build more capable and scalable AI applications.
Core concepts
The MCP standard follows the client-server architecture. This means there will be an MCP host, like an AI assistant or agent, that connects to an MCP server like Notion MCP to retrieve content or perform actions. An MCP host uses one MCP client per server, so a host that talks to multiple servers for different purposes runs multiple clients. Each client maintains the communication between the host and its specific server.
MCP consists of two layers:
Data layer
Transport layer
The data layer defines:
The communication protocol (JSON-RPC 2.0) — message format, structure, and meaning. An example exchange appears after this list.
Lifecycle management — covers how clients and servers establish connections, negotiate capabilities, maintain state, and terminate the connection.
Server capabilities — the tools (actions), resources (data/context), and prompts (if applicable) that a server can offer.
Client capabilities — what the client (host) can do: asking for user input (known as elicitation), prompting LLMs (known as sampling), logging, etc.
Additional features — utilities like notifications (for real-time updates), progress tracking, error handling, and any other protocol extensions.
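To make the data layer concrete, here is a simplified sketch of a JSON-RPC 2.0 exchange in which a client asks a server for its tools. The tool name and schema shown are hypothetical; real servers advertise their own tools:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}

The server might reply with something like:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_pages",
        "description": "Search pages in the connected workspace",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}

The host can then expose these tool definitions to its LLM, and when the model decides to use one, the client sends a corresponding tools/call request.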
The transport layer manages how clients and servers communicate. It handles connection establishment, authentication, and message framing between clients and servers. Here are the transport mechanisms that MCP supports:
Stdio (local server) — useful for direct communications between clients and servers on the same machine.
Streamable HTTP — enables communication with remote servers, optionally using Server-Sent Events (SSE) for streaming. Local servers can use this transport too.
When using streamable HTTP, you can authenticate via OAuth tokens, API keys (secrets), or custom headers.
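As a rough sketch, here is how the two transports commonly appear side by side in a client's configuration. Exact field names vary between clients, and the remote URL and bearer token below are placeholders rather than a real endpoint:

{
  "mcpServers": {
    "localServer": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "mcp/notion"]
    },
    "remoteServer": {
      "type": "http",
      "url": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer <API_KEY>"
      }
    }
  }
}

The first entry launches a local process and talks to it over stdio; the second points the client at a remote streamable HTTP endpoint and authenticates with a header.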
Using MCP servers
Now that you understand the MCP protocol, how do you use it? First, you need a client. Many clients support the MCP protocol, and you can build custom clients to meet your specific needs. Claude Code, the coding assistant from Anthropic, is a good example. As an interactive coding agent, it requires access to various tools and resources in your development environment for gathering context or performing actions, such as creating pull requests automatically.
Next, you need an MCP server. Various providers maintain MCP servers for their products, and you can build a custom server if existing options don't meet your requirements. After selecting an appropriate server, you need to configure your client to connect to it. MCP hosts let you specify MCP servers in a JSON configuration file. You typically find these files in your home folder, such as ~/.claude.json for Claude Code. Sometimes, you can store the config in a project directory for project-scoped MCP servers. Here's what the configuration might look like:
{
  "mcpServers": {
    "notionApi": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-e",
        "NOTION_TOKEN=ntn_****",
        "mcp/notion"
      ]
    }
  }
}

Docker also provides a simple, standardized way to run various MCP servers from different providers via the MCP gateway:
To enable it, just head over to Docker Desktop Settings > Beta Features > Enable Docker MCP Toolkit. If you're not using Docker Desktop, you need to install the Docker MCP gateway plugin by following this guide. Then, you can use CLI commands like:
$ docker mcp catalog show # view all servers
$ docker mcp server ls # view configured servers
$ docker mcp server enable aws-documentation # enable a server like AWS docs
$ docker mcp client connect cursor # connect a client like Cursor
$ docker mcp server disable notion # to disable a server like Notion

In some cases, you need to provide secrets or authenticate with OAuth for a server like Notion or GitHub to work. Docker will guide you through the configuration process when you run the server. For the CLI, you can authorize a server like GitHub as follows:
$ docker mcp oauth authorize github
# or get help
$ docker mcp oauth --help
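# For servers that use an API key instead of OAuth, you can store the secret
# directly. The line below is a sketch: the secret key name varies by server,
# so confirm the expected name with `docker mcp secret --help` or the server's docs.
$ docker mcp secret set NOTION_TOKEN=ntn_****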
$ docker mcp secret --help

To connect a client, provide the following command in the client's JSON configuration to start the Docker MCP gateway:
$ docker mcp gateway run

This makes all the tools from the configured MCP servers available for the client to use. In the next section, we will connect to a local MCP server for Claude Code to use.
Example with local MCP server
You can run MCP servers locally through either Docker or Node.js. Let's explore how to run an MCP server, specifically a Notion MCP server, using Docker. Before configuring the Notion MCP server, you need to create an integration in your Notion workspace. An integration connects your workspace to external tools (like Slack, GitHub, and others) built on Notion's API, enabling clients to perform actions and exchange data. Create the integration, give it a name, choose a workspace, and keep the type set to "internal".
After creation, navigate to the integration's settings and choose its permissions, such as reading, updating, or inserting content. For this example, we'll grant read access to content only, with no permissions for comments and no access to workspace user information. Make sure to copy the "Internal Integration Secret" (ntn_****), as we'll need it later. Click Save to update your settings. Then, go to the Access tab and select which pages you want the integration to access.
Next, add Notion via Docker's MCP toolkit and configure it with the integration secret. Finally, add Docker as the MCP server to your MCP client's JSON config (e.g. ~/.claude.json or ~/.cursor/mcp.json):
{
  "mcpServers": {
    "notionApi": {
      "command": "docker",
      "args": [
        "mcp",
        "gateway",
        "run"
      ]
    }
  }
}

Now, your MCP client should connect to the Docker MCP gateway and use it to connect to your Notion workspace:
$ claude mcp list
Checking MCP server health...
notionApi: docker mcp gateway run - ✓ Connected

Claude Code should now be able to retrieve data from the workspace:
As shown above, Claude Code successfully retrieved content from the connected Notion workspace through the configured server.
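If you'd rather not edit ~/.claude.json by hand, Claude Code can also register the gateway from its own CLI. The command below is a sketch of that approach; confirm the exact syntax for your version with claude mcp add --help:

$ claude mcp add notionApi -- docker mcp gateway run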
Be aware that large documents can quickly use up tokens, which can be costly.
Example with remote MCP server
For remote connections, the JSON config would look like the one below (using SSE):
{
  "mcpServers": {
    "Notion": {
      "type": "sse",
      "url": "https://mcp.notion.com/sse"
    }
  }
}

A supported client, such as the Cursor code editor, handles authentication with the server: when you first add it, the client walks you through the auth flow in the browser. After configuration, you can see the tools and resources that Cursor can use:
Now, Cursor can access documents from your workspace as needed:
The configuration process may vary depending on your provider, but the setup generally follows this pattern. Check the documentation for your client and each server you want to use for specific details.
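The same claude mcp add approach from earlier also works for remote servers in Claude Code, roughly like this (a sketch; transport flag names may differ between versions):

$ claude mcp add --transport sse Notion https://mcp.notion.com/sse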
Conclusion
MCP offers a straightforward, standardized approach for AI assistants to communicate with external systems. It minimizes redundant integration code, makes permissions and lifecycle explicit, and simplifies tool management and scaling. MCP divides the system into a data layer (JSON-RPC 2.0 messages, capabilities, lifecycle management) and flexible transport options (stdio, streamable HTTP with optional SSE). This enables agents and assistants to interact with organizational data, development environments, and productivity services without relying on fragile, ad hoc integrations or temporary fixes.
You can deploy servers locally via Docker's MCP catalog for better control and reduced latency, or connect to remote providers for production workloads. The MCP approach reduces engineering complexity and enables secure, traceable automation.