Model Context Protocol (MCP) in Practice
Diego Carpintero
Introduction
Model Context Protocol (MCP) is an open protocol, developed by Anthropic, for connecting LLM applications with external data sources, tools, and systems. The implications are significant, as this enables existing software platforms to share contextual information and be enhanced with the capabilities of frontier models. For end users, this translates into more powerful and context-rich AI applications.
MCP standardizes how AI applications interact with external systems, similar to how APIs standardize web application interactions with backend servers, databases, and services. Before MCP, integrating AI applications with external systems was highly fragmented and required building numerous custom connectors, leading to an inefficient M×N problem where M AI applications would need unique integrations for N different tools. By providing a common protocol, MCP transforms this challenge into a simpler M+N problem, promoting modularity, reducing complexity, and allowing users to switch between LLM providers and vendors.
As of April 2025, more than 10,000 integrations, including some of the most used software platforms (such as Google Drive, GitHub, AWS, Notion, PayPal, and Zapier), have already been implemented. In this guide, we dive into the MCP architecture and demonstrate how to implement a Python MCP server and client for the NewsAPI, allowing frontier models to automatically perform retrieval and analytics on news articles.
MCP Architecture
MCP follows a client-server architecture using JSON-RPC 2.0 messages to establish communication between:
- MCP Hosts: User-facing AI applications (like Claude Desktop, IDE Assistants, or Custom Agents) that use MCP clients to communicate with MCP servers.
- MCP Servers: Services that expose specific context and capabilities to the model via the MCP client.
- MCP Clients: Protocol clients that maintain 1:1 connections with MCP servers.
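All three roles exchange JSON-RPC 2.0 messages. For instance, a request from a client asking a server to invoke a tool has the following general shape (the method and parameter names follow the MCP specification; the tool name and arguments are illustrative):

```python
import json

# A JSON-RPC 2.0 request invoking a tool on an MCP server (illustrative values)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch_news",
        "arguments": {"query": "climate policy"},
    },
}

print(json.dumps(request, indent=2))
```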
MCP takes inspiration from the Language Server Protocol (LSP), an open standard developed by Microsoft that defines a common way to enhance development tools (IDEs) with language-specific features such as code navigation, code analysis, and code intelligence.

MCP Server
MCP servers expose capabilities through standardized interfaces:
- Tools (Model Controlled): Functions that LLMs can invoke to perform specific actions, similar to function calling.
- Resources (Application Controlled): Data sources that LLMs can access to retrieve information, comparable to read-only API endpoints.
- Prompts (User Controlled): Pre-defined templates that guide the optimal use of tools and resources.
And optionally:
- Sampling: Server-initiated requests for LLM completions from the client, enabling agentic behaviors and recursive LLM interactions.
Communication between clients and servers relies primarily on two transport methods:
- stdio (Standard Input/Output): Used when the Client and Server run on the same machine. This is simple and effective for local integrations.
- HTTP via SSE (Server-Sent Events): Used when the Client connects to the Server over HTTP. After an initial setup, the Server can push messages (events) to the Client over a persistent connection using the SSE standard.
We implement an MCP server for news articles in Python using FastMCP. Our server exposes `fetch_news` and `fetch_headlines` as tools:
```python
from mcp.server.fastmcp import FastMCP, Context

from newsapi.connector import NewsAPIConnector
from newsapi.models import NewsResponse

mcp = FastMCP("News Server")
news_api_connector = NewsAPIConnector()


@mcp.tool()
async def fetch_news(
    query: str,
    from_date: str | None = None,
    to_date: str | None = None,
    language: str = "en",
    sort_by: str = "publishedAt",
    page_size: int = 10,
    page: int = 1,
    ctx: Context | None = None,
) -> NewsResponse:
    """
    Retrieves news articles.

    Args:
        query: Keywords or phrases to search for
        from_date: Optional start date (YYYY-MM-DD)
        to_date: Optional end date (YYYY-MM-DD)
        language: Language code (default: en)
        sort_by: Sort order: publishedAt (default), popularity, or relevancy
        page_size: Number of results per page (default: 10, max: 100)
        page: Page number for pagination (default: 1)
        ctx: MCP Context

    Returns:
        News Articles
    """
    if ctx:
        await ctx.info(f"Searching news for: {query}")

    params = {
        k: v
        for k, v in {
            "q": query,
            "from": from_date,
            "to": to_date,
            "language": language,
            "sortBy": sort_by,
            "pageSize": min(page_size, 100),
            "page": page,
        }.items()
        if v is not None
    }

    success, result = await news_api_connector.search_everything(**params)
    if not success:
        # error_response: error-handling helper defined in the full source
        return error_response("Failed to fetch news articles.")
    return result


@mcp.tool()
async def fetch_headlines(
    category: str,
    query: str,
    country: str = "us",
    page_size: int = 10,
    page: int = 1,
    ctx: Context | None = None,
) -> NewsResponse:
    # See fetch_headlines @tool implementation in ./src/server.py
    ...


if __name__ == "__main__":
    mcp.run(transport="stdio")
```
We have abstracted the NewsAPI connection handling into a `NewsAPIConnector` class that validates responses with Pydantic models:
```python
import os
from typing import Tuple, Union

import httpx

from newsapi.models import NewsResponse


class NewsAPIConnector:
    """
    Handles a connection with the NewsAPI and retrieves news articles.
    """

    def __init__(self):
        self.api_key = os.getenv("NEWSAPI_KEY")
        self.base_url = os.getenv("NEWSAPI_URL")

    async def get_client(self) -> httpx.AsyncClient:
        return httpx.AsyncClient(timeout=30.0, headers={"X-Api-Key": self.api_key})

    async def search_everything(
        self, **kwargs
    ) -> Tuple[bool, Union[str, NewsResponse]]:
        """
        Retrieves news articles from the NewsAPI "/everything" endpoint.

        Args:
            **kwargs: Request parameters (see https://newsapi.org/docs/endpoints/everything)

        Returns:
            A tuple containing (success, result) where:
            - success: A boolean indicating if the request was successful
            - result: Either an error message (string) or the validated NewsResponse model
        """
        async with await self.get_client() as client:
            params = kwargs
            try:
                response = await client.get(f"{self.base_url}everything", params=params)
                response.raise_for_status()
                data = response.json()
                result = NewsResponse.model_validate(data)
                return True, result
            except httpx.RequestError as e:
                return False, f"Request error: {str(e)}"
            except Exception as e:
                return False, f"Unexpected error: {str(e)}"

    async def get_top_headlines(
        self, **kwargs
    ) -> Tuple[bool, Union[str, NewsResponse]]:
        # See get_top_headlines implementation in ./newsapi/connector.py
        ...
```
Pydantic models:
```python
from datetime import datetime

from pydantic import BaseModel, ConfigDict, HttpUrl
from pydantic.alias_generators import to_camel


class ArticleSource(BaseModel):
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)

    id: str | None
    name: str | None


class Article(BaseModel):
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)

    source: ArticleSource
    author: str | None
    title: str | None
    description: str | None
    url: HttpUrl | None
    url_to_image: HttpUrl | None
    published_at: datetime | None
    content: str | None


class NewsResponse(BaseModel):
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)

    status: str
    total_results: int
    articles: list[Article]
```
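The `to_camel` alias generator lets these models accept NewsAPI's camelCase payloads while keeping idiomatic snake_case field names in Python. A minimal sketch of this behavior, with a hypothetical model and illustrative data:

```python
from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel


class Headline(BaseModel):
    # Hypothetical model mirroring the alias setup used above
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)

    published_at: str
    total_results: int


# Incoming camelCase JSON keys validate against the snake_case fields
payload = {"publishedAt": "2025-04-20", "totalResults": 2}
headline = Headline.model_validate(payload)

print(headline.published_at)  # -> 2025-04-20
```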
MCP Client
The client launches the server as a subprocess over stdio, initializes a session, and can then list and invoke the server's capabilities:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client

# Commands for running/connecting to the MCP server
server_params = StdioServerParameters(
    command="python",    # Executable
    args=["server.py"],  # Optional command line arguments
    env=None,            # Optional environment variables
)


async def run():
    async with stdio_client(server_params) as (read, write):
        # handle_sampling_message: optional user-defined callback for
        # server-initiated sampling requests (see the full source)
        async with ClientSession(
            read, write, sampling_callback=handle_sampling_message
        ) as session:
            await session.initialize()

            # List available prompts
            prompts = await session.list_prompts()

            # Get a prompt
            prompt = await session.get_prompt(
                "example-prompt", arguments={"arg1": "value"}
            )

            # List available resources
            resources = await session.list_resources()

            # List available tools
            tools = await session.list_tools()

            # Read a resource
            content, mime_type = await session.read_resource("file://some/path")

            # Call a tool
            result = await session.call_tool(
                "fetch_news", arguments={"query": "news topic or query"}
            )


if __name__ == "__main__":
    asyncio.run(run())
```
Tooling
Anthropic has developed the MCP Inspector, a tool for testing and debugging servers that implement the Model Context Protocol. It provides a graphical interface to inspect resources, test prompts, execute tools, and monitor server logs and notifications:
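The Inspector ships as an npm package. Assuming Node.js is installed, it can be launched against our stdio server with npx (the server invocation here mirrors the client configuration above and is illustrative):

```shell
# Start the Inspector UI and connect it to the local stdio server
npx @modelcontextprotocol/inspector python server.py
```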

Claude Desktop Demo
Once Claude for Desktop is installed, we need to configure it for our MCP server. To do this, open the Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor, and add the server under the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured:
```json
{
  "mcpServers": {
    "news": {
      "command": "uv",
      "args": [
        "--directory",
        "/ABSOLUTE/PATH/TO/PARENT/FOLDER/news",
        "run",
        "server.py"
      ]
    }
  }
}
```
This tells Claude for Desktop:
- There’s an MCP server named "news"
- To launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/news run server.py`
We can now test our server in Claude for Desktop:

References
- Microsoft. 2015. Language Server Protocol (LSP)
- Anthropic. 2024. Model Context Protocol
- Anthropic, Mahesh Murag. 2025. Building Agents with Model Context Protocol
Citation
@misc{carpintero-mcp-in-practice,
title = {Model Context Protocol (MCP) in Practice},
author = {Diego Carpintero},
month = {apr},
year = {2025},
date = {2025-04-20},
publisher = {https://tech.dcarpintero.com/},
howpublished = {\url{https://tech.diegocarpintero.com/blog/model-context-protocol-in-practice/}},
keywords = {large-language-models, agents, model-context-protocol, ai-tools, function-calling},
}