Real-World MCP Servers: Databases, APIs, and Files
Build practical MCP servers that connect AI assistants to PostgreSQL databases, REST APIs, and file systems. Learn patterns used in production deployments.
You know how to build tools, resources, and prompts. Now let’s connect them to real systems — databases, REST APIs, and file systems. These three patterns cover 80% of real-world MCP use cases.
🔄 Quick Recall: In the previous lesson, you learned all three MCP primitives: Tools for actions, Resources for data, and Prompts for templates. Now you’ll apply them to connect AI assistants to the systems your organization actually uses.
Pattern 1: Database Server
The most common MCP server connects AI to a database. The AI can explore schemas, write queries, and analyze results — all through natural conversation.
The Architecture
```text
User: "How many orders were placed last month?"
        ↓
Claude → calls query_database tool
        ↓
MCP Server → executes SQL against PostgreSQL
        ↓
Returns: "1,247 orders totaling $89,340"
```
Implementation
```python
import os

import asyncpg
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Database Assistant")

# Connection pool, created lazily on first query
pool: asyncpg.Pool | None = None

async def get_pool() -> asyncpg.Pool:
    """Create the connection pool on first use.

    Assumes DATABASE_URL is set in the server's environment.
    """
    global pool
    if pool is None:
        pool = await asyncpg.create_pool(dsn=os.environ["DATABASE_URL"])
    return pool

@mcp.tool()
async def query_database(sql: str) -> str:
    """Execute a read-only SQL query and return results."""
    # Safety: block write operations
    forbidden = ["INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "TRUNCATE"]
    sql_upper = sql.upper().strip()
    for keyword in forbidden:
        if keyword in sql_upper:
            return f"Error: {keyword} operations are not allowed. This tool is read-only."
    try:
        db = await get_pool()
        async with db.acquire() as conn:
            rows = await conn.fetch(sql)
        if not rows:
            return "Query returned no results."
        # Format as table
        headers = list(rows[0].keys())
        lines = [" | ".join(headers)]
        lines.append("-" * len(lines[0]))
        for row in rows[:50]:  # Limit to 50 rows
            lines.append(" | ".join(str(row[h]) for h in headers))
        return "\n".join(lines)
    except Exception as e:
        return f"Query error: {e}"

@mcp.resource("db://schema")
def get_schema() -> str:
    """Database table schemas for reference."""
    with open("schema.sql") as f:
        return f.read()
```
Key Safety Rules
- Default to read-only. Block INSERT, UPDATE, DELETE, DROP, ALTER, TRUNCATE
- Limit result size. Cap at 50-100 rows to prevent context overflow
- Use connection pooling. One connection per query is slow and fragile
- Expose the schema as a Resource. The AI needs to know table structures to write correct SQL
✅ Quick Check: Why do we expose the database schema as a Resource instead of a Tool? (Answer: The schema is static reference data the AI needs as context — it doesn’t change during a conversation. Resources are fetched once for context. A Tool would need to be called every time the AI wants to check a table structure, wasting calls.)
Pattern 2: API Server
Wrap external REST APIs so the AI can call them through natural language:
Implementation
```python
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("GitHub Assistant")

GITHUB_TOKEN = os.environ.get("GITHUB_TOKEN")
BASE_URL = "https://api.github.com"

@mcp.tool()
async def list_issues(repo: str, state: str = "open", limit: int = 10) -> str:
    """List GitHub issues for a repository (format: owner/repo)."""
    if "/" not in repo:
        return "Error: repo must be in 'owner/repo' format"
    async with httpx.AsyncClient(timeout=10.0) as client:
        response = await client.get(
            f"{BASE_URL}/repos/{repo}/issues",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
            params={"state": state, "per_page": min(limit, 30)},
        )
    if response.status_code == 404:
        return f"Repository '{repo}' not found"
    if response.status_code != 200:
        return f"GitHub API error: {response.status_code}"
    issues = response.json()
    if not issues:
        return f"No {state} issues in {repo}"
    lines = [f"Issues in {repo} ({state}):"]
    for issue in issues:
        lines.append(f"  #{issue['number']}: {issue['title']}")
    return "\n".join(lines)

@mcp.tool()
async def create_issue(repo: str, title: str, body: str = "") -> str:
    """Create a new GitHub issue."""
    if "/" not in repo:
        return "Error: repo must be in 'owner/repo' format"
    async with httpx.AsyncClient(timeout=10.0) as client:
        response = await client.post(
            f"{BASE_URL}/repos/{repo}/issues",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
            json={"title": title, "body": body},
        )
    if response.status_code == 201:
        issue = response.json()
        return f"Created issue #{issue['number']}: {issue['html_url']}"
    return f"Failed to create issue: {response.status_code}"
```
Configuring API Keys
Pass secrets through environment variables in claude_desktop_config.json:
```json
{
  "mcpServers": {
    "github": {
      "command": "python",
      "args": ["/path/to/github_server.py"],
      "env": {
        "GITHUB_TOKEN": "ghp_your_token_here"
      }
    }
  }
}
```
Never hardcode API keys in your server code. Environment variables keep secrets out of version control and make it easy to use different keys per environment.
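A related habit: fail fast at startup when a required variable is missing, so a misconfigured server reports a clear error instead of sending unauthenticated requests. A small sketch (the `require_env` helper name is an assumption, not part of the MCP SDK):

```python
import os
import sys

def require_env(name: str) -> str:
    """Return the named environment variable, or exit with a clear message.

    Called once at server startup, before any tools are registered,
    so a missing token surfaces immediately in the client's server logs.
    """
    value = os.environ.get(name)
    if not value:
        sys.exit(f"Missing required environment variable: {name}")
    return value
```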
API Server Best Practices
- Rate limiting: Track API calls and return early if approaching limits
- Timeout handling: Set HTTP timeouts (`timeout=10.0`) to prevent hanging requests
- Error messages: Translate HTTP status codes into human-readable explanations
- Pagination: For list endpoints, support a `limit` parameter and cap it
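For the rate-limiting point, a simple in-process sliding-window counter is often enough — a sketch (the `RateLimiter` class is a hypothetical helper, not part of the MCP SDK or httpx):

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window rate limiter for outbound API calls."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls: deque[float] = deque()  # timestamps of recent calls

    def allow(self) -> bool:
        """Record a call if under the limit; return False when the limit is hit."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```

Each tool would call `limiter.allow()` before hitting the API and return an early, human-readable error when it comes back `False`, rather than burning through the upstream quota.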
✅ Quick Check: Why do we use environment variables for API tokens instead of hardcoding them? (Answer: Environment variables keep secrets out of source code and version control. Different environments (development, staging, production) can use different tokens. The MCP client config passes them to the server process at startup.)
Pattern 3: File System Server
Give AI access to files on the local machine — with strict safety boundaries:
Implementation
```python
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("File Assistant")

# SAFETY: Only allow access within this directory
ALLOWED_DIR = os.path.expanduser("~/projects")

def safe_path(filepath: str) -> str:
    """Resolve and validate that a file path is within the allowed directory."""
    base = os.path.realpath(ALLOWED_DIR)
    resolved = os.path.realpath(os.path.join(base, filepath))
    # commonpath (unlike a raw startswith) won't let a sibling like
    # "~/projects2" slip past a check for "~/projects"
    if os.path.commonpath([resolved, base]) != base:
        raise ValueError(f"Access denied: path outside {ALLOWED_DIR}")
    return resolved

@mcp.tool()
def read_file(filepath: str) -> str:
    """Read a file from the projects directory."""
    try:
        full_path = safe_path(filepath)
        with open(full_path, "r") as f:
            content = f.read()
        # Truncate very large files
        if len(content) > 10000:
            return content[:10000] + f"\n\n[Truncated — file is {len(content)} characters]"
        return content
    except ValueError as e:
        return f"Error: {e}"
    except FileNotFoundError:
        return f"File not found: {filepath}"

@mcp.tool()
def list_directory(dirpath: str = ".") -> str:
    """List files and directories in a given path."""
    try:
        full_path = safe_path(dirpath)
        entries = os.listdir(full_path)
        dirs = sorted(e + "/" for e in entries if os.path.isdir(os.path.join(full_path, e)))
        files = sorted(e for e in entries if os.path.isfile(os.path.join(full_path, e)))
        return ("Directories:\n" + "\n".join(f"  {d}" for d in dirs)
                + "\n\nFiles:\n" + "\n".join(f"  {f}" for f in files))
    except ValueError as e:
        return f"Error: {e}"
```
File System Security Rules
- Always define an allowed directory. Never give unrestricted filesystem access
- Resolve paths with `os.path.realpath()`. This defeats `../../` traversal attacks
- Truncate large files. A 10MB log file will overwhelm the AI’s context window
- Be cautious with write operations. Consider making writes a separate, more restricted tool
Combining Patterns: Multi-System Server
Real projects often need access to multiple systems:
```python
mcp = FastMCP("DevOps Assistant")

# Database tools
@mcp.tool()
async def query_production_db(sql: str) -> str: ...

# GitHub tools
@mcp.tool()
async def list_open_prs(repo: str) -> str: ...

# File system tools
@mcp.tool()
def read_config(filename: str) -> str: ...

# Context resources
@mcp.resource("project://architecture")
def get_architecture_docs() -> str: ...
```
Alternatively, run separate MCP servers per system and connect them all to your AI client. Both approaches work — a single server is simpler to manage, while multiple servers are more modular.
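With the multiple-server approach, the `claude_desktop_config.json` from earlier simply grows one entry per server — the names and paths below are placeholders:

```json
{
  "mcpServers": {
    "database": {
      "command": "python",
      "args": ["/path/to/db_server.py"],
      "env": { "DATABASE_URL": "postgresql://user:pass@localhost/mydb" }
    },
    "github": {
      "command": "python",
      "args": ["/path/to/github_server.py"],
      "env": { "GITHUB_TOKEN": "ghp_your_token_here" }
    },
    "files": {
      "command": "python",
      "args": ["/path/to/file_server.py"]
    }
  }
}
```

The client starts each server as its own process, and all of their tools and resources appear together in the conversation.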
Practice Exercise
Build one of these production-style MCP servers:
- SQLite explorer — Read-only queries against a local SQLite database, with schema as a resource
- REST API wrapper — Connect to any public API (weather, news, joke API) with proper error handling
- Project file reader — Navigate and read files in a specific project directory with directory traversal protection
Key Takeaways
- Database servers: default to read-only, expose schema as a Resource, limit result rows
- API servers: use environment variables for secrets, handle timeouts and rate limits
- File servers: always restrict to an allowed directory, resolve paths to prevent traversal
- For all patterns: validate inputs, return clear errors, use async for I/O
- Multi-system servers combine patterns — either in one server or multiple connected servers
Up Next
In the next lesson, you’ll learn to secure your MCP servers for production — OAuth 2.1 authentication, authorization, input sanitization, and deployment strategies.