How to Build a Custom MCP Server in Under an Hour

A hands-on developer tutorial: from zero to a working MCP server with tools, resources, and deployment.

April 20, 2026 · 15 min read · By The AI SuperHeroes Team

Why Build a Custom MCP Server?

The MCP ecosystem already offers thousands of pre-built server integrations. So why would you build your own? The answer comes down to three scenarios: you have proprietary internal tools that no public MCP server supports, you need fine-grained control over how your AI models interact with your systems, or you want to expose your product’s API as an MCP-compatible service for your customers.

Building a custom MCP server is not as daunting as it sounds. The MCP SDK handles the protocol complexity for you. Your job is to define what tools your server offers, implement the logic behind those tools, and configure how the server communicates. In this tutorial, we will build a fully functional MCP server from scratch in under an hour.

What You Will Build: A custom MCP server that exposes a project management tool set, allowing AI models to create tasks, list projects, and query status updates from your internal system.

Prerequisites

Before we start, make sure you have the following installed and ready:

- Node.js 18 or later, with npm (the commands below use npx)
- A terminal and a code editor
- Optionally, Claude Desktop for the end-to-end test in Step 5

This tutorial uses the official TypeScript MCP SDK, but the concepts apply equally to the Python SDK. The architecture and patterns are identical; only the syntax differs.

Step 1: Project Setup (5 Minutes)

Let us start by creating a new project and installing the MCP SDK. Open your terminal and run the following commands:

mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node ts-node
npx tsc --init

The @modelcontextprotocol/sdk package provides the core MCP server framework. Zod is used for input validation and schema definition, which is critical for ensuring your tools receive properly structured parameters from AI models.

Configure TypeScript

Update your tsconfig.json with these settings for MCP server development:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"]
}

Create a src directory and a main entry file src/index.ts. This is where your server code will live.

Step 2: Define Your Tools (10 Minutes)

MCP tools are the functions your server exposes to AI models. Each tool has a name, a description (which helps the AI understand when to use it), and an input schema that validates incoming parameters.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "project-manager",
  version: "1.0.0",
  description: "Manage projects and tasks"
});

// Define the "create_task" tool
server.tool(
  "create_task",
  "Create a new task in a project",
  {
    project_id: z.string().describe("The project ID"),
    title: z.string().describe("Task title"),
    priority: z.enum(["low","medium","high"]).describe("Task priority"),
    assignee: z.string().optional().describe("Assignee email")
  },
  async ({ project_id, title, priority, assignee }) => {
    // Implementation goes here (Step 3)
  }
);

Tool Design Tip

Write clear, specific tool descriptions. The AI model reads these descriptions to decide which tool to call. A description like "Create a new task in a project with title, priority, and optional assignee" is far more useful than just "Create task."

Defining Additional Tools

A useful server typically exposes 3 to 8 tools. For our project manager, we will also add list_tasks to retrieve tasks by project, and get_project_status to return an overview of a project’s progress. Each tool follows the same pattern: name, description, schema, handler.

server.tool(
  "list_tasks",
  "List all tasks in a project, optionally filtered by status",
  {
    project_id: z.string().describe("The project ID"),
    status: z.enum(["open","in_progress","done"]).optional()
  },
  async ({ project_id, status }) => {
    // Implementation in Step 3
  }
);

server.tool(
  "get_project_status",
  "Get a summary of project progress including task counts",
  { project_id: z.string().describe("The project ID") },
  async ({ project_id }) => {
    // Implementation in Step 3
  }
);

Step 3: Implement Tool Handlers (15 Minutes)

Now we write the actual logic behind each tool. In a real deployment, your handlers would connect to databases, APIs, or internal services. For this tutorial, we will use an in-memory store to keep things simple and focused on the MCP patterns.

// In-memory data store. With "strict": true in tsconfig, give tasks a
// real shape instead of any[].
interface Task {
  id: string;
  project_id: string;
  title: string;
  priority: "low" | "medium" | "high";
  assignee: string;
  status: "open" | "in_progress" | "done";
  created_at: string;
}

const tasks: Map<string, Task[]> = new Map();
let taskCounter = 1;

// create_task handler (passed as the last argument to server.tool)
async ({ project_id, title, priority, assignee }) => {
  const task: Task = {
    id: `TASK-${taskCounter++}`,
    project_id,
    title,
    priority,
    assignee: assignee || "unassigned",
    status: "open",
    created_at: new Date().toISOString()
  };
  const projectTasks = tasks.get(project_id) || [];
  projectTasks.push(task);
  tasks.set(project_id, projectTasks);
  return {
    content: [{ type: "text", text: JSON.stringify(task, null, 2) }]
  };
}

The key pattern here is the return format. MCP tool handlers must return a content array with typed content blocks. The most common type is "text" for structured data. You can also return "image" for visual content or "resource" for file references.
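As a standalone illustration of that return shape, here is a small helper that wraps any structured result in a "text" content block. The helper name is ours, not part of the SDK:

```typescript
// Hypothetical helper that wraps structured data in the MCP "text"
// content-block shape that tool handlers return.
function toTextResult(data: unknown) {
  return {
    content: [
      { type: "text" as const, text: JSON.stringify(data, null, 2) },
    ],
  };
}

const result = toTextResult({ id: "TASK-1", status: "open" });
// result.content[0].type is "text"; the text field round-trips as JSON
```

Centralizing the wrapping in one place keeps handlers short and makes it easy to change the serialization format later.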

Error Handling in Handlers

Robust error handling is essential. When a tool call fails, return a clear error message rather than throwing an exception. This helps the AI model understand what went wrong and potentially retry with corrected parameters.

// Example: Graceful error handling
async ({ project_id }) => {
  const projectTasks = tasks.get(project_id);
  if (!projectTasks) {
    return {
      content: [{ type: "text", text: `Error: Project ${project_id} not found.` }],
      isError: true
    };
  }
  // ... continue with normal logic
}

Pro Tip: Set isError: true in your response when a tool call fails. This signals to the AI model that the result is an error, not normal data, helping it respond appropriately to the user.
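Putting these patterns together, here is one way the list_tasks handler could look. The ToolResult type and the pre-seeded tasks map are local stand-ins for illustration; in your server the handler body goes inside the server.tool registration from Step 2:

```typescript
// Local stand-in for the in-memory store from Step 3.
const tasks: Map<string, { title: string; status: string }[]> = new Map([
  ["ALPHA", [
    { title: "Fix login bug", status: "open" },
    { title: "Ship v1", status: "done" },
  ]],
]);

// Illustrative return type: a content array plus the optional error flag.
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// list_tasks handler: status filtering plus the graceful-error pattern.
async function listTasksHandler(
  { project_id, status }: { project_id: string; status?: string }
): Promise<ToolResult> {
  const projectTasks = tasks.get(project_id);
  if (!projectTasks) {
    return {
      content: [{ type: "text", text: `Error: Project ${project_id} not found.` }],
      isError: true,
    };
  }
  const filtered = status
    ? projectTasks.filter(t => t.status === status)
    : projectTasks;
  return {
    content: [{ type: "text", text: JSON.stringify(filtered, null, 2) }],
  };
}
```

The get_project_status handler follows the same structure: look up the project, return an error block if it is missing, otherwise summarize the task counts.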

Step 4: Add Resources (10 Minutes)

MCP resources are read-only data that your server exposes. Unlike tools (which perform actions), resources provide context that the AI model can reference. Think of them as documents, configuration files, or reference data.

server.resource(
  "project-list",
  "project://list",
  async (uri) => {
    const projects = Array.from(tasks.keys()).map(id => ({
      id,
      task_count: tasks.get(id)?.length || 0
    }));
    return {
      contents: [{
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify(projects, null, 2)
      }]
    };
  }
);

Resources are particularly useful for providing the AI model with reference data it needs to make informed tool calls. For example, exposing a list of valid project IDs helps the model avoid calling create_task with a nonexistent project.

Step 5: Testing Your Server (10 Minutes)

With your tools and resources defined, it is time to test. Add the transport connection at the bottom of your index.ts:

// Connect the server to stdio transport
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Project Manager MCP Server running on stdio");

Testing with MCP Inspector

The easiest way to test is with the MCP Inspector, a browser-based tool that lets you interact with your server directly:

npx @modelcontextprotocol/inspector ts-node src/index.ts

This launches a web interface where you can see your registered tools, test them with sample inputs, and inspect the responses. Verify that each tool returns the expected output format and handles edge cases gracefully.

Testing with Claude Desktop

To test with a real AI client, add your server to Claude Desktop’s configuration file:

// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "project-manager": {
      "command": "ts-node",
      "args": ["/path/to/my-mcp-server/src/index.ts"]
    }
  }
}

Restart Claude Desktop and you should see your tools available in the tool picker. Try asking Claude to "create a high-priority task in project ALPHA called Fix login bug" and watch it call your custom MCP server.

Debugging Tip

Use console.error() for debug logging in MCP servers (not console.log()). Since stdio transport uses stdout for protocol messages, any console.log output will corrupt the communication. Stderr is safe for your debug messages.

Deployment Options for Production

Once your server works locally, you have several deployment options depending on your use case:

Local Stdio Deployment

The simplest option. Your MCP server runs as a local process, communicating via stdio. This is ideal for personal tools, development workflows, and desktop AI applications. Package your server as an npm package so users can install it with a single command.
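A minimal sketch of the relevant package.json fields for npm distribution, assuming your compiled entry point lands in dist/index.js per the tsconfig above (the package name and bin alias are placeholders; you would also add a #!/usr/bin/env node shebang to the entry file so the bin script is executable):

```json
{
  "name": "my-mcp-server",
  "version": "1.0.0",
  "type": "module",
  "bin": { "my-mcp-server": "dist/index.js" },
  "files": ["dist"],
  "scripts": { "build": "tsc" }
}
```

With this in place, users can run your server via npx my-mcp-server, and AI clients can reference that command directly in their MCP configuration.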

HTTP/SSE Remote Deployment

For team or enterprise use, deploy your MCP server as a remote HTTP service with Server-Sent Events for streaming. This allows multiple users and AI clients to connect to a single server instance. Deploy on any cloud platform: AWS Lambda, Google Cloud Run, or a simple VPS.

Docker Containerization

Package your MCP server as a Docker container for consistent, reproducible deployments. This is the recommended approach for production environments where you need versioning, scaling, and orchestration.

# Dockerfile for MCP Server
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY dist/ ./dist/
CMD ["node", "dist/index.js"]

Publishing to the MCP Registry

If your server is useful to the broader community, consider publishing it to the MCP registry. This makes it discoverable by other developers and AI platforms. Include clear documentation, usage examples, and a well-defined tool schema.

Next Steps: Now that you have a working MCP server, extend it by connecting to real databases, adding authentication, implementing rate limiting, and writing comprehensive tests. See our MCP Security Best Practices guide for hardening your server.

Skip the Build and Get 40,000+ MCP Integrations Instantly

MCP SuperHero gives you pre-built, production-ready MCP server integrations for every major tool and service. Focus on building, not plumbing.


Continue Learning

Explore more MCP resources: What is MCP, MCP vs API Integration, and MCP Security Best Practices. Visit TheAISuperHeroes.com for more AI guides.