Think of MCP like a USB-C port for AI applications — it provides a standardized way to connect AI models to different data sources and tools, regardless of which language or framework you use.
## What you’ll build
By the end of this module you will have:

- A working MCP server with tools, resources, and prompts
- A client that connects to and invokes your server
- A client powered by an LLM that selects tools automatically based on natural language
- VS Code + GitHub Copilot wired up to your server
- An understanding of both stdio and Streamable HTTP transports
- Tests for your server logic
- A deployment strategy for local and cloud environments
- Authentication middleware using basic auth and RBAC
- Sampling configured so your server can delegate LLM calls to the client
## Lessons in this module
### Building Your First Server
Create a calculator MCP server in TypeScript, Python, .NET, Java, or Rust and inspect it with the MCP Inspector.
### Building an MCP Client
Write a client that connects to your server and programmatically lists and invokes its tools, resources, and prompts.
### Client with LLM Integration
Add an LLM to your client so users interact via natural language instead of explicit API calls.
### VS Code Integration
Configure VS Code and GitHub Copilot Agent mode to consume your MCP server directly from the editor.
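As a sketch of what that wiring can look like, here is a workspace-level `.vscode/mcp.json` that registers a local stdio server; the server name and build path are placeholders, not values from this module:

```json
{
  "servers": {
    "calculator": {
      "type": "stdio",
      "command": "node",
      "args": ["build/index.js"]
    }
  }
}
```

With a file like this in the workspace, Copilot Agent mode can discover the server and invoke its tools from chat.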
### stdio Transport Server
Learn the recommended local transport: subprocess-based stdio communication with JSON-RPC.
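To make the framing concrete, here is a sketch of the message a client writes to the server subprocess's stdin: one JSON-RPC 2.0 request per line. The `add` tool and its arguments are illustrative, not part of this module's source.

```python
import json

# One MCP request over stdio: a single line of JSON-RPC 2.0 written to the
# server subprocess's stdin. The "add" tool name and arguments are invented
# for this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 3, "b": 4}},
}
line = json.dumps(request) + "\n"  # newline-delimited: one message per line
print(line, end="")
```

The server replies on stdout with a JSON-RPC response carrying the same `id`, which is how the client matches answers to requests.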
### HTTP Streaming
Use Streamable HTTP — the recommended transport for remote MCP servers — to deliver real-time progress notifications.
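Streamable HTTP lets the server push messages while a request is still in flight. A sketch of what an MCP progress notification looks like on the wire; the token and progress values are illustrative:

```python
import json

# A progress notification the server streams mid-request. Notifications are
# JSON-RPC messages with no "id" and expect no response; the token and counts
# here are invented for this sketch.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/progress",
    "params": {"progressToken": "task-1", "progress": 50, "total": 100},
}
wire = json.dumps(notification)
print(wire)
```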
### AI Toolkit Integration
Use the AI Toolkit VS Code extension to build, connect, and test an agent that uses your MCP server.
### Testing MCP Servers
Test your server with MCP Inspector, curl, and unit testing frameworks.
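One pattern worth noting: keeping tool logic in plain functions makes it unit-testable without spinning up a transport. A minimal sketch, assuming a hypothetical `add` tool:

```python
# Hypothetical pure function behind a calculator server's "add" tool; keeping
# tool logic out of the transport layer makes it directly unit-testable.
def add(a: float, b: float) -> float:
    return a + b

def test_add() -> None:
    assert add(2, 3) == 5
    assert add(-0.5, 0.5) == 0
    assert add(0, 0) == 0

test_add()
print("ok")
```

MCP Inspector and curl then exercise the same logic end to end, over the actual protocol and transport.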
### Deploying MCP Servers
Deploy your MCP server locally, in containers, and to Azure Container Apps.
### Advanced Server Usage
Use the low-level server API with explicit list/call handlers and a modular project architecture.
### Simple Authentication
Add basic auth middleware and RBAC to your MCP server as a stepping stone toward full OAuth 2.1.
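A minimal sketch of the idea using only the standard library; the user store, role names, and permission strings are all invented for illustration, and a real middleware should store hashed passwords rather than plaintext:

```python
import base64

# Illustrative credential store and role map (names and permissions are
# placeholders for this sketch).
USERS = {"alice": "wonderland"}
ROLES = {"alice": {"calculator.use"}}

def check_basic_auth(header):
    """Return the username for a valid 'Basic <base64>' header, else None."""
    if not header.startswith("Basic "):
        return None
    try:
        user, _, password = base64.b64decode(header[6:]).decode().partition(":")
    except Exception:
        return None
    return user if USERS.get(user) == password else None

def authorize(user, permission):
    """RBAC check: does the user's role grant this permission?"""
    return permission in ROLES.get(user, set())

header = "Basic " + base64.b64encode(b"alice:wonderland").decode()
user = check_basic_auth(header)
assert user == "alice" and authorize(user, "calculator.use")
```

The same two checks (authenticate, then authorize per tool) carry over directly once you swap basic auth for OAuth 2.1 tokens.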
### Configuring MCP Hosts
Set up your server in Claude Desktop, Cursor, Cline, and Windsurf using JSON configuration files.
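For example, Claude Desktop reads a `claude_desktop_config.json` file with an `mcpServers` map. A sketch of an entry for a local Node-based server, where the server name and path are placeholders:

```json
{
  "mcpServers": {
    "calculator": {
      "command": "node",
      "args": ["/absolute/path/to/build/index.js"]
    }
  }
}
```

The other hosts use similar JSON shapes, though the file location and top-level key vary by product.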
### MCP Inspector
Interactively debug and test your server — browse tools, resources, prompts, and JSON-RPC messages.
### Sampling
Delegate LLM calls from your server to the client using the MCP sampling protocol.
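On the wire, sampling is a JSON-RPC request that flows in the reverse direction: server to client, which performs the LLM call and returns the completion. A sketch of a `sampling/createMessage` request; the prompt text and token limit are illustrative:

```python
import json

# A sampling request travels server -> client; the client runs the LLM call
# and replies with the model's completion. Content and maxTokens are
# placeholders for this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user",
             "content": {"type": "text", "text": "Summarize this tool output."}}
        ],
        "maxTokens": 100,
    },
}
wire = json.dumps(request)
print(wire)
```

This keeps API keys and model choice on the client side: the server never talks to an LLM provider directly.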
## Prerequisites
Before you start, make sure you have:

- **Development environment:** one of Node.js 18+, Python 3.8+, .NET 8+, Java 21+, or Rust (latest stable)
- **IDE / editor:** Visual Studio Code is recommended; any modern editor works
- **Package manager:** npm/yarn (Node), pip (Python), NuGet (.NET), Maven/Gradle (Java), or Cargo (Rust)
- **API key:** a GitHub Personal Access Token (PAT) for lessons that use GitHub Models as the LLM backend
## Official SDKs
MCP provides official SDKs for every major language. All examples in this module use these SDKs.

| Language | SDK | Maintainer |
|---|---|---|
| TypeScript | @modelcontextprotocol/sdk | Anthropic |
| Python | mcp (FastMCP) | Anthropic |
| C# / .NET | ModelContextProtocol | Microsoft |
| Java | mcp-java-sdk | Spring AI |
| Kotlin | kotlin-sdk | Anthropic |
| Rust | rmcp | Anthropic |
| Swift | swift-sdk | Loopwork AI |
| Go | go-sdk | Anthropic |
## Key takeaways
- Setting up an MCP development environment is straightforward with language-specific SDKs.
- MCP servers expose tools (functions), resources (data), and prompts (templates) through a unified protocol.
- MCP clients sit between servers and language models, giving the model access to the servers' tools, resources, and prompts.
- Testing and debugging are essential for reliable MCP implementations.
- Deployment options range from local development to cloud-based solutions.
## Start here
Begin with the first lesson, **Building Your First MCP Server**: install the SDK, scaffold a project, register tools and resources, and run the MCP Inspector.