Users live in an agentic world — they want to type a prompt and get a result, not call specific tool names with exact parameter names. Adding an LLM to your client bridges that gap: the model decides which tool to call and with what arguments based on what the user asks.

Learning objectives

By the end of this lesson you will be able to:
  • Connect an MCP client to a language model (GitHub Models / OpenAI)
  • Convert MCP tool schemas to a format the LLM understands
  • Pass a user prompt to the LLM and route the resulting tool call back to the MCP server
  • Provide a seamless natural-language experience on top of any MCP server

How it works

The flow has four steps:
  1. Connect to the MCP server and list its tools, resources, and prompts.
  2. Convert each tool’s schema to the function-calling format the LLM expects.
  3. Send the user prompt to the LLM along with the tool definitions.
  4. If the LLM decides to call a tool, forward that call to the MCP server and return the result.
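Before wiring in real SDKs, the routing in step 4 can be sketched in plain TypeScript. The `fakeLlm` and `dispatch` functions below are illustrative stand-ins, not MCP SDK or OpenAI APIs — they just show the decision-then-forward shape of the flow:

```typescript
// Minimal sketch of the routing logic with illustrative types
// (these names are stand-ins, not MCP SDK types).
type ToolDef = { name: string; description: string };
type ToolCall = { name: string; arguments: Record<string, unknown> };

// Stand-in for the LLM: given a prompt and the tool list, it may decide to call a tool.
function fakeLlm(prompt: string, tools: ToolDef[]): ToolCall | null {
  if (prompt.includes("sum") && tools.some((t) => t.name === "add")) {
    return { name: "add", arguments: { a: 2, b: 3 } };
  }
  return null; // no tool needed; a real LLM would answer directly
}

// Stand-in for client.callTool: forwards the call to a server-side handler.
function dispatch(
  call: ToolCall,
  handlers: Record<string, (args: any) => unknown>
): unknown {
  return handlers[call.name](call.arguments);
}

const tools: ToolDef[] = [{ name: "add", description: "Add two numbers" }];
const handlers = { add: (args: { a: number; b: number }) => args.a + args.b };

const call = fakeLlm("What is the sum of 2 and 3?", tools);
const result = call ? dispatch(call, handlers) : null;
console.log(result); // 5
```

The exercise below follows the same shape, with the stand-ins replaced by a real model call and a real MCP server.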

Prerequisites: GitHub token

The examples below use GitHub Models as the LLM backend. You need a GitHub Personal Access Token with the Models permission:
  1. Go to GitHub Settings → Developer Settings → Fine-grained tokens.
  2. Click Generate new token, add a note, set an expiry, and enable the Models permission.
  3. Copy the token and export it: export GITHUB_TOKEN=<your-token>

Exercise: Building the LLM client

Step 1: Connect to the server and set up the LLM

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import OpenAI from "openai";

class MCPClient {
  private openai: OpenAI;
  private client: Client;

  constructor() {
    // GitHub Models exposes an OpenAI-compatible endpoint, so the OpenAI SDK
    // works once you point it at the Models base URL and pass your token.
    this.openai = new OpenAI({
      baseURL: "https://models.inference.ai.azure.com",
      apiKey: process.env.GITHUB_TOKEN,
    });
    this.client = new Client(
      { name: "example-client", version: "1.0.0" },
      { capabilities: { prompts: {}, resources: {}, tools: {} } }
    );
  }

  // Spawn the MCP server as a child process and connect over stdio.
  async connectToServer(serverScriptPath: string) {
    const transport = new StdioClientTransport({
      command: "node",
      args: [serverScriptPath],
    });
    await this.client.connect(transport);
  }
}

Step 2: Convert MCP tools to LLM format

The LLM expects tools in the OpenAI function-calling format, so each tool definition returned by the MCP server must be mapped into that structure.
openAiToolAdapter(tool: { name: string; description?: string; input_schema: any }) {
  return {
    type: "function" as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: {
        type: "object",
        properties: tool.input_schema.properties,
        required: tool.input_schema.required,
      },
    },
  };
}
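
As a quick sanity check, the adapter can be exercised on its own with a hypothetical `add` tool shaped like an MCP tool listing (the tool itself is made up for illustration):

```typescript
// The adapter as a standalone function, identical to the method above.
function openAiToolAdapter(tool: { name: string; description?: string; input_schema: any }) {
  return {
    type: "function" as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: {
        type: "object",
        properties: tool.input_schema.properties,
        required: tool.input_schema.required,
      },
    },
  };
}

// A hypothetical "add" tool, shaped like an entry from listTools().
const addTool = {
  name: "add",
  description: "Add two numbers",
  input_schema: {
    type: "object",
    properties: {
      a: { type: "number" },
      b: { type: "number" },
    },
    required: ["a", "b"],
  },
};

const converted = openAiToolAdapter(addTool);
console.log(JSON.stringify(converted, null, 2));
// converted.type is "function", converted.function.name is "add",
// and converted.function.parameters.required is ["a", "b"].
```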

Step 3: Handle user prompts with tool calls

Send the user’s message to the LLM, check whether it decides to call a tool, then forward that call to the MCP server.
async run() {
  // 1. Discover the server's tools and convert them for the LLM.
  const toolsResult = await this.client.listTools();
  const tools = toolsResult.tools.map((tool) =>
    this.openAiToolAdapter({
      name: tool.name,
      description: tool.description,
      input_schema: tool.inputSchema,
    })
  );

  // 2. Send the user's prompt together with the tool definitions.
  const messages = [{ role: "user" as const, content: "What is the sum of 2 and 3?" }];

  const response = await this.openai.chat.completions.create({
    model: "gpt-4.1-mini",
    max_tokens: 1000,
    messages,
    tools,
  });

  // 3. If the LLM chose to call a tool, forward the call to the MCP server.
  for (const choice of response.choices) {
    if (choice.message.tool_calls) {
      for (const tool_call of choice.message.tool_calls) {
        const toolResult = await this.client.callTool({
          name: tool_call.function.name,
          // The LLM returns arguments as a JSON string; parse before forwarding.
          arguments: JSON.parse(tool_call.function.arguments),
        });
        console.log("Tool result:", toolResult);
      }
    }
  }
}
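
Rather than logging the raw result, you will usually want its text: an MCP tool result carries a content array of typed parts. The helper below is a minimal sketch (the ToolResult and ContentPart type names are illustrative, not SDK types) that joins the text parts into one string:

```typescript
// Illustrative shape of an MCP tool result with typed content parts.
type ContentPart = { type: string; text?: string };
type ToolResult = { content: ContentPart[] };

// Collect the text parts of a tool result into a single string.
function resultText(result: ToolResult): string {
  return result.content
    .filter((part) => part.type === "text" && typeof part.text === "string")
    .map((part) => part.text ?? "")
    .join("\n");
}

const example: ToolResult = { content: [{ type: "text", text: "5" }] };
console.log(resultText(example)); // prints: 5
```

The extracted text can be shown to the user directly, or appended to the conversation so the LLM can phrase a final answer.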

Assignment

Extend the server with more tools, then build a client with an LLM and test it with a variety of prompts to confirm that every server tool gets called dynamically. Built this way, the client gives end users a great experience: they write natural-language prompts instead of exact client commands, and they never need to know an MCP server is involved.

Key takeaways

  • Adding an LLM to your client provides a far better experience for users compared to explicit tool calls.
  • You need to convert MCP tool schemas to the function-calling format each LLM expects.
  • The LLM acts as a natural-language router: it decides which tool to call and with what arguments.
  • Frameworks like LangChain4j (Java) handle tool conversion and dispatch automatically.
