Learning objectives
By the end of this lesson you will be able to:
- Connect an MCP client to a language model (GitHub Models / OpenAI)
- Convert MCP tool schemas to a format the LLM understands
- Pass a user prompt to the LLM and route the resulting tool call back to the MCP server
- Provide a seamless natural-language experience on top of any MCP server
How it works
The flow has four steps:
- Connect to the MCP server and list its tools, resources, and prompts.
- Convert each tool’s schema to the function-calling format the LLM expects.
- Send the user prompt to the LLM along with the tool definitions.
- If the LLM decides to call a tool, forward that call to the MCP server and return the result.
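The four steps above can be sketched end to end in a few lines. This is a minimal, self-contained illustration, not SDK code: the dict `mcp_tools` stands in for the MCP server's tool list, and `fake_llm` stands in for the model's tool-calling decision, so only the routing logic is real.

```python
# Step 1: "list tools" -- a plain dict standing in for the MCP server.
mcp_tools = {
    "add": lambda a, b: a + b,
}

# Steps 2-3: the LLM receives the prompt plus the tool definitions and
# may answer with a tool call (name + arguments). This stub mimics that.
def fake_llm(prompt, tool_names):
    if "add" in prompt and "add" in tool_names:
        return {"tool": "add", "arguments": {"a": 2, "b": 20}}
    return {"answer": prompt}

# Step 4: route the tool call back to the "server" and return the result.
def handle(prompt):
    decision = fake_llm(prompt, list(mcp_tools))
    if "tool" in decision:
        tool = mcp_tools[decision["tool"]]
        return tool(**decision["arguments"])
    return decision["answer"]

print(handle("please add these numbers"))  # -> 22
```

In a real client, step 1 is an MCP `tools/list` call, steps 2 and 3 are a chat-completion request with tool definitions attached, and step 4 is an MCP `tools/call` request.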
Prerequisites: GitHub token
The examples below use GitHub Models as the LLM backend. You need a GitHub Personal Access Token with the Models permission:
- Go to GitHub Settings → Developer Settings → Fine-grained tokens.
- Click Generate new token, add a note, set an expiry, and enable the Models permission.
- Copy the token and export it:

```bash
export GITHUB_TOKEN=<your-token>
```
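Inside the client, read the token from the environment and fail fast with a clear message if it is missing. The helper name `github_token` is illustrative:

```python
import os

def github_token() -> str:
    """Read the GitHub PAT exported above; fail fast if it is missing."""
    token = os.environ.get("GITHUB_TOKEN")
    if not token:
        raise RuntimeError("GITHUB_TOKEN is not set; export it as shown above.")
    return token
```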
Exercise: Building the LLM client
Convert MCP tools to LLM format
The LLM expects tools in a specific JSON schema format. You need to map each MCP tool response into that structure.
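A sketch of that mapping in Python, assuming each tool from an MCP `tools/list` response carries `name`, `description`, and `inputSchema` fields, and targeting the OpenAI-style function-calling shape:

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Map one MCP tool definition to the OpenAI function-calling format.

    The MCP `inputSchema` is already a JSON Schema object, so it can be
    passed through as the function's `parameters` unchanged.
    """
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }
```

Run this over every tool returned by the server and pass the resulting list as the `tools` argument of the chat-completion request.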
- TypeScript
- Python
- Java
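Whichever language you use, the remaining piece is the same: inspect the LLM's response for tool calls and forward each one to the MCP server. A hedged Python sketch, assuming an OpenAI-style response message where each tool call carries a function name and JSON-encoded arguments, and where `call_mcp_tool(name, arguments)` is your own wrapper around the MCP client's `tools/call`:

```python
import json

def dispatch_tool_calls(response_message: dict, call_mcp_tool) -> list:
    """Forward every tool call in a chat-completion message to the MCP
    server via `call_mcp_tool(name, arguments)` and collect the results.

    `response_message` mirrors the OpenAI message shape:
    {"tool_calls": [{"function": {"name": ..., "arguments": "<json>"}}]}
    """
    results = []
    for call in response_message.get("tool_calls", []):
        name = call["function"]["name"]
        # Arguments arrive as a JSON string and must be decoded first.
        args = json.loads(call["function"]["arguments"])
        results.append(call_mcp_tool(name, args))
    return results
```

Each result would then normally be appended to the conversation as a tool message so the LLM can produce its final natural-language answer.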
Assignment
Build out the server with more tools, then create a client with an LLM and test it with different prompts to make sure all your server tools get called dynamically. Building the client this way gives the end user a much better experience: they work with natural-language prompts instead of exact client commands, and never need to know that an MCP server is involved.

Key takeaways
- Adding an LLM to your client provides a far better experience for users compared to explicit tool calls.
- You need to convert MCP tool schemas to the function-calling format each LLM expects.
- The LLM acts as a natural-language router: it decides which tool to call and with what arguments.
- Frameworks like LangChain4j (Java) handle tool conversion and dispatch automatically.