
Running MCP Servers In Games: This Changes EVERYTHING in AI Gaming?

All About AI·
5 min read

Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

In-game prompts can trigger MCP tool servers to generate images, read emails, and run web searches, with results rendered back inside the game UI.

Briefing

MCP servers can be wired directly into a game so in-world actions trigger real AI tools—generating images, reading emails, and running web searches—without leaving the game UI. In the demo, a player uses an in-game “MCP terminal” to command a TV to display an AI-generated movie-poster image, then swaps prompts to produce a new image style. The same in-game environment also connects to an email MCP server to fetch the latest message, and to a Brave search MCP server to look up the latest Bitcoin price, then sends that price back via email. The result is a single interactive front end that can orchestrate multiple external capabilities through MCP.

Under the hood, the setup follows a clear division of labor: the game acts as the client/front end, while a backend HTTP layer routes requests to the right MCP tool server. When the game sends a user query over HTTP, the backend asks an LLM (named “Claude” in the transcript) which MCP tool should handle the request, based on the tools exposed by the configured MCP servers. Once the backend selects the matching MCP client, that client forwards the tool request to the chosen MCP server over standard input/output (stdio); the tool server writes its results back over the same stdio channel, and the backend relays the final output to the game for display.
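The routing flow described above can be sketched as a few plain functions. This is an illustrative reconstruction, not code from the project: `select_tool` stands in for the Claude call, and the handlers stand in for the MCP clients.

```python
# Sketch of the backend's routing loop: the game posts a query, an
# LLM-style selector picks a tool, and the backend dispatches to it.
# All names (select_tool, TOOL_HANDLERS) are illustrative stubs.

def generate_image(prompt: str) -> str:
    """Stub for the OpenAI image MCP server."""
    return f"[image generated for: {prompt}]"

def read_latest_email(prompt: str) -> str:
    """Stub for the email MCP server."""
    return "[latest email body]"

TOOL_HANDLERS = {
    "image_generation": generate_image,
    "email_reader": read_latest_email,
}

def select_tool(query: str) -> str:
    """Stand-in for the Claude call that maps a query to a tool name."""
    if "image" in query or "poster" in query:
        return "image_generation"
    return "email_reader"

def handle_game_request(query: str) -> str:
    tool = select_tool(query)            # 1. LLM picks the tool
    result = TOOL_HANDLERS[tool](query)  # 2. backend calls the matching MCP client
    return result                        # 3. relayed back to the game UI

print(handle_game_request("make a movie poster image"))
```

In the real setup the handlers would forward JSON-RPC messages to live MCP server processes rather than returning strings directly.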

The demo also emphasizes extensibility. Adding new capabilities is framed as a configuration task: the backend includes an “MCP server script paths” list that points to local MCP servers such as an email server, a Gemini server, a Brave server, and an OpenAI server used for image generation. Once those paths are registered, the backend can route new in-game prompts to the appropriate tool. A separate example shows connecting Gemini 2.5 Pro inside the game: a prompt like “list five health benefits from strength training” returns a formatted list that appears back in the game.
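A plausible shape for that “MCP server script paths” configuration is a simple name-to-path mapping; the filenames below are guesses, since the transcript only names the server categories.

```python
# Hypothetical "MCP server script paths" registry; the paths are
# illustrative, not taken from the actual project.
MCP_SERVER_SCRIPT_PATHS = {
    "email": "./servers/email_server.py",
    "gemini": "./servers/gemini_server.py",
    "brave_search": "./servers/brave_server.py",
    "openai_image": "./servers/openai_image_server.py",
}

def register_server(name: str, script_path: str) -> None:
    """Adding a capability is just registering another script path."""
    MCP_SERVER_SCRIPT_PATHS[name] = script_path

# A new tool becomes routable as soon as its path is registered.
register_server("weather", "./servers/weather_server.py")
```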

The transcript positions this MCP-in-game approach as a practical way to turn tool-using AI into interactive experiences—whether the front end is a game, another UI, or potentially a multiplayer setup where multiple users connect to shared MCP-backed capabilities. The creator notes the project is available on GitHub, including instructions for running a client on port 3001 and connecting the game to an MCP backend URL. Overall, the core takeaway is that MCP turns a game into a command surface for external AI services, with routing handled by an LLM-backed backend and tool servers added through simple MCP configuration.

Cornell Notes

The demo shows how to connect MCP servers to a game so in-game prompts can trigger external tools and return results inside the game UI. A TV can display images generated by an OpenAI image model, an in-game command can read the latest email via an email MCP server, and another command can run a Brave web search to fetch the latest BTC price and then email it back. The architecture uses an HTTP backend that asks an LLM (Claude) which MCP tool to call, then routes the request to the correct MCP server over stdio and relays results back to the game. Extending functionality is mainly a matter of adding new MCP server script paths (e.g., Gemini 2.5 Pro) so new prompts automatically map to new tools.

How does an in-game user command turn into an action on an external AI tool?

The game sends the user’s query to a backend over HTTP. The backend then consults Claude to decide which MCP tool should handle the request based on the tools available across the configured MCP servers. After selecting the right tool, the backend uses a custom MCP client to forward the tool request to the chosen MCP server over stdio. The MCP tool server responds over the same stdio channel, and the backend relays the final results back to the game for display (e.g., rendering an image on the TV or printing text in the terminal-style UI).
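On the wire, MCP messages are JSON-RPC 2.0, and tool invocations use the `tools/call` method. A minimal framing helper might look like this; the tool name and arguments are illustrative.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Frame an MCP tools/call request as one line of JSON-RPC 2.0,
    ready to write to the server process's stdin."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(msg)

# Example: the BTC-price request from the demo (tool name is a guess).
line = build_tool_call(1, "brave_web_search", {"query": "latest BTC price"})
```

The server's reply is a matching JSON-RPC response carrying the tool output, which the backend parses and hands back to the game.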

What concrete examples demonstrate MCP tool routing inside the game?

For image generation, the TV prompt triggers the OpenAI image MCP server to generate a movie-poster-style image, which the game then renders. For email, an in-game command like “read my latest email” calls the email MCP server and returns the latest message content. For web search, the game issues a Brave search MCP server request to fetch the latest BTC price, then uses an email MCP server action to send the BTC price info to an email address and verifies it by checking the inbox.
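The BTC example is really two tool calls chained together: the output of the search tool becomes the input of the email tool. A sketch with stub tools (standing in for the Brave and email MCP servers):

```python
def brave_search(query: str) -> str:
    """Stub for the Brave search MCP server."""
    return "latest BTC price: <search result>"

def send_email(to: str, body: str) -> str:
    """Stub for the email MCP server's send action."""
    return f"sent to {to}: {body}"

def btc_price_to_inbox(address: str) -> str:
    price_info = brave_search("latest BTC price")  # step 1: web search
    return send_email(address, price_info)         # step 2: email the result
```

In the demo this chaining happens through the backend, which can feed one MCP server's output into another server's tool call.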

Why does the backend need to ask Claude which tool to use?

Tool availability can vary depending on which MCP servers are configured. By asking Claude which tool should handle a given user query, the backend can map natural-language requests (like “find the latest BTC price” or “list five health benefits from strength training”) to the correct MCP server automatically. This keeps the game logic simple: the backend handles selection and routing rather than hard-coding every possible action.
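One plausible way to phrase that selection step is a prompt listing every available tool and asking the model to name one. The wording below is a guess; the transcript only says that Claude picks the tool.

```python
# Hypothetical tool-selection prompt builder: enumerate the tools exposed
# by the configured MCP servers and ask the LLM to choose exactly one.

def build_selection_prompt(query: str, tools: dict) -> str:
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return (
        "Available tools:\n"
        f"{tool_lines}\n\n"
        f"User query: {query}\n"
        "Reply with the single tool name best suited to this query."
    )

prompt = build_selection_prompt(
    "find the latest BTC price",
    {"brave_web_search": "web search", "send_email": "send an email"},
)
```

Because the tool list is generated from whatever servers are configured, newly registered servers are offered to the model automatically.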

How does the system make adding new capabilities relatively straightforward?

The backend maintains an “MCP server script paths” configuration that lists local MCP servers such as an email server, a Gemini server, a Brave server, and an OpenAI server for image generation. To add a new capability, the user points the backend to the new MCP server script path. Once registered, prompts can be routed to that new server through the same Claude-driven tool selection flow.
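Launching a registered script is ordinarily a subprocess with its stdin/stdout wired to the backend. The sketch below demonstrates one stdio round trip; a tiny inline echo script stands in for a real server from the “script paths” list.

```python
import json
import subprocess
import sys

# Inline stand-in for an MCP server script: read one JSON-RPC line,
# reply with a matching response on stdout.
ECHO_SERVER = (
    "import sys, json\n"
    "msg = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'jsonrpc': '2.0', 'id': msg['id'], 'result': 'ok'}))\n"
)

# The backend would pass a path from MCP_SERVER_SCRIPT_PATHS here instead.
proc = subprocess.Popen(
    [sys.executable, "-c", ECHO_SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
out, _ = proc.communicate(request + "\n")
response = json.loads(out)
```

Swapping in a new capability means spawning a different script path with the same stdio wiring; nothing else in the backend changes.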

What does the Gemini 2.5 Pro example show about text generation inside the game?

The Gemini MCP server is connected so the game can call it as a tool. When the user asks Gemini to “list five health benefits from strength training,” the backend receives the tool output from the Gemini API and returns the formatted list to the game, where it appears in the in-game terminal response area.

Review Questions

  1. What components handle (1) user interaction, (2) tool selection, and (3) tool execution in the MCP-in-game architecture?
  2. How does the system decide between the email MCP server and the Brave search MCP server for different user prompts?
  3. What changes would be required to add a new MCP server capability to the game, based on the described configuration approach?

Key Points

  1. In-game prompts can trigger MCP tool servers to generate images, read emails, and run web searches, with results rendered back inside the game UI.
  2. An HTTP backend sits between the game client and MCP servers, translating in-game requests into tool calls.
  3. Claude is used to choose the correct MCP tool based on the set of available MCP servers and tools.
  4. MCP client-to-server communication runs over stdio, with the backend relaying outputs back to the game.
  5. Adding new capabilities is mainly a matter of registering additional MCP server script paths in the backend configuration.
  6. The demo includes OpenAI image generation for TV rendering and Gemini 2.5 Pro for text responses like health benefits lists.
  7. The project is shared on GitHub with instructions to run the client on port 3001 and connect to an MCP backend URL.

Highlights

A single in-game “MCP terminal” can drive multiple external services: OpenAI image generation for a TV display, email retrieval via an email MCP server, and Brave web search for BTC pricing.
The routing pattern is consistent: game → HTTP backend → Claude selects tool → MCP client calls the right MCP server → backend returns results to the game.
Extensibility is practical: new MCP servers are added by updating the backend’s “MCP server script paths,” enabling new prompts to map to new tools.
