A minimum-functionality test: patching a broken Deployment – hands-free
TL;DR: We’ll wire LibreChat to a tiny MCP server that can patch K8s resources. No human kubectl dance required.

Vibes of this post based on Tetragrammaton.
Why does this matter?
Shipping features is now lightning-fast – AI editing tools, cloud-based software engineering agents, and no-code tools get us to POC in hours. But owning that code still hurts. Most 3 AM pages aren’t logic bugs; they’re infrastructure blunders – typos in image names, mis-wired services, stray YAML. If we can hand those to an LLM agent that detects the breakage and patches it on the fly, we trade kubectl struggle for a single chat prompt.
Yes, off-the-shelf options like kubectl-ai and kagent exist, but in this post we’ll build the fix from scratch so you can understand every moving part and tailor it to your stack.
How are we going to build it?
At its core, an “agent” is just an LLM running in a while-loop: while task_not_done: think → act → observe.
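A minimal sketch of that loop in Python might look like this (the model call and tool registry are hypothetical stand-ins, not the LibreChat/MCP setup we build below):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    tool: Optional[str]   # name of the tool to call, or None when the task is done
    args: dict
    answer: str = ""

# Hypothetical stand-ins; in the real setup the model and tools come from LibreChat + MCP.
def call_llm(history: list) -> Decision:
    return Decision(tool=None, args={}, answer="nothing to do")

TOOLS: dict[str, Callable[..., str]] = {}

def run_agent(task: str, max_steps: int = 10) -> str:
    """think → act → observe until the model says it's done."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(history)                                # think
        if decision.tool is None:
            return decision.answer                                  # done
        observation = TOOLS[decision.tool](**decision.args)         # act
        history.append({"role": "tool", "content": observation})    # observe
    return "stopped after max_steps without finishing"
```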
But, as with databases being “just files”, the details decide whether the loop is magic or misery (For a deep dive into what those details look like in practice, see Anthropic’s excellent article “Building Effective Agents”). Our plan:
Pick a build style
| Path | Best for | Popular picks |
|---|---|---|
| No-code | Drag-and-drop POCs, non-dev teams | Langflow · Flowise · Dify · n8n |
| SDK / code-first | Full control, CI/CD, unit tests | OpenAI Agents SDK · Smolagents · Agent Development Kit · Agent Squad · LangGraph · LlamaIndex · CrewAI · Qwen-Agent · Semantic Kernel · AIQToolkit |
| Chat UI | End-user interface, demo speed | LibreChat · AnythingGPT forks |
All three can speak MCP (Model Context Protocol) – a language-agnostic contract that lets an LLM do things, not just chat. If you haven’t met MCP yet, read this intro and come back; it’s the secret sauce.

Use-case
For this demo I want to run a “minimum-functionality test” (MFT) for Kubernetes.
The idea comes from NLP: an MFT is something your ML system should always get right. Example – sentiment analysis must label “This is an amazing movie” as positive, no matter the params or model size.
So what’s the MFT for Kubernetes? A simple typo in the Docker image name – ngnix instead of nginx – super common (ask me how I know) and trivial to fix. It’s even part of Google’s benchmark for K8s agents, k8s-bench – see kubectl-ai, kagent, and the tasks list.
The best part is that if you’re using a different infrastructure platform, the same concepts and patterns can be reused.
Stack & Setup

- Frontend: LibreChat, a free, open-source, ChatGPT-style UI that’s fully pluggable.
- Brains: GPT-4.1 (or your local Llama 3); any MCP-aware model will do.
- Tooling: a custom K8s MCP server exposing three tools: patch_resource, apply_yaml, and logs.
- Runtime: a kind cluster that includes a deliberately broken NGINX Deployment the agent will fix.
Try it yourself: run the Makefile and the whole stack comes up in one command.
Cluster right after make up: LibreChat, Mongo, and k8s-manager are healthy; the web-app-error deployment is failing with ImagePullBackOff – exactly the bug our agent will fix.

A Headlamp snapshot.
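If you’d rather confirm the breakage programmatically than in Headlamp, a quick check with the official kubernetes Python client could look like this (a sketch, assuming your kubeconfig points at the kind cluster and the demo runs in the default namespace):

```python
# Sketch: find pods stuck on image-pull errors in the demo namespace.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("default").items:
    for cs in pod.status.container_statuses or []:
        waiting = cs.state.waiting
        if waiting and waiting.reason in ("ImagePullBackOff", "ErrImagePull"):
            print(f"{pod.metadata.name}: {waiting.reason} ({cs.image})")
```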
Agent & Solution
Inside LibreChat you add a new agent, pick an icon, and supply two things:
- Prompt – the instructions you’ll give the LLM (see the example sketch after this list):

- Tools – the functions the LLM can call (exposed by our MCP server):

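The exact prompt is up to you; as a starting point, something along these lines works (an illustrative sketch, not the repo’s exact wording):

```
You are a Kubernetes SRE agent. When a Deployment is unhealthy, use the
logs tool to find the root cause, then fix it with patch_resource or
apply_yaml. Always explain what you changed and why.
```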
Where the tools come from
They’re registered in a tiny MCP server we ship as a Python class:
```python
# Imports assume the MCP Python SDK's FastMCP server; KubernetesManager is the
# thin Kubernetes-client wrapper defined alongside this class in the repo.
from mcp.server.fastmcp import FastMCP

from k8s_manager import KubernetesManager  # repo-local helper


class KubernetesManagerMCP:
    """
    KubernetesManagerMCP provides methods to manage Kubernetes resources as MCP tools.
    """

    def __init__(self, name: str = "K8sManager", namespace: str = "default") -> None:
        """
        Initializes the KubernetesManagerMCP with the specified name and namespace.

        Args:
            name (str): Name for the MCP server.
            namespace (str): Kubernetes namespace to operate in.
        """
        self.k8s_manager = KubernetesManager(namespace=namespace)
        self.mcp = FastMCP(name)
        self._register_tools()

    def _register_tools(self) -> None:
        """Register all Kubernetes management tools with MCP."""

        @self.mcp.tool()
        def patch_resource(api_version: str, kind: str, name: str, patch_body: dict) -> str:
            """
            Patches a Kubernetes resource with the provided patch body.

            Args:
                api_version (str): API version of the resource (e.g., "apps/v1").
                kind (str): Kind of the resource (e.g., "Deployment", "Service").
                name (str): Name of the resource to patch.
                patch_body (dict): JSON patch body defining the changes.

            Returns:
                str: Success message.
            """
            return self.k8s_manager.patch_resource(api_version, kind, name, patch_body)

        @self.mcp.tool()
        def apply_yaml(yaml_file_path: str) -> str:
            """
            Applies a YAML configuration to the Kubernetes cluster.

            Args:
                yaml_file_path (str): Path to the YAML file to apply.

            Returns:
                str: Success message.
            """
            return self.k8s_manager.apply_yaml(yaml_file_path)

        @self.mcp.tool()
        def logs(pod_name: str) -> str:
            """
            Fetches logs for a pod in the configured namespace.

            Args:
                pod_name (str): Name of the pod to read logs from.

            Returns:
                str: Pod logs.
            """
            # Signature reconstructed from the docstring; see the repo for the real one.
            return self.k8s_manager.logs(pod_name)
```

Full source: mcp_k8s.py in the repo.
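For intuition, the whole fix for our ngnix typo boils down to one patch_resource call with a merge-patch-style body like the one below. The deployment name comes from the demo; the container name and the exact patch format are assumptions, so check the repo’s manifest and KubernetesManager implementation for the real values:

```python
# Hypothetical patch the agent might send to fix the ngnix → nginx typo.
# "web-app" (container name) and the nginx tag are assumptions for illustration.
patch_body = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "web-app", "image": "nginx:latest"}
                ]
            }
        }
    }
}

server = KubernetesManagerMCP(namespace="default")
server.k8s_manager.patch_resource("apps/v1", "Deployment", "web-app-error", patch_body)
```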
Connecting the MCP server to LibreChat
```yaml
version: 1.2.1
mcpServers:
  k8s:
    url: http://k8s-manager.default.svc.cluster.local:8000/sse
```

(See the complete file at lines 42-46 of k8s-libre.yaml.)
Because the MCP server runs in the same cluster as LibreChat, we don’t need to expose it outside the cluster, which makes things a little more secure! Once the MCP server is connected to LibreChat, assign its tools to any of your agents:
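If you want to sanity-check the endpoint before (or after) wiring it up, a tiny client built on the official mcp Python SDK can list the exposed tools (a sketch, assuming you port-forward the service with kubectl port-forward svc/k8s-manager 8000:8000):

```python
# Sketch: connect to the k8s-manager MCP server over SSE and list its tools.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Expect: patch_resource, apply_yaml, logs
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```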

Outcome
See for yourself: the agent passed the minimum-functionality test and fixed the root cause in seconds – conversation screenshot below.

Summary
Healthy deployment state – mission accomplished!
You just saw the agent patch a broken Deployment and bring every pod to running.

If you want to try it yourself, the full repo lives here → https://github.com/kyryl-opens-ml/mcp-repl/tree/main/examples/infra_librechat.
Related tools in the same space:
- https://github.com/k8sgpt-ai/k8sgpt
- https://github.com/GoogleCloudPlatform/kubectl-ai
- https://github.com/kagent-dev/kagent
The cool part: you can mix and match LibreChat + MCP however you like – entirely inside a single local cluster. Have fun hacking!