INFRASTRUCTURE
LLMs on Autopilot: Running AI Agents on Kubernetes with Open Source Tools
AI agents are hot, but deploying them in production is often an exercise in frustration. In this talk, we’ll show how to run open source LLM agents (like LangChain, AutoGen, or CrewAI) reliably on Kubernetes using tools built for real-world operations: Crossplane for infrastructure orchestration, Ray for distributed compute, Prometheus for observability, and a few homegrown tricks to glue it all together.
We’ll walk through the architecture, automation patterns, and gotchas that emerge when you move from demo to deployment. Expect a live demo of a fully autonomous AI agent operating in a cloud-native stack—planning, executing, and scaling tasks with minimal human intervention.
If you’re tired of “just works on my laptop” AI and want to see how agents fly in real production airspace, this talk is for you.