A production deployment guide published today by Cloud Native Now details how to deploy kagent — the CNCF Sandbox framework that lets platform engineers define AI agents as Kubernetes custom resources — on Oracle Kubernetes Engine. The full stack: containerized agents on OKE, serverless autoscaling via Virtual Nodes, LLM inference through OCI Generative AI, and secrets management through OCI Vault.
The core proposition: agents become deployable, versioned, auditable Kubernetes objects. The same GitOps workflows that manage microservices now manage agent lifecycle, from provisioning to rollback.
How kagent Works
kagent extends the Kubernetes API with three custom resources, according to its GitHub repository. An Agent CRD bundles a system prompt, tool configuration, and LLM provider into a single declarative spec. A ModelConfig resource defines which LLM provider to use (OpenAI, Anthropic, Google Vertex AI, Ollama, or custom endpoints). A ToolServer resource connects agents to MCP servers with pre-built integrations for Kubernetes, Istio, Helm, Argo, Prometheus, Grafana, and Cilium.
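As a sketch of what that declarative model looks like in practice — field names here are illustrative, not taken verbatim from the kagent schema, so consult the project's GitHub repository for the actual API — an agent definition might read:

```yaml
# Illustrative only: the real kagent CRD may use different field names.
apiVersion: kagent.dev/v1alpha1
kind: Agent
metadata:
  name: k8s-troubleshooter
spec:
  systemPrompt: |
    You are a Kubernetes troubleshooting assistant.
  modelConfigRef: oci-genai       # references a ModelConfig resource
  tools:
    - toolServerRef: k8s-tools    # references a ToolServer resource
```

Once applied with `kubectl apply`, the agent is a first-class cluster object: it shows up in `kubectl get agents` and participates in RBAC, admission control, and audit logging like any other resource.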
The CNCF accepted kagent at the Sandbox maturity level in May 2025. In the year since, the project has grown 260% in contributors and 273% in commits, according to LFX Insights data on the CNCF project page.
The OCI Integration Stack
The Cloud Native Now guide covers what OKE specifically adds to the kagent deployment model. Virtual Nodes provide serverless Kubernetes compute, eliminating the need to manage worker node pools for bursty agent workloads. Pods spin up on demand and scale to zero when idle.
OCI Generative AI provides an OpenAI-compatible inference endpoint supporting Cohere Command R+, Meta Llama 3, and other foundation models with zero-data-retention guarantees. No GPU provisioning is required on the cluster side. OCI Vault handles secrets injection into pods, keeping API keys out of container images and Kubernetes manifests.
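A hedged sketch of how those two pieces could fit together: a ModelConfig pointing at an OpenAI-compatible endpoint, with the API key read from a Kubernetes Secret that OCI Vault populates (for example via a Secrets Store CSI driver). The endpoint placeholder and field names below are illustrative, not the documented kagent schema.

```yaml
# Illustrative sketch; field names may differ from the real kagent API.
apiVersion: kagent.dev/v1alpha1
kind: ModelConfig
metadata:
  name: oci-genai
spec:
  provider: openai                 # OpenAI-compatible API surface
  baseUrl: https://<oci-genai-endpoint>/v1   # placeholder, region-specific
  model: cohere.command-r-plus
  apiKeySecretRef:
    name: genai-credentials        # Secret synced from OCI Vault,
    key: api-key                   # never baked into the image or manifest
```

The key point is the last stanza: the credential lives in Vault and is projected into the pod at runtime, so the manifest committed to Git contains only a reference.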
The guide also covers MCP server deployment as standalone Kubernetes workloads, enabling standardized tool discovery across agents in the cluster.
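An MCP server in this model is just an ordinary Deployment and Service, with a ToolServer resource pointing agents at its in-cluster DNS name. The image name and ToolServer fields below are a hypothetical sketch, not the exact kagent schema.

```yaml
# Illustrative: an MCP server as a plain Kubernetes workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-prometheus
spec:
  replicas: 1
  selector:
    matchLabels: {app: mcp-prometheus}
  template:
    metadata:
      labels: {app: mcp-prometheus}
    spec:
      containers:
        - name: server
          image: example.com/mcp-prometheus:latest  # hypothetical image
          ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service
metadata:
  name: mcp-prometheus
spec:
  selector: {app: mcp-prometheus}
  ports: [{port: 8080}]
---
# Registers the server so agents can discover its tools (sketch only).
apiVersion: kagent.dev/v1alpha1
kind: ToolServer
metadata:
  name: prometheus-tools
spec:
  url: http://mcp-prometheus:8080   # in-cluster Service DNS name
```

Because the MCP server is a standard workload, it scales, upgrades, and gets monitored exactly like any other service in the cluster.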
Why Kubernetes for Agents
The argument for Kubernetes-native agent management comes down to operational maturity. Kubernetes already solved orchestration, secrets, networking, and observability for microservices. kagent applies those same primitives to agents.
Red Hat’s Emerging Technologies team tested this in January 2026, deploying multi-agent systems on a related Kubernetes-based control plane (Kagenti). Their finding: treating agents as network services with workload identity (via SPIFFE) rather than static API keys eliminated the credential sprawl problem that plagues most agent deployments. Each agent pod received a cryptographic identity automatically rotated by sidecars, with no static secrets stored in ConfigMaps.
That operational model matters as enterprise agent deployments scale. An agent defined as a CRD can be version-controlled in Git, deployed through existing CI/CD pipelines, rolled back on failure, and monitored through standard Kubernetes observability tools like OpenTelemetry.
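Concretely, the same GitOps wiring used for microservices applies unchanged. As one sketch (the repo URL, paths, and namespaces below are hypothetical), an Argo CD Application could sync agent manifests from Git, making rollback a `git revert`:

```yaml
# Hypothetical GitOps wiring: Argo CD syncs agent CRs from a Git repo,
# so a bad prompt or tool change rolls back like any other deployment.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: agents
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/agents.git  # hypothetical
    targetRevision: main
    path: manifests/
  destination:
    server: https://kubernetes.default.svc
    namespace: kagent
  syncPolicy:
    automated:
      prune: true      # remove agents deleted from Git
      selfHeal: true   # revert out-of-band cluster edits
```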
The Competing Approaches
kagent’s Kubernetes-native approach sits in contrast to the hosted agent platforms gaining traction in the enterprise. Anthropic’s Claude Managed Agents, Microsoft’s Copilot Studio, and SAP’s Autonomous Enterprise (which launched 200+ agents at Sapphire 2026 this week) all offer managed agent runtimes that abstract away infrastructure entirely.
The tradeoff is control versus convenience. Teams already running Kubernetes get agent management without adding a new platform. Teams without Kubernetes expertise face a steeper onboarding curve than a managed service.
For platform engineering teams evaluating agent infrastructure, the question is whether agents are applications (deploy them like everything else) or a new category that demands its own runtime. kagent and OKE are betting on the former.