On May 15, 2026, Clu — Cloudology's Kubernetes operations copilot — will be generally available on the AWS Marketplace with a 30-day free trial. Clu is the AI assistant we wished existed every time a 2 a.m. page pulled us into a cluster we hadn't touched in months: a copilot that actually understands what's running, where it's running, and what it's connected to — and that runs entirely inside your cluster, not in someone else's SaaS.
This post is a heads-up about what's coming, what it does, and how to be ready on day one.

Kubernetes runs the production systems that the modern internet depends on, and yet the day-to-day experience of operating it has barely changed in five years. SREs and platform engineers still juggle a dozen browser tabs to answer "is this thing healthy?" — kubectl, Grafana, the cloud provider console, the IaC repo, the runbook wiki, and an internal Slack channel. The hard parts of Kubernetes operations have never been about typing the right command. They've been about holding the whole system in your head: which deployment talks to which service, which CRDs that operator owns, what the last person did to the HPA at 4 a.m., and whether this alert is the same incident as that one.
Generative AI is finally good enough to help with that work — but most of what's available today is poorly suited to the job. General-purpose AI assistants don't know your cluster. SaaS Kubernetes copilots want you to ship cluster state — secrets, manifests, logs — to their cloud, which is a non-starter for any team running regulated workloads. We built Clu to be the alternative: a copilot that's actually useful for cluster operations, that runs inside your cluster, and that you control end-to-end.

Clu installs as a small set of Kubernetes resources via Helm and immediately becomes a fluent participant in the work of operating your cluster. Once it's up, you'll be able to ask it the kinds of questions you'd otherwise piece together by hand: which pods mount a given secret, what changed right before an alert fired, why a workload keeps getting OOM-killed.
Behind those answers, Clu does four things at once: builds a real-time knowledge graph of cluster state, watches event streams for change correlation, scaffolds manifests and IaC when you ask for them, and records every action it takes to a hash-chained audit log.
When Clu starts, it builds a graph of every workload, service, ingress, ConfigMap, secret reference, RBAC binding, network policy, and CRD instance in your cluster — and the relationships between them. The graph updates as objects change. That's what lets Clu answer questions about relationships, not just objects: which pods would be affected if you rotated this secret, which workloads are exposed by an ingress with a missing TLS annotation, which services share a node pool with a noisy neighbor.
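Clu's internals aren't public, but the relationship-query idea is easy to picture. The sketch below is illustrative, not Clu's implementation: it indexes simplified pod records (all names made up) by the secrets and ConfigMaps they reference, then answers "which pods would be affected if you rotated this secret" from that index.

```python
from collections import defaultdict

# Simplified stand-ins for cluster objects; in a real cluster these
# would come from the Kubernetes API and update on every change.
pods = [
    {"name": "web-7f9c", "secrets": ["tls-cert"], "configmaps": ["web-conf"]},
    {"name": "api-5d2b", "secrets": ["db-creds", "tls-cert"], "configmaps": []},
    {"name": "worker-1a2b", "secrets": ["db-creds"], "configmaps": ["worker-conf"]},
]

def build_graph(pods):
    """Index edges from each referenced object back to the pods that use it."""
    graph = defaultdict(set)
    for pod in pods:
        for secret in pod["secrets"]:
            graph[("Secret", secret)].add(pod["name"])
        for cm in pod["configmaps"]:
            graph[("ConfigMap", cm)].add(pod["name"])
    return graph

def affected_by_rotation(graph, secret_name):
    """Which pods would be affected if this secret were rotated?"""
    return sorted(graph[("Secret", secret_name)])

graph = build_graph(pods)
print(affected_by_rotation(graph, "tls-cert"))  # ['api-5d2b', 'web-7f9c']
```

The real graph covers far more edge types (RBAC bindings, network policies, CRD ownership), but the query shape is the same: follow edges, not objects.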
When you ask Clu about an incident, it correlates pod state, recent events, recent deploys (from your existing GitOps tooling if it's wired in), HPA behavior, and OOM/crash patterns into a written summary you can paste into a postmortem. It does not blindly suggest "kubectl rollout restart" — it explains what it observed, what it considers the most likely cause, and exactly what changed, so you can act with confidence.
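As a rough illustration of change correlation — a toy heuristic, not Clu's actual logic — one simple move is to line up recent changes against the first crash and pick the closest change inside a time window. All names and timestamps below are invented:

```python
from datetime import datetime, timedelta

# Toy event streams; real input would be Kubernetes Events plus deploy
# records pulled from GitOps tooling.
changes = [
    ("deploy api v41", datetime(2026, 5, 15, 3, 10)),
    ("HPA scaled web 4->8", datetime(2026, 5, 15, 3, 55)),
    ("deploy api v42", datetime(2026, 5, 15, 4, 2)),
]
crashes = [
    ("api-5d2b OOMKilled", datetime(2026, 5, 15, 4, 6)),
    ("api-9e1f OOMKilled", datetime(2026, 5, 15, 4, 9)),
]

def likely_cause(changes, crashes, window=timedelta(minutes=30)):
    """Most recent change that precedes the first crash within the window."""
    first_crash = min(t for _, t in crashes)
    candidates = [(what, t) for what, t in changes
                  if t <= first_crash and first_crash - t <= window]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1])[0]

print(likely_cause(changes, crashes))  # deploy api v42
```

Real correlation is messier — multiple plausible causes, overlapping incidents — which is exactly why the summary explains its reasoning instead of asserting a verdict.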
Clu generates Kubernetes manifests, Helm values, Kustomize overlays, and Terraform for the cloud resources your workloads depend on. Generation isn't an opaque "ship the prompt to a model and hope" step: Clu uses your cluster's existing patterns as the template and explains the choices it's making. The output lands as a pull request in your repo, never as a direct cluster mutation.
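One way to picture "use your cluster's existing patterns as the template" is label inheritance: intersect the labels your current workloads already share, then stamp them onto the new manifest. This is a minimal sketch over made-up workload records, not Clu's generator:

```python
# Simplified existing workloads; real input would be Deployment objects.
existing = [
    {"name": "web", "labels": {"team": "payments", "env": "prod", "app": "web"}},
    {"name": "api", "labels": {"team": "payments", "env": "prod", "app": "api"}},
]

def shared_labels(workloads):
    """Intersect label sets across workloads to recover the cluster's conventions."""
    common = dict(workloads[0]["labels"])
    for w in workloads[1:]:
        common = {k: v for k, v in common.items() if w["labels"].get(k) == v}
    return common

def scaffold(name, workloads):
    """A new manifest inherits the shared labels plus its own app label."""
    labels = shared_labels(workloads)
    labels["app"] = name
    return {"name": name, "labels": labels}

print(scaffold("worker", existing))
# {'name': 'worker', 'labels': {'team': 'payments', 'env': 'prod', 'app': 'worker'}}
```

The same intersect-and-inherit idea extends to resource requests, probes, and annotations — and because the output is a pull request, you review the inferred conventions before anything ships.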
Clu can be granted write permissions to take action — restarts, scale changes, applying a new manifest — but every write goes through a dry-run first, surfaces a diff, and requires explicit approval. Approved actions are recorded in a hash-chained audit log: each entry includes a cryptographic hash of the previous one, so any tampering breaks the chain and is detectable. That's a hard requirement for the regulated environments we're targeting, and it'll be on by default at launch.
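A hash-chained log is simple to build and to verify. Here's a minimal sketch (ours, not Clu's code) using SHA-256: each entry stores the hash of the entry before it, and verification recomputes every link, so editing any entry breaks everything after it.

```python
import hashlib
import json

def append(log, action):
    """Each entry stores the SHA-256 of the previous entry's serialized form."""
    if log:
        prev = hashlib.sha256(json.dumps(log[-1], sort_keys=True).encode()).hexdigest()
    else:
        prev = "0" * 64  # genesis entry has no predecessor
    log.append({"action": action, "prev": prev})

def verify(log):
    """Recompute every link; a mismatch anywhere means the log was tampered with."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev:
            return False
        prev = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return True

log = []
append(log, "scale web 4->8 (approved by alice)")
append(log, "rollout restart api (approved by bob)")
print(verify(log))                     # True
log[0]["action"] = "scale web 4->800"  # tamper with history
print(verify(log))                     # False
```

Detection is the point here, not prevention: pair the chain with append-only storage (or periodic anchoring of the latest hash somewhere external) so a tamperer can't simply rewrite the whole chain.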
We made an intentional product decision early: Clu runs inside the cluster you're operating, not as an external SaaS. That choice has real engineering cost — every customer is a fresh deployment, every tenant is on their own — but it earns three things that mattered more to us than ease of distribution: your cluster state, secrets, and logs never leave your environment; you choose and control the model Clu talks to; and the audit trail lives where you and your compliance team can reach it, even in an air-gapped deployment.
Clu does not bundle a model. You point it at one — an OpenAI or Anthropic API key, a self-hosted Llama or Qwen, a Bedrock or Vertex endpoint, an internal model served by your platform team. We've intentionally kept this open so you can choose based on your data residency, latency, and cost requirements. For teams that need to keep prompts and responses inside their cloud account, the AWS Bedrock and Azure AI Foundry integrations will be turnkey on day one.
We built Clu for the people who get paged when a cluster misbehaves: site reliability engineers, platform engineers, and the small Kubernetes teams inside larger organizations that own production for a long list of services. If you're operating between five and a few hundred clusters, you're our target user. If you're running a single dev cluster for a side project, Clu will work, but you may not feel the leverage. If you're operating thousands of clusters at FAANG scale, you've already built half of what Clu does — but talk to us, because the bring-your-own-model story may still be worth a look.
At launch, Clu will be licensed per-cluster on the AWS Marketplace, with a 30-day free trial that runs in your account from day one — no demo call required to get started. Multi-cluster and enterprise pricing (with annual commits, on-prem licensing for air-gapped customers, and custom SLAs) will be available by talking to us directly.
May 15 is the first public release, not the last. On our near-term roadmap: native Argo CD and Flux change correlation, multi-cluster federation, deeper integrations with the major incident-management platforms, and a policy authoring assistant for OPA Gatekeeper and Kyverno. We'll be talking through each of these on this blog and in the Clu changelog as they ship.
Want a heads-up the moment Clu is live? Drop us a line and we'll send you the install link as soon as it's available.