kcp

What is kcp? Kubernetes Without the Pods

Abubakar Siddiq Ango, Senior Developer Advocate
Mar 17, 2026
getting-started platform-engineering multi-tenancy

Prerequisites

  • Basic understanding of Kubernetes concepts (API server, CRDs, RBAC)
  • Familiarity with the concept of multi-tenancy

Introduction

Platform engineering is how most organizations solve the “every team needs their own Kubernetes” problem. The standard playbook is straightforward: spin up a cluster per team, layer on some RBAC, wire up a GitOps pipeline, and call it a day. It works, but it is expensive. Each cluster carries the overhead of a control plane, a set of nodes, monitoring, upgrades, and the operational burden of keeping it all running.

kcp takes a fundamentally different approach. What if you could give every team what looks like their own Kubernetes cluster — with its own CRDs, its own RBAC, its own resources — but without running any compute? Just the API machinery.

That is exactly what kcp does. It is a CNCF Sandbox project that provides Kubernetes-like API machinery — the control plane — without pods, nodes, or container orchestration. You get the parts of Kubernetes that manage state and APIs, without the parts that manage compute.

What is kcp?

kcp is an open-source, horizontally scalable control plane for Kubernetes-like APIs. It was accepted to the CNCF Sandbox on September 19, 2023, with the first commit dating back to July 2020.

At its core, kcp provides workspaces — each one acts like an independent Kubernetes API server. You can create CRDs, apply RBAC policies, run admission webhooks, and manage resources within each workspace. From the outside, interacting with a workspace feels exactly like interacting with a Kubernetes cluster. You use kubectl. You write standard YAML manifests. Your existing tools work without modification.

The critical difference: there are no pods. No nodes. No container runtime. No scheduler deciding where to place workloads. kcp is the Kubernetes API machinery stripped down to its essence — CRDs, RBAC, admission control, and resource management — without the scheduling and compute layer. As the project itself puts it: kcp “does not replace Kubernetes, but complements it as a backend to host Kubernetes-like APIs as SaaS.”

Because a workspace is backed by a logical cluster stored in its own etcd prefix range, workspaces are cheap. A single kcp instance can host many thousands of them — each with its own CRDs, its own RBAC, its own object storage — rather than spinning up a full Kubernetes control plane per tenant.

How is kcp Different from Kubernetes?

The easiest way to understand kcp is to look at what it keeps and what it drops from Kubernetes:

| Kubernetes Has    | kcp Has                                    | kcp Does NOT Have            |
|-------------------|--------------------------------------------|------------------------------|
| API Server        | API Server                                 | Pods                         |
| CRDs              | CRDs                                       | Nodes / kubelet              |
| RBAC              | RBAC                                       | Kube-scheduler               |
| Admission Control | Admission Control                          | Kube-controller-manager      |
| Namespaces        | Workspaces (stronger isolation)            | Container runtime            |
| etcd              | etcd (logical clusters, disjoint prefixes) | Built-in workload controllers |

The key insight here: kcp takes only the parts of Kubernetes that manage state and APIs, and drops the parts that manage compute. This is not a limitation — it is a design decision. The control plane and the compute plane are separate concerns, and kcp lets you treat them that way.

kcp on its own does not run containers and does not schedule workloads onto physical Kubernetes clusters. Earlier versions of kcp shipped a Syncer and Transparent Multi-Cluster (TMC) code for that purpose, but both were removed from the project in May 2023 to refocus kcp on pure API management. When workloads are needed, API providers run their own multi-tenant operators that read from kcp workspaces (via APIBindings) and reconcile into real Kubernetes clusters or any other backend. kcp is the control plane; compute stays where it already lives.

Key Concepts

Three concepts form the foundation of kcp. Understanding these gives you the mental model for everything else.

Workspaces

Workspaces are the fundamental isolation unit in kcp. Each workspace is a Kubernetes-cluster-like HTTPS endpoint backed by its own logical cluster in etcd, with disjoint storage prefixes. That means a workspace has its own CRDs, its own RBAC, and its own object storage — no sharing with other workspaces. If namespaces in Kubernetes are rooms in a shared house, workspaces in kcp are separate houses entirely.

Workspaces are hierarchical. A workspace can contain child workspaces, which can contain their own children, arranged in a tree. Workspace types govern which parents can hold which children, so the hierarchy maps naturally to organizational structures: a top-level workspace for your platform team, child workspaces for each product team, and grandchild workspaces for individual environments like staging and production.

Tip: From a developer’s perspective, interacting with a workspace is identical to interacting with a Kubernetes cluster. Point kubectl, client-go, or Helm at the workspace endpoint and everything works.
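To make this concrete, here is a sketch of what day-to-day workspace navigation looks like. It assumes kcp’s kubectl workspace plugin is installed and your kubeconfig points at a kcp instance; the workspace names are illustrative, and plugin flags may vary between kcp releases.

```shell
# Show the workspace you are currently in
kubectl ws .

# Create a child workspace for a team and enter it
kubectl ws create platform-team --enter

# Create nested workspaces for individual environments
kubectl ws create staging
kubectl ws create production

# Inside a workspace, everything is plain Kubernetes tooling:
# the API surface is served by kcp, not by a real cluster
kubectl api-resources
kubectl create namespace demo

# Move back up the hierarchy
kubectl ws ..
```

Each `kubectl ws` invocation rewrites your kubeconfig to point at a different workspace endpoint; everything after that is ordinary kubectl.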

APIExport

An APIExport is how a provider workspace publishes APIs that other workspaces can consume. It references one or more APIResourceSchemas — the CRD-like schemas that define the shape of the exported APIs — and exposes them for binding.

Think of it as a service catalog entry. A database team creates an APIExport to say: “I offer a Database resource. Here is its schema. Here is how to reach the controller that reconciles it.”
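A hedged sketch of what that catalog entry looks like on the provider side. The API group/version (`apis.kcp.io/v1alpha1`), the `example.io` group, and all names are illustrative and may differ in your kcp release; consult the kcp API reference for the exact schema.

```yaml
# Provider workspace: define the shape of the Database API...
apiVersion: apis.kcp.io/v1alpha1
kind: APIResourceSchema
metadata:
  # APIResourceSchema names carry a prefix (here "v1") so schemas can evolve
  name: v1.databases.example.io
spec:
  group: example.io
  names:
    kind: Database
    plural: databases
    singular: database
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
---
# ...and publish it for other workspaces to bind
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: database-service
spec:
  latestResourceSchemas:
    - v1.databases.example.io
```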

APIBinding

An APIBinding is how a consumer workspace imports and uses a published API. It references an APIExport and binds every API defined there into the consuming workspace. Once bound, the consumer can kubectl apply a Database manifest as if the CRD were installed locally — but the controller reconciling it lives in the provider’s workspace, not the consumer’s.
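On the consumer side, the binding and a first resource might look like the following sketch. The provider workspace path `root:db-team`, the reference field layout, and the API version are assumptions based on kcp’s v1alpha1 APIs; check your release’s documentation for the exact spelling.

```yaml
# Consumer workspace: bind to the provider's export
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: database-service
spec:
  reference:
    export:
      # Workspace path of the provider and the name of its APIExport
      path: root:db-team
      name: database-service
---
# Once the binding is ready, Database behaves like a locally installed CRD
apiVersion: example.io/v1
kind: Database
metadata:
  name: my-app-prod
spec:
  engine: postgres
```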

Together, APIExport and APIBinding create a decentralized service catalog. Providers publish what they offer. Consumers bind to what they need. The platform team governs which exports are visible to which workspaces through workspace hierarchy and RBAC. No tickets. No manual provisioning. Just APIs.

```mermaid
flowchart LR
    subgraph Provider["Provider Workspace (db-team)"]
        Schema[APIResourceSchema<br>databases.v1.example.io]
        Export[APIExport<br>database-service]
        Controller[Multi-tenant<br>operator]
        Schema --> Export
    end
    subgraph Consumer["Consumer Workspace (app-team)"]
        Binding[APIBinding<br>→ database-service]
        DB[Database<br>my-app-prod]
        Binding --> DB
    end
    Export -. published .-> Binding
    Controller -. reconciles .-> DB
    DB -. provisions .-> Backend[(Real DB<br>on k8s / cloud)]
```

Running workloads with kcp

kcp itself does not run containers. To put APIs bound in a workspace to work, a provider typically runs a multi-tenant operator that watches across the workspaces that bind its APIExport and reconciles into whatever backend it cares about — one or many physical Kubernetes clusters, a cloud provider, or a bespoke system. The kcp community calls this pattern “APIs as a service”: kcp is the control plane, someone else’s operator is the compute.
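One way providers wire this up, sketched below with heavy caveats: kcp exposes a per-export “virtual workspace” endpoint that aggregates objects from every workspace that has bound the export. The URL discovery path and the wildcard request shape shown here are assumptions that have changed across kcp releases (newer versions use APIExportEndpointSlice), so treat this purely as an illustration of the pattern.

```shell
# In the provider workspace: look up the virtual workspace URL for the
# export (field path is illustrative and release-dependent)
kubectl get apiexport database-service \
  -o jsonpath='{.status.virtualWorkspaces[0].url}'

# The provider's multi-tenant operator points its client at that URL and
# lists Database objects across all consuming workspaces, conceptually:
#   GET <url>/clusters/*/apis/example.io/v1/databases
# It then reconciles each object into a real backend (a physical
# Kubernetes cluster, a cloud API, etc.) and writes status back to kcp.
```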

When Should You Use kcp?

kcp is not a general-purpose replacement for Kubernetes. It solves specific problems well. Here are three concrete use cases where kcp shines.

Internal Developer Platforms

Give every team a workspace that looks and feels like a Kubernetes cluster. Developers interact with standard kubectl commands, write standard manifests, and use the tools they already know. But behind the scenes, you are running a single kcp instance instead of dozens of separate clusters.

The cost savings are significant. You eliminate the control plane overhead of individual clusters, reduce the operational burden of managing upgrades and patches across many clusters, and centralize policy enforcement.

SaaS Control Planes

If you are building a multi-tenant SaaS product, kcp gives you a natural isolation model. Each customer gets their own workspace with custom APIs tailored to their needs. Workspaces provide strong isolation guarantees — one customer cannot see or affect another’s resources.

This pattern works especially well for infrastructure products: managed databases, CI/CD platforms, monitoring services, or anything where customers need their own resource namespace with API-driven management.

API-First Platform Engineering

Use APIExport and APIBinding to build a self-service platform where service teams publish what they offer and consumer teams bind to what they need. This creates a decentralized, API-driven service catalog that scales with your organization.

Instead of a central platform team bottleneck fielding requests, each team advertises its capabilities as APIs. Consumers discover and bind to them. The platform team governs access through RBAC and workspace hierarchy, without being in the critical path for every request.

kcp vs Traditional Multi-Tenancy

Multi-tenancy in Kubernetes has been a persistent challenge. Here is how the main approaches compare:

  • Namespaces: Weak isolation within a shared cluster. All tenants share the same CRDs, the same API server, and the same control plane. Fine for trusted teams within a single organization, but the boundaries are soft.

  • vcluster: Virtual Kubernetes clusters that run as workloads inside a host cluster — each vcluster gets its own API server (typically k3s or k8s) and its own data store. Stronger isolation than namespaces, and workloads run for real. But every vcluster still consumes compute on the host, and you are still responsible for the host cluster.

  • kcp Workspaces: Full API-level isolation with zero compute overhead. Each workspace is an independent API scope. No shared CRDs, no shared resources, no shared control plane state. Best suited for platform building where you need strong isolation at scale.

Each approach fits different situations. Namespaces work for simple cases. vcluster works when you need virtual cluster semantics with existing compute. kcp works when you need the API isolation without the compute cost.

For a deeper comparison, see kcp Workspaces vs Namespaces vs vcluster.

The Bigger Picture

kcp is the foundation of the Kubermatic Developer Platform (KDP). It provides the multi-tenant, API-driven control plane that KDP builds on to deliver a full developer platform experience.

This fits a broader trend in the platform engineering movement. The industry is shifting toward API-driven, self-service infrastructure where platform teams build products for their internal developers. kcp represents a specific architectural pattern within that shift: the control plane as a product, not a side effect of running containers.

By separating the control plane from compute, kcp lets you scale your platform’s API layer independently of your compute layer. You can serve thousands of tenants from a single kcp instance and attach compute clusters only where and when you need them.

Warning: kcp is a CNCF Sandbox project. It is under active development and not yet recommended for production workloads without careful evaluation. The API surface may change between releases.

Next Steps

Ready to get hands-on? The next tutorial in this series walks you through installing kcp and creating your first workspace.

If you are evaluating multi-tenancy options and want a more detailed comparison of the approaches mentioned above, check out the dedicated comparison guide.

Summary

kcp is Kubernetes without the compute layer — pure API machinery for building multi-tenant platforms. It provides workspaces for isolation and APIExport/APIBinding for a decentralized service catalog. Compute stays with whoever runs the operators behind the APIs. If you are building an Internal Developer Platform or a multi-tenant SaaS product, kcp gives you Kubernetes-compatible isolation at a fraction of the cost of running separate clusters.