Introduction
In the previous article, you learned what kcp is and why it exists: a Kubernetes API server focused on state and API management, without the compute layer. That was theory. Now you are going to get your hands dirty.
In this tutorial, you will install kcp on your local machine, start a server, create multiple workspaces, and see workspace isolation in action. By the end, you will understand how kcp workspaces provide independent API scopes — each with its own resources, its own CRDs, and its own view of the world. You will also see why this matters for platform engineering: workspaces give you the isolation of separate clusters without the operational overhead of actually running them.
Everything in this tutorial runs locally. You do not need a Kubernetes cluster, a cloud account, or any special infrastructure. Just a terminal and about fifteen minutes.
Step 1: Install kcp
You have two options for installing kcp: download a pre-built binary or build from source. The pre-built binary is faster and works for most people.
Option A: Download a Pre-Built Binary (Recommended)
Grab the latest release from GitHub. The following commands detect the latest version, download it, extract it, and move the binary to your PATH:
KCP_VERSION=$(curl -s https://api.github.com/repos/kcp-dev/kcp/releases/latest | grep tag_name | cut -d '"' -f 4)
# Release assets drop the leading "v" from the tag name, so strip it in the filename.
curl -L -o kcp.tar.gz "https://github.com/kcp-dev/kcp/releases/download/${KCP_VERSION}/kcp_${KCP_VERSION#v}_linux_amd64.tar.gz"
tar xzf kcp.tar.gz
sudo mv bin/kcp /usr/local/bin/
If you are on macOS, replace linux_amd64 with darwin_amd64 (Intel) or darwin_arm64 (Apple Silicon).
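If you script this often, the platform suffix can be derived automatically instead of edited by hand. A minimal sketch, assuming the release assets follow the os_arch naming used above:

```shell
# Derive the release asset suffix (e.g. linux_amd64) from uname output.
# Assumes kcp assets use the <os>_<arch> naming shown in the commands above.
kcp_platform_suffix() {
  local os arch
  case "$(uname -s)" in
    Linux)  os=linux ;;
    Darwin) os=darwin ;;
    *) echo "unsupported OS: $(uname -s)" >&2; return 1 ;;
  esac
  case "$(uname -m)" in
    x86_64)        arch=amd64 ;;
    arm64|aarch64) arch=arm64 ;;
    *) echo "unsupported architecture: $(uname -m)" >&2; return 1 ;;
  esac
  echo "${os}_${arch}"
}

kcp_platform_suffix
```

You could then substitute $(kcp_platform_suffix) for linux_amd64 in the download URL.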
Option B: Build from Source
If you prefer to build from source, or if you want to work with the latest development version, clone the repository and build:
git clone https://github.com/kcp-dev/kcp.git
cd kcp
make build
sudo mv bin/kcp /usr/local/bin/
Building from source requires Go 1.21 or later. If the build fails with version errors, check your Go version with go version and upgrade if needed.
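If you want to automate that check in a script, a small comparison helper works. This is a convenience sketch; the 1.21 minimum comes from the requirement stated above:

```shell
# Return success if a "goX.Y.Z"-style version string meets the 1.21 minimum.
go_version_ok() {
  local major minor
  major=$(echo "$1" | sed -n 's/^go\([0-9]*\)\..*/\1/p')
  minor=$(echo "$1" | sed -n 's/^go[0-9]*\.\([0-9]*\).*/\1/p')
  [ -n "$major" ] && [ -n "$minor" ] || return 1
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 21 ]; }
}

# "go version" prints e.g. "go version go1.22.3 linux/amd64"; field 3 is the version.
if go_version_ok "$(go version 2>/dev/null | awk '{print $3}')"; then
  echo "Go toolchain is new enough"
else
  echo "Go 1.21 or later is required"
fi
```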
Verify the Installation
Regardless of which option you chose, verify that kcp is installed correctly:
kcp --version
You should see the version number printed to the terminal. If you get a “command not found” error, make sure /usr/local/bin/ is in your PATH.
Step 2: Install the kubectl kcp Plugin
kcp ships with a kubectl plugin that adds workspace management commands. This plugin is essential — it is how you create, navigate, and manage workspaces from the command line.
If you downloaded the pre-built binary in Step 1, the plugin is already in the extracted archive:
sudo mv bin/kubectl-kcp /usr/local/bin/
If you built from source, the plugin binary is in the same bin/ directory as the kcp binary.
Verify the plugin is installed:
kubectl kcp --help
You should see a list of subcommands including workspace. This plugin extends kubectl with kcp-specific operations — workspace creation, navigation, and management — while keeping the standard kubectl experience you already know.
Step 3: Start the kcp Server
Open a terminal and start the kcp server:
kcp start
kcp boots up quickly. Behind the scenes, it starts an embedded etcd instance for storage and sets up the Kubernetes API machinery. You will see log output as it initializes. Once you see lines indicating the server is ready and listening, you are good to go.
kcp generates an admin kubeconfig file at .kcp/admin.kubeconfig in the directory where you ran the command. You will use this file to connect kubectl to your kcp server.
Leave this terminal running. Open a new terminal for the remaining steps.
Tip: For a clean start, you can pass --root-directory to specify where kcp stores its data. This is useful if you want to keep your experiments organized. To reset everything, just delete that directory and start again.
Step 4: Connect to kcp with kubectl
In your new terminal, point kubectl at the kcp server using the generated kubeconfig:
export KUBECONFIG=$(pwd)/.kcp/admin.kubeconfig
Make sure you run this from the same directory where you started kcp, since the kubeconfig path is built from your current working directory.
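A slightly more defensive version of the same export warns you early if the file is not where expected (the warning text is just illustrative):

```shell
# Build an absolute path from the current directory and warn if it is missing.
export KUBECONFIG="$(pwd)/.kcp/admin.kubeconfig"

if [ ! -f "$KUBECONFIG" ]; then
  echo "warning: $KUBECONFIG not found; run this from the directory where you started kcp" >&2
fi
```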
Now, list the available API resources:
kubectl api-resources
Look at the output carefully. You will see familiar Kubernetes API resources — ConfigMaps, Secrets, ServiceAccounts, Namespaces, CustomResourceDefinitions, and more. But you will not see Pods, Deployments, Services, or Nodes. Those belong to the compute layer, and kcp does not include them.
This is the key point from the previous article, now visible in practice: you are talking to a Kubernetes API server that manages state and APIs, not compute.
Run one more command to confirm things are working:
kubectl get namespaces
You will see the default namespace, just like regular Kubernetes. The API behaves exactly as you expect — it just does not have a scheduler or kubelet behind it.
Step 5: Explore the Root Workspace
kcp organizes everything into a hierarchy of workspaces. When you first start the server, you are in the root workspace — the top of the hierarchy.
Check where you are:
kubectl kcp workspace .
This prints your current workspace location. You should see you are at the root.
List the available workspace types:
kubectl get workspacetypes
Workspace types define what APIs and capabilities are available in a workspace when it is created. The universal type includes the standard set of Kubernetes APIs (minus compute resources) and is the one you will use most often for general-purpose workspaces.
Think of the root workspace as the top-level organizational unit. In a real deployment, you would create child workspaces here for teams, projects, or environments. That is exactly what you will do next.
Step 6: Create Your First Workspace
Create a workspace called team-alpha:
kubectl kcp workspace create team-alpha --type universal --enter
The --type universal flag tells kcp to create a workspace with the standard set of APIs. The --enter flag automatically switches your kubectl context to the new workspace, so you do not have to navigate to it manually.
Verify where you are:
kubectl kcp workspace .
You should see that you are now inside team-alpha. From this point on, every kubectl command you run operates within this workspace. It is as if you switched to an entirely different Kubernetes cluster — except you did not. You are still talking to the same kcp server.
Step 7: Create Resources in the Workspace
Create a ConfigMap inside team-alpha:
kubectl create configmap app-config --from-literal=env=staging --from-literal=region=eu-west
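The imperative command above is shorthand for a manifest like this one, which you could kubectl apply instead:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: default
data:
  env: staging
  region: eu-west
```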
Verify it exists:
kubectl get configmaps
You should see app-config listed alongside the default kube-root-ca.crt ConfigMap. This resource exists only in team-alpha. No other workspace can see it, modify it, or even know it exists. This is true isolation — not just RBAC rules preventing access, but complete separation at the API level.
Step 8: Create a Second Workspace and Verify Isolation
Navigate back to the root workspace:
kubectl kcp workspace ..
Create a second workspace:
kubectl kcp workspace create team-beta --type universal --enter
Now list ConfigMaps in team-beta:
kubectl get configmaps
No app-config here. The only ConfigMap is the default kube-root-ca.crt. The workspace team-beta is completely isolated from team-alpha. It has its own resources, its own namespace structure, and its own view of the API.
Create a ConfigMap with the same name in team-beta:
kubectl create configmap app-config --from-literal=env=production --from-literal=region=us-east
This works without any conflict. Both workspaces now have a ConfigMap called app-config in the default namespace, but with different content. There is no collision, no naming convention needed, no prefix hack.
Warning: Workspaces are not namespaces. Two workspaces can have resources with the same name and namespace without conflicting. This is true isolation at the API server level. In regular Kubernetes, you cannot have two CRDs with the same name, even in different namespaces, because CRDs are cluster-scoped. In kcp, each workspace has its own CRD space. This distinction matters enormously for multi-tenancy.
Step 9: Navigate Between Workspaces
kcp makes switching between workspaces straightforward. Navigate up and down the hierarchy like a filesystem:
kubectl kcp workspace .. # Go up to the parent (root) workspace
kubectl kcp workspace use team-alpha # Switch to team-alpha
kubectl get configmaps # See team-alpha's ConfigMap (env=staging)
kubectl kcp workspace use team-beta # Switch to team-beta
kubectl get configmaps # See team-beta's ConfigMap (env=production)
Each time you switch, your kubectl context changes. The resources you see are those belonging to the current workspace and nothing else. The experience is identical to switching between different Kubernetes clusters, except it happens instantly — no new API server to connect to, no kubeconfig juggling.
Step 10: Apply a CRD in One Workspace
This step demonstrates one of the most powerful differences between workspaces and namespaces: CRD isolation.
First, create a file called database-crd.yaml with the following content:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  names:
    plural: databases
    singular: database
    kind: Database
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                size:
                  type: string
Switch to team-alpha and apply the CRD:
kubectl kcp workspace use team-alpha
kubectl apply -f database-crd.yaml
Verify the CRD exists in team-alpha:
kubectl get crds
You should see databases.example.com listed. Now switch to team-beta and check:
kubectl kcp workspace use team-beta
kubectl get crds
The CRD does not exist in team-beta. In regular Kubernetes, CRDs are cluster-scoped — every namespace in the cluster sees them, and any CRD name collision affects the entire cluster. In kcp, each workspace has its own CRD space. Team Alpha can define a Database CRD with one schema, and Team Beta can define its own Database CRD with a completely different schema, and they will never interfere with each other.
This is the isolation property that makes kcp compelling for platform engineering. You can give each team full control over their API surface — including the ability to define custom resources — without worrying about conflicts with other teams.
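To see the CRD in action, you could switch back to team-alpha and create a Database object. The name and field values below are arbitrary examples that fit the schema defined earlier:

```yaml
apiVersion: example.com/v1
kind: Database
metadata:
  name: orders-db
  namespace: default
spec:
  engine: postgres
  size: small
```

Applying the same manifest in team-beta would be rejected, since that workspace has no matching CRD.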
flowchart TB
    Root["root"]
    subgraph A["team-alpha"]
        CM1["ConfigMap/app-config<br/>env=staging"]
        CRD1["CRD: databases.example.com"]
    end
    subgraph B["team-beta"]
        CM2["ConfigMap/app-config<br/>env=production"]
        NoCRD["(no databases CRD)"]
    end
    Root --> A
    Root --> B
    A -. same name, different data .- B
Common Issues
Port 6443 already in use. Another kcp instance or a Kubernetes process (like minikube or kind) is already running on that port. Either stop the other process or start kcp with a different port:
kcp start --secure-port=6444
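If you are unsure whether anything is listening on the port, a quick bash-only check using the /dev/tcp pseudo-device can tell you. This is a convenience sketch, not part of kcp:

```shell
# Succeeds only if something accepts connections on the given local port.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 6443; then
  echo "port 6443 is in use; start kcp with --secure-port=6444 or stop the other process"
else
  echo "port 6443 is free"
fi
```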
kubectl kcp: command not found. The kubectl-kcp plugin binary is not in your PATH. Verify it exists in /usr/local/bin/ or wherever your Go binaries live. You can also check with ls /usr/local/bin/kubectl-kcp. If you built from source, it may be in the bin/ directory of the kcp repository.
Go version mismatch when building from source. kcp requires Go 1.21 or later. Check your installed version with go version and upgrade if needed. On macOS, brew upgrade go handles this. On Linux, download the latest version from the Go downloads page.
kubeconfig not found. Make sure you run the export KUBECONFIG command from the same directory where you started kcp. The .kcp/admin.kubeconfig file is created relative to the working directory of the kcp start command.
Next Steps
You now have a working kcp installation and an understanding of how workspaces provide isolation. Here is where to go next:
- Sharing APIs Across Workspaces with APIExport and APIBinding (coming soon) — the next article in this series. You will learn how to expose APIs from one workspace and consume them in another, which is the foundation of kcp’s service marketplace model.
- kcp Workspaces vs Namespaces vs vcluster — a deeper comparison of multi-tenancy approaches if you want to understand where kcp fits relative to other tools.
Summary
You installed kcp, started a local server, created two isolated workspaces, and demonstrated that resources — including CRDs — in one workspace are invisible to the other. You navigated between workspaces, verified that identically-named resources can coexist without conflict, and saw that CRD isolation is one of the sharpest differences between kcp workspaces and Kubernetes namespaces.
This is the foundation of kcp’s approach to multi-tenancy: full API-level isolation without the overhead of running separate clusters. Each workspace behaves like its own Kubernetes API server, but they all run on a single kcp instance. For platform teams, this means you can offer every tenant their own isolated API space — with their own CRDs, their own RBAC, their own resources — at a fraction of the cost of dedicated clusters.
In the next tutorial, you will learn how to break down those isolation boundaries selectively using APIExport and APIBinding, so workspaces can share APIs without losing their independence.
