kubermatic-virtualization

Installing Kubermatic Virtualization with the Declarative Installer

Abubakar Siddiq Ango, Senior Developer Advocate
Apr 27, 2026 · 4 min read · Beginner
getting-started installation gitops

Prerequisites

  • A bare-metal Kubernetes cluster, or hosts that the installer can provision into one
  • kubectl with cluster-admin access on the target cluster
  • quay.io credentials in KUBEV_USERNAME / KUBEV_PASSWORD (provided to your account)
  • The kubermatic-virtualization CLI on your workstation

Kubermatic Virtualization 1.1 introduced a declarative installer: a single kubermatic-virtualization apply -f cluster.yaml command that handles installation, upgrades, and self-healing. You can drive it interactively with a wizard the first time, then check the resulting cluster.yaml into version control and reconcile from CI on every subsequent change. This tutorial walks both paths.

What gets installed

Kubermatic Virtualization is a stack of well-known open-source components plus Kubermatic’s own controllers and UI. The installer reconciles all of it from a single config file:

  • KubeOne provisions and upgrades the bare-metal Kubernetes cluster (the “Infrastructure Cluster”)
  • Upstream KubeVirt runs the VM workloads on that cluster
  • Kube-OVN provides the SDN — VPCs, subnets, NAT gateways, Elastic IPs as CRDs
  • Containerized Data Importer (CDI) handles cloud-image imports into PVCs
  • Longhorn ships as the default storage class; MetalLB as the default load balancer
  • Kyverno enforces a baseline set of VM security policies
  • The Kubermatic Virtualization control plane (API server, controllers, web UI)

You don’t have to install any of those individually. The installer reconciles the whole stack from your cluster.yaml.

Generate a starter cluster.yaml

The installer ships a config print subcommand that emits a template you can edit:

kubermatic-virtualization config print \
  --enable-metallb \
  --enable-longhorn \
  --control-plane-hosts 3 \
  --worker-hosts 3 \
  > cluster.yaml

Open cluster.yaml and fill in:

  • infrastructure.controlPlaneHosts: — IPs and SSH details for the three control-plane nodes
  • infrastructure.workerHosts: — IPs and SSH details for worker nodes
  • networking.cni.kubeOVN: — VPC subnet ranges (the defaults are usually fine for a lab)
  • auth: — leave as-is for now; you’ll come back to it for OIDC
  • dashboard: — see the dashboard tutorial for the block

The CLI accepts SSH key files or agent-forwarded sessions for the host steps. Once the file is filled in, you have a single artifact that describes your entire cluster.
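Filled in, the file might look something like the sketch below. Only the block names called out above (infrastructure.controlPlaneHosts, workerHosts, networking.cni.kubeOVN, auth) come from the template; the per-host field names and all values are hypothetical placeholders, so defer to whatever config print emitted for your version:

```yaml
# Illustrative sketch of a filled-in cluster.yaml. Host IPs and the
# per-host keys (address, sshUsername, sshPrivateKeyFile) are placeholders;
# the authoritative schema is the output of `config print`.
infrastructure:
  controlPlaneHosts:
    - address: 10.0.10.11
      sshUsername: root
      sshPrivateKeyFile: ~/.ssh/id_ed25519
    # ...two more control-plane hosts shaped the same way
  workerHosts:
    - address: 10.0.10.21
      sshUsername: root
      sshPrivateKeyFile: ~/.ssh/id_ed25519
    # ...remaining worker hosts
networking:
  cni:
    kubeOVN: {}   # subnet defaults are usually fine for a lab
auth: {}          # leave as-is for now; revisit for OIDC
```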

Pre-flight

The installer pre-flights several things and refuses to apply if any are missing:

  • KUBEV_USERNAME and KUBEV_PASSWORD set (or an inline imagePullSecret: in cluster.yaml) — used to pull the gated images from quay.io
  • All listed hosts reachable over SSH from the workstation running the installer
  • Hosts meet the minimum hardware floor (CPU / RAM / disk) and have hardware virtualization enabled in firmware
  • The Kubernetes version requested is supported by KubeOne for the providers you’ve declared

Quick checks before applying:

echo "${KUBEV_USERNAME:-MISSING}"
test -n "$KUBEV_PASSWORD" && echo "set" || echo "MISSING"
kubermatic-virtualization apply -f cluster.yaml --dry-run

--dry-run walks the pre-flight without making changes. Fix anything that fails before you proceed.
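If you want the credential check reusable in CI, the two echo/test lines can be wrapped in a small POSIX-sh helper. A sketch: the variable names are the installer's documented ones, while the function itself is illustrative, not part of the CLI:

```shell
#!/bin/sh
# Fail fast if the quay.io credentials the installer expects are unset.
# Prints each missing variable to stderr; returns non-zero if any are missing.
preflight_env() {
  missing=0
  for var in KUBEV_USERNAME KUBEV_PASSWORD; do
    # Indirect expansion via eval keeps this POSIX-sh compatible (no bash-isms).
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "MISSING: $var" >&2
      missing=1
    fi
  done
  return $missing
}

preflight_env && echo "credentials set"
```

Run it before the dry run; a non-zero exit stops a CI job before the installer ever touches the hosts.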

Apply

kubermatic-virtualization apply -f cluster.yaml

The first apply on a fresh cluster takes 20–40 minutes. The installer:

  1. Provisions the bare-metal Kubernetes cluster via KubeOne
  2. Installs Kube-OVN as the CNI
  3. Installs Longhorn and MetalLB
  4. Installs KubeVirt + CDI
  5. Installs the Kubermatic Virtualization control plane (and the dashboard if you enabled it)
  6. Applies the default Kyverno policies

You can watch progress in the installer output and in kubectl once the API is reachable:

export KUBECONFIG=$(pwd)/kubeconfig    # the installer drops this in the working dir
kubectl get nodes
kubectl get pods -A | grep -E "kubevirt|kube-ovn|longhorn|kubermatic-virtualization"

GitOps from here

Once the cluster is up and cluster.yaml reflects its full configuration, the file is the single source of truth. Subsequent changes — upgrading Kubernetes, swapping the CNI’s IP ranges, enabling the dashboard, adding worker nodes — all happen by editing cluster.yaml and re-running apply:

kubermatic-virtualization apply -f cluster.yaml

The installer is declarative and self-healing: re-applying detects what has changed and reconciles only that. That’s the pattern that makes the installer CI-friendly: check cluster.yaml into a repo, run apply on every merged PR, and let the installer drive the cluster toward the declared state.
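As a concrete shape for that CI step, here is a hypothetical GitHub Actions workflow. The CI system, secret names, and the assumption that the runner has the CLI installed and SSH reachability to the hosts are all illustrative choices, not something the installer prescribes:

```yaml
# Hypothetical GitHub Actions workflow: reconcile on every merge to main
# that touches cluster.yaml. Secret names are placeholders.
name: reconcile-cluster
on:
  push:
    branches: [main]
    paths: [cluster.yaml]
jobs:
  apply:
    # A self-hosted runner with SSH access to the hosts would be needed;
    # the pre-flight fails otherwise.
    runs-on: self-hosted
    env:
      KUBEV_USERNAME: ${{ secrets.KUBEV_USERNAME }}
      KUBEV_PASSWORD: ${{ secrets.KUBEV_PASSWORD }}
    steps:
      - uses: actions/checkout@v4
      - name: Dry-run, then apply
        run: |
          kubermatic-virtualization apply -f cluster.yaml --dry-run
          kubermatic-virtualization apply -f cluster.yaml
```

Running the dry run first means a bad edit fails the pipeline at pre-flight instead of mid-reconcile.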

A pragmatic split: the wizard for the first install on a fresh cluster, then GitOps from there. The wizard is for figuring out the right shape of cluster.yaml. Once you have it, you don’t need the wizard again.

What’s next

A live cluster is the floor. The next tutorial in the series covers enabling the Kubermatic Virtualization Dashboard — the per-cluster web UI introduced in 1.1 — including the three authentication modes (None / Basic / OIDC). After that, the KubeVirt Getting Started series takes over for the VM workload patterns: creating VMs, lifecycle, networking, and persistent storage.