A VirtualMachine is a Kubernetes object. You write it as YAML, apply it with kubectl apply -f, and it shows up under kubectl get vm like any other resource — namespaced, RBAC-able, watchable, governed by labels. This tutorial turns that idea into a running Linux VM in about fifteen minutes.
Why a containerDisk
The fastest way to a working VM is to skip image-import altogether. The kubevirt/containerdisks project publishes Ubuntu, Fedora, Debian, openSUSE, and CentOS Stream cloud images packaged inside container images at quay.io/containerdisks/*. Your cluster pulls one the same way it pulls any other container — no CDI, no PVC, no waiting for a 600 MB qcow2 to download. Container disks are read-only and ephemeral, which is fine for learning the shape of the API. Persistent storage comes in a later tutorial.
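Swapping distributions is a one-line change to the volume that references the container disk. A sketch, assuming the Fedora tag below exists in the registry (check quay.io/containerdisks for current tags):

```yaml
# Hypothetical variant: boot Fedora instead of Ubuntu.
# Only the image reference changes; disks, networks, and cloud-init stay the same.
volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/containerdisks/fedora:latest  # tag is an assumption; verify on quay.io
```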
The manifest
Save this as vm.yaml:
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  runStrategy: Halted
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 1Gi
            cpu: "1"
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: ubuntu
              chpasswd: { expire: False }
              ssh_pwauth: True
```
runStrategy: Halted is the safe starting point: it tells KubeVirt to create the VM object but not boot it. You apply the manifest, inspect what you got, then start the VM explicitly. That separation is useful; it stops you from accidentally launching workloads while you debug a YAML error.
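Halted is one of several run strategies the VirtualMachine API accepts. A sketch of the alternatives as of recent KubeVirt releases; pick exactly one per VM:

```yaml
spec:
  runStrategy: Halted            # create the object; never boot it automatically
  # runStrategy: Always          # keep a VMI running at all times; restart on exit
  # runStrategy: RerunOnFailure  # restart the VMI after failures, not clean shutdowns
  # runStrategy: Manual          # start and stop only through explicit user commands
```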
The cloudInitNoCloud block ships first-boot configuration to the guest. Here it sets the ubuntu user’s password to ubuntu and allows password-based serial console login. The next tutorial swaps that for SSH-key auth.
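As a preview of that swap, here is a minimal sketch of a key-based cloudinitdisk volume; the public key is a placeholder you would replace with your own:

```yaml
- name: cloudinitdisk
  cloudInitNoCloud:
    userData: |
      #cloud-config
      ssh_pwauth: False                      # disable password login
      users:
        - name: ubuntu
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... user@host  # placeholder key, not a real one
```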
Apply, start, attach
Create a namespace for the work and apply the manifest:
```shell
export NS=kubev-lab-$(whoami)
kubectl create namespace $NS
kubectl config set-context --current --namespace=$NS
kubectl apply -f vm.yaml
kubectl get vm testvm
```
You should see the VM as Stopped, Ready: False. The object exists, no VMI yet — exactly what runStrategy: Halted promises.
Start it:
```shell
virtctl start testvm
kubectl get vmi testvm --watch
```
Watch the VirtualMachineInstance walk through Scheduling → Scheduled → Running. Press Ctrl+C once it’s running. Behind the scenes, KubeVirt has created a virt-launcher pod on a worker node; that pod wraps the QEMU/KVM process that is your VM:
```shell
kubectl get pods -l kubevirt.io/domain=testvm
```
Now open the serial console:
```shell
virtctl console testvm
```
Wait 30–60 seconds for cloud-init to finish on first boot. You’ll land on an Ubuntu login prompt. Sign in as ubuntu / ubuntu and poke around:
```shell
uname -a
cat /etc/os-release
ip addr
```
Detach with Ctrl+] (Control plus close-square-bracket). Don’t type exit — that logs you out of the guest but leaves virtctl attached, which surprises whoever connects next.
Three layers, one VM
The mental model that makes everything else click: a running KubeVirt VM is three Kubernetes objects stacked on top of each other.
```shell
kubectl get vm,vmi,pods -l kubevirt.io/domain=testvm
```
You’ll see one VirtualMachine (the long-lived spec), one VirtualMachineInstance (the running instance, deleted on stop), and one virt-launcher-* pod (the actual hypervisor process). When you stop the VM with virtctl stop, the VMI and pod disappear; the VM object stays. Start it again and a fresh VMI and pod come back. That’s the same reconciliation pattern Kubernetes uses for Deployments and Pods — it just operates on a hypervisor.
What’s next
Leave testvm running. The next tutorial picks up here, walks through the four lifecycle operations (start, stop, pause, unpause), and replaces the password with an SSH key so you can connect with virtctl ssh instead of the serial console.
