
VM Lifecycle and SSH Access in KubeVirt

Abubakar Siddiq Ango, Senior Developer Advocate
Apr 27, 2026 · 3 min read · Beginner
Tags: virtualization, cloud-init, virtctl

Prerequisites

  • Your first VM, testvm, created and running in your namespace
  • An SSH key pair on your workstation (ssh-keygen -t ed25519 if you don’t have one)

A KubeVirt VM has four lifecycle verbs: start, stop, pause, and unpause. Stop deletes the VirtualMachineInstance and its virt-launcher pod, releasing memory and CPU; starting again means a full guest boot. Pause freezes the guest at the hypervisor level but keeps the pod and memory in place, so unpause resumes effectively instantly. Stop frees resources; pause trades resources for speed.

You drive the lifecycle two ways: virtctl for ergonomics, or kubectl patch against the VM’s runStrategy field for GitOps. Both end at the same controller reconciliation. Use virtctl interactively. Use kubectl patch (or, ideally, a checked-in YAML) when you want every state transition recorded in version control.
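For the checked-in-YAML route, the run state lives directly on the VM object. A minimal sketch of the relevant fields (the rest of the spec from the previous tutorial is unchanged and omitted here):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  # Always keeps a VMI running; Halted stops it.
  # Other run strategies exist (Manual, RerunOnFailure, Once).
  runStrategy: Always
```

Committing this file and changing runStrategy through a pull request is what gives you the version-controlled audit trail of state transitions.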

The four operations

Stop the VM and wait for the VMI to disappear:

virtctl stop testvm
kubectl wait --for=delete vmi/testvm --timeout=120s

Then start it again:

virtctl start testvm
kubectl wait --for=jsonpath='{.status.phase}'=Running vmi/testvm --timeout=180s

Pause:

virtctl pause vm testvm
kubectl get vmi testvm -o jsonpath='{.status.conditions[?(@.type=="Paused")].status}'

The Paused condition flips to True. Unpause:

virtctl unpause vm testvm

The same flow via kubectl patch

virtctl start is a wrapper. The same lifecycle change happens when you flip spec.runStrategy from Always to Halted or back:

printf "spec:\n  runStrategy: Halted\n" > /tmp/halted.yaml
kubectl patch vm testvm --type merge --patch-file /tmp/halted.yaml
printf "spec:\n  runStrategy: Always\n" > /tmp/running.yaml
kubectl patch vm testvm --type merge --patch-file /tmp/running.yaml

Both reach the same controller and reconcile the VMI accordingly. For CI/CD pipelines, the kubectl patch form removes the virtctl dependency and slots into any tooling that already speaks Kubernetes.
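If temp files feel heavy, kubectl also accepts the merge patch inline with -p. A sketch against the same testvm; the final python3 line is just a local sanity check for quoting mistakes, not a required step:

```shell
# Build the two patches once as JSON strings.
HALT='{"spec":{"runStrategy":"Halted"}}'
RUN='{"spec":{"runStrategy":"Always"}}'

# Stop:
kubectl patch vm testvm --type merge -p "$HALT"
# Start:
kubectl patch vm testvm --type merge -p "$RUN"

# Tip: pipe a patch through a JSON parser to catch quoting errors early.
echo "$HALT" | python3 -m json.tool
```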

SSH instead of the serial console

The first tutorial used virtctl console and a guest password, which is fine for poking around but not how anyone actually wants to connect. KubeVirt’s answer is cloudInitNoCloud — a tiny virtual disk delivered to the guest at first boot that the cloud-init package inside Ubuntu reads as a NoCloud datasource. Anything cloud-init understands works: users, packages, files, runcmd, and the one we want here, ssh_authorized_keys.
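To see the shape of that user data outside a cluster, you can write and inspect a #cloud-config locally. A hypothetical, slightly richer example; the packages and runcmd entries are illustrative, not required by this tutorial (the key is a placeholder):

```shell
cat > /tmp/user-data <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-ed25519 AAAA...placeholder user@workstation
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
EOF

# The leading "#cloud-config" line is what tells cloud-init how to parse the file.
head -n 1 /tmp/user-data
```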

Cloud-init only runs on first boot, so to switch from a password to an SSH key you have to delete and re-apply the VM. Confirm your public key is exported:

export SSH_PUBKEY="$(cat ~/.ssh/id_ed25519.pub)"
echo "$SSH_PUBKEY"

Then update vm.yaml — replace the cloudInitNoCloud userData block from the previous tutorial with:

volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/containerdisks/ubuntu:22.04
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        ssh_authorized_keys:
          - PASTE_YOUR_PUBLIC_KEY_HERE

Or render it from a template that picks up $SSH_PUBKEY:

envsubst < vm.yaml.template > vm.yaml
grep -A2 ssh_authorized_keys vm.yaml

Delete the old VM and re-apply:

kubectl delete vm testvm
kubectl apply -f vm.yaml
virtctl start testvm
kubectl wait --for=jsonpath='{.status.phase}'=Running vmi/testvm --timeout=180s

Give cloud-init a minute after the VMI hits Running to finish provisioning the key into /home/ubuntu/.ssh/authorized_keys, then connect:

virtctl ssh -i ~/.ssh/id_ed25519 ubuntu@vmi/testvm

You’ll land in a shell (ubuntu@testvm:~$) without a password prompt. Under the hood, virtctl ssh proxies a local SSH process through the Kubernetes API to the VM, so you don’t need any port-forward setup or an externally routable VM IP. The vmi/ prefix is required on KubeVirt v1.5.x and newer; the old bare <user>@<vm> form is deprecated.
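Plain OpenSSH (and tools built on it, like scp and rsync) can ride the same API-proxied path by using virtctl port-forward as a ProxyCommand. A sketch of a ~/.ssh/config entry, assuming the namespace comes from your current kubeconfig context and the host alias is your own choice:

```
Host testvm.kubevirt
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
    ProxyCommand virtctl port-forward --stdio=true vmi/testvm 22
```

With that in place, ssh testvm.kubevirt behaves like any other host entry.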

Confirm the key landed where you expect:

whoami
cat ~/.ssh/authorized_keys
exit

What’s next

testvm is now key-authenticated and reachable through virtctl ssh. The next tutorials cover VM networking (how the guest IP relates to the pod IP, how to expose a service running inside the VM) and persistent storage (so your data survives when the VM stops).