Day-two networking questions for a KubeVirt VM all reduce to the same insight: a running VM is wrapped inside a pod. The networking primitives Kubernetes already gives you for pods — labels, Services, NetworkPolicies — apply directly. There’s almost nothing VM-specific to learn. Almost.
Masquerade is NAT
The default interface binding (and the one you’ve been using) is masquerade. The VM gets its own internal IP — typically 10.0.2.2 — and KubeVirt hides it behind NAT inside the virt-launcher pod. Outgoing traffic from the guest is source-NAT’d to the pod’s IP. Incoming cluster traffic should target the pod, not the guest.
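For reference, this is roughly what the binding looks like in a VM manifest; a minimal excerpt, with the interface and network names ("default") chosen for illustration:
# Excerpt of a VM spec using the masquerade binding (kubevirt.io/v1)
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}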
Confirm by SSH’ing into the VM and comparing the two addresses:
virtctl ssh -i ~/.ssh/id_ed25519 ubuntu@vmi/testvm
# Inside the guest
ip -4 addr show enp1s0
exit
Then look at the pod from the host side:
kubectl get pod -l kubevirt.io/domain=testvm -o jsonpath='{.items[0].status.podIP}'
The guest’s internal address and the pod IP are different — both real, both routable in their own contexts. The pod IP is what the rest of the cluster sees, so that’s what Services target.
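A quick cross-check without SSH: the address KubeVirt reports on the VMI should match the pod IP when the interface uses masquerade (assuming a single interface in the status):
kubectl get vmi testvm -o jsonpath='{.status.interfaces[0].ipAddress}'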
DNS and the outside world
Because masquerade routes the guest through the pod network, the guest inherits the pod’s DNS configuration and gets cluster DNS plus internet reachability for free:
virtctl ssh -i ~/.ssh/id_ed25519 ubuntu@vmi/testvm
# Inside the guest
sudo apt-get update # works; resolves archive.ubuntu.com via cluster DNS
curl -sI https://kubernetes.io | head -1
nslookup kubernetes.default.svc.cluster.local
exit
The first two work because the pod has internet egress; the third resolves the in-cluster Kubernetes API service the same way any pod would. Nothing VM-specific.
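If you want to see the plumbing, the guest’s resolver config is handed out by the DHCP server virt-launcher runs for the masquerade binding; it should point at the cluster DNS service with the usual svc.cluster.local search domains:
virtctl ssh -i ~/.ssh/id_ed25519 ubuntu@vmi/testvm
# Inside the guest (Ubuntu uses systemd-resolved, so ask it directly)
resolvectl status enp1s0   # DNS server should be the cluster DNS; search domains include svc.cluster.local
exit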
Exposing a service running on the VM
Run a tiny web server inside the guest so we have something to expose:
virtctl ssh -i ~/.ssh/id_ed25519 ubuntu@vmi/testvm
# Inside the guest
sudo apt-get install -y nginx
echo "Hello from testvm" | sudo tee /var/www/html/index.html
exit
There are two ways to put a Service in front of it.
Way 1 — virtctl expose. A shortcut that picks a sane label selector (kubevirt.io/domain) and creates a ClusterIP Service:
virtctl expose vmi testvm --name=testvm-web --port=80 --target-port=80
kubectl get svc testvm-web
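If you’re curious exactly what it generated, dump the object; Way 2 below is the hand-written equivalent:
kubectl get svc testvm-web -o yaml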
Way 2 — write the Service yourself. What virtctl expose does, but transparent:
apiVersion: v1
kind: Service
metadata:
  name: testvm-web
spec:
  selector:
    kubevirt.io/domain: testvm
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
The selector matches the labels on the virt-launcher pod, because KubeVirt copies labels from spec.template.metadata.labels on the VM down to the VMI and on to the pod. From the Service’s perspective, the VM is a pod with a couple of unusual containers — the routing logic doesn’t care.
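You can trace that chain yourself: the virt-launcher pod carries the label, and the Service’s endpoints resolve to that pod’s IP.
kubectl get pod -l kubevirt.io/domain=testvm --show-labels
kubectl get endpoints testvm-web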
Test from a debug pod:
kubectl run -it --rm curl-debug --image=curlimages/curl --restart=Never -- \
curl -s testvm-web.$NS.svc.cluster.local
You’ll get back Hello from testvm.
When to use which Service type
The same three rules as any containerized workload:
- ClusterIP — in-cluster only. Good for VM-to-VM and pod-to-VM traffic; what virtctl expose defaults to.
- NodePort — opens a port on every node. Use sparingly, mostly for development. virtctl expose vmi testvm --name=testvm-np --type=NodePort --port=80 gets you one quickly; the manifest sketch after this list is the hand-written equivalent.
- LoadBalancer — only useful if you have a controller that provisions external load balancers (cloud LB, MetalLB, KubeLB). On a managed cluster this is the production answer. On a sandbox it’ll sit in Pending until something fulfills the request.
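As a sketch, the NodePort variant is the Way 2 manifest with the type switched; the name testvm-np is just an example, and without an explicit nodePort Kubernetes picks one from the default 30000–32767 range:
apiVersion: v1
kind: Service
metadata:
  name: testvm-np
spec:
  selector:
    kubevirt.io/domain: testvm
  ports:
    - port: 80
      targetPort: 80
  type: NodePort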
NetworkPolicies still apply
Because the guest’s traffic flows through the virt-launcher pod, standard Kubernetes NetworkPolicies select VMs the same way they select pods. A podSelector matching kubevirt.io/domain: testvm covers the VM transparently. That means microsegmentation, namespace isolation, and egress controls all work — your existing policy tooling needs no extension to cover virtualized workloads. This is the underrated benefit of running VMs as pods.
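As a sketch, here is a policy that only allows HTTP to the VM from pods in its own namespace (the policy name and the scoping choice are illustrative):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: testvm-allow-http
spec:
  podSelector:
    matchLabels:
      kubevirt.io/domain: testvm   # selects the virt-launcher pod, i.e. the VM
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}          # any pod in the same namespace
      ports:
        - protocol: TCP
          port: 80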
What’s next
Networking handled. The next gap is durable state — every VM you’ve booted so far has been on an ephemeral container disk. The storage tutorial attaches a real PersistentVolumeClaim, walks through CDI’s DataVolume importer, and covers the access-mode trap that bites teams when they get to live migration.
