How to run an ARM VM on an x86 host
A few weeks ago, I wasted a lot of time (and hair) trying to run Raspbian in a VM, so that I could run it as part of my CI system and build software for it. I eventually managed to get something running that counts as good enough, for both 32 and 64 bit.
Let me begin by disappointing you: I eventually gave up on running Raspbian as such. I found that ARM is a helluva lot different from your average x86, and there are things such as the Device Tree Blob of which I have no idea what they’re actually doing. So I eventually decided that a standard Debian distro would be good enough, and in order to not have to go through the installation process (because that too turns out to be way more difficult than I had anticipated), I started looking for Cloud images, because they come preinstalled. Now, not only do you need the OS image (because that would be way too simple), you also need to pass the kernel and init-ramdisk to the VM when starting it, because there’s no bootloader in ARM. Thankfully, the Ubuntu maintainers are nice enough to not only provide Cloud images (bionic-server-cloudimg-armhf.img or bionic-server-cloudimg-arm64.img), but in a sidecar directory called unpacked, they also provide the respective kernel and initrd files. (Those are also included in the images themselves, so you could in theory grab them by mounting the images and copying them from /boot, but why go through that hassle if you can just download them.)
Prerequisites
So, suppose you want to set up an ARM-based DroneCI worker. You’ll want to download an armada of files such as:
bionic-server-cloudimg-arm64.img
bionic-server-cloudimg-arm64-initrd-generic
bionic-server-cloudimg-arm64-vmlinuz-generic
bionic-server-cloudimg-armhf.img
bionic-server-cloudimg-armhf-initrd-generic-lpae
bionic-server-cloudimg-armhf-vmlinuz-lpae
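For reference, fetching the arm64 set might look like this. The base URL is an assumption based on the usual Ubuntu cloud-images layout; check the bionic release page for the current file names. The armhf files work the same way.

```shell
# Download the arm64 cloud image plus the matching kernel/initrd.
# Base URL assumed from the standard Ubuntu cloud-images layout.
BASE=https://cloud-images.ubuntu.com/bionic/current
wget "$BASE/bionic-server-cloudimg-arm64.img"
wget "$BASE/unpacked/bionic-server-cloudimg-arm64-vmlinuz-generic"
wget "$BASE/unpacked/bionic-server-cloudimg-arm64-initrd-generic"
```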
Then copy those into your /var/lib/libvirt/images folder, and create an additional data volume for each architecture:
drone-worker-arm64-data.img
drone-worker-arm64.img
drone-worker-arm64.initrd
drone-worker-arm64.vmlinuz
drone-worker-armhf-data.img
drone-worker-armhf.img
drone-worker-armhf.initrd
drone-worker-armhf.vmlinuz
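Getting from the download names to the per-VM names above is plain cp and qemu-img work; here’s the arm64 half as a sketch (the 20G data volume size is an arbitrary example, pick whatever your builds need):

```shell
cd /var/lib/libvirt/images
# Copy the cloud image and its kernel/initrd under per-VM names
cp bionic-server-cloudimg-arm64.img             drone-worker-arm64.img
cp bionic-server-cloudimg-arm64-vmlinuz-generic drone-worker-arm64.vmlinuz
cp bionic-server-cloudimg-arm64-initrd-generic  drone-worker-arm64.initrd
# Create an empty data volume (size is an arbitrary example)
qemu-img create -f qcow2 drone-worker-arm64-data.img 20G
```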
I also recommend resizing the boot images to something like 5 GB, since they can feel pretty claustrophobic once you start using the VM.
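The resize itself is a one-liner per image. Note that qemu-img only grows the virtual disk; the partition and filesystem inside get grown later, from within the running VM:

```shell
# Grow the virtual disks; the root partition is resized later in the guest
qemu-img resize drone-worker-arm64.img 5G
qemu-img resize drone-worker-armhf.img 5G
```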
Also be sure to set a root password so you can log in to your VMs once
they’re up:
virt-customize -a drone-worker-arm64.img --root-password password:supersecretrootpw
VM setup
I’ll just assume you use libvirt and virt-manager to set up your VMs, because that’s what I do. First thing you’ll want to do is go to the preferences and enable XML editing. Then go ahead and create a new VM. Say you’ll import an existing disk image, expand the architecture options and select arm (32 bit) or aarch64 (64 bit). Once you do that, another dropdown box pops up that asks you for the machine type. I’m using virt-2.8, but I suspect that plain virt would work as well. Just don’t select any one of the more specific boards, because then you’re probably gonna need a DTB and all that weird stuff. Keep it as generic (and hardware-independent) as possible.
On the next screen, select the boot image. aarch64 will not ask you for kernel and initrd here, but arm will, so you’ll need to specify those here as well. Leave the DTB path empty. For kernel args, specify rw root=/dev/vda1 cloud-init=disabled to disable cloud-init (we’re using Cloud images, remember?). As OS type, choose Generic default.
Give the thing 1024 MB of memory (I’ve had problems when I chose more) and as many cores as you like. On the next screen give it a name, and be sure to tick the box that lets you Customize configuration before install.
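If you’d rather skip the wizard entirely, the same setup can probably be expressed with virt-install. This is an untested sketch; the flag spelling comes from the virt-install manpage, not from this setup, and paths match the files prepared earlier:

```shell
# Untested CLI equivalent of the wizard steps above (virt-install sketch)
virt-install \
  --name drone-worker-arm64 \
  --arch aarch64 --machine virt-2.8 --cpu cortex-a57 \
  --memory 1024 --vcpus 4 \
  --import \
  --disk path=/var/lib/libvirt/images/drone-worker-arm64.img,bus=virtio \
  --boot "kernel=/var/lib/libvirt/images/drone-worker-arm64.vmlinuz,initrd=/var/lib/libvirt/images/drone-worker-arm64.initrd,kernel_args=rw root=/dev/vda1 cloud-init=disabled" \
  --graphics none
```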
Now virt-manager throws you into a preview of the VM editor. There are still a few adjustments we’ll need to make.
You’re going to start on the Overview tab. Open the XML subtab and add this piece of information (told you you’d have to enable XML editing):
<domain type="qemu">
<!-- lots of other stuff -->
<features>
<gic version="2"/>
</features>
<!-- lots of other stuff -->
</domain>
I don’t really understand what it does, but without it, you’ll probably get MSI is not supported by interrupt controller errors.
Back in the Details subtab, be sure to set “Firmware” to None (for aarch64 you can apparently use UEFI, but the method of passing in the kernel image works too). Under CPUs, choose Hypervisor default for arm and cortex-a57 for aarch64 (I think those are probably pre-populated). Under Boot options you can verify the kernel path, initrd path and kernel args.
Be sure to only use virtio devices. I don’t think the kernels have many other drivers built in, but I may very well be mistaken about this. I haven’t tested extensively, I’m honestly just glad it works. :P
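In the domain XML, this boils down to bus="virtio" on the disk and model type="virtio" on the NIC. A minimal sketch (the file path and network name are placeholders):

```xml
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2"/>
  <source file="/var/lib/libvirt/images/drone-worker-arm64.img"/>
  <target dev="vda" bus="virtio"/>
</disk>
<interface type="network">
  <source network="default"/>
  <model type="virtio"/>
</interface>
```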
The strangest thing is how to get console access. It appears that standard graphics do not work here, so you’ll have to use serial consoles. Now, I’m not sure how those actually work in libvirt, but I always add a Serial gizmo as well as a Console gizmo to those VMs, and that seems to make it work, because one of them will show up as /dev/ttyAMA0 in the VM, and that’s what the image seems to be looking for.
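What virt-manager generates for that pair looks roughly like this (the PTY backend is the default):

```xml
<serial type="pty">
  <target port="0"/>
</serial>
<console type="pty">
  <target type="serial" port="0"/>
</console>
```

With that in place, virsh console with the VM’s name should attach you to that ttyAMA0.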
Now finish the VM installation, and with any luck, a VM should boot up.
OS access, resizing the root partition
Once the VM is up, you should be able to log in using the password you baked into the image earlier. Then you’ll probably want to run dpkg-reconfigure openssh-server so that SSH host keys are generated, and configure some basic networking so that you can access the whole thing. Next, you should resize the root partition to match the image size.
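For the “basic networking” part, something like this should do on a bionic image; the interface name here is a guess, so check ip link first:

```shell
dpkg-reconfigure openssh-server  # regenerate the SSH host keys
ip link                          # find the NIC name, e.g. enp1s0
dhclient enp1s0                  # grab a DHCP lease (interface name is an assumption)
```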
For that, run parted /dev/vda. It’ll probably ask you if it should resize the partition table to enclose the whole disk, to which you should say yes. Now here’s an interesting thing: the partition numbering does not match the layout that the partitions have on disk. You’ll see a partition /dev/vda1 and a /dev/vda15, and contrary to what that numbering indicates, /dev/vda1 is not the first partition, but actually the second one. So you can just resizepart 1 5G, then exit parted and run resize2fs /dev/vda1, and it’ll just work and give you a bigger root FS.
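The whole dance, condensed (parted may still prompt about fixing the partition table to use the full disk; answer yes):

```shell
# Grow the root partition; note vda1 is the *second* partition on disk
parted /dev/vda resizepart 1 5G
# Grow the ext4 filesystem to fill the enlarged partition
resize2fs /dev/vda1
```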
Running the VM with a lower priority
I’ve found that ARM VMs end up taking a lot of CPU power, probably because their emulation is just far from trivial. Thus you might want to renice those VMs, so that they don’t drown out other processes running on the same host. I do that using scripts such as these:
/usr/local/bin/niced-arm-kvm:
#!/bin/sh
exec /usr/bin/nice -n 15 -- /usr/bin/qemu-system-arm "$@"
/usr/local/bin/niced-arm64-kvm:
#!/bin/sh
exec /usr/bin/nice -n 15 -- /usr/bin/qemu-system-aarch64 "$@"
Then I reference them in the <emulator> tag of the VM’s XML definition (don’t forget to make them executable).
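That is, in the domain XML the devices section ends up pointing at the wrapper script instead of qemu directly:

```xml
<devices>
  <emulator>/usr/local/bin/niced-arm64-kvm</emulator>
  <!-- disks, interfaces, consoles, ... -->
</devices>
```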