Launching X11 RISC-V applications on QEMU (Debian)
The RISC-V revolution has come to our lives, and while the different manufacturers take their time to release their own boards so that consumers can have the pleasure of running applications on them, we have to test, develop and prepare ourselves for that moment.
It is because of this that I wanted to run X11 applications: to build and compile the packages I want and be ready when my board arrives. It is also a great opportunity to get more in touch with the development effort and to save some time building and porting applications to this great hardware architecture.
It sounds great, doesn't it? However, the lack of X11 support (or documentation, really) makes this very difficult for people who are not used to working with QEMU or emulated hardware environments, so I wanted to add my two cents to the world.
I followed the official Debian how-to to get my environment ready for the QEMU/RISC-V "machine"; however, I ran into some problems that I try to explain here in order to help more people run the X11 environment.
However, instead of running it as mentioned in the last step, I made some changes to the Linux image in order to run a window manager (i3).
Whether or not you follow the official Debian instructions, I will show here what you need to do in order to have X11 running. First, let's create the chroot image of our system.
$ mmdebstrap --architectures=riscv64 --variant=minbase --include="debian-ports-archive-keyring" sid /tmp/riscv64-chroot "deb http://deb.debian.org/debian-ports/ sid main" "deb http://deb.debian.org/debian-ports/ unreleased main" -v
After creating the image, you must configure binfmt support in order to run RISC-V binaries directly or inside a chroot. Following the official Debian documentation, you should execute:
$ cat >/tmp/qemu-riscv64 <<EOF
package qemu-user-static
interpreter /usr/libexec/qemu-binfmt/riscv64-binfmt-P
magic \x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00
mask \xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff
EOF
$ sudo update-binfmts --import /tmp/qemu-riscv64
(The file contents above follow the Debian documentation; the interpreter path may differ depending on your qemu-user-static version, so check it if the import fails.)
After this, you should be able to chroot into the image and have RISC-V architecture support.
From this point, the terminal behaves much like a docker container: you can run RISC-V ELF binaries and QEMU will automatically translate them to your host architecture, so it is ready to run applications.
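As a quick sanity check (assuming qemu-user-static is installed and the binfmt import above succeeded), the kernel should report the entry as enabled, and the chroot should identify itself as riscv64:

```shell
# The binfmt_misc entry registered above should exist and be enabled:
cat /proc/sys/fs/binfmt_misc/qemu-riscv64

# qemu-user emulates the uname syscall, so inside the chroot the
# machine is reported as riscv64 even on an x86_64 host:
sudo chroot /tmp/riscv64-chroot uname -m
```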
As our target is to run X11 applications, let's install the simple xclock app in order to check that everything works.
# apt update
# apt install x11-apps
Now we execute the xclock app:
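A minimal sketch of this step, assuming the host X server runs on display :0 and that its unix socket is shared with the chroot (the bind mount and xhost call are my additions, not from the original how-to):

```shell
# On the host: allow local clients and share the X11 socket with the
# chroot (coarse access control; acceptable for local testing only).
xhost +local:
sudo mkdir -p /tmp/riscv64-chroot/tmp/.X11-unix
sudo mount --bind /tmp/.X11-unix /tmp/riscv64-chroot/tmp/.X11-unix

# Inside the chroot: point the client at the host X server and run it.
export DISPLAY=:0
xclock &
```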
Great! What we have here is X11 traffic wrapped through the emulation layer (riscv64 -> qemu -> x86_64), which is then served like any other x86_64 application. This is fine, but somewhat "unreal" if we want to test full RISC-V X11 support, find bugs, etc.
In order to test this, ideally the X11 server itself should run on "a RISC-V machine". To do that, we will install a few more packages and then package the chroot folder as an img file to be the root filesystem of the QEMU system.
# apt install xss-lock nm-tray x11vnc xvfb i3 ifupdown less iputils-ping network-manager iproute2 init vim
After installing the i3 window manager and its dependencies, we add the following alias to .bashrc (you can use the root account and /root/.bashrc, or create a user account... in this example, we are using a user account):
cookie=$(mktemp)  # X authority file shared by both commands; must be defined before the alias
alias startVNC="xvfb-run -f \"$cookie\" -s \"-screen 0 1280x800x24\" -n 1 i3 & x11vnc -auth \"$cookie\" -display :1"
The previous commands were taken from the following URL. So, thanks f8l! =)
The alias is made up of two commands. The first one starts an X virtual framebuffer (Xvfb) instance associated with a temporary file, as server number 1 (you can have multiple virtual framebuffers), with the usual screen-geometry parameters, and runs the i3 window manager command on that framebuffer.
The second command redirects all the X11 server traffic over a VNC connection. We must remember that X11 is a server listening for display connections, and this command forwards that display to VNC (Virtual Network Computing).
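One caveat of the alias: because it is defined with double quotes, $cookie is expanded when .bashrc is sourced, not when the alias runs. A sketch of the same thing as a shell function, which defers expansion (the function body is mine, not from the original post):

```shell
# Equivalent of the startVNC alias as a function: $cookie is evaluated
# at call time, and each invocation gets a fresh authority file.
startVNC() {
    cookie=$(mktemp)
    # Xvfb server :1, 1280x800 at 24-bit depth, running i3 on it.
    xvfb-run -f "$cookie" -s "-screen 0 1280x800x24" -n 1 i3 &
    # Export that X display over VNC (x11vnc listens on port 5900 by default).
    x11vnc -auth "$cookie" -display :1
}
```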
Once we have this configured, we are ready to build rootfs-riscv.img with the configured system and run it as a completely "independent" system (much like a virtual machine in VirtualBox or VMware).
However, before packaging the image, we must apply some minor tweaks to the system, as mentioned here in the Debian RISC-V forum.
$ sudo chroot /tmp/riscv64-chroot
# Update package information
apt-get update
# Set up basic networking
cat >>/etc/network/interfaces <<EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
EOF
# Set root password
passwd
# Disable the getty on hvc0 as hvc0 and ttyS0 share the same console device in qemu.
ln -sf /dev/null /etc/systemd/system/serial-getty@hvc0.service
# Install kernel and bootloader infrastructure
apt-get install linux-image-riscv64 u-boot-menu
# Install and configure ntp tools
apt-get install openntpd ntpdate
sed -i 's/^DAEMON_OPTS="/DAEMON_OPTS="-s /' /etc/default/openntpd
# Configure syslinux-style boot menu
cat >>/etc/default/u-boot <<EOF
U_BOOT_PARAMETERS="rw noquiet root=/dev/vda1"
EOF
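If the boot menu is not regenerated automatically by the kernel installation, the u-boot-menu package provides a command for it (assuming u-boot-menu behaves as on current Debian):

```shell
# Regenerate /boot/extlinux/extlinux.conf from /etc/default/u-boot
u-boot-update
```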
Now exit the chroot and build the image from the host (virt-make-fs is part of the libguestfs-tools package):
$ sudo virt-make-fs --partition=gpt --type=ext4 --size=40G /tmp/riscv64-chroot/ rootfs-riscv.img
A good idea is to change the ownership of that file to a regular system user (e.g. sudo chown $USER rootfs-riscv.img) so you don't have to run the virtual machine as root.
Now just run the QEMU system:
qemu-system-riscv64 -nographic -machine virt -smp 4 -m 6.5G \
    -bios /usr/lib/riscv64-linux-gnu/opensbi/generic/fw_jump.elf \
    -kernel /usr/lib/u-boot/qemu-riscv64_smode/uboot.elf \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-device,rng=rng0 \
    -append "console=ttyS0 rw root=/dev/vda1" \
    -device virtio-blk-device,drive=hd0 \
    -drive file=rootfs-riscv.img,format=raw,id=hd0 \
    -device virtio-net-device,netdev=eth0 \
    -netdev user,id=eth0,hostfwd=tcp::22222-:22,hostfwd=tcp::5900-:5900
Note: here I ran into a bug related to the network interfaces that I haven't tracked down... The main problem is that I have to wait about 5 minutes for the daemon to fail before I get the final login tty console.
Before continuing, let's review the qemu-system-riscv64 command and all the flags used.
The first four flags are direct instructions to the system: we are telling QEMU that our machine doesn't have a graphics card (-nographic), is the generic "virt" machine, has 4 cores (-smp 4) and 6.5 G of RAM (-m 6.5G).
The -bios and -kernel flags point to locations on the host system where pre-built ELF binaries (the OpenSBI firmware and the U-Boot bootloader) can be found for the boot process.
Linux systems rely on a random number generator (RNG) for things like cryptography, address randomization and temporary file names. The virtual machine treats this as a peripheral, so it is specified manually as a virtio-rng device fed from the host's /dev/urandom.
The -append flag holds the command line passed to the Linux kernel at boot: where the main system drive is located (root=/dev/vda1) and which tty terminal is used to print the status (console=ttyS0).
The -device and -drive instructions attach the previously created img file, where our system is installed, so that it shows up as the first virtio disk (vda).
Then, finally, the ethernet card is configured, named eth0, and ports 22 and 5900 are forwarded from the guest to host ports 22222 and 5900.
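With the hostfwd rules in place, and assuming an SSH server is installed in the guest (openssh-server is not in the package list above), the guest can be reached from the host like this (user name is hypothetical):

```shell
# Guest port 22 is mapped to host port 22222 by the -netdev hostfwd rule.
ssh -p 22222 user@localhost
```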
Once logged in, we finally execute the startVNC alias we defined earlier:
$ startVNC
And after waiting a few seconds, we can connect to the machine through a VNC viewer like TigerVNC on the default port 5900.
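For example, with TigerVNC installed on the host (its client binary is usually called vncviewer):

```shell
# Host port 5900 is forwarded to the guest's x11vnc server.
vncviewer localhost:5900
```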
I hope this helps you set up your QEMU RISC-V system with X11.