---
title: "A prototype for introspection of virtual machines using a modified hypervisor"
author: "Lionel Hemmerlé and Frédéric Tronel"
date: "March 2025"
geometry: a4paper,margin=2cm
colorlinks: true
numbersections: true
---

Introduction

The following instructions explain how to build the whole software stack developed in the INRIA Sushi Team for the introspection of virtual machines running on top of a modified version of the Xvisor hypervisor. The compiler and the hypervisor patch were created by Lionel Hemmerlé during his PhD thesis. This software stack has been developed and tested on evaluation ARM-based boards kindly provided by AMD/Xilinx. The boards are ZynqMP ZCU104 boards equipped with a quad-core ARM Cortex-A53 processor (and an FPGA coprocessor, although it is not really needed in this research). For the purpose of the PEPR SecurEval project, this software has also been tested on a purely software emulator offered by the Qemu project (more precisely, by a modified version of Qemu maintained by AMD/Xilinx).

All tests have been conducted on a Debian stable version (Debian Bookworm) as available at the date of release. This VMI tool is described in more detail in the following publication.

Rebuilding the software stack

We provide several methods to rebuild the whole software stack our solution depends on:

  1. Using a dedicated shell script: build.sh. This script assumes it is launched on a Debian 12 system.

  2. Using a dedicated Docker container.

  3. Manually, following the steps detailed below.

To rebuild using the Dockerfile, we assume a functional Docker or podman installation (we recommend the latter since it does not require any additional daemon or privileges):

podman build -v $(pwd)/artefacts:/artefacts:rw --tag=pepr-vmi --ulimit=nofile=2048:2048 .

Manual building of the software stack

Host dependencies

The build host needs the following dependencies:

apt update
apt install -y build-essential git libtool autoconf automake bison flex gcc-aarch64-linux-gnu\
        python3-pip python3-sphinx meson\
        libglib2.0-dev libgcrypt20-dev zlib1g-dev libpixman-1-dev libslirp-dev\
        device-tree-compiler gperf texinfo wget unzip help2man gawk\
        libtool-bin libncurses-dev libgnutls28-dev pahole libssl-dev libelf-dev bc cargo

Create a dedicated project folder:

export REPO=https://gitlab-research.centralesupelec.fr/sushi-public/livrables/\
    pepr-secureval/virtual-machine-introspection.git
git clone ${REPO}
cd virtual-machine-introspection
git submodule update --init
export PROJECT=$(pwd)
export ARTEFACTS=$(pwd)/artefacts/
export NPCPUS=$(nproc)

Building qemu from source code

The following instructions are inspired by those given in the Xilinx wiki:

pushd qemu
# git tag to see all available versions
git checkout xilinx_v2024.2
mkdir -p build
cd build
../configure --target-list="aarch64-softmmu,microblazeel-softmmu" --enable-fdt \
            --disable-kvm --disable-xen --enable-gcrypt
make -j $NPCPUS
ln -f qemu-system-aarch64 ${ARTEFACTS}/
ln -f qemu-system-microblazeel ${ARTEFACTS}/
popd

Note that we need to build qemu for both the AArch64 and MicroBlaze architectures. This is necessary because the simulation of the Zynq ZCU10x provided by Xilinx is quite accurate with respect to the real hardware: the Zynq ZCU10x boot process involves firmware code executed by a MicroBlaze processor running as a soft core.

Device trees

Figure: Components involved for running a Linux kernel on top of Xilinx Qemu for ZCU 10x boards\label{dtb}

As shown in figure \ref{dtb}, Qemu must be passed two device trees (DTBs hereafter):

  1. one for the emulation of PMU (Power Management Unit)

  2. and one for the processing system (PS).

These DTBs are provided by Xilinx:

pushd qemu-devicetrees
make
ln -f ${PROJECT}/qemu-devicetrees/LATEST/MULTI_ARCH/zynqmp-pmu.dtb ${ARTEFACTS}
ln -f ${PROJECT}/qemu-devicetrees/LATEST/MULTI_ARCH/zcu102-arm.dtb ${ARTEFACTS}
popd

Cross-compilation chains

Although it is possible to use the cross-compilation chains provided by Debian packages for most binaries, specific chains are unfortunately necessary for some software (especially the PMU firmware). To simplify the whole process, we use a dedicated cross-compilation chain tool called Crosstool-NG, which makes it easy to build them.

Crosstool-NG installation

pushd crosstool-ng
./bootstrap
./configure --enable-local
make
popd

MicroBlaze

These steps are inspired by Andrey Yurovsky's blog post. We created a dedicated config file (${PROJECT}/ct-ng-microblaze.defconfig) that builds:

- binutils 2.42
- lib GMP, MPFR, MPC, zlib
- gcc 12.4.0
- newlib 4.5.0 

for little-endian MicroBlaze using crosstool-ng.

pushd crosstool-ng
cp ${PROJECT}/ct-ng-microblaze.defconfig defconfig
./ct-ng defconfig
./ct-ng build
popd

This should build a cross-compiler chain for MicroBlaze in a directory under ${PROJECT}/x-tools/

AArch64

Similarly, we create a dedicated cross-compilation chain for AArch64 to build the first-stage bootloader provided by Xilinx, which expects a compilation chain targeting the triplet (machine-vendor-operatingsystem) aarch64-none-elf, whereas Debian provides one configured as aarch64-linux-gnu. We create a dedicated config file (${PROJECT}/ct-ng-aarch64.defconfig):
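The triplet naming convention can be illustrated with a few lines of shell (a small sketch, independent of any toolchain):

```shell
# Split a GNU target triplet into its machine/vendor/OS fields.
triplet="aarch64-none-elf"
IFS=- read -r machine vendor os <<EOF
$triplet
EOF
echo "machine=$machine vendor=$vendor os=$os"
# machine=aarch64 vendor=none os=elf
```

The Debian chain (aarch64-linux-gnu) has no vendor field and targets the linux-gnu OS, hence the need for a separate bare-metal (elf) chain for the bootloader.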

pushd crosstool-ng
cp ${PROJECT}/ct-ng-aarch64.defconfig defconfig
./ct-ng defconfig
./ct-ng build
popd

This should build a cross-compilation chain for AArch64 in a directory under ${PROJECT}/x-tools/

Firmwares

MicroBlaze ROM

The ROM code is only provided as a proprietary binary.

wget https://www.xilinx.com/bin/public/openDownload?filename=PMU_ROM.tar.gz -O PMU_ROM.tgz
tar zxvf PMU_ROM.tgz
ln -f PMU_ROM/pmu-rom.elf ${ARTEFACTS}/

MicroBlaze Firmware

The following steps are inspired by different resources:

  1. Xilinx Wiki article dedicated to the MicroBlaze firmware

  2. Github repository of the MicroBlaze firmware

We need the dedicated MicroBlaze cross-compilation chain for this step since the firmware is built to run on this specific CPU, which is either emulated by Qemu (for the software version) or synthesized on the PL part of the board.

pushd embeddedsw
pushd lib/sw_apps/zynqmp_pmufw/src
PATH=$PATH:${PROJECT}/x-tools/microblazeel-xilinx-elf/bin/ make -j$NPCPUS
popd
ln -f ${PROJECT}/embeddedsw/lib/sw_apps/zynqmp_pmufw/src/executable.elf ${ARTEFACTS}/pmu.elf

First Stage BootLoader

For this stage we need the custom AArch64 cross-compilation chain built in the previous section. Indeed, the make recipes explicitly mention an aarch64-none-elf chain.

pushd lib/sw_apps/zynqmp_fsbl/src/
PATH=$PATH:${PROJECT}/x-tools/aarch64-none-elf/bin/ make -j$NPCPUS
popd
ln -f ${PROJECT}/embeddedsw/lib/sw_apps/zynqmp_fsbl/src/fsbl.elf ${ARTEFACTS}/
popd

ARM Trusted Firmware

pushd arm-trusted-firmware
CROSS_COMPILE=aarch64-linux-gnu- make -j$NPCPUS PLAT=zynqmp all
popd
ln -f ${PROJECT}/arm-trusted-firmware/build/zynqmp/release/bl31/bl31.elf ${ARTEFACTS}/

Building a Linux system

There are many ways to obtain a functional Linux system to use with xvisor. We can use:

  1. A manual setup, which requires building u-boot, the Linux kernel, and a filesystem separately.

  2. Using buildroot, which conveniently provides a minimal but fully functional profile using busybox.

  3. Using Xilinx PetaLinux.

We chose here to build almost everything from scratch, except for the root filesystem where we use the second method (buildroot). The third method is described only for completeness.

Manual setup

Build U-boot

pushd u-boot-xlnx
make distclean
export CROSS_COMPILE=aarch64-linux-gnu-
make xilinx_zynqmp_virt_defconfig
export DEVICE_TREE="zynqmp-zcu104-revC"
make -j $NPCPUS
ln -f u-boot.elf ${ARTEFACTS}/
popd

Build Linux Kernel

We build a kernel from the 5.15.xx Linux series. It has been tested with the ZCU104 board and Lionel's version of Xvisor. Version 5.15.1 also has a vulnerability (Dirty Pipe) that is interesting to exploit as a test for Lionel's work. We have to apply the following patch to make the kernel compile.
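As a side note, Dirty Pipe (CVE-2022-0847) was introduced in 5.8 and fixed in 5.15.25 for the 5.15 series, so 5.15.1 falls inside the vulnerable window. This can be checked mechanically with `sort -V` (a sketch; the version bounds are taken from the public advisory):

```shell
# Check whether a 5.x kernel version falls in the Dirty Pipe window
# [5.8, 5.15.25) relevant to the 5.15 series used here.
dirty_pipe_vulnerable() {
    v=$1
    # v >= 5.8: the smallest of {v, 5.8} under version sort must be 5.8
    [ "$(printf '%s\n' "$v" 5.8 | sort -V | head -n1)" = "5.8" ] || return 1
    # v < 5.15.25: v must sort first and differ from 5.15.25
    [ "$(printf '%s\n' "$v" 5.15.25 | sort -V | head -n1)" = "$v" ] \
        && [ "$v" != "5.15.25" ]
}

dirty_pipe_vulnerable 5.15.1 && echo "5.15.1 is vulnerable"
# 5.15.1 is vulnerable
```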

mkdir -p ./linux-build-5.15.1
wget http://ftp.lip6.fr/pub/linux/kernel/sources/v5.x/linux-5.15.1.tar.xz
unxz linux-5.15.1.tar.xz
tar xf linux-5.15.1.tar
pushd linux-5.15.1
patch -p1 < ../kernel-5.15.1.patch
cp ../linux-defconfig-5.15.1 arch/arm64/configs/zcu104_defconfig
make O=${PROJECT}/linux-build-5.15.1 ARCH=arm64 \
    CROSS_COMPILE=${PROJECT}/x-tools/aarch64-none-elf/bin/aarch64-none-elf- zcu104_defconfig
make -j$NPCPUS O=${PROJECT}/linux-build-5.15.1 ARCH=arm64 \
    CROSS_COMPILE=${PROJECT}/x-tools/aarch64-none-elf/bin/aarch64-none-elf- Image dtbs modules
ln -f ${PROJECT}/linux-build-5.15.1/arch/arm64/boot/Image ${ARTEFACTS}/
ln -f ${PROJECT}/linux-build-5.15.1/arch/arm64/boot/dts/xilinx/zynqmp-zcu104-revC.dtb ${ARTEFACTS}/
ln -f ${PROJECT}/linux-build-5.15.1/System.map ${ARTEFACTS}/
popd

Build Root FS

Building using buildroot

This method is well supported and quite fast. We provide a modified default configuration file for the ZCU104 board that targets the build of a small root filesystem compatible with the requirements imposed by Xvisor on the boot chain:

pushd buildroot
# Patch which allows to have GCC installed inside the target system ...
# wget https://luplab.cs.ucdavis.edu/assets/buildroot-gcc/gcc-target.patch
cp ../zynqmp_zcu104_defconfig configs/
make zynqmp_zcu104_defconfig
make
ln -f ${PROJECT}/buildroot/output/images/rootfs.ext2 ${ARTEFACTS}/
popd

Building system device tree

pushd system-device-tree
echo "Building system device tree"
make
ln -f ${PROJECT}/system-device-tree/system-top.dtb ${ARTEFACTS}/
popd

Building PetaLinux with Yocto for the Zynq ZCU104

It is possible to rebuild a PetaLinux distribution for the Zynq ZCU104, although it takes quite a long time even on a big machine. This is only for completeness and is not mandatory.

apt install repo
mkdir yocto-2023.2
cd yocto-2023.2
repo init -u https://github.com/Xilinx/yocto-manifests.git -b rel-v2023.2
repo sync
source setupsdk
MACHINE=zcu104-zynqmp bitbake core-image-minimal

Testing direct boot of Linux kernel using Xilinx Qemu

To build a bootable SD card image containing all the necessary software, we rely on the guestfish package:

sudo apt install guestfish u-boot-tools genext2fs

We provide a shell script (${PROJECT}/build-sdcard-linux.sh) that creates a disk image for a virtual SD card, passed to the emulator, which contains all the files necessary to boot Linux directly on the emulated ZCU104.

chmod u+x ./build-sdcard-linux.sh
./build-sdcard-linux.sh
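If the script complains about missing files, a quick sanity check is to verify that the artefacts produced by the previous steps are all present. A minimal sketch (the file list is collected from the ln -f commands above; the exact set the script requires may differ):

```shell
# Verify that the expected build artefacts are present in a directory.
check_artefacts() {
    dir=$1
    missing=0
    for f in qemu-system-aarch64 qemu-system-microblazeel \
             zynqmp-pmu.dtb zcu102-arm.dtb pmu-rom.elf pmu.elf fsbl.elf \
             bl31.elf u-boot.elf Image zynqmp-zcu104-revC.dtb \
             System.map rootfs.ext2 system-top.dtb; do
        [ -e "$dir/$f" ] || { echo "missing: $f"; missing=1; }
    done
    return $missing
}

check_artefacts "${ARTEFACTS}" && echo "all artefacts present"
```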

This will create a new file in the project (${PROJECT}/sdcard.img) which can be provided to the emulator to boot Linux on the ZCU 104. Since we need to launch two instances of Qemu in parallel (one for the MicroBlaze processor, the other one for the ARM A53 processor), we also provide a shell script to run the emulation:

chmod u+x ./qemu-linux.sh
./qemu-linux.sh

This should display:

Creating temporary directory: /tmp/tmp.vTH6JvxoTl
Launching the PMU
Launching the CPU
qemu-system-microblazeel: info: QEMU waiting for connection on: disconnected:unix:/tmp/tmp.vTH6JvxoTl/qemu-rport-_pmu@0,server=on
PMU Firmware 2024.2	Jan 11 2025   17:31:42
PMU_ROM Version: xpbr-v8.1.0-0
NOTICE:  BL31: Non secure code at 0x8000000
NOTICE:  BL31: v2.10.0	(release):xilinx-v2024.2
NOTICE:  BL31: Built : 13:43:05, Jan 10 2025
U-Boot 2024.01 (Feb 27 2025 - 11:06:22 +0100)
[...]
Starting kernel ...

[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[    0.000000] Linux version 5.15.1 (ftronel@sashimi) (aarch64-none-elf-gcc (crosstool-NG 1.26.0.143_32f288e) 5.5.0, GNU ld (crosstool-NG 1.26.0.143_32f288e) 2.27) #11 SMP Wed Feb 26 14:44:06 CET 2025
[    0.000000] Machine model: ZynqMP ZCU104 RevC
[...]
Welcome to Buildroot
buildroot login: root
# 

The root password is root.

Introspection of Linux VM using Xvisor

Building the modified version of xvisor

These instructions are adapted from the Xvisor documentation for the ZCU board. We build a modified version of Xvisor that enables virtual machine introspection by the hypervisor itself. The modifications are provided as an external patch (for now).

pushd xvisor
zcat ${PROJECT}/vmi.patch.gz | patch -p1
cat ${PROJECT}/xvisor-qemu.patch | patch -p1
CROSS_COMPILE=aarch64-linux-gnu- make ARCH=arm generic-v8-defconfig
CROSS_COMPILE=aarch64-linux-gnu- make -j$NPCPUS
popd

Building the compiler for introspection programs

We provide a compiler for a dedicated introspection language called HyperSec, as described in the publication. The compiler is written in the Rust programming language. To build it:

pushd rust-parser
cargo build
popd

We provide an example of an introspection program that detects the modification of the syscall table by a rootkit. The program is as follows:

fun f_syscall_alert(l: listener) {
    alert();
}

fun init_syscall_protections() {
    register_write($sys_call_table, 439*8, f_syscall_alert); # sys call table
}
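The second argument of register_write is a length in bytes: 439 table entries times 8 bytes per entry, assuming each entry is a 64-bit function pointer and that 439 covers the table of this kernel:

```shell
# Length in bytes of the watched syscall-table region:
# 439 entries, each an 8-byte (64-bit) function pointer.
entries=439
entry_size=8
echo "watched length: $((entries * entry_size)) bytes"
# watched length: 3512 bytes
```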

The example program is recompiled with the following commands:

./target/debug/parser  -i programs/syscall-protection.src -s ../linux-build-5.15.1/System.map -o syscall-protection.S
aarch64-linux-gnu-as syscall-protection.S -o syscall-protection.o
aarch64-linux-gnu-objcopy -O binary -j .text ./syscall-protection.o  syscall-protection.bin

We provide the compiler with the Linux kernel symbol table and it produces an (ARM64) assembly file of the corresponding code. We then transform this assembly code into an ELF object file, from which we extract only the code section. This raw binary file will be sent to the hypervisor from the virtual machine by means of a dedicated kernel module.
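The symbol-resolution step can be reproduced by hand: System.map is a plain text file with one "address type name" line per symbol. A sketch of looking up sys_call_table, as the compiler presumably does internally (the excerpt reuses addresses from the boot logs later in this document):

```shell
# Look up a symbol's address in a (miniature) System.map excerpt.
# Each line has the format: <address> <type> <name>
cat > /tmp/System.map.example <<'EOF'
ffff80001006da40 T __arm64_sys_kill
ffff800010e50988 D sys_call_table
EOF

awk '$3 == "sys_call_table" { print $1 }' /tmp/System.map.example
# ffff800010e50988
```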

Building additional kernel modules

We provide two kernel modules that can be built from outside the kernel:

  1. A kernel module called send-vmi.ko that sends a VMI program to the hypervisor; it takes a single parameter (filename).

  2. A rootkit module that attempts to modify the kernel system call table. It alters this table by modifying the semantics of the kill syscall.

To build them:

pushd kernel-modules
make
ln -f ${PROJECT}/kernel-modules/send-vmi.ko ${ARTEFACTS}/
ln -f ${PROJECT}/kernel-modules/syscall-rootkit.ko ${ARTEFACTS}/
popd

Booting Linux with VM introspection through Xvisor

We must first create a new dedicated SDcard image compliant with the boot chain used by xvisor:

./build-sdcard-xvisor-linux.sh

This boot chain is illustrated by the following figure:

Figure: Components involved for running a Linux kernel on top of the modified version of Xvisor, itself running on top of Xilinx Qemu for ZCU 10x boards\label{xvisor-boot}

Then we can execute a dedicated shell script that will run the emulation of xvisor on top of the ZCU104:

./qemu-xvisor-linux.sh
Creating temporary directory: 
Launching the PMU
Launching the CPU
qemu-system-microblazeel: info: QEMU waiting for connection on: disconnected:unix:/tmp/tmp.2H07QFr2vx/qemu-rport-_pmu@0,server=on
PMU Firmware 2024.2	Mar 26 2025   17:46:27
PMU_ROM Version: xpbr-v8.1.0-0
NOTICE:  BL31: Non secure code at 0x8000000
NOTICE:  BL31: v2.10.0	(release):xilinx-v2024.2
NOTICE:  BL31: Built : 17:46:52, Mar 26 2025
[...]
[guest0/uart0] Welcome to Buildroot
[guest0/uart0] buildroot login: root
[guest0/uart0] Password: 
[guest0/uart0] Login incorrect
[guest0/uart0] buildroot login: root
[guest0/uart0] Password: 

The password is once more root.

[guest0/uart0] Welcome to Buildroot
[guest0/uart0] buildroot login: root
[guest0/uart0] Password: 
[guest0/uart0] # ls
[guest0/uart0] send-vmi.ko             syscall-protection.bin  syscall-rootkit.ko
[guest0/uart0] # 

We can now insert the system call rootkit:

[guest0/uart0] insmod ./syscall-rootkit.ko
[guest0/uart0] [   65.983340] syscall_rootkit: loading out-of-tree module taints kernel.
[guest0/uart0] [   66.052385] Syscall rootkit! 
[guest0/uart0] [   66.080298] syscall table address: 0xffff800010e50988
[guest0/uart0] [   66.082599] kill syscall entry address: 0xffff800010e50d90
[guest0/uart0] [   66.086953] getdents64 syscall entry address: 0xffff800010e50b70
[guest0/uart0] [   66.093098] kill syscall previous function address: 0xffff80001006da40
[guest0/uart0] [   66.097082] new function address: 0xffff800008e50280
[guest0/uart0] [   66.099379] getdents64 syscall previous function address: 0xffff8000102c56a4
[guest0/uart0] [   66.102384] new function address: 0xffff800008e50000

This rootkit is able to hide a process by sending it signal 64 (a real-time signal that is not used in practice). For example:

[guest0/uart0] # ps aux
[guest0/uart0] PID   USER     COMMAND
[guest0/uart0]     1 root     init
[guest0/uart0]     2 root     [kthreadd]
[...]
[guest0/uart0]   210 root     /sbin/syslogd -n
[guest0/uart0]   214 root     /sbin/klogd -n
[guest0/uart0]   234 root     /usr/sbin/crond -f
[guest0/uart0]   235 root     -sh
[guest0/uart0]   239 root     ps aux
[guest0/uart0] # kill -64 235
[guest0/uart0] [   70.176904] kill: 64
[guest0/uart0] [   70.179545] pid: 235
[guest0/uart0] # ps aux
[guest0/uart0] PID   USER     COMMAND
[guest0/uart0] [   81.630838] offset: 2880, name: 235
[guest0/uart0]     1 root     init
[guest0/uart0]     2 root     [kthreadd]
[...]
[guest0/uart0]   210 root     /sbin/syslogd -n
[guest0/uart0]   214 root     /sbin/klogd -n
[guest0/uart0]   234 root     /usr/sbin/crond -f
[guest0/uart0]   240 root     ps aux

We are able to hide (and unhide) any process of our choice. To detect such an attack, we provide a VMI program.
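For contrast, a purely in-guest "cross-view" check would diff the PID lists seen by two different sources; a PID present in one view but missing from the other hints at a hidden process. A toy sketch with simulated views (PIDs taken from the listings above):

```shell
# Cross-view detection sketch: PIDs seen by one source but not the
# other suggest a hidden process. Inputs must be sorted for comm(1).
printf '%s\n' 1 2 210 214 234 235 239 > /tmp/view_a   # e.g. from /proc
printf '%s\n' 1 2 210 214 234 239     > /tmp/view_b   # e.g. from ps
comm -23 /tmp/view_a /tmp/view_b                      # only in view A
# 235
```

Note, however, that this particular rootkit also hooks getdents64, so directory listings of /proc are filtered as well: in-guest cross-view checks can be defeated, which is precisely the motivation for performing the detection at the hypervisor level.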

[guest0/uart0] Welcome to Buildroot
[guest0/uart0] buildroot login: root
[guest0/uart0] Password: 
[guest0/uart0] # ls
[guest0/uart0] send-vmi.ko             syscall-protection.bin  syscall-rootkit.ko
[guest0/uart0] # insmod ./send-vmi.ko filename=./syscall-protection.bin 
[guest0/uart0] [   27.190629] send_vmi: loading out-of-tree module taints kernel.
[guest0/uart0] written from 0x4a0b7000 to 0x10beca00 (504 bytes)
fundef: f_syscall_alert at 0x10becb88 (size: 9)
need injecting address of alert in register 9
fundef: init_syscall_protections at 0x10becbac (size: 13)
need injecting address of f_syscall_alert in register 9
need injecting address of register_write in register 12
funcall: init_syscall_protections
function returned

If we now try to insert the rootkit kernel module, its action is detected at the hypervisor level and signalled by an alert:

[guest0/uart0] # insmod ./syscall-rootkit.ko 
[guest0/uart0] [   40.649004] Syscall rootkit! 
[guest0/uart0] [   40.679240] syscall table address: 0xffff800010e50988
[guest0/uart0] [   40.684798] kill syscall entry address: 0xffff800010e50d90
[guest0/uart0] [   40.687854] getdents64 syscall entry address: 0xffff800010e50b70
[guest0/uart0] alert
alert
alert
alert
[guest0/uart0] kill syscall previous function address: 0xffff80001006da40
[guest0/uart0] [   40.700322] new function address: 0xffff800008e55280
[guest0/uart0] [   40.702803] getdents64 syscall previous function address: 0xffff8000102c56a4
[guest0/uart0] [   40.709713] new function address: 0xffff800008e55000
[guest0/uart0] # 

Future work

Although quite stable, this work is still at the proof-of-concept stage. We are working on several improvements, notably a binary verifier on the hypervisor side.