Docker VM Runner

Experimental Docker container to run forgejo-runner steps inside a VM.

Preparing a VM image

Creating the files

To register the VM, create a new folder named after the VM, then create a new file named config inside that folder. Also place the disk image to use in the folder (e.g. one created with qemu-img create -f qcow2 disk.qcow2 4G).
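
For example, assuming a hypothetical VM named alpine (pick your own name):

mkdir alpine
qemu-img create -f qcow2 alpine/disk.qcow2 4G
touch alpine/config  # fill in the settings described below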

Integrating into the Docker container

You can integrate your VM image into the Docker container by placing it in the images folder and building the Docker container yourself (e.g. docker build -t vm-runner .).
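
Continuing the hypothetical alpine example from above:

mv alpine images/alpine
docker build -t vm-runner .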

Now set the workflow to use your custom-built Docker container.
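
A minimal sketch of the corresponding workflow entry, assuming the image was tagged vm-runner and is available to the runner:

jobs:
  build:
    runs-on: qemu
    container:
      image: "vm-runner"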

Making the images available to the runner

If you don't want to build your own Docker container, you can create an images directory on the runner host and allow the workflows to access it.
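
For example (the alpine folder is again hypothetical):

mkdir -p /images
cp -r alpine /images/alpine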

You can generate a default config file for your runner using: forgejo-runner generate-config > config.yml. Afterwards, edit the file and add /images to container.volumes:

container:
  # [...]
  volumes:
    - /images

You also have to mount the volume in the workflow:

jobs:
  build:
    runs-on: qemu
    container:
      image: "codeberg.org/comcloudway/vm-runner:edge" # use the runner image collection
      volumes:
        - /images:/images

Configure the VM

In general, the following prerequisites need to be fulfilled in your VM:

  • The VM must boot under QEMU’s -nographic option.
  • The VM must have an sh executable.
  • The VM’s sshd must allow passwordless logins for root (ideally, root should not have a password at all), and must allow the use of the .ssh/environment file (PermitUserEnvironment yes); see the sshd_config sketch after this list.
  • The ~/.ssh folder must exist for the root user.
  • Verify how much memory your VM needs to boot. If necessary, add -m <memory size> to the VM_DEFAULT_ARGS options.
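
A minimal /etc/ssh/sshd_config sketch covering the sshd requirements above, assuming root logs in with an empty password; adapt it to your distro, and drop PermitEmptyPasswords if you use key-based logins instead:

# /etc/ssh/sshd_config (inside the VM)
PermitRootLogin yes
PermitEmptyPasswords yes
PermitUserEnvironment yes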

Booting the VM every time

If you don't mind waiting for the VM to start up every time, you can simply power off the system after setting it up.
Set VM_MONITOR_ACTIONS="" because there is no saved state to restore.

The VM can either mount the VirtFS host0 device under the /var/run/act/ directory itself (example /etc/fstab entry: host0 /var/run/act 9p trans=virtio,version=9p2000.L 0 0), or you can use the VM_SETUP_COMMAND='sh -c "mkdir -p /var/run/act; mount -t 9p host0 /var/run/act 2>&1 || true"' config entry. (Remember to set VM_SETUP_COMMAND="" if you mount the volume yourself.)

You also probably want to set VM_DEFAULT_ARGS="--snapshot" to prevent modifications to the base image.
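
Putting this together, a minimal config sketch for the boot-every-time approach (here the volume is mounted via fstab, so the setup command is empty):

VM_DISK="disk.qcow2"
VM_DEFAULT_ARGS="--snapshot"
VM_SETUP_COMMAND=""
VM_MONITOR_ACTIONS=""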

Restoring VM state

This method allows you to restore a previous VM state, which can be useful if you don't want to wait for the VM to boot up every time.

Because QEMU's savevm doesn't allow for mounted volumes, you cannot mount the volume using fstab.

After setting up the VM, power it off. Now add the following qemu arguments to get the VM into a state that is similar to the CI. (You might have to add additional arguments depending on the system):

# NOTE: the virtfs host mountpoint doesn't matter, just use something temporary, like /mnt
qemu-system-x86_64 \
 -monitor stdio \
 -serial none \
 -virtfs local,path=/mnt,mount_tag=host0,security_model=passthrough,id=host0 \
 -drive file="disk.qcow2"

Once the VM has booted to the login prompt (or wherever you want your base state to be), run savevm base in the qemu monitor. (You can change the snapshot name to anything you like).
Lastly, set VM_MONITOR_ACTIONS="loadvm base" in the config. (Again, replace the snapshot name as you please.)
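
The monitor interaction might then look like this (base is just the snapshot name from the example above):

(qemu) savevm base
(qemu) quit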

Example config file

For example, a config file for a simple alpine image that is booted directly to the command prompt could look like this:

# Path to the VM disk image
# change the path to point to your disk image, relative to the config file
VM_DISK="disk.qcow2"
# additional vm startup arguments
VM_DEFAULT_ARGS=""
# Architecture to use
# requires the architecture to be installed inside the container
# (see `arches` build argument)
VM_ARCH="x86_64"
# The command run on the server after ssh-ing into it.
# Should create and navigate into the workspace directory
# %s is replaced with the binary and arguments of the workflow script to run
# The following should work on most distros:
VM_COMMAND='sh -c "mkdir -p $GITHUB_WORKSPACE; cd $GITHUB_WORKSPACE; %s"'
# Maximum ssh connection attempts
# NOTE: there is currently a 1 second delay between them
VM_CONNECTION_TIMEOUT="120"
# ssh login user
VM_USER="root"
###################
# loadvm feature:
###################
# setup command to run before connection to guest
# should set up the mount points.
VM_SETUP_COMMAND='sh -c "mkdir -p /var/run/act; mount -t 9p host0 /var/run/act 2>&1 || true"'
# commands to run in the qemu monitor
VM_MONITOR_ACTIONS='loadvm base'

Build container

docker build -t vm-runner .

Change the image tag as needed.
You can set the following build arguments to customize behaviour (see the example after this list):

  • arches - space-separated list of qemu architectures to install (default: "x86_64")
  • bins - space-separated list of binaries to link to the exec script, used to forward execution into the VM (default: sh bash node python)
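
For example, a build that bundles an additional architecture and trims the wrapper list might look like this (the values shown are illustrative, not defaults):

docker build -t vm-runner \
 --build-arg arches="x86_64 aarch64" \
 --build-arg bins="sh bash" \
 .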

Run locally

forgejo-runner exec -W examples/<category>

Note: the /images mount is not supported when running locally.