GitHub Action: Setup Alpine Linux

A GitHub Action to easily set up and use a chroot-based [1] Alpine Linux environment in your workflows and emulate any supported CPU architecture (using QEMU).

runs-on: ubuntu-latest
steps:
  - uses: jirutka/setup-alpine@v1
    with:
      branch: v3.15
  - run: cat /etc/alpine-release
    shell: alpine.sh {0}

See more examples...

Highlights

  • Easy to use and flexible

    • Add one step with uses: jirutka/setup-alpine@v1 to set up the Alpine environment; for subsequent steps that should run in this environment, specify shell: alpine.sh {0}. See Usage examples for more.

    • You can switch between one or more Alpine environments and the host system (Ubuntu) within a single job, i.e. each step can run in a different environment. This is ideal, for example, for cross-compilation.

  • Emulation of non-x86 architectures

    • This couldn’t be easier: just specify the input parameter arch. The action sets up the QEMU user space emulator and installs an Alpine Linux environment for the specified architecture. You can then build and run binaries for/from this architecture just like on real hardware, only (significantly) slower; it’s software emulation, after all.

  • No hassle with Docker images

    • You don’t have to write any Dockerfiles, for example, to cross-compile Rust crates with C dependencies.[2]

    • No, you really don’t need any so-called "official" Docker image for gcc, nodejs, python or whatever you need... just install it using apk from the Alpine repositories. It is fast, really fast!

  • Always up to date environment

    • The whole environment, including all packages, is always installed directly from the official Alpine Linux repositories using apk-tools (Alpine’s package manager). There’s no intermediate layer that tends to lag behind with security fixes (such as Docker images). You might be thinking: isn’t that slow? No, it’s faster than pulling a Docker image!

    • No, you really don’t need any Docker image to get a stable build environment. Alpine Linux provides stable releases (branches); these receive only (security) fixes, no breaking changes.

  • It’s simple and lightweight

    • You don’t have to worry about that on a hosted CI service, but still... This action is written in ~220 LoC and uses only basic Unix tools (chroot, mount, wget, Bash, ...) and apk-tools (Alpine’s package manager). That’s it.

Parameters

Inputs

apk-tools-url

URL of the apk-tools static binary to use. It must end with #!sha256! followed by a SHA-256 hash of the file. This should normally be left at the default value.

Default: see action.yml
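
For illustration only, a pinned apk-tools binary would be specified as shown below; the URL and digest are placeholders, not a real release (see action.yml for the actual default value):

- uses: jirutka/setup-alpine@v1
  with:
    # Hypothetical URL and digest, shown only to illustrate the #!sha256! format.
    apk-tools-url: https://example.org/apk-tools/x86_64/apk.static#!sha256!<sha256-hex-digest-of-the-file>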

arch

CPU architecture to emulate using QEMU user space emulator. Allowed values are: x86_64 (native), x86 (native), aarch64, armhf [3], armv7, ppc64le, riscv64 [4], and s390x.

Default: x86_64

branch

Alpine branch (aka release) to install: vMAJOR.MINOR, latest-stable, or edge.

Example: v3.15
Default: latest-stable

extra-keys

A list of paths of additional trusted keys (for installing packages from the extra-repositories) to copy into /etc/apk/keys/. The paths should be relative to the workspace directory (the default location of your repository when using the checkout action).

Example: .keys/pkgs@example.org-56d0d9fd.rsa.pub

extra-repositories

A list of additional Alpine repositories to add into /etc/apk/repositories (Alpine’s official main and community repositories are always added).
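
A minimal sketch of how extra-repositories and extra-keys work together; the repository URL below is hypothetical and the key path reuses the example above:

- uses: actions/checkout@v2
- uses: jirutka/setup-alpine@v1
  with:
    # Hypothetical third-party repository, signed with a key stored in your repo.
    extra-repositories: |
      https://pkgs.example.org/alpine/v3.15
    extra-keys: .keys/pkgs@example.org-56d0d9fd.rsa.pub
    packages: some-pkg-from-extra-repo

Note that the checkout step has to run first, because the extra-keys paths are resolved relative to the workspace directory.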

mirror-url

URL of an Alpine Linux mirror to fetch packages from.
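
For example, to fetch packages from a specific mirror instead of the default one (the URL below is a placeholder; pick a mirror from the official Alpine mirrors list):

- uses: jirutka/setup-alpine@v1
  with:
    mirror-url: https://mirror.example.com/alpine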

packages

A list of Alpine packages to install.

Example: build-base openssh-client
Default: no extra packages

shell-name

Name of the wrapper script for running sh in the Alpine chroot; it will be added to GITHUB_PATH. Use this name in jobs.<job_id>.steps[*].shell (e.g. shell: alpine.sh {0}) to run the step’s script in the chroot.

Default: alpine.sh

volumes

A list of directories on the host system to bind-mount into the chroot. You can specify the source and destination path: <src-dir>:<dest-dir>. <src-dir> is an absolute path of an existing directory on the host system; <dest-dir> is an absolute path in the chroot (it will be created if it doesn’t exist). You can omit the latter if both are the same.

Please note that /home/runner/work (where your workspace is located) is always mounted; don’t specify it here.

Example: ${{ steps.alpine-aarch64.outputs.root-path }}:/mnt/alpine-aarch64

Outputs

root-path

Path to the created Alpine root directory.
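
A minimal sketch of consuming this output, assuming the setup step is given an id (here a host step just lists the contents of the chroot):

- uses: jirutka/setup-alpine@v1
  id: alpine
- name: Inspect the Alpine root from the host system
  run: ls ${{ steps.alpine.outputs.root-path }}
  shell: bash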

Usage examples

Basic usage

runs-on: ubuntu-latest
steps:
  - uses: actions/checkout@v2
  - name: Setup latest Alpine Linux
    uses: jirutka/setup-alpine@v1
  - name: Run script inside Alpine chroot as root
    run: |
      cat /etc/alpine-release
      apk add nodejs npm
    shell: alpine.sh --root {0}
  - name: Run script inside Alpine chroot as the default user (unprivileged)
    run: |
      ls -la  # as you would expect, you're in your workspace directory
      npm build
    shell: alpine.sh {0}
  - name: Run script on the host system (Ubuntu)
    run: |
      cat /etc/os-release
    shell: bash

Set up Alpine with specified packages

- uses: jirutka/setup-alpine@v1
  with:
    branch: v3.15
    packages: >
      build-base
      libgit2-dev
      meson

Set up and use Alpine for a different CPU architecture

runs-on: ubuntu-latest
steps:
  - name: Setup Alpine Linux v3.15 for aarch64
    uses: jirutka/setup-alpine@v1
    with:
      arch: aarch64
      branch: v3.15
  - name: Run script inside Alpine chroot with aarch64 emulation
    run: uname -m
    shell: alpine.sh {0}

Set up Alpine with packages from the testing repository

- uses: jirutka/setup-alpine@v1
  with:
    extra-repositories: |
      http://dl-cdn.alpinelinux.org/alpine/edge/testing
    packages: some-pkg-from-testing

Set up and use multiple Alpine environments in a single job

runs-on: ubuntu-latest
steps:
  - name: Setup latest Alpine Linux for x86_64
    uses: jirutka/setup-alpine@v1
    with:
      shell-name: alpine-x86_64.sh
  - name: Setup latest Alpine Linux for aarch64
    uses: jirutka/setup-alpine@v1
    with:
      arch: aarch64
      shell-name: alpine-aarch64.sh
  - name: Run script inside Alpine chroot
    run: uname -m
    shell: alpine-x86_64.sh {0}
  - name: Run script inside Alpine chroot with aarch64 emulation
    run: uname -m
    shell: alpine-aarch64.sh {0}
  - name: Run script on the host system (Ubuntu)
    run: cat /etc/os-release
    shell: bash

Cross-compile Rust application with system libraries

runs-on: ubuntu-latest
strategy:
  matrix:
    include:
      - rust-target: aarch64-unknown-linux-musl
        os-arch: aarch64
env:
  CROSS_SYSROOT: /mnt/alpine-${{ matrix.os-arch }}
steps:
  - uses: actions/checkout@v1
  - name: Set up Alpine Linux for ${{ matrix.os-arch }} (target arch)
    id: alpine-target
    uses: jirutka/setup-alpine@v1
    with:
      arch: ${{ matrix.os-arch }}
      branch: edge
      packages: >
        dbus-dev
        dbus-static
      shell-name: alpine-target.sh
  - name: Set up Alpine Linux for x86_64 (build arch)
    uses: jirutka/setup-alpine@v1
    with:
      arch: x86_64
      packages: >
        build-base
        pkgconf
        lld
        rustup
      volumes: ${{ steps.alpine-target.outputs.root-path }}:${{ env.CROSS_SYSROOT }}
      shell-name: alpine.sh
  - name: Install Rust stable toolchain via rustup
    run: rustup-init --target ${{ matrix.rust-target }} --default-toolchain stable --profile minimal -y
    shell: alpine.sh {0}
  - name: Build statically linked binary
    env:
      CARGO_BUILD_TARGET: ${{ matrix.rust-target }}
      CARGO_PROFILE_RELEASE_STRIP: symbols
      PKG_CONFIG_ALL_STATIC: '1'
      PKG_CONFIG_LIBDIR: ${{ env.CROSS_SYSROOT }}/usr/lib/pkgconfig
      RUSTFLAGS: -C linker=/usr/bin/ld.lld
      SYSROOT: /dummy  # workaround for https://github.com/rust-lang/pkg-config-rs/issues/102
    run: |
      # Workaround for https://github.com/rust-lang/pkg-config-rs/issues/102.
      echo -e '#!/bin/sh\nPKG_CONFIG_SYSROOT_DIR=${{ env.CROSS_SYSROOT }} exec pkgconf "$@"' \
        | install -m755 /dev/stdin pkg-config
      export PKG_CONFIG="$(pwd)/pkg-config"
      cargo build --release --locked --verbose
    shell: alpine.sh {0}
  - name: Try to run the binary
    run: ./myapp --version
    working-directory: target/${{ matrix.rust-target }}/release
    shell: alpine-target.sh {0}

History

This action is an evolution of the alpine-chroot-install script I originally wrote for Travis CI in 2016. The implementation is essentially the same, but tailored to GitHub Actions. It’s so simple and fast thanks to how awesome apk-tools is!

License

This project is licensed under the MIT License. For the full text of the license, see the LICENSE file.


1. If you don’t know what chroot is, think of it as a very simple container. It’s one of the cornerstones of containers and the only one that is actually needed for this use case.
2. The popular cross tool used by the actions-rs/cargo action doesn’t allow you to easily install additional packages or anything else needed for building your crate without creating, building, and maintaining custom Docker images (cross-rs/cross#281). This just imposes unnecessary complexity and boilerplate.
3. armhf is armv6 with hard-float.
4. riscv64 is available only for branch edge for now.