
Cross-compiling Crystal applications - Part 1

June 22, 2024 - crystal linux compiling alpine macos

Exploring ways to simplify compiling Crystal applications for other platforms and architectures.

Native still requires runtime dependencies

While the Crystal language provides a friendly way to generate native binaries for your current platform (crystal build), cross-compiling to target other platforms (--cross-compile) still requires a bit of manual juggling to produce a proper native binary for those platforms.

Let's take a simple Hello World application:

puts "Hello World!"

To generate a native executable, we can simply do:

$ crystal build

This generates a binary named hello in your current directory: your application's source code translated into native machine code.

Crystal automatically did several things for us:

  1. It generated an object file of our code
  2. It linked this object file with the library dependencies

When executed:

$ ./hello
Hello World!

It will no longer require Crystal to be installed. However, it will still require other libraries to be present on your system when executed:

$ ldd hello
    /lib/ (0xffffa79ce000)
    => /usr/lib/ (0xffffa77ec000)
    => /usr/lib/ (0xffffa776d000)
    => /usr/lib/ (0xffffa773c000)
    => /lib/ (0xffffa79ce000)

Those are dynamically linked dependencies. The list above shows the output from an Alpine Linux installation; it will be different if you're using another Linux distribution, especially those that use glibc as the C library (pretty much all of them, each with different versions).

If you're under macOS, you can use otool -L to obtain a list of the runtime dependencies of your program:

$ otool -L hello
    /opt/homebrew/opt/pcre2/lib/libpcre2-8.0.dylib (compatibility version 14.0.0, current version 14.0.0)
    /opt/homebrew/opt/bdw-gc/lib/libgc.1.dylib (compatibility version 7.0.0, current version 7.3.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1345.100.2)
    /opt/homebrew/opt/libevent/lib/libevent-2.1.7.dylib (compatibility version 8.0.0, current version 8.1.0)
    /usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)

These dependencies will be required if you plan to distribute your executable (e.g. install XYZ before using this application).

Dynamic vs Static linked dependencies

There is a whole debate around how to treat dependencies. Advocates on each front will produce a long list of the benefits of their approach and reasons why the other is wrong.

I personally will not get into that debate, but I will try to scratch my own itch. When shipping my applications, I care about:

  1. Reduce as much as possible the manual steps required from users that can lead to issues (e.g. install XYZ first)
  2. Reduce debug time caused by mismatched dependencies between different users
  3. Have a reproducible build environment, so changes in my local system don't impact my builds
  4. Automate as much as possible the build process to avoid forgetting details
  5. Be able to support both Linux and macOS environments (both on Intel and ARM)

With these in mind, here is my initial approach to validate this idea:

  1. Package build environment as a container image that I can use on any machine
  2. Ship to end-users standalone binaries without dependencies
  3. Allow building binaries for other architectures

Container image: a reproducible and descriptive build environment

I often switch between macOS, Linux and Windows computers, so I need a portable environment that doesn't require a lot of ceremony to get running on any of those systems.

Over the years I found that Docker and container images provided me a stable solution to this.

I already use Crystal within a container thanks to hydrofoil-crystal, so it makes sense to reuse that work as a base.

This container image is based on Alpine Linux, which uses the musl C library instead of glibc, commonly found in bigger distributions like Debian, Fedora and others.

This presents a series of benefits that we will cover later. In the meantime, let's write a basic Dockerfile for this:

FROM AS base

And let's build the image:

$ docker build -t crystal-xbuild -f Dockerfile .

The above command generates a container image under 400MB:

$ docker image ls
crystal-xbuild   latest    be05b5a473a2   3 weeks ago   377MB

Ship a standalone executable (static linking)

This image already contains the static libraries necessary for you to build binaries that do not depend on dynamic libraries being available.

Let's use our fresh image to spawn an interactive container:

$ docker run -it --rm -u $(id -u):$(id -g) -v .:/app -w /app crystal-xbuild sh -i

Within the container, let's try our example again:

$ crystal build --static

The docker run command above might be a mouthful, so let's break it down:

  1. -it --rm spawns an interactive container that is removed once you exit
  2. -u $(id -u):$(id -g) runs the container as your own user, so files created in the mounted directory are not owned by root
  3. -v .:/app -w /app mounts the current directory as /app and makes it the working directory
  4. sh -i starts an interactive shell inside the container

As for crystal build --static, it will attempt to generate a static version of our application.

Once compiled, exit the shell (the container is removed automatically thanks to --rm) and you should find the hello executable in the same directory.

Let's inspect it with file:

$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=8025c0fcdfed21df1579411bdab10c35fec83f94, with debug_info, not stripped

It generated a static binary for the x86_64 architecture.

Now we can try to run this in another Linux distribution (e.g. Ubuntu) to confirm that it works:

$ docker run --rm -v .:/app ubuntu:24.04 /app/hello
Hello World!
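As a convenience, the docker invocations used here can be wrapped in a small helper. This is just a sketch of my own: the xbuild function name and the DOCKER override are not part of any tool, and it assumes the crystal-xbuild image built above.

```shell
# xbuild: hypothetical helper that runs any command inside the
# crystal-xbuild image, with the current directory mounted at /app.
# DOCKER can be overridden (e.g. DOCKER=echo) for a dry run.
: "${DOCKER:=docker}"

xbuild() {
    $DOCKER run --rm \
        -u "$(id -u):$(id -g)" \
        -v "$PWD:/app" -w /app \
        crystal-xbuild "$@"
}
```

For example, xbuild crystal build --static compiles the application without opening an interactive shell first.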

Building binaries for other architectures (x86_64, aarch64)

While I was able to produce a standalone executable, there is one big caveat: it only works for my current platform.

This means that if I'm compiling on x86_64 (Intel/AMD 64-bit architecture), my executable will be an x86_64 native binary.

If I was running on aarch64 (ARM 64-bit architecture), my generated executable would be native to that architecture.

In order to produce a binary for another platform, I need to cross-compile. Cross-compilation is a complex subject in itself, but to simplify:

  1. You need a compiler that understands the target architecture you want to compile to
  2. You need the static libraries for that target platform
  3. You need a linker that can take your object file and link it against these static libraries
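The targets used throughout this post follow the arch-os-abi "triple" convention. As a small sketch, the machine name reported by arch (or uname -m) can be mapped to these triples; target_triple is a hypothetical helper of my own, not part of Crystal or Zig.

```shell
# target_triple: map a machine name (as reported by `arch` or
# `uname -m`) to the <arch>-linux-musl triple that both
# `crystal build --target` and `zig cc -target` accept.
target_triple() {
    case "$1" in
        x86_64 | amd64)  echo "x86_64-linux-musl" ;;
        aarch64 | arm64) echo "aarch64-linux-musl" ;;
        *) echo "unsupported architecture: $1" >&2; return 1 ;;
    esac
}
```

For example, target_triple "$(arch)" reproduces the $(arch)-linux-musl expansion used later in this post.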

Crystal is capable of cross-compilation to different architectures and platforms, but leaves the linking process for you to figure out:

Let's take the following example, trying to build for ARM:

$ crystal build --cross-compile --target aarch64-linux-musl --static
cc hello.o -o hello  -rdynamic -static -L/usr/local/bin/../lib/crystal -lpcre2-8 -lgc -lpthread -ldl -levent

It now outputs a line that we didn't see before. This was the linking command Crystal runs automatically when building natively. Let's break down what this command means:

  1. cc is the system C compiler, used here to drive the linker
  2. hello.o is the object file Crystal generated from our code
  3. -o hello names the resulting executable
  4. -rdynamic and -static are linker options (the latter requests static linking)
  5. -L/usr/local/bin/../lib/crystal adds Crystal's own library directory to the search path
  6. -lpcre2-8 -lgc -lpthread -ldl -levent are the libraries to link against

No executable was generated, simply because Crystal doesn't know if cc is capable of linking that alien object file, or if it can find the appropriate static libraries needed for linking. For this, we will require a linker that can do that.

If you look around the internet, you will find different advice on which cross-linker or cross-compilation toolchain to use, from building everything from scratch to out-of-the-box solutions, but no silver bullet.

This is a rabbit hole I don't want to go down: figuring everything out or building everything from scratch... I want to spend my time building my application!

So let's take a shortcut: let's invest our time well and leverage the work that others have done in this area.

Back in 2020, Andrew Kelley wrote about using Zig, specifically zig cc, to replace your regular C compiler and easily cross-compile, all at once.

So let's add Zig to our container image:

Diff of changes to apply to Dockerfile
 FROM AS base
+# install cross-compiler (Zig)
+RUN --mount=type=cache,sharing=private,target=/var/cache/apk \
+    --mount=type=tmpfs,target=/tmp \
+    set -eux -o pipefail; \
+    # Tools to extract Zig
+    { \
+        apk add \
+            tar \
+            xz \
+        ; \
+    }; \
+    # Zig
+    { \
+        cd /tmp; \
+        mkdir -p /opt/zig; \
+        export ZIG_VERSION=0.13.0; \
+        case "$(arch)" in \
+        x86_64) \
+            export \
+                ZIG_ARCH=x86_64 \
+                ZIG_SHA256=d45312e61ebcc48032b77bc4cf7fd6915c11fa16e4aad116b66c9468211230ea \
+            ; \
+            ;; \
+        aarch64) \
+            export \
+                ZIG_ARCH=aarch64 \
+                ZIG_SHA256=041ac42323837eb5624068acd8b00cd5777dac4cf91179e8dad7a7e90dd0c556 \
+            ; \
+            ;; \
+        esac; \
+        wget -q -O zig.tar.xz https://ziglang.org/download/${ZIG_VERSION}/zig-linux-${ZIG_ARCH}-${ZIG_VERSION}.tar.xz; \
+        echo "${ZIG_SHA256} *zig.tar.xz" | sha256sum -c - >/dev/null 2>&1; \
+        tar -C /opt/zig --strip-components=1 -xf zig.tar.xz; \
+        rm zig.tar.xz; \
+        # symlink executable
+        ln -nfs /opt/zig/zig /usr/local/bin; \
+    }; \
+    # smoke check
+    [ "$(command -v zig)" = '/usr/local/bin/zig' ]; \
+    zig version; \
+    zig cc --version

Wow 🤯, that looks complicated! Here is a summary of what is going on:

  1. Install tar and xz, needed to extract the Zig tarball
  2. Select the Zig tarball and its SHA256 checksum matching the current architecture (x86_64 or aarch64)
  3. Download the tarball, verify its checksum and extract it into /opt/zig
  4. Symlink the zig executable into /usr/local/bin and smoke-check that it works

All this happens within a temporary directory that is not part of the container image, simply to avoid carrying over unnecessary files (and increasing the final image size).
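One detail worth highlighting is the checksum guard: sha256sum -c exits with a non-zero status on mismatch, which, combined with set -e, aborts the image build on a corrupted or tampered download. A minimal standalone sketch of the same idiom, using a throwaway file instead of the Zig tarball:

```shell
# Demonstrate the verification pattern used in the Dockerfile:
# a line of the form "<sha256> *<file>" piped into `sha256sum -c`
# checks the file, failing (non-zero exit) when the checksum differs.
printf 'hello\n' > sample.txt
SAMPLE_SHA256="$(sha256sum sample.txt | cut -d ' ' -f 1)"

# Matching checksum: reports the file as OK and exits 0
echo "${SAMPLE_SHA256} *sample.txt" | sha256sum -c -

# A wrong checksum would make `sha256sum -c` fail, and `set -e`
# (as used in the Dockerfile) would abort the build at that point.
```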

Let's test compiling a simple C program to validate that it's working:

#include <stdio.h>

int main()
{
    puts("Hello World!");

    return 0;
}

$ zig cc examples/hello.c -o hello -target $(arch)-linux-musl

This compiles hello.c as hello, targeting the same architecture our container is currently running on.

But thanks to the magic of -target, Zig builds a static version of the musl C library and links it automatically into the final executable, resulting in a standalone binary:

$ ldd hello
/lib/ hello: Not a valid dynamic program

$ file hello
hello: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, with debug_info, not stripped

Since I'm running the container in a ARM platform, let's target Intel/AMD now:

$ zig cc examples/hello.c -o hello-intel -target x86_64-linux-musl

And the new, standalone binary will be targeting x86_64:

$ file hello-intel
hello-intel: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, with debug_info, not stripped

But while it can cross-compile a simple C program, that does not mean it can cross-compile our Crystal one:

$ crystal build --cross-compile --target x86_64-linux-musl examples/ -o hello.o
cc hello.o -o hello  -rdynamic -L/usr/local/bin/../lib/crystal -lpcre2-8 -lgc -lpthread -ldl -levent

$ zig cc -target x86_64-linux-musl hello.o -o hello-crystal

Expect a flood of errors due to missing libraries:

ld.lld: error: undefined symbol: _Unwind_SetGR
>>> referenced by (/usr/local/share/crystal/src/
>>>               hello.o:(__crystal_personality)
>>> referenced by (/usr/local/share/crystal/src/
>>>               hello.o:(__crystal_personality)
ld.lld: error: undefined symbol: GC_get_push_other_roots
>>> referenced by (/usr/local/share/crystal/src/gc/
>>>               hello.o:(*GC::before_collect<&Proc(Nil)>:Nil)
ld.lld: error: undefined symbol: event_base_new
>>> referenced by (/usr/local/share/crystal/src/crystal/system/unix/
>>>               hello.o:(*Crystal::LibEvent::Event::Base#initialize:Pointer(Void))

It was not able to find these functions because we didn't provide the libraries it needs to link against, so perhaps this is a good moment to bring those in.

Include necessary libraries for other architectures

While working on RubyInstaller, I spent years compiling and cross-compiling dependencies over and over again. This time, I'm not going to repeat that; the same way as done for the compiler/linker, I'm going to leverage the great work done by others.

I'm going to stick with Alpine Linux, which provides packages with all the static libraries necessary to build my applications.

From our example's linking command (-lpcre2-8 -lgc -lpthread -ldl -levent), we need the following libraries: libpcre2-8, libgc and libevent (pthread and dl are provided by musl itself).

I'm going to use the Alpine Linux package search to look up which packages contain these libs: the static variants of the pcre2, gc and libevent packages (like libevent-static, used below).

Since Zig already bundles the musl source code and its dependencies, we don't need to download the musl-dev package.

At this time, the latest version of Alpine Linux is 3.20, so I'm going to download these packages (.apk files) for my intended architectures: x86_64 and aarch64.

$ mkdir -p /tmp/packages; cd /tmp/packages

$ wget \ \ \

It's time to extract the files we need from those packages: the precious static libraries (.a) files:

$ mkdir -p x86_64-linux-musl

$ tar -xf libevent-static-2.1.12-r7.apk \
	--strip-components=2 \
	-C ./x86_64-linux-musl/ \
	--wildcards --no-anchored '*.a'

The above will extract only the .a files from the .apk package and place them in the new platform-specific directory we just created.

Let's repeat the same step for the other libraries.
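The extraction step above can be packed into a small function so repeating it per package is a one-liner. A sketch under a couple of assumptions: extract_static_libs is my own name, it relies on .apk packages being plain gzipped tarballs, and it requires GNU tar (installed in the container) for --wildcards.

```shell
# extract_static_libs: pull only the .a files out of an .apk package
# (an .apk is essentially a gzipped tarball) into a target directory.
# --strip-components=2 removes the leading usr/lib/ path components,
# and the '*.a' member pattern limits extraction to static libraries.
extract_static_libs() {
    pkg="$1"
    dest="$2"
    mkdir -p "$dest"
    tar -xf "$pkg" \
        --strip-components=2 \
        -C "$dest" \
        --wildcards --no-anchored '*.a'
}
```

With this in place, each package becomes one call, e.g. extract_static_libs libevent-static-2.1.12-r7.apk ./x86_64-linux-musl/.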

When inspected, we should now have a few files in there:

$ ls x86_64-linux-musl/
libcord.a            libevent_core.a      libevent_openssl.a   libgc.a              libgctba.a           libpcre2-32.a        libpcre2-posix.a
libevent.a           libevent_extra.a     libevent_pthreads.a  libgccpp.a           libpcre2-16.a        libpcre2-8.a

Good! Now that we have all the .a files from those packages, let's attempt linking our Crystal application again:

$ cd /app

$ zig cc -target x86_64-linux-musl \
	hello.o -o hello-crystal \
	-L/tmp/packages/x86_64-linux-musl \
	-lpcre2-8 -lgc -lpthread -ldl -levent

But still fails:

ld.lld: error: undefined symbol: _Unwind_GetRegionStart
>>> referenced by (/usr/local/share/crystal/src/
>>>               hello.o:(__crystal_personality)

It's looking for the Unwind functions, which come from libunwind. Zig bundles libunwind as part of its toolchain, but this dependency is not detected/indicated by Crystal, so let's add that library and try again:

$ zig cc -target x86_64-linux-musl \
	hello.o -o hello-crystal \
	-L/tmp/packages/x86_64-linux-musl \
	-lpcre2-8 -lgc -lpthread -ldl -levent -lunwind

Success! No errors were displayed! And inspecting the file:

$ ldd hello-crystal
/lib/ hello-crystal: Not a valid dynamic program

$ file hello-crystal
hello-crystal: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, with debug_info, not stripped

We obtain a similar result as the C program we compiled earlier.

Let's validate that against a x86_64 container:

$ docker run -it --rm --platform linux/amd64 -v .:/app -w /app ubuntu:24.04 bash -i

$ arch
x86_64

$ ./hello-crystal
Hello World!

It works! 🥳
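Since this same linking command will be repeated for every target, it can also be captured in a small function. This is a sketch assuming the static libraries live under /tmp/packages/<target> as in this post; zig_link is my own name, and the ZIG_CC override exists only so the command can be inspected without invoking Zig.

```shell
# zig_link: repeat the successful linking command for any target triple,
# pointing -L at the per-target directory of static libraries.
# ZIG_CC can be overridden (e.g. ZIG_CC=echo) for a dry run.
: "${ZIG_CC:=zig cc}"

zig_link() {
    target="$1"
    object="$2"
    output="$3"
    # $ZIG_CC is intentionally unquoted so "zig cc" splits into two words
    $ZIG_CC -target "$target" \
        "$object" -o "$output" \
        -L"/tmp/packages/$target" \
        -lpcre2-8 -lgc -lpthread -ldl -levent -lunwind
}
```

For example, zig_link x86_64-linux-musl hello.o hello-crystal reproduces the command above, and swapping in aarch64-linux-musl (with its own package directory) targets ARM instead.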

Note that this also works the other way around: to build aarch64 binaries from an x86_64 container, you will need to adjust the shown commands and download the right packages, but you get the idea.

You will find the source code for this post on GitHub, under the luislavena/crystal-xbuild-container repository.

But enough for today. While I have made some great progress, I still manually downloaded and extracted some libraries, and we haven't validated that we got the right files in order to ensure a reproducible environment.

And we haven't covered building binaries for macOS!

I promise we will tackle that in the next part.

Enjoy! ❤️