Backup headscale using Litestream

February 25, 2023 - wireguard tailscale headscale homelab litestream

How can I ensure that, if something happens to the server running headscale, I can recover the configuration?

Always have a backup strategy!

This time I tested headscale and Litestream locally, combined with an S3-compatible storage solution (MinIO).

▶️ I also made a recording of this that you can watch on YouTube here.


1. Set up an S3-compatible solution

Building on top of a previous TIL, I updated compose.yaml to include a MinIO service within the services section:

+  minio:
+    image: quay.io/minio/minio:latest
+    command: minio server /data --console-address ":9090"
+
+    environment:
+      - MINIO_REGION=home
+      - MINIO_ROOT_USER=admin
+      - MINIO_ROOT_PASSWORD=changeme
+
+    ports:
+      # Expose S3-compatible endpoint
+      - "9000:9000"
+
+      # Limit MinIO console to localhost
+      - "127.0.0.1:9090:9090"
+
+    volumes:
+      - minio:/data

And the respective volume to store data:

 volumes:
   headscale:
     driver: local
+  minio:
+    driver: local

I made sure our headscale service boots after MinIO by adjusting its configuration:

     # command: headscale serve
+
+    depends_on:
+      - minio
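
One caveat: depends_on only controls the start order; it does not wait for MinIO to actually be ready before headscale starts. A quick readiness check from the host (a sketch, assuming curl is installed; /minio/health/live is MinIO's liveness endpoint):

# Returns 0 once MinIO is up and serving requests
$ curl -sf http://localhost:9000/minio/health/live && echo "MinIO is up"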

2. Add configuration for S3 replication

Created a new configuration file for Litestream:

---
dbs:
  - path: /data/headscale.sqlite3
    replicas:
      - name: s3_replica
        type: s3
        bucket: ${S3_BUCKET_NAME}
        path: headscale
        endpoint: ${S3_ENDPOINT}
        region: ${S3_REGION}
        access-key-id: ${S3_ACCESS_KEY_ID}
        secret-access-key: ${S3_SECRET_ACCESS_KEY}

Using the ${VAR} format allows us to pass the values for those variables as environment variables when the container is run, which we will be adding to our compose.yaml file:

     depends_on:
       - minio
 
+    environment:
+      - S3_ACCESS_KEY_ID=admin
+      - S3_SECRET_ACCESS_KEY=changeme
+      - S3_REGION=home
+      - S3_BUCKET_NAME=homelab
+      - S3_ENDPOINT=http://minio:9000
+ 
     ports:

Please note that this is in no way secure! I only used MinIO locally on my network, and it is not good practice to hardcode your credentials in your compose.yaml file. Don't do this in production! 😅
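
A slightly better option for local setups is to keep those values in an untracked .env file; docker compose automatically loads it and interpolates ${VAR} references in compose.yaml (a sketch; you would then replace the hardcoded values above with ${...} references):

# Keep credentials out of compose.yaml and out of version control
$ cat > .env <<'EOF'
S3_ACCESS_KEY_ID=admin
S3_SECRET_ACCESS_KEY=changeme
EOF
$ echo '.env' >> .gitignore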

3. Add Litestream to our container image

Now that our configuration is in place, we need to add Litestream to our container image, right after the headscale executable:

RUN --mount=type=cache,target=/var/cache/apk \
    --mount=type=tmpfs,target=/tmp \
    set -eux; \
    cd /tmp; \
    # Headscale
    { \
        export \
            HEADSCALE_VERSION=0.20.0 \
            HEADSCALE_SHA256=1553d776b915c897d15f86adec8648378610128fdad81a848443853691748e53; \
        wget -q -O headscale https://github.com/juanfont/headscale/releases/download/v${HEADSCALE_VERSION}/headscale_${HEADSCALE_VERSION}_linux_amd64; \
        echo "${HEADSCALE_SHA256} *headscale" | sha256sum -c - >/dev/null 2>&1; \
        chmod +x headscale; \
        mv headscale /usr/local/bin/; \
    }; \
    # Litestream
    { \
        export \
            LITESTREAM_VERSION=0.3.9 \
            LITESTREAM_SHA256=7c19a583f022680a14f530fe0950e621bedb59666a603770cbc16ec5d920c54b; \
        wget -q -O litestream.tar.gz https://github.com/benbjohnson/litestream/releases/download/v${LITESTREAM_VERSION}/litestream-v${LITESTREAM_VERSION}-linux-amd64-static.tar.gz; \
        echo "${LITESTREAM_SHA256} *litestream.tar.gz" | sha256sum -c - >/dev/null 2>&1; \
        tar -xf litestream.tar.gz; \
        mv litestream /usr/local/bin/; \
        rm -f litestream.tar.gz; \
    }; \
    # smoke tests
    [ "$(command -v headscale)" = '/usr/local/bin/headscale' ]; \
    [ "$(command -v litestream)" = '/usr/local/bin/litestream' ]; \
    headscale version; \
    litestream version
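
If you later bump the pinned versions, you can compute the new checksums locally before updating the Dockerfile (a sketch, using the current headscale release as an example):

# Download the release binary and print its SHA-256
$ wget -q -O headscale \
    https://github.com/juanfont/headscale/releases/download/v0.20.0/headscale_0.20.0_linux_amd64
$ sha256sum headscale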

But also remember to copy the needed configuration files:

COPY ./config/headscale.yaml /etc/headscale/config.yaml
COPY ./config/litestream.yml /etc/litestream.yml

To make things easy to control, introduced a new entrypoint to the container:

COPY ./scripts/container-entrypoint.sh /container-entrypoint.sh

ENTRYPOINT ["/container-entrypoint.sh"]

This new container-entrypoint.sh script is simple: Litestream's -exec flag runs headscale as a child process, so replication runs alongside it and the container stops when headscale stops:

#!/usr/bin/env sh

set -e

echo "---> Starting Headscale using Litestream..."
exec litestream replicate -exec 'headscale serve'

We can now build our headscale container image:

$ docker compose build

4. Ensure the bucket exists

Before we can start our new headscale container, we need to make sure there is a bucket to store the information.

Let's start MinIO container first:

$ docker compose up -d minio

Then, using our browser, visit http://localhost:9090, use our hardcoded credentials at the MinIO login screen, and create our homelab bucket.
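
If you prefer the terminal, the same can be done with MinIO's mc client (a sketch, assuming mc is installed on the host; the alias name local is arbitrary):

# Register the local MinIO endpoint and create the bucket
$ mc alias set local http://localhost:9000 admin changeme
$ mc mb local/homelab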

Now we can start the headscale container:

$ docker compose up

And see how Litestream shows replication information in the console:

---> Starting Headscale using Litestream...
litestream v0.3.9
initialized db: /data/headscale.sqlite3
replicating to: name="s3_replica" type="s3" bucket="homelab" path="headscale" region="home" endpoint="http://minio:9000" sync-interval=1s
2023-02-25T17:11:38Z INF No private key file at path, creating... path=/data/private.key
2023-02-25T17:11:38Z INF No private key file at path, creating... path=/data/noise_private.key

And by visiting the MinIO bucket browser, we can confirm that several files were uploaded to the bucket.

You can learn more about snapshots and WAL files by reading the Litestream documentation.
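
You can also inspect the replica state from the command line (a sketch, run inside the headscale container, where /etc/litestream.yml is picked up by default):

# List the snapshots and WAL files stored in the replica
$ litestream snapshots /data/headscale.sqlite3
$ litestream wal /data/headscale.sqlite3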

5. Handle recovery scenarios

All looks good, except that no data is restored if the volume containing the database is gone.

To fix that, we will need to introduce some changes to our entrypoint script:

 set -e
 
+echo "---> Attempt to restore headscale database if missing..."
+litestream restore -v -if-db-not-exists -if-replica-exists /data/headscale.sqlite3
+
 echo "---> Starting Headscale using Litestream..."

These lines will make sure that:

- The database is only restored if it does not already exist locally (-if-db-not-exists).
- The restore only happens if a replica actually exists (-if-replica-exists).

If no data is found, nothing happens. 😎
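
You can also rehearse a restore without touching the live database by restoring to a scratch path (a sketch, run inside the container; the output path is arbitrary):

# Restore the latest replica state to a temporary file for inspection
$ litestream restore -v -o /tmp/headscale-test.sqlite3 /data/headscale.sqlite3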

You can now try it by stopping the containers:

$ docker compose down

And manually removing the data volume used by headscale:

$ docker volume ls
DRIVER    VOLUME NAME
local     homelab-headscale_headscale
local     homelab-headscale_minio

$ docker volume rm homelab-headscale_headscale

So on the next start, you will see the restore process at work:

$ docker compose up
...
---> Attempt to restore headscale database if missing...
2023/02/25 17:15:38.121979 /data/headscale.sqlite3(s3_replica): restoring snapshot 6ebdd4bcbcc8b700/00000000 to /data/headscale.sqlite3.tmp
2023/02/25 17:15:38.129062 /data/headscale.sqlite3(s3_replica): restoring wal files: generation=6ebdd4bcbcc8b700 index=[00000000,00000000]
2023/02/25 17:15:38.131242 /data/headscale.sqlite3(s3_replica): downloaded wal 6ebdd4bcbcc8b700/00000000 elapsed=2.129252ms
2023/02/25 17:15:38.137348 /data/headscale.sqlite3(s3_replica): applied wal 6ebdd4bcbcc8b700/00000000 elapsed=6.08262ms
2023/02/25 17:15:38.137365 /data/headscale.sqlite3(s3_replica): renaming database from temporary location
...

And you can verify that the data survived by entering the headscale container and listing the users:

$ headscale users list
ID | Name    | Created            
1  | homelab | 2023-02-25 17:06:42
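
Or as a one-liner from the host (a sketch, assuming the compose service is named headscale):

$ docker compose exec headscale headscale users list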

Big win! 🥳

6. What's next?

Could I make a template for this and reduce the manual customization? Perhaps using fly secrets to store those values and having a single image manage that automatically for us?

Looking forward to next session! 😊