How to use docker as an alternative to k3s in TrueNAS SCALE

As a result of building my new NAS (read about the build in the efficient home server series), I’ve decided to switch to TrueNAS SCALE as its primary OS. However, coming from OpenMediaVault, there were a few things to get used to, and a few things that in my eyes are huge deal breakers in TrueNAS SCALE. In this article we will be talking about k3s, which is used as the backend for Apps in TrueNAS SCALE, and why I chose docker instead.

Just one note to clarify something: if you’re after ease of use and maximum usability, stick with k3s. Using k3s will let you install, remove and manage all your apps through TrueNAS SCALE’s pretty nice WebUI. However, if you know what you are doing (or think you know, like me 😁) and are not afraid of getting your hands dirty, I invite you to come along.

Why?

Firstly, why do I want to get rid of k3s? After all, if I replace it with something else, I won’t be able to use the nice WebUI, right? The main reason was my efficient home server build. Before installing any Apps, Containers or VMs, the idle power was around 12 W. With k3s merely enabled and not yet doing anything, my power usage was already up to around 17 W. Since minimal idle power usage on a reasonably fast machine was the main goal, the overhead of k3s was just insane. TrueNAS SCALE does not support any other backend, so an alternative solution had to be found.

The Options

Generally, there are some alternatives that could take k3s’ place in this endeavour. Altogether, I’ve narrowed the candidates down to:

  • docker
  • k8s
  • Podman

Just like k3s, k8s seems to have a huge overhead and thus increases power consumption substantially, so it’s not an option. I’m not too familiar with Podman; I had heard of it a few times before, but never used it. I wanted to give Podman a go, but with the complexity of the project growing over time, I decided to opt for docker, which I am quite familiar with already.

“What complexity?” you might ask. I’ll go through a few options later, but I chose to install docker inside a systemd-nspawn container for a few reasons. Since that approach was also new to me, I didn’t want to have two completely new systems to work with. Plus, even through all the different virtualization layers, I still need at least partial access to the host’s filesystem.

Docker in TrueNAS SCALE

To clarify, TrueNAS SCALE is not designed to run docker. You’re on your own, and you won’t get any support from iX Systems or the official TrueNAS community. Hence, there also isn’t one best way to run docker on TrueNAS SCALE, unlike on a normal debian system (which is what TrueNAS SCALE is based on), where you’d just quickly install it using apt (after adding the docker-ce repo) and be ready to go. In fact, it is possible to install docker on TrueNAS SCALE using apt with a few tricks. But it’s an incredibly bad idea, which I will explain in a second. Let’s talk about the options we have first, sorted from bad to good:

  • Installing docker using apt (don’t do it!)
  • Installing docker manually (not recommended)
  • Installing docker in a VM (Plan B)
  • Installing docker in a systemd-nspawn container (Plan A)

Installing docker using apt (don’t do it!)

First off: installing docker using apt is comparatively easy, but not permanent! Why? Let’s take a small detour and look at TrueNAS SCALE for a moment, specifically the way system updates work.

System updates in TrueNAS SCALE

As I mentioned before, TrueNAS SCALE is based on debian. Just like debian, it comes with all the goodies like apt. But apt is disabled by default, and TrueNAS SCALE does not use it for updating the system. In short, installing a major update on TrueNAS SCALE basically equals a system reinstall: TrueNAS SCALE creates a backup of your current configuration, downloads the updated system partition image, extracts it onto the system partition and reapplies your config. You might have guessed it: all changes you have previously made to the system partition will be gone. At least, that is my understanding of major system updates in TrueNAS SCALE.

Which directories are safe?

To clarify, from what I know only the system partition will be wiped, not the entire system disk. I generally use /mnt/services for my docker-related (and other services’) data, which in my case is a separate services partition on the system disk. By default, however, TrueNAS SCALE occupies the full system disk. I’ve written an article on how to claim the unused space, go check it out. If you don’t want to do it by hand, I’ve also written a script that will do it for you 👍

If you still want to do it…

With that out of the way, should you still want to install docker using apt, you need to enable apt first. Open up a terminal (WebUI or ssh, doesn’t matter) and execute the following command:

Bash
sudo chmod +x /usr/bin/apt-get

Congratulations, you’ve enabled apt on TrueNAS SCALE. Wasn’t too hard, was it? 😁 Most of the debian-specific goodies are still present in the TrueNAS SCALE system; they just have the execute bit masked out, so you can’t run them until you set it again. Keep in mind that all the execute bits you set will be gone again after the next major system update.
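If you’re curious, you can see this for yourself by listing a few of the packaging tools (stock debian paths assumed); binaries without an x in the mode string are the disabled ones:

Bash
ls -l /usr/bin/apt /usr/bin/apt-get /usr/bin/dpkg

Now you can simply follow the normal instructions on the docker site: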

Bash
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

For your convenience, the above code is copied from the docker page at the time of writing this article. Hence, please refer to the docker site, should it not work.
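Note that the commands above only set up the repository; the engine itself is then installed with (again, copied from the docker docs at the time of writing):

Bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin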

Changing the location of files (also not recommended…)

Undoubtedly, running docker permanently with this configuration will nuke all your containers and images as soon as you install a major TrueNAS SCALE update (which you definitely should…). Changing the location of docker’s root folder because of this isn’t very hard: just set the data-root parameter in the docker config file /etc/docker/daemon.json to whatever path you would like. Docker will subsequently store everything in the specified path.
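As a minimal sketch, assuming /mnt/services/docker-data as an example target path (note that this overwrites an existing daemon.json; merge by hand if you already have one):

Bash
# write the docker daemon config with a custom data root (example path!)
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "data-root": "/mnt/services/docker-data"
}
EOF
# restart docker so the new data root takes effect
sudo systemctl restart docker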

Still, after major updates to TrueNAS SCALE, you will have to

  • re-enable apt
  • re-install docker
  • change the default path of docker again

To be honest, I’ve never tried re-enabling docker after a TrueNAS SCALE update, so I have no idea if it even works. Take my word here with a grain of salt.

Installing docker manually (not recommended)

If you really hate your life, you might want to install docker manually to save your install from TrueNAS SCALE updates. The big advantage is that you can choose docker’s install location, and thus install it in a location that will survive TrueNAS SCALE updates. But there are two big disadvantages that come with this method:

  1. Statically linked docker binaries (unless you compile them yourself… you really do hate your life, don’t you?)
  2. No (semi-)automatic updates

The binaries docker Inc. offers are statically linked against all the libraries docker needs, so security fixes in those libraries only reach you through new docker binaries. Hence, you now have to keep your system updated and additionally update docker itself for it to be secure. That is the second drawback: you have to keep the binary up to date manually. I’ve never installed docker manually, and I don’t see the advantage in this situation.

Should you want to go this path, check out the official installation docs, I’m of no help here 😅

Installing docker in a VM (Plan B)

TrueNAS SCALE itself offers VMs, which do survive system updates. So you could create a virtual machine with the linux distribution of your choice and install docker into it. The big advantage of this method is that docker and all your containers will survive all updates. Still, the method has two drawbacks:

  1. Overhead of the virtual machine
  2. Access to the host file system

Even so, the overhead of a virtual machine isn’t negligible. The system load is higher, and thus the power consumption. Even without docker or any container running, the idle power of my system was already 4 W higher. Better than k3s, but still around 30% higher than the original idle power of my efficient home server. So, not an option for me.

The second drawback is the far more severe one: access to the files on your RAID is rather limited. To sum it up, according to the official documentation it’s only possible by accessing the host’s network shares (SMB, NFS or iSCSI) from the VM. This adds overhead not only to your power consumption; read and write speeds are affected, too. If that’s OK for you, this installation method is totally fine.
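As a rough sketch of what this means in practice: with an NFS share configured on the TrueNAS host, mounting your data inside the VM would look something like this (host IP, share path and mount point are placeholders, adjust them to your setup):

Bash
# inside the VM; assumes an NFS share was configured on the TrueNAS host first
sudo apt-get install nfs-common
sudo mkdir -p /mnt/media
sudo mount -t nfs 192.168.178.2:/mnt/tank/media /mnt/media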

Installing docker in a systemd-nspawn container (Plan A)

The following part is pretty similar to the 4th part of the efficient home server build 2024. If you haven’t read the series, you should absolutely go have a read 😁

Both of these drawbacks can be remedied by using a systemd-nspawn container and installing docker into that container. Yes, TrueNAS SCALE does come with systemd-nspawn, but you need at least version 24.04 (“Dragonfish”). The overhead of such a container is minimal compared to a VM: running the systemd-nspawn container without any additional containers inside didn’t cause a measurable increase in idle power. Additionally, by using simple bind mounts, I can easily access the files on my RAID from inside the container and even pass them through to the docker containers inside.

The idea of these “Jails” (that’s what these containers are often called in TrueNAS SCALE, following the BSD roots of TrueNAS) isn’t originally mine. Before I developed my own TrueNAS SCALE helper (more on that in a second), I used Jip ‘Jip-Hop’ de Beer‘s excellent Jailmaker. It’s a great tool, but I wanted something a bit more tailored to my needs.

I’ll be explaining the steps needed to get docker up and running in a systemd-nspawn container. But you might be interested in my script, which can do all of this for you automagically. All the steps I’ll be outlining in the following text are more or less commands taken out of my script. The steps you’ll have to go through are:

  1. Choosing the systemd-nspawn container rootfs’ storage location
  2. Preparing the rootfs
  3. Preparing the network bridge
  4. Starting the systemd-nspawn container
  5. Automatically starting the systemd-nspawn container at system boot
  6. Installing docker inside the systemd-nspawn container

1. Choosing the systemd-nspawn container rootfs’ storage location

Before creating the systemd-nspawn container, you must find a location to put the data for that container. And, as explained earlier, it must not be on the system partition, otherwise each update will wipe your systemd-nspawn container out of existence. You usually do not want that.

Personally, I am using the system disk, but a different partition than the one the system is installed on. Usually, TrueNAS SCALE does not allow that out of the box. But by pretty simple means it can be confined to a partition on the system disk, leaving the remaining space on the disk free for other things, as I have outlined in another article. You can also easily do it retroactively if you already have an older TrueNAS SCALE instance.

/mnt/services is where I store all my data related to the services my TrueNAS SCALE server offers. The rootfs for my systemd-nspawn container will be located in /mnt/services/nspawn-docker/rootfs.

2. Preparing the rootfs

Now comes the part I personally struggled with the most. The folder on the disk where your systemd-nspawn container’s rootfs is located has to be filled with all the necessary system files before you can use it. Debian does offer a pretty slick tool called debootstrap for exactly this purpose. But, you might have guessed it, by default it’s not installed in TrueNAS SCALE. I didn’t want to depend on system infrastructure that is disabled (apt) or may be entirely unavailable in the future, so debootstrap wasn’t my preferred tool.

Eventually I found out that the nspawn team offers an array of different root filesystems for exactly this purpose. You can find the full list of root file systems they offer here. I went for the debian bookworm tar image. To download and extract the image, execute the following commands:

Bash
cd /mnt/services/nspawn-docker
wget https://hub.nspawn.org/storage/debian/bookworm/tar/image.tar.xz
mkdir -p rootfs
tar -xf image.tar.xz -C rootfs
rm rootfs/etc/machine-id
rm rootfs/etc/resolv.conf
touch rootfs/etc/securetty
for i in $(seq 0 10); do
  echo "pts/$i" >> rootfs/etc/securetty
done

Explanation of the commands

With the first line you change into the directory that your rootfs folder will be in, so one level above the rootfs folder itself. The wget command downloads the base system image, mkdir creates the (initially empty) rootfs folder, and tar extracts the image into it.

The following steps need a bit more explanation. The file /etc/machine-id in the systemd-nspawn container’s rootfs will be created by systemd-nspawn during the first boot. If the file already exists, the container simply won’t start; hence, you need to delete it.

By default, systemd-nspawn will copy the host’s resolv.conf to /etc/resolv.conf in the container’s rootfs. Again, if there already is a file there, the boot process will simply abort.

The touch command and the for loop enable us to log into the systemd-nspawn container as root once it’s booted. The file /etc/securetty contains a list of all terminals considered secure, i.e. terminals that allow root logins. If you later log into the container from the host machine using the machinectl command, this usually happens through the pseudo-terminals pts/0 and above. The for loop adds pts/0 through pts/10 to the list of secure terminals, so you can basically have 11 parallel secure “connections” from the host machine into the systemd-nspawn container.

3. Preparing the network bridge

I wanted the systemd-nspawn container to use a different IP than the host machine. The easiest way to achieve this is by using a network bridge. There are two ways to do this in TrueNAS SCALE; I’ll go through both here.

Adding the bridge through the WebUI

Make sure you’ve got a screen hooked up to your TrueNAS SCALE machine, since we need it to find out the new IP address, should you want the network bridge to be configured through DHCP.

  1. In the WebUI, open the network panel. This should show you all available network adapters, including bridges.
  2. Click on “Add”.
  3. In the “Add Interface” panel that pops up, choose “Bridge” as the type. Choose a name (I’ll go with “br0”) and optionally add a description. Do not check DHCP at this point and do not select a bridge member yet. Click on “Save” to return to the overview.
  4. Apply the settings and confirm them.
  5. Open the settings of the original network adapter, here enp0s3, by pressing the pen symbol in that row.
  6. In the now open “Edit Interface” panel, uncheck the DHCP checkbox. Click on “Save” and return to the overview.
  7. Do not apply the settings yet!
    Open the settings of the network bridge created in step 3, again by pressing the pen symbol next to it.
  8. After the “Edit Interface” panel has appeared again, check the DHCP checkbox and add “enp0s3” as Bridge Member. Click “Save” to return to the overview.
  9. Click “Apply” to apply the new settings.
  10. To confirm the settings, you’ve got to be quick. Take a look at the screen hooked up to the TrueNAS SCALE machine. At some point it will show you a message like
    The web user interface is at:
    http://192.168.178.xxx
    https://192.168.178.xxx

    Quickly head to that IP in the browser and log in. Open the “Network” panel and confirm the settings.

Now you should have a fully functioning network bridge.

With TrueNAS SCALE 24.04 you may also do everything in one go, without applying the settings in step 4. With pre-24.04 versions of TrueNAS SCALE I sometimes had problems creating the network bridge, and applying the settings in step 4 helped me in those cases.

Adding the bridge through the CLI

The commands we’ll be using assume the following:

  • Ethernet port name: enp4s0
  • Bridge name: br0

Bash
cli -c "network interface update enp4s0 ipv4_dhcp=false"
cli -c "network interface create name=\"br0\" type=BRIDGE bridge_members=\"enp4s0\" ipv4_dhcp=true"
cli -c "network interface commit"
cli -c "network interface checkin"

That’s it, we’re done 😊

What it does is similar to what you’d manually do in the WebUI:

  1. Disable DHCP for the ethernet port
  2. Create the bridge and add the ethernet port as a member, plus enable dhcp for the bridge
  3. Commit the changes (will start a 60s timer, after which the settings will be reverted)
  4. Confirm the changes and make them permanent.

Note that the IP of your bridge might be different. If you have a screen connected to your NAS, the new IP will be displayed there. Otherwise use the interface of your DHCP server to find out the new IP. If you want the bridge interface to have the IP of the ethernet adapter, either configure your DHCP server accordingly or manually set the IP of the bridge:

Bash
cli -c "network interface update enp4s0 ipv4_dhcp=false"
cli -c "network interface create name=\"br0\" type=BRIDGE bridge_members=\"enp4s0\" ipv4_dhcp=false aliases=[\"<IP of the ethernet adapter>/24\"]"
cli -c "network interface commit"
cli -c "network interface checkin"

Make sure to replace <IP of the ethernet adapter> in the second row with your IP and enp4s0 in the first two rows with your actual network adapter.

4. Starting the systemd-nspawn container

With the network bridge in place now, we can finally start the systemd-nspawn container.

This basically is just a single command, but you need to adjust a few things for your case:

  • --unit sets the unit’s name, in my case “dockerNspawn”. Check this article if you want to know details about systemd units.
  • --working-directory has to be set to the directory your rootfs folder is in. Make sure not to give the rootfs folder itself here, but rather its parent folder! In my case that’s /mnt/services/nspawn-docker.
  • --description is optional, but it will certainly help if you have multiple systemd-nspawn containers running on the same machine.
  • --machine sets the container’s name. I’ve chosen the same as the unit’s name here.
  • --network-bridge must match the name of the network bridge you’ve created earlier

Bash
systemd-run --property=KillMode=mixed --property=Type=notify --property=RestartForceExitStatus=133 --property=SuccessExitStatus=133 --property=Delegate=yes --property=TasksMax=infinity --collect --setenv=SYSTEMD_NSPAWN_LOCK=0 --unit=dockerNspawn --working-directory=/mnt/services/nspawn-docker '--description=systemd-nspawn container created by TNSH to run docker' --setenv=SYSTEMD_SECCOMP=0 --property=DevicePolicy=auto -- systemd-nspawn --keep-unit --quiet --boot --machine=dockerNspawn --directory=rootfs --capability=all '--system-call-filter=add_key keyctl bpf' --network-bridge=br0 --resolv-conf=bind-host

It would have taken me considerably longer to figure this out without Jip-Hop’s jailmaker, so I would like to send a big thanks to him at this point. Go check out his script.
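To verify that the container actually came up, you can ask systemd and machinectl (unit and machine names as chosen above):

Bash
systemctl status dockerNspawn   # the transient unit created by systemd-run
machinectl list                 # the running container should show up here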

Additionally, if you want to bind a folder of your host machine into the container, you can add --bind='hostFolder:containerFolder' to the systemd-nspawn part of the command. You can also add multiple bind arguments.
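For example, to expose a media dataset of the host inside the container, the extra arguments could look like this; the paths are purely hypothetical, so adjust them to your pools. --bind-ro works the same way, but mounts read-only:

Bash
# hypothetical example paths; append these to the systemd-nspawn part of the command above
--bind='/mnt/tank/media:/mnt/media' --bind-ro='/mnt/tank/backups:/mnt/backups'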

5. Automatically starting the systemd-nspawn container at system boot

The container has to be started at each boot. Since TrueNAS SCALE uses its own init system, you need to start the container from there, which, luckily, is easily possible through the WebUI. In the WebUI, navigate to Settings > Tasks > Init/Shutdown Scripts. To add a new init command, click on Add. In the dialog that opens, select Command as the Type. Into the command field, copy the command you used to start the container above. For “When”, select After start.

Alternatively, if you’re using my TrueNAS SCALE helper script, it will automatically handle all this for you.

6. Installing docker inside the systemd-nspawn container

Now that we’ve got our systemd-nspawn container running, we need to get docker (the docker engine, to be precise) running inside it. Once you open a shell inside the systemd-nspawn container, installing docker is no different than installing it on any other machine. To enter the container, use the machinectl command:

Bash
machinectl shell dockerNspawn

Make sure to replace the dockerNspawn argument with the container’s name you specified with the --machine argument during creation/start.

Finally, to install docker, you can simply follow the instructions on the docker docs page: the debian you now have inside the systemd-nspawn container is an ordinary debian bookworm, so the easiest method at this point is to install the docker engine using apt. In short, here are the commands you need, copied from the docker docs:

Bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
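With the repository in place, the engine itself still needs to be installed. At the time of writing, the package list from the docker docs looks like this (check there in case the package names have changed):

Bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin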

Optional: Installing Portainer

To manage the containers, I personally like to use Portainer. Again, at this point the systemd-nspawn container can be treated as an ordinary debian bookworm system, so installing Portainer CE is simple: just create permanent storage for the configuration and start the Portainer container using docker:

Bash
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
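After a few seconds, the Portainer WebUI should be reachable at https://<your-NAS-IP>:9443. For a quick sanity check from inside the systemd-nspawn container:

Bash
docker ps --filter name=portainer   # should list the running portainer container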

Conclusion

To conclude, in this article I’ve shown you a few ways to get docker running on your TrueNAS SCALE install. I hope I was also able to convey why some of these methods are better than others, and that you now have a better idea of TrueNAS SCALE’s inner workings.

Should you have ideas for improvements or questions, just leave a comment below! If you just want to say hi, drop me a line through the contact form. 👋
