A major piece of self-hosted infrastructure is https://www.docker.com/. I have a love/hate relationship with Docker. On the one hand, it makes installing and running applications easy; on the other hand, it feels like wasteful abstraction. The point of virtualization and containerization is to make deploying applications easier, and I like to do that with [Turnkey Linux](https://www.turnkeylinux.org/) containers and VMs. But like any application library, there is always some app that isn't available in the flavor you'd like. Docker has the largest library of things, so it's basically a necessary evil.

My problem with Docker is that it's a "platform within a platform," meaning you've replaced fiddling with Linux commands and config files with fiddling with Docker commands and config files. Only, you still need a working Linux (commands and configs) to get Docker running. AND once you get Docker working, how do you back it up and restore it? I guess you could run your Linux in a VM…

Aaaand now my eyes have gone crossed.

At this point in time, with my current set of skills, I think the key is to use a Linux VM that is easy to fiddle with (configure networking, add software like NFS, etc.) and easy to back up and restore.

Proxmox Plus Turnkey Linux#

This is why I like the Proxmox + Turnkey Linux stack. Yes, there are lighter weight versions of Linux, like https://www.alpinelinux.org/, but this is a Docker post, not a "check out how little Linux I need to Linux with" post.

A day may come when I use Alpine for everything. But it is not this day. This day I bid you stand, Men of Debian!

[Turnkey Linux Core](https://www.turnkeylinux.org/core) is the right mix of minimal-ish Linux, with utilities that reduce config file fiddling. It's easy to install a VM from the ISO and configure networking and SSH access in a few minutes. Just use the 'confconsole' command to set the hostname and switch to a static IP. Make sure you've updated and upgraded to the latest packages, and the Linux part is done. At this point I recommend shutting down the VM and using Proxmox Backup to create a backup of your VM.
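
For reference, that post-install fiddling boils down to a couple of commands (confconsole is TurnKey's built-in configuration tool; the hostname and static IP settings live in its menus):

# bring the base system up to date
apt update && apt upgrade -y

# TurnKey's configuration console: set the hostname and switch to a static IP
confconsole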

The next phase is NFS, which is why I chose a VM instead of a container. I am sure you can get NFS working on an unprivileged container, but I’ve not figured out how.

First, you need to install nfs-common and create a mountpoint for NFS:

apt install nfs-common
# I like to use /mnt for local disks and /nfs for remote disks
mkdir -p /nfs/data

Then, edit your /etc/fstab and add this line at the end of the file, where 192.168.1.2 is the IP of the NFS server, and /mnt/data is the exported file share:

192.168.1.2:/mnt/data /nfs/data/ nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
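
Before rebooting, you can sanity-check the new entry by mounting it by hand and making sure you can read and write the share:

# mount everything in /etc/fstab that isn't already mounted
mount -a

# confirm the share is mounted and writable
df -h /nfs/data
touch /nfs/data/test-write && rm /nfs/data/test-write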

Then reboot the VM (the TKL Core VM) and log in. Assuming the NFS server and its permissions are configured correctly, you should see files from your NAS in the /nfs/data directory on your VM. I will write up NFS server configuration in another post; I use my Synology for most of my NFS stuff.

It’s Dockerin’ Time!#

Once you have NFS working and you are able to create and modify files in your /nfs/data directory, I recommend making another backup of the VM. If you are low on disk space, you can delete the previous backup.
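
I usually make these backups from the Proxmox GUI (the VM's Backup tab), but you can also do it from the Proxmox host's shell with vzdump. A rough sketch, assuming a VM ID of 101 and a backup storage named 'pbs' (swap in your own values):

# snapshot-mode backup of VM 101 to the storage named 'pbs'
vzdump 101 --storage pbs --mode snapshot --compress zstd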

Now it's time to install Docker. Like I always do, here is a bunch of commands (straight from Docker's official Debian install docs) with 'sudo' removed. Just paste the whole blob into your SSH session:

# remove old Docker installs. This step probably isn't necessary on a new VM
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do apt-get remove $pkg; done

# Add Docker's official GPG key:
apt-get update
apt-get install ca-certificates curl
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update

apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
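
Once that finishes, verify that the engine is installed and the daemon is running:

# check the installed version and daemon status
docker --version
systemctl status docker --no-pager

# optional smoke test: run Docker's hello-world container
docker run --rm hello-world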

There are two ways I use Docker: Docker Compose and Portainer.

Docker Compose for SnARRf#

I have one Docker host that I use for all my piracy shit. I use NFS to mount my media share to /nfs/data and the torrent disk on my scratch-nas to /nfs/torrents. I then use [bind mounts](https://docs.docker.com/engine/storage/bind-mounts/) to give the *arr apps access to all of the data they will be working on.
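
The torrents mount is just a second fstab line like the one above: create the mountpoint with mkdir -p /nfs/torrents, then add a line like this. A sketch, assuming the scratch-nas lives at 192.168.1.3 and exports /mnt/torrents (both made up; use your own IP and export path):

192.168.1.3:/mnt/torrents /nfs/torrents/ nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0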

The [*arr apps](https://wiki.servarr.com/docker-guide) (Sonarr, Radarr, Lidarr, etc.) prefer Docker Compose. So here is a sample file:

services:

  ddclient:
    image: lscr.io/linuxserver/ddclient:latest
    container_name: ddclient
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/New_York # needs a valid IANA zone name; Etc/EST isn't one
    volumes:
      - /docker/ddclient/data:/config
    restart: unless-stopped

  transmission:
    image: lscr.io/linuxserver/transmission:latest
    container_name: transmission
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/New_York
      - TRANSMISSION_WEB_HOME= #optional
      - USER= #optional
      - PASS= #optional
      - WHITELIST= #optional
      - PEERPORT= #optional
      - HOST_WHITELIST= #optional
    volumes:
      - /docker/transmission/data:/config
      - /nfs/torrents/:/downloads
    ports:
      - 1337:9091
      - 9091:9091
      - 51413:51413
      - 51413:51413/udp
    restart: unless-stopped

  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/New_York
    volumes:
      - /docker/prowlarr/data:/config
    ports:
      - 9696:9696
    restart: unless-stopped

  bazarr:
    image: lscr.io/linuxserver/bazarr:latest
    container_name: bazarr
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/New_York
    volumes:
      - /docker/bazarr/data:/config
      - /nfs/data/Movies:/movies #optional
      - "/nfs/data/TV Shows:/tv" #optional
    ports:
      - 6767:6767
    restart: unless-stopped

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/New_York
    volumes:
      - /docker/radarr/data:/config
      - /nfs/data/Movies:/movies #optional
      - /nfs/torrents/:/downloads #optional
    ports:
      - 7878:7878
    restart: unless-stopped

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/New_York
    volumes:
      - /docker/sonarr/data:/config
      - "/nfs/data/TV Shows:/tv" #optional
      - /nfs/torrents/:/downloads #optional
    ports:
      - 8989:8989
    restart: unless-stopped

  lidarr:
    image: lscr.io/linuxserver/lidarr:latest
    container_name: lidarr
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/New_York
    volumes:
      - /docker/lidarr/data:/config
      - /nfs/data/Music:/music #optional
      - /nfs/torrents/:/downloads #optional
    ports:
      - 8686:8686
    restart: unless-stopped

  readarr:
    image: lscr.io/linuxserver/readarr:develop
    container_name: readarr
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/New_York
    volumes:
      - /docker/readarr/data:/config
      - /nfs/data/Books:/books #optional
      - /nfs/torrents/:/downloads #optional
    ports:
      - 8787:8787
    restart: unless-stopped
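
Save that as docker-compose.yml somewhere on the VM (I keep this kind of stuff under /docker, but that's just my habit), then bring the whole stack up from that directory:

# pull the images and start every service in the background
docker compose up -d

# check that everything came up, and tail logs if something didn't
docker compose ps
docker compose logs -f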

That’s a ton of shit to go through, which I will do at a later date. But for now, that’s all the shit I have running on SnARRf, my docker host for piracy apps. I haven’t configured all of it yet.

Portainer for OctoCat#

I have a second host that I use for different cloud/network apps like https://rustdesk.com/, https://guacamole.apache.org/, and https://meshcentral.com/. For that host, I prefer to manage everything with Portainer: https://docs.portainer.io/start/install-ce/server/docker/linux.

Portainer is like a graphical user interface for Docker. It's a Docker container that lets you set up other Docker containers. Yes, it's a platform, within a platform, within another platform. It enables people like me who don't know shit about Docker to actually use Docker, so just lean into it.

docker volume create portainer_data

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:lts

If Portainer installed and started correctly, you can check its status by typing 'docker ps':

CONTAINER ID   IMAGE                          COMMAND                  CREATED       STATUS      PORTS                                                                                  NAMES             
de5b28eb2fa9   portainer/portainer-ce:lts     "/portainer"             2 weeks ago   Up 9 days   0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp   portainer

You can log into Portainer by going to "https://dockerhostIP:9443" (port 9443 is the web UI; port 8000 is for Edge agents) and following the prompts from there.

You will see an option to "add environments" and you will be tempted to add your piracy stack. I guess you could do that for monitoring, but most guides for the *arr apps recommend managing them with plain Docker Compose rather than Portainer. If you want to give it a shot, I recommend making a backup of both VMs before you go down that path.
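
For what it's worth, adding another Docker host as an environment means running the Portainer agent on that host and then adding it under Environments in the Portainer UI. A rough sketch based on Portainer's agent install docs; the port and image tag here are the defaults at the time of writing, so double-check the current docs:

# run the Portainer agent on the other Docker host (SnARRf, in my case)
docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:latest

Then, back in Portainer, add the environment by its address (the other host's IP, port 9001).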

Conclusion#

That is a pretty rough-and-tumble guide to Docker. There is a bunch of shit that I didn't cover. But if you read through other docs and howtos, the commands and shit will start to look familiar.

Mostly, I use posts like this to jog my memory when I am doing these things.