Adding Infrastructure to Proxmox
In a previous post, I talked about the “AK-fortyserver”, a cheap NAS appliance that I added Proxmox VE to. I named mine “Scratchy”.
I got my Synology NAS backed up to Scratchy so that I could pull its disks and replace them. I was going to buy 8TB disks for it, but I decided to go in another direction. I will discuss that larger project in another post. Right now, let's focus on the apps I am setting up on Scratchy.
The importance of NFS
I prefer to use the old Unix Network File System (NFS) on Linux servers, rather than the newer Server Message Block (SMB) file system that I use on Windows. Honestly, it's probably a bias from working with both Unix and Windows for a long time. Unix systems mount network shares into the filesystem and don't care where: you can mount a share at /home and Unix won't care; you can mount it at /nfs/remote-server and Unix won't care. You can do all sorts of group policy things to fool Windows into thinking a network drive is just part of the C: drive, but the NFS method is superior, in my opinion.
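If you have never done it, mounting an NFS share wherever you want really is a one-liner. The hostname and paths below are made-up examples, not anything from my setup:

```bash
# One-off mount of an NFS export, placed wherever you like:
mount -t nfs scratch-nas:/srv/storage /nfs/scratch-nas

# Or make it permanent with a line in /etc/fstab:
# scratch-nas:/srv/storage  /nfs/scratch-nas  nfs  defaults,_netdev  0  0
```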
Yes, there is a bunch of stuff you can do with Proxmox to run NFS in containers, but in the end, it's just easier and more portable to use VMs. So the first thing I did was rebuild my NAS as a VM.
Scratch-NAS
Scratchy has room for 4 drives: two NVMe and two SATA. One NVMe is currently the Proxmox VE system drive, which is also where the boot disk for the Scratch-NAS VM resides. I originally planned to put ZFS on the SATA drives and use mount points for containers, but since the NAS is now a VM, I figured I would just pass the SATA drives through to the VM and use NFS and Samba to make them available to other VMs.
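For reference, whole-disk passthrough is done from the Proxmox host shell with qm set. The VM ID and disk serial below are placeholders, not my actual values:

```bash
# Find the disk's stable by-id name (it survives reboots, unlike /dev/sdX):
ls -l /dev/disk/by-id/ | grep ata

# Attach the whole disk to VM 100 as a SCSI disk:
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```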
I elected to use the Turnkey Linux File Server again, but this time I went with the ISO. One of the SATA drives is 16TB; the other is an old 2TB drive I had lying around.
The 16TB drive is the main place for backing up data from my Synology. The other drive I will use for BitTorrent. It's been my experience that BitTorrent kicks the crap out of hard drives, so don't use an expensive drive, and definitely don't use a RAID array.
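Sharing the drives over NFS from the NAS VM is just a couple of lines in /etc/exports. The paths and subnet here are examples rather than my exact layout:

```
# /etc/exports on the Scratch-NAS VM
/srv/storage   192.168.1.0/24(rw,sync,no_subtree_check)
/srv/torrents  192.168.1.0/24(rw,sync,no_subtree_check)
```

Run exportfs -ra afterward to reload the export table.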
snARRf
For my piracy operation, I use a series of Docker containers from LinuxServer.io. I use Transmission for downloading files, and various automation tools (RadARR, SonARR, LidARR, etc.) for doing piracy hands-free. I use NFS to mount a disk for keeping torrent files, and another for the final destination of those files. Also, I have found that a VM running Docker containers is easy to back up and restore, as well as move from host to host.
When you are working with Docker, do all of your operating system shit on the Docker host VM, not in the containers. I put the NFS mounts on the host, and then use bind mounts to make the NFS folders available to the Docker containers.
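Here is a rough sketch of what that looks like, using LinuxServer.io's Transmission image as the example. The host-side paths are hypothetical NFS mount points, not my actual layout:

```yaml
# docker-compose.yml on the Docker host VM
services:
  transmission:
    image: lscr.io/linuxserver/transmission:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./transmission-config:/config
      # Bind mount of an NFS share that is already mounted on the host:
      - /nfs/torrents:/downloads
    ports:
      - "9091:9091"
    restart: unless-stopped
```

The containers never know or care that /downloads is really NFS; if the share ever moves, I remount it on the host and the containers keep working.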
Yes, I know that running containers on a VM that is running on a tiny Linux machine is kind of Inception-esque, but it's also kind of cool.
LXC networking tools
I have a few different networking tasks that I like to run on my homelab:
- Bastion host: Getting safe access to your internal network is important. My usual tool for access is an overlay network. A secondary/emergency tool I also use is a bastion host, a stripped-down Linux server that is accessible via SSH. I will go into greater detail about it in another post, but think of this container as the emergency entrance to your fortress.
- Swedish Internet router: In order to run BitTorrent without getting busted, I use a VPN client. I have several VMs, containers, and other things that want to use that connection, so I have a Linux container that serves as a kind of router, pushing traffic through a VPN tunnel to Sweden. There is a bit of work to be done for an unprivileged container to get access to the /dev/tun device on the host (the short version is sketched after this list), which, you guessed it, I will go into detail about later.
- Overlay routers: Ultimately I want to have servers in a few locations that I can access from anywhere via Zero Trust overlay networking, but for now I am just using Tailscale. Tailscale does a bunch of cool things, but the two I am going to talk about first are exit nodes and subnet routers. An exit node lets traffic from inside your overlay network use the exit node's network link, which is really handy for securing your Internet traffic. Subnet routers do something similar: they make the local LAN available to your overlay network.
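Before moving on, here is the short version of that /dev/tun trick from the Swedish Internet router item. On the Proxmox host, the unprivileged container's config gets two extra lines (the container ID in the filename is a placeholder):

```
# /etc/pve/lxc/110.conf
# Allow the tun character device (major 10, minor 200) and bind it into the container:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```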
Back to Tailscale: I use one Linux container as an exit node, which lets me secure traffic for devices on nosy or shady networks, like work or a coffee shop wifi. I use another Linux container as a Tailscale subnet router to make the rest of my homelab's LAN accessible to my Tailscale nodes. This way, I don't have to install Tailscale on every container and server I am hosting, or on my Proxmox VE hosts. I can just run a couple of containers and access everything.
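The Tailscale side of that is pleasantly boring. Roughly (the LAN subnet below is an example):

```bash
# Both containers need IP forwarding enabled first:
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' >> /etc/sysctl.d/99-tailscale.conf
sysctl -p /etc/sysctl.d/99-tailscale.conf

# On the exit node container:
tailscale up --advertise-exit-node

# On the subnet router container:
tailscale up --advertise-routes=192.168.1.0/24
```

Either way, you still have to approve the exit node and the advertised routes in the Tailscale admin console before other nodes can use them.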
So there you have it: one storage VM, one Docker VM, and three Linux containers for specialty networking. I know that I hinted at how to connect this little NAS appliance to your hosted Linux container, but the overlay network has to be in place first. A third VM that I want to run is a Plex server, but I may need some more CPU power for that.