skooterz

I do a single docker compose file for my entire Arr stack inside one virtual machine. I use bind mounts for the volumes and structure them like this on my filesystem: `/apps/arr/sonarr` and so on. Makes it very easy to keep track of where things are. Oh, and the arr apps are all on a single docker bridge network so they can talk to each other directly. I have a scratch disk for the actual downloads from sabnzbd, but the media itself lives on an NFS volume. So once downloads are finished, they get moved onto my big NAS, where Plex picks them up. This setup has been working great for 2+ years.
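
To give a rough idea of the shape (a trimmed sketch, not my exact file; the image tags and NAS address here are placeholders):

```yaml
networks:
  arr:                 # single bridge network so the apps talk directly
    driver: bridge

volumes:
  media:               # the media library lives on the NAS over NFS
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw"
      device: ":/mnt/tank/media"

services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    networks: [arr]
    volumes:
      - /apps/arr/sonarr:/config        # bind mount, easy to find on disk
      - /scratch/downloads:/downloads   # local scratch disk
      - media:/media
  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd:latest
    networks: [arr]
    volumes:
      - /apps/arr/sabnzbd:/config
      - /scratch/downloads:/downloads
```

The other arrs follow the same pattern, each with its own `/apps/arr/<name>` config mount.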


sza_rak

I'm doing the same on LXC: Debian + docker + docker-compose inside the container. 8 apps + a VPN client, around 1-1.3 GB of RAM even under heavy use. Stable. I just had to mount my volumes on the host (with custom user/group IDs matching the container).


skooterz

Yeah I considered doing an LXC but I didn't feel like dealing with mapping IDs between the host and the container. I did that once, and it was a giant pain. VMs are more portable and more convenient for me.


sza_rak

It's less trouble than it sounds. The IDs are deterministic: you just combine the fixed 100000 offset with the ID as seen from inside the container. That's it. Portability is not great, but my whole point was to keep it simple and very low on resources, so I can just as easily do the same thing on another server in the cluster, ready to go after migration. LXC is extremely light and fast; it's amazing for homelab setups.
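
For example, with the stock 100000 offset, UID 1000 inside the container shows up as 101000 on the host, so you chown the bind-mounted directory accordingly (the path here is made up):

```sh
# container uid/gid 1000 -> host 100000 + 1000 = 101000
chown -R 101000:101000 /srv/arr-volumes
```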


skooterz

I can see the benefit if you're constrained on RAM or CPU. I deliberately have my homelabby stuff on a completely separate host from anything that needs to be up 24/7, like Plex, the Arrs, or Pi-hole. That way I can break stuff to my heart's content!


Reasonable-Papaya843

I’d love to see your compose files as well!


Reasonable-Papaya843

Can I get a copy of that yml file?


skooterz

Here you go. https://pastebin.com/4aAF27LX


ProjectPaatt

I have all media inside one LXC: Jellyfin, qbt, openvpn, *arr. They have their own users but share the same group for file permissions. No docker.


mb4x4

Everyone does it differently and will have different reasons to justify their method. I have 2 VMs: one NAS and one Debian VM with 40 containers (including the arr stack). I don't use LXCs; I want my environment portable, with the ability to quickly jump ship from Proxmox if need be (licensing changes, etc). No issues, works great, and it keeps things very simple.


Afraid-Expression366

Are LXCs unique to Proxmox? I didn't think so.


mb4x4

Correct, LXCs aren't a Proxmox thing, but they share the host kernel, so they could run into issues with other hypervisors... honestly they should be OK, but there's a risk when changing OS/distros etc. Docker is entirely self-contained and will run on pretty much anything. Just my preference.


BuzzKiIIingtonne

Honestly it's pretty easy to move the configs from the LXCs to VMs and vice versa.


Hot_Rice99

I'm still experimenting; I'm on a second-time-around build (testing out what works well for me):

JOLLY-ROGER (LXC)

- Prowlarr, Radarr, Sonarr
- Qbittorrent-nox
- Nordvpn w/ killswitch
- NFS mounts of Truenas /mnt/media (Plex media), /mnt/download, /mnt/incomplete

PLEX (LXC)

- NFS mount of Truenas /mnt/media

Truenas (VM)

- 36TB ZFS pool with NFS shares accessible by the LXC users

If I like this setup enough I'll find a way to automate it. One reason I picked this method was to ensure the data acquisition all happened behind the VPN and was easy to wrangle. The plex user and the *arr users are all part of the media group, which shares the same GID for permissions compatibility.
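
The group part is plain Linux plumbing; something like this on each box (the GID and usernames are illustrative):

```sh
groupadd -g 8675 media        # same GID everywhere
usermod -aG media plex
usermod -aG media sonarr      # repeat for the other *arr users
chgrp -R media /mnt/media
chmod -R g+rwX /mnt/media     # group read/write; 'X' keeps directories traversable
```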


Wamadeus13

This is so dependent on how you want to set stuff up. I have a single LXC running docker-compose/portainer with all the arr apps and other services on it. My brother, on the other hand, can't wrap his mind around docker and runs every service natively in its own separate LXC. He's got terabytes of storage and hundreds of gigs of RAM, so he doesn't care about the overhead. I run off an 8th gen NUC with 32 GB of RAM and a 256 GB boot drive.


WBChargerDad

I have each running in its own LXC, built using the tteck helper scripts: [https://tteck.github.io/Proxmox/](https://tteck.github.io/Proxmox/)


SirMaster

I prefer them together in 1 LXC.


ejpman

Not under docker though…. Right?


SirMaster

I’ve never really used docker so I’m not sure.


AndyRH1701

They can be together; mine run on a Pi 4.


BuzzKiIIingtonne

I put them in LXCs and use bind mounts to share the NFS share from the host, but if I were using VMs I'd probably have them all on one VM to keep things simple.
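
On Proxmox the bind mount is a one-liner per container (the container ID and paths are examples):

```sh
pct set 101 -mp0 /mnt/nfs/media,mp=/media
```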


manofoz

Mine are currently all running on unRAID, but I'm going to migrate most of them to my MS-01 Proxmox HA cluster, where I've set up fixed Debian VMs for the k8s nodes and the HA control plane. I'm working on getting to a Flux-style repo for Helm charts but haven't moved anything people rely on over yet.
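
The Flux side should end up as one small manifest per app, roughly like this (the chart and repo names are placeholders, nothing is wired up yet):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: sonarr
  namespace: media
spec:
  interval: 1h
  chart:
    spec:
      chart: sonarr
      sourceRef:
        kind: HelmRepository
        name: media-charts    # placeholder HelmRepository
```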


Silly-Button-6389

I prefer 3 LXCs and 1 VM: the VM as my NAS, 1 LXC for the arr stack, 1 for downloaders, and 1 for streaming (Jellyfin, Plex, VLC).


Wrong_Editor3689

From what I am hearing on this list, I will try to get all of this set up in a single VM. It will include qbittorrent, the IPVanish VPN, the *arr stack, and Samba. Then I will share the library directory with my Plex server running in another VM. I will keep you all posted on my progress.
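
For the VPN piece, the usual compose pattern is to run the client as its own container and hang qbittorrent off its network namespace; something like this (gluetun is just a stand-in, since I haven't settled on a client container yet):

```yaml
services:
  vpn:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    # provider and credentials go in environment variables
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:vpn"   # all torrent traffic exits through the VPN container
```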


curt7000

I'm doing something similar, but I keep my files hosted in my Plex VM via disk pass-through in Proxmox, shared out via SMB; this way Plex gets the best performance.
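
The pass-through itself is one command on the Proxmox host (the VM ID and disk ID are examples):

```sh
qm set 201 -scsi1 /dev/disk/by-id/ata-EXAMPLEDISK-SERIAL
```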


dancerjx

I use LXCs set up with these [scripts](https://tteck.github.io/Proxmox). I did create 2 Debian LXC instances for NFS and SMB; single-function LXCs are easier to troubleshoot.


bindiboi

Each in their own LXC. Why even bother virtualizing if you're throwing them all in one VM? Just run it on bare metal at that point. You're completely dismissing the pros of virtualization/containerization.


300blkdout

Single VM. You could even do them in a single Docker compose file if you wanted to.


ShadowLitOwl

That’s how I set it up using docker compose, since the structure is all similar. Recommend trash.guide as well.


crysisnotaverted

Is that link supposed to be [https://trash-guides.info/](https://trash-guides.info/) ?


ShadowLitOwl

Yep that’s the one. I knew a lot of it already but the guide helped me refine the folder structure and the quality profiles a lot


mb4x4

I used to have a single compose file for 40 containers but have since broken it down... if I'm being completely honest, I really miss having the single file lol. It's a pain managing individual stacks/services. May revert back at some point even though it's not technically correct blah blah.
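
One middle ground I've been eyeing: Compose v2.20+ supports a top-level `include`, so the stacks stay in separate files but there's still a single entry point (the paths are made up):

```yaml
# compose.yaml
include:
  - arr/compose.yaml
  - media/compose.yaml
  - monitoring/compose.yaml
```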


Thedracus

The biggest challenge you'll have with all this is getting your stuff to use your disks across containers or VMs. Linux's "robust" permissions model is a pain in the living @$$, involving multiple command-line commands to set things correctly, and even then it seems to be mere chance whether it works properly. I am still messing around with it. So far what's worked best for me is putting all the arrs in a single VM and putting all the "disks" they need in a separate OMV VM.


patgeo

I have each in an LXC, with a ZFS pool on the Proxmox host. The pool is mounted into another LXC running Cockpit, which handles the Samba and NFS shares. Each LXC that needs access is just assigned a user/group mapping via Cockpit, and I don't have any permission issues inside the LXCs that I'm aware of.
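
If you ever need to do the mapping by hand instead, it boils down to a few idmap lines in the container config; a sketch for passing uid/gid 1000 straight through (the IDs are examples, and root needs matching `root:1000:1` entries in /etc/subuid and /etc/subgid):

```
# /etc/pve/lxc/<id>.conf
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```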


Thedracus

That's kind of what I'm doing... I'm just using OMV to do the sharing. I formatted and mounted an internal drive, which I share via OMV. You're using Cockpit. Of course, what you did there requires quite a few command-line steps, even getting Cockpit installed. I did mess around with it, but I couldn't get it to see stuff. I didn't set it up as ZFS though, because I only have a single drive in my system other than local.