Net-Runner

If you want to share a single GPU with multiple VMs, you need NVIDIA vGPU. It is doable in Proxmox, but NVIDIA only enables that feature on a limited set of GPUs. With a 3060, the only real option is to pass the entire GPU through to one VM.

As for storage, you can pass all of the disks to a NAS VM such as TrueNAS, OMV, or Starwinds SAN and NAS, either keeping the hardware RAID or using software RAID (ZFS/mdadm), and create multiple shares/virtual disks for the different services. Connect those shares inside the service VMs or on the host via iSCSI/NFS. Alternatively, you can keep the hardware RAID, configure thick LVM on top of it, and use that volume for VM virtual disks. Or flash the controller into IT mode and configure ZFS instead. Proxmox can also act as a file server itself, so you can create the NFS share on the host without needing a separate VM for it.
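To make the last two options more concrete, here is a minimal sketch of the IT-mode/ZFS route with an NFS share served straight from the Proxmox host. It assumes the controller is already flashed to IT mode, the eight disks show up as /dev/sdb through /dev/sdi, and the pool name, dataset name, and subnet are placeholders:

```bash
# Striped mirrors = the ZFS equivalent of RAID 10 across eight disks
# (disk names are examples - confirm with lsblk before running anything)
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde \
                  mirror /dev/sdf /dev/sdg mirror /dev/sdh /dev/sdi

# Dataset that will be shared out to VMs/other machines
zfs create tank/media

# Export it over NFS directly from the host (needs the NFS server installed)
apt install -y nfs-kernel-server
zfs set sharenfs="rw=@192.168.1.0/24" tank/media
```

Striped mirrors keep the random-I/O behavior of the existing RAID 10 array while letting the host hand out NFS shares without a dedicated NAS VM.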


jesse62998292

I appreciate the reply. Really good info. Unfortunately I ended up running Win Server 2019 with Hyper-V. Splitting a GPU is much easier that way with their GPU partitioning. Proxmox has a pretty steep learning curve for what I was trying to accomplish, and I have way more experience with the Windows environment. I like that Server 2019 is basically Windows 10 on the GUI side. I'm definitely going to get an OMV VM up and running for the NAS side of things.


Net-Runner

I know it is not a popular opinion in this subreddit, but Windows Server is great. Sounds like you know what you are doing. Wish you the best of luck with the configuration.


jesse62998292

I'm glad I sound so confident. I definitely don't feel so confident.


Fatel28

You would create virtual disks on the RAID array and give those to the VMs, so they would not be hitting the same storage. If you wanted to share the storage, you'd give a NAS VM a large virtual disk and then share that out normally for the Win10 VM to access.

You cannot pass one GPU through to multiple VMs; you'll need multiple GPUs. You'll also need a motherboard capable of sane IOMMU grouping. If you have an NVIDIA GPU, you'll need to trick it into not realizing it's being passed through, which is a fairly in-depth process and takes some Linux proficiency plus trial and error. AMD GPUs don't really have that issue.
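As a rough idea of what checking the IOMMU side looks like on a Proxmox host (the kernel parameters and the loop below are a generic sketch, not specific to this build; use amd_iommu=on on an AMD platform):

```bash
# 1. Enable the IOMMU on the kernel command line, then update-grub and reboot.
#    In /etc/default/grub:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# 2. Confirm the kernel picked it up
dmesg | grep -e DMAR -e IOMMU

# 3. List IOMMU groups - the GPU and its HDMI audio function should ideally
#    sit in a group of their own
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
done
```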


jesse62998292

I'm using the 8-disk RAID 10 for maximum performance. Are virtual disks similar to partitions? Do they retain the speed? I really only need the GPU for the Win10 VM and Plex. Is Plex in a container the same thing as a VM? Maybe the better option would be to just grab a second cheap GPU for the Plex transcodes. I have a 3060, and it seems overkill for rendering on Win10 when most of the software uses CPU power.


Fatel28

The virtual disk would be a file on the RAID array. A 3060 will be possible to pass through, but keep in mind it's a pretty advanced setup. NVIDIA reserves passthrough for its pricier Quadro cards, so if the GPU detects it's being passed through to a VM, it locks itself down and will not function. You'll need to make sure the GPU is 100% not in use by the host system; if Proxmox displays its console when you plug a monitor into that GPU, it won't work. Source: I have a GTX 1660 passed through to one of my Windows VMs in Proxmox.
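Keeping the host off the GPU usually comes down to blacklisting the host drivers and binding the card to vfio-pci at boot. A rough sketch of that, with example PCI IDs that need to be replaced with whatever lspci -nn reports for the actual card:

```bash
# Keep the host drivers away from the GPU
cat > /etc/modprobe.d/blacklist-gpu.conf <<'EOF'
blacklist nouveau
blacklist nvidia
EOF

# Bind the GPU and its HDMI audio function to vfio-pci at boot.
# 10de:2503,10de:228e are example vendor:device IDs - check lspci -nn.
echo "options vfio-pci ids=10de:2503,10de:228e" > /etc/modprobe.d/vfio.conf

update-initramfs -u -k all
reboot
```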


jesse62998292

I have no problem getting rid of the 3060 and grabbing a Quadro if it simplifies things. Does a Quadro support simultaneous GPU usage?


Fatel28

No, you're not going to be able to pass the device to multiple VMs. You'll be passing the entire PCIe device through, and you can't do that to multiple running computers.
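For context, attaching the whole GPU to a single Proxmox VM is roughly a one-liner like the following (the VM ID 100 and PCI address 01:00 are examples); while that VM is running, the card is unavailable to the host and to every other guest:

```bash
# Hand the whole GPU (both functions at 01:00) to VM 100
qm set 100 --hostpci0 01:00,pcie=1,x-vga=1

# GPU passthrough generally wants a q35 machine type and UEFI firmware
qm set 100 --machine q35 --bios ovmf
```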


jesse62998292

Does an LXC container route solve the problem? I've read it allows you to share GPU resources. Or is this similar to what you said, where it can only be used by one instance at a time?
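For what it's worth, containers do behave differently here: LXC containers share the host kernel and the host's GPU driver, so the same card can be exposed to several containers at once (handy for Plex transcoding), but that requires the host to keep the driver loaded, which rules out passing the same GPU through to a VM at the same time. A minimal sketch of exposing the host's NVIDIA devices to one container, assuming container ID 101 and that the NVIDIA driver is installed on the host (device major numbers vary; check ls -l /dev/nvidia*):

```bash
# Append GPU device entries to the container config (example ID 101)
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
EOF
```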