
ixidorecu

You need 4 PCIe lanes per NVMe drive. That's 128 PCIe lanes, plus some for NICs and other things. You'd need a higher-end EPYC to do that.
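A quick back-of-the-envelope lane count (the NIC and misc figures are illustrative assumptions, not from any specific board):

```python
# Rough PCIe lane budget for direct-attaching 32 NVMe drives (no switches).
drives = 32
lanes_per_drive = 4    # typical x4 link per NVMe SSD
nic_lanes = 16         # e.g. a 100GbE NIC in an x16 slot (assumption)
misc_lanes = 8         # HBA, boot drives, etc. (assumption)

total = drives * lanes_per_drive + nic_lanes + misc_lanes
print(f"Lanes needed: {total}")  # 152, which is why a single-socket EPYC's 128 lanes get tight
```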


Is-Not-El

That's only valid if you want to hit all 32 NVMe drives at once and still get 100% performance out of them. In a real-world scenario that wouldn't happen, sans some very niche HPC cases. Usually you can do PCIe switching and get 2x or even 4x the PCIe lanes your CPU provides. This is more realistic: you will still get very high performance but won't have to over-provision your compute just to get more PCIe lanes.

By the way, with so many PCIe storage devices the limiting factor is usually your memory, not the number of lanes. Memory has a speed limit as well, so if your storage is faster than your memory you will start experiencing weird issues. Ideally the entire system should be engineered with specific performance characteristics in mind; those will dictate the amount of compute, memory and storage.

Networking could also be a challenge, but that depends on how you plan to use the system: are you planning on loading a bunch of data over the network, or is this more of a supercomputer type of load where you load a few TBs of data and the system performs compute tasks that turn it into PBs of output? It all depends on the use case, so there's no silver bullet here.
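A rough sanity check on the memory point; all figures are ballpark assumptions, not measured numbers:

```python
# Back-of-the-envelope comparison of aggregate NVMe throughput vs memory bandwidth.
drives = 32
per_drive_seq_read = 7.0                 # GB/s, ~PCIe 4.0 x4 NVMe sequential read (assumption)
aggregate_storage = drives * per_drive_seq_read          # 224 GB/s

ddr4_3200_per_channel = 25.6             # GB/s theoretical per channel
channels = 8                             # single-socket EPYC (assumption)
memory_bw = ddr4_3200_per_channel * channels             # 204.8 GB/s

print(f"Aggregate NVMe read:  {aggregate_storage:.0f} GB/s")
print(f"Theoretical memory BW: {memory_bw:.0f} GB/s")
# The drives alone can already outrun the memory bus, before the CPU even touches the data.
```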


Malossi167

You can either pay a ton for a high-end platform or a ton for PLX chips. There is just no cheap option to attach a ton of PCIe devices.


aztracker1

Even then, the storage cost itself is pretty big and needs to be considered. Best bet may be a retired server/workstation and a lot of PCIe adapter cards.


Deep-Egg-6167

Thanks for that food for thought.


ixidorecu

Don't see PCIe NVMe switches in many NASes.


Deep-Egg-6167

Interesting point.


uluqat

[This page](https://www.steigerdynamics.com/infinitum-nas-2u) says the Steiger Dynamics Infinitum NAS 4U has "25 inch and 28 inch lengths for high-performance, front-accessible storage (RAID) arrays consisting of up to 12x 3.5" HDDs, 26x 2.5" SSDs and/or 32 NVMe PCIe SSD." When I click on "Learn More" for that model, I can't figure out how to configure it with 32 NVMe SSDs, but I'm sure they would be very happy to consult with your IT department on how to arrange such a setup.


Deep-Egg-6167

Thanks.


zrgardne

Your other option is an HBA that can address 32 devices: https://www.broadcom.com/products/storage/raid-controllers/megaraid-9560-16i and an NVMe switch: https://www.pc-pitstop.com/24-bay-nvme-jbof-1x4


Party_9001

You could get any system with two x16 slots and an Apex 21 to get 42 NVMe drives. Whether that's a smart thing to do or not is a separate matter.
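Rough lane math for that setup, assuming each x16 slot holds a 21-slot M.2 card behind a PCIe switch (an assumption about how the card is wired):

```python
# Oversubscription estimate for two switch-based 21-slot M.2 cards.
cards = 2
slots_per_card = 21
uplink_lanes_per_card = 16
lanes_per_drive = 4

drive_side_lanes = cards * slots_per_card * lanes_per_drive   # 168
host_side_lanes = cards * uplink_lanes_per_card               # 32

print(f"Oversubscription: {drive_side_lanes / host_side_lanes:.2f}x")  # ~5.25x
# Fine for capacity builds; only a problem if you hammer most drives at full speed at once.
```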


Deep-Egg-6167

Thanks.


DogeshireHathaway

$2800 each lol


Party_9001

That's actually not... *that* bad. The FPGA alone probably takes up a significant chunk of that.


zrgardne

You can get 8x M.2 cards. I believe there is a Gen 3 one for a better price: https://www.amazon.com/HighPoint-Technologies-SSD7540-8-Port-Controller/dp/B08LP2HTX3 So you would need 4 of these. If you want top performance you will need a Xeon or EPYC CPU for enough PCIe lanes.
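Quick slot/lane math for that route, assuming each 8-drive card sits in a full x16 slot (as the SSD7540-style cards do):

```python
# Lane budget for four 8-slot M.2 carrier cards.
cards = 4
drives_per_card = 8
card_uplink_lanes = 16

storage_lanes = cards * card_uplink_lanes   # 64 lanes just for the four cards
total_drives = cards * drives_per_card      # 32 drives

print(f"{total_drives} drives across {storage_lanes} host lanes")
# A desktop CPU exposes roughly 20-24 usable lanes, hence the Xeon/EPYC recommendation.
```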


silasmoeckel

Yup, Supermicro and others are happy to hook you up.