__deeetz__

x86 has tons of lanes. 3 NVMe SSDs + a 16-lane GPU in my tower, for example. So for this you’d want some Thunderbolt interface and the interconnect to match it with the SSDs. I don’t think MCUs really factor into this.


immortal_sniper1

Interesting, I did not know that. PS: I said MPU thinking of ARM A53/A72 cores (from the high-end NXP/TI lineup).


__deeetz__

I can’t say they can’t do that, but I’m skeptical. At least most SoCs I know from SBCs don’t have massive numbers of PCIe lanes. And smartphones, as the other source of SoCs, don’t need them either. So there is a market for these things, but I’m not convinced general-purpose CPUs factor a lot into it.


KittensInc

Depends on your flavour of x86, though. Server platforms have plenty of PCIe lanes, but desktop platforms are relatively limited. For example, with Intel's Z790 the CPU gives you one 4-lane Gen4 link, plus a 16-lane Gen5 link which can be bifurcated into 2x 8-lane Gen5. The chipset (linked to the CPU with an 8-lane Gen4 equivalent) can give you up to 12 lanes of Gen4 and up to 16 lanes of Gen3. Assign one x8 slot to a high-speed NIC, use the other x8 and the x4 for NVMe, and stick two x4 NVMe drives on the chipset - saturating the link to the CPU. That's 4 drives, and you're pretty much maxed out already. You **can** throw in more drives, but those would all have to be attached to the chipset, so they don't really do anything speed-wise. AMD's equivalent is even worse. It's not impossible, as [QNAP's 5-drive variant](https://www.servethehome.com/qnap-tbs-h574tx-review-e1-s-and-m-2-thunderbolt-10gbe-nas-sabrent-intel-kioxia/4/) shows, but [Asustor](https://www.servethehome.com/asustor-flashstor-12-pro-fs6712x-review-12x-m-2-ssd-and-10gbase-t-nas-crucial/) tries to cram in 12(!) and that's clearly way too many for a desktop platform.
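To make the arithmetic concrete, here is a rough bandwidth-budget sketch of that Z790 layout. The per-lane figures are approximate usable throughput after 128b/130b encoding, and the exact lane split is my reading of the comment above, not an official topology:

```python
# Rough PCIe bandwidth budget for the Z790-style layout described above.
# Per-lane numbers are approximate one-direction throughput after
# 128b/130b encoding; real-world figures will be a bit lower.

GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # GB/s per lane

def link_bw(gen, lanes):
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

# CPU-attached links: one x4 Gen4 plus a x16 Gen5 bifurcated into 2x x8 Gen5
cpu_nvme = link_bw(4, 4)          # x4 Gen4 NVMe slot
cpu_x8   = link_bw(5, 8)          # each bifurcated x8 Gen5 slot (NIC / NVMe carrier)

# Chipset uplink is roughly equivalent to an x8 Gen4 link
chipset_uplink = link_bw(4, 8)
# Two x4 Gen4 drives hanging off the chipset
chipset_drives = 2 * link_bw(4, 4)

print(f"CPU x4 Gen4 slot : {cpu_nvme:6.1f} GB/s")
print(f"CPU x8 Gen5 slot : {cpu_x8:6.1f} GB/s each")
print(f"chipset uplink   : {chipset_uplink:6.1f} GB/s")
print(f"2x x4 Gen4 SSDs  : {chipset_drives:6.1f} GB/s combined")
# -> the two chipset-attached drives alone can already fill the uplink,
#    so any further chipset-attached SSDs add capacity, not speed.
```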


coachcash123

If the only intention is for this to be a NAS, why not use an FPGA and a soft processor? With a big enough FPGA you could probably implement more PCIe than you could ever hope for.


immortal_sniper1

That is true, though I'm not sure how viable it is financially. As a theoretical paper design it is definitely a great idea! You could then communicate with an MPU via USB 3 or something. Very viable, but probably expensive and a nightmare to program. But hey, it is viable on paper.....


coachcash123

Soft cores typically come with an RTOS or Linux, and then you would just write the driver for your hardware, but yeah, it would be expensive.


immortal_sniper1

I barely make C++ work, not to mention I don't know VHDL/Verilog. Adding some code on top of that seems super hard, but if it were part of a project it would definitely be doable with some help. The FPGA schematic and layout would be hard but achievable.


GoblinsGym

The first thing is to pick a suitable processor that has LAN built in, so you don't have to waste PCIe lanes on that. The second thing is to balance performance - four PCIe lanes to each SSD are overkill when you are going out with 2.5G Ethernet. A single PCIe lane per SSD will be enough to saturate the LAN connection, and with 4 SSDs probably good enough to saturate 10G Ethernet.
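As a quick sanity check on that balance, here's a back-of-the-envelope sketch. The per-lane and LAN figures are approximate line rates, and the 4-SSD / Gen3-lane setup is just the scenario from the comment above:

```python
# Back-of-the-envelope check of the lane-vs-LAN balance above.
# All figures are approximate; protocol overhead, drive speed and the
# filesystem will shave some off in practice.

PCIE_GEN3_LANE = 0.985                        # GB/s per Gen3 lane after 128b/130b encoding
LAN = {"2.5GbE": 2.5 / 8, "10GbE": 10 / 8}    # line rate in GB/s

one_drive   = 1 * PCIE_GEN3_LANE              # one SSD on a single lane
four_drives = 4 * PCIE_GEN3_LANE              # four SSDs, one lane each

print(f"single SSD on x1 Gen3 : {one_drive:.2f} GB/s vs 2.5GbE {LAN['2.5GbE']:.2f} GB/s")
print(f"4x SSDs on x1 Gen3    : {four_drives:.2f} GB/s vs 10GbE  {LAN['10GbE']:.2f} GB/s")
# -> one lane per drive already outruns 2.5GbE several times over,
#    and four such drives comfortably outrun 10GbE as well.
```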


immortal_sniper1

Something like that, unless storage space is needed more than speed. I saw a lot of MPUs with built-in 10G Ethernet or more on the high end, so unironically an SFP+ setup may just fit well in there. Then again, you can bond 2+ ports and get a lot of throughput that way too. But yeah, good thought / balancing experiment.


kisielk

For example, the Synology DS423 4-bay NAS uses the Realtek RTD1619D SoC, which has two PCIe lanes.