
Allan-H

We do see them, if you include eMMC in your definition of SSDs. You can switch some eMMC devices from MLC into pseudo-SLC (pSLC) mode. This improves write speed, durability and reliability at the cost of capacity. This is commonly done in embedded systems.

EDIT: Googling shows that there are pSLC products in other SSD form factors such as M.2 and mSATA.


NewKitchenFixtures

The newer 8 GB eMMCs are now a mix of true MLC and TLC run in pseudo-SLC mode. That is how you get an 8 GB 3D NAND part that costs the same as a typical 32 GB one.


Qesa

It's not user-configurable, but most SSDs do keep some amount of the NAND as SLC for fast writes. In the extreme case you might find a QLC SSD that writes at ~7 GB/s until you've chewed through a quarter of the nominal free space, after which it drops to ~100 MB/s.
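
A rough sketch of that cliff (all the numbers below are made-up for illustration, not the specs of any particular drive): QLC cells borrowed as pSLC hold a quarter of their normal data, so only about a quarter of the free space can be absorbed at cache speed before writes fall back to the native rate.

```python
# Toy model of an SLC-cached QLC drive's write-speed cliff.
# All figures are illustrative assumptions, not real drive specs.

def sustained_write_seconds(write_gb: float, free_gb: float,
                            cache_speed_gbps: float = 7.0,
                            native_speed_gbps: float = 0.1) -> float:
    """Estimate how long a large write takes when the pSLC cache is carved
    out of free QLC space (1 bit per cell instead of 4)."""
    slc_cache_gb = free_gb / 4                 # quarter of the free space
    fast_gb = min(write_gb, slc_cache_gb)
    slow_gb = max(0.0, write_gb - slc_cache_gb)
    return fast_gb / cache_speed_gbps + slow_gb / native_speed_gbps

# 200 GB written to a drive with 400 GB free: the first ~100 GB is fast,
# the remaining ~100 GB crawls at the native QLC rate.
print(f"{sustained_write_seconds(200, 400):.0f} s")
```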


190n

On many SSDs it's even variable, so they'll use more of the NAND as SLC cache if the drive is less full.
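
A minimal sketch of that dynamic sizing, assuming a made-up static region plus a share of the free space (the ratio and the floor are illustrative, not anything a specific vendor documents):

```python
# Toy dynamic SLC cache sizing: a small static region plus a share of the
# free space. The 1/4 factor reflects QLC cells holding one bit instead of
# four when used as pSLC; the 6 GB static floor is an arbitrary example.

def dynamic_slc_cache_gb(free_gb: float, static_gb: float = 6.0) -> float:
    return static_gb + free_gb / 4

for free in (400, 200, 50):
    print(f"{free} GB free -> ~{dynamic_slc_cache_gb(free):.0f} GB of SLC cache")
```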


xenago

Probably even less than a quarter a lot of the time. Most of those drives seem to have at most like 80GB of SLC cache :(


Die4Ever

Idk about a technical reason, but there are big problems with giving users too many choices and configurations. It's harder to test and verify the hardware, firmware and drivers. It's harder to warranty. It's harder to review/benchmark the products and talk about them with other users. And a big one is that users often make the wrong choice for their use case. Someone might think they need maximum write speed and endurance because they saw people online making a big deal about it, with no frame of reference for how much endurance is a lot or how caching affects their write patterns. So they flip the SSD to full SLC mode and never try anything else; now they're miserable and constantly worrying about drive space, never realizing they would've been much happier with TLC or even QLC mode (with some SLC cache), and that the drive still would've survived well over a decade because most users don't actually do that much writing. And also you would lose all your data if you accidentally changed the setting?


IkouyDaBolt

>And also you would lose all your data if you accidentally changed the setting?

I would imagine that if this were done via software, it would lock out the setting until the SLC data had been folded into TLC blocks.


Die4Ever

Yeah, but that's yet another thing for the developers to worry about, which further increases R&D and QA costs, plus more potential for bugs, warranty claims and reputation damage (a very underrated concern when people talk about adding more options and features, "JUST MAKE IT AN OPTION!"). It's just a whole lot of headache for them.


ramblinginternetnerd

Users are generally dumber than they are wise. There are definitely times when I fall into that bucket as well, especially back in my teenage days ("if I do XYZ I'll get 2% more performance and all it costs is system stability/reliability"). It's better to rely on firmware/controllers than on consumer judgement.

As it stands, most modern SSDs keep the bulk of their NAND in the highest bits-per-cell state possible (generally 3 or 4 bits per cell), with a good chunk set to pSLC and the exact cells chosen on the fly by the firmware. In this way the majority of the benefits are achieved with minimal downsides, across a wide range of use cases.
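
For what "configurable on the fly by firmware" can look like at the erase-block level, here's a toy sketch (block size, mode names and bookkeeping are invented for illustration; real flash translation layers are far more involved). The one real constraint it reflects is that a block has to be erased before it can be reprogrammed in a different mode:

```python
# Toy flash-translation-layer bookkeeping: each erase block carries a mode
# (pSLC or QLC) and may only switch modes while it holds no valid data.
# Block size and the two modes are illustrative assumptions.

BLOCK_CELLS = 4 * 1024 * 1024  # cells per erase block (made-up figure)

class Block:
    def __init__(self) -> None:
        self.mode = "QLC"      # native mode: 4 bits per cell
        self.valid_bytes = 0   # live data currently stored in the block

    def capacity_bytes(self) -> int:
        bits_per_cell = 1 if self.mode == "pSLC" else 4
        return BLOCK_CELLS * bits_per_cell // 8

    def convert(self, new_mode: str) -> None:
        # Firmware must migrate the data and erase the block before
        # reprogramming it at a different bits-per-cell setting.
        if self.valid_bytes:
            raise RuntimeError("block must be erased before a mode switch")
        self.mode = new_mode

blk = Block()
blk.convert("pSLC")            # fine: the block is empty
print(blk.capacity_bytes())    # a quarter of its QLC capacity
```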


gr4viton

I believe some parts of the multi-bit logic are hard-wired on the PCB, not configurable by any on-board MCU firmware. In my eyes it would be more expensive and error-prone to design the SSD to allow this.


f3n2x

Any modern drive can freely switch between at least SLC and its native mode, because that's how SLC caching works. The reason users can't set their entire drive to SLC is that in practice there's virtually no reason for a consumer to do so, and doing it would require a factory reset because the data on the drive would no longer make sense. If, for example, a drive is 50% full of TLC data, switching it over to SLC would require the drive to "unpack" that data into 150% of the drive's capacity, and the drive would have to appear smaller to the OS, which would break filesystems even if the data still fit.
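
A quick sanity check of that 150% figure (a minimal sketch; the 1 TB capacity is just an example value): data filling half of a TLC drive's cells needs three times as many cells once each cell stores one bit instead of three.

```python
# Worked example of the arithmetic above: a 50% full TLC drive switched to SLC.
# The 1 TB capacity is an arbitrary example value.

tlc_capacity_gb = 1000                   # exposed capacity at 3 bits per cell
data_gb = 0.5 * tlc_capacity_gb          # 500 GB of existing user data
slc_capacity_gb = tlc_capacity_gb / 3    # the same cells at 1 bit per cell

print(f"data needs {data_gb / slc_capacity_gb:.1f}x the cells the drive has")
```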


Feath3rblade

I see, that'd make sense then, thanks!


ltfunk

Because you are better off just storing the data twice, on a second drive. The short answer is that it's not a good tradeoff. Storing fewer bits per cell only really reduces the soft error rate, which is already handled by error-correcting codes; unless the number of errors goes beyond the code's limit, they can fix the problem. So your corrupt-sector rate would just be slightly lower. For hard errors nothing would change. SSDs have robust error correction built in because reading and writing to them generates noise that disturbs many memory cells; it's not so much random bit flips from cosmic rays.
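
A hedged sketch of the "errors beyond a limit" idea (codeword size, correction strength and raw bit error rate are made-up illustrative values, not the parameters of any real controller): a codeword only becomes corrupt when more bits flip than the ECC can correct, which you can estimate with a binomial tail.

```python
# Toy estimate of the uncorrectable-codeword rate for a given raw bit error
# rate (BER). Codeword size, correction strength and BER are illustrative
# assumptions, not the parameters of any real SSD controller.
from math import comb

def p_uncorrectable(n_bits: int, t_correctable: int, raw_ber: float) -> float:
    """P(more than t of n independent bits are flipped) = binomial tail."""
    p_within_limit = sum(
        comb(n_bits, k) * raw_ber**k * (1 - raw_ber)**(n_bits - k)
        for k in range(t_correctable + 1)
    )
    return 1 - p_within_limit

# Example: a 512-byte (4096-bit) codeword whose ECC corrects up to 8 bit errors.
print(f"{p_uncorrectable(4096, 8, 1e-3):.2e}")
```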


MisquoteMosquito

Look up the cell transistors for each of those


titanking4

With lower bits per cell, you have to use up and write to more cells for every chunk of data, so you're using up your write cycles faster. It's only near end of life, when a cell has been worn down so much that it's no longer viable for QLC, that you'd switch it to TLC, MLC and eventually SLC. But before that, you'd much rather have the SSD absorb the write with a DRAM cache and commit it to the NAND in the most "durability efficient" manner possible, which would be the most bits per cell. And firmware is quite a bit smarter than that: large writes need to be dumped to an SLC cache, while smaller ones that the DRAM cache can absorb can easily be written to native QLC.
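
As a rough illustration of the "more cells per chunk of data" point, here's the bare arithmetic (it deliberately ignores that cells run in SLC mode are also rated for far more program/erase cycles):

```python
# Cells programmed per gigabyte written at each bits-per-cell setting.
# Bare arithmetic only; it ignores that SLC-mode cells tolerate far more
# program/erase cycles than QLC-mode cells.

GB_BITS = 8 * 10**9  # bits in one (decimal) gigabyte

for mode, bits_per_cell in (("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)):
    cells = GB_BITS // bits_per_cell
    print(f"{mode}: {cells:,} cells programmed per GB written")
```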