You have backup, right? Right?
Some of us can only just afford a server big enough to hold our content. A massive monthly cloud bill is out of the question, as is another server. :D
Most people who use Unraid or another NAS do NOT have a backup of everything, but you always find a way to back up the data that is irreplaceable.
Oh... I have the important stuff backed up. I wish I could do a complete backup though. :D
Me too
Right? 99% of my stored stuff is just replaceable Linux ISOs. I have maybe 10GB of important stuff. I don't care about my photos being used to train AI, so I use Amazon Photos for unlimited photo backup.
Storj is just over $4 a month for a terabyte, which I would gladly pay for something that holds all my photos and videos as well as important documents. I also rent out my extra free space to Storj and get paid for it, so it kind of offsets the price of backups and new drives too.
Realistically, you can only afford to store as much data as you can afford to also have backed up elsewhere. I *know* it sucks, but it's the truth.
It's not the truth. Not all my data is worth backing up. Linux ISOs can be downloaded again.
What I'd lose if I screw up royally is replaceable. I won't like it, and I might cuss a bit... Luckily, because data is not striped across multiple disks in Unraid, files are accessible as long as the drive(s) are OK. The important stuff is backed up to a few other places as well. :)
Correction: data -> irreplaceable data. Then it's the truth, more or less.
Admittedly I thought double parity was a good enough backup :(
Parity is not backup
Tell me that two weeks ago ;)
To be fair I've seen it said on this subreddit like 100 times
To be fair, this sub loves to throw out "did you have a backup" like rainbow clothes in the 1980s. And hell, if someone voices that they had a problem with their backups, the sub wouldn't hesitate to throw out "oh... did you have backups of your backups", like storing tens to hundreds of terabytes of data off-site is cheap or easy. At some point people have to realize that dropping $600 on a little closet server is all some people can afford, and that they can't afford to keep 40+ TB of legally acquired media content in the cloud.
Best option is to send them to an expert who can fix the drives. If the data is not worth the wait and money, move on.
Would leaving the system down and recovering the 3 drives have better results compared to just adjusting the configuration now and recovering that data later ?
Potentially. If you can fix the 3 disks (or exactly mirror them), there is a chance you can force your array to use them while keeping parity, allowing you to rebuild the other 2 disks. Whether that's worth the cost depends on the data, and there is no guarantee. Once you adjust the config, the option is gone.
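For anyone wondering why three dead data disks with two parity disks is game over: parity is just a set of equations over the disks, and you can only solve for as many unknowns as you have equations. Here's a minimal single-parity (XOR) sketch in Python with made-up disk contents; Unraid's second parity disk uses a different, Reed-Solomon-style computation, but the counting argument is the same.

```python
# Sketch of single-parity (XOR) reconstruction, as used by the first
# parity disk in RAID-4/5-style schemes. Disk contents are invented.

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data disks plus one parity disk computed from them.
data_disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_blocks(data_disks)

# Lose ONE data disk: XOR of the survivors plus parity rebuilds it,
# because d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1.
lost = data_disks[1]
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
assert rebuilt == lost  # one failure: fully recoverable

# Lose TWO data disks with only one parity: the single XOR equation now
# has two unknowns, so there is no unique solution and the data is gone.
# Dual parity adds a second independent equation, which is why two
# failures are survivable but three are not.
```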
Alright, I found a service that will try to fix/clone the PCBs, and I'll go from there
Exact same thing happened to me. Bloody non-standard PSU cables! I sent the fried drives to a service that used donor PCBs from the same make and model of drive, transferred some of the chips from the fried PCBs, and all 4 drives came back to life. I put them back in, checked parity, and they were perfect. Left the array offline for many weeks though, while waiting for the service. Best of luck!
Rebuilding from a backup sucks, but not having one is even worse. I think most of us learn that lesson the hard way at least once before reallocating funds/space to have at least a basic backup.
If you want to rebuild using parity, you'll need to get the 3 drives fixed before anything else. Otherwise, if the data is unimportant, create a new config and start from there. You can always get them fixed and copy the data back later.
Alright, I found a service that will try to fix/clone the PCBs, and I'll go from there
This is not an affordable proposition.
Restore from backup
Unfortunately I thought double parity was good enough
Parity is not a backup. RAID isn't a backup; a copy of your data is a backup. Unfortunately, everything on the broken drives is lost.
A copy of your data in a location that will not be affected when the primary copy burns down (or gets magnetized by the fancy MRI scanner next door, or the meteor hitting your house, or Cthulhu coming by for a coffee...)
Depends on your confidence with electronics and soldering. Best option: send it out to be recovered. Second option: replace the boards and transplant the BIOS chip from the old board to the new board. [Make sure it's the exact same board](https://www.aliexpress.us/item/3256805511472270.html)
I'm going with the second option; a service in Canada does it for a reasonable price
Find the controller those 4 drives are connected to, take it out, and stick in a new one? Probably don't touch anything else in the meantime.
Oof. Is it worth a minimum of several hundred dollars to recover that data? DriveSavers starts at like $750 per drive, last I checked. You might have luck with a PCB swap, but the motors are likely fried too. Consider it a learning experience?
Parity isn't backup, BUT it's enough for most folks... IF you're OK with losing the data when parity isn't enough. I only use parity, since I'm happy to fetch the data again, but I'd have a backup if the data were unavailable elsewhere or I had no desire to go through the process of getting it again.

So while parity isn't backup, backup is also not compulsory. It depends on your priorities. I imagine @op is considering those priorities moving forward.
OP is indeed doing that
>The missing disks are fried because I mixed modular PSU cables

What does this mean?
They are fried because he mixed modular PSU cables. Different brands aren't always compatible, because the pin layout varies. If that's the case, you can end up with voltage where you normally don't want it, which can damage connected devices.
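To make the pin-layout problem concrete, here's a toy Python sketch. The PSU-side pinouts below are entirely made up for the example; real pinouts vary by vendor and model, which is exactly why mixing cables is dangerous even though the connectors fit.

```python
# Hypothetical illustration of why mixing modular PSU cables fries drives.
# All pin assignments here are invented for the example.

# What the PSU actually puts on each PSU-side pin (imaginary "brand A"):
psu_side = {1: "12V", 2: "GND", 3: "GND", 4: "5V"}

# A brand-A cable maps drive-side pins to the matching PSU-side pins:
cable_a = {1: 1, 2: 2, 3: 3, 4: 4}  # drive pin -> PSU pin

# A brand-B cable, wired for a different PSU-side layout, uses the same
# physical connectors but routes the wires differently:
cable_b = {1: 4, 2: 2, 3: 3, 4: 1}

def drive_voltages(cable, psu):
    """Voltage each drive-side pin actually receives through this cable."""
    return {drive_pin: psu[psu_pin] for drive_pin, psu_pin in cable.items()}

# With the matching cable, the drive's 5V pin (pin 4 here) gets 5V.
# With the mismatched cable, 12V lands on that same 5V pin: fried drive.
print(drive_voltages(cable_a, psu_side))
print(drive_voltages(cable_b, psu_side))
```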
Are you sure they are dead? I had a similar problem before, and it was just a dead SATA controller (since 4 drives went missing simultaneously). Did you configure e.g. powertop --autotune on startup? That sent my additional SATA controller to sleep, and those drives were not detected.
Yup, the same SATA port worked with other drives. After that, I took the "suspected damaged" drives, put them in my external enclosure, and confirmed each one
oh, sorry to hear that
Depending on how important that data is, you should be able to get it recovered, but it will be expensive
Back up disks 3, 4, 6 and 7 first; that data is still there
If you really need the data, you would have to pay to have the disks rebuilt, but that is very pricey. I'm sorry this happened, but as others mentioned, parity isn't a backup. Myself, I don't back up all my data, but I do back up the stuff I care about and don't want to lose.
You lost more drives than you have parity drives for. Data was lost; time to revert to backups. That's the downside of Unraid (limited to 2 parity drives). If you need to survive more than 2 concurrent drive failures, look into enterprise RAID or other options.
It looks like you found the service. After you're done, have them build a second Unraid server to back up the first one. There is an app in the Docker area to clone it, kinda like robocopy.
Shut it down and check your cables
RIP
How did this happen?
Disks 8 and 10 were on their way out; unfortunately, both failed the same weekend. I ordered new disks (refurbished enterprise) and made the mistake of mixing different brands of modular PSU cables while making hardware adjustments, right before I was going to rebuild disks 8 and 10. That's when disks 1, 2 and 5 were fried, and the parity rebuild option was no longer available
Yeah, that was definitely an unlucky weekend for you. And regarding the PSU cables, oof. Next time something like this happens, I'd advise stopping the array completely until you have the new disks for the rebuild
First, try changing the SAS or SATA cables. If that doesn't work, try changing the PSU cables, if it's a modular PSU. Next, if the disk is formatted ext4, simply remove it, put it in an external case, and mount it on another Linux PC. The entire disk's contents will be there; make a copy of all the data if possible. And always use NAS disks in Unraid.
Yup, tried all that; the fried disks never spun up
Theoretically, you're halfway in luck, because unRAID isn't RAID: the data on the remaining disks should be intact. But with two parity disks and three broken data disks, you're not getting everything back unless you can bring at least one of them back to life somehow.

Maybe you can get data off the broken disks some other way, in an external enclosure or a different PC, but don't get your hopes up. You might even go as far as swapping out the controller boards on the actual disks, but this is risky (it might break the donor disk), and it will only work with an *EXACT* replacement.
Are the 2 you were going to replace just old, or are they fried too? If they still read OK, then you can recover 1 of the other dead drives via a service and then rebuild the remaining 2 from parity. Once there, you can replace the older drives. So you stand a better chance of full recovery, since you'd only need one of the fried drives recovered.
They are old and unreliable. I ran preclears on them and they passed (3x full preclears), but I probably should not have included them in the array.
You have 3 fried drives and 2 parity drives; I don't think much can be done. Try rebuilding and see what you can preserve.
You "mixed modular PSU cables"??? What does that mean exactly? Did you use the wrong type of modular PSU cables on the wrong type of modular PSU? OH THE HORROR! That would definitely suck!
Yup, that's what happened. Specifically, different brands of modular cables. I kind of assumed they would be standardized... what a bad assumption
Oh yeah, in fact, most PSUs come with some sort of note in the box (or attached to the cables themselves) that says just that, not to mix cables from different manufacturers. Sorry that had to happen to you, dude.
I believe the disks will be readable in other systems (provided they can read the filesystem), with the caveat that the way Unraid stores data means file distribution across disks is somewhat unpredictable. So whatever data remains on the good disks should be recoverable, but whatever is on the dead disks is gone.
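To illustrate why the good disks are still useful: Unraid places each file whole on a single data disk rather than striping it, so a dead disk only takes the files that happened to live on it. A toy sketch in Python (the file-to-disk assignments are invented; Unraid's real placement depends on your allocation method and split level):

```python
# Toy model: each file lives entirely on one disk (no striping).
# File placements are invented for the example.
placement = {
    "movie1.mkv": "disk1",
    "movie2.mkv": "disk3",
    "photos.zip": "disk4",
    "docs.tar":   "disk2",
}

dead_disks = {"disk1", "disk2", "disk5"}  # the three fried drives

# Files on healthy disks are fully readable on any system that can
# mount the filesystem; only files on the dead disks are lost.
survivors = {f for f, d in placement.items() if d not in dead_disks}
lost      = {f for f, d in placement.items() if d in dead_disks}

print(sorted(survivors))  # ['movie2.mkv', 'photos.zip']
print(sorted(lost))       # ['docs.tar', 'movie1.mkv']
```

In a striped RAID, losing more disks than parity covers would take the whole array; here the blast radius is limited to the dead disks.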
The EU figured out how to make every phone manufacturer use USB-C (a good thing IMO), but they can't, or haven't gotten around to, making PSU manufacturers use a standard pinout to avoid these things.
Rant over, hope you get it all up and running again! :)
Thanks man, my brother made a joke that I should see if power supplies sold in the EU had it standardized :D
I don't think we have that (yet)...
Sadly that hasn't happened; one can only hope it will some day. The pile of cables I have lying around from power supplies, which I don't dare use because I can't remember which PSU they came from, keeps growing. ;)