Talk:Standard RAID levels

Clarifying the Need for Reed Solomon coding in RAID-6
In the RAID-6 section the explanation of the second parity says that orthogonal, diagonal and Reed–Solomon coding can all be used. However, when Reed–Solomon is explained it is presented as necessary. It isn't necessary; the other two methods work equally well. Both of them require calculating a second parity block on a write, but this is less CPU-intensive than an equivalently sized RS code. I will add some clarifying language to this effect.
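For anyone following along, here is a minimal sketch of the Reed–Solomon-style P/Q parity being discussed. It is illustrative only, not taken from the article: the field polynomial 0x11D and generator g = 2 are common choices (used, e.g., by the Linux md driver), and the function names are my own. P is the ordinary XOR parity; Q weights drive i's data by g^i in GF(2^8), which is what makes two-drive recovery solvable.

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8), reducing by the 0x11D polynomial."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:       # overflow past 8 bits: reduce modulo x^8+x^4+x^3+x^2+1
            a ^= 0x11D
    return product

def pq_parity(stripes: list) -> tuple:
    """Compute RAID-6 parity blocks: P = XOR of all data, Q = sum of g^i * D_i."""
    length = len(stripes[0])
    p = bytearray(length)
    q = bytearray(length)
    for i, stripe in enumerate(stripes):
        coeff = 1
        for _ in range(i):
            coeff = gf_mul(coeff, 2)      # g^i with generator g = 2
        for j, byte in enumerate(stripe):
            p[j] ^= byte                  # simple parity (same as RAID 5)
            q[j] ^= gf_mul(coeff, byte)   # Reed-Solomon second parity
    return bytes(p), bytes(q)
```

A diagonal-parity scheme would instead compute the second block with XORs alone, trading the GF multiplications above for a different block layout, which is the CPU-cost point made above.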

Outdated performance information
Many of the performance sources are from around 2015 and seem outdated. For example, the RAID 0 performance section states that it is often applied in gaming, but looking at any recent PC hardware forum discussion (e.g. r/pcgaming in 2020, the Linus Tech Tips forum in 2021) it seems that today's SSDs, especially NVMe ones, are fast enough even for simultaneous playing, recording, and streaming; therefore, RAID 0 has no practical benefit for gaming nowadays and almost nobody uses it. And when someone wants a large logical volume, JBOD is recommended instead. Finally, it seems that for non-server use RAID 0 might only help when working with 8K or 4K60 footage that isn't encoded for hardware acceleration, or when compiling a giant codebase, as it might help with faster random IOPS. I wasn't able to find any reliable enough sources for this, so I'm unable to edit the article. Yaqub Kabanoki (talk) 12:56, 27 August 2022 (UTC)


 * There are lots of uses for RAID 0, and the article mentions just a few. I used to use a four-disk RAID 0 as an intermediate backup target before writing to tape. Currently the claim is sourced, so imho you'd need a reliable source to remove it as well, something along the lines of "nobody uses RAID 0 for gaming rigs anymore". --Zac67 (talk) 15:18, 27 August 2022 (UTC)
 * I'd argue that it was never particularly useful (for gaming). "Attempted gaming blog with a front page full of spammed AJAX code from broken scripts" isn't a very good source for Wikipedia. The tests aren't even of actual performance, just load times. They're effectively disk read speed / access time tests that could have been done in a free benchmarking program. For all anyone knows, everything on the RAID 0 was running at 30 fps (but it sure loaded in a reasonable amount of time). It has a lot of uses elsewhere, although for consumers it's not really feasible on NVMe drives while still using a GPU at full speed. I had a good laugh at someone on Adobe's forums who was wondering why his configuration of a GPU + 8 NVMe drives in RAID 0 kept taking down the system on his 3rd-gen Ryzen board. I still don't know how he managed to install that many drives without something complaining: most 4x slots don't like 4-way bifurcation, I think some of them would have been running at PCIe 3.0 speeds, and if the motherboard had more than 1 or 2 built-in (lots of them do, because it sells hardware) it would saturate the southbridge link pretty easily. Then there was the matter of the something like 95% chance of array failure within a year, and the question of how the processor would cope with around 8 GB/s of non-disk memory bandwidth even if they all did run at full speed... too many red flags to count. The intended use? Editing sub-4K video of his gameplay in Premiere to upload to YouTube and spam up the search results more... A Shortfall Of Gravitas (talk) 10:16, 16 October 2022 (UTC)

Modern RAID 3 (actual implementation?)
RAID 3 is described as byte-level striping, with the odd example of 6 data bytes + 2 parity bytes given, which doesn't really improve matters. Even in MS-DOS or ring 0 with access to the disk controllers, operating in non-DMA mode, a read command to a disk drive can only operate inefficiently (data at the granularity of the drive's buffer will still be read into the buffer itself), although you can use either direct port access or DMA to read word-sized data. Requesting byte reads is an optional drive feature. Pretty sure it's the same for SCSI, and despite being serial interfaces, SAS and SATA both incur overhead in the form of commands. I don't think accesses smaller than the sector size are allowed at all on SSD/NVMe storage, because of the absolute nightmare managing TRIM would become and the requirement of storing a map of every byte on the device (which would end up larger than the stored data) in order to perform wear leveling. DIMMs don't even stripe at the byte level, AFAIK. In any event, the description of RAID 3 as byte-level doesn't really mesh with modern storage and controllers (modern being post-1980s) and sounds more like something that involved custom disk firmware and much slower drives back when it was really doing what it claimed. I strongly suspect it's kept around on the controllers which primarily target Macs (and video editing studios) because at some point in history there was a commercial solution (like the bit-level striping of RAID 2) that implemented it, maybe for Amiga users, or maybe for Macs during the period when they were still shipping with SCSI disks and hardware that justified their price and reputation as multimedia editing machines, back when that was a nightmare to do on Windows NT or OS/2. Maybe it was for something higher end like the SGI workstations and larger visualization machines like the Onyx2.
I found something by Sun about it, but they were likely just trying to make the JRE load quickly enough on servers to sell customers on the idea of doing anything important with Java, so they aren't really relevant. The working pros from back then either burned out from 90-hour work weeks or became managers, and they still want RAID 3, so why not sell it to them. Let's further suppose that disk RPMs and data density were once low enough that synchronizing heads across drives at the byte level was possible. Those days are past; modern drives have RPM drift tolerances, and for that matter platter drift tolerances (I can view mine in smartctl on the 12 TB Toshiba). There's no hope of synchronizing even 2 drives to byte-level accuracy for very long. The only way to really implement byte-level striping would be to stripe the data at the byte level when a write is issued, then do full-sector writes, and reverse the process when the data is read. That doesn't really provide any speed or parallelism advantage, so why not fake it further and just do RAID 4 with sector-sized blocks and call it a day. You'll take an access-time hit since the last drive to respond limits the speed, but it's not much these days; on solid state it's basically nothing. On a minimal array with 4K-sector drives the stripes are 16K, but that's fairly standard when formatting a filesystem on a very large drive where small files aren't expected, as in this case, so it doesn't really matter. The pros who haven't updated their knowledge base are happy, and media people aren't going to forensically examine individual disks to make sure a given text string isn't split across all drives. Win-win. The old arrays aren't going to be compatible with modern controllers anyway, and the hardware has almost certainly failed by now or been tossed due to obsolescence.
There are lots of aspects of Mac OS that are extremely backwards for the sake of keeping long-time Mac users happy, so it wouldn't be surprising to see the hardware companies following suit. That's all speculation, but if anyone can find one I'd love to see some kind of analysis of, or reference to, the actual functioning of modern RAID 3 included. As usual, all my Google searches turned up pages that either copy-pasted the Wikipedia entry or an older version of it, and even the references in that section link to the RAID 2 portion of the book (which I don't have access to and am unlikely to find, as outdated computer books, especially of that sort, get trashed faster than used tissue). A Shortfall Of Gravitas (talk) 13:31, 16 October 2022 (UTC)
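 * To make the "stripe at the byte level on write, then do full-sector writes" emulation above concrete, here's a hypothetical sketch. The function names and the 4096-byte sector size are my own illustrative assumptions, not taken from any real RAID 3 controller; it just deals bytes round-robin to N data drives, pads each drive's share to a whole sector, and re-interleaves on read.

```python
SECTOR = 4096  # assumed sector size; real drives may use 512 or 4096 bytes

def stripe_bytes(data: bytes, n_drives: int) -> list:
    """Deal bytes round-robin across drives, pad each share to a full sector."""
    shares = [bytearray() for _ in range(n_drives)]
    for i, byte in enumerate(data):
        shares[i % n_drives].append(byte)
    # All drives must receive the same number of whole sectors.
    padded = max((len(s) + SECTOR - 1) // SECTOR for s in shares) * SECTOR
    return [bytes(s.ljust(padded, b'\x00')) for s in shares]

def unstripe_bytes(shares: list, length: int) -> bytes:
    """Reverse the interleaving and trim the zero padding."""
    out = bytearray()
    for i in range(length):
        out.append(shares[i % len(shares)][i // len(shares)])
    return bytes(out)
```

Note that every read or write still touches full sectors on every drive, which is the point made above: the byte-level layout buys no parallelism that sector-sized RAID 4 blocks wouldn't.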