A common approach to adding a layer of safety to RAID is to keep spare drives available so that the replacement time for a failed drive is minimized. The most extreme form of this is referred to as having a “hot spare” – a spare drive actually sitting in the array but unused until the array detects a drive failure, at which time the system automatically disables the failed drive and enables the hot spare, the same as if a human had just popped the one drive out of the array and popped in the other, allowing a resilver operation (a rebuilding of the array) to begin as soon as possible. This can bring the time to swap in a new drive down from hours or days to seconds and, in theory, can provide an extreme increase in safety.
First, I’d like to address what I personally feel is a mistake in the naming conventions. What we refer to as a hot spare should, I believe, actually be called a warm spare because it is sitting there ready to go but does not contain the necessary data to be used immediately. A spare drive stored outside of the chassis, one that requires a human to step in and swap the drives manually, would be a cold spare. To truly be a hot spare a drive should be full of data and, therefore, would be a participatory member of the RAID array in some capacity. Red Hat has a good article on how this terminology applies to disaster recovery sites for reference. This differentiation is important because what we call a hot spare does not already contain data and does not immediately step in to replace the failed drive but instead steps in to immediately begin the process of restoring the lost drive – a critical differentiation.
In order to keep concepts clear, from here on out I will refer to what vendors call hot spares as “warm spares.” This will make sense in short order.
There are two main concerns with warm spares. The first is the ineffectual nature of the warm spare in most use cases and the second is the “automated array destruction” risk.
Most people approach the warm spare concept as a means of mitigating the high risk of secondary drive failure on a parity RAID 5 array. RAID 5 arrays protect only against the failure of a single disk within the array. Once a single disk has failed the array is left with no form of parity and any additional drive failure results in the total loss of the array. RAID 5 is chosen because it is very low cost for the given capacity and sacrifices reliability in order to achieve this cost effectiveness. Because RAID 5 is therefore risky in comparison to other RAID options, such as RAID 6 or RAID 10, it is common to implement a warm spare in order to minimize the time that the array is left in a degraded state allowing the array to begin resilvering itself as quickly as possible.
The more relevant takeaway here is that warm spares are generally used as a buffer against using less reliable RAID array types as a cost saving measure. Warm spares are dramatically more common in RAID 5 arrays, followed by RAID 6 arrays, both of which are chosen over RAID 10 due to cost for capacity, not for reliability or performance. There is one case where the warm spare idea truly does make sense for added reliability, and that is RAID 10 with a warm spare, but we will come to that. Outside of that scenario I feel that warm spares make little sense in the real world.
We will start by examining RAID 1 with a warm spare. RAID 1 consists of two drives, or more, in a mirror. Adding a warm spare is nice in that if one of the mirrored pair dies the warm spare will immediately begin mirroring the remaining drive and you will be protected again in short order. That is wonderful, except for one minor flaw: instead of being used as a warm spare, that same drive could have been added to the RAID 1 array all along, where it would have been a tertiary mirror. In this tertiary mirror capacity the drive would have added to the overall performance of the array, giving a nearly fifty percent read performance boost with write performance staying level, and providing instant protection in case of a drive failure rather than “as soon as it remirrors” protection. Basically it would have been a true “hot spare” rather than a warm spare. So without spending a penny more, the system would have had better drive array performance and better reliability simply by having the extra drive in a hot, “in the array” capacity rather than sitting warm and idle waiting for disaster to strike.
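As a rough sketch of that read boost (the 150 MB/s per-drive figure is an arbitrary assumption, and this idealized model assumes perfect load balancing across mirror members):

```python
def mirror_read_throughput(members, per_drive_mbps=150):
    """Reads can be load-balanced across every member of a mirror,
    so aggregate read throughput scales roughly with member count.
    Writes must hit every member, so write throughput stays at the
    single-drive rate regardless of member count."""
    return members * per_drive_mbps

two_way = mirror_read_throughput(2)    # standard RAID 1 pair: 300 MB/s
three_way = mirror_read_throughput(3)  # with tertiary mirror: 450 MB/s
print((three_way - two_way) / two_way)  # 0.5, the ~50% read boost
```

The third member improves reads because it adds another independent spindle to serve them, while writes are unchanged since every member must be written either way.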
With RAID 5 we see an even more dramatic warning against the warm spare concept, here where it is more common than anywhere else. RAID 5 is single parity RAID with the ability to rebuild, using the parity, any drive in the array that fails. This is where the real problems begin. Unlike in RAID 1, where a remirroring operation might be quite quick, a RAID 5 resilver (rebuild) has the potential to take quite a long time. The warm spare will not assist in protecting the array until this resilver process completes successfully – this is commonly many hours and is easily days, and possibly weeks or months, depending on the size of the array and how busy the array is. If we took that same warm spare drive and instead tasked it with being a member of the array with an additional parity stripe we would achieve RAID 6. The same set of drives that we have for RAID 5 plus warm spare would create a RAID 6 array of the exact same capacity. Again, like the RAID 1 example above, this would be much like having a hot spare, where the drive is participating in the array with live data rather than sitting idly by waiting for another drive to fail before kicking in to begin the process of taking over. In this capacity the array degrades to a RAID 5 equivalent in case of a failure but without any rebuild time, so the additional drive is useful immediately rather than only after a possibly very lengthy resilver process. So for the same money and the same capacity, the choice of setting up the drives in RAID 6 rather than RAID 5 plus warm spare is a complete win.
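The capacity equivalence can be made concrete with a minimal sketch (the 4 TB drive size is an arbitrary assumption; any size gives the same result) comparing six purchased drives arranged as a five-disk RAID 5 plus warm spare versus a six-disk RAID 6:

```python
def raid5_usable_tb(drives, drive_tb):
    # RAID 5 spends one drive's worth of capacity on parity
    return (drives - 1) * drive_tb

def raid6_usable_tb(drives, drive_tb):
    # RAID 6 spends two drives' worth of capacity on parity
    return (drives - 2) * drive_tb

drive_tb = 4  # assumed drive size
# Six drives purchased either way:
r5_plus_spare = raid5_usable_tb(5, drive_tb)  # five in the array, one idle warm spare
r6 = raid6_usable_tb(6, drive_tb)             # all six participating in the array
print(r5_plus_spare, r6)  # 16 16 -- identical capacity, but RAID 6 is protected now
```

Same drive count, same usable capacity; the only difference is whether the sixth drive holds live parity or sits idle.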
We can continue this example with RAID 6 plus warm spare. This one is a little harder to pin down because in most RAID systems, except for the somewhat uncommon RAIDZ3 from ZFS, there is no triple parity system available one step above RAID 6 (imagine if there were a RAID 7, for example). If there were, the exact argument made for RAID 5 plus warm spare would apply to RAID 6 plus warm spare. In the majority of cases RAID 6 with a warm spare must justify itself against a RAID 10 array. RAID 10 is more performant and far more reliable than a RAID 6 array, but RAID 6 is generally chosen to save money in comparison to RAID 10. To offset RAID 6’s fragility, warm spares are sometimes employed. In some cases, such as a small five disk RAID 6 array with a warm spare, this is dollar for dollar equivalent to a six disk RAID 10 array without a warm spare. In larger arrays the cost benefit of RAID 6 does become apparent, but the larger the cost savings the larger the risk differential, as parity RAID systems increase risk with array size much more quickly than do mirror based RAID systems like RAID 10. Any money saved today is done at the risk of outage or data loss tomorrow.
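The five disk RAID 6 plus warm spare case can be sketched in the same fashion, counting capacity in units of one drive; both options consume six purchased drives and yield the same usable capacity:

```python
def raid6_usable(drives):
    return drives - 2   # two drives' worth of parity

def raid10_usable(drives):
    return drives // 2  # every drive is mirrored

# Option A: five-disk RAID 6 array plus one idle warm spare = six drives bought
raid6_option = raid6_usable(5)    # 3 drives of usable capacity

# Option B: six-disk RAID 10, no spare = the same six drives bought
raid10_option = raid10_usable(6)  # 3 drives of usable capacity
print(raid6_option, raid10_option)  # 3 3 -- same cost, same capacity
```

At this size the RAID 10 option costs nothing extra, so the RAID 6 plus spare configuration has no price advantage to offset its lower reliability and performance.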
Where a warm spare comes into play effectively is in a RAID 10 array: there a warm spare rebuild is a mirror rebuild, like in RAID 1, which does not carry parity risks, and there is no logical extension RAID system above RAID 10 from which we are trying to save money by going with a more fragile system. Here adding a warm spare may make sense for critical arrays because there is no more cost effective way to gain the same additional reliability. However, RAID 10 is so reliable without a warm spare that any shop contemplating RAID 5 or RAID 6 with a warm spare would logically stop at simple RAID 10, having already surpassed the reliability it was considering settling for previously. So only shops that were never considering those more fragile systems, and that are looking for the most robust possible option, would logically look to RAID 10 plus warm spare as their solution.
Just for technical accuracy, RAID 10 can be expanded for better read performance and dramatic improvement in reliability (but with a fifty percent cost increase) by moving to three disk RAID 1 mirrors in its RAID 0 stripe rather than standard two disk RAID 1 mirrors, just like we showed in our RAID 1 example. This is a level of reliability seldom sought in the real world but it can exist and is an option. Normally this is curtailed by drive count limitations in physical array chassis, as well as by competing poorly against building a completely separate secondary RAID 10 array in a different chassis and then mirroring these at a high level, effectively creating RAID 101 – which is the effective result of common, high end storage array clusters today.
Our second concern is that of “automated array destruction.” This applies only to the parity RAID scenarios of RAID 5 and RAID 6 (or the rare RAID 2, RAID 3, RAID 4 and RAIDZ3). With the warm spare concept, the idea is that when a drive fails the warm spare is automatically and instantly swapped in by the array controller and the process of resilvering the array begins immediately. If resilvering were a completely reliable process this would obviously be highly welcome. The reality is, sadly, quite different.
During a resilver process a parity RAID array is at risk of Unrecoverable Read Errors (UREs) cropping up. If a URE occurs in a single parity RAID resilver (that is RAID 2 – 5) then the resilvering process fails and the array is lost completely. This is critical to understand because no additional drive has failed. So if the warm spare had not been present then the resilvering would have not commenced and the data would still be intact and available – just not as quickly as usual and at the small risk of secondary drive failure. URE rates are very high with today’s large drives and with large arrays the risks can become so high as to move from “possible” to “expected” during a standard resilvering operation.
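A back-of-envelope model illustrates the risk. Assuming independent bit errors at the manufacturer-quoted rate (one URE per 10^14 bits read is a typical consumer SATA specification, 10^15 for enterprise SAS; the 2 TB drive size is an arbitrary example), the chance of completing a single-parity resilver without hitting a URE works out as:

```python
def p_resilver_survives(read_tb, ure_rate_bits=1e14):
    """Probability that a resilver reads `read_tb` terabytes from the
    surviving drives without encountering a single URE, modeling bit
    errors as independent at the quoted bit-error rate."""
    bits_to_read = read_tb * 1e12 * 8
    return (1 - 1 / ure_rate_bits) ** bits_to_read

# Rebuilding one failed drive in a six-drive RAID 5 of 2 TB disks
# means reading the 10 TB held on the five surviving drives:
consumer = p_resilver_survives(10, 1e14)    # ~0.45 with 10^14 SATA drives
enterprise = p_resilver_survives(10, 1e15)  # ~0.92 with 10^15 SAS drives
print(consumer, enterprise)
```

Under this simple model a large consumer-drive RAID 5 resilver fails more often than it succeeds, which is exactly the shift from “possible” to “expected” described above.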
So in many cases the warm spare itself might actually be the trigger for the loss of data rather than the savior of the data as expected. An array that would have survived might be destroyed by the resilvering process before the human who manages it is even alerted to the first drive having failed. Had a human been involved they could have, at the very least, taken the step to make a fresh backup of the array before kicking off the resilver knowing that the latest copy of the data would be available in case the resilver process was unsuccessful. It would also allow the human to schedule when the resilver should begin, possibly waiting until business hours are over or the weekend has begun when the array is less likely to experience heavy load.
Dual and triple parity RAID (RAID 6 and RAIDZ3 respectively) share the URE risk as well, as they too are based on parity. They mitigate this risk through the additional levels of parity and do so successfully for the most part. The risk still exists, especially in very large RAID 6 arrays, but for the next several years the risks remain generally quite low for the majority of storage arrays, until far larger spindle-based storage media is available on the market.
The biggest problem with parity RAID and the URE risk is that the driver towards parity RAID (a willingness to face additional data integrity risks in order to lower cost) is the same driver that introduces heightened URE risk (purchasing lower cost, non-enterprise SATA hard drives). Shops choosing parity RAID generally pair it with large, low cost SATA drives, bringing two very dangerous factors together for an explosive combination. Using non-parity RAID 1 or RAID 10 will completely eliminate the issue, and using highly reliable enterprise SAS drives will drastically reduce the risk factor by an order of magnitude (not an expression, it is actually a change of one order of magnitude).
Additionally during resilver operations it is possible for performance to degrade on parity systems so drastically as to equate to a long-term outage. The resilver process, especially on large arrays, can be so intensive that end users cannot differentiate between a completely failed array and a resilvering array. In fact, resilvering at its extreme can take so long and be so disruptive that the cost to the business can be higher than if the array had simply failed completely and a restore from backup had been done instead. This resilver issue does not affect RAID 1 and RAID 10, again, because they are mirrored, not parity, RAID systems and their resilver process is trivial and the performance degradation of the system is minimal and short lived. At its most extreme, a parity resilver could take weeks or months during which time the systems act as though they are offline – and at any point during this process there is the potential for the URE errors to arise as mentioned above which would end the resilver and force the restore from backup anyway. (Typical resilvers do not take weeks but do take many hours and to take days is not at all uncommon.)
Our final overview can be broken down to the following (conventional term “hot spare” used again): RAID 10 without a “hot spare” is almost always a better choice than RAID 6 with a “hot spare.” RAID 6 without a “hot spare” is always better than RAID 5 with a “hot spare.” RAID 1 with additional mirror member is always better than RAID 1 with a “hot spare.” So whatever RAID level with a hot spare you decide upon, simply move up one level of RAID reliability and drop the “hot spare” to maximize both performance and reliability for equal or nearly equal cost.
Warm spares, like parity RAID, had their day in the sun. In fact, it was when parity RAID still made sense for widespread use – when URE errors were unlikely and disk costs were high – that warm spare drives made sense as well. They were well paired; when one made sense the other often did too. What is often overlooked is that as parity RAID, especially RAID 5, has lost effectiveness, it has pulled the warm spare along with it in unexpected ways.