Fault tolerance is the ability of a system to maintain proper operation despite failures or faults in one or more of its components. This capability is essential for high-availability, mission-critical, or even life-critical systems.
Fault tolerance specifically refers to a system's capability to handle faults without any degradation or downtime. In the event of an error, end-users remain unaware of any issues. Conversely, a system that experiences errors with some interruption in service or graceful degradation of performance is termed 'resilient'. In resilience, the system adapts to the error, maintaining service but acknowledging a certain impact on performance.
Typically, fault tolerance describes computer systems, ensuring the overall system remains functional despite hardware or software issues. Non-computing examples include structures that retain their integrity despite damage from fatigue, corrosion, or impact.
History
The first known fault-tolerant computer was SAPO, built in 1951 in Czechoslovakia by Antonín Svoboda.[1]: 155 Its basic design was magnetic drums connected via relays, with a voting method of memory error detection (triple modular redundancy). Several other machines were developed along this line, mostly for military use. Eventually, they separated into three distinct categories:
- Machines that would last a long time without any maintenance, such as the ones used on NASA space probes and satellites;
- Computers that were very dependable but required constant monitoring, such as those used to monitor and control nuclear power plants or supercollider experiments; and
- Computers with a high amount of runtime that would be under heavy use, such as many of the supercomputers used by insurance companies for their probability monitoring.
Most of the development in the so-called LLNM (Long Life, No Maintenance) computing was done by NASA during the 1960s,[2] in preparation for Project Apollo and other research aspects. NASA's first machine went into a space observatory, and their second attempt, the JSTAR computer, was used in Voyager. This computer had a backup of memory arrays to use memory recovery methods and thus it was called the JPL Self-Testing-And-Repairing computer. It could detect its own errors and fix them or bring up redundant modules as needed. The computer is still working, as of early 2022.[3]
Hyper-dependable computers were pioneered mostly by aircraft manufacturers,[1]: 210 nuclear power companies, and the railroad industry in the United States. These entities needed computers with massive amounts of uptime that would fail gracefully enough during a fault to allow continued operation, while relying on constant human monitoring of computer output to detect faults. Again, IBM developed the first computer of this kind for NASA for guidance of Saturn V rockets, but later on BNSF, Unisys, and General Electric built their own.[1]: 223
In the 1970s, much work happened in the field.[4][5][6] For instance, the F14 CADC had built-in self-test and redundancy.[7]
In general, the early efforts at fault-tolerant designs were focused mainly on internal diagnosis, where a fault would indicate something was failing and a worker could replace it. SAPO, for instance, had a method by which faulty memory drums would emit a noise before failure.[8] Later efforts showed that to be fully effective, the system had to be self-repairing and diagnosing – isolating a fault and then implementing a redundant backup while alerting a need for repair. This is known as N-model redundancy, where faults cause automatic fail-safes and a warning to the operator, and it is still the most common form of level one fault-tolerant design in use today.
Voting was another initial method, as discussed above, with multiple redundant backups operating constantly and checking each other's results. For example, if four components reported an answer of 5 and one component reported an answer of 6, the other four would "vote" that the fifth component was faulty and have it taken out of service. This is called M out of N majority voting.
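As an illustration of M out of N majority voting, the sketch below (in Python, with a helper name of our own choosing rather than one from any cited system) accepts the value reported by a strict majority of redundant components and flags the dissenters as candidates for removal from service.

```python
from collections import Counter

def majority_vote(readings):
    """Return (agreed_value, suspect_indices) from redundant component readings."""
    counts = Counter(readings)
    value, votes = counts.most_common(1)[0]
    if votes <= len(readings) // 2:
        # Without a strict majority, the voter cannot mask the fault.
        raise RuntimeError("no majority - fault cannot be masked")
    suspects = [i for i, r in enumerate(readings) if r != value]
    return value, suspects

# The example from the text: four components report 5, one reports 6.
value, suspects = majority_vote([5, 5, 6, 5, 5])
print(value, suspects)  # -> 5 [2]  (component 2 would be taken out of service)
```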
Historically, the trend has been to move away from N-model redundancy and toward M out of N voting, owing to the increasing complexity of systems and the difficulty of ensuring that the transition from a fault-negative to a fault-positive state did not disrupt operations.
Tandem Computers (in 1976)[9] and Stratus were among the first companies specializing in the design of fault-tolerant computer systems for online transaction processing.
Examples
Hardware fault tolerance sometimes requires that broken parts be taken out and replaced with new parts while the system is still operational (in computing known as hot swapping). Such a system implemented with a single backup is known as single point tolerant and represents the vast majority of fault-tolerant systems. In such systems the mean time between failures should be long enough for the operators to have sufficient time to fix the broken devices (mean time to repair) before the backup also fails. It is helpful if the time between failures is as long as possible, but this is not specifically required in a fault-tolerant system.
Fault tolerance is notably successful in computer applications. Tandem Computers built their entire business on such machines, which used single-point tolerance to create their NonStop systems with uptimes measured in years.
Fail-safe architectures may also encompass the computer software, for example by process replication.
Data formats may also be designed to degrade gracefully. HTML, for example, is designed to be forward compatible, allowing Web browsers to ignore new and unsupported HTML entities without causing the document to be unusable. Additionally, some sites, including popular platforms such as Twitter (until December 2020), provide an optional lightweight front end that does not rely on JavaScript and has a minimal layout, to ensure wide accessibility and outreach, such as on game consoles with limited web browsing capabilities.[10][11]
Terminology
A highly fault-tolerant system might continue at the same level of performance even though one or more components have failed. For example, a building with a backup electrical generator will provide the same voltage to wall outlets even if the grid power fails.
A system that is designed to fail safe, or fail-secure, or fail gracefully, whether it functions at a reduced level or fails completely, does so in a way that protects people, property, or data from injury, damage, intrusion, or disclosure. In computers, a program might fail-safe by executing a graceful exit (as opposed to an uncontrolled crash) to prevent data corruption after an error occurs.[12] A similar distinction is made between "failing well" and "failing badly".
A system designed to experience graceful degradation, or to fail soft (used in computing, similar to "fail safe"[13]), operates at a reduced level of performance after some component fails. For example, if grid power fails, a building may operate lighting at reduced levels or elevators at reduced speeds. In computing, if insufficient network bandwidth is available to stream an online video, a lower-resolution version might be streamed in place of the high-resolution version. Progressive enhancement is another example, where web pages are available in a basic functional format for older, small-screen, or limited-capability web browsers, but in an enhanced version for browsers capable of handling additional technologies or that have a larger display.
In fault-tolerant computer systems, programs that are considered robust are designed to continue operation despite an error, exception, or invalid input, instead of crashing completely. Software brittleness is the opposite of robustness. Resilient networks continue to transmit data despite the failure of some links or nodes. Resilient buildings and infrastructure are likewise expected to prevent complete failure in situations like earthquakes, floods, or collisions.
A system with high failure transparency will alert users that a component failure has occurred, even if it continues to operate with full performance, so that failure can be repaired or imminent complete failure anticipated.[14] Likewise, a fail-fast component is designed to report at the first point of failure, rather than generating reports when downstream components fail. This allows easier diagnosis of the underlying problem, and may prevent improper operation in a broken state.
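A minimal sketch of the fail-fast idea, assuming a hypothetical two-stage pipeline in which parse_record validates its input: the component reports the problem where it first becomes detectable instead of letting a bad value propagate downstream.

```python
def parse_record(raw: str) -> int:
    # Fail fast: report the fault at the first point where it is detectable,
    # rather than handing a bogus value to downstream components.
    if not raw.strip().isdigit():
        raise ValueError(f"malformed record: {raw!r}")
    return int(raw)

def downstream_total(records):
    # Downstream code can assume validated inputs; a failure surfacing here
    # would be much harder to trace back to the malformed record that caused it.
    return sum(parse_record(r) for r in records)

print(downstream_total(["10", "20"]))   # 30
# downstream_total(["10", "x"]) raises ValueError at the faulty record itself.
```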
A single fault condition is a situation where one means for protection against a hazard is defective. If a single fault condition results unavoidably in another single fault condition, the two failures are considered one single fault condition.[15] A source offers the following example:
A single-fault condition is a condition when a single means for protection against hazard in equipment is defective or a single external abnormal condition is present, e.g. short circuit between the live parts and the applied part.[16]
Criteria
Providing fault-tolerant design for every component is normally not an option. Associated redundancy brings a number of penalties: increase in weight, size, power consumption, cost, as well as time to design, verify, and test. Therefore, a number of choices have to be examined to determine which components should be fault tolerant:[17]
- How critical is the component? In a car, the radio is not critical, so this component has less need for fault tolerance.
- How likely is the component to fail? Some components, like the drive shaft in a car, are not likely to fail, so no fault tolerance is needed.
- How expensive is it to make the component fault tolerant? Requiring a redundant car engine, for example, would likely be too expensive, both economically and in terms of weight and space, to be considered.
An example of a component that passes all three tests is a car's occupant restraint system. The primary occupant restraint system is not normally thought of as such: it is gravity. If the vehicle rolls over or undergoes severe g-forces, then this primary method of occupant restraint may fail. Restraining the occupants during such an accident is absolutely critical to safety, so the first test is passed. Accidents causing occupant ejection were quite common before seat belts, so the second test is passed. The cost of a redundant restraint method like seat belts is quite low, both economically and in terms of weight and space, so the third test is passed. Therefore, adding seat belts to all vehicles is an excellent idea. Other "supplemental restraint systems", such as airbags, are more expensive and so pass that test by a smaller margin.
Another long-standing example of this principle in practice is the braking system. While the brake mechanisms themselves are critical, they are not particularly prone to sudden (rather than progressive) failure, and they are in any case necessarily duplicated to allow even and balanced application of brake force to all wheels. Further doubling-up of the main components would be prohibitively costly and would add considerable weight. However, the similarly critical systems for actuating the brakes under driver control are inherently less robust, generally using a cable (which can rust, stretch, jam, or snap) or hydraulic fluid (which can leak, boil, develop bubbles, or absorb water and thus lose effectiveness). Thus, in most modern cars the footbrake's hydraulic circuit is diagonally divided to give two smaller points of failure; the loss of either reduces brake power by only 50% and does not cause as dangerous a brake-force imbalance as a straight front-back or left-right split. Should the hydraulic circuit fail completely (a relatively rare occurrence), there is a failsafe in the form of the cable-actuated parking brake, which operates the otherwise relatively weak rear brakes but can still bring the vehicle to a safe halt in conjunction with transmission/engine braking, so long as the demands on it are in line with normal traffic flow. The cumulatively unlikely combination of total footbrake failure with the need for harsh braking in an emergency will likely result in a collision, but still one at lower speed than would otherwise have been the case.
In comparison with the foot-pedal-activated service brake, the parking brake itself is a less critical item. Unless it is being used as a one-time backup for the footbrake, it will not cause immediate danger if it is found to be nonfunctional at the moment of application. Therefore, no redundancy is built into it per se (and it typically uses a cheaper, lighter, but less hard-wearing cable actuation system). If the failure is discovered on a hill, it can suffice to use the footbrake to momentarily hold the vehicle still before driving off to find a flat piece of road on which to stop. Alternatively, on shallow gradients, the transmission can be shifted into Park, Reverse, or First gear, and the transmission lock or engine compression used to hold the vehicle stationary, since these mechanisms only need to hold it in place rather than first bring it to a halt.
On motorcycles, a similar level of fail-safety is provided by simpler methods. First, the front and rear brake systems are entirely separate, regardless of their method of activation (cable, rod, or hydraulic), allowing one to fail entirely while leaving the other unaffected. Second, the rear brake is relatively strong compared with its automotive counterpart, being a powerful disc on some sports models, even though the usual intent is for the front system to provide the vast majority of braking force; because the overall vehicle weight is more central, the rear tire is generally larger and has better traction, and the rider can lean back to put more weight on it, allowing more brake force to be applied before the wheel locks. On cheaper, slower utility-class machines, even if the front wheel uses a hydraulic disc for extra brake force and easier packaging, the rear will usually be a primitive, somewhat inefficient, but exceptionally robust rod-actuated drum, thanks to the ease of connecting the foot pedal to the wheel in this way and, more importantly, the near impossibility of catastrophic failure even if the rest of the machine, like many low-priced bikes after their first few years of use, is on the point of collapse from neglected maintenance.
Requirements
The basic characteristics of fault tolerance require:
- No single point of failure – If a system experiences a failure, it must continue to operate without interruption during the repair process.
- Fault isolation to the failing component – When a failure occurs, the system must be able to isolate the failure to the offending component. This requires the addition of dedicated failure detection mechanisms that exist only for the purpose of fault isolation. Recovery from a fault condition requires classifying the fault or failing component. The National Institute of Standards and Technology (NIST) categorizes faults based on locality, cause, duration, and effect.
- Fault containment to prevent propagation of the failure – Some failure mechanisms can cause a system to fail by propagating the failure to the rest of the system. An example of this kind of failure is the "rogue transmitter" that can swamp legitimate communication in a system and cause overall system failure. Firewalls or other mechanisms that isolate a rogue transmitter or failing component are required to protect the system.
- Availability of reversion modes
In addition, fault-tolerant systems are characterized in terms of both planned service outages and unplanned service outages. These are usually measured at the application level and not just at a hardware level. The figure of merit is called availability and is expressed as a percentage. For example, a five nines system would statistically provide 99.999% availability.
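As a worked illustration of the availability figure of merit, the short sketch below (the helper name is ours, not a standard API) converts an availability percentage into the corresponding outage budget per year.

```python
def downtime_per_year(availability_percent: float) -> float:
    """Return the allowed outage, in minutes per year, for a given availability."""
    minutes_per_year = 365.25 * 24 * 60
    return minutes_per_year * (1 - availability_percent / 100)

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% availability -> {downtime_per_year(nines):.2f} min/year of outage")
# A "five nines" (99.999%) system is allowed roughly 5.26 minutes of outage per year.
```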
Fault-tolerant systems are typically based on the concept of redundancy.
Fault tolerance techniques
Research into the kinds of tolerances needed for critical systems involves a large amount of interdisciplinary work. The more complex the system, the more carefully all possible interactions have to be considered and prepared for. Considering the importance of high-value systems in transport, public utilities, and the military, the field of topics that touch on research is very wide: it can include such obvious subjects as software modeling and reliability, or hardware design, to arcane elements such as stochastic models, graph theory, formal or exclusionary logic, parallel processing, remote data transmission, and more.[18]
Replication
Spare components address the first fundamental characteristic of fault tolerance in three ways:
- Replication: Providing multiple identical instances of the same system or subsystem, directing tasks or requests to all of them in parallel, and choosing the correct result on the basis of a quorum (see the sketch after this list);
- Redundancy: Providing multiple identical instances of the same system and switching to one of the remaining instances in case of a failure (failover);
- Diversity: Providing multiple different implementations of the same specification, and using them like replicated systems to cope with errors in a specific implementation.
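A minimal sketch of the replication-with-quorum approach from the list above: the same request goes to several replicas in parallel, and an answer is accepted only when at least a quorum of them agree. The replica and helper names are illustrative, not taken from any particular system.

```python
from collections import Counter

def query_replicas(replicas, request, quorum):
    """Send the same request to every replica and accept the quorum answer."""
    answers = []
    for replica in replicas:
        try:
            answers.append(replica(request))
        except Exception:
            pass  # A failed replica simply contributes no vote.
    if answers:
        answer, votes = Counter(answers).most_common(1)[0]
        if votes >= quorum:
            return answer
    raise RuntimeError("quorum not reached - result cannot be trusted")

# Three illustrative replicas; one returns a corrupted result.
replicas = [lambda x: x * 2, lambda x: x * 2, lambda x: x * 2 + 1]
print(query_replicas(replicas, 21, quorum=2))  # -> 42
```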
All implementations of RAID (redundant array of independent disks), except RAID 0, are examples of a fault-tolerant storage device that uses data redundancy.
A lockstep fault-tolerant machine uses replicated elements operating in parallel. At any time, all the replications of each element should be in the same state. The same inputs are provided to each replication, and the same outputs are expected. The outputs of the replications are compared using a voting circuit. A machine with two replications of each element is termed dual modular redundant (DMR). The voting circuit can then only detect a mismatch, and recovery relies on other methods. A machine with three replications of each element is termed triple modular redundant (TMR). The voting circuit can determine which replication is in error when a two-to-one vote is observed. In this case, the voting circuit can output the correct result, and discard the erroneous version. After this, the internal state of the erroneous replication is assumed to be different from that of the other two, and the voting circuit can switch to a DMR mode. This model can be applied to any larger number of replications.
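The sketch below expresses the TMR voting step described above as ordinary code rather than a hardware voting circuit: a two-to-one vote masks a single erroneous replication and identifies the dissenting replica so the machine could drop to DMR mode. The function name is illustrative.

```python
def tmr_vote(outputs):
    """Vote over three replica outputs; return (voted_output, failed_replica or None)."""
    a, b, c = outputs
    if a == b == c:
        return a, None
    if a == b:
        return a, 2   # Replica 2 is in error; continue in DMR mode with replicas 0 and 1.
    if a == c:
        return a, 1
    if b == c:
        return b, 0
    # A three-way disagreement cannot be masked by a 2-of-3 voter.
    raise RuntimeError("three-way disagreement - fault cannot be masked")

print(tmr_vote([7, 7, 7]))   # (7, None)
print(tmr_vote([7, 9, 7]))   # (7, 1) - replica 1 is discarded, the voter degrades to DMR
```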
Lockstep fault-tolerant machines are most easily made fully synchronous, with each gate of each replication making the same state transition on the same edge of the clock, and the clocks to the replications being exactly in phase. However, it is possible to build lockstep systems without this requirement.
Bringing the replications into synchrony requires making their internal stored states the same. They can be started from a fixed initial state, such as the reset state. Alternatively, the internal state of one replica can be copied to another replica.
One variant of DMR is pair-and-spare. Two replicated elements operate in lockstep as a pair, with a voting circuit that detects any mismatch between their operations and outputs a signal indicating that there is an error. Another pair operates exactly the same way. A final circuit selects the output of the pair that does not proclaim that it is in error. Pair-and-spare requires four replicas rather than the three of TMR, but has been used commercially.
Failure-oblivious computing
Failure-oblivious computing is a technique that enables computer programs to continue executing despite errors.[19] The technique can be applied in different contexts. It can handle invalid memory reads by returning a manufactured value to the program,[20] which in turn makes use of the manufactured value and ignores the former memory value it tried to access. This is in great contrast to typical memory checkers, which inform the program of the error or abort the program.
The approach has performance costs: because the technique rewrites code to insert dynamic checks for address validity, execution time will increase by 80% to 500%.[21]
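The published technique operates at the compiler level on C programs; purely as an illustration of the idea, the hypothetical sketch below substitutes a manufactured value for an out-of-bounds read instead of reporting the error, so the surrounding program keeps executing.

```python
def oblivious_read(buffer, index, manufactured=0):
    # A conventional checker would report or abort on an invalid read;
    # a failure-oblivious read returns a manufactured value instead,
    # letting the rest of the program continue as if nothing had happened.
    if 0 <= index < len(buffer):
        return buffer[index]
    return manufactured

data = [3, 1, 4, 1, 5]
print(oblivious_read(data, 2))    # 4  (valid read)
print(oblivious_read(data, 99))   # 0  (invalid read: manufactured value, no crash)
```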
Recovery shepherding
editRecovery shepherding is a lightweight technique to enable software programs to recover from otherwise fatal errors such as null pointer dereference and divide by zero.[22]Comparing to the failure oblivious computing technique, recovery shepherding works on the compiled program binary directly and does not need to recompile to program.
It uses the just-in-time binary instrumentation framework Pin. It attaches to the application process when an error occurs, repairs the execution, tracks the repair effects as the execution continues, contains the repair effects within the application process, and detaches from the process after all repair effects are flushed from the process state. It does not interfere with the normal execution of the program and therefore incurs negligible overhead.[22] For 17 of 18 systematically collected real-world null-dereference and divide-by-zero errors, a prototype implementation enables the application to continue to execute to provide acceptable output and service to its users on the error-triggering inputs.[22]
Circuit breaker
The circuit breaker design pattern is a technique to avoid catastrophic failures in distributed systems.
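A minimal sketch of the circuit breaker pattern, assuming a hypothetical flaky remote call: after a threshold of consecutive failures the breaker "opens" and rejects calls immediately, then allows a single trial call after a cool-down period, so repeated failures do not cascade through the system. Production implementations typically add explicit closed/open/half-open states, error classification, and metrics.

```python
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None            # None means the breaker is closed.

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open - failing fast without calling")
            self.opened_at = None        # Half-open: allow one trial call.
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                # A success closes the breaker again.
        return result

# Usage (the remote call is hypothetical):
# breaker = CircuitBreaker()
# breaker.call(fetch_remote_resource, "item-42")
```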
Redundancy
Redundancy is the provision of functional capabilities that would be unnecessary in a fault-free environment.[23] This can consist of backup components that automatically "kick in" if one component fails. For example, large cargo trucks can lose a tire without any major consequences. They have many tires, and no one tire is critical (with the exception of the front tires, which are used to steer, but generally carry less load, each and in total, than the other four to 16, so are less likely to fail). The idea of incorporating redundancy in order to improve the reliability of a system was pioneered by John von Neumann in the 1950s.[24]
Two kinds of redundancy are possible:[25] space redundancy and time redundancy. Space redundancy provides additional components, functions, or data items that are unnecessary for fault-free operation. Space redundancy is further classified into hardware, software, and information redundancy, depending on the type of redundant resources added to the system. In time redundancy, the computation or data transmission is repeated and the result is compared to a stored copy of the previous result. The current term for this kind of testing is "In Service Fault Tolerance Testing", or ISFTT for short.
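As an illustration of time redundancy, the sketch below repeats a computation and compares the result with the first run, flagging a transient fault when the two disagree. The helper name and retry count are our own, chosen for the example.

```python
def with_time_redundancy(compute, *args, retries=1):
    """Run the same computation more than once and compare the results."""
    reference = compute(*args)
    for _ in range(retries):
        if compute(*args) != reference:
            # A mismatch between repeated runs indicates a transient fault.
            raise RuntimeError("transient fault detected: repeated runs disagree")
    return reference

print(with_time_redundancy(sum, [1, 2, 3]))  # 6
```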
Disadvantages
Fault-tolerant design's advantages are obvious, while many of its disadvantages are not:
- Interference with fault detection in the same component. To continue the above passenger vehicle example, with either of the fault-tolerant systems it may not be obvious to the driver when a tire has been punctured. This is usually handled with a separate "automated fault-detection system". In the case of the tire, an air pressure monitor detects the loss of pressure and notifies the driver. The alternative is a "manual fault-detection system", such as manually inspecting all tires at each stop.
- Interference with fault detection in another component. Another variation of this problem is when fault tolerance in one component prevents fault detection in a different component. For example, if component B performs some operation based on the output from component A, then fault tolerance in B can hide a problem with A. If component B is later changed (to a less fault-tolerant design) the system may fail suddenly, making it appear that the new component B is the problem. Only after the system has been carefully scrutinized will it become clear that the root problem is actually with component A.
- Reduction of priority of fault correction. Even if the operator is aware of the fault, having a fault-tolerant system is likely to reduce the importance of repairing the fault. If the faults are not corrected, this will eventually lead to system failure, when the fault-tolerant component fails completely or when all redundant components have also failed.
- Test difficulty. For certain critical fault-tolerant systems, such as a nuclear reactor, there is no easy way to verify that the backup components are functional. The most infamous example of this is Chernobyl, where operators tested the emergency backup cooling by disabling primary and secondary cooling. The backup failed, resulting in a core meltdown and massive release of radiation.
- Cost. Both fault-tolerant components and redundant components tend to increase cost. This can be a purely economic cost or can include other measures, such as weight. Crewed spaceships, for example, have so many redundant and fault-tolerant components that their weight is increased dramatically over uncrewed systems, which do not require the same level of safety.
- Inferior components. A fault-tolerant design may allow for the use of inferior components, which would have otherwise made the system inoperable. While this practice has the potential to mitigate the cost increase, use of multiple inferior components may lower the reliability of the system to a level equal to, or even worse than, a comparable non-fault-tolerant system.
Related terms
There is a difference between fault tolerance and systems that rarely have problems. For instance, the Western Electric crossbar systems had failure rates of two hours per forty years, and therefore were highly fault resistant. But when a fault did occur they still stopped operating completely, and therefore were not fault tolerant.
See also
- Byzantine fault tolerance
- Control reconfiguration
- Damage tolerance
- Data redundancy
- Defence in depth
- Ecological resilience
- Elegant degradation
- Error detection and correction
- Error-tolerant design (human error-tolerant design)
- Fail-safe
- Failure semantics
- Fall back and forward
- Graceful exit
- Intrusion tolerance
- List of system quality attributes
- Progressive enhancement
- Resilience (network)
- Robustness (computer science)
- Rollback (data management)
- Self-management (computer science)
- Crash-only software
References
- ^ a b c Daniel P. Siewiorek; C. Gordon Bell; Allen Newell (1982). Computer Structures: Principles and Examples. McGraw-Hill. ISBN 0-07-057302-6.
- ^ Algirdas Avižienis; George C. Gilley; Francis P. Mathur; David A. Rennels; John A. Rohr; David K. Rubin. "The STAR (Self-Testing And Repairing) Computer: An Investigation Of the Theory and Practice Of Fault-tolerant Computer Design" (PDF).
- ^ "Voyager Mission state (more often than not at least three months out of date)". NASA. Retrieved 2022-04-01.
- ^ Randell, Brian; Lee, P.A.; Treleaven, P. C. (June 1978). "Reliability Issues in Computing System Design". ACM Computing Surveys. 10 (2): 123–165. doi:10.1145/356725.356729. ISSN 0360-0300. S2CID 16909447.
- ^ P. J. Denning (December 1976). "Fault tolerant operating systems". ACM Computing Surveys. 8 (4): 359–389. doi:10.1145/356678.356680. ISSN 0360-0300. S2CID 207736773.
- ^ Theodore A. Linden (December 1976). "Operating System Structures to Support Security and Reliable Software". ACM Computing Surveys. 8 (4): 409–445. doi:10.1145/356678.356682. hdl:2027/mdp.39015086560037. ISSN 0360-0300. S2CID 16720589.
- ^ Ray Holt. "The F14A Central Air Data Computer, and the LSI Technology State-of-the-Art in 1968".
- ^ Neilforoshan, M.R. (April 2003). "Fault tolerant computing in computer design". Journal of Computing Sciences in Colleges. 18 (4): 213–220. ISSN 1937-4771.
- ^ "History of TANDEM COMPUTERS, INC". FundingUniverse. Retrieved 2023-03-01.
- ^ Nathaniel (17 March 2021). "Why your website should work without JavaScript". DEV Community. Retrieved 2021-05-16.
- ^ Fairfax, Zackerie (2020-11-28). "Legacy Twitter Shutdown Means You Can't Tweet From The 3DS Anymore". Screen Rant. Retrieved 2021-07-01.
- ^ Hudak, J.J.; Suh, B.-H.; Siewiorek, D.P.; Segall, Z. (1993). "Evaluation and comparison of fault-tolerant software techniques". IEEE Transactions on Reliability. 42 (2): 190–204. doi:10.1109/24.229487. ISSN 1558-1721.
- ^ Stallings, W. (2009). Operating Systems: Internals and Design Principles, sixth edition.
- ^ Thampi, Sabu M. (2009-11-23). "Introduction to Distributed Systems". arXiv:0911.4395 [cs.DC].
- ^ "Control". IEEE. Archived from the original on 1999-10-08. Retrieved 2016-04-06.
- ^ Baha Al-Shaikh, Simon G. Stacey, Essentials of Equipment in Anaesthesia, Critical Care, and Peri-Operative Medicine (2017), p. 247.
- ^ Dubrova, E. (2013). Fault-Tolerant Design. Springer. ISBN 978-1-4614-2112-2.
- ^ Reliability evaluation of some fault-tolerant computer architectures. Springer-Verlag. November 1980. ISBN 978-3-540-10274-8.
- ^ Herzberg, Amir; Shulman, Haya (2012). "Oblivious and Fair Server-Aided Two-Party Computation". 2012 Seventh International Conference on Availability, Reliability and Security. IEEE. pp. 75–84. doi:10.1109/ares.2012.28. ISBN 978-1-4673-2244-7. S2CID 6579295.
- ^ Rigger, Manuel; Pekarek, Daniel; Mössenböck, Hanspeter (2018). "Context-Aware Failure-Oblivious Computing as a Means of Preventing Buffer Overflows". Network and System Security. Lecture Notes in Computer Science, vol. 11058. Cham: Springer International Publishing. pp. 376–390. arXiv:1806.09026. doi:10.1007/978-3-030-02744-5_28. ISBN 978-3-030-02743-8. Retrieved 2020-10-07.
- ^ Keromytis, Angelos D. (2007). "Characterizing Software Self-Healing Systems". In Gorodetski, Vladimir I.; Kotenko, Igor; Skormin, Victor A. (eds.). Computer Network Security: Fourth International Conference on Mathematical Methods, Models, and Architectures for Computer Network Security. Springer. ISBN 978-3-540-73985-2.
- ^ a b c Long, Fan; Sidiroglou-Douskos, Stelios; Rinard, Martin (2014). "Automatic Runtime Error Repair and Containment via Recovery Shepherding". Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation. PLDI '14. New York, NY, US: ACM. pp. 227–238. doi:10.1145/2594291.2594337. ISBN 978-1-4503-2784-8. S2CID 6252501.
- ^ Laprie, J. C. (1985). "Dependable Computing and Fault Tolerance: Concepts and Terminology". Proceedings of the 15th International Symposium on Fault-Tolerant Computing (FTSC-15). pp. 2–11.
- ^ von Neumann, J. (1956). "Probabilistic Logics and Synthesis of Reliable Organisms from Unreliable Components". In Automata Studies, eds. C. Shannon and J. McCarthy. Princeton University Press. pp. 43–98.
- ^ Avizienis, A. (1976). "Fault-Tolerant Systems". IEEE Transactions on Computers. 25 (12): 1304–1312.