A computer program may consist of millions of lines of code, written by hundreds of people who each work on only a small segment of it, and an error as tiny as a misplaced semicolon can cause a system to malfunction. Corporate computer programmers spend 80% of their time repairing software and updating it to keep it running. Software projects typically run 100% over budget and a year behind schedule. The best programmers can be 25 times as productive as the worst, and many software design supervisors are unable to evaluate, or even understand, their programmers' work. If management changes during the development of a program, the programmers may find that their product is unwanted by the time it is completed. Confusion also develops in data-processing departments because programmers have written code without documenting how they approached the problem; the individual maintaining the program 10 or 20 years later, after the original programmer has left, is out of luck. Such problems are likely to grow as industry and the military increasingly rely on software to run systems of phenomenal complexity.
Examples include patients given fatal doses by malfunctioning hospital computers, 22 fatal crashes of the fly-by-wire UH-60 helicopter, 104 failures in a single day at one air traffic control location in 1989, and the failure during the 1970s and 1980s of observation satellites to detect atmospheric ozone depletion because of a programming error. A specific example of the problems with software is its use in the Strategic Defense Initiative (SDI). An SDI program must make assumptions about target and decoy characteristics, yet those characteristics are controlled by the attacker; it must also make assumptions about the structure of the attack. An attacker can therefore overload the system simply by adopting a strategy that violates those assumptions. Realistic testing of the integrated hardware and software after deployment is impossible, and there will be no opportunity to modify the software during or after its first battle. Bugs have killed or maimed individuals, caused serious financial harm to corporations, and nearly brought about the collapse of the US government-securities market.
Computer systems are inherently flawed and too unreliable for critical or vital tasks. They cannot be designed without the ever-present threat of life-endangering malfunctions, because their very complexity makes thorough testing for errors impossible. The way they are built makes them prone to total catastrophic failure rather than partial failure, and as they become more complex, so does the scale of the catastrophe. For these reasons they are dangerous when used in sensitive areas such as intensive care wards, the nuclear industry, air traffic control, and early warning and strike command systems. Checking a typical power station's computer program to ensure it is error-free under all conditions would take software testers trillions of years.