The Code of Hammurabi stated nearly 4,000 years ago: ‘If a builder builds a house and the house collapses and causes the death of the owner, that builder shall be put to death’.
Certainly, the Romans were also ruthless in executing engineers whose viaducts and bridges failed. Penalties are perhaps less harsh today, but the consequences of negligence can be far deadlier because of the far greater number of people using engineered facilities. Simply put: an engineered system fails when it stops working. And failure is often due to negligence in design and construction – often through human factors.
Think of the disasters
Think of some of the disasters that litter the engineering landscape:
- The Challenger Space Shuttle explodes, killing its crew of seven, after an O-ring failure leads to the explosion of the liquid fuel tank.
- Bhopal. A piping-system failure releases toxic vapour, killing thousands.
- Piper Alpha. An offshore platform explodes, killing 167 personnel.
- Chernobyl. A radioactive cloud spreads over Europe.
- Therac-25, a cancer irradiation device, delivers massive radiation overdoses due to a software bug, killing patients.
And recently, some spectacularly ugly train accidents. How on earth, after so much investment in train safety systems, can we still have head-on collisions?
The Primary Causes of Engineering Disasters
The primary causes of engineering disasters (according to SUNY at Stony Brook) are, entirely or in part:
- Human factors (incl. both ethical failure and accidents)
- Design flaws (resulting often from unethical practices)
- Materials failures
- Extreme conditions or environments
A recent study of 800 structural failures found that engineers were at fault, with the top four reasons being:
- Insufficient knowledge (36%)
- Underestimation of influence (16%)
- Ignorance, carelessness, negligence (14%)
- Forgetfulness, error (13%)
How do we guard against these human flaws?
So, in our engineering endeavours, how do we guard against these human flaws?
Some suggestions are listed here:
- Build redundancy into design with functionally isolated systems
- Keep spares on hand, especially when components are inexpensive, fail often, or can be replaced easily
- Know the details of your design – corners, connections, reinforcements – and do not assume anything
- Find trustworthy suppliers and stick to them
- Watch out for problems of scale (and when changing from static to dynamic conditions)
- If people are critical to the operation, run tests to establish the optimal number of personnel and the skill levels they need
- Train and retrain personnel; test and retest them if operator error can cause problems
- Use redundant software algorithms to minimize the impact of bugs
- Take care when filtering alarms or allowing them to be disabled
- Update documentation immediately when operation or design changes, and ensure everyone is aware of the changes
- Exercise management controls over the improvement of procedures and the handling of changes
- Use real independent verification – not just rubber-stamping – in cross-checking work
- Take extreme care in maintenance, especially with the release of stored energy and the removal of energy inputs to a system
- Use materials well within their safety limits
- Only operate equipment within design limits
- Inspect and test to eliminate defective components
- Stick strictly to applicable codes
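The suggestion above about redundant software algorithms can be sketched as a majority vote: compute the same quantity with independently written implementations and accept a result only when at least two agree. This is a minimal illustration in Python; the square-root example and all function names are my own assumptions, not from the article:

```python
# Minimal sketch of software redundancy via majority voting
# (triple modular redundancy). The three square-root routines
# below are illustrative stand-ins for independently written
# implementations of the same calculation.

def sqrt_newton(x: float) -> float:
    """Square root via Newton's method."""
    guess = x if x > 1 else 1.0
    for _ in range(60):
        guess = 0.5 * (guess + x / guess)
    return guess

def sqrt_bisection(x: float) -> float:
    """Square root via interval bisection."""
    lo, hi = 0.0, max(1.0, x)
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid * mid < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sqrt_power(x: float) -> float:
    """Square root via exponentiation."""
    return x ** 0.5

def vote(x: float, tolerance: float = 1e-6) -> float:
    """Return a result that at least two implementations agree on;
    raise if all three disagree (a likely bug somewhere)."""
    results = [sqrt_newton(x), sqrt_bisection(x), sqrt_power(x)]
    for candidate in results:
        agreeing = [r for r in results if abs(r - candidate) <= tolerance]
        if len(agreeing) >= 2:
            return candidate
    raise RuntimeError("No majority: implementations disagree")

print(vote(2.0))  # a value close to 1.41421356
```

The design choice here is that a single buggy implementation is outvoted by the other two; a disagreement among all three surfaces loudly as an error rather than silently producing a wrong answer.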
Hopefully, what Douglas Adams wrote is not true of you and me: ‘He attacked everything in life with a mix of extraordinary genius and naive incompetence, and it was often difficult to tell which was which’.
Thanks to the late Rich Barrett for his thoughts.
Yours in engineering learning
Mackay’s Musings – 15th May’12 #477
125,273 readers