Danger - Computers That Think For You


The recent crash of Air France Flight 447 over the Atlantic raises the specter of computers that do too much work.

We use PCs because they are a means to an end, whatever that end might be. Our fossil readers may remember the widespread prediction that IBM’s computers would take over human jobs, displacing millions and causing mass unemployment. Fast forward to today and so many jobs are “computer enhanced” that doing them any other way would be a giant step backwards.

My friend Ernie owns a small Cessna and I enjoy flying around with him, sightseeing and just enjoying flying. Needless to say this is totally hands-on flying – no “fly-by-wire” here; the pilot flies the plane with hard-wired mechanical controls. Not so with today’s jetliners.

Fly-by-wire is a system in which “…a computer system is interposed between the operator and the final control actuators or surfaces.” In plain English, this means there is a bunch of PCBs and circuits between the pilot and the plane. Some cars have the same thing – an electronically controlled transmission, among other things. The VW Bug used to be simplicity personified – you could fix the whole car with a hairpin. The last one I owned had sensors for just about everything, and one day a small sensor failed and the car was immobilized.
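To make the “computer interposed between pilot and plane” idea concrete, here is a toy sketch of it. Everything here – the function name, the deflection limit – is invented for illustration; real flight control laws are vastly more complex.

```python
# Toy illustration of the fly-by-wire idea: the pilot's input is never
# wired directly to the control surface. Software sits in between,
# clamping commands to a safe "envelope" before they reach the actuator.
# All names and limits are hypothetical.

MAX_DEFLECTION_DEG = 25.0  # made-up elevator travel limit

def fly_by_wire(stick_input_deg: float) -> float:
    """Translate a raw stick input into an actuator command."""
    # The computer, not the pilot, has the final say on surface position.
    return max(-MAX_DEFLECTION_DEG, min(MAX_DEFLECTION_DEG, stick_input_deg))

print(fly_by_wire(10.0))  # within the envelope: passed through -> 10.0
print(fly_by_wire(40.0))  # outside the envelope: clamped -> 25.0
```

The point of the sketch is simply that the software layer is always in the loop – which is exactly why a fault in that layer, or in the sensors feeding it, matters so much.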

It appears increasingly likely that the Air France disaster was triggered by a sensor failure in the airspeed indicators, which led to a cascade of catastrophic failures that was not foreseen when the flight control system was built. Systems with millions of lines of code are impossible to make totally fail-safe; there is simply no way to test every possible sequence of events. Any of us who has experienced an inexplicable BSOD knows how true this is (a good explanation of the causes HERE).

This issue is not limited to planes – the recent Washington DC Metro crash appears to be another example of system failure. According to this article:

“The speed limit where the crash occurred is 59 mph, the top speed on the Metro system. If the track circuit failed to detect the idling train, computers onboard McMillan’s train would have set her train’s speed at 59 mph, making it difficult for her to hit the emergency brakes in time to avoid a crash.”
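The failure mode described in that quote can be sketched in a few lines. This is a deliberately naive illustration of the logic, not the actual Metro software; the function and values are assumptions drawn from the quote above.

```python
# Hypothetical sketch of the failure mode described above: an automatic
# train control system that sets commanded speed from track-circuit
# occupancy data. If the circuit fails silently (reports "clear" while a
# train is actually sitting there), the computer commands full line speed.
# Names and values are invented for illustration.

TOP_SPEED_MPH = 59  # top speed on the Metro system, per the quote

def commanded_speed(block_reports_occupied: bool) -> int:
    """Speed the onboard computer commands for the block ahead."""
    if block_reports_occupied:
        return 0          # stop: a train is detected ahead
    return TOP_SPEED_MPH  # track reads "clear": run at line speed

# A healthy circuit detecting the idling train stops the following train...
print(commanded_speed(True))   # -> 0
# ...but a failed circuit reporting "clear" commands 59 mph into it.
print(commanded_speed(False))  # -> 59
```

Notice that the code is doing exactly what it was designed to do – the catastrophe comes from trusting a sensor that is lying.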

Apparently control systems are considered so reliable that training for catastrophic failure is not all that robust. A case in point is the recent US Airways water landing in New York – the odds of such an event are extremely low, and simulator training for it is not stressed, which is understandable given the odds.

And in a nutshell, that’s what we do with complex systems – play the odds. We build in redundancies and backups for backups, but there is no way to foretell how increasingly complex systems might fail – and the more complex the system, the more likely there will be unforeseen events leading to failure.
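One common redundancy pattern, sketched in miniature: fit three sensors instead of one and vote on the readings, so a single bad sensor is outvoted. This is illustrative code, not avionics, and the readings are made up.

```python
# Triple redundancy with median voting: the median of three readings
# tolerates one arbitrarily wrong sensor. As the article argues, though,
# it cannot cover failure modes nobody anticipated -- such as two sensors
# icing over the same way at the same time and agreeing on a bad value.

def vote(a: float, b: float, c: float) -> float:
    """Return the median of three redundant sensor readings."""
    return sorted([a, b, c])[1]

print(vote(250.0, 251.0, 249.5))  # all healthy -> 250.0
print(vote(250.0, 251.0, 60.0))   # one sensor fails low -> still 250.0
```

Voting handles the failures we plan for; it is the sequence of events nobody imagined that slips through – which is the whole point of playing the odds.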

Playing the odds usually works OK until the odds run out – just don’t make the wrong bet.
