Aviation safety and human element

By Capt. G A Fernando
gafplane@sltnet.lk
RCyAF/SLAF, Air Ceylon, Air Lanka, SIA and SriLankan Airlines.
Former Crew Resource Management (CRM) Facilitator, Singapore Airlines Ltd.
Member, Independent Air Accident Investigation Pool 

(The first part of this article appeared in The Island of 18 Sept., 2023)

In 1982, an Air Florida Boeing 737 crashed into the Potomac River in Washington, D.C., after taking off from Washington National Airport in icing conditions. Erroneous engine thrust readings (higher than actual) and the co-pilot’s lack of assertiveness in voicing his concerns about the aircraft’s performance during the take-off run were among the factors cited (NTSB/AAR 82-08).

Experts say that one needs to be ‘aggressively’ safe. All communications (verbal or written) and standard operating procedures (SOPs) should be proactive, predictive and preventive. Some of the accidents mentioned could have been prevented.

As can be seen in the diagram, the SHELL boundaries are not smooth but inherently serrated, and much effort is needed for the elements to interact efficiently and seamlessly. Some experts stress that it is communication, in the form of SOPs, that ‘lubricates’ the system for smooth interaction between the elements, while the captain (team leader) sets the tone. Where air safety is concerned, Capt. Tony Kern, a human factors expert, says in his book Redefining Airmanship that it is imperative that the team leader knows himself, knows his team, knows his aircraft and equipment, knows his mission and, above all, evaluates the risks involved in the task at hand. There can also be problems with interaction within the team (Liveware and Liveware); sometimes the captain (leader) has to be an expert in conflict resolution! (See Figure 01)

Threats and Hazards

Almost every situation in life is full of ‘threats’. When it involves one personally, it becomes a ‘hazard’. In the aviation context, if there is a flock of birds in the vicinity of an aircraft, they constitute a ‘threat’. However, if that flock of birds starts crossing the flight path of the aircraft, it becomes a ‘hazard’ and avoidance action needs to be taken. Remember the ‘Miracle on the Hudson’? The engines failed because of bird ingestion.

Many airports, too, contain man-made threats and hazards, which are usually eliminated only after an accident. In fact, pilots say that blood has to be spilt before changes for the better occur. At many airports, high-rise or security-sensitive buildings are built without planning and with no consideration given to air safety, thus violating the law.

The Ratmalana International Airport is a case in point. On the landing approach from the Attidiya side lie Parliament and the Akuregoda military headquarters, both of which are prohibited overflying areas. In the vicinity of the airport are the Kotelawala Defence Academy and Hospital. At the Galle Road end, a solid wall creates a hazard for landing and departing aircraft. Elsewhere, at the Puttalam-Palavi airbase a cement factory is in line with the runway, while at China Bay, Trincomalee, the silos of a flour mill obstruct the landing and take-off paths. These hazards render the latter two airports useless as ‘alternate’ (alternative) international airports. If sufficient thought had been given to air-safety planning, the loss-making Mattala Rajapaksa International Airport in the Hambantota District would never have been built.

The Swiss Cheese Model

Just as one proverbial swallow doesn’t make a summer, one error alone will not create an incident or an accident. Rather, it will be caused by a chain of unsafe events not picked up by the system. The triviality of one such potentially disastrous cause or lapse is echoed in the words of a poem from the 17th century, later popularised by Benjamin Franklin in his Poor Richard’s Almanac: (See Figure 02)

The reasons for accidents are similar. In fact, the Toyota Corporation asks ‘why’ at least five times when determining the ‘root cause’ of a problem. Aircrew members are regularly taught to recognise unsafe patterns highlighted in past accident investigations, so as to nip them in the bud if and when they are identified.

Professor James Reason postulated the ‘Swiss Cheese’ model, which holds that in any organisation the layers of safety and security controls in place should block, or cover, one another to prevent accidents. Unfortunately, there are random holes of all sizes in these layers, like slices of Swiss cheese. Hence there is the possibility that, when latent conditions and active failures are present, the holes will align and allow a potentially dangerous situation or practice to pass through without being trapped, creating an accident or incident. (See Figure 03)

As illustrated in Reason’s ‘Swiss Cheese’ diagram, latent failures are conditions that compromise safety yet have existed, and been taken for granted, for short or long periods of time; active failures are immediate, unsafe human acts. In fact, the crew (the human element) is the last line of defence before an accident or incident occurs.
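For readers who like numbers, the model’s logic can be sketched in a few lines of code. The layer count and failure probabilities below are purely illustrative assumptions of mine, not data from Reason or from any airline, but they show why stacked defences make accidents rare without making them impossible, and how quickly the odds worsen when one layer is removed.

```python
from math import prod

# Purely illustrative figures: the assumed probability that each independent
# defensive layer fails to trap a given hazard (these are NOT real-world data).
layer_failure_probs = [0.05, 0.03, 0.02, 0.01]   # four 'slices' of cheese

def accident_probability(probs):
    """An accident requires every layer's 'hole' to line up on the same occasion."""
    return prod(probs)

print(f"All four layers in place: {accident_probability(layer_failure_probs):.1e}")
print(f"Last layer removed:       {accident_probability(layer_failure_probs[:-1]):.1e}")
```

With these made-up figures, removing just one layer, say a crew that has never been told how a system works, makes the remaining holes line up a hundred times more often.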

To illustrate these points, I shall revisit the 9-foot (3-metre) concrete wall that was erected several years ago at the Galle Road end of the runway at Ratmalana International Airport.

This wall could be regarded as a man-made hazard. The runway is 1,833 metres (6,014 feet) long, not long enough by worldwide standards for a so-called ‘international airport’. Under international certification standards, at a pre-calculated critical speed (known as the go/no-go speed, or V1) pilots are allowed only two seconds in which to make the critical decision whether to stop or continue the take-off. According to calculations by the Boeing Company, a decision to stop made any later than that two-second allowance (called ‘dither time’) will result in the aircraft reaching the end of the runway at a speed of 60 knots (69 mph).

On a rainy day, if the pilots of a medium-sized aircraft decide to abort the take-off three seconds late, they will be unable to stop within the paved runway even with maximum braking and other stopping devices such as reverse thrust, and the aircraft will ‘overrun’. Because the grass in the overrun area is wet and slippery, the brakes are rendered ineffective. Consequently, in the case of Ratmalana, the aircraft will impact the wall and perhaps catch fire, as the fuel tanks are usually full at departure.
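To see why each second of ‘dither’ matters, consider a rough back-of-the-envelope calculation. The figures below (decision speed, deceleration rate, runway remaining) are illustrative assumptions of mine, not Boeing’s certified data, and the speed is treated as constant during the delay even though a real aircraft at take-off thrust would still be accelerating, which only makes matters worse.

```python
# Rough rejected-take-off sketch; all figures are illustrative assumptions.
KT_TO_MS = 0.51444            # knots to metres per second

v1 = 130 * KT_TO_MS           # assumed go/no-go decision speed (130 kt), in m/s
decel = 3.0                   # assumed average deceleration with full braking, m/s^2
runway_remaining = 745.0      # assumed runway left at V1, chosen so that an
                              # on-time rejection just stops at the runway end

def speed_at_runway_end(delay_s):
    """Speed (kt) at the end of the paving if braking begins delay_s seconds after V1.
    Speed is treated as constant during the delay, then deceleration is constant."""
    distance_lost = v1 * delay_s                      # runway consumed while 'dithering'
    braking_distance = runway_remaining - distance_lost
    v_end_squared = v1 ** 2 - 2 * decel * braking_distance
    return max(v_end_squared, 0.0) ** 0.5 / KT_TO_MS

for delay in (0, 2, 3):
    print(f"Decision delayed {delay} s -> about {speed_at_runway_end(delay):.0f} kt at the runway end")
```

Even with these invented numbers, the pattern matches the Boeing figure quoted above: a delay of only a second or two converts a stop on the paving into an overrun at highway speeds, and at Ratmalana a solid wall is what waits at the far end.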

The delay in deciding to reject rather than continue the take-off would thus be an ‘active’ failure by the crew. The solid wall at Ratmalana, on the other hand, is a ‘latent’ condition created by the airport authorities. Although the wall is an ‘accident waiting to happen’, the Sri Lanka Air Force (SLAF), which earns ‘welfare’ money from advertisements on the wall, stubbornly refuses to replace it with a frangible fence, which would break on impact and reduce damage to an over-running aircraft and even to vehicular traffic on the Galle Road.

Returning to Reason’s ‘Swiss Cheese’ postulation, air accident investigators usually work backwards from the incident/accident, using the ‘model’ to find the root cause, unsafe acts and any failed defences. The best witnesses are, of course, the crew themselves, although they may not want to voluntarily give information if a punitive attitude is adopted by accident investigators and the authorities. It is a long-held belief that the crew involved are damned if they tell the truth and damned if they don’t. In the recent past in Sri Lanka, the Law and the Police were quick to ‘criminalise’ air accidents. Almost two years ago the accountable manager and chief engineer were arrested and remanded for failure to prevent an accident. That is another story.

The protocol should be for an independent team to do a non-punitive inquiry, and if and only if elements of negligence are highlighted in the final accident report, then the law should take its course under the direction and oversight of the Attorney-General. In short, the authorities in Sri Lanka need to get their act together and conduct themselves in a professional, impartial, fair-minded manner.

Accidents don’t only happen to “other people”, and with threats everywhere we have to learn to mitigate and manage them. While it is human to err, could we eliminate error completely? I think not. But pilots can learn to trap errors and minimise their consequences by using their team effectively, including through pre- and post-flight briefings. A question that should continue to be asked is: “Could we, as a team, have done things better?”

Will automation of some tasks help? Instructors often repeat the adage, “Fly the aircraft and don’t allow the aircraft to fly you.” Conversely, “The aircraft flies by itself; you assist it to fly.” I believe it is the level of automation that matters, depending on circumstances.

Bernard Ziegler, a French pilot and engineer who served Airbus Industrie as senior vice-president for engineering, and who was the son of Airbus founder Henri Ziegler, was well known for his evangelical zeal for computerised control systems in Airbus airplanes, commencing with the revolutionary A320 airliner. Ziegler attempted to design the human out of the flight deck in Airbus’s so-called ‘fly-by-wire’ airplanes, which in their early days were involved in a series of incidents and fatal accidents, due mainly to mismatches at the man/machine interface. So much so that the A320 was called the ‘Scare-bus’ in jest. Even today many Airbus pilots can be heard asking, in perplexed tones, “What is it doing now?” or “I have never seen this happen before.”

A more recent story is that of the Boeing 737 MAX. When I flew the basic Boeing 737-200 many years ago, our Irish instructor called it the ‘thinking man’s aircraft’, a perfect match between man and machine. Because of handling quirks introduced by the MAX’s larger, repositioned engines, an automatic system called MCAS (Manoeuvring Characteristics Augmentation System) was incorporated into the new design. If the aircraft got into an unusual and unsafe nose-up attitude, MCAS would activate automatically and lower the nose to a safer angle.

Unfortunately, during the somewhat rushed introduction of the 737 MAX to the market, many airline crews were not sufficiently trained in how to override the system if MCAS was activated by false indications from, say, a faulty sensor or computer. Worse still, some airlines’ pilots were not even told that their new airplanes were fitted with such a system, and were therefore unaware of what to do if and when MCAS activated for no apparent reason. This ignorance, through no fault of the pilots, resulted in two disastrous MAX crashes, in Indonesia and Ethiopia, with a total loss of 346 lives.
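For the technically minded, the kind of logic at issue can be caricatured in a few lines. This is emphatically not Boeing’s implementation; the threshold, trim increment and sensor values are invented for illustration, but the sketch shows how a single stuck-high angle-of-attack input can drive repeated, cumulative nose-down trim that an uninformed crew would struggle to diagnose.

```python
# A deliberate caricature of the kind of logic at issue, NOT Boeing's MCAS code;
# the threshold, trim increment and sensor values below are invented.
AOA_THRESHOLD_DEG = 14.0   # assumed angle-of-attack above which the system intervenes
TRIM_STEP_DEG = 0.6        # assumed nose-down stabiliser increment per activation

def nose_down_trim(aoa_sensor_deg, flaps_up, autopilot_off, current_trim_deg):
    """Return the new stabiliser trim if the system decides to push the nose down."""
    if flaps_up and autopilot_off and aoa_sensor_deg > AOA_THRESHOLD_DEG:
        return current_trim_deg - TRIM_STEP_DEG      # command nose-down trim
    return current_trim_deg

# The hazard: a single stuck-high angle-of-attack reading keeps re-triggering
# the logic, winding in more and more nose-down trim on an otherwise normal flight.
trim = 0.0
stuck_sensor_reading = 22.0                          # illustrative faulty value, degrees
for _ in range(5):
    trim = nose_down_trim(stuck_sensor_reading, flaps_up=True,
                          autopilot_off=True, current_trim_deg=trim)
print(f"Cumulative nose-down trim after five activations: {trim:.1f} degrees")
```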

As the ‘cold hard facts’ later emerged, it became apparent that although the MAX was arguably a totally new type of aircraft, it was certified as a variant of the existing 737, sharing a common type rating with the 737-800 generation, so as to minimise the crew training legally required on the type. Such extra training was seen as an undesirable burden for Boeing’s customer airlines, who would have had to withdraw captains and first officers from the line, incurring a loss of productivity and revenue.

Boeing’s intent was, therefore, for the training (non-productive) period to be as short as possible. But in practice corners were dangerously cut. The US regulator, the Federal Aviation Administration (FAA) – in this case the ‘human element’ – went along with the manufacturer’s sales and training programme, which ultimately resulted in incidents, accidents, and loss of life.

In summary, statistics show that although the accident rate per flight and per hour flown has fallen to a small fraction of what it once was, the number of certified air operators keeps increasing, so the absolute number of accidents can still rise. Difficult as it is to contemplate, it would not be wrong to say that the potential exists for more human-factor-based accidents to occur in future.
