April 24, 2024

Recently we’ve seen a number of instances in which software problems have emerged on commercial aircraft, most recently a glitch that could shut down a Boeing 787’s electrical system in flight, rendering the systems that control the aircraft useless. We’ve also heard that a member of the hacking community, who runs a computer security service, may have caused a United Airlines aircraft to change direction after taking control of it through the onboard wi-fi system.

Couple this with earlier glitches on both the A380 and the 787, described to us by industry insiders who are afraid to go public for fear of losing their jobs, and it appears the industry has a new problem to address: keeping aircraft systems safe from hackers, viruses and other threats. So far, we’ve seen gaps that don’t inspire confidence that the industry is doing all it could, from initial design and development at the airframe manufacturers to the implementation of operational security procedures at the airlines.

A recent Amtrak accident near Philadelphia raised the issue of Positive Train Control, a system that can automatically rein in a train that is, for example, speeding into a curve at twice the posted limit. Revelations regarding the Boeing Uninterruptible Autopilot, whose existence was addressed briefly in a lawsuit, remain scant, with no details about who could take control of an aircraft or under what circumstances. But if software like that could be hacked, look out. Software appears to be the Achilles’ heel of aircraft development programs, introducing new types of risk that require mitigation.

Why Have Aircraft Programs Been So Late?

Most new aircraft programs are late, with an average gestation period of six years, up from the once-reliable 48 months from announcement to delivery that the industry used to hit almost like clockwork. The mechanics of flight haven’t changed, nor have the basics of constructing an aircraft: they still need wings, engines, cockpits and basic flight controls. What has changed is that virtually every element of flight is now controlled by software running on computers rather than by mechanical devices.

“Fly-by-wire” systems, employed on all current Airbus aircraft and on new aircraft from other manufacturers such as the Boeing 787, Bombardier C Series and Embraer E2s, are essentially the glue that holds an aircraft’s flight systems together, and routinely run to several million lines of code. Just double-checking that code, not to mention testing it under virtually every scenario it could encounter, is problematic from a time and manpower standpoint.

Outsourced development doesn’t help either. I was speaking with a technical expert reviewing software for a major avionics firm, who told me that the comments explaining what was going on in a programming module were written in Russian or Hindi and had to be translated before the software could be reviewed for testing. Outsourcing can be counter-productive in terms of what it actually costs, particularly when something goes wrong or logic needs to be reviewed.

The risks from a software mistake can be as high as those from a mechanical failure. A recent un-commanded descent by a Lufthansa Airbus A321 is an example of software not being up to the task. In that incident, an angle-of-attack sensor failed in flight, producing a false warning that the nose angle was too high. The Airbus “alpha protection” software, which cannot be overridden by the pilot, decided that a descent was necessary and pushed the nose down at 4,000 feet per minute, far steeper than the roughly 1,000 feet per minute of a normal descent. The good news is that the well-trained Lufthansa pilots were able to regain control of the aircraft and return it to straight and level flight. The bad news is that the computer, for all of its complex protection software, failed to perform the basic cross-check a human would instinctively make to determine whether a sensor reading was false or something real was occurring.

An increase in angle of attack at a constant throttle setting would produce a climb and a corresponding decrease in airspeed, just as if the pilot had pulled back on the stick; to maintain altitude, the throttles would have to be advanced. A simple cross-check of altitude, airspeed and throttle against angle of attack would have shown that the sensor had gone haywire, since throttle, altitude and airspeed all remained constant. But the software apparently isn’t that sophisticated, and a failed sensor was allowed to put an aircraft at risk through mandatory software overrides of the controls.
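To make that cross-check concrete, here is a minimal sketch in C of the kind of plausibility test we have in mind. The thresholds, names and signature are our own illustrative assumptions, not Airbus logic; a certified implementation would derive its limits from the aircraft’s flight envelope and vote across redundant sensors.

```c
#include <math.h>
#include <stdbool.h>

/* Illustrative thresholds -- real values would come from the certified
   flight envelope, not from this sketch. */
#define AOA_JUMP_DEG   5.0   /* abrupt AoA rise considered suspicious    */
#define ALT_BAND_FT   50.0   /* altitude change a real pitch-up implies  */
#define SPD_BAND_KT    5.0   /* airspeed decay a real pitch-up implies   */
#define THR_BAND_PCT   2.0   /* throttle movement counted as "unchanged" */

/* Returns true when a sudden AoA increase is NOT corroborated by the
   behaviour a genuine nose-up would produce: a climb, decaying
   airspeed, or the pilot adding power. */
bool aoa_reading_suspect(double aoa_prev_deg, double aoa_now_deg,
                         double alt_change_ft, double spd_change_kt,
                         double thr_change_pct)
{
    if (aoa_now_deg - aoa_prev_deg < AOA_JUMP_DEG)
        return false;                      /* no abrupt rise to explain */

    bool climbing      = alt_change_ft >  ALT_BAND_FT;
    bool slowing       = spd_change_kt < -SPD_BAND_KT;
    bool throttle_same = fabs(thr_change_pct) < THR_BAND_PCT;

    /* If altitude, airspeed and throttle all held steady, distrust the
       sensor rather than commanding a dive. */
    return throttle_same && !climbing && !slowing;
}
```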

Airbus and Boeing differ philosophically about computers making judgments for pilots. In an Airbus, the “alpha protection” system is designed to prevent crashes due to aerodynamic stalls and is always on. In a Boeing, the pilot can override the computers and fly the airplane manually, stick and rudder. The A320, the first airliner designed around digital fly-by-wire, was intended to be easier to fly by building in protections against pilot inexperience, with the computer preventing certain actions that could become problematic.

Airbus quickly found out how difficult this could be when, at a 1988 air show at Habsheim, near Mulhouse, an A320 crashed while attempting a low-speed pass over the runway, killing three people. Apparently, the software believed the airplane was in landing configuration and overrode the pilot’s commands to fly low over the runway. With the throttles held back and nothing the pilot could do to override the computer, the aircraft settled into the woods beyond the runway. Airbus defended its design.

But despite its different philosophy, Boeing is experiencing software quality-control issues of its own on the 787. Apparently, if the aircraft’s electrical system is left powered continuously for 248 days, it can shut down. From a software perspective, it sounds to us as if a program is allocating a certain amount of memory and perhaps not releasing it after finishing its computations. Once memory fills up because it is never handed back, the system can crash, as it can no longer find enough space in which to function. We’ve often seen these “leaky memory” issues in Microsoft products, and suspect a similar event may be occurring with the Dreamliner.
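A commenter below points instead to an integer overflow, and the 248-day figure is consistent with that reading: a signed 32-bit counter incremented every hundredth of a second runs out of room after roughly 248.6 days. A quick sketch of the arithmetic (ours, not Boeing’s code):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A signed 32-bit tick counter incremented every 1/100 s.  This is
       arithmetic only, illustrating the reported failure window; it is
       not Boeing's code. */
    double ticks_per_day    = 100.0 * 60 * 60 * 24;    /* 8,640,000 */
    double days_to_overflow = INT32_MAX / ticks_per_day;

    printf("Counter overflows after %.1f days\n", days_to_overflow); /* ~248.6 */
    return 0;
}
```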

Over-Reliance on Software can also Backfire

Asiana Airlines flight 214 crashed at San Francisco in 2013, killing three people, on a beautiful day with the sun shining and great visibility. The reason for the crash, according to the NTSB, was the crew’s lack of experience manually flying the aircraft. Asiana, like several other airlines, mandated use of the autopilot for virtually the entire flight, including a coupled approach to the instrument landing system to land the aircraft automatically. But on that fateful day the airport’s ILS was out of service for maintenance (and unnecessary on a clear day), requiring the pilots to land manually. Lacking experience actually hand-flying the aircraft, and showing too much deference to the captain (a cultural problem noted at some Asian carriers), the crew, with no autopilot to bail them out, landed short of the runway.

What Needs to be Done

Creating strong, robust, hack-proof and easily checked software isn’t impossible.   But it does require the right tools.

First, the software should be built in an easy-to-program language that automates the computer-science elements and lets the engineers who know the aircraft’s design specify the operational parameters and logic for the areas they design. They know best how the aircraft should perform, and also the secondary logic for testing unusual circumstances, including situations in which a sensor fails and the data no longer make sense.

Second, that software needs to be secure and “hack-proof.” It should be an old-fashioned compiled language whose own source code is not accessible to application programmers. By eliminating “go to” commands and restricting programmers to a limited instruction set that contains the required building blocks for software applications, there can be no “back doors” inserted into a program for later access or sabotage. These are critical elements for security.
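As a sketch of what a limited instruction set could look like, consider a dispatcher that will execute only opcodes from a fixed whitelist, with no jump instruction to hide a back door behind. The opcode names and handlers below are our invention, purely for illustration:

```c
#include <stdio.h>

/* A closed set of opcodes: nothing outside this enum can run, and there
   is no GOTO/JUMP opcode, so control flow stays structured.  The names
   are illustrative, not any real avionics instruction set. */
typedef enum { OP_READ_SENSOR, OP_CHECK_LIMIT, OP_SET_OUTPUT, OP_COUNT } opcode;

typedef void (*handler_fn)(double arg);

static void read_sensor(double arg) { printf("read sensor %g\n", arg); }
static void check_limit(double arg) { printf("check limit %g\n", arg); }
static void set_output (double arg) { printf("set output %g\n", arg); }

static const handler_fn dispatch[OP_COUNT] = {
    read_sensor, check_limit, set_output
};

/* Execute one instruction; anything not in the table is refused. */
int execute(opcode op, double arg)
{
    if ((int)op < 0 || op >= OP_COUNT)
        return -1;             /* unknown opcode: reject, don't guess */
    dispatch[op](arg);
    return 0;
}

int main(void)
{
    execute(OP_READ_SENSOR, 3.2);         /* allowed */
    if (execute((opcode)99, 0.0) != 0)    /* refused */
        printf("opcode rejected\n");
    return 0;
}
```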

Third, the software should be machine-independent. Computers and technologies change rapidly over time. Try finding parts to replace a 386 computer today, unless you are working on an airplane, in which case technologies become certified and can seem frozen in time. Why not have a language that segregates the API layer, which links the program to the specific operating system and computing technology, from the program logic itself? The maker of the language could then adapt it to each new technology, making existing programs truly machine-independent. Imagine being able to replace hardware without having to reprogram all of the software. It is possible.
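What we are describing is essentially a hardware abstraction layer. Here is a minimal sketch in C, with invented names: the application logic calls only the abstract interface, and each generation of hardware supplies its own bindings, so the logic never has to be rewritten when the computer changes.

```c
#include <stdio.h>

/* Abstract platform interface: the application logic sees only this.
   The names are illustrative; a real HAL would be far richer. */
typedef struct {
    double (*read_airspeed_kt)(void);
    void   (*write_display)(const char *msg);
} platform_api;

/* Application logic written once against the interface. */
void airspeed_monitor(const platform_api *hw)
{
    if (hw->read_airspeed_kt() < 120.0)
        hw->write_display("LOW SPEED");
}

/* One hardware binding; a new computer would supply a different pair of
   functions, and the monitor above would not change at all. */
static double sim_airspeed(void)         { return 110.0; }
static void   sim_display(const char *m) { printf("%s\n", m); }

int main(void)
{
    platform_api board = { sim_airspeed, sim_display };
    airspeed_monitor(&board);   /* prints LOW SPEED with this binding */
    return 0;
}
```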

Fourth, the software needs to be highly productive. There are many repetitive tasks in programming, and smart programmers keep subroutines to accomplish them, or copy previously developed lines of code that perform the same function. The basics, including displays, colors, font sizes, graphics and messaging, are repetitive fundamentals that are easy to express as parameters, as sketched below. Standardizing the basics and enabling highly productive development would significantly shorten software development and help new aircraft programs avoid software-driven delays.
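One way to read “the basics as parameters”: a display element is declared once as data, and a shared renderer handles the presentation. Everything in this sketch, field names and values alike, is invented for illustration:

```c
#include <stdio.h>

/* Declarative description of a display element: the repetitive
   presentation details (font, color, position) become data. */
typedef struct {
    const char *label;    /* text shown to the crew       */
    const char *color;    /* e.g. "amber" for cautions    */
    int         font_pt;  /* font size in points          */
    int         row;      /* position on the display grid */
} display_element;

static const display_element caution_msgs[] = {
    { "ELEC GEN 1 FAULT", "amber", 14, 1 },
    { "FUEL LOW LEVEL",   "amber", 14, 2 },
};

/* One shared renderer draws every element the same way, so adding a
   message is one line of data rather than new display code. */
int main(void)
{
    for (size_t i = 0; i < sizeof caution_msgs / sizeof caution_msgs[0]; i++) {
        const display_element *e = &caution_msgs[i];
        printf("row %d [%s, %dpt] %s\n", e->row, e->color, e->font_pt, e->label);
    }
    return 0;
}
```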

Fifth, the software should be adaptable to any human language. With only a small set of pre-engineered commands, highly productive software could be language-independent. By simply translating the command names, software could be reviewed and debugged whether it was written in Hindi, Russian, Chinese or English. Structured properly, the command set could be translated into any language easily, making it a truly universal programming language.
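Translating command names while keeping the semantics fixed could be as simple as a lookup table that maps localized spellings onto one canonical opcode. A sketch, with a handful of invented entries:

```c
#include <string.h>

/* Map localized command spellings onto one canonical opcode, so a
   reviewer can read a program in their own language while the compiled
   semantics stay identical.  All entries are invented examples. */
typedef enum { CMD_READ, CMD_CHECK, CMD_WRITE, CMD_UNKNOWN } command;

static const struct { const char *word; command cmd; } lexicon[] = {
    { "read",      CMD_READ  },   /* English                */
    { "lire",      CMD_READ  },   /* French                 */
    { "prochitat", CMD_READ  },   /* transliterated Russian */
    { "check",     CMD_CHECK },
    { "verifier",  CMD_CHECK },
    { "write",     CMD_WRITE },
    { "ecrire",    CMD_WRITE },
};

command lookup(const char *word)
{
    for (size_t i = 0; i < sizeof lexicon / sizeof lexicon[0]; i++)
        if (strcmp(lexicon[i].word, word) == 0)
            return lexicon[i].cmd;
    return CMD_UNKNOWN;        /* not in the command set: refuse it */
}
```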

Sixth, the software should be easy to use. With the complicated API layer handled by the software itself, all users need to do is identify the data, develop the logic and design the output. Those functions are relatively easy to accomplish with the right toolkit. With a simplified instruction set, it would be possible to renew much of our technology infrastructure, and to do so cost-effectively, with designers and end-users able to create software themselves.

While this may seem quite a wish list, it is not impossible. We are aware of a technology consortium that already has these capabilities in place, and it will be introducing them to interested parties in the aviation community. Our consulting arm can provide an appropriate introduction. Productive, secure systems that can be developed rapidly, on time and at low cost are not impossible. The future looks much brighter, as new technologies are on the way.

6 thoughts on “Is Software now the Achilles’ Heel of Aircraft Design?”

  1. While it is unlikely in the BA 787 integer-overflow case that the aircraft would be powered on (ground and air combined) for xxx days, the mere fact that an overflow for ANY reason can honk up ALL systems is very disturbing. So is the failure to have some sort of voting (e.g., 2 of 3) on sensor inputs, or alarms and disconnects when rates of change fall outside “normal” rates. Add to that that airlines now frown on manual flying, and the stage seems to be set for more “HAL” events. At least some carriers, I hear, are pushing training in small aircraft for upset events like stalls. Seems to me a small two-person jet could be tweaked via fly-by-wire to respond like a large jet and used at high altitude for such training. Yes, it has been done by the military and by NASA for the space shuttle, and that was a few decades ago. Why not a consortium to provide such training?

  2. Human error is still the underlying problem. Consider the recent A400M crash, where the software was incorrectly updated…

  3. Many of the systems do have “voting” among 3-4 redundant sensors, but they can still fail on occasion. I agree that manual flying skills need to be kept sharp. I personally took a course in aerobatics to learn how to recover from unusual attitudes, and found it beneficial. A consortium could be an interesting concept, if we could get egos out of the way. Thanks for the comment.

  4. –> It should be an old fashioned compiled language, without programmers having access to the source code

    Please explain how a programmer can program without access to the source code?

  5. “We are aware of a technology consortium that already has these capabilities in place.” Who?

    I work in software outside of aviation, but the concepts/architecture could be used by the rest of the software world too, so I am curious.
