Perceived Human Factors Problems of Flightdeck Automation
This report is based upon work supported by the Federal Aviation Administration, Office of the Chief Scientific and Technical Advisor for Human Factors (AAR-100) (http://www.hf.faa.gov) under Grant No. 93-G-039. John Zalenchak and Tom McCloy were FAA technical monitors; their encouragement and assistance are greatly appreciated. The work described in this report was completed while Beth Lyall was Manager of Human Factors at America West Airlines; the involvement of the airline and its employees is acknowledged and appreciated.
Any opinions, conclusions, or recommendations expressed in this report are those of the authors and do not necessarily reflect the views of their employers (past or present) or of the Federal Aviation Administration.
Although there have been many concerns voiced about the human factors of flightdeck automation, until now there existed no comprehensive list of perceived flightdeck automation problems and concerns. The purpose of our study is to compile a comprehensive list of perceived flightdeck automation human factors problems and concerns, collect and develop evidence for and against the perceived problems and concerns, prepare a list of verified problems, and begin work on solutions to a subset of those problems.
In Phase 1 of the study we reviewed 961 source documents, including publications, accident reports, incident reports, completed questionnaires, and documentation from our own analysis. In these source documents, we found 2,428 specific citations of 114 distinct perceived problems and concerns, which we organized into a taxonomy.
This report presents a summary of our Phase 1 methodology and findings: our primary and alternate taxonomies of perceived problems and concerns, a complete list of those problems and concerns, representative citations, citation statistics, and a bibliography of the publications we reviewed.
Automation, as a concept, is the allocation of functions to machines that would otherwise be allocated to humans. The term is also used to refer to the machines which perform those functions. Flightdeck automation, therefore, consists of machines on the commercial transport aircraft flightdeck which perform functions otherwise performed by pilots. Current flightdeck automation includes autopilots, flight path management systems, electronic flight instrument systems, and warning and alerting systems.
With the advent of advanced-technology, so-called "glass cockpit" commercial transport aircraft, and the transfer of safety-critical functions away from direct human control, pilots, scientists, and aviation safety experts have expressed concerns about the safety of flightdeck automation. For example, Wiener (1989) surveyed a group of pilots of advanced-technology commercial transport aircraft and found significant concerns. Wise and his colleagues (1993) found similar concerns among pilots of advanced-technology corporate aircraft. Based on incident and accident data, Billings (1991a, 1994) cited problems with flightdeck automation and proposed a more human-centered approach to its design and use. In a series of simulator experiments exploring pilot interaction with automation, Sarter and Woods (1994b, 1995) have sought to further investigate and verify some of the concerns expressed by pilots and others.
These and other studies of flightdeck automation have advanced the state of knowledge about the human factors of flightdeck automation, but so far, all have been restricted in scope or methodology. Most surveys have focused on rather small groups of pilots and relied on anecdotal information and subjective assessments. The experimental studies have necessarily been restricted in the equipment and problems investigated. To date, no comprehensive list of verified flightdeck automation human factors problems exists. Consequently, the human factors problems of flightdeck automation are not well defined, and a comprehensive search for effective solutions cannot proceed.
The overall objectives of our research are to:
- compile a comprehensive list of perceived flightdeck automation human factors problems and concerns,
- collect and develop evidence for and against those perceived problems and concerns,
- prepare a list of verified problems, and
- begin work on solutions to a subset of those problems.
This report summarizes our Phase 1 methodology and results.
To obtain a comprehensive list of perceived problems and concerns, we used a very broad approach. To avoid the confusion caused by the many ways of expressing a particular problem or concern, we developed an initial taxonomy of distinct problems with, and concerns about, flightdeck automation design, operation, use, and misuse. Then we identified and collected hundreds of potential sources of problems and concerns and analyzed these sources for specific citations of problems or concerns. We recorded the citations and classified them according to the taxonomy, expanding and revising the taxonomy as necessary. We compiled a database of citations and summarized our results. The following sections describe these components of the study in more detail.
We identified and collected 418 documents that we believed might contain citations of flightdeck automation problems and concerns. These documents included journal and proceedings papers, technical reports, news articles from newspapers and aviation periodicals, training manuals, and personal communications. We prioritized them and analyzed 229 of them (see Appendix A). We found 1,635 citations in 150 of the documents and recorded and classified them.
From this literature and other sources, we identified aircraft accidents in which automation was a possible contributing factor, and we obtained and reviewed 13 accident reports.
In each report, we studied the investigation board's conclusions, counting as citations only those passages in which the boards claimed that automation-related factors contributed to the accidents. We found 53 citations in the reports (each report had at least one citation) and recorded and classified them, as described above.
We obtained 591 reports about incidents involving advanced technology aircraft from the Aviation Safety Reporting System (ASRS). In each report we examined the narrative section -- in which the reporter describes the incident in his/her own words -- for clear citations of, or very strong inferences about, automation factors that contributed to the incident. We found 368 citations in 246 reports and recorded and classified these citations.
Survey of experts
We prepared a questionnaire for a survey of aviation experts (see Appendix B). Along with basic demographic and flight experience questions, the questionnaire probed the respondent for flightdeck automation problems he/she knew about or concerns he/she had about flightdeck automation (see Sample Questionnaire). We sent participation invitations to aviation- and automation-related newsgroups on the Internet, to selected individuals with demonstrated expertise in automation and flight safety, and to pilots. We distributed 1,096 questionnaires and received 128 completed questionnaires, as follows: 83 from commercial transport pilots, 11 from air traffic controllers, 10 from aviation safety professionals (analysts and instructors), 12 from scientists (human factors scientists, aviation psychologists, and computer scientists), five from avionics engineers, and seven from other individuals claiming familiarity with flightdeck automation. We analyzed the responses and found 371 citations in 121 questionnaires. We classified the citations of problems and concerns as described above. Note that seven respondents (all pilots) took the time to fill out the questionnaire and either responded that they knew of no problems with automation or did not cite any problems.
We also performed a Function Allocation Issues and Tradeoffs (FAIT) analysis on a generic thrust management system. FAIT (Riley, 1989) is a methodology which identifies important system characteristics and, through a series of systematic pairwise assessments of characteristic interactions, develops a set of issues related to allocation of functions to humans and to automation. We recorded and classified potential problems raised by the FAIT analysis of the thrust management system, as well as by an earlier FAIT analysis of Flight Management Systems (Riley, undated).
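The pairwise step of a FAIT-style analysis can be sketched as follows. This is a minimal illustration only: the system characteristics listed and the issue text generated are hypothetical examples, not drawn from Riley's method or from our analyses, which rely on analyst judgment for each interaction.

```python
from itertools import combinations

# Hypothetical system characteristics of a generic thrust management
# system (illustrative only, not from Riley's or our actual analyses).
characteristics = [
    "autothrottle mode logic",
    "pilot awareness of thrust limits",
    "alerting for mode reversions",
]

def assess_interaction(a: str, b: str) -> str:
    # In a real FAIT analysis an analyst judges each pairwise interaction;
    # here every pair simply becomes a candidate function-allocation issue.
    return f"Allocation issue: interaction of '{a}' with '{b}'"

# Systematic pairwise assessment: every unordered pair is examined once.
issues = [assess_interaction(a, b) for a, b in combinations(characteristics, 2)]
```

Three characteristics yield three pairwise assessments; the method scales as n(n-1)/2 with the number of characteristics, which is why prioritizing characteristics matters in practice.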
In total, we reviewed 961 source documents and compiled 2,428 specific citations of perceived problems and concerns from 568 of these sources into a database. Each entry contains the text of the citation, a classification of the problem or concern cited (from the taxonomy), the analyst's confidence in the classification, and the analyst's explanatory notes, where appropriate. Each citation, its contextual information, and its classification were reviewed at least once by two of us (Lyall and Funk). We made revisions as necessary to ensure that the citations were accurately recorded as well as correctly and consistently classified.
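The entry structure just described can be sketched as a small record type. The field names and sample values below are illustrative assumptions, not the study's actual database schema.

```python
from dataclasses import dataclass

@dataclass
class CitationEntry:
    """One database entry, per the fields described above (names assumed)."""
    text: str          # verbatim passage citing the problem or concern
    category: str      # classification from the taxonomy
    confidence: str    # analyst's confidence in the classification
    notes: str = ""    # analyst's explanatory notes, where appropriate

# Hypothetical example entry; the citation text and category are invented.
entry = CitationEntry(
    text="Crew did not notice the autothrottle mode change.",
    category="use/understanding",
    confidence="high",
)
```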
Our review and analysis yielded 114 perceived problems and concerns (see Perceived Flightdeck Automation Problems and Concerns section on the Full Taxonomy page).
We organized these perceived problems and concerns (P/Cs) into a taxonomy consisting of three major categories of P/Cs: those having to do with the justification or reason for existence of automation, those having to do with the design of automation itself, and those having to do with the use of automation (see the Full Taxonomy). Each category is divided into subcategories.
The reader should bear in mind that the groups of P/Cs comprising the taxonomy, as well as the P/Cs themselves, are expressions of concerns raised by other authors and are merely hypotheses at this point; until they can be verified (in Phase 2), they should not be considered as assertions of fact.
In the taxonomy, for each category we present the number of citations falling into that category and the percentage of total citations that number represents. Since the taxonomy is hierarchical, higher-level categories include citations from the categories subordinate to them. Therefore, the counts do not sum to the total number of citations, and the percentages do not sum to 100.
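This count behavior can be illustrated with a toy two-level hierarchy; the category names and numbers below are invented for illustration and are not the report's actual figures.

```python
# Invented two-level taxonomy fragment: parent category -> {subcategory: count}.
taxonomy = {
    "automation use": {"understanding": 40, "attention": 25},
    "automation design": {"feedback": 35},
}

def rolled_up_counts(tree: dict) -> dict:
    """Per-category counts where each parent includes its subcategories."""
    counts = {}
    for parent, children in tree.items():
        counts.update(children)                  # leaf counts as given
        counts[parent] = sum(children.values())  # parent re-counts its children
    return counts

counts = rolled_up_counts(taxonomy)
# There are 100 leaf citations, but the category counts total 200, because
# each parent category re-counts its subcategories' citations.
```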
These statistics must be interpreted with some caution. While our sources reflect a broad cross section of the aviation and human factors/psychology communities, we cannot claim with confidence that they are a representative sample. Therefore, these statistics should not be taken as an indication of the validity or importance of problems or concerns. In addition, P/Cs receiving large numbers of citations may be relatively well known and, therefore, well guarded against, while those receiving few citations may not be well recognized in the operational community. It may be, then, that a perceived problem with few citations, or only a single citation, is in fact more dangerous than a perceived problem with many citations, simply because it has not been well recognized.
The taxonomy we presented above reflects just one way of organizing the P/Cs. Some potential users will find it logical and useful. Others may find it difficult to apply to their understanding of flightdeck automation. Our database is flexible enough to support reorganization of P/Cs from a variety of perspectives. We also prepared an alternate taxonomy, where the P/C citations are grouped according to whether they focus on the automation itself, on the pilot, on the crew, or on an organization (e.g., an airline) (see the Alternate Taxonomy).
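Such a regrouping can be sketched as follows. The citation records and the "focus" attribute are illustrative assumptions about how the database might be organized, not the study's actual schema.

```python
from collections import defaultdict

# Hypothetical citation records, each carrying both a primary-taxonomy
# category and a "focus" attribute for the alternate taxonomy.
citations = [
    {"category": "design/feedback", "focus": "automation"},
    {"category": "use/understanding", "focus": "pilot"},
    {"category": "use/coordination", "focus": "crew"},
    {"category": "use/policy", "focus": "organization"},
]

# Regroup the same records by focus instead of by primary category.
by_focus = defaultdict(list)
for citation in citations:
    by_focus[citation["focus"]].append(citation)
# The same database supports either view without re-classifying anything.
```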
The results indicate a very broad set of concerns about flightdeck automation, ranging from equipment reliability to how airline companies require their pilots to use automation. They also reveal the depth of these concerns, ranging from pilots' views that equipment failures imperil their safety to the concerns of human factors scientists that manufacturers' flightdeck automation design philosophies do not adequately consider the pilot.
The list of human factors problems and concerns and the database of citations are a useful index of what might be wrong with flightdeck automation, and where efforts should perhaps be focused to improve it.
There are some important limitations, however. First, the problems and concerns are not yet verified. At this time our taxonomy represents a list of perceived problems with flightdeck automation. Each perceived problem or concern must be treated as a hypothesis, subject to verification in Phase 2. Second, although we surveyed a very wide variety of sources to compile the list of problems and concerns, the sample can in no sense be considered a random one. Therefore, as noted above, any statistics derived from the database relating to the relative proportion of citations of a particular concern must be used with caution.
Our objective for Phase 2 is to verify the problems and concerns on our list. We will refer to the accidents and incidents in which automation was a factor, survey the results of empirical studies of flightdeck automation, and conduct simulator experiments ourselves as necessary. The end product of Phase 2 will be a list of problems supported by empirical evidence, prioritized according to the need and opportunity for solution.
Our bibliography lists all the documents reviewed for the study, exclusive of questionnaires and ASRS incident reports (see Phase 1 Bibliography).
Last update: 4 June 2003
© 1997-2013 Research Integrations, Inc.