Phase 2 Methodology
In Phase 2 of the study, we found and recorded evidence related to each issue identified as a problem or concern in Phase 1 (see Phase 1 Report for more information). Unlike in Phase 1, where we considered any assertion that a problem or concern exists as satisfactory grounds for recording a citation, in Phase 2 we looked for clear, documented evidence supporting or contradicting the issue statement. For example, consider the issue statement:
This statement suggests that a problem exists. In Phase 1, when we found an unsubstantiated claim that pilots do not adequately understand the automation they use, we recorded the text of that claim as a citation for Issue 105. Without some form of substantial justification, however, that claim could not be considered evidence related to Issue 105 in Phase 2. Rather, the criterion for information about Issue 105 to be considered evidence was that a reasoned, careful study of the issue by an individual or group with demonstrable expertise in aircraft automation and flight safety yielded results supporting or contradicting the Issue 105 statement. Evidence supporting the side of the issue suggested by its issue statement we call supportive evidence; evidence supporting the other side of the issue we call contradictory evidence. For example, the following could be considered evidence related to Issue 105:
Types of Studies
Our analysis consisted of reviewing documented studies potentially yielding evidence related to one or more issues:
We identified these studies in several ways. First, our Phase 1 review yielded a large bibliography of documents related to flight deck automation. Since we were familiar with these documents, having reviewed them for citations of problems and concerns, we knew which ones described studies containing evidence. These documents in turn led us to other documents describing other studies. Our own survey of experts yielded evidence, and in many cases the experts we surveyed identified documented studies on which they based their responses. Our Phase 1 incident analysis yielded many Aviation Safety Reporting System incident reports potentially containing evidence for issues. Reviews of news reports and accident report abstracts helped us identify accident reports that potentially contained evidence. Finally, by monitoring the current literature, attending professional conferences, and consulting our colleagues in the field, we supplemented our list of documented studies.
Document Review and Evidence Approval
We carefully read the documentation from each study, looking for information meeting the criterion for evidence described above. When we found it, we recorded the following in our database:
Each evidence record in the database also contained fields for analyst's notes and two approval fields. Both principal investigators (Lyall and Funk) reviewed each evidence record, checking the excerpt against the criterion for evidence, checking the excerpt and other data for completeness and correctness, correcting the record where necessary, and marking the approval fields accordingly. Consequently, each evidence record was reviewed at least three times: once by the original analyst and once each by both principal investigators. Through this process we documented what we consider to be a sound body of evidence related to the issues.
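To make the review workflow concrete, the sketch below models an evidence record with two approval fields, one per principal investigator. This is only an illustration under assumptions: the field names (issue_number, source, excerpt, analyst_notes, approved_by_lyall, approved_by_funk) are hypothetical and are not taken from the study's actual database.

    from dataclasses import dataclass

    @dataclass
    class EvidenceRecord:
        # Hypothetical sketch of an evidence record; the actual field names used
        # in the study's database are not documented here.
        issue_number: int                 # issue the evidence relates to, e.g. 105
        source: str                       # study or document the excerpt was taken from
        excerpt: str                      # text meeting the criterion for evidence
        analyst_notes: str = ""           # analyst's notes field
        approved_by_lyall: bool = False   # approval field marked by one principal investigator
        approved_by_funk: bool = False    # approval field marked by the other principal investigator

        def fully_reviewed(self) -> bool:
            # A record is complete only after both principal investigators have checked
            # the excerpt against the criterion for evidence and marked their approval
            # fields, giving at least three reviews in total (the original analyst plus
            # both principal investigators).
            return self.approved_by_lyall and self.approved_by_funk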
We found both supportive and contradictory evidence related to the flight deck automation issues, and some instances of evidence were stronger than others. Although comparing the strength of two instances of evidence related to an issue from the same pilot survey, for example, was relatively straightforward, the same could not be said for comparing the strength of evidence from, say, a pilot survey and an accident report. To try to make the assignment of strengths as objective and consistent as possible, we developed a set of strength assignment rules for each type of study. Evidence strength can range from -5, for strongly contradictory evidence, to +5, for strongly supportive evidence. Although a strength of 0 could be used for evidence that both supported and contradicted an issue, no such evidence was recorded. Rather, for studies with results supporting both sides of an issue, we recorded two instances of evidence, one supportive and one contradictory. The following table summarizes our strength assignment rules. Details are provided on the pages describing our studies.
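As an illustration only, the following sketch encodes the strength convention described above: values run from -5 (strongly contradictory) to +5 (strongly supportive), 0 is never assigned, and a study with results on both sides of an issue is recorded as two separate instances of evidence. The function name and structure are assumptions for this example, not part of the study's tooling.

    def validate_strength(strength: int) -> int:
        # Enforce the strength convention: an integer from -5 to +5, excluding 0.
        if not -5 <= strength <= 5:
            raise ValueError("strength must lie between -5 (strongly contradictory) "
                             "and +5 (strongly supportive)")
        if strength == 0:
            # A strength of 0 was never assigned; a study supporting both sides of an
            # issue is instead recorded as two instances of evidence, one with a
            # positive strength and one with a negative strength.
            raise ValueError("0 is not assigned; record two instances of evidence instead")
        return strength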
The reader should exercise some discretion in interpreting strength values. These values reflect our assessments of the extent to which an instance of evidence supports one side or the other of an issue and, as such, are relative assessments. While we found this process extremely useful in analyzing, comparing, and organizing evidence related to the issues, we make no claim that these strengths have universal validity. We urge the reader to make his/her own assessment of how much significance to attach to a particular instance of evidence and to use our numbers merely as a relative guide.
Last update: 4 June 2003
© 1997-2013 Research Integrations, Inc.