
Technical Report Documentation Page

Report No.: DOT/FAA/AM-07/11
Report Date: May 2007
Title: Relationship of Complexity Factor Ratings With Operational Errors
Authors: Pfleiderer EM, Manning CA, Goldman SM
Performing Organization: FAA Civil Aerospace Medical Institute, P.O. Box 25082, Oklahoma City, OK 73125
Sponsoring Agency: Office of Aerospace Medicine, Federal Aviation Administration, 800 Independence Ave., S.W., Washington, DC 20591
Supplementary Notes: Work was accomplished under approved task AM-BHRR-522.

Abstract: This study is an examination of the extent to which objective static sector characteristics and controller ratings of static and dynamic sector complexity factors contributed to the occurrence of operational errors (OEs) at the Indianapolis air route traffic control center (ZID). A multiple regression model relating a combination of static sector characteristics (sector altitude strata and sector size) to OE incidence accounted for a modest proportion of the variance (R = .70, R² = .49). Sector size was negatively related to OEs, indicating that smaller sectors were associated with more OEs. Sector strata were positively related to OEs, indicating that higher-altitude sectors were associated with more OEs. Principal Components Analysis (PCA) of the complexity ratings produced four components with eigenvalues > 1.00, accounting for 62% of the variance in the data. Components were used as predictors in a multiple regression analysis of the number of OEs in the ZID sectors. Only Component 1 (climbing and descending aircraft in the vicinity of major airports) and Component 2 (services provided to non-towered airports) contributed significantly to the total proportion of variance explained by the model (R = .78, R² = .61). Component 2 shared an inverse relationship with the number of OEs, indicating that the complexity related to providing services to non-towered airports is associated with fewer OEs. These results will be used to guide the choice of objective measures for further analysis of the influence of static and dynamic sector characteristics on the occurrence of OEs.

Key Words: Air Traffic Control, Operational Errors, Complexity, Static Sector Characteristics, Dynamic Sector Characteristics, Subjective Ratings
Distribution Statement: Document is available to the public through the Defense Technical Information Center, Ft. Belvoir, VA 22060; and the National Technical Information Service, Springfield, VA 22161
Security Classification (of report): Unclassified. Security Classification (of this page): Unclassified. No. of Pages: 18. Form DOT F 1700.7

EXECUTIVE SUMMARY

Extensive research has focused on the behavioral and organizational aspects of operational error (OE) occurrence. While recognizing that the human component of OEs is extremely important, it is equally important to examine contextual and environmental factors. In this study, we analyzed the extent to which controller ratings of static and dynamic sector complexity factors related to the occurrence of OEs at the Indianapolis air route traffic control center (ZID). OE information was derived from final reports for 247 errors that occurred between 1/15/2001 and 5/28/2005. Thirty-six ZID volunteers (32 controllers and 4 operational supervisors) rated the importance of 22 static and dynamic complexity factors for each sector on which they were certified.

Principal Components Analysis (PCA) of sector complexity ratings produced four components that accounted for approximately 62% of the variance in the dataset. The basic theory behind PCA is that variables cluster together into "components" that reflect underlying dimensions in the data. In this PCA, the pattern of variables associated with Component 1 described climbing and descending aircraft in the vicinity of major airports. Component 2 variables involved ATC services provided to non-towered airports. Component 3 comprised variables associated with military operations and special use airspace. Component 4 described the effects of inclement weather on ATC operations. These results were comparable in many ways to the PCA of sector complexity factors at the Atlanta ARTCC conducted by Rodgers, Mogford, and Mogford (1998). Specifically, Components 1 (major airports) and 3 (military activity and Special Use Areas [SUAs]) of the two analyses were strikingly similar, suggesting that these dimensions may be common to more than one facility.

A multiple regression analysis was conducted to examine the relationship between the four dimensions revealed by the PCA and the number of OEs. Only Component 1 (major airports) and Component 2 (non-towered airports) explained a significant amount of the variance in OEs in the ZID sectors (R = .78, R² = .61). Component 1 was positively associated with the number of OEs (i.e., higher scores were related to a higher number of OEs), whereas Component 2 had a negative relationship (higher scores were related to fewer OEs). The relationship between Component 2 and the incidence of OEs reminds us that sector complexity does not always produce a negative outcome. Indeed, a certain degree or type of complexity may actually be associated with a reduction in the number of OEs.

The fact that Component 3 failed to contribute significantly to the prediction of OEs in this analysis does not mean that military airspace or SUAs do not make a sector more difficult to work or increase the likelihood of an OE. It simply means that subjective ratings of these factors failed to predict OEs. Similarly, the inability of Component 4 to contribute significantly to the regression model may reflect the intermittent nature of this dynamic event, or it may simply be an artifact of the way the variable was measured. In other words, the presence of inclement weather might be highly correlated with the occurrence of OEs, but the component scores based on subjective ratings of variables associated with inclement weather were not. We believe it is imprudent to make strong recommendations based on the results of analysis of subjective ratings.
Practical prediction models must be calculated from objective measures because the actual characteristics of the sectors must be addressed when developing strategies to reduce OEs. However, these results represent a necessary and important step toward understanding how static and dynamic sector characteristics combine to create sector complexity, and how that complexity relates to the occurrence of OEs. Subjective information about the importance of complexity factors will guide our choice of objective measures in future analyses and may also be used to weight their importance, thus enabling us to make recommendations for reasonable, effective changes to the current system.

…important to compare complexity factors in as many environments as possible. The Sector Characteristics and Operational Errors (SCOpE) project is an extension of a study conducted by Rodgers, Mogford, and Mogford (1998) that examined the relationship between sector complexity factors and the occurrence of OEs at the Atlanta ARTCC (ZTL). Specifically, the SCOpE project was initiated to compare and contrast the results of selected analyses from the 1998 study with similar analyses conducted using data from the Indianapolis ARTCC (ZID). The methodology of developing a regression model at one facility and applying the derived regression weights to another facility has met with limited success (e.g., Laudeman et al., 1998; Masalonis et al., 2003). The advantage of the SCOpE paradigm is that it employs discrete models, thus enabling us to collect a set of general factors (i.e., factors that may reliably predict OEs at more than one facility) while documenting differences between facilities. After all, facility differences often represent important environmental and contextual elements as well.

The first phase of this project (Goldman, Manning, & Pfleiderer, 2006) compared a set of ZID static sector characteristics with those identified by Rodgers et al. (1998) at ZTL. With some exceptions, many of the static environmental and contextual factors that predicted OEs at ZTL also predicted the occurrence of OEs at ZID. In both studies, sector altitude strata, sector size, and number of major airports explained a significant proportion of the variance in the number of OEs per sector. However, some factors that were significantly correlated with OEs at ZTL failed to predict them in the ZID sample.

The second phase of the SCOpE project considers the relationship between a set of subjective static and dynamic complexity factor ratings and ZID OEs. In their analysis of sector characteristics and OEs, Rodgers et al. (1998) collected subjective complexity ratings and combined them with objective static sector characteristics in a Principal Components Analysis (PCA) to identify and describe the dimensions represented by these different types of variables. In the present study, subjective complexity ratings provided by ZID controllers will be examined in a series of discrete analyses to evaluate their relationship with OEs at ZID.
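The contrast between transferring regression weights across facilities and fitting a discrete model at each facility can be made concrete with a rough sketch. The data here are randomly generated stand-ins (not the ZTL or ZID data), and the variable names are hypothetical; the point is only the difference between applying borrowed weights and estimating weights locally.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept column; returns the weight vector."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict(X, coef):
    X1 = np.column_stack([np.ones(len(X)), X])
    return X1 @ coef

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical per-sector predictors (e.g., sector size, altitude strata) and
# OE counts for two facilities; simulated stand-ins, not the study data.
rng = np.random.default_rng(0)
X_ztl, y_ztl = rng.normal(size=(30, 2)), rng.poisson(5, 30).astype(float)
X_zid, y_zid = rng.normal(size=(37, 2)), rng.poisson(5, 37).astype(float)

# Transferred model: weights estimated at one facility, applied to the other.
w_ztl = fit_ols(X_ztl, y_ztl)
r2_transfer = r_squared(y_zid, predict(X_zid, w_ztl))

# Discrete model: weights estimated directly from the second facility's data.
w_zid = fit_ols(X_zid, y_zid)
r2_discrete = r_squared(y_zid, predict(X_zid, w_zid))

print(f"Transferred-weights R^2: {r2_transfer:.2f}")
print(f"Discrete-model R^2:      {r2_discrete:.2f}")
```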
Method

Participants

Participants were 37 volunteers from ZID. Of these, 32 were Certified Professional Controllers (CPCs), 4 were operations supervisors, and 1 was a developmental controller who had completed Radar Associate training on all sectors in his area of specialization but was not yet certified on the corresponding radar positions. To guarantee that all participants were treated fairly and ethically, the experimental protocol and materials were cleared through the FAA's institutional review board. Treatment of participants was also consistent with guidelines established by the American Psychological Association. Volunteers were assured complete anonymity and reminded of their right to terminate participation at any time.

The mean age of the volunteer participants was 42 years (SD = 6 years). Participants had been certified to control traffic for an average of 15 years (SD = 7 years), had been working at an ARTCC facility for a mean of 17 years (SD = 7 years), and had been working at their current facility for an average of 16 years (SD = 8 years). Four had previous experience in the Terminal Radar Approach Control (TRACON) environment, and six had previously worked at an Airport Traffic Control Tower (ATCT). ZID is divided into seven areas of specialization, each comprising either five or six sectors. All areas were reasonably well represented by the sample of volunteer controllers and supervisors.

Materials

Complexity Factor Questionnaire (Complexity-Q). "Complexity-Q" refers to an automated experimental protocol software program and the questionnaire it was designed to administer. The Complexity-Q program is divided into four sections (i.e., Work Experience, Tutorial, Demonstration, and Questionnaire). Each section is described separately in the following paragraphs.

The Work Experience section recorded biographical data about participants' work experience, collected information about the sectors on which they were certified, and generated a random-ordered list of sectors to be included in the questionnaire (based on input from the participant). It also randomized the presentation order of the complexity factor list and recorded the order of both lists in the output.

The Tutorial section consisted of a Microsoft PowerPoint slide presentation (automatically opened by the Complexity-Q program) that explained the purpose of the study, provided participants with an operational definition of sector complexity (i.e., "the static and dynamic characteristics that increase the level of difficulty involved in working traffic in a sector"), familiarized them with the basic structure of the questionnaire, and provided instructions about the functionality of interface elements (e.g., buttons, sliders, and bars).

The Demonstration section was an extension of the Tutorial that provided participants with an opportunity to practice using the elements described therein.
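As a minimal illustration of the Work Experience section's randomization and record-keeping, the sketch below shuffles the sector and factor presentation orders independently and writes both orders into the output record. The function and field names are assumptions for illustration, not taken from the Complexity-Q source.

```python
import json
import random
from datetime import datetime, timezone

def build_presentation_orders(certified_sectors, complexity_factors, seed=None):
    """Return independently shuffled sector and factor lists plus a record of both orders."""
    rng = random.Random(seed)
    sector_order = rng.sample(certified_sectors, k=len(certified_sectors))
    factor_order = rng.sample(complexity_factors, k=len(complexity_factors))
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sector_order": sector_order,  # order in which sectors will be presented
        "factor_order": factor_order,  # order in which factors appear within each sector
    }
    return sector_order, factor_order, record

if __name__ == "__main__":
    # Hypothetical example inputs, not actual ZID sector names.
    sectors = ["Sector 80", "Sector 81", "Sector 82"]
    factors = ["Climbing and descending traffic", "Traffic volume", "Non-towered airports"]
    s_order, f_order, rec = build_presentation_orders(sectors, factors, seed=42)
    print(json.dumps(rec, indent=2))
```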

The Complexity Factor Questionnaire followed the same basic structure for each sector on which the participants were certified. They were asked to provide a general "Complexity Rating" for a sector using a slider object with an underlying scale ranging from 0 to 100. The end points of the slider were labeled "Low" and "High," with visual anchors set at 10-point intervals. However, the slider was not restricted to these anchors, thus affording raters maximum response flexibility. Once the participants entered an overall complexity rating, they were presented sequentially with a series of 22 complexity factors and asked to indicate the level of influence each factor had on the complexity of the sector. The "Factor Rating" was made using the same slider and scale as the general complexity rating. If the participants were unsure about the meaning of a factor, a detailed description could be obtained by moving the mouse over the "Complexity Factor" label. The list of factors and their descriptions was initially derived from the 19 complexity factors identified by Mogford et al. (1994). Subject Matter Experts (SMEs) from the facility and the FAA Academy provided additional factors prior to data collection. The Complexity-Q factors and their descriptions are provided in Table 1. Note that only two extra factors were added by the SMEs, yet there are 22 factors in the list; the "mix of aircraft with different performance characteristics" and "VFR versus IFR traffic" factors were combined in the original list but were separated into two distinct factors for this study.

After participants entered factor ratings for all 22 factors, they were given the opportunity to enter any complexity factors they believed were not included in the list. If a participant added a complexity factor, the new factor was added to that particular participant's list for all subsequent sectors. Thus, whenever participants added a new complexity factor to the list, they had to rate that factor for all remaining sectors.
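The questionnaire flow just described (one overall 0-100 rating per sector, 22 factor ratings on the same scale, and participant-added factors carried forward to all remaining sectors) can be summarized in a short sketch. The record layout, callback functions, and names are assumptions for illustration, not the actual Complexity-Q implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SectorResponse:
    sector: str
    overall_complexity: float                              # general 0-100 rating for the sector
    factor_ratings: Dict[str, float] = field(default_factory=dict)

def run_questionnaire(sectors: List[str],
                      base_factors: List[str],
                      get_rating: Callable[[str, str], float],
                      get_new_factors: Callable[[str], List[str]]) -> List[SectorResponse]:
    """Collect ratings sector by sector; factors added by the participant are
    appended to that participant's list and rated for all subsequent sectors."""
    factors = list(base_factors)
    responses = []
    for sector in sectors:
        resp = SectorResponse(sector, overall_complexity=get_rating(sector, "OVERALL"))
        for factor in factors:
            resp.factor_ratings[factor] = get_rating(sector, factor)  # 0-100 slider value
        # Newly entered factors join the participant's list for the remaining sectors.
        for new_factor in get_new_factors(sector):
            if new_factor not in factors:
                factors.append(new_factor)
        responses.append(resp)
    return responses
```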
Procedure

Testing took place from 6/13/2005 to 6/17/2005 in a classroom at ZID. The Complexity-Q automated protocol was administered on laptop computers arranged around a large table to provide participants with as much privacy as possible. Participants were first given informed consent forms to read and sign. Once their written consent was obtained, they were shown the basic structure of the Complexity-Q interface, and the "Work Experience" section was brought up on the screen. Participants were requested to complete this section and then move through all subsequent sections in the order they appeared on the main interface (i.e., Tutorial, Demonstration, and Questionnaire). They were encouraged to ask questions about the interface or content of the Complexity-Q at any time during the automated protocol. Most participants completed the protocol in 40 minutes.

Measures

In addition to the subjective Complexity-Q factor ratings provided by controllers, OE and sector characteristics information was collected from data provided by facility management. The following sections describe these variables and their sources.

Operational Error (OE) Data. The OE database consisted of information extracted from electronic records of the Final Operational Error/Deviation Report (FAA Form 7210-3) for 247 OEs occurring in ZID airspace from 1/15/2001 through 5/28/2005. Variables obtained from the final OE reports included the date and time of the OE and the number of controlled aircraft in the sector at the time the OE occurred. OEs were tallied for each sector in the ZID airspace.

Sector Characteristics Data. Sector altitude strata (super high-, high-, intermediate high-, intermediate-, or low-altitude) were obtained from the facility's Adaptation Control Environmental System (ACES) sector description file and verified with sector maps. The number of associated airports and the number of airports for which the sector provided approach services were derived from sector descriptions included in the center's Standard Operational Procedures (SOPs). Staff from the facility's airspace office clarified and augmented this information.

Results and Discussion

Descriptive Statistics

The sample consisted of 181 complexity ratings provided by CPCs (n = 169) and operations supervisors (n = 12). Prior to the analysis, sector-by-sector comparisons were made between the mean ratings provided by CPCs and those provided by supervisors to determine whether the two sets of observations were comparable. On average, supervisors' ratings were less than two standard deviations from those of controllers, indicating that supervisor and CPC ratings were similar enough to constitute a homogeneous sample. In contrast, data from the single developmental controller were excluded from the analysis. Supervisors and CPCs had experience working both the Radar Associate and Radar positions, whereas the developmental controller did not. This difference in experience gave rise to a discernible pattern of rating differences, suggesting that the developmental controller was sampled from a different population. Table 2 lists descriptive statistics for the Complexity-Q ratings. Though many of the distributions approximated normality, there were some notable exceptions.
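A hedged sketch of the screening and descriptive steps described above follows: a per-sector comparison of supervisor means against the CPC ratings using a two-standard-deviation criterion, and skewness/kurtosis summaries of the factor ratings. The long-format layout, file name, and column names are assumptions, not the study's actual data files.

```python
import pandas as pd
from scipy import stats

# Long-format ratings: one row per (participant, role, sector, factor) rating on a 0-100 scale.
ratings = pd.read_csv("complexity_q_ratings.csv")  # hypothetical file name

# 1. Comparability check: is each sector's supervisor mean within 2 SD of the CPC ratings?
cpc = ratings[ratings.role == "CPC"]
sup = ratings[ratings.role == "Supervisor"]
cpc_stats = cpc.groupby("sector")["rating"].agg(["mean", "std"])
sup_means = sup.groupby("sector")["rating"].mean()
deviation_in_sd = (sup_means - cpc_stats["mean"]).abs() / cpc_stats["std"]
flagged = deviation_in_sd[deviation_in_sd >= 2.0]
print("Sectors where supervisor means deviate by 2+ SD:", list(flagged.index))

# 2. Descriptive statistics per complexity factor (the kind of summary reported in Table 2).
def describe(group):
    return pd.Series({
        "N": group.count(),
        "Mean": group.mean(),
        "SD": group.std(),
        "Skewness": stats.skew(group, bias=False),
        "Kurtosis": stats.kurtosis(group, bias=False),  # excess kurtosis
    })

table2 = ratings.groupby("factor")["rating"].apply(describe).unstack()
print(table2.round(2))
```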

Table 1. Complexity-Q Complexity Factors and Descriptions*

Climbing and descending traffic: Climbing and descending aircraft are those that are transitioning altitudes, including departure and arrival traffic, aircraft that require different altitudes to alleviate conflictions due to crossing traffic or other problems, and aircraft requesting altitude changes due to turbulence, pilot preference, etc.

Mix of aircraft with different performance characteristics: Extent to which the mix of props, turboprops, and jets impacts the controller.

VFR versus IFR traffic: Extent to which differences in controlling VFR and IFR traffic, or VFR pilots encountering IFR conditions, impact controller workload.

Number of intersecting aircraft flight paths: The number of converging flight paths due to airways, arrival routes, and frequent requests for direct routings; number of airways coming into the same NAVAID; number of routes converging on a STAR, etc.

Number of multiple functions controller must perform: Set of related tasks or services required in this sector (e.g., approach control, terminal feeder, en route, and in-trail spacing).

Traffic volume: Extent to which the number of aircraft relative to the amount of available airspace impacts the controller.

Amount of military or other special traffic: Number of special missions (e.g., military, NASA, flight inspection, Lifeguard).

Number of required procedures that must be performed (i.e., crossing restrictions in LOAs): A group of tasks, or a specific task, required by regulation or direction. A procedure mandates controller actions and must be performed regardless of other required tasks.

Amount of coordination/interfacing required: Coordination with adjacent sectors, approach control, other en route centers, military facilities, etc.

Major airports (inside and outside sector boundaries) that might influence the number of procedures used, etc.: Extent to which the controller's work is affected by the concentration of flights into one area due to the orientation of the sector relative to one or more major airports.

Extent operations are affected by weather: Presence of weather conditions that necessitate requests for deviations, route changes, etc.

Relative frequency of complex routings: Frequency of aircraft that are not on a published route structure, such as vectors or direct routings.

Special Use Areas (restricted areas, warning areas, and military operating areas) and their associated activities: Extent to which SUAs reduce the amount of airspace for non-participating aircraft, create obstructions to flight routes, increase the likelihood of conflictions due to reroutes, create situations requiring special handling and monitoring, etc.

Size of sector airspace: The volume of airspace contained within the lateral and horizontal boundaries of the sector and the extent to which it impacts the controllers' ability to handle traffic volume, deal with special conditions (e.g., weather), and resolve conflicts.

Requirement for longitudinal spacing/sequencing: Combining aircraft from several streams into one stream.

Adequacy of radio/radar coverage: Radio: extent to which insufficient radio coverage results in the use of alternate communication techniques, such as pilot-to-pilot relays. Radar: extent to which lack of radar results in use of non-radar procedures.

Amount of radio frequency congestion: Extent to which radio frequency congestion limits the controller's ability to utilize the frequency for issuing instructions to aircraft.

Traffic Management Initiatives: Extent to which Traffic Management Initiatives impact operations (e.g., miles that must be made up, vectoring required to meet initiatives).

Terrain/Obstructions: Extent to which terrain/obstructions (e.g., mountainous areas) add complexity.

Shelves/Tunnels: Extent to which shelves and/or tunnels add to the complexity of the sector (e.g., are the lateral boundaries of the sectors above or below aligned, or are they different?).

Foreign aircraft/pilots with English as a second language: Extent to which communications with foreign aircraft/pilots with English as a second language increase the difficulty of communications.

Non-towered airports: Extent to which providing service to non-towered airports increases the complexity of the sector.

* Complexity factors and descriptions adapted from Mogford et al. (1994) except where indicated.

Table 2. Complexity-Q Descriptive Statistics (N, Mean, SD, Skewness, and Kurtosis for each Complexity-Q variable; SE Skewness = .181, SE Kurtosis = .359)

The distributions of the VFR versus IFR traffic, Shelves/Tunnels, Foreign aircraft/pilots, and Non-towered airports complexity ratings all had extreme positive skews. The distribution of Terrain/Obstructions ratings was both positively skewed and leptokurtic. Such deviations are understandable, considering that these complexity factors only apply to some sectors (e.g., low-altitude sectors, sectors with shelves or tunnels). On the other hand, the Extent operations are affected by weather and Requirement for longitudinal spacing/sequencing factors were given high ratings in almost every sector. Consequently, the distributions of these variables were significantly negatively skewed.

Given the scale of some of the deviations, it is comforting to know that assumptions regarding normality are not required when PCA is used descriptively. However, it is important to remember that PCA is sensitive to the magnitudes of correlations. To the extent that normality fails, the solution may be degraded (Tabachnick & Fidell, 1989). Table 3 contains a Pearson's correlation matrix of all the Complexity-Q factors. Note that variables with non-normal distributions achieved a significant degree of association with several others in the dataset, suggesting that the deviations were not severe enough to prevent a satisfactory PCA solution.

Principal Components Analysis

PCA is a statistical technique often used to describe relationships between complex sets of variables. Components extracted by PCA contribute to our understanding of a phenomenon by consolidating variables into parsimonious groups. In orthogonal rotation, loadings represent the correlations between a variable and a component. Variables with stronger loadings are generally considered to be more representative of a component's underlying processes. Thus, we can use PCA not only to identify the number and nature of the unique dimensions described by the Complexity-Q factors but also to determine the extent to which each variable relates to them. More importantly, we can use component scores output from the PCA to analyze the relationship between the complexity factors and the number of OEs in each sector.

For the Complexity-Q analyses, the number of sectors was reduced from 40 to 37 due to the combination of some sectors (deemed appropriate by ZID personnel). With only 37 sectors in the sample and 22 complexity factors, the case-to-predictor ratio is unacceptable for most multivariate analyses. Component scores (computed by weighting variable scores using regression-like coefficients)
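The analysis pipeline this section builds toward (extract components with eigenvalues above 1.00 from the correlation matrix of the factor ratings, compute per-sector component scores, and regress the number of OEs on those scores) can be sketched as follows. This is an illustrative reconstruction under stated assumptions (unrotated components, simple standardized scores, and hypothetical file and column names), not the authors' actual analysis code or an exact replication of their rotated, regression-based component scores.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per sector, columns = mean ratings of the 22 factors,
# plus an "oe_count" column with the number of OEs tallied for that sector.
sectors = pd.read_csv("zid_sector_means.csv")
ratings = sectors.drop(columns=["sector", "oe_count"])
y = sectors["oe_count"].to_numpy(dtype=float)

# Standardize the ratings so PCA operates on the correlation matrix.
Z = (ratings - ratings.mean()) / ratings.std(ddof=1)
R = np.corrcoef(Z.to_numpy(), rowvar=False)

# Eigendecomposition of the correlation matrix; keep components with eigenvalue > 1.00.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
keep = eigvals > 1.0
print(f"Components retained: {keep.sum()}, "
      f"variance explained: {eigvals[keep].sum() / eigvals.sum():.0%}")

# Loadings (variable-component correlations, unrotated) and one score per sector per component.
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
scores = Z.to_numpy() @ eigvecs[:, keep]

# Multiple regression of OE counts on the component scores (OLS with intercept).
X = np.column_stack([np.ones(len(scores)), scores])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R = {np.sqrt(r2):.2f}, R^2 = {r2:.2f}")
print("Component weights (sign indicates direction of relationship with OEs):", beta[1:].round(2))
```

Using a handful of component scores instead of all 22 correlated ratings is what keeps the case-to-predictor ratio workable with only 37 sectors.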
