Predicting separation errors of air traffic controllers through integrated sequence analysis of multimodal behaviour indicators

Ruoxin Xiong, Yanyu Wang, Nancy J. Cooke, Sarah V. Ligda, Christopher S. Lieber, Yongming Liu

Research output: Contribution to journal › Article › peer-review

7 Scopus citations


Predicting separation errors in the daily tasks of air traffic controllers (ATCOs) is essential for implementing mitigation strategies before performance declines and for preventing loss of separation and aircraft collisions. However, three challenges impede accurate separation error forecasting: 1) the compounding relationships between many human factors and control processes require sufficient operation-process data to capture how separation errors occur and propagate within controller-in-the-loop processes; 2) previous human-factor measurement approaches disrupt controllers' daily operations because they use invasive sensors, such as electroencephalography (EEG) and electrocardiography (ECG); and 3) errors that accumulate when task and human behavior data are used to estimate system dynamics make it difficult to predict separation errors accurately with sufficient lead time for proactive control actions. This study proposes a separation error prediction framework with a long lead time (>50 s) to address these challenges, comprising 1) a multi-factorial model that characterizes the inter-relationships between task complexity, behavioral activity, cognitive load, and operational performance; 2) a multimodal data analytics approach that non-intrusively extracts task features (i.e., traffic density) from high-fidelity simulation systems and visual behavioral features (i.e., head pose, eyelid movements, and facial expressions) from ATCOs' facial videos; and 3) an encoder-decoder Long Short-Term Memory (LSTM) network that predicts separation errors far ahead of time by integrating multimodal features to reduce accumulated errors. A user study with six experienced ATCOs tested the proposed framework using the Phoenix Terminal Radar Approach Control (TRACON) simulator.
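The paper's exact architecture and hyperparameters are not reproduced here; the following is a minimal sketch of the encoder-decoder LSTM idea it describes — encode a window of fused multimodal features, then decode a multi-step sequence of error probabilities. All dimensions, layer sizes, and names are hypothetical, assuming PyTorch.

```python
import torch
import torch.nn as nn

class EncoderDecoderLSTM(nn.Module):
    """Hypothetical sketch: an encoder LSTM summarizes a window of fused
    multimodal features (task + visual behaviour); a decoder LSTM then
    rolls out a multi-step sequence of separation-error probabilities."""

    def __init__(self, n_features, hidden_size=64, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(1, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, time, n_features) -- the fused feature sequence
        _, (h, c) = self.encoder(x)
        # decode autoregressively: feed each step's prediction back in
        step = torch.zeros(x.size(0), 1, 1)
        outputs = []
        for _ in range(self.horizon):
            out, (h, c) = self.decoder(step, (h, c))
            prob = torch.sigmoid(self.head(out))  # error probability per step
            outputs.append(prob)
            step = prob
        return torch.cat(outputs, dim=1).squeeze(-1)  # (batch, horizon)

model = EncoderDecoderLSTM(n_features=6, horizon=5)
x = torch.randn(2, 30, 6)  # 2 samples, 30 time steps, 6 fused features
y = model(x)
print(y.shape)  # torch.Size([2, 5])
```

Rolling the decoder forward one step at a time, conditioned on the encoder's final state, is one standard way to obtain multi-step-ahead predictions from a single input window, which matches the long-lead-time goal stated in the abstract.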
The authors evaluated model performance with two types of metrics: 1) point-level metrics, including precision, recall, and F1-score, and 2) sequence-level metrics, including alignment accuracy and sequence similarity. The results showed that 1) the model using both task and visual behavioral features significantly improved prediction performance over the model using a single feature (eyelid movements), with an improvement of up to 26.95% in alignment accuracy for 10 s-ahead prediction; 2) the model combining task and visual behavioral features performed better than or comparably to models with different hybrid features, achieving an alignment accuracy of 82.38% for 50 s-ahead error prediction; and 3) the proposed method outperformed three baseline models – Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), and classic LSTM – by 8.21%, 3.47%, and 3.14% in alignment accuracy, respectively, for 50 s-ahead separation error prediction. These results suggest that the proposed model can effectively predict separation errors in air traffic control.
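The sequence-level metrics (alignment accuracy, sequence similarity) are defined in the paper itself and are not reconstructed here. The point-level metrics, however, are the standard precision, recall, and F1 over per-step binary error labels; a small self-contained sketch:

```python
def point_metrics(y_true, y_pred):
    """Point-level precision/recall/F1 over per-time-step binary labels
    (1 = separation error predicted/observed at that step)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# toy example: one miss (step 3) and one false alarm (step 4)
p, r, f = point_metrics([0, 1, 1, 0, 1], [0, 1, 0, 1, 1])
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

Point-level metrics score each time step independently, which is why the paper pairs them with sequence-level metrics that credit predictions landing near, but not exactly on, the true error interval.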

Original language: English (US)
Article number: 101894
Journal: Advanced Engineering Informatics
State: Published - Jan 2023


Keywords

  • Air traffic control
  • Behavior indicators
  • Loss of Separation (LoS)
  • Multi-step prediction
  • Multimodal data
  • Separation errors

ASJC Scopus subject areas

  • Artificial Intelligence
  • Information Systems


