Explainable Artificial Intelligence-Based Verification & Validation for Increasingly Autonomous Aviation Systems
Status: Completed
Start Date: 2021-04-05
End Date: 2022-04-26
Description: Artificial Intelligence (AI) algorithms, which are at the heart of emerging autonomy technologies currently revolutionizing multiple industries, including aviation, defense, and manufacturing, are generally perceived as black boxes whose decisions result from complex rules learned on the fly. Unless these decisions are explained in a human-understandable form, end users are less likely to accept them, and certification personnel are less likely to clear these systems for field operation. Explainable AI (XAI) refers to AI whose actions can be readily understood by humans. Phases I and II of this SBIR developed EXplained Process and Logic of Artificial INtelligence Decisions (EXPLAIND), a prototype tool for verification and validation (V&V) of AI-based aviation systems. We provided a proof of concept for EXPLAIND by applying it to generate reliable, human-understandable explanations for decisions made by a NASA-developed aircraft trajectory anomaly detection AI algorithm. Cognitive walkthroughs of EXPLAIND's explanation interface with controller subject matter experts demonstrated that EXPLAIND represents an important step toward user acceptance and certification of AI-based decision support tools (DSTs). In Phase II-E, we propose to extend EXPLAIND and apply it to the problem of identifying operational shortfalls related to National Airspace System (NAS) capacity underuse and to investigating their underlying causes. We will develop AI and machine learning algorithms to identify NAS operational shortfalls related to capacity underuse, and then develop explainability-enhancing algorithms to explain the underlying causes of these shortfalls. Our Phase II-E work continues the development of a commercial AI V&V tool for testing AI black boxes by providing human-interpretable explanations of the reasoning behind their decisions. The proposed SBIR research addresses an area of significant opportunity for the FAA, NASA, and NAS users.
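To make the trajectory anomaly detection use case concrete, the sketch below illustrates the general idea of attaching a feature-level explanation to an anomaly detector's decision. It is not EXPLAIND or the NASA algorithm: it assumes a generic scikit-learn IsolationForest trained on synthetic trajectory-summary features (the names altitude_dev, speed_dev, heading_dev, and path_stretch are hypothetical), and uses a simple baseline-substitution attribution to report how much each feature pushed a flight's score toward "anomalous".

# Minimal sketch of explainable anomaly detection; the detector, features,
# and attribution method are illustrative assumptions, not EXPLAIND itself.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = ["altitude_dev", "speed_dev", "heading_dev", "path_stretch"]

# Train on nominal trajectory summaries (synthetic stand-in data).
nominal = rng.normal(0.0, 1.0, size=(500, len(features)))
detector = IsolationForest(random_state=0).fit(nominal)

def explain(x, baseline):
    """Attribute the anomaly score of flight x by restoring one feature
    at a time to its nominal baseline and measuring the score change."""
    base_score = detector.decision_function(x.reshape(1, -1))[0]
    attributions = {}
    for i, name in enumerate(features):
        x_i = x.copy()
        x_i[i] = baseline[i]
        restored = detector.decision_function(x_i.reshape(1, -1))[0]
        # Large positive delta: this feature drove the score toward "anomalous".
        attributions[name] = restored - base_score
    return base_score, attributions

flight = np.array([0.2, 4.5, 0.1, 3.8])  # hypothetical anomalous flight
score, attr = explain(flight, nominal.mean(axis=0))
print(f"anomaly score: {score:+.3f}  (negative = flagged anomalous)")
for name, delta in sorted(attr.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {delta:+.3f}")

In a fielded tool, such attributions would then be translated into controller-facing language, which is the role EXPLAIND's explanation interface plays.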
Benefits: EXPLAIND can be applied to explain the decisions of NASA AI algorithms that are used for: (1) Operational shortfall identification for the NASA ATD and ATM-X projects; (2) Aviation anomaly detection and precursor identification (for NASA's SWS project); (3) Image recognition/perception in support of autonomous search & rescue missions (for SWS); (4) Configuring search & rescue drone teams (for SWS); (5) Image recognition for Earth science datasets (NASA Earth Science); and (6) UAM and UTM path planning, de-confliction, and scheduling (for ATM-X).
EXPLAIND can be applied to explain the decisions of commercial AI algorithms that are used for: (1) TBO benefits analysis and monitoring (for the FAA Office of NextGen); (2) Improved analysis of irregular operations responses for airlines and airports; (3) Aviation anomaly detection (for the FAA Office of Safety); and (4) A V&V XAI platform for AI adoption into FAA, airline, and UAS/UAM flight operator systems.
Lead Organization: ATAC