Navigating AI in Aviation: A Roadmap for Risk and Security Management Professionals

Muhammad Usman, CISM, CDPSE, CISSP, ISO/IEC 27001 LI, ITIL v4 MP, AWS CSA-A
Date Published: 23 December 2024

The impact of artificial intelligence (AI) in recent times is undeniable, and the almost daily developments in AI continue to influence and revolutionize industries worldwide. The aviation industry is no exception. However, integrating AI into aviation comes with unique governance, risk and compliance (GRC) challenges. The Roadmap for Artificial Intelligence Safety Assurance, recently published by the FAA, recognizes the potential impact of AI on aviation and emphasizes the need for safety assurance, industry collaboration and incremental implementation. This roadmap, combined with other international frameworks, offers a global foundation for managing AI risks in aviation.

FAA’s AI Roadmap: Key Insights

The FAA's Roadmap for Artificial Intelligence Safety Assurance lays out a strategy for integrating AI systems safely into aviation. While AI demonstrates the potential for enhanced operational efficiency, predictive maintenance and even autonomous flight, these benefits come with significant security and compliance risks. The following aspects of the roadmap resonate with risk and security management:

  • Risk-based safety assurance: The FAA roadmap emphasizes that AI systems must undergo strict safety assurance processes before deployment.
  • Incremental deployment: The FAA roadmap advocates for phased AI integration, starting with lower-risk applications. This approach allows for real-world data collection, testing and iterative improvements while managing exposure to risks.
  • Managing learning AI: Differentiating between learned AI (static) and learning AI (adaptive) poses a significant challenge in AI risk management. The FAA roadmap calls for continuous monitoring and assurance, especially for learning AI, echoing the need for dynamic risk assessment protocols like those recommended in NIST-AI-600-1 for managing generative AI models.

Other Global Frameworks Shaping AI Risk Management in the Aviation Space

Beyond the FAA roadmap and well-established frameworks like the ISO/IEC 27000 family of standards, ISO 31000 and the EU’s General Data Protection Regulation (GDPR), several emerging industry standards and frameworks offer additional guidance for managing the risks posed by AI systems, particularly in high-stakes environments like aviation.

1. EASA Artificial Intelligence Roadmap 2.0

The European Union Aviation Safety Agency’s (EASA) Artificial Intelligence Roadmap 2.0 outlines Europe’s strategic approach to integrating AI into aviation, with a strong emphasis on safety, certification and regulation. Like the FAA’s roadmap, EASA’s plan emphasizes risk-based assessments, collaboration with industry stakeholders and incremental AI deployment to ensure that AI technologies are safely and effectively adopted in the aviation sector. The EASA roadmap provides a comprehensive framework that encourages global cooperation and safety-driven AI integration across the aviation industry.

2. ISO/IEC 42001

ISO/IEC 42001 sets the standard for AI management systems and emphasizes a risk-based approach for developing and managing AI technologies. ISO/IEC 42001 complements the FAA AI roadmap and promotes the need for continuous risk assessments. For aviation risk professionals, this framework provides a structured approach to managing AI risks throughout the technology lifecycle.

3. ISO/IEC 23894

ISO/IEC 23894 provides a comprehensive framework for AI risk management, especially in high-risk environments like aviation. It focuses on identifying, assessing and mitigating risks across the entire AI lifecycle, from development to deployment and operation. It complements the FAA and EASA roadmaps by providing structured risk management processes that help ensure AI systems meet both safety and ethical requirements and are designed not to make discriminatory decisions or compromise passenger safety. Combined with ISO/IEC 42001 and the NIST AI RMF, this standard creates a robust risk management system for AI in aviation, and it is especially valuable for ensuring that AI systems treat all inputs fairly, particularly in life-and-death situations.

4. NIST AI Risk Management Framework (AI RMF)

The NIST AI RMF offers a systematic framework for recognizing, assessing and reducing risks in AI systems, focusing not only on performance but also on security and governance. The NIST AI RMF’s emphasis on transparency, accountability and fairness directly aligns with the FAA roadmap, which calls for clear governance structures to manage AI risks in aviation.

5. NIST-AI-600-1

The rise of generative AI introduces new risks, particularly around model integrity, intellectual property and data security. NIST-AI-600-1 focuses on mitigating risks specific to generative AI, including data integrity and model manipulation. This is critical for aviation, where AI systems like predictive maintenance or operational analytics rely on secure, accurate data.

Practical Steps for Risk and Security Management Professionals

Incorporating AI in aviation is far from straightforward: because human safety is at stake, it involves navigating a constantly evolving landscape of risks and, at times, demanding regulatory requirements. For risk and security professionals, the key task is to align AI technologies with operational safety and evolving regulatory requirements. Here is a non-exhaustive list of steps for leveraging the aforementioned global frameworks to effectively govern AI integration in their systems:

1. Implement Rigorous Risk Assessment Processes

Frameworks like ISO/IEC 42001, ISO/IEC 23894 and the NIST AI RMF can underpin comprehensive risk assessments of AI systems. Continuous monitoring is critical, particularly for learning AI models that can evolve in unpredictable ways and potentially introduce new security vulnerabilities.
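To make the output of such an assessment concrete, the sketch below scores hypothetical AI-related risks on a 5x5 likelihood-by-impact matrix, loosely in the spirit of ISO/IEC 23894 and ISO 31000. The risk names, scores and rating thresholds are illustrative assumptions, not values taken from any standard.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in an illustrative AI risk register."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (catastrophic)

    @property
    def score(self) -> int:
        # Classic likelihood x impact scoring
        return self.likelihood * self.impact

    @property
    def rating(self) -> str:
        # Example thresholds; an organization would calibrate its own
        if self.score >= 15:
            return "HIGH"    # escalate before deployment
        if self.score >= 8:
            return "MEDIUM"  # mitigate and monitor
        return "LOW"         # accept with periodic review

# Hypothetical register for an aviation AI deployment
register = [
    AIRisk("Training data poisoning", likelihood=2, impact=5),
    AIRisk("Model drift in learning AI", likelihood=4, impact=4),
    AIRisk("Biased maintenance predictions", likelihood=3, impact=3),
]

for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score={risk.score}, rating={risk.rating}")
```

A real register would also capture owners, mitigations and review dates; the point here is only that the frameworks above expect risks to be scored, ranked and revisited continuously, not assessed once at deployment.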

2. Ensure Compliance with Global Standards

AI management and governance policies should align with the FAA and EASA roadmaps, as well as international regulatory frameworks like the EU AI Act, especially since the EU AI Act has extraterritorial reach (it also applies to non-EU organizations whose AI systems affect EU citizens). This ensures that AI systems deployed by airlines and other organizations in the aviation space meet the highest global safety and compliance benchmarks. Many aviation AI systems fall into the high-risk category under the EU AI Act, which necessitates strong oversight and transparency under the Act.

3. Focus on AI Security

AI systems continue to be targets of cyber threats, and in sectors like aviation, protecting data and its integrity is critical. Security professionals must ensure that AI models and their data pipelines remain safe against tampering, data poisoning and unauthorized access. This is important to reduce the risk of data manipulation or corruption, in line with NIST-AI-600-1 guidance.
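One basic building block of pipeline integrity is verifying that approved training data has not been modified before it is used. The sketch below records SHA-256 digests in a manifest and flags any file whose digest later changes; the file name and contents are illustrative placeholders, and a production pipeline would add signing, access control and provenance tracking on top.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a data blob."""
    return hashlib.sha256(data).hexdigest()

def verify_manifest(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Return the names of files whose current digest no longer matches the manifest."""
    return [name for name, blob in files.items()
            if manifest.get(name) != sha256_of(blob)]

# Record digests when the dataset is approved...
approved = {"sensor_log.csv": b"engine_temp,vibration\n612,0.03\n"}
manifest = {name: sha256_of(blob) for name, blob in approved.items()}

# ...then detect any later modification before retraining.
tampered = dict(approved)
tampered["sensor_log.csv"] = b"engine_temp,vibration\n612,0.99\n"
print(verify_manifest(tampered, manifest))  # the modified file is flagged
```

Checks like this do not stop poisoning at the source, but they make silent tampering between data approval and model training detectable, which is the kind of integrity control NIST-AI-600-1 points toward.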

4. Adopt an Incremental Approach

AI should be introduced incrementally, starting with low-risk applications like predictive maintenance, data analysis or passenger services before moving to higher-risk areas such as autonomous flight systems or cockpit assistance systems, as the FAA also recommends in its roadmap. Pilot programs that test AI technologies in controlled environments allow for controlled experimentation and assurance before full-scale deployment.

5. Collaborate with Stakeholders

As AI becomes a truly global technology, its risks increasingly transcend borders. This makes it critical to engage with other industry stakeholders, such as airlines, technology vendors and other service providers, and industry bodies, to ensure a collaborative approach to AI integration in aviation systems. This can help in addressing shared challenges and leveraging collective expertise for a successful AI implementation.

6. Training and Education

Tailored training sessions and workshops that help stakeholders understand AI governance principles and best practices will foster a well-informed workforce capable of implementing AI responsibly in aviation systems while maintaining long-term regulatory compliance.

7. Continuous Monitoring and Feedback

AI systems, especially learning models, require ongoing oversight to ensure they function as intended. Establish continuous monitoring mechanisms that use real-world data to assess performance and adjust as needed. This aligns with the FAA roadmap’s adaptive safety assurance and ISO/IEC 42001’s focus on continuous risk management. Regular feedback helps detect deviations early, enables timely interventions and ensures safety, security and operational compliance.
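As a minimal sketch of what such monitoring can look like, the example below compares recent model prediction scores against a baseline window and raises an alert when the mean shifts by more than a chosen number of baseline standard deviations. The 3-sigma threshold and the score values are illustrative assumptions; real deployments would use richer drift statistics and domain-specific alerting.

```python
from statistics import mean, stdev

def drift_alert(baseline: list[float], recent: list[float], sigmas: float = 3.0) -> bool:
    """Alert when the recent mean drifts beyond `sigmas` baseline standard deviations."""
    mu, sd = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > sigmas * sd

# Illustrative prediction scores from a deployed model
baseline_scores = [0.48, 0.51, 0.50, 0.49, 0.52, 0.50, 0.47, 0.53]
stable_scores = [0.50, 0.49, 0.51, 0.50]
drifted_scores = [0.71, 0.74, 0.69, 0.72]

print(drift_alert(baseline_scores, stable_scores))   # False: behavior unchanged
print(drift_alert(baseline_scores, drifted_scores))  # True: investigate before trusting outputs
```

The value of even a simple check like this is that it turns "continuous monitoring" from a policy statement into an automated signal that can trigger the timely interventions the roadmap calls for.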

8. Compliance and Audits

Regular audits are important not only for maintaining AI systems’ compliance with industry-standard safety, security and performance benchmarks, but also for identifying gaps and ensuring that AI systems continue to operate within regulatory boundaries.

9. Stay Updated on AI Regulations and Standards

AI regulations and standards continue to rapidly evolve, making it critical to stay informed about relevant updates from regional organizations like the FAA and EASA, and global aviation organizations like ICAO and IATA. Regular review of the updates to various AI frameworks ensures that AI governance stays in line with global mandates and best practices. Staying updated also helps risk teams in adapting to the new compliance requirements more efficiently.

Risk and Security Professionals Play a Crucial Role

The AI roadmaps from the FAA and EASA, when combined with other global frameworks like ISO/IEC 42001, ISO/IEC 23894, the NIST AI RMF and NIST-AI-600-1, offer a strong foundation for safely integrating AI into aviation. Risk and security management professionals in aviation and beyond are crucial in ensuring that AI technologies improve operational efficiency while minimizing associated risks. By aligning with these frameworks and keeping up with evolving standards, organizations can harness the advantages of AI while upholding the aviation industry’s high safety and security benchmarks.
