This week, NATO Defence Ministers released the Alliance's first-ever strategy for Artificial Intelligence (AI). Recognizing that AI is changing the global defence and security environment, the strategy promotes the development and use of this technology in a responsible manner.
Below are NATO's principles for the responsible use of Artificial Intelligence in defence:
A. Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.
B. Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability.
C. Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level.
D. Reliability: AI applications will have explicit, well-defined use cases. The safety, security, and robustness of such capabilities will be subject to testing and assurance within those use cases across their entire life cycle, including through established NATO and/or national certification procedures.
E. Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour.
F. Bias Mitigation: Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.
The new strategy also aims to accelerate and mainstream AI adoption in capability development and delivery, enhancing interoperability within the Alliance. NATO also encourages Allies to protect and monitor the AI technologies they use.
The Alliance warns of malicious use of AI by threat actors and urges the adoption of measures and technologies to identify and safeguard against these threats.
NATO Allies have recognized seven high-priority technological areas for defence and security: Artificial Intelligence, quantum-enabled technologies, data and computing, autonomy, biotechnology and human enhancements, hypersonic technologies, and space.
NATO stresses the importance of addressing these technologies in an ethical way, as all of them are dual-use and highly pervasive.
“Some state and non-state actors will likely seek to exploit defects or limitations within our AI technologies. Allies and NATO must strive to protect the use of AI from such interference, manipulation, or sabotage, in line with the Reliability Principle of Responsible Use, also leveraging AI-enabled Cyber Defence applications,” concludes the announcement. “Allies and NATO should develop adequate security certification requirements for AI, such as specific threat analysis frameworks and tailored security audits for purposes of ‘stress-testing’. AI can impact critical infrastructure, capabilities and civil preparedness—including those covered by NATO’s seven resilience Baseline Requirements—creating potential vulnerabilities, such as cyberspace, that could be exploited by certain state and non-state actors.”
Pierluigi Paganini
International Editor-in-Chief
Cyber Defense Magazine