
Countries Leading Global Efforts for Safer Military AI
American officials told "Breaking Defense" that four countries have been selected to join the United States in leading a year-long effort to explore safety frameworks for the military use of artificial intelligence. The site reported that representatives from 60 countries met last week and chose the four: Canada, a partner in the "Five Eyes" alliance; Portugal, a NATO ally; Bahrain, a Middle Eastern ally; and Austria, which is neutral. Together with the U.S., the five will form a "working group" to gather international feedback ahead of a second global conference scheduled for next year. Representatives from the U.S. Departments of Defense and State said this represents a vital effort among governments to safeguard the use of artificial intelligence.

As AI proliferates in militaries around the world, the administration of President Joe Biden is pushing a global initiative for the "responsible military use of AI and autonomy," the cornerstone of which is the official political declaration the U.S. issued 13 months ago at the international "REAIM" conference in The Hague. Since then, 53 other countries have signed on to it. Last week, representatives from 46 of those governments (excluding the U.S.), along with 14 observer states that have not officially endorsed the declaration, met outside Washington, D.C., to discuss how to implement its ten broad principles.

**Future Wars: AI Will Determine the Strongest**

Military effectiveness in war now depends on artificial intelligence as part of broader technological advancement, which is driving the United States and China to compete for global dominance in the field in order to shape the future global landscape, according to the magazine "The National Interest." Madeline Mortelmans, acting U.S. Assistant Secretary of Defense for Strategic Affairs, said in an exclusive interview with "Breaking Defense" after the meeting: "It's really important, from the State Department and the Department of Defense, that this is not just a piece of paper. It's about state practices and how to build the capacity of states to meet those standards that we consider them committed to."

She emphasized that this does not mean "imposing American standards on other countries with very different cultures, institutions, and levels of technological maturity." Mortelmans, who delivered the closing keynote address at the conference, said: "While the United States is certainly a leader in AI, there are many countries with expertise we can benefit from." She added: "For example, our partners in Ukraine have unique experience in understanding how to apply AI and autonomy in conflict."

Mallory Stewart, Assistant Secretary of State for Arms Control, Verification, and Compliance, echoed her sentiments, stating after opening the conference with a keynote address: "We’ve said it repeatedly... we don’t monopolize good ideas."

However, Stewart told "Breaking Defense" that "the Department of Defense providing its expertise gained over more than a decade... was invaluable." To maintain momentum until the full group meets again next year (at a location yet to be determined), the countries formed three working groups to delve into the details of implementation.

**Group One: Assurance**

The U.S. and Bahrain will co-lead the "Assurance" working group, which focuses on implementing the three most technically complex principles of the declaration: that AI and automated systems be built for "explicit and well-defined uses," with "rigorous testing" and "appropriate assurances" against failure or "unintended behavior" – including, if necessary, a switch that allows humans to shut the technology down.

**Group Two: Accountability**

Canada and Portugal will co-lead the work on "Accountability," which focuses on the human dimension: ensuring military personnel are properly trained to understand the "capabilities and limitations" of the technology, have "transparent and auditable" documentation explaining its application, and "exercise appropriate care."

**Group Three: Oversight**

Meanwhile, Austria will lead the "Oversight" working group (without a co-lead, at least for now), which will address public policy issues such as requiring legal reviews for compliance with international humanitarian law, oversight by senior officials, and monitoring for and eliminating "unintentional bias."