
Sources say the list will closely follow an October report from a defense advisory board.
The Defense Department will soon adopt a detailed set of rules to govern how it develops and uses artificial intelligence, officials familiar with the matter told Defense One.
A draft of the rules was released by the Defense Innovation Board, or DIB, in October as “Recommendations on the Ethical Use of Artificial Intelligence.” Sources indicated that the Department’s policy will follow the draft closely.
“The Department of Defense is in the final stages of adopting AI principles that will be implemented across the U.S. military. An announcement will be made soon with further details,” said Lt. Cmdr. Arlo Abrahamson, a spokesman for the Pentagon’s Joint Artificial Intelligence Center.
The draft recommendations emphasized human control of AI systems. “Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems,” it reads.
The DIB guidelines and the accompanying implementation documentation go well beyond the brief and largely superficial vision statements on AI issued by tech giants like Google, Facebook, and Microsoft. For instance, the recommendations describe key dangers and pitfalls in AI development, such as bias in datasets, that commercial players have only begun to grapple with.
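Dataset bias of the kind the recommendations warn about can often be surfaced with very simple checks. The sketch below is purely illustrative, using hypothetical data and the pandas library, and is not drawn from the DIB report or any DoD system; it shows one basic audit, comparing label rates across demographic groups in training data.

```python
# A minimal, illustrative check for one kind of dataset bias: whether
# training labels are skewed across a demographic group. The column
# names and the data itself are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 1, 0, 0, 0, 0, 0, 1],
})

# Positive-label rate per group; a large gap is a red flag that a model
# trained on this data may learn the skew rather than the task.
rates = df.groupby("group")["label"].mean()
print(rates)
print("Rate gap between groups:", rates.max() - rates.min())
```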
Because the Department of Defense is adopting the principles now, at the start of a broader push to move AI into far more activities, the hope is that good practices and design will become the norm in how the U.S. military uses AI, rather than an afterthought the Department must retrofit into existing ways of doing things.
The DIB also recommended that DoD rely on tools that are transparent, meaning that, unlike some so-called “black box” neural networks, a technical expert (with permission) could describe the process by which the software reached a specific decision or action.
The board also recommended that such tools be used only within an “explicit, well-defined domain of use,” a provision intended to keep software developed for noncombat activities from finding its way into lethal operations.
Heather Roff, who helped draft the DIB recommendations, said, “I’m very pleased to see that [Defense Secretary Mark Esper] has adopted the principles and is implementing them department-wide, and securing our national security through responsible research and innovation in artificial intelligence.”
Other ethicists and academics in artificial intelligence and weapons applauded the news of the DoD’s adoption but added that there was further to go, and that risks and concerns about military use of AI would remain.
Rebecca Crootof, a law professor who specializes in technology and armed conflict at the University of Richmond School of Law, said, “I have little doubt that the process of working towards these principles was influential within the DoD. In learning about the different kinds of risks posed by AI, in working through how they might manifest in various military scenarios, and in thinking about what policies might minimize their manifestation or impact, participants in this process undoubtedly internalized why having and abiding by ethical principles for AI is critically important.” But Crootof said DoD still needs to follow the map provided to actually implement the principles.
She also said she hoped that DoD’s example would help establish international norms for the military use of AI.
“While it’s great that the DIB principles affirm the import of international law, there are a number of areas where it is still unclear what international law requires for AI systems or weapon systems with increasingly autonomous capabilities,” she said.