Within the framework of the National Defense Strategy, which supports the research and use of artificial intelligence as a warfighting tool, the Defense Department's AI Strategy calls for DoD to take the lead in developing ethical AI guidelines.
In July 2018, DoD leadership tasked the Defense Innovation Board to propose a set of ethics principles for consideration. Since then, the DIB has conducted an extensive study that included numerous discussions with experts in industry, government and academia.
The board also led multiple public listening sessions, interviewed more than 100 stakeholders and held monthly meetings of an informal DoD working group in which representatives of partner nations also participated. In addition, it conducted two practical exercises with leaders and subject matter experts from DoD, the intelligence community and academia.
Board members held a public meeting [Oct. 31] at Georgetown University in Washington to discuss and vote on their recommended AI ethics principles. These five principles received unanimous approval:
1. Responsible. Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use and outcomes of DoD AI systems.
2. Equitable. DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or noncombat AI systems that would inadvertently cause harm to persons.
3. Traceable. DoD's AI engineering discipline should be sufficiently advanced that technical experts possess an appropriate understanding of the technology, development processes and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedures and documentation.
4. Reliable. DoD AI systems should have an explicit, well-defined domain of use, and the safety, security and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
5. Governable. DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and to allow for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.
Defense Innovation Board members made clear that certain aspects of how DoD might develop and deploy AI are already covered by the department's existing ethics frameworks, which are based on the U.S. Constitution, Title 10 of the U.S. Code, the Law of War, existing international treaties and longstanding DoD norms and values.
Their proposed principles, the board's members explained, are meant to address only new ethical AI questions that DoD's existing ethics framework may not cover.
"The valuable insights from the DIB are the product of 15 months of outreach to commercial industry, the government, academia and the American public," said Air Force Lt. Gen. John N.T. "Jack" Shanahan, director of the Joint Artificial Intelligence Center. "The DIB's recommendations will help enhance DoD's commitment to upholding the highest ethical standards as outlined in the DoD AI Strategy, while embracing the U.S. military's strong history of applying rigorous testing and fielding standards for technology innovations."
The Defense Innovation Board is an independent federal advisory committee. Its members are leaders in AI and related fields from around the United States, working in industry, academia and think tanks. The board conducts extensive studies on AI and other research topics and presents its findings to DoD leaders to inform their decisions.
Follow Defense Department news at www.defense.gov/, https://www.facebook.com/DeptofDefense or https://twitter.com/DeptofDefense.