In February 2020, the Department of Defense officially adopted its Ethical Principles for Artificial Intelligence based on recommendations provided by the Defense Innovation Board. The DIB is an independent federal advisory committee that provides advice and recommendations to DoD senior leaders.
DoD leaders tasked the DIB with proposing AI Ethical Principles for the Department’s design, development, deployment, and use of AI capabilities for both combat and non-combat purposes.
The DoD’s AI Ethical Principles mandate that all DoD AI capabilities, regardless of their service component or organization, must be responsible, equitable, traceable, reliable, and governable, and must meet the same legal, ethical, and policy standards.
These principles are especially critical to the Joint Artificial Intelligence Center’s Joint Common Foundation as it works to scale AI’s impact across the DoD. Once the JCF is fully established, it will provide an array of cloud and technical services to the Mission Initiatives (MIs), along with multiple Development, Security, and Operations (DevSecOps) environments for AI product development.
As MI teams and developers create their products, they’ll need to determine how to put these principles into practice throughout the AI lifecycle, from concept to deployment and use.
That’s where Alka Patel comes in. Patel heads AI Ethics Policy at the JAIC and is leading the organization’s efforts to implement the AI Ethical Principles in tangible ways that align with the JAIC’s mission: to accelerate the DoD’s adoption and integration of AI to achieve mission impact at scale.
Patel’s job is to determine what the principles mean from an operational perspective and then help create practices and policies that can be used by the JAIC and further leveraged within the DoD. However, broad implementation is not an overnight process, especially for a large-scale organization like the DoD.
“The fact that AI, from the training data to the output, is not static means you have to be intentional in creating meaningful decision points and processes to ensure that we are building technology in a responsible manner,” Patel said.
“And this doesn’t just include technical solutions but also requires non-technical solutions, including factors such as workforce training, organizational culture, risk management practices, etc.,” said Patel. “Aligning all of these pieces takes time to do it right. Ethics is everyone’s responsibility, not just the developers or our warfighter end users, nor is it an afterthought. We need to find ways to embed it into the DNA of our organization.”
In response, the JAIC launched two major initiatives: the DoD-wide Responsible AI Subcommittee and the Responsible AI Champions pilot, both of which are led by Patel.
The subcommittee is part of a broader DoD AI Working Group and is an interdisciplinary group of individuals representing all of the major components of the Department. The Responsible AI Champions pilot is a program within the JAIC that pulls together a cross-functional group of individuals who receive training on the DoD AI Ethical Principles.
The pilot provides a forum to discuss the challenges that AI brings and, more significantly, how to operationalize the principles in each of their respective roles and within their broader functional area. The initial training was provided by Carnegie Mellon University, the Chief Ethics Officer from the Army AI Task Force, and an ethicist from the Defense Innovation Unit.
Don Bitner, JCF Strategy Chief, is a member of the Responsible AI Champions cohort. “We can’t design, build, or scale in silos,” he said. “Ethics has to be a part of every AI project coming into the Common Foundation. This program is teaching us which questions to ask and how to think well beyond just building an AI infrastructure,” Bitner said. “We can then go back and share this information to ensure technology enables muscle memory and influences the JCF ecosystem.”
In addition, Patel is a part of the JAIC’s Data Governance Council Working Group and is working with the JCF’s Data team to develop datacards that include a policy appendix. The appendix is intended to help identify any privacy and ethical concerns about the data prior to ingestion into the JCF. A similar effort to help identify ethics issues early in the product development process is also being embedded into the JCF’s Product Requirements Document, which is provided to the MI teams who use the JCF.
“These AI ethical principles are basically our ‘North Star’ for the DoD, and we understand that with this technology comes great responsibility,” Patel said. “These principles are going to be our guiding mechanism as we think about how we develop, deploy, and use AI technology, including the resulting outcomes, consequences, and impact.”
For more detailed information on the DoD’s Five Ethical Principles for Artificial Intelligence, please visit: