By Brett Wilkins

The US military is adopting new principles for what it calls the ethical use of artificial intelligence technology in warfare, sparking widespread skepticism from ethics advocates and other critics. 

Defense Secretary Mark Esper announced the new AI ethics guidelines in a memo released on February 24, in which he outlined five principles he said will drive Pentagon policy going forward. “The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” Esper said. “AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior,” he continued, adding that “the adoption of AI ethical principles will enhance the department’s commitment to upholding the highest ethical standards.”

The Five Principles 

Esper said the following five principles “will apply to both combat and non-combat functions and assist the US military in upholding legal, ethical and policy commitments in the field of AI.” The principles are: 

  • Responsibility: Pentagon personnel “will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.” 
  • Equitability: Pentagon planners “will take deliberate steps to minimize unintended bias in AI capabilities.”
  • Traceability: Defense Department AI capabilities “will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.” 
  • Reliability: Military AI capabilities “will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.” 
  • Governability: The Defense Department “will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

The new principles are the product of a 15-month study conducted by the Defense Innovation Board (DIB), an organization set up in 2016 and tasked with incorporating the technological innovation and best practices of Silicon Valley technology companies into the US military. The board is chaired by former Google CEO Eric Schmidt, with prominent members including Amazon founder and CEO Jeff Bezos, LinkedIn co-founder and venture capitalist Reid Hoffman, Code for America founder and director Jennifer Pahlka, retired four-star admiral William McRaven and astrophysicist Neil deGrasse Tyson. Last October, the DIB released its recommendations for the ethical use of military AI, proposing that AI systems should always be controlled by humans and deployed “responsibly” and in keeping with the other principles outlined in the new ethics policy.

DIB chairman Schmidt praised Esper’s “leadership on AI and his decision to issue AI principles,” which he said “demonstrates… to countries around the world that the US… is committed to ethics and will play a leadership role in ensuring democracies adopt emerging technology responsibly.” 

“Our intentions are clear: We will do what it takes to ensure that the US military lives up to our nation’s ethical values while maintaining and strengthening America’s technological advantage,” said Lt. Gen. Jack Shanahan, head of the Pentagon’s Joint Artificial Intelligence Center (JAIC), which last week announced it was hiring AI ethics specialist Alka Patel to lead the implementation of the new ethical principles. Shanahan said the military “owes it to the American people and our men and women in uniform to adopt AI principles that reflect our nation’s values of a free and open society,” adding that such a commitment “runs in stark contrast to Russia and China, whose use of AI tech for military purposes raises serious concern about human rights, ethics and international norms.”

Can ‘Ethical’ Military AI Even Exist? 

However, critics note that the Pentagon’s very use of the term “ethical AI” raises red flags. The US military is, after all, the armed force that has killed more foreign civilians than any other on the planet since waging the world’s first and only nuclear war against Japanese cities in 1945. It has killed an estimated 500,000 to perhaps more than 2 million people in more than half a dozen nations over the course of its unending war against terrorism, now in its 19th year. US forces have also committed widespread human rights abuses, including torture, during that war. Meanwhile, the administrations of presidents George W. Bush, Barack Obama and Donald Trump, the latter of whom vowed to “bomb the shit out of” Islamist militants and “take out their families,” have used drones to assassinate actual and suspected terrorists, a foreign military leader, many hundreds of civilians, a US citizen killed without charge or trial and, in separate strikes, an innocent American teenager and his 8-year-old sister.

Now the military is weaponizing AI: drone footage will be scanned with machine learning to help determine which people, buildings and vehicles will be bombed, and the Pentagon wants the world to believe it can successfully tackle the kind of AI bias that produces discriminatory outcomes in peacetime and deadly ones in war. DIB members are all in. Pahlka of Code for America defends her complicity in the death and destruction of war with the muddled analogy of “sharp” versus “dull” knives. Confusingly, it is the “sharp knives” of improved tools, technology and techniques that will purportedly save lives when used in place of the “dull knives” of the outdated tools they replace.

“Having poor tools doesn’t make us fight less, it makes us fight badly,” argues Pahlka, seemingly oblivious to the larger questions of why we fight at all, and whether it is appropriate for Silicon Valley to make itself complicit in the crimes of a country that has been at war, attacking or occupying other countries, for 238 of its 243 years in existence.

Dissent from Outside — and from Within 

Prominent ethics advocates are wary of Pentagon promises. “I worry that the principles are a bit of an ethics-washing project,” Lucy Suchman, an anthropologist who studies the role of AI in warfare, told the Associated Press. “The word ‘appropriate’ is open to a lot of interpretations.” Others noted the dilemmas inherent in tech company participation in the machinery of death and destruction. David Gershgorn, senior writer at OneZero who specializes in AI, tweeted that the Pentagon’s list of AI principles “seems to be missing, ‘don’t kill somebody with a robot.’” 

The Campaign to Stop Killer Robots, a coalition of nongovernmental organizations — including Ethics In Tech — advocating a preemptive ban on the development of lethal autonomous weapons, has long called on tech companies to publicly endorse such a ban. “Doing so would support the rapidly-expanding international effort to ensure the decision to take human life is never delegated to a machine in warfare or in policing and other circumstances,” the organization said in a May 2018 statement. 

At the time, thousands of Google employees signed an open letter demanding that Sundar Pichai, CEO of parent company Alphabet, pledge that neither the firm nor its contractors would ever develop “warfare technology.” The company then released its own set of ethical AI principles, including a promise not to develop AI-driven weapons, while announcing that “Google, including Google Cloud, will not support the use of AI for weaponized systems.” This meant that Google would no longer be participating in Project Maven, the Pentagon initiative to militarize machine learning that had caused so much employee alarm.

The news of Google’s withdrawal from Project Maven came amid a battle between Amazon and Microsoft over a $10 billion contract to build and deploy the Joint Enterprise Defense Infrastructure (JEDI, yes, really), a cloud computing project that will store and process vast quantities of classified data so that the US military can use artificial intelligence to greatly improve its war planning and fighting capabilities. Microsoft was awarded the contract last October. However, it has not been able to launch the decade-long project because Amazon sued the Pentagon, claiming that President Donald Trump’s personal animosity toward the company and Bezos prejudiced its chances of winning the contract.

Beyond the Battlefield

Microsoft has also come under fire, in a context short of outright warfare, for betraying its stated AI ethics principles by investing in an Israeli company that produces facial recognition technology used to monitor and control Palestinians living in the illegally occupied West Bank. In November 2018 Microsoft announced a set of six AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The company also partnered with the Vatican to promote AI “at the service of human life,” while at the same time essentially investing in the life-crushing Israeli occupation, apartheid and ethnic cleansing of Palestine.

Concerns have also been raised over the unethical use of AI in domestic surveillance. Last June, the American Civil Liberties Union (ACLU) published a report warning that the installation of tens of millions of security cameras equipped with AI technology will lead to abuses including over-enforcement and discrimination. The report recommended a ban on mass government surveillance and the implementation of “policies that will prevent private-sector deployments from becoming equally oppressive.” 

Some localities, first and most notably San Francisco, have passed laws or other regulations banning or restricting government agencies from using facial recognition and other AI technologies on their residents.
