The U.S. military, algorithmic warfare, and big tech

We learned this week that the Department of Defense is using facial recognition at scale, and Secretary of Defense Mark Esper said he believes China is selling lethal autonomous drones. Amid all that, you may have missed Joint Artificial Intelligence Center (JAIC) director Lieutenant General Jack Shanahan — who is charged with modernizing and guiding the Pentagon's artificial intelligence efforts — talking about a future of algorithmic warfare, one that could be entirely different from the wars the U.S. has fought in past decades.

Algorithmic warfare is built on the assumption that actions will take place faster than humans can make decisions. Waging it would require some reliance on AI systems, Shanahan says, along with rigorous testing and evaluation before AI is used in the field to ensure it doesn’t “take on a life of its own, so to speak.”

“We are going to be shocked by the speed, the chaos, the bloodiness, and the friction of a future fight in which this will be playing out, maybe in microseconds at times. How do we envision that fight happening? It has to be algorithm against algorithm,” Shanahan said during a conversation with former Google CEO Eric Schmidt and Google VP of global affairs Kent Walker. “If we’re trying to do this by humans against machines, and the other side has the machines and the algorithms and we don’t, we’re at an unacceptably high risk of losing that conflict.”

The three spoke Tuesday in Washington, D.C. at a conference held by the National Security Commission on Artificial Intelligence (NSCAI), which took place a day after the commission delivered its first report to Congress, produced with help from some of the biggest names in tech and AI — like Microsoft Research director Eric Horvitz, AWS CEO Andy Jassy, and Google Cloud chief scientist Andrew Moore. The final report will be released in October 2020.

The Pentagon began its venture into algorithmic warfare and a range of AI projects with Project Maven, an initiative to work with tech companies like Google and startups like Clarifai. It was created two years ago, with Shanahan as director, following a recommendation by Schmidt and the Defense Innovation Board.

In a world of algorithmic warfare, Shanahan says the Pentagon needs to bring AI to service members at every level of the military so people with first-hand knowledge of problems can apply AI to achieve military goals. A decentralized approach to development, experimentation, and innovation will be accompanied by higher risk, but could be essential to winning battles and wars, he said.

Algorithmic warfare is included in the NSCAI draft report, which minces no words about the importance of AI to U.S. national security and states unequivocally that the “development of AI will shape the future of power.”

“The convergence of the artificial intelligence revolution and the reemergence of great power competition must focus the American mind. These two factors threaten the United States’ role as the world’s engine of innovation and American military superiority,” the report reads. “We are in a strategic competition. AI will be at the center. The future of our national security and economy are at stake.”

The report also acknowledges that in the age of AI the world may experience an erosion of civil liberties and acceleration of cyber attacks. It also references China more than 50 times, noting the intertwined nature of Chinese and U.S. AI ecosystems today, and China’s goal to be a global AI leader by 2030.

The NSCAI report also chooses to focus on narrow artificial intelligence, rather than artificial general intelligence (AGI), which doesn’t exist yet.

“When we might see the advent of AGI is widely debated. Rather than focusing on AGI in the near term, the Commission supports responsibly dealing with more ‘narrow’ AI-enabled systems,” the report reads.

Last week, the Defense Innovation Board (DIB) released its AI ethics principles recommendations for the Department of Defense, a document created with contributions from LinkedIn cofounder Reid Hoffman, MIT CSAIL director Daniela Rus, and senior officials from Facebook, Google, and Microsoft. The DoD and JAIC will now consider which principles and recommendations to adopt going forward.

Former Google CEO Eric Schmidt chaired both the NSCAI and the DIB and oversaw the creation of both reports released in recent days. He was joined on the NSCAI board by Horvitz, Jassy, and Moore, along with former Deputy Secretary of Defense Robert Work.

Google, Project Maven, and tech companies working with the Pentagon

At the conference on Tuesday, Schmidt, Shanahan, and Walker revisited the controversy at Google over Project Maven. When Google’s participation in the project became public in spring 2018, thousands of employees signed an open letter to protest Google’s involvement.

In the months following the employee unrest, Google adopted its own set of AI principles, which include a ban on creating autonomous weaponry.

Google also pledged to end its Project Maven contract by the end of 2019.

“It’s been frustrating to hear concerns around our commitment to national security and defense,” Walker said, noting work Google is doing with the JAIC on issues like cybersecurity and health care, and adding that Google will continue to work with the Department of Defense. “This is a shared responsibility to get this right,” he said.

A view of military applications of AI as a shared responsibility is critical to U.S. national security, Lt. Gen. Shanahan said, acknowledging that mistrust between the military and industry flared up during the Maven episode at Google.

The Maven computer vision work Google did was for unarmed drones, Shanahan said, but the episode made clear the concerns tech workers may have about working with the military and the need to clearly communicate objectives.

But, Shanahan said, the military is in a state of perpetual catch-up, and bonds between government, industry, and academia must be strengthened if the U.S. is to maintain economic and military supremacy.

The NSCAI report also references a need for people in academia and business to “reconceive their responsibilities for the health of our democracy and the security of our nation.”

“No matter where you stand with respect to the government’s future use of AI-enabled technologies, I submit that we can never attain the vision outlined in the Commission’s interim report without industry and academia together in an equal partnership. There’s too much at stake to do otherwise,” Shanahan said.

Autonomous weapons

Heather Roff is a senior research analyst at Johns Hopkins University and former research scientist at Google’s DeepMind. She was the primary author of the DIB report and an ethics advisor for the creation of the NSCAI report.

She thinks media coverage of the DIB report sensationalized the use of autonomous weaponry but generally failed to recognize the effort to consider applications of AI across the military as a whole, in areas like logistics, planning, cybersecurity, and auditing. The U.S. military has the largest budget in the world and is one of the largest employers in the United States.

The draft version of the NSCAI report says autonomous weaponry can be useful but adds that the commission intends to address ethical concerns in the coming year, Roff said.

People concerned about the use of autonomous weapons should recognize that, despite ample funding, the military has much bigger structural challenges to address today, Roff said. Among the issues raised in the NSCAI report: service members can’t even use open source software or download the GitHub client.

“The only people doing serious work on AGI right now are DeepMind and OpenAI, maybe a little Google Brain, but the department doesn’t have the computational infrastructure to do what OpenAI and DeepMind are doing. They don’t have the compute, they don’t have the expertise, they don’t have the hardware, they don’t have the data source or the data,” she said.

The NSCAI is scheduled to meet with NGOs to discuss issues like autonomous weapons, privacy, and civil liberties next week.

Liz O’Sullivan is a VP at ArthurAI in New York and part of the Human Rights Watch-backed Campaign to Stop Killer Robots. Last year, after voicing opposition to autonomous weapons systems with coworkers, she quit her job at the startup Clarifai in protest over its work on Project Maven. She thinks the two reports contain a lot of good substance but take no explicit stance on certain issues, such as whether historical hiring data, which will have a bias in favor of men, can be used.

O’Sullivan is concerned that a 2012 DoD directive mentioned in both reports, which calls for “appropriate levels of human judgment,” is being interpreted to mean that autonomous weapons will always have human control. She would rather the military adopt a standard of “meaningful human control” like the one advocated at the United Nations.

Roff, who previously worked in autonomous weapons research, said one misconception about the AI ethics report is the idea that deploying AI systems requires a human in the loop. Last-minute edits to the document clarify the need for the military to have an off switch in case AI systems begin to take actions on their own or attempt to avoid being turned off.

“Humans in the loop is not in the report for a reason, which is [that] a lot of these systems will act autonomously in the sense that it will be programmed to do a task and there won’t be a human in the loop per se. It will be a decision aid or it will have an output or if it’s cybersecurity it’s going to be finding bugs and patching them on their own and humans can’t be in the loop,” Roff said.

Although the DIB report was compiled with input from multiple public comment sessions, O’Sullivan believes both it and the NSCAI report lack input from people who oppose autonomous weapons.

“It’s pretty clear they selected these groups to be representative of industry, all very centrist,” she said. “That explains to me at least why there’s not a single representative on that board who is anti-autonomy. They stacked the deck, and they had to know what they were doing when they created these groups.”

O’Sullivan agrees that the military needs technologists but says it has to be upfront about what people are working on. Concern over computer vision projects like Maven springs from the fact that AI is a dual-use technology: an object detection system can also be used for weapons.

“I don’t think it’s smart for all of the tech industry to abandon our government. They need our help, but simultaneously, we’re in a position where in some cases we can’t know what we’re working on because it’s classified or parts of it might be classified,” she said. “There are plenty of people within the tech industry who do feel comfortable working with the Department of Defense, but it has to be consensual, it has to be something where they really do understand the impact and the gravity of the tasks that they’re working on. I mean if for no other reason than understanding the use cases when you’re building something is incredibly important to design it in a responsible way.”
