The controversy surrounding military artificial intelligence is rooted in “grave misperceptions” about what the department is actually trying to do, according to current and former Defense officials.
Protecting the U.S. in the decades ahead will require the Pentagon to make “substantial, sustained” investments in military artificial intelligence, and critics need to realize the department doesn’t take that task lightly, according to current and former Defense Department officials.
Efforts to expand the department’s use of AI systems have been met with public outcry among many in the tech and policy communities who worry the U.S. will soon entrust machines to make life-and-death decisions on the battlefield. Last year, employee protests led Google to pull out of an Air Force project that used machine learning to sort through surveillance footage.
On Wednesday, officials said the Pentagon is going to great lengths to ensure any potential applications of AI adhere to strict ethical standards and international norms. Even if the U.S. military balks at deploying the tech, they warned, global adversaries like Russia and China certainly will not, and their ethical frameworks will likely be lacking.
“The Department of Defense is absolutely unapologetic about pursuing this new generation of AI-enabled weapons,” former Deputy Defense Secretary Robert Work said Wednesday at an event hosted by AFCEA’s Washington, D.C. chapter. “If we’re going to succeed against a competitor like China that’s all in on this competition … we’re going to have to grasp the inevitability of AI.”
Released in February, the Pentagon’s AI strategy explicitly requires human operators to have the ability to override any decisions made by a military AI system and ensures the tech abides by the laws of armed conflict.
“I would argue the U.S. military is the most ethical military force in the history of warfare, and we think the shift to AI-enabled weapons will continue this trend,” Work said. And despite the criticism, he added, the tech could potentially save lives by reducing friendly fire and avoiding civilian casualties.
Lt. Gen. Jack Shanahan, who leads the department’s newly minted Joint Artificial Intelligence Center, told the audience much of the criticism he’s heard directed at military AI efforts is rooted in “grave misperceptions about what [the department] is actually working on.” While some may envision a general AI system “that’s going to roam indiscriminately across the battlefield,” he said, the tech will only be narrowly applied, and humans will always stay in the loop.
If anything, the outcry shows the Pentagon isn’t engaging enough with industry about the projects it’s pursuing, according to Shanahan.
“Somehow that conversation has got[ten] off track with some aspects of industry, largely because of an assumption of what [the Defense Department] might be working on rather than what we’re actually working on,” he said. “We know there’s work to do to continue a healthy dialogue about what our value system is, how we adhere to international norms and how some of our potential adversaries are likely not to.”
But while there are many important discussions to be had, he said that shouldn’t stop the military from working to advance the tech in the here and now.
“If we don’t have a fully AI-enabled force, we will incur an unacceptably high risk of losing” the next major conflict, Shanahan said. “That’s how important this is to our national security.”