US Army Clarifies Policy on Tank Automated Targeting System
The US Army has updated its notice calling for industry and academic input on the development of its planned automated targeting and fire control integration system for tank operations, clarifying that human operators will have veto power over any proposed engagement with a target.
Quelling the Controversy
Likely unaware of how its original notice would sound to civilian ears, the Army had to scramble last week after controversy arose around its announced invitation to industry and academic parties to provide input on the development of a planned AI-driven, autonomous targeting system for its tanks.
The updated announcement now emphasizes that the weapon system is not intended to go beyond what is explicitly spelled out in Army regulations, which purport to forbid autonomous weapons systems from making the decision to engage human targets on their own.
One Army official, speaking to Defense One about the controversy, said that the ability of the new system to locate and engage targets on a battlefield automatically doesn’t necessarily mean that “we’re putting the machine in a position to kill anybody.”
The new program, called ATLAS (Advanced Targeting and Lethality Automated System), will “leverage recent advances in computer vision and Artificial Intelligence / Machine Learning (AI/ML) to develop autonomous target acquisition technology, that will be integrated with fire control technology, aimed at providing ground combat vehicles with the capability to acquire, identify, and engage targets at least 3X faster than the current manual process,” according to the Army’s announcement.
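To make the division of labor in that description concrete: the automation covers acquiring and identifying targets, while the decision to engage stays with a human operator. The sketch below is purely illustrative, assuming a simple approval gate; every name in it is hypothetical, and nothing here is drawn from the actual ATLAS design.

```python
from dataclasses import dataclass

# A minimal, hypothetical sketch of a human-in-the-loop engagement gate.
# All names are invented for illustration; none of this comes from the
# Army's announcement or reflects the real ATLAS architecture.

@dataclass
class Target:
    track_id: int
    label: str         # classifier output, e.g. "vehicle"
    confidence: float  # model confidence in the identification

def acquire_and_identify() -> list[Target]:
    """Stand-in for the automated stage: in a real system, computer
    vision / ML models would detect and classify candidate targets."""
    return [Target(track_id=7, label="vehicle", confidence=0.91)]

def human_approves(target: Target) -> bool:
    """The veto point: a human operator must explicitly approve a
    proposed target before any engagement can proceed."""
    answer = input(f"Engage {target.label} (track {target.track_id}, "
                   f"confidence {target.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engage(target: Target) -> None:
    """Placeholder for the fire-control hand-off."""
    print(f"Engaging track {target.track_id}")

def main() -> None:
    for target in acquire_and_identify():
        # Acquisition and identification are automated, but the engage
        # decision is gated on an affirmative human answer; no approval
        # means no engagement, so the default is to hold fire.
        if human_approves(target):
            engage(target)

if __name__ == "__main__":
    main()
```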
Controversy Over AI Targeting and Engagement in Tank Operations
Tanks are a mainstay of the US Army, so a major upgrade to their systems will naturally have a huge impact, for better or worse, on how the Army performs. It’s no wonder, then, that controversy erupted last week when Quartz first reported on the announcement.
Stuart Russell, professor of computer science at UC Berkeley and a respected figure in artificial intelligence, raised alarms about the memo, saying that it represented “another significant step towards lethal autonomous weapons.”
While the Army insists that humans must “always” have veto power over autonomous weapon systems, Russell clearly doesn’t consider that an acceptable standard, telling Quartz that “it looks very much as if we are heading into an arms race where the current ban on full lethal autonomy will be dropped as soon as it’s politically convenient to do so.”
Michael Horowitz, associate professor of political science at the University of Pennsylvania and a senior adjunct fellow at the Center for a New American Security, doesn’t go as far as Russell but agrees that clarity is needed.
“Lack of clarity concerning what would truly constitute an autonomous weapon system, even under the existing DoD directive, means it is not entirely clear the ATLAS program would be fully autonomous,” Horowitz said, as reported in Defense One.
“It is critical that any revisions to the ATLAS program not only clarify the degree of autonomy and the level of human involvement in the use of force, but also ensure that any incorporation of AI occurs in a way that ensures safety and reliability.”