A group of researchers is warning the world about weaponizing artificial intelligence and robotics, but that machine-learning genie may already be out of the bottle.
There are clear signs that the United States is already engaged in an AI arms race with China and Russia to develop weapons systems for the land, sea and air that can talk to each other and select targets autonomously, making decisions now dictated by humans.
“They are definitely moving in that direction,” Toby Walsh, professor of AI at Australia’s University of New South Wales, told Benzinga. “It’s unclear how many systems are operational in the field. I think any sphere of battle, you can name a prototype.”
Walsh, chairman of the prestigious International Joint Conference on Artificial Intelligence being held Aug. 19–25 in Melbourne, Australia, is one of 3,105 AI and robotics researchers so far who have signed an open letter from The Future of Life Institute, a research foundation that seeks to keep tech from threatening humanity.
The group, which includes entrepreneur Elon Musk and physicist Stephen Hawking, said AI will be a cheap, devastating alternative to nuclear weapons that could fall more easily into the wrong hands.
“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits,” the letter reads.
“Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
Too Late: Putin On The Blitz
Boris Obnosov, CEO of Russia’s Tactical Missiles Corporation, told state-run media his country was already developing missiles and drones that will think for themselves to catch up to the strides made by the United States and China.
He said one such weapon will debut in the next few years and was inspired by Raytheon Company's (NYSE: RTN) Block IV Tomahawk cruise missile, which was deployed against Russia's allies in Syria and can switch to preprogrammed, alternate targets in mid-flight. It's currently being upgraded by Raytheon.
"Work in this area is under way. This is a very serious field where fundamental research is required. As of today, certain successes are available, but we’ll still have to work for several years to achieve specific results," Obnosov said, according to the state-run TASS Russian News Agency.
China Also Says It’s Working On Cutting-Edge AI Weapons
The U.S. Navy describes its latest Tomahawk as “capable of loitering over a target area in order to respond to emerging targets or, with its on-board camera, provide battle damage information to warfighting commanders.”
“It certainly is cutting edge, the Tomahawk,” said Walsh. “Now we’re seeing it being extended. It raises questions. Is that a military convoy or a first-aid convoy?”
Just as Russian media reported Obnosov’s remarks, a Chinese newspaper said that country’s aerospace industry was developing tactical missiles with built-in intelligence that would help seek out targets in combat.
Whether AI, military or otherwise, will ever learn enough to turn on humans is a matter of serious debate.
But the more immediate concern is the military-industrial complex and the competition among nations that is analogous to the nuclear arms race of the Cold War.
“There is a driver there. Your only defense against autonomous weapons will be autonomous weapons themselves,” says Walsh. “If I were the Chinese and knew that the U.S. had weaponized AI, I would develop it myself, and vice versa.”
U.S. Military: Full Speed Ahead
“Artificial Intelligence technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” The Future of Life Institute’s letter said.
A Department of Defense study last year, since removed but still accessible in web archives, says the Pentagon is in danger of being surpassed not only by its adversaries but by the private sector. It calls for an all-out embrace not only of developing machine learning for weapons systems, but of systems to defend against it.
One of the report's central themes is building “trustworthiness” into AI to head off the sort of public aversion that nuclear and chemical weapons provoke.
“Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike,” the report says. “Because an autonomous system may have different sensors and data sources than any of its human teammates, it may be operating on different contextual assumptions of the operational environment.”
Another moral issue raised by the evolution of high-tech war is that it has depersonalized death. Sending in a guided missile or a drone metes out massacres from afar. The danger lies in “flash wars” triggered by smart weapons in not-so-smart hands.
“You can fight a war without body bags coming home, without putting boots on the ground,” says Walsh.
© 2022 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.