Expert Slams Western Governments For Overlooking AI In War: They 'Have Let The Deftech Sector Escape Their Oversight'


Western governments have rushed to establish AI Safety Institutes, yet they have overlooked the governance of military AI use, despite the growing potential for serious safety risks.

What Happened: Concerns regarding the lack of governance over military AI use were raised by Marietje Schaake, International Policy Director at Stanford University's Cyber Policy Center and Special Adviser to the European Commission, in an op-ed in the Financial Times on Tuesday.

According to Schaake, the U.K., U.S., Japan, and Canada have all announced AI Safety Institutes, and the U.S. Department of Homeland Security recently introduced an AI Safety and Security Board. However, none of these bodies oversees the military application of AI.

Schaake also highlighted that, buoyed by venture capital, the defense tech sector is flourishing at an unregulated pace.

“But though it’s easy to point the finger at private companies who hype AI for warfare purposes, it is governments who have let the ‘deftech’ sector escape their oversight,” she added.

Schaake pointed out that AI safety risks are already evident on the modern battlefield. For instance, an AI-enabled program, Lavender, was reportedly used by the Israel Defense Forces to identify targets for drone attacks, resulting in considerable collateral damage.

While the UN has called for a ban on autonomous weapons, talks have stalled due to resistance from Russia, the U.S., the U.K., and Israel.


“Making sure human rights standards and laws of armed conflict continue to protect civilians in a new age of warfare is critical. The unregulated use of AI on the battlefield cannot continue,” Schaake said.


Why It Matters: The absence of regulation on military AI use carries serious implications. Despite their inherent imprecision, AI-enabled weapons are often trusted excessively in military settings and may not comply with international humanitarian law. The Pentagon’s Defense Innovation Unit has been boosting startups with increased funding and support for products ranging from autonomous drones to cybersecurity software.

This is particularly relevant given recent developments such as the Ukrainian Air Force's use of iPads to operate modern weapons in older fighter jets, and the U.S.'s $138 million deal to maintain and upgrade Ukraine's HAWK air defense systems.

Photo by Alexander Yartsev on Shutterstock



Engineered by Benzinga Neuro, Edited by Pooja Rajkumari


The GPT-4-based Benzinga Neuro content generation system leverages the extensive Benzinga Ecosystem, including native data, APIs, and more, to create comprehensive and timely stories for you.

