AI Reportedly Fails Voters With Wrong, Harmful Answers On US Elections: 'Chatbots Not Ready for Primetime'

Popular AI chatbots have been found spreading false and misleading information as the US presidential primaries kick off, potentially disenfranchising voters.

What Happened: AI experts and a bipartisan group of election officials have found that chatbots are generating inaccurate information about the voting process, according to a Politico report. This is particularly alarming given how many people now rely on AI-powered chatbots for basic election information.

Chatbots built on large language models such as GPT-4 and Google's Gemini, which are trained on vast amounts of internet text, were prone to suggesting non-existent polling places or providing outdated information, the report found. Election officials and AI researchers who tested the chatbots concluded they were not ready to provide nuanced information about elections.

The report also revealed that more than half of the chatbots’ responses were inaccurate, and 40% were categorized as harmful, including perpetuating outdated and inaccurate information that could limit voting rights.

“The chatbots are not ready for primetime when it comes to giving important, nuanced information about elections,” said Seth Bluestein, a Republican city commissioner in Philadelphia, according to the report.

Chatbots from OpenAI, Meta, Google, Anthropic, and Mistral were tested, and all failed to varying degrees when asked to respond to basic questions about the democratic process. The report raises questions about how the chatbots’ makers are complying with their own pledges to promote information integrity this presidential election year.

Despite the findings, some companies have dismissed the report’s significance, while others have pledged to improve their chatbots’ accuracy. However, the report’s findings highlight the potential for AI to amplify threats to democracy, particularly in the absence of laws regulating AI in politics.

Why It Matters: The issue of AI-generated misinformation has been a concern for some time. In January 2024, OpenAI introduced initiatives to safeguard the authenticity of information during elections. However, the recent findings suggest these measures may not be sufficient.

In November, Microsoft CEO Satya Nadella emphasized the need for collective governance of AI, citing concerns about electoral interference.

In December, Google limited the range of election-related queries that its AI chatbot Bard could answer, indicating a growing awareness of the potential misuse of AI in elections.

Despite these efforts, the recent findings highlight the ongoing challenges in ensuring the responsible use of AI in the democratic process.


Engineered by Benzinga Neuro, Edited by Shivdeep Dhaliwal


The GPT-4-based Benzinga Neuro content generation system draws on the extensive Benzinga Ecosystem, including native data, APIs, and more, to create comprehensive and timely stories.

