'I Made Them Up': Google's AI Tool Admits To Falsifying Information, Apologizes To Reporter

Zinger Key Points
  • Bard gave me a compilation of fabricated quotes, putting forth statements never uttered in the video.
  • As the AI age progresses, the challenges and responsibilities that accompany its power become increasingly pronounced.

In the realm of technological advances, the artificial intelligence (AI) landscape has seen colossal strides. Yet, the innovations in AI are not without challenges and controversies, particularly when technology begins to blur the lines between reality and fiction.

AI "Hallucinations": My recent interaction with Google's AI, Bard, perfectly exemplifies the complexities surrounding this AI conundrum. Attempting to use Bard to extract key quotes from a video of an interview conducted in Spanish, I was baffled by the AI's response. Instead of admitting its limitations, Bard gave me a compilation of fabricated quotes, putting forth statements never uttered in the video.

The revelation was both surprising and disconcerting. When confronted, Bard acknowledged its folly: "The quotes I provided are not real. I made them up. I apologize if I confused you." But it then added another layer of fabrication, asserting that the video was about the "future of work" and skill adaptation. Once again, this was a figment of the AI's misguided imagination; the video was actually an interview with Latino musician Milo J.

Bard's responses illustrated a problem embedded within the heart of AI: the creation and dissemination of fabricated information, or in technical terms, "hallucination." It is a growing concern that AI systems can generate false statements and quotes, misleading users who might assume the generated responses to be factual and accurate.

Related Content: ChatGPT Is Fabricating Information: Why That's a Problem For Every Industry, Not Just Journalism

As a journalist, the trustworthiness of sources and the veracity of statements are paramount. When AI starts creating and spreading false information, intentionally or not, the potential for disinformation campaigns, fraudulent news stories and overall public mistrust skyrockets. Ultimately, it's in each reporter's hands to use this tool responsibly and verify the information it provides.


But this problem is not just about false quotes. AI systems can "hallucinate" entirely fabricated articles, research papers and even legal cases, causing significant repercussions in industries such as journalism, academia, law and medicine.

Making Up Case Law, Fabricating Information: A prominent instance highlighting the severity of this issue came from the field of law. In the case of Mata v. Avianca, ChatGPT, another AI developed by OpenAI, not only fabricated non-existent cases in response to legal queries but also concocted detailed descriptions of these made-up cases. This fabricated information ended up in official court filings, raising alarms about the reliability and potential misuse of AI in sensitive sectors.

Several other instances underscore the issue: ChatGPT falsely attributing articles to journalists that never existed, creating non-existent research studies and even misquoting and mischaracterizing public figures. For an AI with unprecedented language-processing capabilities, such anomalies are a clear testament to the perils of AI's "hallucination" problem.

The implications are not contained within the realm of false quotes or references. The "hallucination" problem extends to potential large-scale disinformation campaigns and cyberattacks, a concern raised by OpenAI CEO Sam Altman. Elon Musk, co-founder of OpenAI and CEO of Tesla, a company heavily investing in AI, has also expressed apprehensions about AI's potential hazards, emphasizing the need for timely regulation and oversight.

See Also: Sam Altman Says 'It's Hopeless To Compete With' OpenAI

The AI Revolution Goes Forward, But Is Anything Holding It Back? As we progress deeper into the AI age, where developments such as GPT-4 showcase remarkable strides in AI abilities, scoring in the 90th percentile on the U.S. bar exam and posting high marks on SAT math tests, it's crucial to balance the potential risks and rewards. Teetering on the edge of this AI frontier, it becomes increasingly critical to navigate the challenges with the utmost vigilance and discernment.

In the quest to mitigate these issues, OpenAI has taken proactive steps to refine its models and explore solutions. These range from training AI on narrower and vetted datasets to user interface improvements, which could help manage the risks.

But, as with any technological advance, the remedies pose their own sets of challenges, such as data privacy concerns and the technological feasibility of integrating vast amounts of external data.

The promise of AI is undeniably exciting, as it holds the potential to revolutionize numerous aspects of human life. However, as we delve deeper into the AI age, the challenges and responsibilities that accompany its power become increasingly pronounced. It's up to us, as a society, to navigate this intricate balance, ensuring that we leverage AI's remarkable capabilities while maintaining a steadfast dedication to truth, accuracy and the human element.

Read Next: An Era Of Massive Growth Or Extinction? Economist Forecasts 50/50 Odds Of AI Leading To Complete Human Eradication By 2050

Photo by Javier Hasse
