Lawyers Fooled By OpenAI ChatGPT's Creative Legal Research Fined $5K

In a bizarre turn of events, two New York lawyers have been slapped with a $5,000 fine after unwittingly falling victim to the creative legal research capabilities of OpenAI's ChatGPT.

What Happened: An incident in the legal realm has caused quite a stir, as two New York lawyers faced sanctions and a $5,000 fine for unknowingly incorporating fictitious case citations generated by OpenAI's ChatGPT into their legal brief, reported Reuters.

The lawyers, Steven Schwartz and Peter LoDuca of the law firm Levidow, Levidow & Oberman, were accused of acting in bad faith and making false and misleading statements to the court.

See Also: This Former Google Officer Predicts AI Will Be 1 Billion Times Smarter Than Us

Despite their claim that it was a "good faith mistake," the judge deemed the actions unacceptable, highlighting the importance of attorney gatekeeping and accuracy in legal filings.

In his sanctions order on Thursday, the judge acknowledged that lawyers using AI for assistance is not inherently improper. However, he emphasized that legal ethics rules make attorneys responsible for serving as gatekeepers, ensuring the accuracy and reliability of their filings.

For the unversed, in May this year, it was reported that the Mata vs. Avianca case — where a customer sued the airline for a knee injury caused by a serving cart — took an unexpected and comedic turn thanks to ChatGPT.

In an attempt to counter Avianca's motion to dismiss the case, Mata's lawyers submitted a brief containing numerous alleged court decisions generated by the AI-powered chatbot.

When challenged to provide evidence of the referenced cases, the plaintiff's lawyer once again turned to ChatGPT for assistance, resulting in the AI fabricating intricate details of these nonexistent cases. Screenshots of the AI's imaginative responses were then captured and incorporated into the legal filings.

Adding to the surreal sequence of events, the lawyer even asked ChatGPT to confirm the authenticity of one of the fabricated cases, receiving an affirmative response from the AI. As a result, screenshots of the AI's confirmation were included in yet another filing, further compounding the situation's absurdity.

Why It's Important: Generative AI models such as OpenAI's ChatGPT, Microsoft Corp's MSFT Bing AI, and Alphabet Inc.'s GOOG GOOGL Google Bard have gained notoriety for their tendency to produce fabricated information confidently, a phenomenon commonly referred to as "hallucinations."

Instances of these AI models generating false facts have raised concerns, as highlighted by a Reddit user who warned about the dangers of excessively relying on ChatGPT's advice in fields like medicine or law while disregarding the potential risks involved.

What makes matters worse is that even individuals well-versed in technology can unknowingly fall victim to the well-documented hallucinatory tendencies exhibited by ChatGPT and similar AI models.

In April this year, Google CEO Sundar Pichai also admitted that current AI technology is still struggling with "hallucination problems" that have no clear explanation.

Check out more of Benzinga's Consumer Tech coverage by following this link.

Read Next: Artificial Intelligence Stocks Surge, But Do Investors Trust AI With Financial Decisions? New Poll Provides Answers

Posted In: News, Tech, Artificial Intelligence, Bing AI, ChatGPT, Consumer Tech, Google Bard, OpenAI, Sundar Pichai