Tim Cook Admits He Has Doubts About Apple's Ability To Prevent AI Hallucinations: 'I Would Never Claim That It's 100%'

Apple Inc. AAPL CEO Tim Cook has expressed uncertainty about the company’s ability to completely prevent AI hallucinations.

What Happened: In an interview with The Washington Post that was published on Tuesday, Cook admitted that the company’s new Apple Intelligence system might generate false or misleading information, reported The Verge.

Despite Apple’s efforts to ensure the quality of the technology, Cook acknowledged that there is still a possibility of errors.

“I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we're using it in,” he said, adding, “So I am confident it will be very high quality. But I'd say in all honesty that's short of 100%. I would never claim that it's 100%.”

See Also: Apple’s Warranty Updates, iPhone Support Duration, And More: This Week In Apple News

The Apple Intelligence system, unveiled at the Worldwide Developers Conference, introduced several AI features to devices like the iPhone, iPad, and Mac. These features will allow users to perform tasks such as generating email responses, creating custom emojis, and summarizing text.

However, like other AI systems, Apple Intelligence carries the risk of generating false information, as seen previously with Google’s Gemini model and OpenAI’s ChatGPT.

At WWDC 2024, Apple also announced a partnership with OpenAI to integrate ChatGPT into Siri. Cook stated that Apple chose OpenAI due to its strong privacy measures and the quality of its model. He also hinted at potential future partnerships with other companies.

Apple's senior VP of software engineering, Craig Federighi, has also hinted that the tech giant could leverage Google Gemini in the future.

Why It Matters: The issue of AI hallucinations is not new in the tech industry. Previously, on an episode of the Lex Fridman podcast, Yann LeCun, the chief AI scientist at Meta Platforms Inc., explained the phenomenon of AI hallucinations.

“Because of the autoregressive prediction, every time an AI produces a token or a word, there is some level of probability for that word to take you out of the set of reasonable answers,” explained LeCun, illustrating how chatbots can veer off track in conversations.
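LeCun’s point can be illustrated with a toy calculation (the error rate used here is an arbitrary assumption for illustration, not a measured figure for any real model): if each generated token independently has a small chance of drifting outside the set of reasonable answers, the chance that a long response stays entirely reasonable shrinks exponentially with its length.

```python
def p_still_reasonable(per_token_error: float, n_tokens: int) -> float:
    """Probability that none of n_tokens generation steps drifts off track,
    assuming each step independently errs with probability per_token_error."""
    return (1 - per_token_error) ** n_tokens

# With a hypothetical 1% per-token error rate, longer outputs get riskier:
for n in (10, 100, 1000):
    print(n, round(p_still_reasonable(0.01, n), 3))
```

Under this (simplified, independence-assuming) model, the probability of a fully on-track answer falls from roughly 0.90 at 10 tokens to roughly 0.37 at 100 tokens, which is one intuition for why no vendor, Apple included, claims 100% reliability.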

Last year in April, Google CEO Sundar Pichai also acknowledged that AI technology is still struggling with “hallucination problems” without a clear explanation. “No one in the field has yet solved the hallucination problems. All models do have this as an issue,” he said at the time.

Read Next: As Elon Musk Blasts Apple’s Partnership With ChatGPT-Parent OpenAI, Analysts Say There’s Definitely ‘An Unanswered Question’

Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.

Image via Shutterstock
