What's Next For Apple's AI? Insights From Research Provide Clues

Zinger Key Points
  • Apple's approach to AI, as hinted by its research, focuses on efficiency and on-device processing.
  • Many of the advancements show promise, but are still in the research phase and face challenges in real-world implementation.

Apple Inc's AAPL delayed entry into the AI arena might seem like a disadvantage compared to competitors that swiftly capitalized on the emergence of ChatGPT in late 2022.

However, recent rumors reported by The Verge suggested Apple was quietly laying the groundwork for its AI push. Reports indicated discussions with OpenAI and Alphabet Inc's GOOGL GOOG Google for potential collaborations, alongside the development of its own AI model, named Ajax.

See Also: Apple CEO Tim Cook Hints At ‘Exciting’ AI Developments In 2024, But Skimps On Details: ‘We Are Making Significant Investments’

Apple's approach to AI, as hinted by its research, focuses on efficiency and on-device processing. The company aims to significantly enhance Siri by deploying smaller, more efficient AI models. Bloomberg reported plans for iOS 18 to run all AI features on-device and fully offline, indicating a shift toward local processing.

Apple’s AI Roadmap: From Siri Upgrades To Innovative Creative Tools

In one of its papers titled “LLM in a Flash: Efficient Large Language Model Inference with Limited Memory,” Apple’s researchers detailed a method for storing model data on Solid State Drives (SSDs) instead of Random Access Memory (RAM), resulting in accelerated inference speed.

Researchers stated: “We have demonstrated the ability to run LLMs up to twice the size of available DRAM [on the SSD], achieving an acceleration in inference speed by 4-5x compared to traditional loading methods in CPU, and 20-25x in GPU.”
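The core idea behind the paper can be illustrated with a simplified sketch: rather than loading full weight matrices into DRAM, the model memory-maps them from flash storage and reads only the rows it actually needs for a given inference step. The shapes, file path, and row selection below are hypothetical stand-ins, not Apple's implementation:

```python
import os
import tempfile

import numpy as np

# Hypothetical layer size; a real LLM stores many such weight matrices.
SHAPE = (4096, 4096)

# Write weights to disk once (standing in for flash/SSD storage).
path = os.path.join(tempfile.mkdtemp(), "weights.npy")
np.save(path, np.random.rand(*SHAPE).astype(np.float32))

# Instead of loading the whole matrix into DRAM, memory-map it:
# pages are pulled from storage only when their rows are touched.
weights = np.load(path, mmap_mode="r")

# A sparsity-aware forward pass reads only the rows (neurons)
# it expects to be active, so DRAM holds a small working set
# rather than the full model.
active_rows = [0, 7, 42]        # e.g. neurons predicted to fire
partial = np.asarray(weights[active_rows])  # reads just these rows
print(partial.shape)            # (3, 4096)
```

The payoff is that the DRAM footprint scales with the active working set rather than with total model size, which is how a model larger than available DRAM can still run.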

Moreover, Apple’s researchers developed systems like EELBERT, which compress Large Language Models (LLMs) without significant loss in quality.

Their compressed version of Google's BERT model was notably smaller — only 1.2 megabytes — while maintaining high quality. However, the researchers acknowledged latency tradeoffs associated with this compression method.

In addition to improving Siri’s functionality, Apple explored AI applications beyond virtual assistants. The company’s research delved into areas such as health monitoring and creative tools.

For instance, it investigated motion data analysis, gait recognition and heart rate tracking to enhance health-related AI functionalities.

Apple also developed creative tools such as Keyframer, enabling users to iteratively refine designs, and MGIE, allowing image editing through natural language commands.

Rather than inputting a prompt and receiving an image, users engage in an iterative process where they begin with a prompt and then utilize a toolkit to adjust and refine specific elements of the image according to their preferences. This iterative artistic process could potentially be implemented in various Apple tools, ranging from the Memoji creator to more advanced artistic applications.

While these advancements show promise, they are still in the research phase and face challenges in real-world implementation.

Apple CEO Tim Cook hinted at significant AI-related announcements during the upcoming Worldwide Developers Conference (WWDC), indicating potential transformative impacts on iPhones and user experiences.

Read Next: Apple’s Secret AI Lab in Zurich Poised to Enhance iPhone Capabilities, Poach Google Staff

Photo: Below the Sky on Shutterstock.
