
Background information
Apple is walking the personal intelligence tightrope
by Samuel Buchmann
With the development of its own AI models faltering, Apple is apparently considering a change of strategy: the new Siri could be based on software from third-party providers.
Apple is considering powering the new Siri with a large language model (LLM) from OpenAI or Anthropic. This is reported by «Bloomberg», citing internal sources. Negotiations are said to be underway, and the Californians have reportedly already requested test versions. However, a final decision has yet to be made.
This would mean that Apple would not only be abandoning its previous strategy of strict in-house development, but also admitting that it has fallen behind in the race for generative artificial intelligence (AI). Users can already have Siri forward requests to ChatGPT. The voice assistant itself, however, was supposed to be built on an in-house LLM, whose development recently ran into problems. As a result, the new Siri was postponed by at least a year, to 2026.
The competition never sleeps: Google, Amazon and Samsung have equipped their voice assistants with powerful AI models that are increasingly making Siri look old. According to «Bloomberg», internal tests have shown that Apple's own foundation models lag significantly behind the solutions from OpenAI and Anthropic in terms of accuracy and range of functions. ChatGPT and Claude are currently regarded as the industry benchmark for generative AI and multimodal language processing.
The change in strategy would be a first for Apple and harbours risks. Dependence on external partners could weaken control over central functions and jeopardise in-house innovation in the long term. In addition, providers such as Anthropic are said to demand high licence fees. On the other hand, by integrating leading AI models, Siri could quickly catch up with rival assistants such as Google's.
A key promise at the launch of Apple Intelligence was that no user data would be passed on to third parties, except with explicit permission, for example in the case of a ChatGPT request. To uphold this standard with LLMs from other providers, Apple apparently wants to run the models on its own Private Cloud Compute servers. This is likely to require complex custom versions, as these servers run not on Nvidia GPUs but on Mac chips.
In the long term, Apple is apparently pursuing a hybrid strategy: for particularly sensitive tasks and on the devices themselves, it will continue to use its own compact AI models. However, external models such as ChatGPT or Claude could be used for complex, cloud-based requests. Apple's goal is to strike a balance between the pace of innovation and data protection.
According to «Bloomberg», the potential change in strategy is already leaving its mark on the company. The mood in the AI teams is said to be tense. Renowned expert Tom Gunter resigned last week. In addition, almost the entire MLX team, which is responsible for developing AI on Apple chips, has left the company.
Many employees reportedly feel unfairly devalued by the possible outsourcing of Siri's AI. What's more, Meta is currently courting AI talent for its new «Superintelligence Lab» with salaries of up to 40 million US dollars per year. According to «Bloomberg», Apple's LLM development department currently comprises around 100 people.