If you recently upgraded to a new iPhone, you may have noticed Apple Intelligence popping up in some of your most-used apps, such as Messages, Mail, and Notes. Apple Intelligence (yes, also abbreviated to AI) arrived in Apple’s ecosystem in October 2024, and it’s how Apple is competing with Google, OpenAI, Anthropic, and the other companies racing to build the best AI tools.
What is Apple Intelligence?

Apple Intelligence is what the marketing executives in Cupertino have branded the platform. It’s designed to lean on what generative AI already does well, such as text and image generation, to improve existing features. Like other platforms, including ChatGPT and Google Gemini, Apple Intelligence was trained on large information models. These systems use deep learning to form connections across text, images, video, and music.
The LLM-powered text offering presents itself as Writing Tools. The feature is available across various Apple apps, including Mail, Messages, Pages, and Notifications. It can be used to provide summaries of long text, proofread, and even write messages for you, using content and tone prompts.
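For developers, Writing Tools shows up automatically in standard text controls, and UIKit exposes a knob for tuning how much of it an app supports. A minimal sketch, assuming iOS 18’s UIKit API (the view setup here is purely illustrative):

```swift
import UIKit

// Sketch: standard text views get Writing Tools for free on iOS 18+.
// Apps can dial support up or down via writingToolsBehavior.
let textView = UITextView()
textView.writingToolsBehavior = .complete // full inline proofreading and rewriting
// Use .limited to show suggestions in a panel instead of inline,
// or .none to opt this view out of Writing Tools entirely.
```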
Image generation is integrated in a similar way, albeit a bit less seamlessly. Users can prompt Apple Intelligence to generate custom emojis (Genmojis) in an Apple house style. Image Playground, meanwhile, is a standalone image-generation app that uses prompts to create visual content that can be shared via Messages, Keynote, or social media.
Apple Intelligence also marks a long-awaited facelift for Siri. The smart assistant was early to the game but has been mostly neglected for the past several years. Siri is now integrated much more deeply into Apple’s operating systems; for example, instead of the familiar icon, users see a glowing light around the edge of their iPhone screen when it’s at work.
More importantly, the new Siri works across apps. That means, for example, you can ask Siri to edit a photo and then insert it directly into a text message. It’s a frictionless experience the assistant previously lacked. Onscreen awareness means Siri uses the context of the content it’s currently working with to provide the right answer.
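Apple hasn’t published the internals of the new Siri, but the public mechanism apps use to expose actions like these is the App Intents framework. Below is a minimal sketch of one such action; the intent name and the summarization logic are hypothetical stand-ins:

```swift
import AppIntents

// A hypothetical App Intent that Siri could invoke inside a notes app.
struct SummarizeNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize Note"

    @Parameter(title: "Note Text")
    var noteText: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Placeholder summary; a real app would run its own logic here.
        let summary = noteText.prefix(120)
        return .result(dialog: "Here's the gist: \(summary)")
    }
}
```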
Heading into WWDC 2025, many hoped Apple would show off this more capable version of Siri, but we’ll have to wait a bit longer.
“As we’ve shared, we’re continuing our work to deliver the capabilities that make Siri even more personal,” said Craig Federighi, Apple’s SVP of Software Engineering, at WWDC 2025.
The more personalized version of Siri, which has yet to be released, is supposed to understand your “personal context,” such as your relationships, communication routines, and more. However, according to a report from Bloomberg, this in-development version of Siri is far too error-ridden to ship, hence the delay.
At WWDC 2025, Apple also announced a new AI feature called Visual Intelligence, along with a live translation feature that can translate conversations in real time in the Messages, FaceTime, and Phone apps.
Visual Intelligence and live translation are expected to become available later in 2025, when iOS 26 is released to the public.
When was Apple Intelligence announced?
After months of speculation, Apple Intelligence took center stage at WWDC 2024. The platform was announced in the wake of a torrent of generative AI news from companies like Google and OpenAI.
But contrary to such speculation, Apple had a team in place, working on what proved to be a very Apple approach to artificial intelligence. There was still pizzazz amid the demos (Apple always loves to put on a show), but Apple Intelligence is ultimately a very pragmatic take on the category.
Apple Intelligence isn’t a standalone feature. Rather, it’s about integrating AI into existing offerings. It’s a branding exercise in a very real sense, but the large language model (LLM)-driven technology operates behind the scenes. As far as consumers are concerned, the technology mostly shows up as new features in existing apps.
We learned more at Apple’s iPhone 16 event in September 2024. During the event, Apple touted a number of AI-powered features coming to its devices, including translation on the Apple Watch Series 10, visual search on iPhone, and a number of tweaks to Siri’s capabilities. The first wave of Apple Intelligence arrived at the end of October as part of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.
The features were first released in U.S. English. Apple later added localized English support for Australia, Canada, New Zealand, South Africa, and the U.K. Support for Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese arrives in 2025.
Who will get Apple Intelligence?

The first wave of Apple Intelligence arrived in October 2024 via updates to iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. Those updates included integrated Writing Tools, image Clean Up, article summaries, and a typing input for the redesigned Siri experience. A second wave of features is available as part of iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. That list includes Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.
These features are free to use, so long as you have one of the following pieces of hardware:
All iPhone 16 models
iPhone 15 Pro Max (A17 Pro)
iPhone 15 Pro (A17 Pro)
iPad Pro (M1 and later)
iPad Air (M1 and later)
iPad mini (A17 Pro)
MacBook Air (M1 and later)
MacBook Pro (M1 and later)
iMac (M1 and later)
Mac mini (M1 and later)
Mac Studio (M1 Max and later)
Mac Pro (M2 Ultra)
Notably, only the Pro versions of the iPhone 15 make the cut, owing to shortcomings in the standard model’s chipset. The entire iPhone 16 line, however, is able to run Apple Intelligence.
How does Apple’s AI work without an internet connection?

When you ask GPT or Gemini a question, your query is sent to external servers to generate a response, which requires an internet connection. Apple, by contrast, has taken a small-model, bespoke approach to training.
The biggest benefit of this approach is that many of these tasks become far less resource-intensive and can be performed on the device itself. That’s because rather than relying on the kitchen-sink approach that powers platforms like GPT and Gemini, Apple compiled its datasets in-house for specific tasks such as composing an email.
That doesn’t apply to everything, however. More complex queries take advantage of the new Private Cloud Compute offering. The company now operates remote servers running on Apple Silicon, which it claims offer the same level of privacy as its consumer devices. Whether an action is being performed locally or via the cloud is invisible to the user, unless their device is offline, at which point remote queries will throw an error.
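Apple doesn’t expose this routing to developers, but the decision logic the paragraph describes can be sketched roughly as follows. Everything here, from the function names to the complexity check, is a hypothetical illustration of the pattern, not Apple’s API:

```swift
import Foundation

// Hypothetical sketch of on-device vs. Private Cloud Compute routing.
// None of these names are real Apple APIs.
enum IntelligenceError: Error { case offline }

func runOnDevice(_ query: String) -> String { "on-device answer to: \(query)" }
func runInPrivateCloud(_ query: String) -> String { "cloud answer to: \(query)" }

func handle(query: String, isComplex: Bool, isOnline: Bool) throws -> String {
    // Simple tasks stay on the small local model.
    guard isComplex else { return runOnDevice(query) }
    // Complex queries go to Private Cloud Compute, which needs a connection.
    guard isOnline else { throw IntelligenceError.offline }
    return runInPrivateCloud(query)
}
```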
Apple Intelligence and third-party apps

Ahead of Apple Intelligence’s launch, there was a lot of noise about Apple’s pending partnership with OpenAI. Ultimately, however, the deal turned out to be less about powering Apple Intelligence and more about offering an alternative platform for the things it isn’t really built for. It’s a tacit acknowledgment that there are limits to building a small-model system.
Apple Intelligence is free. So is access to ChatGPT. However, those with paid ChatGPT accounts will get access to premium features that free users don’t, including unlimited queries.
The ChatGPT integration, which debuted with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, has two primary roles: supplementing Siri’s knowledge base and adding to the existing Writing Tools options.
With the service enabled, certain questions will prompt the new Siri to ask the user to authorize its access to ChatGPT. Recipes and travel planning are examples of questions that may surface the option. Users can also directly prompt Siri to “ask ChatGPT.”
Compose is the other primary ChatGPT feature available through Apple Intelligence. Users can access it in any app that supports the new Writing Tools feature. Compose adds the ability to write content based on a prompt, joining existing writing tools such as Style and Summary.
What we do know is that Apple plans to partner with additional generative AI services. The company has all but said Google Gemini is next on the list.
Can developers build on Apple’s AI model?
At WWDC 2025, Apple announced what it calls the Foundation Models framework, which lets developers tap into its AI models while offline.
This makes it easier for developers to build AI features into their third-party apps that leverage Apple’s existing systems.
“For example, if you’re getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging,” Federighi said at WWDC. “And because it happens using on-device models, this comes without cloud API costs (…) We couldn’t be more excited about how developers can build on Apple Intelligence to bring new experiences that are smart and available offline.”
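Based on what Apple showed at WWDC 2025, using the framework amounts to opening a session with the on-device model and sending it a prompt. A minimal sketch, assuming the Foundation Models API as demonstrated (the quiz-making function and prompts are illustrative, and model availability still has to be checked since it depends on hardware and settings):

```swift
import FoundationModels

// Sketch of the Foundation Models framework as shown at WWDC 2025.
func makeQuiz(from notes: String) async throws -> String {
    // The on-device model is only available on supported hardware
    // with Apple Intelligence enabled.
    guard case .available = SystemLanguageModel.default.availability else {
        return "On-device model unavailable on this device."
    }

    let session = LanguageModelSession(
        instructions: "You turn study notes into short quiz questions."
    )
    let response = try await session.respond(
        to: "Write three quiz questions based on these notes:\n\(notes)"
    )
    return response.content
}
```

Because the model runs locally, a call like this works with no network connection and no per-request API fees, which is the trade-off Federighi highlights above.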