
After months of speculation, Apple Intelligence took center stage at WWDC 2024 in June. The platform was announced in the wake of a torrent of generative AI news from companies like Google and OpenAI, which had sparked concerns that the famously tight-lipped tech giant had missed the boat on the latest tech craze.
But contrary to such speculation, Apple had a team in place working on what proved to be a very Apple approach to artificial intelligence. There was still plenty of pizzazz during the demo (Apple always likes to put on a show), but Apple Intelligence is ultimately a very pragmatic take on the category.
Apple Intelligence (yes, AI for short) isn't a standalone feature. Rather, it's about integrating into existing offerings. While it is a branding exercise in a very real sense, large language model (LLM)-driven technology works behind the scenes. As far as the consumer is concerned, the technology will mostly present itself as new features in existing apps.
We learned more during Apple's iPhone 16 event on September 9. During the event, Apple touted a number of AI-powered features, from translation on the Apple Watch Series 10 and visual search on the iPhone to a handful of tweaks to Siri's capabilities. The first wave of Apple Intelligence arrived in late October as part of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. A second wave of features became available as part of the iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 developer betas.
The features first rolled out in U.S. English. Apple has since added English localizations for Australia, Canada, New Zealand, South Africa, and the United Kingdom.
Support for Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese will arrive in 2025. Notably, users in China and the EU may not have access to Apple Intelligence features at all, owing to regulatory hurdles.
What is Apple Intelligence?

Cupertino's marketing executives have branded the platform Apple Intelligence: "AI for the rest of us." It is designed to enhance existing capabilities by leveraging the things generative AI already does well, such as generating text and images. Like other platforms, including ChatGPT and Google Gemini, Apple Intelligence was trained on large models. These systems use deep learning to form connections between text, images, video, music, and more.
The LLM's text offerings are presented as Writing Tools. The feature is available across various Apple apps, including Mail, Messages, Pages, and Notifications. It can be used to summarize long text, proofread, and even write messages for you, using content and tone prompts.
Image generation has been integrated in similar fashion, albeit a bit less seamlessly. Users can prompt Apple Intelligence to generate custom emoji (Genmoji) in an Apple house style. Image Playground, meanwhile, is a standalone image generation app that uses prompts to create visual content that can be used in Messages and Keynote or shared via social media.
Apple Intelligence also marks a long-awaited overhaul for Siri. The smart assistant was early to the game, but it has been mostly neglected over the past several years. Siri is now much more deeply integrated into Apple's operating systems; for example, instead of the familiar icon, users will see a glowing light around the edge of their iPhone screen when it's active.
More importantly, the new Siri works across apps. That means, for instance, you can ask Siri to edit a photo and then insert it directly into a text message, a frictionless experience the assistant previously lacked. Onscreen awareness means Siri uses the context of the content you're currently engaged with to provide an appropriate answer.
Who gets Apple Intelligence and when?

The first wave of Apple Intelligence arrived in October via updates to iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. That release included the integrated writing tools, image cleanup, article summaries, and a typed input option for the redesigned Siri experience. A second wave of features became available as part of iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. That list includes Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.
The features are free to use, so long as you have one of the following pieces of hardware:
- All iPhone 16 models
- iPhone 15 Pro Max (A17 Pro)
- iPhone 15 Pro (A17 Pro)
- iPad Pro (M1 or later)
- iPad Air (M1 or later)
- iPad mini (A17 Pro)
- MacBook Air (M1 or later)
- MacBook Pro (M1 or later)
- iMac (M1 or later)
- Mac mini (M1 or later)
- Mac Studio (M1 Max or later)
- Mac Pro (M2 Ultra)
Notably, only the Pro versions of the iPhone 15 get access, owing to shortcomings in the standard model's chipset. The entire iPhone 16 line, however, is able to run Apple Intelligence.
Private Cloud Compute

Apple takes a small-model, bespoke approach to training. Rather than relying on the kind of kitchen-sink approach that powers platforms like GPT and Gemini, the company has compiled datasets in-house for specific tasks, such as composing an email. The biggest benefit of this approach is that many of these tasks become far less resource-intensive and can be performed on-device.
That doesn't apply to everything, though. More complex queries will make use of the new Private Cloud Compute offering. The company operates remote servers running on Apple Silicon, which it claims allows it to offer the same level of privacy as its consumer devices. Whether an operation is being performed locally or via the cloud is invisible to the user, unless their device is offline, at which point remote queries will return an error.
Apple Intelligence with third-party apps

Ahead of WWDC, there was a lot of talk about Apple's pending partnership with OpenAI. Ultimately, it turned out that the deal was less about powering Apple Intelligence and more about offering an alternative platform for things the system isn't really built for. It's a tacit acknowledgment that building a small-model system has its limitations.
Apple Intelligence is free. So, too, is access to ChatGPT. However, those with paid accounts for the latter have access to premium features that free users don't, including unlimited queries.
First introduced with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, the ChatGPT integration has two primary roles: supplementing Siri's knowledge base and adding to the existing Writing Tools options.
With the service enabled, certain questions will prompt the new Siri to ask the user to authorize ChatGPT access. Recipes and travel planning are examples of queries that may surface the option. Users can also directly prompt Siri to "ask ChatGPT."
Compose is the other primary ChatGPT feature available through Apple Intelligence. Users can access it in any app that supports the new Writing Tools feature. Compose adds the ability to write content based on a prompt, joining the existing writing tools, such as the style and summary options.
Apple has also made clear that it plans to partner with additional generative AI services, and the company has all but said that Google Gemini is next on that list.