Apple is known for its innovative technology, but in the AI field it was lagging behind its competitors, and users wondered why it was so discreet. That changed during yesterday’s WWDC 2024, when the company introduced a new feature: Apple Intelligence.
Imagine your iPhone creating custom emojis, or Siri summarizing your hour-long meeting in seconds. Apple Intelligence promises to bring AI-powered features into your daily life. But as our devices become more intelligent, what are the implications for our personal data?
What is Apple Intelligence
Apple Intelligence is an AI platform designed to bring generative AI features to apps across Apple’s ecosystem. The company, which closed a deal with OpenAI, wants to provide users with a powerful AI experience.
Key Features
Personalized Image Generation
Users will be able to create custom images for their conversations in iMessage or Mail, in three different styles:
- Sketch
- Animation
- Illustration
The “Image Playground” app allows more customization, like combining photos and other visual elements to generate contextual images.
The “Genmoji” tool will use generative AI to create emojis based on text descriptions.
AI-Powered Writing Tools
Apple Intelligence integrates advanced writing tools into apps like Safari and Notes to enhance productivity.
It will facilitate proofreading, summarizing, and generating text, adding new creative possibilities.
Enhanced Siri Experience
Siri, first released in February 2010, is getting a significant upgrade. With new conversational AI capabilities, it should be more intuitive, understand context better, follow multi-step instructions, and draw on the user’s data for personal context.
Private Cloud Compute
AI tasks will be executed either on-device or on Apple’s secure cloud servers, with the cloud reserved for complex computations. During the keynote, Apple said that sensitive data would remain on the device.
This separation aims to ensure optimal performance without compromising user privacy.
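To make the on-device/cloud split more concrete, here is a minimal Swift sketch of the routing idea. It is purely illustrative: the `AITask` type, the complexity score, and the threshold are assumptions made for the example, not Apple’s actual API.

```swift
// Conceptual sketch only: decide whether an AI task runs on-device
// or on the secure cloud. Types and thresholds are hypothetical.
enum ExecutionTarget {
    case onDevice
    case privateCloudCompute
}

struct AITask {
    let prompt: String
    let estimatedComplexity: Int  // hypothetical 1–10 score
}

func route(_ task: AITask) -> ExecutionTarget {
    // Lightweight tasks stay on the device; heavier ones go to the secure cloud.
    return task.estimatedComplexity <= 5 ? .onDevice : .privateCloudCompute
}

let task = AITask(prompt: "Summarize my last three meeting notes", estimatedComplexity: 8)
print(route(task))  // prints "privateCloudCompute"
```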
Apple’s Commitment to Privacy
Of course, it’s Apple, so privacy concerns emerged, but the WWDC Keynote addressed some of them directly:
- Apple’s cloud only receives the minimum data required for the specific task.
- Data should be encrypted in transit and never stored or made accessible to Apple (see the sketch after this list).
- Cloud servers will use Apple’s custom silicon with hardware security capabilities (Secure Enclave and Secure Boot) to prevent unauthorized access.
- Independent experts can inspect the code running on these servers to verify Apple’s privacy claims.
- User data should only be used to fulfill the current request and then be discarded.
- Apple should not have access to the user’s personal data processed by the AI models.
- User IP addresses will be hidden when using tools like ChatGPT.
- Apple will provide options to lock and hide sensitive apps from view when sharing devices.
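As a small illustration of the “minimum data, encrypted in transit” principle above, here is a hedged Swift sketch using CryptoKit. The `SummarizationRequest` type and the locally generated key are assumptions made for the example (in practice, key exchange would be handled by the transport layer); this is not Apple’s implementation.

```swift
import CryptoKit
import Foundation

// Hypothetical minimal payload: only the fields the task actually needs.
struct SummarizationRequest: Codable {
    let text: String
}

do {
    let request = SummarizationRequest(text: "Notes from today's meeting...")
    let payload = try JSONEncoder().encode(request)

    // Encrypt the payload before it leaves the device (AES-GCM).
    // The symmetric key here is generated locally purely for illustration.
    let key = SymmetricKey(size: .bits256)
    let sealedBox = try AES.GCM.seal(payload, using: key)

    print("Encrypted payload: \(sealedBox.combined?.count ?? 0) bytes")
} catch {
    print("Encryption failed: \(error)")
}
```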
Still, some public figures have already warned about the potential dangers of this integration.
On X/Twitter, Elon Musk even warned that if Apple devices integrate OpenAI, they would be banned at his companies.
What’s next
With all its competitors releasing AI features, it’s a smart move from Apple to hit hard during the keynote.
The trend toward AI adoption is now clear (source: Microsoft WorkLab and Microsoft News):
- The use of generative AI has nearly doubled in the last six months
- 75% of global knowledge workers use it
- 79% of leaders agree AI adoption is critical to remain competitive
Apple Intelligence could be a game changer, but several questions remain unanswered: how secure will the data be in the cloud, and will it be used for targeted advertising or product development?
That being said, Apple Intelligence will be an opt-in feature and will launch in beta first, so we’ll learn more about privacy and transparency over time.
Full functionality will require an iPhone 15 Pro or newer, or an iPad or Mac with an M1 chip or later.