
Apple Reveals Bold AI Model Training Strategy Focused on Privacy and Performance

Apple may have slipped out of the AI spotlight lately, but it is not backing down. The company has published a comprehensive technical report, “Apple Intelligence Foundation Language Models – Tech Report 2025,” detailing how its latest models are trained. It is one of Apple’s most transparent moves in the AI field, laying bare the building blocks of its next-generation models.

At WWDC, Apple teased the world with its new Liquid Glass design for upcoming operating systems, but it also quietly introduced the next generation of its foundation AI models. These are set to power both on-device and cloud-based experiences, with a sharp focus on privacy and performance.

One standout from the report is the hybrid structure of Apple’s on-device model. The model is split into two blocks: Block 1 holds roughly 60% of the transformer layers and does the bulk of the language understanding. Block 2, streamlined by removing the memory-intensive key and value projections and reusing Block 1’s cache instead, cuts cache memory use by about 38% and significantly speeds up the time to the first output token.
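
Conceptually, a Block 2-style layer keeps its own query projection but borrows keys and values computed by an earlier block, so it never stores a cache of its own. Here is a minimal PyTorch sketch of that idea; the class, names, and sizes are illustrative, not Apple’s implementation:

```python
import torch
import torch.nn as nn

class SharedKVAttention(nn.Module):
    """Attention layer with no key/value projections of its own.

    It reuses keys and values cached by a donor block, so only the
    query and output projections carry weights. A sketch of the idea,
    not Apple's actual architecture.
    """
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)    # queries are still computed here
        self.out_proj = nn.Linear(d_model, d_model)  # note: no k_proj / v_proj

    def forward(self, x, cached_k, cached_v):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        # cached_k / cached_v come from the donor block: (b, heads, t, d_head)
        attn = torch.softmax(q @ cached_k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ cached_v).transpose(1, 2).reshape(b, t, -1)
        return self.out_proj(out)

# Toy usage: a "Block 1" stand-in produces K/V once; the shared layer reuses them.
b, t, d_model, n_heads = 1, 8, 64, 4
donor_kv = nn.Linear(d_model, 2 * d_model)  # stand-in for a Block 1 K/V projection
x = torch.randn(b, t, d_model)
k, v = donor_kv(x).chunk(2, dim=-1)
k = k.view(b, t, n_heads, d_model // n_heads).transpose(1, 2)
v = v.view(b, t, n_heads, d_model // n_heads).transpose(1, 2)
layer = SharedKVAttention(d_model, n_heads)
y = layer(x, k, v)  # Block 2-style layer: attends without storing its own cache
```

Because the shared layers never materialize their own key/value tensors, the cache only has to be stored once, which is where the memory savings come from.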

On the server side, Apple’s model runs on its custom Private Cloud Compute system and uses a Parallel-Track Mixture-of-Experts (PT-MoE) design. Instead of running the full network for every request, only the specialized “experts” relevant to the task are activated, which keeps processing fast and efficient while minimizing system strain.
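
A toy illustration of sparse expert routing, the core idea behind any mixture-of-experts layer: a router scores the experts for each token and only the top few actually run. The expert count, sizes, and top-2 routing below are generic placeholders, not details of Apple’s PT-MoE:

```python
import torch
import torch.nn as nn

class SparseMoELayer(nn.Module):
    """Mixture-of-Experts layer that activates only the top-k experts per token."""
    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = torch.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):             # only k experts run per token
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

tokens = torch.randn(16, 64)
layer = SparseMoELayer(d_model=64)
y = layer(tokens)  # each token touched only 2 of the 8 experts
```

The payoff is that total parameter count can grow with the number of experts while the compute per token stays roughly constant.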

Language support, previously a major shortfall for Apple Intelligence, has been dramatically improved. Apple raised the share of non-English data in training from 8% to 30%, drawing on both authentic and synthetic datasets. This strengthens multilingual features such as Writing Tools and broadens global accessibility.
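
As a rough illustration of what such a sampling mix means in practice, here is a tiny sketch. The corpus names and the 70/20/10 split are hypothetical; only the overall 30% non-English share comes from the report:

```python
import random

# Hypothetical corpora; only the aggregate 30% non-English share is from the report.
corpora = {
    "english_web":      {"weight": 0.70, "docs": ["an English document ..."]},
    "multilingual_web": {"weight": 0.20, "docs": ["un document ...", "ある文書 ..."]},
    "synthetic_multi":  {"weight": 0.10, "docs": ["a generated multilingual doc ..."]},
}

def sample_batch(n: int) -> list[str]:
    """Draw a training batch whose language mix follows the corpus weights."""
    names = list(corpora)
    weights = [corpora[name]["weight"] for name in names]
    picks = random.choices(names, weights=weights, k=n)
    return [random.choice(corpora[name]["docs"]) for name in picks]

batch = sample_batch(8)  # in expectation, ~30% of draws are non-English
```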

Apple’s training data sources are equally varied. Applebot, the company’s web crawler, gathers public web data unless a site explicitly blocks it. Apple also uses licensed content from unnamed media partners, plus synthetic data generated by smaller models for specific tasks such as image-to-text and code generation.
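
Blocking works the same way as for any crawler: a site’s robots.txt can disallow Applebot by name. A small sketch using Python’s standard urllib.robotparser, with an illustrative robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt: this site allows crawlers in general but blocks Applebot.
robots_txt = """\
User-agent: Applebot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("Applebot", "https://example.com/article"))      # False: blocked
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True: allowed
```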

Visual data plays a big role, too. The training set includes more than 10 billion image-caption pairs, among them screenshots and handwritten notes. Apple uses its own AI models to enhance these captions, making them richer over time.
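
The report does not say how this recaptioning is wired up, but the general pattern looks something like the sketch below, where `toy_captioner` is a hypothetical stand-in for a real image-captioning model:

```python
from dataclasses import dataclass

@dataclass
class ImageExample:
    image_path: str
    caption: str  # original alt-text or OCR-derived caption

def enrich_caption(example: ImageExample, captioner) -> ImageExample:
    """Replace a thin caption with a model-written one.

    `captioner` is a stand-in for any image-captioning model; the report
    only says Apple's own models are used, not how they are invoked.
    """
    new_caption = captioner(example.image_path, hint=example.caption)
    return ImageExample(example.image_path, new_caption)

# Hypothetical captioner stub, just to make the sketch runnable.
def toy_captioner(image_path: str, hint: str) -> str:
    return f"{hint} (expanded with details observed in {image_path})"

sample = ImageExample("note_0413.png", "handwritten shopping list")
print(enrich_caption(sample, toy_captioner).caption)
```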

Despite the AI buzz often being dominated by rivals, Apple’s latest strategy shows it quietly carving a strong path forward, prioritizing privacy, efficiency, and multilingual support in a way few competitors can claim.
