Introducing Apple’s On-Device and Server Foundation Models
Jun 10, 2024 · We present foundation language models developed to power Apple Intelligence features, including a ~3 billion parameter model designed to run efficiently on devices and a larger server-based language model.
[2407.21075] Apple Intelligence Foundation Language Models
Jul 29, 2024 · We present foundation language models developed to power Apple Intelligence features, including a ~3 billion parameter model designed to run efficiently on devices and a large server-based language model designed for Private Cloud Compute.
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
To address these concerns, we conduct a large-scale study on several state-of-the-art (SOTA) open and closed models. To overcome the limitations of existing evaluations, we introduce GSM-Symbolic, an improved benchmark created from symbolic templates that allow for the generation of a diverse set of questions.
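The symbolic-template idea can be pictured with a small, hypothetical example: a grade-school word problem becomes a template whose names and numbers are resampled, while the ground-truth answer is recomputed from the sampled values. The sketch below only illustrates that idea; the template, names, and value ranges are made up and this is not the benchmark's actual generator.

```python
import random

# Toy illustration of a symbolic template: resample names and numbers,
# recompute the ground-truth answer from the sampled values.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "How many apples does {name} have in total?"
)

def sample_instance(rng: random.Random) -> tuple[str, int]:
    name = rng.choice(["Sophie", "Liam", "Mia", "Noah"])
    x, y = rng.randint(2, 20), rng.randint(2, 20)
    question = TEMPLATE.format(name=name, x=x, y=y)
    answer = x + y  # ground truth follows directly from the sampled values
    return question, answer

rng = random.Random(0)
for question, answer in (sample_instance(rng) for _ in range(3)):
    print(question, "->", answer)
```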
Apple Intelligence Foundation Language Models
We present foundation language models developed to power Apple Intelligence features, including a ~3 billion parameter model designed to run efficiently on devices and a large server-based language model designed for Private Cloud Compute.
Apple Releases Open Source AI Models That Run On-Device
Apr 24, 2024 · Apple today released several open source large language models (LLMs) that are designed to run on-device rather than through cloud servers. Called OpenELM (Open-source Efficient Language Models), the models are available on the Hugging Face Hub.
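As a rough idea of what running one of these models locally looks like, the sketch below loads an OpenELM checkpoint through Hugging Face transformers. The repository id apple/OpenELM-270M-Instruct and the reuse of the Llama-2 tokenizer follow the public model card as I understand it, but treat both as assumptions; the tokenizer repository is gated and requires access.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: smallest instruct-tuned OpenELM variant on the Hugging Face Hub.
model_id = "apple/OpenELM-270M-Instruct"

# OpenELM's model card points to the Llama-2 tokenizer (gated repository).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# OpenELM ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```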
Apple’s MM1 AI Model Shows a Sleeping Giant Is Waking Up
Mar 19, 2024 · MM1 is a multimodal large language model, or MLLM, meaning it is trained on images as well as text. This allows the model to respond to text prompts and also answer questions about images.
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
Oct 14, 2024 · Now, though, a new study from six Apple engineers shows that the mathematical "reasoning" displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to benchmark problems.
How Apple Wants to Fix Hallucinations in AI Translation
In a January 28, 2025 paper, Rajen Chatterjee and Sarthak Garg from Apple, along with Zilu Tang from Boston University, presented a framework for mitigating translation hallucinations.
How to use Reinforcement Learning with Large Language Models …
Baseline models serve as benchmarks for evaluating progress. For instance, the LLaMA 1B model achieved approximately 79% Pass@8 and 30% Majority@8 on the GSM8K benchmark.
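Pass@8 and Majority@8 refer to sampling eight answers per question and scoring a question as solved if any sample matches the reference answer (Pass@8) or if the most frequent sampled answer matches it (Majority@8). The sketch below shows one common way to compute both metrics; it is illustrative and not taken from the article.

```python
from collections import Counter

# Pass@k: a question counts as solved if any of its k sampled answers is correct.
def pass_at_k(samples: list[list[str]], refs: list[str]) -> float:
    solved = sum(ref in cand for cand, ref in zip(samples, refs))
    return solved / len(refs)

# Majority@k: a question counts as solved if the most frequent sampled answer is correct.
def majority_at_k(samples: list[list[str]], refs: list[str]) -> float:
    solved = 0
    for cand, ref in zip(samples, refs):
        majority, _ = Counter(cand).most_common(1)[0]
        solved += (majority == ref)
    return solved / len(refs)

# Two toy questions with k = 4 sampled answers each and reference answers "42" and "7".
samples = [["42", "41", "42", "40"], ["6", "6", "7", "6"]]
refs = ["42", "7"]
print(pass_at_k(samples, refs))      # 1.0: both questions have at least one correct sample
print(majority_at_k(samples, refs))  # 0.5: the majority answer is wrong for the second question
```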
On-device AI — Apple Intelligence: Apple Foundation Models
Sep 16, 2024 · We will specifically focus on two models: AFM-on-device, a language model with approximately 3 billion parameters, and AFM-server, a larger, server-based language model.