Abilities & Limitations of Apple Foundation Models


What is Apple Foundation Models?

It’s worth starting with the most basic question: What is the Apple Foundation Models framework? The short answer is that it’s a large language model (LLM) that Apple has optimized to run locally on end-user devices, such as laptops, desktops, and mobile devices. Traditional LLMs operate in data centers equipped with high-powered GPUs, which require substantial memory and power. Bringing that functionality to an end-user device requires significant changes to the model. In Apple’s case, the two most important changes are reducing the number of parameters and quantizing the model’s weights.
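Before sending the model any work, an app should confirm the on-device model is actually available. A minimal sketch, assuming the FoundationModels framework and its `SystemLanguageModel.availability` API (available on recent Apple OS releases; exact unavailability reasons may vary):

```swift
import FoundationModels

// Check whether the on-device model is ready before using it.
let model = SystemLanguageModel.default

switch model.availability {
case .available:
    print("On-device model is ready.")
case .unavailable(let reason):
    // Reasons include Apple Intelligence being disabled on the
    // device or the model assets not yet being downloaded.
    print("Model unavailable: \(reason)")
}
```

Checking availability up front lets you hide or disable AI features gracefully on devices that don’t support Apple Intelligence.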

Limitations of LLMs and Apple Foundation Models

Open the Starter app for this lesson. The app expands on the chat-style app created in lesson one. As the previous discussion suggests, a chat app really isn’t the best use case for this model, but it makes experimentation easy. In a real app, you’ll more likely call the model with information and input gathered from the user.

Please give me a list of five things to do on a visit to the Great Smoky Mountains National Park.
Response to Things to Do in Smoky Mountains Prompt
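In code, sending a prompt like the one above takes only a few lines. A minimal sketch, assuming the FoundationModels framework’s `LanguageModelSession` and its async `respond(to:)` method:

```swift
import FoundationModels

// Send the lesson's sample prompt to the on-device model and
// return the generated text. Must run on a device that supports
// Apple Intelligence.
func askForTravelTips() async throws -> String {
    let session = LanguageModelSession()
    let prompt = """
        Please give me a list of five things to do on a visit \
        to the Great Smoky Mountains National Park.
        """
    let response = try await session.respond(to: prompt)
    return response.content
}
```

Because everything runs locally, the call works offline, but the response reflects only what the model learned up to its training cutoff, as the next experiment shows.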

Question that reveals the cutoff date of information in Foundation Models

Foundation Model Safety

A key concern when working with any generative AI is safety. One reason this chat-style app is a poor choice for on-device models is that it exposes the most potentially dangerous type of interaction: letting the user enter prompts directly into the model. Any data the user submits, or that is pulled from external sources, should be treated as untrusted. That data could contain accidental or intentional attempts to introduce malicious instructions. Apple has trained the model to handle sensitive topics with care, perhaps overly so at times. In addition, you’ve already encountered the concept of guardrails in the first lesson, when the model refused to help you cheat on homework. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, in both prompts and responses. This means you may not be able to generate content for specific topics, even if they’re relevant to your app.
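When a guardrail trips, the model throws an error rather than returning text, so your app should catch it and show something user-friendly. A sketch, assuming FoundationModels’ `LanguageModelSession.GenerationError` and its `guardrailViolation` case; the fallback messages are app-specific choices, not part of the framework:

```swift
import FoundationModels

// Respond to untrusted user input, falling back to a friendly
// message when the framework's guardrails refuse the request.
func respondSafely(to userText: String) async -> String {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: userText)
        return response.content
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        // Prompt or response was flagged as sensitive content.
        return "Sorry, I can't help with that topic."
    } catch {
        return "Something went wrong. Please try again."
    }
}
```

Catching the guardrail case separately lets you distinguish a content refusal from ordinary failures such as the model being unavailable.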
