It’s finally time to tie the chat app into Foundation Models. Open ChatView.swift. First, import the Foundation Models framework by adding the following line after the existing import:
import FoundationModels
Now find the sendMessage method. Interactions with LLMs consist of a prompt sent to the model and a response from the model. In this app, the text entered by the user will be the prompt. You will then take the response from the model and add it as a “reply”.
Delete the DispatchQueue.main.asyncAfter call and its associated closure at the end of the method. Replace them with:
// 1
let session = LanguageModelSession()
var response: String
// 2
do {
  // 3
  let modelResponse = try await session.respond(to: messageText)
  response = modelResponse.content
  messageText = ""
} catch {
  // 4
  response = "An error occurred while processing your message. \(error.localizedDescription)"
}
Part of the power of Foundation Models comes from the simplicity of using the model. This code provides a basic but complete implementation. Here’s the process:
1. You interact with Foundation Models through a LanguageModelSession, which represents a single session of interaction with the language model. You also define a String to hold the model’s response.
2. Interactions with LanguageModelSession use Swift’s do-try-catch pattern and will throw an error if anything goes wrong. You’ll look at error handling in more depth later.
3. The respond(to:options:) method sends your string prompt to the LanguageModelSession you created and returns a LanguageModelSession.Response. You read its content property to get the text of the reply. Note that LLMs, including Foundation Models, often return text in Markdown, a lightweight markup language for styling plain text; the MessageBubble view already supports Markdown. Because respond(to:options:) generates the entire response and only returns when it’s ready, which can take some time for complex prompts, Apple made the call asynchronous, so you need to await its completion. If the model returns a valid response, you clear the text from the user input.
4. If anything goes wrong, you set the response to a message describing the error.
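One practical note: the on-device model isn’t guaranteed to be ready on every device. If you want to guard the call before creating a session, a minimal sketch along these lines checks availability first (hedged: this uses the SystemLanguageModel API from the framework, and the exact unavailability reasons you receive may vary by device and OS version):

```swift
import FoundationModels

// Sketch: returns nil when the on-device model is ready to answer
// prompts, or a human-readable status message otherwise.
func modelStatusMessage() -> String? {
  switch SystemLanguageModel.default.availability {
  case .available:
    return nil  // Safe to create a LanguageModelSession.
  case .unavailable(let reason):
    // reason describes why the model can't be used right now,
    // e.g. Apple Intelligence is disabled or the model is still downloading.
    return "Model unavailable: \(reason)"
  }
}
```

Checking availability up front lets you disable the input field or show an explanatory message instead of surfacing an error after the user has already typed a prompt.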
Now that you have a response, you need to show it on the screen. Add the following code after the do-try-catch structure:
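A minimal sketch of what this code might look like, assuming the view tracks the typing indicator with an `isTyping` state property and stores the conversation in a `messages` array of a `Message` type (all three names are assumptions; adjust to match your ChatView implementation):

```swift
// Sketch only: isTyping, messages, and Message are assumed names
// for this app's view state, not part of the Foundation Models API.
withAnimation {
  // Hide the typing indicator now that the model has replied.
  isTyping = false
  // Append the model's reply to the conversation as a non-user message.
  messages.append(Message(text: response, isUser: false))
}
```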
This hides the typing indicator and adds the message to the chat with animations. Build and run the app, then enter the prompt Hello. After a few seconds, you should get a response.
Response to a hello prompt.
Take a moment to appreciate that you’ve set up and used an LLM in just three lines of code, excluding the code required to display text and handle errors. That’s a level of convenience that hasn’t been available for LLMs before.
Note: Don’t worry if your replies in this module don’t precisely match the ones shown in the lesson. LLMs are probabilistic by nature, meaning there’s some randomness built into the system. You can adjust, and even eliminate, this randomness using options, which will be discussed later.
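As a preview of those options, a sketch of dialing down randomness might look like this (hedged: this assumes the GenerationOptions initializer accepts a temperature parameter, where lower values make output more deterministic):

```swift
// Sketch: pass GenerationOptions to respond(to:options:) to reduce
// randomness. A temperature of 0 asks for the most predictable output.
let options = GenerationOptions(temperature: 0.0)
let reply = try await session.respond(to: messageText, options: options)
```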
In the next lesson, you’ll refine your use of models in the app.
This content was released on Oct 2 2025. The official support period is 6 months from this date.
It takes little code to implement Foundation Models. This section walks you through converting the app to support Foundation Models.