To export your trained model from Create ML, navigate to the Output section, where you’ll find your model ready for export. Simply click the Get button, which will prompt you to choose a location to save the model file. Ensure you select a location that’s easy to access for the following steps.
The exported model will have a .mlmodel extension, which is the Core ML format. This format is directly compatible with iOS applications, so there's no need to convert the model to another format.
Integrating the Model Into a SwiftUI App
To integrate your custom image classification model into a SwiftUI app, you’ll follow a process that involves creating an instance of your model, setting up the image-classification request, and updating the UI with the results. Here’s how you can achieve this:
1. Creating an Image Classifier Instance
Start by creating an instance of your Core ML model. Do this once, when the app launches: loading a model is relatively expensive, so keeping a single instance available throughout the app gives you efficient performance.
// 1. Initialize the model
private let model: VNCoreMLModel

init() {
  // 2. Load the Core ML model
  guard let model = try? VNCoreMLModel(for: EmotionsImageClassifier().model) else {
    fatalError("Failed to load Core ML model.")
  }
  self.model = model
}
Here's a breakdown of the code above:
Initialize the model: This line declares a property to hold the Core ML model instance.
Load the Core ML model: This code attempts to create a VNCoreMLModel instance from your Core ML model. If it fails, it triggers a fatal error, ensuring you're notified if something goes wrong.
2. Creating an Image-Classification Request
To classify an image, you must create a VNCoreMLRequest using your model. This request will process the image and provide classification results.
func classifyImage(_ image: UIImage) {
  // 1. Create a VNCoreMLRequest with the model
  let request = VNCoreMLRequest(model: model) { (request, error) in
    // 2. Handle the classification results
    guard let results = request.results as? [VNClassificationObservation],
          let firstResult = results.first else {
      return
    }
    print("Classification: \(firstResult.identifier), Confidence: \(firstResult.confidence)")
  }
  // 3. Configure the request to crop and scale images
  request.imageCropAndScaleOption = .centerCrop
}
Here's a breakdown of the code above:
Create a VNCoreMLRequest with the model: This code creates a new image-classification request using the model you initialized. It includes a completion handler to process the results.
Handle the classification results: Inside the completion handler, this code checks if the results can be cast to an array of VNClassificationObservation and then processes the first result.
Configure the request to crop and scale images: This code sets the image crop and scale option to .centerCrop, ensuring that images are properly adjusted for the model's input requirements.
3. Creating a Request Handler
You use a VNImageRequestHandler to perform the request on an image. The handler processes the image and delivers the results through the request's completion handler.
func performClassification(for image: UIImage) {
  guard let cgImage = image.cgImage else {
    return
  }
  // 1. Create a VNImageRequestHandler with the image
  let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
  // 2. Perform the classification request
  let request = VNCoreMLRequest(model: model) { (request, error) in
    // Handle the results in the completion handler
  }
  do {
    try handler.perform([request])
  } catch {
    print("Failed to perform classification request: \(error)")
  }
}
Here's a breakdown of the code above:
Create a VNImageRequestHandler: This code initializes a request handler with the provided image. The image must be converted to a CGImage first.
4. Handling and Extracting High-Confidence Results
Once you receive the classification results from the Core ML model, the next step is to handle these results and identify the most accurate classification based on confidence scores. This process involves checking for valid results and selecting the one with the highest confidence to ensure that you present the most reliable classification to the user.
// 1. Handle the classification results
guard let results = request.results as? [VNClassificationObservation] else {
  print("No results found")
  completion(nil, nil)
  return
}
// 2. Find the top result based on confidence
let topResult = results.max(by: { a, b in a.confidence < b.confidence })
guard let bestResult = topResult else {
  print("No top result found")
  completion(nil, nil)
  return
}
Here's a breakdown of the code above:
Handle the classification results: In this part, the code checks whether request.results can be cast to an array of VNClassificationObservation. This step ensures that the results are valid and contain the expected classification observations. If the cast fails, indicating that no results are found, an error message is printed and the completion handler is called with nil values.
Find the top result based on confidence: This section finds the classification observation with the highest confidence score. The results.max(by:) method iterates through the VNClassificationObservation array and compares each observation's confidence score. The observation with the highest confidence is returned as topResult. If no result is found, an error message is printed and the completion handler is called with nil values. If a top result is successfully identified, it's used for the final classification output.
By focusing on the classification with the highest confidence, you ensure that the most accurate and reliable result is presented to the user, enhancing the effectiveness of your app's image classification feature.
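The fragments above reference a completion closure that the snippets never declare. As a minimal sketch of how the pieces fit together, assuming the model property from earlier, a Vision import, and a hypothetical completion of type (String?, Float?) -> Void (that signature is an assumption, not part of the lesson):
func classify(_ image: UIImage, completion: @escaping (String?, Float?) -> Void) {
  // Convert to CGImage, as the request handler requires
  guard let cgImage = image.cgImage else {
    completion(nil, nil)
    return
  }
  // Build the request and pick the highest-confidence observation
  let request = VNCoreMLRequest(model: model) { request, _ in
    guard let results = request.results as? [VNClassificationObservation],
          let best = results.max(by: { $0.confidence < $1.confidence }) else {
      completion(nil, nil)
      return
    }
    completion(best.identifier, best.confidence)
  }
  request.imageCropAndScaleOption = .centerCrop
  // Perform the request with an image-specific handler
  let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
  try? handler.perform([request])
}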
5. Updating the UI with Classification Results
After receiving the classification results, it’s essential to update the UI to present these results to the user in a clear and meaningful way. This step involves converting the raw prediction data into a user-friendly format and ensuring the UI elements reflect the updated information. Typically, this means updating labels, text fields, or other UI components with the classification results. It’s crucial to perform these updates on the main thread to ensure smooth and responsive user interactions.
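Here's a minimal sketch of this step; the view model and property names are illustrative assumptions, not part of the lesson's code:
import SwiftUI
import Vision

// A simple observable object that SwiftUI views can bind to
// (the type and property names here are hypothetical)
class ClassificationViewModel: ObservableObject {
  @Published var classificationLabel = "Waiting for an image..."

  func updateUI(with observation: VNClassificationObservation) {
    // UI-bound state must be mutated on the main thread
    DispatchQueue.main.async {
      let percent = Int(observation.confidence * 100)
      self.classificationLabel = "\(observation.identifier) (\(percent)%)"
    }
  }
}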
Tips to Optimize the Model for Real-Time Performance
Optimize Predictions on Background Threads
Run your model’s predictions off the main thread to keep the UI responsive.
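For example, here's a minimal sketch, assuming handler and request are set up as in the earlier snippets:
// Dispatch the Vision work to a background queue
DispatchQueue.global(qos: .userInitiated).async {
  do {
    try handler.perform([request])
  } catch {
    print("Failed to perform classification request: \(error)")
  }
}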
For tasks requiring multiple classifications in a short period, consider batching your requests. This method minimizes the overhead of individual requests.
func classifyBatchImages(_ images: [UIImage]) {
  // Reuse a single request; each image needs its own handler
  let request = VNCoreMLRequest(model: model) { request, error in
    // Handle each image's results in the completion handler
  }
  for image in images {
    guard let cgImage = image.cgImage else { continue }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
  }
}
Reduce Image Size
Before passing images to the model, resize them to match the input size your model expects (e.g., 224x224 pixels). This reduces the computational load.
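Here's a minimal sketch of one way to do this with UIGraphicsImageRenderer; the 224x224 default is just an example, so match whatever input size your model expects:
import UIKit

func resize(_ image: UIImage, to size: CGSize = CGSize(width: 224, height: 224)) -> UIImage {
  // Redraw the image into a smaller bitmap context
  let renderer = UIGraphicsImageRenderer(size: size)
  return renderer.image { _ in
    image.draw(in: CGRect(origin: .zero, size: size))
  }
}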
Profile Your Model's Performance
Use Xcode's profiling tools, such as Instruments, to monitor your model's performance and identify any bottlenecks or areas for improvement.