In Lesson 1, you laid the groundwork for the MoodTracker app by implementing the basic UI for emotion detection. In this demo, you’ll go through the process of integrating the Core ML model into your MoodTracker app to perform emotion detection on images. This includes setting up a view model, configuring the classifier, and updating the user interface to display the results.
In the starter folder, you'll find the MoodTracker app as you left it in Lesson 1, along with the Create ML project containing the three classifiers you created in the previous lesson. First, you'll extract the .mlmodel file from the Create ML project. The Create ML project contains your trained models along with their accuracy results, which affect which one you pick and ship. To work with a model in the app, you'll need to review the results in Create ML and export your best model.
Afof tzo UkajuexmEwiwuHjajxagaak knabixy uhl, ey jna Locip Raussad siyzoes, tbiape fco diliyd ppudwuyiof yea paltejavok, ptern wam mru mevl eytiyuhd eyizd cla guqim xaemden. Zkoj, usuh wwa Aeyqin sox. Sezm, fnoll zzo Req wekqup ke uqxazy yxi yotiv. Wgeg nigeyz zlo jagu, hofo ep EvepauxpOhepaCboftaxaun ki ajtuxa up wayqfip xco atkgwujgoerd. Tuh, sia hage lyo vixih yoizf jo uwa ut laok hsejupp uj rko Pute MG agmajlieq.
Now, open the MoodTracker app. In this demo, you'll introduce some functionality to the EmotionDetectionView. That's why you'll create a view model for this view to handle its logic and functionality. Create a new folder named ViewModel, then add a new Swift file named EmotionDetectionViewModel. This view model will hold only a property for the image and a reset method to clear the image.
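The demo builds this file out on screen; as a reference, a minimal sketch of the view model might look like the following, where the property name selectedImage is an assumption rather than the demo's exact code:

import SwiftUI
import UIKit

class EmotionDetectionViewModel: ObservableObject {
  // The image the user picked or captured.
  @Published var selectedImage: UIImage?

  // Clears the current image so the user can pick another one.
  func reset() {
    selectedImage = nil
  }
}

With the view model in place, declare it in EmotionDetectionView as a state object: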
@StateObject private var viewModel = EmotionDetectionViewModel()
Build and run your app. Press the Start Emotion Detection button, then select an image and ensure that the image appears as it did previously. Then, press the Select Another Image button to verify that the reset functionality works as expected. Now, you're ready to start integrating your model into the app.
Drag and drop the EmotionsImageClassifier.mlmodel file into the Models folder. Next, create a Swift file in the same folder and name it EmotionClassifier. In this file, you'll create the classifier as you learned in the Instruction section of this lesson.
You'll load the Core ML model in the initializer. Then, the classify method will convert the UIImage to a CIImage. Next, you'll create a VNCoreMLRequest with the model. After that, you'll handle the classification results and pick the top one. Finally, you'll create a handler and perform the request on a background thread as a best practice, as you learned earlier. If you need more details about any of these steps in the classifier, you can refer back to the Instruction section to review them.
import SwiftUI
import Vision
import CoreML
class EmotionClassifier {
  private let model: VNCoreMLModel

  init() {
    // 1. Load the Core ML model
    let configuration = MLModelConfiguration()
    guard let mlModel = try? EmotionsImageClassifier(configuration: configuration).model else {
      fatalError("Failed to load model")
    }
    self.model = try! VNCoreMLModel(for: mlModel)
  }

  func classify(image: UIImage, completion: @escaping (String?, Float?) -> Void) {
    // 2. Convert UIImage to CIImage
    guard let ciImage = CIImage(image: image) else {
      completion(nil, nil)
      return
    }

    // 3. Create a VNCoreMLRequest with the model
    let request = VNCoreMLRequest(model: model) { request, error in
      if let error = error {
        print("Error during classification: \(error.localizedDescription)")
        completion(nil, nil)
        return
      }

      // 4. Handle the classification results
      guard let results = request.results as? [VNClassificationObservation] else {
        print("No results found")
        completion(nil, nil)
        return
      }

      // 5. Find the top result based on confidence
      let topResult = results.max(by: { a, b in a.confidence < b.confidence })
      guard let bestResult = topResult else {
        print("No top result found")
        completion(nil, nil)
        return
      }

      // 6. Pass the top result to the completion handler
      completion(bestResult.identifier, bestResult.confidence)
    }

    // 7. Create a VNImageRequestHandler
    let handler = VNImageRequestHandler(ciImage: ciImage)

    // 8. Perform the request on a background thread
    DispatchQueue.global(qos: .userInteractive).async {
      do {
        try handler.perform([request])
      } catch {
        print("Failed to perform classification: \(error.localizedDescription)")
        completion(nil, nil)
      }
    }
  }
}
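With the classifier in place, the remaining step is to call it from the view model and publish the results so the view can display them. The names below, detectedEmotion, accuracy, and detectEmotion(), are illustrative assumptions rather than the demo's exact API; one possible sketch of the updated view model:

import SwiftUI
import UIKit

class EmotionDetectionViewModel: ObservableObject {
  // Published state the view observes.
  @Published var selectedImage: UIImage?
  @Published var detectedEmotion: String?
  @Published var accuracy: Float?

  private let classifier = EmotionClassifier()

  // Runs the classifier on the selected image and publishes the result.
  func detectEmotion() {
    guard let image = selectedImage else { return }
    classifier.classify(image: image) { [weak self] emotion, confidence in
      // The completion handler may run on a background queue, so update
      // published properties on the main queue.
      DispatchQueue.main.async {
        self?.detectedEmotion = emotion
        self?.accuracy = confidence
      }
    }
  }

  // Clears the current image and any previous result.
  func reset() {
    selectedImage = nil
    detectedEmotion = nil
    accuracy = nil
  }
}

A Detect Emotion button in EmotionDetectionView can then call viewModel.detectEmotion() and show detectedEmotion and accuracy in the results view.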
Build and run the app on a real device so you can either select an image from your gallery or take a picture of a happy or sad face. Choose an image as you did previously. Notice that you have a new Detect Emotion button to classify that image. Tap it and notice the result that appears, showing whether the emotion is happy or sad along with the accuracy. If you try this project in the simulator, you'll get incorrect results. That's why it's essential to test it on a real device to obtain accurate data. Try different images with different emotions, and notice that the model might make some mistakes in classification, which is acceptable.
Congratulations! You did a great job implementing the MoodTracker app to detect the dominant emotion from an image. Now, you have a fully working app ready to use.