To export your trained model from Create ML, navigate to the Output section, where you’ll find your model ready for export. Simply click the Get button, which will prompt you to choose a location to save the model file. Ensure you select a location that’s easy to access for the following steps.
The exported model will have a .mlmodel extension, which is the Core ML format. This format is directly compatible with iOS applications, so there's no need to convert the model to another format.
Integrating the Model Into a SwiftUI App
To integrate your custom image classification model into a SwiftUI app, you’ll follow a process that involves creating an instance of your model, setting up the image-classification request, and updating the UI with the results. Here’s how you can achieve this:
1. Creating an Image Classifier Instance
Start by creating an instance of your Core ML model. This should be done when the app launches, ensuring that you have a single instance of the model available for efficient performance throughout the app.
// 1. Initialize the model
private let model: VNCoreMLModel

init() {
    // 2. Load the Core ML model
    guard let model = try? VNCoreMLModel(for: EmotionsImageClassifier().model) else {
        fatalError("Failed to load Core ML model.")
    }
    self.model = model
}
Here's a breakdown of the code above:
Initialize the model: This code declares a property to hold the Core ML model instance.
2. Creating a Classification Request
To classify an image, you create a VNCoreMLRequest using your model. This request processes the image and delivers the classification results through its completion handler.
func classifyImage(_ image: UIImage) {
    // 1. Create a VNCoreMLRequest with the model
    let request = VNCoreMLRequest(model: model) { (request, error) in
        // 2. Handle the classification results
        guard let results = request.results as? [VNClassificationObservation],
              let firstResult = results.first else {
            return
        }
        print("Classification: \(firstResult.identifier), Confidence: \(firstResult.confidence)")
    }
    // 3. Configure the request to crop and scale images
    request.imageCropAndScaleOption = .centerCrop
}
Here's a breakdown of the code above:
Create a VNCoreMLRequest with the model: This code creates a new image-classification request using the model you initialized. It includes a completion handler to process the results.
Handle the classification results: Inside the completion handler, this code checks if the results can be cast to an array of VNClassificationObservation and then processes the first result.
Configure the request to crop and scale images: This code sets the image crop and scale option to .centerCrop, ensuring that images are properly adjusted for the model's input requirements.
3. Creating a Request Handler
You use a VNImageRequestHandler to perform the request on an image. It processes the image and delivers the results back through the request's completion handler.
func performClassification(for image: UIImage) {
    guard let cgImage = image.cgImage else {
        return
    }
    // 1. Create a VNImageRequestHandler with the image
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // 2. Perform the classification request
    let request = VNCoreMLRequest(model: model) { (request, error) in
        // Handle the results in the completion handler
    }
    do {
        try handler.perform([request])
    } catch {
        print("Failed to perform classification request: \(error)")
    }
}
Here's a breakdown of the code above:
Create a VNImageRequestHandler: This code initializes a request handler with the provided image. The image must be converted to a CGImage format.
4. Handling and Extracting High-Confidence Results
Once you receive the classification results from the Core ML model, the next step is to handle these results and identify the most accurate classification based on confidence scores. This process involves checking for valid results and selecting the one with the highest confidence to ensure that you present the most reliable classification to the user.
// 1. Handle the classification results
guard let results = request.results as? [VNClassificationObservation] else {
    print("No results found")
    completion(nil, nil)
    return
}
// 2. Find the top result based on confidence
let topResult = results.max(by: { a, b in a.confidence < b.confidence })
guard let bestResult = topResult else {
    print("No top result found")
    completion(nil, nil)
    return
}
Here's a breakdown of the code above:
Handle the classification results: In this part, the code checks whether request.results can be cast to an array of VNClassificationObservation. This step ensures that the results are valid and contain the expected classification observations. If the cast fails, indicating that no results are found, an error message is printed and the completion handler is called with nil values.
Find the top result based on confidence: This section finds the classification observation with the highest confidence score. The results.max(by:) method iterates through the VNClassificationObservation array and compares each observation's confidence score. The observation with the highest confidence is returned as topResult. If no result is found, an error message is printed and the completion handler is called with nil values. If a top result is successfully identified, it's used for the final classification output.
After receiving the classification results, it’s essential to update the UI to present these results to the user in a clear and meaningful way. This step involves converting the raw prediction data into a user-friendly format and ensuring the UI elements reflect the updated information. Typically, this means updating labels, text fields, or other UI components with the classification results. It’s crucial to perform these updates on the main thread to ensure smooth and responsive user interactions.
Tips to Optimize the Model for Real-Time Performance
Optimize Predictions on Background Threads
Run your model’s predictions off the main thread to keep the UI responsive.
For tasks requiring multiple classifications in a short period, consider batching the work by reusing a single request across images rather than rebuilding it for every call, as shown below. This minimizes the setup overhead of individual requests.
func classifyBatchImages(_ images: [UIImage]) {
    // Reuse a single request for every image in the batch
    let request = VNCoreMLRequest(model: EmotionClassifier.shared.model) { request, error in
        // Handle each image's classification results here
    }
    for image in images {
        guard let cgImage = image.cgImage else { continue }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }
}
Reduce Image Size
Before passing images to the model, resize them to match the input size your model expects (e.g., 224x224 pixels). This reduces the computational load.
Finally, use Xcode's profiling tools, such as Instruments, to monitor your model's performance and identify bottlenecks or areas for improvement.