Beginning Machine Learning with Keras & Core ML

In this Keras machine learning tutorial, you’ll learn how to train a convolutional neural network model, convert it to Core ML, and integrate it into an iOS app. By Audrey Tam.


Add Metadata for Xcode

Now add the following, substituting your own name and license info for the first two items, and run it.

coreml_mnist.author = 'raywenderlich.com'
coreml_mnist.license = 'Razeware'
coreml_mnist.short_description = 'Image based digit recognition (MNIST)'
coreml_mnist.input_description['image'] = 'Digit image'
coreml_mnist.output_description['output'] = 'Probability of each digit'
coreml_mnist.output_description['classLabel'] = 'Labels of digits'

This information appears when you select the model in Xcode’s Project navigator.

Save the Core ML Model

Finally, add the following, and run it.

coreml_mnist.save('MNISTClassifier.mlmodel')

This saves the mlmodel file in the notebook folder.

Congratulations, you now have a Core ML model that classifies handwritten digits! It’s time to use it in the iOS app.

Use Model in iOS App

Now you just follow the procedure described in Core ML and Vision: Machine Learning in iOS 11 Tutorial. The steps are the same, but I’ve rearranged the code to match Apple’s sample app Image Classification with Vision and CoreML.

Step 1. Drag the model into the app:

Open the starter app in Xcode, and drag MNISTClassifier.mlmodel from Finder into the project’s Project navigator. Select it to see the metadata you added:

If, instead of Automatically generated Swift model class, Xcode says you need to build the project to generate the model class, go ahead and do that.

Step 2. Import the CoreML and Vision frameworks:

Open ViewController.swift, and import the two frameworks, just below import UIKit:

import CoreML
import Vision

Step 3. Create VNCoreMLModel and VNCoreMLRequest objects:

Add the following code below the outlets:

lazy var classificationRequest: VNCoreMLRequest = {
  // Load the ML model through its generated class and create a Vision request for it.
  do {
    let model = try VNCoreMLModel(for: MNISTClassifier().model)
    return VNCoreMLRequest(model: model, completionHandler: handleClassification)
  } catch {
    fatalError("Can't load Vision ML model: \(error).")
  }
}()

func handleClassification(request: VNRequest, error: Error?) {
  guard let observations = request.results as? [VNClassificationObservation]
    else { fatalError("Unexpected result type from VNCoreMLRequest.") }
  guard let best = observations.first
    else { fatalError("Can't get best result.") }

  DispatchQueue.main.async {
    self.predictLabel.text = best.identifier
    self.predictLabel.isHidden = false
  }
}

The request object works for any image that the handler in Step 4 passes to it, so you only need to define it once, as a lazy var.

The request object’s completion handler receives request and error objects. You check that request.results is an array of VNClassificationObservation objects, which is what the Vision framework returns when the Core ML model is a classifier, rather than a predictor or image processor.

A VNClassificationObservation object has two properties: identifier, a String, and confidence, a number between 0 and 1 giving the probability that the classification is correct. You take the first result, which has the highest confidence value, and dispatch back to the main queue to update predictLabel. Classification work happens off the main queue, because it can be slow.

Step 4. Create and run a VNImageRequestHandler:

Locate predictTapped(), and replace the print statement with the following code:

let ciImage = CIImage(cgImage: inputImage)
let handler = VNImageRequestHandler(ciImage: ciImage)
do {
  try handler.perform([classificationRequest])
} catch {
  print(error)
}

You create a CIImage from inputImage, create a VNImageRequestHandler object for that ciImage, then call the handler’s perform(_:) method with an array of VNCoreMLRequest objects; in this case, that’s just the one request object you created in Step 3.

Build and run. Draw a digit in the center of the drawing area, then tap Predict. Tap Clear to try again.

Larger drawings tend to work better, but the model often has trouble with ‘7’ and ‘4’. That’s not surprising: a PCA visualization of the MNIST data shows 7s and 4s clustered with 9s.
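If you want to see that clustering for yourself, here’s a minimal sketch you could run back in the notebook. It assumes scikit-learn and Matplotlib are available (neither is used elsewhere in this tutorial) and projects a subset of the flattened training images onto their first two principal components, colored by digit label.

import matplotlib.pyplot as plt
from keras.datasets import mnist
from sklearn.decomposition import PCA

# Flatten each 28x28 image into a 784-dimensional vector scaled to [0, 1].
(x_train, y_train), _ = mnist.load_data()
x_flat = x_train.reshape(len(x_train), -1) / 255.0

# Project a subset onto the first two principal components.
pca = PCA(n_components=2)
projected = pca.fit_transform(x_flat[:5000])

# Scatter plot, colored by digit label; 4s, 7s and 9s overlap heavily.
plt.scatter(projected[:, 0], projected[:, 1], c=y_train[:5000], cmap='tab10', s=4)
plt.colorbar(label='digit')
plt.show()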

If you don’t use Vision, include image_scale=1/255.0 as a parameter when you convert the Keras model to Core ML: the Keras model trains on images with grayscale values in the range [0, 1], while CVPixelBuffer values are in the range [0, 255].
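For example, the scale factor slots into the coremltools conversion call like this. The input, output and class-label arguments below are placeholders; match them to the ones you used in the conversion step earlier in this tutorial.

import coremltools

# Hypothetical conversion call: adjust the names and labels so they match
# the convert() call you ran earlier in the notebook.
coreml_mnist = coremltools.converters.keras.convert(
    model,                                     # the trained Keras model
    input_names=['image'],
    image_input_names='image',                 # treat the input as an image
    output_names=['output'],
    class_labels=[str(d) for d in range(10)],
    image_scale=1/255.0)                       # rescale [0, 255] pixels to [0, 1]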

Note: Malireddi says the Vision framework uses 20% more CPU, so his app includes an extension to convert a UIImage object to CVPixelBuffer format.

Thanks to Sri Raghu M, Matthijs Hollemans and Hon Weng Chong for helpful discussions!

Where To Go From Here?

You can download the complete notebook and project for this tutorial here. If the model shows up as missing in the app, replace it with the one in the notebook folder.

You’re now well-equipped to train a deep learning model in Keras, and integrate it into your app. Here are some resources and further reading to deepen your own learning:

Resources

Further Reading

I hope you enjoyed this introduction to machine learning and Keras. Please join the discussion below if you have any questions or comments.