Machine Learning by Tutorials

Second Edition · iOS 13 · Swift 5.1 · Xcode 11


12. Training a Model for Sequence Classification
Written by Chris LaPollo


In the previous chapter, you learned about collecting and analyzing sequences of data, both crucial parts of successfully using machine learning. This chapter introduces a new type of neural network specifically designed for sequential data, and you’ll build one to classify the data you collected as device motions.

If you’re jumping into this chapter without having gone through the previous one, you’ll need a Python environment with access to Turi Create. We’ll assume you have one named turienv; if you don’t, you can create it now using the file at projects/notebooks/turienv.yaml. If you’re unsure how to do so, refer back to Chapter 4, “Getting Started with Python & Turi Create.”

Creating a model

You’ve got access to a clean dataset — either the one you made in the previous chapter or one we’ll provide for you — and now you’re ready to train a model. Or maybe several models until you find one that works well. This section shows how to use Turi Create’s task-focused API to train a model for activity detection.

Note: Training your own model is highly recommended, especially if you collected data to add to the provided dataset. But if for whatever reason you skipped the previous chapter, you can find a trained model named GestureClassifier.mlmodel inside the notebooks/pre-trained subfolder of this chapter’s resources.

In this section you’ll continue working with Jupyter in your turienv Anaconda environment. Create a new notebook in the notebooks folder of the chapter resources. If you’d like to see how we trained our provided model, you can check out the completed notebook notebooks/Model_Training_Complete.ipynb.

Import the same packages as you used in the previous chapter’s notebook:

import turicreate as tc
import activity_detector_utils as utils

Then run the following code to load your training, validation and testing datasets:

train_sf = tc.SFrame("data/cleaned_train_sframe")
valid_sf = tc.SFrame("data/cleaned_valid_sframe")
test_sf = tc.SFrame("data/cleaned_test_sframe")

As mentioned in the previous chapter, Turi Create stores structured data in SFrame objects. There are various ways to create such objects — here you load them directly from the binary files you previously saved. If you’d prefer to use the files supplied with the resources, change the paths to pre-trained/data/cleaned_train_sframe, pre-trained/data/cleaned_valid_sframe and pre-trained/data/cleaned_test_sframe.

Training any classifier involves using separate datasets for training, validation and testing. But sequential data introduces a few wrinkles that require some explanation.

Splitting sequential data

If you’ve ever trained an image classifier, you may have divided the images into training, validation and test sets randomly. Or maybe those sets were provided for you, in which case someone else divided them randomly.

Data collected from two users both performing the same activity — step up exercises

But sometimes…

And now, in a shocking plot twist, you’re about to be told to sometimes do what you were just told not to do — train and validate on data from the same people! What?!

train, valid = tc.activity_classifier.util.random_split_by_session(
  train_sf, session_id='sessionId', fraction=0.9)
Random train/validation split counts
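Conceptually, random_split_by_session does something like the following sketch. This is not Turi Create’s actual implementation — just a minimal pure-Python illustration of the key idea: whole sessions, never individual rows, get assigned to either the training or validation set.

```python
import random

def split_by_session(rows, session_key="sessionId", fraction=0.9, seed=42):
    """Assign whole sessions (never individual rows) to train or validation."""
    sessions = sorted({row[session_key] for row in rows})
    random.Random(seed).shuffle(sessions)
    n_train = int(len(sessions) * fraction)
    train_ids = set(sessions[:n_train])
    train = [r for r in rows if r[session_key] in train_ids]
    valid = [r for r in rows if r[session_key] not in train_ids]
    return train, valid

# Toy dataset: 10 sessions of 5 samples each
rows = [{"sessionId": s, "accelX": 0.0} for s in range(10) for _ in range(5)]
train, valid = split_by_session(rows)  # 9 whole sessions train, 1 validation
```

Because the split happens at the session level, no session’s samples ever straddle the train/validation boundary — which is exactly the property you need when rows within a session are correlated.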

Training the model

Now it’s time to build and train your model. Almost.

model = tc.activity_classifier.create(
  dataset=train_sf, session_id='sessionId', target='activity',
  features=[
    "rotX", "rotY", "rotZ", "accelX", "accelY", "accelZ"],
  prediction_window=20, validation_set=valid_sf,
  max_iterations=20)
Rotations for 100 samples of ‘shake_it’, ‘chop_it’ and ‘drive_it’ activities from the training dataset
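The prediction_window=20 argument tells the classifier to emit one prediction per chunk of 20 consecutive samples, rather than one per sample. Turi Create handles this windowing internally; the following pure-Python sketch (not Turi Create code) just illustrates how a session’s samples get chunked:

```python
def make_windows(samples, prediction_window=20):
    """Chunk one session's samples into fixed-size windows; the classifier
    produces a single activity prediction per window."""
    return [samples[i:i + prediction_window]
            for i in range(0, len(samples), prediction_window)]

session = list(range(50))        # stand-in for 50 motion samples
windows = make_windows(session)  # windows of 20, 20 and 10 samples
```

Note that the final window may be shorter than the prediction window when the session length isn’t an exact multiple of it.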

Initial training output

metrics = model.evaluate(test_sf)
print(metrics['accuracy'])
print(metrics)
Confusion matrix for trained model
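A confusion matrix is just a tally of (actual, predicted) label pairs — the diagonal counts correct predictions, everything else a specific kind of mistake. Here’s a minimal sketch (with made-up labels from this chapter’s gesture names) of how such a tally is computed:

```python
from collections import Counter

def confusion_matrix(actual, predicted):
    """Tally (actual, predicted) label pairs; diagonal entries are correct."""
    return Counter(zip(actual, predicted))

# Hypothetical evaluation results for illustration only
actual    = ["chop_it", "chop_it", "drive_it", "shake_it"]
predicted = ["chop_it", "drive_it", "drive_it", "shake_it"]
cm = confusion_matrix(actual, predicted)
# cm[("chop_it", "drive_it")] counts chop_it gestures mistaken for drive_it
```

Reading the off-diagonal cells tells you which gestures the model confuses with each other — often more actionable than a single accuracy number.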

model.export_coreml("GestureClassifier.mlmodel")
model.save("GestureClassifier")

Getting to know your model

Open the GestureIt starter project in Xcode. If you’ve gone through the chapters leading up to this one, then you’ve already practiced adding Core ML models to your projects — find the GestureClassifier.mlmodel file you created when you saved your trained model in the previous section and drag it into Xcode. Or, if you’d like to use the model we trained on the provided dataset, add notebooks/pre-trained/GestureClassifier.mlmodel instead.

Looking at the mlmodel file

Recurrent neural networks

So far in this book you’ve mostly dealt with convolutional neural networks — CNNs. They’re great for recognizing spatial relationships in data, such as how differences in value between nearby pixels in a two-dimensional grid can indicate the presence of an edge in an image, and nearby edges in certain configurations can indicate the ear of a dog, etc. Another kind of network, called a recurrent neural network — RNN — is designed to recognize temporal relationships. Remember, a sequence generally implies the passage of time, so this really just means they recognize relationships between items in a sequence.
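The core idea of an RNN can be shown in a few lines: at each time step, the layer combines the current input with the hidden state carried over from the previous step, using the same weights every step. This NumPy sketch is a bare-bones illustration, not any production RNN implementation; the sizes (6 features, 8 hidden units) are arbitrary choices echoing this chapter’s six motion features:

```python
import numpy as np

def rnn_step(x, h, Wx, Wh, b):
    """One recurrent step: the new hidden state mixes the current input x
    with the hidden state h carried over from the previous step."""
    return np.tanh(x @ Wx + h @ Wh + b)

rng = np.random.default_rng(0)
n_features, n_hidden = 6, 8   # e.g. rotX/Y/Z + accelX/Y/Z per sample
Wx = rng.normal(size=(n_features, n_hidden))
Wh = rng.normal(size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                         # initial hidden state
sequence = rng.normal(size=(20, n_features))   # 20 time steps of input
for x in sequence:
    h = rnn_step(x, h, Wx, Wh, b)              # same weights at every step
```

The loop is why RNNs are drawn either as a layer with an arrow back into itself or “unrolled” as a chain of identical layers — both depict the same weight-sharing recurrence.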

Looping nature of RNN layers

RNN layer's recurrent behavior shown as separate layers

Long short-term memory

The acronym LSTM stands for the odd-sounding phrase long short-term memory, and it refers to a different kind of recurrent unit capable of dealing with relationships separated by longer distances in the sequence. Conceptually, the following diagram shows the pertinent details of how an LSTM works.

LSTM layer's recurrent behavior shown as separate layers
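The diagram’s gates can be written out in a short NumPy sketch. This is the standard textbook formulation of a single LSTM step, simplified for illustration (real implementations fuse and optimize these operations): a forget gate decides what to drop from the long-term cell state, an input gate decides what new information to store, and an output gate decides what to expose as the hidden state.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step over input x, hidden state h and cell state c."""
    n = h.size
    z = np.concatenate([x, h]) @ W + b  # compute all four gates at once
    f = sigmoid(z[0:n])                 # forget gate: what to drop from c
    i = sigmoid(z[n:2 * n])             # input gate: what to store in c
    g = np.tanh(z[2 * n:3 * n])         # candidate values to store
    o = sigmoid(z[3 * n:4 * n])         # output gate: what to expose as h
    c = f * c + i * g                   # update long-term cell state
    h = o * np.tanh(c)                  # short-term hidden state (output)
    return h, c

rng = np.random.default_rng(1)
n_features, n_hidden = 6, 8             # arbitrary illustrative sizes
W = rng.normal(size=(n_features + n_hidden, 4 * n_hidden))
b = np.zeros(4 * n_hidden)
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in rng.normal(size=(20, n_features)):
    h, c = lstm_step(x, h, c, W, b)
```

The separate cell state c is what lets LSTMs carry information across long spans of the sequence: the forget gate can keep it nearly untouched for many steps, sidestepping the vanishing gradients that plague plain RNNs.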

Turi Create’s activity classifier

So far we’ve been discussing RNNs — and more specifically, LSTMs — as deep learning’s solution to working with sequences. But it turns out that’s not the whole story.

Turi Create's activity classifier architecture

A note on sequence classification

In the previous section you learned about the model architecture of Turi Create’s activity classifier. Recall how the final layer had a node for each class the model recognizes, with a softmax activation to produce a probability distribution over them.
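Softmax itself is a one-liner worth seeing concretely: it exponentiates each class’s raw score and normalizes so the outputs sum to one. A minimal NumPy sketch, with made-up scores for this chapter’s three gestures:

```python
import numpy as np

def softmax(logits):
    """Convert raw final-layer scores into a probability distribution."""
    exps = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exps / exps.sum()

# Hypothetical final-layer scores for chop_it, drive_it, shake_it
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)  # probabilities summing to 1, largest for chop_it
```

Because softmax is monotonic, the class with the highest raw score always gets the highest probability — the normalization just makes the scores interpretable and comparable across predictions.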

Key points

  • Turi Create’s activity classification API can help you easily make models capable of recognizing human activity from motion data. However, it can be used for more than just human activity detection — it’s basically a generic classifier for numeric sequences.
  • Try isolating data from a single source into one of the train, validation or test sets.
  • Prefer a balanced class representation. In cases where that’s not possible, evaluate your model with techniques other than accuracy, such as precision and recall.
  • Sample/shuffle sequential data as full sequences, not as individual rows.
  • First train on just a small portion of your training set and make sure you can get the model to overfit. That’s the best way to find problems with your model, because if it can’t overfit to a small dataset, then you likely need to make changes before it will be able to learn at all.
  • Train multiple models and run multiple experiments until you find what works best for your app.
  • RNNs process data serially, so they’re slower than CNNs during both training and inference.
  • One-dimensional convolutions are commonly used to extract temporal features from sequences prior to passing them into RNNs.
  • RNNs are a good choice for sequential data, with LSTMs being the most commonly used variant because they train (relatively) easily and perform well. However, they are not the only models that work well for sequences.
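The key point about precision and recall deserves a concrete sketch. When one class dominates your data, accuracy can look great while the model misses most instances of a rare gesture; precision and recall expose that. A minimal illustration with made-up labels:

```python
def precision_recall(actual, predicted, positive):
    """Precision: of the windows predicted as `positive`, how many were right.
    Recall: of the windows actually `positive`, how many were found."""
    pairs = list(zip(actual, predicted))
    tp = sum(a == positive and p == positive for a, p in pairs)
    fp = sum(a != positive and p == positive for a, p in pairs)
    fn = sum(a == positive and p != positive for a, p in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical predictions: the model never falsely claims chop_it
# (precision 1.0) but misses one real chop_it (recall 2/3)
actual    = ["rest", "chop_it", "chop_it", "rest", "chop_it"]
predicted = ["rest", "chop_it", "rest",    "rest", "chop_it"]
p, r = precision_recall(actual, predicted, "chop_it")
```

Computing these per class, rather than a single overall accuracy, tells you whether the model is quietly sacrificing a rare class to score well on a common one.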

Where to go from here?

You’ve collected some data and created a model. Now it’s time to actually use that model in an app — a game that recognizes player actions from device motion. When you’re ready, see you in the next chapter!

© 2024 Kodeco Inc.
