Face Detection Tutorial Using the Vision Framework for iOS

In this tutorial, you’ll learn how to use the Vision framework to detect faces and facial features, and to overlay the results on the camera feed in real time. By Yono Mittlefehldt.


Using Detected Faces

Face detection is something you’ve probably been seeing more of recently. It can be especially useful for image processing, when you want to really make the people in the images shine.

But you’re going to do something way cooler than that. You’re going to shoot lasers out of your eyes!

Time to get started.

While still in FaceDetectionViewController.swift, right below updateFaceView(for:), add the following method:

// 1
func updateLaserView(for result: VNFaceObservation) {
  // 2
  laserView.clear()
    
  // 3
  let yaw = result.yaw ?? 0.0
    
  // 4
  if yaw == 0.0 {
    return
  }
    
  // 5
  var origins: [CGPoint] = []
    
  // 6
  if let point = result.landmarks?.leftPupil?.normalizedPoints.first {
    let origin = landmark(point: point, to: result.boundingBox)
    origins.append(origin)
  }
    
  // 7
  if let point = result.landmarks?.rightPupil?.normalizedPoints.first {
    let origin = landmark(point: point, to: result.boundingBox)
    origins.append(origin)
  }
}

Whew! That was quite a bit of code. Here’s what you did with it:

  1. Define a new method that will update the LaserView. It’s a bit like updateFaceView(for:).
  2. Clear the LaserView.
  3. Get the yaw from the result. The yaw is a number that tells you how much your face is turned. If it’s negative, you’re looking to the left. If it’s positive, you’re looking to the right.
  4. Return if the yaw is 0.0. If you’re looking straight forward, no face lasers. 😞
  5. Create an array to store the origin points of the lasers.
  6. Add a laser origin based on the left pupil.
  7. Add a laser origin based on the right pupil.
NOTE: Although the Vision framework includes a left and right pupil among detected face landmarks, it turns out that these are just the geometric centers of the eyes. They’re not actually detected pupils. If you were to keep your head still, but look to the left or right, the pupils returned in the VNFaceObservation would not move.
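The landmark(point:to:) helper used in steps 6 and 7 comes from earlier in this tutorial; it converts a landmark point, which is normalized to the face’s bounding box, into absolute coordinates. If you’re curious about the underlying math, here’s a rough sketch — the function name absolutePoint(for:boundingBox:imageSize:) and the direct arithmetic are assumptions for illustration, and Vision also provides VNImagePointForFaceLandmarkPoint to do this conversion for you:

```swift
import CoreGraphics

// Hypothetical sketch: converting a landmark point (normalized to the
// face bounding box) into absolute image coordinates. The bounding box
// is itself normalized to the image, so the conversion nests twice.
func absolutePoint(for normalized: CGPoint,
                   boundingBox: CGRect,
                   imageSize: CGSize) -> CGPoint {
  // First place the point inside the normalized bounding box,
  // then scale up to the image's pixel dimensions.
  let x = (boundingBox.origin.x + normalized.x * boundingBox.width) * imageSize.width
  let y = (boundingBox.origin.y + normalized.y * boundingBox.height) * imageSize.height
  return CGPoint(x: x, y: y)
}
```

The tutorial’s own helper additionally maps the result into the preview layer’s coordinate space, but the nested normalization above is the core idea.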

OK, you’re not quite done with that method, yet. You’ve determined the origin of the lasers. However, you still need to add logic to figure out where the lasers will be focused.

At the end of your newly created updateLaserView(for:), add the following code:

// 1
// Guard against an empty array: if neither pupil was detected,
// dividing by origins.count would produce NaN.
guard !origins.isEmpty else {
  return
}

let avgY = origins.map { $0.y }.reduce(0.0, +) / CGFloat(origins.count)

// 2
let focusY = (avgY < midY) ? 0.75 * maxY : 0.25 * maxY

// 3
let focusX = (yaw.doubleValue < 0.0) ? -100.0 : maxX + 100.0
    
// 4
let focus = CGPoint(x: focusX, y: focusY)
    
// 5
for origin in origins {
  let laser = Laser(origin: origin, focus: focus)
  laserView.add(laser: laser)
}

// 6
DispatchQueue.main.async {
  self.laserView.setNeedsDisplay()
}

Here you:

  1. Calculate the average y coordinate of the laser origins.
  2. Determine what the y coordinate of the laser focus point will be based on the average y of the origins. If your pupils are above the middle of the screen, you'll shoot down. Otherwise, you'll shoot up. You calculated midY in viewDidLoad().
  3. Calculate the x coordinate of the laser focus based on the yaw. If you're looking left, you should shoot lasers to the left.
  4. Create a CGPoint from your two focus coordinates.
  5. Generate some Lasers and add them to the LaserView.
  6. Tell the iPhone that the LaserView should be redrawn.
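The properties midY, maxY and maxX are view-geometry values you set up earlier in the tutorial. As a reminder, here’s a minimal sketch of how they might be captured in viewDidLoad() — the exact derivation shown is an assumption:

```swift
// Hypothetical sketch: screen-geometry properties on
// FaceDetectionViewController, captured once the view is loaded.
var maxX: CGFloat = 0.0
var midY: CGFloat = 0.0
var maxY: CGFloat = 0.0

override func viewDidLoad() {
  super.viewDidLoad()

  // Cache the view bounds so the detection callback can use them
  // without touching UIKit off the main thread.
  maxX = view.bounds.maxX
  midY = view.bounds.midY
  maxY = view.bounds.maxY
}
```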

Now you need to call this method from somewhere. detectedFace(request:error:) is the perfect place! In that method, replace the call to updateFaceView(for:) with the following:

if faceViewHidden {
  updateLaserView(for: result)
} else {
  updateFaceView(for: result)
}

This logic chooses which update method to call based on whether or not the FaceView is hidden.
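The starter project already toggles faceViewHidden when you tap the screen. For reference, a minimal sketch of what that tap handler might look like — the method name and wiring here are assumptions:

```swift
// Hypothetical sketch: toggling between the FaceView and LaserView
// overlays on tap.
@IBAction func handleTap(_ sender: UITapGestureRecognizer) {
  // Flip which overlay is visible.
  faceView.isHidden.toggle()
  laserView.isHidden.toggle()

  // Cache the state so detectedFace(request:error:) can branch on it
  // without reading UIKit properties off the main thread.
  faceViewHidden = faceView.isHidden
}
```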

Currently, if you were to build and run, you would only shoot invisible lasers out of your eyes. While that sounds pretty cool, wouldn't it be better to see the lasers?

To fix this, you need to tell the iPhone how to draw the lasers.
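Before diving into the drawing code, recall the model it works with: LaserView keeps an array of Laser values, each just an origin and a focus point. The starter project defines these for you; a rough sketch of what they might look like — the details here are assumptions:

```swift
import UIKit

// Hypothetical sketch of the model the starter project provides:
// a Laser is a pair of points, and LaserView stores the lasers
// that draw(_:) will render.
struct Laser {
  var origin: CGPoint
  var focus: CGPoint
}

class LaserView: UIView {
  private var lasers: [Laser] = []

  func add(laser: Laser) {
    lasers.append(laser)
  }

  func clear() {
    lasers.removeAll()
    DispatchQueue.main.async {
      self.setNeedsDisplay()
    }
  }
}
```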

Open LaserView.swift and find the draw(_:) method. It should be completely empty. Now add the following code to it:

// 1
guard let context = UIGraphicsGetCurrentContext() else {
  return
}
    
// 2
context.saveGState()

// 3
for laser in lasers {
  // 4
  context.addLines(between: [laser.origin, laser.focus])
      
  context.setStrokeColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 0.5)
  context.setLineWidth(4.5)
  context.strokePath()
      
  // 5
  context.addLines(between: [laser.origin, laser.focus])
      
  context.setStrokeColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 0.8)
  context.setLineWidth(3.0)
  context.strokePath()
}

// 6
context.restoreGState()

With this drawing code, you:

  1. Get the current graphics context.
  2. Push the current graphics state onto the stack.
  3. Loop through the lasers in the array.
  4. Draw a thicker white line in the direction of the laser.
  5. Then draw a slightly thinner red line over the white line to give it a cool laser effect.
  6. Pop the saved graphics state off the stack to restore the context to its original state.

That's it. Build and run time!

Tap anywhere on the screen to switch to Lasers mode.

Great job!

Where to Go From Here?

You can do all sorts of things with your newfound knowledge. Imagine combining the face detection with depth data from the camera to create cool effects focused around the people in your photos. To learn more about using depth data, check out this tutorial on working with image depth maps and this tutorial on working with video depth maps.

Or how about trying out a Vision and CoreML tag team? That sounds really cool, right? If that piques your interest, we have a tutorial for that!

You could learn how to do face tracking using ARKit with this awesome tutorial.

There are, of course, plenty of other Vision APIs you can play with. Now that you have a foundational knowledge of how to use them, you can explore them all!

We hope you enjoyed this tutorial and, if you have any questions or comments, please join the forum discussion below!