Image Depth Maps Tutorial for iOS: Getting Started
Learn how you can use the incredibly powerful image manipulation frameworks on iOS to use image depth maps with only a few lines of code. By Owen L Brown.
Contents
- Getting Started
- Reading Depth Data
- Implementing the Depth Data
- How Does the iPhone Do This?
- Depth vs Disparity
- Creating a Mask
- Setting up the Left Side of the Mask
- Setting up the Right Side of the Mask
- Combining the Two Masks
- Your First Depth-Inspired Filter
- Color Highlight Filter
- Change the Focal Length
- More About AVDepthData
- Where to Go From Here?
Color Highlight Filter
Return to DepthImageFilters.swift and add the following new method:
func createColorHighlight(
  for image: SampleImage,
  withFocus focus: CGFloat
) -> UIImage? {
  let mask = createMask(for: image, withFocus: focus)
  let grayscale = image.filterImage.applyingFilter("CIPhotoEffectMono")
  let output = image.filterImage.applyingFilter(
    "CIBlendWithMask",
    parameters: [
      "inputBackgroundImage": grayscale,
      "inputMaskImage": mask
    ])
  guard let cgImage = context.createCGImage(output, from: output.extent) else {
    return nil
  }
  return UIImage(cgImage: cgImage)
}
This should look familiar. It’s almost exactly the same as createSpotlightImage(for:withFocus:), which you just wrote. The difference is that this time, you set the background image to be a grayscale version of the original image.
This filter will show full color at the focal point based on the slider position, and fade to gray from there.
Open DepthImageViewController.swift and, in the same switch statement, replace the code for (.filtered, .color) with the following:
return depthFilters.createColorHighlight(for: image, withFocus: focus)
This calls your new filter method and displays the result.
Build and run to see the magic:
Don’t you hate it when you take a picture only to discover later that the camera focused on the wrong object? What if you could change the focus after the fact?
That’s exactly what the depth-inspired filter you’ll write next does!
Change the Focal Length
Under createColorHighlight(for:withFocus:) in DepthImageFilters.swift, add one last method:
func createFocalBlur(
  for image: SampleImage,
  withFocus focus: CGFloat
) -> UIImage? {
  // 1
  let mask = createMask(for: image, withFocus: focus)
  // 2
  let invertedMask = mask.applyingFilter("CIColorInvert")
  // 3
  let output = image.filterImage.applyingFilter(
    "CIMaskedVariableBlur",
    parameters: [
      "inputMask": invertedMask,
      "inputRadius": 15.0
    ])
  // 4
  guard let cgImage = context.createCGImage(output, from: output.extent) else {
    return nil
  }
  // 5
  return UIImage(cgImage: cgImage)
}
This filter is a little different from the other two.
1. First, you get the initial mask that you’ve used previously.
2. You then use CIColorInvert to invert the mask.
3. Then you apply CIMaskedVariableBlur, a filter that was new with iOS 11. It blurs using a radius equal to inputRadius multiplied by the mask’s pixel value. When the mask pixel value is 1.0, the blur is at its maximum, which is why you needed to invert the mask first.
4. Once again, you generate a CGImage using CIContext.
5. You use that CGImage to create a UIImage and return it.
Before you can run, you need to once again update the switch statement back in DepthImageViewController.swift. To use your shiny new method, change the code under (.focused, .blur) to:
return depthFilters.createFocalBlur(for: image, withFocus: focus)
Build and run.
It’s… so… beautiful!
More About AVDepthData
Remember how you scaled the mask in createMask(for:withFocus:)? You had to do this because the depth data captured by the iPhone is a lower resolution than the sensor resolution. It’s closer to 0.5 megapixels than the 12 megapixels the camera can take.
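To make that concrete, here is a minimal sketch of how a lower-resolution depth image could be upscaled to match the full photo using CILanczosScaleTransform. It’s not part of the sample project (the tutorial’s createMask(for:withFocus:) already handles the scaling for you), and depthImage and fullSize are assumed placeholders:

import CoreImage
import CoreGraphics

// Hypothetical helper, not in the sample project: upscale a depth CIImage so
// its extent matches the full-resolution photo.
func scaleDepthImage(_ depthImage: CIImage, to fullSize: CGSize) -> CIImage {
  let scaleY = fullSize.height / depthImage.extent.height
  let scaleX = fullSize.width / depthImage.extent.width
  return depthImage.applyingFilter(
    "CILanczosScaleTransform",
    parameters: [
      kCIInputScaleKey: scaleY,                // vertical scale factor
      kCIInputAspectRatioKey: scaleX / scaleY  // corrects the horizontal scale
    ])
}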
Another important thing to know is the data can be filtered or unfiltered. Unfiltered data may have holes represented by NaN, which stands for Not a Number — a possible value in floating point data types. If the phone can’t correlate two pixels or if something obscures just one of the cameras, it uses these NaN values for disparity.
Pixels with a value of NaN display as black. Since multiplying by NaN will always result in NaN, these black pixels will propagate to your final image, and they’ll look like holes.
As this can be a pain to deal with, Apple gives you filtered data, when available, to fill in these gaps and smooth out the data.
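If you do end up with unfiltered data and want to clean it up yourself, one approach (a hedged sketch, not something the sample project needs) is to walk the disparity buffer and zero out the NaN holes before using it:

import CoreVideo

// Hypothetical helper: assumes a kCVPixelFormatType_DisparityFloat32 buffer.
// Replaces NaN holes with 0 so they can't propagate through later filtering.
func zeroOutNaNs(in pixelBuffer: CVPixelBuffer) {
  CVPixelBufferLockBaseAddress(pixelBuffer, [])
  defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

  guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
  let width = CVPixelBufferGetWidth(pixelBuffer)
  let height = CVPixelBufferGetHeight(pixelBuffer)
  let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)

  for row in 0..<height {
    let rowPointer = (base + row * bytesPerRow)
      .assumingMemoryBound(to: Float32.self)
    for column in 0..<width where rowPointer[column].isNaN {
      rowPointer[column] = 0
    }
  }
}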
If you’re unsure, check isDepthDataFiltered to find out if you’re dealing with filtered or unfiltered data.
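For example, here’s a minimal sketch of how you might inspect an AVDepthData instance you’ve obtained elsewhere, say from a captured photo. The function name and print statements are illustrative only:

import AVFoundation

// Hypothetical helper: report whether the depth data is filtered and convert
// it to 32-bit disparity so the pixel format is predictable.
func inspectDepthData(_ depthData: AVDepthData) {
  if depthData.isDepthDataFiltered {
    print("Filtered depth data: NaN holes have been filled in.")
  } else {
    print("Unfiltered depth data: expect possible NaN holes.")
  }

  let disparityData = depthData.converting(
    toDepthDataType: kCVPixelFormatType_DisparityFloat32)
  let map = disparityData.depthDataMap
  print("Disparity map: \(CVPixelBufferGetWidth(map)) x \(CVPixelBufferGetHeight(map))")
}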
Where to Go From Here?
Download the final project using the Download Materials button at the top or bottom of this tutorial.
There are tons more Core Image filters available. Check Apple’s Core Image Filter Reference for a complete list. Many of these filters create interesting effects when you combine them with depth data.
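For instance, here’s a hedged sketch of one such combination. Assuming the same SampleImage, createMask(for:withFocus:) and context from this tutorial, it swaps the grayscale background from createColorHighlight(for:withFocus:) for CIComicEffect, so everything outside the focal plane turns into a comic-book drawing:

// Hypothetical variation, not part of the sample project.
func createComicBackground(
  for image: SampleImage,
  withFocus focus: CGFloat
) -> UIImage? {
  let mask = createMask(for: image, withFocus: focus)
  let comic = image.filterImage.applyingFilter("CIComicEffect")
  let output = image.filterImage.applyingFilter(
    "CIBlendWithMask",
    parameters: [
      "inputBackgroundImage": comic,
      "inputMaskImage": mask
    ])
  guard let cgImage = context.createCGImage(output, from: output.extent) else {
    return nil
  }
  return UIImage(cgImage: cgImage)
}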
Additionally, you can capture depth data with video, too! Think of the possibilities.
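As a rough sketch of what that could look like (the class name and queue label here are made up, and you’d still need a configured capture session with a depth-capable camera), AVCaptureDepthDataOutput streams AVDepthData alongside the video frames:

import AVFoundation

// Hypothetical example of streaming depth data during video capture.
final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
  let depthOutput = AVCaptureDepthDataOutput()

  func attach(to session: AVCaptureSession) {
    guard session.canAddOutput(depthOutput) else { return }
    session.addOutput(depthOutput)
    // Ask for filtered (hole-free) depth data, as discussed above.
    depthOutput.isFilteringEnabled = true
    depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth.queue"))
  }

  func depthDataOutput(
    _ output: AVCaptureDepthDataOutput,
    didOutput depthData: AVDepthData,
    timestamp: CMTime,
    connection: AVCaptureConnection
  ) {
    // Each callback delivers a fresh depth map you could feed into your filters.
    print("Depth frame at \(timestamp.seconds)")
  }
}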
I hope you had fun building these image filters. If you have any questions or comments, please join the forum discussion below!