Saliency Analysis in iOS using Vision

In this tutorial, you’ll learn how to use the Vision framework in iOS to perform saliency analysis and use it to create an effect on a live video feed. By Yono Mittlefehldt.

Where to Go From Here?

Congratulations! You've learned a lot, done a ton of coding and created a cool effect along the way. What now?

You could create more effects using the heat maps and potentially even combine saliency data with depth map data to create even cooler effects. If you'd like some ideas for other effects, check out our tutorials Image Depth Maps Tutorial for iOS: Getting Started and Video Depth Maps Tutorial for iOS: Getting Started.
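For instance, a focus-style effect could use the heat map as a blend mask: the image stays sharp where it's salient and gets blurred everywhere else. Here's a minimal sketch of that idea, assuming you already have a `CIImage` to work with; the `focusEffect(for:)` helper name is purely illustrative and not part of the tutorial's sample project.

```swift
import Vision
import CoreImage
import CoreImage.CIFilterBuiltins

// Illustrative helper: blurs the non-salient regions of an image,
// using the saliency heat map as a blend mask.
func focusEffect(for image: CIImage) -> CIImage? {
  let request = VNGenerateAttentionBasedSaliencyImageRequest()
  let handler = VNImageRequestHandler(ciImage: image, options: [:])
  try? handler.perform([request])

  guard let observation =
    request.results?.first as? VNSaliencyImageObservation else {
    return nil
  }

  // The heat map is tiny compared to the source image,
  // so scale it up to match the image's extent.
  var mask = CIImage(cvPixelBuffer: observation.pixelBuffer)
  let scaleX = image.extent.width / mask.extent.width
  let scaleY = image.extent.height / mask.extent.height
  mask = mask.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

  // Blend: sharp image where the mask is bright, blurred elsewhere.
  let blend = CIFilter.blendWithMask()
  blend.inputImage = image
  blend.backgroundImage = image.applyingGaussianBlur(sigma: 10)
  blend.maskImage = mask
  return blend.outputImage
}
```

Swapping in a depth map, or multiplying the saliency mask with one, is where the depth tutorials linked above come in handy.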

You could also try to create an auto-crop or auto-focus feature using the saliency analysis data. The robotic iPhone world is your oyster!
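As one possible starting point for an auto-crop, the attention-based saliency request also reports bounding boxes for salient regions, which you can convert to pixel coordinates and crop to. The sketch below assumes a `CGImage` input; `autoCrop(_:)` is a hypothetical helper, not something from the tutorial's project.

```swift
import Vision
import CoreGraphics

// Illustrative helper: crops a CGImage to the most salient region
// reported by the attention-based saliency request.
func autoCrop(_ cgImage: CGImage) -> CGImage? {
  let request = VNGenerateAttentionBasedSaliencyImageRequest()
  let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
  try? handler.perform([request])

  guard
    let observation = request.results?.first as? VNSaliencyImageObservation,
    let salientRect = observation.salientObjects?.first?.boundingBox
  else {
    return nil
  }

  // The bounding box is normalized with a lower-left origin; convert it
  // to pixel coordinates, then flip the y-axis for CGImage cropping.
  var cropRect = VNImageRectForNormalizedRect(salientRect,
                                              cgImage.width,
                                              cgImage.height)
  cropRect.origin.y = CGFloat(cgImage.height) - cropRect.maxY
  return cgImage.cropping(to: cropRect)
}
```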

If you have any questions or comments, please join the forum discussion below.