Using CoreML and the Vision framework to detect objects in real time
We’ve seen what CoreML can do for object detection, but we can go a step further. Apple’s Vision framework offers a rich set of detection tools, ranging from face and landmark detection in images to object tracking in video.
With the latter, object tracking, the Vision framework lets us take models built for CoreML and run them against a live video feed to identify and track the objects they recognize.
In this section, we’ll combine everything we’ve learned so far, from how AVFoundation works to implementing CoreML, to build a real-time object detection app using the device’s camera.
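To preview where we’re headed, here is a minimal sketch of how a CoreML model is wrapped in a Vision request and fed camera frames. The model class name `ObjectDetector` is a placeholder for whichever .mlmodel file you add to your project (Xcode generates a Swift class matching the file name); everything else uses standard Vision APIs.

```swift
import Vision
import CoreML

// Hypothetical model class: Xcode generates one per .mlmodel file you add.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let config = MLModelConfiguration()
    let model = try VNCoreMLModel(for: ObjectDetector(configuration: config).model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        // An object detection model yields VNRecognizedObjectObservation results,
        // each carrying a normalized bounding box and ranked class labels.
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            print(observation.labels.first?.identifier ?? "unknown",
                  observation.boundingBox)
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}

// Inside the AVCaptureVideoDataOutputSampleBufferDelegate callback, each
// camera frame (a CVPixelBuffer) is handed to Vision like so:
//   let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
//   try? handler.perform([request])
```

We’ll build up each of these pieces, the capture session, the delegate, and the request, step by step over the course of this section.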
Getting ready
For this section, you’ll need the latest version of Xcode available from the Mac App Store.
Next, head over to the Apple Developer portal at the following address: https...