Using CoreML and Vision in Swift
The Swift programming language has come a long way since its first introduction, yet compared with many other programming languages, it is still in its infancy.
Even so, with every release, and with the backing of the open source community, Swift has gone from strength to strength in a short period of time. One of those strengths is machine learning.
In this chapter, we’re going to look at Apple’s offering for machine learning – CoreML – and how we can build an app using Swift to read and process machine learning models, giving us intelligent image recognition.
We’ll also take a look at Apple’s Vision framework and how it works alongside CoreML to allow us to process video being streamed to our devices in real time, recognizing objects on the fly.
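To give a flavor of what this pairing looks like, here is a minimal sketch of running a CoreML image-classification model through a Vision request. It assumes a model class (MobileNetV2 is used here purely as an example) that Xcode has generated from a bundled `.mlmodel` file; any image-classification model would work the same way.

```swift
import CoreML
import Vision
import UIKit

// A minimal sketch: wrap a CoreML model in a Vision request and
// classify a single UIImage. MobileNetV2 is an assumed, Xcode-generated
// model class — substitute whichever model you've added to your project.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? MobileNetV2(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        return
    }

    // Vision calls this completion handler with classification observations.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    // Vision handles scaling and converting the image to the
    // input format the model expects.
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}
```

The same `VNCoreMLRequest` can later be fed frames from a live camera feed instead of a single image, which is exactly the real-time path this chapter works toward.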
This will lay the foundation for bringing machine learning into your apps and their features...