The full app is available in the chapter repository. A Mac computer and an iOS device are required to build and run it. Let's briefly walk through the steps needed to get predictions from the model. Note that the following code is written in Swift, whose syntax is broadly similar to Python's:
import Vision

// Load the Core ML model and wrap it for use with the Vision framework.
private lazy var model: VNCoreMLModel = try! VNCoreMLModel(for: mobilenet().model)

private lazy var classificationRequest: VNCoreMLRequest = {
    // The completion handler is called once the image has been classified.
    let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
        self?.processClassifications(for: request, error: error)
    })
    // Scale the image and crop it to a centered square before inference.
    request.imageCropAndScaleOption = .centerCrop
    return request
}()
The code consists of three main steps:
- Loading the model. All the information about it is available in the .mlmodel file.
- Setting a custom callback. In our case, after the image is classified, we will call processClassifications.
- Setting imageCropAndScaleOption. Our model was designed to accept square images, but input images are not necessarily square. With .centerCrop, Vision scales the image to fill the model's input dimensions and crops it around the center before inference.
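To complete the picture, here is a sketch of how these pieces are typically driven: a VNImageRequestHandler performs the request on an image, and the completion callback reads the resulting VNClassificationObservation values. The method bodies below are illustrative, assuming they live in the same view controller as the properties above; they are not taken verbatim from the app.

import Vision

// Run the classification request on a given image.
func classify(_ image: CGImage) {
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    // Vision requests can be slow, so perform them off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([self.classificationRequest])
        } catch {
            print("Classification failed: \(error)")
        }
    }
}

// Callback invoked once the request completes (illustrative body).
func processClassifications(for request: VNRequest, error: Error?) {
    // For classification models, results are VNClassificationObservation
    // values, already sorted by decreasing confidence.
    guard let observations = request.results as? [VNClassificationObservation] else {
        print("Unable to classify image: \(String(describing: error))")
        return
    }
    if let top = observations.first {
        print("\(top.identifier): \(top.confidence)")
    }
}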