At the moment, our process iterates over the photos and performs inference on each one individually. With the release of Core ML 2, we can instead build a batch and pass the whole batch to our model in a single call. Much as economies of scale reduce per-unit cost, batching reduces per-inference overhead and gives the framework more room to schedule the work efficiently, so let's walk through adapting our project to process our photos in a single batch rather than individually.
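Before touching the project, here is the shape of the API we will lean on; this is a minimal sketch, where the empty inputs array is a placeholder standing in for one feature provider per photo:

import CoreML

// A batch is expressed as an MLBatchProvider; MLArrayBatchProvider
// wraps an ordinary array of MLFeatureProvider inputs.
let model = tinyyolo_voc2007().model
let inputs: [MLFeatureProvider] = [] // placeholder: one input per photo
let batch = MLArrayBatchProvider(array: inputs)

// A single call runs inference over the whole batch and returns
// another MLBatchProvider holding one output per input.
let results = try? model.predictions(from: batch, options: MLPredictionOptions())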
Let's work our way up the stack, starting in our YOLOFacade class and moving up to the PhotoSearcher. For this, we will use the model directly rather than proxying through Vision, so our first task is to replace the model property of our YOLOFacade class with the following declaration:
let model = tinyyolo_voc2007().model
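We take the generated class's model property because it exposes the underlying MLModel, which is where the batch prediction API we will be using, predictions(from:options:), is defined.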
Now, let's rewrite the detectObjects method to handle an array of photos rather than a single one.
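As a rough sketch of where we are heading, the batched method might look something like the following. The input type tinyyolo_voc2007Input(image:) is assumed from Xcode's code-generation conventions, and taking ready-made pixel buffers and returning raw feature providers are simplifications for illustration; the actual method will use the project's own types and result handling.

func detectObjects(photos: [CVPixelBuffer]) -> [MLFeatureProvider]? {
    // Wrap each photo in the model's generated input type.
    let inputs = photos.map { tinyyolo_voc2007Input(image: $0) }
    let batch = MLArrayBatchProvider(array: inputs)

    // Run inference over the whole batch in one call.
    guard let results = try? self.model.predictions(
        from: batch, options: MLPredictionOptions()) else {
        return nil
    }

    // Unpack the returned batch into one feature provider per photo.
    return (0..<results.count).map { results.features(at: $0) }
}

Note that the results come back as an MLBatchProvider whose elements line up index-for-index with the inputs, which is why a simple map over 0..<results.count is enough to recover a per-photo array.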