Core ML

Core ML helps us build machine learning applications for iOS platforms.

Core ML uses trained models that make predictions based on new input data. For example, a model that's been trained on a region's historical land prices may be able to predict the price of a plot of land when given details such as its locality and size.
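As a rough illustration of that idea, the following hypothetical Swift snippet shows how such a model could be used once its .mlmodel file has been added to an Xcode project. The LandPricer model and its locality, size, and price features are invented for illustration and are not part of this chapter's sample code:

import CoreML

// Hypothetical sketch: LandPricer is the Swift class Xcode would generate from
// a LandPricer.mlmodel file trained on historical land prices.
func estimatePrice() {
    let pricer = LandPricer()
    // The generated prediction method takes the model's named inputs and
    // returns an output object carrying the predicted value.
    if let output = try? pricer.prediction(locality: "Suburban", size: 2400) {
        print("Estimated price: \(output.price)")
    }
}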

Core ML acts as a foundation for other domain-specific frameworks. The major frameworks that Core ML supports include GamePlayKit to evaluate learned decision trees, natural language processing (NLP) for text analysis, and the Vision framework for image-based analysis.

Core ML is built on top of Accelerate, Basic Neural Network Subroutines (BNNS), and Metal Performance Shaders, as shown in the architecture diagram from the Core ML documentation:

  • With the Accelerate framework, you can do mathematical computations on a large scale as well as calculations based on images. It is optimized for high performance and contains APIs written in C for vector and matrix calculations, Digital Signal Processing (DSP), and other computations; a small example of calling one of its vDSP routines from Swift follows this list.
  • BNNS helps to implement neural networks. Its subroutines and other collections of functions are useful for implementing and running neural networks built from training data.
  • With the Metal framework, you can render advanced three-dimensional graphics and run parallel computations using the GPU. It comes with the Metal shading language, the MetalKit framework, and the Metal Performance Shaders framework. The Metal Performance Shaders framework is tuned to work with the hardware features of each GPU family for optimal performance.
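To give a feel for the lowest of these layers, here is a minimal sketch of calling one of Accelerate's vDSP routines directly from Swift; the values are illustrative and the snippet is not part of the chapter's sample code:

import Accelerate

// Element-wise addition of two vectors using vDSP, one of the C APIs in the
// Accelerate framework that Core ML builds on.
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
var sum = [Float](repeating: 0, count: a.count)

vDSP_vadd(a, 1, b, 1, &sum, 1, vDSP_Length(a.count))
print(sum) // [11.0, 22.0, 33.0, 44.0]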

Core ML applications are built on top of the three layers of components mentioned, as shown in the following diagram:

Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. 

Core ML model conversion

To run your first application on iOS, you don't need to start building your own model. You can use any one of the best existing models. If you have a model that was created using a third-party framework, you can convert it with the Core ML Tools Python package, or with third-party packages such as the MXNet converter or the TensorFlow converter. The links to access these tools are given next. If your model isn't supported by any of these converters, you can also write your own converter.

  • The Core ML Tools Python package can be downloaded from: https://pypi.org/project/coremltools/
  • The TensorFlow converter can be accessed at: https://github.com/tf-coreml/tf-coreml
  • The MXNet converter can be downloaded from: https://github.com/apache/incubator-mxnet/tree/master/tools/coreml

The Core ML Tools Python package supports conversion from the Caffe v1, Keras 1.2.2+, scikit-learn 0.18, XGBoost 0.6, and LIBSVM 3.22 frameworks. This covers SVM models, tree ensembles, neural networks, generalized linear models, feature engineering, and pipeline models.

You can install Core ML tools through pip:

pip install -U coremltools

Converting your own model into a Core ML model

Converting your existing model into a Core ML model can be done through the coremltools Python package. If you want to convert a simple Caffe model to a Core ML model, it can be done with the following example:

import coremltools

# Convert the Caffe model and save the result as a Core ML .mlmodel file.
my_coremlmodel = coremltools.converters.caffe.convert('faces.caffemodel')
my_coremlmodel.save('faces.mlmodel')

This conversion step varies between different models. You may need to add class labels and input names, as well as describe the structure of the model.

Core ML on an iOS app

Integrating Core ML into an iOS app is pretty straightforward. Pre-trained models are available on the Apple developer page; download the MobileNet model from there.

After you download MobileNet.mlmodel, add it to the Resources group in your project. The Vision framework eases our problems by converting our existing image formats into acceptable input types. You can see the details of your model, as shown in the following screenshot. In the upcoming chapters, we will start creating our own models on top of existing models.

Let's look at how to load the model into our application:

Open ViewController.swift in your recently created Xcode project, and import both the Vision and Core ML frameworks.
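A minimal setup might look like the following sketch; it assumes that adding MobileNet.mlmodel to the project makes Xcode generate a MobileNet Swift class, whose instance is used by the prediction methods shown next:

import UIKit
import CoreML
import Vision

class ViewController: UIViewController {
    // MobileNet is the class Xcode auto-generates from MobileNet.mlmodel.
    let model = MobileNet()
}

With the model in place, add the following method to run the image through the Vision framework: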

/**
 Let's see the UIImage given to the Vision framework for the prediction.
 The results could be slightly different based on the UIImage conversion.
 */
func visionPrediction(image: UIImage) {
    guard let visionModel = try? VNCoreMLModel(for: model.model) else {
        fatalError("World is gonna crash!")
    }
    let request = VNCoreMLRequest(model: visionModel) { request, error in
        if let predictions = request.results as? [VNClassificationObservation] {
            // Top predictions sorted based on confidence;
            // the results come as (String, Double) tuples.
            let topPredictions = predictions.prefix(5)
                .map { ($0.identifier, Double($0.confidence)) }
            self.show(results: topPredictions)
        }
    }
    // Run the request on the image through a Vision request handler.
    guard let cgImage = image.cgImage else { return }
    try? VNImageRequestHandler(cgImage: cgImage).perform([request])
}

Let's load the same image through the Core ML MobileNet model for the prediction:

/**
 Method that predicts objects from an image using Core ML. The only downside of
 this method is that the .mlmodel expects images at a 224 x 224 pixel resolution,
 so we need to manually convert the UIImage into a pixel buffer.
 */
func coremlPrediction(image: UIImage) {
    // pixelBuffer(width:height:) and top(_:_:) are helper functions;
    // possible implementations are sketched after this listing.
    if let makeBuffer = image.pixelBuffer(width: 224, height: 224),
        let prediction = try? model.prediction(data: makeBuffer) {
        let topPredictions = top(5, prediction.prob)
        show(results: topPredictions)
    }
}
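Both methods rely on a few helpers that are not shown in the snippets above: pixelBuffer(width:height:) converts a UIImage into a CVPixelBuffer, top(_:_:) picks the highest-probability labels, and show(results:) displays them in the UI. The following is one possible sketch of the first two, assuming the model's prob output is a [String: Double] dictionary of label probabilities:

import UIKit
import CoreVideo

extension UIImage {
    // Renders the image into a 32BGRA CVPixelBuffer of the given size.
    func pixelBuffer(width: Int, height: Int) -> CVPixelBuffer? {
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
                                          | CGBitmapInfo.byteOrder32Little.rawValue)
        else { return nil }

        // Flip the coordinate system so UIImage.draw renders right side up,
        // then draw the image scaled into the buffer.
        UIGraphicsPushContext(context)
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()

        return buffer
    }
}

// Returns the k highest-probability (label, probability) pairs.
func top(_ k: Int, _ probabilities: [String: Double]) -> [(String, Double)] {
    return probabilities.sorted { $0.value > $1.value }
        .prefix(k)
        .map { ($0.key, $0.value) }
}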