Core ML helps us build machine learning applications for iOS platforms.
Core ML uses trained models to make predictions based on new input data. For example, a model trained on a region's historical land prices may be able to predict the price of a plot of land when given details such as its locality and size.
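As a minimal sketch of what such a prediction looks like in code, the following Swift snippet loads a compiled model through the generic MLModel API and asks it for a price. The model file and the feature names locality, size, and price are placeholders for illustration, not part of any shipped model:

```swift
import CoreML

// Hypothetical sketch: load a compiled Core ML model and predict a land price.
// The feature names "locality", "size", and "price" are assumptions; a real
// model defines its own input and output feature names.
func predictLandPrice(modelURL: URL, locality: String, size: Double) throws -> Double? {
    let model = try MLModel(contentsOf: modelURL)

    // Wrap the new input data in a feature provider the model understands.
    let input = try MLDictionaryFeatureProvider(dictionary: [
        "locality": locality,
        "size": size
    ])

    // Run inference on-device and read back the predicted value.
    let output = try model.prediction(from: input)
    return output.featureValue(for: "price")?.doubleValue
}
```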
Core ML acts as a foundation for domain-specific frameworks. The major frameworks built on top of Core ML include GameplayKit for evaluating learned decision trees, natural language processing (NLP) for text analysis, and the Vision framework for image-based analysis.
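To show how one of these domain-specific frameworks sits on top of Core ML, here is a hedged Vision sketch that wraps a compiled image-classification model in a VNCoreMLRequest. The model URL is a placeholder, and the snippet assumes the model produces classification observations:

```swift
import CoreGraphics
import CoreML
import Vision

// Hypothetical sketch: classify an image with a Core ML model via Vision.
// Vision takes care of scaling and orienting the input image for the model.
func classify(cgImage: CGImage, modelURL: URL) throws {
    let coreMLModel = try MLModel(contentsOf: modelURL)
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        // Print the top three labels with their confidence scores.
        for observation in results.prefix(3) {
            print("\(observation.identifier): \(observation.confidence)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```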
Core ML is built on top of Accelerate, Basic Neural Network Subroutines (BNNS), and Metal Performance Shaders, as shown in the architecture diagram from the Core ML documentation:
- With the Accelerate framework, you can perform large-scale mathematical computations as well as image-based calculations. It is optimized for high performance and contains C APIs for vector and matrix calculations, digital signal processing (DSP), and other computations (a short vDSP sketch follows this list).
- BNNS helps to implement neural networks. It provides a collection of subroutines for building and running networks whose parameters were learned from training data.
- With the Metal framework, you can render advanced three-dimensional graphics and run parallel computations on the GPU. It comes with the Metal shading language, the MetalKit framework, and the Metal Performance Shaders framework. The Metal Performance Shaders framework is tuned to the hardware features of each GPU family for optimal performance.
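As a small illustration of the Accelerate layer mentioned in the first bullet, the following sketch computes a weighted sum with vDSP instead of a hand-written loop; the weights and feature values are made up for the example:

```swift
import Accelerate

// Minimal sketch of the Accelerate layer that Core ML builds on:
// a vector dot product computed with vDSP.
let weights: [Double] = [0.2, 0.5, 0.3]
let features: [Double] = [1.0, 2.0, 3.0]

var weightedSum = 0.0
// vDSP_dotprD multiplies the two vectors element-wise and sums the products.
vDSP_dotprD(weights, 1, features, 1, &weightedSum, vDSP_Length(weights.count))
print(weightedSum) // 0.2*1.0 + 0.5*2.0 + 0.3*3.0 = 2.1
```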
Core ML applications are built on top of the three components just described, as shown in the following diagram:
Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption.
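One way an app can influence this performance and power trade-off is through MLModelConfiguration, as in the sketch below. The model URL is assumed to point at a compiled model, and .cpuAndGPU is just one possible choice:

```swift
import CoreML

// Hedged sketch: restrict which hardware Core ML may use for inference.
func loadModel(at modelURL: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    // .all (the default) lets Core ML choose the best available hardware;
    // .cpuAndGPU or .cpuOnly restrict it when predictable power use matters.
    configuration.computeUnits = .cpuAndGPU
    return try MLModel(contentsOf: modelURL, configuration: configuration)
}
```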