OpenVINO consists of libraries and tools created by Intel that enable you to optimize a trained DL model from a supported framework and then deploy it using an inference engine on Intel hardware. Supported hardware includes Intel CPUs, the integrated graphics in Intel CPUs, Intel's Movidius Neural Compute Stick, and FPGAs. OpenVINO is available for free from Intel.
OpenVINO includes the following components:
- Model optimizer: A tool that imports trained DL models from other DL frameworks, converts them into OpenVINO's intermediate representation (IR), and optimizes them. Supported DL frameworks include Caffe, TensorFlow, MXNet, and ONNX. Note the absence of support for Caffe2 or PyTorch. A conversion sketch follows this list.
- Inference engine: A set of libraries that load the optimized model produced by the model optimizer and give your application the ability to run the model on Intel hardware, as shown in the inference sketch after this list.
- Demos and samples: Simple applications that demonstrate how to use the toolkit and can serve as starting points for your own applications.
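To make the first step concrete, here is a minimal sketch of invoking the model optimizer from Python. The script path and file names below are assumptions based on a typical Linux installation of this era; the `--input_model` and `--output_dir` flags are standard model-optimizer options.

```python
import subprocess

# Assumed installation path; adjust to your OpenVINO setup.
MO_SCRIPT = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

# Convert a frozen TensorFlow graph (placeholder name) into OpenVINO's
# IR format: an .xml topology file plus a .bin weights file.
subprocess.run(
    ["python3", MO_SCRIPT,
     "--input_model", "frozen_model.pb",   # trained model to convert
     "--output_dir", "ir_model"],          # where the .xml/.bin files land
    check=True,
)
```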
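Once the IR files exist, the inference engine loads and runs them. The following is a minimal sketch using the `IECore` Python API that shipped with OpenVINO releases of this period; the file names and the zero-filled input are placeholders.

```python
import numpy as np
from openvino.inference_engine import IECore

# Load the IR files produced by the model optimizer.
ie = IECore()
net = ie.read_network(model="ir_model/frozen_model.xml",
                      weights="ir_model/frozen_model.bin")

# Compile the network for a specific device. "CPU" could be swapped
# for "GPU" (integrated graphics) or "MYRIAD" (Neural Compute Stick).
exec_net = ie.load_network(network=net, device_name="CPU")

# Run inference on a dummy input matching the network's input shape.
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})
```

Note that the application code stays the same regardless of the target device; only the `device_name` string changes, which is what makes deployment across the supported Intel hardware straightforward.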