According to the TensorFlow website (www.tensorflow.org), TensorFlow was initially developed by Google for its internal use and was released as open source on November 9, 2015. Since then, TensorFlow has been used extensively to develop machine learning and deep neural network models in various domains, and it continues to be used within Google for research and product development. TensorFlow 1.0 was released on February 15, 2017; it makes one wonder whether it was a Valentine's Day gift from Google to machine learning engineers!
TensorFlow can be described with a data model, a programming model, and an execution model:
- The data model comprises tensors, which are the basic data units created, manipulated, and saved in a TensorFlow program.
- The programming model comprises data flow graphs, also known as computation graphs. Creating a program in TensorFlow means building one or more TensorFlow computation graphs.
- The execution model consists of firing the nodes of a computation graph in the order imposed by their dependencies. Execution starts with the nodes that are directly connected to the inputs and depend only on the inputs being present. A minimal sketch of all three models follows this list.
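As a rough sketch of how the three models fit together, the following assumes a TensorFlow 1.x installation (the graph-and-session style covered in this chapter); the tensor names a, b, c, and d are arbitrary examples, not names used elsewhere in the book:

```python
import tensorflow as tf

# Data model: tensors are the units of data flowing through the graph.
a = tf.constant(3.0, name='a')      # rank-0 tensor (scalar)
b = tf.constant(4.0, name='b')

# Programming model: these calls only add nodes to a computation graph;
# nothing is computed yet.
c = tf.add(a, b, name='c')          # c depends on a and b
d = tf.multiply(c, 2.0, name='d')   # d depends on c

# Execution model: a session fires the nodes in dependency order, so
# asking for d forces a and b to run first, then c, then d.
with tf.Session() as sess:
    print(sess.run(d))              # prints 14.0
```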
To use TensorFlow in your projects, you need to learn how to program with the TensorFlow API. TensorFlow offers multiple APIs for interacting with the library, divided into two levels:
- Lower-level library: The lower-level library, also known as TensorFlow core, provides very fine-grained, low-level functionality, giving you complete control over how the library is used in your models. We will cover TensorFlow core in this chapter.
- Higher-level libraries: These libraries provide high-level functionality and are comparatively easier to learn and use in your models. They include TF Estimators, TFLearn, TFSlim, Sonnet, and Keras. We will cover some of these libraries in the next chapter. A brief sketch contrasting the two levels follows this list.
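To give a feel for the difference between the two levels, here is a rough illustration assuming TensorFlow 1.x with its bundled tf.keras; the tiny linear model is a hypothetical example, not one used later in the book:

```python
import numpy as np
import tensorflow as tf

# Toy data for a linear fit (y = 2x).
x_train = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)
y_train = np.array([[2.0], [4.0], [6.0]], dtype=np.float32)

# Lower-level (TensorFlow core): you wire up the variables, the loss, and
# the optimizer yourself, which gives fine-grained control.
x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable([[0.0]], dtype=tf.float32)
b = tf.Variable([0.0], dtype=tf.float32)
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op, feed_dict={x: x_train, y: y_train})

# Higher-level (Keras): the same idea expressed in a few declarative calls.
model = tf.keras.models.Sequential(
    [tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer='sgd', loss='mse')
model.fit(x_train, y_train, epochs=100, verbose=0)
```

The trade-off is the usual one: the core API exposes every tensor and operation in the graph, while the higher-level libraries hide that plumbing behind a model-building interface.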