
TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

  • 5 min read
  • 10 Jun 2019


After all the hype and waiting, Google has finally announced the beta version of TensorFlow 2.0. The headline feature is tf.distribute.Strategy, which distributes training across multiple GPUs, multiple machines, or TPUs with minimal code changes. The TensorFlow 2.0 beta also brings a number of major improvements, breaking changes, and bug fixes. Earlier this year, the TensorFlow team had updated users on what to expect from TensorFlow 2.0.

The 2.0 API is final, with the symbol renaming and deprecation changes completed. It is also available as part of the TensorFlow 1.14 release, in the compat.v2 module.
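For those still on 1.x, a minimal sketch of that opt-in pattern (assuming a TensorFlow 1.14 installation, following the pattern documented for the compat.v2 module) looks like this:

```python
# Opt into the 2.0-style API from a TensorFlow 1.14 install via compat.v2.
import tensorflow.compat.v2 as tf

tf.enable_v2_behavior()  # switches on eager execution and other 2.0 defaults

# From here on, code is written against the 2.0 API surface.
print(tf.add(1, 2))  # executes eagerly and prints a concrete tensor value
```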

TensorFlow 2.0 support for Keras features

Distribution Strategy for hardware


The tf.distribute.Strategy API supports multiple user segments, including researchers and ML engineers, and provides good performance and easy switching between strategies. Users can distribute training across multiple GPUs, multiple machines, or TPUs, applying it to their existing models and training code with minimal code changes.

The tf.distribute.Strategy can be used with the following (a minimal tf.keras sketch follows the list):

    1. TensorFlow's high-level APIs
    2. tf.keras
    3. tf.estimator
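
As a rough illustration, a MirroredStrategy can wrap model construction so that training replicates across the GPUs visible on one machine. This is a minimal sketch; the layer sizes and the data are placeholders:

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across the GPUs on a single machine
# (it falls back to a single replica on CPU-only machines).
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored on every replica.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Placeholder data; a tf.data.Dataset would work the same way.
x, y = np.random.rand(256, 10), np.random.rand(256, 1)
model.fit(x, y, batch_size=32, epochs=2)  # batches are split across replicas
```

Switching to another strategy (for example, one for multiple machines or TPUs) only changes the strategy object; the model code under the scope stays the same.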

Custom training loops


TensorFlow 2.0 beta also simplifies the API for custom training loops, which again builds on the distribution strategy, tf.distribute.Strategy. Custom training loops give flexibility and greater control over training, and make it easier to debug both the model and the training loop.
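To get a feel for the custom-loop style, here is a minimal sketch of a tf.GradientTape training step; the model, data, and hyperparameters are placeholders, and combining this with a tf.distribute.Strategy follows the same pattern under the strategy's scope:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # compiles the step into a graph for performance
def train_step(x, y):
    with tf.GradientTape() as tape:
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
    # Gradients are computed explicitly, giving full control over the update.
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

# Tiny synthetic dataset for illustration.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 3]), tf.random.normal([64, 1]))).batch(8)

for epoch in range(2):
    for x, y in dataset:
        loss = train_step(x, y)
    print(f"epoch {epoch}: loss {float(loss):.4f}")
```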

Model Subclassing


Building a fully customizable model by subclassing tf.keras.Model allows users to define their own forward pass. Layers are created in the __init__ method and set as attributes of the class instance, while the forward pass is defined in the call method. Model subclassing is particularly useful when eager execution is enabled, because it allows the forward pass to be written imperatively. It gives greater flexibility when creating models that are not easily expressible otherwise.
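A minimal sketch of this subclassing pattern (the layer choices and sizes here are illustrative):

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, num_classes=10):
        super(MyModel, self).__init__()
        # Layers are created in __init__ and stored as attributes.
        self.dense1 = tf.keras.layers.Dense(64, activation="relu")
        self.dense2 = tf.keras.layers.Dense(num_classes, activation="softmax")

    def call(self, inputs):
        # The forward pass is written imperatively in call().
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()
outputs = model(tf.random.normal([4, 16]))  # layers are built on first call
print(outputs.shape)  # (4, 10)
```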

Breaking Changes

  • tf.contrib has been deprecated; its functionality has been migrated to the core TensorFlow API or to tensorflow/addons, or removed entirely.
  • The premade estimators in the tf.estimator.DNN/Linear/DNNLinearCombined family have been updated to use tf.keras.optimizers instead of tf.compat.v1.train.Optimizers. A checkpoint converter tool for converting optimizers has also been included with this release.

Bug Fixes and Other Changes


This beta version of 2.0 includes many bug fixes and other changes. Some of them are mentioned below:

  • The experimental_numa_aware option has been removed from tf.data.Options, and support for TensorArrays has been added to tf.data.
  • tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which makes the saved checkpoints compatible with model.load_weights.
  • tf.contrib.estimator.add_metrics has been replaced with tf.estimator.add_metrics.
  • A gradient for the SparseToDense op, a GPU implementation of tf.linalg.tridiagonal_solve, and broadcasting support for tf.matmul have been added (see the sketch after this list).
  • This beta version also exposes a flag that allows the number of threads to vary across Python benchmarks.
  • The unused StringViewVariantWrapper and tf.string_split have been removed from the v2 API.
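
As a quick illustration of the new tf.matmul broadcasting (the shapes here are arbitrary):

```python
import tensorflow as tf

# A batch of 2x3 matrices multiplied by a single 3x5 matrix:
# the operand with batch dimension 1 is broadcast across the batch.
a = tf.random.normal([4, 2, 3])
b = tf.random.normal([1, 3, 5])
c = tf.matmul(a, b)  # b's batch dimension of 1 broadcasts to 4
print(c.shape)  # (4, 2, 5)
```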


The TensorFlow team has set up a TF 2.0 Testing User Group where users can report any snags they hit and share feedback.

The general reaction to the TensorFlow 2.0 beta release has been positive.

https://twitter.com/markcartertm/status/1137238238748266496

https://twitter.com/tonypeng_Synced/status/1137128559414087680

A user on Reddit comments, “Can't wait to try that out!”

However, some users have compared it with PyTorch, calling PyTorch more comprehensive than TensorFlow; they argue that it provides a more powerful platform for research and is also suitable for production.

A user on Hacker News comments, “Maybe I'll give TF another try, but right now I'm really liking PyTorch. With TensorFlow I always felt like my models were buried deep in the machine and it was very hard to inspect and change them, and if I wanted to do something non-standard it was difficult even with Keras. With PyTorch though, I connect things however how I want, write whatever training logic I want, and I feel like my model is right in my hands. It's great for research and proofs-of-concept. Maybe for production too.”

Another user says, “Might give it another try, but my latest incursion in the Tensorflow universe did not end pleasantly. I ended up recoding everything in Pytorch, took me less than a day to do the stuff that took me more than a week in TF. One problem is that there are too many ways to do the same thing in TF and it's hard to transition from one to the other.”

The TensorFlow team hopes to resolve the remaining issues before the 2.0 release candidate (RC), including complete Keras model support on Cloud TPUs and TPU pods, and to improve the overall performance of 2.0. The RC release is expected sometime this summer.

Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more

Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet

ML.NET 1.0 RC releases with support for TensorFlow models and much more!