What is new in TensorFlow 2.0

March 11, 2019


What's up, everybody! TensorFlow 2.0 is finally here! It seems they've fixed many of the problems people have been complaining about in the previous version. One of the biggest complaints was how verbose TensorFlow code was and how hard it was to master. TensorFlow 2 provides a much simpler API, focusing on simplicity and ease of use. Now, TensorFlow code looks much more Pythonic. Sessions are gone. Eager execution is improved. There's now a unified high-level API that cleans up the clutter and confusion of having so many different functions that do the same thing. And they have a new logo!

Ok, let's start by talking about what happened to sessions. Sessions are gone! Graph execution is not entirely gone, but TensorFlow now runs with eager execution by default. Some of you might remember that I had an earlier tutorial showing how to compute the geometric mean of two numbers in TensorFlow, and the code looked like this. Let's see how we can do the same thing in TensorFlow 2. I'm going to copy this and then paste it here. This is going to compute the geometric mean of 2 and 8. Now we just print it. We no longer need the session. That's it, here we have the output.
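Here's a minimal sketch of what that eager-mode version might look like (my reconstruction, not the exact snippet from the video):

```python
import tensorflow as tf

# With eager execution on by default, ops run immediately and
# return concrete values; no session or sess.run() needed.
x = tf.constant(2.0)
y = tf.constant(8.0)

geometric_mean = tf.sqrt(x * y)  # sqrt(2 * 8) = sqrt(16)
print(geometric_mean.numpy())    # 4.0
```

Note that the result is a regular tensor you can inspect right away with `.numpy()`, which is exactly what made the old session boilerplate unnecessary.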

Ok, eager execution is convenient, but how about performance? Is it as fast as using sessions? In many cases, yes, it's almost as fast, though you might notice a difference if your model has a lot of small operations. Even then, it's not a big deal, because it's now easy to switch between the two modes. You can use eager execution for experimentation and debugging, then switch to graph execution once you are ready to train the model.

For example, to get an optimized TensorFlow graph for the first example, you can wrap the computation in a function that computes the geometric mean, then apply the tf.function decorator to convert it into a TensorFlow graph. TensorFlow no longer has explicit sessions; they are replaced with TensorFlow functions. Let's run this. Here we have the same result.
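Something along these lines (a sketch, assuming the same geometric mean computation as before):

```python
import tensorflow as tf

@tf.function  # traces this Python function into an optimized TensorFlow graph
def geometric_mean(x, y):
    return tf.sqrt(x * y)

# Calling it looks exactly like calling a regular Python function.
result = geometric_mean(tf.constant(2.0), tf.constant(8.0))
print(result.numpy())  # 4.0
```

The nice part is that the call site doesn't change at all; removing the decorator gives you back plain eager execution for debugging.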

How about building a real model? There used to be a bazillion ways of doing it in the past. You could use tf.slim, tf.layers, Keras, or implement everything from scratch using the core functions. Now the API has been simplified by cleaning up and unifying duplicative functions. There used to be so many different coding styles. The recommended approach now is to use the highest-level API that fits your needs, so that your code can benefit from future backend improvements. And Keras is now the standard high-level API for building models.

It's not the same as the original implementation of Keras though. TensorFlow uses Keras as an API standard while adding some specific enhancements, such as supporting eager execution.

If you followed my previous series, I made a video showing how to classify handwritten digits using a simple multi-layer perceptron. And this is what the code looked like. Now we can do the same thing with far fewer lines of code: load the data, define the model, compile the model, and train the model. If your model has skip-connections or anything out of the ordinary, you can also manually define the connections between layers, just like we did earlier using tf.layers. We can still convert this into a Keras model by providing the inputs and the outputs.
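The load/define/compile/train flow might look something like this (a minimal sketch, not the exact code from the video; the layer sizes are illustrative, and I train on a small subset for one epoch just to keep it quick):

```python
import tensorflow as tf

# Load the data (MNIST handwritten digits, bundled with Keras).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define the model: a simple multi-layer perceptron.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Compile the model.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model (one epoch on a small subset, just for illustration).
model.fit(x_train[:1000], y_train[:1000], epochs=1, verbose=0)

# For skip-connections or other custom wiring, the functional API lets
# you connect layers manually and still get a Keras model by providing
# the inputs and the outputs:
inputs = tf.keras.Input(shape=(28, 28))
hidden = tf.keras.layers.Flatten()(inputs)
outputs = tf.keras.layers.Dense(10, activation='softmax')(hidden)
functional_model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

The Sequential form covers the simple stacked case; the functional form at the end is what you'd reach for once the architecture stops being a straight line.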

The Dataset API is also compatible with eager execution now. Datasets behave just like NumPy arrays: you can loop over them to see what's inside, and there's no need to create separate iterators. Datasets are fully compatible with Keras models, so you can easily feed data to the training function.
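For instance, looping over a dataset is now just a plain Python loop (a tiny sketch with made-up values):

```python
import tensorflow as tf

# Build a dataset from in-memory values; illustrative data.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# With eager execution, you can iterate over the dataset directly;
# no make_one_shot_iterator() or get_next() boilerplate.
for element in dataset:
    print(element.numpy())  # prints 1, 2, 3, 4 on separate lines
```

The same `dataset` object can be passed straight to `model.fit` once it's batched and paired with labels.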

It's also now easier to handle non-image data, such as structured data, text, and different types of time series. You can check out the new tutorials on TensorFlow's website.

What about TensorBoard? That's also fully compatible with Keras models. You can easily create a TensorBoard callback and attach it to the model's fit function. The new TensorBoard also has a profiling section, which helps you identify what's slowing your model down.
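Attaching the callback takes only a couple of lines. A sketch with a toy model and random data, just to show the mechanics (the `logs` directory name is my own choice):

```python
import numpy as np
import tensorflow as tf

# A toy model and random data, purely to demonstrate the callback.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(32, 4).astype('float32')
y = np.random.rand(32, 1).astype('float32')

# Create a TensorBoard callback and pass it to fit; summaries are
# written under the given log directory.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.fit(x, y, epochs=1, callbacks=[tensorboard_cb], verbose=0)
```

After training, `tensorboard --logdir logs` brings up the dashboard.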

Now, it's also easier to distribute your models across multiple GPUs or other hardware, and it only takes a few lines of code. You can sync variables across multiple GPUs by defining a mirrored strategy and moving your model-building code under the distribution strategy scope.
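A minimal sketch of that pattern (on a machine without GPUs, MirroredStrategy simply falls back to a single replica):

```python
import tensorflow as tf

# MirroredStrategy keeps a synchronized copy of each variable
# on every available GPU.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Building and compiling the model inside the scope is all it takes;
# the training loop itself doesn't change.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer='adam', loss='mse')
```

Everything else, including `model.fit`, stays exactly as in the single-device case.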

Overall, TensorFlow 2 seems to have addressed most of the criticisms of the previous version. But as with any major update, it'll be painful to migrate existing codebases to the new version. I know there are tools to facilitate this migration, but migrated code will never be as elegant as code written from scratch in the new version. So if you are new to TensorFlow, you're lucky! This won't be an issue for you.

Alright, that's all for today. I hope you liked it. I'll add the links. If you have any comments or questions, let me know in the comments section below. Subscribe for more videos. As always, thanks for watching, stay tuned, and see you next time.