Author: Aly Khimji


If you are a machine learning practitioner, an enthusiast, or just enjoy tinkering with ML, then you’ve heard of TensorFlow. TensorFlow is easily the most popular deep learning framework. Developed by Google, TF is the powerhouse that drives Google’s massively scaled services like Gmail, YouTube, Search and Maps. Google went on to open source TensorFlow, and it has since been adopted by many companies, including Uber, Airbnb and eBay. Of all the deep learning frameworks, TensorFlow has the most activity and contributions on GitHub, and its adoption has only been growing.

The Past


TensorFlow 1.x is known for having a very sharp learning curve. You would be hard pressed to find someone who would call TensorFlow user friendly: its syntax can be challenging, it is difficult to debug in many cases, and error output can be quite complex. There are many concepts to understand, like placeholders, hyper-parameters, variables, different optimizers and many different functions. Another issue with TensorFlow 1.x was rooted in its core paradigm: static computation graphs. Data, nodes and computations are represented as a graph, and only when the graph is executed in a session does data flow through it. This makes inspecting the data at any given point quite difficult, not to mention that you cannot debug within the graph as it is being built. Getting started with ML has its own challenges, but when the tooling is also difficult to use, it can drastically discourage budding ML minds.
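To make this concrete, here is a minimal sketch of the TensorFlow 1.x graph-and-session workflow (the shapes and names here are illustrative, not taken from the examples later in this post):

import tensorflow as tf  # TensorFlow 1.x

# Build the static graph; nothing is computed at this point
x = tf.placeholder(tf.float32, shape=(None, 2), name='x')
w = tf.Variable(tf.random_normal((2, 1)), name='w')
y = tf.matmul(x, w)

print(y)  # prints a symbolic Tensor, not values: debugging mid-graph is hard

# Data only flows once the graph is run inside a session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))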

The Present


In our recent ML workshops at Arctiq, students run through a variety of topics. One such topic is deep learning, in which students are shown how neural networks are built in TensorFlow and then recreate those same networks with Keras. Keras is a high-level API that lives on top of TensorFlow. What Keras brings is a simple, readable and clear approach to building neural networks. For many, Keras is a saving grace, as it provides clear, concise, Python-style syntax that maps nicely onto the concepts of how neural networks are built. There are advantages to using native TensorFlow over Keras, but for new users grappling with deep learning concepts, Keras helps them focus on the code. Keras was built from the ground up to be “Python-ish”; it was designed to be flexible and simple to learn. In mid-2017, Keras was adopted and integrated into TensorFlow, and this integration has made TensorFlow accessible to a much wider audience.

The Future


TensorFlow’s focus is performance and scale. For many, the main drawback of using Keras on top of TensorFlow was a performance hit; for others, fine-grained TensorFlow features were simply not always available through the Keras API. With TensorFlow 2.0, Keras and its inviting syntax are built right into the core. Keras is now the high-level API for TensorFlow, with all of TensorFlow’s advanced features available directly through tf.keras.
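As a rough sketch of what that looks like (assuming TensorFlow 2.x; the tiny model here is a placeholder, not one from this post), an advanced feature like distributed training can be applied to a plain Keras model:

import tensorflow as tf

# Distribute training across available GPUs while keeping Keras syntax
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation='softmax', input_shape=(784,))
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')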

Many of the paradigms that made TensorFlow 1.x too complicated are a thing of the past with TensorFlow 2. Sessions are gone, and eager execution is the default. Eager execution is an imperative programming paradigm that evaluates operations immediately, replacing the older static graph with a dynamic computation graph. This makes things much easier to debug and is much more in line with how Python works. There are no sessions or placeholders; you simply pass data into functions as arguments. This makes debugging and prototyping much easier while maintaining performance and scalability.
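Here is a minimal sketch of eager execution in TensorFlow 2.x (the tensors are illustrative):

import tensorflow as tf  # TensorFlow 2.x: eager execution is on by default

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])

# Operations evaluate immediately; no graph build, session or placeholder
print(tf.matmul(a, b))  # tf.Tensor([[1. 3.] [3. 7.]], shape=(2, 2), ...)

# Plain Python functions take tensors as arguments and can be debugged
# with ordinary tools like print() or pdb
def forward(x, weights):
    return tf.nn.relu(tf.matmul(x, weights))

print(forward(a, b))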

Over the past year there have been many additions and deprecations in the TensorFlow 1.x API. This made it difficult to run code written against older APIs, as some function or attribute was always broken and required retrofitting. With TensorFlow 2.0, many of these APIs have been consolidated across both TensorFlow and Keras. You can now work with the Keras style of syntax and still have access to optimizers, metrics, loaders and layer building.
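For instance, the consolidated surface looks something like this (a sketch, assuming TensorFlow 2.x):

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)    # optimizers
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()      # metrics
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()  # data loaders
layer = tf.keras.layers.Dense(10, activation='softmax')      # layer building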

As powerful as the tf.keras high-level API is, there is also tf.raw_ops, a new, fully low-level API that provides access to all native internal operations for things like variables, checkpoints and layers. The entire ecosystem received an overhaul, from the high- and low-level APIs, data pre/post-processing and pipelines all the way to TensorBoard integration with Keras.
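As a quick illustration, here is a hedged sketch of calling a native operation directly through tf.raw_ops (raw ops accept keyword arguments only; the tensors are illustrative):

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0], [6.0]])

# Bypass the high-level wrappers and call the native MatMul op directly
c = tf.raw_ops.MatMul(a=a, b=b)
print(c)  # same result as tf.matmul(a, b)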

Code Comparison

Here is an example of a deep neural network based on the MNIST dataset, built with TensorFlow 1.x. Code can be found here. [Accuracy: 92%]

from __future__ import print_function

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the MNIST dataset
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)

# Hyper-parameters
learning_rate = 0.1
num_steps = 1000
batch_size = 128
display_step = 100

# Network parameters
n_hidden_1 = 256   # 1st layer number of neurons
n_hidden_2 = 256   # 2nd layer number of neurons
num_input = 784    # MNIST data input (img shape: 28*28)
num_classes = 10   # MNIST total classes (0-9 digits)

# Input function that feeds training batches to the Estimator
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)

# Define the network: two hidden layers and an output layer
def neural_net(x_dict):
    x = x_dict['images']
    layer_1 = tf.layers.dense(x, n_hidden_1)
    layer_2 = tf.layers.dense(layer_1, n_hidden_2)
    out_layer = tf.layers.dense(layer_2, num_classes)
    return out_layer

# Model function wiring predictions, loss, optimizer and metrics together
def model_fn(features, labels, mode):
    logits = neural_net(features)
    pred_classes = tf.argmax(logits, axis=1)    # predicted class ids
    pred_probas = tf.nn.softmax(logits)         # class probabilities

    # Prediction mode returns early with only the predicted classes
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)

    # Loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step())

    # Accuracy metric
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)

    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=pred_classes,
        loss=loss_op,
        train_op=train_op,
        eval_metric_ops={'accuracy': acc_op})

# Train the model
model = tf.estimator.Estimator(model_fn)
model.train(input_fn, steps=num_steps)

# Evaluate on the test set
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.test.images}, y=mnist.test.labels,
    batch_size=batch_size, shuffle=False)
model.evaluate(input_fn)

Here is an example of a neural network also based on the MNIST dataset, but using TensorFlow 2.x syntax. [Accuracy: 98%]

# Install and import TensorFlow 2.0 (the !pip line assumes a Colab notebook)
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install -q tensorflow==2.0.0-alpha0
import tensorflow as tf
mnist = tf.keras.datasets.mnist

# Load and prepare the MNIST dataset.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build the tf.keras.Sequential model by stacking layers.
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(256, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(256, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train and evaluate model:
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

Shown above is a clear reduction in code size and complexity, with a gain in readability and comprehension. Keeping in mind that this is a toy dataset, running both snippets in Colab also shows a gain in performance and a 6% increase in accuracy with the TF 2.x model. You should test it out on your own datasets and see whether you get a similar improvement. The new TensorFlow is a welcome improvement that will increase accessibility to the best deep learning library. It’s a huge step in AI democratization and in having more people take an interest in ML/AI. TensorFlow 2.0 Alpha is available and ramping up fast; I look forward to its future and the change it will bring to the community.

//take the first step
