TensorFlow 2.1.0 features - Last version of TensorFlow 2.x to support Python 2
Google released TensorFlow 2.1.0 in January 2020, and it is the last release of TensorFlow with Python 2 support. The Python maintainers announced the end of life of Python 2, with no further Python 2 releases after January 1, 2020. Since Python 2 support has officially ended, the Google TensorFlow team decided to drop Python 2 support from TensorFlow as well, making TensorFlow 2.1.0 the last version to support Python 2.
In this section we discuss the features and improvements that come with TensorFlow 2.1.0.
Top Features and Improvements in TensorFlow 2.1.0
Default pip installer includes GPU and CPU packages
The pip package of TensorFlow 2.1.0 includes GPU support for both the Linux and Windows operating systems. The default package covers both CPU and GPU, but a user who only needs a CPU build (for example, to reduce the download size) can install the CPU-only package instead. TensorFlow 2.1.0 installed with pip works on machines both with and without NVIDIA GPUs.
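As a quick sketch, installing either variant with pip looks like this (package names as published on PyPI for the 2.1.0 release):

```shell
# default package: ships both CPU and GPU support
pip install tensorflow==2.1.0

# smaller CPU-only package, useful when download size matters
pip install tensorflow-cpu==2.1.0
```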
Windows pip installer is built with Visual Studio 2019 version 16.4
The official pip package for Windows is compiled with Visual Studio 2019 version 16.4 in order to take advantage of the new /d2ReducedOptimizeHugeFunctions compiler flag. Developers must install the "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019" by downloading it from the Microsoft website; this redistributable is required for packages built with the new compiler flag to run.
CUDA 10.1 and cuDNN 7.6
The TensorFlow 2.1.0 pip package is built against CUDA 10.1 and cuDNN 7.6, so developers can take full advantage of these libraries.
Keras changes
- The tf.keras library comes with experimental support for mixed precision, which is available on GPUs and Cloud TPUs.
- A new TextVectorization layer is introduced in TensorFlow 2.1.0 that automatically takes care of text standardization, tokenization, n-gram generation, and vocabulary indexing. The layer takes raw input strings and handles the text vectorization of the data, which is very useful for developing NLP models.
- Keras .compile, .fit, .evaluate, and .predict are now allowed outside of the DistributionStrategy scope, as long as the model was constructed inside the scope.
- Experimental support for Keras .compile, .fit, .evaluate, and .predict is available for Cloud TPUs and Cloud TPU pods, for all types of Keras models.
- In TensorFlow 2.1.0, automatic outside compilation is enabled for Cloud TPUs, so developers can use tf.summary with Cloud TPUs.
- This version of TensorFlow brings dynamic batch sizes with DistributionStrategy and Keras on Cloud TPUs.
- TensorFlow 2.1.0 also supports .fit, .evaluate, and .predict on TPUs using both NumPy data and tf.data.Dataset.
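A minimal sketch of the new TextVectorization layer is shown below. The training sentences are illustrative; note that in TF 2.1 the layer lived under an experimental namespace, while later releases promote it to tf.keras.layers.TextVectorization, so the sketch resolves whichever path is available.

```python
import numpy as np
import tensorflow as tf

# TextVectorization lived under an experimental namespace in TF 2.1;
# later releases promoted it to tf.keras.layers.TextVectorization
try:
    TextVectorization = tf.keras.layers.TextVectorization
except AttributeError:
    TextVectorization = tf.keras.layers.experimental.preprocessing.TextVectorization

# the layer handles standardization, tokenization, and vocabulary indexing
vectorizer = TextVectorization(output_mode="int", output_sequence_length=4)
vectorizer.adapt(np.array(["the cat sat", "the dog ran"]))

# raw strings in, padded integer token ids out, shape (batch, 4)
tokens = vectorizer(tf.constant(["the cat ran"]))
```

Because the layer is part of the model graph, the same preprocessing travels with the saved model instead of living in a separate pipeline.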
Changes in tf.data
- TensorFlow 2.1.0 brings rebatching changes for tf.data datasets used with DistributionStrategy, for better performance.
- In TensorFlow 2.1.0, tf.data.Dataset supports automatic data distribution and sharding when running in distributed environments, including on TPU pods.
- In TensorFlow 2.1.0, the distribution policies for tf.data.Dataset can be tuned by:
1. tf.data.experimental.AutoShardPolicy (OFF, AUTO, FILE, DATA)
2. tf.data.experimental.ExternalStatePolicy (WARN, IGNORE, FAIL)
- Numeric checking can be enabled with tf.debugging.enable_check_numerics() and disabled with tf.debugging.disable_check_numerics(), which helps in finding the root cause of Inf and NaN errors while debugging.
- Custom training loops can now be executed on TPUs and TPU pods.
- TensorFlow 2.1.0 comes with TensorRT 6.0 enabled by default, which supports more TensorFlow ops including Conv3D, Conv3DBackpropInputV2, AvgPool3D, MaxPool3D, ResizeBilinear, and ResizeNearestNeighbor.
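As a sketch, the sharding policy is set through tf.data.Options; the dataset and the DATA policy chosen here are illustrative (without a distribution strategy the options simply travel with the dataset):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(8).batch(2)

# tune how the dataset is sharded across workers: OFF disables auto-sharding,
# AUTO lets tf.data decide, FILE shards by input file, DATA shards by element
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA
)
dataset = dataset.with_options(options)
```

When the dataset is later passed to a tf.distribute strategy, each worker receives its shard according to this policy.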
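The numeric-checking hooks mentioned above can be sketched as follows; once enabled, any op that produces an Inf or NaN raises an error that points at the originating op (the division by zero here is just a deliberate trigger):

```python
import tensorflow as tf

tf.debugging.enable_check_numerics()
try:
    # 1.0 / 0.0 produces an Inf, so the numeric check raises an error
    tf.constant(1.0) / tf.constant(0.0)
except Exception as err:
    # the error message names the op that produced the bad value
    caught = type(err).__name__
finally:
    # turn instrumentation off again, since it adds per-op overhead
    tf.debugging.disable_check_numerics()
```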
Important changes of TensorFlow 2.1.0
- In this version, Operation.traceback_with_start_lines has been deleted, as it had no usages.
- The id has been removed from tf.Tensor.__repr__(), as the id is not useful for anything other than internal debugging.
- Some tf.assert_* methods have been modified and now raise assertions at operation creation time.
- The experimental tag has been removed from tf.config.list_logical_devices,
tf.config.list_physical_devices, tf.config.get_visible_devices,
tf.config.set_visible_devices, tf.config.get_logical_device_configuration,
tf.config.set_logical_device_configuration.
- tf.config.experimental.VirtualDeviceConfiguration has been renamed to tf.config.LogicalDeviceConfiguration.
- tf.config.experimental_list_devices has been removed; developers should use tf.config.list_logical_devices instead.
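A quick sketch of the now-stable device-listing API:

```python
import tensorflow as tf

# physical devices are what the host hardware actually has; logical devices
# are what TensorFlow exposes to the runtime (possibly split or limited)
physical_cpus = tf.config.list_physical_devices("CPU")
logical_cpus = tf.config.list_logical_devices("CPU")

# every host exposes at least one CPU device
assert len(physical_cpus) >= 1
```

The same calls with "GPU" instead of "CPU" return an empty list on machines without a usable GPU, which makes them handy for feature-detection at startup.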
TensorFlow 2.1.0 Bug Fixes and Other changes
Here are the bug fixes and other major changes in TensorFlow 2.1.0.
- In tf.data, the concurrency issue with tf.data.experimental.parallel_interleave with sloppy=True has been fixed. tf.data.experimental.dense_to_ragged_batch() has been added, and support for RaggedTensors in the parsing ops has been extended.
- In distributed processing, the issue where GRU could crash or produce incorrect output when using a tf.distribute.Strategy has been fixed.
- TensorFlow 2.1.0 adds more features to tf.estimator. A new option is added to tf.estimator.CheckpointSaverHook for not saving the GraphDef, and the checkpoint reader has been moved from swig to pybind11.
- tf.keras comes with many updates. For example, depthwise_conv2d is now exported in tf.keras.backend. In Keras Layers and Models, variables in trainable_weights, non_trainable_weights, and weights are explicitly deduplicated.
- tf.lite now legalizes NMS ops in TFLite.
- Many other critical bugs have been fixed.
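The new dense_to_ragged_batch() transformation can be sketched as follows; the toy dataset is illustrative, producing elements of different lengths that regular batching could not stack:

```python
import tensorflow as tf

# elements have different lengths: [], [0], [0, 1], [0, 1, 2]
dataset = tf.data.Dataset.range(4).map(lambda n: tf.range(n))

# regular .batch() would fail on the mismatched shapes;
# dense_to_ragged_batch packs them into RaggedTensors instead
dataset = dataset.apply(
    tf.data.experimental.dense_to_ragged_batch(batch_size=2)
)

for batch in dataset:
    print(batch)  # each batch is a tf.RaggedTensor
```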
In this section we explored the new features and fixes that come with TensorFlow 2.1.0.