TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support officially ends on January 1, 2020. As announced earlier, TensorFlow will also stop supporting Python 2 starting January 1, 2020, and no more releases are expected in 2019.
Major Features and Improvements

The tensorflow pip package now includes GPU support by default (same as tensorflow-gpu) for both Linux and Windows. It runs on machines with and without NVIDIA GPUs. tensorflow-gpu is still available, and CPU-only packages can be downloaded at tensorflow-cpu for users who are concerned about package size.

Windows users: tensorflow pip packages are now built with Visual Studio 2019 version 16.4 in order to take advantage of the new /d2ReducedOptimizeHugeFunctions compiler flag. To use these new packages, you must install "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019", available from Microsoft's website.
* Without this flag, builds that enable EIGEN_STRONG_INLINE can take over 48 hours to compile. Refer to configure.py for more information about EIGEN_STRONG_INLINE and /d2ReducedOptimizeHugeFunctions.
* If either msvcp140.dll (old) or msvcp140_1.dll (new) is missing on your machine, import tensorflow will print a warning message.

The tensorflow pip package is built with CUDA 10.1 and cuDNN 7.6.

tf.keras
* Introduced the TextVectorization layer, which takes raw strings as input and takes care of text standardization, tokenization, n-gram generation, and vocabulary indexing. See the end-to-end text classification example, and the sketch below.
* Keras .compile, .fit, .evaluate, and .predict are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
* Experimental support for Keras .compile, .fit, .evaluate, and .predict is available for Cloud TPUs and Cloud TPU Pods, for all types of Keras models (sequential, functional, and subclassing models).
* tf.summary can now be used more conveniently with Cloud TPUs.
* .fit, .evaluate, and .predict now work on TPU using numpy data, in addition to tf.data.Dataset.
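For illustration, a minimal sketch of the new layer, assuming the TF 2.1 location under tf.keras.layers.experimental.preprocessing and a hypothetical two-sentence corpus:

```python
import tensorflow as tf

# Toy corpus (illustrative); in TF 2.1 the layer lives under the
# experimental preprocessing namespace.
texts = tf.data.Dataset.from_tensor_slices(
    ["The movie was great!", "The movie was terrible."]).batch(2)

vectorize = tf.keras.layers.experimental.preprocessing.TextVectorization(
    max_tokens=1000,           # cap the vocabulary size
    output_mode="int",         # emit integer token indices
    output_sequence_length=6)  # pad/truncate each example to 6 tokens

vectorize.adapt(texts)  # learn the vocabulary from the raw strings
print(vectorize(tf.constant([["The movie was great!"]])))
```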
tf.data
* Rebatching for tf.data datasets + DistributionStrategy has changed for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
* tf.data.Dataset now supports automatic data distribution and sharding in distributed environments, including on TPU pods.
* Distribution policies for tf.data.Dataset can now be tuned with:
  1. tf.data.experimental.AutoShardPolicy (OFF, AUTO, FILE, DATA)
  2. tf.data.experimental.ExternalStatePolicy (WARN, IGNORE, FAIL)
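A minimal sketch of tuning the auto-shard policy through tf.data.Options (the attribute path below follows TF 2.1; the external-state policy is configured analogously):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(8).batch(2)

options = tf.data.Options()
# Shard by splitting elements across workers rather than input files.
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA)
dataset = dataset.with_options(options)
```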
tf.debugging
* Added tf.debugging.enable_check_numerics() and tf.debugging.disable_check_numerics() to help debug the root causes of issues involving infinities and NaNs.
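A minimal sketch, assuming eager execution; once enabled, the checker reports the op that first produced an Inf or NaN:

```python
import tensorflow as tf

tf.debugging.enable_check_numerics()  # instrument ops from here on

x = tf.constant([1.0, 0.0])
try:
    y = tf.math.log(x)  # log(0) = -inf, so the checker flags this op
except tf.errors.InvalidArgumentError as e:
    print("Caught numeric issue:", type(e).__name__)

tf.debugging.disable_check_numerics()
```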
tf.distribute
* Custom training loops on TPUs and TPU pods are supported through strategy.experimental_distribute_dataset, strategy.experimental_distribute_datasets_from_function, strategy.experimental_run_v2, and strategy.reduce (see the sketch below).
* Support for a global distribution strategy through tf.distribute.experimental_set_strategy(), in addition to strategy.scope().
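A minimal custom-training-loop sketch using these APIs, shown with MirroredStrategy for portability (on a TPU you would construct a TPUStrategy instead; the model and data are illustrative):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(0.1)

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 4]), tf.random.normal([64, 1]))).batch(8)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(dist_inputs):
    def step_fn(inputs):
        x, y = inputs
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(model(x) - y))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    # Run the step on each replica, then aggregate the per-replica losses.
    per_replica_loss = strategy.experimental_run_v2(step_fn, args=(dist_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_loss, axis=None)

for batch in dist_dataset:
    train_step(batch)
```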
TensorRT
* The TensorFlow-TensorRT Python conversion API is exported as tf.experimental.tensorrt.Converter.

The environment variable TF_DETERMINISTIC_OPS has been added. When set to "true" or "1", it makes tf.nn.bias_add operate deterministically (i.e. reproducibly), but currently only when XLA JIT compilation is not enabled. Setting TF_DETERMINISTIC_OPS to "true" or "1" also makes cuDNN convolution and max-pooling operate deterministically. This makes Keras Conv*D and MaxPool*D layers operate deterministically in both the forward and backward directions when running on a CUDA-enabled GPU.

Breaking Changes
* Deleted Operation.traceback_with_start_lines, for which we know of no usages.
* Removed id from tf.Tensor.__repr__(), as id is not useful other than for internal debugging.
* Some tf.assert_* methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during session.run(). This only changes behavior when the graph execution would have resulted in an error. When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the feed_dict argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
* The following APIs are no longer experimental: tf.config.list_logical_devices, tf.config.list_physical_devices, tf.config.get_visible_devices, tf.config.set_visible_devices, tf.config.get_logical_device_configuration, tf.config.set_logical_device_configuration.
* tf.config.experimental.VirtualDeviceConfiguration has been renamed to tf.config.LogicalDeviceConfiguration.
* tf.config.experimental_list_devices has been removed; please use tf.config.list_logical_devices.
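A minimal sketch of the deterministic-ops switch together with the newly stabilized tf.config device APIs (the environment variable must be set before the relevant ops run; memory sizes are illustrative):

```python
import os
import tensorflow as tf

os.environ["TF_DETERMINISTIC_OPS"] = "1"  # deterministic cuDNN conv/max-pool

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Split the first GPU into two 1 GB logical devices.
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
         tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
print(tf.config.list_logical_devices())
```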
Bug Fixes and Other Changes

tf.data
* Fixed a concurrency issue in tf.data.experimental.parallel_interleave with sloppy=True.
* Added tf.data.experimental.dense_to_ragged_batch().
* Extended tf.data parsing ops to support RaggedTensors.
tf.distribute
* Fixed an issue where GRU would crash or give incorrect output when a tf.distribute.Strategy was used.
tf.estimator
* Added an option in tf.estimator.CheckpointSaverHook to not save the GraphDef.
tf.keras
* Exported depthwise_conv2d in tf.keras.backend.
* In Keras layers and models, variables in trainable_weights, non_trainable_weights, and weights are explicitly deduplicated.
* model.load_weights now accepts skip_mismatch as an argument. This was available in external Keras, and has now been copied over to tf.keras (see the sketch below).
* The Model.fit_generator, Model.evaluate_generator, Model.predict_generator, Model.train_on_batch, Model.test_on_batch, and Model.predict_on_batch methods now respect the run_eagerly property, and will correctly run using tf.function by default. Note that Model.fit_generator, Model.evaluate_generator, and Model.predict_generator are deprecated endpoints. They are subsumed by Model.fit, Model.evaluate, and Model.predict, which now support generators and Sequences.
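A minimal sketch of skip_mismatch (file name is hypothetical; note it requires by_name=True): layers whose saved weights don't match in shape or name are skipped instead of raising:

```python
import tensorflow as tf

src = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,), name="a"),
    tf.keras.layers.Dense(2, name="b"),
])
src.save_weights("weights.h5")

# Same "a" layer, but "b" now has a different output size: only "a"
# can be restored, and the mismatch on "b" is skipped rather than fatal.
dst = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,), name="a"),
    tf.keras.layers.Dense(3, name="b"),
])
dst.load_weights("weights.h5", by_name=True, skip_mismatch=True)
```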
tf.lite
* Legalization for NMS ops in TFLite.
* Added narrow_range and axis to the quantize_v2 and dequantize ops.
* Added support for FusedBatchNormV3 in the converter.
* Added an errno-like field to the NNAPI delegate for detecting NNAPI errors, for fallback behaviour.
* Refactored the NNAPI delegate to report the detailed reason why an operation is not accelerated.

Other
* TPUs can now be re-initialized multiple times, using tf.tpu.experimental.initialize_tpu_system.
* Added RaggedTensor.merge_dims().
* Added a uniform_row_length row-partitioning tensor to RaggedTensor.
* Added a shape arg to RaggedTensor.to_tensor and improved the speed of RaggedTensor.to_tensor.
* tf.io.parse_sequence_example and tf.io.parse_single_sequence_example now support ragged features.
* Fixed while_v2 with variables in custom gradient.
* Support taking gradients of V2 tf.cond and tf.while_loop using LookupTable.
* Fixed a bug where vectorized_map failed on inputs with unknown static shape.
* Tensor equality with None now behaves as expected.
* Made calls to tf.function(f)(), tf.function(f).get_concrete_function, and tf.function(f).get_initialization_function thread-safe.
* Extended tf.identity to work with CompositeTensors (such as SparseTensor).
* Added more dtypes and zero-sized inputs to the Einsum op, and improved its performance.
* Enabled multi-worker NCCL all-reduce inside functions executing eagerly.
* Added complex128 support to RFFT, RFFT2D, RFFT3D, IRFFT, IRFFT2D, and IRFFT3D.
* Added a pfor converter for SelfAdjointEigV2.
* Added tf.math.ndtri and tf.math.erfinv.
* Added tf.config.experimental.enable_mlir_bridge to allow using the MLIR compiler bridge in eager mode.
* Added tf.autodiff.ForwardAccumulator for forward-mode autodiff (see the sketch after this list).
* Added LinearOperatorPermutation.
* Performance optimizations for tf.reduce_logsumexp.
* Added multilabel handling to the AUC metric.
* Optimized zeros_like.
* The Dimension constructor now requires None or types with an __index__ method.
* Added a tf.random.uniform microbenchmark.
* Use the _protogen suffix for proto library targets instead of the _cc_protogen suffix.
* Moved the checkpoint reader from swig to pybind11.
* tf.device and MirroredStrategy now support passing in a tf.config.LogicalDevice.
* If you're building TensorFlow from source, consider using bazelisk to automatically download and use the correct Bazel version; it reads the .bazelversion file at the root of the project directory.
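A minimal sketch of forward-mode autodiff with the new accumulator; it computes a Jacobian-vector product alongside the forward pass:

```python
import tensorflow as tf

x = tf.constant(2.0)
# Seed the tangent with 1.0 to get dy/dx directly.
with tf.autodiff.ForwardAccumulator(primals=x, tangents=tf.constant(1.0)) as acc:
    y = x * x * x  # y = x**3, so dy/dx = 3 * x**2 = 12.0 at x = 2
print(acc.jvp(y))  # tf.Tensor(12.0, ...)
```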
This release contains contributions from many people at Google, as well as:

8bitmp3, Aaron Ma, Abdülhamit Yilmaz, Abhai Kollara, aflc, Ag Ramesh, Albert Z. Guo, Alex Torres, amoitra, Andrii Prymostka, angeliand, Anshuman Tripathy, Anthony Barbier, Anton Kachatkou, Anubh-V, Anuja Jakhade, Artem Ryabov, autoih, Bairen Yi, Bas Aarts, Basit Ayantunde, Ben Barsdell, Bhavani Subramanian, Brett Koonce, candy.dc, Captain-Pool, caster, cathy, Chong Yan, Choong Yin Thong, Clayne Robison, Colle, Dan Ganea, David Norman, David Refaeli, dengziming, Diego Caballero, Divyanshu, djshen, Douman, Duncan Riach, EFanZh, Elena Zhelezina, Eric Schweitz, Evgenii Zheltonozhskii, Fei Hu, fo40225, Fred Reiss, Frederic Bastien, Fredrik Knutsson, fsx950223, fwcore, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, giuros01, Gomathi Ramamurthy, Guozhong Zhuang, Haifeng Jin, Haoyu Wu, HarikrishnanBalagopal, HJYOO, Huang Chen-Yi, Ilham Firdausi Putra, Imran Salam, Jared Nielsen, Jason Zaman, Jasper Vicenti, Jeff Daily, Jeff Poznanovic, Jens Elofsson, Jerry Shih, jerryyin, Jesper Dramsch, jim.meyer, Jongwon Lee, Jun Wan, Junyuan Xie, Kaixi Hou, kamalkraj, Kan Chen, Karthik Muthuraman, Keiji Ariyama, Kevin Rose, Kevin Wang, Koan-Sin Tan, kstuedem, Kwabena W. Agyeman, Lakshay Tokas, latyas, Leslie-Fang-Intel, Li, Guizi, Luciano Resende, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manuel Freiberger, Mark Ryan, Martin Mlostek, Masaki Kozuki, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Muhwan Kim, Nagy Mostafa, nammbash, Nathan Luehr, Nathan Wells, Niranjan Hasabnis, Oleksii Volkovskyi, Olivier Moindrot, olramde, Ouyang Jin, OverLordGoldDragon, Pallavi G, Paul Andrey, Paul Wais, pkanwar23, Pooya Davoodi, Prabindh Sundareson, Rajeshwar Reddy T, Ralovich, Kristof, Refraction-Ray, Richard Barnes, richardbrks, Robert Herbig, Romeo Kienzler, Ryan Mccormick, saishruthi, Saket Khandelwal, Sami Kama, Sana Damani, Satoshi Tanaka, Sergey Mironov, Sergii Khomenko, Shahid, Shawn Presser, ShengYang1, Siddhartha Bagaria, Simon Plovyt, skeydan, srinivasan.narayanamoorthy, Stephen Mugisha, sunway513, Takeshi Watanabe, Taylor Jakobson, TengLu, TheMindVirus, ThisIsIsaac, Tim Gates, Timothy Liu, Tomer Gafner, Trent Lo, Trevor Hickey, Trevor Morris, vcarpani, Wei Wang, Wen-Heng (Jack) Chung, wenshuai, Wenshuai-Xiaomi, wenxizhu, william, William D. Irons, Xinan Jiang, Yannic, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Youwei Song, Zaccharie Ramzi, Zhang, Zhenyu Guo, 王振华 (Zhenhua Wang), 韩董, 이중건 Isaac Lee
Security Fixes and Dependency Updates
* Fixes an issue in tf.raw_ops.Switch (CVE-2020-15190).
* Fixes issues in SparseFillEmptyRowsGrad (CVE-2020-15194, CVE-2020-15195).
* Fixes an issue in tf.strings.as_string (CVE-2020-15203).
* Fixes an issue in tf.raw_ops.StringNGrams (CVE-2020-15205).
* Fixes an issue in SavedModel validation (CVE-2020-15206).
* Fixes a bug where max_seq_length was missing from the CuDNN descriptor cache key.
* Updates sqlite3 to 3.33.00 to handle CVE-2020-9327, CVE-2020-11655, CVE-2020-11656, CVE-2020-13434, CVE-2020-13435, CVE-2020-13630, CVE-2020-13631, CVE-2020-13871, and CVE-2020-15358.
* Updates numpy to 1.18.5 to prevent ABI breakage when compiling code that uses both NumPy and TensorFlow headers.
* Updates sqlite3 to 3.31.01 to handle CVE-2019-19880, CVE-2019-19244, and CVE-2019-19645.
* Updates curl to 7.69.1 to handle CVE-2019-15601.
* Updates libjpeg-turbo to 2.0.4 to handle CVE-2018-19664, CVE-2018-20330, and CVE-2019-13960.
* Updates Apache Spark to 2.4.5 to handle CVE-2019-10099, CVE-2018-17190, and CVE-2018-11770.
* Fixes a vulnerability where converting a Python string to a tf.float16 value produces a segmentation fault (CVE-2020-5215).
* Updates curl to 7.66.0 to handle CVE-2019-5482 and CVE-2019-5481.
* Updates sqlite3 to 3.30.01 to handle CVE-2019-19646, CVE-2019-19645, and CVE-2019-16168.

Note that this release no longer has a single pip package for GPU and CPU. Please see #36347 for history and details.

This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.
Major Features and Improvements
* The tensorflow pip package will by default include GPU support (same as tensorflow-gpu now) for the platforms on which we currently have GPU support (Linux and Windows). It will work on machines with and without Nvidia GPUs. tensorflow-gpu will still be available, and CPU-only packages can be downloaded at tensorflow-cpu for users who are concerned about package size.
* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function. This enables writing forward-compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0 (see the sketch below).
* EagerTensor now supports the numpy buffer interface for tensors.
* Added tf.enable_control_flow_v2() and tf.disable_control_flow_v2() for enabling/disabling v2 control flow.
* Enabled v2 control flow as part of tf.enable_v2_behavior() and TF2_BEHAVIOR=1.
* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside tf.function-decorated functions. AutoGraph is also applied in functions used with tf.data, tf.distribute, and tf.keras APIs.
* Adds enable_tensor_equality(), which switches the behavior such that Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0.
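A minimal sketch of the forward-compatible idiom described above, following the documented pattern of importing one compatibility module and opting into 2.0 behavior:

```python
# Works against both a 1.15 and a 2.x installation.
import tensorflow.compat.v2 as tf

tf.enable_v2_behavior()  # eager execution, v2 control flow, etc.

x = tf.constant([1.0, 2.0])
print(x.numpy())  # eager by default once v2 behavior is enabled
```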
* The Auto Mixed Precision graph optimizer simplifies converting models to float16 for acceleration on Volta and Turing Tensor Cores. This feature can be enabled by wrapping an optimizer class with tf.train.experimental.enable_mixed_precision_graph_rewrite() (see the sketch below).
* Added the environment variable TF_CUDNN_DETERMINISTIC. Setting it to "true" or "1" forces the selection of deterministic cuDNN convolution and max-pooling algorithms. When this is enabled, the algorithm selection procedure itself is also deterministic.
* TensorRT:
  * Added a user-friendly TrtGraphConverter API for TensorRT conversion.
  * Expanded support for TensorFlow operators in TensorRT conversion (e.g. Gather, Slice, Pack, Unpack, ArgMin, ArgMax, DepthSpaceShuffle).
  * Support for the TensorFlow operator CombinedNonMaxSuppression in TensorRT conversion, which significantly accelerates object detection models.
* The TensorFlow pip package is now split into an implementation package, tensorflow_core, containing all the code (in the future it will contain only the private implementation), and tensorflow, a virtual pip package that forwards to tensorflow_core (and in the future will contain only the public API of TensorFlow). We don't expect this to be breaking, unless you were importing directly from the implementation.
* Deprecated the use of constraint= and .constraint with ResourceVariable.

tf.keras:
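A minimal sketch of the graph-rewrite wrapper, assuming a graph-mode training setup with a tf.train optimizer:

```python
import tensorflow as tf

opt = tf.train.AdamOptimizer(learning_rate=1e-3)
# The rewrite inserts float16 casts where safe and keeps numerically
# sensitive ops in float32; use `opt` afterwards as usual.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
```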
* OMP_NUM_THREADS is no longer used by the default Keras config. To configure the number of threads, use the tf.config.threading APIs.
* tf.keras.model.save_model and model.save now default to saving a TensorFlow SavedModel.
* keras.backend.resize_images (and consequently, keras.layers.Upsampling2D) behavior has changed: a bug in the resizing implementation was fixed.
* Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow 2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with tf.keras.backend.set_floatx('float64'), or pass dtype='float64' to each of the Layer constructors. See tf.keras.layers.Layer for more information, and the sketch below.
* Some tf.assert_* methods now raise assertions at operation creation time (i.e. when the Python line executes) if the input tensors' values are known at that time, not during session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the feed_dict argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
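A minimal sketch of keeping a float64 model in float64 under the new defaults:

```python
import tensorflow as tf

# Option 1: change the global default dtype for all Keras layers.
tf.keras.backend.set_floatx('float64')

# Option 2: force the dtype per layer instead.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,)),  # float64 via floatx
    tf.keras.layers.Dense(1, dtype='float64'),   # float64 explicitly
])
print(model.layers[0].dtype)  # float64
```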
tf.estimator:
* tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with model.load_weights.
* Fixed issues with DenseFeatures usability in TF2.
tf.data:
* Promoted unbatch from the experimental to the core API (see the sketch below).
* Added support for nested datasets in from_tensors and from_tensor_slices, and for batching and unbatching of nested datasets.
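A minimal sketch of the now-core unbatch transformation, assuming eager execution for the iteration:

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])  # 2 elements of shape (2,)
for x in ds.unbatch():  # splits each element: yields 1, 2, 3, 4
    print(int(x))
```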
tf.keras:
* tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with model.load_weights.
* tf.saved_model.save now saves the list of variables, trainable variables, regularization losses, and the call function.
* Deprecated tf.keras.experimental.export_saved_model and tf.keras.experimental.load_from_saved_model. Please use tf.keras.models.save_model(..., save_format='tf') and tf.keras.models.load_model instead (see the sketch below).
* Added an implementation=3 mode for the tf.keras.layers.LocallyConnected2D and tf.keras.layers.LocallyConnected1D layers, using tf.SparseTensor to store weights, allowing a dramatic speedup for large sparse models.
* Enabled the Keras compile API experimental_run_tf_function flag by default. This flag enables a single training/eval/predict execution path. With this: 1. all input types are converted to Dataset; 2. when a distribution strategy is not specified, execution goes through the no-op distribution strategy path; 3. execution is wrapped in tf.function unless run_eagerly=True is set in compile.
* Raise an error if the batch_size argument is used when the input is a dataset/generator/Keras sequence.
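A minimal sketch of the recommended replacement for the deprecated endpoints (the directory name is illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Save in the TensorFlow SavedModel format rather than HDF5.
tf.keras.models.save_model(model, "my_model", save_format="tf")
restored = tf.keras.models.load_model("my_model")
```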
tf.lite
* Added GATHER support to the NN API delegate.
* Added support for QUANTIZE.
* Added support for QUANTIZED_16BIT_LSTM.
* Defaults the cycle_length argument of tf.data.Dataset.interleave to the number of schedulable CPU cores.
* parallel_for: added a converter for MatrixDiag.
* Added the narrow_range attribute to QuantizeAndDequantizeV2 and V3.
* Added tf.strings.unsorted_segment_join.
* Added HW acceleration support for topK_v2.
* Added new TypeSpec classes.
* Exposed Head as a public API.
* Added support for the batch_dims case.
* Added the tf.sparse.from_dense utility function (see the sketch after this list).
* Improved ragged tensor support in TensorFlowTestCase.
* ResizeInputTensor now works for all delegates.
* Added EXPAND_DIMS support to the NN API delegate (TEST: expand_dims_test).
* tf.cond emits a StatelessIf op if the branch functions are stateless and do not touch any resources.
* tf.cond, tf.while, and if and while in AutoGraph now accept a non-scalar predicate if it has a single element. This does not affect non-V2 control flow.
* tf.while_loop emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.
* Added support for LogSoftMax.
* Added nested_value_rowids for ragged tensors.
* Added the tf.math.cumulative_logsumexp operation.
* Added tf.ragged.stack.
* Added AddNewInputConstantTensor.
* Changes to MemoryAllocation::MemoryAllocation().
* Extracted NNAPIDelegateKernel from nnapi_delegate.cc.
* Added support for FusedBatchNormV3 in the converter.
* Fixed an accidental quadratic graph-construction cost in graph-mode tf.gradients().
* The precision_mode argument to TrtGraphConverter is now case insensitive.
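A minimal sketch of the new tf.sparse.from_dense utility mentioned above, assuming eager execution:

```python
import tensorflow as tf

dense = tf.constant([[0, 2, 0],
                     [3, 0, 0]])
sp = tf.sparse.from_dense(dense)   # keep only the nonzero entries
print(sp.indices.numpy())          # [[0 1] [1 0]]
print(sp.values.numpy())           # [2 3]
print(tf.sparse.to_dense(sp).numpy())  # round-trips to the original
```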
This release contains contributions from many people at Google, as well as:

a6802739, Aaron Ma, Abdullah Selek, Abolfazl Shahbazi, Ag Ramesh, Albert Z. Guo, Albin Joy, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Amit Srivastava, amoitra, Andrew Lihonosov, Andrii Prymostka, Anuj Rawat, Astropeak, Ayush Agrawal, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bryan Cutler, candy.dc, Cao Zongyan, Captain-Pool, Casper Da Costa-Luis, Chen Guoyin, Cheng Chang, chengchingwen, Chong Yan, Choong Yin Thong, Christopher Yeh, Clayne Robison, Coady, Patrick, Dan Ganea, David Norman, Denis Khalikov, Deven Desai, Diego Caballero, Duncan Dean, Duncan Riach, Dwight J Lyle, Eamon Ito-Fisher, eashtian3, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Fangjun Kuang, Fei Hu, fo40225, formath, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. Hussain Chinoy, Gabriel, gehring, George Grzegorz Pawelczak, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, haison, Haraldur Tómas Hallgrímsson, HarikrishnanBalagopal, Håkon Sandsmark, I-Hong, Ilham Firdausi Putra, Imran Salam, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, Jeroen Bédorf, Jerry Shih, jerryyin, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Joon, Josh Beal, Julian Niedermeier, Jun Wan, Junqin Zhang, Junyuan Xie, Justin Tunis, Kaixi Hou, Karl Lessard, Karthik Muthuraman, Kbhute-Ibm, khanhlvg, Koock Yoon, kstuedem, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, Leslie-Fang, Leslie-Fang-Intel, Li, Guizi, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manraj Singh Grover, Margaret Maynard-Reid, Mark Ryan, Matt Conley, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. Tarnowski, minds, mpppk, musikisomorphie, Nagy Mostafa, Nayana Thorat, Neil, Niels Ole Salscheider, Niklas Silfverström, Niranjan Hasabnis, ocjosen, olramde, Pariksheet Pinjari, Patrick J. Lopresti, Patrik Gustavsson, per1234, PeterLee, Phan Van Nguyen Duc, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, Rajeshwar Reddy T, Ramon Viñas, Rasmus Diederichsen, Reuben Morais, richardbrks, robert, RonLek, Ryan Jiang, saishruthi, Saket Khandelwal, Saleem Abdulrasool, Sami Kama, Sana-Damani, Sergii Khomenko, Severen Redwood, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, Srini511, srinivasan.narayanamoorthy, Sumesh Udayakumaran, Sungmann Cho, Tae-Hwan Jung, Taehoon Lee, Takeshi Watanabe, TengLu, terryky, TheMindVirus, ThisIsIsaac, Till Hoffmann, Timothy Liu, Tomer Gafner, Tongxuan Liu, Trent Lo, Trevor Morris, Uday Bondhugula, Vasileios Lioutas, vbvg2008, Vishnuvardhan Janapati, Vivek Suryamurthy, Wei Wang, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xinan Jiang, Xinping Wang, Yann-Yy, Yasir Modak, Yong Tang, Yongfeng Gu, Yuchen Ying, Yuxin Wu, zyeric, 王振华 (Zhenhua Wang)