* tf.distribute introduces experimental support for asynchronous training of Keras models via the tf.distribute.experimental.ParameterServerStrategy API. Please see below for additional details.
* MultiWorkerMirroredStrategy is now a stable API and is no longer considered experimental. Some of the major improvements involve handling peer failure and many bug fixes. Please check out the detailed tutorial on Multi-worker training with Keras.
* Introduces experimental support for a new module named tf.experimental.numpy, a NumPy-compatible API for writing TF programs. See the detailed guide to learn more. Additional details below.
* Adds support for TensorFloat-32 on Ampere based GPUs. TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere based GPUs and is enabled by default.
* A major refactoring of the internals of the Keras Functional API has been completed, which should improve the reliability, stability, and performance of constructing Functional models.
* The Keras mixed precision API tf.keras.mixed_precision is no longer experimental and allows the use of 16-bit floating point formats during training, improving performance by up to 3x on GPUs and 60% on TPUs. Please see below for additional details, and a usage sketch after this list.
* TensorFlow Profiler now supports profiling MultiWorkerMirroredStrategy and tracing multiple workers using the sampling mode API.
* TFLite Profiler for Android is available. See the detailed guide to learn more.
* TensorFlow pip packages are now built with CUDA 11 and cuDNN 8.0.2.
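As a quick illustration of the now-stable mixed precision API, here is a minimal sketch that enables the mixed_float16 policy for a small Keras model; the layer sizes and optimizer are arbitrary placeholders chosen for the example.

```python
import tensorflow as tf

# Enable mixed precision globally: compute in float16, keep variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    # Keep the final layer in float32 for numerical stability of the loss.
    tf.keras.layers.Dense(10, dtype="float32"),
])

# With the "mixed_float16" policy, Model.compile wraps the optimizer in a
# LossScaleOptimizer automatically (see the mixed precision notes below).
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```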
* TF Core:
  * Certain float32 ops run in lower precision on Ampere based GPUs because of TensorFloat-32; this can be disabled by calling tf.config.experimental.enable_tensor_float_32_execution(False).
  * The byte layout for string tensors across the C API has been updated to match TF Core/C++, i.e. a contiguous array of tensorflow::tstring/TF_TStrings.
  * TF_StringDecode, TF_StringEncode, and TF_StringEncodedSize are no longer relevant and have been removed; see core/platform/ctstring.h for string access/modification in C.
  * The tensorflow.python, tensorflow.core and tensorflow.compiler modules are now hidden. These modules are not part of the TensorFlow public API.
  * tf.raw_ops.Max and tf.raw_ops.Min no longer accept inputs of type tf.complex64 or tf.complex128, because the behavior of these ops is not well defined for complex types.
  * XLA:CPU and XLA:GPU devices are no longer registered by default. Use TF_XLA_FLAGS=--tf_xla_enable_xla_devices if you really need them, but this flag will eventually be removed in subsequent releases.
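For code affected by the TensorFloat-32 change above, a minimal sketch of checking and disabling TF32 execution (on hardware other than Ampere GPUs the setting has no effect):

```python
import tensorflow as tf

# TF32 is enabled by default in this release.
print(tf.config.experimental.tensor_float_32_execution_enabled())  # True by default

# Opt out to keep full float32 precision for matmuls and convolutions.
tf.config.experimental.enable_tensor_float_32_execution(False)
```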
* tf.keras:
  * The steps_per_execution argument in compile() is no longer experimental; if you were passing experimental_steps_per_execution, rename it to steps_per_execution in your code. This argument controls the number of batches to run during each tf.function call when calling fit(). Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead (see the sketch after this list).
  * A major refactoring of the internals of the Keras Functional API may affect code that relies on certain internal details:
    * Code that uses isinstance(x, tf.Tensor) instead of tf.is_tensor when checking Keras symbolic inputs/outputs should switch to using tf.is_tensor.
    * Code that relies on the exact names of symbolic tensors (e.g. treating names as unique identifiers instead of using tensor.ref(), etc.) may need to be updated.
    * Code that uses get_concrete_function to trace Keras symbolic inputs directly should switch to building matching tf.TensorSpecs directly and tracing the TensorSpec objects.
    * Code that uses tf.map_fn/tf.cond/tf.while_loop/control flow as op layers and happens to work before TF 2.4 will explicitly be unsupported now. Converting these ops to Functional API op layers was unreliable before TF 2.4, and prone to erroring incomprehensibly or being silently buggy.
    * Ops such as tf.rank used to return a static or symbolic value depending on whether the input had a fully static shape or not. Now these ops always return symbolic values.
    * Code that manually walks a tf.keras.Model layer by layer and assumes layers only ever have one positional argument may break. This assumption doesn't hold true before TF 2.4 either, but is more likely to cause issues now.
    * Manually entering keras.backend.get_graph() before building a functional model is no longer needed.
    * Input shape assumptions are now enforced when calling Functional models, which may break code where there is a mismatch between the shape used when creating Input objects in a Functional model and the shape of the data passed to that model. You can fix this mismatch by either calling the model with correctly-shaped data, or by relaxing Input shape assumptions (note that you can pass shapes with None entries for axes that are meant to be dynamic). You can also disable the input checking entirely by setting model.input_spec = None.
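A minimal sketch of the renamed steps_per_execution argument mentioned above (the model, loss, and batch count are arbitrary placeholders):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Previously: model.compile(..., experimental_steps_per_execution=32)
model.compile(
    optimizer="sgd",
    loss="mse",
    steps_per_execution=32,  # run 32 batches per tf.function call during fit()
)
```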
  * Several changes have been made to tf.keras.mixed_precision.experimental. Note that it is now recommended to use the non-experimental tf.keras.mixed_precision API:
    * AutoCastVariable.dtype now refers to the actual variable dtype, not the dtype it will be casted to.
    * When mixed precision is enabled, tf.keras.layers.Embedding now outputs a float16 or bfloat16 tensor instead of a float32 tensor.
    * tf.keras.mixed_precision.experimental.LossScaleOptimizer.loss_scale is now a tensor, not a LossScale object. This means that to get the loss scale of a LossScaleOptimizer as a tensor, you must now use opt.loss_scale instead of opt.loss_scale().
    * The property should_cast_variables has been removed from tf.keras.mixed_precision.experimental.Policy.
    * When passing a tf.mixed_precision.experimental.DynamicLossScale to tf.keras.mixed_precision.experimental.LossScaleOptimizer, the DynamicLossScale's multiplier must be 2.
    * When passing a tf.mixed_precision.experimental.DynamicLossScale to tf.keras.mixed_precision.experimental.LossScaleOptimizer, the weights of the DynamicLossScale are copied into the LossScaleOptimizer instead of being reused. This means modifying the DynamicLossScale will no longer affect the weights of the LossScaleOptimizer, and vice versa.
    * The global policy can no longer be set to a non-floating-point policy in tf.keras.mixed_precision.experimental.set_policy.
    * In Layer.call, AutoCastVariables will no longer be casted within MirroredStrategy.run or ReplicaContext.merge_call. This is because a thread-local variable is used to determine whether AutoCastVariables are casted, and those two functions run with a different thread. The non-experimental classes behave the same way in Layer.call; if one of those two functions calls Layer.call, AutoCastVariables will still be casted.
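A minimal sketch of the loss_scale property change described above, using the new non-experimental LossScaleOptimizer (the wrapped optimizer is an arbitrary placeholder):

```python
import tensorflow as tf

opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())

# loss_scale is now accessed as a tensor-valued property, not called as a method.
print(opt.loss_scale)       # correct in this release
# print(opt.loss_scale())   # would fail: loss_scale is no longer a LossScale object
```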
* tf.data:
  * tf.data.experimental.service.DispatchServer now takes a config tuple instead of individual arguments. Usages should be updated to tf.data.experimental.service.DispatchServer(dispatcher_config).
  * tf.data.experimental.service.WorkerServer now takes a config tuple instead of individual arguments. Usages should be updated to tf.data.experimental.service.WorkerServer(worker_config).
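A minimal sketch of constructing the tf.data service servers with config objects as described above; the port number is a placeholder, and the DispatcherConfig/WorkerConfig names are the config classes assumed to accompany this change:

```python
import tensorflow as tf

dispatcher_config = tf.data.experimental.service.DispatcherConfig(port=5000)
dispatcher = tf.data.experimental.service.DispatchServer(dispatcher_config)

worker_config = tf.data.experimental.service.WorkerConfig(
    # dispatcher.target looks like "grpc://localhost:5000"; strip the protocol.
    dispatcher_address=dispatcher.target.split("://")[1],
)
worker = tf.data.experimental.service.WorkerServer(worker_config)
```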
* tf.distribute:
  * Removes tf.distribute.Strategy.experimental_make_numpy_dataset. Please use tf.data.Dataset.from_tensor_slices instead.
  * Renames experimental_hints in tf.distribute.StrategyExtended.reduce_to, tf.distribute.StrategyExtended.batch_reduce_to, and tf.distribute.ReplicaContext.all_reduce to options.
  * Renames tf.distribute.experimental.CollectiveHints to tf.distribute.experimental.CommunicationOptions.
  * Renames tf.distribute.experimental.CollectiveCommunication to tf.distribute.experimental.CommunicationImplementation.
  * Renames tf.distribute.Strategy.experimental_distribute_datasets_from_function to distribute_datasets_from_function as it is no longer experimental.
  * Removes the tf.distribute.Strategy.experimental_run_v2 method, which was deprecated in TF 2.2.
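A minimal sketch of the distribute_datasets_from_function rename noted above (the strategy, dataset, and batch size are placeholders):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def dataset_fn(input_context):
    # Split a global batch size of 64 across replicas.
    batch_size = input_context.get_per_replica_batch_size(64)
    return tf.data.Dataset.range(1000).batch(batch_size)

# Previously: strategy.experimental_distribute_datasets_from_function(dataset_fn)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
```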
* tf.lite:
  * tf.quantization.quantize_and_dequantize_v2 has been introduced, which updates the gradient definition for quantization of values outside the range to be zero. To simulate the V1 behavior of tf.quantization.quantize_and_dequantize(...), use tf.grad_pass_through(tf.quantization.quantize_and_dequantize_v2)(...).
* TF Core:
  * Introduces experimental support for a new module named tf.experimental.numpy, which provides a class ndarray that mimics the ndarray class in NumPy and wraps an immutable tf.Tensor under the hood. A subset of NumPy functions (e.g. numpy.add) are provided. Their inter-operation with TF facilities is seamless in most cases (see the sketch after this list).
  * tf.types.experimental.TensorLike is a new Union type that can be used as a type annotation for variables representing a Tensor or a value that can be converted to a Tensor by tf.convert_to_tensor.
  * Adds tf.sparse.map_values to apply a function to the .values of SparseTensor arguments.
  * The Python bitwise operators of Tensor (__and__, __or__, __xor__ and __invert__) now support non-bool arguments and apply the corresponding bitwise ops. bool arguments continue to be supported and dispatch to logical ops. This brings them more in line with Python and NumPy behavior.
  * Adds tf.SparseTensor.with_values. This returns a new SparseTensor with the same sparsity pattern, but with new provided values. It is similar to the with_values function of RaggedTensor.
  * Adds a StatelessCase op, and uses it if none of the case branches has stateful ops.
  * Adds tf.config.experimental.get_memory_usage to return total memory usage of the device.
  * Adds gradients for RaggedTensorToVariant and RaggedTensorFromVariant.
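A minimal sketch of the tf.experimental.numpy interop mentioned above (shapes and values are arbitrary):

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

x = tnp.ones([2, 3])          # a tnp.ndarray backed by an immutable tf.Tensor
y = tnp.add(x, 1.0)           # NumPy-style function

# Inter-operation with regular TF ops works in most cases.
z = tf.reduce_sum(y)
print(z.numpy())              # 12.0
```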
* tf.debugging:
  * tf.debugging.assert_shapes() now works on SparseTensors (Fixes #36268).
* TensorFloat-32 support on Ampere based GPUs can be disabled with tf.config.experimental.enable_tensor_float_32_execution.
* tf.math:
  * Adds tf.math.erfcinv, the inverse of tf.math.erfc.
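A quick sketch of the new tf.math.erfcinv (input values chosen arbitrarily from its domain):

```python
import tensorflow as tf

x = tf.constant([0.25, 0.5, 1.0, 1.5])
y = tf.math.erfcinv(x)     # inverse of the complementary error function
print(tf.math.erfc(y))     # recovers approximately [0.25, 0.5, 1.0, 1.5]
```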
* tf.nn:
  * tf.nn.max_pool2d now supports explicit padding.
* tf.image:
  * Adds deterministic tf.image.stateless_random_* functions for each tf.image.random_* function. Adds a new op stateless_sample_distorted_bounding_box which is a deterministic version of the sample_distorted_bounding_box op. Given the same seed, these stateless functions/ops produce the same results independent of how many times the function is called, and independent of global seed settings.
  * Adds deterministic tf.image.resize backprop CUDA kernels for method=ResizeMethod.BILINEAR (the default method). Enable by setting the environment variable TF_DETERMINISTIC_OPS to "true" or "1".
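A minimal sketch of the stateless image ops described above (the image tensor and seed values are placeholders):

```python
import tensorflow as tf

image = tf.zeros([64, 64, 3])
seed = (1, 2)  # stateless ops take an explicit seed of two integers

# Same seed -> same result, regardless of how often the op is called.
flipped_a = tf.image.stateless_random_flip_left_right(image, seed=seed)
flipped_b = tf.image.stateless_random_flip_left_right(image, seed=seed)
assert tf.reduce_all(flipped_a == flipped_b)
```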
* tf.print:
  * Fixes a bug in tf.print() with OrderedDict where, if an OrderedDict didn't have the keys sorted, the keys and values were not being printed in accordance with their correct mapping.
* tf.train.Checkpoint:
  * Now accepts a root argument in the initialization, which generates a checkpoint with a root object. This allows users to create a Checkpoint object that is compatible with Keras model.save_weights() and model.load_weights. The checkpoint is also compatible with the checkpoint saved in the variables/ folder in the SavedModel.
  * When restoring, save_path can be a path to a SavedModel. The function will automatically find the checkpoint in the SavedModel.
* tf.data:
  * Adds the tf.data.experimental.service.register_dataset and tf.data.experimental.service.from_dataset_id APIs to enable one process to register a dataset with the tf.data service and another process to consume data from it.
  * Adds support for dispatcher fault tolerance. To enable it, configure a work_dir when running your dispatcher server and set dispatcher_fault_tolerance=True. The dispatcher will store its state to work_dir, so that on restart it can continue from its previous state.
  * The dispatcher's work_dir must be accessible from workers. If the worker fails to read from the work_dir, it falls back to using RPC for dataset graph transfer.
  * Adds an exclude_cols parameter to CsvDataset. This parameter is the complement of select_cols; at most one of these should be specified.
  * Data-discarding transformations such as take and shard are now reordered to happen earlier in the dataset when it is safe to do so. The optimization can be disabled via the experimental_optimization.reorder_data_discarding_ops dataset option.
  * tf.data.Options were previously immutable and can now be overridden.
  * tf.data.Dataset.from_generator now supports Ragged and Sparse tensors with a new output_signature argument, which allows from_generator to produce any type describable by a tf.TypeSpec (see the sketch after this list).
  * tf.data.experimental.AUTOTUNE is now available in the core API as tf.data.AUTOTUNE.
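A minimal sketch of from_generator with the new output_signature argument referenced above (the generator and shapes are placeholders):

```python
import tensorflow as tf

def gen():
    yield tf.ragged.constant([[1, 2], [3]])
    yield tf.ragged.constant([[4], [5, 6, 7]])

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=tf.RaggedTensorSpec(shape=[2, None], dtype=tf.int32),
)

for rt in ds:
    print(rt)  # each element is a tf.RaggedTensor
```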
* tf.distribute:
  * tf.distribute.experimental.ParameterServerStrategy:
    * Replaces the existing tf.distribute.experimental.ParameterServerStrategy symbol with a new class that is for parameter server training in TF2. Usage of the old symbol should be replaced with tf.compat.v1.distribute.experimental.ParameterServerStrategy.
    * Adds the tf.distribute.experimental.coordinator.* namespace, including the main API ClusterCoordinator for coordinating the training cluster, and the related data structures RemoteValue and PerWorkerValue.
  * Adds the tf.distribute.Strategy.gather and tf.distribute.ReplicaContext.all_gather APIs to support gathering dense distributed values.
* tf.keras:
  * Several classes of TF ops that were not reliably converted to Keras layers during Functional API construction should now work, e.g. tf.image.ssim_multiscale.
  * Optimizer.minimize can now accept a loss Tensor and a GradientTape as an alternative to accepting a callable loss.
  * Adds a beta hyperparameter to FTRL optimizer classes (Keras and others) to match the FTRL paper.
  * Optimizer.__init__ now accepts a gradient_aggregator to allow for customization of how gradients are aggregated across devices, as well as gradients_transformers to allow for custom gradient transformations (such as gradient clipping).
  * In the Attention and AdditiveAttention layers, the call() method now accepts a return_attention_scores argument. When set to True, the attention scores are returned as an additional output.
  * Adds tf.metrics.log_cosh and tf.metrics.logcosh API entrypoints with the same implementation as their tf.losses equivalent.
  * Model.evaluate uses no cached data for evaluation, while Model.fit uses cached data when a validation_data arg is provided, for better performance.
  * Adds a save_traces argument to model.save / tf.keras.models.save_model which determines whether the SavedModel format stores the Keras model/layer call functions. The traced functions allow Keras to revive custom models and layers without the original class definition, but if this isn't required the tracing can be disabled with the added option (see the sketch after this list).
  * The tf.keras.mixed_precision API is now non-experimental, with some minor differences from the experimental API:
    * tf.keras.mixed_precision.Policy no longer takes in a tf.mixed_precision.experimental.LossScale in the constructor, and no longer has a LossScale associated with it. Instead, Model.compile will automatically wrap the optimizer with a LossScaleOptimizer if Policy.name is "mixed_float16".
    * tf.keras.mixed_precision.LossScaleOptimizer's constructor takes in different arguments: it no longer takes in a LossScale, and there is no longer a LossScale associated with the LossScaleOptimizer. Instead, LossScaleOptimizer directly implements loss scaling.
    * See the documentation of tf.keras.mixed_precision.experimental.LossScaleOptimizer for the differences between the experimental LossScaleOptimizer and the new non-experimental LossScaleOptimizer.
    * tf.mixed_precision.experimental.LossScale and its subclasses are deprecated, as their functionality now exists within tf.keras.mixed_precision.LossScaleOptimizer.
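A minimal sketch of the new save_traces option referenced above (the model and save path are placeholders):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Skip serializing traced call functions; reviving custom layers then
# requires their original class definitions to be available.
model.save("/tmp/my_model", save_traces=False)

reloaded = tf.keras.models.load_model("/tmp/my_model")
```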
* tf.lite:
  * TFLiteConverter:
    * Adds support for the optional flags inference_input_type and inference_output_type for full integer quantized models. This allows users to modify the model input and output type to integer types (tf.int8, tf.uint8) instead of defaulting to float type (tf.float32). See the sketch after this list.
  * Deprecates the Interpreter.setUseNNAPI(boolean) Java API. Use Interpreter.Options.setUseNNAPI instead.
  * Deprecates the Interpreter::UseNNAPI(bool) C++ API. Use NnApiDelegate() and related delegate configuration methods directly.
  * Deprecates the Interpreter::SetAllowFp16PrecisionForFp32(bool) C++ API. Prefer controlling this via delegate options, e.g. tflite::StatefulNnApiDelegate::Options::allow_fp16 or TfLiteGpuDelegateOptionsV2::is_precision_loss_allowed.
  * DynamicBuffer::AddJoinedString() will now add a separator if the first string to be joined is empty.
* TensorRT:
  * Issues a warning when the session_config parameter for the TF1 converter is used or the rewrite_config_template field in the TF2 converter is used.
* Adds support for the beta parameter of the FTRL optimizer for TPU embeddings. Users of other TensorFlow platforms can implement equivalent behavior by adjusting the l2 parameter.
* tf.xla.experimental.compile is deprecated; use tf.function(experimental_compile=True) instead.
* Adds tf.function.experimental_get_compiler_ir, which returns compiler IR (currently 'hlo' and 'optimized_hlo') for a given function with given inputs.
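A minimal sketch of the TFLiteConverter flags mentioned in the tf.lite list above; the saved model path, representative dataset, and input shape are placeholders for an actual quantization workflow:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Representative data is required for full integer quantization.
converter.representative_dataset = lambda: (
    [tf.random.normal([1, 4])] for _ in range(10)
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

# New in this release: integer input/output types for fully quantized models.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
```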
* Security:
  * Fixes an undefined behavior causing a segfault in tf.raw_ops.Switch (CVE-2020-15190).
  * Fixes vulnerabilities in SparseFillEmptyRowsGrad.
  * Fixes vulnerabilities in the RaggedCountSparseOutput and SparseCountSparseOutput operations.
  * Fixes a format string vulnerability in tf.strings.as_string (CVE-2020-15203).
  * Fixes a vulnerability in tf.raw_ops.StringNGrams (CVE-2020-15205).
  * Fixes issues in SavedModel validation (CVE-2020-15206).
  * Fixes a vulnerability in tf.quantization.quantize_and_dequantize (CVE-2020-15265).
* Adds tf.config.experimental.mlir_bridge_rollout, which will help us roll out the new MLIR TPU bridge.
* Adds tf.experimental.register_filesystem_plugin to load modular filesystem plugins from Python.

This release contains contributions from many people at Google and external contributors.
8bitmp3, aaa.jq, Abhineet Choudhary, Abolfazl Shahbazi, acxz, Adam Hillier, Adrian Garcia Badaracco, Ag Ramesh, ahmedsabie, Alan Anderson, Alexander Grund, Alexandre Lissy, Alexey Ivanov, Amedeo Cavallo, anencore94, Aniket Kumar Singh, Anthony Platanios, Ashwin Phadke, Balint Cristian, Basit Ayantunde, bbbboom, Ben Barsdell, Benjamin Chetioui, Benjamin Peterson, bhack, Bhanu Prakash Bandaru Venkata, Biagio Montaruli, Brent M. Spell, bubblebooy, bzhao, cfRod, Cheng Chen, Cheng(Kit) Chen, Chris Tessum, Christian, chuanqiw, codeadmin_peritiae, COTASPAR, CuiYifeng, danielknobe, danielyou0230, dannyfriar, daria, DarrenZhang01, Denisa Roberts, dependabot[bot], Deven Desai, Dmitry Volodin, Dmitry Zakharov, drebain, Duncan Riach, Eduard Feicho, Ehsan Toosi, Elena Zhelezina, emlaprise2358, Eugene Kuznetsov, Evaderan-Lab, Evgeniy Polyakov, Fausto Morales, Felix Johnny, fo40225, Frederic Bastien, Fredrik Knutsson, fsx950223, Gaurav Singh, Gauri1 Deshpande, George Grzegorz Pawelczak, gerbauz, Gianluca Baratti, Giorgio Arena, Gmc2, Guozhong Zhuang, Hannes Achleitner, Harirai, HarisWang, Harsh188, hedgehog91, Hemal Mamtora, Hideto Ueno, Hugh Ku, Ian Beauregard, Ilya Persky, jacco, Jakub Beránek, Jan Jongboom, Javier Montalt Tordera, Jens Elofsson, Jerry Shih, jerryyin, jgehw, Jinjing Zhou, jma, jmsmdy, Johan Nordström, John Poole, Jonah Kohn, Jonathan Dekhtiar, jpodivin, Jung Daun, Kai Katsumata, Kaixi Hou, Kamil Rakoczy, Kaustubh Maske Patil, Kazuaki Ishizaki, Kedar Sovani, Koan-Sin Tan, Koki Ibukuro, Krzysztof Laskowski, Kushagra Sharma, Kushan Ahmadian, Lakshay Tokas, Leicong Li, levinxo, Lukas Geiger, Maderator, Mahmoud Abuzaina, Mao Yunfei, Marius Brehler, markf, Martin Hwasser, Martin Kubovčík, Matt Conley, Matthias, mazharul, mdfaijul, Michael137, MichelBr, Mikhail Startsev, Milan Straka, Ml-0, Myung-Hyun Kim, Måns Nilsson, Nathan Luehr, ngc92, nikochiko, Niranjan Hasabnis, nyagato_00, Oceania2018, Oleg Guba, Ongun Kanat, OscarVanL, Patrik Laurell, Paul Tanger, Peter Sobot, Phil Pearl, PlusPlusUltra, Poedator, Prasad Nikam, Rahul-Kamat, Rajeshwar Reddy T, redwrasse, Rickard, Robert Szczepanski, Rohan Lekhwani, Sam Holt, Sami Kama, Samuel Holt, Sandeep Giri, sboshin, Sean Settle, settle, Sharada Shiddibhavi, Shawn Presser, ShengYang1, Shi,Guangyong, Shuxiang Gao, Sicong Li, Sidong-Wei, Srihari Humbarwadi, Srinivasan Narayanamoorthy, Steenu Johnson, Steven Clarkson, stjohnso98, Tamas Bela Feher, Tamas Nyiri, Tarandeep Singh, Teng Lu, Thibaut Goetghebuer-Planchon, Tim Bradley, Tomasz Strejczek, Tongzhou Wang, Torsten Rudolf, Trent Lo, Ty Mick, Tzu-Wei Sung, Varghese, Jojimon, Vignesh Kothapalli, Vishakha Agrawal, Vividha, Vladimir Menshakov, Vladimir Silyaev, VoVAllen, Võ Văn Nghĩa, wondertx, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yair Ehrenwald, Yasir Modak, Yasuhiro Matsumoto, Yimei Sun, Yiwen Li, Yixing, Yoav Ramon, Yong Tang, Yong Wu, yuanbopeng, Yunmo Koo, Zhangqiang, Zhou Peng, ZhuBaohe, zilinzhu, zmx