CVE-2025-1550
Impact
The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model loading.
Patches
This problem is fixed starting with version 3.9.
Workarounds
Only load models from trusted sources and model archives created with Keras.
CVE-2025-8747
Summary
It is possible to bypass the mitigation introduced in response to CVE-2025-1550, when an untrusted Keras v3 model is loaded, even when "safe_mode" is enabled, by crafting malicious arguments to built-in Keras modules.
The vulnerability is exploitable on the default configuration and does not depend on user input (just requires an untrusted model to be loaded).
Impact
Type: Unsafe deserialization
Vector: Client-side (when loading an untrusted model)
Impact: Arbitrary file overwrite, which can lead to arbitrary code execution in many cases
if config["class_name"] == "__lambda__":
    if safe_mode:
        raise ValueError(
            "Requested the deserialization of a `lambda` object. "
            "This carries a potential risk of arbitrary code execution "
            "and thus it is disallowed by default. If you trust the "
            "source of the saved model, you can pass `safe_mode=False` to "
            "the loading function in order to allow `lambda` loading, "
            "or call `keras.config.enable_unsafe_deserialization()`."
        )
A fix for the vulnerability, which allows objects to be deserialized only from internal Keras modules, was introduced in commit bb340d6780fdd6e115f2f4f78d8dbe374971c930:
package = module.split(".", maxsplit=1)[0]
if package in {"keras", "keras_hub", "keras_cv", "keras_nlp"}:
However, it is still possible to exploit model loading, for example by reusing the internal Keras function keras.utils.get_file to download remote files to an attacker-controlled location.
This allows for arbitrary file overwrite which in many cases could also lead to remote code execution. For example, an attacker would be able to download a malicious authorized_keys file into the user’s SSH folder, giving the attacker full SSH access to the victim’s machine.
Since the model does not contain arbitrary Python code, this scenario will not be blocked by "safe_mode". It will bypass the latest fix since it uses a function from one of the approved modules (keras).
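Until a fixed version is in place, defenders can screen archives for this class of gadget reference before loading. The helper below is a hypothetical illustration: scan_keras_archive is not a Keras API, the flagged-name list is illustrative, and the config layout is a simplified stand-in for a real .keras config.json.

```python
import io
import json
import zipfile

# Callables that should never be referenced by an untrusted model config.
# (Illustrative list; extend as needed.)
FLAGGED_CALLABLES = {"keras.utils.get_file"}

def make_archive(config: dict) -> bytes:
    """Build an in-memory stand-in for a .keras zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("config.json", json.dumps(config))
    return buf.getvalue()

def scan_keras_archive(data: bytes) -> list:
    """Return flagged callables mentioned in the archive's config.json."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        config_text = zf.read("config.json").decode("utf-8")
    return [name for name in sorted(FLAGGED_CALLABLES) if name in config_text]

malicious = make_archive(
    {"class_name": "Lambda", "config": {"function": "keras.utils.get_file"}}
)
print(scan_keras_archive(malicious))  # ['keras.utils.get_file']
```

A simple substring scan like this is deliberately conservative: it may flag benign configs, but for untrusted inputs a false positive is far cheaper than a file overwrite.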
CVE-2025-9906
Keras versions prior to 3.11.0 allow arbitrary code execution when loading a crafted .keras model archive, even when safe_mode=True.
The issue arises because the archive’s config.json is parsed before layer deserialization. This can invoke keras.config.enable_unsafe_deserialization(), effectively disabling safe mode from within the loading process itself. An attacker can place this call first in the archive and then include a Lambda layer whose function is deserialized from a pickle, leading to the execution of attacker-controlled Python code as soon as a victim loads the model file.
Exploitation requires a user to open an untrusted model; no additional privileges are needed. The fix in version 3.11.0 enforces safe-mode semantics before reading any user-controlled configuration and prevents the toggling of unsafe deserialization via the config file.
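For deployments pinned below 3.11.0, a blunt pre-load check in the same spirit as the fix is to reject any archive whose config references the toggle at all. This is a hypothetical helper, not a Keras API, and the config text shown is an illustrative stand-in:

```python
import io
import zipfile

UNSAFE_MARKER = "enable_unsafe_deserialization"

def build_archive(config_text: str) -> bytes:
    """Build an in-memory stand-in for a .keras zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("config.json", config_text)
    return buf.getvalue()

def config_toggles_unsafe_mode(data: bytes) -> bool:
    """True if the archive's config.json references the safe-mode kill switch."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return UNSAFE_MARKER in zf.read("config.json").decode("utf-8")

hostile = build_archive(
    '{"module": "keras.config", "class_name": "enable_unsafe_deserialization"}'
)
print(config_toggles_unsafe_mode(hostile))  # True
```

There is no legitimate reason for a model config to name this function, so refusing such archives outright costs nothing in practice.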
CVE-2025-9905
Note: This report has already been discussed with the Google OSS VRP team, who recommended that I reach out directly to the Keras team. I’ve chosen to do so privately rather than opening a public issue, due to the potential security implications. I also attempted to use the email address listed in your SECURITY.md, but received no response.
Summary
When a model in the .h5 (or .hdf5) format is loaded using the Keras Model.load_model method, the safe_mode=True setting is silently ignored without any warning or error. This allows an attacker to execute arbitrary code on the victim’s machine with the same privileges as the Keras application. This report is specific to the .h5/.hdf5 file format. The attack works regardless of the other parameters passed to load_model and does not require any sophisticated technique—.h5 and .hdf5 files are simply not checked for unsafe code execution.
From this point on, I will refer only to the .h5 file format, though everything equally applies to .hdf5.
Details
Intended behaviour
According to the official Keras documentation, safe_mode is defined as:
safe_mode: Boolean, whether to disallow unsafe lambda deserialization. When safe_mode=False, loading an object has the potential to trigger arbitrary code execution. This argument is only applicable to the Keras v3 model format. Defaults to True.
I understand that the behavior described in this report may be intentional, as safe_mode is only applicable to .keras models.
However, in practice, this behavior is misleading for users who are unaware of the internal Keras implementation. .h5 files can still be loaded seamlessly using load_model with safe_mode=True, and the absence of any warning or error creates a false sense of security. Whether intended or not, I believe silently ignoring a security-related parameter is not the best possible design decision. At a minimum, if safe_mode cannot be applied to a given file format, an explicit error should be raised to alert the user.
This issue is particularly critical given the widespread use of the .h5 format, despite the introduction of newer formats.
As a small anecdotal test, I asked several of my colleagues what they would expect when loading a .h5 file with safe_mode=True. None of them expected the setting to be silently ignored, even after reading the documentation. While this is a small sample, all of these colleagues are cybersecurity researchers—experts in binary or ML security—and regular participants in DEF CON finals. I was careful not to give any hints about the vulnerability in our discussion.
Technical Details
Examining the implementation of load_model in keras/src/saving/saving_api.py, we can see that the safe_mode parameter is completely ignored when loading .h5 files. Here's the relevant snippet:
As shown, when the file format is .h5 or .hdf5, the method delegates to legacy_h5_format.load_model_from_hdf5, which does not use or check the safe_mode parameter at all.
Solution
Since the release of the new .keras format, I believe the simplest and most effective way to address this misleading behavior—and to improve security in Keras—is to have the safe_mode parameter raise an explicit error when safe_mode=True is used with .h5/.hdf5 files. This error should be clear and informative, explaining that the legacy format does not support safe_mode and outlining the associated risks of loading such files.
I recognize this fix may have minor backward compatibility considerations.
If you confirm that you're open to this approach, I’d be happy to open a PR that includes the missing check.
PoC
From the attacker’s perspective, creating a malicious .h5 model is as simple as the following:
From the victim’s side, triggering code execution is just as simple:
That’s all. The exploit occurs during model loading, with no further interaction required. The parameters passed to the method do not mitigate or influence the attack in any way.
As expected, the attacker can substitute the exec(...) call with any payload. Whatever command is used will execute with the same permissions as the Keras application.
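For context on why this works: the legacy Lambda machinery stores the layer's function as marshalled Python bytecode, so any loader that rebuilds and calls that function executes whatever code the file author chose. Below is a generic stdlib illustration of the mechanism (plain Python, not Keras code; the payload is kept benign):

```python
import marshal
import types

def attacker_payload():
    # Stands in for arbitrary attacker-controlled code (kept benign here).
    return "payload executed"

# What effectively gets embedded in the model file at save time:
blob = marshal.dumps(attacker_payload.__code__)

# What a naive loader does at load time: rebuild the function and call it.
rebuilt = types.FunctionType(marshal.loads(blob), {})

print(rebuilt())  # payload executed
```

Because marshal has no notion of trust or sandboxing, the only safe policy is to never reconstruct functions from untrusted files.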
Attack scenario
The attacker may distribute a malicious .h5/.hdf5 model on platforms such as Hugging Face, or act as a malicious node in a federated learning environment. The victim only needs to load the model; even safe_mode=True gives only the illusion of security. No inference or further action is required, making the threat particularly stealthy and dangerous.
Once the model is loaded, the attacker gains the ability to execute arbitrary code on the victim’s machine with the same privileges as the Keras process. The provided proof-of-concept demonstrates a simple shell spawn, but any payload could be delivered this way.
Add support for weight sharding for saving very large models with model.save(). It is controlled via the max_shard_size argument. Specifying this argument will split your Keras model weight file into chunks of this size at most. Use load_model() to reload the sharded files.
Add new Keras rematerialization API: keras.RematScope and keras.remat. It can be used to turn on rematerialization for certain layers in a fine-grained manner, e.g. only for layers larger than a certain size, or for a specific set of layers, or only for activations.
Increase op coverage for OpenVINO backend.
New operations:
keras.ops.rot90
keras.ops.rearrange (Einops-style)
keras.ops.signbit
keras.ops.polar
keras.ops.image.perspective_transform
keras.ops.image.gaussian_blur
New layers:
keras.layers.RMSNormalization
keras.layers.AugMix
keras.layers.CutMix
keras.layers.RandomInvert
keras.layers.RandomErasing
keras.layers.RandomGaussianBlur
keras.layers.RandomPerspective
Minor additions:
Add support for dtype argument to JaxLayer and FlaxLayer layers
Add boolean input support to BinaryAccuracy metric
Add antialias argument to keras.layers.Resizing layer.
Security fix: disallow object pickling in saved npz model files (numpy format). Thanks to Peng Zhou for reporting the vulnerability.
OpenVINO is now available as an inference-only Keras backend. You can start using it by setting the backend field to "openvino" in your keras.json config file.
OpenVINO is a deep learning inference-only framework tailored for CPU (x86, ARM), certain GPUs (OpenCL capable, integrated and discrete) and certain AI accelerators (Intel NPU).
Because OpenVINO does not support gradients, you cannot use it for training (e.g. model.fit()) -- only inference. You can train your models with the JAX/TensorFlow/PyTorch backends, and when trained, reload them with the OpenVINO backend for inference on a target device supported by OpenVINO.
New: ONNX model export
You can now export your Keras models to the ONNX format from the JAX, TensorFlow, and PyTorch backends.
Just pass format="onnx" in your model.export() call:
# Export the model as an ONNX artifact
model.export("path/to/location", format="onnx")

# Load the artifact in a different process/environment
ort_session = onnxruntime.InferenceSession("path/to/location")

# Run inference
ort_inputs = {
    k.name: v for k, v in zip(ort_session.get_inputs(), input_data)
}
predictions = ort_session.run(None, ort_inputs)
New: Scikit-Learn API compatibility interface
It's now possible to easily integrate Keras models into Scikit-Learn pipelines! The following wrapper classes are available:
keras.wrappers.SKLearnClassifier: implements the sklearn Classifier API
keras.wrappers.SKLearnRegressor: implements the sklearn Regressor API
keras.wrappers.SKLearnTransformer: implements the sklearn Transformer API
Other feature additions
Add new ops:
Add keras.ops.diagflat
Add keras.ops.unravel_index
Add new activations:
Add sparse_plus activation
Add sparsemax activation
Add new image augmentation and preprocessing layers:
Add keras.layers.RandAugment
Add keras.layers.Equalization
Add keras.layers.MixUp
Add keras.layers.RandomHue
Add keras.layers.RandomGrayscale
Add keras.layers.RandomSaturation
Add keras.layers.RandomColorJitter
Add keras.layers.RandomColorDegeneration
Add keras.layers.RandomSharpness
Add keras.layers.RandomShear
Add argument axis to tversky loss
JAX specific changes
Add support for JAX named scope
TensorFlow specific changes
Make keras.random.shuffle XLA compilable
PyTorch specific changes
Add support for model.export() and keras.export.ExportArchive with the PyTorch backend, supporting both the TF SavedModel format and the ONNX format.
Add flash_attention argument to keras.ops.dot_product_attention and to keras.layers.MultiHeadAttention.
Add keras.layers.STFTSpectrogram layer (to extract STFT spectrograms from inputs as a preprocessing step) as well as its initializer keras.initializers.STFTInitializer.
Add double_checkpoint argument to BackupAndRestore to save a fallback checkpoint in case the first checkpoint gets corrupted.
Add bounding box preprocessing support to image augmentation layers CenterCrop, RandomFlip, RandomZoom, RandomTranslation, RandomCrop.
Add keras.ops.exp2, keras.ops.inner operations.
Performance improvements
JAX backend: add native Flash Attention support for GPU (via cuDNN) and TPU (via a Pallas kernel). Flash Attention is now used automatically when the hardware supports it.
PyTorch backend: add native Flash Attention support for GPU (via cuDNN). It is currently opt-in.
TensorFlow backend: enable more kernel fusion via bias_add.
PyTorch backend: add support for Intel XPU devices.
New file editor utility: keras.saving.KerasFileEditor. Use it to inspect, diff, modify and resave Keras weights files. See basic workflow here.
New keras.utils.Config class for managing experiment config parameters.
BREAKING changes
When using keras.utils.get_file, with extract=True or untar=True, the return value will be the path of the extracted directory, rather than the path of the archive.
Other changes and additions
Logging is now asynchronous in fit(), evaluate(), predict(). This enables 100% compact stacking of train_step calls on accelerators (e.g. when running small models on TPU).
If you are using custom callbacks that rely on on_batch_end, this will disable async logging. You can force it back by adding self.async_safe = True to your callbacks. Note that the TensorBoard callback isn't considered async safe by default. Default callbacks like the progress bar are async safe.
Added keras.saving.KerasFileEditor utility to inspect, diff, modify and resave Keras weights file.
Added keras.utils.Config class. It behaves like a dictionary, with a few nice features:
All entries are accessible and settable as attributes, in addition to dict-style (e.g. config.foo = 2 or config["foo"] are both valid)
You can easily serialize it to JSON via config.to_json().
You can easily freeze it, preventing future changes, via config.freeze().
Added bitwise numpy ops:
bitwise_and
bitwise_invert
bitwise_left_shift
bitwise_not
bitwise_or
bitwise_right_shift
bitwise_xor
Added math op keras.ops.logdet.
Added numpy op keras.ops.trunc.
Added keras.ops.dot_product_attention.
Added keras.ops.histogram.
Allow infinite PyDataset instances to use multithreading.
Added argument verbose in keras.saving.ExportArchive.write_out() method for exporting TF SavedModel.
Added epsilon argument in keras.ops.normalize.
Added Model.get_state_tree() method for retrieving a nested dict mapping variable paths to variable values (either as numpy arrays or backend tensors (default)). This is useful for rolling out custom JAX training loops.
Added keras.layers.Pipeline class, to apply a sequence of layers to an input. This class is useful to build a preprocessing pipeline. Compared to a Sequential model, Pipeline features a few important differences:
It's not a Model, just a plain layer.
When the layers in the pipeline are compatible with tf.data, the pipeline will also remain tf.data compatible, independently of the backend you use.
Add integration with the Hugging Face Hub. You can now save models to Hugging Face Hub directly from keras.Model.save() and load .keras models directly from Hugging Face Hub with keras.saving.load_model().
Ensure compatibility with NumPy 2.0.
Add keras.optimizers.Lamb optimizer.
Improve keras.distribution API support for very large models.
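The keras.utils.Config behaviors described earlier in this changelog (attribute-style access, to_json(), freeze()) can be sketched in plain Python. This is an illustrative toy reimplementation of the documented semantics, not the Keras class:

```python
import json

class MiniConfig(dict):
    """Toy stand-in illustrating keras.utils.Config semantics."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        object.__setattr__(self, "_frozen", False)

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name) from None

    def __setattr__(self, name, value):
        self[name] = value  # attribute writes are dict writes

    def __setitem__(self, key, value):
        if self._frozen:
            raise ValueError("Cannot modify a frozen config.")
        super().__setitem__(key, value)

    def to_json(self):
        return json.dumps(self)

    def freeze(self):
        object.__setattr__(self, "_frozen", True)

config = MiniConfig()
config.foo = 2           # attribute style
config["bar"] = 3        # dict style
print(config.to_json())  # {"foo": 2, "bar": 3}
config.freeze()
```

Subclassing dict keeps JSON serialization trivial, while routing attribute access through item access gives the dual config.foo / config["foo"] interface described above.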
This PR contains the following updates:
keras: ==3.3.2 -> ==3.11.3
Warning
Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
GitHub Vulnerability Alerts
CVE-2025-1550
CVE-2025-8747
Details
Keras’ safe_mode flag is designed to disallow unsafe lambda deserialization, specifically by rejecting any arbitrary embedded Python code, marked by the "__lambda__" class name:
https://github.com/keras-team/keras/blob/v3.8.0/keras/src/saving/serialization_lib.py#L641
Example
The following truncated config.json will cause a remote file download from https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js to the local /tmp folder, by sending arbitrary arguments to Keras’ builtin function keras.utils.get_file():
PoC
Download malicious_model_download.keras to a local directory
Load the model.
index.js was created in the /tmp directory.
Fix suggestions
Add a block_all_lambda option that allows users to completely disallow loading models with a Lambda layer.
Audit the keras, keras_hub, keras_cv, keras_nlp modules and remove/block all "gadget functions" which could be used by malicious ML models.
Add a lambda_whitelist_functions option that allows users to specify a list of functions that are allowed to be invoked by a Lambda layer.
Credit
The vulnerability was discovered by Andrey Polkovnichenko of the JFrog Vulnerability Research team.
CVE-2025-9906
Arbitrary Code Execution in Keras
Affected versions: < 3.11.0
Patched version: 3.11.0
It is recommended to upgrade to version 3.11.0 or later and to avoid opening untrusted model files.
Release Notes
keras-team/keras (keras)
v3.11.3: Keras 3.11.3
What's Changed
Full Changelog: keras-team/keras@v3.11.2...v3.11.3
v3.11.2: Keras 3.11.2
What's Changed
New Contributors
Full Changelog: keras-team/keras@v3.11.1...v3.11.2
v3.11.1: Keras 3.11.1
What's Changed
Full Changelog: keras-team/keras@v3.11.0...v3.11.1
v3.11.0: Keras 3.11.0
What's Changed
fit()/evaluate()/predict().
keras.ops.kaiser function.
keras.ops.hanning function.
keras.ops.cbrt function.
keras.ops.deg2rad function.
keras.ops.layer_normalization function to leverage backend-specific performance optimizations.
Backend-specific changes
JAX backend
TensorFlow backend
Flatten layer.
OpenVINO backend
New Contributors
Full Changelog: keras-team/keras@v3.10.0...v3.11.0
v3.10.0: Keras 3.10.0
New features
Add support for weight sharding when saving very large models with model.save(), controlled via the max_shard_size argument. Specifying this argument will split your Keras model weight file into chunks of this size at most. Use load_model() to reload the sharded files.
keras.optimizers.Muon
keras.layers.RandomElasticTransform
keras.losses.CategoricalGeneralizedCrossEntropy (with functional version keras.losses.categorical_generalized_cross_entropy)
axis argument to SparseCategoricalCrossentropy
lora_alpha to all LoRA-enabled layers. If set, this parameter scales the low-rank adaptation delta during the forward pass.
keras.activations.sparse_sigmoid
keras.ops.image.elastic_transform
keras.ops.angle
keras.ops.bartlett
keras.ops.blackman
keras.ops.hamming
keras.ops.view_as_complex, keras.ops.view_as_real
PyTorch backend
TensorFlow backend
tf.RaggedTensor support to Embedding layer.
synchronization argument.
OpenVINO backend
New Contributors
Full Changelog: keras-team/keras@v3.9.0...v3.10.0
v3.9.2: Keras 3.9.2
What's Changed
Full Changelog: keras-team/keras@v3.9.1...v3.9.2
v3.9.1: Keras 3.9.1
What's Changed
Full Changelog: keras-team/keras@v3.9.0...v3.9.1
v3.9.0: Keras 3.9.0
New features
New Contributors
Full Changelog: keras-team/keras@v3.8.0...v3.9.0
v3.8.0: Keras 3.8.0
New: OpenVINO backend
New: ONNX model export
New: Scikit-Learn API compatibility interface
Other feature additions
JAX specific changes
TensorFlow specific changes
PyTorch specific changes
New Contributors
Full Changelog: keras-team/keras@v3.7.0...v3.8.0
v3.7.0: Keras 3.7.0
API changes
celu, glu, log_sigmoid, hard_tanh, hard_shrink, squareplus activations.
keras.losses.Circle loss.
keras.visualization.draw_bounding_boxes, keras.visualization.draw_segmentation_masks, keras.visualization.plot_image_gallery, keras.visualization.plot_segmentation_mask_gallery.
Performance improvements
New Contributors
Full Changelog: keras-team/keras@v3.6.0...v3.7.0
v3.6.0: Keras 3.6.0
Highlights
BREAKING changes
Other changes and additions
Added keras.layers.AutoContrast, keras.layers.Solarization.
New Contributors
Full Changelog: keras-team/keras@v3.5.0...v3.6.0
v3.5.0: Keras 3.5.0
What's Changed
Added keras.ops.associative_scan op.
Added keras.ops.searchsorted op.
Added keras.utils.PyDataset.on_epoch_begin() method.
Added data_format argument to keras.layers.ZeroPadding1D layer.
Full Changelog: keras-team/keras@v3.4.1...v3.5.0
v3.4.1: Keras 3.4.1
This is a minor bugfix release.
v3.4.0: Keras 3.4.0
Highlights
keras.dtype_policies.DTypePolicyMap for easy configuration of dtype policies of nested sublayers of a subclassed layer/model.
keras.ops.argpartition
keras.ops.scan
keras.ops.lstsq
keras.ops.switch
keras.ops.dtype
keras.ops.map
keras.ops.image.rgb_to_hsv
keras.ops.image.hsv_to_rgb
What's changed
float8 inference for Dense and EinsumDense layers.
name argument in all Keras Applications models.
axis argument in keras.losses.Dice.
keras.utils.FeatureSpace can be used in a tf.data pipeline even when the backend isn't TensorFlow.
StringLookup layer can now take tf.SparseTensor as input.
Metric.variables is now recursive.
training argument to Model.compute_loss().
dtype argument to all losses.
keras.utils.split_dataset now supports nested structures in dataset.
Full Changelog: keras-team/keras@v3.3.3...v3.4.0
v3.3.3: Keras 3.3.3
This is a minor bugfix release.
Configuration
📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.