6 questions
0 votes · 0 answers · 161 views
How do I annotate Keras's Dense layer for quantization with tfmot.quantization.keras.quantize_annotate_layer?
[To give context, this capstone project is an ML visual model for real-world, real-time classification of waste based on its biodegradability. For better accuracy, I chose ResNet50 as the ...
0 votes · 1 answer · 82 views
What does QuantizeWrapperV2 actually do?
So I am training a small CNN model that has a few Conv2D layers and some MaxPool2D, Activation, and Dense layers, basically the basic layers that TensorFlow provides.
I want it to run on an embedded system ...
0 votes · 1 answer · 205 views
How to set training=False for keras-model/layer outside of the __call__ method?
I’m using Keras with tensorflow-model-optimization (tf_mot) for quantization-aware training (QAT). My model is based on a pre-trained backbone from keras.applications. As mentioned in the transfer ...
0 votes · 1 answer · 162 views
How to save checkpoints of quantize-wrapped models in TensorFlow Model Optimization?
Hi, I am using TensorFlow and the Model Optimization toolkit.
This is an overview of the process:
from tensorflow_model_optimization.quantization.keras import quantize_model
model = define_model()
qat_model = ...
3 votes · 2 answers · 2k views
Can't import tensorflow_model_optimization
I get an error when trying to import the tensorflow_model_optimization package.
When I run:
import tensorflow_model_optimization as tfmot
I get the following error:
Traceback (most recent call last):
...
4 votes · 1 answer · 1k views
Quantization of a custom model with a custom layer (full int8)
Hi, I have a custom model that requires every layer to be int8-quantized.
I have a custom layer called CustomLayer.
Post-training quantization (works):
converter = tf.lite.TFLiteConverter....
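The full-int8 post-training conversion the question starts from can be sketched like this. The model and the random calibration data are illustrative stand-ins for the question's custom model; in practice the representative dataset should come from real inputs:

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in for the question's custom model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation="relu", input_shape=(4,)),
])

def representative_dataset():
    # Calibration samples shaped like the real input data; used to
    # estimate activation ranges for int8 quantization.
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force every op onto int8 kernels; conversion fails if any op
# (e.g. one inside a custom layer) has no int8 implementation.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```

When a custom layer lowers to an op without an int8 kernel, `TFLITE_BUILTINS_INT8` makes the converter raise an error instead of silently falling back to float, which is the constraint the question is working against.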