My goal is to use Keras to visualise the architecture of a model. No training, no inference.
For example, if I just want to visualise the graph of the classic VGG16-style network with
model = create_vgg_like_model(), then Keras starts to pre-allocate the memory it needs and the program crashes (see image).
PC specs: CPU with 4 cores, 8 GB RAM. Due to the "huge" number of parameters of the model, it even uses the swap partition.
I tried to play around with
import tensorflow as tf

cpu_devices = tf.config.list_physical_devices('CPU')
if cpu_devices:
    try:
        # avoid allocating all memory on the device
        tf.config.set_visible_devices(cpu_devices, 'CPU')
        tf.config.experimental.set_memory_growth(cpu_devices[0], True)

        model = create_vgg_like_model()
        dot = tf.keras.utils.model_to_dot(model)  # example of display
    except ValueError as e:
        print(e)
but without success; it raises
ValueError: Cannot set memory growth on non-GPU and non-Pluggable devices.
Is there a way
- to postpone the memory allocation on CPU and/or on GPU, or
- to implement a Model-like object containing only "shallow" information about the network, such as layer name, input, output, number of parameters, ...?
1 Answer
The problem is that TensorFlow did not detect a GPU, which is why it throws ValueError: Cannot set memory growth on non-GPU and non-Pluggable devices. To avoid the error, remove the line that causes it:
tf.config.experimental.set_memory_growth(cpu_devices[0], True)
Comment it out and the rest should work just fine, or use a proper GPU if you want to keep memory growth.
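For reference, a minimal sketch of the adjusted snippet (it assumes create_vgg_like_model() is the model-building helper from the question); the only change is that the set_memory_growth call is commented out:
import tensorflow as tf

cpu_devices = tf.config.list_physical_devices('CPU')
if cpu_devices:
    # restrict TensorFlow to the CPU devices it found
    tf.config.set_visible_devices(cpu_devices, 'CPU')
    # memory growth is only supported on GPU / pluggable devices,
    # so on a CPU-only machine this line has to stay out:
    # tf.config.experimental.set_memory_growth(cpu_devices[0], True)

    model = create_vgg_like_model()
    dot = tf.keras.utils.model_to_dot(model)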
Comments:
- from keras.applications import VGG16 -> model = VGG16(weights=None). You can get the model config with layer definitions from there like so: model.get_config().
- weights=None is accepted only by the pre-built models from keras.applications (not by a plain Model/Sequential), and what it does is a "random initialization" of the weights.
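A minimal sketch of that suggestion, assuming only the architecture (not trained weights) is needed; note that the randomly initialized weight tensors are still allocated, just not downloaded:
from keras.applications import VGG16

# weights=None skips loading the pretrained ImageNet weights and
# randomly initializes them instead
model = VGG16(weights=None)

# layer definitions as a plain dict
config = model.get_config()

# "shallow" per-layer information: name, output shape, parameter count
for layer in model.layers:
    print(layer.name, layer.output.shape, layer.count_params())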