Hi there,
When I try to convert a model from Hugging Face with llama.cpp, I get the following error when I start the Python conversion script:
```
kiuser@kisystem:/opt/huggingface/ollama-work/llama.cpp$ /opt/huggingface/ollama-work/bin/python3 convert_hf_to_gguf.py ../colqwen2-hf --outfile colqwen2-v1.0.gguf --outtype q8_0
INFO:hf-to-gguf:Loading model: colqwen2-hf
Traceback (most recent call last):
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9485, in <module>
    main()
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9450, in main
    model_architecture = get_model_architecture(hparams, model_type)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9380, in get_model_architecture
    raise ValueError("Failed to detect model architecture")
ValueError: Failed to detect model architecture
```
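From the traceback, the failure happens in `get_model_architecture()`, which, as far as I understand, matches the `architectures` entry in the model's `config.json` against the converter's list of supported model classes. Here is a minimal sketch to print the value the converter sees, assuming the model directory from the command above:

```python
# Minimal diagnostic sketch: show which architecture string
# convert_hf_to_gguf.py will try to match. If this string is not
# among the converter's registered models, the script raises
# "Failed to detect model architecture".
import json
from pathlib import Path

model_dir = Path("../colqwen2-hf")  # the directory passed to the script
config = json.loads((model_dir / "config.json").read_text())
print(config.get("architectures"))
```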
How can I fix this?
Regards ...