Converting model not working due to "Failed to detect model architecture" #16473

Unanswered
whackl asked this question in Q&A

Hi there,
When I try to convert a model from Hugging Face with llama.cpp, I get the following error when I start the Python conversion script:

kiuser@kisystem:/opt/huggingface/ollama-work/llama.cpp$ /opt/huggingface/ollama-work/bin/python3 convert_hf_to_gguf.py ../colqwen2-hf --outfile colqwen2-v1.0.gguf --outtype q8_0
INFO:hf-to-gguf:Loading model: colqwen2-hf
Traceback (most recent call last):
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9485, in <module>
    main()
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9450, in main
    model_architecture = get_model_architecture(hparams, model_type)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9380, in get_model_architecture
    raise ValueError("Failed to detect model architecture")
ValueError: Failed to detect model architecture
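
From what I can tell, convert_hf_to_gguf.py chooses its converter class based on the "architectures" entry in the checkpoint's config.json, so a first diagnostic step is to print what that entry contains. A minimal sketch, assuming the checkpoint directory from the command above:

import json
from pathlib import Path

# Checkpoint directory passed to convert_hf_to_gguf.py above.
config_path = Path("../colqwen2-hf/config.json")
with config_path.open() as f:
    config = json.load(f)

# The converter matches this value against the architectures it has
# registered; an unmatched value is what raises the ValueError above.
print("architectures:", config.get("architectures"))
print("model_type:", config.get("model_type"))

If the printed architecture has no matching class registered in convert_hf_to_gguf.py (a ColQwen2 checkpoint may report something like ColQwen2ForRetrieval), the script fails with exactly this error.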
How can I fix this?

Regards ...


Replies: 0

Category: Q&A
Labels: none yet
1 participant
