LLM architecture Dots1ForCausalLM conversion to GGUF

The rednote-hilab/dots.llm1.inst model cannot be converted to GGUF format:
llama.cpp/convert_hf_to_gguf.py ./mymodels/dots.llm1.inst --outtype bf16 --outfile ./quantized_models/dots.llm1.inst_BF16.gguf
INFO:hf-to-gguf:Loading model: dots.llm1.inst
ERROR:hf-to-gguf:Model Dots1ForCausalLM is not supported

Is there a way to get the conversion done?


Hmm… This?

Galunid wrote:
convert-hf-to-gguf.py never supported llama-based models. Please use convert.py.
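For context on the error itself: convert_hf_to_gguf.py decides whether a model is convertible by looking at the "architectures" entry in the model's config.json and matching it against the converter classes registered in the script; an unregistered name produces exactly the "Model … is not supported" message shown above. A minimal sketch of that lookup idea, using a toy registry (the names and helper function here are illustrative, not llama.cpp's actual API):

```python
import json

# Toy subset standing in for the converter registry inside
# convert_hf_to_gguf.py (illustrative only).
SUPPORTED = {"LlamaForCausalLM", "MistralForCausalLM"}

def check_support(config_text: str) -> str:
    # The converter reads the first entry of "architectures" from config.json
    # and uses it to pick a converter class.
    arch = json.loads(config_text)["architectures"][0]
    if arch in SUPPORTED:
        return "supported"
    # Mirrors the log line: ERROR:hf-to-gguf:Model ... is not supported
    return f"Model {arch} is not supported"

print(check_support('{"architectures": ["Dots1ForCausalLM"]}'))
# → Model Dots1ForCausalLM is not supported
```

The practical consequence is that no command-line flag can work around the error; conversion only succeeds with a llama.cpp version whose converter script registers Dots1ForCausalLM, so updating to a recent build and retrying is the thing to check first.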