
Conversation

@Hansehart
Contributor

Hi, this has the same changes as the reverted PR, but with fixed code. Copied and pasted from before:

When I tried to run save_pretrained_gguf() I got multiple different errors. All of them were related to a failing llama.cpp build. As you can see under this pull request, curl is now enabled by default for llama.cpp, which for me results in:

CMake Error at common/CMakeLists.txt:90 (message):
  Could NOT find CURL.  Hint: to disable this feature, set -DLLAMA_CURL=OFF

(System is Ubuntu 24.04 with the newest versions of unsloth and llama.cpp. curl is installed.)
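The CMake error's own hint points at the workaround: configure llama.cpp with curl disabled. A minimal sketch of assembling that configure command in Python, assuming a local `llama.cpp` checkout with the default build directory (the paths are assumptions, not part of this PR):

```python
import subprocess

def cmake_configure_cmd(source_dir="llama.cpp", build_dir="llama.cpp/build",
                        disable_curl=True):
    """Build the cmake configure command for a local llama.cpp checkout."""
    cmd = ["cmake", "-S", source_dir, "-B", build_dir]
    if disable_curl:
        # -DLLAMA_CURL=OFF is the flag the CMake error message itself suggests
        cmd.append("-DLLAMA_CURL=OFF")
    return cmd

# Run only on a machine with cmake and a llama.cpp checkout:
# subprocess.run(cmake_configure_cmd(), check=True)
```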

So I installed llama.cpp manually, but then got the error:


Unsloth: The file 'llama.cpp/llama-quantize' or 'llama.cpp/quantize' does not exist.
But we expect this file to exist! Maybe the llama.cpp developers changed the name or check the extension of the llama-quantize file.

That's right, because it is still under llama.cpp/build/bin. As save.py shows, all llama-* files get copied into llama.cpp/ when the build runs successfully. In my case that didn't happen, and I got some misleading error messages. I hope this pull request reduces the number of people running into these issues, or at least gives them a clearer direction for debugging!
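The kind of path checking described above can be sketched as follows: look for the quantize binary in every location llama.cpp has used, including the CMake build output directory, instead of assuming it was copied to the repo root. This is a minimal illustration, not the actual save.py code; the candidate paths and function name are assumptions based on the description:

```python
import os

# Locations where the quantize binary may live, depending on llama.cpp
# version and whether the post-build copy step ran.
CANDIDATE_PATHS = [
    "llama.cpp/llama-quantize",            # copied here after a successful build
    "llama.cpp/quantize",                  # older llama.cpp binary name
    "llama.cpp/build/bin/llama-quantize",  # where CMake actually places it
]

def find_quantize_binary(candidates=CANDIDATE_PATHS):
    """Return the first existing candidate path, or raise a clear error."""
    for path in candidates:
        if os.path.exists(path):
            return path
    raise FileNotFoundError(
        "Unsloth: could not find the llama-quantize binary in any expected "
        "location. The llama.cpp build may have failed; check the build logs."
    )
```

Checking the build directory directly avoids the misleading "file does not exist" message when the build succeeded but the copy step did not run.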

@Hansehart Hansehart changed the title add: path checking for failed llama cpp builds fix: improved error handling when llama.cpp build fails #2358 May 21, 2025
@danielhanchen
Contributor

Thanks for the PR! This looks good!

@danielhanchen danielhanchen merged commit ca17a9a into unslothai:main May 25, 2025
