Hi everyone 👋
I noticed something small but useful that could improve CLI usability.
When using llama-mtmd-cli.exe, initialization messages go to StandardError and model replies go to StandardOutput, which is exactly what an integration wants.
But when I run the /image [ImagePath] command, the image-processing logs (like "encoding image slice..." and "decoding image batch...") are also printed to StandardOutput, mixed in with the assistant's reply.
Example in terminal:

```
User: Analyze the image and describe what you see
Assistant: D:\dev\Apps\IRIS\Debug\net10.0-windows\Temp\img_prompt.png image loaded
encoding image slice...
image slice encoded in 1242 ms
decoding image batch 1/1, n_tokens_batch = 256
image decoded (batch 1/1) in 15 ms
The image shows the side of a cat's face, with a brown and gray fur pattern and bright blue eyes. The background is black, creating a dramatic lighting effect.
```
Would it be possible to redirect those internal image-processing logs to StandardError (or another stream)?
That would keep StandardOutput clean and make it easier to parse or display only the model’s actual response in chat-based UIs.
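To illustrate the integration side, here is a minimal Python sketch of how a host app could consume the two streams once diagnostics go to StandardError. The child process below is a stand-in that mimics the proposed behavior (logs on stderr, reply on stdout); in a real integration it would be the llama-mtmd-cli invocation with its actual arguments.

```python
import subprocess
import sys

# Stand-in child process mimicking the proposed behavior:
# progress logs on stderr, the model reply alone on stdout.
child = [
    sys.executable, "-c",
    "import sys;"
    "print('encoding image slice...', file=sys.stderr);"
    "print('The image shows a cat.')",
]

# In a real integration, `child` would be the llama-mtmd-cli command line.
proc = subprocess.run(child, capture_output=True, text=True)

reply = proc.stdout.strip()  # clean model response, ready for the chat UI
logs = proc.stderr.strip()   # diagnostics, safe to hide or write to a log file

print("reply:", reply)
print("logs :", logs)
```

With the streams separated like this, the UI never has to filter log lines out of the response text.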
It's a small tweak, but a big quality-of-life improvement for integrations.
Thanks for all your amazing work on llama.cpp! 🙏