Suggestion: Clean up /image output in llama-mtmd-cli.exe #16620

apipino started this conversation in Ideas
Hi everyone 👋

I'd like to suggest a small change that would improve CLI usability.

When using llama-mtmd-cli.exe, initialization messages go to StandardError, and model replies go to StandardOutput — perfect.

But when I run the /image [ImagePath] command, all the image-processing logs (like "encoding image slice..." and "decoding image batch...") are also printed to StandardOutput, mixed with the assistant’s reply.

Example in terminal:

```text
User: Analyze the image and describe what you see
Assistant: D:\dev\Apps\IRIS\Debug\net10.0-windows\Temp\img_prompt.png image loaded
encoding image slice...
image slice encoded in 1242 ms
decoding image batch 1/1, n_tokens_batch = 256
image decoded (batch 1/1) in 15 ms
The image shows the side of a cat’s face, with a brown and gray fur pattern and bright blue eyes. The background is black, creating a dramatic lighting effect.
```

Would it be possible to redirect those internal image-processing logs to StandardError (or another stream)?
That would keep StandardOutput clean and make it easier to parse or display only the model’s actual response in chat-based UIs.
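To illustrate the payoff, here is a minimal sketch of how a chat UI could consume the CLI once logs go to StandardError. The child process below is a stand-in that simulates the proposed behavior (reply on stdout, diagnostics on stderr), since the real llama-mtmd-cli invocation and flags are environment-specific:

```python
import subprocess
import sys

# Stand-in child process simulating llama-mtmd-cli AFTER the proposed change:
# progress logs go to stderr, the model reply goes to stdout.
# (Not the actual CLI; paths and flags omitted on purpose.)
child = (
    "import sys;"
    "print('encoding image slice...', file=sys.stderr);"
    "print('The image shows a cat.')"
)

proc = subprocess.run(
    [sys.executable, "-c", child],
    capture_output=True,  # capture stdout and stderr separately
    text=True,
)

reply = proc.stdout.strip()  # clean model response, ready to display
logs = proc.stderr.strip()   # init + image-processing diagnostics

print(reply)  # the UI shows only this to the user
```

With the streams separated, the integration needs no fragile string filtering: it reads the reply from stdout and can show or discard the stderr diagnostics independently.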

Small tweak — big quality-of-life improvement for integrations.
Thanks for all your amazing work on llama.cpp! 🙏
