
fix(LiteLlm): add fallback user message to prevent empty content error with include_contents='none' #3255


Open

ishanrajsingh wants to merge 8 commits into google:main from ishanrajsingh:fix/litellm-include-contents

Conversation

ishanrajsingh commented Oct 22, 2025 (edited)

This fix addresses an issue in the LiteLlm integration where setting include_contents='none' causes the content payload sent to the LLM provider to be empty, resulting in a BadRequestError. The patch adds a minimal fallback user message in the generate_content_async method when no messages are included, ensuring a non-empty content array is always sent. This prevents errors from providers that require non-empty input and improves compatibility when using LiteLlm with the Agent Development Kit.
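The shape of the fix can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual ADK source: the real `generate_content_async` builds LiteLLM messages from `llm_request.contents`, and the helper name and plain-dict message shape below are assumptions for illustration only.

```python
# Minimal sketch of the fallback logic, not the actual ADK source. The
# helper name and dict-based message shape here are illustrative.

_FALLBACK_USER_TEXT = (
    "Handle the requests as specified in the System Instruction."
)

def build_messages(contents):
    """Convert request contents to provider messages, never returning
    an empty list."""
    messages = [{"role": c["role"], "content": c["text"]} for c in contents]
    if not messages:
        # include_contents='none' leaves contents empty; providers
        # typically reject an empty messages array, so insert a
        # minimal user turn as a fallback.
        messages.append({"role": "user", "content": _FALLBACK_USER_TEXT})
    return messages
```

With this guard in place, an empty contents list still produces a single user message, so the provider call no longer fails with a BadRequestError.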

GitHub Issue:
This PR addresses the bug reported in issue #3242 titled "LiteLlm + include_contents='none' results in empty content sent to the provider -> Error". The issue is already created and linked here: #3242

Testing Plan:

I verified the fix by running a scenario where an LlmAgent uses LiteLlm with include_contents='none'.

The test confirms that the LLM request no longer sends empty content, preventing the BadRequestError from the LLM provider.

Additionally, I ran existing LiteLlm unit tests to ensure no regressions occur.

For thoroughness, a new test can be added simulating include_contents='none' to confirm fallback content insertion.


Summary of Changes

Hello @ishanrajsingh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a BadRequestError within the LiteLlm integration by preventing empty content payloads from being sent to Large Language Model providers. The fix ensures that even when include_contents='none' is specified, a default user message is included, thereby maintaining compatibility with LLM APIs that require non-empty input and improving the overall robustness of the Agent Development Kit's LiteLlm component.

Highlights

  • Fixes Empty Content Error: Addresses a BadRequestError in LiteLlm that occurred when include_contents='none' resulted in an empty content payload being sent to LLM providers.
  • Introduces Fallback User Message: A minimal fallback user message is now added in the generate_content_async method if no messages are included, ensuring the content array is never empty.
  • Improves Compatibility: This change prevents errors from providers that require non-empty input and enhances compatibility when using LiteLlm with the Agent Development Kit.


adk-bot commented Oct 22, 2025

Response from ADK Triaging Agent

Hello @ishanrajsingh, thank you for creating this PR!

To help reviewers review your PR more efficiently, could you please provide the following:

  • a GitHub issue associated with this PR. If there is no existing issue, could you please create one?
  • a testing plan for your changes.
  • logs or a screenshot after the fix is applied.

Thanks!


@adk-bot adk-bot added the models [Component] Issues related to model support label Oct 22, 2025
gemini-code-assist bot left a comment

Code Review

This pull request addresses an issue where the LiteLlm integration would error when include_contents='none' due to an empty content payload. The fix adds a fallback user message to ensure a non-empty content array is always sent to the LLM provider. I have identified a potential issue with the placement of the fallback logic that could lead to unexpected behavior.

ishanrajsingh (Author) left a comment

handle empty file content


ishanrajsingh (Author) commented

/gemini review

gemini-code-assist bot left a comment

Code Review

This pull request addresses a critical issue where the LiteLlm integration fails when include_contents='none' due to an empty content payload. The fix introduces a fallback user message to ensure a non-empty content array is always sent to the LLM provider. I have added a review comment to suggest a minor improvement to the fallback message.

ishanrajsingh (Author) commented

/gemini review

gemini-code-assist bot left a comment

Code Review

This pull request addresses a critical issue where LiteLlm integration with include_contents='none' results in an empty content payload, causing a BadRequestError. The fix introduces a fallback user message when no messages are included, ensuring a non-empty content array is always sent. This review focuses on the correctness and maintainability of the implemented solution, ensuring it effectively prevents the error and adheres to best practices.

ishanrajsingh and others added 2 commits October 22, 2025 22:08
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

GWeale commented Oct 24, 2025

Hi, thank you for the contribution!

Two comments:

  1. Can you close the open ChatCompletionUserMessage( call and create tests to show this change working?
  2. Instead of truncating the entire system instruction (and duplicating it as user content), can you look at reusing the fallback text inserted by _maybe_append_user_content? That keeps behavior aligned with the Gemini path and stops us from slicing instructions mid-sentence.
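The second suggestion can be sketched like this. The function below is an illustrative stand-in for the behavior of BaseLlm._maybe_append_user_content, not its real implementation; the constant and function names are assumptions for illustration.

```python
# Illustrative sketch of the reviewer's suggestion: reuse one shared
# fallback constant (as BaseLlm._maybe_append_user_content does) rather
# than slicing the system instruction into a user message. Names here
# are assumptions, not the real ADK internals.

FALLBACK_USER_TEXT = (
    "Handle the requests as specified in the System Instruction."
)

def maybe_append_user_content(messages):
    """Ensure the conversation ends with a user turn, mirroring the
    behavior of the Gemini code path."""
    if not messages or messages[-1]["role"] != "user":
        messages.append({"role": "user", "content": FALLBACK_USER_TEXT})
    return messages
```

Sharing one constant keeps the LiteLlm and Gemini paths emitting identical fallback text, so a provider sees the same minimal user turn regardless of backend.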

...content and add tests
- Add fallback user message when messages list is empty due to include_contents='none'
- Use same fallback text as BaseLlm._maybe_append_user_content for consistency
- Add comprehensive tests covering empty contents scenarios with and without tools
- Fixes google#3242 
ishanrajsingh (Author) commented

Thank you @GWeale for the feedback! I've updated the PR to address both points:

  1. Aligned fallback content with BaseLlm._maybe_append_user_content: The fix now uses the exact same fallback text ("Handle the requests as specified in the System Instruction.") as the base class, keeping behavior consistent with the Gemini path.

  2. Added comprehensive tests: Created tests that demonstrate the fix works correctly:

    • test_include_contents_none_with_fallback: Verifies empty contents are handled gracefully
    • test_include_contents_none_with_tools: Confirms the fix works with function declarations
    • test_include_contents_with_existing_content: Ensures normal operation is unaffected

All tests pass successfully (6/6). The fix ensures that when include_contents='none' is used with LiteLlm, a minimal fallback user message is added to prevent the "contents is not specified" error from the LLM provider.
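A self-contained sketch of what two of these tests assert; the build_messages helper below stands in for the real LiteLlm request-building path, which the actual tests exercise directly.

```python
# Hypothetical, self-contained sketch of the new tests; build_messages
# stands in for the real LiteLlm request-building code path.

def build_messages(contents,
                   fallback="Handle the requests as specified in the "
                            "System Instruction."):
    messages = [{"role": c["role"], "content": c["text"]} for c in contents]
    if not messages:
        messages.append({"role": "user", "content": fallback})
    return messages

def test_include_contents_none_with_fallback():
    # include_contents='none' -> empty contents -> one fallback user turn.
    messages = build_messages([])
    assert len(messages) == 1
    assert messages[0]["role"] == "user"

def test_include_contents_with_existing_content():
    # Normal operation is unaffected: no fallback is injected.
    messages = build_messages([{"role": "user", "text": "hello"}])
    assert messages == [{"role": "user", "content": "hello"}]

test_include_contents_none_with_fallback()
test_include_contents_with_existing_content()
```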

ishanrajsingh (Author) left a comment

align fallback content with BaseLlm._maybe_append_user_content and add tests
