Context aware filtering with research_with_chunking method (issue #647) #915
Conversation
Pull Request Overview
This PR adds a new research_with_chunking() method to the LDRClient class to prevent context window crashes by automatically limiting the number of sources processed when research returns too many results. A rough sketch of the intended behavior follows the list below.
- Adds a research_with_chunking() method that acts as a drop-in replacement for quick_research()
- Implements automatic source limiting to the top 10 sources (configurable) when needed
- Provides feedback through result metadata when chunking is applied
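For illustration, a minimal sketch of the approach described above. This is not the actual diff; the dict-shaped return value of quick_research() and the metadata key names used here are assumptions.

```python
from typing import Any, Dict


class LDRClient:
    def quick_research(self, query: str, **kwargs) -> Dict[str, Any]:
        """Existing entry point, shown here only as a stub."""
        raise NotImplementedError

    def research_with_chunking(
        self, query: str, max_chunk_size: int = 10, **kwargs
    ) -> Dict[str, Any]:
        """Run quick_research() and cap the number of sources kept."""
        result = self.quick_research(query, **kwargs)
        sources = result.get("sources", [])
        if len(sources) > max_chunk_size:
            # Keep only the first max_chunk_size sources and record in the
            # result metadata that truncation was applied (key names assumed).
            result["sources"] = sources[:max_chunk_size]
            result["chunking_applied"] = True
            result["original_source_count"] = len(sources)
        return result
```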
Copilot AI
Oct 7, 2025
Fixed grammatical errors in the docstring.
Copilot AI
Oct 7, 2025
Fixed capitalization and apostrophe in comment.
Copilot AI
Oct 7, 2025
Fixed grammatical error in comment.
Copilot AI
Oct 7, 2025
The method name and parameter 'max_chunk_size' are misleading. This method doesn't actually chunk sources into multiple requests, it simply truncates to the top N sources. Consider renaming to 'research_with_limit' and the parameter to 'max_sources' for clarity.
The overall idea is reasonable, but I don't see how the provided code actually solves the problem it's purporting to solve. AFAIK, it runs a normal research and then limits the sources after the fact. I don't see how that will help with context window issues.
As a side note, the proper way to deal with this is probably to modify CrossEngineFilter to filter the sources in chunks instead of all at once. @devtcu You're welcome to attempt that change if you want 😄. I can help you if you get stuck!
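For reference, a minimal sketch of what chunked filtering could look like. The filter callable stands in for a CrossEngineFilter method; its exact interface is an assumption, not taken from the codebase.

```python
from typing import Callable, Dict, List

Source = Dict[str, str]


def filter_in_chunks(
    filter_fn: Callable[[List[Source]], List[Source]],
    sources: List[Source],
    chunk_size: int = 10,
) -> List[Source]:
    """Apply a relevance filter chunk by chunk so that no single call
    has to fit the full source list into one LLM context."""
    kept: List[Source] = []
    for start in range(0, len(sources), chunk_size):
        # filter_fn is assumed to return the relevant subset of the chunk.
        kept.extend(filter_fn(sources[start:start + chunk_size]))
    return kept
```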
Adds a research_with_chunking() method to LDRClient, a drop-in replacement for quick_research() that automatically limits sources when too many are found, hopefully preventing context window crashes.
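A hedged usage example, following the assumptions in the sketch above; client construction arguments and result key names are illustrative only.

```python
client = LDRClient()  # construction details omitted; see the project docs
result = client.research_with_chunking(
    "history of quantum computing", max_chunk_size=10
)
if result.get("chunking_applied"):
    print(f"Sources truncated from {result['original_source_count']} to 10")
```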