
add LLM in adapter and save query and answer #64


Open
DreamCyc wants to merge 1 commit into codefuse-ai:main from DreamCyc:cyc_adapter

Conversation


DreamCyc commented Dec 18, 2024

No description provided.

hicofeng commented

Does the current LLM (Large Language Model) adapter for this project support streaming answers? If it is not available now, are there plans to support it for low-latency scenarios? Thank you very much for your assistance.

DreamCyc (Author) commented

@hicofeng When the model is deployed on a server and exposed via a URL, it can stream its output so the user does not have to wait for the full response. The functionality added here invokes the deployed model only when the cached data contains no matching result, following the OpenAI API specification; the details may vary with the specific model and deployment method used.
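For reference, a minimal sketch of that cache-miss fallback, assuming an OpenAI-compatible endpoint (this is not the PR's actual code; the `base_url`, model name, and the in-memory `cache` dict are hypothetical placeholders):

```python
# Sketch: on a cache miss, stream the answer from a self-deployed,
# OpenAI-compatible endpoint, then save the (query, answer) pair.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical URL of the deployed model
    api_key="EMPTY",                      # many self-hosted servers ignore the key
)

def answer_with_fallback(query: str, cache: dict) -> str:
    # 1. Return the cached answer if this query was seen before.
    if query in cache:
        return cache[query]

    # 2. Cache miss: stream tokens from the deployed model so the user
    #    sees output as it arrives instead of waiting for the full reply.
    stream = client.chat.completions.create(
        model="my-deployed-model",  # hypothetical model name
        messages=[{"role": "user", "content": query}],
        stream=True,
    )
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)  # incremental display
            parts.append(delta)
    answer = "".join(parts)

    # 3. Persist the new (query, answer) pair for future hits.
    cache[query] = answer
    return answer
```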


