fix: maxTokens not used #3013


Open
dingdinglz wants to merge 1 commit into onlook-dev:main from dingdinglz:main

Conversation

@dingdinglz commented Oct 14, 2025 (edited by coderabbitai bot)

Description

maxOutputTokens is set on the model config but never passed to the streamText call.

Type of Change

  • [x] Bug fix
  • [ ] New feature
  • [ ] Documentation
  • [ ] Refactor
  • [ ] Other (please describe):

Important

Fix missing maxOutputTokens parameter in streamText call in createRootAgentStream in root.ts.

  • Bug Fix:
    • Add maxOutputTokens: modelConfig.maxOutputTokens to streamText call in createRootAgentStream in root.ts to ensure token limit is applied.

This description was created by Ellipsis for 7e42a07.

Summary by CodeRabbit

  • New Features
    • Added support for configuring a maximum output length for AI responses. This enables finer control over response size during streaming, helping manage cost and maintain concise outputs. Responses now adhere to a cap derived from the selected model’s settings, improving predictability and preventing overly long generations.


vercel bot commented Oct 14, 2025

@dingdinglz is attempting to deploy a commit to the Onlook Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Oct 14, 2025 (edited)

Walkthrough

Adds the maxOutputTokens parameter to the streamText call within createRootAgentStream, sourcing its value from modelConfig.maxOutputTokens. No other logic or control-flow changes.

Changes

Cohort: Root agent streaming config
File(s): packages/ai/src/agents/root.ts
Summary: Passes maxOutputTokens from modelConfig.maxOutputTokens into streamText within createRootAgentStream.
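The shape of the one-line fix can be sketched with a self-contained example. Note that ModelConfig, streamText, and createRootAgentStream below are simplified stand-ins that only mirror the names from the PR; the streamText stub imitates just the capping behavior, not the real AI SDK or the actual code in root.ts:

```typescript
interface ModelConfig {
    maxOutputTokens: number;
}

// Stub that imitates only the behavior relevant here: when a cap is
// supplied, no more than that many tokens are emitted.
function streamText(options: { prompt: string; maxOutputTokens?: number }): string[] {
    const tokens = options.prompt.split(" ");
    const cap = options.maxOutputTokens ?? tokens.length;
    return tokens.slice(0, cap);
}

function createRootAgentStream(modelConfig: ModelConfig, prompt: string): string[] {
    // Before the fix, maxOutputTokens existed on modelConfig but was never
    // forwarded, so the stream ran uncapped. Forwarding it applies the limit.
    return streamText({
        prompt,
        maxOutputTokens: modelConfig.maxOutputTokens,
    });
}

console.log(createRootAgentStream({ maxOutputTokens: 3 }, "one two three four five"));
// → [ 'one', 'two', 'three' ]
```

The fix itself is just the maxOutputTokens line inside the streamText call; everything else here is scaffolding for illustration.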

Sequence Diagram(s)

sequenceDiagram
 autonumber
 participant Caller
 participant RootAgent
 participant TextStreamer as streamText
 Caller->>RootAgent: createRootAgentStream(modelConfig)
 Note right of RootAgent: Extract maxOutputTokens from modelConfig
 RootAgent->>TextStreamer: streamText({... , maxOutputTokens})
 TextStreamer-->>RootAgent: Streamed text chunks
 RootAgent-->>Caller: Forward streamed output

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

I tweak my dials, hop hop—no fuss,
A token cap guides the output bus.
Streams now heed a gentle bound,
Carrots counted, neatly sound.
With fewer words, I still delight—
A tidy burrow, snug and bright.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Description Check: ⚠️ Warning
Explanation: The pull request provides a clear description of the issue and correctly labels it as a bug fix, but it omits the Related Issues and Testing sections required by the template, leaving out links to any issue trackers and the steps or tests used to verify the change.
Resolution: Please add a Related Issues section linking any relevant issue numbers and a Testing section that describes how you verified the fix or any tests you added to ensure maxOutputTokens is applied correctly.
✅ Passed checks (2 passed)
Title Check: ✅ Passed. The title succinctly describes the primary change by indicating that the token limit parameter was not being used, which directly aligns with the bug fix in createRootAgentStream. It is concise and clearly conveys the main change without unnecessary detail.
Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f273632 and 7e42a07.

📒 Files selected for processing (1)
  • packages/ai/src/agents/root.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)

**/*.{ts,tsx} (CodeRabbit inference engine, AGENTS.md): Do not use the any type unless necessary.
Files: packages/ai/src/agents/root.ts

{apps,packages}/**/*.{ts,tsx} (CodeRabbit inference engine, CLAUDE.md): Avoid using the any type unless absolutely necessary.
Files: packages/ai/src/agents/root.ts
🔇 Additional comments (1)
packages/ai/src/agents/root.ts (1)

25-25: Approve change: maxOutputTokens is defined on ModelConfig and initialized in initModel using MODEL_MAX_TOKENS, so no further action needed.



Reviewers: none
Assignees: none
Labels: none
Projects: none
Milestone: none
1 participant
