
v2 user input message fixes #45


Open
akshaydeo wants to merge 1 commit into main
from 10-21-v2_user_input_message_fixes

Conversation

@akshaydeo (Contributor) commented Oct 21, 2025 (edited)

TL;DR

Added a comprehensive test suite for the Vercel AI SDK V2 integration and improved input logging in the V2 wrapper.

What changed?

  • Added a new test file vercelLoggerV2.test.ts with extensive tests for the Vercel AI SDK V2 specification
  • Enhanced the V2 wrapper implementation to properly capture user input in traces
  • Tests cover various scenarios including:
    • Basic text generation with OpenAI and Anthropic models
    • Streaming text and object generation
    • Tool calls and function execution
    • Multi-modal inputs with images
    • Session management and multi-turn conversations
    • Error handling and edge cases
    • Concurrent model calls

How to test?

  1. Ensure environment variables are set:

    • MAXIM_API_KEY
    • MAXIM_BASE_URL
    • MAXIM_LOG_REPO_ID
    • OPENAI_API_KEY
    • ANTHROPIC_API_KEY
  2. Run the tests:

    npm test -- src/lib/logger/vercel/vercelLoggerV2.test.ts
    
  3. Verify that all test cases pass and traces are properly created in the Maxim dashboard
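The setup steps above can be sketched as a shell snippet. All values below are placeholders, and the base URL shown is an assumption — substitute your own credentials and log repo ID:

```shell
# Placeholder credentials — replace with your own values.
export MAXIM_API_KEY="sk-maxim-..."              # placeholder
export MAXIM_BASE_URL="https://app.getmaxim.ai"  # assumed URL; adjust for your deployment
export MAXIM_LOG_REPO_ID="repo_..."              # placeholder
export OPENAI_API_KEY="sk-..."                   # placeholder
export ANTHROPIC_API_KEY="sk-ant-..."            # placeholder

# Run only the V2 logger tests.
npm test -- src/lib/logger/vercel/vercelLoggerV2.test.ts
```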

Why make this change?

This change ensures that our Vercel AI SDK V2 integration is thoroughly tested and properly captures user inputs in traces. The comprehensive test suite validates that our wrapper correctly handles all the features of the V2 specification, including multi-modal inputs, tool calls, and streaming responses. This will help maintain compatibility as the Vercel AI SDK evolves and ensure our logging functionality works correctly across different LLM providers.

@akshaydeo akshaydeo marked this pull request as ready for review October 21, 2025 18:41

cursor bot commented Oct 21, 2025

You have run out of free Bugbot PR reviews for this billing cycle. This will reset on November 16.


@akshaydeo (Contributor, Author) commented:

This stack of pull requests is managed by Graphite. Learn more about stacking.

coderabbitai bot commented Oct 21, 2025 (edited)

📝 Walkthrough

Summary by CodeRabbit

  • Tests
    • Added extensive test coverage for AI SDK v2 integration with support for multiple model providers, covering text generation, streaming, structured outputs, tool calls, multi-modal inputs, concurrent operations, error handling, and multi-turn conversations.

Walkthrough

This PR adds a comprehensive test suite for V2 AI SDK integration with Vercel (covering OpenAI and Anthropic models through Maxim), enhances the V2 wrapper to trace user input during logging setup, and includes minor formatting adjustments.

Changes

Cohort / File(s) Summary
V2 Test Suite
src/lib/logger/vercel/vercelLoggerV2.test.ts
New comprehensive test suite for V2 specification covering: basic and streaming text generation, object generation with Zod schemas, tool calls, session/trace management, multi-modal input, concurrent calls, custom tracing, file processing, multi-turn conversations, specification validation, error handling, and performance edge cases across OpenAI and Anthropic models.
V2 Wrapper Enhancement
src/lib/logger/vercel/wrapperV2.ts
Enhanced setupLogging method to trace user input by detecting user role messages from prompts and logging either string input directly or structured content (text or image URLs). Consolidated import statements.
Formatting
vercel-ai-sdk.ts
Added two blank lines after export statements.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

The addition of the comprehensive test suite comprises a substantial amount of code, but follows established patterns and test structure. The wrapper enhancement is localized and straightforward. The whitespace-only change is trivial. Overall complexity stems from the breadth of test scenarios and need to verify each test case aligns with V2 specification expectations.

Poem

🐰 A new suite of tests hops into view,
V2 traces and logging, through and through,
From OpenAI to Claude's gentle call,
With images, streams, and tools—we test them all!
Input logged, sessions traced with care,
The wrapper now stronger, beyond compare!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

Description Check — ⚠️ Warning
The pull request has no description provided by the author. The PR title "v2 user input message fixes" relates to the actual changeset, which adds test coverage for V2 specification behavior and modifies user input tracing in the wrapper, but an empty description conveys no information to evaluate against the changes and does not meet the pass criteria. Resolution: add a pull request description explaining the purpose and scope of the changes, including the new V2 test suite and the user input tracing enhancements; even a brief description would help reviewers understand the intent and context.

✅ Passed checks (2 passed)

Title Check — ✅ Passed
The title "v2 user input message fixes" directly relates to the primary implementation change in wrapperV2.ts, which introduces tracing of user input messages by detecting and logging user role messages from promptMessages. The title is concise and specific; while the PR also includes a substantial test suite addition, it adequately captures the core fix, and a title is not expected to cover every detail of the changeset.

Docstring Coverage — ✅ Passed
No functions found in the changes; docstring coverage check skipped.

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot left a comment
Actionable comments posted: 2

🧹 Nitpick comments (3)
vercel-ai-sdk.ts (1)

3-4: Whitespace-only change.

Two blank lines were added after the export statements. If this was unintentional, consider removing them for consistency.

src/lib/logger/vercel/vercelLoggerV2.test.ts (2)

37-90: Potential duplicate input logging.

Line 58 manually calls trace.input(query), but the enhanced wrapperV2.ts (changed in this PR) now automatically logs user input during setupLogging. This could result in duplicate input entries in traces.

Consider removing the manual trace.input(query) call unless double-logging is intentional for testing purposes.


284-340: Move sessionId declaration inside the test case.

The sessionId is declared outside the test case at line 284. This could cause issues if tests are run in parallel or if the test suite is executed multiple times, as the same session ID would be reused.

Apply this diff to scope the sessionId properly:

-	const sessionId = uuid();
 	it("should handle V2 session and trace management", async () => {
 		if (!repoId || !openAIKey) {
 			throw new Error("MAXIM_LOG_REPO_ID and OPENAI_API_KEY environment variables are required");
 		}
 		const logger = await maxim.logger({ id: repoId });
 		if (!logger) {
 			throw new Error("Logger is not available");
 		}
 		
+		const sessionId = uuid();
 		const model = wrapMaximAISDKModel(openai.chat("gpt-4o-mini"), logger);
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 49a67cb and 2004d91.

📒 Files selected for processing (3)
  • src/lib/logger/vercel/vercelLoggerV2.test.ts (1 hunks)
  • src/lib/logger/vercel/wrapperV2.ts (2 hunks)
  • vercel-ai-sdk.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/lib/logger/vercel/vercelLoggerV2.test.ts (3)
src/lib/maxim.ts (2)
  • Maxim (149-1302)
  • logger (1183-1214)
src/lib/logger/components/error.ts (1)
  • Error (47-90)
vercel-ai-sdk.ts (2)
  • wrapMaximAISDKModel (2-2)
  • MaximVercelProviderMetadata (1-1)
🪛 Biome (2.1.2)
src/lib/logger/vercel/vercelLoggerV2.test.ts

[error] 690-690: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)


[error] 691-691: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🔇 Additional comments (6)
src/lib/logger/vercel/wrapperV2.ts (1)

2-3: LGTM!

Import additions are appropriate for the user input tracing enhancement.

src/lib/logger/vercel/vercelLoggerV2.test.ts (5)

10-34: LGTM!

Test setup and teardown are properly configured with environment validation and cleanup.


222-282: LGTM!

Tool call test is well-structured and includes proper error handling for edge cases like division by zero.


342-393: LGTM!

Multi-modal image analysis test is well-implemented with a publicly accessible image URL.


478-513: LGTM!

Error handling test properly validates that invalid parameters trigger appropriate errors.


517-560: LGTM!

Concurrent call test is well-structured and properly validates parallel execution with unique trace metadata for each call.

Comment on lines +681 to +708
execute: async ({ data, analysisType }) => {
	switch (analysisType) {
		case "mean":
			return {
				result: data.reduce((a, b) => a + b, 0) / data.length,
				type: "mean",
				dataPoints: data.length
			};
		case "median":
			const sorted = [...data].sort((a, b) => a - b);
			const mid = Math.floor(sorted.length / 2);
			return {
				result: sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid],
				type: "median",
				dataPoints: data.length
			};
		case "range":
			return {
				result: Math.max(...data) - Math.min(...data),
				type: "range",
				min: Math.min(...data),
				max: Math.max(...data),
				dataPoints: data.length
			};
		default:
			return { result: "Analysis type not supported", type: "error" };
	}
},
Copy link

@coderabbitai coderabbitai bot Oct 21, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Wrap switch case declarations in blocks.

Lines 690-691 declare variables (sorted, mid) within switch cases without wrapping them in blocks. This can cause issues where these declarations are accessible to other switch cases, leading to potential bugs.

Based on static analysis hints.

Apply this diff to fix the issue:

 							switch (analysisType) {
 								case "mean":
 									return { 
 										result: data.reduce((a, b) => a + b, 0) / data.length,
 										type: "mean",
 										dataPoints: data.length 
 									};
-								case "median":
-									const sorted = [...data].sort((a, b) => a - b);
-									const mid = Math.floor(sorted.length / 2);
+								case "median": {
+									const sorted = [...data].sort((a, b) => a - b);
+									const mid = Math.floor(sorted.length / 2);
 									return { 
 										result: sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid],
 										type: "median",
 										dataPoints: data.length 
 									};
+								}
 								case "range":
 									return { 
 										result: Math.max(...data) - Math.min(...data),
 										type: "range",
 										min: Math.min(...data),
 										max: Math.max(...data),
 										dataPoints: data.length 
 									};
 								default:
 									return { result: "Analysis type not supported", type: "error" };
 							}
📝 Committable suggestion

Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change:

execute: async ({ data, analysisType }) => {
	switch (analysisType) {
		case "mean":
			return {
				result: data.reduce((a, b) => a + b, 0) / data.length,
				type: "mean",
				dataPoints: data.length
			};
		case "median": {
			const sorted = [...data].sort((a, b) => a - b);
			const mid = Math.floor(sorted.length / 2);
			return {
				result: sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid],
				type: "median",
				dataPoints: data.length
			};
		}
		case "range":
			return {
				result: Math.max(...data) - Math.min(...data),
				type: "range",
				min: Math.min(...data),
				max: Math.max(...data),
				dataPoints: data.length
			};
		default:
			return { result: "Analysis type not supported", type: "error" };
	}
},

🤖 Prompt for AI Agents
In src/lib/logger/vercel/vercelLoggerV2.test.ts around lines 681 to 708, the
switch cases declare consts (sorted, mid) without block scoping which can leak
those bindings across cases; wrap each case body that declares variables in its
own block (e.g., case "median": { const sorted = ...; const mid = ...; return {
... }; } ) and similarly wrap the other cases that declare variables so each
declaration is scoped to its case.
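The scoping hazard the lint rule flags can be shown in a standalone sketch. The function name (`computeStat`) and the simplified return type are mine, not from the PR; the point is that wrapping the `median` case in a block keeps `sorted` and `mid` from leaking into the other clauses:

```typescript
// Minimal illustration of the noSwitchDeclarations fix: a `const` declared
// in one switch clause is otherwise visible to every other clause.
type StatResult = { result: number | string; type: string };

function computeStat(data: number[], analysisType: string): StatResult {
	switch (analysisType) {
		case "mean":
			// No declarations here, so no block is required.
			return { result: data.reduce((a, b) => a + b, 0) / data.length, type: "mean" };
		case "median": {
			// Block scoping restricts `sorted` and `mid` to this clause.
			const sorted = [...data].sort((a, b) => a - b);
			const mid = Math.floor(sorted.length / 2);
			return {
				result: sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid],
				type: "median",
			};
		}
		case "range":
			return { result: Math.max(...data) - Math.min(...data), type: "range" };
		default:
			return { result: "Analysis type not supported", type: "error" };
	}
}

console.log(computeStat([1, 2, 3, 4], "median").result); // 2.5
```

Without the block, `case "range":` could reference `sorted` at compile time even though it is never initialized on that path, which is exactly the class of bug the rule prevents.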

Comment on lines +75 to +94
const userMessage = promptMessages.find((msg) => msg.role === "user");

const userInput = userMessage?.content;
if (userInput) {
	if (typeof userInput === "string") {
		trace.input(userInput);
	} else {
		const userMessage = userInput[0];
		switch (userMessage.type) {
			case "text":
				trace.input(userMessage.text);
				break;
			case "image_url":
				trace.input(userMessage.image_url.url);
				break;
			default:
				break;
		}
	}
}
@coderabbitai bot commented Oct 21, 2025
⚠️ Potential issue | 🟠 Major

Variable shadowing and potential array bounds issue.

There are a few concerns with this user input tracing logic:

  1. Variable shadowing: Line 82 declares const userMessage which shadows the outer userMessage from line 75. Consider renaming the inner variable to firstContent or contentItem.

  2. Missing array bounds check: Line 82 accesses userInput[0] without verifying the array has elements. If userInput is an empty array, this will be undefined and cause a runtime error in the switch statement.

  3. Incomplete multi-part logging: Only the first content item is logged for structured inputs. For multi-modal messages with multiple parts, subsequent items are silently ignored.

Apply this diff to fix the variable shadowing and add bounds checking:

 	const userMessage = promptMessages.find((msg) => msg.role === "user");
 	
 	const userInput = userMessage?.content;
 	if (userInput) {
 		if (typeof userInput === "string") {
 			trace.input(userInput);
 		} else {
-			const userMessage = userInput[0];
-			switch (userMessage.type) {
+			if (userInput.length > 0) {
+				const firstContent = userInput[0];
+				switch (firstContent.type) {
 					case "text":
-						trace.input(userMessage.text);
+						trace.input(firstContent.text);
 						break;
 					case "image_url":
-						trace.input(userMessage.image_url.url);
+						trace.input(firstContent.image_url.url);
 						break;
 					default:
 						break;
+				}
 			}
 		}
 	}

Note: This implementation only logs the first content item. If capturing all content parts is desired, consider iterating through the array or concatenating text parts.

🤖 Prompt for AI Agents
In src/lib/logger/vercel/wrapperV2.ts around lines 75 to 94, the inner const
userMessage shadows the outer variable and the code indexes userInput[0] without
checking the array length, risking runtime errors; rename the inner variable to
something like firstContent (or contentItem), add a guard that ensures
Array.isArray(userInput) && userInput.length > 0 before accessing index 0, and
handle the undefined case (skip tracing or log a fallback) so the switch only
runs on a valid object; keep current behavior of only logging the first content
item unless you choose to iterate through the array.
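The reviewer's closing suggestion — capturing all content parts rather than only the first — can be sketched as follows. The helper name (`summarizeUserInput`) and the simplified part types are mine, stand-ins for the SDK's actual prompt message shapes:

```typescript
// Hypothetical helper: flatten a user message's content into one loggable
// string, covering every part instead of just userInput[0].
type ContentPart =
	| { type: "text"; text: string }
	| { type: "image_url"; image_url: { url: string } };

function summarizeUserInput(content: string | ContentPart[]): string | undefined {
	if (typeof content === "string") return content;
	// Bounds check avoids the runtime error flagged in the review.
	if (!Array.isArray(content) || content.length === 0) return undefined;
	const parts = content.map((part) => {
		switch (part.type) {
			case "text":
				return part.text;
			case "image_url":
				return part.image_url.url;
			default:
				return "";
		}
	});
	return parts.filter(Boolean).join("\n");
}
```

With a helper like this, the wrapper would call `trace.input(...)` once with the joined summary, sidestepping both the shadowing and the first-item-only limitation.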


Reviewers

coderabbitai[bot] requested changes

Requested changes must be addressed to merge this pull request.

2 participants
