[MaximJS] together ai sdk integration #28


Open
yashdamani wants to merge 1 commit into main
base: main
from 07-15-_maximjs_together_ai_sdk_integration

Conversation

yashdamani (Contributor) commented Jul 15, 2025

No description provided.

coderabbitai bot commented Jul 15, 2025 (edited)

Summary by CodeRabbit

  • New Features

    • Introduced integration for Together AI, allowing enhanced logging and tracing of chat and image generation with Maxim.
    • Added utilities for processing and logging streaming responses from Together AI.
    • Provided public SDK exports for Together AI integration.
  • Tests

    • Added a comprehensive test suite for Together AI integration with Maxim logging (currently commented out).
  • Chores

    • Updated dependencies to include Together AI and Groq SDKs.

Walkthrough

The changes introduce Together AI and Groq SDK integrations by updating dependencies and exports in package.json. New utility and wrapper modules for Together AI enable logging and tracing of chat and image generation via MaximLogger. A new public SDK entry point and a comprehensive (currently commented-out) test suite for Together AI logging are also included.
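
For orientation, a minimal usage sketch of the wrapped client follows. It is illustrative only: it assumes the package is consumed as @maximai/maxim-js with a ./together-ai-sdk subpath export (matching the exports added to package.json), that a Maxim instance is constructed from an API key, and that the snippet runs in an async/ESM context. The wrapMaximTogetherClient call and maxim.logger({ id }) usage are taken from the test suite in this PR.

import Together from "together-ai";
import { Maxim } from "@maximai/maxim-js";
import { wrapMaximTogetherClient } from "@maximai/maxim-js/together-ai-sdk";

// Create a Maxim logger bound to a log repository, then wrap the Together
// client so chat and image calls are traced automatically.
const maxim = new Maxim({ apiKey: process.env.MAXIM_API_KEY! });
const logger = await maxim.logger({ id: process.env.MAXIM_LOG_REPO_ID! });
if (!logger) {
  throw new Error("Logger is not available");
}
const client = wrapMaximTogetherClient(new Together({ apiKey: process.env.TOGETHER_API_KEY! }), logger);

// Calls go through the wrapped client unchanged; logging happens behind the scenes.
const response = await client.chat.completions.create({
  model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message?.content);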

Changes

  • package.json: added groq-sdk and together-ai dependencies; added exports for ./together-ai-sdk and ./groq-sdk.
  • src/lib/logger/together/utils.ts: added the processTogetherStream utility for processing and logging Together AI streaming responses.
  • src/lib/logger/together/wrapper.ts: added the wrapMaximTogetherClient wrapper and the MaximTogetherProviderMetadata type for Maxim logging integration.
  • together-ai-sdk.ts: new module re-exporting wrapMaximTogetherClient and MaximTogetherProviderMetadata for SDK consumers.
  • src/lib/logger/together/togetherLogger.test.ts: added a (commented-out) test suite for the Maxim-Together AI integration, covering chat, streaming, and images.
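
Continuing the sketch above, the wrapper also accepts optional per-call Maxim metadata under a maxim key. The field names below come from the (commented-out) test suite in this PR; the values are placeholders:

const result = await client.chat.completions.create({
  model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
  messages: [{ role: "user", content: "What is the capital of France?" }],
  maxim: {
    sessionId: "demo-session-123",
    traceName: "capital-question-trace",
    generationName: "france-capital-response",
  },
});

When a sessionId is supplied, the trace is created under that session; otherwise the wrapper creates a standalone trace (see the trace-creation logic quoted in the review comments below).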

Sequence Diagram(s)

sequenceDiagram
 participant User
 participant TogetherClient
 participant MaximLogger
 User->>TogetherClient: chat.create()/images.create() (via wrapped client)
 TogetherClient->>MaximLogger: Start trace & generation (with metadata)
 TogetherClient->>TogetherClient: Perform API call (chat or image)
 alt Success
 TogetherClient->>MaximLogger: Log result & end generation/trace
 TogetherClient->>User: Return result
 else Error
 TogetherClient->>MaximLogger: Log error & end generation/trace
 TogetherClient->>User: Throw error
 end

Suggested reviewers

  • akshaydeo

Poem

A hop and a skip, new SDKs in our den,
Together and Groq, now tracked by Maxim’s pen.
Logging and tracing, with wrappers so neat,
Our code is now nimble, observability complete!
With every new stream, a rabbit’s delight—
Hopping through logs, from morning to night.
🐇✨

yashdamani (Contributor, Author) commented Jul 15, 2025 (edited)

coderabbitai bot left a comment

Actionable comments posted: 4

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aa3eb72 and 848ddd9.

📒 Files selected for processing (5)
  • package.json (1 hunks)
  • src/lib/logger/together/togetherLogger.test.ts (1 hunks)
  • src/lib/logger/together/utils.ts (1 hunks)
  • src/lib/logger/together/wrapper.ts (1 hunks)
  • together-ai-sdk.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/logger/components/session.ts:96-98
Timestamp: 2025-06-28T20:29:57.000Z
Learning: In the maxim-js codebase, danpiths prefers to handle feedback score validation (1-5 range) in the UI layer rather than adding validation to the SDK methods like Session.feedback() and Session.feedback_(), indicating a preference for UI-side validation over SDK-side validation for user input data.
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/maxim.ts:170-174
Timestamp: 2025-06-28T20:39:04.577Z
Learning: In the maxim-js codebase, danpiths is comfortable using the non-standard `@important` JSDoc tag in documentation comments and prefers to defer replacing it with the standard `@remarks` tag, prioritizing current PR objectives over JSDoc tooling compatibility improvements.
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/utils.ts:241-241
Timestamp: 2025-06-26T13:58:22.625Z
Learning: In src/lib/logger/vercel/utils.ts:241, the convertDoGenerateResultToChatCompletionResult function parameter uses `DoGenerateResultLike & { [key: string]: any }` intersection type intentionally because the AI SDK has additional properties that aren't exported in their type definitions. Rewriting it completely wouldn't make sense, so the intersection type provides access to these runtime properties.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: package.json:38-40
Timestamp: 2025-06-16T05:35:50.998Z
Learning: In the maxim-js library, @langchain/core is correctly placed in optionalDependencies because it's only required when using MaximLangchainTracer functionality. Users who don't need LangChain integration can use the library without installing @langchain/core, keeping the installation lightweight.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: package.json:15-33
Timestamp: 2025-06-16T05:36:54.415Z
Learning: The LangChain packages in maxim-js are intentionally kept as optional/dev dependencies rather than regular dependencies. This design choice ensures that users who don't need MaximLangchainTracer functionality can use the library without installing heavy LangChain packages, keeping the installation lightweight.
together-ai-sdk.ts (11)
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/utils.ts:241-241
Timestamp: 2025-06-26T13:58:22.625Z
Learning: In src/lib/logger/vercel/utils.ts:241, the convertDoGenerateResultToChatCompletionResult function parameter uses `DoGenerateResultLike & { [key: string]: any }` intersection type intentionally because the AI SDK has additional properties that aren't exported in their type definitions. Rewriting it completely wouldn't make sense, so the intersection type provides access to these runtime properties.
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/wrapper.ts:212-212
Timestamp: 2025-06-25T05:20:43.227Z
Learning: In `src/lib/logger/vercel/wrapper.ts`, the `processStream` function expects the model ID string as a parameter, not the provider name. When calling `processStream(chunks, span, trace, generation, model, maximMetadata)`, use `this.modelId` instead of `modelProvider` to pass the correct model identifier.
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/utils.ts:14-15
Timestamp: 2025-06-24T08:03:29.963Z
Learning: In the provider detection logic for Vercel AI models in src/lib/logger/vercel/utils.ts, both "azure" and "azure_openai" model strings should be classified as the "azure" provider type. The current implementation correctly prevents "azure_openai" models from being misclassified as "openai" provider.
Learnt from: danpiths
PR: maximhq/maxim-js#20
File: src/lib/logger/vercel/utils.ts:23-35
Timestamp: 2025-06-29T07:57:14.446Z
Learning: In the maxim-js codebase, danpiths prefers to keep hardcoded switch cases for provider detection logic in src/lib/logger/vercel/utils.ts rather than making it case-insensitive or removing redundant checks, indicating a preference for explicit provider mapping over normalized/optimized detection logic.
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/logger/components/session.ts:129-132
Timestamp: 2025-06-28T20:29:27.139Z
Learning: In the maxim-js codebase, danpiths is okay with the trace_() method in src/lib/logger/components/session.ts mutating the caller-supplied config object (config.sessionId = id) for now, preferring to defer this non-critical improvement to maintain focus on the current PR's main objectives.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: src/lib/logger/langchain/utils.ts:71-83
Timestamp: 2025-06-16T12:36:38.940Z
Learning: In the maximhq/maxim-js repository, variable shadowing (like `const provider` being redeclared in inner scopes) is acceptable and not considered an issue by the maintainer danpiths.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: src/lib/logger/langchain/tracer.ts:100-104
Timestamp: 2025-06-16T12:35:36.606Z
Learning: In the MaximLangchainTracer class in src/lib/logger/langchain/tracer.ts, danpiths prefers to keep the current implementation that JSON.stringify-s all metadata entries into tags, even though it could potentially create oversized tags. The preference is to defer optimization until it becomes a demonstrated problem rather than preemptively fixing it.
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/logger/components/retrieval.ts:113-118
Timestamp: 2025-06-28T20:31:28.233Z
Learning: In the maxim-js codebase, danpiths is comfortable with small amounts of duplicate logic (like 3-4 lines of simple string-to-array normalization) when the duplication is minimal and not complex, preferring to keep things simple rather than extracting tiny utilities for minor code deduplication.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: package.json:38-40
Timestamp: 2025-06-16T05:35:50.998Z
Learning: In the maxim-js library, @langchain/core is correctly placed in optionalDependencies because it's only required when using MaximLangchainTracer functionality. Users who don't need LangChain integration can use the library without installing @langchain/core, keeping the installation lightweight.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: package.json:15-33
Timestamp: 2025-06-16T05:36:54.415Z
Learning: The LangChain packages in maxim-js are intentionally kept as optional/dev dependencies rather than regular dependencies. This design choice ensures that users who don't need MaximLangchainTracer functionality can use the library without installing heavy LangChain packages, keeping the installation lightweight.
Learnt from: danpiths
PR: maximhq/maxim-js#17
File: scripts/generate-mintlify-nav.js:75-175
Timestamp: 2025-06-29T06:44:39.669Z
Learning: In the maxim-js project, danpiths is comfortable with code duplication in documentation generation scripts since they are internal tools not shipped to users, preferring to defer refactoring for better maintainability to later iterations when the scripts are not critical to core functionality.
src/lib/logger/together/utils.ts (6)
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/wrapper.ts:212-212
Timestamp: 2025-06-25T05:20:43.227Z
Learning: In `src/lib/logger/vercel/wrapper.ts`, the `processStream` function expects the model ID string as a parameter, not the provider name. When calling `processStream(chunks, span, trace, generation, model, maximMetadata)`, use `this.modelId` instead of `modelProvider` to pass the correct model identifier.
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/utils.ts:241-241
Timestamp: 2025-06-26T13:58:22.625Z
Learning: In src/lib/logger/vercel/utils.ts:241, the convertDoGenerateResultToChatCompletionResult function parameter uses `DoGenerateResultLike & { [key: string]: any }` intersection type intentionally because the AI SDK has additional properties that aren't exported in their type definitions. Rewriting it completely wouldn't make sense, so the intersection type provides access to these runtime properties.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: src/lib/logger/langchain/tracer.ts:100-104
Timestamp: 2025-06-16T12:35:36.606Z
Learning: In the MaximLangchainTracer class in src/lib/logger/langchain/tracer.ts, danpiths prefers to keep the current implementation that JSON.stringify-s all metadata entries into tags, even though it could potentially create oversized tags. The preference is to defer optimization until it becomes a demonstrated problem rather than preemptively fixing it.
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/logger/components/session.ts:129-132
Timestamp: 2025-06-28T20:29:27.139Z
Learning: In the maxim-js codebase, danpiths is okay with the trace_() method in src/lib/logger/components/session.ts mutating the caller-supplied config object (config.sessionId = id) for now, preferring to defer this non-critical improvement to maintain focus on the current PR's main objectives.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: src/lib/logger/components/generation.ts:213-217
Timestamp: 2025-06-16T12:31:06.643Z
Learning: In src/lib/logger/components/generation.ts, the parseAttachmentsFromMessages function contains a `typeof item === "string"` check that appears unreachable due to TypeScript typing (item is typed as CompletionRequestContent), but this is intentionally kept as-is for now as accepted technical debt.
Learnt from: danpiths
PR: maximhq/maxim-js#17
File: scripts/fix-links.js:55-58
Timestamp: 2025-06-29T06:42:21.189Z
Learning: In the maxim-js project, the documentation generation scripts (like scripts/fix-links.js) are run manually and locally, not in CI environments. Therefore, detailed console.log output is acceptable and beneficial for debugging during local documentation builds.
package.json (2)
Learnt from: danpiths
PR: maximhq/maxim-js#15
File: scripts/clean-package.js:22-24
Timestamp: 2025-06-28T20:20:10.582Z
Learning: In the maximhq/maxim-js project, the clean-package script (scripts/clean-package.js) runs as a post-build step after the TypeScript build process, which means the dist directory is guaranteed to exist when the script executes. Therefore, no directory existence check is needed before writing to dist/package.json.
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/utils.ts:241-241
Timestamp: 2025-06-26T13:58:22.625Z
Learning: In src/lib/logger/vercel/utils.ts:241, the convertDoGenerateResultToChatCompletionResult function parameter uses `DoGenerateResultLike & { [key: string]: any }` intersection type intentionally because the AI SDK has additional properties that aren't exported in their type definitions. Rewriting it completely wouldn't make sense, so the intersection type provides access to these runtime properties.
src/lib/logger/together/togetherLogger.test.ts (12)
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: jest.config.js:10-12
Timestamp: 2025-06-15T09:30:55.364Z
Learning: In the maximhq/maxim-js project, the explicit transform configuration in jest.config.js is required for Jest to work properly, even though the ts-jest preset is being used. The transform block should not be removed as it's needed for the project's specific setup.
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/logger/components/session.ts:129-132
Timestamp: 2025-06-28T20:29:27.139Z
Learning: In the maxim-js codebase, danpiths is okay with the trace_() method in src/lib/logger/components/session.ts mutating the caller-supplied config object (config.sessionId = id) for now, preferring to defer this non-critical improvement to maintain focus on the current PR's main objectives.
Learnt from: danpiths
PR: maximhq/maxim-js#17
File: scripts/fix-links.js:55-58
Timestamp: 2025-06-29T06:42:21.189Z
Learning: In the maxim-js project, the documentation generation scripts (like scripts/fix-links.js) are run manually and locally, not in CI environments. Therefore, detailed console.log output is acceptable and beneficial for debugging during local documentation builds.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: src/lib/logger/langchain/tracer.ts:100-104
Timestamp: 2025-06-16T12:35:36.606Z
Learning: In the MaximLangchainTracer class in src/lib/logger/langchain/tracer.ts, danpiths prefers to keep the current implementation that JSON.stringify-s all metadata entries into tags, even though it could potentially create oversized tags. The preference is to defer optimization until it becomes a demonstrated problem rather than preemptively fixing it.
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/utils.ts:241-241
Timestamp: 2025-06-26T13:58:22.625Z
Learning: In src/lib/logger/vercel/utils.ts:241, the convertDoGenerateResultToChatCompletionResult function parameter uses `DoGenerateResultLike & { [key: string]: any }` intersection type intentionally because the AI SDK has additional properties that aren't exported in their type definitions. Rewriting it completely wouldn't make sense, so the intersection type provides access to these runtime properties.
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/evaluators/evaluators.ts:84-95
Timestamp: 2025-06-28T20:35:58.071Z
Learning: In the maxim-js codebase, evaluator name uniqueness is enforced in the test run builder's .run() method rather than during individual evaluator creation with createCustomEvaluator() or createCustomCombinedEvaluatorsFor(). This represents a deliberate architectural decision for proper separation of concerns.
Learnt from: danpiths
PR: maximhq/maxim-js#20
File: src/lib/logger/vercel/utils.ts:23-35
Timestamp: 2025-06-29T07:57:14.446Z
Learning: In the maxim-js codebase, danpiths prefers to keep hardcoded switch cases for provider detection logic in src/lib/logger/vercel/utils.ts rather than making it case-insensitive or removing redundant checks, indicating a preference for explicit provider mapping over normalized/optimized detection logic.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: src/lib/logger/components/generation.ts:213-217
Timestamp: 2025-06-16T12:31:06.643Z
Learning: In src/lib/logger/components/generation.ts, the parseAttachmentsFromMessages function contains a `typeof item === "string"` check that appears unreachable due to TypeScript typing (item is typed as CompletionRequestContent), but this is intentionally kept as-is for now as accepted technical debt.
Learnt from: danpiths
PR: maximhq/maxim-js#17
File: scripts/generate-mintlify-nav.js:185-188
Timestamp: 2025-06-29T06:45:30.100Z
Learning: In the maxim-js project, danpiths prefers fail-fast behavior over graceful error handling in documentation generation scripts (like scripts/generate-mintlify-nav.js). Since these scripts are used locally for SDK documentation generation and not in production systems, immediate failure provides better visibility when issues occur rather than silent failures or graceful degradation.
Learnt from: danpiths
PR: maximhq/maxim-js#17
File: scripts/rename-files.js:19-44
Timestamp: 2025-06-29T06:40:49.140Z
Learning: In the maxim-js project, danpiths accepts a reactive approach to error handling in documentation generation scripts, preferring to fix issues if and when they occur rather than adding proactive error handling, since documentation generation failures are immediately visible and the scripts are not critical to core functionality.
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/logger/components/base.ts:94-96
Timestamp: 2025-06-28T20:33:39.066Z
Learning: In the maxim-js codebase, danpiths prefers to defer non-critical fixes (like in-memory state synchronization issues) when there are plans to refactor or remove the affected code later, especially when the impacted functionality isn't widely used by consumers.
Learnt from: danpiths
PR: maximhq/maxim-js#17
File: scripts/generate-mintlify-nav.js:75-175
Timestamp: 2025-06-29T06:44:39.669Z
Learning: In the maxim-js project, danpiths is comfortable with code duplication in documentation generation scripts since they are internal tools not shipped to users, preferring to defer refactoring for better maintainability to later iterations when the scripts are not critical to core functionality.
src/lib/logger/together/wrapper.ts (6)
Learnt from: danpiths
PR: maximhq/maxim-js#16
File: src/lib/logger/components/session.ts:129-132
Timestamp: 2025-06-28T20:29:27.139Z
Learning: In the maxim-js codebase, danpiths is okay with the trace_() method in src/lib/logger/components/session.ts mutating the caller-supplied config object (config.sessionId = id) for now, preferring to defer this non-critical improvement to maintain focus on the current PR's main objectives.
Learnt from: danpiths
PR: maximhq/maxim-js#2
File: src/lib/logger/langchain/tracer.ts:100-104
Timestamp: 2025-06-16T12:35:36.606Z
Learning: In the MaximLangchainTracer class in src/lib/logger/langchain/tracer.ts, danpiths prefers to keep the current implementation that JSON.stringify-s all metadata entries into tags, even though it could potentially create oversized tags. The preference is to defer optimization until it becomes a demonstrated problem rather than preemptively fixing it.
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/wrapper.ts:212-212
Timestamp: 2025-06-25T05:20:43.227Z
Learning: In `src/lib/logger/vercel/wrapper.ts`, the `processStream` function expects the model ID string as a parameter, not the provider name. When calling `processStream(chunks, span, trace, generation, model, maximMetadata)`, use `this.modelId` instead of `modelProvider` to pass the correct model identifier.
Learnt from: danpiths
PR: maximhq/maxim-js#20
File: src/lib/logger/vercel/utils.ts:23-35
Timestamp: 2025-06-29T07:57:14.446Z
Learning: In the maxim-js codebase, danpiths prefers to keep hardcoded switch cases for provider detection logic in src/lib/logger/vercel/utils.ts rather than making it case-insensitive or removing redundant checks, indicating a preference for explicit provider mapping over normalized/optimized detection logic.
Learnt from: SamstyleGhost
PR: maximhq/maxim-js#18
File: src/lib/logger/vercel/utils.ts:241-241
Timestamp: 2025-06-26T13:58:22.625Z
Learning: In src/lib/logger/vercel/utils.ts:241, the convertDoGenerateResultToChatCompletionResult function parameter uses `DoGenerateResultLike & { [key: string]: any }` intersection type intentionally because the AI SDK has additional properties that aren't exported in their type definitions. Rewriting it completely wouldn't make sense, so the intersection type provides access to these runtime properties.
Learnt from: danpiths
PR: maximhq/maxim-js#12
File: src/lib/models/containers.ts:375-381
Timestamp: 2025-06-16T17:16:34.987Z
Learning: In the maxim-js codebase, the sessionId parameter should only be added to TraceContainer.create() method, not to the abstract Container.create() signature, because only traces need session tracking functionality, not spans or other container types.
🪛 Biome (1.9.4)
src/lib/logger/together/utils.ts

[error] 15-15: This let declares a variable that is only assigned once.

'chunks' is never reassigned.

Safe fix: Use const instead.

(lint/style/useConst)


[error] 16-16: This let declares a variable that is only assigned once.

'result' is never reassigned.

Safe fix: Use const instead.

(lint/style/useConst)

🔇 Additional comments (2)
together-ai-sdk.ts (1)

1-2: LGTM! Clean SDK entry point.

The re-export pattern properly exposes the public API while keeping implementation details internal.
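
For reference, a two-line entry point matching this description would plausibly read as follows; the relative import path is an assumption based on the file listing above, not a quote from the PR:

export { wrapMaximTogetherClient } from "./src/lib/logger/together/wrapper";
export type { MaximTogetherProviderMetadata } from "./src/lib/logger/together/wrapper";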

package.json (1)

92-95: Verify dependency placement aligns with project philosophy.

The new groq-sdk and together-ai packages are added as regular dependencies. Based on the project's established pattern, consider whether these should be optional dependencies instead to maintain the lightweight installation approach used for LangChain packages.

Comment on lines +15 to +20
let chunks: any[] = [];
let result = {
  text: "",
  usage: { prompt_tokens: 0, completion_tokens: 0 },
  finishReason: "stop",
};
coderabbitai bot commented Jul 15, 2025

🧹 Nitpick (assertive)

Use const for variables that are never reassigned.

Both chunks and result are never reassigned after initialization.

- let chunks: any[] = [];
- let result = {
+ const chunks: any[] = [];
+ const result = {
 text: "",
 usage: { prompt_tokens: 0, completion_tokens: 0 },
 finishReason: "stop",
 };

🤖 Prompt for AI Agents
In src/lib/logger/together/utils.ts between lines 15 and 20, change the
declarations of the variables `chunks` and `result` from `let` to `const` since
they are never reassigned after initialization. This involves replacing `let`
with `const` for both variables to reflect their immutability.

Comment on lines +33 to +398
// it("should trace Together chat model with basic text and system message", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// const query = "Who is Sachin Tendulkar?";
// try {
// const response = await client.completions.create({
// model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
// temperature: 0.3,
// top_p: 1,
// frequency_penalty: 0,
// prompt: query,
// max_tokens: 4096,
// });
// console.log("Together response for basic generateText", JSON.stringify(response.choices[0].text));
// } catch (error) {
// console.error(error);
// }
// }, 40000);

// it("should log the user message and Together chat model response for multiple messages", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = await client.chat.completions.create({
// model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
// temperature: 0.3,
// top_p: 1,
// frequency_penalty: 0,
// messages: [
// {
// role: "system",
// content: "You are a helpful assistant.",
// },
// {
// role: "user",
// content: "Hello!",
// },
// ],
// max_tokens: 4096,
// });
// console.log("Together response for multiple messages", result.choices[0].message?.content);
// } catch (error) {
// console.error(error);
// }
// }, 20000);

// it("should log the inputs and outputs for multi turn messages in a single trace", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = await client.chat.completions.create({
// model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
// max_tokens: 512,
// messages: [
// {
// role: "user",
// content: [
// {
// type: "text",
// text: "what are the red things in this image?",
// },
// {
// type: "image_url",
// image_url: {
// url: "https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/2024_Solar_Eclipse_Prominences.jpg/720px-2024_Solar_Eclipse_Prominences.jpg",
// },
// },
// ],
// },
// ],
// });
// console.log("Together response for image prompt", result.choices[0].message?.content);
// } catch (error) {
// console.error(error);
// }
// }, 20000);

// it("should log the user input image and assistant message for image prompt", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = await client.chat.completions.create({
// model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
// max_tokens: 512,
// messages: [
// {
// role: "user",
// content: [
// {
// type: "text",
// text: "what are the red things in this image?",
// },
// {
// type: "image_url",
// image_url: {
// url: "https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/2024_Solar_Eclipse_Prominences.jpg/720px-2024_Solar_Eclipse_Prominences.jpg",
// },
// },
// ],
// },
// ],
// });
// console.log("Together response for image prompt", result.choices[0].message?.content);
// } catch (error) {
// console.error(error);
// }
// }, 40000);

// it("should log the user input and the model response for stream text", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = client.chat.completions.stream({
// model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
// max_tokens: 512,
// temperature: 0.3,
// messages: [
// {
// role: "user",
// content: "Invent a new holiday and describe its traditions.",
// },
// ],
// });
// let fullText = "";
// for await (const chunk of result) {
// if (chunk.choices?.[0]?.delta?.content) {
// fullText += chunk.choices[0].delta.content;
// }
// }
// console.log("Together response for stream text", fullText);
// } catch (error) {
// console.error(error);
// }
// }, 40000);


// it("should log the user input and the model response for stream text with chat prompt", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = await client.chat.completions.stream({
// model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
// max_tokens: 1024,
// temperature: 0.3,
// messages: [
// {
// role: "user",
// content: "Hello!",
// },
// {
// role: "assistant",
// content: "Hello! How can I help you today?",
// },
// {
// role: "user",
// content: "I need help with my computer.",
// },
// ],
// });
// let fullText = "";
// for await (const chunk of result) {
// if (chunk.choices?.[0]?.delta?.content) {
// fullText += chunk.choices[0].delta.content;
// }
// }
// console.log("Together response for stream text with chat prompt", fullText);
// } catch (error) {
// console.error("Error in stream text with chat prompt", error);
// }
// }, 40000);

// it("should log the user input and the model response for stream text", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = await client.chat.completions.stream({
// model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
// max_tokens: 512,
// temperature: 0.3,
// messages: [
// {
// role: "user",
// content: [
// { type: "text", text: "Describe the image in detail." },
// {
// type: "image_url",
// image_url: {
// url: "https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/2024_Solar_Eclipse_Prominences.jpg/720px-2024_Solar_Eclipse_Prominences.jpg",
// },
// },
// ],
// },
// ],
// });
// let fullText = "";
// for await (const chunk of result) {
// if (chunk.choices?.[0]?.delta?.content) {
// fullText += chunk.choices[0].delta.content;
// }
// }
// console.log("Together response for image prompt with streamed text", fullText);
// } catch (error) {
// console.error("Error in image prompt with streamed text", error);
// }
// // }, 20000);

// it("should log the user input and the model response in response_format", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = await client.chat.completions.create({
// model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
// max_tokens: 512,
// temperature: 0.3,
// messages: [
// {
// role: "user",
// content: "Generate a lasagna recipe.",
// },
// ],
// response_format: {
// type: "json_object",
// },
// });
// console.log("Together response for generate object", result.choices[0].message?.content);
// } catch (error) {
// console.error("Error in generate object", error);
// }
// }, 20000);

// it("should log Together.ai image generation with Maxim attachments", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = await client.images.create({
// model: "black-forest-labs/FLUX.1-schnell-Free",
// prompt: "A beautiful sunset over a calm ocean",
// n: 1,
// });
// console.log("Together image generation result:", result.data[0]);
// } catch (error) {
// console.error("Error in image generation with Maxim logging:", error);
// }
// }, 40000);

// it("should demonstrate Together.ai chat completion with full Maxim metadata", async () => {
// if (!repoId || !togetherApiKey) {
// throw new Error("MAXIM_LOG_REPO_ID and TOGETHER_API_KEY environment variables are required");
// }
// const logger = await maxim.logger({ id: repoId });
// if (!logger) {
// throw new Error("Logger is not available");
// }
// const client = wrapMaximTogetherClient(new Together({ apiKey: togetherApiKey }), logger);

// try {
// const result = await client.chat.completions.create({
// model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
// messages: [
// {
// role: "system",
// content: "You are a helpful assistant that provides concise responses."
// },
// {
// role: "user",
// content: "What is the capital of France?"
// }
// ],
// temperature: 0.7,
// max_tokens: 100,
// maxim: {
// sessionId: "demo-session-123",
// sessionName: "Together.ai Demo Session",
// sessionTags: {
// environment: "test",
// model_provider: "together",
// demo_type: "metadata_showcase"
// },
// traceName: "capital-question-trace",
// traceTags: {
// question_type: "geography",
// difficulty: "easy",
// expected_answer: "Paris"
// },
// generationName: "france-capital-response",
// generationTags: {
// response_type: "factual",
// category: "geography",
// language: "english"
// }
// }
// });

// console.log("Together.ai chat completion result:", result.choices[0]?.message?.content);
// console.log("This should appear on Maxim dashboard with:");
// console.log("- Session: 'Together.ai Demo Session' with environment and provider tags");
// console.log("- Trace: 'capital-question-trace' with question metadata");
// console.log("- Generation: 'france-capital-response' with response categorization");
// } catch (error) {
// console.error("Error in chat completion with Maxim metadata:", error);
// }
// }, 20000);
});
coderabbitai bot commented Jul 15, 2025

⚠️ Potential issue

Enable tests to ensure proper coverage.

All test cases are commented out, providing no test coverage for the new Together AI integration. Either enable these tests or properly skip them with test.skip() and a reason if they're not ready.

🤖 Prompt for AI Agents
In src/lib/logger/together/togetherLogger.test.ts between lines 33 and 398, all
test cases are currently commented out, resulting in no test coverage for the
Together AI integration. To fix this, uncomment the test cases to enable them
for execution. If some tests are not ready to run, replace the commented-out
tests with test.skip() calls including a reason for skipping, ensuring proper
test coverage and clarity on test status.
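
As a sketch of the skip option, Jest supports disabling a test while keeping it visible in the suite; the test name below is taken from the suite, and the skip reason is hypothetical:

// Skipped: requires live MAXIM_LOG_REPO_ID and TOGETHER_API_KEY credentials,
// which are not available in CI yet.
it.skip("should trace Together chat model with basic text and system message", async () => {
  // ...original test body, uncommented, goes here...
}, 40000);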

if (prop === "chat") {
  return {
    completions: {
      create: async (options: any) => {
coderabbitai bot commented Jul 15, 2025

🛠️ Refactor suggestion

Add proper TypeScript types instead of using any.

The options parameter should be properly typed using Together.ai SDK types.

- create: async (options: any) => {
+ create: async (options: Parameters<typeof target.chat.completions.create>[0]) => {
- stream: (options: any) => {
+ stream: (options: Parameters<typeof target.chat.completions.stream>[0]) => {
- create: async (options: any) => {
+ create: async (options: Parameters<typeof target.images.create>[0]) => {

Also applies to: 109-109, 174-174

🤖 Prompt for AI Agents
In src/lib/logger/together/wrapper.ts at lines 51, 109, and 174, the parameter
'options' is currently typed as 'any', which lacks type safety. Replace 'any'
with the appropriate type from the Together.ai SDK that defines the structure of
the options parameter. Import the necessary types from the SDK if not already
imported, and apply them to the 'options' parameter in the create functions to
ensure proper TypeScript typing.

Comment on lines +59 to +83
if (maximMetadata?.traceId) {
  trace = logger.trace({
    id: maximMetadata.traceId,
    name: maximMetadata?.traceName ?? "together-completion",
    tags: maximMetadata?.traceTags
  });
} else if (maximMetadata?.sessionId) {
  const session = logger.session({
    id: maximMetadata.sessionId,
    name: maximMetadata?.sessionName ?? "default-session",
    tags: maximMetadata?.sessionTags
  });

  trace = session.trace({
    id: uuid(),
    name: maximMetadata?.traceName ?? "together-completion",
    tags: maximMetadata?.traceTags
  });
} else {
  trace = logger.trace({
    id: uuid(),
    name: maximMetadata?.traceName ?? "together-completion",
    tags: maximMetadata?.traceTags
  });
}
coderabbitai bot commented Jul 15, 2025

🛠️ Refactor suggestion

Extract duplicated trace creation logic into a helper function.

The trace creation logic is duplicated across three methods. This violates DRY principles.

Create a helper function to eliminate duplication:

function createTrace(
  logger: MaximLogger,
  maximMetadata: MaximTogetherProviderMetadata | undefined,
  defaultName: string
): Trace {
  if (maximMetadata?.traceId) {
    return logger.trace({
      id: maximMetadata.traceId,
      name: maximMetadata?.traceName ?? defaultName,
      tags: maximMetadata?.traceTags
    });
  } else if (maximMetadata?.sessionId) {
    const session = logger.session({
      id: maximMetadata.sessionId,
      name: maximMetadata?.sessionName ?? "default-session",
      tags: maximMetadata?.sessionTags
    });

    return session.trace({
      id: uuid(),
      name: maximMetadata?.traceName ?? defaultName,
      tags: maximMetadata?.traceTags
    });
  } else {
    return logger.trace({
      id: uuid(),
      name: maximMetadata?.traceName ?? defaultName,
      tags: maximMetadata?.traceTags
    });
  }
}

Then replace the duplicated code with:

const trace = createTrace(logger, maximMetadata, "together-completion");

Also applies to: 117-141, 182-206

🤖 Prompt for AI Agents
In src/lib/logger/together/wrapper.ts around lines 59 to 83, 117 to 141, and 182
to 206, the trace creation logic is duplicated in three places, violating DRY
principles. To fix this, create a helper function named createTrace that takes
logger, maximMetadata, and a defaultName string as parameters and encapsulates
the existing conditional trace creation logic. Then replace each duplicated
block with a call to this helper function, passing the appropriate arguments, to
centralize and simplify the trace creation code.


Reviewers

coderabbitai[bot] requested changes

Awaiting requested review from akshaydeo

Requested changes must be addressed to merge this pull request.

Assignees

No one assigned

Labels

None yet

Projects

None yet

Milestone

No milestone


2 participants
