feat(javascript): Add Claude Code Agent SDK instrumentation #17844
Conversation
Adds Sentry tracing instrumentation for the @anthropic-ai/claude-agent-sdk following OpenTelemetry Semantic Conventions for Generative AI.

Key features:
- Captures agent invocation, LLM chat, and tool execution spans
- Records token usage, model info, and session tracking
- Supports input/output recording based on the sendDefaultPii setting
- Provides createInstrumentedClaudeQuery() helper for clean DX

Due to ESM-only module constraints, this integration uses a helper function pattern instead of automatic OpenTelemetry instrumentation hooks.

Usage:

```typescript
import { createInstrumentedClaudeQuery } from '@sentry/node';

const query = createInstrumentedClaudeQuery();
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
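As a rough sketch of how this could be wired up end to end, the setup below assumes the claudeCodeIntegration() export described later in this PR; the exact option shape is an assumption, not the final API:

```typescript
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: '__YOUR_DSN__',
  tracesSampleRate: 1.0,
  // Per the description, prompt/response bodies are only recorded when PII is allowed.
  sendDefaultPii: true,
  // Hypothetical: register the integration added by this PR.
  integrations: [Sentry.claudeCodeIntegration()],
});
```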
...g in Claude Code integration

- Add SEMANTIC_ATTRIBUTE_SENTRY_OP to all span creation calls (invoke_agent, chat, execute_tool)
- Capture exceptions to Sentry in the catch block with proper mechanism metadata
- Ensure child spans (currentLLMSpan, previousLLMSpan) are always closed in the finally block, which prevents incomplete traces if the generator exits early
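For illustration, here is a minimal sketch of the pattern this commit describes: an explicit Sentry op on the span plus a finally block around the streamed output. The span name, op value, and mechanism type are assumptions, not the exact values used in this PR.

```typescript
import * as Sentry from '@sentry/node';
import { SEMANTIC_ATTRIBUTE_SENTRY_OP } from '@sentry/core';

async function* traceAgentStream<T>(stream: AsyncIterable<T>): AsyncGenerator<T> {
  // Created manually so the span can outlive a single callback and wrap the whole stream.
  const span = Sentry.startInactiveSpan({
    name: 'invoke_agent claude-code',
    attributes: { [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent' },
  });
  try {
    yield* stream;
  } catch (error) {
    // Mechanism metadata marks this as an unhandled, auto-instrumented error (values illustrative).
    Sentry.captureException(error, { mechanism: { handled: false, type: 'auto.ai.claude_code' } });
    throw error;
  } finally {
    // Always end the span, even if the consumer stops iterating early,
    // so the trace is never left incomplete.
    span.end();
  }
}
```

Because breaking out of a for-await loop triggers the generator's return path, the finally block still runs and the span is closed.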
size-limit report 📦

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
We already have these attributes in packages/core/src/utils/ai/gen-ai-attributes.ts
Can you reuse the function from packages/core/src/utils/ai/utils.ts?
Can we fall back to unknown if no model is found here? I think this might be confusing if it's not accurate.
Actually, I believe InstrumentationModuleDefinition will automatically patch Node.js modules when they're loaded via import. You can find some patterns in the other AI integrations, e.g. the Anthropic AI integration.
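For reference, a sketch of that approach: patch the module's exports when Node.js loads them (Sentry's ESM loader hook routes import through the same mechanism). The version range and the wrapping logic are illustrative assumptions, not the pattern this PR ends up using.

```typescript
import {
  InstrumentationBase,
  InstrumentationNodeModuleDefinition,
} from '@opentelemetry/instrumentation';

interface ClaudeAgentSdkExports {
  query: (...args: unknown[]) => unknown;
}

export class ClaudeCodeInstrumentation extends InstrumentationBase {
  constructor() {
    super('sentry-instrumentation-claude-code', '1.0.0', {});
  }

  public init(): InstrumentationNodeModuleDefinition {
    return new InstrumentationNodeModuleDefinition(
      '@anthropic-ai/claude-agent-sdk',
      ['>=0.1.0'],
      (moduleExports: ClaudeAgentSdkExports) => {
        const original = moduleExports.query;
        moduleExports.query = function patchedQuery(this: unknown, ...args: unknown[]) {
          // A real implementation would start an invoke_agent span here and keep it
          // open while the returned stream is consumed (see the generator sketch above).
          return original.apply(this, args);
        };
        return moduleExports;
      },
    );
  }
}
```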
We should hook into the query and use a Proxy here.
Using a Proxy should clean this up a little.
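A rough sketch of the Proxy idea (the span name and op are assumptions; a real wrapper would also keep the span open while the returned stream is consumed, as in the generator sketch earlier):

```typescript
import * as Sentry from '@sentry/node';

type QueryFn = (...args: unknown[]) => AsyncIterable<unknown>;

function proxyQuery(originalQuery: QueryFn): QueryFn {
  return new Proxy(originalQuery, {
    apply(target, thisArg, args) {
      // Starts a span around the call itself; streaming output would still need the
      // finally-based cleanup to cover the full iteration.
      return Sentry.startSpan(
        { name: 'invoke_agent claude-code', op: 'gen_ai.invoke_agent' },
        () => Reflect.apply(target, thisArg, args),
      );
    },
  });
}
```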
Thanks for working on this! For the first pass, the biggest lift is to auto-patch the functions we need instead of asking the user to import a patched method; then we can move on to tackling the other TODOs you have.
Thanks SO much for all these. I'll get started on them.
I tried REALLY hard to figure out how to hook into the existing query, and I couldn't get it to work no matter what I tried. I'll chat with you in Slack about it, but I'd love some advice and guidance. I tried a bunch of different angles, but each time I ran into what were effectively timing issues where we couldn't hook fast enough. It felt like a limitation of how Claude Code's SDK works, but it could be a total skill issue on my side.
Summary
Adds Sentry tracing instrumentation for the @anthropic-ai/claude-agent-sdk (Claude Code Agent SDK) following OpenTelemetry Semantic Conventions and Sentry's Agent Monitoring. This integration enables AI monitoring for Claude Code agents with comprehensive telemetry:
- Agent invocation spans (invoke_agent)
- LLM chat spans (chat)
- Tool execution spans (execute_tool)

Key Implementation Details
Why Not Automatic Like Other AI Integrations?
The Claude Code SDK (@anthropic-ai/claude-agent-sdk) is ESM-only with no CommonJS build, which prevents automatic instrumentation via the OpenTelemetry require() hooks that work for other integrations (Anthropic AI, OpenAI, etc.).

Solution: Helper Function Pattern
Provides createInstrumentedClaudeQuery(), a one-line helper that:
- uses dynamic import() (avoids bundler issues)
- respects the claudeCodeIntegration() config

Usage
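The original usage snippet did not survive the page rendering; here is a minimal sketch consistent with the description above. The query() argument shape is an assumption about the SDK, not something this PR defines.

```typescript
import { createInstrumentedClaudeQuery } from '@sentry/node';

// Wraps the ESM-only SDK behind a dynamically imported, Sentry-traced query function.
const query = createInstrumentedClaudeQuery();

// Assumed SDK call shape: query() returns an async iterable of streamed messages.
for await (const message of query({ prompt: 'Summarize the open TODOs in this repo' })) {
  console.log(message);
}
```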
Remaining TODOs