Upgraded to v8, challenging getting it going with an existing Otel setup #12191

chadxz started this conversation in General

I just upgraded my node SDK from v7 to v8. I already have an existing Otel setup and I'm working to get the SDK to play nice with it.

I have found a few miscellaneous pieces of documentation that I'm trying to piece together into a fully working configuration, but I just wanted to drop a note and highlight that it's challenging. The documentation at https://docs.sentry.io/platforms/javascript/guides/node/performance/instrumentation/opentelemetry/#using-a-custom-opentelemetry-setup refers to a SentryContextManager that is not exported, so I found https://github.com/getsentry/sentry-javascript/blob/develop/packages/opentelemetry/README.md, which points out that wrapContextManagerClass should be used instead. However, there are some additional steps in that README that are not on the docs site, such as setupEventContextTrace, and I don't know whether I should use them.

Is there a canonical document or example I should follow?


Replies: 11 comments 35 replies


Reading through the source code, it looks like I do not need to call setOpenTelemetryContextAsyncContextStrategy, since that is done by Sentry.init() (source).

And it looks like I need to make sure I set up OpenTelemetry first, because init() calls validateOpenTelemetrySetup (source), which will print a bunch of warnings if it's not set up yet.


Yes, you are right about both of these! I'll adjust the docs accordingly.


Actually, regarding the warning: today there is basically a deadlock there - the setup requires the client, which requires init(), which means you can't really "fix" the warning.

I'll adjust this so that validateOpenTelemetrySetup is not called when skipOpenTelemetrySetup is configured; instead, you'll have to call validateOpenTelemetrySetup manually if you want to check this.
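A minimal sketch of what that manual check could look like, assuming validateOpenTelemetrySetup ends up being exported for this purpose (the exact package/export is an assumption here, not confirmed in this thread):

import { validateOpenTelemetrySetup } from "@sentry/node";

// After Sentry.init({ ..., skipOpenTelemetrySetup: true }) and after registering your own
// TracerProvider, propagator, and context manager, optionally verify the Sentry pieces are wired up:
validateOpenTelemetrySetup(); // logs warnings if the Sentry sampler/propagator/context manager are missing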


I definitely have to use setupEventContextTrace, because it is not called (source) when I use skipOpenTelemetrySetup: true.
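For reference, a minimal sketch of calling it manually alongside the skipped setup (the DSN value is a placeholder; sentryClient naming matches the full setup below):

import * as Sentry from "@sentry/node";
import { setupEventContextTrace } from "@sentry/opentelemetry";

Sentry.init({ dsn: "...", skipOpenTelemetrySetup: true }); // dsn is a placeholder
const sentryClient = Sentry.getClient();
if (sentryClient) {
  // Links error events to the active OTel trace context; at the time of this thread
  // it is not run automatically when skipOpenTelemetrySetup is true.
  setupEventContextTrace(sentryClient);
}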


This is not really ideal; I'll change it so that this runs either way, even if we skip OTEL!


> And it looks like I need to make sure I set up OpenTelemetry first, because init() calls validateOpenTelemetrySetup (source), which will print a bunch of warnings if it's not set up yet.

To set up my tracerProvider with the SentrySampler, I need the Sentry client before instantiating my TracerProvider. I don't get warnings with it set up this way, because DEBUG_BUILD isn't set.


Here's the setup I landed on. If you see anything weird or wrong here, I would love to know.

import { MetricExporter } from "@google-cloud/opentelemetry-cloud-monitoring-exporter";
import { TraceExporter } from "@google-cloud/opentelemetry-cloud-trace-exporter";
import opentelemetry, * as api from "@opentelemetry/api";
import { AsyncLocalStorageContextManager } from "@opentelemetry/context-async-hooks";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
import { IORedisInstrumentation } from "@opentelemetry/instrumentation-ioredis";
import { UndiciInstrumentation } from "@opentelemetry/instrumentation-undici";
import { WinstonInstrumentation } from "@opentelemetry/instrumentation-winston";
import { Resource } from "@opentelemetry/resources";
import { MeterProvider, PeriodicExportingMetricReader, ConsoleMetricExporter } from "@opentelemetry/sdk-metrics";
import { BatchSpanProcessor, SimpleSpanProcessor, ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { SEMRESATTRS_SERVICE_NAME, SEMRESATTRS_SERVICE_NAMESPACE } from "@opentelemetry/semantic-conventions";
import type { NodeClient } from "@sentry/node";
import {
  SentrySpanProcessor,
  SentryPropagator,
  wrapContextManagerClass,
  setupEventContextTrace,
  SentrySampler,
} from "@sentry/opentelemetry";
import { TypeormInstrumentation } from "opentelemetry-instrumentation-typeorm";
import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";
import { environment, release, SentryDSN } from "@smartrr/shared/services/Sentry/constants";
import { IS_DEPLOYED, PRODUCTION, RUNTIME_ENV_NAME } from "./utils/constants";

// Initialize Sentry first (skipping its automatic OTel setup) so the client is
// available for the SentrySampler below.
Sentry.init({
  dsn: SentryDSN.backend,
  environment,
  release,
  integrations: [nodeProfilingIntegration()],
  skipOpenTelemetrySetup: true,
});
const sentryClient = Sentry.getClient<NodeClient>();

const resource = new Resource({
  [SEMRESATTRS_SERVICE_NAME]: "smartrr",
  [SEMRESATTRS_SERVICE_NAMESPACE]: RUNTIME_ENV_NAME,
});

const metricReaders = [];
if (IS_DEPLOYED) {
  metricReaders.push(
    new PeriodicExportingMetricReader({
      // Export metrics every 10 seconds.
      // 5 seconds is the smallest sample period allowed by Cloud Monitoring.
      exportIntervalMillis: 10_000,
      exporter: new MetricExporter(),
    })
  );
}
if (process.env.OTEL_DEBUG || process.env.OTEL_DEBUG_JAEGER) {
  metricReaders.push(
    new PeriodicExportingMetricReader({
      exporter: new ConsoleMetricExporter(),
    })
  );
}
const meterProvider = new MeterProvider({ resource, readers: metricReaders });

const tracerProvider = new NodeTracerProvider({
  resource,
  spanLimits: { eventCountLimit: 512 },
  sampler: sentryClient && new SentrySampler(sentryClient),
});
const SentryContextManager = wrapContextManagerClass(AsyncLocalStorageContextManager);
tracerProvider.register({
  propagator: new SentryPropagator(),
  contextManager: new SentryContextManager(),
});
opentelemetry.metrics.setGlobalMeterProvider(meterProvider);

// Attach trace context to Sentry error events and tell the client about the provider.
if (sentryClient) {
  setupEventContextTrace(sentryClient);
  sentryClient.traceProvider = tracerProvider;
}

registerInstrumentations({
  tracerProvider,
  meterProvider,
  instrumentations: [
    new HttpInstrumentation(),
    new UndiciInstrumentation(),
    new ExpressInstrumentation(),
    new IORedisInstrumentation(),
    new TypeormInstrumentation(),
    new WinstonInstrumentation({
      logHook(_, record) {
        record["resource.service.name"] = tracerProvider.resource.attributes[SEMRESATTRS_SERVICE_NAME];
      },
    }),
  ],
});

tracerProvider.addSpanProcessor(new SentrySpanProcessor());
if (IS_DEPLOYED) {
  tracerProvider.addSpanProcessor(
    new BatchSpanProcessor(new TraceExporter(), {
      scheduledDelayMillis: 1000,
      maxExportBatchSize: 512,
      maxQueueSize: 512 * 6,
    })
  );
}
if (process.env.OTEL_DEBUG) {
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
}
if (process.env.OTEL_DEBUG_JAEGER) {
  tracerProvider.addSpanProcessor(new BatchSpanProcessor(new OTLPTraceExporter()));
}

if (RUNTIME_ENV_NAME !== PRODUCTION) {
  opentelemetry.diag.setLogger(new api.DiagConsoleLogger(), api.DiagLogLevel.ERROR);
}

export const OtelTracer = tracerProvider.getTracer("basic");
export const OtelMeter = meterProvider.getMeter(RUNTIME_ENV_NAME);

I think this could probably be simplified with the OTel Node SDK library (@opentelemetry/sdk-node), but we weren't using that and I didn't want to rock the boat more at the moment.
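For anyone curious, a rough, untested sketch of what that variant might look like. It assumes a recent @opentelemetry/sdk-node that accepts contextManager and a spanProcessors array, and reuses SentryDSN from the setup above:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { AsyncLocalStorageContextManager } from "@opentelemetry/context-async-hooks";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { TraceExporter } from "@google-cloud/opentelemetry-cloud-trace-exporter";
import * as Sentry from "@sentry/node";
import {
  SentrySampler,
  SentryPropagator,
  SentrySpanProcessor,
  wrapContextManagerClass,
  setupEventContextTrace,
} from "@sentry/opentelemetry";
import { SentryDSN } from "@smartrr/shared/services/Sentry/constants";

// Init Sentry first so the client exists for the sampler, but skip its own OTel setup.
Sentry.init({ dsn: SentryDSN.backend, skipOpenTelemetrySetup: true });
const client = Sentry.getClient();

const SentryContextManager = wrapContextManagerClass(AsyncLocalStorageContextManager);

const sdk = new NodeSDK({
  sampler: client && new SentrySampler(client),
  textMapPropagator: new SentryPropagator(),
  contextManager: new SentryContextManager(),
  spanProcessors: [new SentrySpanProcessor(), new BatchSpanProcessor(new TraceExporter())],
});

if (client) {
  setupEventContextTrace(client);
}
sdk.start();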


Hey, sorry about the confusion - you are right that there are some incorrect/missing things in the docs! I will make sure to update them and ensure everything is actually exported correctly!


I opened a PR with some small fixes based on your feedback: #12214

Thank you so much, we greatly appreciate the feedback and time to go through this! Hopefully with the changes in the PR this will become more straightforward 🙏


I think the setupEventContextTrace change will break me since it will add a second listener. Thoughts?


It should not break you, because it will just overwrite the event with the same data one more time - so it should not be problematic. You can safely remove it once you've updated, though!


Hey, we have released 8.5.0 with some updates, so things should work properly now!
The updated docs can be found here: https://docs.sentry.io/platforms/javascript/guides/node/performance/instrumentation/opentelemetry/#using-a-custom-opentelemetry-setup


Thank you! Will check it out


Hello. We discovered today that with the above configuration, none of our tracing is making it to Google Cloud Trace. I also found that it's not making it to my local Jaeger instance when I have OTEL_DEBUG_JAEGER=1 set. While debugging, I found that if I remove the SentrySampler, my traces make it to my non-Sentry tracing destinations, but of course then the Sentry functionality stops working.

I ended up just removing Sentry from our backend entirely.

Do you have any suggestions on how we can support non-Sentry trace destinations while still having Sentry work properly?


My expectation would be that we could configure Sentry sampling and trace output separately from our custom tracing.


It looks like many of my incoming HTTP requests have a span context that looks like this:

{
  "traceId": "6bb00c3adf48435f9027bf8d6f7862ef",
  "spanId": "b6ca89edd5e2b591",
  "isRemote": true,
  "traceFlags": 0,
  "traceState": {
    "_internalState": {}
  }
}

I believe traceFlags: 0 is telling OpenTelemetry not to record the trace, which SentrySampler is seeing and acting on.
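For context, traceFlags is a bit field from @opentelemetry/api, so 0 means the upstream caller marked the trace as not sampled. A quick way to check it on the active span context:

import { TraceFlags, context, trace } from "@opentelemetry/api";

const spanContext = trace.getSpanContext(context.active());
// traceFlags === 0 is TraceFlags.NONE; the SAMPLED bit (0x1) is what samplers honor.
const isSampled =
  spanContext !== undefined && (spanContext.traceFlags & TraceFlags.SAMPLED) === TraceFlags.SAMPLED;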


Looks like it is picking up the Sentry frontend tracing sample rate and propagating that to the backend. One of my missing trace requests has the following headers:

sentry-trace: 6bb00c3adf48435f9027bf8d6f7862ef-b6ca89edd5e2b591-0
baggage: sentry-environment=development,sentry-public_key=20aca0a2e3e849bc9fc85e5cdd2edecd,sentry-trace_id=6bb00c3adf48435f9027bf8d6f7862ef,sentry-sample_rate=0.1,sentry-sampled=false
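For reference, the sentry-trace header encodes "<traceId>-<spanId>-<sampledFlag>", so the trailing "-0" above is the frontend's "don't sample" decision being propagated. A small illustration using the value from the request above:

// Value copied from the request headers above.
const sentryTraceHeader = "6bb00c3adf48435f9027bf8d6f7862ef-b6ca89edd5e2b591-0";
const [traceId, parentSpanId, sampledFlag] = sentryTraceHeader.split("-");
const upstreamSampled = sampledFlag === "1"; // false here, so the backend continues the "not sampled" decision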

So I tried completely disabling tracing on the frontend, but it is still sending a traceId in the header, and the backend Sentry tracing components are picking up this traceId and making it the parent for all subsequent spans.


mydea (Maintainer) - Jun 3, 2024

Hey,

Yeah, so that explains it - we do continue the trace sampling decision from the frontend in order to get complete traces.

If you do not want this, you can disable trace propagation in the frontend via:

Sentry.init({
  // ...
  tracePropagationTargets: [],
});

This overwrites the defaults (which propagate traces for the same origin), so traces should never be propagated. Can you give this a try?
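If you still want connected traces for your own backend only, a possible middle ground (the URL below is just a placeholder, not from this thread) would be to list that origin explicitly instead of passing an empty array:

Sentry.init({
  // ...
  // Hypothetical: only send sentry-trace/baggage headers to your own API origin.
  tracePropagationTargets: [/^https:\/\/api\.example\.com/],
});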


My takeaways from this experience:

  1. It feels like Sentry has "taken over" my tracing. This does not feel good.
  2. Having all of my backend tracing sampling be determined by Sentry sampling configuration is confusing and not necessarily what I want.
  3. Having frontend tracing sampling determine backend tracing sampling is not what I want and I can't figure out how to turn this off.

We have Sentry installed for error tracking. I don't want it to even be involved with our tracing setup at all. I feel I have been sucked into trying to get the two to play nice together and failed. Ultimately I had to remove Sentry from my backend to get our tracing recovered to its previous state, but this is not what I want - I want to keep using Sentry for backend error tracking.

Disappointed.


@mydea I think the problem I'm running into also involves the Sampler, e.g. I want to use a custom Sampler for my custom OTel setup, and just Sentry's sampler for Sentry's side of things. I will whip up a repo and post an issue.


You can extend the Sentry sampler if you want! E.g. do:

import { SentrySampler } from "@sentry/opentelemetry";

class MySampler extends SentrySampler {
  shouldSample(...args) {
    const sampledBySentry = super.shouldSample(...args);
    // do stuff on top of that, then return a (possibly adjusted) sampling result
    return sampledBySentry;
  }
}

We can work to expose some more utilities for this to make this easier, though!

Today, we depend on the sampler to do certain things we need for performance monitoring in Sentry. To clarify, do you want to also have performance data in Sentry, or only outside of Sentry?


Will try that out! Not 100% sure what you call "performance data" - I think that is Sentry-speak. I just want to control my own traces (metrics and logs are not relevant in my case) and send them to a third-party, OTel-compliant tracing ingester. At the same time, I just want Sentry to behave "as is", so any traces I record using Sentry's SDK should just work.


I am looking into exposing these utils from @sentry/opentelemetry, which you could use in your own sampler:

const { sentrySamplerNotSampled, sentrySamplerSampled } = require('@sentry/opentelemetry');

class MySampler {
  shouldSample(context, traceId, spanName, spanKind, spanAttributes) {
    // `shouldBeSampled` stands in for your own sampling decision
    if (shouldBeSampled) {
      return sentrySamplerSampled({ context, spanAttributes });
    }
    return sentrySamplerNotSampled({ context, spanAttributes });
  }
}

This can encapsulate underlying things that Sentry may need to work properly, but you can make your own sampling decision.
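A hedged usage sketch of plugging such a sampler into the provider from the setup earlier in this thread (MySampler and resource are reused from the snippets above; note the OTel Sampler interface also expects a toString() method on the class):

import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { AsyncLocalStorageContextManager } from "@opentelemetry/context-async-hooks";
import { SentryPropagator, wrapContextManagerClass } from "@sentry/opentelemetry";

const SentryContextManager = wrapContextManagerClass(AsyncLocalStorageContextManager);
const tracerProvider = new NodeTracerProvider({
  resource,
  sampler: new MySampler(), // your own decisions, returned in a Sentry-compatible shape
});
tracerProvider.register({
  propagator: new SentryPropagator(),
  contextManager: new SentryContextManager(),
});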


@mydea the issue is live. Should be reproducible. #12678


FYI, we have shipped improved docs on how to use Sentry with a custom OTEL setup: https://docs.sentry.io/platforms/javascript/guides/node/opentelemetry/custom-setup/

Hopefully, these docs clarify some of the challenges that folks had so far.


@mydea

  • I've submitted an issue that was closed in favor of discussing it here.
  • I'd also like to point out a related issue.

In my opinion, this discussion is not the appropriate place for this, but it was the direction I was given.

The problem:
Sentry scopes are not isolated to each request. Specifically, when using Sentry.setUser or Sentry.setTag, the expectation is that if an exception is reported for that request, the event should include the correct user and tags that were set for that request. However, when a web server processes multiple requests at a time, the reported issue often has incorrect user and tag information.

For example:

  • assume a web server is processing multiple requests
  • each request was performed by a unique user
  • Sentry.setUser and Sentry.setTag are called by the web server once it has identified the unique, authenticated user
  • later on, as the web server is processing a request, we need to capture/report an event to Sentry
  • the reported event is often captured with the user and tags that were set by a different request

Reproduction
This repository demonstrates the issue.

  • setUser and setTag are called per request
  • the authUser is passed through the web server's middleware and set in the event's extra data to illustrate what the expected user and tags should be for the event
  • we've set up OTel using all of the documented Sentry tools: SentrySampler, SentryPropagator, SentryContextManager

-- EDIT --

I apologize for the edit, but I forgot to mention some important context about how we use Sentry:

  • We only use Sentry for reporting errors and events.
  • We do not need Sentry to link events between our UI and API.
  • We have "an existing OTel setup", i.e. we do use OTel, but have no plans at this time to export our OTel data to Sentry.
  • Sentry's sampler breaks our existing OTel sampler, so we have to use a custom sampler. IMO this seems like an unnecessary ask from Sentry, but we are willing to do it, of course. Ideally, we'd be able to just init Sentry and everything would just work, seeing as we manage our own OTel separately.

OK, I can see the problem now - it is because you are setting data in the middleware, which happens before we automatically isolate the scope. Basically, think of it this way: We automatically isolate a request at the time when the request starts, so any middleware before that will not be automatically isolated. We'll need to document this better, as this is of course tricky to figure out...

It works for me if I change the example app to:

const auth: RequestHandler = (req, res, next) => {
  Sentry.withIsolationScope(() => {
    const authUser = Users.find((u) => u.id === req.headers['authorization']);
    if (!authUser) {
      Sentry.setTag('Authenticated', false);
      Sentry.setTag('UserID', null);
      Sentry.setUser(null);
      next(new Error('Authentication Error'));
    } else {
      Sentry.setTag('Authenticated', true);
      Sentry.setTag('UserID', authUser.id);
      Sentry.setUser(authUser);
      res.locals.authUser = authUser;
      next();
    }
  });
};

This ensures that the content inside of Sentry.withIsolationScope is automatically isolated, and since the next() callback is inside it too, the downstream handlers will inherit the same isolation data.


FYI I added a PR to update docs accordingly here: getsentry/sentry-docs#11378


Wow, thanks so much @mydea, this now works as expected! I'll take a look at the docs PR and provide some feedback in case it's useful.


Just keeping this updated here - I investigated some more, and it turns out the middleware is not actually the problem; this should work fine. The problem in that example is the manual adding of HttpInstrumentation, which overwrites our http instrumentation, which is what actually handles the request isolation by default 😬
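In other words, a hedged sketch of what that change could look like (tracerProvider and the other instrumentations come from the setup earlier in the thread): the manually registered HttpInstrumentation would simply be dropped so Sentry's own http instrumentation stays in charge of request isolation.

import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express";
import { IORedisInstrumentation } from "@opentelemetry/instrumentation-ioredis";

registerInstrumentations({
  tracerProvider,
  instrumentations: [
    // new HttpInstrumentation(), // removed: the http instrumentation registered by
    // Sentry.init() handles request isolation, and adding it manually here overwrites that
    new ExpressInstrumentation(),
    new IORedisInstrumentation(),
  ],
});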
