Setting the correct logging format for the S3CrtClient #6359
Unanswered · dmregister asked this question in Q&A
When there is a stream interruption, the exceptions thrown do not follow the JSON layout we have configured with slf4j, as all our other logs do. What is the proper way to catch and format these logs? Here is an example log:
Exception in thread "AwsEventLoop 2" java.io.UncheckedIOException: okhttp3.internal.http2.StreamResetException: stream was reset: PROTOCOL_ERROR
at software.amazon.awssdk.utils.async.StoringSubscriber$Event.runtimeError(StoringSubscriber.java:181)
at software.amazon.awssdk.utils.async.ByteBufferStoringSubscriber.transferTo(ByteBufferStoringSubscriber.java:112)
at software.amazon.awssdk.utils.async.ByteBufferStoringSubscriber.blockingTransferTo(ByteBufferStoringSubscriber.java:134)
at software.amazon.awssdk.services.s3.internal.crt.S3CrtRequestBodyStreamAdapter.sendRequestBody(S3CrtRequestBodyStreamAdapter.java:48)
Caused by: okhttp3.internal.http2.StreamResetException: stream was reset: PROTOCOL_ERROR
at okhttp3.internal.http2.Http2Stream$FramingSource.read(Http2Stream.kt:358)
at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
at okio.RealBufferedSource$inputStream1ドル.read(RealBufferedSource.kt:158)
at java.base/java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:243)
at java.base/java.util.zip.InflaterInputStream.read(InflaterInputStream.java:159)
at java.base/java.util.zip.GZIPInputStream.read(GZIPInputStream.java:118)
at java.base/java.io.FilterInputStream.read(FilterInputStream.java:107)
at software.amazon.awssdk.utils.async.InputStreamConsumingPublisher.doBlockingWrite(InputStreamConsumingPublisher.java:55)
Caused by: okhttp3.internal.http2.StreamResetException: stream was reset: PROTOCOL_ERROR
at software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody.writeInputStream(BlockingInputStreamAsyncRequestBody.java:79)
at software.amazon.awssdk.core.internal.async.InputStreamWithExecutorAsyncRequestBody.doBlockingWrite(InputStreamWithExecutorAsyncRequestBody.java:108)
at software.amazon.awssdk.core.internal.async.InputStreamWithExecutorAsyncRequestBody.lambda$subscribe0ドル(InputStreamWithExecutorAsyncRequestBody.java:81)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
I have tried Thread.setDefaultUncaughtExceptionHandler and setting a handler at the executor level, but nothing works.
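For reference, this is roughly the handler I tried (a minimal sketch; logger is the same logger that writes our JSON layout):

// Installed early at startup, before any uploads run; the goal is to route
// the event-loop thread's uncaught exceptions through the configured logger.
Thread.setDefaultUncaughtExceptionHandler((thread, throwable) ->
 logger.atSevere()
 .withCause(throwable)
 .log("Uncaught exception in thread %s", thread.getName()));

Even with this installed, the "AwsEventLoop 2" trace above still comes out in the default format rather than our JSON layout.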
Code that produces the stream:
public InputStream getStream(final Headers headers, final String url, final UrlType urlType)
 throws TimeoutException, IOException {
 final Request request = new Request.Builder()
 .url(url)
 .headers(headers
 .newBuilder()
 .add("Connection", "Keep-Alive")
 .build())
 .build();
 // resilienceManager executes the supplier with our resilience policy applied
 // and returns the resulting Response.
 final Response response = resilienceManager.decorateSupplier("example",
 () -> {
 try {
 return httpClient.newCall(request).execute();
 } catch (final Exception e) {
 throw new ExampleException("my message", e);
 }
 });
 final var responseBody = response.body();
 if (responseBody == null) {
 return InputStream.nullInputStream();
 }
 return new GZIPInputStream(responseBody.byteStream());
}
Code that uploads the stream (the InputStream returned by getStream above is passed in as inputStream):
// Executor required to handle reading from the InputStream on a separate thread so the main upload is not blocked.
final ExecutorService executor = Executors.newSingleThreadExecutor();
try (final S3TransferManager transferManager = buildTransferManager()) {
 // Specify `null` for the content length when you don't know the content length.
 final AsyncRequestBody body = AsyncRequestBody.fromInputStream(inputStream, null, executor);
 final Upload upload = transferManager.upload(builder -> builder
 .requestBody(body)
 .putObjectRequest(req -> req.bucket(bucketName).key(key).contentType(CONTENT_TYPE_XML))
 .addTransferListener(LoggingTransferListener.create()));
 upload.completionFuture().join();
} catch (final Exception e) {
 logger.atSevere()
 .withCause(e)
 .log("Error occurred while uploading XML file %s to S3", key);
} finally {
 executor.shutdown();
}