Disclaimer: I asked the original question on SO, but decided to duplicate it here.
I am using the AWS SDK for Java v2 to work with a corporate S3-compatible storage service. My S3 client configuration looks like this (SDK version 2.31.12):
```java
@Bean
public S3Client s3Client() {
    return S3Client.builder()
        .endpointOverride(URI.create(properties.getS3().getEndpoint()))
        .region(Region.of(properties.getS3().getRegion()))
        .credentialsProvider(StaticCredentialsProvider.create(
            AwsBasicCredentials.create(
                properties.getS3().getAccessKey(),
                properties.getS3().getSecretKey()
            )
        ))
        .serviceConfiguration(builder ->
            builder.chunkedEncodingEnabled(properties.getS3().getChunkedEncodingEnabled()))
        .forcePathStyle(properties.getS3().getPathStyle())
        .build();
}
```
The chunked-encoding setting is intentional: my storage provider does not fully support chunked transfer encoding, so I have it disabled.

I am trying to upload a file to S3 that the user sends to my application's API as a multipart request. But during the upload I get an error like:
```
The request content has fewer bytes than the specified content-length: N bytes.
```
I tried wrapping the original InputStream in a BufferedInputStream as suggested here (even though the javadoc of RequestBody.fromInputStream says the stream is wrapped automatically if it does not support mark/reset). With the wrapped stream I then get:

```
Caused by: java.io.IOException: Resetting to invalid mark
```
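For context, that exception comes from BufferedInputStream itself, not the SDK: reset() fails as soon as more bytes have been read than the read-limit passed to mark(). A minimal stdlib-only demonstration (my own sketch, unrelated to the SDK):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class MarkDemo {
    public static void main(String[] args) throws IOException {
        BufferedInputStream in =
            new BufferedInputStream(new ByteArrayInputStream(new byte[16]), 4);
        in.mark(4);            // mark is only valid for the next 4 bytes
        in.read(new byte[8]);  // reading past the limit invalidates the mark
        in.reset();            // throws java.io.IOException: Resetting to invalid mark
    }
}
```

A retried upload re-reads the whole body, which is almost always further than any practical mark limit on a one-shot request stream.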
My upload code looks like this:
```java
try (InputStream is = resource.getInputStream()) {
    PutObjectResponse putObjectResponse = s3Client.putObject(
        PutObjectRequest.builder()
            .bucket(properties.getS3().getBucketName())
            .key(resource.getFilename())
            .build(),
        RequestBody.fromInputStream(is, resource.contentLength()));
    return S3SavedFile.builder()
        .key(resource.getFilename())
        .size(putObjectResponse.size())
        .build();
} catch (Exception e) {
    throw e;
}
```
where resource is a MultipartFileResource and resource.getInputStream() returns a ChannelInputStream.

Is it possible to upload a file to S3 using the InputStream from the original multipart request?
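One workaround I've considered (a sketch, not code from the question): buffer the multipart body fully before handing it to the SDK, so the declared content length always matches the bytes actually sent. The `resource` and bucket/key accessors are assumed from the question above; this suits small files, while large files are better spooled to a temp file and uploaded with RequestBody.fromFile.

```java
// Sketch: read the whole stream up front so the length cannot mismatch.
byte[] bytes;
try (InputStream is = resource.getInputStream()) {
    bytes = is.readAllBytes(); // OK for small files; spool large ones to disk instead
}
s3Client.putObject(
    PutObjectRequest.builder()
        .bucket(properties.getS3().getBucketName())
        .key(resource.getFilename())
        .build(),
    RequestBody.fromBytes(bytes)); // length derived from the array itself
```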
Replies: 1 comment
FWIW, the hadoop-aws S3A client ended up with its own ContentStreamProvider implementation, because the interior of the SDK was doing things we didn't want (copying data) while not doing things we did want (restarting the stream when attempting a second block upload).
It's not that hard to do, and you may want to do the same.
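To illustrate the shape of that approach (my own hedged sketch, not the hadoop-aws code): ContentStreamProvider is a single-method interface that the SDK calls whenever it needs a fresh stream, so a provider that re-opens the source on each call lets retries restart cleanly. The `resource` object and bucket/key names are assumed from the question.

```java
// Sketch: re-open the source on every attempt instead of relying on mark/reset.
ContentStreamProvider provider = () -> {
    try {
        return resource.getInputStream(); // a fresh stream per attempt
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
};

s3Client.putObject(
    PutObjectRequest.builder()
        .bucket(properties.getS3().getBucketName())
        .key(resource.getFilename())
        .build(),
    RequestBody.fromContentProvider(provider, resource.contentLength(),
        "application/octet-stream"));
```

This only works if the source can actually be re-opened; a one-shot servlet request stream would still need to be spooled to memory or disk first.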