`8f36f18f467a4de47268e83b20609756b945651d`
35 Commits
`b5e6964a22` · Clay Gerrard · s3api: fix test_service with pre-existing buckets

The s3api cross-compat tests in test_service weren't sophisticated enough to account for real S3 session credentials that could see actual AWS S3 buckets (or a vsaio you actually use); however, valid assertions on the authorization logic don't actually require such a strictly clean slate.

Drive-by: prefer a test config option without a double negative, and update the ansible that's based on the sample config.

Related-Change-Id: I811642fccd916bd9ef71846a8108d50a462740f0
Change-Id: Ifab08cfe72f12d80e2196ad9b9b7876ace5825b4
Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>
`c4cc83c5e7` · Alistair Coles · s3api compat tests: stop asserting DisplayName in Owner

S3 stopped returning DisplayName in the Owner field of object listings [1], so the tests need to stop asserting that it is present. Further work is needed to drop DisplayName from the Swift s3api responses [2].

[1] https://docs.aws.amazon.com/AmazonS3/latest/API/API_Owner.html
[2] https://bugs.launchpad.net/swift/+bug/2120622

Change-Id: Ia915a65313394910c74ae826c912b5549e833a7b
Signed-off-by: Alistair Coles <alistairncoles@gmail.com>
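For illustration, a listing check compatible with this behaviour might look like the following sketch; the bucket name is a placeholder and the client configuration is assumed:

```python
import boto3

s3 = boto3.client('s3')  # placeholder client configuration

# Assert only on Owner.ID; DisplayName may legitimately be absent now.
resp = s3.list_objects_v2(Bucket='s3api-test-bucket', FetchOwner=True)
for obj in resp.get('Contents', []):
    assert 'ID' in obj['Owner']
    obj['Owner'].pop('DisplayName', None)  # tolerate either behaviour
```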
`8af485775a` · Zuul · Merge "s3api: Add support for crc64nvme checksum calculation"
`404e1f2732` · Alistair Coles · s3api: Add support for crc64nvme checksum calculation

Add anycrc as a soft dependency in case ISA-L isn't available. Plus we'll want it later: when we start writing down checksums, we'll need it to combine per-part checksums for MPUs. Like with crc32c, we won't provide any pure-python version as the CPU-intensiveness could present a DoS vector. Worst case, we 501 as before.

Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Signed-off-by: Tim Burke <tim.burke@gmail.com>
Change-Id: Ia05e5677a8ca89a62b142078abfb7371b1badd3f
Signed-off-by: Alistair Coles <alistairncoles@gmail.com>
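For reference, the checksum being added can be sketched in pure Python from the usual CRC catalogue parameters (assumed here: reflected polynomial 0x9a6c9329ac4bc9b5, all-ones init and xor-out). This is illustration only; as the message notes, the middleware deliberately relies on ISA-L or anycrc rather than any pure-Python implementation:

```python
import base64
import struct

# Illustrative pure-Python CRC-64/NVME; not what the middleware runs.
_POLY = 0x9A6C9329AC4BC9B5   # assumed reflected polynomial
_MASK = 0xFFFFFFFFFFFFFFFF


def crc64nvme(data, crc=0):
    crc ^= _MASK
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ _POLY if crc & 1 else crc >> 1
    return crc ^ _MASK


# S3 checksum headers carry the big-endian value, base64-encoded.
value = base64.b64encode(struct.pack('>Q', crc64nvme(b'hello world')))
print(value.decode('ascii'))  # would go in x-amz-checksum-crc64nvme
```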
`61c0bfcf95` · Alistair Coles · s3api: add more assertions w.r.t. S3 checksum BadDigest

Assert that BadDigest responses due to checksum mismatch do not include the expected or computed values.

Change-Id: Iaffa02c3c02fa3bc6922f51ecf28a39f4b24ccf2
Signed-off-by: Alistair Coles <alistairncoles@gmail.com>
`351ee72790` · Alistair Coles · s3api: add compat test sending too much body with checksum

Adds a test that verifies extra body content beyond the content-length is ignored, provided that the checksum value matches that of the content-length bytes. Add a comment to explain why this is the case.

Drive-by: add clarifying comment to unit test.

Change-Id: I8f198298a817be47223e2f45fbc48a6f393b3bef
Signed-off-by: Alistair Coles <alistairncoles@gmail.com>
`be56c1e258` · Tim Burke · s3api: Validate additional checksums on upload

See https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html for some background.

This covers both "normal" objects and part-uploads for MPUs. Note that because we don't write down any client-provided checksums during initiate-MPU calls, we can't do any verification during complete-MPU calls. crc64nvme checksums are not yet supported; clients attempting to use them will get back 501s.

Adds crt as a boto3 extra to test-requirements. The extra lib provides crc32c and crc64nvme checksum support in boto3.

Co-Authored-By: Ashwin Nair <ashnair@nvidia.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Signed-off-by: Tim Burke <tim.burke@gmail.com>
Signed-off-by: Alistair Coles <alistairncoles@gmail.com>
Change-Id: Id39fd71bc59875a5b88d1d012542136acf880019
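A client-side sketch of the kind of upload the new validation covers, assuming a boto3 installed with the crt extra; bucket/key names and the endpoint are placeholders:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3', endpoint_url='http://localhost:8080')  # placeholder

# Let the SDK compute and send x-amz-checksum-crc32c alongside the body
# (CRC32C support in boto3 comes from the optional crt extra).
s3.put_object(Bucket='s3api-test-bucket', Key='obj',
              Body=b'some data', ChecksumAlgorithm='CRC32C')

# A deliberately wrong pre-computed SHA-256 checksum should be rejected.
try:
    s3.put_object(Bucket='s3api-test-bucket', Key='obj',
                  Body=b'some data', ChecksumSHA256='A' * 43 + '=')
except ClientError as err:
    print(err.response['Error']['Code'])
```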
`1a27d1b83f` · Alistair Coles · s3api: fix multi-upload BadDigest error

S3 includes the expected base64 digest in a BadDigest response to a multipart complete POST request.

Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: Ie20ccf10846854f375c29be1b0b00b8eaacc9afa
`9754eff025` · Takashi Kajinami · Use built-in implementation to get utc timezone

datetime.timezone.utc [1] has been available throughout Python 3 and can be used instead of datetime.UTC, which is available only in Python >= 3.11.

[1] https://docs.python.org/3.13/library/datetime.html#datetime.timezone.utc

Change-Id: I92bc82a1b7e2bcb947376bc4d96fc603ad7d5b6c
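The substitution in question is a one-liner; for example:

```python
import datetime

# datetime.timezone.utc works on every supported Python 3 version;
# datetime.UTC is only an alias for it that was added in Python 3.11.
now = datetime.datetime.now(datetime.timezone.utc)
print(now.tzinfo)  # UTC
```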
`962084ded0` · Alistair Coles · s3 compat tests: sanitize object listings

Swift does not return all the parameters of objects in a listing (e.g. ChecksumType and ChecksumAlgorithm), so pop these from listings before making assertions.

Change-Id: Ieb7a9783731c11f1c08db398eae07ffafa127460
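The sanitizing step amounts to something like the following sketch; the bucket name is a placeholder and the field names come from the commit message:

```python
import boto3

s3 = boto3.client('s3')  # placeholder client configuration
resp = s3.list_objects_v2(Bucket='s3api-test-bucket')

# Drop listing fields Swift doesn't return before comparing against AWS.
for obj in resp.get('Contents', []):
    obj.pop('ChecksumType', None)
    obj.pop('ChecksumAlgorithm', None)
```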
`1c244b3cd5` · Zuul · Merge "Add config option for whether to skip s3_acl-requiring tests"
`84a70769b1` · Zuul · Merge "s3api: Allow PUT with if-none-match: *"
`edd5eb29d7` · Tim Burke · s3api: Allow PUT with if-none-match: *

Swift already supports that much, at least. AWS used to not support any conditional PUTs, but that's changed somewhat recently; see

- https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/
- https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-s3-functionality-conditional-writes/

Drive-By: Fix retry of a CompleteMultipartUpload with changed parts; it should 404 rather than succeed in writing the new manifest.

Change-Id: I2e57dacb342b5758f16b502bb91372a2443d0182
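A client-side sketch, assuming a boto3 release recent enough to expose IfNoneMatch on put_object; bucket and key names are placeholders:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')  # placeholder client configuration

# Create-only PUT: succeeds only if the key does not already exist.
s3.put_object(Bucket='s3api-test-bucket', Key='obj',
              Body=b'first write', IfNoneMatch='*')
try:
    s3.put_object(Bucket='s3api-test-bucket', Key='obj',
                  Body=b'second write', IfNoneMatch='*')
except ClientError as err:
    # The conflicting second write is expected to fail with 412.
    print(err.response['Error']['Code'])  # PreconditionFailed
```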
`7a3b7373bc` · Alistair Coles · Add config option for whether to skip s3_acl-requiring tests

Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: I811642fccd916bd9ef71846a8108d50a462740f0
`f9ac22971f` · Thibault Person · Add support of Sigv4-streaming

This update implements Sigv4-streaming (chunked upload) as described in the Amazon S3 documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html

Closes-Bug: #1810026
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: ashnair <ashnair@nvidia.com>
Change-Id: I7be1ce9eb5dba7b17bdf3e53b0d05d25ac0a05b0
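The wire format being handled is the aws-chunked framing from the linked document, i.e. `hex(chunk-size);chunk-signature=<sig>\r\n<data>\r\n` repeated until a zero-length chunk. A deliberately simplified decoder (which extracts but does not verify the per-chunk signatures) might look like:

```python
def split_aws_chunks(body):
    """Very simplified aws-chunked decoder: yields (signature, data) per
    chunk and stops at the zero-length final chunk. Real handling also
    verifies each chunk-signature against the seed signature."""
    offset = 0
    while True:
        header_end = body.index(b'\r\n', offset)
        size_hex, _, sig = body[offset:header_end].partition(
            b';chunk-signature=')
        size = int(size_hex, 16)
        data = body[header_end + 2:header_end + 2 + size]
        yield sig, data
        offset = header_end + 2 + size + 2  # skip the trailing CRLF
        if size == 0:
            return
```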
`ab5c742e2b` · Alistair Coles · s3api: make MPU part error response message same as S3

Change-Id: I60f0b36633c2a348933fd45d348d76b256fca57a
`7140633925` · Zuul · Merge "tests: Allow more configuration for S3 cross-compat tests"
`9723f99250` · Zuul · Merge "tests: More simplification of s3api test bucket naming"
`adc7760ed5` · Alistair Coles · tests: More simplification of s3api test bucket naming

Change-Id: Ie59f11874daf9f166a699fa904581830942163a9
Related-Change: I02efba463a8263ca44be511c025c6c6bfbc57334
`ed0b68e1b7` · Tim Burke · tests: Simplify test bucket name

We already prefix everything with "s3api-test-"; there's no reason to double up the "test-" part.

Change-Id: I02efba463a8263ca44be511c025c6c6bfbc57334
`f9354c9eb6` · Tim Burke · tests: Allow more configuration for S3 cross-compat tests

Specifically, allow endpoint and region to be configured. This allows developers to compare boto3 behaviors with TLS enabled/disabled, for example, or AWS behaviors between different regions.

Change-Id: I4113e2fd47e5535eec8bd9487884af077e8b0318
`128124cdd8` · Tim Burke · Remove py2-only code paths

Change-Id: Ic66b9ae89837afe31929ce07cc625dfc28314ea3
`eac4ffd7a9` · Alistair Coles · s3api: add more MPU cross-compat tests

Change-Id: Ia03af1680c6230658473c0c8d444efb5bb805f58
`7bf2797799` · Tim Burke · s3api: Clean up some errors

- SHA256 mismatches should trip XAmzContentSHA256Mismatch errors, not BadDigest. This should include ClientComputedContentSHA256 and S3ComputedContentSHA256 elements.
- BadDigest responses should include ExpectedDigest elements.
- Fix a typo in InvalidDigest error message.
- Requests with a v4 authorization header require a sha256 header, rejecting with InvalidRequest on failure (and pretty darn early!).
- Requests with a v4 authorization header perform a looks-like-a-valid-sha256 check, rejecting with InvalidArgument on failure.
- Invalid SHA256 should take precedence over invalid MD5.
- v2-signed requests can still raise XAmzContentSHA256Mismatch errors (though they *don't* do the looks-like-a-valid-sha256 check).
- If provided, SHA256 should be used in calculating canonical request for v4 pre-signed URLs.

Change-Id: I06c2a16126886bab8807d704294b9809844be086
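A hedged sketch of the x-amz-content-sha256 handling the list above describes; the helper name, sentinel handling and exception mapping here are illustrative, not the middleware's actual code:

```python
import hashlib
import re

SHA256_RE = re.compile(r'^[a-f0-9]{64}$')


def check_content_sha256(header_value, body):
    """Illustrative version of the checks described above."""
    if header_value in ('UNSIGNED-PAYLOAD',
                        'STREAMING-AWS4-HMAC-SHA256-PAYLOAD'):
        return
    if not SHA256_RE.match(header_value.lower()):
        # not even shaped like a sha256 -> InvalidArgument
        raise ValueError('InvalidArgument')
    if hashlib.sha256(body).hexdigest() != header_value.lower():
        # real mismatch -> XAmzContentSHA256Mismatch, not BadDigest
        raise ValueError('XAmzContentSHA256Mismatch')
```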
`d10351db30` · Clay Gerrard · s3api test for zero byte mpu

Change-Id: I89050cead3ef2d5f8ebfc9cb58f736f33b1c44fe
`46e7da97c6` · indianwhocodes · s3api: Support GET/HEAD request with ?partNumber

Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Closes-Bug: #1735284
Change-Id: Ib396309c706fbc6bc419377fe23fcf5603a89f45
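From the client side this corresponds to the PartNumber parameter in boto3; bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client('s3')  # placeholder client configuration

# Fetch a single part of a completed multipart upload; PartsCount is
# returned whenever PartNumber is supplied.
head = s3.head_object(Bucket='s3api-test-bucket', Key='mpu-obj', PartNumber=1)
print(head['PartsCount'])
part2 = s3.get_object(Bucket='s3api-test-bucket', Key='mpu-obj', PartNumber=2)
print(len(part2['Body'].read()))
```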
`0996433fe5` · Matthew Oliver · s3api: Add basic GET object-lock support

Some tooling out there, like Ansible, will always call to see if object-lock is enabled on a bucket/container. This fails as Swift doesn't understand object-lock or the get object lock API [0]. When you use get-object-lock-configuration on a bucket in S3 that doesn't have it applied, it returns a specific 404:

    GET /?object-lock HTTP/1.1" 404 None
    ...
    <?xml version="1.0" encoding="UTF-8"?>
    <Error>
      <Code>ObjectLockConfigurationNotFoundError</Code>
      <Message>Object Lock configuration does not exist for this bucket</Message>
      <BucketName>bucket_name</BucketName>
      <RequestId>83VQBYP0SENV3VP4</RequestId>
    </Error>

This patch doesn't add support for get object lock; instead it always returns a similar 404 as supplied by S3, so clients know it's not enabled. Also add an object-lock PUT 501 response.

[0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html

Change-Id: Icff8cf57474dfad975a4f45bf2d500c2682c1129
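The probe that tooling issues looks roughly like this with boto3; with the patch, Swift answers it with the same error code S3 uses (bucket name is a placeholder):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')  # placeholder client configuration

try:
    s3.get_object_lock_configuration(Bucket='s3api-test-bucket')
except ClientError as err:
    # Expected on buckets without object lock enabled.
    print(err.response['Error']['Code'])  # ObjectLockConfigurationNotFoundError
```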
`f6ac7d4491` · Tim Burke · Tolerate absolute-form request targets

We've seen S3 clients expecting to be able to send request lines like

    GET https://cluster.domain/bucket/key HTTP/1.1

instead of the expected

    GET /bucket/key HTTP/1.1

Testing against other, independent servers with something like

    ( echo -n $'GET https://www.google.com/ HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n' ; sleep 1 ) | openssl s_client -connect www.google.com:443

suggests that it may be reasonable to accept them; the RFC even goes so far as to say

> To allow for transition to the absolute-form for all requests in some
> future version of HTTP, a server MUST accept the absolute-form in
> requests, even though HTTP/1.1 clients will only send them in
> requests to proxies.

(See https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.2)

Fix it at the protocol level, so everywhere else we can mostly continue to assume that PATH_INFO starts with a / like we always have.

Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: I04012e523f01e910f41d5a41cdd86d3d2a1b9c59
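Conceptually the fix boils down to normalizing an absolute-form target back to origin-form before the rest of the stack sees PATH_INFO; a hypothetical sketch, not the actual protocol-level change:

```python
from urllib.parse import urlsplit


def origin_form(request_target):
    """Hypothetical helper: turn an absolute-form request target such as
    'https://cluster.domain/bucket/key' into the origin-form '/bucket/key'
    that the rest of the WSGI stack expects."""
    if request_target.startswith('/'):
        return request_target  # already origin-form
    parts = urlsplit(request_target)
    path = parts.path or '/'
    return path + ('?' + parts.query if parts.query else '')
```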
`4cba97d7b6` · indianwhocodes · Malformed CompleteMultipartUpload request should 400

Closes-Bug: #1883172
Change-Id: Ie44288976ac5a507c27bd175c5f56c9b0bd04fe0
`162847d151` · Tim Burke · tests: Tolerate NoSuchBucket errors when cleaning up

Sometimes we'll get back a 503 on the initial attempt, though the delete succeeded on the backend. Then when the client automatically retries, it gets back a 404.

Change-Id: I6d8d5af68884b08e22fd8a332f366a0b81acb7ed
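The tolerant cleanup is roughly this sketch (`cleanup_bucket` is an illustrative name, not the test helper itself):

```python
from botocore.exceptions import ClientError


def cleanup_bucket(s3, bucket):
    # A retried DELETE may find the bucket already gone after an earlier
    # 503, so NoSuchBucket is not treated as an error here.
    try:
        s3.delete_bucket(Bucket=bucket)
    except ClientError as err:
        if err.response['Error']['Code'] != 'NoSuchBucket':
            raise
```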
`d0feee1900` · Clay Gerrard · Add MPU to s3api tests

... and reword some mpu listing logic.

Related-Change-Id: I923033e863b2faf3826a0f5ba84307addc34f986
Change-Id: If1909bb7210622908f2ecc5e06d53cd48250572a
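For context, the multipart flow these tests exercise looks roughly like this with boto3; bucket and key names are placeholders, and against real S3 every part except the last must be at least 5 MiB:

```python
import boto3

s3 = boto3.client('s3')  # placeholder client configuration

upload = s3.create_multipart_upload(Bucket='s3api-test-bucket', Key='mpu-obj')
part = s3.upload_part(Bucket='s3api-test-bucket', Key='mpu-obj',
                      UploadId=upload['UploadId'], PartNumber=1,
                      Body=b'x' * (5 * 1024 * 1024))
s3.complete_multipart_upload(
    Bucket='s3api-test-bucket', Key='mpu-obj', UploadId=upload['UploadId'],
    MultipartUpload={'Parts': [{'PartNumber': 1, 'ETag': part['ETag']}]})
```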
`5d9f1f009c` · Alistair Coles · s3api tests: allow AWS credential file loading

When switching the s3api cross-compatibility tests' target between a Swift endpoint and an S3 endpoint, allow specifying an AWS CLI style credentials file as an alternative to editing the swift 'test.conf' file.

Change-Id: I5bebca91821552d7df1bc7fa479b6593ff433925
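For example, boto3 can pick credentials up from an AWS CLI style file rather than from test.conf; the profile name and endpoint below are placeholders:

```python
import boto3

# Reads ~/.aws/credentials (or the file named by AWS_SHARED_CREDENTIALS_FILE).
session = boto3.session.Session(profile_name='swift-s3api-tests')
s3 = session.client('s3', endpoint_url='http://localhost:8080')
print(s3.list_buckets()['Buckets'])
```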
`5320ecbaf2` · Ade Lee · replace md5 with swift utils version

md5 is not an approved algorithm in FIPS mode, and trying to instantiate a hashlib.md5() will fail when the system is running in FIPS mode. md5 is allowed when in a non-security context. There is a plan to add a keyword parameter (usedforsecurity) to hashlib.md5() to annotate whether or not the instance is being used in a security context. In the case where it is not, the instantiation of md5 will be allowed. See https://bugs.python.org/issue9216 for more details.

Some downstream python versions already support this parameter. To support these versions, a new encapsulation of md5() is added to swift/common/utils.py. This encapsulation is identical to the one being added to oslo.utils, but is recreated here to avoid adding a dependency.

This patch is to replace the instances of hashlib.md5() with this new encapsulation, adding an annotation indicating whether the usage is a security context or not. While this patch seems large, it is really just the same change over and again. Reviewers need to pay particular attention as to whether the keyword parameter (usedforsecurity) is set correctly. Right now, all of them appear to be not used in a security context.

Now that all the instances have been converted, we can update the bandit run to look for these instances and ensure that new invocations do not creep in.

With this latest patch, the functional and unit tests all pass on a FIPS enabled system.

Co-Authored-By: Pete Zaitcev
Change-Id: Ibb4917da4c083e1e094156d748708b87387f2d87
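The shape of the encapsulation is roughly the following sketch (not the exact code added to swift/common/utils.py):

```python
import hashlib


def md5(string=b'', usedforsecurity=True):
    """Sketch of the wrapper: pass usedforsecurity through on builds of
    hashlib that accept it, and fall back on those that don't."""
    try:
        return hashlib.md5(string, usedforsecurity=usedforsecurity)
    except TypeError:
        return hashlib.md5(string)


# Non-security usage, e.g. computing an ETag:
etag = md5(b'object body', usedforsecurity=False).hexdigest()
```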
`6097660f0c` · karen chan · s3api: Implement object versioning API

Translate AWS S3 Object Versioning API requests to native Swift Object Versioning API, specifically:

- bucket versioning status
- bucket versioned objects listing params
- object GETorHEAD & DELETE versionId
- multi_delete versionId

Change-Id: I8296681b61996e073b3ba12ad46f99042dc15c37
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
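The translated surface corresponds to the familiar boto3 calls, for example (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client('s3')  # placeholder client configuration

s3.put_bucket_versioning(Bucket='s3api-test-bucket',
                         VersioningConfiguration={'Status': 'Enabled'})
v1 = s3.put_object(Bucket='s3api-test-bucket', Key='obj', Body=b'one')
v2 = s3.put_object(Bucket='s3api-test-bucket', Key='obj', Body=b'two')

# Listing params, GET of an old version, and DELETE of a specific version.
listing = s3.list_object_versions(Bucket='s3api-test-bucket', Prefix='obj')
old = s3.get_object(Bucket='s3api-test-bucket', Key='obj',
                    VersionId=v1['VersionId'])
s3.delete_object(Bucket='s3api-test-bucket', Key='obj',
                 VersionId=v2['VersionId'])
```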
`89c9c6f0b2` · Tim Burke · Have a separate s3api functional test suite

The idea is that we should have a suite of pure-S3 tests that we can point at AWS to verify that we've written accurate tests, then point at Swift-with-s3api to verify that we've correctly implemented the S3 api. As a start, just check GET Service; go ahead and create a few buckets so we can see them in the service listing.

Change-Id: I283757cd3084b1c83a1e9bf0f46b6ce9d7ee8eb9
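The GET Service starting point amounts to something like the following sketch; bucket names are placeholders:

```python
import boto3

s3 = boto3.client('s3')  # placeholder client configuration

# Create a couple of buckets and check they show up in GET Service.
for name in ('s3api-test-alpha', 's3api-test-beta'):
    s3.create_bucket(Bucket=name)
names = {b['Name'] for b in s3.list_buckets()['Buckets']}
assert {'s3api-test-alpha', 's3api-test-beta'} <= names
```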