84a70769b1c56cc376a1485391f81569cd7ce494
19 Commits

**edd5eb29d7** (Tim Burke): s3api: Allow PUT with if-none-match: *

Swift already supports that much, at least. AWS used to not support any conditional PUTs, but that's changed somewhat recently; see:

- https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/
- https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-s3-functionality-conditional-writes/

Drive-By: Fix retry of a CompleteMultipartUpload with changed parts; it should 404 rather than succeed in writing the new manifest.

Change-Id: I2e57dacb342b5758f16b502bb91372a2443d0182
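
As a rough illustration of the behavior above, here is a minimal boto3 sketch of a create-only PUT, assuming a recent boto3 that exposes the IfNoneMatch parameter; the endpoint URL, credentials, bucket, and key are placeholders for a Swift proxy running s3api:

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials for a Swift proxy with s3api enabled.
s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:8080",
    aws_access_key_id="test:tester",
    aws_secret_access_key="testing",
)

try:
    # If-None-Match: * asks the server to write the object only if no
    # object with that key already exists.
    s3.put_object(Bucket="demo-bucket", Key="obj", Body=b"payload",
                  IfNoneMatch="*")
    print("object created")
except ClientError as err:
    # If the object already exists, the conditional PUT is rejected
    # (typically a 412 PreconditionFailed).
    if err.response["Error"]["Code"] == "PreconditionFailed":
        print("object already exists; PUT rejected")
    else:
        raise
```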

**5be20f46df** (Tim Burke): CI: update known failures for the ceph tests

For some reason, when we switched from py36 on centos8 to py39 on centos9, these two tests started failing. Looks like a disagreement about whether the canonical path for a bucket request should have a trailing slash or not. Mark them as known-failures for now so we can stay aware of any other new breakage brought on by swift code changes.

Related-Change: I4f6b9c07af7bc768654f1a5d0c66b048e0f2c9c1
Change-Id: If990752c7ef7667182dbe18e49679e48c0e3d42d

**fcf1110ab2** (Tim Burke): CI: Fix some known-failure formatting

Related-Change: Icff8cf57474dfad975a4f45bf2d500c2682c1129
Change-Id: Ic2283fab0d18ea03c6beb353c6b934344606c15e

**0996433fe5** (Matthew Oliver): s3api: Add basic GET object-lock support

Some tooling out there, like Ansible, will always call to see if object-lock is enabled on a bucket/container. This fails because Swift doesn't understand object-lock or the get-object-lock API [0]. When you call get-object-lock-configuration on an S3 bucket that doesn't have it applied, S3 returns a specific 404:

    GET /?object-lock HTTP/1.1" 404 None
    ...
    <?xml version="1.0" encoding="UTF-8"?>
    <Error>
      <Code>ObjectLockConfigurationNotFoundError</Code>
      <Message>Object Lock configuration does not exist for this bucket</Message>
      <BucketName>bucket_name</BucketName>
      <RequestId>83VQBYP0SENV3VP4</RequestId>
    </Error>

This patch doesn't add real get-object-lock support; instead it always returns a similar 404 to the one S3 supplies, so clients know object lock isn't enabled. It also adds an object-lock PUT 501 response.

[0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html

Change-Id: Icff8cf57474dfad975a4f45bf2d500c2682c1129
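
A minimal boto3 sketch of the client-side check this enables; the endpoint, credentials, and bucket name are placeholders:

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials for a Swift proxy with s3api enabled.
s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:8080",
                  aws_access_key_id="test:tester",
                  aws_secret_access_key="testing")

try:
    s3.get_object_lock_configuration(Bucket="demo-bucket")
except ClientError as err:
    # With this patch, s3api answers the same way AWS does for a bucket
    # without object lock: a 404 with ObjectLockConfigurationNotFoundError.
    if err.response["Error"]["Code"] == "ObjectLockConfigurationNotFoundError":
        print("object lock is not enabled on this bucket")
    else:
        raise
```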

**d0cf743b6f** (Tim Burke): ceph-tests: Remove known-failure

Apparently we fixed that recently without realizing it.

Change-Id: I2f623ffc1400f018c203e930a7b78dfdb9d6e61c
Related-Change: I8c98791a920eeedfc79e8a9d83e5032c07ae86d3

**d29cbc3996** (Tim Burke): CI: Run ceph and rolling upgrade tests under py3

As part of that, the ceph test runner needed up-rev'ing to run under py3. As a result, the known-failures shifted.

Trim the on-demand rolling upgrade jobs list -- now that it's running py3, we only expect it to pass for train and beyond.

Also, pin the smmap version on py2 -- otherwise, the remaining experimental jobs running on centos-7 fail.

Change-Id: Ibe46aecf0f4461be59eb206bfe9063cc1bfff706

**fd2dd11562** (Aymeric Ducroquetz): s3api: Make the 'Quiet' key value case insensitive

When deleting multiple objects, S3 allows a quiet mode to be enabled with the 'Quiet' key. In AWS S3, the value of this key is case-insensitive:

- Quiet mode is enabled if the value is 'true' (regardless of case).
- In all other cases (even a non-boolean value), the mode is disabled.

Also, some tools (like Minio's Python API) send the value 'True' (not 'true').

Change-Id: Id9d1da2017b8d13242ae1f410347febb013e9ce1
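
For context, this is roughly how quiet mode is used from boto3, which serializes Quiet=True as lowercase 'true'; the endpoint, credentials, and names are placeholders:

```python
import boto3

# Placeholder endpoint and credentials for a Swift proxy with s3api enabled.
s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:8080",
                  aws_access_key_id="test:tester",
                  aws_secret_access_key="testing")

# boto3 sends <Quiet>true</Quiet>; some other clients send <Quiet>True</Quiet>.
# With this change, s3api accepts either spelling and suppresses the per-key
# <Deleted> entries in the response.
resp = s3.delete_objects(
    Bucket="demo-bucket",
    Delete={
        "Objects": [{"Key": "a"}, {"Key": "b"}],
        "Quiet": True,
    },
)
# In quiet mode, only errors (if any) are reported back.
print(resp.get("Errors", []))
```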

**10c24e951c** (Tim Burke): s3api: Fix prefix/delimiter/marker quoting

And stop sending WSGI strings on py3.

Change-Id: I9b769e496aa7c8ed5862c2d7310f643838328084
Closes-Bug: #1853654
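
A small boto3 sketch of the kind of request this affects: a v1 listing with URL encoding and non-ASCII listing parameters. The endpoint, credentials, bucket, and key names are placeholders:

```python
import boto3

# Placeholder endpoint and credentials for a Swift proxy with s3api enabled.
s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:8080",
                  aws_access_key_id="test:tester",
                  aws_secret_access_key="testing")

# With EncodingType="url", the prefix/delimiter/marker echoed back in the
# listing must be URL-encoded consistently (and, on py3, must not leak
# WSGI-style strings).  Non-ASCII values are the interesting case.
resp = s3.list_objects(
    Bucket="demo-bucket",
    Prefix="snömän/",
    Delimiter="/",
    Marker="snömän/0001",
    EncodingType="url",
)
for entry in resp.get("Contents", []):
    print(entry["Key"])
```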

**f0b8790c12** (Tim Burke): s3api: Fix blank delimiter handling

Real AWS only includes an empty delimiter element when doing a version-aware listing.

Change-Id: Id246a157c576eac93375be084ada3740f1e09793
Closes-Bug: #1853663

**6097660f0c** (karen chan): s3api: Implement object versioning API

Translate AWS S3 Object Versioning API requests to the native Swift Object Versioning API, specifically:

* bucket versioning status
* bucket versioned objects listing params
* object GETorHEAD & DELETE versionId
* multi_delete versionId

Change-Id: I8296681b61996e073b3ba12ad46f99042dc15c37
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
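
The translated operations map onto familiar boto3 calls; a minimal sketch, with the endpoint, credentials, bucket, and key names as placeholders:

```python
import boto3

# Placeholder endpoint and credentials for a Swift proxy with s3api enabled.
s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:8080",
                  aws_access_key_id="test:tester",
                  aws_secret_access_key="testing")

# Bucket versioning status
s3.put_bucket_versioning(Bucket="demo-bucket",
                         VersioningConfiguration={"Status": "Enabled"})
print(s3.get_bucket_versioning(Bucket="demo-bucket").get("Status"))

# Versioned object listing
listing = s3.list_object_versions(Bucket="demo-bucket", Prefix="obj")
for version in listing.get("Versions", []):
    print(version["Key"], version["VersionId"], version["IsLatest"])

# GET and DELETE of a specific version
if listing.get("Versions"):
    version_id = listing["Versions"][0]["VersionId"]
    s3.get_object(Bucket="demo-bucket", Key="obj", VersionId=version_id)
    s3.delete_object(Bucket="demo-bucket", Key="obj", VersionId=version_id)
```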

**e11c58ef89** (Tim Burke): Turn off dns_compliant_bucket_names for ceph tests

We get a handful more passing tests that way, following their move to boto3.

Change-Id: I73e9c38bde00a7117cec97e98f013e86350aa5be

**249a302d0c** (Tim Burke): Remove a bunch of known-failures that moved from boto to boto3

Change-Id: I775a03e0ba1e10982eb6f7ef52be773c8831b1ec

**cfb3ae6019** (Tim Burke): Update known-failures and config for up-rev'ed ceph/s3tests

Change-Id: I3833843cd8d23c498a2afe6c68a3f0afe26343c0

**854a72facf** (Zuul): Merge "S3Api: handle non-ASCII markers in v1 listings."

**8b64381371** (Timur Alperovich): Set Content-Type with s3api metadata updates.

S3 supports two metadata operations on object copy: COPY and REPLACE. When using REPLACE, the Content-Type should be set to the one supplied by the caller. When using COPY, the existing object's Content-Type value is used.

Change-Id: Ic7c6278dedef308c9219eb45751abfa5655f144f
Closes-Bug: #1828907
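
A minimal boto3 sketch of the two copy modes; the endpoint, credentials, bucket, and key names are placeholders:

```python
import boto3

# Placeholder endpoint and credentials for a Swift proxy with s3api enabled.
s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:8080",
                  aws_access_key_id="test:tester",
                  aws_secret_access_key="testing")

# COPY (the default): metadata, including Content-Type, comes from the
# source object.
s3.copy_object(
    Bucket="demo-bucket", Key="copy-default",
    CopySource={"Bucket": "demo-bucket", "Key": "source-obj"},
)

# REPLACE: the caller's metadata wins, so the supplied ContentType should be
# stored on the new object.
s3.copy_object(
    Bucket="demo-bucket", Key="copy-replace",
    CopySource={"Bucket": "demo-bucket", "Key": "source-obj"},
    MetadataDirective="REPLACE",
    ContentType="application/json",
    Metadata={"purpose": "demo"},
)
```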

**dade632b0f** (Timur Alperovich): S3Api: handle non-ASCII markers in v1 listings.

Added a test for S3 v1 listings that use URL encoding and have non-ASCII characters. In the process discovered that the XML schema for ListBucketResult had a small problem: Delimiter and EncodingType needed to be reordered.

Change-Id: Ib3124ea079a73a577b86de97657603a64b16f965

**78c9fd9f93** (karen chan): Change PUT bucket conflict error

When the requester already owned a bucket, a PUT on it returned a BucketAlreadyExists error, whereas AWS S3 returns BucketAlreadyOwnedByYou; this changes the error returned by swift3 to match. If the bucket already exists but is not owned by the user, a PUT still returns a 409 Conflict with BucketAlreadyExists.

Change-Id: I32a0a9add57ca0e4d667b5eb538dc6ea53359944
Closes-Bug: #1498231
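
A minimal boto3 sketch of how a client sees the two outcomes; the endpoint, credentials, and bucket name are placeholders:

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials for a Swift proxy with s3api enabled.
s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:8080",
                  aws_access_key_id="test:tester",
                  aws_secret_access_key="testing")

try:
    s3.create_bucket(Bucket="demo-bucket")
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "BucketAlreadyOwnedByYou":
        # The requester already owns this bucket; retrying the PUT is harmless.
        pass
    elif code == "BucketAlreadyExists":
        # The name is taken by another account: a real 409 conflict.
        raise
    else:
        raise
```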

**2c7768a3cb** (Kota Tsuyuzaki): Small cleanup on s3api

This patch is a follow-up that removes unnecessary files and a comment in the code. The conf files were used to set up the functests environment in the old swift3 repository, but that should be ported to the functests setup scripts (see the related change). Either way, we don't need the older shell-based conf.in script.

Change-Id: If431979ea6fa373ac1cde4b7e13d57d91fb15be8
Related-Change: I6f30f74678ad35479da237361bee48c46c0ecc49

**636b922f3b** (Kota Tsuyuzaki): Import swift3 into swift repo as s3api middleware

This attempts to import the openstack/swift3 package into the swift upstream repository and namespace. It is mostly a straightforward port, except for the following items:

1. Rename the swift3 namespace to swift.common.middleware.s3api
   1.1 Also rename some conflicting class names (e.g. Request/Response)
2. Port unittests to the test/unit/s3api dir to be able to run on the gate
3. Port functests to test/functional/s3api and set up in-process testing
4. Port docs to the doc dir, then address the namespace change
5. Use get_logger() instead of a global logger instance
6. Avoid a global conf instance

Plus various minor fixes along the way (e.g. packages, dependencies, deprecated things).

The details and patch references for the work on feature/s3api are listed at https://trello.com/b/ZloaZ23t/s3api (completed board).

Note that, because this is just a port, no new features have been developed since the last swift3 release; in future work, Swift upstream may continue working on the remaining items for further improvements and the best compatibility with Amazon S3. Please read the new docs for your deployment and keep track of what may change in future releases.

Change-Id: Ib803ea89cfee9a53c429606149159dd136c036fd
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>