60c0ab2ea0533dabbcd6315357d426549ace5c9d
580 Commits
| Author | SHA1 | Message | Date |
|---|---|---|---|
| Alistair Coles | 60c0ab2ea0 | Quieten boto logging in func tests. Some s3api functional tests were setting the boto logging level to DEBUG, which results in a very large amount of logging output from the test runner, including large request bodies. This makes it extremely hard to inspect test output. This patch makes the quiet_boto_logging context manager revert the logger level to its original value rather than indiscriminately leaving it set to DEBUG. Change-Id: I1dd9603adf9a19e89da5a461d3c6810a3432ae46 | |
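The restore-on-exit behaviour described above can be sketched with a plain `contextlib` context manager. This is an illustrative reimplementation, not the helper from Swift's test suite; the logger name and the level chosen here are assumptions.

```python
import logging
from contextlib import contextmanager


@contextmanager
def quiet_boto_logging(level=logging.INFO):
    """Temporarily raise the boto logger level, then restore the original.

    Hypothetical sketch of the fix described above: the key point is that
    the previous level is saved and restored, rather than the logger being
    left at DEBUG (or any other hard-coded level) after the test finishes.
    """
    logger = logging.getLogger('boto')
    saved_level = logger.level
    try:
        logger.setLevel(level)
        yield
    finally:
        logger.setLevel(saved_level)
```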
| Zuul | 98eb28d510 | Merge "utils: paths with empty components are invalid" | |
| Chris Smart | 112423f59c | functest: add checks for quota count API. The account quota middleware was missing tests for the new quota count API, and it was also only testing the legacy API for quota bytes. This adds functional tests for the new quota-bytes and quota-count APIs. Change-Id: I6ebb19c90dfb1cfbe0535ed3860f2319e5153c05 | |
| Tim Burke | 015cbaac86 | utils: paths with empty components are invalid. Note that you can still have a "//" in the path with rest_with_last, though. Change-Id: I171afcd67b162634189b752ff92a4f43484bc12a | |
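A minimal sketch of the behaviour the commit above describes, assuming `swift.common.utils.split_path` is the validator in question; the paths used here are illustrative only.

```python
from swift.common.utils import split_path

# Normal case: four segments parsed out of the path.
print(split_path('/v1/AUTH_test/cont/obj', 1, 4, True))

# An empty component (note the double slash in the account position)
# is rejected with a ValueError after this change.
try:
    print(split_path('/v1//cont/obj', 1, 4, True))
except ValueError as err:
    print('rejected:', err)

# With rest_with_last=True a "//" may still appear inside the trailing
# "rest" segment, since that segment is not split further.
print(split_path('/v1/AUTH_test/cont/obj//part', 1, 4, True))
```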
| Daanish Khan | 4eefae2482 | account_quota: migrate quota_bytes and quota_count to the sysmeta namespace. Account quota metadata such as quota_bytes and quota_count is stored in the `meta` namespace, which users have access to; however, it should only be available to reseller admins. This patch adds support for writing the quota metadata to the `sysmeta` namespace, so that it is not accessible by users. The account policy quota already uses `sysmeta` with the namespace `X-Account-Quota-*`, so we follow that pattern. If present, `X-Account-Quota-Bytes` is always preferred. However, to maintain backwards compatibility, `X-Account-Meta-Quota-Bytes` will still be honoured if it exists and `X-Account-Quota-Bytes` is not present. This also adds some new "legacy" tests to validate backwards compatibility. Co-authored-by: Azmain Adib <adib1905@gmail.com> Co-authored-by: Daanish Khan <daanish1337@gmail.com> Co-authored-by: Mohammed Al-Jawaheri <mjawaheri02@gmail.com> Co-authored-by: Nada El-Mestkawy <nadamaged05@gmail.com> Co-authored-by: Tra Bui <trabui.0517@gmail.com> Co-authored-by: Chris Smart <distroguy@gmail.com> Change-Id: Icf7b26023ab5b84136ceaa103fa2797534320f1a | |
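The precedence described above can be illustrated with a small helper. The dictionary layout loosely mirrors what Swift's account-info machinery exposes, but the function itself is hypothetical and the key names are assumptions.

```python
def effective_quota_bytes(account_info):
    """Pick the byte quota, preferring the sysmeta-backed value.

    Hypothetical helper: the sysmeta value (settable only by reseller
    admins) wins; the legacy user-visible meta value is honoured only
    when no sysmeta value is present.
    """
    sysmeta_quota = account_info.get('sysmeta', {}).get('quota-bytes')
    legacy_quota = account_info.get('meta', {}).get('quota-bytes')
    return sysmeta_quota if sysmeta_quota is not None else legacy_quota


# Example: the legacy meta value is ignored once sysmeta is set.
info = {'sysmeta': {'quota-bytes': '10000'}, 'meta': {'quota-bytes': '5000'}}
print(effective_quota_bytes(info))  # '10000'
```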
| Tim Burke | cd0fe25da1 | account_quotas: Fix `X-Remove-Account-Quota-Bytes-Policy-<name>`. Previously, this would reduce the account's quota to zero, which seems like the opposite of what the operator intended. Now it removes the quota, similar to sending an empty quota header. Change-Id: Ic28752d835e0b970f2baa4e68cbfcde4f500b3d4 | |
| Tim Burke | cd288b183d | tests: Functionally test account quotas. Change-Id: Ied0ff6bea7e054fad3fe9579c85d9ae5c9c0b255 | |
| Tim Burke | dd8b7656da | Skip boto 2.x tests if boto is not installed. The boto library was last updated two years ago and has rusted to the point that it's unusable on py312 -- see https://github.com/boto/boto/issues/3951. We should transition all of these tests to boto3 equivalents, but this should help out in the meantime. Related-Bug: #1557260 Related-Bug: #2063367 Change-Id: If95f45371f352c6a2d16be1a3e1b64e265bccfb4 | |
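The skip pattern described above is conventional `unittest` guard logic; the sketch below is generic rather than the tests' actual code, and the test class and body are placeholders.

```python
import unittest

try:
    import boto  # boto 2.x; unmaintained and broken on newer Pythons
except ImportError:
    boto = None


@unittest.skipIf(boto is None, 'boto 2.x is not installed')
class TestLegacyBotoClient(unittest.TestCase):
    # Placeholder test body: real tests would exercise the S3 API via boto.
    def test_boto_importable(self):
        self.assertIsNotNone(boto)


if __name__ == '__main__':
    unittest.main()
```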
| indianwhocodes | 11eb17d3b2 | support x-open-expired header for expired objects. If the global configuration option 'enable_open_expired' is set to true, a client can make a request with the header 'x-open-expired' set to true in order to access an object that has expired, provided it is still in its grace period. If the flag is set to false (the default), clients cannot access expired objects even with the header. When a client sets an 'x-open-expired' header to a true value on a GET/HEAD/POST request, the proxy will forward x-backend-open-expired to the storage server. The storage server will allow clients that set x-backend-open-expired to open and read an object that has not yet been reaped by the object-expirer, even after the x-delete-at time has passed. The header is always ignored when used with temporary URLs. Co-Authored-By: Anish Kachinthaya <akachinthaya@nvidia.com> Related-Change: I106103438c4162a561486ac73a09436e998ae1f0 Change-Id: Ibe7dde0e3bf587d77e14808b169c02f8fb3dddb3 | |
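The gating logic above boils down to two truth checks. The sketch below is illustrative only (the function name and the exact place the middleware reads its config are assumptions), though it uses Swift's real `config_true_value` helper.

```python
from swift.common.utils import config_true_value


def maybe_allow_expired(req_headers, conf):
    """Translate the client hint into the backend hint, if enabled.

    Hypothetical sketch: only when the operator sets enable_open_expired
    does a truthy x-open-expired header become x-backend-open-expired.
    """
    if config_true_value(conf.get('enable_open_expired', 'false')) and \
            config_true_value(req_headers.get('x-open-expired', 'false')):
        req_headers['X-Backend-Open-Expired'] = 'true'
    return req_headers


print(maybe_allow_expired({'x-open-expired': 'true'},
                          {'enable_open_expired': 'true'}))
```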
| Zuul | 60db1f847c | Merge "slo: part-number=N query parameter support" | |
| indianwhocodes | 6adbeb4036 | slo: part-number=N query parameter support. This change allows individual SLO segments to be downloaded by adding an extra 'part-number' query parameter to the GET request. You can also retrieve the Content-Length of an individual segment with a HEAD request. Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> Co-Authored-By: Alistair Coles <alistairncoles@gmail.com> Change-Id: I7af0dc9898ca35f042b52dd5db000072f2c7512e | |
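A client-side illustration of the query parameter, using `requests` against a plain Swift endpoint; the URL, token, and object name are placeholders.

```python
import requests

# Placeholder endpoint, token and SLO manifest name.
url = 'https://swift.example.com/v1/AUTH_test/container/big-slo-object'
headers = {'X-Auth-Token': 'AUTH_tk_example'}

# GET only the second segment of the SLO.
part = requests.get(url, headers=headers, params={'part-number': 2})

# HEAD the same part to learn its Content-Length without downloading it.
head = requests.head(url, headers=headers, params={'part-number': 2})
print(head.status_code, head.headers.get('Content-Length'))
```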
| Zuul | 0947e94f66 | Merge "staticweb: Work with prefix-based tempurls" | |
| Tim Burke | ce9e56a6d1 | lint: Consistently use assertIsInstance. This has been available since py32 and was backported to py27; there is no point in us continuing to carry the old idiom forward. Change-Id: I21f64b8b2970e2dd5f56836f7f513e7895a5dc88 | |
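For reference, the two idioms side by side in a self-contained test; the class and values are illustrative.

```python
import unittest


class TestIdiom(unittest.TestCase):
    def test_old_idiom(self):
        resp_headers = {}
        # Old style: a failure only reports "False is not true".
        self.assertTrue(isinstance(resp_headers, dict))

    def test_new_idiom(self):
        resp_headers = {}
        # Preferred style: a failure names the offending type.
        self.assertIsInstance(resp_headers, dict)


if __name__ == '__main__':
    unittest.main()
```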
| Tim Burke | 8c4e65a6b5 | staticweb: Work with prefix-based tempurls. Note that there's a bit of a privilege escalation, as prefix-based tempurls can now be used to perform listings -- but only on containers with staticweb enabled. Since having staticweb enabled was previously pretty useless unless the container was both public and publicly-listable, I think it's probably fine. This also allows tempurls to be used at the container level, but only for staticweb responses. Change-Id: I7949185fdd3b64b882df01d54a8bc158ce2d7032 | |
| indianwhocodes | 0893cedc35 | Include accept-ranges header in s3api response. Change-Id: Ib3fa895ea13a6703b0f146bc8833c4e635976fdd | |
| Matthew Oliver | 0996433fe5 | s3api: Add basic GET object-lock support. Some tooling out there, like Ansible, will always call to see if object-lock is enabled on a bucket/container. This fails as Swift doesn't understand object-lock or the get-object-lock API[0]. When you use get-object-lock-configuration on a bucket in s3 that doesn't have it applied, it returns a specific 404: `GET /?object-lock HTTP/1.1" 404 None ... <?xml version="1.0" encoding="UTF-8"?> <Error> <Code>ObjectLockConfigurationNotFoundError</Code> <Message>Object Lock configuration does not exist for this bucket</Message> <BucketName>bucket_name</BucketName> <RequestId>83VQBYP0SENV3VP4</RequestId> </Error>` This patch doesn't add support for get-object-lock; instead it always returns a similar 404, as supplied by s3, so clients know it's not enabled. It also adds an object-lock PUT 501 response. [0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html Change-Id: Icff8cf57474dfad975a4f45bf2d500c2682c1129 | |
| Tim Burke | 3f3f5be9bb | tests: boto is always <3.0. Otherwise, it'd be boto3. Change-Id: I2961740fd4f3e914675083331f2465591d63b755 | |
| Tim Burke | f871591baa | tests: swiftclient supports insecure. We already require swiftclient>=3.2.0, and have for years. We can stop checking whether it's 1.x. Related-Change: I9842c9975821bda5c7d8bf2fc214480c0c0a5e96 Change-Id: I798904ab66ca10e21b4999ed7f2be74d1b63584c | |
| Tim Burke | 5392a2057b | tests: Add test(s) for MPU part copy from range. When using the copy-part API, it is expected for s3api to write down an empty value for X-Object-Sysmeta-S3Api-Etag on segments. This was ostensibly to prevent writing down an unrelated S3Api-Etag when copying a part from another MPU, since the copy transfers object sysmeta. We should assume an S3Api-Etag without X-Static-Large-Object is nonsense, and SLO should forever expect empty values for its sysmeta. Drive-By: consolidate handling of boto2 sigv4 skips. Related-Bug: #2035158 Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> Change-Id: Ic6f04a5a6af8a3e65b226cff2ed6c9fce8ce1fa2 | |
| indianwhocodes | 7b39698d0d | wsgi: bad request syntax response missing txn-id. When a client sends a malformed http request, our server returns a valid http error response with Connection: close and closes the connection. We want to include a transaction-id and ensure we log details about the "bad request syntax". Change-Id: Ic0ee1e4fd4d434d442fcffa68da77e862b37d4c6 | |
| Tim Burke | b46b735a3e | Fix handling of non-ASCII accounts. Related-Change: I4ecfae2bca6ffa08ad15e584579ebce707f4628d Related-Change: I1e244c231753b8f4b6f1cf95cb0ae4c3c959ae0f Change-Id: Ia386736b9b283858931794690538871b6e1ad9c8 | |
| Tim Burke | 052bcadb27 | tests: Skip s3api functional tests when no s3api user configured. Change-Id: I61f141a71eddcac600058d66ddf802306df455c1 | |
| Tim Burke | 78f13be75c | tests: Let func tests run with test users 1 and 2 but not 3. Change-Id: Ia564f2ee70f5d04acab1c38e17d1936642a01447 | |
| Zuul | bba3a3145d | Merge "tests: Get rid of test.unit.SkipTest" | |
| Zuul | e21766cf64 | Merge "Skip S3 versioning test when versioning is not enabled" | |
| Tim Burke | cd693e519e | encryption: Expose decrypted metadata via CORS. Normally, the proxy object controller would be adding these, but when encrypted, there won't be any headers in the x-object-meta-* namespace. Closes-Bug: #1868045 Change-Id: I8e708a60ee63f679056300fc9d68227e46d605e8 | |
| Tim Burke | 8dd2d010ac | Skip S3 versioning test when versioning is not enabled. Change-Id: I36e42f459a74ed71a1cc57570a564e5562abbae3 | |
| Tim Burke | be16d6c4fd | tests: Get rid of test.unit.SkipTest. unittest.SkipTest suffices. Change-Id: I11eb73f7dc4a8598fae85d1efca721f69067fb4f | |
| Tim Burke | 488f8c839f | tests: Fix some func tests to do with metadata maximums. Previously, if a cluster's combined configured max_meta_name_length and max_meta_value_length constraints were larger than the configured max_meta_overall_size, we would accidentally go over the overall size while intending to just test being exactly at the value length-limit. Change-Id: I42a5287011509e5b43959aab060f9ec7405ae5b9 | |
| Tim Burke | 3550e00dd9 | tests: Ensure XXE injection tests have config loaded. Depending on test order (and possibly whether there were earlier failures?) the new tests may trip KeyErrors when trying to get s3_access_key values. The solution seems to be defining setUpModule() / tearDownModule() like other functional tests. Also fix up some Content-MD5 handling; if we're using pre-signed URLs, we can't provide a Content-MD5. Change-Id: Ifce72ec255b1b618b9914ce5785d04ee0ebd3b8c Related-Change: I84494123cfc85e234098c554ecd3e77981f8a096 | |
| Aymeric Ducroquetz | b8467e190f | s3api: Prevent XXE injections. Previously, clients could use XML external entities (XXEs) to read arbitrary files from proxy-servers and inject the content into the request. Since many S3 APIs reflect request content back to the user, this could be used to extract any secrets that the swift user could read, such as tempauth credentials, keymaster secrets, etc. Now, disable entity resolution -- any unknown entities will be replaced with an empty string. Without resolving the entities, the request is still processed. [CVE-2022-47950] Closes-Bug: #1998625 Co-Authored-By: Romain de Joux <romain.de-joux@ovhcloud.com> Change-Id: I84494123cfc85e234098c554ecd3e77981f8a096 | |
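A small demonstration of the mitigation, assuming lxml's `resolve_entities=False` parser option is the mechanism (s3api parses its request bodies with lxml); the hostile document below is a contrived example.

```python
from lxml import etree

# Contrived request body that tries to pull a local file into the parsed
# document via an external entity.
hostile = b'''<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Delete [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<Delete><Object><Key>&xxe;</Key></Object></Delete>'''

# With entity resolution disabled, the reference is not expanded, so the
# request still parses but the file contents never enter the document.
parser = etree.XMLParser(resolve_entities=False)
root = etree.fromstring(hostile, parser)
print(root.findtext('.//Key'))  # empty, rather than the file's contents
```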
| Tim Burke | f6ac7d4491 | Tolerate absolute-form request targets. We've seen S3 clients expecting to be able to send request lines like `GET https://cluster.domain/bucket/key HTTP/1.1` instead of the expected `GET /bucket/key HTTP/1.1`. Testing against other, independent servers with something like `( echo -n $'GET https://www.google.com/ HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n' ; sleep 1 ) \| openssl s_client -connect www.google.com:443` suggests that it may be reasonable to accept them; the RFC even goes so far as to say: "To allow for transition to the absolute-form for all requests in some future version of HTTP, a server MUST accept the absolute-form in requests, even though HTTP/1.1 clients will only send them in requests to proxies." (See https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.2) Fix it at the protocol level, so everywhere else we can mostly continue to assume that PATH_INFO starts with a / like we always have. Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> Change-Id: I04012e523f01e910f41d5a41cdd86d3d2a1b9c59 | |
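A sketch of the normalization idea, independent of Swift's actual `SwiftHttpProtocol` code: reduce an absolute-form request target to the origin-form path the rest of the stack expects. The function name is illustrative.

```python
from urllib.parse import urlsplit


def to_origin_form(request_target):
    """Reduce an absolute-form request target to its origin-form path.

    Illustrative sketch only; the real change lives in Swift's HTTP
    protocol class, not in a standalone helper like this.
    """
    if request_target.startswith(('http://', 'https://')):
        parts = urlsplit(request_target)
        path = parts.path or '/'
        return path + ('?' + parts.query if parts.query else '')
    return request_target


print(to_origin_form('https://cluster.domain/bucket/key'))  # /bucket/key
print(to_origin_form('/bucket/key'))                        # unchanged
```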
| Tim Burke | 597887dedc | Extract SwiftHttpProtocol to its own module. Change-Id: I35cade2c46eb6acb66c064cde75d78173f46864c | |
| indianwhocodes | d363236a24 | s3api errors for unsupported headers x-delete-at, x-delete-after. We need to handle the aforementioned headers in our s3 apis and raise an InvalidArgumentError if an s3 client makes a request with them. Change-Id: I2c5b18e52da7f33b31ba386cdbd042f90b69ef97 | |
| Tim Burke | bc3625142c | py310: Fix formatdate() call. Previously, this would trip TypeErrors on py310: `TypeError: 'S3Timestamp' object cannot be interpreted as an integer`. Change-Id: I124c1957264c80d28a6b3e852d042cbc8468939c | |
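The working form is easy to show with the standard library alone; whether the actual patch converts with `float()` or restructures the call is an assumption, and the timestamp class below is a stand-in rather than Swift's S3Timestamp.

```python
import time
from email.utils import formatdate


class FakeTimestamp:
    """Stand-in for S3Timestamp; only here to make the example runnable."""
    def __init__(self, ts):
        self.timestamp = ts

    def __float__(self):
        return float(self.timestamp)


ts = FakeTimestamp(time.time())
# formatdate() wants a real number; converting explicitly sidesteps the
# "cannot be interpreted as an integer" TypeError seen on py310.
print(formatdate(float(ts), usegmt=True))
```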
| Matthew Oliver | ef31baf3fc | formpost: Add support for sha256/512 signatures. SHA-1 has been known to be deprecated for a while, so allow the formpost middleware to use SHA-256 and SHA-512. Follow the tempurl model and accept signatures of the form `<hex-encoded signature>`, `sha1:<base64-encoded signature>`, `sha256:<base64-encoded signature>`, or `sha512:<base64-encoded signature>`, where the base64-encoding can be either standard or URL-safe, and the trailing '=' chars may be stripped off. As part of this, pull the signature-parsing out to a new function, and add detection for hex-encoded sha512 signatures to tempurl. Change-Id: Iaba3725551bd47d75067a634a7571485b9afa2de Related-Change: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046 Co-Authored-By: Tim Burke <tim.burke@gmail.com> Closes-Bug: #1794601 | |
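Computing a formpost signature in the new format might look like the sketch below; the HMAC message layout (path, redirect, max_file_size, max_file_count, expires) follows Swift's documented formpost scheme, while the key and values here are placeholders.

```python
import base64
import hashlib
import hmac
import time

key = b'container-or-account-temp-url-key'        # placeholder secret
path = '/v1/AUTH_test/container/uploads/'
redirect = ''
max_file_size = 104857600
max_file_count = 10
expires = int(time.time() + 600)

# Documented formpost HMAC message: one field per line, in this order.
hmac_body = '\n'.join([path, redirect, str(max_file_size),
                       str(max_file_count), str(expires)]).encode()

digest = hmac.new(key, hmac_body, hashlib.sha256).digest()
# New-style signature: algorithm prefix plus (URL-safe) base64, with the
# trailing '=' padding optionally stripped, as described above.
signature = 'sha256:' + base64.urlsafe_b64encode(digest).decode().rstrip('=')
print(signature)
```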
| Zuul | 5398204f22 | Merge "tempurl: Deprecate sha1 signatures" | |
| Alistair Coles | 2f607cd319 | Round s3api listing LastModified to integer resolution. s3api bucket listing elements currently have LastModified values with millisecond precision. This is inconsistent with the value of the Last-Modified header returned with an object GET or HEAD response, which has second precision. This patch reduces the precision to seconds in bucket listings and upload part listings. This is also consistent with observation of an aws listing response. The last modified values in the swift native listing are rounded *up* to the nearest second to be consistent with the seconds-precision Last-Modified time header that is returned with an object GET or HEAD. However, we continue to include millisecond digits set to 0 in the last-modified string, e.g.: '2014-06-10T22:47:32.000Z'. Also, fix the last modified time returned in an object copy response to be consistent with the last modified time of the object that was created. Previously it was rounded down, but it should be rounded up. Change-Id: I8c98791a920eeedfc79e8a9d83e5032c07ae86d3 | |
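Rounding up to whole seconds while keeping the zeroed millisecond digits can be illustrated with the standard library; this is a sketch of the behaviour, not s3api's actual formatting code.

```python
import math
from datetime import datetime, timezone


def s3_listing_last_modified(epoch_seconds):
    """Round a float timestamp *up* to the second and render it S3-style."""
    whole = math.ceil(epoch_seconds)
    dt = datetime.fromtimestamp(whole, tz=timezone.utc)
    # Millisecond digits stay in the string but are always zero.
    return dt.strftime('%Y-%m-%dT%H:%M:%S.000Z')


print(s3_listing_last_modified(1402440451.999))  # '2014-06-10T22:47:32.000Z'
```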
| Tim Burke | 118cf2ba8a | tempurl: Deprecate sha1 signatures. We've known this would eventually be necessary for a while [1], and way back in 2017 we started seeing SHA-1 collisions [2]. [1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html [2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html UpgradeImpact: "sha1" has been removed from the default set of `allowed_digests` in the tempurl middleware config. If your cluster still has clients requiring the use of SHA-1, explicitly configure `allowed_digests` to include "sha1" and encourage your clients to move to more-secure algorithms. Depends-On: https://review.opendev.org/c/openstack/tempest/+/832771 Change-Id: I6e6fa76671c860191a2ce921cb6caddc859b1066 Related-Change: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046 Closes-Bug: #1733634 | |
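Clients moving off SHA-1 can sign temp URLs with SHA-256 instead; the sketch below follows Swift's documented tempurl HMAC layout (method, expiry, path), with placeholder key, host, and path.

```python
import hashlib
import hmac
import time
from urllib.parse import quote

key = b'account-temp-url-key'                     # placeholder secret
method = 'GET'
path = '/v1/AUTH_test/container/object'
expires = int(time.time() + 3600)

# Documented tempurl HMAC message: method, expiry, and path, one per line.
hmac_body = f'{method}\n{expires}\n{path}'.encode()
sig = hmac.new(key, hmac_body, hashlib.sha256).hexdigest()

url = (f'https://swift.example.com{path}'
       f'?temp_url_sig={quote(sig)}&temp_url_expires={expires}')
print(url)
```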
| Zuul | ec964b23bb | Merge "s3api: Copy more headers from MPU marker to final object" | |
| Tim Burke | 1c4acf2d8f | s3api: Copy more headers from MPU marker to final object. Closes-Bug: 1966396 Change-Id: I253d8e3e8678fad3fde43259ed3225df4048a458 | |
| Zuul | 0651d8175d | Merge "trivial: Replace assertRegexpMatches with assertRegex" | |
| Zuul | 014c98e853 | Merge "s3api: Fix multi_delete with object names using non-ASCII characters" | |
| Zuul | 7ac2b2eb76 | Merge "s3api: Delete all parts when aborting MPU with non-ASCII characters" | |
| Aymeric Ducroquetz | 82ca37517d | s3api: Delete all parts when aborting MPU with non-ASCII characters. Change-Id: Idcda76f7a880a18c3bac699e0fb2435e4a54abbd | |
| Aymeric Ducroquetz | dd64a81e65 | s3api: Fix multi_delete with object names using non-ASCII characters. Co-Authored-By: Florent Vennetier <florent.vennetier@ovhcloud.com> Change-Id: I635bc91faa7709f9df9cdf3aec157a21c08923ca | |
| Aymeric Ducroquetz | 5b3ec5aa64 | s3api: Properly decode MPU request parameters before using them. Specifically, parameters that may contain non-ASCII characters, such as the prefix and marker to list current uploads. Change-Id: Icfae68825f94ddf2412c0274c3d500e265117e8e | |
| Tim Burke | 5f25e1cc77 | s3api: Fix non-ascii MPUs. Previous problems included: returning wsgi strings quoted assuming UTF-8 on py3 when initiating or completing multipart uploads, and trying to str() some unicode on py2 when listing parts, leading to UnicodeEncodeErrors. Change-Id: Ibc1d42c8deffe41c557350a574ae80751e9bd565 | |
| Florent Vennetier | c15818f1e6 | s3api: fix the copy of non-ASCII objects. Trying to copy an object with non-ASCII characters in its name results in, depending on the pipeline: an error code 412 because of a badly urlencoded path, or an error code 500 "TypeError: Expected a WSGI string". This commit fixes the problem by calling str_to_wsgi on the object name after it has been urldecoded. We do not need to call this on the container name because it is supposed to contain only ASCII characters. Change-Id: If837d4e55735b10a783c85d91f37fbea5e3baf1d | |
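The decode-then-re-encode step described above looks roughly like this; the import location of `str_to_wsgi` and the copy-source value are assumptions made for illustration.

```python
from urllib.parse import quote, unquote

# str_to_wsgi converts a native Python 3 str into the latin-1 "WSGI string"
# form Swift passes around internally (assumed here to live in
# swift.common.swob, as used elsewhere in s3api).
from swift.common.swob import str_to_wsgi

# An x-amz-copy-source value with a non-ASCII object name (placeholder).
copy_source = quote('container/d\u00e9j\u00e0-vu', safe='/')

container, obj = unquote(copy_source).split('/', 1)
wsgi_obj = str_to_wsgi(obj)   # now safe to place back into PATH_INFO
print(container, repr(wsgi_obj))
```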
| Tim Burke | b4e532a46f | func test improvements. Not all v1 auth systems use an acct:user format; s3api tests should not require it. Be a little more tolerant of listing consistency issues when resetting. Tolerate s3api /info results returning strings instead of ints. Related-Change: I4a46bd650a53f88c642d402e697869df28bd2fd3 Change-Id: I8f2f247dd113ad637b17d241133b14c35cadecae | |