Commit history at `76ca11773eb6508e1140da2b04ddb2ebc59cd753` (10446 commits)
- **Tim Burke** (`76ca11773e`): lint: Up-rev hacking

  Last time we did this was nearly 4 years ago; drag ourselves into something approaching the present. Address a few new pycodestyle issues that seem reasonable to enforce:

  - E275 missing whitespace after keyword
  - E231 missing whitespace after ','
  - E721 do not compare types; for exact checks use `is` / `is not`, for instance checks use `isinstance()`

  The main motivator is that the old hacking kept us on an old version of flake8 et al., which no longer works with newer Pythons.

  Change-Id: I54b46349fabb9776dcadc6def1cfb961c123aaa0
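The three checks above can be illustrated with a short, hypothetical snippet (the names are illustrative, not drawn from the Swift tree); each line shows the compliant form, with the flagged form noted in a comment:

```python
value = 5

# E721: for an exact type check use `is` / `is not`...
exact_int = type(value) is int            # flagged form: type(value) == int
# ...and for an instance check use isinstance()
is_number = isinstance(value, (int, float))

# E231: whitespace required after ','
pair = (1, 2)                             # flagged form: (1,2)

# E275: whitespace required after a keyword such as `not`
negated = not isinstance(value, str)      # flagged form: not(isinstance(value, str))

print(exact_int, is_number, pair, negated)  # → True True (1, 2) True
```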
- **Tim Burke** (`1936f6735c`): replicator: Rename update_deleted to revert

  This is a more intuitive name for what's going on, and it's been working well for us in the reconstructor.

  Change-Id: Id935de4ca9eb6f38b0d587eaed8d13c54bd89d60
- **Zuul** (`afe31b4c01`): Merge "tests: Fix float expectations for py312"
- **Zuul** (`0cb02a6ce5`): Merge "proxy: don't send multi-part terminator when no parts sent"
- **Tim Burke** (`e96a081024`): tests: Fix float expectations for py312

  From https://docs.python.org/3/whatsnew/3.12.html: "sum() now uses Neumaier summation to improve accuracy and commutativity when summing floats or mixed ints and floats." At least, I *think* that's what was causing the ring builder failures.

  Partial-Bug: #2046352
  Change-Id: Icae2f1e3e95f216d214636bd5a6d1f40aacab20d
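For reference, Neumaier summation can be sketched in pure Python as follows (an illustrative reimplementation, not CPython's actual C code):

```python
def neumaier_sum(values):
    """Compensated (Neumaier) summation: carry a running error term so
    low-order bits lost in each addition are recovered at the end."""
    total = 0.0
    comp = 0.0  # accumulated compensation for lost low-order bits
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            comp += (total - t) + v   # low-order bits of v were lost
        else:
            comp += (v - t) + total   # low-order bits of total were lost
        total = t
    return total + comp

data = [1e16, 1.0, -1e16]
print(neumaier_sum(data))  # → 1.0; naive left-to-right addition loses the 1.0
```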
- **Alistair Coles** (`dc3eda7e89`): proxy: don't send multi-part terminator when no parts sent

  If the proxy timed out while reading a replicated policy multi-part response body, it would transform the ChunkReadTimeout into a StopIteration. This masks the fact that the backend read has terminated unexpectedly. The document_iters_to_multipart_byteranges would complete iterating over parts and send a multipart terminator line, even though no parts may have been sent.

  This patch removes the conversion of ChunkReadTimeout to StopIteration. The ChunkReadTimeout that is now raised prevents the document_iters_to_multipart_byteranges 'for' loop from completing and therefore stops the multi-part terminator line being sent. It is raised from the GetOrHeadHandler similar to other scenarios that raise ChunkReadTimeouts while the resp body is being read.

  A ChunkReadTimeout exception handler is removed in the _iter_parts_from_response method. This handler was previously never reached (because StopIteration rather than ChunkReadTimeout was raised from _get_next_response_part), but if it were reached (i.e. with this change) then it would repeat logging of the error and repeat incrementing the node's error counter.

  This change in the GetOrHeadHandler mimics a similar change in the ECFragGetter [1].

  [1] Related-Change: I0654815543be3df059eb2875d9b3669dbd97f5b4

  Co-Authored-By: Tim Burke <tim.burke@gmail.com>
  Change-Id: I6dd53e239f5e7eefcf1c74229a19b1df1c989b4a
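A minimal, hypothetical sketch of the bug pattern described above (not Swift's actual classes): if a read timeout inside the part iterator is swallowed into StopIteration, the consuming 'for' loop ends "cleanly" and the terminator is still sent; letting the exception propagate aborts the loop before the terminator line.

```python
class ChunkReadTimeout(Exception):
    """Stand-in for swift's timeout exception."""


def parts(fail_before_any_part=True):
    # Simulates reading parts from a backend; a timeout now propagates
    # instead of being converted to StopIteration.
    if fail_before_any_part:
        raise ChunkReadTimeout()
    yield b'part-1'


def render_multipart(part_iter):
    out = []
    for part in part_iter:         # a ChunkReadTimeout here aborts the loop...
        out.append(part)
    out.append(b'--terminator--')  # ...so no terminator is sent on failure
    return out


try:
    render_multipart(parts())
except ChunkReadTimeout:
    print('aborted without sending a terminator')
```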
- **Zuul** (`486fb23447`): Merge "proxy: only use listing shards cache for 'auto' listings"
- **Zuul** (`5c659b1a6d`): Merge "Prevent installation of known-broken eventlet"
- **Alistair Coles** (`252f0d36b7`): proxy: only use listing shards cache for 'auto' listings

  The proxy should NOT read or write to memcache when handling a container GET that explicitly requests 'shard' or 'object' record type. A request for 'shard' record type may specify 'namespace' format, but this request is unrelated to container listings or object updates and passes directly to the backend.

  This patch also removes unnecessary JSON serialisation and de-serialisation of namespaces within the proxy GET path when a sharded object listing is being built. The final response body will contain a list of objects, so there is no need to write intermediate response bodies with a list of namespaces. Requests that explicitly specify a record type of 'shard' will of course still have the response body with serialised shard dicts that is returned from the backend.

  Change-Id: Id79c156432350c11c52a4004d69b85e9eb904ca6
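The gating rule above can be sketched as a tiny predicate (a hypothetical helper, not the real proxy code; treating a missing record-type header as 'auto' is an assumption for illustration):

```python
def may_use_shard_listing_cache(req_headers):
    """Memcache participates only when the client did not pin an
    explicit record type ('shard' or 'object')."""
    record_type = req_headers.get('X-Backend-Record-Type', 'auto').lower()
    return record_type == 'auto'


assert may_use_shard_listing_cache({}) is True
assert may_use_shard_listing_cache({'X-Backend-Record-Type': 'shard'}) is False
assert may_use_shard_listing_cache({'X-Backend-Record-Type': 'object'}) is False
```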
- **Zuul** (`bdbabbb809`): Merge "test: swift.proxy_logging_status is really lazy (in a good way!)"
- **Clay Gerrard** (`0a6daa1ad5`): test: swift.proxy_logging_status is really lazy (in a good way!)

  Related-Change-Id: I9b5cc6d5fb69a2957b8c4846ce1feed8c115e6b6
  Change-Id: I5dda9767c1c66597291211a087f7c917ba990651
- **Zuul** (`4eda676e2e`): Merge "Support swift.proxy_logging_status in request env"
- **Alistair Coles** (`a16e1f55a7`): Improve unit tests for proxy GET ChunkReadTimeouts

  Unit test changes only:

  - Add tests for some resuming replicated GET scenarios.
  - Add test to cover resuming GET fast_forward "failing" when range read is complete.
  - Add test to verify different node_timeout for account and container vs object controller getters.
  - Refactor proxy.test_server.py tests to split out different scenarios.

  Drive-by: remove some ring device manipulation setup that's not needed.

  Change-Id: I38c7fa648492c9bd2173ecf92f89e423bee4abf3
  Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
- **Matthew Oliver** (`b1836f9368`): Update malformed_example.db to actually be malformed

  It seems that since somewhere around sqlite 3.40+, the malformed sqlite db used in our tests isn't considered malformed anymore. I don't actually know how it was originally malformed, but looking in a hex editor it seems to have a bunch of nulls truncated in the middle of the file, which maybe isn't an issue anymore.

  Instead, I've gone and messed up what looks to be the marker before the test table data at the end of the file, from:

  ```
  00001FF0  00 00 00 00 00 00 00 00 00 00 00 03 01 02 0F 31  ...............1
                                            ^^
  ```

  to:

  ```
  00001FF0  00 00 00 00 00 00 00 00 00 00 00 FF 01 02 0F 31  ...............1
                                            ^^
  ```

  Basically, FF'ed the start of the data marker (at least what I'm calling it).

  Closes-Bug: #2051067
  Change-Id: I2a10adffa39abbf7e97718b7228de298209140f8
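The same kind of single-byte corruption can be reproduced programmatically; this is a hypothetical sketch (the offset and paths are illustrative, not the ones used in the Swift fixture), clobbering the first byte of the `SQLite format 3` magic so sqlite rejects the file:

```python
import os
import sqlite3
import tempfile


def flip_byte(path, offset, new_value=0xFF):
    """Overwrite one byte at a fixed offset, corrupting the file in place."""
    with open(path, 'r+b') as f:
        f.seek(offset)
        f.write(bytes([new_value]))


# Build a tiny valid database, then clobber a byte of its header.
fd, db_path = tempfile.mkstemp(suffix='.db')
os.close(fd)
conn = sqlite3.connect(db_path)
conn.execute('CREATE TABLE t (x INTEGER)')
conn.commit()
conn.close()

flip_byte(db_path, 0)  # first byte of the "SQLite format 3" magic

try:
    sqlite3.connect(db_path).execute('SELECT * FROM t')
except sqlite3.DatabaseError as exc:
    print('corrupted:', exc)
finally:
    os.remove(db_path)
```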
- **Tim Burke** (`0d60af2508`): Prevent installation of known-broken eventlet

  See https://github.com/eventlet/eventlet/pull/890

  It's a relatively minor breakage, triggered by clients hitting a funny corner of the HTTP spec, but it does prevent our unit tests from passing and may surprise some clients.

  Change-Id: Id29ba545e4ac7887c63fa62b75469988a2c6773c
- **Zuul** (`1d9ab5d0a8`): Merge "Only try to install py2 dev libraries for py2 jobs"
- **Tim Burke** (`5c4176aaf7`): Only try to install py2 dev libraries for py2 jobs

  Change-Id: I744416ffaf245c278ebce6350854b99c0eed88e3
- **Zuul** (`ca00e6b853`): Merge "CI: Treat grenade-skip-level jobs like grenade jobs"
- **Zuul** (`52321866d9`): Merge "tests: Exercise recent eventlet breakage without XFS"
- **Zuul** (`2b684b796a`): Merge "staticweb: Generate HTML5 pages"
- **Zuul** (`03b033f70f`): Merge "Work with latest eventlet (again)"
- **Zuul** (`4a278ae03f`): Merge "cli: add --sync to db info to show syncs"
- **Tim Burke** (`a736bd96f0`): CI: Treat grenade-skip-level jobs like grenade jobs

  Change-Id: Ia35743d3a59bfc456c45d28340f81a45289ceb6c
- **Tim Burke** (`e39078135e`): tests: Exercise recent eventlet breakage without XFS

  Recently, upper-constraints updated eventlet. Unfortunately, there was a bug which breaks our unit tests that was not discovered during cross-project testing, because the affected unit tests require an XFS temp dir. The requirements change has since been reverted, but we ought to have tests covering the problematic behavior that will actually run as part of cross-project testing.

  See https://github.com/eventlet/eventlet/pull/826 for the eventlet change that introduced the bug; it has since been fixed on master in https://github.com/eventlet/eventlet/pull/890 (though we still need https://review.opendev.org/c/openstack/swift/+/905796 to be able to work with eventlet master).

  Change-Id: I4a6d79317b65f746ee29d2d25073b8c3859cd6a0
- **Zuul** (`569525a937`): Merge "tests: Get test_handoff_non_durable passing with encryption enabled"
- **Tim Burke** (`7e3925aa9c`): tests: Fix probe test when encryption is enabled

  Change-Id: I94e8cfd154aa058d91255efc87776224a919f572
- **Tim Burke** (`3ab9e45d6e`): Work with latest eventlet (again)

  See https://github.com/eventlet/eventlet/pull/826 and its follow-up, https://github.com/eventlet/eventlet/pull/890

  Change-Id: I7dff5342013a3f31f19cb410a9f3f6d4b60938f1
- **Matthew Oliver** (`52c80d652d`): cli: add --sync to db info to show syncs

  When looking at containers and accounts it's sometimes nice to know who they've been replicating with. This patch adds a `--sync|-s` option to swift-{container|account}-info which will also dump the incoming and outgoing sync tables:

  ```
  $ swift-container-info /srv/node3/sdb3/containers/294/624/49b9ff074c502ec5e429e7af99a30624/49b9ff074c502ec5e429e7af99a30624.db -s
  Path: /AUTH_test/new
  Account: AUTH_test
  Container: new
  Deleted: False
  Container Hash: 49b9ff074c502ec5e429e7af99a30624
  Metadata:
  Created at: 2022-02-16T05:34:05.988480 (1644989645.98848)
  Put Timestamp: 2022-02-16T05:34:05.981320 (1644989645.98132)
  Delete Timestamp: 1970-01-01T00:00:00.000000 (0)
  Status Timestamp: 2022-02-16T05:34:05.981320 (1644989645.98132)
  Object Count: 1
  Bytes Used: 7
  Storage Policy: default (0)
  Reported Put Timestamp: 1970-01-01T00:00:00.000000 (0)
  Reported Delete Timestamp: 1970-01-01T00:00:00.000000 (0)
  Reported Object Count: 0
  Reported Bytes Used: 0
  Chexor: 962368324c2ca023c56669d03ed92807
  UUID: f33184e7-56d5-4c74-9d2e-5417c187d722-sdb3
  X-Container-Sync-Point2: -1
  X-Container-Sync-Point1: -1
  No system metadata found in db file
  No user metadata found in db file
  Sharding Metadata:
  Type: root
  State: unsharded
  Incoming Syncs:
  Sync Point    Remote ID                                   Updated At
  1             ce7268a1-f5d0-4b83-b993-af17b602a0ff-sdb1   2022-02-16T05:38:22.000000 (1644989902)
  1             2af5abc0-7f70-4e2f-8f94-737aeaada7f4-sdb4   2022-02-16T05:38:22.000000 (1644989902)
  Outgoing Syncs:
  Sync Point    Remote ID                                   Updated At
  Partition     294
  Hash          49b9ff074c502ec5e429e7af99a30624
  ```

  As a follow-up to the device-in-DB-ID patch, we can see that the replicas at sdb1 and sdb4 have replicated with this node.

  Change-Id: I23d786e82c6710bea7660a9acf8bbbd113b5b727
- **Zuul** (`2331c9abf2`): Merge "tests: Switch get_v4_amz_date_header to take timedeltas"
- **Matthew Oliver** (`03b66c94f4`): Proxy: Use namespaces when getting listing/updating shards

  With the Related-Change, container servers can return a list of Namespace objects in response to a GET request. This patch modifies the proxy to take advantage of this when fetching namespaces. Specifically, the proxy only needs Namespaces when caching 'updating' or 'listing' shard range metadata.

  In order to allow upgrades to clusters, we can't just send 'X-Backend-Record-Type = namespace', as old container servers won't know how to respond. Instead, proxies send a new header 'X-Backend-Record-Shard-Format = namespace' along with the existing 'X-Backend-Record-Type = shard' header. Newer container servers will return namespaces; old container servers continue to return full shard ranges, and they are parsed as Namespaces by the new proxy.

  This patch refactors _get_from_shards to clarify that it does not require ShardRange objects. The method is now passed a list of namespaces, which is parsed from the response body before the method is called. Some unit tests are also refactored to be more realistic when mocking _get_from_shards. Also refactor the test_container tests to better test shard-range and namespace responses from legacy and modern container servers.

  Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
  Co-Authored-By: Jianjian Huo <jhuo@nvidia.com>
  Related-Change: If152942c168d127de13e11e8da00a5760de5ae0d
  Change-Id: I7169fb767525753554a40e28b8c8c2e265d08ecd
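The upgrade-safe negotiation described above can be sketched as follows (a hypothetical simplification: the header names follow the commit message, but the parsing helper and the example response bodies are illustrative, not Swift's actual code). The proxy always asks for shards and adds a format hint; old servers ignore the hint and return full shard-range dicts, which still carry the namespace attributes the proxy needs:

```python
import json


def backend_listing_headers():
    # Sent with every shard-range fetch for listing/updating caches.
    return {
        'X-Backend-Record-Type': 'shard',              # understood by all servers
        'X-Backend-Record-Shard-Format': 'namespace',  # hint for newer servers
    }


def parse_namespaces(body):
    """Accept either bare namespace dicts (new servers) or full
    shard-range dicts (old servers); only (lower, name) is needed."""
    return [(item['lower'], item['name']) for item in json.loads(body)]


new_server = json.dumps([{'lower': '', 'name': 'a/c-shard-0'}])
old_server = json.dumps([{'lower': '', 'name': 'a/c-shard-0',
                          'upper': 'm', 'object_count': 42}])
assert parse_namespaces(new_server) == parse_namespaces(old_server)
```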
- **Jianjian Huo** (`c073933387`): Container-server: add container namespaces GET

  The proxy-server makes GET requests to the container server to fetch full lists of shard ranges when handling object PUT/POST/DELETE and container GETs, then it only stores the Namespace attributes (lower and name) of the shard ranges into Memcache and reconstructs the list of Namespaces based on those attributes. Thus, a namespaces GET interface can be added to the backend container-server to return only a list of those Namespace attributes.

  On a container server setup which serves a container with ~12000 shard ranges, benchmarking results show that the request rate of the HTTP GET of all namespaces (states=updating) is ~12 op/s, while the HTTP GET of all shard ranges (states=updating) is ~3.2 op/s.

  The new namespace GET interface supports most of the headers and parameters supported by the shard range GET interface, for example marker, end_marker, include and reverse. Two exceptions are: 'x-backend-include-deleted' cannot be supported because there is no way for a Namespace to indicate the deleted state, and the 'auditing' state query parameter is not supported because it is specific to the sharder, which only requests full shard ranges.

  Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
  Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
  Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
  Change-Id: If152942c168d127de13e11e8da00a5760de5ae0d
- **Zuul** (`c1c41a145e`): Merge "Get tests passing with latest eventlet"
- **Zuul** (`7d5c73fcde`): Merge "ContainerBroker.get_shard_ranges(): states must be a list"
- **Alistair Coles** (`f3a32367bf`): ContainerBroker.get_shard_ranges(): states must be a list

  The 'states' argument of get_shard_ranges() should be a list of ints, but previously just a single int was tolerated. This was unnecessary and led to inconsistent usage across call sites. We'd like similar ContainerBroker methods, such as the anticipated get_namespaces() [1], to have an interface consistent with get_shard_ranges(), but not continue the unnecessary pattern of supporting both a list and a single int argument for 'states'. This patch therefore normalises all call sites to pass a list and deprecates support for just a single int.

  [1] Related-Change: If152942c168d127de13e11e8da00a5760de5ae0d

  Change-Id: I056cefbf0894dbc68b9a6eb3d76ec4dc0a72de0d
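The deprecation pattern described above is a common one; a hypothetical sketch (not the actual ContainerBroker code) might look like:

```python
import warnings


def normalize_states(states):
    """Accept the deprecated single-int form for now, but warn and
    normalise it to the list form all call sites should use."""
    if isinstance(states, int):
        warnings.warn("passing a single int for 'states' is deprecated; "
                      "pass a list of ints", DeprecationWarning)
        return [states]
    return list(states)


assert normalize_states(3) == [3]
assert normalize_states([1, 2]) == [1, 2]
```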
- **Zuul** (`a2a09a77bc`): Merge "Make the dark data watcher work with sharded containers"
- **Tim Burke** (`6b91334298`): Make the dark data watcher work with sharded containers

  Be willing to accept shards instead of objects when querying containers. If we receive shards, be willing to query them looking for the object.

  Partial-Bug: #1925346
  Change-Id: I0d8dd42f81b97dddd6cf8910afaef4ba85e67d27
- **Alistair Coles** (`f2c6c19411`): container-server unit tests: use self.ts everywhere

  The setUp method creates a timestamp iterator, so let's use it consistently in all the tests.

  Change-Id: Ibd06b243c6db93380b99227ac79157269a64b28a
- **Tim Burke** (`ac7eb8ac9d`): staticweb: Generate HTML5 pages

  HTML 4.01 is so 1999.

  Closes-Bug: #2047679
  Change-Id: Ic5a68998222709ab088caafe9dbea2b726db38c6
- **Ghanshyam Mann** (`3cbf01b12d`): Update python classifier in setup.cfg

  As per the current release's tested runtimes, we test Python versions 3.8 through 3.11, so update the python classifier in setup.cfg accordingly.

  Change-Id: If270236c2b23f432e1a3e1508101a7cc86bbd73b
- **Tim Burke** (`bf7f3ff2f9`): tests: Switch get_v4_amz_date_header to take timedeltas

  Change-Id: Ic89141c0dce619390c2be8a01d231f9ff8e2056c
- **Zuul** (`8bdd8f206a`): Merge "Document allowed_digests for formpost middleware"
- **Tim Burke** (`fe0d138eab`): Get tests passing with latest eventlet

  Previously, our tests would not just fail, but segfault on recent eventlet releases. See https://github.com/eventlet/eventlet/issues/864 and https://github.com/python/cpython/issues/113631

  Fortunately, it looks like we can just avoid actually monkey-patching to dodge the bug.

  Closes-Bug: #2047768
  Change-Id: I0dc22dab05bc00722671dca3f0e6eb1cf6e18349
- **Takashi Kajinami** (`bd64748a03`): Document allowed_digests for formpost middleware

  The allowed_digests option was added to the formpost middleware in addition to the tempurl middleware[1], but the option was not added to the formpost section in the example proxy config file.

  [1]
- **Clay Gerrard** (`5af7719ef3`): Support swift.proxy_logging_status in request env

  When logging a request, if the request environ has a swift.proxy_logging_status item, then use its value for the log message status int. The swift.proxy_logging_status hint may be used by other middlewares when the desired logged status is different from the wire_status_int.

  If the proxy_logging middleware detects a client disconnect, then any swift.proxy_logging_status item is ignored and a 499 status int is logged, as per current behaviour. That is:

  - client disconnect overrides swift.proxy_logging_status and the response status
  - swift.proxy_logging_status overrides the response status

  If the proxy_logging middleware catches an exception, then the logged status int will be 500 regardless of any swift.proxy_logging_status item.

  Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
  Change-Id: I9b5cc6d5fb69a2957b8c4846ce1feed8c115e6b6
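The precedence rules above can be sketched as a toy function (an illustrative model, not the real proxy_logging middleware; the function name and flags are hypothetical):

```python
def status_to_log(wire_status_int, environ,
                  client_disconnect=False, caught_exception=False):
    """Pick the status int to log, honouring the precedence:
    exception (500) > client disconnect (499) > environ hint > wire status."""
    if caught_exception:
        return 500
    if client_disconnect:
        return 499
    return environ.get('swift.proxy_logging_status', wire_status_int)


assert status_to_log(200, {}) == 200
assert status_to_log(200, {'swift.proxy_logging_status': 404}) == 404
assert status_to_log(200, {'swift.proxy_logging_status': 404},
                     client_disconnect=True) == 499
assert status_to_log(200, {'swift.proxy_logging_status': 404},
                     caught_exception=True) == 500
```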
- **Alistair Coles** (`365db20275`): FakeSwift: use HTTPMethodNotAllowed not HTTPNotImplemented

  If a method is not allowed, the real swift proxy server app will return an HTTPMethodNotAllowed response, whereas FakeSwift would previously *raise* HTTPNotImplemented. S3Api deliberately sends requests with method 'TEST', which is not allowed/implemented. To work around the difference between real and fake swift behaviour, FakeSwift was configured to allow the 'TEST' method, and then in some tests an HTTPMethodNotAllowed response was registered for 'TEST' requests!

  This patch modifies FakeSwift to return an HTTPMethodNotAllowed response to the incoming request when the request method is not allowed. It is no longer necessary for FakeSwift to support extending the default list of allowed methods.

  Change-Id: I550d0174e14a5d5a05d26e5cbe9d3353f5da4e8a
- **Alistair Coles** (`b07d87c4be`): tests: use subclasses for S3Acl tests

  We remove s3api.FakeSwift and replace it with the "normal" FakeSwift. Additionally, the @s3acl decorator is removed and replaced with an inheritance-based pattern. This simplifies maintenance using more familiar patterns and improves debugging.

  Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
  Change-Id: I55b596a42af01870b49fda22800f7a1293163eb8
- **Clay Gerrard** (`1c31973d33`): test: couple raw manifests with their TestCase

  The ?format=raw TestCase has its own manifest setup and doesn't do any segment validation. Its manifests are not suitable for use in other TestCases.

  Change-Id: Idf4b72bb59b8bf7232236ca544a3317b6e2e08fd
- **Zuul** (`7a3124d82d`): Merge "proxy: remove x-backend-record-type=shard in object listing"
- **Clay Gerrard** (`bcb8810886`): tests: consolidate Namespace/ShardRange _check_name

  The Namespace class grew account/container properties to make them easier to use in the proxy, and subjected them to consistency requirements similar to those on ShardRange's properties, in the related change. There are no new assertions added in this change; it merely consolidates the py2/py3 validating helper which was duplicated between the Namespace and ShardRange TestCases.

  Related-Change-Id: Iebb09d6eff2165c25f80abca360210242cf3e6b7
  Change-Id: Ide7f1dd3d9c664fb57c47dcd50edb44ae90ff5f9
- **Alistair Coles** (`71ad062bc3`): proxy: remove x-backend-record-type=shard in object listing

  When constructing an object listing from container shards, the proxy would previously return the X-Backend-Record-Type header, with the value 'shard', that is returned with the initial GET response from the root container. It didn't break anything but was plainly wrong. This patch removes the header from object listing responses to requests that did not have the header. The header value is not set to 'object' because in a request that value specifically means 'do not recurse into shards'.

  Change-Id: I94c68e5d5625bc8b3d9cd9baa17a33bb35a7f82f