10699 Commits
| Author | SHA1 | Message | Date |
|---|---|---|---|
| Zuul | 2c4ff7bf2c | Merge "CI: Update rolling-upgrade jobs to point to unmaintained branches" | |
| Zuul | b6f5971471 | Merge "reno: Update master for unmaintained/wallaby" | |
| Zuul | e951a42788 | Merge "reno: Update master for unmaintained/victoria" | |
| Tim Burke | 9279e3d2c0 | CI: Bring unit test jobs in line with 2024.1 tested runtimes<br>The last time I really looked at this was probably Yoga, when we were targeting 3.6 through 3.9 (and left 3.7 and 3.8 as experimental jobs). Now, though, OpenStack is targeting 3.8 through 3.11; as before, we can assume that if tests pass on those two versions, they should pass on the versions in between, too (but still have them as experimental, on-demand jobs). See https://governance.openstack.org/tc/reference/runtimes/2024.1.html<br>Keep 2.7 and 3.6 testing as our own self-imposed minimums.<br>Change-Id: I7700aa3c93df311644655e7ebaf0b67aa692ee80 | |
| Tim Burke | b06ffea941 | CI: Update rolling-upgrade jobs to point to unmaintained branches<br>Change-Id: I936d9074ab60e34b379fb207d63f10bd5c3a4312 | |
| Alistair Coles | 3517ca453e | backend ratelimit: support per-method rate limits<br>Add support for config options such as: head_requests_per_device_per_second = 100<br>Change-Id: I2936f799b6112155ff01dcd8e1f985849a1af178 | |
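The per-method options follow the naming pattern shown in the commit message above. A hedged sketch of what such a `[filter:backend_ratelimit]` section might look like; every option other than `head_requests_per_device_per_second` is an assumption extrapolated from that naming pattern, not taken from the commit:

```ini
[filter:backend_ratelimit]
use = egg:swift#backend_ratelimit
# From the commit message: cap HEAD requests per device per second.
head_requests_per_device_per_second = 100
# Assumed by analogy with the same per-method naming pattern:
get_requests_per_device_per_second = 100
```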
| Clay Gerrard | d10351db30 | s3api test for zero byte mpu<br>Change-Id: I89050cead3ef2d5f8ebfc9cb58f736f33b1c44fe | |
| indianwhocodes | 46e7da97c6 | s3api: Support GET/HEAD request with ?partNumber<br>Co-Authored-By: Alistair Coles <alistairncoles@gmail.com><br>Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com><br>Closes-Bug: #1735284<br>Change-Id: Ib396309c706fbc6bc419377fe23fcf5603a89f45 | |
| indianwhocodes | 6adbeb4036 | slo: part-number=N query parameter support<br>This change allows individual SLO segments to be downloaded by adding an extra 'part-number' query parameter to the GET request. You can also retrieve the Content-Length of an individual segment with a HEAD request.<br>Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com><br>Co-Authored-By: Alistair Coles <alistairncoles@gmail.com><br>Change-Id: I7af0dc9898ca35f042b52dd5db000072f2c7512e | |
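The part-number semantics described in the commit above can be illustrated with a small sketch. This is not Swift code; it only demonstrates the assumed mapping from a `part-number=N` query parameter (1-based, in manifest order) to the byte range of the corresponding SLO segment:

```python
# Illustration only (not Swift's implementation): given the sizes of an
# SLO's segments, compute the inclusive byte range that a part-number=N
# GET would cover, assuming parts are numbered from 1 in manifest order.
def part_byte_range(segment_sizes, part_number):
    if not 1 <= part_number <= len(segment_sizes):
        raise ValueError('part-number out of range')
    start = sum(segment_sizes[:part_number - 1])
    end = start + segment_sizes[part_number - 1] - 1  # inclusive last byte
    return start, end

# e.g. a 3-part SLO with 5 MiB, 5 MiB and 1 MiB segments:
sizes = [5 * 2**20, 5 * 2**20, 2**20]
print(part_byte_range(sizes, 2))  # second part starts right after the first
```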
| | dc379af893 | reno: Update master for unmaintained/xena<br>Update the xena release notes configuration to build from unmaintained/xena.<br>Change-Id: I72576af46cc13fac0a7a69461b673931c8b8496e | |
| | 3ad1565f0a | reno: Update master for unmaintained/wallaby<br>Update the wallaby release notes configuration to build from unmaintained/wallaby.<br>Change-Id: Ieb2d4542d395bb1f4498beab0d9ec146fad7ba84 | |
| | 0d8c89b123 | reno: Update master for unmaintained/victoria<br>Update the victoria release notes configuration to build from unmaintained/victoria.<br>Change-Id: I5a420de74a2ef5096135511dfea2489419414856 | |
| Alistair Coles | e9abfd76ee | backend ratelimit: support reloadable config file<br>Add support for a backend_ratelimit_conf_path option in the [filter:backend_ratelimit] config. If specified, the middleware gives precedence to config options from that file over options from the [filter:backend_ratelimit] section. The path defaults to /etc/swift/backend-ratelimit.conf. The config file is periodically reloaded and any changed options are applied. The middleware logs a warning the first time it fails to load a config file that had previously been loaded successfully, and logs at info level when it first successfully loads a config file that had previously failed to load. Otherwise, the middleware logs when a loaded config file results in the config being changed.<br>Change-Id: I6554e37c6ab5b0a260f99b54169cb90ab5718f81 | |
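The reload-on-change behaviour described in that commit can be sketched as a generic pattern: re-check the file periodically and apply options only when it has actually changed. This is a minimal illustration of the technique (using mtime comparison and a toy `key = value` parser, both assumptions), not Swift's actual middleware code:

```python
import os

class ReloadableConfig:
    """Sketch of a periodically reloadable config file: options are
    re-read and applied only when the file has changed on disk."""

    def __init__(self, path):
        self.path = path
        self.mtime = None   # mtime of the last successful load
        self.options = {}

    def maybe_reload(self):
        """Return True if the config was (re)loaded, False otherwise."""
        try:
            mtime = os.path.getmtime(self.path)
        except OSError:
            # A real implementation would warn once here, as the
            # commit message describes, then stay quiet.
            return False
        if mtime == self.mtime:
            return False  # unchanged since last load
        with open(self.path) as f:
            # toy parser: one "key = value" per line
            self.options = dict(
                map(str.strip, line.split('=', 1))
                for line in f if '=' in line)
        self.mtime = mtime
        return True
```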
| Tim Burke | 6a426f7fa0 | sharder: Add periodic_warnings_interval to example config<br>Change-Id: Ie3c64646373580b70557f2720a13a5a0c5ef7097 | |
| Zuul | 0f6ecb641b | Merge "docs: add discussion of content-type metadata" | |
| Alistair Coles | cc27780042 | docs: add discussion of content-type metadata<br>Change-Id: I2aa13e2b23bda86c51ef6aaa69ea3fd0075bb9ad | |
| Matthew Oliver | 4135133a63 | memcachering: change failed to yield log message<br>Currently, when the memcachering `_get_conns` method runs out of memcached servers to try and so fails to yield anything, we log: "All memcached servers error-limited". However, this message isn't entirely accurate: the method can also fail because it failed to connect to all of its memcached servers, not just because they're error-limited. Error-limiting of memcached servers can also be disabled, in which case the message is a red herring. Downstream we use an mcrouter client on each node, which itself talks to a bunch of memcached servers; so in Swift's memcachering client we configure only that one mcrouter client as a single server in the ring, and because of this we disable memcached error-limiting. When the node gets too overloaded we've had timeouts talking to the local mcrouter client, which fires off error-limited log messages that can confuse things. Because it's possible to turn off error-limiting, the old log line is no longer adequate, so this patch changes it to: "No more memcached servers to try".<br>Change-Id: I97fb4f3ee2ac45831aae14a782b2c6dc73e82d85 | |
| Zuul | 627448362a | Merge "CI: Remove centos-7 jobs" | |
| Zuul | dd066946a1 | Merge "CI: fix rolling-upgrade jobs" | |
| Zuul | 6cc5262bb7 | Merge "CI: pin python-dateutil for py2" | |
| Tim Burke | bd3b2256a9 | CI: Remove centos-7 jobs<br>CentOS 7 will go EOL later this year, and infra wants to drop the nodes soon-ish; don't make them wait on our account. The only major loss is py2 probe tests, but officially, yoga was the last release we pledged to support py2.<br>Change-Id: I8f6c247c21f16aa4717569cc69308f846c6a0245 | |
| Tim Burke | 275af9e008 | CI: fix rolling-upgrade jobs<br>train and ussuri are now EOL; drop them. yoga has moved from stable to unmaintained.<br>Change-Id: I3516823fdacbe8fd3c2434c0de9dedd1d82980fe | |
| Tim Burke | f32f2dd023 | CI: pin python-dateutil for py2<br>Their 2.9.0 release is known-broken for py27-py35.<br>Change-Id: I40c1724fa673ac252f5052ac85006788ba69d5c7 | |
| Zuul | 3478803a95 | Merge "zero bytes manifests are not legacy" | |
| Zuul | e9cf2a31aa | Merge "tests: Clear txn id on init for all debug loggers" | |
| Zuul | 0947e94f66 | Merge "staticweb: Work with prefix-based tempurls" | |
| Clay Gerrard | 130188b6c0 | zero bytes manifests are not legacy<br>Change-Id: I7c8adb129b8770eee501748a378f3adc42c8cd39 | |
| Zuul | 4c5f41cc1f | Merge "Fix diskfile test failing on macOS" | |
| Tim Burke | 1ee9b1e3ba | tests: Clear txn id on init for all debug loggers<br>Since we fake out all the greenthread stuff to run in the main thread, we can (sometimes?) find that a transaction ID has already been set, leading to failures in test_bad_request_app_logging like: AssertionError: b'X-Trans-Id: test-trans-id' not found in b'X-Trans-Id: tx...'<br>By resetting the logger's txn_id, we're assured that our mock will be run and the expected transaction ID will be used.<br>Change-Id: I465eed5372a2a5e591f80a09676f4b7f091cd444 | |
| Zuul | 07c8e8bcdc | Merge "Object-server: add periodic greenthread yielding during file read." | |
| Jianjian Huo | d5877179a5 | Object-server: add periodic greenthread yielding during file read.<br>Currently, when the object-server serves a GET request and the DiskFile reader iterates over disk file chunks, no explicit eventlet sleep is called. When the network outpaces the slow disk IO, it's possible for one large, slow GET request to prevent the eventlet hub from scheduling any other green threads for a long period of time. To improve this, this patch adds a configurable sleep parameter to the DiskFile reader: 'cooperative_period', with a default value of 0 (disabled).<br>Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com><br>Change-Id: I80b04bad0601b6cd6caef35498f89d4ba70a4fd4 | |
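The cooperative-yield idea in that commit can be sketched generically: while iterating chunks, yield control to the hub every `cooperative_period` chunks. Here `sleep` is injectable for testing (Swift would use eventlet's sleep); this is an illustration of the pattern, not the actual DiskFile reader code:

```python
# Sketch of periodic greenthread yielding during a chunked read: call
# sleep(0) every `cooperative_period` chunks so one large, slow read
# cannot monopolize the event hub. 0 disables the behaviour.
def cooperative_iter(chunks, cooperative_period=0, sleep=None):
    do_sleep = sleep or (lambda t: None)
    for i, chunk in enumerate(chunks, 1):
        if cooperative_period > 0 and i % cooperative_period == 0:
            do_sleep(0)  # yield to the hub, then carry on reading
        yield chunk
```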
| Alistair Coles | 2da150b890 | Fix diskfile test failing on macOS<br>The existing test fails on macOS because the value of errno.ENODATA is platform dependent. On macOS ENODATA is 96: % man 2 intro \| grep ENODATA → 96 ENODATA No message available.<br>Change-Id: Ibc760e641d4351ed771f2321dba27dc4e5b367c1 | |
| Alistair Coles | 2500fbeea9 | proxy: don't use recoverable_node_timeout with x-newest<br>Object GET requests with a truthy X-Newest header are not resumed if a backend request times out. The GetOrHeadHandler therefore uses the regular node_timeout when waiting for a backend connection response, rather than the possibly shorter recoverable_node_timeout. However, previously while reading data from a backend response the recoverable_node_timeout would still be used with X-Newest requests. This patch simplifies GetOrHeadHandler to never use recoverable_node_timeout when X-Newest is truthy.<br>Change-Id: I326278ecb21465f519b281c9f6c2dedbcbb5ff14 | |
| Alistair Coles | 8061dfb1c3 | proxy-server: de-duplicate _get_next_response_part method<br>Both GetOrHeadHandler (used for replicated policy GETs) and ECFragGetter (used for EC policy GETs) have _get_next_response_part methods that are very similar. This patch replaces them with a single method in the common GetterBase superclass. Both classes are modified to use *only* the Request instance passed to their constructors. Previously their entry methods (GetOrHeadHandler.get_working_response and ECFragGetter.response_parts_iter) accepted a Request instance as an arg and the class then variably referred to that or the Request instance passed to the constructor. Both instances must be the same, so it is safer to only allow the Request to be passed to the constructor. The 'newest' keyword arg is dropped from the GetOrHeadHandler constructor because it is never used. This refactoring patch makes no intentional behavioral changes, apart from the text of some error log messages, which have been changed to differentiate replicated object GETs from EC fragment GETs.<br>Change-Id: I148e158ab046929d188289796abfbbce97dc8d90 | |
| Zuul | 50336c5098 | Merge "test: all primary error limit is error" | |
| Zuul | fe8227e56c | Merge "reno: Update master for unmaintained/yoga" | |
| Zuul | 439dc93cc4 | Merge "Add ClosingIterator class; be more explicit about closes" | |
| Clay Gerrard | 89dd515310 | test: all primary error limit is error<br>Change-Id: Ib790be26a2b990f313484f9ebdc99b8dc14613c9 | |
| Zuul | 3aba22fde5 | Merge "Stop using deprecated datetime.utc* functions" | |
| Tim Burke | c522f5676e | Add ClosingIterator class; be more explicit about closes<br>... in document_iters_to_http_response_body. We seemed to be relying a little too heavily upon prompt garbage collection to log client disconnects, leading to failures in test_base.py::TestGetOrHeadHandler::test_disconnected_logging under python 3.12.<br>Closes-Bug: #2046352<br>Co-Authored-By: Alistair Coles <alistairncoles@gmail.com><br>Change-Id: I4479d2690f708312270eb92759789ddce7f7f930 | |
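The idea behind a closing iterator, as the commit describes it, is to propagate close() explicitly instead of relying on prompt garbage collection. A minimal sketch of that pattern (an illustration, not Swift's actual ClosingIterator class):

```python
# Wrap an iterator and guarantee that close() reaches the wrapped
# iterable (and any extra closeables) exactly once, rather than hoping
# garbage collection triggers the cleanup at a useful time.
class ClosingIterator:
    def __init__(self, iterable, other_closeables=()):
        self.wrapped = iter(iterable)
        self.closeables = [iterable, *other_closeables]
        self.closed = False

    def __iter__(self):
        return self

    def __next__(self):
        return next(self.wrapped)

    def close(self):
        if not self.closed:
            self.closed = True  # idempotent: close everything only once
            for closeable in self.closeables:
                close = getattr(closeable, 'close', None)
                if close:
                    close()
```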
| | db2caea1f3 | reno: Update master for unmaintained/yoga<br>Update the yoga release notes configuration to build from unmaintained/yoga.<br>Change-Id: I3ef2117e0e00c2a1dc02ab018baae04ebfeb7214 | |
| Zuul | 51ae9b00c9 | Merge "lint: Consistently use assertIsInstance" | |
| Zuul | ad41371005 | Merge "lint: Up-rev hacking" | |
| Zuul | 93d654024a | Merge "diskfile: Ignore invalid suffixes in invalidations file" | |
| Zuul | 4d3f9fe952 | Merge "sharding: don't replace own_shard_range without an epoch" | |
| Tim Burke | ce9e56a6d1 | lint: Consistently use assertIsInstance<br>This has been available since py32 and was backported to py27; there is no point in us continuing to carry the old idiom forward.<br>Change-Id: I21f64b8b2970e2dd5f56836f7f513e7895a5dc88 | |
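The idiom swap the lint change enforces looks like this; `assertIsInstance` is a standard `unittest` assertion and gives a clearer failure message than wrapping `isinstance` in `assertTrue`:

```python
import unittest

class Example(unittest.TestCase):
    def test_old_idiom(self):
        # what the codebase used to do: opaque "False is not true" on failure
        self.assertTrue(isinstance(3, int))

    def test_new_idiom(self):
        # preferred: failure message names the object and the expected type
        self.assertIsInstance(3, int)
```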
| Tim Burke | 76ca11773e | lint: Up-rev hacking<br>Last time we did this was nearly 4 years ago; drag ourselves into something approaching the present. Address a few new pyflakes issues that seem reasonable to enforce: E275 missing whitespace after keyword; E231 missing whitespace after ','; E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`. The main motivator is that the old hacking kept us on an old version of flake8 et al., which no longer works with newer Pythons.<br>Change-Id: I54b46349fabb9776dcadc6def1cfb961c123aaa0 | |
| Matthew Oliver | 8227f4539c | sharding: don't replace own_shard_range without an epoch<br>We've observed a root container suddenly think it's unsharded when its own_shard_range (OSR) is reset. This patch blocks a remote OSR with an epoch of None from overwriting a local epoched OSR. The only way we've observed this happen is when a new replica or handoff node creates a container whose new own_shard_range is created without an epoch and is then replicated to older primaries. However, if a bad node with a non-epoched OSR is on a primary, its newer timestamp would prevent pulling the good OSR from its peers, so it would be left stuck with its bad one. When this happens, expect to see a bunch of: "Ignoring remote osr w/o epoch: x, from: y". When an OSR comes in from a replica without an epoch when it should have one, we do a pre-flight check to see whether it would remove the epoch before emitting the error above. We do this because, when sharding is first initiated, it's perfectly valid to get OSRs without epochs from replicas; this is expected and harmless.<br>Closes-Bug: #1980451<br>Change-Id: I069bdbeb430e89074605e40525d955b3a704a44f | |
| Tim Burke | c5d743347c | diskfile: Ignore invalid suffixes in invalidations file<br>Change-Id: I0357939cf3a12712e6719c257705cf565e3afc8b | |
| Tim Burke | 1936f6735c | replicator: Rename update_deleted to revert<br>This is a more intuitive name for what's going on, and it's been working well for us in the reconstructor.<br>Change-Id: Id935de4ca9eb6f38b0d587eaed8d13c54bd89d60 | |