f256bc7eb385d8d80c2ce2aa26e270171d9164a1
11074 Commits
| Author | SHA1 | Message | Date |
|---|---|---|---|
| Clay Gerrard | f256bc7eb3 | **tests: remove some global patching**<br>Replace a questionable reload_module with the idiomatic addCleanup (see the addCleanup sketch after this table).<br>Change-Id: I66d4df1e2dba058b7c719a4a932234b3fc10b554<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Zuul | 48a5d5e42f | Merge "test-db-replicator (trivial): just one tmpdir" | |
| Zuul | 4b7543b2e1 | Merge "trivial test_[db_]replicator cleanup" | |
| Clay Gerrard | fac55ced3c | **test-db-replicator (trivial): just one tmpdir**<br>Change-Id: I1e53d171faff02e2dbcbcc779ad3a47506b26853<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Zuul | c161aa168f | Merge "relinker: allow clobber-hardlink-collision" | |
| Zuul | 4fa1237997 | Merge "common.db_replicator: log container, db paths consistently" | |
| Alistair Coles | c0fefe80b3 | **trivial test_[db_]replicator cleanup**<br>* add a self.temp_dir in setUp and remove it in tearDown<br>* be consistent in the order of (expected, actual) args<br>* assert the complete exception error line<br>Related-Change: I289d3e9b6fe14159925786732ad748acd0459812<br>Change-Id: I185c8cd55db6df593bb3304c54c5160c1f662b86<br>Signed-off-by: Alistair Coles <alistairncoles@gmail.com> | |
| Clay Gerrard | be62933d00 | **relinker: allow clobber-hardlink-collision**<br>The relinker has already been robust to hardlink collisions on tombstones for some time; this change lets ops optionally (non-default) enable similar handling of other files when relinking the old=>new partdir. If your cluster is hitting a bunch of these collisions and, after spot checking, you determine the data is in fact duplicate copies of the same data, you'd much rather have the relinker handle them programmatically and non-destructively than force ops to rm a bunch of files manually just to get out of a PPI. Once the PPI is over and your reconstructors are running again, after some validation you can probably clean out your quarantine dirs (see the collision-handling sketch after this table).<br>Drive-by: log unknown relink errors at error level to match the expected non-zero return code.<br>Closes-Bug: #2127779<br>Change-Id: Iaae0d9fb7a1949d1aad9aa77b0daeb249fb471b5<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| ashnair | 41bf72a5cc | **common.db_replicator: log container, db paths consistently**<br>Extract a helper (from container.sharder) that formats log context from either a broker (preferring broker.path/db_file) or a plain db_file string. Use it in common.db_replicator and container.replicator so messages are uniform and robust, and update tests to cover both cases (see the log-context sketch after this table). No functional changes to replication behavior; this is logging/robustness and test updates only.<br>Change-Id: I289d3e9b6fe14159925786732ad748acd0459812<br>Related-Change: I7d2fe064175f002055054a72f348b87dc396772b<br>Signed-off-by: ashnair <ashnair@nvidia.com> | |
| Zuul | e963d13979 | Merge "s3api: fix test_service with pre-existing buckets" | |
| Clay Gerrard | 97e00e208f | **trivial: reorder AccountBroker.path methods**<br>To me, this structure makes the call order more obvious. There was some unnecessary defensiveness about the structure of the dict returned from the SELECT query.<br>Drive-by: clean up docstrings; there was some extra info about failures and ContainerBrokers that was confusing me.<br>Related-Change: Ic7c2aa878caf039b29abb900b4f491130be3d8a8<br>Change-Id: I13e91abb09b2102dc52429df22fe47c73c6346aa<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Clay Gerrard | 3c6e967a58 | **test: fix AccountBroker.path tests**<br>Move the tests to the base TestCase; currently they only run against the old "broker w/o metadata" case, but the tests and behavior should work on all versions of the account schema.<br>Drive-by: reword tests to make assertions stronger and behaviors more obvious.<br>Related-Change: Ic7c2aa878caf039b29abb900b4f491130be3d8a8<br>Change-Id: I59abd956ffa01bd41f29959ff3df89a3a20a00d4<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Zuul | ac5c783d65 | Merge "Assert metadata of SLO PUT from container sync" | |
| Zuul | 9a45531942 | Merge "Test each method in test_crossdomain_get_only" | |
| Zuul | 05fcc5964a | Merge "docs: More proxy-server.conf-sample cleanup" | |
| Zuul | 6e9a06b270 | Merge "doc: specify seconds in proxy-server.conf-sample" | |
| Shashirekha Gundur | a6bde729c5 | **Test each method in test_crossdomain_get_only**<br>Iterate through the disallowed methods and assert on each one.<br>Change-Id: Ia304709fc56d3e81bb1326b56a4b0d64ed698160<br>Signed-off-by: Tim Burke <tim.burke@gmail.com> | |
| Zuul | 6da1207489 | Merge "test: move import to top of file" | |
| Clay Gerrard | 64bb041398 | **Assert metadata of SLO PUT from container sync**<br>In addition to being in the pure, unmolested on-disk format from the source container, the manifest must also include the normally protected X-Static-Large-Object metadata.<br>Change-Id: Ic6638e8258e9dec755f8d9630f0586bd3c9b4420<br>Related-Change: I8d503419b7996721a671ed6b2795224775a7d8c6<br>Signed-off-by: Tim Burke <tim.burke@gmail.com> | |
| Tim Burke | 79feb12b28 | **docs: More proxy-server.conf-sample cleanup**<br>Change-Id: I99dbd9590ff39343422852e4154f98bc194d161d<br>Signed-off-by: Tim Burke <tim.burke@gmail.com> | |
| Zuul | a9a1ea4137 | Merge "Adds --skip-commits to s-m-s-r" | |
| Clay Gerrard | b55f13c758 | **test: move import to top of file**<br>Related-Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2<br>Change-Id: Ibe5d206d2b96e174f849715fb13562ae0d2f5de2<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Clay Gerrard | 389747a8b2 | **doc: specify seconds in proxy-server.conf-sample**<br>Most of Swift's timing configuration values should accept units in seconds; make this explicit in the sample config for values that did not already do so.<br>Related-Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2<br>Change-Id: I5b25b7e830a31f03d11f371adf12289222222eb2<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Zuul | e5d44d669a | Merge "proxy: use cooperative tokens to coalesce updating shard range requests into backend" | |
| Christian Ohanaja | ba1ab9d11c | **Adds --skip-commits to s-m-s-r**<br>This patch replaces --force-commits with a --skip-commits flag in swift-manage-shard-ranges to determine when to commit object updates.<br>Change-Id: I6de041f5c12dca2618d22d1271efe242b2f35258<br>Signed-off-by: Christian Ohanaja <cohanaja@nvidia.com> | |
| Jianjian Huo | d9883d0834 | **proxy: use cooperative tokens to coalesce updating shard range requests into backend**<br>The cost of memcache misses can be deadly. For example, when a query to the updating shard range cache misses, PUT requests have to query the backend to figure out which shard to send the object update to. When a lot of requests hit the backend at the same time, this can easily overload the root containers and cause a lot of 500/503 errors; and when proxy-servers receive the responses of all those 200 backend shard range queries, they may in turn try to write the same shard range data into the memcached servers at the same time, causing memcached to return OOM failures too. We have frequently seen misses on the updating shard range cache in production, due to memcached out-of-memory errors and cache evictions.<br>To cope with these situations, a memcached-based cooperative token mechanism is added to the proxy-server to coalesce many in-flight backend requests into a few: when the updating shard range cache misses, only the first few requests get global cooperative tokens and fetch updating shard ranges from the backend container servers; the remaining cache-miss requests wait for the cache filling to finish instead of all querying the backend container servers. This prevents a flood of backend requests from overloading both container servers and memcached servers (see the cooperative-token sketch after this table).<br>Drive-by fix: when memcache is not available, the object controller only needs to retrieve the specific shard range from the container server to send the update request to.<br>Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com><br>Co-Authored-By: Tim Burke <tim.burke@gmail.com><br>Co-Authored-By: Yan Xiao <yanxiao@nvidia.com><br>Co-Authored-By: Shreeya Deshpande <shreeyad@nvidia.com><br>Signed-off-by: Jianjian Huo <jhuo@nvidia.com><br>Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2 | |
| Zuul | dd23020c30 | Merge "common: add memcached based cooperative token mechanism." | |
| ashnair | d353f15fac | **account-broker: add resilient path property with lazy cache**<br>Add a path property for AccountBroker and use a lazy, resilient _populate_instance_cache(). Use None attrs as flags, avoid a broad try/except in path, and retry if cache population fails (see the lazy-cache sketch after this table).<br>Change-Id: Ic7c2aa878caf039b29abb900b4f491130be3d8a8<br>Signed-off-by: ashnair <ashnair@nvidia.com> | |
| Jianjian Huo | 707a65ab3c | **common: add memcached based cooperative token mechanism.**<br>The memcached-based cooperative token is an improved version of the "ghetto lock" described here: https://github.com/memcached/memcached/wiki/ProgrammingTricks It is used to avoid the thundering herd situation that many caching users face: given a cache item that is popular and difficult to recreate, in the event of cache misses users can end up with hundreds (or thousands) of processes slamming the backend database at the same time in an attempt to refill the same cache content. This thundering herd problem not only often leads to an unresponsive backend; those writes into memcached also cause premature cache eviction under memory pressure.<br>With cooperative tokens, when lots of in-flight callers try to get the cached item specified by key from memcache and miss, only the first few query requests (limited by ``num_tokens``) will be able to get cooperative tokens by creating or incrementing an internal memcache key; the callers with tokens can then fetch data from the backend servers and set it into memcache. All other cache-miss requests without a token wait for the cache filling to finish, instead of all querying the backend servers at the same time (see the cooperative-token sketch after this table).<br>Co-Authored-By: Tim Burke <tim.burke@gmail.com><br>Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com><br>Co-Authored-By: Yan Xiao <yanxiao@nvidia.com><br>Co-Authored-By: Alistair Coles <alistairncoles@gmail.com><br>Signed-off-by: Jianjian Huo <jhuo@nvidia.com><br>Change-Id: I50ff92441c2f2c49b3034644aba59930e8a99589 | |
| Christian Ohanaja | b035ed1385 | **add Christian O to AUTHORS**<br>Change-Id: Id2ab5c00182516a744fd9e8b8e89f7232a433222<br>Signed-off-by: Christian Ohanaja <cohanaja@nvidia.com> | |
| Zuul | 92dd03ed77 | Merge "diskfile: Fix UnboundLocalError during part power increase" | |
| Zuul | 4cacaa968f | Merge "test: do not create timestamp collision unnecessarily" | |
| Zuul | d0a3b1b016 | Merge "test: fix module state pollution" | |
| Zuul | f3e98aa710 | Merge "tests: simplify TestGlobalSetupObjectReconstructor setUp" | |
| Zuul | 2142861146 | Merge "cleaning up and fixing some links" | |
| Clay Gerrard | 7b05356bd0 | **test: do not create timestamp collision unnecessarily**<br>Change-Id: Ib6bf702e38495e52e3b2f5ca95ed17c519018474<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Clay Gerrard | 815393dff4 | **test: fix module state pollution**<br>The disable_fallocate function provided in common.utils doesn't really have a way to undo it; it's tested independently in test_utils. It shouldn't be used in test_diskfile, or else the test_utils fallocate tests will fail afterwards.<br>Change-Id: I6ffa97b39111ba25f85ba7cfde21440d975dc760<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Alistair Coles | c26c7b8edd | **tests: simplify TestGlobalSetupObjectReconstructor setUp**<br>Change-Id: I0168ab113fdda60ed858ed0928356699399d4044<br>Signed-off-by: Alistair Coles <alistairncoles@gmail.com> | |
| Christian Ohanaja | bd27fc6baf | **cleaning up and fixing some links**<br>Verified every changed link works by building and testing manually.<br>Change-Id: I4bb6cc238d4e567e3edc6c15a58d4a5f9a41e273<br>Signed-off-by: Christian Ohanaja <cohanaja@nvidia.com> | |
| OpenStack Release Bot | 63eeb005bd | **Update master for stable/2025.2**<br>Add a file to the reno documentation build to show release notes for stable/2025.2. Use the pbr instruction to increment the minor version number automatically so that master versions are higher than the versions on stable/2025.2.<br>Sem-Ver: feature<br>Change-Id: I3ab48efc8208b791bfdf5ac24d098b8e236d7031<br>Signed-off-by: OpenStack Release Bot <infra-root@openstack.org><br>Generated-By: openstack/project-config:roles/copy-release-tools-scripts/files/release-tools/add_release_note_page.sh | |
| Tim Burke | 397f94c73b | **diskfile: Fix UnboundLocalError during part power increase**<br>Closes-Bug: #2122543<br>Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com><br>Signed-off-by: Tim Burke <tim.burke@gmail.com><br>Change-Id: I8a2a96394734899ee48e1d9264bf3908968c51a8 | |
| Tim Burke | 82cb5a5d78 | **AUTHORS/CHANGELOG for 2.36.0**<br>Signed-off-by: Tim Burke <tim.burke@gmail.com><br>Change-Id: I9c86383ed3d35657a7e88fa9cdc6a94559e5ca37 (tag: 2.36.0) | |
| Clay Gerrard | b5e6964a22 | **s3api: fix test_service with pre-existing buckets**<br>The s3api cross-compat tests in test_service weren't sophisticated enough to account for real s3 session credentials that could see actual aws s3 buckets (or a vsaio you actually use); however, valid assertions on the authorization logic don't actually require such a strictly clean slate.<br>Drive-by: prefer a test config option without a double negative, and update the ansible that's based on the sample config.<br>Related-Change-Id: I811642fccd916bd9ef71846a8108d50a462740f0<br>Change-Id: Ifab08cfe72f12d80e2196ad9b9b7876ace5825b4<br>Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> | |
| Tim Burke | e13f4abcd7 | **tests: Skip some tests if crc32c is not available**<br>Signed-off-by: Tim Burke <tim.burke@gmail.com><br>Change-Id: I2e2a4e2c448319e6531372ae06ab81eb58edc57e | |
| Tim Burke | 21325988df | **CI: Remove a bunch of unnecessary bindep profiles**<br>Signed-off-by: Tim Burke <tim.burke@gmail.com><br>Change-Id: I26be3752b8c67b7a6a0a9c75571a44f3827cbb90 | |
| Tim Burke | 3db8e2d05f | **Clean up some py36 infra**<br>- Drop py36 versions from py3-constraints.txt<br>- Remove the py36 tox environment<br>Forgot to do this when we dropped py36 support.<br>Signed-off-by: Tim Burke <tim.burke@gmail.com><br>Change-Id: I0233e1dd036b9a420c815fec3c9632d2967b934e | |
| Tim Burke | 1ed7b71bb5 | **Update py3-constraints.txt**<br>We're starting to see projects drop py39 support; seems as good a time as any for an update.<br>Signed-off-by: Tim Burke <tim.burke@gmail.com><br>Change-Id: I481efbd2627e517edf49f3025b3399a86d1b4f3e | |
| Tim Burke | a18fb08b48 | **Switch py39 jobs to use py3-constraints.txt**<br>Signed-off-by: Tim Burke <tim.burke@gmail.com><br>Change-Id: I6fb806d299fc30f6ceaeba78cf3a810298e94f26 | |
| Zuul | e10c2bafcb | Merge "proxy-logging: create field for access_user_id" | |
| Vitaly Bordyug | 32eaab20b1 | **proxy-logging: create field for access_user_id**<br>Add a new field so the access key can be logged during s3api calls, while reserving the field to be filled with auth-relevant information by other middlewares; add the respective code to the tempauth and keystone middlewares. Since s3api creates a copy of the environ dict for the downstream request object when translating via s3req.to_swift_req, the environ dict that is seen/modified in other middleware modules is not the same instance seen in proxy-logging; using a mutable object lets the value carry over into swift_req.environ (see the environ-copy sketch after this table). Change the assert in test_proxy_logging from "the last field" to index 21 in the interests of maintainability. Also add some regression tests for object, bucket and s3 v4 APIs and update the documentation with details about the new field.<br>Signed-off-by: Vitaly Bordyug <vbordug@gmail.com><br>Change-Id: I0ce4e92458e2b05a4848cc7675604c1aa2b64d64 | |
Added the new field to be able to log the access key during the s3api calls, while reserving the field to be filled with auth relevant information in case of other middlewares. Added respective code to the tempauth and keystone middlewares. Since s3api creates a copy of the environ dict for the downstream request object when translating the s3req.to_swift_req the environ dict that is seen/modifed in other mw module is not the same instance seen in proxy-logging - using mutable objects get transfered into the swift_req.environ. Change the assert in test_proxy_logging from "the last field" to the index 21 in the interests of maintainability. Also added some regression tests for object, bucket and s3 v4 apis and updated the documentation with the details about the new field. Signed-off-by: Vitaly Bordyug <vbordug@gmail.com> Change-Id: I0ce4e92458e2b05a4848cc7675604c1aa2b64d64 |