e7bb2a3855258f3c59d947de288b0b22f79c949b
11081 Commits

**Tim Burke** · `e7bb2a3855` · **s3token: Pass service auth token to Keystone**

Recent versions of Keystone require an auth token when accessing the /v3/s3tokens endpoint, to avoid exposing information that a user who merely holds a presigned URL should not be able to see.

UpgradeImpact: the s3token middleware now requires Keystone auth credentials to be configured. If secret_cache_duration is enabled, these credentials should already be configured. Without them, Keystone users will no longer be able to make S3 API requests.

Closes-Bug: #2119646 · Change-Id: Ie80bc33d0d9de17ca6eaad3b43628724538001f6 · Signed-off-by: Tim Burke <tim.burke@gmail.com>
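
For operators, the UpgradeImpact above means the `[filter:s3token]` section of proxy-server.conf needs Keystone service credentials. A hedged sketch of what that section might look like; the option names here follow the keystoneauth1-style options used for secret caching, and should be verified against the proxy-server.conf-sample shipped with your Swift release:

```ini
# Illustrative [filter:s3token] sketch; check exact option names against
# etc/proxy-server.conf-sample before relying on them.
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:5000/v3
reseller_prefix = AUTH_
# Service credentials, previously only needed when secret_cache_duration
# was enabled, now required for s3token to query Keystone at all:
auth_url = https://keystonehost:5000
auth_type = password
username = swift
password = SECRET
project_name = service
user_domain_id = default
project_domain_id = default
```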

**Zuul** · `d87ebd7d05` · Merge "reno: Update master for unmaintained/2024.1"

**Zuul** · `8f36f18f46` · Merge "Move Pete Zaitcev to Core Emeritus"

`a785025b26` · **reno: Update master for unmaintained/2024.1**

Update the 2024.1 release notes configuration to build from unmaintained/2024.1.

Change-Id: I69da0bb132bf0b8e5f065a79a22c037835d097de · Signed-off-by: OpenStack Release Bot <infra-root@openstack.org> · Generated-By: openstack/project-config:roles/copy-release-tools-scripts/files/release-tools/change_reno_branch_to_unmaintained.sh

**Shreeya Deshpande** · `8cbe10552a` · **Update resource type in labels.html**

Change-Id: I1a6f85feea1e7a77966d5a5f35df12d8325c8464 · Signed-off-by: Shreeya Deshpande <shreeyad@nvidia.com>

**Tim Burke** · `7744e4a43a` · **Move Pete Zaitcev to Core Emeritus**

He offered his core resignation earlier this week.

Change-Id: I552ae6ae2aee29be94683e9e01b3ab7b10a4f42e · Signed-off-by: Tim Burke <tim.burke@gmail.com>

**Zuul** · `9232350af0` · Merge "Fix swift_dir setting in WSGI servers"

**Zuul** · `48a5d5e42f` · Merge "test-db-replicator (trivial): just one tmpdir"

**Zuul** · `4b7543b2e1` · Merge "trivial test_[db_]replicator cleanup"

**Clay Gerrard** · `fac55ced3c` · **test-db-replicator (trivial): just one tmpdir**

Change-Id: I1e53d171faff02e2dbcbcc779ad3a47506b26853 · Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>

**Zuul** · `c161aa168f` · Merge "relinker: allow clobber-hardlink-collision"

**Zuul** · `4fa1237997` · Merge "common.db_replicator: log container, db paths consistently"

**Alistair Coles** · `c0fefe80b3` · **trivial test_[db_]replicator cleanup**

- add a self.temp_dir in setUp and remove it in tearDown
- be consistent in the order of (expected, actual) args
- assert the complete exception error line

Related-Change: I289d3e9b6fe14159925786732ad748acd0459812 · Change-Id: I185c8cd55db6df593bb3304c54c5160c1f662b86 · Signed-off-by: Alistair Coles <alistairncoles@gmail.com>

**Clay Gerrard** · `be62933d00` · **relinker: allow clobber-hardlink-collision**

The relinker has been robust to hardlink collisions on tombstones for some time; this change lets ops optionally (off by default) enable similar handling of other files when relinking the old => new partdir. If your cluster is hitting a bunch of these collisions and, after spot checking, you determine the data really is duplicate copies of the same data, you'd much rather have the relinker handle them programmatically and non-destructively than force ops to rm a bunch of files manually just to get out of a PPI. Once the PPI is over and your reconstructors are running again, after some validation you can probably clean out your quarantine dirs. Drive-by: log unknown relink errors at error level to match the expected non-zero return code.

Closes-Bug: #2127779 · Change-Id: Iaae0d9fb7a1949d1aad9aa77b0daeb249fb471b5 · Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>

**ashnair** · `41bf72a5cc` · **common.db_replicator: log container, db paths consistently**

Extract a helper (from container.sharder) that formats log context from either a broker (preferring broker.path/db_file) or a plain db_file string. Use it in common.db_replicator and container.replicator so messages are uniform and robust. Update tests to cover both cases. No functional changes to replication behavior; this is logging/robustness and test updates only.

Change-Id: I289d3e9b6fe14159925786732ad748acd0459812 · Related-Change: I7d2fe064175f002055054a72f348b87dc396772b · Signed-off-by: ashnair <ashnair@nvidia.com>

**Zuul** · `e963d13979` · Merge "s3api: fix test_service with pre-existing buckets"

**Samuel Merritt** · `5568dd09b5` · **Fix swift_dir setting in WSGI servers**

Theoretically, the various WSGI servers should be able to operate on a system without /etc/swift/swift.conf; in practice this doesn't work. The WSGI servers call utils.validate_configuration() before looking for a swift_dir option, and that validation reads swift.conf from its default location, so even if you set swift_dir=/some/where/else, the WSGI servers require /etc/swift/swift.conf to exist. This commit makes the WSGI servers call utils.set_swift_dir before calling utils.validate_configuration. Motivation: I'm working on testing some client software against actual Swift, but my CI environment doesn't have /etc/swift at all, so the test suite can't start the Swift daemons.

Change-Id: Ie0efee33e684b1c5bad6ee2191c187bb680de5f1 · Signed-off-by: Samuel Merritt <smerritt@nvidia.com>
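
The fix boils down to an ordering constraint, which this self-contained sketch mimics; the function bodies are stand-ins mirroring the idea of utils.set_swift_dir and utils.validate_configuration, not Swift's actual implementations:

```python
import os

SWIFT_DIR = '/etc/swift'  # module-level default, as in the sketch above

def set_swift_dir(swift_dir):
    """Point later validation at a non-default configuration directory."""
    global SWIFT_DIR
    if swift_dir:
        SWIFT_DIR = swift_dir

def validate_configuration():
    """Stand-in validation step: it resolves swift.conf relative to
    SWIFT_DIR, which is why it must run *after* set_swift_dir()."""
    return os.path.join(SWIFT_DIR, 'swift.conf')

# Before the fix the servers validated first, so swift.conf was always
# sought in /etc/swift; with the corrected order the override is honored:
set_swift_dir('/opt/ci/swift-etc')
conf_path = validate_configuration()
```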

**Clay Gerrard** · `97e00e208f` · **trivial: reorder AccountBroker.path methods**

To me, this structure makes the call order more obvious. There was some unnecessary defensiveness about the structure of the dict returned from the SELECT query. Drive-by: clean up docstrings; there was some extra info about failures and ContainerBrokers that was confusing me.

Related-Change: Ic7c2aa878caf039b29abb900b4f491130be3d8a8 · Change-Id: I13e91abb09b2102dc52429df22fe47c73c6346aa · Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>

**Clay Gerrard** · `3c6e967a58` · **test: fix AccountBroker.path tests**

Move the tests to the base TestCase; currently they only run against the old "broker w/o metadata" case, but the tests and behavior should work on all versions of the account schema. Drive-by: reword the tests to make the assertions stronger and the behaviors more obvious.

Related-Change: Ic7c2aa878caf039b29abb900b4f491130be3d8a8 · Change-Id: I59abd956ffa01bd41f29959ff3df89a3a20a00d4 · Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>

**Zuul** · `ac5c783d65` · Merge "Assert metadata of SLO PUT from container sync"

**Zuul** · `9a45531942` · Merge "Test each method in test_crossdomain_get_only"

**Zuul** · `05fcc5964a` · Merge "docs: More proxy-server.conf-sample cleanup"

**Zuul** · `6e9a06b270` · Merge "doc: specify seconds in proxy-server.conf-sample"

**Shashirekha Gundur** · `a6bde729c5` · **Test each method in test_crossdomain_get_only**

Iterate through the disallowed methods and assert each one.

Change-Id: Ia304709fc56d3e81bb1326b56a4b0d64ed698160 · Signed-off-by: Tim Burke <tim.burke@gmail.com>

**Zuul** · `6da1207489` · Merge "test: move import to top of file"

**Clay Gerrard** · `64bb041398` · **Assert metadata of SLO PUT from container sync**

In addition to being in the pure, unmolested on-disk format from the source container, the manifest must also include the normally protected X-Static-Large-Object metadata.

Change-Id: Ic6638e8258e9dec755f8d9630f0586bd3c9b4420 · Related-Change: I8d503419b7996721a671ed6b2795224775a7d8c6 · Signed-off-by: Tim Burke <tim.burke@gmail.com>

**Tim Burke** · `79feb12b28` · **docs: More proxy-server.conf-sample cleanup**

Change-Id: I99dbd9590ff39343422852e4154f98bc194d161d · Signed-off-by: Tim Burke <tim.burke@gmail.com>

**Zuul** · `a9a1ea4137` · Merge "Adds --skip-commits to s-m-s-r"

**Clay Gerrard** · `b55f13c758` · **test: move import to top of file**

Related-Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2 · Change-Id: Ibe5d206d2b96e174f849715fb13562ae0d2f5de2 · Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>

**Clay Gerrard** · `389747a8b2` · **doc: specify seconds in proxy-server.conf-sample**

Most of Swift's timing configuration values should accept units of seconds; make this explicit in the sample config for the values that did not already say so.

Related-Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2 · Change-Id: I5b25b7e830a31f03d11f371adf12289222222eb2 · Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>

**Zuul** · `e5d44d669a` · Merge "proxy: use cooperative tokens to coalesce updating shard range requests into backend"

**Christian Ohanaja** · `ba1ab9d11c` · **Adds --skip-commits to s-m-s-r**

This patch replaces the --force-commits flag with a --skip-commits flag in swift-manage-shard-ranges to determine when to commit object updates.

Change-Id: I6de041f5c12dca2618d22d1271efe242b2f35258 · Signed-off-by: Christian Ohanaja <cohanaja@nvidia.com>

**Jianjian Huo** · `d9883d0834` · **proxy: use cooperative tokens to coalesce updating shard range requests into backend**

The cost of memcache misses can be deadly. For example, on an updating-shard-range cache miss, PUT requests have to query the backend to figure out which shard to upload objects to. When many such requests hit the backend at the same time, they can easily overload the root containers and cause a lot of 500/503 errors; and when the proxy-servers receive the responses to all of those 200-status backend shard-range queries, they may in turn try to write the same shard-range data into the memcached servers at the same time, causing memcached to return OOM failures as well. We have seen frequent misses on the updating shard range cache in production, due to memcached out-of-memory conditions and cache evictions. To cope with such situations, a memcached-based cooperative token mechanism can be added to the proxy-server to coalesce many in-flight backend requests into a few: on an updating-shard-range cache miss, only the first few requests obtain global cooperative tokens and fetch updating shard ranges from the backend container servers; the following cache-miss requests wait for the cache fill to finish instead of all querying the backend container servers. This prevents a flood of backend requests from overloading both the container servers and the memcached servers.

Drive-by fix: when memcache is not available, the object controller only needs to retrieve from the container server the specific shard range to send the update request to.

Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> · Co-Authored-By: Tim Burke <tim.burke@gmail.com> · Co-Authored-By: Yan Xiao <yanxiao@nvidia.com> · Co-Authored-By: Shreeya Deshpande <shreeyad@nvidia.com> · Signed-off-by: Jianjian Huo <jhuo@nvidia.com> · Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2

**Zuul** · `dd23020c30` · Merge "common: add memcached based cooperative token mechanism."

**ashnair** · `d353f15fac` · **account-broker: add resilient path property with lazy cache**

Add a path property to AccountBroker and use a lazy, resilient _populate_instance_cache(). Use None attrs as "not populated" flags, avoid a broad try/except in path, and retry if cache population fails.

Change-Id: Ic7c2aa878caf039b29abb900b4f491130be3d8a8 · Signed-off-by: ashnair <ashnair@nvidia.com>
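
The "None attrs as flags, retry on failure" pattern this message describes can be sketched as follows; this is a hypothetical stand-in, not the real AccountBroker:

```python
class LazyPathBroker:
    """Populate instance attributes lazily; a failed population leaves
    the None sentinels in place, so the next property access simply
    retries instead of caching a bad value."""

    def __init__(self, load_info):
        self._load_info = load_info  # callable that may raise
        self._account = None         # None means "not populated yet"

    def _populate_instance_cache(self):
        # If load_info raises, _account stays None and we retry later;
        # no broad try/except needed in the property itself.
        info = self._load_info()
        self._account = info['account']

    @property
    def path(self):
        if self._account is None:    # flag check, not exception handling
            self._populate_instance_cache()
        return self._account
```

A transient failure (say, a busy DB) propagates to the caller once, and the next access transparently retries and then caches the result.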

**Jianjian Huo** · `707a65ab3c` · **common: add memcached based cooperative token mechanism.**

The memcached-based cooperative token is an improved version of the "ghetto lock" described in the memcached wiki: https://github.com/memcached/memcached/wiki/ProgrammingTricks. It is used to avoid the thundering-herd situation many caching users face: given a cache item that is popular and difficult to recreate, in the event of a cache miss, users can end up with hundreds (or thousands) of processes slamming the backend database at the same time in an attempt to refill the same cache content. This thundering-herd problem not only often leaves the backend unresponsive; the resulting writes into memcached also cause premature cache eviction under memory pressure. With cooperative tokens, when many in-flight callers try to get the cached item for a key and miss, only the first few (limited by ``num_tokens``) acquire a cooperative token by creating or incrementing an internal memcache key; those token holders then fetch the data from the backend servers and set it into memcache, while all other cache-miss callers wait for the cache fill to finish instead of all querying the backend servers at the same time.

Co-Authored-By: Tim Burke <tim.burke@gmail.com> · Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> · Co-Authored-By: Yan Xiao <yanxiao@nvidia.com> · Co-Authored-By: Alistair Coles <alistairncoles@gmail.com> · Signed-off-by: Jianjian Huo <jhuo@nvidia.com> · Change-Id: I50ff92441c2f2c49b3034644aba59930e8a99589
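
A minimal sketch of the mechanism described above, with hypothetical helper names and a dict-backed stand-in for memcache; the real implementation in Swift additionally handles TTLs, serialization, and memcache errors:

```python
import time

class FakeMemcache:
    """In-memory stand-in for a memcache client (get/set/incr only)."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value):
        self.store[key] = value
    def incr(self, key):
        # memcached incr is atomic; that atomicity is what makes the
        # token handout race-free across processes
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

def cooperative_get(cache, key, fetch_backend, num_tokens=3,
                    timeout=1.0, interval=0.01):
    """On a cache miss, only the first ``num_tokens`` callers (those
    that win an incr() on an internal token key) hit the backend and
    refill the cache; later callers poll the cache until it fills."""
    value = cache.get(key)
    if value is not None:
        return value
    if cache.incr('cooperative-token/' + key) <= num_tokens:
        value = fetch_backend()        # e.g. query container servers
        cache.set(key, value)          # refill the cache for everyone
        return value
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = cache.get(key)         # wait for a token holder's set()
        if value is not None:
            return value
        time.sleep(interval)
    return fetch_backend()             # waited too long; fall back
```

The fallback on timeout keeps a crashed token holder from stalling everyone forever, at the cost of a few extra backend hits in that failure case.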

**Christian Ohanaja** · `b035ed1385` · **add Christian O to AUTHORS**

Change-Id: Id2ab5c00182516a744fd9e8b8e89f7232a433222 · Signed-off-by: Christian Ohanaja <cohanaja@nvidia.com>

**Zuul** · `92dd03ed77` · Merge "diskfile: Fix UnboundLocalError during part power increase"

**Zuul** · `4cacaa968f` · Merge "test: do not create timestamp collision unnecessarily"

**Zuul** · `d0a3b1b016` · Merge "test: fix module state pollution"

**Zuul** · `f3e98aa710` · Merge "tests: simplify TestGlobalSetupObjectReconstructor setUp"

**Zuul** · `2142861146` · Merge "cleaning up and fixing some links"

**Clay Gerrard** · `7b05356bd0` · **test: do not create timestamp collision unnecessarily**

Change-Id: Ib6bf702e38495e52e3b2f5ca95ed17c519018474 · Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>

**Clay Gerrard** · `815393dff4` · **test: fix module state pollution**

The disable_fallocate function provided in common.utils has no real way to be undone; it's tested independently in test_utils. It shouldn't be used in test_diskfile, or else the test_utils fallocate tests will fail afterwards.

Change-Id: I6ffa97b39111ba25f85ba7cfde21440d975dc760 · Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>
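
The standard remedy for this kind of one-way module flag is to patch it rather than call the mutator, so cleanup is automatic. An illustrative sketch with stand-ins, not the actual swift.common.utils internals:

```python
from unittest import mock

class utils_stub:
    """Stand-in for a module with a one-way switch like
    disable_fallocate(): the flag can be flipped but never restored."""
    FALLOCATE_ENABLED = True

    @classmethod
    def disable_fallocate(cls):
        cls.FALLOCATE_ENABLED = False  # nothing ever re-enables this

# Calling disable_fallocate() in one test module would leak into every
# test that runs afterwards; patching the flag restores it on exit.
with mock.patch.object(utils_stub, 'FALLOCATE_ENABLED', False):
    assert utils_stub.FALLOCATE_ENABLED is False
assert utils_stub.FALLOCATE_ENABLED is True  # state restored
```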

**Alistair Coles** · `c26c7b8edd` · **tests: simplify TestGlobalSetupObjectReconstructor setUp**

Change-Id: I0168ab113fdda60ed858ed0928356699399d4044 · Signed-off-by: Alistair Coles <alistairncoles@gmail.com>

**Christian Ohanaja** · `bd27fc6baf` · **cleaning up and fixing some links**

Verified that every changed link works by building and testing manually.

Change-Id: I4bb6cc238d4e567e3edc6c15a58d4a5f9a41e273 · Signed-off-by: Christian Ohanaja <cohanaja@nvidia.com>

`63eeb005bd` · **Update master for stable/2025.2**

Add a file to the reno documentation build to show release notes for stable/2025.2. Use the pbr instruction to increment the minor version number automatically so that master versions are higher than the versions on stable/2025.2. Sem-Ver: feature.

Change-Id: I3ab48efc8208b791bfdf5ac24d098b8e236d7031 · Signed-off-by: OpenStack Release Bot <infra-root@openstack.org> · Generated-By: openstack/project-config:roles/copy-release-tools-scripts/files/release-tools/add_release_note_page.sh

**Tim Burke** · `397f94c73b` · **diskfile: Fix UnboundLocalError during part power increase**

Closes-Bug: #2122543 · Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> · Signed-off-by: Tim Burke <tim.burke@gmail.com> · Change-Id: I8a2a96394734899ee48e1d9264bf3908968c51a8

**Tim Burke** · `82cb5a5d78` · **AUTHORS/CHANGELOG for 2.36.0**

Signed-off-by: Tim Burke <tim.burke@gmail.com> · Change-Id: I9c86383ed3d35657a7e88fa9cdc6a94559e5ca37 (tagged: 2.36.0)
|
Clay Gerrard
|
b5e6964a22 |
s3api: fix test_service with pre-existing buckets
The s3api cross-compat tests in test_service weren't sophisticated enough to account for real s3 session credentials that could see actual aws s3 buckets (or a vsaio you actually use) - however valid assertions on the authorization logic doesn't actually require such a strictly clean slate. Drive-by: prefer test config option without double negative, and update ansible that's based on the sample config. Related-Change-Id: I811642fccd916bd9ef71846a8108d50a462740f0 Change-Id: Ifab08cfe72f12d80e2196ad9b9b7876ace5825b4 Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com> |