00373dad617db30bda3e8b722ab8518f59e0cb10
4066 Commits
- **cheng** · `5cbd5cf303` · **Return HTTPServerError instead of HTTPNotFound**
  Swift auto-creates accounts; a failure to create the account should be treated as a server error rather than a 404.
  Change-Id: I726271bc06e3c1b07a4af504c3fd7ddb789bd512
  Closes-Bug: 1718810
- **Thiago da Silva** · `a9964a7fc3` · **fix barbican integration**
  Added auth_url to the context we pass to the castellan library. A change [1] intended to deprecate the use of auth_endpoint passed as oslo config actually removed its use entirely [2], so without this change the integration is broken.
  [1] https://review.openstack.org/#/c/483457
  [2] https://review.openstack.org/#/c/483457/6/castellan/key_manager/barbican_key_manager.py@143
  Change-Id: I933367fa46aa0a3dc9aedf078b1be715bfa8c054
- **Zuul** · `5f436c2fa5` · Merge "Use _update_x_timestamp method in object controller DELETE method"
- **Zuul** · `e8bd8411c1` · Merge "Remove un-needed hack in probetest"
- **Zuul** · `eaf056154e` · Merge "Limit object-expirer queue updates on object DELETE, PUT, POST"
- **Alistair Coles** · `35ad4e8745` · **Add tests for X-Backend-Clean-Expiring-Object-Queue true**
  Check that when X-Backend-Clean-Expiring-Object-Queue is true the object server does indeed call async_update.
  Change-Id: I0a87979147591f15349b868a12ac6dd15ac4e37f
  Related-Change: I4d64f4d1d107c437fd3c23e19160157fdafbcd42
- **Zuul** · `9a323e1989` · Merge "Fix socket leak on 416 EC GET responses."
- **Zuul** · `8ce5dd54e6` · Merge "proxy: make the right number of container updates"
- **Clay Gerrard** · `7afc6a06ee` · **Remove un-needed hack in probetest**
  Running this probe test with ssync before the related change would demonstrate the related bug. The hack isn't harmful, but it isn't needed anymore.
  Related-Change-Id: I7f90b732c3268cb852b64f17555c631d668044a8
  Related-Bug: 1652323
  Change-Id: I09e3984a0500a0f4eceec392e7970b84070a5b39
- **Samuel Merritt** · `48da3c1ed7` · **Limit object-expirer queue updates on object DELETE, PUT, POST**
  Currently, on deletion of an expiring object, each object server writes an async_pending to update the expirer queue and remove the row for that object. Each async_pending is processed by the object updater and results in all container replicas being updated. The same is true for PUT and POST requests for existing expiring objects. With Rc container replicas and Ro object replicas (or EC pieces), the number of expirer-queue requests made is Rc * Ro [1]. For a 3-replica cluster that number is 9, which is not terrible. For a cluster with 3 container replicas and a 15+4 EC scheme, it is 57, which is terrible.
  This commit makes at most two object servers write out the async_pending files needed to update the queue, dropping the request count to 2 * Rc [2]. The object server now looks for a header "X-Backend-Clean-Expiring-Object-Queue: <true|false>" and writes or skips expirer-queue async_pendings accordingly. The proxy sends that header to 2 object servers.
  The queue update is not necessary for the proper functioning of the object expirer; if the queue update fails, the expirer will try to delete the object, receive 404s or 412s, and remove the queue entry. Removal on object PUT/POST/DELETE is helpful but not required.
  [1] assuming no retries needed by the object updater
  [2] or Rc, if a cluster has only one object replica
  Change-Id: I4d64f4d1d107c437fd3c23e19160157fdafbcd42
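The request-count arithmetic in the commit message above can be checked with a small sketch (the function name is illustrative, not Swift code):

```python
def expirer_queue_requests(container_replicas, queue_writers):
    """Container-server requests caused by expirer-queue updates for one
    object DELETE/PUT/POST: each writer's async_pending fans out to every
    container replica (assuming no object-updater retries)."""
    return queue_writers * container_replicas

# 3 container replicas, 15+4 EC scheme (19 object nodes)
print(expirer_queue_requests(3, 19))  # before: every object server writes -> 57
print(expirer_queue_requests(3, 2))   # after: at most two writers -> 6
```

For the plain 3-replica case the same formula gives 3 * 3 = 9 before and 2 * 3 = 6 after, matching the numbers quoted in the commit.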
- **Zuul** · `5917ea0ea1` · Merge "Change exit code when displaying empty rings"
- **Zuul** · `b7cc19d9c5` · Merge "Show devices marked as deleted on empty rings"
- **Christian Schwede** · `9754a2ebe3` · **Change exit code when displaying empty rings**
  Displaying an empty ring should not be an error; change the exit code back to the former value of 0.
  Closes-Bug: 1742417
  Change-Id: I779c30cff1b4d24483f993221a8c6d944b7ae98d
- **Samuel Merritt** · `a41c458c90` · **proxy: make the right number of container updates**
  When the proxy puts X-Container headers into object PUT requests, it should emit just enough to make the container update durable in the worst case. It shouldn't do more, since that results in extra work for the container servers; and it shouldn't do less, since that results in objects not showing up in listings.
  The current code gets the number right as long as you have 3 container replicas and an odd number of object replicas, but it comes up with bogus numbers in other cases. The number it computes is (object-quorum + 1). This patch changes it to (container-quorum + max_put_failures).
  Example: given an EC 12+5 policy and 3 container replicas, you can lose up to 4 connections and still succeed. Since 2 container updates must happen for durability, 6 connections need X-Container headers; that way you can lose 4 and still have 2 left. The current code would put X-Container headers on 14 of the connections, more than doubling the workload on the container servers; this patch changes the number to 6.
  Example 2: given a (crazy) EC 3+6 policy and 3 container replicas, you can lose up to 5 connections, so you need X-Container headers on 7. The current code only sends 5, giving a worst case where a PUT succeeds but never reaches the containers. This patch changes the number to 7.
  Other examples:

  | policy / containers | current | this change |
  |---|---|---|
  | EC 10+4, 3x container | 12 | 5 |
  | EC 10+4, 5x container | 12 | 6 |
  | EC 15+4, 3x container | 17 | 5 |
  | EC 15+4, 5x container | 17 | 6 |
  | EC 4+8, 3x container | 6 | 9 |
  | 7x object, 3x container | 5 | 5 |
  | 6x object, 3x container | 4 | 5 |

  Change-Id: I34efd48655b890340912810ab111bb63445e5c8b
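The new (container-quorum + max_put_failures) rule above can be reproduced with a short sketch. It assumes a majority quorum for the replicated container ring and an EC PUT quorum of ndata + 1, which is what the commit's worked numbers imply; the function name is illustrative:

```python
def x_container_headers(total_nodes, put_quorum, container_replicas):
    # majority quorum for the replicated container ring
    container_quorum = (container_replicas + 1) // 2
    # connections the proxy can afford to lose and still reach PUT quorum
    max_put_failures = total_nodes - put_quorum
    return container_quorum + max_put_failures

# EC 12+5 with 3 container replicas (PUT quorum assumed ndata + 1 = 13)
print(x_container_headers(17, 13, 3))   # 6, as in the commit's first example
# EC 10+4 with 3 container replicas
print(x_container_headers(14, 11, 3))   # 5, as in the table's first row
```

Under these assumptions every row of the "this change" column in the table above falls out of the same two-line computation.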
- **Tim Burke** · `b451ceed4b` · **Add pipeline modification test for previously-recommended pipelines**
  This includes every pipeline from etc/proxy-server.conf-sample since we switched from swauth to tempauth in 1.4.1. As much as anything, this is a canary for auto-insertion changes, so we can (somewhat easily) see where new middlewares will be placed when upgrading from old Swifts.
  Change-Id: I117937ab6ce28f3bc219e809f07b563c45fc486f
  Related-Change: I88678fddc7a25b0f065b33eb26047321d0db4d38
- **Zuul** · `b0242f4fdc` · Merge "Fix intermittent check_delete_headers failure"
- **Alistair Coles** · `e7ffda5d0b` · **Use _update_x_timestamp method in object controller DELETE method**
  The DELETE method repeats inline the same behaviour as _update_x_timestamp, so just call the method. Also add unit tests for the behaviour of _update_x_timestamp.
  Change-Id: I8b6cfdbfb54b6d43ac507f23d84309ab543374aa
- **Zuul** · `3acf292699` · Merge "Allow InternalClient to container/object listing with prefix"
- **Zuul** · `e7a99f9019` · Merge "Support existing builders with None _last_part_moves"
- **Matthew Oliver** · `bf13d64cd0` · **Show devices marked as deleted on empty rings**
  This is a follow-up to patch 530258, which shows extra information on empty rings. This patch goes one step further. On a completely empty ring:

  ```
  $ swift-ring-builder my.builder create 8 3 1
  $ swift-ring-builder my.builder
  my.builder, build version 0, id 33b4e117056340feae7d40430180c6bb
  256 partitions, 3.000000 replicas, 0 regions, 0 zones, 0 devices, 0.00 balance, 0.00 dispersion
  The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
  The overload factor is 0.00% (0.000000)
  Ring file my.ring.gz not found, probably it hasn't been written yet
  Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
  There are no devices in this ring, or all devices have been deleted
  ```

  It still prints the device-list header and then says there are no devices. Why? Let's see what happens now on an empty ring with devices still marked as deleted:

  ```
  $ swift-ring-builder my.builder add r1z1-127.0.0.1:6010/sdb1 1
  Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0
  $ swift-ring-builder my.builder add r1z1-127.0.0.1:6010/sdb2 1
  Device d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb2_"" with 1.0 weight got id 1
  $ swift-ring-builder my.builder remove r1z1-127.0.0.1
  Matched more than one device:
  d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_""
  d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb2_""
  Are you sure you want to remove these 2 devices? (y/N) y
  d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" marked for removal and will be removed next rebalance.
  d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb2_"" marked for removal and will be removed next rebalance.
  $ swift-ring-builder my.builder
  my.builder, build version 4, id 33b4e117056340feae7d40430180c6bb
  256 partitions, 3.000000 replicas, 1 regions, 1 zones, 2 devices, 0.00 balance, 0.00 dispersion
  The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
  The overload factor is 0.00% (0.000000)
  Ring file my.ring.gz not found, probably it hasn't been written yet
  Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
           0      1    1 127.0.0.1:6010 127.0.0.1:6010 sdb1 0.00 0 0.00 DEL
           1      1    1 127.0.0.1:6010 127.0.0.1:6010 sdb2 0.00 0 0.00 DEL
  There are no devices in this ring, or all devices have been deleted
  ```

  Now even when all devices are removed we can still see them: they are still there, only marked as deleted.
  Change-Id: Ib39f734deb67ad50bcdad5333cba716161a47e95
- **Tim Burke** · `e343452394` · **Support existing builders with None _last_part_moves**
  These were likely written before the first related change, or created from an existing ring file. Also tolerate missing dispersion when rebalancing; that may not exist in the builder file.
  Change-Id: I26e3b4429c747c23206e4671f7c86543bb182a15
  Related-Change: Ib165cf974c865d47c2d9e8f7b3641971d2e9f404
  Related-Change: Ie239b958fc7e0547ffda2bebf61546bd4ef3d829
  Related-Change: I551fcaf274876861feb12848749590f220842d68
- **Alistair Coles** · `94565d9137` · **Disallow x-delete-at equal to x-timestamp**
  Previously an x-delete-at value equal to the x-timestamp value was allowed. This could only occur when x-timestamp happened to take an integer value, and it resulted in an object that was immediately unreadable. Similarly, an x-delete-after value of zero may previously have been accepted if x-timestamp happened to be an integer value. With this change an x-delete-at value equal to x-timestamp, or an x-delete-after value of zero, always results in a 400 Bad Request. Also cleans up the check_delete_headers docstring.
  Related-Change: Ia8d00fcef8893e3b3dd5720da2c8a5ae1e6e4cb8
  Related-Change: Ib2483444d3999e13ba83ca2edd3a8ef8e5c48548
  Change-Id: I27fdd800d8e149302ff4d6531101e9726a14d471
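A minimal sketch of the stricter check described above (function and message wording are illustrative, not Swift's actual check_delete_headers implementation): an expiry at or before the request timestamp, or a zero x-delete-after, is rejected.

```python
def validate_delete_headers(x_timestamp, delete_at=None, delete_after=None):
    """Reject expiry times that would make an object unreadable on arrival."""
    if delete_after is not None:
        if delete_after <= 0:
            # a zero (or negative) X-Delete-After is now always a 400
            raise ValueError('400 Bad Request: X-Delete-After in past')
        delete_at = int(x_timestamp) + delete_after
    if delete_at is not None and delete_at <= x_timestamp:
        # equal-to-x-timestamp is no longer allowed
        raise ValueError('400 Bad Request: X-Delete-At in past')
    return delete_at

print(validate_delete_headers(1515300000, delete_after=3600))  # 1515303600
```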
- **Alistair Coles** · `79ac3a3c31` · **Fix intermittent check_delete_headers failure**
  Use a utils.Timestamp object to set a more realistic x-timestamp header, avoiding intermittent failures when str(time.time()) results in a rounded-up value.
  Closes-Bug: 1741912
  Change-Id: I0c54d07e30ecb391f9429e7bcfb782f965ece1ea
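The fix relies on Swift's fixed-precision timestamp normal form rather than str() on a raw float. A sketch of that normal form (zero-padded to 16 characters, five decimal places, as produced by swift.common.utils):

```python
def normalize_timestamp(ts):
    # fixed-width, five-decimal normal form; no dependence on how
    # str() chooses to round or shorten a float
    return '%016.5f' % float(ts)

print(normalize_timestamp(1515300001.123456))  # '1515300001.12346'
print(normalize_timestamp(1.5))                # '0000000001.50000'
```

Because every server formats the same value the same way, two normalized timestamps compare consistently as strings.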
- **Alistair Coles** · `6151554a89` · **Correct 400 response message when x-delete-after is zero**
  Previously an x-delete-after header with value '0' would almost certainly result in a 400 response, but the response body would report a problem with x-delete-at. Now the response correctly blames the x-delete-after header.
  Related-Change: I9a1b6826c4c553f0442cfe2bb78cdf49508fa4a5
  Change-Id: Ia8d00fcef8893e3b3dd5720da2c8a5ae1e6e4cb8
- **Samuel Merritt** · `31c294de79` · **Fix time skew when using X-Delete-After**
  When a client sent "X-Delete-After: <n>", the proxy and all object servers would each compute X-Delete-At as "int(time.time() + n)". Since they don't all compute it at exactly the same time, the objects stored on disk can end up with differing values for X-Delete-At, in which case the object-expirer queue has multiple entries for the same object (one for each distinct X-Delete-At value).
  This commit makes two changes, either one of which is sufficient to fix the bug. First, after computing X-Delete-At from X-Delete-After, X-Delete-After is removed from the request's headers. Thus the proxy computes X-Delete-At and the object servers don't, so there's only a single value. Second, computation of X-Delete-At now uses the request's X-Timestamp instead of time.time(). In the proxy these values are essentially the same, since the proxy is responsible for setting X-Timestamp. In the object server, this ensures that all computed X-Delete-At values are identical, even if the object servers' clocks are not, or if one object server takes an extra second to respond to a PUT request.
  Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
  Change-Id: I9a1b6826c4c553f0442cfe2bb78cdf49508fa4a5
  Closes-Bug: 1741371
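The two fixes combine into a small deterministic transformation. This is an illustrative sketch (the helper name and dict-based header handling are assumptions, not the proxy's actual code): derive the expiry from the request's X-Timestamp, then drop X-Delete-After so backend servers never recompute it.

```python
def apply_delete_after(headers):
    """Convert X-Delete-After into a single, clock-independent X-Delete-At."""
    if 'X-Delete-After' in headers:
        delete_after = int(headers.pop('X-Delete-After'))  # removed from request
        # anchor on the request's X-Timestamp, not time.time()
        delete_at = int(float(headers['X-Timestamp'])) + delete_after
        headers['X-Delete-At'] = str(delete_at)
    return headers

h = apply_delete_after({'X-Timestamp': '1515300000.12345',
                        'X-Delete-After': '3600'})
print(h['X-Delete-At'])  # '1515303600'
```

Every server that sees this request now agrees on one X-Delete-At, so the expirer queue gets exactly one entry.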
- **Zuul** · `3aa17e6dc8` · Merge "add swift-ring-builder option to recalculate dispersion"
- **Zuul** · `507a4fab10` · Merge "Represent dispersion worse than one replicanth"
- **Zuul** · `e5d67c1fed` · Merge "Display more info on empty rings"
- **Zuul** · `38befacd9b` · Merge "Handle EmptyRingError in swift-ring-builder's default command"
- **Clay Gerrard** · `49de7db532` · **add swift-ring-builder option to recalculate dispersion**
  Since dispersion info is cached, it can easily go stale when we change how dispersion info is calculated or stored (e.g. the related change extends the dispersion calculation to consider dispersion of all part-replicas).
  Related-Change-Id: Ifefff0260deac0c3e8b369a1e158686c89936686
  Change-Id: I714deb9e349cd114a21ec591216a9496aaf9e0d1
- **Clay Gerrard** · `9189f51d76` · **Display more info on empty rings**
  Related-Bug: #1737068
  Related-Change-Id: Ibadaf64748728a47a8f3f861ec1af601dbfeb9e0
  Change-Id: I683677f33764fa56dadfb7f6208f7f6ee25c8557
- **vxlinux** · `56126b2839` · **Handle EmptyRingError in swift-ring-builder's default command**
  When the default display command for swift-ring-builder encounters an EmptyRingError while trying to calculate balance, it should not raise the exception and display a traceback in a command-line environment. Instead, handle the exceptional condition and give the user useful feedback.
  Closes-Bug: #1737068
  Change-Id: Ibadaf64748728a47a8f3f861ec1af601dbfeb9e0
- **Samuel Merritt** · `f709eed41b` · **Fix socket leak on 416 EC GET responses**
  Sometimes, when handling an EC GET request with a Range header, the object servers reply 206 to the proxy, but the proxy (correctly) replies 416 to the client [1]. In that case the connections to the object servers were not being closed, due to improper error handling in ECAppIter. Since ECAppIter is intended to be a WSGI iterable, it expects to have its close() method called when the caller is done with it. In this particular case, the caller (ECAppIter.kickoff()) was not calling close() when an exception was raised. Now it is.
  [1] Consider a 4+2 EC policy with segment size 1024, a 20-byte object, and a request with "Range: bytes=21-50". The proxy needs whole fragments to decode, so it asks the object server for "Range: bytes=0-255" [2]; the object server says 206, and then the proxy realizes that the client's request is unsatisfiable and tells the client 416.
  [2] Segment size 1024 and 4 data fragments means the fragments have size 1024 / 4 = 256, hence "bytes=0-255" asks for the first whole fragment.
  Change-Id: Ide2edf8c449c97d45f48c2dbbbff7aebefa4b158
  Closes-Bug: 1738804
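The footnotes' range arithmetic can be sketched as follows. This is an illustrative simplification, not the proxy's actual range-mapping code: widen the client range to whole segments, then scale by the fragment size (segment size divided by the number of data fragments).

```python
def fragment_range(client_start, client_end, segment_size, ndata):
    """Map a client byte range onto whole-fragment byte boundaries."""
    frag_size = segment_size // ndata          # 1024 // 4 == 256
    first_seg = client_start // segment_size   # first segment touched
    last_seg = client_end // segment_size      # last segment touched
    # fetch whole fragments covering those segments
    return first_seg * frag_size, (last_seg + 1) * frag_size - 1

# 4+2 policy, segment size 1024, client asks "Range: bytes=21-50"
print(fragment_range(21, 50, 1024, 4))  # (0, 255)
```

For the 20-byte object in the footnote, the proxy's widened request still succeeds (206) even though the client's original range is unsatisfiable, which is exactly the 206-then-416 situation that leaked sockets.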
- **Clay Gerrard** · `7013e70ca6` · **Represent dispersion worse than one replicanth**
  With a sufficiently undispersed ring it's possible to move an entire replica's worth of parts and yet have the dispersion value not get any better (even though in reality dispersion has dramatically improved). The problem is that dispersion currently only represents up to one whole replica's worth of undispersed parts. With EC rings, however, more than one whole replica's worth of partitions can be undispersed; in these cases the builder will require multiple rebalance operations to fully disperse replicas, but the dispersion value should improve with every rebalance.
  N.B. With this change it's possible for rings with a bad dispersion value to measure as having a significantly smaller dispersion value after a rebalance (even though their dispersion may not have changed), because the total amount of bad dispersion we can measure has been increased while we normalize within a similar range.
  Closes-Bug: #1697543
  Change-Id: Ifefff0260deac0c3e8b369a1e158686c89936686
- **Zuul** · `9e8abe46c6` · Merge "Skip symlink + vw functional tests if symlink is not enabled"
- **Tim Burke** · `61fe6aae81` · **Better mock out OSErrors in test_replicator before raising them**
  Also, provide a return value for resp.read() so we hit a pickle error instead of a type error.
  Change-Id: I56141eee63ad1ceb2edf807432fa2516fabb15a6
- **Kazuhiro MIYAHARA** · `0bdec4661b` · **Skip symlink + vw functional tests if symlink is not enabled**
  Functional tests for symlink and versioned writes run and fail even if symlink is not enabled. This patch fixes the functional tests to run only if both symlink and versioned writes are enabled.
  Change-Id: I5ffd0b6436e56a805784baf5ceb722effdf74884
- **Kazuhiro MIYAHARA** · `1449532fb8` · **Allow InternalClient container/object listing with prefix**
  This patch adds a 'prefix' argument to the iter_containers/iter_objects methods of InternalClient. This change will be used in the general task queue feature [1].
  [1] https://review.openstack.org/#/c/517389/
  Change-Id: I8c2067c07fe35681fdc9403da771f451c21136d3
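Under the hood a prefix-constrained listing is just a query parameter on the listing request. A rough sketch of how such a path might be assembled (the helper is hypothetical; only the /v1 path shape and the format/marker/prefix query parameters follow Swift's listing API):

```python
def listing_path(account, container=None, prefix=None, marker=''):
    """Build a Swift listing request path with an optional prefix filter."""
    path = '/v1/%s' % account + ('/%s' % container if container else '')
    query = 'format=json&marker=%s' % marker
    if prefix:
        query += '&prefix=%s' % prefix
    return '%s?%s' % (path, query)

print(listing_path('AUTH_test', '.expiring_objects', prefix='15153'))
# /v1/AUTH_test/.expiring_objects?format=json&marker=&prefix=15153
```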
- **Zuul** · `26822232b3` · Merge "Fix sometimes-flaky container name functional test."
- **Zuul** · `a8d1900553` · Merge "fix SkipTest imports in functests so they can be run directly by nose"
- **Zuul** · `fbee2bc178` · Merge "functest for symlink + versioned writes"
- **Samuel Merritt** · `af2c2a6eb5` · **Fix sometimes-flaky container name functional test**
  There are two test classes, TestContainer and TestContainerUTF8, which each try to create the same set of containers with names of varying lengths to make sure the container-name length limit is honored. Each test class also tries to clean up pre-existing data in its setUpClass method. If TestContainerUTF8 fails to delete a container that TestContainer made, its testContainerNameLimit method fails because the container PUT response has status 202 instead of 201: the container still existed from the prior test.
  The test now considers both 201 and 202 as success. For purposes of testing the maximum container name length, any 2xx is fine.
  Change-Id: I7b343a8ed0d12537659c051ddf29226cefa78a8f
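The relaxed assertion amounts to a one-line predicate (the function name is illustrative):

```python
def container_put_succeeded(status):
    # 201: container freshly created; 202: it already existed, e.g. as
    # leftover state from a previous test class -- both count as success
    return status in (201, 202)

print(container_put_succeeded(201), container_put_succeeded(202),
      container_put_succeeded(400))  # True True False
```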
- **Zuul** · `5b2929d7b8` · Merge "Fix intermittent problem in part_swapping test"
- **Zuul** · `c29a9b01b9` · Merge "Save ring builder if dispersion changes"
- **Clay Gerrard** · `609c757e69` · **functest for symlink + versioned writes**
  Co-Author: Alistair Coles <alistairncoles@gmail.com>
  Related-Change-Id: I838ed71bacb3e33916db8dd42c7880d5bb9f8e18
  Change-Id: I0ccff1eafcfb3fdbdda9faf55a44c45b834e723a
- **Matthew Oliver** · `a7da223262` · **Fix intermittent problem in part_swapping test**
  There is an intermittent failure in the test_part_swapping_problem test found in test/unit/common/ring/test_builder.py. The test does a rebalance, then changes the ring to test a specific problem, does some housekeeping, and then rebalances again. The problem is that the ring builder keeps track of where in the ring it started the last ring rebalance, saved in `_last_part_gather_start`. On a rebalance, or more specifically in `_gather_parts_for_balance`, we start somewhere on the other side of the ring:

  ```
  quarter_turn = (self.parts // 4)
  random_half = random.randint(0, self.parts / 2)
  start = (self._last_part_gather_start + quarter_turn +
           random_half) % self.parts
  ```

  Because we don't reset `_last_part_gather_start` when we change the ring in the test, there is an edge case where, if we are unlucky during both rebalances, both calls to randint return relatively large numbers, pushing the start of the second rebalance to the wrong side of the ring. Actually it's more problematic: one large random value and one in the middle will cause it. Maybe pictures help:

  ```
  rebalance 1 (r1): quarter_turn = 4, random_half = 5
  rebalance 2 (r2): quarter_turn = 4, random_half = 3
  r1 r2
  |  |
  v  v
  array('H', [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
  array('H', [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 2, 2, 2, 3, 3, 3]),
  array('H', [2, 2, 2, 2, 3, 3, 4, 4, 4, 4, 3, 4, 4, 4, 4, 4])]
  ```

  Now when gathering for rebalance 2 it'll pick:

  ```
  array('H', [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, X]),
  array('H', [X, X, 1, 1, 2, 2, 2, 3, 3, 3, 2, 2, 2, 3, 3, 3]),
  array('H', [2, 2, 2, 2, 3, 3, 4, 4, 4, 4, 3, 4, 4, 4, 4, 4])]
  ```

  This can use up all 3 attempts to gather and rebalance, which causes the intermittent failure seen in the bug. This patch solves the problem by resetting `_last_part_gather_start` to 0 while tidying up the ring change, meaning we'll always start on the correct side of the ring.
  Change-Id: I0d3a69620d4734091dfa516efd0d6b2ed87e196b
  Closes-Bug: #1724356
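The start-position formula above can be exercised in isolation. This is a standalone sketch of the same arithmetic (the function and its seeding are illustrative, not the RingBuilder implementation):

```python
import random

def gather_start(parts, last_start, rnd=None):
    """Where a rebalance pass begins gathering: a quarter turn plus a
    random half turn past wherever the previous pass started."""
    rnd = rnd or random
    quarter_turn = parts // 4
    random_half = rnd.randint(0, parts // 2)
    return (last_start + quarter_turn + random_half) % parts

# with the fix, the test resets the remembered start to 0 before the
# second rebalance, so the sweep begins in a predictable region
rnd = random.Random(0)
start = gather_start(16, 0, rnd)
print(0 <= start < 16)  # True
```

Without the reset, `last_start` carries over from the first rebalance, and an unlucky pair of random draws lands the second sweep on the already-rebalanced side of the ring.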
- **Tim Burke** · `cd11289ba1` · **Add a small sleep when trying to predict X-Timestamp**
  The existing test works fine if you're running the tests on an all-in-one, but it's pretty brittle if you aren't running them on the one and only proxy-server they're hitting. Add a 0.1s sleep to allow *some* clock slippage between client and server.
  Change-Id: Iacd08e9f703d08d0092b5e8eb53fe287ba1d1596
- **Tim Burke** · `fba3fb7089` · **Stop logging tracebacks when the replicator runs out of handoffs**
  Otherwise, swift-in-the-small can fill up logs with:

  ```
  object-replicator: Error syncing partition:
  Traceback (most recent call last):
    File ".../swift/obj/replicator.py", line 419, in update
      node = next(nodes)
  StopIteration
  ```

  ...which simultaneously sounds worse than it is and isn't helpful in diagnosing/debugging the issue.
  Change-Id: I2f5bb12f3704880df1750229425f64f419ff9aef
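The shape of the fix is to treat an exhausted node iterator as an expected condition rather than an error. A minimal sketch (the function name and return convention are illustrative, not the replicator's actual code):

```python
def sync_to_next_node(nodes_iter, sync):
    """Try the next handoff node; running out is normal in small clusters."""
    try:
        node = next(nodes_iter)
    except StopIteration:
        # no handoffs left -- nothing to sync to, and no traceback needed
        return None
    return sync(node)

print(sync_to_next_node(iter([]), lambda n: n))         # None
print(sync_to_next_node(iter(['node1']), lambda n: n))  # node1
```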
- **John Dickinson** · `2cf5e7ceff` · **fix SkipTest imports in functests so they can be run directly by nose**
  Change-Id: I7ecc48f69ca677d5ecb0986ac4042688442355bb
- **Zuul** · `8dcef64975` · Merge "Move symlink versioning functional test"