c03d53ab7737e208d9c436088e41f60fdab96b30
Commit Graph

5077 Commits

Author SHA1 Message Date
Jenkins
6b854bd908 Merge "py3: Replace urllib imports with six.moves.urllib" 2015-10-10 07:20:26 +00:00
Victor Stinner
84f0a54445 py3: Replace basestring with six.string_types
The builtin basestring type was removed in Python 3. Replace it with
six.string_types, which works on both Python 2 and Python 3.
Change-Id: Ib92a729682322cc65b41050ae169167be2899e2c
2015-10-09 22:20:03 +02:00
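A minimal before/after sketch of the kind of change this commit describes (hypothetical function name, not from the Swift tree):

    import six

    def is_name(value):
        # Python 2 only: isinstance(value, basestring)
        # Portable form that works on Python 2 and Python 3:
        return isinstance(value, six.string_types)

    print(is_name('AUTH_test'))  # True
    print(is_name(42))           # False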
Victor Stinner
2447e0cf6e Replace itertools.ifilter with six.moves.filter
Replace itertools.ifilter() with six.moves.filter(); itertools.ifilter()
was removed in Python 3. This change makes the modified code compatible
with Python 2 and Python 3.
Replace itertools.izip() with six.moves.zip().
The patch was generated by the itertools operation of the sixer tool.
Change-Id: Ie7f787cc6d66edfceb8fa2c1a906351a8c8c5fed
2015-10-09 22:19:42 +02:00
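An illustrative sketch of the substitution (the data here is made up):

    from six.moves import filter, zip  # itertools.ifilter / izip on Py2, builtins on Py3

    evens = filter(lambda x: x % 2 == 0, range(10))  # was itertools.ifilter(...)
    pairs = zip(evens, 'abcde')                      # was itertools.izip(...)
    print(list(pairs))  # [(0, 'a'), (2, 'b'), (4, 'c'), (6, 'd'), (8, 'e')]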
Jenkins
e15be1e6ec Merge "swift-ring-builder can't select id=0" 2015-10-09 09:58:22 +00:00
Zack M. Davis
1ba7641c79 minutæ: port ClientException tweaks from swiftclient; dict .pop
openstack/python-swiftclient@5ae4b423 changed python-swiftclient's
ClientException to have its http_status attribute default to
None (rather than 0) and to use super in its __init__ method. For
consistency's sake, it's nice for Swift's inlined copy of
ClientException to receive the same patch. Also, the retry function in
direct_client (a major user of ClientException) was using a somewhat
awkward conditional-assignment-and-delete construction where the .pop
method of dictionaries would be more idiomatic.
Change-Id: I70a12f934f84f57549617af28b86f7f5637bd8fa
2015-10-08 16:39:35 -07:00
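A generic sketch of the dict .pop idiom the message refers to (not the actual direct_client code):

    kwargs = {'retries': 3, 'timeout': 10}

    # Awkward conditional-assignment-and-delete:
    #   if 'retries' in kwargs:
    #       attempts = kwargs['retries']
    #       del kwargs['retries']
    #   else:
    #       attempts = 5
    # More idiomatic with .pop and a default:
    attempts = kwargs.pop('retries', 5)
    print(attempts, kwargs)  # 3 {'timeout': 10}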
Jenkins
e2498083b4 Merge "Fix EC documentation of .durable quorum" 2015-10-08 20:53:58 +00:00
Alistair Coles
01f9d15045 Fix EC documentation of .durable quorum
Update the doc to reflect the change [1] to ndata + 1
.durable files being committed before a success response
is returned for a PUT.
[1] Ifd36790faa0a5d00ec79c23d1f96a332a0ca0f0b
Change-Id: I1744d457bda8a52eb2451029c4031962e92c2bb7
2015-10-08 18:55:29 +01:00
Lisak, Peter
a5d2faab90 swift-ring-builder can't select id=0
Currently, it is not possible to change the weight of the device with id=0
via the swift-ring-builder CLI. Instead of applying the change, the help
text is shown.
Example:
$ swift-ring-builder object.builder set_weight --id 0 1.00
But id=0 is generated by Swift for the first device if no id is provided.
The --weight, --zone and --region options trigger the same bug.
The problem is detecting the new command format in the validate_args
function when zero is a valid value for some arguments.
Change-Id: I4ee379c242f090d116cd2504e21d0e1904cdc2fc
2015-10-08 15:57:01 +02:00
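A generic illustration of the pitfall described above, where a truthiness check treats 0 like a missing value (hypothetical code, not the actual validate_args implementation):

    def pick_id(arg_id):
        # Buggy: "if arg_id:" is False for id 0, so the option looks absent.
        # Correct: only treat None as "not provided".
        if arg_id is not None:
            return arg_id
        return 'no id given'

    print(pick_id(0))     # 0, not 'no id given'
    print(pick_id(None))  # no id given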
Victor Stinner
8f85427939 py3: Replace gen.next() with next(gen)
The next() method of Python 2 generators was renamed to __next__() in
Python 3. Call the builtin next() function instead, which works on both
Python 2 and Python 3.
The patch was generated by the next operation of the sixer tool.
Change-Id: Id12bc16cba7d9b8a283af0d392188a185abe439d
2015-10-08 15:40:06 +02:00
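A small sketch of the gen.next() to next(gen) change (made-up generator):

    def countdown(n):
        while n > 0:
            yield n
            n -= 1

    gen = countdown(3)
    # Python 2 only: gen.next()
    # Portable: the builtin next() works on Python 2 and Python 3.
    print(next(gen))  # 3
    print(next(gen))  # 2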
Victor Stinner
6a82097b0e py3: Use six.reraise() to reraise an exception
Replace "raise exc_type, exc_value, exc_tb" with
"six.reraise(exc_type, exc_value, exc_tb)".
The patch was generated by the raise operation of the sixer tool on:
bin/* swift/ test/.
Change-Id: Ic4ca6d7f26d1e0075bd2a8a26d6e408b59b17fbb
2015-10-08 15:33:26 +02:00
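A minimal example of the six.reraise() pattern (the handler logic is illustrative only):

    import sys
    import six

    def do_work():
        raise ValueError('boom')

    try:
        do_work()
    except ValueError:
        exc_type, exc_value, exc_tb = sys.exc_info()
        # Python 2 only: raise exc_type, exc_value, exc_tb
        # Portable form that preserves the original traceback:
        six.reraise(exc_type, exc_value, exc_tb)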
Victor Stinner
c0af385173 py3: Replace urllib imports with six.moves.urllib
The urllib, urllib2 and urlparse modules of Python 2 were reorganized
into a new urllib namespace on Python 3. Replace urllib, urllib2 and
urlparse imports with six.moves.urllib to make the modified code
compatible with Python 2 and Python 3.
The initial patch was generated by the urllib operation of the sixer
tool on: bin/* swift/ test/.
Change-Id: I61a8c7fb7972eabc7da8dad3b3d34bceee5c5d93
2015-10-08 15:24:13 +02:00
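A hedged sketch of the import change (the URL below is just a placeholder):

    # Python 2: import urllib, urllib2, urlparse
    # Portable via six:
    from six.moves.urllib.parse import quote, urlparse

    print(quote('object name with spaces'))
    print(urlparse('http://example.com/v1/AUTH_test/c/o').path)  # /v1/AUTH_test/c/o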
Victor Stinner
f2cac20d17 py3: Replace unicode with six.text_type
The unicode type was renamed to str in Python 3. Use six.text_type to
make the modified code compatible with Python 2 and Python 3.
The initial patch was generated by the unicode operation of the sixer
tool on: bin/* swift/ test/.
Change-Id: I9e13748ccde36ee8110756202d55d3ae945d4860
2015-10-08 13:16:43 +02:00
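An illustrative sketch of the unicode-to-six.text_type substitution (helper name invented for the example):

    import six

    def ensure_text(value):
        # Python 2 only: isinstance(value, unicode)
        # Portable: six.text_type is unicode on Python 2 and str on Python 3.
        if isinstance(value, six.text_type):
            return value
        return value.decode('utf-8')

    print(ensure_text(b'object-name'))  # object-name
    print(ensure_text(u'object-name'))  # object-name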
Jenkins
6a9b868ae6 Merge "Python3: Fix Remaining issues of python3 compatibility in bin directory" 2015-10-08 08:41:34 +00:00
Jenkins
647e97e63b Merge "Python 3 using builtins instead of __builtin__, rename raw_input() to input() from six.moves" 2015-10-08 08:41:30 +00:00
Jenkins
2b521dfbaa Merge "Port swob to Python 3" 2015-10-08 06:05:17 +00:00
Jenkins
6ff6bb272e Merge "Fix replicator intersection exception when sync data to remote regions." 2015-10-08 05:09:09 +00:00
Jenkins
c87bceb42d Merge "Fix ring device checks in probetests" 2015-10-08 05:06:46 +00:00
Jenkins
21ebebfa87 Merge "py3: Fix Python 3 issues in utils" 2015-10-08 03:55:31 +00:00
Christian Schwede
c30ceec6f1 Fix ring device checks in probetests
If a device has been removed from one of the rings, its entry within the
ring is actually set to None. In that case the device count is only
meaningful after filtering out the None entries; if the length matched the
condition but the list still included a removed device, the probetests
would fail with a TypeError.
This fix could also be made in swift/common/ring/ring.py, but it seems to
affect only the probetests right now, so it is fixed there without
changing the current behavior.
Change-Id: I8ccf9b32a51957e040dd370bc9f711d4328d17b1
2015-10-07 19:59:15 +00:00
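A small sketch of the kind of check involved, filtering None placeholders before counting devices (simplified; not the actual probe test code):

    # A ring's device list keeps a None placeholder for a removed device.
    devs = [{'id': 0, 'ip': '10.0.0.1'}, None, {'id': 2, 'ip': '10.0.0.3'}]

    # Counting raw entries would include the removed device:
    print(len(devs))  # 3
    # Filter out the None entries before checking how many real devices remain:
    real_devs = [d for d in devs if d is not None]
    print(len(real_devs))  # 2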
Charles Hsu
d01cd42509 Fix replicator intersection exception when sync data to remote regions.
This patch fixes the exception (AttributeError: 'list' object has no
attribute 'intersection') raised when the replicator tries to sync data
from a handoff to primary partitions in more than one remote region.
Change-Id: I565c45dda8c99d36e24dbf1145f2d2527d593ac0
Closes-Bug: 1503152
2015-10-07 12:18:35 -07:00
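A generic reproduction of that kind of AttributeError and the obvious fix (illustrative values only):

    local_regions = [1, 2]
    remote_regions = [2, 3]

    # AttributeError: 'list' object has no attribute 'intersection'
    # common = local_regions.intersection(remote_regions)

    # intersection() is a set operation, so convert first:
    common = set(local_regions).intersection(remote_regions)
    print(common)  # {2}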
Jenkins
75e10aa77c Merge "Input validation must not depend on the locale" 2015-10-07 17:57:18 +00:00
Jenkins
ce52187375 Merge "replace use of deprecated rfc822.Message with a helper utility" 2015-10-07 12:45:48 +00:00
Jenkins
45ae57902a Merge "Add search filter examples to swift-ring-builder dispersion help" 2015-10-07 07:24:47 +00:00
Jenkins
84687ee324 Merge "Change POST container semantics" 2015-10-06 19:10:15 +00:00
Jenkins
bcd17c550d Merge "Fix slorange on-disk format when including whole object" 2015-10-06 17:16:01 +00:00
Clay Gerrard
752ceb266b Close ECAppIter's sub-generators before propagating GeneratorExit
... which ensures no Timeouts remain pending after the parent generator
is closed when a client disconnects before being able to read the entire
body.
Also tighten up a few tests that may have left some open ECAppIter
generators lying about after the tests themselves had finished. This
has the side effect of preventing the extraneous printing of the Timeout
errors being raised by the eventlet hub in the background while our
unittests are running.
Change-Id: I156d873c72c19623bcfbf39bf120c98800b3cada
2015-10-05 13:31:09 -07:00
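A stripped-down illustration of the pattern, closing sub-generators when the parent generator is closed (this is not ECAppIter itself):

    def sub_iter(name):
        try:
            for i in range(3):
                yield '%s-%d' % (name, i)
        finally:
            print('%s cleaned up' % name)

    def app_iter(subs):
        try:
            for sub in subs:
                for chunk in sub:
                    yield chunk
        except GeneratorExit:
            # Client went away: close every sub-generator so its cleanup
            # (e.g. pending Timeouts in the real ECAppIter) runs now.
            for sub in subs:
                sub.close()
            raise

    it = app_iter([sub_iter('a'), sub_iter('b')])
    print(next(it))  # a-0
    it.close()       # prints 'a cleaned up'; 'b' never started, so nothing to clean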
Zack M. Davis
06bede8942 replace use of deprecated rfc822.Message with a helper utility
The rfc822 module has been deprecated since Python 2.3, and in
particular is absent from the Python 3 standard library. However, Swift
uses instances of rfc822.Message in a number of places, relying on its
behavior of immediately parsing the headers of a file-like object
without consuming the body, leaving the position of the file at the
start of the body. Python 3's http.client has an undocumented
parse_headers function with the same behavior, which inspired the new
parse_mime_headers utility introduced here. (The HeaderKeyDict returned
by parse_mime_headers doesn't have a `.getheader(key)` method like
rfc822.Message did; the dictionary-like `[key]` or `.get(key)` interface
should be used exclusively.)
The implementation in this commit won't actually work with Python 3, the
email.parser.Parser().parsestr of which expects a Unicode string, but it
is believed that this can be addressed in followup work.
Change-Id: Ia5ee2ead67e36e8c6416183667f64ae255887736
2015-10-05 12:22:24 -07:00
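A rough sketch of the behavior described, parsing headers from a file-like object while leaving it positioned at the start of the body (assumed helper shape, not necessarily the exact Swift utility):

    import email.parser
    import io

    def parse_mime_headers(fp):
        # Read header lines up to the blank line, leaving fp at the body.
        lines = []
        while True:
            line = fp.readline()
            lines.append(line)
            if line in ('\r\n', '\n', ''):
                break
        return email.parser.Parser().parsestr(''.join(lines))

    fp = io.StringIO('Content-Type: text/plain\r\nX-Object-Meta-A: b\r\n\r\nBODY')
    headers = parse_mime_headers(fp)
    print(headers['Content-Type'])  # text/plain
    print(fp.read())                # BODY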
John Dickinson
47eb6a37f8 authors and changelog update for 2.5.0
Change-Id: Id20b9340a017922b29d8bf9558825697a7f1f6f1
2.5.0
2015-10-02 21:28:15 -07:00
Jenkins
ab78b2409a Merge "Make sure we have enough .durable's for GETs" 2015-10-03 02:04:23 +00:00
Jenkins
c799d4de52 Merge "Validate against duplicate device part replica assignment" 2015-10-03 01:35:12 +00:00
Victor Stinner
a8c2978707 py3: Fix Python 3 issues in utils
* Replace urllib imports with six.moves.urllib
* Don't access private logging._levelNames attribute, but use the
 public function logging.addLevelName() instead.
* Replace basestring with six.string_types
Change-Id: I4cd5dd71ffb40f84e8844b5808b38630795ad520
2015-10-03 02:13:38 +02:00
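A brief sketch of the public logging API the second bullet refers to (the custom level here is invented):

    import logging

    NOTICE = 25  # example custom level between INFO (20) and WARNING (30)

    # Instead of poking at the private logging._levelNames mapping,
    # register the name through the public API:
    logging.addLevelName(NOTICE, 'NOTICE')
    print(logging.getLevelName(NOTICE))  # NOTICE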
Clay Gerrard
5070869ac0 Validate against duplicate device part replica assignment
We should never assign multiple replicas of the same partition to the
same device - our on-disk layout can only support a single replica of a
given part on a single device. We should not do this, so we validate
against it and raise a loud warning if this terrible state is ever
observed after a rebalance.
Unfortunately, there are currently a couple of not-uncommon scenarios
that will trigger this observed state today:
 1. If we have fewer devices than replicas
 2. If a server's or zone's aggregate device weight makes it the most
 appropriate candidate for multiple replicas and you're a bit unlucky
Fixing #1 would be easy, we should just not allow that state anymore.
Really we never did - if you have a 3 replica ring with one device - you
have one replica. Everything that iter_nodes'd would de-dupe. We
should just be insisting that you explicitly acknowledge your replica
count with set_replicas.
I have been lost in the abyss for days searching for a general solution
to #2. I'm sure it exists, but I will not have wrestled it to
submission by RC1. In the meantime we can eliminate a great deal of the
luck required simply by refusing to place more than one replica of a
part on a device in assign_parts.
The meat of the change is a small update to the .validate method in
RingBuilder. It basically unrolls a pre-existing (part, replica) loop
so that all the replicas of the part come out in order so that we can
build up the set of dev_id's for which all the replicas of a given part
are assigned part-by-part.
If we observe any duplicates - we raise a warning.
To clean the cobwebs out of the rest of the corner cases we're going to
delay get_required_overload from kicking in until we achieve dispersion,
and a small check was added when selecting a device subtier to validate
if it's already being used - picking any other device in the tier works
out much better. If no other devices are available in the tier - we
raise a warning. A more elegant or optimized solution may exist.
Many unittests did not meet criterion #1, but the fix was
straightforward once the pigeonhole check identified them.
However, many more tests were affected by #2 - but again the fix was
simply adding more devices. The fantasy that all failure domains
contain at least replica-count devices is prevalent in both our ring
placement algorithm and its tests. These tests were trying to
demonstrate some complex characteristics of our ring placement algorithm
and I believe we just got a bit too carried away trying to find the
simplest possible example to demonstrate the desirable trait. I think
a better example looks more like a real ring - with many devices in each
server and many servers in each zone - I think more devices make the
tests better. As much as possible I've tried to maintain the original
intent of the tests - when adding devices I've either spread the weight
out amongst them or added proportional weights to the other tiers.
I added an example straw man test to validate that three devices with
different weights in three different zones won't blow up. Once we can
do that without raising warnings and assigning duplicate device part
replicas - we can add more. And more importantly change the warnings to
errors - because we would much prefer to not do that #$%^ anymore.
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Related-Bug: #1452431
Change-Id: I592d5b611188670ae842fe3d030aa3b340ac36f9
2015-10-02 16:42:25 -07:00
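A condensed sketch of the kind of pigeonhole check described, detecting two replicas of one partition landing on the same device (toy data, not the RingBuilder internals):

    import collections

    # replica_to_part_to_dev[replica][part] -> device id
    replica_to_part_to_dev = [
        [0, 1, 2],  # replica 0
        [1, 2, 0],  # replica 1
        [2, 2, 1],  # replica 2: part 1 also lands on dev 2 -- a duplicate
    ]

    parts = len(replica_to_part_to_dev[0])
    for part in range(parts):
        devs_for_part = [r2p2d[part] for r2p2d in replica_to_part_to_dev]
        counts = collections.Counter(devs_for_part)
        dupes = [dev for dev, n in counts.items() if n > 1]
        if dupes:
            print('part %d has multiple replicas on dev(s) %r' % (part, dupes))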
Clay Gerrard
a31ee07bda Make sure we have enough .durable's for GETs
Increase the number of nodes from which we require a final successful
HTTP response before we return success to the client on a write, to
match the number of nodes from which we'll require successful responses
in order to service a client request for a read.
Change-Id: Ifd36790faa0a5d00ec79c23d1f96a332a0ca0f0b
Related-Bug: #1469094 
2015-10-02 13:53:27 -07:00
Jenkins
0e3e2db913 Merge "Add POST capability to ssync for .meta files" 2015-10-02 13:54:51 +00:00
Alistair Coles
29c10db0cb Add POST capability to ssync for .meta files
ssync currently does the wrong thing when replicating object dirs
containing both a .data and a .meta file. The ssync sender uses a
single PUT to send both object content and metadata to the receiver,
using the metadata (.meta file) timestamp. This results in the object
content timestamp being advanced to the metadata timestamp,
potentially overwriting newer object data on the receiver and causing
an inconsistency with the container server record for the object.
For example, replicating an object dir with {t0.data(etag=x), t2.meta}
to a receiver with t1.data(etag=y) will result in the creation of
t2.data(etag=x) on the receiver. However, the container server will
continue to list the object as t1(etag=y).
This patch modifies ssync to replicate the content of .data and .meta
separately using a PUT request for the data (no change) and a POST
request for the metadata. In effect, ssync replication replicates the
client operations that generated the .data and .meta files so that
the result of replication is the same as if the original client requests
had persisted on all object servers.
Apart from maintaining correct timestamps across sync'd nodes, this has
the added benefit of not needing to PUT objects when only the metadata
has changed and a POST will suffice.
Taking the same example, ssync sender will no longer PUT t0.data but will
POST t2.meta resulting in the receiver having t1.data and t2.meta.
The changes are backwards compatible: an upgraded sender will only sync
data files to a legacy receiver and will not sync meta files (fixing the
erroneous behavior described above); a legacy sender will operate as
before when sync'ing to an upgraded receiver.
Changes:
- diskfile API provides methods to get the data file timestamp
 as distinct from the diskfile timestamp.
- diskfile yield_hashes return tuple now passes a dict mapping data and
 meta (if any) timestamps to their respective values in the timestamp
 field.
- ssync_sender will encode data and meta timestamps in the
 (hash_path, timestamp) tuple sent to the receiver during
 missing_checks.
- ssync_receiver compares sender's data and meta timestamps to any
 local diskfile and may specify that only data or meta parts are sent
 during updates phase by appending a qualifier to the hash returned
 in its 'wanted' list.
- ssync_sender now sends POST subrequests when a meta file
 exists and its content needs to be replicated.
- ssync_sender may send *only* a POST if the receiver indicates that
 is the only part required to be sync'd.
- object server will allow PUT and DELETE with earlier timestamp than
 a POST
- Fixed TODO related to replicated objects with fast-POST and ssync
Related spec change-id: I60688efc3df692d3a39557114dca8c5490f7837e
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Closes-Bug: 1501528
Change-Id: I97552d194e5cc342b0a3f4b9800de8aa6b9cb85b
2015-10-02 11:24:19 +00:00
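A toy illustration of the data/meta timestamps being compared separately during the missing_check phase (the timestamps and structures here are invented):

    # Sender has t0.data and t2.meta; receiver has a newer t1.data and no .meta.
    sender_ts = {'data': '1443650000.00000', 'meta': '1443660000.00000'}
    receiver_ts = {'data': '1443655000.00000', 'meta': None}

    wanted = []
    if sender_ts['data'] > (receiver_ts['data'] or ''):
        wanted.append('data')  # would be replicated with a PUT subrequest
    if sender_ts['meta'] and sender_ts['meta'] > (receiver_ts['meta'] or ''):
        wanted.append('meta')  # would be replicated with a POST subrequest

    print(wanted)  # ['meta'] -- only the metadata needs to be sync'd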
Jenkins
fc7c8c23c9 Merge "Fix copy requests to service accounts in Keystone" 2015-10-02 03:10:18 +00:00
Tim Burke
969f1ea939 Fix slorange on-disk format when including whole object
Not that the current implementation is broken, just wasteful.
When a client specifies a range for an SLO segment that includes the
entire referenced object, we should drop the 'range' key from the
manifest that's stored on disk.
Previously, we would do this if the uploaded manifest included the
object-length for validation, but not if it didn't. Now we will
always drop the 'range' key if the entire segment is being used.
Change-Id: I69d2fff8c7c59b81e9e4777bdbefcd3c274b59a9
Related-Change: Ia21d51c2cef4e2ee5162161dd2c1d3069009b52c
2015-10-01 21:22:40 +00:00
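A small sketch of the manifest normalization described (the segment dict fields are modeled loosely on an SLO manifest entry; values are made up):

    def normalize_segment(seg):
        # If the requested range covers the entire referenced object,
        # the 'range' entry is redundant and can be dropped.
        rng = seg.get('range')
        if rng and seg.get('bytes') is not None:
            start, end = [int(x) for x in rng.split('-')]
            if start == 0 and end == seg['bytes'] - 1:
                seg.pop('range')
        return seg

    print(normalize_segment({'name': '/c/o', 'bytes': 1024, 'range': '0-1023'}))
    print(normalize_segment({'name': '/c/o', 'bytes': 1024, 'range': '0-511'}))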
Clay Gerrard
8f6fd855a1 Add search filter examples to swift-ring-builder dispersion help
... because I always forget how to regex and have to independently
discover it every time I want to use the tool!
Change-Id: I00d5ab6f573ef26e7e10502493c0066623583b00
2015-10-01 11:34:10 -07:00
Christian Schwede
4b8f52b153 Fix copy requests to service accounts in Keystone
In the case of a COPY request, swift_owner was already set to True, and
the following PUT request was granted access whether or not a service
token was used. This allowed data to be copied to service accounts
without any service token.
Service token unit tests have been added to verify that when
swift_owner is set to True in a request environ, this setting is
ignored when authorizing another request based on the same
environ. Applying only this test change on master fails currently, and
only passes with the fix in this patch.
Tempauth seems not to be affected; however, a small doc update has been
added to make it clearer that a service token is not needed to access a
service account when an ACL is used.
Further details with an example are available in the bug report
(https://bugs.launchpad.net/swift/+bug/1483007).
Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Co-Authored-By: Hisashi Osanai <osanai.hisashi@jp.fujitsu.com>
Co-Authored-By: Donagh McCabe <donagh.mccabe@hp.com>
Closes-Bug: 1483007
Change-Id: I1207b911f018b855362b1078f68c38615be74bbd
2015-10-01 10:01:03 +01:00
Alistair Coles
167f3c8cbd Update EC overview doc for PUT path
Update the EC overview docs 'under the hood' section to reflect the
change in durable file parity from 2 to ec_nparity + 1 [1].
Also fix some typos and cleanup the text.
[1] change id I80d666f61273e589d0990baa78fd657b3470785d
Change-Id: I23f6299da59ba8357da2bb5976d879d9a4bb173e
2015-09-30 09:45:57 +01:00
Jenkins
fbbe942041 Merge "fix docstring: s/2xx/1xx/" 2015-09-29 23:54:30 +00:00
Jenkins
608bdd7245 Merge "Don't send commits for quorum *BAD* requests on EC" 2015-09-29 22:46:01 +00:00
John Dickinson
590e80870c fix docstring: s/2xx/1xx/
Change-Id: If863eb4e66e400081d2402ec8fbf0f9fe8f55b7c
2015-09-29 15:02:55 -07:00
Jenkins
4a6f0cc30b Merge "Fix inlines for test/unit/obj/test_server.py" 2015-09-29 17:29:55 +00:00
Kota Tsuyuzaki
8f1c7409e7 Don't send commits for quorum *BAD* requests on EC
In the EC PUT request case, the proxy server may send commits to the
object servers and create .durable files even though the request failed
for lack of a quorum.
For example:
- Consider the case where almost all object servers fail with 422
 Unprocessable Entity
- Using EC scheme 4 + 2
- 5 object servers (the quorum size) fail with 422 and 1 object server
 succeeds with 201 Created
How it works:
- Client sends a PUT request
- Proxy opens connections to the backend object servers
- Proxy sends all of the encoded chunks to the object servers
- Proxy sends content-md5 as footers
- Proxy gets the responses [422, 422, 422, 422, 422, 201] (currently
 this list is regarded as "we have a quorum of responses")
- Proxy then sends commits to the object servers (only the
 object server that returned 201 will create a .durable file)
- Proxy returns 503 because the commits yield no successful response
 statuses from any object server except the 201 node
This patch fixes the quorum handling in ObjectController to check
that it has a quorum of *successful* responses before sending durable commits.
Closes-Bug: #1491748
Change-Id: Icc099993be76bcc687191f332db56d62856a500f
2015-09-29 05:56:12 -07:00
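A simplified sketch of the check being added, counting only 2xx responses toward quorum before committing (numbers taken from the example above; not the actual proxy code):

    statuses = [422, 422, 422, 422, 422, 201]
    quorum = 5  # quorum size from the 4 + 2 example above

    # Old behavior: any quorum-sized pile of responses looked good enough.
    have_any_quorum = len(statuses) >= quorum

    # Fixed behavior: only successful responses count toward quorum.
    successes = [s for s in statuses if 200 <= s < 300]
    have_successful_quorum = len(successes) >= quorum

    print(have_any_quorum)         # True  -- would wrongly send durable commits
    print(have_successful_quorum)  # False -- no durable commit is sent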
Jenkins
e3bfd33ea7 Merge "Imported Translations from Zanata" 2015-09-28 16:01:06 +00:00
Jenkins
1c94241423 Merge "Better error handling for EC PUT path when client goes away" 2015-09-28 13:24:02 +00:00
Kota Tsuyuzaki
224c40fa67 Fix inlines for test/unit/obj/test_server.py
As a follow-up patch, this fixes small nits from the inline comments on
https://review.openstack.org/#/c/211338
plus some other typos in comments.
Change-Id: Ibf7dc5683b39d6662573dbb036da146174a965fd
2015-09-28 14:17:36 +01:00
OpenStack Proposal Bot
be9a8fb56e Imported Translations from Zanata
For more information about this automatic import see:
https://wiki.openstack.org/wiki/Translations/Infrastructure
Change-Id: I59e314778d95bce32ab05bfeca2067819180dd30
2015-09-28 06:27:21 +00:00
Jenkins
88cbdf78cd Merge "Fix missing container update" 2015-09-28 03:32:07 +00:00