627d0ba52f8f04765e09c9fe0974a2308ee6def3
105 Commits
**Peter Portante · d0a27f477b: Hide the file descriptor and disk write methodology for PUTs**

Towards moving the DiskFile class into place as the API definition for pluggable DiskFile backends, we hide the file descriptor and the method of writing data to disk. The mkstemp() method has been renamed to writer(), and no longer returns an fd but a new object that encapsulates the state tracked for writes. This new object is then used directly to perform the remainder of the write operations and apply the required semantics.

Change-Id: Ib37ed37b34a2ce6b442d69f83ca011c918114434
Signed-off-by: Peter Portante <peter.portante@redhat.com>
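A minimal sketch of the pattern this commit describes: a writer object that hides the descriptor behind a small API. The names here (`DiskWriter`, `write`, `put`) are illustrative assumptions, not Swift's actual implementation:

```python
import os
import tempfile

class DiskWriter(object):
    """Encapsulates the state for writing one object; callers never
    see the underlying file descriptor."""

    def __init__(self, tmpdir):
        self.fd, self.tmppath = tempfile.mkstemp(dir=tmpdir)
        self.upload_size = 0

    def write(self, chunk):
        # All fd-level details stay inside this method.
        while chunk:
            written = os.write(self.fd, chunk)
            self.upload_size += written
            chunk = chunk[written:]

    def put(self, target_path):
        # Apply the required durability semantics, then rename into place.
        os.fsync(self.fd)
        os.close(self.fd)
        os.rename(self.tmppath, target_path)
```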
**gholt · 69cf78bb16: Moved tests for moved obj.base code**

Follow-on to https://review.openstack.org/#/c/28895/. Moved the tests for the code that was moved to obj.base and also made the new test file flake8 compliant.

Change-Id: I4be718927b6cd2de8efe32f8e54b458a4e05291b
**Jenkins · d754b59cf8: Merge "Moved some code out of swift.obj.replicator"**
**gholt · 9fe15dd15a: Moved some code out of swift.obj.replicator**

This will be needed in future replication work to avoid circular imports. I used swift.obj.base as the module name just because we seem to have avoided putting code in __init__.py files so far and I didn't want to buck the trend. I would love to see other obj things like *_metadata and DiskFile move into swift.obj.base as well, with swift.obj.server being just the WSGI server logic, but I'll leave that for the future. I have changed the tests as little as possible (just the references to where they get the code to test) to show the refactor has not broken anything. I did add a test for tpool_reraise since there was none before. There will be a follow-on patch for moving the tests to their new location(s). I figured I'd wait to put the bikes in the shed until everyone's done painting it.

Change-Id: I32b4ac88be21eb76c877d3f4cc1e6ac33304835b
**Jenkins · b9a6bcb431: Merge "Add an explicit unit test for handling content-length: 0"**
**Peter Portante · d62a2a832e: Push fallocate() down into mkstemp(); use known size**

Towards defining the DiskFile class, or something like it, as an API for the low-level disk accesses, we push the fallocate() system call down into the DiskFile.mkstemp() method. This allows another implementation of DiskFile to decide whether or not to use fallocate().

Change-Id: Ib4d2ee1f971e4e20e53ca4b41892c5e44ecc88d5
Signed-off-by: Peter Portante <peter.portante@redhat.com>
**Peter Portante · 960f01b4ba: Add an explicit unit test for handling content-length: 0**

Change-Id: I3568d4dc1900e6ddb4860589ca6a7b7039cc8c2d
Signed-off-by: Peter Portante <peter.portante@redhat.com>
**Sergey Kraynev · ea7858176b: Implementation of replication servers**

Support for a separate replication IP address:
- Added a new function in utils that selects the separate IP address for the replication service.
- Changed db_replicator and the object replicators to use the new function during replication.

Replication network parameters:
- Added support for the replication network fields (replication_ip, replication_port) in the device dictionary of the swift-ring-builder script.
- Updated the search, show and set_info functions to support the new fields.

Implementation of replication servers:
- Separate replication servers use the same code as normal replication servers, but with the replication_server parameter set to True. When using a separate replication network, the non-replication servers set replication_server = False. When there is no separate replication network (the default case), replication_server is not included in the config.

DocImpact
Change-Id: Ie9af5bdcdf9241c355e36053ca4adfe49dc35bd0
Implements: blueprint dedicated-replication-network
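A minimal sketch of the fallback this implies: pick the dedicated replication endpoint from a ring device dict when present, else the normal one. The helper name `select_replication_addr` is an assumption for illustration:

```python
def select_replication_addr(dev):
    # Hypothetical helper: prefer the dedicated replication endpoint,
    # falling back to the regular ip/port for rings without one.
    ip = dev.get('replication_ip') or dev['ip']
    port = dev.get('replication_port') or dev['port']
    return ip, port

# Example ring device entry (fields as named in the commit message):
device = {'id': 0, 'zone': 1, 'ip': '10.0.0.5', 'port': 6000,
          'replication_ip': '10.1.0.5', 'replication_port': 6005,
          'device': 'sda1', 'weight': 100.0}

print(select_replication_addr(device))  # ('10.1.0.5', 6005)
```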
**Peter Portante · 8825c9c74a: Enhance log msg to report referer and user-agent**

Enhance internally logged messages to report referer and user-agent. Pass the referring URL and METHOD between internal servers (when known), and set the user-agent to be the server type (obj-server, container-server, proxy-server, obj-updater, obj-replicator, container-updater, direct-client, etc.) with the process PID. In conjunction with the transaction ID, it helps to track down which PID from a given system was responsible for initiating the request and what that server was working on to make this request. This has been helpful in tracking down interactions between object, container and account servers.

We also take things a bit further, performing a bit of refactoring to consolidate calls to transfer_headers() now that we have a helper method for constructing them.

Finally, we performed further changes to avoid header key duplication due to string-literal header key values and the various objects representing headers for requests and responses. See below for more details.

====
Header Keys

There seems to be a bit of a problem with the case of the various string literals used for header keys and the interchangeable way standard Python dictionaries, HeaderKeyDict() and HeaderEnvironProxy() objects are used. If one is not careful, a header object of some sort (one that does not normalize its keys, and that is not necessarily a dictionary) can be constructed containing header keys which differ only by the case of their string literals. E.g.:

    { 'x-trans-id': '1234', 'X-Trans-Id': '5678' }

Such an object, when passed to http_connect(), will result in an on-the-wire header where the key values are merged together, comma-separated, looking something like:

    HTTP_X_TRANS_ID: 1234,5678

For some headers in some contexts, this behavior is desirable. For example, one can also use a list of tuples which enumerate the multiple values a single header should have. However, in almost all of the contexts used in the code base, this is not desirable.

This behavior arises from a combination of factors:

1. Header strings are not constants, and different lower-case and title-case header string values are used interchangeably in the code at times. It might be worth the effort to make a pass through the code to stop using string literals and use constants instead, but there are pluses and minuses to doing that, so this was not attempted in this effort.
2. HeaderEnvironProxy() objects report their keys in ".title()" case, but normalize all other key references to the form expected by the Request class's environ field. swob.Request.headers fields are HeaderEnvironProxy() objects.
3. HeaderKeyDict() objects report their keys in ".lower()" case, and normalize all other key references to ".lower()" case. swob.Response.headers fields are HeaderKeyDict() objects.

Depending on which object is used and how it is used, one can end up with such a mismatch.

This commit takes the following steps as a (PROPOSED) solution:

1. Change HeaderKeyDict() to normalize using ".title()" case to match HeaderEnvironProxy().
2. Replace standard Python dictionary objects with HeaderKeyDict() objects where possible. This gives us an object that normalizes key references to avoid fixing the code to normalize the string literals.
3. Fix up a few places to use title-case string literals to match the new defaults.

Change-Id: Ied56a1df83ffac793ee85e796424d7d20f18f469
Signed-off-by: Peter Portante <peter.portante@redhat.com>
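The hazard and the fix are easy to demonstrate. Below is a minimal sketch of a key-normalizing dict in the spirit of HeaderKeyDict (not Swift's actual class, just an illustration of why normalizing on assignment prevents the comma-merged header above):

```python
# A plain dict happily holds both casings as distinct keys:
plain = {'x-trans-id': '1234', 'X-Trans-Id': '5678'}
assert len(plain) == 2   # two "different" headers on the wire -> "1234,5678"

class TitleCaseHeaderDict(dict):
    """Illustrative stand-in for HeaderKeyDict: normalizes every key
    to .title() case so casings can never diverge."""

    def __setitem__(self, key, value):
        dict.__setitem__(self, key.title(), value)

    def __getitem__(self, key):
        return dict.__getitem__(self, key.title())

normalized = TitleCaseHeaderDict()
normalized['x-trans-id'] = '1234'
normalized['X-Trans-Id'] = '5678'   # overwrites, rather than duplicating
assert list(normalized) == ['X-Trans-Id']
assert normalized['X-TRANS-ID'] == '5678'
```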
**gholt · c354db2158: Expirer now quotes names when deleting**

Change-Id: I5c615c6f32967510f09b783b1ba7089119f1d8bd
**David Hadas · caa01cd81e: objects md5-collisions**

This patch identifies md5 collisions on objects and sends a 403 from the object server. Credit for originating this fix goes to Michael Factor.

Change-Id: I4f1b32183e2be6bbea56eaff86b9a4c7f440804a
Fix: Bug #1157454
**Jenkins · ab355e349a: Merge "Fix reading xattrs in object-server's unittests."**
**Greg Lange · 44f00a23c1: fixed some minor things in tests that pyflakes complained about**

Change-Id: Ifeab56a964630bcf941e932fcbe39e6572e62975
**David Hadas · a979c8007b: Add support for Hash Prefix**

A new configuration parameter is added to /etc/swift/swift.conf:

    [swift-hash]
    swift_hash_path_prefix = 'random unique string'

New installations are advised to set this parameter to a random secret, which should not be disclosed outside the organization. The same secret needs to be used by all swift servers of the same cluster. Existing installations should set this parameter to an empty string (the default).

DocImpact
Fixes: Bug #1157454
Change-Id: I63b10d0b7d6dd3f74e0f10bb41b5f240fa03578a
**Vladimir Vechkanov · 9e3d2f6ea8: Fix reading xattrs in object-server's unittests.**

Use the object server's own function for reading metadata in the unit tests.

Change-Id: I2bfeb76fdd775442a0e614fef740b0987fba4a22
Fixes: bug #1079131
**Joe Gordon · 45f0502b52: Fix spelling mistakes**

    git ls-files | misspellings -f -

Source: https://github.com/lyda/misspell-check

Change-Id: I4132e6a276e44e2a8985238358533d315ee8d9c4
**Jenkins · 64270fab71: Merge "Allow for multiple X-(Account|Container)-* headers."**
**Jenkins · d69509a779: Merge "Fixed bug in object replicator"**
**Samuel Merritt · 6ff644b945: Allow for multiple X-(Account|Container)-* headers.**

When the number of account/container or container/object replicas differs, Swift had a few misbehaviors. This commit fixes them.

* On an object PUT/POST/DELETE, if there were 3 object replicas and only 2 container replicas, then only 2 requests would be made to object servers. Now, 3 requests will be made, but the third won't have any X-Container-* headers in it.
* On an object PUT/POST/DELETE, if there were 3 object replicas and 4 container replicas, then only 3/4 container servers would receive immediate updates; the fourth would be ignored. Now one of the object servers will receive multiple (comma-separated) values in the X-Container-* headers and it will attempt to contact both of them.

One side effect is that multiple async_pendings may be written for updates to the same object. They'll have differing timestamps, though, so all but the newest will be deleted unread. To trigger this behavior, you have to have more container replicas than object replicas, 2 or more of the container servers must be down, and the headers sent to one object server must reference 2 or more down container servers; it's unlikely enough and the consequences are so minor that it didn't seem worth fixing.

The situation with account/containers is analogous, only without the async_pendings.

Change-Id: I98bc2de93fb6b2346d6de1d764213d7563653e8d
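A minimal sketch of how comma-separated X-Container-* values can fan out into one update per container server. The header names match the commit; the values and the zip-based parsing are illustrative assumptions:

```python
# Headers as one object server might receive them (illustrative values):
headers = {
    'X-Container-Host': '10.0.0.1:6001,10.0.0.2:6001',
    'X-Container-Device': 'sda1,sdb1',
    'X-Container-Partition': '312',
}

# Zip the comma-separated lists back into per-server update targets.
hosts = headers['X-Container-Host'].split(',')
devices = headers['X-Container-Device'].split(',')
for host, device in zip(hosts, devices):
    # One container update per (host, device) pair.
    print('update container on %s device %s partition %s'
          % (host, device, headers['X-Container-Partition']))
```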
**Hodong Hwang · d46f90e17a: Make object-auditor use one logger**

This commit makes AuditorWorker get its logger from the ObjectAuditor class (instead of creating a new one), so the auditor uses the minimum number of unix sockets.

Fixes: bug #1089215
Change-Id: Ia47d862cbe7505db821784b01fcce6f22196e79f
**gholt · 95d5cf851b: Fixed bug in object replicator**

If the object replicator couldn't create a device's object directory (due to permissions or whatever) it wouldn't do any work at all. This fixes that.

Change-Id: I6a30439d036b29c9cfdb660428d13668e0dc8632
**Darrell Bishop · ea95d0092a: Avoid infinite recursion in swift.obj.replicator.get_hashes.**

Fixes bug 1089140. Turns out that if an exception bails out of the pickle loading (e.g. a zero-byte hashes_file), the if clause that determines whether or not to write out a fresh hashes_file can evaluate to false, leading to an infinite loop. This patch fixes this infinite loop generally, by ensuring that if any exception is thrown, a new hashes_file is written.

Change-Id: I344c5f8e261ce7c667bdafe1687263a4150b21dc
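A minimal sketch of the defensive pattern the fix describes: treat any unpickling failure as "the file is bad, rewrite it" rather than falling through to a state that loops. The file name and the recalc callback are illustrative:

```python
import pickle

def read_hashes(hashes_file, recalculate):
    """Load cached hashes; on *any* failure (e.g. a zero-byte or corrupt
    file), force a fresh recalculation and rewrite the cache."""
    try:
        with open(hashes_file, 'rb') as fp:
            return pickle.load(fp)
    except Exception:
        # Unconditionally rebuild -- never leave the bad file in place,
        # or the caller may retry forever.
        hashes = recalculate()
        with open(hashes_file, 'wb') as fp:
            pickle.dump(hashes, fp)
        return hashes
```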
**Peter Portante · 7d70e05aeb: Refactor DiskFile to hide temp file names and exts**

This set of changes reworks the DiskFile class to remove the "extension" parameter from the put() method, offering the new put_metadata() method with an optional tombstone keyword boolean, and changes the mkstemp method to only return the file descriptor.

Reviewing the code, it was found that the temporary file name created as a result of calling DiskFile.mkstemp() was never used by the caller, but the caller was responsible for passing it back to the DiskFile.put() method. That seems like too much information is exposed to the caller, when all the caller requires is the file descriptor to write data into.

Upon further review, the mkstemp() method was used in three places: PUT, POST and DELETE method handling. Of those three cases, only PUT requires the file descriptor, since it is responsible for writing the object contents. For POST and DELETE, DiskFile only needs to associate metadata with the correct file name. We abstract the pattern that those two use (once we also refactor the code to move the fetch of the delete-at metadata, and subsequent delete-at-update initiation, from under the mkstemp context) by adding the new put_metadata() method.

As a result, the DiskFile class is then free to do whatever file system operations it must to meet the API, without the caller having to know more than just how to write data to a file descriptor. Note that DiskFile itself keyed off of the '.ts' and '.meta' extensions for its operations, and for that to work properly, the caller had to know to use those correctly. With this change, the caller has no knowledge of how the file system is being used to accomplish data and metadata storage.

See also Question 213796 at: https://answers.launchpad.net/swift/+question/213796

Change-Id: I267f68e64391ba627b2a13682393bec62600159d
Signed-off-by: Peter Portante <peter.portante@redhat.com>
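A minimal sketch of the API shape described above: the extension choice ('.meta' vs '.ts') becomes an internal detail keyed off the tombstone flag. The class name and on-disk layout here are illustrative assumptions, not Swift's actual code:

```python
import os
import pickle
import time

class DiskFileSketch(object):
    """Illustrative only -- not Swift's DiskFile."""

    def __init__(self, datadir):
        self.datadir = datadir

    def put_metadata(self, metadata, tombstone=False):
        # The caller no longer picks '.ts' or '.meta'; the extension is
        # an internal detail keyed off the tombstone flag.
        ext = '.ts' if tombstone else '.meta'
        timestamp = metadata.get('X-Timestamp', '%.5f' % time.time())
        path = os.path.join(self.datadir, timestamp + ext)
        with open(path, 'wb') as fp:
            pickle.dump(metadata, fp)
        return path
```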
**Samuel Merritt · 237a440cd1: Make DELETE requests to expired objects return 404.**

It is already the case that a DELETE of a deleted object results in a 404, and GET/HEAD/POSTs to both expired and deleted objects result in 404s. However, a DELETE of an expired object resulted in a 202. This change makes it consistent with the other verbs.

Fixes bug 1076245.
Change-Id: I793e62d72461a4fb9fb3404e10658ddcc4c3a7a6
**gholt · f46a4d8a2f: Fixed bugs with internal client and object expirer**

These bug fixes are lumped together because they all caused problems with the object expirer doing its job. There was a bug with the internal client doing listings that happened to run across a Unicode object name for use as a marker. There was a bug with the object expirer not utf8 encoding object names it got from json listings, causing deletes to fail. There was a bug with the object expirer url quoting object names when calling the internal client's make_request, when make_request already handles that.

Change-Id: I29fdd351fd60c8e63874b44d604c5fdff35169d4
**litong01 · ce274b3532: blueprint Multi-range support implementation**

This change adds multi-range retrieval to OpenStack Swift. For a non-segmented data object, a client can use the HTTP Range header to specify multiple ranges to retrieve sections of the data object. This implementation currently does not support multi-range retrieval of segmented data objects. When a client sends a multi-range request against a segmented data object, Swift will return HTTP status code 200. Support for segmented-data multi-range retrieval will be added in the near future. This implementation is to bring Swift closer to the CDMI multi-range data retrieval standard. Once support for segmented-data multi-range is added, Swift will be compliant with the CDMI standard in this area.

DocImpact
Change-Id: I4ed1fb0a0a93c037ddb2f551ea62afe447945107
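For illustration, a multi-range GET just lists several byte ranges in one Range header; the server answers 206 with a multipart/byteranges body. A sketch using Python's standard library (the endpoint, account/container/object path, and token are placeholders):

```python
try:
    from http.client import HTTPConnection   # Python 3
except ImportError:
    from httplib import HTTPConnection       # Python 2

conn = HTTPConnection('swift.example.com', 8080)  # placeholder endpoint
conn.request('GET', '/v1/AUTH_test/container/object', headers={
    # Two ranges: the first 10 bytes and the last 10 bytes.
    'Range': 'bytes=0-9,-10',
    'X-Auth-Token': 'AUTH_tk...',             # placeholder token
})
resp = conn.getresponse()
print(resp.status)                            # expect 206 Partial Content
print(resp.getheader('Content-Type'))         # multipart/byteranges; boundary=...
```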
**Samuel Merritt · 851bbe2ea9: Track unlinks of async_pendings.**

It's not sufficient to just look at swift.object-updater.successes to see the async_pending unlink rate. There are two different spots where unlinks happen: one when an async_pending has been successfully processed, and another when the updater notices multiple async_pendings for the same object. Both events are now tracked under the same name: swift.object-updater.unlinks.

FakeLogger has now sprouted a couple of convenience methods for testing logged metrics. Fixed pep8 1.3.3's complaints in the files this diff touches. Also: bonus spelling and grammar fixes in the admin guide.

Change-Id: I8c1493784adbe24ba2b5512615e87669b3d94505
**Greg Lange · e7f3a9865e: internal client unicode paths**

Made the internal client handle unicode path parts by adding a make_path method. Fixed pep8 problems in the internal client and its test. Moved the internal client unit test file to the correct directory.

Change-Id: Id1c81c9cb0db05342e4e8a8393db93552fda4647
**Michael Barton · 5e3e9a882d: local WSGI Request and Response classes**

This change replaces WebOb with a mostly compatible local library, swift.common.swob. Subtle changes to WebOb's API over the years have been a huge headache. Swift doesn't even run on the current version.

There are a few incompatibilities to simplify the implementation/interface:
* It only implements the header properties we use. More can be easily added.
* Casts header values to str on assignment.
* Response classes ("HTTPNotFound") are no longer subclasses, but partials on Response, so things like isinstance no longer work on them.
* Unlike newer webob versions, will never return unicode objects.

Change-Id: I76617a0903ee2286b25a821b3c935c86ff95233f
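The "partials on Response" point is worth a concrete sketch. Something in this spirit (the class here is a stand-in, not swob's actual code) explains why isinstance checks stop working:

```python
from functools import partial

class Response(object):
    def __init__(self, status=200, body=''):
        self.status = status
        self.body = body

# Status-specific "classes" are really preconfigured constructors:
HTTPNotFound = partial(Response, status=404)
HTTPAccepted = partial(Response, status=202)

resp = HTTPNotFound(body='missing')
print(resp.status)                       # 404
print(isinstance(resp, Response))        # True -- it's still a Response...
# ...but there is no HTTPNotFound *class* to test against:
# isinstance(resp, HTTPNotFound) raises TypeError, because
# HTTPNotFound is a functools.partial object, not a type.
```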
**David Goetz · a6c44d2764: allow replicator run_once to check specific devices/partitions**

Change-Id: If45f77fda269ae6e251579542e70eb71bd11fe2a
**David Goetz · d24e280bf4: obj replicator speed up**

Change-Id: If02b573353dedea9c2368ce4733fe97599229b2e
**Jenkins · 3482eb26f9: Merge "swift constraints are now settable via config"**
**John Dickinson · a2ac5efaa6: swift constraints are now settable via config**

Change previously hard-coded constants into config variables. This allows deployers to tune their cluster more specifically based on their needs. For example, a deployment that uses direct swift access for public content may need to set a larger header value constraint to allow for the full object name to be represented in the Content-Disposition header (which browsers check to determine the name of a downloaded object).

The new settings are set in the [swift-constraints] section of /etc/swift/swift.conf. Comments were also added to this config file. Cleaned up swift/common/constraints.py to pass pep8 1.3.3.

Functional tests now require constraints to be defined in /etc/test.conf or in /etc/swift/swift.conf (in the case of running the functional tests against a local swift cluster). To have any hope of tests passing, the defined constraints must match the constraints on the tested cluster. Removed a ton of "magic numbers" in both unit and functional tests.

Change-Id: Ie4588e052fd158314ddca6cd8fca9bc793311465
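A minimal sketch (not Swift's actual loader) of the pattern this describes: hard-coded limits become defaults that a [swift-constraints] section can override. The option names and default values below are illustrative:

```python
try:
    from configparser import ConfigParser   # Python 3
except ImportError:
    from ConfigParser import ConfigParser   # Python 2, Swift's era

# Formerly hard-coded constants, now just fallback defaults (illustrative):
DEFAULTS = {
    'max_file_size': 5368709122,
    'max_meta_count': 90,
    'max_object_name_length': 1024,
}

def load_constraints(path='/etc/swift/swift.conf'):
    """Return effective constraints: config values where present,
    built-in defaults otherwise."""
    parser = ConfigParser()
    parser.read(path)
    constraints = dict(DEFAULTS)
    if parser.has_section('swift-constraints'):
        for name in DEFAULTS:
            if parser.has_option('swift-constraints', name):
                constraints[name] = parser.getint('swift-constraints', name)
    return constraints
```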
**Darrell Bishop · 46a093f068: Obj replicator cleans up files where part dirs should be.**

If a partition directory was a file instead of a directory, the object-replicator would attempt to listdir() it, raise an exception, and try again next iteration. This condition could arise after running xfs_repair. Now, collect_jobs() will reap any partition directories which are actually files.

Fixes bug 1045954.
Change-Id: Id65d3eab2effd61c3f6b25250611c88c907b2a16
**John Dickinson · eb5f89ac25: fallocate call error handling**

fallocate() failures properly return HTTPInsufficientStorage from the object-server before reading from wsgi.input, allowing the proxy server to error_limit that node.

Change-Id: Idfc293bbab2cff1e508edf58045108ca1ef5cec1
**Michael Barton · da0e013d98: make obj replicator locking more optimistic**

Basically, do all hashing in the replicator without a lock, then lock briefly to rewrite the hashes file. Retry if someone else has modified the hashes file in the meantime (which should be rare). Also, a little refactoring.

Change-Id: I6257a53808d14b567bde70d2d18a9c58cb1e415a
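A minimal sketch of the optimistic pattern described: hash without holding the lock, then take the lock only to verify-and-commit, retrying if the file's mtime moved underneath us. The lock parameter (any context manager, e.g. a threading.Lock) and file layout are illustrative:

```python
import os
import pickle

def update_hashes_optimistically(hashes_file, compute_hashes, lock, retries=3):
    """Sketch: hash outside the lock; lock only to verify-and-commit."""
    for _ in range(retries):
        try:
            mtime_before = os.path.getmtime(hashes_file)
        except OSError:
            mtime_before = None
        hashes = compute_hashes()           # expensive part, no lock held
        with lock:                          # brief critical section
            try:
                mtime_now = os.path.getmtime(hashes_file)
            except OSError:
                mtime_now = None
            if mtime_now == mtime_before:   # unchanged: safe to commit
                with open(hashes_file, 'wb') as fp:
                    pickle.dump(hashes, fp)
                return hashes
        # Someone rewrote the file while we hashed; recompute and retry.
    raise RuntimeError('too much contention on %s' % hashes_file)
```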
**gholt · 61932d8506: Fixed bug where expirer would get confused by odd deletion times**

Fixed bug where the expirer would get confused by odd deletion times. Since this has already rolled out, I just capped things at ten 9s, or Sat Nov 20 17:46:39 2286. I can't wait for the Y2286 world panic. :/

Change-Id: Iba10963faa344a418a1fa573d5c85f4ff864b574
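The "ten 9s" cap is easy to verify: 9999999999 seconds after the Unix epoch is indeed that 2286 date.

```python
from datetime import datetime

cap = 9999999999                       # ten 9s, the expirer's new ceiling
print(datetime.utcfromtimestamp(cap))  # 2286-11-20 17:46:39
```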
**Ionuț Arțăriși · 9af3df9ee8: fix object replication on older rsync versions when using ipv4**

Fixes bug 987388
Change-Id: I6eb5c45fe1f5844ad853a4ff9bc8fd23cc9abd5d
**Ionuț Arțăriși · 9f5a6bba1a: only allow methods which implement HTTP verbs to be called remotely**

This fixes 500 server crashes caused by requests such as:

    curl -X__init__ "http://your-swift-object-server:6000/sda1/p/a/c/o"

Fixes bug 1005903
Change-Id: I6c0ad39a29e07ce5f46b0fdbd11a53a9a1010a04
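A minimal sketch of the guard this implies: dispatch only to handlers explicitly marked as public HTTP verbs, so arbitrary method names like `__init__` get a 405 instead of crashing the server. The decorator and server class here are illustrative, not Swift's actual code:

```python
def public(func):
    """Mark a handler as callable via HTTP."""
    func.publicly_accessible = True
    return func

class ObjectServerSketch(object):
    @public
    def GET(self, request):
        return '200 OK'

    def _private_helper(self, request):
        # Must never be reachable remotely.
        pass

    def handle(self, method, request):
        handler = getattr(self, method, None)
        if handler is None or not getattr(handler, 'publicly_accessible', False):
            return '405 Method Not Allowed'
        return handler(request)

server = ObjectServerSketch()
print(server.handle('GET', None))        # 200 OK
print(server.handle('__init__', None))   # 405 Method Not Allowed
```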
**Samuel Merritt · 783f16035a: Fix starvation in object server with fast clients.**

When an object server was handling concurrent GET or POST requests from very fast clients, it would starve other connected clients. The greenthreads responsible for servicing the fast clients would hog the processor and only rarely yield to another greenthread.

The reason this happens for GET requests is found in eventlet.greenio.GreenSocket, in the send() method. When you call .send(data) on a GreenSocket, it immediately calls .send(data) on its underlying real socket (socket._socketobject). If the real socket accepts all the data, then GreenSocket.send() returns without yielding to another greenthread. Only if the real socket failed to accept all the data (either .send(data) < len(data) or by raising EWOULDBLOCK) does the GreenSocket yield control.

Under most workloads, this isn't a problem. The TCP connection to client X can only consume data so quickly, and therefore the greenthread serving client X will frequently encounter a full socket buffer and yield control, so no clients starve. However, when there's a lot of contention for a single object from a large number of fast clients (e.g. on a LAN connected w/10Gb Ethernet), then one winds up in a situation where reading from the disk is slower than writing to the network, and so full socket buffers become rare, and therefore so do context switches. The end result is that many clients time out waiting for data. The situation for PUT requests is analogous; GreenSocket.recv() seldom encounters EWOULDBLOCK, so greenthreads seldom yield.

This patch calls eventlet.sleep() to yield control after each chunk, preventing any one greenthread's IO from blocking the hub for very long. This code has the flaw that it will greenthread-switch twice when a send() or recv() does block, but since there isn't a way to find out if a switch occurred or not, there's no way to avoid it. Since greenlet switches are quite fast (faster than system calls, which the object server does a lot of), this shouldn't have a significant performance impact.

Change-Id: I8549adfb4a198739b80979236c27b76df607eebf
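The fix reduces to one line per chunk loop. A sketch of the pattern, under the assumption of a simple file-like source and socket sink (eventlet.sleep() with no argument just yields to the hub):

```python
import eventlet

CHUNK_SIZE = 65536

def serve_object(disk_file, sock):
    """Sketch: stream an object, yielding to the hub after every chunk
    so one fast client cannot starve the others."""
    while True:
        chunk = disk_file.read(CHUNK_SIZE)
        if not chunk:
            break
        sock.sendall(chunk)
        # Cooperative yield: even if sendall() never blocked, give other
        # greenthreads a turn before reading the next chunk.
        eventlet.sleep()
```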
**Florian Hines · ccb6334c17: Expand recon middleware support**

Expand recon middleware to include support for account and container servers in addition to the existing object servers. Also add support for retrieving recent information from auditors, replicators, and updaters. In the case of certain checks (such as container auditors) the stats returned are only for the most recent path processed.

The middleware has also been refactored and should now also handle errors better in cases where stats are unavailable. While new checks have been added, the output from pre-existing checks has not changed. This should allow existing 3rd-party utilities such as the Swift ZenPack to continue to function.

Change-Id: Ib9893a77b9b8a2f03179f2a73639bc4a6e264df7
**Darrell Bishop · 3d3ed34f44: Adding StatsD logging to Swift.**

Documentation, including a list of metrics reported and their semantics, is in the Admin Guide in a new section, "Reporting Metrics to StatsD". An optional "metric prefix" may be configured which will be prepended to every metric name sent to StatsD.

Here is the rationale for doing a deep integration like this versus only sending metrics to StatsD in middleware. It's the only way to report some internal activities of Swift in a real-time manner. So to have one way of reporting to StatsD and one place/style of configuration, even some things (like, say, timing of PUT requests into the proxy-server) which could be logged via middleware are consistently logged the same way (deep integration via the logger delegate methods).

When log_statsd_host is configured, get_logger() injects a swift.common.utils.StatsdClient object into the logger as logger.statsd_client. Then a set of delegate methods on LogAdapter either pass through to the StatsdClient object or become no-ops. This allows StatsD logging to look like:

    self.logger.increment('some.metric.here')

and do the right thing in all cases and with no messy conditional logic.

I wanted to use the pystatsd module for the StatsD client, but the version on PyPi is lagging the git repo (and is missing both the prefix functionality and timing_since() method). So I wrote my swift.common.utils.StatsdClient. The interface is the same as pystatsd.Client, but the code was written from scratch. It's pretty simple, and the tests I added cover it. This also frees Swift from an optional dependency on the pystatsd module, making this feature easier to enable.

There's test coverage for the new code and all existing tests continue to pass. Refactored out the _one_audit_pass() method in swift/account/auditor.py and swift/container/auditor.py. Fixed some misc. PEP8 violations. Misc test cleanups and refactorings (particularly the way "fake logging" is handled).

Change-Id: Ie968a9ae8771f59ee7591e2ae11999c44bfe33b2
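StatsD's wire format is simple enough that a minimal client fits in a few lines. A sketch of the kind of thing described (not Swift's actual StatsdClient), showing the fire-and-forget UDP "name:value|type" protocol with an optional prefix:

```python
import socket
import time

class MiniStatsdClient(object):
    """Illustrative client: fire-and-forget UDP datagrams to StatsD."""

    def __init__(self, host, port=8125, prefix=''):
        self.addr = (host, port)
        self.prefix = prefix + '.' if prefix else ''
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def _send(self, payload):
        self.sock.sendto(payload.encode('utf-8'), self.addr)

    def increment(self, metric):
        self._send('%s%s:1|c' % (self.prefix, metric))        # counter

    def timing(self, metric, ms):
        self._send('%s%s:%s|ms' % (self.prefix, metric, ms))  # timer

    def timing_since(self, metric, start):
        self.timing(metric, (time.time() - start) * 1000)

# Usage sketch:
# client = MiniStatsdClient('statsd.example.com', prefix='swift')
# client.increment('object-server.PUT.errors')
```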
**Samuel Merritt · bb509dd863: As-unique-as-possible partition replica placement.**

This commit introduces a new algorithm for assigning partition replicas to devices. Basically, the ring builder organizes the devices into tiers (first zone, then IP/port, then device ID). When placing a replica, the ring builder looks for the emptiest device (biggest parts_wanted) in the furthest-away tier.

In the case where zone-count >= replica-count, the new algorithm will give the same results as the one it replaces. Thus, no migration is needed. In the case where zone-count < replica-count, the new algorithm behaves differently from the old algorithm. The new algorithm will distribute things evenly at each tier so that the replication is as high-quality as possible, given the circumstances. The old algorithm would just crash, so again, no migration is needed.

Handoffs have also been updated to use the new algorithm. When generating handoff nodes, first the ring looks for nodes in other zones, then other ips/ports, then any other drive. The first handoff nodes (the ones in other zones) will be the same as before; this commit just extends the list of handoff nodes.

The proxy server and replicators have been altered to avoid looking at the ring's replica count directly. Previously, with a replica count of C, RingData.get_nodes() and RingData.get_part_nodes() would return lists of length C, so some other code used the replica count when it needed the number of nodes. If two of a partition's replicas are on the same device (e.g. with 3 replicas, 2 devices), then that assumption is no longer true. Fortunately, all the proxy server and replicators really needed was the number of nodes returned, which they already had. (Bonus: now the only code that mentions replica_count directly is in the ring and the ring builder.)

Change-Id: Iba2929edfc6ece89791890d0635d4763d821a3aa
**Greg Lange · 8d2fe89a7d: Added an internal client.**

Refactored the object expirer to use this client.

Change-Id: Ibeca6dba873f8b4a558ecf3ba6e8d23d36f545b0
**Jenkins · 6682138b0a: Merge "Make ring class interface slightly more abstracted from implementation."**
**John Dickinson · 1ecf5ebba1: updated copyright date for all files**

Change-Id: Ifd909d3561c2647770a7e0caa3cd91acd1b4f298
**Michael Barton · e008c2ebb8: Make ring class interface slightly more abstracted from implementation.**

Change-Id: I0f55d61c7b8de30460f17a69e5d9946494dbda6e
**David Goetz · a98ce6eade: Change tpooled_get_hashes back to err, err on Timeout (object server REPLICATE needs it), and unit tests**

Change-Id: Ic60c33570594fd2c0939043863b013aa2103505d
**David Goetz · 2b3aab86bb: Fix object replicator to handle Timeouts**

Fixes: lp 814263
Change-Id: I4c8b73d4cb0540fa105f240b2a9d481cf9c1e55c
**Jenkins · a885fe3b14: Merge "Updated TimeoutError and except Exception refs..."**