a7e6d44706bd790de20830f9a72750d7cc3241c7
2061 Commits
| Author | SHA1 | Message | Date |
|---|---|---|---|
| Jenkins | 3ba5ea354b | Merge "Fixed bug with container reclaim/report race" | |
| Ionuț Arțăriși | 9af3df9ee8 | fix object replication on older rsync versions when using ipv4. Fixes bug 987388. Change-Id: I6eb5c45fe1f5844ad853a4ff9bc8fd23cc9abd5d | |
| Jenkins | 54072cc951 | Merge "Added global catchall to account-reaper." | |
| Jenkins | efb0436838 | Merge "Raise ClientException for invalid auth version." | |
| Dan Prince | f48f253f4c | Raise ClientException for invalid auth version. Fixes LP Bug #1008667. Change-Id: I1e767a804b617eff8a9700c3d98b2360c040933a | |
| Greg Lange | 63ad27cd5f | Added global catchall to account-reaper. bug 644075. Change-Id: I75c73a42ddd8654a39a2fd82320941199bee4363 | |
| Ionuț Arțăriși | 9f5a6bba1a | only allow methods which implement HTTP verbs to be called remotely. This fixes 500 server crashes caused by requests such as: curl -X__init__ "http://your-swift-object-server:6000/sda1/p/a/c/o". Fixes bug 1005903. Change-Id: I6c0ad39a29e07ce5f46b0fdbd11a53a9a1010a04 | |
| gholt | 213f385348 | Fixed bug with container reclaim/report race. Before, a really lagged cluster might not get its final report for a deleted container database sent to its corresponding account database. In such a case, the container database file would be permanently deleted while still leaving the container listed in the account database, never to be updated since the actual container database file was gone. The only way to fix such a situation before was to recreate and redelete the container. Now, the container database file will not be permanently deleted until it has sent its final report successfully to its corresponding account database. Change-Id: I1f42202455e7ecb0533b84ce7f45fcc7b98aeaa3 | |
| Samuel Merritt | 783f16035a | Fix starvation in object server with fast clients. When an object server was handling concurrent GET or POST requests from very fast clients, it would starve other connected clients. The greenthreads responsible for servicing the fast clients would hog the processor and only rarely yield to another greenthread. The reason this happens for GET requests is found in eventlet.greenio.GreenSocket, in the send() method. When you call .send(data) on a GreenSocket, it immediately calls .send(data) on its underlying real socket (socket._socketobject). If the real socket accepts all the data, then GreenSocket.send() returns without yielding to another greenthread. Only if the real socket failed to accept all the data (either .send(data) < len(data) or by raising EWOULDBLOCK) does the GreenSocket yield control. Under most workloads, this isn't a problem. The TCP connection to client X can only consume data so quickly, and therefore the greenthread serving client X will frequently encounter a full socket buffer and yield control, so no clients starve. However, when there's a lot of contention for a single object from a large number of fast clients (e.g. on a LAN connected w/10Gb Ethernet), then one winds up in a situation where reading from the disk is slower than writing to the network, and so full socket buffers become rare, and therefore so do context switches. The end result is that many clients time out waiting for data. The situation for PUT requests is analogous; GreenSocket.recv() seldom encounters EWOULDBLOCK, so greenthreads seldom yield. This patch calls eventlet.sleep() to yield control after each chunk, preventing any one greenthread's IO from blocking the hub for very long. This code has the flaw that it will greenthread-switch twice when a send() or recv() does block, but since there isn't a way to find out if a switch occurred or not, there's no way to avoid it. Since greenlet switches are quite fast (faster than system calls, which the object server does a lot of), this shouldn't have a significant performance impact. Change-Id: I8549adfb4a198739b80979236c27b76df607eebf | |
| gholt | 7a9c2d6ea5 | Proxy logging content-length fix. Change-Id: Iad2f12b3db44378c1369481c567b3d13b9a4b75f | |
| Jenkins | 4d25774012 | Merge "Fixed bug where 204 would sometimes be chunked" | |
| Florian Hines | 243b439507 | Ensure empty results are returned. Make sure that empty but still valid results (like no unmounted drives) aren't treated as 500 errors. Change-Id: I9588e2711d7916406f15613d5a26b9f0cf38235a | |
| gholt | 135f154285 | Fixed bug where 204 would sometimes be chunked. Not sure how this got introduced (which really annoys me) but here's the fix to make sure the content-length / transfer-encoding headers are set properly. Specifically, the proxy was sometimes returning transfer-encoding: chunked with no content-length on 204 No Content responses where it used to return content-length: 0 and no transfer-encoding header at all. Change-Id: I0927d102bc5e4324e38dbbb44be9033a6cd8ee20 | |
| Jenkins | ff761638ca | Merge "Fixed another make_pre_auth bug (wsgi.input)" | |
| Jenkins | 6e77cb97a5 | Merge "Fixed query removal bug in make_pre_authed_request" | |
| gholt | d4c5818354 | Fixed another make_pre_auth bug (wsgi.input). Change-Id: I8b3c182ab85d4c5545e0a4259a64a496ebaf2bcb | |
| Thierry Carrez | 9a2d9b920b | Adding missing files in generated tarballs. Fix MANIFEST.in to include tools/, tox.ini and test/sample.conf in generated tarballs. Fixes bug 960018 and bug 1005801. Change-Id: Ifa83eab62300e3aec71ced217dc3cdcb2846ea0e | |
| John Dickinson | d668b27c09 | fixed doc table format. Change-Id: I319de933ecfb1e3853e3064656968c36980ce5f5 | |
| John Dickinson | ad6a00d0a2 | 1.5.1 version bump to continue dev. Change-Id: Ied84c8274b3aee5f63a11e557c6c59729666d99f | |
| John Dickinson | 576be4d77e | Updated AUTHORS, CHANGELOG, and version for 1.5.0 release. Change-Id: I9e0e26394a1892d757e33806511940cbe43be4d5 | |
| gholt | e060561506 | Fixed query removal bug in make_pre_authed_request. Change-Id: I1b8238fb2ffe07b1474f7d8f040fdc620b6897d7 | |
| Jenkins | 676c338b7c | Merge "Expand recon middleware support" | |
| Michael Barton | 7c98e7a625 | Move proxy server logging to middleware. Change-Id: I771c87207d4e1821e32c3424b341d182cc7ea7c0 | |
| gholt | 9c8afc8b0e | Fixed the one new thing PEP8 1.1 found. Change-Id: Iaa15bd47ff5ba48bd971ccc8c1707930977116df | |
| Florian Hines | ccb6334c17 | Expand recon middleware support. Expand recon middleware to include support for account and container servers in addition to the existing object servers. Also add support for retrieving recent information from auditors, replicators, and updaters. In the case of certain checks (such as container auditors) the stats returned are only for the most recent path processed. The middleware has also been refactored and should now also handle errors better in cases where stats are unavailable. While new checks have been added, the output from pre-existing checks has not changed. This should allow existing 3rd party utilities such as the Swift ZenPack to continue to function. Change-Id: Ib9893a77b9b8a2f03179f2a73639bc4a6e264df7 | |
| Jenkins | a74cd3b01b | Merge "Remove swift3 from here." | |
| Jenkins | 86ddaab942 | Merge "!! Changed db_preallocation to False" | |
| Jenkins | 0a79be6e91 | Merge "Clean up weird test code" | |
| Chmouel Boudjnah | d02a73f4a9 | Remove swift3 from here. Reference https://github.com/fujita/swift3 in associated_projects. Implements blueprint add-associated-projects-docs. Change-Id: I48ef4c03449edf6ef4fda1a391228cacac7d2ac6 | |
| John Dickinson | 1e90b61076 | Re-add cname lookup and domain remap middleware. Revert "removed cname lookup middleware". This reverts commit | |
| gholt | 9eb797b099 | !! Changed db_preallocation to False. Long explanation, but hopefully answers any questions. We don't like changing the default behavior of Swift unless there's a really good reason and, up until now, I've tried doing this with this new db_preallocation setting. For clusters with dedicated account/container servers that usually have fewer disks overall but SSD for speed, having db_preallocation on will gobble up disk space quite quickly and the fragmentation it's designed to fight isn't that big a speed impact to SSDs anyway. For clusters with account/container servers spread across all servers along with object servers usually having standard disks for cost, having db_preallocation off will cause very fragmented database files impacting speed, sometimes dramatically. Weighing these two negatives, it seems the second is the lesser evil. The first can cause disks to fill up and disable the cluster. The second will cause performance degradation, but the cluster will still function. Furthermore, if just one piece of code that touches all databases runs with db_preallocation on, it's effectively on for the whole cluster. We discovered this most recently when we finally configured everything within the Swift codebase to have db_preallocation off, only to find out Slogging didn't know about the new setting and so ran with it on and started filling up SSDs. So that's why I'm proposing this change to the default behavior. We will definitely need to post a prominent notice of this change with the next release. Change-Id: I48a43439264cff5d03c14ec8787f718ee44e78ea | |
| Pete Zaitcev | f04b30e496 | Clean up weird test code. While fixing something else, the strange code in the test suite presented itself. Looks like a massive copy-paste error and a couple of random oddities. Change-Id: I191e8cd9299b9336b0600363780d2930a04d1fd5 | |
| gholt | 1c3b75c291 | Reverted the pulling out of various middleware: RateLimit, StaticWeb, TempURL/FormPOST. Change-Id: I988e93e6f4aacb817a2e354d43a04e47516fdf88 | |
| Darrell Bishop | 3d3ed34f44 | Adding StatsD logging to Swift. Documentation, including a list of metrics reported and their semantics, is in the Admin Guide in a new section, "Reporting Metrics to StatsD". An optional "metric prefix" may be configured which will be prepended to every metric name sent to StatsD. Here is the rationale for doing a deep integration like this versus only sending metrics to StatsD in middleware. It's the only way to report some internal activities of Swift in a real-time manner. So to have one way of reporting to StatsD and one place/style of configuration, even some things (like, say, timing of PUT requests into the proxy-server) which could be logged via middleware are consistently logged the same way (deep integration via the logger delegate methods). When log_statsd_host is configured, get_logger() injects a swift.common.utils.StatsdClient object into the logger as logger.statsd_client. Then a set of delegate methods on LogAdapter either pass through to the StatsdClient object or become no-ops. This allows StatsD logging to look like: self.logger.increment('some.metric.here') and do the right thing in all cases and with no messy conditional logic. I wanted to use the pystatsd module for the StatsD client, but the version on PyPi is lagging the git repo (and is missing both the prefix functionality and timing_since() method). So I wrote my swift.common.utils.StatsdClient. The interface is the same as pystatsd.Client, but the code was written from scratch. It's pretty simple, and the tests I added cover it. This also frees Swift from an optional dependency on the pystatsd module, making this feature easier to enable. There's test coverage for the new code and all existing tests continue to pass. Refactored out _one_audit_pass() method in swift/account/auditor.py and swift/container/auditor.py. Fixed some misc. PEP8 violations. Misc test cleanups and refactorings (particularly the way "fake logging" is handled). Change-Id: Ie968a9ae8771f59ee7591e2ae11999c44bfe33b2 | |
| Jenkins | 86f37c47d7 | Merge "Let some swift-ring-builder commands take >1 arg." | |
| Victor Rodionov | a6595e22d1 | SwiftException base class for all swift exceptions. Also add new exception class SwiftConfigurationError, which can be used to indicate that swift parameters in swift conf files are not correct. Change-Id: I39bff9068a19c8e1c1b4aac38cb756c5e46d75e6 | |
| Jenkins | d76106724b | Merge "have wsgi preauth copy over HTTP_HOST" | |
| Jenkins | eba080a64c | Merge "Speed up swift-get-nodes by 2x." | |
| Jenkins | e51cdbc8ba | Merge "Implement unit_test config to disable syslog." | |
| Samuel Merritt | 8e6f099daa | Speed up swift-get-nodes by 2x. It was loading the ring off disk once to print the primary nodes, and then loading the whole thing off disk again to print the handoff nodes. Changed it to only load the ring off disk once. Change-Id: I6f4cd0af9762e1e69660c3eb20586590b5339e5f | |
| David Goetz | 20384f1f84 | have wsgi preauth copy over HTTP_HOST. Change-Id: I1d9a6dcc6fcdad5cf99353eaf7eb69e703c38e22 | |
| Samuel Merritt | bb509dd863 | As-unique-as-possible partition replica placement. This commit introduces a new algorithm for assigning partition replicas to devices. Basically, the ring builder organizes the devices into tiers (first zone, then IP/port, then device ID). When placing a replica, the ring builder looks for the emptiest device (biggest parts_wanted) in the furthest-away tier. In the case where zone-count >= replica-count, the new algorithm will give the same results as the one it replaces. Thus, no migration is needed. In the case where zone-count < replica-count, the new algorithm behaves differently from the old algorithm. The new algorithm will distribute things evenly at each tier so that the replication is as high-quality as possible, given the circumstances. The old algorithm would just crash, so again, no migration is needed. Handoffs have also been updated to use the new algorithm. When generating handoff nodes, first the ring looks for nodes in other zones, then other ips/ports, then any other drive. The first handoff nodes (the ones in other zones) will be the same as before; this commit just extends the list of handoff nodes. The proxy server and replicators have been altered to avoid looking at the ring's replica count directly. Previously, with a replica count of C, RingData.get_nodes() and RingData.get_part_nodes() would return lists of length C, so some other code used the replica count when it needed the number of nodes. If two of a partition's replicas are on the same device (e.g. with 3 replicas, 2 devices), then that assumption is no longer true. Fortunately, all the proxy server and replicators really needed was the number of nodes returned, which they already had. (Bonus: now the only code that mentions replica_count directly is in the ring and the ring builder.) Change-Id: Iba2929edfc6ece89791890d0635d4763d821a3aa | |
| Samuel Merritt | 47f0dbb125 | One PEP8 fix to make tox happy again. Change-Id: I5ff2056f9f2eb99bfb98b020e3fc013332100e12 | |
| John Dickinson | b47bcf19e4 | removed cname lookup middleware. The code has moved to https://github.com/notmyname/swift-cnamelookup. For current users of cname lookup, this will require installing the new package and changing the "use" line of the cname lookup conf section to: [filter:cname_lookup] use = egg:swift_cnamelookup#swift_cnamelookup And then 'swift-init proxy reload'. Change-Id: If622486ddb04a53251244c9840aa3cfe72168fc5 | |
| gholt | 3f00c1a630 | Pulled out Rate Limit middleware. Rate Limit middleware is now at http://dpgoetz.github.com/swift-ratelimit/ For current users of Rate Limit, this will require installing the new package and changing the "use" line of the ratelimit conf section to: [filter:ratelimit] use = egg:swiftratelimit#middleware And then 'swift-init proxy reload'. Change-Id: I2ab774e9cee9fba4103c1be4bea6d52d1adb29f7 | |
| John Dickinson | 7dfbd785b0 | removed domain remap middleware. The code has moved to https://github.com/notmyname/swift-domainremap. For current users of domain remap, this will require installing the new package and changing the "use" line of the domain remap conf section to: [filter:domain_remap] use = egg:swift_domainremap#swift_domainremap And then 'swift-init proxy reload'. Change-Id: I710caf9b991f9d37df36b826ae4338086d0ec36d | |
| Jenkins | 78d1c0ae42 | Merge "fix pre_auth request funcs to handle quoted paths" | |
| gholt | c0532a6ef2 | Pulled out TempURL/FormPOST. TempURL/FormPOST is now at http://gholt.github.com/swift-tempurl/ For current users of TempURL/FormPOST, this will require installing the new package and changing the "use" line of the tempurl and formpost conf sections to: [filter:tempurl] use = egg:swifttempurl#tempurl [filter:formpost] use = egg:swifttempurl#formpost And then 'swift-init proxy reload'. Change-Id: I5bddf7f9e09ee07815530a41c46ff901fc21b447 | |
| Jenkins | 8d2e7bd112 | Merge "Pulled StaticWeb out to separate project" | |
| Jenkins | e833a6c484 | Merge "added annegentle entry in .mailmap" | |
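
A few of the commits above describe their mechanics in enough detail to sketch. Commit 9f5a6bba1a closes the hole where any object-server method, public or not, could be invoked over HTTP (e.g. `curl -X__init__ ...`). A minimal sketch of that kind of guard, using a hypothetical `ObjectController` rather than Swift's actual server code:

```python
# Illustrative sketch only (hypothetical names, not the real Swift patch):
# dispatch a request method only if it is a known HTTP verb with a handler,
# so "curl -X__init__ ..." gets a 405 instead of crashing the worker.
ALLOWED_METHODS = {'GET', 'HEAD', 'PUT', 'POST', 'DELETE'}

class ObjectController:
    def GET(self, path):
        return 200, 'would stream the object at %s' % path

    def handle_request(self, method, path):
        if method not in ALLOWED_METHODS or not hasattr(self, method):
            return 405, 'Method Not Allowed'
        return getattr(self, method)(path)

if __name__ == '__main__':
    ctrl = ObjectController()
    print(ctrl.handle_request('GET', '/sda1/p/a/c/o'))       # dispatched
    print(ctrl.handle_request('__init__', '/sda1/p/a/c/o'))  # rejected, 405
```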
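Commit 783f16035a fixes starvation by yielding to the eventlet hub after every chunk, so a fast client whose socket never blocks cannot monopolize the scheduler. A minimal sketch of that idea, with an illustrative chunk reader rather than the object server's real streaming code:

```python
# Sketch of the "yield after each chunk" idea: eventlet.sleep() forces a
# greenthread switch even when the socket buffer never fills, so one fast
# client cannot hog the hub while an object is streamed.
import eventlet

def iter_chunks(file_obj, chunk_size=65536):
    """Read a file in chunks, cooperatively yielding after each one."""
    while True:
        chunk = file_obj.read(chunk_size)
        if not chunk:
            break
        yield chunk
        eventlet.sleep()  # explicit cooperative yield after every chunk
```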
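Commit 3d3ed34f44 describes injecting a StatsdClient into the logger and exposing delegate methods that become no-ops when log_statsd_host is not configured, so callers can simply write self.logger.increment('some.metric.here'). A minimal sketch of that pattern (class and method names follow the commit message; the bodies are illustrative, not Swift's actual implementation):

```python
# Sketch of a tiny StatsD counter client plus a logger wrapper whose
# delegate methods silently no-op when no StatsD host is configured.
import socket

class StatsdClient:
    def __init__(self, host, port=8125, prefix=''):
        self.addr = (host, port)
        self.prefix = prefix
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def increment(self, metric):
        # StatsD counter wire format is "<name>:<value>|c" over UDP.
        payload = '%s%s:1|c' % (self.prefix, metric)
        self.sock.sendto(payload.encode('utf-8'), self.addr)

class LogAdapter:
    def __init__(self, statsd_client=None):
        self.statsd_client = statsd_client

    def increment(self, metric):
        # Pass through to StatsD if configured, otherwise do nothing, so
        # call sites need no conditional logic around metrics.
        if self.statsd_client:
            self.statsd_client.increment(metric)

logger = LogAdapter(StatsdClient('127.0.0.1', prefix='proxy-server.'))
logger.increment('some.metric.here')
```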
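Commit bb509dd863 also extends handoff selection: devices in other zones are preferred first, then devices on a different ip:port, then any remaining drive. A sketch of that ordering over an illustrative device model (not the ring builder's real data structures):

```python
# Sketch of tier-aware handoff ordering: rank candidate devices by how far
# they are from the partition's primary devices (new zone, then new server,
# then same server but a different drive).
from collections import namedtuple

Device = namedtuple('Device', 'id zone ip port')

def order_handoffs(candidates, primaries):
    used_zones = {d.zone for d in primaries}
    used_endpoints = {(d.ip, d.port) for d in primaries}

    def tier_distance(dev):
        if dev.zone not in used_zones:
            return 0  # farthest away: a zone not yet holding a replica
        if (dev.ip, dev.port) not in used_endpoints:
            return 1  # same zone, but a different server
        return 2      # same server, different drive

    return sorted(candidates, key=tier_distance)
```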