Commit Graph

369 Commits

Author SHA1 Message Date
Bill Huber
0bcd7fd50e Update Erasure Coding Overview doc to remove Beta version
The major functionality of EC has been released for Liberty and
the beta version of the code has been removed since it is now
in production.
Change-Id: If60712045fb1af803093d6753fcd60434e637772
2015年12月18日 11:43:12 -06:00
Jenkins
500f7e8d34 Merge "Unification of manpages and conf-samples (default values, etc)" 2015年12月11日 03:20:52 +00:00
John Dickinson
5eaa5543c7 add sample proxy pipeline for keystone integration
Change-Id: I4b4fd9179d0234f001940e215c97d40a2a6204cd
2015年11月30日 10:47:19 -08:00
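As a rough illustration of the kind of keystone pipeline the commit above refers to, here is a minimal sketch (middleware selection is abbreviated and the host names, tenant and credentials are placeholders, not taken from the commit):
 [pipeline:main]
 pipeline = catch_errors gatekeeper healthcheck proxy-logging cache authtoken keystoneauth proxy-logging proxy-server

 [filter:authtoken]
 paste.filter_factory = keystonemiddleware.auth_token:filter_factory
 auth_uri = http://keystonehost:5000
 identity_uri = http://keystonehost:35357
 admin_tenant_name = service
 admin_user = swift
 admin_password = password

 [filter:keystoneauth]
 use = egg:swift#keystoneauth
 operator_roles = admin, swiftoperator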
Peter Lisák
28c4b7310f Unification of manpages and conf-samples (default values, etc)
Change-Id: I47a3127ef698b4bd1537b1562901ee9c2b5924d4
2015年11月30日 10:08:16 -08:00
Ondřej Nový
001dc25318 Missing log_name option added
Change-Id: If0224f346aa7372268115284c112e8f3a60a9be4
2015年11月15日 11:26:06 +01:00
Alistair Coles
1a2b54fc0a Fix missing *-replicator conf sections in deployment guide
The doc for these sections was missing because of an rst error: the
source is there in the rst file but didn't make it into the html output.
Add doc for per_diff and max_diffs in the account and container doc
sections (see the sketch after this entry).
Also, fix a bunch of other sphinx build errors and most of the warnings.
Change-Id: If9ed2619b2f92c6c65a94f41d8819db8726d3893
2015年10月23日 14:58:38 +01:00
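For reference, a minimal sketch of the replicator settings the commit above documents (the values are only illustrative, taken to be close to the usual defaults rather than from the commit):
 [account-replicator]
 # database rows handled per diff pass
 per_diff = 1000
 # cap on how many diff passes are attempted per partition
 max_diffs = 100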
Romain LE DISEZ
71f6fd025e Allow configuring the rsync modules to which the replicators send data
Currently, the rsync module to which the replicators send data is static,
which prevents administrators from tailoring the rsync configuration to
their deployment or needs.
As an example, the sample rsyncd configuration encourages setting a
connection limit for the account, container and object modules, to protect
devices from an excessive number of parallel connections that would hurt
performance.
On a server with many devices it is tempting to increase this limit
proportionally, but nothing guarantees that the connections will be
distributed evenly. In the worst case, a single device can receive all the
connections, which severely hurts performance.
This commit adds a new option named 'rsync_module' to the *-replicator sections
of the *-server configuration files. This configuration variable can be
interpolated with device attributes like ip, port, device, zone, ... by using
the format {NAME}. eg:
 rsync_module = {replication_ip}::object_{device}
With this configuration, an administrator can solve the connection
distribution problem by creating one module per device in the rsyncd
configuration (see the sketch after this entry).
The default values are backward compatible:
 {replication_ip}::account
 {replication_ip}::container
 {replication_ip}::object
Option vm_test_mode is deprecated by this commit, but backward compatibility is
maintained. The option is only effective when rsync_module is not set. In that
case, {replication_port} is appended to the default value of rsync_module.
Change-Id: Iad91df50dadbe96c921181797799b4444323ce2e
2015年09月07日 08:00:18 +02:00
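To make the per-device layout concrete, a hedged sketch of one rsyncd module and the matching replicator setting (the device name, connection limit and lock-file path are illustrative, not from the commit):
 # /etc/rsyncd.conf -- one module per device
 [object_sda1]
 max connections = 4
 path = /srv/node
 read only = false
 lock file = /var/lock/object_sda1.lock

 # object-server.conf
 [object-replicator]
 rsync_module = {replication_ip}::object_{device}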
Jenkins
0279411c58 Merge "versioned writes middleware" 2015年08月10日 17:37:49 +00:00
Thiago da Silva
035a411660 versioned writes middleware
Rewrite object versioning as middleware to simplify the PUT method
in the object controller.
The functionality remains basically the same, the only major
difference being that SLO manifest files can now be versioned.
DLO manifests are still not supported as part of this patch.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
DocImpact
Change-Id: Ie899290b3312e201979eafefb253d1a60b65b837
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Signed-off-by: Prashanth Pai <ppai@redhat.com>
2015年08月07日 14:11:32 -04:00
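A minimal sketch of how the middleware described above might be enabled in proxy-server.conf; the allow_versioned_writes flag and its default are stated as assumptions, not quoted from the commit:
 [filter:versioned_writes]
 use = egg:swift#versioned_writes
 # assumed to default to off; must be enabled explicitly
 allow_versioned_writes = false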
Zhao Lei
4ac1fea5d1 Fix some spelling typos in comments
s/overide/override for object-expirer.conf and sample.
s/automaticaly/automatically for swift/proxy/controllers/obj.py
Change-Id: Ife107c7a1005a5d4959288db50a7f8f33c522dd4
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
2015年08月07日 22:21:37 +08:00
Jenkins
4e92d5e7b7 Merge "do container listing updates in another (green)thread" 2015年08月06日 00:27:09 +00:00
Charles Hsu
f0d51882b9 Add extra_header_count to document and config.
Change-Id: Iec86b488d71553c295afe7098822ce2046df9546
2015年08月05日 22:09:40 +08:00
Jenkins
e1683fdb2e Merge "Support keystone v3 domains in swift-dispersion" 2015年07月31日 06:59:01 +00:00
Charles Hsu
345785837f Remove error_suppression_interval, error_suppression_limit options.
These two options belong to the proxy-server, not the account-replicator.
Change-Id: Ie030ecffd213e56db32df77c69b847479d96308f
2015年07月29日 22:25:44 -07:00
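For context, a sketch of where those two settings do belong; the section layout follows the usual proxy-server.conf shape and the values are illustrative:
 [app:proxy-server]
 use = egg:swift#proxy
 error_suppression_interval = 60
 error_suppression_limit = 10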
Falk Reimann
363a256e58 Support keystone v3 domains in swift-dispersion
This provides the capability to specify a project_name,
project_domain_name and user_domain_name in /etc/swift/dispersion.conf.
If these values are set in dispersion.conf they are passed on to the
swift client. This makes it possible to use a dedicated dispersion
project that is not in the Keystone default domain. Changes
were applied to swift-dispersion-populate and swift-dispersion-report.
Relevant man pages, the example dispersion.conf and the admin guide were
updated accordingly.
DocImpact
Closes-Bug: #1468374
Change-Id: I0e716f8d281b4d0f510bc568bcee4a13fc480ff7
2015年07月24日 13:40:24 -05:00
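A hedged sketch of a dispersion.conf using the new options from the commit above (the endpoint URL, names and key are placeholders; the surrounding auth_* option names are assumptions about the existing file, not part of this change):
 [dispersion]
 auth_version = 3
 auth_url = http://keystonehost:5000/v3/
 auth_user = dispersion:dispersion_user
 auth_key = dispersion_password
 project_name = dispersion
 project_domain_name = default
 user_domain_name = default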
John Dickinson
2289137164 do container listing updates in another (green)thread
The actual server-side changes are simple. The tests are a different
matter. Many changes were needed to the object server tests to
handle the now-async calls to the container server. In an effort to
test this properly, some drive-by changes were made to improve tests.
I tested this patch by doing zero-byte object writes to one container
as fast as possible. Then I did it again while also saturating 2 of the
container replica's disks. The results are linked below.
https://gist.github.com/notmyname/2bb85acfd8fbc7fc312a
DocImpact
Change-Id: I737bd0af3f124a4ce3e0862a155e97c1f0ac3e52
2015年07月22日 01:19:58 -07:00
Oshrit Feder
6cafd0a4c0 Fix Container Sync example
The container-sync realms configuration uses cluster_ as a prefix for
cluster names. When a cluster is referenced, the prefix must not be
included. Fix the examples and sample conf to make it clearer that only
the name of the cluster should be passed (see the sketch after this entry).
Change-Id: I2e521d86faffb59e1b45d3f039987ee023c5e939
2015年07月08日 16:37:31 -07:00
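To make the prefix rule concrete, a sketch of a realm definition and how the cluster is then referenced; realm, cluster, key and URL values are illustrative:
 # /etc/swift/container-sync-realms.conf
 [realm1]
 key = realm1key
 cluster_dfw1 = http://dfw1.host/v1/

 # referenced without the cluster_ prefix:
 X-Container-Sync-To: //realm1/dfw1/AUTH_account/container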
Christian Schwede
edfca861b6 Increase httplib._MAXHEADERS
Python 2.7.9+ and 3.2.6+ limit the maximum number of headers in httplib to 100
[1,2,3]. This setting is too low for Swift.
By default the maximum number of allowed headers depends on the number of max
allowed metadata settings plus a default value of 32 for regular http headers.
If for some reason this is not enough (custom middleware for example) it can be
increased with the extra_header_count constraint.
[1] https://bugs.python.org/issue16037
[2] https://hg.python.org/cpython/raw-file/15c95b7d81dc/Misc/NEWS
[3] https://hg.python.org/cpython/raw-file/v3.2.6/Misc/NEWS
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Co-Authored-By: Thomas Herve <therve@redhat.com>
Change-Id: I388fd697ec88476024b0e9f1ae75ba35ff765282
2015年06月26日 14:35:40 +00:00
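A sketch of raising the constraint mentioned above if the computed header limit proves too low; the swift.conf section name and the value are assumptions for illustration:
 [swift-constraints]
 # extra http headers allowed beyond those implied by the metadata limits
 extra_header_count = 16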
Jenkins
0009a43eb4 Merge "Allow 1+ object-servers-per-disk deployment" 2015年06月22日 03:45:11 +00:00
Darrell Bishop
df134df901 Allow 1+ object-servers-per-disk deployment
Enabled by a new > 0 integer config value, "servers_per_port" in the
[DEFAULT] config section for object-server and/or replication server
configs. The setting's integer value determines how many different
object-server workers handle requests for any single unique local port
in the ring. In this mode, the parent swift-object-server process
continues to run as the original user (i.e. root if low-port binding
is required), binds to all ports as defined in the ring, and forks off
the specified number of workers per listen socket. The child, per-port
servers drop privileges and behave pretty much how object-server workers
always have, except that because the ring has unique ports per disk, the
object-servers will only be handling requests for a single disk. The
parent process detects dead servers and restarts them (with the correct
listen socket), starts missing servers when an updated ring file is
found with a device on the server with a new port, and kills extraneous
servers when their port is found to no longer be in the ring. The ring
files are stat'ed at most every "ring_check_interval" seconds, as
configured in the object-server config (same default of 15s).
Immediately stopping all swift-object-worker processes still works by
sending the parent a SIGTERM. Likewise, a SIGHUP to the parent process
still causes the parent process to close all listen sockets and exit,
allowing existing children to finish serving their existing requests.
The drop_privileges helper function now has an optional param to
suppress the setsid() call, which otherwise screws up the child workers'
process management.
The class method RingData.load() can be told to only load the ring
metadata (i.e. everything except replica2part2dev_id) with the optional
kwarg, header_only=True. This is used to keep the parent and all
forked off workers from unnecessarily having full copies of all storage
policy rings in memory.
A new helper class, swift.common.storage_policy.BindPortsCache,
provides a method to return a set of all device ports in all rings for
the server on which it is instantiated (identified by its set of IP
addresses). The BindPortsCache instance will track mtimes of ring
files, so they are not opened more frequently than necessary.
This patch includes enhancements to the probe tests and
object-replicator/object-reconstructor config plumbing to allow the
probe tests to work correctly both in the "normal" config (same IP but
unique ports for each SAIO "server") and a server-per-port setup where
each SAIO "server" must have a unique IP address and unique port per
disk within each "server". The main probe tests only work with 4
servers and 4 disks, but you can see the difference in the rings for the
EC probe tests where there are 2 disks per server for a total of 8
disks. Specifically, swift.common.ring.utils.is_local_device() will
ignore the ports when the "my_port" argument is None. Then,
object-replicator and object-reconstructor both set self.bind_port to
None if server_per_port is enabled. Bonus improvement for IPv6
addresses in is_local_device().
This PR for vagrant-swift-all-in-one will aid in testing this patch:
https://github.com/swiftstack/vagrant-swift-all-in-one/pull/16/
Also allow SAIO to answer is_local_device() better; common SAIO setups
have multiple "servers" all on the same host with different ports for
the different "servers" (which happen to match the IPs specified in the
rings for the devices on each of those "servers").
However, you can configure the SAIO to have different localhost IP
addresses (e.g. 127.0.0.1, 127.0.0.2, etc.) in the ring and in the
servers' config files' bind_ip setting.
This new whataremyips() implementation combined with a little plumbing
allows is_local_device() to accurately answer, even on an SAIO.
In the default case (an unspecified bind_ip defaults to '0.0.0.0') as
well as an explicit "bind to everything" like '0.0.0.0' or '::',
whataremyips() behaves as it always has, returning all IP addresses for
the server.
Also updated probe tests to handle each "server" in the SAIO having a
unique IP address.
For some (noisy) benchmarks that show servers_per_port=X is at least as
good as the same number of "normal" workers:
https://gist.github.com/dbishop/c214f89ca708a6b1624a#file-summary-md
Benchmarks showing the benefits of I/O isolation with a small number of
slow disks:
https://gist.github.com/dbishop/fd0ab067babdecfb07ca#file-results-md
If you were wondering what the overhead of threads_per_disk looks like:
https://gist.github.com/dbishop/1d14755fedc86a161718#file-tabular_results-md
DocImpact
Change-Id: I2239a4000b41a7e7cc53465ce794af49d44796c6
2015年06月18日 12:43:50 -07:00
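A minimal sketch of enabling the servers-per-port mode in an object-server config; the worker count is illustrative and the interval is the default mentioned above:
 [DEFAULT]
 # run this many object-server workers for each unique local port in the ring
 servers_per_port = 3
 # how often (seconds) the parent process re-stats the ring files
 ring_check_interval = 15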
Koert van der Veer
11e5c4adf0 Allow default reseller prefix in domain_remap middleware
Previously, the reseller prefix needed to be provided in the host name
even when the domain was unique to that reseller. With the
default_reseller_prefix, any domain which matches in this middleware
will be passed on with a reseller prefix, whether or not one was
provided.
Change-Id: I5aa5ce78ad1ee2e3660cce4c3e07306f8999f02a
Implements: blueprint domainremap-reseller-domains
2015年06月06日 12:54:41 -07:00
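A sketch of the middleware section with the new option; the storage domain and prefix values are illustrative:
 [filter:domain_remap]
 use = egg:swift#domain_remap
 storage_domain = example.com
 default_reseller_prefix = AUTH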
Joanna H. Huang
af8d842076 Replaced setting run_pause with standard interval
The deprecated directive `run_pause` should be replaced with the more
standard `interval`. `run_pause` is still supported for backward
compatibility. This patch updates the object replicator to use
`interval` and support `run_pause`. It also updates its sample config
and documentation.
Co-Authored-By: Joanna H. Huang <joanna.huitzu.huang@gmail.com>
Co-Authored-By: Kamil Rykowski <kamil.rykowski@intel.com>
Change-Id: Ie2a3414a96a94efb9273ff53a80b9d90c74fff09
Closes-Bug: #1364735 
2015年05月25日 11:47:47 +02:00
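A sketch of the preferred spelling in the object-server config, with the deprecated form shown commented out; the value is illustrative:
 [object-replicator]
 interval = 30
 # run_pause = 30   (deprecated, still honored)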
Jenkins
2ea8bae389 Merge "Allow rsync to use compression" 2015年05月13日 10:58:08 +00:00
Clay Gerrard
4aba2fbb25 Check if REST API version is valid
Swift doesn't check if the used API version is valid. Currently there
is only one valid REST API version, but that might change in the
future.
This patch enforces "v1" or "v1.0" as the version string when accessing
account, containers and objects.
The list of accepted version strings can be manually overridden using a
comma-separated list in swift.conf to make this backward-compatible.
The constraint loader has been modified slightly to accept strings as
well as integers.
Any request to an account, container, and object which does not
provide the correct version string will get a 400 BadRequest response.
The allowed api versions are by default excluded from /info.
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: John Dickinson <me@not.mn>
Closes Bug #1437442
Change-Id: I5ab6e236544378abf2eab562ab759513d09bc256
2015年04月14日 16:00:37 -07:00
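Assuming the comma-separated override lives in swift.conf's constraints section (the option and section names here are assumptions, not quoted from the commit), a sketch:
 [swift-constraints]
 valid_api_versions = v1,v1.0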
paul luse
647b66a2ce Erasure Code Reconstructor
This patch adds the erasure code reconstructor. It follows the
design of the replicator but:
 - There is no notion of update() or update_deleted().
 - There is a single job processor
 - Jobs are processed partition by partition.
 - At the end of processing a rebalanced or handoff partition, the
 reconstructor will remove successfully reverted objects if any.
It also includes various ssync changes, such as the addition of the
reconstruct_fa() function, called from ssync_sender, which performs the
actual reconstruction while sending the object to the receiver.
Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: John Dickinson <me@not.mn>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tushar Gohad <tushar.gohad@intel.com>
Co-Authored-By: Samuel Merritt <sam@swiftstack.com>
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Yuan Zhou <yuan.zhou@intel.com>
blueprint ec-reconstructor
Change-Id: I7d15620dc66ee646b223bb9fff700796cd6bef51
2015年04月14日 00:52:17 -07:00
Yuan Zhou
61a9d35fd5 Update container sync to use internal client
This patch changes container sync to use Internal Client instead
of Direct Client.
In the current design, container sync uses direct_get_object (which
talks to storage nodes directly) to get the newest source object.
This works fine for replication storage policies; however, with
erasure coding policies direct_get_object would only return part of
the object (it is encoded as several fragments). Using Internal
Client retrieves the original object in the EC case.
Note that the container sync put/delete part already works with EC,
since it uses Simple Client.
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
DocImpact
Change-Id: I91952bc9337f354ce6024bf8392046a1ecf6ecc9
2015年04月14日 00:52:17 -07:00
Tushar Gohad
ed54066288 Add support for policy types, 'erasure_coding' policy
This patch extends the StoragePolicy class for non-replication storage
policies, the first one being "erasure coding".
Changes:
 - Add 'policy_type' support to the BaseStoragePolicy class
 - Disallow direct instantiation of the BaseStoragePolicy class
 - Subclass BaseStoragePolicy:
   - "StoragePolicy":
     . Replication policy, default
     . policy_type = 'replication'
   - "ECStoragePolicy":
     . Erasure Coding policy
     . policy_type = 'erasure_coding'
     . Private member variables:
       ec_type (EC backend),
       ec_num_data_fragments (number of fragments the original
       data is split into after the erasure coding operation),
       ec_num_parity_fragments (number of parity fragments
       generated during erasure coding)
     . Private methods:
       EC-specific attribute and ring validator methods
 - Swift will use PyECLib, a Python Erasure Coding library, for
   erasure coding operations. PyECLib is already an approved
   OpenStack core requirement.
   (https://bitbucket.org/kmgreen2/pyeclib/)
 - Add test cases for:
   - the 'policy_type' StoragePolicy member
   - policy_type == 'erasure_coding'
DocImpact
Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Paul Luse <paul.e.luse@intel.com>
Co-Authored-By: Samuel Merritt <sam@swiftstack.com>
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Yuan Zhou <yuan.zhou@intel.com>
Change-Id: Ie0e09796e3ec45d3e656fb7540d0e5a5709b8386
Implements: blueprint ec-proxy-work
2015年04月13日 22:57:42 -07:00
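A sketch of how such a policy might be declared in swift.conf; the policy index, name, ec_type backend and fragment counts are illustrative, not from the commit:
 [storage-policy:1]
 name = ec10-4
 policy_type = erasure_coding
 ec_type = jerasure_rs_vand
 ec_num_data_fragments = 10
 ec_num_parity_fragments = 4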
Christian Schwede
16ee994c1e Set connection timeout in container sync
Container sync might get stuck without a connection timeout if the remote proxy
is not responding.
This patch sets a default timeout of 5.0 seconds for the connection attempt. The
value is much higher than other connection timeouts inside Swift (0.5); however
there might be a much higher latency to the remote peer, thus playing it safe.
There is also a retry if the attempt timed out.
Note that this setting only applies to the connection request itself. Setting
this timeout does not apply when the remote proxy goes away during a request.
Also added a short test to ensure urlopen is called with the timeout value.
Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Change-Id: Ic08a55157fa91fe1316653781adf4d66eead61bc
Partial-Bug: 1419916
2015年04月10日 10:06:40 +00:00
Prashanth Pai
9c33bbde69 Allow rsync to use compression
From rsync's man page:
-z, --compress
With this option, rsync compresses the file data as it is sent to the
destination machine, which reduces the amount of data being transmitted --
something that is useful over a slow connection.
A configurable option has been added to allow rsync to compress, but only
if the remote node is in a different region than the local one.
NOTE: Objects that are already compressed (for example: .tar.gz, .mp3)
might slow down the syncing process.
On-wire compression can also be extended to ssync later in a different
change if required. In the case of ssync, we could explore faster
compression libraries like lz4. rsync uses zlib, which is slow but offers
a higher compression ratio.
Change-Id: Ic9b9cbff9b5e68bef8257b522cc352fc3544db3c
Signed-off-by: Prashanth Pai <ppai@redhat.com>
2015年03月02日 14:39:58 +05:30
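A sketch of the replicator setting; the option name rsync_compress is an assumption here, since the commit does not name the new option:
 [object-replicator]
 # only compresses traffic to nodes in a different region
 rsync_compress = yes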
Jenkins
d6467d3385 Merge "Add multiple reseller prefixes and composite tokens" 2015年02月24日 16:12:01 +00:00
Donagh McCabe
89397c5b67 Add multiple reseller prefixes and composite tokens
This change is in support of Composite Tokens and Service Accounts
(see http://specs.openstack.org/openstack/swift-specs/specs/in_progress/
service_token.html)
During coding, minor changes were made compared to the original
specification. See https://review.openstack.org/138771 for these changes.
DocImpact
Change-Id: I6072b4efb3a479a8e0cc2d9c11ffda5764b55e30
2015年02月23日 15:57:20 +00:00
Richard Hawkins
489dd5ff5d Add support for container TempURL Keys
Change-Id: Ic22b0b84b657e6cac7e0062fa410eefb09bc0f4d
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
2015年02月10日 21:37:10 +00:00
John Dickinson
b45b83fb00 Correct the config default for delay_auth_decision
Updated proxy-server.conf-sample with the correct default. Also
updated the note on the overview-auth doc page.
Change-Id: I5cd62a7a118a28f7b58f47b8d8d4d963f6bc7347
2015年02月05日 11:52:41 -08:00
Jenkins
9621c861c4 Merge "Make more memcache options configurable" 2015年02月02日 08:14:42 +00:00
Bob Ball
cec00660cb Remove deprecated config variables
I1f8f5064ea8028af60f167df9b97e215cdadba44 deprecated auth_host etc but the default
config still used them. Ieac26806bd420aa08fc79bbc6a11eb6a1c15c7df then switched
devstack to using the new variables, but if the old variables still existed in the
default config, some installations were broken (e.g. XenServer CI)
Partial-bug: 1415795
Change-Id: I7076fa03ab531cbb1114918f75113620b65590dc
2015年01月29日 09:11:09 +00:00
Clay Gerrard
2012339982 Make more memcache options configurable
More memcache options can be set in the memcache.conf or proxy-server.conf
 * connect_timeout
 * pool_timeout
 * tries
 * io_timeout
Options set in proxy-server.conf are considered more specific to the memcache
middleware.
DocImpact
Change-Id: I194d0f4d88c6cb8c797a37dcab48f2d8473e7a4e
2015年01月14日 11:16:32 -05:00
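A sketch of the newly configurable options in memcache.conf; the values are illustrative, and the same keys can be set in the proxy-server.conf cache filter where they take precedence, as noted above:
 [memcache]
 memcache_servers = 127.0.0.1:11211
 connect_timeout = 0.3
 pool_timeout = 1.0
 tries = 3
 io_timeout = 2.0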
Jenkins
0e660fade3 Merge "Change black/white-listing to use sysmeta." 2015年01月10日 00:21:41 +00:00
David Goetz
172a9b369f Change black/white-listing to use sysmeta.
The way we do this now involves a conf change and a proxy
reload which is a pain. You can now just set these:
X-Account-Sysmeta-Global-Write-Ratelimit: WHITELIST
or
X-Account-Sysmeta-Global-Write-Ratelimit: BLACKLIST
NOTE:
The existing proxy config settings: account_whitelist
and account_blacklist will continue to work.
Change-Id: I532663f1d2c75d03170c5fdb9b330416822fbc88
2015年01月09日 08:35:50 -08:00
Alistair Coles
fd8eb6b280 Add undocumented options to keystoneauth sample config
Adds is_admin and allow_overrides to the keystoneauth section
of proxy-server.conf.sample and also adds related comments to
the keystoneauth docstring.
DocImpact
Change-Id: I7c751880cb6742db7347f31c4d32b523e33da75b
2015年01月06日 16:57:17 +00:00
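A sketch of the keystoneauth section with the newly documented options; the values shown are assumed defaults, for illustration only:
 [filter:keystoneauth]
 use = egg:swift#keystoneauth
 operator_roles = admin, swiftoperator
 is_admin = false
 allow_overrides = true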
Keshava Bharadwaj
1ffe6b3953 Adds console logging to swift-drive-audit
This patch adds console logging ability to swift-drive-audit.
There are cases where logging to the console is needed when drive-audit
runs. The output can be consumed by monitoring tools such as Icinga to
flag errors.
DocImpact
Change-Id: Ia1e1effcbd89bd2cf6d5b8c64019f1647c736a3a
2014年12月18日 14:19:12 +05:30
Eohyung Lee
87d8626505 fix example typo
5 * 1024 * 1024 = 5242880
Change-Id: I0eeb6e2d9fbd79103cd8c658627344f73fed9498
2014年11月20日 11:38:49 +09:00
Alistair Coles
c9f8246378 Make in process functional tests use sample proxy-server.conf
This patch was first motivated by noticing that the proxy
server pipeline used for in process functional tests was
out of date with respect to the pipeline in
/etc/proxy-server.conf.sample. Rather than cut and paste
the current pipeline into the in process setup, it seems
like a better idea would be to have the in process tests
always use the sample config.
A further benefit is that in process functional tests will
pick up changes to the sample config introduced by patches -
previously test/functional/__init__.py would need to be
manually modified to run in process functional tests
on new middleware for example.
Note: because the pipeline is now loaded using entry points,
'python setup.py [develop|install]' will now be needed
before running the tests.
Obvious next steps would be to do the same for the backend
servers, and to allow alternative config files and dir's
to be specified, but this patch is the first step.
Also drive-by fixes some typos in proxy-server.conf.sample
Change-Id: If442bd7c2b1721ec92839c4490924ba33e1545d8
2014年11月14日 10:44:41 +00:00
Clay Gerrard
f9bed74d1b Return 403 on unauthorized upload when over account quota
If you try an unauthorized upload into a container that is over quota you get
a 403 instead of a 413, but if you attempt an unauthorized upload when an
*account* is over quota you can see the 413 even though the upload would have
been rejected by the authorize callback. By wrapping the authorize callback
associated with the incoming request we can make sure to only return our 413
when the request would have been authorized otherwise.
Drive by doc fixes thanks to acoles:
 * State that container_quotas should be after auth middleware in
 the class doc string.
 * Add note to proxy-server.conf.sample that account_quotas should
 be after auth middleware.
The equivalent statements are already in place for each quota
middleware.
DocImpact
Closes-Bug: #1387415
Change-Id: I2a88b3ec79d35bfdd73ea6ad64e376b7c7af4ea6
2014年10月30日 14:03:56 -07:00
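A sketch of a pipeline that respects the ordering note above, with both quota filters placed after the auth middleware; the middleware selection is abbreviated and the filter names must match whatever the local [filter:...] sections are called:
 [pipeline:main]
 pipeline = catch_errors cache tempauth container_quotas account_quotas proxy-logging proxy-server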
Jenkins
8647f86494 Merge "Add new features to swift-drive-audit" 2014年10月10日 09:24:41 +00:00
Jenkins
0d502de2f5 Merge "Zero-copy object-server GET responses with splice()" 2014年09月26日 15:48:25 +00:00
Lorcan
cb20763893 Add new features to swift-drive-audit
This patch adds two new features to swift-drive-audit. The first
is an option in the drive-audit.conf file that allows the operator
to prevent the drives ever being unmounted automatically,
regardless of the amount of errors present. This could be of
benefit in very small systems consisting of only one or two drives
where the operator would like to manually unmount/fix the
particular drive(s) and minimise any potential downtime.
The second is another option in drive-audit.conf that allows the
operator to select a recon directory. This directory will then
have a drive.recon file which will keep an up-to-date record of
the swift drives and any errors associated with them. An example
of the output would be as follows:
{"/srv/node/disk2": "0", "/srv/node/disk3": "25", "/srv/node/disk0": "0",
"/srv/node/disk1": "0", "/srv/node/disk10": "0", "/srv/node/disk7": "0",
"/srv/node/disk4": "137", "/srv/node/disk5": "0", "/srv/node/disk8": "0",
"/srv/node/disk9": "0", "/srv/node/disk6": "0", "/srv/node/disk11": "60"}
This would allow the operator to monitor the errors on the swift
drives without having to spend time searching through logs. Also, if
this is accepted, it should be possible to add an option to
swift-recon that would keep track of this at a system level.
Change-Id: Ib5dacf8622b7363e070c274c7c30c8ead448a055
2014年09月25日 16:27:25 +01:00
Rafael Rivero
c1f6569c00 Fixes several typos (Swift)
Corrects spelling errors found in comments.
Change-Id: I228a888e3f256569ea32ef1613092dbd63e13c62
2014年09月18日 21:18:50 -07:00
Samuel Merritt
7d0e5ebe69 Zero-copy object-server GET responses with splice()
This commit lets the object server use splice() and tee() to move data
from disk to the network without ever copying it into user space.
Requires Linux. Sorry, FreeBSD folks. You still have the old
mechanism, as does anyone who doesn't want to use splice. This
requires a relatively recent kernel (2.6.38+) to work, which includes
the two most recent Ubuntu LTS releases (Precise and Trusty) as well
as RHEL 7. However, it excludes Lucid and RHEL 6. On those systems,
setting "splice = on" will result in warnings in the logs but no
actual use of splice.
Note that this only applies to GET responses without Range headers. It
can easily be extended to single-range GET requests, but this commit
leaves that for future work. Same goes for PUT requests, or at least
non-chunked ones.
On some real hardware I had laying around (not a VM), this produced a
37% reduction in CPU usage for GETs made directly to the object
server. Measurements were done by looking at /proc/<pid>/stat,
specifically the utime and stime fields (user and kernel CPU jiffies,
respectively).
Note: There is a Python module called "splicetee" available on PyPi,
but it's licensed under the GPL, so it cannot easily be added to
OpenStack's requirements. That's why this patch uses ctypes instead.
Also fixed a long-standing annoyance in FakeLogger:
 >>> fake_logger.warn('stuff')
 >>> fake_logger.get_lines_for_level('warn')
 []
 >>>
This, of course, is because the correct log level is 'warning'. Now
you get a KeyError if you call get_lines_for_level with a bogus log
level.
Change-Id: Ic6d6b833a5b04ca2019be94b1b90d941929d21c8
2014年09月18日 16:02:47 -07:00
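A sketch of enabling the feature in the object-server config, using the setting quoted in the commit above:
 [app:object-server]
 use = egg:swift#object
 # needs splice()/tee() support (Linux 2.6.38+); elsewhere it only logs a warning
 splice = on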
Jenkins
034fae630c Merge "Restrict keystone cross-tenant ACLs to IDs" 2014年09月13日 00:53:47 +00:00
John Dickinson
b7281cf2c5 make the bind_port config setting required
In a long-term effort to change the recommended ports for Swift,
the first step is to require the bind_port in config files. Later,
we can change the recommended setting.
Anyone currently explicitly setting the ports will not be affected.
Anyone not setting the ports will need to specify them to match their
rings.
DocImpact
Change-Id: Icca83a263acdd0afc9016424a3e9f8c15e944789
2014年09月08日 07:28:43 -07:00
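A sketch of the now-required setting in an object-server config; the port value is illustrative and must match the ports in the ring, as the commit notes:
 [DEFAULT]
 bind_ip = 0.0.0.0
 bind_port = 6000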