87 Commits

Jianjian Huo | 4ed2b89cb7 | Sharder: warn when sharding appears to have stalled.

This patch adds a configurable timeout after which the sharder will warn if a container DB has not completed sharding. The new config option is container_sharding_timeout, with a default of 172800 seconds (2 days). Drive-by fix: recording sharding progress now also covers the case of shard range shrinking. Co-Authored-By: Alistair Coles <alistairncoles@gmail.com> Change-Id: I6ce299b5232a8f394e35f148317f9e08208a0c0f
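
A minimal sketch of the new option; the option name, section and default all come from the commit message:

```ini
[container-sharder]
# warn when a container DB has not finished sharding after this long
# (default per the commit message: 172800 seconds, i.e. 2 days)
container_sharding_timeout = 172800
```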

Zuul | 24acc6e56b | Merge "Add backend rate limiting middleware"

Matthew Oliver | bf4edefce4 | DB Replicator: Add handoff_delete option

Currently the object-replicator has an option called `handoff_delete` which allows us to define the number of replicas which are ensured in swift. Once a handoff node sees that many successful responses it can go ahead and delete the handoff partition. By default it's 'auto', or rather the number of primary nodes. But this can be reduced. It's useful for draining full disks, but has to be used carefully. This patch adds the same option to the DB replicator, where it works the same way, except that the delete is done at the per-DB level instead of per partition. Because it's implemented in the DB replicator, the option is now available to both the account and container replicators. Change-Id: Ide739a6d805bda20071c7977f5083574a5345a33
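
A sketch of the option in a DB replicator section; the name and 'auto' default come from the commit message, and the same form would apply in [account-replicator]:

```ini
[container-replicator]
# number of successful responses after which a handoff DB may be deleted;
# 'auto' means the number of primary nodes (default per the commit message)
handoff_delete = auto
```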

Alistair Coles | ccaf49a00c | Add backend rate limiting middleware

This is a fairly blunt tool: ratelimiting is per device and applied independently in each worker, but this at least provides some limit to disk IO on backend servers. GET, HEAD, PUT, POST, DELETE, UPDATE and REPLICATE methods may be rate-limited. Only requests with a path starting '<device>/<partition>', where <partition> can be cast to an integer, will be rate-limited. Other requests, including, for example, recon requests with paths such as 'recon/version', are unconditionally forwarded to the next app in the pipeline. OPTIONS and SSYNC methods are not rate-limited. Note that SSYNC sub-requests are passed directly to the object server app and will not pass through this middleware. Change-Id: I78b59a081698a6bff0d74cbac7525e28f7b5d7c1
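
A hedged sketch of how such a filter might be wired into a backend server pipeline; the filter name, entry point and option name below are assumptions, not taken from the commit message, and the pipeline is trimmed for brevity:

```ini
# object-server.conf (sketch; names are assumptions)
[pipeline:main]
pipeline = backend_ratelimit object-server

[filter:backend_ratelimit]
use = egg:swift#backend_ratelimit
# assumed option name: upper bound on requests per device per second
requests_per_device_per_second = 50
```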

Tim Burke | 5b9a90b65d | sharder: Make stats interval configurable

Change-Id: Ia794a7e21794d2c1212be0e2d163004f85c2ab78
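
The commit message names no option, so this is only a guess at the knob it exposes; both the option name and the value shown are assumptions:

```ini
[container-sharder]
# how often (seconds) the sharder reports its stats; name and value
# are assumptions, the commit message does not state them
stats_interval = 3600
```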

Alistair Coles | 2a593174a5 | sharder: avoid small tail shards

A container is typically sharded when it has grown to have an object count of shard_container_threshold + N, where N << shard_container_threshold. If sharded using the default rows_per_shard of shard_container_threshold / 2 then this would previously result in 3 shards: the tail shard would typically be small, having only N rows. This behaviour caused more shards to be generated than desirable. This patch adds a minimum-shard-size option to swift-manage-shard-ranges, and a corresponding option in the sharder config, which can be used to avoid small tail shards. If set to greater than one then the final shard range may be extended to more than rows_per_shard in order to avoid a further shard range with less than minimum-shard-size rows. In the example given, if minimum-shard-size is set to M > N then the container would shard into two shards having rows_per_shard rows and rows_per_shard + N rows respectively. The default value for minimum-shard-size is rows_per_shard // 5. If all options have their default values this results in minimum-shard-size being 100000. Closes-Bug: #1928370 Co-Authored-By: Matthew Oliver <matt@oliver.net.au> Change-Id: I3baa278c6eaf488e3f390a936eebbec13f2c3e55
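
A sketch of the sharder-config form of the option; the default value comes from the commit message, while the underscore spelling (the message gives the CLI spelling minimum-shard-size) is an assumption:

```ini
[container-sharder]
# avoid tail shards smaller than this many rows; default per the commit
# message is rows_per_shard // 5, i.e. 100000 with all defaults
minimum_shard_size = 100000
```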

Alistair Coles | a87317db6e | sharder: support rows_per_shard in config file

Make rows_per_shard an option that can be configured in the [container-sharder] section of a config file. For auto-sharding, this option was previously hard-coded to shard_container_threshold // 2. The swift-manage-shard-ranges command line tool already supported rows_per_shard on the command line and will now also load it from a config file if specified. Any value given on the command line takes precedence over any value found in a config file. Change-Id: I820e133a4e24400ed1e6a87ebf357f7dac463e38
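
A sketch of the option; the value shown is the previously hard-coded default, derived from shard_container_threshold // 2 (consistent with the minimum-shard-size commit above, where rows_per_shard // 5 = 100000):

```ini
[container-sharder]
# rows per shard when splitting; previously hard-coded to
# shard_container_threshold // 2
rows_per_shard = 500000
```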

Zuul | 60dd36cb6b | Merge "Add absolute values for shard shrinking config options"

Zuul | b3def185c6 | Merge "Allow floats for all intervals"

Alistair Coles | 18f20daf38 | Add absolute values for shard shrinking config options

Add two new sharder config options for configuring shrinking behaviour:

- shrink_threshold: the size below which a shard may shrink
- expansion_limit: the maximum size to which an acceptor shard may grow

The new options match the 'swift-manage-shard-ranges' command line options and take absolute values. They provide alternatives to the current equivalent options 'shard_shrink_point' and 'shard_shrink_merge_point', which are expressed as percentages of 'shard_container_threshold'. 'shard_shrink_point' and 'shard_shrink_merge_point' are deprecated and will be overridden by the new options if the new options are explicitly set in a config file. The default values of the new options are the same as the values that would result from the default 'shard_container_threshold', 'shard_shrink_point' and 'shard_shrink_merge_point', i.e.:

- shrink_threshold: 100000
- expansion_limit: 750000

Change-Id: I087eac961c1eab53540fe56be4881e01ded1f60e
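
The new options as they would appear in the sharder section, with the defaults stated in the commit message:

```ini
[container-sharder]
# absolute sizes, replacing the deprecated percentage-based options
shrink_threshold = 100000
expansion_limit = 750000
```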

Alistair Coles | f7fd99a880 | Use ContainerSharderConf class in sharder and manage-shard-ranges

Change the swift-manage-shard-ranges default expansion-limit to equal the sharder daemon default merge_size, i.e. 750000. The previous default of 500000 had erroneously differed from the sharder default value. Introduce a ContainerSharderConf class to encapsulate loading of sharder conf and the definition of defaults. ContainerSharder inherits this and swift-manage-shard-ranges instantiates it. Rename ContainerSharder member vars to match the equivalent vars and cli options in manage_shard_ranges:

- shrink_size -> shrink_threshold
- merge_size -> expansion_limit
- split_size -> rows_per_shard

(This direction of renaming is chosen so that the manage_shard_ranges cli options are not changed.) Rename ContainerSharder member vars to match the conf file option name:

- scanner_batch_size -> shard_scanner_batch_size

Remove some ContainerSharder member vars that were not used outside of the __init__ method:

- shrink_merge_point
- shard_shrink_point

Change-Id: I8a58a82c08ac3abaddb43c11d26fda9fb45fe6c1

Tim Burke | c374a7a851 | Allow floats for all intervals

Change-Id: I91e9bc02d94fe7ea6e89307305705c383087845a

Matthew Oliver | fb186f6710 | Add a config file option to swift-manage-shard-ranges

While working on the shrinking recon drops, we want to display numbers that directly relate to how the tool should behave. But currently all options of the s-m-s-r tool are driven by cli options. This creates a disconnect: defining what should be used in the sharder and in the tool via separate options is bound to fail. It would be much better to define the required default options for your environment in one place that both the sharder and the tool can use. This patch does some refactoring and adds max_shrinking and max_expanding options to the sharding config, as well as a --config option to the tool. The --config option expects a config with a '[container-sharder]' section. It only supports the shard options:

- max_shrinking
- max_expanding
- shard_container_threshold
- shard_shrink_point
- shard_merge_point

The latter 2 are used to generate the s-m-s-r's:

- shrink_threshold
- expansion_limit
- rows_per_shard

Use of cli arguments takes precedence over the config. Change-Id: I4d0147ce284a1a318b3cd88975e060956d186aec
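
A sketch of a config file such as the tool would read via --config; the option names are those listed in the commit message, but the values and file path are illustrative only:

```ini
# e.g. /etc/swift/container-server.conf (values illustrative)
[container-sharder]
max_shrinking = 1
max_expanding = -1
shard_container_threshold = 1000000
```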

Matthew Oliver | 1de9834816 | Report final in_progress when sharding is complete

On every sharder cycle we update the in-progress recon stats for each sharding container. However, we tend not to run this one final time once sharding is complete, because the DB state has changed to SHARDED, so the in_progress stats never get their final update. For those collecting this data to monitor, sharding/cleaving appears never to complete. This patch adds a new option `recon_shared_timeout` which allows sharded containers to continue to be processed by `_record_sharding_progress()` for an amount of time after they've finished sharding. Change-Id: I5fa39d41f9cd3b211e45d2012fd709f4135f595e
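
A hedged sketch using the option name as spelled in the commit message; the value shown is an assumption, since the message states no default:

```ini
[container-sharder]
# keep updating in-progress recon stats for this long after a container
# reaches the SHARDED state (value shown is an assumption)
recon_shared_timeout = 43200
```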

Tim Burke | 9eb81f6e69 | Allow replication servers to handle all request methods

Previously, the replication_server setting could take one of three states:

- If unspecified, the server would handle all available methods.
- If "true", "yes", "on", etc., it would only handle replication methods (REPLICATE, SSYNC).
- If any other value (including blank), it would only handle non-replication methods.

However, because SSYNC tunnels PUTs, POSTs, and DELETEs through the same object-server app that's responding to SSYNC, setting `replication_server = true` would break the protocol. This has been the case ever since ssync was introduced. Now, get rid of that second state -- operators can still set `replication_server = false` as a principle-of-least-privilege guard to ensure proxy-servers can't make replication requests, but replication servers will be able to serve all traffic. This will allow replication servers to be used as general internal-to-the-cluster endpoints, leaving non-replication servers to handle client-driven traffic. Closes-Bug: #1446873 Change-Id: Ica2b41a52d11cb10c94fa8ad780a201318c4fc87
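
The surviving state, per the commit message, as it might look in an object server config (the section placement is an assumption):

```ini
[app:object-server]
# least-privilege guard: refuse replication methods on servers
# that the proxy talks to
replication_server = false
```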

Clay Gerrard | 4601548dab | Deprecate per-service auto_create_account_prefix

If we move it to constraints it's more globally accessible in our code, but more importantly it's more obvious to ops that everything breaks if you try to mis-configure different values per-service. Change-Id: Ib8f7d08bc48da12be5671abe91a17ae2b49ecfee

Clay Gerrard | e7cd8df5e9 | Add option for debug query logging

Change-Id: Ic16b505a37748f50dc155212671efb45e2c5051f

Gilles Biannic | a4cc353375 | Make log format for requests configurable

Add the log_msg_template option in proxy-server.conf and log_format in a/c/o-server.conf. It is a string parsable by Python's format() function. Some fields containing user data might be anonymized by using log_anonymization_method and log_anonymization_salt. Change-Id: I29e30ef45fe3f8a026e7897127ffae08a6a80cd9
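
A hedged sketch; the option names come from the commit message, but the section placement, template fields and anonymization values shown are assumptions:

```ini
[app:proxy-server]
# Python format() template; the field names here are assumptions
log_msg_template = {client_ip} {remote_addr} {method} {path} {status_int}
log_anonymization_method = md5
log_anonymization_salt = mysalt
```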

Clay Gerrard | 06cf5d298f | Add databases_per_second to db daemons

Most daemons have a "go as fast as you can then sleep for 30 seconds" strategy towards resource utilization; the object-updater and object-auditor however have some "X_per_second" options that allow operators much better control over how they spend their I/O budget. This change extends that pattern into the account-replicator, container-replicator, and container-sharder, which have been known to peg CPUs when they're not IO limited. Partial-Bug: #1784753 Change-Id: Ib7f2497794fa2f384a1a6ab500b657c624426384
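
The option name comes from the commit title; per the message it applies equally in [account-replicator] and [container-sharder]. The value shown is an assumption:

```ini
[container-replicator]
# cap on DB processing rate (value shown is an assumption)
databases_per_second = 50
```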

FatemaKhalid | cfeb32c66b | Adding keep_idle config value to socket

Users can configure the TCP keep-idle time for sockets. The default value is the previous value, 600. Change-Id: Ib7fb166deb8a87ae4e97ba0671048b1ec079a2ef Closes-Bug: #1759606
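
A sketch of the option; the name and default come from the commit message, while the section placement is an assumption:

```ini
[DEFAULT]
# seconds a TCP connection may sit idle before keepalive probes start
# (600 matches the previous hard-wired behaviour)
keep_idle = 600
```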

Samuel Merritt | 8e651a2d3d | Add fallocate_reserve to account and container servers.

The object server can be configured to leave a certain amount of disk space free; default is 1%. This is useful in avoiding 100%-full filesystems, as those can get Swift in a state where the filesystem is too full to write tombstones, so you can't delete objects to free up space. When a cluster has accounts/containers and objects on the same disks, you can wind up with a 100%-full disk since account and container servers don't respect fallocate_reserve. This commit makes account and container servers respect fallocate_reserve so that disks shared between account/container and object rings won't get 100% full. When a disk's free space falls below the configured reserve, account and container PUT, POST, and REPLICATE requests will fail with a 507 status code. These are the operations that can significantly increase the disk space used by a given database. I called the parameter "fallocate_reserve" for consistency with the object server. No actual fallocate() call happens under Swift's control in the account or container servers (sqlite3 might make such a call, but it's out of our hands). Change-Id: I083442eef14bf83c0ea717b1decb3e6b56dbf1d0
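
A sketch combining this commit with the percentage support introduced further down the log (commit 0da9da5131); the 1% default is stated in both messages, the section placement is an assumption:

```ini
# account-server.conf / container-server.conf
[DEFAULT]
# refuse PUT/POST/REPLICATE with 507 once free space drops below this;
# may be a percentage (as here, the default) or an absolute byte count
fallocate_reserve = 1%
```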

Matthew Oliver | 2641814010 | Add sharder daemon, manage_shard_ranges tool and probe tests

The sharder daemon visits container dbs and when necessary executes the sharding workflow on the db. The workflow is, in overview:

- perform an audit of the container for sharding purposes.
- move any misplaced objects that do not belong in the container to their correct shard.
- move shard ranges from FOUND state to CREATED state by creating shard containers.
- move shard ranges from CREATED to CLEAVED state by cleaving objects to shard dbs and replicating those dbs. By default this is done in batches of 2 shard ranges per visit.

Additionally, when the auto_shard option is True (NOT yet recommended in production), the sharder will identify shard ranges for containers that have exceeded the threshold for sharding, and will also manage the sharding and shrinking of shard containers. The manage_shard_ranges tool provides a means to manually identify shard ranges and merge them to a container in order to trigger sharding. This is currently the recommended way to shard a container. Co-Authored-By: Alistair Coles <alistairncoles@gmail.com> Co-Authored-By: Tim Burke <tim.burke@gmail.com> Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> Change-Id: I7f192209d4d5580f5a0aa6838f9f04e436cf6b1f
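
The one option named in the message, shown with the off value implied by the "NOT yet recommended in production" warning:

```ini
[container-sharder]
# automatic sharding; per the commit message, not yet recommended
# in production
auto_shard = false
```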

Samuel Merritt | 47fed6f2f9 | Add handoffs-only mode to DB replicators.

The object reconstructor has a handoffs-only mode that is very useful when a cluster requires rapid rebalancing, like when disks are nearing fullness. This mode's goal is to remove handoff partitions from disks without spending effort on primary partitions. The object replicator has a similar mode, though it varies in some details. This commit adds a handoffs-only mode to the account and container replicators. Change-Id: I588b151ee65ae49d204bd6bf58555504c15edf9f Closes-Bug: #1668399
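
A sketch; the option name is an assumption inferred from the mode's name in the commit title:

```ini
[container-replicator]
# assumed option name: process only handoff partitions, then turn this
# back off once the rebalance emergency has passed
handoffs_only = false
```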

Ondřej Nový | a8bc94c7e3 | Replace slowdown option with *_per_second option

The container and object updaters sleep "slowdown" seconds (default 0.01) after every processed container/object. Because the time.sleep call adds overhead, use ratelimit_sleep from common.utils instead, the same as in the auditor. Change-Id: I362aa0f13c78ad03ce1f76ee0257b0646f981212
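
A sketch of the replacement options; the names are assumptions following the "*_per_second" pattern in the commit title, and the values are illustrative:

```ini
[container-updater]
# replaces "slowdown = 0.01" (names assumed, values illustrative)
containers_per_second = 50

[object-updater]
objects_per_second = 50
```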

Ondřej Nový | 99a13d9386 | Fixed rysnc -> rsync typo

Change-Id: I671b4206072c6e22f4ae38033502336ec32e86ad

Peter Lisák | ed772236c7 | Change schedule priority of daemon/server in config

The goal is to allow the schedule priority and the I/O scheduling class and priority of a daemon/server to be modified via configuration. The setting is optional; the default keeps current behaviour. Use case: prioritize the object-server over the object-auditor, because all user requests need to be served in peak hours while audits can wait. Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> DocImpact Change-Id: I1018a18f4706daabdb84574ffd9a58d831e68396
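
A hedged sketch of the scheduling knobs described above; the option names and values are assumptions, since the commit message does not list them:

```ini
[DEFAULT]
# assumed option names for CPU nice level and I/O scheduling
nice_priority = 10
ionice_class = IOPRIO_CLASS_BE
ionice_priority = 5
```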

Jenkins | a403faadd4 | Merge "Allow fallocate_reserve to be a percentage"

Shashirekha Gundur | cf48e75c25 | change default ports for servers

Changing the recommended ports for Swift services from ports 6000-6002 to unused ports 6200-6202, so they do not conflict with X-Windows or other services. Updated SAIO docs. DocImpact Closes-Bug: #1521339 Change-Id: Ie1c778b159792c8e259e2a54cb86051686ac9d18
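
The new recommended ports, mapped to services following the old 6000/6001/6002 convention for object, container and account respectively:

```ini
# object-server.conf
bind_port = 6200
# container-server.conf
bind_port = 6201
# account-server.conf
bind_port = 6202
```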

Andy McCrae | 0da9da5131 | Allow fallocate_reserve to be a percentage

Add the ability to set the fallocate_reserve value as a percentage. This happens automatically when adding the '%' at the end of the value. Having the ability to set a % of free space rather than a byte value is useful, especially when drive sizes are heterogeneous. The default for fallocate_reserve has been adjusted to 1%; having fallocate_reserve set seems sensible for all deploys, and percentages are far safer to default to than byte values (across drives of any size). Tests added for using fallocate_reserve as a percentage. Duplicate tests for fallocate_reserve have been removed. Docs updated to reflect the fallocate_reserve change. Change-Id: I4aea613a708205c917e81d6b2861396655e73238

gh159m | b5311f63db | Removed default value for log_statsd_host

Multiple files and documents showed that log_statsd_host had a default value, usually localhost. This was incorrect: setting a value for log_statsd_host is what enables statsd logging. Removed any reference to log_statsd_host having a default value, and changed descriptions to show that setting a value enables logging. Change-Id: I3ca5c0e8b8e4981de3aa6db0c476072b5a59723d Closes-Bug: #1542227
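
A sketch of enabling statsd; the hostname is illustrative and the port option and its value are assumptions:

```ini
[DEFAULT]
# unset by default; setting a host is what turns statsd metrics on
log_statsd_host = statsd.example.com
# assumed companion option and default
log_statsd_port = 8125
```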

Alistair Coles | 1a2b54fc0a | Fix missing *-replicator conf sections in deployment guide

The doc for these sections was missing because of an rst error - the source is there in the rst file but didn't make it into the html output. Add doc for per_diff and max_diffs in the account and container doc sections. Also, fix a bunch of other sphinx build errors and most of the warnings. Change-Id: If9ed2619b2f92c6c65a94f41d8819db8726d3893

Romain LE DISEZ | 71f6fd025e | Allow configuring the rsync modules where the replicators send data

Currently, the rsync module where the replicators send data is static. It prevents administrators from setting the rsync configuration based on their current deployment or needs. As an example, the rsyncd configuration example encourages setting a connection limit for the account, container and object modules. This protects devices from excessive parallel connections, which would hurt performance. On a server with many devices, it is tempting to increase this number proportionally, but nothing guarantees that the distribution of the connections will be balanced. In the worst scenario, a single device can receive all the connections, severely impacting performance.

This commit adds a new option named 'rsync_module' to the *-replicator sections of the *-server configuration files. This configuration variable can be extrapolated with device attributes like ip, port, device, zone, ... by using the format {NAME}, e.g.:

rsync_module = {replication_ip}::object_{device}

With this configuration, an administrator can solve the connection-distribution problem by creating one module per device in the rsyncd configuration. The default values are backward compatible:

- {replication_ip}::account
- {replication_ip}::container
- {replication_ip}::object

Option vm_test_mode is deprecated by this commit, but backward compatibility is maintained. The option is only effective when rsync_module is not set; in that case, {replication_port} is appended to the default value of rsync_module. Change-Id: Iad91df50dadbe96c921181797799b4444323ce2e
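
The commit message's own example, placed in the replicator section it describes:

```ini
[object-replicator]
# one rsync module per device, so rsyncd connection limits apply per disk
rsync_module = {replication_ip}::object_{device}
```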

Joanna H. Huang | af8d842076 | Replaced setting run_pause with standard interval

The deprecated directive `run_pause` should be replaced with the more standard `interval`. `run_pause` is still supported for backward compatibility. This patch updates the object replicator to use `interval` and support `run_pause`, and updates its sample config and documentation. Co-Authored-By: Joanna H. Huang <joanna.huitzu.huang@gmail.com> Co-Authored-By: Kamil Rykowski <kamil.rykowski@intel.com> Change-Id: Ie2a3414a96a94efb9273ff53a80b9d90c74fff09 Closes-Bug: #1364735
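
The preferred spelling; the 30-second default matches the fallback shown in the configuration-audit commit further down this log:

```ini
[object-replicator]
# preferred spelling; the deprecated run_pause is still honoured
interval = 30
```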

Jenkins | 2ea8bae389 | Merge "Allow rsync to use compression"

Yuan Zhou | 61a9d35fd5 | Update container sync to use internal client

This patch changes container sync to use the Internal Client instead of the Direct Client. In the current design, container sync uses direct_get_object to get the newest source object (which talks to storage nodes directly). This works fine for replication storage policies; however, in erasure coding policies, direct_get_object would only return part of the object (it's encoded as several pieces). Using the Internal Client can get the original object in the EC case. Note that the container sync put/delete part already works with EC, since it uses the Simple Client. Signed-off-by: Yuan Zhou <yuan.zhou@intel.com> DocImpact Change-Id: I91952bc9337f354ce6024bf8392046a1ecf6ecc9

Christian Schwede | 16ee994c1e | Set connection timeout in container sync

Container sync might get stuck without a connection timeout if the remote proxy is not responding. This patch sets a default timeout of 5.0 seconds for the connection attempt. The value is much higher than other connection timeouts inside Swift (0.5); however, there might be a much higher latency to the remote peer, thus playing it safe. There is also a retry if the attempt timed out. Note that this setting only applies to the connection request itself; it does not apply when the remote proxy goes away during a request. Also added a short test to ensure urlopen is called with the timeout value. Co-Authored-By: Alistair Coles <alistair.coles@hp.com> Change-Id: Ic08a55157fa91fe1316653781adf4d66eead61bc Partial-Bug: #1419916

Prashanth Pai | 9c33bbde69 | Allow rsync to use compression

From rsync's man page: "-z, --compress With this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted -- something that is useful over a slow connection." A configurable option has been added to allow rsync to compress, but only if the remote node is in a different region than the local one. NOTE: objects that are already compressed (for example: .tar.gz, .mp3) might slow down the syncing process. On-wire compression can also be extended to ssync later in a different change if required. In the case of ssync, we could explore faster compression libraries like lz4; rsync uses zlib, which is slow but offers a higher compression ratio. Change-Id: Ic9b9cbff9b5e68bef8257b522cc352fc3544db3c Signed-off-by: Prashanth Pai <ppai@redhat.com>
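
A sketch; the option name is an assumption, the cross-region-only behaviour comes from the commit message:

```ini
[object-replicator]
# assumed option name; compress only when syncing to a different region
rsync_compress = false
```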

Rafael Rivero | c1f6569c00 | Fixes several typos (Swift)

Corrects spelling errors found in comments. Change-Id: I228a888e3f256569ea32ef1613092dbd63e13c62

John Dickinson | b7281cf2c5 | make the bind_port config setting required

In a long-term effort to change the recommended ports for Swift, the first step is to require bind_port in config files. Later, we can change the recommended setting. Anyone currently explicitly setting the ports will not be affected. Anyone not setting the ports will need to specify them to match their rings. DocImpact Change-Id: Icca83a263acdd0afc9016424a3e9f8c15e944789

Matthew Oliver | 090baa1fa9 | Swift configuration parameter audit

This change is the result of an audit through the config parameters provided by swift and how/if they are addressed in the swift documentation, the documentation being the sample config files in the etc/ directory or the docs themselves. This change is only concerned with the config files in etc/; next I will look at the documentation in the doc/ folder. This change makes the following assumptions:

- Unless stated otherwise, the commented-out parameter in the sample configuration is the default for swift.
- When the default in the code differs from that of the sample configuration, the default in the code is correct.

Container reconciler:
- Parameter: interval (code: 30, config: 300). Result: config = 30

Object expirer:
- Parameter: recon_cache_path (code: /var/cache/swift, config: parameter missing). Result: add the parameter

swift-dispersion-populate && swift-dispersion-report:
- Parameter: auth_version (code: 1.0, config: 2.0, due to being a confusing example of how to set up version 2.0). Result: added 'auth_version = 1.0' to the right section, showing the default and making the sample configuration for auth version 2.0 easier to understand.

swift-drive-audit:
- Parameter: log_file_pattern (code: /var/log/kern.*[!.][!g][!z], config: /var/log/kern*). Result: config = /var/log/kern.*[!.][!g][!z]
- NOTE: swift-drive-audit uses a parameter called device_dir which defaults to '/srv/node'. All other swift binaries/services have a similar parameter called devices which stores the same thing. This is an inconsistency which I haven't fixed, as fixing it could break existing swift clusters out in the wild.

Proxy server:
- Parameter: object_chunk_size (code: 65536, config: 8192). Result: config = 65536
- Parameter: client_chunk_size (code: 65536, config: 8192). Result: config = 65536
- Parameter: strict_cors_mode (code: True, config: no parameter). Result: config = True

Account and container replicator configuration confusion:
- The account and container replicators have the parameters 'interval' and 'run_pause', both loaded into the same variable in code: `self.interval = int(conf.get('interval') or conf.get('run_pause') or 30)`. If a user sets both to different values then interval is used. Result: update the configuration to make this clearer.

DocImpact Change-Id: Iaadbb1a6284f8b3e0801bc343b29772f70f4bf6e

gholt | 2d00f7b7ba | New log_max_line_length option.

Log lines can get quite large, as we previously noticed with rsync error log lines. We added a setting to cap those, but it really looks like we should have just done this overall limit. We noticed the issue when we switched to UDP syslogging and it would occasionally blow past the 16436 lo MTU! This causes Python's logging code to get an error and hilarity ensues. Change-Id: I44bdbe68babd58da58c14360379e8fef8a6b75f7
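
A sketch of the option from the commit title; the value shown (comfortably under the 16436 loopback MTU mentioned above) and the section placement are assumptions:

```ini
[DEFAULT]
# truncate emitted log lines to this many characters (value illustrative)
log_max_line_length = 8192
```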

zhang-hare | f5caac43ac | Add profiling middleware in Swift

The profile middleware provides a tool to profile Swift code on the fly and collect statistical data for performance analysis. A simple native Web UI is also provided to help query and visualize the data. Change-Id: I6a1554b2f8dc22e9c8cd20cff6743513eb9acc05 Implements: blueprint profiling-middleware

gholt | 69d331d0d6 | Container Sync: Simple HTTP Proxy load balancing

Change-Id: I021b043b927153bacff48cae648d4d8c5bbad765

gholt | f60d05686f | New container sync configuration option

Summary of the new configuration option: the cluster operators add the container_sync middleware to their proxy pipeline and create a container-sync-realms.conf for their cluster, copying it out to all their proxy and container servers. This file specifies the available container sync "realms". A container sync realm is a group of clusters with a shared key that have agreed to provide container syncing to one another. The end user can then set the X-Container-Sync-To value on a container to //realm/cluster/account/container instead of the previously required URL. The allowed-hosts list is not used with this configuration; instead, every container sync request sent is signed using the realm key and user key. This offers better security, as source hosts can be faked much more easily than per-request signatures. Replaying signed requests, assuming it could easily be done, shouldn't be an issue, as the X-Timestamp is part of the signature and so would just short-circuit as already current or as superseded. This also makes configuration easier for the end user, especially in difficult networking situations where a different host might need to be used for the container sync daemon, since it connects from within a cluster. With this new configuration option, the end user just specifies the realm and cluster names, and those are resolved to the proper endpoint configured by the operator. If the operator changes their configuration (key or endpoint), the end user does not need to change theirs. DocImpact Change-Id: Ie1704990b66d0434e4991e26ed1da8b08cb05a37
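
A hedged sketch of what such a realms file might contain; the realm and cluster names and key are illustrative, and the exact option spellings are assumptions:

```ini
# container-sync-realms.conf (sketch)
[realm1]
key = realm1key
cluster_clustera = https://proxy-a.example.com/v1/
cluster_clusterb = https://proxy-b.example.com/v1/
```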

Jenkins | a2126add0b | Merge "Set default wsgi workers to cpu_count"

Newptone | 5c1a7871d9 | Unified format of boolean params in conf files

In swift conf files, boolean options use different formats: some use true/false, and some use True/False. This patch aims to use lowercase true/false consistently for boolean params in swift conf files. Fix Bug #1203421 Change-Id: I3e1bfc6e43231f51e0710aa54869f3774ee896b1

Clay Gerrard | de3acec4bf | Set default wsgi workers to cpu_count

Change the default value of wsgi workers from 1 to auto. The new default value for workers in the proxy, container, account & object wsgi servers will spawn as many workers per process as you have cpu cores. This will not be ideal for some configurations, but it's much more likely to produce a successful out-of-the-box deployment. Inspect the number of cpu cores using python's multiprocessing when available. Multiprocessing was added in python 2.6, but I know I've compiled python without it before by accident. The cpu_count method seems to be pretty system agnostic, but it says it can raise NotImplementedError or sometimes return 0. Add a new utility method 'config_auto_int_value' to pull an integer out of the config which has a dynamic default.

- drive-by s/container/proxy/ in proxy-server.conf.5
- fix misplaced max_clients in *-server.conf-sample
- update doc/development_saio to force workers = 1

DocImpact Change-Id: Ifa563d22952c902ab8cbe1d339ba385413c54e95
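
The new default, per the commit message:

```ini
[DEFAULT]
# 'auto' (the new default) spawns one worker per CPU core; set an
# integer to pin the count explicitly
workers = auto
```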

Samuel Merritt | efdb0e3681 | Make sample configs more readable.

Inject some empty lines to avoid the wall-of-text effect and to make it a little clearer which descriptions go with which options. Change-Id: I58914b83dad76ea5ca330903a246bee7ffaeba83

Sergey Kraynev | ea7858176b | Implementation of replication servers

Support separate replication ip address:

- Added a new function in utils which provides the ability to select a separate IP address for the replication service.
- Db_replicator and object replicators were changed; the replication process now uses the new function.

Replication network parameters:

- Replication network fields (replication_ip, replication_port) support was added to the device dictionary in the swift-ring-builder script.
- Changes were made to support the new fields in the search, show and set_info functions.

Implementation of replication servers:

- Separate replication servers use the same code as normal replication servers, but with the replication_server parameter = True. When using a separate replication network, the non-replication servers set replication_server = False. When there is no separate replication network (the default case), replication_server is not included in the config.

DocImpact Change-Id: Ie9af5bdcdf9241c355e36053ca4adfe49dc35bd0 Implements: blueprint dedicated-replication-network

Peter Portante | 2d42b37303 | Add the max_clients parameter to bound clients

The new max_clients parameter allows one full control over the maximum number of client requests that will be handled by a given worker for any of the proxy, account, container or object servers. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If the maximum number of clients is set to one, then a given worker will not perform another accept(2) call while processing, allowing other workers a chance to process it. DocImpact Signed-off-by: Peter Portante <peter.portante@redhat.com> Change-Id: Ic01430f7a6c5ff48d7aa349dc86a5f8ac463a420
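
A sketch of the option; the name comes from the commit message, while the default shown is an assumption, since the message does not state one:

```ini
[DEFAULT]
# per-worker cap on concurrently handled client requests
# (value shown is an assumption)
max_clients = 1024
```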