11d10221633bfa586b89aec4395a6f0625d16341
306 Commits
Tim Burke | 11d1022163 | s3api: Allow multiple storage domains
Sometimes a cluster might be accessible via more than one set of domain names. Allow operators to configure them such that virtual-host style requests work with all names.
Change-Id: I83b2fded44000bf04f558e2deb6553565d54fd4a

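A minimal sketch of how such a deployment might be configured; the use of the existing storage_domain option, and its acceptance of a comma-separated list, is an assumption here rather than something stated in the commit message:

```ini
# proxy-server.conf -- hypothetical example, assuming the s3api
# storage_domain option accepts a comma-separated list of domains
[filter:s3api]
use = egg:swift#s3api
# virtual-host style requests should work with either name
storage_domain = s3.example.com,s3.backup.example.net
```
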
Pete Zaitcev | 6198284839 | Add a project scope read-only role to keystoneauth
This patch continues work for more of the "Consistent and Secure Default Policies". We already have system scope personas implemented, but the architecture people are asking for project scope now. At least we don't need domain scope.
Change-Id: If7d39ac0dfbe991d835b76eb79ae978fc2fd3520

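A sketch of how the read-only personas from this and the earlier keystoneauth change might be enabled; the option names system_reader_roles and project_reader_roles are assumptions, not confirmed by the commit messages:

```ini
# proxy-server.conf -- hypothetical example; option names are assumed
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
# grant read-only access to holders of the Keystone "reader" role
system_reader_roles = reader
project_reader_roles = reader
```
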
Zuul | b3def185c6 | Merge "Allow floats for all intervals"

Alistair Coles | 46ea3aeae8 | Quarantine stale EC fragments after checking handoffs
If the reconstructor finds a fragment that appears to be stale then it will now quarantine the fragment. Fragments are considered stale if insufficient fragments at the same timestamp can be found to rebuild missing fragments, and the number found is less than or equal to a new reconstructor 'quarantine_threshold' config option.

Before quarantining a fragment the reconstructor will attempt to fetch fragments from handoff nodes in addition to the usual primary nodes. The handoff requests are limited by a new 'request_node_count' config option.

'quarantine_threshold' defaults to zero, i.e. no fragments will be quarantined. 'request_node_count' defaults to '2 * replicas'.

Closes-Bug: 1655608
Change-Id: I08e1200291833dea3deba32cdb364baa99dc2816

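A sketch of how these reconstructor options might look; the config file and section name are assumptions based on where the reconstructor is normally configured, and the values are illustrative:

```ini
# object-server.conf -- hypothetical example
[object-reconstructor]
# quarantine a stale fragment only when this few (or fewer) matching
# fragments are found; 0 (the default) disables quarantining entirely
quarantine_threshold = 1
# how many nodes (primaries plus handoffs) to query before giving up
request_node_count = 2 * replicas
```
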
Tim Burke | c374a7a851 | Allow floats for all intervals
Change-Id: I91e9bc02d94fe7ea6e89307305705c383087845a

Tim Burke | e35365df51 | s3api: Add config option to return 429s on ratelimit
Change-Id: If04c083ccc9f63696b1f53ac13edc932740a0654

Tim Burke | 27a734c78a | s3api: Allow CORS preflight requests
Unfortunately, we can't identify the user, so we can't map to an account, so we can't respect whatever CORS metadata might be set on the container. As a result, the allowed origins must be configured cluster-wide. Add a new config option, cors_preflight_allow_origin, for that; default it to blank (i.e., deny preflights from all origins, preserving existing behavior), but allow either a comma-separated list of origins or * (to allow all origins).
Change-Id: I985143bf03125a05792e79bc5e5f83722d6431b3
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>

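A sketch of the new option in the s3api filter section; the origin values shown are illustrative:

```ini
# proxy-server.conf -- hypothetical example
[filter:s3api]
use = egg:swift#s3api
# blank (the default) denies all preflights; use a comma-separated
# list of origins, or * to allow preflights from any origin
cors_preflight_allow_origin = https://static.example.com,https://app.example.com
```
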
Tim Burke | cf4f320644 | tempauth: Add .reseller_reader group
Change-Id: I8c5197ed327fbb175c8a2c0e788b1ae14e6dfe23

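A sketch of how a tempauth user might be placed in the new group, using tempauth's usual user_&lt;account&gt;_&lt;user&gt; = &lt;key&gt; [group ...] syntax; the account, user, and key names are made up:

```ini
# proxy-server.conf -- hypothetical example
[filter:tempauth]
use = egg:swift#tempauth
# "auditor" can read every account under the reseller prefix,
# but cannot write anywhere
user_admin_auditor = auditing .reseller_reader
```
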
Pete Zaitcev | 98a0275a9d | Add a read-only role to keystoneauth
An idea was floated recently of a read-only role that can be used for cluster-wide audits, and is otherwise safe. It was also included in the "Consistent and Secure Default Policies" effort in OpenStack, where it implements "reader" personas in system, domain, and project scopes. This patch implements it for system scope, where it's most useful for operators.
Change-Id: I5f5fff2e61a3e5fb4f4464262a8ea558a6e7d7ef

Alistair Coles | 6896f1f54b | s3api: actually execute check_pipeline in real world
Previously, S3ApiMiddleware.check_pipeline would always exit early because the __file__ attribute of the Config instance passed to check_pipeline was never set. The __file__ key is typically passed to the S3ApiMiddleware constructor in the wsgi config dict, so this dict is now passed to check_pipeline() for it to test for the existence of __file__. Also, the use of a Config object is replaced with a dict where it mimics the wsgi conf object in the unit test setup.

UpgradeImpact
=============
The bug prevented the pipeline order checks described in proxy-server.conf-sample being made on the proxy-server pipeline when s3api middleware was included. With this change, these checks will now be made and an invalid pipeline configuration will result in a ValueError being raised during proxy-server startup.

A valid pipeline has another middleware (presumed to be an auth middleware) between s3api and the proxy-server app. If keystoneauth is found, then a further check is made that s3token is configured after s3api and before keystoneauth.

The pipeline order checks can be disabled by setting the s3api auth_pipeline_check option to False in proxy-server.conf. This mitigation is recommended if previously operating with what will now be considered an invalid pipeline.

The bug also prevented a check for slo middleware being in the pipeline between s3api and the proxy-server app. If the slo middleware is not found then multipart uploads will now not be supported, regardless of the value of the allow_multipart_uploads option described in proxy-server.conf-sample. In this case a warning will be logged during startup but no exception is raised.

Closes-Bug: #1912391
Change-Id: I357537492733b97e5afab4a7b8e6a5c527c650e4

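For operators who need the mitigation described above, a minimal sketch of disabling the pipeline order check:

```ini
# proxy-server.conf -- only if you must keep a pipeline that the new
# check would reject; fixing the pipeline order is the better option
[filter:s3api]
use = egg:swift#s3api
auth_pipeline_check = False
```
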
Tim Burke | 10d9a737d8 | s3api: Make allowable clock skew configurable
While we're at it, make the default match AWS's 15 minute limit (instead of our old 5 minute limit).

UpgradeImpact
=============
This (somewhat) weakens some security protections for requests over the S3 API; operators may want to preserve the prior behavior by setting allowable_clock_skew = 300 in the [filter:s3api] section of their proxy-server.conf.

Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I0da777fcccf056e537b48af4d3277835b265d5c9

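As the message says, the prior limit can be restored explicitly; a minimal sketch, with the value in seconds as quoted above:

```ini
# proxy-server.conf
[filter:s3api]
use = egg:swift#s3api
# keep the old 5 minute window instead of the new 15 minute default
allowable_clock_skew = 300
```
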
Zuul | d5bb644a17 | Merge "Use cached shard ranges for container GETs"

Grzegorz Grasza | 6930bc24b2 | Memcached client TLS support
This patch specifies a set of configuration options required to build a TLS context, which is used to wrap the client connection socket.
Closes-Bug: #1906846
Change-Id: I03a92168b90508956f367fbb60b7712f95b97f60

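The commit message does not name the new options, so the names below are assumptions; a sketch of what enabling TLS to memcached might look like:

```ini
# memcache.conf -- hypothetical example; option names are assumed
[memcache]
memcache_servers = 10.0.0.10:11211,10.0.0.11:11211
tls_enabled = true
tls_cafile = /etc/swift/memcache-ca.pem
tls_certfile = /etc/swift/memcache-client.crt
tls_keyfile = /etc/swift/memcache-client.key
```
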
Alistair Coles | 077ba77ea6 | Use cached shard ranges for container GETs
This patch makes four significant changes to the handling of GET requests for sharding or sharded containers:

- container server GET requests may now result in the entire list of shard ranges being returned for the 'listing' state regardless of any request parameter constraints.
- the proxy server may cache that list of shard ranges in memcache and the request's environ infocache dict, and subsequently use the cached shard ranges when handling GET requests for the same container.
- the proxy now caches more container metadata so that it can synthesize a complete set of container GET response headers from cache.
- the proxy server now enforces more container GET request validity checks that were previously only enforced by the backend server, e.g. checks for valid request parameter values.

With this change, when the proxy learns from container metadata that the container is sharded then it will cache shard ranges fetched from the backend during a container GET in memcache. On subsequent container GETs the proxy will use the cached shard ranges to gather object listings from shard containers, avoiding further GET requests to the root container until the cached shard ranges expire from cache.

Cached shard ranges are most useful if they cover the entire object name space in the container. The proxy therefore uses a new X-Backend-Override-Shard-Name-Filter header to instruct the container server to ignore any request parameters that would constrain the returned shard range listing, i.e. the 'marker', 'end_marker', 'includes' and 'reverse' parameters. Having obtained the entire shard range listing (either from the server or from cache), the proxy now applies those request parameter constraints itself when constructing the client response.

When using cached shard ranges the proxy will synthesize response headers from the container metadata that is also in cache. To enable the full set of container GET response headers to be synthesized in this way, the set of metadata that the proxy caches when handling a backend container GET response is expanded to include various timestamps.

The X-Newest header may be used to disable looking up shard ranges in cache.

Change-Id: I5fc696625d69d1ee9218ee2a508a1b9be6cf9685

Zuul | ebfc3a61fa | Merge "Use socket_timeout kwarg instead of useless eventlet.wsgi.WRITE_TIMEOUT"

Zuul | cd228fafad | Merge "Add a new URL parameter to allow for async cleanup of SLO segments"

Tim Burke | 918ab8543e | Use socket_timeout kwarg instead of useless eventlet.wsgi.WRITE_TIMEOUT
No version of eventlet that I'm aware of has any sort of support for eventlet.wsgi.WRITE_TIMEOUT; I don't know why we've been setting that. On the other hand, the socket_timeout argument for eventlet.wsgi.Server has been supported for a while -- since 0.14 in 2013.
Drive-by: Fix up handling of sub-second client_timeouts.
Change-Id: I1dca3c3a51a83c9d5212ee5a0ad2ba1343c68cf9
Related-Change: I1d4d028ac5e864084a9b7537b140229cb235c7a3
Related-Change: I433c97df99193ec31c863038b9b6fd20bb3705b8

Tim Burke | e78377624a | Add a new URL parameter to allow for async cleanup of SLO segments
Add a new config option to SLO, allow_async_delete, to allow operators to opt in to this new behavior. If their expirer queues get out of hand, they can always turn it back off. If the option is disabled, handle the delete inline; this matches the behavior of old Swift.

Only allow an async delete if all segments are in the same container and none are nested SLOs, that way we only have two auth checks to make.

Have s3api try to use this new mode if the data seems to have been uploaded via S3 (since it should be safe to assume that the above criteria are met).

Drive-by: Allow the expirer queue and swift-container-deleter to use high-precision timestamps.
Change-Id: I0bbe1ccd06776ef3e23438b40d8fb9a7c2de8921

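A minimal sketch of opting in; the section name follows the usual proxy pipeline convention:

```ini
# proxy-server.conf
[filter:slo]
use = egg:swift#slo
# queue segment deletes for the expirer instead of deleting inline
allow_async_delete = true
```
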
Zuul | 2593f7f264 | Merge "memcache: Make error-limiting values configurable"

Tim Burke | aff65242ff | memcache: Make error-limiting values configurable
Previously these were all hardcoded; let operators tweak them as needed. Significantly, this also allows operators to disable error-limiting entirely, which may be a useful protection in case proxies are configured with a single memcached server.

Use error_suppression_limit and error_suppression_interval to mirror the option names used by the proxy-server to ratelimit backend Swift servers.

Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: Ife005cb8545dd966d7b0e34e5496a0354c003881

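A sketch of what tuning memcache error-limiting might look like; the section shown and the specific values are assumptions:

```ini
# proxy-server.conf -- hypothetical example; values are illustrative
[filter:cache]
use = egg:swift#memcache
# tolerate more errors per interval before a memcached server is
# temporarily skipped
error_suppression_limit = 100
error_suppression_interval = 60
```
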
Zuul | b9a404b4d1 | Merge "ec: Add an option to write fragments with legacy crc"

Tim Burke | 599f63e762 | ec: Add an option to write fragments with legacy crc
When upgrading from liberasurecode<=1.5.0, you may want to continue writing legacy CRCs until all nodes are upgraded and capable of reading fragments with zlib CRCs.

Starting in liberasurecode>=1.6.2, we can use the environment variable LIBERASURECODE_WRITE_LEGACY_CRC to control whether we write zlib or legacy CRCs, but for many operators it's easier to manage swift configs than environment variables. Add a new option, write_legacy_ec_crc, to the proxy-server app and object-reconstructor; if set to true, ensure legacy frags are written.

Note that more daemons instantiate proxy-server apps than just the proxy-server. The complete set of impacted daemons should be:

* proxy-server
* object-reconstructor
* container-reconciler
* any users of internal-client.conf

UpgradeImpact
=============
To ensure a smooth liberasurecode upgrade:

1. Determine whether your cluster writes legacy or zlib CRCs. Depending on the order in which shared libraries are loaded, your servers may already be reading and writing zlib CRCs, even with old liberasurecode. In that case, no special action is required and WRITING LEGACY CRCS DURING THE UPGRADE WILL CAUSE AN OUTAGE. Just upgrade liberasurecode normally. See the closed bug for more information and a script to determine which CRC is used.
2. On all nodes, ensure Swift is upgraded to a version that includes write_legacy_ec_crc support and write_legacy_ec_crc is enabled on all daemons.
3. On each node, upgrade liberasurecode and restart Swift services. Because of (2), they will continue writing legacy CRCs which will still be readable by nodes that have not yet upgraded.
4. Once all nodes are upgraded, remove the write_legacy_ec_crc option from all configs across all nodes. After restarting daemons, they will write zlib CRCs which will also be readable by all nodes.

Change-Id: Iff71069f808623453c0ff36b798559015e604c7d
Related-Bug: #1666320
Closes-Bug: #1886088
Depends-On: https://review.opendev.org/#/c/738959/

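A sketch of what step 2 above might look like in the two configs named by the message (and remember to remove these lines again in step 4); the section names are the usual ones:

```ini
# proxy-server.conf
[app:proxy-server]
use = egg:swift#proxy
write_legacy_ec_crc = true
```

```ini
# object-server.conf
[object-reconstructor]
write_legacy_ec_crc = true
```
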
Clay Gerrard | 754defc39c | Client should retry when there's just one 404 and a bunch of errors
During a rebalance, it's expected that we may get a 404 for data that does exist elsewhere in the cluster. Normally this isn't a problem; the proxy sees the 404, keeps digging, and one of the other primaries will serve the response. Previously, if the other replicas were heavily loaded, the proxy would see a bunch of timeouts plus the 404 from the fresh (empty) primary, treat the 404 as good, and send that on to the client.

Now, have the proxy throw out that first 404 (provided it doesn't have a timestamp); it will then return a 503 to the client, indicating that it should try again.

Add a new (per-policy) proxy-server config option, rebalance_missing_suppression_count; operators may use this to increase the number of 404-no-timestamp responses to discard if their rebalances are going faster than replication can keep up, or set it to zero to return to the previous behavior.

Change-Id: If4bd39788642c00d66579b26144af8f116735b4d

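A sketch of the per-policy form of this option; the per-policy section syntax shown is an assumption based on how proxy-server per-policy overrides are usually written, and the value is illustrative:

```ini
# proxy-server.conf -- hypothetical example
[app:proxy-server]
use = egg:swift#proxy

# override for one storage policy whose rebalance is outrunning replication
[proxy-server:policy:0]
rebalance_missing_suppression_count = 2
```
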
Zuul | cca5e8b1de | Merge "Make all concurrent_get options per-policy"

Zuul | 20e1544ad8 | Merge "Extend concurrent_gets to EC GET requests"

Clay Gerrard | f043aedec1 | Make all concurrent_get options per-policy
Change-Id: Ib81f77cc343c3435d7e6258d4631563fa022d449

Zuul | 7015ac2fdc | Merge "py3: Work with proper native string paths in crypto meta"

Clay Gerrard | 8f60e0a260 | Extend concurrent_gets to EC GET requests
After the initial requests are started, if the proxy still does not have enough backend responses to return a client response, additional requests will be spawned to remaining primaries at the frequency configured by the concurrency_timeout.

A new tunable, concurrent_ec_extra_requests, allows operators to control how many requests to backend fragments are started immediately with a client request to an object stored in an EC storage policy. By default the minimum ndata backend requests are started immediately, but operators may increase concurrent_ec_extra_requests up to nparity, which is similar in effect to a concurrency_timeout of 0.

Change-Id: Ia0a9398107a400815be2e0097b1b8e76336a0253

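A sketch combining the per-policy and EC concurrency tunables from the two commits above; the per-policy section syntax and the example values are assumptions:

```ini
# proxy-server.conf -- hypothetical example
[app:proxy-server]
use = egg:swift#proxy
concurrent_gets = true
concurrency_timeout = 0.5

# EC policy: start a couple of extra fragment requests up front
[proxy-server:policy:1]
concurrent_ec_extra_requests = 2
```
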
Tim Burke | 7d429318dd | py3: Work with proper native string paths in crypto meta
Previously, we would work with these paths as WSGI strings -- this would work fine when all data were read and written on the same major version of Python, but fail pretty badly during and after upgrading Python. In particular, if a py3 proxy-server tried to read existing data that was written down by a py2 proxy-server, it would hit an error and respond 500. Worse, if an un-upgraded py2 proxy tried to read data that was freshly-written by a py3 proxy, it would serve corrupt data back to the client (including a corrupt/invalid ETag and Content-Type).

Now, ensure that both py2 and py3 write down paths as native strings. Make an effort to still work with WSGI-string metadata, though it can be ambiguous as to whether a string is a WSGI string or not. The heuristic used is: if

* the path from metadata does not match the (native-string) request path and
* the path from metadata (when interpreted as a WSGI string) can be "un-wsgi-fied" without any encode/decode errors and
* the native-string path from metadata *does* match the native-string request path

then trust the path from the request. By contrast, we usually prefer the path from metadata in case there was a pipeline misconfiguration (see related bug).

Add the ability to read and write a new, unambiguous version of metadata that always has the path as a native string. To support rolling upgrades, a new config option is added: meta_version_to_write. This defaults to 2 to support rolling upgrades without configuration changes, but the default may change to 3 in a future release.

UpgradeImpact
=============
When upgrading from Swift 2.20.0 or Swift 2.19.1 or earlier, set meta_version_to_write = 1 in your keymaster's configuration. Regardless of prior Swift version, set meta_version_to_write = 3 after upgrading all proxy servers.

When switching from Python 2 to Python 3, first upgrade Swift while on Python 2, then upgrade to Python 3.

Change-Id: I00c6693c42c1a0220b64d8016d380d5985339658
Closes-Bug: #1888037
Related-Bug: #1813725

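A sketch of the UpgradeImpact advice as config; the [filter:keymaster] section name is the usual one but is an assumption here:

```ini
# proxy-server.conf -- hypothetical example
[filter:keymaster]
use = egg:swift#keymaster
# while upgrading from Swift <= 2.19.1 / 2.20.0:
meta_version_to_write = 1
# once every proxy is upgraded, switch to the unambiguous format:
# meta_version_to_write = 3
```
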
Tim Burke | 2ffe598f48 | proxy-logging: Be able to configure log_route
This lets you have separate loggers for the left and right proxy-logging middlewares, so you can have a config like

    [pipeline:main]
    pipeline = ... proxy-logging-client ... proxy-logging-subrequest proxy-server

    [proxy-logging-client]
    use = egg:swift#proxy_logging
    access_log_statsd_metric_prefix = client-facing

    [proxy-logging-subrequest]
    use = egg:swift#proxy_logging
    access_log_route = subrequest
    access_log_statsd_metric_prefix = subrequest

to isolate subrequest metrics from client-facing metrics.

Change-Id: If41e3d542b30747da7ca289708e9d24873c46e2e

Tim Burke | 1db11df4f2 | ratelimit: Allow multiple placements
We usually want to have ratelimit fairly far left in the pipeline -- the assumption is that something like an auth check will be fairly expensive and we should try to shield the auth system so it doesn't melt under the load of a misbehaved swift client. But with S3 requests, we can't know the account/container that a request is destined for until *after* auth. Fortunately, we've already got some code to make s3api play well with ratelimit.

So, let's have our cake and eat it, too: allow operators to place ratelimit once, before auth, for swift requests and again, after auth, for s3api. They'll both use the same memcached keys (so users can't switch APIs to effectively double their limit), but still only have each S3 request counted against the limit once.

Change-Id: If003bb43f39427fe47a0f5a01dbcc19e1b3b67ef

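A trimmed, hypothetical pipeline illustrating the double placement described above; real pipelines carry many more filters, and the exact ordering around auth is an assumption:

```ini
# proxy-server.conf -- hypothetical, heavily trimmed example
[pipeline:main]
# the first ratelimit (before auth) covers plain Swift requests; the
# second copy (after auth) covers s3api requests once the account is known
pipeline = catch_errors cache ratelimit s3api s3token keystoneauth ratelimit slo proxy-logging proxy-server
```
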
John Dickinson | d358b9130d | added value and notes to a sample config file for s3token
Change-Id: I18accffb2cf6ba6a3fff6fd5d95f06a424d1d919

Romain LE DISEZ | 27fd97cef9 | Middleware that allows a user to have quoted Etags
Users have complained for a while that Swift's ETags don't match the expected RFC formats. We've resisted fixing this for just as long, worrying that the fix would break innumerable clients that expect the value to be a hex-encoded MD5 digest and *nothing else*. But, users keep asking for it, and some consumers (including some CDNs) break if we *don't* have quoted etags -- so, let's make it an option.

With this middleware, Swift users can set metadata per-account or even per-container to explicitly request RFC compliant etags or not. Swift operators also get an option to change the default behavior cluster-wide; it defaults to the old, non-compliant format.

See also:
- https://tools.ietf.org/html/rfc2616#section-3.11
- https://tools.ietf.org/html/rfc7232#section-2.3

Closes-Bug: 1099087
Closes-Bug: 1424614
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: I380c6e34949d857158e11eb428b3eda9975d855d

Clay Gerrard | 2759d5d51c | New Object Versioning mode
This patch adds a new object versioning mode. This new mode provides a new set of APIs for users to interact with older versions of an object. It also changes the naming scheme of older versions and adds a version-id to each object.

This new mode is not backwards compatible or interchangeable with the other two modes (i.e., stack and history), especially due to the changes in the naming scheme of older versions. This new mode will also serve as a foundation for adding S3 versioning compatibility in the s3api middleware.

Note that this does not (yet) support using a versioned container as a source in container-sync. Container sync should be enhanced to sync previous versions of objects.

Change-Id: Ic7d39ba425ca324eeb4543a2ce8d03428e2225a1
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Thiago da Silva <thiagodasilva@gmail.com>

Clay Gerrard | 4601548dab | Deprecate per-service auto_create_account_prefix
If we move it to constraints it's more globally accessible in our code, but more importantly it's more obvious to ops that everything breaks if you try to mis-configure different values per-service.
Change-Id: Ib8f7d08bc48da12be5671abe91a17ae2b49ecfee

Clay Gerrard | 698717d886 | Allow internal clients to use reserved namespace
Reserve the namespace starting with the NULL byte for internal use-cases. Backend services will allow path names to include the NULL byte in urls and validate names in the reserved namespace. Database services will filter all names starting with the NULL byte from responses unless the request includes the header:

    X-Backend-Allow-Reserved-Names: true

The proxy server will not allow path names to include the NULL byte in urls unless a middleware has set the X-Backend-Allow-Reserved-Names header. Middlewares can use the reserved namespace to create objects and containers that can not be directly manipulated by clients. Any objects and bytes created in the reserved namespace will be aggregated to the user's account totals.

When deploying internal proxies, developers and operators may configure the gatekeeper middleware to translate the X-Allow-Reserved-Names header to the Backend header so they can manipulate the reserved namespace directly through the normal API.

UpgradeImpact: it's not safe to roll back from this change.

Change-Id: If912f71d8b0d03369680374e8233da85d8d38f85

Romain LE DISEZ | 2f1111a436 | proxy: stop sending chunks to objects with a Queue
During a PUT of an object, the proxy instantiates one Putter per object-server that will store data (either the full object or a fragment, depending on the storage policy). Each Putter owns a Queue that is used to buffer data chunks before they are written to the socket connected to the object-server. The chunks are moved from the queue to the socket by a greenthread; there is one greenthread per Putter.

If the client is uploading faster than the object-servers can manage, the Queue could grow and consume a lot of memory. To avoid that, the queue is bounded (default: 10). Having a bounded queue also ensures that all object-servers will get the data at the same rate, because if one queue is full, the greenthread reading from the client socket will block when trying to write to the queue; so the global rate is that of the slowest object-server.

The thing is, every operating system manages socket buffers for incoming and outgoing data. Concerning the send buffer, the behavior is such that if the buffer is full, a call to write() will block; otherwise the call will return immediately. It behaves a lot like the Putter's Queue, except that the size of the buffer is dynamic, so it adapts itself to the speed of the receiver. Thus, managing a queue in addition to the socket send buffer is duplicate queueing/buffering that provides no benefit but is, as shown by profiling and benchmarks, very CPU costly.

This patch removes the queueing mechanism. Instead, the greenthread reading data from the client writes directly to the socket. If an object-server is getting slow, the buffer will fill up, blocking the reader greenthread. Benchmarks show a CPU consumption reduction of more than 30% while the observed upload rate increases by about 45%.

Change-Id: Icf8f800cb25096f93d3faa1e6ec091eb29500758

Zuul | cf18e1f47b | Merge "sharding: Cache shard ranges for object writes"

Tim Burke | a1af3811a7 | sharding: Cache shard ranges for object writes
Previously, we issued a GET to the root container for every object PUT, POST, and DELETE. This puts load on the container server, potentially leading to timeouts, error limiting, and erroneous 404s (!).

Now, cache the complete set of 'updating' shards, and find the shard for this particular update in the proxy. Add a new config option, recheck_updating_shard_ranges, to control the cache time; it defaults to one hour. Set to 0 to fall back to previous behavior.

Note that we should be able to tolerate stale shard data just fine; we already have to worry about async pendings that got written down with one shard but may not get processed until that shard has itself sharded or shrunk into another shard.

Also note that memcache has a default value limit of 1MiB, which may be exceeded if a container has thousands of shards. In that case, set() will act like a delete(), causing increased memcache churn but otherwise preserving existing behavior. In the future, we may want to add support for gzipping the cached shard ranges as they should compress well.

Change-Id: Ic7a732146ea19a47669114ad5dbee0bacbe66919
Closes-Bug: 1781291

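A small sketch of tuning the new cache time (in seconds); the default quoted above is one hour:

```ini
# proxy-server.conf
[app:proxy-server]
use = egg:swift#proxy
# cache 'updating' shard ranges for 30 minutes; 0 restores the old
# behavior of a root-container GET per object update
recheck_updating_shard_ranges = 1800
```
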
zengjia | 0ae1ad63c1 | Update auth_url in install docs
Beginning with the Queens release, the keystone install guide recommends running all interfaces on the same port. This patch updates the swift install guide to reflect that change.
Change-Id: Id00cfd2c921da352abdbbbb6668b921f3cb31a1a
Closes-bug: #1754104

Tim Burke | 9d1b749740 | py3: port staticweb and domain_remap func tests
Drive-by: Tighten domain_remap assertions on listings, which required that we fix proxy pipeline placement. Add a note about it to the sample config.
Change-Id: I41835148051294088a2c0fb4ed4e7a7b61273e5f

Tim Burke | 345f577ff1 | s3token: fix conf option name
Related-Change: Ica740c28b47aa3f3b38dbfed4a7f5662ec46c2c4
Change-Id: I71f411a2e99fa8259b86f11ed29d1b816ff469cb

Tim Burke | 4f7c44a9d7 | Add information about secret_cache_duration to sample config
Related-Change-Id: Id0c01da6aa6ca804c8f49a307b5171b87ec92228
Change-Id: Ica740c28b47aa3f3b38dbfed4a7f5662ec46c2c4

Gilles Biannic | a4cc353375 | Make log format for requests configurable
Add the log_msg_template option in proxy-server.conf and log_format in a/c/o-server.conf. It is a string parsable by Python's format() function. Some fields containing user data might be anonymized by using log_anonymization_method and log_anonymization_salt.
Change-Id: I29e30ef45fe3f8a026e7897127ffae08a6a80cd9

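A sketch of what this might look like for the proxy's access log; the specific template field names used below are assumptions, not taken from the commit message:

```ini
# proxy-server.conf -- hypothetical example; field names are assumed
[filter:proxy-logging]
use = egg:swift#proxy_logging
log_msg_template = {client_ip} {method} {path} {status_int}
# anonymize user-identifying fields before they hit the logs
log_anonymization_method = md5
log_anonymization_salt = some-long-random-salt
```
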
Tim Burke | d748851766 | s3token: Add note about config change when upgrading from swift3
Change-Id: I2610cbdc9b7bc2b4d614eaedb4f3369d7a424ab3

Zuul | 3043c54f28 | Merge "s3api: Allow concurrent multi-deletes"

Tim Burke | 00be3f595e | s3api: Allow concurrent multi-deletes
Previously, a thousand-item multi-delete request would consider each object to delete serially, and not start trying to delete one until the previous was deleted (or hit an error). Now, allow operators to configure a concurrency factor to allow multiple deletes at the same time. Default the concurrency to 2, like we did for slo and bulk.

See also: http://lists.openstack.org/pipermail/openstack-dev/2016-May/095737.html

Change-Id: If235931635094b7251e147d79c8b7daa10cdcb3d
Related-Change: I128374d74a4cef7a479b221fd15eec785cc4694a

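A sketch of raising the concurrency; the option name multi_delete_concurrency is an assumption, since the commit message only speaks of "a concurrency factor":

```ini
# proxy-server.conf -- hypothetical example; option name is assumed
[filter:s3api]
use = egg:swift#s3api
# process up to 4 keys of a multi-delete request at once (default 2)
multi_delete_concurrency = 4
```
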
Tim Burke | 692a03473f | s3api: Change default location to us-east-1
This is more likely to be the default region that a client would try for v4 signatures.

UpgradeImpact
=============
Deployers with clusters that relied on the old implicit default location of US should explicitly set location = US in the [filter:s3api] section of proxy-server.conf before upgrading.

Change-Id: Ib6659a7ad2bd58d711002125e7820f6e86383be8

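The pre-upgrade mitigation from the message, spelled out:

```ini
# proxy-server.conf -- only for clusters that relied on the old default
[filter:s3api]
use = egg:swift#s3api
# preserve the pre-upgrade region name for existing v4-signing clients
location = US
```
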
Alistair Coles | 904e7c97f1 | Add more doc and test for cors_expose_headers option
As a follow-up to the related change, mention the new cors_expose_headers option (and other proxy-server.conf options) in the CORS doc. Add a test for the cors options being loaded into the proxy server. Improve CORS comments in docs.
Change-Id: I647d8f9e9cbd98de05443638628414b1e87d1a76
Related-Change: I5ca90a052f27c98a514a96ee2299bfa1b6d46334

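A sketch of the cluster-wide option; the section placement shown and the example header names are assumptions:

```ini
# proxy-server.conf -- hypothetical example
[DEFAULT]
# additional response headers exposed to CORS clients
cors_expose_headers = x-object-meta-color, x-custom-header
```
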
Zuul | 5d46c0d8b3 | Merge "Adding keep_idle config value to socket"