11eb17d3b268258a1fa60957e33d5cbe8566db98
322 Commits
| Author | SHA1 | Message | Date | |
|---|---|---|---|---|
| indianwhocodes | 11eb17d3b2 | support x-open-expired header for expired objects. If the global configuration option 'enable_open_expired' is set to true, a client can make a request with the header 'x-open-expired' set to true in order to access an object that has expired, provided it is still in its grace period. If the flag is set to false (the default), the client cannot access any expired objects, even with the header. When a client sets an 'x-open-expired' header to a true value for a GET/HEAD/POST request, the proxy forwards 'x-backend-open-expired' to the storage server. The storage server will allow clients that set x-backend-open-expired to open and read an object that has not yet been reaped by the object-expirer, even after the x-delete-at time has passed. The header is always ignored when used with temporary URLs. Co-Authored-By: Anish Kachinthaya <akachinthaya@nvidia.com> Related-Change: I106103438c4162a561486ac73a09436e998ae1f0 Change-Id: Ibe7dde0e3bf587d77e14808b169c02f8fb3dddb3 | |
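
A minimal sketch of how an operator might turn this on. The commit only calls 'enable_open_expired' a global configuration option, so the file and section shown here are assumptions:

```ini
# proxy-server.conf (and object-server.conf) -- section placement assumed;
# only the option name comes from the commit message.
[app:proxy-server]
enable_open_expired = true
# Clients then send "x-open-expired: true" on GET/HEAD/POST to read an
# expired-but-not-yet-reaped object.
```
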
| Alistair Coles | 2500fbeea9 | proxy: don't use recoverable_node_timeout with x-newest. Object GET requests with a truthy X-Newest header are not resumed if a backend request times out. The GetOrHeadHandler therefore uses the regular node_timeout when waiting for a backend connection response, rather than the possibly shorter recoverable_node_timeout. However, previously the recoverable_node_timeout would still be used with X-Newest requests while reading data from a backend response. This patch simplifies GetOrHeadHandler to never use recoverable_node_timeout when X-Newest is truthy. Change-Id: I326278ecb21465f519b281c9f6c2dedbcbb5ff14 | |
| Takashi Kajinami | bd64748a03 | Document allowed_digests for formpost middleware. The allowed_digests option was added to the formpost middleware in addition to the tempurl middleware[1], but the option was not added to the formpost section in the example proxy config file. [1] | |
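
A hedged example of documenting the option in the formpost section; the digest list shown is illustrative rather than the project default:

```ini
# proxy-server.conf -- mirror the tempurl allowed_digests setting
[filter:formpost]
use = egg:swift#formpost
allowed_digests = sha256 sha512
```
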
| Tim Burke | 0c9b545ea7 | docs: Clean up proxy logging docs. Change-Id: I6ef909e826d3901f24d3c42a78d2ab1e4e47bb64 | |
| Tim Burke | 469c38e9fb | wsgi: Add keepalive_timeout option. Clients sometimes hold open connections "just in case" they might later pipeline requests. This can cause issues for proxies, especially if operators restrict max_clients in an effort to improve response times for the requests that *do* get serviced. Add a new keepalive_timeout option to give proxies a way to drop these established-but-idle connections without impacting active connections (as may happen when reducing client_timeout). Note that this requires eventlet 0.33.4 or later. Change-Id: Ib5bb84fa3f8a4b9c062d58c8d3689e7030d9feb3 | |
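
A sketch of the new option next to the existing wsgi server settings; the section placement and the values are assumptions, only the option name and the eventlet requirement come from the commit:

```ini
# proxy-server.conf
[DEFAULT]
# keep the usual per-request timeout for active connections
client_timeout = 60
# drop connections that sit idle between requests (requires eventlet >= 0.33.4);
# value in seconds, chosen here for illustration only
keepalive_timeout = 10
```
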
| Tim Burke | cbba65ac91 | quotas: Add account-level per-policy quotas. Reseller admins can set new headers on accounts like X-Account-Quota-Bytes-Policy-<policy-name>: <quota>. This may be done to limit consumption of a faster, all-flash policy, for example. This is independent of the existing X-Account-Meta-Quota-Bytes header, which continues to limit the total storage for an account across all policies. Change-Id: Ib25c2f667e5b81301f8c67375644981a13487cfe | |
| Zuul | 0470994a03 | Merge "slo: Default allow_async_delete to true" | |
| Tim Burke | 5c6407bf59 | proxy: Add a chance to skip memcache for get_*_info calls. If you've got thousands of requests per second for objects in a single container, you basically NEVER want that container's info to ever fall out of memcache. If it *does*, all those clients are almost certainly going to overload the container. Avoid this by allowing some small fraction of requests to bypass and refresh the cache, pushing out the TTL as long as there continue to be requests to the container. The likelihood of skipping the cache is configurable, similar to what we did for shard range sets. Change-Id: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f Closes-Bug: #1883324 | |
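
The commit does not name the new tunables; a hedged sketch, with option names assumed by analogy to the shard-range skip options, might look like this:

```ini
# proxy-server.conf -- option names below are assumptions; the commit only
# says the skip likelihood is configurable
[app:proxy-server]
# let roughly 1 in 1000 requests bypass memcache and refresh container/account info
container_existence_skip_cache_pct = 0.1
account_existence_skip_cache_pct = 0.1
```
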
| Tim Burke | f6196b0a22 | AUTHORS/CHANGELOG for 2.30.0. Change-Id: If7c9e13fc62f8104ccb70a12b9c839f78e7e6e3e | |
| Zuul | 5398204f22 | Merge "tempurl: Deprecate sha1 signatures" | |
| Tim Burke | 11b9761cdf | Rip out pickle support in our memcached client. We said this would be going away back in 1.7.0 -- let's actually remove it. Change-Id: I9742dd907abea86da9259740d913924bb1ce73e7 Related-Change: Id7d6d547b103b4f23ebf5be98b88f09ec6027ce4 | |
| Tim Burke | 118cf2ba8a | tempurl: Deprecate sha1 signatures. We've known this would eventually be necessary for a while [1], and way back in 2017 we started seeing SHA-1 collisions [2]. [1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html [2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html UpgradeImpact: "sha1" has been removed from the default set of `allowed_digests` in the tempurl middleware config. If your cluster still has clients requiring the use of SHA-1, explicitly configure `allowed_digests` to include "sha1" and encourage your clients to move to more-secure algorithms. Depends-On: https://review.opendev.org/c/openstack/tempest/+/832771 Change-Id: I6e6fa76671c860191a2ce921cb6caddc859b1066 Related-Change: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046 Closes-Bug: #1733634 | |
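
For clusters that still have SHA-1-dependent clients, the mitigation named in the UpgradeImpact note would look roughly like this (digest list illustrative):

```ini
# proxy-server.conf -- explicitly re-allow sha1 while clients migrate
[filter:tempurl]
use = egg:swift#tempurl
allowed_digests = sha1 sha256 sha512
```
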
| Matthew Oliver | f2c279bae9 | Trim sensitive information in the logs (CVE-2017-8761). Several headers and query params were previously revealed in logs but are now redacted: the X-Auth-Token header (previously redacted in the {auth_token} field, but not the {headers} field); the temp_url_sig query param (used by tempurl middleware); and the Authorization header and X-Amz-Signature and Signature query parameters (used by s3api middleware). This patch adds some new middleware helper methods to track headers and query parameters that should be redacted by proxy-logging. While instantiating the middleware, authors can call either register_sensitive_header('case-insensitive-header-name') or register_sensitive_param('case-sensitive-query-param-name') to add items that should be redacted. The redaction uses proxy-logging's existing reveal_sensitive_prefix config option to determine how much to reveal. Note that query params will still be logged in their entirety if eventlet_debug is enabled. UpgradeImpact: The reveal_sensitive_prefix config option now applies to more items; operators should review their currently-configured value to ensure it is appropriate for these new contexts. In particular, operators should consider reducing the value if it is more than 20 or so, even if that previously offered sufficient protection for auth tokens. Co-Authored-By: Tim Burke <tim.burke@gmail.com> Closes-Bug: #1685798 Change-Id: I88b8cfd30292325e0870029058da6fb38026ae1a | |
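
A sketch of the operator-facing knob referenced above; the value shown is illustrative:

```ini
# proxy-server.conf -- now also governs redaction of temp_url_sig,
# Authorization, X-Amz-Signature, and Signature values
[filter:proxy-logging]
use = egg:swift#proxy_logging
# log only the first 12 characters of sensitive values
reveal_sensitive_prefix = 12
```
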
| Zuul | c1d2e661b1 | Merge "s3api: Allow multiple storage domains" | |
| Tim Burke | 8c6ccb5fd4 | proxy: Add a chance to skip memcache when looking for shard ranges. By having some small portion of calls skip cache and go straight to disk, we can ensure the cache is always kept fresh and never expires (at least, for active containers). Previously, when shard ranges fell out of cache there would frequently be a thundering herd that could overwhelm the container server, leading to 503s served to clients or an increase in async pendings. Include metrics for hit/miss/skip rates. Change-Id: I6d74719fb41665f787375a08184c1969c86ce2cf Related-Bug: #1883324 | |
| Tim Burke | 11d1022163 | s3api: Allow multiple storage domains. Sometimes a cluster might be accessible via more than one set of domain names. Allow operators to configure them such that virtual-host style requests work with all names. Change-Id: I83b2fded44000bf04f558e2deb6553565d54fd4a | |
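
A hedged example of listing more than one storage domain; the comma-separated form and the domain names are assumptions:

```ini
# proxy-server.conf
[filter:s3api]
use = egg:swift#s3api
# virtual-host style requests should resolve against either name
storage_domain = s3.example.com,s3.alt.example.com
```
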
| Tim Burke | fa1058b6ed | slo: Default allow_async_delete to true. We've had this option for a year now, and it seems to help. Let's enable it for everyone. Note that Swift clients still need to opt into the async delete via a query param, while S3 clients get it for free. Change-Id: Ib4164f877908b855ce354cc722d9cb0be8be9921 | |
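
Operators who prefer the previous behavior can presumably opt back out; a minimal sketch:

```ini
# proxy-server.conf -- restore the pre-change default
[filter:slo]
use = egg:swift#slo
allow_async_delete = false
```
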
| Pete Zaitcev | 6198284839 | Add a project scope read-only role to keystoneauth. This patch continues work on the "Consistent and Secure Default Policies" effort. We already have system scope personas implemented, but the architecture people are asking for project scope now. At least we don't need domain scope. Change-Id: If7d39ac0dfbe991d835b76eb79ae978fc2fd3520 | |
| Zuul | b3def185c6 | Merge "Allow floats for all intervals" | |
| Alistair Coles | 46ea3aeae8 | Quarantine stale EC fragments after checking handoffs. If the reconstructor finds a fragment that appears to be stale, it will now quarantine the fragment. Fragments are considered stale if insufficient fragments at the same timestamp can be found to rebuild missing fragments, and the number found is less than or equal to a new reconstructor 'quarantine_threshold' config option. Before quarantining a fragment the reconstructor will attempt to fetch fragments from handoff nodes in addition to the usual primary nodes. The handoff requests are limited by a new 'request_node_count' config option. 'quarantine_threshold' defaults to zero, i.e. no fragments will be quarantined. 'request_node_count' defaults to '2 * replicas'. Closes-Bug: 1655608 Change-Id: I08e1200291833dea3deba32cdb364baa99dc2816 | |
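
A sketch of the two new reconstructor options; the threshold value is illustrative and the section name assumes the usual object-server.conf layout:

```ini
# object-server.conf
[object-reconstructor]
# quarantine apparently-stale fragments when this few (or fewer) copies at the
# same timestamp are found; 0 (the default) disables quarantining
quarantine_threshold = 1
# how far to look on handoff nodes before quarantining
request_node_count = 2 * replicas
```
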
| Tim Burke | c374a7a851 | Allow floats for all intervals. Change-Id: I91e9bc02d94fe7ea6e89307305705c383087845a | |
| Tim Burke | e35365df51 | s3api: Add config option to return 429s on ratelimit. Change-Id: If04c083ccc9f63696b1f53ac13edc932740a0654 | |
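
The commit message does not name the option; a plausible sketch, with the option name assumed from the subject line:

```ini
# proxy-server.conf -- option name is an assumption, not taken from the commit
[filter:s3api]
use = egg:swift#s3api
# return 429 Too Many Requests instead of a 5xx when a request is ratelimited
ratelimit_as_client_error = true
```
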
| Tim Burke | 27a734c78a | s3api: Allow CORS preflight requests. Unfortunately, we can't identify the user, so we can't map to an account, so we can't respect whatever CORS metadata might be set on the container. As a result, the allowed origins must be configured cluster-wide. Add a new config option, cors_preflight_allow_origin, for that; default it to blank (i.e., deny preflights from all origins, preserving existing behavior), but allow either a comma-separated list of origins or * (to allow all origins). Change-Id: I985143bf03125a05792e79bc5e5f83722d6431b3 Co-Authored-By: Matthew Oliver <matt@oliver.net.au> | |
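
A hedged example of the cluster-wide allow-list; the origins shown are illustrative:

```ini
# proxy-server.conf
[filter:s3api]
use = egg:swift#s3api
# blank (default) denies all preflights; use * to allow any origin
cors_preflight_allow_origin = https://console.example.com,https://app.example.com
```
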
| Tim Burke | cf4f320644 | tempauth: Add .reseller_reader group. Change-Id: I8c5197ed327fbb175c8a2c0e788b1ae14e6dfe23 | |
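
A minimal sketch of granting the new group to a tempauth user; the account, user name, and key are hypothetical:

```ini
# proxy-server.conf -- 'audit', 'reader', and the key are made-up examples
[filter:tempauth]
use = egg:swift#tempauth
user_audit_reader = readerkey .reseller_reader
```
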
| Pete Zaitcev | 98a0275a9d | Add a read-only role to keystoneauth. An idea was floated recently of a read-only role that can be used for cluster-wide audits, and is otherwise safe. It was also included in the "Consistent and Secure Default Policies" effort in OpenStack, where it implements "reader" personas in system, domain, and project scopes. This patch implements it for system scope, where it's most useful for operators. Change-Id: I5f5fff2e61a3e5fb4f4464262a8ea558a6e7d7ef | |
| Alistair Coles | 6896f1f54b | s3api: actually execute check_pipeline in real world. Previously, S3ApiMiddleware.check_pipeline would always exit early because the __file__ attribute of the Config instance passed to check_pipeline was never set. The __file__ key is typically passed to the S3ApiMiddleware constructor in the wsgi config dict, so this dict is now passed to check_pipeline() for it to test for the existence of __file__. Also, the use of a Config object is replaced with a dict where it mimics the wsgi conf object in the unit test setup. UpgradeImpact: The bug prevented the pipeline order checks described in proxy-server.conf-sample being made on the proxy-server pipeline when s3api middleware was included. With this change, these checks will now be made and an invalid pipeline configuration will result in a ValueError being raised during proxy-server startup. A valid pipeline has another middleware (presumed to be an auth middleware) between s3api and the proxy-server app. If keystoneauth is found, then a further check is made that s3token is configured after s3api and before keystoneauth. The pipeline order checks can be disabled by setting the s3api auth_pipeline_check option to False in proxy-server.conf. This mitigation is recommended if previously operating with what will now be considered an invalid pipeline. The bug also prevented a check for slo middleware being in the pipeline between s3api and the proxy-server app. If the slo middleware is not found then multipart uploads will now not be supported, regardless of the value of the allow_multipart_uploads option described in proxy-server.conf-sample. In this case a warning will be logged during startup but no exception is raised. Closes-Bug: #1912391 Change-Id: I357537492733b97e5afab4a7b8e6a5c527c650e4 | |
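
If an existing deployment would now fail the startup check, the mitigation named in the commit looks like this:

```ini
# proxy-server.conf -- disable the pipeline order checks; fixing the pipeline
# order is the better long-term answer
[filter:s3api]
use = egg:swift#s3api
auth_pipeline_check = False
```
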
| Tim Burke | 10d9a737d8 | s3api: Make allowable clock skew configurable. While we're at it, make the default match AWS's 15 minute limit (instead of our old 5 minute limit). UpgradeImpact: This (somewhat) weakens some security protections for requests over the S3 API; operators may want to preserve the prior behavior by setting allowable_clock_skew = 300 in the [filter:s3api] section of their proxy-server.conf. Co-Authored-By: Alistair Coles <alistairncoles@gmail.com> Change-Id: I0da777fcccf056e537b48af4d3277835b265d5c9 | |
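
To keep the old, stricter 5-minute window, the commit's suggested setting:

```ini
# proxy-server.conf -- restore the previous limit (value in seconds)
[filter:s3api]
use = egg:swift#s3api
allowable_clock_skew = 300
```
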
| Zuul | d5bb644a17 | Merge "Use cached shard ranges for container GETs" | |
| Grzegorz Grasza | 6930bc24b2 | Memcached client TLS support. This patch specifies a set of configuration options required to build a TLS context, which is used to wrap the client connection socket. Closes-Bug: #1906846 Change-Id: I03a92168b90508956f367fbb60b7712f95b97f60 | |
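
The commit does not list the option names; a hedged sketch, with names and file placement assumed, of what the TLS context settings might look like:

```ini
# memcache.conf -- option names and paths below are assumptions
[memcache]
memcache_servers = 10.0.0.1:11211,10.0.0.2:11211
tls_enabled = true
tls_cafile = /etc/swift/memcache-ca.crt
tls_certfile = /etc/swift/memcache-client.crt
tls_keyfile = /etc/swift/memcache-client.key
```
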
| Alistair Coles | 077ba77ea6 | Use cached shard ranges for container GETs. This patch makes four significant changes to the handling of GET requests for sharding or sharded containers: (1) container server GET requests may now result in the entire list of shard ranges being returned for the 'listing' state regardless of any request parameter constraints; (2) the proxy server may cache that list of shard ranges in memcache and the request's environ infocache dict, and subsequently use the cached shard ranges when handling GET requests for the same container; (3) the proxy now caches more container metadata so that it can synthesize a complete set of container GET response headers from cache; (4) the proxy server now enforces more container GET request validity checks that were previously only enforced by the backend server, e.g. checks for valid request parameter values. With this change, when the proxy learns from container metadata that the container is sharded, it will cache shard ranges fetched from the backend during a container GET in memcache. On subsequent container GETs the proxy will use the cached shard ranges to gather object listings from shard containers, avoiding further GET requests to the root container until the cached shard ranges expire from cache. Cached shard ranges are most useful if they cover the entire object name space in the container. The proxy therefore uses a new X-Backend-Override-Shard-Name-Filter header to instruct the container server to ignore any request parameters that would constrain the returned shard range listing, i.e. the 'marker', 'end_marker', 'includes' and 'reverse' parameters. Having obtained the entire shard range listing (either from the server or from cache), the proxy now applies those request parameter constraints itself when constructing the client response. When using cached shard ranges the proxy will synthesize response headers from the container metadata that is also in cache. To enable the full set of container GET response headers to be synthesized in this way, the set of metadata that the proxy caches when handling a backend container GET response is expanded to include various timestamps. The X-Newest header may be used to disable looking up shard ranges in cache. Change-Id: I5fc696625d69d1ee9218ee2a508a1b9be6cf9685 | |
| Zuul | ebfc3a61fa | Merge "Use socket_timeout kwarg instead of useless eventlet.wsgi.WRITE_TIMEOUT" | |
| Zuul | cd228fafad | Merge "Add a new URL parameter to allow for async cleanup of SLO segments" | |
| Tim Burke | 918ab8543e | Use socket_timeout kwarg instead of useless eventlet.wsgi.WRITE_TIMEOUT. No version of eventlet that I'm aware of has any sort of support for eventlet.wsgi.WRITE_TIMEOUT; I don't know why we've been setting that. On the other hand, the socket_timeout argument for eventlet.wsgi.Server has been supported for a while -- since 0.14 in 2013. Drive-by: Fix up handling of sub-second client_timeouts. Change-Id: I1dca3c3a51a83c9d5212ee5a0ad2ba1343c68cf9 Related-Change: I1d4d028ac5e864084a9b7537b140229cb235c7a3 Related-Change: I433c97df99193ec31c863038b9b6fd20bb3705b8 | |
| Tim Burke | e78377624a | Add a new URL parameter to allow for async cleanup of SLO segments. Add a new config option to SLO, allow_async_delete, to allow operators to opt in to this new behavior. If their expirer queues get out of hand, they can always turn it back off. If the option is disabled, handle the delete inline; this matches the behavior of old Swift. Only allow an async delete if all segments are in the same container and none are nested SLOs, that way we only have two auth checks to make. Have s3api try to use this new mode if the data seems to have been uploaded via S3 (since it should be safe to assume that the above criteria are met). Drive-by: Allow the expirer queue and swift-container-deleter to use high-precision timestamps. Change-Id: I0bbe1ccd06776ef3e23438b40d8fb9a7c2de8921 | |
| Zuul | 2593f7f264 | Merge "memcache: Make error-limiting values configurable" | |
| Tim Burke | aff65242ff | memcache: Make error-limiting values configurable. Previously these were all hardcoded; let operators tweak them as needed. Significantly, this also allows operators to disable error-limiting entirely, which may be a useful protection in case proxies are configured with a single memcached server. Use error_suppression_limit and error_suppression_interval to mirror the option names used by the proxy-server to ratelimit backend Swift servers. Co-Authored-By: Alistair Coles <alistairncoles@gmail.com> Change-Id: Ife005cb8545dd966d7b0e34e5496a0354c003881 | |
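
A sketch using the option names from the commit; the values and the section placement are illustrative:

```ini
# memcache.conf (or the cache filter section of proxy-server.conf)
[memcache]
# errors tolerated per interval before a memcached server is error-limited
error_suppression_limit = 10
# window, in seconds, over which those errors are counted
error_suppression_interval = 60
```
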
| Zuul | b9a404b4d1 | Merge "ec: Add an option to write fragments with legacy crc" | |
| Tim Burke | 599f63e762 | ec: Add an option to write fragments with legacy crc. When upgrading from liberasurecode<=1.5.0, you may want to continue writing legacy CRCs until all nodes are upgraded and capable of reading fragments with zlib CRCs. Starting in liberasurecode>=1.6.2, we can use the environment variable LIBERASURECODE_WRITE_LEGACY_CRC to control whether we write zlib or legacy CRCs, but for many operators it's easier to manage swift configs than environment variables. Add a new option, write_legacy_ec_crc, to the proxy-server app and object-reconstructor; if set to true, ensure legacy frags are written. Note that more daemons instantiate proxy-server apps than just the proxy-server. The complete set of impacted daemons should be: proxy-server, object-reconstructor, container-reconciler, and any users of internal-client.conf. UpgradeImpact: To ensure a smooth liberasurecode upgrade: 1. Determine whether your cluster writes legacy or zlib CRCs. Depending on the order in which shared libraries are loaded, your servers may already be reading and writing zlib CRCs, even with old liberasurecode. In that case, no special action is required and WRITING LEGACY CRCS DURING THE UPGRADE WILL CAUSE AN OUTAGE. Just upgrade liberasurecode normally. See the closed bug for more information and a script to determine which CRC is used. 2. On all nodes, ensure Swift is upgraded to a version that includes write_legacy_ec_crc support and write_legacy_ec_crc is enabled on all daemons. 3. On each node, upgrade liberasurecode and restart Swift services. Because of (2), they will continue writing legacy CRCs which will still be readable by nodes that have not yet upgraded. 4. Once all nodes are upgraded, remove the write_legacy_ec_crc option from all configs across all nodes. After restarting daemons, they will write zlib CRCs which will also be readable by all nodes. Change-Id: Iff71069f808623453c0ff36b798559015e604c7d Related-Bug: #1666320 Closes-Bug: #1886088 Depends-On: https://review.opendev.org/#/c/738959/ | |
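
During the upgrade window described above, the flag would be set wherever a proxy app or reconstructor is configured; a minimal sketch:

```ini
# proxy-server.conf (and internal-client.conf / container-reconciler configs)
[app:proxy-server]
write_legacy_ec_crc = true

# object-server.conf
[object-reconstructor]
write_legacy_ec_crc = true
```
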
| Clay Gerrard | 754defc39c | Client should retry when there's just one 404 and a bunch of errors. During a rebalance, it's expected that we may get a 404 for data that does exist elsewhere in the cluster. Normally this isn't a problem; the proxy sees the 404, keeps digging, and one of the other primaries will serve the response. Previously, if the other replicas were heavily loaded, the proxy would see a bunch of timeouts and the fresh (empty) primary, treat the 404 as good, and send that on to the client. Now, have the proxy throw out that first 404 (provided it doesn't have a timestamp); it will then return a 503 to the client, indicating that it should try again. Add a new (per-policy) proxy-server config option, rebalance_missing_suppression_count; operators may use this to increase the number of 404-no-timestamp responses to discard if their rebalances are going faster than replication can keep up, or set it to zero to return to the previous behavior. Change-Id: If4bd39788642c00d66579b26144af8f116735b4d | |
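
A hedged sketch of the new per-policy knob; the values and the per-policy section syntax shown are assumptions:

```ini
# proxy-server.conf
[app:proxy-server]
# discard up to this many 404-with-no-timestamp responses per GET
rebalance_missing_suppression_count = 2

# per-policy override (section syntax assumed)
[proxy-server:policy:1]
rebalance_missing_suppression_count = 0
```
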
| Zuul | cca5e8b1de | Merge "Make all concurrent_get options per-policy" | |
| Zuul | 20e1544ad8 | Merge "Extend concurrent_gets to EC GET requests" | |
| Clay Gerrard | f043aedec1 | Make all concurrent_get options per-policy. Change-Id: Ib81f77cc343c3435d7e6258d4631563fa022d449 | |
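
A sketch of what per-policy tuning might look like; the option names come from the related commits, while the per-policy section syntax and values are assumptions:

```ini
# proxy-server.conf -- cluster-wide defaults ...
[app:proxy-server]
concurrent_gets = true
concurrency_timeout = 0.5

# ... overridden for one policy (section syntax assumed)
[proxy-server:policy:0]
concurrency_timeout = 0.1
```
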
| Zuul | 7015ac2fdc | Merge "py3: Work with proper native string paths in crypto meta" | |
| Clay Gerrard | 8f60e0a260 | Extend concurrent_gets to EC GET requests. After the initial requests are started, if the proxy still does not have enough backend responses to return a client response, additional requests will be spawned to remaining primaries at the frequency configured by the concurrency_timeout. A new tunable, concurrent_ec_extra_requests, allows operators to control how many requests to backend fragments are started immediately with a client request to an object stored in an EC storage policy. By default the minimum ndata backend requests are started immediately, but operators may increase concurrent_ec_extra_requests up to nparity, which is similar in effect to a concurrency_timeout of 0. Change-Id: Ia0a9398107a400815be2e0097b1b8e76336a0253 | |
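
A minimal sketch; the value is illustrative and, per the commit, can be raised up to nparity for the policy:

```ini
# proxy-server.conf
[app:proxy-server]
# start ndata + 2 fragment GETs immediately for EC objects
concurrent_ec_extra_requests = 2
```
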
| Tim Burke | 7d429318dd | py3: Work with proper native string paths in crypto meta. Previously, we would work with these paths as WSGI strings -- this would work fine when all data were read and written on the same major version of Python, but fail pretty badly during and after upgrading Python. In particular, if a py3 proxy-server tried to read existing data that was written down by a py2 proxy-server, it would hit an error and respond 500. Worse, if an un-upgraded py2 proxy tried to read data that was freshly-written by a py3 proxy, it would serve corrupt data back to the client (including a corrupt/invalid ETag and Content-Type). Now, ensure that both py2 and py3 write down paths as native strings. Make an effort to still work with WSGI-string metadata, though it can be ambiguous as to whether a string is a WSGI string or not. The heuristic used is: if the path from metadata does not match the (native-string) request path, and the path from metadata (when interpreted as a WSGI string) can be "un-wsgi-fied" without any encode/decode errors, and the native-string path from metadata *does* match the native-string request path, then trust the path from the request. By contrast, we usually prefer the path from metadata in case there was a pipeline misconfiguration (see related bug). Add the ability to read and write a new, unambiguous version of metadata that always has the path as a native string. To support rolling upgrades, a new config option is added: meta_version_to_write. This defaults to 2 to support rolling upgrades without configuration changes, but the default may change to 3 in a future release. UpgradeImpact: When upgrading from Swift 2.20.0 or Swift 2.19.1 or earlier, set meta_version_to_write = 1 in your keymaster's configuration. Regardless of prior Swift version, set meta_version_to_write = 3 after upgrading all proxy servers. When switching from Python 2 to Python 3, first upgrade Swift while on Python 2, then upgrade to Python 3. Change-Id: I00c6693c42c1a0220b64d8016d380d5985339658 Closes-Bug: #1888037 Related-Bug: #1813725 | |
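
The rolling-upgrade settings from the UpgradeImpact note, sketched against a keymaster section (the section name is the usual one, but treat the placement as an assumption):

```ini
# keymaster config -- while any proxy still runs Swift <= 2.20.0 / 2.19.1
[filter:keymaster]
use = egg:swift#keymaster
meta_version_to_write = 1

# after every proxy is upgraded, switch to the unambiguous format:
# meta_version_to_write = 3
```
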
| Tim Burke | 2ffe598f48 | proxy-logging: Be able to configure log_route. This lets you have separate loggers for the left and right proxy-logging middlewares, so you can have a config like `[pipeline:main] pipeline = ... proxy-logging-client ... proxy-logging-subrequest proxy-server`, `[proxy-logging-client] use = egg:swift#proxy_logging access_log_statsd_metric_prefix = client-facing`, `[proxy-logging-subrequest] use = egg:swift#proxy_logging access_log_route = subrequest access_log_statsd_metric_prefix = subrequest` to isolate subrequest metrics from client-facing metrics. Change-Id: If41e3d542b30747da7ca289708e9d24873c46e2e | |
| Tim Burke | 1db11df4f2 | ratelimit: Allow multiple placements. We usually want to have ratelimit fairly far left in the pipeline -- the assumption is that something like an auth check will be fairly expensive and we should try to shield the auth system so it doesn't melt under the load of a misbehaved swift client. But with S3 requests, we can't know the account/container that a request is destined for until *after* auth. Fortunately, we've already got some code to make s3api play well with ratelimit. So, let's have our cake and eat it, too: allow operators to place ratelimit once, before auth, for swift requests and again, after auth, for s3api. They'll both use the same memcached keys (so users can't switch APIs to effectively double their limit), but each S3 request is still only counted against the limit once. Change-Id: If003bb43f39427fe47a0f5a01dbcc19e1b3b67ef | |
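
A hedged pipeline sketch of the double placement; the second filter name is made up and the exact middleware ordering is only indicative of the idea described above:

```ini
# proxy-server.conf -- ratelimit before auth for native Swift requests,
# and again after auth so translated s3api requests can be limited too
[pipeline:main]
pipeline = catch_errors proxy-logging cache ratelimit s3api s3token keystoneauth s3_ratelimit slo proxy-logging proxy-server

[filter:ratelimit]
use = egg:swift#ratelimit

[filter:s3_ratelimit]
use = egg:swift#ratelimit
```
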
| John Dickinson | d358b9130d | added value and notes to a sample config file for s3token. Change-Id: I18accffb2cf6ba6a3fff6fd5d95f06a424d1d919 | |
| Romain LE DISEZ | 27fd97cef9 | Middleware that allows a user to have quoted Etags. Users have complained for a while that Swift's ETags don't match the expected RFC formats. We've resisted fixing this for just as long, worrying that the fix would break innumerable clients that expect the value to be a hex-encoded MD5 digest and *nothing else*. But users keep asking for it, and some consumers (including some CDNs) break if we *don't* have quoted etags -- so, let's make it an option. With this middleware, Swift users can set metadata per-account or even per-container to explicitly request RFC-compliant etags or not. Swift operators also get an option to change the default behavior cluster-wide; it defaults to the old, non-compliant format. See also: https://tools.ietf.org/html/rfc2616#section-3.11 and https://tools.ietf.org/html/rfc7232#section-2.3 Closes-Bug: 1099087 Closes-Bug: 1424614 Co-Authored-By: Tim Burke <tim.burke@gmail.com> Change-Id: I380c6e34949d857158e11eb428b3eda9975d855d | |
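
A hedged sketch of wiring up the middleware; the filter name and option shown are assumptions, not taken from the commit:

```ini
# proxy-server.conf -- names here are assumptions
[filter:etag-quoter]
use = egg:swift#etag_quoter
# keep the old, unquoted format as the cluster-wide default; accounts and
# containers can still opt in via metadata
enable_by_default = false
```
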
| Clay Gerrard | 2759d5d51c | New Object Versioning mode. This patch adds a new object versioning mode. This new mode provides a new set of APIs for users to interact with older versions of an object. It also changes the naming scheme of older versions and adds a version-id to each object. This new mode is not backwards compatible or interchangeable with the other two modes (i.e., stack and history), especially due to the changes in the naming scheme of older versions. This new mode will also serve as a foundation for adding S3 versioning compatibility in the s3api middleware. Note that this does not (yet) support using a versioned container as a source in container-sync. Container sync should be enhanced to sync previous versions of objects. Change-Id: Ic7d39ba425ca324eeb4543a2ce8d03428e2225a1 Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com> Co-Authored-By: Tim Burke <tim.burke@gmail.com> Co-Authored-By: Thiago da Silva <thiagodasilva@gmail.com> | |