Commit Graph

338 Commits

Author SHA1 Message Date
Tim Burke
5f89d14ebb s3token: Enable secret caching by default
Now that we need to pass the service credentials to Keystone, we might as
well enable secret caching by default, since those credentials have to be
provided anyway.
This patch also adds the required s3token configuration to CI so we can use
the Swift service credentials to fetch s3api secrets, and also configures
Keystone users for the cross-compatibility tests.
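For illustration, the resulting section might look roughly like the sketch
below; the endpoint, credentials and cache duration are placeholders, and
the keystoneauth1 plugin options shown are assumed to match the sample
config rather than copied from it:
 [filter:s3token]
 use = egg:swift#s3token
 auth_uri = https://keystone.example.com:5000/v3
 secret_cache_duration = 600
 # service credentials, also used to fetch s3api secrets
 auth_type = password
 auth_url = https://keystone.example.com:5000/v3
 username = swift
 user_domain_id = default
 password = secret
 project_name = service
 project_domain_id = default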
Change-Id: Ief0a29c4300edf2e0d52c041960d756ecc8a2677
Signed-off-by: Tim Burke <tburke@nvidia.com>
2025-11-06 13:23:00 +11:00
Zuul
3c7a2b4f02 Merge "s3token: Pass service auth token to Keystone" 2025-11-04 16:56:40 +00:00
Tim Burke
e7bb2a3855 s3token: Pass service auth token to Keystone
Recent versions of Keystone require an auth token when accessing the
/v3/s3tokens endpoint, to avoid exposing information that a user who merely
holds a presigned URL should not be able to see.
UpgradeImpact
=============
The s3token middleware now requires Keystone auth credentials to be
configured. If secret_cache_duration is enabled, these credentials
should already be configured. Without these credentials, Keystone users
will no longer be able to make S3 API requests.
Closes-Bug: #2119646
Change-Id: Ie80bc33d0d9de17ca6eaad3b43628724538001f6
Signed-off-by: Tim Burke <tim.burke@gmail.com>
2025-11-03 14:04:20 +11:00
Yan Xiao
dcd5a265f6 proxy-logging: Add real-time transfer bytes counters
Currently proxy-logging emits a single transfer stat over the duration of
an upload or download. We want another stat out of proxy-logging: one that
is emitted periodically as bytes are actually sent or received, so we can
get reasonably accurate point-in-time breakdowns of bandwidth usage.
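A minimal sketch of the idea (not the middleware's actual implementation):
wrap the response iterator and flush a byte counter on an interval. The
statsd client is assumed to provide update_stats(); the metric name and
emit_interval are illustrative.
 import time

 class ByteCountingIterator(object):
     """Wrap a WSGI app iterator and periodically report bytes sent."""

     def __init__(self, app_iter, statsd, emit_interval=1.0):
         self.app_iter = app_iter
         self.statsd = statsd  # assumed to provide update_stats(name, amount)
         self.emit_interval = emit_interval
         self._pending = 0
         self._last_emit = time.monotonic()

     def __iter__(self):
         for chunk in self.app_iter:
             self._pending += len(chunk)
             now = time.monotonic()
             if now - self._last_emit >= self.emit_interval:
                 # point-in-time counter, not one total at request end
                 self.statsd.update_stats('proxy-server.xfer.bytes',
                                          self._pending)
                 self._pending = 0
                 self._last_emit = now
             yield chunk
         if self._pending:
             self.statsd.update_stats('proxy-server.xfer.bytes', self._pending)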
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Shreeya Deshpande <shreeyad@nvidia.com>
Change-Id: Ideecd0aa58ddf091c9f25f15022a9066088f532b
Signed-off-by: Yan Xiao <yanxiao@nvidia.com>
2025-10-30 15:30:37 -04:00
Tim Burke
79feb12b28 docs: More proxy-server.conf-sample cleanup
Change-Id: I99dbd9590ff39343422852e4154f98bc194d161d
Signed-off-by: Tim Burke <tim.burke@gmail.com>
2025-10-01 08:23:42 -07:00
Clay Gerrard
389747a8b2 doc: specify seconds in proxy-server.conf-sample
Most of Swift's timing configuration values are expressed in seconds; make
this explicit in the sample config for the values that did not already say
so.
Related-Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2
Change-Id: I5b25b7e830a31f03d11f371adf12289222222eb2
Signed-off-by: Clay Gerrard <clay.gerrard@gmail.com>
2025-09-30 12:34:05 -05:00
Jianjian Huo
d9883d0834 proxy: use cooperative tokens to coalesce updating shard range requests into backend
The cost of memcache misses can be deadly. For example, when the updating
shard range cache misses, PUT requests have to query the backend to figure
out which shard should receive the object. When a lot of requests hit the
backend at the same time, this can easily overload the root containers and
cause a lot of 500/503 errors; and when the proxy-servers receive the
responses to all of those 200 backend shard range queries, they in turn try
to write the same shard range data into the memcached servers at the same
time, which can cause memcached to return OOM failures as well.
We have seen frequent misses of the updating shard range cache in
production, due to memcached running out of memory and evicting entries.
To cope with those situations, a memcached-based cooperative token
mechanism is added to the proxy-server to coalesce many in-flight backend
requests into a few: when the updating shard range cache misses, only the
first few requests acquire global cooperative tokens and fetch the updating
shard ranges from the backend container servers. The remaining cache-miss
requests wait for the cache to be filled instead of all querying the
backend container servers. This prevents a flood of backend requests from
overloading both the container servers and the memcached servers.
Drive-by fix: when memcache is not available, the object controller only
needs to retrieve from the container server the specific shard range to
which it will send the update request.
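A rough sketch of the coalescing idea, not Swift's actual implementation;
the memcache client is assumed to provide get/set/incr with the semantics
noted in the comments, and the key names, token limit and timings are
illustrative:
 import random
 import time

 def fetch_with_cooperative_tokens(memcache, backend_fetch, cache_key,
                                   token_limit=3, wait=0.25, attempts=10):
     """Fetch shard ranges, letting only a few callers hit the backend."""
     cached = memcache.get(cache_key)
     if cached is not None:
         return cached
     # incr is assumed to create the key if missing and return the new
     # value; the first token_limit callers "win" and query the backend
     holders = memcache.incr(cache_key + '.token', delta=1, time=30)
     if holders is not None and holders <= token_limit:
         data = backend_fetch()  # e.g. GET shard ranges from a container server
         memcache.set(cache_key, data, time=600)
         return data
     # everyone else waits briefly for a token holder to fill the cache
     for _ in range(attempts):
         time.sleep(wait * (0.5 + random.random()))
         cached = memcache.get(cache_key)
         if cached is not None:
             return cached
     return backend_fetch()  # give up waiting; fall back to the backend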
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Yan Xiao <yanxiao@nvidia.com>
Co-Authored-By: Shreeya Deshpande <shreeyad@nvidia.com>
Signed-off-by: Jianjian Huo <jhuo@nvidia.com>
Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2
2025-09-29 19:44:50 -07:00
Vitaly Bordyug
32eaab20b1 proxy-logging: create field for access_user_id
Add a new field so that the access key can be logged for s3api calls, while
leaving the field available for other middlewares to fill with
auth-relevant information. Corresponding code is added to the tempauth and
keystone middlewares.
Since s3api creates a copy of the environ dict for the downstream request
object when translating via s3req.to_swift_req, the environ dict that is
seen/modified in other middleware modules is not the same instance seen in
proxy-logging; a mutable object is used so that updates carry through into
swift_req.environ.
Change the assert in test_proxy_logging from "the last field" to index 21
in the interest of maintainability.
Also add some regression tests for the object, bucket and S3 v4 APIs and
update the documentation with details about the new field.
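Assuming the new field is exposed to proxy-logging's template under the
same name (an assumption, not verified here), it could be referenced like
any other log_msg_template field:
 [filter:proxy-logging]
 log_msg_template = {remote_addr} {method} {path} {status_int} {access_user_id}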
Signed-off-by: Vitaly Bordyug <vbordug@gmail.com>
Change-Id: I0ce4e92458e2b05a4848cc7675604c1aa2b64d64
2025-08-26 01:14:37 +00:00
Tim Burke
74030236ad tempauth: Support fernet tokens
Tempauth fernet tokens use a secret shared among all proxies to encrypt
user group information. Because they are encrypted, clients can neither
view nor edit this information; it is an opaque bearer token similar to
the existing memcached-backed tokens (just much longer). Note that
tokens still expire after the configured token_life.
Add a new set of config options of the form
 fernet_key_<keyid> = <32 url-safe base64-encoded bytes>
Any of the configured keys will be used to attempt to decrypt tokens
starting with "ftk" and extract group information.
Another new config option
 active_fernet_key_id = <keyid>
dictates which key should be used when minting tokens. Such tokens
will start with "ftk" to distinguish them from memcached-backed tokens
(which continue to start with "tk"). If active_fernet_key_id is not
configured, memcached-backed tokens continue to be used.
Together, these allow seamless transitions from memcached-backed tokens
to fernet tokens, as well as transitions from one fernet key to another:
 1. Add a new fernet_key_<keyid> entry.
 2. Ensure all proxies have the new config with fernet_key_<keyid>.
 3. Set active_fernet_key_id = <keyid>.
 4. Ensure all proxies have the new config with the new
 active_fernet_key_id.
This is similar to the key-rotation process for the encryption feature,
except that old keys may be pruned following a token_life period.
Additionally, opportunistically compress groups before minting tokens.
Compressed tokens will begin with "zftk" but otherwise behave just like
"ftk" tokens.
Change-Id: I0bdc98765d05e91f872ef39d4722f91711a5641f
2025-04-25 14:49:12 -07:00
Clay Gerrard
0e2791a88a Remove deprecated statsd label_mode
Hopefully, if we never do a release that supports signalfx, no one will
ever use it and we won't have to maintain it.
Drive-by: refactor the label mode dispatch to fix an odd bug where a config
name could collide with a class attribute and blow up in confusing ways.
Change-Id: I2c67b59820c5ca094077bf47628426f4b0445ba0
2025-04-04 13:02:37 +01:00
Tim Burke
7e5235894b stats: API for native labeled metrics
Introduce a LabeledStatsdClient API; no callers yet.
Include three config options:
 - statsd_label_mode, which specifies which label format to use
 - statsd_emit_legacy, which dictates whether to emit the old-style
 dotted metrics
 - statsd_user_label_<name> = <value>, which supports user defined
 labels in restricted ASCII characters
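A sketch of what these might look like in a config; the mode value and user
label below are placeholders (see the sample config for the supported
modes), not values taken from this change:
 statsd_label_mode = dogstatsd
 statsd_emit_legacy = true
 statsd_user_label_region = us-west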
Co-Authored-By: yanxiao@nvidia.com
Co-Authored-By: alistairncoles@gmail.com
Change-Id: I115ffb1dc601652a979895d7944e011b951a91c1
2025-04-03 14:26:08 -04:00
Clay Gerrard
b69a2bef45 Deprecate expirer options
The following configuration options are deprecated:
 * expiring_objects_container_divisor
 * expiring_objects_account_name
The upstream maintainers are not aware of any clusters where these have
been configured to non-default values.
UpgradeImpact:
Operators are encouraged to remove their "container_divisor" setting and
use the default value of 86400.
If a cluster was deployed with a non-standard "account_name", operators
should remove the option from all configs so they are running a supported
configuration going forward. They will, however, need to deploy stand-alone
expirer processes with the legacy expirer config to clean up old expiration
tasks from the previously configured account name.
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Jianjian Huo <jhuo@nvidia.com>
Change-Id: I5ea9e6dc8b44c8c5f55837debe24dd76be7d6248
2025-02-07 08:33:34 -08:00
Tim Burke
ae6300af86 wsgi: Reap stale workers (after a timeout) following a reload
Add a new tunable, `stale_worker_timeout`, defaulting to 86400 (i.e. 24
hours). Once this time elapses following a reload, the manager process
will issue SIGKILLs to any remaining stale workers.
This gives operators a way to configure a limit for how long old code
and configs may still be running in their cluster.
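For example (a sketch; the one-hour value is illustrative and the section
placement is assumed to follow the other wsgi server options):
 [DEFAULT]
 # SIGKILL any workers still running pre-reload code/config after one hour
 stale_worker_timeout = 3600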
To enable this, the temporary reload child (which waits for the reload
to complete then closes the accept socket on all the old workers) has
grown the ability to send state to the re-exec'ed manager. Currently,
this is limited to just the set of pre-re-exec child PIDs and their
reload times, though it was designed to be reasonably extensible.
This allows the new manager to recognize stale workers as they exit
instead of logging
 Ignoring wait() result from unknown PID ...
With the improved knowledge of subprocesses, we can kick the log level
for the above message up from info to warning; we no longer expect it
to trigger in practice.
Drive-by: Add logging to ServersPerPortStrategy.register_worker_exit
that's comparable to what WorkersStrategy does.
Change-Id: I8227939d04fda8db66fb2f131f2c71ce8741c7d9
2025-01-16 13:44:21 +11:00
Tim Burke
a55a48ffc8 docs: Call out that xprofile is not intended for production
Change-Id: I1e9d4d5df403040d69db93a08647cd0abe1b8037
2024-12-10 15:17:11 -08:00
Tim Burke
ef8764cb06 logging: Add UPDATE to valid http methods
We introduced this a while back, but forgot to add it then.
Related-Change: Ia13ee5da3d1b5c536eccaadc7a6fdcd997374443
Change-Id: Ib65ddf50d7f5c3e27475626000943eb18e65c73a
2024-10-09 08:18:49 -07:00
Alistair Coles
d555755423 proxy_logging config: unit tests and doc pointers
Add unit tests to verify the precedence of the access_log_ and log_
option prefixes.
Add pointers from proxy_logging sections in other sample config files
to the proxy-server.conf-sample file.
Change-Id: Id18176d3790fd187e304f0e33e3f74a94dc5305c
2024-07-16 11:33:58 +01:00
indianwhocodes
11eb17d3b2 support x-open-expired header for expired objects
If the global configuration option 'enable_open_expired' is set to true,
a client may set the 'x-open-expired' header to true on a request in order
to access an object that has expired but is still within its grace period.
If the flag is set to false (the default), clients cannot access expired
objects even with the header.
When a client sets an 'x-open-expired' header to a true value on a
GET/HEAD/POST request, the proxy forwards x-backend-open-expired to the
storage server. The storage server then allows clients that set
x-backend-open-expired to open and read an object that has not yet been
reaped by the object-expirer, even after the x-delete-at time has passed.
The header is always ignored when used with temporary URLs.
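A sketch of the server-side and client-side pieces; the section placement,
account and paths are placeholders rather than taken from this change:
 # proxy-server.conf
 [app:proxy-server]
 enable_open_expired = true

 # client request for an object that has expired but not yet been reaped
 GET /v1/AUTH_test/container/object HTTP/1.1
 X-Open-Expired: true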
Co-Authored-By: Anish Kachinthaya <akachinthaya@nvidia.com>
Related-Change: I106103438c4162a561486ac73a09436e998ae1f0
Change-Id: Ibe7dde0e3bf587d77e14808b169c02f8fb3dddb3
2024-04-26 10:13:40 +01:00
Alistair Coles
2500fbeea9 proxy: don't use recoverable_node_timeout with x-newest
Object GET requests with a truthy X-Newest header are not resumed if a
backend request times out. The GetOrHeadHandler therefore uses the
regular node_timeout when waiting for a backend connection response,
rather than the possibly shorter recoverable_node_timeout. However,
previously while reading data from a backend response the
recoverable_node_timeout would still be used with X-Newest requests.
This patch simplifies GetOrHeadHandler to never use
recoverable_node_timeout when X-Newest is truthy.
Change-Id: I326278ecb21465f519b281c9f6c2dedbcbb5ff14
2024-02-26 09:54:36 +00:00
Takashi Kajinami
bd64748a03 Document allowed_digests for formpost middleware
The allowed_digests option was added to the formpost middleware in addition
to the tempurl middleware[1], but the option was not added to the formpost
section of the example proxy config file.
[1] 2d063cd61f
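For example (a sketch; the digest list shown is illustrative and may differ
from the defaults in any given release):
 [filter:formpost]
 use = egg:swift#formpost
 allowed_digests = sha256 sha512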
Change-Id: Ic885e8bde7c1bbb3d93d032080b591db1de80970
2023-12-25 17:17:39 +09:00
Tim Burke
0c9b545ea7 docs: Clean up proxy logging docs
Change-Id: I6ef909e826d3901f24d3c42a78d2ab1e4e47bb64
2023-08-04 11:30:42 -07:00
Tim Burke
469c38e9fb wsgi: Add keepalive_timeout option
Clients sometimes hold open connections "just in case" they might later
pipeline requests. This can cause issues for proxies, especially if
operators restrict max_clients in an effort to improve response times
for the requests that *do* get serviced.
Add a new keepalive_timeout option to give proxies a way to drop these
established-but-idle connections without impacting active connections
(as may happen when reducing client_timeout). Note that this requires
eventlet 0.33.4 or later.
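A sketch of the new option; the value and section placement are
illustrative rather than taken from the sample config:
 [DEFAULT]
 # drop established-but-idle client connections after 10 seconds
 keepalive_timeout = 10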
Change-Id: Ib5bb84fa3f8a4b9c062d58c8d3689e7030d9feb3
2023-04-18 11:49:05 -07:00
Tim Burke
cbba65ac91 quotas: Add account-level per-policy quotas
Reseller admins can set new headers on accounts like
 X-Account-Quota-Bytes-Policy-<policy-name>: <quota>
This may be done to limit consumption of a faster, all-flash policy, for
example.
This is independent of the existing X-Account-Meta-Quota-Bytes header, which
continues to limit the total storage for an account across all policies.
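For illustration, a reseller admin could set a 10 GiB quota for data stored
under a hypothetical "gold" policy with python-swiftclient; the endpoint
and token below are placeholders:
 from swiftclient import client as swift_client

 # requires a reseller admin token
 swift_client.post_account(
     'http://saio:8080/v1/AUTH_test',
     token='<reseller-admin-token>',
     headers={'X-Account-Quota-Bytes-Policy-gold': str(10 * 1024 ** 3)})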
Change-Id: Ib25c2f667e5b81301f8c67375644981a13487cfe
2023-03-21 17:27:31 +00:00
Zuul
0470994a03 Merge "slo: Default allow_async_delete to true" 2022-12-01 19:25:50 +00:00
Tim Burke
5c6407bf59 proxy: Add a chance to skip memcache for get_*_info calls
If you've got thousands of requests per second for objects in a single
container, you basically NEVER want that container's info to ever fall
out of memcache. If it *does*, all those clients are almost certainly
going to overload the container.
Avoid this by allowing some small fraction of requests to bypass and
refresh the cache, pushing out the TTL as long as there continue to be
requests to the container. The likelihood of skipping the cache is
configurable, similar to what we did for shard range sets.
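A sketch of the knobs this adds, as I recall them from the sample config;
treat the option names, units and values as illustrative rather than
authoritative:
 [app:proxy-server]
 # small percentage of existence checks that bypass memcache and refresh it
 account_existence_skip_cache_pct = 0.1
 container_existence_skip_cache_pct = 0.1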
Change-Id: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f
Closes-Bug: #1883324 
2022-08-30 18:49:48 +10:00
Tim Burke
f6196b0a22 AUTHORS/CHANGELOG for 2.30.0
Change-Id: If7c9e13fc62f8104ccb70a12b9c839f78e7e6e3e
2022-08-17 22:21:45 -07:00
Zuul
5398204f22 Merge "tempurl: Deprecate sha1 signatures" 2022-06-01 15:54:25 +00:00
Tim Burke
11b9761cdf Rip out pickle support in our memcached client
We said this would be going away back in 1.7.0 -- let's actually remove it.
Change-Id: I9742dd907abea86da9259740d913924bb1ce73e7
Related-Change: Id7d6d547b103b4f23ebf5be98b88f09ec6027ce4
2022-04-27 11:16:16 -07:00
Tim Burke
118cf2ba8a tempurl: Deprecate sha1 signatures
We've known this would eventually be necessary for a while [1], and
way back in 2017 we started seeing SHA-1 collisions [2].
[1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
[2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
UpgradeImpact:
==============
"sha1" has been removed from the default set of `allowed_digests` in the
tempurl middleware config. If your cluster still has clients requiring
the use of SHA-1,
- explicitly configure `allowed_digests` to include "sha1" and
- encourage your clients to move to more-secure algorithms.
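If legacy clients still require SHA-1, re-enabling it would look something
like the sketch below (the exact digest list is illustrative):
 [filter:tempurl]
 use = egg:swift#tempurl
 allowed_digests = sha1 sha256 sha512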
Depends-On: https://review.opendev.org/c/openstack/tempest/+/832771
Change-Id: I6e6fa76671c860191a2ce921cb6caddc859b1066
Related-Change: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046
Closes-Bug: #1733634 
2022-04-22 20:43:01 +10:00
Matthew Oliver
f2c279bae9 Trim sensitive information in the logs (CVE-2017-8761)
Several headers and query params were previously revealed in logs but
are now redacted:
 * X-Auth-Token header (previously redacted in the {auth_token} field,
 but not the {headers} field)
 * temp_url_sig query param (used by tempurl middleware)
 * Authorization header and X-Amz-Signature and Signature query
 parameters (used by s3api middleware)
This patch adds some new middleware helper methods to track headers and
query parameters that should be redacted by proxy-logging. While
instantiating the middleware, authors can call either:
 register_sensitive_header('case-insensitive-header-name')
 register_sensitive_param('case-sensitive-query-param-name')
to add items that should be redacted. The redaction uses proxy-logging's
existing reveal_sensitive_prefix config option to determine how much to
reveal.
Note that query params will still be logged in their entirety if
eventlet_debug is enabled.
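A minimal sketch of how a middleware author might use these helpers; the
middleware itself is hypothetical, and the import path is an assumption
(the helpers have moved between modules across releases):
 from swift.common.registry import register_sensitive_header, \
     register_sensitive_param

 class MyAuthMiddleware(object):
     def __init__(self, app, conf):
         self.app = app
         # ask proxy-logging to redact these when building access log lines
         register_sensitive_header('x-my-auth-token')
         register_sensitive_param('my_signature')

     def __call__(self, env, start_response):
         return self.app(env, start_response)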
UpgradeImpact
=============
The reveal_sensitive_prefix config option now applies to more items;
operators should review their currently-configured value to ensure it
is appropriate for these new contexts. In particular, operators should
consider reducing the value if it is more than 20 or so, even if that
previously offered sufficient protection for auth tokens.
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Closes-Bug: #1685798
Change-Id: I88b8cfd30292325e0870029058da6fb38026ae1a
2022-02-09 10:53:46 +00:00
Zuul
c1d2e661b1 Merge "s3api: Allow multiple storage domains" 2022-01-28 20:24:04 +00:00
Tim Burke
8c6ccb5fd4 proxy: Add a chance to skip memcache when looking for shard ranges
By having some small portion of calls skip cache and go straight to
disk, we can ensure the cache is always kept fresh and never expires (at
least, for active containers). Previously, when shard ranges fell out of
cache there would frequently be a thundering herd that could overwhelm
the container server, leading to 503s served to clients or an increase
in async pendings.
Include metrics for hit/miss/skip rates.
Change-Id: I6d74719fb41665f787375a08184c1969c86ce2cf
Related-Bug: #1883324 
2022-01-26 18:15:09 +00:00
Tim Burke
11d1022163 s3api: Allow multiple storage domains
Sometimes a cluster might be accessible via more than one set
of domain names. Allow operators to configure them such that
virtual-host style requests work with all names.
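For example (domain names are placeholders; the comma-separated form is a
sketch of the intent of this change):
 [filter:s3api]
 storage_domain = s3.example.com, s3.internal.example.com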
Change-Id: I83b2fded44000bf04f558e2deb6553565d54fd4a
2022-01-24 15:39:13 -08:00
Tim Burke
fa1058b6ed slo: Default allow_async_delete to true
We've had this option for a year now, and it seems to help. Let's enable
it for everyone. Note that Swift clients still need to opt into the
async delete via a query param, while S3 clients get it for free.
Change-Id: Ib4164f877908b855ce354cc722d9cb0be8be9921
2021-12-21 14:12:34 -08:00
Pete Zaitcev
6198284839 Add a project scope read-only role to keystoneauth
This patch continues the work on more of the "Consistent and
Secure Default Policies". We already have system-scope
personas implemented, but the architecture people are asking
for project scope now. At least we don't need domain scope.
Change-Id: If7d39ac0dfbe991d835b76eb79ae978fc2fd3520
2021-08-02 14:35:32 -05:00
Zuul
b3def185c6 Merge "Allow floats for all intervals" 2021-05-25 18:15:56 +00:00
Alistair Coles
46ea3aeae8 Quarantine stale EC fragments after checking handoffs
If the reconstructor finds a fragment that appears to be stale then it
will now quarantine the fragment. Fragments are considered stale if
insufficient fragments at the same timestamp can be found to rebuild
missing fragments, and the number found is less than or equal to a new
reconstructor 'quarantine_threshold' config option.
Before quarantining a fragment the reconstructor will attempt to fetch
fragments from handoff nodes in addition to the usual primary nodes.
The handoff requests are limited by a new 'request_node_count'
config option.
'quarantine_threshold' defaults to zero, i.e. no fragments will be
quarantined. 'request_node_count' defaults to '2 * replicas'.
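For illustration only (the section placement and values are assumptions,
not taken from this change):
 [object-reconstructor]
 quarantine_threshold = 1
 request_node_count = 2 * replicas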
Closes-Bug: 1655608
Change-Id: I08e1200291833dea3deba32cdb364baa99dc2816
2021-05-10 20:45:17 +01:00
Tim Burke
c374a7a851 Allow floats for all intervals
Change-Id: I91e9bc02d94fe7ea6e89307305705c383087845a
2021-05-05 15:30:21 -07:00
Tim Burke
e35365df51 s3api: Add config option to return 429s on ratelimit
Change-Id: If04c083ccc9f63696b1f53ac13edc932740a0654
2021-03-17 10:58:58 -07:00
Tim Burke
27a734c78a s3api: Allow CORS preflight requests
Unfortunately, we can't identify the user, so we can't map to an
account, so we can't respect whatever CORS metadata might be set on the
container.
As a result, the allowed origins must be configured cluster-wide. Add a
new config option, cors_preflight_allow_origin, for that; default it
to blank (i.e., deny preflights from all origins, preserving existing
behavior), but allow either a comma-separated list of origins or
* (to allow all origins).
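For example (the origins listed are placeholders):
 [filter:s3api]
 cors_preflight_allow_origin = https://console.example.com, https://static.example.com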
Change-Id: I985143bf03125a05792e79bc5e5f83722d6431b3
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
2021-03-15 13:52:05 -07:00
Tim Burke
cf4f320644 tempauth: Add .reseller_reader group
Change-Id: I8c5197ed327fbb175c8a2c0e788b1ae14e6dfe23
2021-02-09 16:35:03 -08:00
Pete Zaitcev
98a0275a9d Add a read-only role to keystoneauth
An idea was floated recently of a read-only role that can be used for
cluster-wide audits, and is otherwise safe. It was also included into
the "Consistent and Secure Default Policies" effort in OpenStack,
where it implements "reader" personas in system, domain, and project
scopes. This patch implements it for system scope, where it's most
useful for operators.
Change-Id: I5f5fff2e61a3e5fb4f4464262a8ea558a6e7d7ef
2021-02-08 22:02:17 -06:00
Alistair Coles
6896f1f54b s3api: actually execute check_pipeline in real world
Previously, S3ApiMiddleware.check_pipeline would always exit early
because the __file__ attribute of the Config instance passed to
check_pipeline was never set. The __file__ key is typically passed to
the S3ApiMiddleware constructor in the wsgi config dict, so this dict
is now passed to check_pipeline() for it to test for the existence of
__file__.
Also, in the unit test setup, the use of a Config object is replaced with
a dict that mimics the wsgi conf object.
UpgradeImpact
=============
The bug prevented the pipeline order checks described in
proxy-server.conf-sample being made on the proxy-server pipeline when
s3api middleware was included. With this change, these checks will now
be made and an invalid pipeline configuration will result in a
ValueError being raised during proxy-server startup.
A valid pipeline has another middleware (presumed to be an auth
middleware) between s3api and the proxy-server app. If keystoneauth is
found, then a further check is made that s3token is configured after
s3api and before keystoneauth.
The pipeline order checks can be disabled by setting the s3api
auth_pipeline_check option to False in proxy-server.conf. This
mitigation is recommended if previously operating with what will now
be considered an invalid pipeline.
The bug also prevented a check for slo middleware being in the
pipeline between s3api and the proxy-server app. If the slo middleware
is not found then multipart uploads will now not be supported,
regardless of the value of the allow_multipart_uploads option
described in proxy-server.conf-sample. In this case a warning will be
logged during startup but no exception is raised.
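A trimmed, illustrative pipeline that satisfies the checks described above;
a real proxy-server.conf pipeline includes more middlewares, and the exact
ordering of the others is not implied here:
 [pipeline:main]
 pipeline = catch_errors cache s3api s3token keystoneauth slo proxy-logging proxy-server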
Closes-Bug: #1912391
Change-Id: I357537492733b97e5afab4a7b8e6a5c527c650e4
2021-01-19 20:22:43 +00:00
Tim Burke
10d9a737d8 s3api: Make allowable clock skew configurable
While we're at it, make the default match AWS's 15 minute limit (instead
of our old 5 minute limit).
UpgradeImpact
=============
This (somewhat) weakens some security protections for requests over the
S3 API; operators may want to preserve the prior behavior by setting
 allowable_clock_skew = 300
in the [filter:s3api] section of their proxy-server.conf
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I0da777fcccf056e537b48af4d3277835b265d5c9
2021-01-14 10:40:23 +00:00
Zuul
d5bb644a17 Merge "Use cached shard ranges for container GETs" 2021-01-08 20:50:45 +00:00
Grzegorz Grasza
6930bc24b2 Memcached client TLS support
This patch specifies a set of configuration options required to build
a TLS context, which is used to wrap the client connection socket.
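A sketch of the resulting options, with names as I recall them from the
sample configs and placeholder paths; treat this as illustrative:
 [filter:cache]
 memcache_servers = 127.0.0.1:11211
 tls_enabled = true
 tls_cafile = /etc/swift/memcache-ca.crt
 tls_certfile = /etc/swift/memcache-client.crt
 tls_keyfile = /etc/swift/memcache-client.key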
Closes-Bug: #1906846
Change-Id: I03a92168b90508956f367fbb60b7712f95b97f60
2021-01-06 09:47:38 -08:00
Alistair Coles
077ba77ea6 Use cached shard ranges for container GETs
This patch makes four significant changes to the handling of GET
requests for sharding or sharded containers:
 - container server GET requests may now result in the entire list of
 shard ranges being returned for the 'listing' state regardless of
 any request parameter constraints.
 - the proxy server may cache that list of shard ranges in memcache
 and the request's environ infocache dict, and subsequently use the
 cached shard ranges when handling GET requests for the same
 container.
 - the proxy now caches more container metadata so that it can
 synthesize a complete set of container GET response headers from
 cache.
 - the proxy server now enforces more container GET request validity
 checks that were previously only enforced by the backend server,
 e.g. checks for valid request parameter values
With this change, when the proxy learns from container metadata
that the container is sharded, it will cache shard
ranges fetched from the backend during a container GET in memcache.
On subsequent container GETs the proxy will use the cached shard
ranges to gather object listings from shard containers, avoiding
further GET requests to the root container until the cached shard
ranges expire from cache.
Cached shard ranges are most useful if they cover the entire object
name space in the container. The proxy therefore uses a new
X-Backend-Override-Shard-Name-Filter header to instruct the container
server to ignore any request parameters that would constrain the
returned shard range listing i.e. 'marker', 'end_marker', 'includes'
and 'reverse' parameters. Having obtained the entire shard range
listing (either from the server or from cache) the proxy now applies
those request parameter constraints itself when constructing the
client response.
When using cached shard ranges the proxy will synthesize response
headers from the container metadata that is also in cache. To enable
 the full set of container GET response headers to be synthesized in
this way, the set of metadata that the proxy caches when handling a
backend container GET response is expanded to include various
timestamps.
The X-Newest header may be used to disable looking up shard ranges
in cache.
Change-Id: I5fc696625d69d1ee9218ee2a508a1b9be6cf9685
2021-01-06 16:28:49 +00:00
Zuul
ebfc3a61fa Merge "Use socket_timeout kwarg instead of useless eventlet.wsgi.WRITE_TIMEOUT" 2020-11-18 02:19:01 +00:00
Zuul
cd228fafad Merge "Add a new URL parameter to allow for async cleanup of SLO segments" 2020-11-18 00:50:54 +00:00
Tim Burke
918ab8543e Use socket_timeout kwarg instead of useless eventlet.wsgi.WRITE_TIMEOUT
No version of eventlet that I'm aware of has any sort of support for
eventlet.wsgi.WRITE_TIMEOUT; I don't know why we've been setting that.
On the other hand, the socket_timeout argument for eventlet.wsgi.Server
has been supported for a while -- since 0.14 in 2013.
Drive-by: Fix up handling of sub-second client_timeouts.
Change-Id: I1dca3c3a51a83c9d5212ee5a0ad2ba1343c68cf9
Related-Change: I1d4d028ac5e864084a9b7537b140229cb235c7a3
Related-Change: I433c97df99193ec31c863038b9b6fd20bb3705b8
2020-11-11 14:23:40 -08:00
Tim Burke
e78377624a Add a new URL parameter to allow for async cleanup of SLO segments
Add a new config option to SLO, allow_async_delete, to allow operators
to opt-in to this new behavior. If their expirer queues get out of hand,
they can always turn it back off.
If the option is disabled, handle the delete inline; this matches the
behavior of old Swift.
Only allow an async delete if all segments are in the same container and
none are nested SLOs, that way we only have two auth checks to make.
Have s3api try to use this new mode if the data seems to have been
uploaded via S3 (since it should be safe to assume that the above
criteria are met).
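For illustration, the opt-in and a corresponding client request might look
like the sketch below; the account and paths are placeholders, and any
truthy value for the async parameter is assumed to work:
 [filter:slo]
 allow_async_delete = true

 # client-side: delete the manifest and queue segment cleanup asynchronously
 DELETE /v1/AUTH_test/container/manifest?multipart-manifest=delete&async=true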
Drive-by: Allow the expirer queue and swift-container-deleter to use
high-precision timestamps.
Change-Id: I0bbe1ccd06776ef3e23438b40d8fb9a7c2de8921
2020-11-10 18:22:01 +00:00