6896f1f54b511f90414f414a07371b1101e59931
Commit Graph

297 Commits

Author SHA1 Message Date
Alistair Coles
6896f1f54b s3api: actually execute check_pipeline in real world
Previously, S3ApiMiddleware.check_pipeline would always exit early
because the __file__ attribute of the Config instance passed to
check_pipeline was never set. The __file__ key is typically passed to
the S3ApiMiddleware constructor in the wsgi config dict, so this dict
is now passed to check_pipeline() for it to test for the existence of
__file__.
Also, in the unit test setup, the Config object that mimics the wsgi
conf object is replaced with a dict.
UpgradeImpact
=============
The bug prevented the pipeline order checks described in
proxy-server.conf-sample being made on the proxy-server pipeline when
s3api middleware was included. With this change, these checks will now
be made and an invalid pipeline configuration will result in a
ValueError being raised during proxy-server startup.
A valid pipeline has another middleware (presumed to be an auth
middleware) between s3api and the proxy-server app. If keystoneauth is
found, then a further check is made that s3token is configured after
s3api and before keystoneauth.
The pipeline order checks can be disabled by setting the s3api
auth_pipeline_check option to False in proxy-server.conf. This
mitigation is recommended if previously operating with what will now
be considered an invalid pipeline.
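For example, a pipeline that satisfies these checks might look like the
following sketch (other middleware elided; only the relative ordering of
s3api, s3token, the auth middleware and the proxy-server app matters):
 [pipeline:main]
 pipeline = ... s3api s3token keystoneauth ... proxy-server
 [filter:s3api]
 # set to False to skip the pipeline order checks entirely
 auth_pipeline_check = False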
The bug also prevented a check for slo middleware being in the
pipeline between s3api and the proxy-server app. If the slo middleware
is not found then multipart uploads will now not be supported,
regardless of the value of the allow_multipart_uploads option
described in proxy-server.conf-sample. In this case a warning will be
logged during startup but no exception is raised.
Closes-Bug: #1912391
Change-Id: I357537492733b97e5afab4a7b8e6a5c527c650e4
2021-01-19 20:22:43 +00:00
Tim Burke
10d9a737d8 s3api: Make allowable clock skew configurable
While we're at it, make the default match AWS's 15 minute limit (instead
of our old 5 minute limit).
UpgradeImpact
=============
This (somewhat) weakens some security protections for requests over the
S3 API; operators may want to preserve the prior behavior by setting
 allowable_clock_skew = 300
in the [filter:s3api] section of their proxy-server.conf
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I0da777fcccf056e537b48af4d3277835b265d5c9
2021-01-14 10:40:23 +00:00
Zuul
d5bb644a17 Merge "Use cached shard ranges for container GETs" 2021-01-08 20:50:45 +00:00
Grzegorz Grasza
6930bc24b2 Memcached client TLS support
This patch specifies a set of configuration options required to build
a TLS context, which is used to wrap the client connection socket.
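As an illustrative sketch only (the exact option names are not listed in
this message and are assumptions here; consult the sample memcache
configuration for the authoritative names):
 [memcache]
 tls_enabled = true
 tls_cafile = /etc/swift/memcache-ca.crt
 tls_certfile = /etc/swift/memcache-client.crt
 tls_keyfile = /etc/swift/memcache-client.key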
Closes-Bug: #1906846
Change-Id: I03a92168b90508956f367fbb60b7712f95b97f60
2021-01-06 09:47:38 -08:00
Alistair Coles
077ba77ea6 Use cached shard ranges for container GETs
This patch makes four significant changes to the handling of GET
requests for sharding or sharded containers:
 - container server GET requests may now result in the entire list of
 shard ranges being returned for the 'listing' state regardless of
 any request parameter constraints.
 - the proxy server may cache that list of shard ranges in memcache
 and the requests environ infocache dict, and subsequently use the
 cached shard ranges when handling GET requests for the same
 container.
 - the proxy now caches more container metadata so that it can
 synthesize a complete set of container GET response headers from
 cache.
 - the proxy server now enforces more container GET request validity
 checks that were previously only enforced by the backend server,
 e.g. checks for valid request parameter values
With this change, when the proxy learns from container metadata
that the container is sharded, it will cache shard
ranges fetched from the backend during a container GET in memcache.
On subsequent container GETs the proxy will use the cached shard
ranges to gather object listings from shard containers, avoiding
further GET requests to the root container until the cached shard
ranges expire from cache.
Cached shard ranges are most useful if they cover the entire object
name space in the container. The proxy therefore uses a new
X-Backend-Override-Shard-Name-Filter header to instruct the container
server to ignore any request parameters that would constrain the
returned shard range listing i.e. 'marker', 'end_marker', 'includes'
and 'reverse' parameters. Having obtained the entire shard range
listing (either from the server or from cache) the proxy now applies
those request parameter constraints itself when constructing the
client response.
When using cached shard ranges the proxy will synthesize response
headers from the container metadata that is also in cache. To enable
the full set of container GET response headers to be synthesized in
this way, the set of metadata that the proxy caches when handling a
backend container GET response is expanded to include various
timestamps.
The X-Newest header may be used to disable looking up shard ranges
in cache.
Change-Id: I5fc696625d69d1ee9218ee2a508a1b9be6cf9685
2021-01-06 16:28:49 +00:00
Zuul
ebfc3a61fa Merge "Use socket_timeout kwarg instead of useless eventlet.wsgi.WRITE_TIMEOUT" 2020-11-18 02:19:01 +00:00
Zuul
cd228fafad Merge "Add a new URL parameter to allow for async cleanup of SLO segments" 2020-11-18 00:50:54 +00:00
Tim Burke
918ab8543e Use socket_timeout kwarg instead of useless eventlet.wsgi.WRITE_TIMEOUT
No version of eventlet that I'm aware of has any sort of support for
eventlet.wsgi.WRITE_TIMEOUT; I don't know why we've been setting that.
On the other hand, the socket_timeout argument for eventlet.wsgi.Server
has been supported for a while -- since 0.14 in 2013.
Drive-by: Fix up handling of sub-second client_timeouts.
Change-Id: I1dca3c3a51a83c9d5212ee5a0ad2ba1343c68cf9
Related-Change: I1d4d028ac5e864084a9b7537b140229cb235c7a3
Related-Change: I433c97df99193ec31c863038b9b6fd20bb3705b8
2020-11-11 14:23:40 -08:00
Tim Burke
e78377624a Add a new URL parameter to allow for async cleanup of SLO segments
Add a new config option to SLO, allow_async_delete, to allow operators
to opt-in to this new behavior. If their expirer queues get out of hand,
they can always turn it back off.
If the option is disabled, handle the delete inline; this matches the
behavior of old Swift.
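As a sketch, opting in might look like this in proxy-server.conf (the
filter section shown is the usual slo section):
 [filter:slo]
 use = egg:swift#slo
 allow_async_delete = true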
Only allow an async delete if all segments are in the same container and
none are nested SLOs; that way, we only have two auth checks to make.
Have s3api try to use this new mode if the data seems to have been
uploaded via S3 (since it should be safe to assume that the above
criteria are met).
Drive-by: Allow the expirer queue and swift-container-deleter to use
high-precision timestamps.
Change-Id: I0bbe1ccd06776ef3e23438b40d8fb9a7c2de8921
2020-11-10 18:22:01 +00:00
Zuul
2593f7f264 Merge "memcache: Make error-limiting values configurable" 2020-11-07 01:32:38 +00:00
Tim Burke
aff65242ff memcache: Make error-limiting values configurable
Previously these were all hardcoded; let operators tweak them as needed.
Significantly, this also allows operators to disable error-limiting
entirely, which may be a useful protection in case proxies are
configured with a single memcached server.
Use error_suppression_limit and error_suppression_interval to mirror the
option names used by the proxy-server to ratelimit backend Swift
servers.
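An illustrative sketch (section placement and values are assumptions;
only the option names come from this change):
 [memcache]
 error_suppression_limit = 10
 error_suppression_interval = 60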
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: Ife005cb8545dd966d7b0e34e5496a0354c003881
2020-11-05 23:37:24 +00:00
Zuul
b9a404b4d1 Merge "ec: Add an option to write fragments with legacy crc" 2020-11-02 23:03:49 +00:00
Tim Burke
599f63e762 ec: Add an option to write fragments with legacy crc
When upgrading from liberasurecode<=1.5.0, you may want to continue
writing legacy CRCs until all nodes are upgraded and capable of reading
fragments with zlib CRCs.
Starting in liberasurecode>=1.6.2, we can use the environment variable
LIBERASURECODE_WRITE_LEGACY_CRC to control whether we write zlib or
legacy CRCs, but for many operators it's easier to manage swift configs
than environment variables. Add a new option, write_legacy_ec_crc, to the
proxy-server app and object-reconstructor; if set to true, ensure legacy
frags are written.
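For illustration, during the upgrade window the option might be enabled
along these lines (sections shown are the usual proxy-server and
object-reconstructor sections; see the list of impacted daemons below):
 [app:proxy-server]
 write_legacy_ec_crc = true
 [object-reconstructor]
 write_legacy_ec_crc = true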
Note that more daemons instantiate proxy-server apps than just the
proxy-server. The complete set of impacted daemons should be:
 * proxy-server
 * object-reconstructor
 * container-reconciler
 * any users of internal-client.conf
UpgradeImpact
=============
To ensure a smooth liberasurecode upgrade:
 1. Determine whether your cluster writes legacy or zlib CRCs. Depending
 on the order in which shared libraries are loaded, your servers may
 already be reading and writing zlib CRCs, even with old
 liberasurecode. In that case, no special action is required and
 WRITING LEGACY CRCS DURING THE UPGRADE WILL CAUSE AN OUTAGE.
 Just upgrade liberasurecode normally. See the closed bug for more
 information and a script to determine which CRC is used.
 2. On all nodes, ensure Swift is upgraded to a version that includes
 write_legacy_ec_crc support and write_legacy_ec_crc is enabled on
 all daemons.
 3. On each node, upgrade liberasurecode and restart Swift services.
 Because of (2), they will continue writing legacy CRCs which will
 still be readable by nodes that have not yet upgraded.
 4. Once all nodes are upgraded, remove the write_legacy_ec_crc option
 from all configs across all nodes. After restarting daemons, they
 will write zlib CRCs which will also be readable by all nodes.
Change-Id: Iff71069f808623453c0ff36b798559015e604c7d
Related-Bug: #1666320
Closes-Bug: #1886088
Depends-On: https://review.opendev.org/#/c/738959/ 
2020-09-30 16:49:59 -07:00
Clay Gerrard
754defc39c Client should retry when there's just one 404 and a bunch of errors
During a rebalance, it's expected that we may get a 404 for data that
does exist elsewhere in the cluster. Normally this isn't a problem; the
proxy sees the 404, keeps digging, and one of the other primaries will
serve the response.
Previously, if the other replicas were heavily loaded, the proxy would
see a bunch of timeouts along with the 404 from the fresh (empty)
primary, treat that 404 as good, and send it on to the client.
Now, have the proxy throw out that first 404 (provided it doesn't have a
timestamp); it will then return a 503 to the client, indicating that it
should try again.
Add a new (per-policy) proxy-server config option,
rebalance_missing_suppression_count; operators may use this to increase
the number of 404-no-timestamp responses to discard if their rebalances
are going faster than replication can keep up, or set it to zero to
return to the previous behavior.
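A sketch of possible tuning (the per-policy section name is assumed to
follow the usual proxy-server per-policy convention; values are examples
only):
 [app:proxy-server]
 rebalance_missing_suppression_count = 2
 [proxy-server:policy:0]
 # 0 restores the previous behavior for this policy
 rebalance_missing_suppression_count = 0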
Change-Id: If4bd39788642c00d66579b26144af8f116735b4d
2020-09-08 14:33:09 -07:00
Zuul
cca5e8b1de Merge "Make all concurrent_get options per-policy" 2020-09-04 19:31:12 +00:00
Zuul
20e1544ad8 Merge "Extend concurrent_gets to EC GET requests" 2020-09-04 14:22:20 +00:00
Clay Gerrard
f043aedec1 Make all concurrent_get options per-policy
Change-Id: Ib81f77cc343c3435d7e6258d4631563fa022d449
2020-09-02 12:11:49 -05:00
Zuul
7015ac2fdc Merge "py3: Work with proper native string paths in crypto meta" 2020-08-30 04:11:59 +00:00
Clay Gerrard
8f60e0a260 Extend concurrent_gets to EC GET requests
After the initial requests are started, if the proxy still does not have
enough backend responses to return a client response, additional requests
will be spawned to the remaining primaries at the frequency configured by
the concurrency_timeout.
A new tunable concurrent_ec_extra_requests allows operators to control
how many requests to backend fragments are started immediately with a
client request to an object stored in an EC storage policy. By default
the minimum ndata backend requests are started immediately, but
operators may increase concurrent_ec_extra_requests up to nparity which
is similar in effect to a concurrency_timeout of 0.
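For example, an EC-heavy proxy might be tuned along these lines (values
are illustrative only):
 [app:proxy-server]
 concurrent_gets = true
 concurrency_timeout = 0.5
 concurrent_ec_extra_requests = 2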
Change-Id: Ia0a9398107a400815be2e0097b1b8e76336a0253
2020-08-24 13:30:44 -05:00
Tim Burke
7d429318dd py3: Work with proper native string paths in crypto meta
Previously, we would work with these paths as WSGI strings -- this would
work fine when all data were read and written on the same major version
of Python, but fail pretty badly during and after upgrading Python.
In particular, if a py3 proxy-server tried to read existing data that
was written down by a py2 proxy-server, it would hit an error and
respond 500. Worse, if an un-upgraded py2 proxy tried to read data that
was freshly-written by a py3 proxy, it would serve corrupt data back to
the client (including a corrupt/invalid ETag and Content-Type).
Now, ensure that both py2 and py3 write down paths as native strings.
Make an effort to still work with WSGI-string metadata, though it can be
ambiguous as to whether a string is a WSGI string or not. The heuristic
used is if
 * the path from metadata does not match the (native-string) request
 path and
 * the path from metadata (when interpreted as a WSGI string) can be
 "un-wsgi-fied" without any encode/decode errors and
 * the native-string path from metadata *does* match the native-string
 request path
then trust the path from the request. By contrast, we usually prefer the
path from metadata in case there was a pipeline misconfiguration (see
related bug).
Add the ability to read and write a new, unambiguous version of metadata
that always has the path as a native string. To support rolling
upgrades, a new config option is added: meta_version_to_write. This
defaults to 2 to support rolling upgrades without configuration changes,
but the default may change to 3 in a future release.
UpgradeImpact
=============
When upgrading from Swift 2.20.0 or Swift 2.19.1 or earlier, set
 meta_version_to_write = 1
in your keymaster's configuration. Regardless of prior Swift version, set
 meta_version_to_write = 3
after upgrading all proxy servers.
When switching from Python 2 to Python 3, first upgrade Swift while on
Python 2, then upgrade to Python 3.
Change-Id: I00c6693c42c1a0220b64d8016d380d5985339658
Closes-Bug: #1888037
Related-Bug: #1813725 
2020-07-29 17:33:54 -07:00
Tim Burke
2ffe598f48 proxy-logging: Be able to configure log_route
This lets you have separate loggers for the left and right proxy-logging
middlewares, so you can have a config like
 [pipeline:main]
 pipeline = ... proxy-logging-client ... proxy-logging-subrequest proxy-server
 [proxy-logging-client]
 use = egg:swift#proxy_logging
 access_log_statsd_metric_prefix = client-facing
 [proxy-logging-subrequest]
 use = egg:swift#proxy_logging
 access_log_route = subrequest
 access_log_statsd_metric_prefix = subrequest
to isolate subrequest metrics from client-facing metrics.
Change-Id: If41e3d542b30747da7ca289708e9d24873c46e2e
2020-06-11 13:30:20 -07:00
Tim Burke
1db11df4f2 ratelimit: Allow multiple placements
We usually want to have ratelimit fairly far left in the pipeline -- the
assumption is that something like an auth check will be fairly expensive
and we should try to shield the auth system so it doesn't melt under the
load of a misbehaved swift client.
But with S3 requests, we can't know the account/container that a request
is destined for until *after* auth. Fortunately, we've already got some
code to make s3api play well with ratelimit.
So, let's have our cake and eat it, too: allow operators to place
ratelimit once, before auth, for swift requests and again, after auth,
for s3api. They'll both use the same memcached keys (so users can't
switch APIs to effectively double their limit), but still only have each
S3 request counted against the limit once.
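One possible layout, as a sketch (other middleware elided; the exact
recommended placement is in proxy-server.conf-sample):
 [pipeline:main]
 pipeline = ... ratelimit s3api s3token keystoneauth ratelimit ... proxy-server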
Change-Id: If003bb43f39427fe47a0f5a01dbcc19e1b3b67ef
2020-05-19 11:10:22 -07:00
John Dickinson
d358b9130d added value and notes to a sample config file for s3token
Change-Id: I18accffb2cf6ba6a3fff6fd5d95f06a424d1d919
2020-01-31 10:47:58 -08:00
Romain LE DISEZ
27fd97cef9 Middleware that allows a user to have quoted Etags
Users have complained for a while that Swift's ETags don't match the
expected RFC formats. We've resisted fixing this for just as long,
worrying that the fix would break innumerable clients that expect the
value to be a hex-encoded MD5 digest and *nothing else*.
But, users keep asking for it, and some consumers (including some CDNs)
break if we *don't* have quoted etags -- so, let's make it an option.
With this middleware, Swift users can set metadata per-account or even
per-container to explicitly request RFC compliant etags or not. Swift
operators also get an option to change the default behavior
cluster-wide; it defaults to the old, non-compliant format.
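A sketch of the cluster-wide default (the filter name and option name
here are assumptions, not confirmed by this message):
 [filter:etag-quoter]
 use = egg:swift#etag_quoter
 enable_by_default = false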
See also:
 - https://tools.ietf.org/html/rfc2616#section-3.11
 - https://tools.ietf.org/html/rfc7232#section-2.3
Closes-Bug: 1099087
Closes-Bug: 1424614
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: I380c6e34949d857158e11eb428b3eda9975d855d
2020-01-27 12:53:35 -08:00
Clay Gerrard
2759d5d51c New Object Versioning mode
This patch adds a new object versioning mode. This new mode provides
a new set of APIs for users to interact with older versions of an
object. It also changes the naming scheme of older versions and adds
a version-id to each object.
This new mode is not backwards compatible or interchangeable with the
other two modes (i.e., stack and history), especially due to the changes
in the naming scheme of older versions. This new mode will also serve
as a foundation for adding S3 versioning compatibility in the s3api
middleware.
Note that this does not (yet) support using a versioned container as
a source in container-sync. Container sync should be enhanced to sync
previous versions of objects.
Change-Id: Ic7d39ba425ca324eeb4543a2ce8d03428e2225a1
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Thiago da Silva <thiagodasilva@gmail.com>
2020-01-24 17:39:56 -08:00
Clay Gerrard
4601548dab Deprecate per-service auto_create_account_prefix
If we move it to constraints it's more globally accessible in our code,
but more importantly it's more obvious to ops that everything breaks if
you try to mis-configure different values per-service.
Change-Id: Ib8f7d08bc48da12be5671abe91a17ae2b49ecfee
2020-01-05 09:53:30 -06:00
Clay Gerrard
698717d886 Allow internal clients to use reserved namespace
Reserve the namespace starting with the NULL byte for internal
use-cases. Backend services will allow path names to include the NULL
byte in urls and validate names in the reserved namespace. Database
services will filter all names starting with the NULL byte from
responses unless the request includes the header:
 X-Backend-Allow-Reserved-Names: true
The proxy server will not allow path names to include the NULL byte in
urls unless a middleware has set the X-Backend-Allow-Reserved-Names
header. Middlewares can use the reserved namespace to create objects
and containers that can not be directly manipulated by clients. Any
objects and bytes created in the reserved namespace will be aggregated
to the user's account totals.
When deploying internal proxies, developers and operators may configure
the gatekeeper middleware to translate the X-Allow-Reserved-Names header
to the Backend header so they can manipulate the reserved namespace
directly through the normal API.
UpgradeImpact: it's not safe to roll back from this change
Change-Id: If912f71d8b0d03369680374e8233da85d8d38f85
2019-11-27 11:22:00 -06:00
Romain LE DISEZ
2f1111a436 proxy: stop sending chunks to objects with a Queue
During a PUT of an object, the proxy instantiates one Putter per
object-server that will store data (either the full object or a
fragment, depending on the storage policy). Each Putter owns a
Queue that is used to buffer data chunks before they are
written to the socket connected to the object-server. The chunks are
moved from the queue to the socket by a greenthread. There is one
greenthread per Putter. If the client is uploading faster than the
object-servers can manage, the Queue could grow and consume a lot of
memory. To avoid that, the queue is bounded (default: 10). Having a
bounded queue also ensures that all object-servers will get
the data at the same rate, because if one queue is full, the
greenthread reading from the client socket will block when trying to
write to the queue. So the global rate is that of the slowest
object-server.
The thing is, every operating system manages socket buffers for incoming
and outgoing data. Concerning the send buffer, the behavior is such that
if the buffer is full, a call to write() will block, otherwise the call
will return immediately. It behaves a lot like the Putter's Queue,
except that the size of the buffer is dynamic so it adapts itself to the
speed of the receiver.
Thus, managing a queue in addition to the socket send buffer is a
duplicate queueing/buffering layer that provides no benefit but is, as
shown by profiling and benchmarks, very CPU costly.
This patch removes the queuing mechanism. Instead, the greenthread
reading data from the client writes directly to the socket. If an
object-server is getting slow, the buffer fills up, blocking the
reader greenthread. Benchmarks show a CPU consumption reduction of more
than 30% while the observed upload rate increases by about 45%.
Change-Id: Icf8f800cb25096f93d3faa1e6ec091eb29500758
2019-11-07 18:01:58 +08:00
Zuul
cf18e1f47b Merge "sharding: Cache shard ranges for object writes" 2019-07-13 00:34:06 +00:00
Tim Burke
a1af3811a7 sharding: Cache shard ranges for object writes
Previously, we issued a GET to the root container for every object PUT,
POST, and DELETE. This puts load on the container server, potentially
leading to timeouts, error limiting, and erroneous 404s (!).
Now, cache the complete set of 'updating' shards, and find the shard for
this particular update in the proxy. Add a new config option,
recheck_updating_shard_ranges, to control the cache time; it defaults to
one hour. Set to 0 to fall back to previous behavior.
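For example (value in seconds; 3600 is the one-hour default described
above):
 [app:proxy-server]
 recheck_updating_shard_ranges = 3600
 # 0 skips the cache and falls back to the previous behavior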
Note that we should be able to tolerate stale shard data just fine; we
already have to worry about async pendings that got written down with
one shard but may not get processed until that shard has itself sharded
or shrunk into another shard.
Also note that memcache has a default value limit of 1MiB, which may be
exceeded if a container has thousands of shards. In that case, set()
will act like a delete(), causing increased memcache churn but otherwise
preserving existing behavior. In the future, we may want to add support
for gzipping the cached shard ranges as they should compress well.
Change-Id: Ic7a732146ea19a47669114ad5dbee0bacbe66919
Closes-Bug: 1781291
2019-07-11 10:40:38 -07:00
zengjia
0ae1ad63c1 Update auth_url in install docs
Beginning with the Queens release, the keystone install guide
recommends running all interfaces on the same port. This patch
updates the swift install guide to reflect that change.
Change-Id: Id00cfd2c921da352abdbbbb6668b921f3cb31a1a
Closes-bug: #1754104 
2019-07-11 15:03:16 +08:00
Tim Burke
9d1b749740 py3: port staticweb and domain_remap func tests
Drive-by: Tighten domain_remap assertions on listings, which required
that we fix proxy pipeline placement. Add a note about it to the sample
config.
Change-Id: I41835148051294088a2c0fb4ed4e7a7b61273e5f
2019-07-10 09:51:38 -07:00
Tim Burke
345f577ff1 s3token: fix conf option name
Related-Change: Ica740c28b47aa3f3b38dbfed4a7f5662ec46c2c4
Change-Id: I71f411a2e99fa8259b86f11ed29d1b816ff469cb
2019-07-03 07:28:36 -07:00
Tim Burke
4f7c44a9d7 Add information about secret_cache_duration to sample config
Related-Change-Id: Id0c01da6aa6ca804c8f49a307b5171b87ec92228
Change-Id: Ica740c28b47aa3f3b38dbfed4a7f5662ec46c2c4
2019-07-02 18:43:59 +00:00
Gilles Biannic
a4cc353375 Make log format for requests configurable
Add the log_msg_template option in proxy-server.conf and log_format in
a/c/o-server.conf. It is a string parsable by Python's format()
function. Some fields containing user data might be anonymized by using
log_anonymization_method and log_anonymization_salt.
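A minimal sketch (the template fields and values shown are illustrative;
the full field list is in the sample configs):
 [filter:proxy-logging]
 log_msg_template = {client_ip} {remote_addr} {method} {path} {status_int}
 log_anonymization_method = MD5
 log_anonymization_salt = my_salt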
Change-Id: I29e30ef45fe3f8a026e7897127ffae08a6a80cd9
2019-05-02 17:43:25 -06:00
Tim Burke
d748851766 s3token: Add note about config change when upgrading from swift3
Change-Id: I2610cbdc9b7bc2b4d614eaedb4f3369d7a424ab3
2019-03-05 14:50:22 -08:00
Zuul
3043c54f28 Merge "s3api: Allow concurrent multi-deletes" 2018-12-08 10:05:39 +00:00
Tim Burke
00be3f595e s3api: Allow concurrent multi-deletes
Previously, a thousand-item multi-delete request would consider each
object to delete serially, and not start trying to delete one until the
previous was deleted (or hit an error).
Now, allow operators to configure a concurrency factor to allow multiple
deletes at the same time.
Default the concurrency to 2, like we did for slo and bulk.
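As a sketch (the option name here is an assumption; only the default of
2 comes from this message):
 [filter:s3api]
 multi_delete_concurrency = 2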
See also: http://lists.openstack.org/pipermail/openstack-dev/2016-May/095737.html
Change-Id: If235931635094b7251e147d79c8b7daa10cdcb3d
Related-Change: I128374d74a4cef7a479b221fd15eec785cc4694a
2018-12-06 23:20:52 +00:00
Tim Burke
692a03473f s3api: Change default location to us-east-1
This is more likely to be the default region that a client would try for
v4 signatures.
UpgradeImpact:
==============
Deployers with clusters that relied on the old implicit default
location of US should explicitly set
 location = US
in the [filter:s3api] section of proxy-server.conf before upgrading.
Change-Id: Ib6659a7ad2bd58d711002125e7820f6e86383be8
2018-11-12 11:04:20 -08:00
Alistair Coles
904e7c97f1 Add more doc and test for cors_expose_headers option
As a follow-up to the related change, mention the new
cors_expose_headers option (and other proxy-server.conf
options) in the CORS doc.
Add a test for the cors options being loaded into the
proxy server.
Improve CORS comments in docs.
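For example (header names are illustrative):
 [app:proxy-server]
 cors_expose_headers = X-Custom-Header-One, X-Custom-Header-Two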
Change-Id: I647d8f9e9cbd98de05443638628414b1e87d1a76
Related-Change: I5ca90a052f27c98a514a96ee2299bfa1b6d46334
2018-09-17 12:35:25 -07:00
Zuul
5d46c0d8b3 Merge "Adding keep_idle config value to socket" 2018-09-15 00:43:52 +00:00
FatemaKhalid
cfeb32c66b Adding keep_idle config value to socket
Users can configure the KEEPIDLE time for sockets in a TCP connection.
The default value is the old value, which is 600.
Change-Id: Ib7fb166deb8a87ae4e97ba0671048b1ec079a2ef
Closes-Bug: 1759606
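A sketch of the new setting (section placement is an assumption; 600
matches the old hard-coded default):
 [DEFAULT]
 keep_idle = 600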
2018-09-15 01:30:53 +02:00
Tim Burke
5a8cfd6e06 Add another user for s3api func tests
Previously we'd use two users, one admin and one unprivileged.
Ceph's s3-tests, however, assume that both users should have access to
create buckets. Further, there are different errors that may be returned
depending on whether you are the *bucket* owner or not when using
s3_acl. So now we've got:
 test:tester1 (admin)
 test:tester2 (also admin)
 test:tester3 (unprivileged)
Change-Id: I0b67c53de3bcadc2c656d86131fca5f2c3114f14
2018-09-14 13:33:51 +00:00
Alistair Coles
2722e49a8c Add support for multiple root encryption secrets
For some use cases operators would like to periodically introduce a
new encryption root secret that would be used when new object data is
written. However, existing encrypted data does not need to be
re-encrypted with keys derived from the new root secret. Older root
secret(s) would still be used as necessary to decrypt older object
data.
This patch modifies the KeyMaster class to support multiple root
secrets indexed via unique secret_id's, and to store the id of the
root secret used for an encryption operation in the crypto meta. The
decrypter is modified to fetch appropriate keys based on the secret id
in retrieved crypto meta.
The changes are backwards compatible with previous crypto middleware
configurations and existing encrypted object data.
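A sketch of the idea (the option names are assumptions based on the
secret_id indexing described above):
 [filter:keymaster]
 use = egg:swift#keymaster
 # existing secret, still used to decrypt older data
 encryption_root_secret = <base64-encoded secret>
 # new secret, indexed by its secret_id and used for new writes
 encryption_root_secret_myid = <base64-encoded secret>
 active_root_secret_id = myid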
Change-Id: I40307acf39b6c1cc9921f711a8da55d03924d232
2018-08-17 17:54:30 +00:00
Alistair Coles
1951dc7e9a Add keymaster to fetch root secret from KMIP service
Add a new middleware that can be used to fetch an encryption root
secret from a KMIP service. The middleware uses a PyKMIP client
to interact with a KMIP endpoint. The middleware is configured with
a unique identifier for the key to be fetched and options required
for the PyKMIP client.
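A configuration sketch (option names other than the key identifier are
assumptions drawn from typical PyKMIP client settings):
 [filter:kmip_keymaster]
 use = egg:swift#kmip_keymaster
 key_id = <unique KMIP key identifier>
 host = kmip.example.com
 port = 5696
 certfile = /etc/swift/kmip_client.crt
 keyfile = /etc/swift/kmip_client.key
 ca_certs = /etc/swift/kmip_ca.crt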
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: Ib0943fb934b347060fc66c091673a33bcfac0a6d
2018-07-03 09:00:21 +01:00
Greg Lange
5d601b78f3 Adds read_only middleware
This patch adds a read_only middleware to swift. It gives the ability
to make an entire cluster or individual accounts read only.
When a cluster or an account is in read only mode, requests that would
result in writes to the cluster are not allowed.
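As a sketch (the cluster-wide option name is an assumption; making
individual accounts read-only is configured separately):
 [filter:read_only]
 use = egg:swift#read_only
 read_only = true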
DocImpact
Change-Id: I7e0743aecd60b171bbcefcc8b6e1f3fd4cef2478
2018-05-30 03:26:36 +00:00
Darrell Bishop
661838d968 Add support for PROXY protocol v1 (only)
...to the proxy-server.
The point is to allow the Swift proxy server to log accurate
client IP addresses when there is a proxy or SSL-terminator between the
client and the Swift proxy server. Example servers supporting this
PROXY protocol:
 stud (v1 only)
 stunnel
 haproxy
 hitch (v2 only)
 varnish
See http://www.haproxy.org/download/1.7/doc/proxy-protocol.txt
The feature is enabled by adding this to your proxy config file:
 [app:proxy-server]
 use = egg:swift#proxy
 ...
 require_proxy_protocol = true
The protocol specification states:
 The receiver MUST be configured to only receive the protocol
 described in this specification and MUST not try to guess
 whether the protocol header is present or not.
so valid deployments are:
 1) require_proxy_protocol = false (or missing; default is false)
 and NOT behind a proxy that adds or proxies existing PROXY lines.
 2) require_proxy_protocol = true
 and IS behind a proxy that adds or proxies existing PROXY lines.
Specifically, in the default configuration, one cannot send the swift
proxy PROXY lines (no change from before this patch). When this
feature is enabled, one _must_ send PROXY lines.
Change-Id: Icb88902f0a89b8d980c860be032d5e822845d03a
2018-05-23 18:10:40 -07:00
Kota Tsuyuzaki
636b922f3b Import swift3 into swift repo as s3api middleware
This attempts to import the openstack/swift3 package into the swift
upstream repository namespace. This is mostly a straightforward port,
except for the following items.
1. Rename swift3 namespace to swift.common.middleware.s3api
1.1 Rename also some conflicted class names (e.g. Request/Response)
2. Port unittests to test/unit/s3api dir to be able to run on the gate.
3. Port functests to test/functional/s3api and setup in-process testing
4. Port docs to doc dir, then address the namespace change.
5. Use get_logger() instead of global logger instance
6. Avoid global conf instance
Also, fix various minor issues in those steps (e.g. packages,
 dependencies, deprecated things).
The details and patch references in the work on feature/s3api are listed
at https://trello.com/b/ZloaZ23t/s3api (completed board)
Note that, because this is just a port, no new features have been developed
since the last swift3 release. In future work, Swift upstream may continue
to work on the remaining items for further improvements and the best possible
compatibility with Amazon S3. Please read the new docs for your deployment and
keep track of what may change in future releases.
Change-Id: Ib803ea89cfee9a53c429606149159dd136c036fd
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
2018-04-27 15:53:57 +09:00
wangqi
708b24aef1 Deprecate auth_uri option
Option auth_uri from group keystone_authtoken is deprecated[1].
Use option www_authenticate_uri from group keystone_authtoken.
[1]https://review.openstack.org/#/c/508522/
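For example, in the keystonemiddleware section of proxy-server.conf
(URL is illustrative):
 [filter:authtoken]
 # was: auth_uri = http://keystone.example.com:5000
 www_authenticate_uri = http://keystone.example.com:5000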
Change-Id: I43bbc8b8c986e54a9a0829a0631d78d4077306f8
2018-04-18 02:07:11 +00:00
melissaml
3bc267d10c fix a typo in documentation
Change-Id: I0492ae1d50493585ead919904d6d9502b7738266
2018-03-23 07:29:02 +08:00