69 Commits
**Alistair Coles** · `ccaf49a00c` · Add backend rate limiting middleware

This is a fairly blunt tool: rate limiting is per device and applied independently in each worker, but it at least provides some limit on disk IO on backend servers. GET, HEAD, PUT, POST, DELETE, UPDATE and REPLICATE methods may be rate-limited. Only requests with a path starting `<device>/<partition>`, where `<partition>` can be cast to an integer, will be rate-limited. Other requests, including, for example, recon requests with paths such as `recon/version`, are unconditionally forwarded to the next app in the pipeline. OPTIONS and SSYNC methods are not rate-limited. Note that SSYNC sub-requests are passed directly to the object server app and will not pass through this middleware.

Change-Id: I78b59a081698a6bff0d74cbac7525e28f7b5d7c1
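The path rule above can be sketched as follows. This is an illustrative approximation of the check the commit message describes, not the middleware's actual code; the function and constant names are invented:

```python
# Hypothetical sketch: only backend requests whose path looks like
# /<device>/<partition>/... with an integer partition are candidates for
# rate limiting; everything else is forwarded unconditionally.

RATE_LIMITED_METHODS = {'GET', 'HEAD', 'PUT', 'POST', 'DELETE',
                        'UPDATE', 'REPLICATE'}

def is_rate_limited_request(method, path):
    """Return True if this backend request is subject to rate limiting."""
    if method not in RATE_LIMITED_METHODS:
        return False  # e.g. OPTIONS and SSYNC are never rate-limited
    segments = path.lstrip('/').split('/')
    if len(segments) < 2:
        return False
    device, partition = segments[0], segments[1]
    try:
        int(partition)  # '<device>/<partition>' with integer partition
    except ValueError:
        return False    # e.g. 'recon/version' passes straight through
    return True
```
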
**Zuul** · `1a5b42da9e` · Merge "Remove babel.cfg"
**Tim Burke** · `bacef722a3` · Use underscores instead of dashes in setup.cfg

We've been seeing warnings like "UserWarning: Usage of dash-separated 'home-page' will not be supported in future versions. Please use the underscore name 'home_page' instead" in our pep8 jobs for a while; this ought to clean them up. As best I can tell, setuptools has supported either format for several years.

Change-Id: I4d538859f278ac26a6b6d6845df103ebd4c49da3
**Tim Burke** · `44390d1ec1` · Test under py39

Depends-On: https://review.opendev.org/c/openstack/project-config/+/774906
Change-Id: I4a16370646a1a684240d29950ba197baccae91bb
**Samuel Merritt** · `b971280907` · Let developers/operators add watchers to object audit

Swift operators may find it useful to operate on each object in their cluster in some way. This commit provides them a way to hook into the object auditor with a simple, clearly-defined boundary so that they can iterate over their objects without additional disk IO. For example, a cluster operator may want to ensure semantic consistency, such as verifying that all SLO segments are accounted for in their manifests, or locate objects that aren't in container listings. Now that Swift has encryption support, this could be used to locate unencrypted objects. The list goes on.

This commit makes the auditor locate, via entry points, the watchers named in its config file. A watcher is a class with at least these four methods:

- `__init__(self, conf, logger, **kwargs)`
- `start(self, audit_type, **kwargs)`
- `see_object(self, object_metadata, data_file_path, **kwargs)`
- `end(self, **kwargs)`

The auditor will call `watcher.start(audit_type)` at the start of an audit pass, `watcher.see_object(...)` for each object audited, and `watcher.end()` at the end of an audit pass. All method arguments are passed as keyword args.

This version of the API is implemented in the context of the auditor itself, without spawning any additional processes. If the plugins are not working well -- hanging, crashing, or leaking -- they are easier to debug when there's no additional complication of processes that run by themselves. In addition, we include a reference implementation of a plugin for the watcher API, as a help to plugin writers.

Change-Id: I1be1faec53b2cdfaabf927598f1460e23c206b0a
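The four-method watcher API above can be illustrated with a minimal watcher that just counts objects. The class name and counting behavior are hypothetical; only the method signatures come from the commit message:

```python
# Minimal sketch of a watcher implementing the four-method API described
# above. It only counts objects seen during an audit pass.

class CountingWatcher(object):
    def __init__(self, conf, logger, **kwargs):
        self.conf = conf
        self.logger = logger
        self.seen = 0

    def start(self, audit_type, **kwargs):
        # Called once at the start of an audit pass.
        self.audit_type = audit_type
        self.seen = 0

    def see_object(self, object_metadata, data_file_path, **kwargs):
        # Called for each audited object; the metadata is handed in by
        # the auditor, so no additional disk IO is needed here.
        self.seen += 1

    def end(self, **kwargs):
        # Called once at the end of the audit pass.
        return self.seen
```

A real watcher would be registered under an entry point and named in the auditor's config file, per the commit message.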
**likui** · `66da8eaeef` · Remove babel.cfg

Remove babel.cfg and the translation bits from setup.cfg; those are not needed anymore.

Change-Id: I444e018b52279ffed96158a96cf66123a0e6267b
**zhangboye** · `f25705ee13` · Add py38 package metadata

Change-Id: Icd5312dec647f69a5df111c3a6766093ecfcb6ef
**Romain LE DISEZ** · `27fd97cef9` · Middleware that allows a user to have quoted Etags

Users have complained for a while that Swift's ETags don't match the expected RFC formats. We've resisted fixing this for just as long, worrying that the fix would break innumerable clients that expect the value to be a hex-encoded MD5 digest and *nothing else*. But users keep asking for it, and some consumers (including some CDNs) break if we *don't* have quoted etags -- so, let's make it an option.

With this middleware, Swift users can set metadata per-account or even per-container to explicitly request RFC-compliant etags or not. Swift operators also get an option to change the default behavior cluster-wide; it defaults to the old, non-compliant format.

See also:
- https://tools.ietf.org/html/rfc2616#section-3.11
- https://tools.ietf.org/html/rfc7232#section-2.3

Closes-Bug: 1099087
Closes-Bug: 1424614
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: I380c6e34949d857158e11eb428b3eda9975d855d
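The difference between the two formats can be shown with a tiny helper. This is an illustration of the RFC 7232 quoted entity-tag form versus Swift's traditional bare hex digest; the function is hypothetical, not Swift's implementation:

```python
# Swift traditionally returns a bare hex MD5 digest as the ETag, while
# RFC 7232 expects the entity-tag wrapped in double quotes. This helper
# (invented for illustration) produces the quoted form.

def quote_etag(etag):
    """Return an RFC 7232-style quoted entity-tag for a bare hex digest."""
    if etag.startswith('"') and etag.endswith('"'):
        return etag  # already quoted; leave it alone
    return '"%s"' % etag
```
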
**Tim Burke** · `dad8fb7c3c` · packaging: Build universal wheels

Apparently this had been done for us previously?

See also: http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012097.html
Change-Id: I27288f9e503f26827a8274d46b07c1330a21de10
**Corey Bryant** · `3532df0fd3` · Add Python 3 Train unit tests

This is a mechanically generated patch to ensure unit testing is in place for all of the Tested Runtimes for Train. See the Train python3-updates goal document for details: https://governance.openstack.org/tc/goals/train/python3-updates.html

Change-Id: I29f2951053fa6714de46ed757116d99dd5c896cf
Story: #2005924
Task: #34249
**Tim Burke** · `83d0161991` · Add operator tool to async-delete some or all objects in a container

Adds a tool, swift-container-deleter, that takes an account/container and optional prefix, marker, and/or end-marker; spins up an internal client; makes listing requests against the container; and pushes the found objects into the object-expirer queue with a special application/async-deleted content-type.

In order to do this enqueuing efficiently, a new internal-to-the-cluster container method is introduced: UPDATE. It takes a JSON list of object entries and runs them through merge_items.

The object-expirer is updated to look for work items with this content-type and skip the X-If-Deleted-At check that it would normally do.

Note that the target container's listing will continue to show the objects until the data is actually deleted, bypassing some of the concerns raised in the related change about clearing out a container entirely and then deleting it.

Change-Id: Ia13ee5da3d1b5c536eccaadc7a6fdcd997374443
Related-Change: I50e403dee75585fc1ff2bb385d6b2d2f13653cf8
**ZhijunWei** · `9ab276d9b8` · Change openstack-dev to openstack-discuss

Change-Id: Ib5f0b7431819892510d1913465ce4a81c92bdf25
**Romain LE DISEZ** · `673fda7620` · Configure diskfile per storage policy

With this commit, each storage policy can define the diskfile to use to access objects. Selection of the diskfile is done in swift.conf. Example:

    [storage-policy:0]
    name = gold
    policy_type = replication
    default = yes
    diskfile = egg:swift#replication.fs

The diskfile configuration item accepts the same format as middleware declarations: `[[scheme:]egg_name#]entry_point`. The egg_name is optional and defaults to "swift". The scheme is optional and defaults to the only valid value, "egg". The upstream entry points are "replication.fs" and "erasure_coding.fs".

Co-Authored-By: Alexandre Lécuyer <alexandre.lecuyer@corp.ovh.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I070c21bc1eaf1c71ac0652cec9e813cadcc14851
**Alistair Coles** · `1951dc7e9a` · Add keymaster to fetch root secret from KMIP service

Add a new middleware that can be used to fetch an encryption root secret from a KMIP service. The middleware uses a PyKMIP client to interact with a KMIP endpoint. The middleware is configured with a unique identifier for the key to be fetched and options required for the PyKMIP client.

Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: Ib0943fb934b347060fc66c091673a33bcfac0a6d
**Zuul** · `a3cc7ccc69` · Merge "Experimental swift-ring-composer CLI to build composite rings"
**Alistair Coles** · `6b626f2f98` · Experimental swift-ring-composer CLI to build composite rings

Provides a simple, experimental CLI tool to generate a composite ring from a list of component builder files. For example:

    swift-ring-composer <composite-file> compose \
        <builder-file> <builder-file> --output <ring-file>

Commands available:
- compose: compose a list of builder files to a composite ring
- show: show the metadata for a composite ring

Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Change-Id: I25a79e71c13af352e19e4358f60545265b51584f
**Greg Lange** · `5d601b78f3` · Adds read_only middleware

This patch adds a read_only middleware to Swift. It gives the ability to make an entire cluster or individual accounts read-only. When a cluster or an account is in read-only mode, requests that would result in writes to the cluster are not allowed.

DocImpact
Change-Id: I7e0743aecd60b171bbcefcc8b6e1f3fd4cef2478
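The gating behavior described above can be sketched in a few lines. This is a hypothetical simplification, not the middleware's actual code; the function, the method set, and the per-account override name are invented for illustration:

```python
# Hypothetical sketch of a read-only gate: write verbs are rejected when
# the cluster (or target account) is flagged read-only; reads always pass.

WRITE_METHODS = {'PUT', 'POST', 'DELETE', 'COPY'}

def allow_request(method, cluster_read_only, account_read_write=False):
    """Return True if the request may proceed."""
    if method not in WRITE_METHODS:
        return True                 # reads are always allowed
    if account_read_write:
        return True                 # per-account override of cluster flag
    return not cluster_read_only
```
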
**Matthew Oliver** · `2641814010` · Add sharder daemon, manage_shard_ranges tool and probe tests

The sharder daemon visits container dbs and, when necessary, executes the sharding workflow on the db. The workflow is, in overview:

- perform an audit of the container for sharding purposes;
- move any misplaced objects that do not belong in the container to their correct shard;
- move shard ranges from FOUND state to CREATED state by creating shard containers;
- move shard ranges from CREATED to CLEAVED state by cleaving objects to shard dbs and replicating those dbs. By default this is done in batches of 2 shard ranges per visit.

Additionally, when the auto_shard option is True (NOT yet recommended in production), the sharder will identify shard ranges for containers that have exceeded the threshold for sharding, and will also manage the sharding and shrinking of shard containers.

The manage_shard_ranges tool provides a means to manually identify shard ranges and merge them to a container in order to trigger sharding. This is currently the recommended way to shard a container.

Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: I7f192209d4d5580f5a0aa6838f9f04e436cf6b1f
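The shard-range lifecycle above can be modeled as a toy state machine. The state names (FOUND, CREATED, CLEAVED) and the default batch size of 2 come from the commit message; the class and functions are invented for illustration and are not Swift's ShardRange implementation:

```python
# Toy model of the shard-range lifecycle described above.

FOUND, CREATED, CLEAVED = 'found', 'created', 'cleaved'
NEXT_STATE = {FOUND: CREATED, CREATED: CLEAVED}

class ToyShardRange(object):
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
        self.state = FOUND              # identified, no shard container yet

    def advance(self):
        # FOUND -> CREATED when the shard container is created;
        # CREATED -> CLEAVED once objects are cleaved and replicated.
        self.state = NEXT_STATE[self.state]

def cleave_batch(ranges, batch_size=2):
    """Cleave up to batch_size CREATED ranges; return total cleaved."""
    todo = [sr for sr in ranges if sr.state == CREATED]
    for sr in todo[:batch_size]:
        sr.advance()
    return sum(1 for sr in ranges if sr.state == CLEAVED)
```
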
**Kota Tsuyuzaki** · `636b922f3b` · Import swift3 into swift repo as s3api middleware

This imports the openstack/swift3 package into the swift upstream repository and namespace. It is mostly a straightforward port, except for the following items:

1. Rename the swift3 namespace to swift.common.middleware.s3api
   1.1 Rename some conflicting class names as well (e.g. Request/Response)
2. Port unit tests to the test/unit/s3api dir so they can run on the gate
3. Port functional tests to test/functional/s3api and set up in-process testing
4. Port docs to the doc dir, then address the namespace change
5. Use get_logger() instead of a global logger instance
6. Avoid a global conf instance

plus fixes for various minor issues encountered along the way (e.g. packages, dependencies, deprecated things).

The details and patch references for the work on feature/s3api are listed at https://trello.com/b/ZloaZ23t/s3api (completed board).

Note that, because this is just a port, no new features have been developed since the last swift3 release. In future work, Swift upstream may continue to work on the remaining items for further improvements and the best compatibility with Amazon S3. Please read the new docs for your deployment and keep track of what may change in future releases.

Change-Id: Ib803ea89cfee9a53c429606149159dd136c036fd
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
**wangqi** · `a027f2c105` · Follow the new PTI for document build

For compliance with the Project Testing Interface as described in: https://governance.openstack.org/tc/reference/project-testing-interface.html

For more detailed information, please refer to:
- http://lists.openstack.org/pipermail/openstack-dev/2017-December/125710.html
- http://lists.openstack.org/pipermail/openstack-dev/2018-March/128594.html

Co-Authored-By: Nguyen Hai <nguyentrihai93@gmail.com>
Change-Id: I26dc41c7df57bf79db531c6e67e148e01c17e992
**John Dickinson** · `8dc40125ce` · add keystonemiddleware to extras area

Change-Id: I805cf220aee4af69bf4984cc148d69520e958073
**Thiago da Silva** · `a9964a7fc3` · fix barbican integration

Added auth_url to the context we pass to the castellan library. A change [1] intended to deprecate the use of auth_endpoint passed via oslo config actually completely removed the use of it [2], so this change became necessary or the integration is broken.

[1] https://review.openstack.org/#/c/483457
[2] https://review.openstack.org/#/c/483457/6/castellan/key_manager/barbican_key_manager.py@143

Change-Id: I933367fa46aa0a3dc9aedf078b1be715bfa8c054
**Robert Francis** · `99b89aea10` · Symlink implementation.

Add symbolic link ("symlink") object support to Swift. This object will reference another object. GET and HEAD requests for a symlink object will operate on the referenced object. DELETE and PUT requests for a symlink object will operate on the symlink object, not the referenced object, and will delete or overwrite it, respectively. POST requests are *not* forwarded to the referenced object and should be sent directly. POST requests sent to a symlink object will result in a 307 Error.

Historical information on symlink design can be found here:
- https://github.com/openstack/swift-specs/blob/master/specs/in_progress/symlinks.rst
- https://etherpad.openstack.org/p/swift_symlinks

Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Janie Richling <jrichli@us.ibm.com>
Co-Authored-By: Kazuhiro MIYAHARA <miyahara.kazuhiro@lab.ntt.co.jp>
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Change-Id: I838ed71bacb3e33916db8dd42c7880d5bb9f8e18
Signed-off-by: Thiago da Silva <thiago@redhat.com>
**Tim Burke** · `250da37a7b` · Remove swift-temp-url script

This has been deprecated since Swift 2.10.0 (Newton), including a message that it would go away. Let's actually remove it.

Change-Id: I7d3659761c71119363ff2c0c750e37b4c6374a39
Related-Change: Ifa8bf636f20f82db4845b02d1b58699edaa39356
**Tim Burke** · `4806434cb0` · Move listing formatting out to proxy middleware

Move the json -> (text, xml) conversion into a common module, reference it in the account/container servers so we don't break existing clients (including out-of-date proxies), but have the proxy controllers always force a json listing. This simplifies operations on listings (such as the ones already happening in decrypter, or the ones planned for symlink and sharding) by only needing to consider a single response type.

There is a downside of larger backend requests for text/plain listings, but it seems like a net win?

Change-Id: Id3ce37aa0402e2d8dd5784ce329d7cb4fbaf700d
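The "fetch JSON from the backend, format at the edge" idea above can be sketched as follows. The helper is illustrative only, not Swift's actual listing-formats code; it renders the text/plain form (one entry name per line) from a JSON container listing:

```python
# Sketch: the proxy always asks backend servers for a JSON listing and
# renders other formats itself. Here, text/plain is one name per line.

import json

def listing_to_text(json_body):
    """Render a JSON container listing as a text/plain body."""
    entries = json.loads(json_body)
    # Object entries carry 'name'; a common-prefix entry carries 'subdir'.
    names = [e.get('name', e.get('subdir')) for e in entries]
    return ''.join('%s\n' % n for n in names)
```
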
**Christian Schwede** · `cbddec340e` · Add bin/swift-dispersion-report

Change-Id: I81736080fc478c2b69d5b71edd0cada39aad9400
**Christian Schwede** · `b77de5393f` · Make swift-dispersion-report importable

This patch makes the dispersion report tool importable, and thus more easily usable within other Python tools. This can also be used in a follow-up patch to add some tests for the report tool. It also fixes a bug when using the "--dump-json" option: until now it returned the policy name, which made the JSON invalid.

Change-Id: Ie0d52a1a54fc152bb72cbb3f84dcc36a8dad972a
**Jenkins** · `b0142d0cd2` · Merge "Retrieve encryption root secret from Barbican"
**Jenkins** · `f7c55c169a` · Merge "Turn on warning-is-error in doc build"
**Mathias Bjoerkqvist** · `77bd74da09` · Retrieve encryption root secret from Barbican

This patch adds support for retrieving the encryption root secret from an external key management system. In practice, this is currently limited to Barbican.

Change-Id: I1700e997f4ae6fa1a7e68be6b97539a24046e80b
**Akihiro Motoki** · `d18e847c94` · Turn on warning-is-error in doc build

- Fixes warnings in RST files.
- Suppresses the warning log from pyeclib during the doc build. pyeclib emits a warning message on an older liberasurecode [1], and sphinx treats this as an error (when warning-is-error is set). There is no need to check warnings during the doc build, so we can safely suppress the warning.

This is part of the doc migration community-wide effort: http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html

[1] https://github.com/openstack/pyeclib/commit/d163972b

Change-Id: I9adaee29185a2990cc3985bbe0dd366e22f4f1a2
**Luong Anh Tuan** · `6193ed8312` · Update URL home-page in documents according to document migration

Change-Id: I9590d129e7d5e72d955983f3ce38b4e2d44c95b1
**Christian Schwede** · `e1140666d6` · Add support to increase object ring partition power

This patch adds methods to increase the partition power of an existing object ring without downtime for the users, using a 3-step process. Data won't be moved to other nodes; objects using the new increased partition power will be located on the same device and are hardlinked to avoid data movement.

1. A new setting "next_part_power" will be added to the rings, and once the proxy server reloads the rings it will send this value to the object servers on any write operation. Object servers will now create a hard link in the new location to the original DiskFile object. Already existing data will be relinked into the new locations, using hard links, by a new relinker tool.

2. The actual partition power itself will be increased. Servers will now use the new partition power to read from and write to. Hard links in the old object locations that are no longer required have to be removed now by the relinker tool; the relinker tool reads the next_part_power setting to find object locations that need to be cleaned up.

3. The "next_part_power" flag will be removed.

This mostly implements the spec in [1]; however, it does not use an "epoch" as described there. The idea of the epoch was to store data using different partition powers in their own namespaces, to avoid conflicts with auditors and replicators, and to be able to abort such an operation by just removing the new tree. This would require a heavy change to the on-disk data layout, and other object-server implementations would be required to adopt this scheme too. Instead, the object-replicator is now aware that a partition power increase is in progress and will skip replication of data in that storage policy; the relinker tool should simply be run, and afterwards the partition power will be increased. This shouldn't take much time (it's only walking the filesystem and hardlinking), so the impact should be low.

The relinker should be run on all storage nodes at the same time, in parallel, to decrease the required time (though this is not mandatory). Failures during relinking should not affect cluster operations: relinking can even be aborted manually and restarted later. Auditors do not quarantine objects written to a path with a different partition power and therefore work as before (though in the worst case they read each object twice before the no-longer-needed hard links are removed).

Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>

[1] https://specs.openstack.org/openstack/swift-specs/specs/in_progress/increasing_partition_power.html

Change-Id: I7d6371a04f5c1c4adbb8733a71f3c177ee5448bb
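Why hardlinking works here can be shown with a simplified model of the partition computation: the partition is taken from the top part_power bits of a hash of the object path, so raising the power by one only splits each old partition in two on the same device. This is a simplification of Swift's real computation (which hashes a salted path via hash_path); the key property shown holds regardless:

```python
# Simplified model: partition = top part_power bits of the path hash.
# Increasing part_power by 1 maps old partition P to 2P or 2P + 1, so an
# object never changes device and its new location can hardlink the old.

import hashlib
import struct

def get_part(obj_path, part_power):
    digest = hashlib.md5(obj_path.encode('utf8')).digest()
    top4 = struct.unpack('>I', digest[:4])[0]   # first 4 bytes, big-endian
    return top4 >> (32 - part_power)

old = get_part('/AUTH_test/c/o', 10)
new = get_part('/AUTH_test/c/o', 11)
assert new in (old * 2, old * 2 + 1)
```
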
**Tim Burke** · `c34ac98c90` · Stop having Sphinx treat warnings as errors

Warnings-as-errors breaks our docs gate currently, as the gate is using an old liberasurecode. Stop treating warnings as errors until the gate can be updated with a more recent version.

Related-Change: I3985d027f0ac2119ceaeb4daba5964f937de6cea
Change-Id: I4869ec370f08e338be78d2c0f930106081f21a0a
**Tim Burke** · `2ca303597e` · Make Sphinx treat warnings as errors

...and fix up the one warning that's crept in.

Change-Id: I3985d027f0ac2119ceaeb4daba5964f937de6cea
**gengchc2** · `eb535904ba` · modify the home-page info with the developer documentation

Update home-page info.

Change-Id: I625c25a8a5698d98174603c6fa2b42391471c03d
**Janie Richling** · `96a0e07753` · Enable object body and metadata encryption

Adds encryption middlewares. All object servers and proxy servers should be upgraded before introducing encryption middleware. Encryption middleware should first be introduced with its disable_encryption option set to True. Once all proxies have encryption middleware installed, this option may be set to False (the default).

Increases constraints.py:MAX_HEADER_COUNT by 4 to allow for headers generated by encryption-related middleware.

Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Christian Cachin <cca@zurich.ibm.com>
Co-Authored-By: Mahati Chamarthy <mahati.chamarthy@gmail.com>
Co-Authored-By: Peter Chng <pchng@ca.ibm.com>
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Jonathan Hinson <jlhinson@us.ibm.com>
Co-Authored-By: Hamdi Roumani <roumani@ca.ibm.com>
UpgradeImpact
Change-Id: Ie6db22697ceb1021baaa6bddcf8e41ae3acb5376
**Prashanth Pai** · `46d61a4dcd` · Refactor server side copy as middleware

Rewrite server-side copy and the 'object post as copy' feature as middleware to simplify the PUT method in the object controller code. COPY is no longer a verb implemented as a public method in the Proxy application. The server-side copy middleware is inserted to the left of the dlo, slo and versioned_writes middlewares in the proxy server pipeline. As a result, dlo and slo copy_hooks are no longer required.

SLO manifests are now validated when copied, so when copying a manifest to another account the referenced segments must be readable in that account for the manifest copy to succeed (previously this validation was not made, meaning the manifest was copied but could be unusable if the segments were not readable).

With this change, there should be no change in functionality or existing behavior. This is asserted with (almost) no changes required to existing functional tests.

Some notes (for operators):
- The middleware is required to be auto-inserted before slo, dlo and versioned_writes.
- Turning off server-side copy is not configurable.
- object_post_as_copy is no longer a configurable option of the proxy server but of this middleware. However, for smooth upgrade, a config option set in the proxy server app is also read.

DocImpact: Introducing server side copy as middleware
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Change-Id: Ic96a92e938589a2f6add35a40741fd062f1c29eb
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Signed-off-by: Thiago da Silva <thiago@redhat.com>
**Tin Lam** · `0e7fca576c` · Convert README.md to README.rst

Change-Id: I223890bd4ffe469becc2127f9362243cdb52bc08
Closes-Bug: #1567026
**Thiago da Silva** · `035a411660` · versioned writes middleware

Rewrite object versioning as middleware to simplify the PUT method in the object controller. The functionality remains basically the same, with the only major difference being the ability to now version SLO manifest files. DLO manifests are still not supported as part of this patch.

Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
DocImpact
Change-Id: Ie899290b3312e201979eafefb253d1a60b65b837
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Signed-off-by: Prashanth Pai <ppai@redhat.com>
**John Dickinson** · `51f806d3e3` · remove Python 2.6 from the classifier

Change-Id: I67233e9c7b69826242546bd6bd98c24b81070579
**Samuel Merritt** · `ccf0758ef1` · Add ring-builder analyzer.

This is a tool to help developers quantify changes to the ring builder. It takes a scenario (JSON file) describing the builder's basic parameters (part_power, replicas, etc.) and a number of "rounds", where each round is a set of operations to perform on the builder. For each round, the operations are applied, and then the builder is rebalanced until it reaches a steady state.

The idea is that a developer observes the ring builder behaving suboptimally, writes a scenario to reproduce the behavior, modifies the ring builder to fix it, and references the scenario with the commit so that others can see that things have improved.

I decided to write this after writing my fourth or fifth hacky one-off script to reproduce some bad behavior in the ring builder.

Change-Id: I114242748368f142304aab90a6d99c1337bced4c
**paul luse** · `647b66a2ce` · Erasure Code Reconstructor

This patch adds the erasure code reconstructor. It follows the design of the replicator, but:

- There is no notion of update() or update_deleted().
- There is a single job processor.
- Jobs are processed partition by partition.
- At the end of processing a rebalanced or handoff partition, the reconstructor will remove successfully reverted objects, if any.

It also includes various ssync changes, such as the addition of a reconstruct_fa() function, called from ssync_sender, which performs the actual reconstruction while sending the object to the receiver.

Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: John Dickinson <me@not.mn>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tushar Gohad <tushar.gohad@intel.com>
Co-Authored-By: Samuel Merritt <sam@swiftstack.com>
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Yuan Zhou <yuan.zhou@intel.com>
blueprint ec-reconstructor
Change-Id: I7d15620dc66ee646b223bb9fff700796cd6bef51
**Andreas Jaeger** · `ddf8b0594b` · Fix translation setup

Fix the output directory; it should be swift/locale. This fixes the importing of translations.

Change-Id: I48311773c9d200c3b1739dc796618849416096ed
**Matt Riedemann** · `efdc27caac` · Fix directory value for compile_catalog

Commit
**Andreas Jaeger** · `7a192987c0` · Setup localization properly

To start translation of swift, we need to initially import the translation file and place it in the proper location so that the usual CI scripts can handle it. The proper place for all Python projects is $PROJECT/locale/$PROJECT.pot, so move locale/$PROJECT.pot to the new location and regenerate the file. Update setup.cfg with the new paths. Further imports will be done by the OpenStack Proposal bot.

Change-Id: Ide4da91f2af71db529f4a06d6b1e30ba79883506
Partial-Bug: #608725
Closes-Bug: #1082805
**Clay Gerrard** · `3fc4d6f91d` · Add container-reconciler daemon

This daemon will take objects that are in the wrong storage policy and move them to the right ones, or take delete requests that went to the wrong storage policy and apply them to the right ones. It operates on a queue similar to the object-expirer's queue.

Discovering that an object is in the wrong policy will be done in subsequent commits by the container replicator; this is the daemon that handles such objects once that happens. Like the object expirer, you only need to run one of these per cluster; see etc/container-reconciler.conf.

DocImpact
Implements: blueprint storage-policies
Change-Id: I5ea62eb77ddcbc7cfebf903429f2ee4c098771c9
**zhang-hare** · `f5caac43ac` · Add profiling middleware in Swift

The profile middleware provides a tool to profile Swift code on the fly and collect statistical data for performance analysis. A simple native Web UI is also provided to help query and visualize the data.

Change-Id: I6a1554b2f8dc22e9c8cd20cff6743513eb9acc05
Implements: blueprint profiling-middleware
**Madhuri Kumari** · `c90ede29ff` · Added swift-account-info tool.

This is a very simple swift tool to retrieve information about an account located on a storage node. One can call the tool with a given account db file as it is stored on the storage node system. It will then return various pieces of information about that account.

Change-Id: Ibfeee790adc000fc177b4b3c03d22ff785fda325
**Madhuri Kumari** · `6b441fabd9` · Added swift-container-info tool.

This is a very simple swift tool to retrieve information about a container located on a storage node. One can call the tool with a given container db file as it is stored on the storage node system. It will then return various pieces of information about that container.

Change-Id: Ifebaed6c51a9ed5fbc0e7572bb43ef05d7dd254b