Remove swift.common.client from here.
- It has been moved to its own gerrit project.
- direct_client should follow next.
- Implements blueprint clientbindings.

Change-Id: I3bb50c95eba81302bfec71cb7ce5288b85a41dc0
20 changed files with 69 additions and 3535 deletions
@@ -7,6 +7,10 @@ Associated Projects
Application Bindings
--------------------
* OpenStack supported binding:
* `Python-SwiftClient <http://pypi.python.org/pypi/python-swiftclient>`_
* Made for Cloud Files, but mostly work with Swift too:
* `CSharp-CloudFiles <https://github.com/rackspace/csharp-cloudfiles>`_
@@ -266,6 +266,9 @@ Do these commands as you on guest.
#. Check out the swift repo with `git clone https://github.com/openstack/swift.git`
#. Build a development installation of swift, for example:
`cd ~/swift; sudo python setup.py develop`
#. Check out the python-swiftclient repo with `git clone https://github.com/openstack/python-swiftclient.git`
#. Build a development installation of python-swiftclient, for example:
`cd ~/python-swiftclient; sudo python setup.py develop`
#. Edit `~/.bashrc` and add to the end::
export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf
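Taken together, the checkout and development-install steps above can be run as the following sketch (the home-directory paths and the use of `sudo` are the assumptions this guide already makes):

```shell
# Check out swift and python-swiftclient side by side and dev-install both.
cd ~
git clone https://github.com/openstack/swift.git
git clone https://github.com/openstack/python-swiftclient.git

(cd ~/swift && sudo python setup.py develop)
(cd ~/python-swiftclient && sudo python setup.py develop)

# Point the functional tests at the sample test config.
echo 'export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf' >> ~/.bashrc
```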
@@ -27,11 +27,11 @@ This document shows a cluster using the following types of nodes:
- five Storage nodes
- Runs the swift-account-server, swift-container-server, and
swift-object-server processes which control storage of the account
databases, the container databases, as well as the actual stored
objects.
.. note::
Fewer Storage nodes can be used initially, but a minimum of 5 is
recommended for a production cluster.
@@ -62,7 +62,7 @@ General OS configuration and partitioning for each node
apt-get install python-software-properties
add-apt-repository ppa:swift-core/release
apt-get update
apt-get install swift python-swiftclient openssh-server
#. Create and populate configuration directories::
@@ -79,7 +79,7 @@ General OS configuration and partitioning for each node
#. On the second and subsequent nodes: Copy that file over. It must be the same on every node in the cluster!::
scp firstnode.example.com:/etc/swift/swift.conf /etc/swift/
#. Publish the local network IP address for use by scripts found later in this documentation::
@@ -87,8 +87,8 @@ General OS configuration and partitioning for each node
export PROXY_LOCAL_NET_IP=10.1.2.4
.. note::
The random string of text in /etc/swift/swift.conf is
used as a salt when hashing to determine mappings in the ring.
.. _config-proxy:
@@ -130,22 +130,22 @@ Configure the Proxy node
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:tempauth]
use = egg:swift#tempauth
user_system_root = testpass .admin https://$PROXY_LOCAL_NET_IP:8080/v1/AUTH_system
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = $PROXY_LOCAL_NET_IP:11211
@@ -153,7 +153,7 @@ Configure the Proxy node
.. note::
If you run multiple memcache servers, put the multiple IP:port listings
in the [filter:cache] section of the proxy-server.conf file like:
`10.1.2.3:11211,10.1.2.4:11211`. Only the proxy server uses memcache.
@@ -165,11 +165,11 @@ Configure the Proxy node
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1
.. note::
For more information on building rings, see :doc:`overview_ring`.
#. For every storage device in /srv/node on each node add entries to each ring::
export ZONE= # set the zone number for that storage device
@@ -189,7 +189,7 @@ Configure the Proxy node
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
#. Rebalance the rings::
swift-ring-builder account.builder rebalance
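As a concrete illustration of the add-then-rebalance cycle, the entries for one storage device might look like this. The zone, IP, device name, and weight are hypothetical sample values, and the `z<zone>-<ip>:<port>/<device> <weight>` argument form and ports 6002/6001/6000 are the conventions used by swift-ring-builder in this era of Swift:

```shell
# Hypothetical device: zone 1, storage node 10.1.2.5, device sdb1, weight 100.
export ZONE=1
export STORAGE_LOCAL_NET_IP=10.1.2.5
export WEIGHT=100
export DEVICE=sdb1

swift-ring-builder account.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6002/$DEVICE $WEIGHT
swift-ring-builder container.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6001/$DEVICE $WEIGHT
swift-ring-builder object.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6000/$DEVICE $WEIGHT

# Rebalance once every device has been added.
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
```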
@@ -219,7 +219,7 @@ Configure the Storage nodes
Extended Attributes (XATTRS). We currently recommend XFS as it
demonstrated the best overall performance for the swift use case after
considerable testing and benchmarking at Rackspace. It is also the
only filesystem that has been thoroughly tested. These instructions
assume that you are going to devote /dev/sdb1 to an XFS filesystem.
#. Install Storage node packages::
@@ -244,19 +244,19 @@ Configure the Storage nodes
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = $STORAGE_LOCAL_NET_IP
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
@@ -282,17 +282,17 @@ Configure the Storage nodes
[DEFAULT]
bind_ip = $STORAGE_LOCAL_NET_IP
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]
EOF
@@ -302,17 +302,17 @@ Configure the Storage nodes
[DEFAULT]
bind_ip = $STORAGE_LOCAL_NET_IP
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
EOF
@@ -322,17 +322,17 @@ Configure the Storage nodes
[DEFAULT]
bind_ip = $STORAGE_LOCAL_NET_IP
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]
EOF
@@ -412,11 +412,11 @@ You run these commands from the Proxy node.
Adding a Proxy Server
---------------------
For reliability's sake you may want to have more than one proxy server. You can set up the additional proxy node in the same manner that you set up the first proxy node but with additional configuration steps.
Once you have more than one proxy, you also want to load balance between them, which means your storage endpoint also changes. You can select from different strategies for load balancing. For example, you could use round-robin DNS, or an actual load balancer (such as pound) in front of the proxies, and point your storage URL at the load balancer.
See :ref:`config-proxy` for the initial setup, and then follow these additional steps.
#. Update the list of memcache servers in /etc/swift/proxy-server.conf for all the added proxy servers. If you run multiple memcache servers, use this pattern for the multiple IP:port listings, e.g. `10.1.2.3:11211,10.1.2.4:11211`, in each proxy server's conf file::
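For a two-memcache deployment, the cache filter section of each proxy's proxy-server.conf would then look like the following sketch (the IP addresses are the sample addresses used elsewhere in this guide):

```
[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.1.2.3:11211,10.1.2.4:11211
```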
@@ -430,15 +430,14 @@ See :ref:`config-proxy` for the initial setup, and then follow these additional
use = egg:swift#tempauth
user_system_root = testpass .admin http[s]://<LOAD_BALANCER_HOSTNAME>:<PORT>/v1/AUTH_system
#. Next, copy all the ring information to all the nodes, including your new proxy nodes, and ensure the ring info gets to all the storage nodes as well.
#. After you sync all the nodes, make sure the admin has the keys in /etc/swift and the ownership for the ring file is correct.
Troubleshooting Notes
---------------------
If you see problems, look in /var/log/syslog (or messages on some distros).
Also, at Rackspace we have seen hints at drive failures by looking at error messages in /var/log/kern.log.
There are more debugging hints and tips in the :doc:`admin_guide`.
@@ -60,16 +60,6 @@ WSGI
:members:
:show-inheritance:
.. _client:
Client
======
.. automodule:: swift.common.client
:members:
:undoc-members:
:show-inheritance:
.. _direct_client:
Direct Client
@@ -76,6 +76,10 @@ setups -- for testing.
Using the ``swift`` tool to set up synchronized containers
----------------------------------------------------------
.. note::
The ``swift`` tool is available from the `python-swiftclient`_ library.
.. note::
You must be the account admin on the account to set synchronization targets
@@ -145,6 +149,8 @@ You can also set up a chain of synced containers if you want more than two.
You'd point 1 -> 2, then 2 -> 3, and finally 3 -> 1 for three containers.
They'd all need to share the same secret synchronization key.
.. _`python-swiftclient`: http://github.com/openstack/python-swiftclient
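With the ``swift`` tool from python-swiftclient, pointing two containers at each other looks like the following sketch. The auth URL, credentials, cluster hostnames, account path, container names, and key are all hypothetical placeholders; ``-t`` sets the sync-to URL and ``-k`` sets the shared synchronization key:

```shell
# On cluster 1: sync container1 to container2 on cluster 2.
swift -A https://cluster1.example.com/auth/v1.0 -U system:root -K testpass \
    post -t 'https://cluster2.example.com/v1/AUTH_<account>/container2' \
    -k 'secret' container1

# On cluster 2: sync container2 back to container1 on cluster 1,
# using the same shared key.
swift -A https://cluster2.example.com/auth/v1.0 -U system:root -K testpass \
    post -t 'https://cluster1.example.com/v1/AUTH_<account>/container1' \
    -k 'secret' container2
```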
-----------------------------------
Using curl (or other tools) instead
-----------------------------------
@@ -17,9 +17,10 @@ with the possibility of parallel uploads of the segments.
Using ``swift`` for Segmented Objects
-------------------------------------
The quickest way to try out this feature is to use the ``swift`` tool
included with the `python-swiftclient`_ library. You can use the ``-S``
option to specify the segment size to use when splitting a large file. For
example::
swift upload test_container -S 1073741824 large_file
@@ -48,6 +49,8 @@ deletes and overwrites, etc. You can override this behavior with the
``--leave-segments`` option if desired; this is useful if you want to have
multiple versions of the same large object available.
.. _`python-swiftclient`: http://github.com/openstack/python-swiftclient
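For instance, to re-upload the object while keeping the previous segments around, ``--leave-segments`` can be combined with ``-S``. The container and file names are the examples from above, and 1073741824 bytes is 1 GiB:

```shell
# Upload in 1 GiB segments, keeping prior segments on overwrite
# so older versions of the large object remain retrievable.
swift upload test_container -S 1073741824 --leave-segments large_file
```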
----------
Direct API
----------