Commit Graph

25 Commits

Author SHA1 Message Date
Kevin Carter
874c8df029 Cleanup files and templates using smart sources
The files and templates we carry are almost always in a state of
maintenance. The upstream services are maintaining these files and
there's really no reason we need to carry duplicate copies of them. This
change removes all of the files we expect to get from the upstream
service. While the focus of this change is to remove the configuration
file maintenance burden, it also allows the role to execute faster.
 * Source installs have the configuration files within the venv at
 "<<VENV_PATH>>/etc/<<SERVICE_NAME>>". The role will now link the
 default configuration path to this directory. When the service is
 upgraded, the link will move to the new venv path.
 * Distro installs package all of the required configuration files.
To maintain our current ability to override configuration, the role
will fetch files from disk whenever an override is provided and
then push the fetched file back to the target using `config_template`.
Depends-On: https://review.openstack.org/636162
Change-Id: Ib7d8039513bc2581cf7bc0e2e73aa8ab5da82235
Signed-off-by: Kevin Carter <kevin@cloudnull.com>
2019-02-12 10:21:06 +00:00
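The fetch-then-override flow described in the commit above could be sketched roughly as follows; the paths, `nova_nova_conf_overrides`, and the task layout are illustrative rather than the role's exact implementation:

```yaml
# Sketch only: fetch the file the package/venv provides, then re-apply it
# with the deployer overrides merged in via config_template.
- name: Fetch the provided configuration file from the target
  fetch:
    src: /etc/nova/nova.conf
    dest: /tmp/nova-config-fetch/
    flat: yes
  when: nova_nova_conf_overrides | length > 0

- name: Push the fetched file back with overrides applied
  config_template:
    src: /tmp/nova-config-fetch/nova.conf
    dest: /etc/nova/nova.conf
    owner: nova
    group: nova
    mode: "0640"
    config_overrides: "{{ nova_nova_conf_overrides }}"
    config_type: ini
  when: nova_nova_conf_overrides | length > 0
```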
Jesse Pretorius
f529f0f6c7 Use a common python build/install role
In order to radically simplify how we prepare the service
venvs, we use a common role to do the wheel builds and the
venv preparation. This makes the process far simpler to
understand, because the role does its own building and
installing. It also reduces the code maintenance burden,
because instead of duplicating the build processes in the
repo_build role and the service role - we only have it all
done in a single place.
We also change the role venv tag var to use the integrated
build's common venv tag so that we can remove the role's
venv tag in group_vars in the integrated build. This reduces
memory consumption and also reduces the duplication.
This is by no means the final stop in the simplification
process, but it is a step forward. There will be follow-up
work which:
1. Replaces 'developer mode' with an equivalent mechanism
 that uses the common role and is simpler to understand.
 We will also simplify the provisioning of pip install
 arguments when doing this.
2. Simplifies the installation of optional pip packages.
 Right now it's more complicated than it needs to be due
 to us needing to keep the py_pkgs plugin working in the
 integrated build.
3. Deduplicates the distro package installs. Right now the
 role installs the distro packages twice - just before
 building the venv, and during the python_venv_build role
 execution.
Depends-On: https://review.openstack.org/598957
Change-Id: I182bde29c049a97bc2b55193aee0b5b3d8532916
Implements: blueprint python-build-install-simplification
Signed-off-by: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
2018-09-04 11:45:20 +00:00
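A rough sketch of how a service role hands the build work to the common role named in the commit above; the variable values shown here are illustrative, not the documented defaults:

```yaml
# Indicative only: hand wheel building and venv creation to the shared role.
- name: Build and install the nova venv via the common role
  include_role:
    name: python_venv_build
  vars:
    venv_install_destination_path: "/openstack/venvs/nova-{{ nova_venv_tag | default('untagged') }}"
    venv_pip_packages:
      - nova
      - pymysql
```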
Jesse Pretorius
d0696a90ab Execute service setup against a delegated host using Ansible built-in modules
In order to reduce the packages required to be pip-installed onto the hosts,
we allow the service setup to be delegated to a specific host, defaulting
to the deploy host. We also switch as many tasks as possible to using the
built-in Ansible modules which make use of the shade library.
The 'virtualenv' package is now installed appropriately by the openstack_hosts
role, so there's no need to install it any more. The 'httplib2' package is a
legacy Ansible requirement for the get_url/uri modules which is no longer
needed. The keystone client library is not required any more now that we're
using the upstream modules. As there are no required packages left, the task
to install them is also removed.
Unfortunately we need to use the openstack client to wait for a compute host
to register, so we add it into the nova venv and implement a change in the
way we do the wait so that openrc/clouds.yaml is only deployed on a single
compute host and the wait task is executed there.
Depends-On: https://review.openstack.org/582359
Change-Id: I702480a5188a583a03f66bb39609f7d25a996e4a
2018-07-22 14:22:40 +00:00
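The delegation pattern described in the commit above, sketched with the shade-backed `os_user` module; the host variable, credential names, and project values are placeholders:

```yaml
# Placeholder host/credential names: Keystone setup runs from the delegated host.
- name: Create the nova service user
  os_user:
    cloud: default
    state: present
    name: nova
    password: "{{ nova_service_password }}"
    domain: default
    default_project: service
  delegate_to: "{{ nova_service_setup_host | default('localhost') }}"
  run_once: yes
  no_log: yes
```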
zhulingjie
acb2c87038 Remove the unnecessary space
Change-Id: I993181a2d352a83d25bcddf5b39f4be016f0018d
2018-07-11 23:23:18 -04:00
Jean-Philippe Evrard
9f53e04687 Fix usage of "|" for tests
With more recent versions of Ansible, we should now use
"is" instead of the "|" filter syntax for tests.
This should fix it.
Change-Id: If3e4366c22e900557e4730a7e8838f55ffe30ecc
2018-07-12 16:44:21 +02:00
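The syntax change amounts to rewrites of this shape (task and variable names invented for illustration):

```yaml
# Before (deprecated filter-style test):
#   when: _db_migrate is defined and _db_migrate | changed
# After (test syntax):
- name: Flag that a service restart is needed
  debug:
    msg: "nova services need a restart"
  when:
    - _db_migrate is defined
    - _db_migrate is changed
```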
Major Hayden
ff26ba2158 Remove systemd conditionals
All operating systems supported by the role have systemd and these
conditionals are no longer needed.
Change-Id: I35500f7eec993b2bcdb245a995a05cacf2c596f8
2018-02-20 09:39:58 +00:00
Cuong Nguyen
67b570702f Use group_names to check a host belongs to group
Also, use the nova_services dict to get the group name.
Change-Id: Iec090937b0213120854847eebf099df4ffc03528
2017-11-22 09:58:26 +07:00
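A minimal sketch of the `group_names` check driven by the `nova_services` mapping; the dictionary keys and included file name are assumed for illustration:

```yaml
# Only include compute-specific tasks on hosts that are in the compute group.
- name: Include nova-compute specific tasks
  include_tasks: nova_compute.yml
  when: nova_services['nova-compute']['group'] in group_names
```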
Logan V
902e638d95 Add external LB management handler hook interface
Based on a conversation on an Ansible issue[1], I implemented
an LB orchestration role[2] similar to the POC here[3].
This will allow external load balancer management roles to hook
into a universal notify listener "Manage LB" to perform before/
after endpoint management actions when the service is being
restarted.
[1]: https://github.com/ansible/ansible/issues/27813
[2]: https://github.com/Logan2211/ansible-haproxy-endpoints
[3]: https://github.com/Logan2211/tmp-ansible-27813
Change-Id: I5aecc26606f41bc6b27fbe9a5f600914a88ff2c7
2017-09-16 14:23:03 -05:00
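Hooking the interface from an external role could look roughly like the handler below, which listens on the "Manage LB" topic named in the commit above; the haproxy backend and inventory group names are placeholders:

```yaml
# An external role's handler subscribing to the "Manage LB" topic.
- name: Disable the API backend around the service restart
  haproxy:
    state: disabled
    host: "{{ inventory_hostname }}"
    backend: nova-api-os-compute-back
  delegate_to: "{{ groups['haproxy_all'][0] }}"
  listen: Manage LB
```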
Andy McCrae
823a80bd44 Move to use UWsgi for Nova
The placement service is already set up to use uWSGI; we need
to move the other Nova services to follow suit as part of our community
goal for Pike.
Additionally, we need to clean up the nginx configuration as we are
moving away from fronting uWSGI with nginx inside the roles.
Depends-On: Ib66b9709fb88205eaf3f133c87357a4dbbdde5ae
Change-Id: If6c30e00c1c753692c970457b75e3ae7f5cc066c
Implements: blueprint goal-deploy-api-in-wsgi
2017-08-14 14:27:25 +01:00
Jesse Pretorius
7fc1497ebe Implement data migrations for rolling upgrades
In order to cater for artifact-based installs and
rolling upgrades, this patch implements a set of local
facts to inform the online migrations task.
The 'nova_all_software_updated' variable will be
set by the playbook on each run to ensure that the
online migrations only happen once all venvs are
homogeneous. This ensures that the playbook can be
executed in a serialised fashion and the data will
not be corrupted.
The ``upgrade_levels`` setting for ``compute`` is set
to ``auto`` to ensure that a mixed RPC version
deployment can operate properly when doing a rolling
upgrade as suggested by [1].
Additional changes are made to improve the role's
ability to be executed using serialised playbooks.
Finally, the nova-manage command references to the
config file location have been removed as they refer
to the default location.
[1] https://docs.openstack.org/developer/nova/upgrade.html
Change-Id: I08e5a7f0ce526b11aa52c35ee29c458954a5f22d
2017-07-06 06:18:21 +00:00
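A sketch of the local-fact pattern the commit above describes; the fact file, section, and group names here are assumptions rather than the role's exact implementation, and `nova_all_software_updated` is set by the playbook as the commit states:

```yaml
# Assumed fact file/section names; nova_all_software_updated comes from the playbook.
- name: Record that this host's nova software has been updated
  ini_file:
    dest: /etc/ansible/facts.d/openstack_ansible.fact
    section: nova
    option: need_online_data_migrations
    value: "True"

- name: Perform online data migrations once all venvs are homogeneous
  command: nova-manage db online_data_migrations
  when:
    - nova_all_software_updated | bool
    - ansible_local['openstack_ansible']['nova']['need_online_data_migrations'] | bool
  run_once: yes
  delegate_to: "{{ groups['nova_conductor'][0] }}"
```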
Jesse Pretorius
4b9100a612 Perform an atomic policy file change
The policy.json file is currently read continually by the
services, not only on service start. We therefore
cannot template directly to the file read by the service
(if the service is already running) because the new policies
may not be valid until the service restarts. This is
particularly important during a major upgrade. We therefore
only put the policy file in place after the service restart.
This patch also tidies up the handlers and some of the install
tasks to simplify them and reduce the tasks/code a little.
Change-Id: Icba9df7be6012576eca0afb040a6953809cc9a5f
2017-06-21 11:58:00 +01:00
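The atomic swap can be sketched as staging the rendered policy beside the live file and only copying it over after the restart handler has run; the paths, handler names, and handler ordering shown here are illustrative:

```yaml
# Stage the rendered policy beside the live file; the copy handler is listed
# after the restart handler in handlers/main.yml so the live file only
# changes post-restart.
- name: Stage the updated policy file
  config_template:
    src: policy.json.j2
    dest: /etc/nova/policy.json-staged
    config_overrides: "{{ nova_policy_overrides | default({}) }}"
    config_type: json
  notify:
    - Restart nova services
    - Copy staged policy file into place

# handlers/main.yml
- name: Copy staged policy file into place
  copy:
    src: /etc/nova/policy.json-staged
    dest: /etc/nova/policy.json
    remote_src: yes
```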
Andy McCrae
97cf209d69 Use handlers based on filtered_nova_services
Change-Id: I3f9cde33af0a5f2aaeb0aedc964cb03a40ccbb9f
2017-05-08 14:47:46 +00:00
Marc Gariepy
fa3797d857 Enable Nginx for nova-placement
On CentOS the default is to have the service disabled; this ensures nginx is enabled.
Closes-bug: 1681533
Change-Id: I98018fca9c277248b77b60081ea560c012b370af
2017-04-23 02:17:11 +00:00
Dan Kolb
5fbbff6b46 Reload service files on Nova services restart
During an upgrade, new service files are added, but systemd is not
reloaded during the restart of nova services to pick up these file
changes. This change performs a daemon-reload when restarting nova
services.
Change-Id: I98b3f66429ee045f052ad491847cf82d2f5d4efc
Closes-Bug: #1673889 
2017-03-31 19:26:43 +00:00
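With the `systemd` module, the reload and restart collapse into one handler task; the `nova_services` keys used here are assumed for illustration:

```yaml
# daemon_reload picks up new or changed unit files before the restart.
- name: Restart nova services
  systemd:
    name: "{{ item.value.service_name }}"
    state: restarted
    daemon_reload: yes
  with_dict: "{{ nova_services }}"
  when: item.value.group in group_names
```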
Andy McCrae
6867e24438 Reload nginx instead of restart
We don't need to restart nginx - we can instead just reload the
service. Additionally, as more services move to use an nginx frontend, it
would be bad to restart all nginx services at the same time.
For now we need to investigate the impact of reloads in uWSGI before
moving over to a "reload" for uWSGI.
Change-Id: I60e370e784a1ff3a0f5bf8551be804bf05d8bb43
2017-02-23 17:55:51 -05:00
Logan V
5c99b10178 Ordered service restarts
Use specific ordering for nova service restarts.
Change-Id: I29e17c09c6aa1b626aead8e4916cc89604a371d6
2017-02-22 07:15:18 -06:00
Logan V
b9b8e08ac0 Wait for nova-compute service registration
A race condition is caused when nova-compute is started for the
first time because it takes a period of time for nova-compute
to spin up, register itself with the nova API, and become available for
cell enrollment.
Prior to this change there was no wait condition when nova-compute
restarts occurred, so the first time nova-compute started, the compute
service was often not yet registered in the database and available
for cell enrollment when the enrollment tasks ran.
Change-Id: I510f0a957f53d15affa1fc23f809abff52208438
2017-02-09 10:07:14 -06:00
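The wait described in the commit above can be expressed as a retried OpenStack CLI check; the cloud name, retry counts, and group reference are placeholders and assume clouds.yaml exists on the delegated compute host:

```yaml
# Poll the service list from one compute host until this host shows up.
- name: Wait for nova-compute to register itself
  command: >
    openstack --os-cloud default
    compute service list --service nova-compute -f value -c Host
  register: _compute_service_hosts
  until: ansible_hostname in _compute_service_hosts.stdout
  retries: 10
  delay: 5
  changed_when: false
  delegate_to: "{{ groups['nova_compute'][0] }}"
```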
Andy McCrae
966ea269c9 Add nova-placement-api service and cell_v2 setup
This patch adds Nova requirements for Ocata:
* Nova Placement API running as uwsgi with Nginx.
* cell_v2 setup for cell0 and cell1
* All required settings for these services with sane defaults
It fixes up some ordering for DB operations:
* online_db_migrations should only happen after a full upgrade.
* Cell setup needs to happen after api_db sync but before db sync.
* Discover_hosts for cell_v2 needs to happen after compute is restarted
This adds functionality to allow uwsgi apps in the init scripts:
* Allowing the "--log" line to be adjusted.
* Setting the condition value so that only enabled services are deployed
* Fixes a bug for program_override which meant this value was never being
used.
Depends-On: I082f37bb3ce61a900e06a58f21c7882f83671355
Change-Id: I282d25988377d18257b708859f89a7ae4260ac07
2017-02-02 16:47:19 +00:00
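The cell_v2 ordering called out in the commit above boils down to a sequence of `nova-manage` calls along these lines; the delegation target is a placeholder and connection details come from nova.conf:

```yaml
# Ordering only: cell setup sits between api_db sync and db sync,
# and discover_hosts runs after nova-compute has restarted and registered.
- name: Bootstrap the cell_v2 layout
  command: "{{ item }}"
  with_items:
    - nova-manage api_db sync
    - nova-manage cell_v2 map_cell0
    - nova-manage cell_v2 create_cell --name cell1 --verbose
    - nova-manage db sync
  run_once: yes
  delegate_to: "{{ groups['nova_conductor'][0] }}"

- name: Discover compute hosts in cell_v2
  command: nova-manage cell_v2 discover_hosts
  run_once: yes
  delegate_to: "{{ groups['nova_conductor'][0] }}"
```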
Cuong Nguyen
a89f13c608 Use systemd module instead of shell
Using the Ansible systemd module to perform daemon reloads and service reloads is the way forward.
Change-Id: I3f9142357379a548b1e1f4190e61157596f750fa
Co-Authored-By: Jean-Philippe Evrard <Jean-Philippe.Evrard@rackspace.co.uk>
2017-01-25 08:29:43 +07:00
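The substitution is essentially this (illustrative task):

```yaml
# Before:
#   - name: Reload systemd daemon
#     shell: systemctl daemon-reload
# After, using the built-in module:
- name: Reload systemd daemon
  systemd:
    daemon_reload: yes
```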
Andy McCrae
167fe1b74a Remove Trusty support from os_nova role
Change-Id: Ib0747040d6b53cbb7aec67cfaceae6cc1efb1abc
Implements: blueprint trusty-removal
2016-12-15 13:21:13 +00:00
Marc Gariepy
83a9864b0d Add CentOS support for os_nova
* Only KVM hosts are supported right now.
Depends-On: Iff4a5999be0263a2c1843d7ca29843468cbc0ccc
Depends-On: I78fb85d44b5b0e1643bd07af3e15462c02041c89
Change-Id: Ie05c243daa7d2d46b5e8779371a363d95cc990e9
2016-11-15 08:10:56 -05:00
Logan V
6361372415 Fix linting issues for ansible-lint 3.4.1
Preparing this role for the ansible-lint version bump
Change-Id: Ia5d254d43f9541c82c700080aafee276dafad0a7
2016-11-02 12:48:25 +00:00
Jesse Pretorius
9a17ca682d Use dictionary for service group mappings
Change 'nova_service_names' from a list to a dictionary mapping
services to the groups that install those services. This brings the
method into line with that used in the os_neutron role in order to
implement a more standardised approach.
The init tasks have been updated to run once and loop through this
mapping rather than being included multiple times and re-run against
each host. This may potentially reduce role run times.
Currently the reload of upstart/systemd scripts may not happen if
only one script changes as the task uses a loop with only one result
register. This patch implements handlers to reload upstart/systemd
scripts to ensure that this happens when any one of the scripts
change.
The handler to reload the services now only tries to restart the
service if the host is in the group for the service according to the
service group mapping. This allows us to ensure that handler
failures are no longer ignored and that no execution time is wasted
trying to restart services which do not exist on the host.
Finally:
- Common variables shared by each service's template files have
 been updated to use the service namespaced variables.
- Unused handlers have been removed.
- Unused variables have been removed.
Change-Id: I53fb0ab1cc5762e3559d4ee2635d4cca532df7e3
2016-09-30 17:41:26 +00:00
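The mapping shape this introduces looks roughly like the following; the keys and group names are indicative of the pattern rather than the exact defaults:

```yaml
nova_services:
  nova-api-os-compute:
    group: nova_api_os_compute
    service_name: nova-api-os-compute
  nova-scheduler:
    group: nova_scheduler
    service_name: nova-scheduler
  nova-compute:
    group: nova_compute
    service_name: nova-compute
```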
Travis Truman
2701d29caf Address Ansible bare variable usage
When executing the role with Ansible 2.1, the following
deprecation warning is issued in the output for some tasks.
[DEPRECATION WARNING]: Using bare variables is deprecated.
This patch updates the affected tasks to fix the behaviour appropriately.
Change-Id: I7ef4e446d6fc509420d5b297378f4fa91a519fc8
2016-06-15 11:13:57 -04:00
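The warning is triggered by loops of this shape; wrapping the variable in a full Jinja expression fixes it (the package list name is invented for illustration):

```yaml
# Deprecated bare form:
#   with_items: nova_pip_packages
# Fixed by wrapping the variable in a full Jinja expression:
- name: Install pip packages
  pip:
    name: "{{ item }}"
  with_items: "{{ nova_pip_packages }}"
```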
Kevin Carter
fdd1c4c689 Convert existing roles into galaxy roles
This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream ansible best practices.
Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a simpler
 approach. This change duplicates code within the roles but
 ensures that the roles only ever run within their own scope.
* All roles have been built using an ansible galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream
 OpenStack practices.
* Dynamically generated inventory is now more organized; this should assist
 anyone who may want or need to dive into the JSON blob that is created.
 In the inventory a properties field is used for items that customize containers
 within the inventory.
* The environment map has been modified to support additional host groups to
 enable the separation of infrastructure pieces. While the old infra_hosts group
 will still work, this change allows for groups to be divided up into separate
 chunks; e.g. deployment of a Swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has all password/token
 variables extracted into the separate file
 etc/openstack_deploy/user_secrets.yml in order to allow separate
 security settings on that file.
Items Excised:
* All of the roles have had the LXC logic removed from within them which
 should allow roles to be consumed outside of the `os-ansible-deployment`
 reference architecture.
Note:
* The directory rpc_deployment still exists and is presently pointed at plays
 containing a deprecation warning instructing the user to move to the standard
 playbooks directory.
* While all of the Rackspace-specific components and variables have been removed
 and/or refactored, the repository still relies on an upstream mirror of
 OpenStack-built Python files and container images. This upstream mirror is hosted
 at Rackspace at "http://rpc-repo.rackspace.com", though this is
 not locked to and/or tied to Rackspace-specific installations. This repository
 contains all of the needed code to create and/or clone your own mirror.
DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e
2015-02-18 10:56:25 +00:00