master
534 Commits
| Author | SHA1 | Message | Date |
|---|---|---|---|
| Zuul | b1dfe41dff | Merge "Include both novnc and spice tasks if needed" | |
| Dmitriy Rabotyagov | 2532e424bd | **Do not remove policy.yaml file.** oslo.policy cannot handle removal of the policy file: if policy overrides were defined at some point but later removed, the service suffers an outage. While we could add a handler trigger to restart the service on policy removal, it is simpler to always place an empty policy.yaml, even when no overrides are defined. Change-Id: I76836a7a61cce8c94f30b7dcf2e8a3a6078b9900 Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com> | |
| Dmitriy Rabotyagov | 1b6f02c433 | **Include both novnc and spice tasks if needed.** nova_console_proxy_types treats console types as a list, which means spice and novnc are no longer mutually exclusive console types. Thus, we may want to deploy both of them when both are in the list. Change-Id: Ib24f394f05674c6a8543a4bd336a53debf064992 Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com> | |
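Based on the change above, deploying both console proxies would presumably be a matter of overriding the list variable named in the commit. A minimal sketch (the exact accepted string values are an assumption, not confirmed by the commit):

```yaml
# Hypothetical user override, e.g. in user_variables.yml:
# with nova_console_proxy_types now a list, both proxies can coexist.
nova_console_proxy_types:
  - novnc
  - spice
```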
| Dmitriy Rabotyagov | d3fb3a5e9e | **Extend apparmor overrides for custom nova folder.** When an arbitrary folder is used for Nova, additional folders need to be allowed in AppArmor. With that, we no longer need any overrides by default, as they are all already present in the default aa-helper profile. Change-Id: Ib7a03434dae9f838289fbb16bfeb6c640eeccfc2 Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com> | |
| Zuul | 840087f608 | Merge "Allow to skip discovered mdevs" | |
| Dmitriy Rabotyagov | 4906e4d641 | **Drop rootwrap.d creation.** As rootwrap.d is now shipped by nova, we no longer need to ensure the directory exists explicitly. Related-Bug: #2115295 Change-Id: If4aa8b289dc5664d36fd67991d481f49670a13f9 Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com> | |
| Dmitriy Chubinidze | 30a74c07bf | **Drop os_nova "Copy nova rootwrap filter config" task.** Drop the os_nova task "Copy nova rootwrap filter config", as it is a leftover from an earlier cleanup where rootwrap.d files were removed from the codebase. Change-Id: Ia5620952fb50c6cc6a3e47f18f67a1b1cd77992f Closes-Bug: #2115295 | |
| Dmitriy Rabotyagov | 8968c235ec | **Allow to skip discovered mdevs.** Currently there is no way to avoid auto-discovery of mdev devices; the only way to keep them out of nova.conf is through a config override. Change-Id: Ie1c40a427599e610278262cfdb55fdcf017d4ede | |
| Dmitriy Rabotyagov | 11ff642fe6 | **Auto-fix usage of modules via FQCN.** Since ansible-core 2.10 it is recommended to use modules via their FQCN. To align with this recommendation, we perform the migration by applying the suggestions made by `ansible-lint --fix=fqcn`. Change-Id: I335f0f1bcdb5e5564ce3f82f44eec7d8c6ab4e0e | |
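The auto-fix described above rewrites short module names to their fully qualified collection names. A representative before/after sketch (task names and file paths are illustrative, not taken from the role):

```yaml
# Before: short module name, the style deprecated since ansible-core 2.10
- name: Copy nova config
  template:
    src: nova.conf.j2
    dest: /etc/nova/nova.conf

# After `ansible-lint --fix=fqcn`: the builtin module is addressed by FQCN
- name: Copy nova config
  ansible.builtin.template:
    src: nova.conf.j2
    dest: /etc/nova/nova.conf
```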
| Dmitriy Rabotyagov | aa1503d8ce | **Auto-fix yaml rules.** To reduce divergence from the ansible-lint rules, we apply auto-fixing of violations. In this patch we replace all kinds of truthy values with `true` or `false` to align with the recommendations, along with aligning the quoting style. Change-Id: Ie1737a7f88d783e39492c704bb6805c89a199553 | |
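The truthy auto-fix amounts to replacing YAML's many boolean spellings with the canonical ones. A minimal illustration (the task itself is hypothetical):

```yaml
# Before: truthy spellings that ansible-lint's yaml[truthy] rule flags
- name: Enable nova-api service
  ansible.builtin.service:
    name: nova-api
    enabled: yes

# After auto-fix: canonical booleans only
- name: Enable nova-api service
  ansible.builtin.service:
    name: nova-api
    enabled: true
```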
| Andrew Bonney | 61be9e722d | **Change ordering of /etc/ operations to improve upgrades.** This change matches an earlier modification to os_neutron. Currently we symlink /etc/<service> to an empty directory at the pre stage and fill it with config only during post_install. This means that policies and rootwrap filters do not work properly until the playbook execution finishes. Additionally, we replace the sudoers file with a new path in it, which makes current operations impossible for the service, since rootwrap cannot gain sudo privileges. With this change we move the symlinking and rootwrap steps to handlers, so configs are replaced while the service is stopped. During post_install we place all of the configs inside the venv, which is versioned at the moment. This minimises downtime of the service while performing upgrades. Closes-Bug: #2056180 Change-Id: I9c8212408c21e09895ee5805011aecb40b689a13 | |
| Dmitriy Rabotyagov | 5884318116 | **Allow to apply custom configuration to Nova SSH config.** When compute nodes use a non-standard SSH port or some other unusual connection between each other, deployers may need to supply extra configuration inside the SSH config. The community.general.ssh_config module was not used, as it requires the extra `paramiko` module to be installed on each destination host. Change-Id: Ic79aa391e729adf61f5653dd3cf72fee1708e2f5 | |
| Dmitriy Rabotyagov | 1b6740f3f8 | **Allow to supply multiline overrides to vendor_data.** According to the documentation, multiline data is expected inside vendor_data.json [1]. [1] https://cloudinit.readthedocs.io/en/latest/reference/datasources/openstack.html#vendor-data Depends-On: https://review.opendev.org/c/openstack/ansible-config_template/+/924217 Closes-Bug: #2073171 Change-Id: Ifc1239e4ef768e94c44d8d07df7a0b93c73638f9 | |
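The point of the change above is that an override value may now span multiple lines when rendered into vendor_data.json. A hedged sketch using a YAML block scalar; the variable name here is an assumption for illustration, not the role's documented interface:

```yaml
# Illustrative only: `nova_vendordata_content` is a hypothetical variable
# name. The multiline JSON body is what the commit enables passing through.
nova_vendordata_content: |
  {
    "custom": {
      "message": "provisioned by openstack-ansible",
      "script": "#!/bin/sh\necho hello"
    }
  }
```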
| Dmitriy Rabotyagov | 6d32065246 | **Update conditions for kernel statoverride.** With the Ansible version update, having bare variables in conditions is no longer allowed, which results in errors like: `Conditional is marked as unsafe, and cannot be evaluated`. Change-Id: I6e8e0ee1ffc2c154bac0f64f2e797281d7ba966f | |
| Andrew Bonney | ae20d2d9fd | **Add tag to enable targeting of post-install config elements only.** The `<service>-config` tags are quite broad and have a long execution time. When you only need to modify a service's '.conf' file and similar, it is useful to have a quicker method to do so. Change-Id: Idf0a0a7033d8f6c4d6efebff456ea3f19ea81185 | |
| Dmitriy Rabotyagov | 97c408e19d | **Add variable to globally control notifications enablement.** To allow globally enabling notification reporting for all services, without needing ceilometer deployed or a bunch of overrides for each service, we add the `oslomsg_notify_enabled` variable that controls whether notifications are enabled. The presence of ceilometer is still respected and referenced by default. A potential use case is various billing panels that rely on notifications but do not require the presence of Ceilometer. Change-Id: Ib5d4f174be922f9b6f5ece35128a604fddb58e59 | |
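For a billing-panel deployment without Ceilometer, the variable named in the commit can presumably be set globally; a minimal override sketch:

```yaml
# Enable oslo.messaging notifications for all services even though
# Ceilometer is not deployed (variable name taken from the commit message).
oslomsg_notify_enabled: true
```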
| Dmitriy Rabotyagov | 82d439c3fb | **Add service policies definition.** To allow the definition of policies per service, we add variables to service roles that are passed to openstack.osa.mq_setup. Currently this can be handled by leveraging group_vars and overriding `oslomsg_rpc_policies` as a whole, but that is not obvious and can be non-trivial for groups which co-locate multiple services, or in case of metal deployments. Change-Id: I6a4989df2cd53cc50faae120e96aa4480268f42d | |
| Zuul | 737da47464 | Merge "Include PKI role only once" | |
| Zuul | 4943bab3fd | Merge "fix apparmor profile for non-standard nova home" | |
| Dmitriy Rabotyagov | 466e7572bb | **Include PKI role only once.** This patch moves the condition for when to install certificates from the role include statement to a combined "view" for the API and consoles. While adding computes to the same logic might benefit CI and AIO metal deployments, it could have a negative effect on real deployments by creating a bunch of skipped tasks for computes, so we leave them separated. Since the API and console are usually placed on the same hosts, it makes sense to distribute certs to them once, while keeping the possibility of different hosts in mind. Change-Id: I8e28a79a6e3a5be1fe54004ea1d2c3a3ccdc20bc | |
| Dmitriy Rabotyagov | 51177a6574 | **Enable deployers to force update cell mappings.** Add the variable nova_cell_force_update to enable deployers to ensure that role execution will also update cell mappings whenever needed, for instance after a password rotation or when intending to update the MySQL address. Change-Id: I5b99d58a5c4d27a363306361544c5d80759483fd | |
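Following the commit above, forcing a cell-mapping refresh after, say, a DB password rotation would presumably be a one-line override:

```yaml
# Force the role to update cell mappings on the next run, e.g. after
# rotating the DB password (variable name taken from the commit message).
nova_cell_force_update: true
```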
| Dmitriy Rabotyagov | 3515638326 | **Ensure TLS is enabled properly for cell0 mapping DB connection.** Once we enabled the TLS requirement in [1], jobs started failing on cell0 mapping, as its connection string was actually different and did not connect to MariaDB through TLS when it was assumed to. [1] https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/911009 Change-Id: I96fa921cfdb849f59b5abd8452061d4c5bd04a76 | |
| Aleksandr Chudinov | 7bec243c62 | **Fix apparmor profile for non-standard nova home.** When a non-standard path to nova instances is configured with the nova_system_home_folder variable, instance spawning may fail because libvirt's virt-aa-helper is missing a permission in the AppArmor profile. This commit resolves that. Change-Id: I3d37eb5a9635044570690370dfcbc060ff4d9e49 | |
| Dmitriy Rabotyagov | 9843c47e81 | **Always distribute qemu config file.** When Ceph is not used as the backend for nova, the qemu.conf file is not distributed, so some settings, such as nova_qemu_vnc_tls, have no effect. Closes-Bug: #2003749 Change-Id: I4bc68567cda57d73d030d9a5017cc411f7ee7732 | |
| Dmitriy Rabotyagov | 5300fcea9d | **Run ceph_client when cinder uses Ceph.** In use cases where only cinder uses Ceph, we currently do not execute the ceph_client role, which makes nodes fail to spawn instances from RBD volumes. A sample use case: Glance uses Swift, local storage is desired for Nova ephemeral drives, but cinder creates volumes on Ceph. Currently this can be worked around by setting `nova_rbd_inuse: True` together with `nova_libvirt_images_rbd_pool: ''`, but that is counter-intuitive, and this patch aims to improve it. Change-Id: I412d1e9ccb51f0cd33a98333bfa1a01510867fbe | |
| Zuul | 20e83153bb | Merge "Drop until-complete flag for db purge" | |
| Damian Dabrowski | ab72a180e6 | **Avoid failures when default libvirt network does not exist.** This is a follow-up to [1]. Depending on the operating system and environment configuration, the default libvirt network may not exist. Right now, the `Check for libvirt default network` task throws an error in this case, causing the nova playbook to fail. This change fixes that by instructing Ansible not to throw an error if `virsh net-list` fails with "Network not found: no network with matching name", because it is acceptable not to have this network. [1] https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/899768 Change-Id: If692bc94f421bc84ad9e6d43f548b68196a9e751 | |
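The pattern described above, tolerating a specific failure mode instead of failing the play, is typically expressed with `failed_when`. A minimal sketch, not the role's exact task (the `virsh` invocation shown is an assumption):

```yaml
# Sketch: only fail when virsh errors for a reason OTHER than the
# default network simply not existing.
- name: Check for libvirt default network
  ansible.builtin.command: virsh net-info default  # assumed invocation
  register: _default_net
  changed_when: false
  failed_when:
    - _default_net.rc != 0
    - "'Network not found: no network with matching name' not in _default_net.stderr"
```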
| Damian Dabrowski | feb15af75b | **Always disable libvirt default network.** Currently, autostart for the libvirt default network is disabled only when this network is active during the nova playbook execution. This is incorrect behavior because in some cases the network may not be active from the beginning. Autostart should always be disabled to ensure that this network is not unexpectedly marked as active in the future (during a package upgrade, host reboot, etc.). Closes-Bug: #2042369 Change-Id: I697234bda1601b534ce1b6ab186fa98f83179ee8 | |
| Zuul | 5b7678c503 | Merge "Cleanup upgrade to ssh_keypairs step" | |
| Dmitriy Rabotyagov | 51ce1d4923 | **Drop until-complete flag for db purge.** The --until-complete flag is not valid for the nova-manage db purge command; it works only for archive_deleted_rows [1]. Presumably it was a copy/paste mistake to keep the flag in place. [1] https://docs.openstack.org/nova/latest/cli/nova-manage.html#db-archive-deleted-rows Change-Id: I7be8c41bd52b955d83c4452e67ef323abe00969e | |
| Dmitriy Rabotyagov | 4aa65eb606 | **Fix logic of discovering hosts by service.** For quite some time, we have tied usage of the --by-service flag of the nova-manage cell_v2 discover_hosts command to the configured nova_virt_type. However, we run db_post_setup tasks only once, delegated to the conductor host. With the latest changes to the logic, where this task is included from the playbook level, it makes even less sense, since the definition of nova_virt_type for the conductor is weird and wrong. Instead, we attempt to detect whether ironic is in use by checking the hostvars of all compute nodes, which include host_vars, group_vars, all sorts of extra variables, etc. Ironic hosts should therefore now be discovered properly with the nova-manage command. Related-Bug: #2034583 Change-Id: I3deea859a4017ff96919290ba50cb375c0f960ea | |
| Dmitriy Rabotyagov | 738ac83cf5 | **Cleanup upgrade to ssh_keypairs step.** We migrated to the ssh_keypairs role a while ago, so we can remove the old migration clean-up task. Change-Id: Ie3cbeb4bd41d3137f2332f28dbc72c8028fb5b3a | |
| Zuul | 32867052d7 | Merge "Run nova_db_post_setup from playbook directly" | |
| Dmitriy Rabotyagov | b266f9cda4 | **Stop generating ssh keypair for nova user.** With the transition to ssh certificates for nova authorization, we no longer need to generate and keep an SSH keypair for the nova user. Change-Id: Iff105bafc177271cb59fb0662d4c139f56e64325 | |
| Dmitriy Rabotyagov | e4ffb047c0 | **Run nova_db_post_setup from playbook directly.** Due to some bugs, delegation of tasks from compute to conductor hosts does not work in real life. The task import was therefore moved to the playbook level, using a role import in combination with tasks_from. Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/897570 Change-Id: I777b1c90f57c805bc0a8593b5a5c7e63e43c4cd8 | |
| Dmitriy Rabotyagov | 08ccb5108a | **Split lines to not exceed 160 characters limit.** Change-Id: Ia5afdded2df7ec80b36072dec3c7fbbce5600647 | |
| Zuul | 6873b7d8a1 | Merge "Add quorum queues support for the service" | |
| Zuul | bf6aaf7ab0 | Merge "Enable multiple console proxies where required in deployments" | |
| Dmitriy Rabotyagov | da9793f18e | **Add quorum queues support for the service.** This change implements and enables by default quorum queue support for rabbitmq, and provides default variables to globally tune its behaviour. To ensure an upgrade path and the ability to switch back to HA queues, we change the vhost names by removing the leading `/`, as enabling quorum requires removing the exchange, which is tricky to do with running services. Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/875399 Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/873618 Change-Id: I792595dac8b651debcd364cd245145721575a516 | |
| Andrew Bonney | d0877c6fd3 | **Enable multiple console proxies where required in deployments.** When Nova is deployed with a mix of x86 and arm systems (for example), it may be necessary to deploy both the 'novnc' and 'serialconsole' proxy services on the same host in order to serve the mixed compute estate. This patch introduces a list which defines the required proxy console types. Change-Id: I93cece8babf35854e5a30938eeb9b25538fb37f6 | |
| Dmitriy Rabotyagov | 9b9bc21121 | **Fix linters and metadata.** With the update of ansible-lint to version >=6.0.0, a lot of new linters were added that are enabled by default. To comply with the linter rules we apply changes to the role. With that we also update the metadata to reflect the current state. Depends-On: https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/888223 Change-Id: I730ae569f199fc8542a5a61beb149f459465d7e2 | |
| Damian Dabrowski | c90a5c2b92 | **Apply always tag to nova_virt_detect.yml.** Running the nova playbook with a tag limit may lead to an error: The conditional check 'nova_virt_type != 'ironic'' failed. The error was: error while evaluating conditional (nova_virt_type != 'ironic'): 'nova_virt_type' is undefined. The error appears to be in '/etc/ansible/roles/os_nova/tasks/main.yml': line 289, column 3, but may be elsewhere in the file depending on the exact syntax problem. It can easily be fixed by applying the always tag to the tasks from nova_virt_detect.yml. Change-Id: I56aee80180804b8a3e3316cffc6fa8115513b8f1 | |
| Dmitriy Rabotyagov | efe64725e1 | **Add way to periodically trim Nova DB.** We add two services responsible for executing db purge and archive_deleted_rows. The services are deployed by default but left stopped/disabled. This allows deployers to enable or disable the feature by changing the value of nova_archive/purge_deleted. Otherwise, once the variables were set to true, setting them back to false would not stop the DB trimming, and that would need to be done manually. Change-Id: I9f110f663fae71f5f3c01c6d09e6d1302d517466 | |
| Zuul | 2925c1c29c | Merge "Delegate compute wait tasks to service_setup_host" | |
| Zuul | 5a839b7af3 | Merge "Use include instead of import for conditional tasks" | |
| Zuul | dd00e710d7 | Merge "Add TLS support to nova API backends" | |
| Damian Dabrowski | e02e56fc93 | **Add TLS support to nova API backends.** By overriding the variable `nova_backend_ssl: True`, HTTPS will be enabled, disabling HTTP support on the nova backend API. The ansible-role-pki is used to generate the required TLS certificates if this functionality is enabled. `nova_pki_console_certificates` are used to encrypt traffic between the console proxy and compute hosts. `nova_pki_certificates` are used to encrypt traffic between haproxy and its backends (including the console proxy). It would be complex to use nova_pki_console_certificates to encrypt traffic between haproxy and the console proxy, because they do not have a valid key_usage for that, and changing key_usage would require manually setting `pki_regen_cert` for existing environments. Certs securing traffic between haproxy and the console proxy are provided in the ExecStart entries, because otherwise they would have to be defined in nova.conf, which may be shared with nova-api (which sits behind uwsgi and should not use TLS). Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879085 Change-Id: Ibff3bf0b5eedc87c221bbb1b5976b12972fda608 | |
| Dmitriy Rabotyagov | 5d310c69fd | **Use include instead of import for conditional tasks.** When import is used, Ansible loads the imported role or tasks, which results in plenty of skipped tasks that also consume time. With includes, Ansible does not try to load the play, so time is not wasted on skipping things. Depends-On: https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/880344 Change-Id: I47c6623e166254802ed0b479b2353c5f2ceb5cfa | |
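The import-versus-include distinction above can be sketched side by side; the task file and condition here are illustrative, not taken from the role:

```yaml
# import_tasks is resolved statically: when the condition is false, every
# task inside the file is still evaluated and reported as "skipped".
- ansible.builtin.import_tasks: nova_console.yml  # illustrative file name
  when: nova_console_enabled | bool

# include_tasks is resolved at runtime: when the condition is false, the
# file is never loaded, so no per-task skips are produced at all.
- ansible.builtin.include_tasks: nova_console.yml
  when: nova_console_enabled | bool
```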
| Dmitriy Rabotyagov | ef4ca0c2b4 | **Delegate compute wait tasks to service_setup_host.** At the moment, we deploy an openrc file on conductors and delegate tasks to them. There is no good reason to do so, since we actively utilize service_setup_host for all interactions with the API. With that, we also replace `openstack` commands with the native compute_service_info module, which provides all the information we need. Change-Id: I016ba4c5dd211c5165a74a6011da7bb384c7a82a | |
| Dmitriy Rabotyagov | cb62372a31 | **Move online_data_migrations to post-setup.** According to the nova rolling upgrade process [1], online_data_migrations should run once all the services are running the latest version of the code and have been restarted. With that, we move the online migrations to after handlers are flushed, when all services have been restarted. At the same time, the nova-status upgrade check must run before services are restarted to the new version, as a service restart might lead to service breakage if the upgrade check fails [2]. It makes no sense to run the upgrade check once the upgrade is fully finished. [1] https://docs.openstack.org/nova/latest/admin/upgrades.html#rolling-upgrade-process [2] https://docs.openstack.org/nova/latest/cli/nova-status.html#upgrade Change-Id: Ic681f73a09bb0ac280c227f85c6e79b31fd3429a | |