6d32065246ee80d3a8daa4b298b769227a56f9d7
521 Commits
Author | SHA1 | Message
Dmitriy Rabotyagov | 6d32065246 | Update conditions for kernel statoverride

With the update of the Ansible version, having templated variables in conditionals is no longer allowed, which results in errors like: `Conditional is marked as unsafe, and cannot be evaluated`. Change-Id: I6e8e0ee1ffc2c154bac0f64f2e797281d7ba966f
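A minimal sketch of the kind of change this implies (the task and variable names below are illustrative, not taken from the role): newer Ansible rejects Jinja templating inside `when`, so the condition becomes a bare expression.

```yaml
# Illustrative only. Before (now fails with "Conditional is marked as unsafe"):
#   when: "{{ nova_kernel_statoverride_enabled }}"
# After: a bare variable expression, which Ansible evaluates safely.
- name: Set kernel statoverride
  ansible.builtin.command: dpkg-statoverride --update --add root root 0644 /boot/vmlinuz
  when: nova_kernel_statoverride_enabled | bool
```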
|
Andrew Bonney
|
ae20d2d9fd |
Add tag to enable targeting of post-install config elements only
<service>-config tags are quite broad and have a long execution time. Where you only need to modify a service's '.conf' file and similar it is useful to have a quicker method to do so. Change-Id: Idf0a0a7033d8f6c4d6efebff456ea3f19ea81185 |
||
Dmitriy Rabotyagov | 97c408e19d | Add variable to globally control notifications enablement

In order to globally enable notification reporting for all services, without the need to have Ceilometer deployed or a bunch of overrides for each service, we add the `oslomsg_notify_enabled` variable that controls whether notifications are enabled. The presence of Ceilometer is still respected and referenced by default. A potential use case is various billing panels that rely on notifications but do not require the presence of Ceilometer. Change-Id: Ib5d4f174be922f9b6f5ece35128a604fddb58e59
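A sketch of how this would be used (the variable name comes from the commit message; placing it in `user_variables.yml` is the usual OSA override mechanism and is assumed here):

```yaml
# user_variables.yml (sketch): enable oslo.messaging notifications for all
# services even when Ceilometer is not deployed, e.g. for a billing panel.
oslomsg_notify_enabled: True
```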
Dmitriy Rabotyagov | 82d439c3fb | Add service policies definition

In order to allow the definition of policies per service, we need to add variables to service roles that will be passed to openstack.osa.mq_setup. Currently this can be handled by leveraging group_vars and overriding `oslomsg_rpc_policies` as a whole, but it's not obvious and can be non-trivial for some groups which co-locate multiple services, or in the case of metal deployments. Change-Id: I6a4989df2cd53cc50faae120e96aa4480268f42d
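For illustration, a whole-list override of the kind the commit describes as non-trivial (the variable name is from the commit; the policy structure mirrors RabbitMQ policy definitions and is an assumption here):

```yaml
# group_vars sketch: override the RPC policies wholesale. Keys mirror
# `rabbitmqctl set_policy` arguments; the exact schema is illustrative.
oslomsg_rpc_policies:
  - name: "HA"
    pattern: '^(?!(amq\.)|(.*_fanout_)|(reply_)).*'
    priority: 0
    tags:
      ha-mode: all
```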
Zuul | 737da47464 | Merge "Include PKI role only once"
Zuul | 4943bab3fd | Merge "fix apparmor profile for non-standard nova home"
Dmitriy Rabotyagov | 466e7572bb | Include PKI role only once

This patch proposes moving the condition for when to install certificates from the role include statement into a combined "view" for the API and consoles. While adding computes to the same logic might be beneficial for CI and AIO metal deployments, it could have a negative effect on real deployments, as it would create a bunch of skipped tasks for computes, so we leave them separated. API and console services are usually placed on the same hosts, so it makes sense to distribute certs to them once, while keeping the possibility of different hosts in mind. Change-Id: I8e28a79a6e3a5be1fe54004ea1d2c3a3ccdc20bc
Dmitriy Rabotyagov | 51177a6574 | Enable deployers to force update cell mappings

Add the variable nova_cell_force_update to let deployers ensure that role execution will also update cell mappings whenever that is needed, for instance after a password rotation or an intentional change of the MySQL address. Change-Id: I5b99d58a5c4d27a363306361544c5d80759483fd
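Usage sketch (the variable name is from the commit; the `user_variables.yml` placement is assumed):

```yaml
# user_variables.yml (sketch): force the role to re-map cells on the next
# run, e.g. after rotating the nova DB password or moving the MySQL VIP.
nova_cell_force_update: True
```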
Dmitriy Rabotyagov | 3515638326 | Ensure TLS is enabled properly for cell0 mapping DB connection

Once we enabled the TLS requirement in [1], jobs started failing on the cell0 mapping, as it was actually different and not connecting to MariaDB through TLS when it was assumed it was. [1] https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/911009 Change-Id: I96fa921cfdb849f59b5abd8452061d4c5bd04a76
Aleksandr Chudinov | 7bec243c62 | fix apparmor profile for non-standard nova home

In cases where a non-standard path to nova instances is configured with the nova_system_home_folder variable, there may be problems with instance spawning due to libvirt's virt-aa-helper missing a permission in the apparmor profile; this commit resolves that. Change-Id: I3d37eb5a9635044570690370dfcbc060ff4d9e49
Dmitriy Rabotyagov | 9843c47e81 | Always distribute qemu config file

When Ceph is not being used as a backend for nova, the qemu.conf file is not distributed, so some settings, like nova_qemu_vnc_tls, do not have any effect. Closes-Bug: #2003749 Change-Id: I4bc68567cda57d73d030d9a5017cc411f7ee7732
Dmitriy Rabotyagov | 5300fcea9d | Run ceph_client when cinder uses Ceph

In use cases where only cinder is using Ceph, we currently do not execute the ceph_client role, which makes nodes fail to spawn instances from RBD volumes. A sample use case: Glance might be using Swift, and it might be desired to use local storage for Nova ephemeral drives, while cinder spawns volumes on Ceph. Currently this can be worked around by setting `nova_rbd_inuse: True` together with `nova_libvirt_images_rbd_pool: ''`, though this is counter-intuitive, and this patch aims to improve it. Change-Id: I412d1e9ccb51f0cd33a98333bfa1a01510867fbe
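The pre-patch workaround mentioned above, spelled out as overrides (both variable names come directly from the commit message):

```yaml
# Workaround prior to this patch: mark RBD as "in use" so the ceph_client
# role runs, while keeping Nova ephemeral storage local (empty RBD pool).
nova_rbd_inuse: True
nova_libvirt_images_rbd_pool: ''
```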
Zuul | 20e83153bb | Merge "Drop until-complete flag for db purge"
Damian Dabrowski | ab72a180e6 | Avoid failures when default libvirt network does not exist

This is a follow-up change to [1]. Depending on the operating system and environment configuration, the default libvirt network may not exist. Right now, the `Check for libvirt default network` task throws an error in this case, causing the nova playbook to fail. This change fixes that by instructing ansible not to throw an error if `virsh net-list` fails with "Network not found: no network with matching name", because it is acceptable not to have this network. [1] https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/899768 Change-Id: If692bc94f421bc84ad9e6d43f548b68196a9e751
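A sketch of the pattern (not the actual task from the role): treat a "Network not found" stderr as acceptable rather than a failure.

```yaml
# Illustrative task: only fail when virsh errors for a reason other than
# the default network simply not existing.
- name: Check for libvirt default network
  ansible.builtin.command: virsh net-info default
  register: _default_net
  changed_when: false
  failed_when:
    - _default_net.rc != 0
    - "'Network not found' not in _default_net.stderr"
```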
Damian Dabrowski | feb15af75b | Always disable libvirt default network

Currently, autostart for the libvirt default network is disabled only when this network is active during nova playbook execution. This is incorrect behavior, because in some cases this network may not be active from the beginning. Autostart should always be disabled to ensure that this network will not unexpectedly be marked as active in the future (during a package upgrade, host reboot, etc.). Closes-Bug: #2042369 Change-Id: I697234bda1601b534ce1b6ab186fa98f83179ee8
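A sketch of an unconditional autostart-disable task under that logic (illustrative, not the role's actual implementation; it reuses the "Network not found is fine" pattern):

```yaml
# Illustrative: disable autostart whether or not the network is active.
- name: Disable autostart for the libvirt default network
  ansible.builtin.command: virsh net-autostart default --disable
  register: _net_autostart
  changed_when: "'unmarked as autostarted' in _net_autostart.stdout"
  failed_when:
    - _net_autostart.rc != 0
    - "'Network not found' not in _net_autostart.stderr"
```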
Zuul | 5b7678c503 | Merge "Cleanup upgrade to ssh_keypairs step"
Dmitriy Rabotyagov | 51ce1d4923 | Drop until-complete flag for db purge

The flag --until-complete is not valid for the nova-manage db purge command; it works only for archive_deleted_rows [1]. Supposedly it was a copy/paste mistake that kept the flag in place. [1] https://docs.openstack.org/nova/latest/cli/nova-manage.html#db-archive-deleted-rows Change-Id: I7be8c41bd52b955d83c4452e67ef323abe00969e
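Per the nova-manage documentation, `--until-complete` belongs to `archive_deleted_rows`, while `purge` takes `--before <date>` or `--all`. As illustrative tasks:

```yaml
# Correct flag usage for the two nova-manage db subcommands (sketch):
- name: Archive deleted rows in batches until none remain
  ansible.builtin.command: nova-manage db archive_deleted_rows --until-complete

- name: Purge archived shadow-table rows (no --until-complete here)
  ansible.builtin.command: nova-manage db purge --all
```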
Dmitriy Rabotyagov | 4aa65eb606 | Fix logic of discovering hosts by service

For quite some time, we have tied usage of the --by-service flag for the nova-manage cell_v2 discover_hosts command to the nova_virt_type in use. However, we run db_post_setup tasks only once, delegated to the conductor host. With the latest changes to the logic, where this task is included from the playbook level, it makes even less sense, since the definition of nova_virt_type for a conductor is weird and wrong. Instead, we attempt to detect whether ironic is in use by checking the hostvars of all compute nodes, which include host_vars, group_vars, all sorts of extra variables, etc. Thus, ironic hosts should now be better discovered with the nova-manage command. Related-Bug: #2034583 Change-Id: I3deea859a4017ff96919290ba50cb375c0f960ea
Dmitriy Rabotyagov | 738ac83cf5 | Cleanup upgrade to ssh_keypairs step

We migrated to usage of the ssh_keypairs role a while ago, so we can remove the old migration clean-up task. Change-Id: Ie3cbeb4bd41d3137f2332f28dbc72c8028fb5b3a
Zuul | 32867052d7 | Merge "Run nova_db_post_setup from playbook directly"
Dmitriy Rabotyagov | b266f9cda4 | Stop generating ssh keypair for nova user

With the transition to SSH certificates for nova authorization, we no longer need to generate and keep an SSH keypair for the nova user. Change-Id: Iff105bafc177271cb59fb0662d4c139f56e64325
Dmitriy Rabotyagov | e4ffb047c0 | Run nova_db_post_setup from playbook directly

Due to some bugs, delegation of tasks from compute to conductor hosts does not work in real life. Because of that, the task import was moved to the playbook level, using a role import in combination with tasks_from. Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/897570 Change-Id: I777b1c90f57c805bc0a8593b5a5c7e63e43c4cd8
Dmitriy Rabotyagov | 08ccb5108a | Split lines to not exceed 160 characters limit

Change-Id: Ia5afdded2df7ec80b36072dec3c7fbbce5600647
Zuul | 6873b7d8a1 | Merge "Add quorum queues support for the service"
Zuul | bf6aaf7ab0 | Merge "Enable multiple console proxies where required in deployments"
Dmitriy Rabotyagov | da9793f18e | Add quorum queues support for the service

This change implements and enables by default quorum queue support for rabbitmq, as well as providing default variables to globally tune its behaviour. In order to ensure an upgrade path and the ability to switch back to HA queues, we change vhost names by removing the leading `/`, since enabling quorum requires removing the exchange, which is a tricky thing to do with running services. Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/875399 Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/873618 Change-Id: I792595dac8b651debcd364cd245145721575a516
Andrew Bonney | d0877c6fd3 | Enable multiple console proxies where required in deployments

When Nova is deployed with a mix of x86 and arm systems (for example), it may be necessary to deploy both 'novnc' and 'serialconsole' proxy services on the same host in order to service the mixed compute estate. This patch introduces a list which defines the required proxy console types. Change-Id: I93cece8babf35854e5a30938eeb9b25538fb37f6
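A sketch of declaring both proxy types on one host; the exact variable name is an assumption standing in for the list this patch introduces, and the two values come from the commit message:

```yaml
# Illustrative host_vars/group_vars override: run both console proxy
# services on the same host for a mixed x86/arm compute estate.
nova_console_proxy_types:
  - novnc
  - serialconsole
```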
Dmitriy Rabotyagov | 9b9bc21121 | Fix linters and metadata

With the update of ansible-lint to version >=6.0.0, a lot of new linters were added and enabled by default. In order to comply with the linter rules, we're applying changes to the role. With that we also update metadata to reflect the current state. Depends-On: https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/888223 Change-Id: I730ae569f199fc8542a5a61beb149f459465d7e2
Damian Dabrowski | c90a5c2b92 | Apply always tag to nova_virt_detect.yml

Running the nova playbook with a tag limit may lead to an error: "The conditional check 'nova_virt_type != 'ironic'' failed. The error was: error while evaluating conditional (nova_virt_type != 'ironic'): 'nova_virt_type' is undefined. The error appears to be in '/etc/ansible/roles/os_nova/tasks/main.yml': line 289, column 3, but may be elsewhere in the file depending on the exact syntax problem." It can easily be fixed by applying the always tag to tasks from nova_virt_detect.yml. Change-Id: I56aee80180804b8a3e3316cffc6fa8115513b8f1
Dmitriy Rabotyagov | efe64725e1 | Add way to periodically trim Nova DB

We're adding 2 services that are responsible for executing db purge and archive_deleted_rows. The services will be deployed by default, but left stopped/disabled. This way we allow deployers to enable/disable the feature by changing the value of nova_archive/purge_deleted. Otherwise, once the variables were set to true, setting them to false would not stop DB trimming, and that would need to be done manually. Change-Id: I9f110f663fae71f5f3c01c6d09e6d1302d517466
Zuul | 2925c1c29c | Merge "Delegate compute wait tasks to service_setup_host"
Zuul | 5a839b7af3 | Merge "Use include instead of import for conditional tasks"
Zuul | dd00e710d7 | Merge "Add TLS support to nova API backends"
Damian Dabrowski | e02e56fc93 | Add TLS support to nova API backends

By overriding the variable `nova_backend_ssl: True`, HTTPS will be enabled, disabling HTTP support on the nova backend api. The ansible-role-pki is used to generate the required TLS certificates if this functionality is enabled. `nova_pki_console_certificates` are used to encrypt traffic between the console proxy and compute hosts. `nova_pki_certificates` are used to encrypt traffic between haproxy and its backends (including the console proxy). It would be complex to use nova_pki_console_certificates to encrypt traffic between haproxy and the console proxy, because they don't have a valid key_usage for that, and changing key_usage would require manually setting `pki_regen_cert` for existing environments. Certs securing traffic between haproxy and the console proxy are provided in execstarts, because otherwise they would have to be defined in nova.conf, which may be shared with nova-api (which sits behind uwsgi and should not use TLS). Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879085 Change-Id: Ibff3bf0b5eedc87c221bbb1b5976b12972fda608
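Usage sketch (the variable name comes directly from the commit message; the `user_variables.yml` placement is assumed):

```yaml
# user_variables.yml: serve nova backends over HTTPS, with certificates
# generated by ansible-role-pki. HTTP on the backends is disabled.
nova_backend_ssl: True
```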
Dmitriy Rabotyagov | 5d310c69fd | Use include instead of import for conditional tasks

When import is used, ansible loads the imported role or tasks regardless, which results in plenty of skipped tasks that also consume time. With includes, ansible does not try to load the play, so time is not wasted on skipping things. Depends-On: https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/880344 Change-Id: I47c6623e166254802ed0b479b2353c5f2ceb5cfa
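The distinction, sketched with an illustrative task file name: `import_tasks` loads the file at parse time and skips each task individually, while `include_tasks` evaluates the condition once and skips the whole file in one step.

```yaml
# Static import: file is always parsed; when the condition is false every
# task inside is shown (and timed) as skipped.
- ansible.builtin.import_tasks: nova_console.yml
  when: "'nova_console' in group_names"

# Dynamic include: one conditional check; when false, nothing is loaded.
- ansible.builtin.include_tasks: nova_console.yml
  when: "'nova_console' in group_names"
```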
Dmitriy Rabotyagov | ef4ca0c2b4 | Delegate compute wait tasks to service_setup_host

At the moment, we deploy an openrc file on conductors and delegate tasks to them. There is no good reason to do so, since we're actively utilizing service_setup_host for all interactions with the API. With that, we also replace `openstack` commands with the native compute_service_info module, which provides all the information we need. Change-Id: I016ba4c5dd211c5165a74a6011da7bb384c7a82a
Dmitriy Rabotyagov | cb62372a31 | Move online_data_migrations to post-setup

According to the nova rolling upgrade process [1], online_data_migrations should run once all the services are running the latest version of the code and have been restarted. With that, we should move online migrations to after handlers are flushed, when all services have been restarted. At the same time, nova-status upgrade check must run before services are restarted to the new version, as a service restart might lead to service breakage if the upgrade check fails [2]. It makes no sense to run the upgrade check when the upgrade is fully finished. [1] https://docs.openstack.org/nova/latest/admin/upgrades.html#rolling-upgrade-process [2] https://docs.openstack.org/nova/latest/cli/nova-status.html#upgrade Change-Id: Ic681f73a09bb0ac280c227f85c6e79b31fd3429a
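The ordering described above, as an illustrative task sequence (both commands are standard nova CLI; the tasks themselves are a sketch, not the role's code):

```yaml
# 1. Before restarting services onto the new code: verify readiness.
- name: Check upgrade readiness
  ansible.builtin.command: nova-status upgrade check

# 2. ... flush handlers / restart nova services onto the new version ...

# 3. Only after everything runs the new code: run online migrations.
- name: Run online data migrations
  ansible.builtin.command: nova-manage db online_data_migrations
```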
Dmitriy Rabotyagov | 6dfcf9d4c8 | Remove calico driver reference

Calico driver support has been removed from OpenStack-Ansible starting with the Antelope release [1]. We clean up the nova role to drop calico support from it as well. [1] https://review.opendev.org/c/openstack/openstack-ansible/+/866119 Change-Id: Ie9c118b8bab265e5bf06b6ec05731cd673ee4d95
Zuul | c0fa21ca47 | Merge "Install openvswitch repo for RDO scenario"
Dmitriy Rabotyagov | 45877c692b | Install openvswitch repo for RDO scenario

RDO packages for nova depend on python3-openvswitch, which makes it required to install OVS on computes regardless of everything else. We also clean out pre-rhel9 variable files, as they're not needed anymore. Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/872896 Change-Id: I3e31254b7dd1c0ff3cb46153cefce6f6cadd52aa
Jimmy McCrory | 740a26e7ea | Use SSL database connections with nova-manage

When Galera SSL is enabled, use SSL-encrypted database connections with nova-manage commands where a connection string is provided. Change-Id: I7019b966b475c09a4e3218461941c1112ae28028
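For illustration, an SSL-enabled SQLAlchemy connection string of the kind passed to nova-manage (the variable name, host, and CA path are placeholders, not values from this change):

```yaml
# Sketch: append ssl_ca to the DB URL so nova-manage connects over TLS.
# All names below are illustrative placeholders.
nova_galera_connection_string: >-
  mysql+pymysql://nova:{{ nova_galera_password }}@galera.internal/nova?ssl_ca=/etc/ssl/certs/galera-ca.pem
```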
Jonathan Rosser | b0fcbce66f | Support configuration of resource providers with config files

Resource providers can be configured using the API or CLI, or they can also be configured on a per-compute-node basis using config files stored in /etc/nova/provider_config. This patch adds support for a user-defined list of provider config files to be created on the compute nodes. This can be specified in user_variables, or perhaps more usefully in group_vars/host_vars. A typical use case would be describing the resources made available as a result of a GPU or other hardware installed in a compute node. Change-Id: I13d70a1030b1173b1bc051f00323e6fb0781872b
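A sketch of such a provider config file, following the format nova documents for /etc/nova/provider_config (the file name, custom resource class, and trait are illustrative; `$COMPUTE_NODE` is nova's documented placeholder for the local node):

```yaml
# /etc/nova/provider_config/custom_gpu.yaml (illustrative)
meta:
  schema_version: '1.0'
providers:
  - identification:
      name: $COMPUTE_NODE
    inventories:
      additional:
        - CUSTOM_GPU_RESOURCE:
            total: 4
            reserved: 0
    traits:
      additional:
        - CUSTOM_GPU_TRAIT
```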
Dmitriy Rabotyagov | a8a338fb99 | Define local facts separately only for distro

We already define local facts via the python_venv_build role, so there is no need to do the same in a separate task for source installs. These facts are still needed for the distro path, though. Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/862924 Change-Id: I2f7f1281d19d61a7b4cbf14369aa3bb007debd0d Needed-By: https://review.opendev.org/c/openstack/openstack-ansible/+/866126
Zuul | abac462dc2 | Merge "Remove redundant vars line"
Erik Berg | cde5a003e1 | Remove redundant vars line

This line was introduced by I3046953f3e27157914dbe1fefd78c7eb2ddddcf6 to bring it in line with other OSA roles, but should already be covered by the distribution_major_version line above. Change-Id: I21b3972553acf38af205e17aa2d48ed19332bcb0
Zuul | 7f2334c785 | Merge "Support service tokens"
Dmitriy Rabotyagov | c36fdaa960 | Support service tokens

Implement support for service tokens. For that, we convert role_name to be a list, along with renaming the corresponding variable. Additionally, service_type is now defined for keystone_authtoken, which enables validating tokens with restricted access rules. Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/845690 Change-Id: I04b22722b32b6dc8b1dc95e18c3fe96ad17e51ac
Dmitriy Rabotyagov | 604085ffe6 | Remove mention of haproxy-endpoints role

The Keystone role was never migrated to usage of the haproxy-endpoints role, and the included task was used instead the whole time. To reduce complexity and have a unified approach, all mentions of the role and its handler are removed from the code. Change-Id: I3693ee3a9a756161324e3a79464f9650fb7a9f1a
Zuul | 49f0d150c0 | Merge "Do not adjust libvirtd sysconfig for centos-9"
Jonathan Rosser | f5800a48dc | Do not adjust libvirtd sysconfig for centos-9

Centos-9 no longer ships this file, so skip adjusting it [1]. The file should not exist on Centos-9 systems where OSA is used. If this file is created by a deployer, it will potentially interfere with the operation of libvirt and other configuration made by openstack-ansible. [1] https://bugzilla.redhat.com/show_bug.cgi?id=2042529 Change-Id: Ieeba7fb803e151a9e6d0adac3d1512aef3785e9a