1401 Commits
- **Fix ansible difference() filter use** (Vincent Legoll, `ca89c07cd4`)
  The `difference()` filter inputs a list, takes another list as a parameter, computes the set difference between the two, and returns the resulting (unordered) list. This is documented here: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/difference_filter.html This filter was changed in:
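The filter's set semantics can be illustrated with a minimal debug task (a hypothetical playbook fragment; the list values are illustrative):

```yaml
# Hypothetical task illustrating difference() semantics: keep the
# elements of the input list that are absent from the argument list.
- name: Show the set difference of two lists
  ansible.builtin.debug:
    msg: "{{ [1, 2, 3, 4] | difference([2, 4]) }}"  # [1, 3], order not guaranteed
```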
- **Auto-fix usage of modules via FQCN** (Dmitriy Rabotyagov, `11ff642fe6`)
  Since ansible-core 2.10 it is recommended to use modules via FQCN. In order to align with the recommendation, we perform the migration by applying the suggestions made by `ansible-lint --fix=fqcn`. Change-Id: I335f0f1bcdb5e5564ce3f82f44eec7d8c6ab4e0e
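A before/after sketch of the FQCN rewrite (the module chosen here is illustrative):

```yaml
# Before: short module name
- name: Install qemu packages
  apt:
    name: qemu-system

# After `ansible-lint --fix=fqcn`: fully-qualified collection name
- name: Install qemu packages
  ansible.builtin.apt:
    name: qemu-system
```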
- **Auto-fix yaml rules** (Dmitriy Rabotyagov, `aa1503d8ce`)
  In order to reduce divergence from ansible-lint rules, we apply auto-fixing of violations. In the current patch we replace all kinds of truthy variables with `true` or `false` values to align with recommendations, along with alignment of the quotes used. Change-Id: Ie1737a7f88d783e39492c704bb6805c89a199553
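The truthy fix boils down to normalising YAML booleans, for example:

```yaml
# Before: mixed truthy spellings flagged by ansible-lint's yaml[truthy] rule
become: yes
enabled: True

# After: canonical lowercase booleans
become: true
enabled: true
```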
- **Remove support for amqp1** (Jonathan Rosser, `5cdbe69b50`)
  Support is removed in oslo.messaging so we remove support in openstack-ansible roles. Change-Id: I13f77bb8b63b3cc3d198dcbf918a6708f7d9d80e
- **Change ordering of /etc/ operations to improve upgrades** (Andrew Bonney, `61be9e722d`)
  This change matches an earlier modification to os_neutron. Currently we symlink /etc/&lt;service&gt; to an empty directory at the pre stage, and fill it with config only during post_install. This means that policies and rootwrap filters do not work properly until playbook execution finishes. Additionally, we replace the sudoers file with the new path in it, which makes current operations impossible for the service, since rootwrap cannot gain sudo privileges. With this change we move the symlinking and rootwrap steps to handlers, which means that we replace configs while the service is stopped. During post_install we place all of the configs inside the venv, which is versioned at the moment. This way we minimise downtime of the service while performing upgrades. Closes-Bug: #2056180 Change-Id: I9c8212408c21e09895ee5805011aecb40b689a13
- **Merge "Allow to apply custom configuration to Nova SSH config"** (Zuul, `d106a515eb`)
- **Allow to apply custom configuration to Nova SSH config** (Dmitriy Rabotyagov, `5884318116`)
  In case compute nodes use a non-standard SSH port or some other hacky connection between each other, deployers might need to supply extra configuration inside the SSH config. The community.general.ssh_config module was not used, as it requires the extra `paramiko` module to be installed on each destination host. Change-Id: Ic79aa391e729adf61f5653dd3cf72fee1708e2f5
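Such an override might look like the following sketch, assuming the role exposes a dict-style variable for extra ssh_config content (the variable name and shape are hypothetical, not taken from the role):

```yaml
# Hypothetical override; the actual variable name in the role may differ.
nova_ssh_config_overrides:
  Port: 2222
  StrictHostKeyChecking: "no"
```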
- **Ensure that first/last host detection is deterministic** (Dmitriy Rabotyagov, `3d385e9d3f`)
  With ansible-core 2.16 a breaking change landed [1] in some filters, making their results be returned in arbitrary order. We were relying on them to always return exactly the same ordered lists, so we need to ensure that we still have deterministic behaviour where this is important. [1] https://github.com/ansible/ansible/issues/82554 Change-Id: If26ec122b8defaa1dc1a44f8d6cb2510982cfdf7
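Since set-theoretic filters such as `intersect` no longer guarantee order, an explicit `sort` restores determinism (a minimal sketch; the group name and fact name are illustrative):

```yaml
# Hypothetical fact: pick a stable "first" host even though
# intersect() may return its result in arbitrary order.
- name: Record the first conductor host deterministically
  ansible.builtin.set_fact:
    _first_conductor: "{{ (groups['nova_conductor'] | intersect(ansible_play_hosts)) | sort | first }}"
```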
- **Install architecture specific efi firmware for qemu** (Jonathan Rosser, `3719d5bf8b`)
  The qemu-efi package does not exist on Ubuntu Noble, so instead install the specific package for the host architecture. Change-Id: Id91cafc9c2f234bd5f18017a99f757f2bd751b35
- **Merge "Allow to supply multiline overrides to vendor_data"** (Zuul, `0d90186970`)
- **Disable heartbeat_in_pthread by default** (Dmitriy Rabotyagov, `d40f5a4725`)
  The default value for heartbeat_in_pthread has been reverted to False in oslo.messaging [1] and backported back to Yoga. At the moment this setting brings intermittent issues during live migrations of instances and some other operations, so it makes sense to align it with the default value. [1] https://review.opendev.org/c/openstack/oslo.messaging/+/852251 Change-Id: I5601726095ff19620de2d87220efad191cf7cb6d
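A deployer who wants to pin this behaviour explicitly could use the role's standard overrides mechanism (a sketch; it assumes `nova_nova_conf_overrides` is merged into nova.conf as in other os_nova settings):

```yaml
# Explicitly pin the oslo.messaging heartbeat behaviour in nova.conf
nova_nova_conf_overrides:
  oslo_messaging_rabbit:
    heartbeat_in_pthread: false
```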
- **Allow to supply multiline overrides to vendor_data** (Dmitriy Rabotyagov, `1b6740f3f8`)
  According to the documentation, it is expected to have multiline data inside vendor_data.json [1]. [1] https://cloudinit.readthedocs.io/en/latest/reference/datasources/openstack.html#vendor-data Depends-On: https://review.opendev.org/c/openstack/ansible-config_template/+/924217 Closes-Bug: #2073171 Change-Id: Ifc1239e4ef768e94c44d8d07df7a0b93c73638f9
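Multiline values are natural to express with a YAML block scalar; a hypothetical override might look like this (the variable name and keys are illustrative, not taken from the role):

```yaml
# Hypothetical vendor_data override carrying a multiline cloud-init payload
nova_vendordata_overrides:
  cloud-init: |
    #cloud-config
    runcmd:
      - echo "configured via vendor_data.json"
```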
- **Update conditions for kernel statoverride** (Dmitriy Rabotyagov, `6d32065246`)
  With the updated ansible version, having such variables in conditions is not allowed anymore, which results in an error like: `Conditional is marked as unsafe, and cannot be evaluated`. Change-Id: I6e8e0ee1ffc2c154bac0f64f2e797281d7ba966f
- **Merge "reno: Update master for unmaintained/zed"** (Zuul, `4689ed7ebf`)
- **Define unique hostname for QManager** (Dmitriy Rabotyagov, `85bbd5f2c3`)
  Due to a shortcoming of the QManager implementation [1], in case of uWSGI usage on metal hosts, the flow ends up with the same hostname/processname set, making services fight over the same file under SHM. In order to avoid this, we prepend the hostname with a service_name. We cannot change processname instead, since that would lead to a fight between different processes of the same service. [1] https://bugs.launchpad.net/oslo.messaging/+bug/2065922 Change-Id: Ie8c68cad4a89e5fcc43dad53d895d093cb3fe671
- **Add tag to enable targeting of post-install config elements only** (Andrew Bonney, `ae20d2d9fd`)
  &lt;service&gt;-config tags are quite broad and have a long execution time. Where you only need to modify a service's '.conf' file and similar, it is useful to have a quicker method to do so. Change-Id: Idf0a0a7033d8f6c4d6efebff456ea3f19ea81185
- **Implement variables to address oslo.messaging improvements** (Dmitriy Rabotyagov, `6a592e88d0`)
  During the last release cycle oslo.messaging landed a series [1] of extremely useful changes that implement modern messaging techniques for RabbitMQ quorum queues. Since these changes are breaking and require queues to be re-created, it makes total sense to align them with the migration to quorum queues by default. Change-Id: Ia5069c9976d07ee3949e637d8eb76a06b380cdec
- **reno: Update master for unmaintained/zed** (`bb958e66c6`)
  Update the zed release notes configuration to build from unmaintained/zed. Change-Id: Ic2423331f637f6054cc9c138aa6ca48ab3c08d61
- **Add variable to globally control notifications enablement** (Dmitriy Rabotyagov, `97c408e19d`)
  In order to be able to globally enable notification reporting for all services, without a need to have ceilometer deployed or a bunch of overrides for each service, we add the `oslomsg_notify_enabled` variable that aims to control whether notifications are enabled. The presence of ceilometer is still respected by default and still referenced. Potential use cases are various billing panels that do rely on notifications but do not require the presence of Ceilometer. Change-Id: Ib5d4f174be922f9b6f5ece35128a604fddb58e59
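For example, a billing pipeline without Ceilometer could flip the new switch globally (a sketch for user_variables.yml, using the variable named in the commit):

```yaml
# Enable oslo.messaging notifications for all services,
# even though ceilometer is not deployed.
oslomsg_notify_enabled: true
```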
- **Add service policies definition** (Dmitriy Rabotyagov, `82d439c3fb`)
  In order to allow definition of policies per service, we need to add variables to service roles that will be passed to openstack.osa.mq_setup. Currently this can be handled by leveraging group_vars and overriding `oslomsg_rpc_policies` as a whole, but it's not obvious and can be non-trivial for some groups which are co-locating multiple services, or in case of metal deployments. Change-Id: I6a4989df2cd53cc50faae120e96aa4480268f42d
- **Merge "Include PKI role only once"** (Zuul, `737da47464`)
- **Merge "fix apparmor profile for non-standard nova home"** (Zuul, `4943bab3fd`)
- **Include PKI role only once** (Dmitriy Rabotyagov, `466e7572bb`)
  This patch proposes to move the condition on when to install certificates from the role include statement to a combined "view" for API and Consoles. While adding computes to the same logic might be beneficial for CI and AIO metal deployments, it potentially might have a negative effect for real deployments, as it would create a bunch of skipped tasks for computes, so we leave them separated. API and Console are usually placed on the same hosts, so it makes sense to distribute certs towards them once, while keeping the possibility of different hosts in mind. Change-Id: I8e28a79a6e3a5be1fe54004ea1d2c3a3ccdc20bc
- **Merge "Enable deployers to force update cell mappings"** (Zuul, `3c62a72725`)
- **Enable deployers to force update cell mappings** (Dmitriy Rabotyagov, `51177a6574`)
  Add the variable nova_cell_force_update to enable deployers to ensure that role execution will also update cell mappings whenever that is needed, for instance after a password rotation or with an intention to update the MySQL address. Change-Id: I5b99d58a5c4d27a363306361544c5d80759483fd
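For a password-rotation run this could be toggled in user_variables.yml (a sketch; the variable comes straight from the commit):

```yaml
# Force nova-manage cell_v2 mapping updates on the next role run,
# e.g. after rotating the database password.
nova_cell_force_update: true
```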
- **Ensure PKI role is run idempotently for AIO metal scenario** (Dmitriy Rabotyagov, `ea39d38321`)
  Due to a clash in the resulting certificate names, they were re-generated on each playbook run. In order to sort that out we need to rename the certificates. As `nova_backend_ssl` was implemented most recently and is not that widely adopted, we change the name for it. This will cause all backend certificates for the API to be re-generated. Change-Id: I4bca3bb2733fe25dad71345f84d9030c535c901b
- **Ensure TLS is enabled properly for cell0 mapping DB connection** (Dmitriy Rabotyagov, `3515638326`)
  Once we enabled the TLS requirement in [1], jobs started failing on the cell0 mapping, as it was actually different and not connecting to MariaDB through TLS when it was assumed it was. [1] https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/911009 Change-Id: I96fa921cfdb849f59b5abd8452061d4c5bd04a76
- **Ensure nova_device_spec is templated as JSON string** (Jimmy McCrory, `501cf14342`)
  When the nova_device_spec variable is provided as either a string or a mapping, ensure that it's templated as a JSON string. Also handle either strings or mappings within nova_device_spec if it's provided as a list. Closes-Bug: 2057961 Change-Id: I7041a19547af580408ff704578cb8f12d37da1ae
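The accepted input shapes can be sketched as follows (the PCI IDs are illustrative; per the commit, mapping entries are rendered into nova.conf as JSON strings):

```yaml
# As a single mapping ...
nova_device_spec:
  vendor_id: "10de"
  product_id: "1db4"

# ... or as a list mixing pre-formatted strings and mappings
nova_device_spec:
  - '{"vendor_id": "10de", "product_id": "1db4"}'
  - address: "0000:04:00.0"
```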
- **fix apparmor profile for non-standard nova home** (Aleksandr Chudinov, `7bec243c62`)
  In cases when a non-standard path to nova instances is configured with the nova_system_home_folder variable, there may be problems with instance spawning due to libvirt's virt-aa-helper missing a permission in the apparmor profile; this commit resolves this. Change-Id: I3d37eb5a9635044570690370dfcbc060ff4d9e49
- **Merge "Fix nova device_spec to support multiple values"** (Zuul, `bfa8e12fcc`)
- **Evaluate my_ip address once** (Dmitriy Rabotyagov, `b78e8a68ea`)
  Instead of evaluating the same my_ip condition in multiple places across the role, this patch suggests doing this once in vars and using the resulting variable afterwards. This not only reduces the number of evaluations made throughout the role runtime, but also solves possible corner cases where some syntax may go off. Closes-Bug: #2052884 Change-Id: I454b53713ecacf844ac14f77b6d1e1adc1322c0e
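The pattern is to hoist the conditional into a single role var and reference it everywhere else (a minimal sketch; the variable name and the fallback expression are hypothetical):

```yaml
# Hypothetical vars entry: evaluate the address expression once ...
_nova_my_ip: "{{ nova_management_address | default(ansible_facts['default_ipv4']['address']) }}"

# ... then reuse the result instead of repeating the expression
nova_nova_conf_overrides:
  DEFAULT:
    my_ip: "{{ _nova_my_ip }}"
```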
- **Always distribute qemu config file** (Dmitriy Rabotyagov, `9843c47e81`)
  In the case when ceph is not being used as a backend for nova, the qemu.conf file is not distributed, thus some settings, like nova_qemu_vnc_tls, do not have any effect. Closes-Bug: #2003749 Change-Id: I4bc68567cda57d73d030d9a5017cc411f7ee7732
- **Fix nova device_spec to support multiple values** (Andrew Bonney, `c7a976c584`)
  It appears there was a change to remove the list option when moving from pci_passthrough_whitelist. Instead device_spec can be specified multiple times in the file. This patch aims to resolve this whilst maintaining backwards compatibility. Change-Id: I12b38e45d7b41fbf4786d3320e511eb9127fe216
- **Run ceph_client when cinder uses Ceph** (Dmitriy Rabotyagov, `5300fcea9d`)
  In use cases where only cinder is using ceph, we currently do not execute the ceph_client role, which makes nodes fail to spawn instances from RBD volumes. A sample use case: Glance might be using Swift, and it might be desired to use local storage for Nova ephemeral drives, while cinder spawns volumes on Ceph. Currently this can be worked around by setting `nova_rbd_inuse: True` but at the same time `nova_libvirt_images_rbd_pool: ''`, though this is counter-intuitive and this patch aims to improve it. Change-Id: I412d1e9ccb51f0cd33a98333bfa1a01510867fbe
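The counter-intuitive workaround mentioned in the message looks like this in user_variables.yml (taken from the commit text; obsolete once this patch lands):

```yaml
# Pre-patch workaround: force the ceph_client role to run for nova
# while keeping local storage for ephemeral disks.
nova_rbd_inuse: true
nova_libvirt_images_rbd_pool: ''
```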
- **Improve Blazar integration with Nova** (Dmitriy Rabotyagov, `5a533aae23`)
  As of today we do not have any means of Blazar integration with Nova, while we have provided roles for Blazar installation for a while now. This patch aims to bring in more native integration and remove the necessity of overrides for such a deployment. Related-Bug: #2048048 Co-Authored-By: Alexey Rusetsky &lt;fenuks@fenuks.ru&gt; Change-Id: Ica50a5504de1b1604f72123751cbb3f45c85ab46
- **Merge "Drop until-complete flag for db purge"** (Zuul, `20e83153bb`)
- **Avoid failures when default libvirt network does not exist** (Damian Dabrowski, `ab72a180e6`)
  This is a follow-up change to [1]. Depending on the operating system and environment configuration, the default libvirt network may not exist. Right now, the `Check for libvirt default network` task throws an error in this case, causing the nova playbook to fail. This change fixes that by instructing ansible not to throw an error if `virsh net-list` fails with "Network not found: no network with matching name", because it is acceptable to not have this network. [1] https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/899768 Change-Id: If692bc94f421bc84ad9e6d43f548b68196a9e751
- **Always disable libvirt default network** (Damian Dabrowski, `feb15af75b`)
  Currently, autostart for the libvirt default network is disabled only when this network is active during nova playbook execution. This is incorrect behavior, because in some cases this network may not be active from the beginning. Autostart should always be disabled to ensure that this network will not be unexpectedly marked as active in the future (during a package upgrade, host reboot, etc.). Closes-Bug: #2042369 Change-Id: I697234bda1601b534ce1b6ab186fa98f83179ee8
- **Merge "Add nova_libvirt_live_migration_inbound_addr to compute SAN"** (Zuul, `f372c88a09`)
- **Merge "Cleanup upgrade to ssh_keypairs step"** (Zuul, `5b7678c503`)
- **Drop until-complete flag for db purge** (Dmitriy Rabotyagov, `51ce1d4923`)
  The --until-complete flag is not valid for the nova-manage db purge command; it works only for archive_deleted_rows [1]. Supposedly it was a copy/paste mistake to keep the flag in place. [1] https://docs.openstack.org/nova/latest/cli/nova-manage.html#db-archive-deleted-rows Change-Id: I7be8c41bd52b955d83c4452e67ef323abe00969e
- **Use internal endpoint for barbican API** (Stuart Grace, `7f431ebcda`)
  Nova defaults to using the public endpoint for the Barbican API, which would require internet access from the compute node, so change this to use the internal API endpoint. Change-Id: Iaa14a9bf80d2e02197e74d67e812afc518fe1b65
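In nova.conf terms this amounts to selecting the internal endpoint in the `[barbican]` section, which a deployer could also express as an override (a sketch; it assumes the upstream `barbican_endpoint_type` option and the role's `nova_nova_conf_overrides` mechanism):

```yaml
# Point nova-compute at the internal Barbican endpoint
nova_nova_conf_overrides:
  barbican:
    barbican_endpoint_type: internal
```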
- **Fix logic of discovering hosts by service** (Dmitriy Rabotyagov, `4aa65eb606`)
  For quite some time, we have tied usage of the --by-service flag for the nova-manage cell_v2 discover_hosts command to the nova_virt_type in use. However, we run db_post_setup tasks only once, delegated to the conductor host. With the latest changes to the logic, where this task is included from the playbook level, it makes even less sense, since the definition of nova_virt_type for the conductor is weird and wrong. Instead, we attempt to detect whether ironic is in use by checking the hostvars of all compute nodes; this covers host_vars, group_vars, all sorts of extra variables, etc. Thus, ironic hosts should be better discovered now with the nova-manage command. Related-Bug: #2034583 Change-Id: I3deea859a4017ff96919290ba50cb375c0f960ea
- **Cleanup upgrade to ssh_keypairs step** (Dmitriy Rabotyagov, `738ac83cf5`)
  We migrated to usage of the ssh_keypairs role a while ago, so we can remove the old migration clean-up task. Change-Id: Ie3cbeb4bd41d3137f2332f28dbc72c8028fb5b3a
- **Add nova_libvirt_live_migration_inbound_addr to compute SAN** (Dmitriy Rabotyagov, `155323fe68`)
  Some deployments might want to perform live migrations over dedicated networks, like a fast storage network, while keeping management over the default mgmt network. The current default behaviour prevents such a use case, since nova_libvirt_live_migration_inbound_addr is not added to the certificate generated for libvirtd, and thus live migration will fail. Also, to let users override the default behaviour more nicely and to reduce code duplication, a new variable ``nova_pki_compute_san`` was introduced that handles the SAN definition for compute nodes. Change-Id: I22cc1a20190f0573b0350369a6cea5310ab0f0a7
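A deployer overriding the new SAN variable might write something like the following (a sketch; the exact default SAN entries and expected format in the role may differ):

```yaml
# Include both the management name and the dedicated migration address
# in the libvirtd certificate's subjectAltName.
nova_pki_compute_san: >-
  DNS:{{ ansible_facts['hostname'] }},IP:{{ nova_libvirt_live_migration_inbound_addr }}
```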
- **Merge "Run nova_db_post_setup from playbook directly"** (Zuul, `32867052d7`)
- **Stop generating ssh keypair for nova user** (Dmitriy Rabotyagov, `b266f9cda4`)
  With the transition to ssh-certificates for nova authorization, we no longer need to generate and keep SSH certificates for the nova user. Change-Id: Iff105bafc177271cb59fb0662d4c139f56e64325
- **Run nova_db_post_setup from playbook directly** (Dmitriy Rabotyagov, `e4ffb047c0`)
  Due to some bugs, delegation of tasks from compute to conductor hosts does not work in real life. Because of that, the task import was moved to the playbook level using a role import in combination with tasks_from. Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/897570 Change-Id: I777b1c90f57c805bc0a8593b5a5c7e63e43c4cd8
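The playbook-level pattern described here looks roughly like the following (a sketch; the play target and the tasks file name are assumptions, not taken from the actual change):

```yaml
# Hypothetical play: run the role's DB post-setup tasks once,
# directly on the first conductor host, instead of delegating.
- hosts: nova_conductor[0]
  tasks:
    - name: Run nova DB post-setup
      ansible.builtin.import_role:
        name: os_nova
        tasks_from: nova_db_post_setup.yml
```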
- **Add barbican_service_user section** (Dmitriy Rabotyagov, `6fd5535e57`)
  Defining barbican_service_user is required for successful attachment of encrypted volumes to VMs. Without it in place, nova-compute fails with not being able to get a service_token. Change-Id: I8ae3e263185b1cd8036a4fde12d9c950f2ce8b98
- **Fix example playbook linters** (Dmitriy Rabotyagov, `d82a9d424e`)
  Change-Id: I0d44b87c2ac31827eeb72c1db3d48e0ca571633a