Amend "Configure max number of volumes to attach" spec

Update the spec to reflect what was actually implemented. TL;DR: during
implementation, we found that the old limit of 26 in the libvirt driver
was a limit on the maximum number of disk devices allowed to attach to a
single instance, including the root disk (and any other disks). So
"volumes" was not really correct for representing what is being limited,
and the terminology was changed to "disk devices".

Related to blueprint conf-max-attach-volumes

Change-Id: I3152d0ed64709495ff7f13ff1d75ce62558a8731
1 changed file with 35 additions and 23 deletions
Proposed change
===============
When a user attempts to attach more than 26 disk devices with the libvirt
driver, the attach fails in the ``reserve_block_device_name`` method in
nova-compute, which is eventually called by the ``attach_volume`` method in
nova-api. The ``reserve_block_device_name`` method calls
``self.driver.get_device_name_for_instance`` to get the next available device
name for attaching the volume. If the driver has implemented the method, this
is where an attempt to go beyond the maximum allowed number of disk devices
to attach will fail. The libvirt driver fails after 26 disk devices have been
attached. Drivers that have not implemented ``get_device_name_for_instance``
appear to have no limit on the maximum number of disk devices. The default
implementation of ``get_device_name_for_instance`` is located in the
``nova.compute.utils`` module. Only the libvirt driver has provided its own
implementation of ``get_device_name_for_instance``.
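The 26-device ceiling comes from the single trailing letter in Linux-style
disk device names (``vda`` through ``vdz``). A minimal Python sketch of that
allocation scheme (hypothetical helper, not nova's actual code):

```python
import string


def next_device_name(used_names, prefix="vd"):
    """Return the next free device name (vda, vdb, ... vdz), or None.

    Illustrative sketch: with a single lowercase-letter suffix there
    are only 26 possible names, so allocation fails once all are in
    use -- and the root disk already occupies one of them.
    """
    for letter in string.ascii_lowercase:
        name = prefix + letter
        if name not in used_names:
            return name
    return None  # all 26 device names are taken
```

With the root disk on ``vda``, only 25 more disk devices can be named before
this returns ``None``, which is the failure mode the paragraph above describes.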
The ``reserve_block_device_name`` method is a synchronous RPC call (not cast).
This means we can have the configured allowed maximum set differently per
nova-compute and still fail fast in the API if the maximum has been exceeded
during an attach volume request.
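The call-versus-cast distinction is what makes fail-fast possible. The toy
sketch below (purely illustrative, not nova's oslo.messaging RPC layer) shows
why a synchronous call lets a compute-side exception reach the API caller,
while an asynchronous cast would swallow it:

```python
class FakeRpcClient:
    """Toy stand-in for an RPC client, to contrast call vs. cast."""

    def __init__(self, server):
        self.server = server

    def call(self, method, **kwargs):
        # Synchronous: the caller blocks for the result, and any
        # exception raised on the "remote" side propagates back.
        return getattr(self.server, method)(**kwargs)

    def cast(self, method, **kwargs):
        # Fire-and-forget: the caller learns nothing about failures.
        try:
            getattr(self.server, method)(**kwargs)
        except Exception:
            pass
```

Because ``reserve_block_device_name`` is a call, a limit exceeded on a
particular nova-compute surfaces in the API request immediately, even when
each nova-compute is configured with a different maximum.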
For a server create, rebuild, evacuate, unshelve, or live migrate request, if
the maximum has been exceeded, the server will go into the ``ERROR`` state and
the server fault message will indicate the failure reason.
Note that the limit in the libvirt driver is actually on the total number of
disk devices allowed to attach to a single instance including the root disk
and any other disks. It does not differentiate between volumes and other disks.
We propose to add a new configuration option
``[compute]max_disk_devices_to_attach`` IntOpt to use to configure the maximum
allowed disk devices to attach to a single instance per nova-compute. This way,
operators can set it appropriately depending on what virt driver they are
running and what their deployed environment is like. The default will be
unlimited (-1) to keep the current behavior for all drivers except the libvirt
driver.
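Assuming the semantics described above, the limit check reduces to a small
predicate (``can_attach`` is a hypothetical name, not a function in nova):

```python
def can_attach(attached_device_count, max_disk_devices_to_attach=-1):
    """Would attaching one more disk device stay within the limit?

    Sketch of the proposed option semantics: the default -1 means
    unlimited, preserving current behavior; any other value caps the
    total disk device count, root disk included.
    """
    if max_disk_devices_to_attach == -1:
        return True
    return attached_device_count < max_disk_devices_to_attach
```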
The configuration option will be enforced in the
``get_device_name_for_instance`` methods, using the count of
already attached disk devices. Upon failure, an exception will be propagated to
nova-api via the synchronous RPC call to nova-compute, and the user will
receive a 403 error (as opposed to the current 500 error).
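A sketch of that error path, with hypothetical exception and wrapper names;
the point is that the API layer catches the limit error coming back over the
synchronous RPC call and maps it to a 403 rather than letting it escape as a
500:

```python
class MaxDiskDevicesExceeded(Exception):
    """Raised when the configured maximum disk devices is reached."""


def attach_volume_api(reserve_fn):
    """API-layer sketch: call down to nova-compute, map errors to HTTP.

    ``reserve_fn`` stands in for the synchronous RPC to
    ``reserve_block_device_name`` on nova-compute.
    """
    try:
        device_name = reserve_fn()
        return 200, device_name
    except MaxDiskDevicesExceeded:
        # Translate to 403 Forbidden instead of an unhandled 500.
        return 403, None
```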
Alternatives
------------
Other ways we could solve this include: choosing a new hard-coded maximum only
for the libvirt driver or creating a new quota limit for "maximum disk devices
allowed to attach" (see the ML thread in `References`_).
Data model impact
-----------------

None
Other deployer impact
---------------------
Deployers will be able to set the ``[compute]max_disk_devices_to_attach``
configuration option to control how many disk devices are allowed to be
attached to a single instance per nova-compute in their deployment.
Developer impact
----------------
Work Items
----------
* Add a new configuration option ``[compute]max_disk_devices_to_attach``,
  IntOpt
* Modify (or remove) the libvirt driver's implementation of the
  ``get_device_name_for_instance`` method to accommodate more than 26 disk
  devices
* Add enforcement of ``[compute]max_disk_devices_to_attach`` to the
  ``get_device_name_for_instance`` methods
* Add handling of the raised exception in the API to translate to a 403 to the
  user, if the maximum number of allowed disk devices is exceeded
Dependencies
------------