Ceilometer Configuration Options
Ceilometer Sample Configuration File
Configure Ceilometer by editing /etc/ceilometer/ceilometer.conf.
No config file is shipped with the source code; it is created during installation. If no configuration file was installed, one can easily be created by running:
oslo-config-generator \
    --config-file=/etc/ceilometer/ceilometer-config-generator.conf \
    --output-file=/etc/ceilometer/ceilometer.conf
The following is a sample Ceilometer configuration for adaptation and use. It is auto-generated from Ceilometer when this documentation is built, and can also be viewed in file form.
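In practice, most deployments uncomment only a handful of these options. As an illustration only (the host names, credentials, and memcached address below are placeholders chosen for this sketch, not defaults or recommendations), a minimal ceilometer.conf might look like:

```ini
[DEFAULT]
# Messaging backend for samples and events; credentials are placeholders.
transport_url = rabbit://ceilometer:RABBIT_PASS@controller:5672/

[notification]
# Number of notification workers; two is an illustrative choice.
workers = 2

[cache]
# Enable a memcached-backed cache (recommended when
# identity_name_discovery is used); the server address is a placeholder.
enabled = true
backend = dogpile.cache.memcached
memcache_servers = controller:11211
```

All of the option names above appear in the sample configuration that follows; see the inline comments there for each option's full semantics and defaults.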
[DEFAULT]

#
# From ceilometer
#

# Polling namespace(s) to be used while resource polling (list value)
#polling_namespaces = compute,central

# DEPRECATED: Inspector to use for inspecting the hypervisor layer. (string
# value)
# Possible values:
# libvirt - <No description provided>
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: libvirt is the only supported hypervisor
#hypervisor_inspector = libvirt

# Libvirt domain type. (string value)
# Possible values:
# kvm - <No description provided>
# lxc - <No description provided>
# qemu - <No description provided>
# parallels - <No description provided>
#libvirt_type = kvm

# Override the default libvirt URI (which is dependent on libvirt_type).
# (string value)
#libvirt_uri =

# Swift reseller prefix. Must be on par with reseller_prefix in proxy-
# server.conf. (string value)
#reseller_prefix = AUTH_

# Configuration file for pipeline definition. (string value)
#pipeline_cfg_file = pipeline.yaml

# Configuration file for event pipeline definition. (string value)
#event_pipeline_cfg_file = event_pipeline.yaml

# Configuration file for polling definition. (string value)
#cfg_file = polling.yaml

# Path to directory where socket file for polling heartbeat will be created.
# (string value)
#heartbeat_socket_dir = <None>

# Work-load partitioning group prefix. Use only if you want to run multiple
# polling agents with different config files. For each sub-group of the agent
# pool with the same partitioning_group_prefix a disjoint subset of pollsters
# should be loaded. (string value)
#partitioning_group_prefix = <None>

# Batch size of samples to send to the notification agent. Set to 0 to
# disable. When the prometheus exporter feature is used, this should be
# larger than the maximum number of samples per metric. (integer value)
#batch_size = 50

# List of directories with YAML files used to create pollsters.
# (multi valued)
#pollsters_definitions_dirs = /etc/ceilometer/pollsters.d

# Identify project and user names from polled samples. By default, collecting
# these values is disabled due to the fact that it could overwhelm the
# keystone service with lots of continuous requests depending upon the number
# of projects, users and samples polled from the environment. While using
# this feature, it is recommended that ceilometer be configured with a
# caching backend to reduce the number of calls made to keystone. (boolean
# value)
# Deprecated group/name - [DEFAULT]/tenant_name_discovery
#identity_name_discovery = false

# Whether the polling service should be sending notifications after polling
# cycles. (boolean value)
#enable_notifications = true

# Allow this ceilometer polling instance to expose directly the retrieved
# metrics in Prometheus format. (boolean value)
#enable_prometheus_exporter = false

# A list of ipaddr:port combinations on which the exported metrics will be
# exposed. (list value)
#prometheus_listen_addresses = 127.0.0.1:9101

# Whether the polling service should ignore disabled projects or not.
# (boolean value)
#ignore_disabled_projects = false

# Whether it will expose tls metrics or not (boolean value)
#prometheus_tls_enable = false

# The certificate file to allow this ceilometer to expose tls scrape
# endpoints (string value)
#prometheus_tls_certfile = <None>

# The private key to allow this ceilometer to expose tls scrape endpoints
# (string value)
#prometheus_tls_keyfile = <None>

# The number of threads used to process the pollsters. The value one (1)
# means that the processing is done in a serial fashion (not ordered!). The
# value zero (0) means that we will use as many threads as the number of
# pollsters configured in the polling task. Any other positive integer can be
# used to fix an upper bound limit to the number of threads used for
# processing pollsters in parallel. One must bear in mind that using more
# than one thread might not take full advantage of the discovery cache and
# pollsters cache processes; it is possible though to improve/use pollsters
# that synchronize themselves in the cache objects. (integer value)
# Minimum value: 0
#threads_to_process_pollsters = 1

# Source for samples emitted on this instance. (string value)
#sample_source = openstack

# List of metadata prefixes reserved for metering use. (list value)
#reserved_metadata_namespace = metering.

# Limit on length of reserved metadata values. (integer value)
#reserved_metadata_length = 256

# List of metadata keys reserved for metering use. These keys are additional
# to the ones included in the namespace. (list value)
#reserved_metadata_keys =

# Path to the rootwrap configuration file to use for running commands as root
# (string value)
#rootwrap_config = /etc/ceilometer/rootwrap.conf

# Hostname, FQDN or IP address of this host. Must be valid within AMQP key.
# (host address value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#host = <your_hostname>

# DEPRECATED: Timeout seconds for HTTP requests. Set it to None to disable
# timeout. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option has no effect
#http_timeout = 600

# Maximum number of parallel requests for services to handle at the same
# time. (integer value)
# Minimum value: 1
#max_parallel_requests = 64

#
# From cotyledon
#

# Enables or disables logging values of all registered options when starting
# a service (at DEBUG level). (boolean value)
# Note: This option can be changed without restarting.
#log_options = true

# Specify a timeout after which a gracefully shutdown server will exit. Zero
# value means endless wait. (integer value)
# Note: This option can be changed without restarting.
#graceful_shutdown_timeout = 60

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the
# default INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation. Note that
# when logging configuration files are used then all logging configuration is
# set in the configuration file and other logging configuration options are
# ignored (for example, log-date-format). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is
# set, logging will go to stderr as defined by use_stderr. This option is
# ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# DEPRECATED: Uses logging handler designed to watch file system. When log
# file is moved or removed this handler will open a new log file with
# specified path instantaneously. It makes sense only if log_file option is
# specified and Linux platform is used. This option is ignored if
# log_config_append is set. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This function is known to have been broken for a long time, and
# depends on an unmaintained library
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Enable journald for logging. If running in a systemd environment you may
# wish to enable journal support. Doing so will use the journal native
# protocol which includes structured metadata in addition to log messages.
# This option is ignored if log_config_append is set. (boolean value)
#use_journal = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Use JSON formatting for logging. This option is ignored if
# log_config_append is set. (boolean value)
#use_json = false

# Log output to standard error. This option is ignored if log_config_append
# is set. (boolean value)
#use_stderr = false

# (Optional) Set the 'color' key according to log levels. This option takes
# effect only when logging to stderr or stdout is used. This option is
# ignored if log_config_append is set. (boolean value)
#log_color = false

# The amount of time before the log files are rotated. This option is ignored
# unless log_rotation_type is set to "interval". (integer value)
#log_rotate_interval = 1

# Rotation interval type. The time of the last file change (or the time when
# the service was started) is used when scheduling the next rotation. (string
# value)
# Possible values:
# Seconds - <No description provided>
# Minutes - <No description provided>
# Hours - <No description provided>
# Days - <No description provided>
# Weekday - <No description provided>
# Midnight - <No description provided>
#log_rotate_interval_type = days

# Maximum number of rotated log files. (integer value)
#max_logfile_count = 30

# Log file maximum size in MB. This option is ignored if "log_rotation_type"
# is not set to "size". (integer value)
#max_logfile_size_mb = 200

# Log rotation type. (string value)
# Possible values:
# interval - Rotate logs at predefined time intervals.
# size - Rotate logs once they reach a predefined size.
# none - Do not rotate log files.
#log_rotation_type = none

# Format string to use for log messages with context. Used by
# oslo_log.formatters.ContextFormatter (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. Used by
# oslo_log.formatters.ContextFormatter (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. Used by oslo_log.formatters.ContextFormatter (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. Used by
# oslo_log.formatters.ContextFormatter (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. Used by oslo_log.formatters.ContextFormatter
# (string value)
#logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is
# ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,oslo_policy=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message.
# (string value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Interval, number of seconds, of log rate limiting. (integer value)
#rate_limit_interval = 0

# Maximum number of logged messages per rate_limit_interval. (integer value)
#rate_limit_burst = 0

# Log level name used by rate limiting. Logs with level greater or equal to
# rate_limit_except_level are not filtered. An empty string means that all
# levels are filtered. (string value)
# Possible values:
# CRITICAL - <No description provided>
# ERROR - <No description provided>
# INFO - <No description provided>
# WARNING - <No description provided>
# DEBUG - <No description provided>
# '' - <No description provided>
#rate_limit_except_level = CRITICAL

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

#
# From oslo.messaging
#

# Size of executor thread pool when executor is threading or eventlet.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# The network address and optional user credentials for connecting to the
# messaging backend, in URL format. The expected format is:
#
# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
#
# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
#
# For full details on the fields in the URL see the documentation of
# oslo_messaging.TransportURL at
# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
# (string value)
#transport_url = rabbit://

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack

# Add an endpoint to answer to ping calls. Endpoint is named
# oslo_rpc_server_ping (boolean value)
#rpc_ping_enabled = false

#
# From oslo.service.service
#

# DEPRECATED: Enable eventlet backdoor. Acceptable values are 0, <port>, and
# <start>:<end>, where 0 results in listening on a random tcp port number;
# <port> results in listening on the specified port number (and not enabling
# backdoor if that port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range of port numbers.
# The chosen port is displayed in the service's log file. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: The 'backdoor_port' option is deprecated and will be removed in a
# future release.
#backdoor_port = <None>

# DEPRECATED: Enable eventlet backdoor, using the provided path as a unix
# socket that can receive connections. This option is mutually exclusive with
# 'backdoor_port' in that only one should be provided. If both are provided
# then the existence of this option overrides the usage of that option.
# Inside the path {pid} will be replaced with the PID of the current process.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: The 'backdoor_socket' option is deprecated and will be removed in a
# future release.
#backdoor_socket = <None>

# Enables or disables logging values of all registered options when starting
# a service (at DEBUG level). (boolean value)
#log_options = true

# Specify a timeout after which a gracefully shutdown server will exit. Zero
# value means endless wait. (integer value)
#graceful_shutdown_timeout = 60


[cache]

#
# From oslo.cache
#

# Prefix for building the configuration dictionary for the cache region. This
# should not need to be changed unless there is another dogpile.cache region
# with the same configuration name. (string value)
#config_prefix = cache.oslo

# Default TTL, in seconds, for any cached item in the dogpile.cache region.
# This applies to any cached method that doesn't have an explicit cache
# expiration time defined for it. (integer value)
# Minimum value: 1
#expiration_time = 600

# Expiration time in cache backend to purge expired records automatically.
# This should be greater than expiration_time and all cache_time options
# (integer value)
# Minimum value: 1
#backend_expiration_time = <None>

# Cache backend module. For eventlet-based or environments with hundreds of
# threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is
# recommended. For environments with less than 100 threaded servers,
# Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is
# recommended. Test environments with a single instance of the server can use
# the dogpile.cache.memory backend. (string value)
# Possible values:
# oslo_cache.memcache_pool - <No description provided>
# oslo_cache.dict - <No description provided>
# oslo_cache.mongo - <No description provided>
# oslo_cache.etcd3gw - <No description provided>
# dogpile.cache.pymemcache - <No description provided>
# dogpile.cache.memcached - <No description provided>
# dogpile.cache.pylibmc - <No description provided>
# dogpile.cache.bmemcached - <No description provided>
# dogpile.cache.dbm - <No description provided>
# dogpile.cache.redis - <No description provided>
# dogpile.cache.redis_sentinel - <No description provided>
# dogpile.cache.memory - <No description provided>
# dogpile.cache.memory_pickle - <No description provided>
# dogpile.cache.null - <No description provided>
#backend = dogpile.cache.null

# Arguments supplied to the backend module. Specify this option once per
# argument to be passed to the dogpile.cache backend. Example format:
# "<argname>:<value>". (multi valued)
#backend_argument =

# Proxy classes to import that will affect the way the dogpile.cache backend
# functions. See the dogpile.cache documentation on changing-backend-
# behavior. (list value)
#proxies =

# Global toggle for caching. (boolean value)
#enabled = false

# Extra debugging from the cache backend (cache keys, get/set/delete/etc
# calls). This is only really useful if you need to see the specific cache-
# backend get/set/delete calls with the keys/values. Typically this should be
# left set to false. (boolean value)
#debug_cache_backend = false

# Memcache servers in the format of "host:port". This is used by backends
# dependent on Memcached. If ``dogpile.cache.memcached`` or
# ``oslo_cache.memcache_pool`` is used and a given host or domain refers to
# IPv6 then you should prefix the given address with the address family
# (``inet6``) (e.g. ``inet6:[::1]:11211``,
# ``inet6:[fd12:3456:789a:1::1]:11211``,
# ``inet6:[controller-0.internalapi]:11211``). If the address family is not
# given then these backends will use the default ``inet`` address family
# which corresponds to IPv4 (list value)
#memcache_servers = localhost:11211

# Number of seconds memcached server is considered dead before it is tried
# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
# (integer value)
#memcache_dead_retry = 300

# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (floating point value)
#memcache_socket_timeout = 1.0

# Max total number of open connections to every memcached server.
# (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_maxsize = 10

# Number of seconds a connection to memcached is held unused in the pool
# before it is closed. (oslo_cache.memcache_pool backend only). (integer
# value)
#memcache_pool_unused_timeout = 60

# Number of seconds that an operation will wait to get a memcache client
# connection. (integer value)
#memcache_pool_connection_get_timeout = 10

# Global toggle if memcache will be flushed on reconnect.
# (oslo_cache.memcache_pool backend only). (boolean value)
#memcache_pool_flush_on_reconnect = false

# Enable SASL (Simple Authentication and Security Layer) if this option is
# true, else disable it. (boolean value)
#memcache_sasl_enabled = false

# The user name for memcached with SASL enabled (string value)
#memcache_username = <None>

# The password for memcached with SASL enabled (string value)
#memcache_password = <None>

# Redis server in the format of "host:port" (string value)
#redis_server = localhost:6379

# Database id in Redis server (integer value)
# Minimum value: 0
#redis_db = 0

# The user name for redis (string value)
#redis_username = <None>

# The password for redis (string value)
#redis_password = <None>

# Redis sentinel servers in the format of "host:port" (list value)
#redis_sentinels = localhost:26379

# Timeout in seconds for every call to a server. (dogpile.cache.redis and
# dogpile.cache.redis_sentinel backends only). (floating point value)
#redis_socket_timeout = 1.0

# Service name of the redis sentinel cluster. (string value)
#redis_sentinel_service_name = mymaster

# Global toggle for TLS usage when communicating with the caching servers.
# Currently supported by ``dogpile.cache.bmemcache``,
# ``dogpile.cache.pymemcache``, ``oslo_cache.memcache_pool``,
# ``dogpile.cache.redis`` and ``dogpile.cache.redis_sentinel``. (boolean
# value)
#tls_enabled = false

# Path to a file of concatenated CA certificates in PEM format necessary to
# establish the caching servers' authenticity. If tls_enabled is False, this
# option is ignored. (string value)
#tls_cafile = <None>

# Path to a single file in PEM format containing the client's certificate as
# well as any number of CA certificates needed to establish the certificate's
# authenticity. This file is only required when client side authentication is
# necessary. If tls_enabled is False, this option is ignored. (string value)
#tls_certfile = <None>

# Path to a single file containing the client's private key. Otherwise the
# private key will be taken from the file specified in tls_certfile. If
# tls_enabled is False, this option is ignored. (string value)
#tls_keyfile = <None>

# Set the available ciphers for sockets created with the TLS context. It
# should be a string in the OpenSSL cipher list format. If not specified, all
# OpenSSL enabled ciphers will be available. Currently supported by
# ``dogpile.cache.bmemcache``, ``dogpile.cache.pymemcache`` and
# ``oslo_cache.memcache_pool``. (string value)
#tls_allowed_ciphers = <None>

# Global toggle for the socket keepalive of dogpile's pymemcache backend
# (boolean value)
#enable_socket_keepalive = false

# The time (in seconds) the connection needs to remain idle before TCP starts
# sending keepalive probes. Should be a positive integer greater than zero.
# (integer value)
# Minimum value: 0
#socket_keepalive_idle = 1

# The time (in seconds) between individual keepalive probes. Should be a
# positive integer greater than zero. (integer value)
# Minimum value: 0
#socket_keepalive_interval = 1

# The maximum number of keepalive probes TCP should send before dropping the
# connection. Should be a positive integer greater than zero. (integer value)
# Minimum value: 0
#socket_keepalive_count = 1

# Enable retry client mechanisms to handle failure. Those mechanisms can be
# used to wrap all kinds of pymemcache clients. The wrapper allows you to
# define how many attempts to make and how long to wait between attempts.
# (boolean value)
#enable_retry_client = false

# Number of times to attempt an action before failing. (integer value)
# Minimum value: 1
#retry_attempts = 2

# Number of seconds to sleep between each attempt. (floating point value)
#retry_delay = 0

# Amount of times a client should be tried before it is marked dead and
# removed from the pool in the HashClient's internal mechanisms. (integer
# value)
# Minimum value: 1
#hashclient_retry_attempts = 2

# Time in seconds that should pass between retry attempts in the HashClient's
# internal mechanisms. (floating point value)
#hashclient_retry_delay = 1

# Time in seconds before attempting to add a node back in the pool in the
# HashClient's internal mechanisms. (floating point value)
#dead_timeout = 60

# Global toggle for enforcing the OpenSSL FIPS mode. This feature requires
# Python support. This is available in Python 3.9 in all environments and may
# have been backported to older Python versions on select environments. If
# the Python executable used does not support OpenSSL FIPS mode, an exception
# will be raised. Currently supported by ``dogpile.cache.bmemcache``,
# ``dogpile.cache.pymemcache`` and ``oslo_cache.memcache_pool``. (boolean
# value)
#enforce_fips_mode = false


[compute]

#
# From ceilometer
#

# Ceilometer offers many methods to discover the instances running on a
# compute node (string value)
# Possible values:
# naive - poll nova to get all instances
# workload_partitioning - poll nova to get instances of the compute
# libvirt_metadata - get instances from libvirt metadata but without instance
# metadata (recommended)
#instance_discovery_method = libvirt_metadata

# New instances will be discovered periodically based on this option (in
# seconds). By default, the agent discovers instances according to the
# pipeline polling interval. If the option is greater than 0, the instance
# list to poll will be updated based on this option's interval. Measurements
# relating to the instances will match intervals defined in the pipeline.
# This option is only used for agent polling to the Nova API, so it will work
# only when 'instance_discovery_method' is set to 'naive'. (integer value)
# Minimum value: 0
#resource_update_interval = 0

# The expiry to totally refresh the instances resource cache. Since an
# instance may be migrated to another host, we need to clean the legacy
# instances info in the local cache by totally refreshing the local cache.
# The minimum should be the value of the config option
# resource_update_interval. This option is only used for agent polling to the
# Nova API, so it will work only when 'instance_discovery_method' is set to
# 'naive'. (integer value)
# Minimum value: 0
#resource_cache_expiry = 3600

# Whether or not additional instance attributes that require Nova API queries
# should be fetched. Currently the only value that requires fetching from the
# Nova API is 'metadata', the attribute storing user-configured server
# metadata, which is used to fill out some optional fields such as the server
# group of an instance. fetch_extra_metadata is currently set to True by
# default, but to reduce the load on the Nova API this will be changed to
# False in a future release.
(boolean value) #fetch_extra_metadata = true [coordination] # # From ceilometer # # The backend URL to use for distributed coordination. If left empty, per- # deployment central agent and per-host compute agent won't do workload # partitioning and will only function correctly if a single instance of that # service is running. (string value) #backend_url = <None> [event] # # From ceilometer # # Configuration file for event definitions. (string value) #definitions_cfg_file = event_definitions.yaml # Drop notifications if no event definition matches. (Otherwise, we convert # them with just the default traits) (boolean value) #drop_unmatched_notifications = false # Store the raw notification for select priority levels (info and/or error). By # default, raw details are not captured. (multi valued) #store_raw = [ipmi] # # From ceilometer # # Tolerance of IPMI/NM polling failures before disable this pollster. Negative # indicates retrying forever. (integer value) #polling_retry = 3 [meter] # # From ceilometer # # List directory to find files of defining meter notifications. (multi valued) #meter_definitions_dirs = /etc/ceilometer/meters.d #meter_definitions_dirs = /home/zuul/src/opendev.org/openstack/ceilometer/ceilometer/data/meters.d [notification] # # From ceilometer # # Acknowledge message when event persistence fails. (boolean value) #ack_on_event_error = true # Messaging URLs to listen for notifications. Example: # rabbit://user:pass@host1:port1[,user:pass@hostN:portN]/virtual_host # (DEFAULT/transport_url is used if empty). This is useful when you have # dedicate messaging nodes for each service, for example, all nova # notifications go to rabbit-nova:5672, while all cinder notifications go to # rabbit-cinder:5672. (multi valued) #messaging_urls = # Number of notification messages to wait before publishing them. 
(integer # value) # Minimum value: 1 #batch_size = 1 # Number of seconds to wait before dispatching samples when batch_size is not # reached (None means indefinitely). (integer value) #batch_timeout = <None> # Number of workers for notification service. (integer value) # Minimum value: 1 # Deprecated group/name - [DEFAULT]/notification_workers #workers = 1 # Select which pipeline managers to enable to generate data (multi valued) #pipelines = meter #pipelines = event # Exchanges name to listen for notifications. (multi valued) # Deprecated group/name - [DEFAULT]/http_control_exchanges #notification_control_exchanges = nova #notification_control_exchanges = glance #notification_control_exchanges = neutron #notification_control_exchanges = cinder #notification_control_exchanges = heat #notification_control_exchanges = keystone #notification_control_exchanges = trove #notification_control_exchanges = zaqar #notification_control_exchanges = swift #notification_control_exchanges = ceilometer #notification_control_exchanges = magnum #notification_control_exchanges = dns #notification_control_exchanges = ironic #notification_control_exchanges = aodh [oslo_concurrency] # # From oslo.concurrency # # Enables or disables inter-process locks. (boolean value) #disable_process_locking = false # Directory to use for lock files. For security, the specified directory # should only be writable by the user running the processes that need locking. # Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, # a lock path must be set. (string value) #lock_path = <None> [oslo_messaging_kafka] # # From oslo.messaging # # Max fetch bytes of Kafka consumer (integer value) #kafka_max_fetch_bytes = 1048576 # Default timeout(s) for Kafka consumers (floating point value) #kafka_consumer_timeout = 1.0 # Group id for Kafka consumer. 
Consumers in one group will coordinate message # consumption (string value) #consumer_group = oslo_messaging_consumer # Upper bound on the delay for KafkaProducer batching in seconds (floating # point value) #producer_batch_timeout = 0.0 # Size of batch for the producer async send (integer value) #producer_batch_size = 16384 # The compression codec for all data generated by the producer. If not set, # compression will not be used. Note that the allowed values of this depend on # the kafka version (string value) # Possible values: # none - <No description provided> # gzip - <No description provided> # snappy - <No description provided> # lz4 - <No description provided> # zstd - <No description provided> #compression_codec = none # Enable asynchronous consumer commits (boolean value) #enable_auto_commit = false # The maximum number of records returned in a poll call (integer value) #max_poll_records = 500 # Protocol used to communicate with brokers (string value) # Possible values: # PLAINTEXT - <No description provided> # SASL_PLAINTEXT - <No description provided> # SSL - <No description provided> # SASL_SSL - <No description provided> #security_protocol = PLAINTEXT # Mechanism when security protocol is SASL (string value) #sasl_mechanism = PLAIN # CA certificate PEM file used to verify the server certificate (string value) #ssl_cafile = # Client certificate PEM file used for authentication. (string value) #ssl_client_cert_file = # Client key PEM file used for authentication. (string value) #ssl_client_key_file = # Client key password file used for authentication. (string value) #ssl_client_key_password = [oslo_messaging_notifications] # # From oslo.messaging # # The Drivers(s) to handle sending notifications. Possible values are # messaging, messagingv2, routing, log, test, noop (multi valued) #driver = # A URL representing the messaging driver to use for notifications. If not set, # we fall back to the same configuration used for RPC. 
# (string value)
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
#topics = notifications

# The maximum number of attempts to re-send a notification message which failed
# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
# (integer value)
#retry = -1


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. If rabbit_quorum_queue is enabled, queues will be
# durable and this value will be ignored. (boolean value)
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
#amqp_auto_delete = false

# Size of RPC connection pool. (integer value)
# Minimum value: 1
#rpc_conn_pool_size = 30

# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2

# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200

# Connect over SSL. (boolean value)
#ssl = false

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
#ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
#ssl_key_file =

# SSL cert file (valid only if SSL enabled). (string value)
#ssl_cert_file =

# SSL certification authority file (valid only if SSL enabled). (string value)
#ssl_ca_file =

# Global toggle for enforcing the OpenSSL FIPS mode. This feature requires
# Python support. This is available in Python 3.9 in all environments and may
# have been backported to older Python versions on select environments. If the
# Python executable used does not support OpenSSL FIPS mode, an exception will
# be raised. (boolean value)
#ssl_enforce_fips_mode = false

# DEPRECATED: It is recommended not to use this option anymore. Run
# the health check heartbeat thread through a native Python thread by default.
# If this option is set to False then the health check heartbeat will inherit
# the execution model from the parent process. For example, if the parent
# process has monkey patched the stdlib by using eventlet/greenlet then the
# heartbeat will be run through a green thread. This option should be set to
# True only for the wsgi services. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: The option is related to Eventlet, which will be removed. In addition
# this has never worked as expected with services using eventlet for core
# service framework.
#heartbeat_in_pthread = false

# How long to wait (in seconds) before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
# Minimum value: 0.0
# Maximum value: 4.5
#kombu_reconnect_delay = 1.0

# Random time to wait for when reconnecting in response to an AMQP consumer
# cancel notification. (floating point value)
# Minimum value: 0.0
#kombu_reconnect_splay = 0.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression = <None>

# How long to wait for a missing client before abandoning sending it its
# replies. This value should not be longer than rpc_response_timeout. (integer
# value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Possible values:
# round-robin - <No description provided>
# shuffle - <No description provided>
#kombu_failover_strategy = round-robin

# The RabbitMQ login method.
# (string value)
# Possible values:
# PLAIN - <No description provided>
# AMQPLAIN - <No description provided>
# EXTERNAL - <No description provided>
# RABBIT-CR-DEMO - <No description provided>
#rabbit_login_method = AMQPLAIN

# How frequently to retry connecting with RabbitMQ. (integer value)
# Minimum value: 1
#rabbit_retry_interval = 1

# How long to back off between retries when connecting to RabbitMQ. (integer
# value)
# Minimum value: 0
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. (integer value)
# Minimum value: 1
#rabbit_interval_max = 30

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
#rabbit_ha_queues = false

# Use quorum queues in RabbitMQ (x-queue-type: quorum). The quorum queue is a
# modern queue type for RabbitMQ implementing a durable, replicated FIFO queue
# based on the Raft consensus algorithm. It is available as of RabbitMQ 3.8.0.
# If set, this option will conflict with the HA queues (``rabbit_ha_queues``),
# aka mirrored queues; in other words, the HA queues should be disabled. Quorum
# queues are also durable by default, so the amqp_durable_queues option is
# ignored when this option is enabled. (boolean value)
#rabbit_quorum_queue = false

# Use quorum queues for transient queues in RabbitMQ. Enabling this option
# will make sure those queues also use the quorum kind of RabbitMQ queues,
# which are HA by default. (boolean value)
#rabbit_transient_quorum_queue = false

# Each time a message is redelivered to a consumer, a counter is incremented.
# Once the redelivery count exceeds the delivery limit, the message gets
# dropped or dead-lettered (if a DLX exchange has been configured). Used only
# when rabbit_quorum_queue is enabled. The default of 0 means no limit is set.
# (integer value)
#rabbit_quorum_delivery_limit = 0

# By default all messages are maintained in memory. If a quorum queue grows in
# length, it can put memory pressure on a cluster. This option can limit the
# number of messages in the quorum queue. Used only when rabbit_quorum_queue is
# enabled. The default of 0 means no limit is set. (integer value)
#rabbit_quorum_max_memory_length = 0

# By default all messages are maintained in memory. If a quorum queue grows in
# length, it can put memory pressure on a cluster. This option can limit the
# number of memory bytes used by the quorum queue. Used only when
# rabbit_quorum_queue is enabled. The default of 0 means no limit is set.
# (integer value)
#rabbit_quorum_max_memory_bytes = 0

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. Setting 0 as the
# value will disable the x-expires. If doing so, make sure you have a RabbitMQ
# policy to delete the queues, or your deployment will create an infinite
# number of queues over time. In case rabbit_stream_fanout is set to True, this
# option will control the data retention policy (x-max-age) for messages in the
# fanout queue rather than the queue duration itself, so the oldest data in the
# stream queue will be discarded from it once it reaches the TTL. Setting it to
# 0 will disable x-max-age for the stream, which makes the stream grow
# indefinitely, filling up the disk space. (integer value)
# Minimum value: 0
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages.
# (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if
# heartbeat's keep-alive fails (0 disables heartbeat). (integer value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold we check the
# heartbeat. (integer value)
#heartbeat_rate = 3

# DEPRECATED: Enable/Disable the RabbitMQ mandatory flag for direct send. The
# direct send is used as reply, so the MessageUndeliverable exception is raised
# in case the client queue does not exist. The MessageUndeliverable exception
# will be used to loop for a timeout to give the sender a chance to recover.
# This flag is deprecated and it will not be possible to deactivate this
# functionality anymore. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Mandatory flag no longer deactivable.
#direct_mandatory_flag = true

# Enable the x-cancel-on-ha-failover flag so that the RabbitMQ server will
# cancel and notify consumers when a queue is down (boolean value)
#enable_cancel_on_failover = false

# Should we use consistent queue names or random ones (boolean value)
#use_queue_manager = false

# Hostname used by queue manager. Defaults to the value returned by
# socket.gethostname(). (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#hostname = node1.example.com

# Process name used by queue manager (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#processname = nova-api

# Use stream queues in RabbitMQ (x-queue-type: stream). Streams are a new
# persistent and replicated data structure ("queue type") in RabbitMQ which
# models an append-only log with non-destructive consumer semantics. It is
# available as of RabbitMQ 3.9.0.
# If set, this option will replace all fanout queues with only one stream
# queue. (boolean value)
#rabbit_stream_fanout = false


[oslo_reports]

#
# From oslo.reports
#

# Path to a log directory where to create a file (string value)
#log_dir = <None>

# The path to a file to watch for changes to trigger the reports, instead of
# signals. Setting this option disables the signal trigger for the reports. If
# application is running as a WSGI application it is recommended to use this
# instead of signals. (string value)
#file_event_handler = <None>

# How many seconds to wait between polls when file_event_handler is set
# (integer value)
#file_event_handler_interval = 1


[polling]

#
# From ceilometer
#

# Configuration file for polling definition. (string value)
#cfg_file = polling.yaml

# Path to directory where socket file for polling heartbeat will be created.
# (string value)
#heartbeat_socket_dir = <None>

# Work-load partitioning group prefix. Use only if you want to run multiple
# polling agents with different config files. For each sub-group of the agent
# pool with the same partitioning_group_prefix a disjoint subset of pollsters
# should be loaded. (string value)
#partitioning_group_prefix = <None>

# Batch size of samples to send to notification agent. Set to 0 to disable.
# When the prometheus exporter feature is used, this should be larger than the
# maximum number of samples per metric. (integer value)
#batch_size = 50

# List of directories with YAML files used to create pollsters. (multi valued)
#pollsters_definitions_dirs = /etc/ceilometer/pollsters.d

# Identify project and user names from polled samples. By default, collecting
# these values is disabled due to the fact that it could overwhelm the keystone
# service with lots of continuous requests depending upon the number of
# projects, users and samples polled from the environment.
# While using this
# feature, it is recommended that ceilometer be configured with a caching
# backend to reduce the number of calls made to keystone. (boolean value)
# Deprecated group/name - [polling]/tenant_name_discovery
#identity_name_discovery = false

# Whether the polling service should be sending notifications after polling
# cycles. (boolean value)
#enable_notifications = true

# Allow this ceilometer polling instance to directly expose the retrieved
# metrics in Prometheus format. (boolean value)
#enable_prometheus_exporter = false

# A list of ipaddr:port combinations on which the exported metrics will be
# exposed. (list value)
#prometheus_listen_addresses = 127.0.0.1:9101

# Whether the polling service should ignore disabled projects or not. (boolean
# value)
#ignore_disabled_projects = false

# Whether it will expose TLS metrics or not (boolean value)
#prometheus_tls_enable = false

# The certificate file to allow this ceilometer to expose TLS scrape endpoints
# (string value)
#prometheus_tls_certfile = <None>

# The private key to allow this ceilometer to expose TLS scrape endpoints
# (string value)
#prometheus_tls_keyfile = <None>

# The number of threads used to process the pollsters. The value one (1) means
# that the processing is done in a serial fashion (not ordered!). The value
# zero (0) means that we will use as many threads as the number of pollsters
# configured in the polling task. Any other positive integer can be used to fix
# an upper bound limit to the number of threads used for processing pollsters
# in parallel. One must bear in mind that using more than one thread might not
# take full advantage of the discovery cache and pollsters cache processes; it
# is possible though to improve/use pollsters that synchronize themselves in
# the cache objects. (integer value)
# Minimum value: 0
#threads_to_process_pollsters = 1


[publisher]

#
# From ceilometer
#

# Secret value for signing messages.
# Set the value empty if signing is not
# required, to avoid computational overhead. (string value)
# Deprecated group/name - [DEFAULT]/metering_secret
# Deprecated group/name - [publisher_rpc]/metering_secret
# Deprecated group/name - [publisher]/metering_secret
#telemetry_secret = change this for valid signing


[publisher_notifier]

#
# From ceilometer
#

# DEPRECATED: The topic that ceilometer uses for metering notifications.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#metering_topic = metering

# DEPRECATED: The topic that ceilometer uses for event notifications. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#event_topic = event

# The driver that ceilometer uses for metering notifications. (string value)
# Deprecated group/name - [publisher_notifier]/metering_driver
#telemetry_driver = messagingv2


[rgw_admin_credentials]

#
# From ceilometer
#

# Access key for Radosgw Admin. (string value)
#access_key = <None>

# Secret key for Radosgw Admin. (string value)
#secret_key = <None>


[rgw_client]

#
# From ceilometer
#

# Whether RGW uses implicit tenants or not. (boolean value)
#implicit_tenants = false


[service_credentials]

#
# From ceilometer-auth
#

# PEM encoded Certificate Authority to use when verifying HTTPS connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers.
# (boolean value)
#split_loggers = false

# Authentication type to load (string value)
# Deprecated group/name - [service_credentials]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>

# Authentication URL (string value)
#auth_url = <None>

# Scope for system operations (string value)
#system_scope = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Project ID to scope to (string value)
# Deprecated group/name - [service_credentials]/tenant_id
#project_id = <None>

# Project name to scope to (string value)
# Deprecated group/name - [service_credentials]/tenant_name
#project_name = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# ID of the trust to use as a trustee user (string value)
#trust_id = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>

# User ID (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [service_credentials]/user_name
#username = <None>

# User's domain ID (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User's password (string value)
#password = <None>

# Region name to use for OpenStack service endpoints. (string value)
# Deprecated group/name - [DEFAULT]/os_region_name
#region_name = <None>

# Type of endpoint in Identity service catalog to use for communication with
# OpenStack services.
# (string value)
# Possible values:
# public - <No description provided>
# internal - <No description provided>
# admin - <No description provided>
# auth - <No description provided>
# publicURL - <No description provided>
# internalURL - <No description provided>
# adminURL - <No description provided>
# Deprecated group/name - [service_credentials]/os_endpoint_type
#interface = public


[service_types]

#
# From ceilometer
#

# Aodh service type. (string value)
#aodh = alarming

# Glance service type. (string value)
#glance = image

# Neutron service type. (string value)
#neutron = network

# Nova service type. (string value)
#nova = compute

# Radosgw service type. (string value)
#radosgw = <None>

# Swift service type. (string value)
#swift = object-store

# Cinder service type. (string value)
#cinder = volumev3
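As a worked example of the `[oslo_messaging_rabbit]` options documented above, a deployment opting into RabbitMQ quorum queues might use a fragment like the following sketch. The limit values are illustrative choices, not defaults; quorum queues require RabbitMQ 3.8.0 or later and must not be combined with mirrored (HA) queues:

```ini
[oslo_messaging_rabbit]
# Quorum queues are durable by default, so amqp_durable_queues is ignored.
rabbit_quorum_queue = true
# Mirrored (HA) queues conflict with quorum queues; keep them disabled.
rabbit_ha_queues = false
# Optional safety limits; 0 (the default) means no limit is set.
rabbit_quorum_delivery_limit = 5
rabbit_quorum_max_memory_bytes = 1073741824
```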
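The `[polling]` options above can be combined to expose polled metrics directly in Prometheus format. The following sketch uses an illustrative listen address and batch size rather than the defaults:

```ini
[polling]
enable_prometheus_exporter = true
# Bind the scrape endpoint on all interfaces instead of loopback only.
prometheus_listen_addresses = 0.0.0.0:9101
# Per the batch_size note above, keep this larger than the maximum
# number of samples per metric when the exporter is enabled.
batch_size = 200
```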
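To illustrate how the `[service_credentials]` options fit together, here is a minimal sketch of a Keystone password-authentication block. The endpoint URL, account names, and password are placeholders for your deployment's values, not defaults:

```ini
[service_credentials]
auth_type = password
# Placeholder Keystone endpoint; substitute your controller's URL.
auth_url = http://controller:5000/v3
username = ceilometer
password = CEILOMETER_PASS
project_name = service
user_domain_name = Default
project_domain_name = Default
# Use the internal endpoints from the service catalog.
interface = internal
region_name = RegionOne
```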