A command-line interface for interaction with Apache Kafka
- command auto-completion for bash, zsh and fish, including dynamic completion for e.g. topics or consumer groups
- support for avro schemas
- configuration of different contexts
- directly access kafka clusters inside your kubernetes cluster
- support for consuming and producing protobuf-encoded messages
You can install the pre-compiled binary or compile from source.
homebrew:
# install kafkactl
brew install kafkactl
# upgrade kafkactl
brew upgrade kafkactl
winget:
winget install kafkactl
deb/rpm:
Download the .deb or .rpm from the releases page and install with dpkg -i and rpm -i respectively.
yay (AUR):
There’s a kafkactl AUR package available for Arch. Install it with your AUR helper of choice (e.g. yay):
yay -S kafkactl
manually:
Download the pre-compiled binaries from the releases page and copy to the desired location.
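For example, a manual installation on Linux amd64 could look like this (the version and archive name are placeholders; check the releases page for the actual asset names):
# download and unpack the release archive (adjust version and platform)
curl -LO https://github.com/deviceinsight/kafkactl/releases/download/vX.Y.Z/kafkactl_X.Y.Z_linux_amd64.tar.gz
tar xzf kafkactl_X.Y.Z_linux_amd64.tar.gz
# copy the binary to a directory on your PATH
sudo cp kafkactl /usr/local/bin/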
If no config file is found, a default config is generated in $HOME/.config/kafkactl/config.yml.
This configuration is suitable to get started with a single node cluster on a local machine.
Create $HOME/.config/kafkactl/config.yml with a definition of contexts that should be available:
contexts:
  default:
    brokers:
      - localhost:9092
  remote-cluster:
    brokers:
      - remote-cluster001:9092
      - remote-cluster002:9092
      - remote-cluster003:9092
    # optional: tls config
    tls:
      enabled: true
      ca: my-ca
      cert: my-cert
      certKey: my-key
      # set insecure to true to ignore all tls verification (defaults to false)
      insecure: false
    # optional: sasl support
    sasl:
      enabled: true
      username: admin
      password: admin
      # optional configure sasl mechanism as plaintext, scram-sha256, scram-sha512, oauth (defaults to plaintext)
      mechanism: oauth
      # optional configure sasl version as v0, v1 (defaults to not configured). Refer to: https://github.com/IBM/sarama/issues/3000#issuecomment-2415829478
      version: v0
      # optional tokenProvider configuration (only used for 'sasl.mechanism=oauth')
      tokenprovider:
        # plugin to use as token provider implementation (see plugin section)
        plugin: azure
        # optional: additional options passed to the plugin
        options:
          key: value
    # optional: access clusters running kubernetes
    kubernetes:
      enabled: false
      binary: kubectl #optional
      kubeConfig: ~/.kube/config #optional
      kubeContext: my-cluster
      namespace: my-namespace
      # optional: docker image to use (the tag of the image will be suffixed by `-scratch` or `-ubuntu` depending on command)
      image: private.registry.com/deviceinsight/kafkactl
      # optional: secret for private docker registry
      imagePullSecret: registry-secret
      # optional: secret containing tls certificates (e.g. ca.crt, cert.crt, key.key)
      tlsSecret: tls-secret
      # optional: Username to impersonate for the kubectl command
      asUser: user
      # optional: serviceAccount to use for the pod
      serviceAccount: my-service-account
      # optional: keep pod after exit (can be set to true for debugging)
      keepPod: true
      # optional: labels to add to the pod
      labels:
        key: value
      # optional: annotations to add to the pod
      annotations:
        key: value
      # optional: nodeSelector to add to the pod
      nodeSelector:
        key: value
      # optional: resource limits to add to the pod
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
      # optional: affinity to add to the pod
      affinity:
        # note: other types of affinity also supported
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: "<key>"
                    operator: "<operator>"
                    values: [ "<value>" ]
      # optional: tolerations to add to the pod
      tolerations:
        - key: "<key>"
          operator: "<operator>"
          value: "<value>"
          effect: "<effect>"
    # optional: clientID config (defaults to kafkactl-{username})
    clientID: my-client-id
    # optional: kafkaVersion (defaults to 2.5.0)
    kafkaVersion: 1.1.1
    # optional: timeout for admin requests (defaults to 3s)
    requestTimeout: 10s
    # optional: avro configuration
    avro:
      # optional: configure codec for (de)serialization as standard,avro (defaults to standard)
      # see: https://github.com/deviceinsight/kafkactl/issues/123
      jsonCodec: avro
    # optional: schema registry
    schemaRegistry:
      url: localhost:8081
      # optional: timeout for requests (defaults to 5s)
      requestTimeout: 10s
      # optional: basic auth credentials
      username: admin
      password: admin
      # optional: tls config for avro
      tls:
        enabled: true
        ca: my-ca
        cert: my-cert
        certKey: my-key
        # set insecure to true to ignore all tls verification (defaults to false)
        insecure: false
    # optional: default protobuf messages search paths
    protobuf:
      importPaths:
        - "/usr/include/protobuf"
      protoFiles:
        - "someMessage.proto"
        - "otherMessage.proto"
      protosetFiles:
        - "/usr/include/protoset/other.protoset"
      # see: https://pkg.go.dev/google.golang.org/protobuf@v1.36.6/encoding/protojson#MarshalOptions
      marshalOptions:
        allowPartial: true
        useProtoNames: true
        useEnumNumbers: true
        emitUnpopulated: true
        emitDefaultValues: true
    producer:
      # optional: changes the default partitioner
      partitioner: "hash"
      # optional: changes default required acks in produce request
      # see: https://pkg.go.dev/github.com/IBM/sarama?utm_source=godoc#RequiredAcks
      requiredAcks: "WaitForAll"
      # optional: maximum permitted size of a message (defaults to 1000000)
      maxMessageBytes: 1000000
    consumer:
      # optional: isolationLevel (defaults to ReadCommitted)
      isolationLevel: ReadUncommitted
The config file location is resolved by
- checking for a provided commandline argument: --config-file=$PATH_TO_CONFIG
- evaluating the environment variable: export KAFKA_CTL_CONFIG=$PATH_TO_CONFIG
- checking for a project config file in the working directory (see Project config files)
- as default the config file is looked up from one of the following locations:
  - $HOME/.config/kafkactl/config.yml
  - $HOME/.kafkactl/config.yml
  - $APPDATA/kafkactl/config.yml
  - /etc/kafkactl/config.yml
In addition to the config file locations above, kafkactl allows you to create a config file at project level. A project config file is meant to be placed at the root level of a git repository and declares the kafka configuration for this repository/project.
In order to identify the config file as belonging to kafkactl the following names can be used:
- kafkactl.yml
- .kafkactl.yml
During initialization kafkactl starts from the current working directory and recursively looks for a project level
config file. The recursive lookup ends at the boundary of a git repository (i.e. if a .git folder is found).
This way, kafkactl can be used conveniently anywhere in the git repository.
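A minimal kafkactl.yml at the repository root could look like this (the broker name is illustrative; the project file is assumed to use the same structure as the global config):
contexts:
  default:
    brokers:
      - my-project-kafka:9092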
The current context can be set via commandline argument --context, environment variable CURRENT_CONTEXT or
it can be defined in a file.
If no current context is defined, the first context in the config file is used as current context. Additionally, in this case a file storing the current context is created.
The file is typically stored next to the config file and named current-context.yml.
The location of the file can be overridden via environment variable KAFKA_CTL_WRITABLE_CONFIG.
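For example, the context can be selected per invocation, per session or persistently (remote-cluster refers to the context from the sample config above; the config use-context subcommand is assumed here, see kafkactl config --help for the exact commands):
# use a specific context for a single command
kafkactl --context remote-cluster get topics
# select the context for the current session via environment variable
export CURRENT_CONTEXT=remote-cluster
# persist the current context (written to current-context.yml)
kafkactl config use-context remote-cluster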
bash:
source <(kafkactl completion bash)
To load completions for each session, execute once:
Linux:
kafkactl completion bash > /etc/bash_completion.d/kafkactl
MacOS:
kafkactl completion bash > /usr/local/etc/bash_completion.d/kafkactl
zsh:
If shell completion is not already enabled in your environment, you will need to enable it. You can execute the following once:
echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions for each session, execute once:
kafkactl completion zsh > "${fpath[1]}/_kafkactl"
You will need to start a new shell for this setup to take effect.
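fish:
Completion for fish can be generated the same way; a sketch assuming the default fish completions directory:
kafkactl completion fish > ~/.config/fish/completions/kafkactl.fish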
The documentation for all available commands can be found here:
Assuming your Kafka brokers are accessible under kafka1:9092 and kafka2:9092, you can list topics by running:
docker run --env BROKERS="kafka1:9092 kafka2:9092" deviceinsight/kafkactl:latest get topics

If a more elaborate config is needed, you can mount it as a volume:
docker run -v /absolute/path/to/config.yml:/etc/kafkactl/config.yml deviceinsight/kafkactl get topics
If your kafka cluster is not directly accessible from your machine, but it is accessible from a kubernetes cluster
which in turn is accessible via kubectl from your machine you can configure kubernetes support:
contexts:
  kafka-cluster:
    brokers:
      - broker1:9092
      - broker2:9092
    kubernetes:
      enabled: true
      binary: kubectl #optional
      kubeContext: k8s-cluster
      namespace: k8s-namespace

Instead of talking directly to the kafka brokers, a kafkactl docker image is deployed as a pod into the defined namespace of the kubernetes cluster. Standard input and standard output are then wired between the pod and your shell running kafkactl.
There are two options:
- You can run kafkactl attach with your kubernetes cluster configured. This will use kubectl run to create a pod in the configured kubeContext/namespace which runs an image of kafkactl and gives you a bash into the container. Standard-in is piped to the pod and standard-out, standard-err directly to your shell. You even get auto-completion.
- You can run any other kafkactl command with your kubernetes cluster configured. Instead of directly querying the cluster a pod is deployed, and input/output are wired between pod and your shell.
The names of the brokers have to match the service names used to access kafka in your cluster. A command like this should give you this information:
kubectl get svc | grep kafka

The first option takes longer to start, since an Ubuntu-based docker image is used in order to have a bash available. The second option uses a docker image built from scratch and should therefore be quicker. Which option is more suitable will depend on your use-case.
Every key in the config.yml can be overwritten via environment variables. The corresponding environment variable
for a key can be found by applying the following rules:
- replace . by _
- replace - by _
- write the key name in ALL CAPS
e.g. the key contexts.default.tls.certKey has the corresponding environment variable CONTEXTS_DEFAULT_TLS_CERTKEY.
NOTE: an array variable can be written using whitespace as delimiter. For example BROKERS can be provided as
BROKERS="broker1:9092 broker2:9092 broker3:9092".
If environment variables for the default context should be set, the prefix CONTEXTS_DEFAULT_ can be omitted.
So, instead of CONTEXTS_DEFAULT_TLS_CERTKEY one can also set TLS_CERTKEY.
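Following these rules, a one-off invocation can be configured entirely via environment variables (broker names are illustrative; the variable names below are derived from the rules above):
# configure the default context via environment variables only
BROKERS="broker1:9092 broker2:9092" SASL_ENABLED=true SASL_USERNAME=admin SASL_PASSWORD=admin kafkactl get topics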
See root_test.go for more examples.
kafkactl supports plugins to cope with specifics when using Kafka-compatible clusters available from cloud providers such as Azure or AWS.
At the moment, plugins can only be used to implement a tokenProvider for oauth authentication.
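For example, a context that authenticates via oauth through a token-provider plugin is configured like this (context name and broker are illustrative; the azure plugin name is taken from the sample config above):
contexts:
  my-cloud-cluster:
    brokers:
      - my-cloud-broker:9093
    sasl:
      enabled: true
      mechanism: oauth
      tokenprovider:
        plugin: azure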
In the future, plugins might implement additional commands to query data or configuration which is not part of the Kafka-API. One example would be Eventhub consumer groups/offsets for Azure.
See the plugin documentation for additional documentation and usage examples.
Available plugins:
Consuming messages from a topic can be done with:
kafkactl consume my-topic
In order to consume starting from the oldest offset use:
kafkactl consume my-topic --from-beginning
The following example prints message key and timestamp as well as partition and offset in yaml format:
kafkactl consume my-topic --print-keys --print-timestamps -o yaml
To print partition in default output format use:
kafkactl consume my-topic --print-partitions
Headers of kafka messages can be printed with the parameter --print-headers e.g.:
kafkactl consume my-topic --print-headers -o yaml
If one is only interested in the last n messages this can be achieved by --tail e.g.:
kafkactl consume my-topic --tail=5
The consumer can be stopped when the latest offset is reached using --exit parameter e.g.:
kafkactl consume my-topic --from-beginning --exit
The consumer can compute the offset it starts from using a timestamp:
kafkactl consume my-topic --from-timestamp 1384216367189
kafkactl consume my-topic --from-timestamp 2014-04-26T17:24:37.123Z
kafkactl consume my-topic --from-timestamp 2014-04-26T17:24:37.123
kafkactl consume my-topic --from-timestamp 2009-08-12T22:15:09Z
kafkactl consume my-topic --from-timestamp 2017-07-19T03:21:51
kafkactl consume my-topic --from-timestamp 2013-04-01T22:43
kafkactl consume my-topic --from-timestamp 2014-04-26
The from-timestamp parameter supports different timestamp formats. It can either be a number representing the epoch milliseconds
or a string with a timestamp in one of the supported date formats.
NOTE: --from-timestamp is not designed to schedule the beginning of consumer’s consumption. The offset corresponding to the timestamp is computed at the beginning of the process. So if you set it to a date in the future, the consumer will start from the latest offset.
The consumer can be stopped when the offset corresponding to a particular timestamp is reached:
kafkactl consume my-topic --from-timestamp 2017-07-19T03:30:00 --to-timestamp 2017-07-19T04:30:00
The to-timestamp parameter supports the same formats as from-timestamp.
NOTE: --to-timestamp is not designed to schedule the end of consumer’s consumption. The offset corresponding to the timestamp is computed at the beginning of the process. So if you set it to a date in the future, the consumer will stop at the current latest offset.
The following example prints keys in hex and values in base64:
kafkactl consume my-topic --print-keys --key-encoding=hex --value-encoding=base64
The consumer can convert protobuf messages to JSON in keys (optional) and values:
kafkactl consume my-topic --value-proto-type MyTopicValue --key-proto-type MyTopicKey --proto-file kafkamsg.proto
To join a consumer group and consume messages as a member of the group:
kafkactl consume my-topic --group my-consumer-group
If you want to limit the number of messages that will be read, specify --max-messages:
kafkactl consume my-topic --max-messages 2
Producing messages can be done in multiple ways. If we want to produce a message with key='my-key',
value='my-value' to the topic my-topic this can be achieved with one of the following commands:
echo "my-key#my-value" | kafkactl produce my-topic --separator=# echo "my-value" | kafkactl produce my-topic --key=my-key kafkactl produce my-topic --key=my-key --value=my-value
If we have a file containing messages where each line contains key and value separated by #, the file can be
used as input to produce messages to topic my-topic:
cat myfile | kafkactl produce my-topic --separator=#
The same can be accomplished without piping the file to stdin with the --file parameter:
kafkactl produce my-topic --separator=# --file=myfile

If the messages in the input file need to be split by a different delimiter than \n, a custom line separator can be provided:
kafkactl produce my-topic --separator=# --lineSeparator=|| --file=myfile

NOTE: if the file was generated with kafkactl consume --print-keys --print-timestamps my-topic, the produce command is able to detect the message timestamp in the input and will ignore it.
It is also possible to produce messages in json format:
# each line in myfile.json is expected to contain a json object with fields key, value
kafkactl produce my-topic --file=myfile.json --input-format=json
cat myfile.json | kafkactl produce my-topic --input-format=json
The number of messages produced per second can be controlled with the --rate parameter:
cat myfile | kafkactl produce my-topic --separator=# --rate=200
It is also possible to specify the partition to insert the message:
kafkactl produce my-topic --key=my-key --value=my-value --partition=2
Additionally, a different partitioning scheme can be used. When a key is provided the default partitioner
uses the hash of the key to assign a partition. So the same key will end up in the same partition:
# the following 3 messages will all be inserted to the same partition
kafkactl produce my-topic --key=my-key --value=my-value
kafkactl produce my-topic --key=my-key --value=my-value
kafkactl produce my-topic --key=my-key --value=my-value
# the following 3 messages will probably be inserted to different partitions
kafkactl produce my-topic --key=my-key --value=my-value --partitioner=random
kafkactl produce my-topic --key=my-key --value=my-value --partitioner=random
kafkactl produce my-topic --key=my-key --value=my-value --partitioner=random
Message headers can also be written:
kafkactl produce my-topic --key=my-key --value=my-value --header key1:value1 --header key2:value\:2

The following example writes the key from base64 and value from hex:
kafkactl produce my-topic --key=dGVzdC1rZXk= --key-encoding=base64 --value=0000000000000000 --value-encoding=hex
You can control how many replica acknowledgements are needed for a response:
kafkactl produce my-topic --key=my-key --value=my-value --required-acks=WaitForAll
Producing null values (tombstone record) is also possible:
kafkactl produce my-topic --null-value
Producing protobuf message converted from JSON:
kafkactl produce my-topic --key='{"keyField":123}' --key-proto-type MyKeyMessage --value='{"valueField":"value"}' --value-proto-type MyValueMessage --proto-file kafkamsg.proto
A more complex protobuf message converted from a multi-line JSON string can be produced using a file input with custom separators.
For example, if you have the following protobuf definition (complex.proto):
syntax = "proto3"; import "google/protobuf/timestamp.proto"; message ComplexMessage { CustomerInfo customer_info = 1; DeviceInfo device_info = 2; } message CustomerInfo { string customer_id = 1; string name = 2; } message DeviceInfo { string serial = 1; google.protobuf.Timestamp last_update = 2; }
And you have the following file (complex-msg.txt) that contains the key and value of the message:
msg-key##
{
"customer_info": {
"customer_id": "12345",
"name": "Bob"
},
"device_info": {
"serial": "abcde",
"last_update": "2024-03-02T07:01:02.000Z"
}
}
+++

The command to produce the protobuf message using the sample protobuf definition and input file would be:
kafkactl produce my-topic --value-proto-type=ComplexMessage --proto-file=complex.proto --lineSeparator='+++' --separator='##' --file=complex-msg.txt
In order to enable avro support you just have to add the schema registry to your configuration:
contexts:
  localhost:
    schemaRegistry:
      url: localhost:8081

kafkactl will lookup the topic in the schema registry in order to determine if key or value needs to be avro encoded.
If producing with the latest schemaVersion is sufficient, no additional configuration is needed and kafkactl handles
this automatically.
If however one needs to produce an older schemaVersion this can be achieved by providing the parameters keySchemaVersion, valueSchemaVersion.
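For example, producing with an older value schema version could look like this (the flag spelling --value-schema-version/--key-schema-version is an assumption here; check kafkactl produce --help for the exact parameter names):
# produce using value schema version 1 instead of the latest version
kafkactl produce avro_topic --value '{"next":null}' --value-schema-version 1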
# create a topic
kafkactl create topic avro_topic
# add a schema for the topic value
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\": \"record\", \"name\": \"LongList\", \"fields\" : [{\"name\": \"next\", \"type\": [\"null\", \"LongList\"], \"default\": null}]}"}' \
  http://localhost:8081/subjects/avro_topic-value/versions
# produce a message
kafkactl produce avro_topic --value {\"next\":{\"LongList\":{}}}
# consume the message
kafkactl consume avro_topic --from-beginning --print-schema -o yaml
As for producing, kafkactl will also lookup the topic in the schema registry to determine if key or value needs to be
decoded with an avro schema.
The consume command handles this automatically and no configuration is needed.
An additional parameter print-schema can be provided to display the schema used for decoding.
kafkactl can consume and produce protobuf-encoded messages. In order to enable protobuf serialization/deserialization
you should add the flag --value-proto-type and optionally --key-proto-type (if keys are encoded in protobuf format)
with the type name. Protobuf-encoded messages are mapped with pbjson.
kafkactl will search for message types in the following order:
- Protoset files specified in the --protoset-file flag
- Protoset files specified in the context.protobuf.protosetFiles config value
- Proto files specified in the --proto-file flag
- Proto files specified in the context.protobuf.protoFiles config value
Proto files may require some dependencies in their import sections. To specify additional lookup paths use
the --proto-import-path flag or the context.protobuf.importPaths config value.
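For example, if kafkamsg.proto imports types that live outside its own directory, the lookup path can be passed explicitly (the include path is illustrative; TopicMessage refers to the example schema below):
kafkactl consume my-topic --value-proto-type TopicMessage --proto-file kafkamsg.proto --proto-import-path /path/to/proto/includes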
If the provided message types are not found, kafkactl will return an error.
Note that if you want to use raw proto files, protoc does not need to be installed.
Also note that protoset files must be compiled with included imports:
protoc -o kafkamsg.protoset --include_imports kafkamsg.proto
Assume you have the following proto schema in kafkamsg.proto:
syntax = "proto3";

import "google/protobuf/timestamp.proto";

message TopicMessage {
  google.protobuf.Timestamp produced_at = 1;
  int64 num = 2;
}

message TopicKey {
  float fvalue = 1;
}
"well-known" google/protobuf types are included so no additional proto files needed.
To produce message run
kafkactl produce <topic> --key '{"fvalue":1.2}' --key-proto-type TopicKey --value '{"producedAt":"2021-12-01T14:10:12Z","num":"1"}' --value-proto-type TopicValue --proto-file kafkamsg.proto
or with protoset
kafkactl produce <topic> --key '{"fvalue":1.2}' --key-proto-type TopicKey --value '{"producedAt":"2021-12-01T14:10:12Z","num":"1"}' --value-proto-type TopicMessage --protoset-file kafkamsg.protoset
To consume messages run
kafkactl consume <topic> --key-proto-type TopicKey --value-proto-type TopicMessage --proto-file kafkamsg.proto
or with protoset
kafkactl consume <topic> --key-proto-type TopicKey --value-proto-type TopicMessage --protoset-file kafkamsg.protoset
In order to get a list of topics the get topics command can be used:
kafkactl get topics
kafkactl list topics
A detailed description of a topic can be obtained with describe topic:
kafkactl describe topic my-topic
Per default only overwritten config entries are printed. To print all config entries including defaults use:
kafkactl describe topic my-topic --all-configs
To print only partition details of partitions with messages use:
kafkactl describe topic my-topic --skip-empty
The create topic command allows you to create one or multiple topics.
Basic usage:
kafkactl create topic my-topic
The partition count can be specified with:
kafkactl create topic my-topic --partitions 32
The replication factor can be specified with:
kafkactl create topic my-topic --replication-factor 3
Configs can also be provided:
kafkactl create topic my-topic --config retention.ms=3600000 --config=cleanup.policy=compact
The topic configuration can also be taken from an existing topic using the following:
kafkactl describe topic my-topic -o json > my-topic-config.json
kafkactl create topic my-topic-clone --file my-topic-config.json

The alter topic command allows you to change the partition count, replication factor and topic-level configurations of an existing topic.
The partition count can be increased with:
kafkactl alter topic my-topic --partitions 32
The replication factor can be altered with:
kafkactl alter topic my-topic --replication-factor 2
NOTE: when altering the replication factor, the new replica assignment is kept broker balanced. If you need more control over the assigned replicas use alter partition directly.
The topic configs can be edited by supplying key value pairs as follows:
kafkactl alter topic my-topic --config retention.ms=3600000 --config cleanup.policy=compact
The assigned replicas of a partition can directly be altered with:
# set brokers 102,103 as replicas for partition 3 of topic my-topic
kafkactl alter partition my-topic 3 -r 102,103

A new topic may be created from an existing topic as follows:
kafkactl clone topic source-topic target-topic
Source topic must exist, target topic must not exist.
kafkactl clones the partition count, replication factor and config entries.
In order to get a list of consumer groups the get consumer-groups command can be used:
# all available consumer groups
kafkactl get consumer-groups
# only consumer groups for a single topic
kafkactl get consumer-groups --topic my-topic
# using command alias
kafkactl get cg
To get detailed information about a consumer group use describe consumer-group. If the parameter --partitions
is provided, details will be printed for each partition; otherwise the partitions are aggregated to the clients.
# describe a consumer group
kafkactl describe consumer-group my-group
# show partition details only for partitions with lag
kafkactl describe consumer-group my-group --only-with-lag
# show details only for a single topic
kafkactl describe consumer-group my-group --topic my-topic
# using command alias
kafkactl describe cg my-group
A consumer-group can be created as follows:
# create group with offset for all partitions set to oldest
kafkactl create consumer-group my-group --topic my-topic --oldest
# create group with offset for all partitions set to newest
kafkactl create consumer-group my-group --topic my-topic --newest
# create group with offset for a single partition set to specific offset
kafkactl create consumer-group my-group --topic my-topic --partition 5 --offset 100
# create group for multiple topics with offset for all partitions set to oldest
kafkactl create consumer-group my-group --topic my-topic-a --topic my-topic-b --oldest
A consumer group may be created as clone of another consumer group as follows:
kafkactl clone consumer-group source-group target-group
Source group must exist and have committed offsets. Target group must not exist or must not have committed offsets.
kafkactl clones topic assignment and partition offsets.
The offsets of a consumer group can be reset with the reset offset command. In order to ensure the reset does what is expected, per default only the results are printed without actually executing it. Use the additional parameter --execute to perform the reset.
# reset offset for all partitions to oldest offset
kafkactl reset offset my-group --topic my-topic --oldest
# reset offset for all partitions to newest offset
kafkactl reset offset my-group --topic my-topic --newest
# reset offset for a single partition to specific offset
kafkactl reset offset my-group --topic my-topic --partition 5 --offset 100
# reset offset to newest for all topics in the group
kafkactl reset offset my-group --all-topics --newest
# reset offset for all partitions on multiple topics to oldest offset
kafkactl reset offset my-group --topic my-topic-a --topic my-topic-b --oldest
# reset offset to offset at a given timestamp(epoch)/datetime
kafkactl reset offset my-group --topic my-topic-a --to-datetime 2014-04-26T17:24:37.123Z
# reset offset to offset at a given timestamp(epoch)/datetime
kafkactl reset offset my-group --topic my-topic-a --to-datetime 1697726906352
In order to delete a consumer group offset use delete offset
# delete offset for all partitions of topic my-topic
kafkactl delete offset my-group --topic my-topic
# delete offset for partition 1 of topic my-topic
kafkactl delete offset my-group --topic my-topic --partition 1
Available ACL operations are documented here.
# create an acl that allows topic read for a user 'consumer'
kafkactl create acl --topic my-topic --operation read --principal User:consumer --allow
# create an acl that denies topic write for a user 'consumer' coming from a specific host
kafkactl create acl --topic my-topic --operation write --host 1.2.3.4 --principal User:consumer --deny
# allow multiple operations
kafkactl create acl --topic my-topic --operation read --operation describe --principal User:consumer --allow
# allow on all topics with a common prefix
kafkactl create acl --topic my-prefix --pattern prefixed --operation read --principal User:consumer --allow
# list all acl
kafkactl get acl
# list all acl (alias command)
kafkactl get access-control-list
# filter only topic resources
kafkactl get acl --topics
# filter only consumer group resources with operation read
kafkactl get acl --groups --operation read
# filter specific topic and user
kafkactl get acl --resource-name my-topic --principal User:myUser
# filter specific topic and host
kafkactl get acl --resource-name my-topic --host my-host
# delete all topic read acls
kafkactl delete acl --topics --operation read --pattern any
# delete all topic acls for any operation
kafkactl delete acl --topics --operation any --pattern any
# delete all cluster acls for any operation
kafkactl delete acl --cluster --operation any --pattern any
# delete all consumer-group acls with operation describe, patternType prefixed and permissionType allow
kafkactl delete acl --groups --operation describe --pattern prefixed --allow
# delete all topic acls for a principal
kafkactl delete acl --topics --operation any --pattern any --principal User:myUser
# delete all topic acls for a host
kafkactl delete acl --topics --operation any --pattern any --host my-host
To get the list of brokers of a kafka cluster use get brokers
# get the list of brokers
kafkactl get brokers

To view configs for a single broker use describe broker:
# describe broker
kafkactl describe broker 1

Per default only dynamic configs are shown. To view all configs use --all-configs:
kafkactl describe broker 1 --all-configs
Additionally, only default configs can be shown with:
kafkactl describe broker default
Using the alter broker command allows you to change dynamic broker configurations for individual brokers or cluster-wide defaults.
To alter a configuration for a specific broker:
kafkactl alter broker 101 --config background.threads=8
To alter a cluster-wide default configuration (affects brokers without individual overrides):
kafkactl alter broker default --config background.threads=8
Multiple configurations can be altered simultaneously:
kafkactl alter broker 101 --config background.threads=8 --config log.cleaner.threads=2
kafkactl provides comprehensive SCRAM (Salted Challenge Response Authentication Mechanism) user management capabilities for Kafka clusters that support SCRAM authentication. This allows you to create, modify, and manage user credentials directly through kafkactl.
Prerequisites:
- Kafka 2.7.0+ (for SCRAM user management APIs)
- Admin privileges on the Kafka cluster
- SCRAM-enabled listeners configured on Kafka brokers
Create a new SCRAM user with default settings:
# Create user with SCRAM-SHA-256 (default mechanism)
kafkactl create user myuser --password mypassword
# Create user with specific mechanism
kafkactl create user myuser --password mypassword --mechanism SCRAM-SHA-512
# Create user with custom iterations (default: 4096)
kafkactl create user myuser --password mypassword --iterations 8192
# Create user with custom base64-encoded salt
kafkactl create user myuser --password mypassword --salt "c2FsdA=="
Update existing user credentials:
# Update user password (keeps existing mechanism)
kafkactl alter user myuser --password newpassword
# Update user with different mechanism
kafkactl alter user myuser --password newpassword --mechanism SCRAM-SHA-512
# Update with custom iterations
kafkactl alter user myuser --password newpassword --iterations 16384
Remove SCRAM credentials by mechanism:
# Delete SCRAM-SHA-256 credentials (default)
kafkactl delete user myuser --mechanism SCRAM-SHA-256
# Delete SCRAM-SHA-512 credentials
kafkactl delete user myuser --mechanism SCRAM-SHA-512
# Note: A user may have multiple mechanisms, delete each separately
Get all SCRAM users and their mechanisms:
# List all users (table format)
kafkactl get users
# List users in JSON format
kafkactl get users -o json
# List users in YAML format
kafkactl get users -o yaml
Get detailed information about a specific user:
# Describe user (table format)
kafkactl describe user myuser
# Describe user in JSON format
kafkactl describe user myuser -o json
# Describe user in YAML format
kafkactl describe user myuser -o yaml
- Salt Generation: kafkactl automatically generates cryptographically secure random salts unless custom salts are provided
- Password Security: Passwords are transmitted securely to Kafka and never stored by kafkactl
- Mechanism Support: Users can have credentials for multiple SCRAM mechanisms simultaneously
- Admin Privileges: SCRAM user management requires admin-level access to the Kafka cluster
A single user can have credentials for multiple SCRAM mechanisms:
# Create user with SCRAM-SHA-256
kafkactl create user myuser --password mypassword --mechanism SCRAM-SHA-256
# Add SCRAM-SHA-512 credentials to the same user
kafkactl create user myuser --password mypassword --mechanism SCRAM-SHA-512
# User now has both mechanisms
kafkactl describe user myuser