librdkafka configuration parameters: enable.auto.offset.store

enable.auto.offset.store

Configuration properties

Global configuration properties

Property | C/P | Range | Default | Importance | Description
builtin.features | * | | gzip, snappy, ssl, sasl, regex, lz4, sasl_gssapi, sasl_plain, sasl_scram, plugins, zstd, sasl_oauthbearer, http, oidc | low | Indicates the builtin features for this build of librdkafka. An application can either query this value or attempt to set it with its list of required features to check for library support.
Type: CSV flags
client.id | * | | rdkafka | low | Client identifier.
Type: string
metadata.broker.list | * | | | high | Initial list of brokers as a CSV list of broker host or host:port. The application may also use rd_kafka_brokers_add() to add brokers during runtime.
Type: string
bootstrap.servers | * | | | high | Alias for metadata.broker.list: Initial list of brokers as a CSV list of broker host or host:port. The application may also use rd_kafka_brokers_add() to add brokers during runtime. See the construction example following this table.
Type: string
message.max.bytes | * | 1000 … 1000000000 | 1000000 | medium | Maximum Kafka protocol request message size. Due to differing framing overhead between protocol versions the producer is unable to reliably enforce a strict max message limit at produce time and may exceed the maximum size by one message in protocol ProduceRequests; the broker will enforce the topic's max.message.bytes limit (see Apache Kafka documentation).
Type: integer
message.copy.max.bytes | * | 0 … 1000000000 | 65535 | low | Maximum size for a message to be copied to buffer. Messages larger than this will be passed by reference (zero-copy) at the expense of larger iovecs.
Type: integer
receive.message.max.bytes | * | 1000 … 2147483647 | 100000000 | medium | Maximum Kafka protocol response message size. This serves as a safety precaution to avoid memory exhaustion in case of protocol hiccups. This value must be at least fetch.max.bytes + 512 to allow for protocol overhead; the value is adjusted automatically unless the configuration property is explicitly set.
Type: integer
max.in.flight.requests.per.connection | * | 1 … 1000000 | 1000000 | low | Maximum number of in-flight requests per broker connection. This is a generic property applied to all broker communication, however it is primarily relevant to produce requests. In particular, note that other mechanisms limit the number of outstanding consumer fetch requests per broker to one.
Type: integer
max.in.flight | * | 1 … 1000000 | 1000000 | low | Alias for max.in.flight.requests.per.connection: Maximum number of in-flight requests per broker connection. This is a generic property applied to all broker communication, however it is primarily relevant to produce requests. In particular, note that other mechanisms limit the number of outstanding consumer fetch requests per broker to one.
Type: integer
topic.metadata.refresh.interval.ms | * | -1 … 3600000 | 300000 | low | Period of time in milliseconds at which topic and broker metadata is refreshed in order to proactively discover any new brokers, topics, partitions or partition leader changes. Use -1 to disable the intervalled refresh (not recommended). If there are no locally referenced topics (no topic objects created, no messages produced, no subscription or no assignment) then only the broker list will be refreshed every interval but no more often than every 10s.
Type: integer
metadata.max.age.ms | * | 1 … 86400000 | 900000 | low | Metadata cache max age. Defaults to topic.metadata.refresh.interval.ms * 3.
Type: integer
topic.metadata.refresh.fast.interval.ms | * | 1 … 60000 | 250 | low | When a topic loses its leader a new metadata request will be enqueued with this initial interval, exponentially increasing until the topic metadata has been refreshed. This is used to recover quickly from transitioning leader brokers.
Type: integer
topic.metadata.refresh.fast.cnt | * | 0 … 1000 | 10 | low | DEPRECATED No longer used.
Type: integer
topic.metadata.refresh.sparse | * | true, false | true | low | Sparse metadata requests (consumes less network bandwidth).
Type: boolean
topic.metadata.propagation.max.ms | * | 0 … 3600000 | 30000 | low | Apache Kafka topic creation is asynchronous and it takes some time for a new topic to propagate throughout the cluster to all brokers. If a client requests topic metadata after manual topic creation but before the topic has been fully propagated to the broker the client is requesting metadata from, the topic will seem to be non-existent and the client will mark the topic as such, failing queued produced messages with ERR__UNKNOWN_TOPIC. This setting delays marking a topic as non-existent until the configured propagation max time has passed. The maximum propagation time is calculated from the time the topic is first referenced in the client, e.g., on produce().
Type: integer
topic.blacklist | * | | | low | Topic blacklist, a comma-separated list of regular expressions for matching topic names that should be ignored in broker metadata information as if the topics did not exist.
Type: pattern list
debug | * | generic, broker, topic, metadata, feature, queue, msg, protocol, cgrp, security, fetch, interceptor, plugin, consumer, admin, eos, mock, assignor, conf, all | | medium | A comma-separated list of debug contexts to enable. Detailed Producer debugging: broker,topic,msg. Consumer: consumer,cgrp,topic,fetch.
Type: CSV flags
socket.timeout.ms | * | 10 … 300000 | 60000 | low | Default timeout for network requests. Producer: ProduceRequests will use the lesser value of socket.timeout.ms and remaining message.timeout.ms for the first message in the batch. Consumer: FetchRequests will use fetch.wait.max.ms + socket.timeout.ms. Admin: Admin requests will use socket.timeout.ms or the explicitly set rd_kafka_AdminOptions_set_operation_timeout() value.
Type: integer
socket.blocking.max.ms | * | 1 … 60000 | 1000 | low | DEPRECATED No longer used.
Type: integer
socket.send.buffer.bytes | * | 0 … 100000000 | 0 | low | Broker socket send buffer size. System default is used if 0.
Type: integer
socket.receive.buffer.bytes | * | 0 … 100000000 | 0 | low | Broker socket receive buffer size. System default is used if 0.
Type: integer
socket.keepalive.enable | * | true, false | false | low | Enable TCP keep-alives (SO_KEEPALIVE) on broker sockets.
Type: boolean
socket.nagle.disable | * | true, false | false | low | Disable the Nagle algorithm (TCP_NODELAY) on broker sockets.
Type: boolean
socket.max.fails | * | 0 … 1000000 | 1 | low | Disconnect from broker when this number of send failures (e.g., timed out requests) is reached. Disable with 0. WARNING: It is highly recommended to leave this setting at its default value of 1 to avoid the client and broker becoming desynchronized in case of request timeouts. NOTE: The connection is automatically re-established.
Type: integer
broker.address.ttl | * | 0 … 86400000 | 1000 | low | How long to cache broker address resolution results (milliseconds).
Type: integer
broker.address.family | * | any, v4, v6 | any | low | Allowed broker IP address families: any, v4, v6.
Type: enum value
connections.max.idle.ms | * | 0 … 2147483647 | 0 | medium | Close broker connections after the specified time of inactivity. Disable with 0. If this property is left at its default value some heuristics are performed to determine a suitable default value; this is currently limited to identifying brokers on Azure (see librdkafka issue #3109 for more info).
Type: integer
reconnect.backoff.jitter.ms | * | 0 … 3600000 | 0 | low | DEPRECATED No longer used. See reconnect.backoff.ms and reconnect.backoff.max.ms.
Type: integer
reconnect.backoff.ms | * | 0 … 3600000 | 100 | medium | The initial time to wait before reconnecting to a broker after the connection has been closed. The time is increased exponentially until reconnect.backoff.max.ms is reached. -25% to +50% jitter is applied to each reconnect backoff. A value of 0 disables the backoff and reconnects immediately.
Type: integer
reconnect.backoff.max.ms | * | 0 … 3600000 | 10000 | medium | The maximum time to wait before reconnecting to a broker after the connection has been closed.
Type: integer
statistics.interval.ms | * | 0 … 86400000 | 0 | high | librdkafka statistics emit interval. The application also needs to register a stats callback using rd_kafka_conf_set_stats_cb(). The granularity is 1000ms. A value of 0 disables statistics. See the stats callback example following this table.
Type: integer
enabled_events | * | 0 … 2147483647 | 0 | low | See rd_kafka_conf_set_events().
Type: integer
error_cb | * | | | low | Error callback (set with rd_kafka_conf_set_error_cb()).
Type: see dedicated API
throttle_cb | * | | | low | Throttle callback (set with rd_kafka_conf_set_throttle_cb()).
Type: see dedicated API
stats_cb | * | | | low | Statistics callback (set with rd_kafka_conf_set_stats_cb()).
Type: see dedicated API
log_cb | * | | | low | Log callback (set with rd_kafka_conf_set_log_cb()).
Type: see dedicated API
log_level | * | 0 … 7 | 6 | low | Logging level (syslog(3) levels).
Type: integer
log.queue | * | true, false | false | low | Disable spontaneous log_cb from internal librdkafka threads; instead enqueue log messages on the queue set with rd_kafka_set_log_queue() and serve log callbacks or events through the standard poll APIs. NOTE: Log messages will linger in a temporary queue until the log queue has been set.
Type: boolean
log.thread.name | * | true, false | true | low | Print internal thread name in log messages (useful for debugging librdkafka internals).
Type: boolean
enable.random.seed | * | true, false | true | low | If enabled librdkafka will initialize the PRNG with srand(current_time.milliseconds) on the first invocation of rd_kafka_new() (required only if rand_r() is not available on your platform). If disabled the application must call srand() prior to calling rd_kafka_new().
Type: boolean
log.connection.close | * | true, false | true | low | Log broker disconnects. It might be useful to turn this off when interacting with 0.9 brokers with an aggressive connection.max.idle.ms value.
Type: boolean
background_event_cb | * | | | low | Background queue event callback (set with rd_kafka_conf_set_background_event_cb()).
Type: see dedicated API
socket_cb | * | | | low | Socket creation callback to provide race-free CLOEXEC.
Type: see dedicated API
connect_cb | * | | | low | Socket connect callback.
Type: see dedicated API
closesocket_cb | * | | | low | Socket close callback.
Type: see dedicated API
open_cb | * | | | low | File open callback to provide race-free CLOEXEC.
Type: see dedicated API
opaque | * | | | low | Application opaque (set with rd_kafka_conf_set_opaque()).
Type: see dedicated API
default_topic_conf | * | | | low | Default topic configuration for automatically subscribed topics.
Type: see dedicated API
internal.termination.signal | * | 0 … 128 | 0 | low | Signal that librdkafka will use to quickly terminate on rd_kafka_destroy(). If this signal is not set then there will be a delay before rd_kafka_wait_destroyed() returns true as internal threads are timing out their system calls. If this signal is set however the delay will be minimal. The application should mask this signal as an internal signal handler is installed.
Type: integer
api.version.request | * | true, false | true | high | Request broker's supported API versions to adjust functionality to available protocol features. If set to false, or the ApiVersionRequest fails, the fallback version broker.version.fallback will be used. NOTE: Depends on broker version >= 0.10.0. If the request is not supported by (an older) broker the broker.version.fallback fallback is used.
Type: boolean
api.version.request.timeout.ms | * | 1 … 300000 | 10000 | low | Timeout for broker API version requests.
Type: integer
api.version.fallback.ms | * | 0 … 604800000 | 0 | medium | Dictates how long the broker.version.fallback fallback is used in the case the ApiVersionRequest fails. NOTE: The ApiVersionRequest is only issued when a new connection to the broker is made (such as after an upgrade).
Type: integer
broker.version.fallback | * | | 0.10.0 | medium | Older broker versions (before 0.10.0) provide no way for a client to query for supported protocol features (ApiVersionRequest, see api.version.request) making it impossible for the client to know what features it may use. As a workaround a user may set this property to the expected broker version and the client will automatically adjust its feature set accordingly if the ApiVersionRequest fails (or is disabled). The fallback broker version will be used for api.version.fallback.ms. Valid values are: 0.9.0, 0.8.2, 0.8.1, 0.8.0. Any other value >= 0.10, such as 0.10.2.1, enables ApiVersionRequests.
Type: string
security.protocol | * | plaintext, ssl, sasl_plaintext, sasl_ssl | plaintext | high | Protocol used to communicate with brokers. See the SASL example following this table.
Type: enum value
ssl.cipher.suites | * | | | low | A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using the TLS or SSL network protocol. See manual page for ciphers(1) and SSL_CTX_set_cipher_list(3).
Type: string
ssl.curves.list | * | | | low | The supported-curves extension in the TLS ClientHello message specifies the curves (standard/named, or 'explicit' GF(2^k) or GF(p)) the client is willing to have the server use. See manual page for SSL_CTX_set1_curves_list(3). OpenSSL >= 1.0.2 required.
Type: string
ssl.sigalgs.list | * | | | low | The client uses the TLS ClientHello signature_algorithms extension to indicate to the server which signature/hash algorithm pairs may be used in digital signatures. See manual page for SSL_CTX_set1_sigalgs_list(3). OpenSSL >= 1.0.2 required.
Type: string
ssl.key.location | * | | | low | Path to client's private key (PEM) used for authentication.
Type: string
ssl.key.password | * | | | low | Private key passphrase (for use with ssl.key.location and set_ssl_cert()).
Type: string
ssl.key.pem | * | | | low | Client's private key string (PEM format) used for authentication.
Type: string
ssl_key | * | | | low | Client's private key as set by rd_kafka_conf_set_ssl_cert().
Type: see dedicated API
ssl.certificate.location | * | | | low | Path to client's public key (PEM) used for authentication.
Type: string
ssl.certificate.pem | * | | | low | Client's public key string (PEM format) used for authentication.
Type: string
ssl_certificate | * | | | low | Client's public key as set by rd_kafka_conf_set_ssl_cert().
Type: see dedicated API
ssl.ca.location | * | | | low | File or directory path to CA certificate(s) for verifying the broker's key. Defaults: On Windows the system's CA certificates are automatically looked up in the Windows Root certificate store. On Mac OSX this configuration defaults to probe. It is recommended to install openssl using Homebrew, to provide CA certificates. On Linux install the distribution's ca-certificates package. If OpenSSL is statically linked or ssl.ca.location is set to probe, a list of standard paths will be probed and the first one found will be used as the default CA certificate location path. If OpenSSL is dynamically linked the OpenSSL library's default path will be used (see OPENSSLDIR in openssl version -a).
Type: string
ssl.ca.pem | * | | | low | CA certificate string (PEM format) for verifying the broker's key.
Type: string
ssl_ca | * | | | low | CA certificate as set by rd_kafka_conf_set_ssl_cert().
Type: see dedicated API
ssl.ca.certificate.stores | * | | Root | low | Comma-separated list of Windows Certificate stores to load CA certificates from. Certificates will be loaded in the same order as stores are specified. If no certificates can be loaded from any of the specified stores an error is logged and the OpenSSL library's default CA location is used instead. Store names are typically one or more of: MY, Root, Trust, CA.
Type: string
ssl.crl.location | * | | | low | Path to CRL for verifying broker's certificate validity.
Type: string
ssl.keystore.location | * | | | low | Path to client's keystore (PKCS#12) used for authentication.
Type: string
ssl.keystore.password | * | | | low | Client's keystore (PKCS#12) password.
Type: string
ssl.engine.location | * | | | low | Path to OpenSSL engine library. OpenSSL >= 1.1.0 required.
Type: string
ssl.engine.id | * | | dynamic | low | OpenSSL engine id: the name used for loading the engine.
Type: string
ssl_engine_callback_data | * | | | low | OpenSSL engine callback data (set with rd_kafka_conf_set_engine_callback_data()).
Type: see dedicated API
enable.ssl.certificate.verification | * | true, false | true | low | Enable OpenSSL's builtin broker (server) certificate verification. This verification can be extended by the application by implementing a certificate_verify_cb.
Type: boolean
ssl.endpoint.identification.algorithm | * | none, https | none | low | Endpoint identification algorithm to validate broker hostname using the broker certificate. https - Server (broker) hostname verification as specified in RFC 2818. none - No endpoint verification. OpenSSL >= 1.0.2 required.
Type: enum value
ssl.certificate.verify_cb | * | | | low | Callback to verify the broker certificate chain.
Type: see dedicated API
sasl.mechanisms | * | | GSSAPI | high | SASL mechanism to use for authentication. Supported: GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, OAUTHBEARER. NOTE: Despite the name, only one mechanism must be configured.
Type: string
sasl.mechanism | * | | GSSAPI | high | Alias for sasl.mechanisms: SASL mechanism to use for authentication. Supported: GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, OAUTHBEARER. NOTE: Despite the name, only one mechanism must be configured.
Type: string
sasl.kerberos.service.name | * | | kafka | low | Kerberos principal name that Kafka runs as, not including /hostname@REALM.
Type: string
sasl.kerberos.principal | * | | kafkaclient | low | This client's Kerberos principal name. (Not supported on Windows, will use the logon user's principal.)
Type: string
sasl.kerberos.kinit.cmd | * | | kinit -R -t "%{sasl.kerberos.keytab}" -k %{sasl.kerberos.principal} || kinit -t "%{sasl.kerberos.keytab}" -k %{sasl.kerberos.principal} | low | Shell command to refresh or acquire the client's Kerberos ticket. This command is executed on client creation and every sasl.kerberos.min.time.before.relogin (0 = disable). %{config.prop.name} is replaced by the corresponding config object value.
Type: string
sasl.kerberos.keytab | * | | | low | Path to Kerberos keytab file. This configuration property is only used as a variable in sasl.kerberos.kinit.cmd as ... -t "%{sasl.kerberos.keytab}".
Type: string
sasl.kerberos.min.time.before.relogin | * | 0 … 86400000 | 60000 | low | Minimum time in milliseconds between key refresh attempts. Disable automatic key refresh by setting this property to 0.
Type: integer
sasl.username | * | | | high | SASL username for use with the PLAIN and SASL-SCRAM-… mechanisms.
Type: string
sasl.password | * | | | high | SASL password for use with the PLAIN and SASL-SCRAM-… mechanisms.
Type: string
sasl.oauthbearer.config | * | | | low | SASL/OAUTHBEARER configuration. The format is implementation-dependent and must be parsed accordingly. The default unsecured token implementation (see https://tools.ietf.org/html/rfc7515#appendix-A.5) recognizes space-separated name=value pairs with valid names including principalClaimName, principal, scopeClaimName, scope, and lifeSeconds. The default value for principalClaimName is "sub", the default value for scopeClaimName is "scope", and the default value for lifeSeconds is 3600. The scope value is CSV format with the default value being no/empty scope. For example: principalClaimName=azp principal=admin scopeClaimName=roles scope=role1,role2 lifeSeconds=600. In addition, SASL extensions can be communicated to the broker via extension_NAME=value. For example: principal=admin extension_traceId=123.
Type: string
enable.sasl.oauthbearer.unsecure.jwt | * | true, false | false | low | Enable the builtin unsecure JWT OAUTHBEARER token handler if no oauthbearer_refresh_cb has been set. This builtin handler should only be used for development or testing, and not in production.
Type: boolean
oauthbearer_token_refresh_cb | * | | | low | SASL/OAUTHBEARER token refresh callback (set with rd_kafka_conf_set_oauthbearer_token_refresh_cb(), triggered by rd_kafka_poll(), et al.). This callback will be triggered when it is time to refresh the client's OAUTHBEARER token. Also see rd_kafka_conf_enable_sasl_queue().
Type: see dedicated API
sasl.oauthbearer.method | * | default, oidc | default | low | Set to "default" or "oidc" to control which login method is used. If set to "oidc", the OAuth/OIDC login method will be used. sasl.oauthbearer.client.id, sasl.oauthbearer.client.secret, sasl.oauthbearer.scope, and sasl.oauthbearer.token.endpoint.url are needed if sasl.oauthbearer.method is set to "oidc".
Type: enum value
sasl.oauthbearer.client.id | * | | | low | A public identifier for the application. It must be unique across all clients that the authorization server handles. Only used when sasl.oauthbearer.method is set to "oidc".
Type: string
sasl.oauthbearer.client.secret | * | | | low | A client secret only known to the application and the authorization server. This should be a sufficiently random string that is not guessable. Only used when sasl.oauthbearer.method is set to "oidc".
Type: string
sasl.oauthbearer.scope | * | | | low | The client uses this to specify the scope of the access request to the broker. Only used when sasl.oauthbearer.method is set to "oidc".
Type: string
sasl.oauthbearer.extensions | * | | | low | Allow additional information to be provided to the broker as a comma-separated list of key=value pairs, e.g., "supportFeatureX=true,organizationId=sales-emea". Only used when sasl.oauthbearer.method is set to "oidc".
Type: string
sasl.oauthbearer.token.endpoint.url | * | | | low | OAuth issuer token endpoint HTTP(S) URI used to retrieve the token. Only used when sasl.oauthbearer.method is set to "oidc".
Type: string
plugin.library.paths | * | | | low | List of plugin libraries to load (; separated). The library search path is platform dependent (see dlopen(3) for Unix and LoadLibrary() for Windows). If no filename extension is specified the platform-specific extension (such as .dll or .so) will be appended automatically.
Type: string
interceptors | * | | | low | Interceptors added through rd_kafka_conf_interceptor_add_…() and any configuration handled by interceptors.
Type: see dedicated API
group.id | C | | | high | Client group id string. All clients sharing the same group.id belong to the same group.
Type: string
group.instance.id | C | | | medium | Enable static group membership. Static group members are able to leave and rejoin a group within the configured session.timeout.ms without prompting a group rebalance. This should be used in combination with a larger session.timeout.ms to avoid group rebalances caused by transient unavailability (e.g. process restarts). Requires broker version >= 2.3.0.
Type: string
partition.assignment.strategy | C | | range,roundrobin | medium | The name of one or more partition assignment strategies. The elected group leader will use a strategy supported by all members of the group to assign partitions to group members. If there is more than one eligible strategy, preference is determined by the order of this list (strategies earlier in the list have higher priority). Cooperative and non-cooperative (eager) strategies must not be mixed. Available strategies: range, roundrobin, cooperative-sticky.
Type: string
session.timeout.ms | C | 1 … 3600000 | 45000 | high | Client group session and failure detection timeout. The consumer sends periodic heartbeats (heartbeat.interval.ms) to indicate its liveness to the broker. If no heartbeats are received by the broker for a group member within the session timeout, the broker will remove the consumer from the group and trigger a rebalance. The allowed range is configured with the broker configuration properties group.min.session.timeout.ms and group.max.session.timeout.ms. Also see max.poll.interval.ms.
Type: integer
heartbeat.interval.ms | C | 1 … 3600000 | 3000 | low | Group session keepalive heartbeat interval.
Type: integer
group.protocol.type | C | | consumer | low | Group protocol type. NOTE: Currently, the only supported group protocol type is consumer.
Type: string
coordinator.query.interval.ms | C | 1 … 3600000 | 600000 | low | How often to query for the current client group coordinator. If the currently assigned coordinator is down the configured query interval will be divided by ten to more quickly recover in case of coordinator reassignment.
Type: integer
max.poll.interval.ms | C | 1 … 86400000 | 300000 | high | Maximum allowed time between calls to consume messages (e.g., rd_kafka_consumer_poll()) for high-level consumers. If this interval is exceeded the consumer is considered failed and the group will rebalance in order to reassign the partitions to another consumer group member. Warning: Offset commits may not be possible at this point. Note: It is recommended to set enable.auto.offset.store=false for long-time processing applications and then explicitly store offsets (using offsets_store()) after message processing, to make sure offsets are not auto-committed before processing has finished. The interval is checked two times per second. See KIP-62 for more information.
Type: integer
enable.auto.commit | C | true, false | true | high | Automatically and periodically commit offsets in the background. Note: setting this to false does not prevent the consumer from fetching previously committed start offsets. To circumvent this behaviour set specific start offsets per partition in the call to assign().
Type: boolean
auto.commit.interval.ms | C | 0 … 86400000 | 5000 | medium | The frequency in milliseconds that the consumer offsets are committed (written) to offset storage (0 = disable). This setting is used by the high-level consumer.
Type: integer
enable.auto.offset.store | C | true, false | true | high | Automatically store the offset of the last message provided to the application. The offset store is an in-memory store of the next offset to (auto-)commit for each partition. See the manual offset storing example following this table.
Type: boolean
queued.min.messages | C | 1 … 10000000 | 100000 | medium | Minimum number of messages per topic+partition librdkafka tries to maintain in the local consumer queue.
Type: integer
queued.max.messages.kbytes | C | 1 … 2097151 | 65536 | medium | Maximum number of kilobytes of queued pre-fetched messages in the local consumer queue. If using the high-level consumer this setting applies to the single consumer queue, regardless of the number of partitions. When using the legacy simple consumer or when separate partition queues are used this setting applies per partition. This value may be overshot by fetch.message.max.bytes. This property has higher priority than queued.min.messages.
Type: integer
fetch.wait.max.ms | C | 0 … 300000 | 500 | low | Maximum time the broker may wait to fill the Fetch response with fetch.min.bytes of messages.
Type: integer
fetch.message.max.bytes | C | 1 … 1000000000 | 1048576 | medium | Initial maximum number of bytes per topic+partition to request when fetching messages from the broker. If the client encounters a message larger than this value it will gradually try to increase it until the entire message can be fetched.
Type: integer
max.partition.fetch.bytes | C | 1 … 1000000000 | 1048576 | medium | Alias for fetch.message.max.bytes: Initial maximum number of bytes per topic+partition to request when fetching messages from the broker. If the client encounters a message larger than this value it will gradually try to increase it until the entire message can be fetched.
Type: integer
fetch.max.bytes | C | 0 … 2147483135 | 52428800 | medium | Maximum amount of data the broker shall return for a Fetch request. Messages are fetched in batches by the consumer and if the first message batch in the first non-empty partition of the Fetch request is larger than this value, then the message batch will still be returned to ensure the consumer can make progress. The maximum message batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (broker topic config). fetch.max.bytes is automatically adjusted upwards to be at least message.max.bytes (consumer config).
Type: integer
fetch.min.bytes | C | 1 … 100000000 | 1 | low | Minimum number of bytes the broker responds with. If fetch.wait.max.ms expires the accumulated data will be sent to the client regardless of this setting.
Type: integer
fetch.error.backoff.ms | C | 0 … 300000 | 500 | medium | How long to postpone the next fetch request for a topic+partition in case of a fetch error.
Type: integer
offset.store.method | C | none, file, broker | broker | low | DEPRECATED Offset commit store method: 'file' - DEPRECATED: local file store (offset.store.path, et al.), 'broker' - broker commit store (requires Apache Kafka 0.8.2 or later on the broker).
Type: enum value
isolation.level | C | read_uncommitted, read_committed | read_committed | high | Controls how to read messages written transactionally: read_committed - only return transactional messages which have been committed. read_uncommitted - return all messages, even transactional messages which have been aborted.
Type: enum value
consume_cb | C | | | low | Message consume callback (set with rd_kafka_conf_set_consume_cb()).
Type: see dedicated API
rebalance_cb | C | | | low | Called after consumer group has been rebalanced (set with rd_kafka_conf_set_rebalance_cb()).
Type: see dedicated API
offset_commit_cb | C | | | low | Offset commit result propagation callback (set with rd_kafka_conf_set_offset_commit_cb()).
Type: see dedicated API
enable.partition.eof | C | true, false | false | low | Emit RD_KAFKA_RESP_ERR__PARTITION_EOF event whenever the consumer reaches the end of a partition.
Type: boolean
check.crcs | C | true, false | false | medium | Verify CRC32 of consumed messages, ensuring no on-the-wire or on-disk corruption to the messages occurred. This check comes at slightly increased CPU usage.
Type: boolean
allow.auto.create.topics | C | true, false | false | low | Allow automatic topic creation on the broker when subscribing to or assigning non-existent topics. The broker must also be configured with auto.create.topics.enable=true for this configuration to take effect. Note: The default value (false) is different from the Java consumer (true). Requires broker version >= 0.11.0.0; for older broker versions only the broker configuration applies.
Type: boolean
client.rack | * | | | low | A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config broker.rack.
Type: string
transactional.id | P | | | high | Enables the transactional producer. The transactional.id is used to identify the same transactional producer instance across process restarts. It allows the producer to guarantee that transactions corresponding to earlier instances of the same producer have been finalized prior to starting any new transactions, and that any zombie instances are fenced off. If no transactional.id is provided, then the producer is limited to idempotent delivery (if enable.idempotence is set). Requires broker version >= 0.11.0.
Type: string
transaction.timeout.ms | P | 1000 … 2147483647 | 60000 | medium | The maximum amount of time in milliseconds that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction. If this value is larger than the transaction.max.timeout.ms setting in the broker, the init_transactions() call will fail with ERR_INVALID_TRANSACTION_TIMEOUT. The transaction timeout automatically adjusts message.timeout.ms and socket.timeout.ms, unless explicitly configured, in which case they must not exceed the transaction timeout (socket.timeout.ms must be at least 100ms lower than transaction.timeout.ms). This is also the default timeout value if no timeout (-1) is supplied to the transactional API methods.
Type: integer
enable.idempotence | P | true, false | false | high | When set to true, the producer will ensure that messages are successfully produced exactly once and in the original produce order. The following configuration properties are adjusted automatically (if not modified by the user) when idempotence is enabled: max.in.flight.requests.per.connection=5 (must be less than or equal to 5), retries=INT32_MAX (must be greater than 0), acks=all, queuing.strategy=fifo. Producer instantiation will fail if user-supplied configuration is incompatible. See the idempotent producer example following this table.
Type: boolean
enable.gapless.guarantee | P | true, false | false | low | EXPERIMENTAL: subject to change or removal. When set to true, any error that could result in a gap in the produced message series when a batch of messages fails will raise a fatal error (ERR__GAPLESS_GUARANTEE) and stop the producer. Messages failing due to message.timeout.ms are not covered by this guarantee. Requires enable.idempotence=true.
Type: boolean
queue.buffering.max.messages | P | 1 … 10000000 | 100000 | high | Maximum number of messages allowed on the producer queue. This queue is shared by all topics and partitions.
Type: integer
queue.buffering.max.kbytes | P | 1 … 2147483647 | 1048576 | high | Maximum total message size sum allowed on the producer queue. This queue is shared by all topics and partitions. This property has higher priority than queue.buffering.max.messages.
Type: integer
queue.buffering.max.ms | P | 0 … 900000 | 5 | high | Delay in milliseconds to wait for messages in the producer queue to accumulate before constructing message batches (MessageSets) to transmit to brokers. A higher value allows larger and more effective (less overhead, improved compression) batches of messages to accumulate at the expense of increased message delivery latency.
Type: float
linger.ms | P | 0 … 900000 | 5 | high | Alias for queue.buffering.max.ms: Delay in milliseconds to wait for messages in the producer queue to accumulate before constructing message batches (MessageSets) to transmit to brokers. A higher value allows larger and more effective (less overhead, improved compression) batches of messages to accumulate at the expense of increased message delivery latency.
Type: float
message.send.max.retries | P | 0 … 2147483647 | 2147483647 | high | How many times to retry sending a failing Message. Note: retrying may cause reordering unless enable.idempotence is set to true.
Type: integer
retries | P | 0 … 2147483647 | 2147483647 | high | Alias for message.send.max.retries: How many times to retry sending a failing Message. Note: retrying may cause reordering unless enable.idempotence is set to true.
Type: integer
retry.backoff.ms | P | 1 … 300000 | 100 | medium | The backoff time in milliseconds before retrying a protocol request.
Type: integer
queue.buffering.backpressure.threshold | P | 1 … 1000000 | 1 | low | The threshold of outstanding not yet transmitted broker requests needed to backpressure the producer's message accumulator. If the number of not yet transmitted requests equals or exceeds this number, produce request creation that would have otherwise been triggered (for example, in accordance with linger.ms) will be delayed. A lower number yields larger and more effective batches. A higher value can improve latency when using compression on slow machines.
Type: integer
compression.codec | P | none, gzip, snappy, lz4, zstd | none | medium | Compression codec to use for compressing message sets. This is the default value for all topics; it may be overridden by the topic configuration property compression.codec.
Type: enum value
compression.type | P | none, gzip, snappy, lz4, zstd | none | medium | Alias for compression.codec: Compression codec to use for compressing message sets. This is the default value for all topics; it may be overridden by the topic configuration property compression.codec.
Type: enum value
batch.num.messages | P | 1 … 1000000 | 10000 | medium | Maximum number of messages batched in one MessageSet. The total MessageSet size is also limited by batch.size and message.max.bytes.
Type: integer
batch.size | P | 1 … 2147483647 | 1000000 | medium | Maximum size (in bytes) of all messages batched in one MessageSet, including protocol framing overhead. This limit is applied after the first message has been added to the batch, regardless of the first message's size; this is to ensure that messages that exceed batch.size are produced. The total MessageSet size is also limited by batch.num.messages and message.max.bytes.
Type: integer
delivery.report.only.error | P | true, false | false | low | Only provide delivery reports for failed messages.
Type: boolean
dr_cb | P | | | low | Delivery report callback (set with rd_kafka_conf_set_dr_cb()).
Type: see dedicated API
dr_msg_cb | P | | | low | Delivery report callback (set with rd_kafka_conf_set_dr_msg_cb()).
Type: see dedicated API
sticky.partitioning.linger.ms | P | 0 … 900000 | 10 | low | Delay in milliseconds to wait to assign new sticky partitions for each topic. By default, set to double the time of linger.ms. To disable sticky behavior, set to 0. This behavior affects messages with the key NULL in all cases, and messages with key lengths of zero when the consistent_random partitioner is in use. These messages would otherwise be assigned randomly. A higher value allows for more effective batching of these messages.
Type: integer
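
The examples below sketch how a handful of the global properties above are used from C. First, client construction: every property is applied with rd_kafka_conf_set() before the handle is created. This is a minimal sketch; the broker address localhost:9092 and the client id example-client are placeholder assumptions, not values from the table.

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

int main(void) {
    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();

    /* Every property in the table above is set the same way; a
     * non-OK return code means the name or value was rejected. */
    if (rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
        rd_kafka_conf_set(conf, "client.id", "example-client",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        fprintf(stderr, "conf_set failed: %s\n", errstr);
        return 1;
    }

    /* rd_kafka_new() takes ownership of conf on success. */
    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf,
                                  errstr, sizeof(errstr));
    if (!rk) {
        fprintf(stderr, "rd_kafka_new failed: %s\n", errstr);
        return 1;
    }

    rd_kafka_destroy(rk);
    return 0;
}
```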
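
Next, manual offset storing, the pattern recommended under max.poll.interval.ms and enable.auto.offset.store above: auto-commit stays enabled, automatic offset *storing* is disabled, and the application stores the next offset to commit only after a message has been fully processed. A sketch, with hypothetical group id example-group and topic example-topic, and most error checking elided:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

static void process(const rd_kafka_message_t *m) {
    (void)m; /* application-specific processing goes here */
}

int main(void) {
    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();

    rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                      errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "group.id", "example-group",
                      errstr, sizeof(errstr));
    /* Auto-commit stays enabled, but offsets only become eligible
     * for commit once the application explicitly stores them. */
    rd_kafka_conf_set(conf, "enable.auto.commit", "true",
                      errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "enable.auto.offset.store", "false",
                      errstr, sizeof(errstr));

    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf,
                                  errstr, sizeof(errstr));
    if (!rk) {
        fprintf(stderr, "%s\n", errstr);
        return 1;
    }
    rd_kafka_poll_set_consumer(rk);

    rd_kafka_topic_partition_list_t *topics =
        rd_kafka_topic_partition_list_new(1);
    rd_kafka_topic_partition_list_add(topics, "example-topic",
                                      RD_KAFKA_PARTITION_UA);
    rd_kafka_subscribe(rk, topics);
    rd_kafka_topic_partition_list_destroy(topics);

    for (;;) { /* a real application would have a shutdown condition */
        rd_kafka_message_t *m = rd_kafka_consumer_poll(rk, 1000);
        if (!m)
            continue;
        if (!m->err) {
            process(m);
            /* Store the *next* offset to commit, after processing. */
            rd_kafka_topic_partition_list_t *offs =
                rd_kafka_topic_partition_list_new(1);
            rd_kafka_topic_partition_list_add(
                offs, rd_kafka_topic_name(m->rkt),
                m->partition)->offset = m->offset + 1;
            rd_kafka_offsets_store(rk, offs);
            rd_kafka_topic_partition_list_destroy(offs);
        }
        rd_kafka_message_destroy(m);
    }
}
```

If the process crashes before an offset is stored and committed, the message is redelivered after the rebalance, which is exactly the at-least-once behavior this pattern is meant to provide.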
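
statistics.interval.ms only has an effect once a stats callback is registered; a small sketch, assuming the callback is attached before rd_kafka_new():

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Called from rd_kafka_poll() with a JSON document; returning 0
 * tells librdkafka to free the json buffer itself. */
static int stats_cb(rd_kafka_t *rk, char *json, size_t json_len,
                    void *opaque) {
    (void)rk; (void)opaque;
    fprintf(stderr, "stats (%zu bytes): %.60s...\n", json_len, json);
    return 0;
}

void configure_stats(rd_kafka_conf_t *conf) {
    char errstr[512];
    rd_kafka_conf_set_stats_cb(conf, stats_cb);
    /* Emit statistics every 5 seconds (granularity is 1000 ms). */
    rd_kafka_conf_set(conf, "statistics.interval.ms", "5000",
                      errstr, sizeof(errstr));
}
```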
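
For security.protocol and the sasl.* properties, a typical SASL/SCRAM-over-TLS setup looks like the following sketch; the username, password, and CA bundle path are placeholders:

```c
#include <librdkafka/rdkafka.h>

/* Returns non-zero on success; errstr holds the failure reason. */
int configure_sasl(rd_kafka_conf_t *conf, char *errstr,
                   size_t errstr_size) {
    return rd_kafka_conf_set(conf, "security.protocol", "sasl_ssl",
                             errstr, errstr_size) == RD_KAFKA_CONF_OK &&
           rd_kafka_conf_set(conf, "sasl.mechanisms", "SCRAM-SHA-256",
                             errstr, errstr_size) == RD_KAFKA_CONF_OK &&
           rd_kafka_conf_set(conf, "sasl.username", "myuser",
                             errstr, errstr_size) == RD_KAFKA_CONF_OK &&
           rd_kafka_conf_set(conf, "sasl.password", "mypassword",
                             errstr, errstr_size) == RD_KAFKA_CONF_OK &&
           rd_kafka_conf_set(conf, "ssl.ca.location",
                             "/etc/ssl/certs/ca-certificates.crt",
                             errstr, errstr_size) == RD_KAFKA_CONF_OK;
}
```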
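
Finally, enable.idempotence is a single switch: librdkafka adjusts max.in.flight.requests.per.connection, retries, and acks automatically as described in its table entry, and rejects incompatible user-supplied values at configuration or instantiation time. A minimal sketch (broker address and topic name hypothetical):

```c
#include <stdio.h>
#include <string.h>
#include <librdkafka/rdkafka.h>

int main(void) {
    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();

    rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                      errstr, sizeof(errstr));
    if (rd_kafka_conf_set(conf, "enable.idempotence", "true",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        fprintf(stderr, "%s\n", errstr);
        return 1;
    }

    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf,
                                  errstr, sizeof(errstr));
    if (!rk) { /* fails here if user settings are incompatible */
        fprintf(stderr, "%s\n", errstr);
        return 1;
    }

    const char *msg = "hello";
    rd_kafka_producev(rk,
                      RD_KAFKA_V_TOPIC("example-topic"),
                      RD_KAFKA_V_VALUE((void *)msg, strlen(msg)),
                      RD_KAFKA_V_END);
    /* Wait for outstanding deliveries (message.timeout.ms applies). */
    rd_kafka_flush(rk, 10000);
    rd_kafka_destroy(rk);
    return 0;
}
```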

Topic configuration properties

Property | C/P | Range | Default | Importance | Description
request.required.acks | P | -1 … 1000 | -1 | high | This field indicates the number of acknowledgements the leader broker must receive from ISR brokers before responding to the request: 0=Broker does not send any response/ack to client, -1 or all=Broker will block until the message is committed by all in-sync replicas (ISRs). If there are fewer than min.insync.replicas (broker configuration) in the ISR set the produce request will fail.
Type: integer
acks | P | -1 … 1000 | -1 | high | Alias for request.required.acks: This field indicates the number of acknowledgements the leader broker must receive from ISR brokers before responding to the request: 0=Broker does not send any response/ack to client, -1 or all=Broker will block until the message is committed by all in-sync replicas (ISRs). If there are fewer than min.insync.replicas (broker configuration) in the ISR set the produce request will fail.
Type: integer
request.timeout.ms | P | 1 … 900000 | 30000 | medium | The ack timeout of the producer request in milliseconds. This value is only enforced by the broker and relies on request.required.acks being != 0.
Type: integer
message.timeout.ms | P | 0 … 2147483647 | 300000 | high | Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery. A time of 0 is infinite. This is the maximum time librdkafka may use to deliver a message (including retries). A delivery error occurs when either the retry count or the message timeout is exceeded. The message timeout is automatically adjusted to transaction.timeout.ms if transactional.id is configured.
Type: integer
delivery.timeout.ms | P | 0 … 2147483647 | 300000 | high | Alias for message.timeout.ms: Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery. A time of 0 is infinite. This is the maximum time librdkafka may use to deliver a message (including retries). A delivery error occurs when either the retry count or the message timeout is exceeded. The message timeout is automatically adjusted to transaction.timeout.ms if transactional.id is configured.
Type: integer
queuing.strategy | P | fifo, lifo | fifo | low | EXPERIMENTAL: subject to change or removal. DEPRECATED Producer queuing strategy. FIFO preserves produce ordering, while LIFO prioritizes new messages.
Type: enum value
produce.offset.report | P | true, false | false | low | DEPRECATED No longer used.
Type: boolean
partitioner | P | | consistent_random | high | Partitioner: random - random distribution, consistent - CRC32 hash of key (Empty and NULL keys are mapped to a single partition), consistent_random - CRC32 hash of key (Empty and NULL keys are randomly partitioned), murmur2 - Java Producer compatible Murmur2 hash of key (NULL keys are mapped to a single partition), murmur2_random - Java Producer compatible Murmur2 hash of key (NULL keys are randomly partitioned; this is functionally equivalent to the default partitioner in the Java Producer), fnv1a - FNV-1a hash of key (NULL keys are mapped to a single partition), fnv1a_random - FNV-1a hash of key (NULL keys are randomly partitioned).
Type: string
partitioner_cb | P | | | low | Custom partitioner callback (set with rd_kafka_topic_conf_set_partitioner_cb()). See the partitioner example following this table.
Type: see dedicated API
msg_order_cmp | P | | | low | EXPERIMENTAL: subject to change or removal. DEPRECATED Message queue ordering comparator (set with rd_kafka_topic_conf_set_msg_order_cmp()). Also see queuing.strategy.
Type: see dedicated API
opaque | * | | | low | Application opaque (set with rd_kafka_topic_conf_set_opaque()).
Type: see dedicated API
compression.codec | P | none, gzip, snappy, lz4, zstd, inherit | inherit | high | Compression codec to use for compressing message sets. inherit = inherit global compression.codec configuration.
Type: enum value
compression.type | P | none, gzip, snappy, lz4, zstd | none | medium | Alias for compression.codec: Compression codec to use for compressing message sets. This is the default value for all topics; it may be overridden by the topic configuration property compression.codec.
Type: enum value
compression.level | P | -1 … 12 | -1 | medium | Compression level parameter for the algorithm selected by configuration property compression.codec. Higher values will result in better compression at the cost of more CPU usage. Usable range is algorithm-dependent: [0-9] for gzip; [0-12] for lz4; only 0 for snappy; -1 = codec-dependent default compression level.
Type: integer
auto.commit.enable | C | true, false | true | low | DEPRECATED [LEGACY PROPERTY: This property is used by the simple legacy consumer only. When using the high-level KafkaConsumer, the global enable.auto.commit property must be used instead.] If true, periodically commit the offset of the last message handed to the application. This committed offset will be used when the process restarts to pick up where it left off. If false, the application will have to call rd_kafka_offset_store() to store an offset (optional). Offsets will be written to broker or local file according to offset.store.method.
Type: boolean
enable.auto.commit | C | true, false | true | low | DEPRECATED Alias for auto.commit.enable: [LEGACY PROPERTY: This property is used by the simple legacy consumer only. When using the high-level KafkaConsumer, the global enable.auto.commit property must be used instead.] If true, periodically commit the offset of the last message handed to the application. This committed offset will be used when the process restarts to pick up where it left off. If false, the application will have to call rd_kafka_offset_store() to store an offset (optional). Offsets will be written to broker or local file according to offset.store.method.
Type: boolean
auto.commit.interval.ms | C | 10 … 86400000 | 60000 | high | [LEGACY PROPERTY: This setting is used by the simple legacy consumer only. When using the high-level KafkaConsumer, the global auto.commit.interval.ms property must be used instead.] The frequency in milliseconds that the consumer offsets are committed (written) to offset storage.
Type: integer
auto.offset.reset | C | smallest, earliest, beginning, largest, latest, end, error | largest | high | Action to take when there is no initial offset in the offset store or the desired offset is out of range: 'smallest', 'earliest' - automatically reset the offset to the smallest offset, 'largest', 'latest' - automatically reset the offset to the largest offset, 'error' - trigger an error (ERR__AUTO_OFFSET_RESET) which is retrieved by consuming messages and checking 'message->err'. See the topic configuration example following this table.
Type: enum value
offset.store.path | C | | . | low | DEPRECATED Path to local file for storing offsets. If the path is a directory a filename will be automatically generated in that directory based on the topic and partition. File-based offset storage will be removed in a future version.
Type: string
offset.store.sync.interval.ms | C | -1 … 86400000 | -1 | low | DEPRECATED fsync() interval for the offset file, in milliseconds. Use -1 to disable syncing, and 0 for immediate sync after each write. File-based offset storage will be removed in a future version.
Type: integer
offset.store.method | C | file, broker | broker | low | DEPRECATED Offset commit store method: 'file' - DEPRECATED: local file store (offset.store.path, et al.), 'broker' - broker commit store (requires "group.id" to be configured and Apache Kafka 0.8.2 or later on the broker).
Type: enum value
consume.callback.max.messages | C | 0 … 1000000 | 0 | low | Maximum number of messages to dispatch in one rd_kafka_consume_callback*() call (0 = unlimited).
Type: integer

C/P legend: C = Consumer, P = Producer, * = both
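
Topic-level properties from this second table are set on an rd_kafka_topic_conf_t. One common pattern, sketched below, attaches them as default_topic_conf so that subscribed topics inherit them (newer librdkafka versions also accept topic properties set directly on the global configuration object):

```c
#include <librdkafka/rdkafka.h>

void configure_topic_defaults(rd_kafka_conf_t *conf) {
    char errstr[512];
    rd_kafka_topic_conf_t *tconf = rd_kafka_topic_conf_new();

    /* Consumer: where to start when no committed offset exists. */
    rd_kafka_topic_conf_set(tconf, "auto.offset.reset", "earliest",
                            errstr, sizeof(errstr));

    /* conf takes ownership of tconf. */
    rd_kafka_conf_set_default_topic_conf(conf, tconf);
}
```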
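
And a sketch of partitioner_cb: a deliberately trivial custom partitioner keyed on the first key byte, registered with rd_kafka_topic_conf_set_partitioner_cb(). Per the librdkafka header, a partitioner may be called from any thread, must not block, and must return a partition in [0, partition_cnt-1] or RD_KAFKA_PARTITION_UA.

```c
#include <librdkafka/rdkafka.h>

static int32_t first_byte_partitioner(const rd_kafka_topic_t *rkt,
                                      const void *keydata, size_t keylen,
                                      int32_t partition_cnt,
                                      void *rkt_opaque, void *msg_opaque) {
    (void)rkt_opaque; (void)msg_opaque;
    /* Illustration only: route by the first key byte; NULL/empty
     * keys go to partition 0. */
    int32_t p = keylen
        ? ((const unsigned char *)keydata)[0] % partition_cnt
        : 0;
    /* Only return partitions that are currently available. */
    return rd_kafka_topic_partition_available(rkt, p)
        ? p : RD_KAFKA_PARTITION_UA;
}

void install_partitioner(rd_kafka_topic_conf_t *tconf) {
    rd_kafka_topic_conf_set_partitioner_cb(tconf,
                                           first_byte_partitioner);
}
```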
