

In Kafka you can set properties on your producer to compress keys and values. I couldn't find any example or explanation describing a separate decompression step on the consumer end, so it appears that the decompression part is handled by the consumer itself: all you need to do is provide a valid/supported compression type using the compression.type ProducerConfig attribute while creating the producer. The documentation also says, under End-to-end Batch Compression, that a batch of messages can be clumped together, compressed, and sent to the server in this form; the batch will be written in compressed form, will remain compressed in the log, and will only be decompressed by the consumer.

The same codec choice shows up in a few places. The older producer configuration exposes it as compression.codec: "This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are 'none', 'gzip' and 'snappy'." The console producer takes a --compression-codec option (the compression codec: either none, gzip, snappy, or lz4). At the broker/topic level the setting is compression.type (accepted values include zstd, lz4, snappy, gzip, and producer; server default property: compression.type), which you can set with the CLI or in your server.properties file. Sketches of the producer property and of the topic-level override follow below.

As found in this article, the consumer works as follows: it has background "fetcher" threads that continuously fetch data in batches of 1 MB from the brokers and add them to an internal blocking queue. The consumer thread dequeues data from this blocking queue, decompresses it, and iterates through the messages.

By default, the Kafka consumer will use auto commit, where the offset is committed automatically in the background at a given interval. In case you want to force manual commits, you can use the KafkaManualCommit API from the Camel Exchange, stored on the message header; a Camel route sketch appears further below.

A few practical notes on the client side: you will be able to produce and consume faster (by a magnitude compared to the JS client); you can tweak the consumer and producer setup better (see the full list of config params); if tweaked correctly your process will consume less memory; and you get access to features like SSL, SASL, and Kerberos. The customer can configure base Topic/Stream settings inside of the …
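To make the producer-side setting concrete, here is a minimal sketch with the Java client. The broker address, topic name, and the choice of snappy are illustrative assumptions, not values taken from the text above.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Compression is configured on the producer: batches are compressed here,
        // stored compressed in the log, and transparently decompressed by the consumer.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // assumed topic name
            producer.flush();
        }
    }
}
```

A consumer reading that topic needs no compression-related settings of its own.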

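The topic-level compression.type override mentioned above is normally set with the CLI or in server.properties; as a programmatic alternative, here is a minimal sketch using the Java Admin client (broker address and topic name are again assumptions):

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TopicCompressionConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // assumed topic
            // "producer" keeps whatever codec the producer used; "snappy", "gzip",
            // "lz4" or "zstd" make the broker re-compress with that codec.
            AlterConfigOp setCompression = new AlterConfigOp(
                    new ConfigEntry("compression.type", "snappy"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(setCompression))).all().get();
        }
    }
}
```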
Either way, the consumer iterator transparently decompresses compressed data and only returns an uncompressed message, so there is nothing extra to configure for decompression on the consumer side.
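Coming back to offset commits: below is a minimal Camel route sketch for the manual-commit case mentioned earlier. The endpoint options and the exact package and method names (commit() vs. the older commitSync()) vary between Camel versions, so treat this as an illustration rather than a drop-in route; the topic and group id are assumptions.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kafka.KafkaConstants;
import org.apache.camel.component.kafka.consumer.KafkaManualCommit; // package differs in older Camel releases

public class ManualCommitRoute extends RouteBuilder {
    @Override
    public void configure() {
        // allowManualCommit exposes KafkaManualCommit on a message header;
        // autoCommitEnable=false turns off the default interval-based auto commit.
        from("kafka:my-topic?groupId=my-group&autoCommitEnable=false&allowManualCommit=true") // assumed topic/group
            .process(exchange -> {
                String body = exchange.getIn().getBody(String.class);
                // ... handle the record, then commit the offset explicitly:
                KafkaManualCommit manual = exchange.getIn()
                        .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
                if (manual != null) {
                    manual.commit(); // commitSync() on older Camel releases
                }
            });
    }
}
```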

# Kafka streams enable snappy compression upgrade
As per my understanding, the decompression is taken care of by the consumer itself. The affected library jar for snappy-java should be replaced with the newer version; the latest version of snappy-java (1.1.10.1, as of July 5, 2023) is backward compatible with all affected versions of Kafka. We advise all Kafka users to promptly upgrade to a version of snappy-java (1.1.10.1 or later) to mitigate this vulnerability.
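For applications that pick up snappy-java transitively through kafka-clients, one way to apply this advice is to pin the patched version in the build; a Maven sketch follows (a broker installation would typically instead replace the snappy-java jar under its libs directory). The kafka-clients coordinates are only an example of where the transitive dependency usually comes from.

```xml
<!-- pom.xml: pin the patched snappy-java that kafka-clients pulls in transitively -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.xerial.snappy</groupId>
      <artifactId>snappy-java</artifactId>
      <version>1.1.10.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```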
