Apache Kafka 1.0 Cookbook

You're reading from Apache Kafka 1.0 Cookbook: Over 100 practical recipes on using distributed enterprise messaging to handle real-time data

Product type: Paperback
Published in: Dec 2017
Publisher: Packt
ISBN-13: 9781787286849
Length: 250 pages
Edition: 1st Edition

Authors (2): Raúl Estrada, Alexey Zinoviev

Table of Contents (11 chapters):
Preface
1. Configuring Kafka
2. Kafka Clusters
3. Message Validation
4. Message Enrichment
5. The Confluent Platform
6. Kafka Streams
7. Managing Kafka
8. Operating Kafka
9. Monitoring and Security
10. Third-Party Tool Integration

Configuring the log settings

Log refers to the file where the broker stores all the messages on the machine. In this book, when a log is mentioned, think of it as a data structure (an append-only sequence of records), not just as event recording.

The log settings are fundamental because they determine how messages are persisted on the broker machine.
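
To make the data-structure view concrete, here is a sketch of what a single topic partition looks like on disk; the topic name (my-topic), partition number, and log directory (/tmp/kafka-logs, the log.dirs value in the sample server.properties) are illustrative:

$ ls /tmp/kafka-logs/my-topic-0/
00000000000000000000.index
00000000000000000000.log
00000000000000000000.timeindex
00000000000000368769.index
00000000000000368769.log
00000000000000368769.timeindex

Each .log file is a segment holding the messages themselves, named after the offset of its first record, while the .index and .timeindex files map offsets and timestamps to positions inside that segment.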

Getting ready

With your favorite text editor, open your copy of the server.properties file.
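
For example, if Kafka is installed under /usr/local/kafka (an assumed path; adjust it to your installation), the file can be opened with:

$ cd /usr/local/kafka
$ vi config/server.properties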

How to do it...

Adjust the following parameters:

log.segment.bytes=1073741824
log.roll.hours=168
log.cleanup.policy=delete
log.retention.hours=168
log.retention.bytes=-1
log.retention.check.interval.ms=300000
log.cleaner.enable=false
log.cleaner.threads=1
log.cleaner.backoff.ms=15000
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
# 9223372036854775807 is Long.MAX_VALUE; the properties file needs the numeric literal
log.flush.interval.messages=9223372036854775807
log.flush.interval.ms=9223372036854775807
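
These are broker-level settings, so the broker must be restarted to pick up the changes. A minimal sketch, assuming the commands are run from the Kafka installation directory:

$ bin/kafka-server-stop.sh
$ bin/kafka-server-start.sh config/server.properties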

How it works…

Here is an explanation of every parameter:

  • log.segment.bytes: Default value: 1 GB. This defines the maximum segment size in bytes (the concept of segment will be explained later). Once a segment file reaches that size, a new segment file is created. Each topic partition is stored as a set of segment files in the log directory. This property can also be set per topic (see the example after this list).
  • log.roll.{ms,hours}: Default value: 7 days. This defines the time period after which a new segment file is rolled, even if it has not reached the size limit. This property can also be set per topic.
  • log.cleanup.policy: Default value: delete. Possible options are delete or compact. With the delete option, log segments are periodically deleted once they reach the time threshold or size limit. With the compact option, log compaction is used to clean up obsolete records. This property can also be set per topic.
  • log.retention.{ms,minutes,hours}: Default value: 7 days. This defines how long log segments are retained before deletion. This property can also be set per topic.
  • log.retention.bytes: Default value: -1 (no size limit). This defines the maximum size a partition's log may grow to before old segments are deleted. This property can also be set per topic. Segments are deleted when either the time or the size limit is reached.
  • log.retention.check.interval.ms: Default value: five minutes (300000 ms). This defines how often the logs are checked to see whether any segment is eligible for deletion under the retention policies.
  • log.cleaner.enable: To enable log compaction, set this to true; it must be enabled if any topic uses cleanup.policy=compact.
  • log.cleaner.threads: Indicates the number of background threads used for log compaction.
  • log.cleaner.backoff.ms: The time the cleaner threads sleep before checking again whether any log needs cleaning.
  • log.index.size.max.bytes: Default value: 10 MB (10485760 bytes). This sets the maximum size, in bytes, of a segment's offset index. This property can also be set per topic.
  • log.index.interval.bytes: Default value: 4096. The interval, in bytes of log data, at which a new entry is added to the offset index (the offset concept will be explained later). On each fetch request, the broker does a linear scan over at most this many bytes to find the correct position in the log at which to begin and end the fetch. Setting this value lower means larger index files and more memory used, but less scanning.
  • log.flush.interval.messages: Default value: 9223372036854775807 (Long.MAX_VALUE). The number of messages accumulated in a log partition before they are flushed to disk. Flushing more often does not by itself guarantee durability, but it gives finer-grained control over when data reaches disk.
  • log.flush.interval.ms: Sets the maximum time, in milliseconds, that a message in any topic is kept in memory before it is flushed to disk. If it is not set, the value of log.flush.scheduler.interval.ms is used.
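
Several of these settings have per-topic equivalents (the log. prefix is dropped, for example retention.ms, segment.bytes, and cleanup.policy). Here is a minimal sketch of overriding and inspecting them with the kafka-configs.sh tool that ships with Kafka 1.0; the topic name my-topic and the ZooKeeper address are placeholders:

$ bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type topics --entity-name my-topic \
    --add-config retention.ms=86400000,segment.bytes=536870912

$ bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
    --entity-type topics --entity-name my-topic

Topic-level overrides take precedence over the broker defaults set in server.properties.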

There's more…

See also

You have been reading a chapter from Apache Kafka 1.0 Cookbook.