
Apache Kafka: Basic Setup and Usage With Command-Line Interface

In this article, we will learn basic Kafka commands and how to run a Kafka broker.

By Chandra Shekhar Pandey · Aug. 20, 19 · Tutorial


In this article, we are going to learn basic commands in Kafka. With these commands, we will gain a basic understanding of how to run a Kafka broker, produce and consume messages, and inspect topic and offset details.

Note that this is a standalone setup, intended to give an overview of basic setup and functionality using the command-line interface.

So let us quickly go through these commands:

1. Download Kafka first. At the time of writing this article, Kafka version 2.3.0 is the latest. It can be downloaded from the Apache Kafka website.

2. Extract the downloaded artifact with the following command. After extracting, we will get a folder named kafka_2.11-2.3.0.

tar xvf kafka_2.11-2.3.0.tgz

3. Change directory to kafka_2.11-2.3.0/bin.

4. Start the ZooKeeper server first. A ZooKeeper instance must be running before we actually run the Kafka broker.

./zookeeper-server-start.sh ../config/zookeeper.properties

5. Once the ZooKeeper server is started, start the Kafka broker with the following command:

./kafka-server-start.sh ../config/server.properties

6. Now create a topic called 'csptest' with two partitions.

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic csptest

7. Now start two consumers (listeners) on topic csptest by running the same command in two different terminals. With two consumers in the same group, the two partitions are split between them, so we will consume from both partitions. Note that the group is set to topic_group for both consumers.

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

8. Now start a producer/publisher with the following command. Then produce 5 messages.

./kafka-console-producer.sh --broker-list localhost:9092 --topic csptest

>msg-1

>msg-2

>msg-3

>msg-4

>msg-5
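Instead of typing the messages interactively, they can also be piped in. As a sketch, this one-liner uses the standard seq and sed utilities to generate msg-1 through msg-5 and feed them to the console producer (assuming the broker from step 5 is still running):

```shell
# Generate msg-1 .. msg-5, one per line, and pipe them into the console producer
seq 1 5 | sed 's/^/msg-/' | ./kafka-console-producer.sh --broker-list localhost:9092 --topic csptest
```

This is handy for scripting quick smoke tests against a topic.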

9. We will find that the messages are split between the two consumers' terminals, because the producer distributed them across the two partitions in round-robin fashion.

$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

msg-2

msg-4



$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

msg-1

msg-3

msg-5

10. Now let's get the details of the topic, like partition count, leader, and replicas. These details are more helpful when we have a clustered environment.

$ ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic csptest

Topic: csptest    PartitionCount: 2    ReplicationFactor: 1    Configs:
    Topic: csptest    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    Topic: csptest    Partition: 1    Leader: 0    Replicas: 0    Isr: 0

11. We can get consumer details and offset details for each partition with the following command:

$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group topic_group --describe



GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID

topic_group csptest 0 3 3 0 consumer-1-379adec4-08e7-4a13-8e26-91c4fe10a3a8 /127.0.0.1 consumer-1

topic_group csptest 1 2 2 0 consumer-1-85381523-5103-4bd0-a523-4ca09f41a6a7 /127.0.0.1 consumer-1
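The LAG column is simply LOG-END-OFFSET minus CURRENT-OFFSET for each partition. As a small sketch (assuming the column layout shown above, with LAG in the sixth field), an awk pipeline can sum the lag across all partitions of the group:

```shell
# Sum the LAG column (field 6) over all partition rows, skipping the header line
./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group topic_group --describe \
  | awk 'NR > 1 && NF > 0 { total += $6 } END { print "total lag:", total }'
```

A non-zero total indicates the group has not yet caught up with the producers.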

12. We can list all topics with the following command:

$ ./kafka-topics.sh --list --zookeeper localhost:2181

__consumer_offsets

csptest

my-topic

13. The topic __consumer_offsets, which is created by default in the Kafka broker, stores consumer offset information. With the following command, we can browse this topic.

$ ./kafka-console-consumer.sh --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" --bootstrap-server localhost:9092 --topic __consumer_offsets

[topic_group,csptest,1]::OffsetAndMetadata(offset=2, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971652, expireTimestamp=None)

[topic_group,csptest,0]::OffsetAndMetadata(offset=3, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971655, expireTimestamp=None)

That's it. I hope you found this article interesting and helpful.
