
Kafka & Zookeeper for Development: Local and Docker

Kafka's popularity grows day by day as it takes over the streaming world. It is already provided out of the box on cloud providers like AWS, Azure, and IBM Cloud.

For local development, however, things are a bit more awkward, since Kafka requires several moving parts.

This post focuses on making it easy for a developer to spin up some Kafka instances on a local machine without having to provision VMs in the cloud.

We shall start with the usual ZooKeeper and Kafka configuration. The example below fetches a specific version, so after some time it is worth checking the Apache website for a newer release.

> wget https://www.mirrorservice.org/sites/ftp.apache.org/kafka/2.6.0/kafka_2.13-2.6.0.tgz
> tar xvf kafka_2.13-2.6.0.tgz
> cd kafka_2.13-2.6.0

We have just downloaded Kafka locally, and now it is time to spin it up.

First, we spin up ZooKeeper:

> ./bin/zookeeper-server-start.sh config/zookeeper.properties

Then spin up the Kafka instance:

> ./bin/kafka-server-start.sh config/server.properties

As you can see, we only spun up one instance each of Kafka and ZooKeeper. This is very different from production, where ZooKeeper should be deployed on multiple nodes. More specifically, 2n + 1 ZooKeeper servers (where n > 0) need to be deployed, so that the ensemble can perform majority elections for leadership.
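
For reference, a production ensemble is configured by listing every server in each node's zookeeper.properties. Below is a minimal sketch of a three-node ensemble; the hostnames and data directory are hypothetical.

# zookeeper.properties (the same server.* list goes on every node)
dataDir=/var/lib/zookeeper
clientPort=2181
# ticks a follower may take to connect to and sync with the leader
initLimit=5
syncLimit=2
# server.<id>=<host>:<peer-port>:<leader-election-port>
server.1=zk1.local:2888:3888
server.2=zk2.local:2888:3888
server.3=zk3.local:2888:3888

Each node additionally needs a myid file inside dataDir containing its own id (1, 2, or 3). With three servers the ensemble tolerates one failure; with five, two.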

In our case, for local development, one Kafka broker and one ZooKeeper instance are enough to create and consume a topic.

Let’s push some messages to a topic. There is no need to create the topic first; pushing a message will create it.
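
This works because the broker setting auto.create.topics.enable defaults to true. If you prefer to create the topic explicitly, for example to pick the number of partitions, a command along these lines should do it (the partition and replication-factor values are just illustrative):

# optional: create the topic explicitly instead of relying on auto-creation
> bin/kafka-topics.sh --create --topic tutorial-topic --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092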

bin/kafka-console-producer.sh --topic tutorial-topic --bootstrap-server localhost:9092
>a
>b
>c

Then let’s read the messages back. Pay attention to the --from-beginning flag: with it, all messages submitted since the beginning of the topic shall be read.

bin/kafka-console-consumer.sh --topic tutorial-topic --from-beginning --bootstrap-server localhost:9092
>a
>b
>c
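
Without --from-beginning the console consumer only prints messages produced after it attaches. If you also want offsets committed, so that a re-run continues where the previous one left off, you can pass a consumer group; the group name below is just an example.

# read only new messages, tracking offsets under a consumer group
> bin/kafka-console-consumer.sh --topic tutorial-topic --group tutorial-group --bootstrap-server localhost:9092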

Now let’s do the same using Docker. The advantage of Docker is that we can run Kafka on a local Docker network, add as many containers as needed, and establish a ZooKeeper ensemble the easy way; a user-defined network is sketched a bit further below.

Start ZooKeeper first:

docker run --rm --name zookeeper -p 2181:2181 confluent/zookeeper

Then start the Kafka container, linking it to the ZooKeeper container:

docker run --rm --name kafka -p 9092:9092 --link zookeeper:zookeeper confluent/kafka
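
As an aside, instead of legacy links you can create a user-defined Docker network; containers attached to it resolve one another by container name, which is what makes it easy to add more ZooKeeper or Kafka containers later. The sketch below uses an arbitrary network name, and it assumes the Kafka image can be pointed at ZooKeeper through its container name; depending on the image you may need extra environment configuration.

# create a dedicated bridge network; attached containers resolve each other by name
> docker network create kafka-net
> docker run --rm --name zookeeper --network kafka-net -p 2181:2181 confluent/zookeeper
# assumption: the image discovers ZooKeeper at the hostname "zookeeper"; adapt as needed
> docker run --rm --name kafka --network kafka-net -p 9092:9092 confluent/kafka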

Let’s produce the messages through Docker. As with most Docker images, the tools you need are already bundled inside the image, so the publish command is very close to the one we executed previously.

> docker exec -it kafka /bin/bash
kafka-console-producer --topic tutorial-topic --broker-list localhost:9092
a
b
c

The same applies to the consume command.

> docker exec -it kafka /bin/bash
kafka-console-consumer --topic tutorial-topic --from-beginning --zookeeper zookeeper:2181
a
b
c
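
Note that the consumer in this image is old enough to accept the --zookeeper flag. In recent Kafka versions that flag has been removed and the console consumer talks to the broker directly, so inside a newer image the equivalent command would look more like this:

# newer Kafka versions drop --zookeeper; the consumer connects to the broker instead
kafka-console-consumer --topic tutorial-topic --from-beginning --bootstrap-server localhost:9092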

That’s it! We just ran Kafka locally for development, seamlessly!

Published on Java Code Geeks with permission by Emmanouil Gkatziouras, partner at our JCG program. See the original article here: Kafka & Zookeeper for Development: Local and Docker

Opinions expressed by Java Code Geeks contributors are their own.

Emmanouil Gkatziouras

He is a versatile software engineer with experience in a wide variety of applications/services. He is enthusiastic about new projects, embracing new technologies, and getting to know people in the field of software.