Instead of running several docker commands to create a network and start containers for ZooKeeper and the Kafka brokers, you can use Docker Compose to set up your cluster more easily. Since Docker Compose automatically creates a network and attaches all deployed services to it, you don't strictly need to define the kafka-net network yourself; it is declared explicitly below only to make the topology clear:
version: '2'

networks:
  kafka-net:
    driver: bridge

services:
  zookeeper-server:
    image: 'bitnami/zookeeper:latest'
    networks:
      - kafka-net
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka-server1:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9092:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafka-server2:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9093:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
and then run your cluster by executing just one command:
docker-compose up -d

Wait a few minutes for the brokers to start, and then you can connect to the Kafka cluster using Conduktor.
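If you prefer to verify the cluster from the command line instead of Conduktor, you can run the Kafka CLI tools that ship inside the Bitnami image. This is a sketch; the topic name test-topic is an arbitrary choice, and the service names match the Compose file above:

```shell
# Create a topic with 3 partitions on the first broker.
# kafka-topics.sh is on the PATH inside the Bitnami Kafka image.
docker-compose exec kafka-server1 \
  kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic test-topic --partitions 3 --replication-factor 1

# Confirm the topic exists.
docker-compose exec kafka-server1 \
  kafka-topics.sh --list --bootstrap-server localhost:9092
```

Note that these commands run against the brokers started by the Compose file, so they only work once the containers are up.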
Everything is now ready to start testing Kafka concepts such as topics and partitions, or to develop your application on top of the cluster. Note, however, that this setup and configuration is intended for testing and development purposes only, NOT for deployment in a production environment.
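As a small illustration of the partition concept, the sketch below shows how a keyed message is mapped to one of a topic's partitions. It is a Python port of the murmur2 hash used by Kafka's default partitioner (as in the Java client and kafka-python); the helper names are my own, not part of any Kafka API:

```python
def murmur2(data: bytes) -> int:
    """32-bit murmur2 hash, as used by Kafka's default partitioner."""
    length = len(data)
    seed = 0x9747B28C
    m = 0x5BD1E995
    r = 24
    h = (seed ^ length) & 0xFFFFFFFF

    # Mix the body in 4-byte little-endian chunks.
    for i in range(length // 4):
        k = int.from_bytes(data[i * 4:i * 4 + 4], "little")
        k = (k * m) & 0xFFFFFFFF
        k ^= k >> r
        k = (k * m) & 0xFFFFFFFF
        h = (h * m) & 0xFFFFFFFF
        h ^= k

    # Fold in the remaining 1-3 tail bytes.
    tail = length & ~3
    extra = length % 4
    if extra >= 3:
        h ^= data[tail + 2] << 16
    if extra >= 2:
        h ^= data[tail + 1] << 8
    if extra >= 1:
        h ^= data[tail]
        h = (h * m) & 0xFFFFFFFF

    # Final avalanche mixing.
    h ^= h >> 13
    h = (h * m) & 0xFFFFFFFF
    h ^= h >> 15
    return h


def partition_for(key: bytes, num_partitions: int) -> int:
    """Deterministically map a message key to a partition index."""
    return (murmur2(key) & 0x7FFFFFFF) % num_partitions
```

Because the mapping is deterministic, all messages with the same key land in the same partition, which is what gives Kafka per-key ordering within a topic.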