# 🐳 Docker Compose Setup and NGINX Configuration for Kafka and Zookeeper 🐳
This `docker-compose.yml` file sets up a Kafka and Zookeeper cluster using Confluent's Docker images. It configures the necessary environment variables and port mappings for the services.

## Docker Compose File
```yaml
version: '3.8'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "2181:2181"

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092  # use only if other containers connect to this Kafka broker
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"  # enable automatic topic creation (default behavior)
    # Command to create a default topic when the container starts
    # command: [bash, -c, "echo 'auto.create.topics.enable=true' >> /etc/kafka/server.properties && /etc/confluent/docker/run"]
```
## Configuration Details

### Zookeeper Service
- `image`: `confluentinc/cp-zookeeper:latest` - The Docker image for Zookeeper provided by Confluent.
- `environment`:
  - `ZOOKEEPER_CLIENT_PORT: 2181` - Port on which Zookeeper listens for client connections.
  - `ZOOKEEPER_TICK_TIME: 2000` - The basic time unit in milliseconds used by Zookeeper.
- `ports`: `"2181:2181"` - Maps port 2181 on the host to port 2181 in the container.
### Kafka Service

- `image`: `confluentinc/cp-kafka:latest` - The Docker image for Kafka provided by Confluent.
- `depends_on`: `zookeeper` - Ensures that the Kafka service starts only after the Zookeeper service is up.
- `ports`: `"9092:9092"` - Maps port 9092 on the host to port 9092 in the container.
- `environment`:
  - `KAFKA_BROKER_ID: 1` - Unique identifier for the Kafka broker.
  - `KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181` - Connection string for Zookeeper.
  - `KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092` - Advertised address that clients use to connect to the broker. Switch to the commented alternative `PLAINTEXT://kafka:9092` when other containers need to connect to this broker (see the connectivity check after this list).
  - `KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT` - Security protocol mapping for listeners.
  - `KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT` - Listener name for inter-broker communication.
  - `KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1` - Replication factor for the offsets topic.
  - `KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"` - Enables automatic creation of topics when they are first referenced.
- `command` (commented out) - Can be used to append configuration settings or run additional scripts when the container starts.
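Because `KAFKA_ADVERTISED_LISTENERS` is set to `localhost:9092`, clients on the host must use that address. As a quick connectivity check, here is a minimal sketch that assumes the `kafka-python` package (`pip install kafka-python`); any Kafka client library would do:

```python
# Quick check that the broker is reachable at the advertised address (localhost:9092).
# Assumes the kafka-python package; run it on the Docker host, not inside a container.
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
print("Connected. Existing topics:", consumer.topics())
consumer.close()
```

If the same script were run from another container on the Docker network, `bootstrap_servers` would need to be `kafka:9092` together with the commented advertised-listener line.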
## Usage

- Start Services: `docker-compose up -d`
- Stop Services: `docker-compose down`
## Notes

- Network Configuration: Ensure that your Docker network settings allow communication between the Kafka and Zookeeper containers.
- Kafka Topics: By default, Kafka topics will be auto-created due to the `KAFKA_AUTO_CREATE_TOPICS_ENABLE` setting. Adjust this setting as needed based on your use case.
# NGINX Configuration for Kafka Reverse Proxy

This NGINX configuration sets up a reverse proxy for a Kafka server. It routes incoming HTTP requests to the Kafka server running on `localhost:9092`.

## NGINX Configuration File
```nginx
server {
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:9092;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
## Configuration Details

### `server` Block

- `server_name`: Specifies the domain name or IP address that this server block should respond to. Replace `your-domain.com` with your actual domain or server IP.
### `location /` Block

- `proxy_pass`: Forwards incoming requests to the specified backend server. In this case, it routes requests to `http://localhost:9092`, where your Kafka server is running.
- `proxy_set_header` directives:
  - `Host`: Sets the `Host` header in the request to the original host header from the client.
  - `X-Real-IP`: Sets the `X-Real-IP` header with the IP address of the client making the request.
  - `X-Forwarded-For`: Sets the `X-Forwarded-For` header to include the client's IP address, helping maintain the client's IP address through the proxy chain.
  - `X-Forwarded-Proto`: Sets the `X-Forwarded-Proto` header to indicate the protocol used by the client (HTTP or HTTPS).
## Usage

- Install NGINX: Make sure NGINX is installed on your server. You can install it using your package manager. For example, on Ubuntu:

  ```bash
  sudo apt-get update
  sudo apt-get install nginx
  ```

- Save Configuration: Save the above configuration in an NGINX configuration file. Typically, this file is located at `/etc/nginx/sites-available/your-config-file` or `/etc/nginx/nginx.conf`.
- Create Symbolic Link: Create a symbolic link to enable the configuration:

  ```bash
  sudo ln -s /etc/nginx/sites-available/your-config-file /etc/nginx/sites-enabled/
  ```

- Test Configuration: Test the NGINX configuration to ensure there are no syntax errors:

  ```bash
  sudo nginx -t
  ```

- Reload NGINX: Apply the new configuration by reloading NGINX:

  ```bash
  sudo systemctl reload nginx
  ```
## Notes
- Security: Ensure your Kafka server is properly secured and not exposed to the public internet unless absolutely necessary. Consider using authentication and encryption (e.g., SSL/TLS) for secure communication.
- Testing: Verify that the proxy is working correctly by accessing your domain and checking if the requests are being forwarded to the Kafka server.
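For the testing step, one rough check (a sketch only, with `your-domain.com` as a placeholder) is to open a connection to the domain and confirm NGINX accepts it. Keep in mind that Kafka speaks its own binary protocol rather than HTTP, so the proxied reply will not be a meaningful HTTP response; this only confirms that NGINX is up and forwarding:

```python
# Rough connectivity check for the NGINX proxy (your-domain.com is a placeholder).
# Kafka does not speak HTTP, so do not expect a meaningful HTTP response body;
# success here only means NGINX accepted the connection on port 80.
import socket

with socket.create_connection(("your-domain.com", 80), timeout=5) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: your-domain.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(1024)
    print("NGINX responded with", len(reply), "bytes")
```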
This configuration allows you to use NGINX as a reverse proxy for your Kafka server, making it accessible through a specified domain and handling request forwarding and header management.
# 🐳 Working with Kafka Topics in Docker 🐳
This guide will help you manage Kafka topics and consume logs within a Docker environment.
## 1. Accessing the Kafka Container
To start, you need to access the Kafka container's shell:
```bash
docker exec -it <kafka_container_id> /bin/bash
```

Replace `<kafka_container_id>` with your Kafka container ID or name.
## 2. Creating Kafka Topics

Once inside the Kafka container, you can create new topics using the `kafka-topics` command:

```bash
kafka-topics --create --topic project1_logs --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
kafka-topics --create --topic project2_logs --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
kafka-topics --create --topic project3_logs --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
```
In these commands:

- `--topic` specifies the topic name.
- `--bootstrap-server` specifies the Kafka broker address.
- `--partitions` defines the number of partitions.
- `--replication-factor` sets the replication factor.
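If you prefer to create the same topics from Python instead of the CLI, the sketch below uses the `kafka-python` package (an assumption; the topic names and broker address mirror the commands above):

```python
# Create the three log topics programmatically (assumes: pip install kafka-python,
# and a broker reachable at localhost:9092 as configured in docker-compose.yml).
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
topics = [
    NewTopic(name=name, num_partitions=1, replication_factor=1)
    for name in ("project1_logs", "project2_logs", "project3_logs")
]
admin.create_topics(new_topics=topics)  # raises TopicAlreadyExistsError if they already exist
admin.close()
```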
## 3. Listing Kafka Topics

To view a list of all Kafka topics:

```bash
kafka-topics --list --bootstrap-server localhost:9092
```
## 4. Consuming Logs from a Kafka Topic

To see logs from a specific Kafka topic, use the following command:

```bash
docker exec <kafka-container-id> kafka-console-consumer --topic project1_logs --bootstrap-server localhost:9092 --from-beginning
```

Alternatively, if you're using Docker Compose:

```bash
docker-compose exec kafka kafka-console-consumer --bootstrap-server kafka:9092 --topic project1_logs --from-beginning
```
In these commands:

- `--topic` specifies the topic you want to consume logs from.
- `--from-beginning` tells Kafka to start reading from the beginning of the topic.
## 5. Kafka Logger Python Script

To create logs in a Kafka topic using a Python script, see `kafka-logger.py`; a sketch of such a producer is shown below.
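The repository's `kafka-logger.py` is not reproduced here; the following is only a minimal sketch of what such a producer might look like, assuming the `kafka-python` package and the broker advertised at `localhost:9092`:

```python
# kafka-logger.py (sketch) - sends log events to a Kafka topic as JSON.
# Assumes: pip install kafka-python, broker advertised at localhost:9092.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for i in range(5):
    log_event = {"level": "INFO", "message": f"test log {i}", "ts": time.time()}
    producer.send("project1_logs", value=log_event)

producer.flush()   # make sure all buffered messages are delivered
producer.close()
```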
## 6. Kafka Consumer Python Script

To consume logs from a Kafka topic using another Python script, see `consumer.py`; a sketch is shown below.
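Similarly, `consumer.py` is not reproduced here; a minimal consumer sketch under the same assumptions might look like this:

```python
# consumer.py (sketch) - reads log events from a Kafka topic and prints them.
# Assumes: pip install kafka-python, broker advertised at localhost:9092.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "project1_logs",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",       # start from the beginning, like --from-beginning
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    print(f"{record.topic}[{record.partition}]@{record.offset}: {record.value}")
```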
## Summary

- Access Kafka Container: Use `docker exec -it <kafka_container_id> /bin/bash`.
- Create Topics: Use `kafka-topics --create`.
- List Topics: Use `kafka-topics --list`.
- Consume Logs: Use `kafka-console-consumer`.
- Python Scripts: Use `kafka-logger.py` for producing and `consumer.py` for consuming logs.
This documentation should assist you in effectively managing Kafka topics within your Docker environment.