Failing Dockerized Kafka on macOS

I’m trying (and failing) to run dockerized Kafka on my Mac (macOS Sierra 10.12.2). I have Docker for Mac version 17.03.1-ce, build c6d412e. These are the images I’m using:
https://hub.docker.com/r/confluentinc/cp-zookeeper/
https://hub.docker.com/r/confluentinc/cp-kafka/
And I’m following the advice in the official quickstart guide, running Zookeeper and Kafka with the following commands.
docker run -d \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
confluentinc/cp-zookeeper:3.2.1
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
confluentinc/cp-kafka:3.2.1
The main thing: this doesn’t work with docker-compose. Again it’s on my Mac, docker-compose version 1.11.2, build dfed245.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:3.2.2
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
    ports:
      - 2181:2181
  kafka:
    image: confluentinc/cp-kafka:3.2.2
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    ports:
      - 9092:9092
      - 8082:8082
    depends_on:
      - zookeeper
The services will start, and I can even create a topic with these commands.
kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic test
kafka-topics --list --zookeeper zookeeper:2181
When I try to produce with this command, zookeeper remains silent.
kafka-console-producer --broker-list kafka:9092 --topic test
message-one
message-two
When I try to consume with this command:
kafka-console-consumer --bootstrap-server zookeeper:2181 --topic test --from-beginning
... zookeeper continuously spits out this error:
...
zookeeper_1 | [2017-06-28 00:55:07,222] INFO Accepted socket connection from /172.20.0.3:52124 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2017-06-28 00:55:07,222] WARN Exception causing close of session 0x0 due to java.io.EOFException (org.apache.zookeeper.server.NIOServerCnxn)
zookeeper_1 | [2017-06-28 00:55:07,223] INFO Closed socket connection for client /172.20.0.3:52124 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
...

You have specified the wrong host and port for the new console consumer. Try:
kafka-console-consumer --bootstrap-server kafka:9092 --topic test --from-beginning
Also, if you are running these commands from outside Docker (i.e. on the native macOS host), edit your /etc/hosts file to add kafka and zookeeper as aliases for localhost.
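For instance, the /etc/hosts entry could look like this (a sketch; appending the aliases to the existing localhost line is the assumption here):
127.0.0.1   localhost kafka zookeeper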
You might also want to declare and mount an external volume for the zookeeper and kafka logs so your data won't be lost if you destroy the docker images and upgrade to a newer version.
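A sketch of what that could look like in the compose file, assuming the default data directories of the Confluent images (the host paths are illustrative):
  zookeeper:
    volumes:
      - ./zk-data:/var/lib/zookeeper/data    # hypothetical host path
  kafka:
    volumes:
      - ./kafka-data:/var/lib/kafka/data     # hypothetical host path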
Confluent has a full QuickStart documented for these images here http://docs.confluent.io/current/cp-docker-images/docs/quickstart.html

Related

Springboot microservice not showing up in Jaeger UI

I have a microservice and added the Jaeger config for tracing, but I am unable to see the service in the Jaeger UI; it shows only the default service.
Step 1 - Below is the config that I created for Jaeger.
return new io.jaegertracing.Configuration("test-client")
        .withSampler(new io.jaegertracing.Configuration.SamplerConfiguration()
                .withType(ConstSampler.TYPE)
                .withParam(1))
        .withReporter(new io.jaegertracing.Configuration.ReporterConfiguration()
                .withLogSpans(true))
        .getTracer();
Step 2 - After installing Docker locally, I ran the command below.
docker run -d --name jaeger \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14250:14250 \
-p 14268:14268 \
-p 14269:14269 \
-p 9411:9411 \
jaegertracing/all-in-one:1.31
Here I noticed 2 issues:
Issue 1
docker: Error response from daemon: Ports are not available: listen udp 0.0.0.0:6832: bind: address already in use.
docker: Error response from daemon: Ports are not available: listen udp 0.0.0.0:6831: bind: address already in use.
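These bind errors mean something on the host, often a previously started Jaeger container, is already listening on those UDP ports. One way to check, assuming lsof is available (a debugging sketch, not part of the original command):
sudo lsof -nP -iUDP:6831 -iUDP:6832
Alternatively, remap the host side of the conflicting ports, e.g. -p 16831:6831/udp (the host port numbers here are arbitrary choices).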
Issue 2
My micro service is not showing up in the jaeger UI.
Can someone please help me resolve these issues? Thanks in advance.
Note: At this point, I have only a single microservice and am not trying to connect it to any other microservice. Is that an issue?

Ubuntu cannot reach host when linking docker containers

I have 3 dockerized services. Services A and B run inside the same docker-compose file:
docker-compose.yml
version: '3.5'
services:
  service_a:
    container_name: service_a
    networks:
      - my_net
  service_b:
    container_name: service_b
    networks:
      - my_net
networks: # This is just because I wanted to change the network default name
  my_net:
    name: my_net
Service C needs to make requests against services A and B, but it runs separately, using docker without compose (which is why I’m passing the --network option). So I run service C linking A and B:
docker run --network my_net --link service_a --link service_b service_c_docker_image
This works on macOS, but not on Ubuntu!
If I run the ping command instead of the default service_c_docker_image command:
docker run --network my_net --link service_a --link service_b service_c_docker_image ping service_a
on macOS the host is reached properly; on Ubuntu I get: ping: service_a: Name or service not known. The same happens with service_b.
Both machines are using the same versions of docker and docker-compose.
What am I missing?
You may have a typo in your question, as that compose file should not run as written: the service-level network name (my_net) must match a top-level network key, which can then be given a different external name via name:. The network passed to the docker run command must match that external name (and the network needs to already exist, so bring the compose stack up first).
working example:
docker-compose.yaml
version: '3.5'
services:
  service_a:
    image: odise/busybox-curl
    command: ["curl", "-s", "service_b:5678"]
    depends_on:
      - service_b
    networks:
      - my_net
  service_b:
    image: hashicorp/http-echo
    command: ["-text", "hello world"]
    networks:
      - my_net
networks:
  my_net:
    name: infra_network
Run the services with docker-compose up -d and check the logs:
> docker-compose logs
Attaching to docker-compose-networks_service_a_1, docker-compose-networks_service_b_1
service_b_1 | 2019/01/06 05:53:55 Server is listening on :5678
service_b_1 | 2019/01/06 05:53:55 service_b:5678 172.19.0.3:46900 "GET / HTTP/1.1" 200 12 "curl/7.39.0" 106.6µs
service_a_1 | hello world
Then start the other container with docker
> docker run --network infra_network odise/busybox-curl curl -s service_b:5678
hello world
Silly me. Actually, my configuration was right, but services A and B were not running because of an application-level error, so the links were not working.

Cannot reach Spring Boot component through Traefik, using Docker Swarm

I’m trying to run a Spring Boot application consisting of 2 microservices behind a Traefik reverse proxy in Docker Swarm. When using a dual network stack for my Spring Boot web application, the application does not respond.
I have the following networks:
NETWORK ID          NAME                DRIVER              SCOPE
c23c6ac30ecd        bridge              bridge              local
0dcb7c122e69        docker_gwbridge     bridge              local
1e50cdf3eee7        host                host                local
wbhyv0itkveu        ingress             overlay             swarm
7sxpebq9pp7j        marc_default        overlay             swarm
e953c2393965        none                null                local
t8u63pf9l3cb        traefik-net         overlay             swarm
And the following configuration to start Traefik
docker service create \
--name traefik \
--constraint=node.role==manager \
--publish 80:80 \
--publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik \
--docker \
--docker.swarmmode \
--docker.domain=traefik \
--docker.watch \
--web
Now, there is a docker-compose.yml file
version: '3'
services:
  web:
    image: myapp-web
    env_file:
      - db-params.env
    environment:
      - server.port=8080
    deploy:
      labels:
        - 'traefik.port=8080'
    networks:
      - web
      - default
  be:
    image: myapp-be
    env_file:
      - db-params.env
    networks:
      - default
networks:
  web:
    external:
      name: traefik-net
And a command to start the composite:
docker stack deploy -c docker-compose.yml marc
In Traefik there is a URL visible, marc-web.traefik, which is defined in /etc/hosts.
Unfortunately, there is just a time-out when I ask:
curl http://marc-web.traefik/
I tried removing the default network from the web component. Then Traefik could reach the web component, but (of course) the web component could no longer find the be component.
Why don’t I get a reply from Spring Boot?
It seems to be a bug in Traefik. See https://github.com/containous/traefik/pull/2244
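Until that fix lands, a workaround that is often used when a service sits on multiple networks is the traefik.docker.network label, which tells Traefik explicitly which network to route over. A sketch against the compose file above (the label value assumes your overlay network is traefik-net):
  web:
    deploy:
      labels:
        - 'traefik.port=8080'
        - 'traefik.docker.network=traefik-net'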

Elasticsearch 5.1 and Docker - How to get networking configured properly to reach Elasticsearch from the host

Using Elasticsearch:latest (v5.1) from the Docker public repo, I created my own image containing Cerebro. I am now attempting to get Elasticsearch networking properly configured so that I can connect to Elasticsearch from Cerebro. Cerebro, running inside the container I created, renders properly on my host at http://localhost:9000.
After committing my image, I created my Docker container with the following:
sudo docker run -d -it --privileged --name es5.1 --restart=always \
-p 9200:9200 \
-p 9300:9300 \
-p 9000:9000 \
-v ~/elasticsearch/5.1/config:/usr/share/elasticsearch/config \
-v ~/elasticsearch/5.1/data:/usr/share/elasticsearch/data \
-v ~/elasticsearch/5.1/cerebro/conf:/root/cerebro-0.4.2/conf \
elasticsearch_cerebro:5.1 \
/root/cerebro-0.4.2/bin/cerebro
My elasticsearch.yml in ~/elasticsearch/5.1/config currently has the following network and discovery entries:
network.publish_host: 192.168.1.26
discovery.zen.ping.unicast.hosts: ["192.168.1.26:9300"]
I have also tried 0.0.0.0, as well as leaving the values unspecified so they default to the loopback. In addition, I've tried specifying network.host with a combination of values. No matter how I set this, Elasticsearch logs on startup:
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[error] p.c.s.n.PlayDefaultUpstreamHandler - Cannot invoke the action
java.net.ConnectException: Connection refused: localhost/127.0.0.1:9200
… cascading errors because of this connection refusal...
No matter how I set the elasticsearch.yml networking, the error message on Elasticsearch startup does not change. I verified that the elasticsearch.yml is being picked up inside the Docker container. Please let me know where I'm going wrong with this configuration.
Well, it looks like I'm answering my own question after a day's worth of battle with this! The issue was that Elasticsearch wasn't started inside the container. To determine this, I got a terminal into the container:
docker exec -it es5.1 bash
Once in the container, I checked service status:
service elasticsearch status
To this, the OS responded with:
[FAIL] elasticsearch is not running ... failed!
I started it with:
service elasticsearch start
I'll add a single script, called from docker run, that starts Elasticsearch and Cerebro, and that should do the trick. However, I would still like to hear if there is a better way to configure this.
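A minimal sketch of such a start script, using the paths from the docker run command above (the init-style service invocation is the assumption):
#!/bin/bash
# Start Elasticsearch in the background via its service script,
# then run Cerebro in the foreground so the container stays alive.
service elasticsearch start
exec /root/cerebro-0.4.2/bin/cerebro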
I made a GitHub docker-compose repo that will spin up an Elasticsearch, Kibana, Logstash, and Cerebro cluster:
https://github.com/Shuliyey/elkc
========================================================================
On the other hand, in regard to the actual problem (elasticsearch_cerebro not working): to get Elasticsearch and Cerebro working in one Docker container, you need to use supervisord:
https://docs.docker.com/engine/admin/using_supervisord/
I will update with more details.
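For reference, a minimal supervisord.conf along those lines could look like this (the Cerebro path follows the question; the Elasticsearch binary location is an assumption):
[supervisord]
nodaemon=true

[program:elasticsearch]
command=/usr/share/elasticsearch/bin/elasticsearch

[program:cerebro]
command=/root/cerebro-0.4.2/bin/cerebro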
No need to use supervisor at all. A very simple way to solve this is to use docker-compose and bundle Elasticsearch and Cerebro together, like this:
docker-compose.yml:
version: '2'
services:
  elasticsearch:
    build: elasticsearch
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx1500m -Xms1500m"
    networks:
      - elk
  cerebro:
    build: cerebro
    volumes:
      - ./cerebro/config/application.conf:/opt/cerebro/conf/application.conf
    ports:
      - "9000:9000"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
elasticsearch/Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.1
cerebro/Dockerfile:
FROM yannart/cerebro
Then you run docker-compose build and docker-compose up. When everything is started, you can access ES at http://localhost:9200 and Cerebro at http://localhost:9000.

Elasticsearch in Docker container cluster

I want to run 2 instances of Elasticsearch on 2 different hosts.
I have built my own Docker image based on Ubuntu 14.04 and the 1.3.2 version of Elasticsearch. If I run 2 ES containers on 1 host, each instance can see and communicate with the other; but when I run 2 instances of ES on 2 different hosts, it doesn't work. Port 9300 of the container is bound to port 9300 on the host.
Is it possible to create an ES cluster with my configuration?
I was able to get clustering working using unicast across two docker hosts. I just happen to be using the ehazlett/elasticsearch image, but I do not think this should matter all that much. The really important bit seems to be setting the network.publish_host setting to a public or routable IP of its Docker host.
Configuration
docker-host-01
eth0: 192.168.1.10
Docker version 1.4.1, build 5bc2ff8/1.4.1
docker-host-02
eth0: 192.168.1.20
Docker version 1.4.1, build 5bc2ff8/1.4.1
Building the Cluster
On Docker Host 01
docker run -d \
-p 9200:9200 \
-p 9300:9300 \
ehazlett/elasticsearch \
--cluster.name=unicast \
--network.publish_host=192.168.1.10 \
--discovery.zen.ping.multicast.enabled=false \
--discovery.zen.ping.unicast.hosts=192.168.1.20 \
--discovery.zen.ping.timeout=3s \
--discovery.zen.minimum_master_nodes=1
On Docker Host 02
docker run -d \
-p 9200:9200 \
-p 9300:9300 \
ehazlett/elasticsearch \
--cluster.name=unicast \
--network.publish_host=192.168.1.20 \
--discovery.zen.ping.multicast.enabled=false \
--discovery.zen.ping.unicast.hosts=192.168.1.10 \
--discovery.zen.ping.timeout=3s \
--discovery.zen.minimum_master_nodes=1
Using docker-compose is much easier than running it manually on the command line:
elasticsearch_master:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.node.master=true -Des.node.data=false"
  environment:
    - ES_HEAP_SIZE=512m
  ports:
    - "9200:9200"
    - "9300:9300"
elasticsearch1:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "/opt/elasticsearch/data"
  environment:
    - ES_HEAP_SIZE=512m
elasticsearch2:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "/opt/elasticsearch/data"
  environment:
    - ES_HEAP_SIZE=512m
You should be able to get the two containers running on different hosts to communicate, as long as the host machines can reach each other on the needed ports. I think your problem is that you are trying to use Elasticsearch multicast discovery, in which case you also need to expose port 54328 of the containers. If that doesn't work, you can also try configuring Elasticsearch with unicast, setting the machines' IPs appropriately in your elasticsearch.yml.
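For example, the unicast entries in elasticsearch.yml for the 192.168.1.10 host could look like this (a sketch reusing the settings and IPs from the answer above):
network.publish_host: 192.168.1.10
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.20:9300"]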
