Spring Boot microservice not showing up in Jaeger UI - spring-boot

I have a microservice and added the Jaeger config for tracing, but I am unable to see the service in the Jaeger UI; it shows only the default service.
Step 1 - Below is the config I created for Jaeger.
return new io.jaegertracing.Configuration("test-client")
        .withSampler(new io.jaegertracing.Configuration.SamplerConfiguration()
                .withType(ConstSampler.TYPE)
                .withParam(1))
        .withReporter(new io.jaegertracing.Configuration.ReporterConfiguration()
                .withLogSpans(true))
        .getTracer();
Step 2 - After installing Docker locally, I ran the command below.
docker run -d --name jaeger \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14250:14250 \
-p 14268:14268 \
-p 14269:14269 \
-p 9411:9411 \
jaegertracing/all-in-one:1.31
Here I noticed two issues:
Issue 1
docker: Error response from daemon: Ports are not available: listen udp 0.0.0.0:6832: bind: address already in use.
docker: Error response from daemon: Ports are not available: listen udp 0.0.0.0:6831: bind: address already in use.
Issue 2
My micro service is not showing up in the jaeger UI.
Can someone please help me resolve these issues? Thanks in advance.
Note:
At this point I have only a single microservice and am not trying to connect it to any other microservice. Is that an issue?
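One thing to check, given Issue 1: the bind errors mean something on the host (possibly another Jaeger agent) already owns UDP ports 6831/6832, and by default the Java client sends spans to localhost:6831, so they may never reach this Jaeger instance at all. A minimal sketch of a workaround, with 16831/16832 as arbitrary free host ports of my choosing:
# Find what already owns the agent's UDP ports (macOS/Linux)
sudo lsof -nP -iUDP:6831 -iUDP:6832

# Re-run Jaeger with the clashing host ports remapped
docker run -d --name jaeger \
-p 16831:6831/udp \
-p 16832:6832/udp \
-p 16686:16686 \
-p 14268:14268 \
jaegertracing/all-in-one:1.31

# Point the client at the remapped port, either via the standard
# jaeger-client environment variables (picked up when the Configuration
# is built via Configuration.fromEnv()) ...
export JAEGER_AGENT_HOST=localhost
export JAEGER_AGENT_PORT=16831
# ... or programmatically, by adding a sender to the ReporterConfiguration
# above: .withSender(new Configuration.SenderConfiguration()
#     .withAgentHost("localhost").withAgentPort(16831))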

Related

Connecting two containers fails

I have an issue when I try to use docker-compose and a Dockerfile together.
I know it is possible to use docker-compose without a Dockerfile, but I think it is better for me to use a Dockerfile too, because I want an environment that is easy to modify.
The problem is that I want a container with Postgres as a dependency of another container, named api, which runs the application.
The api container has Java 17 and Maven 3, and docker-compose builds its image from the Dockerfile. When I build with the Dockerfile alone, everything is fine, but when I use docker-compose I get this error:
2021-12-08T08:36:37.221247254Z /usr/local/bin/mvn-entrypoint.sh: line 50: exec: mvn test: not found
Configuration files are:
Dockerfile
FROM openjdk:17-jdk-slim
ARG MAVEN_VERSION=3.8.4
ARG USER_HOME_DIR="/root"
ARG SHA=a9b2d825eacf2e771ed5d6b0e01398589ac1bfa4171f36154d1b5787879605507802f699da6f7cfc80732a5282fd31b28e4cd6052338cbef0fa1358b48a5e3c8
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN apt-get update && \
apt-get install -y \
curl procps \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
&& echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
COPY mvn-entrypoint.sh /usr/local/bin/mvn-entrypoint.sh
COPY settings-docker.xml /usr/share/maven/ref/
COPY . .
RUN ["chmod", "+x", "/usr/local/bin/mvn-entrypoint.sh"]
ENTRYPOINT ["/usr/local/bin/mvn-entrypoint.sh"]
CMD ["mvn", "test"]
And the docker-compose file:
services:
  api_service:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: api_core_backend
    ports:
      - 8080:8080
    depends_on:
      - postgres_db
  postgres_db:
    image: "postgres:latest"
    container_name: postgres_core_backend
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_DB: postgres
      POSTGRES_PASSWORD: root
Can anyone explain why I get errors when I run with docker-compose, but everything is fine if I use the Dockerfile directly?
Thank you.
Update: the error when I try to connect to the other container:
Caused by: org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to obtain connection from database: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08001
Error Code : 0
Message : Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
The issue
Based on the logs, it looks like the issue is that you're using localhost as the hostname when you connect.
Docker compose creates an internal network where the hostnames are mapped to the service names. So in your case, the hostname is postgres_db.
Please see the docker compose docs for more information.
Solution
Try specifying postgres_db as the hostname :)
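To make that concrete, here is a quick way to verify the name resolution, and the resulting JDBC URL (container and service names taken from the compose file above; the getent check assumes it is available in the image):
# Verify that the compose network resolves the service name
docker exec -it api_core_backend getent hosts postgres_db

# The application's JDBC URL should then use the service name:
#   jdbc:postgresql://postgres_db:5432/postgres
# rather than jdbc:postgresql://localhost:5432/postgres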

How to launch a Graphite Docker container locally?

I am following this wiki to set up some performance numbers for the testing I am doing. I needed to set up Graphite to see my numbers.
So I ran this command, as mentioned in the wiki, on my Mac:
docker run -d --name graphite -p 80:80 -p 2003-2004:2003-2004 -p 2023-2024:2023-2024 -p 8125:8125/udp -p 8126:8126 graphiteapp/graphite-statsd
Below is what I got:
> docker run -d --name graphite -p 80:80 -p 2003-2004:2003-2004 -p 2023-2024:2023-2024 -p 8125:8125/udp -p 8126:8126 graphiteapp/graphite-statsd
Unable to find image 'graphiteapp/graphite-statsd:latest' locally
latest: Pulling from graphiteapp/graphite-statsd
aad63a933944: Pull complete
9b6d24804914: Pull complete
5f9542cd4cb1: Pull complete
09c978daf42b: Pull complete
Digest: sha256:18fbffd024cd540c7a57febfaa38c3dc5513f05db2263300209deb2a8ecd923c
Status: Downloaded newer image for graphiteapp/graphite-statsd:latest
ac248794f9cdea3bd1ab65659ec321d0aa0111de3f151c5e206b6503202a35e3
Now I ran my program, which pushes my metrics to Graphite, and then tried to configure my Grafana dashboard by launching the Grafana Docker container with the command below, as shown in the same wiki:
docker run -d --name -p 3000:3000 grafana grafana/grafana
But I got an error once I executed the above command:
> docker run -d --name -p 3000:3000 grafana grafana/grafana
Unable to find image '3000:3000' locally
docker: Error response from daemon: pull access denied for 3000, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
This is the first time I am working with Docker, so I am having some trouble setting it up, even though Docker is already installed on my Mac. Any idea what is wrong here?
To explain the problem in your command:
Your command
docker run -d --name -p 3000:3000 grafana grafana/grafana
As you can see, --name has no value, so it consumes -p as the container name, and 3000:3000 is then parsed as the image name. Use the command below. The flags mean:
--name => Name of the container which is grafana in this case
-p => Publish a container's port(s) to the host, which is 3000:3000 over here
-d => Run container in background and print container ID
docker run -d -p 3000:3000 --name grafana grafana/grafana
Logs of the command:
docker run -d -p 3000:3000 --name grafana grafana/grafana
Unable to find image 'grafana/grafana:latest' locally
latest: Pulling from grafana/grafana
cbdbe7a5bc2a: Already exists
ed18d4ca725a: Pull complete
5ac007dea7db: Pull complete
33b8e7fbf663: Pull complete
09cd2fb04616: Pull complete
990c0b335bdb: Pull complete
Digest: sha256:4bbfcbf9372e1022bf51b35ec1aaab04bf46e01b76a1d00b424f45b63cf90967
Status: Downloaded newer image for grafana/grafana:latest
7748b112f5004a18144152ac7330749b83120914bb0ab0d3a7112ea16368bfa2
Just set --name grafana.
docker run -d --name grafana -p 3000:3000 grafana/grafana
Unable to find image 'grafana/grafana:latest' locally
latest: Pulling from grafana/grafana
cbdbe7a5bc2a: Already exists
ed18d4ca725a: Pull complete
....
....

Failing Dockerized Kafka on macOS

I'm trying (and failing) to run dockerized Kafka on my Mac (macOS Sierra 10.12.2). I have Docker for Mac version 17.03.1-ce, build c6d412e. These are the images I'm using.
https://hub.docker.com/r/confluentinc/cp-zookeeper/
https://hub.docker.com/r/confluentinc/cp-kafka/
And I’m following the advice in the official quickstart guide, running Zookeeper and Kafka with the following commands.
docker run -d \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
confluentinc/cp-zookeeper:3.2.1
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
confluentinc/cp-kafka:3.2.1
The main problem: this otherwise doesn't work with docker-compose. Again, this is on my Mac, with docker-compose version 1.11.2, build dfed245.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:3.2.2
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
    ports:
      - 2181:2181
  kafka:
    image: confluentinc/cp-kafka:3.2.2
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    ports:
      - 9092:9092
      - 8082:8082
    depends_on:
      - zookeeper
The services will start, and I can even create a topic with these commands.
kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic test
kafka-topics --list --zookeeper zookeeper:2181
When I try to produce with this command, zookeeper remains silent.
kafka-console-producer --broker-list kafka:9092 --topic test
message-one
message-two
When I try to consume with this command:
kafka-console-consumer --bootstrap-server zookeeper:2181 --topic test --from-beginning
.. zookeeper continuously spits out this error:
...
zookeeper_1 | [2017-06-28 00:55:07,222] INFO Accepted socket connection from /172.20.0.3:52124 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2017-06-28 00:55:07,222] WARN Exception causing close of session 0x0 due to java.io.EOFException (org.apache.zookeeper.server.NIOServerCnxn)
zookeeper_1 | [2017-06-28 00:55:07,223] INFO Closed socket connection for client /172.20.0.3:52124 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
...
You have specified the wrong host and port for the new console consumer. Try kafka-console-consumer --bootstrap-server kafka:9092 --topic test --from-beginning.
Also, if you are running these commands from outside Docker (i.e. on the native macOS host), edit your /etc/hosts file to add kafka and zookeeper as aliases for localhost.
You might also want to declare and mount an external volume for the zookeeper and kafka logs, so your data won't be lost if you destroy the Docker images and upgrade to a newer version.
Confluent has a full quickstart documented for these images here: http://docs.confluent.io/current/cp-docker-images/docs/quickstart.html
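For the /etc/hosts suggestion above, something along these lines would do it (appending to /etc/hosts needs sudo):
# Make the service names resolve to localhost on the macOS host
echo "127.0.0.1 kafka zookeeper" | sudo tee -a /etc/hosts

# Then, from the host, produce and consume against the broker
kafka-console-producer --broker-list kafka:9092 --topic test
kafka-console-consumer --bootstrap-server kafka:9092 --topic test --from-beginning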

Lost data with GitLab in Docker on local OS X

This is how I run GitLab with Docker:
Step 1. Launch a postgresql container
docker run --name gitlab-postgresql -d \
--env 'DB_NAME=gitlabhq_production' \
--env 'DB_USER=gitlab' --env 'DB_PASS=password' \
--volume /srv/docker/gitlab/postgresql:/var/lib/postgresql \
sameersbn/postgresql:9.4-12
Step 2. Launch a redis container
docker run --name gitlab-redis -d \
--volume /srv/docker/gitlab/redis:/var/lib/redis \
sameersbn/redis:latest
Step 3. Launch the gitlab container
docker run --name gitlab -d \
--link gitlab-postgresql:postgresql --link gitlab-redis:redisio \
--publish 10022:22 --publish 10080:80 \
--env 'GITLAB_PORT=10080' --env 'GITLAB_SSH_PORT=10022' \
--env 'GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alpha-numeric-string' \
--volume /srv/docker/gitlab/gitlab:/home/git/data \
sameersbn/gitlab:8.4.2
However, when I restart or shut down the computer, all previous data is gone.
Please help me; I am new to Docker and to GitLab in Docker.
Your approach seems correct and I do not see why the volumes wouldn't persist your data. When you've restarted your computer, you can try to start the stopped containers using these commands:
docker start gitlab-postgresql
docker start gitlab-redis
docker start gitlab
By the way, I'd recommend using this docker-compose.yml file to set up your GitLab environment. Just download the file and run docker-compose up -d.
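If the bind-mounted paths turn out to be the culprit (on OS X a path like /srv/docker lives inside the Docker virtual machine, not on the Mac's own filesystem, and can vanish when the VM is recreated), Docker named volumes are an alternative worth trying. A sketch for the postgresql container only; the redis and gitlab containers would follow the same pattern:
docker volume create gitlab-postgresql-data
docker run --name gitlab-postgresql -d \
--env 'DB_NAME=gitlabhq_production' \
--env 'DB_USER=gitlab' --env 'DB_PASS=password' \
--volume gitlab-postgresql-data:/var/lib/postgresql \
sameersbn/postgresql:9.4-12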

Elasticsearch in Docker container cluster

I want to run 2 instances of Elasticsearch on 2 different hosts.
I have built my own Docker image based on Ubuntu 14.04 and version 1.3.2 of Elasticsearch. If I run 2 ES containers on 1 host, each instance can see and communicate with the other; but when I run 2 instances of ES on 2 different hosts, it doesn't work. The container's port 9300 is bound to the host's port 9300.
Is it possible to create an ES cluster with my configuration?
I was able to get clustering working using unicast across two Docker hosts. I just happen to be using the ehazlett/elasticsearch image, but I do not think this should matter all that much. The really important bit seems to be setting network.publish_host to a public or routable IP of its Docker host.
Configuration
docker-host-01
eth0: 192.168.1.10
Docker version 1.4.1, build 5bc2ff8/1.4.1
docker-host-02
eth0: 192.168.1.20
Docker version 1.4.1, build 5bc2ff8/1.4.1
Building the Cluster
On Docker Host 01
docker run -d \
-p 9200:9200 \
-p 9300:9300 \
ehazlett/elasticsearch \
--cluster.name=unicast \
--network.publish_host=192.168.1.10 \
--discovery.zen.ping.multicast.enabled=false \
--discovery.zen.ping.unicast.hosts=192.168.1.20 \
--discovery.zen.ping.timeout=3s \
--discovery.zen.minimum_master_nodes=1
On Docker Host 02
docker run -d \
-p 9200:9200 \
-p 9300:9300 \
ehazlett/elasticsearch \
--cluster.name=unicast \
--network.publish_host=192.168.1.20 \
--discovery.zen.ping.multicast.enabled=false \
--discovery.zen.ping.unicast.hosts=192.168.1.10 \
--discovery.zen.ping.timeout=3s \
--discovery.zen.minimum_master_nodes=1
Using docker-compose is much easier than running it manually from the command line:
elasticsearch_master:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.node.master=true -Des.node.data=false"
  environment:
    - ES_HEAP_SIZE=512m
  ports:
    - "9200:9200"
    - "9300:9300"
elasticsearch1:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "/opt/elasticsearch/data"
  environment:
    - ES_HEAP_SIZE=512m
elasticsearch2:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "/opt/elasticsearch/data"
  environment:
    - ES_HEAP_SIZE=512m
The two containers running on different hosts should be able to communicate as long as the host machines can reach each other on the required ports. I think your problem is that you are using Elasticsearch multicast discovery, in which case you also need to expose port 54328 of the containers. If that doesn't work, you can configure Elasticsearch to use unicast instead, setting the machines' IPs appropriately in your elasticsearch.yml.
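As a rough sketch of both options (multicast rarely works across Docker hosts, so unicast is usually the safer bet):
# Option 1: also publish the multicast discovery port on each host
docker run -d -p 9200:9200 -p 9300:9300 -p 54328:54328/udp ehazlett/elasticsearch

# Option 2: switch to unicast in elasticsearch.yml on each host, e.g.
#   discovery.zen.ping.multicast.enabled: false
#   discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.20"]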
