Trouble communicating between docker containers - elasticsearch

I'm running an "elasticsearch" container. I can curl the container and get results, but when I try to communicate with it from within my "web" container, the connection is refused.
docker-compose up
curl localhost:9200 // works.
docker-compose run web curl localhost:9200 // connection refused.
docker-compose.yml
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/src
    ports:
      - "5000:5000"
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:5.1.2
    ports:
      - "9200:9200"
Dockerfile
FROM python:3.5
ADD . /src
WORKDIR /src
RUN pip install -r requirements.txt
CMD python project/wsgi.py

You cannot use localhost:9200 from within the web container to connect to the elasticsearch container. You could define a link or just use the service name (which is mapped by default):
curl elasticsearch:9200
Links allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate - by default, any service can reach any other service at that service’s name.
Also see Docker Compose Links
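To verify, you can run a one-off curl from the web service (a quick sketch, assuming the Compose file above and that curl is available in the web image, as the question's own command suggests):
docker-compose run --rm web curl http://elasticsearch:9200
# Compose's embedded DNS resolves the service name "elasticsearch"
# to that container's address on the shared network.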

You should be trying to curl elasticsearch:9200, not localhost:9200. The hostname elasticsearch is resolvable from inside the web container because Compose registers each service name with Docker's embedded DNS.

Related

Make a request to a spring api running in a docker container from windows host

So, I searched around for an answer on this matter, but people either don't address the issue or say there's no problem doing this on their computer (Mac or Linux). It seems like this might be a Windows problem.
I have a Spring API running in a Docker container (a Linux container). I use Docker Desktop on Windows and I'm trying to make a request (in Insomnia/Postman/whatever) to that API.
If I run the API locally, the following request works perfectly:
http://localhost:8080/api/task/
This lists multiple task elements.
I've containerized this application like so:
Dockerfile
FROM openjdk:11.0.7
COPY ./target/spring-api-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
RUN sh -c 'touch spring-api-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java", "-jar", "spring-api-0.0.1-SNAPSHOT.jar"]
docker-compose.yml
version: '3.8'
services:
  api:
    build: .
    depends_on:
      - mysql
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/test?createDatabaseIfNotExist=true
    ports:
      - "8080:80"
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=test
If I do docker-compose up, both containers start without issue.
The problem is that if I try to call the same endpoint as before from localhost, I don't get any response.
Insomnia returns an error saying: Error: Server returned nothing (no headers, no data)
I've also tried connecting to the container's IP (obtained from docker inspect), but no luck.
Ports are exposed in docker-compose.yml. What am I missing?
Thanks in advance.
The port mapping is incorrect. The Spring Boot application starts on port 8080 inside the container (as your startup output shows), so the container-side port in the mapping should be 8080.
It should be like below:
ports:
  - "8080:8080"

Can't reach server inside docker container from host

I am hosting a MySQL server and a Go HTTP server in Docker. I am unable to hit the HTTP server from my host machine, which is a Mac.
I have tried using localhost:8080 and ipofserver:8080 (the IP from docker inspect). I am able to connect to my MySQL server from my host, but I can't hit the HTTP server.
Here is my docker ps output.
0.0.0.0:8080->8080/tcp
0.0.0.0:3306->3306/tcp, 33060/tcp
Below are my details:
Docker Desktop version 2.0.0.3.
docker-compose
version: '3.1'
services:
  mysql:
    image: mysql:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: mydb
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    networks:
      - mynetwork
  server:
    image: server:latest
    networks:
      - mynetwork
    ports:
      - "8080:8080"
volumes:
  mysql: ~
networks:
  mynetwork:
    driver: "bridge"
mysql dockerfile
FROM mysql:8.0.16
COPY ./scripts/mysql/dbgen-v1.sql /docker-entrypoint-initdb.d/
EXPOSE 3306
server dockerfile
FROM golang:1.12.5
WORKDIR a/go/path
COPY . .
ENV GOBIN=/usr/local/bin
RUN go get github.com/go-sql-driver/mysql
RUN go get github.com/iancoleman/strcase
RUN go get github.com/jmoiron/sqlx
RUN go get github.com/spf13/cobra
RUN go get github.com/gorilla/websocket
RUN go get github.com/spf13/viper
RUN go install -v cmd/project/main.go
EXPOSE 8080
CMD ["main"]
(This answer is based on the chat we had in the comments)
In order to expose the web server from inside the container to the host, it needs to bind to 0.0.0.0 and not to 127.0.0.1. Binding to 0.0.0.0 makes the server listen on all interfaces, including the container's bridge interface that the host-side port mapping forwards to.
Relevant Docker docs: https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
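One way to confirm what address the server is bound to (a sketch; assumes netstat is available in the image, and the container name is a placeholder):
# List listening TCP sockets inside the server container.
# 127.0.0.1:8080 means loopback-only (unreachable via the published port);
# 0.0.0.0:8080 or :::8080 is what you want to see.
docker exec -it <server-container> netstat -tln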

Elasticsearch 5.1 and Docker - How to get networking configured properly to reach Elasticsearch from the host

Using Elasticsearch:latest (v5.1) from the Docker public repo, I created my own image containing Cerebro. I am now attempting to get Elasticsearch networking properly configured so that I can connect to Elasticsearch from Cerebro. Cerebro, running inside the container I created, renders properly on my host at http://localhost:9000.
After committing my image, I created my Docker container with the following:
sudo docker run -d -it --privileged --name es5.1 --restart=always \
-p 9200:9200 \
-p 9300:9300 \
-p 9000:9000 \
-v ~/elasticsearch/5.1/config:/usr/share/elasticsearch/config \
-v ~/elasticsearch/5.1/data:/usr/share/elasticsearch/data \
-v ~/elasticsearch/5.1/cerebro/conf:/root/cerebro-0.4.2/conf \
elasticsearch_cerebro:5.1 \
/root/cerebro-0.4.2/bin/cerebro
My elasticsearch.yml in ~/elasticsearch/5.1/config currently has the following network and discovery entries specified:
network.publish_host: 192.168.1.26
discovery.zen.ping.unicast.hosts: ["192.168.1.26:9300"]
I have also tried 0.0.0.0, and I have tried leaving the values unspecified so they default to the loopback. In addition, I've tried specifying network.host with a combination of values. No matter how I set this, elasticsearch logs on startup:
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[error] p.c.s.n.PlayDefaultUpstreamHandler - Cannot invoke the action
java.net.ConnectException: Connection refused: localhost/127.0.0.1:9200
… cascading errors because of this connection refusal...
No matter how I set the elasticsearch.yml networking, the error message on Elasticsearch startup does not change. I verified that the elasticsearch.yml is being picked up inside the Docker container. Please let me know where I'm going wrong with this configuration.
Well, it looks like I'm answering my own question after a day's worth of battle with this! The issue was that Elasticsearch wasn't started inside the container. To determine this, I got a terminal into the container:
docker exec -it es5.1 bash
Once in the container, I checked service status:
service elasticsearch status
To this, the OS responded with:
[FAIL] elasticsearch is not running ... failed!
I started it with:
service elasticsearch start
I'll add a single script, called from docker run, that starts Elasticsearch and Cerebro; that should do the trick (a sketch follows below). However, I would still like to hear if there is a better way to configure this.
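A minimal sketch of such a script (the name start.sh is hypothetical; the commands are the ones used above):
#!/bin/bash
# start.sh: launch Elasticsearch as a background service, then run
# Cerebro in the foreground so it keeps the container alive.
service elasticsearch start
exec /root/cerebro-0.4.2/bin/cerebro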
I made a GitHub docker-compose repo that will spin up an elasticsearch, kibana, logstash, cerebro cluster:
https://github.com/Shuliyey/elkc
========================================================================
On the other hand, in regard to the actual problem (elasticsearch_cerebro not working): to get Elasticsearch and Cerebro working in one Docker container, you need to use supervisor:
https://docs.docker.com/engine/admin/using_supervisord/
I will update with more details.
No need to use supervisor at all. A very simple way to solve this is to use docker-compose and bundle Elasticsearch and Cerebro together, like this:
docker-compose.yml:
version: '2'
services:
  elasticsearch:
    build: elasticsearch
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx1500m -Xms1500m"
    networks:
      - elk
  cerebro:
    build: cerebro
    volumes:
      - ./cerebro/config/application.conf:/opt/cerebro/conf/application.conf
    ports:
      - "9000:9000"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
elasticsearch/Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.1
cerebro/Dockerfile:
FROM yannart/cerebro
Then you run docker-compose build and docker-compose up. When everything is started, you can access ES at http://localhost:9200 and Cerebro at http://localhost:9000.
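The full sequence would look like this (a sketch, assuming the directory layout above with elasticsearch/ and cerebro/ build contexts):
docker-compose build
docker-compose up -d
# Once both services are up:
curl http://localhost:9200    # Elasticsearch
curl http://localhost:9000    # Cerebro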

linking kibana with elasticsearch

I have the following docker containers running on my box...
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5da7523e527b kibana "/docker-entrypoint.s" About a minute ago Up About a minute 0.0.0.0:5601->5601/tcp elated_lovelace
20aea0e545ca elasticsearch "/docker-entrypoint.s" 3 hours ago Up 3 hours 0.0.0.0:9200->9200/tcp, 9300/tcp sad_meitner
My aim was to get Kibana to link to my Elasticsearch container; however, when I hit Kibana it tells me that I do not have any document stores. I know this is not right because I definitely have documents in Elasticsearch. I'm guessing my link command is wrong.
This is the docker command I used to start the kibana container.
docker run -p 5601:5601 --link sad_meitner:elasticsearch -d kibana
Can someone tell me what I've done wrong?
thanks
First of all, linking is a legacy feature. Create a user-defined network instead:
docker network create mynetwork --driver=bridge
Now use mynetwork for the containers that you want to be able to communicate with each other:
docker run -p 5601:5601 --name kibana -d --network mynetwork kibana
docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -d --network mynetwork elasticsearch
Docker runs a DNS server for your user-defined network, so you can ping other containers by name:
docker exec -it kibana /bin/bash
ping elasticsearch
You can use telnet or curl to verify kibana->elasticsearch connectivity from the kibana container.
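For example (a sketch; assumes curl is present in the kibana image, and reuses the container names from the commands above):
# Reach Elasticsearch by container name from inside the kibana container.
docker exec -it kibana curl http://elasticsearch:9200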
P.S. I used the official (library) Docker images for the ELK stack with user-defined networking recently and it worked like a charm.
You can add ENV ELASTICSEARCH_URL=elasticsearch:9200 to your Dockerfile before building Kibana, then use docker-compose to run Elasticsearch with Kibana like this:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  kibana:
    image: docker.elastic.co/kibana/kibana:5.3.0
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

Kibana on Docker cannot connect to Elasticsearch

I tried to create Kibana and Elasticsearch containers, and it seems that Kibana is having trouble identifying Elasticsearch.
Here are my steps:
1) Create network
docker network create mynetwork --driver=bridge
2) Run Elasticsearch Container
docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch_2_4 --network mynetwork elasticsearch:2.4
3) Run Kibana Container
docker run -i --network mynetwork -p 5601:5601 kibana:4.6
I get a JSON output when I connect to Elasticsearch via http://localhost:9200/ through my browser.
But when I open http://localhost:5601/ I get
Unable to connect to Elasticsearch at http://elasticsearch:9200.
Alternate approach: I still get a similar error when I try
docker run -d -e ELASTICSEARCH_URL=http://127.0.0.1:9200 -p 5601:5601 kibana:4.6
where I get the error
Unable to connect to Elasticsearch at http://127.0.0.1:9200.
My blog post based on the accepted answer: https://gunith.github.io/docker-kibana-elasticsearch/
There is some misunderstanding about what localhost or 127.0.0.1 means when running a command inside a container. Because every container has its own networking, localhost is not your real host system but the container itself. So when you run Kibana and point the ELASTICSEARCH_URL variable to localhost:9200, the Kibana process looks for Elasticsearch inside the Kibana container, where of course it isn't running.
You already introduced a custom network that you referenced when starting the containers. All containers running in the same network can reference each other by name on their exposed ports (see the Dockerfiles). As you named your Elasticsearch container elasticsearch_2_4, you can reference the HTTP endpoint of Elasticsearch as http://elasticsearch_2_4:9200:
docker run -d --network mynetwork -e ELASTICSEARCH_URL=http://elasticsearch_2_4:9200 -p 5601:5601 kibana:4.6
As long as you don't need to access the elasticsearch instance directly, you can even omit mapping the ports 9200 and 9300 to your host.
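A quick sanity check (a sketch; <kibana-container> is a placeholder for your Kibana container's name or ID, and curl must be available in that image):
# Resolve and reach Elasticsearch by container name over mynetwork.
docker exec -it <kibana-container> curl http://elasticsearch_2_4:9200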
Instead of starting all containers on their own, I would also suggest using docker-compose to manage all services and their parameters. You should also consider mounting a local folder as a volume to have the data persisted. The following could be your compose file. Add the networks section if you need the external network; otherwise this setup just creates a network for you.
version: "2"
services:
elasticsearch:
image: elasticsearch:2.4
ports:
- "9200:9200"
volumes:
- ./esdata/:/usr/share/elasticsearch/data/
kibana:
image: kibana:4.6
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_URL=http://elasticsearch:9200
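With this file, a typical run would be (a sketch; assumes the compose file sits in the current directory):
# Compose creates a default network where the service name
# "elasticsearch" is resolvable from the kibana container.
docker-compose up -d
curl http://localhost:5601    # Kibana, from the host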
Test:
docker run -d -e ELASTICSEARCH_URL=http://yourhostip:9200 -p 5601:5601 kibana:4.6
You can test with your host IP or the IP of the docker0 interface shown by ifconfig.
Regards
I changed the network configuration for the Kibana container and after this it works fine.
