Running elasticsearch in docker

I'm trying to run Elasticsearch in a Docker container on my laptop (macOS) and run my tests, which connect on TCP port 9300.
First I tried to run it without docker:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.zip
unzip elasticsearch-5.1.1.zip
cd elasticsearch-5.1.1
echo "cluster.name: test
client.transport.sniff: false
discovery.zen.minimum_master_nodes: 1
network.host:
- _local_
- _site_
network.publish_host: _local_" > config/elasticsearch.yml
./bin/elasticsearch
All works well.
Now if I try in docker:
docker run -p 9300:9300 -ti openjdk
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.zip
unzip elasticsearch-5.1.1.zip
cd elasticsearch-5.1.1
echo "cluster.name: test
client.transport.sniff: false
discovery.zen.minimum_master_nodes: 1
network.host:
- _local_
- _site_
network.publish_host: _local_" > config/elasticsearch.yml
chmod 777 -R .
useradd elastic
su elastic
./bin/elasticsearch
It works for the first suite of tests, but not for the second one, which throws:
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes were available: [{nlR3i79}{nlR3i797RuKXJqS86GExXQ}{O6ltC6a5R-asNMuvCt3c4w}{127.0.0.1}{127.0.0.1:9300}]
Cheers

Take a look here:
https://hub.docker.com/_/elasticsearch/
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Here is an example:
docker run -d elasticsearch:5.1.1
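Note that the exception above shows the node publishing 127.0.0.1:9300 from inside the container, which is fragile to reach through the mapped port. The official image accepts settings as -E flags; a minimal sketch (the transport.host value is an assumption to make port 9300 reachable from outside the container, and you may also need to raise vm.max_map_count on the host, per the elastic.co Docker guide linked above):
# sketch: publish both ports and bind the transport layer to all
# interfaces so a client on the host can reach 9300
docker run -d -p 9200:9200 -p 9300:9300 \
  elasticsearch:5.1.1 \
  -Ecluster.name=test \
  -Etransport.host=0.0.0.0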

Related

Connecting two containers to each other fails

I have an issue when I try to use docker-compose and a Dockerfile together.
I know it is possible to use docker-compose without a Dockerfile, but I think it is better for me to use a Dockerfile too, because I want an environment that is easy to modify.
The problem is that I want a container with Postgres, which is a dependency of another container, named api, which runs the application.
The api container contains Java 17 and Maven 3, and docker-compose builds its image from the Dockerfile. While I use the Dockerfile alone, everything is fine, but when I use docker-compose I get this error:
2021-12-08T08:36:37.221247254Z /usr/local/bin/mvn-entrypoint.sh: line 50: exec: mvn test: not found
Configuration files are:
Dockerfile
FROM openjdk:17-jdk-slim
ARG MAVEN_VERSION=3.8.4
ARG USER_HOME_DIR="/root"
ARG SHA=a9b2d825eacf2e771ed5d6b0e01398589ac1bfa4171f36154d1b5787879605507802f699da6f7cfc80732a5282fd31b28e4cd6052338cbef0fa1358b48a5e3c8
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN apt-get update && \
apt-get install -y \
curl procps \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
&& echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
COPY mvn-entrypoint.sh /usr/local/bin/mvn-entrypoint.sh
COPY settings-docker.xml /usr/share/maven/ref/
COPY . .
RUN ["chmod", "+x", "/usr/local/bin/mvn-entrypoint.sh"]
ENTRYPOINT ["/usr/local/bin/mvn-entrypoint.sh"]
CMD ["mvn", "test"]
And docker-compose file:
services:
  api_service:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: api_core_backend
    ports:
      - 8080:8080
    depends_on:
      - postgres_db
  postgres_db:
    image: "postgres:latest"
    container_name: postgres_core_backend
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_DB: postgres
      POSTGRES_PASSWORD: root
Can anyone explain why I get errors when I execute with docker-compose, but everything is fine when I use the Dockerfile alone?
Thank you.
Update: error while I try to connect to the other container:
Caused by: org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to obtain connection from database: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08001
Error Code : 0
Message : Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
The issue
Based on the logs, it looks like the issue is that you're using localhost as the hostname when you connect.
Docker compose creates an internal network where the hostnames are mapped to the service names. So in your case, the hostname is postgres_db.
Please see the docker compose docs for more information.
Solution
Try specifying postgres_db as the hostname :)
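For example, if the application reads its JDBC URL from configuration, point it at the service name instead of localhost. A sketch of a compose-level override, assuming a Spring Boot app (the question doesn't name the framework, so the variable is an assumption):
api_service:
  environment:
    # hypothetical, assuming Spring Boot relaxed binding; adjust the
    # variable name to whatever your application actually reads
    SPRING_DATASOURCE_URL: jdbc:postgresql://postgres_db:5432/postgres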

Docker volumes + mongodb (-v) - how to make them work in Windows 10?

I'm trying to create mongodb container:
docker run --name mydb -d -p 27017:27999 -e MONGO_INITDB_DATABASE=mydb -v /myproject/dbtest:/data/db -v /myproject/docker/mongodb:/etc/mongod mongo:3.6.5-jessie --config /etc/mongod/mongo.conf
I have a problem with the -v flag: no matter what I try, it does not map my \myproject\dbtest and \myproject\docker\mongodb folders.
I'm trying to create a Docker container for my project. Since I already have a working mongod on my system, I want to map the container to a different port (27999).
I tried also creating it using a docker file:
FROM mongo:3.6.5-jessie
ADD mongod.conf /etc/mongod.conf
ENTRYPOINT ["/usr/bin/mongod","--config","/etc/mongod.conf"]
This time it managed to find the configuration, but I can't manage to connect to the db from outside the container.
I tried:
127.0.0.1:2799
<the docker ip of the container>:27017
<the docker ip of the container>:27999
Here's my mongod.conf:
storage:
  dbPath: /data/db
  journal:
    enabled: true
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 0.0.0.0
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
Has anyone managed to find out how to make this work in Windows 10?
I'm using Docker CE v18.03.
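For reference, a sketch of the intended mapping, assuming Docker for Windows with the C: drive shared in the Docker settings (paths are illustrative; note that -p is host:container, so the host port 27999 comes first):
# host 27999 -> container 27017; forward-slash Windows paths only work
# if the drive is shared with Docker in its settings
docker run --name mydb -d -p 27999:27017 \
  -e MONGO_INITDB_DATABASE=mydb \
  -v C:/myproject/dbtest:/data/db \
  -v C:/myproject/docker/mongodb:/etc/mongod \
  mongo:3.6.5-jessie --config /etc/mongod/mongo.conf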

Elasticsearch 5.1 and Docker - How to get networking configured properly to reach Elasticsearch from the host

Using Elasticsearch:latest (v5.1) from the Docker public repo, I created my own image containing Cerebro. I am now attempting to get Elasticsearch networking configured properly so that I can connect to Elasticsearch from Cerebro. Cerebro, running inside the container I created, renders properly on my host at http://localhost:9000.
After committing my image, I created my Docker container with the following:
sudo docker run -d -it --privileged --name es5.1 --restart=always \
-p 9200:9200 \
-p 9300:9300 \
-p 9000:9000 \
-v ~/elasticsearch/5.1/config:/usr/share/elasticsearch/config \
-v ~/elasticsearch/5.1/data:/usr/share/elasticsearch/data \
-v ~/elasticsearch/5.1/cerebro/conf:/root/cerebro-0.4.2/conf \
elasticsearch_cerebro:5.1 \
/root/cerebro-0.4.2/bin/cerebro
My elasticsearch.yml in ~/elasticsearch/5.1/config currently has the following network and discovery entries specified:
network.publish_host: 192.168.1.26
discovery.zen.ping.unicast.hosts: ["192.168.1.26:9300"]
I have also tried 0.0.0.0, and not specifying the values at all so they default to the loopback. In addition, I've tried specifying network.host with a combination of values. No matter how I set this, the container logs the following on startup:
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[error] p.c.s.n.PlayDefaultUpstreamHandler - Cannot invoke the action
java.net.ConnectException: Connection refused: localhost/127.0.0.1:9200
… cascading errors because of this connection refusal...
No matter how I set the elasticsearch.yml networking, the error message on startup does not change. I verified that the elasticsearch.yml is being picked up inside the Docker container. Please let me know where I'm going wrong with this configuration.
Well, it looks like I'm answering my own question after a day's worth of battle with this! The issue was that elasticsearch wasn't started inside the container. To determine this, I got a terminal into the container:
docker exec -it es5.1 bash
Once in the container, I checked service status:
service elasticsearch status
To this, the OS responded with:
[FAIL] elasticsearch is not running ... failed!
I started it with:
service elasticsearch start
I'll add a single script, called from docker run, that starts both elasticsearch and cerebro, and that should do the trick (a sketch follows). However, I would still like to hear if there is a better way to configure this.
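A minimal sketch of such a script, using the service command and the cerebro path from the run above (the file name is illustrative):
#!/bin/bash
# hypothetical start.sh: start the elasticsearch service, then keep
# cerebro in the foreground so the container stays alive
service elasticsearch start
exec /root/cerebro-0.4.2/bin/cerebro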
I made a github docker-compose repo that will spin up an elasticsearch, kibana, logstash, cerebro cluster:
https://github.com/Shuliyey/elkc
========================================================================
On the other hand, in regard to the actual problem (elasticsearch_cerebro not working): to get elasticsearch and cerebro working in one docker container, you need to use supervisor:
https://docs.docker.com/engine/admin/using_supervisord/
I will update with more details.
No need to use supervisor at all. A very simple way to solve this is to use docker-compose and bundle Elasticsearch and Cerebro together, like this:
docker-compose.yml:
version: '2'
services:
  elasticsearch:
    build: elasticsearch
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx1500m -Xms1500m"
    networks:
      - elk
  cerebro:
    build: cerebro
    volumes:
      - ./cerebro/config/application.conf:/opt/cerebro/conf/application.conf
    ports:
      - "9000:9000"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
elasticsearch/Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.1
cerebro/Dockerfile:
FROM yannart/cerebro
Then you run docker-compose build and docker-compose up. When everything is started, you can access ES at http://localhost:9200 and Cerebro at http://localhost:9000.
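That is, from the directory containing docker-compose.yml:
# build both images, then start the stack (add -d to run detached)
docker-compose build
docker-compose up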

Kibana on Docker cannot connect to Elasticsearch

I tried to create Kibana and Elasticsearch containers, and it seems that Kibana is having trouble identifying Elasticsearch.
Here are my steps:
1) Create network
docker network create mynetwork --driver=bridge
2) Run Elasticsearch Container
docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch_2_4 --network mynetwork elasticsearch:2.4
3) Run Kibana Container
docker run -i --network mynetwork -p 5601:5601 kibana:4.6
I get a JSON output when I connect to Elasticsearch via http://localhost:9200/ through my browser.
But when I open http://localhost:5601/ I get
Unable to connect to Elasticsearch at http://elasticsearch:9200.
Alternate approach:
I still get a similar error when I try
docker run -d -e ELASTICSEARCH_URL=http://127.0.0.1:9200 -p 5601:5601 kibana:4.6
where I get the error
Unable to connect to Elasticsearch at http://127.0.0.1:9200.
My blog post based on the accepted answer: https://gunith.github.io/docker-kibana-elasticsearch/
There is some misunderstanding about what localhost or 127.0.0.1 means when running a command inside a container. Because every container has its own networking, localhost is not your real host system but the container itself. So when you run kibana and point the ELASTICSEARCH_URL variable to localhost:9200, the kibana process will look for elasticsearch inside the kibana container, where of course it isn't running.
You already introduced a custom network that you referenced when starting the containers. All containers running in the same network can reference each other by name on their exposed ports (see their Dockerfiles). As you named your elasticsearch container elasticsearch_2_4, you can reference the http endpoint of elasticsearch as http://elasticsearch_2_4:9200.
docker run -d --network mynetwork -e ELASTICSEARCH_URL=http://elasticsearch_2_4:9200 -p 5601:5601 kibana:4.6
As long as you don't need to access the elasticsearch instance directly, you can even omit mapping the ports 9200 and 9300 to your host.
Instead of starting all containers on their own, I would also suggest using docker-compose to manage all services and parameters. You should also consider mounting a local folder as a volume to have the data persisted. This could be your compose file. Add a networks section if you need the external network; otherwise this setup just creates a network for you.
version: "2"
services:
elasticsearch:
image: elasticsearch:2.4
ports:
- "9200:9200"
volumes:
- ./esdata/:/usr/share/elasticsearch/data/
kibana:
image: kibana:4.6
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_URL=http://elasticsearch:9200
Test:
docker run -d -e ELASTICSEARCH_URL=http://yourhostip:9200 -p 5601:5601 kibana:4.6
You can test with your host IP or the IP identified by docker0 in ifconfig, as shown below.
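For instance, on a Linux host (a sketch; docker0 is the default bridge interface):
# show the docker0 bridge address; use the inet value as yourhostip
ifconfig docker0 | grep 'inet '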
Regards
I changed the network configuration for the Kibana container, and after this it works fine.

Elasticsearch in Docker container cluster

I want to run 2 instances of Elasticsearch on 2 different hosts.
I have built my own Docker image based on Ubuntu 14.04 and version 1.3.2 of Elasticsearch. If I run 2 ES containers on 1 host, each instance can see and communicate with the other; but when I run 2 instances of ES on 2 different hosts, it doesn't work. Port 9300 of each container is bound to port 9300 of its host.
Is it possible to create an ES cluster with my configuration?
I was able to get clustering working using unicast across two docker hosts. I just happen to be using the ehazlett/elasticsearch image, but I do not think this should matter all that much. The really important bit seems to be setting network.publish_host to a public or routable IP of its docker host.
Configuration
docker-host-01
eth0: 192.168.1.10
Docker version 1.4.1, build 5bc2ff8/1.4.1
docker-host-02
eth0: 192.168.1.20
Docker version 1.4.1, build 5bc2ff8/1.4.1
Building the Cluster
On Docker Host 01
docker run -d \
-p 9200:9200 \
-p 9300:9300 \
ehazlett/elasticsearch \
--cluster.name=unicast \
--network.publish_host=192.168.1.10 \
--discovery.zen.ping.multicast.enabled=false \
--discovery.zen.ping.unicast.hosts=192.168.1.20 \
--discovery.zen.ping.timeout=3s \
--discovery.zen.minimum_master_nodes=1
On Docker Host 02
docker run -d \
-p 9200:9200 \
-p 9300:9300 \
ehazlett/elasticsearch \
--cluster.name=unicast \
--network.publish_host=192.168.1.20 \
--discovery.zen.ping.multicast.enabled=false \
--discovery.zen.ping.unicast.hosts=192.168.1.10 \
--discovery.zen.ping.timeout=3s \
--discovery.zen.minimum_master_nodes=1
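To confirm the two nodes actually see each other, the standard cluster health endpoint can be queried from either host (IPs from the example configuration above):
# expect "number_of_nodes" : 2 once unicast discovery has succeeded
curl 'http://192.168.1.10:9200/_cluster/health?pretty'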
Using docker-compose is much easier than running it manually on the command line:
elasticsearch_master:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.node.master=true -Des.node.data=false"
  environment:
    - ES_HEAP_SIZE=512m
  ports:
    - "9200:9200"
    - "9300:9300"
elasticsearch1:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "/opt/elasticsearch/data"
  environment:
    - ES_HEAP_SIZE=512m
elasticsearch2:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=workagram -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "/opt/elasticsearch/data"
  environment:
    - ES_HEAP_SIZE=512m
The two containers running on different hosts should be able to communicate as long as the host machines can reach each other on the needed ports. I think your problem is that you are trying to use Elasticsearch multicast discovery, in which case you also need to expose port 54328 of the containers. If that doesn't work, you can also try configuring Elasticsearch to use unicast, setting the machines' IPs appropriately in your elasticsearch.yml.
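For the unicast route, a sketch of the equivalent elasticsearch.yml entries (1.x syntax, mirroring the flags in the accepted answer; shown for the node on 192.168.1.10):
# disable multicast and list both hosts explicitly
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.10:9300", "192.168.1.20:9300"]
# each node publishes its own host's routable IP
network.publish_host: 192.168.1.10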
