Hi, I'm new to Elasticsearch and Docker, so forgive me if this question is a bit basic.
I want to start an Elasticsearch container (using the official elasticsearch image on Docker Hub) with a config file from my host machine.
Currently what I do is:
docker run -d elasticsearch elasticsearch -Des.cluster.name="testcluster1" -Des.node.name="node1"
and so on.
Is there a way to start the container using a config file, i.e. the elasticsearch.yml file from my host machine?
The host machine is running CentOS (not sure if relevant, but I thought I'd add it just in case).
Thank you
You can look into ONBUILD in a Dockerfile.
An instruction marked ONBUILD is not executed in the build that defines it; it is triggered later, when another image is built using this one as its base.
$ cat Dockerfile
FROM elasticsearch
ONBUILD ADD elasticsearch.yml /SOME_PATH
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
Secondly, you can also mount a host folder when running the container:
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch -Des.cluster.name="testcluster1" -Des.node.name="node1"
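With the mount in place you can move the same settings into the file and drop the -Des.* flags entirely. A sketch of the host-side setup (the values mirror the flags from the question):
$ mkdir -p config
$ cat > config/elasticsearch.yml <<'EOF'
cluster.name: testcluster1
node.name: node1
EOF
$ docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
Note that mounting over the whole config directory hides the image's defaults, so the image may also expect files like logging.yml there; copy the defaults into config/ first if Elasticsearch fails to start.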
Refer to:
Dockerfile Best Practices
I have one container running a Spring Boot application on Docker, and one container running Postgres with all the settings. How can I run both of them, linked together?
The image springboot-postgresql corresponds to the Spring Boot application, and postgres refers to PostgreSQL.
Postgres is listening on 0.0.0.0, port 5432.
Please suggest if there's another way other than writing a .yml file and using docker compose up.
Thanks for the help.
The way containers communicate with each other is over a Docker network.
First, you need to create a network:
$ docker network create sprintapp
The command above creates a network named sprintapp.
Then, you need to specify to a container to be inside the network:
$ docker run --name [CONTAINERNAME] --network sprintapp [IMAGE]
This way, all containers within the network can talk to each other, using [CONTAINERNAME] as the hostname to locate each one.
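Applied to your two images, a sketch (the container names and the app's port 8080 are assumptions; adjust to your setup):
$ docker network create sprintapp
$ docker run -d --name postgres --network sprintapp postgres
$ docker run -d --name springboot --network sprintapp -p 8080:8080 springboot-postgresql
Inside the springboot container, the database is then reachable at postgres:5432, e.g. via a JDBC URL like jdbc:postgresql://postgres:5432/yourdb (the database name is a placeholder).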
More info about this:
docker run reference
docker network reference
I have an Ubuntu Docker container, and I installed the Elasticsearch service in it.
When I use the command "curl -X GET 'localhost:9200'" inside the container, it returns the version, the name, everything looks right.
That means Elasticsearch is configured correctly, but when I access it from my browser outside of Docker, it doesn't work.
I have already configured the network settings in the elasticsearch.yml file at this path:
/etc/elasticsearch/elasticsearch.yml
I don't know the reason.
When I start the container I use the command:
docker run -it -p 9200:9200 ubuntu/elastic
Extra information: Elasticsearch runs inside Ubuntu, which is itself a Docker container. I start the Ubuntu container, and after that, inside it, I start the Elasticsearch service.
You should use the official Elasticsearch Docker image instead, and use the command from the documentation.
From the Elasticsearch documentation:
The Docker image can be retrieved with the following command:
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.4.1
[...]
Running Elasticsearch from the command line
Development mode
Elasticsearch can be quickly started for development or testing use
with the following command:
docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" docker.elastic.co/elasticsearch/elasticsearch:5.4.1
By default, Elasticsearch listens on 127.0.0.1, so it won't be accessible from outside the container. In order for it to be accessible outside the container, you will have to launch it using:
docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" docker.elastic.co/elasticsearch/elasticsearch:5.4.1
This will bind Elasticsearch to all IP addresses (0.0.0.0), and then you will be able to access it. Details can be found here.
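Once the container is up, you can verify the port mapping from the Docker host itself with the same check you used inside the container:
$ curl -X GET 'http://localhost:9200'
From another machine, replace localhost with the Docker host's IP address.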
Currently I have the following setup:
A Hyper-V VM running Windows 10, which is my dev machine. My CPU doesn't support nested virtualization.
Docker for Windows is installed on the host machine which runs Windows 10 too.
Is it possible to run docker build from the VM against Docker on the host machine?
Yes, you can. According to the documentation, there are three ways to do this:
# with Git repo
docker -H xxx build https://github.com/docker/rootfs.git#container:docker
# Tarball contexts
docker -H xxx build http://server/context.tar.gz
# Text files
docker -H xxx build - < Dockerfile
When doing this, you need to make sure that:
your client has Docker installed;
all the dependent files are accessible to the host.
In the end, the Docker image will be created on your host.
Update
The docker options are documented here now.
export DOCKER_HOST=ssh://sammy@your_server_ip
Then any docker build you run will execute on your host machine.
reference
There are (from my understanding) 3 different ways of building a Docker image using a remote Docker host/daemon:
Using the DOCKER_HOST variable
Using contexts
Using the -H CLI option
as in:
DOCKER_HOST="ssh://user@docker-build.dev" docker build -t toto .
docker context use remote-build-host && docker build -t toto .
docker -H ssh://user@docker-build.dev:22 build -t toto .
Please note that the port is required in the last form (-H).
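For the context variant, the context has to exist first; it can be created like this (reusing the hypothetical SSH address from the examples above):
docker context create remote-build-host --docker "host=ssh://user@docker-build.dev"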
See this page and this one too for more info.
I'm trying to get laradock (Docker + Laravel) working,
following the instructions at https://github.com/LaraDock/laradock
I installed Docker and cloned laradock.git.
The laradock folder is located at
/myHD/...path../www/laradock
At the same level I have my Laravel projects:
/myHD/...path../www/project01
I edited laradock/docker-compose.yml:
### Laravel Application Code Container ######################
volumes_source:
    image: tianon/true
    volumes:
        - ../project01/:/var/www/laravel
After this (but I'm not sure whether this is the correct way to reload after editing the compose file), I did:
docker-compose up -d nginx mysql
Now I get an nginx 404 Not Found error: how can I debug the problem?
Additional info:
I entered the machine via bash:
docker-compose exec --user=laradock workspace bash
but I can't find
the /etc/nginx/... path (the nginx folder doesn't exist!?!?)
Guessing from your setup, your nginx is not located in the workspace container; it resides in a separate container. You've executed the following:
docker-compose up -d nginx mysql
That would probably only run the nginx and mysql containers, not your php-fpm container. Also, the path to your volume is important, as the configuration of your nginx server depends on it.
To run php-fpm, add php-fpm or something similar to the docker-compose up command; check what it is called in your docker-compose.yml file, e.g.
docker-compose up -d nginx mysql phpfpm
To access your nginx container, first of all execute:
docker ps -a
From the list, look for the ID of your nginx container, then run:
docker exec -it <container-id> bash
This should then give you access to your nginx container to make the required changes.
Or, without directly accessing the container, simply make the changes in the nginx configuration file: look for the 'server' block and its 'root' directive, and change the root from /var/www/laravel/public to the new directory /project01/public.
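For orientation, the relevant part of such an nginx site config typically looks something like this (a sketch, not laradock's exact file; only the root line should need changing):
server {
    listen 80;
    server_name localhost;
    root /var/www/laravel/public;   # point this at your project's public directory
    index index.php index.html;
}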
Execute the command to bring down your containers:
docker-compose down
Then start over again with:
docker-compose up -d nginx mysql phpfpm
Give it a go
I have a Spring Boot application which communicates with Elasticsearch 5.0.0 alpha 2.
My application successfully communicates with Elasticsearch and performs several queries.
When I try to dockerize my application, it fails to communicate with Elasticsearch, and I get the following error:
None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
I have spent a lot of time searching the internet; the problems I found are about Elasticsearch itself being dockerized, but in my case the client is dockerized, and it works fine without Docker.
The command I used to create the docker image is: docker build -t my-service .
The Dockerfile is:
FROM java:8
VOLUME /tmp
ADD ./build/libs/myjarfile-2.0.0.jar app.jar
EXPOSE 8090
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
To run the image I use: docker run --name myname -d -p 8090:8090 -t my-service
Can someone share his/her experience with this issue?
Thanks
Guy Hudara
The problem is that your Elasticsearch is not reachable from your dockerized application. When you put something in a Docker container, it also gets isolated at the network layer, and localhost is the localhost of the Docker container, not of the host itself. Therefore, if you have Elasticsearch in a Docker container as well, use container linking and environment variable injection, or point your app at your host machine's address on its main network interface (not the loopback).
Option 1
Assuming that Elasticsearch exposes port 9200, try to run the following:
$ docker run -d --name=elasticsearch elasticsearch
$ docker run -d --name=my-app --link elasticsearch:elasticsearch -p 8090:8090 my-app
Then you can define the Elasticsearch address in your app using the env variable ${ELASTICSEARCH_PORT_9200_TCP_ADDR}.
Option 2
Assuming your host machine runs on 192.168.1.10, you can also do the following:
$ docker run -d -p 9200:9200 elasticsearch
$ docker run -d -p 8090:8090 my-app
Note that the name for the elasticsearch container is optional here, but exposing the Elasticsearch port is mandatory. In this case you'll have to configure the Elasticsearch host in your app with the address 192.168.1.10.
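In either case, the application must point at that address instead of 127.0.0.1. If you happen to use Spring Data Elasticsearch, a sketch with its standard properties (the concrete values are assumptions; note that your error message shows the transport client on port 9300, so that port, not 9200, is the one the client needs, and in Option 2 it would also have to be published with -p 9300:9300):
# application.properties
spring.data.elasticsearch.cluster-name=elasticsearch
spring.data.elasticsearch.cluster-nodes=192.168.1.10:9300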