How to install Elasticsearch plugins using docker compose - elasticsearch

I have a docker-compose.yml file with an elastic search image:
elasticsearch:
  image: elasticsearch
  ports:
    - "9200:9200"
  container_name: custom_elasticsearch_1
If I want to install additional plugins like the HQ interface or the attachment mapper, I have to do a manual installation with the following commands:
$ docker exec custom_elasticsearch_1 plugin install royrusso/elasticsearch-HQ
$ docker exec custom_elasticsearch_1 plugin install mapper-attachments
Is there a way to install them automatically when I run the docker-compose up command?

Here is a blog post by Elastic pertaining to exactly that! You need to use a Dockerfile which executes commands to extend an image. Your Dockerfile will look something like this:
FROM elasticsearch
RUN elasticsearch-plugin install royrusso/elasticsearch-HQ
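To get the plugins installed automatically on docker-compose up, you can point the service at that Dockerfile with a build: section instead of an image: line. A minimal sketch, assuming the Dockerfile lives in an ./elasticsearch directory next to docker-compose.yml:
elasticsearch:
  build:
    context: ./elasticsearch    # directory containing the Dockerfile above (assumed layout)
    dockerfile: Dockerfile
  container_name: custom_elasticsearch_1
  ports:
    - "9200:9200"
Running docker-compose up --build then rebuilds the image with the plugins baked in.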

Inspired by @NickPridorozhko's answer, but updated and tested with Elasticsearch ^7.0.0 (with docker stack / swarm); an example with analysis-icu:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
  user: elasticsearch
  command: >
    /bin/sh -c "./bin/elasticsearch-plugin list | grep -q analysis-icu
    || ./bin/elasticsearch-plugin install analysis-icu;
    /usr/local/bin/docker-entrypoint.sh"
  ...
The main differences are the updated commands for ^7.0.0 and the use of the Docker entrypoint instead of ./bin/elasticsearch (in a stack context, you would otherwise get an error related to a limit on spawnable processes).

This works for me: install the plugin first, then continue with starting Elasticsearch.
elasticsearch:
  image: elasticsearch
  command:
    - sh
    - -c
    - "plugin list | grep -q plugin_name || plugin install plugin_name;
      /docker-entrypoint.sh elasticsearch"

The ingest-attachment plugin requires additional permissions and prompts the user during installation. I used the yes command:
elasticsearch:
  image: elasticsearch:6.8.12
  command: >
    /bin/sh -c "./bin/elasticsearch-plugin list | grep -q ingest-attachment
    || yes | ./bin/elasticsearch-plugin install --silent ingest-attachment;
    /usr/local/bin/docker-entrypoint.sh eswrapper"

If you're using the ELK stack from sebp/elk, you need to set up your Dockerfile like this:
FROM sebp/elk
ENV ES_HOME /opt/elasticsearch
WORKDIR ${ES_HOME}
RUN yes | CONF_DIR=/etc/elasticsearch gosu elasticsearch bin/elasticsearch-plugin \
install -b mapper-attachments
As seen on https://elk-docker.readthedocs.io/#installing-elasticsearch-plugins.
It should also work for an Elasticsearch-only setup.
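A minimal sketch of building and running that image (the elk-with-plugins tag is an assumption; 5601, 9200 and 5044 are the ports the sebp/elk image documents):
docker build -t elk-with-plugins .
docker run -d -p 5601:5601 -p 9200:9200 -p 5044:5044 elk-with-plugins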

For anybody using Elasticsearch version 7 or later who wants to install a plugin through the Dockerfile: use the --batch flag.
FROM elasticsearch:7.16.2
RUN bin/elasticsearch-plugin install repository-azure --batch

An example with Elasticsearch v6.8.15.
For simplicity we will use a docker-compose.yml and a Dockerfile.
The content of the Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.8.15
RUN elasticsearch-plugin install analysis-icu
RUN elasticsearch-plugin install analysis-phonetic
And the content of docker-compose.yml:
version: '2.2'
services:
  elasticsearch:
    #image: docker.elastic.co/elasticsearch/elasticsearch:6.8.15
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=X-Requested-With,Content-Type,Content-Length,Authorization
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
      - esplugins1:/usr/share/elasticsearch/plugins
    ports:
      - 9268:9200
    networks:
      - esnet
  elasticsearch2:
    #image: docker.elastic.co/elasticsearch/elasticsearch:6.8.15
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=X-Requested-With,Content-Type,Content-Length,Authorization
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
      - esplugins2:/usr/share/elasticsearch/plugins
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esplugins1:
    driver: local
  esplugins2:
    driver: local
networks:
  esnet:
This is the default Elasticsearch 6.8.15 docker-compose.yml file from the Elasticsearch website itself, https://www.elastic.co/guide/en/elasticsearch/reference/6.8/docker.html#docker-cli-run-prod-mode. I added two named volumes, esplugins1 and esplugins2, for the two nodes, so the plugins are persisted across docker-compose down.
Note: if you ever run docker-compose down -v, these volumes will be removed!
I commented out the image line and moved that image into the Dockerfile, then added the elasticsearch-plugin install commands with RUN. The elasticsearch-plugin command is natively available in the Elasticsearch container, which you can check once you are in the container shell.
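A short sketch of building the stack and verifying the plugins afterwards (container name taken from the compose file above):
docker-compose up --build -d
docker exec elasticsearch elasticsearch-plugin list
# should list analysis-icu and analysis-phonetic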

Related

Docker-compose for production running laravel with nginx on azure

I have an app that is working, but I am having problems making it run on Azure.
I have the following docker-compose:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I am unsure whether the volumes in nginx are okay; I think that perhaps I should create a new Dockerfile to copy the files. However, this would increase the size of the project a lot.
When using App Services on Azure, the deployment is assigned a random port, which is why I have the envsubst instruction in the command. I appreciate any other suggestions for making this project run on Azure.
I'm assuming you're trying to persist the storage in your app to a volume. Check out this doc issue. Now I don't think you need
volumes:
  - ./:/var/www/
  - ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
  - uploads:/var/www/simple/public/uploads
  - logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (it's not available for Windows app plans yet), and mount the relative path /var/www/simple/public/uploads to the file path of the storage container.
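For reference, a minimal, hypothetical sketch of what the nginx.conf.template from the question might look like; only ${PORT} gets replaced, because the command restricts envsubst to '$${PORT}', so nginx's own $-variables survive untouched (the document root and the app:9000 upstream are assumptions based on the compose file):
server {
    listen ${PORT};    # substituted by envsubst at container start
    root /var/www/simple/public;
    index index.php;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
        fastcgi_pass app:9000;    # the Laravel app container exposes port 9000
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}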

Elasticsearch service running on minikube cluster not reachable from within the cluster

I am using kompose to deploy this docker-compose.yaml
version: '3'
services:
  webapp:
    build:
      context: ../../../
      dockerfile: config/docker/dev/Dockerfile-dev
    container_name: myWebApp-dev
    command: ["/bin/sh", "-ec","sleep 1000"]
    image: 'localhost:5002/webapp:1'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch
    labels:
      kompose.image-pull-policy: 'IfNotPresent'
      kompose.service.type: nodeport
    ports:
      - "4000:4000"
      - "3000:3000"
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch
    command: ["/bin/sh", "-ec","sleep 1000"]
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: es02
    command: ["/bin/sh", "-ec","sleep 1000"]
    environment:
      - node.name=es02
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
to minikube.
The elasticsearch pod and service are running. However, the webapp cannot access the elasticsearch cluster: I get a connection refused error when curling from within the webapp pod -> curl: (7) Failed to connect to 10.108.5.31 port 9200: Connection refused. Does anyone know the reason for this problem and how to fix it?
In the elasticsearch section, you have a shell command that just sleeps, and no Elasticsearch instance is ever started after that.
command: ["/bin/sh", "-ec","sleep 1000"]
So it looks like there is no Elasticsearch running inside the container, and that's why the connection is refused.
To fix:
Get rid of the command: of elasticsearch and es02; that way, the default command will be used.
Note:
When Elasticsearch starts, you will face two errors (described below) with this compose YAML in Kubernetes. They are unrelated to this post, but I will try to give you some direction on where to look.
ERROR: [2] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Here, you need to update the host system's vm.max_map_count. Exec into the minikube VM with minikube ssh and run sudo -s sysctl -w vm.max_map_count=262144 to change the max_map_count of the host kernel. This works because Docker containers don't provide kernel-level isolation.
For minikube,
minikube ssh 'sudo -s sysctl -w vm.max_map_count=262144'
ulimits is not available in kompose (see the issue here). So you either have to remove bootstrap.memory_lock=true from both environment: sections, or you may need to update the Docker image. This question has already been asked here on Stack Overflow.
So here is the improved kompose YAML (works well on minikube):
version: '3'
services:
  webapp:
    build:
      context: ../../../
      dockerfile: config/docker/dev/Dockerfile-dev
    container_name: myWebApp-dev
    command: ["/bin/sh", "-ec","sleep 1000"]
    image: 'localhost:5002/webapp:1'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch
    labels:
      kompose.image-pull-policy: 'IfNotPresent'
      kompose.service.type: nodeport
    ports:
      - "4000:4000"
      - "3000:3000"
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata02:/usr/share/elasticsearch/data
However, I would suggest following the official Elasticsearch documentation instead of using compose to install Elasticsearch in Kubernetes.
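A quick way to verify the fix from inside the webapp pod (the pod name is a placeholder; the elasticsearch service name comes from the compose file above):
# find the webapp pod name
kubectl get pods
# curl the elasticsearch service from inside the webapp pod
kubectl exec -it <webapp-pod-name> -- curl http://elasticsearch:9200
# a JSON banner with the cluster name means the node is reachable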

How to run elasticsearch via docker compose or swarm mode and install plugin with command

Problem Statement
I have a docker-compose.yml file (v3) that looks like the following:
version: '3'
services:
  elastic:
    restart: always
    image: elasticsearch:2.3.1
    command: ["sh", "-c", "./bin/plugin install delete-by-query && ./bin/elasticsearch"]
    volumes:
      - /home/styfle/esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    restart: always
    image: kibana:4.5.4
    ports:
      - 5601:5601
    links:
      - elastic:elasticsearch
When I run docker-compose up elastic it appears that the plugin installed correctly, but I get the message "don't run elasticsearch as root".
Creating dev_elastic_1 ... done
Attaching to dev_elastic_1
elastic_1 | -> Installing delete-by-query...
elastic_1 | Trying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/delete-by-query/2.3.1/delete-by-query-2.3.1.zip ...
elastic_1 | Downloading ..DONE
elastic_1 | Verifying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/delete-by-query/2.3.1/delete-by-query-2.3.1.zip checksums if available ...
elastic_1 | Downloading .DONE
elastic_1 | Installed delete-by-query into /usr/share/elasticsearch/plugins/delete-by-query
elastic_1 | Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
elastic_1 | at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:93)
elastic_1 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:144)
elastic_1 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
elastic_1 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
elastic_1 | Refer to the log for complete error details.
dev_elastic_1 exited with code 74
Question
How can I install the plugin and run as the elasticsearch user instead of the root user?
As per the docker-compose architecture and cleanup policies, you cannot run a docker-compose command to initiate a subshell.
You can make some bash and Docker changes in your current docker-compose.yml file, as shown below:
version: '3'
services:
  elastic:
    restart: always
    image: elasticsearch:2.3.1
    user: ${MY_USER_ID}
    command: ["sh", "-c", "./bin/plugin install delete-by-query && ./bin/elasticsearch"]
    volumes:
      - /home/styfle/esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    restart: always
    user: ${MY_USER_ID}
    image: kibana:4.5.4
    ports:
      - 5601:5601
    links:
      - elastic:elasticsearch
I have added the line user: ${MY_USER_ID} to the above docker-compose.yml file. After this, use the command below to spin up the containers and start Elasticsearch:
MY_USER_ID=$(id -u):$(id -g) docker-compose up elastic
Test it and let me know the feedback.

Docker compose containers fail and exit with code 127 missing /bin/env bash

I'm new to Docker, so bear with me if I use a wrong term.
I have Docker Tools installed on Windows 7 and I'm trying to run the Docker Compose file of a proprietary existing project stored in a git repository, which has probably only ever been run on Linux.
These are the commands I ran:
docker-machine start
docker-machine env
@FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i
(this command was output by step 2)
docker-compose -f <docker-file.yml> up
Most of the Docker work has gone fine (image download, extraction, etc).
It is failing at container start: some containers run fine - I recognize a working MongoDB instance, since its log doesn't report any errors - but other containers exit pretty soon with an error code, e.g.:
frontend_1 exited with code 127
Scrolling up a bit in the console, I can see lines like:
No such file or directoryr/bin/env: bash
I have no idea where to go from here. I tried launching compose from a Cygwin terminal, but got the same result.
Docker Compose file
version: "2"
services:
frontend:
command: "yarn start"
image: company/application/frontend:1
build:
context: frontend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && yarn run dev"
image: company/application/backend:1
build:
context: backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "4000:4000"
volumes:
- ./backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
generator-backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && npm run dev"
image: company/generator/backend:1
build:
context: generator-backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "5000:5000"
volumes:
- ./generator-backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
db:
image: mongo:3.4
volumes:
- mongo:/data/db
ports:
- "27017:27017"
volumes:
mongo:
It turned out to be a matter of file line endings, introduced by git clone, as pointed out by @mklement0 in his answer to the "env: bash\r: No such file or directory" question.
Disabling core.autocrlf and then recloning the repo solved it.
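A minimal sketch of the two usual fixes (pick one; the <repo-url> placeholder and the .gitattributes rule are generic examples, not taken from the project):
# option 1: disable CRLF conversion globally, then reclone
git config --global core.autocrlf false
git clone <repo-url>
# option 2: enforce LF inside the repository via a .gitattributes file containing:
# * text=auto eol=lf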

How to have "RUN" command in docker-compose similar to dockerfile?

Dockerfile
FROM elasticsearch:2
RUN /usr/share/elasticsearch/bin/plugin install --batch cloud-aws
from https://www.elastic.co/blog/elasticsearch-docker-plugin-management
Can someone please help me add an ES plugin via the docker-compose file?
version: '2'
services:
  nitrogen:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ~/mycode:/mycode
    depends_on:
      - couchdb
      - elasticsearch
  elasticsearch:
    image: elasticsearch:1.7.5
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
What is missing in the above docker-compose is the installation of the plugin.
I tried the following, but it runs on the local machine instead of inside the Docker container:
command: /usr/share/elasticsearch/bin/plugin install elasticsearch/elasticsearch-river-couchdb/2.6.0
You have to create your own Docker image, e.g. my-elasticsearch, with the Dockerfile you mentioned, and then refer to that image in docker-compose.yml.
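A minimal sketch of what that could look like, assuming the Dockerfile from the blog post sits in an ./elasticsearch directory (the my-elasticsearch tag and the context path are assumptions):
elasticsearch:
  build: ./elasticsearch    # directory containing the Dockerfile with the RUN plugin install step
  image: my-elasticsearch
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  ports:
    - "9200:9200"
    - "9300:9300"
docker-compose up --build then builds the image with the plugin installed before starting the container.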
