I'm attempting to run an ELK stack using Docker. I found docker-elk which has already set up the config for me, using docker-compose.
I'd like to store the elasticsearch data on the host machine instead of in a container. As per docker-elk's README, I added a volumes entry to the elasticsearch section of docker-compose.yml:
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200"
    - "9300"
  volumes:
    - ../../env/elasticsearch:/usr/share/elasticsearch/data
However, when I run docker-compose up I get:
$ docker-compose up
Starting dev_elasticsearch_1
Starting dev_logstash_1
Starting dev_kibana_1
Attaching to dev_elasticsearch_1, dev_logstash_1, dev_kibana_1
kibana_1 | Stalling for Elasticsearch
elasticsearch_1 | [2016-03-09 00:23:35,193][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
elasticsearch_1 | Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.data' (/usr/share/elasticsearch/data/elasticsearch)
elasticsearch_1 | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/elasticsearch
elasticsearch_1 | at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
... etc ...
Looking in ../../env, the elasticsearch directory was indeed created, but it was empty. If I create ../../env/elasticsearch/elasticsearch then I get an access error for /usr/share/elasticsearch/data/elasticsearch/nodes. If I create /nodes then I get an error for /nodes/0, etc...
In short, it appears that the container doesn't have write permissions on the directory.
How do I get it to have write permissions? I tried chmod a+wx ../../env/elasticsearch, and then it manages to create the next directory, but that directory has permissions drwxr-xr-x and it gets stuck again.
I don't like the idea of having to run this as root.
Docker doesn't tend to worry about these things in its base images because it expects you to use volumes or volume containers; mounting to the host gets second-class support. But as long as the UID that owns the directory is not zero (and based on our comment exchange it seems it's not), you should be able to get away with running elasticsearch as the user who already owns the directory. You could try removing and re-adding the elasticsearch user inside the container, specifying its UID.
You would need to do this at entrypoint time, so your best bet would be to build a custom container. Create a file called my-entrypoint with these contents:
#!/bin/bash
# Allow running arbitrary one-off commands
[[ $1 && $1 != elasticsearch ]] && exec "$@"
# Otherwise, fix perms and then delegate the rest to the vanilla entrypoint:
# recreate the elasticsearch user with the UID that owns the data directory
target_uid=$(stat -c %u /usr/share/elasticsearch/data)
userdel elasticsearch
useradd -u "$target_uid" elasticsearch
. /docker-entrypoint "$@"
Make sure it's executable (chmod +x my-entrypoint). Then create a Dockerfile with these contents:
FROM elasticsearch
COPY my-entrypoint /
ENTRYPOINT ["/my-entrypoint"]
And finally update your docker-compose.yml file:
elasticsearch:
  build: .
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200"
    - "9300"
  volumes:
    - ../../env/elasticsearch:/usr/share/elasticsearch/data
Now when you run docker-compose up it should build an elasticsearch container with your changes.
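If compose keeps starting the previously built image instead of picking up your changes, you can force the build explicitly (this assumes the Dockerfile sits next to docker-compose.yml, as above):

docker-compose build elasticsearch
docker-compose up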
(I had to do something like this once with apache for Magento.)
Related
I am on my MacBook terminal, trying to get a jenkins container up and running on my local machine.
First, I created a docker-compose.yml:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
networks:
  net:
As you can see in the volumes section, I have defined the jenkins_home folder under my current directory as the volume for jenkins data.
Then under my current directory of my machine, I created a folder named jenkins_home. Here is my current directory:
-rw-r--r-- 1 john 1349604816 220 Sep 4 00:08 docker-compose.yml
drwxr-xr-x 2 john 1349604816 64 Sep 4 00:06 jenkins_home
As you can see, I need to change the ownership of the jenkins_home folder so that the jenkins container can write data to it (the folder's owner UID is not 1000, which is the UID the container runs as). So I executed:
sudo chown 1000:1000 jenkins_home/
Then, my current directory looks like this:
-rw-r--r-- 1 john 1349604816 220 Sep 4 00:08 docker-compose.yml
drwxr-xr-x 2 1000 1000 64 Sep 4 00:06 jenkins_home
After that I ran my container with docker-compose up, but I ended up with this error:
Starting jenkins ... done
Attaching to jenkins
jenkins | touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
jenkins | Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
jenkins exited with code 1
Why do I still get the permission error after changing the ownership of the jenkins_home folder under my current directory on my machine?
P.S. I understand there could be other ways to get a jenkins container running, but I would still like to understand what is wrong with my approach and hopefully get it to work.
Jenkins needs to create or use an existing jenkins_home directory. When Docker sees that the jenkins_home volume doesn't exist on your machine, it creates it with your macOS UID and GID. If you create the jenkins_home folder yourself, you must keep your current directory permissions and not change them. The UID and GID the container runs with aren't the same as on your machine; they may differ.
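A quick way to see the mismatch is to compare the UID inside the image with your own on the host (a sketch; the jenkins/jenkins image is expected to run as the jenkins user, UID 1000):

# Print the UID/GID the jenkins image runs as (expected: uid=1000(jenkins))
docker run --rm --entrypoint id jenkins/jenkins
# Compare with your own UID on the host (e.g. 1349604816 in the listing above)
id -u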
Linux namespaces provide isolation for running processes, limiting
their access to system resources without the running process being
aware of the limitations. For more information on Linux namespaces,
see Linux namespaces.
The best way to prevent privilege-escalation attacks from within a
container is to configure your container’s applications to run as
unprivileged users. For containers whose processes must run as the
root user within the container, you can re-map this user to a
less-privileged user on the Docker host. The mapped user is assigned a
range of UIDs which function within the namespace as normal UIDs from
0 to 65536, but have no privileges on the host machine itself.
There's a wonderful video explaining how Docker works with namespaces.
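For reference, on a Linux host that remapping is enabled in /etc/docker/daemon.json, followed by a daemon restart (a sketch of the config; Docker Desktop for Mac manages file ownership differently, so this applies to Linux hosts):

{
  "userns-remap": "default"
}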
Does the actual jenkins user/group exist on the Mac?
This is what I do on my Linux servers, where:
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
On my alpine server:
addgroup -g ${gid} ${group}
adduser -u ${uid} -G ${group} -s /bin/bash -D ${user}
to become
addgroup -g 1000 jenkins
adduser -u 1000 -G jenkins -s /bin/bash -D jenkins
On my CentOS 8 server:
groupadd -g ${gid} ${group}
useradd -u ${uid} -g ${group} -s /bin/bash -d ${user}
to become
groupadd -g 1000 jenkins
useradd -u 1000 -g jenkins -s /bin/bash -d jenkins
then:
sudo chown jenkins:jenkins jenkins_home/
I do not use Mac, but I presume it is similar
UPDATE
Based on all the above, try the following:
docker-compose.yml
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net
networks:
  net:
I have added the following:
port 50000 (only needed if you want to attach build slave servers, as opposed to just running builds on the master)
volume /var/run/docker.sock (to be able to use the docker daemon with Jenkins, you need to mount the volume)
!!DO THE FOLLOWING!! Delete the original jenkins_home directory that you created before, then run docker-compose up. Since the host volume directory no longer exists, Docker will create the required directory on the host based on the configuration in the docker-compose.yml (in this case $PWD/jenkins_home), so it will have the correct ownership and permissions for the jenkins container to use it.
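A minimal sketch of those steps, run from the directory containing docker-compose.yml:

# Remove the manually created (and manually chown-ed) directory
rm -rf jenkins_home
# Let Docker create jenkins_home itself when the stack starts
docker-compose up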
If that doesn't work, make the jenkins container run in privileged mode, see below:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    privileged: true
    user: root
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net
networks:
  net:
In my docker-compose.yml file, I have this container definition:
elasticsearch:
  image: elasticsearch:2.3.5
  volumes:
    - esdata:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
    - 9300:9300
I cannot find the elasticsearch data because I don't know where esdata is located. How is it mapped on my host machine? Where is that directory? I'm running this on macOS High Sierra.
The mapping is HOST:CONTAINER. In my docker-compose file I write "./:/example/of/route", and a ./ path like that ends up in the same directory as the docker-compose file. However, because esdata has no ./ or absolute path prefix, it is a named volume managed by Docker, not a directory next to your compose file. You can check a container's mounts with docker inspect [container name]. You can also try find / -type d -name 'esdata' to search for the directory on your host.
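A sketch of how to locate a named volume (the volume name is usually prefixed with the compose project name, assumed here to be myproject):

# List the volumes Docker manages
docker volume ls
# Show the volume's details, including its "Mountpoint" path
docker volume inspect myproject_esdata

Note that on Docker for Mac the Mountpoint path lives inside the Docker VM, not directly on the macOS filesystem.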
I'm trying to mount a directory with configuration files in my docker-compose.yml.
In my case it is logstash, which tells me the mounted directory is empty.
Opening a bash shell in the container and running ls -la in the parent directory shows that the pipeline directory is empty and owned by root.
One weird thing is that it worked a few days ago.
docker-compose.yml:
version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.6.3
    ports:
      - 5000:5000
      - 8989:8989
    volumes:
      - C:/PROJECT_DIR/config/logstash/pipeline/:/usr/share/logstash/pipeline/
I found it easier to experiment with docker itself, as it gives more feedback:
docker run --rm -it -v C:/PROJECT_DIR/config/logstash/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.6.3
From here and some googling I found out I had to reset my shared drive credentials under "Docker for Windows" -> Settings... -> Shared Drives, because I had changed my Windows domain user password.
If you changed your system username or password then you need to re-apply the credentials to get the volume mount working.
I have 2 containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
but I have this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I construct my docker run command to execute this? (The command is not a simple ls -al, but it's fine for the demo.)
When you run the containers separately with docker run, they are not linked on the same docker network so the mongo container is not accessible from the app container. To remedy this, you should use either:
--link to mark the app container as linked to the mongo container. This works, but is deprecated.
a user-defined docker network that both containers join; this is more involved, but it is the recommended architecture
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
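A minimal sketch of the network approach (the network name appnet is arbitrary; the mongo container must be attached to the same network):

# Create a user-defined network and attach the running mongo container to it
docker network create appnet
docker network connect appnet mgmt-mongo

# Run the one-off command on that network; mgmt-mongo now resolves by name
docker run --rm --network appnet \
  -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" \
  gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al

Alternatively, since docker-compose already created a network for the stack, the one-off container can simply join that one; docker network ls shows its name (typically <project>_default).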
postgres:9.5
I tried rebooting, running docker-compose build --no-cache, and deleting the images and containers and building again.
I have many projects and none of them starts; they all keep the same configuration...
Mac OS X Sierra
Apparently the containers were not deleted cleanly. I tried the following, and after a rebuild it works OK.
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)
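On current Docker versions the same cleanup can be done in one step (use with care: this removes all stopped containers and all unused images):

docker system prune -a
# add --volumes to also remove unused local volumes
docker system prune -a --volumes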
docker-compose.yml
version: '2'
services:
  web:
    build: .
    image: imagename
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "3000:3000"
      - "8000:8000"
    volumes:
      - .:/code
    depends_on:
      - migration
      - redis
      - db
  redis:
    image: redis:3.2.3
  db:
    image: postgres:9.5
    volumes:
      - .:/tmp/data/
  npm:
    image: imagename
    command: npm install
    volumes:
      - .:/code
  migration:
    image: imagename
    command: python manage.py migrate --noinput
    volumes:
      - .:/code
Dockerfile:
FROM python:3.5.2
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
If you're coming here from Google and finding that multiple containers are complaining about disk space, the issue may be that your local Docker installation has maxed out its disk image size. This is configurable in Docker for Mac under Preferences -> Resources (Disk image size).
You can do docker volume prune to remove all unused local volumes.
If you do not have any critical data you can blow away the docker volume.
docker volume ls
docker volume rm your_volume
In my case: go to the Docker Dashboard -> Settings, increase the Disk image size, then restart.
Recently I faced a similar issue with postgres and mysql databases: all of a sudden the containers exited without any external trigger. I spent much time on this issue; it was finally identified as a storage allocation issue on the server.
There were 13 containers working in the same path, among them 3 postgres and 1 mysql database containers. These containers exited and the application stopped working. There were 2 errors in the docker logs, mainly:
postgresql database directory appears to contain a database and
FATAL: could not write lock file "postmaster.pid": No space left on device
I tried stopping all other services and starting only the database containers, but the issue repeated.
First of all, check the storage utilisation status with the command below:
df [OPTION]... [FILE]...
df -hP
In my case, it was showing 98% utilised and the databases were not able to add new records, which caused the problem. After allocating additional space to the NFS mount, the issue cleared.
Once that is done, also verify the RAM utilisation, which will now have increased:
free -h
This returns values for total, used, free, shared, buff/cache, and available memory. If you stop the containers one by one and restart them, you can see that they consume memory from the shared category. In my case, shared initially showed almost 18M, which was not enough for the databases to run alongside all the other containers. After the NFS mount was increased, the shared RAM also increased to 50M, which meant all services could work fine.
So it is purely a physical storage space issue, and you should act proactively: remove old unused files, Docker images (which take huge amounts of space), local Docker volumes, etc. Check the Docker documentation for how to perform these steps:
https://docs.docker.com/config/pruning/
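To see what is actually consuming the space before pruning, Docker can break its disk usage down by images, containers, and volumes:

docker system df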
I faced this problem on Docker Desktop for Mac after I rebuilt the containers and started them via docker compose up, while the older versions of those containers were still running because they were set to restart automatically.
That is, the PostgreSQL DB couldn't take the lock on the named volume, because the still-running old container was accessing it concurrently.
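In that situation, stopping the leftover containers before bringing the new ones up releases the lock (a sketch; --remove-orphans also removes containers left over from an older compose file):

docker compose down --remove-orphans
docker compose up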