I have found a Docker image, devdb/kibana, which runs Elasticsearch 1.5.2 and Kibana 4.0.2. However, I would like to pass the configuration files for both Elasticsearch (i.e. elasticsearch.yml) and Kibana (i.e. config.js) into this Docker container.
Can I do that with this image itself? Or would I have to build a separate Docker container for that?
Can I do that with this image itself?
Yes, just use Docker volumes to pass in your own config files.
Let's say you have the following files on your Docker host:
/home/liv2hak/elasticsearch.yml
/home/liv2hak/kibana.yml
You can then start your container with:
docker run -d --name kibana -p 5601:5601 -p 9200:9200 \
-v /home/liv2hak/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml \
-v /home/liv2hak/kibana.yml:/opt/kibana/config/kibana.yml \
devdb/kibana
I was able to figure this out by looking at your image's Dockerfile parents, which are: devdb/kibana → devdb/elasticsearch → abh1nav/java7 → abh1nav/baseimage → phusion/baseimage,
and also by taking a peek into a devdb/kibana container: docker run --rm -it devdb/kibana find /opt -type f -name '*.yml'.
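If you want to double-check that the bind mounts took effect, a quick sanity check (assuming the container name and config paths used above) is to read the files back from the running container:
docker exec kibana cat /opt/elasticsearch/config/elasticsearch.yml
docker exec kibana cat /opt/kibana/config/kibana.yml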
Or would I have to build a separate Docker container for that?
I assume you mean build a separate Docker image? That would also work; for instance, the following Dockerfile would do it:
FROM devdb/kibana
COPY elasticsearch.yml /opt/elasticsearch/config/elasticsearch.yml
COPY kibana.yml /opt/kibana/config/kibana.yml
Now build the image: docker build -t liv2hak/kibana .
And run it: docker run -d --name kibana -p 5601:5601 -p 9200:9200 liv2hak/kibana
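To confirm the container came up with your configuration, a simple check from the host (assuming the ports published above) is:
curl http://localhost:9200
curl -I http://localhost:5601
The first should return Elasticsearch's JSON banner and the second Kibana's response headers.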
Related
I have an index.html file in a folder. I am mapping the local directory into the nginx Docker container.
When I run the nginx Docker container using the command:
docker run --name website -v C:\Users\prash\Documents\Programming\Spring\Docker:/usr/share/nginx/html:ro -d -p 8080:80 nginx:latest
The container starts successfully. But when I run the following command in bash, although the container starts, I can't open the index.html file in the browser:
docker run --name website -v $(pwd):/usr/share/nginx/html:ro -d -p 8080:80 nginx:latest
What might be the issue here?
I am new to Docker; I tried to find a solution for this, but couldn't find anything.
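One way to narrow this down is to compare what Docker actually mounted with what the container sees, for example:
docker inspect website --format '{{ json .Mounts }}'
docker exec website ls /usr/share/nginx/html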
I am following this wiki to set up some performance numbers for testing I am doing. I needed to set up Graphite to see my numbers.
So I ran this command, as mentioned in the wiki, on my Mac:
docker run -d --name graphite -p 80:80 -p 2003-2004:2003-2004 -p 2023-2024:2023-2024 -p 8125:8125/udp -p 8126:8126 graphiteapp/graphite-statsd
Below is what I got:
> docker run -d --name graphite -p 80:80 -p 2003-2004:2003-2004 -p 2023-2024:2023-2024 -p 8125:8125/udp -p 8126:8126 graphiteapp/graphite-statsd
Unable to find image 'graphiteapp/graphite-statsd:latest' locally
latest: Pulling from graphiteapp/graphite-statsd
aad63a933944: Pull complete
9b6d24804914: Pull complete
5f9542cd4cb1: Pull complete
09c978daf42b: Pull complete
Digest: sha256:18fbffd024cd540c7a57febfaa38c3dc5513f05db2263300209deb2a8ecd923c
Status: Downloaded newer image for graphiteapp/graphite-statsd:latest
ac248794f9cdea3bd1ab65659ec321d0aa0111de3f151c5e206b6503202a35e3
Now I ran my program, which pushes my metrics to Graphite, and then tried to configure my Grafana dashboard by launching the Grafana Docker container with the command below, as shown in that same wiki:
docker run -d --name -p 3000:3000 grafana grafana/grafana
But I got an error once I executed the above command:
> docker run -d --name -p 3000:3000 grafana grafana/grafana
Unable to find image '3000:3000' locally
docker: Error response from daemon: pull access denied for 3000, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
This is the first time I am working with Docker, so I have some issues setting it up; I have already installed Docker on my Mac. Any idea what is wrong here?
To explain the problem, here is your command:
docker run -d --name -p 3000:3000 grafana grafana/grafana
As you can see, no value is given for --name, so Docker takes the next argument (-p) as the container name and then treats 3000:3000 as the image to pull and run, which is exactly what the error shows. Use the command below. The flags mean:
--name => the name of the container, grafana in this case
-p => publish a container's port(s) to the host, 3000:3000 here
-d => run the container in the background and print the container ID
docker run -d -p 3000:3000 --name grafana grafana/grafana
Logs of the command:
docker run -d -p 3000:3000 --name grafana grafana/grafana
Unable to find image 'grafana/grafana:latest' locally
latest: Pulling from grafana/grafana
cbdbe7a5bc2a: Already exists
ed18d4ca725a: Pull complete
5ac007dea7db: Pull complete
33b8e7fbf663: Pull complete
09cd2fb04616: Pull complete
990c0b335bdb: Pull complete
Digest: sha256:4bbfcbf9372e1022bf51b35ec1aaab04bf46e01b76a1d00b424f45b63cf90967
Status: Downloaded newer image for grafana/grafana:latest
7748b112f5004a18144152ac7330749b83120914bb0ab0d3a7112ea16368bfa2
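Once it is running, you can check that Grafana is reachable (assuming the 3000:3000 mapping above):
docker ps --filter name=grafana
curl -I http://localhost:3000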
Just set --name grafana.
docker run -d --name grafana -p 3000:3000 grafana/grafana
Unable to find image 'grafana/grafana:latest' locally
latest: Pulling from grafana/grafana
cbdbe7a5bc2a: Already exists
ed18d4ca725a: Pull complete
....
....
Am I able to map a Windows folder into a Linux Docker container?
docker run -d -p 80:80 -p 443:443 -v c:/docker/volumes/nginx/docker.sock:/tmp/docker.sock:ro --network nginx-proxy-network --ip 172.18.0.3 jwilder/nginx-proxy
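For reference, a plain folder bind mount from a Windows host with Docker Desktop looks roughly like this (the host path below is only an illustration):
docker run -d -p 8080:80 -v c:/docker/volumes/nginx/html:/usr/share/nginx/html:ro nginx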
With docker compose I launch a Jenkins container, and I want to be able to execute docker commands from inside it (docker is installed on the server).
But when I tried a simple test, running the hello-world image, I got the following error:
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
I added the user to the docker group; what's wrong with my docker compose file?
In another post I saw that if I add this line:
/var/run/docker.sock:/var/run/docker.sock
my Jenkins container can communicate with docker.
My docker compose file:
jenkins:
  image: jenkins:2.32.3
  ports:
    - 8088:8080
    - 50000:50000
  volumes:
    - /home/my-user-name/docker-jenkins/jenkins_home:/var/jenkins_home
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /tmp:/tmp
To access the docker.sock file, you must run with a user that has filesystem access to read and write to this socket. By default that's with the root user and/or the docker group on the host system.
When you mount this file into the container, the mount keeps the same uid/gid permissions on the file, but those IDs may map to different users inside your container. Therefore, you should create a group inside the container, as part of your Dockerfile, with the same gid that exists on the host, and assign your jenkins user to that group so that it has access to the docker.sock. Here's an example from a Dockerfile where I do this:
...
ARG DOCKER_GID=993
RUN groupadd -g ${DOCKER_GID} docker \
&& useradd -m -d /home/jenkins -s /bin/sh jenkins \
&& usermod -aG docker jenkins
...
In the above example, 993 is the docker gid on my host.
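If you'd rather not hard-code the gid, one option (a sketch, assuming a Linux host with GNU stat; "my-jenkins" is just a placeholder tag) is to look it up from the socket at build time and pass it in as the build argument:
# group id that owns the docker socket on the host
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
# build the image from the Dockerfile above, passing the gid in
docker build --build-arg DOCKER_GID=${DOCKER_GID} -t my-jenkins .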
I can start Elasticsearch with Kibana using the following two docker commands:
docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch-pb elasticsearch
docker run -d -p 5601:5601 --name kibana-pb --link elasticsearch-pb:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana
But how do I start Elasticsearch with script support using Docker?
Usually this is done by adding two lines to the elasticsearch.yml file:
script.inline: on
script.indexed: on
How do I change the config file within the Docker image?
Build a custom image that includes those options.
Create a directory for your docker image
mkdir my_elasticsearch
cd my_elasticsearch
Create an elasticsearch.yml with all the options you need, including:
script.inline: on
script.indexed: on
Create a Dockerfile that copies the config file.
FROM elasticsearch
COPY elasticsearch.yml /container/path/to/elasticsearch.yml
Build and tag the image
docker build -t my/elasticsearch .
Then run your image
docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch-pb my/elasticsearch
You might want to publish your image to the Docker Hub or another registry so you only need to build it once.
You can also use docker-compose to manage the build process and multiple containers.
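As a rough sketch of that docker-compose route (assuming the Dockerfile above sits in the current directory; the service names and image tag are just placeholders), a docker-compose.yml could look like:
version: '2'
services:
  elasticsearch:
    build: .
    image: my/elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
  kibana:
    image: kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
Running docker-compose up -d then builds the image and starts both containers, and Kibana reaches Elasticsearch by its service name.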
One approach is to create your own Elasticsearch image, via a Dockerfile that starts from the official elasticsearch image:
FROM elasticsearch:5
COPY myconfig /path/to/elasticsearch.yml
That way, your image can start an Elasticsearch container with the right configuration already set.
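To check that the file really ended up in the image, you could build it and read the config back; a sketch, where my-es is a placeholder tag and the path assumes the official image's config directory:
docker build -t my-es .
docker run --rm --entrypoint cat my-es /usr/share/elasticsearch/config/elasticsearch.yml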