Link two Docker containers on Windows 10

I have my mongo container running:
docker run --security-opt=seccomp:unconfined -p 27017:27017 -p 28017:28017 --name mong --rm mong --link myapp
and my app:
docker run --rm -ti --security-opt=seccomp:unconfined -p8080:8080 --name myapp --link mong --expose 8080
When I run docker port myapp I get:
8080/tcp -> 0.0.0.0:8080
And docker port mong gives the following:
27017/tcp -> 0.0.0.0:27017
28017/tcp -> 0.0.0.0:28017
However, myapp doesn't see mong's ports. When I run docker run --rm -ti --security-opt=seccomp:unconfined -p8080:8080 --name myapp --link mong --expose 8080 with the --net=host flag, myapp starts to see the mong container's ports, but it stops exposing 8080.
How can I fix this? What is wrong?

If you want to link two or more containers, you can use a user-defined Docker network.
First create a network:
$ docker network create --driver bridge dev_network
Now run both containers with --net=dev_network.
Container 1
$ docker run --security-opt=seccomp:unconfined -p 27017:27017 -p 28017:28017 --name mong --rm --net=dev_network mong
Container 2
$ docker run --rm -ti --security-opt=seccomp:unconfined -p 8080:8080 --name myapp --net=dev_network
(Note that docker run options such as --net must come before the image name; anything after the image is passed to the container as arguments.)
You can now reach each container inside the network by its container name.
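For example, once both containers are attached to dev_network, Docker's embedded DNS resolves their names, so myapp can reach MongoDB at mong:27017 instead of localhost. A quick sketch of a connectivity check (busybox is only used here as a throwaway test image, it is not part of the setup above):
docker run --rm --net=dev_network busybox ping -c 1 mong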

Related

Failed to bind properties under 'server.port' to java.lang.Integer

Some time ago I ran into this issue:
Failed to bind properties under 'server.port' to java.lang.Integer:
Property: server.port
Value: $PORT
Origin: "server.port" from property source "systemProperties"
Reason: failed to convert java.lang.String to java.lang.Integer
Action:
Update your application's configuration
I was trying to run my Docker container on DigitalOcean.
I looked at some similar topics here and tried to apply the advice. For instance, I added server.port=${PORT:8080} to my application.properties, but it didn't work for me.
Here's my docker run command:
docker run -p 8080:8080 --name nostalgia --env-file vars.txt --rm -it registry.digitalocean.com/alex-registry/nostalgia
And this is my vars.txt (only one variable at the moment):
PORT=8080
I should also mention that I tried another form of the command:
docker run -p 8080:8080 --name nostalgia -e PORT=8080 --rm -it registry.digitalocean.com/alex-registry/nostalgia
But the result is the same.
What should I do next to overcome this issue and successfully launch the container? Thanks for your answers!
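Not an authoritative fix, but the error message itself gives a hint: the Origin is "systemProperties", which means the literal string $PORT is reaching the JVM as -Dserver.port=$PORT without ever being expanded. Exec-form (JSON array) ENTRYPOINT/CMD instructions do not perform shell substitution, so a variable referenced there arrives verbatim. A minimal sketch of a shell-form entrypoint that would expand it (this Dockerfile line is an assumption, not taken from the question):
# Hypothetical entrypoint: the shell substitutes ${PORT} (default 8080) before the JVM starts
ENTRYPOINT ["sh", "-c", "java -Dserver.port=${PORT:-8080} -jar app.jar"]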

Docker: Change port number during runtime

FROM openjdk:8-jre-alpine
COPY --from=builder tmp/target/Application*.jar app.jar
RUN mkdir -p /app
ARG specified_port=8090
ENV EXPOSED_PORT=$specified_port
EXPOSE $EXPOSED_PORT
ENTRYPOINT ["java", "-jar", "-Dspring.profiles.active=prod", "app.jar"]
How can I change the port at runtime, e.g. set server.port to 8092 during docker run?
Sample docker run:
docker run --net="host" -p 8092:8092 -p 9992:9992 4794973497437
Using Environment Variables
Add this to your Dockerfile:
ENV PORT 8090
then run:
docker run -d -e "PORT=8092" image_name
Another way is to use a build argument. In the Dockerfile:
ARG PORT=8090
ENV PORT=$PORT
(exec-form instructions such as CMD ["$PORT"] do not expand variables, so expose the value through ENV instead)
and then build and run your container like this:
docker build --build-arg PORT=8092 -t image_name .
docker run -d -p 8092:8092 image_name
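Keep in mind that setting PORT only changes what Docker knows about; the Spring Boot application still has to read it. A minimal sketch, assuming either Spring Boot's relaxed binding of the SERVER_PORT environment variable to server.port, or a shell-form entrypoint that forwards PORT as a system property (the ENTRYPOINT line is an assumption, not part of the original Dockerfile):
# Option 1: rely on Spring Boot mapping the SERVER_PORT env variable to server.port
docker run -d -p 8092:8092 -e SERVER_PORT=8092 image_name
# Option 2: expand PORT in a shell-form entrypoint baked into the image
ENTRYPOINT ["sh", "-c", "java -Dserver.port=${PORT:-8090} -Dspring.profiles.active=prod -jar app.jar"]
docker run -d -p 8092:8092 -e PORT=8092 image_name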

Docker on Windows: map between a Windows folder and a Linux folder

Am I able to map a folder inside a Docker Linux container to a Windows folder?
docker run -d -p 80:80 -p 443:443 -v c:/docker/volumes/nginx/docker.sock:/tmp/docker.sock:ro --network nginx-proxy-network --ip 172.18.0.3 jwilder/nginx-proxy
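As a rough sketch, assuming Docker Desktop for Windows with the C: drive shared in its settings, a Windows directory can be bind-mounted into a Linux container with -v; for example (the path below is just an illustration):
docker run --rm -v C:/docker/volumes/nginx:/data alpine ls /data
If the mount works, ls /data lists the contents of C:\docker\volumes\nginx from inside the container.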

Lost data docker gitlab on local osx

This is how I run GitLab with Docker:
Step 1. Launch a postgresql container
docker run --name gitlab-postgresql -d \
--env 'DB_NAME=gitlabhq_production' \
--env 'DB_USER=gitlab' --env 'DB_PASS=password' \
--volume /srv/docker/gitlab/postgresql:/var/lib/postgresql \
sameersbn/postgresql:9.4-12
Step 2. Launch a redis container
docker run --name gitlab-redis -d \
--volume /srv/docker/gitlab/redis:/var/lib/redis \
sameersbn/redis:latest
Step 3. Launch the gitlab container
docker run --name gitlab -d \
--link gitlab-postgresql:postgresql --link gitlab-redis:redisio \
--publish 10022:22 --publish 10080:80 \
--env 'GITLAB_PORT=10080' --env 'GITLAB_SSH_PORT=10022' \
--env 'GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alpha-numeric-string' \
--volume /srv/docker/gitlab/gitlab:/home/git/data \
sameersbn/gitlab:8.4.2
However, when I restart or shut down the computer, all previous data is gone.
Please help me, I am new to Docker and GitLab in Docker.
Your approach looks correct, and I do not see why the volumes wouldn't persist your data. After restarting your computer, you can try to start the stopped containers using these commands:
docker start gitlab-postgresql
docker start gitlab-redis
docker start gitlab
By the way, I'd recommend using this docker-compose.yml file to set up your GitLab environment. Just download the file and run docker-compose up -d.
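For reference, a rough sketch of what such a docker-compose.yml could look like, built only from the docker run commands above (the file linked in the answer may differ):
version: '2'
services:
  gitlab-postgresql:
    image: sameersbn/postgresql:9.4-12
    environment:
      - DB_NAME=gitlabhq_production
      - DB_USER=gitlab
      - DB_PASS=password
    volumes:
      - /srv/docker/gitlab/postgresql:/var/lib/postgresql
  gitlab-redis:
    image: sameersbn/redis:latest
    volumes:
      - /srv/docker/gitlab/redis:/var/lib/redis
  gitlab:
    image: sameersbn/gitlab:8.4.2
    links:
      - gitlab-postgresql:postgresql
      - gitlab-redis:redisio
    ports:
      - "10022:22"
      - "10080:80"
    environment:
      - GITLAB_PORT=10080
      - GITLAB_SSH_PORT=10022
      - GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alpha-numeric-string
    volumes:
      - /srv/docker/gitlab/gitlab:/home/git/data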

Passing Elasticsearch and Kibana config file to docker containers

I have found a Docker image, devdb/kibana, which runs Elasticsearch 1.5.2 and Kibana 4.0.2. However, I would like to pass the configuration files for both Elasticsearch (i.e. elasticsearch.yml) and Kibana (i.e. config.js) into this Docker container.
Can I do that with this image itself? Or would I have to build a separate Docker container for that?
Can I do that with this image itself?
Yes, just use Docker volumes to pass in your own config files.
Let's say you have the following files on your Docker host:
/home/liv2hak/elasticsearch.yml
/home/liv2hak/kibana.yml
you can then start your container with:
docker run -d --name kibana -p 5601:5601 -p 9200:9200 \
-v /home/liv2hak/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml \
-v /home/liv2hak/kibana.yml:/opt/kibana/config/kibana.yml \
devdb/kibana
I was able to figure this out by looking at your image's Dockerfile parents, which are: devdb/kibana → devdb/elasticsearch → abh1nav/java7 → abh1nav/baseimage → phusion/baseimage,
and also by taking a peek into a devdb/kibana container: docker run --rm -it devdb/kibana find /opt -type f -name "*.yml".
Or for that would I have to build a separate docker container?
I assume you mean build a separate Docker image? That would also work; for instance, the following Dockerfile would do it:
FROM devdb/kibana
COPY elasticsearch.yml /opt/elasticsearch/config/elasticsearch.yml
COPY kibana.yml /opt/kibana/config/kibana.yml
Now build the image: docker build -t liv2hak/kibana .
And run it: docker run -d --name kibana -p 5601:5601 -p 9200:9200 liv2hak/kibana
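Either way, you can check that your config files ended up where the services expect them, e.g. (assuming the container is named kibana as in the run commands above):
docker exec kibana cat /opt/elasticsearch/config/elasticsearch.yml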
