How to set hosts in docker for mac - macos

When I used Docker before, I could use docker-machine ssh default to edit /etc/hosts in the Docker machine, but in Docker for Mac I can't access the VM that way because there is no docker-machine.
So, the problem is: how do I set hosts in Docker for Mac?
I want my secondary domain to point to a different IP.

I found a solution: use this command
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Now edit /etc/hosts in the Docker VM.
To exit screen, use Ctrl + a, then d.
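For example, the whole round trip might look like this (the IP and hostname below are only placeholders):
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
# inside the VM, press Enter to get a prompt, then:
echo '10.4.1.4  my.special.host' >> /etc/hosts
# detach from screen with Ctrl + a, then d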

Here's how I do it with a bash script so the changes persist between Docker for Mac restarts.
cd ~/Library/Containers/com.docker.docker/Data/database
git reset --hard
DFM_HOSTS_FILE="com.docker.driver.amd64-linux/etc/hosts"
if [ ! -f ${DFM_HOSTS_FILE} ]; then
  echo "appending host to DFM /etc/hosts"
  echo -e "xxx.xxx.xxx.xxx\tmy.special.host" > ${DFM_HOSTS_FILE}
  git add ${DFM_HOSTS_FILE}
  git commit -m "add host to /etc/hosts for dns lookup"
fi
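To use it, you could save this as a small script (the filename is only an example) and re-run it whenever Docker for Mac restarts:
chmod +x set-dfm-hosts.sh
./set-dfm-hosts.sh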

You can automate it with this script; running it at startup or login time will save you the manual step.
#!/bin/sh
# host entry -> '10.4.1.4 dockerregistry.senz.local'
# 1. run debian image
# 2. check whether the host entry exists in the /etc/hosts file
# 3. if it does not exist, add it to the /etc/hosts file
docker run --name debian -it --privileged --pid=host debian nsenter \
  -t 1 -m -u -n -i sh \
  -c "if ! grep -q dockerregistry.senz.local /etc/hosts; then echo -e '10.4.1.4\tdockerregistry.senz.local' >> /etc/hosts; fi"
# sleep 2 seconds
# remove the stopped debian container
sleep 2
docker rm -f debian
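To verify the entry actually landed in the VM's /etc/hosts, the same nsenter trick works read-only (a quick sanity check, not part of the original script):
docker run --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i cat /etc/hosts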
I have created a blog post with more information about this topic.
https://medium.com/@itseranga/set-hosts-in-docker-for-mac-2029276fd448

You need to create a docker-compose.yml file. It should live in the same directory as your Dockerfile.
For example, I use this docker-compose.yml file:
version: '2'
services:
  app:
    hostname: app
    build: .
    volumes:
      - ./:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - db
      - cache
    ports:
      - 80:80
  cache:
    image: memcached:1.4.27
    ports:
      - 11211:11211
  rabbitmq:
    image: rabbitmq:latest
    ports:
      - 5672:5672
  db:
    image: postgres:9.5.3
    ports:
      - 5432:5432
    environment:
      - TZ=America/Mazatlan
      - POSTGRES_DB=restaurantcore
      - POSTGRES_USER=rooms
      - POSTGRES_PASSWORD=rooms
The ports are bound to the corresponding ports on your Docker host.
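Note that the services in this file already resolve each other by service name on the default Compose network, so the app container can reach the database simply as db. A quick check (assuming getent is available in the app image):
docker-compose exec app getent hosts db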

Related

Docker MinIO entrypoint

I have this project which was initially set up on a Mac; I'm on Windows. It's a Docker project which runs Node, Kafka and a few other containers, one of them being MinIO. Everything works as intended except MinIO, where I get the following error:
createbuckets_1 | /bin/sh: nc: command not found
Docker-compose code:
createbuckets:
  image: minio/mc
  networks:
    - localnet
  depends_on:
    - minio
  entrypoint: >
    /bin/sh -c "
    while ! nc -zv minio 9000; do echo 'Wait minio to startup...' && sleep 0.1; done; sleep 5;
    /usr/bin/mc config host add myminio http://minio:9000 X X;
    /usr/bin/mc rm -r --force myminio/cronify/details;
    /usr/bin/mc mb myminio/cronify/details;
    /usr/bin/mc policy set download myminio/cronify/details;
    exit 0;"
Where X is, credentials are supposed to be.
I have been trying to find a fix for weeks.
I have also tried to change the entrypoint from /bin/sh -c to /bin/bash -c or #!/bin/bash -c or #!/bin/sh -c, I get the same error except ".../bin/bash: nc: command not found".
Dockerfile contains:
FROM confluentinc/cp-kafka-connect
I am not entirely sure what you are asking here, but if you are asking about the error message itself, it is telling you that nc is not installed in the image your createbuckets container runs. I am also not clear on which container minio is running in. Assuming it is pulled from minio/minio, it will have curl installed, and you can just use the health check endpoint instead of trying to use nc - https://docs.min.io/minio/baremetal/monitoring/healthcheck-probe.html#minio-healthcheck-api. If it is not a minio container, you would just need to make sure it has curl installed (or nc, if for some reason you are set on using that).
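Since the createbuckets container runs the minio/mc image, another option is to drop nc entirely and retry the mc command itself until MinIO answers (a sketch only; the X credential placeholders are kept from the question):
entrypoint: >
  /bin/sh -c "
  until /usr/bin/mc config host add myminio http://minio:9000 X X; do echo 'Wait minio to startup...' && sleep 1; done;
  /usr/bin/mc rm -r --force myminio/cronify/details;
  /usr/bin/mc mb myminio/cronify/details;
  /usr/bin/mc policy set download myminio/cronify/details;
  exit 0;"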

How to pull image from Docker hub to EC2 using bitbucket pipeline

I am trying to implement CI/CD using Bitbucket Pipelines.
So far I was able to create the image and push it to Docker Hub. That part seems straightforward and the internet is full of tutorials.
But I didn't find anything about pulling the image onto an EC2 instance and running it.
I have this bitbucket-pipelines.yml file:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - export IMAGE_NAME=juanibe/vinimayapi:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          - docker push $IMAGE_NAME
And I have this script, but I don't know where to put it:
#!/bin/bash
sudo docker ps
echo 'Logging in to docker'
docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD # How can I set env variables here?
echo 'Fetching latest image'
sudo docker pull user/vinimayapi:latest
echo 'Stopping current container'
sudo docker stop cont_docker_app_test
echo 'Removing old container'
sudo docker rm cont_docker_app_test-old
echo 'Renaming stopped container'
sudo docker rename user/cont_docker_app_test user/cont_docker_app_test_old
echo 'Starting new container'
sudo docker run -d --name cont_docker_app_test -p 443:3333 -p 8001:8001 --link my-mongo-testing:my-mongo-testing user/vinimayapi:latest
Any help will be really appreciated, I've been trying to create a pipeline for days without success.
Add an additional step to your pipeline:
- pipe: "atlassian/ssh-run:0.2.4"
  variables:
    SSH_USER: user
    SERVER: ip_server
    SSH_KEY: sshkey
    MODE: script
    COMMAND: script.sh
It should look like below:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - export IMAGE_NAME=juanibe/vinimayapi:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          - docker push $IMAGE_NAME
          - pipe: "atlassian/ssh-run:0.2.4"
            variables:
              SSH_USER: user
              SERVER: ip_server
              SSH_KEY: sshkey
              MODE: script
              COMMAND: script.sh
The script.sh file in that case is located in the same directory as bitbucket-pipelines.yml.
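As for setting env variables inside script.sh: the $DOCKER_HUB_* values exist only in the pipeline, not on the EC2 instance, so one option (just a sketch; the credentials file path is an example) is to keep them in a file on the instance and source it at the top of script.sh:
#!/bin/bash
# credentials stored once on the EC2 instance, e.g. /home/user/.docker_hub_credentials
. /home/user/.docker_hub_credentials   # defines DOCKER_HUB_USERNAME and DOCKER_HUB_PASSWORD
echo "$DOCKER_HUB_PASSWORD" | docker login --username "$DOCKER_HUB_USERNAME" --password-stdin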

Can't access redis docker volume

I can't access redis volume:
docker run --name redis --volume=data:/data/:rw -p 6379:6379 -d redis
b60c15a0a3e05d7cdb383415192927b4fc1563be5350924a01fe70f6a29ab113
ls: data/: No such file or directory
Why? Doesn't docker create a host volume by default?
You can use named volumes or bind mounts, but you can't use a dot (.) for a relative path.
docker run
bind mounts
You have to use an absolute path:
docker run --name redis --mount type=bind,source="/<absolutepath>/data",target=/app -p 6379:6379 -d redis
named volumes
You cannot use '.' in a docker volume name.
docker run --name redis --volume=data:/data/:rw -p 6379:6379 -d redis
After this command a named volume called data is created; you can check it with:
docker volume ls | grep data
docker-compose
With docker-compose, you can define:
services:
  your-redis-service:
    ...
    volumes:
      - ./data:/data
In this case, docker-compose translates paths relative to your working directory into absolute paths for bind mounts.
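For completeness, a bind-mount equivalent of the original command that targets redis's actual data directory (/data) could look like this ($(pwd) is just one way to build the absolute path; adjust to your layout):
mkdir -p "$(pwd)/data"
docker run --name redis --mount type=bind,source="$(pwd)/data",target=/data -p 6379:6379 -d redis
ls data/   # the directory now exists on the host and will receive redis's dump.rdb when it saves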

Docker: RUN touch doesn't create file

While trying to debug a RUN statements in my Dockerfile, I attempted to redirect output to a file in a bound volume (./mongo/log).
To my surprise I was unable to create files via the RUN command, or to pipe the output of another command to a file using the redirection/appending (>, >>) operators. I was, however, able to perform said task by logging into the running container via docker exec -ti mycontainer /bin/sh and issuing the command from there.
Why is this behaviour happening? How can I touch a file in the Dockerfile / redirect output to a file or to the console from which the Dockerfile is run?
Here is my Dockerfile:
FROM mongo:3.4
#Installing NodeJS
RUN apt-get update && \
    apt-get install -y curl && \
    curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
    apt-get install -y nodejs
#Setting Up Mongo
WORKDIR /var/www/smq
COPY ./mongo-setup.js mongo-setup.js
##for testing
RUN touch /var/log/node.log && \
    node --help 2>&1 > /var/log/node.log
##this was the command to debug
#RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log
Here is an excerpt from my docker-compose.yml:
mongodb:
  build:
    context: ./
    dockerfile: ./mongodb-dockerfile
  container_name: smqmongodb
  volumes:
    - /var/lib/mongodb/data
    - ./mongo/log/:/var/log/
    - ../.config:/var/www/.config
You are doing this during your build:
RUN touch /var/log/node.log && \
    node --help 2>&1 > /var/log/node.log
The file /var/log/node.log is created and fixed immutably into the resulting image.
Then you run the container with this volume mount:
volumes:
- ./mongo/log/:/var/log/
Whatever is in ./mongo/log/ is mounted as /var/log in the container, which hides whatever was there before (from the image). This is the thing that's making it look like your touch didn't work (even though it probably worked fine).
You're thinking about this backward - your volume mount doesn't expose the container's version of /var/log externally - it replaces whatever was there.
Nothing you do in Dockerfile (build) will ever show up in an external mount.
Instead of RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log within the container, what if you just say RUN node mongo-setup.js?
Docker recommends using docker logs. Like so:
docker logs container-name
To accomplish what you're after (seeing the mongo setup logs), you can split the container's stdout and stderr and send them to separate files:
me@host~$ docker logs foo > stdout.log 2>stderr.log
me@host~$ cat stdout.log
me@host~$ cat stderr.log
Also, refer to the docker logs documentation
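Alternatively, if you really do want the build-time output captured in files, write them to a path that your compose file doesn't mount over, for example /tmp (the path choice here is only an illustration):
RUN node mongo-setup.js > /tmp/mongo-setup.log 2> /tmp/mongo-setup.error.log
and then read it back from the built image (myimage is a placeholder for your image name):
docker run --rm myimage cat /tmp/mongo-setup.log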

Interactive shell using Docker Compose

Is there any way to start an interactive shell in a container using Docker Compose only? I've tried something like this, in my docker-compose.yml:
myapp:
  image: alpine:latest
  entrypoint: /bin/sh
When I start this container using docker-compose up it exits immediately. Are there any flags I can add to the entrypoint command, or as an additional option to myapp, to start an interactive shell?
I know there are native docker command options to achieve this, just curious if it's possible using only Docker Compose, too.
You need to include the following lines in your docker-compose.yml:
version: "3"
services:
app:
image: app:1.2.3
stdin_open: true # docker run -i
tty: true # docker run -t
The first corresponds to -i in docker run and the second to -t.
The canonical way to get an interactive shell with docker-compose is to use:
docker-compose run --rm myapp
(With the service name myapp taken from your example. More generally: it must be an existing service name in your docker-compose file; myapp is not just a command of your choice. For example, bash instead of myapp would not work here.)
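If you want to run something other than the service's default command, you can append it after the service name (sh here is just an example):
docker-compose run --rm myapp sh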
You can set stdin_open: true, tty: true, however that won't actually give you a proper shell with up, because logs are being streamed from all the containers.
You can also use
docker exec -ti <container name> /bin/bash
to get a shell on a running container.
The official getting started example (https://docs.docker.com/compose/gettingstarted/) uses the following docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- "8000:5000"
redis:
image: "redis:alpine"
After you start this with docker-compose up, you can shell into either your redis container or your web container with:
docker-compose exec redis sh
docker-compose exec web sh
docker-compose run myapp sh should do the trick.
There is some confusion between up and run, but the docker-compose run docs have a great explanation: https://docs.docker.com/compose/reference/run
If anyone from the future also wanders up here:
docker-compose exec service_name sh
or
docker-compose exec service_name bash
or you can run single lines like
docker-compose exec service_name php -v
That is after you already have your containers up and running.
The service_name is defined in your docker-compose.yml file
Using docker-compose, I found the easiest way to do this is to do a docker ps -a (after starting my containers with docker-compose up) and get the ID of the container I want to have an interactive shell in (let's call it xyz123).
Then it's a simple matter to execute
docker exec -ti xyz123 /bin/bash
and voila, an interactive shell.
This question was interesting to me because I had a problem where the container exited immediately after it started; I fixed it with -it:
docker run -it -p 3000:3000 -v /app/node_modules -v $(pwd):/app <your_container_id>
And when I needed to automate it with docker-compose:
version: '3'
services:
  frontend:
    stdin_open: true
    tty: true
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
This does the trick: stdin_open: true, tty: true.
This is a project generated with create-react-app; Dockerfile.dev looks like this:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Hope this example helps others run a frontend (React in this example) in a Docker container.
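With stdin_open and tty set, one way to actually get the interactive session after starting in the background is to attach to the container (the container name below is an assumption; check docker ps for yours, and note that Ctrl+C stops the attached process):
docker-compose up -d
docker attach myproject_frontend_1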
I prefer
docker-compose exec my_container_name bash
If the yml is called docker-compose.yml it can be launched with a simple $ docker-compose up. A terminal can then be attached simply with (assuming the yml specifies a service called myservice):
$ docker-compose exec myservice sh
However, if you are using a different yml file name, such as docker-compose-mycompose.yml, it should be launched using $ docker-compose -f docker-compose-mycompose.yml up. To attach an interactive terminal you have to specify the yml file too, just like:
$ docker-compose -f docker-compose-mycompose.yml exec myservice sh
An addition to this old question, since I only ran into the case recently: the difference between sh and bash matters. For some images bash doesn't work and only sh does.
So you can use:
docker-compose exec CONTAINER_NAME sh
and in most cases:
docker-compose exec CONTAINER_NAME bash
If you have time, the difference between sh and bash is well explained here:
https://www.baeldung.com/linux/sh-vs-bash
You can do docker-compose exec SERVICE_NAME sh on the command line. The SERVICE_NAME is defined in your docker-compose.yml. For example,
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
The SERVICE_NAME would be "zookeeper".
According to the documentation -> https://docs.docker.com/compose/reference/run/
You can use: docker-compose run --rm app bash
[app] is the name of your service in docker-compose.yml
