Docker MinIO entrypoint - Windows

I have a project that was initially set up on a Mac; I'm on Windows. It's a Docker project which runs Node, Kafka and a few other containers, one of them being MinIO. Everything works as intended except MinIO, where I get the following error:
createbuckets_1 | /bin/sh: nc: command not found
Docker-compose code:
createbuckets:
  image: minio/mc
  networks:
    - localnet
  depends_on:
    - minio
  entrypoint: >
    /bin/sh -c "
    while ! nc -zv minio 9000; do echo 'Wait minio to startup...' && sleep 0.1; done; sleep 5;
    /usr/bin/mc config host add myminio http://minio:9000 X X;
    /usr/bin/mc rm -r --force myminio/cronify/details;
    /usr/bin/mc mb myminio/cronify/details;
    /usr/bin/mc policy set download myminio/cronify/details;
    exit 0;"
Where the Xs are, the credentials are supposed to go.
I have been trying to find a fix for weeks.
I have also tried changing the entrypoint from /bin/sh -c to /bin/bash -c (as well as #!/bin/bash -c and #!/bin/sh -c); I get the same error, except it reads ".../bin/bash: nc: command not found".
Dockerfile contains:
FROM confluentinc/cp-kafka-connect

I am not entirely sure what you are asking here, but if you are asking about the error message itself, it is telling you that nc is not installed in the image your createbuckets container uses. I am also not clear on which container minio is running in. Assuming the container is being pulled from minio/minio, it will have curl installed, and you can just use the health check endpoint instead of trying to use nc - https://docs.min.io/minio/baremetal/monitoring/healthcheck-probe.html#minio-healthcheck-api. If it is not a minio container, you would just need to make sure it has curl installed (or nc, if for some reason you were set on using that).
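As a hedged sketch (not part of the original answer), the entrypoint could poll the documented MinIO liveness endpoint with curl instead of nc, assuming curl is available in the image the createbuckets service uses:
entrypoint: >
  /bin/sh -c "
  until curl -sf http://minio:9000/minio/health/live; do echo 'Waiting for minio to start up...' && sleep 1; done;
  /usr/bin/mc config host add myminio http://minio:9000 X X;
  /usr/bin/mc rm -r --force myminio/cronify/details;
  /usr/bin/mc mb myminio/cronify/details;
  /usr/bin/mc policy set download myminio/cronify/details;
  exit 0;"
If curl is not present in that image either, installing it in a small custom image (or relying on a healthcheck on the minio service itself) would be needed.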

Related

sshpass not executing in bash script

I have a Dockerfile (these are the relevant commands):
RUN apk app --update bash openssh sshpass
CMD ["bin/sh", "/home/build/build.sh"]
The image built from my Dockerfile then gets run with this command:
docker run --rm -it -v $(pwd):/home <image-name>
and all of the commands within my bash script (which is inside the mounted volume) execute. These commands range from npm installs to using tar to zip up a file, and I want to SFTP that tar.gz file.
I am using sshpass to automate logging in, which I know isn't secure, but I'm not worried about that with this application.
sshpass -p <password> sftp -P <port> username@host << EOF
<command>
<command>
EOF
But the sshpass command is never executed. I've tested my docker run command by appending /bin/sh to it and trying it and it also does not run. The SFTP command by itself does.
And when I say it's never executed, I don't receive an error or anything.
Two possible reasons:
1. Your apk command is wrong; it should be RUN apk add --update bash openssh sshpass, but I assume that is a typo.
2. It seems like the known-hosts entry is missing. You should check the logs with docker logs -f <container name>. You also need to add an entry to known_hosts; check the suggested build script below.
Here is a working example that you can try
Dockerfile
FROM alpine
RUN apk add --update bash openssh sshpass
COPY build.sh /home/build/build.sh
CMD ["bin/sh", "/home/build/build.sh"]
build script
#!/bin/bash
echo "adding host to known host"
mkdir -p ~/.ssh
touch ~/.ssh/known_hosts
ssh-keyscan sftp >> ~/.ssh/known_hosts
echo "run command on remote server"
sshpass -p pass sftp foo@sftp << EOF
ls
pwd
EOF
Now build the image: docker build -t ssh-pass .
And finally, here is the docker-compose file for testing the above:
version: '3'
services:
  sftp-client:
    image: ssh-pass
    depends_on:
      - sftp
  sftp:
    image: atmoz/sftp
    ports:
      - "2222:22"
    command: foo:pass:1001
So you will be able to connect to the sftp container using docker-compose up.
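Putting the steps from the answer together, the assumed workflow is:
docker build -t ssh-pass .
docker-compose up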

translate my containers starter file to docker-compose.yml

I am new to the big data domain, and this is my first time using Docker. I just found this amazing project: https://kiwenlau.com/2016/06/26/hadoop-cluster-docker-update-english/ which creates a Hadoop cluster composed of one master and two slaves using Docker.
After doing all the installation, I just ran the containers and they worked fine. There is a start-containers.sh file which lets me launch the cluster. I decided to install some tools like Sqoop to import my local relational database into HBase, and that worked fine. After that I stopped all the Docker containers on my PC by typing
docker stop $(docker ps -a -q)
The next day, when I tried to relaunch the containers by running the same script ./start-container.sh, I got this error:
start hadoop-master container...
start hadoop-slave1 container...
start hadoop-slave2 container...
Error response from daemon: Container e942e424a3b166452c9d2ea1925197d660014322416c869dc4a982fdae1fb0ad is not running
Even if I launch this daemon, the containers of my cluster cannot connect to each other, and I can't access the data stored in HBase.
First, can anyone tell me why this daemon doesn't work?
PS: in the start-container.sh file there is a line which removes containers if they exist before creating them. I deleted this line because, if I don't, I have to redo everything from the beginning each time.
After searching, I found that it is preferable to use Docker Compose, which lets me launch all the containers together.
But I can't find out how to translate my start-container.sh file into a docker-compose.yml file. Is this the best way to launch all my containers at the same time? This is the content of the start-containers.sh file:
#!/bin/bash
sudo docker network create --driver=bridge hadoop
# the default node number is 3
N=${1:-3}
# start hadoop master container
#sudo docker rm -f hadoop-master &> /dev/null
echo "start hadoop-master container..."
sudo docker run -itd \
--net=hadoop \
-p 50070:50070 \
-p 8088:8088 \
-p 7077:7077 \
-p 16010:16010 \
--name hadoop-master \
--hostname hadoop-master \
spark-hadoop:latest &> /dev/null
# sudo docker run -itd \
# --net=hadoop \
# -p 5432:5432 \
# --name postgres \
# --hostname hadoop-master \
# -e POSTGRES_PASSWORD=0000
# --volume /media/mobelite/0e5603b2-b1ad-4662-9869-8d0873b65f80/postgresDB/postgresql/10/main:/var/lib/postgresql/data \
# sameersbn/postgresql:10-2 &> /dev/null
# start hadoop slave container
i=1
while [ $i -lt $N ]
do
# sudo docker rm -f hadoop-slave$i &> /dev/null
echo "start hadoop-slave$i container..."
port=$(( 8040 + $i ))
sudo docker run -itd \
-p $port:8042 \
--net=hadoop \
--name hadoop-slave$i \
--hostname hadoop-slave$i \
spark-hadoop:latest &> /dev/null
i=$(( $i + 1 ))
done
# get into hadoop master container
sudo docker exec -it hadoop-master bash
Problems with restarting containers
I am not sure if I understood the mentioned problems with restarting containers correctly. Thus in the following, I try to concentrate on potential issues I can see from the script and error messages:
When starting containers without --rm, they will remain in place after being stopped. If one tries to run a container with the same port mappings or the same name (both the case here!) afterwards, that fails because the container already exists. Effectively, no container will be started in the process. To solve this problem, one should either re-create containers every time (and store all important state outside of the containers) or detect an existing container and start it if it exists. With names it can be as easy as doing:
if ! docker start hadoop-master; then
  docker run -itd \
    --net=hadoop \
    -p 50070:50070 \
    -p 8088:8088 \
    -p 7077:7077 \
    -p 16010:16010 \
    --name hadoop-master \
    --hostname hadoop-master \
    spark-hadoop:latest &> /dev/null
fi
and similar for the other entries. Note that I do not understand why one would use the combination -itd (interactive, assign TTY, but go to background) for a service container like this; I'd recommend going with just -d here.
Other general scripting advice: Prefer bash -e (causes the script to stop on unhandled errors).
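For example (a minimal sketch), the script could start with:
#!/bin/bash -e
# or, equivalently, right after a plain shebang:
set -e   # abort the script on the first unhandled error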
Docker-Compose vs. Startup Scripts
The question contains some doubt whether docker-compose should be the way to go or if a startup script should be preferred. From my point of view, the most important differences are these:
Scripts are strong on flexibility: whenever there are things that need to be detected from the environment which go beyond environment variables, scripts provide the needed flexibility to execute commands and to behave environment-dependently. One might argue that this goes partially against the spirit of container isolation to be dependent on the environment like this, but a lot of Docker environments are used for testing purposes where this is not the primary concern.
docker-compose provides a few distinct advantages "out of the box". There are commands up and down (and even radical ones like down -v --rmi all) which allow environments to be created and destroyed quickly. When scripting, one needs to implement all these things separately, which will often result in less complete solutions. An often-overlooked advantage is also portability: docker-compose exists for Windows as well. Another interesting feature (although not as "easy" as it sounds) is the ability to deploy docker-compose.yml files to Docker clusters. Finally, docker-compose also provides some additional isolation (e.g. all containers become part of a network specifically created for this docker-compose instance by default).
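For illustration, the commands mentioned above would be used roughly like this (assuming a docker-compose.yml in the current directory):
docker-compose up -d               # create and start the whole environment in the background
docker-compose down                # stop and remove the containers and the default network
docker-compose down -v --rmi all   # additionally remove volumes and the images that were used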
From Startup Script to Docker-Compose
The start script at hand is already in a good shape to consider moving to a docker-compose.yml file instead. The basic idea is to define one service per docker run instruction and to transform the commandline arguments into their respective docker-compose.yml names. The Documentation covers the options quite thoroughly.
The idea could be as follows:
version: "3.2"
services:
hadoop-master:
image: spark-hadoop:latest
ports:
- 50070:50070
- 8088:8088
- 7077:7077
- 16010:16010
hadoop-slave1:
image: spark-hadoop:latest
ports:
- 8041:8042
hadoop-slave2:
image: spark-hadoop:latest
ports:
- 8042:8042
hadoop-slave2:
image: spark-hadoop:latest
ports:
- 8043:8042
Btw. I could not test the docker-compose.yml file because the image spark-hadoop:latest does not seem to be available through docker pull:
# docker pull spark-hadoop:latest
Error response from daemon: pull access denied for spark-hadoop, repository does not exist or may require 'docker login'
But the file above might be enough to get an idea.
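Assuming the spark-hadoop:latest image exists locally, the cluster defined above could then be handled with commands like these (a sketch, untested for the same reason):
docker-compose up -d                      # start the master and the slaves together
docker-compose exec hadoop-master bash    # replaces the final "sudo docker exec -it hadoop-master bash"
docker-compose down                       # tear the cluster down again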

Docker quits after script has been executed

I have a Docker container starting with the command:
"CMD [\"/bin/bash\", \"/usr/bin/gen_new_key.sh\"]"
The script looks like:
#!/bin/bash
/usr/bin/generate_signing_key -k xxxx -r eu-west-2 > /usr/local/nginx/s3_signature_key.txt
{ read -r val1
read -r val2
sed -i "s!AWS_SIGNING_KEY!'$val1'!;
s!AWS_KEY_SCOPE!'$val2'!;
" /etc/nginx/nginx.conf
} < /usr/local/nginx/s3_signature_key.txt
if [ -z "$(pgrep nginx)" ]
then
nginx -c /etc/nginx/nginx.conf
else
nginx -s reload
fi
The script itself works, as I can see all the data in the Docker layer in /var/lib/docker..
It is intended to be run by cron every 5 days, as the AWS signature key generated in the first line is valid for 7 days only. How can I prevent Docker from quitting after the script has finished, and keep it running?
You want a container that is always ON with nginx, and to run that script every 5 days.
First you can just run nginx using:
CMD ["nginx", "-g", "daemon off;"]
This way, the container is always ON with nginx running.
Then, just run your script as a usual script with cron:
chmod +x script.sh
0 0 */5 * * script.sh
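How the cron entry gets into the image is not shown above; as one hedged possibility, assuming a Debian-based nginx image where cron can be installed, the Dockerfile could look roughly like this (gen_new_key.sh stands for the script from the question):
FROM nginx:latest
# the key-generation script from the question
COPY gen_new_key.sh /usr/local/bin/gen_new_key.sh
RUN chmod +x /usr/local/bin/gen_new_key.sh \
    && apt-get update && apt-get install -y cron \
    && echo "0 0 */5 * * root /usr/local/bin/gen_new_key.sh" > /etc/cron.d/gen-key \
    && chmod 0644 /etc/cron.d/gen-key
# start cron in the background and keep nginx in the foreground as the main process
CMD ["sh", "-c", "cron && exec nginx -g 'daemon off;'"]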
EDIT: since the script must also run the first time:
1) One solution (the pretty one) is to load the valid AWS signing key manually the first time. After that, the script will update the AWS signing key automatically (using the solution presented above).
2) The other solution is to run a Docker entrypoint file (that is, your script):
# Your script
COPY docker-entrypoint.sh /usr/local/bin/
RUN ["chmod", "+x", "/usr/local/bin/docker-entrypoint.sh"]
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
# Define default command.
CMD ["/bin/bash"]
In your script:
service nginx start
echo "Nginx is running."
#This line will prevent the container from turning off
exec "$#";
The exec "$@" line hands control over to the CMD, so the entrypoint script only performs the setup and the CMD becomes the container's main process.

Docker: RUN touch doesn't create file

While trying to debug a RUN statements in my Dockerfile, I attempted to redirect output to a file in a bound volume (./mongo/log).
To my surprise, I was unable to create files via the RUN command, or to pipe the output of another command to a file using the redirection/appending (>, >>) operators. I was, however, able to perform said task by logging into the running container via docker exec -ti mycontainer /bin/sh and issuing the command from there.
Why is this happening? How can I touch a file in the Dockerfile / redirect output to a file or to the console from which the Dockerfile is run?
Here is my Dockerfile:
FROM mongo:3.4
#Installing NodeJS
RUN apt-get update && \
apt-get install -y curl && \
curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y nodejs
#Setting Up Mongo
WORKDIR /var/www/smq
COPY ./mongo-setup.js mongo-setup.js
##for testing
RUN touch /var/log/node.log && \
node --help 2>&1 > /var/log/node.log
##this was the command to debug
#RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log
Here is an excerpt from my docker-compose.yml:
mongodb:
  build:
    context: ./
    dockerfile: ./mongodb-dockerfile
  container_name: smqmongodb
  volumes:
    - /var/lib/mongodb/data
    - ./mongo/log/:/var/log/
    - ../.config:/var/www/.config
You are doing this during your build:
RUN touch /var/log/node.log && \
node --help 2>&1 > /var/log/node.log
The file /var/log/node.log is created and fixed immutably into the resulting image.
Then you run the container with this volume mount:
volumes:
  - ./mongo/log/:/var/log/
Whatever is in ./mongo/log/ is mounted as /var/log in the container, which hides whatever was there before (from the image). This is the thing that's making it look like your touch didn't work (even though it probably worked fine).
You're thinking about this backward - your volume mount doesn't expose the container's version of /var/log externally - it replaces whatever was there.
Nothing you do in Dockerfile (build) will ever show up in an external mount.
Instead of RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log within the container, what if you just say RUN node mongo-setup.js?
Docker recommends using docker logs. Like so:
docker logs container-name
To accomplish what you're after (seeing the mongo setup logs), you can split the container's stdout and stderr by redirecting the separate streams to files:
me@host~$ docker logs foo > stdout.log 2>stderr.log
me@host~$ cat stdout.log
me@host~$ cat stderr.log
Also, refer to the docker logs documentation

How to use multiple terminals in the docker container?

I know it is weird to use multiple terminals in the docker container.
My purpose is to test some commands and build a dockerfile with these commands finally.
So I need to use multiple terminals, say, two. One is running some commands; the other is used to test those commands.
If I use a real machine, I can ssh into it to use multiple terminals, but in Docker, how can I do this?
Maybe the solution is to run docker with CMD /bin/bash, and in that bash, using screen?
EDIT
In my situation, one shell runs a server program and the other runs a client program to test the server program. Because the server program and client program are compiled together, the default link method in Docker is not suitable.
The Docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client, and you can use volumes to make the files at the server available from the client. If you really want to have two terminals into the same container, there is nothing stopping you from using ssh. I tested this docker server:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container, you can get its IP using
docker inspect <id or name of container>
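For example, to print just the IP address (a sketch assuming the container is attached to the default bridge network):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <id or name of container>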
From the Docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another.
If I understand the problem correctly, then you can use nsenter.
Assuming you have a running container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program in the given namespaces of that PID (by default $SHELL).
You can run more than one shell by issuing the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. files. For example, tail the log file of nginx.
Further information can be found in the nsenter man page.
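For example, two shells into the same container could be opened by running the same command from two separate host terminals (a sketch based on the command above):
PID=$(docker inspect --format '{{.State.Pid}}' nginx)
# terminal 1
nsenter -m -u -i -n -p -t "$PID"
# terminal 2: run the same nsenter command from a second terminal on the host
nsenter -m -u -i -n -p -t "$PID"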
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
When you are done with your explorations, you can proceed to create Dockerfile as usual.
