Hi, I want to bring up a Jenkins Docker container and add jobs using the jenkins-cli command. This works fine when I do it manually and also when I run a shell script locally on the Docker host. But when I execute the script from a remote machine, the Docker container starts, yet the commands I run inside the container fail with this error:
cannot enable tty mode on non tty input
cannot enable tty mode on non tty input
My script on the Docker machine:
b="branch1"
sed -i "s/master/$b/g" /root/docker/config.xml
#Run docker jenkins base image
docker run -d -P localhost:5000/jenkins_base2
#Printing docker container
export c=($(docker ps))
echo "${c[8]}"
export x="${c[8]}"
sleep 5
#Copying Config file
docker exec -it ${c[8]} bash -c 'scp root@192.168.0.86:/root/docker/config.xml /root/'
sleep 25
#creating job using jenkins CLI
docker exec -ti ${c[8]} bash -c "java -jar /opt/apache-tomcat-7.0.68/webapps/jenkins/WEB-INF/jenkins-cli.jar -s http://localhost:8080/ create-job $b < /root/config.xml"
Script on the remote machine:
ssh 192.168.0.86 sh docker.sh
Try ssh with the -tt option, which forces pseudo-terminal allocation so the docker exec -it/-ti calls in the script have a TTY to attach to:
ssh -tt 192.168.0.86 sh docker.sh
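Alternatively, since the script runs non-interactively over ssh, you could drop the -t/-i flags from the docker exec calls in docker.sh; a TTY is only needed for interactive sessions. A minimal sketch of those two lines without TTY allocation (paths and variables taken from the script above; note that without a TTY, scp needs key-based authentication since it cannot prompt for a password):
# No TTY is required when the command reads nothing from the terminal
docker exec "${c[8]}" bash -c 'scp root@192.168.0.86:/root/docker/config.xml /root/'
docker exec "${c[8]}" bash -c "java -jar /opt/apache-tomcat-7.0.68/webapps/jenkins/WEB-INF/jenkins-cli.jar -s http://localhost:8080/ create-job $b < /root/config.xml"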
Related
I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
the above run command starts the container, but it exits immediately.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mounted folder without the container exiting.
(I don't want to use docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install the necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is the RStudio Server image from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but the container still exits.
A detached container exits as soon as its main process exits. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could instead add tail -f /dev/null as the last command in your bash script, so that the script never exits unless it is told to.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the command specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without /home/rstudio/test.sh at the end, it would stay running.
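To illustrate with the Dockerfile above (image name assumed to be image_name):
# No command given: the CMD (tail -f /dev/null) runs and the container stays up
docker run -d image_name
# test.sh replaces the CMD entirely, so the container exits when the script finishes
docker run -d image_name /home/rstudio/test.sh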
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
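Applied to the run command from the question, that could look something like the following (the host path and image name are placeholders; note that -v generally needs an absolute host path):
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container \
  -v /abs/path/to/mount-folder:/home/rstudio/ \
  image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'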
I'm attempting to craft sysadmin bash tools for starting up a Docker image.
But the container started by docker run keeps dying on me after the bash script exits.
The actual working bash script in question is:
#!/bin/sh
docker run \
--name publicnginx1 \
-v /var/www:/usr/share/nginx/html:ro \
-v /var/nginx/conf:/etc/nginx:ro \
--rm \
-p 80 \
-p 443 \
-d \
nginx
docker ps
Executing the simple script resulted in:
# ./docker-run-nginx.sh
743a6eaa33f435e3e0d211c4047bc9af4d4667dc31cd249e481850f40f848c83
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
743a6eaa33f4 nginx "nginx -g 'daemon of…" 1 second ago Up Less than a second 0.0.0.0:32778->80/tcp, 0.0.0.0:32777->443/tcp publicnginx1
And after the bash script completed, I executed 'docker ps':
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
There is no container running.
What did I do wrong?
Try running it without --rm.
You can see all containers (including the one that already died) using this command:
> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
743a6eaa33f4 nginx "nginx -g 'daemon of…" 1 second ago Exited (??) ??
^^^^^
You should be able to see the exit code of the container. Using the container ID, you can also look at its logs to better understand what is going on:
docker logs 743a6eaa33f4
If you still can't figure it out, you can start the container with a TTY, running bash, and try to run the command inside it:
docker run -it -v /var/www:/usr/share/nginx/html:ro -v /var/nginx/conf:/etc/nginx:ro --rm -p 80 -p 443 nginx bash
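If you keep the stopped container around (i.e. run without --rm), you can also read its exit code directly, for example:
# Print the numeric exit code of the stopped container (name or ID works)
docker inspect --format '{{.State.ExitCode}}' publicnginx1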
Is there any shortcut command to connect to a docker container without running docker exec -it 'container_id' bash every time?
Here is a shorter command-line shortcut that will:
check if a container for the given image name is running, and
if it is running, connect to it using docker exec -it <container> bash:
Script docker-enter:
#!/bin/bash
name="${1?needs one argument}"
containerId=$(docker ps | awk -v app="$name:" '$2 ~ app{print $1}')
if [[ -n "$containerId" ]]; then
  docker exec -it "$containerId" bash
else
  echo "No docker container with name: $name is running"
fi
Then run it as:
docker-enter webapp
I'm using the following alias on OS X:
alias dex='function _dex(){ docker exec -i -t "$(basename $(pwd) | tr -d "[\-_]")_$1_1" /bin/bash -c "export TERM=xterm; exec bash" };_dex'
In the same directory as my docker-files, I run "dex php" to enter the PHP container.
If dealing with the random ID is inconvenient, start the container with a name and connect using that name:
docker run --name test image
docker exec -it test bash
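As a small variation on the same idea, some images (Alpine-based ones, for example) don't ship bash, so a fallback to sh can be handy. A minimal sketch, with an arbitrary function name of denter:
# Exec into a container by name or ID, preferring bash but falling back to sh
denter() {
  if docker exec "$1" sh -c 'command -v bash' >/dev/null 2>&1; then
    docker exec -it "$1" bash
  else
    docker exec -it "$1" sh
  fi
}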
I have a shell script which is as follows:
image_id=$(docker ps -a | grep postgres | awk -F' ' '{print $1}')
full_id=$(docker ps -a --no-trunc -q | grep $image_id)
docker exec -i -t $full_id bash
When I run this from the base Linux OS, I expect to enter the postgres container, which is running. But the shell script hangs on the third line, at the docker exec step.
My end goal is to use the bash script to enter the running postgres container and run another bash script inside that container.
However, when I run the same command from the command line, it works fine and gets me into the postgres container.
Please help; I have spent hours trying to solve this but made no progress.
Thanks.
Your setup is a bit more complex than it needs to be.
docker ps can filter containers directly with the --filter option:
docker ps --no-trunc --quiet --filter="ancestor=postgres"
You can also --name containers when you run them, which is less fragile than the grep-based lookup you are attempting:
docker run --detach --name postgres_whatever postgres
docker exec -ti postgres_whatever bash
I'm not sure that your script is hanging, as opposed to sitting there waiting for input. Try running a command directly.
Using naming
exec_test.sh
#!/usr/bin/env bash
docker exec postgres_whatever echo "I have run the test"
When run
$ ./exec_test.sh
I have run the test
Without naming
exec_filter_test.sh
#!/usr/bin/env bash
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no id" && exit 1
docker exec "${id}" echo "I have run the test"
When run
$ ./exec_filter_test.sh
I have run the test
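For the stated end goal (running another bash script inside the running postgres container from the host), one non-interactive option is to pipe the inner script over stdin rather than allocating a TTY. A sketch, assuming the inner script is ./inner_script.sh:
#!/usr/bin/env bash
# Find the running postgres container and feed it a local script on stdin
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no postgres container running" && exit 1
docker exec -i "${id}" bash < ./inner_script.sh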
Let's say I have a host machine and a Vagrant VirtualBox VM that is running Docker.
If I want to run a docker command on the Vagrant box, I can do something along the lines of:
vagrant ssh -c "docker ps"
If I want to remove all the containers, from within the Vagrant box I can run:
docker rm $(docker ps -a -q)
Trying to remove all the containers from outside the Vagrant box, though, with:
vagrant ssh -c "docker rm $(docker ps -a -q)"
does not work. It runs "docker ps -a -q" on the host machine instead of in the Vagrant box, which won't work. If I instead try:
vagrant ssh -c "docker rm $(vagrant ssh -c \"docker ps -a -q\")"
I get a little bit closer, but it still doesn't work. How can I run a command like this without having to enter the Vagrant box directly or write a shell script for it?
Try using single quotes around the command, which prevents your local shell from expanding $(...) before the command is run on the Vagrant box:
vagrant ssh -c 'docker rm $(docker ps -a -q)'
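The same single-quote rule applies to any command using $(...) that should be expanded on the guest rather than the host, for example:
# Both substitutions happen inside the Vagrant box, not on the host
vagrant ssh -c 'docker stop $(docker ps -q); docker rm $(docker ps -a -q)'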