I want to run a nested shell command in a Jenkins pipeline, for example:
docker stop $(docker ps -aq)
Unfortunately when I format it into pipeline syntax:
sh('docker stop $(docker ps -aq)')
Jenkins does not seem to run it correctly, but outputs:
"docker stop" requires at least 1 argument(s).
I tried to run the command under bash as described here:
Run bash command on jenkins pipeline
But I end up with a similar issue. Any ideas how to solve this?
This becomes easier for Jenkins Pipeline if you expand the shell command into two lines:
The first to capture the Docker containers that you want to stop.
The second to stop those Docker containers captured in the first command.
We use the first line to capture the output of the shell command into a variable:
containers = sh(returnStdout: true, script: 'sudo /usr/bin/docker ps -aq')
We then use the second command to operate on the captured output from the first command stored in a variable:
sh("sudo /usr/bin/docker stop $containers")
Note that the docker CLI is normally happy to take the output of docker ps -aq as arguments to its other commands, but if it dislikes the output stored in the variable, you can reformat it like the following:
containers = sh(returnStdout: true, script: 'sudo /usr/bin/docker ps -aq').trim()
This strips the leading whitespace and the trailing newline. The Docker CLI normally does not care about that, but some reformatting may prove necessary here.
Since simply removing the newlines would merge everything into one long combined container ID, we need to (as you noted) replace each newline with a space to delimit the container IDs. That makes the string stored in the containers variable:
containers = sh(returnStdout: true, script: 'sudo /usr/bin/docker ps -aq').replaceAll("\n", " ")
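Putting the two steps together, a minimal sketch of a scripted-pipeline stage (assuming the same sudo /usr/bin/docker path as above; drop sudo if you don't need it) might look like:
node {
    stage('Stop containers') {
        // capture the IDs (newline-separated) and turn them into a space-separated list
        def containers = sh(returnStdout: true, script: 'sudo /usr/bin/docker ps -aq').trim().replaceAll("\n", " ")
        // only call docker stop when there is something to stop
        if (containers) {
            sh "sudo /usr/bin/docker stop ${containers}"
        }
    }
}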
I haven't used the docker stop command this way, but the syntax is the same as for docker rm. Here is a block of pipeline code plus the OP's line as an example:
...
withEnv(["port=$port", "user=$user", "env=$env"]) {
    sh '''
        ssh -p $port $user@$env docker rm \$(docker ps -aq) || true; \
        ssh -p $port $user@$env docker rmi \$(docker images -aq) || true; \
        ssh -p $port $user@$env docker stop \$(docker ps -aq) || true
    '''
}
...
Adding to @Matt's answer:
You need a check for an empty container list. When there is no container available to stop, the Jenkins build will fail with the following error message:
"docker stop" requires at least 1 argument(s).
To handle this, you simply need to check whether any containers are available. Here is the complete code:
stage('Clean docker containers') {
    steps {
        script {
            def doc_containers = sh(returnStdout: true, script: 'docker container ps -aq').replaceAll("\n", " ")
            if (doc_containers) {
                sh "docker stop ${doc_containers}"
            }
        }
    }
}
Have you tried something like:
docker stop `docker ps -aq`
?
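For what it's worth, backticks and $(...) behave the same way here. In a Jenkins sh step with a single-quoted Groovy string the $ is passed through to the shell untouched, so the nesting itself works; the error in the question shows up when the inner command returns nothing, which a simple guard avoids (a sketch, not the only possible fix):
sh 'docker stop $(docker ps -aq) || true'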
Is there a way to run a shell command during docker run without using the docker file?
What I have right now is this, but it seems to start the container and then run the sh script afterwards. What I need is to set up a user while the container is running, because it runs with superuser privileges (correct me if I'm wrong).
node('linuxNode') {
    docker.image('docker/repo').inside(
        '--privileged ' +
        '--volume "..."')
    {
        sh '/home/path/to/script/createNewUser.sh'
    }
}
Fixed my issue. Before, I was passing the arguments in the form '-v ____ ' + '-v ____ ' + ..., and for some reason the Jenkins Docker workflow plugin was scrambling the order of these arguments. So instead I now use one single long string with the sh command at the end.
That should fix the issue most of the time, but in my case there was further trouble: the Dockerfile was adding additional arguments (mainly environment variables) after my arguments, so the sh command still wasn't last.
Because of this I had to abandon docker.image().inside and use
docker run -d -t -v _____ script.sh
docker exec containername {sh command}
to start the container and run other commands inside it.
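As a concrete sketch of that pattern (the container name and the volume mount are made up; the image and script path are taken from the question):
# start the container detached with a tty; the image's default command must keep it running
docker run -d -t --name mycontainer -v "$PWD":/workspace docker/repo
# then run further commands inside the already-running container
docker exec mycontainer /home/path/to/script/createNewUser.sh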
This question is not a duplicate, because I want to obtain an interactive shell without running with the -it flags.
I'm taking my first steps with Docker to create images for internal use only.
I start from this environment_full.df:
FROM ubuntu:16.04
ENTRYPOINT ["/bin/bash"]
I then build
docker rmi environment:full
docker build -t environment:full -f environment.df .
Then run
docker run environment:full
Running docker images -a, I see my image:
REPOSITORY TAG IMAGE ID CREATED SIZE
environment full aa91bbd39167 4 seconds ago 129 MB
So I run it
docker run environment:full
I see nothing happening ....
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5847c0a18f30 environment:full "/bin/bash" 21 seconds ago Exited (0) 20 seconds ago admiring_mirzakhani
Also
$ docker run environment:full -ti
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
root@aa768a585f33:/# exit
I'd like to have the Ubuntu prompt, as if I were in an SSH connection, and without the user having to enter the -i or -t flags.
How can I achieve this?
bash won't run at all if stdin is closed. If you don't provide the -i flag, bash will simply exit immediately. So when you...
docker run environment:full
...bash exits immediately, and so your container exits. You would see it if you ran docker ps -a, which shows containers that have stopped.
bash won't give you an interactive prompt if it's not attached to a tty. So if you were to run...
docker run -i environment:full
...you would get a bash shell, but with no prompt, or job control, or other features. You need to provide -t for Docker to allocate a tty device.
You can't get what you want without providing both the -i and -t options on the command line.
An alternative would be to set up an image that runs an ssh daemon, and have people ssh into the container. Instead of behaving "as if I were in an SSH connection", it would actually be an SSH session.
Also, note that this:
docker run environment:full -ti
Is not the same as this:
docker run -it environment:full
The former will run bash -ti inside a container, while the latter passes the -i and -t options to docker run.
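If the motivation is mainly not having to type the flags each time, a small wrapper on the host is a common workaround (the alias name here is just an example):
# on the host, e.g. in ~/.bashrc; "envfull" is an arbitrary name
alias envfull='docker run -it environment:full'
Typing envfull then drops you straight into the container's bash prompt.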
In the cloud, I have multiple instances, each running a container with a different random name, e.g.:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5dc97950d924 aws_beanstalk/my-app:latest "/bin/sh -c 'python 3 hours ago Up 3 hours 80/tcp, 5000/tcp, 8080/tcp jolly_galileo
To enter them, I type:
sudo docker exec -it jolly_galileo /bin/bash
Is there a command or can you write a bash script to automatically execute the exec to enter the correct container?
"the correct container"?
To determine what is the "correct" container, your bash script would still need either the id or the name of that container.
For example, I have a function in my .bashrc:
deb() { docker exec -u git -it $1 bash; }
That way, I would type:
deb jolly_galileo
(it uses the account git, but you don't have to)
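A small variation on the same function (purely as an example) is to fall back to the most recently created container when no name is given, using docker ps -lq:
deb() { docker exec -u git -it "${1:-$(docker ps -lq)}" bash; }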
Here's my final solution. It edits the instance's .bashrc if it hasn't been edited yet, prints out docker ps, defines the dock function, and enters the container. A user can then type "exit" if they want to access the raw instances, and "exit" again to quit ssh.
commands:
  bashrc:
    command: if ! grep -Fxq "sudo docker ps" /home/ec2-user/.bashrc; then echo -e "dock() { sudo docker exec -it $(sudo docker ps -lq) bash; } \nsudo docker ps\ndock" >> /home/ec2-user/.bashrc; fi
As VonC indicated, you usually have to do some shell scripting of your own if you find yourself doing something repetitive. I made a tool myself here which works if you have Bash 4+.
Install
wget -qO- https://raw.githubusercontent.com/Pithikos/dockerint/master/docker_autoenter >> ~/.bashrc
Then you can enter a container by simply typing the first letters of the container.
$> docker ps
CONTAINER ID IMAGE ..
807b1e7eab7e ubuntu ..
18e953015fa9 ubuntu ..
19bd96389d54 ubuntu ..
$> 18
root#18e953015fa9:/#
This works by taking advantage of the function command_not_found_handle introduced in Bash 4. If a command is not found, the script will try and see if what you typed is a container and if it is, it will run docker exec <container> bash.
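The linked script is more elaborate, but a minimal sketch of that Bash 4 mechanism (just the idea, not the actual tool) could look roughly like this:
# add to ~/.bashrc (Bash 4+); falls back to the usual error if no container matches
command_not_found_handle() {
    local id
    id=$(docker ps -q | grep "^$1" | head -n 1)
    if [ -n "$id" ]; then
        docker exec -it "$id" bash
    else
        echo "bash: $1: command not found" >&2
        return 127
    fi
}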
I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh), which is already inside the container, from the above shell script (e.g. cd /path/to && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
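Applied to the question's script, that could look something like this (assuming test.sh sits in /path/to inside the container):
docker exec mycontainer /bin/sh -c "cd /path/to && ./test.sh"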
I was searching for an answer to this same question and found that an ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will (I didn't test it) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
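For example, if the mounted directory contains a script of functions, say a hypothetical tools.sh, you can source it once from inside the container:
# inside the container; tools.sh is only an example name for your mounted script
echo 'source /scripts/tools.sh' >> ~/.bashrc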
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside zshrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
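One way to do that (pointing HISTFILE at the mounted volume is just one possibility):
# inside the container's ~/.bashrc: keep shell history on the mounted volume
export HISTFILE=/scripts/.bash_history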
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be useful in a script), then you can use:
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too; you will be able to use multiple CMDs:
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances, you can do this:
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
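For completeness, because of bash -s the extra arguments arrive in mylocal.sh as the usual positional parameters; a trivial example script to illustrate:
#!/bin/bash
# mylocal.sh (example): arg1, arg2 and arg3 show up as $1, $2 and $3
echo "got: $1 $2 $3"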
I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
but I'm having trouble with the second part--committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is?
EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host?
There are the following ways to persist container data:
Docker volumes
Docker commit
a) Create a container from the ubuntu image and run a bash terminal:
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal, install curl:
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take note of your container ID by executing the following command:
$ docker ps -a
e) Save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) Verify that you can see your new image with curl installed:
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
Run it in the foreground, not as a daemon. When it ends, the script that launched it takes control and commits/pushes it.
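A bare-bones sketch of that approach (the container name and the tag are made up):
#!/bin/bash
# no -d, so this line blocks until the command inside the container finishes
docker run --name temp-setup my-image /bin/sh -c "sleep 10"
# the container has exited; commit it and clean up
docker commit temp-setup my-image:updated
docker rm temp-setup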
I didn't find any of these answers satisfying, as my goal was to 1) launch a container, 2) run a setup script, and 3) capture/store the state after setup, so I can instantly run various scripts against that state later. And all in a local, automated, continuous integration environment (e.g. scripted and non-interactive).
Here's what I came up with (and I run this in Travis-CI install section) for setting up my test environment:
#!/bin/bash
# Run a docker with the env boot script
docker run ubuntu:14.04 /path/to/env_setup_script.sh
# Get the container ID of the last run docker (above)
export CONTAINER_ID=`docker ps -lq`
# Commit the container state (returns an image_id with sha256: prefix cut off)
# and write the IMAGE_ID to disk at ~/.docker_image_id
(docker commit $CONTAINER_ID | cut -c8-) > ~/.docker_image_id
Note that my base image was ubuntu:14.04 but yours could be any image you want.
With that setup, now I can run any number of scripts (e.g. unit tests) against this snapshot (for Travis, these are in my script section). e.g.:
docker run `cat ~/.docker_image_id` /path/to/unit_test_1.sh
docker run `cat ~/.docker_image_id` /path/to/unit_test_2.sh
Try this if you want to auto-commit all running containers. Put it in a cron job or something, if that helps:
#!/bin/bash
for i in `docker ps|tail -n +2|awk '{print $1}'`; do docker commit -m "commit new change" $i; done
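Side note: docker ps -q already prints just the IDs of running containers, so the tail/awk parsing can be dropped if you prefer:
for i in $(docker ps -q); do docker commit -m "commit new change" "$i"; done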