Is there a way to run a shell command during docker run without using a Dockerfile?
What I have right now is this, but it seems to start the container and only run the sh script afterwards. What I need is to set up a user while the container is starting, because the container runs with superuser privileges (correct me if I'm wrong).
node('linuxNode') {
    docker.image('docker/repo').inside(
        '--privileged ' +
        '--volume "..."')
    {
        sh '/home/path/to/script/createNewUser.sh'
    }
}
Fixed my issue. Before, I was passing the arguments in the '-v ____ ' + '-v ____ ' + ... form, and for some reason the Jenkins Docker Workflow plugin was scrambling their order. So instead I pass one single long string with the sh command at the end.
That should fix the issue most of the time, but in my case there was further trouble: the Dockerfile was adding additional arguments (mainly environment variables) after mine, so the sh command still wasn't last.
Because of this I had to abandon docker.image().inside and use
docker run -d -t -v _____ script.sh
docker exec containername {sh command}
to start the container and run other commands inside it.
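For reference, a rough sketch of that run/exec pattern wrapped in pipeline sh steps; the container name, volume, and script path here are placeholders, not the original values:
node('linuxNode') {
    // Start the container detached, keeping a TTY open so it stays running.
    sh 'docker run -d -t --name mycontainer -v /host/path:/container/path docker/repo'
    // Run the setup script (or any other command) inside the running container.
    sh 'docker exec mycontainer /home/path/to/script/createNewUser.sh'
    // Remove the container when finished.
    sh 'docker rm -f mycontainer'
}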
I present to you the following dilemma: when I execute my script manually, it works well, for example:
docker exec -ti backup_subversion sh -c "/tmp/my_script.sh"
But when I attempt to schedule the process with cron, this line is just skipped.
I have tried executing just a touch command, and it too is ignored.
I have tried executing it as root: same problem.
I have tried executing it in another Docker environment: same problem.
My OS is CentOS 7.
In this script, for example, this is the part that fails:
#!/bin/bash
# Create a container.
docker run -d --name=backup_subversion \
    -v /subversion/dump:/var/dump \
    --net my_network my_server.domaine.com/subversion/billy:1.9
# Copy a script into the container.
docker cp tools_subversion_dump.sh backup_subversion:/tmp
# This line is ignored when executed from crontab.
docker exec -ti backup_subversion sh -c "/tmp/tools_subversion_dump.sh"
Thank you in advance for your answers because it's a mystery to me.
It's probably because you used the -it options, which only apply to an interactive session with a TTY rather than a non-interactive one like the one cron uses, as referenced in the question.
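A minimal fix along those lines, assuming the same container and script as above: drop the -t (and -i) flags so the command no longer requires a TTY:
# No TTY is allocated, so this also works when run from crontab.
docker exec backup_subversion sh -c "/tmp/tools_subversion_dump.sh"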
Using a Jenkins pipeline, I create a Docker container:
def res = sh(returnStdout:true, script:
"""#!/bin/bash -lx
sudo docker run -td --name some_name --network=host -v /usr/src:/usr/src
""")
println res
and then I want to run a very-long-command and see its output live.
Currently I run the command in this way:
def res = sh(returnStdout:true, script:
"""#!/bin/bash -lx
sudo docker exec -t some_name bash -ci "very-long-command"
""")
println res
After the very-long-command finishes, I print its output.
The problem is that sometimes the very-long-command gets stuck, and the Jenkins job is aborted due to a timeout. In that case I don't have the output at all.
Is it possible to have the live output from docker exec?
If you know your code is working and have an idea of how long the build step takes to complete, you can use the timeout step in your Jenkins Pipeline job; see the sketch after the link.
More information is available at the following webpage:
https://www.jenkins.io/doc/pipeline/steps/workflow-basic-steps/#timeout-enforce-time-limit
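For example, a rough sketch of wrapping the exec step in a timeout (the 60-minute limit is an assumption; pick a value that fits your command). Note that dropping returnStdout lets the sh step stream its output to the build log as it is produced, instead of only printing it afterwards:
// Abort the step if very-long-command exceeds the limit.
timeout(time: 60, unit: 'MINUTES') {
    sh """#!/bin/bash -lx
    sudo docker exec -t some_name bash -ci "very-long-command"
    """
}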
What I am trying to do is set up a local development database, and to save everyone from having to go through all the steps I thought it would be useful to create a script.
What I have below stops once it is inside the container's terminal, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
docker exec -it ${1} sh
Is there a way I can inject a command via a script into a Docker container's terminal?
In order to execute commands inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
Your script finds a running Docker container and opens a shell to it. The "-it" makes it interactive and allocates a tty which is why it continues to wait for input, e.g. "exit". If the plan is to execute some commands to initialize a local development database, I'd recommend looking at building an image with a Dockerfile instead. i.e. Once you figure out the commands to run, they would become RUN commands and the container after docker run would expose a local development database.
If you really want some commands to run within the shell after it is started, and to keep the session open, then depending on the base image you might be able to mount a bash profile that has the required commands, e.g. -v db_profile:/etc/profile.d where db_profile is a folder with the shell scripts you want to run. To get them to run, you'd exec sh -l so that the login startup scripts execute.
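A rough sketch of that second approach, assuming the setup commands live in a host folder named db_profile and the image is called my_db_image (both names are placeholders):
# db_profile/init.sh on the host holds the database setup commands.
docker run -d --name personal -v "$PWD/db_profile":/etc/profile.d my_db_image
# sh -l starts a login shell, so the scripts mounted into /etc/profile.d are sourced first.
docker exec -it personal sh -l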
I want to run a nested shell command, for example like this, in a Jenkins pipeline:
docker stop $(docker ps -aq)
Unfortunately when I format it into pipeline syntax:
sh('docker stop $(docker ps -aq)')
Jenkins does not seem to run it correctly, and instead outputs:
"docker stop" requires at least 1 argument(s).
I tried to run the command under bash as suggested here:
Run bash command on jenkins pipeline
But end up with similar issue. Any ideas how to solve this?
This becomes easier for Jenkins Pipeline if you expand the shell command into two lines:
The first to capture the Docker containers that you want to stop.
The second to stop those Docker containers captured in the first command.
We use the first line to capture the output of the shell command into a variable:
containers = sh(returnStdout: true, script: 'sudo /usr/bin/docker ps -aq')
We then use the second command to operate on the captured output from the first command stored in a variable:
sh("sudo /usr/bin/docker stop $containers")
Note that docker is normally comfortable with the output of docker ps -aq as arguments to its other commands, but if it dislikes the output stored in the variable, you can reformat it like the following:
containers = sh(returnStdout: true, script: 'sudo /usr/bin/docker ps -aq').trim()
This would, for example, strip the leading whitespace and trailing newlines. The Docker CLI normally does not care about that, but some reformatting may prove necessary here.
Since simply removing the newlines here would result in one long combined container ID, we need to (as you noted) replace them with whitespace to keep the container IDs delimited. That makes the command for building the string stored in the containers variable:
containers = sh(returnStdout: true, script: 'sudo /usr/bin/docker ps -aq').replaceAll("\n", " ")
I've not used the docker stop command this way, but the syntax is the same as for the docker rm command. A block of pipeline code plus the OP's line, for example:
...
withEnv(["port=$port", "user=$user", "env=$env"]) {
sh '''
ssh -p $port $user#$env docker rm \$(docker ps -aq) || true; \
ssh -p $port $user#$env docker rmi \$(docker images -aq) || true; \
ssh -p $port $user#$env docker stop \$(docker ps -aq) || true
'''
}
...
Adding to @Matt's answer:
You need a check for an empty container list. When there is no container available to stop, the Jenkins build will fail with the following error message:
"docker stop" requires at least 1 argument(s).
To handle this, you simply need a check for container availability. Here is the complete code:
stage('Clean docker containers'){
    steps{
        script{
            def doc_containers = sh(returnStdout: true, script: 'docker container ps -aq').replaceAll("\n", " ")
            if (doc_containers) {
                sh "docker stop ${doc_containers}"
            }
        }
    }
}
Have you tried something like:
docker stop `docker ps -aq`
?
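If you call it from the pipeline, a small sketch; the || true is my addition so the step does not fail when no containers are running:
sh 'docker stop `docker ps -aq` || true'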
I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh), which is already inside the container, from the above shell script (e.g. cd /path/to/test.sh && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
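If you want to change into the script's directory first, as in the question, you can combine both into a single exec; a sketch, assuming the script lives in /path/to inside the container:
docker exec mycontainer /bin/bash -c "cd /path/to && ./test.sh"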
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
I was searching for an answer to this same question, and ENTRYPOINT in the Dockerfile turned out to be the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty -i -t arguments and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will (didn't test) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue of having to open a new shell for every change, I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside zshrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
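A rough sketch of how that setup could look; the source line, the PATH and HISTFILE exports, and the file names are my assumptions, not the author's exact files:
# Dockerfile: make login shells pick up the mounted entry script.
RUN echo "source /scripts/bashrc" > /root/.bashrc
# /scripts/bashrc, mounted from the host as shown above:
export PATH="$PATH:/scripts"            # call the mounted scripts directly
export HISTFILE=/scripts/.bash_history  # keep shell history outside the container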
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or for use in a script), then you can use:
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too. You will be able to combine an ENTRYPOINT with a CMD, as shown in the sketch below.
https://docs.docker.com/engine/reference/builder/#/entrypoint
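For example, a minimal sketch (image and script paths are placeholders): the ENTRYPOINT fixes the interpreter and CMD supplies the default script, which docker run can override with a different command.
FROM ubuntu:bionic
COPY test.sh /path/to/test.sh
RUN chmod +x /path/to/test.sh
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["/path/to/test.sh"]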
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh