Run shell script inside a container - bash

I'm using lemonlatte/docker-webvirtmgr as the base image, but the problem is that there are no ssh keys configured for the user www-data, so I wrote the following shell script:
#!/bin/sh
if [ ! -d "/var/local/webvirtmgr/nginxhome" ]; then
    mkdir /var/local/webvirtmgr/nginxhome
    chown -R www-data:www-data /var/local/webvirtmgr/nginxhome
    usermod -d /var/local/webvirtmgr/nginxhome www-data
    su - www-data -s /bin/bash -c "ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N ''"
    su - www-data -s /bin/bash -c "touch /var/local/webvirtmgr/nginxhome/.ssh/config && echo -e 'StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null' >> /var/local/webvirtmgr/nginxhome/.ssh/config"
    su - www-data -s /bin/bash -c "chmod 0600 ~/.ssh/config"
fi
After that I added the two statements to the dockerfile:
ADD setupssh.sh /webvirtmgr/setupssh.sh
RUN /bin/sh -c "/webvirtmgr/setupssh.sh"
I already tried CMD /webvirtmgr/setupssh.sh and RUN /webvirtmgr/setupssh.sh, but with no success...
When I run the script inside the container by hand it is working fine.
What is wrong here?
greetings
UPDATE:
Here is the link to the repo of the maintainer: link
UPDATE 2:
The build of the Dockerfile was successful, and I put the statements between:
RUN apt-get -ys clean
<statements were here>
WORKDIR /

The directory /var/local/webvirtmgr is defined as a volume.
VOLUME /var/local/webvirtmgr
Therefore this directory is a mountpoint in the running container, and whatever you add to it at build time is hidden by the volume at runtime.
You will have to use a different directory, then your script will work.
Here's a Dockerfile to test it:
FROM lemonlatte/docker-webvirtmgr
RUN mkdir /var/local/webvirtmgr2
RUN touch /var/local/webvirtmgr2/t && touch /var/local/webvirtmgr/t
RUN ls -la //var/local/webvirtmgr
RUN ls -la /var/local/webvirtmgr2
Output:
Sending build context to Docker daemon 4.608 kB
Sending build context to Docker daemon
Step 0 : FROM lemonlatte/docker-webvirtmgr
---> 18e2839dffea
Step 1 : RUN mkdir /var/local/webvirtmgr2
---> Running in d7a1e897108e
---> cc029293525e
Removing intermediate container d7a1e897108e
Step 2 : RUN touch /var/local/webvirtmgr2/t && touch /var/local/webvirtmgr/t
---> Running in 1a1375651fa7
---> e314c2529d90
Removing intermediate container 1a1375651fa7
Step 3 : RUN ls -la //var/local/webvirtmgr
---> Running in 5228691c84f5
total 8
drwxr-xr-x 2 www-data www-data 4096 Jun 6 09:22 .
drwxr-xr-x 6 root root 4096 Jun 6 09:22 ..
---> ec4113936961
Removing intermediate container 5228691c84f5
Step 4 : RUN ls -la /var/local/webvirtmgr2
---> Running in a6d2a683391a
total 8
drwxr-xr-x 2 root root 4096 Jun 6 09:22 .
drwxr-xr-x 6 root root 4096 Jun 6 09:22 ..
-rw-r--r-- 1 root root 0 Jun 6 09:22 t
---> 3cb98c5c1baf
Removing intermediate container a6d2a683391a
Successfully built 3cb98c5c1baf
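If the keys are supposed to end up under /var/local/webvirtmgr anyway, an alternative (a sketch, not part of the original answer) is to run the setup when the container starts, after the volume has been mounted, using a small entrypoint wrapper:
#!/bin/sh
# entrypoint.sh (hypothetical): run the ssh setup at container start,
# after the volume is mounted, then hand control to the original command
/webvirtmgr/setupssh.sh
exec "$@"
And in the Dockerfile (note that setting ENTRYPOINT resets any inherited CMD, so the base image's original command may need to be restated):
ADD setupssh.sh /webvirtmgr/setupssh.sh
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh /webvirtmgr/setupssh.sh
ENTRYPOINT ["/entrypoint.sh"]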

Related

Why isn't my docker container respecting my permissions?

Dockerfile:
FROM ubuntu:latest
RUN apt install -y bash
CMD []
build and run:
docker build -t test .
docker run -it test bash
minimal reproduction:
root@8807902e27b4:/# mkdir parent
root@8807902e27b4:/# cd parent
root@8807902e27b4:/parent# mkdir example
root@8807902e27b4:/parent# chmod 000 example
root@8807902e27b4:/parent# ls -la
total 12
drwxr-xr-x 3 root root 4096 Apr 28 19:33 .
drwxr-xr-x 1 root root 4096 Apr 28 19:32 ..
d--------- 2 root root 4096 Apr 28 19:33 example
root@8807902e27b4:/parent# cd example
root@8807902e27b4:/parent/example# echo "test" > test.txt
root@8807902e27b4:/parent/example# chmod 100 test.txt
root@8807902e27b4:/parent/example# cat test.txt
test
root@8807902e27b4:/parent/example# ls -la
total 12
d--------- 2 root root 4096 Apr 28 19:33 .
drwxr-xr-x 3 root root 4096 Apr 28 19:33 ..
---x------ 1 root root 5 Apr 28 19:33 test.txt
In the above example, the cd example command should fail, and even if it doesn't, running cat test.txt should fail. Anyone know what's up?
Here are the same (working) commands run in osx:
beaushinkle@Beaus-MBP ~/p/example-docker> mkdir parent
beaushinkle@Beaus-MBP ~/p/example-docker> cd parent
beaushinkle@Beaus-MBP ~/p/e/parent> mkdir example
beaushinkle@Beaus-MBP ~/p/e/parent> chmod 000 example
beaushinkle@Beaus-MBP ~/p/e/parent> cd example
cd: Permission denied: 'example'
beaushinkle@Beaus-MBP ~/p/e/parent [1]> chmod 777 example
beaushinkle@Beaus-MBP ~/p/e/parent> cd example
beaushinkle@Beaus-MBP ~/p/e/p/example> echo "test" > test.txt
beaushinkle@Beaus-MBP ~/p/e/p/example> chmod 100 test.txt
beaushinkle@Beaus-MBP ~/p/e/p/example> cat test.txt
cat: test.txt: Permission denied
If the prompt is anything to go by, we are logged in as root in the minimal reproduction. Thus we have root privileges: root holds the CAP_DAC_OVERRIDE capability, which bypasses file permission checks, so it can read and write all files regardless of their mode bits (external link).
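To see normal permission checks kick in again, retry the same read as a non-root user (a sketch, assuming the same ubuntu container, where the nobody account exists):
# still inside the container, as root
root@8807902e27b4:/parent/example# su nobody -s /bin/sh -c 'cat /parent/example/test.txt'
cat: /parent/example/test.txt: Permission denied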

Unable to switch to root user after ssh into the instance using shell script

I have a scenario to automate the manual build update process via shell script on multiple VM nodes.
To that end, I am trying the sample script below: it first sshes into the instance and then switches to the root user to perform the further steps, like copying the build to the archives directory under /var, before proceeding with the later steps.
Below is the sample script,
#!/bin/sh
publicKey='/path/to/publickey'
buildVersion='deb9.deb build'
buildPathToStore='/var/cache/apt/archives/'
pathToHomedir='/home'
script="whoami && pwd && ls -la && whoami && mv ${buildVersion} ${buildPathToStore} && find ${buildPathToStore} | grep deb9"
for var in "$@"
do
    copyBuildPath="${publicKey} ${buildVersion} ${var}:/home/admin/"
    echo "copy build ==>" ${copyBuildPath}
    scp -r -i ${copyBuildPath}
    ssh -i $publicKey -t $var "sudo su - & ${script}; " # This shall execute all commands as root
done
So the CLI output for the above script looks something like this:
admin //this is the user check
/home/admin
total 48
drwxr-xr-x 6 admin admin 4096 Dec 6 00:28 .
drwxr-xr-x 6 root root 4096 Nov 17 14:07 ..
drwxr-xr-x 3 admin admin 4096 Nov 17 14:00 .ansible
drwx------ 2 admin admin 4096 Nov 23 18:26 .appdata
-rw------- 1 admin admin 5002 Dec 6 17:47 .bash_history
-rw-r--r-- 1 admin admin 220 May 16 2017 .bash_logout
-rw-r--r-- 1 admin admin 3506 Jun 14 2019 .bashrc
-rw-r--r-- 1 admin admin 675 May 16 2017 .profile
drwx------ 4 admin admin 4096 Nov 23 18:26 .registry
drwx------ 2 admin admin 4096 Jun 21 2019 .ssh
-rw-r--r-- 1 admin admin 0 Dec 6 19:42 testFile.txt
-rw------- 1 admin admin 2236 Jun 21 2019 .viminfo
admin
If I use sudo su -c and remove the &, like:
ssh -i $publicKey -t $var "sudo su -c ${script}; "
Then whoami returns root once, but the working directory still prints as /home/admin instead of /root, and the next set of commands still run as the admin user rather than root. So the admin user does not have the privileges to move the build to the archives directory and install it.
With & I wanted to ensure that the further steps are done in the background.
Not sure how to proceed ahead with this. Good suggestions are most welcome right now :)
"sudo su - & ${script}; "
expands to:
sudo su - & whoami && pwd && ...
First sudo su - is run in the background. Then the command chain is executed by your normal login shell, not by root.
sudo su -c ${script};
expands to:
sudo su -c whoami && pwd && ...
So first sudo su -c whoami is executed, which runs whoami as root. Then, if that command is successful, pwd is executed, as the normal user.
It is utterly hard to correctly pass commands to execute on a remote site using ssh. It is even harder with sudo su: the command gets word-split three (or two?) times, once by ssh, then by the shell, then by the shell run by sudo su.
If you do not need interactive communication, it's best to use a here document with the -s shell option, something along these lines (untested):
# DO NOT store commands to use in a variable.
# or if you do and you know what you are doing, properly quote it (printf "%q ") and run it via eval
script() {
    set -euo pipefail
    whoami
    pwd
    ls -la
    whoami
    mv "$buildVersion" "$buildPathToStore"
    find "$buildPathToStore" | grep deb9
}
ssh ... "sudo bash -s" <<EOF
echo "Yay! anything here!"
echo "Note that here document delimiter is not quoted!"
$(
# safely import context to work with
# note how command substitution is executed on host side
declare -f script
# pass variables too!
declare -p buildVersion buildPathToStore
)
script
EOF
When you use su alone it keeps you in your current directory; when you use su - it simulates a full root login.
You should write: su - root -c "${script}" (quoting the expansion so the whole chain reaches -c as one argument).
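Putting that together (a sketch, untested, reusing the variables from the question), the whole chain can be handed to a single root shell:
# quote the entire chain so it reaches su as one argument;
# this nesting works only because ${script} contains no single quotes
ssh -i "$publicKey" -t "$var" "sudo su - root -c '${script}'"
# now whoami prints root and pwd prints /root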

docker compose: issue to start a container with a specific shell script

I would like to start a container with a specific shell script using docker compose.
For example, a tomcat container starts initweb.sh, creating the empty file /tmp/testweb:
$ ls -lR
.:
total 8
-rw-rw-r-- 1 xxxxxx xxxxxx 223 déc. 4 19:45 docker-compose.yml
drwxrwxr-x 2 xxxxxx xxxxxx 4096 déc. 4 19:37 web
./web:
total 4
-rwxrwxr-x 1 xxxxxx xxxxxx 31 déc. 4 19:37 initweb.sh
$ cat docker-compose.yml
version: '3'
services:
  web:
    container_name: web
    hostname: web
    image: "tomcat:7.0-jdk8"
    ports:
      - 8080:8080
    volumes:
      - "./web/:/usr/local/bin/"
    command: sh -c "/usr/local/bin/initweb.sh"
$ cat web/initweb.sh
#!/bin/bash
touch /tmp/testweb
When I execute docker-compose up
$ docker-compose up -d
Creating network "tomcat_default" with the default driver
Creating web ... done
$ docker-compose run web ls -l /usr/local/bin/
total 4
-rwxrwxr-x 1 1000 1000 31 Dec 4 18:37 initweb.sh
$ docker-compose run web ls -l /tmp
total 4
drwxr-xr-x 1 root root 4096 Nov 24 01:29 hsperfdata_root
The owner of my script initweb.sh is not root, so maybe that's why it is not executed, but I don't know how to resolve this issue.
Ownership is not the problem. The container exits as soon as its main command does, and docker-compose run starts a fresh container rather than inspecting the old one, so the file created by the first run is never seen. You need to make initweb.sh behave like a server process:
#!/bin/bash
touch /tmp/testweb
sleep infinity
then
~$ docker-compose up -d
Creating network "tmp_default" with the default driver
Creating web ... done
~$ docker-compose exec web ls -l /tmp
total 4
drwxr-xr-x 1 root root 4096 Nov 24 01:29 hsperfdata_root
-rw-r--r-- 1 root root 0 Dec 4 22:39 testweb
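If tomcat should still serve requests in this container, an alternative (a sketch, assuming the default command of the official tomcat image) is to chain the init script with the original server command instead of sleeping:
command: sh -c "/usr/local/bin/initweb.sh && catalina.sh run"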

How to use while loop in gitlab-ci script section

I'm trying to iterate over URL entries in a file and use each entry as input for a crawler tool. Its result should be written to a file.
Here is the gitlab-ci.yml file:
stages:
  - test

test:
  stage: test
  tags:
    - shell-docker
  script:
    - wget https://github.com/FaKeller/sireg/releases/download/v0.3.1/sireg-linux
    - chmod 775 sireg-linux
    - mkdir output
    - ls -alF
    - while read line; do
        echo $line;
        ./sireg-linux exec --loader-sitemap-sitemap \"$line\" >> ./output/${line##*/}_out.txt;
      done < sitemap-index
    - ls -alF output
  artifacts:
    paths:
      - output/*
    expire_in: 1 hrs
and here is the sitemap-index file (only one entry):
http://example.com/sitemap.xml
Both files are in the same directory. I expect a file sitemap.xml_out.txt to be written into the output folder (also in the same directory). I am pretty sure the ./sireg-linux script does not execute, because it usually takes a few minutes to complete (tested locally).
The output of the stage looks like this:
2020-04-02 18:22:21 (4,26 MB/s) - »sireg-linux« saved [62566347/62566347]
$ chmod 775 sireg-linux
$ mkdir output
$ ls -alF
total 61128
drwxrwxr-x 4 gitlab-runner gitlab-runner 4096 Apr 2 18:22 ./
drwxrwxr-x 10 gitlab-runner gitlab-runner 4096 Apr 2 15:46 ../
drwxrwxr-x 5 gitlab-runner gitlab-runner 4096 Apr 2 18:22 .git/
-rw-rw-r-- 1 gitlab-runner gitlab-runner 512 Apr 2 18:22 .gitlab-ci.yml
drwxrwxr-x 2 gitlab-runner gitlab-runner 4096 Apr 2 18:22 output/
-rw-rw-r-- 1 gitlab-runner gitlab-runner 30 Apr 2 15:46 README.md
-rwxrwxr-x 1 gitlab-runner gitlab-runner 62566347 Nov 11 2017 sireg-linux*
-rw-rw-r-- 1 gitlab-runner gitlab-runner 55 Apr 2 18:08 sitemap-index
$ while read line; do echo $line; ./sireg-linux exec --loader-sitemap-sitemap \"$line\" >>
./output/${line##*/}_out.txt; done < sitemap-index
$ ls -alF output
total 8
drwxrwxr-x 2 gitlab-runner gitlab-runner 4096 Apr 2 18:22 ./
drwxrwxr-x 4 gitlab-runner gitlab-runner 4096 Apr 2 18:22 ../
Uploading artifacts...
Runtime platform arch=amd64 os=linux pid=23813 revision=1f513601 version=11.10.1
WARNING: output/*: no matching files
ERROR: No files to upload
Job succeeded
Update:
Tried to move all steps into a separate script, but that did not work either.
Update 2:
Forgot to add exec in the command:
./sireg-linux exec --loader-sitemap-sitemap \"$line\" >>
./output/${line##*/}_out.txt;
Unfortunately it didn't help.
What can I do to make it work?
Try changing ./sireg-linux --loader-sitemap-sitemap \"$line\" to ./sireg-linux exec --loader-sitemap-sitemap "$line". Hope this helps!
EDIT: Also, it looks like the script doesn't enter the while loop at all. Maybe the file sitemap-index is empty or it has only one line without a newline at the end?
EDIT 2: The backslashes in the command line are wrong; I corrected my answer.
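The corrected loop would look something like this (a sketch; the escaped quotes become plain quotes so $line is passed as a single argument, and read -r is added so backslashes are kept literal):
script:
  - while read -r line; do
      echo "$line";
      ./sireg-linux exec --loader-sitemap-sitemap "$line" >> "./output/${line##*/}_out.txt";
    done < sitemap-index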
You can of course painfully debug multi-line commands in YAML.
You can even use YAML multi-line strings:
How do I break a string over multiple lines?
https://gitlab.com/snippets/1717579
But I would just wrap the code in a shell script, store it in the same GitLab repo, and call it from .gitlab-ci.yml.
This way you can run this script exactly the same way both locally and in CI, which is a best practice in Continuous Delivery.
- ./script.sh
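For this job the wrapper might look something like the sketch below (a hypothetical script.sh reusing the commands from the question):
#!/bin/sh
# script.sh - wrapper so the job runs identically locally and in CI
set -e
wget https://github.com/FaKeller/sireg/releases/download/v0.3.1/sireg-linux
chmod 775 sireg-linux
mkdir -p output
while read -r line; do
  echo "$line"
  ./sireg-linux exec --loader-sitemap-sitemap "$line" >> "./output/${line##*/}_out.txt"
done < sitemap-index
ls -alF output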

Jenkins "file not found" error with existing Bash script

My goal is to have Jenkins 2 execute alpha integration tests between an express js app and a postgres db. I am able to spin up containerized resources locally and test successfully with bash scripts that employ docker-compose. The relevant bash script is scripts/docker/dockerRunTest.sh.
However, when I try to do the same thing via Jenkins, Jenkins claims that the initiating script is not found.
Jenkinsfile
stage('Alpha Integration Tests') {
    agent {
        docker {
            image 'tmaier/docker-compose'
            args '-u root -v /var/run/docker.sock:/var/run/docker.sock --network host'
        }
    }
    steps {
        sh 'ls -lah ./scripts/docker/'
        sh './scripts/docker/dockerRunTest.sh'
    }
}
Output
+ ls -lah ./scripts/docker/
total 36
drwxr-xr-x 2 root root 4.0K Jan 26 21:31 .
drwxr-xr-x 6 root root 4.0K Jan 26 20:54 ..
-rwxr-xr-x 1 root root 2.2K Jan 26 21:31 docker.lib.sh
-rwxr-xr-x 1 root root 282 Jan 26 21:31 dockerBuildApp.sh
-rwxr-xr-x 1 root root 289 Jan 26 21:31 dockerBuildTestRunner.sh
-rwxr-xr-x 1 root root 322 Jan 26 21:31 dockerDown.sh
-rw-r--r-- 1 root root 288 Jan 26 21:31 dockerRestart.sh
-rwxr-xr-x 1 root root 482 Jan 26 21:31 dockerRunTest.sh
-rwxr-xr-x 1 root root 284 Jan 26 21:31 dockerUp.sh
+ ./scripts/docker/dockerRunTest.sh
/var/jenkins_home/workspace/project-name#2#tmp/durable-9ac0d23a/script.sh: line 1: ./scripts/docker/dockerRunTest.sh: not found
ERROR: script returned exit code 127
The file clearly exists per the ls output. I have a hazy idea that there may be some conflict between how shell scripts and bash scripts work, but I cannot quite grasp why Jenkins is unable to execute a script that clearly exists.
edit (including script contents):
dockerRunTest.sh
#!/bin/bash
MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd -P )"
MY_DIR="${MY_DIR:?}"
SCRIPTS_DIR="$(realpath "${MY_DIR}/..")"
ROOT_DIR="$(realpath "${SCRIPTS_DIR}/..")"
TEST_DIR="${ROOT_DIR}/test/integration"
SRC_DIR="${ROOT_DIR}/src"
REPORTS_DIR="${ROOT_DIR}/reports"
. "${SCRIPTS_DIR}/docker/docker.lib.sh"
dockerComposeUp
dockerExecuteTestRunner
dockerComposeDown
docker.lib.sh
#!/bin/bash
CURRENT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd -P )"
CURRENT_DIR="${CURRENT_DIR:?}"
SCRIPTS_DIR="$(realpath "${CURRENT_DIR}/..")"
ROOT_DIR="$(realpath "${SCRIPTS_DIR}/..")"
. "${SCRIPTS_DIR}/lib.sh"
dockerComposeUp() {
  docker-compose build --no-cache
  docker-compose up --detach --force-recreate
  DC_CODE=$?
  if [ ${DC_CODE} -ne 0 ]; then
    # Introspection
    docker-compose logs
    docker-compose ps
    exit ${DC_CODE}
  fi
}

dockerComposeDown() {
  # docker-compose rm: Removes stopped service containers.
  #   -f, --force - Don't ask to confirm removal.
  #   -s, --stop - Stop the containers, if required, before removing.
  #   -v - Remove any anonymous volumes attached to containers.
  docker-compose rm --force --stop -v
}

dockerComposeRestart() {
  dockerComposeDown
  dockerComposeUp
}

dockerBuildTestRunner() {
  docker build -f test/Dockerfile -t kwhitejr/botw-test-runner .
}

dockerExecuteTestRunner() {
  IMAGE_NAME="kwhitejr/botw-test-runner"
  echo "Build new ${IMAGE_NAME} image..."
  dockerBuildTestRunner
  echo "Run ${IMAGE_NAME} executable test container..."
  docker run -it --rm --network container:api_of_the_wild_app_1 kwhitejr/botw-test-runner
}
The tmaier/docker-compose image doesn't have the /bin/bash interpreter installed by default, since the latest tag is an Alpine-based image [1, 2]. This can be confirmed by running:
$ docker run -it --rm tmaier/docker-compose bash
/usr/local/bin/docker-entrypoint.sh: exec: line 35: bash: not found
To get the script working, either install bash in the docker image using apk add bash, or change the shebang to #!/bin/sh if the script can be run by the ash shell (the default shell in BusyBox).
[1] https://github.com/tmaier/docker-compose/blob/b740feb61fb25030101638800a609605cfd5e96a/Dockerfile#L2
[2] https://github.com/docker-library/docker/blob/d94b9832f55143f49e47d00de63589ed41f288e7/18.09/Dockerfile#L1
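For the pipeline above, installing bash before calling the script might look like this (a sketch; apk add works here because the agent already runs with -u root):
steps {
    sh 'apk add --no-cache bash'
    sh 'ls -lah ./scripts/docker/'
    sh './scripts/docker/dockerRunTest.sh'
}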
I had a similar issue, but in my case it was because the shell script file had Windows-style line endings (if you open the file in the terminal using vi, you will see each line end with ^M). The kernel then looks for an interpreter literally named /bin/bash^M, which does not exist, hence "not found".
I fixed this using Notepad++: Edit -> EOL Conversion -> Unix (LF)
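The same conversion can be done from the command line (a sketch; dos2unix may need to be installed first):
# strip carriage returns in place
sed -i 's/\r$//' scripts/docker/dockerRunTest.sh
# or, equivalently
dos2unix scripts/docker/dockerRunTest.sh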
