I built a container using the docker-compose script below:
services:
  client:
    image: alpine
    environment:
      - BACKUP_ENABLED=1
      - BACKUP_INTERVAL=60
      - BACKUP_PATH=/data
      - BACKUP_FILENAME=db_backup
    networks:
      - dbnet
    entrypoint: |
      sh -c 'sh -s << EOF
      apk add --no-cache mysql-client
      while true
      do
        if [ "$$BACKUP_ENABLED" = "1" ]
        then
          sleep $$BACKUP_INTERVAL
          echo "$$(date +%FT%H.%M) - Making Backup to : $$BACKUP_PATH/$$(date +%F)/$$BACKUP_FILENAME-$$(date +%FT%H.%M).sql.gz"
          mysqldump -u root -ppassword -h dblb --all-databases | gzip > $$BACKUP_PATH/$$(date +%F)/$$BACKUP_FILENAME-$$(date +%FT%H.%M).sql.gz
        fi
      done
      EOF'
But I hit an issue where the date never updates, so the loop keeps writing the backup to the same file. Every 60 s the log shows the same date value.
The same thing happens when I write and run the script manually inside the container.
The timestamp displays correctly when I just run date in the container console.
Why won't the date update? What did I miss in the script?
Why won't the date update?
Because it is expanded by the outer shell. Compare a shell script:
#!/bin/sh
# in a script
# this is running inside a shell
cat <<EOF # cat just prints data
$(date) # not cat, **but the shell**, expands $(date)
EOF
vs:
sh -c '
# this is running inside a shell
sh -s <<EOF # sh -s executes input data
echo $(date) # not sh -s, but **the outer shell**, expands $(date). ONCE
EOF
'
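If you really want to keep the here-document, quoting the delimiter stops the outer shell from expanding anything inside it, so the inner shell does the expansion on every iteration. A minimal sketch (plain sh outside of compose, hence single $ rather than $$):

sh -c 'sh -s << "EOF"
# quoting EOF stops the outer sh from expanding $(date) up front;
# the inner sh -s now expands it each time the loop runs
while true; do
  echo "$(date)"
  sleep 1
done
EOF'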
That sh -c 'sh -s << EOF' entrypoint construct is all rather unorthodox anyway; just run the command that you want to run:
command:
  - sh
  - -c
  - |
    apk add --no-cache mysql-client
    while sleep $$BACKUP_INTERVAL; do
      echo "$$(date +%FT%H.%M) - Making Backup to : $$BACKUP_PATH/$$(date +%F)/$$BACKUP_FILENAME-$$(date +%FT%H.%M).sql.gz"
      mysqldump -u root -ppassword -h dblb --all-databases | gzip > $$BACKUP_PATH/$$(date +%F)/$$BACKUP_FILENAME-$$(date +%FT%H.%M).sql.gz
    done
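Putting it together, the whole service might look like this (a sketch reusing the values from the question; the mkdir -p line is an addition so the dated directory exists before gzip writes into it). Note that $$ in a compose file becomes a single literal $, so the shell inside the container expands the variables and $(date) at run time, on every iteration:

services:
  client:
    image: alpine
    environment:
      - BACKUP_ENABLED=1
      - BACKUP_INTERVAL=60
      - BACKUP_PATH=/data
      - BACKUP_FILENAME=db_backup
    networks:
      - dbnet
    command:
      - sh
      - -c
      - |
        apk add --no-cache mysql-client
        while sleep $$BACKUP_INTERVAL; do
          mkdir -p "$$BACKUP_PATH/$$(date +%F)"
          echo "$$(date +%FT%H.%M) - Making Backup to : $$BACKUP_PATH/$$(date +%F)/$$BACKUP_FILENAME-$$(date +%FT%H.%M).sql.gz"
          mysqldump -u root -ppassword -h dblb --all-databases | gzip > "$$BACKUP_PATH/$$(date +%F)/$$BACKUP_FILENAME-$$(date +%FT%H.%M).sql.gz"
        done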
Related
I run a few scripts one by one:
cat 001.sh
sh /home/mysqldom/da-cron/f_mysqldom_nrd/5_change_nrd_tld.sh
sh /home/mysqldom/da-cron/f_mysqldom_nrd/5_proxy_removed.sh
sh /home/mysqldom/da-cron/f_mysqldom_nrd/6_sync_nrd.sh
The last script won't work... if I run it manually it works very well...
The script is:
cat 6_sync_nrd.sh
source /home/mysqldom/da-cron/var.sh
cd /home/mysqldom/da-cron/f_mysqldom_nrd/
mysql -u mysqldom_fnrd -p$mysqldom_fnrd_password -D mysqldom_fnrd -e "UPDATE \`$yesterday\` SET sync='$yesterday';"
mysql -u mysqldom_fnrd -p$mysqldom_fnrd_password -D mysqldom_fnrd -e "DELETE FROM \`$yesterday\` WHERE domain_name = 'domain_name';"
sed s/change_database/$yesterday/g update.conf > $yesterday.conf
/usr/share/logstash/bin/logstash -f $yesterday.conf --path.data /var/lib/logstash108
rm -rf nohup.out
Script 6 has to run after script 5.
Any idea what's wrong in it?
I have a bash file that should bring the postgres docker container online and then run a .sql file to create the databases, but it's throwing this error:
psql: error: provision-db.sql: No such file or directory
I have checked the path and the file exists at the same level as this bash script. Following is the content of my bash file.
#!/usr/bin/env bash
docker-compose up -d db
# Ensure the Postgres server is online and usable
until docker exec -i boohoo.postgres pg_isready --host="${POSTGRES_HOST}" --username="${POSTGRES_USER}"
do
echo "."
sleep 1
done
docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f provision-db.sql
And this is the provision-db.sql file.
DROP DATABASE "boo-hoo";
CREATE DATABASE "boo-hoo";
GRANT ALL PRIVILEGES ON DATABASE "boo-hoo" TO postgres;
This is the relevant part of docker-compose.yml:
version: '3.3'
services:
  db:
    container_name: boohoo.postgres
    hostname: postgres.boohoo
    image: postgres
    ports:
      - "15432:5432"
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
The short version
This works
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'
The long version
Multiple things are going on here.
1) Why does the following command not find provision-db.sql?
docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f provision-db.sql
Because provision-db.sql is on your host and not in your container. Therefore, when you execute the psql command inside the container, it cannot find the file.
2) Why didn't my first solution work?
cat provision-db.sql | docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f -
This should do the trick, assuming provision-db.sql is in the directory you run the command from.
That is because the variables ${POSTGRES_HOST} and ${POSTGRES_USER} get evaluated on your host machine, and I guess they are not set there. In addition, I forgot to specify the -w flag to avoid the password prompt.
3) Why does that work?
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'
Well, let's go through it step by step.
First, we print the content of provision-db.sql, which resides on the host machine, to stdout and pipe it to the next command via |.
docker exec executes a command in the specified container (boohoo.postgres). By specifying the -i flag we allow stdin from your host to be connected to stdin in the container <- that's important.
In the container, we execute bash -c, which is just a wrapper to avoid evaluating the shell variables on the host. We want the variables from the container, and by putting the command into single quotes we get exactly that.
docker exec boohoo.postgres bash -c "echo $POSTGRES_USER"
evaluates the host env variable named POSTGRES_USER, whereas
docker exec boohoo.postgres bash -c 'echo $POSTGRES_USER'
evaluates the container env variable named POSTGRES_USER.
Next we just have to get our postgres command in order.
psql -U ${POSTGRES_USER} -w -a -q -f -
-U specifies the user
-w never prompts for a password
-a echoes all input lines as they are read
-q runs quietly
-f - reads commands from the given file, where - means stdin
-f is an option for psql and not for docker exec, and psql is running inside the container, so it can only access the file if it is inside the container as well.
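For completeness, the original bash file might look like this with that fix applied (a sketch based on the snippets above; the container name comes from the question, and pg_isready is run without the host/user flags since it executes inside the container anyway):

#!/usr/bin/env bash
docker-compose up -d db

# Wait until the Postgres server inside the container accepts connections
until docker exec -i boohoo.postgres pg_isready
do
  echo "."
  sleep 1
done

# Pipe the file from the host into psql running inside the container;
# the single quotes make the container, not the host, expand POSTGRES_USER
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'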
On my computer, bash starts and keeps running in docker:git.
~ # bash
bash-4.4# ps
PID USER TIME COMMAND
1 root 0:00 sh
26 root 0:00 bash
32 root 0:00 ps
bash-4.4# echo $0
bash
bash-4.4# echo $SHELL
/bin/ash
ash seems a bit limited, but I'm able to run a #!/bin/bash file, so it's fine so far.
However, on GitLab CI on gitlab.com, the bash command doesn't return anything, yet it doesn't seem to keep running either. Why is this?
$ apk add --update bash
$ bash
$ ps && pwd && echo $0 && echo $SHELL && ls /bin
PID USER TIME COMMAND
1 root 0:00 /bin/sh
10 root 0:00 /bin/sh
25 root 0:00 ps
/builds/230s/industrial_calibration
/bin/sh
ash
base64
bash
bashbug
:
More detailed output on my computer:
$ lsb_release -a|grep Description
No LSB modules are available.
Description: Ubuntu 16.04.4 LTS
$ docker pull docker:git
$ docker images | grep -i docker
docker git 5c58d1939c5d 10 days ago 152MB
$ docker run -it docker:git
~ # apk add --update bash
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
:
(10/10) Installing tar (1.29-r1)
Executing busybox-1.27.2-r11.trigger
OK: 37 MiB in 31 packages
~ # bash
bash-4.4# ps
PID USER TIME COMMAND
1 root 0:00 sh
26 root 0:00 bash
32 root 0:00 ps
bash-4.4# echo $0
bash
bash-4.4# echo $SHELL
/bin/ash
The .gitlab-ci.yml used (this fails at the last line, as the sourced file uses bash-specific syntax):
image: docker:git

before_script:
  - apk add --update bash coreutils tar # install industrial_ci dependencies
  - bash
  - git clone https://github.com/plusone-robotics/industrial_ci.git .ci_config -b gitlab_modularize
  - ps && pwd && echo $0 && echo $SHELL && ls /bin
  - source ./.ci_config/industrial_ci/src/tests/gitlab_module.sh
UPDATE: Sourcing the bash-based file via bash -c does work, but it's probably not useful to me: what I really want is to use a function defined in that file, and because the bash -c line terminates without carrying its context over, the function won't be available in the later lines.
- /bin/bash -c "source ./.ci_config/industrial_ci/src/tests/gitlab_module.sh"
- pass_sshkey_docker
Say your .gitlab-ci.yml boils down to this:
image: alpine

before_script:
  - apk add --update bash coreutils tar
  - bash
  - echo smth
Now imagine you are the computer. You wait for each command to finish before executing the next one, and you don't use the keyboard. So what do you do? Let's try it with alpine, substituting newlines with ;:
$ docker run -ti --rm alpine sh -c 'apk add --update bash; bash; echo smth'
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing pkgconf (1.3.10-r0)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r0)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r0)
(4/6) Installing ncurses-libs (6.0_p20171125-r0)
(5/6) Installing readline (7.0.003-r0)
(6/6) Installing bash (4.4.19-r1)
Executing bash-4.4.19-r1.post-install
Executing busybox-1.27.2-r7.trigger
OK: 13 MiB in 17 packages
bash-4.4#
YOU DON'T TOUCH THE KEYBOARD. You can wait endlessly for the bash-4.4# prompt to disappear, because bash will wait endlessly for you to type something. The command echo smth will never execute, GitLab will time out waiting for bash to end, the end.
Now, if you want to execute something in alpine using bash from GitLab CI, I suggest doing it this way: create an executable script ci-myscript.sh that you git add && git commit to your repo:
$ cat ci-myscript.sh
#!/bin/bash
git clone https://github.com/plusone-robotics/industrial_ci.git .ci_config -b gitlab_modularize
ps && pwd && echo $0 && echo $SHELL && ls /bin
source ./.ci_config/industrial_ci/src/tests/gitlab_module.sh
The first line, #!/bin/bash, is the shebang that tells the kernel to run this script under bash. Now from your .gitlab-ci.yml you run:
image: docker:git

before_script:
  - apk add --update bash coreutils tar
  - ./ci-myscript.sh
Creating such scripts is actually a good workflow, because you can test the script locally on your computer before running it in GitLab CI.
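For example (a quick local sanity check, assuming the script file from the snippet above):

chmod +x ci-myscript.sh   # make the script executable once, then commit it
./ci-myscript.sh          # run it locally before pushing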
The other option is a single bash -c call, as suggested by @Mazel in the comments:
image: docker:git

before_script:
  - apk add --update bash coreutils tar
  - bash -c 'git clone https://github.com/plusone-robotics/industrial_ci.git .ci_config -b gitlab_modularize; ps && pwd && echo $0 && echo $SHELL && ls /bin; source ./.ci_config/industrial_ci/src/tests/gitlab_module.sh'
That way you need to call everything in a single line, because the next line won't have the same environment as the previous one.
I need some help writing a script to automate a fairly simple process of running tests on several docker-compose environments on a Windows host.
This is the manual process that I would like to automate:
Open a docker quickstart terminal
Setup 3 identical environments: docker-compose -p test1 up -d && docker-compose -p test2 up -d && docker-compose -p test3 up -d
Open 2 more docker terminals, and then run one of the following on each:
docker-compose -p test1 run app ./node_modules/gulp/bin/gulp.js cuc-reports
docker-compose -p test2 run app ./node_modules/gulp/bin/gulp.js cuc-not-reports1
docker-compose -p test3 run app ./node_modules/gulp/bin/gulp.js cuc-not-reports2
When all tests complete, tear down: docker-compose -p test1 down && docker-compose -p test2 down && docker-compose -p test3 down
I'm stuck pretty much at the beginning. I can open a docker machine shell, but I can't get it to change directories in order to execute step 2. I tried the following:
#!/bin/bash
src=$PWD/../../
cd "C:\Program Files\Docker Toolbox"
"C:\Program Files\Git\bin\bash.exe" --login -i "C:\Program Files\Docker Toolbox\start.sh" cd $src && docker-compose -p test1 up -d && docker-compose -p test2 up -d && docker-compose -p test3 up -d
However the "cd $src" is not executed which causes the subsequent commands to fail.
Trying to generalize the things I think I need in order to run this script, I might summarize as follows:
How can I pass commands to be executed once the docker shell loads (such as "cd ...")?
How can I open multiple independent (docker) shells from the root shell and wait for them to finish executing their commands?
I intended to write the script for git-bash on Windows, which is my preference, but suggestions for a Windows batch script are also welcome.
Well, it wasn't so hard in the end. It's pretty messy and I'm sure it could be improved, but it does what I wanted (in the end I'm running 2 docker environments, not 3, as it performs better with 4 cores). If anyone is interested where I got all this weirdness from, just ask and I'll cite some sources. Remember this answer is for Windows:
#!/bin/bash
cd `dirname $0`/../../

# Each "start" opens a new git-bash window: bring one environment up, run its
# gulp suite in the first matching container, then tear the environment down.
start bash -c 'docker-compose -p test1 up -d; sleep 3s; docker exec -i $(docker-compose -p test1 ps ros | grep -m 1 ros | cut -d " " -f1 ) ./node_modules/gulp/bin/gulp.js cucumber1; docker-compose -p test1 down; bash'
start bash -c 'docker-compose -p test2 up -d; sleep 3s; docker exec -i $(docker-compose -p test2 ps ros | grep -m 1 ros | cut -d " " -f1 ) ./node_modules/gulp/bin/gulp.js cucumber2; docker-compose -p test2 down; bash'
sleep 5s

# Show live resource usage for all running containers in a third window
start bash -c "docker stats $(docker ps | awk '{if(NR>1) print $NF}')"
This works:
# echo 1 and exit:
$ docker run -i -t image /bin/bash -c "echo 1"
1
# exit
# echo 1 and return shell in docker container:
$ docker run -i -t image /bin/bash -c "echo 1; /bin/bash"
1
root@4c064f2554de:/#
Question: How could I source a file into the shell? (this does not work)
$ docker run -i -t image /bin/bash -c "source <(curl -Ls git.io/apeepg) && /bin/bash"
# content from http://git.io/apeepg is sourced and shell is returned
root@4c064f2554de:/#
In my case, I use a RUN source command in a Dockerfile (which runs under /bin/bash thanks to the symlink swap below) to install nvm for node.js.
Here is an example.
FROM ubuntu:14.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
...
...
RUN source ~/.nvm/nvm.sh && nvm install 0.11.14
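If you'd rather not replace /bin/sh globally, an equivalent sketch is to invoke bash explicitly for just that step (this assumes nvm was already installed in one of the elided earlier layers, as in the example above):

# exec form bypasses the default shell, so bash can run the source builtin
RUN ["/bin/bash", "-c", "source ~/.nvm/nvm.sh && nvm install 0.11.14"]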
I wanted something similar, and expanding a bit on your idea, came up with the following:
docker run -ti --rm ubuntu \
bash -c 'exec /bin/bash --rcfile /dev/fd/1001 \
1002<&0 \
<<<$(echo PS1=it_worked: ) \
1001<&0 \
0<&1002'
--rcfile /dev/fd/1001 will use that file descriptor's contents instead of .bashrc
1002<&0 saves stdin
<<<$(echo PS1=it_worked: ) puts PS1=it_worked: on stdin
1001<&0 moves this stdin to fd 1001, which we use as rcfile
0<&1002 restores the stdin that we saved initially
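A simpler variant with much the same effect, if a bind mount is acceptable (a sketch; my-rc.sh is a hypothetical file on the host containing, say, PS1=it_worked:):

# mount the rc file read-only into the container and point bash at it
docker run -ti --rm -v "$PWD/my-rc.sh:/tmp/my-rc.sh:ro" ubuntu \
    bash --rcfile /tmp/my-rc.sh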
You can use .bashrc in interactive containers:
RUN curl -O git.io/apeepg.sh && \
echo 'source apeepg.sh' >> ~/.bashrc
Then just run as usual with docker run -it --rm some/image bash.
Note that this will only work with interactive containers.
I don't think you can do this, at least not right now. What you could do is modify your image, and add the file you want to source, like so:
FROM image
ADD my-file /my-file
RUN ["source", "/my-file", "&&", "/bin/bash"]