I need to run a script which, among other things, runs socat.
Running the script from the command line works fine; now I want to run this script as a service.
This is the script I have:
#!/usr/bin/env bash
set -e
TTY=${AQM_TTY:-/dev/ttyUSB0}
/reg_sesion/create
DESTINOS=(http://127.0.0.1)
LOG_DIR=./logs-aqm
mkdir -p "${LOG_DIR}"
###ADDED####
echo $$ > /var/run/colector.pid
socat -b 115200 ${TTY},echo=0,crnl - |
  grep --line-buffered "^rs" |
  while read post; do
    for destino in "${DESTINOS[@]}"; do
      wget --post-data="$(echo "${post}" | tr -d "\n")" \
        -O /dev/null \
        --no-verbose \
        --background \
        --append-output="${LOG_DIR}/${destino//\/}.log" \
        "${destino}/reg_sesion/create"
    done
    echo "${post}" | tee -a "${LOG_DIR}/aqm.log"
  done
And the service file:
[Unit]
Description=colector
[Service]
Type=simple
PIDFile=/var/run/colector.pid
User=root
Group=root
#ExecStart=/root/socat.sh
ExecStart=/bin/sh -c '/root/socat.sh'
[Install]
WantedBy=multi-user.target
When I start the service, the process starts and ends quickly.
Any ideas?
Thanks for your time.
Remove PIDFile= from your service file, and see whether it works.
PIDFile= is mainly for Type=forking, where your startup program forks a child process and exits; PIDFile= tells systemd which process to watch. With Type=simple and a long-running service, systemd itself spawns the process that runs your service, so it already knows exactly what the PID is.
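For illustration, the same unit without PIDFile= might look like this (a minimal sketch based on the unit above, assuming /root/socat.sh is executable):
[Unit]
Description=colector

[Service]
Type=simple
User=root
Group=root
ExecStart=/root/socat.sh

[Install]
WantedBy=multi-user.target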
I need to script a way to do the following (note all is done on the local machine as root):
runuser -l user1 -c 'ssh localhost' &
runuser -l user1 -c 'systemctl --user list-units'
The first command should be run as root. The end goal is to log in as "user1" so that if anyone runs who, "user1" appears in the list. Notice how the first command is backgrounded before the next command is run.
The next command should be run as root as well, NOT user1.
Problem: these commands run fine when run separately, but when run in a script, "user1" never shows up in who. Here is my script:
#!/bin/bash
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
echo
sleep 1
echo "[+] Running systemctl --user commands as root."
runuser -l user 1 -c 'systemctl --user list-units'
echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null
echo "[+] Done."
When running the script, it looks like it is able to ssh into the system, but who does not show the user logged in, nor does any ps aux output show an ssh session. Note: I commented out the kill line to check whether the process stays around, and I do not see it at all.
How do I make the bash script fork two processes? Process 1's goal is to log in as "user1" and wait. Process 2 then performs commands as root while user1 is logged in.
My goal is to run systemctl --user commands as root via a script. If you're familiar with the systemctl --user domain, there is no way to manage systemctl --user units without the user being logged in via traditional methods (ssh, direct terminal, or GUI). I cannot su - user1 as root either. So I want to force an ssh session as root to the vdns11 user via runuser commands. Once the user is authenticated and shows up via who, I can run systemctl --user commands. How can I keep the ssh session active in my code?
With this additional info, the question essentially boils down to 'How can I start and background an interactive ssh session?'.
You could use script for that: it runs the command inside a pseudo-terminal, so it can be used to trick applications into thinking they are being run interactively:
echo "[+] Starting SSH session in background"
runuser -l user1 -c "script -c 'ssh localhost'" &>/dev/null &
pid=$!
...
echo "[+] Killing active SSH session"
kill ${pid}
Original answer before OP provided additional details (for future reference):
Let's dissect what is going on here.
I assume you start your script as root:
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
So root runs runuser -l user1 -c '...', which itself runs ssh -q localhost 2>/dev/null as user1. All this takes place in the background due to &.
ssh will print Pseudo-terminal will not be allocated because stdin is not a terminal. (hidden due to 2>/dev/null) and immediately exit. That's why you don't see anything when running who or when running ps.
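You can reproduce the same behaviour outside the script with a quick one-liner (illustrative only; a backgrounded command in a script gets its standard input from /dev/null, which the redirection below mimics):
ssh localhost < /dev/null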
Your echo says [+] Becoming user1, which is quite different from what's happening.
sleep 1
The script sleeps for a second. Nothing wrong with that.
echo "[+] Running systemctl --user commands as root."
#runuser -l user 1 -c 'systemctl --user list-units'
# ^ typo!
runuser -l user1 -c 'systemctl --user list-units'
Ignoring the typo, root again runs runuser, which itself runs systemctl --user list-units as user1 this time.
Your echo says [+] Running systemctl --user commands as root., but actually you are running systemctl --user list-units as user1 as explained above.
echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null
This would kill the ssh process that had been started at the beginning of the script, but it already exited, so this does nothing. As a side note, this could be accomplished a lot easier:
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
pid=$!
...
echo "[+] Killing active ssh sessions."
kill $(pgrep -P $pid)
So this should give you a better understanding of what the script actually does, but between the goals you described and the conflicting echoes within the script, it's really hard to figure out where this is supposed to be going.
I am using https://stackoverflow.com/a/42955871/308851 and it works from the command line but not from cron. I even tried running the script with env -i, but it stubbornly works anyway.
#!/bin/bash
filename=$(date '+%Y-%m-%d').gz
docker exec -t elastic_db.1.$(docker service ps -f 'name=elastic_db.1' elastic_db -q --no-trunc | head -n1) mysqldump example |gzip -9 > /container/$filename
docker exec -t elastic_drupal.1.$(docker service ps -f 'name=elastic_drupal.1' elastic_drupal -q --no-trunc |head -n1) rclone --config /etc/rclone.conf move /app/$filename example:example/dump/
This compresses a 0-byte file when run from cron but works just fine otherwise. What am I doing wrong?
Gordon Davisson's comment is correct: changing docker to /usr/bin/docker worked.
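For reference, a sketch of the script with that change applied (assuming docker lives at /usr/bin/docker on this host; setting an explicit PATH= line in the crontab would achieve the same thing):
#!/bin/bash
filename=$(date '+%Y-%m-%d').gz
/usr/bin/docker exec -t elastic_db.1.$(/usr/bin/docker service ps -f 'name=elastic_db.1' elastic_db -q --no-trunc | head -n1) mysqldump example | gzip -9 > /container/$filename
/usr/bin/docker exec -t elastic_drupal.1.$(/usr/bin/docker service ps -f 'name=elastic_drupal.1' elastic_drupal -q --no-trunc | head -n1) rclone --config /etc/rclone.conf move /app/$filename example:example/dump/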
I'm using Docker with Rancher v1.6, setting up a Nextcloud stack.
I would like to use a dedicated container for running cron tasks every 15 minutes.
The "normal" Nextcloud Docker image can simply use the following:
entrypoint: |
  bash -c 'bash -s <<EOF
  trap "break;exit" SIGHUP SIGINT SIGTERM
  while /bin/true; do
    su -s "/bin/bash" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
    echo $$(date) - Running cron finished
    sleep 900
  done
  EOF'
(Pulled from this GitHub post)
However, the Alpine-based image does not have bash, and so it cannot be used.
I found this script in the list of examples:
#!/bin/sh
set -eu
exec busybox crond -f -l 0 -L /dev/stdout
However, I cannot seem to get that working with my docker-compose.yml file.
I don't want to use an external file; I'd rather have the script entirely in the docker-compose.yml file, to make preparation and changes a bit easier.
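For illustration, an sh-only equivalent of the bash entrypoint above might look something like this (an untested sketch; it assumes the Alpine image ships busybox su and has php on the PATH, and it keeps the same cron.php path and 900-second interval):
entrypoint: |
  sh -c 'sh -s <<EOF
  trap "exit" HUP INT TERM
  while true; do
    su -s /bin/sh -c "php -f /var/www/html/cron.php" www-data
    echo "$$(date) - Running cron finished"
    sleep 900
  done
  EOF'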
Thank you!
I need some help writing a script to automate a fairly simple process of running tests in several docker-compose environments on a Windows host.
This is the manual process that I would like to automate:
Open a docker quickstart terminal
Setup 3 identical environments: docker-compose -p test1 up -d && docker-compose -p test2 up -d && docker-compose -p test3 up -d
Open 2 more docker terminals, and then run one of the following on each:
docker-compose -p test1 run app ./node_modules/gulp/bin/gulp.js cuc-reports
docker-compose -p test2 run app ./node_modules/gulp/bin/gulp.js cuc-not-reports1
docker-compose -p test3 run app ./node_modules/gulp/bin/gulp.js cuc-not-reports2
When all tests complete, tear down: docker-compose -p test1 down && docker-compose -p test2 down && docker-compose -p test3 down
I'm stuck pretty much at the beginning. I can open a docker machine shell, but I can't get it to change directories in order to execute step 2. I tried the following:
#!/bin/bash
src=$PWD/../../
cd "C:\Program Files\Docker Toolbox"
"C:\Program Files\Git\bin\bash.exe" --login -i "C:\Program Files\Docker Toolbox\start.sh" cd $src && docker-compose -p test1 up -d && docker-compose -p test2 up -d && docker-compose -p test3 up -d
However, the cd $src is not executed, which causes the subsequent commands to fail.
Trying to generalize the things I think I need in order to run this script, I might summarize as follows:
How can I pass commands to be executed once the docker shell loads (such as "cd ...")?
How can I open multiple independent (docker) shells from the root shell and wait for them to finish executing their commands?
I intended to write the script for git-bash on windows, which is my preference, but suggestions for a windows batch script are also welcome.
Well, it wasn't so hard in the end. It's pretty messy and I'm sure it could be improved, but it does what I wanted (in the end I'm running 2 docker environments, not 3, as it performs better with 4 cores). If anyone is interested in where I got all this weirdness, just ask and I'll cite some sources. Remember this answer is for Windows:
#!/bin/bash
# Move to the project directory (two levels up from this script)
cd `dirname $0`/../../
# Launch each test environment in its own window, run its tests, then tear it down
start bash -c 'docker-compose -p test1 up -d; sleep 3s; docker exec -i $(docker-compose -p test1 ps ros | grep -m 1 ros | cut -d " " -f1 ) ./node_modules/gulp/bin/gulp.js cucumber1; docker-compose -p test1 down; bash'
start bash -c 'docker-compose -p test2 up -d; sleep 3s; docker exec -i $(docker-compose -p test2 ps ros | grep -m 1 ros | cut -d " " -f1 ) ./node_modules/gulp/bin/gulp.js cucumber2; docker-compose -p test2 down; bash'
sleep 5s
# Show live resource usage for all running containers in a third window
start bash -c "docker stats $(docker ps | awk '{if(NR>1) print $NF}')"
I have a script which contains this line:
fgrep -m 1 'PostgreSQL init process complete' <( docker run --name test-postgres-migration \
-a STDOUT -p 5432:5432 postgres:9.4 )
However, even if I change it to:
fgrep -m 1 'PostgreSQL init process complete' <( docker run --name test-postgres-migration \
-a STDOUT -p 5432:5432 postgres:9.4 </dev/null )
or:
fgrep -m 1 'PostgreSQL init process complete' <( docker run --name test-postgres-migration \
-a STDOUT -p 5432:5432 postgres:9.4 </dev/null ) </dev/null
or even when I put the entire line in a separate shell script and wrap it in nohup:
nohup ./boot-container.sh
the docker container still dies if the script is killed (by emacs) while it is on a later line. This happens because the docker client command is still running; even though it has PID 1 as its parent, for some mysterious reason (maybe due to a shared stdin file handle or tty) it gets killed too when the script gets killed, which in turn causes the docker container it started to die. How can I prevent this?
In this situation, even nohup isn't sufficient - it's necessary to use setsid:
fgrep -m 1 'PostgreSQL init process complete' <( setsid docker run --name test-postgres-migration \
-a STDOUT -p 5432:5432 postgres:9.4 )
This solves my problem on Linux. I haven't verified whether the problem occurs on macOS or other Unix systems, or whether this unofficial port of setsid to macOS and other Unixes fixes it.
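setsid runs the docker client in a new session, so it no longer shares the calling script's process group or controlling terminal and does not receive the signals delivered when the script is killed. For the variant where the whole line lives in a separate boot-container.sh (the nohup attempt above), the same change would go inside that script rather than around it; a sketch reusing the names from the question:
#!/bin/bash
# boot-container.sh: start the container with the docker client in its own
# session, then block until the init-complete message appears in its output
fgrep -m 1 'PostgreSQL init process complete' <( setsid docker run --name test-postgres-migration \
    -a STDOUT -p 5432:5432 postgres:9.4 )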