I have 3 commands that I am trying to run as a cron job when the system starts.
# Sleep at startup
sleep 2m
#command num 1:
./trace.out
sleep 5
#Command num 2:
java -jar file.jar
sleep 5
#Command num 3:
sh ./script.sh
Is there any way to make this script more efficient, e.g. using a loop, and some way to make sure every script is running before executing the next one?
I would use && between the commands: it executes each command only if the previous one succeeded. For example:
# Sleep at startup
sleep 2m
./trace.out && java -jar file.jar && sh ./script.sh
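If you specifically want the loop form the question asks about, a minimal sketch could look like this (here `echo` commands stand in for ./trace.out, java -jar file.jar, and sh ./script.sh so the snippet is runnable as-is):

```shell
#!/bin/bash
# Run each startup command in turn, stopping at the first failure.
# The echo commands are stand-ins for the real commands in the question.
cmds=("echo trace.out" "echo file.jar" "echo script.sh")

for cmd in "${cmds[@]}"; do
    if ! $cmd; then
        echo "startup command failed: $cmd" >&2
        exit 1
    fi
    sleep 1   # shortened from the question's 5-second gap
done
echo "all startup commands succeeded"
```

Note that the unquoted $cmd relies on word splitting, which breaks for arguments containing spaces; for real commands, one function per step is safer.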
I'm trying to run a bash script at boot time from /etc/rc.local on a headless Raspberry Pi 4 (Raspbian buster lite - Debian based). I've done something similar on a Pi 3 with success so I'm confused about why the Pi 4 would misbehave - or behave differently.
The script executed from /etc/rc.local fires but appears to just exit at seemingly random intervals with no indication as to why it's being terminated.
To test it, I dumbed down the script and just stuck the following into a test script called /home/pi/test.sh:
#!/bin/bash
exec 2> /tmp/output # send stderr from rc.local to a log file
exec 1>&2 # send stdout to the same log file
set -x # tell bash to display commands before execution
while true
do
echo 'Still alive'
sleep .1
done
I then call it from /etc/rc.local just before the exit line:
#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel
#
# Make sure that the script will "exit 0" on success or any other
# value on error.
/home/pi/test.sh
echo $? >/tmp/exiterr #output exit code to /tmp/exiterr
exit 0
The contents of /tmp/output:
+ true
+ echo 'Still alive'
Still alive
+ sleep .1
+ true
+ echo 'Still alive'
Still alive
+ sleep .1
and /tmp/exiterr shows
0
If I reduce the sleep period, /tmp/output is longer (over 6000 lines without the sleep).
Any ideas why the script is exiting shortly after starting?
I'm trying to record system metrics using top while other processes are running. I'm hoping to chain things together, like so:
#!/bin/bash
# Redirect `top` output
top -U $USER > file.txt &&
# Then run a process that just sleeps for 4 seconds
python3 -c 'import time;time.sleep(4)' &&
# Then run another process that does the same
python3 -c 'import time;time.sleep(4)'
When I run this, however, the latter two (Python) processes never complete. My goal is to start recording from top before any of the other processes start, then once those processes complete, stop recording from top.
#run the first command in background
top -U $USER > file.txt &
# Then run a process that just sleeps for 4 seconds
python3 -c 'import time;time.sleep(4)' &&
# Then run another process that does the same
python3 -c 'import time;time.sleep(4)' &&
# kill the command in background
kill %1
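One caveat: in a non-interactive script the %1 jobspec depends on the shell's job table, and procps top usually needs -b (batch mode) when its output is redirected to a file. A sketch using $! instead, with `sleep` standing in for `top` so it runs anywhere:

```shell
#!/bin/bash
# Capture the background recorder's PID with $! and kill it by PID.
# `sleep 100` stands in for: top -b -U "$USER" > file.txt
sleep 100 > file.txt &
rec_pid=$!
sleep 1                        # stand-in for the first python3 workload
sleep 1                        # stand-in for the second workload
kill "$rec_pid"                # stop the recorder once the workloads finish
wait "$rec_pid" 2>/dev/null    # reap it; wait returns the kill status
echo "recorder stopped"
```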
So I have been trying to make an auto-reboot script. Most of it works, but I don't think my if/else statement gets run when the script is started from a cron job.
#!/bin/sh
screen -x modded
sleep 2
screen -S modded -X stuff "say restarting in 1 minute"
screen -S modded -X eval "stuff \015"
# [...]
screen -wipe
sleep 2
screen -ls | awk '/\.modded\t/ {print strtonum($1)}' > pid/kill.pid
sleep 1
PIDFile="/home/Minecraft/direwolf20-server1.12/pid/kill.pid"
File=$(stat -c %s "$PIDFile")
if [ "$File" -lt 1 ]; then
rm pid/kill.pid
sleep 2
sh ./start
else
sleep 2
kill -9 $(<"$PIDFile")
sleep 2
rm pid/kill.pid
sleep 2
screen -wipe
sleep 2
sh ./start
fi
When I run the script myself, it works fine.
Two options:
Ensure that the script is marked as executable.
When creating the cron entry, specify the shell just before the script.
eg.
0 */12 * * * /bin/bash /home/foo/script.sh
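Cron also runs jobs with a minimal environment and from a different working directory than an interactive shell, which commonly breaks scripts that use relative paths (like pid/kill.pid above). A hedged sketch of an entry working around both, assuming the server directory from the question and a hypothetical script name restart.sh:

```shell
# Change into the server directory first so relative paths resolve,
# and log all output so failures under cron are visible.
0 */12 * * * /bin/bash -c 'cd /home/Minecraft/direwolf20-server1.12 && ./restart.sh' >>/tmp/restart.log 2>&1
```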
I want to distribute the work from a master server to multiple worker servers using batches.
Ideally I would have a tasks.txt file with the list of tasks to execute
cmd args 1
cmd args 2
cmd args 3
cmd args 4
cmd args 5
cmd args 6
cmd args 7
...
cmd args n
Each worker server will connect using ssh, read the file, and mark each line as in progress or done:
#cmd args 1 #worker1 - done
#cmd args 2 #worker2 - in progress
#cmd args 3 #worker3 - in progress
#cmd args 4 #worker1 - in progress
cmd args 5
cmd args 6
cmd args 7
...
cmd args n
I know how to make the ssh connection, read the file, and execute remotely, but I don't know how to make the read and write an atomic operation (so that no two servers start the same task), nor how to update the line.
I would like each worker to go to the list of tasks and lock the next available task, rather than having the server actively command the workers, as I will have a flexible number of worker clones that I will start or stop according to how fast I need the tasks to complete.
UPDATE:
My idea for the worker script would be:
#!/bin/bash
taskCmd=""
taskLine=0
masterSSH="ssh usr@masterhost"
tasksFile="/path/to/tasks.txt"
function getTask(){
while [[ $taskCmd == "" ]]
do
sleep 1;
taskCmd_and_taskLine=$($masterSSH "#read_and_lock_next_available_line $tasksFile;")
taskCmd=${taskCmd_and_taskLine[0]}
taskLine=${taskCmd_and_taskLine[1]}
done
}
function updateTask(){
message=$1
$masterSSH "#update_currentTask $tasksFile $taskLine $message;"
}
function doTask(){
return $taskCmd;
}
while [[ 1 -eq 1 ]]
do
getTask
updateTask "in progress"
doTask
taskErrCode=$?
if [[ $taskErrCode -eq 0 ]]
then
updateTask "done, finished successfully"
else
updateTask "done, error $taskErrCode"
fi
taskCmd="";
taskLine=0;
done
You can use flock to coordinate concurrent access to the file:
exec 200>>/some/any/file ## create a file descriptor
flock -w 30 200 ## acquire the lock on fd 200, timeout of 30 sec.
You can point the file descriptor to your tasks list or any other file, but of course it must be the same file for all workers in order for flock to work. The lock will be removed as soon as the process that created it finishes or fails. You can also remove the lock yourself when you don't need it anymore:
flock -u 200
A usage sample:
ssh user@x.x.x.x '
set -e
exec 200>>f
echo locking...
flock -w 10 200
echo working...
sleep 5
'
set -e fails the script if any step fails. Play with the sleep time and execute this script in parallel. Just one sleep will execute at a time.
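The "#read_and_lock_next_available_line" placeholder from the question could be sketched on top of this flock pattern like so (hypothetical and GNU grep/sed specific; it claims the first unmarked line by commenting it out):

```shell
#!/bin/bash
# Demo of atomically claiming the next task line under flock.
tasksFile="tasks.txt"
printf 'cmd args 1\ncmd args 2\n' > "$tasksFile"   # demo data

claim_next_task() {
    (
        flock -w 10 200 || exit 1                        # serialize all access
        line_no=$(grep -n -m1 -v '^#' "$tasksFile" | cut -d: -f1)
        [ -n "$line_no" ] || exit 1                      # no unclaimed tasks left
        task=$(sed -n "${line_no}p" "$tasksFile")
        sed -i "${line_no}s/^/#/" "$tasksFile"           # mark the line as claimed
        printf '%s %s\n' "$line_no" "$task"
    ) 200>>"$tasksFile.lock"
}

claim_next_task    # → 1 cmd args 1
claim_next_task    # → 2 cmd args 2
```

Each worker would run something like claim_next_task over ssh; since every claim happens under the same lock file, no two workers can grab the same line.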
Check if you are reinventing GNU Parallel:
parallel -S worker1 -S worker2 command ::: arg1 arg2 arg3
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process when one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
Try to implement something like:
while read -r line; do
    echo "$line"
    # check if the line starts with the # char; if not, execute the ssh, else nothing to do
    checkAlreadyDone=$(printf '%s' "$line" | grep "^#")
    if [ -z "${checkAlreadyDone}" ]; then
        <insert here the command to execute ssh call>
        <here, if everything has been executed without issue, you should
        add a command to update the file taskList.txt;
        one option could be to insert a sed command but it should be tested>
    else
        echo "nothing to do for $line"
    fi
done < taskList.txt
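The sed command mentioned in the placeholder could look something like this (a hypothetical sketch; it assumes the task line contains no sed metacharacters):

```shell
# Mark a finished task by prefixing it with '#' and a status note.
tasksFile="taskList.txt"
printf 'cmd args 1\ncmd args 2\n' > "$tasksFile"   # demo data
line="cmd args 1"
worker="worker1"
sed -i "s|^${line}\$|#${line} #${worker} - done|" "$tasksFile"
head -n 1 "$tasksFile"   # → #cmd args 1 #worker1 - done
```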
Regards
Claudio
I think I have successfully implemented one: https://github.com/guo-yong-zhi/DistributedTaskQueue
It is mainly based on bash, ssh and flock, and python3 is required for string processing.
I have a script that uses ssh to login to a remote machine, cd to a particular directory, and then start a daemon. The original script looks like this:
ssh server "cd /tmp/path ; nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
This script appears to work fine. However, it is not robust to the case when the user enters the wrong path so the cd fails. Because of the ;, this command will try to run the nohup command even if the cd fails.
The obvious fix doesn't work:
ssh server "cd /tmp/path && nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
that is, the SSH command does not return until the server is stopped. Putting nohup in front of the cd instead of in front of the java didn't work.
Can anyone help me fix this? Can you explain why this solution doesn't work? Thanks!
Edit: cbuckley suggests using sh -c, from which I derived:
ssh server "nohup sh -c 'cd /tmp/path && java server 0</dev/null 1>master_stdout 2>master_stderr' 2>/dev/null 1>/dev/null &"
However, now the exit code is always 0 when the cd fails; whereas if I do ssh server cd /failed/path then I get a real exit code. Suggestions?
See Bash's Operator Precedence.
The & is being attached to the whole statement because && binds more tightly than &. You don't need ssh to verify this. Just run this in your shell:
$ sleep 100 && echo yay &
[1] 19934
If the & were only attached to the echo yay, then your shell would sleep for 100 seconds and then report the background job. However, the entire sleep 100 && echo yay is backgrounded and you're given the job notification immediately. Running jobs will show it hanging out:
$ sleep 100 && echo yay &
[1] 20124
$ jobs
[1]+ Running sleep 100 && echo yay &
You can use parentheses to create a subshell around echo yay &, giving you what you'd expect:
sleep 100 && ( echo yay & )
This would be similar to using bash -c to run echo yay &:
sleep 100 && bash -c "echo yay &"
Tossing these into ssh, we get:
# using parenthesis...
$ ssh localhost "cd / && (nohup sleep 100 >/dev/null </dev/null &)"
$ ps -ef | grep sleep
me 20136 1 0 16:48 ? 00:00:00 sleep 100
# and using `bash -c`
$ ssh localhost "cd / && bash -c 'nohup sleep 100 >/dev/null </dev/null &'"
$ ps -ef | grep sleep
me 20145 1 0 16:48 ? 00:00:00 sleep 100
Applying this to your command, we get:
ssh server "cd /tmp/path && (nohup java server 0</dev/null 1>server_stdout 2>server_stderr &)"
or:
ssh server "cd /tmp/path && bash -c 'nohup java server 0</dev/null 1>server_stdout 2>server_stderr &'"
Also, with regard to your comment on the post,
Right, sh -c always returns 0. E.g., sh -c exit 1 has error code 0
this is incorrect. Directly from the manpage:
Bash's exit status is the exit status of the last command executed in
the script. If no commands are executed, the exit status is 0.
Indeed:
$ bash -c "true ; exit 1"
$ echo $?
1
$ bash -c "false ; exit 22"
$ echo $?
22
ssh server "test -d /tmp/path" && ssh server "nohup ... &"
Answer roundup:
Bad: Using sh -c to wrap the entire nohup command doesn't work for my purposes because it doesn't return error codes. (@cbuckley)
Okay: ssh <server> <cmd1> && ssh <server> <cmd2> works but is much slower (@joachim-nilsson)
Good: Create a shell script on <server> that runs the commands in succession and returns the correct error code.
The last is what I ended up using. I'd still be interested in learning why the original use-case doesn't work, if someone who understands shell internals can explain it to me!
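For reference, the wrapper-script approach can be sketched as follows (hypothetical names; `sleep 100` stands in for `java server ...`, and the demo directory is created locally so the snippet runs standalone):

```shell
#!/bin/bash
# start_server.sh - deployed to the server and invoked as:
#   ssh server '/tmp/path/start_server.sh'
# Because cd runs inside the script, a bad path yields a real non-zero exit code.
demo_dir="/tmp/path_demo"
mkdir -p "$demo_dir"     # demo only; on the real server the directory should already exist
cd "$demo_dir" || exit 1
nohup sleep 100 </dev/null >server_stdout 2>server_stderr &
echo "daemon started, pid $!"
```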