So I have been trying to make an auto-reboot script. Most of it works, but I don't think the if/else statement gets run when I run the script via a cron job.
#!/bin/sh
screen -x modded
sleep 2
screen -S modded -X stuff "say restarting in 1 minute"
screen -S modded -X eval "stuff \015"
# [...]
screen -wipe
sleep 2
screen -ls | awk '/\.modded\t/ {print strtonum($1)}' > pid/kill.pid
sleep 1
PIDFile="/home/Minecraft/direwolf20-server1.12/pid/kill.pid"
File=`stat -c %s pid/kill.pid`
if [ $File -lt 1 ]; then
    rm pid/kill.pid
    sleep 2
    sh ./start
else
    sleep 2
    kill -9 $(<"$PIDFile")
    sleep 2
    rm pid/kill.pid
    sleep 2
    screen -wipe
    sleep 2
    sh ./start
fi
When I run the script myself it works fine.
Two options:
Ensure that the script is marked as executable.
When creating the cron entry, specify the shell just before the script, e.g.:
0 */12 * * * /bin/bash /home/foo/script.sh
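Also note that cron starts jobs from your home directory with a minimal environment, so the relative paths in the script (pid/kill.pid, ./start) resolve differently than when you run it by hand. A hedged variant of the entry that changes into the server directory first (the script filename here is just a placeholder):
0 */12 * * * cd /home/Minecraft/direwolf20-server1.12 && /bin/bash ./reboot.sh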
I have written a shell script that runs some commands, and I added logic to run the script once every 24 hours. But it runs once and then doesn't run again.
The script is as below:
#!/bin/bash
while true; do
    cd /home/ubuntu/;
    DATE=`date '+%Y-%m-%d'`;
    aws s3 cp --recursive "/home/ubuntu/" s3://bucket_name/$DATE/;
    rm -r -f ./*;
    # sleep 24 hours
    sleep $((24 * 60 * 60))
done
Why does it not run once every 24 hours? I do not get any errors when the script runs. The copy takes about 10 minutes.
Good practice is to protect your script against running more than once at a time. That way you can be sure that only one instance is running:
#!/bin/bash
LOCKFILE=/tmp/block_file
if ( set -o noclobber; echo "$$" > "$LOCKFILE") 2> /dev/null
then
    # remove the lock file on interrupt, termination or normal exit
    trap 'rm -f "$LOCKFILE"; exit $?' INT TERM EXIT
    while true; do
        cd /home/ubuntu/;
        DATE=`date '+%Y-%m-%d'`;
        aws s3 cp --recursive "/home/ubuntu/" s3://bucket_name/$DATE/;
        rm -r -f ./*;
        # sleep 24 hours
        sleep $((24 * 60 * 60))
    done
    # not normally reached (the loop above never ends); the trap handles cleanup
    rm -f "$LOCKFILE"
    trap - INT TERM EXIT
else
    echo "Warning: script is already running!"
    echo "Blocked by PID $(cat $LOCKFILE)."
    exit
fi
The /tmp/block_file created by the script protects it against being run more than once. To stop the script, press Ctrl+C or run kill pidofyourscript in a terminal; the trap then removes /tmp/block_file.
You can also run the script immune to hangups. nohup is a UNIX utility that runs the specified command ignoring hangup signals (SIGHUP), so the script keeps working in the background even after the user logs out:
nohup ./yourscript.sh
The output of the script goes to the file nohup.out. To run it in the background (the preferred way):
nohup ./yourscript.sh &
Your script is probably killed due to inactivity, or when you exit the shell. The proper way to do this is to use cron, as @Christian.K mentioned. See https://help.ubuntu.com/community/CronHowto
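As a rough sketch of that approach (the schedule and script path here are just placeholders), the while loop and the sleep are dropped and cron triggers the work once a day:
0 0 * * * /bin/bash /home/ubuntu/s3_backup.sh >> /home/ubuntu/s3_backup.log 2>&1
where s3_backup.sh contains only the cd, date, aws s3 cp and rm steps from the original script.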
I have been given a C shell script that launches 800 individual qsubs for a sample. I need to run this script on more than 500 samples (listed in samples.txt). To automate the process, I thought about running the script (named SrchDriver) using the following bash shell script:
#!/bin/sh
for item in $(cat samples.txt)
do
    (cd dir_"$item"/MAPGAPS && SrchDriver "$item"_Out 3)
done
This script would launch SrchDriver for all samples one right after another, which would result in too many jobs on the server at one time. I would like to run only one sample at a time, waiting for all qsubs to finish for a particular sample.
What is the best way to check for running/waiting jobs for a sample and hold off launching SrchDriver for additional samples until all jobs are finished for the current sample?
I was thinking of first waiting 30 seconds and then checking the status of the qsubs (the jobs are named mapgaps). Next, I wanted to use a while loop to check the status every 30 seconds. Once the status is no longer 0, proceed to the next sample. Would this be correct?
sleep 30
qstat | grep mapgaps &> /dev/null
while [ $? -eq 0 ]
do
    sleep 30
    qstat | grep mapgaps &> /dev/null
done
If correct, how would I combine it with my for loop? Would the code below be correct?
#!/bin/sh
for item in $(cat samples.txt)
do
    (cd dir_"$item"/MAPGAPS && SrchDriver "$item"_Out 3)
    sleep 30
    qstat | grep mapgaps &> /dev/null
    status=$?
    while [ $status = 0 ]
    do
        sleep 30
        qstat | grep mapgaps &> /dev/null
        status=$?
    done
done
Thanks in advance for help. Please let me know if more information is needed.
Your script should work as is, indeed. The logic is sound and the syntax is correct.
A small improvement: the while statement can take the return status of a command directly, without using $?, so you could write your script like this:
#!/bin/sh
for item in $(cat samples.txt)
do
    (cd dir_"$item"/MAPGAPS && SrchDriver "$item"_Out 3)
    sleep 30
    while qstat | grep mapgaps &> /dev/null
    do
        sleep 30
    done
done
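One caveat, assuming /bin/sh on the cluster is not actually bash: the &> redirection is a bash extension, so under a strict POSIX shell the inner loop is safer written with grep -q (or an explicit 2>&1 redirection), for example:
while qstat | grep -q mapgaps
do
    sleep 30
done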
I'm developing a simple screenshot spyware which takes a screenshot every 5 seconds from the start of the script. I want it to keep running after I close the terminal, but even after nohupping the script and launching it with '&', it exits when the terminal is closed.
screenshotScriptWOSleep.sh
#!/bin/bash
echo "Starting Screenshot Capture Script."
echo "Process ID: $$"
directory=$(date "+%Y-%m-%d-%H:%M")
mkdir ${directory}
cd ${directory}
shotName=$(date "+%s")
while true
do
    if [ $( date "+%Y-%m-%d-%H:%M" ) != ${directory} ]
    then
        directory=$(date "+%Y-%m-%d-%H:%M")
        cd ..
        mkdir ${directory}
        cd ${directory}
    fi
    if [ $(( ${shotName} + 5 )) -eq $(date "+%s" ) ]
    then
        shotName=$(date "+%s" )
        screencapture -x $(date "+%Y-%m-%d-%H:%M:%S" )
    fi
done
I ran the script with,
nohup ./screenshotScriptWOSleep.sh &
On closing the terminal window, it warns with,
"Closing this tab will terminate the running processes: bash, date."
I have read that nohup applies to the child processes too, but I'm stuck here. Thanks.
Either you're doing something really weird or that's referring to other processes.
nohup bash -c 'sleep 500' &
Shut down that terminal; open another one:
ps aux | grep sleep
409370294 26120 1 0 2:43AM ?? 0:00.01 sleep 500
409370294 26330 26191 0 2:45AM ttys005 0:00.00 grep -i sleep
As you can see, sleep is still running.
Just ignore that warning; your process is not terminated. Verify with:
watch wc -l nohup.out
The following script works as expected when executed from an AppleScript do shell script command.
#!/bin/sh
sleep 10 &
#echo "hello world" > /tmp/apipe &
cpid=$!
sleep 1
if ps -ef | grep $cpid | grep sleep | grep -qv grep ; then
    echo "killing blocking cmd..."
    kill -KILL $cpid
    # non zero status to inform launch script of problem...
    exit 1
fi
But if the sleep command (line 2) is swapped for the echo command (line 3), together with the corresponding change to the if statement, the script blocks when run from AppleScript but runs fine from the terminal command line.
Any ideas?
EDIT: I should have mentioned that the script works properly when a consumer/reader is connected to the pipe. It only blocks when nothing is reading from the pipe...
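For reference, the working case can be reproduced by attaching a reader before running the script, assuming /tmp/apipe was created as a FIFO:
mkfifo /tmp/apipe   # only if the pipe does not already exist
cat /tmp/apipe &    # a reader; the echo in the script can then complete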
OK, the following will do the trick. It basically kills the job using its job ID. Since there is only one, it's the current job %%.
I was lucky that I came across this answer or it would have driven me crazy :)
#!/bin/sh
echo $1 > $2 &
sleep 1
# Following is necessary. Seems to need it or
# job will not complete! Also seen at
# https://stackoverflow.com/a/10736613/348694
echo "Checking for running jobs..."
jobs
kill %% >/dev/null 2>&1
if [ $? -eq 0 ] ; then
    echo "Taking too long. Killed..."
    exit 1
fi
exit 0
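Presumably the script is invoked with the text to write and the pipe path as its two arguments, for example (the script filename here is hypothetical):
sh ./writepipe.sh "hello world" /tmp/apipe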
I want to get the output of a command/script into a variable, but the process is triggered to run in the background. I tried the approach below; a few servers ran it correctly and I got the response, but on a few others i_res comes back empty.
I am running the command in the background because it can get into a hung state, and I don't want it to hang the parent script.
Hope I will get a response soon.
#!/bin/ksh
x_cmd="ls -l"
i_res=$(eval $x_cmd 2>&1 &)
k_pid=$(pgrep -P $$ | head -1)
sleep 5
c_errm="$(kill -0 $k_pid 2>&1 )"; c_prs=$?
if [ $c_prs -eq 0 ]; then
    c_errm=$(kill -9 $k_pid)
fi
wait $k_pid
echo "Result : $i_res"
Try something like this:
#!/bin/ksh
pid=$$ # parent process
(sleep 5 && kill $pid) & # this will sleep and wake up after 5 seconds
# and kill off the parent.
termpid=$! # remember the timebomb pid
# put the command that can hang here
result=$( ls -l )
# if we got here in less than five seconds:
kill $termpid # kill off the timebomb
echo "$result" # display result
exit 0
Add whatever messages you need to the code. On average this will complete much faster than always having a sleep statement. You can see what it does by making the command sleep 6 instead of ls -l.
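In other words, a quick way to watch the timebomb fire is to swap in a command that outlasts it:
# put the command that can hang here
result=$( sleep 6 )   # runs longer than the 5-second timebomb, so the parent gets killed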