bash sh script with nohup not executing to completion? - bash

I have a problem, please look at this code (j_restart.sh file):
#!/bin/bash
printf "Killing j-Chat server script... "
nyret=`pkill -f index.php`
printf "OK !\n"
printf "Wait killing instances."
while : ; do
    nyret=`netstat -ap | grep :8008 | wc -l`
    if [ "$nyret" == "0" ]; then
        printf "OK !\n"
        break
    fi
    printf "."
    sleep 3
done
echo "Running j-Chat server script... "
nyret=`nohup php -q /home/jChat/public_html/index.php < /dev/null &`
echo "OK !"
echo "j-Chat Server Working ON !";
Output over SSH:
root@server [~]# sh /home/jChat/public_html/j_restart.sh
Killing jChat Server Script... OK !
Wait killing instances................ OK !
Running jChat Server Script...
nohup: redirecting stderr to stdout
(it hangs here and never reaches the next line)
so I press Ctrl+C manually:
^C
root@server [~]#
How can I fix this? Why doesn't the script run to completion? It stops and waits at the nohup line instead of continuing with the lines that follow. Help me please.

Here's a simpler example reproducing your problem:
nyret=`nohup sleep 30 < /dev/null &`
echo "This doesn't run (until sleep exits)"
The problem is that the shell is waiting to capture all of the command's output. The command does run in the background, but it inherits the write end of the capture pipe and keeps it open, so the shell keeps waiting.
The solution is to not capture the output, because you don't use it anyway:
nohup sleep 30 < /dev/null &
echo "This runs fine"

Related

Execute process in background, without printing "Done", and get PID

This seems like a pretty trivial thing to do, but I'm very stuck.
To execute something in the background, use &:
>>> sleep 5 &
[1] 21763
>>> #hit enter
[1]+ Done sleep 5
But having a bashrc-sourced background script output job information is pretty frustrating, so you can do this to fix it:
>>> (sleep 5 &)
OK, so now I want to get the PID of sleep for wait or kill. Unfortunately it's running in a subshell, so the typical $! method doesn't work:
>>> echo $!
21763
>>> (sleep 5 &)
>>> echo $!
21763 #hasn't changed
So I thought, maybe I could get the subshell to print its PID in this way:
>>> sleep 5 & echo $!
[1] 21803 #annoying job-start message (stderr)
21803 #from the echo
But now when I throw that in the subshell no matter how I try to capture stdout of the subshell, it appears to block until sleep has finished.
>>> pid=$(sleep 5 & echo $!)
How can I run something in the background, get its PID and stop it from printing job information and "Done"?
Solution A
When launching the process, redirect the shell's stderr to /dev/null for that invocation only. We do this by first duplicating fd 2, so the process itself can still use the original stderr through the duplicate, and we do it all inside a block to keep the redirection temporary:
{ sleep 5 2>&3 & pid=$!; } 3>&2 2>/dev/null
Now, to prevent the "Done" message from being shown later, we remove the job from the job table with the disown command:
{ sleep 5 2>&3 & disown; pid=$!; } 3>&2 2>/dev/null
disown isn't necessary if job control is not enabled; job control can be disabled with set +m or shopt -u -o monitor.
Solution B
We can also use command substitution to start the process. The only problem is that the background process stays hooked to the pipe that $() creates to read stdout, but we can fix this by duplicating the original stdout beforehand and using that file descriptor for the process:
{ pid=$( sleep 200s >&3 & echo $! ); } 3>&1
It may not be necessary if we redirect the process' output somewhere like /dev/null:
pid=$( sleep 200s >/dev/null & echo $! )
Similarly with process substitution:
{ read pid < <(sleep 200s >&3 & echo $!); } 3>&1
Some may say that the redirection is not necessary with process substitution, but the problem is that a process which keeps writing to that stdout will die quickly once the reader is gone. For example:
$ function x { for A in {1..100}; do echo "$A"; sleep 1s; done }
$ read pid < <(x & echo $!)
$ kill -s 0 "$pid" &>/dev/null && echo "Process active." || echo "Process died."
Process died.
$ read pid < <(x > /dev/null & echo $!)
$ kill -s 0 "$pid" &>/dev/null && echo "Process active." || echo "Process died."
Process active.
Optionally, you can create a permanent duplicate fd with exec 3>&1, so that pid=$( sleep 200s >&3 & echo $! ) works on its own on later lines.
You can use the read builtin to capture the output:
read -r pid < <(sleep 10 & echo $!)
Then:
ps -p $pid
PID TTY TIME CMD
78541 ttys001 0:00.00 sleep 10
set +m disables monitor mode in bash; in other words, it gets rid of the annoying Done message.
To enable it again, use set -m.
For example:
$ set +m
$ (sleep 5; echo some) &
[1] 23545 #still prints the job number
#after 5 secs
some
$ #no Done message...
Try this (the space after $( keeps it from being parsed as arithmetic expansion):
pid=$( (sleep 5 & echo $!) | sed 1q )
I found a nice way that needs no subshell and keeps the parent-child relationship.
Both the [1] 21763 and [1]+ Done sleep 5 messages go to stderr, i.e. fd 2,
so we can temporarily redirect fd 2 to /dev/null:
exec 7>&2 2>/dev/null   # back up fd 2 as fd 7, then redirect fd 2 to /dev/null
sleep 5 &               # start the background job: no "[1] ..." message is printed
pid=$!
wait                    # ...and no "Done" message either
exec 2>&7 7>&-          # restore fd 2 from fd 7 and close fd 7
See: Using exec

Timeout for grep in bash script

I need to grep for a specific string within a specific amount of time on an input:
trap "kill 0" EXIT SIGINT SIGTERM
RESULT=$(adb logcat MyTag:V *:S | grep -m 1 "Hello World") &
sleep 10
if [ "$RESULT" = "" ]; then
echo "Timeout!"
else
echo "found"
fi
With the trap, the subshell gets killed correctly, but grep no longer does anything; adb logcat is the only process running in the subshell while the script executes.
You could open a file descriptor as input with process substitution and read the result from it after n seconds:
{
sleep 10
IFS= read -rd '' -u 4 RESULT
if [ "$RESULT" = "" ]; then
echo "Timeout!"
else
echo "found"
fi
} 4< <(adb logcat MyTag:V *:S | grep -m 1 "Hello World")
You could also use exec to open the file descriptor, so it isn't confined to the {} block.
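For example, a rough sketch of that exec variant (same command as above; fd 4 is an arbitrary choice):
exec 4< <(adb logcat MyTag:V *:S | grep -m 1 "Hello World")   # open fd 4 for the rest of the script
sleep 10
IFS= read -rd '' -u 4 RESULT    # same read as in the block above
exec 4<&-                       # close fd 4 when finished
if [ "$RESULT" = "" ]; then
    echo "Timeout!"
else
    echo "found"
fi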
Although I still wonder why you place the command in the background and use sleep to wait for it. The assignment runs in a subshell, so the value saved in RESULT is always lost:
RESULT=$(adb logcat MyTag:V *:S | grep -m 1 "Hello World") &
sleep 10
You could simply run it in the foreground instead:
RESULT=$(adb logcat MyTag:V *:S | grep -m 1 "Hello World")
Could you kill the subshell using $! (the PID of the last process placed in the background)?
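For instance, something along these lines (the temp file is just a placeholder; note that $! here is the PID of grep, the last command of the background pipeline, and adb logcat itself only dies on its next write after the pipe closes):
adb logcat MyTag:V *:S | grep -m 1 "Hello World" > /tmp/grep_result &
pid=$!
sleep 10
kill "$pid" 2>/dev/null          # stop grep if it never matched
RESULT=$(cat /tmp/grep_result)
if [ "$RESULT" = "" ]; then
    echo "Timeout!"
else
    echo "found"
fi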

Customized progress message for tasks in bash script

I'm currently writing a bash script to do tasks automatically. I want it to display a progress message while it is doing a task.
For example:
user@ubuntu:~$ Configure something
->
Configure something .
->
Configure something ..
->
Configure something ...
->
Configure something ... done
All the progress messages should appear on the same line.
Below is my workaround so far:
echo -n "Configure something "
exec "configure something 2>&1 /dev/null"
//pseudo code for progress message
echo -n "." and sleep 1 if the previous exec of configure something not done
echo " done" if exec of the command finished successfully
echo " failed" otherwise
Will exec wait for the command to finish and then continue with the later script lines?
If so, how can I echo messages while the exec of "configure something" is taking place?
How do I know when exec has finished the previous command and whether it returned true? Using $??
Just to put the editorial hat on: what if something goes wrong? How are you, or a user of your script, going to know what went wrong? This is probably not the answer you're looking for, but having your script execute each build step individually may turn out to be better overall, especially for troubleshooting. Why not define a function to validate your build steps:
function validateCmd()
{
    CODE=$1
    COMMAND=$2
    MODULE=$3
    if [ ${CODE} -ne 0 ]; then
        echo "ERROR Executing Command: \"${COMMAND}\" in Module: ${MODULE}"
        echo "Exiting."
        exit 1;
    fi
}
./configure
validateCmd $? "./configure" "Configuration of something"
Anyway, yes, as you probably noticed above, use $? to determine the result of the last command. For example:
rm -rf ${TMP_DIR}
if [ $? -ne 0 ]; then
echo "ERROR Removing directory: ${TMP_DIR}"
exit 1;
fi
To answer your first question, you can use:
echo -ne "\b"
To delete a character on the same line. So to count to ten on one line, you can do something like:
for i in $(seq -w 1 10); do
echo -en "\b\b${i}"
sleep .25
done
echo
The trick with that is you'll have to know how much to delete, but I'm sure you can figure that out.
You cannot call exec like that; exec never returns, and the lines after an exec will not execute. The standard way to print progress updates on a single line is to simply use \r instead of \n at the end of each line. For example:
#!/bin/bash
i=0
sleep 5 &    # Start some command
pid=$!       # Save the pid of the command
while sleep 1; do    # Produce progress reports
    printf '\rcontinuing in %d seconds...' $(( 5 - ++i ))
    test $i -eq 5 && break
done
if wait $pid; then echo done; else echo failed; fi
Here's another example:
#!/bin/bash
execute() {
    eval "$@" &    # Execute the command
    pid=$!
    # Invoke a shell to print status. If you just invoke
    # the while loop directly, killing it will generate a
    # notification. By trapping SIGTERM, we suppress the notice.
    sh -c 'trap exit SIGTERM
        while printf "\r%3d:%s..." $((++i)) "$*"; do sleep 1
        done' 0 "$@" &
    last_report=$!
    if wait $pid; then echo done; else echo failed; fi
    kill $last_report
}
execute sleep 3
execute sleep 2 \| false # Execute a command that will fail
execute sleep 1
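Not from the answers above, just a sketch that ties the same background-and-wait idea back to the dotted output asked for in the question; ./configure stands in for whatever "configure something" really is:
echo -n "Configure something "
./configure > /dev/null 2>&1 &          # run the real task in the background
pid=$!
while kill -0 "$pid" 2>/dev/null; do    # poll: is the task still running?
    echo -n "."
    sleep 1
done
if wait "$pid"; then echo " done"; else echo " failed"; fi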

Loop shell script until successful log message

I am trying to get a shell script to recognize when an app instance has come up. That way it can continue issuing commands.
I've been thinking it would be something like this:
#!/bin/bash
startApp.sh
while [ `tail -f server.log` -ne 'regex line indicating success' ]
do
    sleep 5
done
echo "App up"
But, even if this worked, it wouldn't address some concerns:
What if the app doesn't come up, how long will it wait
What if there is an error when bringing the app up
How can I capture the log line and echo it
Am I close, or is there a better way? I imagine this is something that other admins have had to overcome.
EDIT:
I found this on Super User:
https://superuser.com/questions/270529/monitoring-a-file-until-a-string-is-found
tail -f logfile.log | while read LOGLINE
do
[[ "${LOGLINE}" == *"Server Started"* ]] && pkill -P $$ tail
done
My only problem with this is that it might never exit. Is there a way to add in a maximum time?
OK, the first answer was close, but it didn't account for everything I thought could happen.
I adapted the code from this link:
Ending tail -f started in a shell script
Here's what I came up with:
#!/bin/bash
instanceDir="/usr/username/server.name"
serverLogFile="$instanceDir/server/app/log/server.log"

function stopServer() {
    touch ${serverLogFile}
    # 3 minute timeout.
    sleep 180 &
    local timerPid=$!
    tail -n0 -F --pid=${timerPid} ${serverLogFile} | while read line
    do
        if echo ${line} | grep -q "Shutdown complete"; then
            echo 'Server Stopped'
            # stop the timer..
            kill ${timerPid} > /dev/null 2>&1
        fi
    done &
    echo "Stopping Server."
    $instanceDir/bin/stopserver.sh > /dev/null 2>&1
    # wait for the timer to expire (or be killed)
    wait %sleep
}

function startServer() {
    touch ${serverLogFile}
    # 3 minute timeout.
    sleep 180 &
    local timerPid=$!
    tail -n0 -F --pid=${timerPid} ${serverLogFile} | while read line
    do
        if echo ${line} | grep -q "server start complete"; then
            echo 'Server Started'
            # stop the timer..
            kill ${timerPid} > /dev/null 2>&1
        fi
    done &
    echo "Starting Server."
    $instanceDir/bin/startserver.sh > /dev/null 2>&1 &
    # wait for the timer to expire (or be killed)
    wait %sleep
}

stopServer
startServer
Well, tail -f won't ever exit, so that's not what you want.
numLines=10
timeToSleep=5
until tail -n $numLines server.log | grep -q "$serverStartedPattern"; do
sleep $timeToSleep
done
Be sure that $numLines is at least as large as the number of lines that might be written during one $timeToSleep interval once the server has come up, so the pattern can't scroll out of the window between checks.
This will continue forever; if you want to only allow so much time, you could put a cap on the number of loop iterations with something like this:
let maxLoops=60 numLines=10 timeToSleep=5 success=0
for (( try=0; try < maxLoops; ++try )); do
    if tail -n $numLines server.log | grep -q "$serverStartedPattern"; then
        echo "Server started!"
        success=1
        break
    fi
    sleep $timeToSleep
done
if (( success )); then
    echo "Server started!"
else
    echo "Server never started!"
fi
exit $(( 1-success ))
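A variant sketch that avoids the "scrolled out of the window" concern by grepping the whole log each time (fine unless the log is huge), reusing $serverStartedPattern and $timeToSleep from above and bounding the wait with bash's SECONDS counter:
deadline=$(( SECONDS + 300 ))    # give up after about 5 minutes
until grep -q "$serverStartedPattern" server.log; do
    if (( SECONDS >= deadline )); then
        echo "Server never started!" >&2
        exit 1
    fi
    sleep $timeToSleep
done
echo "Server started!"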

Determining if process is running using pgrep

I have a script that I only want to be running one time. If the script gets called a second time I'm having it check to see if a lockfile exists. If the lockfile exists then I want to see if the process is actually running.
I've been messing around with pgrep but am not getting the expected results:
#!/bin/bash
COUNT=$(pgrep $(basename $0) | wc -l)
PSTREE=$(pgrep $(basename $0) ; pstree -p $$)
echo "###"
echo $COUNT
echo $PSTREE
echo "###"
echo "$(basename $0) :" `pgrep -d, $(basename $0)`
echo sleeping.....
sleep 10
The results I'm getting are:
$ ./test.sh
###
2
2581 2587 test.sh(2581)---test.sh(2587)---pstree(2591)
###
test.sh : 2581
sleeping.....
I don't understand why I'm getting a "2" when only one process is actually running.
Any ideas? I'm sure it's the way I'm calling it. I've tried a number of different combinations and can't quite seem to figure it out.
SOLUTION:
What I ended up doing was this (a portion of my script):
function check_lockfile {
    # Check for previous lockfiles
    if [ -e $LOCKFILE ]
    then
        echo "Lockfile $LOCKFILE already exists. Checking to see if process is actually running...." >> $LOGFILE 2>&1
        # is it running?
        if [ $(ps -elf | grep $(cat $LOCKFILE) | grep $(basename $0) | wc -l) -gt 0 ]
        then
            abort "ERROR! - Process is already running at PID: $(cat $LOCKFILE). Exiting..."
        else
            echo "Process is not running. Removing $LOCKFILE" >> $LOGFILE 2>&1
            rm -f $LOCKFILE
        fi
    else
        echo "Lockfile $LOCKFILE does not exist." >> $LOGFILE 2>&1
    fi
}

function create_lockfile {
    # Check for previous lockfile
    check_lockfile
    # Create lockfile with the contents of the PID
    echo "Creating lockfile with PID:" $$ >> $LOGFILE 2>&1
    echo -n $$ > $LOCKFILE
    echo "" >> $LOGFILE 2>&1
}
# Acquire lock file
create_lockfile >> $LOGFILE 2>&1 \
|| echo "ERROR! - Failed to acquire lock!"
The argument to pgrep is an extended regular expression pattern. In your case the command pgrep $(basename $0) evaluates to pgrep test.sh, which matches any process name containing test, followed by any single character, followed by sh, so it would also match e.g. btest8sh or atest_shell. In addition, the $( ) command substitution itself runs in a subshell that is a copy of test.sh, which is why your pstree output shows two test.sh processes and the count comes out as 2.
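To illustrate the pattern behaviour (the process names in the comments are hypothetical):
pgrep test.sh          # regex: would also match names like btest8sh or atest_shell
pgrep -x 'test\.sh'    # -x requires the whole name to match, with the dot escaped
Note that even with an exact pattern, the command substitution's subshell in your script still shows up as a second test.sh, so checking a recorded PID (as in the lock-file answers below) is more reliable than counting pgrep hits.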
You should create a lock file; if the lock file exists, the program should exit.
lock=$(basename $0).lock
if [ -e $lock ]
then
echo Process is already running with PID=`cat $lock`
exit
else
echo $$ > $lock
fi
You are already creating a lock file. Use it to make your life easier.
Write the process ID to the lock file. When you see that the lock file exists, read it to find out which process ID it supposedly belongs to, and check whether that process is still running.
Then, in version 2, you can also write the program name, program arguments, program start time, etc. to guard against the case where a new process starts with the same process ID.
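A minimal sketch of that idea (the lock path and messages are placeholders, and it still has the small race window mentioned in the next answer):
lock="/tmp/$(basename "$0").pid"
if [ -e "$lock" ] && kill -0 "$(cat "$lock")" 2>/dev/null; then
    echo "Already running as PID $(cat "$lock")" >&2
    exit 1
fi
echo $$ > "$lock"            # record our own PID
trap 'rm -f "$lock"' EXIT    # clean up on exit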
Put this near the top of your script...
pid=$$
script=$(basename $0)
guard="/tmp/$script-$(id -nu).pid"
if test -f $guard ; then
echo >&2 "ERROR: Script already runs... own PID=$pid"
ps auxw | grep $script | grep -v grep >&2
exit 1
fi
trap "rm -f $guard" EXIT
echo $pid >$guard
And yes, there IS a small window for a race condition between the test and echo commands. It can be closed by appending to the guard file and then checking that the first line is indeed our own PID. Also, the diagnostic output in the if can be commented out in a production version.
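A rough sketch of that append-and-verify idea, reusing $pid and $guard from the snippet above (PID reuse is still not handled):
echo $pid >> $guard                    # append instead of test-then-create
if [ "$(head -n 1 $guard)" != "$pid" ]; then
    echo >&2 "ERROR: Script already runs... owner PID=$(head -n 1 $guard)"
    exit 1                             # we lost the race; the owner removes the file
fi
trap "rm -f $guard" EXIT               # we won; clean up on exit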
