how to stop tail -f command executed in sub shell - bash

I tried various steps from http://mywiki.wooledge.org/ProcessManagement and http://mywiki.wooledge.org/BashFAQ/068, but what I am unable to achieve is killing the tail -f command after a certain time interval.
My script:
#!/bin/bash
function strt ()
{
command 1..
command 2..
}
export -f strt
su user -c 'set -e && RUN_Server.sh > server.log && tail -f server.log & pid=$! { sleep 20; kill $pid; } && strt'
exit 0
I am trying to kill the pid of tail -f server.log and then proceed to 'strt', which is a small function that checks whether the JBoss server has started or not.
On executing it I get this error:
bash: -c: line 0: syntax error near unexpected token `{' .

Try this
timeout 20 tail -f server.log
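If timeout is available, the whole su one-liner could be restructured along these lines (only a sketch: RUN_Server.sh and strt come from the question, backgrounding the server and redirecting its stderr are my assumptions, and it relies on the exported strt function surviving su as in the original attempt):
su user -c '
  RUN_Server.sh > server.log 2>&1 &   # assumption: start the server in the background
  timeout 20 tail -f server.log       # follow the log for at most 20 seconds
  strt                                # then run the startup check
'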

pid=$! { sleep 20 ; kill $pid; }
What are you trying to do? Maybe just adding a semicolon before { can help?

The problem you're having is that your sleep command won't run until your tail command finishes.
The command structure:
command1 && command2
says to run command1 and if it exits with an exit code of 0, then run command2. It's equivalent to this:
if command1
then
    command2
fi
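For example, with any two commands (these names are just illustrative):
mkdir /tmp/demo && cd /tmp/demo   # cd runs only if mkdir succeeded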
I had a similar situation to this a while ago. I had to start up a Weblogic server, wait until the server started, then do something on the server.
I ended up using named pipes (Directions) to do this. I ran the command that started the Weblogic server through this named pipe. I then had another process read the pipe, and when I saw the startup string, I broke out of my loop and continued my program.
The link above should give you complete directions on doing just that.
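A rough sketch of that pattern in bash (the start script name, the pipe path, and the startup string here are made-up placeholders, not WebLogic specifics):
pipe=/tmp/server_startup.$$
mkfifo "$pipe"

./startWebLogic.sh > "$pipe" 2>&1 &       # placeholder for the real server start command

exec 3< "$pipe"                           # hold a read end open on fd 3
while IFS= read -r line <&3
do
    echo "$line"                          # still show the server's startup output
    case $line in
        *"Server started"*) break ;;      # placeholder startup string to watch for
    esac
done

cat <&3 > /dev/null &                     # keep draining so the server never blocks on the pipe
rm -f "$pipe"
# ...the server is up; carry on with the rest of the program here...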

I was trying something similar, namely I wanted to print out the PID of a process spawned in the background with an ampersand (&), in a one-liner/single line:
$ tail -f /var/log/syslog & ; echo $!
bash: syntax error near unexpected token `;'
... but kept getting the dreaded syntax error, which brought me to this post.
What I failed to realize in my example above is that the & (ampersand) is also a command separator/terminator in bash - just like the ; (semicolon) is! Note:
BashSheet - Greg's Wiki
[command] ; [command] [newline]
Semi-colons and newlines separate synchronous commands from each other.
[command] & [command]
A single ampersand terminates an asynchronous command.
So, given that the & (a command line terminator in bash) in my example above is followed by a ; (also a command line terminator), bash chokes. The answer is simply to remove the semicolon and let the ampersand alone act as the command separator in the one-liner:
$ tail -f /var/log/syslog & echo $!
[1] 15562
15562
$ May 1 05:39:16 mypc avahi-autoipd(eth0)[23315]: Got SIGTERM, quitting.
May 1 06:09:01 mypc CRON[2496]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
May 1 06:17:01 mypc CRON[5077]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 1 06:25:01 mypc CRON[7587]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
^C
$ May 1 06:52:01 mypc CRON[15934]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ))
$ ps -p 15562
PID TTY TIME CMD
15562 pts/0 00:00:00 tail
$ kill 15562
$ ps -p 15562
PID TTY TIME CMD
[1]+ Terminated tail -f /var/log/syslog
$ ps -p 15562
PID TTY TIME CMD
$
... however, in this example, you have to manually kill the spawned process.
To go back to OP problem, I can reconstruct the problem with this command line:
$ tail -f /var/log/syslog & pid=$! { sleep 2; kill $pid; }
bash: syntax error near unexpected token `}'
Thinking about this: bash sees the & as a separator, then sees the "legal" command pid=$!, and then - with that previous command still unterminated - sees a curly brace {, which starts a new command group in the current shell. So the answer is simply to terminate pid=$! with a semicolon ; so that the new command group can start properly:
$ tail -f /var/log/syslog & pid=$! ; { sleep 2; kill $pid; }
[1] 20228
May 1 05:39:16 mypc avahi-autoipd(eth0)[23315]: Got SIGTERM, quitting.
May 1 06:09:01 mypc CRON[2496]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
May 1 06:17:01 mypc CRON[5077]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 1 06:25:01 mypc CRON[7587]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
May 1 06:52:01 mypc CRON[15934]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ))
$ ps -p 20228
PID TTY TIME CMD
[1]+ Terminated tail -f /var/log/syslog
$
Note that the tail -f process seems to terminate properly, but in my bash (version 4.2.8(1)) I have to press Enter in the shell to see the "[1]+ Terminated ..." message.
Hope this helps,
Cheers!

Related

what is the time about nohup command execute in shell?

I wrote a shell script (on CentOS) like this:
count=`ps -ef | grep ${APP_NAME} | grep -v "grep" | wc -l`
if [[ ${count} -lt 1 ]]; then
    cd ${APP_HOME}
    nohup ${JAVA_HOME}/bin/java -Xmx256M -Xms128M -jar -Xdebug -Xrunjdwp:transport=dt_socket,suspend=n,server=y,address=5016 ${APP_HOME}/${APP_NAME} >> /dev/null &
    sleep 5
else
    echo "process already exists!"
    exit 1
fi
I executed the script in a terminal, and the output was:
++ ps -ef
++ wc -l
++ grep -v grep
++ grep soa-report-consumer-service-1.0.0-SNAPSHOT.jar
+ count=0
+ [[ 0 -lt 1 ]]
+ cd /data/jenkins/soa-report-consumer
+ sleep 15
+ nohup /opt/dabai/tools/jdk1.8.0_211/bin/java -Xmx256M -Xms128M -jar -Xdebug -Xrunjdwp:transport=dt_socket,suspend=n,server=y,address=5016 /data/jenkins/soa-report-consumer/soa-report-consumer-service-1.0.0-SNAPSHOT.jar
The question is: is the sleep command executed before the nohup command? Why is the sleep command echoed first?
It's because you put the nohup command in the background with &.
There is no immediate output from the command when you put it in the background, since it runs in a separate shell, and your current shell immediately moves on to the sleep command. By the time you return from the sleep, the nohup background process has started and printed its output.
If you remove the & (and thus run the commands in the same shell) you will see the order changes.
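A toy script shows the same ordering (this is not the OP's script, just an illustration):
#!/bin/bash
set -x                                        # trace commands, like the output above
(sleep 2; echo "background job finished") &   # forked off; the parent does not wait for it
echo "foreground reached immediately"         # the parent prints this first
wait                                          # only now does the background job's output arrive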

On closing the Terminal the nohupped shell script (with &) is stopped

I'm developing a simple screenshot spyware which takes a screenshot every 5 seconds from the start of the script. I want it to keep running after the terminal is closed. Even after nohupping the script along with &, my script exits on closing the terminal.
screenshotScriptWOSleep.sh
#!/bin/bash
echo "Starting Screenshot Capture Script."
echo "Process ID: $$"
directory=$(date "+%Y-%m-%d-%H:%M")
mkdir ${directory}
cd ${directory}
shotName=$(date "+%s")
while true
do
    if [ $(date "+%Y-%m-%d-%H:%M") != ${directory} ]
    then
        directory=$(date "+%Y-%m-%d-%H:%M")
        cd ..
        mkdir ${directory}
        cd ${directory}
    fi
    if [ $(( ${shotName} + 5 )) -eq $(date "+%s") ]
    then
        shotName=$(date "+%s")
        screencapture -x $(date "+%Y-%m-%d-%H:%M:%S")
    fi
done
I ran the script with:
nohup ./screenshotScriptWOSleep.sh &
On closing the terminal window, it warns with:
"Closing this tab will terminate the running processes: bash, date."
I have read that nohup applies to the child processes too, but I'm stuck here. Thanks.
Either you're doing something really weird or that's referring to other processes.
nohup bash -c 'sleep 500' &
Shutdown that terminal; open another one:
ps aux | grep sleep
409370294 26120 1 0 2:43AM ?? 0:00.01 sleep 500
409370294 26330 26191 0 2:45AM ttys005 0:00.00 grep -i sleep
As you can see, sleep is still running.
Just ignore that warning; your process is not terminated. Verify with:
watch wc -l nohup.out

Convert job control number (%1) to pid so that I can kill sudo'ed background job

I often end up in this situation:
$ sudo something &
[1] 21838
$# Oh, shoot, it's hung, and assume the pid has scrolled off the screen
$ kill %1
-bash: kill: (21838) - Operation not permitted
$# Ah, rats. I forgot I sudo'ed that.
$# Wishful thinking:
$ sudo kill %1
kill: cannot find process "%1"
$# Now I have to use ps and find the pid I want.
$ ps -elf | grep something
$ ps -elf | grep sleep
4 S root 21838 1928 0 80 0 - 53969 poll_s 11:28 pts/2 00:00:00 sudo sleep 100
4 S root 21840 21838 0 80 0 - 26974 hrtime 11:28 pts/2 00:00:00 sleep 100
$ sudo kill -9 21838
[1]+ Killed sudo something
I would really like to know if there is a better workflow for this. I'm really surprised there isn't a bash expression to turn %1 into a PID number.
Is there a bash trick for converting %1 to its underlying PID? (Yes, I know I could have saved it at launch with $!)
To get the PID of a job, use: jobs -p N, where N is the job number:
$ yes >/dev/null &
[1] 2189
$ jobs -p 1
2189
$ sudo kill $(jobs -p 1)
[1]+ Terminated yes > /dev/null
Alternatively, and more strictly answering your question, you might find -x useful: it runs a command, replacing a job spec with the corresponding PID:
$ yes >/dev/null &
[1] 2458
$ jobs -x sudo kill %1
[1]+ Terminated yes > /dev/null
I find -p more intuitive, personally, but I get the appeal of -x.

nohup doesn't work when used with double-ampersand (&&) instead of semicolon (;)

I have a script that uses ssh to login to a remote machine, cd to a particular directory, and then start a daemon. The original script looks like this:
ssh server "cd /tmp/path ; nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
This script appears to work fine. However, it is not robust to the case when the user enters the wrong path so the cd fails. Because of the ;, this command will try to run the nohup command even if the cd fails.
The obvious fix doesn't work:
ssh server "cd /tmp/path && nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
The problem is that the ssh command then does not return until the server is stopped. Putting nohup in front of the cd instead of in front of the java didn't work either.
Can anyone help me fix this? Can you explain why this solution doesn't work? Thanks!
Edit: cbuckley suggests using sh -c, from which I derived:
ssh server "nohup sh -c 'cd /tmp/path && java server 0</dev/null 1>master_stdout 2>master_stderr' 2>/dev/null 1>/dev/null &"
However, now the exit code is always 0 when the cd fails; whereas if I do ssh server cd /failed/path then I get a real exit code. Suggestions?
See Bash's Operator Precedence.
The & is being attached to the whole statement because it has a lower precedence than && (it terminates the entire command list). You don't need ssh to verify this. Just run this in your shell:
$ sleep 100 && echo yay &
[1] 19934
If the & were only attached to the echo yay, then your shell would sleep for 100 seconds and then report the background job. However, the entire sleep 100 && echo yay is backgrounded and you're given the job notification immediately. Running jobs will show it hanging out:
$ sleep 100 && echo yay &
[1] 20124
$ jobs
[1]+ Running sleep 100 && echo yay &
You can use parentheses to create a subshell around echo yay &, giving you what you'd expect:
sleep 100 && ( echo yay & )
This would be similar to using bash -c to run echo yay &:
sleep 100 && bash -c "echo yay &"
Tossing these into ssh, we get:
# using parenthesis...
$ ssh localhost "cd / && (nohup sleep 100 >/dev/null </dev/null &)"
$ ps -ef | grep sleep
me 20136 1 0 16:48 ? 00:00:00 sleep 100
# and using `bash -c`
$ ssh localhost "cd / && bash -c 'nohup sleep 100 >/dev/null </dev/null &'"
$ ps -ef | grep sleep
me 20145 1 0 16:48 ? 00:00:00 sleep 100
Applying this to your command, we get:
ssh server "cd /tmp/path && (nohup java server 0</dev/null 1>server_stdout 2>server_stderr &)"
or:
ssh server "cd /tmp/path && bash -c 'nohup java server 0</dev/null 1>server_stdout 2>server_stderr &'"
Also, with regard to your comment on the post,
"Right, sh -c always returns 0. E.g., sh -c exit 1 has error code 0"
this is incorrect. Directly from the manpage:
Bash's exit status is the exit status of the last command executed in the script. If no commands are executed, the exit status is 0.
Indeed:
$ bash -c "true ; exit 1"
$ echo $?
1
$ bash -c "false ; exit 22"
$ echo $?
22
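(The confusion in the quoted comment most likely comes from quoting: in sh -c exit 1 only the word exit is the command string and the 1 becomes $0, so a bare exit runs and the status is 0; quote the whole thing and the real status comes through.)
$ bash -c exit 1 ; echo $?
0
$ bash -c 'exit 1' ; echo $?
1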
ssh server "test -d /tmp/path" && ssh server "nohup ... &"
Answer roundup:
Bad: Using sh -c to wrap the entire nohup command doesn't work for my purposes because it doesn't return error codes. (#cbuckley)
Okay: ssh <server> <cmd1> && ssh <server> <cmd2> works but is much slower (#joachim-nilsson)
Good: Create a shell script on <server> that runs the commands in succession and returns the correct error code (sketched below).
The last is what I ended up using. I'd still be interested in learning why the original use-case doesn't work, if someone who understands shell internals can explain it to me!
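A minimal sketch of that last option, assuming a small wrapper script placed on the server (start_server.sh and its location are made-up names; the cd target and the java line are taken from the question):
#!/bin/sh
# start_server.sh -- lives on the remote server
cd /tmp/path || exit 1        # a bad path now produces a real non-zero exit status
nohup java server 0</dev/null 1>server_stdout 2>server_stderr &
exit 0
Invoked as a single remote command, the cd failure is what ssh reports back:
ssh server '/path/to/start_server.sh' || echo "failed to start the server"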

Getting a PID from a Background Process Run as Another User

Getting a background process ID is easy to do from the prompt by going:
$ my_daemon &
$ echo $!
But what if I want to run it as a different user like:
su - joe -c "/path/to/my_daemon &;"
Now how can I capture the PID of my_daemon?
Succinctly - with a good deal of difficulty.
You have to arrange for the su'd shell to write the child PID to a file and then pick up the output. Given that it will be 'joe' creating the file and not 'dex', that adds another layer of complexity.
The simplest solution is probably:
su - joe -c "/path/to/my_daemon & echo \$! > /tmp/su.joe.$$"
bg=$(</tmp/su.joe.$$)
rm -f /tmp/su.joe.$$ # Probably fails - joe owns it, dex does not
The next solution involves using a spare file descriptor - number 3.
su - joe -c "/path/to/my_daemon 3>&- & echo \$! 1>&3" 3>/tmp/su.joe.$$
bg=$(</tmp/su.joe.$$)
rm -f /tmp/su.joe.$$
If you're worried about interrupts etc (and you probably should be), then you trap things too:
tmp=/tmp/su.joe.$$
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
su - joe -c "/path/to/my_daemon 3>&- & echo \$! 1>&3" 3>$tmp
bg=$(<$tmp)
rm -f $tmp
trap 0 1 2 3 13 15
(The caught signals are HUP, INT, QUIT, PIPE and TERM - plus 0 for shell exit.)
Warning: nice theory - untested code...
The approaches presented here didn't work for me. Here's what I did:
PID_FILE=/tmp/service_pid_file
su -m $SERVICE_USER -s /bin/bash -c "/path/to/executable $ARGS >/dev/null 2>&1 & echo \$! >$PID_FILE"
PID=`cat $PID_FILE`
As long as the output from the background process is redirected, you can send the PID to stdout:
su "${user}" -c "${executable} > '${log_file}' 2>&1 & echo \$!"
The PID can then be redirected to a file owned by the first user, rather than the second user.
su "${user}" -c "${executable} > '${log_file}' 2>&1 & echo \$!" > "${pid_file}"
The log files do need to be owned by the second user to do it this way, though.
Here's my solution
su oracle -c "/home/oracle/database/runInstaller" &
pid=$(pgrep -P $!)
Explanation
pgrep -P $! - Gets the child process of the parent pid $!
I took the above solution by Linux, but had to add a sleep to give the child process a chance to start.
su - joe -c "/path/to/my_daemon > /some/output/file" &
parent=$!
sleep 1
pid=$(pgrep -P $parent)
Running in bash, it doesn't like pid=$(pgrep -P $!) but if I add a space after the ! it's ok: pid=$(pgrep -P $! ). I stuck with the extra $parent variable to remind myself what I'm doing next time I look at the script.
