How to grep from background process? - bash

I have a script that runs an app. The app produces some output, and I need to grep that output in order to verify it. How can I accomplish this?
For instance:
script1.sh
#!/bin/sh
app1 &
app2
Output:
app1 -> "App1"
app2 -> "APp2"

You can use nohup to capture the script's output and then grep the resulting file. Run the script under nohup like this:
nohup ./script.sh &
It will create a nohup.out file in the current directory, which you can grep for your purpose.
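A minimal sketch of this approach, with a stand-in script.sh that just echoes the strings from the question in place of running the real app1 and app2 (note the explicit redirect: nohup only creates nohup.out on its own when stdout is a terminal):

```shell
#!/bin/sh
# Stand-in for the question's script.sh: echoes the expected app output.
cat > script.sh <<'EOF'
#!/bin/sh
echo "App1"
echo "APp2"
EOF
chmod +x script.sh

nohup ./script.sh > nohup.out 2>&1 &  # capture everything the script prints
wait                                  # wait for the background script to finish
grep "App1" nohup.out                 # verify the expected output is present
```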

I suggest using a named pipe over a file produced by nohup. The reason is subtle but important. Say your background application takes 10 seconds to execute and produces a decent amount of data. For the nohup approach to work, you have to wait for the background application to finish before you can process the data in that file, which forfeits the main benefit of running it in the background in the first place: parallelism. This is true not just for nohup output but for any regular file.
Here is an example of not waiting for the background file to finish executing:
$ { for i in {0..100}; do echo $i; sleep 0.1; done } > outfile &
[1] 2069
$ grep 1 outfile
1
As you can see, the grep process immediately processes the file and exits before the background application is finished writing data.
When using a named pipe, the foreground process, grep in this case, will know that it needs to wait for the pipe to close. Notice the difference:
$ mkfifo outpipe
$ { for i in {0..100}; do echo $i; sleep 0.1; done } > outpipe &
[1] 2173
$ grep 1 outpipe
1
10
11
12
13
14
15
16
17
18
19
21
31
41
51
61
71
81
91
100
With a named pipe, we can use the output of the background process just as if it were coming from a pipe.
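Applied to the original question, a sketch might look like this (with `echo "App1"` standing in for the real app1, and assuming "App1" is the string you want to verify):

```shell
#!/bin/sh
# Verify a background process's output through a named pipe.
mkfifo outpipe
{ echo "App1"; sleep 0.2; } > outpipe &  # background writer, stands in for app1
grep "App1" outpipe                      # blocks until the writer closes the pipe
rm outpipe
```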

Related

restart program if it outputs some string

I want to loop a process in a bash script; it should run forever, but it sometimes fails.
When it fails, it outputs >>747;3R as its last line, but keeps running.
I tried (just for testing)
while [ 1 ]
do
mono Program.exe
last_pid=$!
sleep 3000
kill $last_pid
done
but it doesn't work at all: mono Program.exe just runs forever (until it crashes, and even then my script does nothing).
$! expands to the PID of the last process started in the background. This can be seen with:
~$ cat test
sleep 2
lastpid=$!
echo $lastpid
~$ bash -x test
+ sleep 2
+ lastpid=
+ echo
vs
~$ cat test
sleep 2 &
lastpid=$!
echo $lastpid
~$ bash -x test
+ lastpid=25779
+ sleep 2
+ echo 25779
The fixed version of your script would read:
while true; do
mono Program.exe &
last_pid=$!
sleep 3000
kill $last_pid
done
Your version ran mono Program.exe in the foreground and sat there; it never made it to the next line because it was waiting for the process to finish. Your kill command then did nothing, because $! was never populated (there was no background process).
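The fixed script restarts on a timer, but the question actually asks for a restart when the failure string appears. One hedged sketch of that: log the program's output and poll the log for the marker. Here `fake_prog` stands in for `mono Program.exe`, and 'FAIL' stands in for the real '>>747;3R' string:

```shell
#!/bin/bash
# Hypothetical watchdog: restart a program that prints a failure marker
# but keeps running afterwards.
fake_prog() { printf 'ok\nok\nFAIL\n'; exec sleep 100; }  # hangs after failing

while true; do
    fake_prog > prog.log 2>&1 &
    pid=$!
    # Poll the log until the failure marker shows up
    until grep -q 'FAIL' prog.log 2>/dev/null; do sleep 0.2; done
    kill "$pid"
    wait "$pid" 2>/dev/null
    echo "marker seen, restarting"
    break   # demo only; a real watchdog would keep looping
done
```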

Start process in background quietly

Appending a & to the end of a command starts it in the background. E.g.:
$ wget google.com &
[1] 7072
However, this prints a job number and PID. Is it possible to prevent these?
Note: I still want to keep the output of wget – it's just the [1] 7072 that I want to get rid of.
There is an option to the set builtin, set -b, that controls the output of this line, but the choice is limited to "immediately" (when set) and "wait for next prompt" (when unset).
Example of immediate printing when the option is set:
$ set -b
$ sleep 1 &
[1] 9696
$ [1]+ Done sleep 1
And the usual behaviour, waiting for the next prompt:
$ set +b
$ sleep 1 &
[1] 840
$ # Press enter here
[1]+ Done sleep 1
So as far as I can see, these can't be suppressed. The good news is, though, that job control messages are not displayed in a non-interactive shell:
$ cat sleeptest
#!/bin/bash
sleep 1 &
$ ./sleeptest
$
So if you start a command in the background in a subshell, there won't be any messages. To do that in an interactive session, you can run your command in a subshell like this (thanks to David C. Rankin):
$ ( sleep 1 & )
$
which also results in no job control prompts.
From the Advanced Bash-Scripting Guide:
Suppressing stdout.
cat $filename >/dev/null
# Contents of the file will not list to stdout.
Suppressing stderr (from Example
16-3).
rm $badname 2>/dev/null
# So error messages [stderr] deep-sixed.
Suppressing output from both stdout and stderr.
cat $filename 2>/dev/null >/dev/null
# If "$filename" does not exist, there will be no error message output.
# If "$filename" does exist, the contents of the file will not list to stdout.
# Therefore, no output at all will result from the above line of code.
#
# This can be useful in situations where the return code from a command
#+ needs to be tested, but no output is desired.
#
# cat $filename &>/dev/null
# also works, as Baris Cicek points out.

How can I wait for a file to be finished being written to in shell script?

I have a shell script called parent.sh which does some stuff, then goes off and calls another shell script child.sh which does some processing and writes some output to a file output.txt.
I would like the parent.sh script to only continue processing after that output.txt file has been written to. How can I know that the file has finished being written to?
Edit: Adding answers to questions:
Does child.sh finish writing to the file before it exits? Yes
Does parent.sh run child.sh in the foreground or the background? I'm not sure - it's being called from within parent.sh like this: ./child.sh "$param1" "$param2"
You need the wait command: wait blocks until all background child processes have finished before continuing. Note that parent.sh must start child.sh in the background (./child.sh &), or there is nothing for wait to wait on.
parent.sh:
#!/bin/bash
rm output.txt
./child.sh &
# Wait for the child script to finish
#
wait
echo "output.txt:"
cat output.txt
child.sh:
#!/bin/bash
for x in $(seq 10); do
echo $x >&2
echo $x
sleep 1
done > output.txt
Here is the output from ./parent.sh:
[sri@localhost ~]$ ./parent.sh
1
2
3
4
5
6
7
8
9
10
output.txt:
1
2
3
4
5
6
7
8
9
10
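If parent.sh starts several background jobs and only needs to wait for child.sh, wait also accepts a PID and then returns that child's exit status. A sketch, with `fake_child` standing in for ./child.sh:

```shell
#!/bin/bash
# Wait for one specific background job by PID; wait returns its exit status.
fake_child() { sleep 0.2; echo "done" > output.txt; }

fake_child &
child_pid=$!
sleep 1 > /dev/null 2>&1 &   # unrelated background job we do NOT wait for
wait "$child_pid"            # returns as soon as fake_child alone has exited
echo "child status: $?"
cat output.txt
```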

Start background process from shellscript then bring back to foreground later

I'm trying to make a shell script that does the following:
Start program x
While x is running execute some commands, for example:
echo "blabla" >> ~/blabla.txt
After the execution of those commands program x should be running in the foreground, so that it can take user input.
So far I have:
~/x &
echo "blabla" >> ~/blabla.txt
However, I don't know how to move x back to the foreground. This is all called from a shell script so I don't know the job number of x to move to the foreground.
Note: everything has to be automated, no user interaction with the shell script should be needed.
Any suggestions are welcome :)
Although I don't understand why anyone would need such a script, and I'm sure a more elegant and correct solution exists, the following demonstrates how.
The script that goes to the background (saved as bgg):
#!/bin/bash
for i in $(seq 10)
do
echo "bg: $i"
sleep 1
done
read -p 'BGG enter something:' -r data
echo "$0 got: $data"
The main script (main.sh):
set -m #this is important
echo "Sending script bgg to background - will cycle 10 secs"
./bgg 2>/dev/null &
echo "Some commands"
date
read -r -p 'main.sh - enter something:' fgdata
echo "Main.sh got: ==$fgdata=="
jnum=$(jobs -l | grep " $! " | sed 's/\[\(.*\)\].*/\1/')
echo "Background job number: $jnum"
echo "Now sleeping 3 sec"
sleep 3
echo "Bringing $jnum to foreground - wait until the BG job reads input"
fg $jnum
Run ./main.sh and the result will be something like:
Sending bgg to background - will cycle 10 secs
Some commands
Mon Mar 3 00:04:57 CET 2014
main.sh - enter something:bg: 1
bg: 2
bg: 3
bg: 4
bg: 5
qqbg: 6
qqqqq
Main.sh got: ==qqqqqqq==
Background job number: 1
Now sleeping 3 sec
bg: 7
bg: 8
bg: 9
Bringing 1 to foreground - wait until the BG job reads input
./bgg
bg: 10
BGG enter something:wwwwwww
./bgg got: wwwwwww
You can also simply use fg to bring the most recent background job to the foreground.

Reading realtime output from airodump-ng

When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I am using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a stream other than STDOUT to capture the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
{ airodump-ng mon0 2>> output.txt; } &
PID=$!
sleep 5
kill -TERM $PID
cat output.txt
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a CSV file whose name is prefixed by [OUTPUT-PREFIX]. The file is updated every 30 seconds. If you give a prefix like /var/log/test, the file will go in /var/log/ and be named like test-XX.csv.
You should then be able to access the output file(s) by any other tool while airodump is running.
As of airodump-ng 1.2 rc4 you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed you can access its output by viewing my-01.csv. Note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout. The following is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better suited to parsing the output with other programs or applications.
