Logging background process with parms as log name - bash

I am relatively new to shell scripting and am trying to write a script that automatically creates a log named after the script's inputs, which requires parsing the arguments and (sometimes) creating directories.
I would like to log the first part of the script too, in case an error occurs there, so I have come up with a somewhat awkward shuffling of log files.
#!/bin/bash
# Submission Example: bash task.sh 20 Illinois > /dev/null 2>&1 &
set -x

# Log to a temp file until the final log path is known.
tmpfile=$( mktemp "$PWD/begin_log.XXXXXX" )
exec > "$tmpfile" 2>&1

week=$1
area=$2
__proj_logdir=$PWD/${week}
mkdir -p "$__proj_logdir"
__proj_log=${__proj_logdir}/${area}.log

# On exit, splice the early temp log in front of the main log.
resolve_logs() {
    inter=$( mktemp "$PWD/intermed_log.XXXXXX" )
    mv "$__proj_log" "$inter"
    mv "$tmpfile" "$__proj_log"
    cat "$inter" >> "$__proj_log"
    rm -f "$inter" "$tmpfile"
}
trap resolve_logs EXIT

exec > "$__proj_log" 2>&1
echo "the rest of the script"
exit
My question is: does a built-in command or option exist for this, or is there a better way to go about it?
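One simpler arrangement avoids the end-of-run shuffle entirely: keep logging to the temp file only until the final path is known, then move the temp file into place and reopen it in append mode. A sketch under the same argument conventions (untested; it relies on the temp file and the log directory being on the same filesystem, so mv is a rename and the open descriptor keeps writing to the same file):
#!/bin/bash
tmpfile=$( mktemp "$PWD/begin_log.XXXXXX" )
exec > "$tmpfile" 2>&1

week=$1
area=$2
mkdir -p "$PWD/$week"

# Promote the temp file to the final log, then append from here on.
mv "$tmpfile" "$PWD/$week/$area.log"
exec >> "$PWD/$week/$area.log" 2>&1

echo "the rest of the script"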

Related

Bash use of zenity with console redirection

In an effort to create more manageable scripts that each write their own output to a single location (via 'exec > file'), is there a better solution than the one below for combining stdout redirection with zenity (which in this use relies on piped stdout)?
parent.sh:
#!/bin/bash
exec >> /var/log/parent.out
( true; sh child.sh ) | zenity --progress --pulsate --auto-close --text='Executing child.sh'
[[ "$?" != "0" ]] && exit 1
...
child.sh:
#!/bin/bash
exec >> /var/log/child.out
echo 'Now doing child.sh things..'
...
When doing something like:
sh child.sh | zenity --progress --pulsate --auto-close --text='Executing child.sh'
zenity never receives stdout from child.sh since it is being redirected from within child.sh. Even though it seems to be a bit of a hack, is using a subshell containing a 'true' + execution of child.sh acceptable? Or is there a better way to manage stdout?
I get that 'tee' is acceptable to use in this scenario, though I would rather not have to write out child.sh's logfile location each time I want to execute child.sh.
Your redirection exec > stdout.txt will lead to an error:
$ exec > stdout.txt
$ echo hello
$ cat stdout.txt
cat: stdout.txt: input file is output file
You need an intermediary file descriptor.
$ exec 3> stdout.txt
$ echo hello >&3
$ cat stdout.txt
hello
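Back to the original problem: if the goal is for child.sh to keep its own log while still feeding the pipe to zenity, one option is to tee inside child.sh via process substitution. A sketch (bash-specific, so it must run via its shebang or bash child.sh, not sh child.sh; the log path is carried over from the question):
#!/bin/bash
# child.sh: copy stdout into the log while leaving it connected to any pipe.
exec > >(tee -a /var/log/child.out)
echo 'Now doing child.sh things..'
With this, ./child.sh | zenity ... receives the output directly and the subshell-plus-true workaround becomes unnecessary.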

Redirect copy of stdin to file from within bash script itself

In reference to https://stackoverflow.com/a/11886837/1996022 (whose title I also shamelessly stole), where the question is how to capture a script's output, I would like to know how I can additionally capture the script's input, mainly so that scripts which take user input also produce complete logs.
I tried things like
exec 3< <(tee -ia foo.log <&3)
exec <&3 <(tee -ia foo.log <&3)
But nothing seems to work. I'm probably just missing something.
Maybe it'd be easier to use the script command? You could either have your users run the script with script directly, or do something kind of funky like this:
#!/bin/bash
main() {
    read -r -p "Input string: "
    echo "User input: $REPLY"
}

if [ "$1" = "--log" ]; then
    # If the first argument is "--log", shift the arg
    # out and run main
    shift
    main "$@"
else
    # If run without log, re-run this script within a
    # script command so all script I/O is logged
    script -q -c "$0 --log $*" test.log
fi
Unfortunately, you can't pass a function to script -c, which is why the double-call is necessary in this method.
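Run without arguments, the script re-invokes itself under script, so a session would look something like this (hypothetical transcript, assuming the file is saved as myscript.sh):
$ ./myscript.sh
Input string: hello
User input: hello
$ cat test.log
Input string: hello
User input: hello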
If it's acceptable to have two scripts, you could also have a user-facing script that just calls the non-user-facing script with script:
script_for_users.sh
--------------------
#!/bin/sh
script -q -c "/path/to/real_script.sh" <log path>
real_script.sh
---------------
#!/bin/sh
<Normal business logic>
It's simpler:
#!/bin/bash
tee ~/log | your_script
The wonderful thing is that your_script can be a function, a command, or a {} command block!
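For completeness, the redirection the question was reaching for can also be written directly: tee inherits the script's original stdin, appends a copy to the log, and its stdout becomes the script's new stdin. A sketch (bash-specific; the file name foo.log is taken from the question):
#!/bin/bash
# Copy everything read on stdin into foo.log as it is consumed.
exec < <(tee -ia foo.log)
read -r -p "Input string: "
echo "User input: $REPLY"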

testing a program in bash

I wrote a program in c++ and now I have a binary. I have also generated a bunch of tests for testing. Now I want to automate the process of testing with bash. I want to save three things in one execution of my binary:
execution time
exit code
output of the program
Right now I am stuck with a script that only tests that the binary does its job and returns 0, and doesn't save any of the information I mentioned above. My script looks like this:
#!/bin/bash
if [ "$#" -ne 2 ]; then
    echo "Usage: testScript <binary> <dir_with_tests>"
    exit 1
fi

binary="$1"
testsDir="$2"

# Note: word-splitting the output of find breaks on paths with spaces.
for test in $(find "$testsDir" -name '*.txt'); do
    testname=$(basename "$test")
    # mktemp templates need trailing X's to be valid.
    encodedTmp=$(mktemp "/tmp/encoded_${testname}.XXXXXX")
    decodedTmp=$(mktemp "/tmp/decoded_${testname}.XXXXXX")

    printf 'testing on %s...\n' "$testname"

    if ! "$binary" -c -f "$test" -o "$encodedTmp" > /dev/null; then
        echo 'encoder failed'
        rm "$encodedTmp" "$decodedTmp"
        continue
    fi

    if ! "$binary" -u -f "$encodedTmp" -o "$decodedTmp" > /dev/null; then
        echo 'decoder failed'
        rm "$encodedTmp" "$decodedTmp"
        continue
    fi

    if ! diff "$test" "$decodedTmp" > /dev/null; then
        echo "result differs with input"
    else
        echo "$testname passed"
    fi

    rm "$encodedTmp" "$decodedTmp"
done
I want to save the output of $binary in a variable instead of sending it to /dev/null. I also want to measure execution time using the bash time keyword.
Since you asked for the output to be saved in a shell variable, I tried answering this without using output redirection, which saves output in (temporary) text files (which then have to be cleaned up).
Saving the command output
You can replace this line
if ! "$binary" -c -f "$test" -o "$encodedTmp" > /dev/null; then
with
if ! output=$("$binary" -c -f "$test" -o "$encodedTmp"); then
Using command substitution saves the program output of $binary in the shell variable. Command substitution (combined with shell variable assignment) also passes the program's exit code up to the calling shell, so the conditional if statement still checks whether $binary executed without error.
You can view the program output by running echo "$output".
Saving the time
Without a more sophisticated form of inter-process communication, there is no way for a shell that is a sub-process of another shell to change the variables or the environment of its parent process, so the only way I could save both the time and the program output was to combine them in one variable (note that hyphens are not valid in bash variable names, so it must be time_output, not time-output):
if ! time_output=$( (time "$binary" -c -f "$test" -o "$encodedTmp") 2>&1 ); then
Since time prints its profiling information to stderr, I use the parentheses operator to run the command in a subshell whose stderr can be redirected to stdout. The program output and the output of time can then be viewed by running echo "$time_output", which should return something similar to:
<program output>
<blank line>
real 0m0.041s
user 0m0.000s
sys 0m0.046s
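If you only want the elapsed time rather than the full three-line report, bash's TIMEFORMAT variable controls what the time keyword prints. A sketch (%R is elapsed real time in seconds; this discards the program's own output, so it suits a separate timing pass):
TIMEFORMAT='%R'
elapsed=$( { time "$binary" -c -f "$test" -o "$encodedTmp" > /dev/null 2>&1; } 2>&1 )
echo "took $elapsed seconds"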
You can get the exit status in bash using $? and print it with echo $?.
And to capture the output of time, you could use something like this:
{ time sleep 1 ; } 2> time.txt
Or you can save the output of the program and execution time at once
(time ls) > out.file 2>&1
You can save output to a file using output redirection. Just change the first /dev/null line:
if ! "$binary" -c -f "$test" -o "$encodedTmp" > /dev/null; then
to
if ! "$binary" -c -f "$test" -o "$encodedTmp" > prog_output; then
then change the second and third /dev/null lines respectively:
if ! "$binary" -u -f "$encodedTmp" -o "$decodedTmp" >> prog_output; then
if ! diff "$test" "$decodedTmp" >> prog_output; then
To measure program execution time, put
start=$(date +%s)
on the first line, then
end=$(date +%s)
echo "Execution time in seconds: " $((end-start)) >> prog_output
at the end.
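Putting the pieces together, here is a sketch of how all three items could be recorded inside the loop (date +%s%N assumes GNU date; the variable names are illustrative):
start=$(date +%s%N)                                      # nanoseconds since epoch
output=$("$binary" -c -f "$test" -o "$encodedTmp" 2>&1)  # capture stdout+stderr
status=$?                                                # capture exit code
end=$(date +%s%N)
printf '%s: exit=%d time=%dms\n' "$testname" "$status" $(( (end - start) / 1000000 ))
printf '%s\n' "$output" >> prog_output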

stop bash script from outputting in terminal

I believe I have everything set up correctly for my if-else statement; however, it keeps outputting content into my shell terminal as if I ran the command myself. Is there any way I can suppress this so I can run these commands without them populating my terminal with text from the results?
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
    echo "Process is running." &
    echo $!
else
    echo "Process is not running... Starting..."
    python likebot.py &
    echo $!
fi
Here is what the output looks like a few minutes after running my bash script
[~]# sh check.sh
Process is not running... Starting...
12359
[~]# Your account has been rated. Sleeping on kranze for 1 minute(s). Liked 0 photo(s)...
Your account has been rated. Sleeping on kranze for 2 minute(s). Liked 0 photo(s)...
If you want to redirect output from within the shell script, you use exec:
exec 1>/dev/null 2>&1
This will redirect everything from now on. If you want to output to a log:
exec 1>/tmp/logfile 2>&1
To append a log:
exec 1>>/tmp/logfile 2>&1
To back up your handles so you can restore them:
exec 3>&1 4>&2
exec 1>/dev/null 2>&1
# Do some stuff
# Restore descriptors
exec 1>&3 2>&4
# Close the descriptors.
exec 3>&- 4>&-
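In bash 4.1 and later you can also let the shell allocate the backup descriptors for you instead of hard-coding 3 and 4 (a sketch):
exec {out}>&1 {err}>&2     # shell picks fds >= 10, stores them in $out and $err
exec 1>/dev/null 2>&1
# Do some stuff
exec 1>&"$out" 2>&"$err"   # restore
exec {out}>&- {err}>&-     # close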
If there is a particular section of a script you want to silence:
#!/bin/bash
echo "Hey, check me out, I can make noise!"
{
    echo "That's not fair, I am being silenced!"
    mv -v /tmp/a /tmp/b
    echo "Me too."
} 1>/dev/null 2>&1
If you want to redirect the normal (stdout) output, use >/dev/null; if you want to redirect the error output as well, add 2>&1 after it. Note that the order matters: the stdout redirection must come first, or stderr will still reach the terminal.
eg
$ command >/dev/null 2>&1
I think you have to redirect STDOUT (and maybe STDERR) of the python interpreter:
...
echo "Process is not running... Starting..."
python likebot.py >/dev/null 2>&1 &
...
For further details, please have a look at Bash IO-Redirection.
Hope that helped a bit.
*Jost
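On a related note: if the background process should also keep running after the terminal closes, nohup combines this redirection with immunity to hangup signals (a sketch; the log path is illustrative):
nohup python likebot.py > likebot.log 2>&1 &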
You have two options:
You can redirect standard output to a log file using > /path/to/file
You can redirect standard output to /dev/null to get rid of it completely using > /dev/null
If you want error output redirected as well use &>
Also, not relevant to this particular example, but some bash commands support a 'quiet' or 'silent' flag.
Append >> /path/to/outputfile/outputfile.txt to the end of every echo statement
echo "Process is running." >> /path/to/outputfile/outputfile.txt
Alternatively, send the output to the file when you run the script from the shell
[~]# sh check.sh >> /path/to/outputfile/outputfile.txt

Bash process substitution and syncing

(Possibly related to Do some programs not accept process substitution for input files?)
In some Bash unit test scripts I'm using the following trick to log and display stdout and stderr of a command:
command > >(tee "${stdoutF}") 2> >(tee "${stderrF}" >&2)
This process produces some output to stdout, so the $stdoutF file gets some data. Then I run another command which does not output any data:
diff -r "$source" "$target" > >(tee "${stdoutF}") 2> >(tee "${stderrF}" >&2)
However, it doesn't look like this process always finishes successfully before the test for emptiness is run (using shunit-ng):
assertNull 'Unexpected output to stdout' "$(<"$stdoutF")"
In a 100-run test this failed 25 times.
Should it be sufficient to call sync before testing the file for emptiness:
sync
assertNull 'Unexpected output to stdout' "$(<"$stdoutF")"
... and/or should it work by forcing the sequence of the commands:
diff -r "$source" "$target" \
> >(tee "${stdoutF}"; assertNull 'Unexpected output to stdout' "$(<"$stdoutF")")
2> >(tee "${stderrF}" >&2)
... and/or is it possible to tee it somehow to assertNull directly instead of a file?
Update: sync is not the answer - See Gilles' response below.
Update 2: Discussion taken further to Save stdout, stderr and stdout+stderr synchronously. Thanks for the answers!
In bash, a command containing a process substitution, foo > >(bar), finishes as soon as foo finishes. (This is not discussed in the documentation.) You can check this with
: > >(sleep 1; echo a)
This command returns immediately, then prints a asynchronously one second later.
In your case, the tee command takes a little bit of time to finish after command completes. Adding sync gave tee enough time to complete, but this doesn't remove the race condition, any more than adding a sleep would; it just makes the race less likely to manifest.
More generally, sync does not have any internally observable effect: it only makes a difference if you want to access the device where your filesystems are stored from a different operating system instance. In clearer terms: if your system loses power, only the data written before the last sync is guaranteed to be available after you reboot.
As for removing the race condition, here are a few possible approaches:
Explicitly synchronize all substituted processes.
mkfifo sync.pipe
command > >(tee -- "$stdoutF"; echo >sync.pipe) \
        2> >(tee -- "$stderrF"; echo >sync.pipe)
read line < sync.pipe; read line < sync.pipe
Use a different temporary file name for each command instead of reusing $stdoutF and $stderrF, and enforce that the temporary file is always newly created.
Give up on process substitution and use pipes instead.
{ { command | tee -- "$stdoutF" 1>&3; } 2>&1 \
| tee -- "$stderrF" 1>&2; } 3>&1
If you need the command's return status, bash puts it in ${PIPESTATUS[0]}.
{ { command | tee -- "$stdoutF" 1>&3; exit ${PIPESTATUS[0]}; } 2>&1 \
| tee -- "$stderrF" 1>&2; } 3>&1
if [ ${PIPESTATUS[0]} -ne 0 ]; then echo command failed; fi
I sometimes put a guard:
: > >(sleep 1; echo a; touch guard) \
&& while true; do
    [ -f "guard" ] && { rm guard; break; }
    sleep 0.2
done
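In bash 4.4 and later there is a tidier option: $! is set to the PID of the most recent process substitution, and wait can block on it (a sketch; note that only the last substitution's PID is available this way, so it covers a single tee):
command > >(tee -- "$stdoutF")
wait $!    # blocks until the tee substitution has finished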
To answer your last question: insert a sleep 5 or whatnot in place of sync.
