Let's say I have multiple scripts that need to be invoked sequentially for a job.
These scripts produce long, lengthy output; the driver below is my bash script.
How do I suppress that output but still be able to tell that the process is running?
Here is an example:
#!/bin/bash
echo "Script to prepare Final BUILD"
rm -vf module1.out
module1_build_script.sh #FIXME: This script outputs 10000 lines
#module1_build_script.sh &> /dev/null #Not interested, as this makes it hard to tell whether the process is hanging or running.
if [ ! -f ./out/module1.out ];then
echo "Module 1 build failed"
exit 1
fi
.
.
.
rm -vf module4.out
module4_build_script.sh # This script outputs 5000 lines
if [ ! -f ./out/module4.out ];then
echo "Module 4 build failed"
exit 4
fi
Now I am expecting some code that gives me an effect like the output below: a one-liner that doesn't scroll.
example: module1_build_script.sh | "magical code here" #FIXME:
Like the output of this:
user#bash#./myscript
#-------content of myscript ---------------
#!/bin/bash
i=0
while (( i < 10 ))
do
echo -en "\r Process is running...$i"
sleep 0.5
((i++))
done
#------------------------------------------
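For the "magical code here" placeholder, a minimal sketch (untested against your build scripts, and it assumes counting lines is enough feedback) that collapses the noisy output into one self-overwriting status line:
#!/bin/bash
#Pipe the 10000-line build output through awk, keeping only a live line counter.
module1_build_script.sh 2>&1 | awk '{ printf "\rProcess is running... %d lines", NR; fflush() } END { print "" }'
The pipeline hides the build script's exit status, but the script above already decides success by checking for ./out/module1.out afterwards.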
So I'm trying to check the output of a command, but I also want to be able to display the output directly in the terminal.
#!/bin/bash
while :
do
OUT=$(streamlink -o "$NAME" "$STREAM" best)
echo "$OUT"
if [[ $OUT == *"No playable streams"* ]]; then
echo "Delaying!"
sleep 15s
fi
done
This is what I tried to do.
The code checks whether the output of a command contains that error substring; if so, it adds a delay. That part works well.
But it doesn't work well when the command is actually downloading a file successfully: the echo isn't executed until the download is finished (which can take hours), so until then I have no way of checking the command's output myself.
Plus, the output of this particular command displays and updates the speed and file size in real time, something echo wouldn't be able to replicate.
So is there a way to display the output of a command in real time, while also command-substituting it so I can check the output for substrings after the command finishes?
Use a temporary file:
TEMP=$(mktemp) || exit 1
while true
do
streamlink -o "$NAME" "$STREAM" best |& tee "$TEMP"
OUT=$( cat "$TEMP" )
#echo "$OUT" # no longer needed
if [[ $OUT == *"No playable streams"* ]]; then
echo "Delaying!"
sleep 15s
fi
done
# never reached here because of the endless loop
rm -f "$TEMP"
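As noted, the rm after the endless loop is never reached. A trap on EXIT (a sketch of the usual idiom) guarantees cleanup however the script terminates; in bash it also fires on Ctrl+C:
TEMP=$(mktemp) || exit 1
trap 'rm -f "$TEMP"' EXIT #runs on normal exit and on fatal signals
With that in place, the trailing rm -f "$TEMP" is no longer needed.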
I have a script that shows me the content of a directory every 2 seconds.
Now, I need to change it so I can do something (e.g. echo that it has changed) if the content of the directory has changed.
My script is as follow:
#!/bin/bash
MON_DIR="/home/lab"
if [ -d "$MON_DIR" ] ; then
echo "Directory exists."
while true
do
echo "Content of directory:"
ls "$MON_DIR"
sleep 2
done
else
echo "Directory does not exists." > /dev/stderr
exit $? > /dev/stderr
fi
Your task sounds like you want to try watch: it can run a command periodically and show its output. Using its -g (--chgexit) option (exit when the output changes), you can achieve what you want. I am thinking along the lines of (untested):
#!/bin/bash
MON_DIR="/home/lab"
if [ -d "$MON_DIR" ] ; then
echo "Directory exists."
while true
do
watch -n 2 -g "ls ${MON_DIR}" > /dev/null
echo "Content has changed."
done
else
echo "Directory does not exists." > /dev/stderr
exit $? > /dev/stderr
fi
I am suppressing the output of watch here to ensure you will be able to see the message. You might also replace the infinite loop (while true) with something that can be aborted more cleanly: Ctrl+C will abort watch and the loop will restart it, so you would have to hit Ctrl+C twice in a short interval.
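If watch is not available, a plain-bash sketch of the same idea, comparing snapshots of the listing every 2 seconds (the snapshot variables are illustrative names):
#!/bin/bash
MON_DIR="/home/lab"
last=$(ls "$MON_DIR")
while true
do
sleep 2
current=$(ls "$MON_DIR")
if [ "$current" != "$last" ]; then
echo "Content has changed."
last=$current
fi
done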
I have a bash shell script that should:
1) check for the existence of a file
2) If file exists exit script, otherwise create file
3) Set off a process
4) Check process has run correctly - and send result to a log file
5) Delete file
6) Exit script
if [ -f "$PROPERTIES_HC" ]
then
# lockfile/propertiesfile exists so exit the script
log "--------- lockfile exists so operation cancelled at $(date) ---------"
exit 1
else
# no lockfile/propertiesfile so continue
# create the lockfile/propertiesfile
input="./$PROPERTIES_VAR"
while IFS= read -r line || [ -n "$line" ]; do
eval "echo $line" >> "$PROPERTIES_HC"
done < "$input"
#Run Process
RUN_MY_PROCESS --overridefile "$PROPERTIES_HC" >> "$LOG_FILE"
#Check Process Ran Okay
if [ "$?" = "0" ]; then
echo "RAN WITHOUT ERROR" >> $LOG_FILE
else
echo "SOME ERROR!" >> $LOG_FILE
fi
# Remove the lockfile/propertiesfile
rm -rf $PROPERTIES_HC
fi
This script seemed to be running fine; however, I recently came across a situation where the RUN_MY_PROCESS element of the script failed, and the script seems to have simply exited, leaving the lockfile in place.
As I understand it, unless I set something like #!/bin/sh -e, the script should carry on regardless of errors. Have I misunderstood how shell error handling works (I am new to this!), or did my shell script crash itself, hence it didn't finish running?
Thanks in advance for any help.
The proper way to handle errors inside your script (i.e. errors that cause your script to crash) is through traps.
You could modify your script as follows:
#your regular script here (lockfile check and creation, as above)
#...
#Run Process
#Run Process
trap 'echo "SOME ERROR" >> "$LOG_FILE" && rm -f "$PROPERTIES_HC"' ERR
RUN_MY_PROCESS --overridefile "$PROPERTIES_HC" >> "$LOG_FILE"
#rest of your script here
#....
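If the main concern is never leaving the lockfile behind, an EXIT trap is a complementary sketch: set it right after the lockfile is created and it removes the file no matter how the script ends, making the explicit rm at the end optional:
trap 'rm -f "$PROPERTIES_HC"' EXIT #cleanup runs on any exit, error or not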
I wrote a program in C++ and now I have a binary. I have also generated a bunch of tests for it. Now I want to automate the testing process with bash. I want to save three things from one execution of my binary:
execution time
exit code
output of the program
Right now I am stuck with a script that only tests that the binary does its job and returns 0, and doesn't save any of the information I mentioned above. My script looks like this:
#!/bin/bash
if [ "$#" -ne 2 ]; then
echo "Usage: testScript <binary> <dir_with_tests>"
exit 1
fi
binary="$1"
testsDir="$2"
for test in $(find $testsDir -name '*.txt'); do
testname=$(basename $test)
encodedTmp=$(mktemp "/tmp/encoded_${testname}.XXXXXX")
decodedTmp=$(mktemp "/tmp/decoded_${testname}.XXXXXX")
printf 'testing on %s...\n' "$testname"
if ! "$binary" -c -f $test -o $encodedTmp > /dev/null; then
echo 'encoder failed'
rm "$encodedTmp"
rm "$decodedTmp"
continue
fi
if ! "$binary" -u -f $encodedTmp -o $decodedTmp > /dev/null; then
echo 'decoder failed'
rm "$encodedTmp"
rm "$decodedTmp"
continue
fi
if ! diff "$test" "$decodedTmp" > /dev/null ; then
echo "result differs with input"
else
echo "$testname passed"
fi
rm "$encodedTmp"
rm "$decodedTmp"
done
I want to save the output of $binary in a variable instead of sending it to /dev/null. I also want to save the execution time, using the bash time keyword.
As you asked for the output to be saved in a shell variable, I tried answering this without using output redirection, which saves output in (temporary) text files that then have to be cleaned up.
Saving the command output
You can replace this line
if ! "$binary" -c -f $test -o $encodedTmp > /dev/null; then
with
if ! output=$("$binary" -c -f $test -o $encodedTmp); then
Using command substitution saves the program output of $binary in the shell variable. Command substitution (combined with shell variable assignment) also passes the program's exit code up to the calling shell, so the conditional if statement still checks whether $binary executed without error.
You can view the program output by running echo "$output".
Saving the time
Without a more sophisticated form of inter-process communication, there is no way for a shell that is a sub-process of another shell to change the variables or the environment of its parent process, so the only way I could save both the time and the program output was to combine them in one variable:
if ! time_output=$( (time "$binary" -c -f "$test" -o "$encodedTmp") 2>&1 ); then
Since time prints its profiling information to stderr, I use the parentheses operator to run the command in a subshell whose stderr can be redirected to stdout. (Note the underscore: hyphens are not valid in bash variable names.) The program output and the output of time can be viewed by running echo "$time_output", which should return something similar to:
<program output>
<blank line>
real 0m0.041s
user 0m0.000s
sys 0m0.046s
You can get the exit status in bash by using $? and print it out with echo $?.
And to catch the output of time, you could use something like this:
{ time sleep 1 ; } 2> time.txt
Or you can save the output of the program and the execution time at once:
(time ls) > out.file 2>&1
You can save the output to a file using output redirection. Just change the first /dev/null line:
if ! "$binary" -c -f $test -o $encodedTmp > /dev/null; then
to
if ! "$binary" -c -f $test -o $encodedTmp > prog_output; then
then change the second and third /dev/null lines respectively:
if ! "$binary" -u -f $encodedTmp -o $decodedTmp >> prog_output; then
if ! diff "$test" "$decodedTmp" >> prog_output; then
To measure execution time, put
start=$(date +%s)
at the top of the script, then
end=$(date +%s)
echo "Execution time in seconds: " $((end-start)) >> prog_output
at the end.
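Putting the pieces together, a sketch that records all three things the question asks for (output, exit code, wall-clock seconds) from a single run, reusing the names from the script above:
start=$(date +%s)
output=$("$binary" -c -f "$test" -o "$encodedTmp" 2>&1) #capture stdout and stderr
status=$? #exit code of $binary
end=$(date +%s)
printf '%s: exit=%d time=%ds\n' "$testname" "$status" "$((end - start))" >> prog_output
printf '%s\n' "$output" >> prog_output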
I have a bash script that loops through a folder and processes all *.hql files. Sometimes one of the hive scripts fails (syntax, resource constraints, etc.), but instead of the script failing, it continues on to the next .hql file.
Is there any way I can stop the bash script from processing the remaining files? Below is my sample bash:
j=0
for i in `ls ${layer}/*.hql`; do
echo "Processing $i ..."
hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i} &
if [ $j -le 5 ]; then
j=$(( j+1 ))
else
wait
j=0
fi
done
I would check the exit status of the previous command and invoke the exit command to break out of the loop:
(( $? != 0 )) && exit 1
Introduce the above line after the hive command and that should do the trick.
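One caveat: in the loop shown in the question, hive is started with &, so $? right after it only reflects whether the background job was launched (almost always 0). A sketch that runs hive in the foreground so the check sees the real exit status, at the cost of the parallelism, which the template answer below preserves via wait:
if ! hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i}; then
echo "Processing $i failed" >&2 #stop at the first failing .hql
exit 1
fi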
Add
set -e
to the top of your script. Note that set -e by itself does not see failures of backgrounded commands such as hive ... &; their exit status only becomes visible through wait, as in the template below.
Use this template for running parallel processes and wait for their completion. Add your date, layer, hiveconf_all and other variables:
#!/bin/bash
set -e
set -o pipefail #propagate a hive failure through the tee pipeline below
#Run parallel processes and write their logs
log_dir=/tmp/my_script_logs
mkdir -p "$log_dir"
for i in `ls ${layer}/*.hql`; do
echo "Processing $i ..."
#Run hive in parallel and redirect to the log file
hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i} 2>&1 | tee "${log_dir}/$(basename "$i").log" &
done
#Now wait for all processes to complete
FAILED=0
for job in `jobs -p`
do
echo "job=$job"
wait $job || let "FAILED+=1"
done
if [ "$FAILED" != "0" ]; then
echo "Execution FAILED! ($FAILED)"
#Do something here, log or send message, etc
exit 1
fi
#All processes are completed successfully!
#Do something here
echo "Done successfully"
Then you will be able to inspect each process log individually.