Retry a command only once when it fails (bash)

for (( i=3; i<5; i++ ))
do
    execute command 1
    if command 2 succeeds, do not run command 1 again (the for loop should continue)
    if command 2 fails, run command 1 once more (retry command 1 only once; after this the for loop should continue)
done
Note that command 2 depends on command 1 and can only be executed after command 1.
For example:
for (( i=3; i<5; i++ ))
do
    echo "$i" >> mytext.txt     # command 1
    check the content of mytext.txt to see whether the value of i was actually added     # command 2
    if it was not added, execute echo "$i" >> mytext.txt (command 1) again, and only once
    if the value of i was added, continue with the next loop iteration
done
Since the "command 1" is quite big and not just an example echo statement here.I do not want to add "command 1" twice .. once outside and once inside the if condition. I want this logic in an optimized way with no redundancy of code.

Per a comment, it sounds like the OP may need to invoke command 1 up to 2 times for a given $i value, but only wants to type command 1 once in the script.
Siddhartha's suggestion to use a function is probably good enough, but depending on the actual command 1 (the OP mentions that it's 'quite big') I'm going to play devil's advocate and assume there could be additional issues with passing some args to the function (e.g., a need to escape some characters ... ??).
The general idea is to have an internal loop that executes at most 2 times, with logic in the loop that allows an 'early' exit (e.g., after just one pass through the loop).
The question is written in pseudo-code, so here is the same idea in (mostly) real bash, with the file check from the example standing in for command 2 ...
for (( i=3; i<5; i++ ))
do
    pass=1                               # reset internal loop counter
    while (( pass <= 2 ))
    do
        echo "$i" >> mytext.txt          # command 1
        if (( pass == 1 )) &&            # after first 'command 1' execution ...
           grep -qx "$i" mytext.txt      # command 2 (approximated here with grep)
        then
            break                        # break out of inner loop; alternatively ...
            # pass=10                    # ensure pass >= 2 to force loop to exit on this pass
        fi
        (( pass++ ))                     # on 1st pass set pass=2 => allows another pass through loop
                                         # on 2nd pass set pass=3 => will force loop to exit
    done
done
done

You can declare a function, like:
# note: avoid naming the function "command" -- that would shadow the shell builtin of the same name
mycommand()
{
    your_command -f params
}
for (( i=3; i<5; i++ ))
do
    if mycommand; then
        echo "success"
    else
        echo "retry"
        mycommand
    fi
done
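A compact variant of the same idea (my sketch, not from either answer above; verify is a hypothetical stand-in for command 2): wrap command 1 in a function and attempt it at most twice, breaking out as soon as the verification succeeds.

command1() { echo "$i" >> mytext.txt; }    # the big command, typed once
verify()   { grep -qx "$i" mytext.txt; }   # hypothetical stand-in for command 2

for (( i=3; i<5; i++ ))
do
    for attempt in 1 2                     # at most two attempts of command 1
    do
        command1
        verify && break                    # stop retrying once command 2 succeeds
    done
done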

Related

bash recursion automatically ends after single level

Why is this make_request function ending just after a single traversal?
make_request(){
    path="${1//' '/'%20'}"
    echo $path
    mkdir -p $HOME/"$1"
    $(curl --output $HOME/"$1"/$FILE_NAME -v -X GET $BASE_URL"/"$API_METHOD"/"$path &> /dev/null)
    # sample response from curl
    # {
    #   "count": 2,
    #   "items": [
    #     {"path": "somepath1", "type": "folder"},
    #     {"path": "somepath2", "type": "folder"},
    #   ]
    # }
    count=$(jq ".count" $HOME/"$1"/$FILE_NAME)
    for (( c=0; c<$count; c++ ))
    do
        child=$(jq -r ".items[$c] .path" $HOME/"$1"/$FILE_NAME);
        fileType=$(jq -r ".items[$c] .type" $HOME/"$1"/$FILE_NAME);
        if [ "$fileType" == "folder" ]; then
            make_request "$child"
        fi
    done
}
make_request "/"
make_request "/" should give the following output:
/folder
/folder/folder1-1
/folder/folder1-1/folder1-2
/folder/folder2-1
/folder/folder2-1/folder2-2
/folder/folder2-1/folder2-3 ...
but I am getting the following:
/folder
/folder/folder1-1
/folder/folder1-1/folder1-2
You are using global variables everywhere. Therefore, the inner call changes the loop variables c and count of the outer call, resulting in bogus behavior.
Minimal example:
f() {
    this_is_global=$1
    echo "enter $1: $this_is_global"
    ((RANDOM%2)) && f "$(($1+1))"
    echo "exit $1: $this_is_global"
}
Running f 1 prints something like
enter 1: 1
enter 2: 2
enter 3: 3
exit 3: 3
exit 2: 3
exit 1: 3
Solution: Make the variables local by writing local count=$(...) and so on. For your loop, you have to put an additional statement local c above the for.
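Applied to the minimal example, the only change needed is the local keyword:

f() {
    local this_is_global=$1   # now each recursive call gets its own copy
    echo "enter $1: $this_is_global"
    ((RANDOM%2)) && f "$(($1+1))"
    echo "exit $1: $this_is_global"
}

With the variable local, each "exit" line prints its own level's value instead of the innermost one.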
As currently written, all variables have global scope. This means that all function calls overwrite and reference the same set of variables, so when a child call returns, the parent finds its variables overwritten by the child; this in turn leads to the type of behavior seen here.
In this particular case the loop variable c leaves the last child call with a value of c=$count, so all parent loops now see c=$count and exit; it actually gets a bit more interesting because count is also changing with each function call. The earlier comment to add set -x (aka enable debug mode) before the first function call will show what's going on with each variable at each call.
What the OP wants to do is ensure each function call works with a local copy of each variable. The easiest approach is to add a local <variable_list> at the top of the function, listing all variables that should be treated as 'local', e.g.:
local path count c child fileType
Change the variables to have local scope instead of global:
...
local count;                              # <------ VARIABLE MADE LOCAL
count=$(jq ".count" $HOME/"$1"/$FILE_NAME)
local c;                                  # <------ VARIABLE MADE LOCAL
for (( c=0; c<$count; c++ ))
do
    ....
done
...
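Putting it together, the fixed function might look like this (a sketch: everything is as in the question apart from the local declaration, some quoting, and dropping the stray command substitution around curl):

make_request() {
    local path count c child fileType       # recursion-safe: every call gets its own copies
    path="${1// /%20}"
    echo "$path"
    mkdir -p "$HOME/$1"
    curl --output "$HOME/$1/$FILE_NAME" -v -X GET "$BASE_URL/$API_METHOD/$path" &> /dev/null
    count=$(jq ".count" "$HOME/$1/$FILE_NAME")
    for (( c=0; c<count; c++ ))
    do
        child=$(jq -r ".items[$c].path" "$HOME/$1/$FILE_NAME")
        fileType=$(jq -r ".items[$c].type" "$HOME/$1/$FILE_NAME")
        if [ "$fileType" == "folder" ]; then
            make_request "$child"
        fi
    done
}
make_request "/"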

Tcsh Script Last Exit Code ($?) value is resetting

I am running the following script using tcsh. In my while loop, I'm running a C++ program that I created, which returns a different exit code depending on certain things. While it returns an exit code of 0, I want the script to increment counter and run the program again.
#!/bin/tcsh
echo "Starting the script."
set counter = 0
while ($? == 0)
    @ counter++
    ./auto $counter
end
I have verified that my program is definitely returning with exit code = 1 after a certain point. However, the condition in the while loop keeps evaluating to true for some reason, and the loop keeps running.
I found that if I stick the following line at the end of my loop and then replace the condition check in the while loop with this new variable, it works fine:
while ($return_code == 0)
    @ counter++
    ./auto $counter
    set return_code = $?
end
Why is it that I can't just use $? directly? Is some operation performed under the hood between running my custom program and checking the loop condition that causes $? to change value?
That is peculiar.
I've altered your example to something that I think illustrates the issue more clearly. (Note that $? is an alias for $status.)
#!/bin/tcsh -f
foreach i (1 2 3)
    false
    # echo false status=$status
end
echo Done status=$status
The output is
Done status=0
If I uncomment the echo command in the loop, the output is:
false status=1
false status=1
false status=1
Done status=0
(Of course the echo in the loop would break the logic anyway, because the echo command completes successfully and sets $status to zero.)
I think what's happening is that the end that terminates the loop is executed as a statement, and it sets $status ($?) to 0.
I see the same behavior with both tcsh and bsd-csh.
Saving the value of $status in another variable immediately after the command is a good workaround -- and arguably just a better way of doing it, since $status is extremely fragile, and will almost literally be clobbered if you look at it.
Note that I've added a -f option to the #! line. This prevents tcsh from sourcing your init file(s) (.cshrc or .tcshrc) and is considered good practice. (That's not the case for sh/bash/ksh/zsh, which assign a completely different meaning to -f.)
A digression: I used tcsh regularly for many years, both as my interactive login shell and for scripting. I would not have anticipated that end would set $status. This is not the first time I've had to find out how tcsh or csh behaves by trial and error and been surprised by the result. It is one of the reasons I switched to bash for interactive and scripting use. I won't tell you to do the same, but you might want to read Tom Christiansen's classic "csh.whynot".
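Putting the OP's own workaround together with the -f advice, the fixed script might look like this (a sketch; auto is the OP's program):

#!/bin/tcsh -f
echo "Starting the script."
set counter = 0
set return_code = 0
while ($return_code == 0)
    @ counter++
    ./auto $counter
    set return_code = $status    # capture immediately, before end clobbers it
end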
Slightly shorter/simpler explanation:
Recall that in tcsh/csh EACH command (including shell builtins) returns a status. Therefore $? (an alias for $status) is updated by if statements, foreach loops, assignments, ...
From a practical point of view, it is better to limit direct use of $? to an if statement right after the command execution:
do-something
if ( $status == 0 ) then
    ...
endif
In all other cases, capture the status in a variable, and use only that variable:
do-something
set something_status = $?
if ( $something_status == 0 ) then
    ...
endif
To expand on $status: even the condition test in an if statement modifies the status, so the repeated tests on $status below will never hit the '$status == 5' branch, even when do-something returns a status of 5:
do-something
if ( $status == 2 ) then
    echo FOO
else if ( $status == 5 ) then
    echo BAR
endif
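The fixed version captures the status once, immediately after the command:

do-something
set rc = $status         # capture before any other command can change it
if ( $rc == 2 ) then
    echo FOO
else if ( $rc == 5 ) then
    echo BAR
endif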

What's the difference between these two bash parallelization syntax?

Value "4" below is the number of CPU threads. Idea is to run the tasks in batch of 4 and wait until the current batch is finished before starting the next batch.
Syntax 1:
while read something; do
    ((++i%4==0)) && wait
    (
        task using something as input;
    ) &
done < input_file.txt
Syntax 2:
while read something; do
    ((i=i%4)); ((i++==0)) && wait
    (
        task using something as input;
    ) &
done < input_file.txt
To me they both work the same, except that the second one is longer. But when running in the cloud (AWS Ubuntu 14.04), only syntax 1 worked. Syntax 2 threw a generic syntax error at the ((i=i%4)); step, and it became a mystery.
"The second one is longer" doesn't help since you used pseudocode.
Maybe this will help:
while read x; do ((i=++i%4)) || wait; sleep $x & done < input_file.txt
My input_file.txt:
10
9
8
7
6
5
4
3
2
1
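Expanded over several lines with comments (my reading of the one-liner above; sleep stands in for the real task):

i=0
while read -r x; do
    (( i = ++i % 4 )) || wait   # when i wraps to 0 (every 4th iteration) the expression
                                # evaluates to 0, which counts as "false", so || fires
                                # and we wait for the jobs started so far
    sleep "$x" &                # stand-in for the real task, run in the background
done < input_file.txt
wait                            # don't forget the final, possibly partial, batch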

which loop in bash script

I am quite new to bash, but I need to create a simple script which will do the steps below:
Wait 1 minute
A) bash script will use CM to generate result file
B) check row 8 in result file (to know if Administrator is running any jobs or not)
if NO jobs:
    C) bash script will use CM to start cube refresh
    D) wait 1 minute
    D1) remove result file
    E) generate result file
    E1) read row 8
    if no jobs:
        F) remove result file
        G) EXIT
    if yes:
        I) go to D)
if YES:
    E) wait 1 minute
    F) remove result file
    go to A)
As bash doesn't have goto (or rather, goto should not be used), I tried a few loops, but I am not sure which I should choose.
I know how to:
- start the cube refresh (step C)
- generate the result file (steps A & E)
- check line 8:
sed '8!d' /abc_uat/cmlogs/adm_jobs_u1.log
The condition for the loops will probably be similar to this: != 'Owner = Administrator'
But how do I avoid goto?
I tried a while do loop, but I am not sure what I should add in case of a false condition. I added else, but I'm not sure about it:
sleep 60
GENERATE RESULT FILE with admin jobs (which admin runs inside a 3rd party tool)
while [ sed '8!d' admin_jobs_result_file.log != "Owner = Administrator" ];
do
    # NO admin jobs
    START CUBE REFRESH (it will start an admin job)
    sleep 60
    REMOVE RESULT FILE (OLD)
    GENERATE RESULT FILE
    while [ sed '8!d' admin_jobs_result_file.log = "Owner = Administrator" ];
    # admin is still running cube refresh
    do
        sleep 60
        REMOVE RESULT FILE (OLD)
        GENERATE RESULT FILE
        # it should continue checking every minute whether admin is still running
        # the cube refresh job, so I hope it will go back to the while condition
    else
    done
else
    # admin is running something
    sleep 60
    REMOVE RESULT FILE (OLD)
    GENERATE RESULT FILE
    # it should check the result file again, but I think it will finish the loop
done
You can replace goto with a loop; a while loop, for example.
Syntax
while <condition>
do
    action
done
Check out cron jobs. Delegate the "wait a minute" part to cron if you can: cron should worry about running your script in a timely fashion (see the sample crontab entry below). You may consider writing two scripts instead of one.
Do you really need to create a result file? Do you know about piping? (No offense; just mentioning it because you said you were fairly new to bash.)
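For instance, a crontab entry like the following (the script name is hypothetical) runs a checker script every minute, so the script itself never needs to sleep:

* * * * * /path/to/check_admin_jobs.sh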
Hopefully this is self-explanatory.
result_file=admin_jobs_result_file.log
function generate {
    logmsg sleeping
    sleep 60
    rm -f "$result_file"
    logmsg generating
    # use CM to generate result file
}
function owner_is_administrator {
    # if line 8 contains "Owner = Administrator", exit success
    # else exit failure
    sed -n '8 {/Owner = Administrator/ q 0; q 1}' "$result_file"
}
function logmsg { date "+%Y-%m-%d %T -- $*"; }
##############
generate
while owner_is_administrator; do
    generate
done
# at this point, line 8 does NOT contain "Owner = Administrator"
logmsg start cube refresh
# use CM to start cube refresh
generate
while owner_is_administrator; do
    generate
done
logmsg Done
Looks like AIX's sed can't exit with a specified status. Try this instead:
function owner_is_administrator {
    # if line 8 contains "Owner = Administrator", exit success
    # else exit failure
    awk 'NR == 8 {if (/Owner = Administrator/) {exit 0} else {exit 1}}' "$result_file"
}
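Another portable option (my suggestion, not part of the original answer): print line 8 and let grep supply the exit status:

function owner_is_administrator {
    # grep -q exits 0 if the pattern is found on line 8, 1 otherwise
    sed -n '8p' "$result_file" | grep -q "Owner = Administrator"
}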

Is it possible to run two loops at the same time?

So I have a project in my cyber security class to make a bash game. I'd like to make one of those medieval games where you build farms and mines to get resources; well, I'd like to make something like that. To do that I have to have two while loops running, like this:
while [ blah ]; do
    blah
done
while [ blah ]; do
    blah
done
Is it possible to run two while loops at the same time, and if I am writing it wrong, how do I write it?
If you put a & after each done, like done &, you will create new processes in the background that run the while loops. You will have to be careful to realize what this means, though, since the bash script will continue executing commands after creating those new processes, even if they are not finished. You might use the wait command to prevent this from happening, but I'm not too used to using it, so I cannot vouch for it. A minimal sketch of the idea follows.
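A minimal sketch of that approach (my example; each loop just prints a few heartbeats):

#!/bin/bash
i=0
while (( i++ < 3 )); do
    echo "farm tick $i"; sleep 1
done &                           # first loop runs in the background
farm_pid=$!

j=0
while (( j++ < 3 )); do
    echo "mine tick $j"; sleep 1
done &                           # second loop also runs in the background
mine_pid=$!

wait "$farm_pid" "$mine_pid"     # block until both background loops exit
echo "both loops finished"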
Yes, but you will have to fork a new process for each while loop to execute in. Technically, they won't both run at the same time (unless you consider multiple cores, but even that isn't guaranteed).
Below is a link to how to fork multiple processes using bash.
Forking / Multi-Threaded Processes | Bash
Since you mention this is a school project, I'll stop here lest I help you "not learn".
First things first: wrap the loop in a function and then fork it.
This is done when you want to split a process. For example, if I'm processing a CSV with 160,000+ lines, a single process/"thread" will take hours. If you wrap the loop in a function and simply fork it, you will have x amount of processes running; then add a wait/kill-defunct-processes loop and you are done. Here is what you are looking at.
while loop with nested loop:
function jobA() {
    while read STR
    do
        touch "${1}_temp"
        key=$(IFS="|"; set -- $STR; echo $1)
        for each in "${blah[@]}"
        do
            : # echo "$each"   (see the fuller loop body below)
        done
    done < "$1"
}
for i in "${blah[@]}"
do
    echo "$i"
    jobA "$i" &                  # fork: run the function in the background
    child_pid=$!
    parent_pid=$$
    PIDS+=($child_pid)
    echo "forked process $child_pid with parent $parent_pid"
done
for pid in "${PIDS[@]}"
do
    wait $pid
done
echo "all jobs done"
sleep 1
Now that the loop is wrapped in a function, the code above shows a FORKED loop: you get parallel processes running in the background, and wait waits for ALL of them to complete before proceeding. This is important for some types of scripts.
Also, DO NOT use nested FOR loops written C-style, for example:
for (( i = 1; i <= 5; i++ ))   ### Outer for loop ###
This is VERY slow. Use this type instead:
for each in "${blah[@]}"
do
    # echo "$each"
    if [ "$key" = "$each" ]; then
        # echo "less than $keyValNeed..."
        echo $STR >> "${1}_temp"
    fi
done
You could also use nested for loops:
for (( i = 1; i <= 5; i++ ))      ### Outer for loop ###
do
    for (( j = 1; j <= 5; j++ ))  ### Inner for loop ###
    do
        echo -n "$i "
    done
    echo ""                       #### print the new line ###
done
EDIT: I thought you meant nested loops, but reading again you said running both loops "at the same time". I will leave my answer here, though.
