Is it possible to run two loops at the same time? - bash

So I have a project in my cyber security class to make a bash game. I'd like to make one of those medieval games where you build farms and mines to gather resources, or at least something along those lines. To do that I need to have two while loops running at once. Like this
while [ blah ]; do
blah
done
while [ blah ]; do
blah
done
Is it possible to run two while loops at the same time, and if I'm writing it wrong, how should I write it?

If you put a & after each done, like done &, you will create new processes in the background that run the while loops. You have to be careful to realize what this means, though, since the bash script will continue executing commands after creating those new processes, even if they have not finished. You might use the wait command to prevent this, but I'm not too used to using it, so I can't vouch for it.
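For illustration, a minimal sketch of that pattern (the loop bodies and timings are placeholders, not from the question):
#!/bin/bash
while true; do        # first loop, e.g. farms producing
    echo "farm tick"
    sleep 1
done &                # the & forks this loop into the background
while true; do        # second loop, e.g. mines producing
    echo "mine tick"
    sleep 2
done &
wait                  # block until both background loops exit (here: until interrupted)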

Yes, but you will have to fork a new process for each while loop to execute in. Technically, they won't both run at the same time (unless you count multiple cores, and even that isn't guaranteed).
Below is a link to how to fork multiple processes using bash.
Forking / Multi-Threaded Processes | Bash
Since you mention this is a school project, I'll stop here lest I help you "not learn".

First things first: wrap the loop in a function and then fork it.
This is done when you want to split work across processes. For example, if I'm processing a CSV with 160,000+ lines, a single process/"thread" will take hours. If you wrap the loop in a function and simply fork it, you will have x processes running; then add a wait/kill-defunct-processes loop and you are done. Here is what you are looking at.
while loop with nested loop:
function jobA() {
    while read -r STR; do
        touch "${1}_temp"
        key=$(IFS="|"; set -- $STR; echo "$1")
        for each in "${blah[@]}"; do
            : # placeholder; see the filled-in comparison loop below
        done
    done < "$1"
}
for i in "${blah[@]}"; do
    echo "$i"
    jobA "$i" &          # fork the function defined above
    child_pid=$!
    parent_pid=$$
    PIDS+=("$child_pid")
    echo "forked process $child_pid with parent $parent_pid"
done
for pid in "${PIDS[@]}"; do
    wait "$pid"
done
echo "all jobs done"
sleep 1
Now that the loop is wrapped, this is an example of a FORKED loop: you will have parallel processes running in the background, and wait will block until ALL of them complete before proceeding. This is important for some types of scripts.
Also, DO NOT use nested FOR loops written C style like the one presented above, for example:
for (( i = 1; i <= 5; i++ )) ### Outer for loop ###
This is VERY slow. Use THIS type instead:
for each in "${blah[@]}"; do
    #echo "$each"
    if [ "$key" = "$each" ]; then
        # echo "less than $keyValNeed..."
        echo "$STR" >> "${1}_temp"
    fi
done

You could also use nested for loops
for (( i = 1; i <= 5; i++ ))     ### Outer for loop ###
do
    for (( j = 1; j <= 5; j++ )) ### Inner for loop ###
    do
        echo -n "$i "
    done
    echo ""                      #### print the new line ###
done
EDIT: I thought you meant nested loops, but reading again, you said running both loops "at the same time". I will leave my answer here though.

Related

Parallel subshells doing work and report status

I am trying to do work in all subfolders in parallel and to report a status per folder in bash once each one is done.
Suppose I have a work function which can return a couple of statuses:
# param $1 is the folder
# can return 1 on fail, 2 on success, 3 on nothing happened
work(){
    cd "$1"
    # some update thing
    # returns 1, 2, or 3
}
Now I call this in my wrapper function:
do_work(){
    while read -r folder; do
        tput cup "${row}" 20
        echo -n "${folder}"
        (
            ret=$(work "${folder}")
            tput cup "${row}" 0
            [[ $ret -eq 1 ]] && echo " \e[0;31mupdate failed \uf00d\e[0m"
            [[ $ret -eq 2 ]] && echo " \e[0;32mupdated \uf00c\e[0m"
            [[ $ret -eq 3 ]] && echo " \e[0;32malready up to date \uf00c\e[0m"
        ) &>/dev/null
        pids+=("${!}")
        ((++row))
    done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
    echo "waiting for pids ${pids[*]}"
    wait "${pids[@]}"
}
What I want is that it prints all the folders, one per line, updates them independently from each other in parallel, and, when each one is done, writes its status into that line.
However, I am unsure which subshell is writing what, which ones I need to capture, and how.
My attempt above is currently not writing correctly, and not in parallel.
If I do get it to work in parallel, I get those [1] <PID> and [1] + 3156389 done ... job-control messages messing up my screen.
If I put the work itself in a subshell, I don't have anything to wait for.
If I then collect the pids, I don't get the return code needed to print the status text.
I did have a look at GNU Parallel, but I don't think it can give me this behaviour. (I think I could hack it so that finished jobs are printed, but I want all 'running' jobs printed, with the finished ones amended.)
Assumptions/understandings:
a separate child process is spawned for each folder to be processed
the child process generates messages as work progresses
messages from child processes are to be displayed in the console in real time, with each child's latest message being displayed on a different line
The general idea is to set up a means of interprocess communication (IPC): a named pipe, a normal file, a queuing/messaging system, sockets (plenty of ideas available via a web search on bash interprocess communication); the children write to this channel while the parent reads from it and issues the appropriate tput commands.
One very simple example using a normal file:
> status.msgs # initialize our IC file
child_func () {
# Usage: child_func <unique_id> <other> ... <args>
local i
for ((i=1;i<=10;i++))
do
sleep $1
# each message should include the child's <unique_id> ($1 in this case);
# parent/monitoring process uses this <unique_id> to control tput output
echo "$1:message - $1.$i" >> status.msgs
done
}
clear
( child_func 3 & )
( child_func 5 & )
( child_func 2 & )
while IFS=: read -r child msg
do
    tput cup "$child" 10
    echo "$msg"
done < <(tail -f status.msgs)
NOTES:
the (child_func 3 &) construct is one way to eliminate the OS message re: 'background process completed' from showing up in stdout (there may be other ways but I'm drawing a blank at the moment)
when using a file (normal, pipe) OP will want to look at a locking method (flock?) to ensure messages from multiple children don't stomp on each other (a minimal sketch follows at the end of this answer)
OP can get creative with the format of the messages printed to status.msgs in conjunction with parsing logic in the parent's while loop
assuming variable width messages OP may want to look at appending a tput el on the end of each printed message in order to 'erase' any characters leftover from a previous/longer message
exiting the loop could be as simple as keeping count of the number of child processes that send a message <id>:done, or keeping track of the number of children still running in the background, or ...
Running this at my command line generates 3 separate lines of output that are updated at various times (based on the sleep $1):
# no output on line #1
message - 2.10    # messages change from 2.1 to 2.2 to ... to 2.10
message - 3.10    # messages change from 3.1 to 3.2 to ... to 3.10
# no output on line #4
message - 5.10    # messages change from 5.1 to 5.2 to ... to 5.10
NOTE: comments not actually displayed in console
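Picking up the flock note above, a minimal sketch of how each child could serialize its appends (the lock-file name and fd number are illustrative choices):
append_msg () {
    # take an exclusive lock so concurrent children don't interleave partial lines
    {
        flock -x 200
        echo "$1" >> status.msgs
    } 200>> status.msgs.lock     # lock file is separate from the data file
}
append_msg "3:message - 3.1"     # e.g. called from inside child_func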
Based on @markp-fuso's answer:
printer() {
    while IFS=$'\t' read -r child msg
    do
        tput cup "$child" 10
        echo "$child $msg"
    done
}
clear
parallel --lb --tagstring "{%}\t{}" work ::: folder1 folder2 folder3 | printer
echo
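One caveat (an assumption about the setup, worth verifying): for parallel to run a shell function like work, the function generally has to be exported to the child shells first, e.g. in bash:
export -f work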
You can't control exit statuses like that. Try this instead: rework your work function to echo its status:
work(){
    cd "$1"
    # some update thing, &> /dev/null so it produces no output
    echo "${1}_$status"    # status = 1, 2, or 3
}
And then set up data collection from all folders like so:
data=$(
    while read -r folder; do
        work "$folder" &
    done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
    wait
)
echo "$data"

bash when is a variable read during a function or loop

In a bash script, take the extreme example below where the call/start of myFn() happens 5 minutes before the echo of $inVar -> $myvar. During the time between the function start and the use of $myvar, it is updated.
myvar=$( ... # some json
    alpha=hello )
myFn(){
    local -n inVar=$1
    # wait for 5 mins .... :)
    echo $inVar
}
myFn "myvar"
if [ -z $var ]
then
    # wait 5 mins
    # but life goes on and this is non-blocking,
    # so other parts of the script are running
    echo $myvar
fi
myvar=$( ... # after 2 mins, update alpha
    alpha=world
)
As $myvar is passed to myFn(), when is $myvar actually read:
at myFn call time (when the function is called/starts)
at the reference copy time inVar=$1
when the echo $inVar occurs
And is this the same for other constructs such as while, if, etc.?
You're setting inVar as a nameref, so the value is not known until the variable is expanded at the echo statement.
HOWEVER
In your scenario, myFn is "non-blocking", meaning you launch it in the background. In this case, the subshell gets a copy of the current value of myvar; if myvar is updated subsequently, that update happens in the current shell, not in the background shell.
To demonstrate:
$ bash -c '
fn() { local -n y=$1; sleep 2; echo "in background function, y=$y"; }
x=5
fn x &
x=10
wait
'
in background function, y=5
TL;DR: namerefs and background processes don't mix well.
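A minimal sketch of the corresponding workaround, passing the value rather than a name (variable names mirror the demo above):
fn() { local y=$1; sleep 2; echo "in background function, y=$y"; }
x=5
fn "$x" &   # the value 5 is fixed at fork time, no nameref involved
x=10
wait        # prints: in background function, y=5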

Tcsh Script Last Exit Code ($?) value is resetting

I am running the following script using tcsh. In my while loop, I'm running a C++ program that I created, which returns a different exit code depending on certain things. While it returns an exit code of 0, I want the script to increment counter and run the program again.
#!/bin/tcsh
echo "Starting the script."
set counter = 0
while ($? == 0)
    @ counter ++
    ./auto $counter
end
I have verified that my program definitely returns exit code 1 after a certain point. However, the condition in the while loop keeps evaluating to true for some reason, so it keeps running.
I found that if I stick the following line at the end of my loop and replace the condition check in the while loop with this new variable, it works fine.
while ($return_code == 0)
    @ counter ++
    ./auto $counter
    set return_code = $?
end
Why can't I just use $? directly? Is some other operation performed under the hood between running my program and checking the loop condition that causes $? to change value?
That is peculiar.
I've altered your example to something that I think illustrates the issue more clearly. (Note that $? is an alias for $status.)
#!/bin/tcsh -f
foreach i (1 2 3)
    false
    # echo false status=$status
end
echo Done status=$status
The output is
Done status=0
If I uncomment the echo command in the loop, the output is:
false status=1
false status=1
false status=1
Done status=0
(Of course the echo in the loop would break the logic anyway, because the echo command completes successfully and sets $status to zero.)
I think what's happening is that the end that terminates the loop is executed as a statement, and it sets $status ($?) to 0.
I see the same behavior with both tcsh and bsd-csh.
Saving the value of $status in another variable immediately after the command is a good workaround -- and arguably just a better way of doing it, since $status is extremely fragile, and will almost literally be clobbered if you look at it.
Note that I've added a -f option to the #! line. This prevents tcsh from sourcing your init file(s) (.cshrc or .tcshrc) and is considered good practice. (That's not the case for sh/bash/ksh/zsh, which assign a completely different meaning to -f.)
A digression: I used tcsh regularly for many years, both as my interactive login shell and for scripting. I would not have anticipated that end would set $status. This is not the first time I've had to find out how tcsh or csh behaves by trial and error and been surprised by the result. It is one of the reasons I switched to bash for interactive and scripting use. I won't tell you to do the same, but you might want to read Tom Christiansen's classic "csh.whynot".
Slightly shorter/simpler explanation:
Recall that in tcsh/csh, EVERY command (including shell builtins) returns a status. Therefore $? (an alias for $status) is updated by if statements, foreach loops, assignments, ...
From a practical point of view, it is better to limit direct use of $? to an if statement immediately after the command execution:
do-something
if ( $status == 0 ) then
    ...
endif
In all other cases, capture the status in a variable, and use only that variable:
do-something
set something_status = $?
if ( $something_status == 0 ) then
    ...
endif
To expand on $status: even the condition test in an if statement modifies the status, so the following repeated test of $status will never hit '$status == 5', even when do-something returns a status of 5:
do-something
if ( $status == 2 ) then
    echo FOO
else if ( $status == 5 ) then
    echo BAR
endif
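Putting that advice together, a corrected version of the snippet above that tests a saved copy of the status:
do-something
set rc = $status
if ( $rc == 2 ) then
    echo FOO
else if ( $rc == 5 ) then
    echo BAR
endif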

bash multiprocess - get results and assign to array

I want to get results from child processes using multiprocessing, and then assign those results into an array variable.
My code is like this:
for (( i=0; i<${#servers[@]}; i++ ));
do
    output_strings[$i]=$(ls) &
    pids[${i}]=$!
done
for pid in ${pids[*]}; do
    wait $pid
done
echo ${#output_strings[@]}
However, the results were not assigned into the array.
Actually, if I change output_strings[$i]=$(ls) & to echo $(ls) &, it works.
How can I assign these results?
Don't send the assignment to the background. Send the command itself to the background:
output_strings[$i]=$(ls &)
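If that doesn't give real parallelism for a slow command, another common sketch (not from the answer above; ls stands in for the real per-server command) is to let each background child write to its own temp file and collect the results after wait:
tmpdir=$(mktemp -d)                       # scratch area for per-child output
for (( i=0; i<${#servers[@]}; i++ )); do
    ls > "$tmpdir/$i" &                   # stand-in for the real command
    pids[$i]=$!
done
wait "${pids[@]}"                         # let every child finish
for (( i=0; i<${#servers[@]}; i++ )); do
    output_strings[$i]=$(< "$tmpdir/$i")  # assignment happens in this shell
done
rm -rf "$tmpdir"
echo "${#output_strings[@]}"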

why doesn't this timer work?

I'm trying to make a script that runs a seconds counter (later I want to add minutes too), but so far it just keeps echoing 0, 0, 0, 0 over and over. :\
#!/bin/bash
seconds=0
count()
{
    export seconds=$[seconds + 1]
    sleep 1
    count
}
count &
N=$!
trap "kill $N; exit 0;" 2
while true; do
    echo $seconds
    sleep 1
done
The & makes it run in a subshell, which means it has its own copy of the script's variables, independent of the current script. Find another way (or another language) to do this.
Ignacio's answer explains that your subshell's environment is not visible to your parent process.
One way to create slaves like this is co-processes (with coproc in zsh and newer bash or with special syntax in ksh). Your bash probably doesn't support this yet.
Here's a variation on your idea that uses signals to send the updates to the parent. I've retained your basic structure where it doesn't conflict:
count() {
    parent=$1
    kill -ALRM $parent
    sleep 1
    count $parent
}
trap 'seconds=$((seconds + 1))' ALRM
count $$ &
trap "kill $!; exit 0" INT
while true
do
    echo $seconds
done
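For comparison, a minimal single-process sketch using bash's builtin SECONDS variable, which sidesteps the subshell-variable problem entirely:
#!/bin/bash
SECONDS=0           # bash itself increments this variable once per second
while true; do
    echo "$SECONDS"
    sleep 1
done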
