How many processes are created in this snippet using fork?

Can anyone help me understand how this snippet creates 19 processes?
main()
{
    fork();
    fork() && fork() || fork();
    fork();
}

Try to draw a tree of the process creation and study/remember the following points:
P1. fork() returns the child's pid (which is greater than 0) to the calling process
and returns 0 in the child process.
P2. You need to know how the
expression A() && B() || C() is evaluated; for example, if A() returns
0 (false), B() will not be evaluated, because 0 && anything is
always 0.
Now, let's label the calls for ease of reference:
main()
{
    fork() /*[1]*/;
    fork() /*[2]*/ && fork() /*[3]*/ || fork() /*[4]*/;
    fork() /*[5]*/;
}
I will draw the first level of process creation (and a bit of second level):
           [0]
      /   /   \   \
    [1] [2]  [3]  [5]
   /  |  \
 [2] [3] [5]
The above tree means that process [0] (the initial process) executes the fork() calls numbered 1, 2, 3, and 5. Why doesn't process [0] run fork()[4]? Because fork()[2] && fork()[3] already evaluates to true (both return a nonzero pid in the parent), so there is no point in evaluating fork()[4].
Apply the same reasoning to the process created by fork()[1] on the second level to see why it does not call fork()[4] either.
You can complete the process creation tree by applying P1 and P2 at each level of the process creation tree.
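To check the final count without drawing the whole tree, here is a small Python sketch (Python is used purely for illustration; the snippet itself is C). It models points P1 and P2: each fork() splits one process into a parent (nonzero pid, true) and a child (0, false), and && / || short-circuit accordingly:

```python
def procs_after_line():
    """Processes resulting from ONE process executing:
       fork() && fork() || fork()
    modeling short-circuit evaluation."""
    total = 0
    for a in (True, False):        # first fork: parent sees pid > 0, child sees 0
        if a:                      # && evaluates the second fork
            for b in (True, False):
                if b:
                    total += 1     # a && b is true -> || short-circuits, no third fork
                else:
                    total += 2     # a && b is false -> third fork splits this process
        else:
            total += 2             # && short-circuits; || still runs the third fork
    return total

assert procs_after_line() == 5
total = 2 * procs_after_line() * 2  # plain fork() before and after double the count
print(total, "processes in total,", total - 1, "created by fork")
```

The plain fork() on the first and last lines each double the process count, and each process reaching the middle line becomes 5, giving 2 * 5 * 2 = 20 processes, i.e. 19 new ones.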

Remember that fork() has a return value: the parent receives the child's PID (always greater than 0) and the child receives 0. So in the parent, the && and || operands evaluate to true, in which case more processes are forked along the short-circuit paths. Hope that helps.


How can I implement the following loop in Erlang?

I have the following pseudocode:
for ( int i = 0; i < V; i++ )
{
    for ( int j = 0; j < V; j++ )
    {
        if ( ( i != j ) && ( tuple {i,j} belongs to E ) )
        {
            R[i] := {i,j};
        }
    }
}
I want to parallelise this code using Erlang. How can I achieve the same thing? I am new to Erlang...
Edit:
I know that the following code runs both the calls to say/2 concurrently:
-module(pmap).
-export([say/2, start_concurrency/2]).

say(_, 0) ->
    io:format("Done ~n");
say(Value, Times) ->
    io:format("Hello ~n"),
    say(Value, Times-1).

start_concurrency(Value1, Value2) ->
    spawn(pmap, say, [Value1, 3]),
    spawn(pmap, say, [Value2, 3]).
However, here we are hardcoding the calls. So, suppose I want to call say 1000 times: do I need to write spawn(pmap, say, [ValueX, 3]) 1000 times? I can use recursion, but won't that give sequential performance?
Edit:
I tried the following code, where I aim to create 3 threads, where each thread runs a say function. I want to run these 3 say functions concurrently (please comment in the box for more clarification):
-module(pmap).
-export([say/1, test/1, start_concurrency/1]).

say(0) ->
    io:format("Done ~n");
say(Times) ->
    io:format("Hello ~p ~n", [Times]),
    say(Times-1).

test(0) ->
    spawn(pmap, say, [3]);
test(Times) ->
    spawn(pmap, say, [3]),
    test(Times-1).

start_concurrency(Times) ->
    test(Times).
Is this code correct?
I want to run these 3 say functions concurrently. Is this code
correct?
You can get rid of your start_concurrency(N) function because it doesn't do anything extra; call test(N) directly instead.
I aim to create 3 threads
In Erlang, you create processes.
In Erlang, the conventional indent is 4 spaces, not 2.
Don't put blank lines between the clauses of a single function definition.
If you want to see concurrency in action, then there has to be some waiting in the tasks you are running concurrently. For example:
-module(a).
-compile(export_all).

say(0) ->
    io:format("Process ~p finished.~n", [self()]);
say(Times) ->
    timer:sleep(rand:uniform(1000)), %% perhaps waiting to receive data from an http request
    io:format("Hello ~p from process ~p~n", [Times, self()]),
    say(Times-1).

loop(0) ->
    spawn(a, say, [3]);
loop(Times) ->
    spawn(a, say, [3]),
    loop(Times-1).
In the shell:
3> c(a).
a.erl:2: Warning: export_all flag enabled - all functions will be exported
{ok,a}
4> a:loop(3).
<0.84.0>
Hello 3 from process <0.82.0>
Hello 3 from process <0.81.0>
Hello 2 from process <0.82.0>
Hello 3 from process <0.83.0>
Hello 2 from process <0.81.0>
Hello 3 from process <0.84.0>
Hello 2 from process <0.83.0>
Hello 1 from process <0.81.0>
Process <0.81.0> finished.
Hello 1 from process <0.82.0>
Process <0.82.0> finished.
Hello 2 from process <0.84.0>
Hello 1 from process <0.83.0>
Process <0.83.0> finished.
Hello 1 from process <0.84.0>
Process <0.84.0> finished.
5>
If there is no random waiting in the tasks that you are running concurrently, then the tasks will complete sequentially:
-module(a).
-compile(export_all).

say(0) ->
    io:format("Process ~p finished.~n", [self()]);
say(Times) ->
    %%timer:sleep(rand:uniform(1000)),
    io:format("Hello ~p from process ~p~n", [Times, self()]),
    say(Times-1).

loop(0) ->
    spawn(a, say, [3]);
loop(Times) ->
    spawn(a, say, [3]),
    loop(Times-1).
In the shell:
5> c(a).
a.erl:2: Warning: export_all flag enabled - all functions will be exported
{ok,a}
6> a:loop(3).
Hello 3 from process <0.91.0>
Hello 3 from process <0.92.0>
Hello 3 from process <0.93.0>
Hello 3 from process <0.94.0>
<0.94.0>
Hello 2 from process <0.91.0>
Hello 2 from process <0.92.0>
Hello 2 from process <0.93.0>
Hello 2 from process <0.94.0>
Hello 1 from process <0.91.0>
Hello 1 from process <0.92.0>
Hello 1 from process <0.93.0>
Hello 1 from process <0.94.0>
Process <0.91.0> finished.
Process <0.92.0> finished.
Process <0.93.0> finished.
Process <0.94.0> finished.
7>
When there is no random waiting in the tasks that you are running concurrently, then concurrency provides no benefit.
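The effect is not specific to Erlang. As an analogy only (not Erlang code), here is a minimal Python sketch using threads: the workers' output interleaves only because each worker spends time waiting between steps, just like the timer:sleep call above:

```python
import threading
import time
import random

def say(times, log):
    # Count down like the Erlang say/1, sleeping a random bit each step.
    for t in range(times, 0, -1):
        time.sleep(random.uniform(0, 0.01))  # stand-in for waiting on I/O
        log.append((threading.current_thread().name, t))

log = []
workers = [threading.Thread(target=say, args=(3, log), name="w%d" % i)
           for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(log)  # 9 entries; entries from different workers are usually interleaved
```

Remove the sleep and each worker tends to finish its whole countdown before the next one is scheduled, mirroring the sequential-looking Erlang output above.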

Executing a TCL script from line N

I have a TCL script that, say, has 30 lines of automation code, which I am executing in the dc shell (Synopsys Design Compiler). I want to stop and exit the script at line 10, exit the dc shell, and bring it back up again after performing a manual review. However, this time I want to run the script starting from line number 11, without having to execute the first 10 lines.
Instead of having two scripts, one which contains code till line number 10 and the other having the rest, I would like to make use of only one script and try to execute it from, let's say, line number N.
Something like:
source a.tcl -line 11
How can I do this?
If you have Tcl 8.6+ and if you consider re-modelling your script on top of a Tcl coroutine, you can realise this continuation behaviour in a few lines. This assumes that you run the script from an interactive Tcl shell (dc shell?).
# script.tcl
if {[info procs allSteps] eq ""} {
    # We are not re-entering (continuing), so start all over.
    proc allSteps {args} {
        yield;              # do not run when defining the coroutine
        puts 1
        puts 2
        puts 3
        yield;              # step out, once the first sequence of steps (1-10) has been executed
        puts 4
        puts 5
        puts 6
        rename allSteps ""; # self-clean, once the remainder of the steps (11-N) has run
    }
    coroutine nextSteps allSteps
}
nextSteps;                  # run coroutine
Pack your script into a proc body (allSteps).
Within the proc body, place a yield to mark the hold/continuation point after your first steps (e.g., after the 10th step).
Create a coroutine nextSteps based on allSteps.
Protect the proc and coroutine definitions so that they are not re-defined while steps are still pending.
Then, start your interactive shell and run source script.tcl:
% source script.tcl
1
2
3
Now, perform your manual review. Then, continue from within the same shell:
% source script.tcl
4
5
6
Note that you can run the overall two-phase sequence any number of times (thanks to the coroutine proc's self-cleanup via rename):
% source script.tcl
1
2
3
% source script.tcl
4
5
6
Again: all this assumes that you do not exit the shell, and that you keep your shell session alive while performing your review. If you need to exit the shell for whatever reason (or you cannot run Tcl 8.6+), then Donal's suggestion is the way to go.
Update
If applicable in your case, you may improve the implementation by using an anonymous (lambda) proc. This simplifies the lifecycle management (avoiding re-definition, managing coroutine and proc, no need for a rename):
# script.tcl
if {[info commands nextSteps] eq ""} {
    # We are not re-entering (continuing), so start all over.
    coroutine nextSteps apply {args {
        yield; # do not run when defining the coroutine
        puts 1
        puts 2
        puts 3
        yield; # step out, once the first sequence of steps (1-10) has been executed
        puts 4
        puts 5
        puts 6
    }}
}
nextSteps
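For readers more familiar with other languages, the same run-to-checkpoint pattern can be sketched with a Python generator (an analogy only; the Tcl coroutine above is the real mechanism here). The function names and the list used to record output are illustrative:

```python
def all_steps(out):
    # First phase (steps 1-10 in the real script)
    for i in (1, 2, 3):
        out.append(i)
    yield  # hold point: hand control back for the manual review
    # Second phase (steps 11-N)
    for i in (4, 5, 6):
        out.append(i)

out = []
steps = all_steps(out)   # defining the generator runs nothing yet
next(steps, None)        # runs steps 1-3, then pauses at the yield
print(out)               # [1, 2, 3]
# ... manual review happens here ...
next(steps, None)        # resumes and runs steps 4-6
print(out)               # [1, 2, 3, 4, 5, 6]
```

As with the Tcl coroutine, the generator keeps its execution state between the two calls, which is exactly the continuation behaviour the question asks for.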
The simplest way is to open the text file, parse it to get the first N commands (info complete is useful there), and then evaluate those (or the rest of the script). Doing this efficiently produces slightly different code when you're dropping the tail as opposed to when you're dropping the prefix.
proc ReadAllLines {filename} {
    set f [open $filename]
    set lines {}
    # A little bit careful in case you're working with very large scripts
    while {[gets $f line] >= 0} {
        lappend lines $line
    }
    close $f
    return $lines
}
proc SourceFirstN {filename n} {
    set lines [ReadAllLines $filename]
    set i 0
    set script {}
    foreach line $lines {
        append script $line "\n"
        if {[info complete $script] && [incr i] >= $n} {
            break
        }
    }
    info script $filename
    unset lines
    uplevel 1 $script
}
proc SourceTailN {filename n} {
    set lines [ReadAllLines $filename]
    set i 0
    set script {}
    for {set j 0} {$j < [llength $lines]} {incr j} {
        set line [lindex $lines $j]
        append script $line "\n"
        if {[info complete $script]} {
            if {[incr i] >= $n} {
                info script $filename
                set realScript [join [lrange $lines [incr j] end] "\n"]
                unset lines script
                return [uplevel 1 $realScript]
            }
            # Dump the prefix we don't need any more
            set script {}
        }
    }
    # If we get here, the script had fewer than n lines, so there's nothing to do
}
Be aware that the kinds of files you're dealing with can get pretty large, and Tcl currently has some hard memory limits. On the other hand, if you can source the file at all, you're already within that limit…

Send and Receive operations between communicators in MPI

Following my previous question : Unable to implement MPI_Intercomm_create
The problem with MPI_INTERCOMM_CREATE has been solved. But when I try to implement a basic send/receive operation between process 0 of color 0 (global rank 0) and process 0 of color 1 (i.e., global rank 2), the code just hangs after printing the received buffer.
The code:
program hello
    include 'mpif.h'
    implicit none
    integer tag,ierr,rank,numtasks,color,new_comm,inter1,inter2
    integer sendbuf,recvbuf,stat(MPI_STATUS_SIZE)

    tag = 22
    sendbuf = 222

    call MPI_Init(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD,numtasks,ierr)

    if (rank < 2) then
        color = 0
    else
        color = 1
    end if

    call MPI_COMM_SPLIT(MPI_COMM_WORLD,color,rank,new_comm,ierr)

    if (color .eq. 0) then
        if (rank == 0) print*,' 0 here'
        call MPI_INTERCOMM_CREATE(new_comm,0,MPI_COMM_WORLD,2,tag,inter1,ierr)
        call mpi_send(sendbuf,1,MPI_INT,2,tag,inter1,ierr)
        !local_comm, local leader, peer_comm, remote leader, tag, new, ierr
    else if (color .eq. 1) then
        if (rank == 2) print*,' 2 here'
        call MPI_INTERCOMM_CREATE(new_comm,2,MPI_COMM_WORLD,0,tag,inter2,ierr)
        call mpi_recv(recvbuf,1,MPI_INT,0,tag,inter2,stat,ierr)
        print*,recvbuf
    end if
end
Communication over intercommunicators is not well understood by most users, and examples are scarcer than for other MPI operations. You can find a good explanation by following this link.
Now, there are two things to remember:
1) Communication on an intercommunicator always goes from one group to the other group. When sending, the destination rank is its local rank in the remote group's communicator. When receiving, the sender's rank is its local rank in the remote group's communicator.
2) Point-to-point communication (the MPI_Send and MPI_Recv family) is between one sender and one receiver. In your case, everyone in color 0 is sending and everyone in color 1 is receiving; however, if I understood your problem, you want process 0 of color 0 to send something to process 0 of color 1.
The sending code should be something like this:
call MPI_COMM_RANK(inter1,irank,ierr)
if (irank == 0) then
    call mpi_send(sendbuf,1,MPI_INT,0,tag,inter1,ierr)
end if
The receiving code should look like:
call MPI_COMM_RANK(inter2,irank,ierr)
if (irank == 0) then
    call mpi_recv(recvbuf,1,MPI_INT,0,tag,inter2,stat,ierr)
    print*,'rec buff = ', recvbuf
end if
In the sample code, there is a new variable irank that I use to query the rank of each process in the intercommunicator; that is, the rank of the process in its local group. So you will have two processes of rank 0, one in each group, and so on.
It is important to emphasize what other commenters on your post are saying: when building a program these days, use modern constructs like use mpi instead of include 'mpif.h' (see the comment from Vladimir F). Another piece of advice from your previous question was to use rank 0 as the local leader in both cases. If I combine those two ideas, your program can look like:
program hello
    use mpi !instead of include 'mpif.h'
    implicit none
    integer :: tag,ierr,rank,numtasks,color,new_comm,inter1,inter2
    integer :: sendbuf,recvbuf,stat(MPI_STATUS_SIZE)
    integer :: irank
    !
    tag = 22
    sendbuf = 222
    !
    call MPI_Init(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD,numtasks,ierr)
    !
    if (rank < 2) then
        color = 0
    else
        color = 1
    end if
    !
    call MPI_COMM_SPLIT(MPI_COMM_WORLD,color,rank,new_comm,ierr)
    !
    if (color .eq. 0) then
        call MPI_INTERCOMM_CREATE(new_comm,0,MPI_COMM_WORLD,2,tag,inter1,ierr)
        !
        call MPI_COMM_RANK(inter1,irank,ierr)
        if (irank == 0) then
            call mpi_send(sendbuf,1,MPI_INT,0,tag,inter1,ierr)
        end if
        !
    else if (color .eq. 1) then
        call MPI_INTERCOMM_CREATE(new_comm,0,MPI_COMM_WORLD,0,tag,inter2,ierr)
        call MPI_COMM_RANK(inter2,irank,ierr)
        if (irank == 0) then
            call mpi_recv(recvbuf,1,MPI_INT,0,tag,inter2,stat,ierr)
            if (ierr /= MPI_SUCCESS) print*,'Error in rec '
            print*,'rec buff = ', recvbuf
        end if
    end if
    !
    call MPI_Finalize(ierr)
end program hello

'Segmentation Fault' while debugging expect script

I am debugging an Expect program in the traditional way, by passing the -D 1 flag, for the following script.
#!/usr/bin/expect
proc p3 {} {
set m 0
}
proc p2 {} {
set c 4
p3
set d 5
}
proc p1 {} {
set a 2
p2
set a 5
}
p1
With the debugger command w, I am trying to see the stack frame and got the following error.
dinesh#mypc:~/pgms/expect$ expect -D 1 stack.exp
1: proc p3 {} {
set m 0
}
dbg1.0> n
1: proc p2 {} {
set c 4
p3
set d 5
}
dbg1.1>
1: proc p1 {} {
set a 2
p2
set a 5
}
dbg1.2>
1: p1
dbg1.3> s
2: set a 2
dbg2.4>
2: p2
dbg2.5>
3: set c 4
dbg3.6> w
0: expect {-D} {1} {stack.exp}
Segmentation fault (core dumped)
dinesh#mypc:~/pgms/expect$
I am having the expect version 5.45.
Is there anything wrong with the way I am executing the commands?
In order to achieve the debugging trace, Expect pokes its fingers inside the implementation of Tcl. In particular, it has copies of the definitions of some of the internal structures used inside Tcl (e.g., the definition of the implementation of procedures and of stack frames). However, these structures change from time to time; we don't announce such internal implementation changes, as they shouldn't have any bearing on any other code, but that's obviously not the case.
Overall, this is a bug in Expect (and it might be that the fix is for a new C API function to be added to Tcl). In order to see about fixing this, we need to know not just the exact version of Expect but also the exact version of Tcl (use info patchlevel to get this).

PID control - value of process parameter based on PID result

I'm trying to implement a PID controller following http://en.wikipedia.org/wiki/PID_controller
The mechanism I try to control works as follows:
1. I have an input variable which I can control. Typical values would be 0.5...10.
2. I have an output value which I measure daily. My goal is for the output to be in roughly the same range.
The two variables are strongly correlated: when the process parameter goes up, the output generally goes up, but there's quite a bit of noise.
I'm following the implementation here:
http://code.activestate.com/recipes/577231-discrete-pid-controller/
Now, the PID output seems to be correlated with the error term, not with the measured output level. So my guess is that I am not supposed to use it as-is as the process variable, but rather as some correction to the current value? How is that supposed to work exactly?
For example, if we take Kp=1, Ki=Kd=0, the process (input) variable is 4, the current output level is 3, and my target is a value of 2, I get the following:
error = 2 - 3 = -1
PID = -1
Then should I set the process variable to -1? Or to 4-1=3?
You need to think in terms of the PID controller correcting a manipulated variable (MV) for errors, and you need to use an I term to get an on-target steady-state result. The I term is how the PID retains and applies memory of the system's prior behavior.
If you are thinking in terms of the output of the controller being changes in the MV, it is more of a 'velocity form' PID, and the memory of prior errors and behavior is integrated and accumulated in the prior MV setting.
From your example, it seems that a manipulated value of -1 is not feasible and that you would like the controller to suggest a value like 3 to get a process output (PV) of 2. For a PID controller to make use of "The process (input) variable is 4, ..." (the MV in my terms), Ki must be non-zero; and if the system was at steady state, whatever was accumulated in the integral (sum_e = sum(e)) would precisely equal 4/Ki, so:
Kp = Ki = 1; Kd = 0
error = SV - PV = 2 - 3 = -1
sum_e = sum_e + error = 4/Ki - 1
MV = PID = error*Kp + sum_e*Ki = -1*Kp + (4/Ki - 1)*Ki = -1 + 4 - 1*Ki = -1 + 4 - 1 = 2
If you used a slower Ki than 1, it would smooth out the noise more and not adjust the MV as quickly:
Ki = 0.1
MV = PID = -1*Kp + (4/Ki - 1)*Ki = -1 + 4 - 1*Ki = -1 + 4 - 0.1 = 2.9
At steady state at target (PV = SV), sum_e * Ki should produce the steady-state MV:
PV = SV
error = SV - PV = 0
Kp * error = 0
MV = 3 = PID = 0 * Kp + Ki * sum_e
A nice way to understand the PID controller is to put units on everything and to think of Kp, Ki, and Kd as conversions of the process error, the accumulated error*timeUnit, and the rate of change of error/timeUnit into units of the manipulated variable; the controlled system then converts the controller's manipulated variable into units of output.
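The arithmetic above can be checked with a minimal Python sketch of a discrete PI controller (Kd = 0). The class name and the preloaded sum_e are illustrative assumptions, with sum_e chosen so the steady-state integral equals 4/Ki as in the worked example:

```python
class PI:
    """Minimal discrete PI controller (Kd = 0)."""
    def __init__(self, kp, ki, sum_e=0.0):
        self.kp = kp
        self.ki = ki
        self.sum_e = sum_e  # accumulated error (the controller's memory)

    def update(self, sv, pv):
        # sv: setpoint (target), pv: measured process output
        error = sv - pv
        self.sum_e += error
        return self.kp * error + self.ki * self.sum_e  # new MV

# Steady state at MV = 4 means the integral holds 4/Ki before the new reading.
ctrl = PI(kp=1.0, ki=1.0, sum_e=4.0)
print(ctrl.update(sv=2, pv=3))            # -1*Kp + (4/Ki - 1)*Ki = 2.0

ctrl_slow = PI(kp=1.0, ki=0.1, sum_e=40.0)  # 4/Ki with Ki = 0.1
print(round(ctrl_slow.update(sv=2, pv=3), 6))  # 2.9
```

Note how the controller's output is the MV itself, not a change to it: the integral term carries the prior operating point (4), and the proportional and integral corrections move it toward the target.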
