I wrote a makefile for my Erlang project and it works. But while the project is running, if I change a file and rebuild it with the makefile, the running shell does not pick up the new version of the code: I have to exit the shell and start it again to run the new version. How can I solve this?
makefile:
# Makefile
SRC_DIR = src
BIN_DIR = ebin
DB_DIR = db
ERL = erl
ERLC = erlc
ERLC_FLAGS =
SOURCES = $(wildcard ${SRC_DIR}/*.erl)
HEADERS = $(wildcard ${SRC_DIR}/*.hrl)
OBJECTS = $(SOURCES:${SRC_DIR}/%.erl=${BIN_DIR}/%.beam)

all: $(OBJECTS)

ebin/%.beam: src/%.erl $(HEADERS) Makefile
	${ERLC} $(ERLC_FLAGS) -o ${BIN_DIR}/ $<

drop_db:
	rm -r ${DB_DIR}

clean:
	-rm $(OBJECTS)

run:
	${ERL} -pa ${BIN_DIR} -s god run
Erlang makes it easy to modify application code at run time, but that doesn't mean the change is transparent to the user.
In fact it should not be transparent, otherwise the feature could only cover trivial use cases. Here are the first three reasons that come to mind why a code change must stay under the user's responsibility and control:
From one version to the next, the data may need to be adapted (a modified record, a modified structure, new data...). So on transition, the new version of the code must update the state data; it must even verify that it is able to do so (does it have the transformation code to go from version X to version Y?). The OTP behaviours provide a special callback for this.
An application modification may involve changes in several modules, and one module may be run by many processes (a web client, for example). But the code is updated per process, at a very specific moment (a fully qualified call), and it cannot happen in all processes simultaneously. So you must control the order of module upgrades. The function code:load_file(Module) allows this.
Only (:o) 2 versions of a module may live concurrently in one node. If you run a module and then load two new modifications in a row, the oldest code "disappears" and any process still using that version simply dies. You need to synchronize the upgrade. The function code:soft_purge(Module) helps you with this.
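To make that two-version rule concrete, here is a toy model of it (in Python, purely illustrative; the real BEAM code server differs in every detail): at most the current and one old version of a module coexist, and a process still executing a version that gets purged is killed.

```python
class ToyCodeServer:
    """Toy model of Erlang's two-version rule: current + one old version."""

    def __init__(self):
        self.current = None  # version that fully qualified calls resolve to
        self.old = None      # previous version, still usable by old processes

    def load(self, version, processes):
        """Load a new version; kill processes still on the purged one."""
        purged, self.old, self.current = self.old, self.current, version
        if purged is not None:
            for p in processes:
                if p["version"] == purged:
                    p["alive"] = False  # lingering processes die on purge
        return purged

procs = [{"version": 1, "alive": True}, {"version": 1, "alive": True}]
srv = ToyCodeServer()
srv.load(1, procs)
srv.load(2, procs)        # v1 becomes "old"; everyone still alive
procs[0]["version"] = 2   # first process makes a fully qualified call
srv.load(3, procs)        # v1 is purged; the process still on v1 dies
```

This is exactly the scenario played out with P1 and P2 in the shell session further down.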
After all these warnings, back to the fun. You can easily play with code upgrades to get more familiar with them. Only two steps and one precondition are needed:
The new beam file (compiled code) must be in the code path of your Erlang VM (otherwise you will have to use code:load_abs(Filename) in the next step).
You must load the new code into the VM. If you compile the code from the shell using c(my_module). this is done automatically. If you compile it by other means (such as the makefile above), you will have to load the new code explicitly, for example with code:load_file(my_module).
Then you must ensure that each process using this module performs a fully qualified call to an exported function, for example my_module:code_change(State), and that's it. If you use an OTP behaviour, this callback exists and also gives you the old version of the code as a parameter.
Let's use the following module:
-module(one).
-compile([export_all]).
-define(VERSION, 1).

start() ->
    spawn(?MODULE, init, []).

init() ->
    loop(0).

loop(N) ->
    receive
        print ->
            io:format("received ~p message(s) so far~n", [N+1]),
            loop(N+1);
        load ->
            io:format("received ~p message(s) so far~n reload the code~n", [N+1]),
            ?MODULE:loop(N+1);
        version ->
            io:format("received ~p message(s) so far~n version is ~p~n", [N+1, ?VERSION]),
            loop(N+1);
        M ->
            io:format("received unexpected message ~p: ignored~n", [M]),
            loop(N)
    end.
I compile it in the shell, start 2 processes and play with them:
1> c(one).
{ok,one}
2> P1 = one:start().
<0.40.0>
3> P2 = one:start().
<0.42.0>
4> P1 ! print.
received 1 message(s) so far
print
5> P1 ! print.
received 2 message(s) so far
print
6> P1 ! version.
received 3 message(s) so far
version is 1
version
7> P1 ! reset.
received unexpected message reset: ignored
reset
8> P2 ! print.
received 1 message(s) so far
print
9>
Now I modify the code to:
-module(one).
-compile([export_all]).
-define(VERSION, 2).

start() ->
    spawn(?MODULE, init, []).

init() ->
    loop(0).

loop(N) ->
    receive
        print ->
            io:format("received ~p message(s) so far~n", [N+1]),
            loop(N+1);
        load ->
            io:format("received ~p message(s) so far~n reload the code~n", [N+1]),
            ?MODULE:loop(N+1);
        version ->
            io:format("received ~p message(s) so far~n version is ~p~n", [N+1, ?VERSION]),
            loop(N+1);
        reset ->
            io:format("received ~p message(s) so far, reset message count~n", [N+1]),
            loop(0);
        M ->
            io:format("received unexpected message ~p: ignored~n", [M]),
            loop(N)
    end.
Compile it outside the VM and test:
9> P1 ! version.
received 4 message(s) so far
version is 1
version
10> P1 ! load.
received 5 message(s) so far
reload the code
load
11> P1 ! version.
received 6 message(s) so far
version is 1
version
12> P1 ! reset.
received unexpected message reset: ignored
reset
13> P2 ! print.
received 2 message(s) so far
print
14>
As I didn't load the code, there is no update: the VM will not lose time searching the code path for a new version at each external call!
14> code:load_file(one).
{module,one}
15> P1 ! version.
received 7 message(s) so far
version is 1
version
16> P2 ! version.
received 3 message(s) so far
version is 1
version
17> P1 ! load.
received 8 message(s) so far
reload the code
load
18> P1 ! version.
received 9 message(s) so far
version is 2
version
19> P1 ! reset.
received 10 message(s) so far, reset message count
reset
20> P1 ! print.
received 1 message(s) so far
print
21> P2 ! version.
received 4 message(s) so far
version is 1
version
22> P2 ! print.
received 5 message(s) so far
print
23>
After loading the new code, I was able to upgrade P1 to version 2, while P2 is still on version 1.
Now I make a new modification, simply bumping to version 3, and compile it in the shell to force the code loading:
23> c(one).
{ok,one}
24> P1 ! version.
received 2 message(s) so far
version is 2
version
25> P1 ! load.
received 3 message(s) so far
reload the code
load
26> P1 ! version.
received 4 message(s) so far
version is 3
version
27> P2 ! print.
print
28>
I was able to upgrade the process P1 from version 2 to 3. But the process P2, which was still using version 1, has died.
I want to create a pretty printed table from a matrix (or column vector).
For Matlab there are several available functions that can do this (such as printmat, array2table, and table), but for Octave I cannot find any.
So instead of:
>> a = rand(3,2)*10;
>> round(a)
ans =

   2   10
   1    3
   2    1
I would like to see:
>> a = rand(3,2)*10;
>> pretty_print(round(a))
     THIS   THAT
R1      2     10
R2      1      3
R3      2      1
How can I produce a pretty printed table from a matrix?
(Any available package to do so?)
UPDATE
After trying to follow the extremely obtuse package installation instructions from the Octave Wiki, I kept getting the error pkg: failed to read package 'econometrics-1.1.1.tar.gz': Couldn't resolve host name. Apparently the Windows version isn't able to use the direct installation command (as given on the Wiki). The only way I managed to get it working was to first download the package manually into Octave's current working directory (see the pwd output). Only then did the install command work:
pkg install econometrics-1.1.1.tar.gz
pkg load econometrics
Yes, there is a prettyprint function in the econometrics package. Once the package is installed and loaded, you can use it like this:
>> a = rand(3,2)*10;
>> prettyprint(round(a),['R1';'R2';'R3'],['THIS';'THAT'])
       THIS     THAT
R1    2.000    3.000
R2    3.000    4.000
R3   10.000    3.000
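For the curious, the formatting such a function performs is simple enough to sketch by hand: pad a header row and prefix each data row with its label. Here is the same idea in Python (function name, widths, and the 3-decimal format are my own choices, not taken from the econometrics package):

```python
def pretty_print(matrix, row_labels, col_labels, width=8):
    """Render a matrix as a labelled, fixed-width text table."""
    header = " " * 4 + "".join(c.rjust(width) for c in col_labels)
    lines = [header]
    for label, row in zip(row_labels, matrix):
        lines.append(label.ljust(4) + "".join(f"{v:>{width}.3f}" for v in row))
    return "\n".join(lines)

print(pretty_print([[2, 10], [1, 3], [2, 1]], ["R1", "R2", "R3"], ["THIS", "THAT"]))
```

The same few lines translate almost directly to Octave's sprintf if you want a dependency-free version.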
I am trying to use Xyce for a project and am running into this issue. I copied the DC sweep netlist example from page 39 of the Xyce user guide into Notepad and saved it as test2c.cir. I then copied it into the Xyce directory, opened the Xyce terminal, and ran the simulate command, but I am unable to generate any output. Is there a step I am missing to run the Diode Clipper Circuit DC sweep file? Am I saving the .cir file in the right directory? The circuit seems to "load properly" and the syntax is fine, but I am not getting the figure output I expect. I believe the issue might be that my PC doesn't have a way to open .prn files; if that is the case, how would I fix that?
Diode Clipper Circuit
** Voltage Sources
VCC 1 0 5V
VIN 3 0 0V
* Analysis Command
.DC VIN -10 15 1
* Output
.PRINT DC V(3) V(2) V(4)
* Diodes
D1 2 1 D1N3940
D2 0 2 D1N3940
* Resistors
R1 2 3 1K
R2 1 2 3.3K
R3 2 0 3.3K
R4 4 0 5.6K
* Capacitor
C1 2 4 0.47u
.MODEL D1N3940 D(
+ IS=4E-10 RS=.105 N=1.48 TT=8E-7
+ CJO=1.95E-11 VJ=.4 M=.38 EG=1.36
+ XTI=-8 KF=0 AF=1 FC=.9
+ BV=600 IBV=1E-4)
.END
And this is the directory...
UPDATE:
I changed the analysis command to save files in different formats (csv, raw, dat) and it still gives me the same error: it aborts because it can't open test.cir.___. Is the problem maybe something to do with where the program directory is located?
I was informed of a possible fix, and it worked. The Xyce installation was in a location without admin permission (the default after serial installation). The easiest thing to try, which worked, was to cd to another directory containing the netlist file and run Xyce there. That generated the output file correctly!
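As for the .prn worry in the question: Xyce's .PRINT output is a plain whitespace-separated text table (an index column, the swept source, then the requested signals), so any text editor can open it. A hedged sketch of reading one programmatically, in Python (the sample lines and values below are made up; only the general shape follows the format):

```python
import io

# A few lines in the shape Xyce's .PRINT DC output uses (values invented).
prn_text = """Index       VIN          V(3)         V(2)         V(4)
0           -1.0e+01     -1.0e+01     -4.0e-01     0.0e+00
1           -9.0e+00     -9.0e+00     -3.9e-01     0.0e+00
End of Xyce(TM) Simulation
"""

def read_prn(f):
    """Parse a Xyce-style .prn table into a list of {column: float} rows."""
    lines = [l for l in f.read().splitlines() if l.strip()]
    header = lines[0].split()
    rows = []
    for line in lines[1:]:
        parts = line.split()
        if not parts[0].lstrip("-").replace(".", "").isdigit():
            break  # trailing footer line, e.g. "End of Xyce(TM) Simulation"
        rows.append(dict(zip(header, map(float, parts))))
    return rows

rows = read_prn(io.StringIO(prn_text))
```

From there the columns can be plotted with any tool (gnuplot, matplotlib, a spreadsheet).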
I am writing a bpf filter to prevent certain netlink messages. I am trying to debug the bpf code. Is there any debug tool that could help me?
I was initially thinking of using nlmon to capture netlink messages:
From https://jvns.ca/blog/2017/09/03/debugging-netlink-requests/
# create the network interface
sudo ip link add nlmon0 type nlmon
sudo ip link set dev nlmon0 up
sudo tcpdump -i nlmon0 -w netlink.pcap # capture your packets
Then use ./bpf_dbg (https://github.com/cloudflare/bpftools/blob/master/linux_tools/bpf_dbg.c):
1) ./bpf_dbg to enter the shell (shell cmds denoted with '>'):
2) > load bpf 6,40 0 0 12,21 0 3 20... (this is the bpf code I intend to debug)
3) > load pcap netlink.pcap
4) > run /disassemble/dump/quit (self-explanatory)
5) > breakpoint 2 (sets bp at loaded BPF insns 2, do run then;
multiple bps can be set, of course, a call to breakpoint
w/o args shows currently loaded bps, breakpoint reset for
resetting all breakpoints)
6) > select 3 (run etc will start from the 3rd packet in the pcap)
7) > step [-, +] (performs single stepping through the BPF)
Did anyone try this before?
Also, I was not able to get the nlmon module to load on my Linux kernel (is there a doc for this?).
I am running kernel version 4.10.0-40-generic.
The nlmon module seems to be present in the kernel source:
https://elixir.free-electrons.com/linux/v4.10/source/drivers/net/nlmon.c#L41
But when I search /lib/modules/ for nlmon.ko, I don't find anything:
instance-1:/lib/modules$ find . | grep -i nlmon
instance-1:/lib/modules$
UPDATE: Confirmed as a bug. For more detail, see the link and details provided by @ViralBShah below.
Julia throws a strange error when I add and remove processes (addprocs and rmprocs), but only if I don't do any parallel processing in between. Consider the following example code:
#Set parameters
numCore = 4;
#Add workers
print("Adding workers... ");
addprocs(numCore - 1);
println(string(string(numCore-1), " workers added."));
#Detect number of cores
println(string("Number of processes detected = ", string(nprocs())));
# Do some stuff (COMMENTED OUT)
# XLst = {rand(10, 1) for i in 1:8};
# XMean = pmap(mean, XLst);
#Remove the additional workers
print("Removing workers... ");
rmprocs(workers());
println("Done.");
println("Subroutine complete.");
Note that I've commented out the only code that actually does any parallel processing (the call to pmap). If I run this code on my machine (Julia 0.2.1, Ubuntu 14.04), I get the following output in the console:
Adding workers... 3 workers added.
Number of processes detected = 4
Removing workers... Done.
Subroutine complete.
fatal error on
In [86]: fatal error on 88: ERROR: 87: ERROR: connect: connection refused (ECONNREFUSED)
in yield at multi.jl:1540
connect: connection refused (ECONNREFUSED) in wait at task.jl:117
in wait_connected at stream.jl:263
in connect at stream.jl:878
in Worker at multi.jl:108
in anonymous at task.jl:876
in yield at multi.jl:1540
in wait at task.jl:117
in wait_connected at stream.jl:263
in connect at stream.jl:878
in Worker at multi.jl:108
in anonymous at task.jl:876
The first four lines are printed by my program, and seem to indicate that it runs to completion. But then I get a fatal error. Any ideas?
The most interesting thing about this error is that if I uncomment the code with the call to pmap (i.e. if I actually do some parallel processing), the fatal error goes away.
This issue is being tracked at https://github.com/JuliaLang/julia/issues/7646 and I reproduce the answer by Amit Murthy:
pid 1 does an addprocs(3)
addprocs returns after it has established connections with all 3 new workers.
However, at this time the connections between the workers may not have been set up yet, i.e. from pids 3 -> 2, 4 -> 2 and 4 -> 3.
Now pid 1 calls rmprocs(workers()) , i.e., pids 2, 3 and 4.
As pid 2 exits, the connection attempt from 4 to 2 results in an error.
Since we have redirected the output of pid 4 to the stdout of pid 1, we see the same error printed there.
The system is still in a consistent state, though the printing of said error messages may suggest something amiss.
I installed Linpack on a 2-node cluster with Xeon processors. If I start Linpack with this command:
mpiexec -np 28 -print-rank-map -f /root/machines.HOSTS ./xhpl_intel64
sometimes Linpack starts and prints the output, and sometimes I only see the MPI rank mappings printed and then nothing follows. To me this seems like random behaviour, because I don't change anything between the calls and, as mentioned, Linpack sometimes starts and sometimes doesn't.
In top I can see that xhpl_intel64 processes have been created and are heavily using the CPU, but when watching the traffic between the nodes, iftop tells me that nothing is sent.
I am using MPICH2 as MPI implementation. This is my HPL.dat:
# cat HPL.dat
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out output file name (if any)
6 device out (6=stdout,7=stderr,file)
1 # of problems sizes (N)
10000 Ns
1 # of NBs
250 NBs
0 PMAP process mapping (0=Row-,1=Column-major)
1 # of process grids (P x Q)
2 Ps
14 Qs
16.0 threshold
1 # of panel fact
2 PFACTs (0=left, 1=Crout, 2=Right)
1 # of recursive stopping criterium
4 NBMINs (>= 1)
1 # of panels in recursion
2 NDIVs
1 # of recursive panel fact.
1 RFACTs (0=left, 1=Crout, 2=Right)
1 # of broadcast
1 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1 # of lookahead depth
1 DEPTHs (>=0)
2 SWAP (0=bin-exch,1=long,2=mix)
64 swapping threshold
0 L1 in (0=transposed,1=no-transposed) form
0 U in (0=transposed,1=no-transposed) form
1 Equilibration (0=no,1=yes)
8 memory alignment in double (> 0)
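One sanity check worth doing on this HPL.dat: the process grid must be covered by the MPI launch, i.e. mpiexec must supply at least P × Q ranks. Here 2 Ps × 14 Qs = 28, matching the -np 28 of the first command (a trivial sketch of just the arithmetic; the helper name is my own):

```python
def hpl_grid_ranks(p: int, q: int) -> int:
    """Number of MPI ranks an HPL P x Q process grid occupies."""
    return p * q

# Grid from the HPL.dat above (2 Ps, 14 Qs) vs. the two mpiexec calls:
assert hpl_grid_ranks(2, 14) == 28   # matches -np 28
assert hpl_grid_ranks(2, 14) <= 32   # -np 32 also covers it; extra ranks sit out
```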
edit2:
I now just let the program run for a while, and after 30 minutes it tells me:
# mpiexec -np 32 -print-rank-map -f /root/machines.HOSTS ./xhpl_intel64
(node-0:0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
(node-1:16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31)
Assertion failed in file ../../socksm.c at line 2577: (it_plfd->revents & 0x008) == 0
internal ABORT - process 0
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
Is this an MPI problem?
Do you know what type of problem this could be?
I figured out what the problem was: MPICH2 uses different random ports each time it starts, and if these are blocked your application won't start up correctly.
The solution for MPICH2 is to set the environment variable MPICH_PORT_RANGE to START:END, like this:
export MPICH_PORT_RANGE=50000:51000
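If you suspect a firewall or another service is occupying part of that range, a quick way to see which ports can currently be bound is a small script like the following (my own illustration; nothing MPICH-specific about it):

```python
import socket

def free_ports(start, end, host="127.0.0.1"):
    """Return the ports in [start, end) that can currently be bound on host."""
    free = []
    for port in range(start, end):
        # A successful bind means nothing else holds the port right now.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind((host, port))
                free.append(port)
            except OSError:
                pass
    return free

# e.g. free_ports(50000, 51000) should cover most of the range exported above
```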
Best,
heinrich