I want to run two MATLAB scripts in parallel for a project and have them communicate. The purpose is to have one script do image analysis and send the results to the other, which uses them for further calculations (time consuming, but not related to the task of finding things in the images). Since both tasks are time consuming and should preferably run in real time, I believe parallelization is necessary.
To get a feel for how this should be done I created a test script to find out how to communicate between the two scripts.
The first script takes user input using the built-in function input and sends it to the other with labSend; the second script receives it and prints it.
function [blarg] = inputStuff(blarg)
    mpiInit(); % added because of the error message, but does not work...
    for i=1:2
        labBarrier; % added because of error message
        inp = input('Enter a number to write');
        labSend(inp);
        if (inp == 0)
            break;
        else
            i = 1;
        end
    end
end
function [ blarg ] = testWrite( blarg )
    mpiInit(); % added because of error message, but does not help
    par = 0;
    if ( blarg == 0)
        par = 1;
    end
    for i = 1:10
        if (par == 1)
            labBarrier
            delta = labReceive();
            i = 1;
        else
            delta = input('Enter number to write');
        end
        if (delta == 0)
            break;
        end
        s = strcat('This lab no', num2str(labindex), '. Delta is = ')
        delta
    end
end
%% This is the file test_parfor.m
funlist = {@inputStuff, @testWrite};
matlabpool(2);
mpiInit(); % added because of error message, but does not help
parfor i=1:2
    funlist{i}(0);
end
matlabpool close;
Then, when the code is run, the following error message appears:
Starting matlabpool using the 'local' profile ... connected to 2 labs.
Error using parallel_function (line 589)
The MPI implementation has not yet been loaded. Please
call mpiInit.
Error stack:
testWrite.m at 11
Error in test_parfor (line 8)
parfor i=1:2
Calling mpiInit does not help (I call it as shown in the code above).
None of the examples in the MathWorks documentation, or elsewhere on their website, show this error or explain what to do about it.
Any help is appreciated!
You would typically use constructs such as labSend, labReceive and labBarrier within an spmd block, rather than a parfor block.
parfor is intended for implementing embarrassingly parallel algorithms, in other words, algorithms that consist of multiple independent tasks that can run in parallel and do not require communication between tasks.
I'm stretching my knowledge here (perhaps someone more expert can correct me), but as I understand it, parfor does not set up an MPI ring for communication between workers, which probably explains the (rather uninformative) error message you're getting.
An spmd block enables communication between workers using labSend, labReceive and labBarrier. There are quite a few examples of using them all in the documentation.
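To illustrate, here is a minimal sketch of point-to-point communication inside an spmd block (assuming a pool of at least two workers is open; the value sent and the worker indices are just placeholders):
spmd
    if labindex == 1
        labSend(42, 2);           % worker 1 sends a value to worker 2
    elseif labindex == 2
        value = labReceive(1);    % worker 2 receives the value from worker 1
        fprintf('Lab %d received %d\n', labindex, value);
    end
    labBarrier;                   % wait for all workers before leaving the block
end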
Sam is right that the MPI functionality is not enabled during parfor, only during spmd. You need to do something more like this:
spmd
funlist{labindex}(0);
end
(Sam is also quite right that the error message you saw is pretty unhelpful)
I have a .m script that I've been running using Windows Task Scheduler, generally successfully, every 15 minutes for about a year (options: -automation -minimize -r remotedata -logfile logfile.txt;quit).
When I run the code manually in Matlab, everything behaves as expected.
However, when it runs as an automated script, it has two issues I can't resolve that seem to indicate the code is not being executed the same way.
First, I have the following conditional:
~isempty(remoteData.Time(setdiff(1:end,ni))), which is terrible syntax, I know, but works just fine when I run the script manually. However, when it runs automated, it gives the error:
Error using setdiff (line 80) Not enough input arguments.
I corrected it to ~isempty(remoteData.Time(setdiff(1:height(remoteData),ni))), but it made me curious.
Second, I have a webread call with a number of query parameters (see below) that executes normally when I have the script open and hit "run"; however, when it runs as an automation, the dateutc query is ignored. This one is a bit more puzzling. Can anyone suggest a reason it might be failing to register, or how I might fix it? Debugging is difficult since it works as expected when I run it manually.
WUurl = 'http://weatherstation.wunderground.com/weatherstation/updateweatherstation.php';
WUID = '***';
WUpwd = '***';
WUdateutc = datestr(datenum(webData.Time(WDNewest-newTimes+i))+7/24,'yyyy-mm-dd HH:MM:SS');
WUwindspeedmph = num2str(webData.WndSpd(WDNewest-newTimes+i)*0.62);
WUwinddir = num2str(webData.WndDir(WDNewest-newTimes+i));
WUtempf = num2str(webData.AirTmp(WDNewest-newTimes+i)*1.8+32);
WUrainin = num2str(webData.Rain(WDNewest-newTimes+i)/25.4*4);
WUdailyrainin = num2str(sum(webData.Rain(WDMidnight:WDNewest-newTimes+i))/25.4);
WUbaromin = num2str(webData.BarPress(WDNewest-newTimes+i)*.0295);
WUhumidity = num2str(webData.RelHum(WDNewest-newTimes+i));
gamma = log(webData.RelHum(WDNewest-newTimes+i)/100)+ ...
    (17.67*webData.AirTmp(WDNewest-newTimes+i))/ ...
    (243.5+webData.AirTmp(WDNewest-newTimes+i));
WUdewptf = num2str((243.5*gamma)/(17.67-gamma)*1.8+32); % Magnus formula estimation
WUsolarradiation = num2str(webData.NetRad_Wm2(WDNewest-newTimes+i));
WUsoiltempf = num2str(nanmean(webData{WDNewest,20:3:77})*1.8+32);
WUsoilmoisture = num2str(nanmean(webData{WDNewest,18:3:75}));
options = weboptions('Timeout',newTimes);
WU_debugging = webread(WUurl,...
    'ID',WUID,...
    'PASSWORD',WUpwd,...
    'dateutc',WUdateutc,...
    'windspeedmph',WUwindspeedmph,...
    'winddir',WUwinddir,...
    'tempf',WUtempf,...
    'rainin',WUrainin,...
    'dailyrainin',WUdailyrainin,...
    'baromin',WUbaromin,...
    'humidity',WUhumidity,...
    'dewptf',WUdewptf,...
    'solarradiation',WUsolarradiation,...
    'soiltempf',WUsoiltempf,...
    'soilmoisture',WUsoilmoisture,...
    'action','updateraw',...
    options);
I am trying to continuously read a file in Ruby (the file is growing over time and needs to be processed in a separate process). Currently I am achieving this with the following bit of code:
r, w = IO.pipe
pid = Process.spawn('ffmpeg' + ffmpeg_args, {STDIN => r, STDERR => STDOUT})
Process.detach pid
while true do
  IO.copy_stream(open(@options['filename']), w)
  sleep 1
end
However, while this works, I can't imagine it is the most performant way of doing it. An alternative would be the following variation:
step = 1024*4
copied = 0
pid = Process.spawn('ffmpeg' + ffmpeg_args, {STDIN => r, STDERR => STDOUT})
Process.detach pid
while true do
  IO.copy_stream(open(@options['filename']), w, step, copied)
  copied += step
  sleep 1
end
which would only continuously copy parts of the file (the issue here being that the step could overreach the end of the file). Other approaches, such as a simple file read, led to ffmpeg failing when there was no new data. With this solution the frames are dropped if no new data is available (which is what I need).
Is there a better (more performant) way to achieve something like that?
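For what it's worth, one variation I considered relies on the fact that IO.copy_stream returns the number of bytes it actually copied, so the offset only ever advances as far as data really exists (just a rough sketch, not benchmarked):
step = 1024*4
copied = 0
while true do
  # advance the offset by the bytes actually copied, so a partial read
  # near the end of the file does not skip data on the next iteration
  copied += IO.copy_stream(open(@options['filename']), w, step, copied)
  sleep 1
end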
EDIT:
Using the method proposed by @RaVeN, I am now using the following code:
open(@options['filename'], 'rb') do |stream|
  stream.seek(0, IO::SEEK_END)
  queue = INotify::Notifier.new
  queue.watch(@options['filename'], :modify) do
    w.write stream.read
  end
  queue.run
end
However, now ffmpeg complains about invalid data. Is there another method than read?
I have to run bash heavy-job.sh <data-num> (which takes 0.5~2 days) frequently on my computer to process data located at ~/a/data/num. The script calls a few sub-processes sequentially and writes a log to ~/a/result/num.log. I have done this manually until now.
I wanted to visualize the processed tasks and their status (success or fail), etc. as an HTML table. I wrote a simple Sinatra app to render a table that shows
the list of ~/a/data/num to be processed
~/a/result/num.log exists or not (process not-launched/processing/done)
its status (whether the log file contains the word "error" or not)
I found it would be convenient if I could launch bash heavy-job.sh <data-num> from the Sinatra app, log the tasks (and info like time, date, etc.) and their args (heavy-job.sh takes some optional args), and show them as an HTML table.
So I need something that manages jobs and logs to files (or a db).
First I wrote the code below as a test (just a test, not integrated with my system yet), but later I found that resque is what I wanted. I am a beginner and not sure whether my decision is reasonable.
My questions are:
Is it reasonable to use resque to manage external long-running commands (and log tasks)?
Or should I use another tool (not necessarily a Ruby tool)?
(Extra) Should the task manager and the Sinatra app work separately (and communicate with each other over REST or something), or not?
The jobs are not critical, since I can manually retry failed tasks later.
I am not good at English and my question may be misleading. I appreciate any help :) .
class TaskSpawn
  def initialize()
    @pids = []
  end

  def spawn(command, options = {})
    @opt = {:pgroup => true}
    @pids << Kernel.spawn(command, options)
  end

  def pids()
    return @pids.clone
  end

  def waitany_nohang()
    delete_idx = nil
    ret = nil
    @pids.each_with_index do |p, idx|
      pid, status = Process.waitpid2(p, Process::WNOHANG)
      unless pid.nil?
        delete_idx = idx
        ret = [pid, status]
        break
      end
    end
    if delete_idx
      @pids.delete_at(delete_idx)
      return ret
    else
      # no task finished
      return nil
    end
  end

  def waitall()
    ret = Process.waitall
    raise "internal error" if ret.size != pids.size
    return ret
  end
end
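For context, here is a hypothetical usage sketch of the class above (the job arguments and the polling interval are made up):
spawner = TaskSpawn.new
spawner.spawn("bash heavy-job.sh 1")
spawner.spawn("bash heavy-job.sh 2")
until spawner.pids.empty?
  finished = spawner.waitany_nohang
  if finished
    pid, status = finished
    puts "job #{pid} exited with status #{status.exitstatus}"
  else
    sleep 5   # nothing finished yet, poll again later
  end
end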
Is it possible to display the local error of an integrator step of a variable-step solver in Simulink? I would like to find out why Simulink is taking small integration steps. Since the step size depends on the local error of an integration step, it would be helpful to record the local error. Is this possible?
Have you tried using the Simulink Debugger? If you set breakpoints on the input and output of the block of interest, and run the simulation up to a time when the steps start to become small, you may be able to determine what's happening.
You might also play around with zero crossing detection. A pretty good discussion of its basics can be found here.
This is possible from the debugger (as Phil Goddard hinted). Start the model with the debugger (from the MATLAB console):
>> sldebug mdl
Enable "Solver trace level 1"
>> strace 1
Enable "Break on failed integration step"
>> xbreak
Start simulation:
>> continue
The simulation will then break when the local error is too large. For example:
[TM = 0.035250948751817182 ] Start of Major Time Step
[Tm = 0.035250948751817182 ] [Hm = 0.0009443691112440444 ] Start of Solver Phase
[Tm = 0.03525094875181718 ] [Hm = 0.0009443691112440444 ] Begin Integration Step
[Tn = 0.03525094875181718 ] [Hn = 0.0009443691112440444 ] Begin Newton Iteration
[Tf = 0.03619531786306122 ] [Hf = 0.0009443691112440444 ] Fail [Er = 6.8210e+00 ] [Ix = 1]
Detected integation step failure. Interrupting model execution
The Er value is the local error and Ix is the state index. To find the corresponding block, type:
>> states
With output
Continuous States:
Idx Value (system:block:element Name 'BlockName')
0 -7.96155746500428e-06 (0:0:0 CSTATE 'mdl/x')
1 1.630758262432841e-12 (0:1:0 CSTATE 'mdl/y')
I use the Spyder IDE. Usually, when I am running non-parallelized scripts, I tend to debug using print statements. Depending on which statements are printed (or not), I can see where errors are occurring.
For example:
print "Started while loop..."
doWhileLoop = False
while doWhileLoop == True:
print "Doing something important!"
time.sleep(5)
print "Finished while loop..."
Above, I am missing a line that changes doWhileLoop to False at some point, so I will be stuck perpetually in the while loop, but my print statements let me see where in my code it has hung up.
However, when running scripts that are parallelized, I get no output to the console until after the process has finished. Normally, what I do in this case is attempt to debug with a single process (i.e. temporarily deparallelize the program by running only one task, for instance), but currently, I am dealing with an error that seems to occur only when I am running more than one task.
So, I am having trouble figuring out what this error is using my usual methods -- how should I change my usual debugging practice in order to efficiently debug scripts employing multiprocessing?
Like @roippi said, debugging parallel things is hard. Another tool is to use logging instead of print. Logging gives you severity, timestamps, and most importantly, which process is doing something.
Example code:
import logging, multiprocessing, Queue

def myproc(arg):
    return arg*2

def worker(inqueue, outqueue):
    mylog = multiprocessing.get_logger()
    mylog.info('start')
    for job in iter(inqueue.get, 'STOP'):
        mylog.info('got %s', job)
        try:
            outqueue.put( myproc(job), timeout=1 )
        except Queue.Full:
            mylog.error('queue full!')
    mylog.info('done')

def executive(inqueue):
    total = 0
    mylog = multiprocessing.get_logger()
    for num in iter(inqueue.get, 'STOP'):
        total += num
        mylog.info('got %s\ttotal %s', num, total)

logger = multiprocessing.log_to_stderr(
    level=logging.INFO,
)
logger.info('setup')

inqueue, outqueue = multiprocessing.Queue(), multiprocessing.Queue()
if 0:  # debug 'queue full!' issues
    outqueue = multiprocessing.Queue(maxsize=1)

# prefill with 3 jobs
for num in range(3):
    inqueue.put(num)
# signal end of jobs
inqueue.put('STOP')

worker_p = multiprocessing.Process(
    target=worker, args=(inqueue, outqueue),
    name='worker',
)
worker_p.start()

worker_p.join()
logger.info('done')
Example output:
[INFO/MainProcess] setup
[INFO/worker] child process calling self.run()
[INFO/worker] start
[INFO/worker] got 0
[INFO/worker] got 1
[INFO/worker] got 2
[INFO/worker] done
[INFO/worker] process shutting down
[INFO/worker] process exiting with exitcode 0
[INFO/MainProcess] done
[INFO/MainProcess] process shutting down