OpenMDAO External Code Component with MPI - parallel-processing

I am trying to optimize an airfoil using OpenMDAO and SU2. I have multiple design points that I want to run in parallel. I managed to do that with a ParallelGroup and XFOIL, but I now want to use SU2 instead of XFOIL.
The big problem is that SU2 itself is started via MPI (mpirun -np 4 SU2_CFD config.cfg). I want OpenMDAO to divide all the available processes evenly among the design points and then run one SU2 instance per design point. Each SU2 instance should use all the processes that OpenMDAO allocated to that design point.
How could I do that?
Probably wrong approach:
I played around with the ExternalCodeComp. But if this component gets 2 processes, it is run twice. I don't want to run SU2 twice; I want to run it once, using both available processes.
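To illustrate, the even split I have in mind would look something like this (a plain-Python sketch of the bookkeeping, not OpenMDAO code):

```python
def procs_per_point(total_procs, n_points):
    """Divide total_procs as evenly as possible among n_points design points."""
    base, extra = divmod(total_procs, n_points)
    # the first `extra` points each get one leftover process
    return [base + 1 if i < extra else base for i in range(n_points)]

# e.g. 10 available processes spread over 3 design points
print(procs_per_point(10, 3))  # -> [4, 3, 3]
```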
Best Regards
David

I don't think your approach to wrapping SU2 is going to work, if you want to run it in parallel as part of a larger model. ExternalCodeComp is designed for file-wrapping and spawns sub-processes, which doesn't give you any way to share MPI communicators with the parent process (that I know of anyway).
I'm not an expert in SU2, so I can't speak to their Python interface. But I'm quite confident that ExternalCodeComp isn't going to give you what you want here. I suggest you talk to the SU2 developers about their in-memory interface.

I couldn't figure out a simple way, but I discovered ADflow: https://github.com/mdolab/adflow.
It is a CFD solver that ships with an OpenMDAO wrapper, so I am going to use that.

Related

Driving motors with image processing on raspberry pi

I have a question about processing an image while driving a motor. I did some research; apparently I need to use multiprocessing. However, I couldn't find out how to run the two processes together.
Let's say I have two functions, imageProcessing() and DrivingMotor(). With information coming from imageProcessing(), I need to update my DrivingMotor() function simultaneously. How can I handle this?
With multiprocessing, you must create two processes (a process is a program in execution) and implement inter-process communication so they can talk to each other; that is tedious, hard, and less efficient than multithreading. Therefore I think you should multithread: it is efficient, communication between threads is easy, and you can use shared data for communication.
You should create two threads: one thread handles imageProcessing() and the other handles DrivingMotor(). The operating system handles the execution of the threads and runs them concurrently.
There is a basic multithreading tutorial at the link below:
https://www.tutorialspoint.com/python/python_multithreading.htm
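A minimal sketch of the two-thread setup described above. The function names mirror the question, the frame data is made up, and a Queue is used for the thread-to-thread communication (a bit safer than raw shared globals):

```python
import queue
import threading

def image_processing(commands, frames):
    # pretend to analyze each frame and emit a steering command
    for frame in frames:
        commands.put(f"steer:{frame}")
    commands.put(None)  # sentinel: no more frames

def driving_motor(commands, log):
    while True:
        cmd = commands.get()
        if cmd is None:
            break
        log.append(cmd)  # here you would actually update motor speed/direction

commands = queue.Queue()
log = []
producer = threading.Thread(target=image_processing, args=(commands, [1, 2, 3]))
consumer = threading.Thread(target=driving_motor, args=(commands, log))
producer.start()
consumer.start()
producer.join()
consumer.join()
print(log)  # -> ['steer:1', 'steer:2', 'steer:3']
```

The motor thread reacts to each command as it arrives, so the two loops run concurrently without any explicit locking.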

Is it suitable to use MPI_Comm_split when assigning different jobs to different group?

I'm writing an MPI program where all processes are divided into two groups. Each group does a different job. For example, processes in group A do some computation and communicate with each other, while processes in group B do nothing. Should I use MPI_Comm_split here?
I'd prefer to add a comment but I'm new to stack overflow so don't have sufficient reputation ...
As already mentioned, sub-communicators are essential if you want to call collectives. Even without that, they'd be recommended as they'll make development easier. For example, if you try and send a message outside of group A then this will fail with a sub-communicator, but could cause your code to hang/misbehave if everyone stays in COMM_WORLD.
However, I would be very careful of going down the MPMD route as it may not be portable between systems and makes launching the program more complicated. Having a single MPI executable is the standard and simplest model.
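With mpi4py the split might look like this (a sketch; the choice of two ranks for group A is an assumption, and the script only does real work when launched under mpiexec):

```python
def color_for_rank(rank, n_group_a):
    # ranks 0..n_group_a-1 form group A (color 0); the rest form group B (color 1)
    return 0 if rank < n_group_a else 1

if __name__ == "__main__":
    from mpi4py import MPI  # requires mpi4py; run with e.g. mpiexec -n 4 python split.py
    world = MPI.COMM_WORLD
    color = color_for_rank(world.rank, n_group_a=2)
    sub = world.Split(color, key=world.rank)
    if color == 0:
        # group A computes and communicates on `sub` only,
        # so collectives here never involve group B
        total = sub.allreduce(world.rank)
    # group B (color 1) can idle or do unrelated work on its own sub-communicator
```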

how to use stanford parser with threads

Hello, I want to use the Stanford parser with threads but I don't know how to do that with a thread pool. I want all the threads to run:
LexicalizedParser.apply(Object in)
but I don't want to create a new LexicalizedParser object every time, because
lp = new LexicalizedParser("englishPCFG.ser.gz");
takes about 2 seconds for each object.
What can I do?
Thanks!
Guess it's too late, but a thread-safe version is described here: http://nlp.stanford.edu/software/lex-parser.shtml
You can use ThreadLocal.
It lets you keep one parser instance per thread, so no parser instance is ever used from more than one thread.
Usually it shouldn't create more instances than the number of CPU cores you have.
For me that is about 4-5 instances (with Hyper-Threading disabled on my quad-core).
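The same one-instance-per-thread pattern, sketched in Python with threading.local (ExpensiveParser is a made-up stand-in for the slow-to-construct parser):

```python
import threading

class ExpensiveParser:
    """Stand-in for a parser that is expensive to construct (hypothetical)."""
    def parse(self, sentence):
        return sentence.split()

_local = threading.local()

def get_parser():
    # lazily build one parser per thread, then reuse it on every call
    if not hasattr(_local, "parser"):
        _local.parser = ExpensiveParser()
    return _local.parser
```

Each worker thread pays the construction cost exactly once; repeated calls from the same thread always return the same instance, while different threads never share one.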
P.S. Not related to Stanford NLP: sometimes poorly implemented classes contain static fields and modify them in a non-thread-safe way. A general safe parallelization approach for such implementations is to:
move the computation into a separate process;
launch as many processes as you have CPU cores;
use an IPC technique to communicate between the main and background processes.
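The process-based approach in the P.S. can be sketched with a Python worker pool (parse_in_worker stands in for the non-thread-safe computation, and the pool handles the IPC for you):

```python
import multiprocessing as mp

def parse_in_worker(sentence):
    # runs in a separate process, so any static/global state the
    # underlying library mutates is isolated per worker
    return sentence.upper()

if __name__ == "__main__":
    # one worker per CPU core, matching the recommendation above
    with mp.Pool(processes=mp.cpu_count()) as pool:
        results = pool.map(parse_in_worker, ["hello world", "foo bar"])
        print(results)  # -> ['HELLO WORLD', 'FOO BAR']
```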

NSThread or pythons' threading module in pyobjc?

I need to do some network-bound calls (e.g., fetch a website) and I don't want them to block the UI. Should I use NSThread or Python's threading module when working in PyObjC? I can't find any information on how to choose one over the other. Note, I don't really care about Python's GIL since my tasks are not CPU-bound at all.
It will make no difference, you will gain the same behavior with slightly different interfaces. Use whichever fits best into your system.
Learn to love the run loop. Use Cocoa's URL-loading system (or, if you need plain sockets, NSFileHandle) and let it call you when the response (or failure) comes back. Then you don't have to deal with threads at all (the URL-loading system will use a thread for you).
Pretty much the only time to create your own threads in Cocoa is when you have a large task (>0.1 sec) that you can't break up.
(Someone might say NSOperation, but NSOperationQueue is broken and RAOperationQueue doesn't support concurrent operations. Fine if you already have a bunch of NSOperationQueue code or really want to prepare for working NSOperationQueue, but if you need concurrency now, run loop or threads.)
I'm more fond of the native Python threading solution, since I can join and keep references to threads. AFAIK, NSThread doesn't support joining and cancelling, and you can get a variety of things done with Python threads.
Also, it's a bummer that NSThread entry points can't take multiple arguments; there are workarounds (like passing an NSDictionary or NSArray), but it's still not as elegant or simple as invoking a thread with its arguments laid out in order.
But yeah, if the situation demands NSThread, there shouldn't be any problem at all. Otherwise, it's fine to stick with native Python threads.
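Whichever thread API you pick, the non-blocking-fetch pattern is the same; a small sketch with Python threads, where fetch and on_done are placeholders for your networking call and your completion callback:

```python
import threading

def fetch_in_background(fetch, url, on_done):
    """Run the blocking fetch off the main thread; deliver the result via callback."""
    def worker():
        on_done(fetch(url))
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t  # caller may join() or just let it finish
```

In PyObjC the callback should hop back to the main thread (e.g. via performSelectorOnMainThread_withObject_waitUntilDone_) before touching any UI objects.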
I have a different suggestion, mainly because Python threading suffers under the GIL (Global Interpreter Lock), especially when you have more than one CPU core. There is a video presentation by a Google employee that goes into this in excruciating detail, but I cannot find it right now.
Anyway, you may want to think about using the subprocess module instead of threading (have a helper program that you can execute, or use another binary on the system), or use NSThread; it should give you more performance than what you can get with CPython threads.

Worker threads in Ruby

I am writing a simple memory game using ruby + qt (trying to get away from c++ for while...)
In order to allow a X second timeout to view two open pieces, I need either timers or do the work in a background thread.
What is the simplest way of implementing this without reinventing the wheel?
Ruby threads? Qt threads? Qt timers?
I don't know if it is the best solution, but:
block=Proc.new{ Thread.pass }
timer=Qt::Timer.new(window)
invoke=Qt::BlockInvocation.new(timer, block, "invoke()")
Qt::Object.connect(timer, SIGNAL("timeout()"), invoke, SLOT("invoke()"))
timer.start(1)
This makes Ruby threads work! Adjust start(x) to your needs.
The decision between Qt threads/timers and Ruby ones is probably a personal one, but you should remember that Ruby threads (in MRI 1.8) are green: they are implemented inside the Ruby interpreter and cannot scale across multiple processor cores. For a simple memory game with a timer, though, you probably don't need to worry about that.
Although somewhat unrelated, Midiator, a Ruby interface to MIDI devices uses Ruby threads to implement a timer.
Also, have a look at Leslie Viljoen's article; he says that Ruby's threads lock up while Qt form widgets are waiting for input. He also provides some sample code for Qt timers (which look quite easy and appropriate for what you are doing).
Thanks.
Solved it using QTimer::singleShot.
Sufficient in my case: it fires a one-shot timer every time two tiles are displayed.
