Find Real Fine-grained Parallel Scientific Applications that use MPI + OpenMP Tasks

I am currently working on a project in which we have slightly modified the OpenMP code. We have searched Google and the OpenMP forum but could not find real fine-grained parallel scientific applications that are based on MPI + OpenMP tasking.
We want to evaluate our changes with some real scientific applications that use MPI + OpenMP tasking and understand how they perform.
It would be greatly appreciated if someone could point us to some open-source projects that we could just download and get numbers from. If a suitable application is not open source, we can also try to contact its authors.
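To make clear what we mean by MPI + OpenMP tasking, here is a minimal sketch of the pattern (our own toy illustration, not taken from any real application): each MPI rank creates fine-grained OpenMP tasks over its local data, and the per-rank results are then combined with an MPI reduction.

```c
/* Minimal illustrative sketch of the MPI + OpenMP tasking pattern:
 * each rank sums its local array using fine-grained OpenMP tasks,
 * then the per-rank sums are combined with MPI_Reduce.
 * Build with something like: mpicc -fopenmp example.c */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N     1024
#define CHUNK 64

int main(int argc, char **argv)
{
    int rank = 0, size = 1;
    double local[N], sum = 0.0, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < N; i++)
        local[i] = (double)(rank * N + i);

    /* One thread creates the fine-grained tasks; the whole team executes them. */
    #pragma omp parallel
    #pragma omp single
    {
        for (int i = 0; i < N; i += CHUNK) {
            #pragma omp task firstprivate(i) shared(local, sum)
            {
                double partial = 0.0;
                for (int j = i; j < i + CHUNK && j < N; j++)
                    partial += local[j];
                /* Accumulate the chunk result into the rank-local sum. */
                #pragma omp atomic
                sum += partial;
            }
        }
        /* Wait until all local tasks have finished. */
        #pragma omp taskwait
    }

    /* Combine the per-rank sums across all MPI ranks. */
    MPI_Reduce(&sum, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f (ranks: %d)\n", global, size);

    MPI_Finalize();
    return 0;
}
```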

Related

Is there a FreeRTOS how-to for Cortex-M7 on supervising/tracing a system with a few tasks (which kernel features to use)?

I'm slowly assembling the picture of how to use FreeRTOS in a real world application.
I've read about a lot of individual features (stack supervision, memory, malloc, etc.).
But I haven't found good guidance anywhere on what "supervision" to use to be able to follow the performance of the tasks and the system, even after the debugger is no longer connected...
Can anyone help with some pointers or advice?
What features do you activate when designing a FreeRTOS app?
How do you supervise what is going on with the tasks?
I'd rather read something short, to try things feature by feature and see how they work. Something more for beginners. I understand that I have the documentation, but what I'm after is a gradual introduction to FreeRTOS with examples. Maybe I have overlooked a good source to read...
Let me illustrate with a few questions I don't have the answers to:
Should I have a separate supervision task that gathers information about the other tasks (state, memory, ...)?
What features should be used to supervise a FreeRTOS-based app in a "professional" way?
Should I use ITM/SWO, or maybe RTT?
Do you leave a serial console on the system to supervise it?
Thanks in advance,
regards.
I'm slowly assembling the picture of how to use FreeRTOS in a real world application. I've read about a lot of individual features (stack supervision, memory, malloc, etc.). [...]
Can anyone help with some pointers or advice?
On the FreeRTOS website you will find a lot of introductory documentation as well as material that explains individual features in depth.
I'd rather read something short, to try things feature by feature and see how they work. Something more for beginners. I understand that I have the documentation, but what I'm after is a gradual introduction to FreeRTOS with examples. Maybe I have overlooked a good source to read...
There is also a lot of third-party documentation. You may want to read general literature about RTOSes and how to use them: first, because much of it refers to one of the best-known open-source implementations, FreeRTOS; second, because when working with an RTOS you have to take care of virtually the same aspects regardless of which RTOS implementation is used.
How do you supervise what is going on with the tasks?
This depends on the purpose of supervision:
If the system that runs the RTOS is critical in some sense (e.g., it implements functional safety or security requirements), you will probably need certain supervision measures at runtime, depending on the type and level of criticality.
Violating the expectations of such supervision usually causes the system to switch off and fall into some kind of safe/secure operating mode.
More commonly, you need supervision to debug or trace the application during development and testing, to gain insight into why certain errors appear in the system behaviour, or into how long the tasks/ISRs in the system take to execute and how long they block other contexts while doing so.
In this case you can often keep a debug/trace adapter attached to the system the whole time.
A violated expectation here points the developer to a remaining error in the system under development/test.
For many kinds of applications, you may have to measure (and log) the task timings over longer periods in order to get reliable statistics under controlled laboratory (or real-life) conditions.
Then you usually cannot keep a debug/trace adapter attached to the embedded system, because it would disturb the procedures under test, so a logging concept/implementation is needed.
You have to evaluate the purpose of supervision first. Then you can search this board and others for more specific help and post further questions you may have.
But I haven't found good guidance anywhere on what "supervision" to use to be able to follow the performance of the tasks and the system, even after the debugger is no longer connected...
What features do you activate when designing a FreeRTOS app?
Whatever your application requires (see above). One by one!
Let me illustrate with a few questions I don't have the answers to:
Should I have a separate supervision task that gathers information about the other tasks (state, memory, ...)?
What features should be used to supervise a FreeRTOS-based app in a "professional" way?
Should I use ITM/SWO, or maybe RTT?
Do you leave a serial console on the system to supervise it?
This all depends on the answers you find about the purpose of supervision.
The professional way to deal with this is a top-down approach: focus on the system requirements (and the development needs), and design/implement everything that is necessary to fulfil them.
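Regarding the "separate supervision task" question: one common approach (shown here as a minimal, hedged sketch rather than a recommendation) is a low-priority monitor task that periodically samples the state, stack headroom and run-time counters of all tasks via uxTaskGetSystemState(). This requires configUSE_TRACE_FACILITY (and, for the run-time counters, configGENERATE_RUN_TIME_STATS) to be enabled in FreeRTOSConfig.h; the names vMonitorTask and LOG_PRINTF are placeholders for your own task and output channel (UART console, SWO/ITM, RTT, or a RAM log buffer).

```c
/* Hedged sketch of a supervision/monitor task; vMonitorTask and LOG_PRINTF
 * are placeholder names, not part of the FreeRTOS API.
 * Requires configUSE_TRACE_FACILITY == 1 (and configGENERATE_RUN_TIME_STATS == 1
 * if the run-time counters should be populated). */
#include "FreeRTOS.h"
#include "task.h"

#define LOG_PRINTF(...)   /* route to UART, SWO/ITM, RTT or a RAM log buffer */

static void vMonitorTask(void *pvParameters)
{
    (void) pvParameters;

    for (;;) {
        UBaseType_t uxCount = uxTaskGetNumberOfTasks();
        TaskStatus_t *pxStatus = pvPortMalloc(uxCount * sizeof(TaskStatus_t));

        if (pxStatus != NULL) {
            /* Snapshot of every task; pass NULL when the total run time is not needed. */
            uxCount = uxTaskGetSystemState(pxStatus, uxCount, NULL);

            for (UBaseType_t i = 0; i < uxCount; i++) {
                /* Task name, state, remaining stack headroom and accumulated run time. */
                LOG_PRINTF("%-16s state=%d stack_free=%u runtime=%lu\r\n",
                           pxStatus[i].pcTaskName,
                           (int) pxStatus[i].eCurrentState,
                           (unsigned) pxStatus[i].usStackHighWaterMark,
                           (unsigned long) pxStatus[i].ulRunTimeCounter);
            }
            vPortFree(pxStatus);
        }

        vTaskDelay(pdMS_TO_TICKS(1000));   /* sample roughly once per second */
    }
}
```

Created with xTaskCreate() at a low priority, such a task keeps reporting after the debug adapter is unplugged; whether that is enough again depends on the purpose and criticality considerations above.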
If you are looking for a first insight into how to activate ITM/SWO tracing of FreeRTOS for educational purposes, I can recommend the tutorial on the Atollic blog, a beginners' introduction spread over several free articles, step by step.
For RTOS architecture hints, you may also like YouTube introductions such as the Beningo Engineering channel, for example.

RTOS Alongside Windows

I have a question about a family of software products (one example being INtime) that let you run a real-time operating system in parallel with Windows.
I have a reasonable grasp on how Windows works, including kernel/driver/application security rings etc. Similarly, I know how a RTOS runs on a dedicated system.
The Simple Question:
How do these manage to coexist without fighting over hardware or running into similar problems? How is the allocation of resources made, and how is this integrated with Windows?
Slightly more complicated:
What steps would I have to take if I wanted to develop something similar myself? Are there any open-source embodiments of this paradigm I can inspect to gain a better understanding?

Parallel programming service on internet

Here is my question:
Is there any service or technology for running a parallel algorithm across other people's computers without knowing those machines?
For example: I write a parallel algorithm. My friends install a simple client app, and whenever they have an internet connection they can help my calculation with their spare processor capacity. I would like to see them as additional cores in my CPU.
If there is no technology like that, are there any unsolvable problems in developing one? (I know there must be a lot of problems with code transfer, operating systems, and compatibility.)
I believe that you can use BOINC to set up your own volunteer computing project. But I have no experience of this to report.
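For a rough, untested idea of what the BOINC approach involves (based on its documented API, so treat the details as assumptions to verify against the BOINC docs): your computation is packaged as a worker program linked against the BOINC API library, and volunteers' clients fetch work units from your project server. The worker itself looks roughly like this:

```c
/* Rough sketch of a BOINC-style volunteer-computing worker.
 * Assumption: the standard BOINC API (boinc_api.h); build as C/C++ and link
 * against the BOINC API libraries (e.g. -lboinc_api -lboinc).
 * The dummy loop stands in for your actual parallel algorithm. */
#include <stdio.h>
#include "boinc_api.h"

int main(void)
{
    if (boinc_init() != 0)                   /* attach to the BOINC client runtime */
        return 1;

    const long steps = 1000000L;
    double result = 0.0;

    for (long i = 0; i < steps; i++) {
        result += (double)i * 1e-9;          /* placeholder computation */
        if (i % 10000 == 0)
            boinc_fraction_done((double)i / (double)steps);  /* report progress */
    }

    printf("result = %f\n", result);
    boinc_finish(0);                         /* report success; does not return */
    return 0;
}
```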

Performant and Easy to Use Non-GPLed Genetic Programming Library

I would like to build an application that uses Genetic Programming to figure out what exactly the user is asking. It's a programming application for non-programmers. Basically the user feeds the application a bunch of examples, and from the examples the application will derive the rules required to build a new program for the user's own use/distribution.
I've built prototypes using linear regression, but they could only solve simple problems. This week I experimented with genetic programming using Pyevolve, and it worked far better than I expected! However, I suspect that being written in pure Python made it take dozens of seconds to solve an example, whereas in my application I have at most a couple of seconds.
I've been trying to find a more performant library that is as easy to use as Pyevolve but cannot find a suitable one. I tried Open BEAGLE, but after getting an example running, and hours of poring through the documentation, I still cannot find a way to actually pick an individual out of the "Vivarium". I've seen people recommend GAUL, but that is a GPL library and would limit how I can license my future application. I've tried to download lil-gp, but the FTP download links are locked behind a university's login screen.
Since the application will be a Mac OS X Cocoa application, I did not consider Java, C#, or MATLAB GP libraries.
As a developer of Open BEAGLE, I still recommend that you use this library if you are looking for a fast GP library. Retrieving your best individual can be done by running a second program that parses the XML file logged at the end of the evolution. Otherwise, you can access it through the Vivarium.getHallOfFame() method, sort it, and access the first element with HallOfFame.operator[]. The Member you get is a struct containing the individual along with the generation in which it was recorded and the deme it came from.
That way you can get access to the best individual that ever lived in your evolution.
If you have specific questions on Open BEAGLE, I recommend asking them directly on the developer list; we usually answer very quickly.
Also, if you wish to try a very different library in Python, I recommend DEAP, which allows a lot more flexibility than Pyevolve. Some GP examples run much faster under PyPy than under standard Python.
If you ask the key developer of the GAUL project for permission to use an alternative license agreement, then he* is quite likely to agree.
*"he" is me.

What is the "Processing" Programming language used for? [closed]

The language site: http://processing.org/
What are people using Processing for? I have the opportunity to learn this in a classroom setting and am wondering if it will be a good use of my time.
Yes, it is useful and not a waste of time. I'm using Processing mainly for building proofs of concept for visualisations and graphic experiments. The time between an idea in my head and working code on my laptop is short, mainly because Processing does not put many obstacles in the way.
The ease of experimenting with things in Processing is an advantage when learning to program. Processing is essentially a front end to Java: Processing code is translated into Java code and compiled before it runs.
Processing comes with a small but capable development environment (IDE), excellent documentation, a large library of extensions and a significant set of examples and demos.
Finally, I strongly recommend the book Processing: A Programming Handbook for Visual Designers and Artists by Casey Reas and Ben Fry, the authors of Processing. It's a beautiful book, carefully edited and full of sources for inspiration.
Processing has been used for hundreds of high-end projects in a wide range of fields, from multimedia installations to information visualization. It is not a toy or an educational exercise, despite its roots as a teaching tool.
The core application framework simplifies most common multimedia needs (OpenGL, QuickTime, PDF export, camera capture), removing the project overhead involved in the boring task of setting up basic applications. It uses an extensible code structure that has allowed the creation of dozens of useful libraries for everything from 3D import/export to complex geometry synthesis.
So no, it is not a waste of time.
A reference from my own work:
Stockspace
Anything that beautiful could never be a waste of time. :) It's probably the leading tool in its space, which would be something like "declarative languages for visualizing data". (Though I'm told it can do more than that.) It has an O'Reilly book, and that's always a great sign.
"Useful" and "beautiful" do not describe the library (Processing is not a language) but the programs written with it. They are usually beautiful, but they can be useful, too. Perhaps browsing its showcase can give you a hint about some useful programs. Processing is very well suited to visualization apps, so it can indeed be useful for that.
Now, that's for the usefulness of the applications. For the usefulness of programming them, I think it's a pretty cool way to show more visually how some fundamental concepts of programs work, which you may find enjoyable. Processing is being used a lot nowadays to teach fundamentals of programming; it'd be cool to learn recursion by making fractals.
I have used Processing for many interactive installations and found it really useful, because you get results really fast. Programming visual effects is really easy and elegant.
Some examples to watch:
http://www.youtube.com/watch?v=Ziv8Q5N7mSU
http://www.youtube.com/watch?v=zrT5uJox0J0
http://www.youtube.com/watch?v=Y58wBAp7mac
http://www.youtube.com/watch?v=EZp5HsFKxCs
http://www.youtube.com/watch?v=d4LcfsHQnYw
If you are dealing with a lot of data (many bitmaps, videos and sound) you might consider its limitations. I was happy to use it and will continue using it for certain projects.
So, in conclusion: Processing is no waste of time. It's a really useful language for real-world applications (in its own domain, of course).
I agree with what the other posters have said, but I would add that every development tool has advantages and disadvantages. While it is easy to jump right in and visualize stuff with Processing, the drawback is that it is hard to incorporate Processing code into another project. Tools are in development to make this easier, but if you want a graphical environment that works inside your own application, most of the time Processing saves you when prototyping is lost again in re-adapting the code or getting it to work.
Processing is definitely useful for many purposes.
I think the post on "Processing for Programmers" by Eliot Lash answers your question very well, and in much detail. I'll give some highlights based on my experience below, but I recommend you have a look at the post, which also covers practicalities.
Your question has to do with the perception of Processing as a simple programming language and environment that doesn't require much experience to use. However, Processing is also a neat tool that makes life easier for more experienced developers, and the skills you develop with Processing can be definitely useful outside the "classroom" or prototyping contexts.
First off, as a programming language, Processing acts merely as a "layer" on top of Java that simplifies things. All Processing code is translated to Java code first. This means you can write Java code and import Java libraries in your Processing code, within (or outside) the Processing IDE. Pedagogically, this helps Processing serve as a "gateway" programming language into Java and other fully-featured languages. You can start coding with Processing, slowly make way into Java in a familiar environment, and then progress to more advanced tools.
You can also import Processing functionality into your Java projects (see here and here). This lets you exploit the speed and simplicity of Processing for multimedia etc., in the context of complex applications that require a more fully-featured programming language.
On top of these innate features, over the years, people have developed tools, libraries, etc. that can make your Processing skills useful in many contexts. Some examples:
Web/browser: Processing.js is a JavaScript library that lets you run Processing code verbatim in the browser. p5.js is a library for writing JavaScript based on Processing principles and functionality.
Mobile: You can develop Android apps using Processing by using the IDE in "Android mode".
Electronics, IoT...: The Arduino programming language and environment are very, very similar to Processing.
