I am currently a high school student doing a mentorship project at Hampton University. My group members and I are using Raspberry Pis to study parallel processing and supercomputers. Our project involves varying the clock speed, the number of Pis used, and the quality of the SD card to measure the processing speed of the Raspberry Pi cluster.
The problem I am facing is that I want to write my own benchmark software, similar to the LINPACK benchmark, to test the processing speed. I am currently taking multivariable calculus and linear algebra, and I know the programming languages Java and C. I can operate in Linux at a pretty rudimentary level. Will it be possible for me to create a benchmark program, or do I not have enough knowledge of the field? If it is possible, can I get the steps involved in completing it? I am willing and eager to learn new things, and I am used to learning them from tutorials as well. And I forgot to mention: I have approximately three months to complete it, so the program does not have to be too complicated. Thank you for your time.
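In case it helps to see the scale of such a thing: the core of a LINPACK-style benchmark is just timing a fixed amount of floating-point work and reporting a rate. Here is a minimal sketch in Python; the naive matrix multiply and the problem size are illustrative stand-ins, not how LINPACK itself works (LINPACK solves a dense linear system):

    # Minimal single-node benchmark sketch: time a fixed amount of
    # floating-point work and report MFLOPS. (Naive stand-in for LINPACK,
    # which solves a dense linear system instead.)
    import random
    import time

    def matmul(a, b, n):
        # Naive O(n^3) matrix multiply on plain Python lists.
        c = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for k in range(n):
                aik = a[i][k]
                for j in range(n):
                    c[i][j] += aik * b[k][j]
        return c

    n = 200  # problem size; raise it until a run takes a few seconds
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]

    start = time.perf_counter()
    matmul(a, b, n)
    elapsed = time.perf_counter() - start

    flops = 2.0 * n ** 3  # one multiply + one add per innermost step
    print(f"{elapsed:.2f} s, {flops / elapsed / 1e6:.1f} MFLOPS")

Running the same fixed workload on each cluster configuration and comparing the rates is the benchmark.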
I’m a college student and I’m trying to build an underwater robot with my team.
We plan to use an STM32 and an RPi. We will put our controller on the STM32 and the high-level algorithms (like path planning, object detection, …) on the RPi. The reason we designed it this way is that the controller needs to run fast and deterministically, while the high-level algorithms need more computing power.
But later I found out there are tons of packages on ROS that support IMUs and other attitude sensors. Therefore, I assume many people build their controllers on a board that can run ROS, such as the RPi.
As far as I know, the RPi is slower than the STM32 and has fewer ports to connect sensors and motors, which makes me think the RPi is not a desirable place to run a controller.
So I'm wondering: did I design it all wrong?
Robot applications vary so much, and the suitable structure depends very much on the use case, so it is difficult to give a standard answer; I'll just share my thoughts for your reference.
In general, I think a Linux SBC (e.g. RPi) + MCU controller (e.g. STM32/ESP32) is a good solution for many use cases. I personally use an RPi + ESP32 for a few robot designs, for the following reasons:
Linux is not a good realtime OS; an MCU is good at handling time-critical tasks like motor control and IMU filtering;
Some protection mechanisms need to stay reliable even when the central "brain" hangs or the whole system runs into low voltage;
MCUs are cheaper, smaller, and flexible enough to distribute to any part of the robot, which also supports our modularized design thinking;
Many newer MCUs are actually powerful enough to handle sophisticated tasks and can offload a lot from the central CPU; a minimal sketch of this split follows below.
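To make the split concrete, here is a minimal sketch of the RPi side, assuming the pyserial library and a UART link on /dev/serial0 (the device path, baud rate, and message format here are all assumptions for illustration):

    # RPi side: plan at a leisurely rate, send setpoints to the MCU over
    # UART. The MCU runs the tight realtime control loop; the Pi only
    # supplies targets, so a scheduling hiccup on Linux is harmless.
    import time
    import serial  # pyserial, assumed installed

    link = serial.Serial("/dev/serial0", 115200, timeout=0.1)

    while True:
        # Placeholder for the high-level planner (path planning, object
        # detection, ...) producing a new target.
        depth_setpoint = 1.5  # metres; made-up value
        link.write(f"DEPTH {depth_setpoint:.2f}\n".encode())
        reply = link.readline()  # MCU can acknowledge or report a fault
        time.sleep(0.05)  # ~20 Hz is plenty for setpoint updates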
With the vast array of microcontrollers out there, and even different tiers of Arduinos each providing more power than the last, is there a mathematical way, or some other analytical way, of knowing how much processing power you need to run your program as designed, in order to choose the right micro?
That is, without just trial and error, i.e. without simply trying it and, if it is too slow, buying the next chip up.
I've had to do performance projections for computer systems that did not exist yet. Things like cycle time ratios can only give a very rough guide. Generally, I had to resort to simulation, the nearest I could get to measuring on actual hardware.
That said, you may be able to find numbers for benchmarks similar to your code that will at least give you a starting point.
I would not do it by working up one chip at a time - your code may have a problem that makes it too slow for any feasible chip. I would try to find a chip that is fast enough, and work down if it is much faster than needed.
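As a rough illustration of sizing by analysis before buying anything, here is a sketch where every number is a made-up placeholder you would replace with counts from your own code and the candidate chip's datasheet:

    # Back-of-envelope sizing sketch: estimate the instruction budget from
    # the control-loop requirements, then compare against a candidate
    # chip's throughput. All figures are placeholders.
    loop_rate_hz = 1_000        # how often the main loop must run
    ops_per_loop = 5_000        # rough instruction count per iteration
    required_mips = loop_rate_hz * ops_per_loop / 1e6

    candidate_mips = 48 * 1.0   # e.g. a 48 MHz part at ~1 instr/cycle
    headroom = candidate_mips / required_mips
    print(f"need ~{required_mips:.1f} MIPS, candidate gives {headroom:.1f}x headroom")

If the headroom comes out near 1x, the estimate is too rough to trust and you are back to benchmarks or simulation.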
I'm going to be a college student at 40. I'll be studying IT and plan on doing a bachelor's project. The basic idea is to try to use neural nets to evaluate bias in media. The training data will be political blogs with well-known biases.
What I need is a programming language that can run in parallel on multiple machines that are networked, but not clustered. I have 2 Linux machines and 3 running OS X. I would prefer if the language would compile to binary rather than bytecode or to a VM, but I'll take what I can get. I don't need any GUI libraries, so that's not a constraint. I do most of my programming in Python, but I'm willing to learn another language if it'll make the parallel execution easier. Any suggestions?
I strongly suggest that you consider sticking with Python. Learning a new language, at the same time as you start tackling parallel / distributed computing, may well throw a spanner in your works that you just don't need. I believe that your time will be better spent tackling the issues of building the neural net you want, rather than learning the peculiarities of a new language. And, by reputation, Python is eminently suitable for what you plan. It does, of course, fail your requirement that it should compile to binary but I'm not sure where that is coming from.
When you write "parallel programming over multiple machines without clustering," I'm thinking: oh, he means distributed programming. I tend towards the view that parallel computing is a niche within distributed computing, in part defined by the homogeneity (from the programmer's point of view) of the resources used. This apparent homogeneity is aided tremendously if it is supported by homogeneity of hardware, so that there is little gap between vision and reality.
If what you really have is an assortment of computers of different specs and different OSes and communicating over a non-dedicated network then I fear that you will find it difficult building the illusion of homogeneity for the programmer (ie for yourself) and would be better setting out to build a distributed system from the get go.
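To show what I mean by starting distributed, here is a minimal sketch using only the Python standard library (the hostnames, port, and work function are placeholders):

    # worker.py - run one of these on each machine (stdlib only).
    from xmlrpc.server import SimpleXMLRPCServer

    def score_chunk(texts):
        # Placeholder for the real work, e.g. scoring a batch of blog posts.
        return [len(t) for t in texts]

    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(score_chunk)
    server.serve_forever()

    # coordinator.py - farm chunks out to the workers from one machine.
    from concurrent.futures import ThreadPoolExecutor
    import xmlrpc.client

    HOSTS = ["linux1", "linux2", "mac1"]  # placeholder hostnames
    chunks = [["some text", "more text"]] * len(HOSTS)  # placeholder data

    def run(host, chunk):
        return xmlrpc.client.ServerProxy(f"http://{host}:8000").score_chunk(chunk)

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run, HOSTS, chunks))
    print(results)

Nothing here cares whether a node is Linux or OS X, which is exactly the point: the heterogeneity hides behind a plain request/response boundary.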
I just plain disagree with the answer telling you to pick up C and MPI; I think you'll make progress much faster with Python.
Good luck with your studies.
Oh, and if you just won't take my advice to forget about a new programming language, consider Haskell and Erlang.
Sounds like an interesting project. However, thinking laterally, wouldn't a GPU-based system (i.e. massively parallel) be more the soup du jour? Hence something like C + CUDA, perhaps?
I don't know if it's still around, but OCCAM (from the Transputers of old) was designed to be a parallel language, with its PAR and SEQ constructs. I've just read of a version for Linux.
That sounds like C + MPI to me.
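If you want the MPI model without leaving Python, mpi4py wraps it. A minimal sketch, assuming an MPI implementation and mpi4py are installed:

    # sum_demo.py - run with: mpirun -n 4 python sum_demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank computes a partial sum; rank 0 combines them.
    partial = sum(range(rank, 1_000_000, size))
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print("total =", total)

The same program runs on one machine or across a hostfile of networked machines.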
I just created a VERY large neural net, albeit on very powerful hardware, and imagine my shock and disappointment when I realized that NeuralFit[] from the NeuralNetworks` package only seems to use one core, and not even to its fullest capacity. I was heartbroken. Do I really have to write an entire NN implementation from scratch? Or did I miss something simple?
My net took 200 inputs to 2 hidden layers of 300 neurons to produce 100 outputs. I understand we're talking about trillions of calculations, but as long as I know my hardware is the weak point, that can be upgraded. It should handle training of such a net fairly well if left alone for a while (4 GHz 8-thread machine with 24 GB of 2000 MHz CL7 memory running RAID-0 SSD drives on SATA-III, I'm fairly sure).
Ideas? Suggestions? Thanks in advance for your input.
I am the author of the Neural Networks package. It is easy to parallelize the evaluation of a neural network given the input; that is, to compute the output of the network given the inputs (and all the weights, the parameters of the network). However, this evaluation is not very time consuming, and it is not very interesting to parallelize for most problems. On the other hand, the training of the network is often time consuming and, unfortunately, not easy to parallelize. The training can be done with different algorithms, and the best ones are not easy to parallelize. My contact info can be found at the product's homepage on the Wolfram web site. Improvement suggestions are very welcome.
The latest version of the package works fine on versions 9 and 10 if you switch off the Suggestions Bar (under Preferences). The reason is that the package uses the old HelpBrowser for the documentation, and that crashes in combination with the Suggestions Bar.
Yours, Jonas
You can contact the author of the package directly; he is a very approachable fellow and might be able to make some suggestions.
I'm not sure how you wrote your code or how it is written inside the package you are using, but try to use vectorization; it really speeds up linear algebra computations. The ml-class.org course shows how it's done.
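For instance, a minimal NumPy sketch of the vectorization idea, with shapes borrowed from the net described above (NumPy here just stands in for whatever linear-algebra layer is underneath):

    # One matrix product replaces an explicit loop over neurons, and the
    # BLAS library behind NumPy can use multiple cores.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((10_000, 200))  # a batch of 200-dim inputs
    W = rng.standard_normal((200, 300))     # first hidden layer weights
    b = rng.standard_normal(300)

    # Loop version: one neuron at a time (slow).
    H_loop = np.empty((X.shape[0], W.shape[1]))
    for j in range(W.shape[1]):
        H_loop[:, j] = X @ W[:, j] + b[j]

    # Vectorized version: the whole layer in one call (fast).
    H_vec = X @ W + b

    assert np.allclose(H_loop, H_vec)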
I need some advice on writing a program that will be used as part of a psychology experiment. The program will track small changes in reaction time. The experimental subject will be asked to solve a series of very simple math problems (such as "2x4=" or "3+5="). The answer is always a single digit. The program will determine the time between the presentation of the problem and the keystroke that answers it. (Typical reaction times are on the order of 200-300 milliseconds.)
I'm not a professional programmer, but about twenty years ago, I took some courses in PL/I, Pascal, BASIC, and APL. Before I invest the time in writing the program, I'd like to know whether I can get away with using a programming package that runs under Windows 7 (this would be the easiest approach for me), or whether I should be looking at a real-time operating system. I've encountered conflicting opinions on this matter, and I was hoping to get some expert consensus.
I'm not relishing the thought of installing some sort of open-source Linux distribution that has real-time capabilities -- but if that's what it takes to get reliable data, then so be it.
Affect seems like it could save you the programming: http://ppw.kuleuven.be/leerpsy/affect4/index2.php. Concerning accuracy on a Windows machine, read this.
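If you do end up writing it yourself, the measurement loop is small. A console sketch in Python (input() only fires on Enter, so this illustrates the flow only; a real experiment needs raw key events from a toolkit such as PsychoPy, plus the timing caveats in the link above):

    # Present a problem, time until the answering keystroke is submitted.
    import random
    import time

    problems = {"2x4=": "8", "3+5=": "8"}
    prompt, answer = random.choice(list(problems.items()))

    start = time.perf_counter()
    response = input(prompt)
    rt_ms = (time.perf_counter() - start) * 1000

    print(f"correct: {response.strip() == answer}, RT = {rt_ms:.0f} ms")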