I have to implement a lottery scheduling algorithm for the kernel. This question on SO is quite helpful, but it is about an older version of the kernel. Moreover, I would like to know where the other scheduling algorithms are implemented. I mean, where can I find the source code related to, say, round-robin scheduling, in the kernel source?
Any help appreciated.
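For orientation: in recent kernels the stock scheduling classes live under kernel/sched/ (e.g. fair.c for CFS and rt.c for the SCHED_FIFO/SCHED_RR round-robin policies). The heart of lottery scheduling itself is only a weighted random draw over the runnable tasks; here is a minimal user-space sketch in Python (the Task class and ticket counts are invented for illustration, this is not kernel code):

    import random

    class Task:
        """Hypothetical runnable task carrying a ticket count (its share weight)."""
        def __init__(self, name, tickets):
            self.name = name
            self.tickets = tickets

    def lottery_pick(run_queue):
        """Draw a random ticket, then walk the queue until the running
        total of tickets covers the winning number."""
        total = sum(t.tickets for t in run_queue)
        winner = random.randrange(total)        # winning ticket in 0 .. total-1
        counter = 0
        for task in run_queue:
            counter += task.tickets
            if winner < counter:
                return task

    # Task A holds 75% of the tickets, so it should win about 75% of the draws.
    queue = [Task("A", 75), Task("B", 25)]
    print(lottery_pick(queue).name)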
Firstly, I'd like to mention I wasn't sure where this question should go. It may be better off on a sister site, but I didn't know which one, so if it is in the wrong category, would someone be able to migrate it?
I'm conducting some research into how heterogeneous architectures work in terms of the short-term scheduling queue. I know such an architecture is a collection of different processors such as CPUs and GPUs.
I also found some information regarding asymmetric scheduling. I know homogeneous architectures use symmetric scheduling, but I'm struggling to understand how scheduling works on heterogeneous ones.
I am just starting out studying machine learning and am currently doing Andrew Ng's course on Coursera. I am going through the course but am a bit lost. It will make studying all those algorithms and the theory a lot more rewarding if I can see some use cases for them.
For example, the first topic I read about was gradient descent, and then linear regression and logistic regression. Are these used directly in practice, or are other algorithms like k-means and kernel density estimation used instead? I guess I am trying to get real-world (software engineering, data mining) examples of these topics. Can someone suggest a post that explains the usage of any machine learning algorithm(s)? It would be greatly helpful.
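For what it's worth, gradient descent and linear regression are small enough to see end to end in a few lines. A minimal sketch with made-up data (plain NumPy, fitting y ≈ w*x + b by minimizing mean squared error):

    import numpy as np

    # Made-up data: y is roughly 3*x + 1 plus noise.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=200)
    y = 3 * x + 1 + rng.normal(0, 0.5, size=200)

    w, b = 0.0, 0.0
    lr = 0.01                                  # learning rate
    for _ in range(5000):
        err = w * x + b - y                    # prediction error
        grad_w = 2 * np.mean(err * x)          # d(MSE)/dw
        grad_b = 2 * np.mean(err)              # d(MSE)/db
        w -= lr * grad_w
        b -= lr * grad_b

    print(w, b)                                # should land close to 3 and 1

Logistic regression is the same loop with a sigmoid and a different loss; in practice you would usually reach for scikit-learn or a similar library rather than writing the loop yourself.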
The No Free Lunch theorem states that if algorithm A outperforms algorithm B on some problems, then, loosely speaking, there must exist just as many other problems where B outperforms A.
So it is difficult to link an algorithm to a particular use case.
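For reference, a sketch of the formal statement (Wolpert and Macready's formulation, paraphrased from memory), which says the two algorithms' performance distributions coincide once you average over every possible objective function:

    % For any two search algorithms a_1, a_2 and any number m of evaluations,
    % summing over all objective functions f:
    \sum_{f} P(d^{y}_{m} \mid f, m, a_1) \;=\; \sum_{f} P(d^{y}_{m} \mid f, m, a_2)
    % where d^{y}_{m} is the sequence of objective values observed after m evaluations.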
If you are looking only for use cases where you can use machine learning algorithms, visit https://www.kaggle.com/wiki/DataScienceUseCases
Update: I just came across http://pkghosh.wordpress.com. Check it out (use cases with algorithms).
I'm going to be a college student at 40. I'll be studying IT and plan on doing a bachelor's project. The basic idea is to try to use neural nets to evaluate bias in media. The training data will be political blogs with well known biases.
What I need is a programming language that can run in parallel on multiple machines that are networked, but not clustered. I have 2 Linux machines and 3 running OS X. I would prefer a language that compiles to binary rather than to bytecode or a VM, but I'll take what I can get. I don't need any GUI libraries, so that's not a constraint. I do most of my programming in Python, but I'm willing to learn another language if it'll make the parallel execution easier. Any suggestions?
I strongly suggest that you consider sticking with Python. Learning a new language, at the same time as you start tackling parallel / distributed computing, may well throw a spanner in your works that you just don't need. I believe that your time will be better spent tackling the issues of building the neural net you want, rather than learning the peculiarities of a new language. And, by reputation, Python is eminently suitable for what you plan. It does, of course, fail your requirement that it should compile to binary but I'm not sure where that is coming from.
When you write "parallel programming over multiple machines without clustering", I'm thinking: ah, he means distributed programming. I tend towards the view that parallel computing is a niche within distributed computing, defined in part by the homogeneity (from the programmer's point of view) of the resources used. This apparent homogeneity is helped tremendously if it is backed by homogeneity of hardware, so that there is little gap between the vision and the reality.
If what you really have is an assortment of computers with different specs and different OSes, communicating over a non-dedicated network, then I fear you will find it difficult to build the illusion of homogeneity for the programmer (i.e. for yourself), and you would be better off setting out to build a distributed system from the get-go.
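If you do stick with Python, the standard library alone is enough for a crude distributed work queue over ordinary TCP between your Linux and OS X boxes. A minimal sketch using multiprocessing.managers (the host name, port and authkey below are placeholders, and the squaring stands in for whatever per-document work your pipeline actually needs):

    # --- server.py: run on one machine; it hands out work items over the network ---
    from multiprocessing.managers import BaseManager
    import queue

    task_q, result_q = queue.Queue(), queue.Queue()

    class QueueManager(BaseManager):
        pass

    QueueManager.register('get_tasks', callable=lambda: task_q)
    QueueManager.register('get_results', callable=lambda: result_q)

    for item in range(100):                    # made-up work items
        task_q.put(item)

    mgr = QueueManager(address=('', 50000), authkey=b'change-me')
    mgr.get_server().serve_forever()

    # --- worker.py: run on each of the other machines ---
    from multiprocessing.managers import BaseManager

    class QueueManager(BaseManager):
        pass

    QueueManager.register('get_tasks')
    QueueManager.register('get_results')

    mgr = QueueManager(address=('server-hostname', 50000), authkey=b'change-me')
    mgr.connect()
    tasks, results = mgr.get_tasks(), mgr.get_results()

    while not tasks.empty():
        n = tasks.get()
        results.put((n, n * n))                # stand-in for the real computation

Real frameworks (e.g. Celery, or MPI via mpi4py) add robustness, but the pattern is the same: one queue of work, many consumers.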
I just plain disagree with the answer telling you to pick up C and MPI; I think you'll make progress much more quickly with Python.
Good luck with your studies.
Oh, and if you just won't take my advice to forget about a new programming language, consider Haskell and Erlang.
Sounds like an interesting project. However, thinking laterally, wouldn't a GPU-based system (i.e. massively parallel) be more the soupe du jour? Hence something like C + CUDA, perhaps?
I don't know if it's still around, but OCCAM (from the Transputers of old) was designed as a parallel language, with its PAR and SEQ constructs. I've just read that it is available on Linux.
That sounds like C + MPI to me.
I have a computationally intensive task which I implemented with CUDA, and now I want to make it even faster with FPGAs (if possible).
The system I want to implement is a series of computations, each similar to matrix multiplication in the sense of being parallel. It also has some non-parallel parts in between. It works with large amounts of data.
Although I want it as fast as possible, I have enough time to learn and experiment with FPGAs.
So here I'm asking for suggestions on how to start down this path: which FPGA should I choose, and where can I learn about it? Any websites, online classes or books? I've decided to do this anyway, but your opinion on whether this will actually be faster on an FPGA would be helpful too.
The big wins from an FPGA over using a GPU come from:
Using non-standard word widths optimised for your application. This allows denser logic, which in turn allows more parallel processing blocks.
Using your knowledge of the required accesses to external RAM to schedule them in hardware more efficiently than a general-purpose memory controller can.
The downside is getting data to and from the FPGA. Draw a data-transfer diagram before you start. Even if the FPGA provides infinite speedup, you might still find it's not worth the effort if there's loads of data to be shuffled to and fro!
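To make that concrete, here is the kind of back-of-envelope check the diagram should feed (every number below is a made-up placeholder; substitute your own measurements):

    # Rough check: does the PCIe transfer already eat the hoped-for speedup?
    data_bytes   = 2 * 1024**3      # data shuffled to/from the card per run (2 GiB, placeholder)
    pcie_bw      = 6 * 1024**3      # effective PCIe bandwidth in bytes/s (~6 GiB/s, placeholder)
    gpu_time_s   = 1.0              # current CUDA compute time per run (placeholder)
    fpga_speedup = 5.0              # hoped-for speedup of the compute itself (placeholder)

    transfer_s = 2 * data_bytes / pcie_bw            # out to the card and back
    fpga_total = transfer_s + gpu_time_s / fpga_speedup
    print(f"transfer {transfer_s:.2f}s + compute {gpu_time_s / fpga_speedup:.2f}s "
          f"= {fpga_total:.2f}s, versus {gpu_time_s:.2f}s of GPU compute")

If the transfer term alone approaches your current GPU time, the FPGA cannot win no matter how fast its logic is.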
It's likely you'll want a PCI Express based board, which is (I imagine) a whole new learning curve before you get to do anything with the FPGA itself - but if you're up for it, it'll be a very interesting task!
In terms of choosing an FPGA, have a play with the software tools from the various vendors - at the learning stage that's much more important than the chips themselves. You won't find (at this early learning stage) a show-stopper feature in any of the various chips. Also take into account the availability of boards with the interfaces you need, and any IP cores you might need for the high-speed interfacing (e.g. PCIe).
You can get a substantial speedup on most parallel problems with an FPGA.
However, in addition to implementing your computation on the FPGA, there's a lot of work involved in getting the data back and forth from the CPU/main memory. This will require implementation of (for example) a PCI Express endpoint in the FPGA logic (bus mastering for maximum speed) and custom drivers on the software side. Most operating systems will require those drivers to be developed in kernel mode.
And you can't just use the most straightforward approach for FPGA programming either. You're going to need to worry about pipelining and clock synchronization in order to maximize throughput.
In other words, it's a substantially difficult task even for engineers with years of FPGA experience. I strongly suggest you find someone to work with on this. Depending on how proprietary your project is, you might find skilled academics willing to work with you as long as you provide them with all materials and publication rights.
If you're determined to go it alone, you'll need some hardware. Many different companies offer FPGAs wired up as accelerators, for example http://www.nallatech.com/pci-express-cards.html
Depending on whether you choose a Xilinx or Altera FPGA, you'll find considerable sample code and tutorials for getting PCI express working.
What are the fast solvers for FEM equations? I would prefer open source implementation, but if there is a commercial implementation, then I won't mind paying for it.
Code Aster is an open source FE code.
The pre- and post-processing is usually done with Salome - both originate from EDF.
How about FEAP? It has full source code available when you purchase it. It is a pretty big project, maybe it's too much for your needs, but check it out.
FEAP is a general purpose finite element analysis program which is designed for research and educational use. Source code of the full program is available for compilation using Windows (Compaq or Intel compiler), LINUX or UNIX operating systems, and Mac OS X based Apple systems.
It also has a Personal Edition called FEAPpv, available for free, including source code. The differences between the versions are listed in this PDF.
"brad"? do you mean "broad"?
You don't say if your problem is linear or non-linear. That will make a very big difference.
The solver depends on the type of equation and the size of your problem. For elliptic PDEs you can choose standard linear algebra techniques like LU decomposition, iterative methods like successive over-relaxation, or wavefront solvers that minimize memory consumption.
Some people like solving non-linear steady-state problems as if they were dynamics problems. The idea is to create "fake" mass and damping matrices and use explicit time integration to converge to the steady state.
Lots of choices. Standard linear algebra is a good starting point.
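As a tiny illustration of the "standard linear algebra" route: a 1-D Poisson model problem assembled as a sparse tridiagonal system and handed to both a direct and an iterative solver (SciPy here, purely as an example; the mesh size and source term are arbitrary):

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 1000                                   # interior nodes of a uniform 1-D mesh
    h = 1.0 / (n + 1)
    # Classic tridiagonal Laplacian (finite differences here; linear FEM on a
    # uniform mesh yields an equivalent tridiagonal system).
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = sp.diags([off, main, off], [-1, 0, 1], format='csc') / h**2
    b = np.ones(n)                             # unit source term

    x_direct = spla.spsolve(A, b)              # sparse LU factorization (direct)
    x_cg, info = spla.cg(A, b)                 # conjugate gradient (iterative, SPD system)
    print(np.max(np.abs(x_direct - x_cg)))     # difference is small, set by cg's tolerance

For 2-D/3-D problems and non-symmetric systems you would swap cg for e.g. GMRES, add a preconditioner, or fall back to the direct solver while the problem still fits in memory.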
language? java?
Oops, that's kind of a brad question.
Solving differential equations usually starts with analysing the equation itself. Some equations are notoriously difficult to solve efficiently, e.g. indefinite boundary problems.
So if you have something other than an elliptic problem, you had better prepare for hard times ahead.
The next important and crucial part is transferring the continuous problem onto a discrete mesh. Typically the accuracy of your results will vary with different ways of generating this mesh. You'll need some sound experience here.
So I'd say there is no such thing as the fast solver for FEM equations. Anyway, while Wikipedia gives a short overview of the topic, you might also have a look at the German Wikipedia page, which lists well-known FEM implementations.
OpenFOAM and Elmer are two open source solvers. Not sure about Elmer, but I think OpenFOAM uses the control volume approach.
I used OpenFOAM for fluid dynamics research. You can do parallel processing with it with MPI. And if you have a Cray T3E it will be fast!
It's open source :D
http://www.opencfd.co.uk/openfoam/features.html#features
Please have a look at the Deal.II open source library:
http://www.dealii.org/
They also provide a VirtualBox image which comes with the libraries pre-installed.