Different approaches to an allocation algorithm for processes?

I am looking to implement an automated way of allocating processes to a variety of available servers. There are many types of servers (characterized by things like location, CPU, network card, etc.) and various types of processes (more types than there are servers) with different priorities and location/hardware requirements. I can come up with simplistic greedy algorithms, but I was wondering what other references and approaches exist for this type of problem (which I feel is pretty standard). I am also interested in a related problem: say we remove one of the servers after everything has been allocated, and we need to reshuffle with minimal interference. This also feels standard, but I'm not sure what good references to look at are. Any suggestions on where to start?

Your question is pretty vague. Problems like this are normally handled either by modeling them as a set of linear constraints and optimizing an objective function subject to those constraints, or by modeling them as a knapsack problem.
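For a first cut, a priority-ordered greedy pass is also easy to sketch. Below is a minimal C++ illustration; `Server`, `Process`, and the `fits` compatibility check are hypothetical stand-ins for your real constraints (location, CPU, network card, and so on), not any known library API:

```cpp
#include <algorithm>
#include <optional>
#include <string>
#include <vector>

// Hypothetical models; real ones would carry NIC type, memory, etc.
struct Server  { std::string location; int cpuFree; };
struct Process { std::string location; int cpuNeed; int priority; };

// Hypothetical compatibility check: does this server satisfy the
// process's location and hardware requirements?
bool fits(const Server& s, const Process& p) {
    return s.location == p.location && s.cpuFree >= p.cpuNeed;
}

// Greedy pass: place highest-priority processes first, each on the
// first compatible server with enough spare capacity. Returns the
// chosen server index for each process (in priority order), or
// nullopt if nothing fit.
std::vector<std::optional<size_t>>
allocate(std::vector<Server> servers, std::vector<Process> procs) {
    std::sort(procs.begin(), procs.end(),
              [](const Process& a, const Process& b) { return a.priority > b.priority; });
    std::vector<std::optional<size_t>> placement(procs.size());
    for (size_t i = 0; i < procs.size(); ++i) {
        for (size_t j = 0; j < servers.size(); ++j) {
            if (fits(servers[j], procs[i])) {
                servers[j].cpuFree -= procs[i].cpuNeed;  // consume capacity
                placement[i] = j;
                break;
            }
        }
    }
    return placement;
}
```

For the real thing, the linear-constraint formulation can be handed to an off-the-shelf ILP solver, and one standard trick for your reshuffling variant is to re-solve with an extra penalty term on moving an already-placed process, which keeps the new solution close to the old one.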

Related

What is an approach for designing complex FSMs?

At work, we use FSMs. Recently, I had to design an FSM for a problem that I deem "a little too complex for a simple FSM". Why? Because the problem has about 6 different data dimensions, and many permutations of this data significantly affect the behaviour of the solution. My brain says "6 data attributes means 2^6 combinations of this data" if it were all boolean data. Furthermore, there are about 8 inputs that can arrive at any given time.
This problem made me aware that my FSM design skills stop at the simple problems in my hobby projects. At work, we are constrained to use FSMs, which means I cannot just say "this problem is outside the scope of FSMs; I'll use something else." And indeed, the FSM platform we have in place does provide a lot of power for our solutions.
Question: what is an approach for designing an FSM when the problem is sufficiently complex? I've researched this a bit and found a few papers which, honestly, didn't help me much. I hope there are some best practices for this, and all I'm asking for is one. Please and thanks.
I suppose that you might be experiencing the usual "state-transition explosion", which is a known problem of traditional "flat" FSMs. Traditional FSMs "explode" because they force you to repeat the same reactions in many states; FSMs lack any mechanism for capturing commonalities of behavior among states. The long-known solution is to use hierarchical state machines (a.k.a. Harel statecharts or UML state machines). HSMs support state nesting, in which substates inherit behavior from the surrounding superstate(s). Used correctly, state nesting eliminates the repetition and counteracts the "explosion" problem. Most non-trivial problems are not really tractable with flat FSMs, but are quite manageable with HSMs.
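As a toy illustration of state nesting (a hand-rolled sketch, not any particular HSM framework's API), here dispatch walks up the parent chain, so a substate inherits every handler it doesn't override from its superstate:

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

// A state optionally has a parent; events it doesn't handle bubble up.
struct State {
    std::string name;
    const State* parent = nullptr;
    std::map<std::string, std::function<void()>> handlers;
};

// Dispatch tries the current state first, then each enclosing superstate,
// so substates inherit any handler they don't override.
void dispatch(const State* s, const std::string& event) {
    for (; s != nullptr; s = s->parent) {
        auto it = s->handlers.find(event);
        if (it != s->handlers.end()) { it->second(); return; }
    }
    std::cout << "event ignored\n";
}

int main() {
    State operational{"operational"};
    operational.handlers["FAULT"] = [] { std::cout << "common fault handling\n"; };
    State idle{"idle", &operational};
    State busy{"busy", &operational};
    busy.handlers["DONE"] = [] { std::cout << "work finished\n"; };

    dispatch(&busy, "DONE");   // handled by 'busy' itself
    dispatch(&idle, "FAULT");  // inherited from 'operational'
}
```

The point is that the FAULT reaction is written once in the `operational` superstate instead of being repeated in every substate, which is exactly how nesting counteracts the state-transition explosion.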

Comparing algorithmic performance to old methods

I have written a new algorithm for something. Now I need to compare it with existing methods, some of which are about 10 years old.
The idea I had is to look at benchmarks of different processors over the years to establish how much faster my processor (an i7-920) is than an average processor from 2003. Then I would simply divide the old methods' execution times by that speedup factor and use the resulting numbers to compare against my own algorithm.
Has something like this been done before, so I don't redo existing work?
Can such a comparison be done some other way?
Are there some scientific papers written about such comparisons which I can reference?
I don't know which of these are possible for you, but here's a list of options I can think of:
1. Run their implementation side by side on your machine against yours. This is the best option (a minimal timing harness for it is sketched after this list).
2. Rewrite their implementations and do (1). You should preferably compare your rewrite against their published tests to ensure you get vaguely similar results.
3. Find a library that implements their algorithm (or multiple libraries) and do (1). I suggest multiple libraries, if possible, since a single one may not have implemented the algorithm efficiently. You may also want to compare these against their published tests.
4. Compare the algorithms mathematically. This may be difficult, but it's not impossible.
5. Do what you presented.
   (a) I would not recommend this, as factors other than processor speed affect how fast an algorithm runs, and finding an equation that perfectly balances them all will likely be very difficult.
   (b) There is a massive difference between top- and bottom-of-the-line computers, so using the average is not a particularly good idea. If the authors didn't provide hardware details, I'm afraid your benchmark is not likely to be very accurate.
6. Go out and buy a machine with specs similar to those used in the test you want to benchmark against. A 10-year-old machine should be pretty cheap, if you can find one. Also, see (5.b).
7. Contact the authors to enable any of the other options. Papers often provide contact details, or you should be able to find them elsewhere if they have any sort of online presence and you're half-decent at using Google.
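For option (1), the harness really can be this simple; `oldAlgorithm` and `newAlgorithm` are placeholders to replace with the real implementations:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

// Placeholder bodies; substitute the real old and new implementations.
void oldAlgorithm(const std::vector<int>& v) { volatile long s = 0; for (int x : v) s = s + x; }
void newAlgorithm(const std::vector<int>& v) { volatile long s = 0; for (int x : v) s = s + x; }

// Time one function over several repetitions and report the best run,
// which is less noisy than the mean on a loaded machine.
template <typename F>
double bestSeconds(F f, const std::vector<int>& input, int reps = 10) {
    double best = 1e300;
    for (int r = 0; r < reps; ++r) {
        auto t0 = std::chrono::steady_clock::now();
        f(input);
        auto t1 = std::chrono::steady_clock::now();
        best = std::min(best, std::chrono::duration<double>(t1 - t0).count());
    }
    return best;
}

int main() {
    // Identical inputs, same machine, same compiler flags for both.
    for (size_t n : {1000u, 10000u, 100000u}) {
        std::vector<int> input(n, 42);
        std::printf("n=%zu old=%.6fs new=%.6fs\n", n,
                    bestSeconds(oldAlgorithm, input),
                    bestSeconds(newAlgorithm, input));
    }
}
```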
If I were reviewing your results, I would be annoyed if you attempted to demonstrate less than an order of magnitude speedup this way. There are a lot of variables determining algorithm performance, and I would be skeptical that a generic benchmark could capture the right ones. My gold standard is old and new algorithms implemented by the same programmer, with similar effort made to optimize, running on the same hardware. Using the previous authors' implementation instead of making a new one is commonplace in the experimental algorithms literature, but using different hardware isn't.
Algorithmic performance is usually measured in big-O terms, for which it is better to count basic operations, like comparisons, and to do so over a range of input sizes. If you must measure overall time, at least eliminate other sources of difference: as @larsmans said, run on the same processor. Also, if there is existing work, there's no harm in repeating it; generally, in science, that's a good thing.
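Counting operations instead of wall time is easy to do by instrumenting the comparison itself; a minimal sketch with `std::sort` and a counting comparator:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(12345);  // fixed seed so runs are reproducible
    for (size_t n : {100u, 1000u, 10000u}) {
        std::vector<int> v(n);
        std::iota(v.begin(), v.end(), 0);
        std::shuffle(v.begin(), v.end(), rng);
        long long comparisons = 0;
        // The comparator counts every call, so the measurement is
        // independent of CPU, compiler, and background load.
        std::sort(v.begin(), v.end(), [&comparisons](int a, int b) {
            ++comparisons;
            return a < b;
        });
        std::printf("n=%zu comparisons=%lld\n", n, comparisons);
    }
}
```

Plotting the counts against n (roughly n log n here) lets you compare growth rates without caring what hardware the numbers came from.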
You should attempt to reduce the number of differing factors between the two runs. I think timing the two algorithms side by side on the same machine and comparing their big-O behaviour are both equally valid and important. You should also use up-to-date libraries and other external functions for both; outdated ones may skew the timing results.

From explicit threading to TBB

Assuming I have a parallel algorithm that uses explicit threading with one or two locks for synchronization and is optimized to take advantage of cache lines (including shared L3 cache between multiple cores), what are good ways of incorporating that into a TBB program? The algorithm in question does not break down as nicely into tasks as it does into threads.
Without any further information (a code sample, or some kind of generic presentation of the algorithm, such as a flowchart), I would say that the best way to make such a transition is by refactoring the algorithm: isolating repeatable actions and trying to combine them into tasks (a range of one or more activities that have a common purpose).
Unfortunately, there's no magic formula for this transition, because the two techniques are fundamentally two answers to the same problem; they share some common ground, but they tackle the problem from two different angles.
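As a generic starting point, though, the usual move is to replace each thread's hand-carved loop over its slice of the data with a `tbb::parallel_for`, and use a grainsize large enough that tasks don't fragment your cache-line-friendly access pattern. A minimal sketch (the grainsize of 4096 is an assumption you would tune):

```cpp
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

int main() {
    std::vector<float> data(1 << 20, 1.0f);
    // Before: N threads, each hand-assigned data.size()/N elements.
    // After: TBB splits the range into tasks and schedules them itself.
    // The grainsize (4096 elements here, a number to tune) keeps each
    // task coarse enough that cache-line-sized work isn't fragmented.
    tbb::parallel_for(
        tbb::blocked_range<size_t>(0, data.size(), 4096),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= 2.0f;  // stand-in for the real per-element work
        });
}
```

If your one or two locks protect per-thread partial results, they often disappear entirely once the combining step is expressed as a `tbb::parallel_reduce`.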

What are some hints that an algorithm should be parallelized?

My experience thus far has shown me that, even with multi-core processors, parallelizing an algorithm won't always speed it up noticeably. In fact, sometimes it can slow things down. What are some good hints that an algorithm can be sped up significantly by being parallelized?
(Of course, given the caveats about premature optimization and its correlation with evil.)
To gain the most benefit from parallelisation, a task should be breakable into similar-sized, coarse-grained chunks that are independent (or mostly so) and require little communication of data or synchronisation between the chunks.
Fine-grained parallelisation almost always suffers from increased overheads and will have a finite speed-up regardless of the number of physical cores available.
[The caveat to this is architectures with a very large number of 'cores', such as the Connection Machine's 64,000. These are well suited to calculations that can be broken into relatively simple actions assigned to a particular topology (like a rectangular mesh).]
If you can divide the work into independent parts, then it may parallelize well.
Remember also Amdahl's law, a sobering reminder of how little we can expect from adding more cores: if a fraction p of the program's runtime can be parallelized across n cores, the overall speedup is at most 1 / ((1 - p) + p/n), so even with p = 0.9 the speedup never exceeds 10x, no matter how many cores you add.
First, check out this paper by the late Jim Gray:
Distributed Computing Economics
Actually, this will clear up some misunderstandings based on what you wrote in the question. Obviously, the less amenable your problem set is to being discretized, the more difficult parallelizing it will be.
Any time you have computations that depend on previous computations, the problem is inherently serial. Things like linear image processing, brute-force methods, and genetic algorithms are all easily parallelized.
A good analogy is to ask what work you could get a bunch of friends to do different parts of at once. For example, putting IKEA furniture together might parallelize well if different people can work on different sections, but wallpapering might not, because you need to do walls in sequence.
If you're doing large matrix computations, like simulations involving finite element models, these can often be broken down into smaller pieces in straightforward ways. Matrix-vector multiplies can benefit a lot from parallelization, assuming you are dealing with very large matrices. But unless there is a real performance bottleneck that is causing code to run slowly, it's probably not worth the hassle of parallel processing.
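To make that concrete, here is a sketch of a row-parallel matrix-vector multiply using plain `std::thread`: each output element depends only on the matrix and the input vector, never on other outputs, so the row blocks need no locks or synchronization at all:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// y = A * x with the rows of A split into contiguous blocks, one per
// thread. Each y[i] depends only on A and x, never on other rows, so
// no synchronization is needed -- an "embarrassingly parallel" loop.
void matvec(const std::vector<std::vector<double>>& A,
            const std::vector<double>& x,
            std::vector<double>& y) {
    const size_t rows = A.size();
    const size_t nthreads = std::max<size_t>(1, std::thread::hardware_concurrency());
    const size_t chunk = (rows + nthreads - 1) / nthreads;  // rows per thread
    std::vector<std::thread> pool;
    for (size_t t = 0; t < nthreads; ++t) {
        pool.emplace_back([&, t] {
            const size_t begin = t * chunk;
            const size_t end = std::min(rows, begin + chunk);
            for (size_t i = begin; i < end; ++i) {  // this thread's block
                double sum = 0.0;
                for (size_t j = 0; j < x.size(); ++j) sum += A[i][j] * x[j];
                y[i] = sum;
            }
        });
    }
    for (auto& th : pool) th.join();
}
```

Contiguous blocks (rather than strided rows) also keep each thread's writes to `y` on separate cache lines, avoiding false sharing.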
Well, if you need lots of locks for it to work, then it's probably one of those difficult algorithms that doesn't parallelise well. Is there any part of the algorithm that can be broken up into separate parts that don't need to touch each other?

Real world implementations of "classical algorithms"

I wonder how many of you have implemented one of computer science's "classical algorithms", like Dijkstra's algorithm, or data structures (e.g. binary search trees) in a real-world, non-academic project?
Is there a benefit to our dayjobs in knowing these algorithms and data structures when there are tons of libraries, frameworks and APIs which give you the same functionality?
The library doesn't know what your problem domain is and won't be able to chose the correct algorithm to do the job. That is why I think it is important to know about them: then YOU can make the correct choice of algorithms to solve YOUR problem.
Knowing, or being able to understand, these algorithms is important; they are the tools of your trade. It does not mean you have to be able to implement A* from memory in an hour, but you should be able to figure out the advantages of using a red-black tree as opposed to a normal unbalanced tree, so you can decide whether you need one. You need to be able to judge the fitness of an algorithm for solving your problem.
This might sound too schoolmasterish, but these "classical algorithms" were not invented to give college students exam questions; they were invented to solve problems and improve on existing solutions. Just as the array, the linked list, and the stack are building blocks for writing a program, so are some of these. And just as in math you move from addition and subtraction to integration and differentiation, these are advanced techniques that will help you solve problems that are out there.
They might not be directly applicable to your problems or work situation but in the long run knowing of them will help you as a professional software engineer.
To answer your question, I did an implementation of A* recently for a game.
Is there a benefit to understanding your tools, rather than simply knowing that they exist?
Yes, of course there is. Taking a trivial example: don't you think there's a benefit to knowing the difference between List (or your language's equivalent dynamic array implementation) and LinkedList (or your language's equivalent)? It's pretty important to know that one has constant random-access time while the other's is linear, and that one requires N copies if you insert a value in the middle of the sequence while the other can do it in constant time.
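A quick C++ rendering of that trade-off, using `std::vector` and `std::list` as the dynamic array and linked list:

```cpp
#include <cstdio>
#include <iterator>
#include <list>
#include <vector>

int main() {
    std::vector<int> vec(1000000, 0);
    std::list<int>   lst(1000000, 0);

    // Random access: constant time for the dynamic array...
    int a = vec[500000];
    // ...but linear for the linked list: advance walks 500000 nodes.
    auto it = lst.begin();
    std::advance(it, 500000);
    int b = *it;

    // Insertion in the middle: the list splices in one node in O(1)
    // (given the iterator), while the vector shifts ~500000 elements.
    lst.insert(it, 42);
    vec.insert(vec.begin() + 500000, 42);

    std::printf("%d %d\n", a, b);  // keep the reads from being optimized away
}
```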
Don't you think there's an advantage to understanding that the same sorting algorithm isn't always optimal? That for almost-sorted data, quicksort sucks, for example? Naively just calling Sort() and hoping for the best can become ridiculously expensive if you don't understand what's happening under the hood.
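One way to see the input-shape effect without timing noise is to count operations. This sketch uses insertion sort, which does about n comparisons on already-sorted input but about n^2/2 on reversed input; a naive quicksort that always picks the first element as its pivot degrades on sorted input in much the same way:

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Insertion sort instrumented with a comparison counter.
long long insertionSortComparisons(std::vector<int> v) {
    long long comps = 0;
    for (size_t i = 1; i < v.size(); ++i)
        for (size_t j = i; j > 0; --j) {
            ++comps;
            if (v[j - 1] <= v[j]) break;   // already in order: stop early
            std::swap(v[j - 1], v[j]);
        }
    return comps;
}

int main() {
    std::vector<int> sorted, reversed;
    for (int i = 0; i < 10000; ++i) {
        sorted.push_back(i);
        reversed.push_back(10000 - i);
    }
    std::printf("sorted input:   %lld comparisons\n", insertionSortComparisons(sorted));
    std::printf("reversed input: %lld comparisons\n", insertionSortComparisons(reversed));
}
```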
Of course there are a lot of algorithms you probably won't need, but even so, just understanding how they work may make it easier for yourself to come up with efficient algorithms to solve other, unrelated, problems.
Well, someone has to write the libraries. While working at a mapping software company, I implemented Dijkstra's algorithm, as well as binary search trees, B-trees, n-ary trees, BK-trees and hidden Markov models.
Besides, if all you want is a single 'well known' algorithm, and you also want the freedom to specialise it and optimise it if it becomes critical to performance, including a whole library seems like a poor choice.
We use a home-grown implementation of a pseudo-random number generator from Knuth's Seminumerical Algorithms as an aid in some statistical processing.
In my previous workplace, which was an EDA company, we implemented versions of Prim's and Dijkstra's algorithms, disjoint-set data structures, A* search and more. All of these had real-world significance. I believe this depends on the problem domain - some domains are more algorithm-intensive and some less so.
Having said that, there is a fine line to walk - I see no business reason for re-implementing the STL or Java generics. In many cases, using a standard library is better than reinventing the wheel. The closer you are to your core application domain, the more likely it is that you will need to implement a textbook algorithm or data structure.
If you never work with performance-critical code, consider yourself lucky. However, I consider this scenario unrealistic. Performance problems could occur anywhere. And then it's necessary to know how to fix that problem. Obviously, merely knowing a few algorithm names isn't enough here – unless you want to implement them all and try them out one after the other.
No, knowing (at least some of) the inner workings of different algorithms is important for gauging their strengths and weaknesses and for analyzing how they would handle your situation.
Obviously, if there's a library already implementing exactly what you need, you're incredibly lucky. But let's face it, even if there is such a library, using it is often not completely straightforward (at the very least, interfaces and data representation often have to be adapted) so it's still good to know what to expect.
A* for a Pac-Man clone. It took me weeks to really get it, but to this day I consider it a thing of beauty.
I've had to implement some of the classical algorithms from numerical analysis. It was easier to write my own than to connect to an existing library. Also, I've had to write variations on classical algorithms because the textbook case didn't fit my application.
For classical data structures, I nearly always use the standard libraries, such as STL for C++. The one time recently when I thought STL didn't have the structure I needed (a heap) I rolled my own, only to have someone point out almost immediately that I didn't need to do that.
Classical algorithms I have used in actual work:
- A topological sort.
- A red-black tree (although I will confess that I only had to implement insertions for that application, and it only got used in a prototype). This got used to implement an 'ordered dict' type structure in Python.
- A priority queue.
- State machines of various sorts.
- Probably one or two others I can't remember.
As to the second part of the question:
An understanding of how the algorithms work, their complexity and semantics gets used on a fairly regular basis. They also inform the design of systems. Occasionally one has to do things involving parsing or protocol handling, or some computation that's slightly clever. Having a working knowledge of what the algorithms do, how they work, how expensive they are and where one might find them lying around in library code goes a long way to knowing how to avoid reinventing the wheel poorly.
I use the Levenshtein distance algorithm to help implement a 'Did you mean [suggested word]?' feature in our website search.
Works quite well when combined with our 'tagging' system, which allows us to associate extra words (other than those in the title/description/etc.) with items in the database.
It's not perfect by any means, but it's way better than most corporate site searches, if I do say so myself ;)
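For reference (the standard dynamic-programming formulation, not necessarily the poster's exact implementation), Levenshtein distance fits in a few lines of C++, using two rolling rows so memory stays O(n):

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Classic DP: the distance between the first i chars of a and the
// first j chars of b, computed row by row.
int levenshtein(const std::string& a, const std::string& b) {
    std::vector<int> prev(b.size() + 1), cur(b.size() + 1);
    for (size_t j = 0; j <= b.size(); ++j) prev[j] = static_cast<int>(j);
    for (size_t i = 1; i <= a.size(); ++i) {
        cur[0] = static_cast<int>(i);
        for (size_t j = 1; j <= b.size(); ++j) {
            int subst = prev[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
            cur[j] = std::min({prev[j] + 1,     // deletion
                               cur[j - 1] + 1,  // insertion
                               subst});         // substitution or match
        }
        std::swap(prev, cur);
    }
    return prev[b.size()];
}

int main() {
    std::printf("%d\n", levenshtein("kitten", "sitting"));  // prints 3
}
```

A "Did you mean?" feature then just suggests the dictionary word with the smallest distance to the query, breaking ties by word frequency, say.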
Classical algorithms are usually associated with something glamorous, like games, or Web search, or scientific computation. However, I had to use some of the classical algorithms for a mere enterprise application.
I was building a metadata migration tool, and I had to use topological sort for dependency resolution, various forms of graph traversal for queries on metadata, and a modified variation of Tarjan's union-find data structure to partition forest-like structured metadata into trees.
That was a really satisfying experience. Most of those algorithms had been implemented before, but their implementations lacked something I needed for my task. That's why it's important to understand their internals.
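For anyone who wants the baseline before adding domain-specific twists like the poster's, the plain union-find (disjoint-set) structure with path compression and union by rank is only a few lines of C++:

```cpp
#include <numeric>
#include <utility>
#include <vector>

// Disjoint-set forest: near-constant-time find/unite thanks to
// path compression and union by rank.
struct UnionFind {
    std::vector<int> parent, rank_;
    explicit UnionFind(int n) : parent(n), rank_(n, 0) {
        std::iota(parent.begin(), parent.end(), 0);  // each element starts alone
    }
    int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]);  // path compression
        return parent[x];
    }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;                       // already in the same set
        if (rank_[a] < rank_[b]) std::swap(a, b);
        parent[b] = a;                            // attach the shorter tree
        if (rank_[a] == rank_[b]) ++rank_[a];
    }
};
```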
