I understand that doing minimization in integer programming is a very complex problem. But what makes this problem so difficult?
If I were to attempt to write an algorithm to solve it, what would I need to take into account? I'm only familiar with the branch-and-bound technique for solving it, and I'm wondering what sort of roadblocks I will face when attempting to apply this technique programmatically.
I'm wondering what sort of roadblocks I will face when attempting to apply this technique programmatically.
None in particular (assuming a fairly straightforward implementation without a lot of tricks). The algorithms aren't complicated; they are complex, and that's a fundamental difference.
Techniques such as branch and bound or branch and cut try to prune the search tree and thus speed up the running time. But in the worst case the whole search tree is still exponentially large, and that is the heart of the problem.
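To make this concrete, here is a minimal branch-and-bound sketch for the 0/1 knapsack problem (a small NP-hard integer program). The example is my own illustration, not from the answer: it uses the fractional (LP) relaxation as the optimistic bound and prunes any subtree whose bound can't beat the best solution found so far.

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for 0/1 knapsack: maximize value within capacity."""
    # Sort items by value density so the relaxation bound is easy to compute.
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(i, cap, val):
        # Optimistic bound: greedily fill remaining capacity, allowing a
        # fractional piece of the last item (the LP relaxation).
        for j in items[i:]:
            if weights[j] <= cap:
                cap -= weights[j]
                val += values[j]
            else:
                return val + values[j] * cap / weights[j]
        return val

    def branch(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == len(items) or bound(i, cap, val) <= best:
            return  # prune: even the relaxed optimum can't beat the incumbent
        j = items[i]
        if weights[j] <= cap:                    # branch 1: take item j
            branch(i + 1, cap - weights[j], val + values[j])
        branch(i + 1, cap, val)                  # branch 2: skip item j

    branch(0, capacity, 0)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

Without the pruning line this degenerates into enumerating all 2^n subsets, which is exactly the exponential blow-up described above.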
Like the others said, these problems are very hard, and there is no simple solution nor simple algorithm that applies to all classes of problems.
The "classic" way of solving these problems is to do a branch-and-bound and apply the simplex algorithm at each node, as you say in your question. However, I would not recommend implementing this yourself if you are not an expert.
As with a lot of numerical methods, it is very hard to get right (good parameter values, good optimisations), and a lot of work has already been done (see CPLEX, COIN-OR, etc.).
It's not that you can't do it: the branch-and-bound part is pretty straightforward, but without all the tricks your program will be really slow.
Also, you will need a simplex implementation, and this is not something you want to write yourself: you will have to use a third-party library anyway.
Most likely, either:
if your data set is not that big (try it!) and you are not interested in solving it really fast, use something like COIN-OR or lp_solve with the default method and it will work (a sketch follows this list);
if your data set is really big (and/or you need to find a solution quickly each time), you need to work with an expert in this field.
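As a sketch of the first case, here is what "use COIN-OR with the default method" can look like in practice. I'm using PuLP, a Python modeling library of my own choosing (not mentioned above), which bundles COIN-OR's CBC solver as its default backend; the toy model is illustrative.

```python
# pip install pulp -- PuLP ships with COIN-OR's CBC solver as its default
from pulp import LpProblem, LpMinimize, LpVariable, value

prob = LpProblem("toy_integer_program", LpMinimize)
x = LpVariable("x", lowBound=0, cat="Integer")
y = LpVariable("y", lowBound=0, cat="Integer")

prob += 3 * x + 4 * y      # objective: minimize 3x + 4y
prob += 2 * x + y >= 10    # constraint
prob += x + 3 * y >= 15    # constraint

prob.solve()               # default method: CBC's branch and cut
print(value(x), value(y), value(prob.objective))  # 3.0 4.0 25.0
```

All of the branching, bounding, cutting, and simplex work happens inside the solver; you only describe the model.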
My main point is that only experienced people will know which algorithm will perform better on your problem, which form of the model will be the easiest to solve, which method to apply, and what kind of optimisations you can try.
If you are interested in these problems, I would recommend this book for an introduction to the math behind all this (with a lot of examples). It is incredibly expensive, so you may want to go to a library instead of buying it: Integer and Combinatorial Optimization by Nemhauser and Wolsey.
Integer programming is NP-hard. That's why it is so difficult.
There is a tutorial that you might be interested in.
The first thing you do before you solve any mathematical optimization problem is categorize it. Except for special cases, integer programming problems will most of the time be NP-hard. So instead of using an "algorithm", you will use a "heuristic". The final solution you find will not be a guaranteed optimum, but it will be a pretty good solution for real-life problems.
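To make the algorithm/heuristic distinction concrete, here is a made-up example: a greedy heuristic for the NP-hard 0/1 knapsack problem. It runs in O(n log n) and is often good, but, unlike the exact branch-and-bound sketch earlier in this thread, it does not guarantee the optimum.

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy heuristic: take items in order of value density.
    Fast and usually decent, but NOT guaranteed optimal."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = 0
    for i in order:
        if weights[i] <= capacity:
            capacity -= weights[i]
            total += values[i]
    return total

# The greedy answer here is 160, while the true optimum is 220.
print(greedy_knapsack([60, 100, 120], [10, 20, 30], 50))
```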
Your main roadblock will be your programming skills. Heuristic programming requires a good level of programming understanding. So instead of programming your own heuristic, you are better off using a well-known package (e.g., COIN-OR, which is free). This way you can focus on your problem instead of the heuristic.
As the question describes itself: "What is the core difference between an algorithm and pseudocode?"
Algorithm
An algorithm is a procedure for solving a problem in terms of the actions to be executed and the order in which those actions are to be executed. An algorithm is merely the sequence of steps taken to solve a problem. The steps are normally "sequence," "selection," "iteration," and a case-type statement.
Pseudocode
Pseudocode is an artificial and informal language that helps programmers develop algorithms. Pseudocode is a "text-based" detail (algorithmic) design tool.
The rules of Pseudocode are reasonably straightforward. All statements showing "dependency" are to be indented. These include while, do, for, if, switch. Examples below will illustrate this notion.
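For example, here is a small made-up routine for finding the largest value in a list; note how the bodies of the for and the if are each indented one level:

```
set max to the first item of the list
for each item in the list
    if item > max
        set max to item
print max
```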
I think all the other answers give useful explanations and definitions, but I'm going to give mine.
An algorithm is the idea of how to obtain some result from some input. It is an abstract concept; an algorithm is not something material by itself, but more something like an imagination or a deduction, a thing that only exists in the mind. In the broad sense, any sequence of steps that gives you some thing(s) from other thing(s) could be called an algorithm. For example, if the screen of your computer is dirty, "spraying some glass cleaner on it and wiping it with a cloth" could be said to be an algorithm to solve the problem of how to obtain a clean screen from a dirty screen. It is important to note the difference between the problem itself (getting a clean screen) and the algorithm (wiping it with a cloth and cleaner); generally, several different algorithms are possible to solve the same problem. The idea of complexity is inherent to the algorithm itself, not the problem or the particular implementation or execution of the algorithm.
Pseudocode is a language to express algorithms. Since, as said before, algorithms are only concepts, we need to use something to express them and explain them to other people. Pseudocode is a convenient way for many computer science algorithms, because it is usually unambiguous, easy to read, and somewhat similar to many programming languages. However, a specific programming language like C or Java can also be used to express an algorithm (it's just less convenient to those not familiar with that language). In other cases, pseudocode may not be the best way to express an algorithm; for example, many graph and tree algorithms can be explained more easily with drawings or diagrams. In the previous example, the algorithm to get your screen cleaned is probably better expressed in a natural language like English, because it is simple and specific enough for that case.
Obviously, terms are frequently used loosely and interchanged depending on the context, and there's no need to be nitpicky about it, but I think it is important to have the difference clear. An algorithm doesn't stop being an algorithm just because it is written in Python instead of pseudocode. Pseudocode is just a convenient and widespread communication tool to express them.
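To illustrate that last point with a made-up example: here is a trivial algorithm ("find the largest item") written in Python rather than pseudocode. It is exactly the same algorithm either way; only the notation changes.

```python
def largest(items):
    # Same algorithm whether expressed here or in pseudocode:
    # keep a running maximum and update it for each item.
    result = items[0]
    for item in items[1:]:
        if item > result:
            result = item
    return result

print(largest([3, 1, 4, 1, 5]))  # 5
```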
An algorithm is something (a sequence of steps) you can do. Pseudocode is a notation to describe an algorithm.
An algorithm is something which is represented in mathematical terms. It includes analysis and complexity considerations (best, average, and worst-case analysis, etc.). Pseudocode is a human-readable representation of a program.
From Wikipedia:
Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.
With a pseudo language one can implement an algorithm without using a programming language such as C.
An example of a pseudo language is the flow chart.
I was wondering: when someone asks you to solve an algorithmic problem, is it a good idea to actually start off with a Hashtable, HashSet, or HashMap? Normally I have heard people say you shouldn't come up with hashes as your first answer.
So how should we go about it in algorithms: should in-place operation (memory) be given importance, or should we make sure time complexity is best?
I'm not trying to generalise, but still some suggestions would be helpful.
Thanks
The best you can hope for is a generalized answer for your generalized question.
It depends.
The reason there are many different algorithms is that there is not always one algorithm that is the best, and many algorithms aim to solve different problems from each other. For some algorithms it makes no sense to even talk about hash tables.
If someone asks me to solve an algorithmic problem though, I will probably try to use something that is built in to the language I'm using before designing my own algorithm. The reason is because I value my time. If I find later that the code is not efficient enough, then I can look for a better way to do it.
I think it is really situational. If random access is a priority, you need fast lookups, you have little constraint on memory utilization, and you don't need sequential access, then a Hashtable (et al.) is the choice.
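As a rough Python illustration of that trade-off (dict standing in for the hash table, a plain list for the sequential structure; purely illustrative):

```python
# Hash-based lookup: O(1) on average, but no ordering you can rely on.
ids = {"alice": 1, "bob": 2, "carol": 3}
print("bob" in ids)     # constant time on average

# Sequential structure: O(n) membership test, but ordered traversal is natural.
names = ["alice", "bob", "carol"]
print("bob" in names)   # linear scan
for n in names:         # sequential access in a defined order
    print(n)
```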
When faced with a problem in software I usually see a solution right away. Of course, what I see is usually somewhat off, and I always need to sit down and design (admittedly, I usually don't design enough), but I get a certain intuition right away.
My problem is I don't get that same intuition when it comes to advanced algorithms. I feel much more up to the task of building another Facebook than building another Google search, or a Music Genome project. It's probably because I've been building software for quite some time, but I have little experience with composing algorithms.
I would like the community's advice on what to read and what projects to undertake to be better at composing algorithms.
(This question has nothing to do with Algorithmic composition. Well, almost nothing)
+1 To whoever said experience is the best teacher.
There are several online portals which have a lot of programming problems, that you can submit your own solutions to, and get an automated pass/fail indication.
http://www.spoj.pl/
http://uva.onlinejudge.org/
http://www.topcoder.com/tc
http://code.google.com/codejam/contests.html
http://projecteuler.net/
https://codeforces.com
https://leetcode.com
The USACO training site is the training program that all USA computing olympiad participants go through. It goes step by step, introducing more and more complex algorithms as you go.
You might find it helpful to perform algorithms physically. For example, when you're studying sorting algorithms, practice doing each one with a deck of cards. That will activate different parts of your brain than reading or programming alone will.
Steve Yegge referred to "The Algorithm Design Manual" in one of his rants. I haven't seen it myself, but it sounds like it's just the ticket from his description.
My absolute favorite for this kind of interview preparation is Steven Skiena's The Algorithm Design Manual. More than any other book it helped me understand just how astonishingly commonplace (and important) graph problems are – they should be part of every working programmer's toolkit. The book also covers basic data structures and sorting algorithms, which is a nice bonus. But the gold mine is the second half of the book, which is a sort of encyclopedia of 1-pagers on zillions of useful problems and various ways to solve them, without too much detail. Almost every 1-pager has a simple picture, making it easy to remember. This is a great way to learn how to identify hundreds of problem types.
problem domain
First you must understand the problem domain. An elegant solution to the wrong problem is no good, and in most cases neither is an inefficient solution to the right problem. Solution quality, in other words, is often relative. A simple scheduling problem that has a deterministic solution that takes ten minutes to run may be fine if schedules are recalculated once per week, but if schedules change several times a day then a genetic algorithm solution that converges in a few seconds may be required.
decomposition and mapping
Second, decompose the problem into sub-problems and known/unknown elements that correspond to elements of the solution. Sometimes this is obvious, e.g. to count widgets you need a way of identifying widgets, an incrementable counter, and a way of storing the count. Sometimes it is not so obvious. Sometimes you have to decompose the problem, the domain, and possible solutions at the same time and try several different mappings between them to find one that leads to the correct results [this is the general method].
model
Model the solution, in your head at least, and walk through it to see if it works correctly. Adjust as necessary (See decomposition and mapping, above).
composition/interfaces
Many times you can find elements of the problem and elements of the solution that map to each other and produce partial results that are useful. This composition and interface construction provides the kernel of the solution, and also serves to reduce the scope of the problem remaining. So then you just loop back to the top with a smaller initial problem, and go through it again.
experience
Experience is the best teacher, of course, but reading about different kinds of problems and solutions will also be helpful. Studying some of the well-known algorithms and their applications is likewise very helpful, e.g. Dijkstra, Bresenham, Unification, and of course, graph theory.
I am not sure intuition can be cultivated, but I think I know what you are asking. The more problems you solve, the more information and experience you have at your disposal for future problems. So, I say just practice. Practice programming real world applications and you run into plenty of problems. Sometimes, solving puzzles can be very educational as well.
I try to find physical analogues when I'm looking at a complex problem.
The other day I thought I'd attempt creating the Fibonacci algorithm in my code, but I've never been good at maths.
I ended up writing my own method with a loop but it seemed inefficient or not 'the proper way'.
Does anyone have any recommendations/reading material on implementing algorithms in code?
I find Project Euler useful for this kind of thing. It forces you to think about an algorithm and then implement it. Many of the questions then have extensive discussions on how to solve the problem (from the naive solutions to some pretty ingenious ones) that you can use to see what you did right and wrong.
In the discussion threads you'll find various implementations from other people in many different languages too. Coming up with a solution yourself and then comparing it to that from other people is (imho) a good way to learn.
Both of these introductory books have good information about this sort of thing:
How To Design Programs and moreso Structure and Interpretation of Computer Programs
Both are somewhat functional (and Scheme) oriented, but that's a natural fit for these sorts of problems.
On top of that, you might get quite a bit out of Project Euler
Derive your algorithm test-driven. I've been able to write much more complex algorithms correctly by using TDD than I was before.
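As a made-up illustration of that workflow with Python's unittest: the tests below would be written first, and the algorithm (a binary search here) is then grown until they pass.

```python
import unittest

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

class TestBinarySearch(unittest.TestCase):
    # In TDD these tests come first; the implementation grows to satisfy them.
    def test_found(self):
        self.assertEqual(binary_search([1, 3, 5, 7], 5), 2)
    def test_missing(self):
        self.assertEqual(binary_search([1, 3, 5, 7], 4), -1)
    def test_empty(self):
        self.assertEqual(binary_search([], 9), -1)

if __name__ == "__main__":
    unittest.main()
```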
Go on YouTube and look at some of the lectures on Introduction to Algorithms. There are some really, really good lectures that break down some of the most common algorithms, such as the Fibonacci series, and how to optimize them.
Start reading about big-O notation so you can understand how your algorithm grows with variable-size input and how to classify the run-time of the algorithm you have.
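Tying that back to the Fibonacci example from the original question, here is a sketch of the two classic variants and their growth rates; the loop the asker wrote is in fact "the proper way":

```python
def fib_naive(n):
    # O(2^n): recomputes the same subproblems over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_loop(n):
    # O(n): the iterative version -- the standard fix.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_loop(10))    # 55
# fib_naive(40) is noticeably slow; fib_loop(40) is instant.
```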
Start with this video series which I found excellent material on the subject:
Algorithms Lecture
If you can't translate pseudocode for a Fibonacci function to your language, then you should go and find a basic tutorial for your language, since it seems that you have not yet grasped its basic idioms.
If you have a working solution, but feel insecure about it, show it to others for review.
I wonder how many of you have implemented one of computer science's "classical algorithms" like Dijkstra's algorithm or data structures (e.g. binary search trees) in a real world, not academic project?
Is there a benefit to our dayjobs in knowing these algorithms and data structures when there are tons of libraries, frameworks and APIs which give you the same functionality?
Is there a benefit to our dayjobs in knowing these algorithms and data structures when there are tons of libraries, frameworks and APIs which give you the same functionality?
The library doesn't know what your problem domain is and won't be able to choose the correct algorithm to do the job. That is why I think it is important to know about them: then YOU can make the correct choice of algorithms to solve YOUR problem.
Knowing, or being able to understand these algorithms is important, these are the tools of your trade. It does not mean you have to be able to implement A* in an hour from memory. But you should be able to figure out what the advantages of using a red-black tree as opposed to a normal unbalanced tree are so you can decide if you need it or not. You need to be able to judge the fitness of an algorithm for solving your problem.
This might sound too schoolmasterish, but these "classical algorithms" were not invented to give college students exam questions; they were invented to solve problems or improve on current solutions. Just as the array, the linked list, and the stack are building blocks for writing a program, so are some of these. Just like in math, where you move from addition and subtraction to integration and differentiation, these are advanced techniques that will help you solve problems that are out there.
They might not be directly applicable to your problems or work situation but in the long run knowing of them will help you as a professional software engineer.
To answer your question, I did an implementation of A* recently for a game.
Is there a benefit to understanding your tools, rather than simply knowing that they exist?
Yes, of course there is. Taking a trivial example, don't you think there's a benefit to knowing the difference between List (or your language's equivalent dynamic array implementation) and LinkedList (or your language's equivalent)? It's pretty important to know that one has constant random access time, while the other is linear. And one requires N copies if you insert a value in the middle of the sequence, while the other can do it in constant time.
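In Python terms, for instance (list as the dynamic array, collections.deque standing in for the linked structure; an illustrative sketch):

```python
from collections import deque

arr = list(range(5))
dq = deque(range(5))

arr[3]               # O(1): arrays give constant-time random access
# dq[3] works too, but deque indexing is O(n) away from the ends

arr.insert(0, 99)    # O(n): every element shifts one slot to the right
dq.appendleft(99)    # O(1): linked structures splice in constant time
```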
Don't you think there's an advantage to understanding that the same sorting algorithm isn't always optimal? That for almost-sorted data, quicksort sucks, for example? Naively just calling Sort() and hoping for the best can become ridiculously expensive if you don't understand what's happening under the hood.
Of course there are a lot of algorithms you probably won't need, but even so, just understanding how they work may make it easier for yourself to come up with efficient algorithms to solve other, unrelated, problems.
Well, someone has to write the libraries. While working at a mapping software company, I implemented Dijkstra's, as well as binary search trees, B-trees, n-ary trees, BK-trees and hidden Markov models.
Besides, if all you want is a single 'well known' algorithm, and you also want the freedom to specialise it and optimise it if it becomes critical to performance, including a whole library seems like a poor choice.
We use a home-grown implementation of a pseudo-random number generator from Knuth's Seminumerical Algorithms as an aid in some statistical processing.
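I don't know which of Knuth's generators that was, but for flavor, here is a sketch of a linear congruential generator using the constants Knuth suggests for MMIX (my assumption for illustration; not the generator from the answer, and not suitable for cryptography):

```python
class LCG:
    """Linear congruential generator: x' = (a*x + c) mod 2^64.
    Constants are Knuth's MMIX parameters; illustrative only."""
    def __init__(self, seed=1):
        self.state = seed

    def next(self):
        self.state = (6364136223846793005 * self.state
                      + 1442695040888963407) % (1 << 64)
        return self.state

rng = LCG(seed=42)
print(rng.next() / (1 << 64))  # pseudo-random float in [0, 1)
```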
In my previous workplace, which was an EDA company, we implemented versions of Prim's and Dijkstra's algorithms, disjoint-set data structures, A* search and more. All of these had real-world significance. I believe this is dependent on the problem domain - some domains are more algorithm-intensive and some less so.
Having said that, there is a fine line to walk - I see no business reason for re-implementing STL or Java Generics. In many cases, a standard library is better than "reinventing the wheel". The closer you are to your core application, the more likely it is that you'll need to implement a textbook algorithm or data structure.
If you never work with performance-critical code, consider yourself lucky. However, I consider this scenario unrealistic. Performance problems could occur anywhere. And then it's necessary to know how to fix that problem. Obviously, merely knowing a few algorithm names isn't enough here – unless you want to implement them all and try them out one after the other.
No, knowing (at least some of) the inner workings of different algorithms is important for gauging their strengths and weaknesses and for analyzing how they would handle your situation.
Obviously, if there's a library already implementing exactly what you need, you're incredibly lucky. But let's face it, even if there is such a library, using it is often not completely straightforward (at the very least, interfaces and data representation often have to be adapted) so it's still good to know what to expect.
A* for a Pac-Man clone. It took me weeks to really get it, but to this day I consider it a thing of beauty.
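For the curious, here is roughly what a compact A* looks like on a grid (4-way movement, Manhattan-distance heuristic). The maze and names are made up for illustration, not the actual game code:

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid of 0 (open) / 1 (wall); returns path cost or None.
    Manhattan distance is an admissible heuristic for 4-way movement."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]          # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                           # stale queue entry
        r, c = node
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(maze, (0, 0), (2, 0)))  # 6: around the wall and back down
```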
I've had to implement some of the classical algorithms from numerical analysis. It was easier to write my own than to connect to an existing library. Also, I've had to write variations on classical algorithms because the textbook case didn't fit my application.
For classical data structures, I nearly always use the standard libraries, such as STL for C++. The one time recently when I thought STL didn't have the structure I needed (a heap), I rolled my own, only to have someone point out almost immediately that I didn't need to do that.
Classical algorithms I have used in actual work:
A topological sort (see the sketch after this list)
A red-black tree (although I will confess that I only had to implement insertions for that application, and it only got used in a prototype). This got used to implement an 'ordered dict' type structure in Python.
A priority queue
State machines of various sorts
Probably one or two others I can't remember.
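As a sketch of the first item, here is Kahn's algorithm for topological sorting (my own minimal version, not the code from that job):

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm. graph maps each node to the nodes it points to.
    Returns a dependency-respecting order, or raises on a cycle."""
    indegree = {node: 0 for node in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] += 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for t in graph[node]:
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle")
    return order

print(topological_sort({"a": ["b", "c"], "b": ["c"], "c": []}))
# ['a', 'b', 'c']
```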
As to the second part of the question:
An understanding of how the algorithms work, their complexity and semantics gets used on a fairly regular basis. They also inform the design of systems. Occasionally one has to do things involving parsing or protocol handling, or some computation that's slightly clever. Having a working knowledge of what the algorithms do, how they work, how expensive they are and where one might find them lying around in library code goes a long way to knowing how to avoid reinventing the wheel poorly.
I use the Levenshtein distance algorithm to help implement a 'Did you mean [suggested word]?' feature in our website search.
Works quite well when combined with our 'tagging' system, which allows us to associate extra words (other than those in the title/description/etc.) with items in the database.
It's not perfect by any means, but it's way better than most corporate site searches, if I do say so myself ;)
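For reference, the standard dynamic-programming formulation of Levenshtein distance looks like this (a minimal sketch, not that site's production code):

```python
def levenshtein(a, b):
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn string a into string b."""
    prev = list(range(len(b) + 1))      # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                      # distance from prefix of a to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete from a
                            curr[j - 1] + 1,            # insert into a
                            prev[j - 1] + (ca != cb)))  # substitute/match
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3 -- a classic 'did you mean?' pair
```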
Classical algorithms are usually associated with something glamorous, like games, or Web search, or scientific computation. However, I had to use some of the classical algorithms for a mere enterprise application.
I was building a metadata migration tool, and I had to use topological sort for dependency resolution, various forms of graph traversals for queries on metadata, and a modified variation of Tarjan's union-find data structure to partition forest-like structured metadata into trees.
That was a really satisfying experience. Most of those algorithms had been implemented before, but their implementations lacked something that I needed for my task. That's why it's important to understand their internals.
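For the union-find piece mentioned above, the textbook structure with path compression and union by rank looks roughly like this (the tree-partitioning modification described in the answer is not shown):

```python
class DisjointSet:
    """Union-find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx            # attach shorter tree under taller
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(3, 4)
print(ds.find(1) == ds.find(0))  # True: same partition
print(ds.find(2) == ds.find(0))  # False: separate partition
```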