I read over the Wikipedia article, but it seems to be beyond my comprehension. It says it's for optimization, but how is it different from any other method for optimizing things?
An answer that introduces me to linear programming so I can begin diving into some less beginner-accessible material would be most helpful.
The answers so far have given an algebraic definition of linear programming, and an operational definition. But there is also a geometric definition. A polytope is an n-dimensional generalization of a polygon (in two dimensions) or a polyhedron (in three dimensions). A convex polytope is a polytope which is also a convex set. By definition, linear programming is an optimization problem in which you want to maximize or minimize a linear function on a convex polytope.
For example: Suppose that you want to buy some combination of red sand and blue sand. Suppose also:
You can't buy a negative amount of either kind.
The depot only has 300 pounds of red sand and 400 pounds of blue sand.
Also your jeep has a weight limit of 500 pounds.
If you draw a picture in the plane of how much you can buy with these constraints, it's a convex pentagon. Then, whatever you want to optimize (say, the total amount of gold in the sand), you know that an optimum (not necessarily the only optimum) is at one of the vertices of the polytope. In fact, there is a much stronger result: even in high dimensions, any such linear programming problem can be solved in polynomial time in the number of constraints, i.e., the putative sides of the polytope. Note that not every constraint corresponds to a side. If the constraint is an equality, it might reduce the dimension of the polytope. And if the constraint is an inequality, it might not create a side if it is already implied by all of the other constraints.
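For concreteness, here is a minimal sketch of the sand example as a linear program, using scipy.optimize.linprog; the gold contents per pound are made-up numbers, and the objective is negated because linprog minimizes.

```python
from scipy.optimize import linprog

# Maximize 0.02*red + 0.01*blue (ounces of gold; made-up contents)
# by minimizing the negated objective.
c = [-0.02, -0.01]
A_ub = [[1, 0],   # red  <= 300 (depot's red sand)
        [0, 1],   # blue <= 400 (depot's blue sand)
        [1, 1]]   # red + blue <= 500 (jeep weight limit)
b_ub = [300, 400, 500]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # -> [300. 200.], a vertex of the convex pentagon
```

The optimum lands on a vertex of the pentagon, exactly as the geometric picture predicts.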
There are a lot of practical optimization problems that are linear programming. One of the first examples was the "diet problem": Given a menu of a bunch of kinds of food, what is the cheapest possible balanced diet? It's a linear programming problem because the cost is linear, and because all of the constraints (vitamins, calories, the assumption that you can't buy a negative amount of food, etc.) are linear.
But, linear programming is even more important for a theoretical reason. It is one of the most powerful polynomial-time algorithms for optimization or for any other purpose. As such, it is very important as a substitute for approximately solving other optimization problems, and as a subroutine for exactly solving them.
Yes, two generalizations are convex programming and integer programming. With some qualifications, convex programming can work just as well as linear programming, provided that the objective (the thing to maximize) is linear. It turns out that convexity, not flat sides, is the main reason that linear programming has a good algorithm.
Integer programming, on the other hand, is usually hard. For instance, suppose in the example problem you have to buy the sand in fixed-size bags rather than in bulk; that is then integer programming, and it is NP-hard in general. How hard it is in practice depends on how close it is to linear programming. There are some celebrated examples of integer programming problems in which, miraculously, all of the vertices of the linear program are integer points. Then you can solve the linear program and the solution will happen to be integral. One example of such a problem is the marriage problem: how to marry n men and n women to each other to maximize total happiness. (Or, n cities to n factories, n jobs to n applicants, n computers to n printers, etc.)
Linear programming is a topic of 'mathematical programming', which is also called 'mathematical optimization'. Linear programs differ from general mathematical programs in that for a Linear Program (LP) all constraint functions and the objective function are linear with respect to their variables.
A good place to start would be here if you want the original work by Dantzig, or if you want a textbook, I recommend this one. If you want to look up your own resources, start by looking up the Simplex method, a very common technique for solving these programs, or the less common but definitely polynomial-time Ellipsoid method. Though I haven't read it all, a quick look suggests this PDF may also be a good place to start. Make sure whatever you end up reading covers duality (and perhaps specifically Farkas' lemma), as it's a central idea in most LP solvers.
The most natural extensions are either integer programs (similar to LPs, but all variables must take on integer values, i.e. no fractional components) or convex programming (perhaps a more general extension). A good convex optimization textbook is available in PDF form here.
As everyone else has said, Linear Programming is a way to solve optimization problems, in which the terms are linear.
It might help to understand what types of problems LPs solve.
One example where I've used Linear Programming is building a restaurant schedule. In a restaurant you have skill sets:
Cooks
Servers
Dishwashers
Hosts
Bussers
Manager
etc
And you have employees, each with one or more skill sets. Each employee also has a specific availability. For example, Bob can't work Sunday mornings because he's a pastor at a local church. Employees also have an associated cost. Bob might be $10.50/hr while Suzy is $5.15/hr. Finally Employees might have minimum guaranteed hours. Since Bob has been an employee for 15 years, the boss says he'll always get at least 35 hours.
The restaurant itself has demands. For example it has 3 shifts: Morning, Afternoon, and Night, and each of these shifts has a set of staffing requirements: We need 1 cook, 1 server, 1 manager in the morning, 3 cooks, 2 servers, 2 hosts, 2 managers in the afternoon, and 4 cooks, 4 servers, 3 hosts, 2 managers, 2 bussers in the evening. Each shift will have a duration, and you can figure out the cost of each shift by multiplying the duration by the employee's hourly wage.
Finally, we have state and federal laws and some basic "business" rules: no employee can work more than 8 hours without going into overtime, no employee can be scheduled for less than 2 hours (because it would suck to make a 30-minute commute for a 2-hour shift), employees can't work two overlapping shifts, etc.
Now, given all these requirements, produce a schedule that meets every requirement and has the lowest labor cost.
This is an example of a linear programming optimization problem.
A linear program typically consists of:
An objective function, variables, variable bounds, and constraints.
Since we want to minimize cost, our objective function is going to involve shifts that employees work, and the associated costs (shift duration * wage).
The variables in this case, are the shifts that each employee can work.
The bounds on these variables are 0 and 1, and the variables are integers: either an employee is working the shift (1), or the employee is not working the shift (0). This, by the way, makes it a special kind of program called a Binary Integer Program, or BIP for short, because all the variables are integers (no fractional values) and all values are either 0 or 1.
The constraints are equality/inequality constraints based on the requirements above.
For example, if Bob and Suzy can both work as cooks in the morning, then Bob_Morning_Cook1_Shift + Suzy_Morning_Cook1_Shift = 1, with Bob_Morning_Cook1_Shift in {0,1} and Suzy_Morning_Cook1_Shift in {0,1} due to the bounds mentioned above. These three pieces of information specify that exactly one of the two employees can be assigned as the first morning cook.
So once you've defined all the constraints that model your problem, you can begin to solve the problem. If a solution can be found (and depending on the constraints the problem may be infeasible), it will give you the employee assignments that produce the lowest weekly labor cost.
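To make this concrete, here is a minimal sketch of a tiny version of such a BIP using the PuLP modelling library (pip install pulp). The employees, wages, shifts, and qualification pairs are all made-up example data, not the real schedule above.

```python
import pulp

employees = {"Bob": 10.50, "Suzy": 5.15}              # hourly wages
shifts = {"Morning_Cook1": 6, "Morning_Server1": 6}   # shift -> hours
can_work = {("Bob", "Morning_Cook1"), ("Suzy", "Morning_Cook1"),
            ("Suzy", "Morning_Server1")}              # qualifications

prob = pulp.LpProblem("schedule", pulp.LpMinimize)
# x[e, s] = 1 if employee e works shift s, 0 otherwise (the BIP variables)
x = {(e, s): pulp.LpVariable(f"x_{e}_{s}", cat="Binary") for (e, s) in can_work}

# Objective: total labor cost = shift duration * hourly wage, summed
prob += pulp.lpSum(shifts[s] * employees[e] * x[e, s] for (e, s) in can_work)

# Each shift slot is filled by exactly one qualified employee
for s in shifts:
    prob += pulp.lpSum(x[e, t] for (e, t) in can_work if t == s) == 1
# No employee works more than 8 hours (no overtime)
for e in employees:
    prob += pulp.lpSum(shifts[s] * x[f, s] for (f, s) in can_work if f == e) <= 8

prob.solve()
print([(e, s) for (e, s) in can_work if pulp.value(x[e, s]) == 1])
```

Here the 8-hour rule forces Suzy out of one of the two shifts she could otherwise take, so the solver assigns Bob as the cook and Suzy as the server.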
Linear programming is an optimization technique that involves linear constraints and a linear objective function. The constraints are written to bound the problem space, while the objective function is something that you are attempting to minimize (or possibly maximize) that satisfies the constraints. The simplex algorithm is typically used to walk along edges of the constraint intersection to find the minimum (or maximum) value of the objective function satisfying the constraints.
When setting up an LP problem, it's important to make sure that the constraints properly bound the objective function. It's possible to define constraints which result in no possible solution (e.g. x > 1 and -x > 1); the problem is then over-constrained, i.e. infeasible. It's also possible to under-constrain a problem (e.g. find min x such that x < 1), which leaves it unbounded.
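Here is a minimal sketch of both failure modes using scipy.optimize.linprog; both toy models are made up purely to show how a solver reports them.

```python
from scipy.optimize import linprog

# Over-constrained (infeasible): x > 1 and -x > 1, written as
# -x <= -1 and x <= -1, can never both hold.
res = linprog(c=[1], A_ub=[[-1], [1]], b_ub=[-1, -1], bounds=[(None, None)])
print(res.status, res.message)  # status 2: infeasible

# Under-constrained (unbounded): minimize x subject only to x <= 1.
res = linprog(c=[1], A_ub=[[1]], b_ub=[1], bounds=[(None, None)])
print(res.status, res.message)  # status 3: unbounded
```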
One big difference (or at least, distinguishing feature) of linear programming is that the constraints are modeled as linear equations, i.e. they're all of the form c_1 x_1 + c_2 x_2.... The standard form section of the wikipedia article gives a pretty good overview of this.
Another difference/feature is that linear programming is seeking to maximize (or minimize) ONE function - you can't effectively do multi-objective optimization.
I've seen online that one can write the travelling salesman problem as a linear program and compute it using software such as CPLEX for Java.
I have 1000 towns and need to find a short route. I plan on partitioning these 1000 towns into clusters of ~100 towns and performing some linear programming algorithm on each individual cluster.
The question I have is: how exactly do I represent this as a linear program?
So I have 100 towns and I'm sure everyone's aware of how TSP works.
I literally have no clue how I can write linear constraints, objectives and variables which satisfy the TSP.
Could someone explain to me how this is done or send me a link which explains it clearly, because I've been researching a lot and can't seem to find anything.
EDIT:
A bit of extra information I found:
We label the cities with numbers 0 to n and define the matrix x, where x_ij = 1 if the route goes directly from city i to city j, and x_ij = 0 otherwise.
The constraints are:
i) Each city be arrived at from exactly one other city
ii) From each city there is a departure to exactly one other city
iii) The route isn't broken up into separate islands.
Again, this makes complete sense to me, but I'm still having trouble writing these constraints as linear expressions.
Thanks for any help!
According to this Wikipedia article, the travelling salesman problem can be modelled as an integer linear program, which I believe to be the key issue of the question. The idea is to have decision variables with permitted values in {0,1} which model the selected edges in the graph. Suitable constraints must ensure that the selected edges cover every node, that the selected edges form a collection of cycles, and that there is only one connected component (which in total means there is exactly one cycle containing every node). Note that the article also gives an explicit formulation and explains the interpretations of the constraints.
As the travelling salesman problem is NP-hard, it cannot be solved via (non-integral) linear programming unless P=NP.
The TSP is a rather complex integer programming problem due to its combinatorial nature.
There are several (exact and approximate) techniques to solve it. The model in Wikipedia is just one of them: it has constraints to ensure there is exactly one incoming and one outgoing arc at each node, and the constraints with u variables prevent sub-cycles in the solution. There are also several ways to write these sub-cycle-preventing constraints; some of them can be found in section 4.1 of this article. The section presents some models for the TSP (all of them can be solved using CPLEX, Gurobi, MATLAB or other integer/linear programming software).
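To make this concrete, here is a minimal sketch of one such model, the Miller-Tucker-Zemlin (MTZ) formulation with the sub-cycle-preventing u variables, written with the PuLP modelling library (pip install pulp) instead of CPLEX; the 5-town distance matrix is made-up example data.

```python
import pulp

n = 5
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]

prob = pulp.LpProblem("tsp", pulp.LpMinimize)

# x[i, j] = 1 if the route goes directly from city i to city j
x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
     for i in range(n) for j in range(n) if i != j}
# u[i]: position of city i in the tour (sub-cycle preventing variables)
u = {i: pulp.LpVariable(f"u_{i}", lowBound=2, upBound=n, cat="Integer")
     for i in range(1, n)}

# Objective: total tour length
prob += pulp.lpSum(dist[i][j] * x[i, j] for (i, j) in x)

for i in range(n):
    # (i) each city is arrived at from exactly one other city
    prob += pulp.lpSum(x[j, i] for j in range(n) if j != i) == 1
    # (ii) from each city there is a departure to exactly one other city
    prob += pulp.lpSum(x[i, j] for j in range(n) if j != i) == 1

# (iii) MTZ constraints: no sub-tour may bypass city 0
for i in range(1, n):
    for j in range(1, n):
        if i != j:
            prob += u[i] - u[j] + n * x[i, j] <= n - 1

prob.solve()
print(sorted((i, j) for (i, j) in x if pulp.value(x[i, j]) == 1))
```

The three constraint groups correspond exactly to conditions (i)-(iii) in the question.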
Hope I could be of any help. (=
I'm dealing with a war game. I have a list of my bases B(x,y) from which I can send attacks on the enemy (they have bases between my own bases). Each base B can attack at a range R (the same radius for all bases). How can I find my bases to be able to attack as many enemy bases as possible, but use a minimum number of my bases?
I've reduced the problem to finding the minimum number of bases (and their coordinates) required to cover the largest possible area. I wonder if there is a better way than checking all possible combinations, because the number of bases can reach thousands.
Example: If the attack radius is 10 and I have five bases in a square and its center: (0,0), (10,0), (10,10), (0,10), (5,5) then the answer is that only the first four would be needed because all the area covered by the one in the center is already covered by the others.
Note 1 The solution must be single-threaded.
Note 2 The solution doesn't have to be perfect if that means a big gain in speed. The number of bases reaches thousands, and this needs to use as little time as possible. I would consider a running time greater than 100 ms for 10,000 bases in Python on a modern computer unacceptable, so I was thinking maybe I could start by eliminating the obvious, e.g. if there are multiple bases within R/10 distance of each other, simply eliminate all except one (whichever).
If I understand you correctly, the enemy bases and your bases are given as well as the (constant) attack radius. I.e. if you select one of your bases, you know exactly which of the enemy bases get attacked due to the selection.
The first step would be to eliminate those enemy bases which cannot be attacked by any of your bases. Then, selecting all of your bases guarantees attacking all attackable enemy bases, so there is a solution that attacks as many enemy bases as possible.
Among all those solutions you are looking for the one that uses the minimum number of your bases. This problem is equivalent to the set cover problem (https://en.wikipedia.org/wiki/Set_cover_problem), which is unfortunately NP-hard. You can apply all known solution methods, such as integer linear programming or the already mentioned greedy algorithm / metaheuristics.
If your problem instance is large and runtime is the primary concern, greedy is probably the way to go. For example, you could repeatedly add to the selection whichever of your bases attacks the highest number of enemy bases that are not already under attack by your previously selected bases, as in the sketch below.
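Here is a minimal sketch of that greedy heuristic; the coverage mapping (which of your bases can attack which enemy bases, precomputed from the coordinates and the radius R) is made-up example data.

```python
def greedy_base_selection(coverage):
    """coverage: dict mapping your base -> set of enemy bases it can attack."""
    uncovered = set().union(*coverage.values())  # attackable enemies only
    selected = []
    while uncovered:
        # Pick the base that newly covers the most enemy bases
        best = max(coverage, key=lambda b: len(coverage[b] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break  # safety net; cannot happen while uncovered is non-empty
        selected.append(best)
        uncovered -= gained
    return selected

coverage = {"B1": {"E1", "E2"},
            "B2": {"E2", "E3", "E4"},
            "B3": {"E4"}}
print(greedy_base_selection(coverage))  # -> ['B2', 'B1']
```

Each iteration is a linear scan over your bases, so the whole loop stays fast even for thousands of bases.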
The solution depends on your needs. If you need a real-time answer, a greedy algorithm could provide a good solution.
Another solution could be using a metaheuristic with a time limit (http://en.wikipedia.org/wiki/Metaheuristic). I would probably use a genetic algorithm to search for a solution to this problem within a limited time.
If you're interested, I can provide a toy example implementation in Python.
EDIT:
When you have to provide a solution quickly, a greedy algorithm is often better. But in your case I have doubts: a particularity of many greedy algorithms is that you need to start from scratch each time you compute a new result.
Speaking again of genetic algorithms: each time you have to make a decision, you could restart the search process from its last result. In fact, you could probably leave it running as a subprocess and, every 100 ms, take the best solution computed during the last loop.
If it is not too greedy with computing resources, this approach would provide better results than the greedy one in the long run, since the solution will probably need to adapt to changes in the situation while many elements stay unchanged. Just be aware that initializing a metaheuristic search with the solution of a greedy algorithm is a good idea anyway!
We know that genetic algorithms (or evolutionary computation) work with an encoding of the points in our solution space Ω rather than with these points directly. In the literature, we often find that GAs are said to have the following drawback: (1) since many chromosomes are coded into a similar point of Ω, or similar chromosomes decode to very different points, the efficiency is quite low. Do you think that is really a drawback? After all, these kinds of algorithms use the mutation operator in each iteration to diversify the candidate solutions, and to add more diversification we can simply increase the probability of crossover. And we mustn't forget that our initial population (of chromosomes) is randomly generated (yet more diversification). The question is: if you think that (1) is a drawback of GAs, can you provide more details? Thank you.
Mutation and random initialization are not enough to combat the problem known as genetic drift, which is the major problem of genetic algorithms. Genetic drift means that the GA may quickly lose most of its genetic diversity, and the search then proceeds in a way that is not beneficial for crossover. This is because the random initial population quickly converges. Mutation is a different thing: if it is high it will diversify, true, but at the same time it will prevent convergence, and the solutions will remain at a certain distance from the optimum with higher probability. You will need to adapt the mutation probability (not the crossover probability) during the search. In a similar manner the Evolution Strategy, which is similar to a GA, adapts the mutation strength during the search.
We have developed a variant of the GA called Offspring Selection GA (OSGA), which introduces another selection step after crossover. Only those children are accepted whose fitness surpasses a threshold derived from their parents' fitnesses (that of the better parent, the worse parent, or any linearly interpolated value in between). This way you can even use random parent selection and put the bias on the quality of the offspring. It has been shown that this slows genetic drift. The algorithm is implemented in our framework HeuristicLab, which features a GUI, so you can download it and try it on some problems.
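For illustration, here is a minimal sketch of the offspring-selection idea in Python; it is not the HeuristicLab implementation, and the fitness, crossover and mutate functions are assumed to be supplied by the caller (higher fitness is better).

```python
import random

def offspring_selection_step(population, fitness, crossover, mutate,
                             comparison_factor=1.0, max_tries=100):
    """Return one child whose fitness surpasses a threshold interpolated
    between the worse parent (factor 0.0) and the better parent (1.0)."""
    for _ in range(max_tries):
        p1, p2 = random.sample(population, 2)  # random parent selection
        child = mutate(crossover(p1, p2))
        f1, f2 = fitness(p1), fitness(p2)
        threshold = min(f1, f2) + comparison_factor * abs(f1 - f2)
        if fitness(child) > threshold:
            return child
    return None  # pressure too high; the caller may lower the factor
```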
Other techniques that combat genetic drift are niching and crowding which let the diversity flow into the selection and thus introduce another, but likely different bias.
EDIT: I want to add that the situation of having multiple solutions with equal quality might of course pose a problem, as it creates neutral areas in the search space. However, I think you didn't really mean that. The primary problem is genetic drift, i.e. the loss of (important) genetic information.
As a sidenote, you (the OP) said:
We know that Genetic Algorithms (or evolutionary computation) work with an encoding of the points in our solution space Ω rather than these points directly.
This is not always true. An individual is coded as a genotype, which can have any shape, such as a string (genetic algorithms) or a vector of reals (evolution strategies). Each genotype is transformed into a phenotype when assessing the individual, i.e. when its fitness is calculated. In some cases, the phenotype is identical to the genotype: this is called direct coding. Otherwise, the coding is called indirect. (You may find more definitions here (section 2.2.1).)
Example of direct encoding:
http://en.wikipedia.org/wiki/Neuroevolution#Direct_and_Indirect_Encoding_of_Networks
Example of indirect encoding:
Suppose you want to optimize the size of a rectangular parallelepiped defined by its length, height and width. To simplify the example, assume that these three quantities are integers between 0 and 15. We can then describe each of them using a 4-bit binary number. An example of a potential solution is the genotype 0001 0111 1010. The corresponding phenotype is a parallelepiped of length 1, height 7 and width 10.
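A minimal sketch of decoding this indirect encoding, assuming the 12-bit genotype is stored as a string:

```python
def decode(genotype: str):
    """Split a 12-bit genotype into three 4-bit integers:
    (length, height, width), each between 0 and 15."""
    assert len(genotype) == 12
    return tuple(int(genotype[i:i + 4], 2) for i in range(0, 12, 4))

print(decode("000101111010"))  # -> (1, 7, 10)
```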
Now, back to the original question on diversity: in addition to what DonAndre said, you could read chapter 9, "Multi-Modal Problems and Spatial Distribution", of the excellent book Introduction to Evolutionary Computing by A. E. Eiben and J. E. Smith, as well as a research paper on the matter, such as Encouraging Behavioral Diversity in Evolutionary Robotics: an Empirical Study. In a word, diversity is not a drawback of GAs, it is "just" an issue.
I've always been writing software to solve business problems. I came across ILP while I was going through one of the SO posts. I googled it, but I am unable to relate how I can use it to solve business problems. I would appreciate it if someone could help me understand it in layman's terms.
ILP can be used to solve essentially any problem involving making a bunch of decisions, each of which only has several possible outcomes, all known ahead of time, and in which the overall "quality" of any combination of choices can be described using a function that doesn't depend on "interactions" between choices. To see how it works, it's easiest to restrict further to variables that can only be 0 or 1 (the smallest useful range of integers). Now:
Each decision requiring a yes/no answer becomes a variable
The objective function should describe the thing we want to maximise (or minimise) as a weighted combination of these variables
You need to find a way to express each constraint (combination of choices that cannot be made at the same time) using one or more linear equality or inequality constraints
Example
For example, suppose you have 3 workers, Anne, Bill and Carl, and 3 jobs, Dusting, Typing and Packing. All of the people can do all of the jobs, but they each have different efficiency/ability levels at each job, so we want to find the best task for each of them to do to maximise overall efficiency. We want each person to perform exactly 1 job.
Variables
One way to set this problem up is with 9 variables, one for each combination of worker and job. The variable x_ad will get the value 1 if Anne should Dust in the optimal solution, and 0 otherwise; x_bp will get the value 1 if Bill should Pack in the optimal solution, and 0 otherwise; and so on.
Objective Function
The next thing to do is to formulate an objective function that we want to maximise or minimise. Suppose that based on Anne, Bill and Carl's most recent performance evaluations, we have a table of 9 numbers telling us how many minutes it takes each of them to perform each of the 3 jobs. In this case it makes sense to take the sum of all 9 variables, each multiplied by the time needed for that particular worker to perform that particular job, and to look to minimise this sum -- that is, to minimise the total time taken to get all the work done.
Constraints
The final step is to give constraints that enforce that (a) everyone does exactly 1 job and (b) every job is done by exactly 1 person. (Note that actually these steps can be done in any order.)
To make sure that Anne does exactly 1 job, we can add the constraint that x_ad + x_at + x_ap = 1. Similar constraints can be added for Bill and Carl.
To make sure that exactly 1 person Dusts, we can add the constraint that x_ad + x_bd + x_cd = 1. Similar constraints can be added for Typing and Packing.
Altogether there are 6 constraints. You can now supply this 9-variable, 6-constraint problem to an ILP solver and it will spit back out the values for the variables in one of the optimal solutions -- exactly 3 of them will be 1 and the rest will be 0. The 3 that are 1 tell you which people should be doing which job!
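For concreteness, here is a minimal sketch of this 9-variable, 6-constraint ILP using the PuLP modelling library (pip install pulp); the minutes matrix is made-up example data standing in for the performance evaluations.

```python
import pulp

workers = ["Anne", "Bill", "Carl"]
jobs = ["Dust", "Type", "Pack"]
minutes = {("Anne", "Dust"): 8, ("Anne", "Type"): 4, ("Anne", "Pack"): 7,
           ("Bill", "Dust"): 6, ("Bill", "Type"): 9, ("Bill", "Pack"): 5,
           ("Carl", "Dust"): 7, ("Carl", "Type"): 6, ("Carl", "Pack"): 8}

prob = pulp.LpProblem("assignment", pulp.LpMinimize)
# x[w, j] = 1 if worker w does job j in the optimal solution
x = {(w, j): pulp.LpVariable(f"x_{w}_{j}", cat="Binary")
     for w in workers for j in jobs}

# Objective: minimise the total minutes to get all the work done
prob += pulp.lpSum(minutes[w, j] * x[w, j] for w in workers for j in jobs)

for w in workers:   # (a) everyone does exactly 1 job
    prob += pulp.lpSum(x[w, j] for j in jobs) == 1
for j in jobs:      # (b) every job is done by exactly 1 person
    prob += pulp.lpSum(x[w, j] for w in workers) == 1

prob.solve()
print([(w, j) for (w, j) in x if pulp.value(x[w, j]) == 1])
```

Exactly 3 of the 9 variables come back as 1, giving the assignment (with this data: Anne types, Bill packs, Carl dusts).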
ILP is General
As it happens, this particular problem has a special structure that allows it to be solved more efficiently using a different algorithm. The advantage of using ILP is that variations on the problem can be easily incorporated: for example, if there were actually 4 people and only 3 jobs, then we would need to relax the constraints so that each person does at most 1 job, instead of exactly 1 job. This can be expressed simply by changing the equals sign in each of the first 3 constraints into a less-than-or-equals sign.
First, read a linear programming example from Wikipedia
Now imagine the farmer producing pigs and chickens, or a factory producing toasters and vacuums - now the outputs (and possibly constraints) are integers, so those pretty graphs are going to go all crookedly step-wise. That's a business application that is easily represented as a linear programming problem.
I've used integer linear programming before to determine how to tile n identically proportioned images to maximize screen space used to display these images, and the formalism can represent covering problems like scheduling, but business applications of integer linear programming seem like the more natural applications of it.
SO user flolo says:
Use cases where I often met it: In digital circuit design you have objects to be placed/mapped onto certain parts of a chip (FPGA-Placing) - this can be done with ILP. Also in HW-SW codesign there often arise the partition problem: Which part of a program should still run on a CPU and which part should be accelerated on hardware. This can be also solved via ILP.
A sample ILP problem looks something like:
maximize 37∙x1 + 45∙x2
where
x1, x2, ... should be non-negative integers
...but there is a set of constraints of the form
a1∙x1 + b1∙x2 <= k1
a2∙x1 + b2∙x2 <= k2
a3∙x1 + b3∙x2 <= k3
...
Now, a simpler articulation of Wikipedia's example:
A farmer has L m² land to be planted with either wheat or barley or a combination of the two.
The farmer has F grams of fertilizer, and P grams of insecticide.
Every m² of wheat requires F1 grams of fertilizer, and P1 grams of insecticide
Every m² of barley requires F2 grams of fertilizer, and P2 grams of insecticide
Now,
Let a1 denote the selling price of wheat per 1 m²
Let a2 denote the selling price of barley per 1 m²
Let x1 denote the area of land to be planted with wheat
Let x2 denote the area of land to be planted with barley
x1, x2 are non-negative integers (assume we can plant at 1 m² resolution)
So,
the profit is a1∙x1 + a2∙x2 - we want to maximize it
Because the farmer has a limited area of land: x1+x2<=L
Because the farmer has a limited amount of fertilizer: F1∙x1+F2∙x2 <= F
Because the farmer has a limited amount of insecticide: P1∙x1+P2∙x2 <= P
a1,a2,L,F1,F2,F,P1,P2,P - are all constants (in our example: positive)
We are looking for non-negative integers x1, x2 that will maximize the expression stated, given the constraints stated.
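Here is a minimal sketch of this farmer problem using scipy.optimize.milp (SciPy 1.9+); all the constants are made-up example values, and the objective is negated because milp minimizes.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

a1, a2 = 30, 20           # selling price per m² of wheat / barley
L, F, P = 100, 1500, 900  # available land, fertilizer, insecticide
F1, F2 = 20, 10           # fertilizer per m² of wheat / barley
P1, P2 = 10, 8            # insecticide per m² of wheat / barley

c = np.array([-a1, -a2])  # maximize profit = minimize its negation
constraints = LinearConstraint(A=[[1, 1], [F1, F2], [P1, P2]],
                               ub=[L, F, P])
res = milp(c, constraints=constraints,
           integrality=np.ones(2),   # x1, x2 must be integers
           bounds=Bounds(lb=0))      # and non-negative
print(res.x, -res.fun)               # optimal areas and profit
```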
Hope it's clear...
ILP "by itself" can directly model lots of stuff. If you search for LP examples you will probably find lots of famous textbook cases, such as the diet problem
Given a set of pills, each with a vitamin content and a daily vitamin
quota, find the cheapest cocktail that matches the quota.
Many such problems naturally have instances that require variables to be integers (perhaps you can't split pills in half).
The really interesting thing, though, is that a great deal of combinatorial problems actually reduce to LP. One of my favourites is the assignment problem:
Given a set of N workers, N tasks and an N by N matrix describing how much each worker charges for each task, determine which task to give to each worker in order to minimize cost.
Most solutions that naturally come up have exponential complexity, but there is a polynomial solution using linear programming.
When it comes to ILP, ILP has the added benefit/difficulty of being NP-complete. This means that it can be used to model a very wide range of problems (boolean satisfiability is also very popular in this regard). Since there are many good and optimized ILP solvers out there it is often viable to translate an NP-complete problem into ILP instead of devising a custom algorithm of your own.
You can easily apply linear programming anywhere you want to optimize something and the objective function is linear. You can make schedules (I mean big ones, as for train companies that need to optimize the utilization of vehicles and tracks), plan production (optimize profit), almost everything. Sometimes it is tricky to formulate your problem as an IP, and/or sometimes you meet the problem that your solution says you must produce e.g. 0.345 cars for optimum profit. That is of course not possible, and so you constrain even more: your variable for the number of cars must be an integer. Even though this now sounds simpler (because you have infinitely fewer choices for your variable), it's actually harder: at this point the problem becomes NP-hard. Which actually means you can solve (essentially) any problem on your computer with ILP, you just have to transform it.
For you, I would recommend an intro that covers some basic (I)LP material. Offhand I don't know of a good online resource (but if you google you will find some); as a book I can recommend Linear Programming by Chvátal. It has very good examples and also describes real use cases.
The other answers here have excellent examples. Two of the gold standards in business of using integer programming and more generally operations research are
the journal Interfaces published by INFORMS (The Institute for Operations Research and the Management Sciences)
winners of the Franz Edelman Award for Achievement in Operations Research and the Management Sciences
Interfaces publishes research that uses operations research applied to real-world problems, and the Edelman award is a highly competitive award for business use of operations research techniques.
There was a post on here recently which posed the following question:
You have a two-dimensional plane of (X, Y) coordinates. A bunch of random points are chosen. You need to select the largest possible set of chosen points, such that no two points share an X coordinate and no two points share a Y coordinate.
This is all the information that was provided.
There were two possible solutions presented.
One suggested using a maximum flow algorithm, such that each selected point maps to a path linking (source → X → Y → sink). This runs in O(V³) time, where V is the number of vertices selected.
Another (mine) suggested using the Hungarian algorithm. Create an n×n matrix of 1s, then set every chosen (x, y) coordinate to 0. The Hungarian algorithm will give you the lowest cost for this matrix, and the answer is the number of coordinates selected which equal 0. This runs in O(n³) time, where n is the greater of the number of rows and the number of columns.
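For reference, here is a minimal sketch of the Hungarian approach using scipy.optimize.linear_sum_assignment; the chosen points are made-up example data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

n = 5
chosen = [(0, 1), (0, 3), (2, 1), (3, 4), (4, 0)]

cost = np.ones((n, n))          # matrix of 1s...
for xc, yc in chosen:
    cost[xc, yc] = 0            # ...with chosen coordinates set to 0

rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
# The answer is how many selected coordinates have cost 0, i.e. the size
# of the largest subset with all-distinct X and Y coordinates.
print(int((cost[rows, cols] == 0).sum()))  # -> 4
```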
My reasoning is that, for the vast majority of cases, the Hungarian algorithm is going to be faster; V is equal to n in the case where there's one chosen point for each row or column, and substantially greater for any case where there's more than that: given a 50×50 matrix with half the coordinates chosen, V is 1,250 and n is 50.
The counterargument is that there are some cases, like a 10⁹×10⁹ matrix with only two points selected, where V is 2 and n is 1,000,000,000. For this case, it takes the Hungarian algorithm a ridiculously long time to run, while the maximum flow algorithm is blindingly fast.
Here is the question: Given that the problem doesn't provide any information regarding the size of the matrix or the probability that a given point is chosen (so you can't know for sure) how do you decide which algorithm, in general, is a better choice for the problem?
You can't, it's an imponderable.
You can only define which is better "in general" by defining what inputs you will see "in general". So for example you could whip up a probability model of the inputs, so that the expected value of V is a function of n, and choose the one with the best expected runtime under that model. But there may be arbitrary choices made in the construction of your model, so that different models give different answers. One model might choose co-ordinates at random, another model might look at the actual use-case for some program you're thinking of writing, and look at the distribution of inputs it will encounter.
You can alternatively talk about which has the best worst case (across all possible inputs with given constraints), which has the virtue of being easy to define, and the flaw that it's not guaranteed to tell you anything about the performance of your actual program. So for instance HeapSort is faster than QuickSort in the worst case, but slower in the average case. Which is faster? Depends whether you care about average case or worst case. If you don't care which case, you're not allowed to care which "is faster".
This is analogous to trying to answer the question "what is the probability that the next person you see will have an above (mean) average number of legs?".
We might implicitly assume that the next person you meet will be selected at random with uniform distribution from the human population (and hence the answer is "slightly less than one", since the mean is less than the mode average, and the vast majority of people are at the mode).
Or we might assume that your next meeting with another person is randomly selected with uniform distribution from the set of all meetings between two people, in which case the answer is still "slightly less than one", but I reckon not the exact same value as the first - one-and-zero-legged people quite possibly congregate with "their own kind" very slightly more than their frequency within the population would suggest. Or possibly they congregate less, I really don't know, I just don't see why it should be exactly the same once you take into account Veterans' Associations and so on.
Or we might use knowledge about you - if you live with a one-legged person then the answer might be "very slightly above 0".
Which of the three answers is "correct" depends precisely on the context which you are forbidding us from talking about. So we can't talk about which is correct.
Given that you don't know what each pill does, do you take the red pill or the blue pill?
If there really is not enough information to decide, there is not enough information to decide. Any guess is as good as any other.
Maybe, in some cases, it is possible to divine extra information to base the decision on. I haven't studied your example in detail, but it seems like the Hungarian algorithm might have higher memory requirements. This might be a reason to go with the maximum flow algorithm.
You don't. I think you illustrated that clearly enough. I think the proper practical solution is to spawn off both implementations in different threads, and then take the response that comes back first. If you're more clever, you can heuristically route requests to implementations.
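A minimal sketch of that race using concurrent.futures; hungarian and max_flow stand in for the two hypothetical implementations, and note that with threads the losing computation keeps running in the background until it finishes (for CPU-bound work in Python you would race processes instead).

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def race(problem, hungarian, max_flow):
    pool = ThreadPoolExecutor(max_workers=2)
    futures = {pool.submit(hungarian, problem),
               pool.submit(max_flow, problem)}
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    result = next(iter(done)).result()
    pool.shutdown(wait=False, cancel_futures=True)  # don't wait for the loser
    return result
```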
Many algorithms require huge amounts of memory beyond the physical maximum of a machine, and in these cases the algorithm that is less efficient in time but more efficient in space is chosen.
Given that we have distributed parallel computing, I say you just let both horses run and let the results speak for themselves.
This is a valid question, but there's no "right" answer — they are incomparable, so there's no notion of "better".
If your interest is practical, then you need to analyze the kinds of inputs that are likely to arise in practice, as well as the practical running times (constants included) of the two algorithms.
If your interest is theoretical, where worst-case analysis is often the norm, then, in terms of the input size, the O(V³) algorithm is better: you know that V ≤ n², but you cannot polynomially bound n in terms of V, as you showed yourself. Of course the theoretically best algorithm is a hybrid that runs both and stops when whichever of them finishes first; its running time would be O(min(V³, n³)).
Theoretically, they are both the same, because you actually compare how the number of operations grows when the size of the problem is increased to infinity.
The way your problem is defined, it has two sizes, n and the number of points, so this question has no answer.