I've reduced my problem (table layout algorithm) to the following problem:
Imagine I have N variables X1, X2, ..., XN. I also have some (undetermined) number of inequalities, like:
X1 >= 2
X2 + X3 >= 13
etc.
Each inequality is a sum of one or more variables, and it is always compared to a constant with the >= operator. I cannot say in advance how many inequalities I will have each time, but all the variables have to be non-negative, so that is already one inequality for each variable.
How can I solve this system so that the values of the variables are as small as possible?
Added: I read the Wikipedia article and realized that I forgot to mention that the variables have to be integers. I guess this makes it NP-hard, huh?
Minimizing x1 + x2 + ... where the xi satisfy linear constraints is called Linear Programming. It's covered in some detail on Wikipedia.
What you have there is a pretty basic Linear Programming problem. You want to minimize the objective X_1 + ... + X_n subject to
X_1 >= 2
X_2 + X_3 >= 13
etc.
There are numerous algorithms to solve this type of problem. The most well known is the Simplex algorithm, which will solve your problem (with certain caveats) quite efficiently in the average case, although there exist LP problems for which the Simplex algorithm requires exponentially many steps to solve (in the problem size).
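For instance, here is a minimal sketch of the two example constraints using SciPy's linprog (SciPy is just one of many solvers, and the integrality option assumes SciPy 1.9 or newer, which you need since your variables must be integers):

import numpy as np
from scipy.optimize import linprog

c = np.ones(3)                          # objective: minimize X1 + X2 + X3
A_ub = -np.array([[1, 0, 0],            # linprog expects A_ub @ x <= b_ub, so the
                  [0, 1, 1]])           # ">=" constraints are negated
b_ub = -np.array([2, 13])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3,   # non-negative variables
              integrality=np.ones(3),   # require integer values
              method="highs")
print(res.x, res.fun)                   # e.g. [ 2. 13.  0.] 15.0

Any optimal split of the 13 between X2 and X3 is equally good here; the solver just returns one of them.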
Various implementations of LP solvers exist. For example, LP_Solve should satisfy most of your requirements.
You may also post your linear model directly to the NEOS platform (http://neos.mcs.anl.gov/neos/solvers/index.html). All you have to do first is write your model in an algebraic modeling language such as AMPL. NEOS will then solve the model and return the results by e-mail.
I have a set of N (N is very large) linear equations with W variables.
For efficiency's sake, I need to find the smallest set of linear equations that is solvable (has a unique solution). It can be assumed that a set of X equations containing Y variables has a unique solution when X == Y.
For example, if I have the following as input:
2a = b - c
a = 0.5b
b = 2 + a
I want to return the equation set:
a = 0.5b
b = 2 + a
Currently, I have an implementation that uses some heuristics. I create a matrix where the columns are variables and the rows are equations. I search the matrix to find a set of fully connected equations, and then try removing equations one by one to see whether the remaining set is still solvable; if it is, I continue, and if not, I return that set of equations.
Is there a known algorithm for this, and am I trying to reinvent the wheel?
Does anyone have input on how to better approach this?
Thanks.
Short answer is "yes", there are known algorithms. For example, you could add a single equation and then compute the rank of the matrix. Then add the next equation and compute the rank. If it hasn't gone up, that new equation isn't helping any and you can get rid of it. Once the rank == the number of variables you have a unique solution and you're done. There are libraries (e.g. Colt, JAMA, la4j, etc.) that will do this for you.
Longer answer is that this is surprisingly difficult to do correctly, especially if your matrix gets big. You end up with lots of numerical stability issues and so on. I'm not a numerical linear algebra expert but I know enough to know there are dragons here if you're not careful. Having said that, if your matrices are small and "well conditioned" (the rows/columns aren't almost parallel) then you should be in good shape. It depends on your application.
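Here is a rough sketch of that incremental-rank idea with NumPy (numpy.linalg.matrix_rank); the function name and the tiny example are only illustrative:

import numpy as np

def minimal_solvable_subset(rows):
    # rows: one list of coefficients per equation; keep an equation only if it raises the rank
    kept, kept_idx, rank = [], [], 0
    num_vars = len(rows[0])
    for i, row in enumerate(rows):
        candidate = np.array(kept + [row], dtype=float)
        new_rank = np.linalg.matrix_rank(candidate)
        if new_rank > rank:              # this equation adds information
            kept.append(row)
            kept_idx.append(i)
            rank = new_rank
        if rank == num_vars:             # unique solution reached, stop early
            break
    return kept_idx

# coefficient rows for (a, b): a - 0.5b, -a + b, -2a + 2b (the last is a multiple of the second)
print(minimal_solvable_subset([[1, -0.5], [-1, 1], [-2, 2]]))   # -> [0, 1]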
I am struggling to understand the basics as they relate to forming a closed-form expression from a summation. I understand the goal at hand, but do not understand the process to follow in order to accomplish it.
Find a closed form for the sum k + 2k + 3k + ... + k^2. Prove your claim.
My first approach was to turn it into a recurrence relation, which did not work cleanly. After that I attempted to turn the recurrence relation into a closed form, but I was unsuccessful in getting there.
Does anyone know of a good approach for solving such problems? Or any simple tutorials that could be provided? The material I find online does not help, and causes further confusion.
Thanks
No one gave the mathematical approach, so I am adding it for this AP (arithmetic progression) problem.
The given series is 1k + 2k + 3k + ... + k*k (i.e., k^2).
So there are k terms altogether in the given series.
Next, each term here is greater than the previous one by a constant common difference, namely k.
So this is an Arithmetic Progression.
Now, to calculate the general summation, the formula is:
S(n) = (n/2) * (a(1) + a(n))
where S(n) is the sum of the series up to n terms,
n is the number of terms in the series,
a(1) is the first term of the series, and
a(n) is the last (n-th) term of the series.
Here, plugging the terms of the given series into the summation formula, we get:
S(k) = (k/2) * (1k + k*k) = (k/2) * (k + k^2) = k^2/2 + k^3/2.
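A quick numeric sanity check of that closed form in Python (purely illustrative):

for k in range(1, 10):
    direct = sum(i * k for i in range(1, k + 1))   # k + 2k + ... + k*k
    assert direct == (k**3 + k**2) // 2            # the closed form above
print("closed form checks out for k = 1..9")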
If you are interested in a general algorithm to compute sums like these (and more complicated ones) I can't recommend the book A=B enough.
The authors have been so kind as to make the PDF freely available:
http://www.math.upenn.edu/~wilf/AeqB.html
Enjoy!
Asad has explained a mathematical approach to solving this in the comments.
If you are interested in a programming approach that works for more complicated expressions, then you can use Sympy in Python.
For example:
import sympy
x, k = sympy.symbols('x k')
print(sympy.summation(x*k, (x, 1, k)))
prints:
k*(k/2 + k**2/2)
[Related to https://codegolf.stackexchange.com/questions/12664/implement-superoptimizer-for-addition from Sep 27, 2013]
I am interested in how to write superoptimizers, in particular to find small logical formulae for sums of bits. This was previously set as a challenge on codegolf, but it seems a lot harder than one might imagine.
I would like to write code that finds the smallest possible propositional logical formula to check if the sum of y binary 0/1 variables equals some value x. Let us call the variables x1, x2, x3, x4 etc. In the simplest approach the logical formula should be equivalent to the sum. That is, the logical formula should be true if and only if the sum equals x.
Here is a naive way to do that. Say y = 15 and x = 5. Pick all 3003 different ways of choosing 5 of the 15 variables, and for each make a new clause that ANDs those variables together with the negations of the remaining variables. You end up with 3003 clauses, each of length exactly 15, for a total cost of 45045.
However, if you are allowed to introduce new variables into your solution then you can potentially reduce this a lot by eliminating common subformulae. So in this case your logical formula consists of the y binary variables, x and some new variables. The whole formula would be satisfiable if and only if the sum of the y variables equals x. The only allowed operators are and, or and not.
It turns out there is a clever method for solving this problem when x = 1, at least in theory. However, I am looking for a computationally intensive method to search for small solutions.
How can you make a superoptimizer for this problem?
Examples. Take as an example two variables where we want a logical formula that is True exactly when they sum to 1. One possible answer is:
(((not y0) and (y1)) or ((y0) and (not y1)))
To introduce a new variable such as z0 to represent (y0 and (not y1)), we can add a new clause ((y0 and (not y1)) or (not z0)) and replace (y0 and (not y1)) by z0 throughout the rest of the formula. Of course this is pointless in this example, as it makes the expression longer.
Write your desired sum in binary. First look at the least significant bit, y0. Clearly,
x1 xor x2 xor ... xor xn = y0; that's your first formula. The final formula will be a conjunction of formulae for each bit of the desired sum.
Now, do you know how an adder is implemented? http://en.wikipedia.org/wiki/Adder_(electronics). Take inspiration from it: group your input into pairs/triples of bits, calculate the carry bits, and use them to make formulae for y1...yk. If you need further hints, let me know.
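As a toy illustration of that construction (names are illustrative, the formulas are built as plain strings, and each input is rippled in one at a time rather than grouped into pairs/triples, just to keep the sketch short):

def add_bit(acc, bit):
    # acc holds one formula per bit of the running sum, least significant first
    carry = bit
    new_acc = []
    for a in acc:
        new_acc.append(f"({a} xor {carry})")   # sum bit at this position
        carry = f"({a} and {carry})"           # carry into the next position
    new_acc.append(carry)                      # (possibly redundant) top carry
    return new_acc

acc = []
for i in range(1, 5):                          # inputs x1..x4
    acc = add_bit(acc, f"x{i}")
print(acc[0])                                  # LSB formula: x1 xor x2 xor x3 xor x4, nested

The result is correct but far from minimal; a real superoptimizer would search over sharings of these subformulae.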
If I understand what you're asking, you'll want to look into the general topics of logic minimization and/or Boolean function simplification. The references are mostly about general methods for eliminating redundancy in Boolean formulas that are disjunctions ("or"s) of terms that are conjunctions ("and"s).
By hand, the standard method is called a Karnaugh map. The equivalent algorithm expressed in a way that's more amenable to computer implementation is Quine-McCluskey (also called the method of prime implicants). The minimization problem is NP-hard, and QM solves it exactly.
Therefore I think QM is what you want for the "super-optimizer" you're trying to build.
But the combination of NP-hardness and exact solution means that QM is impractical for large, non-trivial problems.
The QM Algorithm lays out the conjunctive terms (called minterms in this context) in a table and conducts searches for 1-bit differences between pairs of terms. These terms can be combined and the factor for the differing bit labeled "don't care" in further combinations. This is repeated with 2-bit, 4-bit, etc. subsets of bits. The exponential behavior results because choices are involved for the combinations of larger bit sets: choosing one rules out another. Therefore it is essentially a search problem.
There is an enormous literature on heuristics to trim the search space, yet find "good" solutions that aren't necessarily optimal. A famous one is Espresso. However, since algorithm improvements translate directly to dollars in semiconductor manufacture, it's entirely possible that the best are proprietary and closely held.
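If you want to experiment without implementing QM yourself, SymPy ships a sum-of-products minimizer, SOPform (which, as far as I know, uses a QM-style procedure). For example, "exactly one of three bits is set":

from sympy import symbols
from sympy.logic import SOPform

x1, x2, x3 = symbols('x1 x2 x3')
minterms = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]   # the assignments where the sum is 1
print(SOPform([x1, x2, x3], minterms))
# e.g. (x1 & ~x2 & ~x3) | (~x1 & x2 & ~x3) | (~x1 & ~x2 & x3)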
I notice that almost all new calculators are able to display the roots of quadratic equations in exact form. For example:
x^2-16x+14=0
x1=8+5sqrt2
x2=8-5sqrt2
What algorithm could I use to achieve that? I've been searching around but found no results related to this problem.
Assuming your quadratic equation is in the form
y = ax^2+bx+c
you get the two roots by
x_1, x_2 = (-b +- sqrt(b^2 - 4ac)) / (2a)
where for one root you use the + before the square root, and for the other the -.
If you want to take something out of the square root, just compute the prime factorization of the argument and take out the factors that appear with exponent 2 or more (one copy of the factor for each pair).
By the way, for your example the discriminant is 256 - 56 = 200 = 100*2, so sqrt(200) = 10*sqrt(2) and the two roots you posted, 8 +- 5*sqrt(2), are indeed correct.
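Here is a rough sketch of that square-root simplification for integer coefficients (the function name is illustrative, and reducing the resulting fraction is left out):

def exact_roots(a, b, c):
    d = b * b - 4 * a * c                 # discriminant
    if d < 0:
        return "complex roots"
    if d == 0:
        return (-b / (2 * a),)            # double root
    square, rest = 1, d                   # write d = square^2 * rest with rest square-free
    f = 2
    while f * f <= rest:
        while rest % (f * f) == 0:
            rest //= f * f
            square *= f
        f += 1
    if rest == 1:                         # perfect square discriminant: rational roots
        return ((-b + square) / (2 * a), (-b - square) / (2 * a))
    return f"({-b} +- {square}*sqrt({rest})) / {2 * a}"

print(exact_roots(1, -16, 14))            # (16 +- 10*sqrt(2)) / 2, i.e. 8 +- 5*sqrt(2)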
The “algorithm” is exactly the same as on paper. Depending on the programming language, it may start with int delta = b*b - 4*a*c;.
You may want to define a datatype of terms and simplifications on them, though, in case the coefficients of the equation are not simply integers but themselves solutions of previous equations. If this is the sort of thing you are after, look up "symbolic computation". Some languages are better for this purpose than others. I expect that elementary versions of what you are asking are actually used as examples in some ML tutorials, for instance (see chapter 9).
In the old games era, we used to have look-up tables of pre-computed values of sin, cos, etc., because computing those values was slow on old CPUs.
Is that considered a dynamic programming technique? Or must dynamic programming solve a recursive function that would otherwise be recomputed over and over?
Update:
In dynamic programming the key is to have a memoization table, and that is exactly what the sin/cos look-up table is, so what is really the difference in technique?
I'd say that, from what I see in your question, no, it's not dynamic programming. Dynamic programming is more about solving a problem by solving smaller subproblems and building up the solution of the full problem from them.
Your situation looks more like memoization.
For me it could be considered DP if your problem were to compute cos N and you had a formula to calculate cos i from the array of cos 0, cos 1, ..., cos (i-1); then you would calculate cos 1 and sin 1 and run the calculation for i from 0 to N.
May be somebody will correct me :)
There's also an interesting quote about how dynamic programming differs from the divide-and-conquer paradigm:
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems. If a problem can be solved by combining optimal solutions to non-overlapping subproblems, the strategy is called "divide and conquer" instead. This is why mergesort and quicksort are not classified as dynamic programming problems.
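To make the precomputation/memoization distinction above concrete, here is a small illustration (fib is just a stand-in example, not from the question): the look-up table is filled up front, while memoization caches results of a recursive function as they are first computed:

import math
from functools import lru_cache

# plain precomputation: every entry of the table is filled before it is ever used
SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]

@lru_cache(maxsize=None)
def fib(n):
    # memoization: each subproblem is computed once, then served from the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(SIN_TABLE[30], fib(40))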
Dynamic programming is the technique where you solve a difficult problem by splitting it into smaller problems, which are not independent (this is important!).
Even if you could compute cos i from cos (i-1), this would still not be dynamic programming, just recursion.
The classic dynamic programming example is the knapsack problem: http://en.wikipedia.org/wiki/Knapsack_problem
You want to fill a knapsack of size W, with N objects, each one with its size and value.
Since you don't know which combination of objects will be the best, you "try" all of them.
The recurrence equation will be something like:
OPT(m, w) = max( OPT(m-1, w),                    // if I don't take object m
                 v(m) + OPT(m-1, w - w(m)) )     // if I take it (only allowed when w(m) <= w)
Adding the base case, this is how you solve the problem. Of course you should build the solution starting with m = 0, w = 0 and iterate until m = N and w = W, so that you can reuse previously calculated values.
Using this technique, you can find the optimal combination of objects to put into the knapsack in just N*W steps (which is not polynomial in the input size, of course, otherwise P = NP and nobody wants that!), instead of an exponential number of computation steps.
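A minimal bottom-up sketch of that recurrence (the item values here are only illustrative):

def knapsack(values, weights, W):
    n = len(values)
    # opt[m][w] = best value using the first m objects with capacity w
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for m in range(1, n + 1):
        for w in range(W + 1):
            opt[m][w] = opt[m - 1][w]                    # don't take object m
            if weights[m - 1] <= w:                      # take it, if it fits
                opt[m][w] = max(opt[m][w],
                                values[m - 1] + opt[m - 1][w - weights[m - 1]])
    return opt[n][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))        # 220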
No, I don't think this is dynamic programming. Due to limited computing power, the values of sine and cosine were provided as pre-computed values, just like other numeric constants.
For a problem to be solved with a dynamic programming technique there are several essential conditions. One of the important conditions is that we should be able to break the problem into recursively solvable sub-problems, whose results can then be used as a look-up table to replace higher levels of the recursion. So it is both recursion and memory.
For more info you can refer Wikipedia link.
http://en.wikipedia.org/wiki/Dynamic_programming
Also, lecture 19 of this course will give you an overview of dynamic programming.
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-19-dynamic-programming-i-fibonacci-shortest-paths/