I'm trying to solve a 3-SAT problem, where I'm given a number of clauses with 3 literals each. They are in CNF.
I'm having trouble constructing pseudocode for this. I've figured out that it takes a Boolean formula as input, and that I should solve it using brute force. I also know that its time complexity is O(2^n), since each variable has 2 possible states.
What I have trouble understanding is how we access each variable, since we only get a Boolean formula. How do I pick out each variable in every clause to loop over them, how would I do the check, and how would I leave some variables unassigned or unassign them?
It doesn't have to be efficient, as I'm just trying to prove a point about the brute force time complexity.
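For what it's worth, here is a minimal brute-force sketch in Python, assuming (hypothetically) that each clause is encoded as a tuple of signed integers: positive means the variable appears plain, negative means negated. With this approach you never leave variables unassigned; you simply enumerate all 2^n complete assignments and test every clause against each one:

    from itertools import product

    def brute_force_3sat(clauses, num_vars):
        # clauses: list of 3-tuples of signed ints, e.g. (1, -2, 3) means
        # (x1 OR NOT x2 OR x3); variables are numbered 1..num_vars.
        # Returns a satisfying assignment as a dict, or None if none exists.
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: bits[i] for i in range(num_vars)}
            # A clause is satisfied if at least one of its literals is True.
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return assignment
        return None

    # Example: (x1 OR x2 OR NOT x3) AND (NOT x1 OR x2 OR x3)
    print(brute_force_3sat([(1, 2, -3), (-1, 2, 3)], 3))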
Related
I have a linear function (n inputs -> n outputs), and using the special structure of the function (some DP-like algorithm), I can evaluate the output in O(n) time rather than O(n^2) time. Now, given some output values, I need to find the input that evaluates to that output.
I could spell out the matrix components (by evaluating the linear function on the n basis inputs) and use an algorithm like LU decomposition, but that would take O(n^3) time. Is there a faster algorithm that exploits the structure of the linear function?
(Since the linear function is not symmetric, the Conjugate Gradient method cannot be used.)
I need exact solutions. n is small (n = 10~20), but I need to do this kind of calculation hundreds of thousands of times per second.
From a code-design point of view, it would be better if the algorithm did not require the transpose of the linear function. (Although, at the cost of more code and more debugging, it is possible to provide a transpose function with O(n) time complexity.)
Have you considered GMRES? You mentioned that you're looking for exact solutions; however, you can get within machine-precision error reasonably quickly.
I can evaluate the output in O(n) time, rather than O(n^2) time.
You can use a linear operator to take advantage of this: for example, with the GMRES implementation in scipy, A can be a LinearOperator. A linear operator is just a function that evaluates Ax, which is your "evaluate the output" step.
Otherwise, short of an ad hoc solution, I'm not familiar with any exact methods that can be accelerated with linear operators, so I'd need to know more about your problem, e.g. is your matrix banded?
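To make that concrete, here is a minimal sketch with scipy; apply_A below is just a stand-in for your real O(n) "evaluate the output" routine:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    n = 16

    def apply_A(x):
        # Placeholder for the O(n) evaluation routine; here a simple
        # lower-bidiagonal matvec so the example runs end to end.
        y = 2.0 * x
        y[1:] += x[:-1]
        return y

    A = LinearOperator((n, n), matvec=apply_A)  # the matrix is never formed
    b = np.ones(n)
    x, info = gmres(A, b)                       # info == 0 means it converged
    print(info, np.allclose(apply_A(x), b))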
Given n checks, each of arbitrary (integer) monetary value, decide if the checks can be partitioned into two parts that have the same monetary value.
I'm beyond mind blown on how to solve this. Is there an algorithm to solve this in polynomial time or is this NP-Complete?
Yes, it's an NP-complete problem. It's a variation of the subset-sum problem.
Actually, you can solve this in O(n * sum/2) time using dynamic programming; this doesn't contradict NP-completeness, because the running time is pseudo-polynomial (it depends on the numeric value of the sum, not just the input length). First sum up all the checks into a variable sum, then run a take-it-or-leave-it DP over the check values and check at the end whether sum/2 is reachable.
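A short Python sketch of that DP, assuming the check values are non-negative integers:

    def can_partition(values):
        # reachable[s] is True if some subset of the checks seen so far sums to s.
        total = sum(values)
        if total % 2 != 0:
            return False
        target = total // 2
        reachable = [False] * (target + 1)
        reachable[0] = True
        for v in values:
            # Iterate downwards so each check is used at most once.
            for s in range(target, v - 1, -1):
                if reachable[s - v]:
                    reachable[s] = True
        return reachable[target]

    print(can_partition([3, 1, 1, 2, 2, 1]))  # True: {3, 2} vs {1, 1, 2, 1}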
Would the task of deciding whether a given scrambled word is a real English word be a problem equivalent to the traveling salesman problem? A well-known strategy is to generate all permutations of the given word and compare each of them against all the words in the English dictionary. This algorithm would have a time complexity of O(N!). I could imagine that the two differ in an important aspect: once you find a permutation that matches a word, you can stop generating permutations, whereas with the TSP you have to try out every combination of routes regardless.

However, I wrote an algorithm which, instead of generating all permutations of a given word of length n, sorts the letters of the given word, performs the same sort on each word of the dictionary, and then compares the two sorted strings (this method works 100% of the time). My algorithm uses the default Java sort, which I found runs in O(n log n). In total, my program runs in O(n log n), because this term grows the largest as n approaches infinity, so it runs well within polynomial time.
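A rough Python sketch of this sort-and-compare idea (my actual implementation is in Java; the word set below is just a toy stand-in for a dictionary):

    def is_real_word(scrambled, dictionary):
        # Two strings are anagrams exactly when their sorted letters are equal,
        # so sort the scrambled word once and compare against each entry.
        key = sorted(scrambled.lower())
        return any(sorted(word.lower()) == key for word in dictionary)

    print(is_real_word("lpepa", {"apple", "pear", "plum"}))  # True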
So, if the problems are equivalent, couldn't you use a similar method to solve the TSP problem? How would this relate to P vs. NP?
Sorry if any of this didn't make sense or I wasn't using the terminology correctly; I'm not that experienced in this field.
The fact that there exist algorithms of the same complexity for solving two problems doesn't necessarily mean that the problems have the same complexity, because there could exist more efficient algorithms for one of the problems but not for the other.
The proper way of relating the complexities of different problems is reduction: If you can show that any instance of problem A can be transformed into an instance of problem B in such a way that the answer to the transformed instance is the same as the answer to the original instance, then problem B is at least as complex as problem A (because the algorithm that solves B also can solve A). If you can show a reduction in the opposite direction too, A and B are equally complex.
In your case, there is no currently known way to transform an arbitrary TSP instance into an equivalent unscrambling instance in polynomial time, so it is (to the best of our knowledge) not the case that the problems have the same complexity.
Imagine having any two functions. You need to find the intersections of those functions. You definitely don't want to try all x values to check whether f(x) == g(x).
Normally in math, you would set up simultaneous equations derived from f(x) == g(x) and solve them. But I see no way to implement solving equations in any programming language.
So, once more, what I am looking for:
A conceptual algorithm for solving equations.
The same for simultaneous and quadratic equations.
I believe there should be some workaround using the functions' derivatives, but I've only recently learned about derivatives at school and I have no idea how to use them in this case.
That is a much harder problem than you would imagine. A good place to start for learning about these things is the Newton-Raphson method, which gives numerical approximations to solutions of equations of the form h(x) = 0. (When you set h(x) = g(x) - f(x), this provides solutions for the problem you are asking about.)
Exact, algebraic solving of equations (as implemented in Mathematica, for example) is even more difficult: you basically have to recreate everything you would do in your head when solving an equation on a piece of paper.
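A minimal Python sketch of that Newton-Raphson iteration, assuming you can supply derivatives for both functions:

    import math

    def newton_intersection(f, g, df, dg, x0, tol=1e-12, max_iter=100):
        # Newton-Raphson on h(x) = g(x) - f(x): a root of h is an intersection.
        x = x0
        for _ in range(max_iter):
            h = g(x) - f(x)
            dh = dg(x) - df(x)
            if dh == 0:
                break                   # flat tangent: stop rather than divide by zero
            x_next = x - h / dh
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    # Example: where does f(x) = x^2 meet g(x) = cos(x)?
    print(newton_intersection(lambda x: x * x, math.cos,
                              lambda x: 2 * x, lambda x: -math.sin(x), x0=1.0))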
Obviously this problem is not solvable in the general case, because you can construct a "function" which is arbitrarily complex. For example, if you have a "function" with 5 trillion terms in it, including various transcendental and complex transformations, the computer could take years just to compute a single value, much less intersect it with another similar function.
So, first of all, you need to define what you mean by a "function". If you mean a polynomial of degree at most 4, then the problem becomes much more straightforward. In such cases you combine the terms of the two polynomials into one equation and find its roots, which are the intersections.
If the polynomial has degree 5 or higher (a quintic or greater), then there is no general symbolic solution. In this case the terms are combined and you find the roots by iterative approximation. See Root Finding Algorithms.
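As a concrete illustration of the polynomial case, a short numpy sketch (numpy's numerical root finder handles any degree):

    import numpy as np

    # Intersections of f(x) = x^3 - 2x and g(x) = x + 1: combine them into
    # h(x) = f(x) - g(x) = x^3 - 3x - 1 and find the roots of h.
    f = np.array([1.0, 0.0, -2.0, 0.0])  # coefficients of x^3 + 0x^2 - 2x + 0
    g = np.array([1.0, 1.0])             # coefficients of x + 1
    h = np.polysub(f, g)
    print(np.roots(h))                   # x-coordinates of the intersections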
If the function involves transcendentals such as sin/cos/log/e^x, etc., you can potentially find the intersection by representing the functions as a series or a continued fraction. You then subtract one series from the other and set the result to zero. Solving the resulting equation yields an approximation of the root(s).
Without resorting to asymptotic notation, is tedious step counting the only way to get the time complexity of an algorithm? And without a step count for each line of code, can we arrive at a big-O representation of any program?
Details: I'm trying to find out the complexity of several numerical-analysis algorithms to decide which will be best suited for solving a particular problem.
E.g., from among the Regula-Falsi and Newton-Raphson methods for solving equations, the intention is to evaluate the exact complexity of each method and then decide (plugging in the value of 'n' or whatever arguments there are) which method is less complex.
The only way --- not the "easy" or hard way but the only reasonable way --- to find the exact complexity of a complicated algorithm is to profile it. A modern implementation of an algorithm has a complex interaction with numerical libraries and with the CPU and its floating point unit. For instance in-cache memory access is much faster than out-of-cache memory access, and on top of that there may be more than one level of cache. Counting steps is really much more suitable to the asymptotic complexity that you say isn't enough for your purpose.
But, if you did want to count steps automatically, there are also ways to do that. You can add a counter increment command (like "bloof++;" in C) to every line of code, and then display the value at the end.
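For example, a toy Python version of that idea, counting comparisons in a bubble sort:

    def bubble_sort_with_counter(a):
        # Bubble sort instrumented with a step counter: the Python analogue of
        # sprinkling a "bloof++;" into the inner loop in C.
        steps = 0
        a = list(a)
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                steps += 1              # count one comparison
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a, steps

    print(bubble_sort_with_counter([5, 3, 4, 1, 2]))  # n = 5 -> 10 comparisons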
You should also know about the more refined time complexity expression, f(n)*(1+o(1)), that is also useful for analytical calculations. For instance n^2+2*n+7 simplifies to n^2*(1+o(1)). If the constant factor is what bothers you about usual asymptotic notation O(f(n)), this refinement is a way to keep track of it and still throw out negligible terms.
The 'easy way' is to simulate it. Try your algorithms with lots of values of n and lots of different data, plot the results, then match the curve on the graph to an equation.
Your results may not be strictly correct, and they're only as valid as your ability to generate good test data, but for most cases this will work.
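A quick Python sketch of that measure-and-fit idea (printing timings rather than plotting, so you can fit a curve to the numbers however you like):

    import random
    import time

    def timed_run(algorithm, data):
        start = time.perf_counter()
        algorithm(data)
        return time.perf_counter() - start

    def measure(algorithm, sizes, trials=5):
        # Crude empirical scaling check: time the algorithm on random inputs of
        # increasing size and report the best of a few trials for each size.
        for n in sizes:
            best = min(timed_run(algorithm, [random.random() for _ in range(n)])
                       for _ in range(trials))
            print(f"n={n:>7}  t={best:.6f}s")

    # Example: sorted() should show roughly n log n growth as n doubles.
    measure(sorted, [10_000, 20_000, 40_000, 80_000])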
E.g., from among the Regula-Falsi and Newton-Raphson methods for solving equations, the intention is to evaluate the exact complexity of each method and then decide (plugging in the value of 'n' or whatever arguments there are) which method is less complex.
I don't think it's possible to answer this question in general for nonlinear solvers. You could count the exact number of computations per iteration, but you're never going to know in general how many iterations it will take for each solver to converge. There are other complications, like needing the Jacobian for Newton's method, which could make computing the complexity even more difficult.
To sum up, the most efficient nonlinear solver is always dependent on the problem you're solving. If the variety of problems you're solving is very limited, doing a bunch of experiments with different solvers and measuring the number of iterations and CPU time will probably give you more useful information.