2-Satisfiability and Strongly connected components - algorithm

I know that by computing the strongly connected components of the implication digraph we can decide in polynomial time whether a 2-SAT Boolean formula is satisfiable.
Let's assume the problem is satisfiable.
The question: is there a general algorithm to compute that solution, relying on the 2-SAT decision procedure?

Assuming I understand your question correctly, yes, there is a general algorithm to find a solution (i.e. a satisfying assignment) by using the algorithm for the satisfiability problem.
Suppose I assign a variable xi the value "true" (so that the literal xi is true and ~xi is false) to make a new 2-CNF from the original. If I were to run the satisfiability check on it (i.e. use the SCC construction to decide whether the new CNF is satisfiable):
What does it tell you about the satisfying assignments in the original CNF if the resulting CNF is now not satisfiable?
What does it tell you about the satisfying assignments in the original CNF if the resulting CNF is still satisfiable?
The idea behind the algorithm is to make the variable "false" if you hit case 1, and "true" if you hit case 2, and to loop through every variable this way (keeping the assignments you made to the variables you've already looked at). Once you can answer the questions I've posed, you'll understand the concept behind it.
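To make that loop concrete, here is a minimal Python sketch of the self-reduction. The helper is_satisfiable stands in for the SCC-based decision procedure and is assumed rather than implemented here; clauses are encoded as pairs of nonzero integers, +i for xi and -i for ~xi, so the clause (i, i) forces xi to be true.

    def find_assignment(clauses, n, is_satisfiable):
        """Given a satisfiable 2-CNF over variables 1..n, build a satisfying
        assignment by repeatedly calling the decision procedure."""
        assignment = {}
        for i in range(1, n + 1):
            trial = clauses + [(i, i)]        # tentatively force x_i = true
            if is_satisfiable(trial, n):      # case 2: still satisfiable
                assignment[i] = True
                clauses = trial               # keep the assignment we made
            else:                             # case 1: x_i must be false
                assignment[i] = False
                clauses = clauses + [(-i, -i)]
        return assignment

Each iteration adds one forcing clause, so the decision procedure is invoked n times in total.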

Subset sum decision problem -- how to verify "false" case in polynomial time?

I'm having a bit of trouble understanding how one would verify if there is no solution to a given instance of the subset sum problem in polynomial time.
Of course you could easily verify the positive case: simply provide the list of integers which add up to the target sum and check that they are all in the original set (O(N)).
How do you verify that the answer "false" is the correct one in polynomial time?
It’s actually not known how to do this - and indeed it’s conjectured that it’s not possible to do so!
The class NP consists of problems where “yes” instances can be verified in polynomial time. The subset sum problem is a canonical example of a problem in NP. Importantly, notice that the definition of NP says nothing about what happens if the answer is “no” - maybe it’ll be easy to show this, or maybe there isn’t an efficient algorithm for doing so.
A counterpart to NP is the class co-NP, which consists of problems where “no” instances can be verified in polynomial time. A canonical example of such a problem is the tautology problem - given a propositional logic formula, is it always true regardless of what values the variables are given? If the formula isn’t a tautology, it’s easy to verify this by having someone tell you how to assign the values to the variables such that the formula is false. But if the formula is always true, it’s unclear how you’d show this efficiently.
Just as the P = NP problem hasn’t been solved, the NP = co-NP problem is also open. We don’t know whether problems where “yes” answers have fast verification are the same as problems where “no” answers have fast verification.
What we do know is that if any NP-complete problem is in co-NP, then NP = co-NP. Since subset sum is NP-complete, a polynomial-time way of verifying "no" answers to subset sum instances would imply NP = co-NP, and no such algorithm is known.
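For contrast, here is a minimal Python sketch of the easy direction, the linear-time "yes" verifier described in the question (the function name is illustrative):

    from collections import Counter

    def verify_yes(original, certificate, target):
        """Check that certificate sums to target and is a sub-multiset of
        original; runs in time linear in the input size."""
        if sum(certificate) != target:
            return False
        available = Counter(original)
        return all(available[x] >= c for x, c in Counter(certificate).items())

    print(verify_yes([3, 1, 4, 1, 5], [4, 1], 5))  # True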

Linear 3SAT: a version of 3SAT in linear time

Consider a 3SAT instance with the following special locality property. Suppose there are n variables in the Boolean formula, and that they are numbered 1, 2, 3, ..., n in such a way that each clause involves variables whose numbers are within ±10 of each other. Give a linear-time algorithm for solving such an instance of 3SAT.
I could not solve the problem, but my intuition is that the problem might be solvable if we could map it onto a graph; I could not get much further than that.
This is a relatively straightforward dynamic programming problem. I'll describe a solution, ignoring the fairly straightforward indexing issues around either boundary.
After the m'th step we have the set of possible value combinations for the variables (m-10, m-9, ..., m+10) which could be part of a solution so far, each linked to a set of values for all previous variables that satisfies clauses 1..m.
For the (m+1)'th step we take each member of this possible solution set, drop the (m-10)'th value, and consider each possibility for the (m+11)'th value. If the (m+1)'th clause is true under it, we add this to the next solution set, pointing to our history, but only if that value pattern has not already been added.
This leaves us ready for the (m+2)'nd step.
There are n steps required, each of which considers at most 2^21, i.e. about 2 million, possible cases, so this is linear in n.
(Fun challenge. Modify this algorithm to not just find a solution, but to count how many solutions there are.)
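Here is a minimal Python sketch of this dynamic program, under the assumption that "within ±10 of each other" means any two variable numbers in a clause differ by at most 10, so a clause spans a window of at most W = 11 consecutive variables (the function and parameter names are mine):

    def windowed_3sat(n, clauses, W=11):
        """Decide a 3SAT instance over variables 1..n in which the indices
        in any one clause differ by at most W-1. Clauses are tuples of
        nonzero ints: +v for x_v, -v for ~x_v. Linear in n for fixed W."""
        # Group clauses by largest variable index: a clause is checked as
        # soon as its last variable receives a value.
        by_max = [[] for _ in range(n + 1)]
        for cl in clauses:
            by_max[max(abs(l) for l in cl)].append(cl)

        states = {0}  # window bitmasks: at step i, bit k = value of x_(i-k)
        for i in range(1, n + 1):
            nxt = set()
            for mask in states:
                for val in (0, 1):
                    m = ((mask << 1) | val) & ((1 << W) - 1)
                    if all(any(((m >> (i - abs(l))) & 1) == (l > 0)
                               for l in cl)
                           for cl in by_max[i]):
                        nxt.add(m)
            if not nxt:
                return False
            states = nxt
        return True

    # (x1 OR ~x2 OR x3) AND (~x1 OR x2): indices differ by at most 2.
    print(windowed_3sat(3, [(1, -2, 3), (-1, 2)]))  # True

For the counting challenge, replace the set of window masks with a dictionary mapping each mask to the number of histories reaching it, and sum the counts after the last step.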
I think you can just brute force it in polynomial time. Divide the clause list into two pieces. Exhaustively search over the variables that appear on both sides of the split. There are at most 30 of them, so that's 2^30 = O(1) settings to try. Once those variables are set, you can recursively solve both sides; each one is an independent SAT instance with n/2 variables.
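A rough Python sketch of this divide-and-conquer recursion (helper names are mine; for the 2^30 bound to apply, the clause list must be ordered by variable index so that only the boundary variables are shared between the two halves):

    from itertools import product

    def simplify(clauses, fixed):
        """Apply a partial assignment; None means some clause was falsified."""
        out = []
        for cl in clauses:
            if any(fixed.get(abs(l)) == (l > 0) for l in cl):
                continue                      # clause already satisfied
            rest = [l for l in cl if abs(l) not in fixed]
            if not rest:
                return None                   # clause falsified
            out.append(rest)
        return out

    def solve(clauses):
        """Return a satisfying assignment (dict) or None."""
        if not clauses:
            return {}
        if len(clauses) == 1:
            l = clauses[0][0]
            return {abs(l): l > 0}            # satisfy the lone clause
        mid = len(clauses) // 2
        left, right = clauses[:mid], clauses[mid:]
        vars_of = lambda cs: {abs(l) for cl in cs for l in cl}
        shared = sorted(vars_of(left) & vars_of(right))
        for values in product((False, True), repeat=len(shared)):
            fixed = dict(zip(shared, values))
            l2, r2 = simplify(left, fixed), simplify(right, fixed)
            if l2 is None or r2 is None:
                continue
            a, b = solve(l2), solve(r2)
            if a is not None and b is not None:
                return {**fixed, **a, **b}    # halves are now independent
        return None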

SAT/CNF optimization

Problem
I'm looking at a special subset of the SAT optimization problem. For those not familiar with SAT and related topics, here's the related Wikipedia article.
TRUE=(a OR b OR c OR d) AND (a OR f) AND ...
There are no NOTs and it's in conjunctive normal form, so satisfiability itself is trivial (just set every variable to true). However, I'm trying to minimize the number of variables assigned true while still making the whole statement true, and I couldn't find a way to solve that problem.
Possible solutions
I came up with the following ways to solve it:
Convert to a directed graph and search for a minimum spanning tree that spans only a subset of the vertices. There's Edmonds' algorithm, but that gives an MST for the complete graph instead of a subset of the vertices.
Maybe there's a version of Edmonds' algorithm that solves the problem for a subset of the vertices?
Maybe there's a way to construct a graph out of the original problem that's solvable with other algorithms?
Use a SAT solver, an ILP solver or exhaustive search. I'm not interested in those solutions as I'm trying to use this problem as lecture material.
Question
Do you have any ideas/comments? Can you come up with other approaches that might work?
This problem is NP-hard as well.
One can show an easy reduction from Hitting Set:
Hitting Set problem: Given sets S1, S2, ..., Sn and a number k, choose a set S of size k such that for every Si there is an element s in S with s in Si. [Alternative definition: the intersection between each Si and S is not empty.]
Reduction:
For an instance (S1, ..., Sn, k) of Hitting Set, construct the following instance of your problem: (S'1 AND S'2 AND ... AND S'n, k), where S'i is the OR of all elements in Si. The elements of the sets are the variables of the formula.
Proof:
Hitting Set -> this problem: If there is a hitting set S, then by assigning all of S's elements true, the formula is satisfied with k true variables, since for every S'i there is some variable v which is in both S and Si, and thus also in S'i.
This problem -> Hitting Set: build S from all elements whose assignment is true [same idea as the other direction].
Since the optimization version is what you are after, it is also NP-hard, and if you are looking for an exact solution you should expect to need an exponential-time algorithm.
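A tiny Python sketch of this construction, with illustrative names:

    def hitting_set_to_cnf(sets):
        """Each set becomes one clause (no negations); each element a variable."""
        return [sorted(s) for s in sets]

    def assignment_from_hitting_set(hitting_set):
        return {element: True for element in hitting_set}

    # S1 = {a, b}, S2 = {b, c}  ->  (a OR b) AND (b OR c); the hitting set
    # {b} of size 1 yields the assignment with the fewest true variables.
    print(hitting_set_to_cnf([{"a", "b"}, {"b", "c"}]))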

Solution to Recursive Relations with Arrays

I'm from Mexico. I hardly ever ask questions or open new threads, because between this forum and the rest of the web you can usually find plenty of information about any topic x or y; this time, however, I feel truly defeated.
I have two exercises on recursion.
Define the following recursive algorithms.
a. Calculate the next n integers.
What I first failed to clear up with the teacher is whether the algorithm should return a sum or a set of numbers. Furthermore, although I can design an algorithm for it in principle, the exercise asks me to solve it by expressing it as a recurrence relation, and this is where I am more than lost: I don't know how to express this as a recurrence relation, nor how such a relation can then be solved.
b. Calculate the minimum of a set of integers
The other case asks for the minimum of a set of integers. The algorithm itself I have solved, but turning it into a recurrence relation has left me completely stuck.
I'd appreciate any help, thanks.
Answering part b):
You have a set of integers. You pick one element, and you know that the minimal element is either the one you've picked or the minimum of what remains in the set. You call the function recursively until you have picked every element from the set, and you take the minimum of a set containing no elements to be infinity. The recursion then unwinds, updating the minimal value as it goes.
minimum(S) = min(x, minimum(S without x)), for any element x of S
minimum(empty set) = infinity
This is not an implementation in any particular language, since that would surely depend on the representation of the set.
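Still, for concreteness, here is a minimal sketch in Python, assuming the set is represented as a list:

    import math

    def minimum(s):
        if not s:                    # minimum of the empty set is infinity
            return math.inf
        return min(s[0], minimum(s[1:]))

    print(minimum([7, 3, 9]))        # prints 3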
P.S. Why do this recursively?

Algorithm that can turn a linear programming problem into a feasible one

I need an algorithm that automatically makes a linear programming problem feasible. Concretely, the algorithm takes as input a linear programming problem which potentially has no feasible solutions, and outputs a similar problem (with its parameters modified as little as possible) which is guaranteed to have feasible solutions. I am a newbie in algorithms and wonder whether there is existing research/work on such problems? Any suggestions and comments are appreciated.
Thanks,
Richard
You could just add slack variables to the constraints, then minimize the sum of their squared values.
Add a set of "artificial variables", one per equation, with unit weight in that equation and zero weight everywhere else. Then, you can choose that set as your first basis, and add "eliminate the artificial variables" as an initial goal. If you can eliminate all the artificial variables, you can discard them, and you will have a feasible basis for your initial problem; if you cannot eliminate the artificial variables, there is no feasible solution.
Original problem (in canonical form -- any LP problem can be converted to this!):
minimize c.x, given: [A]x = b, x_i >= 0
(but first, we need a feasible solution)
To find a feasible solution (assuming all b_j >= 0; if not, just multiply that row by -1):
minimize sum(y), given: y + [A]x = b, x_i >= 0, y_j >= 0
with the initial, feasible solution: x_i = 0, y_j = b_j
There are variations and optimizations on this kind of scheme; for instance, you don't necessarily need to convert everything to canonical form to do this kind of thing (though it is useful for simplicity of explanation). You should be able to find more details in any linear programming text.
Note that this is similar to the other answer of "slack variables", except that there is no point in squaring anything (which would make the problem nonlinear, and thus more difficult to solve within a linear programming framework...)
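As a small illustration of the phase-one construction above, here is a sketch using scipy.optimize.linprog (using SciPy here is my assumption, not something the answer requires):

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 2.0],
                  [3.0, 1.0]])
    b = np.array([4.0, 5.0])   # assume b >= 0; otherwise flip that row's sign
    m, n = A.shape

    # Variables are [x (n entries), y (m artificial)]; minimize sum(y).
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_eq = np.hstack([A, np.eye(m)])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + m))

    if res.status == 0 and res.fun < 1e-9:
        print("feasible point for the original problem:", res.x[:n])
    else:
        print("the original problem is infeasible")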
