Reduce SAT to HALT

How can we reduce the Boolean SAT problem to the HALTING problem? I tried, but have no idea how to begin. Ultimately, I want to prove that HALT is NP-hard; is there a better method than this reduction to prove it?

Basically, we can construct a Turing machine M that tries all possible assignments:
If a satisfying assignment is found, the machine halts.
Otherwise, it loops forever.
This machine halts if and only if the 3SAT instance is satisfiable. So given an input F (a 3SAT formula) to 3SAT, we pass (M, F) to HALT and return its answer.
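As a rough sketch (in Java, using an invented CNF encoding where the literal v means "variable v is true" and -v means "variable v is false"; all names here are my own), the machine M from this reduction could look like:

```java
public class SatToHalt {
    // Evaluate a CNF formula under an assignment given as a bitmask:
    // bit (v - 1) of `assignment` is the value of variable v.
    static boolean evaluate(int[][] clauses, long assignment) {
        for (int[] clause : clauses) {
            boolean sat = false;
            for (int lit : clause) {
                int v = Math.abs(lit);
                boolean val = ((assignment >> (v - 1)) & 1) == 1;
                if ((lit > 0) == val) { sat = true; break; }
            }
            if (!sat) return false; // this clause is falsified
        }
        return true;
    }

    // The machine M of the reduction: cycle through all 2^numVars
    // assignments, halting as soon as one satisfies the formula.
    // On an unsatisfiable formula this deliberately never returns,
    // so M halts on F if and only if F is satisfiable.
    static void machineM(int[][] clauses, int numVars) {
        long a = 0;
        while (true) {
            if (evaluate(clauses, a)) return;  // halt: satisfiable
            a = (a + 1) % (1L << numVars);     // otherwise loop forever
        }
    }
}
```

Asking HALT(M, F) then answers exactly the 3SAT question for F, which is the direction of reduction needed for NP-hardness.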

Related

Recognition of Undecidable Propositions (infinite loop)

Say I want to find a natural number n such that n + n = 3.
To solve this computationally, I would run an algorithm:
int n = 1;
while (n + n != 3)
    n++;
System.out.println(n);
Of course we know that this loop is infinite. But is there an algorithm that can predict whether this loop will be infinite or finite? (Similar to, but different from, the halting problem, since my desired algorithm only examines this one loop, while a halting machine would examine all loops.) If there is, what would the algorithm be?
In answer to the specific question asked ("is there an algorithm that can predict if this loop will be infinite or finite"), the algorithm would be to simply report "INFINITE".
If you are looking for something more general (i.e., something that works on arbitrary source code), there are algorithms that work on various restricted classes of programs. But it has been known for a long time that no such algorithm exists for the general case.
While that program is running, its state can be fully defined by the value of the variable 'n' and by the next operation (amongst a finite set of operations) to be executed. An algorithm can simulate the execution of that program step by step, at each step checking whether both the data and the next operation match those of an earlier step. If a match is found, then the algorithm stops the simulation and reports that the program does not halt. If a match is not found, the new state of the program is logged for future comparisons. If there is no further operation to be executed, i.e. the program finishes running by itself, then the algorithm reports that the program halts.
This algorithm is of course very inefficient, but it shows a very general case: that it is possible to predict whether a specific program stops, when that program does not use an arbitrarily large amount of memory (for code and data).
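The simulation-with-state-logging idea can be sketched like this (a toy variant of the n + n != 3 loop, made bounded by stepping modulo 10 so the state space is finite; the names and the modulus are my own choices for illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class LoopChecker {
    // Simulate a bounded-state program step by step, logging each state.
    // If a state repeats, the program can never halt; if the loop
    // condition becomes false, it halts. Here the entire program state
    // is just the value of n, kept finite by working modulo 10.
    static boolean halts(int start, int target) {
        Set<Integer> seen = new HashSet<>();
        int n = start;
        while (n + n != target) {
            if (!seen.add(n)) return false; // state seen before: infinite loop
            n = (n + 1) % 10;               // bounded step function
        }
        return true;                        // condition met: program halts
    }

    public static void main(String[] args) {
        System.out.println(halts(1, 6)); // n reaches 3, so the loop ends
        System.out.println(halts(1, 3)); // n + n is always even: never ends
    }
}
```

A real version would log the full machine state (all variables plus the program counter), but the principle is the same: with finite memory, the program either terminates or revisits a state.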

If algorithm in P, efficient way to extract solutions?

Maybe this is very obvious, but if we had an algorithm in P (so this algorithm gives a yes/no answer in polynomial time), is there a more efficient way to find the solution beyond just guessing and checking?
So, suppose SAT is in P (I know this is an NP-Complete problem, but this seems like the best example for what I'm trying to ask). This means that there is a polynomial time algorithm that will tell you yes or no depending on whether or not the given input is satisfiable.
It would seem that there should thus be an efficient way to find/extract this satisfying assignment (rather than just know it exists, if there is one). However, I can't think of any efficient way to utilize this poly-time algorithm to find such an assignment.
Side note: for maximization/minimization problems (e.g. Knapsack) I know that you can use binary search to find your solution, but my question pertains more to non-optimization problems like SAT.
You don't have to guess the entire thing and then test it.
You can get a satisfying valuation (if it exists) like this:
Pick a variable, set it to false, remove it from all clauses and remove the clauses it satisfies. Consult the SAT oracle, which apparently runs in polynomial time today. If the formula is still satisfiable, fine, keep that value. Otherwise the variable must be true; restore the clauses and simplify them again under that value. There's no backtracking; there's just one call to SAT per variable, so the whole thing still runs in polynomial time.
Or if that's what you had in mind, then well, that's it. Does it really matter though? Polynomial time is polynomial time, and this isn't usable in practice anyway so wall-clock time is hardly a concern.
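A sketch of that variable-by-variable extraction in Java. The satisfiable method here is only a brute-force stand-in for the hypothetical polynomial-time SAT decider, and the clause encoding (literal v = variable v true, -v = variable v false) is my own:

```java
import java.util.*;

public class SatExtract {
    // Brute-force stand-in for the assumed poly-time SAT oracle.
    static boolean satisfiable(List<List<Integer>> clauses, int numVars) {
        for (int mask = 0; mask < (1 << numVars); mask++) {
            boolean ok = true;
            for (List<Integer> c : clauses) {
                boolean sat = false;
                for (int lit : c) {
                    int v = Math.abs(lit);
                    boolean val = ((mask >> (v - 1)) & 1) == 1;
                    if ((lit > 0) == val) { sat = true; break; }
                }
                if (!sat) { ok = false; break; }
            }
            if (ok) return true;
        }
        return false;
    }

    // Substitute variable v := value, simplifying the clause set:
    // drop satisfied clauses, delete falsified literals.
    static List<List<Integer>> assign(List<List<Integer>> clauses, int v, boolean value) {
        List<List<Integer>> out = new ArrayList<>();
        int satLit = value ? v : -v;
        for (List<Integer> c : clauses) {
            if (c.contains(satLit)) continue;        // clause satisfied
            List<Integer> nc = new ArrayList<>(c);
            nc.remove(Integer.valueOf(-satLit));     // drop falsified literal
            out.add(nc);                             // may become empty = unsat
        }
        return out;
    }

    // One oracle call per variable: fix it to false if the rest stays
    // satisfiable, otherwise it must be true. Assumes the input
    // formula is satisfiable to begin with.
    static boolean[] extract(List<List<Integer>> clauses, int numVars) {
        boolean[] assignment = new boolean[numVars];
        List<List<Integer>> cur = clauses;
        for (int v = 1; v <= numVars; v++) {
            List<List<Integer>> tryFalse = assign(cur, v, false);
            if (satisfiable(tryFalse, numVars)) {
                cur = tryFalse;                      // false works, keep it
            } else {
                assignment[v - 1] = true;            // forced to be true
                cur = assign(cur, v, true);
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        // (x1 OR x2) AND (NOT x1 OR x2): x2 is forced to true
        List<List<Integer>> f = List.of(List.of(1, 2), List.of(-1, 2));
        System.out.println(Arrays.toString(extract(f, 2))); // prints [false, true]
    }
}
```

Note that extract makes exactly one satisfiable call per variable, so if the oracle were polynomial, the whole extraction would be too.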

NP-Completeness in Task Scheduling

So this is a bit of a thought provoking question to get across the idea of NP-Completeness by my professor. I get WHY there should be a solution, due to the rules of NP-Completeness, but I don't know how to find it. So here is the problem:
The problem is a simple task scheduling problem with two processors. Each processor can handle one of the n tasks at a time, and any two tasks can run simultaneously. Each task has an end time e by which it must be completed, and a duration d. All time values (end times, durations, and the current time in the system, which starts at 0) are integers. So we are given a list of n tasks and we need to use these two processors to schedule them ALL. If any one of them cannot be scheduled, the algorithm has to return no solution. Keep in mind that the order does not matter, and it doesn't matter which processor gets which task, as long as there is no overlap and each task finishes before its deadline.
So here is where the problem gets conceptual/abstract: say we have access to a special little function. We have no idea how it works; all we know is this: given a list of n tasks and the current schedule, it will return true or false based on whether or not the problem is solvable from this point. This function assumes that the already scheduled tasks are set in stone, and it will only change the times of the unscheduled tasks. However, all this function does is return true or false; it will not give you the proper schedule if it does find a solution. The point is that you can use the special function in the implementation of the scheduling problem. The goal is to solve the scheduling problem, returning a working schedule with every job scheduled properly, using a polynomial number of calls to the special function.
EDIT: To clarify, the question is this: Create a solution to schedule all n tasks without any going over deadline, using a polynomial number of calls to the "special function."
I think this problem is meant to show that verifying a solution is polynomial, but finding one is not. But my professor is insistent that there is a way to solve this using a polynomial number of calls to the special function. Since the problem as a whole is NP-complete, this would show that the nonpolynomial part of the runtime comes in during the "decision" portion of the algorithm.
If you would like me to clear up anything just leave a comment, I know this wasn't a perfect explanation of the problem.
Given an oracle M that returns only true or false:

input:
    tasks - a list of tasks
output:
    schedule - a triple (task, processor, start) for each task
algorithm:
    while there is some unscheduled task:
        let p be the processor whose scheduled tasks finish earliest
        let x be the first available time on p
        for each unscheduled task t:
            tentatively assign the triple (t, p, x)
            run M on the current schedule
            if M answers true:
                keep the assignment; break out of the for loop and continue the while loop
            else:
                unassign t and continue to the next iteration of the for loop
        if no triple was assigned, return NO_SOLUTION
    return all assigned triples
The above runs in polynomial time - it needs O(N^2) calls to M.
Correctness of the above algorithm can be proved by induction, with the induction hypothesis: "after round k of the while loop, if there was a solution before it, there is still a solution after it (and after the assignment of some task)". After proving this claim, correctness of the algorithm follows trivially for k = #tasks.
Formally proving the above claim:
Base of induction is trivial for k=0.
Hypothesis: for any k < i, the claim "if there was a solution before round k, there is still one after round k", is correct
Proof:
Assume there is some solution { (t_j, p_j, x_j) | j = 1, ..., n }, ordered so that j < u <-> x_j < x_u, and also assume that t_1, t_2, ..., t_{i-1} are assigned exactly as the algorithm assigned them (induction hypothesis).
Now we are going to assign t_i, and we will be able to, because we pick the smallest available time stamp (x_i) and place some task at it. Since t_i itself is a candidate, the for loop cannot fail and yield NO_SOLUTION.
Also, since the algorithm does not yield NO_SOLUTION in iteration i, by the correctness of M it assigns some task t such that after assigning (t, p, x) a solution still exists, which proves the claim for step i.
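To make the pseudocode concrete, here is a Java sketch. The feasible method is only a brute-force stand-in for the black-box oracle M: it branches on which processor each remaining task goes to, taking tasks in deadline order, which is safe because each processor may as well run its tasks in earliest-deadline-first order. All names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TwoProcessorScheduling {
    record Task(int duration, int deadline) {}

    // Stand-in for the yes/no oracle M: can the remaining tasks all
    // meet their deadlines if the processors are busy until f0 and f1?
    static boolean feasible(List<Task> remaining, int f0, int f1) {
        List<Task> byDeadline = new ArrayList<>(remaining);
        byDeadline.sort(Comparator.comparingInt(Task::deadline));
        return search(byDeadline, 0, f0, f1);
    }

    // Try each remaining task (in deadline order) on either processor.
    static boolean search(List<Task> ts, int i, int f0, int f1) {
        if (i == ts.size()) return true;
        Task t = ts.get(i);
        return (f0 + t.duration() <= t.deadline() && search(ts, i + 1, f0 + t.duration(), f1))
            || (f1 + t.duration() <= t.deadline() && search(ts, i + 1, f0, f1 + t.duration()));
    }

    // The algorithm from the answer: repeatedly take the processor that
    // frees up first and commit some task there, but only after the
    // oracle certifies the instance is still solvable.
    static List<int[]> schedule(List<Task> tasks) {
        List<Integer> unscheduled = new ArrayList<>();
        for (int i = 0; i < tasks.size(); i++) unscheduled.add(i);
        List<int[]> result = new ArrayList<>(); // {taskIndex, processor, start}
        int[] finish = {0, 0};
        while (!unscheduled.isEmpty()) {
            int p = finish[0] <= finish[1] ? 0 : 1;
            boolean placed = false;
            for (int k = 0; k < unscheduled.size(); k++) {
                int idx = unscheduled.get(k);
                Task t = tasks.get(idx);
                int end = finish[p] + t.duration();
                List<Task> rest = new ArrayList<>();
                for (int j : unscheduled) if (j != idx) rest.add(tasks.get(j));
                boolean ok = end <= t.deadline()
                    && (p == 0 ? feasible(rest, end, finish[1]) : feasible(rest, finish[0], end));
                if (ok) {
                    result.add(new int[]{idx, p, finish[p]});
                    finish[p] = end;
                    unscheduled.remove(k); // remove at position k
                    placed = true;
                    break;
                }
            }
            if (!placed) return null; // NO_SOLUTION
        }
        return result;
    }
}
```

The outer structure makes at most one oracle call per (round, task) pair, i.e. O(N^2) calls, matching the analysis above; only the stand-in oracle itself is exponential here.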

Checking if the following language is decidable

Input: a deterministic TM M.
Question: Is there an input x such that, when M runs on x, there are 3 different states of M for which M goes through the first state once, the second state twice, and the third state three times?
To which complexity class does the following problem belong?
A. R
B. RE\R
C. co-RE\R
D. None of the above
I would be glad if someone could give a formal proof for this problem, and a tip on how to deal with this sort of question.
thanks.
When dealing with the class R, you should always think of the halting problem. If you can find any algorithm (even a superexponential one) that solves the problem, it is in R. If you can reduce the halting problem to your problem (as in this case), the problem is uncomputable (and therefore I think the answer is D).
I'm not sure how a formal proof goes in this case, but the idea is to show that if you could solve your problem, you could solve the halting problem. Suppose you have a TM and you want to find out whether it terminates, and assume we can solve your problem. First, we prepend a Turing machine A before the start state of the input machine and use the start state of A as the new start state. A is constructed so that no state of A is visited exactly once, some state is visited exactly twice, and some state is visited exactly three times; that part should be trivial. Next, except for the end state of the input machine, we need to make sure that each of its states is visited at least twice. I don't know the details, but this should be possible by adding, for each state, an arrow from that state to itself and manipulating the transitions so that the machine must loop on each state once before it can move on. Now the only state that could possibly be visited exactly once is the end state. Therefore, if you can solve your problem, you can solve the halting problem. But the halting problem is uncomputable, so your problem is too.

Is using static bytecode analysis to determine all the possible paths through a given method a variant of trying to solve the Halting Problem?

Is it possible to determine all the possible execution paths by reading the bytecode of a given method, or will that be equivalent to trying to solve the halting problem? If it can't be reduced to the halting problem, then how far can I go with static analysis without crossing the boundary of trying to solve the halting problem?
Related question: "Finding all the code in a given binary is equivalent to the Halting problem." Really?
Yes, this is easily equivalent to solving the halting problem. Consider the following if statement:
if (TuringMachine(x)) then goto fred;
OK, is it really possible to goto fred? You can only answer this question if you can analyze a Turing machine.
There's an equivalent set of bytecodes for this.
Now, if the only goal is to determine all plausible paths, and you don't mind getting some false positives, then the answer is no, it is not equivalent to the halting problem. Consider the following program:
if (false) then x else y ;
The possible paths eval(false); do x and eval(false); do y are a complete enumeration (the first is a false positive, since x can never actually run).
You have to treat loops specially, as running zero, one, two, or up to some maximum bounded number of iterations, if you want a computable answer. If a loop can repeat forever, some of your paths will be infinitely long and you can't report them with an algorithm in finite time :-{
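The bounded-unrolling idea can be sketched as a small path enumerator over a control-flow graph (Java; the graph encoding and node names are invented for illustration, and branch conditions are ignored, so infeasible paths like the do x one above will still be listed):

```java
import java.util.*;

public class PathEnum {
    // Enumerate paths through a control-flow graph from entry to exit,
    // visiting any node at most `bound` times. This is a computable
    // over-approximation of the feasible paths: loops are unrolled a
    // bounded number of times, and conditions are not checked.
    static List<List<String>> paths(Map<String, List<String>> cfg,
                                    String entry, String exit, int bound) {
        List<List<String>> result = new ArrayList<>();
        dfs(cfg, entry, exit, bound, new ArrayDeque<>(), new HashMap<>(), result);
        return result;
    }

    static void dfs(Map<String, List<String>> cfg, String node, String exit,
                    int bound, Deque<String> path, Map<String, Integer> visits,
                    List<List<String>> result) {
        int seen = visits.getOrDefault(node, 0);
        if (seen >= bound) return;             // loop bound reached
        visits.put(node, seen + 1);
        path.addLast(node);
        if (node.equals(exit)) {
            result.add(new ArrayList<>(path)); // record a complete path
        } else {
            for (String next : cfg.getOrDefault(node, List.of()))
                dfs(cfg, next, exit, bound, path, visits, result);
        }
        path.removeLast();                     // backtrack
        visits.put(node, seen);
    }

    public static void main(String[] args) {
        // entry -> header; header -> body (loop) or exit; body -> header
        Map<String, List<String>> cfg = Map.of(
            "entry", List.of("header"),
            "header", List.of("body", "exit"),
            "body", List.of("header"));
        // With bound 2, the loop body is taken at most once: two paths.
        for (List<String> p : paths(cfg, "entry", "exit", 2))
            System.out.println(p);
    }
}
```

Raising the bound adds one path per extra unrolling of the loop, which is exactly the "zero, one, two, or some maximum bounded number of iterations" treatment described above.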

Resources