Prove the Stack-Turing Machine is equivalent to the Classic TM [closed] - computation-theory

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 3 years ago.
Consider a "Stack-Turing Machine" variant that operates with one infinite tape and one stack. At each step, the machine reads the symbol at the current tape location and the top of the stack, then transitions state, writes to the tape, and moves along the tape (like a classic TM), and can also pop from and/or push to the stack (like a classic PDA). In other words:

the classic TM has transition function δ(q_i, a) → (q_j, b, L ∪ R), where q is a state and a, b are the symbols read from and written to the tape;

the classic PDA has transition function δ(q_i, c) → (q_j, d), where q is a state, c is the initial top of the stack, and d is the newly pushed top of the stack;

the Stack-TM has transition function δ(q_i, a, c) → (q_j, b, L ∪ R, d), merging the TM and the PDA.

Prove the Stack-Turing Machine is equivalent to the Classic TM.
(Image: https://imgur.com/a/daJgTTb)
Not sure how to approach this.
No code involved; this is a theory-of-computation proof.

A Stack-TM can simulate a regular TM by simply doing nothing interesting with the stack. A two-tape TM can simulate a Stack-TM by treating the second tape as a stack: it writes only at the end of the used portion, and pops by erasing the last symbol and moving left. Finally, a regular TM can simulate a two-tape TM, since multi-tape TMs are equivalent to single-tape TMs (assuming we know this result). By transitivity of simulation, all three models are equivalent: each can simulate the others.
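The tape-as-stack idea can be sketched concretely. This is my own illustration (the class and method names are invented): a stack realized using only TM-style operations, with the head always sitting one cell past the last pushed symbol.

```python
class TapeStack:
    """A PDA stack simulated on a TM tape: push = write then move right,
    pop = move left, read, then erase (write a blank)."""

    def __init__(self):
        self.tape = {}   # cell index -> symbol; missing cells are blank
        self.head = 0    # always one cell past the topmost symbol

    def push(self, symbol):
        self.tape[self.head] = symbol
        self.head += 1

    def pop(self):
        self.head -= 1
        return self.tape.pop(self.head)   # read and blank the cell

    def is_empty(self):
        return self.head == 0

s = TapeStack()
s.push('A')
s.push('B')
print(s.pop(), s.pop(), s.is_empty())  # B A True
```

Since every push and pop is just a write/erase plus one head move, each stack operation of the Stack-TM costs O(1) steps on the second tape.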

Related

Stack implementation the Trollface way [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
In my software engineering course, I encountered the following characteristic of a stack, condensed by me: what you push is what you pop. The fully axiomatic version I linked here.
Being a natural-born troll, I immediately invented the Troll Stack: if it already has more than one element on it, pushing results in a random permutation of those elements. Promptly I got into an argument with the lecturers over whether this nonsense implementation actually violates the axioms. I said no: the top element stays where it is. They said yes: somehow you can recursively apply the push-pop axiom to get "deeper". Which I don't see. Who is right?
The violated axiom is pop(push(s,x)) = s. Take a stack s with n > 1 distinct entries. If you implement push such that push(s,x) is s'x with s' being a random permutation of s, then since pop is a function, you have a problem: how do you reverse random_permutation() such that pop(push(s,x)) = s? The preimage of s' might have been any of the n! > 1 permutations of s, and no matter which one you map to, there are n! - 1 > 0 other original permutations s'' for which pop(push(s'',x)) != s''.
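A quick way to see this on a computer rather than on paper (my own sketch; `troll_push` is my reading of the question's description, shuffling the existing elements and putting x on top):

```python
import random

def troll_push(s, x):
    s = list(s)
    if len(s) > 1:
        random.shuffle(s)   # random permutation of the elements already there
    return s + [x]          # the new element x still ends up on top

def pop(s):
    return s[:-1]           # pop cannot know which permutation push applied

random.seed(42)
s = [1, 2, 3, 4, 5]
violations = sum(pop(troll_push(s, 9)) != s for _ in range(1000))
print(violations)  # the axiom pop(push(s, x)) = s fails in nearly every trial
```

With 5 distinct elements, only 1 in 5! = 120 random permutations is the identity, so roughly 99% of the trials violate the axiom.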
In cases like this, which might be very easy to see for everybody but not for you (hence your usage of the "troll" word), it always helps to simply run the "program" on a piece of paper.
Write down what happens when you push and pop a few times, and you will see.
You should also be able to see how those axioms correspond very closely to the actual behaviour of your stack; they are not just there for fun, but they deeply (in multiple meanings of the word) specify the data structure with its methods. You could even view them as a "formal system" describing the ins and outs of stacks.
Note that it is still good for you to be sceptical; this leads to (a) better insight and (b) detection of errors your superiors make. In this case they are right, but there are cases where it can save you a lot of time (e.g. while searching for the solution to the "MU" riddle in "GΓΆdel, Escher, Bach", which would be an excellent read for you, I think).

How many primitive operations does a 16-, 32- or 64-bit processor execute to perform a logical right shift of an N-bit binary number? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
Recently, I have been trying to understand how the binary extended Euclidean algorithm works at the processor level. This question is all about finding an inverse element in GF(2^m) with a polynomial basis.
I generally came across the extended Euclidean algorithm for evaluating an inverse element, but the fact is that it involves too many addition and multiplication operations. The binary EEA requires just bit-shifting operations (equivalent to division by 2, i.e. logical shift right). The algorithm is in this link, page number 8.
In steps 3 and 5 of this algorithm, every iteration shifts the parameters u and b by 1 bit to the right, adding a zero at the MSB at the same time. The loop ends when u == 1 and returns b. My question is: how many primitive operations does a processor (say a 32-bit processor, for example) perform in step 3 or step 5 of every iteration?
I came across the barrel shifter, and I am quite confused about how fast the shifting takes place. Should I really count these as primitive operations, or should I ignore them because the shifting may be faster?
It would really help me a lot if someone would show the primitive operations for the case where the size of u is 194 bits.
In case you are wondering about the denominator x in steps 3 and 5 of the algorithm: it's the polynomial representation; x is nothing but 10 in binary, and the parameter u is an N-bit binary number.
There is no generic answer to this question: you can use portable code that will be tedious to optimize or highly machine specific code that will be even more complicated to optimize without breaking.
If you want real performance, you have to use MMX/AVX registers on the maximum width you can get your hands on. Intel provides lightweight wrappers on low-level instructions as macros and inline functions.
Always use unsigned types for your shifting operations to avoid unnecessary steps.
Usually there is a "right shift" assembly opcode which can right-shift a register by a given number of bits. Such an operation takes one cycle.
This assumes that your value is already loaded into the register, however.
The best answer anyway: Implement this algorithm in a low level language (C, C++) and look at the assembly code produced by the compiler.
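To make the cost concrete for the 194-bit case, here is a sketch (mine, not from the linked paper) of one right shift on a 32-bit machine: a 194-bit u needs ceil(194/32) = 7 limbs, and each limb costs a handful of primitive ops per iteration (two shifts, an OR, a mask, a store).

```python
# Logical right shift by one bit of a 194-bit u stored as 7 little-endian
# 32-bit limbs, mimicking what a 32-bit processor would do word by word.
LIMBS, WORD = 7, 32
MASK = (1 << WORD) - 1

def shr1(u):
    """u: list of 7 ints, each < 2**32, least significant limb first."""
    for i in range(LIMBS - 1):
        # pull the neighbour's low bit into our vacated high bit
        u[i] = ((u[i] >> 1) | (u[i + 1] << (WORD - 1))) & MASK
    u[LIMBS - 1] >>= 1          # a zero bit enters at the MSB
    return u

u = [0x00000003, 0x00000001, 0, 0, 0, 0, 0]   # the value 2**32 + 3
shr1(u)
print(hex(u[0]), u[1])  # (2**32 + 3) >> 1 = 2**31 + 1 -> 0x80000001 0
```

So a single 1-bit shift of a 194-bit value is on the order of 7 × 4 ≈ 30 primitive operations on a 32-bit processor, independent of whether each register shift itself takes one cycle in a barrel shifter.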

Halting program explained [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am trying to understand why it is impossible to write a program H that can check whether another program P on a specific input I will halt or not (the halting problem), but I am unable to get a clear idea of it.
Intuition says that if this program H tries to run a program P which is not going to halt, then H itself will go into a loop. But I don't think this is the idea behind the proof.
Can anybody explain the proof in simple layman's terms?
The idea behind the proof is contradiction.
Assume there is a halting-problem machine M that solves the halting problem, yielding 0 if some input program won't finish and 1 if it will. M is guaranteed to finish.
Create a new machine H.
H runs M with H (itself) as input, and if M answers 1, H gets into an endless loop; otherwise, H stops.
Now, what will happen if you run M on input H?
If the answer is 1, it is wrong, since H runs M and will get into an infinite loop.
If the answer is 0, it is also wrong, since H will stop.
So it contradicts the assumption that M is correct, and thus there is no such M.
Intuitively - it is like saying there is no such thing as an oracle, because if "you" tell me you're an oracle, I ask you - which arm am I going to raise?
Then, I will wait for your answer - and do the opposite - which contradicts the claim that the oracle is indeed an oracle.
Turing used proof by contradiction for this (a.k.a. reductio ad absurdum):
The idea is to assume there is actually such a machine H that, given any program p and an input i, will tell us whether p stops.
Given such H, we can modify it and create a new machine.
We'll add another part after H's output such that if H outputs yes, our machine will loop infinitely,
and if H outputs no, our new machine will halt.
We'll call our new machine H+.
Now, the paradox occurs when we feed H+ to itself (as both program p and input i).
That's because if H+ does halt, we get a yes answer (from the H part), but then it loops forever.
However, if H+ doesn't halt, we get a no answer (from the H part), but then it halts.
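The H+ construction can be sketched in code. Here `halts` is the hypothetical oracle that the proof shows cannot exist; the stand-in lambda below only demonstrates that whatever fixed answer the oracle gives about H+ is wrong by construction (all names are mine):

```python
def make_h_plus(halts):
    """Build H+ from a claimed halting oracle halts(program, input)."""
    def h_plus(program):
        if halts(program, program):   # oracle says "it halts"...
            while True:               # ...so do the opposite: loop forever
                pass
        return "halted"               # oracle says "it loops", so halt
    return h_plus

# A stand-in oracle that always answers "no, it loops":
h_plus = make_h_plus(lambda program, argument: False)
print(h_plus(h_plus))  # halts immediately, contradicting the oracle's answer
```

Had the stand-in answered True instead, `h_plus(h_plus)` would loop forever, again contradicting it; no implementation of `halts` can get this case right.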
This is explained very nicely in this Computerphile episode.

Why do v structures not contribute to flow of probabilistic influence? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I recently went through a video which said that in the relation X → W ← Y, X does not influence Y. X has a causal relationship to W, and W has an evidential relationship to Y. So will X not affect Y?
Let's let W represent "The Lawn is Wet," X represent "It rained recently," and Y represent "The automatic sprinklers were on recently."
Clearly, X influences W: If it rained recently, it is likely that the lawn is wet.
Clearly, Y influences W: If the sprinklers were on recently, it is likely that the lawn is wet.
Clearly, knowing W, we can make inferences about X and Y.
But, does X directly influence Y?
Put differently, does the fact of recent rain (or not) influence whether the automatic sprinklers were on recently?
No. If we know nothing about the state of the lawn, because we didn't look outside, the chance of recent rain is independent of the chance of recent sprinkler activity.
Once we look outside, though, and determine the state of the lawn, then we can draw inferences between rain and sprinkler activity.
If W is not observed, then X and Y are independent.
One such v-structured CPD (that is entirely deterministic) looks like this.
Draw X and Y independently as binary variables, and then W is the sum. If you know only X=1, then Y could be 0 and W would be 1, or Y = 1 and W = 2. Knowing X doesn't let you distinguish between those two possibilities.
In general, I think reasoning about v-structures is much easier with deterministic functions than probabilistic ones. Think about what happens when the v-structure is ADD, XOR, and AND and you can usually get specific insights about general claims.
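The ADD case is small enough to verify by brute-force enumeration. A sketch (my own) of the joint distribution of two independent fair bits X, Y with W = X + Y:

```python
from itertools import product

# Joint distribution over events (x, y, w) with W = X + Y, deterministic.
joint = {(x, y, x + y): 0.25 for x, y in product([0, 1], repeat=2)}

def prob(event, given=lambda e: True):
    """Conditional probability P(event | given) over the joint table."""
    num = sum(p for e, p in joint.items() if event(e) and given(e))
    den = sum(p for e, p in joint.items() if given(e))
    return num / den

# Marginally, X tells us nothing about Y:
print(prob(lambda e: e[1] == 1))                       # 0.5
print(prob(lambda e: e[1] == 1, lambda e: e[0] == 1))  # still 0.5
# Once W is observed, X and Y are coupled: given W = 1, X = 1 forces Y = 0.
print(prob(lambda e: e[1] == 0,
           lambda e: e[0] == 1 and e[2] == 1))         # 1.0
```

This is exactly the "explaining away" pattern of a v-structure: conditioning on the common child W opens the path between its parents.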
Something that helped me understand this was a concrete example that I found in Sections 1.3 and 1.4 of this online book:
It goes through all the cases you have listed above, and explains each case with the running example described in Section 1.4. Please have a look here:
http://people.cs.aau.dk/~uk/papers/pgm-book-I-05.pdf

Programming two trains to intersect without positional data or communication (logic puzzle) [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
A helicopter drops two trains, each on a parachute, onto a straight infinite railway line.
There is an undefined distance between the two trains.
Each faces the same direction, and upon landing, the parachute attached to each train falls to the ground next to the train and detaches.
Each train has a microchip that controls its motion. The chips are identical.
There is no way for the trains to know where they are.
You need to write the code in the chip to make the trains bump into each other.
Each line of code takes a single clock cycle to execute.
You can use the following commands (and only these):
MF - moves the train forward
MB - moves the train backward
IF (P) - conditional that is satisfied if the train is next to a parachute. There is no "then" to this IF statement.
GOTO
Make each train move forwards slowly until it finds a parachute. When the back train finds the parachute of the front train, make it move forwards faster to catch up with the front train.
1. MF
2. IF(P)
3. GOTO 5
4. GOTO 1
5. MF
6. GOTO 5
If you want to make it take less time for the trains to reach each other, at the cost of some extra lines of code, you can unroll the second loop.
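The six-line program can be checked with a small simulator. This is my own sketch with invented names; I assume IF(P) means "execute the next line only if the train is standing on a parachute" and that each executed instruction costs one clock cycle:

```python
PROGRAM = [
    ("MF",),        # 1
    ("IF_P",),      # 2
    ("GOTO", 5),    # 3: only reached when standing on a parachute
    ("GOTO", 1),    # 4: closes the slow loop (~3 cycles per step forward)
    ("MF",),        # 5
    ("GOTO", 5),    # 6: fast loop (2 cycles per step forward)
]

def step(train, parachutes):
    """Run one clock cycle of one train's chip (1-based program counter)."""
    op = PROGRAM[train["pc"] - 1]
    train["pc"] += 1
    if op[0] == "MF":
        train["pos"] += 1
    elif op[0] == "GOTO":
        train["pc"] = op[1]
    elif op[0] == "IF_P" and train["pos"] not in parachutes:
        train["pc"] += 1          # condition false: skip the next line

def simulate(gap, max_cycles=10_000):
    back = {"pos": 0, "pc": 1}
    front = {"pos": gap, "pc": 1}
    parachutes = {0, gap}         # chutes stay where the trains landed
    for cycle in range(max_cycles):
        step(back, parachutes)
        step(front, parachutes)
        if back["pos"] >= front["pos"]:
            return cycle + 1      # the back train has caught the front one
    return None

print(simulate(gap=7))  # a finite cycle count: the trains do collide
```

The back train crawls forward at one step per 3 cycles until it reaches the front train's parachute, then switches to one step per 2 cycles and must eventually catch the front train, whatever the initial gap.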
label1: MF
IF (P)
{
    // Do nothing (because there is no "then")
}
ELSE
{
    MF
    MB
    GOTO label1
}
label2: MF
GOTO label2
Go forward two steps and back one step until you meet the other train's parachute, then go forward flat out to bump into the other train (which is still going forward-then-backward, i.e. moving more slowly). I use MF once in label2, so it takes 2 clock cycles to go one step forward, while in label1 it takes 5 clock cycles to go forward one step. So if we use more MF lines in label2, the two trains will bump into each other faster.
No variables are used.

Resources