Pushdown Automata for the Language {wwR | w∈{0,1}*} - computation-theory

I am currently enrolled in the undergraduate version of Theory of Computation at my university and we are discussing Pushdown Automata, Turing Machines, and the Church-Turing Thesis.
In our homework problem sets, my professor asked the following question:
Construct a PDA for the language {wwR | w∈{0,1}*}.
However, in our class lecture slides, he wrote the following
Not every nondeterministic PDA has an equivalent deterministic PDA. For example, there does exist a nondeterministic PDA recognizing the following language, but no deterministic PDA can recognize this language:
So, my question is whether it is possible to write such a deterministic PDA. I have tried researching it online for the past two hours, but have found no resources that discuss this problem specifically.

There is no deterministic PDA for this language, which is the same as saying that the language is not deterministic context-free. A non-deterministic grammar for it, in ABNF meta-syntax, is:
a = ["0" a "0" | "1" a "1"]
The inability to create a deterministic PDA comes from the fact that the decision to stop pushing and start matching depends on the position in the input: the machine has to know when it has reached the middle of the string. A PDA has no machinery for making decisions based on length; it cannot express "half of the input has been read by now".
Imagine that for every input character "0"/"1" you push "0"/"1" onto the PDA's stack and stay in the same state. Then, whenever the next input character equals the previous one, you face a decision: push it onto the stack and stay in the same state, or start recognizing the reverse of the characters read so far. There is no way to make that decision based only on the current input character, the character on top of the stack (which at that moment equals the input character), and the current state of the PDA. You would need to know the length of the input: if you are exactly in the middle, you start matching the reverse; otherwise, you keep pushing.
There is nothing you can rearrange to make the decision deterministic. The only option left is to explore both decision paths whenever the choice arises, which is exactly nondeterminism.
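A minimal sketch (my illustration, not part of the original answer) of how that nondeterminism can be simulated: try every possible midpoint, where each guess corresponds to one branch of the PDA's computation. The function name accepts_wwR is just a placeholder.

def accepts_wwR(s: str) -> bool:
    """Simulate the nondeterministic PDA for {w w^R | w in {0,1}*}
    by trying every possible midpoint (each guess is one branch)."""
    for mid in range(len(s) + 1):
        stack = []
        ok = True
        for c in s[:mid]:          # phase 1: push the first `mid` symbols
            stack.append(c)
        for c in s[mid:]:          # phase 2: pop and match the rest
            if not stack or stack.pop() != c:
                ok = False
                break
        if ok and not stack:
            return True            # some branch accepts
    return False                   # every branch rejects

assert accepts_wwR("0110") and not accepts_wwR("0101")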

Related

Is there a name for the do -> recurse -> undo pattern in backtracking?

In backtracking, e.g. the algorithm used for solving the n-queens problem, there are basically two ways to make the recursive call:
copy the parent board to make a child board, modify the child board by placing a new queen, then do the recursive call on the child board.
modify the board directly, do the recursive call, then undo the modification.
The second is preferred since it avoids the costly copy.
This choice is also present in other algorithms, like minimax on games.
Is there a name for pattern 2 as opposed to pattern 1?
In Constraint Programming and SAT solving (where your n-queens example usually comes from), I would argue that these concepts are described as:
copy-based
trail(ing)-based
See for example:
Reischuk, Raphael M., et al. "Maintaining state in propagation solvers." International Conference on Principles and Practice of Constraint Programming. Springer, Berlin, Heidelberg, 2009.
Schulte, Christian. "Comparing Trailing and Copying for Constraint Programming." ICLP. Vol. 99. 1999.
Excerpt of the former:
Constraint propagation solvers interleave propagation, removing impossible values from variable domains, with search. The solver state is modified during propagation. But search requires the solver to return to a previous state. Hence a propagation solver must determine how to maintain state during propagation and forward and backward search.
[...] This paper also provides the first realistic comparison of trailing versus copying for state restoration.
Both have their pros and cons, analyzed in the references.
Keep in mind that the trail usually stores not only your decisions (board placements) but also the propagations that happened (this placement makes these other placements impossible due to alldifferent propagation: those effects must be reverted as well!). For an overview of such an implementation, see MiniCP.
I would say that the two are the same algorithm, only with a mutable or immutable board.
I would also say that for the specific case of n-queens, the copy isn't expensive at all, since you can represent a board using only 64 bits... you probably spend a lot more than that per level of the call stack :)
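For concreteness, here is a small sketch (mine, not from the answers) of pattern 2 on n-queens in Python: the board is a single mutable list, and every placement is undone after the recursive call returns.

def count_n_queens(n: int) -> int:
    cols = []  # cols[r] = column of the queen placed in row r (the mutable board)

    def safe(row: int, col: int) -> bool:
        # No queen already placed shares this column or a diagonal.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row: int) -> int:
        if row == n:
            return 1
        total = 0
        for col in range(n):
            if safe(row, col):
                cols.append(col)         # modify the board ...
                total += place(row + 1)  # ... recurse ...
                cols.pop()               # ... then undo the modification
        return total

    return place(0)

print(count_n_queens(8))  # 92 solutions on the standard 8x8 board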
A related subject is the object-oriented design pattern called the Command pattern, which supports undoing recent commands by keeping them on a command stack.

In Non-Deterministic Finite Automata (NFA), how is the next branch/transition selected when there are two or more transitions?

For an NFA, when there are two or more possible transitions from the current state on the same input, how does the machine decide which transition to take?
I was only able to find the "guess and verify" methodology, where we consider the machine to be clairvoyant, always picking the correct path through the tree of possible choices.
Is this the only method? Could we also consider the machine as taking both transitions and existing in both states simultaneously?
You can consider it as trying all possible options: the NFA accepts a word if there is a path from an initial state to an accepting state labelled by this word.
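As an illustration of both views (trying all options, or being in several states at once), here is a small Python sketch of my own, not taken from the answer. It tracks the set of states the NFA could currently be in; the example automaton and names are made up.

def nfa_accepts(delta, start, accepting, word):
    """delta maps (state, symbol) to a set of successor states.
    The word is accepted if some accepting state is reachable."""
    current = {start}                      # all states we could be in
    for symbol in word:
        current = {q for state in current
                     for q in delta.get((state, symbol), set())}
    return bool(current & accepting)

# Toy NFA accepting binary strings that end in "01".
delta = {
    ("q0", "0"): {"q0", "q1"},
    ("q0", "1"): {"q0"},
    ("q1", "1"): {"q2"},
}
assert nfa_accepts(delta, "q0", {"q2"}, "1101")
assert not nfa_accepts(delta, "q0", {"q2"}, "10")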

Can every algorithm be represented by a finite state machine?

I understand that it's possible to represent some algorithms as FSMs, but can FSMs describe every possible algorithm?
No. Intuitively, an algorithm can only be represented as an FSM if it uses only a finite amount of state. For instance, you couldn't sort an arbitrary-length list with an FSM.
Now, add an unbounded amount of state to an FSM -- like an infinite one-dimensional array of values... and add a little bit of "glue" state between the FSM and the array -- a notion of "current position" in that array... and you've got a Turing machine. Which, yes, can do it all.
No.
Every regular language can be described by a finite state machine.
For non-regular languages, finite state machines are not enough.
The languages that can be recognized by programs are called the recursively enumerable languages, and they are accepted by Turing machines.
This is often referred to as the Chomsky hierarchy:
Regular Languages <= Context Free Languages <= Context Sensitive Languages <= Recursively enumerable Languages
Which are accepted by:
Regular languages: Finite State Machine
Context Free Language: Push-down automaton
Context Sensitive Languages: Linear bounded automata (Turing machines whose tape is bounded by the input length)
Recursively enumerable Languages: Turing Machines
It is important to note that a machine that can accept all languages of a higher tier can also accept all the lower tiers (for example, you can create a Turing machine to accept each regular language).
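As a small concrete example (mine, not part of the answer), here is a finite state machine in Python recognizing a regular language; the fixed, finite set of states is exactly the limitation discussed above.

def divisible_by_3(binary: str) -> bool:
    """DFA with three states, tracking the value read so far modulo 3."""
    state = 0                          # the full state set is just {0, 1, 2}
    for bit in binary:
        state = (state * 2 + int(bit)) % 3
    return state == 0

assert divisible_by_3("110")       # 6 is divisible by 3
assert not divisible_by_3("101")   # 5 is not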

Machine learning: Supervised learning to learn & predict next RSA code

I was going through Andrew Ng's machine learning course. I am still at the beginning stages.
In the class, he teaches supervised learning with his housing price prediction example.
Is it possible to predict the RSA token that will be generated next, given a dataset of "right" values for the machine learning program? Can we use supervised learning to make the program learn the algorithm?
Supervised learning depends on exploiting regularities in the data. For example if the data is plotted against the desired output there may be clusters or highly populated surfaces in the space. The various learning algorithms you will learn in class are all ways of exploiting one type of structure or another. If the dataset is random and unconnected to the output desired, then no learning can be done.
RSA is useful cryptographically precisely because it is a non-random process that is exceptionally difficult to distinguish from a random process with no structure. There are no obvious regularities in the data to exploit.
I am reluctant to discourage you from taking a look at this; you never know what it might spark or what you might learn. But in your place I would not want any part of my grade to depend on success. I will say that to succeed in any meaningful sense you will almost certainly have to base the learning on features that no one has thought of until now. If you are determined to try this, I would recommend starting with very small primes, and only if you get any traction graduating to larger primes.
Part of the reason for being dubious comes from complexity arguments. If one could solve arbitrary RSA problems based on a composite number, one could factor that number in a reasonable amount of time; yet factoring an arbitrary composite number is believed (but not proven) to be intractable, even though it is not believed to be NP-complete.
It won't work.
An RSA token creates a pseudo-random sequence of numbers from a seed.
In theory, if you had infinite resources then you could train an algorithm long enough that it "learnt" the entire sequence of pseudo-random numbers. And then you could predict the sequence (and potentially even infer the seed) from a set of previous values.
In practice this approach is guaranteed to fail, because both of the following hold:
The training time would be too long for this to be feasible.
The size of solution required (e.g. the number of parameters in a neural network) would be too large to be implemented.
By "too large" and "too long" you should understand "longer/larger than anyone in the universe will ever be able to achieve".
You will not achieve a statistically significant success rate with this. This type of prediction is ruled out by the mathematics at play inside the token; for example, the token may be built on a hash chain, designed so that the outputs you observe reveal nothing useful about the ones still to come.
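Purely as an illustration of that last point (this is my toy sketch, not how real RSA SecurID tokens work), a hash chain derives each code from a hidden internal state via a one-way function, so the observed codes give essentially no handle on future ones:

import hashlib

def token_sequence(seed: bytes, count: int):
    """Toy hash-chain token generator (illustrative only)."""
    state = seed
    for _ in range(count):
        state = hashlib.sha256(state).digest()  # one-way step
        # Emit only a short code; the full state stays hidden.
        yield int.from_bytes(state[:4], "big") % 1_000_000

print(list(token_sequence(b"secret-seed", 5)))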

What is the difference between an algorithm and a function? [closed]

Are they the same thing?
No.
A function is a block of code in a computer program.
An algorithm is an abstract concept that describes how to solve a problem.
In mathematics, a function is "a mathematical relation such that each element of a given set (the domain of the function) is associated with an element of another set (the range of the function)" (source - google.com, define:function).
In computer science, a function is a piece of code that optionally takes parameters, optionally gives a result, and optionally has a side effect (depending on the language - some languages forbid side-effects). It must have a specific machine implementation in order to execute.
The computer science term came out of the mathematical term, being the machine implementation of the mathematical concept.
An algorithm is "a precise rule (or set of rules) specifying how to solve some problem" (source - google.com, define:algorithm). An algorithm can be defined outside of computer science, and does not have a definitive machine implementation. You can "implement" it by writing it out by hand :)
The key difference here is, in computer science, an algorithm is abstract, and doesn't have a definitive machine implementation. A function is concrete, and does have a machine implementation.
An algorithm is a set of instructions.
In computer programming, a function is an implementation of an algorithm.
An algorithm is a series of steps (a process) for performing a calculation, whereas a function is the mathematical relationship between parameters and results.
A function in programming is different than the typical, mathematical meaning of function because it's a set of instructions implementing an algorithm for calculating a function.
An algorithm describes the general idea, whereas a function is an actual working implementation of that idea.
It might be almost a philosophical question, but I'd say an algorithm is the answer (or how-to) for the problem at hand, whereas a function does not necessarily answer a whole problem by itself.
What you normally want to do is split your algorithm into several functions that each have their own goal and that, used together, solve the problem at hand.
Example: you want to sort a list of numbers. The algorithm used could be, say, merge sort. That specific algorithm is actually composed of more than one function: one that splits your array, another that compares elements, another that merges everything back together, and so on (see the sketch below).
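A short Python sketch of that split (illustrative, not from the answer): the merge-sort algorithm expressed as two small functions used together.

def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]

def merge_sort(numbers):
    """Split the list, sort each half recursively, then merge the halves."""
    if len(numbers) <= 1:
        return numbers
    mid = len(numbers) // 2
    return merge(merge_sort(numbers[:mid]), merge_sort(numbers[mid:]))

print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]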
A mathematical function is the interface, or specification of the inputs and outputs of an algorithm.
An algorithm is the precise recipe that defines the steps that may implement a function.
Confusingly, computer language designers blur this distinction by using the terms function, func, method, etc., for both concepts.
So the distinction is one of specification vs. definition.
There is also a semantic distinction: an algorithm seeks to provide a solution to a problem. It is goal-oriented. A function simply is - there is no essential teleological component.
An algorithm is the implementation of a function.
In some cases, the algorithm is trivial:
Function: Sum of two numbers.
Algorithm:
int sum(int x, int y) {
    return x + y;
}
In other cases, it is not:
Function: Best chess move.
Algorithm:
Move bestChessMove(State gameState) {
    // I don't know the algorithm.
}
An algorithm is a (possibly informal but necessarily precise) sequence of instructions. A function is a formal rule that associates some input with a specific output. Functions implement and formalize algorithms. E.g. we can formalize "go from a to b" as go(a)=b or go(x,a)=b (with x the one who goes), etc. According to Wikipedia,
An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function.
So you can say that "go(ing) from a to b" is an effective method for calculating go(a)=b (if you want)
An algorithm usually refers to the method or process used to end up with the result after mathematical processing. A function is a subroutine used to avoid writing the same code over and over again. They are different in their uses. For instance, there may be an algorithm that is used for encrypting data, and a function for posting code to a webpage.
Here is some further reference:
http://en.wikipedia.org/wiki/Algorithm
http://en.wikipedia.org/wiki/Function_(computer_science)
A function is a symbolic representation, whereas a method is the mechanical steps needed to get the answer.
Suppose this function:
f(x) = x^2
Now if I tell you to compute f(5000), you have to do things that this function doesn't say, like for example how to multiply. So really these are just symbols.
But if I have a Python method, for example:
import math
x = math.pow(5000, 2)  # or whatever it is
Then in this case the steps themselves for each operation are totally defined (in the libraries ;) ).
