Is NFA without null transition allowed? - nfa

I have made an NFA, but it has no null transitions. Is that allowed in an NFA? I am building NFAs with a generic approach and am not sure about their correctness. Can an NFA take any form, or are there particular criteria it must follow? I checked one source:
https://www.researchgate.net/publication/238910238_NFAs_With_and_Without_epsilon-Transitions
but I am still not sure whether an NFA without lambda/epsilon transitions is allowed.
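For what it's worth, the standard definition of an NFA needs only a transition function from (state, symbol) pairs to sets of states; ε-transitions are an optional extension. Here is a minimal simulation sketch of an ε-free NFA (the state names and example language are invented for illustration):

```python
# Minimal NFA with no epsilon-transitions (all names are illustrative).
# delta maps (state, symbol) -> set of next states; missing keys mean no move.
def nfa_accepts(delta, start, finals, word):
    current = {start}
    for symbol in word:
        current = {q2 for q in current for q2 in delta.get((q, symbol), set())}
    return bool(current & finals)

# Example NFA accepting strings over {0,1} that end in "01" -- no epsilon
# moves anywhere, and it is still a perfectly well-formed NFA.
delta = {
    ("a", "0"): {"a", "b"},
    ("a", "1"): {"a"},
    ("b", "1"): {"c"},
}
print(nfa_accepts(delta, "a", {"c"}, "1101"))  # True
print(nfa_accepts(delta, "a", {"c"}, "10"))    # False
```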

Related

Expressing constraints in relational algebra

I need to express this constraint in relational algebra:
I have a table whose single column holds all possible values: ALL_VAL,
and a table with the values from ALL_VAL that do not match some rule: NOT_FIT_VAL,
so I can compute FIT_VAL = ALL_VAL - NOT_FIT_VAL.
What I need is a constraint saying that FIT_VAL contains at least one item.
I am using a not-equal sign with the empty set:
ALL_VAL,
NOT_FIT_VAL
FIT_VAL = ALL_VAL - NOT_FIT_VAL
FIT_VAL <> {empty}
But I am not sure that <> (not equal) is allowed in relational algebra at all;
there is not a single book or article that shows an example or says that I can use it.
I would like some clarification about this, and the correct expression.
Thank you.
Strictly speaking, the expression "FIT_VAL <> {empty}" is not a relational expression (it produces a truth value, not a relation), so it is problematic to consider such expressions "valid relational algebra expressions".
But that is strictly speaking, and I don't know how much slack your textbook cuts its readers in that area. Under the "strictly speaking" approach it is outright impossible to use relational algebra to define constraints, because the definition of a constraint must, almost by definition, produce a boolean result (does the database satisfy it or not). That is probably why it is so exceptional to see relational algebra used to express/define database constraints!
Another approach for defining database constraints using relational algebra, is to define a relational expression that plays the role of "faults expression" and then implicitly, tacitly assume that the rule is that the result of evaluating this expression must at all times be empty. But that's (AFAIK) an entirely private approach of mine, and I'd be surprised if you had also found it in a textbook.
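To make the difference concrete, here is the computation with unary relations modelled as Python sets (the table contents are invented). Note that the constraint check itself produces a boolean, not a relation, which is exactly the "strictly speaking" problem:

```python
# Illustrative only: unary relations modelled as Python sets of values.
ALL_VAL = {"a", "b", "c"}
NOT_FIT_VAL = {"a", "b"}

# Relational difference is a genuine relational operation:
FIT_VAL = ALL_VAL - NOT_FIT_VAL

# The constraint "FIT_VAL contains at least one item" is NOT a relational
# expression: evaluating it yields True/False, not a relation.
constraint_satisfied = len(FIT_VAL) >= 1
print(FIT_VAL)               # {'c'}
print(constraint_satisfied)  # True
```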

Left-recursive Grammar Identification

Often we would like to refactor a context-free grammar to remove left-recursion. There are numerous algorithms to implement such a transformation; for example here or here.
Such algorithms will restructure a grammar regardless of the presence of left-recursion. This has negative side-effects, such as producing different parse trees from the original grammar, possibly with different associativity. Ideally a grammar would only be transformed if it was absolutely necessary.
Is there an algorithm or tool to identify the presence of left recursion within a grammar? Ideally this might also classify subsets of production rules which contain left recursion.
There is a standard algorithm for identifying nullable non-terminals, which runs in time linear in the size of the grammar (see below). Once you've done that, you can construct the relation A potentially-starts-with B over all non-terminals A, B. (In fact, it's more normal to construct that relationship over all grammatical symbols, since it is also used to construct FIRST sets, but in this case we only need the projection onto non-terminals.)
Having done that, left-recursive non-terminals are all A such that A potentially-starts-with+ A, where potentially-starts-with+ is:
potentially-starts-with ∘ potentially-starts-with*
You can use any transitive closure algorithm to compute that relation.
For reference, here is how to detect nullable non-terminals:
1. Remove all useless symbols.
2. Attach a pointer to every production, initially at the first position.
3. Put all the productions into a workqueue.
4. While possible, find a production to which one of the following applies:
   - If the left-hand side of the production has been marked as an ε-non-terminal, discard the production.
   - If the token immediately to the right of the pointer is a terminal, discard the production.
   - If there is no token immediately to the right of the pointer (i.e., the pointer is at the end), mark the left-hand side of the production as an ε-non-terminal and discard the production.
   - If the token immediately to the right of the pointer is a non-terminal which has been marked as an ε-non-terminal, advance the pointer one token to the right and return the production to the workqueue.
Once it is no longer possible to select a production from the workqueue, all ε-non-terminals have been identified.
Just for fun: a trivial modification of the above algorithm can be used to do step 1. I'll leave it as an exercise (it's also an exercise in the Dragon book). Also left as an exercise is how to make sure the above algorithm executes in linear time.
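Putting the pieces together, the whole pipeline can be sketched in Python as follows. The fixpoint formulation of nullability below replaces the worklist version but computes the same set, and the grammar encoding is invented for illustration:

```python
# Sketch: find left-recursive non-terminals of a context-free grammar.
# Grammar encoding (illustrative): dict from non-terminal to a list of
# productions, each production a list of symbols (strings).
def left_recursive(grammar):
    nts = set(grammar)
    # 1. Nullable non-terminals, by fixpoint iteration.
    nullable = set()
    changed = True
    while changed:
        changed = False
        for lhs, prods in grammar.items():
            if lhs not in nullable and any(
                all(s in nullable for s in p) for p in prods
            ):
                nullable.add(lhs)
                changed = True
    # 2. "A potentially-starts-with B": B occurs in a production of A
    #    preceded only by nullable symbols.
    starts = set()
    for lhs, prods in grammar.items():
        for p in prods:
            for s in p:
                if s in nts:
                    starts.add((lhs, s))
                if s not in nullable:
                    break
    # 3. Transitive closure; A is left-recursive iff (A, A) is in it.
    closure = set(starts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in starts:
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return {a for a in nts if (a, a) in closure}

# E -> E + T | T ; T -> id     (E is left-recursive, T is not)
g = {"E": [["E", "+", "T"], ["T"]], "T": [["id"]]}
print(left_recursive(g))  # {'E'}
```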

Validity and satisfiability

I am having problems understanding the difference between validity and satisfiability.
Given the following:
(i) For all F, F is satisfiable or ~F is satisfiable.
(ii) For all F, F is valid or ~F is valid.
How do I prove which is true and which is false?
Statement (i) is true: for any F, either F is satisfiable or ~F is satisfiable (check with a truth table). But how do I go about statement (ii)?
Any help is highly appreciated!
Aprilrocks92,
I don't blame you for being confused, because logicians, mathematicians, heck, even those philosopher types sometimes use the word "validity" differently.
Trying not to overcomplicate things, I'll give you a thin definition: a conclusion is valid when it is true whenever the premises are true. We also say, given a suitably defined logic, that the conclusion follows as a "logical consequence" of the premises.
On the other hand, satisfiability means that there exists a valuation of the non-logical symbols in the formula F that makes the formula true in the logic.
So I should probably mention the difference between semantics and syntax to explain. The syntax of your logic is all those logical and non-logical symbols, plus the deductive rules that enable you to make "steps" towards a proof in the logic. My definition of satisfiability above mentioned the word "valuation"; how does that fit? The answer is that you need to supply a semantics: in short, this is the structure that the formulas of the logic describe, usually given in set theory, and a valuation of a given F is a function that maps all the non-logical symbols in F to sets and members of sets, which a given semantics for the logic composes into a truth value.
Hmm. I'm not sure that's the best explanation, but I don't have much time.
Either way, that should help you understand the difference. To answer your question about the difference between (i) and (ii) without giving too much away, think about the relationship between the two notions: a formula is satisfiable when some valuation sends it to true, so you could "rewrite" my definition of validity as: a conclusion is valid iff every valuation that satisfies the premises also satisfies the conclusion.
Now, regarding your requirement to prove these things, I strongly suspect you have more context about your logic that you're not telling us, and that your teacher or textbook has intimated a context in which to answer them, because taken in full generality your question doesn't quite make sense.
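Since both statements quantify over all F, individual propositional formulas can be machine-checked by enumerating valuations. Here is a small harness (formulas modelled as Python functions; the choice F = p is just one test case):

```python
from itertools import product

# A formula over n variables is modelled as a boolean function of n arguments.
def satisfiable(f, nvars):
    """True iff some valuation makes f true."""
    return any(f(*vals) for vals in product([False, True], repeat=nvars))

def valid(f, nvars):
    """True iff every valuation makes f true."""
    return all(f(*vals) for vals in product([False, True], repeat=nvars))

F = lambda p: p          # F = p
notF = lambda p: not p   # ~F

print(satisfiable(F, 1) or satisfiable(notF, 1))  # True
print(valid(F, 1) or valid(notF, 1))              # False: F = p refutes (ii)
```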

Using a binary decision diagram for the conceptual representation of simple rules

I'm attempting to express rules more formally than plain English sentences and was hoping for some direction in using a propositional approach and some sort of binary decision tree for illustration of the rules.
Suppose that objects outside a specified zone are required to be in a particular state (say RedState) in order to be considered safe. Expressed as a plain English sentence:
if an object is outside of ZoneA and is in a RedState, then it is Safe.
However, in some instances objects may be exempt from this restriction:
if object is outside of ZoneA, is not in a RedState and is Exempt, then it is Safe.
if object is outside of ZoneA, is not in a RedState and is not Exempt, then it is not Safe.
Whether or not an object in zone A is in a red state is unimportant. The remaining rule is:
if an object is contained in zone A, then it is safe.
Using a propositional formulation, I thought these rules could be expressed as
¬InZoneA ∧ RedState ⇒ Safe
¬InZoneA ∧ ¬RedState ∧ Exempt ⇒ Safe
¬InZoneA ∧ ¬RedState ∧ ¬Exempt ⇒ ¬Safe
InZoneA ⇒ Safe
I've consulted system specification approaches (such as Z) but am more interested in conveying a concise conceptual idea of the rules and less so in ensuring their functioning within a larger system. I therefore thought to represent them as a type of binary decision tree (diagram). I've read some notes on the subject but am a little unsure as to whether their use is the best approach or if I am butchering them. The representation I arrive at for these rules is presented in the figure, where solid lines indicate True and dashed lines indicate False.
I would greatly appreciate your input as to whether or not this representation is correct or if my approach/thinking is flawed. Many Thanks!
What you have shown is fine, as far as I can tell.
However, there are some other things you might want to consider.
For example, since Exempt depends on nothing and nothing depends on it, if it is available it might be worth evaluating before anything else, at any level. This would save you time by not having to evaluate other properties that cannot affect the outcome.
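As a sanity check, the four rules collapse to a single predicate, and the Exempt-first ordering gives identical results on every input (a sketch; the function names are invented):

```python
from itertools import product

# The four rules above, as one predicate:
def safe(in_zone_a, red_state, exempt):
    if in_zone_a:
        return True       # InZoneA => Safe
    if red_state:
        return True       # ~InZoneA & RedState => Safe
    return exempt         # ~InZoneA & ~RedState: Safe iff Exempt

# Exempt-first ordering: if Exempt is true, every rule yields Safe,
# so no other property needs to be evaluated.
def safe_exempt_first(in_zone_a, red_state, exempt):
    return exempt or in_zone_a or red_state

# The two orderings agree on all 8 valuations.
assert all(
    safe(z, r, e) == safe_exempt_first(z, r, e)
    for z, r, e in product([False, True], repeat=3)
)
```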

Converting RE to NFA

Which one is the correct way to show the union regular expression 0+1? I have seen these two ways, and I think both are correct. If both are correct, why complicate things?
They are both correct, as you stated.
The first one looks like it was generated using a set of standard rules -- in this case, it's overkill (and just looks silly), but in more complicated cases it's easier to follow easy rules than to hold the whole thing in your head and write an equivalent NFA from scratch.
In general, an NFA can be rewritten such that it has a single final state (obviously there's already only one start state).
Then, two NFAs in this form can be combined in such a way that the language they accept when combined is the union of the languages they accept individually -- this corresponds to the or (+) in a regular expression. To combine the NFAs in this way, simply create a new node to act as the start state and connect it with ε-transitions to the start states of the two NFAs.
Then, in order to neatly end the NFA in a single final state (so that we can use this NFA recursively for other unions if we want), we create an extra node to serve as the unified final state and ε-connect the old final states (which lose their final status) to it.
Using the general rules above, it's easy to arrive at the first diagram (two NFAs unioned together, the first matching 0, the other 1) -- the second is easy to arrive at via common sense since it's such a simple regex ;-)
The first construct belongs to the class of NFAs with ε-moves, which is an extension of the general NFA class. ε-moves give you the ability to make transitions without consuming any input. For the transition function, it is important to compute the set of states reachable from a given state using ε-transitions only (the ε-closure). Obviously, adding ε-moves does not allow an NFA to accept non-regular languages, so it is equivalent to plain NFAs, and in the end to DFAs.
NFAs with ε-moves are used by Thompson's construction algorithm to build an automaton from any regular expression. It provides a standard way to construct an automaton from regexes, which is handy when the construction must be automated.
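The union step described above can be sketched as follows, under an assumed encoding of an NFA as a triple (start, final, moves) with a single final state and `None` as the ε-label:

```python
import itertools

# An NFA is (start, final, moves); moves is a list of (src, label, dst)
# triples, where label None denotes an epsilon-move. Encoding is illustrative.
def literal(symbol, counter):
    """NFA accepting exactly the one-symbol string `symbol`."""
    s, f = next(counter), next(counter)
    return (s, f, [(s, symbol, f)])

def union(nfa1, nfa2, counter):
    """Union per the construction above: new start and final, four eps-moves."""
    s1, f1, m1 = nfa1
    s2, f2, m2 = nfa2
    start, final = next(counter), next(counter)
    moves = m1 + m2 + [
        (start, None, s1), (start, None, s2),  # new start eps-connects to both
        (f1, None, final), (f2, None, final),  # old finals lose final status
    ]
    return (start, final, moves)

def accepts(nfa, word):
    """Simulate the NFA, taking epsilon-closures as we go."""
    start, final, moves = nfa
    def eclose(states):
        states, frontier = set(states), list(states)
        while frontier:
            q = frontier.pop()
            for src, label, dst in moves:
                if src == q and label is None and dst not in states:
                    states.add(dst)
                    frontier.append(dst)
        return states
    current = eclose({start})
    for sym in word:
        current = eclose({d for s, l, d in moves if s in current and l == sym})
    return final in current

fresh = itertools.count()
zero_or_one = union(literal("0", fresh), literal("1", fresh), fresh)
print(accepts(zero_or_one, "0"))   # True:  matches 0+1
print(accepts(zero_or_one, "01"))  # False: union, not concatenation
```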