Chomsky Hierarchy Type 2: Non-terminal symbols on the left-hand side

Is it allowed to have two non-terminal symbols on the left-hand side of a rule in a Type-2 grammar?
I have to define a Type-2 grammar for the language L2. It would be easy if a rule like
CB -> BC were allowed, but I'm not sure whether this would violate any constraints. In Type 1 it would be easy.
Thank you!

No. According to the Chomsky hierarchy, a Type-2 (context-free) grammar is characterized by rules of the form $A \rightarrow \alpha$, where $A$ is a single variable and $\alpha \in (V \cup T)^{\ast}$. Since the left-hand side must consist of exactly one non-terminal, a rule like CB -> BC is not allowed in a Type-2 grammar.
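For illustration (this example grammar is mine, not the L2 from the question), a set of Type-2 rules looks like this:
S -> aSb
S -> BC
B -> b
C -> c
Each left-hand side is exactly one variable. A rule such as CB -> BC has two variables on its left-hand side, so a grammar containing it is not context-free; under the usual monotonic (non-contracting) formulation of Type 1 it is allowed, which is why the construction is easy there.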

Related

fat arrow in Idris

I hope this question is appropriate for this site, it's just about the choice of concrete syntax in Idris compared to Haskell, since both are very similar. I guess it's not that important, but I'm very curious about it. Idris uses => for some cases where Haskell uses ->. So far I've seen that Idris only uses -> in function types and => for other things like lambdas and case _ of. Did this choice come from realizing that it's useful in practice to have a clear syntactical distinction between these use cases? Is it just an arbitrary cosmetic choice and I'm overthinking it?
Well, in Haskell, type signatures and values are in different namespaces, so something defined in one is at no risk of clashing with something in the other. In Idris, types and values occupy the same namespace, which is why you don't see e.g. data Foo = Foo as you would in Haskell, but rather data Foo = MkFoo: the type is called Foo and the constructor is called MkFoo, because there is already a value (the type Foo) bound to the name Foo. See e.g. data Pair = MkPair in http://docs.idris-lang.org/en/latest/tutorial/typesfuns.html#tuples
So it's probably for the best that Idris didn't try to reuse the arrow that constructs function types for lambdas as well - those are rather different things. You can still combine them, e.g. the (Int -> Int) (\x => x).
I think it is because they interpret the -> symbol differently.
From Wikipedia:
A => B means if A is true then B is also true; if A is false then nothing is said about B
which seems right for case expressions, and
-> may mean the same as =>, or it may have the meaning for functions given below
which is
f: X -> Y means the function f maps the set X into the set Y
So my guess is that Idris just uses -> for the narrow second meaning, i.e. for mapping one type to another in type signatures, whereas Haskell uses the broader interpretation, where it means the same as =>.
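To see the contrast concretely, here is a small Haskell snippet (the function classify is made up for illustration) in which Haskell's -> shows up in all three roles discussed above, whereas Idris keeps -> only for the first:
classify :: Int -> String          -- (1) function type: both Haskell and Idris use ->
classify = \n ->                   -- (2) lambda: Idris would write \n => ...
  case compare n 0 of              -- (3) case branches: Idris would use => here too
    LT -> "negative"
    EQ -> "zero"
    GT -> "positive"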

What is the bottom type?

On Wikipedia, the bottom type is simply defined as "the type that has no values". However, if b is this empty type, then the product type (b,b) has no values either, yet it seems different from b. I agree bottom is uninhabited, but I don't think this property suffices to define it.
By the Curry-Howard correspondence, bottom is associated with mathematical falsity. Now there is a logical principle stating that from False follows any proposition. By Curry-Howard, that means the type forall a. bottom -> a is inhabited, i.e. there exists a family of functions f :: forall a. bottom -> a.
What are those functions f? Do they help define bottom, maybe as the infinite product of all types, forall a. a?
In Math
Bottom is a type that has no values. That is: any empty type can play the role of bottom.
Those f :: forall a . Bottom -> a functions are the empty functions - "empty" in the set-theoretical sense, i.e. functions whose domain is the empty set.
In Programming
Dedicating one concrete empty type to serve as bottom in a programming language's base library is a matter of convenience. Readability and compatibility of code benefit from everyone using the same empty type as bottom.
In Haskell
Let us refer to them by the friendlier Haskell names: "Bottom" becomes "Void" and "f" becomes "absurd".
{-# LANGUAGE EmptyDataDecls #-}
data Void
This definition does not contain any constructors, so no value of this type can ever be constructed: it is empty.
{-# LANGUAGE LambdaCase, EmptyCase #-}
absurd :: Void -> a
absurd = \case {}
In the case expression we do not have to handle any cases because there are none.
Both are already defined in the base package, in the module Data.Void.
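As a small usage sketch (the names result and fromResult are made up for illustration), Void typically marks a branch that cannot occur, and absurd discharges it:
import Data.Void (Void, absurd)

-- An Either whose Left branch is impossible, because Void has no values.
result :: Either Void Int
result = Right 42

-- absurd "handles" the impossible branch without having to invent an Int.
fromResult :: Either Void Int -> Int
fromResult = either absurd id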

Call by name vs normal order

I know this topic has been discussed several times, but there is something still unclear to me.
I've read this question applicative-order/call-by-value and normal-order/call-by-name differences and there is something I would like to clarify once and for all:
Call-by-name
As normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
In call by name, the expression λx.(λx.x)x is said to be in normal form; is this because "(λx.x)x" is considered to be the body (since the scope of λ extends as far as possible to the right)? And on the other hand, if I apply normal order, what would be the result?
In call by name, the expression λx.(λx.x)x is said to be in normal form; is this because "(λx.x)x" is considered to be the body (since the scope of λ extends as far as possible to the right)?
Yes, you are right.
And on the other hand, if I apply normal order, what would be the result?
You do reduction inside the body: (λx.x)x -> x, so the whole thing reduces to the identity function:
λx.(λx.x)x -> λx.x
To clarify it a bit further, let me do this one more time, renaming the variables to conform with the Barendregt variable convention: λx.(λx.x)x =α λx.(λy.y)x:
λx.(λy.y)x -> λx.[y := x](y) = λx.x
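To tie the two strategies to something executable, here is a minimal Haskell sketch of single-step reducers (my own illustration; substitution is deliberately naive and ignores variable capture, so it only behaves correctly for terms whose bound variables are distinct, like the α-renamed example above):
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution: replace free occurrences of x by s (no capture avoidance).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- Call by name: reduce the leftmost outermost redex, but never step inside a lambda body.
cbnStep :: Term -> Maybe Term
cbnStep (App (Lam x b) a) = Just (subst x a b)
cbnStep (App f a)         = fmap (`App` a) (cbnStep f)
cbnStep _                 = Nothing            -- lambdas and variables are CBN-normal

-- Normal order: like call by name, but reduction also happens under binders.
noStep :: Term -> Maybe Term
noStep (App (Lam x b) a) = Just (subst x a b)
noStep (Lam x b)         = fmap (Lam x) (noStep b)
noStep (App f a)         = case noStep f of
                             Just f' -> Just (App f' a)
                             Nothing -> fmap (App f) (noStep a)
noStep (Var _)           = Nothing

-- The example from the question: λx.(λy.y)x
example :: Term
example = Lam "x" (App (Lam "y" (Var "y")) (Var "x"))
-- cbnStep example == Nothing                      (already in CBN normal form)
-- noStep  example == Just (Lam "x" (Var "x"))     (normal order reduces under the λ)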

Chomsky Normal Form Conversion Algorithm

Why do we add a new start variable with the rule S0 -> S when we want to convert a grammar to Chomsky normal form? What goes wrong if we do not do that?
At first I thought it was because of epsilon rules. But we do not remove an epsilon rule from the start variable. So, what is the benefit of adding S0 -> S?
Thanks
Depending on whether the empty string is in the language, you might have the rule $S \rightarrow \epsilon$ (or $S_0 \rightarrow \epsilon$). If $S$ could appear on the right-hand sides of rules, such a rule could delete an arbitrary number of occurrences of $S$. Because we do not want the start symbol to appear on any right-hand side, we introduce a new one.
This way every application of a rule $A \rightarrow BC$ adds exactly one symbol to the sentential form.
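As a concrete (made-up) example of why this matters, assuming the usual Sipser-style conversion, consider
S -> aSb | SS | ε
Here S appears on right-hand sides, so the rule S -> ε could erase those occurrences anywhere in a derivation, and it cannot simply be kept just because S is the start variable. After adding a new start symbol,
S0 -> S
S  -> aSb | SS | ε
the rule S -> ε is eliminated like every other ε-rule (adding S -> ab and S0 -> ε, among others), and the only remaining ε-rule, S0 -> ε, is harmless because S0 never occurs on a right-hand side.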
I think I have some explanation. If a grammar is like this:
S --> S1
S1 --> S
S1 --> a
Then, at the step of removing "unit rules", since no specific order is prescribed, we might remove S --> S1 first, and we would be left with:
S1 --> S1
S1 --> a
and the start variable has been removed from the grammar entirely.

Two DrRacket/scheme questions

Programming language: DrRacket/scheme
Hey guys,
I am preparing for my first comp sci midterm, and have two quick questions that I'd love to get some input on:
(1) What exactly is the difference between a data definition and a
structure definition?
I know that for a data definition, I can have something like:
;; a student is a
;; - (make-student ln id height gradyear), where
;; - ln is last name, and
;; - id is ID number, and
;; - height is height in inches, and
;; - gradyear is graduation year
but what is a structure definition?
(2) What exactly are the alphas and betas in contracts that come before functions, i.e.
take : num α-list -> α-list
Thank you in advance!
Quote from How to Design Programs (HtDP):
A DATA DEFINITION states, in a mixture of English and Scheme, how we
intend to use a class of structures and how we construct elements of
this class of data.
Given a problem to solve, you as a programmer must decide how your input data are represented as values. In order for others to understand your program, it is important to document how this is done in detail.
Some input data are simple and can be represented by a single number (e.g. temperature, pressure, etc).
Other types of data can be represented as a fixed number of numbers/strings (e.g. a cd can be represented as an author name (string), a title (string), and a price (number)). To pack a fixed number of values into a single value, one can use a structure.
If one needs to represent an unknown number of something, say cds, then one must use a list.
The data definition is simply your description of how the data are represented in your program.
To explain what a structure definition is, I'll quote from HtDP:
A STRUCTURE DEFINITION is, as the term says, a new form of definition.
Here is DrScheme's definition of posn:
(define-struct posn (x y))
Let us look at the cd example again. Since there are no builtin "cd values" in Racket, one must define what a cd value is. This is done with a structure definition:
(define-struct cd (author title price))
After the definition is made one can use make-cd to construct cd values.
In order to explain that author and title are expected to be strings, and price is expected to be a number, you must write down a data definition that explains how make-cd is supposed to be used.
I forgot to answer your second question:
(2) What exactly are the alphas and betas in contracts that come
before functions, i.e.
take : num α-list -> α-list
The alpha is supposed to be replaced with a type.
If take gets an integer-list (a list of integers) as input, then the output is an integer-list.
If take gets a string-list (a list of strings) as input, then the output is a string-list.
In short, if take gets a list of values of some type (alpha) as input, then the output is a list of values of the same type (also alpha).
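For comparison (this is Haskell rather than the Beginning Student language, and only an analogy), the α plays the same role as a type variable in a polymorphic type. The standard Prelude function take has exactly this shape:
take :: Int -> [a] -> [a]
-- take 2 [1, 2, 3]        gives [1, 2]       (here a is a number type)
-- take 2 ["x", "y", "z"]  gives ["x", "y"]   (here a is String)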
Jens Axel Soegaard's answer is correct, but does not elaborate on the relationship between the two, which I would state as follows.
A DATA DEFINITION describes to the reader how a value is going to be represented using Racket values.
Sometimes, the "built-in" values aren't enough, and we need to define a new kind of data, like the "CD" that Jens refers to. In order to define a new kind of data, we often use a STRUCTURE DEFINITION.
Put differently: some data definitions require structure definitions. Some do not.
If I were to elaborate any more, I would just be badly recapitulating HtDP; if what I've said thus far does not make sense, go read HtDP. :)

Resources