What's the difference between ::= and := in Oracle?

This is impossible to search for on Google, Bing, Yahoo, etc., because it uses symbols. How annoying!
What's the difference between ::= and := in Oracle's PL/SQL?

I am not sure about ::=, as I have not seen it in Oracle, but Wikipedia says this about :=:
In computer programming languages, the equals sign typically denotes either a boolean operator to test equality of values (e.g. as in Pascal or Eiffel), which is consistent with the symbol's usage in mathematics, or an assignment operator (e.g. as in C-like languages). Languages making the former choice often use a colon-equals (:=) or ≔ to denote their assignment operator. Languages making the latter choice often use a double equals sign (==) to denote their boolean equality operator.
Also check here:
The assignment operator in PL/SQL is a colon plus an equal sign (:=). PL/SQL string literals are delimited by single quotes.

The only place (that I'm aware of) where ::= is used is in the syntactical description of PL/SQL (or any other language, for that matter) using Backus-Naur Form (BNF). The ::= symbol is a part of the BNF descriptive language itself, not a part of the language being described. There are many tutorials for BNF -- have fun!
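For instance, a BNF-style description of PL/SQL's assignment statement might read (a sketch, not taken from any official grammar):
assignment_statement ::= variable_name ':=' expression ';'
Here ::= belongs to the BNF metalanguage (it separates the rule being defined from its definition), while := is a literal token of PL/SQL itself.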

Related

Priority and association of terms in an expression evaluator with exponentiation operator

I am developing an expression evaluator. Which associativity is considered correct for an expression containing more than one exponentiation operator? For example, for the expression "10-2^2^0.5": is it "10-(2^2)^0.5" = 8, or "10-2^(2^0.5)" = 7.33485585731?
The result differs across languages and (possibly) interpreters. However, most of them use the right-associative rule.
In Lua, print(10-2^2^0.5) prints 7.3348, while in Visual Basic, Console.WriteLine(10-2^2^0.5) prints 8.
The fact that different systems use different rules suggests to me that there is no single defined rule for this.
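Python is another data point: its ** exponentiation operator is right-associative, so it agrees with Lua. A quick check:
# Python's ** is right-associative: 2**2**0.5 parses as 2**(2**0.5)
print(10 - 2**2**0.5)    # ≈ 7.3349, like Lua
print(10 - (2**2)**0.5)  # 8.0, the left-associative reading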

Are operators a subset of statements?

Basically, in all high-level languages (as far as I know) we have two main categories of language mechanisms to create a program: statements and expressions.
Usually statements are represented by some subset of the language's keywords: if/else/switch, for/foreach/while, {} (or BEGIN/END), etc.
Expressions are represented by literals (which represent some data) and operators: literals: 1, 2, -100, testTest, etc.; operators: +, -, /, *, ==, ===, etc.
If we think deeper, we can notice that statements usually answer the question "what?" and expressions the question "how?". Statements represent actions; expressions represent the context of actions.
Then we may look at the parts of expressions again: literals and operators. Operators are actions too.
And here is my question again: are operators a subset of statements?
P.S. Generally, I understand that statements and expressions are used together to achieve some programming goal. The separation of these categories is mostly theoretical.
In general, "operator" describes a kind of syntactic form, which can be used to produce an expression, a statement, or some other class of language entity. So, technically, the answer to your question is, "no".
For example, Haskell uses the | operator in an algebraic data type declaration, which is neither an expression nor a statement:
data Maybe a = Just a | Nothing

Is a single constant value considered an expression?

After reading this answer on a CSS question, I wonder:
In Computer Science, is a single, constant value considered an expression?
In other words, is 7px an expression? What about just 7?
Quoting Wikipedia, emphasis mine:
An expression in a programming language is a combination of one or more explicit values, constants, variables, operators, and functions that the programming language interprets [...] and computes to produce [...] another value. This process, as for mathematical expressions, is called evaluation.
Quoting MS Docs, emphasis mine:
An expression is a sequence of one or more operands and zero or more operators that can be evaluated to a single value, object, method, or namespace. Expressions can consist of a literal value [...].
These both seem to indicate that values are expressions. However, one could argue that a value will not be evaluated, as it is already only a value, and therefore doesn't qualify.
Quoting Techopedia, emphasis mine:
[...] In terms of structure, experts point out that an expression inherently needs at least one 'operand' or value that is acted on, and must have one or more operators. [...]
This suggests that even x does not qualify as an expression, as it lacks one or more operators.
It depends on the exact definition of course, but under most definitions expressions are defined recursively with constants being one of the basis cases. So, yes, literal values are special cases of expressions.
You can look at grammars for various languages, such as the one for Python.
If you trace through the grammar, you see that an expr can be an atom, which includes number literals. The fact that number literals are Python expressions is also obvious when you consider productions like:
comparison: expr (comp_op expr)*
This is the production which captures expressions like x < 7, which wouldn't be captured if 7 isn't a valid expression.
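You can also check this directly with CPython's ast module; a small sketch:
import ast
# A bare literal parses as a complete expression in "eval" mode.
print(ast.dump(ast.parse("7", mode="eval")))
# prints something like: Expression(body=Constant(value=7))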
In Computer Science, is a single, constant value considered an expression?
It depends entirely on the context. For example, FORTRAN, BASIC, and COBOL all have line numbers. Those are numeric constant values that are not expressions.
In other contexts (even within those languages) a numeric constant may be an expression.

How to represent vertical alignment of code syntax using BNF, EBNF, etc.?

How do you say (in BNF, EBNF, etc.) that any two or more letters are placed in the same vertical alignment?
E.g. in Python 2.x, we have what we call indentation.
def hello():
    print "hello,"
    print "world"
hello()
Note that the letter p (second line) is placed in the same vertical alignment as the letter p (third line).
Further example (in markdown):
MyHeader
========
topic
-----
Note M and the first = are placed in the same vertical alignment (also r and last =, t and first -, c and last -)
My question is: how do you represent this vertical alignment of letters using BNF, EBNF, etc.?
Further note:
The point of this question is to find a way to represent vertical alignment of code, not just to learn how to write a BNF or EBNF grammar for Python or Markdown.
You can parse an indentation-sensitive language (like Python or Haskell) by using a little hack, which is well-described in the Python language reference's chapter on lexical analysis. As described, the lexical analyzer turns leading whitespace into INDENT and DEDENT tokens [Note 1], which are then used in the Python grammar in a straightforward fashion. Here's a small excerpt:
suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT
statement ::= stmt_list NEWLINE | compound_stmt
stmt_list ::= simple_stmt (";" simple_stmt)* [";"]
while_stmt ::= "while" expression ":" suite ["else" ":" suite]
So if you are prepared to describe (or reference) the lexical analysis algorithm, the BNF is simple.
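You can see those tokens directly with CPython's tokenize module; a small sketch:
import io, tokenize
src = "def hello():\n    print('hello,')\n    print('world')\nhello()\n"
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    if tok.type in (tokenize.INDENT, tokenize.DEDENT, tokenize.NEWLINE):
        print(tokenize.tok_name[tok.type])   # prints a NEWLINE/INDENT/DEDENT stream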
However, you cannot actually write that algorithm as a context-free grammar, because it is not context-free. (I'll leave out the proof, but it's similar to the proof that a^n b^n c^n is not context-free, which you can find in most elementary formal language textbooks, and all over the internet.)
ISO standard EBNF (a free PDF is available) provides a way of including "extensions which a user may require": a Special-sequence, which is any text not containing a ? surrounded on both sides by a ?. So you could abuse the notation by including [Note 2]:
DEDENT = ? See section 2.1.8 of https://docs.python.org/3.3/reference/ ? ;
Or you could insert a full description of the algorithm. Of course, neither of those techniques will allow a parser generator to produce an accurate lexical analyzer, but it would be a reasonable way of communicating intent to a human reader.
It's worth noting that EBNF itself uses a special sequence to define one of its productions:
(* see 4.7 *) syntactic exception
= ? a syntactic-factor that could be replaced
by a syntactic-factor containing no
meta-identifiers
? ;
Notes
The lexical analyzer also converts some physical newline characters into NEWLINE tokens, while making other newline characters vanish.
EBNF normally uses the syntax = rather than ::= for a production, and insists that they be terminated with ;. Comments are enclosed between (* and *).

What programming languages are context-free?

Or, to be a little more precise: which programming languages are defined by a context-free grammar?
From what I gather C++ is not context-free due to things like macros and templates. My gut tells me that functional languages might be context free, but I don't have any hard data to back that up with.
Extra rep for concise examples :-)
What programming languages are context-free? [...]
My gut tells me that functional languages might be context-free [...]
The short version: There are hardly any real-world programming languages that are context-free in any meaning of the word. Whether a language is context-free or not has nothing to do with it being functional. It is simply a matter of how complex the syntax is.
Here's a CFG for the imperative language Brainfuck:
Program → Instr Program | ε
Instr → '+' | '-' | '>' | '<' | ',' | '.' | '[' Program ']'
And here's a CFG for the functional SKI combinator calculus:
Program → E
E → 'S' E E E
E → 'K' E E
E → 'I'
E → '(' E ')'
These CFGs recognize all valid programs of the two languages because they're so simple.
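To make that concrete, here is a small sketch of a recursive-descent recognizer in Python that follows the Brainfuck CFG above directly:
def parse_program(src, i=0):
    # Program → Instr Program | ε
    while i < len(src):
        if src[i] in '+-<>,.':            # Instr → one of the simple commands
            i += 1
        elif src[i] == '[':               # Instr → '[' Program ']'
            i = parse_program(src, i + 1)
            if i >= len(src) or src[i] != ']':
                raise SyntaxError("unmatched '['")
            i += 1
        else:
            break                         # ε (or a ']' for the caller to consume)
    return i

def is_valid(src):
    try:
        return parse_program(src) == len(src)
    except SyntaxError:
        return False

print(is_valid('+[->+<]'))   # True
print(is_valid('+[->+<'))    # False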
The longer version: Usually, context-free grammars (CFGs) are only used to roughly specify the syntax of a language. One must distinguish between syntactically correct programs and programs that compile/evaluate correctly. Most commonly, compilers split language analysis into syntax analysis that builds and verifies the general structure of a piece of code, and semantic analysis that verifies the meaning of the program.
If by "context-free language" you mean "... for which all programs compile", then the answer is: hardly any. Languages that fit this bill hardly have any rules or complicated features, like the existence of variables, whitespace-sensitivity, a type system, or any other context: Information defined in one place and relied upon in another.
If, on the other hand, "context-free language" only means "... for which all programs pass syntax analysis", the answer is a matter of how complex the syntax alone is. There are many syntactic features that are hard or impossible to describe with a CFG alone. Some of these are overcome by adding additional state to parsers for keeping track of counters, lookup tables, and so on.
Examples of syntactic features that are not possible to express with a CFG:
Indentation- and whitespace-sensitive languages like Python and Haskell. Keeping track of arbitrarily nested indentation levels is essentially context-sensitive and requires separate counters for the indentation level: both how many spaces are used for each level and how many levels there are.
Allowing only a fixed level of indentation using a fixed number of spaces would work by duplicating the grammar for each level of indentation, but in practice this is inconvenient.
The C Typedef Parsing Problem says that C programs are ambiguous during lexical analysis because the lexer cannot know from the grammar alone if something is a regular identifier or a typedef alias for an existing type.
The example is:
typedef int my_int;
my_int x;
At the semicolon, the type environment needs to be updated with an entry for my_int. But if the lexer has already looked ahead to my_int, it will have lexed it as an identifier rather than a type name.
In context-free grammar terms, the X → ... rule that would trigger on my_int is ambiguous: It could be either one that produces an identifier, or one that produces a typedef'ed type; knowing which one relies on a lookup table (context) beyond the grammar itself.
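A common workaround is the so-called "lexer hack": the lexer consults a table of typedef names that the parser fills in as declarations are parsed. A rough sketch in Python, with all names made up for illustration:
# The same spelling is classified differently depending on context.
typedef_names = set()

def classify(word):
    return ('TYPE_NAME', word) if word in typedef_names else ('IDENTIFIER', word)

print(classify('my_int'))        # ('IDENTIFIER', 'my_int')
typedef_names.add('my_int')      # after the parser sees: typedef int my_int;
print(classify('my_int'))        # ('TYPE_NAME', 'my_int')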
Macro- and template-based languages like Lisp, C++, Template Haskell, Nim, and so on. Since the syntax changes as it is being parsed, one solution is to make the parser into a self-modifying program. See also Is C++ context-free or context-sensitive?
Often, operator precedence and associativity are not expressed directly in CFGs even though it is possible. For example, a CFG for a small expression grammar where ^ binds tighter than ×, and × binds tighter than +, might look like this:
E → E ^ E
E → E × E
E → E + E
E → (E)
E → num
This CFG is ambiguous, however, and is often accompanied by a precedence / associativity table saying e.g. that ^ binds tightest, × binds tighter than +, that ^ is right-associative, and that × and + are left-associative.
Precedence and associativity can be encoded into a CFG in a mechanical way such that it is unambiguous and only produces syntax trees where the operators behave correctly. An example of this for the grammar above:
E₀ → E₁ EA
EA → + E₁ EA
EA → ε
E₁ → E₂ EM
EM → × E₂ EM
EM → ε
E₂ → E₃ EP
EP → ^ E₃ EP
EP → ε
E₃ → num
E₃ → (E₀)
But ambiguous CFGs + precedence / associativity tables are common because they're more readable and because various types of LR parser generator libraries can produce more efficient parsers by eliminating shift/reduce conflicts instead of dealing with an unambiguous, transformed grammar of a larger size.
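As an illustration, here is a small recursive-descent parser sketch in Python that follows the layering of the transformed grammar above; the crude tokenizer and tuple-based syntax trees are simplifying assumptions, and the + and × chains are folded left-associatively while the tree is built:
import re

def tokenize(src):
    # Crude tokenizer for the toy grammar: integers, parentheses, +, ×, ^.
    return re.findall(r'\d+|[()+×^]', src)

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def e0():                        # E₀ → E₁ EA,  EA → + E₁ EA | ε
        node = e1()
        while peek() == '+':
            eat()
            node = ('+', node, e1()) # fold the flat + chain left-associatively
        return node

    def e1():                        # E₁ → E₂ EM,  EM → × E₂ EM | ε
        node = e2()
        while peek() == '×':
            eat()
            node = ('×', node, e2())
        return node

    def e2():                        # E₂ → E₃ EP
        return ep(e3())

    def ep(left):                    # EP → ^ E₃ EP | ε  (right-associative)
        if peek() == '^':
            eat()
            return ('^', left, ep(e3()))
        return left

    def e3():                        # E₃ → num | ( E₀ )
        if peek() == '(':
            eat()
            node = e0()
            assert eat() == ')'
            return node
        return ('num', eat())

    return e0()

print(parse(tokenize('1+2×3^2^2')))
# ('+', ('num', '1'), ('×', ('num', '2'),
#                          ('^', ('num', '3'), ('^', ('num', '2'), ('num', '2')))))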
In theory, all finite sets of strings are regular languages, and so all legal programs of bounded size are regular. Since regular languages are a subset of context-free languages, all programs of bounded size are context-free. The argument continues,
While it can be argued that it would be an acceptable limitation for a language to allow only programs of less than a million lines, it is not practical to describe a programming language as a regular language: The description would be far too large.
     — Torben Mogensen, Basics of Compiler Design, ch. 2.10.2
The same goes for CFGs. To address your sub-question a little differently,
Which programming languages are defined by a context-free grammar?
Most real-world programming languages are defined by their implementations, and most parsers for real-world programming languages are either hand-written or use a parser generator that extends context-free parsing. It is unfortunately not that common to find an exact CFG for your favourite language. When you do, it's usually in Backus-Naur form (BNF), or a parser specification that most likely isn't purely context-free.
Examples of grammar specifications from the wild:
BNF for Standard ML
BNF-like for Haskell
BNF for SQL
Yacc grammar for PHP
The set of programs that are syntactically correct is context-free for almost all languages.
The set of programs that compile is not context-free for almost all languages. For example, if the set of all compiling C programs were context free, then by intersecting with a regular language (also known as a regex), the set of all compiling C programs that match
^int main\(void\) { int a+; a+ = a+; return 0; }$
would be context-free, but this is clearly isomorphic to the language a^k b a^k b a^k, which is well known not to be context-free.
Depending on how you understand the question, the answer changes. But IMNSHO, the proper answer is that all modern programming languages are in fact context-sensitive. For example, there is no context-free grammar that accepts only syntactically correct C programs. People who point to yacc/bison context-free grammars for C are missing the point.
To go for the most dramatic example of a non-context-free grammar, Perl's grammar is, as I understand it, Turing-complete.
If I understand your question, you are looking for programming languages which can be described by context free grammars (cfg) so that the cfg generates all valid programs and only valid programs.
I believe that most (if not all) modern programming languages are therefore not context free. For example, once you have user defined types (very common in modern languages) you are automatically context sensitive.
There is a difference between verifying syntax and verifying semantic correctness of a program. Checking syntax is context free, whereas checking semantic correctness isn't (again, in most languages).
This, however, does not mean that such a language cannot exist. Untyped lambda calculus, for example, can be described using a context free grammar, and is, of course, Turing complete.
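In the same style as the grammars above, a sketch of a CFG for untyped lambda calculus (with Var standing for variable names, handled by the lexer):
Term → Var
Term → 'λ' Var '.' Term
Term → '(' Term Term ')'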
Most modern programming languages are not context-free languages. As a sketch of a proof: the machine corresponding to CFLs, the pushdown automaton (PDA), cannot recognize matching-string languages like {ww | w is a string}, and most programming languages require exactly that kind of matching between a declaration and a later use of the same identifier.
Example:
int fa; // w
fa = 1; // the parser effectively has to match this against the earlier w, giving ww
VHDL is somewhat context sensitive:
VHDL is context-sensitive in a mean way. Consider this statement inside a process:
jinx := foo(1);
Well, depending on the objects defined in the scope of the process (and its enclosing scopes), this can be either:
A function call
Indexing an array
Indexing an array returned by a parameter-less function call
To parse this correctly, a parser has to carry a hierarchical symbol table (with enclosing scopes), and the current file isn't even enough. foo can be a function defined in a package. So the parser should first analyze the packages imported by the file it's parsing, and figure out the symbols defined in them.
This is just an example. The VHDL type/subtype system is a similarly context-sensitive mess that's very difficult to parse.
(Eli Bendersky, “Parsing VHDL is [very] hard”, 2009)
Let's take Swift, where the user can define operators including operator precedence and associativity. For example, the operators + and * are actually defined in the standard library.
A context free grammar and a lexer may be able to parse a + b - c * d + e, but the semantics is "five operands a, b, c, d and e, separated by the operators +, -, * and +". That's what a parser can achieve without knowing about operators. A context free grammar and a lexer may also be able to parse a +-+ b -+- c, which is three operands a, b and c separated by operators +-+ and -+-.
A parser can "parse" a source file according to a context-free Swift grammar, but that's nowhere near the job done. Another step would be collecting knowledge about operators, and then change the semantics of a + b - c * d + e to be the same as operator+ (operator- (operator+ (a, b), operator* (c, d)), e).
So there is (or maybe there is, I havent checked to closely) a context free grammar, but it only gets you so far to parsing a program.
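A rough sketch of that second step in Python: re-nesting a flat operand/operator sequence using a precedence table (the table and the all-left-associative assumption are made-up simplifications):
# Re-nest a flat parse using operator precedence, the way a later phase might.
PRECEDENCE = {'+': 1, '-': 1, '*': 2}   # hypothetical table; all left-associative

def renest(operands, operators):
    def build(min_prec, i):
        # i indexes operators; operands[i] is the operand to the left of operators[i]
        left = operands[i]
        while i < len(operators) and PRECEDENCE[operators[i]] >= min_prec:
            op = operators[i]
            right, i = build(PRECEDENCE[op] + 1, i + 1)
            left = (op, left, right)
        return left, i
    return build(0, 0)[0]

print(renest(['a', 'b', 'c', 'd', 'e'], ['+', '-', '*', '+']))
# ('+', ('-', ('+', 'a', 'b'), ('*', 'c', 'd')), 'e')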
I think Haskell and ML support context-free grammars. See this link for Haskell.
