ASN.1 compiler error token "SYNTAX" unexpected - snmp

I'm currently trying to compile a snippet of ASN.1 code. It looks as follows:
RFC1213-MIB DEFINITIONS ::= BEGIN
IMPORTS
    experimental FROM RFC1155-SMI
    OBJECT-TYPE  FROM RFC-1212;
mypersonaltest OBJECT IDENTIFIER ::= { experimental 1 }
tester OBJECT-TYPE
    SYNTAX      INTEGER
    ACCESS      read-write
    STATUS      optional
    DESCRIPTION "This is a test"
    ::= { mypersonaltest 1 }
END
Now I'm always getting an error on the line SYNTAX INTEGER:
ASN.1 grammar parse error near line 9 (token "SYNTAX"): syntax error, unexpected TOK_SYNTAX, expecting TOK_PPEQ
Actually, this should work according to the example I got here. What am I doing wrong?

This looks like an old version of that specification, one that uses ASN.1 MACRO notation instead of ASN.1 Information Object Classes. The MACRO notation was removed from ASN.1 in 1994, so please consider finding a newer version of your specification that uses Information Object Classes instead of the obsolete MACRO notation.
It is also possible that the tool you are using simply does not support MACRO notation. You could try the free online compiler at http://asn1-playground.oss.com/ which I believe still supports it. Note that the definition of OBJECT-TYPE must be seen by the compiler before "tester" (which uses the OBJECT-TYPE macro) is parsed.
I will repeat that you will save yourself many headaches if you use a version of your ASN.1 specification that uses Information Object Classes rather than the obsolete ASN.1 MACRO notation.

It should be OBJECT-TYPE, not OBJECT TYPE. There is something wrong with the MIB document, and you should try to find a proper version of it.

Related

How are types in logic programming language implemented using BNFC?

I've been working on implementing a logic programming language using BNFC. The problem I'm having is related to the typing rules. In the book "Implementing Programming Languages" by A. Ranta, types are included in the LBNF syntax, as in
Tbool. Type ::= "bool" ;
Tdouble. Type ::= "double" ;
Tstring. Type ::= "string";
I understand that for grammars like C's it's important to add types, as they're integral to declarations and therefore need to be parsed by the front end. Further on in the book, the type checker is written in Haskell or Java. However, in a logic programming language the types aren't so explicit; they are declared separately. The example syntax of types is encoded as:
tid: name_type
ty: type
varTy: tid -> ty
arrTy: ty x ty -> ty
So the question is: where in the code does the syntax of types go? Whenever I try to add the types in BNFC it just doesn't make much sense, and the tested input doesn't parse properly. The book has a good example of the C grammar, but it doesn't provide a complete picture of how the front end created by BNFC and the type checker are connected, how information is passed from one to the other, etc.
I'm not familiar with BNFC, but from what I can see it's just some kind of compiler-compiler specification.
So the question is where in the code does the syntax of types go?
The classic way is not to handle type checking at the syntax level, but rather at the semantic level. Or, if your language is an interpreted one, at runtime, during interpretation.
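As a rough illustration of that separation (plain C++ with a made-up AST, nothing BNFC-specific): the front end only builds the tree, and type checking is a separate pass that walks it afterwards.
// Hypothetical AST for illustration only; BNFC would generate its own
// abstract-syntax classes plus a traversal skeleton you fill in.
#include <memory>
#include <stdexcept>

enum class Ty { Bool, Double, String };

struct Expr {
    enum class Kind { Lit, Eq } kind;   // a typed literal, or "lhs == rhs"
    Ty litType{};                       // used when kind == Lit
    std::shared_ptr<Expr> lhs, rhs;     // used when kind == Eq
};

// The "semantic level": a recursive walk over the tree the parser produced.
// Type errors are reported here, not during parsing.
Ty typeCheck(const Expr& e) {
    if (e.kind == Expr::Kind::Lit)
        return e.litType;
    Ty l = typeCheck(*e.lhs);
    Ty r = typeCheck(*e.rhs);
    if (l != r)
        throw std::runtime_error("type mismatch in ==");
    return Ty::Bool;                    // a comparison yields bool
}
In that setup, the type language (varTy, arrTy, ...) lives in the checker's own data structures rather than in the LBNF grammar; the grammar only needs enough structure to build the tree.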

How to make Berkeley UPC work with complex numbers?

I am having some problems compiling UPC code with complex numbers on my laptop (Mac OS X; the code will eventually run on a Linux CentOS machine). I was trying to use FFTW in the code, but that returned a lot of errors.
#include </Users/avinash/Programs/fftwinstall/include/fftw3.h>
Error during remote HTTP translation:
upcc: error during UPC-to-C translation (sgiupc stage):
In file included from code1xc.upc:9:
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: syntax error before `fftwq_complex'
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: warning: type defaults to `int' in declaration of `fftwq_complex'
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: warning: data definition has no type or storage class
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: syntax error before `fftwq_complex'
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: syntax error before `fftwq_complex'
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: syntax error before `fftwq_complex'
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: syntax error before `fftwq_complex'
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: syntax error before `fftwq_complex'
/Users/avinash/Programs/fftwinstall/include/fftw3.h:373: syntax error before `fftwq_complex'
......
Then I did some google searching and I came across this link – https://hpcrdm.lbl.gov/pipermail/upc-users/2013-December/001758.html
Apparently, BUPC doesn't work with complex numbers on some platforms - http://upc.lbl.gov/docs/user/index.shtml
Programs which #include complex.h, and/or tgmath.h do not work on
certain platforms.
So I tried to compile the simple code using complex.h mentioned in the online query, and even that returned errors.
#include <upc.h>
#include <complex.h>

int main()
{
    return 0;
}
Error during remote HTTP translation:
upcc: error during UPC-to-C translation (sgiupc stage):
In file included from code1xc.upc:7:
/usr/include/complex.h:45: syntax error before `cacosf'
/usr/include/complex.h:46: syntax error before `cacos'
/usr/include/complex.h:47: syntax error before `cacosl'
/usr/include/complex.h:49: syntax error before `casinf'
/usr/include/complex.h:50: syntax error before `casin'
....
So, what exactly am I doing wrong? I would appreciate any help. Is this an issue only for Berkeley UPC, or for GNU UPC as well? My project requires shared complex arrays, and I think there must be a way, as FFTs have been mentioned many times in online lectures.
Thanks for your help!
Portable UPC programs do not rely on C99's complex.h header, because it is not universally supported across compilers and systems. Instead, they often define their own complex type as a two-element struct (sketched below).
For example see this simple FT implementation
Another common approach is to keep separate arrays of real and imaginary components, depending on the needs of the application and the data layout expected by any client math libraries.
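A rough sketch of the struct approach (the names are made up for illustration and not part of any UPC or FFTW API; since it never touches complex.h, the UPC translator has nothing to choke on):
/* Hand-rolled complex type: plain C-compatible code, illustration only. */
typedef struct {
    double re;
    double im;
} my_complex;

static my_complex my_cadd(my_complex a, my_complex b) {
    my_complex r;
    r.re = a.re + b.re;
    r.im = a.im + b.im;
    return r;
}

static my_complex my_cmul(my_complex a, my_complex b) {
    my_complex r;
    r.re = a.re * b.re - a.im * b.im;   /* (a+bi)(c+di) = (ac-bd) + (ad+bc)i */
    r.im = a.re * b.im + a.im * b.re;
    return r;
}
A struct like this can sit in a UPC shared array just like any other plain struct, which is usually the whole point of avoiding complex.h here.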
However, neither of these is likely to help if you need to do complex trigonometry or use a library that relies specifically on C99 complex. Assuming you have a C compiler that supports complex, you could use it to compile a serial module and link it into your UPC program. Alternatively, you could try the Clang UPC front end, which I believe supports C99 complex on some platforms.

The process of syntax analysis in compiler construction

I am currently reading the Dragon Book.
In chapter 2 it explains the syntax analysis process. I'm struggling to understand the entire picture of this process; reading the book, I'm sometimes confused about the order in which things happen in the syntax analyzer.
So from my understanding:
A syntax analyzer contains a syntax definition, which defines the language using a context-free grammar. Is this basically 'the first part' of the syntax analyzer? So does the syntax analyzer include a syntax definition?
After that, the tokens generated by the Lexical Analyzer go into the Syntax Analyzer. The syntax analyzer then checks, via the CFG, whether the input string is valid by generating a parse tree.
And from my understanding, this parse tree will eventually become an (abstract) syntax tree (which contains less detail than the parse tree). This tree then goes into the semantic analyzer.
Can someone please confirm whether my rough 'overall picture' understanding of the syntax analyzer is correct and in the right order?
A syntax analyzer contains a syntax definition, which defines the language using a context-free grammar.
No. In table-driven parsers it contains a table which has been generated from the grammar and which drives the parser. In hand-written parsers, the code structure strongly reflects the grammar (see the sketch at the end of this answer). In neither case would it be correct to say that the parser actually 'contains' the grammar. It parses the input, according to the grammar, somehow.
Is this basically 'the first part' of the syntax analyzer?
No. I don't know where you get 'first part' from.
So does the syntax analyzer include a syntax definition?
Only as described above.
After that
No, before that
the tokens generated by the Lexical Analyzer go into the Syntax Analyzer. The syntax analyzer then checks, via the CFG, whether the input string is valid
Correct.
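To make the table-driven vs. hand-written distinction concrete, here is a tiny hand-written sketch (illustrative C++, not from the Dragon Book) where the code shape mirrors the grammar rule expr -> term ('+' term)* instead of storing the grammar as data:
#include <cctype>
#include <string>

// Grammar being parsed:
//   expr -> term ('+' term)*
//   term -> DIGIT
struct Parser {
    std::string in;
    size_t pos = 0;

    bool term() {                                   // term -> DIGIT
        if (pos < in.size() && std::isdigit(static_cast<unsigned char>(in[pos]))) {
            ++pos;
            return true;
        }
        return false;
    }

    bool expr() {                                   // expr -> term ('+' term)*
        if (!term()) return false;
        while (pos < in.size() && in[pos] == '+') {
            ++pos;
            if (!term()) return false;
        }
        return true;
    }
};
A table-driven parser for the same grammar would instead be a generic loop interpreting action/goto tables that a generator produced from the grammar; the grammar itself never appears in the code.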

How to go about adding a symbol table interface to boost::spirit::lex based lexer?

To implement support for typedef you'd need to look up the symbol table whenever the lexer identifies an identifier, and return a different token. This is easily done in a flex lexer. I am trying to use Boost Spirit to build the parser and have looked through the examples, but none of them pass any context information between the lexer and the parser. What would be the simplest way to do this in the mini C compiler tutorial example?
That's equally easy in Spirit.Lex. All you need is the ability to invoke code after matching a token, but before returning the token to the parser. That's what lexer semantic actions are for:
this->self += identifier[ lex::_tokenid = lookup(lex::_val) ];
where lex::_tokenid is a placeholder referring to the token id of the current token, lex::_val refers to the matched token value (at that point most probably an iterator_range<> pointing into the underlying input stream), and lookup is a lazy function (i.e. a function object, such as a phoenix::function) implementing the actual lookup logic.
I'll try to find some time to implement a small example to be added to Spirit demonstrating this technique.
To implement support for typedef you'd need to look up the symbol table whenever the lexer identifies an identifier, and return a different token.
Isn't that putting the cart before the horse? The purpose of a lexer is to take text input and turn it into a stream of simple tokens. This makes the parser easier to specify and deal with, as it doesn't have to handle low-level things like "these are the possible representations of a float" and such.
The language-based mapping of an identifier token to a symbol (i.e. typedef) is not something that a lexer should be doing. That's something that happens at the parsing stage, or perhaps even later as a post-process of an abstract syntax tree.
Or, to put it another way, there is a good reason why the qi::symbols is a parser object and not a lexer one. It simply isn't the lexer's business to handle this sort of thing.
In any case, it seems to me that what you want to do is build a means (in the parser) to map an identifier token to an object that represents the type that has been typedef'd. A qi::symbols parser seems to be the way to do this kind of thing (see the sketch below).
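A minimal sketch of that idea: qi::symbols used as a typedef-name table that the parser (not the lexer) consults. The names and the integer type ids are made up for illustration; only the qi::symbols / qi::parse usage is actual Spirit API.
#include <boost/spirit/include/qi.hpp>
#include <iostream>
#include <string>

namespace qi = boost::spirit::qi;

int main() {
    // Maps a typedef'd name to whatever represents its type (an int id here).
    qi::symbols<char, int> typedef_names;
    typedef_names.add("size_t", 1)("my_vec", 2);

    std::string input = "my_vec";
    std::string::iterator first = input.begin();
    int type_id = 0;

    // The symbol table is itself a parser: it matches any registered name
    // and exposes the associated value as its attribute.
    bool ok = qi::parse(first, input.end(), typedef_names, type_id);
    std::cout << ok << " " << type_id << "\n";   // prints: 1 2
    return 0;
}
Because the table is an ordinary object, a semantic action in the grammar can also add to it when it sees a typedef declaration, which is how the "context" ends up living in the parser rather than in the lexer.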

What is an example of a lexical error and is it possible that a language has no lexical errors?

For our compiler theory class, we are tasked with creating a simple interpreter for a programming language of our own design. I am using JFlex and CUP as my generators, but I'm a bit stuck on what a lexical error is. Also, is it recommended that I use the state feature of JFlex? It feels wrong, as it seems like the parser is better suited to handling that aspect. And do you recommend any other tools to create the language? I'm sorry if I'm impatient, but it's due on Tuesday.
A lexical error is any input that can be rejected by the lexer. This generally results from token recognition falling off the end of the rules you've defined. For example (in no particular syntax):
[0-9]+ ===> NUMBER token
[a-zA-Z] ===> LETTERS token
anything else ===> error!
If you think about a lexer as a finite state machine that accepts valid input strings, then errors are going to be any input strings that do not result in that finite state machine reaching an accepting state.
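As a rough illustration of that finite-state-machine view (plain C++, not JFlex/CUP, loosely following the rules above):
#include <cctype>
#include <iostream>
#include <string>

enum class Tok { Number, Letters, Error, End };

// Tries each rule at the current position; when nothing matches, that is a
// lexical error - the "machine" has no accepting state to move to.
Tok nextToken(const std::string& in, size_t& pos) {
    if (pos >= in.size()) return Tok::End;
    if (std::isdigit(static_cast<unsigned char>(in[pos]))) {        // [0-9]+   -> NUMBER
        while (pos < in.size() && std::isdigit(static_cast<unsigned char>(in[pos]))) ++pos;
        return Tok::Number;
    }
    if (std::isalpha(static_cast<unsigned char>(in[pos]))) {        // [a-zA-Z]+ -> LETTERS
        while (pos < in.size() && std::isalpha(static_cast<unsigned char>(in[pos]))) ++pos;
        return Tok::Letters;
    }
    ++pos;                                                          // anything else -> error!
    return Tok::Error;
}

int main() {
    std::string input = "abc123$";
    size_t pos = 0;
    for (Tok t; (t = nextToken(input, pos)) != Tok::End; )
        std::cout << static_cast<int>(t) << ' ';    // prints: 1 0 2 (LETTERS NUMBER ERROR)
    std::cout << '\n';
}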
The rest of your question was rather unclear to me. If you already have some tools you are using, then perhaps you're best to learn how to achieve what you want to achieve using those tools (I have no experience with either of the tools you mentioned).
EDIT: Having re-read your question, there's a second part I can answer. It is possible that a language could have no lexical errors - it's the language in which any input string at all is valid input.
A lexical error could be a character the language does not accept at all, such as '#' in Java, which is rejected as a lexical error because it cannot appear in an identifier or in any other token.
Lexical errors are the errors thrown by your lexer when it is unable to continue, which means there is no way to recognise a lexeme as a valid token for your lexer. Syntax errors, on the other hand, will be thrown by your parser when a given sequence of already recognised valid tokens doesn't match any of the right-hand sides of your grammar rules.
it feels wrong as it seems like the parser is better suited to handling that aspect
No. It seems that way because context-free languages include regular languages (meaning that a parser can do the work of a lexer). But consider that a parser is a stack automaton, and you would be employing extra computer resources (the stack) to recognise something that doesn't require a stack to be recognised (a regular expression). That would be a suboptimal solution.
NOTE: by regular expression, I mean... regular expression in the Chomsky Hierarchy sense, not a java.util.regex.* class.
A lexical error is when the input doesn't belong to any of these categories:
keywords: "if", "else", "main"...
symbols: '=', '+', ';'...
double symbols: ">=", "<=", "!=", "++"
variables: [a-zA-Z]+[0-9]*
numbers: [0-9]*
Examples: 9var is an error (a number before letters is neither a variable nor a keyword), and $ is an error as well.
What I don't know is whether something like more than one symbol in a row is accepted, like "+-".
A compiler can catch an error only when it has the grammar in it!
Whether lexical errors are caught at all depends on the compiler itself, i.e. on whether it was given the capacity (scope) for catching them.
It is decided during the development of the compiler which types of lexical error will be handled, and how (according to the grammar).
Usually all well-known, widely used compilers have this capability.
