Creating a C AST in ANTLR - syntax

I searched plenty of sources but haven't found anything yet.
I'm trying to create an AST (in Java) for an input file written in C. I found ANTLR, and it works with the "official" C grammar from the ANTLR examples. Nevertheless, I have read about editing a grammar for syntax-tree construction, e.g. with "^" to mark roots, etc.
When I set output=AST in C.g, an empty graph is printed (just some DOT gibberish). Also, I can't find any of those tree operators in the grammar.
So I have to assume that this grammar doesn't support AST generation.
Is there any working C grammar that includes AST syntax? I need to find a solution in a reasonable time, and I wanted to check all available resources before starting to edit the given grammar myself (which I guess will be an awful lot of work).
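From what I've read, output=AST only produces a useful tree if the rules say how: "^" after a token makes it the root of its subtree, "!" drops a token, and "-> ..." rewrite rules build the tree explicitly; without any of these, the result is just a flat list of tokens under a nil root. For reference, a minimal ANTLR 3 driver to dump whatever tree the parser builds would look something like this (it assumes the generated classes are called CLexer/CParser and that the start rule is translation_unit; adjust the names to your grammar):

    // Minimal sketch of an ANTLR 3 driver for an AST-producing grammar.
    // Assumptions: C.g was generated with output=AST, the generated classes
    // are CLexer/CParser, and the start rule is translation_unit.
    import org.antlr.runtime.ANTLRFileStream;
    import org.antlr.runtime.CommonTokenStream;
    import org.antlr.runtime.tree.CommonTree;
    import org.antlr.runtime.tree.DOTTreeGenerator;

    public class DumpTree {
        public static void main(String[] args) throws Exception {
            CLexer lexer = new CLexer(new ANTLRFileStream(args[0]));
            CParser parser = new CParser(new CommonTokenStream(lexer));

            // With output=AST, every rule's return scope carries the tree built so far.
            CParser.translation_unit_return result = parser.translation_unit();
            CommonTree tree = (CommonTree) result.getTree();

            // LISP-style dump of the tree; flat if the grammar has no ^/! or -> rewrites.
            System.out.println(tree.toStringTree());

            // DOT output for Graphviz.
            System.out.println(new DOTTreeGenerator().toDOT(tree));
        }
    }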
Thanks for your time,
Simon

Related

Any parser that can handle recursive grammars in Go?

In a Node.js project I wrote a query parser in ANTLR4 (JS target). The user queries use a simplified SQL-like grammar and are then expanded to full SQL on the server. The query structure can be arbitrarily nested.
I am now porting this app to Go. There is, at the moment, no ANTLR4 target for Go. I started exploring Ragel, but according to the documentation it expects a regular grammar and does not handle recursion, except for really simple tasks like balancing parentheses.
Another solution is to use my ANTLR4 grammar with the C++ target and then link the C++ classes to Go with SWIG (or something), which feels kind of hairy and like a last-resort solution.
Yet another solution is to do the parsing on the client side, but this would explode the amount of JS the client has to download. That also feels a bit desperate.
So my questions are:
1) Are there any parser libraries that can handle recursive grammars and are usable from Go?
2) I am completely unfamiliar with Ragel, and since it seems to be quite a complicated tool, I want to get this straight before investing time into learning it: is there any way to handle some recursion (say, up to a certain level) in Ragel if the grammar is simple enough? (See the sketch below.)
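To illustrate what I mean by "up to a certain level": a regular pattern cannot match arbitrarily deep nesting, but the recursion can be unrolled to a fixed depth by substituting the pattern into itself. A rough sketch of the idea (a Java regex here purely for illustration; the same unrolling should apply to any regular formalism, Ragel included):

    // Sketch: unroll "balanced parentheses" recursion to a fixed maximum depth.
    public class BoundedNesting {
        // Build a regex that matches text with parentheses nested at most `depth` levels.
        static String nested(int depth) {
            String pattern = "[^()]*";                               // depth 0: no parentheses
            for (int i = 0; i < depth; i++) {
                pattern = "[^()]*(?:\\(" + pattern + "\\)[^()]*)*";  // wrap one more level
            }
            return pattern;
        }

        public static void main(String[] args) {
            String upToDepth2 = nested(2);
            System.out.println("(a (b) c)".matches(upToDepth2));      // true  (depth 2)
            System.out.println("(a (b (c)) d)".matches(upToDepth2));  // false (depth 3)
        }
    }

The pattern grows with every extra level, which is exactly why this only works for simple grammars with a small, known nesting limit.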

Is there a Cypher syntax definition anywhere?

I'm looking for a definition of the syntax for the Cypher query language. I tried the docs but they're very vague.
Ideally, I'd like a BNF (or any variant) definition, or one of those "graph" definitions like this or this. Really, anything resembling a formal definition.
What you are looking for will be available in openCypher. Several items will be released as part of the project, one of the first of which is the BNF grammar.
Update 2016-01-30: A first draft of the grammar is now available at https://github.com/opencypher/openCypher/blob/master/grammar.ebnf.
Update 2016-10-17: EBNF and ANTLR grammars, the TCK, railroad diagrams, and a list of community projects are available at http://www.opencypher.org/#resources
Take a look at the recently announced (Oct 2015) openCypher project. It involves releasing the language specification, among other things.
From the announcement:
1. Cypher reference documentation:
A comprehensive user documentation describing use of the Cypher query language with examples and tutorials.
2. Technology compatibility kit (TCK):
The TCK consists of a number of tests that a software supplier would run in order to self-certify support for a given version of Cypher.
3. Reference implementation:
Distributed under the Apache 2.0 license, the reference implementation is a fully functional implementation of key parts of the stack needed to support Cypher inside a data platform or tool. The first planned deliverable is a parser that will take a Cypher statement and parse it into an AST (abstract syntax tree) representation. The reference implementation complements the documentation and tests by providing working implementations of Cypher – which are permissively licensed – and can be used as examples or as a foundation for one’s own implementation.
4. Cypher language specification:
Licensed under a Creative Commons license, the Cypher language specification is a technical expression of the language syntax to enable parsers to auto-generate the query syntax. A full semantic specification is also planned as a part of the openCypher project.
The same announcement also says that the process is open and that it is possible to submit, review and comment on language proposals.
Update!
Neo4j has changed a lot since this answer was written. In 2017 the simple answer is yes: you can download the grammar files from https://www.opencypher.org/
Below is the old answer, which was accurate in 2014.
As far as I can tell, the only formal definition is in the code. That's the bad news.
The good news is that the code uses a Scala library to do the parsing, which makes the code rules look kinda/sorta like BNF. And there's some documentation on how to read it.
Here's a link into a Scala object that defines what a query is.
This general package on GitHub looks to me like it contains all of the Cypher command implementations, and should have everything you're asking for.
Code in this package is written in Scala and looks like this:
object Query {
  def start(startItems: StartItem*) = new QueryBuilder().startItems(startItems:_*)
  def matches(patterns:Pattern*) = new QueryBuilder().matches(patterns:_*)
  def optionalMatches(patterns:Pattern*) = new QueryBuilder().matches(patterns:_*).makeOptional()
  def updates(cmds:UpdateAction*) = new QueryBuilder().updates(cmds:_*)
  def unique(cmds:UniqueLink*) = new QueryBuilder().startItems(Seq(CreateUniqueStartItem(CreateUniqueAction(cmds:_*))):_*)
  (...)
This matches roughly with the upper-right-hand quadrant of the Cypher refcard. You can sorta see that there can be a start clause, a match clause, and so on. It also links to other implementation classes (like UpdateAction) which further define the clauses considered update actions.
Make sure to also read How Neo4J Uses Scala's Parser Combinator: Cypher's Internals Part 1 for more information on what's going on here, and on the mapping between the Scala classes and what we'd normally consider EBNF. The blog post is old (2011) and its specific code examples shouldn't be trusted, but I think it has good general information on how the implementation works and what to look for if you want to understand the EBNF behind Cypher.
Disclaimer: I'm not a scala hardcore, YMMV, IANAL, devs please overrule me if I'm wrong.
(Michael Hunger answered in a comment, so I can't accept his answer. Here's his answer:)
Cypher uses parboiled as its parser; the parboiled rule DSL is pretty easy to read and understand. https://github.com/neo4j/neo4j/blob/d18583d260a957ab1a14bd27d34eb5625df42bc5/community/cypher/cypher-compiler-2.2/src/main/scala/org/neo4j/cypher/internal/compiler/v2_2/parser/Clauses.scala
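For a feel of what that kind of rule DSL looks like, here is a tiny parboiled rule set in plain Java (a sketch against the parboiled 1.x Java API with a toy expression grammar; the real Cypher rules are written with parboiled-scala and look somewhat different):

    // Toy PEG grammar in parboiled's Java DSL (illustrative, not Neo4j code).
    import org.parboiled.BaseParser;
    import org.parboiled.Parboiled;
    import org.parboiled.Rule;
    import org.parboiled.parserunners.ReportingParseRunner;
    import org.parboiled.support.ParsingResult;

    class CalculatorParser extends BaseParser<Object> {
        public Rule Expression() {
            return Sequence(Term(), ZeroOrMore(Sequence(AnyOf("+-"), Term())));
        }
        public Rule Term() {
            return Sequence(Factor(), ZeroOrMore(Sequence(AnyOf("*/"), Factor())));
        }
        public Rule Factor() {
            // Rules may refer to each other recursively, just like grammar productions.
            return FirstOf(Number(), Sequence('(', Expression(), ')'));
        }
        public Rule Number() {
            return OneOrMore(CharRange('0', '9'));
        }
    }

    public class ParboiledDemo {
        public static void main(String[] args) {
            CalculatorParser parser = Parboiled.createParser(CalculatorParser.class);
            ParsingResult<Object> result =
                    new ReportingParseRunner<Object>(parser.Expression()).run("1+2*(3+4)");
            System.out.println(result.matched);   // true
        }
    }

Each method returns a Rule, and because rules can reference each other just like grammar productions, the code reads almost one-to-one like EBNF, which is why the Clauses.scala file above is a reasonable substitute for a formal definition.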
None of these seem to work any more.
I don't see anything on the opencypher.org site that looks like a grammar to download.
None of the github links from Michael Hunger work.
I'd really like access to SOME resource where I can learn how to construct queries for functions like avg that allegedly take a list expression as an argument, yet barf at every variant I can figure out.

How to build an AST from S-expressions in Ruby?

I have no idea how to build S-expressions.
I want to do it because I need to build an AST for my language.
At the beginning I used RubyParser to parse the code into an S-expression and then did code generation.
But then the language has to be a subset of Ruby, I think; I can't define the language the way I want.
Now I need to implement a parser for my language.
So could anyone recommend a Ruby tool that builds an AST from S-expressions?
Thanks!
It is not very clear from your question what exactly you need, but a simple Google search gives some interesting links to check. Maybe after checking these links, if they are not the answer to your question, you can edit the question and make it more precise and concrete.
http://thingsaaronmade.com/blog/writing-an-s-expression-parser-in-ruby.html
https://github.com/aarongough/sexpistol
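If you end up rolling your own (as the blog post in the first link does in Ruby), the core algorithm is small: tokenize parentheses and atoms, then read them recursively into nested lists. A rough, language-independent sketch of the idea (written in Java here; all names are illustrative):

    // Minimal S-expression reader: returns a String for an atom,
    // or a List<Object> (possibly nested) for a parenthesized expression.
    import java.util.ArrayList;
    import java.util.List;

    public class SexpReader {
        private final String src;
        private int pos;

        public SexpReader(String src) { this.src = src; }

        public Object read() {
            skipWhitespace();
            if (src.charAt(pos) == '(') {
                pos++;                                  // consume '('
                List<Object> list = new ArrayList<>();
                skipWhitespace();
                while (src.charAt(pos) != ')') {
                    list.add(read());                   // recursion builds the nesting
                    skipWhitespace();
                }
                pos++;                                  // consume ')'
                return list;
            }
            return readAtom();
        }

        private String readAtom() {
            int start = pos;
            while (pos < src.length() && !Character.isWhitespace(src.charAt(pos))
                    && src.charAt(pos) != '(' && src.charAt(pos) != ')') {
                pos++;
            }
            return src.substring(start, pos);
        }

        private void skipWhitespace() {
            while (pos < src.length() && Character.isWhitespace(src.charAt(pos))) pos++;
        }

        public static void main(String[] args) {
            // Prints: [define, [square, x], [*, x, x]]
            System.out.println(new SexpReader("(define (square x) (* x x))").read());
        }
    }

The nested lists the reader returns are already an AST you can walk; a real implementation would add string literals, numbers, quoting, and error handling.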
You might try the sxp-ruby gem at http://github.com/bendiken/sxp-ruby. I use it for SPARQL S-Expressions (SSE) and similar methods for managing Abstract Syntax Trees in Ruby.
Maybe you could have a look at this gem named Astrapi.
This is just an experiment:
describe your language elements (concepts) in an "mm" file (abstract syntax)
run astrapi on this file
astrapi generates a parser that is able to fill your AST from input source expressed in S-expressions (the concrete syntax of your concepts).
I have put some modest documentation here.

Parsing XML, how is this actually done? [duplicate]

So, just as a fun project, I decided I'd write my own XML parser. No, not to parse a specific document, and no, not using an XML parser library. I mean writing code to parse out any XML document into a usable data structure. Just because I like the challenge. :-)
With that said, so far it's proved to be... interesting. It's not as easy to parse (especially when you start taking into account special characters, CDATA, empty tags, comments, etc.) as it initially looked.
Are there any well documented XML parsing algorithms or explanations anywhere that anyone knows of? It seems like there are well-documented Queue and Stack and BTree and etc. etc. etc. implementations everywhere, but I'm not sure I've ever seen a simple, well-documented XML parser algorithm...
I repeat: I am not looking for a pre-built parser library! I am looking for information on how to create my own pre-built parser library! Do not tell me "use expat" or "use SAX" or whatever. That's not what I'm asking for.
ANTLR offers a tutorial on parsing XML. It breaks the process down into phases: lexing, parsing, tree parsing, etc. It looks pretty interesting.
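To make those phases concrete, here is a stripped-down, hand-rolled sketch of the same idea (illustrative only: it handles nested elements, empty-element tags, and text, and deliberately ignores the prolog, attributes, entities, CDATA, comments, and namespaces, which are exactly the parts that make real XML parsing hard):

    // Tiny recursive-descent XML element parser (illustrative sketch, not spec-compliant).
    import java.util.ArrayList;
    import java.util.List;

    class Node {
        String name;                              // element name, or null for a text node
        String text;                              // text content for text nodes
        List<Node> children = new ArrayList<>();
        Node(String name, String text) { this.name = name; this.text = text; }
    }

    class TinyXmlParser {
        private final String src;
        private int pos;

        TinyXmlParser(String src) { this.src = src; }

        Node parse() {
            skipWhitespace();
            return parseElement();
        }

        private Node parseElement() {
            expect('<');
            String name = readName();
            skipWhitespace();
            if (peek() == '/') {                  // empty-element tag: <name/>
                expect('/'); expect('>');
                return new Node(name, null);
            }
            expect('>');
            Node node = new Node(name, null);
            while (true) {                        // children: text and nested elements
                if (peek() == '<') {
                    if (src.charAt(pos + 1) == '/') break;     // reached the closing tag
                    node.children.add(parseElement());
                } else {
                    node.children.add(new Node(null, readText()));
                }
            }
            expect('<'); expect('/');
            String close = readName();
            if (!close.equals(name))
                throw new RuntimeException("mismatched tag: " + name + " vs " + close);
            expect('>');
            return node;
        }

        private String readName() {
            int start = pos;
            while (pos < src.length()
                    && (Character.isLetterOrDigit(src.charAt(pos)) || src.charAt(pos) == '_'))
                pos++;
            return src.substring(start, pos);
        }

        private String readText() {
            int start = pos;
            while (pos < src.length() && src.charAt(pos) != '<') pos++;
            return src.substring(start, pos);
        }

        private void skipWhitespace() {
            while (pos < src.length() && Character.isWhitespace(src.charAt(pos))) pos++;
        }

        private char peek() { return src.charAt(pos); }

        private void expect(char c) {
            if (src.charAt(pos) != c)
                throw new RuntimeException("expected '" + c + "' at position " + pos);
            pos++;
        }
    }

Calling new TinyXmlParser("<a><b>hi</b></a>").parse() returns a Node for a with one child b, which in turn holds the text node hi; everything the sketch skips (attribute parsing, entity expansion, CDATA, well-formedness checks) is where the real work described in the question lies.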
I don't know if it would be "cheating" in your book, but you could try parsing your XML with a ready-built all-purpose language parser like ANTLR. The result would be a list of tokens (if you just use the lexer) or a parse tree (if you include the parser) and you could then re-build the parse tree almost 1:1 into an XML structure.
Maybe. I haven't thought about the ways in which XML might be different from "normal" ANTLR fodder like programming languages, and whether you would be able to define a suitable grammar.
VTD-XML is probably the simplest parsing technique possible...
http://expat.sourceforge.net/
Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags). An introductory article on using Expat is available on xml.com.

Which tools can help in translating (as in French -> English, not C++ -> Java) source code? [closed]

I have some code that is written in French; that is, the variables, classes, and functions all have French names. The comments are also in French. I'd like to translate the code to English. This will be quite a challenge, since it's an 18K-line project, and I'd like to know if there is any tool that could help me, especially with the variable/class/function names, since renaming them all by hand will be error-prone.
Are there any tools that can help me? Any advice?
Edit: I'm not looking for machine translation. I'm looking for a tool that would help me translate the code. Let's say there is a class named C, and this class has a method named TraverserLaRue which I rename to CrossTheRoad. I'd like all references to TraverserLaRue in all files to be renamed to CrossTheRoad. However, I don't want the method TraverserLaRue of class B to be renamed.
I assume the language in question is one of the common ones, such as C, C++, C#, Java, ...
(You don't have a language with French keywords, do you? I once encountered an entirely Swedish version of Pascal, and I gave up on working with that.)
So you have two problems:
Translating identifiers in the source code
Translating comments
Since comments contain arbitrary natural-language text, you'll need a full natural-language translation of them. I don't think you can find an automated tool to do that.
Unlike others, however, I think you have a decent chance at translating the identifiers and changing them en masse.
SD makes a line of source code "obfuscator" products. These tools don't process the code as raw text; rather, they process the source code in terms of the targeted language, so they accurately distinguish identifiers from operators, numbers, comments, etc. In particular, they operate reliably, as needed, on just the identifiers.
One of the things these tools do is replace one identifier name with another (usually a nonsense name) to make the code really hard to understand. Think of it abstractly as a map of identifier names I -> N. (They do other things, but that's not interesting here.) Because you often want to re-obfuscate a changed file the same way as the original, these tools allow you to reuse a previous cycle's identifier map, which is represented as a list of I -> N pairs.
I think you can abuse this to do what you want.
Step 1: Run such an obfuscator on your original French code. This will produce a text file containing all the identifiers in the code as a map of the form
I1 -> N1
I2 -> N2
....
You don't care about the Ns, just the I's.
Step 2: Manually translate each French I to an English name E you think fits best.
(I have no specific suggestions about how to do this; some of the other answers here have suggestions.)
Some of the I's are likely to be library calls and are thus already correct.
You can modify the text obfuscation map file to be:
I1 -> E1
I2 -> E2
Step 3: Run the obfuscation tool and make it use your modified obfuscation map. It can be told to do that.
Voila, all the identifiers in your code will be changed the way you specify.
[You may get, as a freebie, a re-formatting of your original text, since these tools can also format code nicely. Your name changes are likely to screw up the indentation/spacing in the original text, so this is a nice bonus.]
Any refactoring tool has a rename feature. Many questions on SO address language-specific refactoring tools.
For the comments, you will have to handle them manually.
I did this with German code a while ago, but had mixed results because of abbreviations in names, etc. Using regular expressions, I wrote a parser that removed all of the language-specific keywords and characters and then separated comments from the rest of the code, which left me with a lot of words that didn't necessarily mean anything to me by themselves. So I wrote a unique word finder that added them all to an ordered text file. Next stop was Google's language tools, which attempted to translate every word in the list. I went through the list to see if each word really translated, and if it did, I did a replace-all in the code with the English equivalent. I put the comments back in with the complete translation, when it worked. What I found was that I ended up having to talk with someone who understood "Germish" to translate the abbreviations, slang terms, and mixed-language pieces. So in short: regular expressions with a dictionary, unless someone has a real tool for this, which I would be interested in as well.
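For illustration, the "unique word finder" step could look something like this (a hypothetical sketch, not the tool described above: it naively treats every identifier-like word in the .java files of a source tree as a candidate for translation):

    // Hypothetical sketch: collect the unique identifier-like words from a source
    // tree, drop known keywords, and print them one per line for batch translation.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.Set;
    import java.util.TreeSet;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class UniqueWordFinder {
        // Extend this set for the real source language.
        private static final Set<String> KEYWORDS = Set.of(
                "if", "else", "for", "while", "return", "class",
                "public", "private", "void", "int", "new", "static");

        public static void main(String[] args) throws IOException {
            Pattern word = Pattern.compile("[A-Za-zÀ-ÿ_][A-Za-zÀ-ÿ0-9_]*");
            Set<String> unique = new TreeSet<>();

            List<Path> sources;
            try (Stream<Path> walk = Files.walk(Paths.get(args[0]))) {
                sources = walk.filter(p -> p.toString().endsWith(".java"))
                              .collect(Collectors.toList());
            }
            for (Path p : sources) {
                Matcher m = word.matcher(Files.readString(p));
                while (m.find()) {
                    if (!KEYWORDS.contains(m.group())) unique.add(m.group());
                }
            }
            // Feed this list to a translator, then use the translated pairs to
            // drive a search-and-replace or, better, an IDE rename pass.
            unique.forEach(System.out::println);
        }
    }

The question's concern about error-prone renaming still applies here: the resulting word list is safest used as input to a scope-aware rename tool rather than applied with a blind replace-all.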
You should definitely look into https://launchpad.net/rosetta
Ubuntu uses this to translate thousands of its packages written in hundreds of programming languages into hundreds of human languages, with updates for each new version. Truly herculean task.
Edit: to clarify how Rosetta is used at Ubuntu: it modifies all natural-language strings occurring in the source code of the open-source apps, creating language-specific source packages, which upon compilation produce the corresponding binaries. Of course it does not edit the binaries themselves.
First, maintainers create "template files", which are something like a patch with wildcards: a set of rules for what needs to be translated and where in the source tree, but not what to translate it to. Then Rosetta displays the strings to be translated and allows volunteer translators to provide translations into their language for each entry. Each entry can be discussed and modified, and suggestions can be submitted and moderated. Stats are provided on how much still needs to be translated, which translations are uncertain, which are missing, etc. When the translation is complete, the patch for a given language is applied to the source, creating a version of it for that language. Then a distribution is compiled from the modified sources.
This allows translation both for sources that use external resources for multilingual support, allowing the language to be changed on the fly, and for ones that have literal native-language strings right in the source code, mixed with business logic.
When a new version of the package is released, the template must be edited to include all the new strings, but there is quite good automation for preserving the existing ones. Of course, only translations for the new strings are required.
IMHO automatic tools won't be of any help here. Just translating variable and function names is not enough and will make the code worse, because they cannot infer the original programmer's intent in choosing a variable name.
Depending on what programming language this code is written in, there are modern IDEs that might ease the refactoring, but if you want good results, manual code review is a must.
A good IDE will be able to list classes, methods, and variables. There are also documentation generation tools that'll do that, such as Javadoc for Java, Doxygen for many languages, etc.
For the actual translation, there will be no tool that performs well, or even at a satisfactory level. The only way to get something worthwhile is to have a bilingual translator translate the terms. I've been doing freelance translations for many years, and I can tell you that trying to have a machine do the translating is a waste of time. Many examples and choices of words will be relevant to your culture and not the other. And that's just the tip of the iceberg.
Unless you find someone who can do the translation, I suggest you abandon the idea. Leave the source code as is. If a non-French speaker reads it and needs to understand something, let them do the Google lookup. If they are native English speakers, they'll probably do a better job of understanding the automatically translated stuff than you would, being French. When translating, you always want to translate into your native language.
For translating only comments you may try this simple utility I wrote (it's using Microsoft's Translator API): transource.
