Analysis of Ruby code - ruby

I'm trying to perform Natural Language Processing (NLP) analysis on source code, especially Ruby files. In particular, I want to extract identifiers and comments while taking the structure of the code into account.
My first attempt used off-the-shelf NLP libraries such as Lucene or spaCy. However, I was not able to remove all the noise coming from keywords, literals, and the other constructs typical of source code.
My second attempt is to obtain the AST of a given piece of code and then extract certain parts. There are multiple tools and libraries for a number of languages, but I am not able to find anything specific for parsing Ruby code. So far, my main option is to use ANTLR 4 and tailor a Ruby-like grammar (Corundum) so that it also works with OOP.
Is there a more straightforward path to what I'm looking for?
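To illustrate what I mean by extracting identifiers and comments, here is a minimal sketch of the kind of output I am after, using Ruby's built-in Ripper from the standard library (the input file name is hypothetical):

    require 'ripper'

    source = File.read('example.rb')   # hypothetical input file

    # Ripper.lex tokenizes Ruby; each token is [[line, col], event, text, ...]
    tokens = Ripper.lex(source)
    identifiers = tokens.select { |(_, event, _)| event == :on_ident }.map { |t| t[2] }
    comments    = tokens.select { |(_, event, _)| event == :on_comment }.map { |t| t[2] }

    puts identifiers.uniq
    puts comments

    # Structure-aware extraction could start from the nested-array AST instead:
    pp Ripper.sexp(source)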

Related

Does a Tool for Automatically Visualizing a Project's Source Code's Control Flow In-Line Exist?

I would like to be able to use a tool that lets you visualize a program's control flow(s) in the context of its source code. To clarify, such a tool should basically show what happens in a program by spitting out a human-readable abstract syntax tree in the form of a multidigraph with nodes containing snippets of source-code translation units. The resulting graph's initial node would, I presume, contain the block of code starting with a program's entry point (that would be main for a C or C++ program). New nodes would be created when a node needs to reference another block of code, whether in the current file or in another one, and arrows would connect the nodes. Does such a tool exist, or would it have to be created from scratch?
You aren't going to get a tool that does this for arbitrary languages off the shelf. There are too many languages, each with its own syntax and semantics. You somehow need a tool per language. You might find such tools for very commonly used languages, e.g., Understand for Software.
I think the only way to do this is to build metatools that enable the construction of language-specific tools relatively easily. Such a tool has to have the common machinery needed by all such language-processing tools: strong parsers (so writing grammars for new languages is relatively straightforward), AST construction machinery, symbol table support, and routines to build control and data flow graphs. By providing such machinery, one can build language front ends at modest cost.
There's a class of tools that does this: program transformation systems. Most of them have parsing engines, but not the rest of the machinery I have suggested above.
I believe this enough to have invested 20 years of my life in building such metatools. Our DMS Software Reengineering Toolkit shows its strength in being able to parse some 50+ languages, including the stunningly hard C++14 (both MS and GNU variants). It provides symbol table support and control flow graph construction for COBOL, Java, C, and C++. (We can't do everything at once; we're pedaling as fast as practical.)
[DMS builds these graphs as data structures rather than "showing" them; the examples on that page are drawn with the additional help of DOT].
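To make the "graph as a data structure, rendered with DOT" idea concrete at toy scale, here is a Ruby sketch; the node names and edges are made up for illustration:

    # Hold the graph as plain data: each node maps to the nodes it points at.
    edges = {
      'main'    => ['parse_args', 'run'],
      'run'     => ['read_file', 'process'],
      'process' => ['process']            # a loop back to itself
    }

    # Write it out in DOT so Graphviz can draw it.
    File.open('cfg.dot', 'w') do |f|
      f.puts 'digraph cfg {'
      edges.each do |from, targets|
        targets.each { |to| f.puts %(  "#{from}" -> "#{to}";) }
      end
      f.puts '}'
    end
    # Render with Graphviz: dot -Tpng cfg.dot -o cfg.png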
One of the few other tools that tries to do this is Clang/LLVM. Clang itself parses the C family; for other languages there is no specific parsing support that I know about, so you get to code a front end yourself. I think you get control flow graphs only after you translate the language to LLVM, and I don't think it has any specific support for drawing control flow graphs, either.
An older tool with a good reputation for multi-language support in this space is Coco/R. I don't know a lot about it; I know it parses and has some support for ASTs, but I don't know what it does about control flow analysis.

Is there any standard way to store abstract syntax trees in files?

I am searching for a way to "dump" abstract syntax trees into files, so that code can be parsed with a compiler and then stored in a language- and compiler-independent way. Yet I was unable to find any widely recognized way of doing this. Does such a way exist?
There are no standards for storing ASTs, or more importantly from your point of view, sharing them among tools. The reason is that ASTs are dependent on grammars (which vary; C has "many" depending on which specific compiler and version) and parsing technology.
There have been lots of attempts to define universal AST forms across multiple languages, but none of them have really worked; the semantics of the operators vary too much. (Consider just "+": what does it really mean? In Fortran, you can add arrays; in Java, you can "add" strings.)
However, one can write out specific ASTs rather easily. A simple means is a notation in which a node is written out along with its children, recursively, using some kind of nested "parentheses".
Lisp S-expressions are a common way to do this. You can see an example of the S-expression style generated by our tools.
People have used XML for this, too, but it is pretty bulky. You can see an XML output example here.
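For instance, here is a minimal Ruby sketch, not any standard format, that writes a Ripper AST out in that nested-parentheses style:

    require 'ripper'

    # Turn Ripper's nested arrays into a nested-parentheses text form.
    def to_sexp(node)
      case node
      when Array then "(#{node.map { |child| to_sexp(child) }.join(' ')})"
      when nil   then 'nil'
      else node.inspect            # symbols, strings, numbers as literals
      end
    end

    ast = Ripper.sexp("def add(a, b)\n  a + b\nend")
    File.write('ast.sexp', to_sexp(ast))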

Is there a framework for writing phrase structure rules out there that is open source?

I've worked with the Xerox toolchain so far, which is powerful, not open source, and a bit overkill for my current problem. Are there libraries that allow me to implement a phrase structure grammar? Preferably in Ruby or Lisp.
AFAIK, there's no open-source Lisp phrase structure parser available.
But since a parser is actually a black box, it's not so hard to make your application work with a parser written in any language, especially if it produces S-expressions as output. For example, with something like pfp you can just pipe your sentences to it as strings, then read and process the resulting trees. Or you can wrap a socket server around it and you'll get a distributed system :)
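For instance, a rough Ruby sketch of the piping idea; the command name external-parser is a stand-in for whatever parser binary you wrap:

    require 'open3'

    sentence = 'The cat sat on the mat .'
    # Feed the sentence on stdin and capture the parser's stdout.
    output, status = Open3.capture2('external-parser', stdin_data: sentence)

    puts output if status.success?   # e.g. "(S (NP (DT The) (NN cat)) (VP ...))"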
There's also cl-langutils, which may be helpful for some basic NLP tasks, like tokenization and, maybe, POS tagging. But overall it's much less mature and feature-rich than the commonly used packages, like Stanford's or OpenNLP.

Ruby Text Analysis

Is there any Ruby gem or other tool for text analysis? Word frequency, pattern detection, and so forth (preferably with an understanding of French).
The generalization of word frequencies is language models, e.g. uni-grams (= single-word frequencies), bi-grams (= frequencies of word pairs), tri-grams (= frequencies of word triples), ..., in general: n-grams.
You should look for an existing toolkit for Language Models — not a good idea to re-invent the wheel here.
There are a few standard toolkits available, e.g. from the CMU Sphinx team, and also HTK.
These toolkits are typically written in C (for speed, because you have to process huge corpora) and generate the standard ARPA n-gram file format (typically a text format).
Check the following thread, which contains more details and links:
Building openears compatible language model
Once you have generated your language model with one of these toolkits, you will need either a Ruby gem that makes the language model accessible in Ruby, or you will need to convert the ARPA format into your own format.
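A rough sketch of that conversion in Ruby, assuming the standard ARPA layout of \N-grams: sections with lines of the form "logprob n-gram [backoff]" (the file name is hypothetical):

    ngrams = Hash.new { |h, k| h[k] = {} }
    order  = nil

    File.foreach('model.arpa') do |line|    # hypothetical file name
      line = line.strip
      if line =~ /\A\\(\d+)-grams:/
        order = Regexp.last_match(1).to_i
      elsif order && line =~ /\A(-?\d+(?:\.\d+)?)\s+(.+)\z/
        logprob = Regexp.last_match(1).to_f
        # Keep exactly `order` words; any trailing field is a backoff weight.
        words = Regexp.last_match(2).split(/\s+/).take(order)
        ngrams[order][words.join(' ')] = logprob
      end
    end

    # ngrams[2] now maps bigram strings to their log10 probabilities.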
adi92's post lists some more Ruby NLP resources.
You can also Google for "ARPA Language Model" for more info
Last but not least, check Google's online n-gram tool. They built n-grams from the books they digitized, and they are also available in French and other languages!
The Mendicant Bug: NLP Resources for Ruby contains lots of useful Ruby NLP links.
I had tried using the Ruby Linguistics stuff a long time ago, and remember having a lot of problems with it... I don't recommend jumping into that.
If most of your text analysis involves stuff like counting ngrams and naive Bayes, I recommend just doing it on your own. Ruby has pretty good basic libraries and awesome support for regexes, so this should not be that tricky, and it will be easier for you to adapt stuff to the idiosyncrasies of the problem you are trying to solve.
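For example, a back-of-the-envelope sketch of unigram and bigram counting in plain Ruby (the corpus file name is hypothetical):

    text  = File.read('corpus.txt')              # hypothetical input file
    # [[:alpha:]] keeps accented letters, so French words survive tokenization.
    words = text.downcase.scan(/[[:alpha:]]+/)

    unigrams = Hash.new(0)
    bigrams  = Hash.new(0)
    words.each         { |w|    unigrams[w]     += 1 }
    words.each_cons(2) { |a, b| bigrams[[a, b]] += 1 }

    p unigrams.sort_by { |_, n| -n }.first(10)   # ten most frequent words
    p bigrams.sort_by  { |_, n| -n }.first(10)   # ten most frequent word pairs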
As with the Stanford parser gem, it's possible to use Java libraries that solve your problem from within Ruby, but this can be tricky, so it's probably not the best way to solve a problem.
I wrote the gem words_counted for this reason. You can see a demo on rubywordcount.com. It has a lot of the analysis features you mention, and a host more. The API is well documented and can be found in the readme on GitHub.

Is using a finite state machine a good design for general text parsing?

I am reading a file that is filled with hex numbers. I have to identify a particular pattern, say "aaad" (without quotes), from it. Every time I see the pattern, I generate some data to some other file.
This would be a very common case in designing programs - parsing and looking for a particular pattern.
I have designed it as a Finite State Machine and structured it in C using switch-case to change states. This was the first implementation that occurred to me.
DESIGN: Are there some better designs possible?
IMPLEMENTATION: Do you see some problems with using a switch case as I mentioned?
A hand-rolled FSM can work well for simple situations, but they tend to get unwieldy as the number of states and inputs grows.
There is probably no reason to change what you have already designed/implemented, but if you are interested in general-purpose text parsing techniques, you should probably look at things like regular expressions, Flex, Bison, and ANTLR.
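To make the comparison concrete, here is a small sketch, in Ruby rather than C purely for brevity, of the regex approach next to a hand-rolled state machine for the "aaad" pattern:

    input = File.read('data.hex')      # hypothetical input file

    # 1. Regular expression: one line, the engine builds the automaton for you.
    regex_matches = input.scan(/aaad/).size

    # 2. Hand-rolled state machine, the switch-case style made explicit.
    #    The state counts how many leading characters of "aaad" we have seen.
    state = 0
    fsm_matches = 0
    input.each_char do |c|
      state =
        if c == 'a'
          state >= 3 ? 3 : state + 1   # "aaaa..." still ends in three 'a's
        elsif c == 'd' && state == 3
          fsm_matches += 1
          0
        else
          0
        end
    end

    puts "#{regex_matches} vs #{fsm_matches}"   # both count non-overlapping hits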
For embarrassingly simple cases, a couple of ifs or switches are sufficient.
For parsing a string on POSIX systems, see man 3 regex. For fully featured parsing of whole files (e.g. complex configs), use Lex/Flex and Yacc/Bison.
When writing in C++, look at Boost Regex for the simpler cases and Boost Spirit for the more complex ones. Flex and Bison work with C++ too.

Resources