Are parsing expression grammars suited to parsing the shell command language?

The POSIX shell command language is not easy to parse, largely because of tight coupling between lexing and parsing.
However, parsing expression grammars (PEGs) are often scannerless. By combining lexing and parsing, it seems that I could avoid these problems. The language that I am using (Rust) has a well-maintained PEG library. However, I know of three difficulties that could make it impractical to use this library:
Shells must be able to parse line by line, not reading characters past the end of the line.
Aliases are purely lexical, and can cause a token to be replaced by any sequence of other tokens in certain situations.
Shell reserved words are only recognized in certain situations.
Is a PEG suited to parsing the shell command language given these requirements, or is a hand-written recursive-descent parser more suitable?

Yes, a PEG can be used, and none of the issues you note should be a problem.
In particular:
1) Parsing line by line: most PEG tools do not have any built-in whitespace skipping. All whitespace, including newlines, must be handled explicitly by you, which means you can handle newlines any way you like.
2) You should not use the parse tree from the PEG as your AST. Instead, you should descend the parse tree and build an AST. For aliases, then, after the parse has completed and you're building your AST, you can detect the alias and insert the appropriate expansion in its place.
3) Reserved words are not reserved unless you reserve them. That is, in a context where either a reserved word or another alphanumeric symbol can occur, you must check for the reserved words explicitly before the arbitrary alphanumeric symbol, because once a PEG rule matches, it will not backtrack. Anywhere a reserved word is not permitted, simply don't check for it, and your generalized alphanumeric-symbol rule will succeed instead. A sketch of this ordered-choice idea follows.
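For instance, here is a minimal, hypothetical Python sketch of point 3 (this is not any particular PEG library; the rule names are made up). Reserved words are tried first, and only in contexts that permit them:
import re

RESERVED = ("if", "then", "else", "fi", "while", "do", "done")
WORD = re.compile(r"[A-Za-z0-9_]+")

def parse_word(src, pos):
    # Generic alphanumeric-symbol rule.
    m = WORD.match(src, pos)
    return (m.group(), m.end()) if m else None

def parse_command_word(src, pos, allow_reserved):
    # Ordered choice: where reserved words are meaningful, test them
    # before the generic rule, because a PEG commits to the first match.
    hit = parse_word(src, pos)
    if hit is None:
        return None
    if allow_reserved and hit[0] in RESERVED:
        return ("reserved", *hit)
    return ("word", *hit)

print(parse_command_word("if true", 0, allow_reserved=True))   # ('reserved', 'if', 2)
print(parse_command_word("if true", 0, allow_reserved=False))  # ('word', 'if', 2)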

Related

Better algorithm for shortening English words

I have some unique codes that are generated from strings (ex: website host names) in various independent components of my application.
These codes are meant to be used by machines only, so I would like to keep them as short as possible.
The below algorithm would be applied to every word in the string. The output words would be concatenated with a dash to generate the unique code.
The current algorithm I have used:
- Skip the word (leave it as-is) if its length is less than 6
- Leave the first character as-is
- Remove every vowel in the word from the second character onwards
architectural digest eu => archtctrl-dgst-eu
arizona foothills magazine => arzn-fthlls-mgzn
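In Python, that algorithm amounts to roughly this sketch:
import re

def shorten_word(word):
    # Leave short words (length < 6) unchanged.
    if len(word) < 6:
        return word
    # Keep the first character; strip vowels from the rest.
    return word[0] + re.sub(r"[aeiou]", "", word[1:])

def make_code(text):
    return "-".join(shorten_word(w) for w in text.split())

print(make_code("architectural digest eu"))     # archtctrl-dgst-eu
print(make_code("arizona foothills magazine"))  # arzn-fthlls-mgzn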
Is there a better way to shorten an English word leaving it as recognisable as possible to a human reader?
The output should be deterministic and produce the same shortened version whenever it is run on the same input.
A good algorithm should also minimise the number of clashes for similarly spelt words.
I have some unique codes that are generated from strings
I am afraid that is not true. There are many English words that will reduce to the same 'code word' when stripped of their vowels: for example, 'leaving' and 'living' both become 'lvng'. Granted, this is fairly rare, but it could still cause issues.
How important is it that these 'code words' remain human-readable if, as you say, they are meant to be used by machines only? If it's not that important, I'd suggest looking into some simpler compression algorithms like Huffman coding or LZW compression. Then, if the user needs to see the translation of the code word, just decompress it.
If you must keep it human-readable, I'm not sure that there is much more you can do to shorten it. You could take a look at specific latin + greek roots, and determine if you can shorten those any more by hand, and then just substitute those out automatically.
Alternatively, you could take a phonetic approach: automatically look up the pronunciation of the word, and then see if that is any shorter (or can itself be compressed, taking 'cee' to 'C', or 'kay' to 'K'). This would be much more time- and CPU-intensive, but it's still an option if you really, really need short yet readable codes.
What you're generating sounds like what's called a "slug". There are many libraries to handle this for blogs or site generators that should suit your purposes. Here's a usage example from a Python library called slugify:
from slugify import slugify

txt = "___This is a test ---"
r = slugify(txt)
assert r == "this-is-a-test"
Slug libraries generally work like this:
replace non-ASCII linguistic characters via a mapping (ex: 影師嗎 -> ying-shi-ma)
replace accented Latin letters with ASCII equivalents via a mapping (ex: C'est déjà l'été. -> c-est-deja-l-ete)
remove leading and trailing spaces/punctuation
convert remaining spaces and punctuation to dashes, collapsing runs of dashes to a single dash
If you want to make slugs shorter you could remove vowels or, more simply, use a maximum length.
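A minimal sketch of the Latin-script steps in plain Python (it skips the CJK transliteration mapping, and max_length here is a made-up parameter for illustration, not any particular library's API):
import re
import unicodedata

def simple_slug(text, max_length=0):
    # Decompose accented letters and drop the combining marks.
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    # Turn runs of spaces/punctuation into single dashes; trim the ends.
    text = re.sub(r"[^A-Za-z0-9]+", "-", text).strip("-").lower()
    # Optionally enforce a maximum length.
    return text[:max_length].rstrip("-") if max_length else text

print(simple_slug("C'est déjà l'été."))      # c-est-deja-l-ete
print(simple_slug("___This is a test ---"))  # this-is-a-test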

Regex in Ruby on Rails for adding "\n\n" (two newlines) when "\n" is found [duplicate]

I don't really understand regular expressions. Can you explain them to me in an easy-to-follow manner? If there are any online tools or books, could you also link to them?
The most important part is the concepts. Once you understand how the building blocks work, differences in syntax amount to little more than mild dialects. A layer on top of your regular expression engine's syntax is the syntax of the programming language you're using. Languages such as Perl remove most of this complication, but you'll have to keep in mind other considerations if you're using regular expressions in a C program.
If you think of regular expressions as building blocks that you can mix and match as you please, it helps you learn not only how to write and debug your own patterns but also how to understand patterns written by others.
Start simple
Conceptually, the simplest regular expressions are literal characters. The pattern N matches the character 'N'.
Regular expressions next to each other match sequences. For example, the pattern Nick matches the sequence 'N' followed by 'i' followed by 'c' followed by 'k'.
If you've ever used grep on Unix—even if only to search for ordinary looking strings—you've already been using regular expressions! (The re in grep refers to regular expressions.)
Order from the menu
Adding just a little complexity, you can match either 'Nick' or 'nick' with the pattern [Nn]ick. The part in square brackets is a character class, which means it matches exactly one of the enclosed characters. You can also use ranges in character classes, so [a-c] matches either 'a' or 'b' or 'c'.
The pattern . is special: rather than matching a literal dot only, it matches any character†. It's the same conceptually as the really big character class [-.?+%$A-Za-z0-9...].
Think of character classes as menus: pick just one.
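For illustration throughout, here are the ideas in Python's re module (any engine behaves the same on these basics):
import re

# [Nn]ick matches exactly one of 'N' or 'n', then the literal sequence 'ick'.
print(re.findall(r"[Nn]ick", "Nick, nick, but not Mick"))  # ['Nick', 'nick']
# A range matches any one character within it.
print(re.findall(r"[a-c]", "abcdef"))  # ['a', 'b', 'c']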
Helpful shortcuts
Using . can save you lots of typing, and there are other shortcuts for common patterns. Say you want to match a digit: one way to write that is [0-9]. Digits are a frequent match target, so you could instead use the shortcut \d. Others are \s (whitespace) and \w (word characters: alphanumerics or underscore).
The uppercased variants are their complements, so \S matches any non-whitespace character, for example.
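A quick demonstration of those shortcuts (Python again):
import re

s = "Order 66 shipped on 2019-01-01"
print(re.findall(r"\d", s))   # every digit, one at a time
print(re.findall(r"\w+", s))  # ['Order', '66', 'shipped', 'on', '2019', '01', '01']
print(re.findall(r"\S+", s))  # ['Order', '66', 'shipped', 'on', '2019-01-01']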
Once is not enough
From there, you can repeat parts of your pattern with quantifiers. For example, the pattern ab?c matches 'abc' or 'ac' because the ? quantifier makes the subpattern it modifies optional. Other quantifiers are
* (zero or more times)
+ (one or more times)
{n} (exactly n times)
{n,} (at least n times)
{n,m} (at least n times but no more than m times)
Putting some of these blocks together, the pattern [Nn]*ick matches all of
ick
Nick
nick
Nnick
nNick
nnick
(and so on)
The first match demonstrates an important lesson: * always succeeds! Any pattern can match zero times.
A few other useful examples:
[0-9]+ (and its equivalent \d+) matches any non-negative integer
\d{4}-\d{2}-\d{2} matches dates formatted like 2019-01-01
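Checking those claims in Python:
import re

for s in ("ick", "Nick", "nnick"):
    print(bool(re.fullmatch(r"[Nn]*ick", s)))  # True for all three

print(re.fullmatch(r"ab?c", "ac") is not None)                       # True: ? makes b optional
print(re.fullmatch(r"\d{4}-\d{2}-\d{2}", "2019-01-01") is not None)  # True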
Grouping
A quantifier modifies the pattern to its immediate left. You might expect 0abc+0 to match '0abc0', '0abcabc0', and so forth, but the pattern immediately to the left of the plus quantifier is c. This means 0abc+0 matches '0abc0', '0abcc0', '0abccc0', and so on.
To match one or more sequences of 'abc' with zeros on the ends, use 0(abc)+0. The parentheses denote a subpattern that can be quantified as a unit. It's also common for regular expression engines to save or "capture" the portion of the input text that matches a parenthesized group. Extracting bits this way is much more flexible and less error-prone than counting indices and substr.
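For example, in Python:
import re

m = re.search(r"0(abc)+0", "0abcabc0")
print(m.group(0))  # '0abcabc0' -- the whole match
print(m.group(1))  # 'abc'      -- a repeated group keeps its last capture

# Captures beat index arithmetic for pulling out pieces:
date = re.search(r"(\d{4})-(\d{2})-(\d{2})", "due 2019-01-01")
print(date.groups())  # ('2019', '01', '01')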
Alternation
Earlier, we saw one way to match either 'Nick' or 'nick'. Another is with alternation as in Nick|nick. Remember that alternation includes everything to its left and everything to its right. Use grouping parentheses to limit the scope of |, e.g., (Nick|nick).
For another example, you could equivalently write [a-c] as a|b|c, but this is likely to be suboptimal because many implementations assume alternatives will have lengths greater than 1.
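To see the scoping rule from above in action (Python again):
import re

# Grouping limits the scope of | to the alternatives inside the parentheses.
print(re.fullmatch(r"(Nick|nick)name", "nickname") is not None)  # True
# Without the group, the alternatives are 'Nick' and 'nickname' in full.
print(re.fullmatch(r"Nick|nickname", "nickname") is not None)    # True, via the right branch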
Escaping
Although some characters match themselves, others have special meanings. The pattern \d+ doesn't match backslash followed by lowercase D followed by a plus sign: to get that, we'd use \\d\+. A backslash removes the special meaning from the following character.
Greediness
Regular expression quantifiers are greedy. This means they match as much text as they possibly can while allowing the entire pattern to match successfully.
For example, say the input is
"Hello," she said, "How are you?"
You might expect ".+" to match only '"Hello,"' and may then be surprised to see that it matches from the opening quote before 'Hello' all the way through the closing quote after 'you?'.
To switch from greedy to what you might think of as cautious, add an extra ? to the quantifier. Now you understand how \((.+?)\), the example from your question, works: it matches a literal left parenthesis, followed by one or more characters (as few as possible), terminated by a literal right parenthesis.
If your input is '(123) (456)', then the first capture will be '123'. Non-greedy quantifiers want to allow the rest of the pattern to start matching as soon as possible.
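In Python:
import re

quote = '"Hello," she said, "How are you?"'
print(re.search(r'".+"', quote).group())   # greedy: from the first " to the last "
print(re.search(r'".+?"', quote).group())  # lazy: '"Hello,"'

print(re.findall(r"\((.+?)\)", "(123) (456)"))  # ['123', '456']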
(As to your confusion, I don't know of any regular-expression dialect where ((.+?)) would do the same thing. I suspect something got lost in transmission somewhere along the way.)
Anchors
Use the special pattern ^ to match only at the beginning of your input and $ to match only at the end. Making "bookends" with your patterns where you say, "I know what's at the front and back, but give me everything between" is a useful technique.
Say you want to match comments of the form
-- This is a comment --
you'd write ^--\s+(.+)\s+--$.
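For example:
import re

m = re.match(r"^--\s+(.+)\s+--$", "-- This is a comment --")
print(m.group(1))  # 'This is a comment'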
Build your own
Regular expressions are recursive, so now that you understand these basic rules, you can combine them however you like.
Tools for writing and debugging regexes:
RegExr (for JavaScript)
Perl: YAPE::Regex::Explain
Regex Coach (engine backed by CL-PPCRE)
RegexPal (for JavaScript)
Regular Expressions Online Tester
Regex Buddy
Regex 101 (for PCRE, JavaScript, Python, Golang, Java 8)
I Hate Regex
Visual RegExp
Expresso (for .NET)
Rubular (for Ruby)
Regular Expression Library (Predefined Regexes for common scenarios)
Txt2RE
Regex Tester (for JavaScript)
Regex Storm (for .NET)
Debuggex (visual regex tester and helper)
Books
Mastering Regular Expressions (2nd and 3rd editions)
Regular Expressions Cheat Sheet
Regex Cookbook
Teach Yourself Regular Expressions
Free resources
RegexOne - Learn with simple, interactive exercises.
Regular Expressions - Everything you should know (PDF Series)
Regex Syntax Summary
How Regexes Work
JavaScript Regular Expressions
Footnote
†: The statement above that . matches any character is a simplification for pedagogical purposes that is not strictly true. Dot matches any character except newline, "\n", but in practice you rarely expect a pattern such as .+ to cross a newline boundary. Perl regexes have a /s switch and Java has Pattern.DOTALL, for example, to make . match any character at all. For languages that don't have such a feature, you can use something like [\s\S] to match "any whitespace or any non-whitespace", in other words anything.

Why is it useful to have an atom type (like in Elixir and Erlang)?

According to http://elixir-lang.org/getting-started/basic-types.html#atoms:
Atoms are constants where their name is their own value. Some other
languages call these symbols
I wonder what the point of having an atom type is. Probably to help build a parser, or for macros? But how does it help the programmer in everyday use?
BTW: I have never used Elixir or Erlang; I just know the type exists (it also exists in kdb).
They're basically strings that can easily be tested for equality.
Consider a string. Conceptually, we generally want to think of strings as being equal if they have the same contents. For example, "dog" == "dog" but "dog" != "cat". However, to check the equality of strings, we have to check whether each letter in one string is equal to the letter at the same position in the other, which means we have to walk through the string and compare each character. This becomes a bit more cumbersome with Unicode strings, where we have to consider different ways of composing identical characters (for example, é can be a single precomposed code point or 'e' plus a combining accent, which encode to different bytes in UTF-8).
It would be much simpler if we stored identical strings at the same location in memory. Then, checking equality would be a simple pointer or index comparison.
As a consequence of storing identical strings in the same location in memory, we can also store one copy of each unique kind of string regardless of how many times it is used in the program, thus saving some memory for commonly-used strings as well.
At a higher level, using atoms also lets us think of strings the same way we think of other primitive data types like integers.
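Python's sys.intern gives a taste of the same trick (atoms take it further by making this behavior the default):
import sys

# Interned strings share one memory location, so equality can be a cheap
# identity check instead of a character-by-character walk.
a = sys.intern("some-long-status-string")
b = sys.intern("some-long-status-string")
print(a is b)  # True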
I think one of the most common usages in Erlang is to tag values and messages, with the benefit of fast comparison (pattern matching), as mipadi says.
For example, say you write a function that may fail depending on the parameters provided, the status of a connection to a server, or any other reason. A very frequent idiom is to return a tuple {ok,Value} in case of success and {error,Reason} in case of error. The calling function can then choose to handle only the success case by writing {ok,Value} = yourModule:yourFunction(Param...). Doing this makes it clear that you consider only the success case, you extract the Value directly from the function's return value, it is fast, and you don't have to share any header with yourModule to decode the ok atom.
In messages you will often see things like {add,Key,Value}, {delete,Key}, {delete_all}, {replace,Key,Value}, {append,Key,Value}, and so on. These are explicit messages, with the same advantages as mentioned before: fast, self-describing, and no header to share.
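A rough Python analogue of the {ok,Value} / {error,Reason} convention, using plain strings where Erlang uses atoms (the function here is made up for illustration):
def lookup(key, table):
    # Tag the outcome, the way Erlang code tags tuples with atoms.
    if key in table:
        return ("ok", table[key])
    return ("error", "not_found")

tag, payload = lookup("host", {"host": "example.com"})
if tag == "ok":
    print(payload)  # handle only the success case, as with {ok,Value} = ...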
Atoms are constants whose name is their own value.
This concept is very useful in distributed systems, where constants can be defined differently on each system, while atoms are self-contained and need no prior definition.

Atom escaping rules in Prolog

I need to export to a file a Prolog program expressed using an arbitrary term representation in Java. The idea is that a Prolog interpreter should be able to consult the generated file afterwards.
My question is about the correct way to write in the file Java Strings representing atom terms.
For example, if the string has a space in the middle, it should be surrounded by single quotes in the file:
hello world becomes 'hello world'
And the exporter should take into consideration characters that should be escaped:
' becomes '\''
Could someone point me to the place where these rules are specified? And can I assume that these rules are respected by major Prolog implementations? (I mean, would a Prolog program generated following these rules be parsed correctly by most Prolog interpreters?)
The precise place for this is the standard, ISO/IEC 13211-1:1995, quoted_token (* 6.4.2 *). See this answer for how to get it for USD 30.
The precise syntax is quite complex due to a lot of extras like continuation lines and the like. If you are only writing atoms that should be read by Prolog, things are a bit easier. Also in that situation, you could always quote, which makes writing again a bit simpler.
Some things to be aware of:
Only simple spaces may occur as layout in a quoted atom. All other layout characters need to be escaped, like \t and \n (the available escape letters are abrftnv). Many systems also accept other layout characters, but they differ from one another in very tiny details.
Backslash and quote must be escaped.
Characters outside the printable ASCII range depend on the processor character set (PCS) supported by a system. In a conforming system, the accompanying documentation should define how the additional characters (extended characters) are classified. Documentation quality varies over a wide range.
In any case, test your interface also with GNU Prolog from 1.4.1 upwards. To date, no differences are known between GNU Prolog 1.4.1+ and the standard as far as syntax is concerned.
Here are some 240+ syntax-related test cases. Please report any oversight!
A practical hint: if you call writeq in your Prolog on the data you care about, it will add quotes where required.
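A conservative sketch of the quoting rules described above, in Python (it always quotes anything that is not a plain lowercase-led alphanumeric atom, and it ignores solo and symbolic atoms as well as non-ASCII classification):
import re

def quote_atom(s):
    # Plain atoms (lowercase letter, then alphanumerics/underscore) need no quotes.
    if re.fullmatch(r"[a-z][a-zA-Z0-9_]*", s):
        return s
    # Escape backslash first, then the quote, then non-space layout characters.
    escaped = s.replace("\\", "\\\\").replace("'", "\\'")
    escaped = escaped.replace("\n", "\\n").replace("\t", "\\t")
    return "'" + escaped + "'"

print(quote_atom("hello world"))  # 'hello world'
print(quote_atom("don't"))        # 'don\'t'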

Ruby Regular expression too big / Multiple string match

I have 1,000,000 strings that I want to categorize. The way I do this is to put a string in a bucket if it contains any of a set of words or phrases. The set contains about 10,000 words. Ideally I would be able to support regular expressions, but I am focused on making it run fast right now. Example phrases:
ford, porsche, mazda...
I really don't want to match each word against the strings one by one, so I decided to use regular expressions. Unfortunately, I am running into a regular expression limit:
Regexp.new("(a)"*253)
=> /(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)...
Regexp.new("(a)"*254)
RegexpError: regular expression too big: /(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)(a)...
where a would be one of my words or phrases. Right now, I am planning on running 10,000 / 253 ≈ 40 separate matches. I read that the length of the regex heavily impacts performance, but my regex match is really simple and the regexp is created very quickly. I would like to get around the limitation somehow, or use a better solution if anyone has any ideas. Thanks.
You might consider other mechanisms for recognizing 10k words.
Trie: Sometimes called a prefix tree, it is often used by spell checkers for doing word lookups. See Trie on Wikipedia.
DFA (deterministic finite automaton): A DFA is often created by the lexer in a compiler for recognizing the tokens of the language. A DFA runs very quickly. Simple regexes are often compiled into DFAs. See DFA on Wikipedia.
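A minimal trie-based multi-phrase scanner in Python to show the idea (a production version would use something like Aho-Corasick to scan for all phrases in one pass):
def build_trie(phrases):
    root = {}
    for p in phrases:
        node = root
        for ch in p:
            node = node.setdefault(ch, {})
        node["$"] = p  # mark the end of a complete phrase
    return root

def find_phrases(text, root):
    # Walk the trie from every starting position: O(len(text) * longest phrase).
    hits = set()
    for i in range(len(text)):
        node = root
        for ch in text[i:]:
            if ch not in node:
                break
            node = node[ch]
            if "$" in node:
                hits.add(node["$"])
    return hits

trie = build_trie(["ford", "porsche", "mazda"])
print(find_phrases("my old ford and a mazda", trie))  # {'ford', 'mazda'}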
