Given the following regular language, what is the NFA? - nfa

(01)*01(0|1)*
What I'm seeing here is "at least one 01, followed by 0 or more 0's or 1's".
Am I oversimplifying this, or just being paranoid that the expression looks a lot more complex than the drawn machine?
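For what it's worth, a quick brute-force sanity check (just a sketch in Python, not part of the original question) confirms the simpler reading on all short binary strings:

import re
from itertools import product

original = re.compile(r"(01)*01(0|1)*")
reading = re.compile(r"(01)+(0|1)*")   # "at least one 01, then 0s or 1s"

for n in range(10):
    for bits in product("01", repeat=n):
        w = "".join(bits)
        assert bool(original.fullmatch(w)) == bool(reading.fullmatch(w))

print("Both expressions agree on every binary string up to length 9.")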

Related

About the definiteness of the definition of an algorithm?

I'm reading a note about the definition of an algorithm. One of its requirements is stated using two terms, and I don't know what the difference between them is:
Definiteness: Every instruction should be clear and unambiguous. (I found a source with exactly the same statement)
From the resource I have there are 5 requirements: Input, Output, Definiteness, Finiteness, Effectiveness. I can understand the other four, but not Definiteness. Can anyone provide a better definition if the above is not precise?
From the above I only suspect that there are at least two subtleties that should be considered...
Conclusion from the answers below: definiteness = defined (clear) + only one meaning (unambiguous).
An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
For example, if one step is to add two integers, we must define both “integers” as well as the “add” operation: we cannot for example use the same symbol to mean addition in one place and multiplication somewhere else.
If presented to an educated human, the text should allow him to simulate execution by hand in exactly the way you had in mind (same steps taken, same results obtained).
When you don't quite understand the definition of a term provided by some author, it's often helpful to look for other definitions of it. I especially like the one for "definite" from wiktionary.org:
Free from any doubt.
In this context, clear becomes understandable, and unambiguous becomes with a single meaning.
It just means that instructions in an algorithm should have one and only one interpretation. Moreover, the interpretation should be obvious.
A statement like "Repeat steps 1 to 4 a few times" does not fit the criteria as "few times" can mean different number of tries to different people.
On the other hand, a statement like "Repeat steps 1 to 4 until x is equal to y" where x and y are some parameters in the algorithm is indeed clear and unambiguous.
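As a small illustration (a sketch of my own in Python), only the definite version can actually be turned into code:

def repeat_until_equal(x, y, step):
    # Definite: "repeat ... until x is equal to y" -- the stopping condition
    # has exactly one interpretation.
    while x != y:
        x = step(x)
    return x

# Indefinite: "repeat steps 1 to 4 a few times" -- there is no single way to
# code this, because "a few" means different things to different readers.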

Converting PI digits into text strings

It's kind of interesting that pi's decimal representation never ends and never settles into a permanently repeating pattern. This means it is quite possibly the case (though unproven) that pi contains every possible finite combination of digits.
This guy calculated 5 trillion (5×10^12) digits of pi :D
http://www.numberworld.org/misc_runs/pi-5t/details.html
From the internet: "Converted into ASCII text, somewhere in that infinite string of digits is the name of every person you will ever love, the date, time and manner of your death, and the answers to all the great questions of the universe."
Wondering if somebody has already converted and analyzed the resulting string for known sequences of letters (words/sentences)?
Check out this page: http://pi.nersc.gov/.
It allows you to search for both character strings and hexadecimal sequences. Note that this search engine has only indexed the first 4 billion decimal digits of pi, and uses a formula for arbitrarily positioned binary or hexadecimal digits beyond those indexed.
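If you want to experiment yourself, here is a minimal sketch in Python; the digits-to-letters encoding (two decimal digits mod 26 per letter) is an arbitrary choice for illustration, not the scheme that search engine uses:

def digits_to_letters(digits):
    # Map each pair of decimal digits to a lowercase letter.
    letters = []
    for i in range(0, len(digits) - 1, 2):
        pair = int(digits[i:i + 2])
        letters.append(chr(ord('a') + pair % 26))
    return "".join(letters)

sample = "14159265358979323846"   # the first digits after the "3."
text = digits_to_letters(sample)
print(text)
print("dog" in text)              # naive substring search for a known word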
The idea that Pi contains everything is a nice one, but if it's correct, it also contains an infinite amount of false information about everything. For example, if Pi contains a list of all the people you will ever love, then it also contains lists that look like lists of people you will love but are really just mixes of names arranged in a way that makes them look legitimate.
Following the same idea, the date, time, and manner of your death could also be "falsified". For example, let's say you are a man named Jason Delara, and you die at the age of 83 at 11:35 PM in your sleep. In Pi somewhere it can say in ASCII text "Jason Delara will die at age 83, 11:35 PM, passed in his sleep." It would also say somewhere else that "Jason Delara will die at age 35, at 6:00 AM, passed in a car accident." There could be an "infinite" amount of these false predictions.
There's also the fact that, following the idea above, all but one of the answers to any given great question of the universe found in those digits are wrong, even if many of the answers make sense. I've thought about this a lot, and I wondered, "What if some stretch of the digits states which facts are correct and which are not?" The answer is: then there is an infinite number of false lists in the digits claiming to do the same as the real one. In short, it would be pointless to convert Pi to ASCII text to try and figure everything out.
I know I'm a little late to the party, but I wrote this for anybody who comes here looking for the answers to the universe in an endless, non-repeating decimal.
It is massively convenient that pi is an irrational number whose digits we are still computing: if you can't find what you want in the sequence so far, then by definition it just happens to be further on.
As for it containing hidden information - if you create any random sequence long enough, you'll be able to create simple words from the resulting output.
Conspiracy theorists just love to see patterns where there are none. They forget the other noise and are endlessly fascinated by mere coincidences.
Would just like to provide further context to this question. Yes, the point is that PI goes on infinitely. That means there are endless possibilities for sentence structure and letter combinations. This means every single combination of letters will happen, and is happening, in PI. So technically, everything in PI could apply to everything in the observable world around us.

What is a regular language?

I'm trying to understand the concept of language levels (regular, context free, context sensitive, etc.).
I can look this up easily, but all explanations I find are a load of symbols and talk about sets. I have two questions:
Can you describe in words what a regular language is, and how the languages differ?
Where do people learn to understand this stuff? As I understand it, it is formal mathematics? I had a couple of courses at uni which used it and barely anyone understood it as the tutors just assumed we knew it. Where can I learn it and why are people "expected" to know it in so many sources? It's like there's a gap in education.
Here's an example:
Any language belonging to this set is a regular language over the alphabet.
How can a language be "over" anything?
In the context of computer science, a word is the concatenation of symbols. The used symbols are called the alphabet. For example, some words formed out of the alphabet {0,1,2,3,4,5,6,7,8,9} would be 1, 2, 12, 543, 1000, and 002.
A language is then a subset of all possible words. For example, we might want to define a language that captures all elite MI6 agents. Those all start with double-0, so words in the language would be 007, 001, 005, and 0012, but not 07 or 15. For simplicity's sake, we say a language is "over an alphabet" instead of "a subset of words formed by concatenation of symbols in an alphabet".
In computer science, we now want to classify languages. We call a language regular if it can be decided if a word is in the language with an algorithm/a machine with constant (finite) memory by examining all symbols in the word one after another. The language consisting just of the word 42 is regular, as you can decide whether a word is in it without requiring arbitrary amounts of memory; you just check whether the first symbol is 4, whether the second is 2, and whether any more numbers follow.
All languages with a finite number of words are regular, because we can (in theory) just build a control flow tree of constant size (you can visualize it as a bunch of nested if-statements that examine one digit after the other). For example, we can test whether a word is in the "prime numbers between 10 and 99" language with the following construct, requiring no memory except the one to encode at which code line we're currently at:
if word[0] == '1':
    if word[1] == '1':  # 11
        return True     # "accept" the word, i.e. it's in the language
    if word[1] == '3':  # 13
        return True
    ...
return False
Note that all finite languages are regular, but not all regular languages are finite; our double-0 language contains an infinite number of words (007, 008, but also 004242 and 0012345), but can be tested with constant memory: to test whether a word belongs to it, check whether the first symbol is 0 and whether the second symbol is 0. If that's the case, accept it; if the word does not start with 00, it's not an MI6 code name.
Formally, the construct of a finite-state machine or a regular grammar is used to prove that a language is regular. These are similar to the if-statements above, but allow for arbitrarily long words. If there's a finite-state machine, there is also a regular grammar, and vice versa, so it's sufficient to show either. For example, the finite state machine for our double-0 language is:
start state: if input = 0 then goto state 2
start state: if input = 1 then fail
start state: if input = 2 then fail
...
state 2: if input = 0 then accept
state 2: if input != 0 then fail
accept: for any input, accept
The equivalent regular grammar is:
start → 0 B
B → 0 accept
accept → 0 accept
accept → 1 accept
...
The equivalent regular expression is:
00[0-9]*
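To make the constant-memory point concrete, here is a small Python sketch (my own illustration) of the double-0 machine above; all it ever remembers is which state it is in:

import re

def is_mi6_codename(word):
    # Finite-state check for the double-0 language (00 followed by any digits).
    state = "start"
    for symbol in word:
        if state == "start":
            state = "second_zero" if symbol == "0" else "fail"
        elif state == "second_zero":
            state = "accept" if symbol == "0" else "fail"
        elif state == "accept":
            state = "accept" if symbol.isdigit() else "fail"
        else:               # the fail state traps everything
            return False
    return state == "accept"

# Cross-check the machine against the regular expression 00[0-9]* above.
for w in ["007", "0012345", "004242", "07", "15", "0"]:
    assert is_mi6_codename(w) == bool(re.fullmatch(r"00[0-9]*", w))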
Some languages are not regular. For example, the language of any number of 1s followed by the same number of 2s (often written as 1^n 2^n, for arbitrary n) is not regular - you need more than a constant amount of memory (= a constant number of states) to store the number of 1s in order to decide whether a word is in the language.
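For contrast, here is a sketch of a checker for this language; the counter it needs grows with the input, which is exactly what constant memory cannot provide:

def is_1n2n(word):
    # Count the leading 1s; this counter grows with the input,
    # so a constant amount of memory is not enough.
    count = 0
    i = 0
    while i < len(word) and word[i] == "1":
        count += 1
        i += 1
    # The remainder must be exactly `count` 2s.
    return word[i:] == "2" * count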
This should usually be explained in the theoretical computer science course. Luckily, Wikipedia explains both formal and regular languages quite nicely.
Here are some of the equivalent definitions from Wikipedia:
[...] a regular language is a formal language (i.e., a possibly infinite set of finite sequences of symbols from a finite alphabet) that satisfies the following equivalent properties:
it can be accepted by a deterministic finite state machine.
it can be accepted by a nondeterministic finite state machine.
it can be described by a formal regular expression.
Note that the "regular expression" features provided with many programming languages are augmented with features that make them capable of recognizing languages which are not regular, and are therefore not strictly equivalent to formal regular expressions.
The first thing to note is that a regular language is a formal language, with some restrictions. A formal language is essentially a (possibly infinite) collection of strings. For example, the formal language Java is the collection of all possible Java files, which is a subset of the collection of all possible text files.
One of the most important characteristics is that unlike the context-free languages, a regular language does not support arbitrary nesting/recursion, but you do have arbitrary repetition.
A language always has an underlying alphabet which is the set of allowed symbols. For example, the alphabet of a programming language would usually either be ASCII or Unicode, but in formal language theory it's also fine to talk about languages over other alphabets, for example the binary alphabet where the only allowed characters are 0 and 1.
In my university, we were taught some formal language theory in the Compilers class, but this is probably different between different schools.

Does a specification/grammar for "Preflop Canonical Form" exist?

I hear this term used quite frequently, but have yet to see it specified (and can't find it by searching). Single starting hands are pretty straightforward. Here, I'm using Pokerstove syntax as an example:
XX for a pair (e.g.: 77, 99, TT, KK)
XYo for an off suit combination (e.g.: 72o, 54o, AQo)
XYs for a suited combination (e.g.: 76s, 86s, AKs)
Where things get a bit dicier is when it starts to take on a set-builder-like syntax, with ranges, and unions between ranges. (e.g.: 22+, A9s+, AKo for any pair, suited aces >= A9, and AKo).
As far as I see, people tend to use slight variations on the PF form, but the term "canonical form" seems to suggest that someone has at least started looking toward simplifying and/or standardizing it.
On one hand, Stove is ubiquitous enough that duplicating Prock's syntax isn't horrible, but I'd like to implement a standard if one exists.
Seems the answer is 'no' and that Pokerstove's syntax is ubiquitous enough that it can be used.
Feel free to join in fleshing out the syntax.
Two card representations are given here.
Square brackets represent single hands, e.g.: [KQ] represents all possibilities of KQ (suited and non-suited).
Intervals can be represented as [76s-23s] which includes all suited connectors from 76s to 23s. It does not include off suit hands.
Commas are used to join single hands or intervals.
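As a starting point, here is a rough Python sketch of a recognizer for the Pokerstove-style tokens described above; the rank set and the meaning of '+' are guesses based on the examples, and intervals like [76s-23s] are not covered:

import re

RANK = "[2-9TJQKA]"
# XX (pairs), XYs (suited), XYo (offsuit), each optionally followed by '+'
# ("this hand and better"), joined into comma-separated lists.
HAND = rf"(?:{RANK}{RANK}[so]?\+?)"
RANGE_LIST = re.compile(rf"^{HAND}(?:\s*,\s*{HAND})*$")

for expr in ["22+", "A9s+", "AKo", "22+, A9s+, AKo", "7x2"]:
    print(expr, "->", bool(RANGE_LIST.match(expr)))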

Looking for algorithm that reverses the sprintf() function output

I am working on a project that requires the parsing of log files. I am looking for a fast algorithm that would take groups of messages like this:
The temperature at P1 is 35F.
The temperature at P1 is 40F.
The temperature at P3 is 35F.
Logger stopped.
Logger started.
The temperature at P1 is 40F.
and puts out something in the form of a printf():
"The temperature at P%d is %dF.", Int1, Int2"
{(1,35), (1, 40), (3, 35), (1,40)}
The algorithm needs to be generic enough to recognize almost any data load in message groups.
I tried searching for this kind of technology, but I don't even know the correct terms to search for.
I think you might have overlooked fscanf() and sscanf(), which are the opposites of fprintf() and sprintf().
Overview:
A naïve algorithm keeps track of the frequency of words on a per-column basis, assuming that each line can be separated into columns with a delimiter.
Example input:
The dog jumped over the moon
The cat jumped over the moon
The moon jumped over the moon
The car jumped over the moon
Frequencies:
Column 1: {The: 4}
Column 2: {car: 1, cat: 1, dog: 1, moon: 1}
Column 3: {jumped: 4}
Column 4: {over: 4}
Column 5: {the: 4}
Column 6: {moon: 4}
We could partition these frequency lists further by grouping based on the total number of fields, but in this simple and convenient example, we are only working with a fixed number of fields (6).
The next step is to iterate through lines which generated these frequency lists, so let's take the first example.
The: meets some hand-wavy criteria and the algorithm decides it must be static.
dog: doesn't appear to be static based on the rest of the frequency list, and thus it must be dynamic as opposed to static text. We loop through a few pre-defined regular expressions and come up with /[a-z]+/i.
over: same deal as #1; it's static, so leave as is.
the: same deal as #1; it's static, so leave as is.
moon: same deal as #1; it's static, so leave as is.
Thus, just from going over the first line we can put together the following regular expression:
/The ([a-z]+) jumped over the moon/
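Here is a minimal Python sketch of that per-column idea (my own illustration; the "hand-wavy" static/dynamic test below is simply whether a column ever varies):

from collections import Counter
import re

lines = [
    "The dog jumped over the moon",
    "The cat jumped over the moon",
    "The moon jumped over the moon",
    "The car jumped over the moon",
]

rows = [line.split() for line in lines]
columns = list(zip(*rows))               # assumes a fixed number of fields
freqs = [Counter(col) for col in columns]

pattern_parts = []
for word, freq in zip(rows[0], freqs):
    if len(freq) == 1:                   # every line agrees -> static text
        pattern_parts.append(re.escape(word))
    else:                                # the column varies -> dynamic field
        pattern_parts.append(r"([a-z]+)")

print(" ".join(pattern_parts))           # The ([a-z]+) jumped over the moon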
Considerations:
Obviously one can choose to scan part or the whole document for the first pass, as long as one is confident the frequency lists will be a sufficient sampling of the entire data.
False positives may creep into the results, and it will be up to the filtering algorithm (hand-waving) to provide the best threshold between static and dynamic fields, or some human post-processing.
The overall idea is probably a good one, but the actual implementation will definitely weigh in on the speed and efficiency of this algorithm.
Thanks for all the great suggestions.
Chris is right. I am looking for a generic solution for normalizing any kind of text. The problem boils down to dynamically finding patterns in two or more similar strings.
Almost like predicting the next element in a set, based on the previous two:
1: Everest is 30000 feet high
2: K2 is 28000 feet high
=> What is the pattern?
=> Answer:
[name] is [number] feet high
Now the text file can have millions of lines and thousands of patterns. I would like to parse the files very, very fast, find the patterns and collect the data sets that are associated with each pattern.
I thought about creating some high level semantic hashes to represent the patterns in the message strings.
I would use a tokenizer and give each of the tokens types a specific "weight".
Then I would group the hashes and rate their similarity. Once the grouping is done I would collect the data sets.
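Roughly, the tokenizer/hash idea could be sketched like this (the token types and weights here are just placeholders):

import re
from collections import defaultdict

# Token types, from most to least specific; each carries a placeholder weight
# that a later similarity/rating step could use.
TOKEN_TYPES = [
    (re.compile(r"^-?\d+(\.\d+)?$"), "<num>", 1.0),
    (re.compile(r"^[A-Za-z]\w*$"),   "<name>", 0.5),
]

def signature(line):
    # Replace each token by its type so similar messages hash to the same key.
    out = []
    for tok in line.split():
        for rx, label, _weight in TOKEN_TYPES:
            if rx.match(tok):
                out.append(label)
                break
        else:
            out.append(tok)   # anything else is kept verbatim
    return " ".join(out)

groups = defaultdict(list)
for line in ["Everest is 30000 feet high", "K2 is 28000 feet high"]:
    groups[signature(line)].append(line)
print(dict(groups))   # both lines end up under "<name> <name> <num> <name> <name>"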
I was hoping that I wouldn't have to reinvent the wheel and could reuse something that is already out there.
Klaus
It depends on what you are trying to do. If your goal is to quickly generate sprintf() input, this works. If you are trying to parse data, maybe regular expressions would do too.
You're not going to find a tool that can simply take arbitrary input, guess what data you want from it, and produce the output you want. That sounds like strong AI to me.
Producing something like this, even just to recognize numbers, gets really hairy. For example, is "123.456" one number or two? How about "123,456"? Is "35F" a decimal number followed by an 'F', or is it the hex value 0x35F? You're going to have to build something that will parse in the way you need. You can do this with regular expressions, or you can do it with sscanf, or you can do it some other way, but you're going to have to write something custom.
However, with basic regular expressions, you can do this yourself. It won't be magic, but it's not that much work. Something like this will parse the lines you're interested in and consolidate them (Perl):
my @vals = ();
while (defined(my $line = <>))
{
    # Collect the two integers from each matching line.
    if ($line =~ /The temperature at P(\d*) is (\d*)F\./)
    {
        push(@vals, "($1,$2)");
    }
}
print "The temperature at P%d is %dF. {";
for (my $i = 0; $i < @vals; $i++)
{
    print $vals[$i];
    if ($i < @vals - 1)
    {
        print ",";
    }
}
print "}\n";
The output from this is:
The temperature at P%d is %dF. {(1,35),(1,40),(3,35),(1,40)}
You could do something similar for each type of line you need to parse. You could even read these regular expressions from a file, instead of custom coding each one.
I don't know of any specific tool to do that. What I did when I had a similar problem to solve was trying to guess regular expressions to match lines.
I then processed the files and displayed only the unmatched lines. If a line is unmatched, it means that the pattern is wrong and should be tweaked or another pattern should be added.
After around an hour of work, I succeeded in finding the ~20 patterns to match 10000+ lines.
In your case, you can first "guess" that one pattern is "The temperature at P[1-3] is [0-9]{2}F.". If you reprocess the file removing any matched line, it leaves "only":
Logger stopped.
Logger started.
Which you can then match with "Logger (.+).".
You can then refine the patterns and find new ones to match your whole log.
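That workflow is easy to script; here is a rough Python sketch that applies the guessed patterns and prints whatever is still unmatched:

import re
import sys

patterns = [
    re.compile(r"The temperature at P[1-3] is [0-9]{2}F\."),
    re.compile(r"Logger (.+)\."),
]

for line in sys.stdin:
    line = line.rstrip("\n")
    if not any(p.fullmatch(line) for p in patterns):
        print("UNMATCHED:", line)   # tweak a pattern or add a new one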
@John: I think that the question relates to an algorithm that actually recognises patterns in log files and automatically "guesses" appropriate format strings and data for it. The *scanf family can't do that on its own; it can only be of help once the patterns have been recognised in the first place.
@Derek Park: Well, even a strong AI couldn't be sure it had the right answer.
Perhaps some compression-like mechanism could be used:
Find large, frequent substrings
Find large, frequent substring patterns. (i.e. [pattern:1] [junk] [pattern:2])
Another item to consider might be to group lines by edit-distance. Grouping similar lines should split the problem into one-pattern-per-group chunks.
Actually, if you manage to write this, let the whole world know, I think a lot of us would like this tool!
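For the edit-distance grouping idea, here is a minimal sketch using Python's difflib (a similarity ratio rather than a true edit distance, but close enough for grouping):

from difflib import SequenceMatcher

def similar(a, b, threshold=0.7):
    return SequenceMatcher(None, a, b).ratio() >= threshold

lines = [
    "The temperature at P1 is 35F.",
    "The temperature at P1 is 40F.",
    "The temperature at P3 is 35F.",
    "Logger stopped.",
    "Logger started.",
]

groups = []
for line in lines:
    for group in groups:
        if similar(line, group[0]):   # compare against the group's first line
            group.append(line)
            break
    else:
        groups.append([line])

for group in groups:
    print(group)   # temperature lines in one group, Logger lines in another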
@Anders
Well, even a strong AI couldn't be sure it had the right answer.
I was thinking that sufficiently strong AI could usually figure out the right answer from the context. e.g. Strong AI could recognize that "35F" in this context is a temperature and not a hex number. There are definitely cases where even strong AI would be unable to answer. Those are the same cases where a human would be unable to answer, though (assuming very strong AI).
Of course, it doesn't really matter, since we don't have strong AI. :)
http://www.logparser.com forwards to an IIS forum which seems fairly active. This is the official site for Gabriele Giuseppini's "Log Parser Toolkit". While I have never actually used this tool, I did pick up a cheap copy of the book from Amazon Marketplace - today a copy is as low as $16. Nothing beats a dead-tree-interface for just flipping through pages.
Glancing at this forum, I had not previously heard about the "New GUI tool for MS Log Parser, Log Parser Lizard" at http://www.lizardl.com/.
The key issue of course is the complexity of your GRAMMAR. To use any kind of log parser, in the sense the term is commonly used, you need to know exactly what you're scanning for, i.e. you need to be able to write a BNF for it. Many years ago I took a course based on Aho and Ullman's "Dragon Book", and the thoroughly understood LALR technology can give you optimal speed, provided of course that you have that CFG.
On the other hand it does seem you're possibly reaching for something AI-like, which is a different order of complexity entirely.
