My problem is the following. I have a list of substitutions, including one substitution for each letter of the alphabet, but also some substitutions for groups of more than one letter. For example, in my cipher p becomes b, l becomes w, e becomes i, but le becomes by, and ple becomes memi.
So, while I can think of a few simple/naïve ways of implementing this cipher, none of them is very efficient, and I was wondering what the most efficient way to do it would be. The answer doesn't have to be in any particular language; a general structured-English algorithm would be fine, but if it must be in some language, I'd prefer C++, Java, or similar.
EDIT: I don't need this cipher to be decipherable; an algorithm that mapped all single letters to the letter 'w' but mapped the string 'had' to the string 'jon' would be OK too (the string "Mary had a little lamb." would then become "Wwww jon w wwwwww wwww.").
I'd like the algorithm to be fully general.
One possible approach is to use a deterministic automaton. The closest commonly used technique to your problem is the Aho–Corasick string-matching algorithm. The difference is that instead of just matching, you would emit ciphertext at certain transitions; in general, each transition either emits ciphertext or does not.
In your example
p -> b
l -> w
e -> i
le -> by
ple -> memi
The automaton (in Erlang-like pseudocode):
start(p) -> p(next());
start(l) -> l(next());
start(e) -> e(next());
...
p(l) -> pl(next());
p(X) -> emit(b), start(X).
l(e) -> emit(by), start(next());
l(X) -> emit(w), start(X).
e(X) -> emit(i), start(X).
pl(e) -> emit(memi), start(next());
pl(X) -> emit(b), l(X).
If you are not familiar with Erlang: start(), p(), and so on are functions, one per state. Each line with -> is one transition, and the actions follow the ->. emit() is the function that emits ciphertext, next() is the function returning the next character, and X is a variable matching any other character.
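To make the emit-as-you-go idea concrete, here is a minimal Java sketch (the question mentioned C++/Java). Instead of precomputing automaton states, it does a greedy longest-match lookup at each input position, which gives the same output as the automaton above for these rules; the RULES map, MAX_LEN constant, and class name are illustrative, not part of the original answer.

import java.util.Map;

public class Cipher {
    // The example substitutions; the longest applicable pattern wins.
    static final Map<String, String> RULES = Map.of(
            "p", "b", "l", "w", "e", "i", "le", "by", "ple", "memi");
    static final int MAX_LEN = 3; // length of the longest pattern in RULES

    static String encode(String input) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < input.length()) {
            boolean matched = false;
            // Try the longest pattern first, then shorter ones.
            for (int len = Math.min(MAX_LEN, input.length() - i); len >= 1; len--) {
                String sub = RULES.get(input.substring(i, i + len));
                if (sub != null) {
                    out.append(sub);
                    i += len;
                    matched = true;
                    break;
                }
            }
            if (!matched) {           // characters without a rule pass through unchanged
                out.append(input.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("ple"));  // memi
        System.out.println(encode("pale")); // baby (p -> b, a unchanged, le -> by)
    }
}

A real automaton avoids re-reading input on failed long matches; for large rule sets, building the Aho–Corasick trie with failure links pays off, but the greedy scan is enough to see the behaviour.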
Related
I am having a problem sorting a String array based on which strings contain the most occurrences of the letter 'p', i.e., s1 ("apple") would come before s2 ("ape")...
I am learning how to implement a Comparator to do this, using s1.compareTo(s2) and a lambda. The big question is: can't I somehow use a stream to do this?
This is how I did it for my String array COUNTRIES, sorted in reverse alphabetical order:
Comparator<String> reverseAlphabetic = (s1,s2) -> -s1.compareToIgnoreCase(s2);
Arrays.sort(COUNTRIES,reverseAlphabetic);
System.out.println("\nCountries in reverse alphabetic order");
for (int i=0; i<10;i++)
System.out.println("\t"+COUNTRIES[i]);
You can do something like this (but see the correction below):
Comparator<String> comparator = (str1, str2) ->
((str1.length() - str1.replaceAll("p", "").length()) -
(str2.length() - str2.replaceAll("p", "").length()));
List<String> list = Arrays.asList("ape", "apple", "appple");
list.sort(comparator);
Actually, my solution had fundamental mistakes. @Holger commented a correct solution on my deleted answer:
list.sort(Comparator
.comparingLong((String s) -> s.chars().filter(c -> c == 'p').count()).reversed());
Comment by @Holger:
Your first variant is broken too, as a - b - c - d is not the same as a - b - (c - d). It just produces the desired result by accident; a different original order will lead to different results. Yet another reason not to use minus when you mean, e.g., Integer.compare(…, …). A correct and efficient comparator can be as simple as Comparator.comparingLong(s -> s.chars().filter(c -> c=='p').count()) (or, to have most occurrences first, Comparator.comparingLong((String s) -> s.chars().filter(c -> c=='p').count()).reversed()), though an old-fashioned counting loop would be even better.
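For completeness, here is one way to spell the "old-fashioned counting loop" the comment recommends; this is a sketch, and the countP helper and class name are invented for the demo:

import java.util.Arrays;
import java.util.Comparator;

public class SortByP {
    // Count occurrences of 'p' with a plain loop instead of a stream.
    static int countP(String s) {
        int n = 0;
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == 'p') n++;
        }
        return n;
    }

    public static void main(String[] args) {
        String[] words = {"ape", "appple", "apple"};
        // Most occurrences first, using a key extractor instead of subtraction.
        Arrays.sort(words, Comparator.comparingInt(SortByP::countP).reversed());
        System.out.println(Arrays.toString(words)); // [appple, apple, ape]
    }
}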
Given the following grammar, I have to find the appropriate semantic actions to compute, for each string of the language, the number of pairs of parentheses in the string.
S -> (L)
S -> a
L -> L, S
L -> S
Usually, to perform this type of exercise, I build a derivation tree of a sample string and then add the attributes; after that it is easier to find the semantic rules.
So I built this derivation tree for the string "((a, (a), a))", but I can't proceed with the resolution of the exercise. How do I count the pairs of parentheses? I'm not able to do that...
I don't want the solution, but I'd like someone to help me with the reasoning to apply in these cases.
(I'm sorry for the bad tree...)
The OP wrote:
Might these be the correct semantic actions for this grammar?
S -> (L) {S.p = counter + 1}
S -> a {do nothing}
L -> L, S {L.p = S.p}
L -> S {L.p = S.p}
.p is a synthesized attribute.
S -> ( S1 ) { S.count = S1.count + 1 }
S -> S1 S2  { S.count = S1.count + S2.count }
S -> ε      { S.count = 0 }
(The subscripts distinguish the instances of S; note that this uses a simplified balanced-parentheses grammar rather than the original one.)
This should make things clear.
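If it helps to see how a synthesized attribute behaves, here is a hedged Java sketch for the original grammar (S -> (L) | a, L -> L , S | S): each parse method returns its node's count attribute, the left-recursive L rule is rewritten as a loop, and the class and method names are purely illustrative.

public class ParenCounter {
    private final String in;
    private int pos = 0;

    ParenCounter(String s) { in = s; }

    // S -> ( L ) | a ; the return value plays the role of the synthesized attribute S.count
    int parseS() {
        if (in.charAt(pos) == '(') {
            pos++;              // consume '('
            int p = parseL();   // L.count
            pos++;              // consume ')'
            return p + 1;       // S.count = L.count + 1
        }
        pos++;                  // consume 'a'
        return 0;               // S.count = 0
    }

    // L -> L , S | S, rewritten iteratively: L.count is the sum of the S.counts
    int parseL() {
        int p = parseS();
        while (pos < in.length() && in.charAt(pos) == ',') {
            pos++;                                                    // consume ','
            while (pos < in.length() && in.charAt(pos) == ' ') pos++; // skip blanks
            p += parseS();
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(new ParenCounter("((a, (a), a))").parseS()); // 3
    }
}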
Consider the grammar below:
S -> S a S | b B
B -> A c B | ε
A -> d A d | ε
For the grammar given above, write the syntax directed definition that
prints the string that is being parsed and construct an annotated parse tree for the string ‘bddcab’.
Solution:
Now, rewriting the above grammar, we have:
S -> S1 a S2
S -> b B
B -> A c B1
B -> ε
A -> d A1 d
A -> ε
(The numbers 1 and 2 following a non-terminal denote subscripts; the subscripts distinguish instances of the same non-terminal.)
Here is the grammar along with its semantic rules:
Production        Semantic Rules
S -> S1 a S2      S.val = S1.val + a.lexval + S2.val   { print S.val }
S -> b B          S.val = b.lexval + B.val             { print S.val }
B -> A c B1       B.val = A.val + c.lexval + B1.val
B -> ε
A -> d A1 d       A.val = d.lexval + A1.val + d.lexval
A -> ε
(The '+' operator here is merely string concatenation.)
Is this solution alright? I have a feeling that it might not be accurate.
Here's the annotated parse tree.
I think that those print actions in the S rules will backfire because S can occur multiple times.
S can generate SaS. But each of those S's can also generate SaS.
Basically, if you're building the printed representation as a semantic property, you can only do the print outside of the grammar once it is fully evaluated, ensuring that it happens only once.
This can be done by introducing a pseudo start symbol X: S is reduced to X just once, and so the print happens just once, pulling the final val from the top-level S.
X -> S { print S.val } // print the top-level S's val, just once.
The other approach would be to have truly syntax-directed printing, whereby the side effect of printing happens as the parsing reductions take place, e.g., a Yacc-like embedded rule among the right-hand symbols:
S -> S1 a { print a.lexeme } S2 { /* other semantic rules go here */ }
In every rule that recognizes a terminal, print the terminal as soon as it is recognized. So here, we know that the reduction of S1 causes all of its terminals to be printed (by similar rules all over the grammar). Then we recognize an a and print it, and then S2 is recognized and reduced, causing all of its terminals to be printed. You may recognize that this is closely analogous to an inorder traversal of a tree.
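A hedged sketch of that last point: printing each terminal the moment it is recognized amounts to printing the leaves of the parse tree left to right. The Node class below is invented for illustration; the tree hard-coded in main is the parse tree of 'bddcab' under the given grammar.

import java.util.List;

class Node {
    final String symbol;        // terminal text ("" for ε), or a non-terminal name
    final List<Node> children;  // empty for terminals

    Node(String symbol, Node... kids) {
        this.symbol = symbol;
        this.children = List.of(kids);
    }

    // Print leaves left to right: equivalent to emitting each terminal on reduction.
    void printTerminals() {
        if (children.isEmpty()) {
            System.out.print(symbol);
            return;
        }
        for (Node c : children) c.printTerminals();
    }
}

public class PrintParsed {
    public static void main(String[] args) {
        Node eps = new Node("");
        // A -> d A d, with the inner A -> ε
        Node a = new Node("A", new Node("d"), new Node("A", eps), new Node("d"));
        // B -> A c B, with the trailing B -> ε
        Node b = new Node("B", a, new Node("c"), new Node("B", eps));
        Node s1 = new Node("S", new Node("b"), b);                  // S -> b B
        Node s2 = new Node("S", new Node("b"), new Node("B", eps)); // S -> b B
        Node s = new Node("S", s1, new Node("a"), s2);              // S -> S a S
        s.printTerminals();                                         // prints bddcab
        System.out.println();
    }
}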
The Context
The context of this question is that I want to play around with Gene Expression Programming (GEP), a form of evolutionary algorithm, using Erlang. GEP makes use of a string-based DSL called 'Karva notation'. Karva notation is easily translated into expression parse trees, but the translation algorithm assumes an implementation with mutable objects: incomplete sub-expressions are created early on in the translation process, and their own sub-expressions are filled in later with values that were not known at the time they were created.
The purpose of Karva notation is that it guarantees syntactically correct expressions are created without any expensive encoding techniques or corrections of genetic code. The problem is that with a single-assignment programming language like Erlang, I have to recreate the expression tree continually as each sub expression gets filled in. This takes an inexpensive - O(n)? - update operation and converts it into one that would complete in exponential time (unless I'm mistaken). If I can't find an efficient functional algorithm to convert K-expressions into expression trees, then one of the compelling features of GEP is lost.
The Question
I appreciate that the K-expression translation problem is pretty obscure, so what I want is advice on how to convert an inherently-non-functional algorithm (alg that exploits mutable data structures) into one that does not. How do pure functional programming languages adapt many of the algorithms and data structures that were produced in the early days of computer science that depend on mutability to get the performance characteristics they need?
Carefully designed immutability avoids unnecessary updating
Immutable data structures are only an efficiency problem if they're constantly changing, or you build them up the wrong way. For example, continually appending more to the end of a growing list is quadratic, whereas concatenating a list of lists is linear. If you think carefully, you can usually build up your structure in a sensible way, and lazy evaluation is your friend - hand out a promise to work it out and stop worrying.
Blindly trying to replicate an imperative algorithm can be inefficient, but you're mistaken in your assertion that functional programming has to be asymptotically bad here.
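To illustrate the append-versus-concatenate point in imperative terms (a loose analogy, since Java's ArrayList is not a persistent list): copying the whole accumulated result on every append does quadratic total work, while gathering the chunks and flattening once is linear. The class and method names here are invented for the demo.

import java.util.ArrayList;
import java.util.List;

public class AppendDemo {
    // Quadratic: each step copies everything accumulated so far before appending.
    static List<Integer> copyEachTime(List<List<Integer>> chunks) {
        List<Integer> acc = List.of();
        for (List<Integer> chunk : chunks) {
            List<Integer> next = new ArrayList<>(acc); // O(size so far) copy per step
            next.addAll(chunk);
            acc = next;
        }
        return acc;
    }

    // Linear: flatten once at the end; total work proportional to the output size.
    static List<Integer> flattenOnce(List<List<Integer>> chunks) {
        List<Integer> acc = new ArrayList<>();
        for (List<Integer> chunk : chunks) acc.addAll(chunk);
        return acc;
    }

    public static void main(String[] args) {
        List<List<Integer>> chunks = List.of(List.of(1, 2), List.of(3), List.of(4, 5));
        System.out.println(copyEachTime(chunks)); // [1, 2, 3, 4, 5]
        System.out.println(flattenOnce(chunks));  // [1, 2, 3, 4, 5]
    }
}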
Case study: pure functional GEP: Karva notation in linear time
I'll stick with your case study of parsing Karva notation for GEP. (I've played with this solution more fully in this answer.)
Here's a fairly clean pure functional solution to the problem. I'll take the opportunity to name drop some good general recursion schemes along the way.
Code
(Importing Data.Tree supplies data Tree a = Node {rootLabel :: a, subForest :: Forest a} where type Forest a = [Tree a].)
import Data.Tree
import Data.Tree.Pretty -- from the pretty-tree package for visualising trees
arity :: Char -> Int
arity c
| c `elem` "+*-/" = 2
| c `elem` "Q" = 1
| otherwise = 0
A hylomorphism is the composition of an anamorphism (build up, unfoldr) and a catamorphism (combine, foldr).
These terms were introduced to the FP community in the seminal paper Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire.
We're going to pull the levels out (ana/unfold) and combine them back together (cata/fold).
hylomorphism :: b -> (a -> b -> b) -> (c -> (a, c)) -> (c -> Bool) -> c -> b
hylomorphism base combine pullout stop seed = hylo seed where
hylo s | stop s = base
| otherwise = combine new (hylo s')
where (new,s') = pullout s
To pull out a level, we use the total arity from the previous level to find where to split off this new level, and pass on the total arity for this one ready for next time:
pullLevel :: (Int,String) -> (String,(Int,String))
pullLevel (n,cs) = (level,(total, cs')) where
(level, cs') = splitAt n cs
total = sum $ map arity level
To combine a level (as a String) with the level below (that's already a Forest), we just pull off the number of trees that each character needs.
combineLevel :: String -> Forest Char -> Forest Char
combineLevel "" [] = []
combineLevel (c:cs) levelBelow = Node c subforest : combineLevel cs theRest
where (subforest,theRest) = splitAt (arity c) levelBelow
Now we can parse the Karva using a hylomorphism. Note that we seed it with a total arity from outside the string of 1, since there's only one node at the root level. Correspondingly we apply head to the result to get this singleton back out after the hylomorphism.
karvaToTree :: String -> Tree Char
karvaToTree cs = let
zero (n,_) = n == 0
in head $ hylomorphism [] combineLevel pullLevel zero (1,cs)
Linear Time
There's no exponential blowup, nor repeated O(log(n)) lookups or expensive modifications, so we shouldn't be in too much trouble.
arity is O(1)
splitAt part is O(part)
pullLevel (part,cs) is O(part): the splitAt that grabs the level costs O(part), plus O(part) for the map arity level, so O(part) overall
combineLevel (c:cs) is O(arity c) for the splitAt, and O(sum $ map arity cs) for the recursive call
hylomorphism [] combineLevel pullLevel zero (1,cs)
makes a pullLevel call for each level, so the total pullLevel cost is O(sum parts) = O(n)
makes a combineLevel call for each level, so the total combineLevel cost is O(sum $ map arity levels) = O(n), since the total arity of the entire input is bound by n for valid strings.
makes O(#levels) calls to zero (which is O(1)), and #levels is bound by n, so that's below O(n) too
Hence karvaToTree is linear in the length of the input.
I think that puts to rest the assertion that you needed to use mutability to get a linear algorithm here.
Demo
Let's draw the results (because the Tree syntax is so noisy it's hard to read the output!). You have to cabal install pretty-tree to get Data.Tree.Pretty.
see :: Tree Char -> IO ()
see = putStrLn.drawVerticalTree.fmap (:"")
ghci> karvaToTree "Q/a*+b-cbabaccbac"
Node {rootLabel = 'Q', subForest = [Node {rootLabel = '/', subForest = [Node {rootLabel = 'a', subForest = []},Node {rootLabel = '*', subForest = [Node {rootLabel = '+', subForest = [Node {rootLabel = '-', subForest = [Node {rootLabel = 'b', subForest = []},Node {rootLabel = 'a', subForest = []}]},Node {rootLabel = 'c', subForest = []}]},Node {rootLabel = 'b', subForest = []}]}]}]}
ghci> see $ karvaToTree "Q/a*+b-cbabaccbac"
Q
|
/
|
------
/ \
a *
|
-----
/ \
+ b
|
----
/ \
- c
|
--
/ \
b a
which matches the output expected from this tutorial where I found the example:
There isn't a single way to do this; it really has to be attempted case by case. I typically try to break these problems down into simpler operations using fold and unfold and then optimize from there. The Karva decoding case is a breadth-first tree unfold, as others have noted, so I started with unfoldTreeM_BF. Perhaps there are similar functions in Erlang.
If the decoding operation is unreasonably expensive, you could memoize the decoding and share/reuse subtrees... though it probably wouldn't fit into a generic tree unfolder and you'd need to write a specialized function to do so. If the fitness function is slow enough, it may be fine to use a naive decoder like the one listed below; it will fully rebuild the tree on each invocation.
import Control.Monad.State.Lazy
import Data.Tree
type MaxArity = Int
type NodeType = Char
treeify :: MaxArity -> [Char] -> Tree NodeType
treeify maxArity (x:xs) = evalState (unfoldTreeM_BF (step maxArity) x) xs
treeify _ [] = error "empty list"
step :: MaxArity -> NodeType -> State [Char] (NodeType, [NodeType])
step maxArity node = do
xs <- get
-- figure out the actual child node count and use it instead of maxArity
let (children, ys) = splitAt maxArity xs
put ys
return (node, children)
main :: IO ()
main = do
let x = treeify 3 "0138513580135135135"
putStr $ drawTree . fmap (:[]) $ x
return ()
There are a couple of options when mutable state seems to be required in functional programming.
Use a different algorithm that solves the same problem. E.g., quicksort is usually formulated in terms of in-place mutation and may therefore be less useful in a functional setting, whereas mergesort is generally better suited. I can't tell if this option is possible or makes sense in your case.
Even functional programming languages usually provide some way to mutate state. (This blog post seems to show how to do it in Erlang.) For some algorithms and data structures this is indeed the only available option (there's active research on the topic, I think); for example hash tables in functional programming languages are generally implemented with mutable state.
In your case, I'm not so sure immutability really leads to a performance bottleneck. You are right that the (sub)tree will be recreated on update, but the Erlang implementation will probably reuse all the subtrees that haven't changed, leading to O(log n) complexity per update instead of O(1) with mutable state. Also, the nodes of the trees won't be copied, only the references to the nodes, which is relatively efficient. You can read about tree updates in a functional setting in, e.g., Okasaki's thesis or his book "Purely Functional Data Structures" based on it. I'd try implementing the algorithm with an immutable data structure and switch to a mutable one only if you hit a performance problem.
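To make the structural-sharing point concrete, here is a hedged Java sketch of path copying on an immutable binary tree: relabelling one node allocates only the nodes on the path from the root down to it, and every untouched subtree is shared by reference. The T class and relabel helper are invented for illustration.

// An immutable binary tree node; fields are final and never mutated.
final class T {
    final String label;
    final T left, right;

    T(String label, T left, T right) {
        this.label = label;
        this.left = left;
        this.right = right;
    }

    // Return a new tree with the node at 'path' (a string of 'L'/'R' steps) relabelled.
    // Only O(depth) nodes are allocated; siblings along the path are shared as-is.
    static T relabel(T t, String path, String newLabel) {
        if (path.isEmpty()) return new T(newLabel, t.left, t.right);
        if (path.charAt(0) == 'L')
            return new T(t.label, relabel(t.left, path.substring(1), newLabel), t.right);
        return new T(t.label, t.left, relabel(t.right, path.substring(1), newLabel));
    }
}

public class PathCopying {
    public static void main(String[] args) {
        T root = new T("root", new T("x", null, null), new T("y", null, null));
        T updated = T.relabel(root, "L", "x'");
        System.out.println(updated.right == root.right); // true: the right subtree is shared
    }
}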
Also see some relevant SO questions here and here.
I think I figured out how to solve your particular problem with the K trees (the general problem is too hard :P). My solution is presented in some horrible sort of hybrid Python-like pseudocode (I am very slow at my FP today), but it doesn't change a node after you create one (the trick is building the tree bottom-up).
First, we need to find which nodes belong to which level:
def levels(curr_size, nodes):
    if not nodes:
        return []
    this_level, rest = nodes[:curr_size], nodes[curr_size:]
    next_size = sum(arity(node) for node in this_level)
    return [this_level] + levels(next_size, rest)

(The initial curr_size is 1.)
So in the +/*abcd example, this should give you [+, /*, abcd]. Now you can convert this into a tree bottom-up:
curr_trees = [Node(leaf, []) for leaf in all_levels[-1]]  # the last level is all leaves
for level in reversed(all_levels[:-1]):
    next_trees = []
    for root in level:
        n = arity(root)
        children, curr_trees = curr_trees[:n], curr_trees[n:]
        next_trees.append(Node(root, children))
    curr_trees = next_trees

curr_trees should now be a list containing the single root node.
I am pretty sure we can convert this into single-assignment Erlang/Haskell very easily now.
I'm trying to parse a string in a self-made language into a sort of tree, e.g.:
# a * b1 b2 -> c * d1 d2 -> e # f1 f2 * g
should result in:
# a
* b1 b2
-> c
* d1 d2
-> e
# f1 f2
* g
#, * and -> are symbols. a, b1, etc. are texts.
So far I know only the RPN method for evaluating expressions, and my current solution is as follows. If I allow only a single text token after each symbol, I can easily convert the expression into RPN notation (b = b1 b2; d = d1 d2; f = f1 f2) and parse it from there:
a b c -> * d e -> * # f g * #
However, merging text tokens and whatever else comes seems to be problematic. My idea was to create marker tokens (M), so RPN looks like:
a M b2 b1 M c -> * M d2 d1 M e -> * # f2 f1 M g * #
which is also parseable and seems to solve the problem.
That said:
Does anyone have experience with something like this and can say whether or not it is a viable solution going forward?
Are there better methods for parsing expressions with operators of undefined arity?
Can you point me at some good resources?
Note. Yes, I know this example very much resembles Lisp prefix notation and maybe the way to go would be to add some brackets, but I don't have any experience here. However, the source text must not contain any artificial brackets and also I'm not sure what to do about potential infix mixins like # a * b -> [if value1 = value2] c -> d.
Thanks for any help.
EDIT: It seems that what I'm looking for are sources on postfix notation with a variable number of arguments.
I couldn't fully understand your question, but it seems what you want is a grammar definition and a parser generator. I suggest you take a look at ANTLR, it should be pretty straightforward with it to define a grammar for either your original syntax or the RPN.
Edit: (After exercising self-criticism and making some effort to understand the question details.) Actually, the language grammar is unclear from your example. However, it seems to me that the advantages of the prefix/postfix notations (i.e., that you need neither parentheses nor a precedence-aware parser) stem from the fact that you know the number of arguments every time you encounter an operator, so you know exactly how many elements to read (for prefix notation) or to pop from the stack (for postfix notation). OTOH, I believe that having operators which can take a variable number of arguments makes prefix/postfix notations not simply difficult to parse but outright ambiguous. Take the following expression, for example:
# a * b c d
Which of the following three is the canonical form?
(a, *(b, c, d))
(a, *(b, c), d)
(a, *(b), c, d)
Without knowing more about the operators, it is impossible to tell. Of course you could define some sort of greediness for the operators, e.g., * is greedier than #, so it gobbles up all the arguments. But this would defeat the purpose of a prefix notation, because you simply wouldn't be able to write down the second variant of the above three, not without additional syntactic elements.
Now that I think of it, it is probably not by sheer chance that none of the programming languages I know support operators with a variable number of arguments, only functions/procedures.