How does recursion work in a permutation?

I am looking at a permutation program written here. The code looks like this:
public static void main(String[] args) {
    permutation("", "CAT");
}

private static void permutation(String prefix, String str) {
    int n = str.length();
    if (n == 0) {
        System.out.println(prefix);
    } else {
        for (int i = 0; i < n; i++) {
            permutation(prefix + str.charAt(i), str.substring(0, i) + str.substring(i + 1));
        }
    }
}
For the word CAT, I get the following output:
CAT
CTA
ACT
ATC
TCA
TAC
I can trace through the steps of the recursion and understand how it works to get CAT and CTA, but I don't see how it keeps going. After n == 0 (which is the base case) everything should stop (which would happen after we get CTA).
Other sources:
I read the explanation here, but I'm still having trouble understanding how it keeps going. I feel like I get the concept of recursion. I can use it blindly, but I want to understand HOW it is working here.
There is another version of permutation recursion here, but that is using backtracking, and I understand that one a bit better. It's this tail recursive one I don't understand.
Question:
Can someone please explain how the recursion is working so that we get past CTA in the example above? This isn't homework. I'm just looking into different programs and working through some skillbuilders.
Thanks!

Let's look at what the first call generates:
("" + str.charAt(0), str.substring(0, 0) + str.substring(0 + 1))
p("C", "AT")
("" + str.charAt(1), str.substring(0,1) + str.substring(1 + 1))
p("A", "CT")
("" + str.charAt(2), str.substring(0, 2) + str.substring(2 + 1))
p("T", "CA")
Each call extracts each letter of str and adds it to the current prefix. The first call puts each letter of the original string as the start of a permutation. Then, for each such permutation, the algorithm extracts each letter of the remaining suffix and adds it to the accumulating prefix, so that all possibilities are explored:
C AT
CA T
CT A
"CAT"
"CTA"
A CT
AC T
AT C
"ACT"
"ATC"
T CA
TC A
TA C
"TCA"
"TAC"

Remember that each recursive call has its own state (the values of its parameters and local variables). After CAT is printed, only a single call ends; the others continue where they left off.
Think of each recursive call as a call to an entirely new function that just happens to do the same exact thing.
This is how your function will execute. It might help if you also wrote the values of each local variable (in your case it's just i and the parameters) as well. I just wrote what calls what.
p("", "CAT") -> p("C", "AT") -> p("CA", "T") -> p("CAT", "") -> CAT and return
-> return
-> p("CT", "A") -> p("CTA", "") -> CTA and return
-> return
-> return
-> p("A", "CT") -> ...

To understand recursion, it helps to start with a simpler example such as the recursive factorial function.
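For what it's worth, here is that classic factorial example in Java (a minimal sketch, not part of the original answer); the comment marks where each suspended call resumes, which is exactly what happens in the permutation code:

public class Factorial {
    // factorial(3) calls factorial(2), which calls factorial(1), which calls factorial(0).
    // Each caller is suspended until its callee returns, then it finishes its own multiply.
    static int factorial(int n) {
        if (n == 0) {
            return 1;                      // base case: ends only this one call
        }
        return n * factorial(n - 1);       // this call resumes here when the inner call returns
    }

    public static void main(String[] args) {
        System.out.println(factorial(3));  // 6
    }
}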

Related

pseudocode function to return two results for processing a tree

I'm trying to write pseudocode for a recursive function that should process a binary tree. But the problem is that the function should return two variables. I know that functions are supposed to return one value, and for more return values they should use a list, array or vector, but I don't know how to present it in pseudocode.
Does it look correct for a pseudocode?
function get_min(node *p)
begin
    if (p == NULL) then
        return list(0, 0);
    else
        (vl, wl) = get_min(p->left)
        (vr, wr) = get_min(p->right)
        if (vl > vr) then
            return list(vr + p->cost, 1)
        else
            return list(vl + p->cost, 0)
        end if
    end if
end function
Since it's pseudo-code, just about anything goes.
However, I'd rather go for omitting "list":
return (0, 0)
return (vr + p->cost, 1)
return (vl + p->cost, 0)
There doesn't seem to be any real benefit to putting "list" there - the (..., ...) format pretty clearly indicates returning two values already - there's no need to explicitly say you're returning them in a list.
Side note: You mention list, array or vector, but pair is another option in some languages, or wrapping the two in an object (typically giving the advantage of compile-time type checking - not really applicable in pseudo-code, obviously).
You could consider replacing "list" with "pair" instead of removing it if you wish to make it clear that the function only ever returns exactly 2 values.
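For instance, in Java the usual route is a tiny value class rather than a list; here is a minimal sketch with made-up class and field names:

// Carries the two results the pseudocode returns as list(cost, choice).
class MinResult {
    final int cost;
    final int choice;

    MinResult(int cost, int choice) {
        this.cost = cost;
        this.choice = choice;
    }
}

class Node {
    int cost;
    Node left, right;
}

public class TreeMin {
    static MinResult getMin(Node p) {
        if (p == null) return new MinResult(0, 0);
        MinResult l = getMin(p.left);
        MinResult r = getMin(p.right);
        if (l.cost > r.cost) return new MinResult(r.cost + p.cost, 1);
        return new MinResult(l.cost + p.cost, 0);
    }

    public static void main(String[] args) {
        Node root = new Node();
        root.cost = 5;
        MinResult result = getMin(root);
        System.out.println(result.cost + " " + result.choice); // 5 0
    }
}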
If you pass parameters by reference, then there is no need to use lists, as @Dukeling suggested.
void function get_min(node *p, int *cost, int *a)
begin
    if (p == NULL) then
        *cost = 0; *a = 0; return;
    else
        int vl, wl, vr, wr;
        get_min(p->left, &vl, &wl)
        get_min(p->right, &vr, &wr)
        if (vl > vr) then
            *cost = vr + p->cost; *a = 1; return;
        else
            *cost = vl + p->cost; *a = 0; return;
        end if
    end if
end function

Why is my Scala tail-recursion faster than the while loop?

Here are two solutions to exercise 4.9 in Cay Horstmann's Scala for the Impatient: "Write a function lteqgt(values: Array[Int], v: Int) that returns a triple containing the counts of values less than v, equal to v, and greater than v." One uses tail recursion, the other uses a while loop. I thought that both would compile to similar bytecode but the while loop is slower than the tail recursion by a factor of almost 2. This suggests to me that my while method is badly written.
import scala.annotation.tailrec
import scala.util.Random
object PerformanceTest {
def main(args: Array[String]): Unit = {
val bigArray:Array[Int] = fillArray(new Array[Int](100000000))
println(time(lteqgt(bigArray, 25)))
println(time(lteqgt2(bigArray, 25)))
}
def time[T](block : => T):T = {
val start = System.nanoTime : Double
val result = block
val end = System.nanoTime : Double
println("Time = " + (end - start) / 1000000.0 + " millis")
result
}
@tailrec def fillArray(a:Array[Int], pos:Int=0):Array[Int] = {
if (pos == a.length)
a
else {
a(pos) = Random.nextInt(50)
fillArray(a, pos+1)
}
}
@tailrec def lteqgt(values: Array[Int], v:Int, lt:Int=0, eq:Int=0, gt:Int=0, pos:Int=0):(Int, Int, Int) = {
if (pos == values.length)
(lt, eq, gt)
else
lteqgt(values, v, lt + (if (values(pos) < v) 1 else 0), eq + (if (values(pos) == v) 1 else 0), gt + (if (values(pos) > v) 1 else 0), pos+1)
}
def lteqgt2(values:Array[Int], v:Int):(Int, Int, Int) = {
var lt = 0
var eq = 0
var gt = 0
var pos = 0
val limit = values.length
while (pos < limit) {
if (values(pos) > v)
gt += 1
else if (values(pos) < v)
lt += 1
else
eq += 1
pos += 1
}
(lt, eq, gt)
}
}
Adjust the size of bigArray according to your heap size. Here is some sample output:
Time = 245.110899 millis
(50004367,2003090,47992543)
Time = 465.836894 millis
(50004367,2003090,47992543)
Why is the while method so much slower than the tailrec? Naively the tailrec version looks to be at a slight disadvantage, as it must always perform 3 "if" checks for every iteration, whereas the while version will often only perform 1 or 2 tests due to the else construct. (NB reversing the order I perform the two methods does not affect the outcome).
Test results (after reducing array size to 20000000)
Under Java 1.6.22 I get 151 and 122 ms for tail-recursion and while-loop respectively.
Under Java 1.7.0 I get 55 and 101 ms
So under Java 6 your while-loop is actually faster; both have improved in performance under Java 7, but the tail-recursive version has overtaken the loop.
Explanation
The performance difference is due to the fact that in your loop, you conditionally add 1 to the totals, while for recursion you always add either 1 or 0. So they are not equivalent. The equivalent while-loop to your recursive method is:
def lteqgt2(values:Array[Int], v:Int):(Int, Int, Int) = {
var lt = 0
var eq = 0
var gt = 0
var pos = 0
val limit = values.length
while (pos < limit) {
gt += (if (values(pos) > v) 1 else 0)
lt += (if (values(pos) < v) 1 else 0)
eq += (if (values(pos) == v) 1 else 0)
pos += 1
}
(lt, eq, gt)
}
and this gives exactly the same execution time as the recursive method (regardless of Java version).
Discussion
I'm not an expert on why the Java 7 VM (HotSpot) can optimize this better than your first version, but I'd guess it's because it's taking the same path through the code each time (rather than branching along the if / else if paths), so the bytecode can be inlined more efficiently.
But remember that this is not the case in Java 6. Why one while-loop outperforms the other is a question of JVM internals. Happily for the Scala programmer, the version produced from idiomatic tail-recursion is the faster one in the latest version of the JVM.
The difference could also be occurring at the processor level. See this question, which explains how code slows down if it contains unpredictable branching.
The two constructs are not identical. In particular, in the first case you don't need any jumps (on x86, you can use cmp and setle and add, instead of having to use cmp and jb and (if you don't jump) add). Not jumping is faster than jumping on pretty much every modern architecture.
So, if you have code that looks like
if (a < b) x += 1
where you may add or you may jump instead, vs.
x += (a < b)
(which only makes sense in C/C++ where 1 = true and 0 = false), the latter tends to be faster as it can be turned into more compact assembly code. In Scala/Java, you can't do this, but you can do
x += if (a < b) 1 else 0
which a smart JVM should recognize is the same as x += (a < b), which has a jump-free machine code translation, which is usually faster than jumping. An even smarter JVM would recognize that
if (a < b) x += 1
is the same yet again (because adding zero doesn't do anything).
C/C++ compilers routinely perform optimizations like this. Being unable to apply any of these optimizations was not a mark in the JIT compiler's favor; apparently it can as of 1.7, but only partially (i.e. it doesn't recognize that adding zero is the same as a conditional adding one, but it does at least convert x += if (a<b) 1 else 0 into fast machine code).
Now, none of this has anything to do with tail recursion or while loops per se. With tail recursion it's more natural to write the if (a < b) 1 else 0 form, but you can do either; and with while loops you can also do either. It just so happened that you picked one form for tail recursion and the other for the while loop, making it look like recursion vs. looping was the change instead of the two different ways to do the conditionals.
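For what it's worth, the two counting styles discussed above look roughly like this in plain Java (purely illustrative, not a benchmark):

public class CountingStyles {
    // Branching form: the add happens only on some iterations, so the compiled code contains a jump.
    static int countLessBranching(int[] values, int v) {
        int lt = 0;
        for (int x : values) {
            if (x < v) lt += 1;
        }
        return lt;
    }

    // Always-add form: every iteration adds 0 or 1, which a JIT can often compile without a jump.
    static int countLessAlwaysAdd(int[] values, int v) {
        int lt = 0;
        for (int x : values) {
            lt += (x < v) ? 1 : 0;
        }
        return lt;
    }

    public static void main(String[] args) {
        int[] values = {3, 7, 1, 9, 4};
        System.out.println(countLessBranching(values, 5));  // 3
        System.out.println(countLessAlwaysAdd(values, 5));  // 3
    }
}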

Scala PriorityQueue on Array[Int] performance issue with complex comparison function (caching is needed)

The problem involves the Scala PriorityQueue[Array[Int]] performance on large data set. The following operations are needed: enqueue, dequeue, and filter. Currently, my implementation is as follows:
For every element of type Array[Int], there is a complex evaluation function: (I'm not sure how to write it in a more efficient way, because it excludes the position 0)
def eval_fun(a : Array[Int]) =
if(a.size < 2) 3
else {
var ret = 0
var i = 1
while(i < a.size) {
if((a(i) & 0x3) == 1) ret += 1
else if((a(i) & 0x3) == 3) ret += 3
i += 1
}
ret / a.size
}
The ordering with a comparison function is based on the evaluation function: (Reversed, descendent order)
val arr_ord = new Ordering[Array[Int]] {
def compare(a : Array[Int], b : Array[Int]) = eval_fun(b) compare eval_fun(a) }
The PriorityQueue is defined as:
val pq: scala.collection.mutable.PriorityQueue[Array[Int]] = PriorityQueue()(arr_ord)
Question:
Is there a more elegant and efficient way to write such an evaluation function? I'm thinking of using fold, but fold cannot exclude the position 0.
Is there a data structure to generate a priority queue with unique elements? Applying a filter operation after each enqueue operation is not efficient.
Is there a cache method to reduce the evaluation computation? When adding a new element to the queue, every element may need to be evaluated by eval_fun again, which is not necessary if the evaluated value of every element can be cached. Also, I should mention that two distinct elements may have the same evaluated value.
Is there a more efficient data structure without using a generic type? Because if the number of elements reaches 10,000 and the size of each element reaches 1,000, the performance is terribly slow.
Thank you.
(1) If you want maximum performance here, I would stick to the while loop, even if it is not terribly elegant. Otherwise, if you use a view on Array, you can easily drop the first element before going into the fold:
a.view.drop(1).foldLeft(0)( (sum, a) => sum + ((a & 0x03) match {
case 0x01 => 1
case 0x03 => 3
case _ => 0
})) / a.size
(2) You can maintain two structures, the priority queue, and a set. Both combined give you a sorted-set... So you could use collection.immutable.SortedSet, but there is no mutable variant in the standard library. Do you want equality based on the priority function, or the actual array contents? Because in the latter case, you won't get around comparing arrays element by element for each insertion, undoing the effect of caching the priority function value.
(3) Just put the calculated priority along with the array in the queue. I.e.
implicit val ord = Ordering.by[(Int, Array[Int]), Int](_._1)
val pq = new collection.mutable.PriorityQueue[(Int, Array[Int])]
pq += eval_fun(a) -> a
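For readers who think more easily in Java, the same "compute the priority once and store it with the element" idea looks roughly like this (class and field names are made up for the sketch):

import java.util.Comparator;
import java.util.PriorityQueue;

// Pairs each array with its evaluated priority so the cost function runs once per element.
class Scored {
    final int priority;   // cached result of the evaluation function
    final int[] values;   // the underlying array

    Scored(int priority, int[] values) {
        this.priority = priority;
        this.values = values;
    }
}

public class CachedPriorityDemo {
    public static void main(String[] args) {
        // Highest priority first, mirroring the reversed ordering in the question.
        PriorityQueue<Scored> pq =
            new PriorityQueue<>(Comparator.comparingInt((Scored s) -> s.priority).reversed());
        int[] a = {0, 1, 3, 5};
        pq.add(new Scored(evalFun(a), a)); // evaluation happens exactly once per element
        System.out.println(pq.peek().priority);
    }

    static int evalFun(int[] a) {
        if (a.length < 2) return 3;
        int ret = 0;
        for (int i = 1; i < a.length; i++) {   // position 0 is deliberately skipped
            int mod = a[i] & 0x3;
            if (mod == 1) ret += 1;
            else if (mod == 3) ret += 3;
        }
        return ret / a.length;
    }
}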
Well, you can use a tail-recursive loop (generally these are considered more "idiomatic"):
def eval(a: Array[Int]): Int =
if (a.size < 2) 3
else {
@annotation.tailrec
def loop(ret: Int = 0, i: Int = 1): Int =
if (i >= a.size) ret / a.size
else {
val mod3 = (a(i) & 0x3)
if (mod3 == 1) loop(ret + 1, i + 1)
else if (mod3 == 3) loop(ret + 3, i + 1)
else loop(ret, i + 1)
}
loop()
}
Then you can use that to initialise a cached priority value:
case class PriorityArray(a: Array[Int]) {
lazy val priority = if (a.size < 2) 3 else {
@annotation.tailrec
def loop(ret: Int = 0, i: Int = 1): Int =
if (i >= a.size) ret / a.size
else {
val mod3 = (a(i) & 0x3)
if (mod3 == 2) loop(ret, i + 1)
else loop(ret + mod3, i + 1)
}
loop()
}
}
You may also note that I removed a redundant & op and have only a single conditional (for when it equals 2, rather than two checks, for 1 and 3) – these should have some minimal effect.
There is not a huge difference from 0__'s proposal that just came through.
My answers:
If evaluation is critical, keep it as it is. You might get better performance with recursion (not sure why, but it happens), but you'll certainly get worse performance with pretty much any other approach.
No, there isn't, but you can come pretty close to it just modifying the dequeue operation:
def distinctDequeue[T](q: PriorityQueue[T]): T = {
val result = q.dequeue
while (q.nonEmpty && q.head == result) q.dequeue
result
}
Otherwise, you'd have to keep a second data structure just to keep track of whether an element has been added or not. Either way, that equals sign is pretty heavy, but I have a suggestion to make it faster in the next item.
Note, however, that this requires that ties on the cost function get resolved in some other way.
Like 0__ suggested, put the cost on the priority queue. But you can also keep a cache on the function if that would be helpful. I'd try something like this:
import scala.collection.mutable.WrappedArray
val evalMap = scala.collection.mutable.HashMap[WrappedArray[Int], Int]()
def eval_fun(a : Array[Int]) =
if(a.size < 2) 3
else evalMap.getOrElseUpdate(a, {
var ret = 0
var i = 1
while(i < a.size) {
if((a(i) & 0x3) == 1) ret += 1
else if((a(i) & 0x3) == 3) ret += 3
i += 1
}
ret / a.size
})
import scala.math.Ordering.Implicits._
val pq = new collection.mutable.PriorityQueue[(Int, WrappedArray[Int])]
pq += eval_fun(a) -> (a : WrappedArray[Int])
Note that I did not create a special Ordering -- I'm using the standard Ordering so that the WrappedArray will break the ties. There's little cost to wrap the Array, and you get it back with .array, but, on the other hand, you'll get the following:
Ties will be broken by comparing the arrays themselves. If there aren't many ties in the cost, this should be good enough. If there are, add something else to the tuple to help break ties without comparing the arrays.
That means all equal elements will be kept together, which will enable you to dequeue all of them at the same time, giving the impression of having kept only one.
And that equals will actually work, because WrappedArrays compare like Scala sequences do.
I don't understand what you mean by that fourth point.

Fibonacci recursion with a stack

I've already asked a question about this, yet I'm still confused. I want to convert a recursive function into a stack based function without recursion. Take, for example, the fibonacci function:
algorithm Fibonacci(x):
    i = 0
    i += Fibonacci(x-1)
    i += Fibonacci(x-2)
    return i
(Yes I know I didn't put a base case and that recursion for fibonacci is really inefficient)
How would this be implemented using an explicit stack? For example, if I have the stack as a while loop, I have to jump out of the loop in order to evaluate the first recursion, and I have no way of returning to the line after the first recursion and continuing on with the second recursion.
in pseudo python
def fib(x):
    tot = 0
    stack = [x]
    while stack:
        a = stack.pop()
        if a in [0, 1]:
            tot += 1
        else:
            stack.append(a - 1)
            stack.append(a - 2)
    return tot
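If it helps, here is a hedged Java translation of the same explicit-stack idea (using java.util.ArrayDeque as the stack; the counting convention is the same as above, where 0 and 1 each contribute 1):

import java.util.ArrayDeque;
import java.util.Deque;

public class StackFib {
    // Each popped value stands for one pending call to fib; 0 and 1 add 1 to the total.
    static int fib(int x) {
        int total = 0;
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(x);
        while (!stack.isEmpty()) {
            int a = stack.pop();
            if (a == 0 || a == 1) {
                total += 1;
            } else {
                stack.push(a - 1);
                stack.push(a - 2);
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 89 with the 1, 1, 2, 3, ... convention
    }
}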
If you do not want the external counter then you will need to push tuples that keep track of the accumulated sum and whether this was a - 1 or a - 2.
It is probably worth your time to explicitly write out the call stack (by hand, on paper) for a run of say fib(3) for your code (though fix your code first so it handles the boundary conditions). Once you do this it should be clear how to do it without a stack.
Edit:
Let us analyze the running of the following Fibonacci algorithm
def fib(x):
    if (x == 0) or (x == 1):
        return 1
    else:
        temp1 = fib(x - 1)
        temp2 = fib(x - 2)
        return temp1 + temp2
(Yes, I know that this isn't even an efficient implementation of an inefficient algorithm, I have declared more temporaries than necessary.)
Now when we use a stack for function calling we need to store two kinds of things on the stack.
Where to return the result.
Space for local variables.
In our case we have three possible places to return to.
Some outside caller
Assign to temp1
Assign to temp2
We also need space for three local variables: x, temp1, and temp2. Let us examine fib(3).
When we initially call fib we tell the stack that we want to return to wherever we came from, x = 3, and temp1 and temp2 are uninitialized.
Next we push onto the stack that we want to assign temp1, x = 2, and temp1 and temp2 are uninitialized. We call again and we have a stack of
(assign temp1, x = 1, -, -)
(assign temp1, x = 2, -, -)
(out , x = 3, -, -)
we now return 1 and do the second call and get
(assign temp2, x = 0, -, -)
(assign temp1, x = 2, temp1 = 1, -)
(out , x = 3, -, -)
this now again returns 1
(assign temp1, x = 2, temp1 = 1, temp2 = 1)
(out , x = 3, -, -)
so this returns 2 and we get
(out , x = 3, temp1 =2, -)
So we now recurse to
(assign temp2, x = 1, -, -)
(out , x = 3, temp1 =2, -)
from which we can see our way out.
algorithm Fibonacci(x):
    stack = [1, 1]
    while stack.length < x
        push to the stack the sum of the two topmost stack elements
    return stack.last
You can preserve stack between calls as some kind of cache.
This stack is not a "true stack" since you can do more than only pushing, popping and checking its emptiness, but I believe this is what you are planning to do.
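A rough Java rendering of this bottom-up idea, using a list as the growable "stack" (a sketch; assumes x >= 1):

import java.util.ArrayList;
import java.util.List;

public class BottomUpFib {
    // Grows the "stack" from the bottom: each new entry is the sum of the two topmost ones.
    static int fib(int x) {
        List<Integer> stack = new ArrayList<>(List.of(1, 1));
        while (stack.size() < x) {
            int n = stack.size();
            stack.add(stack.get(n - 1) + stack.get(n - 2));
        }
        return stack.get(stack.size() - 1);
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55 with the 1, 1, 2, 3, ... convention
    }
}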
Your question inspired me to write a piece of code, that initially scared me, but I'm not really sure what to think about it now, so here it is for Your amusement. Maybe it can help a bit, with understanding things.
It's a blatant simulation of an execution of a recursive Fibonacci function implementation. The language is C#. For an argument 0 the function returns 0 - according to the definition of the Fibonacci sequence given by Ronald Graham, Donald Knuth, and Oren Patashnik in "Concrete Mathematics". It's defined this way also in Wikipedia. Checks for negative arguments are omitted.
Normally a return address is stored on the stack and execution just jumps to the right address. To simulate this I used an enum
enum JumpAddress
{
beforeTheFirstRecursiveInvocation,
betweenRecursiveInvocations,
afterTheSecondRecursiveInvocation,
outsideFibFunction
}
and a little state machine.
The Frame stored on the stack is defined like this:
class Frame
{
public int argument;
public int localVariable;
public JumpAddress returnAddress;
public Frame(int argument, JumpAddress returnAddress)
{
this.argument = argument;
this.localVariable = 0;
this.returnAddress = returnAddress;
}
}
It's a C# class - a reference type. The stack holds references to the objects placed on the heap, so when I'm doing this:
Frame top = stack.Peek();
top.localVariable = lastresult;
I'm modifying the object still referenced by the reference at the top of a stack, not a copy.
I model invocation of a function, by pushing a frame on the stack and setting the execution address in my state machine to the beginning - beforeTheFirstRecursiveInvocation.
To return form the function I set the lastresult variable, pointOfExecution variable to the return address stored in the top frame and pop the frame from the stack.
Here is the code.
public static int fib(int n)
{
Stack<Frame> stack = new Stack<Frame>(n);
//Constructor uses the parameter to reserve space.
int lastresult = 0;
//variable holding the result of the last "recursive" invocation
stack.Push(new Frame(n, JumpAddress.outsideFibFunction));
JumpAddress pointOfExecution = JumpAddress.beforeTheFirstRecursiveInvocation;
// that's how I model function invocation. I push a frame on the stack and set
// pointOfExecution. Above the frame stores the argument n and a return address
// - outsideFibFunction
while(pointOfExecution != JumpAddress.outsideFibFunction)
{
Frame top = stack.Peek();
switch(pointOfExecution)
{
case JumpAddress.beforeTheFirstRecursiveInvocation:
if(top.argument <= 1)
{
lastresult = top.argument;
pointOfExecution = top.returnAddress;
stack.Pop();
}
else
{
stack.Push(new Frame(top.argument - 1, JumpAddress.betweenRecursiveInvocations));
pointOfExecution = JumpAddress.beforeTheFirstRecursiveInvocation;
}
break;
case JumpAddress.betweenRecursiveInvocations:
top.localVariable = lastresult;
stack.Push(new Frame(top.argument - 2, JumpAddress.afterTheSecondRecursiveInvocation));
pointOfExecution = JumpAddress.beforeTheFirstRecursiveInvocation;
break;
case JumpAddress.afterTheSecondRecursiveInvocation:
lastresult += top.localVariable;
pointOfExecution = top.returnAddress;
stack.Pop();
break;
default:
System.Diagnostics.Debug.Assert(false,"This point should never be reached");
break;
}
}
return lastresult;
}
// 0 < x < 100
int fib[100];
fib[1] = 1;
fib[2] = 1;
if (x <= 2)
    cout << 1;
else {
    for (i = 3; i <= x; i++)
        fib[i] = fib[i-1] + fib[i-2];
    cout << fib[x];
}
OR, without using an array:
int a, b, c;
a = 1; b = 1; c = 1;
if (x <= 2)
    cout << 1;
else {
    for (i = 3; i <= x; i++) {
        c = a + b;
        a = b;
        b = c;
    }
    cout << c;
}
The last method works because you only need the previous two Fibonacci numbers to create the current one.

Equation (expression) parser with precedence?

I've developed an equation parser using a simple stack algorithm that will handle binary (+, -, |, &, *, /, etc) operators, unary (!) operators, and parenthesis.
Using this method, however, leaves me with everything having the same precedence - it's evaluated left to right regardless of operator, although precedence can be enforced using parenthesis.
So right now "1+11*5" returns 60, not 56 as one might expect.
While this is suitable for the current project, I want to have a general purpose routine I can use for later projects.
Edited for clarity:
What is a good algorithm for parsing equations with precedence?
I'm interested in something simple to implement and understand that I can code myself to avoid licensing issues with available code.
Grammar:
I don't understand the grammar question - I've written this by hand. It's simple enough that I don't see the need for YACC or Bison. I merely need to calculate strings with equations such as "2+3 * (42/13)".
Language:
I'm doing this in C, but I'm interested in an algorithm, not a language specific solution. C is low level enough that it'll be easy to convert to another language should the need arise.
Code Example
I posted the test code for the simple expression parser I was talking about above. The project requirements altered and so I never needed to optimize the code for performance or space as it wasn't incorporated into the project. It's in the original verbose form, and should be readily understandable. If I do anything further with it in terms of operator precedence, I'll probably choose the macro hack because it matches the rest of the program in simplicity. If I ever use this in a real project, though, I'll be going for a more compact/speedy parser.
Related question
Smart design of a math parser?
-Adam
The shunting yard algorithm is the right tool for this. Wikipedia is really confusing about this, but basically the algorithm works like this:
Say you want to evaluate 1 + 2 * 3 + 4. Intuitively, you "know" you have to do the 2 * 3 first, but how do you get this result? The key is to realize that when you're scanning the string from left to right, you will evaluate an operator when the operator that follows it has a lower (or equal) precedence. In the context of the example, here's what you want to do:
Look at: 1 + 2, don't do anything.
Now look at 1 + 2 * 3, still don't do anything.
Now look at 1 + 2 * 3 + 4, now you know that 2 * 3 has to be evaluated because the next operator has lower precedence.
How do you implement this?
You want to have two stacks, one for numbers, and another for operators. You push numbers onto the stack all the time. You compare each new operator with the one at the top of the stack; if the one on top of the stack has higher priority, you pop it off the operator stack, pop the operands off the number stack, apply the operator and push the result onto the number stack. Now you repeat the comparison with the top-of-stack operator.
Coming back to the example, it works like this:
N = [ ]
Ops = [ ]
Read 1. N = [1], Ops = [ ]
Read +. N = [1], Ops = [+]
Read 2. N = [1 2], Ops = [+]
Read *. N = [1 2], Ops = [+ *]
Read 3. N = [1 2 3], Ops = [+ *]
Read +. N = [1 2 3], Ops = [+ *]
Pop 3, 2 and execute 2*3, and push result onto N. N = [1 6], Ops = [+]
+ is left associative, so you want to pop 1, 6 off as well and execute the +. N = [7], Ops = [].
Finally push the [+] onto the operator stack. N = [7], Ops = [+].
Read 4. N = [7 4]. Ops = [+].
You've run out of input, so you want to empty the stacks now, upon which you will get the result 11.
There, that's not so difficult, is it? And it makes no invocations to any grammars or parser generators.
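If it helps to see the two-stack procedure above as code, here is a minimal Java sketch (my own illustration, not the answer's code): it handles non-negative integers and the left-associative operators + - * / only, with no parentheses or whitespace:

import java.util.ArrayDeque;
import java.util.Deque;

public class ShuntingYardEval {
    static int precedence(char op) {
        return (op == '*' || op == '/') ? 2 : 1;
    }

    // Pop one operator and its two operands, apply it, and push the result back.
    static void apply(Deque<Integer> nums, Deque<Character> ops) {
        char op = ops.pop();
        int right = nums.pop();
        int left = nums.pop();
        switch (op) {
            case '+': nums.push(left + right); break;
            case '-': nums.push(left - right); break;
            case '*': nums.push(left * right); break;
            case '/': nums.push(left / right); break;
        }
    }

    static int eval(String expr) {
        Deque<Integer> nums = new ArrayDeque<>();
        Deque<Character> ops = new ArrayDeque<>();
        int i = 0;
        while (i < expr.length()) {
            char c = expr.charAt(i);
            if (Character.isDigit(c)) {
                int n = 0;
                while (i < expr.length() && Character.isDigit(expr.charAt(i))) {
                    n = n * 10 + (expr.charAt(i) - '0');
                    i++;
                }
                nums.push(n);
            } else {
                // Reduce while the stacked operator has >= precedence (left associativity).
                while (!ops.isEmpty() && precedence(ops.peek()) >= precedence(c)) {
                    apply(nums, ops);
                }
                ops.push(c);
                i++;
            }
        }
        while (!ops.isEmpty()) apply(nums, ops);
        return nums.pop();
    }

    public static void main(String[] args) {
        System.out.println(eval("1+2*3+4")); // 11, as in the walkthrough above
        System.out.println(eval("1+11*5"));  // 56
    }
}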
The hard way
You want a recursive descent parser.
To get precedence you need to think recursively, for example, using your sample string,
1+11*5
to do this manually, you would have to read the 1, then see the plus and start a whole new recursive parse "session" starting with 11... and make sure to parse the 11 * 5 into its own factor, yielding a parse tree with 1 + (11 * 5).
This all feels so painful even to attempt to explain, especially with the added powerlessness of C. See, after parsing the 11, if the * was actually a + instead, you would have to abandon the attempt at making a term and instead parse the 11 itself as a factor. My head is already exploding. It's possible with the recursive descent strategy, but there is a better way...
The easy (right) way
If you use a GPL tool like Bison, you probably don't need to worry about licensing issues since the C code generated by bison is not covered by the GPL (IANAL but I'm pretty sure GPL tools don't force the GPL on generated code/binaries; for example Apple compiles code like say, Aperture with GCC and they sell it without having to GPL said code).
Download Bison (or something equivalent, ANTLR, etc.).
There is usually some sample code that you can just run bison on and get your desired C code that demonstrates this four function calculator:
http://www.gnu.org/software/bison/manual/html_node/Infix-Calc.html
Look at the generated code, and see that this is not as easy as it sounds. Also, the advantages of using a tool like Bison are 1) you learn something (especially if you read the Dragon book and learn about grammars), 2) you avoid NIH trying to reinvent the wheel. With a real parser-generator tool, you actually have a hope at scaling up later, showing other people you know that parsers are the domain of parsing tools.
Update:
People here have offered much sound advice. My only warning against skipping the parsing tools or just using the Shunting Yard algorithm or a hand-rolled recursive descent parser is that little toy languages [1] may someday turn into big actual languages with functions (sin, cos, log) and variables, conditions and for loops.
Flex/Bison may very well be overkill for a small, simple interpreter, but a one off parser+evaluator may cause trouble down the line when changes need to be made or features need to be added. Your situation will vary and you will need to use your judgement; just don't punish other people for your sins [2] and build a less than adequate tool.
My favorite tool for parsing
The best tool in the world for the job is the Parsec library (for recursive descent parsers) which comes with the programming language Haskell. It looks a lot like BNF, or like some specialized tool or domain specific language for parsing (sample code [3]), but it is in fact just a regular library in Haskell, meaning that it compiles in the same build step as the rest of your Haskell code, and you can write arbitrary Haskell code and call that within your parser, and you can mix and match other libraries all in the same code. (Embedding a parsing language like this in a language other than Haskell results in loads of syntactic cruft, by the way. I did this in C# and it works quite well but it is not so pretty and succinct.)
Notes:
[1] Richard Stallman says, in Why you should not use Tcl:
The principal lesson of Emacs is that
a language for extensions should not
be a mere "extension language". It
should be a real programming language,
designed for writing and maintaining
substantial programs. Because people
will want to do that!
[2] Yes, I am forever scarred from using that "language".
Also note that when I submitted this entry, the preview was correct, but SO's less than adequate parser ate my close anchor tag on the first paragraph, proving that parsers are not something to be trifled with because if you use regexes and one off hacks you will probably get something subtle and small wrong.
[3] Snippet of a Haskell parser using Parsec: a four function calculator extended with exponents, parentheses, whitespace for multiplication, and constants (like pi and e).
aexpr = expr `chainl1` toOp
expr = optChainl1 term addop (toScalar 0)
term = factor `chainl1` mulop
factor = sexpr `chainr1` powop
sexpr = parens aexpr
<|> scalar
<|> ident
powop = sym "^" >>= return . (B Pow)
<|> sym "^-" >>= return . (\x y -> B Pow x (B Sub (toScalar 0) y))
toOp = sym "->" >>= return . (B To)
mulop = sym "*" >>= return . (B Mul)
<|> sym "/" >>= return . (B Div)
<|> sym "%" >>= return . (B Mod)
<|> return . (B Mul)
addop = sym "+" >>= return . (B Add)
<|> sym "-" >>= return . (B Sub)
scalar = number >>= return . toScalar
ident = literal >>= return . Lit
parens p = do
lparen
result <- p
rparen
return result
http://www.engr.mun.ca/~theo/Misc/exp_parsing.htm
Very good explanation of different approaches:
Recursive-descent recognition
The shunting yard algorithm
The classic solution
Precedence climbing
Written in simple language and pseudo-code.
I like the 'precedence climbing' one.
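As a rough idea of what precedence climbing looks like in code, here is a minimal Java sketch (the names and the digit-only tokenizer are my assumptions; no parentheses, whitespace, or error handling):

public class PrecedenceClimbing {
    private final String s;
    private int pos = 0;

    PrecedenceClimbing(String s) { this.s = s; }

    static int prec(char op) { return (op == '*' || op == '/') ? 2 : 1; }

    // Parses an expression whose operators all have precedence >= minPrec.
    int parseExpr(int minPrec) {
        int lhs = parseNumber();
        while (pos < s.length() && prec(s.charAt(pos)) >= minPrec) {
            char op = s.charAt(pos++);
            // For left-associative operators the right side must bind strictly tighter.
            int rhs = parseExpr(prec(op) + 1);
            switch (op) {
                case '+': lhs += rhs; break;
                case '-': lhs -= rhs; break;
                case '*': lhs *= rhs; break;
                case '/': lhs /= rhs; break;
            }
        }
        return lhs;
    }

    int parseNumber() {
        int n = 0;
        while (pos < s.length() && Character.isDigit(s.charAt(pos))) {
            n = n * 10 + (s.charAt(pos++) - '0');
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(new PrecedenceClimbing("1+11*5").parseExpr(1)); // 56
    }
}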
There's a nice article here about combining a simple recursive-descent parser with operator-precedence parsing. If you've been recently writing parsers, it should be very interesting and instructive to read.
Long time ago, I made up my own parsing algorithm, that I couldn't find in any books on parsing (like the Dragon Book). Looking at the pointers to the Shunting Yard algorithm, I do see the resemblance.
About 2 years ago, I made a post about it, complete with Perl source code, on http://www.perlmonks.org/?node_id=554516. It's easy to port to other languages: the first implementation I did was in Z80 assembler.
It's ideal for direct calculation with numbers, but you can use it to produce a parse tree if you must.
Update: Because more people can read (or run) JavaScript, I've reimplemented my parser in JavaScript, after reorganizing the code. The whole parser is under 5k of JavaScript code (about 100 lines for the parser, 15 lines for a wrapper function) including error reporting and comments.
You can find a live demo at http://users.telenet.be/bartl/expressionParser/expressionParser.html.
// operator table
var ops = {
'+' : {op: '+', precedence: 10, assoc: 'L', exec: function(l,r) { return l+r; } },
'-' : {op: '-', precedence: 10, assoc: 'L', exec: function(l,r) { return l-r; } },
'*' : {op: '*', precedence: 20, assoc: 'L', exec: function(l,r) { return l*r; } },
'/' : {op: '/', precedence: 20, assoc: 'L', exec: function(l,r) { return l/r; } },
'**' : {op: '**', precedence: 30, assoc: 'R', exec: function(l,r) { return Math.pow(l,r); } }
};
// constants or variables
var vars = { e: Math.exp(1), pi: Math.atan2(1,1)*4 };
// input for parsing
// var r = { string: '123.45+33*8', offset: 0 };
// r is passed by reference: any change in r.offset is returned to the caller
// functions return the parsed/calculated value
function parseVal(r) {
var startOffset = r.offset;
var value;
var m;
// floating point number
// example of parsing ("lexing") without aid of regular expressions
value = 0;
while("0123456789".indexOf(r.string.substr(r.offset, 1)) >= 0 && r.offset < r.string.length) r.offset++;
if(r.string.substr(r.offset, 1) == ".") {
r.offset++;
while("0123456789".indexOf(r.string.substr(r.offset, 1)) >= 0 && r.offset < r.string.length) r.offset++;
}
if(r.offset > startOffset) { // did that work?
// OK, so I'm lazy...
return parseFloat(r.string.substr(startOffset, r.offset-startOffset));
} else if(r.string.substr(r.offset, 1) == "+") { // unary plus
r.offset++;
return parseVal(r);
} else if(r.string.substr(r.offset, 1) == "-") { // unary minus
r.offset++;
return negate(parseVal(r));
} else if(r.string.substr(r.offset, 1) == "(") { // expression in parens
r.offset++; // eat "("
value = parseExpr(r);
if(r.string.substr(r.offset, 1) == ")") {
r.offset++;
return value;
}
r.error = "Parsing error: ')' expected";
throw 'parseError';
} else if(m = /^[a-z_][a-z0-9_]*/i.exec(r.string.substr(r.offset))) { // variable/constant name
// sorry for the regular expression, but I'm too lazy to manually build a varname lexer
var name = m[0]; // matched string
r.offset += name.length;
if(name in vars) return vars[name]; // I know that thing!
r.error = "Semantic error: unknown variable '" + name + "'";
throw 'unknownVar';
} else {
if(r.string.length == r.offset) {
r.error = 'Parsing error at end of string: value expected';
throw 'valueMissing';
} else {
r.error = "Parsing error: unrecognized value";
throw 'valueNotParsed';
}
}
}
function negate (value) {
return -value;
}
function parseOp(r) {
if(r.string.substr(r.offset,2) == '**') {
r.offset += 2;
return ops['**'];
}
if("+-*/".indexOf(r.string.substr(r.offset,1)) >= 0)
return ops[r.string.substr(r.offset++, 1)];
return null;
}
function parseExpr(r) {
var stack = [{precedence: 0, assoc: 'L'}];
var op;
var value = parseVal(r); // first value on the left
for(;;){
op = parseOp(r) || {precedence: 0, assoc: 'L'};
while(op.precedence < stack[stack.length-1].precedence ||
(op.precedence == stack[stack.length-1].precedence && op.assoc == 'L')) {
// precedence op is too low, calculate with what we've got on the left, first
var tos = stack.pop();
if(!tos.exec) return value; // end reached
// do the calculation ("reduce"), producing a new value
value = tos.exec(tos.value, value);
}
// store on stack and continue parsing ("shift")
stack.push({op: op.op, precedence: op.precedence, assoc: op.assoc, exec: op.exec, value: value});
value = parseVal(r); // value on the right
}
}
function parse (string) { // wrapper
var r = {string: string, offset: 0};
try {
var value = parseExpr(r);
if(r.offset < r.string.length){
r.error = 'Syntax error: junk found at offset ' + r.offset;
throw 'trailingJunk';
}
return value;
} catch(e) {
alert(r.error + ' (' + e + '):\n' + r.string.substr(0, r.offset) + '<*>' + r.string.substr(r.offset));
return;
}
}
It would help if you could describe the grammar you are currently using to parse. Sounds like the problem might lie there!
Edit:
The fact that you don't understand the grammar question and that 'you've written this by hand' very likely explains why you're having problems with expressions of the form '1+11*5' (i.e., with operator precedence). Googling for 'grammar for arithmetic expressions', for example, should yield some good pointers. Such a grammar need not be complicated:
<Exp>    ::= <Exp> + <Term> |
             <Exp> - <Term> |
             <Term>
<Term>   ::= <Term> * <Factor> |
             <Term> / <Factor> |
             <Factor>
<Factor> ::= x | y | ... |
             ( <Exp> ) |
             - <Factor> |
             <Number>
would do the trick for example, and can be trivially augmented to take care of some more complicated expressions (including functions for example, or powers,...).
I suggest you have a look at this thread, for example.
Almost all introductions to grammars/parsing treat arithmetic expressions as an example.
Note that using a grammar does not at all imply using a specific tool (a la Yacc, Bison,...). Indeed, you most certainly are already using the following grammar:
<Exp>  ::= <Leaf> | <Exp> <Op> <Leaf>
<Op>   ::= + | - | * | /
<Leaf> ::= <Number> | ( <Exp> )
(or something of the kind) without knowing it!
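To make that concrete, here is a rough Java recursive descent parser for the precedence grammar above, with the left recursion rewritten as loops, as is usual for hand-written parsers (the digit-only number rule and all names are assumptions of this sketch):

// Exp    ::= Term (('+' | '-') Term)*
// Term   ::= Factor (('*' | '/') Factor)*
// Factor ::= Number | '(' Exp ')' | '-' Factor
public class RecursiveDescent {
    private final String s;
    private int pos = 0;

    RecursiveDescent(String s) { this.s = s.replaceAll("\\s+", ""); }

    double parseExp() {
        double value = parseTerm();
        while (pos < s.length() && (s.charAt(pos) == '+' || s.charAt(pos) == '-')) {
            char op = s.charAt(pos++);
            double rhs = parseTerm();
            value = (op == '+') ? value + rhs : value - rhs;
        }
        return value;
    }

    double parseTerm() {
        double value = parseFactor();
        while (pos < s.length() && (s.charAt(pos) == '*' || s.charAt(pos) == '/')) {
            char op = s.charAt(pos++);
            double rhs = parseFactor();
            value = (op == '*') ? value * rhs : value / rhs;
        }
        return value;
    }

    double parseFactor() {
        if (s.charAt(pos) == '(') {          // '(' Exp ')'
            pos++;
            double value = parseExp();
            pos++;                           // skip ')'
            return value;
        }
        if (s.charAt(pos) == '-') {          // unary minus
            pos++;
            return -parseFactor();
        }
        int start = pos;                     // Number: one or more digits
        while (pos < s.length() && Character.isDigit(s.charAt(pos))) pos++;
        return Double.parseDouble(s.substring(start, pos));
    }

    public static void main(String[] args) {
        System.out.println(new RecursiveDescent("2+3 * (42/13)").parseExp());
    }
}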
Have you thought about using Boost Spirit? It allows you to write EBNF-like grammars in C++ like this:
group = '(' >> expression >> ')';
factor = integer | group;
term = factor >> *(('*' >> factor) | ('/' >> factor));
expression = term >> *(('+' >> term) | ('-' >> term));
As you put your question, there is no need for recursion whatsoever. The answer is three things: postfix notation plus the Shunting Yard algorithm plus postfix expression evaluation:
1). Postfix notation was invented to eliminate the need for explicit precedence specification. Read more on the net, but here is the gist of it: the infix expression ( 1 + 2 ) * 3, while easy for humans to read and process, is not very efficient for computing via machine. What is? A simple rule that says "rewrite the expression by baking the precedence in, then always process it left-to-right". So infix ( 1 + 2 ) * 3 becomes the postfix 1 2 + 3 *. POSTfix because the operator is always placed AFTER the operands.
2). Evaluating a postfix expression. Easy. Read numbers off the postfix string. Push them on a stack until an operator is seen. Check the operator type - unary? binary? ternary? Pop as many operands off the stack as needed to evaluate this operator. Evaluate. Push the result back on the stack! And you are almost done. Keep doing so until the stack has only one entry = the value you are looking for.
Let's do ( 1 + 2 ) * 3, which in postfix is "1 2 + 3 *". Read the first number = 1. Push it on the stack. Read next. Number = 2. Push it on the stack. Read next. Operator. Which one? +. What kind? Binary = needs two operands. Pop the stack twice: argright is 2 and argleft is 1. 1 + 2 is 3. Push 3 back on the stack. Read next from the postfix string. It's a number: 3. Push. Read next. Operator. Which one? *. What kind? Binary = needs two numbers -> pop the stack twice. First pop into argright, second into argleft. Evaluate the operation - 3 times 3 is 9. Push 9 on the stack. Read the next postfix char. It's null. End of input. Pop the stack once = that's your answer.
3). The Shunting Yard algorithm is used to transform a human-readable infix expression into a postfix expression (also easily human-readable after some practice). It is easy to code manually. See the comments above and the net.
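To make step 2) concrete, here is a minimal Java sketch of a postfix evaluator (it assumes a well-formed, space-separated postfix string with non-negative integers and the binary operators + - * /):

import java.util.ArrayDeque;
import java.util.Deque;

public class PostfixEval {
    static int eval(String postfix) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : postfix.trim().split("\\s+")) {
            if (token.matches("\\d+")) {
                stack.push(Integer.parseInt(token));   // operand: push it
            } else {
                int right = stack.pop();               // binary operator: pop two operands
                int left = stack.pop();
                switch (token) {
                    case "+": stack.push(left + right); break;
                    case "-": stack.push(left - right); break;
                    case "*": stack.push(left * right); break;
                    case "/": stack.push(left / right); break;
                    default: throw new IllegalArgumentException("unknown operator " + token);
                }
            }
        }
        return stack.pop();                            // the single remaining entry
    }

    public static void main(String[] args) {
        System.out.println(eval("1 2 + 3 *")); // ( 1 + 2 ) * 3 == 9
    }
}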
I would suggest cheating and using the Shunting Yard Algorithm. It's an easy means of writing a simple calculator-type parser and takes precedence into account.
If you want to properly tokenise things and have variables, etc. involved then I would go ahead and write a recursive descent parser as suggested by others here, however if you simply require a calculator-style parser then this algorithm should be sufficient :-)
Another resource for precedence parsing is the Operator-precedence parser entry on Wikipedia. It covers Dijkstra's shunting yard algorithm and an alternative tree algorithm, but more notably it covers a really simple macro replacement algorithm that can be trivially implemented in front of any precedence-ignorant parser:
#include <stdio.h>

int main(int argc, char *argv[]) {
    printf("((((");
    for (int i = 1; i != argc; i++) {
        if (argv[i] && !argv[i][1]) {
            switch (argv[i][0]) {
                case '^': printf(")^("); continue;
                case '*': printf("))*(("); continue;
                case '/': printf("))/(("); continue;
                case '+': printf(")))+((("); continue;
                case '-': printf(")))-((("); continue;
            }
        }
        printf("%s", argv[i]);
    }
    printf("))))\n");
    return 0;
}
Invoke it as:
$ cc -o parenthesise parenthesise.c
$ ./parenthesise a \* b + c ^ d / e
((((a))*((b)))+(((c)^(d))/((e))))
Which is awesome in its simplicity, and very understandable.
Is there a language you want to use? ANTLR will let you do this from a Java perspective. Adrian Kuhn has an excellent writeup on how to write an executable grammar in Ruby; in fact, his example is almost exactly your arithmetic expression example.
It depends on how "general" you want it to be.
If you want it to be really really general such as be able to parse mathematical functions as well like sin(4+5)*cos(7^3) you will probably need a parse tree.
In that case, I do not think that a complete implementation is appropriate to paste here. I'd suggest that you check out one of the famous "Dragon books".
But if you just want precedence support, then you could do that by first converting the expression to postfix form, for which an algorithm that you can copy and paste should be available from Google, or I think you can code it up yourself with a binary tree.
When you have it in postfix form, then it's a piece of cake from then on, since you already understand how the stack helps.
I found this on the PIClist about the Shunting Yard algorithm:
Harold writes:
I remember reading, a long time ago, of an algorithm that converted algebraic expressions to RPN for easy evaluation. Each infix value or operator or parenthesis was represented by a railroad car on a track. One type of car split off to another track and the other continued straight ahead. I don't recall the details (obviously!), but always thought it would be interesting to code. This is back when I was writing 6800 (not 68000) assembly code.
This is the "shunting yard algorythm"
and it is what most machine parsers
use. See the article on parsing in
Wikipedia. An easy way to code the
shunting yard algorythm is to use two
stacks. One is the "push" stack and
the other the "reduce" or "result"
stack. Example:
pstack = ()  // empty
rstack = ()
input: 1+2*3
precedence = 10  // lowest
reduce = 0       // don't reduce

start:
token '1': isnumber, put in pstack (push)
token '+': isoperator, set precedence=2; if precedence < previous_operator_precedence then reduce()  // see below
           put '+' in pstack (push)
token '2': isnumber, put in pstack (push)
token '*': isoperator, set precedence=1, put in pstack (push)  // check precedence as above
token '3': isnumber, put in pstack (push)
end of input, need to reduce (goal is empty pstack): reduce()
// done

To reduce, pop elements from the push stack and put them into the result stack, always swapping the top 2 items on pstack if they are of the form 'operator' 'number':

pstack: '1' '+' '2' '*' '3'   rstack: ()
...
pstack: ()                    rstack: '3' '2' '*' '1' '+'
If the expression would have been:

1*2+3

then the reduce trigger would have been the reading of the token '+', which has lower precedence than the '*' already pushed, so it would have done:

pstack: '1' '*' '2'   rstack: ()
...
pstack: ()            rstack: '1' '2' '*'

and then pushed '+' and then '3' and then finally reduced:

pstack: '+' '3'       rstack: '1' '2' '*'
...
pstack: ()            rstack: '1' '2' '*' '3' '+'
So the short version is: push numbers; when pushing operators, check the precedence of the previous operator. If it was higher than the operator's that is to be pushed now, first reduce, then push the current operator. To handle parens, simply save the precedence of the 'previous' operator, and put a mark on the pstack that tells the reduce algorithm to stop reducing when solving the inside of a paren pair. The closing paren triggers a reduction, as does the end of input, and also removes the open paren mark from the pstack and restores the 'previous operation' precedence, so parsing can continue after the close paren where it left off. This can be done with recursion or without (hint: use a stack to store the previous precedence when encountering a '(' ...). The generalized version of this is to use a parser-generator-implemented shunting yard algorithm, f.ex. using yacc or bison or taccle (tcl analog of yacc).
Peter
-Adam
I have posted source for an ultra compact (1 class, < 10 KiB) Java Math Evaluator on my web site. This is a recursive descent parser of the type that caused the cranial explosion for the poster of the accepted answer.
It supports full precedence, parenthesis, named variables and single-argument functions.
I released an expression parser based on Dijkstra's Shunting Yard algorithm, under the terms of the Apache License 2.0:
http://projects.congrace.de/exp4j/index.html
I've implemented a recursive descent parser in Java in the MathEclipse Parser project. It could also be used as a Google Web Toolkit module.
I'm currently working on a series of articles building a regular expression parser as a learning tool for design patterns and readable programming. You can take a look at readablecode. The article presents a clear use of the shunting yard algorithm.
I wrote an expression parser in F# and blogged about it here. It uses the shunting yard algorithm, but instead of converting from infix to RPN, I added a second stack to accumulate the results of calculations. It correctly handles operator precedence, but doesn't support unary operators. I wrote this to learn F#, not to learn expression parsing, though.
A Python solution using pyparsing can be found here. Parsing infix notation with various operators with precedence is fairly common, and so pyparsing also includes the infixNotation (formerly operatorPrecedence) expression builder. With it you can easily define boolean expressions using "AND", "OR", "NOT", for example. Or you can expand your four-function arithmetic to use other operators, such as ! for factorial, or '%' for modulus, or add P and C operators to compute permutations and combinations. You could write an infix parser for matrix notation, that includes handling of '-1' or 'T' operators (for inversion and transpose). The operatorPrecedence example of a 4-function parser (with '!' thrown in for fun) is here and a more fully featured parser and evaluator is here.
I know this is a late answer, but I've just written a tiny parser that allows all operators (prefix, postfix and infix-left, infix-right and nonassociative) to have arbitrary precedence.
I'm going to expand this for a language with arbitrary DSL support, but I just wanted to point out that one doesn't need custom parsers for operator precedence, one can use a generalized parser that doesn't need tables at all, and just looks up the precedence of each operator as it appears. People have been mentioning custom Pratt parsers or shunting yard parsers that can accept illegal inputs - this one doesn't need to be customized and (unless there's a bug) won't accept bad input. It isn't complete in a sense, it was written to test the algorithm and its input is in a form that will need some preprocessing, but there are comments that make it clear.
Note that some common kinds of operators are missing, for instance the sort of operator used for indexing, i.e. table[index], or for calling a function, function(parameter-expression, ...).
I'm going to add those, but think of both as postfix operators where what comes between the delimiters '[' and ']' or '(' and ')' is parsed with a different instance of the expression parser. Sorry to have left that out, but the postfix part is in - adding the rest will probably almost double the size of the code.
Since the parser is just 100 lines of racket code, perhaps I should just paste it here, I hope this isn't longer than stackoverflow allows.
A few details on arbitrary decisions:
If a low precedence postfix operator is competing for the same infix blocks as a low precedence prefix operator the prefix operator wins. This doesn't come up in most languages since most don't have low precedence postfix operators.
- for instance: ((data a) (left 1 +) (pre 2 not)(data b)(post 3 !) (left 1 +) (data c))
is a+not b!+c where not is a prefix operator and ! is postfix operator and both have lower
precedence than + so they want to group in incompatible ways either as
(a+not b!)+c
or as
a+(not b!+c)
in these cases the prefix operator always wins, so the second is the way it parses
Nonassociative infix operators are really there so that you don't have to pretend that operators that return different types than they take make sense together, but without having different expression types for each it's a kludge. As such, in this algorithm, non-associative operators refuse to associate not just with themselves but with any operator with the same precedence. That's a common case as < <= == >= etc don't associate with each other in most languages.
The question of how different kinds of operators (left, prefix etc) break ties on precedence is one that shouldn't come up, because it doesn't really make sense to give operators of different types the same precedence. This algorithm does something in those cases, but I'm not even bothering to figure out exactly what because such a grammar is a bad idea in the first place.
#lang racket
;cool the algorithm fits in 100 lines!
(define MIN-PREC -10000)
;format (pre prec name) (left prec name) (right prec name) (nonassoc prec name) (post prec name) (data name) (grouped exp)
;for example "not a*-7+5 < b*b or c >= 4"
;which groups as: not ((((a*(-7))+5) < (b*b)) or (c >= 4))"
;is represented as '((pre 0 not)(data a)(left 4 *)(pre 5 -)(data 7)(left 3 +)(data 5)(nonassoc 2 <)(data b)(left 4 *)(data b)(right 1 or)(data c)(nonassoc 2 >=)(data 4))
;higher numbers are higher precedence
;"(a+b)*c" is represented as ((grouped (data a)(left 3 +)(data b))(left 4 *)(data c))
(struct prec-parse ([data-stack #:mutable #:auto]
[op-stack #:mutable #:auto])
#:auto-value '())
(define (pop-data stacks)
(let [(data (car (prec-parse-data-stack stacks)))]
(set-prec-parse-data-stack! stacks (cdr (prec-parse-data-stack stacks)))
data))
(define (pop-op stacks)
(let [(op (car (prec-parse-op-stack stacks)))]
(set-prec-parse-op-stack! stacks (cdr (prec-parse-op-stack stacks)))
op))
(define (push-data! stacks data)
(set-prec-parse-data-stack! stacks (cons data (prec-parse-data-stack stacks))))
(define (push-op! stacks op)
(set-prec-parse-op-stack! stacks (cons op (prec-parse-op-stack stacks))))
(define (process-prec min-prec stacks)
(let [(op-stack (prec-parse-op-stack stacks))]
(cond ((not (null? op-stack))
(let [(op (car op-stack))]
(cond ((>= (cadr op) min-prec)
(apply-op op stacks)
(set-prec-parse-op-stack! stacks (cdr op-stack))
(process-prec min-prec stacks))))))))
(define (process-nonassoc min-prec stacks)
(let [(op-stack (prec-parse-op-stack stacks))]
(cond ((not (null? op-stack))
(let [(op (car op-stack))]
(cond ((> (cadr op) min-prec)
(apply-op op stacks)
(set-prec-parse-op-stack! stacks (cdr op-stack))
(process-nonassoc min-prec stacks))
((= (cadr op) min-prec) (error "multiply applied non-associative operator"))
))))))
(define (apply-op op stacks)
(let [(op-type (car op))]
(cond ((eq? op-type 'post)
(push-data! stacks `(,op ,(pop-data stacks) )))
(else ;assume infix
(let [(tos (pop-data stacks))]
(push-data! stacks `(,op ,(pop-data stacks) ,tos)))))))
(define (finish input min-prec stacks)
(process-prec min-prec stacks)
input
)
(define (post input min-prec stacks)
(if (null? input) (finish input min-prec stacks)
(let* [(cur (car input))
(input-type (car cur))]
(cond ((eq? input-type 'post)
(cond ((< (cadr cur) min-prec)
(finish input min-prec stacks))
(else
(process-prec (cadr cur)stacks)
(push-data! stacks (cons cur (list (pop-data stacks))))
(post (cdr input) min-prec stacks))))
(else (let [(handle-infix (lambda (proc-fn inc)
(cond ((< (cadr cur) min-prec)
(finish input min-prec stacks))
(else
(proc-fn (+ inc (cadr cur)) stacks)
(push-op! stacks cur)
(start (cdr input) min-prec stacks)))))]
(cond ((eq? input-type 'left) (handle-infix process-prec 0))
((eq? input-type 'right) (handle-infix process-prec 1))
((eq? input-type 'nonassoc) (handle-infix process-nonassoc 0))
(else error "post op, infix op or end of expression expected here"))))))))
;alters the stacks and returns the input
(define (start input min-prec stacks)
(if (null? input) (error "expression expected")
(let* [(cur (car input))
(input-type (car cur))]
(set! input (cdr input))
;pre could clearly work with new stacks, but could it reuse the current one?
(cond ((eq? input-type 'pre)
(let [(new-stack (prec-parse))]
(set! input (start input (cadr cur) new-stack))
(push-data! stacks
(cons cur (list (pop-data new-stack))))
;we might want to assert here that the cdr of the new stack is null
(post input min-prec stacks)))
((eq? input-type 'data)
(push-data! stacks cur)
(post input min-prec stacks))
((eq? input-type 'grouped)
(let [(new-stack (prec-parse))]
(start (cdr cur) MIN-PREC new-stack)
(push-data! stacks (pop-data new-stack)))
;we might want to assert here that the cdr of the new stack is null
(post input min-prec stacks))
(else (error "bad input"))))))
(define (op-parse input)
(let [(stacks (prec-parse))]
(start input MIN-PREC stacks)
(pop-data stacks)))
(define (main)
(op-parse (read)))
(main)
Here is a simple recursive solution written in Java. Note it does not handle negative numbers, but you can add that if you want to:
public class ExpressionParser {
    public double eval(String exp){
        int bracketCounter = 0;
        int addSubIndex = -1;   // rightmost top-level + or - (lowest precedence)
        int mulDivIndex = -1;   // rightmost top-level * or /
        for(int i = 0; i < exp.length(); i++){
            char c = exp.charAt(i);
            if(c == '(') bracketCounter++;
            else if(c == ')') bracketCounter--;
            else if((c == '+' || c == '-') && bracketCounter == 0){
                addSubIndex = i;
            }
            else if((c == '*' || c == '/') && bracketCounter == 0){
                mulDivIndex = i;
            }
        }
        // Split at the rightmost operator of lowest precedence so +, -, * and / stay left-associative.
        int operatorIndex = addSubIndex >= 0 ? addSubIndex : mulDivIndex;
        if(operatorIndex < 0){
            exp = exp.trim();
            if(exp.charAt(0) == '(' && exp.charAt(exp.length()-1) == ')')
                return eval(exp.substring(1, exp.length()-1));
            else
                return Double.parseDouble(exp);
        }
        else{
            switch(exp.charAt(operatorIndex)){
                case '+':
                    return eval(exp.substring(0, operatorIndex)) + eval(exp.substring(operatorIndex+1));
                case '-':
                    return eval(exp.substring(0, operatorIndex)) - eval(exp.substring(operatorIndex+1));
                case '*':
                    return eval(exp.substring(0, operatorIndex)) * eval(exp.substring(operatorIndex+1));
                case '/':
                    return eval(exp.substring(0, operatorIndex)) / eval(exp.substring(operatorIndex+1));
            }
        }
        return 0;
    }
}
The algorithm can easily be encoded in C as a recursive descent parser.
#include <stdio.h>
#include <ctype.h>
/*
* expression -> sum
* sum -> product | product "+" sum
* product -> term | term "*" product
* term -> "(" expression ")" | number
* number -> [0..9]+
*/
typedef struct {
int value;
const char* context;
} expression_t;
expression_t expression(int value, const char* context) {
return (expression_t) { value, context };
}
/* begin: parsers */
expression_t eval_expression(const char* symbols);
expression_t eval_number(const char* symbols) {
// number -> [0..9]+
double number = 0;
while (isdigit(*symbols)) {
number = 10 * number + (*symbols - '0');
symbols++;
}
return expression(number, symbols);
}
expression_t eval_term(const char* symbols) {
// term -> "(" expression ")" | number
if (*symbols == '(') {
expression_t exp = eval_expression(symbols + 1);
return expression(exp.value, exp.context + 1); /* skip the closing ')' */
}
return eval_number(symbols);
}
expression_t eval_product(const char* symbols) {
// product -> term | term "*" product
expression_t term = eval_term(symbols);
if (*term.context != '*')
return term;
expression_t product = eval_product(term.context + 1);
return expression(term.value * product.value, product.context);
}
expression_t eval_sum(const char* symbols) {
// sum -> product | product "+" sum
expression_t product = eval_product(symbols);
if (*product.context != '+')
return product;
expression_t sum = eval_sum(product.context + 1);
return expression(product.value + sum.value, sum.context);
}
expression_t eval_expression(const char* symbols) {
// expression -> sum
return eval_sum(symbols);
}
/* end: parsers */
int main() {
const char* expression = "1+11*5";
printf("eval(\"%s\") == %d\n", expression, eval_expression(expression).value);
return 0;
}
The following libraries might be useful:
yupana - strictly arithmetic operations;
tinyexpr - arithmetic operations + C math functions + functions provided by the user;
mpc - parser combinators
Explanation
Let's capture the sequence of symbols that represents an algebraic expression.
The first one is a number, that is, a decimal digit repeated one or more times.
We will refer to such notation as a production rule.
number -> [0..9]+
The addition operator with its operands is another rule.
It is either a number or any sequence of symbols that represents a sum "+" sum sequence.
sum -> number | sum "+" sum
Try substituting number into sum "+" sum: that gives number "+" number, which in turn can be expanded into [0..9]+ "+" [0..9]+, and that can finally be reduced to 1+8, which is a correct addition expression.
Other substitutions will also produce correct expressions: sum "+" sum -> number "+" sum -> number "+" sum "+" sum -> number "+" sum "+" number -> number "+" number "+" number -> 12+3+5
Bit by bit we can assemble a set of production rules, a.k.a. a grammar, that expresses all possible algebraic expressions.
expression -> sum
sum -> difference | difference "+" sum
difference -> product | difference "-" product
product -> fraction | fraction "*" product
fraction -> term | fraction "/" term
term -> "(" expression ")" | number
number -> digit+
To control operator precedence, alter the position of its production rule relative to the others. Look at the grammar above and note that the production rule for * is placed below the one for +; this forces a product to be evaluated before a sum.
The implementation just combines pattern recognition with evaluation and thus closely mirrors the production rules.
expression_t eval_product(const char* symbols) {
// product -> term | term "*" product
expression_t term = eval_term(symbols);
if (*term.context != '*')
return term;
expression_t product = eval_product(term.context + 1);
return expression(term.value * product.value, product.context);
}
Here we evaluate the term first and return it if there is no * character after it (the left choice in our production rule); otherwise we evaluate the symbols that follow and return term.value * product.value (the right choice in our production rule, i.e. term "*" product).
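If it helps to see the same grammar-driven idea in Java (the language of the original question), here is a minimal sketch handling only digits, "+" and "*"; the class and method names are made up for illustration:
// A minimal recursive descent sketch of the grammar
//   sum     -> product | product "+" sum
//   product -> number  | number "*" product
//   number  -> [0..9]+
// Placing product "below" sum is what makes * bind tighter than +.
public class TinyDescent {
    private final String src;
    private int pos = 0;

    private TinyDescent(String src) { this.src = src; }

    public static int eval(String expression) {
        return new TinyDescent(expression).sum();
    }

    private int sum() {                      // sum -> product | product "+" sum
        int left = product();
        if (pos < src.length() && src.charAt(pos) == '+') {
            pos++;                           // consume '+'
            return left + sum();
        }
        return left;
    }

    private int product() {                  // product -> number | number "*" product
        int left = number();
        if (pos < src.length() && src.charAt(pos) == '*') {
            pos++;                           // consume '*'
            return left * product();
        }
        return left;
    }

    private int number() {                   // number -> [0..9]+
        int value = 0;
        while (pos < src.length() && Character.isDigit(src.charAt(pos))) {
            value = 10 * value + (src.charAt(pos) - '0');
            pos++;
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(eval("1+11*5")); // prints 56: 11*5 is reduced before the surrounding +
    }
}
Because sum calls down into product, every "*" chain is fully reduced before the surrounding "+" is applied, which is exactly the precedence the grammar encodes.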
Actually there's a way to do this without recursion, which allows you to go through the entire expression once, character by character. This is O(n) for time and space. It takes all of 5 milliseconds to run even for a medium-sized expression.
First, you'd want to do a check to ensure that your parens are balanced. I'm not doing it here for simplicity. Also, I'm acting as if this were a calculator. Calculators do not apply precedence unless you wrap an expression in parens.
I'm using two stacks, one for the operands and another for the operators. I increase the priority of the operation whenever I reach an opening '(' paren and decrease the priority whenever I reach a closing ')' paren. I've even revised the code to handle numbers with decimals. This is in C#.
NOTE: This doesn't work for signed numbers such as negative numbers. It's probably just a simple revision.
internal double Compute(string sequence)
{
int priority = 0;
int sequenceCount = sequence.Length;
for (int i = 0; i < sequenceCount; i++) {
char s = sequence[i];
if (Char.IsDigit(s)) {
double value = ParseNextNumber(sequence, i);
numberStack.Push(value);
i = i + value.ToString().Length - 1;
} else if (s == '+' || s == '-' || s == '*' || s == '/') {
Operator op = ParseNextOperator(sequence, i, priority);
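// CollapseTop (helper not shown in this snippet) presumably applies any pending stacked operators to the number stack before the new operator is pushed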
CollapseTop(op, numberStack, operatorStack);
operatorStack.Push(op);
} else if (s == '(') { priority++; continue; }
else if (s == ')') { priority--; continue; }
}
if (priority != 0) { throw new ApplicationException("Parens not balanced"); }
CollapseTop(new Operator(' ', 0), numberStack, operatorStack);
if (numberStack.Count == 1 && operatorStack.Count == 0) {
return numberStack.Pop();
}
return 0;
}
Then to test this out:
Calculator c = new Calculator();
double value = c.Compute("89.8+((9*3)+8)+(9*2)+1");
Console.WriteLine(string.Format("The sum of the expression is: {0}", (float)value));
//prints out The sum of the expression is: 143.8
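For a Java reader, here is a rough sketch of the same two-stack technique (a number stack plus an operator stack). Note that, unlike the calculator-style code above, it also applies the usual precedence of * and / over + and -; the class name is made up, and unary minus and whitespace are not handled:
import java.util.ArrayDeque;
import java.util.Deque;

// A small two-stack evaluator: one stack for numbers, one for operators.
public class TwoStackEval {

    private static int precedence(char op) {
        return (op == '+' || op == '-') ? 1 : 2;   // only + - * / ever reach this method
    }

    private static double apply(char op, double left, double right) {
        switch (op) {
            case '+': return left + right;
            case '-': return left - right;
            case '*': return left * right;
            default:  return left / right;
        }
    }

    public static double eval(String exp) {
        Deque<Double> numbers = new ArrayDeque<>();
        Deque<Character> operators = new ArrayDeque<>();
        int i = 0;
        while (i < exp.length()) {
            char c = exp.charAt(i);
            if (Character.isDigit(c) || c == '.') {
                int start = i;                      // read a whole (possibly decimal) number
                while (i < exp.length() && (Character.isDigit(exp.charAt(i)) || exp.charAt(i) == '.')) i++;
                numbers.push(Double.parseDouble(exp.substring(start, i)));
                continue;                           // i already points past the number
            } else if (c == '(') {
                operators.push(c);
            } else if (c == ')') {
                while (operators.peek() != '(') {   // reduce everything inside the parentheses
                    double right = numbers.pop(), left = numbers.pop();
                    numbers.push(apply(operators.pop(), left, right));
                }
                operators.pop();                    // discard the '('
            } else if (c == '+' || c == '-' || c == '*' || c == '/') {
                while (!operators.isEmpty() && operators.peek() != '('
                        && precedence(operators.peek()) >= precedence(c)) {
                    double right = numbers.pop(), left = numbers.pop();
                    numbers.push(apply(operators.pop(), left, right));
                }
                operators.push(c);
            }
            i++;
        }
        while (!operators.isEmpty()) {              // reduce whatever is left
            double right = numbers.pop(), left = numbers.pop();
            numbers.push(apply(operators.pop(), left, right));
        }
        return numbers.pop();
    }

    public static void main(String[] args) {
        System.out.println(eval("2+3*4"));                   // 14.0
        System.out.println(eval("89.8+((9*3)+8)+(9*2)+1"));  // roughly 143.8, as in the C# test above
    }
}
The test expression from the C# example comes out the same because it relies only on parentheses, not on implicit precedence.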
Pure javascript, no dependencies needed
I really like bart's answer, and I made some modifications to make it easier to read; I also added support for some functions (and it is easy to extend).
function Parse(str) {
try {
return parseExpr(str.replaceAll(" ", "")) // parseExpr is defined in the full code below.
} catch (e) {
alert(e.message)
}
}
Parse("123.45+3*22*4")
It supports the cases below:
const testArray = [
// 👇 Basic Test
["(3+5)*4", ""],
["123.45+3*22*4", ""],
["8%2", ""],
["8%3", ""],
["7/3", ""],
["2*pi*e", 2 * Math.atan2(0, -1) * Math.exp(1)],
["2**3", ""],
// 👇 unary Test
["3+(-5)", ""],
["3+(+5)", ""],
// 👇 Function Test
["pow{2,3}*2", 16],
["4*sqrt{16}", 16],
["round{3.4}", 3],
["round{3.5}", 4],
["((1+e)*3/round{3.5})%2", ((1 + Math.exp(1)) * 3 / Math.round(3.5)) % 2],
["round{3.5}+pow{2,3}", Math.round(3.5)+Math.pow(2,3)],
]
Full code
// 👇 Main
(() => {
window.onload = () => {
const nativeConsoleLogFunc = window.console.error
window.console.error = (...data) => { // Override native function, just for test.
const range = document.createRange()
const frag = range.createContextualFragment(`<div>${data}</div>`)
document.querySelector("body").append(frag)
nativeConsoleLogFunc(...data)
}
// Add Enter event
document.querySelector(`input`).onkeyup = (keyboardEvent) => {
if (keyboardEvent.key === "Enter") {
const result = Parse(document.getElementById('expr').value)
if (result !== undefined) {
alert(result)
}
}
}
const testArray = [
// 👇 Basic Test
["(3+5)*4", ""],
["123.45+3*22*4", ""],
["8%2", ""],
["8%3", ""],
["7/3", ""],
["2*pi*e", 2 * Math.atan2(0, -1) * Math.exp(1)],
["2**3", ""],
// 👇 unary
["3+(-5)", ""],
["3+(+5)", ""],
// 👇 Function Test
["pow{2,3}*2", 16],
["4*sqrt{16}", 16],
["round{3.4}", 3],
["round{3.5}", 4],
["((1+e)*3/round{3.5})%2", ((1 + Math.exp(1)) * 3 / Math.round(3.5)) % 2],
["round{3.5}+pow{2,3}", Math.round(3.5) + Math.pow(2, 3)],
// 👇 error test
["21+", ValueMissingError],
["21+*", ParseError],
["(1+2", ParseError], // miss ")"
["round(3.12)", MissingParaError], // should be round{3.12}
["help", UnknownVarError],
]
for (let [testString, expected] of testArray) {
if (expected === "") {
expected = eval(testString) // Why don't you use eval instead of writing the function yourself? Because the browser may disable eval due to policy considerations. [CSP](https://content-security-policy.com/)
}
const actual = Parse(testString, false)
if (actual !== expected) {
if (actual instanceof Error && actual instanceof expected) {
continue
}
console.error(`${testString} = ${actual}, value <code>${expected}</code> expected`)
}
}
}
})()
// 👇 Script
class UnknownVarError extends Error {
}
class ValueMissingError extends Error {
}
class ParseError extends Error {
}
class MissingParaError extends Error {
}
/**
* @description Operator
* @param {string} sign "+", "-", "*", "/", ...
* @param {number} precedence
* @param {"L"|"R"} assoc associativity left or right
* @param {function} exec
* */
function Op(sign, precedence, assoc, exec = undefined) {
this.sign = sign
this.precedence = precedence
this.assoc = assoc
this.exec = exec
}
const OpArray = [
new Op("+", 10, "L", (l, r) => l + r),
new Op("-", 10, "L", (l, r) => l - r),
new Op("*", 20, "L", (l, r) => l * r),
new Op("/", 20, "L", (l, r) => l / r),
new Op("%", 20, "L", (l, r) => l % r),
new Op("**", 30, "R", (l, r) => Math.pow(l, r))
]
const VarTable = {
e: Math.exp(1),
pi: Math.atan2(0, -1), // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/atan2
pow: (x, y) => Math.pow(x, y),
sqrt: (x) => Math.sqrt(x),
round: (x) => Math.round(x),
}
/**
* @param {Op} op
* @param {Number} value
* */
function Item(op, value = undefined) {
this.op = op
this.value = value
}
class Stack extends Array {
constructor(...items) {
super(...items)
this.push(new Item(new Op("", 0, "L")))
}
GetLastItem() {
return this[this.length - 1] // fast then pop // https://stackoverflow.com/a/61839489/9935654
}
}
function Cursor(str, pos) {
this.str = str
this.pos = pos
this.MoveRight = (step = 1) => {
this.pos += step
}
this.PeekRightChar = (step = 1) => {
return this.str.substring(this.pos, this.pos + step)
}
/**
* @return {Op}
* */
this.MoveToNextOp = () => {
const opArray = OpArray.sort((a, b) => b.precedence - a.precedence)
for (const op of opArray) {
const sign = this.PeekRightChar(op.sign.length)
if (op.sign === sign) {
this.MoveRight(op.sign.length)
return op
}
}
return null
}
}
/**
* @param {Cursor} cursor
* */
function parseVal(cursor) {
let startOffset = cursor.pos
const regex = /^(?<OpOrVar>[^\d.])?(?<Num>[\d.]*)/g
const m = regex.exec(cursor.str.substr(startOffset))
if (m) {
const {groups: {OpOrVar, Num}} = m
if (OpOrVar === undefined && Num) {
cursor.pos = startOffset + Num.length
if (cursor.pos > startOffset) {
return parseFloat(cursor.str.substring(startOffset, startOffset + cursor.pos - startOffset)) // do not use string.substr() // It will be removed in the future. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Deprecated_and_obsolete_features#string_methods
}
}
if ("+-(".indexOf(OpOrVar) !== -1) {
cursor.pos++
switch (OpOrVar) {
case "+": // unary plus, for example: (+5)
return parseVal(cursor)
case "-":
return -(parseVal(cursor))
case "(":
const value = parseExpr(cursor)
if (cursor.PeekRightChar() === ")") {
cursor.MoveRight()
return value
}
throw new ParseError("Parsing error: ')' expected")
}
}
}
// 👇 below is for Variable or Function
const match = cursor.str.substring(cursor.pos).match(/^[a-z_][a-z0-9_]*/i) // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/match
if (match) {
// 👇 Variable
const varName = match[0]
cursor.MoveRight(varName.length)
const bracket = cursor.PeekRightChar(1)
if (bracket !== "{") {
if (varName in VarTable) {
const val = VarTable[varName]
if (typeof val === "function") {
throw new MissingParaError(`${varName} is a function, it needs big curly brackets`)
}
return val
}
}
// 👇 is function
const regex = /{(?<Para>[^{]*)}/gm
const m = regex.exec(cursor.str.substring(cursor.pos))
if (m && m.groups.Para !== undefined) {
const paraString = m.groups.Para
const para = paraString.split(',')
cursor.MoveRight(paraString.length + 2) // 2 = { + }
return VarTable[varName](...para)
}
throw new UnknownVarError(`unknown variable ${varName}`)
}
// 👇 Handle Error
if (cursor.str.length === cursor.pos) { // example: 1+2+
throw new ValueMissingError(`Parsing error at end of string: value expected.`)
} else { // example: 1+2+*
throw new ParseError("Parsing error: unrecognized value")
}
}
/**
* @param {string|Cursor} expr
* */
function parseExpr(expr) {
const stack = new Stack()
const cursor = (expr instanceof Cursor) ? expr : new Cursor(expr, 0)
while (1) {
let rightValue = parseVal(cursor)
const op = cursor.MoveToNextOp() ?? new Op("", 0, "L")
while (
op.precedence < stack.GetLastItem().op.precedence ||
(op.precedence === stack.GetLastItem().op.precedence && op.assoc === 'L')) {
const lastItem = stack.pop()
if (!lastItem.op.exec) { // end reached
return rightValue
}
rightValue = lastItem.op.exec(lastItem.value, rightValue)
}
stack.push(new Item(op, rightValue))
}
}
function Parse(str, alertError = true) {
try {
return parseExpr(str.replaceAll(" ", ""))
} catch (e) {
if (alertError) {
alert(e.message)
return undefined
}
return e
}
}
<input type="text" id="expr" name="expr" placeholder="123.45+3*22*4">
<button onclick="const x = Parse(document.getElementById('expr').value); if(x != null) alert(x);">
Calculate!
</button>

Resources