What is tail call optimization? - algorithm

Very simply, what is tail-call optimization?
More specifically, what are some small code snippets where it could be applied, and where not, with an explanation of why?

Tail-call optimization is where you are able to avoid allocating a new stack frame for a function because the calling function will simply return the value that it gets from the called function. The most common use is tail-recursion, where a recursive function written to take advantage of tail-call optimization can use constant stack space.
Scheme is one of the few programming languages that guarantee in the spec that any implementation must provide this optimization, so here are two examples of the factorial function in Scheme:
(define (fact x)
  (if (= x 0) 1
      (* x (fact (- x 1)))))

(define (fact x)
  (define (fact-tail x accum)
    (if (= x 0) accum
        (fact-tail (- x 1) (* x accum))))
  (fact-tail x 1))
The first function is not tail recursive because when the recursive call is made, the function needs to keep track of the multiplication it needs to do with the result after the call returns. As such, the stack looks as follows:
(fact 3)
(* 3 (fact 2))
(* 3 (* 2 (fact 1)))
(* 3 (* 2 (* 1 (fact 0))))
(* 3 (* 2 (* 1 1)))
(* 3 (* 2 1))
(* 3 2)
6
In contrast, the stack trace for the tail recursive factorial looks as follows:
(fact 3)
(fact-tail 3 1)
(fact-tail 2 3)
(fact-tail 1 6)
(fact-tail 0 6)
6
As you can see, we only need to keep track of the same amount of data for every call to fact-tail because we are simply returning the value we get right through to the top. This means that even if I were to call (fact 1000000), I need only the same amount of space as (fact 3). This is not the case with the non-tail-recursive fact, and as such large values may cause a stack overflow.

Let's walk through a simple example: the factorial function implemented in C.
We start with the obvious recursive definition
unsigned fac(unsigned n)
{
    if (n < 2) return 1;
    return n * fac(n - 1);
}
A function ends with a tail call if the last operation before the function returns is another function call. If this call invokes the same function, it is tail-recursive.
Even though fac() looks tail-recursive at first glance, it is not, as what actually happens is equivalent to
unsigned fac(unsigned n)
{
    if (n < 2) return 1;
    unsigned acc = fac(n - 1);
    return n * acc;
}
i.e. the last operation is the multiplication, not the function call.
However, it's possible to rewrite fac() to be tail-recursive by passing the accumulated value down the call chain as an additional argument and passing only the final result up again as the return value:
unsigned fac_tailrec(unsigned acc, unsigned n);

unsigned fac(unsigned n)
{
    return fac_tailrec(1, n);
}

unsigned fac_tailrec(unsigned acc, unsigned n)
{
    if (n < 2) return acc;
    return fac_tailrec(n * acc, n - 1);
}
Now, why is this useful? Because we immediately return after the tail call, we can discard the previous stackframe before invoking the function in tail position, or, in case of recursive functions, reuse the stackframe as-is.
The tail-call optimization transforms our recursive code into
unsigned fac_tailrec(unsigned acc, unsigned n)
{
TOP:
    if (n < 2) return acc;
    acc = n * acc;
    n = n - 1;
    goto TOP;
}
This can be inlined into fac() and we arrive at
unsigned fac(unsigned n)
{
    unsigned acc = 1;
TOP:
    if (n < 2) return acc;
    acc = n * acc;
    n = n - 1;
    goto TOP;
}
which is equivalent to
unsigned fac(unsigned n)
{
    unsigned acc = 1;
    for (; n > 1; --n)
        acc *= n;
    return acc;
}
As we can see here, a sufficiently advanced optimizer can replace tail-recursion with iteration, which is far more efficient as you avoid function call overhead and only use a constant amount of stack space.

TCO (Tail Call Optimization) is the process by which a smart compiler can make a call to a function and take no additional stack space. The only situation in which this happens is if the last instruction executed in a function f is a call to a function g (Note: g can be f). The key here is that f no longer needs stack space - it simply calls g and then returns whatever g would return. In this case the optimization can be made that g just runs and returns whatever value it would have to the thing that called f.
This optimization can make recursive calls take constant stack space, rather than explode.
Example: this factorial function is not TCOptimizable:
from dis import dis

def fact(n):
    if n == 0:
        return 1
    return n * fact(n-1)

dis(fact)
  2           0 LOAD_FAST                0 (n)
              2 LOAD_CONST               1 (0)
              4 COMPARE_OP               2 (==)
              6 POP_JUMP_IF_FALSE       12

  3           8 LOAD_CONST               2 (1)
             10 RETURN_VALUE

  4     >>   12 LOAD_FAST                0 (n)
             14 LOAD_GLOBAL              0 (fact)
             16 LOAD_FAST                0 (n)
             18 LOAD_CONST               2 (1)
             20 BINARY_SUBTRACT
             22 CALL_FUNCTION            1
             24 BINARY_MULTIPLY
             26 RETURN_VALUE
This function does things besides call another function in its return statement.
The function below is TCOptimizable:
def fact_h(n, acc):
    if n == 0:
        return acc
    return fact_h(n-1, acc*n)

def fact(n):
    return fact_h(n, 1)

dis(fact)
  2           0 LOAD_GLOBAL              0 (fact_h)
              2 LOAD_FAST                0 (n)
              4 LOAD_CONST               1 (1)
              6 CALL_FUNCTION            2
              8 RETURN_VALUE
This is because the last thing to happen in any of these functions is to call another function.

Probably the best high level description I have found for tail calls, recursive tail calls and tail call optimization is the blog post
"What the heck is: A tail call"
by Dan Sugalski. On tail call optimization he writes:
Consider, for a moment, this simple function:
sub foo (int a) {
    a += 15;
    return bar(a);
}
So, what can you, or rather your language compiler, do? Well, what it can do is turn code of the form return somefunc(); into the low-level sequence pop stack frame; goto somefunc();. In our example, that means before we call bar, foo cleans itself up and then, rather than calling bar as a subroutine, we do a low-level goto operation to the start of bar. Foo's already cleaned itself out of the stack, so when bar starts it looks like whoever called foo has really called bar, and when bar returns its value, it returns it directly to whoever called foo, rather than returning it to foo which would then return it to its caller.
And on tail recursion:
Tail recursion happens if a function, as its last operation, returns the result of calling itself. Tail recursion is easier to deal with because rather than having to jump to the beginning of some random function somewhere, you just do a goto back to the beginning of yourself, which is a darned simple thing to do.
So that this:
sub foo (int a, int b) {
    if (b == 1) {
        return a;
    } else {
        return foo(a*a + a, b - 1);
    }
}
gets quietly turned into:
sub foo (int a, int b) {
  label:
    if (b == 1) {
        return a;
    } else {
        a = a*a + a;
        b = b - 1;
        goto label;
    }
}
What I like about this description is how succinct and easy it is to grasp for those coming from an imperative language background (C, C++, Java).

GCC C minimal runnable example with x86 disassembly analysis
Let's see how GCC can automatically do tail call optimizations for us by looking at the generated assembly.
This will serve as an extremely concrete example of what was mentioned in other answers such as https://stackoverflow.com/a/9814654/895245 that the optimization can convert recursive function calls to a loop.
This in turn saves memory and improves performance, since memory accesses are often the main thing that makes programs slow nowadays.
As an input, we give GCC a non-optimized naive stack based factorial:
tail_call.c
#include <stdio.h>
#include <stdlib.h>

unsigned factorial(unsigned n) {
    if (n == 1) {
        return 1;
    }
    return n * factorial(n - 1);
}

int main(int argc, char **argv) {
    int input;
    if (argc > 1) {
        input = strtoul(argv[1], NULL, 0);
    } else {
        input = 5;
    }
    printf("%u\n", factorial(input));
    return EXIT_SUCCESS;
}
GitHub upstream.
Compile and disassemble:
gcc -O1 -foptimize-sibling-calls -ggdb3 -std=c99 -Wall -Wextra -Wpedantic \
-o tail_call.out tail_call.c
objdump -d tail_call.out
where -foptimize-sibling-calls is the name of the generalization of tail calls, according to man gcc:
-foptimize-sibling-calls
Optimize sibling and tail recursive calls.
Enabled at levels -O2, -O3, -Os.
as mentioned at: How do I check if gcc is performing tail-recursion optimization?
I chose -O1 because:
the optimization is not done with -O0. I suspect this is because required intermediate transformations are missing.
-O3 produces ungodly efficient code that would not be very educative, although it is also tail call optimized.
Disassembly with -fno-optimize-sibling-calls:
0000000000001145 <factorial>:
1145: 89 f8 mov %edi,%eax
1147: 83 ff 01 cmp $0x1,%edi
114a: 74 10 je 115c <factorial+0x17>
114c: 53 push %rbx
114d: 89 fb mov %edi,%ebx
114f: 8d 7f ff lea -0x1(%rdi),%edi
1152: e8 ee ff ff ff callq 1145 <factorial>
1157: 0f af c3 imul %ebx,%eax
115a: 5b pop %rbx
115b: c3 retq
115c: c3 retq
With -foptimize-sibling-calls:
0000000000001145 <factorial>:
1145: b8 01 00 00 00 mov $0x1,%eax
114a: 83 ff 01 cmp $0x1,%edi
114d: 74 0e je 115d <factorial+0x18>
114f: 8d 57 ff lea -0x1(%rdi),%edx
1152: 0f af c7 imul %edi,%eax
1155: 89 d7 mov %edx,%edi
1157: 83 fa 01 cmp $0x1,%edx
115a: 75 f3 jne 114f <factorial+0xa>
115c: c3 retq
115d: 89 f8 mov %edi,%eax
115f: c3 retq
The key difference between the two is that:
the -fno-optimize-sibling-calls version uses callq, which is the typical non-optimized function call.
This instruction pushes the return address to the stack, therefore growing it.
Furthermore, this version also does push %rbx, which pushes %rbx to the stack.
GCC does this because it stores edi, the first function argument (n), into ebx, then calls factorial.
GCC needs to do this because it is preparing another call to factorial, which will use the new edi == n-1.
It chooses ebx because this register is callee-saved (see: What registers are preserved through a linux x86-64 function call), so the subcall to factorial won't change it and lose n.
the -foptimize-sibling-calls version does not use any instructions that push to the stack: it only does goto jumps within factorial with the instructions je and jne.
Therefore, this version is equivalent to a while loop, without any function calls. Stack usage is constant.
Tested in Ubuntu 18.10, GCC 8.2.

Note first of all that not all languages support it.
TCO applies to a special case of recursion. The gist of it is, if the last thing you do in a function is call itself (i.e. it is calling itself from the "tail" position), this can be optimized by the compiler to act like iteration instead of standard recursion.
You see, normally during recursion, the runtime needs to keep track of all the recursive calls, so that when one returns it can resume at the previous call and so on. (Try manually writing out the result of a recursive call to get a visual idea of how this works.) Keeping track of all the calls takes up space, which gets significant when the function calls itself a lot. But with TCO, it can just say "go back to the beginning, only this time change the parameter values to these new ones." It can do that because nothing after the recursive call refers to those values.
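To make this concrete, here is a minimal Python sketch of the rewrite a TCO-capable compiler effectively performs (Python does not do this automatically, so the loop form is written out by hand):

def total(n, acc=0):
    if n == 0:
        return acc
    return total(n - 1, acc + n)   # tail call: nothing is left to do after it

def total_loop(n, acc=0):
    # what TCO effectively turns total() into:
    while True:
        if n == 0:
            return acc
        n, acc = n - 1, acc + n    # back to the beginning with new parameter values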

Look here:
http://tratt.net/laurie/tech_articles/articles/tail_call_optimization
As you probably know, recursive function calls can wreak havoc on a stack; it is easy to quickly run out of stack space. Tail call optimization is a way by which you can create a recursive-style algorithm that uses constant stack space, so the stack does not grow and grow until you get stack errors.

The recursive function approach has a problem. It builds up a call stack of size O(n), which makes our total memory cost O(n). This makes it vulnerable to a stack overflow error, where the call stack gets too big and runs out of space.
Tail call optimization (TCO) is a scheme that optimizes recursive functions to avoid building up a tall call stack, and hence saves the memory cost.
Many languages do TCO (JavaScript, Ruby, and some C implementations, for example), whereas Python and Java do not.
The JavaScript language specification has adopted it; see http://2ality.com/2015/06/tail-call-optimization.html
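Since Python itself does not do TCO, a common workaround worth knowing is a trampoline: the function returns a zero-argument thunk instead of making its tail call, and a small driver loop keeps calling thunks until a real value comes back, so the stack stays flat. A minimal sketch (the names trampoline and fact are illustrative, not from any library):

def trampoline(f, *args):
    # Run f; as long as it returns a zero-argument callable, keep calling it.
    result = f(*args)
    while callable(result):
        result = result()
    return result

def fact(n, acc=1):
    if n == 0:
        return acc
    return lambda: fact(n - 1, acc * n)  # hand back the tail call as a thunk

print(trampoline(fact, 10000))  # plain recursion would overflow Python's stack here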

For a call to be a tail call, it must be the very last thing the function does: no pending work (and no jumps back into the function body) may remain after it.
Large-scale recursion can benefit from this optimization, but at small scale the instruction overhead of arranging the call as a tail call defeats the purpose.
TCO can turn what would be a stack overflow into a function that runs forever:
void eternity()
{
    eternity();
}

In a functional language, tail call optimization is as if a function call could return a partially evaluated expression as the result, which would then be evaluated by the caller.
f x = g x
f 6 reduces to g 6. So if the implementation could return g 6 as the result, and then call that expression, it would save a stack frame.
Also
f x = if c x then g x else h x
f 6 reduces to either g 6 or h 6. So if the implementation evaluates c 6 and finds it is true, it can reduce
if true then g 6 else h 6 ---> g 6
and if it is false,
f 6 ---> h 6
A simple interpreter without tail call optimization might look like this:
class simple_expression
{
    ...
public:
    virtual simple_value *DoEvaluate() const = 0;
};

class simple_value
{
    ...
};

class simple_function : public simple_expression
{
    ...
private:
    simple_expression *m_Function;
    simple_expression *m_Parameter;
public:
    virtual simple_value *DoEvaluate() const
    {
        vector<simple_expression *> parameterList;
        parameterList.push_back(m_Parameter);
        return m_Function->Call(parameterList);
    }
};

class simple_if : public simple_function
{
private:
    simple_expression *m_Condition;
    simple_expression *m_Positive;
    simple_expression *m_Negative;
public:
    simple_value *DoEvaluate() const
    {
        if (m_Condition->DoEvaluate()->IsTrue())
        {
            return m_Positive->DoEvaluate();
        }
        else
        {
            return m_Negative->DoEvaluate();
        }
    }
};
A tail call optimization interpreter might look like this:
class tco_expression
{
    ...
public:
    virtual tco_expression *DoEvaluate() const = 0;
    virtual bool IsValue()
    {
        return false;
    }
};

class tco_value : public tco_expression
{
    ...
public:
    virtual bool IsValue()
    {
        return true;
    }
};

class tco_function : public tco_expression
{
    ...
private:
    tco_expression *m_Function;
    tco_expression *m_Parameter;
public:
    virtual tco_expression *DoEvaluate() const
    {
        vector<tco_expression *> parameterList;
        tco_expression *function = const_cast<tco_function *>(this);
        while (!function->IsValue())
        {
            function = function->DoCall(parameterList);
        }
        return function;
    }
    tco_expression *DoCall(vector<tco_expression *> &p_ParameterList)
    {
        p_ParameterList.push_back(m_Parameter);
        return m_Function;
    }
};

class tco_if : public tco_function
{
private:
    tco_expression *m_Condition;
    tco_expression *m_Positive;
    tco_expression *m_Negative;
public:
    tco_expression *DoEvaluate() const
    {
        if (m_Condition->DoEvaluate()->IsTrue())
        {
            return m_Positive;
        }
        else
        {
            return m_Negative;
        }
    }
};

Related

F# Performance Impact of Checked Calcs?

Is there a performance impact from using the Checked module? I've tested it out with sequences of type int and see no noticeable difference. Sometimes the checked version is faster and sometimes unchecked is faster, but generally not by much.
Seq.initInfinite (fun x-> x) |> Seq.item 1000000000;;
Real: 00:00:05.272, CPU: 00:00:05.272, GC gen0: 0, gen1: 0, gen2: 0
val it : int = 1000000000
open Checked
Seq.initInfinite (fun x-> x) |> Seq.item 1000000000;;
Real: 00:00:04.785, CPU: 00:00:04.773, GC gen0: 0, gen1: 0, gen2: 0
val it : int = 1000000000
Basically I'm trying to figure out if there would be any downside to always opening Checked. (I encountered an overflow that wasn't immediately obvious, so I'm now playing the role of the jilted lover who doesn't want another broken heart.) The only non-contrived reason I can come up with for not always using Checked is if there were some performance hit, but I haven't seen one yet.
When you measure performance it's usually not a good idea to include Seq, as Seq adds lots of overhead (at least compared to int operations); you risk that most of the time is spent in Seq, not in the code you'd like to test.
I wrote a small test program for (+):
let clock =
  let sw = System.Diagnostics.Stopwatch ()
  sw.Start ()
  fun () ->
    sw.ElapsedMilliseconds

let dbreak () = System.Diagnostics.Debugger.Break ()

let time a =
  let b = clock ()
  let r = a ()
  let n = clock ()
  let d = n - b
  d, r

module Unchecked =
  let run c () =
    let rec loop a i =
      if i < c then
        loop (a + 1) (i + 1)
      else
        a
    loop 0 0

module Checked =
  open Checked
  let run c () =
    let rec loop a i =
      if i < c then
        loop (a + 1) (i + 1)
      else
        a
    loop 0 0

[<EntryPoint>]
let main argv =
  let count = 1000000000
  let testCases =
    [|
      "Unchecked" , Unchecked.run
      "Checked"   , Checked.run
    |]
  for nm, a in testCases do
    printfn "Running %s ..." nm
    let ms, r = time (a count)
    printfn "... it took %d ms, result is %A" ms r
  0
The performance results are this:
Running Unchecked ...
... it took 561 ms, result is 1000000000
Running Checked ...
... it took 1103 ms, result is 1000000000
So it seems some overhead is added by using Checked. The cost of an int add should be less than the loop overhead, so the overhead of Checked is higher than 2x, maybe closer to 4x.
Out of curiosity, we can check the IL code using tools like ILSpy:
Unchecked:
IL_0000: nop
IL_0001: ldarg.2
IL_0002: ldarg.0
IL_0003: bge.s IL_0014
IL_0005: ldarg.0
IL_0006: ldarg.1
IL_0007: ldc.i4.1
IL_0008: add
IL_0009: ldarg.2
IL_000a: ldc.i4.1
IL_000b: add
IL_000c: starg.s i
IL_000e: starg.s a
IL_0010: starg.s c
IL_0012: br.s IL_0000
Checked:
IL_0000: nop
IL_0001: ldarg.2
IL_0002: ldarg.0
IL_0003: bge.s IL_0014
IL_0005: ldarg.0
IL_0006: ldarg.1
IL_0007: ldc.i4.1
IL_0008: add.ovf
IL_0009: ldarg.2
IL_000a: ldc.i4.1
IL_000b: add.ovf
IL_000c: starg.s i
IL_000e: starg.s a
IL_0010: starg.s c
IL_0012: br.s IL_0000
The only difference is that Unchecked uses add and Checked uses add.ovf. add.ovf is add with overflow check.
We can dig even deeper by looking at the jitted x86_64 code.
Unchecked:
; if i < c then
00007FF926A611B3 cmp esi,ebx
00007FF926A611B5 jge 00007FF926A611BD
; i + 1
00007FF926A611B7 inc esi
; a + 1
00007FF926A611B9 inc edi
; loop (a + 1) (i + 1)
00007FF926A611BB jmp 00007FF926A611B3
Checked:
; if i < c then
00007FF926A62613 cmp esi,ebx
00007FF926A62615 jge 00007FF926A62623
; a + 1
00007FF926A62617 add edi,1
; Overflow?
00007FF926A6261A jo 00007FF926A6262D
; i + 1
00007FF926A6261C add esi,1
; Overflow?
00007FF926A6261F jo 00007FF926A6262D
; loop (a + 1) (i + 1)
00007FF926A62621 jmp 00007FF926A62613
Now the reason for the Checked overhead is visible. After each operation the jitter inserts the conditional instruction jo which jumps to code that raises OverflowException if the overflow flag is set.
This chart shows us that the cost of an integer add is less than 1 clock cycle. The reason it's less than 1 clock cycle is that modern CPU can execute certain instructions in parallel.
The chart also shows us that branch that was correctly predicted by the CPU takes around 1-2 clock cycles.
So assuming a throughput of at least 2, the cost of the two integer additions in the Unchecked example should be 1 clock cycle.
In the Checked example we do add, jo, add, jo. Most likely the CPU can't parallelize in this case, and the cost should be around 4-6 clock cycles.
Another interesting difference is that the order of additions changed. With checked additions the order of the operations matter but with unchecked the jitter (and the CPU) has a greater flexibility moving the operations possibly improving performance.
So long story short; for cheap operations like (+) the overhead of Checked should be around 4x-6x compared to Unchecked.
This assumes no overflow exception. The cost of a .NET exception is probably around 100,000x more expensive than an integer addition.

Why is my Scala tail-recursion faster than the while loop?

Here are two solutions to exercise 4.9 in Cay Horstmann's Scala for the Impatient: "Write a function lteqgt(values: Array[Int], v: Int) that returns a triple containing the counts of values less than v, equal to v, and greater than v." One uses tail recursion, the other uses a while loop. I thought that both would compile to similar bytecode but the while loop is slower than the tail recursion by a factor of almost 2. This suggests to me that my while method is badly written.
import scala.annotation.tailrec
import scala.util.Random

object PerformanceTest {

  def main(args: Array[String]): Unit = {
    val bigArray: Array[Int] = fillArray(new Array[Int](100000000))
    println(time(lteqgt(bigArray, 25)))
    println(time(lteqgt2(bigArray, 25)))
  }

  def time[T](block: => T): T = {
    val start = System.nanoTime: Double
    val result = block
    val end = System.nanoTime: Double
    println("Time = " + (end - start) / 1000000.0 + " millis")
    result
  }

  @tailrec def fillArray(a: Array[Int], pos: Int = 0): Array[Int] = {
    if (pos == a.length)
      a
    else {
      a(pos) = Random.nextInt(50)
      fillArray(a, pos + 1)
    }
  }

  @tailrec def lteqgt(values: Array[Int], v: Int, lt: Int = 0, eq: Int = 0, gt: Int = 0, pos: Int = 0): (Int, Int, Int) = {
    if (pos == values.length)
      (lt, eq, gt)
    else
      lteqgt(values, v, lt + (if (values(pos) < v) 1 else 0), eq + (if (values(pos) == v) 1 else 0), gt + (if (values(pos) > v) 1 else 0), pos + 1)
  }

  def lteqgt2(values: Array[Int], v: Int): (Int, Int, Int) = {
    var lt = 0
    var eq = 0
    var gt = 0
    var pos = 0
    val limit = values.length
    while (pos < limit) {
      if (values(pos) > v)
        gt += 1
      else if (values(pos) < v)
        lt += 1
      else
        eq += 1
      pos += 1
    }
    (lt, eq, gt)
  }
}
Adjust the size of bigArray according to your heap size. Here is some sample output:
Time = 245.110899 millis
(50004367,2003090,47992543)
Time = 465.836894 millis
(50004367,2003090,47992543)
Why is the while method so much slower than the tailrec? Naively the tailrec version looks to be at a slight disadvantage, as it must always perform 3 "if" checks for every iteration, whereas the while version will often only perform 1 or 2 tests due to the else construct. (NB reversing the order I perform the two methods does not affect the outcome).
Test results (after reducing array size to 20000000)
Under Java 1.6.22 I get 151 and 122 ms for tail-recursion and while-loop respectively.
Under Java 1.7.0 I get 55 and 101 ms
So under Java 6 your while-loop is actually faster; both have improved in performance under Java 7, but the tail-recursive version has overtaken the loop.
Explanation
The performance difference is due to the fact that in your loop, you conditionally add 1 to the totals, while for recursion you always add either 1 or 0. So they are not equivalent. The equivalent while-loop to your recursive method is:
def lteqgt2(values: Array[Int], v: Int): (Int, Int, Int) = {
  var lt = 0
  var eq = 0
  var gt = 0
  var pos = 0
  val limit = values.length
  while (pos < limit) {
    gt += (if (values(pos) > v) 1 else 0)
    lt += (if (values(pos) < v) 1 else 0)
    eq += (if (values(pos) == v) 1 else 0)
    pos += 1
  }
  (lt, eq, gt)
}
and this gives exactly the same execution time as the recursive method (regardless of Java version).
Discussion
I'm not an expert on why the Java 7 VM (HotSpot) can optimize this better than your first version, but I'd guess it's because it's taking the same path through the code each time (rather than branching along the if / else if paths), so the bytecode can be inlined more efficiently.
But remember that this is not the case in Java 6. Why one while-loop outperforms the other is a question of JVM internals. Happily for the Scala programmer, the version produced from idiomatic tail-recursion is the faster one in the latest version of the JVM.
The difference could also be occurring at the processor level. See this question, which explains how code slows down if it contains unpredictable branching.
The two constructs are not identical. In particular, in the first case you don't need any jumps: on x86, you can use cmp, setle and add, instead of having to use cmp, jb and (if you don't jump) add. Not jumping is faster than jumping on pretty much every modern architecture.
So, if you have code that looks like
if (a < b) x += 1
where you may add or you may jump instead, vs.
x += (a < b)
(which only makes sense in C/C++ where 1 = true and 0 = false), the latter tends to be faster as it can be turned into more compact assembly code. In Scala/Java, you can't do this, but you can do
x += if (a < b) 1 else 0
which a smart JVM should recognize is the same as x += (a < b), which has a jump-free machine code translation, which is usually faster than jumping. An even smarter JVM would recognize that
if (a < b) x += 1
is the same yet again (because adding zero doesn't do anything).
C/C++ compilers routinely perform optimizations like this. Being unable to apply any of these optimizations was not a mark in the JIT compiler's favor; apparently it can as of 1.7, but only partially (i.e. it doesn't recognize that adding zero is the same as a conditional adding one, but it does at least convert x += if (a<b) 1 else 0 into fast machine code).
Now, none of this has anything to do with tail recursion or while loops per se. With tail recursion it's more natural to write the if (a < b) 1 else 0 form, but you can do either; and with while loops you can also do either. It just so happened that you picked one form for tail recursion and the other for the while loop, making it look like recursion vs. looping was the change instead of the two different ways to do the conditionals.

Fibonacci recursion with a stack

I've already asked a question about this, yet I'm still confused. I want to convert a recursive function into a stack based function without recursion. Take, for example, the fibonacci function:
algorithm Fibonacci(x):
    i = 0
    i += Fibonacci(x-1)
    i += Fibonacci(x-2)
    return i
(Yes I know I didn't put a base case and that recursion for fibonacci is really inefficient)
How would this be implemented using an explicit stack? For example, if I have the stack as a while loop, I have to jump out of the loop in order to evaluate the first recursion, and I have no way of returning to the line after the first recursion and continue on with the second recursion.
In Python:
def fib(x):
    tot = 0
    stack = [x]
    while stack:
        a = stack.pop()
        if a in [0, 1]:
            tot += 1
        else:
            stack.append(a - 1)
            stack.append(a - 2)
    return tot
If you do not want the external counter then you will need to push tuples that keep track of the accumulated sum and whether this was a - 1 or a - 2.
It is probably worth your time to explicitly write out the call stack (by hand, on paper) for a run of say fib(3) for your code (though fix your code first so it handles the boundary conditions). Once you do this it should be clear how to do it without a stack.
Edit:
Let us analyze the running of the following Fibonacci algorithm
def fib(x):
    if (x == 0) or (x == 1):
        return 1
    else:
        temp1 = fib(x - 1)
        temp2 = fib(x - 2)
        return temp1 + temp2
(Yes, I know that this isn't even an efficient implementation of an inefficient algorithm, I have declared more temporaries than necessary.)
Now when we use a stack for function calling we need to store two kinds of things on the stack.
Where to return the result.
Space for local variables.
In our case we have three possible places to return to.
Some outside caller
Assign to temp1
Assign to temp2
We also need space for three local variables: x, temp1, and temp2. Let us examine fib(3).
When we initially call fib, we tell the stack that we want to return to wherever we came from, x = 3, and temp1 and temp2 are uninitialized.
Next we push onto the stack that we want to assign temp1, x = 2, and temp1 and temp2 are uninitialized. We call again and we have a stack of
(assign temp1, x = 1, -, -)
(assign temp1, x = 2, -, -)
(out , x = 3, -, -)
we now return 1 and do the second call and get
(assign temp2, x = 0, -, -)
(assign temp1, x = 2, temp1 = 1, -)
(out , x = 3, -, -)
this now again returns 1
(assign temp1, x = 2, temp1 = 1, temp2 = 1)
(out , x = 3, -, -)
so this returns 2 and we get
(out , x = 3, temp1 =2, -)
So we now recurse to
(assign temp2, x = 1, -, -)
(out , x = 3, temp1 =2, -)
from which we can see our way out.
algorithm Fibonacci(x):
    stack = [1, 1]
    while stack.length < x
        push to the stack the sum of the two topmost stack elements
    return stack.last
You can preserve stack between calls as some kind of cache.
This stack is not a "true stack" since you can do more than only pushing, popping and checking its emptiness, but I believe this is what you are planning to do.
Your question inspired me to write a piece of code that initially scared me, but I'm not really sure what to think about it now, so here it is for your amusement. Maybe it can help a bit with understanding things.
It's a blatant simulation of an execution of a recursive Fibonacci function implementation. The language is C#. For an argument of 0 the function returns 0 - according to the definition of the Fibonacci sequence given by Ronald Graham, Donald Knuth, and Oren Patashnik in "Concrete Mathematics". It's defined this way also in Wikipedia. Checks for negative arguments are omitted.
Normally a return address is stored on the stack and execution just jumps to the right address. To simulate this I used an enum
enum JumpAddress
{
    beforeTheFirstRecursiveInvocation,
    betweenRecursiveInvocations,
    afterTheSecondRecursiveInvocation,
    outsideFibFunction
}
and a little state machine.
The Frame stored on the stack is defined like this:
class Frame
{
    public int argument;
    public int localVariable;
    public JumpAddress returnAddress;

    public Frame(int argument, JumpAddress returnAddress)
    {
        this.argument = argument;
        this.localVariable = 0;
        this.returnAddress = returnAddress;
    }
}
It's a C# class - a reference type. The stack holds references to the objects placed on the heap, so when I'm doing this:
Frame top = stack.Peek();
top.localVariable = lastresult;
I'm modifying the object still referenced by the reference at the top of a stack, not a copy.
I model invocation of a function, by pushing a frame on the stack and setting the execution address in my state machine to the beginning - beforeTheFirstRecursiveInvocation.
To return from the function, I set the lastresult variable, set the pointOfExecution variable to the return address stored in the top frame, and pop the frame from the stack.
Here is the code.
public static int fib(int n)
{
    Stack<Frame> stack = new Stack<Frame>(n);
    // Constructor uses the parameter to reserve space.
    int lastresult = 0;
    // Variable holding the result of the last "recursive" invocation.

    stack.Push(new Frame(n, JumpAddress.outsideFibFunction));
    JumpAddress pointOfExecution = JumpAddress.beforeTheFirstRecursiveInvocation;
    // That's how I model function invocation: I push a frame on the stack and set
    // pointOfExecution. The frame above stores the argument n and a return address
    // - outsideFibFunction.

    while (pointOfExecution != JumpAddress.outsideFibFunction)
    {
        Frame top = stack.Peek();
        switch (pointOfExecution)
        {
            case JumpAddress.beforeTheFirstRecursiveInvocation:
                if (top.argument <= 1)
                {
                    lastresult = top.argument;
                    pointOfExecution = top.returnAddress;
                    stack.Pop();
                }
                else
                {
                    stack.Push(new Frame(top.argument - 1, JumpAddress.betweenRecursiveInvocations));
                    pointOfExecution = JumpAddress.beforeTheFirstRecursiveInvocation;
                }
                break;
            case JumpAddress.betweenRecursiveInvocations:
                top.localVariable = lastresult;
                stack.Push(new Frame(top.argument - 2, JumpAddress.afterTheSecondRecursiveInvocation));
                pointOfExecution = JumpAddress.beforeTheFirstRecursiveInvocation;
                break;
            case JumpAddress.afterTheSecondRecursiveInvocation:
                lastresult += top.localVariable;
                pointOfExecution = top.returnAddress;
                stack.Pop();
                break;
            default:
                System.Diagnostics.Debug.Assert(false, "This point should never be reached");
                break;
        }
    }
    return lastresult;
}
// 0 < x < 100
int fib[100];
fib[1] = 1;
fib[2] = 1;
if (x <= 2)
    cout << 1;
else {
    for (int i = 3; i <= x; i++)
        fib[i] = fib[i-1] + fib[i-2];
    cout << fib[x];
}
OR without using an array:
int a, b, c;
a = 1; b = 1; c = 1;
if (x <= 2)
    cout << 1;
else {
    for (int i = 3; i <= x; i++) {
        c = a + b;
        a = b;
        b = c;
    }
    cout << c;
}
The last method works because you only need the previous two Fibonacci numbers to create the current one.

Counting, reversed bit pattern

I am trying to find an algorithm to count from 0 to 2^n - 1, but with their bit pattern reversed. I care about only the n LSBs of a word. As you may have guessed, I failed.
For n=3:
000 -> 0
100 -> 4
010 -> 2
110 -> 6
001 -> 1
101 -> 5
011 -> 3
111 -> 7
You get the idea.
Answers in pseudo-code is great. Code fragments in any language are welcome, answers without bit operations are preferred.
Please don't just post a fragment without even a short explanation or a pointer to a source.
Edit: I forgot to add, I already have a naive implementation which just bit-reverses a count variable. In a sense, this method is not really counting.
This is, I think, easiest with bit operations, even though you said this wasn't preferred.
Assuming 32-bit ints, here's a nifty chunk of code that can reverse all of the bits without doing it in 32 steps:
unsigned int i;
i = (i & 0x55555555) << 1 | (i & 0xaaaaaaaa) >> 1;
i = (i & 0x33333333) << 2 | (i & 0xcccccccc) >> 2;
i = (i & 0x0f0f0f0f) << 4 | (i & 0xf0f0f0f0) >> 4;
i = (i & 0x00ff00ff) << 8 | (i & 0xff00ff00) >> 8;
i = (i & 0x0000ffff) << 16 | (i & 0xffff0000) >> 16;
i >>= (32 - n);
Essentially this does an interleaved shuffle of all of the bits. Each time around half of the bits in the value are swapped with the other half.
The last line is necessary to realign the bits so that bit "n" is the most significant bit.
Shorter versions of this are possible if "n" is <= 16, or <= 8.
At each step, find the leftmost 0 digit of your value. Set it, and clear all digits to the left of it. If you don't find a 0 digit, then you've overflowed: return 0, or stop, or crash, or whatever you want.
This is what happens on a normal binary increment (by which I mean it's the effect, not how it's implemented in hardware), but we're doing it on the left instead of the right.
Whether you do this in bit ops, strings, or whatever, is up to you. If you do it in bitops, then a clz (or call to an equivalent hibit-style function) on ~value might be the most efficient way: __builtin_clz where available. But that's an implementation detail.
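For illustration, here is a minimal Python sketch of exactly that rule, scanning for the leftmost 0 among the n low bits instead of using a clz instruction (the function name is made up for the example):

def reverse_increment(value, n):
    # Find the leftmost 0 bit among the n LSBs, set it,
    # and clear all bits to its left. Returns 0 on overflow.
    for i in range(n - 1, -1, -1):              # scan from the MSB down
        mask = 1 << i
        if value & mask == 0:
            return (value | mask) & (2 * mask - 1)  # set bit i, clear bits above it
    return 0                                    # all n bits were 1: wrap to 0

x = 0
for _ in range(8):                              # n=3 prints 0 4 2 6 1 5 3 7
    print(x)
    x = reverse_increment(x, 3)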
This solution was originally in binary and converted to conventional math as the requester specified.
It would make more sense in binary: at least the multiply by 2 and divide by 2 should be << 1 and >> 1 for speed; the additions and subtractions probably don't matter one way or the other.
If you pass in the mask instead of nBits, use bit shifting instead of multiplying or dividing, and change the tail recursion to a loop, this will probably be the most performant solution you'll find, since every other call it will be nothing but a single add; it would only be as slow as Alnitak's solution once every 4, maybe even 8, calls.
int incrementBizarre(int initial, int nBits)
    // in the 3 bit example, this should create 100
    mask = 2^(nBits-1)

    // This should only return true if the first (least significant) bit is not set
    // if initial is 011 and mask is 100
    //    3              4, bit is not set
    if (initial < mask)
        // If it was not, just set it and bail.
        return initial + mask  // 011 (3) + 100 (4) = 111 (7)
    else
        // it was set, are we at the most significant bit yet?
        // mask 100 (4) / 2 = 010 (2); 001 / 2 = 0 indicates overflow
        if (mask / 2) > 0
            // No, we weren't, so unset it (initial - mask) and increment the next bit
            return incrementBizarre(initial - mask, nBits - 1)
        else
            // Whoops, we were at the most significant bit. Error condition
            throw new OverflowedMyBitsException()
Wow, that turned out kinda cool. I didn't figure in the recursion until the last second there.
It feels wrong, like there are some operations that should not work, but they do because of the nature of what you are doing. It feels like you should get into trouble when you are operating on a bit while some bits to the left are non-zero, but it turns out you can't ever be operating on a bit unless all the bits to the left are zero, which is a very strange condition, but true.
Example of flow to get from 110 to 001 (backwards 3 to backwards 4):
mask 100 (4), initial 110 (6); initial < mask=false; initial-mask = 010 (2), now try on the next bit
mask 010 (2), initial 010 (2); initial < mask=false; initial-mask = 000 (0), now inc the next bit
mask 001 (1), initial 000 (0); initial < mask=true; initial + mask = 001--correct answer
Here's a solution from my answer to a different question that computes the next bit-reversed index without looping. It relies heavily on bit operations, though.
The key idea is that incrementing a number simply flips a sequence of least-significant bits, for example from nnnn0111 to nnnn1000. So in order to compute the next bit-reversed index, you have to flip a sequence of most-significant bits. If your target platform has a CTZ ("count trailing zeros") instruction, this can be done efficiently.
Example in C using GCC's __builtin_ctz:
#include <stdio.h>

void iter_reversed(unsigned bits) {
    unsigned n = 1 << bits;
    for (unsigned i = 0, j = 0; i < n; i++) {
        printf("%x\n", j);
        // Compute a mask of LSBs.
        unsigned mask = i ^ (i + 1);
        // Length of the mask.
        unsigned len = __builtin_ctz(~mask);
        // Align the mask to MSB of n.
        mask <<= bits - len;
        // XOR with mask.
        j ^= mask;
    }
}
Without a CTZ instruction, you can also use integer division:
#include <stdio.h>

void iter_reversed(unsigned bits) {
    unsigned n = 1 << bits;
    for (unsigned i = 0, j = 0; i < n; i++) {
        printf("%x\n", j);
        // Find least significant zero bit.
        unsigned bit = ~i & (i + 1);
        // Using division to bit-reverse a single bit.
        unsigned rev = (n / 2) / bit;
        // XOR with mask.
        j ^= (n - 1) & ~(rev - 1);
    }
}
void reverse(int nMaxVal, int nBits)
{
    int thisVal, bit, out;

    // Calculate for each value from 0 to nMaxVal.
    for (thisVal = 0; thisVal <= nMaxVal; ++thisVal)
    {
        out = 0;

        // Shift each bit from thisVal into out, in reverse order.
        for (bit = 0; bit < nBits; ++bit)
            out = (out << 1) + ((thisVal >> bit) & 1);

        printf("%d -> %d\n", thisVal, out);
    }
}
Maybe increment from 0 to N (the "usual" way") and do ReverseBitOrder() for each iteration. You can find several implementations here (I like the LUT one the best).
Should be really quick.
Here's an answer in Perl. You don't say what comes after the all ones pattern, so I just return zero. I took out the bitwise operations so that it should be easy to translate into another language.
sub reverse_increment {
    my ($n, $bits) = @_;
    my $carry = 2**$bits;

    while ($carry > 1) {
        $carry /= 2;
        if ($carry > $n) {
            return $carry + $n;
        } else {
            $n -= $carry;
        }
    }
    return 0;
}
Here's a solution which doesn't actually try to do any addition, but exploits the on/off pattern of the seqence (most sig bit alternates every time, next most sig bit alternates every other time, etc), adjust n as desired:
#include <iostream>

#define FLIP(x, i) do { (x) ^= (1 << (i)); } while (0)

int main() {
    int n = 3;
    int max = (1 << n);
    int x = 0;
    for (int i = 1; i <= max; ++i) {
        std::cout << x << std::endl;
        /* if n == 3, this next part is functionally equivalent to this:
         *
         * if ((i % 1) == 0) FLIP(x, n - 1);
         * if ((i % 2) == 0) FLIP(x, n - 2);
         * if ((i % 4) == 0) FLIP(x, n - 3);
         */
        for (int j = 0; j < n; ++j) {
            if ((i % (1 << j)) == 0) FLIP(x, n - (j + 1));
        }
    }
}
How about adding 1 to the most significant bit, then carrying to the next (less significant) bit, if necessary. You could speed this up by operating on bytes:
Precompute a lookup table for counting in bit-reverse from 0 to 256 (00000000 -> 10000000, 10000000 -> 01000000, ..., 11111111 -> 00000000).
Set all bytes in your multi-byte number to zero.
Increment the most significant byte using the lookup table. If the byte is 0, increment the next byte using the lookup table. If the byte is 0, increment the next byte...
Go to step 3.
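A minimal Python sketch of that byte-table scheme (helper names are made up; the number is stored as a list of bytes, most significant byte first, matching the steps above):

def bit_reverse_byte(b):
    return int(format(b, '08b')[::-1], 2)

# Step 1: lookup table of bit-reversed successors:
# 00000000 -> 10000000, 10000000 -> 01000000, ..., 11111111 -> 00000000.
NEXT = [bit_reverse_byte((bit_reverse_byte(b) + 1) % 256) for b in range(256)]

def reversed_increment(digits):
    # Step 3: bump the most significant byte; if it wrapped to 0,
    # carry into the next (less significant) byte, and so on.
    for i in range(len(digits)):
        digits[i] = NEXT[digits[i]]
        if digits[i] != 0:
            break

digits = [0, 0]                  # step 2: all bytes zero
for _ in range(5):
    print(''.join(format(d, '08b') for d in digits))
    reversed_increment(digits)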
With n as your power of 2 and x the variable you want to step:
(defun inv-step (x n)                ; the following is a function declaration
  "returns a bit-inverse step of x, bounded by 2^n"  ; documentation
  (do ((i (expt 2 (- n 1))           ; loop, init of i
          (/ i 2))                   ; stepping of i
       (s x))                        ; init of s as x
      ((not (integerp i))            ; breaking condition
       s)                            ; returned value if all bits are 1 (is 0 then)
    (if (< s i)                      ; the loop's body: if s < i
        (return-from inv-step (+ s i)) ; -> add i to s and return the result
        (decf s i))))                ; else: reduce s by i
I commented it thoroughly as you may not be familiar with this syntax.
Edit: here is the tail-recursive version. It seems to be a little faster, provided that you have a compiler with tail call optimization.
(defun inv-step (x n)
  (let ((i (expt 2 (- n 1))))
    (cond ((= n 1)
           (if (zerop x) 1 0))       ; this is really (logxor x 1)
          ((< x i)
           (+ x i))
          (t
           (inv-step (- x i) (- n 1))))))
When you count from 0 to 2^n - 1 with the bit patterns reversed, you still cover the entire 0 to 2^n - 1 range, just in a different order.
Sum = (2^n - 1) * 2^n / 2
An O(1) operation. No need to do bit reversals.
Edit: Of course, the original poster's question was about incrementing by a (reversed) one, which makes things simpler than adding two arbitrary values. So nwellnhof's answer contains the algorithm already.
Summing two bit-reversal values
Here is one solution in php:
function RevSum($a, $b) {
    // loop until our adder, $b, is zero
    while ($b) {
        // get carry (aka overflow) bit for every bit-location by AND-operation
        // 0 + 0 --> 00 no overflow, carry is "0"
        // 0 + 1 --> 01 no overflow, carry is "0"
        // 1 + 0 --> 01 no overflow, carry is "0"
        // 1 + 1 --> 10 overflow! carry is "1"
        $c = $a & $b;

        // do 1-bit addition for every bit location at once by XOR-operation
        // 0 + 0 --> 00 result = 0
        // 0 + 1 --> 01 result = 1
        // 1 + 0 --> 01 result = 1
        // 1 + 1 --> 10 result = 0 (ignored that "1", already taken care above)
        $a ^= $b;

        // now: shift carry bits to the next bit-locations to be added to $a in
        // next iteration.
        // PHP_INT_MAX here is used to ensure that the most-significant bit of the
        // $b will be cleared after shifting. see link in the side note below.
        $b = ($c >> 1) & PHP_INT_MAX;
    }
    return $a;
}
Side note: See this question about shifting negative values.
And as for test; start from zero and increment value by 8-bit reversed one (10000000):
$value = 0;
$add = 0x80; // 10000000 <-- "one" as bit reversed

for ($count = 20; $count--;) {      // loop 20 times
    printf("%08b\n", $value);       // show value as 8-bit binary
    $value = RevSum($value, $add);  // do addition
}
... will output:
00000000
10000000
01000000
11000000
00100000
10100000
01100000
11100000
00010000
10010000
01010000
11010000
00110000
10110000
01110000
11110000
00001000
10001000
01001000
11001000
Let's assume the number is 11101010 and our task is to find the next one.
1) Find the zero at the highest position and mark the position as index:
11101010 (4th position, so index = 4)
2) Set to zero all bits at positions higher than index:
00001010
3) Change the zero found in step 1) to '1':
00011010
That's it. This is by far the fastest algorithm, since most CPUs have instructions to achieve this very efficiently. Here is a C++ implementation which increments a 64-bit number in the reversed pattern:
#include <intrin.h>

unsigned __int64 reversed_increment(unsigned __int64 number)
{
    unsigned long index;
    unsigned __int64 result;
    _BitScanReverse64(&index, ~number);      // index of the highest '1' in ~number (trick to find the highest '0')
    result = _bzhi_u64(number, index);       // set to '0' all bits of number higher than the index position
    result |= (unsigned __int64)1 << index;  // change the bit at the index position to '1'
    return result;
}
It doesn't meet your requirement of having no bit operations, but I fear there is no way to achieve something similar without them.

Equation (expression) parser with precedence?

I've developed an equation parser using a simple stack algorithm that will handle binary (+, -, |, &, *, /, etc.) operators, unary (!) operators, and parentheses.
Using this method, however, leaves me with everything having the same precedence - it's evaluated left to right regardless of operator, although precedence can be enforced using parentheses.
So right now "1+11*5" returns 60, not 56 as one might expect.
While this is suitable for the current project, I want to have a general purpose routine I can use for later projects.
Edited for clarity:
What is a good algorithm for parsing equations with precedence?
I'm interested in something simple to implement and understand that I can code myself to avoid licensing issues with available code.
Grammar:
I don't understand the grammar question - I've written this by hand. It's simple enough that I don't see the need for YACC or Bison. I merely need to calculate strings with equations such as "2+3 * (42/13)".
Language:
I'm doing this in C, but I'm interested in an algorithm, not a language specific solution. C is low level enough that it'll be easy to convert to another language should the need arise.
Code Example
I posted the test code for the simple expression parser I was talking about above. The project requirements altered and so I never needed to optimize the code for performance or space as it wasn't incorporated into the project. It's in the original verbose form, and should be readily understandable. If I do anything further with it in terms of operator precedence, I'll probably choose the macro hack because it matches the rest of the program in simplicity. If I ever use this in a real project, though, I'll be going for a more compact/speedy parser.
Related question
Smart design of a math parser?
-Adam
The shunting yard algorithm is the right tool for this. Wikipedia is really confusing about this, but basically the algorithm works like this:
Say, you want to evaluate 1 + 2 * 3 + 4. Intuitively, you "know" you have to do the 2 * 3 first, but how do you get this result? The key is to realize that when you're scanning the string from left to right, you will evaluate an operator when the operator that follows it has lower (or equal) precedence. In the context of the example, here's what you want to do:
Look at: 1 + 2, don't do anything.
Now look at 1 + 2 * 3, still don't do anything.
Now look at 1 + 2 * 3 + 4; now you know that 2 * 3 has to be evaluated because the next operator has lower precedence.
How do you implement this?
You want to have two stacks, one for numbers, and another for operators. You push numbers onto the stack all the time. You compare each new operator with the one at the top of the stack, if the one on top of the stack has higher priority, you pop it off the operator stack, pop the operands off the number stack, apply the operator and push the result onto the number stack. Now you repeat the comparison with the top of stack operator.
Coming back to the example, it works like this:
N = [ ]
Ops = [ ]
Read 1. N = [1], Ops = [ ]
Read +. N = [1], Ops = [+]
Read 2. N = [1 2], Ops = [+]
Read *. N = [1 2], Ops = [+ *]
Read 3. N = [1 2 3], Ops = [+ *]
Read +. N = [1 2 3], Ops = [+ *]
Pop 3, 2 and execute 2*3, and push result onto N. N = [1 6], Ops = [+]
+ is left associative, so you want to pop 1, 6 off as well and execute the +. N = [7], Ops = [].
Finally, push the + onto the operator stack. N = [7], Ops = [+].
Read 4. N = [7 4]. Ops = [+].
You've run out of input, so you want to empty the stacks now, upon which you will get the result 11.
There, that's not so difficult, is it? And it makes no invocations to any grammars or parser generators.
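If it helps to see the two-stack scheme as code, here is a minimal Python sketch that handles left-associative binary operators on pre-split tokens (parentheses are left out to keep it short):

import operator

PREC = {'+': 1, '-': 1, '*': 2, '/': 2}
APPLY = {'+': operator.add, '-': operator.sub,
         '*': operator.mul, '/': operator.truediv}

def evaluate(tokens):
    nums, ops = [], []

    def reduce_top():
        op = ops.pop()
        right, left = nums.pop(), nums.pop()
        nums.append(APPLY[op](left, right))

    for tok in tokens:
        if tok in PREC:
            # a top-of-stack operator with higher or equal priority runs first
            while ops and PREC[ops[-1]] >= PREC[tok]:
                reduce_top()
            ops.append(tok)
        else:
            nums.append(float(tok))
    while ops:            # out of input: empty the stacks
        reduce_top()
    return nums[0]

print(evaluate("1 + 2 * 3 + 4".split()))  # 11.0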
The hard way
You want a recursive descent parser.
To get precedence you need to think recursively, for example, using your sample string,
1+11*5
to do this manually, you would have to read the 1, then see the plus and start a whole new recursive parse "session" starting with 11... and make sure to parse the 11 * 5 into its own factor, yielding a parse tree with 1 + (11 * 5).
This all feels so painful even to attempt to explain, especially with the added powerlessness of C. See, after parsing the 11, if the * was actually a + instead, you would have to abandon the attempt at making a term and instead parse the 11 itself as a factor. My head is already exploding. It's possible with the recursive descent strategy, but there is a better way...
The easy (right) way
If you use a GPL tool like Bison, you probably don't need to worry about licensing issues since the C code generated by bison is not covered by the GPL (IANAL but I'm pretty sure GPL tools don't force the GPL on generated code/binaries; for example Apple compiles code like say, Aperture with GCC and they sell it without having to GPL said code).
Download Bison (or something equivalent, ANTLR, etc.).
There is usually some sample code that you can just run bison on and get your desired C code that demonstrates this four function calculator:
http://www.gnu.org/software/bison/manual/html_node/Infix-Calc.html
Look at the generated code, and see that this is not as easy as it sounds. Also, the advantages of using a tool like Bison are 1) you learn something (especially if you read the Dragon book and learn about grammars), 2) you avoid reinventing the wheel (NIH). With a real parser-generator tool, you actually have a hope of scaling up later, and of showing other people that parsers are the domain of parsing tools.
Update:
People here have offered much sound advice. My only warning against skipping the parsing tools or just using the shunting yard algorithm or a hand-rolled recursive descent parser is that little toy languages [1] may someday turn into big actual languages with functions (sin, cos, log) and variables, conditions and for loops.
Flex/Bison may very well be overkill for a small, simple interpreter, but a one-off parser+evaluator may cause trouble down the line when changes need to be made or features need to be added. Your situation will vary and you will need to use your judgement; just don't punish other people for your sins [2] and build a less than adequate tool.
My favorite tool for parsing
The best tool in the world for the job is the Parsec library (for recursive descent parsers), which comes with the programming language Haskell. It looks a lot like BNF, or like some specialized tool or domain specific language for parsing (sample code [3]), but it is in fact just a regular library in Haskell, meaning that it compiles in the same build step as the rest of your Haskell code, and you can write arbitrary Haskell code and call that within your parser, and you can mix and match other libraries all in the same code. (Embedding a parsing language like this in a language other than Haskell results in loads of syntactic cruft, by the way. I did this in C# and it works quite well but it is not so pretty and succinct.)
Notes:
[1] Richard Stallman says, in Why you should not use Tcl:
The principal lesson of Emacs is that a language for extensions should not be a mere "extension language". It should be a real programming language, designed for writing and maintaining substantial programs. Because people will want to do that!
[2] Yes, I am forever scarred from using that "language".
Also note that when I submitted this entry, the preview was correct, but SO's less than adequate parser ate my close anchor tag on the first paragraph, proving that parsers are not something to be trifled with because if you use regexes and one off hacks you will probably get something subtle and small wrong.
[3] Snippet of a Haskell parser using Parsec: a four function calculator extended with exponents, parentheses, whitespace for multiplication, and constants (like pi and e).
aexpr  = expr `chainl1` toOp
expr   = optChainl1 term addop (toScalar 0)
term   = factor `chainl1` mulop
factor = sexpr `chainr1` powop
sexpr  = parens aexpr
     <|> scalar
     <|> ident

powop  = sym "^"  >>= return . (B Pow)
     <|> sym "^-" >>= return . (\x y -> B Pow x (B Sub (toScalar 0) y))
toOp   = sym "->" >>= return . (B To)
mulop  = sym "*"  >>= return . (B Mul)
     <|> sym "/"  >>= return . (B Div)
     <|> sym "%"  >>= return . (B Mod)
     <|> return . (B Mul)
addop  = sym "+"  >>= return . (B Add)
     <|> sym "-"  >>= return . (B Sub)

scalar = number  >>= return . toScalar
ident  = literal >>= return . Lit

parens p = do
    lparen
    result <- p
    rparen
    return result
http://www.engr.mun.ca/~theo/Misc/exp_parsing.htm
Very good explanation of different approaches:
Recursive-descent recognition
The shunting yard algorithm
The classic solution
Precedence climbing
Written in simple language and pseudo-code.
I like the 'precedence climbing' one.
There's a nice article here about combining a simple recursive-descent parser with operator-precedence parsing. If you've been recently writing parsers, it should be very interesting and instructive to read.
Long time ago, I made up my own parsing algorithm, that I couldn't find in any books on parsing (like the Dragon Book). Looking at the pointers to the Shunting Yard algorithm, I do see the resemblance.
About 2 years ago, I made a post about it, complete with Perl source code, on http://www.perlmonks.org/?node_id=554516. It's easy to port to other languages: the first implementation I did was in Z80 assembler.
It's ideal for direct calculation with numbers, but you can use it to produce a parse tree if you must.
Update: Because more people can read (or run) JavaScript, I've reimplemented my parser in JavaScript, after reorganizing the code. The whole parser is under 5k of JavaScript code (about 100 lines for the parser, 15 lines for a wrapper function), including error reporting and comments.
You can find a live demo at http://users.telenet.be/bartl/expressionParser/expressionParser.html.
// operator table
var ops = {
    '+'  : {op: '+',  precedence: 10, assoc: 'L', exec: function(l, r) { return l + r; } },
    '-'  : {op: '-',  precedence: 10, assoc: 'L', exec: function(l, r) { return l - r; } },
    '*'  : {op: '*',  precedence: 20, assoc: 'L', exec: function(l, r) { return l * r; } },
    '/'  : {op: '/',  precedence: 20, assoc: 'L', exec: function(l, r) { return l / r; } },
    '**' : {op: '**', precedence: 30, assoc: 'R', exec: function(l, r) { return Math.pow(l, r); } }
};

// constants or variables
var vars = { e: Math.exp(1), pi: Math.atan2(1, 1) * 4 };

// input for parsing
// var r = { string: '123.45+33*8', offset: 0 };
// r is passed by reference: any change in r.offset is returned to the caller
// functions return the parsed/calculated value
function parseVal(r) {
    var startOffset = r.offset;
    var value;
    var m;
    // floating point number
    // example of parsing ("lexing") without aid of regular expressions
    value = 0;
    while ("0123456789".indexOf(r.string.substr(r.offset, 1)) >= 0 && r.offset < r.string.length) r.offset++;
    if (r.string.substr(r.offset, 1) == ".") {
        r.offset++;
        while ("0123456789".indexOf(r.string.substr(r.offset, 1)) >= 0 && r.offset < r.string.length) r.offset++;
    }
    if (r.offset > startOffset) { // did that work?
        // OK, so I'm lazy...
        return parseFloat(r.string.substr(startOffset, r.offset - startOffset));
    } else if (r.string.substr(r.offset, 1) == "+") { // unary plus
        r.offset++;
        return parseVal(r);
    } else if (r.string.substr(r.offset, 1) == "-") { // unary minus
        r.offset++;
        return negate(parseVal(r));
    } else if (r.string.substr(r.offset, 1) == "(") { // expression in parens
        r.offset++; // eat "("
        value = parseExpr(r);
        if (r.string.substr(r.offset, 1) == ")") {
            r.offset++;
            return value;
        }
        r.error = "Parsing error: ')' expected";
        throw 'parseError';
    } else if (m = /^[a-z_][a-z0-9_]*/i.exec(r.string.substr(r.offset))) { // variable/constant name
        // sorry for the regular expression, but I'm too lazy to manually build a varname lexer
        var name = m[0]; // matched string
        r.offset += name.length;
        if (name in vars) return vars[name]; // I know that thing!
        r.error = "Semantic error: unknown variable '" + name + "'";
        throw 'unknownVar';
    } else {
        if (r.string.length == r.offset) {
            r.error = 'Parsing error at end of string: value expected';
            throw 'valueMissing';
        } else {
            r.error = "Parsing error: unrecognized value";
            throw 'valueNotParsed';
        }
    }
}

function negate(value) {
    return -value;
}

function parseOp(r) {
    if (r.string.substr(r.offset, 2) == '**') {
        r.offset += 2;
        return ops['**'];
    }
    if ("+-*/".indexOf(r.string.substr(r.offset, 1)) >= 0)
        return ops[r.string.substr(r.offset++, 1)];
    return null;
}

function parseExpr(r) {
    var stack = [{precedence: 0, assoc: 'L'}];
    var op;
    var value = parseVal(r); // first value on the left
    for (;;) {
        op = parseOp(r) || {precedence: 0, assoc: 'L'};
        while (op.precedence < stack[stack.length - 1].precedence ||
               (op.precedence == stack[stack.length - 1].precedence && op.assoc == 'L')) {
            // precedence of op is too low, calculate with what we've got on the left, first
            var tos = stack.pop();
            if (!tos.exec) return value; // end reached
            // do the calculation ("reduce"), producing a new value
            value = tos.exec(tos.value, value);
        }
        // store on stack and continue parsing ("shift")
        stack.push({op: op.op, precedence: op.precedence, assoc: op.assoc, exec: op.exec, value: value});
        value = parseVal(r); // value on the right
    }
}

function parse(string) { // wrapper
    var r = {string: string, offset: 0};
    try {
        var value = parseExpr(r);
        if (r.offset < r.string.length) {
            r.error = 'Syntax error: junk found at offset ' + r.offset;
            throw 'trailingJunk';
        }
        return value;
    } catch (e) {
        alert(r.error + ' (' + e + '):\n' + r.string.substr(0, r.offset) + '<*>' + r.string.substr(r.offset));
        return;
    }
}
It would help if you could describe the grammar you are currently using to parse. Sounds like the problem might lie there!
Edit:
The fact that you don't understand the grammar question and that 'you've written this by hand' very likely explains why you're having problems with expressions of the form '1+11*5' (i.e., with operator precedence). Googling for 'grammar for arithmetic expressions', for example, should yield some good pointers. Such a grammar need not be complicated:
<Exp> ::= <Exp> + <Term> |
<Exp> - <Term> |
<Term>
<Term> ::= <Term> * <Factor> |
<Term> / <Factor> |
<Factor>
<Factor> ::= x | y | ... |
( <Exp> ) |
- <Factor> |
<Number>
would do the trick, for example, and can be trivially augmented to take care of more complicated expressions (including functions, or powers, ...); a runnable sketch of this grammar follows below.
I suggest you have a look at this thread, for example.
Almost all introductions to grammars/parsing treat arithmetic expressions as an example.
Note that using a grammar does not at all imply using a specific tool (a la Yacc, Bison,...). Indeed, you most certainly are already using the following grammar:
<Exp> ::= <Leaf> | <Exp> <Op> <Leaf>
<Op> ::= + | - | * | /
<Leaf> ::= <Number> | ( <Exp> )
(or something of the kind) without knowing it!
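To make the connection concrete, here is a hand-rolled recursive-descent sketch of the first grammar above (the <Exp>/<Term>/<Factor> one) in JavaScript. It is illustrative only: the left-recursive rules are turned into loops, as is usual, and error handling is omitted.
function evaluate(s) {
  let pos = 0;
  const peek = () => s[pos];
  function parseExp() { // <Exp> ::= <Term> { ("+"|"-") <Term> }
    let v = parseTerm();
    while (peek() === '+' || peek() === '-')
      v = s[pos++] === '+' ? v + parseTerm() : v - parseTerm();
    return v;
  }
  function parseTerm() { // <Term> ::= <Factor> { ("*"|"/") <Factor> }
    let v = parseFactor();
    while (peek() === '*' || peek() === '/')
      v = s[pos++] === '*' ? v * parseFactor() : v / parseFactor();
    return v;
  }
  function parseFactor() { // <Factor> ::= "(" <Exp> ")" | "-" <Factor> | <Number>
    if (peek() === '(') { pos++; const v = parseExp(); pos++; return v; } // second pos++ skips ')'
    if (peek() === '-') { pos++; return -parseFactor(); }
    const start = pos;
    while (pos < s.length && /[0-9.]/.test(s[pos])) pos++;
    return parseFloat(s.slice(start, pos));
  }
  return parseExp();
}
// evaluate("1+11*5") returns 56, because <Term> binds tighter than <Exp>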
Have you thought about using Boost Spirit? It allows you to write EBNF-like grammars in C++ like this:
group = '(' >> expression >> ')';
factor = integer | group;
term = factor >> *(('*' >> factor) | ('/' >> factor));
expression = term >> *(('+' >> term) | ('-' >> term));
As you posed your question, there is no need for recursion whatsoever. The answer is three things: postfix notation plus the Shunting Yard algorithm plus postfix expression evaluation:
1). Postfix notation = invented to eliminate the need for explicit precedence specification. Read more on the net, but here is the gist of it: the infix expression ( 1 + 2 ) * 3, while easy for humans to read and process, is not very efficient for computing via machine. What is? A simple rule that says "rewrite the expression so that precedence is already encoded in the order of the symbols, then always process it left-to-right". So infix ( 1 + 2 ) * 3 becomes the postfix 1 2 + 3 *. POSTfix because the operator is always placed AFTER the operands.
2). Evaluating a postfix expression. Easy. Read numbers off the postfix string. Push them on a stack until an operator is seen. Check the operator type - unary? binary? ternary? Pop as many operands off the stack as needed to evaluate this operator. Evaluate. Push the result back on the stack! And you are almost done. Keep doing so until the stack has only one entry = the value you are looking for. (A sketch follows this list.)
Let's do ( 1 + 2 ) * 3, which in postfix is 1 2 + 3 *. Read the first number = 1. Push it on the stack. Read the next. Number = 2. Push it on the stack. Read the next. Operator. Which one? +. What kind? Binary = needs two operands. Pop the stack twice: argright is 2 and argleft is 1. 1 + 2 is 3. Push 3 back on the stack. Read the next token from the postfix string. It's a number: 3. Push. Read the next. Operator. Which one? *. What kind? Binary = needs two numbers -> pop the stack twice. First pop into argright, second into argleft. Evaluate the operation: 3 times 3 is 9. Push 9 on the stack. Read the next postfix char. It's null. End of input. Pop the stack once = that's your answer.
3). Shunting Yard is used to transform a human-readable infix expression into a postfix expression (also easily human-readable after some practice). It is easy to code manually. See the comments above and resources on the net.
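Here is a minimal JavaScript sketch of step 2 (binary operators only, no error handling; the name evalPostfix is just illustrative):
function evalPostfix(tokens) { // e.g. ["1", "2", "+", "3", "*"]
  const stack = [];
  for (const t of tokens) {
    if (!isNaN(t)) { stack.push(parseFloat(t)); continue; } // a number: push it
    const argright = stack.pop(); // pop order matters for - and /
    const argleft = stack.pop();
    switch (t) {
      case '+': stack.push(argleft + argright); break;
      case '-': stack.push(argleft - argright); break;
      case '*': stack.push(argleft * argright); break;
      case '/': stack.push(argleft / argright); break;
    }
  }
  return stack.pop(); // exactly one value should remain
}
// evalPostfix(["1", "2", "+", "3", "*"]) returns 9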
I would suggest cheating and using the Shunting Yard Algorithm. It's an easy means of writing a simple calculator-type parser and takes precedence into account.
If you want to properly tokenise things and have variables, etc. involved, then I would go ahead and write a recursive descent parser as suggested by others here; however, if you simply require a calculator-style parser, then this algorithm should be sufficient :-)
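For reference, here is a compact JavaScript sketch of the algorithm (four left-associative binary operators plus parentheses; illustrative only, with no error handling). It produces exactly the postfix form that the stack evaluator sketched above consumes:
function shuntingYard(tokens) { // e.g. ["1", "+", "11", "*", "5"]
  const prec = { '+': 1, '-': 1, '*': 2, '/': 2 };
  const output = [], ops = [];
  for (const t of tokens) {
    if (t in prec) { // an operator: first pop anything that binds at least as tightly
      while (ops.length && ops[ops.length - 1] in prec && prec[ops[ops.length - 1]] >= prec[t])
        output.push(ops.pop());
      ops.push(t);
    } else if (t === '(') {
      ops.push(t);
    } else if (t === ')') {
      while (ops[ops.length - 1] !== '(') output.push(ops.pop());
      ops.pop(); // discard the '(' itself
    } else {
      output.push(t); // a number goes straight to the output
    }
  }
  while (ops.length) output.push(ops.pop());
  return output;
}
// shuntingYard(["1", "+", "11", "*", "5"]) returns ["1", "11", "5", "*", "+"]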
Another resource for precedence parsing is the Operator-precedence parser entry on Wikipedia. It covers Dijkstra's shunting yard algorithm and an alternative tree-based algorithm, but most notably it covers a really simple macro-replacement algorithm that can be trivially implemented in front of any precedence-ignorant parser:
#include <stdio.h>
int main(int argc, char *argv[]){
printf("((((");
for(int i=1;i!=argc;i++){
if(argv[i] && !argv[i][1]){
switch(argv[i][0]){ /* switch on the character, not the pointer */
case '^': printf(")^("); continue;
case '*': printf("))*(("); continue;
case '/': printf("))/(("); continue;
case '+': printf(")))+((("); continue;
case '-': printf(")))-((("); continue;
}
}
printf("%s", argv[i]);
}
printf("))))\n");
return 0;
}
Invoke it as:
$ cc -o parenthesise parenthesise.c
$ ./parenthesise a \* b + c ^ d / e
((((a))*((b)))+(((c)^(d))/((e))))
Which is awesome in its simplicity, and very understandable.
Is there a language you want to use? ANTLR will let you do this from a Java perspective. Adrian Kuhn has an excellent writeup on how to write an executable grammar in Ruby; in fact, his example is almost exactly your arithmetic expression example.
It depends on how "general" you want it to be.
If you want it to be really really general such as be able to parse mathematical functions as well like sin(4+5)*cos(7^3) you will probably need a parse tree.
In that case, I do not think a complete implementation is appropriate to paste here. I'd suggest you check out one of the famous "Dragon books".
But if you just want precedence support, then you could do that by first converting the expression to postfix form; an algorithm you can copy-and-paste should be available from Google, or I think you can code it up yourself with a binary tree.
When you have it in postfix form, it's a piece of cake from then on, since you already understand how the stack helps.
I found this on the PIClist about the Shunting Yard algorithm:
Harold writes:
I remember reading, a long time ago, of an algorithm that converted algebraic expressions to RPN for easy evaluation. Each infix value or operator or parenthesis was represented by a railroad car on a track. One type of car split off to another track and the other continued straight ahead. I don't recall the details (obviously!), but always thought it would be interesting to code. This is back when I was writing 6800 (not 68000) assembly code.
This is the "shunting yard algorithm" and it is what most machine parsers use. See the article on parsing in Wikipedia. An easy way to code the shunting yard algorithm is to use two stacks. One is the "push" stack and the other the "reduce" or "result" stack. Example:
pstack = () // empty
rstack = ()
input: 1+2*3
precedence = 0 // lowest
reduce = 0 // don't reduce
start:
token '1': isnumber, put in pstack (push)
token '+': isoperator, set precedence=1; if precedence <= previous_operator_precedence then reduce() // see below
put '+' in pstack (push)
token '2': isnumber, put in pstack (push)
token '*': isoperator, set precedence=2, put in pstack (push) // check precedence as above
token '3': isnumber, put in pstack (push)
end of input, need to reduce (goal is an empty pstack): reduce() // done
To reduce, pop elements from the push stack and put them into the result stack, always swapping the top 2 items on pstack if they are of the form 'operator' 'number':
pstack: '1' '+' '2' '*' '3'   rstack: ()
...
pstack: ()   rstack: '3' '2' '*' '1' '+'
If the expression had been:
1*2+3
then the reduce trigger would have been the reading of the token '+', which has lower precedence than the '*' already pushed, so it would have done:
pstack: '1' '*' '2'   rstack: ()
...
pstack: ()   rstack: '1' '2' '*'
and then pushed '+' and then '3' and then finally reduced:
pstack: '+' '3'   rstack: '1' '2' '*'
...
pstack: ()   rstack: '1' '2' '*' '3' '+'
So the short version is: push numbers; when pushing operators, check the precedence of the previous operator. If it was higher than that of the operator to be pushed now, first reduce, then push the current operator. To handle parens, simply save the precedence of the 'previous' operator and put a mark on the pstack that tells the reduce algorithm to stop reducing when solving the inside of a paren pair. The closing paren triggers a reduction, as does the end of input, and also removes the open-paren mark from the pstack and restores the 'previous operation' precedence so parsing can continue after the close paren where it left off. This can be done with recursion or without (hint: use a stack to store the previous precedence when encountering a '(' ...). The generalized version of this is to use a parser-generator-implemented shunting yard algorithm, e.g. using yacc or bison or taccle (the Tcl analog of yacc).
Peter
-Adam
I have posted source for an ultra compact (1 class, < 10 KiB) Java Math Evaluator on my web site. This is a recursive descent parser of the type that caused the cranial explosion for the poster of the accepted answer.
It supports full precedence, parenthesis, named variables and single-argument functions.
I released an expression parser based on Dijkstra's Shunting Yard algorithm, under the terms of the Apache License 2.0:
http://projects.congrace.de/exp4j/index.html
I've implemented a recursive descent parser in Java in the MathEclipse Parser project. It can also be used as a Google Web Toolkit module.
I'm currently working on a series of articles building a regular expression parser as a learning tool for design patterns and readable programming. You can take a look at readablecode. The article presents a clear use of the shunting yard algorithm.
I wrote an expression parser in F# and blogged about it here. It uses the shunting yard algorithm, but instead of converting from infix to RPN, I added a second stack to accumulate the results of calculations. It correctly handles operator precedence, but doesn't support unary operators. I wrote this to learn F#, not to learn expression parsing, though.
A Python solution using pyparsing can be found here. Parsing infix notation with various operators with precedence is fairly common, and so pyparsing also includes the infixNotation (formerly operatorPrecedence) expression builder. With it you can easily define boolean expressions using "AND", "OR", "NOT", for example. Or you can expand your four-function arithmetic to use other operators, such as ! for factorial, or '%' for modulus, or add P and C operators to compute permutations and combinations. You could write an infix parser for matrix notation, that includes handling of '-1' or 'T' operators (for inversion and transpose). The operatorPrecedence example of a 4-function parser (with '!' thrown in for fun) is here and a more fully featured parser and evaluator is here.
I know this is a late answer, but I've just written a tiny parser that allows all operators (prefix, postfix and infix-left, infix-right and nonassociative) to have arbitrary precedence.
I'm going to expand this for a language with arbitrary DSL support, but I just wanted to point out that one doesn't need custom parsers for operator precedence: one can use a generalized parser that doesn't need tables at all and just looks up the precedence of each operator as it appears. People have been mentioning custom Pratt parsers or shunting yard parsers that can accept illegal inputs - this one doesn't need to be customized and (unless there's a bug) won't accept bad input. It isn't complete in a sense: it was written to test the algorithm, and its input is in a form that needs some preprocessing, but there are comments that make it clear.
Note that some common kinds of operators are missing, for instance the sort of operator used for indexing, i.e. table[index], or for calling a function, i.e. function(parameter-expression, ...).
I'm going to add those, but think of both as postfix operators where what comes between the delimiters '[' and ']' or '(' and ')' is parsed with a different instance of the expression parser. Sorry to have left that out, but the postfix part is in - adding the rest will probably almost double the size of the code.
Since the parser is just 100 lines of Racket code, perhaps I should just paste it here; I hope it isn't longer than Stack Overflow allows.
A few details on arbitrary decisions:
If a low precedence postfix operator is competing for the same infix blocks as a low precedence prefix operator, the prefix operator wins. This doesn't come up in most languages since most don't have low precedence postfix operators.
For instance: ((data a) (left 1 +) (pre 2 not) (data b) (post 3 !) (left 1 +) (data c)) is a + not b! + c, where not is a prefix operator and ! is a postfix operator, and both have lower precedence than +, so they want to group in incompatible ways: either as
(a + not b!) + c
or as
a + (not b! + c)
In these cases the prefix operator always wins, so the second is how it parses.
Nonassociative infix operators are really there so that you don't have to pretend that operators that return different types than they take make sense together, but without having different expression types for each it's a kludge. As such, in this algorithm, non-associative operators refuse to associate not just with themselves but with any operator with the same precedence. That's a common case as < <= == >= etc don't associate with each other in most languages.
The question of how different kinds of operators (left, prefix etc) break ties on precedence is one that shouldn't come up, because it doesn't really make sense to give operators of different types the same precedence. This algorithm does something in those cases, but I'm not even bothering to figure out exactly what because such a grammar is a bad idea in the first place.
#lang racket
;cool the algorithm fits in 100 lines!
(define MIN-PREC -10000)
;format (pre prec name) (left prec name) (right prec name) (nonassoc prec name) (post prec name) (data name) (grouped exp)
;for example "not a*-7+5 < b*b or c >= 4"
;which groups as: not ((((a*(-7))+5) < (b*b)) or (c >= 4))"
;is represented as '((pre 0 not)(data a)(left 4 *)(pre 5 -)(data 7)(left 3 +)(data 5)(nonassoc 2 <)(data b)(left 4 *)(data b)(right 1 or)(data c)(nonassoc 2 >=)(data 4))
;higher numbers are higher precedence
;"(a+b)*c" is represented as ((grouped (data a)(left 3 +)(data b))(left 4 *)(data c))
(struct prec-parse ([data-stack #:mutable #:auto]
[op-stack #:mutable #:auto])
#:auto-value '())
(define (pop-data stacks)
(let [(data (car (prec-parse-data-stack stacks)))]
(set-prec-parse-data-stack! stacks (cdr (prec-parse-data-stack stacks)))
data))
(define (pop-op stacks)
(let [(op (car (prec-parse-op-stack stacks)))]
(set-prec-parse-op-stack! stacks (cdr (prec-parse-op-stack stacks)))
op))
(define (push-data! stacks data)
(set-prec-parse-data-stack! stacks (cons data (prec-parse-data-stack stacks))))
(define (push-op! stacks op)
(set-prec-parse-op-stack! stacks (cons op (prec-parse-op-stack stacks))))
(define (process-prec min-prec stacks)
(let [(op-stack (prec-parse-op-stack stacks))]
(cond ((not (null? op-stack))
(let [(op (car op-stack))]
(cond ((>= (cadr op) min-prec)
(apply-op op stacks)
(set-prec-parse-op-stack! stacks (cdr op-stack))
(process-prec min-prec stacks))))))))
(define (process-nonassoc min-prec stacks)
(let [(op-stack (prec-parse-op-stack stacks))]
(cond ((not (null? op-stack))
(let [(op (car op-stack))]
(cond ((> (cadr op) min-prec)
(apply-op op stacks)
(set-prec-parse-op-stack! stacks (cdr op-stack))
(process-nonassoc min-prec stacks))
((= (cadr op) min-prec) (error "multiply applied non-associative operator"))
))))))
(define (apply-op op stacks)
(let [(op-type (car op))]
(cond ((eq? op-type 'post)
(push-data! stacks `(,op ,(pop-data stacks) )))
(else ;assume infix
(let [(tos (pop-data stacks))]
(push-data! stacks `(,op ,(pop-data stacks) ,tos)))))))
(define (finish input min-prec stacks)
(process-prec min-prec stacks)
input
)
(define (post input min-prec stacks)
(if (null? input) (finish input min-prec stacks)
(let* [(cur (car input))
(input-type (car cur))]
(cond ((eq? input-type 'post)
(cond ((< (cadr cur) min-prec)
(finish input min-prec stacks))
(else
(process-prec (cadr cur)stacks)
(push-data! stacks (cons cur (list (pop-data stacks))))
(post (cdr input) min-prec stacks))))
(else (let [(handle-infix (lambda (proc-fn inc)
(cond ((< (cadr cur) min-prec)
(finish input min-prec stacks))
(else
(proc-fn (+ inc (cadr cur)) stacks)
(push-op! stacks cur)
(start (cdr input) min-prec stacks)))))]
(cond ((eq? input-type 'left) (handle-infix process-prec 0))
((eq? input-type 'right) (handle-infix process-prec 1))
((eq? input-type 'nonassoc) (handle-infix process-nonassoc 0))
(else error "post op, infix op or end of expression expected here"))))))))
;alters the stacks and returns the input
(define (start input min-prec stacks)
(if (null? input) (error "expression expected")
(let* [(cur (car input))
(input-type (car cur))]
(set! input (cdr input))
;pre could clearly work with new stacks, but could it reuse the current one?
(cond ((eq? input-type 'pre)
(let [(new-stack (prec-parse))]
(set! input (start input (cadr cur) new-stack))
(push-data! stacks
(cons cur (list (pop-data new-stack))))
;we might want to assert here that the cdr of the new stack is null
(post input min-prec stacks)))
((eq? input-type 'data)
(push-data! stacks cur)
(post input min-prec stacks))
((eq? input-type 'grouped)
(let [(new-stack (prec-parse))]
(start (cdr cur) MIN-PREC new-stack)
(push-data! stacks (pop-data new-stack)))
;we might want to assert here that the cdr of the new stack is null
(post input min-prec stacks))
(else (error "bad input"))))))
(define (op-parse input)
(let [(stacks (prec-parse))]
(start input MIN-PREC stacks)
(pop-data stacks)))
(define (main)
(op-parse (read)))
(main)
Here is a simple recursive solution written in Java. Note that it does not handle negative numbers, but you can add that if you want to:
public class ExpressionParser {
public double eval(String exp){
int bracketCounter = 0;
int lastAddSub = -1, lastMulDiv = -1;
for(int i=0; i<exp.length(); i++){
char c = exp.charAt(i);
if(c == '(') bracketCounter++;
else if(c == ')') bracketCounter--;
else if((c == '+' || c == '-') && bracketCounter == 0){
lastAddSub = i; // remember the LAST top-level +/- so '-' stays left-associative
}
else if((c == '*' || c == '/') && bracketCounter == 0){
lastMulDiv = i; // likewise for * and /
}
}
// split at the lowest-precedence operator; fall back to * or / if no +/- was found
int operatorIndex = lastAddSub >= 0 ? lastAddSub : lastMulDiv;
if(operatorIndex < 0){
exp = exp.trim();
if(exp.charAt(0) == '(' && exp.charAt(exp.length()-1) == ')')
return eval(exp.substring(1, exp.length()-1));
else
return Double.parseDouble(exp);
}
else{
switch(exp.charAt(operatorIndex)){
case '+':
return eval(exp.substring(0, operatorIndex)) + eval(exp.substring(operatorIndex+1));
case '-':
return eval(exp.substring(0, operatorIndex)) - eval(exp.substring(operatorIndex+1));
case '*':
return eval(exp.substring(0, operatorIndex)) * eval(exp.substring(operatorIndex+1));
case '/':
return eval(exp.substring(0, operatorIndex)) / eval(exp.substring(operatorIndex+1));
}
}
return 0;
}
}
The algorithm can easily be encoded in C as a recursive descent parser.
#include <stdio.h>
#include <ctype.h>
/*
* expression -> sum
* sum -> product | product "+" sum
* product -> term | term "*" product
* term -> number | "(" expression ")"
* number -> [0..9]+
*/
typedef struct {
int value;
const char* context;
} expression_t;
expression_t expression(int value, const char* context) {
return (expression_t) { value, context };
}
/* begin: parsers */
expression_t eval_expression(const char* symbols);
expression_t eval_number(const char* symbols) {
// number -> [0..9]+
double number = 0;
while (isdigit(*symbols)) {
number = 10 * number + (*symbols - '0');
symbols++;
}
return expression(number, symbols);
}
expression_t eval_term(const char* symbols) {
// term -> number | "(" expression ")"
if (*symbols == '(') {
expression_t exp = eval_expression(symbols + 1);
return expression(exp.value, exp.context + 1); // step over the closing ')'
}
return eval_number(symbols);
}
expression_t eval_product(const char* symbols) {
// product -> term | term "*" product
expression_t term = eval_term(symbols);
if (*term.context != '*')
return term;
expression_t product = eval_product(term.context + 1);
return expression(term.value * product.value, product.context);
}
expression_t eval_sum(const char* symbols) {
// sum -> product | product "+" sum
expression_t product = eval_product(symbols);
if (*product.context != '+')
return product;
expression_t sum = eval_sum(product.context + 1);
return expression(product.value + sum.value, sum.context);
}
expression_t eval_expression(const char* symbols) {
// expression -> sum
return eval_sum(symbols);
}
/* end: parsers */
int main() {
const char* expression = "1+11*5";
printf("eval(\"%s\") == %d\n", expression, eval_expression(expression).value);
return 0;
}
The following libraries might be useful:
yupana - strictly arithmetic operations;
tinyexpr - arithmetic operations + C math functions + ones provided by the user;
mpc - parser combinators.
Explanation
Let's capture the sequence of symbols that represents an algebraic expression.
The first one is a number, that is, a decimal digit repeated one or more times.
We will refer to such notation as a production rule.
number -> [0..9]+
The addition operator with its operands is another rule.
It is either a number or any symbols that represent the sum "+" sum sequence.
sum -> number | sum "+" sum
Try substituting number into sum "+" sum: that gives number "+" number, which in turn can be expanded into [0..9]+ "+" [0..9]+, which finally can be reduced to 1+8 - a correct addition expression.
Other substitutions will also produce correct expressions: sum "+" sum -> number "+" sum -> number "+" sum "+" sum -> number "+" sum "+" number -> number "+" number "+" number -> 12+3+5
Bit by bit we can assemble a set of production rules, a.k.a. a grammar, that expresses all possible algebraic expressions.
expression -> sum
sum -> difference | difference "+" sum
difference -> product | difference "-" product
product -> fraction | fraction "*" product
fraction -> term | fraction "/" term
term -> "(" expression ")" | number
number -> digit+
To control operator precedence, alter the position of its production rule relative to the others. Look at the grammar above and note that the production rule for * is placed below the one for +; this forces a product to be evaluated before a sum. For example, in 1+11*5 the product 11*5 is reduced first and only then added to 1.
The implementation just combines pattern recognition with evaluation and thus closely mirrors the production rules.
expression_t eval_product(const char* symbols) {
// product -> term | term "*" product
expression_t term = eval_term(symbols);
if (*term.context != '*')
return term;
expression_t product = eval_product(term.context + 1);
return expression(term.value * product.value, product.context);
}
Here we evaluate the term first and return it if there is no * character after it; this is the left choice in our production rule. Otherwise, we evaluate the symbols after it and return term.value * product.value; this is the right choice in our production rule, i.e. term "*" product.
Actually there's a way to do this without recursion, which allows you to go through the entire expression once, character by character. This is O(n) for time and space. It takes all of 5 milliseconds to run even for a medium-sized expression.
First, you'd want to do a check to ensure that your parens are balanced. I'm not doing it here for simplicity. Also, I'm acting as if this were a calculator. Calculators do not apply precedence unless you wrap an expression in parens.
I'm using two stacks, one for the operands and another for the operators. I increase the priority of the operation whenever I reach an opening '(' paren and decrease it whenever I reach a closing ')' paren. I've even revised the code to handle numbers with decimals. This is in C#.
NOTE: This doesn't work for signed numbers like negative numbers, but that would probably be a simple revision.
internal double Compute(string sequence)
{
int priority = 0;
int sequenceCount = sequence.Length;
for (int i = 0; i < sequenceCount; i++) {
char s = sequence[i];
if (Char.IsDigit(s)) {
double value = ParseNextNumber(sequence, i);
numberStack.Push(value);
i = i + value.ToString().Length - 1;
} else if (s == '+' || s == '-' || s == '*' || s == '/') {
Operator op = ParseNextOperator(sequence, i, priority);
CollapseTop(op, numberStack, operatorStack);
operatorStack.Push(op);
} else if (s == '(') { priority++; continue; }
else if (s == ')') { priority--; continue; }
}
if (priority != 0) { throw new ApplicationException("Parens not balanced"); }
CollapseTop(new Operator(' ', 0), numberStack, operatorStack);
if (numberStack.Count == 1 && operatorStack.Count == 0) {
return numberStack.Pop();
}
return 0;
}
Then to test this out:
Calculator c = new Calculator();
double value = c.Compute("89.8+((9*3)+8)+(9*2)+1");
Console.WriteLine(string.Format("The sum of the expression is: {0}", (float)value));
//prints out The sum of the expression is: 143.8
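Since ParseNextNumber, ParseNextOperator and CollapseTop are not shown, here is a self-contained JavaScript sketch of the same priority-offset idea (hypothetical names; four left-associative operators only, no error handling beyond the paren check):
function compute(sequence) {
  const base = { '+': 1, '-': 1, '*': 2, '/': 2 };
  const apply = { '+': (a, b) => a + b, '-': (a, b) => a - b, '*': (a, b) => a * b, '/': (a, b) => a / b };
  const numbers = [], operators = []; // operators hold { sign, prec }
  let depth = 0; // each '(' raises the effective precedence of the operators inside it
  for (let i = 0; i < sequence.length; i++) {
    const c = sequence[i];
    if (c >= '0' && c <= '9' || c === '.') { // scan a whole (possibly decimal) number
      let j = i;
      while (j < sequence.length && (sequence[j] >= '0' && sequence[j] <= '9' || sequence[j] === '.')) j++;
      numbers.push(parseFloat(sequence.slice(i, j)));
      i = j - 1;
    } else if (c in base) {
      const prec = base[c] + 10 * depth;
      while (operators.length && operators[operators.length - 1].prec >= prec) { // collapse the top
        const op = operators.pop(), r = numbers.pop(), l = numbers.pop();
        numbers.push(apply[op.sign](l, r));
      }
      operators.push({ sign: c, prec });
    } else if (c === '(') depth++;
    else if (c === ')') depth--;
  }
  if (depth !== 0) throw new Error('Parens not balanced');
  while (operators.length) { // final collapse
    const op = operators.pop(), r = numbers.pop(), l = numbers.pop();
    numbers.push(apply[op.sign](l, r));
  }
  return numbers.pop();
}
// compute("89.8+((9*3)+8)+(9*2)+1") returns 143.8, matching the C# test above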
Pure javascript, no dependencies needed
I very much like Bart's answer.
I made some modifications to make it easier to read, and also added support for some functions (easily extended):
function Parse(str) {
try {
return parseExpr(str.replaceAll(" ", "")) // Implement? See full code.
} catch (e) {
alert(e.message)
}
}
Parse("123.45+3*22*4")
It can support the following:
const testArray = [
// 👇 Basic Test
["(3+5)*4", ""],
["123.45+3*22*4", ""],
["8%2", ""],
["8%3", ""],
["7/3", ""],
["2*pi*e", 2 * Math.atan2(0, -1) * Math.exp(1)],
["2**3", ""],
// 👇 unary Test
["3+(-5)", ""],
["3+(+5)", ""],
// 👇 Function Test
["pow{2,3}*2", 16],
["4*sqrt{16}", 16],
["round{3.4}", 3],
["round{3.5}", 4],
["((1+e)*3/round{3.5})%2", ((1 + Math.exp(1)) * 3 / Math.round(3.5)) % 2],
["round{3.5}+pow{2,3}", Math.round(3.5)+Math.pow(2,3)],
]
Full code
// 👇 Main
(() => {
window.onload = () => {
const nativeConsoleLogFunc = window.console.error
window.console.error = (...data) => { // Override native function, just for test.
const range = document.createRange()
const frag = range.createContextualFragment(`<div>${data}</div>`)
document.querySelector("body").append(frag)
nativeConsoleLogFunc(...data)
}
// Add Enter event
document.querySelector(`input`).onkeyup = (keyboardEvent) => {
if (keyboardEvent.key === "Enter") {
const result = Parse(document.getElementById('expr').value)
if (result !== undefined) {
alert(result)
}
}
}
const testArray = [
// 👇 Basic Test
["(3+5)*4", ""],
["123.45+3*22*4", ""],
["8%2", ""],
["8%3", ""],
["7/3", ""],
["2*pi*e", 2 * Math.atan2(0, -1) * Math.exp(1)],
["2**3", ""],
// 👇 unary
["3+(-5)", ""],
["3+(+5)", ""],
// 👇 Function Test
["pow{2,3}*2", 16],
["4*sqrt{16}", 16],
["round{3.4}", 3],
["round{3.5}", 4],
["((1+e)*3/round{3.5})%2", ((1 + Math.exp(1)) * 3 / Math.round(3.5)) % 2],
["round{3.5}+pow{2,3}", Math.round(3.5) + Math.pow(2, 3)],
// 👇 error test
["21+", ValueMissingError],
["21+*", ParseError],
["(1+2", ParseError], // miss ")"
["round(3.12)", MissingParaError], // should be round{3.12}
["help", UnknownVarError],
]
for (let [testString, expected] of testArray) {
if (expected === "") {
expected = eval(testString) // Why don't you use eval instead of writing the function yourself? Because the browser may disable eval due to policy considerations. [CSP](https://content-security-policy.com/)
}
const actual = Parse(testString, false)
if (actual !== expected) {
if (actual instanceof Error && actual instanceof expected) {
continue
}
console.error(`${testString} = ${actual}, value <code>${expected}</code> expected`)
}
}
}
})()
// 👇 Script
class UnknownVarError extends Error {
}
class ValueMissingError extends Error {
}
class ParseError extends Error {
}
class MissingParaError extends Error {
}
/**
* @description Operator
* @param {string} sign "+", "-", "*", "/", ...
* @param {number} precedence
* @param {"L"|"R"} assoc associativity, left or right
* @param {function} exec
* */
function Op(sign, precedence, assoc, exec = undefined) {
this.sign = sign
this.precedence = precedence
this.assoc = assoc
this.exec = exec
}
const OpArray = [
new Op("+", 10, "L", (l, r) => l + r),
new Op("-", 10, "L", (l, r) => l - r),
new Op("*", 20, "L", (l, r) => l * r),
new Op("/", 20, "L", (l, r) => l / r),
new Op("%", 20, "L", (l, r) => l % r),
new Op("**", 30, "R", (l, r) => Math.pow(l, r))
]
const VarTable = {
e: Math.exp(1),
pi: Math.atan2(0, -1), // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/atan2
pow: (x, y) => Math.pow(x, y),
sqrt: (x) => Math.sqrt(x),
round: (x) => Math.round(x),
}
/**
* @param {Op} op
* @param {Number} value
* */
function Item(op, value = undefined) {
this.op = op
this.value = value
}
class Stack extends Array {
constructor(...items) {
super(...items)
this.push(new Item(new Op("", 0, "L")))
}
GetLastItem() {
return this[this.length - 1] // faster than pop() - https://stackoverflow.com/a/61839489/9935654
}
}
function Cursor(str, pos) {
this.str = str
this.pos = pos
this.MoveRight = (step = 1) => {
this.pos += step
}
this.PeekRightChar = (step = 1) => {
return this.str.substring(this.pos, this.pos + step)
}
/**
* @return {Op}
* */
this.MoveToNextOp = () => {
const opArray = OpArray.sort((a, b) => b.precedence - a.precedence)
for (const op of opArray) {
const sign = this.PeekRightChar(op.sign.length)
if (op.sign === sign) {
this.MoveRight(op.sign.length)
return op
}
}
return null
}
}
/**
* @param {Cursor} cursor
* */
function parseVal(cursor) {
let startOffset = cursor.pos
const regex = /^(?<OpOrVar>[^\d.])?(?<Num>[\d.]*)/g
const m = regex.exec(cursor.str.substr(startOffset))
if (m) {
const {groups: {OpOrVar, Num}} = m
if (OpOrVar === undefined && Num) {
cursor.pos = startOffset + Num.length
if (cursor.pos > startOffset) {
return parseFloat(cursor.str.substring(startOffset, cursor.pos)) // do not use string.substr(); it is deprecated: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Deprecated_and_obsolete_features#string_methods
}
}
if ("+-(".indexOf(OpOrVar) !== -1) {
cursor.pos++
switch (OpOrVar) {
case "+": // unary plus, for example: (+5)
return parseVal(cursor)
case "-":
return -(parseVal(cursor))
case "(":
const value = parseExpr(cursor)
if (cursor.PeekRightChar() === ")") {
cursor.MoveRight()
return value
}
throw new ParseError("Parsing error: ')' expected")
}
}
}
// 👇 below is for Variable or Function
const match = cursor.str.substring(cursor.pos).match(/^[a-z_][a-z0-9_]*/i) // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/match
if (match) {
// 👇 Variable
const varName = match[0]
cursor.MoveRight(varName.length)
const bracket = cursor.PeekRightChar(1)
if (bracket !== "{") {
if (varName in VarTable) {
const val = VarTable[varName]
if (typeof val === "function") {
throw new MissingParaError(`${varName} is a function, it needs big curly brackets`)
}
return val
}
}
// 👇 is function
const regex = /{(?<Para>[^{]*)}/gm
const m = regex.exec(cursor.str.substring(cursor.pos))
if (m && m.groups.Para !== undefined) {
const paraString = m.groups.Para
const para = paraString.split(',')
cursor.MoveRight(paraString.length + 2) // 2 = { + }
return VarTable[varName](...para)
}
throw new UnknownVarError(`unknown variable ${varName}`)
}
// 👇 Handle Error
if (cursor.str.length === cursor.pos) { // example: 1+2+
throw new ValueMissingError(`Parsing error at end of string: value expected.`)
} else { // example: 1+2+*
throw new ParseError("Parsing error: unrecognized value")
}
}
/**
* @param {string|Cursor} expr
* */
function parseExpr(expr) {
const stack = new Stack()
const cursor = (expr instanceof Cursor) ? expr : new Cursor(expr, 0)
while (1) {
let rightValue = parseVal(cursor)
const op = cursor.MoveToNextOp() ?? new Op("", 0, "L")
while (
op.precedence < stack.GetLastItem().op.precedence ||
(op.precedence === stack.GetLastItem().op.precedence && op.assoc === 'L')) {
const lastItem = stack.pop()
if (!lastItem.op.exec) { // end reached
return rightValue
}
rightValue = lastItem.op.exec(lastItem.value, rightValue)
}
stack.push(new Item(op, rightValue))
}
}
function Parse(str, alertError = true) {
try {
return parseExpr(str.replaceAll(" ", ""))
} catch (e) {
if (alertError) {
alert(e.message)
return undefined
}
return e
}
}
<input type="text" id="expr" name="expr" placeholder="123.45+3*22*4">
<button onclick="const x = Parse(document.getElementById('expr').value); if(x != null) alert(x);">
Calculate!
</button>
