By use I mean its use in many calculators, like the HP-35.
My guesses (and confusions) are:
postfix is actually more memory efficient (see the comments on this SO post) (confusion: the evaluation algorithms for both are similar, each using a stack);
the keyboard input style of calculators back then (confusion: this shouldn't have mattered much, since it only determines whether operators are entered first or last).
Another way to ask this question: what advantages does postfix notation have over prefix?
Can anyone enlighten me?
For one thing, evaluation is easier to implement.
With prefix, if you push an operator, then its operands, you need to have forward knowledge of when the operator has all its operands. Basically you need to keep track of when operators you've pushed have all their operands so that you can unwind the stack and evaluate.
Since a complex expression will likely end up with many operators on the stack you need to have a data structure that can handle this.
For instance, this expression: - + 10 20 + 30 40 will have one - and one + on the stack at the same time, and for each you need to know if you have the operands available.
With postfix, when you read an operator, the operands are already (or should be) on the stack; simply pop the operands and evaluate. You only need a stack that can hold operands, and no other data structure is necessary.
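To make that concrete, here is a minimal postfix evaluator in Haskell (my own sketch, not part of the original answer): the whole machine is a fold over the tokens carrying nothing but the operand stack.

-- The only data structure is the operand stack, threaded through a
-- left fold over the tokens. Assumes well-formed input with integer
-- operands and binary + - * / only.
evalPostfix :: String -> Int
evalPostfix = head . foldl step [] . words
  where
    step (y:x:rest) "+" = (x + y) : rest
    step (y:x:rest) "-" = (x - y) : rest
    step (y:x:rest) "*" = (x * y) : rest
    step (y:x:rest) "/" = (x `div` y) : rest
    step stack      tok = read tok : stack   -- an operand: just push it

-- The prefix example above, - + 10 20 + 30 40, written in postfix:
-- evalPostfix "10 20 + 30 40 + -"  ==>  -40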
Prefix notation is probably used more commonly ... in mathematics, in expressions like F(x,y). It's a very old convention, but like many old systems (feet and inches, letter paper) it has drawbacks compared to what we could do if we used a more thoughtfully designed system.
Just about every first year university math textbook has to waste a page at least explaining that f(g(x)) means we apply g first then f. Doing it in reading order makes so much more sense: x.f.g means we apply f first. Then if we want to apply h "after" we just say x.f.g.h.
As an example, consider an issue in 3d rotations that I recently had to deal with. We want to rotate a vector according to XYZ convention. In postfix, the operation is vec.rotx(phi).roty(theta).rotz(psi). With prefix, we have to overload * or () and then reverse the order of the operations, e.g., rotz*roty*rotx*vec. That is error prone and irritating to have to think about that all the time when you want to be thinking about bigger issues.
For example, I saw something like rotx*roty*rotz*vec in someone else's code and I didn't know whether it was a mistake or an unusual ZYX rotation convention. I still don't know. The program worked, so it was internally self-consistent, but in this case prefix notation made it hard to maintain.
Another issue with prefix notation is that when we (or a computer) parse the expression f(g(h(x))), we have to hold f in our memory (or on the stack), then g, then h, and only then can we apply h to x, then g to the result, then f to that result. That's too much to keep in memory compared to x.f.g.h. At some point (for humans, much sooner than for computers) we will run out of memory. Failure in that way is not common, but why even open the door to it when x.f.g.h requires no short-term memory? It's like the difference between recursion and looping.
And another thing: f(g(h(x))) has so many parentheses that it's starting to look like Lisp. Postfix notation is unambiguous when it comes to operator precedence.
Some mathematicians (in particular Nathan Jacobson) have tried changing the convention, because postfix is so much easier to work with in noncommutative algebra, where order really matters, but to little avail. But since we have a chance to do things over, better, in computing, we should take the opportunity.
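Incidentally, if you want this reading-order style today, Haskell's Data.Function exports (&), reverse function application, which gives exactly the x.f.g.h pipeline shape (the functions below are arbitrary, chosen only for the illustration):

import Data.Function ((&))

-- x & f & g applies f first, then g: reading order, like x.f.g above.
pipeline :: Int
pipeline = 3 & (+1) & (*2) & subtract 5   -- ((3 + 1) * 2) - 5  ==>  3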
Basically, because if you write the expression in postfix, you can evaluate it using just a stack:
1. Read the next element of the expression.
2. If it is an operand, push it onto the stack.
3. Otherwise, pop the operands required by the operation from the stack, evaluate the operation, and push the result onto the stack.
4. If not at the end of the expression, go to 1.
Example
expression = 1 2 + 3 4 + *
stack = [ ]
Read 1, 1 is Operand, Push 1
[ 1 ]
Read 2, 2 is Operand, Push 2
[ 1 2 ]
Read +, + is Operation, Pop two Operands 1 2
Evaluate 1 + 2 = 3, Push 3
[ 3 ]
Read 3, 3 is Operand, Push 3
[ 3 3 ]
Read 4, 4 is Operand, Push 4
[ 3 3 4 ]
Read +, + is Operation, Pop two Operands 3 4
Evaluate 3 + 4 = 7, Push 7
[ 3 7 ]
Read *, * is Operation, Pop two Operands 3 7
Evaluate 3 * 7 = 21, Push 21
[ 21 ]
If you like your human reading order to match the machine's stack-based evaluation order, then postfix is a good choice.
That is, assuming you read left-to-right, which not everyone does (e.g. Hebrew, Arabic, ...). And assuming your machine evaluates with a stack, which not all do (e.g. term rewriting - see Joy).
On the other hand, there's nothing wrong with the human preferring prefix while the machine evaluates "back to front/bottom-to-top". Serialization could be reversed too if the concern is evaluation as tokens arrive. Tool assistance may work better in prefix notation (knowing functions/words first may help scope valid arguments), but you could always type right-to-left.
It's merely a convention I believe...
Offline evaluation of both notations is the same on a theoretical machine.
(Eager evaluation strategy) Evaluating with only one stack (without putting operators on the stack)
It can be done by evaluating Prefix-notation right-to-left.
- 7 + 2 3
# evaluate + 2 3
- 7 5
# evaluate - 7 5
2
It is the same as evaluating Postfix-notation left-to-right.
7 2 3 + -
# put 7 on stack
7 2 3 + -
# evaluate 2 3 +
7 5 -
# evaluate 7 5 -
2
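The symmetry is almost literal in code. A sketch (mine, not the original answer's): prefix consumed right-to-left is a right fold with the very same single-stack step that postfix consumed left-to-right uses as a left fold.

-- One stack of operands, no operator storage; assumes a well-formed
-- expression of integers and binary + and -.
evalPrefixRL :: String -> Int
evalPrefixRL = head . foldr step [] . words
  where
    step "+" (x:y:rest) = (x + y) : rest
    step "-" (x:y:rest) = (x - y) : rest
    step tok stack      = read tok : stack

-- evalPrefixRL "- 7 + 2 3"  ==>  2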
(Optimized short-circuit strategy) Evaluating with two stacks (one for operators and one for operands)
It can be done by evaluating Prefix-notation left-to-right.
|| 1 < 2 3
# put || in instruction stack, 1 in operand stack or keep the pair in stack
instruction-stack: or
operand-stack: 1
< 2 3
# push < 2 3 in stack
instruction-stack: or, less_than
operand-stack: 1, 2, 3
# evaluate < 2 3 as 1
instruction-stack: or
operand-stack: 1, 1
# evaluate || 1 1 as 1
operand-stack: 1
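Here's a sketch of this left-to-right prefix strategy (my own illustration, reusing the earlier arithmetic example rather than the boolean one). It implements the "keep the pair in stack" variant mentioned above: each stack frame holds a pending operator together with the operand it has collected so far.

-- Each frame is (pending operator, first operand if seen). A finished
-- value is fed upward, completing frames as it goes. Assumes
-- well-formed input with binary integer operators.
evalPrefixLR :: [String] -> Int
evalPrefixLR = go []
  where
    go stack (t:ts)
      | isOp t    = go ((t, Nothing) : stack) ts
      | otherwise = case feed (read t) stack of
                      Left v       -> v            -- expression complete
                      Right stack' -> go stack' ts
    go _ [] = error "malformed prefix expression"

    feed v []                   = Left v
    feed v ((op, Nothing) : fs) = Right ((op, Just v) : fs)
    feed v ((op, Just x)  : fs) = feed (apply op x v) fs

    isOp = (`elem` ["+", "-", "*"])
    apply "+" = (+)
    apply "-" = (-)
    apply _   = (*)

-- evalPrefixLR (words "- 7 + 2 3")  ==>  2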
Notice that here we can easily do short-circuit optimization for the boolean expression (compared to the previous evaluation sequence).
|| 1 < 2 3
# put || in instruction stack, 1 in operand stack or keep the pair in stack
instruction-stack: or
operand-stack: 1
< 2 3
# Is it possible to evaluate `|| 1` without evaluating the rest ? Yes !!
# skip < 2 3 and put place-holder 0
instruction-stack: or
operand-stack: 1 0
# evaluate || 1 0 as 1
operand-stack: 1
It is the same as evaluating Postfix-notation right-to-left.
(Optimized short-circuit strategy) Evaluating with one stack that takes a tuple (same as above)
It can be done by evaluating Prefix-notation left-to-right.
|| 1 < 2 3
# put || 1 in tuple-stack
stack tuple[or,1,unknown]
< 2 3
# We do not need to compute < 2 3
stack tuple[or,1,unknown]
# evaluate || 1 unknown as 1
1
It is the same as evaluating Postfix-notation right-to-left.
Online evaluation in a calculator while a human enters data left-to-right
When putting numbers into a calculator, the Postfix-notation 2 3 + can be evaluated instantly, without any knowledge of the symbol the human is going to enter next. It is the opposite with Prefix notation: when we have - 7 +, we have nothing to do until we get something like - 7 + 2 3.
Online evaluation in a calculator while a human enters data right-to-left
Now the Prefix-notation can evaluate + 2 3 instantly, while the Postfix-notation waits for further input when it has 3 + -.
Please refer to AshleyF's note above that the Arabic language is written right-to-left, in contrast to English, which is written left-to-right!
I guess little-endian versus big-endian byte order is somewhat related to this prefix/postfix distinction.
One final comment: Reverse Polish notation was strongly advocated by Dijkstra (a strong opponent of short-circuit evaluation, and the inventor of the shunting-yard algorithm that converts infix to RPN). It is your choice whether to support his opinion or not (I do not).
Related
Can someone tell me an algorithm, or steps to follow, for converting an infix expression to a prefix expression without using a stack, an array, or any programming language or implementation? Just a simple human algorithm, for non-CS students.
If anyone has a better algorithm or steps, please specify, and also please try to solve it for me... :)
(5+15/3)^2-(8*3/3*4/5*32/5+42)*(3*3/3*5/4)
The "simple human algorithm" uses a stack. Consider the Shunting Yard, for example. You can do that with paper and pencil. The "output queue" is simply the solution that you output. The "stack" is just a holding place. So when it says, "push onto the stack", imagine putting that value on the top of a stack of other values. When it says, "pop from the stack," imagine removing the thing that was on top.
When doing it with pencil and paper, dedicate a couple of lines at the bottom of the page for your output queue. Create a column on the right side of the page as your stack. Wherever it says, "write it to the output queue", write that value as the next value on your answer line.
The first time it says, "push onto the stack", write that value in the stack column, at the bottom. If you have to push something else, write it above that value. When it says "pop from the stack," erase the top value from your stack column, freeing up a space.
That really is the simplest reliable way to do things by hand.
I'll use the first bit of your example for a demonstration. Let's say you want to convert (5+15/3)^2 to postfix. Using the instructions in the Shunting Yard article:
Your output queue is empty and so is your stack. The first token is (. The instructions say to push it onto the stack. So we have:
output queue:
stack: (
The next token is 5. It goes to the output queue:
output queue: 5
stack: (
Next is +. Since there is no operator on the top of the stack (the open paren doesn't count), we just push it:
output queue: 5
stack: ( +
Next is 15. It goes to the output queue
output queue: 5 15
stack: ( +
Next is /. It's an operator and there's an operator on the stack, but / has higher precedence than +. So according to the rules, we push / onto the stack:
output queue 5 15
stack: ( + /
Next is 3. It goes to the output queue:
output queue 5 15 3
stack: ( + /
Next is ). The rules say to start popping operators from the stack until we get to the open parenthesis. Or, if we empty the stack and there's no open paren, then we have mismatched parentheses. Anyway, popping the stack and adding to the output queue:
output queue: 5 15 3 / +
stack: <empty>
Next token is ^. There are no operators on the stack, so we push it.
output queue: 5 15 3 / +
stack: ^
Finally, we have 2. It goes to the output queue:
output queue: 5 15 3 / + 2
stack: ^
And we're at the end of the string, so we pop all the operators and put them on the output queue:
output queue: 5 15 3 / + 2 ^
And that's the postfix representation of (5 + 15/3)^2.
The only tricky part is getting the operator precedence right. Basically, exponentiation is highest. Multiplication and division come next, at equal precedence, then addition and subtraction at equal precedence. If those are the only operators, it's easy to remember. Otherwise you'll probably want a table of operator precedence handy so you can refer to it while working the algorithm. And unary minus (e.g., 5 + -1) requires a special case. But really, that's all there is to it.
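If you'd like to check your hand-worked conversions, here is a compact sketch of the same algorithm in Haskell (my own illustration, not part of the walkthrough above). It handles only binary + - * / ^ and single-token numbers, and it assumes balanced parentheses:

-- Shunting-yard sketch: converts an infix token list to postfix.
-- The operator stack `ops` has its top at the head of the list.
shuntingYard :: [String] -> [String]
shuntingYard = go []
  where
    go ops []     = ops                     -- end of input: flush the stack
    go ops (t:ts)
      | t == "("  = go (t : ops) ts
      | t == ")"  = let (before, _:after) = break (== "(") ops
                    in before ++ go after ts    -- pop down to the open paren
      | isOp t    = let (popped, rest) = span (\o -> isOp o && pops t o) ops
                    in popped ++ go (t : rest) ts
      | otherwise = t : go ops ts           -- operand: straight to the output
    isOp t = t `elem` ["+", "-", "*", "/", "^"]
    prec t = case t of "^" -> 3; "*" -> 2; "/" -> 2; _ -> 1
    -- pop while the stacked operator binds tighter, or equally tight and
    -- the incoming operator is left-associative (everything but ^ here)
    pops t o = prec o > prec t || (prec o == prec t && t /= "^")

-- shuntingYard (words "( 5 + 15 / 3 ) ^ 2")
--   ==>  ["5","15","3","/","+","2","^"]   -- matches the trace above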
I discovered the "time" command in unix today and thought I'd use it to check the difference in runtimes between tail-recursive and normal recursive functions in Haskell.
I wrote the following functions:
--tail recursive
fac :: (Integral a) => a -> a
fac x = fac' x 1
  where
    fac' 1 y = y
    fac' x y = fac' (x-1) (x*y)

--normal recursive
facSlow :: (Integral a) => a -> a
facSlow 1 = 1
facSlow x = x * facSlow (x-1)
These are valid, keeping in mind they were solely for use with this project, so I didn't bother to check for zeroes or negative numbers.
However, upon writing a main method for each, compiling them, and running them with the "time" command, both had similar runtimes, with the normal recursive function edging out the tail-recursive one. This is contrary to what I'd heard about tail-call optimization in Lisp. What's the reason for this?
Haskell uses lazy evaluation to implement recursion, so it treats anything as a promise to provide a value when needed (this is called a thunk). Thunks get reduced only as much as necessary to proceed, no more. This resembles the way you simplify an expression mathematically, so it's helpful to think of it that way. The fact that evaluation order is not specified by your code allows the compiler to do lots of even cleverer optimisations than just the tail-call elimination you're used to. Compile with -O2 if you want optimisation!
Let's see how we evaluate facSlow 5 as a case study:
facSlow 5
5 * facSlow 4 -- Note that the `5-1` only got evaluated to 4
5 * (4 * facSlow 3) -- because it has to be checked against 1 to see
5 * (4 * (3 * facSlow 2)) -- which definition of `facSlow` to apply.
5 * (4 * (3 * (2 * facSlow 1)))
5 * (4 * (3 * (2 * 1)))
5 * (4 * (3 * 2))
5 * (4 * 6)
5 * 24
120
So just as you worried, we have a build-up of numbers before any calculations happen, but unlike what you worried about, there's no stack of facSlow function calls hanging around waiting to terminate - each reduction is applied and goes away, leaving a stack frame in its wake (that is because (*) is strict and so triggers the evaluation of its second argument).
Haskell's recursive functions aren't evaluated in a very recursive way! The only stack of calls hanging around are the multiplications themselves. If (*) is viewed as a strict data constructor, this is what's known as guarded recursion (although it is usually referred to as such with non-strict data constructors, where what's left in its wake are the data constructors - when forced by further access).
Now let's look at the tail-recursive fac 5:
fac 5
fac' 5 1
fac' 4 {5*1} -- Note that the `5-1` only got evaluated to 4
fac' 3 {4*{5*1}} -- because it has to be checked against 1 to see
fac' 2 {3*{4*{5*1}}} -- which definition of `fac'` to apply.
fac' 1 {2*{3*{4*{5*1}}}}
{2*{3*{4*{5*1}}}} -- the thunk "{...}"
(2*{3*{4*{5*1}}}) -- is retraced
(2*(3*{4*{5*1}})) -- to create
(2*(3*(4*{5*1}))) -- the computation
(2*(3*(4*(5*1)))) -- on the stack
(2*(3*(4*5)))
(2*(3*20))
(2*60)
120
So you can see how the tail recursion by itself hasn't saved you any time or space. Not only does it take more steps overall than facSlow 5, it also builds a nested thunk (shown here as {...}), needing extra space for it, which describes the future computation: the nested multiplications to be performed.
This thunk is then unraveled by traversing it to the bottom, recreating the computation on the stack. There is also a danger here of causing stack overflow with very long computations, for both versions.
If we want to hand-optimise this, all we need to do is make it strict. You could use the strict application operator $! to define
facSlim :: (Integral a) => a -> a
facSlim x = facS' x 1
  where
    facS' 1 y = y
    facS' x y = facS' (x-1) $! (x*y)
This forces facS' to be strict in its second argument. (It's already strict in its first argument because that has to be evaluated to decide which definition of facS' to apply.)
Sometimes strictness can help enormously, sometimes it's a big mistake because laziness is more efficient. Here it's a good idea:
facSlim 5
facS' 5 1
facS' 4 5
facS' 3 20
facS' 2 60
facS' 1 120
120
Which is what you wanted to achieve I think.
Summary
If you want to optimise your code, step one is to compile with -O2
Tail recursion is only good when there's no thunk build-up, and adding strictness usually helps to prevent it, if and where appropriate. This happens when you're building a result that is needed later on all at once.
Sometimes tail recursion is a bad plan and guarded recursion is a better fit, i.e. when the result you're building will be needed bit by bit, in portions. See this question about foldr and foldl for example, and test them against each other.
Try these two:
length $ foldl1 (++) $ replicate 1000 "The size of intermediate expressions is more important than tail recursion."
length $ foldr1 (++) $ replicate 1000 "The number of reductions performed is more important than tail recursion!!!"
foldl1 is tail recursive, whereas foldr1 performs guarded recursion so that the first item is immediately presented for further processing/access. (The first "parenthesizes" to the left at once, (...((s+s)+s)+...)+s, forcing its input list fully to its end and building a big thunk of future computation much sooner than its full results are needed; the second parenthesizes to the right gradually, s+(s+(...+(s+s)...)), consuming the input list bit by bit, so the whole thing is able to operate in constant space, with optimizations).
You might need to adjust the number of zeros depending on what hardware you're using.
It should be mentioned that the fac function is not a good candidate for guarded recursion. Tail recursion is the way to go here. Due to laziness you are not getting the effect of TCO in your fac' function, because the accumulator argument keeps building a large thunk, which, when evaluated, will require a huge stack. To prevent this and get the desired effect of TCO, you need to make the accumulator argument strict.
{-# LANGUAGE BangPatterns #-}

fac :: (Integral a) => a -> a
fac x = fac' x 1
  where
    fac' 1 y = y
    fac' x !y = fac' (x-1) (x*y)
If you compile using -O2 (or just -O) GHC will probably do this on its own in the strictness analysis phase.
You should check out the wiki article on tail recursion in Haskell. In particular, because of expression evaluation, the kind of recursion you want is guarded recursion. If you work out the details of what's going on under the hood (in the abstract machine for Haskell) you get the same kind of thing as with tail recursion in strict languages. Along with this, you have a uniform syntax for lazy functions (tail recursion will tie you to strict evaluation, whereas guarded recursion works more naturally).
(And in learning Haskell, the rest of those wiki pages are awesome, too!)
If I recall correctly, GHC automatically optimizes some plain recursive functions into tail-recursive equivalents.
Recently I had an exam where we were tested on logic circuits. I encountered something on that exam that I had never seen before. Forgive me, for I do not remember the exact problem given, and we have not received our grades for it; however, I will describe the problem.
The problem had 3 or 4 inputs. We were told to simplify, then draw a logic circuit for the simplified design. However, when I simplified, I ended up eliminating the other inputs and was left with literally just
A
I had another problem like this as well, where there were 4 inputs, and when I simplified I ended up with three. My question is:
What do I do with the eliminated inputs? Do I just not have it on the circuit? How would I draw it?
Typically an output is a requirement which would not be eliminated, even if it ends up being dependent on a single input. If input A flows through to output Y, just connect A to Y in the diagram. If output Y is always 0 or 1, connect an incoming 0 or 1 to output Y.
On the other hand, inputs are possible, not required, factors in the definition of the problem. Inputs that have no bearing on the output need not be shown in the circuit diagram at all.
Apparently it is not about eliminating inputs; the resulting expression is the simplified outcome, and that is what you need to implement as a logic circuit.
As an example, if you have an expression with 3 inputs, some combination of A, B & C, there are 2^3 = 8 possible input combinations, 000 through 111. When you say your simplification led to just A, that means that for each of those 8 combinations the output equals whatever value A holds. For instance, A·B + A·B' simplifies to A·(B + B') = A, eliminating B entirely.
A truth table for an example Boolean expression that simplifies to the output A is as follows:
A B | Output = A
------------------
0 0 | 0
0 1 | 0
1 0 | 1
1 1 | 1
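If you want to double-check a simplification like this, it's cheap to compare the original expression against the simplified one over every input combination. A small sketch in Haskell (the `original` expression here is a made-up example that happens to simplify to A):

-- Exhaustive equivalence check over all input combinations.
original, simplified :: Bool -> Bool -> Bool
original   a b = (a && b) || (a && not b)   -- A·B + A·B'  =  A·(B + B')  =  A
simplified a _ = a

agree :: Bool
agree = and [ original a b == simplified a b
            | a <- [False, True], b <- [False, True] ]   -- ==> True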
Here's the problem:
Given 4 numbers, I need to produce a calculation process that results in 24. The only operations I can use are addition, subtraction, multiplication, and division. How do I print the calculation process?
Ex:
Input: 4,7,8,8
Output: (7-(8/8))*4=24.
(The following is an expansion on an idea suggested by Sayakiss)
One option would be enumerating all possible combinations of numbers and arithmetic operations performed on them.
If you have 4 numbers, there are only 4! = 24 different ways to write them in a list (the following example is for the numbers 4, 7, 8, 9 - I changed the last number in your example to make them all different):
4 7 8 9
4 7 9 8
4 8 7 9
4 8 9 7
...
9 8 7 4
If some numbers are identical, some of the above lists will appear twice (not a problem).
For each of the above orderings, there are 4^3 = 64 different ways to fill the three slots between the numbers with arithmetic operations:
4+7+8+9
4+7+8-9
4+7+8*9
4+7+8/9
4+7-8+9
...
4/7/8/9
For each of the above sequences, there are 5 ways to place parentheses (one for each shape of binary tree with four leaves):
((4-7)-8)-9
(4-7)-(8-9)
(4-(7-8))-9
4-((7-8)-9)
4-(7-(8-9))
When you combine all 3 "aspects" mentioned above, you get 24 * 64 * 5 = 7680 expressions; evaluate each one and check whether its value is 24 (or whatever number you need it to be).
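A compact sketch of that enumeration (mine, not the original answerer's): splitting the number list at every position and recursing on both halves enumerates all parenthesisations, so the five patterns never need to be listed explicitly.

import Data.List (permutations)

-- Every (value, printed form) reachable from a list of numbers: split
-- the list at each point, solve both halves recursively, and combine
-- the results with each operator.
exprs :: [Double] -> [(Double, String)]
exprs [x] = [(x, show x)]
exprs xs  = [ (v, "(" ++ sl ++ [op] ++ sr ++ ")")
            | k <- [1 .. length xs - 1]
            , let (ls, rs) = splitAt k xs
            , (l, sl) <- exprs ls
            , (r, sr) <- exprs rs
            , (op, v) <- combine l r ]
  where
    combine l r = [('+', l + r), ('-', l - r), ('*', l * r)]
               ++ [('/', l / r) | r /= 0]   -- skip division by zero

solve24 :: [Double] -> [String]
solve24 nums = [ s | p <- permutations nums
                   , (v, s) <- exprs p
                   , abs (v - 24) < 1e-9 ]  -- tolerate rounding from division

-- solve24 [4,7,8,8] includes "(4.0*(7.0-(8.0/8.0)))" among its results.

This builds printable strings directly; the tree representation discussed next is a cleaner choice if you need to do more than print the expression.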
It may be convenient to generate the expressions in tree form to simplify evaluation (this depends on the programming language you want to use; e.g., in C/C++ there is no eval function). For example, the expression 4*((7-8)+9) may be represented by the following tree:
    *
   / \
  4   +
     / \
    -   9
   / \
  7   8
Some notes:
You may want to tweak the choice of arithmetic operations to allow for expressions like 47+88 - not sure whether the rules of your game permit that.
Many of the evaluated expressions may be annoyingly verbose, like ((4+7)+8)+8 and 4+(7+(8+8)) (which are also examined twice, with the order of the 8's switched); you could prevent that by inserting some dedicated checks into your algorithm.
My understanding of calculators is that they are stack-based. When you use most calculators, if you type 1 + 2 [enter] [enter] you get 5. 1 is pushed on the stack, + is the operator, then 2 is pushed on the stack. The 1st [enter] should pop 1 and 2 off the stack, add them to get 3 then push 3 back on the stack. The 2nd [enter] shouldn't have access to the 2 because it effectively doesn't exist anywhere.
How is the 2 retained so that the 2nd [enter] can use it?
Is 2 pushed back onto the stack before the 3 or is it retained somewhere else for later use? If it is pushed back on the stack, can you conceivably cause a stack overflow by repeatedly doing [operator] [number] [enter] [enter]?
Conceptually, in hardware these values are put into registers. In a simple ALU (Arithmetic Logic Unit, the core of a simple CPU), one of the registers serves as an accumulator. The values you're discussing could be put on a stack to process, but once the stack is empty, the last operand and operation may be cached in these registers. When told to perform the operation again, the machine uses the accumulator together with the last argument.
For example,
           Reg1   Reg2 (Accumulator)   Operator
Input 1           1
Input +           1                    +
Input 2    2      1                    +
Enter      2      3                    +
Enter      2      5                    +
Enter      2      7                    +
So it may be a function of the hardware being used.
The only truly stack-based calculators are those that use Reverse Polish Notation as their input method, as that notation operates directly on a stack.
All you would need to do is retain the last operator and operand, and just apply them if the stack is empty.
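Here's a toy model of that "repeat the last operation" behaviour in Haskell (my own sketch; real calculator firmware will differ). The state keeps the accumulator plus the last operator and operand, so Enter can reapply them with no stack at all:

-- State: accumulator, plus the most recent operator and operand.
data Calc = Calc
  { acc     :: Double
  , lastOp  :: Double -> Double -> Double
  , lastArg :: Double
  }

-- Enter reapplies the remembered operation to the accumulator.
enter :: Calc -> Calc
enter c = c { acc = lastOp c (acc c) (lastArg c) }

-- 1 + 2 [enter] [enter]  ==>  5
example :: Double
example = acc (enter (enter (Calc { acc = 1, lastOp = (+), lastArg = 2 })))

Since nothing is pushed on repeated [enter], this scheme also can't overflow a stack, which answers the last part of the question.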
There is an excellent description and tutorial of the Shunting-yard algorithm (infix -> RPN conversion) on Wikipedia.