I am a new student in the compilers world ^_^ and I want to know whether it is legal to represent a negative number on the stack.
For example:
infix: 1-5=-4 postfix: 15-
The statements are:
push(1)
push(5)
x=pop()
y=pop()
t=sub(y,x)
push(t)
The final result in the stack will be (-4)
How can I represent this, if it is legal?
Thank you ^_^
Yes. Negative numbers are stored in two's complement form in memory, so you don't need an additional cell on the stack for the sign.
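For example (a quick illustration in Ruby; masking to 8 bits is just to make the pattern visible, not something the stack machine needs):

(-4 & 0xFF).to_s(2)   # => "11111100"  (two's complement bit pattern of -4 in 8 bits)
4.to_s(2)             # => "100"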
In case you are referring to representing the expression textually (perhaps in a file that is read in by your program), then you would probably define some syntactic rules for your expression - say, separation of tokens by whitespace.
For example, is 444/ in postfix the same as (4 / 44) or (44 / 4) or (4 / (4 / 4)) in infix? You would need some way of separating multi-digit numbers.
Now, assuming you decide on whitespace, you could make a rule that a negative integer is a minus sign followed by a series of digits, without any separating whitespace.
So the infix expression '-1 * (3 ^ (-4)) - 7' could become '-1 3 -4 ^ * 7 -'
Is this what you were looking for?
PS - With a proper parser, you could actually do it without whitespace for operators, but you still need to separate operands from each other.
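As a minimal sketch (in Ruby, with names of my own choosing - not the asker's code), a whitespace-tokenized postfix evaluator that accepts negative literals under the rule above could look like this:

# Sketch: evaluate whitespace-separated postfix. A '-' glued to digits is a
# negative literal ("-4"); a lone '-' is the binary subtraction operator.
def eval_postfix(expr)
  stack = []
  expr.split.each do |tok|
    if tok =~ /\A-?\d+\z/                 # integer literal, possibly negative
      stack.push(tok.to_i)
    else
      b = stack.pop
      a = stack.pop
      stack.push(case tok
                 when '+' then a + b
                 when '-' then a - b
                 when '*' then a * b
                 when '/' then a / b
                 when '^' then a**b
                 else raise ArgumentError, "unknown token #{tok}"
                 end)
    end
  end
  stack.pop
end

eval_postfix("1 5 -")   # => -4, held in a single stack slot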
If you talk about a stack, you are talking about an abstract data type. As long as you have push/pop functionality, it makes no difference what you put on the stack.
First, note that there is a difference between the dash '-' used as the subtraction operator and the dash used as the negative sign. Though we use the same character, the two have different meanings.
Both positive and negative integers, like -4, take only a single slot in the stack.
If your postfix language can only take single-digit integers and the arithmetic operators, you can represent negative numbers by subtracting from zero:
04-2+
This is equivalent in infix notation to
0-4+2
Here is some terminology: the subtraction operation is a "binary operator," that is, it takes two operands; the negative sign is a "unary operator," that is, it takes one operand. Infix and postfix are notations for binary operators and their operands.
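As a small illustration (a Ruby sketch with assumed names, not part of the original answer), such a single-digit postfix language can be evaluated character by character, and the subtract-from-zero trick indeed leaves a negative value on the stack:

# Sketch: evaluate postfix restricted to single digits and +, -, *, /.
def eval_single_digit_postfix(expr)
  stack = []
  expr.each_char do |c|
    if c =~ /\d/
      stack.push(c.to_i)
    else
      b = stack.pop
      a = stack.pop
      stack.push(a.send(c, b))   # dispatch '+', '-', '*', '/' as methods
    end
  end
  stack.pop
end

eval_single_digit_postfix("04-2+")   # => -2, i.e. (0 - 4) + 2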
Related
A <= 3 * B;
In the above statement, is 3 an integer or a natural number? If it's a natural number, what if I use a negative number there? Does VHDL recognize it as an integer?
Integer literals are of the anonymous predefined type universal_integer. They are implicitly converted to the required (sub-)type, e.g. integer or natural, for your operator *. See also IEEE Std. 1076-2008, para. 5.2.3.1.
Thus, if you specify the term -3, this is parsed as a simple expression composed of the minus sign - and the abstract (decimal) literal 3. The number 3 will be of type universal_integer, and after applying the sign operator it is still of the same type. (Thanks to @user1155120 for the clarification.)
After that, the conversion of the expression -3 will fail if your operator requires a natural.
Same applies for floating-point literals which are of the anonymous predefined type universal_real, see also para. 5.2.5.1.
Operands of any integer type can be converted to any floating point type and vice versa. The conversion from floating point to integer takes place using rounding to the nearest integer. Floating point values with a fractional part of 0.5 are either rounded up or down, see also para. 9.3.6.
I encountered this problem in a programming contest:
Given an expression x1 op x2 op x3 op ... op xn, where each op is either addition '+' or multiplication '*' and each xi is a digit between 1 and 9, the goal is to insert just one set of parentheses within the expression so that the result of the expression is maximized.
n is at most 2500.
Eg.:
Input:
3+5*7+8*4
Output:
303
Explanation:
3+5*(7+8)*4
There was another constraint given in the problem: at most 15 '*' signs will be present. This simplified the problem, as we will have just 17 options for inserting the brackets and brute force would work in O(17*n).
I have been wondering: if this constraint were not present, could I theoretically solve the problem in O(n^2)? It seems to me to be a DP problem. I say theoretically because the answers can get quite big (9^2500 is possible). So, if I ignore the time complexity of working with big numbers, is O(n^2) possible?
If there is no multiplication, you are finished.
If there is no addition, you are finished.
The leading and trailing operations of any subterm worth parenthesizing are always additions, because parentheses around a multiplication do not alter the outcome.
If a subterm contains only additions, you do not need to evaluate subparts of it: multiplying by the full subterm will always give a bigger result. (Since we only have positive numbers/digits.)
Traverse the term once, trying to place the opening parenthesis after (in the worst case) each '*' that is followed by a '+'; within that loop, traverse a second time, trying to place the closing parenthesis before (in the worst case) each succeeding '*' that immediately follows a '+'.
You can solve the problem in O(m*a/2), with m the number of multiplications and a the number of additions. This is smaller than n^2.
Possible places for parentheses shown with ^:
1*2*^3+4+5^*6*^7+8^
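Here is a rough Ruby sketch of that scan (the names, and the use of Kernel#eval for scoring, are my own assumptions, not contest code): collect candidate opening positions (the start, or a digit right after a '*' that begins an addition run) and candidate closing positions (a digit just before a '*', or at the end, that ends an addition run), then score every pair.

def best_parenthesization(expr)
  toks = expr.chars
  n = toks.length
  # Digits sit at even indices, operators at odd indices.
  opens = (0...n).step(2).select do |i|
    (i.zero? || toks[i - 1] == '*') && toks[i + 1] == '+'
  end
  closes = (0...n).step(2).select do |j|
    toks[j - 1] == '+' && (j == n - 1 || toks[j + 1] == '*')
  end
  best = eval(expr)                  # baseline: no parentheses inserted
  opens.each do |i|
    closes.each do |j|
      next if j <= i
      candidate = toks[0...i].join + '(' + toks[i..j].join + ')' + toks[(j + 1)...n].join
      best = [best, eval(candidate)].max
    end
  end
  best
end

best_parenthesization("3+5*7+8*4")   # => 303, from 3+5*(7+8)*4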
Problem:
The numbers from 1 to 10 are given. Put the equal sign (somewhere between them) and any arithmetic operators {+ - * /} so that a perfect integer equality is obtained (both the final result and the partial results must be integers).
Example:
1*2*3*4*5/6+7=8+9+10
1*2*3*4*5/6+7-8=9+10
My first idea to solve this was to use backtracking:
Generate all possibilities of putting operators between the numbers
For each such possibility, replace the operators one by one with the equal sign and check whether the two sides give equal results.
But this solution takes a lot of time.
So, my question is: is there a faster solution, maybe something that uses operator properties or some other cool math trick?
I'd start with the equals sign. Pick a possible location for that, and split your sequence there. For left and right side independently, find all possible results you could get for each, and store them in a dict. Then match them up later on.
Finding all 226 solutions took my Python program, based on this approach, less than 0.15 seconds. So there certainly is no need to optimize further, is there? Along the way, I computed a total of 20683 subexpressions for a single side of one equation. They are fairly well balanced: 10327 expressions for left-hand sides and 10356 expressions for right-hand sides.
If you want to be a bit more clever, you can try to reduce the places where you even attempt division. In order to allow for division without remainder, the prime factors of the divisor must be contained in those of the dividend. So the dividend must be some product, and that product must contain the factors of the number by which you divide. 2, 3, 5 and 7 are prime numbers, so they can never be such divisors. 4 will never have two even numbers before it. So the only possible ways are 2*3*4*5/6, 4*5*6*7/8 and 3*4*5*6*7*8/9. But I'd say it's far easier to check whether a given division is possible as you go, without any need for cleverness.
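A minimal Ruby sketch of this dictionary approach (the names are my own; the original answerer used Python, and I am assuming strict left-to-right evaluation with integer-only partial results, per the problem statement):

OPS = %w[+ - * /]

# All [value, expression-string] pairs reachable from nums, evaluating left to
# right and keeping only integer partial results.
def side_expressions(nums)
  acc = [[nums.first, nums.first.to_s]]
  nums.drop(1).each do |n|
    acc = acc.flat_map do |value, text|
      OPS.filter_map do |op|
        next if op == '/' && value % n != 0
        [value.send(op, n), "#{text}#{op}#{n}"]
      end
    end
  end
  acc
end

nums = (1..10).to_a
solutions = []
(1...nums.length).each do |split|
  left  = side_expressions(nums.take(split)).group_by(&:first)
  right = side_expressions(nums.drop(split))
  right.each do |value, text|
    (left[value] || []).each { |_, ltext| solutions << "#{ltext}=#{text}" }
  end
end

puts solutions.length   # number of equalities found under these assumptions
puts solutions.first(3)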
My understanding is that floats are internally represented as binary expansions, and that introduces errors. If that is the case, then why are float literals displayed exactly as they were given? Suppose that 0.1 is represented internally as 0.0999999999999999 according to its binary expansion (I am using a fake example just to make the point; this particular value is probably not correct). Then on inspection, or in the return value in irb, why is it possible to print the given literal 0.1 and not 0.0999999999999999? Isn't the original literal form gone once it is interpreted and expanded into binary?
In other words, the float literal-to-internal binary representation is a many-to-one mapping. Different float literals that are close enough are mapped to the same internal binary representation. Why then is it possible to reconstruct the original literal from the internal representation (modulo differences between 1.10 and 1.1, or 1.23e2 and 123.0, as in Mark Dickinson's comment)?
The decimal-to-floating-point conversion applied to floating-point literals such as “0.1” rounds to the nearest floating-point value (error at most 0.5 ULP) on most platforms. (Ruby calls a function from the platform for this, and the only fallback Ruby's source code contains for portability is awful, but let us assume conversion to the nearest.) As a consequence, if you print, to any number of decimals between 1 and 15, the closest decimal representation of the double that corresponds to the literal 0.1, then the result is 0.10…0 (and the trailing zeroes can be omitted, of course); and if you print the shortest decimal representation that converts back to the double nearest 0.1, then the result is “0.1”, of course.
Programming languages usually use one of the above two approaches (a fixed number of significant digits, or the shortest decimal representation that converts back to the original floating-point number) when converting a floating-point number to a decimal representation. Ruby uses the latter.
This article introduced this kind of floating-point-to-decimal conversion: floating-point to the shortest decimal representation that converts back to the same floating-point number.
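For instance, in irb (assuming IEEE 754 double-precision Floats; the values in the comments are the typical results, shown only for illustration):

0.1.to_s            # => "0.1"  (shortest decimal that converts back to this double)
"%.20f" % 0.1       # => "0.10000000000000000555"  (closer to the value actually stored)
"0.1".to_f == 0.1   # => true   (the short form round-trips exactly)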
Ruby (like several other languages, including Java and Python) uses a "shortest-string" representation when converting binary floating-point numbers to decimal for display purposes: given a binary floating-point number, it will compute the shortest decimal string that rounds back to that binary floating-point number under the usual round-to-nearest decimal-to-binary method.
Now suppose that we start out with a reasonably short (in terms of number of significant digits) decimal literal, 123.456 for example, and convert that to the nearest binary float, z. Clearly 123.456 is one decimal string that rounds to z, so it's a candidate for the representation of z, and it should be at least plausible that the "shortest-string" algorithm will spit that back at us. But we'd like more than just plausibility here: to be sure that 123.456 is exactly what we're going to get back, all we need to know is that there aren't any other, shorter, candidates.
And that's true, essentially because if we restrict to short-ish decimal values (to be made more precise below), the spacing between successive such values is larger than the spacing between successive floats. More precisely, we can make a statement like the following:
Any decimal literal x with 15 or fewer significant digits and absolute value
between 10^-307 and 10^308 will be recovered by the "shortest-string" algorithm.
Here by "recovered", I mean that the output string will have the same decimal digits, and the same value as the original literal when thought of as a decimal number; it's still possible that the form of the literal may have changed, e.g., from 1.230 to 1.23, or from 0.000345 to 3.45e-4. I'm also assuming IEEE 754 binary64 format with the usual round-ties-to-even rounding mode.
Now let's give a sketch of a proof. Without loss of generality, assume x is positive. Let z be the binary floating-point value nearest x. We have to show that there's no other, shorter, string y that also rounds to z under round-to-nearest. But if y is a shorter string than x, it's also representable in 15 significant digits or fewer, so it differs from x by at least one 'unit in the last place'. To formalize that, find integers e and f such that 2^(e-1) <= x < 2^e and 10^(f-1) < x <= 10^f. Then the difference |x-y| is at least 10^(f-15). However, if y is too far away from x, it can't possibly round to z: since the binary64 format has a precision of 53 bits (away from the underflow and overflow ranges, at least) the interval of numbers that round to z has width at most 2^(e-53)[1]. We need to show that the width of this interval is smaller than |x-y|; that is, that 2^(e-53) < 10^(f-15).
But this follows from our choices: 2^(e-53) <= 2^-52 x by our choice of e, and since 2^-52 < 10^-15 we get 2^(e-53) < 10^-15 x. Then 10^-15 x <= 10^(f-15) (by choice of f).
It's not hard to find examples showing that 15 is best possible here. For example, the literal 8.123451234512346 has 16 significant digits, and converts to the floating-point value 0x1.03f35000dc341p+3, or 4573096494089025/562949953421312. When rendered back as a string using the shortest string algorithm, we get 8.123451234512347.
[1] Not quite true: there's an annoying corner case when z is an exact power of two, in which case the width of the interval is 1.5 · 2^(e-53). The statement remains true in that case, though; I'll leave the details as an exercise.
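To see both halves of the claim in Ruby (assuming the usual IEEE 754 binary64 Float and Ruby's shortest-string Float#to_s):

123.456.to_s              # => "123.456"             (15 or fewer significant digits: recovered)
8.123451234512346.to_s    # => "8.123451234512347"   (16 digits: not recovered exactly)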
I am looking through the documentation on divmod. Part of a table showing the difference between the methods div, divmod, modulo, and remainder is displayed below:
Why is 13.div(-4) rounded to -4 and not to -3? Is there any rule or convention in Ruby to round down negative numbers? If so, why is the following code not rounding down?
-3.25.round() #3
13.div(-4) == -4 and 13.modulo(-4) == -3 so that
(-4 * -4) + -3 == 13
and you get the consistent relationship
(b * (a/b)) + a.modulo(b) == a
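Checking this in irb:

a, b = 13, -4
a.div(b)                        # => -4
a.modulo(b)                     # => -3
b * a.div(b) + a.modulo(b)      # => 13
a.divmod(b)                     # => [-4, -3]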
Why is 13.div(-4) rounded to -4 and not to -3?
This is a misconception: 13.div(-4) is not really rounded at all. It is integer division, and it follows self-consistent rules for working with integers and modular arithmetic. The rounding logic described in your link fits with it, and is then applied consistently to the same divmod operation when one or both of the parameters are Floats. Mathematical operations on negative or fractional numbers are often extended from simpler, more intuitive results on positive integers in this kind of way. E.g. this follows similar logic to how fractional and negative powers, or non-integer factorials, are extended from their positive-integer variants.
In this case, it's all about self-consistency of divmod, but not about rounding in general.
Ruby's designers had a choice to make when dealing with negative numbers; not all languages give the same result. However, once it was decided that the sign of Ruby's modulo result would match the divisor (as opposed to matching the dividend), that determined how the rest of the numbers had to work.
Is there any rule or convention in Ruby to round down negative numbers?
Yes. Rounding a float means returning the numerically closest integer. When there are two equally close integers, Ruby rounds to the integer furthest from 0. This is an entirely separate design decision from how the integer division and modulo arithmetic methods work.
If so, why is the following code not rounding down? -3.25.round() #3
I assume you mean the result to read -3. The round method does not "round down"; it rounds to the closest integer, and -3 is the closest integer to -3.25. Ruby's designers did have to make a choice, though, about what to do with -3.5.round() # -4. Some languages would instead return -3 when rounding that number.
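A few examples in irb showing the two separate rules side by side:

-3.25.round    # => -3   (nearest integer)
-3.5.round     # => -4   (tie: Ruby rounds away from zero)
2.5.round      # => 3
13.div(-4)     # => -4   (integer/floor division, a separate rule entirely)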