How does Oracle calculate MOD(binary_float, 0)?

The Oracle documentation (http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions088.htm#i77996) says that "MOD returns the remainder of n2 divided by n1. Returns n2 if n1 is 0." But I got an unexpected result (I expected 1.1 but got 0) when I passed a BINARY_FLOAT as n2:
SQL> select mod(1.1,0), to_binary_float('1.1'), mod(to_binary_float('1.1'), 0) from dual;
MOD(1.1,0) TO_BINARY_FLOAT('1.1') MOD(TO_BINARY_FLOAT('1.1'),0)
---------- ---------------------- -----------------------------
1.1 1.1E+000 0
Does anyone have any idea?

Interesting. I think it has something to do with FLOOR vs. ROUND being used internally in the calculations.
For example, the REMAINDER function is very similar to MOD, except it uses ROUND instead of FLOOR. For this example, it returns NaN (not a number):
select remainder(to_binary_float(1.1), 0) from dual
Output:
NAN
What's more interesting is that I can use the NANVL function to provide a default value when NaN is returned (in this case, mimicking the MOD behavior and returning 1.1), and it returns a float value:
select nanvl(remainder(to_binary_float(1.1), 0), 1.1) from dual
Output:
1.10000002384186
So perhaps that's your workaround.
Hope that helps.
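For intuition, the two functions' documented definitions can be sketched in Python (a rough illustration of the formulas, not Oracle's implementation): MOD computes n2 - n1*FLOOR(n2/n1) with a special case for n1 = 0, while REMAINDER rounds the quotient instead, and IEEE 754 floating-point arithmetic defines a remainder by zero as NaN:

```python
import math

def oracle_mod(n2, n1):
    # Oracle's documented MOD: n2 - n1*FLOOR(n2/n1), with MOD(n2, 0) = n2
    if n1 == 0:
        return n2
    return n2 - n1 * math.floor(n2 / n1)

def oracle_remainder(n2, n1):
    # REMAINDER uses ROUND instead of FLOOR; following the IEEE 754
    # convention, a remainder by zero yields NaN
    if n1 == 0:
        return float("nan")
    return n2 - n1 * round(n2 / n1)

print(oracle_mod(1.1, 0))        # 1.1, as documented for NUMBER
print(oracle_remainder(1.1, 0))  # nan, like REMAINDER on a binary_float
```

The MOD(binary_float, 0) result of 0 in the question suggests the binary types take a different internal code path than NUMBER does; either way, the NANVL guard above is a reasonable workaround.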

VLOOKUP? QUERY? Suggestions?

I have the following Google Sheet.
https://docs.google.com/spreadsheets/d/1q9I7XhyEGKeAk93mDyiXpP95Kc9qmiPfvnhEOcTwbIU/edit#gid=0
I can edit the fields in green (A16, B16): A16 is a data-validation drop-down list and B16 is a typed-in number.
Using vlookup, I can get the Base Value without any issues.
=VLOOKUP(A16,A2:B12,2, false)
What I cannot figure out is how to get the factor value. For example, I have selected H, which is row 9. Since the size value 1.62 is greater than the Mid Size but less than the Max Size for that row, I want to return the Factor 2 value of 1.5.
I have tried multiple VLOOKUP / QUERY formulas, but none of them work.
The Sum is just Base * Factor to give a final value. Ideally, I would select the lookup value, enter a size, and it would show only the Sum value.
This definitely is NOT the most efficient way, but hopefully it works:
=IF(A16=A2,IF(B16>B2,"1.62",C2),IF(A16=A3,IF(B16>B3,"1.62",C3),IF(A16=A4,IF(B16>B4,"1.62",C4),IF(A16=A5,IF(B16>B5,"1.62",C5),IF(A16=A6,IF(B16>B6,"1.62",C6),IF(A16=A7,IF(B16>B7,"1.62",C7),IF(A16=A8,IF(B16>B8,"1.62",C8),IF(A16=A9,IF(B16>B9,"1.62",C9),IF(A16=A10,IF(B16>B10,"1.62",C10),IF(A16=A11,IF(B16>B11,"1.62",C11),IF(A16=A12,IF(B16>B12,"1.62",C12),"ERROR")))))))))))
Probably not the best way to handle this, but it does work, partly because my base factor value is 1, so multiplying the value by 1 does not change anything:
=(IFNA(QUERY(A2:H12,"select D where (A = '"&A16&"' AND (C <= "&B16&" AND E > "&B16&")) ORDER BY A"),1)*IFNA(QUERY(A2:H12,"select F where (A = '"&A16&"' AND (E < "&B16&" AND G >= "&B16&")) ORDER BY A"),1)*IFNA(QUERY(A2:H12,"select H where (A = '"&A16&"' AND (G <= "&B16&")) ORDER BY A"),1))
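The band lookup those formulas implement can be expressed compactly; here is a Python sketch that mirrors the three QUERY conditions exactly (the column letters and the sample values are assumptions for illustration, not taken from the real sheet):

```python
# Hypothetical mirror of one sheet row: size thresholds in C/E/G, factors in D/F/H.
row = {"C": 1.0, "D": 1.2, "E": 1.5, "F": 1.5, "G": 1.8, "H": 2.0}

def pick_factor(row, size):
    # Same banding as the three QUERY clauses. Note the boundary behavior
    # as written: size == E falls into neither of the first two bands, and
    # size == G satisfies both of the last two (the second wins here).
    if row["C"] <= size and row["E"] > size:
        return row["D"]
    if row["E"] < size and row["G"] >= size:
        return row["F"]
    if row["G"] <= size:
        return row["H"]
    return None

print(pick_factor(row, 1.62))  # 1.5 -- the Factor 2 band, as in the question
```

Spotting those boundary gaps/overlaps in the flat form may help tighten the <= / < choices in the sheet formula.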

Binary fixed point multiplication

I am implementing an 8-bit fixed-point multiplication module in VHDL which returns an 8-bit truncated number, but I have a problem when I do multiplications by hand in order to test it. The problem arises when I want to multiply two negative numbers.
I tried multiplying two positive values: 1.67 * 0.625 ~ 1.04 (0.906 in the binary multiplication).
001.10101 -> 1.67
000.10100 -> 0.625
------------
000000.1110111010 = 000.11101 (truncated to 8bits = 0.906)
I tried multiplying negative and positive numbers (-0.875 * 3 ~ -2.62):
111.00100 -> -0.875
011.00000 -> 3
----------
010101.0110000000 = 101.01100 (truncated to 8bits = -2.625)
So far everything is working properly. The problem comes when I try to multiply two negative numbers. According to what I know (unless I'm mistaken):
- multiplying two numbers will give a result with twice the resolution (multiply two 8 bit numbers and you get a 16 bit number)
- the position of the fixed point shifts as well. In this example there are 3 bits before the point and 5 bits after, which means that the result will have 6 bits before the point and 10 bits after.
Assuming this, the above calculations worked properly. But when I try to multiply two negative values (-0.875 * -1.91 ~ 1.67):
110.00010 -> -1.91 (1.9375)
111.00100 -> -0.875
------------
101011.0011001000 = 011.00110(truncated to 8 bits = 3.1875)
Naturally, I tried another negative multiplication (-2.64 * -0.875 = 2.31)
101.01011 -> -2.64
111.00100 -> -0.875
----------
100110.0001001100 = 110.00010 (truncated to 8bits = -1.9375)
Clearly I'm doing something wrong, but I just can't see what I'm doing wrong.
PS: I haven't implemented it yet. The idea came to me, I figured out how I was going to do it, and then I tried to test it by hand with some simple examples. I also tried more multiplications, thinking that maybe the earlier ones had worked out of luck, but they worked too. So maybe I'm doing something wrong specifically when multiplying two negative numbers; maybe I'm truncating incorrectly? Probably.
EDIT:
OK, I found a Xilinx document that states how the multiplication is done when the two operands are negative; here is the link. According to this document, this can only be done when doing extended multiplication: the last partial product must be inverted, then 1 added to it, and the result will be the correct number.
To do the multiplications I used Windows' calculator in programmer mode, i.e. to multiply the 8-bit numbers I put them into the calculator, took the result, and truncated it. Since that worked for the other cases, the Windows calculator must be doing a direct multiplication (adding all the partial products as they are instead of inverting the last one). So, to obtain the real result, I should subtract the first operand from the final result and then add the first operand inverted + 1:
110.00010 -> -1.91 (1.9375)
111.00100 -> -0.875
------------
101011.0011001000
This gave me the result: 000010.0111001000 = 010.01110 (truncated to 8 bits = 2.43).
And with the other one I came up with a result of 1.875. Those outputs aren't exactly great, but at least they are closer to what I expected. Is there an easier way to do this?
Your intermediate results are wrong, so the truncation did not work as expected. Moreover, in your format the truncation is only possible without overflow if the four top-most bits of the intermediate result are equal.
You should use signed data types to do the multiplication right.
Even your second example is wrong: the intermediate binary result 010101.0110000000 represents the decimal number 21.375, which is not the product of -0.875 and 3. So, let's do the multiplication by hand:
a * b = -0.875 * 3 = -2.625
111.00100 * 011.00000
---------------------
. 00000000 // further lines containing only zeros have been omitted
+ .01100000
+ 011.00000
+ 0110.0000
+ 110100.000 // add -(2^2) * b !
= 111101.0110000000 = -2.625 (intermediate result)
= 101.01100 = -2.625 after truncation
You have to add the two's complement of b in the last partial sum because the '1' in the top-most bit of a represents the value -(2^2) = -4. Truncation without overflow is possible here because the 4 top-most bits of the intermediate result are equal.
And now the third example
a * b = -1.9375 * -0.875 = 1.6953125
110.00010 * 111.00100
---------------------
. 00000000 // further lines containing only zeros have been omitted
+ 111111.111100100 // sign-extended partial-sum
+ 111110.0100 // sign-extended partial-sum
+ 000011.100 // add -4 * b
= 000001.101100100 = 1.6953125 (intermediate result)
~ 001.10110 = 1.6875 after truncation
As b is a signed number, one must always sign-extend the partial products to the width of the intermediate result. Of course, this was also done in the calculation of the second example, but there it does not make a difference.
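The same arithmetic can be checked mechanically. This Python sketch interprets the 8-bit patterns as signed two's-complement Q3.5 values, multiplies them exactly (so the full product is Q6.10), and truncates back to 8 bits; it reproduces both worked examples above:

```python
def decode(bits, total=8, frac=5):
    # Interpret an unsigned bit pattern as a signed fixed-point value;
    # returns (signed integer, real value).
    if bits >= 1 << (total - 1):
        bits -= 1 << total
    return bits, bits / (1 << frac)

def fx_mul_truncate(a_bits, b_bits, total=8, frac=5):
    # Signed Q3.5 multiply; the exact product has scale 2^-10 (Q6.10).
    # Truncation drops `frac` fractional bits and keeps the low `total` bits.
    sa, _ = decode(a_bits, total, frac)
    sb, _ = decode(b_bits, total, frac)
    prod = sa * sb
    return (prod >> frac) & ((1 << total) - 1)

# -0.875 * 3: 111.00100 * 011.00000 -> 101.01100 (= -2.625)
r1 = fx_mul_truncate(0b11100100, 0b01100000)
# -1.9375 * -0.875: 110.00010 * 111.00100 -> 001.10110 (= 1.6875)
r2 = fx_mul_truncate(0b11000010, 0b11100100)
```

Python's `>>` on negative integers is an arithmetic (sign-preserving) shift, which matches the bit-pattern truncation the hardware would perform; in VHDL the same effect comes from multiplying `signed` values from `numeric_std` and slicing the 16-bit product.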

Oracle function POWER gives approximate results

Recently I found out that the Oracle function POWER does not always give exact results.
This can easily be checked with this script or a similar one:
BEGIN
FOR X IN 1 .. 100 LOOP
IF SQRT(X) = POWER(X, 1 / 2) THEN
DBMS_OUTPUT.PUT_LINE(X);
END IF;
END LOOP;
END;
The output is just the following: 1, 5, 7, 11, 16, 24, 35, 37, 46, 48, 53, 70, 72, 73.
That is, for only 14 of the first 100 natural numbers is the square root of the number equal to the result of raising it to the power 1/2.
I think this has to do with the limits of the NUMBER data type, whose precision only goes up to 38 digits. If you try with BINARY_DOUBLE, you'll find that all values 1 to 100 match:
DECLARE
l_num binary_double := 0;
BEGIN
LOOP
l_num := l_num + 1;
exit when l_num > 100;
IF ( SQRT(l_num) = POWER(l_num, 0.5) ) THEN
DBMS_OUTPUT.PUT_LINE(l_num);
ELSE
DBMS_OUTPUT.PUT_LINE(l_num || ': ' || SQRT(l_num) || ' <> ' || POWER(l_num, 0.5));
END IF;
END LOOP;
END;
Output (partial):
1.0E+000
2.0E+000
3.0E+000
...
9.8E+001
9.9E+001
1.0E+002
Another option is to round the results of both SQRT and POWER to, say, 35 digits or fewer (if you must use the NUMBER data type).
The Oracle documentation somewhat covers this:
Numeric Functions
Numeric functions accept numeric input and return numeric values. Most
numeric functions return NUMBER values that are accurate to 38 decimal
digits. The transcendental functions COS, COSH, EXP, LN,
LOG, SIN, SINH, SQRT, TAN, and TANH are accurate to 36
decimal digits. The transcendental functions ACOS, ASIN, ATAN,
and ATAN2 are accurate to 30 decimal digits.
So SQRT is stated to be accurate to 36 decimal digits; POWER isn't in the list, so it is implied to be accurate to 38 decimal digits. If you look at the values returned by the two functions you can see the discrepancy way down in the least significant digits; e.g. for X = 2:
SQRT(2): 1.41421356237309504880168872420969807857
POWER(2, 1/2): 1.41421356237309504880168872420969807855
Curiously, though, it looks like SQRT is the more accurate one and it's POWER that is slightly less precise. Wolfram Alpha gives:
1.4142135623730950488016887242096980785696718753769480...
(but notice that it also states this is an approximation), which rounds to the same value as SQRT; and if you reverse the process with SQRT(2) * SQRT(2) and POWER(POWER(2, 1/2), 2) you get:
(SQRT): 2
(POWER): 1.99999999999999999999999999999999999994
When X is a BINARY_DOUBLE rather than a NUMBER you get the same value from both:
1.4142135623730951
but you've lost precision; squaring that again gives:
2.0000000000000004
Ultimately any finite decimal representation of an irrational number like the square root of 2 is an approximation. Two functions giving slightly different approximations is perhaps a little confusing, but since SQRT seems to be the closer approximation (despite what the documentation says) - as a special case - I'm not sure that's really something to complain about.
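A rough analogue (in Python's decimal module, not Oracle) shows the same effect: at 38 significant digits, a dedicated square-root routine and a general power routine are both approximations that can disagree in the trailing digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 38  # mimic NUMBER's 38 significant digits

s = Decimal(2).sqrt()                        # dedicated square-root routine
p = Decimal(2) ** (Decimal(1) / Decimal(2))  # general power routine

print(s)      # 38-digit approximation of sqrt(2)
print(p)      # agrees with s to well over 30 digits
print(s * s)  # squaring shows how close the approximation really is
```

The comparison-after-rounding workaround mentioned above is the same idea: trim both results below the precision where the two routines can diverge.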

Efficient database lookup based on input where not all digits are significant

I would like to do a database lookup based on a 10 digit numeric value where only the first n digits are significant. Assume that there is no way in advance to determine n by looking at the value.
For example, I receive the value 5432154321. The corresponding entry (if it exists) might have key 54 or 543215 or any value based on n being somewhere between 1 and 10 inclusive.
Is there any efficient approach to matching on such a string short of simply trying all 10 possibilities?
Some background
The value is from a barcode scan. The barcodes are EAN13 restricted circulation numbers so they have the following structure:
02[1234567890]C
where C is a check sum. The 10 digits in between the 02 and the check sum consist of an item identifier followed by an item measure. There might be a check digit after the item identifier.
Since I can't depend on the data to adhere to any single standard, I would like to be able to define on an ad-hoc basis, how particular barcodes are structured which means that the portion of the 10 digit number that I extract, can be any length between 1 and 10.
Just a few ideas here:
1)
Maybe store these numbers in reversed form in your DB. If you have N = 54321, you store it as N = 12345 in the DB. Say N is the name of the column you store it in.
When you read K = 5432154321, reverse it too, giving K1 = 1234512345, and now check the DB column N (whose value is, say, P): you have a match if K1 % 10^s == P, where s = floor(log10(P)) + 1.
Note: floor(log10(P)) + 1 is a formula for the count of digits of the number P > 0. You may also store this value precomputed in the DB, so that you don't need to compute it each time.
2) As 1) is kind of hacky (though maybe the best of the three ideas here), maybe you could just store the numbers in a string column and check with the LIKE operator. But that's trivial; you've probably considered it already.
3) Or ... you store the numbers reversed, but you also store all their residues mod 10^k for k = 1...10, in columns col1, col2, ..., col10. Then you can compare numbers almost directly; the check will be something like
N % 10 == col1
or
N % 100 == col2
or
...
N % 10^10 == col10.
Still not very elegant, though (and I'm not quite sure it's applicable to your case).
I decided to check my idea 1). So here is an example (I did it in SQL Server):
insert into numbers
(number, cnt_dig)
values
(1234, 1 + floor(log10(1234)))
insert into numbers
(number, cnt_dig)
values
(51234, 1 + floor(log10(51234)))
insert into numbers
(number, cnt_dig)
values
(7812334, 1 + floor(log10(7812334)))
select * From numbers
/*
Now we have this in our table:
id number cnt_dig
4 1234 4
5 51234 5
6 7812334 7
*/
-- Note that the actual numbers stored here
-- are the reversed ones: 4321, 43215, 4332187.
-- So far so good.
-- Now we read say K = 433218799 on the input
-- We reverse it and we get K1 = 997812334
declare @K1 bigint
set @K1 = 997812334
select * From numbers
where
@K1 % power(10, cnt_dig) = number
-- So from the last 3 queries,
-- we get this row:
-- id number cnt_dig
-- 6 7812334 7
--
-- meaning we have a match
-- i.e. the actual number 433218799
-- was matched successfully with the
-- actual number (from the DB) 4332187.
So this idea 1) doesn't seem that bad after all.
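For completeness, here is the same matching logic in Python (string reversal makes the digit manipulation explicit); the data mirrors the SQL Server demo above:

```python
def reversed_digits(n):
    return int(str(n)[::-1])

# Stored rows: (reversed key, digit count), as in the SQL example, where the
# actual keys 4321, 43215 and 4332187 are stored reversed.
rows = [(1234, 4), (51234, 5), (7812334, 7)]

def match(scanned, rows):
    # Reverse the scanned value, then keep rows whose reversed key equals
    # the matching-length suffix of the reversed scan -- i.e. a prefix
    # match on the original, un-reversed numbers.
    k1 = reversed_digits(scanned)
    return [n for n, cnt in rows if k1 % 10 ** cnt == n]

print(match(433218799, rows))  # [7812334] -- the prefix 4332187 matches
```

With the digit count stored alongside the reversed key, the DB check stays a single indexed-friendly comparison per row instead of ten LIKE probes.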

Why does -1 % 2 equal -1?

I searched on Google, which says that -1 % 2, i.e. (-1) mod 2, is 1, but in Xcode -1 % 2 is -1. Can anyone tell me why? Thanks for your help! ^.^
http://www.google.com.hk/#hl=zh-TW&source=hp&q=-1%252&oq=-1%252&aq=f&aqi=&aql=1&gs_sm=e&gs_upl=2572l7521l0l7993l10l10l0l9l0l0l154l154l0.1l1l0&bav=on.2,or.r_gc.r_pw.,cf.osb&fp=1573ccabb4b5821b&biw=929&bih=825
This text was taken from this post.
Objective-C is a superset of C99, and C99 defines a % b to be negative when a is negative. See also the Wikipedia entry on the modulo operation and this Stack Overflow question.
Something like (a >= 0) ? (a % b) : ((a % b) + b) (which hasn't been tested and probably has unnecessary parentheses) should give you the result you want.
Often, the modulo operator is calculated using:
number % modulus := number - (number / modulus) * modulus
In your case, you get (-1) - (-1/2)*2 = (-1) - 0 = -1. Note that -1/2 evaluates to 0 since we are using integer math.
I don't know whether all computers operate this way or whether it varies depending on the hardware, but I remember this coming up in a class years ago.
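The formula above can be demonstrated from Python, whose own % floors the quotient instead of truncating it toward zero (which is exactly why Google's calculator and C disagree):

```python
import math

def c_style_mod(a, b):
    # C99 truncates the quotient toward zero, so the remainder takes
    # the sign of the dividend a: a % b == a - (a / b) * b.
    return a - b * math.trunc(a / b)

print(c_style_mod(-1, 2))  # -1, matching C / Objective-C / Xcode
print(-1 % 2)              # 1, Python's % floors the quotient instead
print(math.fmod(-1, 2))    # -1.0, math.fmod follows the C convention
```

So the two answers differ by exactly one multiple of the modulus, which is what the (a % b) + b fix-up exploits.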
