How can I change the precision of the functions ln and power in Oracle? I'm getting very precise results: 40 digits. The problem is that I have a huge table, so the calculations take very long, and I don't need that kind of precision. A standard 7 or 16 digits would be fine, and would probably speed up the computation. Note that I'm not asking about the round function, because it would only change the format of the result and would not influence the computation itself.
Edit
My real query is complicated, so to keep things simple, let us consider
select ln(2) from dual;
As a result, I'm getting
.6931471805599453094172321214581765680782
whereas I would like to get, e.g., .69314718, but not by rounding the final result .6931471805599453094172321214581765680782; I want to avoid computing those additional digits in the first place.
Just trunc the ln to avoid rounding.
select trunc(ln(2),7),ln(2) from dual;
Outputs:
0.6931471 0.693147180559945
It turned out that converting the arguments to binary_double is the perfect solution for my efficiency problems. For binary_double arguments, the power and ln functions produce binary_double results. Both of my queries now run in a couple of minutes instead of 1 hour 15 minutes and 40 minutes, respectively.
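For anyone wanting the idiom, the conversion is just a wrapper around the argument (binary_double gives roughly 15-17 significant digits):
-- ln() of a BINARY_DOUBLE argument is computed and returned in double
-- precision instead of full NUMBER precision
select ln(to_binary_double(2)) from dual;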
My function has to work with very big numbers, so I used things like big() in my code. Unfortunately, this gives me a result that is far more precise than I need (in other words, it's slowing the entire code down).
This is what the result looks like:
ΔE = 0.08298347005140644564908076516066986088852555871299296314640532293721884964540988
If possible I would like to limit the result to 4 digits
ΔE = 0.0829
If performance is a concern, probably the best way to do this is with https://github.com/dzhang314/MultiFloats.jl, e.g.
using MultiFloats
x = Float64x4(2.0)
# Calculations performed on x will have Float64x4 precision subsequently...
MultiFloats.jl appears to be the fastest package around at present for such calculations, and will let you choose from precision levels between Float64x2 and Float64x8. In any event, this will be dramatically faster than the BigFloats used in the example above.
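And if the goal is just four digits in the final answer, truncating the finished value is essentially free regardless of the working precision. A sketch (converting to Float64 first, since digit-wise trunc may not be defined for MultiFloat types):
using MultiFloats

ΔE = Float64x4(0.08298347005140644)    # stand-in for the computed value
println(trunc(Float64(ΔE), digits=4)) # prints 0.0829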
What is the typical approach in Lua (before the introduction of integers in 5.3) for dealing with calculated range values in for loops? Mathematical calculations on the start and end values of a numerical for loop put the code at risk of bugs, possibly nasty latent ones, since they only occur for certain values and/or with changes to calculation ordering. Here's a concocted example of a loop not producing the desired output:
a={"a","b","c","d","e"}
maybethree = 3
maybethree = maybethree / 94
maybethree = maybethree * 94
for i = 1,maybethree do print(a[i]) end
This produces the unfortunate output of two items rather than the desired three (tested with 5.1.4 on 64-bit x86):
a
b
Programmers unfamiliar with this territory might be further confused by print()'s output, as print(maybethree) shows 3!
Applying a rounding function to the nearest whole number could work here. I understand the approximation inherent in FP and why this fails; I'm interested in what the typical style/solution for this is in Lua.
Related questions:
Lua for loop does not do all iterations
Lua: converting from float to int
The solution is to avoid relying on floating-point math where floating-point precision may become an issue. Or, more realistically, just be aware of when you are using FP and be mindful of the precision issue. This isn't a Lua problem that requires a Lua-specific solution.
maybethree is a misnomer: it is never three. Your code above is deterministic. It will always print just a and b. Since the maybethree variable is less than three, of course the for loop would not execute 3 times.
The print function is also behaving as defined/expected. Use string.format to show the FP number in all its glory:
print(string.format("%1.16f", maybethree)) -- 2.9999999999999996
Still need to use calculated values to control your for loop? Then you already mentioned the answer: implement a rounding function.
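For example, a minimal round-to-nearest helper (the name round is my own) fixes the loop above:
local function round(x)
  return math.floor(x + 0.5)
end

a = {"a","b","c","d","e"}
maybethree = (3 / 94) * 94                        -- 2.9999999999999996 on this build
for i = 1, round(maybethree) do print(a[i]) end   -- now prints a, b, c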
Why is Oracle not using the bankers' rule (the rounding method)?
Accurate decimal arithmetic is a large and complex subject.
Google 'mike cowlishaw decimal rounding' if you want to read the, ahem, oracle on the subject.
Basically there are many possible rounding schemes:
Round everything down - the default in most languages, including C; as Oracle is written in C, this is probably why they do this.
Round everything up - rarely seen but occasionally needs to be implemented because of obscure market and tax rules.
Basic Half Rounding - anything above .5 rounds up; everything else rounds down.
Generous Half Rounding - anything below .5 rounds down; everything else rounds up.
Bankers' Rounding - even numbers follow the Basic Half Rounding rule, odd numbers the Generous Half Rounding rule. This is rarely seen in actual banks, which prefer rounding up if the money's coming their way and rounding down when it's going the client's way.
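For comparison, this is how a .5 case comes out with Oracle's built-ins (ROUND's behaviour is shown in more detail below):
select floor(2.5) as round_down, ceil(2.5) as round_up, round(2.5) as rounded from dual;
-- ROUND_DOWN ROUND_UP ROUNDED
--          2        3       3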
Oracle's NUMBER is actually a pretty good decimal arithmetic implementation and is accurate as far as it goes.
Oracle has implemented round half away from zero:
SQL> select round(22.5) from dual
2 /
ROUND(22.5)
-----------
23
SQL> select round(23.5) from dual
2 /
ROUND(23.5)
-----------
24
SQL> select round(-23.5) from dual
2 /
ROUND(-23.5)
------------
-24
SQL> select round(-22.5) from dual
2 /
ROUND(-22.5)
------------
-23
SQL>
Why don't they change it to bankers' rounding? Well, for most purposes round half away from zero is good enough. Plus there's that old fallback: changing it would likely break too much of the existing codebase, Oracle's own as well as all their customers'.
Old thread, but someone may still need this. Oracle's binary floats and binary doubles follow the banker's rounding rule when rounding to a whole number. So you can use that. It's ugly but it works:
Given price = 2.445:
SQL> select round(to_binary_float(price * 100)) / 100 as price_rounded from dual;
price_rounded
-------------
2.44
Given price = 2.435:
SQL> select round(to_binary_float(price * 100)) / 100 as price_rounded from dual;
price_rounded
-------------
2.44
The multiply and divide by 100 are necessary in this example. I haven't been able to figure out the specifics of the behaviour, but select round(to_binary_float(price), 2), for some decimal price, does not seem to round up or down by consistent rules. I have found, however, that rounding to a whole number consistently gives me what I need.
You can always implement your own function for banker's rounding as described here.
Banker's rounding rounds 0.5 to 0: it rounds towards even numbers.
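A minimal sketch of what that could look like for NUMBER values (the function name and logic are mine, not an Oracle built-in; p_places is the number of decimal places to keep):
create or replace function round_half_even(p_val number, p_places pls_integer)
return number is
  v_scaled number := p_val * power(10, p_places);
  v_floor  number := floor(v_scaled);
begin
  if v_scaled - v_floor = 0.5 then
    -- an exact tie: pick the even neighbour
    if mod(v_floor, 2) = 0 then
      return v_floor / power(10, p_places);
    end if;
    return (v_floor + 1) / power(10, p_places);
  end if;
  -- not a tie: ordinary rounding already gives the right answer
  return round(v_scaled) / power(10, p_places);
end round_half_even;
/
-- round_half_even(2.445, 2) => 2.44 ; round_half_even(2.435, 2) => 2.44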
Just as background, I'm building an application in Cocoa. This application existed originally in C++ in another environment. I'd like to do as much as possible in Objective-C.
My questions are:
1) How do I compute, as an integer, the number of milliseconds between now and a previous time I remembered as "now"?
2) When time.h is included in an Objective-C program, what are the units of clock()?
Thank you for your help.
You can use CFAbsoluteTimeGetCurrent(), but bear in mind the wall clock can change between two calls and screw you over. If you want to protect against that, you should use CACurrentMediaTime().
The return types of these are CFAbsoluteTime and CFTimeInterval respectively, both of which are typedefs for double. So they return the number of seconds with double precision. If you really want an integer, you can use mach_absolute_time(), found in <mach/mach_time.h>, which returns a 64-bit integer. That needs a bit of unit conversion, so check out this link for example code. This is what CACurrentMediaTime() uses internally, so it's probably best to stick with that.
Computing the difference between two calls is obviously just a subtraction; use a variable to remember the last value.
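A minimal sketch of that pattern with CACurrentMediaTime() (assumes QuartzCore is linked; the names are mine):
#import <QuartzCore/QuartzCore.h>

static CFTimeInterval lastMark;

void markNow(void) {
    lastMark = CACurrentMediaTime();  // monotonic seconds, immune to clock changes
}

long long millisecondsSinceMark(void) {
    // elapsed seconds, scaled to an integer number of milliseconds
    return (long long)((CACurrentMediaTime() - lastMark) * 1000.0);
}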
For the clock function see the documentation here: clock(). Basically you need to divide the return value by CLOCKS_PER_SEC to get the actual time.
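For example (plain C, so it works unchanged in an Objective-C file):
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t ticks = clock();  /* processor time used, in implementation-defined ticks */
    printf("%f seconds of CPU time\n", (double)ticks / CLOCKS_PER_SEC);
    return 0;
}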
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
Is there any reason you need it as an integral number of milliseconds? Asking NSDate for the time interval since another date will give you a floating-point number of seconds. If you really do need milliseconds, you can simply multiply that by 1000 to get a floating-point number of milliseconds. If you really do need an integer, you can round or truncate the floating-point value.
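For instance (a sketch, inside whatever method does the timing):
NSDate *start = [NSDate date];
// ... do the work being timed ...
NSTimeInterval seconds = [[NSDate date] timeIntervalSinceDate:start]; // floating-point seconds
long long ms = llround(seconds * 1000.0);                             // integer milliseconds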
If you'd like to do it with integers from start to finish, use either UpTime or mach_absolute_time to get the current time in absolute units, then use AbsoluteToNanoseconds to convert that to a real-world unit. Obviously, you'll have to divide that by 1,000,000 to get milliseconds.
QA1398 suggests mach_absolute_time, but UpTime is easier, since it returns the same type AbsoluteToNanoseconds uses (no “pointer fun” as shown in the technote).
AbsoluteToNanoseconds returns an UnsignedWide, which is a structure. (This stuff dates back to before Mac machines could handle scalar 64-bit values.) Use the UnsignedWideToUInt64 function to convert it to a scalar. That just leaves the subtraction, which you'll do the normal way.
I'm having problems with a mammoth legacy PL/SQL procedure which has the following logic:
l_elapsed := dbms_utility.get_time - l_timestamp;
where l_elapsed and l_timestamp are of type PLS_INTEGER, and l_timestamp holds the result of a previous call to get_time.
This line suddenly started failing during a batch run with an ORA-01426: numeric overflow.
The documentation on get_time is a bit vague, possibly deliberately so, but it strongly suggests that the return value has no absolute significance and can be pretty much any numeric value. So I was suspicious to see it being assigned to a PLS_INTEGER, which can only hold 32-bit integers. However, the interweb is replete with examples of people doing exactly this kind of thing.
The smoking gun is found when I invoke get_time manually: it returns a value of -214512572, which is suspiciously close to the minimum value of a 32-bit signed integer. I'm wondering if, during the time elapsed between the first call to get_time and the next, Oracle's internal counter rolled over from its maximum value to its minimum value, resulting in an overflow when trying to subtract one from the other.
Is this a likely explanation? If so, is this an inherent flaw in the get_time function? I could just wait and see if the batch fails again tonight, but I'm keen to get an explanation for this behaviour before then.
Maybe late, but this may benefit someone searching on the same question.
The underlying implementation is a simple 32-bit binary counter, which is incremented every 100th of a second, starting from when the database was last started.
This binary counter is being mapped onto a PL/SQL BINARY_INTEGER type, which is a signed 32-bit integer (there is no sign of it being changed to 64-bit on 64-bit machines).
So, presuming the clock starts at zero, it will hit the positive integer limit after about 248 days, then flip over to the most negative value and climb back up towards zero.
The good news is that, provided both numbers have the same sign, you can do a simple subtraction to find the duration; otherwise you can use the 32-bit remainder:
IF SIGN(:now) = SIGN(:then) THEN
  RETURN :now - :then;
ELSE
  RETURN MOD(:now - :then + POWER(2,32), POWER(2,32));
END IF;
Edit: This code will blow the int limit and fail if the gap between the times is too large (248 days), but you shouldn't be using GET_TIME to compare durations measured in days anyway (see below).
Lastly - there's the question of why you would ever use GET_TIME.
Historically, it was the only way to get a sub-second time, but since the introduction of SYSTIMESTAMP, the only reason you would ever use GET_TIME is because it's fast - it is a simple mapping of a 32-bit counter, with no real type conversion, and doesn't make any hit on the underlying OS clock functions (SYSTIMESTAMP seems to).
As it only measures relative time, its only use is for measuring the duration between two points. For any task that takes a significant amount of time (you know, over 1/1000th of a second or so) the cost of using a timestamp instead is insignificant.
The number of occasions where it is actually useful is minimal (the only one I've found is checking the age of data in a cache, where doing a clock hit for every access becomes significant).
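For reference, the usual GET_TIME idiom looks like this (the counter ticks in hundredths of a second, hence the division by 100):
declare
  l_start pls_integer := dbms_utility.get_time;
begin
  -- ... the work being timed ...
  dbms_output.put_line('Elapsed: ' || (dbms_utility.get_time - l_start) / 100 || ' seconds');
end;
/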
From the 10g doc:
Numbers are returned in the range -2147483648 to 2147483647 depending on platform and machine, and your application must take the sign of the number into account in determining the interval. For instance, in the case of two negative numbers, application logic must allow that the first (earlier) number will be larger than the second (later) number which is closer to zero. By the same token, your application should also allow that the first (earlier) number be negative and the second (later) number be positive.
So while it is safe to assign the result of dbms_utility.get_time to a PLS_INTEGER it is theoretically possible (however unlikely) to have an overflow during the execution of your batch run. The difference between the two values would then be greater than 2^31.
If your job takes a lot of time (therefore increasing the chance that the overflow will happen), you may want to switch to a TIMESTAMP datatype.
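For example (a sketch; subtracting two timestamps yields an INTERVAL DAY TO SECOND, which does not wrap):
declare
  l_start   timestamp := systimestamp;
  l_elapsed interval day to second;
begin
  -- ... the batch work ...
  l_elapsed := systimestamp - l_start;
  dbms_output.put_line('Elapsed: ' || to_char(l_elapsed));
end;
/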
Assigning a value outside the PLS_INTEGER range to your variable does indeed raise an ORA-01426:
SQL> l
1 declare
2 a pls_integer;
3 begin
4 a := -power(2,33);
5* end;
SQL> /
declare
*
ERROR at line 1:
ORA-01426: numeric overflow
ORA-06512: at line 4
However, you seem to suggest that -214512572 is close to -2^31, but it's not, unless you forgot to type a digit. Are we really looking at a smoking gun?
Regards,
Rob.