Why does dividing two time.Durations in Go result in another time.Duration? - go

I don't understand what it means to divide a time.Duration in Go.
For example, this is super lovely:
d,_ := time.ParseDuration("4s")
fmt.Println(d/4)
prints 1s. Which is ace, because (naively) 4 seconds divided by 4 is 1 second.
It gets a little confusing though when we find out that the 4 in the denominator has to be a duration. So although:
d1 := time.Duration(4)
fmt.Println(d/d1)
also prints 1s, we know that d1 is actually 4ns and I'm entirely unconvinced that 4 seconds divided by 4 nanoseconds is 1 second.
I'm confused because a duration divided by duration should be dimensionless (I think, right?), whereas a duration divided by a dimensionless number should have units of time.
And I know that type != unit, but I'm clearly misunderstanding something, or quite possibly a set of things. Any help to clear this up would be most appreciated!
Here is a go playground of the above examples. https://play.golang.org/p/Ny2_ENRlX6. And just for context, I'm trying to calculate the average time between events. I can fall back to using floats for seconds, but am trying to stay in time.Duration land.

Mathematically, you're correct: dividing two time.Durations should result in a dimensionless quantity. But that's not how Go's type system works: any arithmetic operation on two values of the same type yields a value of that type. You'll have to explicitly convert the result of the division to an int64 to get an "untyped" quantity.

It is so because time.Duration is an int64. See the documentation of the time package.
You are dividing 4000000000 (4s) by 4 (4ns) and you get 1000000000 (1s). You should look at the operands as if they were plain integers, not typed values. The Duration type makes them look like physical quantities, but for the division operation they are just numbers.

There are no units attached to a time.Duration. A time.Duration represents the physical concept of a duration (measured in seconds and having a unit) by providing a distinct type, namely the time.Duration type. But technically it is just an int64.
If you try to attach actual units to types you'll enter unit-hell: What would a (time.Duration * time.Duration)/acceleration.Radial * mass.MetricTon be? Undefined most probably.

Related

Is it safe to convert from int64 to float64?

As far as I know an int64 can be converted to a float64 in Go (the language allows this with float64(some_int64_variable)), but I also know that not all 64-bit signed integers can be represented in a double (because of IEEE 754 approximations).
We have some code which receives the price of an item in cents using int64 and does something like
const TB = 1 << 40

func ComputeSomething(numBytes int64) {
	terabytes := float64(numBytes) / float64(TB)
	// ...
}
I'm wondering how safe this is, since not all integers can be represented with doubles.
Depends on what you mean by "safe".
Yes, precision can be lost here in some cases. float64 cannot represent all values of int64 precisely (since it only has 53 bits of mantissa). So if you need a completely accurate result, this function is not "safe"; if you want to represent money in float64 you may get into trouble.
On the other hand, do you really need the number of terabytes with absolute precision? Will numBytes actually divide by TB accurately? That's pretty unlikely, but it all depends on your specification and needs. If your code has a counter of bytes and you want to display approximately how many TB it is (e.g. 0.05 TB or 2.124 TB) then this calculation is fine.
Answering "is it safe" really requires a better understanding of your needs, and what exactly you do with these numbers. So let's ask a related but more precise question that we can answer with certainty:
What is the minimum positive integer value that float64 cannot exactly represent?
For int64, this number turns out to be 9007199254740993, i.e. 2^53 + 1. This is the first integer that float64 "skips" over.
This might look quite large, and perhaps not so alarming. (If these are "cents", then it's about 90 trillion dollars.) But if you use a single-precision float, the answer might surprise you: for float32 that number is 16777217, which is about 168 thousand dollars if interpreted as cents. Good thing you're not using single-precision floats!
As a rule of thumb, you should never use float types (whatever the precision) for dealing with money. Floats are really not designed for discrete quantities like money, but for the fractional values that arise in scientific applications. Rounding errors can creep in and throw off your calculations. Use big-integer representations instead. Big-integer implementations may be slower, since they are mostly realized in software, but if you're dealing with money computations, I'd hazard a guess that you don't need the speed of the floating-point hardware.
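These thresholds are easy to verify in Go with a minimal check (2^53 + 1 for float64, 2^24 + 1 for float32):

```go
package main

import "fmt"

func main() {
	// 1<<53 is the last int64 that float64 represents exactly;
	// 1<<53 + 1 rounds back down to 1<<53, so the two compare equal.
	fmt.Println(float64(int64(1)<<53) == float64(int64(1)<<53+1)) // true

	// float32 skips much earlier: 1<<24 + 1 (16777217) is the first miss.
	fmt.Println(float32(int32(1)<<24) == float32(int32(1)<<24+1)) // true
}
```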

How many bytes are in a golang time object

I need to store a time value in a byte array in a Go project I am working on, and I have to declare the size of the array up front. I cannot find the length in bytes referenced anywhere. At this point I am planning to use MarshalBinary() from the time package to convert it to bytes and figure the size out manually. But I wanted to know if anyone has a reference for the number of bytes, and whether time.MarshalBinary() is the best method to use for converting to bytes.
The answer to this question is not as straightforward as it might seem. It depends a lot on how much detail you need to preserve in your marshaling.
As pointed out in another answer, you can simply use unsafe.Sizeof() to determine the in-memory size of a time object, but this has little resemblance to the actual marshaled size, for the simple reason that it contains a pointer. If we look at the definition of time.Time we see:
type Time struct {
	// wall and ext encode the wall time seconds, wall time nanoseconds,
	// and optional monotonic clock reading in nanoseconds.
	//
	// From high to low bit position, wall encodes a 1-bit flag (hasMonotonic),
	// a 33-bit seconds field, and a 30-bit wall time nanoseconds field.
	// The nanoseconds field is in the range [0, 999999999].
	// If the hasMonotonic bit is 0, then the 33-bit field must be zero
	// and the full signed 64-bit wall seconds since Jan 1 year 1 is stored in ext.
	// If the hasMonotonic bit is 1, then the 33-bit field holds a 33-bit
	// unsigned wall seconds since Jan 1 year 1885, and ext holds a
	// signed 64-bit monotonic clock reading, nanoseconds since process start.
	wall uint64
	ext  int64

	// loc specifies the Location that should be used to
	// determine the minute, hour, month, day, and year
	// that correspond to this Time.
	// The nil location means UTC.
	// All UTC times are represented with loc==nil, never loc==&utcLoc.
	loc *Location
}
Whether you care about the timezone info stored in loc is application-dependent. If you always store UTC times (usually the best approach), you can ignore that field entirely, which means you can get by storing just the two 64-bit fields, wall and ext.
But even these two fields depend on whether or not you're using a monotonic clock. When marshaling data, you almost certainly do not care about the monotonic clock, whether or not it's encoded in those bits.
What this means is that, in most cases, you should be able to store a full time object in 64 bits (8 bytes), plus a timezone indicator, if necessary.
Further, depending on the precision you need, you may be able to store only the seconds field (discarding sub-second precision), which needs only 33 bits. If you only care about minutes or days, you could use even less space.
You can use unsafe.Sizeof to get the in-memory size in bytes of a variable. I did this:
package main

import (
	"fmt"
	"time"
	"unsafe"
)

func main() {
	t := time.Now()
	fmt.Printf("a: %T, %d\n", t, unsafe.Sizeof(t))
}
Looks like it's 24 bytes! :)
Also, MarshalBinary looks like it works, although it depends on where you are sending the data and how you want to unmarshal it. It may be easier to simply convert the time to a string and use that, if you are consuming it from JavaScript or something.

What type to use to correctly handle division of odd numbers of cents? (or smallest units of any currency)

Most of the time I see recommendations to represent money as its most fundamental unit, and to use a 64-bit unsigned integer to provide maximal capacity.
On the surface this seems fine, but what about the case where I want to split 1 cent?
In Java/Scala the BigDecimal type, which I also see recommended for handling money, will track fractions of a cent: 0.01/2 = 0.005.
But dividing a 64-bit unsigned int: 1/2 = 0.
I'm trying to write some Go that handles money, and want to know which type to use (just use uint64 or find something else?).
Thank you!
You can use big.Rat for rational numbers of arbitrary size. Then you can split quantities to your heart's content without losing any precision.
int64 (or uint64) can still be used to represent monetary amounts with cent fractions. E.g. if the minimum amount you want to operate with is 0.01 cents, then you can represent 1 cent as 100; half a cent is then 50, and 1/100 of a cent is 1. This representation is very efficient (from a performance and memory-usage point of view) but not very flexible. Things to be aware of:
there is a maximum value (~2^64/100 cents) that you can represent using this method
changes will be required to the app and its stored data if the maximum precision changes
all arithmetic operations need to be carefully implemented, taking rounding into account

Determining Millisecond Time Intervals In Cocoa

Just as background, I'm building an application in Cocoa. This application existed originally in C++ in another environment. I'd like to do as much as possible in Objective-C.
My questions are:
1)
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
2)
When used in an Objective-C program that includes time.h, what are the units of
clock()?
Thank you for your help.
You can use CFAbsoluteTimeGetCurrent(), but bear in mind the wall clock can change between two calls and screw you over. If you want to protect against that, use CACurrentMediaTime().
The return type of these is CFAbsoluteTime and CFTimeInterval respectively, which are both double by default. So they return the number of seconds with double precision. If you really want an integer you can use mach_absolute_time() found in #include <mach/mach_time.h> which returns a 64 bit integer. This needs a bit of unit conversion, so check out this link for example code. This is what CACurrentMediaTime() uses internally so it's probably best to stick with that.
Computing the difference between two calls is obviously just a subtraction, use a variable to remember the last value.
For the clock function see the documentation here: clock(). Basically you need to divide the return value by CLOCKS_PER_SEC to get the actual time.
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
Is there any reason you need it as an integral number of milliseconds? Asking NSDate for the time interval since another date will give you a floating-point number of seconds. If you really do need milliseconds, you can simply multiply that by 1000 to get a floating-point number of milliseconds. If you really do need an integer, you can round or truncate the floating-point value.
If you'd like to do it with integers from start to finish, use either UpTime or mach_absolute_time to get the current time in absolute units, then use AbsoluteToNanoseconds to convert that to a real-world unit. Obviously, you'll have to divide that by 1,000,000 to get milliseconds.
QA1398 suggests mach_absolute_time, but UpTime is easier, since it returns the same type AbsoluteToNanoseconds uses (no “pointer fun” as shown in the technote).
AbsoluteToNanoseconds returns an UnsignedWide, which is a structure. (This stuff dates back to before Mac machines could handle scalar 64-bit values.) Use the UnsignedWideToUInt64 function to convert it to a scalar. That just leaves the subtraction, which you'll do the normal way.

Can dbms_utility.get_time rollover?

I'm having problems with a mammoth legacy PL/SQL procedure which has the following logic:
l_elapsed := dbms_utility.get_time - l_timestamp;
where l_elapsed and l_timestamp are of type PLS_INTEGER and l_timestamp holds the result of a previous call to get_time
This line suddenly started failing during a batch run with an ORA-01426: numeric overflow
The documentation on get_time is a bit vague, possibly deliberately so, but it strongly suggests that the return value has no absolute significance, and can be pretty much any numeric value. So I was suspicious to see it being assigned to a PLS_INTEGER, which can only support 32 bit integers. However, the interweb is replete with examples of people doing exactly this kind of thing.
The smoking gun is found when I invoke get_time manually: it is returning a value of -214512572, which is suspiciously close to the min value of a 32-bit signed integer. I'm wondering if, during the time elapsed between the first call to get_time and the next, Oracle's internal counter rolled over from its max value to its min value, resulting in an overflow when trying to subtract one from the other.
Is this a likely explanation? If so, is this an inherent flaw in the get_time function? I could just wait and see if the batch fails again tonight, but I'm keen to get an explanation for this behaviour before then.
Maybe late, but this may benefit someone searching on the same question.
The underlying implementation is a simple 32 bit binary counter, which is incremented every 100th of a second, starting from when the database was last started.
This binary counter is mapped onto a PL/SQL BINARY_INTEGER type, which is a signed 32-bit integer (there is no sign of it being changed to 64-bit on 64-bit machines).
So, presuming the clock starts at zero, it will hit the positive integer limit after about 248 days, then flip over to the most negative value and climb back up toward zero.
The good news is that, provided both numbers have the same sign, you can do a simple subtraction to find the duration; otherwise you can use the 32-bit remainder:

IF SIGN(:now) = SIGN(:then) THEN
  RETURN :now - :then;
ELSE
  RETURN MOD(:now - :then + POWER(2,32), POWER(2,32));
END IF;
Edit: this code will blow the integer limit and fail if the gap between the times is too large (over 248 days), but you shouldn't be using GET_TIME to compare durations measured in days anyway (see below).
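The same wraparound-safe subtraction translates to other languages; here is a Go sketch using unsigned 32-bit arithmetic (the tick values are made up). Unsigned subtraction handles the wraparound for free, with no sign check or MOD needed:

```go
package main

import "fmt"

// elapsedTicks returns the number of ticks from then to now on a
// 32-bit counter that wraps around. The result is correct as long as
// the real gap is under 2^32 ticks (~497 days at 100 ticks/second).
func elapsedTicks(now, then int32) uint32 {
	return uint32(now) - uint32(then)
}

func main() {
	// A counter that wrapped from near the max to a negative value.
	then := int32(2147483600) // just below the signed 32-bit maximum
	now := int32(-2147483600) // shortly after the counter flipped over
	fmt.Println(elapsedTicks(now, then)) // 96 ticks elapsed across the wrap
}
```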
Lastly - there's the question of why you would ever use GET_TIME.
Historically, it was the only way to get a sub-second time, but since the introduction of SYSTIMESTAMP, the only reason you would ever use GET_TIME is because it's fast - it is a simple mapping of a 32-bit counter, with no real type conversion, and doesn't make any hit on the underlying OS clock functions (SYSTIMESTAMP seems to).
As it only measures relative time, its only use is measuring the duration between two points. For any task that takes a significant amount of time (you know, over 1/1000th of a second or so) the cost of using a timestamp instead is insignificant.
The number of occasions where it is actually useful is minimal (the only one I've found is checking the age of data in a cache, where doing a clock hit on every access becomes significant).
From the 10g doc:
Numbers are returned in the range -2147483648 to 2147483647 depending on platform and machine, and your application must take the sign of the number into account in determining the interval. For instance, in the case of two negative numbers, application logic must allow that the first (earlier) number will be larger than the second (later) number which is closer to zero. By the same token, your application should also allow that the first (earlier) number be negative and the second (later) number be positive.
So while it is safe to assign the result of dbms_utility.get_time to a PLS_INTEGER it is theoretically possible (however unlikely) to have an overflow during the execution of your batch run. The difference between the two values would then be greater than 2^31.
If your job takes a lot of time (therefore increasing the chance that the overflow will happen), you may want to switch to a TIMESTAMP datatype.
Assigning a value outside the 32-bit range to your PLS_INTEGER variable does raise an ORA-01426:
SQL> l
  1  declare
  2    a pls_integer;
  3  begin
  4    a := -power(2,33);
  5* end;
SQL> /
declare
*
ERROR at line 1:
ORA-01426: numeric overflow
ORA-06512: at line 4
However, you seem to suggest that -214512572 is close to -2^31, but it's not, unless you forgot to type a digit. Are we looking at a smoking gun?
Regards,
Rob.
