Recover long from int after precision loss - algorithm

I inherited an application that took 2281534395476 and incorrectly cast it to the integer 906761300. I'm looking to essentially un-cast it back to its proper and correct (long) data type.
What is a reproducible algorithm for doing so that could be run over a list? I'm looking for pseudocode or something of the sort, although the end result will be written in Ruby.

You simply cannot get the value back once it has been cast from long to int. Casting from long to int discards bits, so there is no generic algorithm that can recover those bits and reconstruct the lost long value.
On a system where long and int use the same number of bits, the conversion is trivial: it simply copies the value, and in that case you could get the value back.
On a system where long is wider than int, if the long value won't fit in an int, then the result of the conversion is implementation-defined. What typically happens is that the high-order bits are discarded. Your system appears to be this kind, so you can't get the long value back.
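A minimal C sketch of the truncation described above, assuming a 64-bit long and a 32-bit int as on the asker's system (the numbers are the ones from the question). It also shows why the cast cannot be reversed: distinct longs collapse to the same int.

#include <stdio.h>

int main(void) {
    /* Assumes a 64-bit long and a 32-bit int, matching the behaviour described above. */
    long original = 2281534395476L;
    int truncated = (int)original;                 /* high-order bits are discarded */
    printf("%ld -> %d\n", original, truncated);    /* prints: 2281534395476 -> 906761300 */

    /* A completely different long collapses to the very same int,
       which is why no algorithm can reverse the cast. */
    long another = original + (1L << 32);          /* 2285829362772 */
    printf("%ld -> %d\n", another, (int)another);  /* prints: 2285829362772 -> 906761300 */
    return 0;
}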

Related

In Golang, uint16 VS int, which is less cost?

I am using a 64-bit server. My Go program needs an integer type.
If I use the uint16 and uint32 types in my source code, does that cost more than using the regular int type?
I am considering both computing cost and development cost.
For the vast majority of cases using int makes more sense.
Here are some reasons:
Go doesn't implicitly convert between numeric types, even when you think it should. If you start using some unsigned type instead of int, you should expect to pepper your code with type conversions, because other libraries and APIs often prefer not to bother with unsigned types, untyped constant numeric expressions default to int, and so on.
Unsigned types are more prone to underflowing than signed types, because 0 (an unsigned type's boundary value) is much more of a naturally occurring value in computer programs than, for example, -9223372036854775808.
If you want to use an unsigned type because it restricts the values that you can put in it, keep in mind that when you combine silent underflow with compile-time-only constant propagation, you probably aren't getting the bargain you were looking for. For example, while you cannot convert the constant math.MinInt64 to a uint, you can easily convert an int variable holding math.MinInt64 to a uint. And arguably it's not bad Go style to have an if statement check whether the value you're trying to assign is valid for your program.
Unless you are experiencing significant memory pressure and your value space is somewhere slightly over what a smaller signed type would offer you, I'd think that using int will be much more efficient even if only because of development cost.
And even then, chances are that either there's a problem somewhere else in your program's memory footprint, or a managed language like Go is not the best fit for your needs.
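The question is about Go, but the underflow behaviour near zero is easy to demonstrate in any language with fixed-width unsigned integers. Here is a minimal C sketch (unsigned wraparound is defined behaviour in C, just as it is in Go; the variable names are made up for illustration):

#include <stdio.h>

int main(void) {
    unsigned int count = 0;       /* a counter that naturally starts at the boundary value 0 */
    count--;                      /* silent underflow: wraps around to the maximum value */
    printf("%u\n", count);        /* prints 4294967295 with a 32-bit unsigned int */

    int delta = -1;               /* an ordinary signed value... */
    unsigned int converted = (unsigned int)delta;
    printf("%u\n", converted);    /* ...silently becomes 4294967295 when converted */
    return 0;
}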

What are reasons to use a sized or unsigned integer type in go?

Doing the Go tour, in the chapter "Basics/Basic types" it says:
When you need an integer value you should use int unless you have a specific reason to use a sized or unsigned integer type.
What are those specific reasons? Can we name them all?
Other available resources only talk about 32- and 64-bit signed and unsigned types. But why would someone use int types smaller than 32 bits?
If you cannot think of a reason not to use a standard int, you should use a standard int. In most cases, saving memory isn't worth the extra effort, and you are probably not going to need to store such large values anyway.
If you are storing a very large number of small values, you might be able to save a lot of memory by switching to a smaller datatype such as byte. Storing 8-bit values in an int means storing 24 (or 56, on a 64-bit platform) bits of zeroes for every 8 bits of data, and thus wasting a lot of space. Of course, you could pack 4 (or maybe 8) bytes inside an int with some bit-shift magic (sketched just after this answer), but why do the hard work when you can let the compiler and the CPU do it for you?
If you are trying to do computations that might not fit inside a 32-bit integer, you might want an int64 instead, or even a big.Int from math/big.
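For concreteness, here is a small C sketch of the bit-shift packing mentioned above, squeezing four 8-bit values into one 32-bit word and unpacking them again; the values are arbitrary:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a = 0x12, b = 0x34, c = 0x56, d = 0x78;

    /* Pack four 8-bit values into one 32-bit word. */
    uint32_t packed = ((uint32_t)a << 24) | ((uint32_t)b << 16) |
                      ((uint32_t)c << 8)  |  (uint32_t)d;

    /* Unpack two of them again by shifting and masking. */
    unsigned first = (packed >> 24) & 0xFF;
    unsigned last  =  packed        & 0xFF;

    printf("packed = 0x%08X, first = 0x%02X, last = 0x%02X\n",
           (unsigned)packed, first, last);
    return 0;
}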

C comparing times that can overflow

I need to detect the overflowing of an unsigned long.
This variable holds the number of milliseconds since the device started running (it's an Arduino). Doing sizeof(unsigned long), I have come to see it's indeed a 32-bit number. It increments every millisecond, which means the device will run for about 49 days before this value overflows.
Since it's for a home system, failing after 49 days isn't really acceptable. What I'm using the number for is checking whether the current time is larger than the previous time plus a debouncing interval:
if(timeChanged + amountOfMs < currentTime){ ... }
Needless to say, once the overflow occurs this isn't going to work anymore. What's an efficient way to solve this? I've thought about having a second timer as well, to check whether the milliseconds one has overflowed, but in the end I'll have the same problem.
This rollover issue seems to cause quite a bit of confusion...
The right answer is that you need not worry about the millis() rollover, as long as you do your calculation properly.
This is bad:
if (timeChanged + amountOfMs < currentTime) { ... }
This is good (rollover-safe):
if (currentTime - timeChanged > amountOfMs) { ... }
The reason it works is that arithmetic with unsigned integers (unsigned long in your case) reliably works modulo max+1 (ULONG_MAX + 1 is 2^32 here). Thus currentTime, timeChanged and their difference always have the correct value, modulo 2^32. As long as you test your button more often than once every 49 days (which is likely), the difference will be in the range of an unsigned long, and your test will be correct.
To put it another way: if millis() rolls over between timeChanged and currentTime, then the difference currentTime - timeChanged would seem to be negative. But since the difference is actually computed with unsigned numbers, it wraps around to the correct result. I don't like this explanation much, though, as it sounds like one error compensating for another. The truth is: if you think of unsigned numbers in terms of modular arithmetic, there is no error anywhere.
This is such a common mistake (and one that I've made myself) that the Arduino Playground has a nice, thorough, and correct answer. See https://playground.arduino.cc/Code/TimingRollover
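To make the modular arithmetic concrete, here is a small standalone C sketch with made-up timestamps on either side of the rollover; uint32_t stands in for the Arduino's 32-bit unsigned long:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical millis() readings straddling the rollover at 4294967295. */
    uint32_t timeChanged = 4294967000u;  /* shortly before the wrap */
    uint32_t amountOfMs  = 500u;
    uint32_t currentTime = 4294967100u;  /* only 100 ms later, not wrapped yet */

    /* Bad test: timeChanged + amountOfMs wraps around to 204, so this
       fires even though only 100 ms have elapsed. */
    if (timeChanged + amountOfMs < currentTime)
        printf("bad test fires too early\n");

    /* Good test: the subtraction is also modulo 2^32, so it always yields
       the true elapsed time (here 100 ms), and the comparison stays correct
       even once currentTime itself wraps past zero. */
    if (currentTime - timeChanged > amountOfMs)
        printf("good test fires\n");
    else
        printf("good test correctly waits (elapsed = %u ms)\n",
               (unsigned)(currentTime - timeChanged));

    return 0;
}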
You can add an if statement checking the condition if (currentTime == 0xFFFFFFFE).
When that is true, the variable is about to overflow, so at that point you can manually reset it to zero and go back to the top of the loop, where it starts from zero again.
This might or might not help your situation. I can't say for sure because you haven't shared any further details about your code.
Define two variables, I'm going to call them 'now' and 'lastNow'.
unsigned long now;
unsigned long lastNow = 0;
In your loop you can now do this:
now = millis();
if (now < lastNow) {
// rollover!
}
lastNow = now;
Nice and reliable regardless of how frequently (or infrequently) you loop.

Internal allocation VB

Let's say I have an array Trial() As Integer.
I have the following two variables defined as Integer:
Dim Left As Integer
Dim Right As Integer
Now I am increasing the upper bound of Trial:
ReDim Preserve Trial(Left+Right)
Now, if the total (Left + Right) exceeds the Integer limit, the line above gives an overflow error.
If I redeclare Left as Long, then it works fine.
Actually, I want to understand the internal calculation of (Left + Right).
Does it allocate space for the total depending on the datatype of Left?
Or does it also depend on the datatype of Right?
It depends on both. The compiler will examine both operands and determine from them what data type the result needs. For example, if you were to add (or multiply, or divide) an Integer and a Long, the result will be a Long.
The calculation Left + Right is done as if the result were an Integer too; that's where the overflow occurs.
If you write CLng(Left) + CLng(Right) instead, the calculation is done the same way, only the result will be a Long, and thus no overflow occurs.
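VB aside, the same principle can be sketched in C, with int and long long standing in for Integer and Long: the addition is carried out at the operands' type, so you widen an operand first if you want a wider result.

#include <stdio.h>

int main(void) {
    int left  = 2000000000;   /* close to the 32-bit int maximum of 2147483647 */
    int right = 2000000000;

    /* left + right would be evaluated as an int addition first, so it overflows
       before the result is widened (undefined behaviour for signed int in C;
       a runtime overflow error in VB). */
    /* long long bad = left + right; */

    /* Widening an operand first makes the whole addition happen at the wider
       type, just like CLng(Left) + CLng(Right). */
    long long good = (long long)left + right;
    printf("%lld\n", good);   /* prints 4000000000 */
    return 0;
}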

Determining Millisecond Time Intervals In Cocoa

Just as background, I'm building an application in Cocoa. This application existed originally in C++ in another environment. I'd like to do as much as possible in Objective-C.
My questions are:
1)
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
2)
When used in an Objective-C program that includes time.h, what are the units of
clock()
Thank you for your help.
You can use CFAbsoluteTimeGetCurrent(), but bear in mind the system clock can be adjusted between two calls and throw off your measurement. If you want to protect against that, you should use CACurrentMediaTime().
The return types of these are CFAbsoluteTime and CFTimeInterval respectively, which are both typedefs for double. So they return the number of seconds with double precision. If you really want an integer you can use mach_absolute_time(), declared in <mach/mach_time.h>, which returns a 64-bit integer. This needs a bit of unit conversion, so check out this link for example code. This is what CACurrentMediaTime() uses internally, so it's probably best to stick with that.
Computing the difference between two calls is obviously just a subtraction, use a variable to remember the last value.
For the clock() function, see its documentation: basically you need to divide the return value by CLOCKS_PER_SEC to get the value in seconds. Note that clock() measures processor time used by your program, not wall-clock time.
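For reference, a rough sketch of the mach_absolute_time() approach on macOS, using mach_timebase_info() for the unit conversion; the usleep() call is just a placeholder for whatever work is being timed:

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);          /* ratio converting absolute ticks to nanoseconds */

    uint64_t start = mach_absolute_time();
    usleep(250 * 1000);                     /* placeholder for the work being timed */
    uint64_t end = mach_absolute_time();

    uint64_t elapsed_ns = (end - start) * timebase.numer / timebase.denom;
    uint64_t elapsed_ms = elapsed_ns / 1000000;
    printf("elapsed: %llu ms\n", (unsigned long long)elapsed_ms);
    return 0;
}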
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
Is there any reason you need it as an integral number of milliseconds? Asking NSDate for the time interval since another date will give you a floating-point number of seconds. If you really do need milliseconds, you can simply multiply that by 1000 to get a floating-point number of milliseconds. If you really do need an integer, you can round or truncate the floating-point value.
If you'd like to do it with integers from start to finish, use either UpTime or mach_absolute_time to get the current time in absolute units, then use AbsoluteToNanoseconds to convert that to a real-world unit. Obviously, you'll have to divide that by 1,000,000 to get milliseconds.
QA1398 suggests mach_absolute_time, but UpTime is easier, since it returns the same type AbsoluteToNanoseconds uses (no “pointer fun” as shown in the technote).
AbsoluteToNanoseconds returns an UnsignedWide, which is a structure. (This stuff dates back to before Mac machines could handle scalar 64-bit values.) Use the UnsignedWideToUInt64 function to convert it to a scalar. That just leaves the subtraction, which you'll do the normal way.
