How to detect integer overflow in int64 variables with X++

Is there a reliable way to detect integer overflow in int64 variables with X++?
Example:
int64 n;
n = 0x7FFFFFFFFFFFFFFF;
n += 1;
info(int642str(n));
Reference:
Integers [AX 2012]
How to detect double precision floating point overflow and underflow?

Adapted code from here:
int64 a, b;

#localmacro.ABS_INT
    (%1 < 0 ? 0 - %1 : %1)
#endmacro

a = 0x7FFFFFFFFFFFFFFF;
b = 1;

if (((a < 0) == (b < 0)) && (#ABS_INT(b) > int64Max() - #ABS_INT(a)))
{
    throw error(strFmt("Integer overflow in %1.", funcName()));
}

a += b;
info(int642str(a));
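For readers outside X++, the same guard can be phrased in portable C. The sketch below is mine, not part of the original post; it uses the standard compare-against-limit formulation instead of the abs-based macro, and the helper name checked_add is my own (on GCC and Clang, the __builtin_add_overflow intrinsic does the same job in one call).

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Returns true and stores a + b in *out if no overflow occurs,
   false otherwise. Illustrative helper; not from the X++ post. */
static bool checked_add(int64_t a, int64_t b, int64_t *out)
{
    /* Signed overflow is only possible when both operands push
       the same direction. */
    if (b > 0 && a > INT64_MAX - b) return false;  /* would wrap high */
    if (b < 0 && a < INT64_MIN - b) return false;  /* would wrap low  */
    *out = a + b;
    return true;
}

int main(void)
{
    int64_t n = INT64_MAX, sum;
    if (!checked_add(n, 1, &sum))
        fprintf(stderr, "Integer overflow detected.\n");
    return 0;
}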

Related

Overflow detection in unsigned division algorithm

I have a question about the 64/32-bit division algorithm as it appears in Hacker's Delight, Chapter 9-4 (Unsigned Long Division), Figure 9-3, "divlu2". Online it can be seen here, from where I copy-pasted it as follows:
unsigned divlu2(unsigned u1, unsigned u0, unsigned v,
                unsigned *r) {
    const unsigned b = 65536;  // Number base (16 bits).
    unsigned un1, un0,         // Norm. dividend LSD's.
             vn1, vn0,         // Norm. divisor digits.
             q1, q0,           // Quotient digits.
             un32, un21, un10, // Dividend digit pairs.
             rhat;             // A remainder.
    int s;                     // Shift amount for norm.

    if (u1 >= v) {             // If overflow, set rem.
        if (r != NULL)         // to an impossible value,
            *r = 0xFFFFFFFF;   // and return the largest
        return 0xFFFFFFFF;     // possible quotient.
    }
    s = nlz(v);                // 0 <= s <= 31.
    v = v << s;                // Normalize divisor.
    vn1 = v >> 16;             // Break divisor up into
    vn0 = v & 0xFFFF;          // two 16-bit digits.

    un32 = (u1 << s) | (u0 >> 32 - s) & (-s >> 31);
    un10 = u0 << s;            // Shift dividend left.

    un1 = un10 >> 16;          // Break right half of
    un0 = un10 & 0xFFFF;       // dividend into two digits.

    q1 = un32/vn1;             // Compute the first
    rhat = un32 - q1*vn1;      // quotient digit, q1.
again1:
    if (q1 >= b || q1*vn0 > b*rhat + un1) {
        q1 = q1 - 1;
        rhat = rhat + vn1;
        if (rhat < b) goto again1;
    }
    un21 = un32*b + un1 - q1*v; // Multiply and subtract.

    q0 = un21/vn1;             // Compute the second
    rhat = un21 - q0*vn1;      // quotient digit, q0.
again2:
    if (q0 >= b || q0*vn0 > b*rhat + un0) {
        q0 = q0 - 1;
        rhat = rhat + vn1;
        if (rhat < b) goto again2;
    }
    if (r != NULL)             // If remainder is wanted,
        *r = (un21*b + un0 - q0*v) >> s; // return it.
    return q1*b + q0;
}
Specifically, I'm interested in the bounds of the variable un21. How large can it be? Somewhat surprisingly, it can be larger than v, but by how much?
In other words, under again2 there is the test q0 >= b. If I wanted to know whether the division q0 = un21/vn1 eventually overflows, is it enough to test (un21 >> 16) == vn1, or does it have to read (un21 >> 16) >= vn1 instead of q0 >= b?
The idea is to know in advance, before computing the quotient, whether the division overflows.
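One way to attack this empirically is to shrink the algorithm until the whole input space can be enumerated. The harness below is my own illustrative sketch, not from the question: it runs the same normalize/correct/multiply-subtract steps with 8-bit words and 4-bit digits (base 16), so every (u1, u0, v) combination can be tried; nlz8, gtSeen, and maxExcess are my names. Since q0 = un21/vn1 >= b holds exactly when (un21 >> 4) >= vn1, recording whether the strict case (un21 >> 4) > vn1 ever occurs answers whether an equality test would suffice at this scale.

#include <stdio.h>

/* Count leading zeros in an 8-bit value (v != 0). */
static int nlz8(unsigned v)
{
    int n = 0;
    for (int i = 7; i >= 0 && !((v >> i) & 1); --i)
        n++;
    return n;
}

int main(void)
{
    const unsigned b = 16;   /* base: 4-bit digits instead of 16-bit */
    unsigned maxExcess = 0;  /* largest (un21 >> 4) - vn1 observed   */
    int gtSeen = 0;          /* did (un21 >> 4) > vn1 ever happen?   */

    for (unsigned v = 1; v < 256; ++v)
        for (unsigned u1 = 0; u1 < v; ++u1)   /* skip declared-overflow cases */
            for (unsigned u0 = 0; u0 < 256; ++u0) {
                int s = nlz8(v);
                unsigned vs   = (v << s) & 0xFF;          /* normalized divisor */
                unsigned vn1  = vs >> 4, vn0 = vs & 0xF;
                unsigned un32 = (u1 << s) | (s ? u0 >> (8 - s) : 0);
                unsigned un10 = (u0 << s) & 0xFF;
                unsigned un1  = un10 >> 4;

                unsigned q1   = un32 / vn1;               /* first digit, then  */
                unsigned rhat = un32 - q1 * vn1;          /* Knuth's correction */
                while (q1 >= b || q1 * vn0 > b * rhat + un1) {
                    q1--;
                    rhat += vn1;
                    if (rhat >= b) break;
                }
                unsigned un21 = un32 * b + un1 - q1 * vs; /* multiply-subtract  */
                unsigned top  = un21 >> 4;

                if (top > vn1) {
                    gtSeen = 1;
                    if (top - vn1 > maxExcess)
                        maxExcess = top - vn1;
                }
            }

    printf("(un21 >> 4) > vn1 seen: %s, max excess: %u\n",
           gtSeen ? "yes" : "no", maxExcess);
    return 0;
}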

Binary searching via bitmasking?

I have used this algorithm many times to binary search over Ints or Longs. Basically, I start from Long.MinValue and Long.MaxValue and decide whether to set the bit at the ith position depending on the value of the function I am maximizing (or minimizing). In practice, this turns out to be faster (exactly 63*2 bitwise operations), easier to code, and free of the many gotchas of traditional binary search implementations.
Here is my algorithm in Scala:
/**
 * @return Some(x) such that x is the largest number for which f(x) is true
 *         If no such x is found, return None
 */
def bitBinSearch(f: Long => Boolean): Option[Long] = {
  var n = 1L << 63
  var p = 0L
  for (i <- 62 to 0 by -1) {
    val t = 1L << i
    if (f(n + t)) n += t
    if (f(p + t)) p += t
  }
  if (f(p)) Some(p) else if (f(n)) Some(n) else None
}
I have 3 questions:
1. What is this algorithm called in the literature? Surely I can't be the inventor of this, but I found nothing when googling various combinations of binary-search + bit-masking/toggling. I have been personally calling it "bitBinSearch". I have not seen it mentioned at all in articles covering binary search over an Int or Long domain, where it would be trivial to write.
2. Can the code be improved/shortened in any way? Right now I keep track of the negative and positive solutions in n and p. Is there a clever way to merge them into a single variable? Here are some sample test cases: http://scalafiddle.net/console/70a3e3e59bc61c8eb7acfbba1073980c before you attempt an answer.
3. Is there a version that can be made to work with Doubles and Floats?
As long as you're bit-twiddling (a popular pastime in some circles) why not go all the way? I don't know if there's any efficiency to be gained, but I think it actually makes the algorithm a little clearer.
def bitBinSearch(f: Long => Boolean): Option[Long] = {
  var n = Long.MinValue
  var p = 0L
  var t = n >>> 1
  while (t > 0) {
    if (f(n|t)) n |= t
    if (f(p|t)) p |= t
    t >>= 1
  }
  List(p, n).find(f)
}
Of course, if you go recursive you can eliminate those nasty vars.
import scala.annotation.tailrec

@tailrec
def bitBinSearch( f: Long => Boolean
                , n: Long = Long.MinValue
                , p: Long = 0L
                , t: Long = Long.MinValue >>> 1 ): Option[Long] = {
  if (t > 0) bitBinSearch(f
                         , if (f(n|t)) n|t else n
                         , if (f(p|t)) p|t else p
                         , t >> 1
                         )
  else List(p, n).find(f)
}
Again, probably not more efficient, but perhaps a bit more Scala-like.
UPDATE
Your comment about Int/Long got me wondering if one function could do it all.
After traveling down a few dead-ends I finally came up with this (which is, oddly, actually pretty close to your original code).
import Integral.Implicits._
import Ordering.Implicits._

def bitBinSearch[I](f: I => Boolean)(implicit ev: Integral[I]): Option[I] = {
  def topBit(x: I = ev.one): I = if (x+x < ev.zero) x else topBit(x+x)
  var t: I = topBit()
  var p: I = ev.zero
  var n: I = t+t
  while (t > ev.zero) {
    if (f(p+t)) p += t
    if (f(n+t)) n += t
    t /= (ev.one+ev.one)
  }
  List(p, n).find(f)
}
This passes the following tests.
assert(bitBinSearch[Byte] (_ <= 0) == Some(0))
assert(bitBinSearch[Byte] (_ <= 1) == Some(1))
assert(bitBinSearch[Byte] (_ <= -1) == Some(-1))
assert(bitBinSearch[Byte] (_ <= 100) == Some(100))
assert(bitBinSearch[Byte] (_ <= -100) == Some(-100))
assert(bitBinSearch[Short](_ <= 10000) == Some(10000))
assert(bitBinSearch[Short](_ <= -10000) == Some(-10000))
assert(bitBinSearch[Int] (_ <= Int.MinValue) == Some(Int.MinValue))
assert(bitBinSearch[Int] (_ <= Int.MaxValue) == Some(Int.MaxValue))
assert(bitBinSearch[Long] (_ <= Long.MinValue) == Some(Long.MinValue))
assert(bitBinSearch[Long] (_ <= Long.MaxValue) == Some(Long.MaxValue))
assert(bitBinSearch[Long] (_ < Long.MinValue) == None)
I don't know Scala, but this is my version of binary searching via bitmasking, in Java.
My algorithm works like this:
We start with i at the highest power of 2 that fits in the array length and end at 2^0. Whenever A[index | i] <= item, we update index |= i.
After the iteration, index gives the index of the item if it is present in the array, and otherwise the index of its floor value in A.
int find(int[] A, int item) { // A uses 1-based indexing
    int index = 0;
    int N = A.length;
    for (int i = Integer.highestOneBit(N); i > 0; i >>= 1) {
        int j = index | i;
        if (j < N && A[j] <= item) {
            index = j;
            if (A[j] == item) break;
        }
    }
    return item == A[index] ? index : -1;
}
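For comparison across languages, here is a minimal C sketch of the same bit-by-bit search, phrased over a monotone predicate; the names bitSearchMax and atMostMillion are mine, and the unsigned domain sidesteps the separate n/p bookkeeping of the signed Scala version.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Largest x with pred(x) true, assuming pred is monotone:
   true on [0, k] and false above. Returns 0 if pred is never true. */
static uint64_t bitSearchMax(bool (*pred)(uint64_t))
{
    uint64_t x = 0;
    for (uint64_t t = UINT64_C(1) << 63; t != 0; t >>= 1)
        if (pred(x | t))   /* keep the bit only if still feasible */
            x |= t;
    return x;
}

static bool atMostMillion(uint64_t x) { return x <= 1000000; }

int main(void)
{
    printf("%llu\n", (unsigned long long)bitSearchMax(atMostMillion));
    return 0;  /* prints 1000000 */
}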

Smallest number in a range [a,b] with maximum number of '1' in binary representation

Given a range [a,b] (both inclusive), I need to find the smallest number with the maximum number of '1's in its binary representation. My current approach is to count the set bits in every number from a to b and keep track of the maximum.
However, this is very slow; is there a faster method?
Let's find the most significant bit that differs between a and b. It will be 0 in a and 1 in b. If we take a and set all bits to the right of that bit to 1, the resulting number will still be in the range [a, b], and it will be the single number with the maximum number of ones in its representation.
EDIT. This algorithm always returns a number with n-1 bits set to one, where n is the number of bits that can be changed. As pointed out in the comments, there is a bug in the case where all of these n bits in b are set to 1. Here is the fixed code snippet:
int maximizeBits(int a, int b) {
    if (a == b) {
        return a;
    }
    int m = a ^ b, pow2 = 1; // MSB of m = a^b is the bit that we need to find
    while (m > pow2) {       // Set other bits to 0
        if ((m & pow2) != 0) {
            m ^= pow2;
        }
        pow2 <<= 1;
    }
    int res = a | (m - 1);   // Now m is of the form 2^n and m - 1 is a mask of the n lower bits
    if ((res | b) <= b) {    // Fix for the case where all n bits in b are set to 1
        res = b;
    }
    return res;
}
You can replace the loop in Jarlax' answer by a "parallel suffix OR", like this:
uint32_t m = (a ^ b) >> 1;
m |= m >> 1;
m |= m >> 2;
m |= m >> 4;
m |= m >> 8;
m |= m >> 16;
uint32_t res = a | m;
if ((res | b) <= b)
    res = b;
return res;
It generalizes to integers of other sizes, using ceil(log2(k)) steps for k-bit integers. The initial test a == b is not necessary: in that case a ^ b is zero, therefore m is zero, so nothing interesting happens anyway.
Alternatively, here's a completely different approach: keep changing the lowest 0 to a 1 until it is no longer possible.
unsigned x = a;
while (x < b) {
    unsigned newx = (x + 1) | x; // set lowest 0
    if (newx <= b)
        x = newx;
    else
        break;
}
return x;
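All these variants are easy to get subtly wrong at the range boundaries, so a brute-force cross-check is cheap insurance. This is my own test scaffold, not from the answers; it assumes the maximizeBits definition above is linked in and uses GCC/Clang's __builtin_popcount, which is a compiler assumption.

#include <assert.h>
#include <stdio.h>

int maximizeBits(int a, int b);  /* the fixed function from above */

/* Naive reference: scan [a, b]; a strictly greater popcount wins,
   so ties keep the smallest number. */
static int naive(int a, int b)
{
    int best = a;
    for (int x = a + 1; x <= b; ++x)
        if (__builtin_popcount(x) > __builtin_popcount(best))
            best = x;
    return best;
}

int main(void)
{
    for (int a = 0; a <= 512; ++a)
        for (int b = a; b <= 512; ++b)
            assert(maximizeBits(a, b) == naive(a, b));
    puts("all ranges agree");
    return 0;
}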

Add two numbers without using + and - operators

Suppose you have two numbers, both signed integers, and you want to sum them but can't use your language's conventional + and - operators. How would you do that?
Based on http://www.ocf.berkeley.edu/~wwu/riddles/cs.shtml
Not mine, but cute
int a = 42;
int b = 17;
char *ptr = (char*)a;        // pretend a is an address
int result = (int)&ptr[b];   // &ptr[b] is ptr + b, i.e. a + b
Using Bitwise operations just like Adder Circuits
Cringe. Nobody builds an adder from 1-bit adders anymore.
do {
    sum = a ^ b;        // partial sum, ignoring carries
    carry = a & b;      // bit positions that generate a carry
    a = sum;
    b = carry << 1;     // carries move one position left
} while (b);
return sum;
Of course, arithmetic here is assumed to be unsigned modulo 2^n or two's complement. It's only guaranteed to work in C if you convert to unsigned, perform the calculation unsigned, and then convert back to signed.
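A minimal sketch of that unsigned-detour advice, assuming two's-complement int32_t (the wrapper name add_nc is mine):

#include <stdint.h>
#include <stdio.h>

/* Add via XOR/AND carry propagation, done entirely in unsigned
   arithmetic so the intermediate shifts are well-defined in C. */
static int32_t add_nc(int32_t x, int32_t y)
{
    uint32_t a = (uint32_t)x, b = (uint32_t)y, sum, carry;
    do {
        sum   = a ^ b;    /* bitwise sum without carries */
        carry = a & b;    /* positions that carry out    */
        a = sum;
        b = carry << 1;
    } while (b);
    return (int32_t)sum;  /* convert back to signed      */
}

int main(void)
{
    printf("%d\n", add_nc(-5, 12));  /* prints 7 */
    return 0;
}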
Since ++ and -- are not + and - operators:
int add(int lhs, int rhs) {
    if (lhs < 0)
        while (lhs++) --rhs;
    else
        while (lhs--) ++rhs;
    return rhs;
}
Using bitwise logic:
int sum = 0;
int carry = 0;
int i = 0;
while (n1 > 0 || n2 > 0) {
    int b1 = n1 % 2;
    int b2 = n2 % 2;
    int sumBits = b1 ^ b2 ^ carry;
    sum |= sumBits << i;   // place each bit at its own position
    carry = (b1 & b2) | (b1 & carry) | (b2 & carry);
    n1 /= 2;
    n2 /= 2;
    i++;
}
sum |= carry << i;         // keep the final carry
Here's something different from what's been posted already. Use the facts that:
log (a^b) = b * log a
e^a * e^b = e^(a + b)
So:
log (e^(a + b)) = log(e^a * e^b) = a + b (if the log is base e)
So just find log(e^a * e^b).
Of course this is just theoretical; in practice it is going to be inefficient and most likely inexact too.
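To make the inefficiency and inexactness concrete, here is a small illustrative C sketch of the identity (exp overflows double once a + b exceeds roughly 709, so it only works for modest inputs):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 42.0, b = 17.0;
    /* log(e^a * e^b) = log(e^(a+b)) = a + b, up to rounding */
    double sum = log(exp(a) * exp(b));
    printf("%.17g\n", sum);   /* close to, but maybe not exactly, 59 */
    return 0;
}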
If we're obeying the letter of the rules:
a += b;
Otherwise http://www.geekinterview.com/question_details/67647 has a pretty complete list.
This version has a restriction on the number range (the result is taken modulo 0xFFFFFFFF):
(((int64_t)a << 32) | ((int64_t)b & INT64_C(0xFFFFFFFF))) % 0xFFFFFFFF
It works because 2^32 ≡ 1 (mod 2^32 - 1), so packing a into the high word and b into the low word and reducing modulo 0xFFFFFFFF folds the two halves together into a + b.
This also counts under the "letter of the rules" category.
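A small demo of that expression, with my own surrounding main; for small nonnegative operands the modular fold reproduces the exact sum:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t a = 3, b = 4;
    /* (a * 2^32 + b) mod (2^32 - 1) == a + b for small a, b,
       because 2^32 is congruent to 1 modulo 2^32 - 1. */
    int64_t sum = (((int64_t)a << 32) | ((int64_t)b & INT64_C(0xFFFFFFFF)))
                  % 0xFFFFFFFF;
    printf("%" PRId64 "\n", sum);  /* prints 7 */
    return 0;
}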
Simple example in Python, complete with a simple test:
NUM_BITS = 32

def adder(a, b, carry):
    sum = a ^ b ^ carry
    carry = (a & b) | (carry & (a ^ b))
    # print "%d + %d = %d (carry %d)" % (a, b, sum, carry)
    return sum, carry

def add_two_numbers(a, b):
    carry = 0
    result = 0
    for n in range(NUM_BITS):
        mask = 1 << n
        bit_a = (a & mask) >> n
        bit_b = (b & mask) >> n
        sum, carry = adder(bit_a, bit_b, carry)
        result = result | (sum << n)
    return result

if __name__ == '__main__':
    assert add_two_numbers(2, 3) == 5
    assert add_two_numbers(57, 23) == 80
    for a in range(10):
        for b in range(10):
            result = add_two_numbers(a, b)
            print "%d + %d == %d" % (a, b, result)
            assert result == a + b
In Common Lisp:
(defun esoteric-sum (a b)
  (let ((and (logand a b)))
    (if (zerop and)
        ;; No carrying necessary.
        (logior a b)
        ;; Combine the partial sum with the carried bits again.
        (esoteric-sum (logxor a b) (ash and 1)))))
That's taking the bitwise-and of the numbers, which figures out which bits need to carry, and, if there are no bits that require shifting, returns the bitwise-or of the operands. Otherwise, it shifts the carried bits one to the left and combines them again with the bitwise-exclusive-or of the numbers, which sums all the bits that don't need to carry, until no more carrying is necessary.
Here's an iterative alternative to the recursive form above:
(defun esoteric-sum-iterative (a b)
  (loop for first = a then (logxor first second)
        for second = b then (ash and 1)
        for and = (logand first second)
        until (zerop and)
        finally (return (logior first second))))
Note that the function needs another concession to overcome Common Lisp's reluctance to employ fixed-width two's complement arithmetic—normally an immeasurable asset—but I'd rather not cloud the form of the function with that accidental complexity.
If you need more detail on why that works, please ask a more detailed question to probe the topic.
Not very creative, I know, but in Python:
sum([a,b])
I realize that this might not be the most elegant solution to the problem, but I figured out a way to do this using the len(list) function as a substitute for the addition operator.
'''
Addition without operators: This program obtains two integers from the user
and then adds them together without using operators. This is one of the 'hard'
questions from 'Cracking the Coding Interview'.
'''
print('Welcome to addition without a plus sign!')

item1 = int(input('Please enter the first number: '))
item2 = int(input('Please enter the second number: '))

item1_list = []
item2_list = []
total = 0
total_list = []
marker = 'x'
placeholder = 'placeholder'

while len(item1_list) < item1:
    item1_list.append(marker)
while len(item2_list) < item2:
    item2_list.append(marker)

# The two extra placeholders compensate for the 1-based ranges below,
# which each skip one element.
item1_list.insert(1, placeholder)
item1_list.insert(1, placeholder)

for item in range(1, len(item1_list)):
    total_list.append(item1_list.pop())
for item in range(1, len(item2_list)):
    total_list.append(item2_list.pop())

total = len(total_list)
print('The sum of', item1, 'and', item2, 'is', total)
#include <stdio.h>

int main()
{
    int n1 = 5, n2 = 55, i = 0;
    int sum = 0;
    int carry = 0;
    while (n1 > 0 || n2 > 0)
    {
        int b1 = n1 % 2;
        int b2 = n2 % 2;
        int sumBits = b1 ^ b2 ^ carry;
        sum = sum | (sumBits << i);
        i++;
        carry = (b1 & b2) | (b1 & carry) | (b2 & carry);
        n1 /= 2;
        n2 /= 2;
    }
    sum = sum | (carry << i);
    printf("%d", sum);
    return 0;
}

Longest palindrome prefix

How to find the longest palindrome prefix of a string in O(n)?
Use a rolling hash. If a is your string, let ha[x] be the hash of the first x characters of a computed from left to right, and let hr[x] be the hash of the first x characters of a computed from right to left. You're interested in the last position i for which ha[i] = hr[i].
Code in C (use two hashes for each direction to avoid false positives):
// Two independent base/modulus pairs make false positives unlikely.
// Moduli kept small so the products below fit in an int.
const int base1 = 31, base2 = 37;
const int mod1 = 65521, mod2 = 65537;

int match = 0;  // a single character is always a palindrome prefix
int ha1 = 0, ha2 = 0, hr1 = 0, hr2 = 0;
int m1 = 1, m2 = 1;
for (int i = 0; a[i]; ++i)
{
    ha1 = (ha1 + m1*a[i]) % mod1;    // left-to-right: a[i] weighted by base^i
    ha2 = (ha2 + m2*a[i]) % mod2;
    hr1 = (a[i] + base1*hr1) % mod1; // right-to-left hash of the same prefix
    hr2 = (a[i] + base2*hr2) % mod2;
    m1 *= base1, m1 %= mod1;
    m2 *= base2, m2 %= mod2;
    if (ha1 == hr1 && ha2 == hr2)
        match = i;                   // prefix a[0..i] hashes as a palindrome
}
// The longest palindrome prefix has length match + 1.
A solution to a more general problem (longest palindromic substring, not just prefix), in O(n):
http://www.akalin.cx/2007/11/28/finding-the-longest-palindromic-substring-in-linear-time/
(Second result on Google for "longest palindrome prefix"....)
Or a solution using suffix trees:
http://www.allisons.org/ll/AlgDS/Tree/Suffix/
Using the Z-algorithm (https://codeforces.com/blog/entry/3107). Suppose s is the given string, of length m. Code:
string rev="",str=s;
int m=s.size(),longestPalindromicPrefix=1;
if(m==0 || m==1) longestPalindromicPrefix=m;
for(int i=m-1;i>=0;i--)
rev+=s[i];
s+='#';
s+=rev;
int n=s.size(),z[n+4],l=0,r=0;
for(int i=1;i<n;i++){
if(i>r){
l=r=i;
while(r<n && s[r-l]==s[r])
r++;
z[i]=r-l,r--;
}
else{
int k=i-l;
if(z[k]<r-i+1)
z[i]=z[k];
else{
l=i;
while(r<n && s[r-l]==s[r])
r++;
z[i]=r-l,r--;
}
}
}
for(int i=m+1;i<n;i++){
if(2*z[i]>=2*m-i && z[i]>longestPalindromicPrefix)
longestPalindromicPrefix=z[i];
}
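Both the hashing and the Z-algorithm answers are easy to validate against a quadratic reference. The naive checker below is my own scaffold, not from the answers:

#include <stdbool.h>
#include <string.h>
#include <stdio.h>

/* Length of the longest prefix of s that reads the same both ways,
   checked directly in O(n^2). */
static size_t longestPalindromePrefixNaive(const char *s)
{
    size_t n = strlen(s), best = n ? 1 : 0;
    for (size_t len = 2; len <= n; ++len) {
        bool pal = true;
        for (size_t i = 0; pal && i < len / 2; ++i)
            if (s[i] != s[len - 1 - i])
                pal = false;
        if (pal) best = len;
    }
    return best;
}

int main(void)
{
    printf("%zu\n", longestPalindromePrefixNaive("abacabad"));  /* 7 */
    return 0;
}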
