How can I count the digits in an integer without a string cast?

I fear there's a simple and obvious answer to this question. I need to determine how many digits wide a count of items is, so that I can pad each item number with the minimum number of leading zeros required to maintain alignment. For example, I want no leading zeros if the total is < 10, one leading zero if it's between 10 and 99, and so on.
One solution would be to cast the item count to a string and then count characters. Yuck! Is there a better way?
Edit: I would not have thought to use the common logarithm (I didn't know such a thing existed). So, not obvious - to me - but definitely simple.

This should do it:
int length = (number == 0) ? 1 : (int)Math.log10(number) + 1;

int length = (int)Math.Log10(Math.Abs(number)) + 1;
You may need to account for the negative sign.

A more efficient solution than repeated division is to repeatedly compare against a running power of ten, e.g. (where n is the number whose digit count is required)
unsigned int test = 1;
unsigned int digits = 0; /* note: this leaves digits == 0 for n == 0 */
while (n >= test)
{
    ++digits;
    test *= 10;
}
If there is some reasonable upper bound on the item count (e.g. the 32-bit range of an unsigned int) then an even better way is to compare with members of some static array, e.g.
// this covers the whole range of 32-bit unsigned values
const unsigned int test[] = { 1, 10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000, 1000000000 };
unsigned int digits = 10;
while (digits > 1 && n < test[digits - 1]) --digits;

If you are going to pad the number in .Net, then
num.ToString().PadLeft(10, '0')
might do what you want.

You can use a while loop, which will likely be faster than a logarithm because this uses integer arithmetic only:
int len = 0;
while (n > 0) {
    len++;
    n /= 10;
}
I leave it as an exercise for the reader to adjust this algorithm to handle zero and negative numbers.
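One possible adjustment (a Python sketch of the same loop, not the answerer's code) that treats zero as one digit and ignores the sign:
def digit_count(n):
    n = abs(n)           # ignore the sign
    length = 1           # zero still counts as one digit
    while n >= 10:
        length += 1
        n //= 10
    return length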

I would have posted a comment, but my rep score won't grant me that distinction.
All I wanted to point out is that even though log10 is a very elegant solution (read: very few lines of code), it is probably the most taxing on the processor.
I think jherico's answer is probably the most efficient solution and should therefore be rewarded as such, especially if you are going to be doing this for a lot of numbers.

Since a number doesn't have leading zeroes, you're converting anyway to add them. I'm not sure why you're trying so hard to avoid it to find the length when the end result will have to be a string anyway.
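For what it's worth, once the string conversion is accepted, the width and the padding can be derived in one step. A minimal Python illustration (the variable names are mine):
total = 240                      # total number of items
width = len(str(total))          # 3, so pad item numbers to three digits
print(f"{7:0{width}d}")          # prints 007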

The base-10 logarithm provides one solution, though it is a bit of an overkill.

You can loop and divide by 10, counting the number of times you loop:
int num = 423;
int minimum = 1;
while (num >= 10) {
    num = num / 10;
    minimum++;
}

Okay, I can't resist: use /=:
#include <stdio.h>

int main(void) {
    int num = 423;
    int count = 1;
    while (num /= 10)
        count++;
    printf("Count: %d\n", count);
    return 0;
}
534 $ gcc count.c && ./a.out
Count: 3
535 $

Related

Binary to decimal (on huge numbers)

I am building a C library for big integer numbers. Basically, I'm seeking a fast algorithm to convert any integer from its binary representation to a decimal one.
I saw the JDK's BigInteger.toString() implementation, but it looks quite heavy to me, as it was made to convert the number to any radix (it uses a division for each digit, which should be pretty slow when dealing with thousands of digits).
So if you have any documentation / knowledge to share about it, I would be glad to read it.
EDIT: to be more precise about my question:
Let P be a memory address
Let N be the number of bytes allocated (and set) at P
How do I convert the integer represented by the N bytes at address P (let's say in little endian to make things simpler) to a C string?
Example:
N = 1
P = some random memory address storing '00101010'
out string = "42"
Thanks for your answers all the same.
The reason the BigInteger.toString method looks heavy is that it does the conversion in chunks.
A trivial algorithm would take the last digit and then divide the whole big integer by the radix until there is nothing left.
One problem with this is that big integer division is quite expensive, so the number is subdivided into chunks that can be processed with regular integer division (as opposed to BigInt division):
static String toDecimal(BigInteger bigInt) {
    BigInteger chunker = BigInteger.valueOf(1000000000);
    StringBuilder sb = new StringBuilder();
    do {
        int current = bigInt.mod(chunker).intValue();
        bigInt = bigInt.divide(chunker);
        for (int i = 0; i < 9; i++) {
            sb.append((char) ('0' + current % 10));
            current /= 10;
            if (current == 0 && bigInt.signum() == 0) {
                break;
            }
        }
    } while (bigInt.signum() != 0);
    return sb.reverse().toString();
}
That said, for a fixed radix, you are probably even better off with porting the "double dabble" algorithm to your needs, as suggested in the comments: https://en.wikipedia.org/wiki/Double_dabble
I recently got the challenge to print a big Mersenne prime: 2**82589933-1. On my CPU that takes ~40 minutes with apcalc and ~120 minutes with Python 2.7. It's a number with a bit over 24 million digits.
Here is my own little C code for the conversion:
// print 2**82589933-1
#include <stdio.h>
#include <math.h>
#include <stdint.h>
#include <inttypes.h>
#include <string.h>
const uint32_t exponent = 82589933;
//const uint32_t exponent = 100;
//outputs 1267650600228229401496703205375
const uint32_t blocks = (exponent + 31) / 32;
const uint32_t digits = (int)(exponent * log(2.0) / log(10.0)) + 10;
uint32_t num[2][blocks];
char out[digits + 1];
// blocks : number of uint32_t in num1 and num2
// num1 : number to convert
// num2 : free space
// out : end of output buffer
void conv(uint32_t blocks, uint32_t *num1, uint32_t *num2, char *out) {
if (blocks == 0) return;
const uint32_t div = 1000000000;
uint64_t t = 0;
for (uint32_t i = 0; i < blocks; ++i) {
t = (t << 32) + num1[i];
num2[i] = t / div;
t = t % div;
}
for (int i = 0; i < 9; ++i) {
*out-- = '0' + (t % 10);
t /= 10;
}
if (num2[0] == 0) {
--blocks;
num2++;
}
conv(blocks, num2, num1, out);
}
int main() {
// prepare number
uint32_t t = exponent % 32;
num[0][0] = (1LLU << t) - 1;
memset(&num[0][1], 0xFF, (blocks - 1) * 4);
// prepare output
memset(out, '0', digits);
out[digits] = 0;
// convert to decimal
conv(blocks, num[0], num[1], &out[digits - 1]);
// output number
char *res = out;
while(*res == '0') ++res;
printf("%s\n", res);
return 0;
}
The conversion is destructive and tail recursive. In each step it divides num1 by 1_000_000_000 and stores the result in num2. The remainder is added to out. Then it calls itself with num1 and num2 switched and often shortened by one (blocks is decremented). out is filled from back to front. You have to allocate it large enough and then strip leading zeroes.
Python seems to be using a similar mechanism for converting big integers to decimal.
Want to do better?
For large numbers like in my case, each division by 1_000_000_000 takes rather long. At a certain size a divide & conquer algorithm does better. In my case the first division would be by 10 ^ 16777216, splitting the number into quotient and remainder. Then convert each part separately. Now each part is still big, so split again at 10 ^ 8388608. Recursively keep splitting till the numbers are small enough, say maybe 1024 digits each. Those convert with the simple algorithm above. The right definition of "small enough" would have to be tested; 1024 is just a guess.
While the long division of two big integer numbers is expensive, much more so than a division by 1_000_000_000, the time spent there is then saved because each separate chunk requires far fewer divisions by 1_000_000_000 to convert to decimal.
And if you have split the problem into separate and independent chunks it's only a tiny step away from spreading the chunks out among multiple cores. That would really speed up the conversion another step. It looks like apcalc uses divide&conquer but not multi-threading.
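A rough Python sketch of that divide & conquer idea (not the author's code; the base-case threshold and the naive recomputation of 10**half are arbitrary simplifications):
def big_to_str(n, pad=0):
    # convert a non-negative int to decimal by recursive splitting
    if n < 10**18:                                   # small enough for plain str()
        s = str(n)
    else:
        digits = int(n.bit_length() * 0.30103) + 1   # rough decimal digit count
        half = digits // 2
        q, r = divmod(n, 10**half)                   # split into high and low parts
        s = big_to_str(q) + big_to_str(r, pad=half)  # low part keeps its leading zeros
    return s.rjust(pad, '0') if pad else s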

Get the last 1000 digits of 5^1234566789893943

I saw the following interview question on some online forum. What is a good solution for this?
Get the last 1000 digits of 5^1234566789893943
Simple algorithm:
1. Maintain a 1000-digit array which will hold the answer at the end
2. Implement a multiplication routine like you do in school. It is O(d^2).
3. Use modular exponentiation by squaring.
Iterative exponentiation:
array ans = 1;
int a = 5;
while (p > 0) {
    if (p & 1) {
        ans = multiply(ans, a);
    }
    p = p >> 1;
    a = multiply(a, a);
}
multiply: multiplies two large numbers using the school method and returns the last 1000 digits.
Time complexity: O(d^2 * log p), where d is the number of trailing digits needed and p is the power.
A typical solution for this problem would be to use modular arithmetic and exponentiation by squaring to compute the remainder of 5^1234566789893943 when divided by 10^1000. In your case that is already good enough: it takes about 1000*log(1234566789893943) operations, which is not too much. Still, I will propose a more general approach that works for far greater values of the exponent.
You will have to use somewhat more complicated number theory. You can use Euler's theorem to get the remainder of 5^1234566789893943 modulo 2^1000 a lot more efficiently; denote that remainder by r. It is also obvious that 5^1234566789893943 is divisible by 5^1000.
After that you need to find a number d such that 5^1000 * d = r (mod 2^1000). To solve this equation you should compute 5^1000 (mod 2^1000). After that all that is left is division modulo 2^1000. Using Euler's theorem again, this can be done efficiently: use the fact that x^(phi(2^1000)-1) * x = 1 (mod 2^1000). This approach is way faster and becomes the only feasible one for much larger exponents.
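A Python sketch of that outline (not the answerer's code; it leans on the built-in three-argument pow, and pow(x, -1, m) for the modular inverse needs Python 3.8+):
e = 1234566789893943
M2 = 2 ** 1000
r = pow(5, e, M2)                            # 5^e modulo 2^1000
inv = pow(pow(5, 1000, M2), -1, M2)          # inverse of 5^1000 modulo 2^1000
d = (r * inv) % M2                           # solves 5^1000 * d = r (mod 2^1000)
answer = 5 ** 1000 * d                       # divisible by 5^1000 and = r (mod 2^1000)
assert answer == pow(5, e, 10 ** 1000)       # agrees with the direct modular power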
The key phrase is "modular exponentiation". Python has that built in:
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:38:22) [MSC v.1600 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> help(pow)
Help on built-in function pow in module builtins:
pow(...)
pow(x, y[, z]) -> number
With two arguments, equivalent to x**y. With three arguments,
equivalent to (x**y) % z, but may be more efficient (e.g. for ints).
>>> digits = pow(5, 1234566789893943, 10**1000)
>>> len(str(digits))
1000
>>> digits
4750414775792952522204114184342722049638880929773624902773914715850189808476532716372371599198399541490535712666678457047950561228398126854813955228082149950029586996237166535637925022587538404245894713557782868186911348163750456080173694616157985752707395420982029720018418176528050046735160132510039430638924070731480858515227638960577060664844432475135181968277088315958312427313480771984874517274455070808286089278055166204573155093723933924226458522505574738359787477768274598805619392248788499020057331479403377350096157635924457653815121544961705226996087472416473967901157340721436252325091988301798899201640961322478421979046764449146045325215261829432737214561242087559734390139448919027470137649372264607375942527202021229200886927993079738795532281264345533044058574930108964976191133834748071751521214092905298139886778347051165211279789776682686753139533912795298973229094197221087871530034608077419911440782714084922725088980350599242632517985214513078773279630695469677448272705078125
>>>
The technique we need to know is exponentiation by squaring and modulus. We also need to use BigInteger in Java.
Simple code in Java:
BigInteger m = BigInteger.TEN.pow(1000); // 10^1000
BigInteger pow(BigInteger a, long b) {
if (b == 0) {
return BigInteger.ONE;
}
BigInteger val = pow(a, b/2);
if (b % 2 == 0)
return (val.multiply(val)).mod(m);
else
return (val.multiply(val).multiply(a)).mod(m);
}
In Java, the built-in method modPow does it all for you (thanks, Java).
Use congruence and apply modular arithmetic.
Square and multiply algorithm.
If you divide any number in base 10 by 10, then the remainder represents the last digit, i.e. 23422222 = 2342222*10 + 2.
So we know:
5 = 5 (mod 10)
5^2 = 25 = 5 (mod 10)
5^4 = (5^2)*(5^2) = 5*5 = 5 (mod 10)
5^8 = (5^4)*(5^4) = 5*5 = 5 (mod 10)
... and keep going until you get to that exponent
OR, you can realize that as we keep going you keep getting 5 as your remainder.
Convert the number to a string.
Loop on the string, starting at the last index up to 1000.
Then reverse the result string.
I posted a solution based on some hints here.
#include <vector>
#include <iostream>
using namespace std;
vector<char> multiplyArrays(const vector<char> &data1, const vector<char> &data2, int k) {
int sz1 = data1.size();
int sz2 = data2.size();
vector<char> result(sz1+sz2,0);
for(int i=sz1-1; i>=0; --i) {
char carry = 0;
for(int j=sz2-1; j>=0; --j) {
char value = data1[i] * data2[j]+result[i+j+1]+carry;
carry = value/10;
result[i+j+1] = value % 10;
}
result[i]=carry;
}
if(sz1+sz2>k){
vector<char> lastKElements(result.begin()+(sz1+sz2-k), result.end());
return lastKElements;
}
else
return result;
}
vector<char> calculate(unsigned long m, unsigned long n, int k) {
if(n == 0) {
return vector<char>(1, 1);
} else if(n % 2) { // odd number
vector<char> tmp(1, m);
vector<char> result1 = calculate(m, n-1, k);
return multiplyArrays(result1, tmp, k);
} else {
vector<char> result1 = calculate(m, n/2, k);
return multiplyArrays(result1, result1, k);
}
}
int main(int argc, char const *argv[]){
vector<char> v=calculate(5,8,1000);
for(auto c : v){
cout<<static_cast<unsigned>(c);
}
}
I don't know if Windows can show a big number (or if my computer is fast enough to show it), but I guess you COULD use this code as an algorithm:
ulong x = 5; //There are a lot of libraries for other languages like C/C++ that support super big numbers. In this case I'm using C#'s default `Uint64` number.
for(ulong i=1; i<1234566789893943; i++)
{
x = x * x; //I will make the multiplication raise power over here
}
string term = x.ToString(); //Store the number to a string. I remember strings can store up to 1 billion characters.
char[] number = term.ToCharArray(); //Array of all the digits
int tmp=0;
while(number[tmp]!='.') //This will search for the period.
tmp++;
tmp++; //After finding the period, I will start storing 1000 digits from this index of the char array
string thousandDigits = ""; //Here I will store the digits.
for (int i = tmp; i <= 1000+tmp; i++)
{
thousandDigits += number[i]; //Storing digits
}
Using this as a reference, I guess if you want to try getting the LAST 1000 characters of this array, change the for loop in the above code to this:
string thousandDigits = "";
for (int i = 0; i < 1000; i++)
{
    thousandDigits += number[number.Length - 1 - i]; //Reverse array... ¿?
}
As I don't work with super super looooong numbers, I don't know if my computer can handle those. I tried the code and it works, but when I try to show the result in the console it just leaves the cursor flickering xD Guess it's still working. Don't have a pro processor. Try it if you want :P

Fastest way to find the largest power of 10 smaller than x

Is there any fast way to find the largest power of 10 smaller than a given number?
I'm using this algorithm, at the moment, but something inside myself dies anytime I see it:
10**( int( math.log10(x) ) ) # python
pow( 10, (int) log10(x) ) // C
I could implement simple log10 and pow functions for my problems with one loop each, but still I'm wondering if there is some bit magic for decimal numbers.
An alternative algorithm is:
i = 1;
while((i * 10) < x)
i *= 10;
Log and power are expensive operations. If you want fast, you probably want to look up the IEEE binary exponent in table to get the approximate power of ten, and then check if the mantissa forces a change by +1 or not. This should be 3 or 4 integer machine instructions (alternatively O(1) with a pretty small constant).
Given tables:
int IEEE_exponent_to_power_of_ten[2048]; // needs to be 2*max(IEEE_exponent)
double next_power_of_ten[600]; // needs to be 2*log10(pow(2,1024))
// you can compute these tables offline if needed
for (p=-1023;p<1023;p++) // bounds are rough, see actual IEEE exponent ranges
{ IEEE_exponent_to_power_of_ten[p+1024]=log10(pow(2,p)); // you might have to worry about roundoff errors here
next_power_of_ten[log10(pow(2,p))+1024]=pow(10,IEEE_exponent_to_power_of_ten[p+1024]);
}
then your computation should be:
power_of_ten=IEEE_exponent_to_power_of_ten[IEEE_Exponent(x)+1024];
if (x>=next_power_of_ten[power_of_ten]) power_of_ten++;
answer=next_power_of_ten[power_of_ten];
[You might really need to write this as assembler to squeeze out every last clock.]
[This code not tested.]
However, if you insist on doing this in python, the interpreter overhead may swamp the log/exp time and it might not matter.
So, do you want fast, or do you want short-to-write?
EDIT 12/23: OP now tells us that his "x" is integral. Under the assumption that it is a 64 (or 32) bit integer, my proposal still works, but obviously there isn't an "IEEE_Exponent". Most processors have a "find first one" instruction that will tell you the number of 0 bits on the left hand (most significant) part of the value, i.e. the leading zeros; you will likely need a compiler intrinsic to get at it. This is in essence 64 (or 32) minus the power of two for the value. Given exponent = 64 - leadingzeros, you have the power-of-two exponent, and most of the rest of the algorithm is essentially unchanged (modifications left for the reader).
If the processor doesn't have a find-first-one instruction, then probably the best bet is a balanced discrimination tree to determine the power of ten. For 64 bits, such a tree would take at most 18 compares to determine the exponent (10^18 ~~ 2^64).
Create an array of powers of 10. Search through it for the largest value smaller than x.
If x is fairly small, you may find that a linear search provides better performance than a binary search, due in part to fewer branch mis-predictions.
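A small Python sketch of the table approach (my code, assuming x fits in 64 bits and x > 1; bisect gives the binary-search variant, and a plain loop over POWERS would be the linear one):
import bisect

POWERS = [10 ** i for i in range(20)]            # 1, 10, ..., 10**19 covers 64-bit values

def largest_power_below(x):
    # index of the first power >= x, minus one, is the largest power smaller than x
    return POWERS[bisect.bisect_left(POWERS, x) - 1]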
The asymptotically fastest way, as far as I know, involves repeated squaring.
func LogFloor(int value, int base) as int
//iterates values of the form (value: base^(2^i), power: 2^i)
val superPowers = iterator
var p = 1
var c = base
while c <= value
yield (c, p)
c *= c
p += p
endwhile
enditerator
//binary search for the correct power
var p = 0
var c = 1
for val ci, pi in superPowers.Reverse()
if c*ci <= value
c *= ci
p += pi
endif
endfor
return p
The algorithm takes logarithmic time and space in N, which is linear to N's representation size. [The time bound is probably a bit worse because I simplified optimistically]
Note that I assumed arbitrarily large integers (watch out for overflow!), since the naive times-10-until-over algorithm is probably fast enough when dealing with just 32-bit integers.
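A rough Python rendition of the pseudocode above, for readers who prefer something runnable (the structure follows the pseudocode; the names are otherwise mine):
def log_floor(value, base):
    # collect pairs (base**(2**i), 2**i) while they stay <= value
    super_powers = []
    c, p = base, 1
    while c <= value:
        super_powers.append((c, p))
        c *= c
        p += p
    # binary-search downwards for the correct power
    c, p = 1, 0
    for ci, pi in reversed(super_powers):
        if c * ci <= value:
            c *= ci
            p += pi
    return p

# log_floor(10**6, 10) == 6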
I think the fastest way is O(log(log(n))^2): the while loop takes O(log(log(n))), and the function can only recurse a bounded number of times (say O(c) where c is a constant). The first recursive call takes log(log(sqrt(n))) time, the second takes ..., and the number of nested square roots before sqrt(sqrt(sqrt(...(n)))) < 10 is log(log(n)), which is effectively a constant because of machine limitations.
static long findPow10(long n)
{
if (n == 0)
return 0;
long i = 10;
long prevI = 10;
int count = 1;
while (i < n)
{
prevI = i;
i *= i;
count*=2;
}
if (i == n)
return count;
return count / 2 + findPow10(n / prevI);
}
In Python:
10**(len(str(int(x)))-1)
Given that this is language independent: if you can get the power of two that this number is significant to, e.g. y in x*2^y (which is the way the number is stored, though I'm not sure I have seen an easy way to access y in any language I have used), then
z = int(y/(ln(10)/ln(2)))
(one floating point division)
and 10^z or 10^(z+1) will be your answer, though 10^z is still not so simple (I beg to be corrected).
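A Python sketch of that idea, using int.bit_length() as the way to get at y (the function name is mine; the final check covers the case where the estimate lands one power too low):
import math

def largest_power_of_10_le(x):          # largest power of ten <= x, for x >= 1
    y = x.bit_length() - 1              # exponent of the leading binary digit
    z = int(y / math.log2(10))          # candidate decimal exponent (never too high)
    if 10 ** (z + 1) <= x:              # ...but it can be one too low
        z += 1
    return 10 ** z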
I timed the methods with the following variations in C++ for the value a being a size_t type (inlining improves performance but does not change relative ordering).
Try 1: Multiply until find number.
size_t try1( size_t a )
{
size_t scalar = 1ul;
while( scalar * 10 < a ) scalar *= 10;
return scalar;
}
Try 2: Multiway if (could also be programmed using a lookup table).
size_t try2( size_t a )
{
return ( a < 10ul ? 1ul :
( a < 100ul ? 10ul :
( a < 1000ul ? 100ul :
( a < 10000ul ? 1000ul :
( a < 100000ul ? 10000ul :
( a < 1000000ul ? 100000ul :
( a < 10000000ul ? 1000000ul :
( a < 100000000ul ? 10000000ul :
( a < 1000000000ul ? 100000000ul :
( a < 10000000000ul ? 1000000000ul :
( a < 100000000000ul ? 10000000000ul :
( a < 1000000000000ul ? 100000000000ul :
( a < 10000000000000ul ? 1000000000000ul :
( a < 100000000000000ul ? 10000000000000ul :
( a < 1000000000000000ul ? 100000000000000ul :
( a < 10000000000000000ul ? 1000000000000000ul :
( a < 100000000000000000ul ? 10000000000000000ul :
( a < 1000000000000000000ul ? 100000000000000000ul :
( a < 10000000000000000000ul ? 1000000000000000000ul :
10000000000000000000ul )))))))))))))))))));
}
Try 3: Modified from findPow10 by @Saaed Amiri, which uses squaring to find very large powers more rapidly than Try 1.
size_t try3( size_t a )
{
if (a == 0)
return 0;
size_t i, j = 1;
size_t prev = 1;
while( j != 100 )
{
i = prev;
j = 10;
while (i <= a)
{
prev = i;
i *= j;
j *= j;
}
}
return prev;
}
Try 4: Lookup table indexed using count leading zeros instruction as per #Ira Baxter.
static const std::array<size_t,64> ltable2{
1ul, 1ul, 1ul, 1ul, 1ul, 10ul, 10ul, 10ul,
100ul, 100ul, 100ul, 1000ul, 1000ul, 1000ul,
1000ul, 10000ul, 10000ul, 10000ul, 100000ul,
100000ul, 100000ul, 1000000ul, 1000000ul,
1000000ul, 1000000ul, 10000000ul, 10000000ul,
10000000ul, 100000000ul, 100000000ul,
100000000ul, 1000000000ul, 1000000000ul,
1000000000ul, 1000000000ul, 10000000000ul,
10000000000ul, 10000000000ul, 100000000000ul,
100000000000ul, 100000000000ul, 1000000000000ul,
1000000000000ul, 1000000000000ul, 1000000000000ul,
10000000000000ul, 10000000000000ul, 10000000000000ul,
100000000000000ul, 100000000000000ul, 100000000000000ul,
1000000000000000ul, 1000000000000000ul, 1000000000000000ul,
1000000000000000ul, 10000000000000000ul, 10000000000000000ul,
10000000000000000ul, 100000000000000000ul, 100000000000000000ul,
100000000000000000ul, 100000000000000000ul, 1000000000000000000ul,
1000000000000000000ul };
size_t try4( size_t a )
{
if( a == 0 ) return 0;
size_t scalar = ltable2[ 64 - __builtin_clzl(a) ];
return (scalar * 10 > a ? scalar : scalar * 10 );
}
Timing is as follows (gcc 4.8)
for( size_t i = 0; i != 1000000000; ++i) try1(i) 6.6
for( size_t i = 0; i != 1000000000; ++i) try2(i) 0.3
for( size_t i = 0; i != 1000000000; ++i) try3(i) 6.5
for( size_t i = 0; i != 1000000000; ++i) try4(i) 0.3
for( size_t i = 0; i != 1000000000; ++i) pow(10,size_t(log10((double)i)))   98.1
The lookup/multiway-if beats everything in C++, but it requires that we know integers have a finite size. try3 is slower than try1 in this test for smaller values of the loop end value; for large numbers try3 beats try1. In Python things are made difficult because integers are not limited in size, so I would combine try2 with try3 to quickly process numbers up to a fixed limit and then handle the possibly very large numbers.
In Python I think a lookup using a list comprehension is probably faster than a multiway-if.
# where we previously define lookuptable = ( 1, 10, 100, ..... )
scalar = [i for i in lookuptable if i < a][-1]

calculating the number of bits using K&R method with infinite memory

I got the answer to the question of counting the number of set bits from here:
How to count the number of set bits in a 32-bit integer?
long count_bits(long n) {
    unsigned int c; // c accumulates the total bits set in n
    for (c = 0; n; c++)
        n &= n - 1; // clear the least significant bit set
    return c;
}
It is also simple to understand. I found the best answer to be Brian Kernighan's method, posted by hoyhoy... and he adds the following at the end.
Note that this is a question used during interviews. The interviewer will add the caveat that you have "infinite memory". In that case, you basically create an array of size 2^32 and fill in the bit counts for the numbers at each location. Then, this function becomes O(1).
Can somebody explain how to do this if I have infinite memory?
The fastest way I have ever seen to populate such an array is ...
array[0] = 0;
for (i = 1; i < NELEMENTS; i++) {
array[i] = array[i >> 1] + (i & 1);
}
Then to count the number of set bits in a given number (provided the given number is less than NELEMENTS) ...
numSetBits = array[givenNumber];
If your memory is not infinite, I often see NELEMENTS set to 256 (one byte's worth) and you then add up the number of set bits in each byte of your integer.
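A Python sketch of that byte-table variant, built with the same shift-by-one recurrence (the table and function names are mine):
TABLE = [0] * 256
for i in range(1, 256):
    TABLE[i] = TABLE[i >> 1] + (i & 1)     # same trick, limited to one byte

def popcount32(n):
    # add up the set bits of each of the four bytes
    return (TABLE[n & 0xFF] + TABLE[(n >> 8) & 0xFF] +
            TABLE[(n >> 16) & 0xFF] + TABLE[(n >> 24) & 0xFF])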
int counts[MAX_LONG];
void init() {
for (int i= 0; i < MAX_LONG; i++)
{
counts[i] = count_bits(i); // as given
}
}
int count_bits_o1(long number)
{
return counts[number];
}
You can probably pre-populate the array more wisely, i.e. fill it with zeros, then add one at every second index, then at every fourth index, then at every eighth index, etc., which might be a bit faster, although I doubt it...
Also, you might need to account for unsigned values.

How to find a binary logarithm very fast? (O(1) at best)

Is there any very fast method to find a binary logarithm of an integer number? For example, given a number
x=52656145834278593348959013841835216159447547700274555627155488768 such algorithm must find y=log(x,2) which is 215. x is always a power of 2.
The problem seems to be really simple. All that is required is to find the position of the most significant 1 bit. There is a well-known method FloorLog, but it is not very fast, especially for very long multi-word integers.
What is the fastest method?
A quick hack: Most floating-point number representations automatically normalise values, meaning that they effectively perform the loop Christoffer Hammarström mentioned in hardware. So simply converting from an integer to FP and extracting the exponent should do the trick, provided the numbers are within the FP representation's exponent range! (In your case, your integer input requires multiple machine words, so multiple "shifts" will need to be performed in the conversion.)
If the integers are stored in a uint32_t a[], then my obvious solution would be as follows:
1. Run a linear search over a[] to find the highest-valued non-zero uint32_t value a[i] in a[] (test using uint64_t for that search if your machine has native uint64_t support).
2. Apply the bit twiddling hacks to find the binary log b of the uint32_t value a[i] you found in step 1.
3. Evaluate 32*i + b (a small sketch follows below).
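A compact Python sketch of those three steps, treating the number as a little-endian list of 32-bit limbs in place of the uint32_t a[] (bit_length() stands in for the bit-twiddling hack):
def log2_of_power_of_two(a):                       # a: little-endian list of 32-bit limbs
    i = max(idx for idx, w in enumerate(a) if w)   # step 1: highest non-zero limb
    b = a[i].bit_length() - 1                      # step 2: binary log of that limb
    return 32 * i + b                              # step 3

# log2_of_power_of_two([0]*6 + [1 << 23]) == 215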
The answer is implementation or language dependent. Any implementation can store the number of significant bits along with the data, as it is often useful. If it must be calculated, then find the most significant word/limb and the most significant bit in that word.
If you're using fixed-width integers then the other answers already have you pretty-well covered.
If you're using arbitrarily large integers, like int in Python or BigInteger in Java, then you can take advantage of the fact that their variable-size representation uses an underlying array, so the base-2 logarithm can be computed easily and quickly in O(1) time using the length of the underlying array. The base-2 logarithm of a power of 2 is simply one less than the number of bits required to represent the number.
So when n is an integer power of 2:
In Python, you can write n.bit_length() - 1 (docs).
In Java, you can write n.bitLength() - 1 (docs).
You can create an array of logarithms beforehand. This will find logarithmic values up to log(N):
#define N 100000
int naj[N];
naj[2] = 1;
for ( int i = 3; i <= N; i++ )
{
naj[i] = naj[i-1];
if ( (1 << (naj[i]+1)) <= i )
naj[i]++;
}
The array naj holds your logarithm values, where naj[k] = log(k).
The log is base two.
This uses binary search for finding the closest power of 2.
public static int binLog(int x,boolean shouldRoundResult){
// assuming 32-bit integer
int lo=0;
int hi=31;
int rangeDelta=hi-lo;
int expGuess=0;
int guess;
while(rangeDelta>1){
expGuess=(lo+hi)/2; // or (loGuess+hiGuess)>>1
guess=1<<expGuess;
if(guess<x){
lo=expGuess;
} else if(guess>x){
hi=expGuess;
} else {
lo=hi=expGuess;
}
rangeDelta=hi-lo;
}
if(shouldRoundResult && hi>lo){
int loGuess=1<<lo;
int hiGuess=1<<hi;
int loDelta=Math.abs(x-loGuess);
int hiDelta=Math.abs(hiGuess-x);
if(loDelta<hiDelta)
expGuess=lo;
else
expGuess=hi;
} else {
expGuess=lo;
}
int result=expGuess;
return result;
}
The best option off the top of my head would be an O(log(log(n))) approach, using binary search. Here is an example for a 64-bit ( <= 2^63 - 1 ) number (in C++):
int log2(int64_t num) {
int res = 0, pw = 0;
for(int i = 32; i > 0; i --) {
res += i;
if(((1LL << res) - 1) & num)
res -= i;
}
return res;
}
This algorithm will basically provide me with the highest number res such that ((2^res - 1) & num) == 0. Of course, for any number, you can work it out in a similar manner:
int log2_better(int64_t num) {
    int res = 0;
    for (int i = 32; i > 0; i >>= 1) {
        if ((1LL << (res + i)) <= num)
            res += i;
    }
    return res;
}
Note that this method relies on the fact that the "bitshift" operation is more or less O(1). If this is not the case, you would have to precompute either all the powers of 2, or the numbers of the form 2^2^i (2^1, 2^2, 2^4, 2^8, etc.) and do some multiplications (which in this case aren't O(1) anymore).
The example in the OP is an integer string of 65 characters, which is not representable by an INT64 or even an INT128. It is still very easy to get the Log(2,x) from this string by converting it to a double-precision number. This at least gives you easy access to integers up to 2^1023.
Below you find some form of pseudocode
# 1. read the string
string="52656145834278593348959013841835216159447547700274555627155488768"
# 2. extract the length of the string
l=length(string) # l = 65
# 3. read the first min(l,17) digits in a float
float=to_float(string(1: min(17,l) ))
# 4. multiply with the correct power of 10
float = float * 10^(l-min(17,l) ) # float = 5.2656145834278593E64
# 5. Take the log2 of this number and round to the nearest integer
log2 = Round( Log(float,2) ) # 215
Note:
some computer languages can convert arbitrary strings into a double precision number. So steps 2,3 and 4 could be replaced by x=to_float(string)
Step 5 could be done quicker by just reading the double-precision exponent (bits 53 up to and including 63) and subtracting 1023 from it.
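A Python sketch of that shortcut, pulling the exponent field out of the IEEE 754 bit pattern with struct (the function name is mine; it returns the unbiased exponent, i.e. floor(log2) of the double):
import struct

def double_exponent(value):
    bits, = struct.unpack('<Q', struct.pack('<d', value))   # raw 64-bit pattern
    return ((bits >> 52) & 0x7FF) - 1023                     # unbiased exponent field

# double_exponent(2.0 ** 215) == 215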
Quick example code: If you have awk you can quickly test this algorithm.
The following code creates the first 300 powers of two:
awk 'BEGIN{for(n=0;n<300; n++) print 2^n}'
The following reads the input and does the above algorithm:
awk '{ l=length($0); m = (l > 17 ? 17 : l)
x = substr($0,1,m) * 10^(l-m)
print log(x)/log(2)
}'
So the following bash-command is a convoluted way to create a consecutive list of numbers from 0 to 299:
$ awk 'BEGIN{for(n=0;n<300; n++) print 2^n}' | awk '{ l=length($0); m = (l > 17 ? 17 : l); x = substr($0,1,m) * 10^(l-m); print log(x)/log(2) }'
0
1
2
...
299
