Can this function be refactored to be O(1) - performance

I have this function which is used to calculate a value with diminishing returns. It counts how often an ever-increasing value can be subtracted from the input value and returns the number of subtractions. It is currently implemented iteratively with an infinite loop:
// inputValue is our parameter. It is manipulated in the method body.
// step counts how many subtractions have been performed so far. It is also our returned value.
// loss is the value that is being subtracted from the inputValue at each step. It grows polynomially with each step.
public int calculate(int inputValue) {
    for (int step = 1; true; step++) { // unbounded for loop, exits via return
        int loss = (int) (1 + 0.0006 * step * step + 0.2 * step);
        if (inputValue > loss) {
            inputValue -= loss;
        } else {
            return step;
        }
    }
}
This function is used in various places within the larger application and sometimes in performance critical code. I would prefer it to be refactored in a way which does not require the loop anymore.
I am fairly sure that it is possible to somehow calculate the result more directly. But my mathematical skills seem to be insufficient to do this.
Can anybody show me a function which produces identical results without the need for a loop or recursion? It is OK if the refactored code may produce different results for extreme values and corner cases. Negative inputs need not be considered.
Thank you all in advance.

I don't think you can make the code faster while preserving the exact logic. In particular, you have some hard-to-emulate rounding at
int loss = (int) (1 + 0.0006 * step*step + 0.2 * step);
If this is a requirement of your business logic rather than a bug, I don't think you can do significantly better. On the other hand if what you really want is something like (from the syntax I assumed you use Java):
public static int calculate_double(int inputValue) {
    double value = inputValue;
    for (int step = 1; true; step++) { // unbounded for loop, exits via return
        double loss = (1 + 0.0006 * step * step + 0.2 * step); // no rounding!
        if (value > loss) {
            value -= loss;
        } else {
            return step;
        }
    }
}
I.e. the same logic, but without the rounding at every step. Then there is some hope.
Note: unfortunately this rounding does make a difference. For example, according to my tests the outputs of calculate and calculate_double differ for every inputValue in the range [4, 46465] (sometimes even by more than +1; for example, for inputValue = 1000 it is calculate = 90 vs calculate_double = 88). For bigger inputValue the results are more consistent. For example, for the result 519/520 the range of difference is only [55294, 55547]. Still, for every result there is some range of inputs where the two versions disagree.
First of all, the sum of loss in the case of no rounding for a given max step (let's call it n) has a closed formula:
sum(n) = n + 0.0006*n*(n+1)*(2n+1)/6 + 0.2*n*(n+1)/2
So theoretically finding such n so that sum(n) < inputValue < sum(n+1) can be done by solving the cubic equation sum(x) = inputValue, which has a closed formula, and then checking values like floor(x) and ceil(x). However the math behind this is a bit complicated so I didn't go that route.
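For the record, the closed formula can be sanity-checked against direct summation of the loss terms; a quick sketch (class and method names here are mine, just for illustration):

```java
public class SumFormulaCheck {
    // Closed form: sum(n) = n + 0.0006*n*(n+1)*(2n+1)/6 + 0.2*n*(n+1)/2
    static double closedForm(int n) {
        return n + 0.0006 * n * (n + 1.0) * (2.0 * n + 1) / 6.0
                 + 0.2 * n * (n + 1.0) / 2.0;
    }

    // Direct summation of loss(step) = 1 + 0.0006*step^2 + 0.2*step (no rounding)
    static double direct(int n) {
        double s = 0;
        for (int step = 1; step <= n; step++)
            s += 1 + 0.0006 * step * step + 0.2 * step;
        return s;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 10, 100, 1000, 10000}) {
            double a = closedForm(n), b = direct(n);
            if (Math.abs(a - b) > 1e-6 * Math.max(1.0, b))
                throw new AssertionError(n + ": " + a + " != " + b);
        }
        System.out.println("closed form matches direct summation");
    }
}
```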
Please also note that since int has a limited range, theoretically even your implementation of the algorithm is O(1) (because it will never take more steps than computing calculate(Integer.MAX_VALUE), which is a constant). So probably what you really want is just a significant speed-up.
Unfortunately the coefficients 0.0006 and 0.2 are small enough that different summands dominate the sum for different n. Still, you can use binary search for much better performance:
static int sum(int step) {
    // n + 0.2 * n*(n+1)/2 + 0.0006 * n*(n+1)*(2n+1)/6
    // = ((0.0001*(2n+1) + 0.1) * (n+1) + 1) * n
    double s = ((0.0001 * (2 * step + 1) + 0.1) * (step + 1) + 1) * step;
    return (int) s;
}
static int calc_bin_search2(int inputValue) {
    int left = 0;
    // inputValue / 2 is a safe estimate, the answer for 100 is 27 or 28
    int right = inputValue < 100 ? inputValue : inputValue / 2;
    // for big inputValue reduce right more aggressively before starting the binary search
    if (inputValue > 1000) {
        while (true) {
            int test = right / 8;
            int tv = sum(test);
            if (tv > inputValue)
                right = test;
            else {
                left = test;
                break;
            }
        }
    }
    // just the usual binary search
    while (true) {
        int mid = (left + right) / 2;
        int mv = sum(mid);
        if (mv == inputValue)
            return mid;
        else if (mid == left) {
            return mid + 1;
        } else if (mv < inputValue)
            left = mid;
        else
            right = mid;
    }
}
Note: the return mid + 1 mirrors your original logic, which returns one step after the last loss was subtracted.
In my tests this implementation matches the output of calculate_double, has roughly the same performance for inputValue under 1000, is about 50 times faster for values around 1_000_000, and about 200 times faster for values around 1_000_000_000.

Related

Fast random/mutation algorithms (vector to vector) [duplicate]

I've been trying to create a generalized Gradient Noise generator (which doesn't use the hash method to get gradients). The code is below:
#include <cstdint>
#include <random>
#include <array>
#include <cmath>
#include <glm/glm.hpp>

class GradientNoise {
    std::uint64_t m_seed;
    std::uniform_int_distribution<std::uint8_t> distribution;
    const std::array<glm::vec2, 4> vector_choice = {glm::vec2(1.0, 1.0), glm::vec2(-1.0, 1.0), glm::vec2(1.0, -1.0),
                                                    glm::vec2(-1.0, -1.0)};
public:
    GradientNoise(uint64_t seed) {
        m_seed = seed;
        distribution = std::uniform_int_distribution<std::uint8_t>(0, 3);
    }

    // 0 -> 1
    // just passes the value through; originally was the Perlin noise activation
    double nonLinearActivationFunction(double value) {
        //return value * value * value * (value * (value * 6.0 - 15.0) + 10.0);
        return value;
    }

    // 0 -> 1
    // cosine interpolation
    double interpolate(double a, double b, double t) {
        double mu2 = (1 - cos(t * M_PI)) / 2;
        return (a * (1 - mu2) + b * mu2);
    }

    double noise(double x, double y) {
        std::mt19937_64 rng;
        // first get the bottom left corner associated with these coordinates
        int corner_x = std::floor(x);
        int corner_y = std::floor(y);
        // then get the respective distance from that corner
        double dist_x = x - corner_x;
        double dist_y = y - corner_y;
        double corner_0_contrib; // bottom left
        double corner_1_contrib; // top left
        double corner_2_contrib; // top right
        double corner_3_contrib; // bottom right
        std::uint64_t s1 = ((std::uint64_t(corner_x) << 32) + std::uint64_t(corner_y) + m_seed);
        std::uint64_t s2 = ((std::uint64_t(corner_x) << 32) + std::uint64_t(corner_y + 1) + m_seed);
        std::uint64_t s3 = ((std::uint64_t(corner_x + 1) << 32) + std::uint64_t(corner_y + 1) + m_seed);
        std::uint64_t s4 = ((std::uint64_t(corner_x + 1) << 32) + std::uint64_t(corner_y) + m_seed);
        // each xy pair turns into a distance vector from the respective corner;
        // corner zero is our starting corner (bottom left)
        rng.seed(s1);
        corner_0_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x, dist_y});
        rng.seed(s2);
        corner_1_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x, dist_y - 1});
        rng.seed(s3);
        corner_2_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x - 1, dist_y - 1});
        rng.seed(s4);
        corner_3_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x - 1, dist_y});
        double u = nonLinearActivationFunction(dist_x);
        double v = nonLinearActivationFunction(dist_y);
        double x_bottom = interpolate(corner_0_contrib, corner_3_contrib, u);
        double x_top = interpolate(corner_1_contrib, corner_2_contrib, u);
        double total_xy = interpolate(x_bottom, x_top, v);
        return total_xy;
    }
};
I then generate an OpenGL texture to display with like this:
int width = 1024;
int height = 1024;
unsigned char *temp_texture = new unsigned char[width * height * 4];
double octaves[5] = {2, 4, 8, 16, 32};
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        double d_noise = 0;
        d_noise += temp_1.noise(j / octaves[0], i / octaves[0]);
        d_noise += temp_1.noise(j / octaves[1], i / octaves[1]);
        d_noise += temp_1.noise(j / octaves[2], i / octaves[2]);
        d_noise += temp_1.noise(j / octaves[3], i / octaves[3]);
        d_noise += temp_1.noise(j / octaves[4], i / octaves[4]);
        d_noise /= 5;
        uint8_t noise = static_cast<uint8_t>((d_noise * 128.0) + 128.0);
        temp_texture[j * 4 + (i * width * 4) + 0] = noise;
        temp_texture[j * 4 + (i * width * 4) + 1] = noise;
        temp_texture[j * 4 + (i * width * 4) + 2] = noise;
        temp_texture[j * 4 + (i * width * 4) + 3] = 255;
    }
}
Which gives good results:
But gprof is telling me that the Mersenne Twister is taking up 62.4% of my time and growing with larger textures. Nothing else individually takes anywhere near as much time. While the Mersenne Twister is fast after initialization, the fact that I initialize it every time I use it seems to make it pretty slow.
This initialization is 100% required to make sure that the same x and y generate the same gradient at each integer point (so you need either a hash function or to seed the RNG each time).
I attempted to change the PRNG to both the linear congruential generator and Xorshiftplus, and while both ran orders of magnitude faster, they gave odd results:
LCG (one time, then running 5 times before using)
Xorshiftplus
After one iteration
After 10,000 iterations.
I've tried:
Running the generator several times before using its output; this results in slow execution or simply different artifacts.
Using the output of two consecutive runs after the initial seed to seed the PRNG again and using that value afterwards. No difference in the result.
What is happening? What can I do to get faster results that are of the same quality as the Mersenne Twister?
OK BIG UPDATE:
I don't know why this works. I know it has something to do with the prime number used, but after messing around a bit, the following appears to work:
Step 1: incorporate the x and y values as seeds separately (and incorporate some other offset or additional seed value with them; this number should be a prime/non-trivial factor).
Step 2: use those two seed results to seed the generator again, feeding them back into the function (so, as geza said, the seeds I made were bad).
Step 3: when getting the result, instead of taking it modulo the number of items wanted (4), or & 3, take the result modulo a prime first and then apply & 3. I'm not sure whether the prime being a Mersenne prime matters.
Here is the result with prime = 257 and xorshiftplus being used! (note I used 2048 by 2048 for this one, the others were 256 by 256)
LCG is known to be inadequate for your purpose.
Xorshift128+'s results are bad because it needs good seeding, and providing good seeding defeats the whole purpose of using it. I don't recommend this.
However, I recommend using an integer hash. For example, one from Bob's page.
Here's a result of the first hash of that page, it looks OK to me, and it is fast (I think it is much faster than Mersenne Twister):
Here's the code I've written to generate this:
#include <cmath>
#include <stdio.h>
unsigned int hash(unsigned int a) {
    a = (a ^ 61) ^ (a >> 16);
    a = a + (a << 3);
    a = a ^ (a >> 4);
    a = a * 0x27d4eb2d;
    a = a ^ (a >> 15);
    return a;
}

unsigned int ivalue(int x, int y) {
    return hash(y << 16 | x) & 0xff;
}

float smooth(float x) {
    return 6*x*x*x*x*x - 15*x*x*x*x + 10*x*x*x;
}

float value(float x, float y) {
    int ix = floor(x);
    int iy = floor(y);
    float fx = smooth(x - ix);
    float fy = smooth(y - iy);
    int v00 = ivalue(iy + 0, ix + 0);
    int v01 = ivalue(iy + 0, ix + 1);
    int v10 = ivalue(iy + 1, ix + 0);
    int v11 = ivalue(iy + 1, ix + 1);
    float v0 = v00 * (1 - fx) + v01 * fx;
    float v1 = v10 * (1 - fx) + v11 * fx;
    return v0 * (1 - fy) + v1 * fy;
}

unsigned char pic[1024*1024];

int main() {
    for (int y = 0; y < 1024; y++) {
        for (int x = 0; x < 1024; x++) {
            float v = 0;
            for (int o = 0; o <= 9; o++) {
                v += value(x/64.0f*(1<<o), y/64.0f*(1<<o))/(1<<o);
            }
            int r = rint(v*0.5f);
            pic[y*1024+x] = r;
        }
    }
    FILE *f = fopen("x.pnm", "wb");
    fprintf(f, "P5\n1024 1024\n255\n");
    fwrite(pic, 1, 1024*1024, f);
    fclose(f);
}
If you want to understand how a hash function works (or better yet, which properties a good hash has), check out Bob's page, for example this.
You (unknowingly?) implemented a visualization of PRNG non-random patterns. That looks very cool!
Except for the Mersenne Twister, none of your tested PRNGs seems fit for your purpose. As I have not done further tests myself, I can only suggest trying out and measuring further PRNGs.
The randomness of LCGs is known to be sensitive to the choice of their parameters. In particular, the period of an LCG is at most m (your prime factor), and for many parameter values it can be less.
Similarly, careful parameter selection is required to get a long period out of Xorshift PRNGs.
You've noted that some PRNGs give good procedural generation results while others do not. In order to isolate the cause, I would factor out the proc gen stuff and examine the PRNG output directly. An easy way to visualize the data is to build a grayscale image where each pixel value is a (possibly scaled) random value. For image-based stuff, I find this an easy way to find problems that may lead to visual artifacts. Any artifacts you see with this are likely to cause issues in your proc gen output.
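A sketch of that grayscale test (in Java for consistency with the rest of the thread; the integer hash is the one from the other answer, and binary PGM is just a convenient output format):

```java
import java.io.FileOutputStream;
import java.io.IOException;

public class NoiseImage {
    // integer hash from the other answer (Bob Jenkins style)
    static int hash(int a) {
        a = (a ^ 61) ^ (a >>> 16);
        a = a + (a << 3);
        a = a ^ (a >>> 4);
        a = a * 0x27d4eb2d;
        a = a ^ (a >>> 15);
        return a;
    }

    public static void main(String[] args) {
        int w = 256, h = 256;
        byte[] pix = new byte[w * h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                pix[y * w + x] = (byte) (hash((y << 16) | x) & 0xff); // one gray level per raw value
        // write a binary PGM; visible structure in this image means a weak generator/hash
        try (FileOutputStream f = new FileOutputStream("prng.pgm")) {
            f.write(("P5\n" + w + " " + h + "\n255\n").getBytes());
            f.write(pix);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```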
Another option is to try something like the Diehard tests. If the aforementioned image test failed to reveal any problems, I might use this just to be sure my PRNG techniques were trustworthy.
Note that your code seeds the PRNG, then generates one pseudorandom number from it. The reason for the nonrandomness in xorshift128+ that you discovered is that xorshift128+ simply adds the two halves of the seed (and uses the result mod 2^64 as the generated number) before changing its state (review its source code). This makes that PRNG considerably different from a hash function.
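That addition of the seed halves is easy to see in code; here is a minimal Java sketch of one xorshift128+ step (the shift constants 23/18/5 follow Vigna's published variant, which may differ from the exact implementation the OP used):

```java
public class XorShift128Plus {
    private long s0, s1;

    XorShift128Plus(long seed0, long seed1) {
        s0 = seed0;
        s1 = seed1;
    }

    long next() {
        long x = s0;
        long y = s1;
        long result = x + y; // the output is the sum of the raw state words...
        s0 = y;              // ...computed BEFORE the state is mixed
        x ^= x << 23;
        s1 = x ^ y ^ (x >>> 18) ^ (y >>> 5);
        return result;
    }

    public static void main(String[] args) {
        // two nearby seeds produce nearly identical first outputs:
        System.out.println(new XorShift128Plus(1, 2).next()); // 3
        System.out.println(new XorShift128Plus(1, 3).next()); // 4
    }
}
```

So the first output after seeding is entirely determined by a plain addition, which is why seeding per pixel and taking one value shows strong patterns.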
What you see is a practical demonstration of PRNG quality. The Mersenne Twister is one of the best PRNGs with good performance; it passes the Diehard tests. One should know that generating random numbers is not an easy computational task, so looking for better performance will inevitably cost quality. The LCG is known to be the simplest and worst PRNG ever designed, and it clearly shows two-dimensional correlation as in your picture. The quality of Xorshift generators largely depends on bitness and parameters. They are definitely worse than the Mersenne Twister, but some (xorshift128+) may work well enough to pass the BigCrush battery of TestU01 tests.
In other words, if you are running an important physical-modelling numerical experiment, you had better continue to use the Mersenne Twister, as it is known to be a good trade-off between speed and quality and comes in many standard libraries. In less important cases you may try the xorshift128+ generator. For the ultimate results you would need a cryptographic-quality PRNG (none of those mentioned here may be used for cryptographic purposes).

How to find trend (growth/decrease/stationarity) of a data series

I am trying to extract the OEE trend of a manufacturing machine. I already have a dataset of OEE calculated more or less every 30 seconds for each manufacturing machine and stored in a database.
What I want to do is to extract a subset of the dataset (say, the last 30 minutes) and state whether the OEE has grown, decreased or been stable (within a certain threshold). My task is NOT to forecast the next value of OEE, but just to know whether it has decreased (desired return value: -1), grown (desired return value: +1) or been stable (desired return value: 0) based on the dataset. I am using Java 8 in my project.
Here is an example of dataset:
71.37
71.37
70.91
70.30
70.30
70.42
70.42
69.77
69.77
69.29
68.92
68.92
68.61
68.61
68.91
68.91
68.50
68.71
69.27
69.26
69.89
69.85
69.98
69.93
69.39
68.97
69.03
From this dataset it is possible to state that the OEE has been decreasing (of course based on a threshold), thus the algorithm would return -1.
I have been searching the web unsuccessfully. I have found this, this github project, and this stackoverflow question. However, all of those are (more or less) complex forecasting algorithms. I am searching for a much simpler solution. Any help is appreciated.
You could go for a sliding average of the last n values, or a sliding median of the last n values.
It highly depends on your application which is appropriate, but both are very simple to implement and in a lot of cases more than good enough.
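Another equally simple option in the same spirit is to fit a least-squares slope over the window and threshold it. A sketch (the class name and threshold value are mine, not from the question; tune the threshold for your OEE scale):

```java
import java.util.Arrays;

public class TrendDetector {
    // least-squares slope of the series against its sample index
    static double slope(double[] data) {
        int n = data.length;
        double meanX = (n - 1) / 2.0;
        double meanY = Arrays.stream(data).average().orElse(0);
        double num = 0, den = 0;
        for (int i = 0; i < n; i++) {
            num += (i - meanX) * (data[i] - meanY);
            den += (i - meanX) * (i - meanX);
        }
        return num / den;
    }

    // -1 falling, 0 stable, +1 rising; threshold is the "stable" band per sample
    static int trend(double[] data, double threshold) {
        double s = slope(data);
        return s < -threshold ? -1 : s > threshold ? 1 : 0;
    }

    public static void main(String[] args) {
        double[] oee = {71.37, 71.37, 70.91, 70.30, 70.30, 70.42,
                        69.77, 69.29, 68.92, 68.61};
        System.out.println(trend(oee, 0.05)); // prints -1: a clearly falling window
    }
}
```

The slope is robust to single outliers in a way a first-vs-last comparison is not, yet still a few lines of Java 8.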
As you know from math, one would use d/dt, which more or less means using the step differences.
A trend should also have some weight.
import java.util.Arrays;
import java.util.stream.DoubleStream;
import java.util.stream.IntStream;

class Trend {
    int direction;
    double probability;
}

Trend trend(double[] lastData) {
    // step differences: deltas[i] = lastData[i+1] - lastData[i]
    double[] deltas = Arrays.copyOf(lastData, lastData.length - 1);
    for (int i = 0; i < deltas.length; ++i) {
        deltas[i] = lastData[i + 1] - deltas[i];
    }
    // Trend based on two parts:
    int parts = 2;
    int splitN = (deltas.length + 1) / parts;
    int i = 0;
    int[] trends = new int[parts];
    for (int j = 0; j < parts; ++j) {
        int n = Math.min(splitN, deltas.length - i);
        double partAvg = DoubleStream.of(deltas).skip(i).limit(n).sum() / n;
        trends[j] = tendency(partAvg);
        i += n;
    }
    Trend result = new Trend();
    result.direction = trends[parts - 1];
    double avg = IntStream.of(trends).average().orElse(result.direction);
    result.probability = ((result.direction - avg) + 1) / 2;
    return result;
}

int tendency(double sum) {
    final double EPS = 0.0001;
    return sum < -EPS ? -1 : sum > EPS ? 1 : 0;
}
This is not very sophisticated. For more elaborate treatment a math forum might be useful.

Algorithm for calculating trigonometry, logarithms or something like that. ONLY addition-subtraction

I am restoring the Ascota 170 antique mechanical programmable computer. It is already working.
Now I’m looking for an algorithm to demonstrate its capabilities — like calculating trigonometric or logarithmic tables. Or something like that.
Unfortunately, of the mathematical operations the computer is only capable of adding and subtracting integers (55 registers from -1E12 to 1E12). There is not even a shift-by-digit operation, so multiplication can be implemented programmatically only for very small multipliers.
But its logical operations are very well developed.
Could you advise me any suitable algorithm?
So what you're doing is really kinda awesome. And as it happens, I can explain quite a bit about how to implement fractional logarithms using only integer addition and subtraction! This post is going to be long, but there's lots of detail included, and a working implementation at the end, and it should be enough for you to do some fun things with your weird mechanical computer.
Implementing Comparisons
You're going to need to be able to compare numbers. While you said you can perform comparisons == 0 and > 0, that's not really quite enough for most of the interesting algorithms you'll want to implement. You need relative comparisons, which can be determined via subtraction:
isLessThan(a, b):
    diff = b - a
    if diff > 0 then return true
    else return false

isGreaterThan(a, b):
    diff = a - b
    if diff > 0 then return true
    else return false

isLessThanOrEqual(a, b):
    diff = a - b
    if diff > 0 then return false
    else return true

isGreaterThanOrEqual(a, b):
    diff = b - a
    if diff > 0 then return false
    else return true
For the rest of this post, I'm just going to write the simpler form of a > b, but if you can't do that directly, you can substitute in one of the operations above.
Implementing Shifts
Now, since you don't have digit-shifting hardware, you'll have to create "routines" to implement it. A left-shift is easy: add the number to itself (2x), add that result to itself (4x), add the original number (5x), and then add that result to itself one more time (10x); that's the equivalent of shifting left by 1 digit.
So shift left by one digit, or multiply-by-ten:
shiftLeft(value):
    value2 = value + value
    value4 = value2 + value2
    value5 = value4 + value
    return value5 + value5
Shifting by many digits is just repeated invocation of shiftLeft():
shl(value, count):
  repeat:
    if count <= 0 then goto done
    value = shiftLeft(value)
    count = count - 1
    goto repeat
  done:
    return value
Shifting right by one digit is a little harder: We need to do this with repeated subtraction and addition, as in the pseudocode below:
shr(value, count):
    if count == 0 then return value
    index = 11
    shifted = 0
  repeat1:
    if index < 0 then goto done
    adder = shl(1, index - count)
    subtractor = shl(adder, count)
  repeat2:
    if value < subtractor then goto next
    value = value - subtractor
    shifted = shifted + adder
    goto repeat2
  next:
    index = index - 1
    goto repeat1
  done:
    return shifted
Conveniently, since it's hard to shift right in the first place, the algorithm lets us directly choose how many digits to shift by.
Multiplication
It looks like your hardware might have multiplication? But if it doesn't, you can implement multiplication using repeated addition and shifting. Binary multiplication is the easiest form to implement that's actually efficient, and that requires us to first implement multiplyByTwo() and divideByTwo(), using the same basic techniques that we used to implement shiftLeft() and shr().
Once you have those implemented, multiplication involves repeatedly slicing off the last bit of one of the numbers, and if that bit is a 1, then adding a growing version of the other number to the running total:
multiply(a, b):
    product = 0
  repeat:
    if b <= 0 then goto done
    nextB = divideByTwo(b)
    bit = b - multiplyByTwo(nextB)
    if bit == 0 then goto skip
    product = product + a
  skip:
    a = a + a
    b = nextB
    goto repeat
  done:
    return product
A full implementation of this is included below, if you need it.
Integer Logarithms
We can use our ability to shift right by a digit to calculate the integer part of the base-10 logarithm of a number — this is really just how many times you can shift the number right before you reach a number too small to shift.
integerLogarithm(value):
    count = 0
  repeat:
    if value <= 9 then goto done
    value = shr(value, 1)
    count = count + 1
    goto repeat
  done:
    return count
So for 0-9, this returns 0; for 10-99, this returns 1; for 100-999 this returns 2, and so on.
Integer Exponents
The opposite of the above algorithm is pretty trivial: To calculate 10 raised to an integer power, we just shift the digits left by the power.
integerExponent(count):
    value = shl(1, count)
    return value
So for 0, this returns 1; for 1, this returns 10; for 2, this returns 100; for 3, this returns 1000; and so on.
Splitting the Integer and Fraction
Now that we can handle integer powers and logarithms, we're almost ready to handle the fractional part. But before we can really talk about how to compute the fractional part of the logarithm, we have to talk about how to divide up the problem so we can compute the fractional part separately from the integer part. Ideally, we only want to deal with computing logarithms for numbers in a fixed range — say, from 1 to 10, rather than from 1 to infinity.
We can use our integer logarithm and exponent routines to slice up the full logarithm problem so that we're always dealing with a value in the range of [1, 10), no matter what the input number was.
First, we calculate the integer logarithm, and then the integer exponent, and then we subtract that from the original number. Whatever is left over is the fractional part that we need to calculate: And then the only remaining exercise is to shift that fractional part so that it's always in a consistent range.
normalize(value):
    intLog = integerLogarithm(value) // from 0 to 12 (meaningful digits)
    if intLog <= 5 then goto lessThan
    value = shr(value, intLog - 5)
    goto done
  lessThan:
    value = shl(value, 5 - intLog)
  done:
    return value
You can convince yourself with relatively little effort that no matter what the original value was, its highest nonzero digit will be moved to column 7: So "12345" will become "000000123450" (i.e., "0000001.23450"). This allows us to pretend that there's always an invisible decimal point a little more than halfway down the number, so that now we only need to solve the problem of calculating logarithms of values in the range of [1, 10).
(Why "more than halfway"? We will need the upper half of the value to always be zero, and you'll see why in a moment.)
Fractional Logarithms
Knuth explains how to do this in The Art of Computer Programming, section 1.2.2. Our goal will be to calculate log10(x) so that for some values of b1, b2, b3 ... , where n is already 0 (because we split out the integer portion above):
log10(x) = n + b1/2 + b2/4 + b3/8 + b4/16 + ...
Knuth says that we can obtain b1, b2, b3 ... like this:
To obtain b1, b2, ..., we now set x0 = x / 10^n and, for k >= 1,
b[k] = 0, x[k] = x[k-1] ^ 2, if x[k-1] ^ 2 < 10;
b[k] = 1, x[k] = x[k-1] ^ 2 / 10, if x[k-1] ^ 2 >= 10.
That is to say, each step uses pseudocode loop something like this:
fractionalLogarithm(x):
    for i = 1 to numberOfBinaryDigitsOfPrecision:
        nextX = x * x
        if nextX < 10 then:
            b[i] = 0
        else:
            b[i] = 1
            nextX = nextX / 10
In order for this to work using the fixed-point numbers we have above, we have to implement x * x using a shift to move the decimal point back into place, which will lose some digits. This will cause error to propagate, as Knuth says, but it will give enough accuracy that it's good enough for demonstration purposes.
So given a fractional value generated by normalize(value), we can compute its fractional binary logarithm like this:
fractionalLogarithm(value):
    for i = 1 to 20:
        value = shr(value * value, 5)
        if value < 1000000 then:
            b[i] = 0
        else:
            b[i] = 1
            value = shr(value, 1)
But a binary fractional logarithm (individual bits!) isn't especially useful, especially since we computed a decimal version of the integer part of the logarithm in the earlier step. So we'll modify this one more time to calculate a decimal fractional logarithm, to five places, instead of an array of bits; for that, we'll need a table of 20 values that represent the conversions of each of those bits to decimal, and we'll store them as fixed-point as well:
table[1] = 1/(2^1) = 1/2 = 500000
table[2] = 1/(2^2) = 1/4 = 250000
table[3] = 1/(2^3) = 1/8 = 125000
table[4] = 1/(2^4) = 1/16 = 062500
table[5] = 1/(2^5) = 1/32 = 031250
table[6] = 1/(2^6) = 1/64 = 015625
...
table[17] = 1/(2^17) = 1/131072 = 000008
table[18] = 1/(2^18) = 1/262144 = 000004
table[19] = 1/(2^19) = 1/524288 = 000002
table[20] = 1/(2^20) = 1/1048576 = 000001
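These entries don't have to be hand-computed; the whole table can be generated. A quick sketch in Java (using the same 10^6 fixed-point scale as above):

```java
public class LogTableGen {
    // table[i] = 2^-i expressed in the 10^6 fixed-point scale of the fraction
    static long entry(int i) {
        return Math.round(1000000.0 / (1L << i));
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 20; i++) {
            System.out.printf("table[%d] = %06d%n", i, entry(i));
        }
    }
}
```

On the real machine these 20 constants would simply be keyed in as a precomputed table.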
So now with that table, we can produce the whole fractional logarithm, using pure integer math:
fractionalLogarithm(value):
    log = 0
    for i = 1 to 20:
        value = shr(value * value, 5)
        if value >= 1000000 then:
            log = log + table[i]
            value = shr(value, 1)
    return log
Putting It All Together
Finally, for a complete logarithm of any integer your machine can represent, this is the whole thing, which will compute the logarithm with six digits of precision, in the form "0000XX.XXXXXX":
log(value):
    intPart = integerLogarithm(value)
    value = normalize(value)
    fracPart = fractionalLogarithm(value)
    result = shl(intPart, 6) + fracPart
    return result
Demonstration
To show that the math works — and that it works pretty well! — below is a JavaScript implementation of the above algorithm. It uses pure integer math: Only addition, subtraction, and relative comparison. Functions are used to organize the code, but they behave like subroutines: They're not recursive, and don't nest very deeply.
You can try it out live (click the 'Run' button and type 12345 in the input field). Compare the result to the standard Math.log() function, and you'll see how close the pure-integer version gets:
function shiftLeft(value) {
    var value2 = value + value;
    var value4 = value2 + value2;
    var value5 = value4 + value;
    return value5 + value5;
}

function shl(value, count) {
    while (count > 0) {
        value = shiftLeft(value);
        count = count - 1;
    }
    return value;
}

function shr(value, count) {
    if (count == 0) return value;
    var index = 11;
    var shifted = 0;
    while (index >= 0) {
        var adder = shl(1, index - count);
        var subtractor = shl(adder, count);
        while (value >= subtractor) {
            value = value - subtractor;
            shifted = shifted + adder;
        }
        index = index - 1;
    }
    return shifted;
}
//-----------------------------------
function multiplyByTwo(value) {
    return value + value;
}

function multiplyByPowerOfTwo(value, count) {
    while (count > 0) {
        value = value + value;
        count = count - 1;
    }
    return value;
}

function divideByPowerOfTwo(value, count) {
    if (count == 0) return value;
    var index = 39; // lg(floor(pow(10, 12)))
    var shifted = 0;
    while (index >= 0) {
        var adder = multiplyByPowerOfTwo(1, index - count);
        var subtractor = multiplyByPowerOfTwo(adder, count);
        while (value >= subtractor) {
            value = value - subtractor;
            shifted = shifted + adder;
        }
        index = index - 1;
    }
    return shifted;
}

function divideByTwo(value) {
    return divideByPowerOfTwo(value, 1);
}

function multiply(a, b) {
    var product = 0;
    while (b > 0) {
        var nextB = divideByTwo(b);
        var bit = b - multiplyByTwo(nextB);
        if (bit != 0) {
            product += a;
        }
        a = a + a;
        b = nextB;
    }
    return product;
}
//-----------------------------------
var logTable = {
    "1": 500000,
    "2": 250000,
    "3": 125000,
    "4": 62500,
    "5": 31250,
    "6": 15625,
    "7": 7813,
    "8": 3906,
    "9": 1953,
    "10": 977,
    "11": 488,
    "12": 244,
    "13": 122,
    "14": 61,
    "15": 31,
    "16": 15,
    "17": 8,
    "18": 4,
    "19": 2,
    "20": 1,
};
//-----------------------------------
function integerLogarithm(value) {
    var count = 0;
    while (value > 9) {
        value = shr(value, 1);
        count = count + 1;
    }
    return count;
}

function normalize(value) {
    var intLog = integerLogarithm(value);
    if (intLog > 5)
        value = shr(value, intLog - 5);
    else
        value = shl(value, 5 - intLog);
    return value;
}

function fractionalLogarithm(value) {
    var log = 0;
    for (var i = 1; i <= 20; i++) {
        var squaredValue = multiply(value, value);
        value = shr(squaredValue, 5);
        if (value >= 1000000) {
            log = log + logTable[i];
            value = shr(value, 1);
        }
    }
    return log;
}

function log(value) {
    var intPart = integerLogarithm(value);
    value = normalize(value);
    var fracPart = fractionalLogarithm(value);
    var result = shl(intPart, 6) + fracPart;
    return result;
}
//-----------------------------------
// Just a little jQuery event handling to wrap a UI around the above functions.
$("#InputValue").on("keydown keyup keypress focus blur", function(e) {
    var inputValue = Number(this.value.replace(/[^0-9]+/g, ''));
    var outputValue = log(inputValue);
    $("#OutputValue").text(outputValue / 1000000);
    var trueResult = Math.floor((Math.log(inputValue) / Math.log(10)) * 1000000 + 0.5) / 1000000;
    $("#TrueResult").text(trueResult);
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
Input integer: <input type="text" id="InputValue" /><br /><br />
Result using integer algorithm: <span id="OutputValue"></span><br /><br />
True logarithm: <span id="TrueResult"></span><br />
As I mentioned in your original question on SE/RC, for pow, sqrt, n-th root, log and exp see:
Power by squaring for negative exponents
and all the sub-links in there.
Once you have working *, /, <<, >> (which the other answer covers well), and maybe fixed-point arithmetic instead of floating-point, you can also start computing goniometric functions. For that, the best choice is Chebyshev series, but as I lack the math behind them I can only use already precomputed ones... Taylor series are common knowledge, so computing them should be easy. Here is what I coded for my arithmetics template to cover math for arbitrary data types (bignums):
// Taylor goniometric https://en.wikipedia.org/wiki/Taylor_series
friend T sin (const T &x) // = sin(x)
{
int i; T z,dz,x2,a,b;
x2=x/(pi+pi); x2-=::integer(x2); x2*=pi+pi;
for (z=x2,a=x2,b=1,x2*=x2,i=2;;)
{
a*=x2; b*=i; i++; b*=i; i++; dz=a/b; z-=dz;
a*=x2; b*=i; i++; b*=i; i++; dz=a/b; z+=dz;
if (::abs(dz)<zero) break;
}
return z;
}
friend T cos (const T &x) // = cos(x)
{
int i; T z,dz,x2,a,b;
x2=x/(pi+pi); x2-=::integer(x2); x2*=pi+pi;
for (z=1,a=1,b=1,x2*=x2,i=1;;)
{
a*=x2; b*=i; i++; b*=i; i++; dz=a/b; z-=dz;
a*=x2; b*=i; i++; b*=i; i++; dz=a/b; z+=dz;
if (::abs(dz)<zero) break;
}
return z;
}
friend T tan (const T &x) // = tan(x)
{
    int i; T z0,z1,dz,x1,x2,a,b;
    x1=x/pi; x1-=::integer(x1); x1*=pi; x2=x1*x1;
    for (z0=1,z1=1,a=1,b=1,i=2;;)
    {
        a*=x2; b*=i; i++; dz=a/b; z0-=dz; // z0=cos(x)
        b*=i; i++; dz=a/b; z1-=dz;        // z1=sin(x)/x
        a*=x2; b*=i; i++; dz=a/b; z0+=dz;
        b*=i; i++; dz=a/b; z1+=dz;
        if (::abs(dz)<zero) break;
    }
    return (x1*z1)/z0;
}
friend T ctg (const T &x) // = cotan(x)
{
    int i; T z0,z1,dz,x1,x2,a,b;
    x1=x/pi; x1-=::integer(x1); x1*=pi; x2=x1*x1;
    for (z0=1,z1=1,a=1,b=1,i=2;;)
    {
        a*=x2; b*=i; i++; dz=a/b; z0-=dz; // z0=cos(x)
        b*=i; i++; dz=a/b; z1-=dz;        // z1=sin(x)/x
        a*=x2; b*=i; i++; dz=a/b; z0+=dz;
        b*=i; i++; dz=a/b; z1+=dz;
        if (::abs(dz)<zero) break;
    }
    return z0/(x1*z1);
}
friend T asin (const T &x) // = asin(x)
{
    if (x<=-1.0) return -0.5*pi;
    if (x>=+1.0) return +0.5*pi;
    return ::atan(x/::sqrt(1.0-(x*x)));
}
friend T acos (const T &x){ T z; z=0.5*pi-::asin(x); return z; } // = acos(x)
friend T atan (const T &x) // = atan(x)
{
    bool _shift=false;
    bool _invert=false;
    bool _negative=false;
    T z,dz,x1,x2,a,b; x1=x;
    if (x1<0.0) { _negative=true; x1=-x1; }
    if (x1>1.0) { _invert=true; x1=1.0/x1; }
    if (x1>0.7) { _shift=true; b=::sqrt(3.0)/3.0; x1=(x1-b)/(1.0+(x1*b)); }
    for (x2=x1*x1,z=x1,a=x1,b=1;;) // if x1>0.8 convergence is slow
    {
        a*=x2; b+=2; dz=a/b; z-=dz;
        a*=x2; b+=2; dz=a/b; z+=dz;
        if (::abs(dz)<zero) break;
    }
    if (_shift) z+=pi/6.0;
    if (_invert) z=0.5*pi-z;
    if (_negative) z=-z;
    return z;
}
friend T actg (const T &x){ T z; z=::atan(1.0/x); return z; } // = acotan(x)
friend T atan2 (const T &y,const T &x){ return atanxy(x,y); } // = atan(y/x)
friend T atanxy (const T &x,const T &y) // = atan(y/x)
{
    int sx,sy; T a;
    T _zero=1.0e-30;
    sx=0; if (x<-_zero) sx=-1; if (x>+_zero) sx=+1;
    sy=0; if (y<-_zero) sy=-1; if (y>+_zero) sy=+1;
    if ((sy==0)&&(sx==0)) return 0.0;
    if ((sx==0)&&(sy> 0)) return 0.5*x.pi;
    if ((sx==0)&&(sy< 0)) return 1.5*x.pi;
    if ((sy==0)&&(sx> 0)) return 0.0;
    if ((sy==0)&&(sx< 0)) return x.pi;
    a=y/x; if (a<0) a=-a;
    a=::atan(a);
    if ((sx>0)&&(sy>0)) a=a;
    if ((sx<0)&&(sy>0)) a=x.pi-a;
    if ((sx<0)&&(sy<0)) a=x.pi+a;
    if ((sx>0)&&(sy<0)) a=x.pi+x.pi-a;
    return a;
}
As I mentioned, you need to use floating or fixed point for this, as the results are not integers!
But as I mentioned before, CORDIC is better suited for computing on integers (if you search, there were some Q&As here on SE/SO with C++ code for this).
IIRC it exploits some (arc)tan angle-summation identity that leads to a delta angle nicely computable on integers, something like sqrt(1+x*x). With binary search or approximation/iteration you can compute the tan of any angle, and using goniometric identities you can compute any cotan, sin and cos ... But I might be wrong, as I do not use CORDIC and read about it a long time ago.
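For a taste of the rotation idea, here is a minimal CORDIC sketch in Python. It uses floating point for readability; real CORDIC implementations use fixed-point integers, where each multiply by 2^-i becomes a right shift. Treat it as an illustrative sketch, not the exact algorithm alluded to above:

```python
import math

# Precomputed table of atan(2^-i) and the CORDIC gain
# K = product of cos(atan(2^-i)) over all iterations.
ATAN_TABLE = [math.atan(2.0 ** -i) for i in range(32)]
K = 1.0
for i in range(32):
    K *= 1.0 / math.sqrt(1.0 + 4.0 ** -i)

def cordic_sin_cos(theta):
    """Rotate the vector (K, 0) toward angle theta one table entry at a time.

    Valid for |theta| <= sum(ATAN_TABLE), about 1.74 rad."""
    x, y = K, 0.0
    for i, a in enumerate(ATAN_TABLE):
        d = 1.0 if theta >= 0.0 else -1.0   # rotate toward the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        theta -= d * a
    return y, x  # (sin(theta), cos(theta))
```

In a fixed-point version the `2.0 ** -i` multiplies become arithmetic right shifts, which is exactly why CORDIC suits integer-only hardware.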
Anyway, once you have got some function, its inverse can usually be computed with binary search.
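As a sketch of that last point, inverting a non-decreasing integer function with binary search looks like this (generic Python, not tied to any particular function from this answer):

```python
def invert_monotone(f, y, lo=0, hi=1 << 62):
    """Return the largest x in [lo, hi] with f(x) <= y,
    assuming f is non-decreasing on that range."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if f(mid) <= y:
            lo = mid          # mid is still feasible, keep it
        else:
            hi = mid - 1      # mid overshoots, discard it
    return lo

def isqrt(n):
    # integer square root obtained by inverting x -> x*x
    return invert_monotone(lambda x: x * x, n)
```

The same pattern recovers log from exp, n-th root from pow, and so on, at the cost of one function evaluation per halving step.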

How much time will a typical home computer take to perform calculations of millions of digits?

I know that machines find it difficult to make calculations involving very large numbers.
Let's say I want to find square of a million digit number. Will a typical computer give an answer almost instantly? How much time does it take for them to handle million digit calculations?
Also what is the reason for them to be slow in such calculations?
I found some calculator websites which claim that they can do the task instantly. Will a computer become faster if they use the method those websites use?
On my PC it takes more than 21 minutes to take the square root of a number with 1 million digits. See the details below. It should be possible to achieve faster times, but "almost instantly" is probably not feasible without making use of special hardware (like graphics boards with CUDA support).
I have written a test program in C# to find the runtimes for calculating the square root with Newton's method. It uses the System.Numerics library which features the BigInteger class for arbitrary accuracy arithmetic.
The runtime depends on the initial value assumed for the iterative calculation method. Looking for the highest non-zero bit of the number turned out to be faster than simply always using 1 as the initial value.
using System;
using System.Diagnostics;
using System.Numerics;

namespace akBigSquareRoot
{
    class Program
    {
        static void Main(string[] args)
        {
            Stopwatch stopWatch = new Stopwatch();
            Console.WriteLine(" nDigits error iterations elapsed ");
            Console.WriteLine("-----------------------------------------");
            for (int nDigits = 10; nDigits <= 1e6; nDigits *= 10)
            {
                // create a base number with nDigits/2 digits
                BigInteger x = 1;
                for (int i = 0; i < nDigits / 2; i++)
                {
                    x *= 10;
                }
                BigInteger square = x * x;
                stopWatch.Restart();
                int iterations;
                BigInteger root = sqrt(square, out iterations);
                stopWatch.Stop();
                BigInteger error = x - root;
                TimeSpan ts = stopWatch.Elapsed;
                string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
                    ts.Hours, ts.Minutes, ts.Seconds,
                    ts.Milliseconds / 10);
                Console.WriteLine("{0,8} {1,6} {2,6} {3}", nDigits, error, iterations, elapsedTime);
            }
            Console.WriteLine("\n<end reached>");
            Console.ReadKey();
        }

        public static BigInteger sqrt(BigInteger x, out int iterations)
        {
            BigInteger div = BigInteger.One << (bitLength(x) / 2);
            // BigInteger div = 1;
            BigInteger div2 = div;
            BigInteger y;
            // Loop until we hit the same value twice in a row, or wind
            // up alternating.
            iterations = 0;
            while (true)
            {
                iterations++;
                y = (div + (x / div)) >> 1;
                if ((y == div) || (y == div2))
                    return y;
                div2 = div;
                div = y;
            }
        }

        private static int bitLength(BigInteger x)
        {
            int len = 0;
            do
            {
                len++;
            } while ((x >>= 1) != 0);
            return len;
        }
    }
}
The results on a DELL XPS 8300 with Intel Core i7-2600 CPU 3.40 GHz
nDigits error iterations elapsed
----------------------------------------
10 0 4 00:00:00.00
100 0 7 00:00:00.00
1000 0 10 00:00:00.00
10000 0 14 00:00:00.09
100000 0 17 00:00:09.81
1000000 0 20 00:21:18.38
Increasing the number of digits by a factor of 10 results in three additional iterations in the search procedure. But due to the increased bit-length, the individual iterations are slowed down substantially.
The computational complexity of calculating square (and higher degree) roots is discussed in a related post.
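For comparison, the same Newton iteration can be sketched with Python's arbitrary-precision integers (an illustrative port of the C# above, not benchmarked; like the original, it is exact for perfect squares but may land on either neighbour of the true root when the iteration alternates):

```python
def big_sqrt(x):
    """Integer square root by Newton's method, seeded from the bit length."""
    if x < 2:
        return x
    div = 1 << (x.bit_length() // 2)   # start near the highest non-zero bit
    div2 = div
    while True:
        y = (div + x // div) >> 1
        if y == div or y == div2:      # converged, or started alternating
            return y
        div2, div = div, y
```

Modern CPython even ships `math.isqrt` for exactly this, which is the easier choice when a library function is acceptable.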

Finding the number of digits of an integer

What is the best method to find the number of digits of a positive integer?
I have found these 3 basic methods:
conversion to string
String s = new Integer(t).toString();
int len = s.length();
for loop
for(long long int temp = number; temp >= 1;)
{
    temp/=10;
    decimalPlaces++;
}
logarithmic calculation
digits = floor( log10( number ) ) + 1;
where you can calculate log10(x) = ln(x) / ln(10) in most languages.
First I thought the string method is the dirtiest one but the more I think about it the more I think it's the fastest way. Or is it?
There's always this method:
n = 1;
if ( i >= 100000000 ) { n += 8; i /= 100000000; }
if ( i >= 10000 ) { n += 4; i /= 10000; }
if ( i >= 100 ) { n += 2; i /= 100; }
if ( i >= 10 ) { n += 1; }
Well, the correct answer would be to measure it - but you should be able to make a guess about the number of CPU steps involved in converting strings and going through them looking for an end marker.
Then think how many FPU operations/s your processor can do and how easy it is to calculate a single log.
edit: wasting some more time on a Monday morning :-)
String s = new Integer(t).toString();
int len = s.length();
One of the problems with high level languages is guessing how much work the system is doing behind the scenes of an apparently simple statement. Mandatory Joel link
This statement involves allocating memory for a string, and possibly a couple of temporary copies of a string. It must parse the integer and copy the digits of it into a string, possibly having to reallocate and move the existing memory if the number is large. It might have to check a bunch of locale settings to decide if your country uses "," or ".", it might have to do a bunch of unicode conversions.
Then finding the length has to scan the entire string, again considering unicode and any locale-specific settings, such as whether you are in a right-to-left language.
Alternatively:
digits = floor( log10( number ) ) + 1;
Just because this would be harder for you to do on paper doesn't mean it's hard for a computer! In fact a good rule in high performance computing seems to have been - if something is hard for a human (fluid dynamics, 3d rendering) it's easy for a computer, and if it's easy for a human (face recognition, detecting a voice in a noisy room) it's hard for a computer!
You can generally assume that the builtin maths functions - log/sin/cos etc. - have been an important part of computer design for 50 years. So even if they don't map directly into a hardware function in the FPU you can bet that the alternative implementation is pretty efficient.
I don't know, and the answer may well be different depending on how your individual language is implemented.
So, stress test it! Implement all three solutions. Run them on 1 through 1,000,000 (or some other huge set of numbers that's representative of the numbers the solution will be running against) and time how long each of them takes.
Pit your solutions against one another and let them fight it out. Like intellectual gladiators. Three algorithms enter! One algorithm leaves!
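As a concrete sketch of such a shoot-out (Python used purely for illustration; the ranking will differ by language and platform):

```python
import math
import time

def by_string(n):
    return len(str(n))

def by_division(n):
    d = 0
    while n >= 1:
        n //= 10
        d += 1
    return d

def by_log(n):
    return int(math.floor(math.log10(n))) + 1

def race(fns, numbers):
    # time each contender over the same inputs and report
    for fn in fns:
        start = time.perf_counter()
        for n in numbers:
            fn(n)
        print(fn.__name__, time.perf_counter() - start)

race([by_string, by_division, by_log], range(1, 100001))
```

Make sure the test inputs match the distribution your real code sees; small numbers and huge numbers can crown different winners.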
Test conditions
Decimal numeral system
Positive integers
Up to 10 digits
Language: ActionScript 3
Results
digits: [1,10],
no. of runs: 1,000,000
random sample: 8777509,40442298,477894,329950,513,91751410,313,3159,131309,2
result: 7,8,6,6,3,8,3,4,6,1
CONVERSION TO STRING: 724ms
LOGARITMIC CALCULATION: 349ms
DIV 10 ITERATION: 229ms
MANUAL CONDITIONING: 136ms
Note: Author refrains from making any conclusions for numbers with more than 10 digits.
Script
package {
    import flash.display.MovieClip;
    import flash.utils.getTimer;

    /**
     * #author Daniel
     */
    public class Digits extends MovieClip {
        private const NUMBERS : uint = 1000000;
        private const DIGITS : uint = 10;

        private var numbers : Array;
        private var digits : Array;

        public function Digits() {
            // ************* NUMBERS *************
            numbers = [];
            for (var i : int = 0; i < NUMBERS; i++) {
                var number : Number = Math.floor(Math.pow(10, Math.random()*DIGITS));
                numbers.push(number);
            }
            trace('Max digits: ' + DIGITS + ', count of numbers: ' + NUMBERS);
            trace('sample: ' + numbers.slice(0, 10));

            // ************* CONVERSION TO STRING *************
            digits = [];
            var time : Number = getTimer();
            for (var i : int = 0; i < numbers.length; i++) {
                digits.push(String(numbers[i]).length);
            }
            trace('\nCONVERSION TO STRING - time: ' + (getTimer() - time));
            trace('sample: ' + digits.slice(0, 10));

            // ************* LOGARITMIC CALCULATION *************
            digits = [];
            time = getTimer();
            for (var i : int = 0; i < numbers.length; i++) {
                digits.push(Math.floor( Math.log( numbers[i] ) / Math.log(10) ) + 1);
            }
            trace('\nLOGARITMIC CALCULATION - time: ' + (getTimer() - time));
            trace('sample: ' + digits.slice(0, 10));

            // ************* DIV 10 ITERATION *************
            digits = [];
            time = getTimer();
            var digit : uint = 0;
            for (var i : int = 0; i < numbers.length; i++) {
                digit = 0;
                for(var temp : Number = numbers[i]; temp >= 1;)
                {
                    temp/=10;
                    digit++;
                }
                digits.push(digit);
            }
            trace('\nDIV 10 ITERATION - time: ' + (getTimer() - time));
            trace('sample: ' + digits.slice(0, 10));

            // ************* MANUAL CONDITIONING *************
            digits = [];
            time = getTimer();
            var digit : uint;
            for (var i : int = 0; i < numbers.length; i++) {
                var number : Number = numbers[i];
                if (number < 10) digit = 1;
                else if (number < 100) digit = 2;
                else if (number < 1000) digit = 3;
                else if (number < 10000) digit = 4;
                else if (number < 100000) digit = 5;
                else if (number < 1000000) digit = 6;
                else if (number < 10000000) digit = 7;
                else if (number < 100000000) digit = 8;
                else if (number < 1000000000) digit = 9;
                else if (number < 10000000000) digit = 10;
                digits.push(digit);
            }
            trace('\nMANUAL CONDITIONING: ' + (getTimer() - time));
            trace('sample: ' + digits.slice(0, 10));
        }
    }
}
This algorithm might be good also, assuming that:
Number is integer and binary encoded (<< operation is cheap)
We don't known number boundaries
var num = 123456789L;
var len = 0;
var tmp = 1L;
while(tmp < num)
{
    len++;
    tmp = (tmp << 3) + (tmp << 1);
}
This algorithm should have speed comparable to the for-loop (2) provided above, but a bit faster, since it replaces the division with two bit-shifts and an addition.
As for the Log10 algorithm, it will give you only an approximate answer (close to the real one, but still approximate), since the analytic formula for computing the log function is an infinite series and can't be evaluated exactly (Wiki).
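The imprecision bites as soon as the integer no longer fits a double exactly: for example 10^18 - 1 (18 digits) rounds to 1.0e18 on conversion, so floor(log10(n)) + 1 can report 19. A safe integer-only variant is sketched below (Python, illustrative):

```python
import math

def digits_log(n):
    # naive version: n is rounded to the nearest double first, so e.g.
    # n = 10**18 - 1 (18 digits) becomes 1.0e18 and may be counted as 19
    return int(math.floor(math.log10(n))) + 1

def digits_exact(n):
    # integer-only: compare against growing powers of ten, no rounding anywhere
    d, power = 1, 10
    while n >= power:
        d += 1
        power *= 10
    return d
```

The exact version still does only about one comparison per digit, so it stays cheap even for big integers.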
Use the simplest solution in whatever programming language you're using. I can't think of a case where counting digits in an integer would be the bottleneck in any (useful) program.
C, C++:
char buffer[32];
int length = sprintf(buffer, "%ld", (long)123456789);
Haskell:
len = (length . show) 123456789
JavaScript:
length = String(123456789).length;
PHP:
$length = strlen(123456789);
Visual Basic (untested):
length = Len(str(123456789)) - 1
conversion to string: This will have to iterate through each digit, find the character that maps to the current digit, add a character to a collection of characters. Then get the length of the resulting String object. Will run in O(n) for n=#digits.
for-loop: will perform 2 mathematical operation: dividing the number by 10 and incrementing a counter. Will run in O(n) for n=#digits.
logarithmic: Will call log10 and floor, and add 1. Looks like O(1) but I'm not really sure how fast the log10 or floor functions are. My knowledge of this sort of things has atrophied with lack of use so there could be hidden complexity in these functions.
So I guess it comes down to: is looking up digit mappings faster than multiple mathematical operations or whatever is happening in log10? The answer will probably vary. There could be platforms where the character mapping is faster, and others where doing the calculations is faster. Also keep in mind that the first method creates a new String object that only exists for the purpose of getting the length. This will probably use more memory than the other two methods, but it may or may not matter.
You can obviously eliminate method 1 from the competition, because the atoi/toString algorithm it uses would be similar to method 2.
Method 3's speed depends on whether the code is being compiled for a system whose instruction set includes log base 10.
For very large integers, the log method is much faster. For instance, with a 2491327 digit number (the 11920928th Fibonacci number, if you care), Python takes several minutes to execute the divide-by-10 algorithm, and milliseconds to execute 1+floor(log(n,10)).
import math

def numdigits(n):
    return int(math.floor(math.log10(n))) + 1
Regarding the three methods you propose for "determining the number of digits necessary to represent a given number in a given base", I don't like any of them, actually; I prefer the method I give below instead.
Re your method #1 (strings): Anything involving converting back-and-forth between strings and numbers is usually very slow.
Re your method #2 (temp/=10): This is fatally flawed because it assumes that x/10 always means "x divided by 10". But in many programming languages (eg: C, C++), if "x" is an integer type, then "x/10" means "integer division", which isn't the same thing as floating-point division, and it introduces round-off errors at every iteration, and they accumulate in a recursive formula such as your solution #2 uses.
Re your method #3 (logs): it's buggy for large numbers (at least in C, and probably other languages as well), because floating-point data types tend not to be as precise as 64-bit integers.
Hence I dislike all 3 of those methods: #1 works but is slow, #2 is broken, and #3 is buggy for large numbers. Instead, I prefer this, which works for numbers from 0 up to about 18.44 quintillion:
unsigned NumberOfDigits (uint64_t Number, unsigned Base)
{
    unsigned Digits = 1;
    uint64_t Power = 1;
    while ( Number / Power >= Base )
    {
        ++Digits;
        Power *= Base;
    }
    return Digits;
}
Keep it simple:
long long int a = 223452355415634664;
int x;
for (x = 1; a >= 10; x++)
{
    a = a / 10;
}
printf("%d", x);
You can use a recursive solution instead of a loop, but somehow similar:
import scala.annotation.tailrec

@tailrec
def digits(i: Long, carry: Int = 1): Int = if (i < 10) carry else digits(i / 10, carry + 1)

digits(8345012978643L)
With longs, the picture might change - measure small and long numbers independently against different algorithms, and pick the appropriate one, depending on your typical input. :)
Of course nothing beats a switch:
switch (x) {
    case 0: case 1: case 2: case 3: case 4: case 5: case 6: case 7: case 8: case 9: return 1;
    case 10: case 11: // ...
    case 99: return 2;
    case 100: // you get the point :)
    default: return 10; // switch only over int
}
except a plain-o-array:
int [] size = {1,1,1,1,1,1,1,1,1,2,2,2,2,2,... };
int x = 234561798;
return size [x];
Some people will tell you to optimize the code-size, but yaknow, premature optimization ...
log(x,n)-mod(log(x,n),1)+1
Where x is the base and n is the number.
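In other words, this is just floor(log_x(n)) + 1, since subtracting mod(log, 1) strips the fractional part; e.g. in Python (illustrative, and subject to the floating-point caveats raised in other answers):

```python
import math

def num_digits(n, base=10):
    log = math.log(n, base)
    # log - mod(log, 1) is exactly floor(log) for positive log
    return int(log - math.fmod(log, 1)) + 1
```

The formula is base-agnostic, so the same line counts binary or hexadecimal digits as well.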
Here is the measurement in Swift 4.
Algorithms code:
extension Int {
    var numberOfDigits0: Int {
        var currentNumber = self
        var n = 1
        if (currentNumber >= 100000000) {
            n += 8
            currentNumber /= 100000000
        }
        if (currentNumber >= 10000) {
            n += 4
            currentNumber /= 10000
        }
        if (currentNumber >= 100) {
            n += 2
            currentNumber /= 100
        }
        if (currentNumber >= 10) {
            n += 1
        }
        return n
    }

    var numberOfDigits1: Int {
        return String(self).count
    }

    var numberOfDigits2: Int {
        var n = 1
        var currentNumber = self
        while currentNumber > 9 {
            n += 1
            currentNumber /= 10
        }
        return n
    }
}
Measurement code:
var timeInterval0 = Date()
for i in 0...10000 {
    i.numberOfDigits0
}
print("timeInterval0: \(Date().timeIntervalSince(timeInterval0))")

var timeInterval1 = Date()
for i in 0...10000 {
    i.numberOfDigits1
}
print("timeInterval1: \(Date().timeIntervalSince(timeInterval1))")

var timeInterval2 = Date()
for i in 0...10000 {
    i.numberOfDigits2
}
print("timeInterval2: \(Date().timeIntervalSince(timeInterval2))")
Output
timeInterval0: 1.92149806022644
timeInterval1: 0.557608008384705
timeInterval2: 2.83262193202972
On this measurement basis String conversion is the best option for the Swift language.
I was curious after seeing #daniel.sedlacek results so I did some testing using Swift for numbers having more than 10 digits. I ran the following script in the playground.
let base = [Double(100090000000), Double(100050000), Double(100050000), Double(100000200)]
var rar = [Double]()
for i in 1...10 {
    for d in base {
        let v = d*Double(arc4random_uniform(UInt32(1000000000)))
        rar.append(v*Double(arc4random_uniform(UInt32(1000000000))))
        rar.append(Double(1)*pow(1,Double(i)))
    }
}
print(rar)

var timeInterval = NSDate().timeIntervalSince1970
for d in rar {
    floor(log10(d))
}
var newTimeInterval = NSDate().timeIntervalSince1970
print(newTimeInterval-timeInterval)

timeInterval = NSDate().timeIntervalSince1970
for d in rar {
    var c = d
    while c > 10 {
        c = c/10
    }
}
newTimeInterval = NSDate().timeIntervalSince1970
print(newTimeInterval-timeInterval)
Results of 80 elements
0.105069875717163 for floor(log10(x))
0.867973804473877 for div 10 iterations
Adding one more approach to many of the already mentioned approaches.
The idea is to use binarySearch on an array containing the range of integers based on the digits of the int data type.
The signature of Java's Arrays.binarySearch is:
binarySearch(dataType[] array, dataType key) which returns the index of the search key, if it is contained in the array; otherwise, (-(insertion point) – 1).
The insertion point is defined as the point at which the key would be inserted into the array.
Below is the implementation:
static int[] digits = {9, 99, 999, 9999, 99999, 999999, 9999999, 99999999, 999999999, Integer.MAX_VALUE};

static int digitsCounter(int N)
{
    int digitCount = Arrays.binarySearch(digits, N < 0 ? -N : N);
    return 1 + (digitCount < 0 ? ~digitCount : digitCount);
}
Please note that the above approach works for -Integer.MAX_VALUE <= N <= Integer.MAX_VALUE (negating Integer.MIN_VALUE would overflow), but can be easily extended to the Long data type by adding more values to the digits array.
For example,
I) for N = 555, digitCount = Arrays.binarySearch(digits, 555) returns -3 (i.e. -(2)-1), as 555 is not present in the array but would be inserted at index 2, between 99 & 999.
As the index we got is negative, we need to take the bitwise complement of the result.
At last, we need to add 1 to the result to get the actual number of digits in the number N.
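The same idea carries over to other languages; for instance Python's bisect module gives the insertion point directly (a hypothetical port of the Java above, valid for values up to 10 digits, the range the table covers):

```python
import bisect

# upper bound of each digit count: 1 digit <= 9, 2 digits <= 99, ...
DIGITS = [9, 99, 999, 9999, 99999, 999999, 9999999, 99999999, 999999999]

def digits_counter(n):
    # bisect_left returns the insertion point, which is exactly
    # (number of digits - 1) for values covered by the table
    return bisect.bisect_left(DIGITS, abs(n)) + 1
```

Because the table is tiny, the binary search takes at most four comparisons regardless of the input.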
In Swift 5.x, you get the number of digits in an integer as below:
Convert to a string and then count the number of characters in the string
let nums = [1, 7892, 78, 92, 90]
for i in nums {
    let ch = String(describing: i)
    print(ch.count)
}
Calculating the number of digits in an integer using a loop (note that the digit counter must be reset for each number):
for i in nums {
    var digitCount = 0
    var tmp = i
    while tmp >= 1 {
        tmp /= 10
        digitCount += 1
    }
    print(digitCount)
}
let numDigits num =
    let num = abs(num)
    let rec numDigitsInner num =
        match num with
        | num when num < 10 -> 1
        | _ -> 1 + numDigitsInner (num / 10)
    numDigitsInner num
F# Version, without casting to a string.

Resources