I recently posted one of my favourite interview whiteboard coding questions in "What's your most controversial programming opinion?", which is to write a function that computes Pi using the Leibniz formula.
It can be approached in a number of different ways, and the exit condition takes a bit of thought, so I thought it might make an interesting code golf question. Shortest code wins!
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to within 0.00001.
Edit: 3 Jan 2008
As suggested in the comments, I changed the exit condition to be within 0.00001, as that's what I really meant (an accuracy of 5 decimal places is much harder due to rounding, so I wouldn't want to ask that in an interview, whereas "within 0.00001" is an easier exit condition to understand and implement).
Also, to answer the comments, I guess my intention was that the solution should compute the number of iterations, or check when it had done enough, but there's nothing to prevent you from pre-computing the number of iterations and using that number. I really asked the question out of interest to see what people would come up with.
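For reference, here is a minimal Python sketch of the kind of solution the question is after (my illustration, not part of the original question). It relies on the alternating series bound: the error is at most the first omitted term, so it stops once that term drops below the tolerance.

def leibniz_pi(tolerance=0.00001):
    total, k, sign = 0.0, 0, 1
    # Error of a truncated alternating series <= first omitted term,
    # so stop as soon as the next term is within tolerance.
    while 4.0 / (2 * k + 1) > tolerance:
        total += sign * 4.0 / (2 * k + 1)
        sign = -sign
        k += 1
    return total

print(leibniz_pi())  # approximately 3.14159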
J, 14 chars
4*-/%>:+:i.1e6
Explanation
1e6 is the number 1 followed by 6 zeroes (1000000).
i.y generates the first y non-negative numbers.
+: is a function that doubles each element in the list argument.
>: is a function that increments by one each element in the list argument.
So, the expression >:+:i.1e6 generates the first one million odd numbers:
1 3 5 7 ...
% is the reciprocal operator (numerator "1" can be omitted).
-/ computes the alternating sum of the elements of its list argument.
So, the expression -/%>:+:i.1e6 computes the alternating sum of the reciprocals of the first one million odd numbers:
1 - 1/3 + 1/5 - 1/7 + ...
4* is multiplication by four. If you multiply the previous sum by four, you have π.
That's it! J is a powerful language for mathematics.
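For readers who don't know J, here is a rough Python transliteration of 4*-/%>:+:i.1e6 (an illustration of mine, not part of the original answer):

# i.1e6 -> first million non-negative integers
# +:    -> double each          (0 2 4 6 ...)
# >:    -> increment each       (1 3 5 7 ...)
# %     -> reciprocal of each
# -/    -> alternating sum
# 4*    -> multiply by four
print(4 * sum((-1) ** k / (2 * k + 1) for k in range(10 ** 6)))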
Edit: since generating 9! (362880) terms for the alternating sum is sufficient for 5-decimal-digit accuracy, and since the Leibniz formula can also be written this way:
4 - 4/3 + 4/5 - 4/7 + ...
...you can write a shorter, 12 chars version of the program:
-/4%>:+:i.!9
Language: Brainfuck, Char count: 51/59
Does this count? =]
Because there are no floating-point numbers in Brainfuck, it was pretty difficult to get the divisions working properly. Grr.
Without newline (51):
+++++++[>+++++++<-]>++.-----.+++.+++.---.++++.++++.
With newline (59):
+++++++[>+++++++>+<<-]>++.-----.+++.+++.---.++++.++++.>+++.
Perl
26 chars
26 just the function, 27 to compute, 31 to print. From the comments to this answer.
sub _{$-++<1e6&&4/$-++-&_} # just the sub
sub _{$-++<1e6&&4/$-++-&_}_ # compute
sub _{$-++<1e6&&4/$-++-&_}say _ # print
28 chars
28 just computing, 34 to print. From the comments. Note that this version cannot use 'say'.
$.=.5;$\=2/$.++-$\for 1..1e6 # no print
$.=.5;$\=2/$.++-$\for$...1e6;print # do print, with bonus obfuscation
36 chars
36 just computing, 42 to print. Hudson's take on dreeves's rearrangement, from the comments.
$/++;$\+=8/$//($/+2),$/+=4for$/..1e6
$/++;$\+=8/$//($/+2),$/+=4for$/..1e6;print
About the iteration count: as far as my math memories go, 400000 is provably enough to be accurate to 0.00001. But a million (or as low as 8e5) makes the decimal expansion actually match 5 fractional places, and it's the same character count so I kept that.
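(The alternating series bound makes this precise: after N terms the error of 4*(1 - 1/3 + 1/5 - ...) is at most the first omitted term, 4/(2N+1), so any N >= 200000 is provably within 0.00001.)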
Ruby, 33 characters
(0..1e6).inject{|a,b|2/(0.5-b)-a}
Another C# version:
(60 characters)
4*Enumerable.Range(0, 500000).Sum(x => Math.Pow(-1, x)/(2*x + 1)); // = 3,14159
52 chars in Python:
print 4*sum(((-1.)**i/(2*i+1)for i in xrange(5**8)))
(51 dropping the 'x' from xrange.)
36 chars in Octave (or Matlab):
l=0:5^8;disp((-1).^l*(4./(2.*l+1))')
(execute "format long;" to show all the significant digits.) Omitting 'disp' we reach 30 chars:
octave:5> l=0:5^8;(-1).^l*(4./(2.*l+1))'
ans = 3.14159009359631
Oracle SQL 73 chars
select -4*sum(power(-1,level)/(level*2-1)) from dual connect by level<1e6
Language: C, Char count: 71
float p;main(i){for(i=1;1E6/i>5;i+=2)p-=(i%4-2)*4./i;printf("%g\n",p);}
Language: C99, Char count: 97 (including required newline)
#include <stdio.h>
float p;int main(){for(int i=1;1E6/i>5;i+=2)p-=(i%4-2)*4./i;printf("%g\n",p);}
I should note that the above versions (which are the same) keep track of whether an extra iteration would affect the result at all. Thus, they perform a minimum number of operations. To add more digits, replace 1E6 with 1E(num_digits+1) or 4E5 with 4E(num_digits) (depending on the version). For the full programs, %g may need to be replaced. float may need to be changed to double as well.
Language: C, Char count: 67 (see notes)
double p,i=1;main(){for(;i<1E6;i+=4)p+=8/i/(i+2);printf("%g\n",p);}
This version uses a modified version of the posted algorithm, as used by some other answers. Also, it is not as clean/efficient as the first two solutions, as it forces 250,000 iterations instead of detecting when further iterations become meaningless.
Language: C, Char count: 24 (cheating)
main(){puts("3.14159");}
Doesn't work with digit counts > 6, though.
Haskell
I got it down to 34 characters:
foldl subtract 4$map(4/)[3,5..9^6]
This expression yields 3.141596416935556 when evaluated.
Edit: here's a somewhat shorter version (at 33 characters) that uses foldl1 instead of foldl:
foldl1 subtract$map(4/)[1,3..9^6]
Edit 2: 9^6 instead of 10^6. One has to be economical ;)
Edit 3: Replaced foldl' and foldl1' with foldl and foldl1 respectively—as a result of Edit 2, it no longer overflows. Thanks to ShreevatsaR for noticing this.
23 chars in MATLAB:
a=1e6;sum(4./(1-a:4:a))
F#:
Attempt #1:
let pi = 3.14159
Cheating? No, it's winning with style!
Attempt #2:
let pi =
    seq { 0 .. 100 }
    |> Seq.map (fun x -> float x)
    |> Seq.fold (fun x y -> x + (Math.Pow(-1.0, y)/(2.0 * y + 1.0))) 0.0
    |> (fun x -> x * 4.0)
It's not as compact as it could possibly get, but it's pretty idiomatic F#.
Common Lisp, 55 chars.
(loop for i from 1 upto 4e5 by 4 sum (/ 8d0 i (+ i 2)))
Mathematica, 27 chars (arguably as low as 26, or as high as 33)
NSum[8/i/(i+2),{i,1,9^9,4}]
If you remove the initial "N" then it returns the answer as a (huge) fraction.
If it's cheating that Mathematica doesn't need a print statement to output its result, then prepend "Print@" for a total of 33 chars.
NB:
If it's cheating to hardcode the number of terms, then I don't think any answer has yet gotten this right. Checking when the current term is below some threshold is no better than hardcoding the number of terms. Just because the current term is only changing the 6th or 7th digit doesn't mean that the sum of enough subsequent terms won't change the 5th digit.
Using the formula for the error term in an alternating series (and thus the necessary number of iterations to achieve the desired accuracy is not hard coded into the program):
public static void Main(string[] args) {
    double tolerance = 0.000001;
    double piApproximation = LeibnizPi(tolerance);
    Console.WriteLine(piApproximation);
}

private static double LeibnizPi(double tolerance) {
    double quarterPiApproximation = 0;
    int index = 1;
    double term;
    int sign = 1;
    do {
        term = 1.0 / (2 * index - 1);
        quarterPiApproximation += ((double)sign) * term;
        index++;
        sign = -sign;
    } while (term > tolerance);
    return 4 * quarterPiApproximation;
}
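(This exit condition is sound by the alternating series estimation theorem: when the terms alternate in sign and decrease monotonically to zero, the truncation error is bounded by the first omitted term.)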
C#:
public static double Pi()
{
    double pi = 0;
    double sign = 1;
    for (int i = 1; i < 500002; i += 2)
    {
        pi += sign / i;
        sign = -sign;
    }
    return 4 * pi;
}
Perl :
$i+=($_&1?4:-4)/($_*2-1)for 1..1e6;print$i
for a total of 42 chars.
Ruby, 41 chars (using irb):
s=0;(3..3e6).step(4){|i|s+=8.0/i/(i-2)};s
Or this slightly longer, non-irb version:
s=0;(3..3e6).step(4){|i|s+=8.0/i/(i-2)};p s
This is a modified Leibniz:
Combine pairs of terms. This gives you 2/3 + 2/35 + 2/99 + ...
Pi becomes 8 * (1/(1 * 3) + 1/(5 * 7) + 1/(9 * 11) + ...)
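(Why the pairing works: 4/(4k+1) - 4/(4k+3) = 8/((4k+1)(4k+3)); writing i = 4k+3 turns each pair into 8/(i(i-2)), which is exactly what the loop sums for i = 3, 7, 11, ...)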
F# (Interactive Mode) (59 Chars)
{0.0..1E6}|>Seq.fold(fun a x->a+ -1.**x/(2.*x+1.))0.|>(*)4.
(Yields a warning but omits the casts)
Here's a solution in MUMPS.
pi(N)
N X,I
S X=1 F I=3:4:N-2 S X=X-(1/I)+(1/(I+2))
Q 4*X
Parameter N indicates how many repeated fractions to use. That is, if you pass in 5 it will evaluate 4 * (1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11)
Some empirical testing showed that N=272241 is the lowest value that gives a correct value of 3.14159 when truncated to 5 decimal points. You have to go to N=852365 to get a value that rounds to 3.14159.
C# using iterator block:
static IEnumerable<double> Pi()
{
    double i = 4, j = 1, k = 4;
    for (;;)
    {
        yield return k;
        k += (i *= -1) / (j += 2);
    }
}
For the record, this Scheme implementation has 95 characters ignoring unnecessary whitespace.
(define (f)
  (define (p a b)
    (if (> a b)
        0
        (+ (/ 1.0 (* a (+ a 2))) (p (+ a 4) b))))
  (* 8 (p 1 1e6)))
Javascript:
a=0,b=-1,d=-4,c=1e6;while(c--)a+=(d=-d)/(b+=2)
In JavaScript. 51 characters. Obviously not going to win, but eh. :P
Edit -- updated to be 46 characters now, thanks to Strager. :)
UPDATE (March 30 2010)
A faster (precise only to 5 decimal places) 43 character version by David Murdoch
for(a=0,b=1,d=4,c=~4e5;++c;d=-d)a-=d/(b-=2)
Here's a recursive answer using C#. It will only work using the x64 JIT in Release mode because that's the only JIT that applies tail-call optimisation, and as the series converges so slowly it will result in a StackOverflowException without it.
It would be nice to have the IteratePi function as an anonymous lambda, but as it's self-recursive we'd have to start doing all manner of horrible things with Y-combinators so I've left it as a separate function.
public static double CalculatePi()
{
    return IteratePi(0.0, 1.0, true);
}

private static double IteratePi(double result, double denom, bool add)
{
    var term = 4.0 / denom;
    if (term < 0.00001) return result;
    var next = add ? result + term : result - term;
    return IteratePi(next, denom + 2.0, !add);
}
Most of the current answers assume that they'll get 5 digits accuracy within some number of iterations and this number is hardcoded into the program. My understanding of the question was that the program itself is supposed to figure out when it's got an answer accurate to 5 digits and stop there. On that assumption here's my C# solution. I haven't bothered to minimise the number of characters since there's no way it can compete with some of the answers already out there, so I thought I'd make it readable instead. :)
private static double GetPi()
{
    double acc = 1, sign = -1, lastCheck = 0;
    for (double div = 3; ; div += 2, sign *= -1)
    {
        acc += sign / div;
        double currPi = acc * 4;
        double currCheck = Math.Round(currPi, 5);
        if (currCheck == lastCheck)
            return currPi;
        lastCheck = currCheck;
    }
}
Language: C99 (implicit return 0), Char count: 99 (95 + 4 required spaces)
exit condition depends on current value, not on a fixed count
#include <stdio.h>

float p, s=4, d=1;

int main(void) {
    for (; 4/d > 1E-5; d += 2)
        p -= (s = -s) / d;
    printf("%g\n", p);
}
compacted version
#include<stdio.h>
float
p,s=4,d=1;int
main(void){for(;4/d>1E-5;d+=2)p-=(s=-s)/d;printf("%g\n",p);}
Language: dc, Char count: 35
dc -e '9k0 1[d4r/r2+sar-lad274899>b]dsbxrp'
Ruby:
irb(main):031:0> 4*(1..10000).inject {|s,x| s+(-1)**(x+1)*1.0/(2*x-1)}
=> 3.14149265359003
64 chars in AWK:
~# awk 'BEGIN {p=1;for(i=3;i<10^6;i+=4){p=p-1/i+1/(i+2)}print p*4}'
3.14159
C# cheating - 50 chars:
static double Pi() {
    return Math.Round(Math.PI, 5);
}
It only says "taking into account the formula write a function..." it doesn't say reproduce the formula programmatically :) Think outside the box...
C# LINQ - 78 chars:
static double pi = 4 * Enumerable.Range(0, 1000000)
.Sum(n => Math.Pow(-1, n) / (2 * n + 1));
C# Alternate LINQ - 94 chars:
static double pi = 4 * (from n in Enumerable.Range(0, 1000000)
                        select Math.Pow(-1, n) / (2 * n + 1)).Sum();
And finally - this takes the previously mentioned algorithm and condenses it mathematically so you don't have to worry about continually changing signs.
C# longhand - 89 chars (not counting unrequired spaces):
static double pi()
{
    var t = 0D;
    for (int n = 0; n < 1e6; t += Math.Pow(-1, n) / (2 * n + 1), n++) ;
    return 4 * t;
}
#!/usr/bin/env python
from math import *
denom = 1.0
imm = 0.0
sgn = 1
it = 0
for i in xrange(0, int(1e6)):
    imm += (sgn * 1 / denom)
    denom += 2
    sgn *= -1
print str(4 * imm)
Related
I have seen this question asked a lot but never seen a true concrete answer to it. So I am going to post one here which will hopefully help people understand why exactly there is "modulo bias" when using a random number generator, like rand() in C++.
So rand() is a pseudo-random number generator which chooses a natural number between 0 and RAND_MAX, which is a constant defined in cstdlib (see this article for a general overview on rand()).
Now what happens if you want to generate a random number between say 0 and 2? For the sake of explanation, let's say RAND_MAX is 10 and I decide to generate a random number between 0 and 2 by calling rand()%3. However, rand()%3 does not produce the numbers between 0 and 2 with equal probability!
When rand() returns 0, 3, 6, or 9, rand()%3 == 0. Therefore, P(0) = 4/11
When rand() returns 1, 4, 7, or 10, rand()%3 == 1. Therefore, P(1) = 4/11
When rand() returns 2, 5, or 8, rand()%3 == 2. Therefore, P(2) = 3/11
This does not generate the numbers between 0 and 2 with equal probability. Of course for small ranges this might not be the biggest issue but for a larger range this could skew the distribution, biasing the smaller numbers.
So when does rand()%n return a range of numbers from 0 to n-1 with equal probability? When RAND_MAX%n == n - 1. In this case, along with our earlier assumption rand() does return a number between 0 and RAND_MAX with equal probability, the modulo classes of n would also be equally distributed.
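For example, if RAND_MAX is 32767 (a common value), then rand() % 32 is unbiased: RAND_MAX % 32 == 31, and the 32768 possible outputs split into exactly 1024 per residue class.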
So how do we solve this problem? A crude way is to keep generating random numbers until you get a number in your desired range:
int x;
do {
    x = rand();
} while (x >= n);
but that's inefficient for low values of n, since you only have a n/RAND_MAX chance of getting a value in your range, and so you'll need to perform RAND_MAX/n calls to rand() on average.
A more efficient formula approach would be to take some large range with a length divisible by n, like RAND_MAX - RAND_MAX % n, keep generating random numbers until you get one that lies in the range, and then take the modulus:
int x;
do {
    x = rand();
} while (x >= (RAND_MAX - RAND_MAX % n));
x %= n;
For small values of n, this will rarely require more than one call to rand().
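The same rejection idea in Python, as a quick sketch (the names and the pretend RAND_MAX are mine, purely for the demo; in real Python code, random.randrange already does unbiased ranged generation):

import random

RAND_MAX = 32767  # pretend C-style generator bound

def c_style_rand():
    return random.randint(0, RAND_MAX)

def unbiased_mod(n):
    # Reject values from the incomplete block at the top, then reduce.
    limit = RAND_MAX - RAND_MAX % n
    while True:
        x = c_style_rand()
        if x < limit:
            return x % n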
Works cited and further reading:
CPlusPlus Reference
Eternally Confuzzled
Repeatedly selecting a random value is a good way to remove the bias.
Update
We can keep the code fast by searching for an x in a range whose size is divisible by n.
// Assumptions
// rand() in [0, RAND_MAX]
// n in (0, RAND_MAX]
int x;

// Keep searching for an x in a range divisible by n
do {
    x = rand();
} while (x >= RAND_MAX - (RAND_MAX % n));

x %= n;
The above loop should be very fast, say 1 iteration on average.
#user1413793 is correct about the problem. I'm not going to discuss that further, except to make one point: yes, for small values of n and large values of RAND_MAX, the modulo bias can be very small. But using a bias-inducing pattern means that you must consider the bias every time you calculate a random number and choose different patterns for different cases. And if you make the wrong choice, the bugs it introduces are subtle and almost impossible to unit test. Compared to just using the proper tool (such as arc4random_uniform), that's extra work, not less work. Doing more work and getting a worse solution is terrible engineering, especially when doing it right every time is easy on most platforms.
Unfortunately, the implementations of the solution are all incorrect or less efficient than they should be. (Each solution has various comments explaining the problems, but none of the solutions have been fixed to address them.) This is likely to confuse the casual answer-seeker, so I'm providing a known-good implementation here.
Again, the best solution is just to use arc4random_uniform on platforms that provide it, or a similar ranged solution for your platform (such as Random.nextInt on Java). It will do the right thing at no code cost to you. This is almost always the correct call to make.
If you don't have arc4random_uniform, then you can use the power of opensource to see exactly how it is implemented on top of a wider-range RNG (ar4random in this case, but a similar approach could also work on top of other RNGs).
Here is the OpenBSD implementation:
/*
 * Calculate a uniformly distributed random number less than upper_bound
 * avoiding "modulo bias".
 *
 * Uniformity is achieved by generating new random numbers until the one
 * returned is outside the range [0, 2**32 % upper_bound).  This
 * guarantees the selected random number will be inside
 * [2**32 % upper_bound, 2**32) which maps back to [0, upper_bound)
 * after reduction modulo upper_bound.
 */
u_int32_t
arc4random_uniform(u_int32_t upper_bound)
{
    u_int32_t r, min;

    if (upper_bound < 2)
        return 0;

    /* 2**32 % x == (2**32 - x) % x */
    min = -upper_bound % upper_bound;

    /*
     * This could theoretically loop forever but each retry has
     * p > 0.5 (worst case, usually far better) of selecting a
     * number inside the range we need, so it should rarely need
     * to re-roll.
     */
    for (;;) {
        r = arc4random();
        if (r >= min)
            break;
    }

    return r % upper_bound;
}
It is worth noting the latest commit comment on this code for those who need to implement similar things:
Change arc4random_uniform() to calculate 2**32 % upper_bound as
-upper_bound % upper_bound. Simplifies the code and makes it the
same on both ILP32 and LP64 architectures, and also slightly faster on
LP64 architectures by using a 32-bit remainder instead of a 64-bit
remainder.
Pointed out by Jorden Verwer on tech#
ok deraadt; no objections from djm or otto
The Java implementation is also easily findable (see previous link):
public int nextInt(int n) {
    if (n <= 0)
        throw new IllegalArgumentException("n must be positive");

    if ((n & -n) == n)  // i.e., n is a power of 2
        return (int)((n * (long)next(31)) >> 31);

    int bits, val;
    do {
        bits = next(31);
        val = bits % n;
    } while (bits - val + (n-1) < 0);
    return val;
}
Definition
Modulo Bias is the inherent bias in using modulo arithmetic to reduce an output set to a subset of the input set. In general, a bias exists whenever the mapping between the input and output set is not equally distributed, as in the case of using modulo arithmetic when the size of the output set is not a divisor of the size of the input set.
This bias is particularly hard to avoid in computing, where numbers are represented as strings of bits: 0s and 1s. Finding truly random sources of randomness is also extremely difficult, but is beyond the scope of this discussion. For the remainder of this answer, assume that there exists an unlimited source of truly random bits.
Problem Example
Let's consider simulating a die roll (0 to 5) using these random bits. There are 6 possibilities, so we need enough bits to represent the number 6, which is 3 bits. Unfortunately, 3 random bits yields 8 possible outcomes:
000 = 0, 001 = 1, 010 = 2, 011 = 3
100 = 4, 101 = 5, 110 = 6, 111 = 7
We can reduce the size of the outcome set to exactly 6 by taking the value modulo 6, however this presents the modulo bias problem: 110 yields a 0, and 111 yields a 1. This die is loaded.
Potential Solutions
Approach 0:
Rather than rely on random bits, in theory one could hire a small army to roll dice all day and record the results in a database, and then use each result only once. This is about as practical as it sounds, and more than likely would not yield truly random results anyway (pun intended).
Approach 1:
Instead of using the modulus, a naive but mathematically correct solution is to discard results that yield 110 and 111 and simply try again with 3 new bits. Unfortunately, this means that there is a 25% chance on each roll that a re-roll will be required, including each of the re-rolls themselves. This is clearly impractical for all but the most trivial of uses.
Approach 2:
Use more bits: instead of 3 bits, use 4. This yields 16 possible outcomes. Of course, re-rolling anytime the result is greater than 5 makes things worse (10/16 = 62.5%) so that alone won't help.
Notice that 2 * 6 = 12 < 16, so we can safely take any outcome less than 12 and reduce that modulo 6 to evenly distribute the outcomes. The other 4 outcomes must be discarded, and then re-rolled as in the previous approach.
Sounds good at first, but let's check the math:
4 discarded results / 16 possibilities = 25%
In this case, 1 extra bit didn't help at all!
That result is unfortunate, but let's try again with 5 bits:
32 % 6 = 2 discarded results; and
2 discarded results / 32 possibilities = 6.25%
A definite improvement, but not good enough in many practical cases. The good news is, adding more bits will never increase the chances of needing to discard and re-roll. This holds not just for dice, but in all cases.
As demonstrated, however, adding 1 extra bit may not change anything. In fact, if we increase our roll to 6 bits, the probability remains 6.25%.
This raises 2 additional questions:
If we add enough bits, is there a guarantee that the probability of a discard will diminish?
How many bits are enough in the general case?
General Solution
Thankfully the answer to the first question is yes. The problem with 6 is that 2^x mod 6 flips between 2 and 4 which coincidentally are a multiple of 2 from each other, so that for an even x > 1,
[2^x mod 6] / 2^x == [2^(x+1) mod 6] / 2^(x+1)
Thus 6 is an exception rather than the rule. It is possible to find larger moduli that yield consecutive powers of 2 in the same way, but eventually this must wrap around, and the probability of a discard will be reduced.
Without offering further proof: in general, using double the number of bits required will provide a smaller, usually insignificant, chance of a discard.
Proof of Concept
Here is an example program that uses OpenSSL's libcrypto to supply random bytes. When compiling, be sure to link to the library with -lcrypto, which most everyone should have available.
#include <iostream>
#include <assert.h>
#include <limits>
#include <stdint.h>
#include <openssl/rand.h>

volatile uint32_t dummy;
uint64_t discardCount;

uint32_t uniformRandomUint32(uint32_t upperBound)
{
    assert(RAND_status() == 1);
    uint64_t discard = (std::numeric_limits<uint64_t>::max() - upperBound) % upperBound;
    uint64_t randomPool; // buffer filled with random bytes below
    RAND_bytes((uint8_t*)(&randomPool), sizeof(randomPool));

    while (randomPool > (std::numeric_limits<uint64_t>::max() - discard)) {
        RAND_bytes((uint8_t*)(&randomPool), sizeof(randomPool));
        ++discardCount;
    }

    return randomPool % upperBound;
}

int main() {
    discardCount = 0;
    const uint32_t MODULUS = (1ul << 31) - 1;
    const uint32_t ROLLS = 10000000;
    for (uint32_t i = 0; i < ROLLS; ++i) {
        dummy = uniformRandomUint32(MODULUS);
    }
    std::cout << "Discard count = " << discardCount << std::endl;
}
I encourage playing with the MODULUS and ROLLS values to see how many re-rolls actually happen under most conditions. A sceptical person may also wish to save the computed values to file and verify the distribution appears uniform.
Mark's Solution (The accepted solution) is Nearly Perfect.
int x;
do {
    x = rand();
} while (x >= (RAND_MAX - RAND_MAX % n));
x %= n;
However, it has a caveat: it discards 1 valid set of outcomes in any scenario where RAND_MAX (RM) is 1 less than a multiple of N (where N = the number of possible valid outcomes).
I.e., when the "count of values discarded" (D) is equal to N, they are actually a valid set (V), not an invalid set (I).
What causes this is that at some point Mark loses sight of the difference between N and Rand_Max.
N is a set whose valid members are comprised only of positive integers, as it contains a count of responses that would be valid. (e.g.: Set N = {1, 2, 3, ... n })
Rand_Max, however, is a set which (as defined for our purposes) includes any number of non-negative integers.
In its most generic form, what is defined here as Rand_Max is the set of all valid outcomes, which could theoretically include negative numbers or non-numeric values.
Therefore Rand_Max is better defined as the set of "Possible Responses".
However, N operates against the count of the values within the set of valid responses, so even as defined in our specific case, Rand_Max will be a value one less than the total number it contains.
Using Mark's solution, values are discarded when: X >= RM - RM % N
EG:
Ran Max Value (RM) = 255
Valid Outcome (N) = 4
When X >= 252, discarded values for X are: 252, 253, 254, 255
So, if the random value selected (X) = {252, 253, 254, 255}
Number of discarded Values (I) = RM % N + 1 == N
IE:
I = RM % N + 1
I = 255 % 4 + 1
I = 3 + 1
I = 4
X >= ( RM - RM % N )
255 >= (255 - 255 % 4)
255 >= (255 - 3)
255 >= (252)
Discard Returns $True
As you can see in the example above, when the value of X (the random number we get from the initial function) is 252, 253, 254, or 255 we would discard it even though these four values comprise a valid set of returned values.
IE: When the count of the values Discarded (I) = N (The number of valid outcomes) then a Valid set of return values will be discarded by the original function.
If we describe the difference between the values N and RM as D, ie:
D = (RM - N)
Then as the value of D becomes smaller, the percentage of unneeded re-rolls due to this method increases at each natural multiple. (When RAND_MAX is NOT equal to a prime number this is of valid concern.)
EG:
RM=255 , N=2 Then: D = 253, Lost percentage = 0.78125%
RM=255 , N=4 Then: D = 251, Lost percentage = 1.5625%
RM=255 , N=8 Then: D = 247, Lost percentage = 3.125%
RM=255 , N=16 Then: D = 239, Lost percentage = 6.25%
RM=255 , N=32 Then: D = 223, Lost percentage = 12.5%
RM=255 , N=64 Then: D = 191, Lost percentage = 25%
RM=255 , N= 128 Then D = 127, Lost percentage = 50%
Since the percentage of re-rolls needed increases the closer N comes to RM, this can be of valid concern at many different values depending on the constraints of the system running the code and the values being looked for.
To negate this we can make a simple amendment, as shown here:
int x;
do {
    x = rand();
} while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));
x %= n;
This provides a more general version of the formula which accounts for the additional peculiarities of using modulus to define your max values.
Examples of using a small value for RAND_MAX where RAND_MAX+1 is a multiple of N.
Mark's original version:
RAND_MAX = 3, n = 2, values in RAND_MAX = 0,1,2,3, valid sets = 0,1 and 2,3.
When X >= (RAND_MAX - ( RAND_MAX % n ) ), i.e. when X >= 2, the value will be discarded, even though the set is valid.
Generalized Version 1:
RAND_MAX = 3, n = 2, Values in RAND_MAX = 0,1,2,3, Valid Sets = 0,1 and 2,3.
When X > (RAND_MAX - ( ( RAND_MAX % n ) + 1 ) % n ), i.e. when X > 3, the value would be discarded, but this is not a value in the set RAND_MAX so there will be no discard.
Additionally, there is the case where N should be the number of values in RAND_MAX; here you could set N = RAND_MAX + 1, unless RAND_MAX = INT_MAX.
Loop-wise you could just use N = 1 (any value of X will be accepted) and put an IF statement in for your final multiplier. But perhaps your code has a valid reason to return a 1 when the function is called with n = 1...
So it may be better to use 0, which would normally provide a Div 0 error, when you wish to have n = RAND_MAX + 1.
Generalized Version 2:
int x;

if (n != 0) {
    do {
        x = rand();
    } while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));

    x %= n;
} else {
    x = rand();
}
Both of these solutions resolve the issue of needlessly discarded valid results, which occurs when RM+1 is a multiple of n.
The second version also covers the edge case scenario when you need n to equal the total possible set of values contained in RAND_MAX.
The modified approach is the same in both, and allows for a more general solution to the need of providing valid random numbers while minimizing discarded values.
To reiterate:
The basic general solution, which extends Mark's example:
// Assumes:
//   RAND_MAX is a globally defined constant, returned from the environment.
//   int n; // User input, or externally defined, number of valid choices.

int x;

do {
    x = rand();
} while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));

x %= n;
The extended general solution, which allows one additional scenario of RAND_MAX+1 = n:
// Assumes:
//   RAND_MAX is a globally defined constant, returned from the environment.
//   int n; // User input, or externally defined, number of valid choices.

int x;

if (n != 0) {
    do {
        x = rand();
    } while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));

    x %= n;
} else {
    x = rand();
}
In some languages ( particularly interpreted languages ) doing the calculations of the compare-operation outside of the while condition may lead to faster results as this is a one-time calculation no matter how many re-tries are required. YMMV!
// Assumes:
//   RAND_MAX is a globally defined constant, returned from the environment.
//   int n; // User input, or externally defined, number of valid choices.

int x; // Resulting random number
int y; // One-time calculation of the compare value for x

y = RAND_MAX - (((RAND_MAX % n) + 1) % n);

if (n != 0) {
    do {
        x = rand();
    } while (x > y);

    x %= n;
} else {
    x = rand();
}
There are two usual complaints with the use of modulo.
One is valid for all generators. It is easier to see in a limit case. If your generator has a RAND_MAX which is 2 (that isn't compliant with the C standard) and you want only 0 or 1 as values, using modulo will generate 0 twice as often (when the generator generates 0 and 2) as it will generate 1 (when the generator generates 1). Note that this is true as soon as you don't drop values: whatever mapping you use from the generator values to the wanted ones, one will occur twice as often as the other.
The other is that some kinds of generators have their least significant bits less random than the others, at least for some of their parameters, but sadly those parameters have other interesting characteristics (such as being able to have RAND_MAX one less than a power of 2). The problem is well known, and for a long time library implementations have probably avoided it (for instance the sample rand() implementation in the C standard uses this kind of generator but drops the 16 least significant bits), but some like to complain about that and you may have bad luck.
Using something like
int alea(int n){
    assert (0 < n && n <= RAND_MAX);
    int partSize =
        n == RAND_MAX ? 1 : 1 + (RAND_MAX-n)/(n+1);
    int maxUsefull = partSize * n + (partSize-1);
    int draw;
    do {
        draw = rand();
    } while (draw > maxUsefull);
    return draw/partSize;
}
to generate a random number between 0 and n will avoid both problems (and it avoids overflow with RAND_MAX == INT_MAX)
BTW, C++11 introduced standard ways to do the reduction, as well as generators other than rand().
With a RAND_MAX value of 3 (in reality it should be much higher than that but the bias would still exist) it makes sense from these calculations that there is a bias:
1 % 2 = 1
2 % 2 = 0
3 % 2 = 1
random_between(1, 3) % 2 = more likely a 1
In this case, the % 2 is what you shouldn't do when you want a random number between 0 and 1. You could get a random number between 0 and 2 by doing % 3 though, because in this case: RAND_MAX is a multiple of 3.
Another method
It is much simpler, but to add to the other answers, here is my solution to get a random number between 0 and n - 1, so n different possibilities, without bias.
the number of bits (not bytes) needed to encode the number of possibilities is the number of bits of random data you'll need
encode the number from random bits
if this number is >= n, restart (no modulo).
Really random data is not easy to obtain, so why use more bits than needed?
Below is an example in Smalltalk, using a cache of bits from a pseudo-random number generator. I'm no security expert so use at your own risk.
next: n
| bitSize r from to |
n < 0 ifTrue: [^0 - (self next: 0 - n)].
n = 0 ifTrue: [^nil].
n = 1 ifTrue: [^0].
cache isNil ifTrue: [cache := OrderedCollection new].
cache size < (self randmax highBit) ifTrue: [
Security.DSSRandom default next asByteArray do: [ :byte |
(1 to: 8) do: [ :i | cache add: (byte bitAt: i)]
]
].
r := 0.
bitSize := n highBit.
to := cache size.
from := to - bitSize + 1.
(from to: to) do: [ :i |
r := r bitAt: i - from + 1 put: (cache at: i)
].
cache removeFrom: from to: to.
r >= n ifTrue: [^self next: n].
^r
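The same idea in Python, for readers who don't know Smalltalk (a sketch of mine, not part of the original answer; random.getrandbits stands in for the cached random bits):

import random

def rand_below(n):
    # Uniform integer in [0, n), drawing only the bits needed per attempt.
    if n < 1:
        raise ValueError("n must be positive")
    if n == 1:
        return 0
    k = (n - 1).bit_length()       # bits needed to cover 0 .. n-1
    while True:
        r = random.getrandbits(k)  # k unbiased random bits
        if r < n:                  # reject out-of-range values, no modulo
            return r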
Modulo reduction is a commonly seen way to make a random integer generator avoid the worst case of running forever.
When the range of possible integers is unknown, however, there is no way in general to "fix" this worst case of running forever without introducing bias. It's not just modulo reduction (rand() % n, discussed in the accepted answer) that will introduce bias this way, but also the "multiply-and-shift" reduction of Daniel Lemire, or if you stop rejecting an outcome after a set number of iterations. (To be clear, this doesn't mean there is no way to fix the bias issues present in pseudorandom generators. For example, even though modulo and other reductions are biased in general, they will have no issues with bias if the range of possible integers is a power of 2 and if the random generator produces unbiased random bits or blocks of them.)
The following answer of mine discusses the relationship between running time and bias in random generators, assuming we have a "true" random generator that can produce unbiased and independent random bits. The answer doesn't even involve the rand() function in C because it has many issues. Perhaps the most serious here is the fact that the C standard does not explicitly specify a particular distribution for the numbers returned by rand(), not even a uniform distribution.
How to generate a random integer in the range [0,n] from a stream of random bits without wasting bits?
As the accepted answer indicates, "modulo bias" has its roots in the low value of RAND_MAX. He uses an extremely small value of RAND_MAX (10) to show that if RAND_MAX were 10, then if you tried to generate a number between 0 and 2 using %, the following outcomes would result:
rand() % 3 // if RAND_MAX were only 10, gives
output of rand() | rand()%3
0 | 0
1 | 1
2 | 2
3 | 0
4 | 1
5 | 2
6 | 0
7 | 1
8 | 2
9 | 0
So there are 4 outputs of 0's (4/10 chance) and only 3 outputs of 1 and 2 (3/10 chances each).
So it's biased. The lower numbers have a better chance of coming out.
But that only shows up so obviously when RAND_MAX is small. Or, more specifically, when the number you are modding by is large compared to RAND_MAX.
A much better solution than looping (which is insanely inefficient and shouldn't even be suggested) is to use a PRNG with a much larger output range. The Mersenne Twister algorithm has a maximum output of 4,294,967,295. As such, doing MersenneTwister::genrand_int32() % 10 will, for all intents and purposes, be equally distributed, and the modulo bias effect will all but disappear.
I just wrote code for Von Neumann's Unbiased Coin Flip Method, which should theoretically eliminate any bias in the random number generation process. More info can be found at http://en.wikipedia.org/wiki/Fair_coin
int unbiased_random_bit() {
    int x1, x2, prev;
    prev = 2;
    x1 = rand() % 2;
    x2 = rand() % 2;

    for (;; x1 = rand() % 2, x2 = rand() % 2)
    {
        if (x1 ^ x2)       // 01 -> 1, or 10 -> 0.
        {
            return x2;
        }
        else if (x1 & x2)
        {
            if (!prev)     // 0011
                return 1;
            else
                prev = 1;  // 1111 -> continue, bias unresolved
        }
        else
        {
            if (prev == 1) // 1100
                return 0;
            else           // 0000 -> continue, bias unresolved
                prev = 0;
        }
    }
}
I am doing research work for which I need to compute and store the square root of 2 up to 10^6 places. I have googled this, but I only found a NASA page, and I don't know how they computed it. I used setprecision in C++, but that gives the result only to around 50 places. What should I do?
NASA page link: https://apod.nasa.gov/htmltest/gifcity/sqrt2.1mil
I have tried binary search also, but that was not fruitful.
long double ans = sqrt(n);
cout<<fixed<<setprecision(50)<<ans<<endl;
You have various options here. You can work with an arbitrary-precision floating-point library (for example MPFR with C or C++, or mpmath or the built-in decimal library in Python). Provided you know what error guarantees that library gives, you can ensure that you get the correct decimal digits. For example, both MPFR and Python's decimal guarantee correct rounding here, but MPFR has the disadvantage (for your particular use-case of getting decimal digits) that it works in binary, so you'd also need to analyse the error induced by the binary-to-decimal conversion.
You can also work with pure integer methods, using an arbitrary-precision integer library (like GMP), or a language that supports arbitrary-precision integers out of the box (for example, Java with its BigInteger class: recent versions of Java provide a BigInteger.sqrt method): scale 2 by 10**2n, where n is the number of places after the decimal point that you need, take the integer square root (i.e., the integer part of the exact mathematical square root), and then scale back by 10**n. See below for a relatively simple but efficient algorithm for computing integer square roots.
The simplest out-of-the-box option here, if you're willing to use another language, is to use Python's decimal library. Here's all the code you need, assuming Python 3 (not Python 2, where this will be horribly slow).
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 10**6 + 1 # number of significant digits needed
>>> sqrt2_digits = str(Decimal(2).sqrt())
The str(Decimal(2).sqrt()) operation takes less than 10 seconds on my machine. Let's check the length, and the first and last hundred digits (we obviously can't reproduce the whole output here):
>>> len(sqrt2_digits)
1000002
>>> sqrt2_digits[:100]
'1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157'
>>> sqrt2_digits[-100:]
'2637136344700072631923515210207475200984587509349804012374947972946621229489938420441930169048412044'
There's a slight problem with this: the result is guaranteed to be correctly rounded, but that's rounded, not truncated. So that means that that final "4" digit could be the result of a final round up - that is, the actual digit in that position could be a "3", with an "8" or "9" (for example) following it.
We can get around this by computing a couple of extra digits, and then truncating them (after double checking that rounding of those extra digits doesn't affect the truncation).
>>> getcontext().prec = 10**6 + 3
>>> sqrt2_digits = str(Decimal(2).sqrt())
>>> sqrt2_digits[-102:]
'263713634470007263192351521020747520098458750934980401237494797294662122948993842044193016904841204391'
So indeed the millionth digit after the decimal point is a 3, not a 4. Note that if the last 3 digits computed above had been "400", we still wouldn't have known whether the millionth digit was a "3" or a "4", since that "400" could again be the result of a round up. In that case, you could compute another two digits and try again, and so on, stopping when you have an unambiguous output. (For further reading, search for "The table maker's dilemma".)
(Note that setting the decimal module's rounding mode to ROUND_DOWN does not work here, since the Decimal.sqrt method ignores the rounding mode.)
If you want to do this using pure integer arithmetic, Python 3.8 offers a math.isqrt function for computing exact integer square roots. In this case, we'd use it as follows:
>>> from math import isqrt
>>> sqrt2_digits = str(isqrt(2*10**(2*10**6)))
This takes a little longer: around 20 seconds on my laptop. Half of that time is for the binary-to-decimal conversion implicit in the str call. But this time, we got the truncated result directly, and didn't have to worry about the possibility of rounding giving us the wrong final digit(s).
Examining the results again:
>>> len(sqrt2_digits)
1000001
>>> sqrt2_digits[:100]
'1414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641572'
>>> sqrt2_digits[-100:]
'2637136344700072631923515210207475200984587509349804012374947972946621229489938420441930169048412043'
This is a bit of a cheat, because (at the time of writing) Python 3.8 hasn't been released yet, although beta versions are available. But there's a pure Python version of the isqrt algorithm in the CPython source, that you can copy and paste and use directly. Here it is in full:
import operator

def isqrt(n):
    """
    Return the integer part of the square root of the input.
    """
    n = operator.index(n)
    if n < 0:
        raise ValueError("isqrt() argument must be nonnegative")
    if n == 0:
        return 0
    c = (n.bit_length() - 1) // 2
    a = 1
    d = 0
    for s in reversed(range(c.bit_length())):
        # Loop invariant: (a-1)**2 < (n >> 2*(c - d)) < (a+1)**2
        e = d
        d = c >> s
        a = (a << d - e - 1) + (n >> 2*c - e - d + 1) // a
    return a - (a*a > n)
The source also contains an explanation of the above algorithm and an informal proof of its correctness.
You can check that the results by the two methods above agree (modulo the extra decimal point in the first result). They're computed by completely different methods, so that acts as a sanity check on both methods.
You could use big integers, e.g. BigInteger in Java. Then you calculate the square root of 2e12 or 2e14. Note that sqrt(2) = 1.4142... and sqrt(200) = 14.142... Then you can use the Babylonian method to get all the digits: E.g. S = 10^14. x(n+1) = (x(n) + S / x(n)) / 2. Repeat until x(n) doesn't change. Maybe there are more efficient algorithms that converge faster.
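A quick Python sketch of that idea (my illustration, not the answerer's code): scale 2 by 10^(2n), run the integer Babylonian iteration, and read the digits off the result.

def integer_sqrt(s):
    # Floor of the square root, via the Babylonian (Newton) iteration.
    if s < 2:
        return s
    x = 1 << ((s.bit_length() + 1) // 2)  # initial guess >= sqrt(s)
    while True:
        y = (x + s // x) // 2
        if y >= x:                        # stopped decreasing: x is the floor
            return x
        x = y

n = 50                                    # digits after the decimal point
digits = str(integer_sqrt(2 * 10 ** (2 * n)))
print(digits[0] + "." + digits[1:])       # 1.4142135623...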
// Input: a positive integer, the number of precise digits after the decimal point
// Output: a string representing the long float square root
function findSquareRoot(number, numDigits) {
    function get_power(x, y) {
        let result = 1n;
        for (let i = 0; i < y; i++) {
            result = result * BigInt(x);
        }
        return result;
    }

    let a = 5n * BigInt(number);
    let b = 5n;
    const precision_digits = get_power(10, numDigits + 1);

    while (b < precision_digits) {
        if (a >= b) {
            a = a - b;
            b = b + 10n;
        } else {
            a = a * 100n;
            b = (b / 10n) * 100n + 5n;
        }
    }

    let decimal_pos = Math.floor(Math.log10(number));
    if (decimal_pos == 0) decimal_pos = 1;

    let result = (b / 100n).toString();
    result = result.slice(0, decimal_pos) + '.' + result.slice(decimal_pos);
    return result;
}
A couple of days ago I played around with Befunge, which is an esoteric programming language. Befunge uses a LIFO stack to store data. When you write programs, the digits from 0 to 9 are actually Befunge instructions which push the corresponding values onto the stack. So for example this would push a 7 onto the stack:
34+
In order to push a number greater than 9, calculations must be done with numbers less than or equal to 9. This would yield 123.
99*76*+
While solving Euler Problem 1 with Befunge I had to push the fairly large number 999 to the stack. Here I began to wonder how I could accomplish this task with as few instructions as possible. By writing a term down in infix notation and taking out common factors I came up with
9993+*3+*
One could also simply multiply two two-digit numbers which produce 999, e.g.
39*66*1+*
I thought about this for while and then decided to write a program which puts out the smallest expression according to these rules in reverse polish notation for any given integer. This is what I have so far (written in NodeJS with underscorejs):
var makeExpr = function (value) {
    if (value < 10) return value + "";

    var output = "", counter = 0;

    (function fn (val) {
        counter++;
        if (val < 9) { output += val; return; }

        var exp = Math.floor(Math.log(val) / Math.log(9));
        var div = Math.floor(val / Math.pow(9, exp));

        _( exp ).times(function () { output += "9"; });
        _(exp-1).times(function () { output += "*"; });

        if (div > 1) output += div + "*";

        fn(val - Math.pow(9, exp) * div);
    })(value);

    _(counter-1).times(function () { output += "+"; });

    return output.replace(/0\+/, "");
};
makeExpr(999);
// yields 999**99*3*93*++
This piece of code constructs the expression naively and is obviously way too long. Now my questions:
Is there an algorithm to simplify expressions in reverse polish notation?
Would simplification be easier in infix notation?
Can an expression like 9993+*3+* be proved to be the smallest one possible?
I hope you can give some insights. Thanks in advance.
When only considering multiplication and addition, it's pretty easy to construct optimal formulas, because that problem has the optimal substructure property. That is, the optimal way to build [num1][num2]op is from a num1 and num2 that are both also optimal. If duplication is also considered, that's no longer true.
The num1 and num2 give rise to overlapping subproblems, so Dynamic Programming is applicable.
We can simply, for a number i:
For every 1 < j <= sqrt(i) that evenly divides i, try [j][i / j]*
For every 0 < j < i/2, try [j][i - j]+
Take the best found formula
That is of course very easy to do bottom-up, just start at i = 0 and work your way up to whatever number you want. Step 2 is a little slow, unfortunately, so after say 100000 it starts to get annoying to wait for it. There might be some trick that I'm not seeing.
Code in C# (not tested super well, but it seems to work):
string[] n = new string[10000];
for (int i = 0; i < 10; i++)
    n[i] = "" + i;

for (int i = 10; i < n.Length; i++)
{
    int bestlen = int.MaxValue;
    string best = null;

    // try factors
    int sqrt = (int)Math.Sqrt(i);
    for (int j = 2; j <= sqrt; j++)
    {
        if (i % j == 0)
        {
            int len = n[j].Length + n[i / j].Length + 1;
            if (len < bestlen)
            {
                bestlen = len;
                best = n[j] + n[i / j] + "*";
            }
        }
    }

    // try sums
    for (int j = 1; j < i / 2; j++)
    {
        int len = n[j].Length + n[i - j].Length + 1;
        if (len < bestlen)
        {
            bestlen = len;
            best = n[j] + n[i - j] + "+";
        }
    }

    n[i] = best;
}
Here's a trick to optimize searching for the sums. Suppose there is an array that contains, for every length, the highest number that can be made with that length. Another thing this array gives us, perhaps less obviously, is a quick way to determine the shortest number that is bigger than some threshold (by simply scanning through the array and noting the first position that crosses the threshold). Together, that gives a quick way to discard huge portions of the search space.
For example, the biggest number of length 3 is 81 and the biggest number of length 5 is 728. Now if we want to know how to get 1009 (prime, so no factors found), first we try the sums where the first part has length 1 (so 1+1008 through 9+1000), finding 9+1000 which is 9 characters long (95558***+).
The next step, checking the sums where the first part has length 3 or less, can be skipped completely. 1009 - 81 = 929, and 929 (the lowest that the second part of the sum can be if the first part is to be 3 characters or less) is bigger than 728 so numbers of 929 and over must be at least 7 characters long. So if the first part of the sum is 3 characters, the second part must be at least 7 characters, and then there's also a + sign on the end, so the total is at least 11 characters. The best so far was 9, so this step can be skipped.
The next step, with 5 characters in the first part, can also be skipped, because 1009 - 728 = 280, and to make 280 or high we need at least 5 characters. 5 + 5 + 1 = 11, bigger than 9, so don't check.
Instead of checking about 500 sums, we only had to check 9 this way, and the check to make the skipping possible is very quick. This trick is good enough that generating all numbers up to a million only takes 3 seconds on my PC (before, it would take 3 seconds to get to 100000).
Here's the code:
string[] n = new string[100000];
int[] biggest_number_of_length = new int[n.Length];

for (int i = 0; i < 10; i++)
    n[i] = "" + i;

biggest_number_of_length[1] = 9;

for (int i = 10; i < n.Length; i++)
{
    int bestlen = int.MaxValue;
    string best = null;

    // try factors
    int sqrt = (int)Math.Sqrt(i);
    for (int j = 2; j <= sqrt; j++)
    {
        if (i % j == 0)
        {
            int len = n[j].Length + n[i / j].Length + 1;
            if (len < bestlen)
            {
                bestlen = len;
                best = n[j] + n[i / j] + "*";
            }
        }
    }

    // try sums
    for (int x = 1; x < bestlen; x += 2)
    {
        int find = i - biggest_number_of_length[x];
        int min = int.MaxValue;
        // find the shortest number that is >= (i - biggest_number_of_length[x])
        for (int k = 1; k < biggest_number_of_length.Length; k += 2)
        {
            if (biggest_number_of_length[k] >= find)
            {
                min = k;
                break;
            }
        }
        // if that number wasn't small enough, it's not worth looking in that range
        if (min + x + 1 < bestlen)
        {
            // range [find .. i] isn't optimal
            for (int j = find; j < i; j++)
            {
                int len = n[i - j].Length + n[j].Length + 1;
                if (len < bestlen)
                {
                    bestlen = len;
                    best = n[i - j] + n[j] + "+";
                }
            }
        }
    }

    // found
    n[i] = best;
    biggest_number_of_length[bestlen] = i;
}
There's still room for improvement. This code will re-check sums that it has already checked. There are simple ways to make it at least not check the same sum twice (by remembering the last find), but that made no significant difference in my tests. It should be possible to find a better upper bound.
There's also 93*94*1+*, which is basically 27*37.
Were I to attack this problem, I'd start by first trying to evenly divide the number. So given 999 I would divide by 9 and get 111. Then I'd try to divide by 9, 8, 7, etc. until I discovered that 111 is 3*37.
37 is prime, so I go greedy and divide by 9, giving me 4 with a remainder of 1.
That seems to give me optimum results for the half dozen I've tried. It's a little expensive, of course, testing for even divisibility. But perhaps not more expensive than generating a too-long expression.
Using this, 100 becomes 55*4*. 102 works out to 29*5*6+.
101 brings up an interesting case. 101 = (9*11) + 2. Or, alternately, (9*9) + 20. Let's see:
983+*2+ (9*11) + 2
99*45*+ (9*9) + 20
Whether it's easier to generate the postfix directly or generate infix and convert, I really don't know. I can see benefits and drawbacks to each.
Anyway, that's the approach I'd take: try to divide evenly at first, and then be greedy dividing by 9. Not sure exactly how I'd structure it.
I'd sure like to see your solution once you figure it out.
Edit
This is an interesting problem. I came up with a recursive function that does a credible job of generating postfix expressions, but it's not optimum. Here it is in C#.
string GetExpression(int val)
{
    if (val < 10)
    {
        return val.ToString();
    }

    int quo, rem;
    // first see if it's evenly divisible
    for (int i = 9; i > 1; --i)
    {
        quo = Math.DivRem(val, i, out rem);
        if (rem == 0)
        {
            // If val < 90, then only generate here if the quotient
            // is a one-digit number. Otherwise it can be expressed
            // as (9 * x) + y, where x and y are one-digit numbers.
            if (val >= 90 || (val < 90 && quo <= 9))
            {
                // value is (i * quo)
                return i + GetExpression(quo) + "*";
            }
        }
    }

    quo = Math.DivRem(val, 9, out rem);
    // value is (9 * quo) + rem
    // optimization reduces (9 * 1) to 9
    var s1 = "9" + ((quo == 1) ? string.Empty : GetExpression(quo) + "*");
    var s2 = GetExpression(rem) + "+";
    return s1 + s2;
}
For 999 it generates 9394*1+**, which I believe is optimum.
This generates optimum expressions for values <= 90. Every number from 0 to 90 can be expressed as the product of two one-digit numbers, or by an expression of the form (9x + y), where x and y are one-digit numbers. However, I don't know that this guarantees an optimum expression for values greater than 90.
There are 44 solutions for 999 with length 9:
39149*+**
39166*+**
39257*+**
39548*+**
39756*+**
39947*+**
39499**+*
39669**+*
39949**+*
39966**+*
93149*+**
93166*+**
93257*+**
93548*+**
93756*+**
93947*+**
93269**+*
93349**+*
93366**+*
93439**+*
93629**+*
93636**+*
93926**+*
93934**+*
93939+*+*
93948+*+*
93957+*+*
96357**+*
96537**+*
96735**+*
96769+*+*
96778+*+*
97849+*+*
97858+*+*
97867+*+*
99689+*+*
956*99*+*
968*79*+*
39*149*+*
39*166*+*
39*257*+*
39*548*+*
39*756*+*
39*947*+*
Edit:
I have been working on some search-space pruning improvements, so sorry I have not posted it immediately. There is a script in Erlang. The original one takes 14s for 999, but this one makes it in around 190ms.
Edit2:
There are 1074 solutions of length 13 for 9999. It takes 7 minutes, and some of them are below:
329+9677**+**
329+9767**+**
338+9677**+**
338+9767**+**
347+9677**+**
347+9767**+**
356+9677**+**
356+9767**+**
3147789+***+*
31489+77***+*
3174789+***+*
3177489+***+*
3177488*+**+*
There is a version in C with more aggressive pruning of the state space which returns only one solution. It is way faster.
$ time ./polish_numbers 999
Result for 999: 39149*+**, length 9
real 0m0.008s
user 0m0.004s
sys 0m0.000s
$ time ./polish_numbers 99999
Result for 99999: 9158*+1569**+**, length 15
real 0m34.289s
user 0m34.296s
sys 0m0.000s
harold reported that his C# brute-force version makes the same number in 20s, so I was curious whether I could improve mine. I tried better memory utilization by refactoring the data structures. The search algorithm mostly works with a solution's length and existence, so I separated that information into one structure (best_rec_header). I also made the solution tree branches separate in another (best_rec_args). Those data are used only when a new better solution is found for a given number. Here is the code.
Result for 99999: 9158*+1569**+**, length 15
real 0m31.824s
user 0m31.812s
sys 0m0.012s
It was still too slow. So I tried some other versions. First I added some statistics to demonstrate that my code is not computing all smaller numbers.
Result for 99999: 9158*+1569**+**, length 15, (skipped 36777, computed 26350)
Then I tried changing the code to compute the + solutions for bigger numbers first.
Result for 99999: 1956**+9158*+**, length 15, (skipped 0, computed 34577)
real 0m17.055s
user 0m17.052s
sys 0m0.008s
It was almost twice as fast. But there was another idea: maybe I sometimes give up finding a solution for a number because of the current best_len limit. So I tried making small numbers (up to half of n) unlimited (note the 255 best_len limit when finding the first operand).
Result for 99999: 9158*+1569**+**, length 15, (skipped 36777, computed 50000)
real 0m12.058s
user 0m12.048s
sys 0m0.008s
A nice improvement, but what if I limit solutions for those numbers by the best solution found so far? It needs some sort of global computation state. The code becomes more complicated, but the result is even faster.
Result for 99999: 97484777**+**+*, length 15, (skipped 36997, computed 33911)
real 0m10.401s
user 0m10.400s
sys 0m0.000s
It was even able to compute a ten times bigger number.
Result for 999999: 37967+2599**+****, length 17, (skipped 440855)
real 12m55.085s
user 12m55.168s
sys 0m0.028s
Then I decided to also try the brute force method, and this was even faster.
Result for 99999: 9158*+1569**+**, length 15
real 0m3.543s
user 0m3.540s
sys 0m0.000s
Result for 999999: 37949+2599**+****, length 17
real 5m51.624s
user 5m51.556s
sys 0m0.068s
Which shows that constants matter. This is especially true on modern CPUs, where the brute force approach gains an advantage from better vectorization, better CPU cache utilization and less branching.
Anyway, I think there is some better approach using a deeper understanding of number theory, or searching the space with algorithms such as A*. And for really big numbers it may be a good idea to use genetic algorithms.
Edit3:
harold came up with a new idea to eliminate trying too many sums. I have implemented it in this new version. It is an order of magnitude faster.
$ time ./polish_numbers 99999
Result for 99999: 9158*+1569**+**, length 15
real 0m0.153s
user 0m0.152s
sys 0m0.000s
$ time ./polish_numbers 999999
Result for 999999: 37949+2599**+****, length 17
real 0m3.516s
user 0m3.512s
sys 0m0.004s
$ time ./polish_numbers 9999999
Result for 9999999: 9788995688***+***+*, length 19
real 1m39.903s
user 1m39.904s
sys 0m0.032s
Don't forget, you can also push ASCII values!!
Usually, this is longer, but for higher numbers it can get much shorter:
If you needed the number 123, it would be much better to do
"{" than 99*76*+
My objective is to find the sum of all numbers from 4 to 666554 which consists of 4,5,6 only.
SUM = 4+5+6+44+45+46+54+55+56+64+65+66+.....................+666554.
Simple method is to run a loop and add the numbers made of 4,5 and 6 only.
long long sum = 0;
for (int i = 4; i <= 666554; i++) {
    /* check if the number contains only 4, 5 and 6;
       if the condition is true then add the number to the sum */
}
But this seems inefficient: checking that each number is made up of 4, 5 and 6 only will take time. Is there any way to increase the efficiency? I have tried a lot but have found no new approach. Please help.
For 1-digit numbers, note that
4 + 5 + 6 == 5 * 3
For 2-digit numbers:
(44 + 45 + 46) + (54 + 55 + 56) + (64 + 65 + 66)
== 45 * 3 + 55 * 3 + 65 * 3
== 55 * 9
and so on.
In general, for n-digit numbers there are 3^n of them consisting of 4, 5, 6 only, and their average value is exactly 55...5 (n fives). In code, their sum is ('5' * n).to_i * 3 ** n (Ruby), or int('5' * n) * 3 ** n (Python).
You calculate this up to 6-digit numbers, then subtract the sum of the 456-numbers from 666555 to 666666 (which exceed the 666554 limit).
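A minimal sketch of that calculation in Python (the helper name and the explicit tail list are mine):

def sum_456(max_digits=6):
    # sum of all 456-numbers with 1..max_digits digits: int('5'*n) * 3**n per length
    return sum(int('5' * n) * 3 ** n for n in range(1, max_digits + 1))

# 456-numbers from 666555 to 666666, which overshoot the 666554 limit
tail = [666555, 666556, 666564, 666565, 666566, 666644, 666645,
        666646, 666654, 666655, 666656, 666664, 666665, 666666]
print(sum_456() - sum(tail))  # 409632209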
P.S.: for small numbers like 666554, using pattern matching is fast enough. (example)
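For instance (my sketch, not necessarily the linked example), with a regular expression:

import re

# brute force: keep only numbers whose decimal form matches [456]+
print(sum(i for i in range(4, 666555)
          if re.fullmatch(r'[456]+', str(i))))  # 409632209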
Implement a counter in base 3 (the number of digit values), e.g. 0, 1, 2, 10, 11, 12, 20, 21, 22, 100, ..., then translate each base-3 number into a decimal number with the digits 4, 5, 6 (0->4, 1->5, 2->6) and add it to a running total. Repeat until the limit is exceeded.
def compute_sum(digits, max_val):
    def _next_val(cur_val):
        # increment the little-endian counter in base len(digits)
        for pos in range(len(cur_val)):
            cur_val[pos] += 1
            if cur_val[pos] < len(digits):
                return
            cur_val[pos] = 0
        cur_val.append(0)

    def _get_val(cur_val):
        # translate counter digits into a decimal number (0->4, 1->5, 2->6)
        digit_val = 1
        num_val = 0
        for x in cur_val:
            num_val += digits[x] * digit_val
            digit_val *= 10
        return num_val

    cur_val = []
    total = 0
    while True:
        _next_val(cur_val)
        num_val = _get_val(cur_val)
        if num_val > max_val:
            break
        total += num_val
    return total


def main():
    digits = [4, 5, 6]
    max_val = 666554
    print(digits, max_val)
    print(compute_sum(digits, max_val))


if __name__ == '__main__':
    main()
Mathematics is good, but not all problems are trivially "compressible", so knowing how to deal with them without mathematics can be worthwhile.
In this problem the summation is trivial; the difficulty, at first glance, is efficiently enumerating the numbers that need to be added.
The "filter" route is a possibility: generate all possible numbers incrementally and filter out those which do not match. However, it is also quite inefficient (in general):
the condition might not be trivial to match: in this case, the easiest way is a conversion to string (fairly heavy on divisions and tests) followed by string matching
the filtering ratio is not too bad to start with, at 30% per digit, but it scales very poorly, as gen-y-s remarked: for a 4-digit number it is about 1%, i.e. generating and checking 100 numbers to get only 1 of them.
I would therefore advise a "generational" approach: only generate numbers that match the condition (and all of them).
I would note that generating all numbers composed of 4, 5 and 6 is like counting (in ternary):
it starts from 4
45 becomes 46, while 46 becomes 54 (beware of carry-overs)
66 becomes 444 (extreme carry-over)
Let's go, in Python, as a generator:
def generator():
    def convert(array):
        # digits, most significant first -> integer
        i = 0
        for e in array:
            i *= 10
            i += e
        return i

    def increment(array):
        # ternary-style increment over the digit set {4, 5, 6}
        result = []
        carry = True
        for e in array[::-1]:
            if carry:
                e += 1
                carry = False
            if e > 6:
                e = 4
                carry = True
            result = [e] + result
        if carry:
            result = [4] + result
        return result

    array = [4]
    while True:
        num = convert(array)
        if num > 666554:
            break
        yield num
        array = increment(array)
Its result can be printed with sum(generator()):
$ time python example.py
409632209
python example.py 0.03s user 0.00s system 82% cpu 0.043 total
And here is the same in C++.
"Start with a simpler problem." —Polya
Sum the n-digit numbers which consist of the digits 4,5,6 only
As Yu Hao explains above, there are 3**n such numbers and their average, by symmetry, is e.g. 555555, so the sum is 3**n * (10**n - 1) * 5/9. But if you didn't spot that, here's how you might solve the problem another way.
The problem has a recursive construction, so let's try a recursive solution. Let g(n) be the sum of all 456-numbers of exactly n digits. Then we have the recurrence relation:
g(n) = (4+5+6)*10**(n-1)*3**(n-1) + 3*g(n-1)
To see this, separate the first digit of each number in the sum (e.g. for n=3, the hundreds column). That gives the first term. The second term is the sum of the remaining digits: one copy of g(n-1) for each leading digit 4, 5, 6.
If that's still unclear, write out the n=2 sum and separate tens from units:
g(2) = 44+45+46 + 54+55+56 + 64+65+66
= (40+50+60)*3 + 3*(4+5+6)
= (4+5+6)*10*3 + 3*g(1)
Cool. At this point, the keen reader might like to check Yu Hao's formula for g(n) satisfies our recurrence relation.
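For instance, a quick check in Python (using the closed form from the earlier answer):

def g_closed(n):
    # 3**n numbers of n digits, whose average is 55...5 == 5*(10**n - 1)/9
    return 3 ** n * (10 ** n - 1) * 5 // 9

for n in range(1, 7):
    assert g_closed(n) == 15 * 10 ** (n - 1) * 3 ** (n - 1) + 3 * g_closed(n - 1)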
To solve OP's problem, the sum of all 456-numbers from 4 to 666666 is g(1) + g(2) + g(3) + g(4) + g(5) + g(6). In Python, with dynamic programming:
def sum456(n):
    """Find the sum of all numbers at most n digits which consist of 4,5,6 only"""
    g = [0] * (n + 1)
    for i in range(1, n + 1):
        g[i] = 15 * 10 ** (i - 1) * 3 ** (i - 1) + 3 * g[i - 1]
    print(g)  # show the array of partial solutions
    return sum(g)
For n=6
>>> sum456(6)
[0, 15, 495, 14985, 449955, 13499865, 404999595]
418964910
Edit: I note that OP truncated his sum at 666554, so it doesn't fit the general pattern; the answer will be less the last few terms:
>>> sum456(6) - (666555 + 666556 + 666564 + 666565 + 666566 + 666644 + 666645 + 666646 + 666654 + 666655 + 666656 + 666664 + 666665 + 666666)
409632209
The sum of 4 through 666666 is:
>>> total = sum(15 * 3**i * int('1' * (i + 1)) for i in range(6))
>>> total
418964910
The sum of the few numbers between 666554 and 666666 is:
>>> rest = (666555 + 666556 + 666564 + 666565 + 666566
...         + 666644 + 666645 + 666646
...         + 666654 + 666655 + 666656
...         + 666664 + 666665 + 666666)
>>> rest
9332701
>>> total - rest
409632209
A Java implementation of the question. It uses modulo (10^9 + 7) for the answer, and the count array caps how many times each digit may appear.
import java.util.ArrayList;  // imports needed by this excerpt
import java.util.List;

public static long compute_sum(long[] digits, long max_val, long count[]) {
    List<Long> cur_val = new ArrayList<>();
    long sum = 0;
    long mod = ((long) Math.pow(10, 9)) + 7;
    long num_val = 0;
    while (true) {
        _next_val(cur_val, digits);
        num_val = _get_val(cur_val, digits, count);
        sum = (sum % mod + num_val % mod) % mod;
        if (num_val == max_val) {
            break;
        }
    }
    return sum;
}

public static void _next_val(List<Long> cur_val, long[] digits) {
    // increment the little-endian base-3 counter
    for (int pos = 0; pos < cur_val.size(); pos++) {
        cur_val.set(pos, cur_val.get(pos) + 1);
        if (cur_val.get(pos) < digits.length)
            return;
        cur_val.set(pos, 0L);
    }
    cur_val.add(0L);
}

public static long _get_val(List<Long> cur_val, long[] digits, long count[]) {
    long digit_val = 1;
    long num_val = 0;
    long[] digitAppearanceCount = new long[]{0, 0, 0};
    for (Long x : cur_val) {
        digitAppearanceCount[x.intValue()]++;
        if (digitAppearanceCount[x.intValue()] > count[x.intValue()]) {
            num_val = 0;  // digit used more often than allowed: skip this number
            break;
        }
        num_val = num_val + digits[x.intValue()] * digit_val;
        digit_val *= 10;
    }
    return num_val;
}

public static void main(String[] args) {
    long[] digits = new long[]{4, 5, 6};
    long count[] = new long[]{1, 1, 1};
    long max_val = 654;
    System.out.println(compute_sum(digits, max_val, count));
}
The answer by @gen-y-s (https://stackoverflow.com/a/31286947/8398943) is wrong: it includes 55, 66, 44 for x=y=z=1, which exceeds the available count of 4s, 5s and 6s. It gives 12189 as output, but it should be 3675 for x=y=z=1.
The logic by @Yu Hao (https://stackoverflow.com/a/31285816/8398943) has the same mistake as mentioned above: it also gives 12189 instead of 3675 for x=y=z=1.
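A brute-force check of the 3675 figure (my sketch; x=y=z=1 means each digit may be used at most once):

from itertools import permutations

# numbers built from '4', '5', '6' with each digit used at most once
total = sum(int(''.join(p))
            for r in range(1, 4)
            for p in permutations('456', r))
print(total)  # 3675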
Suppose I have an int x = 54897, an old digit index (0-based), and a new value for that digit. What's the fastest way to get the resulting number?
Example
x = 54897
index = 3
value = 2
y = f(x, index, value) // => 54827
Edit: by fastest, I definitely mean faster performance. No string processing.
In the simplest case, considering the digits numbered from LSB to MSB (the first one being 0) AND knowing the old digit, we could do something as simple as:
num += (new_digit - old_digit) * 10**pos;
For the real problem we would need:
1) the MSB-first version of pos, which costs a log(), or at most log10(MAX_INT) divisions by ten (this could be improved using binary search);
2) the digit at that pos, which needs at most 2 divisions (or zero, reusing the results from step 1).
You could also use the special x86 FPU instruction that can store a float in BCD (I have no idea how slow it is).
UPDATE: the first step could be done even faster, without any divisions, with a binary search like this:
int my_log10(unsigned short n){
    // short: 0..64k -> 1..5 digits
    if (n < 1000){ // 1..3
        if (n < 10) return 1;
        if (n < 100) return 2;
        return 3;
    } else { // 4..5
        if (n < 10000) return 4;
        return 5;
    }
}
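Putting these steps together, here is a rough sketch in Python (mine, not ruslik's; it uses len(str(...)) for step 1 where the C version would use a my_log10-style search):

def replace_digit(num, index_from_left, new_digit):
    pos = len(str(num)) - 1 - index_from_left  # step 1: MSB-first index -> power of ten
    old_digit = num // 10 ** pos % 10          # step 2: extract the old digit
    return num + (new_digit - old_digit) * 10 ** pos

print(replace_digit(54897, 3, 2))  # 54827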
If your index started at the least significant digit, you could do something like:
p = pow(10, index);
x = x / (p*10) * (p*10) + value * p + x % p;  // integer division
But since your index is backwards, a string is probably the way to go. That would also be more readable and maintainable.
Calculate the "mask" M: 10 raised to the power of index, where index is a zero-based index from the right. If you need to index from the left, recalculate index accordingly.
Calculate the "prefix" PRE = x / (M * 10) * (M * 10)
Calculate the "suffix" SUF = x % M
Calculate the new "middle part" MID = value * M
Generate the new number new_x = PRE + MID + SUF.
P.S. ruslik's answer does it more elegantly :)
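A short sketch of this mask approach in Python (the function name is mine; note the integer divisions):

def replace_digit_mask(x, index_from_right, value):
    m = 10 ** index_from_right
    pre = x // (m * 10) * (m * 10)  # digits above the target position
    suf = x % m                     # digits below it
    mid = value * m                 # the new digit, shifted into place
    return pre + mid + suf

print(replace_digit_mask(54897, 1, 2))  # 54827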
You need to start by figuring out how many digits are in your input. I can think of two ways of doing that, one with a loop and one with logarithms. Here's the loop version. This will fail for negative and zero inputs and when the index is out of bounds, probably other conditions too, but it's a starting point.
def f(x, index, value):
    place = 1
    residual = x
    while residual > 0:
        if index < 0:
            place *= 10
        index -= 1
        residual //= 10  # integer division, so this also works on Python 3
    digit = (x // place) % 10
    return x - place * digit + place * value
P.S. This is working Python code. The principle of something simple like this is easy to work out, but the details are so tricky that you really need to iterate on it a bit. In this case I started from the principle that I wanted to subtract out the old digit and add the new one; from there it was a matter of getting the correct multiplier.
You've got to get specific about your compute platform if you're talking about performance.
I would approach this by converting the number into pairs of decimal digits, 4 bits each.
Then I would find and process the pair that needs modification as a byte.
Then I would put the number back together.
There are assemblers that do this very well.
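For illustration only, here is a rough Python model of that pack-patch-unpack idea (the answer itself is about doing this in hardware/assembly):

def replace_digit_bcd(x, index_from_right, value):
    bcd = 0
    shift = 0
    while x:                      # pack: one decimal digit per 4-bit nibble
        bcd |= (x % 10) << shift
        x //= 10
        shift += 4
    pos = 4 * index_from_right
    bcd = (bcd & ~(0xF << pos)) | (value << pos)  # patch the target nibble
    y, mul = 0, 1
    while bcd:                    # unpack back to binary
        y += (bcd & 0xF) * mul
        bcd >>= 4
        mul *= 10
    return y

print(replace_digit_bcd(54897, 1, 2))  # 54827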