How to coerce math.Inf to an integer? - go

I've got some code I'm using to do comparisons, and I want to start with infinite values. Here's a snippet of my code.
import (
    "fmt"
    "math"
)

func snippet(arr []int) {
    least := int(math.Inf(1))
    greatest := int(math.Inf(-1))
    fmt.Println("least", math.Inf(1), least)
    fmt.Println("greatest", math.Inf(-1), greatest)
}
and here's the output I get from the console
least +Inf -9223372036854775808
greatest -Inf -9223372036854775808
Why is +Inf coerced into a negative int?

Infinity is not representable by int.
According to the Go spec:
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
Maybe you are looking for the largest representable int? How to get it is explained here.
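For example, a minimal sketch of that sentinel approach, using the math.MaxInt and math.MinInt constants (available since Go 1.17) as starting values for a min/max scan:

package main

import (
    "fmt"
    "math"
)

// snippet scans arr for its smallest and largest values, starting from the
// extreme representable ints instead of trying to convert infinities.
func snippet(arr []int) {
    least := math.MaxInt    // every element is <= this
    greatest := math.MinInt // every element is >= this
    for _, v := range arr {
        if v < least {
            least = v
        }
        if v > greatest {
            greatest = v
        }
    }
    fmt.Println("least", least, "greatest", greatest)
}

func main() {
    snippet([]int{3, -7, 42}) // least -7 greatest 42
}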

math.Inf() returns an IEEE double-precision float representing positive infinity if the sign of the argument is >= 0, and negative infinity if the sign is < 0, so your calls to math.Inf() do what you intend; the surprise comes from converting that value to int.
But, the Go language specification (always good to read the specifications) says this:
Conversions between numeric types
...
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
Two's complement integer values don't have the concept of infinity, so the result is implementation dependent.
Myself, I'd have expected to get the largest or smallest integer value for the integer type the cast is targeting, but apparently that's not the case.
This leads to the runtime source file responsible for the conversion: https://go.dev/src/runtime/softfloat64.go
And this is the actual source code.
Note that an IEEE-754 double-precision float is a 64-bit double word, consisting of:
a sign bit, the high-order (most significant/leftmost) bit, with 0 indicating positive and 1 indicating negative;
an exponent (biased), consisting of the next 11 bits; and
a mantissa, consisting of the remaining 52 bits, which can be denormalized.
Positive infinity is a special value with a sign bit of 0, an exponent of all 1 bits, and a mantissa of all 0 bits:
0 11111111111 0000000000000000000000000000000000000000000000000000
or 0x7FF0000000000000.
Negative infinity is the same, with the exception that the sign bit is 1:
1 11111111111 0000000000000000000000000000000000000000000000000000
or 0xFFF0000000000000.
Looks like funpack64() returns 5 values:
a uint64 representing the sign (0 or the very large non-zero value 0x8000000000000000),
a uint64 representing the normalized mantissa,
an int representing the exponent,
a bool indicating whether or not this is +/- infinity, and
a bool indicating whether or not this is NaN.
From that, you should be able to figure out why it returns the value it does.
[Frankly, I'm surprised that f64toint() doesn't short-circuit when funpack64() returns fi = true.]
const mantbits64 uint = 52
const expbits64 uint = 11
const bias64 = -1<<(expbits64-1) + 1

func f64toint(f uint64) (val int64, ok bool) {
    fs, fm, fe, fi, fn := funpack64(f)
    switch {
    case fi, fn: // NaN
        return 0, false
    case fe < -1: // f < 0.5
        return 0, false
    case fe > 63: // f >= 2^63
        if fs != 0 && fm == 0 { // f == -2^63
            return -1 << 63, true
        }
        if fs != 0 {
            return 0, false
        }
        return 0, false
    }
    for fe > int(mantbits64) {
        fe--
        fm <<= 1
    }
    for fe < int(mantbits64) {
        fe++
        fm >>= 1
    }
    val = int64(fm)
    if fs != 0 {
        val = -val
    }
    return val, true
}

func funpack64(f uint64) (sign, mant uint64, exp int, inf, nan bool) {
    sign = f & (1 << (mantbits64 + expbits64))
    mant = f & (1<<mantbits64 - 1)
    exp = int(f>>mantbits64) & (1<<expbits64 - 1)
    switch exp {
    case 1<<expbits64 - 1:
        if mant != 0 {
            nan = true
            return
        }
        inf = true
        return
    case 0:
        // denormalized
        if mant != 0 {
            exp += bias64 + 1
            for mant < 1<<mantbits64 {
                mant <<= 1
                exp--
            }
        }
    default:
        // add implicit top bit
        mant |= 1 << mantbits64
        exp += bias64
    }
    return
}
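If you actually want clamping behavior (the largest or smallest int for out-of-range floats), you have to write it yourself; the conversion will not do it for you. A minimal sketch, where clampToInt64 is my own helper name rather than anything from the standard library:

package main

import (
    "fmt"
    "math"
)

// clampToInt64 converts f to int64, pinning out-of-range values
// (including +Inf and -Inf) to the extremes and mapping NaN to 0.
func clampToInt64(f float64) int64 {
    switch {
    case math.IsNaN(f):
        return 0
    case f >= math.MaxInt64: // also catches +Inf
        return math.MaxInt64
    case f <= math.MinInt64: // also catches -Inf
        return math.MinInt64
    default:
        return int64(f)
    }
}

func main() {
    fmt.Println(clampToInt64(math.Inf(1)))  // 9223372036854775807
    fmt.Println(clampToInt64(math.Inf(-1))) // -9223372036854775808
}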

Related

Go - Ratios of the elements in a slice that are positive, negative, and zero

I want to show the ratios of the elements of a slice that are positive, negative, and zero. I need the ratios in float32. This is my code:
arr := []int32{-2, -1, 0, 1, 2}
var negativeNumber, positiveNumber, zeroNumber, totalNumber int32
var negativeRatio, positiveRatio, zeroRatio float32

for i := 0; i < len(arr); i++ {
    totalNumber += 1
}

for i := 0; i < len(arr); i++ {
    if arr[i] < 0 {
        negativeNumber += 1
    } else if arr[i] == 0 {
        zeroNumber += 1
    } else if arr[i] > 0 {
        positiveNumber += 1
    }
}

negativeRatio = float32(negativeNumber / totalNumber)
zeroRatio = float32(zeroNumber / totalNumber)
positiveRatio = float32(positiveNumber / totalNumber)

fmt.Printf("total number: %d\n", totalNumber)
fmt.Printf("positive number: %d\n", positiveNumber)
fmt.Printf("negative number: %d\n", negativeNumber)
fmt.Printf("zero number: %d\n", zeroNumber)
fmt.Printf("positive ratio: %f\n", positiveRatio)
fmt.Printf("negative ratio: %f\n", negativeRatio)
fmt.Printf("zero ratio: %f\n", zeroRatio)
But when I print the variables, I get the positive, negative, and zero counts right but the wrong ratios. Here is the output:
total number: 5
positive number: 2
negative number: 2
zero number: 1
positive ratio: 0.000000
negative ratio: 0.000000
zero ratio: 0.000000
What am I doing wrong?
You are using integer division instead of floating-point division; integer division discards the remainder and keeps only the integer part of the result.
Since negativeNumber and totalNumber are both of type int32, negativeNumber / totalNumber performs integer division, which truncates the result toward zero (here, 2/5 becomes 0). When you then convert that to a float32 with float32(negativeNumber / totalNumber), you get 0.0, as expected.
In order to use floating-point division, the operands must be floating-point values, and Go will not mix int32 and float32 in one expression, so convert both: float32(negativeNumber) / float32(totalNumber)
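Applied to the snippet above, a small self-contained sketch (I shortened the counter names) that counts in float32 and therefore never hits integer truncation:

package main

import "fmt"

func main() {
    arr := []int32{-2, -1, 0, 1, 2}
    total := float32(len(arr))

    var neg, pos, zero float32
    for _, v := range arr {
        switch {
        case v < 0:
            neg++
        case v > 0:
            pos++
        default:
            zero++
        }
    }

    fmt.Printf("positive ratio: %f\n", pos/total) // 0.400000
    fmt.Printf("negative ratio: %f\n", neg/total) // 0.400000
    fmt.Printf("zero ratio: %f\n", zero/total)    // 0.200000
}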

Using static_cast to convert an int to a char

I have written this code to convert Decimal to binary:
string Solution::findDigitsInBinary(int A) {
    if (A == 0)
        return "0";
    else {
        string bin = "";
        while (A > 0) {
            int rem = (A % 2);
            bin.push_back(static_cast<char>(A % 2));
            A = A / 2;
        }
        reverse(bin.begin(), bin.end());
        return bin;
    }
}
But I am not getting the desired result using static_cast.
I have seen something related that does give the desired result:
(char)('0' + rem)
What difference does static_cast make? Why am I not getting the correct binary output?
With:
(char)('0' + rem);
the important difference is not the cast, but that the remainder, which is always 0 or 1, is added to the character '0', which means that you are adding the character '0' or '1' to your string.
In your version you are adding the integer value 0 or 1 to the string, but the character codes for '0' and '1' are 48 and 49. Adding the remainder of 0 or 1 to '0' gives a value of either 48 (the character '0') or 49 (the character '1').
If you do the same thing in your code it will also work.
string findDigitsInBinary(int A) {
    if (A == 0)
        return "0";
    else {
        string bin = "";
        while (A > 0) {
            int rem = (A % 2);
            bin.push_back(static_cast<char>(A % 2 + '0')); // Remainder + '0'
            A = A / 2;
        }
        reverse(bin.begin(), bin.end());
        return bin;
    }
}
Basically you should be adding characters to the string, not numbers. So you shouldn't be adding 0 and 1 to the string; you should be adding the character codes 48 ('0') and 49 ('1').
An ASCII chart illustrates this: the character '0' is 48 in decimal. Say you wanted to add the digit 4 to the string; because '0' is decimal 48, you would actually want to add the decimal value 52 to the string (48 + 4). That is what '0' + rem does. This is done automatically for you if you insert a character, that is, if you do:
mystring += 'A';
It will add an 'A' character to your string, but what it's actually doing is converting that 'A' to decimal 65 and adding that to the string. In your code you're adding the integers 0 and 1, and those aren't the characters '0' and '1' in the ASCII/Unicode encoding.
Now that you understand how characters are encoded: casting an integer to a char does not turn the integer into its character representation; it only changes the data type from int to char, i.e. from a (most likely) 4-byte type to a 1-byte type. Your cast did the following:
After the modulo % operation you got a result of either 1 or 0 as an integer, let's just say you got a 1 remainder, it would look like this as an int:
00000000 00000000 00000000 00000001
After the cast to a char it would convert it to a one-byte data type, which would make it look like this:
00000001 // Now it's a one-byte data type
Whereas the digit '1' encoded as a string character is 49, which looks like this:
00110001
As for the difference between static_cast and c-style cast, the static_cast does compile-time checks and allows casts between certain types based on particular rules, whereas a c-style cast isn't as restrictive.
char a = 5;
int* p = static_cast<int*>(&a); // Will not compile
int* p2 = (int*)&a; // Will compile and run, but is discouraged as there are risks.
*p2 = 7; // You've written past the single byte char into 3 extra bytes, which is an access violation, or undefined behaviour.

What's happening with this method?

type IntSet struct {
    words []uint64
}

func (s *IntSet) Has(x int) bool {
    word, bit := x/64, uint(x%64)
    return word < len(s.words) && s.words[word]&(1<<bit) != 0
}
Let's go through what I think is going on:
A new type is declared called IntSet. Underneath its new type declaration it is a uint64 slice.
A method is created called Has(). It can only receive IntSet types; after playing around with ints she returns a bool.
Before she can play she needs two ints. She stores these babies on the stack.
Lost for words
This method's purpose is to report whether the set contains the non-negative value x. Here is the Go test:
func TestExample1(t *testing.T) {
    //!+main
    var x, y IntSet
    fmt.Println(x.Has(9), x.Has(123)) // "true false"
    //!-main
    // Output:
    // true false
}
I'm looking for some guidance understanding what this method is doing inside, and why the programmer did it in such a complicated way (I feel like I am missing something).
The return statement:
return word < len(s.words) && s.words[word]&(1<<bit) != 0
Is the order of operations this?
return (word < len(s.words)) && ((s.words[word] & (1 << bit)) != 0)
And what are the [word] and & doing within:
s.words[word]&(1<<bit) != 0
edit: I am slowly beginning to see that:
s.words[word]&(1<<bit) != 0
is just indexing into a slice, but I don't understand the &
As I read the code, I scribbled some notes:
package main

import "fmt"

// A set of bits
type IntSet struct {
    // bits are grouped into 64 bit words
    words []uint64
}

// x is the index for a bit
func (s *IntSet) Has(x int) bool {
    // The word index for the bit
    word := x / 64
    // The bit index within a word for the bit
    bit := uint(x % 64)
    if word < 0 || word >= len(s.words) {
        // error: word index out of range
        return false
    }
    // the bit set within the word
    mask := uint64(1 << bit)
    // true if the bit in the word set
    return s.words[word]&mask != 0
}

func main() {
    nBits := 2*64 + 42
    // round up to whole word
    nWords := (nBits + (64 - 1)) / 64
    bits := IntSet{words: make([]uint64, nWords)}
    // bit 127 = 1 * 64 + 63
    bits.words[1] = 1 << 63
    fmt.Printf("%b\n", bits.words)
    for i := 0; i < nWords*64; i++ {
        has := bits.Has(i)
        if has {
            fmt.Println(i, has)
        }
    }
    has := bits.Has(127)
    fmt.Println(has)
}
Playground: https://play.golang.org/p/rxquNZ_23w1
Output:
[0 1000000000000000000000000000000000000000000000000000000000000000 0]
127 true
true
The Go Programming Language Specification
Arithmetic operators
& bitwise AND integers
peterSO's answer is spot on - read it. But I figured this might also help you understand.
Imagine I want to store some random numbers in the range 1 - 8. After I store these numbers I will be asked if the number n (also in the range of 1 - 8) appears in the numbers I recorded earlier. How would we store the numbers?
One, probably obvious, way would be to store them in a slice or maybe a map. Maybe we would choose a map since lookups will be constant time. So we create our map
seen := map[uint8]struct{}{}
Our code might look something like this
type IntSet struct {
    seen map[uint8]struct{}
}

func (i *IntSet) AddValue(v uint8) {
    i.seen[v] = struct{}{}
}

func (i *IntSet) Has(v uint8) bool {
    _, ok := i.seen[v]
    return ok
}
For each number we store we take up (at least) 1 byte (8 bits) of memory. If we were to store all 8 numbers we would be using 64 bits / 8 bytes.
However, as the name implies, this is an int Set. We don't care about duplicates, we only care about membership (which Has provides for us).
But there is another way we could store these numbers, and we could do it all within a single byte. Since a byte provides 8 bits, we can use these 8 bits as markers for values we have seen. The initial value (in binary notation) would be
00000000 == uint8(0)
If we did an AddValue(3) we could change the 3rd bit and end up with
00000100 == uint8(4)
     ^
     |______ 3rd bit
If we then called AddValue(8) we would have
10000100 == uint8(132)
^    ^
|    |______ 3rd bit
|___________ 8th bit
So after adding 3 and 8 to our IntSet we have the internally stored integer value of 132. But how do we take 132 and figure out whether a particular bit is set? Easy, we use bitwise operators.
The & operator is a logical AND. It will return the value of the bits common between the numbers on each side of the operator. For example
  10001100    01110111    11111111
& 01110100  & 01110000  & 00000001
  --------    --------    --------
  00000100    01110000    00000001
So to find out if n is in our set we simply do
our_set_value & (1 << (value_we_are_looking_for - 1))
which if we were searching for 4 would yield
  10000100
& 00001000
  --------
  00000000   <-- so 4 is not present
or if we were searching for 8
  10000100
& 10000000
  --------
  10000000   <-- so 8 is present
You may have noticed I subtracted 1 from our value_we_are_looking_for. This is because I am fitting 1-8 into our 8-bit number. If we only wanted to store seven numbers, we could just skip using the very first bit and assume our counting starts at bit #2; then we wouldn't have to subtract 1, like the code you posted does.
Assuming you understand all of that, here's where things get interesting. So far we have been storing our values in a uint8 (so we could only have 8 values, or 7 if you omit the first bit). But there are larger types with more bits, like uint64. Instead of 8 values, we can store 64 values! But what happens if the range of values we want to track exceeds 1-64? What if we want to store 65? This is where the slice of words comes from in the original code.
Since the code posted skips the first bit, from now on I will do so as well.
We can use the first uint64 to store the numbers 1 - 63. When we want to store the numbers 64-127 we need a new uint64. So our slice would be something like
[ uint64_of_1-63, uint64_of_64-127, uint64_of_128-191, etc. ]
Now, to answer the question about whether a number is in our set, we need to first find the uint64 whose range would contain our number. If we were searching for 110 we would want to use the uint64 located at index 1 (uint64_of_64-127) because 110 falls in that range.
To find the index of the word we need to look at, we take the whole number value of n / 64. In the case of 110 we would get 1, which is exactly what we want.
Now we need to examine the specific bit of that number. The bit that needs to be checked would be the remainder when dividing 110 by 64, or 46. So if the 46th bit of the word at index 1 is set, then we have seen 110 before.
This is how it might look in code
type IntSet struct {
    words []uint64
}

func (s *IntSet) Has(x int) bool {
    word, bit := x/64, uint(x%64)
    return word < len(s.words) && s.words[word]&(1<<bit) != 0
}

func (s *IntSet) AddValue(x int) {
    word := x / 64
    bit := x % 64
    if word < len(s.words) {
        s.words[word] |= (1 << uint64(bit))
    }
}
And here is some code to test it
func main() {
    rangeUpper := 1000
    bits := IntSet{words: make([]uint64, (rangeUpper/64)+1)}
    bits.AddValue(127)
    bits.AddValue(8)
    bits.AddValue(63)
    bits.AddValue(64)
    bits.AddValue(998)
    fmt.Printf("%b\n", bits.words)
    for i := 0; i < rangeUpper; i++ {
        if ok := bits.Has(i); ok {
            fmt.Printf("Found %d\n", i)
        }
    }
}
OUTPUT
Found 8
Found 63
Found 64
Found 127
Found 998
Playground of above
Note
The |= is another bitwise operator, OR. It combines the two values, keeping a 1 anywhere there is a 1 in either value:
  10000000    00000001    00000001
| 01000000  | 10000000  | 00000001
  --------    --------    --------
  11000000    10000001    00000001   <-- important that we can
                                         set the same value
                                         multiple times
Using this method we can reduce the cost of storing membership for 65535 numbers from roughly 131KB (two bytes per number) to about 8KB (one bit per number). This type of bit manipulation for set membership is very common in implementations of Bloom Filters.
An IntSet represents a set of integers. The presence in the set of any integer in a contiguous range can be recorded by setting a single bit in the IntSet. Likewise, checking whether a specific integer is in the IntSet can be done by checking whether the bit corresponding to that integer is set.
So the code is finding the specific uint64 in the Intset corresponding to the integer:
word := x/64
and then the specific bit in that uint64:
bit := uint(x%64)
and then checking first that the integer being tested is in the range supported by the IntSet:
word < len(s.words)
and then whether the specific bit corresponding to the specific integer is set:
&& s.words[word]&(1<<bit) != 0
This part:
s.words[word]
pulls out the specific uint64 of the IntSet that tracks whether the integer in question is in the set.
&
is a bitwise AND.
(1<<bit)
means take a 1, shift it to the bit position representing the specific integer being tested.
Performing the bitwise AND between the stored word and the bit-shifted 1 yields 0 if the bit corresponding to the integer is not set, and a non-zero value if it is set (meaning the integer in question is a member of the IntSet).
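Tying both answers together, here is a small sketch (the test values are my own) that sets one bit and prints the intermediate word index, bit index, and mask that Has works with:

package main

import "fmt"

type IntSet struct {
    words []uint64
}

func (s *IntSet) Has(x int) bool {
    word, bit := x/64, uint(x%64)
    return word < len(s.words) && s.words[word]&(1<<bit) != 0
}

func main() {
    s := IntSet{words: make([]uint64, 2)} // room for 0..127
    s.words[0] |= 1 << 9                  // record that 9 is present

    x := 9
    word, bit := x/64, uint(x%64)
    fmt.Println("word index:", word)         // 0
    fmt.Println("bit index:", bit)           // 9
    fmt.Printf("mask: %b\n", uint64(1)<<bit) // 1000000000
    fmt.Println(s.Has(9), s.Has(123))        // true false
}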

Convert a hexadecimal number to binary in Go and be able to access each bit

I am fiddling around with Go at the moment and have stumbled upon a problem on which I would like some feedback and help :)
My problem is that I have a string containing a hexadecimal value as input, such as this:
"60A100"
Now, I want to convert this to the binary representation of the number and be able to look at specific bits within.
My solution to this right now is:
i, err := strconv.ParseUint(rawHex, 16, 32)
if err != nil {
    fmt.Printf("%s", err)
}

// Convert int to binary representation
// %024b indicates base 2, padding with 0, with 24 characters.
bin := fmt.Sprintf("%024b", i)
The variable bin now holds exactly what I want, except that it is a string, which I don't think is optimal. I would rather have an array of the individual bits, such that I could just choose index i to get bit number i :)
Because as far as I know right now, if I look up index 8 like so, bin[8], I will get the ASCII code of the character at that position, not the bit value itself.
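(A quick sketch of exactly that, using the value from above: the byte you get back is the ASCII code, though subtracting '0' recovers the bit.)

package main

import "fmt"

func main() {
    bin := fmt.Sprintf("%024b", 0x60A100)
    fmt.Println(bin[8])            // 49, the ASCII code of '1'
    fmt.Println(int(bin[8] - '0')) // 1, the actual bit value
}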
I have searched quite a bit, but I can't find a solution that fits perfectly, but maybe I am looking in the wrong spot.
I hope you guys can guide me to the correct / optimal solution in this case :)
Thanks in advance!
You could turn it into a slice representing bits
// This could also return []bool
func asBits(val uint64) []uint64 {
    bits := []uint64{}
    for i := 0; i < 24; i++ {
        bits = append([]uint64{val & 0x1}, bits...)
        // or
        // bits = append(bits, val & 0x1)
        // depending on the order you want
        val = val >> 1
    }
    return bits
}
func main() {
    rawHex := "60A100"
    i, err := strconv.ParseUint(rawHex, 16, 32)
    if err != nil {
        fmt.Printf("%s", err)
    }
    fmt.Printf("%024b\n", i)
    fmt.Println(asBits(i))
}
OUTPUT
011000001010000100000000
[0 1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0]
https://play.golang.org/p/KK_AUPgbZu
As @jimb points out, you can also just check an individual bit:
fmt.Printf("9th bit is set? %t\n", (i >> 8) & 1 == 1)
which is what @n-carter's answer does.
After parsing the value you can directly access each bit. You can use something like this:
func getNthBit(val, n uint32) int {
    n = 32 - n
    if 1 << n & val > 0 {
        return 1
    }
    return 0
}
Following @n-carter's answer, you can access each bit individually.
There are two approaches:
Option 1: Shifting the value
Shift the binary number to the right by (32-n) positions, so that the n-th bit (counting from the left) becomes the first (lowest) one, then mask it with 1.
func getNthBit(val, n uint32) int {
    // 1. convert n (counted from the left, starting at 1) into a shift amount
    nthBit := 32 - n
    // 2. move the nth bit to the first position
    movedVal := val >> nthBit
    // 3. mask the value, selecting only this first bit
    maskedValue := movedVal & 1
    return int(maskedValue)
    // can be shortened like so
    // return int((val >> (32 - n)) & 1)
}
Explanation:
1. Get the bit index, counting from the left (n = 3 here, so the shift amount is 32-3 = 29):
01100000101000010000000001000101
  ^
  3rd bit from the left, 29 bits from the right
2. Shift the bits so the n-th bit ends up in the first (lowest) position:
01100000101000010000000001000101 >> 29
gives
00000000000000000000000000000011
(the three leading bits 011 are now the three lowest bits)
3. Mask the first bit. This picks (extracts) the value of that bit:
  00000000000000000000000000000011
& 00000000000000000000000000000001
  --------------------------------
  00000000000000000000000000000001   (= 1, the bit was set)
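For example, with the corrected Option 1 helper above sitting in package main (plus import "fmt"), and the 32-bit value from the diagrams:

func main() {
    val := uint32(0x60A10045) // 01100000101000010000000001000101

    fmt.Println(getNthBit(val, 1))  // 0 (leftmost bit)
    fmt.Println(getNthBit(val, 2))  // 1
    fmt.Println(getNthBit(val, 3))  // 1
    fmt.Println(getNthBit(val, 32)) // 1 (rightmost bit)
}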
Option 2: shifting 1 and masking with it
This can be done the way @n-carter does it: shift a 1 to the left.
func getNthBit(val, n uint32) int {
    // 1. convert n (counted from the left, starting at 1) into a shift amount
    nthBit := 32 - n
    // 2. move the mask 1 bit to the nth position
    mask := uint32(1) << nthBit
    // 3. mask the value, selecting only this nth bit
    maskedValue := val & mask
    if maskedValue == 0 {
        return 0
    }
    return 1
    // can be written shorter like:
    // if val & (1 << (32-n)) == 0 {
    //     return 0
    // }
    // return 1
}
Explanation:
1. Get the bit index, counting from the left (n = 3, so the shift amount is 32-3 = 29):
01100000101000010000000001000101
  ^
  3rd bit from the left, 29 bits from the right
2. Shift the 1 to the n-th position (1 << 29 == 2^29):
00000000000000000000000000000001 << 29
gives
00100000000000000000000000000000
3. Mask the n-th bit. This picks (extracts) the value of that bit:
  01100000101000010000000001000101
& 00100000000000000000000000000000
  --------------------------------
  00100000000000000000000000000000   (non-zero, so the function returns 1)
Hope this helps. It takes some time to visualise bit operations in your head.

Is there an efficient way to approximate (a / b)^n where a, b, and n are unsigned integers?

Exponentiation by squaring is an algorithm that quickly computes a^n, where a and n are signed integers. (It does so in O(log n) multiplications.)
Is there a similar algorithm that instead computes (a / b)^n, where a, b, and n are all unsigned integers? The problem with the obvious approach (i.e., computing a^n / b^n) is that it will return wrong results due to integer overflow on the intermediate values.
I don't have floating points in the host language, only ints.
I'm okay with an approximate answer.
If you want excellent accuracy for the value of (a/b)^n, where a, b, and n are unsigned integers and you do not have floating point arithmetic available--use extended-precision integer calculations to find a^n and b^n, then divide the two.
Some languages, such as Python, have extended-precision integer arithmetic built in. If your language does not have it, look for a package that implements it. If you cannot do that, just make your own package. It is not that hard; such a package was an assignment in my second-semester computer science class back in the day. The multiplications and powers are fairly straightforward; the most difficult part is the division, even if you just want the quotient and remainder. But "most difficult" does not mean "very difficult" and you could probably do it. The second most difficult routine is printing the extended integer in decimal format.
The basic idea is to store each integer in an array or list of regular unsigned integers, where each integer is a "digit" in arithmetic with a large base. You want to be able to handle the product of any two digits, so if your machine has 32-bit integers and you have no way of handling 64-bit integers, store "digits" of 16 bits each. The larger the "digit", the faster the calculations. If your calculations are few and your printing to decimal is frequent, use a power of 10 such as 10000 for each "digit".
Ask if you need more detail.
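The question says the host language has only ints, but as a sketch of what this approach looks like when a big-integer package is available (Go's math/big here), the whole thing is a few lines:

package main

import (
    "fmt"
    "math/big"
)

// ratioPow computes a^n / b^n exactly with big integers and returns the
// quotient truncated toward zero, illustrating the approach described above.
func ratioPow(a, b, n uint64) *big.Int {
    an := new(big.Int).Exp(new(big.Int).SetUint64(a), new(big.Int).SetUint64(n), nil)
    bn := new(big.Int).Exp(new(big.Int).SetUint64(b), new(big.Int).SetUint64(n), nil)
    return an.Quo(an, bn)
}

func main() {
    fmt.Println(ratioPow(10, 3, 20)) // (10/3)^20, truncated to an integer
}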
Here's a pow implementation in fixed point based on Feynman's log algorithm. It's quick and somewhat dirty; C libraries tend to use a polynomial approximation, but that approach is more complicated, and I'm not sure how well it would translate to fixed point.
// powFraction approximates (a/b)**n.
func powFraction(a uint64, b uint64, n uint64) uint64 {
    if a == 0 || b == 0 || a < b {
        panic("powFraction")
    }
    return expFixed((logFixed(a) - logFixed(b)) * n)
}

// logFixed approximates 2**58 * log2(x). [Feynman]
func logFixed(x uint64) uint64 {
    if x == 0 {
        panic("logFixed")
    }
    // Normalize x into [2**63, 2**64).
    n := numberOfLeadingZeros(x)
    x <<= n
    p := uint64(1 << 63)
    y := uint64(0)
    for k := uint(1); k <= 63; k++ {
        // Warning: if q > x-p, then p + q may overflow.
        if q := p >> k; q <= x-p {
            p += q
            y += table[k-1]
        }
    }
    return uint64(63-n)<<58 + y>>6
}

// expFixed approximately inverts logFixed.
func expFixed(y uint64) uint64 {
    n := 63 - uint(y>>58)
    y <<= 6
    p := uint64(1 << 63)
    for k := uint(1); k <= 63; k++ {
        if z := table[k-1]; z <= y {
            p += p >> k
            y -= z
        }
    }
    return p >> n
}

// numberOfLeadingZeros returns the number of leading zeros in the word x.
// [Hacker's Delight]
func numberOfLeadingZeros(x uint64) uint {
    n := uint(64)
    if y := x >> 32; y != 0 {
        x = y
        n = 32
    }
    if y := x >> 16; y != 0 {
        x = y
        n -= 16
    }
    if y := x >> 8; y != 0 {
        x = y
        n -= 8
    }
    if y := x >> 4; y != 0 {
        x = y
        n -= 4
    }
    if y := x >> 2; y != 0 {
        x = y
        n -= 2
    }
    if x>>1 != 0 {
        return n - 2
    }
    return n - uint(x)
}
// table[k-1] approximates 2**64 * log2(1 + 2**-k). [MPFR]
var table = [...]uint64{
10790653543520307104, // 1
5938525176524057593, // 2
3134563013331062591, // 3
1613404648504497789, // 4
818926958183105433, // 5
412613322424486499, // 6
207106307442936368, // 7
103754619509458805, // 8
51927872466823974, // 9
25976601570169168, // 10
12991470209511302, // 11
6496527847636937, // 12
3248462157916594, // 13
1624280643531991, // 14
812152713665686, // 15
406079454902306, // 16
203040501980337, // 17
101520444623942, // 18
50760270720599, // 19
25380147462480, // 20
12690076756788, // 21
6345039134781, // 22
3172519756487, // 23
1586259925518, // 24
793129974578, // 25
396564990243, // 26
198282495860, // 27
99141248115, // 28
49570624104, // 29
24785312063, // 30
12392656035, // 31
6196328018, // 32
3098164009, // 33
1549082005, // 34
774541002, // 35
387270501, // 36
193635251, // 37
96817625, // 38
48408813, // 39
24204406, // 40
12102203, // 41
6051102, // 42
3025551, // 43
1512775, // 44
756388, // 45
378194, // 46
189097, // 47
94548, // 48
47274, // 49
23637, // 50
11819, // 51
5909, // 52
2955, // 53
1477, // 54
739, // 55
369, // 56
185, // 57
92, // 58
46, // 59
23, // 60
12, // 61
6, // 62
3, // 63
}
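A minimal way to try it, assuming the functions and the table above live together in package main (plus import "fmt"); since 1.5^10 is about 57.67, the fixed-point approximation should land near 57:

func main() {
    fmt.Println(powFraction(3, 2, 10)) // approximates (3/2)^10 = 57.66...
}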
Just in case someone is looking for a constant-space solution, I've kind of solved the issue with binomial expansions, which are a decent approximation. I'm using the following code:
// Computes `k * (1+1/q) ^ N`, with precision `p`. The higher
// the precision, the higher the gas cost. It should be
// something around the log of `n`. When `p == n`, the
// precision is absolute (sans possible integer overflows).
// Much smaller values are sufficient to get a great approximation.
function fracExp(uint k, uint q, uint n, uint p) returns (uint) {
    uint s = 0;
    uint N = 1;
    uint B = 1;
    for (uint i = 0; i < p; ++i) {
        s += k * N / B / (q**i);
        N = N * (n-i);
        B = B * (i+1);
    }
    return s;
}
This simply computes the first p terms of the binomial expansion of (1 + r)^n, where r = 1/q is a small positive real number. I posted a more thoughtful explanation at Ethereum Stack Exchange.
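The snippet above is Solidity; a rough Go translation of the same binomial idea (my own names, and integer overflow is still not handled) would be:

// fracExp approximates k * (1 + 1/q)^n by summing the first p terms
// of the binomial expansion. Integer overflow is not handled.
func fracExp(k, q, n, p uint64) uint64 {
    var s uint64
    N := uint64(1)  // running product n*(n-1)*...*(n-i+1)
    B := uint64(1)  // running factorial i!
    qi := uint64(1) // q^i
    for i := uint64(0); i < p; i++ {
        s += k * N / B / qi
        N *= n - i
        B *= i + 1
        qi *= q
    }
    return s
}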
