How to encode 3 integers into 2 integers?

I have three integers (x1, x2, x3), all in [0,255]. I need to encode them into two integers (a and b) such that I can deterministically decode them back. The constraint is that the new integers need to be small.
So I can do a=256*x1+x2, but this makes a much larger than xi.
Any way to encode integers such that the resulting numbers stay small?
I am not defining what small is, as I want as small as possible.
A similar problem is to encode these 3 numbers into just 1. Again the new integer needs to be as small as possible. Any way to do this?

Welcome to information theory / the pigeonhole principle. If you wish to encode x different values, you need enough bits to distinguish between x different things. In your case there are 256 = 2**8 possibilities (using ** for exponentiation) for each of x1, x2, and x3, so in total there are 2**24 possibilities for the combination. Therefore you need room for 2**24 combinations, i.e. 24 bits.
Your first encoding can be achieved using 12-bit numbers in the range 0-4095, as follows (where % is the remainder operation and // is integer division, as in Python 3):
a = (x1%16) * 256 + x2
b = (x1//16) * 256 + x3
with a decoding of:
x1 = (a//256) + (b//256) * 16
x2 = a%256
x3 = b%256
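For concreteness, here is that scheme as a pair of Python functions (the function names are my own), with a quick round-trip check:

def encode2(x1, x2, x3):
    # split x1's 8 bits across a and b; both results fit in 12 bits (0-4095)
    a = (x1 % 16) * 256 + x2
    b = (x1 // 16) * 256 + x3
    return a, b

def decode2(a, b):
    return (a // 256) + (b // 256) * 16, a % 256, b % 256

assert all(decode2(*encode2(x1, x2, x3)) == (x1, x2, x3)
           for x1 in range(256) for x2 in (0, 255) for x3 in (0, 255))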
Encoding into 1 number again needs 2**24 possibilities, so that number needs to be in the range 0..16777215. And the encoding this time is:
c = x1 + 256*x2 + 65536*x3
with a decoding of
x1 = c%256
x2 = (c//256)%256
x3 = c//65536
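Again as a short Python sketch (names mine):

def encode1(x1, x2, x3):
    # pack three bytes into one 24-bit integer in 0..16777215
    return x1 + 256 * x2 + 65536 * x3

def decode1(c):
    return c % 256, (c // 256) % 256, c // 65536

assert decode1(encode1(12, 34, 56)) == (12, 34, 56)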
There are various other encodings/decodings that you can achieve. But they can't be achieved with smaller ranges of numbers than that.


Generate random non repeating pairs of numbers within 2 ranges

I want to create random pairs of numbers within 2 ranges.
So for example, if I want 3 random pairs of numbers where 10 < n1 < 20 and 30 < n2 < 50, then an acceptable output would be [[11,35],[15,31],[15,42]], but not [[11,35],[11,35],[12,39]] (the pair [11,35] repeats).
I would like an efficient (both computationally and memory wise) algorithm to do this. The language doesn't really matter because I can adapt it later (although Python would be preferred).
So far the best idea I have had is to create a dictionary keyed by all the possible n1 values, where each value is the list of n2 numbers already used with that n1. Then I can just pick a random n1 and find an n2 which isn't in its used set.
This isn't very space efficient though, and I'm hoping for something better. It also seems computationally inefficient to repeatedly search for a number not already in the used set.
I could also do the opposite and have the dictionary populated with all the numbers not used and just pop a random number off the list. But this would use much more space.
Is there any efficient way to do this? Is this a common problem?
Edit: It would be good if this could easily be expanded to more dimensions (so sets of N numbers). But this isn't really needed yet.
An integer pair (x, y) in [min_x, min_x + s) X [min_y, min_y + t) can be mapped to an integer m within the 1D space [min_x * t, (min_x + s) * t) by calculating m = x * t + y - min_y. The inverse mapping from m to (x, y) can be achieved by (m // t, min_y + m % t) in Python.
Therefore the problem is transformed to choosing multiple values from [min_x * t, (min_x + s) * t) without replacement (i.e. no duplicates in the returned sequence). This can be done by simply calling the random.sample function in Python. According to the doc, the underlying implementation is space efficient for sequence inputs. So the entire problem can be done in Python as shown in the following:
from random import sample
# max_x and max_y are exclusive while min_x and min_y are inclusive
t = max_y - min_y
sampled_pairs = [(m//t, min_y + m%t) for m in sample(range(min_x * t, max_x * t), k=3)]
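For example, with the ranges from the question (10 < n1 < 20 and 30 < n2 < 50, taking both bounds as exclusive, so min_x = 11, max_x = 20, min_y = 31, max_y = 50):

from random import sample

min_x, max_x = 11, 20   # n1 in 11..19 (max_x exclusive)
min_y, max_y = 31, 50   # n2 in 31..49 (max_y exclusive)

t = max_y - min_y
sampled_pairs = [(m // t, min_y + m % t)
                 for m in sample(range(min_x * t, max_x * t), k=3)]
print(sampled_pairs)    # e.g. [(14, 42), (11, 35), (18, 33)]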

How can I minimise the number of additions?

Multiply two numbers without using * operator, and with minimum number of additions
For example, if the input is 5*8, one way is to add the bigger number the smaller number of times, and that will be the answer. But how can I minimise the number of additions?
One strategy to reduce the number of additions is to add things hierarchically. This is the same strategy that is used in the classic power (exponentiation) algorithm, which uses it to minimize the number of multiplications.
Let's say you need
M = a * 8 = a + a + a + a + a + a + a + a
Once you calculate m2 = a + a, you can substitute it into the above addition and get
M = m2 + m2 + m2 + m2
Then you can calculate m4 = m2 + m2 and arrive at
M = m4 + m4
So, the result is calculated in 3 additions instead of the original 8. Moreover, adding a value to itself can be replaced by a left shift by 1 bit (if that is allowed), further reducing the number of additions.
This technique can be elegantly implemented through analyzing the binary representation of one of the multiplicands (exactly as it is typically implemented in the power algorithm). E.g. if you need to calculate a * b you can do it in this fashion
int M = 0;
for (int m = a; b != 0; b >>= 1, m <<= 1)
    if ((b & 1) != 0)   // this bit of b is set: add the current shifted copy of a
        M += m;         // one addition per 1 bit in b
The total number of additions this implementation uses is the number of 1 bits in b; it will multiply 5 by 8 in 1 addition.
Note that in order to achieve the lowest number of additions with this strategy, multiplying the larger number by the smaller one is not necessarily the best idea. E.g. multiplying by 8 uses fewer additions than multiplying by 5.
A better example would be 5 * 7. This is essentially old-fashioned binary long multiplication, but with a clever choice of the multiplier.
If we can use left shifts, and a shift doesn't count as an addition: choose the number with the fewer set bits as the multiplier. That will be 5 in this case (two set bits versus three in 7).
111
x 101
------
111
000x <== This is not an addition, only a left shift
111xx
-------
100011 <== 2 additions totally.
-------
If we cannot use left shifts: note that a left shift is the same as a doubling, i.e. one addition, so we have to use a slightly different tactic. Since the multiplicand will be shifted (position of MSB - 1) times, the number of additions comes to (position of MSB - 1) + (number of bits set), with no combining additions needed when only one bit is set. In the case of 5 * 8, the values are (3-1) + 2 = 4 and (4-1) = 3 respectively. The lesser is for 8, hence use that as the multiplier.
101
x 1000
-------
000
000x <== left shift
000xx <== left shift
101xxx <== left shift
--------
101000 <== no addition needed, so 3 additions totally.
--------
The above uses three doublings (each counted as an addition, since shifts are not allowed here) and no combining additions.
I like Codor's suggestion of using shifts and having zero additions!
But if you can truly only use additions and no other operations like shifts, logs, subtractions, etc, I believe the minimal number of additions to compute a * b will be:
min{int[log2(a+1)] + numbits(a), int[log2(b+1)] + numbits(b)} - 2
where
numbits(n) is the number of ones in the binary representation of
integer n
For example, numbits(4)=1, numbits(5)=2, etc.
int[x] is x rounded up to the nearest integer (the ceiling of x)
For example, int[2.6]=3; in particular, int[log2(n+1)] is simply the number of binary digits of n
Now, how did we get there? First look at your original example. You can at least group additions together. E.g.
8+8=16
16+16=32
32+8=40
To generalize this, if you need to multiply a by b using only additions of a or of results already computed, you need:
int[log2(b+1)]-1 additions to compute all the 2^n * a intermediate numbers you need.
In your example, int[log2(5+1)]-1 = 2: you need 2 additions to compute 16 and 32
numbits(b)-1 additions to add all intermediate results together, where numbits(b) is the number of ones in the binary representation of b.
In your example, 5 = 2^2 + 2^0 so numbits(5)-1 = 1: you need 1 addition to do 32 + 8
Interestingly, this means that your statement
add the bigger number smaller number of times
is not always the recipe to minimize the number of additions.
For example, if you need to compute 2^9 * (2^9 - 1), you are better off computing additions based on (2^9-1) than on 2^9 even though 2^9 is larger. The fastest approach is:
x = (2^9-1) + (2^9-1)
And then
x = x+x
8 times for a total of 9 additions.
If instead you added 2^9 to itself, you would need 8 additions to get all the 2^k*2^9 first and then an additional 8 additions to add all these numbers together for a total of 16 additions.
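Putting the formula into Python (a small sketch; the helper names are mine, and int[log2(n+1)] is just n.bit_length()):

def numbits(n):
    # number of 1 bits in the binary representation of n
    return bin(n).count("1")

def min_additions(a, b):
    return min(a.bit_length() + numbits(a), b.bit_length() + numbits(b)) - 2

print(min_additions(8, 5))            # 3, matching 8+8=16, 16+16=32, 32+8=40
print(min_additions(2**9, 2**9 - 1))  # 9, matching the example above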
Suppose a is to be multiplied by b and we store the result in res. In each step we add a to res if b is odd, then double a and halve b, looping until b becomes 0. The doubling and halving can be done with bitwise operators.
Let the two given numbers be 'a' and 'b'
1) Initialize result 'res' as 0.
2) Do following while 'b' is greater than 0
a) If 'b' is odd, add 'a' to 'res'
b) Double 'a' and halve 'b'
3) Return 'res'.
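Translated directly to Python (a minimal sketch for non-negative operands):

def multiply(a, b):
    res = 0
    while b > 0:
        if b & 1:      # b is odd: add a to the result
            res += a
        a <<= 1        # double a
        b >>= 1        # halve b
    return res

assert multiply(5, 8) == 40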

Splitting a floating point number into a sum of floating point numbers of fixed precision

Suppose I have an algorithm with which I can compute an arbitrarily precise floating point number (depending on a parameter n), say in pseudocode:
arbitrary_precision_float f = computeValue(n); // a function which computes a specific value, like PI for instance
I guess I can implement computeValue(int) with the mpf functions of the GNU MP library, for example...
Anyway, how can I split such a number into a sum of floating point numbers where each number has L mantissa digits?
//example
f = x1 + x2 + ... + xn;
/*
for i = 1:n
xi = 2^ei * Mi
Mi has exactly L digits.
*/
I don't know if I'm being clear, but I'm looking for something "simple".
You can use a very simple algorithm. Assume without loss of generality that the exponent of your original number is zero; if it's not, then you just add that exponent to all the exponents of the answer.
Split your number f into groups of L digits and treat each group as a separate xi. Any such group can be represented in the form you need: the mantissa will be exactly that group, and the exponent will be minus the start position of the group in the original number (that is, -i*L, where i is the zero-based group number).
If any of the resulting xi starts with a zero, you just shift its mantissa, correcting the exponent correspondingly.
For example, for L=4
f = 1001 0011 100
-> x1 = 1.001 * 2^0
   x2 = 0.011 * 2^{-4} = 1.1 * 2^{-6}
   x3 = 1.00  * 2^{-8}
Another question arises if you want to minimize the number of terms. In the example above, two numbers are sufficient: 1.001*2^0 + 1.11*2^{-6}. This is a separate question, and is in fact a simple dynamic programming problem.
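A minimal Python sketch of the grouping step, operating on the binary digit string from the example (the representation choices, such as returning exact fractions and dropping all-zero groups, are mine):

from fractions import Fraction

def split_bits(bits, L):
    # bits holds the digits of f, most significant first, with the first
    # digit worth 2^0 (so f = 1.0010011100 is passed as "10010011100");
    # the group starting at offset i is worth int(group, 2) * 2^-(i + len(group) - 1)
    parts = []
    for i in range(0, len(bits), L):
        group = bits[i:i + L]
        if int(group, 2):                 # drop all-zero groups
            parts.append(Fraction(int(group, 2), 2 ** (i + len(group) - 1)))
    return parts

f = Fraction(int("10010011100", 2), 2 ** 10)      # 1.0010011100 in binary
assert sum(split_bits("10010011100", 4)) == f     # x1 + x2 + x3 == f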

Keep uniform distribution after remapping to a new range

Since this is about remapping one uniform distribution to another with a different range, this is not a PHP question specifically, although I am using PHP.
I have a cryptographically secure random number generator that gives me evenly distributed integers (uniform discrete distribution) between 0 and PHP_INT_MAX.
How do I remap these results to fit into a different range in an efficient manner?
Currently I am using $mappedRandomNumber = $randomNumber % ($range + 1) + $min where $range = $max - $min, but that obviously doesn't work, since the first PHP_INT_MAX % $range integers from the range have a higher chance to be picked, breaking the uniformity of the distribution.
Well, having zero knowledge of PHP definitely qualifies me as an expert, so
mentally converting to float U[0,1)
f = r / PHP_INT_MAX
then doing
mapped = min + f*(max - min)
going back to integers
mapped = min + (r * max - r * min)/PHP_INT_MAX
If the computation is done via 64-bit math, with PHP_INT_MAX being 2^31, it should work.
This is what I ended up doing. PRNG 101 (if it does not fit, ignore and generate again). Not very sophisticated, but simple:
public function rand($min = 0, $max = null){
    // pow(2,$numBits-1) calculated as (pow(2,$numBits-2)-1) + pow(2,$numBits-2)
    // to avoid overflow when $numBits is the number of bits of PHP_INT_MAX
    $maxSafe = (int) floor(
        ((pow(2,8*$this->intByteCount-2)-1) + pow(2,8*$this->intByteCount-2))
        /
        ($max - $min)
    ) * ($max - $min);
    // discards anything above the last interval N * {0 .. max - min -1}
    // that fits in {0 .. 2^(intBitCount-1)-1}
    do {
        $chars = $this->getRandomBytesString($this->intByteCount);
        $n = 0;
        for ($i = 0; $i < $this->intByteCount; $i++) {
            $n |= (ord($chars[$i]) << (8*($this->intByteCount-$i-1)));
        }
    } while (abs($n) > $maxSafe);
    return (abs($n) % ($max - $min + 1)) + $min;
}
Any improvements are welcomed.
(Full code on https://github.com/elcodedocle/cryptosecureprng/blob/master/CryptoSecurePRNG.php)
Here is a sketch of how I would do it:
Consider that you have a uniform random integer distribution in the range [A, B); that's what your random number generator provides.
Let L = B - A.
Let P be the highest power of 2 such that P <= L.
Let X be a sample from this range.
First calculate Y = X - A.
If Y >= P, discard it and start with new X until you get an Y that fits.
Now Y is uniformly distributed on [0, P), so it yields log2(P) uniformly random bits (zero-extend it up to log2(P) bits if it comes out shorter).
Now we have uniform random bit generator that can be used to provide arbitrary number of random bits as needed.
To generate a number in the target range, let [A_t, B_t) be the target range. Let L_t = B_t - A_t.
Let P_t be the smallest power of 2 such that P_t >= L_t.
Read log2(P_t) random bits and make an integer from it, let's call it X_t.
If X_t >= L_t, discard it and try again until you get a number that fits.
Your random number in the desired range will be X_t + A_t.
Implementation considerations: if your L_t and L are powers of 2, you never have to discard anything. If not, then even in the worst case you should get the right number in less than 2 trials on average.
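A Python sketch of the target-range stage, using secrets.randbits as the uniform bit source (the function name is mine):

import secrets

def uniform_int(a_t, b_t):
    # sample uniformly from [a_t, b_t) by rejection, as described above
    l_t = b_t - a_t
    nbits = (l_t - 1).bit_length()       # log2 of the smallest power of 2 >= l_t
    while True:
        x_t = secrets.randbits(nbits)    # uniform on [0, 2**nbits)
        if x_t < l_t:                    # discard and retry if it doesn't fit
            return a_t + x_t

Since 2**nbits < 2 * l_t, each draw is accepted with probability greater than 1/2, giving the fewer-than-2-trials average mentioned above.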

How many digits will there be after converting from one numeral system to another?

The main question: How many digits?
Let me explain. I have a number in the binary system: 11000000, which is 192 in decimal.
After converting to decimal, how many digits will it have? In my example it's 3 digits, but finding that for one example isn't the problem. I've searched the internet and found one algorithm for the integral part and one for the fractional part. I don't quite understand them, but (I think) they work.
When converting from binary to octal, it's easier: every 3 bits give you 1 octal digit. Same for hex: every 4 bits = 1 hex digit.
But I'm very curious: what do I do if I have a number in numeral system P and want to convert it to numeral system Q? I know how to do it (I think I know :)), but, first of all, I want to know how many digits it will take in system Q (you know, I must preallocate space).
Writing n in base b takes floor(log base b (n)) + 1 digits, i.e. ceiling(log base b (n+1)).
The ratio you noticed (binary digits to octal digits) is log base 2 (n) / log base 8 (n) = 3.
(From memory, will it stick?)
There was an error in my previous answer: look at the comment by Ben Schwehn.
Sorry for the confusion; I explain the error I made in my previous answer below.
Please use the answer provided by Paul Tomblin (rewritten here to use P, Q and n):
Y = ln(P^n) / ln(Q)
Y = n * ln(P) / ln(Q)
So Y (rounded up) is the number of characters you need in system Q to express the highest number you can encode in n characters in system P.
I have no answer (that wouldn't already convert the number and take up that much space in a temporary variable) for getting the bare minimum for a given number: 1000(bin) = 8(dec) needs only 1 decimal digit, yet you would reserve 2 decimal positions using this formula.
If a temporary memory usage isn't a problem, you might cheat and use (Python):
len(str(int(otherBaseStr,P)))
This will give you the number of decimal digits needed to represent a number in base P, passed as a string (otherBaseStr).
Old WRONG answer:
If you have a number in P numeral system of length n
Then you can calculate the highest number that is possible in n characters:
P^(n-1)
To express this highest number in number system Q you need to use logarithms (because they are the inverse to exponentiation):
log(P^(n-1)) / log(Q)
(n-1)*log(P) / log(Q)
For example
11000000 in binary is 8 characters.
To get it in Decimal you would need:
(8-1)*log(2) / log(10) = 2.1 digits (round up to 3)
Reason it was wrong:
The highest number that is possible in n characters is
(P^n) - 1
not
P^(n-1)
If you have a number that's X digits long in base B, then the maximum value that can be represented is B^X - 1. So if you want to know how many digits it might take in base C, then you have to find the number Y that C^Y - 1 is at least as big as B^X - 1. The way to do that is to take the logarithm in base C of B^X-1. And since the logarithm (log) of a number in base C is the same as the natural log (ln) of that number divided by the natural log of C, that becomes:
Y = ln((B^X)-1) / ln(C) + 1
and since ln(B^X) is X * ln(B), and that's probably faster to calculate than ln(B^X-1) and close enough to the right answer, rewrite that as
Y = X * ln(B) / ln(C) + 1
Convert that to your favourite language. Because we dropped the "-1", we might end up with one digit more than you need in some cases. But even better, you can pre-calculate ln(B)/ln(C) and just multiply it by new "X"s as the length of the number you are trying to convert changes.
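In Python, the bound is a one-line transcription (a sketch; the function name is mine):

from math import log

def digits_needed(X, B, C):
    # may overshoot by one digit because the "-1" was dropped, as noted above
    return int(X * log(B) / log(C)) + 1

print(digits_needed(8, 2, 10))   # 3: an 8-bit number needs at most 3 decimal digits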
Calculating the number of digits can be done using the formulas given in the other answers; however, it might actually be faster to allocate a buffer of maximum size first and then return the relevant part of that buffer instead of calculating a logarithm.
Note that the worst case for the buffer size happens when you convert to binary, which gives you a buffer size of 32 characters for 32-bit integers.
Converting a number to an arbitrary base could be done using the C# function below (The code would look very similar in other languages like C or Java):
public static string IntToString(int value, char[] baseChars)
{
    // 32 is the worst case buffer size for base 2 and int.MaxValue
    int i = 32;
    char[] buffer = new char[i];
    int targetBase = baseChars.Length;
    do
    {
        buffer[--i] = baseChars[value % targetBase];
        value = value / targetBase;
    }
    while (value > 0);
    char[] result = new char[32 - i];
    Array.Copy(buffer, i, result, 0, 32 - i);
    return new string(result);
}
The keyword here is "logarithm"; here are some suggestive links:
http://www.adug.org.au/MathsCorner/MathsCornerLogs2.htm
http://staff.spd.dcu.ie/johnbcos/download/Fermat%20material/Fermat_Record_Number/HOW_MANY.html
Look at the logarithms base P and base Q: take the logarithm of the number in the target base, round down to the nearest integer, and add 1 to get the digit count.
The logarithm base P can be computed using your favorite base (10 or e): log_P(x) = log_10(x)/log_10(P)
You need to compute the length of the fractional part separately.
For binary to decimal, there are as many decimal digits as there are bits. For example, binary 0.11001101001001 is decimal 0.80133056640625, both 14 digits after the radix point.
For decimal to binary, there are two cases. If the decimal fraction is dyadic, then there are as many bits as decimal digits (same as for binary to decimal above). If the fraction is not dyadic, then the number of bits is infinite.
(You can use my decimal/binary converter to experiment with this.)
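A quick Python check of the binary-to-decimal example above (14 bits after the radix point, so divide the bits' integer value by 2**14):

f = int("11001101001001", 2) / 2 ** 14
print(f)   # 0.80133056640625, with 14 digits after the decimal point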
