Represent 10000 booleans using only 10000 bits - algorithm

I want to represent 10000 bits of information (each can be either one or zero). Is there any way I can do this?
Wikipedia explains a bit hack to achieve this, but it requires a number as large as 2^10000 to store 10000 bits.
Is there some way that's tractable even for storing a large number of bits?

As Wikipedia explains, a bit field is an appropriate choice here. A bit field that can hold 10,000 bits has 2^10000 states.
A good choice for doing this (given that integers are 32/64 bits) is a bit vector, which is asked about and explained in excruciating detail here:
bit vector implementation of set in Programming Pearls, 2nd Edition
The general idea is to use an array of integers, each of which serves as a bit field.

You can make a bool take 1 bit if you have a bunch of them, e.g. in a struct, like this:
struct A
{
    bool a:1, b:1, c:1, d:1, e:1;
};
The above method won't be useful if the number of variables is large. Instead, create an array of integers of size 10000 / (4 * 8), rounded up (313 for 4-byte integers), which gives you at least 10000 bits. You can then access bit n through the array offset floor(n / 32) and the shift n % 32: for example, to reach the 55th bit, use element floor(55 / 32) and shift by 55 % 32.
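For illustration, a minimal sketch of such a bit array (assuming 32-bit unsigned ints; the helper names are mine):

#include <cstdint>

const int kBits = 10000;
uint32_t bits[(kBits + 31) / 32];   // 313 words, rounded up so all 10000 bits fit

void set_bit(int n)   { bits[n / 32] |=  (1u << (n % 32)); }
void clear_bit(int n) { bits[n / 32] &= ~(1u << (n % 32)); }
bool get_bit(int n)   { return (bits[n / 32] >> (n % 32)) & 1u; }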

In C++ you can do this very simply, using one of two standard library containers:
std::vector<bool>
This specialization of a standard vector acts (almost) like any other vector, but compresses its contents to one bit per element. Aside from enjoying that fact, you can just treat it like a vector:
// Create a vector of 10000 booleans
std::vector<bool> lots_of_bits(10000);
// Set all the odd ones to true
for (std::size_t i = 1; i < lots_of_bits.size(); i += 2) {
    lots_of_bits[i] = true;
}
// Add another 100 trues at the end
for (int j = 0; j < 100; ++j) {
    lots_of_bits.push_back(true);
}
// etc.
std::bitset<N>
The "new, improved" bit vector which does not pretend to be a standard container. In particular, it's of fixed size and you need to know the size at compile time. That can be a bit restrictive, but it's otherwise a pretty useful class. Like std::vector<bool>, it implements the [] operator for getting and setting individual bits. It also supports the bitwise logical operators &, |, '^' and ~ (and, or, xor and not), as well as left and right bitshifts, and some other utilities.

Is your concern that accessing bit number n requires shifting n times? If so, you can make the problem tractable by dividing your 10,000 bits into 10,000 / 8 buckets using an array of characters (assuming C or C++ here). Now you can access bit number n by figuring out which bucket that bit is in (n / 8) and then the position within the bucket (n % 8). Then you just do the masking. No extra storage is required (except the padding at the end: a few extra bits if 10,000 isn't a perfect multiple of 8).
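In code, the bucket arithmetic is a one-liner (a minimal sketch):

#include <cstdint>

// n / 8 picks the bucket, n % 8 the position inside it.
bool test_bit(const uint8_t *buckets, int n) {
    return (buckets[n / 8] >> (n % 8)) & 1u;
}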

Related

Uniform random bit from a mask

Suppose I have a 64-bit unsigned integer (u64) mask with one or more bits set.
I want to select one of the set bits uniformly at random from mask to give a new mask x such that x & mask has exactly one bit set. Some pseudocode that does this might be:
def uniform_random_bit_from_mask(mask):
    assert mask > 0
    set_indices = get_set_indices(mask)
    random_index = uniform_random_choice(set_indices)
    new_mask = set_bit(random_index, 0)
    return new_mask
However I need to do this as fast as possible (code similar to the above in a low-level language is slowing a hot loop). Does anyone have a more efficient scheme?
The details of how to optimize this depend on several factors you did not specify: the target architecture, the expected number of set bits in the mask, the language you want to use, the requirements on the randomness, and many more. Without knowing further details, it's hard to give a useful answer, but I'll give a few hints that may prove useful anyway.
Most modern architectures have an instruction to count the number of set bits in an integer, generally called "popcount", and this instruction is exposed in most low-level languages. In Rust, you can use the count_ones() method. This gives you the total number k of bits to select from.
You can then generate a random number i between 0 and k - 1 (inclusive). The next step is to select the ith set bit in mask. An efficient approach to do so is this loop (Rust code):
for _ in 0..i {
    mask &= mask - 1;
}
let new_mask = 1u64 << mask.trailing_zeros();
The loop clears the least significant set bit in each iteration. Since i < k, we know that mask can't be zero after the loop. The last line generates a new mask from the least significant bit of mask that is still set.
On common architectures, it is likely that the bottleneck will be the random number generator. If you are using Rust's rand crate, you can use SmallRng for improved performance, at the cost of being cryptographically insecure, which may not be relevant for your use case.
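Putting the pieces together, a C++ sketch of the whole scheme might look like this (assuming GCC/Clang's popcount/count-trailing-zeros builtins and <random>; the function name mirrors the question's pseudocode):

#include <cstdint>
#include <random>

// Return a mask with exactly one of mask's set bits, chosen uniformly (mask must be nonzero).
uint64_t uniform_random_bit_from_mask(uint64_t mask, std::mt19937_64 &rng) {
    int k = __builtin_popcountll(mask);                 // number of set bits
    int i = std::uniform_int_distribution<int>(0, k - 1)(rng);
    for (int j = 0; j < i; ++j)
        mask &= mask - 1;                               // clear the lowest set bit
    return uint64_t{1} << __builtin_ctzll(mask);        // isolate the new lowest set bit
}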

A good hashing function for a non-uniform sequence of uniformly distributed 4-bit values?

I have a very specific problem:
I have uniformly random values spread on a 15x50 grid and the sample I want to hash corresponds to a square of 5x5 cells centered around any possible grid position.
The number of samples can thus vary from 25 (away from borders, the most common case) to 20 or 15 (near a border), down to a minimum of 9 (in a corner).
So even though the cell values are random, the location introduces a deterministic variation in the sequence length.
The hash table size is a small number, typically between 20 and 50.
The function will operate on a large set of randomly generated grids (a few hundred or thousand), and might be called a few thousand times per grid. The positions on the grid can be considered random.
I would like a function that could spread the 15x50 possible samples as evenly as possible.
I have tried the following pseudo-code:
int32 hash = 0;
int i = 0; // i could take any initial value, and could even be left uninitialized, but fixing one makes the function deterministic
foreach (value in block)
{
    hash ^= (value << (i % 28));
    i++;
}
hash %= table_size;
but the results, though not grossly imbalanced, do not seem very smooth to me. Maybe it's because the sample is too small, but the circumstances make it difficult to run the code on a bigger sample, and I would rather not have to write a complete test harness if some computer-savvy person has an answer ready for me :).
I am not sure pairing the values two by two and using a general purpose byte hashing strategy would be the best solution, especially since the number of values might be odd.
I have thought of using a 17th value to represent off-grid cells, but that seems to introduce a bias (the sequences from cells near a border would have a lot of "off-grid" values).
I am not sure either what would be the best way to test the efficiency of various solutions (how many grids should I generate to get an idea of the performance, for instance).
http://www.partow.net/programming/hashfunctions/
Here are a few different hash functions from experts in various fields. The functions are designed for 8-bit values, but I am sure you can extend them to your case. I don't know which one to suggest, but I think any of them should work better than your current idea.
The problem with the approach you propose is that the values are cyclic in the field 2^n: if you take mod 64 at the end, for example, you throw most of the values away and only the last 3 remain in the final result.
Despite your scepticism I would just shove them through a standard hash function.
If they are well randomised (and relatively independent - you don't say) to begin with you probably don't need to do too much work. Fowler-Noll-Vo (FNV) is a good candidate in these circumstances.
FNV operates on a series of 8-bit inputs, and your input is (logically) 4-bit.
I would start without even bothering to pack 'two by two' as you describe.
If you feel like trying that, just logically pad odd-length series with the message length (reduced to a 4-bit value, obviously).
I wouldn't expect that packing to improve the hash. It may save you a tiny number of cycles because it swaps a relatively expensive * with a << and a |.
Try both and report back!
Here are implementations of packed and 'normal' versions of FNV1a in C:
#include <inttypes.h>
#include <stddef.h>

static const uint32_t sFNVOffsetBasis = 2166136261u;
static const uint32_t sFNVPrime = 16777619u;

uint32_t FNV1aPacked4Bit(const uint8_t *const pBytes, const size_t pSize) {
    uint32_t rHash = sFNVOffsetBasis;
    for (size_t i = 0; i + 1 < pSize; i += 2) {  /* i + 1 < pSize: don't read past the end on odd lengths */
        rHash = rHash ^ (pBytes[i] | (pBytes[i + 1] << 4));
        rHash = rHash * sFNVPrime;
    }
    if (pSize % 2) { /* Length is odd: the loop missed the last element. */
        rHash = rHash ^ (pBytes[pSize - 1] | ((pSize & 0x1E) << 3));
        rHash = rHash * sFNVPrime;
    }
    return rHash;
}

uint32_t FNV1a(const uint8_t *const pBytes, const size_t pSize) {
    uint32_t rHash = sFNVOffsetBasis;
    for (size_t i = 0; i < pSize; ++i) {
        rHash = (rHash ^ pBytes[i]) * sFNVPrime;
    }
    return rHash;
}
NB: I've edited it to skip the first bit when adding in the length. Obviously the bottom bit of an odd length is 100% biased to 1. I don't know how length is distributed. It may be wiser to put it in at the start than the end.
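A hypothetical usage, appended to the file above (the cell values and table size are made up):

#include <stdio.h>

int main(void) {
    uint8_t cells[9] = {3, 7, 0, 15, 2, 9, 4, 11, 6}; /* e.g. a 3x3 corner block */
    size_t table_size = 31;
    uint32_t slot = FNV1a(cells, sizeof cells) % table_size;
    printf("slot = %u\n", (unsigned)slot);
    return 0;
}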

Sum reduction of binary sequence

Consider a binary sequence:
11000111
I have to find the sum of this sequence (in parallel, actually):
Sum = 1+1+0+0+0+1+1+1 = 5
This seems a waste of resources: why invest time in adding 0s?
Is there any clever way to sum this sequence so I can avoid unnecessary additions?
Operate at the byte level rather than the bit level. Use a small LUT to convert a byte to a population count. That way you're only doing one lookup and one add per 8 bits. Unless your data is likely to be very sparse this should be quite efficient.
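A minimal sketch of the LUT approach (the names are illustrative):

#include <stddef.h>
#include <stdint.h>

/* 256-entry table: popcount_table[b] = number of set bits in byte b. */
static uint8_t popcount_table[256];

void init_table(void) {
    for (int b = 1; b < 256; ++b)
        popcount_table[b] = (uint8_t)((b & 1) + popcount_table[b / 2]);
}

int count_bits(const uint8_t *data, size_t len) {
    int sum = 0;
    for (size_t i = 0; i < len; ++i)
        sum += popcount_table[data[i]];   /* one lookup + one add per 8 bits */
    return sum;
}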
Well it depends on how you store your bitset.
If it's a plain array, then you can't do much better than a simple for loop. If you want to do this in parallel, just split the array into chunks and process them concurrently.
If we are talking about a bitset (storing the bits in a native (32/64-bit) integer type), then the simplest way to count bits would be this one:
int bitset = /* your bits */;
int s = 0;
for (; bitset; s++)
    bitset &= bitset - 1;
This clears the lowest set bit at every step, so the loop runs in O(s).
Of course, you can combine these two methods if you need more than 32/64 bits
I don't know why people are answering without even looking at the link in the first comment on the question. You can easily get under O(size_of_bitset), at least as far as the constant factor goes.
You could use this method (found in link by J.F. Sebastian):
#define N 100000  /* number of ints to scan */

static inline int count_bits(int num) {
    int sum = 0;
    for (; num; sum++)
        num &= num - 1;   /* clear the lowest set bit */
    return sum;
}

int main(void) {
    static int array[N];  /* ... fill with data ... */
    int total_sum = 0;
    #pragma omp parallel for reduction(+:total_sum)
    for (int i = 0; i < N; i++) {
        total_sum += count_bits(array[i]);
    }
    return 0;
}
This will count the number of set bits in the array's memory range, in parallel. The inline is important to avoid unnecessary call overhead in the hot loop, and the compiler can optimize it much better.
You can swap count_bits for anything better at counting bits in an integer if you find something faster. Per integer, this version has complexity O(bits_set) (not the size of the bit set!).
Invoking the parallel construct introduces quite a lot of overhead compared to a single summation, so the array needs to be quite large to compensate.
The parallelism is done via OpenMP. The partial sum of each thread is combined at the end of the parallel loop and stored in total_sum; note that total_sum is private to each thread inside the loop because of the reduction clause.
You could alter the code to count set bits in an arbitrary memory region, but it is quite important for the region to be memory-aligned when you operate at such a low level.
As far as I can see, it would be wasteful to try to handle the zeros specially. As #bdares said, addition is really cheap. At a minimum, you'll need to execute N instructions to sum an N-bit sequence; that's if you unconditionally sum every bit. If you add a test to see whether the bit is a 0 or 1, that's another instruction that must be executed for each bit. Even with no branch penalty, you're executing at minimum 1 instruction for every bit (the conditional test), and then you're also executing the original instruction (the add) for any bits equal to 1. So even without a branch penalty, this takes more time to execute.
#bdares mentions that the compiler will optimize out the branches, but that's only if the value of each bit is known at compile time, and if you know the values of the bits at compile time, you should just add them up yourself in advance.
There might be some cute things you can do with bit twiddling. For instance, if you take the bits two at a time you're adding up values of 0, 1, 2, or 3, and only have half as many additions to do. There may be something you can then do with the result to convert it into the value you want, but I haven't actually thought about how to do that.
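For the record, that "two bits at a time" idea is the first step of the classic SWAR popcount; a standard sketch for one 32-bit word:

#include <cstdint>

// Classic SWAR popcount: sum bits in pairs, then nibbles, then bytes.
int popcount32(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555u);                  // 2-bit sums: 0..2 per pair
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);  // 4-bit sums: 0..4 per nibble
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;                  // 8-bit sums: 0..8 per byte
    return (x * 0x01010101u) >> 24;                    // add the four byte sums
}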

Fastest/easiest way to average ARGB color ints?

I have five colors stored in the format #AARRGGBB as unsigned ints, and I need to take the average of all five. Obviously I can't simply divide each int by five and just add them, and the only way I thought of so far is to bitmask them, do each channel separately, and then OR them together again. Is there a clever or concise way of averaging all five of them?
Half way between your (OP) proposed solution and Patrick's solution looks quite neat:
Color colors[5] = { 0xAARRGGBB, ... };  // Color being your unsigned 32-bit int type
unsigned long sum1 = 0, sum2 = 0;
for (int i = 0; i < 5; i++)
{
    sum1 +=  colors[i]       & 0x00FF00FF; // 0x00RR00BB
    sum2 += (colors[i] >> 8) & 0x00FF00FF; // 0x00AA00GG
}
unsigned long output = 0;
output |=  ((sum1 & 0xFFFF) / 5) & 0xFF;
output |= (((sum2 & 0xFFFF) / 5) & 0xFF) << 8;
sum1 >>= 16; sum2 >>= 16;               // and now the top halves
output |= (((sum1 & 0xFFFF) / 5) & 0xFF) << 16;
output |= (((sum2 & 0xFFFF) / 5) & 0xFF) << 24;
I don't think you can divide sum1/sum2 by 5 as a whole, because the remainder from the top half would spill down into the bottom half...
If an approximation is acceptable, you could try multiplying by something like 0.1875 (= 0.125 + 0.0625, i.e. 3/16: multiply by 3 and shift down 4 places, which you can do with bitmasking and care).
The problem is that 0.2 has a crappy binary representation, so multiplying by it is a pain.
As ever, accuracy or speed. Your choice.
When using x86 machines with at least SSE, and if you only need an approximation, you could use the assembly instruction PAVGB (Packed Average Byte), which averages bytes. See http://www.tommesani.com/SSEPrimer.html for an explanation.
Since you've got 5 values, you would need to be creative in calling PAVGB, since PAVGB will only do two values at a time.
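For what it's worth, a minimal sketch of averaging two packed colors with the corresponding SSE2 intrinsic (the function name is mine):

#include <emmintrin.h>  // SSE2
#include <cstdint>

// Rounded per-byte average of two packed 0xAARRGGBB colors via PAVGB.
uint32_t avg2_sse(uint32_t a, uint32_t b) {
    __m128i va = _mm_cvtsi32_si128((int)a);
    __m128i vb = _mm_cvtsi32_si128((int)b);
    __m128i vavg = _mm_avg_epu8(va, vb);    // (x + y + 1) >> 1 for each byte
    return (uint32_t)_mm_cvtsi128_si32(vavg);
}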
I found a smart solution to your problem; sadly it is only applicable if the number of colors is a power of 2. I'll show it for the case of two colors:
mask = 0x01010101            # the LSB of each 8-bit channel
pom = ~((a ^ b) & mask)      # ^ means xor here, ~ negation
a = a & pom
b = b & pom
avg = (a + b) >> 1
The trick of this method: when you compute an average, the LSB of the sum (in the case of two numbers) has no meaning, as it is dropped in the division (we're talking integers here, of course). In your problem, the LSB slot of each partial channel sum is at the same moment the carry bit of the adjacent channel's sum. Provided the LSB of every channel sum is 0, you can safely add the two integers: the additions won't interfere with each other. The bit shift then divides every channel by two.
This method can be used with 4 colors as well, but then you have to work out the carry flags of the sums of the last two bits of every channel. It is also possible to omit this part and just zero the last two bits of every channel; the biggest error introduced by this omission is 1 per component.
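Here is a C++ rendering of the two-color case (a sketch; I widen to 64 bits so the top channel's carry is not lost, something the pseudocode above doesn't have to worry about):

#include <cstdint>

// Per-channel floor-average of two packed 0xAARRGGBB colors.
uint32_t avg2(uint32_t a, uint32_t b) {
    const uint32_t mask = 0x01010101u;          // LSB of each 8-bit channel
    uint32_t keep = ~((a ^ b) & mask);          // drop LSBs where a and b differ
    uint64_t sum = (uint64_t)(a & keep) + (b & keep);
    return (uint32_t)(sum >> 1);                // carries land in the zeroed LSB slots
}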
EDIT I'll leave this attempt for posterity, but please note that it is incorrect and will not work.
One "clever" way you could do it would be to insert zeros between the components, parse into an unsigned long, average the numbers, convert back to a hex string, remove the zeros and finally parse into an unsigned int.
i.e. convert #AARRGGBB to #AA00RR00GG00BB
This method involves parsing and string manipulations, so will undoubtedly be slower than the method you proposed.
If you were to factor your own solution carefully, it might actually look quite clever itself.

What's better: multiplication by 2 or adding the number to itself? BIGnums

I need some help deciding what is better performance-wise.
I'm working with bigints (more than 5 million digits), and most of the computation (if not all) is in doubling the current bigint. So I wanted to know: is it better to multiply every cell (part of the bigint) by 2, take the mod, and carry on as usual, or is it better to just add the bigint to itself?
I'm thinking a bit about ease of implementation too (adding two bigints is more complicated than multiplying by 2), but I'm more concerned about performance than about code size or ease of implementation.
Other info:
I'll code it in C++, and I'm fairly familiar with bigints (I just never came across this problem).
I'm not in need of any source code or similar; I just need a good opinion and an explanation/proof, since I need to make a good decision from the start. The project will be fairly large and mostly built around this part, so it depends heavily on what I choose now.
Thanks.
Try bitshifting. That is probably the fastest method: shifting an integer left by one doubles it (multiplies by 2). If you have several long integers in a chain, you need to save the most significant bit of each, because after shifting it is gone, and it becomes the least significant bit of the next long integer.
This doesn't actually matter a whole lot, though. Modern 64-bit computers can add two integers in the same time it takes to bitshift them (one clock cycle), so it will take just as long. I suggest you try the different methods and report back if there are any major time differences. All three methods should be easy to implement, and generating a 5-million-digit number should also be easy using a random number generator.
To store a 5 million digit integer, you'll need quite a few bits -- 5 million if you were referring to binary digits, or ~17 million bits if those were decimal digits. Let's assume the numbers are stored in a binary representation, and your arithmetic happens in chunks of some size, e.g. 32 bits or 64 bits.
If adding the number to itself, each chunk is added to itself plus the carry from the addition of the previous chunk, and any carry out is kept for the next chunk. That's a couple of addition operations and some bookkeeping for tracking the carry.
If multiplying by two by left-shifting, that's one left-shift operation for the multiplication, plus one right-shift and an AND with 1 to obtain the carry. The carry bookkeeping is a little simpler.
Superficially, the shift version appears slightly faster. The overall cost of doubling the number, however, is highly influenced by the size of the number. A 17-million-bit number exceeds the CPU's L1 cache, and processing time is likely dominated by memory fetches. On modern PC hardware, a memory fetch is orders of magnitude slower than an addition or a shift.
With that, you might want to pick the one that's simpler for you to implement. I'm leaning towards the left-shift version.
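For concreteness, a minimal sketch of the shift version over 32-bit limbs (little-endian limb order; the names are illustrative, not from any particular bignum library):

#include <cstdint>
#include <vector>

// Double a bignum stored as 32-bit limbs, least significant limb first.
void double_in_place(std::vector<uint32_t> &limbs) {
    uint32_t carry = 0;
    for (std::size_t i = 0; i < limbs.size(); ++i) {
        uint32_t top = limbs[i] >> 31;        // bit that will be shifted out
        limbs[i] = (limbs[i] << 1) | carry;   // shift in the previous limb's top bit
        carry = top;
    }
    if (carry)
        limbs.push_back(1);                   // the number grew by one bit
}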
Did you try shifting the bits?
<< multiplies by 2
>> divides by 2
Left bit-shifting by one is the same as multiplying by two!
This link explains the mechanism and gives examples.
int A = 10; //...01010 = 10
int B = A<<1; //..010100 = 20
If it really matters, you need to write all three methods (including bit-shift!) and profile them on various inputs. (Use small numbers, large numbers, and random numbers, to avoid biasing the results.)
Sorry for the "Do it yourself" answer, but that's really the best way. No one cares about this result more than you, which just makes you the best person to figure it out.
Well-implemented multiplication of BigNums is O(N log(N) log(log(N))). Addition is O(N). Therefore, adding the number to itself should be faster than multiplying by two. However, that's only true if you're multiplying two arbitrary bignums; if your library knows you're multiplying a bignum by a small integer, it may be able to optimize down to O(N).
As others have noted, bit-shifting is also an option. It should be O(n) as well but faster constant time. But that will only work if your bignum library supports bit shifting.
most of the computation (if not all) is in the part of doubling the current bigint
If all your computation is in doubling the number, why don't you just keep a distinct (base-2) scale field? Then just add one to scale, which can just be a plain-old int. This will surely be faster than any manipulation of some-odd million bits.
IOW, use a bigfloat.
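A minimal sketch of that idea (BigInt here is a placeholder for whatever bignum type you use):

// Sketch: the value represented is mantissa * 2^scale.
template <typename BigInt>
struct ScaledBig {
    BigInt mantissa;
    long scale = 0;
    void double_value() { ++scale; }  // doubling is O(1): no limbs are touched
};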
random benchmark
use Math::GMP;
use Time::HiRes qw(clock_gettime CLOCK_PROCESS_CPUTIME_ID);

my $n = Math::GMP->new(2);
$n = $n ** 1_000_000;
my $m = Math::GMP->new(2);
$m = $m ** 10_000;
my $str;
for (my $bits = 1_000_000; $bits <= 2_000_000; $bits += 10_000) {
    my $start = clock_gettime(CLOCK_PROCESS_CPUTIME_ID);
    $str = "$n" for (1..3);   # time three stringifications
    my $stop = clock_gettime(CLOCK_PROCESS_CPUTIME_ID);
    print "$bits,@{[ ($stop - $start) / 3 ]}\n";
    $n = $n * $m;
}
This seems to show that somehow GMP is doing its conversion in O(n) time (where n is the number of bits in the binary number). That may be due to the special case of having a 1 followed by a million (or two) zeros; the GNU MP docs say it should be slower (but still better than O(N^2)).
http://img197.imageshack.us/img197/6527/chartp.png
