Sum reduction of binary sequence - algorithm

Consider a binary sequence:
11000111
I have to find the sum of this sequence (and actually do it in parallel)
Sum = 1+1+0+0+0+1+1+1 = 5
This seems like a waste of resources: why invest time in adding 0s?
Is there any clever way to sum this sequence so I can avoid the unnecessary additions?

Operate at the byte level rather than the bit level. Use a small LUT (lookup table) to convert each byte to its population count; that way you're only doing one lookup and one add per 8 bits. Unless your data is likely to be very sparse, this should be quite efficient.
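For illustration, a minimal C++ sketch of that idea (the table construction and the names are mine):

#include <array>
#include <cstddef>
#include <cstdint>

// 256-entry LUT: popcount_table[b] = number of set bits in the byte b.
static const std::array<uint8_t, 256> popcount_table = [] {
    std::array<uint8_t, 256> t{};
    for (int b = 1; b < 256; ++b)
        t[b] = t[b >> 1] + (b & 1);   // reuse the already-computed half
    return t;
}();

// One lookup and one add per 8 bits of input.
int sum_bits(const uint8_t* data, std::size_t len) {
    int sum = 0;
    for (std::size_t i = 0; i < len; ++i)
        sum += popcount_table[data[i]];
    return sum;
}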

Well it depends on how you store your bitset.
If it's an array, then you can't do better than a plain for loop. If you want to do this in parallel, just split the array into chunks and process them concurrently.
If we are talking about a bitset (storing the bits in a native 32/64-bit integer type), then the simplest way to count the set bits would be this one:
unsigned bitset = /* your bits */;
int s = 0;
for (; bitset; s++)
    bitset &= bitset - 1;   // clears the lowest set bit
This removes one 1-bit at every step, so you have O(s), where s is the number of set bits.
Of course, you can combine these two methods if you need more than 32/64 bits

I don't know why people are answering without even looking at the link in the first comment on the question. You can easily get under O(size_of_bitset), at least as far as the constant factor is concerned.
You could use this method (found in the link posted by J.F. Sebastian):
#include <stddef.h>

#define N (1 << 20)   /* example array size */

static inline int count_bits(unsigned num){
    int sum = 0;
    for (; num; sum++) num &= num - 1;   /* clear the lowest set bit */
    return sum;
}

int array[N];

int main(void){
    int total_sum = 0;
    #pragma omp parallel for reduction(+:total_sum)
    for (size_t i = 0; i < N; i++){
        total_sum += count_bits(array[i]);
    }
}
This will count the number of set bits in the array's memory range, in parallel. The inline matters because the function is called once per element; inlining removes the per-call overhead and lets the compiler optimize the hot loop much better.
You can swap count_bits for anything better at counting the bits in an integer, if you find anything faster. This version has complexity O(bits_set) (not the size of the bit set!).
Invoking the parallel construct introduces quite a lot of overhead compared to a single summation, so the array needs to be quite large to compensate.
The parallelism is done via OpenMP. The partial sum of each thread is combined at the end of the parallel loop and stored in total_sum. Note that total_sum is private to each thread inside the loop, due to the reduction clause.
You could alter the code to count the bits set in an arbitrary memory region, but it is quite important for the region to be memory-aligned when you perform operations at such a low level.

As far as I can see, it would be wasteful to try to handle the zeros specially. As #bdares said, addition is really cheap. At a minimum, you'll need to execute N instructions to sum an N-bit sequence; that's what you get if you unconditionally add every bit. If you add a test to see whether each bit is a 0 or a 1, that's another instruction that needs to be executed for each bit. Even if there's no branch penalty, you're executing a minimum of 1 instruction for every bit (the conditional test), and then you're also executing the original instruction (the add) for any bits that are equal to 1. So even without a branch penalty, this takes more time to execute.
#bdares mentions that the compiler will optimize out the branches, but that's only true if the value of each bit is known at compile time, and if you know the values of the bits at compile time, you should just add them up yourself in advance.
There might be some cute things you can do with bit twiddling. For instance, if you take the bits two at a time, you're adding up values of 0, 1, 2, or 3, and you only have half as many additions to do. There may be something you can then do with the result to convert it into the value you want, but I haven't actually thought about how to do that.
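For the record, the classic "SWAR" population count (found e.g. in Hacker's Delight) carries exactly that pairwise idea to completion; a sketch:

#include <cstdint>

// Adds bits in pairs, then nibbles, then bytes: each step halves the
// number of partial sums while doubling their width.
int popcount32(uint32_t v) {
    v = v - ((v >> 1) & 0x55555555);                 // 2-bit partial sums
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333);  // 4-bit partial sums
    v = (v + (v >> 4)) & 0x0F0F0F0F;                 // 8-bit partial sums
    return (v * 0x01010101) >> 24;                   // sum the four bytes
}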

Related

Uniform random bit from a mask

Suppose I have a 64 bit unsigned integer (u64) mask, with one or more bits set.
I want to select one of the set bits uniformly at random from mask to give a new mask x such that x & mask has exactly one bit set. Some pseudocode that does this might be:
def uniform_random_bit_from_mask(mask):
    assert mask > 0
    set_indices = get_set_indices(mask)
    random_index = uniform_random_choice(set_indices)
    new_mask = set_bit(random_index, 0)
    return new_mask
However I need to do this as fast as possible (code similar to the above in a low-level language is slowing a hot loop). Does anyone have a more efficient scheme?
The details how to optimize this depend on several factors you did not specify – the target architecture, the expected number of set bits in the mask, the language you want to use, the requirements on the randomness and many more. Without knowing further details, it's hard to give a useful answer, but I'll give a few hints that may prove useful anyway.
Most modern architectures have an instruction to count the number of set bits in an integer, generally called "popcount", and this instruction is exposed in most low-level languages. In Rust, you can use the count_ones() method. This gives you the total number k of bits to select from.
You can then generate a random number i between 0 and k - 1 (inclusive). The next step is to select the ith set bit in mask. An efficient approach to do so is this loop (Rust code):
for _ in 0..i {
    mask &= mask - 1;   // clear the lowest set bit
}
let new_mask = 1u64 << mask.trailing_zeros();
The loop clears the least significant set bit in each iteration. Since i < k, we know that mask can't be zero after the loop. The last line generates a new mask from the least significant bit of mask that is still set.
On common architectures, it is likely that the bottleneck will be the random number generator. If you are using Rust's rand crate, you can use SmallRng for improved performance, at the cost of being cryptographically insecure, which may not be relevant for your use case.
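If you're working in C or C++ rather than Rust, a minimal sketch of the same approach might look like this (uniform_random_bit_from_mask is the asker's name; __builtin_popcountll and __builtin_ctzll are GCC/Clang builtins):

#include <cstdint>
#include <random>

// Select one set bit of `mask` uniformly at random; mask must be nonzero.
uint64_t uniform_random_bit_from_mask(uint64_t mask, std::mt19937_64& rng) {
    int k = __builtin_popcountll(mask);              // number of set bits
    std::uniform_int_distribution<int> dist(0, k - 1);
    for (int i = dist(rng); i > 0; --i)
        mask &= mask - 1;                            // clear the lowest set bit
    return uint64_t(1) << __builtin_ctzll(mask);     // isolate the chosen bit
}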

Is there any probabilistic data structure that reduces the space complexity of a large number of counters?

Basically I need to keep track of a large number of counters. I can increment or decrement each counter by name. The simplest way to do so is to use a hash table, using counter_name as key and its corresponding count as the value for that key.
The counters don't need to be 100% accurate, approximate values for count are fine. So I'm wondering if there is any probabilistic data structure that can reduce the space complexity of N counters to lower than O(N), kinda similar to how HyperLogLog reduces the memory requirement of counting N items by giving only an approximate result. Any ideas?
In my opinion, the thing you are looking for is Count-min sketch.
Reading a stream of elements a1, a2, a3, ..., an, in which there can be a lot of repeated elements, it can at any time answer the following question: how many occurrences of a given element ai have you seen so far?
Basically, your unique elements are mapped onto your counters, and a count-min sketch lets you adjust its parameters to trade memory for accuracy.
P.S. I described some other popular probabilistic data structures here.
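A toy C++ sketch, just to make the shape concrete (the hashing here is simplistic and the width/depth constants are arbitrary; real implementations use independent hash functions per row):

#include <algorithm>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// D rows of W counters; each row hashes the key differently.
struct CountMinSketch {
    static const int D = 4;          // more rows: fewer bad overestimates
    static const int W = 1 << 16;    // more columns: smaller error per row
    std::vector<uint32_t> table = std::vector<uint32_t>(D * W, 0);

    void add(const std::string& key, uint32_t delta = 1) {
        for (int d = 0; d < D; ++d)
            table[d * W + row_hash(key, d)] += delta;
    }

    // Never underestimates: report the minimum across rows.
    uint32_t estimate(const std::string& key) const {
        uint32_t best = UINT32_MAX;
        for (int d = 0; d < D; ++d)
            best = std::min<uint32_t>(best, table[d * W + row_hash(key, d)]);
        return best;
    }

  private:
    static std::size_t row_hash(const std::string& key, int d) {
        return std::hash<std::string>{}(key + char('A' + d)) % W;
    }
};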
Stefan Haustein's correct that the names are likely to take more space than the counters, and you may be able to prioritise certain names as he suggests, but failing that you can consider how best to store the names. If they're fairly short (e.g. 8 characters or less), you might consider using a closed hashing table that stores them directly in the buckets. If they're long, you could store them contiguously (NUL terminated) in a block of memory, and in the hash table store the offset into that block of their first character.
For the counter itself, you can save space by using a probabilistic approach as follows:
#include <cstdlib>   // for rand()

template <typename T, typename Q = unsigned>
class Approx_Counter
{
  public:
    Approx_Counter() : n_(0) { }

    Approx_Counter& operator++()
    {
        // Count exactly up to 2, then increment with probability 1/2^n_.
        if (n_ < 2 || rand() % (operator Q()) == 0)
            ++n_;
        return *this;
    }

    // Reported count: exact below 2, otherwise 2^n_.
    operator Q() const { return n_ < 2 ? n_ : Q(1) << n_; }

  private:
    T n_;
};
Then you can use e.g. Approx_Counter<unsigned char, unsigned long>. Swap out rand() for a C++11 generator if you care.
The idea's simple:
when n_ is 0, ++ has definitely not been invoked
when n_ is 1, ++ has definitely been invoked exactly once
when n_ >= 2, it indicates ++ has probably been invoked about 2^n_ times
To keep that last implication in line with the number of ++ invocations actually made, each invocation has a 1-in-2^n_ chance of actually incrementing n_ again.
Just make sure your rand() or substitute returns values much larger than the largest counter value you want to track, otherwise you'll get rand() % (operator Q()) == 0 too often and increment inappropriately.
That said, having a smaller counter doesn't help much if you have pointers or offsets to it, so you'll want to squeeze the counter into the bucket too; that's another reason to prefer your own closed hashing implementation if you genuinely need to tighten up memory usage but want to stick with a hash table (a trie is another possibility).
The above is still O(N) in counter space, just with a smaller constant. For genuinely sub-O(N) options, you'd need to consider whether/how the keys are related, such that incrementing one counter could reasonably affect multiple keys. You've given us no insight into that in your question to date.
The names probably take up more space than the counters.
How about having a fixed number of counters and only keeping the ones with the highest counts, plus some kind of LRU mechanism to allow new counters to rise to the top? I guess it really depends on your use case...
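A rough sketch of that idea (close in spirit to the "space-saving" heavy-hitters algorithm; the linear eviction scan is just to keep it short):

#include <string>
#include <unordered_map>

// Keep at most K counters; when a new name arrives and the table is full,
// recycle the entry with the smallest count so heavy hitters can rise.
struct TopKCounters {
    std::size_t K = 1000;   // fixed budget, chosen up front
    std::unordered_map<std::string, long> counts;

    void increment(const std::string& name) {
        auto it = counts.find(name);
        if (it != counts.end()) { ++it->second; return; }
        if (counts.size() < K) { counts[name] = 1; return; }
        auto min_it = counts.begin();   // O(K) scan; a heap would do better
        for (auto i = counts.begin(); i != counts.end(); ++i)
            if (i->second < min_it->second) min_it = i;
        long inherited = min_it->second;
        counts.erase(min_it);
        counts[name] = inherited + 1;   // new name inherits the evicted count
    }
};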

Removing finished futures to keep their number constant

I have a program that needs to launch a very large number of futures; specifically, more than a size_t can count. A normal way to keep many futures is to store them in a container, but since there are too many of them, I would have to remove the finished ones. The program needs to count the number of newlines in parallel.
This is what I want to work for n larger than a size_t can hold:
vector<future<int>> vf;
for(size_t i = 0; i < n; ++i){
    vf.emplace_back(async([&](){ return count_lines(part_of_an_array); }));
}
double cnt = 0;
for(auto& f : vf) cnt += f.get();
One way I thought of doing it is to keep a vector<char> busy_f (vector<bool> is probably not thread-safe), where busy_f[i_future] is set to 0 as count_lines starts and to 1 when it finishes.
Is there a faster approach?
Creating the threads or even the futures "manually" in such cases is usually not a good idea, because it is difficult to create the "right amount" of them: remember that you only have a relatively small number of actual cores/threads to execute on, and all the extra futures, which do not immediately map to a thread and just block, wait, and take up memory, are wasteful.
I'd use some sort of higher-level parallelization primitive, like a 'parallel for' or a parallel map-reduce implementation.
I don't know what OS/compiler you're using, so I'm going to suggest to use TBB as a cross-platform solution. If you're on Microsoft stack, they have their own parallel library, which in some aspects is better than TBB.
In TBB they have a parallel_reduce template function, which looks exactly like what you need, and note what they promise:
If the range and body take O(1) space, and the range splits into
nearly equal pieces, then the space complexity is O(P log(N)), where N
is the size of the range and P is the number of threads.
However, all ranges in TBB are limited to size_t... Maybe you can write an outer loop, which "makes" "chunks" of size_t elements from the larger problem, and then for each chunk you could call a parallel_reduce and sum up their results.
double result = 0;
for(BigNumber offset = 0; offset < n; offset += BigNumber(size_t_size))
{
    result += parallel_reduce( ... );
}
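To make the inner call concrete, a sketch using TBB's functional-form parallel_reduce (count_lines and the data layout here are stand-ins for the asker's actual work):

#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>
#include <functional>
#include <vector>

extern int count_lines(int part_of_an_array);   // the asker's function

double count_chunk(const std::vector<int>& data) {
    return tbb::parallel_reduce(
        tbb::blocked_range<size_t>(0, data.size()),
        0.0,                                         // identity for the sum
        [&](const tbb::blocked_range<size_t>& r, double acc) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                acc += count_lines(data[i]);         // per-element work
            return acc;
        },
        std::plus<double>());                        // combine partial sums
}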

Represent 10000 booleans using only 10000 bits

I want to represent 10000 bits of information (each can be either one or zero). Is there any way I can do this?
Wikipedia explains a bit hack to achieve this, but then it asks me to have a number as large as 2^10000 for storing the 10000 bits.
Is there some way that's tractable even for storing a large number of bits?
As Wikipedia explains, a bit field is an appropriate choice here: a bit field that can hold 10,000 bits has 2^10000 states.
A good choice for doing this (given that integers are 32/64 bits) is a bit vector, which is asked about and explained in excruciating detail here:
bit vector implementation of set in Programming Pearls, 2nd Edition
The general idea is that you use an array of integers which are used as bit fields.
You can make a bool take 1 bit, for example if you have a bunch of them in a struct, like this:
struct A
{
    bool a:1, b:1, c:1, d:1, e:1;
};
The above method won't be useful if the number of variables is large. So instead, create an array of 10000/32 integers (each 32-bit integer holds 32 bits, so 313 of them cover 10,000 bits). You can then access each bit using an offset together with << or >>: for example, to reach bit 55, index element 55/32 of the array and shift by 55%32.
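A sketch of that index/offset arithmetic (32-bit words; the names are mine):

#include <stdint.h>

#define NBITS 10000
static uint32_t bits[(NBITS + 31) / 32];   // 313 words, rounded up

void set_bit(int n)   { bits[n / 32] |=  (uint32_t)1 << (n % 32); }
void clear_bit(int n) { bits[n / 32] &= ~((uint32_t)1 << (n % 32)); }
int  test_bit(int n)  { return (bits[n / 32] >> (n % 32)) & 1; }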
In C++ you can do this very simply, using one of two standard library containers:
std::vector<bool>
This specialization of a standard vector acts (almost) like any other vector, but compresses its contents to one bit per element. Aside from enjoying that fact, you can just treat it like a vector:
// Create a vector of 10000 booleans
std::vector<bool> lots_of_bits(10000);

// Set all the odd ones to true
for (int i = 1; i < lots_of_bits.size(); i += 2) {
    lots_of_bits[i] = true;
}

// Add another 100 trues at the end
for (int j = 0; j < 100; ++j) {
    lots_of_bits.push_back(true);
}

// etc.
// etc.
std::bitset<N>
The "new, improved" bit vector which does not pretend to be a standard container. In particular, it's of fixed size and you need to know the size at compile time. That can be a bit restrictive, but it's otherwise a pretty useful class. Like std::vector<bool>, it implements the [] operator for getting and setting individual bits. It also supports the bitwise logical operators &, |, '^' and ~ (and, or, xor and not), as well as left and right bitshifts, and some other utilities.
Is your concern that accessing bit number n requires shifting n times? If so, you can make the problem tractable by dividing your 10,000 bits into 10,000 / 8 buckets, using an array of characters (assuming C or C++ here). Now you can access bit number n by figuring out which bucket that bit is in (n / 8) and then its position within the bucket (n % 8). Then you just do the masking. No extra storage required (except the padding at the end, so a few extra bits if your count isn't a perfect multiple of 8).

Range extremes don't seem to get drawn by random()

For several valid reasons I have to use BSD's random() to generate awfully large amounts of random numbers, and since its cycle is quite short (~2^69, if I'm not mistaken) the quality of such numbers degrades pretty quickly for my use case. I could use the RNG board I have access to, but it's painfully slow, so I thought I could do this trick: take one number from the board, use it to seed random(), use random() to draw numbers, and reseed it when the board says a new number is available. The board generates about 100 numbers per second, so my guess is that random() hardly gets to cycle over, and the generation rate easily keeps up with my requirement of several million numbers per second.
Anyway, the problem is that random() claims to uniformly draw numbers between 0 and (2^31)-1, but I've been drawing a huge amount of numbers and I've never ever seen a 0 nor a (2^31)-1 so far. Maybe some 1s and (2^31)-2s, but I've never seen the extremes. Now, I know the problem with random numbers is that you can never be sure (see Dilbert, Debian), but this seems extremely odd nonetheless. Moreover, I tried analysing the generated datasets with Octave using the histc() function, and the lowest and highest bins contain between half and three quarters as many numbers as the middle bins (which in turn are uniformly filled, so I guess in some sense the distribution is "uniform").
Can anybody explain this?
EDIT Some code
The board outputs this structure with the three components, and then I do some mumbo-jumbo combining them to produce the seed. I have no specs about this board; it's an ancient piece of hardware thrown together by a previous student some years ago, there's little documentation, and the formula I'm using is one of those suggested in the docs. The STEP parameter tells me how many numbers I can draw using one seed, so I can optimise performance and throttle down CPU usage at the same time.
float n = fabsf(fmod(sqrt(a.s1*a.s1 + a.s2*a.s2 + a.s3*a.s3), 1.0));
unsigned int seed = n * UINT32_MAX;
srandom(seed);

for(int i = 0; i < STEP; i++) {
    long r = random();
    n = (float)r / (UINT32_MAX >> 1);
    [_numbers addObject:[NSNumber numberWithFloat:n]];
}
Are you certain that
int main(void) {
    while (random() != 0L);
}
hangs indefinitely? On my linux machine (the Gnu C library uses the same linear feedback shift register as BSD, albeit with a different seeding procedure) it doesn't.
According to this reference the algorithm produces 'runs' of consecutive zeroes or ones up to length n-1 where n is the size of the shift register. When this has a size of 31 integers (the default case) we can even be certain that, eventually, random() will return 0 a whopping 30 (but never 31) times in a row! Of course, we may have to wait a few centuries to see it happening...
To extend the cycle length, one method is to run two RNGs, with different periods, and XOR their output. See L'Ecuyer 1988 for some examples.
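A sketch of that combining idea (the two component generators here, a 32-bit xorshift and the Park-Miller LCG, are illustrative stand-ins, not the specific generators L'Ecuyer analyzed):

#include <cstdint>

struct CombinedRng {
    uint32_t x = 2463534242u;   // xorshift32 state; must be nonzero
    uint32_t lcg = 1u;          // LCG state; must be in [1, 2^31 - 2]

    uint32_t next() {
        x ^= x << 13; x ^= x >> 17; x ^= x << 5;        // xorshift32 step
        lcg = (uint64_t)lcg * 48271u % 2147483647u;     // Park-Miller step
        return x ^ lcg;    // XOR the two streams to extend the overall period
    }
};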
