All combinations in columnar order - algorithm

I have a grid where the columns can be represented as A, B, C and the rows can be classified into types, say x, y, z.
There can be multiple rows that are all classified as the same type. This is not true of the columns.
Rows of the same type should not be combined or "mixed." And all combining needs to be in the order ABC, not CBA or anything else. Here is an example I've come up with.
I need to print out every combination (in columnar order) of the following table:
    A  B  C
--------------
x | 10 20 30
x | 11 21 31
y | 40 50 60
y | 41 51 61
z | 70 80 90
The output needs to be like this (it doesn't have to output the pattern, that's just for reference):
(Pattern) (Result)
Ax Bx Cx {10 20 30} {11 21 31} (notice: no mixed combinations within the same type x)
Ax Bx Cy {10 20 60} {10 20 61} {11 21 60} {11 21 61}
Ax Bx Cz {10 20 90} {11 21 90}
Ax By Cx {10 50 30} {10 51 30} {11 50 31} {11 51 31}
Ax By Cy {10 50 60} {10 51 61} {11 50 60} {11 51 61}
Ax By Cz {10 50 90} {10 51 90} {11 50 90} {11 51 90}
Ax Bz Cx {10 80 30} {11 80 31}
Ax Bz Cy {10 80 60} {10 80 61} {11 80 60} {11 80 61}
Ax Bz Cz {10 80 90} {11 80 90}
Ay Bx Cx {40 20 30} {40 21 31} {41 20 30} {41 21 31}
Ay Bx Cy ...
Ay Bx Cz ...
Ay By Cx ...
Ay By Cy ...
Ay By Cz ...
Ay Bz Cx ...
Ay Bz Cy ...
Ay Bz Cz ...
Az Bx Cx ...
Az Bx Cy ...
Az Bx Cz ...
Az By Cx ...
Az By Cy ...
Az By Cz ...
Az Bz Cx ...
Az Bz Cy ...
Az Bz Cz {70 80 90}
I have some Tcl code I've started writing to do this, but it's not great. It doesn't take multiple rows of the same x, y, or z type into consideration, but here's what I have so far:
set dl {0 1 2}
set op {x y z}
set debug [open "debugloop.txt" "w"]
set i 0
set j 0
set k 0
set e 0
set r 0
set s 0
set g yes
while {$g} {
    puts $debug A[lindex $op $i][lindex $dl $e]B[lindex $op $j][lindex $dl $r]C[lindex $op $k][lindex $dl $s]
    incr s
    if {$s > 2} {
        puts $debug ""
        incr r
        set s 0
        if {$r > 2} {
            puts $debug ""
            incr e
            set r 0
            if {$e > 2} {
                puts $debug ""
                incr k
                set e 0
                if {$k > 2} {
                    puts $debug ""
                    incr j
                    set k 0
                    if {$j > 2} {
                        puts $debug ""
                        incr i
                        set j 0
                        if {$i > 2} {
                            set g no
                        }
                    }
                }
            }
        }
    }
}
Does anyone have a better way to do this than a series of hardcoded nested loops? I've had a lot of trouble with this.

There are 2 main parts to your problem:
1. Generating all of the (Pattern) combinations.
2. Storing the data in a way that allows you to look up the (Result) for each combination.
For the first of these you need to generate all of the permutations, with repetition allowed, of your pattern values (x, y, z in your example). There is some code for this on the Tcl wiki.
In your case order is important ({x,y,z} is not the same as {z,y,x}), so the algorithm needs to take that into account. Here is some code that uses a simple algorithm to generate the repeating permutations; it uses the idea that you can generate all of the permutations by counting up modulo the number of elements. The number of permutations grows pretty quickly: look at the way permCount is calculated!
# Permutations
proc NextPerm {perm values} {
    set result {}
    set needIncr 1
    foreach val $perm {
        if {$needIncr == 1} {
            set newVal [lindex $values [expr {[lsearch -exact $values $val] + 1}]]
            if {$newVal != ""} {
                # New value was found
                lappend result $newVal
                set needIncr 0
            } else {
                # No next value found so we need to carry
                lappend result [lindex $values 0]
            }
        } else {
            lappend result $val
        }
    }
    return $result
}
set values {x y z}
set perm {x x x}
puts $perm
# Permutations with repetition = (number of values) ** (length of the pattern)
set permCount [expr {[llength $values] ** [llength $perm]}]
for {set i 1} {$i < $permCount} {incr i} {
    set perm [NextPerm $perm $values]
    puts $perm
}
NB: I've made no attempt to optimise this code.
If the pattern values never change then, rather than generating them yourself, you could use an online resource like this (there are lots of other sites if you do a search) to generate the values and hard-code them into your program.
For part 2 I'd look at storing the values in an array or a dict, with a key that allows you to pull back the relevant values.
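For illustration, here is a hedged sketch of that keyed layout, written in C++ for concreteness (the container shape and names are my assumptions; a Tcl array or dict keyed by the type letter plays the same role):
#include <map>
#include <string>
#include <vector>

int main() {
    // type letter -> rows of {A, B, C} values from the table
    std::map<std::string, std::vector<std::vector<int>>> rowsByType = {
        {"x", {{10, 20, 30}, {11, 21, 31}}},
        {"y", {{40, 50, 60}, {41, 51, 61}}},
        {"z", {{70, 80, 90}}},
    };
    // A pattern element such as "Bx" then means: column 1 of any row under "x"
    for (const auto &row : rowsByType["x"]) {
        int bValue = row[1]; // 20, then 21
        (void)bValue;
    }
}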

Mutate new column from random value in existing columns

I'm looking to mutate my data and create a new column which randomly selects a value from the existing data. My data looks something like:
individual  age_2010  age_2011  age_2012  age_2013
a           20        21        NA        21
b           33        34        35        36
c           76        NA        78        79
d           46        46        48        49
And I want it to look like:
individual  age_2010  age_2011  age_2012  age_2013  Random Sample
a           20        21        NA        21        21
b           33        34        35        36        36
c           76        NA        78        79        78
d           46        46        48        49        48
Is there any way to add a new column which includes a random figure from any of the previous age columns, and preferably keeping the data in wide form?
I think this is an easier approach:
d[, RandomSample:=sample(na.omit(t(.SD)),1),individual]
If dealing with the edge cases discussed above is desired, and one wanted to follow this approach, we could do this:
f <- function(df) {
  s = na.omit(t(df))
  ifelse(length(s) > 0, sample(s, 1), NA_real_)
}
d[, RandomSample:=f(.SD),individual]
Or, we could just wrap the original approach in tryCatch:
d[, RandomSample:=tryCatch(sample(na.omit(t(.SD)),1),error=\(e) NA),individual]
You can reshape longer, then do grouped sampling:
library(data.table)
# Sample data
d <- structure(list(individual = c("a", "b", "c", "d"), age_2010 = c(20, 33, 76, 46), age_2011 = c(21, 34, NA, 46), age_2012 = c(NA, 35, 78, 48), age_2013 = c(21, 36, 79, 49)), row.names = c(NA, -4L), spec = structure(list(cols = list(individual = structure(list(), class = c("collector_character", "collector")), age_2010 = structure(list(), class = c("collector_double", "collector")), age_2011 = structure(list(), class = c("collector_double", "collector")), age_2012 = structure(list(), class = c("collector_double", "collector")), age_2013 = structure(list(), class = c("collector_double", "collector"))), default = structure(list(), class = c("collector_guess", "collector")), skip = 2L), class = "col_spec"), class = c("data.table", "data.frame"))
d
#> individual age_2010 age_2011 age_2012 age_2013
#> 1: a 20 21 NA 21
#> 2: b 33 34 35 36
#> 3: c 76 NA 78 79
#> 4: d 46 46 48 49
# Solution
d[, "Random Sample"] <- d |>
melt("individual") |> # go long
(`[`)(!is.na(value), # drop NAs
.(x = sample(value, 1)), # sampling
keyby = .(individual)) |> # Grouping variable
(`[[`)(2) # extract vector from frame
d
#> individual age_2010 age_2011 age_2012 age_2013 Random Sample
#> 1: a 20 21 NA 21 21
#> 2: b 33 34 35 36 33
#> 3: c 76 NA 78 79 76
#> 4: d 46 46 48 49 49
Alternatively, you can also use apply(), which is less verbose but much slower:
d[, "Random Sample"] <- apply(d[, -1], 1, \(x) x |> na.omit() |> sample(1))
See the benchmark here for speed comparison. On just 40k observations, apply() needs 59 times longer and 8 times the memory.
# Make large sample data set
d_large <- d |>
  list() |>
  rep(1e4) |>
  rbindlist()

bench::mark(
  base = apply(d_large[, -1], 1, \(x) x |> na.omit() |> sample(1)),
  dt = d_large |>
    melt("individual") |>
    (`[`)(!is.na(value),
          .(x = sample(value, 1)),
          keyby = .(individual)) |>
    (`[[`)(2),
  check = F
)
#> Warning: Some expressions had a GC in every iteration; so filtering is disabled.
#> # A tibble: 2 × 6
#> expression min median `itr/sec` mem_alloc `gc/sec`
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl>
#> 1 base 617.86ms 617.9ms 1.62 103.3MB 12.9
#> 2 dt 6.96ms 10.5ms 80.9 13.1MB 47.3
Created on 2022-07-27 by the reprex package (v2.0.1)
Edit:
Here are versions that work with the edge case where all years are NA. In the first case I went for a join with the original table, which is a bit more expensive than the other version.
# Solution with Data Table
d <- d |>
  melt("individual") |>                          # go long
  (`[`)(!is.na(value),                           # drop NAs
        .(`Random Sample` = sample(value, 1)),   # sampling
        keyby = .(individual)) |>                # Grouping variable
  (`[`)(d)                                       # right join with original frame
Here I simply used purrr::possibly() to return NA when sampling a zero length vector.
# Solution with apply
d[, "Random Sample"] <- apply(d[, -1], 1,
\(x) x |> na.omit() |> purrr::possibly(sample, NA)(1))

How to generate Set Y with multiple elements in CPLEX?

I have written this code to generate a set Y with a single element:
int m=3 ;
range I= 1..m;
int w[i in I]=i;
int q= min(i in I)w[i] ;
int W=1000;
int Ea[I];
{int} B={381,198,291};
{int} E ={rand(f) | f in B: f>0};
execute
{
    writeln("E is ", E)
    var j=1
    for(var k in E)
    {
        Ea[j]=k; // Array Ea has same values as set E
        j=j+1;
    }
}
int ok[i in I]=(sum(i in I)Ea[i]*w[i]<=W-q);
{int} Y= {sum(i in I)Ea[i]*w[i]|x in 0..W-q , i in I: ok[i]==1 } ;
execute {
    writeln(Y);
}
The output of the above code is:
E is {93 42 31}
Y is {270}
How can I generate multiple elements in set Y, given that the rand function is used while calculating E?
You can use arrays to hold several casts (random draws):
{int} B={381,198,291};
range casts=1..10;
{int} E[c in casts] ={rand(f) | f in B: f>0};
execute
{
    writeln(E);
}
int Y[c in casts]= sum(e in E[c]) e;
execute {
    writeln(Y);
}
gives
[{93 42 31} {378 131 243} {25 177 61} {4 48 212} {276 1 256} {289 138 264}
{366 192 177} {138 150 164} {125 163 246} {315 180 240}]
[166 752 263 264 533 691 735 452 534 735]

not able to sort list in tcl

I have the below code in NS2, which calculates the distance between two nodes and puts it in a list "nbr". I want to sort that list in ascending order by the value "d" and store it in a list again for further use; for that I used the lsort command, but it is giving me the same, unsorted list.
Please help.
Code:
proc distance { n1 n2 nd1 nd2} {
    set x1 [expr int([$n1 set X_])]
    set y1 [expr int([$n1 set Y_])]
    set x2 [expr int([$n2 set X_])]
    set y2 [expr int([$n2 set Y_])]
    set d [expr int(sqrt(pow(($x2-$x1),2)+pow(($y2-$y1),2)))]
    if {$d<300} {
        if {$nd2!=$nd1 && $nd2 == 11} {
            set nbr "{$nd1 $nd2 $x1 $y1 $d}"
            set m [lsort -increasing -index 4 $nbr]
            puts $m
        }
    }
}
for {set i 1} {$i < $val(nn)} {incr i} {
    for {set j 1} {$j < $val(nn)} {incr j} {
        $ns at 5.5 "distance $node_($i) $node_($j) $i $j"
    }
}
output:
{1 11 305 455 273}
{4 11 308 386 208}
{5 11 378 426 274}
{7 11 403 377 249}
{8 11 244 405 215}
{9 11 256 343 154}
{10 11 342 328 172}
{12 11 319 192 81}
{13 11 395 196 157}
{14 11 469 191 231}
{15 11 443 140 211}
{16 11 363 115 145}
{17 11 290 135 75}
{18 11 234 121 69}
{19 11 263 60 132}
{20 11 347 60 169}
Right now, you're calculating each of the distances separately, but aren't actually collecting them all into a list that can be sorted.
Let's fix this by first rewriting distance to just do the distance calculations themselves:
proc distance {n1 n2 nd1 nd2} {
    set x1 [expr int([$n1 set X_])]
    set y1 [expr int([$n1 set Y_])]
    set x2 [expr int([$n2 set X_])]
    set y2 [expr int([$n2 set Y_])]
    set d [expr int(sqrt(pow(($x2-$x1),2)+pow(($y2-$y1),2)))]
    # Why not: set d [expr hypot($x2-$x1,$y2-$y1)]
    # I'm keeping *everything* we know at this point
    return [list $nd1 $nd2 $n1 $n2 $d $x1 $y1 $x2 $y2]
}
Then, we need another procedure that will process the whole collection (at the time the simulator calls it) and do the sorting. It will call distance to get the individual record, since we've factored that information out.
proc processDistances {count threshold {filter ""}} {
    global node_
    set distances {}
    for {set i 1} {$i < $count} {incr i} {
        for {set j 1} {$j < $count} {incr j} {
            # Skip self comparisons
            if {$i == $j} continue
            # Apply target filter
            if {$filter ne "" && $j != $filter} continue
            # Get the distance information
            set thisDistance [distance $node_($i) $node_($j) $i $j]
            # Check that the nodes are close enough
            if {[lindex $thisDistance 4] < $threshold} {
                lappend distances $thisDistance
            }
        }
    }
    # Sort the pairs, by distances
    set distances [lsort -real -increasing -index 4 $distances]
    # Print the sorted list
    foreach tuple $distances {
        puts "{$tuple}"
    }
}
Then we arrange for that whole procedure to be called at the right time:
# We recommend building callbacks using [list], not double quotes
$ns at 5.5 [list processDistances $val(nn) 300 11]

Find nth SET bit in an int

Instead of just the lowest set bit, I want to find the position of the nth lowest set bit. (I'm NOT talking about the value at the nth bit position.)
For example, say I have:
0000 1101 1000 0100 1100 1000 1010 0000
And I want to find the 4th bit that is set. Then I want it to return:
0000 0000 0000 0000 0100 0000 0000 0000
If popcnt(v) < n, it would make sense if this function returned 0, but any behavior for this case is acceptable for me.
I'm looking for something faster than a loop if possible.
Nowadays this is very easy with PDEP from the BMI2 instruction set. Here is a 64-bit version with some examples:
#include <cassert>
#include <cstdint>
#include <x86intrin.h>
inline uint64_t nthset(uint64_t x, unsigned n) {
return _pdep_u64(1ULL << n, x);
}
int main() {
assert(nthset(0b0000'1101'1000'0100'1100'1000'1010'0000, 0) ==
0b0000'0000'0000'0000'0000'0000'0010'0000);
assert(nthset(0b0000'1101'1000'0100'1100'1000'1010'0000, 1) ==
0b0000'0000'0000'0000'0000'0000'1000'0000);
assert(nthset(0b0000'1101'1000'0100'1100'1000'1010'0000, 3) ==
0b0000'0000'0000'0000'0100'0000'0000'0000);
assert(nthset(0b0000'1101'1000'0100'1100'1000'1010'0000, 9) ==
0b0000'1000'0000'0000'0000'0000'0000'0000);
assert(nthset(0b0000'1101'1000'0100'1100'1000'1010'0000, 10) ==
0b0000'0000'0000'0000'0000'0000'0000'0000);
}
If you just want the (zero-based) index of the nth set bit, add a trailing zero count.
inline unsigned nthset(uint64_t x, unsigned n) {
return _tzcnt_u64(_pdep_u64(1ULL << n, x));
}
It turns out that it is indeed possible to do this with no loops. It is fastest to precompute the (at least) 8-bit version of this problem. Of course, these tables use up cache space, but there should still be a net speedup in virtually all modern PC scenarios. In this code, n=0 returns the least set bit, n=1 is second-to-least, etc.
Solution with __popcnt
There is a solution using the __popcnt intrinsic (__popcnt needs to be extremely fast, or any perf gains over a simple loop solution will be moot; fortunately, most SSE4+ era processors support it).
// lookup table for sub-problem: 8-bit v
byte PRECOMP[256][8] = { .... } // PRECOMP[v][n] for v < 256 and n < 8
ulong nthSetBit(ulong v, ulong n) {
    ulong p = __popcnt(v & 0xFFFF);
    ulong shift = 0;
    if (p <= n) {
        v >>= 16;
        shift += 16;
        n -= p;
    }
    p = __popcnt(v & 0xFF);
    if (p <= n) {
        shift += 8;
        v >>= 8;
        n -= p;
    }
    if (n >= 8) return 0; // optional safety, in case n > # of set bits
    return PRECOMP[v & 0xFF][n] << shift;
}
This illustrates how the divide and conquer approach works.
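The body of PRECOMP is elided above; as a hedged sketch (assuming byte stands for an unsigned 8-bit type, and that PRECOMP[v][n] holds the n-th lowest set bit of v as an isolated bit mask, 0 if absent, which is what the return statement above implies), it could be filled at startup like this:
// Hedged sketch: fill PRECOMP[v][n] with the isolated n-th lowest set bit of v
void initPrecomp(byte PRECOMP[256][8]) {
    for (int v = 0; v < 256; v++) {
        int n = 0;
        for (int k = 0; k < 8; k++)
            PRECOMP[v][k] = 0;                      // default: no n-th set bit
        for (int bit = 0; bit < 8; bit++)
            if (v & (1 << bit))
                PRECOMP[v][n++] = (byte)(1 << bit); // isolated n-th lowest bit
    }
}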
General Solution
There is also a solution for "general" architectures, without __popcnt. It can be done by processing in 8-bit chunks. You need one more lookup table that tells you the popcount of a byte:
byte PRECOMP[256][8] = { .... } // PRECOMP[v][n] for v<256 and n < 8
byte POPCNT[256] = { ... } // POPCNT[v] is the number of set bits in v. (v < 256)
ulong nthSetBit(ulong v, ulong n) {
    ulong p = POPCNT[v & 0xFF];
    ulong shift = 0;
    if (p <= n) {
        n -= p;
        v >>= 8;
        shift += 8;
        p = POPCNT[v & 0xFF];
        if (p <= n) {
            n -= p;
            shift += 8;
            v >>= 8;
            p = POPCNT[v & 0xFF];
            if (p <= n) {
                n -= p;
                shift += 8;
                v >>= 8;
            }
        }
    }
    if (n >= 8) return 0; // optional safety, in case n > # of set bits
    return PRECOMP[v & 0xFF][n] << shift;
}
This could, of course, be done with a loop, but the unrolled form is faster and the unusual form of the loop would make it unlikely that the compiler could automatically unroll it for you.
v-1 has a zero where v has its least significant "one" bit, while all more significant bits are the same. This leads to the following function:
int ffsn(unsigned int v, int n) {
    for (int i = 0; i < n-1; i++) {
        v &= v-1; // remove the least significant bit
    }
    return v & ~(v-1); // extract the least significant bit
}
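For example, with the 32-bit value from the question (note that n is 1-based here):
// 0x0D84C8A0 = 0000 1101 1000 0100 1100 1000 1010 0000
// its set bits, lowest first, are at positions 5, 7, 11, 14, ...
assert(ffsn(0x0D84C8A0, 4) == 0x4000); // the 4th lowest set bit is bit 14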
The version from bit-twiddling hacks adapted to this case is, for example,
unsigned int nth_bit_set(uint32_t value, unsigned int n)
{
    const uint32_t pop2  = (value & 0x55555555u) + ((value >> 1) & 0x55555555u);
    const uint32_t pop4  = (pop2 & 0x33333333u) + ((pop2 >> 2) & 0x33333333u);
    const uint32_t pop8  = (pop4 & 0x0f0f0f0fu) + ((pop4 >> 4) & 0x0f0f0f0fu);
    const uint32_t pop16 = (pop8 & 0x00ff00ffu) + ((pop8 >> 8) & 0x00ff00ffu);
    const uint32_t pop32 = (pop16 & 0x000000ffu) + ((pop16 >>16) & 0x000000ffu);
    unsigned int rank = 0;
    unsigned int temp;

    if (n++ >= pop32)
        return 32;

    temp = pop16 & 0xffu;
    /* if (n > temp) { n -= temp; rank += 16; } */
    rank += ((temp - n) & 256) >> 4;
    n -= temp & ((temp - n) >> 8);

    temp = (pop8 >> rank) & 0xffu;
    /* if (n > temp) { n -= temp; rank += 8; } */
    rank += ((temp - n) & 256) >> 5;
    n -= temp & ((temp - n) >> 8);

    temp = (pop4 >> rank) & 0x0fu;
    /* if (n > temp) { n -= temp; rank += 4; } */
    rank += ((temp - n) & 256) >> 6;
    n -= temp & ((temp - n) >> 8);

    temp = (pop2 >> rank) & 0x03u;
    /* if (n > temp) { n -= temp; rank += 2; } */
    rank += ((temp - n) & 256) >> 7;
    n -= temp & ((temp - n) >> 8);

    temp = (value >> rank) & 0x01u;
    /* if (n > temp) rank += 1; */
    rank += ((temp - n) & 256) >> 8;

    return rank;
}
which, when compiled in a separate compilation unit, on gcc-5.4.0 using -Wall -O3 -march=native -mtune=native on Intel Core i5-4200u, yields
00400a40 <nth_bit_set>:
400a40: 89 f9 mov %edi,%ecx
400a42: 89 f8 mov %edi,%eax
400a44: 55 push %rbp
400a45: 40 0f b6 f6 movzbl %sil,%esi
400a49: d1 e9 shr %ecx
400a4b: 25 55 55 55 55 and $0x55555555,%eax
400a50: 53 push %rbx
400a51: 81 e1 55 55 55 55 and $0x55555555,%ecx
400a57: 01 c1 add %eax,%ecx
400a59: 41 89 c8 mov %ecx,%r8d
400a5c: 89 c8 mov %ecx,%eax
400a5e: 41 c1 e8 02 shr $0x2,%r8d
400a62: 25 33 33 33 33 and $0x33333333,%eax
400a67: 41 81 e0 33 33 33 33 and $0x33333333,%r8d
400a6e: 41 01 c0 add %eax,%r8d
400a71: 45 89 c1 mov %r8d,%r9d
400a74: 44 89 c0 mov %r8d,%eax
400a77: 41 c1 e9 04 shr $0x4,%r9d
400a7b: 25 0f 0f 0f 0f and $0xf0f0f0f,%eax
400a80: 41 81 e1 0f 0f 0f 0f and $0xf0f0f0f,%r9d
400a87: 41 01 c1 add %eax,%r9d
400a8a: 44 89 c8 mov %r9d,%eax
400a8d: 44 89 ca mov %r9d,%edx
400a90: c1 e8 08 shr $0x8,%eax
400a93: 81 e2 ff 00 ff 00 and $0xff00ff,%edx
400a99: 25 ff 00 ff 00 and $0xff00ff,%eax
400a9e: 01 d0 add %edx,%eax
400aa0: 0f b6 d8 movzbl %al,%ebx
400aa3: c1 e8 10 shr $0x10,%eax
400aa6: 0f b6 d0 movzbl %al,%edx
400aa9: b8 20 00 00 00 mov $0x20,%eax
400aae: 01 da add %ebx,%edx
400ab0: 39 f2 cmp %esi,%edx
400ab2: 77 0c ja 400ac0 <nth_bit_set+0x80>
400ab4: 5b pop %rbx
400ab5: 5d pop %rbp
400ab6: c3 retq
400ac0: 83 c6 01 add $0x1,%esi
400ac3: 89 dd mov %ebx,%ebp
400ac5: 29 f5 sub %esi,%ebp
400ac7: 41 89 ea mov %ebp,%r10d
400aca: c1 ed 08 shr $0x8,%ebp
400acd: 41 81 e2 00 01 00 00 and $0x100,%r10d
400ad4: 21 eb and %ebp,%ebx
400ad6: 41 c1 ea 04 shr $0x4,%r10d
400ada: 29 de sub %ebx,%esi
400adc: c4 42 2b f7 c9 shrx %r10d,%r9d,%r9d
400ae1: 41 0f b6 d9 movzbl %r9b,%ebx
400ae5: 89 dd mov %ebx,%ebp
400ae7: 29 f5 sub %esi,%ebp
400ae9: 41 89 e9 mov %ebp,%r9d
400aec: 41 81 e1 00 01 00 00 and $0x100,%r9d
400af3: 41 c1 e9 05 shr $0x5,%r9d
400af7: 47 8d 14 11 lea (%r9,%r10,1),%r10d
400afb: 41 89 e9 mov %ebp,%r9d
400afe: 41 c1 e9 08 shr $0x8,%r9d
400b02: c4 42 2b f7 c0 shrx %r10d,%r8d,%r8d
400b07: 41 83 e0 0f and $0xf,%r8d
400b0b: 44 21 cb and %r9d,%ebx
400b0e: 45 89 c3 mov %r8d,%r11d
400b11: 29 de sub %ebx,%esi
400b13: 5b pop %rbx
400b14: 41 29 f3 sub %esi,%r11d
400b17: 5d pop %rbp
400b18: 44 89 da mov %r11d,%edx
400b1b: 41 c1 eb 08 shr $0x8,%r11d
400b1f: 81 e2 00 01 00 00 and $0x100,%edx
400b25: 45 21 d8 and %r11d,%r8d
400b28: c1 ea 06 shr $0x6,%edx
400b2b: 44 29 c6 sub %r8d,%esi
400b2e: 46 8d 0c 12 lea (%rdx,%r10,1),%r9d
400b32: c4 e2 33 f7 c9 shrx %r9d,%ecx,%ecx
400b37: 83 e1 03 and $0x3,%ecx
400b3a: 41 89 c8 mov %ecx,%r8d
400b3d: 41 29 f0 sub %esi,%r8d
400b40: 44 89 c0 mov %r8d,%eax
400b43: 41 c1 e8 08 shr $0x8,%r8d
400b47: 25 00 01 00 00 and $0x100,%eax
400b4c: 44 21 c1 and %r8d,%ecx
400b4f: c1 e8 07 shr $0x7,%eax
400b52: 29 ce sub %ecx,%esi
400b54: 42 8d 14 08 lea (%rax,%r9,1),%edx
400b58: c4 e2 6b f7 c7 shrx %edx,%edi,%eax
400b5d: 83 e0 01 and $0x1,%eax
400b60: 29 f0 sub %esi,%eax
400b62: 25 00 01 00 00 and $0x100,%eax
400b67: c1 e8 08 shr $0x8,%eax
400b6a: 01 d0 add %edx,%eax
400b6c: c3 retq
When compiled as a separate compilation unit, timing on this machine is difficult, because the actual operation is as fast as calling a do-nothing function (also compiled in a separate compilation unit); essentially, the calculation is done during the latencies associated with the function call.
It seems to be slightly faster than my suggestion of a binary search,
unsigned int nth_bit_set(uint32_t value, unsigned int n)
{
    uint32_t mask = 0x0000FFFFu;
    unsigned int size = 16u;
    unsigned int base = 0u;

    if (n++ >= __builtin_popcount(value))
        return 32;

    while (size > 0) {
        const unsigned int count = __builtin_popcount(value & mask);
        if (n > count) {
            base += size;
            size >>= 1;
            mask |= mask << size;
        } else {
            size >>= 1;
            mask >>= size;
        }
    }
    return base;
}
where the loop is executed exactly five times, compiling to
00400ba0 <nth_bit_set>:
400ba0: 83 c6 01 add $0x1,%esi
400ba3: 31 c0 xor %eax,%eax
400ba5: b9 10 00 00 00 mov $0x10,%ecx
400baa: ba ff ff 00 00 mov $0xffff,%edx
400baf: 45 31 db xor %r11d,%r11d
400bb2: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1)
400bb8: 41 89 c9 mov %ecx,%r9d
400bbb: 41 89 f8 mov %edi,%r8d
400bbe: 41 d0 e9 shr %r9b
400bc1: 41 21 d0 and %edx,%r8d
400bc4: c4 62 31 f7 d2 shlx %r9d,%edx,%r10d
400bc9: f3 45 0f b8 c0 popcnt %r8d,%r8d
400bce: 41 09 d2 or %edx,%r10d
400bd1: 44 38 c6 cmp %r8b,%sil
400bd4: 41 0f 46 cb cmovbe %r11d,%ecx
400bd8: c4 e2 33 f7 d2 shrx %r9d,%edx,%edx
400bdd: 41 0f 47 d2 cmova %r10d,%edx
400be1: 01 c8 add %ecx,%eax
400be3: 44 89 c9 mov %r9d,%ecx
400be6: 45 84 c9 test %r9b,%r9b
400be9: 75 cd jne 400bb8 <nth_bit_set+0x18>
400beb: c3 retq
as in, not more than 31 cycles in 95% of calls to the binary search version, compared to not more than 28 cycles in 95% of calls to the bit-hack version; both run within 28 cycles in 50% of the cases. (The loop version takes up to 56 cycles in 95% of calls, up to 37 cycles median.)
To determine which one is better in actual real-world code, one would have to do a proper benchmark within the real-world task; at least with current x86-64 architecture processors, the work done is easily hidden in latencies incurred elsewhere (like function calls).
My answer is mostly based on this implementation of a 64-bit word select method (hint: look only at the MARISA_USE_POPCNT, MARISA_X64, MARISA_USE_SSE3 code paths):
It works in two steps, first selecting the byte containing the n-th set bit and then using a lookup table inside the byte:
1. Extract the lower and higher nibbles for every byte (bitmasks 0xF and 0xF0; shift the higher nibbles down)
2. Replace the nibble values by their popcount (_mm_shuffle_epi8 with A000120)
3. Sum the popcounts of the lower and upper nibbles (normal SSE addition) to get the byte popcounts
4. Compute the prefix sum over all byte popcounts (multiplication with 0x01010101...)
5. Propagate the position n to all bytes (SSE broadcast, or again multiplication with 0x01010101...)
6. Do a bytewise comparison (_mm_cmpgt_epi8 leaves 0xFF in every byte smaller than n)
7. Compute the byte offset by doing a popcount on the result
Now we know which byte contains the bit and a simple byte lookup table like in grek40's answer suffices to get the result.
Note, however, that I have not really benchmarked this result against other implementations; I have only seen it to be quite efficient (and branchless).
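As a hedged, scalar (non-SSE) sketch of the same byte-select idea, assuming GCC/Clang builtins (which other answers here already use): per-byte popcounts via SWAR arithmetic, prefix sums via the multiplication trick, a bytewise comparison against n to locate the byte, and a small loop instead of a lookup table to finish inside it:
#include <stdint.h>

/* Hedged scalar sketch of the byte-select idea; n is 0-based.
   Returns 64 when fewer than n+1 bits are set. */
unsigned nth_set_bit64(uint64_t x, unsigned n) {
    const uint64_t H = 0x8080808080808080ull;
    /* SWAR popcount of every byte in parallel */
    uint64_t b = x - ((x >> 1) & 0x5555555555555555ull);
    b = (b & 0x3333333333333333ull) + ((b >> 2) & 0x3333333333333333ull);
    b = (b + (b >> 4)) & 0x0F0F0F0F0F0F0F0Full;
    /* Inclusive per-byte prefix sums via the multiplication trick */
    uint64_t prefix = b * 0x0101010101010101ull;
    if (n >= (unsigned)(prefix >> 56)) return 64;   /* top byte = total popcount */
    /* Broadcast n+1 and mark (high bit) every byte whose prefix sum reaches it */
    uint64_t ge = ((prefix | H) - (n + 1) * 0x0101010101010101ull) & H;
    unsigned byte = 8 - (unsigned)__builtin_popcountll(ge); /* first marked byte */
    unsigned before = byte ? (unsigned)((prefix >> (8 * byte - 8)) & 0xFF) : 0;
    unsigned k = n - before;              /* rank of the target bit in its byte */
    unsigned v = (unsigned)((x >> (8 * byte)) & 0xFF);
    while (k--) v &= v - 1;               /* drop the k lowest set bits */
    return 8 * byte + (unsigned)__builtin_ctz(v);
}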
I can't see a method without a loop; what springs to mind would be:
int set = 0;
int pos = 0;
while (set < n) {
    if ((bits & 0x01) == 1) set++;
    bits = bits >> 1;
    pos++;
}
after which, pos would hold the position of the nth lowest-value set bit.
The only other thing that I can think of would be a divide and conquer approach, which might yield O(log(n)) rather than O(n)...but probably not.
Edit: you said any behaviour, so non-termination is ok, right? :P
def bitN (l: Long, i: Int) : Long = {
    def bitI (l: Long, i: Int) : Long =
        if (i == 0) 1L else
        2 * {
            if (l % 2 == 0) bitI (l / 2, i) else bitI (l / 2, i - 1)
        }
    bitI (l, i) / 2
}
A recursive method (in Scala). Decrement i, the position, if the modulo-2 test yields 1. While returning, multiply by 2. Since the multiplication is invoked as the last operation, it is not tail recursive, but since Longs are of known size in advance, the maximum stack depth is not too big.
scala> n.toBinaryString.replaceAll ("(.{8})", "$1 ")
res117: java.lang.String = 10110011 11101110 01011110 01111110 00111101 11100101 11101011 011000
scala> bitN (n, 40) .toBinaryString.replaceAll ("(.{8})", "$1 ")
res118: java.lang.String = 10000000 00000000 00000000 00000000 00000000 00000000 00000000 000000
Edit
After giving it some thought and using the __builtin_popcount function, I figured it might be better to decide on the relevant byte and then compute the whole result instead of incrementally adding/subtracting numbers. Here is an updated version:
int GetBitAtPosition(unsigned i, unsigned n)
{
    unsigned bitCount;

    bitCount = __builtin_popcount(i & 0x00ffffff);
    if (bitCount <= n)
    {
        return (24 + LUT_BitPosition[i >> 24][n - bitCount]);
    }
    bitCount = __builtin_popcount(i & 0x0000ffff);
    if (bitCount <= n)
    {
        return (16 + LUT_BitPosition[(i >> 16) & 0xff][n - bitCount]);
    }
    bitCount = __builtin_popcount(i & 0x000000ff);
    if (bitCount <= n)
    {
        return (8 + LUT_BitPosition[(i >> 8) & 0xff][n - bitCount]);
    }
    return LUT_BitPosition[i & 0xff][n];
}
I felt like creating a LUT-based solution where the number is inspected in byte chunks; however, the LUT for the n-th bit position grew quite large (256*8), and the LUT-free version that was discussed in the comments might be better.
Generally the algorithm would look like this:
unsigned i = 0x000006B5;
unsigned n = 4;
unsigned result = 0;
unsigned bitCount;

while (i)
{
    bitCount = LUT_BitCount[i & 0xff];
    if (n < bitCount)
    {
        result += LUT_BitPosition[i & 0xff][n];
        break; // found
    }
    else
    {
        n -= bitCount;
        result += 8;
        i >>= 8;
    }
}
It might be worth unrolling the loop into its up to 4 iterations to get the best performance on 32-bit numbers.
The LUT for bitcount (could be replaced by __builtin_popcount):
unsigned LUT_BitCount[] = {
0, 1, 1, 2, 1, 2, 2, 3, // 0-7
1, 2, 2, 3, 2, 3, 3, 4, // 8-15
1, 2, 2, 3, 2, 3, 3, 4, // 16-23
2, 3, 3, 4, 3, 4, 4, 5, // 24-31
1, 2, 2, 3, 2, 3, 3, 4, // 32-39
2, 3, 3, 4, 3, 4, 4, 5, // 40-47
2, 3, 3, 4, 3, 4, 4, 5, // 48-55
3, 4, 4, 5, 4, 5, 5, 6, // 56-63
1, 2, 2, 3, 2, 3, 3, 4, // 64-71
2, 3, 3, 4, 3, 4, 4, 5, // 72-79
2, 3, 3, 4, 3, 4, 4, 5, // 80-87
3, 4, 4, 5, 4, 5, 5, 6, // 88-95
2, 3, 3, 4, 3, 4, 4, 5, // 96-103
3, 4, 4, 5, 4, 5, 5, 6, // 104-111
3, 4, 4, 5, 4, 5, 5, 6, // 112-119
4, 5, 5, 6, 5, 6, 6, 7, // 120-127
1, 2, 2, 3, 2, 3, 3, 4, // 128
2, 3, 3, 4, 3, 4, 4, 5, // 136
2, 3, 3, 4, 3, 4, 4, 5, // 144
3, 4, 4, 5, 4, 5, 5, 6, // 152
2, 3, 3, 4, 3, 4, 4, 5, // 160
3, 4, 4, 5, 4, 5, 5, 6, // 168
3, 4, 4, 5, 4, 5, 5, 6, // 176
4, 5, 5, 6, 5, 6, 6, 7, // 184
2, 3, 3, 4, 3, 4, 4, 5, // 192
3, 4, 4, 5, 4, 5, 5, 6, // 200
3, 4, 4, 5, 4, 5, 5, 6, // 208
4, 5, 5, 6, 5, 6, 6, 7, // 216
3, 4, 4, 5, 4, 5, 5, 6, // 224
4, 5, 5, 6, 5, 6, 6, 7, // 232
4, 5, 5, 6, 5, 6, 6, 7, // 240
5, 6, 6, 7, 6, 7, 7, 8, // 248-255
};
The LUT for bit position within a byte:
unsigned LUT_BitPosition[][8] = {
// 0-7
{UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
// 8-15
{3,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
// 16-31
{4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{3,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,4,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,4,UINT_MAX,UINT_MAX,UINT_MAX},
// 32-63
{5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{3,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,5,UINT_MAX,UINT_MAX,UINT_MAX},
{4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,4,5,UINT_MAX,UINT_MAX,UINT_MAX},
{3,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,4,5,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,4,5,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,4,5,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,4,5,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,4,5,UINT_MAX,UINT_MAX},
// 64-127
{6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{3,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,6,UINT_MAX,UINT_MAX,UINT_MAX},
{4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,4,6,UINT_MAX,UINT_MAX,UINT_MAX},
{3,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,4,6,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,4,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,4,6,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,4,6,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,4,6,UINT_MAX,UINT_MAX},
{5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{3,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,5,6,UINT_MAX,UINT_MAX},
{4,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{2,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,4,5,6,UINT_MAX,UINT_MAX},
{3,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,4,5,6,UINT_MAX,UINT_MAX},
{2,3,4,5,6,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,4,5,6,UINT_MAX,UINT_MAX},
{1,2,3,4,5,6,UINT_MAX,UINT_MAX},
{0,1,2,3,4,5,6,UINT_MAX},
// 128-255
{7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{3,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,7,UINT_MAX,UINT_MAX,UINT_MAX},
{4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,4,7,UINT_MAX,UINT_MAX,UINT_MAX},
{3,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,4,7,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,4,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,4,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,4,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,4,7,UINT_MAX,UINT_MAX},
{5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{3,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,5,7,UINT_MAX,UINT_MAX},
{4,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{2,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,4,5,7,UINT_MAX,UINT_MAX},
{3,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,4,5,7,UINT_MAX,UINT_MAX},
{2,3,4,5,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,4,5,7,UINT_MAX,UINT_MAX},
{1,2,3,4,5,7,UINT_MAX,UINT_MAX},
{0,1,2,3,4,5,7,UINT_MAX},
{6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{2,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{3,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{2,3,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,3,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,3,6,7,UINT_MAX,UINT_MAX},
{4,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{2,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,4,6,7,UINT_MAX,UINT_MAX},
{3,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,4,6,7,UINT_MAX,UINT_MAX},
{2,3,4,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,4,6,7,UINT_MAX,UINT_MAX},
{1,2,3,4,6,7,UINT_MAX,UINT_MAX},
{0,1,2,3,4,6,7,UINT_MAX},
{5,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{1,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{2,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,2,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,2,5,6,7,UINT_MAX,UINT_MAX},
{3,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,3,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,3,5,6,7,UINT_MAX,UINT_MAX},
{2,3,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,3,5,6,7,UINT_MAX,UINT_MAX},
{1,2,3,5,6,7,UINT_MAX,UINT_MAX},
{0,1,2,3,5,6,7,UINT_MAX},
{4,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX,UINT_MAX},
{0,4,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{1,4,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,1,4,5,6,7,UINT_MAX,UINT_MAX},
{2,4,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,2,4,5,6,7,UINT_MAX,UINT_MAX},
{1,2,4,5,6,7,UINT_MAX,UINT_MAX},
{0,1,2,4,5,6,7,UINT_MAX},
{3,4,5,6,7,UINT_MAX,UINT_MAX,UINT_MAX},
{0,3,4,5,6,7,UINT_MAX,UINT_MAX},
{1,3,4,5,6,7,UINT_MAX,UINT_MAX},
{0,1,3,4,5,6,7,UINT_MAX},
{2,3,4,5,6,7,UINT_MAX,UINT_MAX},
{0,2,3,4,5,6,7,UINT_MAX},
{1,2,3,4,5,6,7,UINT_MAX},
{0,1,2,3,4,5,6,7},
};
My approach is to calculate the population count for each 8-bit quarter of the 32-bit integer in parallel, then find which quarter contains the nth bit. The population counts of the quarters below the found one can be summed up as the initial value of the later calculation.
After that, count set bits one by one until n is reached. Without branches, and using an incomplete implementation of the population count algorithm, my example is the following:
#include <stdio.h>
#include <stdint.h>

int main() {
    uint32_t n = 10, test = 3124375902u; /* 10111010001110100011000101011110 */
    uint32_t index, popcnt, quarter = 0, q_popcnt;

    /* count set bits of each quarter of 32-bit integer in parallel */
    q_popcnt = test - ((test >> 1) & 0x55555555);
    q_popcnt = (q_popcnt & 0x33333333) + ((q_popcnt >> 2) & 0x33333333);
    q_popcnt = (q_popcnt + (q_popcnt >> 4)) & 0x0F0F0F0F;
    popcnt = q_popcnt;

    /* find which quarters can be summarized and summarize them */
    quarter += (n + 1 >= (q_popcnt & 0xff));
    quarter += (n + 1 >= ((q_popcnt += q_popcnt >> 8) & 0xff));
    quarter += (n + 1 >= ((q_popcnt += q_popcnt >> 16) & 0xff));
    quarter += (n + 1 >= ((q_popcnt += q_popcnt >> 24) & 0xff));
    popcnt &= (UINT32_MAX >> (8 * quarter));
    popcnt = (popcnt * 0x01010101) >> 24;

    /* find the index of nth bit in quarter where it should be */
    index = 8 * quarter;
    index += ((popcnt += (test >> index) & 1) <= n);
    index += ((popcnt += (test >> index) & 1) <= n);
    index += ((popcnt += (test >> index) & 1) <= n);
    index += ((popcnt += (test >> index) & 1) <= n);
    index += ((popcnt += (test >> index) & 1) <= n);
    index += ((popcnt += (test >> index) & 1) <= n);
    index += ((popcnt += (test >> index) & 1) <= n);
    index += ((popcnt += (test >> index) & 1) <= n);

    printf("index = %u\n", index);
    return 0;
}
A simple approach which uses loops and conditionals can be the following as well:
#include <stdio.h>
#include <stdint.h>

int main() {
    uint32_t n = 11, test = 3124375902u; /* 10111010001110100011000101011110 */
    uint32_t popcnt = 0, index = 0;
    while (popcnt += ((test >> index) & 1), popcnt <= n && ++index < 32);
    printf("index = %u\n", index);
    return 0;
}
I know the question asks for something faster than a loop, but a complicated loop-less answer is likely to take longer than a quick loop.
If the computer has 32-bit ints and v is a random value, then it might have, for example, 16 ones; and if we are looking for a random place among those 16 ones, we might typically be looking for the 8th one. 7 or 8 times round a loop with just a couple of statements isn't too bad.
int findNthBit(unsigned int n, int v)
{
    int next;
    if (n > __builtin_popcount(v)) return 0;
    while (next = v & (v-1), --n)
    {
        v = next;
    }
    return v ^ next;
}
The loop works by removing the lowest set bit (n-1) times.
The n'th one bit that would be removed is the one bit we were looking for.
If anybody wants to test this ....
#include "stdio.h"
#include "assert.h"
// function here
int main() {
assert(findNthBit(1, 0)==0);
assert(findNthBit(1, 0xf0f)==1<<0);
assert(findNthBit(2, 0xf0f)==1<<1);
assert(findNthBit(3, 0xf0f)==1<<2);
assert(findNthBit(4, 0xf0f)==1<<3);
assert(findNthBit(5, 0xf0f)==1<<8);
assert(findNthBit(6, 0xf0f)==1<<9);
assert(findNthBit(7, 0xf0f)==1<<10);
assert(findNthBit(8, 0xf0f)==1<<11);
assert(findNthBit(9, 0xf0f)==0);
printf("looks good\n");
}
If there are concerns about the number of times the loop is executed, for example if the function is regularly called with large values of n, it's simple to add an extra line or two of the following form:
if (n > 8) return findNthBit(n-__builtin_popcount(v&0xff), v>>8) << 8;
or
if (n > 12) return findNthBit(n - __builtin_popcount(v&0xfff), v>>12) << 12;
The idea here is that the n'th one will never be located in the bottom n-1 bits. A better version clears not only the bottom 8 or 12 bits, but all the bottom (n-1) bits when n is large-ish and we don't want to loop that many times.
if (n > 7) return findNthBit(n - __builtin_popcount(v & ((1<<(n-1))-1)), v>>(n-1)) << (n-1);
I tested this with findNthBit(20, 0xaf5faf5f) and after clearing out the bottom 19 bits because the answer wasn't to be found there, it looked for the 5th bit in the remaining bits by looping 4 times to remove 4 ones.
So an improved version is
int findNthBit(unsigned int n, int v)
{
    int next;
    if (n > __builtin_popcount(v)) return 0;
    if (n > 7) return findNthBit(n - __builtin_popcount(v & ((1<<(n-1))-1)), v>>(n-1)) << (n-1);
    while (next = v & (v-1), --n)
    {
        v = next;
    }
    return v ^ next;
}
The value 7, limiting looping is chosen fairly arbitrarily as a compromise between limiting looping and limiting recursion. The function could be further improved by removing recursion and keeping track of a shift amount instead. I may try this if I get some peace from home schooling my daughter!
Here is a final version with the recursion removed by keeping track of the number of low order bits shifted out from the bottom of the bits being searched.
Final version
int findNthBit(unsigned int n, int v)
{
    int shifted = 0; // running total
    int nBits;       // value for this iteration

    // handle no solution
    if (n > __builtin_popcount(v)) return 0;

    while (n > 7)
    {
        // for large n shift out lower n-1 bits from v.
        nBits = n - 1;
        n -= __builtin_popcount(v & ((1<<nBits)-1));
        v >>= nBits;
        shifted += nBits;
    }

    int next;
    // n is now small, clear out n-1 bits and return the next bit
    // v&(v-1): a well known software trick to remove the lowest set bit.
    while (next = v & (v-1), --n)
    {
        v = next;
    }
    return (v ^ next) << shifted;
}
Building on the answer given by Jukka Suomela, which uses a machine-specific instruction that may not necessarily be available, it is also possible to write a function that does exactly the same thing as _pdep_u64 without any machine dependencies. It must loop over the set bits in one of the arguments, but can still be described as a constexpr function for C++11.
constexpr inline uint64_t deposit_bits(uint64_t x, uint64_t mask, uint64_t b, uint64_t res) {
return mask != 0 ? deposit_bits(x, mask & (mask - 1), b << 1, ((x & b) ? (res | (mask & (-mask))) : res)) : res;
}
constexpr inline uint64_t nthset(uint64_t x, unsigned n) {
return deposit_bits(1ULL << n, x, 1, 0);
}
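For example, the following hold at compile time (using C++14 binary literals for readability; with plain C++11, write the masks in hex):
static_assert(nthset(0b1101, 0) == 0b0001, "lowest set bit");
static_assert(nthset(0b1101, 1) == 0b0100, "second lowest set bit");
static_assert(nthset(0b1101, 3) == 0, "only three bits are set");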
Based on a method by Juha Järvi published in the famous Bit Twiddling Hacks, I tested this implementation where n and i are used as in the question:
/* Wrapped into a function so the fragment compiles (assumes <stdint.h>);
   i is the value and n is 0-based, as in the question. */
unsigned int nth_bit_set(uint32_t i, unsigned int n)
{
    uint32_t a, b, c;
    unsigned int r, s, t;

    a = i - (i >> 1 & 0x55555555);
    b = (a & 0x33333333) + (a >> 2 & 0x33333333);
    c = b + (b >> 4) & 0x0f0f0f0f;

    r = n + 1;
    s = 0;

    t = c + (c >> 8) & 0xff;
    if (r > t) {
        s += 16;
        r -= t;
    }
    t = c >> s & 0xf;
    if (r > t) {
        s += 8;
        r -= t;
    }
    t = b >> s & 0x7;
    if (r > t) {
        s += 4;
        r -= t;
    }
    t = a >> s & 0x3;
    if (r > t) {
        s += 2;
        r -= t;
    }
    t = i >> s & 0x1;
    if (r > t)
        s++;
    return (s);
}
Based on my own tests, this is about as fast as the loop on x86, whereas it is 20% faster on arm64 and probably a lot faster on arm due to the fast conditional instructions, but I can't test this right now.
The PDEP solution is great, but some languages such as Java do not expose this intrinsic yet, even though they are efficient at other low-level operations. So I came up with the following fallback for such cases: a branchless binary search.
// n must be using 0-based indexing.
// This method produces correct results only if n is smaller
// than the number of set bits.
public static int getNthSetBit(long mask64, int n) {
    // Binary search without branching
    int base = 0;

    final int low32 = (int) mask64;
    final int high32n = n - Integer.bitCount(low32);
    final int inLow32 = high32n >>> 31;
    final int inHigh32 = inLow32 ^ 1;
    final int shift32 = inHigh32 << 5;
    final int mask32 = (int) (mask64 >>> shift32);
    n = ((-inLow32) & n) | ((-inHigh32) & high32n);
    base += shift32;

    final int low16 = mask32 & 0xffff;
    final int high16n = n - Integer.bitCount(low16);
    final int inLow16 = high16n >>> 31;
    final int inHigh16 = inLow16 ^ 1;
    final int shift16 = inHigh16 << 4;
    final int mask16 = (mask32 >>> shift16) & 0xffff;
    n = ((-inLow16) & n) | ((-inHigh16) & high16n);
    base += shift16;

    final int low8 = mask16 & 0xff;
    final int high8n = n - Integer.bitCount(low8);
    final int inLow8 = high8n >>> 31;
    final int inHigh8 = inLow8 ^ 1;
    final int shift8 = inHigh8 << 3;
    final int mask8 = (mask16 >>> shift8) & 0xff;
    n = ((-inLow8) & n) | ((-inHigh8) & high8n);
    base += shift8;

    final int low4 = mask8 & 0xf;
    final int high4n = n - Integer.bitCount(low4);
    final int inLow4 = high4n >>> 31;
    final int inHigh4 = inLow4 ^ 1;
    final int shift4 = inHigh4 << 2;
    final int mask4 = (mask8 >>> shift4) & 0xf;
    n = ((-inLow4) & n) | ((-inHigh4) & high4n);
    base += shift4;

    final int low2 = mask4 & 3;
    final int high2n = n - (low2 >> 1) - (low2 & 1);
    final int inLow2 = high2n >>> 31;
    final int inHigh2 = inLow2 ^ 1;
    final int shift2 = inHigh2 << 1;
    final int mask2 = (mask4 >>> shift2) & 3;
    n = ((-inLow2) & n) | ((-inHigh2) & high2n);
    base += shift2;

    // For the 2 bits remaining, we can take a shortcut
    return base + (n | ((mask2 ^ 1) & 1));
}

Generating random points to build a procedural line

I want to randomly generate points; well, at least there should be a limitation on the y-axis. Later I connect the points into a line which should proceed in a simple animation. You can imagine this as the random walk of a drunken person, going uphill and downhill.
This sounds very simple. I searched around the web and found that this could be accomplished using a Markov chain. I think this idea is really interesting.
You can create the first state of your scene by yourself and pass this state as input to the Markov chain algorithm. The algorithm randomly changes this state and creates a walk.
However I cannot find any example of that algorithm, and no source code. I just found an applet that demonstrates the Markov chain algorithm: http://www.probability.ca/jeff/java/unif.html
Please suggest some code. Any other ideas on how to accomplish this are appreciated too.
I painted an example. So I want the line to proceed in a similar way. There are valleys, slopes... they are random, but the randomness still applies to the initial state of the line. This is why I found Markov chains so interesting here: http://www.suite101.com/content/implementing-markov-chains-a24146
Here's some code in Lua:
absstepmax = 25
ymin = -100
ymax = 100
x = 0
y = 5
for i = 1, 20 do
    y = y + (math.random(2*absstepmax) - absstepmax - 1)
    y = math.max(ymin, math.min(ymax, y))
    x = x + 5
    print (x, y)
end
absstepmax limits the size of a y step per iteration.
ymin and ymax limit the extent of y.
There is no bias in the example, i.e., y can change symmetrically up or down. If you want your "drunk" to tend more "downhill", you can change the offset after the call to random from absstepmax - 1 to absstepmax - 5 or whatever bias you like.
In this example, the x step is fixed. You may make this random as well using the same mechanisms.
Here are some sample runs:
> absstepmax = 25
> ymin = -100
> ymax = 100
> x = 0
> y = 5
> for i = 1, 20 do
>> y = y + (math.random(2*absstepmax) - absstepmax - 1)
>> y = math.max(ymin, math.min(ymax, y))
>> x = x + 5
>> print (x,y)
>> end
5 4
10 22
15 37
20 39
25 50
30 40
35 21
40 22
45 12
50 16
55 16
60 12
65 -1
70 -8
75 -14
80 -17
85 -19
90 -25
95 -37
100 -59
> absstepmax = 25
> ymin = -100
> ymax = 100
> x = 0
> y = 5
> for i = 1, 20 do
>> y = y + (math.random(2*absstepmax) - absstepmax - 1)
>> y = math.max(ymin, math.min(ymax, y))
>> x = x + 5
>> print (x,y)
>> end
5 -2
10 -15
15 -7
20 1
25 1
30 12
35 23
40 45
45 43
50 65
55 56
60 54
65 54
70 62
75 57
80 62
85 86
90 68
95 76
100 68
>
