Finding largest difference in array of compass headings - algorithm

I'm trying to determine the "range" of compass headings over the last X seconds. Example: over the last minute, my heading has been between 120deg and 140deg on the compass. Easy enough, right? I have an array with the compass headings over the time period, say 1 reading every second.
[ 125, 122, 120, 125, 130, 139, 140, 138 ]
I can take the minimum and maximum values and there you go. My range is from 120 to 140.
Except it's not that simple. Take, for example, if my heading has shifted from 10 degrees to 350 degrees (i.e. it "passed" through North, changing by -20deg).
Now my array might look something like this:
[ 9, 10, 6, 3, 358, 355, 350, 353 ]
Now the min is 3 and the max is 358, which is not what I need :( I'm looking for the most "right hand" (clockwise) value and the most "left hand" (counter-clockwise) value.
The only way I can think of is finding the largest arc along the circle that includes none of the values in my array, but I don't even know how I would do that.
Would really appreciate any help!

Problem Analysis
To summarize the problem, it sounds like you want to find the two readings that:
are closest together (for simplicity: measured in a clockwise direction), AND
contain all of the other readings between them.
So in your second example, 9 and 10 are only 1° apart, but they do not contain all the other readings. Conversely, traveling clockwise from 10 to 9 would contain all of the other readings, but they are 359° apart in that direction, so they are not closest.
In this case, I'm not sure if using the minimum and maximum readings will help. Instead, I'd recommend sorting all of the readings. Then you can more easily check the two criteria specified above.
Here's the second example you provided, sorted in ascending order:
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
If we start from the beginning, we know that traveling from reading 3 to reading 358 will encompass all of the other readings, but they are 358 - 3 = 355° apart. We can continue scanning the results progressively. Note that once we circle around, we have to add 360 to properly calculate the degrees of separation.
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
*--------------------------> 358 - 3 = 355° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
-> *----------------------------- (360 + 3) - 6 = 357° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
----> *-------------------------- (360 + 6) - 9 = 357° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
-------> *----------------------- (360 + 9) - 10 = 359° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
----------> *------------------- (360 + 10) - 350 = 20° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
--------------> *-------------- (360 + 350) - 353 = 357° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
-------------------> *--------- (360 + 353) - 355 = 358° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
------------------------> *---- (360 + 355) - 358 = 357° separation
Pseudocode Solution
Here's a pseudocode algorithm for determining the minimum degree range of reading values. There are definitely ways it could be optimized if performance is a concern.
// Somehow, we need to get our reading data into the program, sorted
// in ascending order.
// If readings are always whole numbers, you can use an int[] array
// instead of a double[] array. If we use an int[] array here, change
// the "minimumInclusiveReadingRange" variable below to be an int too.
double[] readings = populateAndSortReadingsArray();

if (readings.length == 0)
{
    // Handle case where no readings are provided. Show a warning,
    // throw an error, or whatever the requirement is.
}
else
{
    // We want to track the endpoints of the smallest inclusive range.
    // These values will be overwritten each time a better range is found.
    int minimumInclusiveEndpointIndex1;
    int minimumInclusiveEndpointIndex2;
    double minimumInclusiveReadingRange; // This is convenient, but not necessary.
                                         // We could determine it using the
                                         // endpoint indices instead.

    // Check the range of the greatest and least readings first. Since
    // the readings are sorted, the greatest reading is the last element
    // and the least reading is the first element.
    minimumInclusiveReadingRange = readings[readings.length - 1] - readings[0];
    minimumInclusiveEndpointIndex1 = 0;
    minimumInclusiveEndpointIndex2 = readings.length - 1;

    // Potential to skip some processing. If the ends are 180 or fewer
    // degrees apart, they already represent the minimum inclusive reading
    // range and the for loop below could be skipped.
    for (int i = 1; i < readings.length; i++)
    {
        if ((360.0 + readings[i-1]) - readings[i] < minimumInclusiveReadingRange)
        {
            minimumInclusiveReadingRange = (360.0 + readings[i-1]) - readings[i];
            minimumInclusiveEndpointIndex1 = i;
            minimumInclusiveEndpointIndex2 = i - 1;
        }
    }

    // Most likely, there will be some different readings, but there is an
    // edge case of all readings being the same:
    if (minimumInclusiveReadingRange == 0.0)
    {
        print("All readings were the same: " + readings[0]);
    }
    else
    {
        print("The range of compass readings was: " + minimumInclusiveReadingRange +
              " spanning from " + readings[minimumInclusiveEndpointIndex1] +
              " to " + readings[minimumInclusiveEndpointIndex2]);
    }
}
There is one additional edge case that this pseudocode algorithm does not cover, and that is the case where there are multiple minimum inclusive ranges...
Example 1: [0, 90, 180, 270] which has a range of 270 (90 to 0/360, 180 to 90, 270 to 180, and 0 to 270).
Example 2: [85, 95, 265, 275] which has a range of 190 (85 to 275 and 265 to 95)
If it's necessary to report each possible pair of endpoints that create the minimum inclusive range, this edge case would increase the complexity of the logic a bit. If all that matters is determining the value of the minimum inclusive range or it is sufficient to report just one pair that represents the minimum inclusive range, the provided algorithm should suffice.
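For reference, here's a compact runnable version of the same idea in C++ (the names HeadingRange and headingRange are mine; it assumes at least one reading and headings already in [0, 360)):

#include <algorithm>
#include <iostream>
#include <vector>

// The smallest clockwise arc (in degrees) containing every heading, together
// with its counter-clockwise ("from") and clockwise ("to") endpoints.
struct HeadingRange { double span, from, to; };

HeadingRange headingRange(std::vector<double> readings) {
    std::sort(readings.begin(), readings.end());
    const std::size_t n = readings.size();
    // Start with the arc from the smallest to the largest reading.
    HeadingRange best{readings[n - 1] - readings[0], readings[0], readings[n - 1]};
    // Try every arc that wraps through North (0/360) instead.
    for (std::size_t i = 1; i < n; ++i) {
        double span = (360.0 + readings[i - 1]) - readings[i];
        if (span < best.span) best = {span, readings[i], readings[i - 1]};
    }
    return best;
}

int main() {
    for (const auto& sample : {std::vector<double>{125, 122, 120, 125, 130, 139, 140, 138},
                               std::vector<double>{9, 10, 6, 3, 358, 355, 350, 353}}) {
        HeadingRange r = headingRange(sample);
        std::cout << r.span << " degrees, from " << r.from << " to " << r.to << '\n';
    }
}

On your two examples this prints a 20-degree range from 120 to 140, and a 20-degree range from 350 to 10 (clockwise through North).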

Related

How does one find the floor of the log-base-2 of an n-bit integer using bitwise operators?

I have a program in which it is necessary to calculate the floor of the log-base-2 of an integer very frequently. As a result, the performance of the standard library's log2 method in my language of choice (floor(std::log2([INT])) from <cmath> in C++) is unsatisfactory, and I would like to implement the quickest version of this algorithm possible. I have found versions online which use bitwise operators to calculate this value for 32-bit and 64-bit integers:
INT Y(log2i)(const INT m)
{
    /* Special case, zero or negative input. */
    if (m <= 0)
        return -1;
#if SIZEOF_PTRDIFF_T == 8
    /* Hash map with return values based on De Bruijn sequence. */
    static INT debruijn[64] =
    {
        0, 58, 1, 59, 47, 53, 2, 60, 39, 48, 27, 54, 33, 42, 3, 61, 51, 37, 40, 49,
        18, 28, 20, 55, 30, 34, 11, 43, 14, 22, 4, 62, 57, 46, 52, 38, 26, 32, 41,
        50, 36, 17, 19, 29, 10, 13, 21, 56, 45, 25, 31, 35, 16, 9, 12, 44, 24, 15,
        8, 23, 7, 6, 5, 63
    };
    register uint64_t v = (uint64_t)(m);
    /* Round down to one less than a power of 2. */
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    v |= v >> 32;
    /* 0x03f6eaf2cd271461 is a hexadecimal representation of a De Bruijn
     * sequence for binary words of length 6. The binary representation
     * starts with 000000111111. This is required to make it work with one
     * less than a power of 2 instead of an actual power of 2.
     */
    return debruijn[(uint64_t)(v * 0x03f6eaf2cd271461LU) >> 58];
#elif SIZEOF_PTRDIFF_T == 4
    /* Hash map with return values based on De Bruijn sequence. */
    static INT debruijn[32] =
    {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30, 8, 12, 20, 28,
        15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
    };
    register uint32_t v = (uint32_t)(m);
    /* Round down to one less than a power of 2. */
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    /* 0x07C4ACDD is a hexadecimal representation of a De Bruijn sequence for
     * binary words of length 5. The binary representation starts with
     * 0000011111. This is required to make it work with one less than a power
     * of 2 instead of an actual power of 2.
     */
    return debruijn[(uint32_t)(v * 0x07C4ACDDU) >> 27];
#else
#error Incompatible size of ptrdiff_t
#endif
}
(Above code taken from this link; the comments of said code reference this page, which gives a brief overview of how the algorithm works).
I need to implement a version of this algorithm for 256-bit integers. The general form, for an n-bit integer, is fairly easy to understand: (1) create an array of n integers from the de Bruijn sequence; (2) repeatedly bitwise-or the integer in question with itself right-shifted by 1, 2, 4, ..., n/2; and (3) return the entry of the de Bruijn array whose index is the integer multiplied by a constant and then right-shifted by another constant.
The third step is where I'm confused. How exactly does one derive 0x07C4ACDDU and 0x03f6eaf2cd271461LU as the multiplication constants for 32 and 64 bits, respectively? How does one derive 27 and 58 as the constants by which one should right-shift? What would these values be for a 256-bit integer in particular?
Thanks in advance. Sorry if this is obvious, I'm not very educated in math.
I agree with harold that std::countl_zero() is the way to go. Memory
has gotten a lot slower relative to compute since this bit-twiddling
hack was designed, and processors typically have built-in instructions.
To answer your question, however, this hack combines a couple
primitives. (When I talk about bit indexes, I'm counting from most to
least significant, starting the count at zero.)
The sequence of lines starting with v |= v >> 1; achieves its
stated goal of rounding up to the nearest power of two minus one
(i.e., a number whose binary representation matches 0*1*) by
setting every bit to the right of at least one set bit.
None of these lines clears bits in v.
Since there are right shifts only, every bit set by these lines
is to the right of at least one set bit.
Given a set bit at position i, we observe that a bit at
position i + delta will be guaranteed set by the lines
matching delta's binary representation, e.g., delta = 13
(1101 in binary) is handled by
v |= v >> 1; v |= v >> 4; v |= v >> 8;.
Extracting bits [L, L+delta) from an unsigned integer n with
WIDTH bits can be accomplished with (n << L) >> (WIDTH - delta).
The left shift truncates the upper bits that should be discarded,
and the right shift, which is logical in C++ for unsigned, truncates
the lower bits and fills the truncated upper bits with zeros.
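As a tiny illustration of that primitive (my own, for a 32-bit word):

#include <cstdint>
#include <cstdio>

// Extract delta bits starting at bit index L, counting from the most
// significant bit (index 0), of a 32-bit unsigned value.
uint32_t extract_bits(uint32_t n, int L, int delta)
{
    return (n << L) >> (32 - delta);
}

int main(void)
{
    // Bits [8, 16) of 0xABCD1234 are 0xCD.
    printf("%X\n", extract_bits(0xABCD1234u, 8, 8));  // prints CD
}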
Given that the answer is k, we want to extract 5 (= log2(32), for
32-bit) or 6 (= log2(64), for 64-bit) bits starting with index k
from the magic constant n. We can't shift by k because we only
know pow2(k) (sort of, I'll get to that in a second), but we can
use the equivalence between multiplying by pow2(k) and left
shifting by k as a workaround.
Actually we only know pow2(k+1) - 1. We're going to be greedy and
shave the two ops that we'd need to get pow2(k). By putting 5 or 6
ones after the initial zeros, we force that minus one to always
cause the answer to be one less than it should have been (mod 32 or
64).
So the de Bruijn sequence: the idea is that we can uniquely identify
our index in the sequence by reading the next 5 or 6 bits. We aren't
so lucky as to be able to have these bits be equal to the index,
which is where the look-up table comes in.
As an example, I'll adapt this algorithm to 8-bit words. We start with
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
The de Bruijn sequence is 00011101, which written out in three-bit
segments is
(index − 1) mod 8 | bits | value | (value − 1) mod 8
        7         | 000  |   0   |         7
        0         | 001  |   1   |         0
        1         | 011  |   3   |         2
        2         | 111  |   7   |         6
        3         | 110  |   6   |         5
        4         | 101  |   5   |         4
        5         | 010  |   2   |         1
        6         | 100  |   4   |         3
The hex constant is 0x1D, the right shift is 8 − log2(8) = 5, the
table is derived by inverting the permutation above:
{0, 5, 1, 6, 4, 3, 2, 7}.
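Putting the 8-bit adaptation together, here is a small runnable sketch (my own illustration; the constant 0x1D, the shift of 5, and the look-up table are exactly the ones derived above):

#include <cstdint>
#include <cstdio>

// Floor of log2 for an 8-bit value m >= 1, via the B(2, 3) sequence 00011101 (0x1D).
int log2_8bit(uint8_t m)
{
    static const int table[8] = {0, 5, 1, 6, 4, 3, 2, 7};
    uint8_t v = m;
    v |= v >> 1;  // round up to one less than a power of two
    v |= v >> 2;
    v |= v >> 4;
    return table[(uint8_t)(v * 0x1D) >> 5];
}

int main(void)
{
    for (int i = 0; i < 8; i++)
        printf("log2(%3d) = %d\n", 1 << i, log2_8bit((uint8_t)(1 << i)));
    // Non-powers of two work too: floor(log2(200)) = 7, floor(log2(5)) = 2.
    printf("log2(200) = %d, log2(5) = %d\n", log2_8bit(200), log2_8bit(5));
}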
So, hypothetically, if we wanted to adapt this algorithm to a 256-bit
word size, we'd add v |= v >> 64; v |= v >> 128;, change the shift to
256 − log2(256) = 256 − 8 = 248, find a 256-bit de Bruijn sequence that
starts with 0000000011111111, encode it as a hex constant, and
construct the appropriate look-up table to pair with it.
But like, don't. If you insist on not using the library function, you're
still probably on a 64-bit machine, so you should just test whether each
of the four words from big to little is nonzero, and if so, apply the
64-bit code and add the appropriate offset.
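For what it's worth, that word-by-word dispatch might look something like this in C++20 (the U256 layout of four uint64_t limbs, most significant first, is an assumption made for illustration):

#include <bit>
#include <cstdint>

// Hypothetical 256-bit value stored as four 64-bit limbs, most significant first.
struct U256 { uint64_t w[4]; };

// Floor of log2 of a nonzero 256-bit value: find the most significant nonzero
// limb, take its floor-log2, and add that limb's offset.
int log2_256(const U256& x)
{
    for (int i = 0; i < 4; ++i) {
        if (x.w[i] != 0) {
            int log2_word = 63 - std::countl_zero(x.w[i]);  // floor(log2) within the limb
            return log2_word + 64 * (3 - i);                // position of the limb
        }
    }
    return -1;  // all bits zero: mirror the -1 convention of the code above
}

The same dispatch works with the 64-bit De Bruijn routine in place of std::countl_zero if the library function is off the table.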

How to Find All Overlapping Intervals summed weight?

Possible Interview Question: How to Find All Overlapping Intervals => provides a solution to find all the overlapping intervals. On top of this problem, imagine each interval has a weight. I am aiming to find the summed weight of the overlapping intervals when a new interval is inserted.
Condition: a newly inserted interval's end value is always larger than the previously inserted intervals' end points, so the end points are already sorted.
When a new interval and its weight are inserted, the summed weight of the overlapped intervals should be checked to see whether it exceeds the limit. For example, when we insert [15, 70] with weight 2, the summed weight of [15, 20] becomes 130 and it should give an error since it exceeds the limit of 128; otherwise the newly inserted interval is appended to the list.
int limit = 128;
Inserted intervals in order:
order_come | start | end | weight
0 [10, 20] 32
1 [15, 25] 32
2 [5, 30] 32
3 [30, 40] 64
4 [1, 50] 16
5 [1, 60] 16
6 [15, 70] 2 <= should not be appended to the list.
Final overall summed weight view of the List after `[15, 70] 2` is inserted:
[60, 70, 2]
[50, 60, 18]
[40, 50, 34]
[30, 40, 98]
[25, 30, 66]
[20, 25, 98]
[15, 20, 130] <= exceeds the limit=128, throw an error.
[10, 15, 96]
[5, 10, 64]
[1, 5, 32]
[0, 0, 0]
Thank you for your valuable time and help.
O(log n)-time inserts are doable with an augmented binary search tree. To store
order_come | start | end | weight
0 [10, 20] 32
1 [15, 25] 32
2 [5, 30] 32
3 [30, 40] 64
4 [1, 50] 16
5 [1, 60] 16
we have a tree shaped like
          25
         /  \
        /    \
      10      50
     /  \    /  \
    5    20 40   60
   /    /   /
  1    15  30 ,
where each number represents the interval from it to its successor. Associated with each tree node are two numbers. The first we call ∆weight, defined to be the weight of the node's interval minus the weight of the node's parent's interval, if the parent exists (otherwise just the node's weight). The second we call ∆max, defined to be the maximum weight of an interval corresponding to a descendant of the node (including the node itself), minus the node's weight.
For the above example,
interval | tree node | total weight | ∆weight | ∆max
[1, 5)   |     1     |      32      |   -32   |   0
[5, 10)  |     5     |      64      |   -32   |   0
[10, 15) |    10     |      96      |    32   |  32
[15, 20) |    15     |     128      |    32   |   0
[20, 25) |    20     |      96      |     0   |  32
[25, 30) |    25     |      64      |    64   |  64
[30, 40) |    30     |      96      |    64   |   0
[40, 50) |    40     |      32      |    16   |  64
[50, 60) |    50     |      16      |   -48   |  80
[60, ∞)  |    60     |       0      |   -16   |   0
Binary search tree operations almost invariably require rotations. When we rotate a tree like
      p                c
     / \              / \
    c   r     =>     l   p
   / \                  / \
  l   g                g   r
we modify
c.∆weight += p.∆weight
g.∆weight += c.∆weight
g.∆weight -= p.∆weight
p.∆weight -= c.∆weight
p.∆max = max(0, g.∆max + g.∆weight, r.∆max + r.∆weight)
c.∆max = max(0, l.∆max + l.∆weight, p.∆max + p.∆weight).
The point of the augmentations is as follows. To find the max weight in the tree, compute r.∆max + r.∆weight where r is the root. To increase every weight in a subtree by a given quantity ∂, increase the subtree root's ∆weight by ∂. By changing O(log n) nodes with inclusion-exclusion, we can increase a whole interval. Together, these operations allow us to evaluate an insertion in time O(log n).
To find the total weight of an interval, search for that interval as normal while also adding up the ∆weight values of the nodes on the search path. For example, to find the weight of [15, 20), we look for 15, traversing 25 (∆weight = 64), 10 (∆weight = 32), 20 (∆weight = 0), and 15 (∆weight = 32), for a total weight of 64 + 32 + 0 + 32 = 128.
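To make that concrete, here is a small sketch of my own in C++ that hard-codes the example tree with the ∆weight values from the table above and reproduces that lookup:

#include <cstdio>

// A node of the augmented tree: 'value' is the left endpoint of the node's
// interval, 'dweight' is the node's weight minus its parent's weight.
struct Node {
    int value;
    int dweight;
    Node* left;
    Node* right;
};

// Total weight of the elementary interval starting at 'key': walk the usual
// BST search path and sum the dweight of every node visited.
int totalWeight(Node* root, int key)
{
    int sum = 0;
    for (Node* n = root; n != nullptr; n = key < n->value ? n->left : n->right) {
        sum += n->dweight;
        if (key == n->value) break;
    }
    return sum;
}

int main()
{
    // The six example intervals, with the ∆weight values from the table.
    Node n1{1, -32, nullptr, nullptr}, n15{15, 32, nullptr, nullptr}, n30{30, 64, nullptr, nullptr};
    Node n5{5, -32, &n1, nullptr}, n20{20, 0, &n15, nullptr}, n40{40, 16, &n30, nullptr};
    Node n60{60, -16, nullptr, nullptr};
    Node n10{10, 32, &n5, &n20}, n50{50, -48, &n40, &n60};
    Node root{25, 64, &n10, &n50};

    printf("weight of [15, 20) = %d\n", totalWeight(&root, 15));  // 64 + 32 + 0 + 32 = 128
    printf("weight of [1, 5)   = %d\n", totalWeight(&root, 1));   // 64 + 32 - 32 - 32 = 32
}

The second query reproduces the [1, 5) row of the table (total weight 32).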
To find the maximum total weight along a hypothetical interval, we do a modified search, something like this. Using another modified search, compute the greatest tree value less than or equal to start (call it predstart; let predstart = -∞ if all tree values are greater than start) and pass it to maxtotalweight.
maxtotalweight(root, predstart, end):
    if root is nil:
        return -∞
    if end <= root.value:
        return maxtotalweight(root.leftchild, predstart, end) + root.∆weight
    if predstart > root.value:
        return maxtotalweight(root.rightchild, predstart, end) + root.∆weight
    lmtw = maxtotalweight1a(root.leftchild, predstart)
    rmtw = maxtotalweight1b(root.rightchild, end)
    return max(lmtw, 0, rmtw) + root.∆weight

maxtotalweight1a(root, predstart):
    if root is nil:
        return -∞
    if predstart > root.value:
        return maxtotalweight1a(root.rightchild, predstart) + root.∆weight
    lmtw = maxtotalweight1a(root.leftchild, predstart)
    return max(lmtw, 0, root.rightchild.∆max + root.rightchild.∆weight) + root.∆weight

maxtotalweight1b(root, end):
    if root is nil:
        return -∞
    if end <= root.value:
        return maxtotalweight1b(root.leftchild, end) + root.∆weight
    rmtw = maxtotalweight1b(root.rightchild, end)
    return max(root.leftchild.∆max + root.leftchild.∆weight, 0, rmtw) + root.∆weight
We assume that nil has ∆weight = 0 and ∆max = -∞. Sorry for all of the missing details.
Using the terminology of the original answer, when you have
'1E 2E 3E ... (n-1)E nE'
end-points already sorted and your (n+1)st end-point is greater than all previous end-points, you only need to find the intervals with an end-point value greater than the (n+1)st start-point (greater or equal in the case of closed intervals).
In other words: iterate over the intervals starting from the right-most end-point and move to the left until you reach an interval with an end-point less than or equal to the (n+1)st start-point, keeping track of the sum of weights. Then check whether the sum fits within the limit. The worst-case time complexity is O(n), when all previous intervals have an end-point greater than the (n+1)st start-point.

find the index of 1 of a binary number which is a power of 2

Let's say I have a number x which is a power of two; that means x = 2^i for some i.
So the binary representation of x has only one '1'. I need to find the index of that one.
For example, x = 16 (decimal)
x = 10000 (in binary)
Here the index should be 4. Is it possible to find the index in O(1) time just using bit operations?
The following is an explanation of the logic behind the use of de Bruijn sequences in the O(1) code of the answers provided by @Juan Lopez and @njuffa (great answers by the way, you should consider upvoting them).
The de Bruijn sequence
Given an alphabet K and a length n, a de Bruijn sequence is a sequence of characters from K that contains (viewed cyclically) every permutation with repetition of length n over K [1]. For example, given the alphabet {a, b} and n = 3, the following is a list of all permutations (with repetition) of {a, b} with length 3:
[aaa, aab, aba, abb, baa, bab, bba, bbb]
To create the associated de Bruijn sequence we construct a minimal string that contains each of these exactly once; one such string would be: babbbaaa
"babbbaaa" is a de Bruijn sequence for our alphabet K = {a, b} and n = 3. The notation to represent this is B(2, 3), where 2 is the size of K, also denoted k. The length of the sequence is given by k^n; in the previous example k^n = 2^3 = 8.
How can we construct such a string? One method consist on building a directed graph where every node represents a permutation and has an outgoing edge for every letter in the alphabet, the transition from one node to another adds the edge letter to the right of the next node and removes its leftmost letter. Once the graph is built grab the edges in a Hamiltonian path over it to construct the sequence.
For the previous example, build that graph and take a Hamiltonian path over it (a path that visits each vertex exactly once). Starting from node aaa and following each edge, we end up having:
(aaa) -> b -> a -> b -> b -> b -> a -> a -> a (aaa) = babbbaaa
We could have started from the node bbb in which case the obtained sequence would have been "aaababbb".
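A quick way to convince yourself that babbbaaa really is a B(2, 3) sequence is to check that its 8 cyclic windows of length 3 are all distinct (a small sketch of my own):

#include <iostream>
#include <set>
#include <string>

// Collects every length-3 window of the cyclic sequence "babbbaaa" and counts
// how many distinct ones there are; a de Bruijn sequence gives 2^3 = 8.
int main()
{
    const std::string seq = "babbbaaa";
    const int n = 3;
    std::set<std::string> windows;
    for (std::size_t i = 0; i < seq.size(); ++i) {
        std::string w;
        for (int j = 0; j < n; ++j)
            w += seq[(i + j) % seq.size()];  // wrap around: the sequence is cyclic
        windows.insert(w);
    }
    std::cout << windows.size() << " distinct windows out of " << seq.size() << "\n";  // 8 out of 8
}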
Now that the de Bruijn sequence is covered, let's use it to find the index of the set bit in an integer (equivalently, the number of trailing zeroes).
The de Bruijn algorithm [2]
To find the index of the set bit in an integer value, the first step in this algorithm is to isolate the rightmost 1 bit, for example, given 848 (1101010000₂):
isolate rightmost bit
1101010000 ---------------------------> 0000010000
One way to do this is using x & (~x + 1); you can find more info on how this expression works in the Hacker's Delight book (chapter 2, section 2-1).
The question states that the input is a power of 2, so the rightmost bit is isolated from the beginning and no effort is required for that.
Once the bit is isolated (thus converting the value into a power of two), the second step consists of using a hash table along with a hash function to map the filtered input to its bit index, e.g., applying the hash function h(x) to 0000010000₂ should return the index of the table entry that contains the value 4.
The algorithm proposes the use of a perfect hash function highlighting these properties:
the hash table should be small
the hash function should be easy to compute
the hash function should not produce collisions, i.e., h(x) ≠ h(y) if x ≠ y
To achieve this, we could use a de Bruijn sequence with an alphabet of binary elements K = {0, 1} and n = 6 if we want to solve the problem for 64-bit integers (for 64-bit integers, there are 64 possible power-of-two values and 6 bits are required to index them all). The length of B(2, 6) is 2^6 = 64, so we need to find a de Bruijn sequence of length 64 that includes all permutations (with repetition) of binary digits of length 6 (000000₂, 000001₂, ..., 111111₂).
Using a program that implements a method like the one described above you can generate a de Bruijn sequence that meets the requirement for 64 bits integers:
0000011111101101110101011110010110011010010011100010100011000010₂ = 7EDD5E59A4E28C2₁₆
The proposed hashing function for the algorithm is:
h(x) = (x * deBruijn) >> (k^n - n)
Where x is a power of two. For every possible power of two within 64 bits, h(x) returns a distinct 6-bit permutation, and we need to associate every one of these permutations with the corresponding bit index to fill the table. For example, if x is 16 = 10000₂, whose 1 bit is at index 4, we have:
h(16) = (16 * 0x7EDD5E59A4E28C2) >> 58
= 9141566557743385632 >> 58
= 31 (011111b)
So, at index 31 of our table, we store 4. For another example, let's work with 256 = 100000000₂, whose 1 bit is at index 8:
h(256) = (256 * 0x7EDD5E59A4E28C2) >> 58
= 17137856407927308800 (due to overflow) >> 58
= 59 (111011b)
At index 59, we store 8. We repeat this process for every power of two until we fill up the table. Generating the table manually is unwieldy, so you should use a program like the one found here for this endeavour.
At the end we'd end up with the following table:
int table[] = {
63, 0, 58, 1, 59, 47, 53, 2,
60, 39, 48, 27, 54, 33, 42, 3,
61, 51, 37, 40, 49, 18, 28, 20,
55, 30, 34, 11, 43, 14, 22, 4,
62, 57, 46, 52, 38, 26, 32, 41,
50, 36, 17, 19, 29, 10, 13, 21,
56, 45, 25, 31, 35, 16, 9, 12,
44, 24, 15, 8, 23, 7, 6, 5
};
And the code to calculate the required value:
// Assumes that x is a power of two
int bitIndex(uint64_t x) {
    return table[(x * 0x7EDD5E59A4E28C2ull) >> 58];
}
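Rather than filling the table by hand, a short sketch like the following (my own) regenerates it from the de Bruijn constant by hashing every power of two and storing its bit index at the hashed position:

#include <cstdint>
#include <cstdio>

int main(void)
{
    const uint64_t deBruijn = 0x07EDD5E59A4E28C2ull;
    int table[64];
    // h(2^i) = (2^i * deBruijn) >> 58 is distinct for every i, so each slot
    // gets written exactly once.
    for (int i = 0; i < 64; i++)
        table[((1ull << i) * deBruijn) >> 58] = i;
    for (int i = 0; i < 64; i++)
        printf("%d%s", table[i], (i % 8 == 7) ? ",\n" : ", ");
}

The output should match the table listed above.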
What guarantees that we are not missing an index for some power of two due to a collision?
The hash function extracts a different 6-bit group of the de Bruijn sequence for every power of two: multiplying by x is just a left shift (multiplying a number by a power of two is the same as left-shifting it), and the right shift by 58 then isolates the 6-bit group. Since every 6-bit window of the de Bruijn sequence is distinct, no collision will appear for two different values of x (the third property of the desired hash function for this problem).
References:
[1] De Bruijn Sequences - http://datagenetics.com/blog/october22013/index.html
[2] Using de Bruijn Sequences to Index a 1 in a Computer Word - http://supertech.csail.mit.edu/papers/debruijn.pdf
[3] The Magic Bitscan - http://elemarjr.net/2011/01/09/the-magic-bitscan/
The specifications of the problem are not entirely clear to me. For example, which operations count as "bit operations" and how many bits make up the input in question? Many processors have a "count leading zeros" or "find first bit" instruction exposed via intrinsic that basically provides the desired result directly.
Below I show how to find the bit position in 32-bit integer based on a De Bruijn sequence.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* find position of 1-bit in a = 2^n, 0 <= n <= 31 */
int bit_pos (uint32_t a)
{
    static int tab[32] = { 0,  1,  2,  6,  3, 11,  7, 16,
                           4, 14, 12, 21,  8, 23, 17, 26,
                          31,  5, 10, 15, 13, 20, 22, 25,
                          30,  9, 19, 24, 29, 18, 28, 27};
    // return tab [0x04653adf * a >> 27];
    return tab [(a + (a << 1) + (a << 2) + (a << 3) + (a << 4) + (a << 6) +
                 (a << 7) + (a << 9) + (a << 11) + (a << 12) + (a << 13) +
                 (a << 16) + (a << 18) + (a << 21) + (a << 22) + (a << 26))
                >> 27];
}

int main (void)
{
    uint32_t nbr;
    int pos = 0;
    while (pos < 32) {
        nbr = 1U << pos;
        if (bit_pos (nbr) != pos) {
            printf ("!!!! error: nbr=%08x bit_pos=%d pos=%d\n",
                    nbr, bit_pos(nbr), pos);
            return EXIT_FAILURE;
        }
        pos++;
    }
    return EXIT_SUCCESS;
}
You can do it in O(1) if you allow a single memory access:
#include <iostream>
using namespace std;

int indexes[] = {
    63, 0, 58, 1, 59, 47, 53, 2,
    60, 39, 48, 27, 54, 33, 42, 3,
    61, 51, 37, 40, 49, 18, 28, 20,
    55, 30, 34, 11, 43, 14, 22, 4,
    62, 57, 46, 52, 38, 26, 32, 41,
    50, 36, 17, 19, 29, 10, 13, 21,
    56, 45, 25, 31, 35, 16, 9, 12,
    44, 24, 15, 8, 23, 7, 6, 5
};

int main() {
    unsigned long long n;
    while(cin >> n) {
        cout << indexes[((n & (~n + 1)) * 0x07EDD5E59A4E28C2ull) >> 58] << endl;
    }
}
It depends on your definitions. First let's assume there are n bits, because if we assume there is a constant number of bits then everything we could possibly do with them is going to take constant time so we could not compare anything.
First, let's take the widest possible view of "bitwise operations" - they operate on bits but not necessarily pointwise, and furthermore we'll count operations but not include the complexity of the operations themselves.
M. L. Fredman and D. E. Willard showed that there is an algorithm of O(1) operations to compute lambda(x) (the floor of the base-2 logarithm of x, so the index of the highest set bit). It contains quite some multiplications though, so calling it bitwise is a bit funny.
On the other hand, there is an obvious O(log n)-operations algorithm using no multiplications: just binary search for it. But we can do better: Gerth Brodal showed that it can be done in O(log log n) operations (and none of them are multiplications).
All the algorithms I referenced are in The Art of Computer Programming 4A, bitwise tricks and techniques.
None of these really qualify as finding that 1 in constant time, and it should be obvious that you can't do that. The other answers don't qualify either, despite their claims. They're cool, but they're designed for a specific constant number of bits; any naive algorithm would therefore also be O(1) (trivially, because there is no n to depend on). In a comment the OP said something that implied he actually wanted that, but it doesn't technically answer the question.
And the answer is ... ... ... ... ... yes!
Just for fun, since you commented below one of the answers that i up to 20 would suffice (multiplications here are by either zero or one):
#include <iostream>
using namespace std;

int f(int n){
    return
        0 | !(n ^ 1048576) * 20
          | !(n ^ 524288) * 19
          | !(n ^ 262144) * 18
          | !(n ^ 131072) * 17
          | !(n ^ 65536) * 16
          | !(n ^ 32768) * 15
          | !(n ^ 16384) * 14
          | !(n ^ 8192) * 13
          | !(n ^ 4096) * 12
          | !(n ^ 2048) * 11
          | !(n ^ 1024) * 10
          | !(n ^ 512) * 9
          | !(n ^ 256) * 8
          | !(n ^ 128) * 7
          | !(n ^ 64) * 6
          | !(n ^ 32) * 5
          | !(n ^ 16) * 4
          | !(n ^ 8) * 3
          | !(n ^ 4) * 2
          | !(n ^ 2);
}

int main() {
    for (int i=1; i<1048577; i <<= 1){
        cout << f(i) << " "; // 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
    }
}

Print a row-wise and column-wise sorted 2D matrix in sorted order

Given an n x n matrix where every row and column is sorted in non-decreasing order, print all elements of the matrix in sorted order.
Example:
Input:
mat[][] = { {10, 20, 30, 40},
            {15, 25, 35, 45},
            {27, 29, 37, 48},
            {32, 33, 39, 50} };
Output:
(Elements of matrix in sorted order)
10 15 20 25 27 29 30 32 33 35 37 39 40 45 48 50
I am unable to figure out how to do this. One approach is to flatten the 2D matrix into a single array and apply a sort function, but I am in need of space-optimized code.
Using a Heap would be a good idea here.
Please refer to the following for a very similar question:
http://www.geeksforgeeks.org/kth-smallest-element-in-a-row-wise-and-column-wise-sorted-2d-array-set-1/
Though the problem in the link above is different, the same approach can be used for the problem you specify. Instead of looping k times as the link explains, you need to visit all elements in the matrix, i.e., you should loop until the heap is empty.
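In case it helps, here is one way that heap-based approach could look in C++ (the function name printSorted is mine). It keeps only one entry per row in a min-heap, so the extra space is O(n) and the total time is O(n^2 log n):

#include <cstdio>
#include <queue>
#include <tuple>
#include <vector>

// Prints an n x n row- and column-sorted matrix in sorted order: seed a min-heap
// with the first element of every row, then repeatedly pop the smallest element
// and push the next element from the same row.
void printSorted(const std::vector<std::vector<int>>& mat)
{
    const int n = (int)mat.size();
    using Entry = std::tuple<int, int, int>;  // (value, row, column)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
    for (int r = 0; r < n; ++r)
        heap.emplace(mat[r][0], r, 0);
    while (!heap.empty()) {
        auto [val, r, c] = heap.top();
        heap.pop();
        std::printf("%d ", val);
        if (c + 1 < n)
            heap.emplace(mat[r][c + 1], r, c + 1);
    }
    std::printf("\n");
}

int main()
{
    printSorted({{10, 20, 30, 40},
                 {15, 25, 35, 45},
                 {27, 29, 37, 48},
                 {32, 33, 39, 50}});
}

For the matrix in the question this prints 10 15 20 25 27 29 30 32 33 35 37 39 40 45 48 50.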

Evenly distributing a timeline in some length even when length is less than the number of points (compression)

I need to render a horizontal calendar and render events on it. So I get two dates and the width in pixels. I want to distribute the days between the two provided dates over those pixels and maintain a minimum distance between the visual points.
For instance, I have 365 days (each day should consume at least 10 pixels) and I need to distribute them over 300 pixels. So I need to "pack" them in groups so that each pixel would represent multiple dates. How can I calculate this, mathematically speaking?
i.e.
(days)
1/1 8/1 16/1 24/1 2/2 10/2 18/2 ......
in the above example for instance, how can I calculate that I need to "pack/skip" the 7 days?
What I need in the end is to produce an array with the dates (days) and the x offset where it should be positioned in the horizontal axis.
i.e.
1/1/2013 = 0
2/1/2013 = 0
3/1/2013 = 0
4/1/2013 = 0
5/1/2013 = 0
6/1/2013 = 0
7/1/2013 = 0
8/1/2013 = 10
9/1/2013 = 10
10/1/2013 = 10
....
You have 300 pixels to use. Each 'package' should be at least 10 pixels. This means you should have 300/10 = 30 packages. You have 365 days which should be distributed over 30 packages, so that's 365/30 = 12.17 days per package. Or simply 12.
The same logic can be used to calculate the amount of days needed in a package if you have a different amount of pixels to use.
I hope that this was what you were asking for.
Jannes
Edit: I have just read your edit so I will alter my reply a bit here.
If you have converted your date to a number between 1 and 365 you can simply calculate each element of your array days like this.
days[i]=floor(i/12)*10
where the 12 comes from the calculations above.
date_width = 10
display_width = 300
date_range = 365
num_of_dates = display_width // date_width
date_offsets = [x * date_range // num_of_dates for x in range(num_of_dates)]
gives dates for every 10 "pixels"
[0, 12, 24, 36, 48, 60, 73, 85, 97, 109, 121, 133, 146, 158, 170, 182, 194, 206, 219, 231, 243, 255, 267, 279, 292, 304, 316, 328, 340, 352]
If, seeing that you have about 12 days between data points, you want to round that up to 2 weeks:
date_offset = 14
date_offsets = [x * date_offset for x in range(date_range//date_offset)]
date_positions = [display_width * o // date_range for o in date_offsets]
