Fast CRC32 algorithm for reversed bit order

I am working with a microcontroller which calculates the CRC32 checksum of data I upload to its flash memory on the fly. This can in turn be used to verify that the upload was correct, by checking the resulting checksum after all data is uploaded.
The only problem is that the microcontroller reverses the bit order of the input bytes when they are run through the otherwise standard CRC32 calculation. This means I need to reverse every byte in the data on the programming host in order to calculate the CRC32 sum to verify against. As the programming host is somewhat constrained, this is quite slow.
I figure that if it's possible to modify the CRC32 lookup table so I can do the lookup without having to reverse the bit order, the verification algorithm would run many times faster. But I can't seem to figure out a way to do this.
To clarify the byte reversal, I need to transform the input bytes in the following way:
01 02 03 04 -> 80 40 C0 20
It's a lot easier to see the reversal in binary representation of course:
00000001 00000010 00000011 00000100 ->
10000000 01000000 11000000 00100000
Edit
Here is the PoC Python code I use to verify the correctness of the CRC32 calculation; however, this reverses each byte (i.e. the slow way).
EDIT2
I've also included my failed attempt to generate a permuted lookup table and use it with a standard LUT-based CRC32 algorithm.
The code spits out the correct reference CRC value first, and then the wrong LUT calculated CRC afterwards.
import binascii

CRC32_POLY = 0xEDB88320

def reverse_byte_bits(x):
    '''
    Reverses the bit order of the given byte 'x' and returns the result
    '''
    x = ((x<<4) & 0xF0)|((x>>4) & 0x0F)
    x = ((x<<2) & 0xCC)|((x>>2) & 0x33)
    x = ((x<<1) & 0xAA)|((x>>1) & 0x55)
    return x

def reverse_bits(ba, blen):
    '''
    Reverses the bit order of each byte in the given array of bytes
    '''
    bar = bytearray()
    for i in range(0, blen):
        bar.append(reverse_byte_bits(ba[i]))
    return bar

def crc32_reverse(ba):
    # Reverse the bits of all input bytes
    bar = reverse_bits(ba, len(ba))
    # Calculate the CRC value
    return binascii.crc32(bar)

def gen_crc_table_msb():
    crctable = [0] * 256
    for i in range(0, 256):
        remainder = i
        for bit in range(0, 8):
            if remainder & 0x1:
                remainder = (remainder >> 1) ^ CRC32_POLY
            else:
                remainder = (remainder >> 1)
        # The correct index for the calculated value is the reverse of the index
        ix = reverse_byte_bits(i)
        crctable[ix] = remainder
    return crctable

def crc32_revlut(ba, lut):
    crc = 0xFFFFFFFF
    for x in ba:
        crc = lut[x ^ (crc & 0xFF)] ^ (crc >> 8)
    return ~crc

# Reference test which gives the correct CRC
test = bytearray([1, 2, 3, 4, 5, 6, 7, 8])
crcrev = crc32_reverse(test)
print("0x%08X" % (crcrev & 0xFFFFFFFF))

# Test using permuted lookup table, but standard CRC32 LUT algorithm
lut = gen_crc_table_msb()
crctst = crc32_revlut(test, lut)
print("0x%08X" % (crctst & 0xFFFFFFFF))
Does anyone have any hints as to how this could be done?

By reversing the logic of which way the CRC "streams", the per-byte reversal in the main calculation can be avoided. So instead of crc >> 8 there would be crc << 8, and instead of XORing with the bottom byte of the CRC for the LUT index, we take the top byte. Like this:
def reverse_dword_bits(x):
    '''
    Reverses the bit order of the given dword 'x' and returns the result
    '''
    x = ((x<<16) & 0xFFFF0000)|((x>>16) & 0x0000FFFF)
    x = ((x<<8) & 0xFF00FF00)|((x>>8) & 0x00FF00FF)
    x = ((x<<4) & 0xF0F0F0F0)|((x>>4) & 0x0F0F0F0F)
    x = ((x<<2) & 0xCCCCCCCC)|((x>>2) & 0x33333333)
    x = ((x<<1) & 0xAAAAAAAA)|((x>>1) & 0x55555555)
    return x

def gen_crc_table_msb():
    crctable = [0] * 256
    for i in range(0, 256):
        remainder = i
        for bit in range(0, 8):
            if remainder & 0x1:
                remainder = (remainder >> 1) ^ CRC32_POLY
            else:
                remainder = (remainder >> 1)
        # The correct index for the calculated value is the reverse of the
        # index, and the stored entry is bit-reversed as well
        ix = reverse_byte_bits(i)
        crctable[ix] = reverse_dword_bits(remainder)
    return crctable

def crc32_revlut(ba, lut):
    crc = 0xFFFFFFFF
    for x in ba:
        crc = lut[x ^ (crc >> 24)] ^ ((crc << 8) & 0xFFFFFFFF)
    return reverse_dword_bits(~crc)
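For reference, here is a quick self-check of this approach (not part of the original answer; it just reuses the test vector and functions from the question). With the bit-reversed table and reversed streaming direction, both lines print the same CRC:

# Sanity check: the permuted-LUT version should now match the
# reference implementation that bit-reverses every input byte.
test = bytearray([1, 2, 3, 4, 5, 6, 7, 8])
lut = gen_crc_table_msb()
print("0x%08X" % (crc32_reverse(test) & 0xFFFFFFFF))
print("0x%08X" % (crc32_revlut(test, lut) & 0xFFFFFFFF))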

Related

Caesar's cypher encryption algorithm

Caesar's cypher is the simplest encryption algorithm. It adds a fixed value to the ASCII (Unicode) value of each character of a text; in other words, it shifts the characters. Decrypting a text is simply shifting it back by the same amount, that is, subtracting the same value from the characters.
My task is to write a function that:
accepts two arguments: the first is the character vector to be encrypted, and the second is the shift amount.
returns one output, which is the encrypted text.
needs to work with all the visible ASCII characters from space to ~ (ASCII codes of 32 through 126). If the shifted code goes outside of this range, it should wrap around. For example, if we shift ~ by 1, the result should be space. If we shift space by -1, the result should be ~.
This is my MATLAB code:
function [coded] = caesar(input_text, shift)
    x = double(input_text); % converts char symbols to double format
    for ii = 1:length(x) % go through each element
        if (x(ii) + shift > 126) & (mod(x(ii) + shift, 127) < 32)
            x(ii) = mod(x(ii) + shift, 127) + 32; % if the symbol + shift > 126, I make it 32
        elseif (x(ii) + shift > 126) & (mod(x(ii) + shift, 127) >= 32)
            x(ii) = mod(x(ii) + shift, 127);
        elseif (x(ii) + shift < 32) & (126 + (x(ii) + shift - 32 + 1) >= 32)
            x(ii) = 126 + (x(ii) + shift - 32 + 1);
        elseif (x(ii) + shift < 32) & (126 + (x(ii) + shift - 32 + 1) < 32)
            x(ii) = abs(x(ii) - 32 + shift - 32);
        else
            x(ii) = x(ii) + shift;
        end
    end
    coded = char(x); % converts double format back to char
end
I can't seem to make the wrapping conversions correctly (e.g. from 31 to 126, 30 to 125, 127 to 32, and so on). How should I change my code to do that?
Before you even start coding something like this, you should have a firm grasp of how to approach the problem.
The main obstacle you encountered is how to apply the modulus operation to your data, seeing how mod "wraps" inputs to the range [0, modPeriod-1], while your own data is in the range [32, 126]. To make mod useful in this case, we perform an intermediate step of shifting the input to the range that mod "likes", i.e. from some [minVal, maxVal] to [0, modPeriod-1].
So we need to find two things: the size of the required shift, and the size of the period of the mod. The first one is easy, since it is just -minVal, the negative of the ASCII value of the first character, which is space (written as ' ' in MATLAB). As for the period of the mod, this is just the size of your "alphabet", which happens to be "1 larger than the maximum value, after shifting", in other words maxVal - minVal + 1. Essentially, what we're doing is the following:
input -> shift to 0-based ("mod") domain -> apply mod() -> shift back -> output
Now take a look how this can be written using MATLAB's vectorized notation:
function [coded] = caesar(input_text, shift)
    FIRST_PRINTABLE = ' ';
    LAST_PRINTABLE = '~';
    N_PRINTABLE_CHARS = LAST_PRINTABLE - FIRST_PRINTABLE + 1;
    coded = char(mod(input_text - FIRST_PRINTABLE + shift, N_PRINTABLE_CHARS) + FIRST_PRINTABLE);
end
Here are some tests:
>> caesar('blabla', 1)
ans =
'cmbcmb'
>> caesar('cmbcmb', -1)
ans =
'blabla'
>> caesar('blabla', 1000)
ans =
'5?45?4'
>> caesar('5?45?4', -1000)
ans =
'blabla'
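For comparison outside MATLAB, here is a minimal Python sketch of the same shift / mod / shift-back idea (an illustration, not part of the original answer; the name caesar just mirrors the MATLAB function):

def caesar(text, shift):
    FIRST, LAST = ord(' '), ord('~')  # printable range 32..126
    n = LAST - FIRST + 1              # size of the "alphabet" (95)
    # Shift into the 0-based domain, wrap with mod, shift back.
    return ''.join(chr((ord(c) - FIRST + shift) % n + FIRST) for c in text)

print(caesar('blabla', 1))     # cmbcmb
print(caesar('blabla', 1000))  # 5?45?4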
We can solve it using the idea of periodic functions:
a periodic function repeats itself every cycle, and every cycle is equal to 2π...
Like a periodic function, we have a function that repeats itself every 95 values.
The cycle = 126 - 32 + 1;
we add one because '32' is also in the cycle...
So if the value of the character exceeds 126, we subtract 95,
i.e. if the value = 127 (bigger than 126) then it is equivalent to 127 - 95 = 32.
And if the value is less than 32, we add 95,
i.e. if the value = 31 (less than 32) then it is equivalent to 31 + 95 = 126.
Now we will translate that into code:
function out = caesar(string, shift)
    value = string + shift;
    for i = 1:length(value)
        while value(i) < 32
            value(i) = value(i) + 95;
        end
        while value(i) > 126
            value(i) = value(i) - 95;
        end
    end
    out = char(value);
end
First I converted the output (text_input + shift) to char.
function coded = caesar(text_input, shift)
    coded = char(text_input + shift);
    for i = 1:length(coded)
        while coded(i) < 32
            coded(i) = coded(i) + 95;
        end
        while coded(i) > 126
            coded(i) = coded(i) - 95;
        end
    end
end
Here is one short version:
function coded = caesar(v, n)
    C = 32:126;
    v = double(v);
    for i = 1:length(v)
        x = find(C == v(i));
        C = circshift(C, -n);
        v(i) = C(x);
        C = 32:126;
    end
    coded = char(v);
end

Is this SuperFastHash implementation computing a proper 31-bit hash of a string?

I'm implementing a variation of the SuperFastHash in VBA for use in Excel (32-bit version, so no LongLong available) to hash strings.
To get around the limitations of signed 32-bit Long values, I'm doing the addition and bit-shifting using Double types, and then converting from Double to Long in a way that truncates it at 31 bits (the maximum positive value -- don't want to deal with two's complement and signs).
I'm getting answers and avoiding overflows so far, but I have a suspicion I'm making some mistakes in translation, since most implementations use all 32 bits of a uint and also deal with individual bytes from an array rather than 16-bit values coming from AscW().
Specific portions of the implementation I'm hoping someone can gut-check:
How I'm handling 16-bit character words rather than 4-byte chunks.
Whether my bit-shifting operations are technically correct, given the caveat that I need to truncate Long values at 31 bits.
Whether the final avalanche piece is still appropriate given the hash only uses 31 bits.
Here's the current code:
Public Function shr(ByVal Value As Long, ByVal Shift As Byte) As Long
    shr = Value
    If Shift > 0 Then shr = shr \ (2 ^ Shift)
End Function

Public Function shl(ByVal Value As Long, ByVal Shift As Byte) As Long
    If Shift > 0 Then
        shl = LimitDouble(CDbl(Value) * (2& ^ Shift))
    Else
        shl = Value
    End If
End Function

Public Function LimitDouble(ByVal d As Double) As Long
    '' Prevent overflow by lopping off anything beyond 31 bits
    Const MaxNumber As Double = 2 ^ 31
    LimitDouble = CLng(d - (Fix(d / MaxNumber) * MaxNumber))
End Function

Public Function SuperFastHash(ByVal dataToHash As String) As Long
    Dim dataLength As Long
    dataLength = Len(dataToHash)
    If (dataLength = 0) Then
        SuperFastHash = 0
        Exit Function
    End If

    Dim hash As Long
    hash = dataLength

    Dim remainingBytes As Integer
    remainingBytes = dataLength Mod 2

    Dim numberOfLoops As Integer
    numberOfLoops = dataLength \ 2

    Dim currentIndex As Integer
    currentIndex = 0

    Dim tmp As Double

    Do While (numberOfLoops > 0)
        hash = LimitDouble(CDbl(hash) + AscW(Mid$(dataToHash, currentIndex + 1, 1)))
        tmp = shl(AscW(Mid$(dataToHash, currentIndex + 2, 1)), 11) Xor hash
        hash = shl(hash, 16) Xor tmp
        hash = LimitDouble(CDbl(hash) + shr(hash, 11))
        currentIndex = currentIndex + 2
        numberOfLoops = numberOfLoops - 1
    Loop

    If remainingBytes = 1 Then
        hash = LimitDouble(CDbl(hash) + AscW(Mid$(dataToHash, currentIndex + 1, 1)))
        hash = hash Xor shl(hash, 10)
        hash = LimitDouble(CDbl(hash) + shr(hash, 1))
    End If

    '' Final avalanche
    hash = hash Xor shl(hash, 3)
    hash = LimitDouble(CDbl(hash) + shr(hash, 5))
    hash = hash Xor shl(hash, 4)
    hash = LimitDouble(CDbl(hash) + shr(hash, 17))
    hash = hash Xor shl(hash, 25)
    hash = LimitDouble(CDbl(hash) + shr(hash, 6))
    SuperFastHash = hash
End Function
I would suggest that rather than messing around with Doubles, you would probably be better off splitting the 32-bit word into two "16-bit" parts, each held in a signed 32-bit variable (use the lower 16 bits of each variable), and then "normalize" the value between steps:

highPart = (highPart + (lowPart \ 65536)) And 65535
lowPart = lowPart And 65535

Shifting left 16 places simply means copying the low part to the high part and zeroing the low part. Shifting right 16 places simply means copying the high part to the low part and zeroing the high part. Shifting left a smaller number of places means shifting both parts separately and then normalizing. Shifting a normalized number right a smaller number of places means shifting both parts left (16-N) bits, normalizing, and shifting right 16 bits.
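To make that suggestion concrete, here is a small Python model of the two-half scheme (an illustration only, not the answerer's code; in VBA each half would live in a Long, with \ 65536 standing in for the shifts):

def normalize(hi, lo):
    # Carry any overflow from the low half into the high half,
    # then keep 16 bits in each part.
    return (hi + (lo >> 16)) & 0xFFFF, lo & 0xFFFF

def shl(hi, lo, n):
    # Left shift by 0 < n < 16: shift both halves, then normalize.
    return normalize(hi << n, lo << n)

def shr(hi, lo, n):
    # Right shift by 0 < n < 16: shift both halves left by 16 - n and
    # carry, which leaves exactly (value >> n); then split it back
    # into two 16-bit parts (the "shift right 16 bits" step).
    wide = (hi << (16 - n)) + (lo >> n)
    return (wide >> 16) & 0xFFFF, wide & 0xFFFF

# Example: 0xDEADBEEF >> 5, done in halves.
hi, lo = shr(0xDEAD, 0xBEEF, 5)
print(hex((hi << 16) | lo))  # 0x6f56df7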

VB.NET enum declaration syntax

I recently saw a declaration of enum that looks like this:
<Serializable()>
<Flags()>
Public Enum SiteRoles
    ADMIN = 10 << 0
    REGULAR = 5 << 1
    GUEST = 1 << 2
End Enum
I was wondering if someone can explain what the "<<" syntax does or what it is used for? Thank you...
The enum has a Flags attribute, which means that the values are used as bit flags.
Bit flags are useful when representing more than one attribute in a variable.
These are the flags for a 16-bit (attribute) variable (hopefully you see the pattern, which can continue on to any number of bits, limited by the platform/variable type of course):
BIT1 = 0x1 (1 << 0)
BIT2 = 0x2 (1 << 1)
BIT3 = 0x4 (1 << 2)
BIT4 = 0x8 (1 << 3)
BIT5 = 0x10 (1 << 4)
BIT6 = 0x20 (1 << 5)
BIT7 = 0x40 (1 << 6)
BIT8 = 0x80 (1 << 7)
BIT9 = 0x100 (1 << 8)
BIT10 = 0x200 (1 << 9)
BIT11 = 0x400 (1 << 10)
BIT12 = 0x800 (1 << 11)
BIT13 = 0x1000 (1 << 12)
BIT14 = 0x2000 (1 << 13)
BIT15 = 0x4000 (1 << 14)
BIT16 = 0x8000 (1 << 15)
To set a bit (attribute) you simply use the bitwise OR operator:
UInt16 flags;
flags |= BIT1; // set bit (Attribute) 1
flags |= BIT13; // set bit (Attribute) 13
To determine if a bit (attribute) is set, you simply use the bitwise AND operator:
bool bit1 = (flags & BIT1) > 0; // true;
bool bit13 = (flags & BIT13) > 0; // true;
bool bit16 = (flags & BIT16) > 0; // false;
In your example above, ADMIN and REGULAR have the same value, 10 ((10 << 0) and (5 << 1) are both 1010 in binary, i.e. BIT2 and BIT4 above), and GUEST is 4 (1 << 2, i.e. BIT3).
Therefore you could determine the SiteRole by using the bitwise AND operator, as shown above:
UInt32 SiteRole = ...;
IsAdmin = (SiteRole & ADMIN) > 0;
IsRegular = (SiteRole & REGULAR) > 0;
IsGuest = (SiteRole & GUEST) > 0;
Of course, you can also set the SiteRole by using the bitwise OR operator, as shown above:
UInt32 SiteRole = 0x00000000;
SiteRole |= ADMIN;
The real question is why ADMIN and REGULAR have the same value. Maybe it's a bug.
These are bitwise shift operations. Bitwise shifts are used here to transform the integer value of the enum members to a different number. Each enum member will actually have the bit-shifted value. This is probably an obfuscation technique, and is the same as setting a fixed integer value for each enum member.
Each integer has a binary representation (like 0111011); bit shifting allows bits to be moved to the left (<<) or right (>>), depending on which operator is used.
For example:
10 << 0 means:
1010 (10 in binary form) shifted left by 0 bits is still 1010
5 << 1 means:
101 (5 in binary form) shifted one bit to the left = 1010 (a zero is appended on the right)
so 5 << 1 is 10 (because 1010 represents the number 10)
and so on.
In general, the x << y operation can be seen as a fast way to calculate x * Pow(2, y).
You can read this article for more detailed info on bit shifting in .NET http://www.blackwasp.co.uk/CSharpShiftOperators.aspx
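A quick way to convince yourself of the equivalences above (a throwaway snippet, shown in Python since the shifts behave the same way):

print(10 << 0, 5 << 1)       # 10 10 -> ADMIN and REGULAR really are equal
print(1 << 2)                # 4     -> GUEST
print(5 << 1 == 5 * 2 ** 1)  # True  -> x << y is x * Pow(2, y)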

Setting last N bits in an array

I'm sure this is fairly simple; however, I have a major mental block on it, so I need a little help here!
I have an array of 5 integers, and the array is already filled with some data. I want to set the last N bits of the array to random noise.
[int][int][int][int][int]
eg. set last 40 bits
[unchanged][unchanged][unchanged][24 bits of old data followed 8 bits of randomness][all random]
This is largely language agnostic, but I'm working in C# so bonus points for answers in C#
Without any bit-fu, in C#:

BitArray ba = new BitArray(originalIntArray);
for (int i = startToReplaceFrom; i < endToReplaceTo; i++)
    ba.Set(i, randomValue);
When you XOR any data with random data, the result is random, so you can do this:
Random random = new Random();
x[x.Length - 2] ^= random.Next(1 << 8);
x[x.Length - 1] = random.Next(1 << 16);
x[x.Length - 1] ^= random.Next(1 << 16) << 16;
For a general solution for any N you can use a loop:
for (int i = 0; i < 40; ++i)
{
    x[x.Length - i / 32 - 1] ^= random.Next(2) << (i % 32);
}
Note that this calls random more times than necessary, but is simple.
In Python (assuming array already holds the N integers):

import random

N = 5          # array size
bits = 40      # for instance
int_bits = 32  # bits in one integer

i = N
while bits > 0:
    value_bits = min(bits, int_bits)
    bits -= value_bits
    mask = (1 << value_bits) - 1
    i -= 1
    array[i] ^= random.getrandbits(int_bits) & mask
Int32 is 4 bytes, or 32 bits.
So you need the last int, and 8 bits extra.

int lastEightBitsMask = 0xFF + 1;

Random rand = new Random();
arr[arr.Length - 1] = rand.Next();
arr[arr.Length - 2] ^= rand.Next(lastEightBitsMask);

Explanation:
The last element's modification should be pretty clear: if you need the last 40 bits, the last 32 bits are included in that.
The remaining eight bits' modification is bounded above by 0xFF + 1: since rand.Next's argument is an exclusive upper bound, the randoms generated cover exactly those eight bits. The remaining bits of the number stay the same, since 1 ^ 0 == 1 and 0 ^ 0 == 0.

Counting, reversed bit pattern

I am trying to find an algorithm to count from 0 to 2^n - 1, but with the bit pattern reversed. I care about only the n LSBs of a word. As you may have guessed, I failed.
For n=3:
000 -> 0
100 -> 4
010 -> 2
110 -> 6
001 -> 1
101 -> 5
011 -> 3
111 -> 7
You get the idea.
Answers in pseudo-code are great. Code fragments in any language are welcome; answers without bit operations are preferred.
Please don't just post a fragment without even a short explanation or a pointer to a source.
Edit: I forgot to add that I already have a naive implementation which just bit-reverses a count variable. In a sense, this method is not really counting.
This is, I think, easiest with bit operations, even though you said this wasn't preferred.
Assuming 32-bit ints, here's a nifty chunk of code that can reverse all of the bits without doing it in 32 steps:
unsigned int i;
i = (i & 0x55555555) << 1 | (i & 0xaaaaaaaa) >> 1;
i = (i & 0x33333333) << 2 | (i & 0xcccccccc) >> 2;
i = (i & 0x0f0f0f0f) << 4 | (i & 0xf0f0f0f0) >> 4;
i = (i & 0x00ff00ff) << 8 | (i & 0xff00ff00) >> 8;
i = (i & 0x0000ffff) << 16 | (i & 0xffff0000) >> 16;
i >>= (32 - n);
Essentially this does an interleaved shuffle of all of the bits. Each time around half of the bits in the value are swapped with the other half.
The last line is necessary to realign the bits so that bit "n" is the most significant bit.
Shorter versions of this are possible if "n" is <= 16 or <= 8.
At each step, find the leftmost 0 digit of your value. Set it, and clear all digits to the left of it. If you don't find a 0 digit, then you've overflowed: return 0, or stop, or crash, or whatever you want.
This is what happens on a normal binary increment (by which I mean it's the effect, not how it's implemented in hardware), but we're doing it on the left instead of the right.
Whether you do this with bit ops, strings, or whatever is up to you. If you do it with bit ops, then a clz (or a call to an equivalent hibit-style function) on ~value might be the most efficient way: __builtin_clz where available. But that's an implementation detail.
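Here is a minimal Python sketch of that rule (an illustration assuming only the n LSBs matter, written with a plain loop rather than clz):

def reversed_increment(value, n):
    # Scan from the leftmost (most significant) of the n bits downwards.
    for i in range(n - 1, -1, -1):
        bit = 1 << i
        if value & bit == 0:
            # Leftmost 0 found: set it and clear every bit to its left.
            return (value & (bit - 1)) | bit
    return 0  # all ones: wrap around (or treat as overflow, as you prefer)

x = 0
for _ in range(8):  # n = 3 reproduces the sequence from the question
    print(format(x, '03b'))
    x = reversed_increment(x, 3)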
This solution was originally in binary and converted to conventional math as the requester specified.
It would make more sense in binary; at the very least the multiply by 2 and divide by 2 should be << 1 and >> 1 for speed (the additions and subtractions probably don't matter one way or the other).
If you pass in the mask instead of nBits, use bit shifting instead of multiplying or dividing, and change the tail recursion to a loop, this will probably be the most performant solution you'll find: every other call it will be nothing but a single add, and it would only be as slow as Alnitak's solution once every 4, maybe even 8, calls.
int incrementBizarre(int initial, int nBits)
    // in the 3 bit example, this should create 100
    mask = 2^(nBits-1)

    // This should only return true if the first bit (the MSB, which plays
    // the LSB's role in this reversed counting) is not set
    // if initial is 011 and mask is 100
    // 3 < 4, so the bit is not set
    if (initial < mask)
        // If it was not, just set it and bail.
        return initial + mask  // 011 (3) + 100 (4) = 111 (7)
    else
        // it was set, are we at the most significant bit yet?
        // mask 100 (4) / 2 = 010 (2); 001 / 2 = 0 indicating overflow
        if (mask / 2 > 0)
            // No, we weren't, so unset it (initial - mask) and increment the next bit
            return incrementBizarre(initial - mask, nBits - 1)
        else
            // Whoops, we were at the most significant bit. Error condition
            throw new OverflowedMyBitsException()
Wow, that turned out kinda cool. I didn't figure in the recursion until the last second there.
It feels wrong, like there are some operations that should not work but do because of the nature of what you are doing. It feels like you should get into trouble when you operate on a bit while some bits to the left are non-zero, but it turns out you can never be operating on a bit unless all the bits to its left are zero, which is a very strange condition, but true.
Example of flow to get from 110 to 001 (backwards 3 to backwards 4):
mask 100 (4), initial 110 (6); initial < mask=false; initial-mask = 010 (2), now try on the next bit
mask 010 (2), initial 010 (2); initial < mask=false; initial-mask = 000 (0), now inc the next bit
mask 001 (1), initial 000 (0); initial < mask=true; initial + mask = 001--correct answer
Here's a solution from my answer to a different question that computes the next bit-reversed index without looping. It relies heavily on bit operations, though.
The key idea is that incrementing a number simply flips a sequence of least-significant bits, for example from nnnn0111 to nnnn1000. So in order to compute the next bit-reversed index, you have to flip a sequence of most-significant bits. If your target platform has a CTZ ("count trailing zeros") instruction, this can be done efficiently.
Example in C using GCC's __builtin_ctz:
#include <stdio.h>

void iter_reversed(unsigned bits) {
    unsigned n = 1 << bits;
    for (unsigned i = 0, j = 0; i < n; i++) {
        printf("%x\n", j);
        // Compute a mask of LSBs.
        unsigned mask = i ^ (i + 1);
        // Length of the mask.
        unsigned len = __builtin_ctz(~mask);
        // Align the mask to MSB of n.
        mask <<= bits - len;
        // XOR with mask.
        j ^= mask;
    }
}
Without a CTZ instruction, you can also use integer division:
#include <stdio.h>

void iter_reversed(unsigned bits) {
    unsigned n = 1 << bits;
    for (unsigned i = 0, j = 0; i < n; i++) {
        printf("%x\n", j);
        // Find least significant zero bit.
        unsigned bit = ~i & (i + 1);
        // Using division to bit-reverse a single bit.
        unsigned rev = (n / 2) / bit;
        // XOR with mask.
        j ^= (n - 1) & ~(rev - 1);
    }
}
#include <stdio.h>

void reverse(int nMaxVal, int nBits)
{
    int thisVal, bit, out;

    // Calculate for each value from 0 to nMaxVal.
    for (thisVal = 0; thisVal <= nMaxVal; ++thisVal)
    {
        out = 0;

        // Shift each bit from thisVal into out, in reverse order.
        for (bit = 0; bit < nBits; ++bit)
            out = (out << 1) + ((thisVal >> bit) & 1);

        printf("%d -> %d\n", thisVal, out);
    }
}
Maybe increment from 0 to N (the "usual" way) and do a ReverseBitOrder() for each iteration. You can find several implementations here (I like the LUT one the best).
Should be really quick.
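The linked implementations aren't reproduced here, but the LUT idea is easy to sketch in Python (illustrative only): precompute the reverse of every byte once, then build wider reversals out of table lookups.

# Bit-reverse of every byte value, computed once.
REV = [int(format(b, '08b')[::-1], 2) for b in range(256)]

def reverse32(x):
    # Reverse a 32-bit word: reverse each byte and swap the byte order.
    return ((REV[x & 0xFF] << 24) |
            (REV[(x >> 8) & 0xFF] << 16) |
            (REV[(x >> 16) & 0xFF] << 8) |
            REV[(x >> 24) & 0xFF])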
Here's an answer in Perl. You don't say what comes after the all-ones pattern, so I just return zero. I took out the bitwise operations so that it should be easy to translate into another language.

sub reverse_increment {
    my($n, $bits) = @_;
    my $carry = 2**$bits;
    while($carry > 1) {
        $carry /= 2;
        if($carry > $n) {
            return $carry + $n;
        } else {
            $n -= $carry;
        }
    }
    return 0;
}
Here's a solution which doesn't actually try to do any addition, but exploits the on/off pattern of the sequence (the most significant bit alternates every time, the next most significant alternates every other time, etc.). Adjust n as desired:

#include <iostream>

#define FLIP(x, i) do { (x) ^= (1 << (i)); } while(0)

int main() {
    int n = 3;
    int max = (1 << n);
    int x = 0;
    for(int i = 1; i <= max; ++i) {
        std::cout << x << std::endl;
        /* if n == 3, this next part is functionally equivalent to this:
         *
         * if((i % 1) == 0) FLIP(x, n - 1);
         * if((i % 2) == 0) FLIP(x, n - 2);
         * if((i % 4) == 0) FLIP(x, n - 3);
         */
        for(int j = 0; j < n; ++j) {
            if((i % (1 << j)) == 0) FLIP(x, n - (j + 1));
        }
    }
}
How about adding 1 to the most significant bit, then carrying to the next (less significant) bit if necessary? You could speed this up by operating on bytes:
1) Precompute a lookup table for counting in bit-reverse order from 0 to 255 (00000000 -> 10000000, 10000000 -> 01000000, ..., 11111111 -> 00000000).
2) Set all bytes in your multi-byte number to zero.
3) Increment the most significant byte using the lookup table. If the byte becomes 0, increment the next byte using the lookup table. If that byte becomes 0, increment the next one, and so on.
4) Go to step 3.
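A small Python sketch of those steps (the table NEXT below is hypothetical, built so that NEXT[b] is the bit-reversed successor of byte b):

def rev8(b):
    return int(format(b, '08b')[::-1], 2)

# Step 1: lookup table mapping each byte to its bit-reversed successor
# (00000000 -> 10000000, 10000000 -> 01000000, ..., 11111111 -> 00000000).
NEXT = [rev8((rev8(b) + 1) & 0xFF) for b in range(256)]

def increment(num):
    # num is a list of bytes, most significant byte first (step 2 starts
    # from all zeros). Steps 3 and 4: bump the MSB and carry rightwards.
    for i in range(len(num)):
        num[i] = NEXT[num[i]]
        if num[i] != 0:  # no wrap, so no carry into the next byte
            break
    return num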
With n as your power of 2 and x the variable you want to step:
(defun inv-step (x n) ; the following is a function declaration
  "returns a bit-inverse step of x, bounded by 2^n" ; documentation
  (do ((i (expt 2 (- n 1)) ; loop, init of i
          (/ i 2))         ; stepping of i
       (s x))              ; init of s as x
      ((not (integerp i))  ; breaking condition
       s)                  ; returned value if all bits are 1 (is 0 then)
    (if (< s i)            ; the loop's body: if s < i
        (return-from inv-step (+ s i)) ; -> add i to s and return the result
        (decf s i))))      ; else: reduce s by i
I commented it thoroughly as you may not be familiar with this syntax.
edit: here is the tail recursive version. It seems to be a little faster, provided that you have a compiler with tail call optimization.
(defun inv-step (x n)
  (let ((i (expt 2 (- n 1))))
    (cond ((= n 1)
           (if (zerop x) 1 0)) ; this is really (logxor x 1)
          ((< x i)
           (+ x i))
          (t
           (inv-step (- x i) (- n 1))))))
When you count from 0 to 2^n - 1 with the bit pattern reversed, you still cover the entire 0 to 2^n - 1 sequence, just in a different order, so:
Sum = 2^n * (2^n - 1) / 2
That's an O(1) operation. No need to do bit reversals.
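The permutation claim is easy to check numerically, e.g. for n = 3 (a quick throwaway snippet):

n = 3
seq = [int(format(i, '03b')[::-1], 2) for i in range(2 ** n)]
print(sorted(seq) == list(range(2 ** n)))      # True: just a permutation
print(sum(seq) == 2 ** n * (2 ** n - 1) // 2)  # True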
Edit: Of course, the original poster's question was about incrementing by a (reversed) one, which makes things simpler than adding two arbitrary values. So nwellnhof's answer contains the algorithm already.
Summing two bit-reversal values
Here is one solution in php:
function RevSum ($a, $b) {
    // loop until our adder, $b, is zero
    while ($b) {
        // get carry (aka overflow) bit for every bit-location by AND-operation
        // 0 + 0 --> 00 no overflow, carry is "0"
        // 0 + 1 --> 01 no overflow, carry is "0"
        // 1 + 0 --> 01 no overflow, carry is "0"
        // 1 + 1 --> 10 overflow! carry is "1"
        $c = $a & $b;

        // do 1-bit addition for every bit location at once by XOR-operation
        // 0 + 0 --> 00 result = 0
        // 0 + 1 --> 01 result = 1
        // 1 + 0 --> 01 result = 1
        // 1 + 1 --> 10 result = 0 (ignored that "1", already taken care of above)
        $a ^= $b;

        // now: shift carry bits to the next bit-locations to be added to $a in
        // the next iteration.
        // PHP_INT_MAX here is used to ensure that the most-significant bit of
        // $b will be cleared after shifting. see link in the side note below.
        $b = ($c >> 1) & PHP_INT_MAX;
    }
    return $a;
}
Side note: See this question about shifting negative values.
And as for test; start from zero and increment value by 8-bit reversed one (10000000):
$value = 0;
$add = 0x80; // 10000000 <-- "one" as bit reversed

for ($count = 20; $count--;) {     // loop 20 times
    printf("%08b\n", $value);      // show value as 8-bit binary
    $value = RevSum($value, $add); // do addition
}
... will output:
00000000
10000000
01000000
11000000
00100000
10100000
01100000
11100000
00010000
10010000
01010000
11010000
00110000
10110000
01110000
11110000
00001000
10001000
01001000
11001000
Let's assume the number 11101010, and our task is to find the next one.
1) Find the zero at the highest position, and mark that position as index.
11101010 (4th position, so index = 4)
2) Set to zero all bits at positions higher than index.
00001010
3) Change the zero found in step 1) to '1'.
00011010
That's it. This is by far the fastest algorithm, since most CPUs have instructions to achieve this very efficiently. Here is a C++ implementation which increments a 64-bit number in reversed pattern.
#include <intrin.h>

unsigned __int64 reversed_increment(unsigned __int64 number)
{
    unsigned long index;
    unsigned __int64 result;
    _BitScanReverse64(&index, ~number);      // index of the highest '1' in the inverted number (a trick to find the highest '0' of number)
    result = _bzhi_u64(number, index);       // set to '0' all bits of number above the index position
    result |= (unsigned __int64)1 << index;  // change the bit at the index position to '1'
    return result;
}
This doesn't meet your preference for avoiding bit operations; however, I fear there is no way to achieve something like this efficiently without them.
