Ruby unsigned right shift operator

I'm attempting to convert some of my Java code to (J)Ruby, and due to my lack of experience with bitwise operations, I ran into a problem that I can't seem to solve by myself.
Simply put, I don't know how to convert this piece of Java code into Ruby, as Ruby does not appear to have the unsigned right shift operator (>>>):
private static short flipEndian(short signedShort) {
    int input = signedShort & 0xFFFF;
    return (short) (input << 8 | (input & 0xFF00) >>> 8);
}
Here is my Ruby attempt so far, using the plain (signed) right shift:
def self.flip_endian(signed_short)
  input = signed_short & 0xFFFF
  input << 8 | (input & 0xFF00) >> 8
end

This will swap the first 2 bytes and cut off all the higher bits of an Integer:
def self.flip_endian(input)
  input << 8 & 0xFF00 | input >> 8 & 0xFF
end
Ruby's Integer is arbitrary precision, so there is no fixed sign bit to drag in; masking with & does the work of Java's >>> here.
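As a quick sanity check, the same expression can be evaluated in Python (used here purely as a bit calculator; shift and mask behavior on non-negative integers matches Ruby's):
def flip_endian(x):
    # same operator precedence as the Ruby version above
    return x << 8 & 0xFF00 | x >> 8 & 0xFF

assert flip_endian(0x1234) == 0x3412
assert flip_endian(0xABCD) == 0xCDAB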

Related

VHDL equivalent of Verilog localparam

I found the following statement in a Verilog module:
localparam str2=" Display Demo ", str2len=16;
It seems to me that str2 is a string value, but I wonder how this is processed in the following code snippet.
always @(write_base_addr)
    case (write_base_addr[8:7]) // select string as [y]
        0: write_ascii_data <= 8'hff & (str1 >> ({3'b0, (str1len - 1 - write_base_addr[6:3])} << 3)); // index string parameters as str[x]
        1: write_ascii_data <= 8'hff & (str2 >> ({3'b0, (str2len - 1 - write_base_addr[6:3])} << 3));
        2: write_ascii_data <= 8'hff & (str3 >> ({3'b0, (str3len - 1 - write_base_addr[6:3])} << 3));
        3: write_ascii_data <= 8'hff & (str4 >> ({3'b0, (str4len - 1 - write_base_addr[6:3])} << 3));
    endcase
Will the string value be converted into a bit vector first? write_ascii_data is only 8 bits long, which seems too short to fully store the result of the case expression. Is there any VHDL equivalent of a localparam string?
Verilog has no string types. A string literal gets converted to the equivalent ASCII bit vector, 8 bits per character, so str2 is a 128-bit vector parameter. The RHS expressions shift str2 to the right by some multiple of 8 bits and mask with 8'hff, selecting a single ASCII character, which is why the 8-bit write_ascii_data is wide enough.
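If it helps, here is a rough Python model of that indexing (hypothetical 4-character string; Python ints stand in for the bit vector):
# A string literal becomes an ASCII bit vector, 8 bits per character,
# with the first character in the most significant byte.
s = "Demo"                                         # hypothetical parameter
strvec = int.from_bytes(s.encode("ascii"), "big")  # the packed bit vector
strlen = len(s)
for i in range(strlen):
    # same shape as the Verilog RHS: right-shift by a multiple of 8, mask to 8 bits
    ch = 0xFF & (strvec >> ((strlen - 1 - i) * 8))
    print(chr(ch))  # prints D, e, m, o in order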

Fast CRC32 algorithm for reversed bit order

I am working with a microcontroller which calculates the CRC32 checksum of data I upload to its flash memory on the fly. This can in turn be used to verify that the upload was correct, by checking the resulting checksum after all data is uploaded.
The only problem is that the microcontroller reverses the bit order of the input bytes when they are run through the otherwise standard CRC32 calculation. This in turn means I need to reverse every byte of the data on the programming host in order to calculate the CRC32 sum for verification. As the programming host is somewhat constrained, this is quite slow.
I figure that if it's possible to modify the CRC32 lookup table so I can do the lookup without having to reverse the bit order, the verification algorithm would run many times faster. But I can't seem to figure out a way to do this.
To clarify the byte reversal, I need to change the input bytes in the following way:
01 02 03 04 -> 80 40 C0 20
It's a lot easier to see the reversal in binary representation of course:
00000001 00000010 00000011 00000100 ->
10000000 01000000 11000000 00100000
Edit
Here is the PoC Python code I use to verify the correctness of the CRC32 calculation; however, this reverses each byte (i.e. the slow way).
EDIT 2
I've also included my failed attempt to generate a permuted lookup table for use with a standard table-driven CRC32 algorithm.
The code spits out the correct reference CRC value first, and then the wrong LUT-calculated CRC afterwards.
import binascii

CRC32_POLY = 0xEDB88320

def reverse_byte_bits(x):
    '''
    Reverses the bit order of the given byte 'x' and returns the result
    '''
    x = ((x<<4) & 0xF0)|((x>>4) & 0x0F)
    x = ((x<<2) & 0xCC)|((x>>2) & 0x33)
    x = ((x<<1) & 0xAA)|((x>>1) & 0x55)
    return x

def reverse_bits(ba, blen):
    '''
    Reverses the bits of each byte in the given array of bytes
    '''
    bar = bytearray()
    for i in range(0, blen):
        bar.append(reverse_byte_bits(ba[i]))
    return bar

def crc32_reverse(ba):
    # Reverse the bits of each input byte
    bar = reverse_bits(ba, len(ba))
    # Calculate the CRC value
    return binascii.crc32(bar)

def gen_crc_table_msb():
    crctable = [0] * 256
    for i in range(0, 256):
        remainder = i
        for bit in range(0, 8):
            if remainder & 0x1:
                remainder = (remainder >> 1) ^ CRC32_POLY
            else:
                remainder = (remainder >> 1)
        # The correct index for the calculated value is the reverse of the index
        ix = reverse_byte_bits(i)
        crctable[ix] = remainder
    return crctable

def crc32_revlut(ba, lut):
    crc = 0xFFFFFFFF
    for x in ba:
        crc = lut[x ^ (crc & 0xFF)] ^ (crc >> 8)
    return ~crc

# Reference test which gives the correct CRC
test = bytearray([1, 2, 3, 4, 5, 6, 7, 8])
crcrev = crc32_reverse(test)
print("0x%08X" % (crcrev & 0xFFFFFFFF))

# Test using permuted lookup table, but standard CRC32 LUT algorithm
lut = gen_crc_table_msb()
crctst = crc32_revlut(test, lut)
print("0x%08X" % (crctst & 0xFFFFFFFF))
Does anyone have any hints on how this could be done?
By reversing the logic of which way the CRC "streams", the reversal in the main calculation can be avoided. So instead of crc >> 8 there would be crc << 8, and instead of XORing with the bottom byte of the CRC for the LUT index, we take the top byte. Like this:
def reverse_dword_bits(x):
    '''
    Reverses the bit order of the given dword 'x' and returns the result
    '''
    x = ((x<<16) & 0xFFFF0000)|((x>>16) & 0x0000FFFF)
    x = ((x<<8) & 0xFF00FF00)|((x>>8) & 0x00FF00FF)
    x = ((x<<4) & 0xF0F0F0F0)|((x>>4) & 0x0F0F0F0F)
    x = ((x<<2) & 0xCCCCCCCC)|((x>>2) & 0x33333333)
    x = ((x<<1) & 0xAAAAAAAA)|((x>>1) & 0x55555555)
    return x

def gen_crc_table_msb():
    crctable = [0] * 256
    for i in range(0, 256):
        remainder = i
        for bit in range(0, 8):
            if remainder & 0x1:
                remainder = (remainder >> 1) ^ CRC32_POLY
            else:
                remainder = (remainder >> 1)
        # The correct index for the calculated value is the reverse of the index
        ix = reverse_byte_bits(i)
        crctable[ix] = reverse_dword_bits(remainder)
    return crctable

def crc32_revlut(ba, lut):
    crc = 0xFFFFFFFF
    for x in ba:
        crc = lut[x ^ (crc >> 24)] ^ ((crc << 8) & 0xFFFFFFFF)
    return reverse_dword_bits(~crc)
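As a quick sanity check (a sketch assuming the question's crc32_reverse is loaded alongside the two functions above, which replace the question's versions of the same names):
lut = gen_crc_table_msb()
for data in (bytearray([1, 2, 3, 4, 5, 6, 7, 8]), bytearray(b"arbitrary payload")):
    assert (crc32_revlut(data, lut) & 0xFFFFFFFF) == (crc32_reverse(data) & 0xFFFFFFFF)
print("reversed-stream LUT matches the reference")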

CRC Reverse Engineer (Checksum from Machine / PC)

I'm currently trying to determine the CRC used in messages sent from a machine to a PC (and vice versa).
The devices communicate over a serial (RS232) connection.
All I have is captured data, from which we need to create a program that works with both devices.
The data was given to me by my boss, and the original program was corrupted, so we are trying to work it out from the captures.
I hope everyone can help.
Thanks :)
The sequence to use for the CRC calculation in your protocol is the ASCII string starting from the first printing character (e.g. the 'R' from REQ) up to and including the '1E' separator.
It's a CRC with the following specs according to our CRC calculator
CRC:16,1021,0000,0000,No,No
which means:
CRC width: 16 bit (of course)
polynomial: 1021 HEX (truncated CRC polynomial)
init value: 0000
final Xor applied: 0000
reflectedInput: No
reflectedOutput: No
(If 'init value' were FFFF, it would be a "16 bit width CRC as designated by CCITT").
See also the Docklight CRC glossary and the Boost CRC library on what the CRC terms mean plus sample code.
What I did was write a small script that tries out the popular 16-bit CRCs on varying parts of the first simple "REQ=INI" command, to see if I end up with a sum of 4255. This failed, but instead of going full brute force and trying all sorts of polynomials, I assumed that it was maybe just an oddball / flawed implementation of the known standards, and indeed succeeded with a variation of the CRC-CCITT.
Here is some slow & easy C code (not table-based!) to calculate all sorts of CRCs:
// Generic, not table-based CRC calculation
// Based on and credits to the following:
// CRC tester v1.3 written on 4th of February 2003 by Sven Reifegerste (zorc/reflex)
#include <assert.h>
#include <stdbool.h>

unsigned long reflect(unsigned long crc, int bitnum) {
    // reflects the lower 'bitnum' bits of 'crc'
    unsigned long i, j = 1, crcout = 0;
    for (i = (unsigned long)1 << (bitnum - 1); i; i >>= 1) {
        if (crc & i) crcout |= j;
        j <<= 1;
    }
    return (crcout);
}

unsigned long calcCRC(
    const int width, const unsigned long polynomial, const unsigned long initialRemainder,
    const unsigned long finalXOR, const int reflectedInput, const int reflectedOutput,
    const unsigned char message[], const long startIndex, const long endIndex)
{
    // Ensure the width is in range: 1-32 bits
    assert(width >= 1 && width <= 32);
    // some constant parameters used
    const bool b_refInput = (reflectedInput > 0);
    const bool b_refOutput = (reflectedOutput > 0);
    const unsigned long crcmask = ((((unsigned long)1 << (width - 1)) - 1) << 1) | 1;
    const unsigned long crchighbit = (unsigned long)1 << (width - 1);
    unsigned long j, c, bit;
    unsigned long crc = initialRemainder;
    for (long msgIndex = startIndex; msgIndex <= endIndex; ++msgIndex) {
        // feed one message byte, MSB first (LSB first if reflected input)
        c = (unsigned long)message[msgIndex];
        if (b_refInput) c = reflect(c, 8);
        for (j = 0x80; j; j >>= 1) {
            bit = crc & crchighbit;
            crc <<= 1;
            if (c & j) bit ^= crchighbit;
            if (bit) crc ^= polynomial;
        }
    }
    if (b_refOutput) crc = reflect(crc, width);
    crc ^= finalXOR;
    crc &= crcmask;
    return (crc);
}
With this code and the CRC specs listed above, I have been able to re-calculate the following three sample CRCs:
10.03.2014 22:20:57.109 [TX] - REQ=INI<CR><LF>
<RS>CRC=4255<CR><LF>
<GS>
10.03.2014 22:20:57.731 [TX] - ANS=INI<CR><LF>
STATUS=0<CR><LF>
<RS>CRC=57654<CR><LF>
<GS>
10.03.2014 22:20:59.323 [TX] - ANS=INI<CR><LF>
STATUS=0<CR><LF>
MID="CTL1"<CR><LF>
DEF="DTLREQ";1025<CR><LF>
INFO=0<CR><LF>
<RS>CRC=1683<CR><LF>
<GS>
I failed on the very complex one with the DEF= parts - probably didn't understand the character sequence correctly.
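As a standalone cross-check of the first sample, here is a minimal Python sketch of the same bitwise CRC (spec CRC:16,1021,0000,0000; the message bytes run from the 'R' of REQ through the 1E separator, per the findings above):
def crc16_msb_first(data, poly=0x1021, init=0x0000, xorout=0x0000):
    # bitwise CRC-16, MSB first, no input/output reflection
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc ^ xorout

msg = b"REQ=INI\r\n\x1e"
print(crc16_msb_first(msg))  # should print 4255, matching the first log entry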
The Docklight script I used to reverse engineer this:
Sub crcReverseEngineer()
    Dim crctypes(7)
    crctypes(0) = "CRC:16,1021,FFFF,0000" ' CCITT
    crctypes(1) = "CRC:16,8005,0000,0000" ' CRC-16
    crctypes(2) = "CRC:16,8005,FFFF,0000" ' CRC-MODBUS
    ' let's also try some nonstandard variations with different init and final XOR,
    ' but stick to the two known polynomials.
    crctypes(3) = "CRC:16,1021,FFFF,FFFF"
    crctypes(4) = "CRC:16,1021,0000,FFFF"
    crctypes(5) = "CRC:16,1021,0000,0000"
    crctypes(6) = "CRC:16,8005,FFFF,FFFF"
    crctypes(7) = "CRC:16,8005,FFFF,0000"
    crcString = "06 1C 52 45 51 3D 49 4E 49 0D 0A 1E 43 52 43 3D 30 30 30 30 0D 0A 1D"
    For reflectedInOrOut = 0 To 3
        For cType = 0 To 7
            crcSpec = crctypes(cType) & "," & IIf(reflectedInOrOut Mod 2 = 1, "Yes", "No") & "," & IIf(reflectedInOrOut > 1, "Yes", "No")
            For cStart = 1 To 3
                For cEnd = 9 To (Len(crcString) + 1) / 3
                    subDataString = Mid(crcString, (cStart - 1) * 3 + 1, (cEnd - cStart + 1) * 3)
                    result = DL.CalcChecksum(crcSpec, subDataString, "H")
                    resultInt = CLng("&h" + Left(result, 2)) * 256 + CLng("&h" + Right(result, 2))
                    If resultInt = 4255 Then
                        DL.AddComment "Found it!"
                        DL.AddComment "sequence: " & subDataString
                        DL.AddComment "CRC spec: " & crcSpec
                        DL.AddComment "CRC result: " & result & " (Integer = " & resultInt & ")"
                        Exit Sub
                    End If
                Next
            Next
        Next
    Next
End Sub

Public Function IIf(blnExpression, vTrueResult, vFalseResult)
    If blnExpression Then
        IIf = vTrueResult
    Else
        IIf = vFalseResult
    End If
End Function
Hope this helps and I'm happy to provide extra information or clarify details.

VB.NET enum declaration syntax

I recently saw a declaration of an enum that looks like this:
<Serializable()>
<Flags()>
Public Enum SiteRoles
    ADMIN = 10 << 0
    REGULAR = 5 << 1
    GUEST = 1 << 2
End Enum
I was wondering if someone can explain what the "<<" syntax does or what it is used for? Thank you...
The enum has a Flags attribute, which means the values are used as bit flags.
Bit flags are useful when representing more than one attribute in a single variable.
These are the flags for a 16-bit (attribute) variable (hopefully you see the pattern, which can continue up to any number of bits, limited by the platform/variable type of course):
BIT1 = 0x1 (1 << 0)
BIT2 = 0x2 (1 << 1)
BIT3 = 0x4 (1 << 2)
BIT4 = 0x8 (1 << 3)
BIT5 = 0x10 (1 << 4)
BIT6 = 0x20 (1 << 5)
BIT7 = 0x40 (1 << 6)
BIT8 = 0x80 (1 << 7)
BIT9 = 0x100 (1 << 8)
BIT10 = 0x200 (1 << 9)
BIT11 = 0x400 (1 << 10)
BIT12 = 0x800 (1 << 11)
BIT13 = 0x1000 (1 << 12)
BIT14 = 0x2000 (1 << 13)
BIT15 = 0x4000 (1 << 14)
BIT16 = 0x8000 (1 << 15)
To set a bit (attribute) you simply use the bitwise OR operator:
UInt16 flags = 0;
flags |= BIT1; // set bit (Attribute) 1
flags |= BIT13; // set bit (Attribute) 13
To determine whether a bit (attribute) is set, you simply use the bitwise AND operator:
bool bit1 = (flags & BIT1) > 0; // true;
bool bit13 = (flags & BIT13) > 0; // true;
bool bit16 = (flags & BIT16) > 0; // false;
In your example above, ADMIN and REGULAR share the same value, 10 (binary 1010, i.e. bits 2 and 4 in the list above), since (10 << 0) and (5 << 1) are equal; GUEST is bit number 3 (0x4).
Therefore you could determine the SiteRole by using the bitwise AND operator, as shown above:
UInt32 SiteRole = ...;
IsAdmin = (SiteRole & ADMIN) > 0;
IsRegular = (SiteRole & REGULAR) > 0;
IsGuest = (SiteRole & GUEST) > 0;
Of course, you can also set the SiteRole by using the bitwise OR operator, as shown above:
UInt32 SiteRole = 0x00000000;
SiteRole |= ADMIN;
The real question is why do ADMIN and REGULAR have the same values? Maybe it's a bug.
These are bitwise shift operations. Bitwise shifts are used here to transform the integer value of the enum members to a different number. Each enum member will actually have the bit-shifted value. This is probably an obfuscation technique, and is the same as setting a fixed integer value for each enum member.
Each integer has a binary representation (like 0111011); bit shifting allows bits to move to the left (<<) or right (>>) depending on which operator is used.
For example:
10 << 0 means:
1010 (10 in binary form) shifted left by 0 bits is still 1010
5 << 1 means:
101 (5 in binary form) shifted one bit to the left = 1010 (a zero is appended on the right)
so 5 << 1 is 10 (because 1010 represents the number 10)
and so on.
In general, the x << y operation can be seen as a fast way to calculate x * Pow(2, y).
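A quick way to confirm the arithmetic above (Python here, purely as a calculator):
assert 10 << 0 == 10         # ADMIN
assert 5 << 1 == 10          # REGULAR -- same value as ADMIN
assert 1 << 2 == 4           # GUEST
assert 5 << 3 == 5 * 2 ** 3  # x << y == x * pow(2, y)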
You can read this article for more detailed info on bit shifting in .NET http://www.blackwasp.co.uk/CSharpShiftOperators.aspx

To convert RGB 12 bit data to RGB 12 bit packed data

I have some RGB (image) data which is 12-bit: each R, G, B component has 12 bits, 36 bits in total.
Now I need to pack this 12-bit RGB data into a packed data format. I have tried to describe the packing below.
At present I have input data as:
B0 - 12 bits, G0 - 12 bits, R0 - 12 bits, B1 - 12 bits, G1 - 12 bits, R1 - 12 bits ... and so on.
I need to convert it to a packed format as:
Byte1 - B8 (8 bits of B0 data)
Byte2 - G4B4 (remaining 4 bits of B0 data + first 4 bits of G0)
Byte3 - G8 (remaining 8 bits of G0)
Byte4 - R8 (first 8 bits of R0)
Byte5 - B4R4 (first 4 bits of B1 + last 4 bits of R0)
I have to write these individual bytes to a file in text format, one byte below another.
I have to do a similar thing for 10-bit RGB input data.
Is there any tool/software that can do the conversion I am looking for?
I am trying to do it in a C program: I am forming a 64-bit value from the individual 12 bits of R, G, B (36 bits total), but after that I am not able to come up with the logic to pick the necessary bits from the R, G, B data to form a byte stream and dump them to a text file.
Any pointers will be helpful.
This is pretty much untested, super messy code I whipped together to give you a start. It's probably not packing the bytes exactly as you want, but you should get the general idea.
Apologies for the quick and nasty code, only had a couple of minutes, hope it's of some help anyway.
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    unsigned short B;
    unsigned short G;
    unsigned short R;
} UnpackedRGB;

UnpackedRGB test[] =
{
    {0x0FFF, 0x000, 0x0EEE},
    {0x000, 0x0FEF, 0xDEF},
    {0xFED, 0xDED, 0xFED},
    {0x111, 0x222, 0x333},
    {0xA10, 0xB10, 0xC10}
};

UnpackedRGB buffer = {0, 0, 0};

int main(int argc, char** argv)
{
    int numSourcePixels = sizeof(test)/sizeof(UnpackedRGB);
    /* 4.5 bytes per pixel; round up to the last byte */
    int destbytes = ((numSourcePixels * 45) + 5) / 10;
    unsigned char* dest = (unsigned char*)malloc(destbytes);
    unsigned char* currentDestByte = dest;
    UnpackedRGB *pixel1;
    UnpackedRGB *pixel2;
    int ixSource;
    for (ixSource = 0; ixSource < numSourcePixels; ixSource += 2)
    {
        pixel1 = &test[ixSource];
        /* use the all-zero pad pixel if the pixel count is odd */
        pixel2 = ((ixSource + 1) < numSourcePixels ? &test[ixSource + 1] : &buffer);
        *currentDestByte++ = 0x0FF & pixel1->B;                                       /* Byte1: B0 low 8 */
        *currentDestByte++ = ((0xF00 & pixel1->B) >> 8) | ((0x00F & pixel1->G) << 4); /* Byte2: G4B4 */
        *currentDestByte++ = (0xFF0 & pixel1->G) >> 4;                                /* Byte3: G0 high 8 */
        *currentDestByte++ = 0x0FF & pixel1->R;                                       /* Byte4: R0 low 8 */
        *currentDestByte++ = ((0xF00 & pixel1->R) >> 8) | ((0x00F & pixel2->B) << 4); /* Byte5: B4R4 */
        if ((ixSource + 1) >= numSourcePixels)
        {
            break;
        }
        *currentDestByte++ = (0xFF0 & pixel2->B) >> 4;
        *currentDestByte++ = 0x0FF & pixel2->G;
        *currentDestByte++ = ((0xF00 & pixel2->G) >> 8) | ((0x00F & pixel2->R) << 4);
        *currentDestByte++ = (0xFF0 & pixel2->R) >> 4;
    }
    FILE* outfile = fopen("output.bin", "wb");
    fwrite(dest, 1, destbytes, outfile);
    fclose(outfile);
    free(dest);
    return 0;
}
Use bitwise & (and), | (or), and shift <<, >> operators.
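For illustration, here is a minimal Python sketch that packs one pixel pair with exactly those operators, assuming 12-bit inputs and the same nibble placement as the C answer above (second component in the high nibble):
def pack_pair(b0, g0, r0, b1, g1, r1):
    # each argument is one 12-bit component; the result is 9 packed bytes
    return bytes([
        b0 & 0xFF,                      # Byte1: B0 low 8
        (b0 >> 8) | ((g0 & 0xF) << 4),  # Byte2: G4B4
        (g0 >> 4) & 0xFF,               # Byte3: G0 high 8
        r0 & 0xFF,                      # Byte4: R0 low 8
        (r0 >> 8) | ((b1 & 0xF) << 4),  # Byte5: B4R4
        (b1 >> 4) & 0xFF,
        g1 & 0xFF,
        (g1 >> 8) | ((r1 & 0xF) << 4),
        (r1 >> 4) & 0xFF,
    ])

print(pack_pair(0xFFF, 0x000, 0xEEE, 0x000, 0xFEF, 0xDEF).hex())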
