Horizontally Flip a One Bit Bitmap Line - algorithm

I'm looking for an algorithm to flip a 1 Bit Bitmap line horizontally. Remember these lines are DWORD aligned!
I'm currently decoding an RLE stream to an 8-bit-per-pixel buffer, then re-encoding to a 1-bit line; however, I would like to keep everything in the 1-bit space in an effort to increase speed. Profiling indicates this portion of the program is relatively slow compared to the rest.
Example line (Before Flip):
FF FF FF FF 77 AE F0 00
Example line (After Flip):
F7 5E EF FF FF FF F0 00

Create a conversion table to reverse the bits in a byte:
byte[] convert = new byte[256];
for (int i = 0; i < 256; i++) {
    int value = 0;
    for (int bit = 1; bit <= 128; bit <<= 1) {
        value <<= 1;
        if ((i & bit) != 0) value++;
    }
    convert[i] = (byte)value;
}
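For example, 0x77 (binary 0111 0111) reversed is 0xEE (1110 1110), which you can check with:
Console.WriteLine(convert[0x77].ToString("X2")); // prints "EE"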
Now you can use the table to reverse each byte; then you just have to store the byte in the right place in the result:
byte[] data = { 0xFF, 0xFF, 0xFF, 0xFF, 0x77, 0xAE, 0xF0, 0x00 };
int width = 52;
int shift = data.Length * 8 - width;
int shiftBytes = data.Length - 1 - shift / 8;
int shiftBits = shift % 8;
byte[] result = new byte[data.Length];
for (int i = 0; i < data.Length; i++) {
    byte swap = convert[data[i]];
    if (shiftBits == 0) {
        result[shiftBytes - i] = swap;
    } else {
        if (shiftBytes - i >= 0) {
            result[shiftBytes - i] |= (byte)(swap << shiftBits);
        }
        if (shiftBytes - i - 1 >= 0) {
            result[shiftBytes - i - 1] |= (byte)(swap >> (8 - shiftBits));
        }
    }
}
Console.WriteLine(BitConverter.ToString(result));
Output:
F7-5E-EF-FF-FF-FF-F0-00

The following code reads and reverses the data in blocks of 32 bits, treating each block as an integer. The bit-reversal is split into two parts because, on a little-endian machine, reading four bytes as a 32-bit integer already reverses the byte order.
private static void Main()
{
    var lineLength = 52;
    var input = new Byte[] { 0xFF, 0xFF, 0xFF, 0xFF, 0x77, 0xAE, 0xF0, 0x00 };
    var output = new Byte[input.Length];

    UInt32 lastValue = 0x00000000;
    var numberBlocks = lineLength / 32 + ((lineLength % 32 == 0) ? 0 : 1);
    var numberBitsInLastBlock = lineLength % 32;

    for (Int32 block = 0; block < numberBlocks; block++)
    {
        var rawValue = BitConverter.ToUInt32(input, 4 * block);
        var reversedValue = (ReverseBitsA(rawValue) << (32 - numberBitsInLastBlock)) | (lastValue >> numberBitsInLastBlock);
        lastValue = rawValue;
        BitConverter.GetBytes(ReverseBitsB(reversedValue)).CopyTo(output, 4 * (numberBlocks - block - 1));
    }

    Console.WriteLine(BitConverter.ToString(input).Replace('-', ' '));
    Console.WriteLine(BitConverter.ToString(output).Replace('-', ' '));
}

private static UInt32 SwapBitGroups(UInt32 value, UInt32 mask, Int32 shift)
{
    return ((value & mask) << shift) | ((value & ~mask) >> shift);
}

private static UInt32 ReverseBitsA(UInt32 value)
{
    value = SwapBitGroups(value, 0x55555555, 1);
    value = SwapBitGroups(value, 0x33333333, 2);
    value = SwapBitGroups(value, 0x0F0F0F0F, 4);
    return value;
}

private static UInt32 ReverseBitsB(UInt32 value)
{
    value = SwapBitGroups(value, 0x00FF00FF, 8);
    value = SwapBitGroups(value, 0x0000FFFF, 16);
    return value;
}
It is a bit ugly and not robust against errors, but it is just sample code. It outputs the following:
FF FF FF FF 77 AE F0 00
F7 5E EF FF FF FF F0 00

How to calculate reconciliation factor in CRC32 for CVN number and CALID
Can someone help me with calculating a reconciliation factor in CRC32 and CRC16? The calculated factor should make the checksum come out the same every time for the CVN calculation. I want a formula for getting the offset or reconciliation factor for the CRC32 calculation.
Let me clarify: CVN is the Calibration Vehicle Network identification number. For example, I have two different structures, each with 10 parameters, whose checksums are, say, 0xfefeABCD and 0x12345678. To each structure I have to add one more parameter, the calibration factor. When I add this parameter, the checksum of both structures changes, but I need an algorithm that makes both structures produce the same checksum by choosing the calibration factor and offset. NOTE: both structures have the same 10 variables but with different values. I do not know the values in advance, yet I still need the same checksum after adding the factor value to the structure.
I am using the function below. The data I pass to the function is shown, and the final result will be stored in buffer.
Please let me know what I am missing to get the result we want. I am sure I am missing something.
Start code :
#include <stdio.h>

#define CRCPOLY  0xEDB88320
#define CRCINV   0x5B358FD3  // inverse poly of (x^N) mod CRCPOLY
#define INITXOR  0xFFFFFFFF
#define FINALXOR 0xFFFFFFFF

void make_crc_revtable(unsigned int *crc_revtable);
int crc32_bitoriented(unsigned char *buffer, int length);

unsigned int crc_table[256];
unsigned char buffer[] = { 0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8, 0x9,
                           0xA, 0xB, 0xC, 0xD, 0xE, 0xF, 0x0, 0x1, 0x2, 0x3,
                           0x0, 0x0, 0x0, 0x0, 0x0 };
unsigned int crc_revtable[256];
unsigned int tcrcreg;
unsigned int CRC_32;
unsigned int fix_pos = 21;
unsigned int length = 256;
void fix_crc_pos(unsigned char *buffer, int length, unsigned int tcrcreg, int fix_pos, unsigned int *crc_table, unsigned int *crc_revtable)
{
    int i;

    // make sure fix_pos is within 0..(length - 1)
    fix_pos = ((fix_pos % length) + length) % length;

    // calculate crc register at position fix_pos; this is essentially crc32()
    unsigned int crcreg = INITXOR;
    for (i = 0; i < fix_pos; ++i)
    {
        crcreg = (crcreg >> 8) ^ crc_table[((crcreg ^ buffer[i]) & 0xFF)];
    }

    // inject crcreg as content
    for (i = 0; i < 4; ++i)
    {
        buffer[fix_pos + i] = ((crcreg >> i * 8) & 0xFF);
    }

    // calculate crc backwards to fix_pos, beginning at the end
    tcrcreg = (tcrcreg ^ FINALXOR);
    for (i = length - 1; i >= fix_pos; --i)
    {
        tcrcreg = ((tcrcreg << 8) ^ (crc_revtable[tcrcreg >> 3*8] ^ buffer[i]));
    }

    // inject new content
    for (i = 0; i < 4; ++i)
    {
        buffer[fix_pos + i] = (tcrcreg >> i * 8) & 0xFF;
    }
}
void make_crc_revtable(unsigned int *crc_revtable)
{
    unsigned int c;
    int n, k;

    for (n = 0; n < 256; n++)
    {
        c = n << 3*8;
        for (k = 0; k < 8; k++)
        {
            if ((c & 0x80000000) != 0)
            {
                c = (((c ^ CRCPOLY) << 1) | 1);
            }
            else
            {
                c = (c << 1);
            }
        }
        crc_revtable[n] = c;
    }
}
void make_crc_table(unsigned int *table)
{
    unsigned int c;
    int n, k;

    for (n = 0; n < 256; n++)
    {
        c = n;
        for (k = 0; k < 8; k++)
        {
            if ((c & 1) != 0)
            {
                c = CRCPOLY ^ (c >> 1);
            }
            else
            {
                c = c >> 1;
            }
        }
        table[n] = c;
    }
}
int crc32_bitoriented(unsigned char *buffer, int length)
{
    int i, j;
    unsigned int crcreg = INITXOR;

    for (j = 0; j < length; ++j)
    {
        unsigned char b = buffer[j];
        for (i = 0; i < 8; ++i)
        {
            if ((crcreg ^ b) & 1)
            {
                crcreg = (crcreg >> 1) ^ CRCPOLY;
            }
            else
            {
                crcreg >>= 1;
            }
            b >>= 1;
        }
    }
    return crcreg ^ FINALXOR;
}
int main()
{
    length = sizeof(buffer);

    CRC_32 = crc32_bitoriented(buffer, length);
    printf("\nCRC_32 :%x ", CRC_32);

    make_crc_table(&crc_table[0]);
    make_crc_revtable(&crc_revtable[0]);

    fix_crc_pos(buffer, length, tcrcreg, fix_pos, &crc_table[0], &crc_revtable[0]);

    printf("\nModified Buffer:\n");
    for (int i = 1; i <= length; i++)
    {
        printf("0x%x ", buffer[i-1]);
        if (0 == (i % 5))
        {
            printf("\n");
        }
    }
    printf("\n");

    CRC_32 = crc32_bitoriented(buffer, length);
    printf("\nFinal CRC_32 :%x ", CRC_32);
    return 0;
}
----------------------END Code---------------
How do we get the offset and reconciliation factor value to get the same CRC every time?
unchanged data in buffer:
0x0 0x1 0x2 0x3 0x4
0x5 0x6 0x7 0x8 0x9
0xA 0xB 0xC 0xD 0xE
0xF 0x0 0x1 0x2 0x3
0x0 0x0 0x0 0x0 0x0
Your code lost some things in copying. There needs to be a length parameter in fix_crc_pos(). In make_crc_revtable() you go to 256, not 25. You need a place to put the reverse table: uint32_t crc_revtable[256];. You need to make the forward table:
void make_crc_table(uint32_t *crc_table) {
    for (int n = 0; n < 256; n++) {
        uint32_t crc = n;
        for (int k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ CRCPOLY : crc >> 1;
        crc_table[n] = crc;
    }
}
and a place to put that: uint32_t crc_table[256];.
Then fix_crc_pos() will work. You give it the data you want the CRC over, buffer, with length length. You give it the CRC you want to force the data to have, tcrcreg, and the offset in the data, fix_pos, where there are four bytes you are allowing it to modify to get that CRC. And you provide it with the two tables you built by calling the make functions.

Decoding hexadecimal format

I have this string in hexadecimal:
"0000803F00000000000000B4B410D1A90000803FB41051B500000034B41051350000803F000000000000000000C05B400000000000C06B400000000000D07440"
and I know what it contains:
(1, 0, -1.192093e-007),
(-9.284362e-014, 1, -7.788287e-007),
(1.192093e-007, 7.788287e-007, 1),
(111, 222, 333).
And yes, it is a transform matrix!
Decoding the first 72 characters (8 chars per number) was trivial; you only need to split them into groups of 8 and interpret each as an IEEE 754 single-precision float, e.g. 0x0000803F = 1.0f.
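For instance, a minimal check of that decoding (assuming the usual little-endian byte order):
byte[] one = { 0x00, 0x00, 0x80, 0x3F };
Console.WriteLine(BitConverter.ToSingle(one, 0)); // prints 1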
So we still have "000000000000000000C05B400000000000C06B400000000000D07440", which contains the fourth vector, but I have never seen this kind of numeric encoding before.
Any thoughts on this?
It looks like these are 8-byte IEEE floating point numbers, starting at byte 40. So the layout is:
Bytes 0-11: first vector, 3 single-precision numbers
Bytes 12-23: second vector, 3 single-precision numbers
Bytes 24-35: third vector, 3 single-precision numbers
Bytes 36-39: Unused? (Padding?)
Bytes 40-63: fourth vector, 3 double-precision numbers
The code below shows an example of parsing this in C#. The output of the code is:
(1, 0, -1.192093E-07)
(-9.284362E-14, 1, -7.788287E-07)
(1.192093E-07, 7.788287E-07, 1)
(111, 222, 333)
Sample code:
using System;

class Program
{
    static void Main(string[] args)
    {
        string text = "0000803F00000000000000B4B410D1A90000803FB41051B500000034B41051350000803F000000000000000000C05B400000000000C06B400000000000D07440";
        byte[] bytes = ParseHex(text);

        for (int i = 0; i < 3; i++)
        {
            float x = BitConverter.ToSingle(bytes, i * 12);
            float y = BitConverter.ToSingle(bytes, i * 12 + 4);
            float z = BitConverter.ToSingle(bytes, i * 12 + 8);
            Console.WriteLine($"({x}, {y}, {z})");
        }

        // Final vector
        {
            double x = BitConverter.ToDouble(bytes, 40);
            double y = BitConverter.ToDouble(bytes, 48);
            double z = BitConverter.ToDouble(bytes, 56);
            Console.WriteLine($"({x}, {y}, {z})");
        }
    }

    // From https://stackoverflow.com/a/854026/9574109
    public static byte[] ParseHex(string hex)
    {
        int offset = hex.StartsWith("0x") ? 2 : 0;
        if ((hex.Length % 2) != 0)
        {
            throw new ArgumentException("Invalid length: " + hex.Length);
        }
        byte[] ret = new byte[(hex.Length - offset) / 2];
        for (int i = 0; i < ret.Length; i++)
        {
            ret[i] = (byte)((ParseNybble(hex[offset]) << 4)
                          | ParseNybble(hex[offset + 1]));
            offset += 2;
        }
        return ret;
    }

    static int ParseNybble(char c)
    {
        if (c >= '0' && c <= '9')
        {
            return c - '0';
        }
        if (c >= 'A' && c <= 'F')
        {
            return c - 'A' + 10;
        }
        if (c >= 'a' && c <= 'f')
        {
            return c - 'a' + 10;
        }
        throw new ArgumentException("Invalid hex digit: " + c);
    }
}

Processing Image data changes during save

I'm trying to create a program to hide data in an image file. The data bits are hidden in the last bit of every pixel's blue value, and the first four hidden bytes contain the length of the data that follows.
Everything works fine when I encrypt the data into the image and then decrypt it without saving the image in between. However, if I encrypt the data into an image, save it, open the file again and then try to decrypt it, decryption fails since the values seem to have changed.
I wonder if something similar is happening as with txt files, where a BOM containing byte-order data is prepended to the file?
The code works if I change color c = crypted.pixels[pos + i]; to color c = original.pixels[pos + i]; in the readByteAt function and run the encrypting function first and then the decryption function. This makes the decryption function read the just-encrypted image still in program memory instead of reading it from the file.
Any ideas on what causes this or how to prevent it are welcome!
Here is the full (messy) code:
PImage original;
PImage crypted;

int imagesize;
boolean ready = false;

void setup() {
  size(100, 100);
  imagesize = width * height;
}

void draw() {
}

void encrypt()
{
  original = loadImage("image.jpg");
  original.loadPixels();
  println("begin encrypt");
  int pos = 0;

  byte b[] = loadBytes("DATA.txt");
  println("encrypting in image...");

  int len = b.length;
  println("len " + len);

  writeByteAt((len >> (3*8)) & 0xFF, 0);
  writeByteAt((len >> (2*8)) & 0xFF, 8);
  writeByteAt((len >> (1*8)) & 0xFF, 16);
  writeByteAt(len & 0xFF, 24);
  pos = 32;

  for (int i = 3; i < b.length; i++) {
    int a = b[i] & 0xff;
    print(char(a));
    writeByteAt(a, pos);
    pos += 8;
  }

  original.updatePixels();
  println();
  println("done");
  original.save("encrypted.jpg");
}

void writeByteAt(int b, int pos)
{
  println("writing " + b + " at " + pos);
  for (int i = 0; i < 8; i++)
  {
    color c = original.pixels[pos + i];
    int v = int(blue(c));
    if ((b & (1 << i)) > 0)
    {
      v = v | 1;
    } else
    {
      v = v & 0xFE;
    }
    original.pixels[pos+i] = color(red(c), green(c), v);
    //original.pixels[pos+i] = color(255,255,255);
  }
}

int readByteAt(int pos)
{
  int b = 0;
  for (int i = 0; i < 8; i++)
  {
    color c = crypted.pixels[pos + i];
    int v = int(blue(c));
    if ((v & 1) > 0)
    {
      b += (1 << i);
    }
  }
  return b;
}

void decrypt()
{
  crypted = loadImage("encrypted.jpg");
  crypted.loadPixels();
  println("begin decrypt");
  int pos = 0;

  PrintWriter output = createWriter("out.txt");
  println("decrypting...");

  int len = 0;
  len += readByteAt(0) << 3*8;
  len += readByteAt(8) << 2*8;
  len += readByteAt(16) << 1*8;
  len += readByteAt(24);
  pos = 32;

  if (len >= imagesize)
  {
    println("ERROR: DATA LENGTH OVER IMAGE SIZE");
    return;
  }

  println(len);

  while (pos < ((len+1)*8)) {
    output.print(char(readByteAt(pos)));
    print(char(readByteAt(pos)));
    pos += 8;
  }

  output.flush(); // Writes the remaining data to the file
  output.close();
  println("\nDone");
}

void keyPressed()
{
  if (key == 'e')
  {
    encrypt();
  }
  if (key == 'd')
  {
    decrypt();
  }
}

ByteBuffer stores -1 and 0 values from BufferedImage.getRGB() NOT 1 and 0

I'm currently working on a program that stores RGB-info from two images to compare them.
I created two example images with paint.net.
Both are 16x16 and one is BLUE and the other one is RED.
I set the RGB value in paint.net to (255, 0, 0) for the red image and to (0, 0, 255) for the blue image.
Then I loaded one into a ByteBuffer and looked inside it:
// Buffer for texture data
ByteBuffer res = BufferUtils.makeByteBufferT4(w * h);

// Convert pixel format
for (int y = 0; y != h; y++) {
    for (int x = 0; x != w; x++) {
        int pp = bi.getRGB(x, y);
        byte a = (byte) ((pp & 0xff000000) >> 24);
        byte r = (byte) ((pp & 0x00ff0000) >> 16);
        byte g = (byte) ((pp & 0x0000ff00) >> 8);
        byte b = (byte) (pp & 0x000000ff);
        res.put((y * w + x) * 4 + 0, r);
        res.put((y * w + x) * 4 + 1, g);
        res.put((y * w + x) * 4 + 2, b);
        res.put((y * w + x) * 4 + 3, a);
    }
}

public static ByteBuffer makeByteBufferT4(int length) {
    // As "int" in Java has 4 bytes we have to multiply our length with 4 for every single int value
    ByteBuffer res = null;
    return res = ByteBuffer.allocateDirect(length * 4);
}
Via res.get(0) I expected the value 1, but I got -1.
Why is this, shouldn't it store the value 1?
This is not a problem that affects my code negatively; it's more an understanding issue I have.

Decode a websocket frame

I am trying to decode a websocket frame, but I'm not successful when it comes to decoding the extended payload. Here is what I have achieved so far:
char *in = data;
char *buffer;
unsigned int i;
unsigned char mask[4];
unsigned int packet_length = 0;
int rc;

/* Expect a finished text frame. */
assert(in[0] == '\x81');

packet_length = ((unsigned char) in[1]) & 0x7f;

mask[0] = in[2];
mask[1] = in[3];
mask[2] = in[4];
mask[3] = in[5];

if (packet_length <= 125) {          /* This decoding works */
    /* Unmask the payload. */
    for (i = 0; i < packet_length; i++)
        in[6 + i] ^= mask[i % 4];

    rc = asprintf(&buffer, "%.*s", packet_length, in + 6);
} else if (packet_length == 126) {   /* This decoding does NOT work */
    /* Unmask the payload. */
    for (i = 0; i < packet_length; i++)
        in[8 + i] ^= mask[i % 4];

    rc = asprintf(&buffer, "%.*s", packet_length, in + 8);
}
What am I doing wrong? How do I decode the extended payload?
The sticking point is a payload of more than 125 bytes.
The format is pretty simple; let's say you send ten a's in JavaScript:
ws.send("a".repeat(10))
Then the server will receive:
bytes[16]=818a8258a610e339c771e339c771e339
byte 0: The 0x81 is just an indicator that a message was received
byte 1: the 0x8a is the length; subtract 0x80 from it, 0x0A == 10
byte 2, 3, 4, 5: the 4 byte xor key to decrypt the payload
the rest: payload
But now lets say you send 126 a's in JavaScript:
ws.send("a".repeat(126))
Then the server will receive:
bytes[134]=81fe007ee415f1e5857490848574908485749084857490848574908485749084857490848574908485749084857490848574908485749084857490848574908485749084857490848574908485749084857490848574908485749084857490848574908485749084857490848574908485749084857490848574908485749084857490848574
If the length of the payload is > 125, byte 1 will have the value 0xfe, and the format then changes to:
byte 0: The 0x81 is just an indicator that a message was received
byte 1: will be 0xfe
byte 2, 3: the length of the payload as a uint16 number
byte 4, 5, 6, 7: the 4 byte xor key to decrypt the payload
the rest: payload
Example code in C#:
List<byte[]> decodeWebsocketFrame(Byte[] bytes)
{
    List<Byte[]> ret = new List<Byte[]>();
    int offset = 0;
    while (offset + 6 < bytes.Length)
    {
        // format: 0==ascii/binary 1=length-0x80, byte 2,3,4,5=key, 6+len=message, repeat with offset for next...
        int len = bytes[offset + 1] - 0x80;

        if (len <= 125)
        {
            //String data = Encoding.UTF8.GetString(bytes);
            //Debug.Log("len=" + len + "bytes[" + bytes.Length + "]=" + ByteArrayToString(bytes) + " data[" + data.Length + "]=" + data);
            Debug.Log("len=" + len + " offset=" + offset);

            Byte[] key = new Byte[] { bytes[offset + 2], bytes[offset + 3], bytes[offset + 4], bytes[offset + 5] };
            Byte[] decoded = new Byte[len];
            for (int i = 0; i < len; i++)
            {
                int realPos = offset + 6 + i;
                decoded[i] = (Byte)(bytes[realPos] ^ key[i % 4]);
            }
            offset += 6 + len;
            ret.Add(decoded);
        }
        else
        {
            int a = bytes[offset + 2];
            int b = bytes[offset + 3];
            len = (a << 8) + b;
            //Debug.Log("Length of ws: " + len);

            Byte[] key = new Byte[] { bytes[offset + 4], bytes[offset + 5], bytes[offset + 6], bytes[offset + 7] };
            Byte[] decoded = new Byte[len];
            for (int i = 0; i < len; i++)
            {
                int realPos = offset + 8 + i;
                decoded[i] = (Byte)(bytes[realPos] ^ key[i % 4]);
            }
            offset += 8 + len;
            ret.Add(decoded);
        }
    }
    return ret;
}
If packet_length is 126, the following 2 bytes give the length of data to be read.
If packet_length is 127, the following 8 bytes give the length of data to be read.
The mask is contained in the following 4 bytes (after the length).
The message to be decoded follows this.
The data framing section of the spec has a useful illustration of this.
If you re-order your code to something like
Read packet_length
Check for packet_length of 126 or 127. Reassign packet_length to the value of the following 2 or 8 bytes if required.
Read mask (the 4 bytes after packet_length, including any additional 2 or 8 bytes read for the step above).
Decode message (everything after the mask).
then things should work.
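As a rough C# sketch of that ordering (illustration only; DecodePayload is a hypothetical helper that assumes a single masked, unfragmented frame in bytes and does no error handling):
static byte[] DecodePayload(byte[] bytes)
{
    // 1. Read packet_length from the low 7 bits of byte 1
    long len = bytes[1] & 0x7F;
    int offset = 2;

    // 2. Reassign packet_length for the extended cases (126 -> next 2 bytes, 127 -> next 8 bytes)
    if (len == 126)
    {
        len = (bytes[2] << 8) | bytes[3];
        offset = 4;
    }
    else if (len == 127)
    {
        len = 0;
        for (int i = 0; i < 8; i++)
            len = (len << 8) | bytes[2 + i];
        offset = 10;
    }

    // 3. Read the 4-byte mask that follows the (possibly extended) length
    byte[] mask = { bytes[offset], bytes[offset + 1], bytes[offset + 2], bytes[offset + 3] };
    offset += 4;

    // 4. Decode the message: everything after the mask, XORed with the repeating key
    byte[] payload = new byte[len];
    for (long i = 0; i < len; i++)
        payload[i] = (byte)(bytes[offset + i] ^ mask[i % 4]);
    return payload;
}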
