Extending SRecord to handle crc32_mpeg2? - crc32

statement of problem:
I'm working with a Kinetis L series MCU (ARM Cortex-M0+) that has a dedicated CRC hardware module. Through trial and error, and with the help of an excellent online CRC calculator, I determined that the CRC hardware is configured to compute CRC32_MPEG2.
I'd like to use srec_input (a part of SRecord 1.64) to generate a CRC for a .srec file whose result must match the CRC32_MPEG2 computed by the hardware. However, SRecord's built-in CRC algorithms (CRC32 and STM32) don't generate the same results as CRC32_MPEG2.
the question:
Is there a straightforward way to extend SRecord to handle CRC32_MPEG2? My current thought is to fork the SRecord source tree and extend it, but it seems likely that someone has already been down this path.
Alternatively, is there a way for SRecord to call an external program? (I didn't see one after a quick scan.) That might do the trick as well.
some details
The parameters of the hardware CRC32 algorithm are:
Input Reflected: No
Output Reflected: No
Polynomial: 0x04C11DB7
Initial Seed: 0xFFFFFFFF
Final XOR: 0x0
To test it, an input string of:
0x10 0xB5 0x06 0x4C 0x23 0x78 0x00 0x2B
0x07 0xD1 0x05 0x4B 0x00 0x2B 0x02 0xD0
should result in a CRC32 value of:
0x938F979A
what generated the CRC value in the first place?
In response to Mark Adler's well-posed question: the firmware uses the Freescale fsl_crc library to compute the CRC. The relevant code and parameters (mildly edited) follow:
void crc32_update(crc32_data_t *crc32Config, const uint8_t *src, uint32_t lengthInBytes)
{
    crc_config_t crcUserConfigPtr;

    CRC_GetDefaultConfig(&crcUserConfigPtr);
    crcUserConfigPtr.crcBits = kCrcBits32;
    crcUserConfigPtr.seed = 0xffffffff;
    crcUserConfigPtr.polynomial = 0x04c11db7U;
    crcUserConfigPtr.complementChecksum = false;
    crcUserConfigPtr.reflectIn = false;
    crcUserConfigPtr.reflectOut = false;

    CRC_Init(g_crcBase[0], &crcUserConfigPtr);
    CRC_WriteData(g_crcBase[0], src, lengthInBytes);
    crcUserConfigPtr.seed = CRC_Get32bitResult(g_crcBase[0]);

    crc32Config->currentCrc = crcUserConfigPtr.seed;
    crc32Config->byteCountCrc += lengthInBytes;
}

Peter Miller be praised...
It turns out that if you supply enough filters to srec_cat, you can make it do anything! :) In fact, the following arguments produce the correct checksum:
$ srec_cat test.srec -Bit_Reverse -CRC32LE 0x1000 -Bit_Reverse -XOR 0xff -crop 0x1000 0x1004 -Output -HEX_DUMP
00001000: 93 8F 97 9A #....
In other words: bit-reverse the bytes going into the CRC32 algorithm, bit-reverse them on the way out, and one's-complement them.

Related

question about byte order and go standard library?

Say we have the number 0x1234.
In bigEndian:
low address -----------------> high address
0x12 | 0x34
In littleEndian:
low address -----------------> high address
0x34 | 0x12
We can see the function below in binary.go:
func (bigEndian) PutUint16(b []byte, v uint16) {
    _ = b[1] // early bounds check to guarantee safety of writes below
    b[0] = byte(v >> 8)
    b[1] = byte(v)
}
I downloaded the Go distribution for both x86 and PowerPC (e.g. go1.12.7.linux-ppc64le.tar.gz from https://golang.org/dl/) and found the same definition in both.
Now let's see what happens in this function.
If the CPU is little-endian, we store 0x1234 in memory like this:
low address -----------------> high address
0x34 | 0x12
v >> 8 shifts 8 bits to the right (i.e. divides by 2^8), so we get this in memory:
low address -----------------> high address
0x12 | 0x00
byte(v >> 8) gives us the byte 0x12, which is at the low address -> b[0]
byte(v) gives us the byte 0x34 -> b[1]
So we get the result, which I think is right:
[0x12, 0x34]
=====================================
If the CPU is big-endian, we store 0x1234 in memory like this:
low address -----------------> high address
0x12 | 0x34
v >> 8 shifts 8 bits to the right (i.e. divides by 2^8), so we get this in memory:
low address -----------------> high address
0x00 | 0x12
byte(v >> 8) gives us the byte 0x00, which is at the low address -> b[0]
byte(v) gives us the byte 0x12 -> b[1]
So we get the result, which I think is not right:
[0x00, 0x12]
I found instructions on the web for checking whether a CPU is big- or little-endian, and wrote the function below:
func IsBigEndian() bool {
    test16 := uint16(0x1234)
    test8 := *(*uint8)(unsafe.Pointer(&test16))
    if test8 == 0x12 {
        return true
    } else {
        fmt.Printf("little")
        return false
    }
}
According to this function, I think byte() takes the byte at the low address. Am I right?
If so, why do I get the wrong result in my analysis of the big-endian case above?
Thanks a lot @Volker. I found the post Does bit-shift depend on endianness?, and now I understand: byte(xxx) operates on a value in a processor register, which does not depend on the byte order in memory, so byte(0x1234) always yields 0x34.

find ASCII value of, 7 digit 1’s complement of a string(char) in java

I have to find the ASCII value of the 7-bit one's complement of each character of a string in Java.
Thanks in advance.
OK, so let's start with the basics.
You need the one's complement of the byte on 7 bits only. Looking at the low 7 bits:
0x3E = 0 011 1110
0x41 = 0 100 0001
0x41 is indeed the 7-bit one's complement of 0x3E.
Now, you have a problem to begin with, and that problem is that in Java, a char is not interchangeable with a byte, because of character encodings.
However, since your range of characters is limited to ASCII, you can use US-ASCII as an encoding. So, the first step is to:
final Charset ascii = StandardCharsets.US_ASCII;
final byte[] bytes = theInput.getBytes(ascii);
final byte[] transformedBytes = new byte[bytes.length];
byte original, transformed;

for (int index = 0; index < bytes.length; index++) {
    original = bytes[index];
    transformed = transformByte(original);
    transformedBytes[index] = transformed;
}

return new String(transformedBytes, ascii);
And now, the transformByte() method needs to be written.
One's complement simply consists of a bitwise NOT of all the bits, but here you want to limit that to 7 bits; the solution is therefore to first do the negation normally, and then mask with 0x7f, which is 0111 1111. This is made possible by the fact that none of your byte values have the highest bit set:
private static byte transformByte(final byte original)
{
    return (byte) (~original & 0x7f);
}
This can be substituted directly into the original method, it's not even worth a separate method ;)

How to calculate g values from LIS3DH sensor?

I am using a LIS3DH sensor with an ATmega128 to get acceleration values and detect motion. I went through the datasheet but it seemed inadequate, so I decided to post here. From other posts I am convinced that the sensor resolution is 12 bits instead of 16. I need to know: when computing the g value from the X-axis output registers, do we take the two's complement of the register value only when the sign bit (the MSB of OUT_X_H, the high register) is 1, or every time, even when that bit is 0?
From my calculations I think that we calculate two's complement only when MSB of OUT_X_H register is 1.
But the datasheet says that we need to calculate two's complement of both OUT_X_L and OUT_X_H every time.
Could anyone enlighten me on this ?
Sample code
int main(void)
{
    stdout = &uart_str;
    UCSRB = 0x18;   // RXEN=1, TXEN=1
    UCSRC = 0x06;   // no parity, 1 stop bit, 8-bit data
    UBRRH = 0;
    UBRRL = 71;     // baud 9600

    timer_init();
    TWBR = 216;     // 400 Hz
    TWSR = 0x03;
    TWCR |= (1 << TWINT) | (1 << TWSTA) | (0 << TWSTO) | (1 << TWEN); // TWCR=0x04;

    printf("\r\nLIS3DH address: %x\r\n", twi_master_getchar(0x0F));
    twi_master_putchar(0x23, 0b000100000);
    printf("\r\nControl 4 register 0x23: %x", twi_master_getchar(0x23));
    printf("\r\nStatus register %x", twi_master_getchar(0x27));
    twi_master_putchar(0x20, 0x77);

    DDRB = 0xFF;
    PORTB = 0xFD;
    SREG = 0x80;    // sei();

    while (1)
    {
        process();
    }
}
void process(void)
{
    x_l = twi_master_getchar(0x28);
    x_h = twi_master_getchar(0x29);
    y_l = twi_master_getchar(0x2a);
    y_h = twi_master_getchar(0x2b);
    z_l = twi_master_getchar(0x2c);
    z_h = twi_master_getchar(0x2d);

    xvalue = (short int)(x_l + (x_h << 8));
    yvalue = (short int)(y_l + (y_h << 8));
    zvalue = (short int)(z_l + (z_h << 8));

    printf("\r\nxvalue: %dg", xvalue);
    printf("\r\nyvalue: %dg", yvalue);
    printf("\r\nzvalue: %dg", zvalue);
}
I wrote CTRL_REG4 as 0x10 (4g) but when I read it back I got 0x20 (8g). This seems a bit bizarre.
Do not compute the two's complement. That has the effect of making the result the negative of what it was.
Instead, the datasheet tells us the result is already a signed value. That is, 0 is not the lowest value; it is in the middle of the scale. (0xffff is just a little less than zero, not the highest value.)
Also, the result is always 16-bit, but it is not meant to be taken as that accurate. You can set a control register to generate more accurate values at the expense of current consumption, but it is still not guaranteed to be accurate down to the last bit.
The datasheet does not say (at least in the register description in chapter 8.2) that you have to calculate the two's complement; it states that the contents of the two registers are in two's complement.
So all you have to do is receive the two bytes and cast them to an int16_t to get the signed raw value.
uint8_t xl = 0x00;
uint8_t xh = 0xFC;
int16_t x = (int16_t)((((uint16_t)xh) << 8) | xl);
or
uint8_t xa[2] = {0x00, 0xFC}; // little endian: lower byte at lower address
int16_t x = *((int16_t *)xa);
(I hope I did not mix something up here.)
I have another approach, which may be easier to implement as the compiler will do all of the work for you. The compiler will probably do it most efficiently and with no bugs too.
Read the raw data into the raw field in:
typedef union
{
    struct
    {
        // in low power - 8 significant bits, left justified
        int16_t reserved : 8;
        int16_t value    : 8;
    } lowPower;
    struct
    {
        // in normal power - 10 significant bits, left justified
        int16_t reserved : 6;
        int16_t value    : 10;
    } normalPower;
    struct
    {
        // in high resolution - 12 significant bits, left justified
        int16_t reserved : 4;
        int16_t value    : 12;
    } highPower;
    // the raw data as read from registers H and L
    uint16_t raw;
} LIS3DH_RAW_CONVERTER_T;
Then use the value field that matches the power mode you are using.
Note: in this example, the bit-field structs assume a big-endian layout.
Check whether you need to reverse the order of 'value' and 'reserved' for your compiler.
The LISxDH sensors are two's complement, left-justified. They can be set to 12-bit, 10-bit, or 8-bit resolution. This is read from the sensor as two 8-bit values (LSB, MSB) that need to be assembled together.
If you set the resolution to 8-bit, you can just cast the significant byte to int8, which is likely your processor's representation of two's complement (8-bit). Likewise, if it were possible to set the sensor to 16-bit resolution, you could just cast the assembled value to an int16.
However, if the value is 10-bit left-justified, the sign bit is in the wrong place for an int16. Here is how you convert it to int16 (16-bit two's complement).
1.Read LSB, MSB from the sensor:
[MMMM MMMM] [LL00 0000]
[1001 0101] [1100 0000] //example = [0x95] [0xC0] (note that the LSB comes before MSB on the sensor)
2.Assemble the bytes, keeping in mind the LSB is left-justified.
//---As an example....
uint8_t byteMSB = 0x95; //[1001 0101]
uint8_t byteLSB = 0xC0; //[1100 0000]
//---Cast to U16 to make room, then combine the bytes---
assembledValue = ( (uint16_t)(byteMSB) << UINT8_LEN ) | (uint16_t)byteLSB;
/*[MMMM MMMM LL00 0000]
[1001 0101 1100 0000] = 0x95C0 */
//---Shift to right justify---
assembledValue >>= (INT16_LEN-numBits);
/*[0000 00MM MMMM MMLL]
[0000 0010 0101 0111] = 0x0257 */
3.Convert from 10-bit 2's complement (now right-justified) to an int16 (which is just 16-bit 2's complement on most platforms).
Approach #1: If the sign bit (in our example, the tenth bit) = 0, then just cast it to int16 (since positive numbers are represented the same in 10-bit 2's complement and 16-bit 2's complement).
If the sign bit = 1, then invert the bits (keeping just the 10bits), add 1 to the result, then multiply by -1 (as per the definition of 2's complement).
convertedValueI16 = ~assembledValue; //invert bits
convertedValueI16 &= ( 0xFFFF>>(16-numBits) ); //but keep just the 10-bits
convertedValueI16 += 1; //add 1
convertedValueI16 *=-1; //multiply by -1
/*Note that the last two lines could be replaced by convertedValueI16 = ~convertedValueI16;*/
//result = -425 = 0xFE57 = [1111 1110 0101 0111]
Approach #2: Flip the sign bit (the tenth bit) with XOR, then subtract half the range (1<<9). This is the (x ^ m) - m sign-extension idiom.
//----Flip the sign bit (tenth bit)----
convertedValueI16 = (int16_t)( assembledValue^( 0x0001<<(numBits-1) ) );
/*Result = 87 = 0x57 [0000 0000 0101 0111]*/
//----Subtract out half the range----
convertedValueI16 -= ( (int16_t)(1)<<(numBits-1) );
/* [0000 0000 0101 0111]
 - [0000 0010 0000 0000]
 = [1111 1110 0101 0111]
   Result = 87 - 512 = -425 = 0xFE57 */
Link to script to try out (not optimized): http://tpcg.io/NHmBRR

Breaking a 32 bit integer into 8 bit chunks for Radix Sort

I am basically a beginner in Computer Science. Please forgive me if I ask elementary questions. I am trying to understand radix sort. I read that a 32-bit unsigned integer can be broken down into four 8-bit chunks. After that, all it takes is 4 passes to complete the radix sort. Can somebody please show me an example of how this breakdown (32 bits into four 8-bit chunks) works? Maybe with a 32-bit integer like 2147507648.
Thanks!
You would divide the 32-bit integer up into 4 pieces of 8 bits. Extracting those pieces is a matter of using some of the operators available in C:
uint32_t x = 2147507648;
uint8_t chunk1 = x & 0x000000ff; //lower 8 bits
uint8_t chunk2 = (x & 0x0000ff00) >> 8;
uint8_t chunk3 = (x & 0x00ff0000) >> 16;
uint8_t chunk4 = (x & 0xff000000) >> 24; //highest 8 bits
2147507648 decimal is 0x80005DC0 hex. You can pretty much eyeball those 8-bit chunks out of the hex representation, since each hex digit represents 4 bits, so two of them represent 8 bits.
So that means chunk 1 is 0xC0, chunk 2 is 0x5D, chunk 3 is 0x00 and chunk 4 is 0x80.
It's done as follows:
2147507648
=> 0x80005DC0 (hex value of 2147507648)
=> 0x80 0x00 0x5D 0xC0
=> 128 0 93 192
To do this, you'd need bitwise operations as nos suggested.

Invalid operands for binary AND (&)

I have this "assembly" file (containing only directives)
// declare protected region as somewhere within the stack
.equiv prot_start, $stack_top & 0xFFFFFF00 - 0x1400
.equiv prot_end, $stack_top & 0xFFFFFF00 - 0x0C00
Combined with this linker script:
SECTIONS {
"$stack_top" = 0x10000;
}
Assembling produces this output
file.s: Assembler messages:
file.s: Error: invalid operands (*UND* and *ABS* sections) for `&' when setting `prot_start'
file.s: Error: invalid operands (*UND* and *ABS* sections) for `&' when setting `prot_end'
How can I make this work?
Why it is not possible?
You have linked to the GAS docs, but what is the rationale for that inability?
Answer: GAS must communicate operations to the linker through the ELF object file, and the only things that can be conveyed like this are + and - (- is just + of a negative value). So this is a fundamental limitation of the ELF format, and not just laziness on the part of the GAS developers.
When GAS compiles to the object file, a link step will follow, and it is the relocation which will determine the final value of the symbol.
Question: why can + be conveyed, but not &?
Answer: because + is associative: (a + b) + c == a + (b + c), but + and & do not associate with each other: (a & b) + c != a & (b + c).
Let us see how + is conveyed through the ELF format to convince ourselves that & is not possible.
First learn what relocation is if you are not familiar with it: https://stackoverflow.com/a/30507725/895245
Let's minimize your example with another that would generate the same error:
a: .long s
b: .long s + 0x12345678
/* c: .long s & 1 */
s:
Compile and decompile:
as --32 -o main.o main.S
objdump -dzr main.o
The output contains:
00000000 <a>:
0: 08 00 or %al,(%eax)
0: R_386_32 .text
2: 00 00 add %al,(%eax)
00000004 <b>:
4: 80 56 34 12 adcb $0x12,0x34(%esi)
4: R_386_32 .text
Ignore the disassembly since this is not code, and look just at the symbols, bytes and relocations.
We have two R_386_32 relocations. From the System V ABI for IA-32 (which defines the ELF format), that type of relocation is calculated as:
S + A
where:
S: the value before relocation in the object file.
Value of a before relocation == 08 00 00 00 == 8 in little endian
Value of b before relocation == 80 56 34 12 == 0x12345680 in little endian
A: the addend, a field of the relocation entry, here 0 (not shown by objdump), so let's just forget about it.
When relocation happens:
a will be replaced with:
address of text section + 8
There is a + 8 because s: is the 8th byte of the text section, preceded by 2 longs.
b will be replaced with:
address of text section + (0x12345678 + 8)
==
address of text section + 0x12345680
Aha, so that is why 0x12345680 appeared on the object file!
So as we've just seen, it is possible to express + on the ELF file by just adding to the actual offset.
But it would not be possible to express & with this mechanism (or any other that I know of), because we don't know what the address of the text section will be until after relocation, so we can't apply & to it.
Darn:
Infix Operators
Infix operators take two arguments, one on either side. Operators have precedence, but operations with equal precedence are performed left to right. Apart from + or -, both arguments must be absolute, and the result is absolute.
