Are these endian transformations correct?

I am struggling to figure this out. I am trying to represent a 32-bit variable in both big and little endian. For the sake of argument, let's say we try the number 666.
Big Endian: 0010 1001 1010 0000 0000 0000 0000
Little Endian: 0000 0000 0000 0000 0010 1001 1010
Is this correct, or is my thinking wrong here?

666 (decimal) as 32-bit binary is represented as:
[0000 0000] [0000 0000] [0000 0010] [1001 1010] (big endian, most significant byte first)
[1001 1010] [0000 0010] [0000 0000] [0000 0000] (little endian, least significant byte first)
(I have used square brackets to group pairs of 4-bit nibbles into bytes.)
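As a quick sanity check, here is a minimal Go sketch (my own illustration, using the standard encoding/binary package) that prints the byte order produced for 666 under both conventions:
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	var be, le [4]byte
	binary.BigEndian.PutUint32(be[:], 666)    // most significant byte first
	binary.LittleEndian.PutUint32(le[:], 666) // least significant byte first
	fmt.Printf("big endian:    %08b %08b %08b %08b\n", be[0], be[1], be[2], be[3])
	fmt.Printf("little endian: %08b %08b %08b %08b\n", le[0], le[1], le[2], le[3])
}
// big endian:    00000000 00000000 00000010 10011010
// little endian: 10011010 00000010 00000000 00000000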


Go Float32bit() result not expected

e.g. converting 16777219.0 from decimal to bits:
16777219 -> 1 0000 0000 0000 0000 0000 0011
Mantissa (23 bits):
0000 0000 0000 0000 0000 001
Exponent:
24+127 = 151
151 -> 10010111
The result should be:
0_10010111_000 0000 0000 0000 0000 0001
1001011100000000000000000000001
but:
fmt.Printf("%b\n", math.Float32bits(float32(16777219.0)))
// 1001011100000000000000000000010
Why is the Go math.Float32bits() result not what I expected?
reference:
base-conversion.ro
update:
fmt.Printf("16777216.0:%b\n", math.Float32bits(float32(16777216.0)))
fmt.Printf("16777217.0:%b\n", math.Float32bits(float32(16777217.0)))
//16777216.0:1001011100000000000000000000000
//16777217.0:1001011100000000000000000000000
fmt.Printf("16777218.0:%b\n", math.Float32bits(float32(16777218.0)))
//16777218.0:1001011100000000000000000000001
fmt.Printf("16777219.0:%b\n", math.Float32bits(float32(16777219.0)))
fmt.Printf("16777220.0:%b\n", math.Float32bits(float32(16777220.0)))
fmt.Printf("16777221.0:%b\n", math.Float32bits(float32(16777221.0)))
//16777219.0:1001011100000000000000000000010
//16777220.0:1001011100000000000000000000010
//16777221.0:1001011100000000000000000000010
fmt.Printf("000:%f\n", math.Float32frombits(0b_10010111_00000000000000000000000))
// 000:16777216.000000
fmt.Printf("001:%f\n", math.Float32frombits(0b_10010111_00000000000000000000001))
// 001:16777218.000000
fmt.Printf("010:%f\n", math.Float32frombits(0b_10010111_00000000000000000000010))
// 010:16777220.000000
fmt.Printf("011:%f\n", math.Float32frombits(0b_10010111_00000000000000000000011))
// 011:16777222.000000
What are the rules?
Go gives the correct IEEE-754 binary floating-point result: round to nearest, ties to even.
The Go Programming Language Specification
float32: the set of all IEEE-754 32-bit floating-point numbers
Decimal
16777219
is binary
1000000000000000000000011
For 32-bit IEEE-754 binary floating point, the value scaled to fit the 24-bit mantissa is
100000000000000000000001.1
Round to nearest, ties to even gives
100000000000000000000010
Removing the implicit one bit for the 23-bit mantissa gives
00000000000000000000010
package main

import (
	"fmt"
	"math"
)

func main() {
	const n = 16777219
	fmt.Printf("%d\n", n) // decimal
	fmt.Printf("%b\n", n) // binary
	f := float32(n)
	fmt.Printf("%g\n", f)                   // rounded float32 value
	fmt.Printf("%b\n", math.Float32bits(f)) // IEEE-754 bit pattern
}
https://go.dev/play/p/yMaVkuiSJ5A
16777219
1000000000000000000000011
1.677722e+07
1001011100000000000000000000010
Why the result is not expected: You're expecting the wrong result.
The IEEE 754 standard specifies that, per Wikipedia:
if the number falls midway, it is rounded to the nearest value with an even least significant digit.
So 16777219, which is midway between 16777218 and 16777220, is rounded to the neighbour whose mantissa has an even least significant bit. The "round up" option, 16777220, has an even LSB, and that is the correct result you're observing.
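To make the rounding visible, here is a small sketch (my own, not part of the original answer) that splits math.Float32bits into its sign, exponent, and mantissa fields; note how both 16777219 and 16777220 end up with the even mantissa ...010:
package main

import (
	"fmt"
	"math"
)

func main() {
	for _, v := range []float32{16777218, 16777219, 16777220} {
		bits := math.Float32bits(v)
		sign := bits >> 31         // 1 sign bit
		exp := (bits >> 23) & 0xff // 8-bit biased exponent (24 + 127 = 151 here)
		frac := bits & 0x7fffff    // 23-bit mantissa field
		fmt.Printf("%.0f: sign=%d exp=%d mantissa=%023b\n", v, sign, exp, frac)
	}
}
// 16777218: sign=0 exp=151 mantissa=00000000000000000000001
// 16777220: sign=0 exp=151 mantissa=00000000000000000000010
// 16777220: sign=0 exp=151 mantissa=00000000000000000000010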

How to find tag bit in cache given word address

Caches are important to providing a high-performance memory hierarchy
to processors. Below is a list of 32-bit memory address references,
given as word addresses.
3, 180, 43, 2, 191, 88, 190, 14, 181, 44, 186, 253
For each of these references, identify the binary address, the tag,
and the index given a direct-mapped cache with two-word blocks and a
total size of 8 blocks. Also list if each reference is a hit or a
miss, assuming the cache is initially empty.
I understood that the task is to find the tag, index, and offset values from the 32-bit memory address and use them in the cache table, but I do not understand what it means that the memory address is given as a word. For example, does the word address 3 actually mean 0000 0000 0000 0000 0000 0000 0000 0011? Given a word address, how can it be thought of as a 32-bit address in the figure below?
For the word address 3 (0000 0000 0000 0000 0000 0000 0000 0011), the offset would be 1, the index would be 001, and the tag would be 0000 0000 0000 0000 0000 0000 0000.
2 words per block = 1 bit for offset (2^1).
8 blocks in the cache = 3 bits for index (2^3).
32 - 4 = 28 bits for the tag.
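To make the split concrete, here is a Go sketch (my own illustration, assuming word addresses, 2-word blocks, and 8 blocks as in the exercise) that derives the tag, index, and offset for each reference and simulates the direct-mapped cache to label hits and misses:
package main

import "fmt"

func main() {
	// Word addresses from the exercise.
	addrs := []uint32{3, 180, 43, 2, 191, 88, 190, 14, 181, 44, 186, 253}

	type block struct {
		valid bool
		tag   uint32
	}
	cache := make([]block, 8) // 8 direct-mapped blocks, initially empty

	for _, a := range addrs {
		offset := a & 0x1       // 1 bit: word within the 2-word block
		index := (a >> 1) & 0x7 // 3 bits: which of the 8 blocks
		tag := a >> 4           // remaining 28 bits

		hit := cache[index].valid && cache[index].tag == tag
		if !hit {
			cache[index] = block{valid: true, tag: tag}
		}
		result := "miss"
		if hit {
			result = "hit"
		}
		fmt.Printf("addr %3d = %032b  tag=%d index=%03b offset=%b  %s\n",
			a, a, tag, index, offset, result)
	}
}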

Why does the arithmetic expression result get truncated?

If I run $((0x100 - 0 & 0xff)), I get 0.
However, $((0x100 - 0)) gives me 256.
Why did the result from the first expression get truncated?
Because & is a bitwise operator, and there are no matching bits in 0x100 and 0xff.
What that means is that it looks at the bits that make up your numbers, and you get a 1 back in each position where both inputs have a 1.
So if you do $((0x06 & 0x03))
In binary you end up with
6 = 0110
3 = 0011
So when you bitwise AND those together, you get
0010 (binary) or 0x02
For the numbers you have, there are no bits in common:
0x100 in binary is
0000 0001 0000 0000
0xff in binary is
0000 0000 1111 1111
If you bitwise and them together, there are no matching bits, so you'll end up with
0000 0000 0000 0000
Interestingly, the subtraction is done before the bitwise AND (I expected it to be the other way around): in shell arithmetic, as in C, - has higher precedence than &.
$((0x100 - 1 & 0xff)) gives 255 or 0xff, because 0x100 - 1 = 0xff and 0xff & 0xff = 0xff.
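A tiny Go sketch (my own; the grouping is written out with explicit parentheses, since operator precedence differs between languages) reproduces the two results:
package main

import "fmt"

func main() {
	// Shell arithmetic groups the expression as (0x100 - 0) & 0xff,
	// so the mask is applied to 0x100, whose low byte is all zeros.
	fmt.Println((0x100 - 0) & 0xff) // 0
	fmt.Println(0x100 - 0)          // 256
	fmt.Println((0x100 - 1) & 0xff) // 255
}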

Why does the CRC of "1" yield the generator polynomial itself?

While testing a CRC implementation, I noticed that the CRC of 0x01 usually (?) seems to be the polynomial itself. When trying to manually do the binary long division however, I keep ending up losing the leading "1" of the polynomial, e.g. with a message of "0x01" and the polynomial "0x1021", I would get
1 0000 0000 0000 (zero padded value)
(XOR) 1 0000 0010 0001
-----------------
0 0000 0010 0001 = 0x0021
But any sample implementation (I'm dealing with XMODEM-CRC here) results in 0x1021 for the given input.
Looking at https://en.wikipedia.org/wiki/Computation_of_cyclic_redundancy_checks, I can see how the XOR step of the upper bit leaving the shift register with the generator polynomial will cause this result. What I don't get is why this step is performed in that manner at all, seeing as it clearly alters the result of a true polynomial division?
I just read http://www.ross.net/crc/download/crc_v3.txt and noticed that in section 9, there is mention of an implicitly prepended 1 to enforce the desired polynomial width.
In my example case, this means that the actual polynomial used as divisor would not be 0x1021, but 0x11021. This results in the leading "1" being dropped, and the remainder being the "intended" 16-bit polynomial:
1 0000 0000 0000 0000 (zero padded value)
(XOR) 1 0001 0000 0010 0001
-----------------
0 0001 0000 0010 0001 = 0x1021
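The same effect can be seen in code. Below is a minimal bit-by-bit CRC-16/XMODEM sketch in Go (my own illustration: polynomial 0x1021, initial value 0x0000, no reflection). The shift-register formulation handles the implicit x^16 term for free: the top bit is shifted out before the XOR, which is exactly the "divide by 0x11021, not 0x1021" behaviour described above.
package main

import "fmt"

// crc16Xmodem computes CRC-16/XMODEM bit by bit.
func crc16Xmodem(data []byte) uint16 {
	var crc uint16
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				// Shift out the implicit leading 1, then XOR with the 16-bit polynomial.
				crc = (crc << 1) ^ 0x1021
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

func main() {
	fmt.Printf("%#04x\n", crc16Xmodem([]byte{0x01})) // 0x1021, the polynomial itself
}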

Reverse Engineering - AND 0FF

I am doing some reverse engineering on a simple crackme app and I'm debugging it with OllyDbg.
I'm stuck on the behavior of the AND instruction with the operand 0x0FF. I mean, it's equivalent in C++ to
if(... = true).
So what's confusing is that:
ECX = CCCCCC01
ZF = 1
AND ECX, 0FF
### After instruction
ECX = 00000001
ZF = 0
ZF - Should be active
I don't know why the result in the ECX register is 1 and why ZF isn't set.
AND => 1 , 1 = 1 (both operands 1)
Otherwise = 0
Can someone explain that to me?
Thanks for the help.
It's a bit-wise AND, so in binary you have
1100 1100 1100 1100 1100 1100 0000 0001
AND 0000 0000 0000 0000 0000 0000 1111 1111
----------------------------------------
0000 0000 0000 0000 0000 0000 0000 0001
The result is 1, which is nonzero, so the zero flag is cleared (ZF = 0); AND only sets ZF when the result is exactly zero.
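A small Go sketch (my own illustration) of the same masking:
package main

import "fmt"

func main() {
	ecx := uint32(0xCCCCCC01)
	ecx &= 0xFF // keep only the low byte, like AND ECX, 0FF
	fmt.Printf("ECX = %08X, zero = %v\n", ecx, ecx == 0) // ECX = 00000001, zero = false
}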
