Little Endian vs Big Endian?

I'm having trouble wrapping my head around the two. I understand how to represent something in big endian.
For example, -12 in 16-bit two's complement is 1111 1111 1111 0100.
But why is the little endian representation 1111 0100 1111 1111 instead of 0100 1111 1111 1111?

Endianness is about byte address order. Little endian means the less significant bytes get the lower addresses; big endian means the other way around. So it's about bytes (8-bit chunks), not nibbles (4-bit chunks). Most computers we use (there are a few exceptions) are byte-addressable: every individual byte has its own address.
Taking the -12 example:
Little endian, in memory, would be:
000000: F4
000001: FF
Big endian, in memory, would be:
000000: FF
000001: F4

Little endian is basically reversing the byte order of a multi-byte value.
1111 1111 1111 0100 is a 2-byte value where 1111 1111 is the first (most significant) byte and 1111 0100 is the second (least significant) byte. In little endian, the least significant byte is stored first, so the in-memory representation is 1111 0100 1111 1111.
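
A quick way to see this on your own machine is to print the raw bytes of a 16-bit value in address order (a minimal C sketch; the output depends on the host's endianness):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    int16_t value = -12;                   /* 0xFFF4 in two's complement */
    unsigned char bytes[sizeof value];
    memcpy(bytes, &value, sizeof value);   /* grab the raw byte image */

    /* Print the bytes in increasing address order.
       A little-endian host prints F4 FF; a big-endian host prints FF F4. */
    for (size_t i = 0; i < sizeof value; i++)
        printf("%06zX: %02X\n", i, bytes[i]);
    return 0;
}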

Related

How is byte-ordering actually done in little-endian architectures when the data type size is bigger than the word size?

First I want to apologize because English is not my native language. I'm taking the CS50 Introduction to Computer Science course and I've come across the concepts of 'endianness' and 'word size', and even though I think I've understood them pretty well, there's still some confusion.
As far as I know, 'word size' refers to the number of bytes a processor can read or write from memory in one cycle, the amount of instruction data it can fetch at a time, and also the maximum size of memory addresses: 4 bytes on 32-bit architectures and 8 bytes on 64-bit architectures. Correct me if I'm wrong about this.
Now, 'endianness' refers to the ordering of the bytes of a multi-byte data type (like an int or float, not a char) when the processor stores or transmits them. According to some definitions I've read, this concept is linked to the word size. For example, Wikipedia says: "endianness is the ordering or sequencing of bytes of a word of digital data". Big-endian means the most significant byte is placed at the smallest memory address, and little-endian means the least significant byte is placed at the smallest memory address instead.
I've seen many examples and diagrams like this one:
[Little-endian / Big-endian explanation diagram]
I understand big-endian very well, and little-endian is also clear when the data type being processed has a size equal to or smaller than the word size. But what happens when it's bigger than the word size? Imagine an 8-byte data type in a 32-bit little-endian architecture (4-byte words); how are the bytes actually stored:
Ordering #1:
----------------->
lower address to higher address
b7 b6 b5 b4 | b3 b2 b1 b0
  word 0    |   word 1
Ordering #2:
----------------->
lower address to higher address
b3 b2 b1 b0 | b7 b6 b5 b4
  word 0    |   word 1
I've found mixed answers to this question, and I want to have this concept clear before continuing. Thank you in advance!
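
One way to settle this empirically on the machine in front of you is to dump the bytes of an 8-byte value whose bytes are easy to tell apart (a minimal C sketch; on x86, whether 32- or 64-bit, it prints 00 01 02 03 04 05 06 07, because byte-addressable little-endian machines store the whole value least significant byte first, regardless of the word size):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint64_t v = 0x0706050403020100ULL;  /* byte bN holds the value N */
    unsigned char b[sizeof v];
    memcpy(b, &v, sizeof v);

    /* Print the bytes from lower to higher address. */
    for (size_t i = 0; i < sizeof v; i++)
        printf("%02X ", b[i]);
    printf("\n");
    return 0;
}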

Looking for a workaround for the 32-CPU limitation of GetProcessAffinityMask in a 32-bit process

I've just realized that GetProcessAffinityMask can't return values larger than 4,294,967,295 (1111 1111 1111 1111 1111 1111 1111 1111) in 32-bit applications, even on a 64-bit system.
This means that I'm unable to correctly detect the system affinity mask on machines with more than 32 logical processors. Is there any hack to get the other half of the affinity mask in this case?
The supported way to do this is to use a 64-bit process.
If you are unable to convert your application to 64-bit, then create and call a small helper process to do the work, and pass the information back to your 32-bit application.
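
A minimal sketch of such a helper, assuming the 64-bit process simply prints the system mask to stdout for the 32-bit parent to read (the relay mechanism is up to you; a pipe or a temp file works equally well):

#include <windows.h>
#include <stdio.h>

/* Build this as a 64-bit executable: DWORD_PTR is then 64 bits wide,
   so the mask covers up to 64 logical processors. */
int main(void) {
    DWORD_PTR processMask = 0, systemMask = 0;
    if (!GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask)) {
        fprintf(stderr, "GetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("%llu\n", (unsigned long long)systemMask);  /* parent reads this line */
    return 0;
}

Note that beyond 64 logical processors Windows switches to processor groups, where a single affinity mask no longer covers every CPU.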

Most significant bit in 2 bytes

I have got the number 317 saved in 2 bytes (00000001 00111101) and it should be transferred via SPI (serial) to a slave device.
The device expects the two bytes B11 and B12, but in a certain order:
"The highest bits of the data words are send first and the lowest bits
are send last, that is, byte B11 is the highest byte and byte B12 is
the lowest byte."
My question is, what exactly do they mean? Am I supposed to flip the bytes themselves (10000000 10111100) or to flip the bytes AND the bits (10111100 10000000)?
Flip the bytes. Or swap them, rather:
00000001 00111101
->
00111101 00000001
This is known as endianness (which should help you find related questions).
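
In code, the portable habit is not to reinterpret memory at all but to split the value arithmetically, so the send order no longer depends on the host CPU's endianness (a sketch; spi_send_byte is a hypothetical stand-in for whatever transmit routine your SPI driver provides):

#include <stdint.h>

void spi_send_byte(uint8_t b);  /* hypothetical: sends one byte over SPI */

/* Send a 16-bit value most significant byte first (B11, then B12),
   independent of the host CPU's endianness. */
void spi_send_u16_msb_first(uint16_t value) {
    spi_send_byte((uint8_t)(value >> 8));    /* high byte: 00000001 for 317 */
    spi_send_byte((uint8_t)(value & 0xFF));  /* low byte:  00111101 for 317 */
}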

Little and Big Endian

Suppose the number 0x12345678 is stored at memory location 1000 in Big Endian format. If the processor now assumes the data to be in Little Endian format, what will it get if it reads (i) a byte, (ii) a half-word, (iii) a word from location 1000?
This is in relation to the ARM processor.
If the number 0x12345678 in your question is given in Little Endian format and you store it in Big Endian format at address 1000, then the memory image at that address will be:
1000 - 0x78
1001 - 0x56
1002 - 0x34
1003 - 0x12
Now, using a Little-Endian processor:
If you read a byte from address 1000 then you'll get 0x78.
If you read a half-word from address 1000 then you'll get 0x5678 (the byte at the lower address becomes the least significant byte).
If you read a word from address 1000 then you'll get 0x12345678.
If the number 0x12345678 in your question is given in Big Endian format and you store it in Big Endian format at address 1000, then the memory image at that address will be:
1000 - 0x12
1001 - 0x34
1002 - 0x56
1003 - 0x78
Now, using a Little-Endian processor:
If you read a byte from address 1000 then you'll get 0x12.
If you read a half-word from address 1000 then you'll get 0x3412.
If you read a word from address 1000 then you'll get 0x78563412.
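
The same exercise is easy to reproduce in C by storing the big-endian byte image and reading it back at different widths (a minimal sketch; the printed results assume a little-endian host such as x86 or little-endian ARM):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* The memory image at "address 1000": 0x12345678 stored big-endian. */
    unsigned char mem[4] = { 0x12, 0x34, 0x56, 0x78 };
    uint8_t  b;
    uint16_t h;
    uint32_t w;

    memcpy(&b, mem, 1);  /* byte read      */
    memcpy(&h, mem, 2);  /* half-word read */
    memcpy(&w, mem, 4);  /* word read      */

    /* On a little-endian host: 0x12, 0x3412, 0x78563412. */
    printf("byte: 0x%02X\nhalf-word: 0x%04X\nword: 0x%08X\n", b, h, (unsigned)w);
    return 0;
}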

Error correction code for lost bit

Suppose we receive data from a sender as a chunk of bits (say 8 bits).
But the transfer is unreliable in a way that results in bit loss (not bit flips).
That means any bit in the chunk can go missing, so the receiver would receive only 7 bits.
I have studied some error-correcting codes, such as the Hamming code, but they are designed to recover flipped bits, not lost bits as in this situation.
If you are happy with a low-entropy code and you can detect an end-of-message, you can simply send each bit twice.
This code can recover from any number of deletions, as long as each deletion happens in a different run:
On receiving, find all odd-sized runs and extend them by one bit. If you end up with fewer than the expected count of bits, more than one deletion hit the same run and you cannot recover.
If you want to guarantee recovery up to a fixed error rate, use bit stuffing (see Example 3 below).
Example:
0110
encoded as:
00 11 11 00
a double error occurs:
0x 11 x1 00
received:
011100
'0' and '111' are odd-sized runs. Fix:
00111100
we have 8 bits, and have recovered from a double error.
decode:
0110
Example 2:
0101
encoded as
00110011
transmitted as
0xxx0011
received as
00011
corrected as
000011
decoded as
001
which is shorter than expected. A transmission error has occurred.
Example 3 (bit stuffing after a run of 3 bits):
0000 1111
stuffed as
00010 11101
sent as
0000001100 1111110011
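
A small sketch of the double-send scheme from above (encode each bit twice; on receive, pad every odd-sized run by one bit, then keep every second bit); bits are kept one per char in a plain string for clarity:

#include <stdio.h>
#include <string.h>

/* Encode: emit every bit twice. out must hold 2*strlen(in)+1 chars. */
void encode(const char *in, char *out) {
    size_t j = 0;
    for (size_t i = 0; in[i]; i++) {
        out[j++] = in[i];
        out[j++] = in[i];
    }
    out[j] = '\0';
}

/* Decode: extend every odd-sized run by one bit, then keep every second
   bit. Returns 0 on success, -1 if too few bits survive (unrecoverable). */
int decode(const char *in, size_t expected, char *out) {
    char fixed[256];
    size_t j = 0;
    for (size_t i = 0; in[i]; ) {
        size_t run = 1;
        while (in[i + run] == in[i]) run++;          /* measure the run */
        size_t padded = (run % 2) ? run + 1 : run;   /* pad odd runs    */
        for (size_t n = 0; n < padded; n++) fixed[j++] = in[i];
        i += run;
    }
    fixed[j] = '\0';
    if (j < 2 * expected) return -1;  /* multiple deletions hit one run */
    for (size_t i = 0; i < expected; i++) out[i] = fixed[2 * i];
    out[expected] = '\0';
    return 0;
}

int main(void) {
    char enc[64], dec[32];
    encode("0110", enc);                 /* enc = "00111100" */
    printf("encoded: %s\n", enc);
    /* Simulate the double deletion from the first example: "011100". */
    if (decode("011100", 4, dec) == 0)
        printf("recovered: %s\n", dec);  /* prints "recovered: 0110" */
    else
        printf("transmission error detected\n");
    return 0;
}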
