On an AVR MCU, how does the S flag in the status register work?

How exactly do the V and S flags function on the ATMEGA328?
The ATMEGA328 has separate sign (S), carry (C), two's complement overflow (V) and negative (N) flags. N is the MSB of the result (corresponding to the sign bit on other processors). How exactly the V flag operates is not well explained in the datasheet. As I understand it, V is generally calculated as N ⊕ C. But S is defined in the datasheet as V ⊕ N, which would imply that it equals N ⊕ N ⊕ C, or just C. If that were true, it wouldn't make much sense to have a separate flag for it, so I suspect I've misunderstood something here.

V is not generally the same as N XOR C.
I found a counterexample by looking at the ADC (Add with Carry) instruction in the manual and considering what would happen if you add 0xFF to 0x02 to get 0x01.
C would be 1 because 0xFF + 0x02 = 0x101, which is larger than 0xFF, so there is a carry from the most significant bit of the addition.
N would be 0 because it is simply the MSB of the result.
V would be 0 because it is defined to be 1 if two's complement overflow happened. From the perspective of two's complement, we simply added -1 and 2 to get 1, so there is no overflow. You can confirm this by carefully evaluating the formula for V given in the manual. So in this case V = 0 while N XOR C = 1, which confirms that V is not simply N XOR C.
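To make the counterexample concrete, here is a small C sketch of the flag equations for 8-bit addition as I read them in the AVR instruction set manual (the variable names are mine); for 0xFF + 0x02 it prints C=1, N=0, V=0, S=0, while N^C is 1:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 0xFF, b = 0x02;
    uint8_t r = a + b;                              /* result: 0x01 */

    int C = ((uint16_t)a + b) > 0xFF;               /* carry out of bit 7 */
    int N = (r >> 7) & 1;                           /* MSB of the result */
    int a7 = (a >> 7) & 1, b7 = (b >> 7) & 1, r7 = (r >> 7) & 1;
    int V = (a7 & b7 & !r7) | (!a7 & !b7 & r7);     /* two's complement overflow */
    int S = N ^ V;                                  /* sign flag: N XOR V */

    printf("C=%d N=%d V=%d S=%d  N^C=%d\n", C, N, V, S, N ^ C);
    return 0;
}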

Related

What does 1 << i mean and what can I use it for?

I saw someone use 1 << i, sometimes 1 << n. What is this and what can I use it for?
When you think of the value in its binary representation, the expression shifts the value to the left n times; n zero digit(s) are appended on the right.
So 1b becomes 100b if n == 2.
Looked at numerically, shifting once (n == 1) is equivalent to multiplying the value by 2, shifting twice multiplies it by 4, and so on.
One advantage is that bit shifting can be faster than a "real" integer multiplication.
Also, in computing you will often see so-called bit fields, where each bit switches something on or off or otherwise has some special meaning.
For instance, on microcontrollers each bit of a register might represent a digital output that is connected to an LED.
There, the notation can be used to create a "mask" that selects the bit number (i) the programmer wants to manipulate.
For instance
x &= ~(1<<4) clears bit number 4, while x |= (1<<4) would set the same bit.
Be aware that in C the shift causes undefined behavior if i is greater than or equal to the bit width of the (promoted) left operand, and that left-shifting a negative value is also undefined.
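To illustrate, here is a short C sketch of the usual mask idioms; leds is just a hypothetical 8-bit output register driving LEDs:

#include <stdint.h>

uint8_t leds = 0;             /* hypothetical 8-bit output register driving LEDs */

void demo(void)
{
    leds |= (1u << 4);        /* set bit 4: turn LED 4 on */
    leds &= ~(1u << 4);       /* clear bit 4: turn LED 4 off */
    leds ^= (1u << 2);        /* toggle bit 2 */
    if (leds & (1u << 7)) {   /* test bit 7 */
        /* LED 7 is currently on */
    }
}

Writing 1u keeps the shifted value unsigned, which sidesteps the negative-value issue mentioned above.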

why CRC32 is non-linear in gnuradio?

I have a question regarding the non-linearity of the CRC32 in gnuradio.
I am working on a project where I need a linear CRC32, meaning that crc(a xor b) = crc(a) xor crc(b), where a and b represent packets.
The implementation of CRC32 in gnuradio is non-linear by default, so I had to modify the code to make it linear.
I did some research on the theory behind CRC and I found two reasons for a non-linear CRC implementation:
1- with a linear CRC, we can get the same CRC for two different all-zero packets, for example crc(0000 0000) = crc(0000 0000 0000). So if I add additional zeros to a packet containing only zeros, the CRC will not be able to detect the errors (the additional zeros).
2- the second reason is that with a linear CRC, if I add zeros to the beginning of a packet, the CRC won't be able to detect them. For example: crc(10010 1101) = crc(0000 10010 1101).
Now my question is:
When transmitting packets between two USRPs, bits can be corrupted (due to bad SNR, for example), so a bit "1" could become a bit "0" and vice versa. However, I don't think bits can be added to the packets (like in the two cases stated above), so the reasons for implementing a non-linear CRC should not apply to gnuradio.
So why do we have a non-linear CRC in gnuradio by default?
And, if I use a linear CRC when transmitting between two USRPs, would that be a problem?
Thank you,
Such CRCs are still linear, just with an added constant. As an analogy, y = a x is linear, but so is y = a x + b, where b is a non-zero constant.
In this case, crc(a xor b) xor crc(a) xor crc(b) is a constant for all equal length messages a and b. That constant is crc(0), i.e. the CRC of all zeros of that same message length.
There is absolutely no problem whatsoever with this sort of linearity, and in fact it has benefits. In particular, a change in the message that adds a prefix of zeros would be detected as an error.
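As a quick check of that claim, here is a small C sketch using a standard bit-by-bit CRC-32 (reflected polynomial 0xEDB88320 with 0xFFFFFFFF pre- and post-conditioning; I am assuming gnuradio's default CRC32 uses equivalent conditioning). For two equal-length messages a and b, crc(a^b) ^ crc(a) ^ crc(b) comes out equal to the CRC of the all-zero message of the same length:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Standard CRC-32: reflected polynomial 0xEDB88320, init and final XOR 0xFFFFFFFF. */
static uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1));
    }
    return crc ^ 0xFFFFFFFF;
}

int main(void)
{
    uint8_t a[4] = {0x12, 0x34, 0x56, 0x78};
    uint8_t b[4] = {0xDE, 0xAD, 0xBE, 0xEF};
    uint8_t z[4] = {0, 0, 0, 0};              /* all-zero message of the same length */
    uint8_t ab[4];
    for (int i = 0; i < 4; i++)
        ab[i] = a[i] ^ b[i];

    printf("crc(a^b) ^ crc(a) ^ crc(b) = %08X\n",
           (unsigned)(crc32(ab, 4) ^ crc32(a, 4) ^ crc32(b, 4)));
    printf("crc(all zeros)             = %08X\n", (unsigned)crc32(z, 4));
    return 0;
}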

ORing N-bit input to get 1-bit output

[Design_picture]
As in the graph, is this possible?
So I am trying to check if the N-bit input is zero or not.
I thought of ORing every bit of the N-bit input and then following that with a NOT gate, so if all the bits are zero the OR gate would produce a 0 output. But I'm not sure how the OR gate would access every bit. How do I OR every bit of the N-bit input, and how does it work, without knowing what N is?
And how can I check if the most significant bit is 1?
Thanks!
You can use the shift operator together with the & operator (the idea of masking with 1). Right-shift the number until it becomes 0, and at each step AND it with 1; check whether the result of that AND is 0 or non-zero, as in the sketch below.
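Here is that idea as a small C sketch (assuming the N-bit value fits in an unsigned int; the function names are just for illustration):

#include <stdbool.h>

/* OR of all bits: true if any bit is set, false if the value is all zeros. */
bool any_bit_set(unsigned x)
{
    while (x != 0) {
        if (x & 1)        /* mask off the lowest bit */
            return true;
        x >>= 1;          /* shift the next bit down into position 0 */
    }
    return false;         /* every bit was 0 */
}

/* Test whether the most significant bit of an n-bit value is 1. */
bool msb_set(unsigned x, int n)
{
    return (x >> (n - 1)) & 1;
}

(In C, x != 0 already gives the same answer; the loop just spells out the OR-every-bit idea from the question.)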

Finding CRC collisions for specific divisor

My current textbook (Information Security: Principles and Practice by Mark Stamp) discusses how to determine the CRC of data via long-division, using XOR instead of subtraction to determine the remainder.
If our divisor has N bits, we append (N-1) zero bits to the dividend (the data) and then use long division with XOR to solve for the CRC.
For example:
Divisor: 10011
Dividend: 10101011
101010110000 / 10011 = 10110110 R 1010, where 1010 = CRC
I'm able to perform this computation fine. However, the book mentions that in the case of the divisor being 10011, it's easy to find collisions.
I'm missing something here -- why is it easier to find a collision when the divisor is 10011?
See http://en.wikipedia.org/wiki/Cyclic_redundancy_check#Designing_CRC_polynomials for more details.
10011 corresponds to the polynomial x^4+x+1, which is irreducible. And using such codes decreases the chance of collisions.
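For reference, here is the long division from the question as a small C sketch (a direct bit-by-bit translation, not an optimized CRC routine). It reproduces the remainder 1010 for dividend 10101011 and divisor 10011, and the second call in main shows one easy way to build a collision: XORing a copy of the divisor into the message leaves the remainder unchanged (this works for any divisor, not just 10011):

#include <stdio.h>

/* Long division mod 2: append (poly_bits - 1) zero bits to the data,
   then repeatedly XOR the divisor under the current leading 1 bit. */
static unsigned crc_remainder(unsigned data, int data_bits,
                              unsigned poly, int poly_bits)
{
    int shift = poly_bits - 1;                 /* number of appended zero bits */
    unsigned reg = data << shift;              /* dividend with zeros appended */
    for (int i = data_bits - 1; i >= 0; i--)
        if (reg & (1u << (i + shift)))         /* leading bit of this step set? */
            reg ^= poly << i;                  /* "subtract" (XOR) the divisor */
    return reg;                                /* what is left is the CRC */
}

int main(void)
{
    /* 0xAB = 10101011, 0x13 = 10011; expected CRC = 1010 = 0xA */
    printf("CRC of 10101011         = %X\n", crc_remainder(0xAB, 8, 0x13, 5));
    /* 0xAB ^ 0x13 = 10111000 has the same CRC: a collision */
    printf("CRC of 10101011 ^ 10011 = %X\n", crc_remainder(0xAB ^ 0x13, 8, 0x13, 5));
    return 0;
}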

Least significant bit first

While working on ruby I came across:
> "cc".unpack('b8B8')
=> ["11000110", "01100011"]
Then I tried Googling to find a good answer on "least significant bit", but could not find any.
Would anyone care to explain, or point me in the right direction, so I can understand the difference between "LSB first" and "MSB first"?
It has to do with the direction of the bits. Notice that in this example it's unpacking two ASCII "c" characters, and yet the bit strings are mirror images of each other. LSB means the rightmost (least significant) bit is the first bit. MSB means the leftmost (most significant) bit is the first bit.
As a simple example, consider the number 5, which in "normal" (readable) binary looks like this:
00000101
The least significant bit is the rightmost 1, because that is the 2^0 position (or just plain 1). It doesn't impact the value very much. The one next to it is the 2^1 position (or just plain 0 in this case), which is a bit more significant. The bit to its left (2^2, or just plain 4) is more significant still. So we say this is MSB notation because the most significant bit (2^7) comes first. If we change it to LSB, it simply becomes:
10100000
Easy right?
(And yes, for all you hardware gurus out there I'm aware that this changes from one architecture to another depending on endianness, but this is a simple answer for a simple question)
The term "significance" of a bit or byte only makes sense in the context of interpreting a sequence of bits or bytes as an integer. The bigger the impact of the bit or byte on the value of the resulting integer - the higher its significance. The more "significant" it is to the value.
So, for example, when we talk about a sequence of four bytes having the least significant byte first (aka little-endian), what we mean is that when we interpret those four bytes as a 32-bit integer, the first byte denotes the lowest eight binary digits of the integer, the second byte denotes the 9th through 16th binary digits, the third the 17th through 24th, and the last byte denotes the highest eight bits of the integer.
Likewise, if we say a sequence of 8 bits is in most significant bit first order, what we mean is that if we interpret the 8 bits as an 8-bit integer, the first bit in the sequence denotes the highest binary digit of the integer, the second bit the second highest, and so on, until the last bit denotes the lowest binary digit of the integer.
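To make the bit-order case concrete, here is a small C sketch (assuming ASCII and 8-bit chars) that prints the bits of the character 'c' in both orders, reproducing the 'B8' and 'b8' strings from the Ruby example above:

#include <stdio.h>

int main(void)
{
    unsigned char byte = 'c';              /* 0x63 = 0110 0011 */

    printf("MSB first: ");
    for (int i = 7; i >= 0; i--)           /* bit 7 down to bit 0 */
        putchar('0' + ((byte >> i) & 1));

    printf("\nLSB first: ");
    for (int i = 0; i <= 7; i++)           /* bit 0 up to bit 7 */
        putchar('0' + ((byte >> i) & 1));

    putchar('\n');
    return 0;
}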
Another thing to think about is that one can say that the usual decimal notation has a convention that is most significant digit first. For example, a decimal number like:
1250
is read to mean:
1 x 1000 +
2 x 100 +
5 x 10 +
0 x 1
Right? Now imagine a different convention that is least significant digit first. The same number would be written:
0521
and would be read as:
0 x 1 +
5 x 10 +
2 x 100 +
1 x 1000
Another thing to observe in passing is that in the C family of languages (and most modern programming languages), the shift-left operator (<<) and shift-right operator (>>) are named from a most-significant-bit-first point of view. That is, shifting a bit left increases its significance and shifting it right decreases it, meaning that left is the most significant side (and the left side is usually what we mean by "first", at least in the West).
