My current textbook (Information Security: Principles and Practice by Mark Stamp) discusses how to compute the CRC of data via long division, using XOR instead of subtraction to find the remainder.
If our divisor has N bits, we append N-1 zero bits to the dividend (the data) and then use long division with XOR to solve for the CRC.
For example:
Divisor: 10011
Dividend: 10101011
101010110000 / 10011 = 10110110 R 1010, where 1010 = CRC
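In code form, a quick C sketch (my own, not from the book) that reproduces this division:

#include <stdio.h>
#include <stdint.h>

/* CRC with divisor 10011 (0x13): append N-1 = 4 zero bits, then
   XOR the divisor under each leading 1 bit, MSB first. */
static unsigned crc4(unsigned data, int nbits)
{
    uint32_t reg = (uint32_t)data << 4;   /* append four 0 bits */
    int i;
    for (i = nbits + 3; i >= 4; i--)      /* walk the dividend MSB-first */
        if (reg & (1u << i))
            reg ^= 0x13u << (i - 4);      /* XOR divisor under that bit */
    return reg & 0xFu;                    /* 4-bit remainder = CRC */
}

int main(void)
{
    printf("%X\n", crc4(0xAB, 8));        /* 10101011 -> prints A, i.e. 1010 */
    return 0;
}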
I'm able to perform this computation fine. However, the book mentions that in the case of the divisor being 10011, it's easy to find collisions.
I'm missing something here -- why is it easier to find a collision when the divisor is 10011?
See http://en.wikipedia.org/wiki/Cyclic_redundancy_check#Designing_CRC_polynomials for more details.
10011 corresponds to the polynomial x^4 + x + 1, which is irreducible. Using such irreducible polynomials decreases the chance of collisions.
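For instance, XORing a shifted copy of the divisor into a message never changes its remainder: with divisor 10011, the messages 10101011 and 10001101 (the second is the first XORed with 00100110, the divisor shifted left one bit) both leave the CRC 1010, since any multiple of the divisor leaves remainder zero. You can confirm both with the crc4 sketch above.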
I'm implementing Algorithm D of Section 4.3.2 of Volume 2 of The Art of Computer Programming by D. E. Knuth.
On step D3 I'm supposed to compute q = floor((u[j+n]*BASE + u[j+n-1]) / v[n-1]) and r = (u[j+n]*BASE + u[j+n-1]) mod v[n-1]. Here, u (dividend) and v (divisor) are single-precision* arrays of length m+n and n, respectively. BASE is the representation base, which for a binary computer of 32 or 64 bits equals 2^32 or 2^64, respectively.
My question is about the precision in which q and r are represented. As I understand the rest of the algorithm, they are supposed to be single-precision*, but it's easy to spot many cases where they must be double-precision* to fit the result.
How are those values supposed to be computed? In what precision?
* The expression single/double-precision refers to integer arithmetic, not to floating-point arithmetic.
When the divisor is normalized (most significant bit of its leading word set), the quotient digit will always fit in a single word. With a power-of-two base representation, normalization is accomplished by cheap left-shift operations.
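For instance, with 32-bit digits (BASE = 2^32) the two-digit numerator fits exactly in 64 bits, so q and r can be computed with one double-precision division; the names below are mine, not Knuth's:

#include <stdint.h>

/* Trial quotient and remainder of step D3 for 32-bit digits.
   qhat can momentarily exceed 32 bits here (it may reach BASE),
   which is why it needs a 64-bit variable; D3's correction test
   then brings the final quotient digit below BASE so it fits in
   a single word. */
void d3_trial(const uint32_t *u, const uint32_t *v, int j, int n,
              uint64_t *qhat, uint64_t *rhat)
{
    uint64_t num = ((uint64_t)u[j + n] << 32) | u[j + n - 1];
    *qhat = num / v[n - 1];
    *rhat = num % v[n - 1];
}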
Link to a more detailed and formal answer.
I have a question regarding the non-linearity of the CRC32 in gnuradio.
I am working on a project where I need a linear CRC32, meaning that crc(a xor b) = crc(a) xor crc(b), where a and b represent packets.
The implementation of CRC32 in gnuradio is non-linear by default, so I had to modify the code to make it linear.
I did some research on the theory behind CRCs and found two reasons for a non-linear CRC implementation:
1- With a linear CRC, we get the same CRC for two different all-zero packets, for example crc(0000 0000) = crc(0000 0000 0000). So if I add additional zeros to a packet containing only zeros, the CRC will not be able to detect the errors (the additional zeros).
2- With a linear CRC, if I add zeros to the beginning of a packet, the CRC won't be able to detect them either, for example crc(1001 1101) = crc(0000 1001 1101).
Now my question is:
When transmitting packets between two USRPs, bits could have errors (due to bad SNR, for example), so a bit "1" could become a bit "0" and vice versa. However, I don't think that bits could be added to the packets (like in the two cases stated above), and thus the reasons for implementing a non-linear CRC should not apply to gnuradio.
So why do we have a non-linear CRC in gnuradio by default?
And, if I use a linear CRC when transmitting between two USRPs, would that be a problem?
Thank you,
Such CRCs are still linear, just with an added constant. As an analogy, y = a x is linear, but so is y = a x + b, where b is a non-zero constant.
In this case, crc(a xor b) xor crc(a) xor crc(b) is a constant for all equal length messages a and b. That constant is crc(0), i.e. the CRC of all zeros of that same message length.
There is absolutely no problem whatsoever with this sort of linearity, and in fact it has benefits. In particular, a change in the message that adds a prefix of zeros would be detected as an error.
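As a quick empirical check of that constant (my sketch, assuming zlib's crc32() is available; compile with -lz):

#include <stdio.h>
#include <zlib.h>

int main(void)
{
    unsigned char a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    unsigned char b[8] = {9, 10, 11, 12, 13, 14, 15, 16};
    unsigned char ab[8], zero[8] = {0};
    int i;
    for (i = 0; i < 8; i++)
        ab[i] = a[i] ^ b[i];              /* a xor b, byte-wise */
    /* crc(a) ^ crc(b) ^ crc(a xor b) should equal crc(00...0) */
    unsigned long c = crc32(0L, a, 8) ^ crc32(0L, b, 8) ^ crc32(0L, ab, 8);
    printf("%08lx == %08lx\n", c, crc32(0L, zero, 8));
    return 0;
}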
Computing m^d mod n by square-and-multiply takes at most 2k multiplications for a k-bit exponent. Which exponent(s) d will require this many?
Would greatly appreciate any advice as to how to go about solving this problem.
Assuming unsigned integers and a simple power-by-squaring algorithm like:
#include <stdint.h>
typedef uint32_t DWORD;

DWORD powuu(DWORD a, DWORD b)     /* returns a^b (mod 2^32) */
{
    int i, bits = 32;
    DWORD d = 1;
    for (i = 0; i < bits; i++)    /* scan exponent bits MSB-first */
    {
        d *= d;                   /* square once per bit */
        if (b & 0x80000000u)      /* multiply only when the bit is set */
            d *= a;
        b <<= 1;
    }
    return d;
}
You just need to replace a*b with modmul(a,b,n) or (a*b)%n, so the answer is:
if the exponent has k bits and l of them are set, you need k+l multiplications
the worst case is 2k multiplications, for the exponent (2^k)-1
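For example, a modular variant of powuu above might look like this (my sketch, reusing the DWORD typedef; the 64-bit intermediate keeps (a*b)%n from overflowing for 32-bit operands):

DWORD powmod(DWORD a, DWORD b, DWORD n)   /* returns a^b mod n */
{
    int i;
    DWORD d = 1;
    for (i = 0; i < 32; i++)
    {
        d = (DWORD)(((uint64_t)d * d) % n);      /* squaring step */
        if (b & 0x80000000u)
            d = (DWORD)(((uint64_t)d * a) % n);  /* extra multiply per set bit */
        b <<= 1;
    }
    return d;
}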
For more info see related QAs:
Power by squaring for negative exponents
modular arithmetics and NTT (finite field DFT) optimizations
For a naive implementation, it's clearly the exponent with the largest Hamming weight (number of set bits). In this case 2^k - 1 would require the most multiplication steps: k multiplies in addition to the squarings.
For k-ary window methods, the number of multiplications can be made independent of the exponent. E.g., for a fixed window size w = 3 we could compute the group coefficients {m^0, m^1, m^2, ..., m^7} (all mod n in this case, and probably in Montgomery representation for efficient reduction). The result is ceil(k/w) multiplications. This is often preferred in cryptographic implementations, as the exponent is not revealed by simple timing attacks: any k-bit exponent has the same timing. (The reality is a bit more complex if it is assumed the attacker has 'fine-grained' access to things like cache performance, etc.)
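As an illustration (my own sketch for 32-bit operands with w = 3; not hardened, and without the Montgomery arithmetic mentioned above):

#include <stdint.h>

uint32_t pow_fixed_window(uint32_t m, uint32_t e, uint32_t n)
{
    uint64_t table[8];                    /* m^0 .. m^7 mod n */
    uint64_t d = 1;
    int i, j;
    table[0] = 1;
    for (i = 1; i < 8; i++)
        table[i] = (table[i - 1] * m) % n;
    for (i = 10; i >= 0; i--)             /* ceil(32/3) = 11 windows, MSB first */
    {
        for (j = 0; j < 3; j++)
            d = (d * d) % n;              /* d = d^(2^3) mod n */
        d = (d * table[(e >> (3 * i)) & 7]) % n;  /* one multiply per window */
    }
    return (uint32_t)d;
}

Note that it always performs 11 table multiplies, even when a window digit is 0 (multiplying by table[0] = 1), which is what makes the multiplication count independent of the exponent.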
Sliding-window techniques are typically more efficient, and only slightly more difficult to implement than fixed-window methods. However, they also leak side-channel data, as timing will be dependent on the exponent. Furthermore, finding the 'best' multiplication sequence is known to be a hard problem.
I'm currently building a 16-bit ALU using Logisim (i.e. logic gates only), and am stuck on a division process. I am currently just using the simple standard "division algorithm loop" (as shown below):
1. Read input values;
2. Compare input values. Wait until comparison process has finished;
3. If A < B, go to step 10. If A ≥ B, go to the next step;
4. Subtract B from A;
5. Wait until subtraction process has finished;
6. Add one to count;
7. Wait until counting process has finished;
8. Write value from subtraction process to input;
9. Go to step 1;
10. Answer is count, remainder A.
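In plain C, that loop amounts to the following (a behavioral sketch for reference, not the gate-level version; assumes B > 0):

#include <stdint.h>

void div_slow(uint16_t a, uint16_t b, uint16_t *quot, uint16_t *rem)
{
    uint16_t count = 0;
    while (a >= b)    /* steps 2-3: compare */
    {
        a -= b;       /* steps 4-5: subtract */
        count++;      /* steps 6-7: add one to count */
    }
    *quot = count;    /* step 10: answer is count, remainder a */
    *rem  = a;
}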
This, however, takes a very long time for divisions with large quotients (repeating a 300-tick cycle 65,000 times isn't fun).
I'm just wondering if there are similar algorithms which are quicker (that exclusively use addition and/or subtraction and/or multiplication and any boolean logic) that could be implemented using logic gates.
Any help or ideas would be greatly appreciated!
Fraser
Use long division. In binary there is no multiplication, since the quotient digit at each bit position can only be 1 or 0, so it can be implemented as a conditional subtract (subtract if the result is non-negative) and shift.
That's just a crude outline, of course.
A typical approach for a 32/16:16+16 division would be to have the dividend stored in a pair of 16-bit registers (which get updated during operation) and the divisor in its own register (which doesn't). Sixteen times, subtract the divisor from the upper 17 bits of the dividend; if a borrow results, discard the result and shift the dividend left one place, putting a 0 into the lsb. If no borrow results, store the result into the upper bits of the dividend while shifting it left, but put a 1 into the lsb. After sixteen such steps, the lower 16 bits of the dividend register will hold the quotient, and the upper 16 bits will hold the remainder. Note that this operation will only work if the quotient is representable in 16 bits. Note also that on a processor which implements 32/16:16+16 division in this fashion, one may conveniently divide arbitrarily-large numbers by a 16-bit quantity, since the upper 16 bits of the dividend for each step should be the remainder from the previous step.
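In behavioral C, those sixteen conditional subtract-and-shift steps look roughly like this (a sketch for clarity, not the gate-level circuit):

#include <stdint.h>

/* 32/16:16+16 restoring division.  As noted above, this only works
   when the quotient fits in 16 bits, i.e. (dividend >> 16) < divisor. */
void div32by16(uint32_t dividend, uint16_t divisor,
               uint16_t *quot, uint16_t *rem)
{
    uint32_t r = dividend >> 16;      /* running remainder (upper half) */
    uint32_t q = dividend & 0xFFFFu;  /* quotient bits shift in here */
    int i;
    for (i = 0; i < 16; i++)
    {
        r = (r << 1) | (q >> 15);     /* shift the register pair left */
        q = (q << 1) & 0xFFFFu;
        if (r >= divisor)             /* no borrow: keep the difference */
        {
            r -= divisor;
            q |= 1;                   /* and shift a 1 into the lsb */
        }                             /* borrow: discard, lsb stays 0 */
    }
    *quot = (uint16_t)q;
    *rem  = (uint16_t)r;
}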
I am looking for an int32 -> int32 function that is:
bijection (one-to-one correspondence)
cheap to calculate at least in one direction
transforms the increasing sequence 0, 1, 2, 3, ... into a sequence that looks like a good pseudo-random sequence (~half the bits flip when the argument changes by a small amount, no obvious patterns)
Multiply by a large odd number and xor with a different one.
Bijection: odd numbers have a multiplicative inverse modulo powers of two, so the multiplication is undone by a multiplication by the inverse. And xor is, of course, undone by another xor.
This is basically how a linear congruential pseudo-random number generator works.
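A minimal sketch (the constants are arbitrary choices of mine; the inverse multiplier is computed with Newton's iteration, which doubles the number of correct low-order bits each step):

#include <stdint.h>

/* Inverse of an odd 32-bit multiplier modulo 2^32. */
static uint32_t inv32(uint32_t a)
{
    uint32_t x = a;     /* correct to 3 bits for any odd a */
    x *= 2 - a * x;     /* 6 bits  */
    x *= 2 - a * x;     /* 12 bits */
    x *= 2 - a * x;     /* 24 bits */
    x *= 2 - a * x;     /* 48 bits, i.e. all 32 */
    return x;
}

static const uint32_t MUL  = 2654435761u;  /* large odd multiplier */
static const uint32_t MASK = 0x5BF03635u;  /* xor constant */

uint32_t scramble(uint32_t x)   { return (x * MUL) ^ MASK; }
uint32_t unscramble(uint32_t y) { return (y ^ MASK) * inv32(MUL); }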
Probably overkill for this task, but have you considered applying a cryptographic pseudo-random permutation or other primitives that come from block ciphers? For example, it may be done using DES with a known key in counter mode:
your_number xor des(key, counter)