I was trying to understand how the CRC-32 for ethernet works and am having some issues. I looked at an ARP request that had the hex values:
00000024e8cc96beffffffffffff080600010800060400010024e8cc96be0a3307fa0000000000000a3307fb0000000000000000000000000000000000000ff0fdca
and threw that into one of the many online generators and could not get the CRC to match up. It looks to me like the first two bytes (0x00 0x00) are the beginning of the frame, and that the last four bytes (0x0F 0xF0 0xFD 0xCA) are the CRC, but I cannot calculate that when I put the middle bytes into the online generators.
Any idea what I am assuming incorrectly?
I made a mistake in my original post: I was looking at the FPGA's looped-back response. The real ARP request looked like:
0000ffffffffffff0024e8cc96be080600010800060400010024e8cc96be0a3307fa0000000000000a3307fb0000000000000000000000000000000000000ff0fdca
(the src and dst MAC are swapped). If I put everything after the leading 0000 and before the trailing 0ff0fdca into the second generator I had linked in the question, I get 0xCAFDF00F as the CRC, which is the same value originally sent out in the ARP request, just in reverse byte order. So it all makes sense if you use the right data from the get-go.
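For anyone who wants to sanity-check this without an online generator, here is a minimal sketch using Python's zlib.crc32, which computes the same reflected CRC-32 used for the Ethernet FCS. It assumes, as described above, that the two leading 0x00 bytes are capture padding and the last four bytes are the FCS:

```python
import zlib

# The corrected ARP frame from above: 2 leading padding bytes, the frame itself,
# and the 4-byte FCS (0f f0 fd ca) at the end.
capture = bytes.fromhex(
    "0000ffffffffffff0024e8cc96be080600010800060400010024e8cc96be0a3307fa0000000000000a3307fb0000000000000000000000000000000000000ff0fdca"
)
frame = capture[2:-4]        # the bytes the FCS is actually computed over
fcs_wire = capture[-4:]      # FCS bytes as they appear on the wire

crc = zlib.crc32(frame)      # standard reflected CRC-32 with initial/final inversion
print(hex(crc))                                   # should match the generator's 0xcafdf00f
print(fcs_wire == crc.to_bytes(4, "little"))      # the FCS is transmitted least-significant byte first
```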
I have a server receiving UDP packets whose payload is a number of CRC32-checksummed 4-byte words. The header in each UDP packet has a 2-byte field holding the "repeating" key used for the words in the payload.

The way I understand it, a CRC32 key must start and end with a 1 in its binary representation; in other words, the least and most significant bits of the key must be 1, not 0. My issue is that, for example, the first UDP packet received has the key field reading 0x11BC, which has the binary representation 00010001 10111100. So the 1s are neither right- nor left-aligned in the key-holding word; there are 0s on both ends.

Is my understanding of valid CRC32 keys wrong, then? I ask because I'm trying to write the code that checks each word using the key as-is, and it always gives a remainder, meaning every word in the payload has an error, yet the instructions I've been given guarantee that the first packet in the sample has no errors.
Although it is true that CRC polynomials always have the top and bottom bit set, often this is dealt with implicitly; a 32-bit CRC is actually a 33-bit calculation and the specified polynomial ordinarily omits the top bit.
So e.g. the standard quoted polynomial for a CCITT CRC16 is 0x1021, which does not have its top bit set.
It is normal to include the LSB, so if you're certain you know which way around the polynomial has been specified then either the top or the bottom bit of your word should be set.
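To make the "implicit top bit" point concrete, here is a minimal bit-by-bit sketch of the CCITT CRC16 using the quoted 0x1021 polynomial (this is just an illustration, not anyone's production code). The x^16 term never appears in the register because it is exactly the bit that gets shifted out before the XOR:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    # The register is only 16 bits wide even though the real polynomial
    # (0x11021) is 17 bits: the top bit is implicit.
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:                        # implicit x^16 bit about to fall out
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))   # 0x29b1, the usual check value for this init-0xFFFF variant
```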
However, for UDP purposes you've possibly also made a byte-ordering error on one side of the connection or the other. Network byte order is conventionally big-endian, whereas most processors today are little-endian; is one side of the link swapping byte order but not the other?
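Incidentally, the 0x11BC from the question is suggestive on that last point: with its two bytes swapped it reads 0xBC11, which does have both its top and bottom bits set.

```python
key = 0x11BC
swapped = ((key & 0xFF) << 8) | (key >> 8)     # swap the two bytes
print(f"{swapped:#06x} {swapped:016b}")        # 0xbc11 1011110000010001
```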
I have two image files. One is a regular picture [here], while the other is its externally modified counterpart retrieved from a remote server [here]. I have no idea what the server did to this image or to the rest of the images stored on it, but they have obviously been modified in some way, because the second file cannot be read. They are both the exact same picture, even the same byte count, but I can't figure out how to reverse whatever was done. What should I be trying?
Note: The modified file was retrieved from a packet capture as an octet stream. I wrote the raw binary to a file and then base64 decoded it.
Unlike the original JPEG file, the encrypted data is very "random": every byte value from 0 to 255 appears with almost exactly the same probability. This rules out a transposition cipher, which would merely rearrange the bytes and therefore preserve the original, non-uniform distribution.
Also, the files are exactly the same length (3,018,705 bytes), which makes it unlikely that a block cipher (like DES) was used, since those normally pad the data out to a multiple of the block size.
So that makes a stream cipher (like RC4) the most likely candidate. If this is the case, you can obtain the keystream simply by XORing each byte of the two files together. However, you might find it difficult to figure out the cryptographic key from this data. Good luck with that :-)
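If you want to test the stream-cipher theory, recovering the candidate keystream is quick to try; a rough sketch (the file names are placeholders):

```python
# XOR the two files byte-for-byte; if an XOR-based stream cipher was used,
# the result is the raw keystream.
with open("original.jpg", "rb") as f:
    plain = f.read()
with open("modified.bin", "rb") as f:
    cipher = f.read()

assert len(plain) == len(cipher)
keystream = bytes(p ^ c for p, c in zip(plain, cipher))

with open("keystream.bin", "wb") as f:
    f.write(keystream)
# An obvious repeating pattern here would point to a simple repeating-key XOR;
# RC4-style output would still look random.
```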
In developing my own SNMP poller, I've come across the problem of polling devices with 32-bit interface indexes. I can't find anything out there explaining how to convert the hex (5 bytes) into the 32-bit integer, or from the integer into hex, as it doesn't use simple hex conversion. For example, the interface index is 436219904. While doing a pcap of an snmpget, I see the hex for this is 81 d0 80 e0 00, which makes no sense to me. I cannot for the life of me figure out how that converts to an integer value. I've tried to find an RFC dealing with this and have had no luck. The 16-bit interface values convert as they should: 0001 = 1 and so on. Only the 32-bit ones seem to be giving me this problem. Any help is appreciated.
SNMP uses ASN.1 syntax to encode data. Thus, you need to learn the BER rules,
http://en.wikipedia.org/wiki/X.690
For your case, I'd say you looked at the wrong data: if 436219904 were encoded as an Integer32 in SNMP, the value bytes would be 1A 00 30 00.
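A quick way to confirm those value bytes (BER encodes an INTEGER value as a minimal big-endian two's-complement integer):

```python
print((436219904).to_bytes(4, "big").hex(" "))   # 1a 00 30 00
# The full BER TLV would add the tag and length in front: 02 04 1a 00 30 00
```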
I guess you have missed some details in your analysis, so you might want to do it once more and add more detail (a screenshot and so on) to your question.
I suspect the key piece of information missing from your question is that the ifIndex value in question is being used as an index into the table you are polling (you don't say which, but let's assume ifTable). That means it is encoded as a subidentifier of the OID being polled (give me [some property] for [this ifIndex]), rather than as a requested value (give me [the ifIndex] for [some other row of some other table]).
Per X.209 (the version of the ASN.1 Basic Encoding Rules used by SNMP), subidentifiers in OIDs (other than the first two) are encoded in one or more octets (8 bits each), with the highest-order bit used as a continuation bit (i.e. "the next octet is part of this subidentifier too") and the remaining 7 bits carrying the actual value.
In other words, in your value "81 d0 80 e0 00", the highest bit is set in each of the first 4 octets and cleared in the last octet: this is how you know there are 5 octets in the subidentifier. The remaining 7 bits of each of those octets are concatenated to arrive at the integer value.
The converse of course is that to encode an integer value into a subidentifier of an OID, you have to build it 7 bits at a time.
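Here is a small sketch of both directions of that base-128, continuation-bit scheme (the helper names are my own, not a library API):

```python
def encode_subid(value: int) -> bytes:
    """Encode one OID subidentifier: 7 bits per octet, high bit set on every octet but the last."""
    chunks = [value & 0x7F]
    value >>= 7
    while value:
        chunks.append((value & 0x7F) | 0x80)
        value >>= 7
    return bytes(reversed(chunks))

def decode_subid(octets: bytes) -> int:
    """Concatenate the low 7 bits of each octet; the high bit only means 'more octets follow'."""
    value = 0
    for b in octets:
        value = (value << 7) | (b & 0x7F)
    return value

print(encode_subid(436219904).hex(" "))             # 81 d0 80 e0 00
print(decode_subid(bytes.fromhex("81d080e000")))    # 436219904
```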
I'm using the Spartan 3E Starter Kit and I'm trying to receive Ethernet frames on it via a 100 MBit link.
For those who don't know, the board features a PHY chip that exposes the 25 MHz receive clock. I have (pretty much) verified that receiving works fine by buffering the received frames and resending them over a serial link.
Furthermore, I'm using a CRC32 generator from outputlogic.com. I aggregate the received nybbles to bytes and forward them to the CRC. At the end of the frame, I latch the generated CRC and display it on the LCD, together with the CRC I found in the ethernet frame.
However, (as you might have guessed) the two numbers do not match.
527edb0d -- FCS extracted from the frame
43a4d833 -- calculated using the CRC32 generator
The first one can also be verified by running the packet through Python's crc32 function, both with the frame captured by Wireshark and with the frame captured on the FPGA and retrieved via the serial port.
I guess it must be something more or less trivial. I pasted the receiving process over here. I stripped out everything that was not necessary. When capturing the output via serial, I added a FIFO (a ready-made Xilinx unit) which latched at the same time as the CRC generator, to get exactly the same bytes.
Does anyone have an idea what's wrong with that?
I started working on an ethernet MAC a while back, and although I never got round to finishing it I do have a working CRC generator that you can use here:
CRC.vhd
It's based on a Xilinx app note on the IEEE 802.3 CRC, which you can find here.
The CRC is instantiated in the ethernet receive component; if you look at the ETH_RECEIVE_SM process you can see how the FCS is loaded into the checker.
Hopefully you can spot your mistake by comparing with my code.
Edit:
I took the sample ethernet frame from fpga4fun and passed it through the CRC checker; see the simulation screenshot below.
You can see the residue C704DD7B at the end there; try doing the same with your own CRC checker and see what you get.
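If you want a software cross-check of the same idea: feeding a frame including its FCS into Python's zlib.crc32 always lands on the constant 0x2144DF1C, which is just the complemented, bit-reversed form of the 0xC704DD7B residue seen in the hardware register.

```python
import zlib

def fcs_ok(frame_with_fcs: bytes) -> bool:
    # CRC over data-plus-FCS gives a fixed residue for any correct frame.
    # zlib.crc32 reports it as 0x2144DF1C (0xC704DD7B before the final
    # inversion, in the non-reflected bit ordering).
    return zlib.crc32(frame_with_fcs) == 0x2144DF1C
```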
The generator you used may not be pre-processing and post-processing the data. If that generator takes a string of zeros and produces a zero CRC, then that's the problem: a string of zeros should not produce zero. (What it does produce depends on the number of zeros.)
The processing for the Ethernet CRC is to invert the CRC, then apply the CRC algorithm, then invert the CRC again.
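In other words, a minimal software sketch of those two inversions, using the reflected form of the polynomial (0xEDB88320):

```python
import zlib

def ethernet_crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF                        # pre-processing: start with the register inverted
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF                 # post-processing: invert again

zeros = bytes(64)
print(hex(ethernet_crc32(zeros)))                     # non-zero, and it depends on how many zeros
print(ethernet_crc32(zeros) == zlib.crc32(zeros))     # True: same algorithm as zlib's CRC-32
```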
A while back I was trying to brute-force a remote control which sent a 12-bit binary 'key'.
The device I made worked, but was very slow, as it was trying every combination at about 50 bits per second (4096 codes = 49152 bits = ~16 minutes).
I opened the receiver and found it was using a shift register to check the codes and no delay was required between attempts. This meant that the receiver was simply looking at the last 12 bits to be received to see if they were a match to the key.
This meant that if the stream 111111111111000000000000 was sent through, it had effectively tried all of these codes:
111111111111 111111111110 111111111100 111111111000
111111110000 111111100000 111111000000 111110000000
111100000000 111000000000 110000000000 100000000000
000000000000
In this case, I have used 24 bits to try 13 twelve-bit combinations (roughly 85% fewer bits than sending them separately).
Does anyone know of an algorithm that could reduce my 49152 bits sent by taking advantage of this?
What you're talking about is a de Bruijn sequence. If you don't care how it works and just want the result, here it is.
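For completeness, here is a short sketch that builds the binary de Bruijn sequence B(2, 12) with the standard Lyndon-word construction and checks that every 12-bit code appears in the transmitted stream:

```python
def de_bruijn(n: int) -> str:
    """Binary de Bruijn sequence B(2, n): every n-bit pattern occurs exactly once
    as a window of the cyclic sequence."""
    a = [0] * (n + 1)
    out = []

    def db(t: int, p: int) -> None:
        if t > n:
            if n % p == 0:
                out.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, out))

seq = de_bruijn(12)
stream = seq + seq[:11]                 # append 11 bits so the wrap-around windows are covered
print(len(stream))                      # 4107 bits to send, instead of 49152
print(len({stream[i:i + 12] for i in range(4096)}))   # 4096 distinct 12-bit codes
```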
Off the top of my head, I suppose flipping one bit in each 12-bit sequence would take care of another 13 combinations, for example 111111111101000000000010, then 111111111011000000000100, etc. But you still have to do a lot of permutations; even with one bit flipped I think you still have to send 111111111101000000000100 etc., and then flip two bits on one side and one on the other, and so on.