Using RevEng to find the CRC algorithm in a closed protocol

I have a machine with a closed protocol and another device, a "Modbus gateway", from the same manufacturer. The gateway converts this protocol to RS-485 Modbus.
When I send a command packet (Modbus function 16) to the gateway, the gateway sends a specific converted packet to the machine, and when I inject that same packet over a plain UART connection, the machine understands it and changes its values too. I have created a list of cloned commands, but I need to know how the CRC/checksum (I think that's what it is) is calculated so that I can create custom packets.
I already used the RevEng tool (https://reveng.sourceforge.io/) and CRCcalculator (https://crccalc.com/) to try to find a common CRC algorithm from the cloned packets, but none worked.
Here are some cloned packets, where the last two bytes look like a CRC or checksum. In these packets I changed the temperature value from 0x11 to 0x15, and the last two bytes changed too:
9A56F1FE0EB9001100000100641C
9A56F1FE0EB90012000001006720
9A56F1FE0EB90013000001006620
9A56F1FE0EB9001400000100611C
9A56F1FE0EB9001500000100601C
RevEng output:
./reveng -w 16 -l -s 9A56F1FE0EB9001100000100641C 9A56F1FE0EB90012000001006720 9A56F1FE0EB90013000001006620 9A56F1FE0EB9001400000100611C 9A56F1FE0EB9001500000100601C
./reveng: no models found
Can someone help me?

It's not a CRC. The second-to-last byte is the exclusive-or sum of the preceding bytes. I'm not sure what the last byte is, but since it is only taking on two different values in your example, it does not appear to be part of a check value. Or if it is, it's a rather ineffective check value algorithm.
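For illustration, a minimal sketch of that check in C++ (the helper name is made up; the trailing byte is left alone since its role is unclear):

#include <cstdint>
#include <cstddef>

// XOR every byte except the last two; for the packet
// 9A 56 F1 FE 0E B9 00 11 00 00 01 00 64 1C this yields 0x64,
// matching the second-to-last byte.
uint8_t xor_checksum(const uint8_t* packet, std::size_t len)
{
    uint8_t sum = 0;
    for (std::size_t i = 0; i + 2 < len; ++i)
        sum ^= packet[i];
    return sum;
}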

Related

How does the Linux kernel find the right offset to parse IP packets?

I've found what code parses IP (v4) packets in the kernel source tree. This function, ip_rcv, can to a high degree of certainty detect whether a packet is correct or not, as is outlined in one of the comments:
Length at least the size of an ip header
Version of 4
Checksums correctly. [Speed optimisation for later, skip loopback checksums]
Doesn't have a bogus length
Malformed packets are simply dropped. This function seems to get a bunch of bytes that should resemble an IP packet, but what if some malicious actor would sneak an extra byte on the line? If not handled correctly, all the chunks of bytes that ip_rcv receives from now on will start 1 byte off and no correct IP packet can be reconstructed anymore. I assume the kernel does something smarter than to try all different byte offsets at which to start parsing an IP packet. What exactly, I'm unable to find. Could anyone shed some light on this?
I haven't taken the time to look at the kernel code, but most protocol stacks work by parsing the data that immediately follows the previous layer's header, not by searching for it.
In the case of Ethernet, an Ethernet frame header is typically 14 bytes in size. It can vary but the header itself indicates the different length in the etherType field when necessary. In this example, the NIC (Network Interface Card) will receive an Ethernet frame. If the frame is destined for this NIC then the data passed from the NIC driver to the IP stack will be an Ethernet frame containing this 14-byte header followed immediately by the IP header (first nibble will be 4 if it is a version 4 IP header for instance).
Again, I didn't look at the network stack code but there are two common cases here:
1) The IP stack is told this is an Ethernet frame and only needs to parse the Ethernet frame header for its length and the very next byte must be an IP header or the data is deemed not an IP frame.
2) The IP stack is given a pointer to the beginning of the data immediately following the Ethernet frame header and the IP stack then starts parsing at that location.
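A minimal sketch of case 2, assuming a plain 14-byte Ethernet II header with no VLAN tag (illustrative only, not kernel code):

#include <cstdint>
#include <cstddef>
#include <cstring>
#include <arpa/inet.h>

// Parse an Ethernet II frame and hand the payload to an IPv4 parser at a
// fixed offset; nothing searches for the IP header.
bool parse_frame(const uint8_t* frame, std::size_t len)
{
    if (len < 14)
        return false;                        // too short for an Ethernet header
    uint16_t etherType;
    std::memcpy(&etherType, frame + 12, sizeof etherType);
    if (ntohs(etherType) != 0x0800)          // not IPv4
        return false;
    const uint8_t* ip = frame + 14;          // IP header starts right after the Ethernet header
    if ((ip[0] >> 4) != 4)                   // first nibble must be 4 for IPv4
        return false;
    // ... hand `ip` to the IPv4 parser, which checks the length fields and checksum
    return true;
}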

TFTP packet example?

I'm writing a TFTP server in Ruby and I don't understand a couple things.
First, I read through the entire RFC and I understand the TFTP part of the packet (2-byte opcode, etc.), but I don't know where the TIDs go. Also, I've never done anything in Ruby at the byte level. I don't know how to create a variable that's 2 bytes this and then 1 byte that and then whatever.
If someone could show me an example of how to construct a read request packet in ruby, that'd be sweet. Say I'm on the client side and I select port #20000 (for my local TID) and I want to read the file named /Users/pachun/documents/hello.txt on the server which has a TID of 69 right now because it's the first request. How would I construct that packet in Ruby?
Check out this project:
https://github.com/spiceworks/net-tftp
The code there should answer your questions about how to construct the byte sequences used by the TFTP protocol.
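For reference, RFC 1350 defines a read request as a 2-byte opcode (1 = RRQ), the filename, a zero byte, the mode string, and another zero byte. The TIDs never appear inside the packet: they are simply the UDP ports (your chosen local port, e.g. 20000, and the server's well-known port 69 for the initial request). A sketch of that layout, shown in C++ for illustration (in Ruby the same bytes can be assembled with string concatenation or Array#pack):

#include <cstdint>
#include <string>
#include <vector>

// Hypothetical helper, not taken from net-tftp: build a TFTP RRQ packet.
// Layout per RFC 1350: | 00 01 | filename | 00 | mode | 00 |
std::vector<uint8_t> build_rrq(const std::string& filename, const std::string& mode = "octet")
{
    std::vector<uint8_t> pkt;
    pkt.push_back(0x00);                                      // opcode, high byte
    pkt.push_back(0x01);                                      // opcode, low byte: 1 = RRQ
    pkt.insert(pkt.end(), filename.begin(), filename.end());
    pkt.push_back(0x00);                                      // filename terminator
    pkt.insert(pkt.end(), mode.begin(), mode.end());
    pkt.push_back(0x00);                                      // mode terminator
    return pkt;
}

Sending that buffer from a UDP socket bound to your local port 20000 to the server's port 69 is what establishes the two TIDs for the rest of the transfer.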

Mac changes IP total length field

I'm programming with sockets in Mac 10.6.8. Whenever I receive a packet, it starts with the IP header. I've been using Wireshark to analyze incoming packets, and I noticed that my machine's socket implementation will consistently change the "total length" field in the IP header. Specifically, it will subtract the IP header length and reverse the bytes (from network to host order).
For example, here's the beginning of an IP header as reported by Wireshark:
45 c0 00 38 ...
That breaks down as follows:
4 bits (0x4): IP version: 4
4 bits (0x5): IP header length: 5 words (20 bytes)
8 bits (0xc0): differentiated services flags
16 bits (0x0038): total length: 56 bytes
However, when I print the contents of the buffer filled by recvfrom for the same packet, I get a different lede:
ssize_t recvbytes = recvfrom(sock->fd, buffer, size, /*flags=*/0,
                             (struct sockaddr*)src, &src_len);
returns
45 c0 24 00 ...
4 bits (0x4): IP version: 4
4 bits (0x5): IP header length: 5 words (20 bytes)
8 bits (0xc0): differentiated services flags
16 bits (0x2400): total length: 9216 bytes (?!)
I figured out that before I get access to the buffer, the socket implementation is reading the total length, subtracting the IP header length, and then writing it back in host order (little endian on my machine) rather than network order (big endian). In this example, that means:
read the total length: 0x0038 = 56
subtract the header length: 56 - 20 = 36
write back in host order: 36 = 0x0024 (big endian) = 0x2400 (little endian = host order on my machine)
The problem gets worse. It won't just change the total length of the outermost IP header. It will also change the total length fields of internal IP headers, e.g., the one buried in an ICMP "time exceeded" message (which must include the original IP header of the dropped packet). Funnier still, it won't subtract the IP header length from the internal headers; it just reverses the byte order.
Is this happening to anyone else? Is it part of a standard I'm unaware of? Is there a way to fix my machine's socket implementation to stop tampering with packets? How is Wireshark able to get around this problem?
Thanks in advance for your consideration.
EDIT: My code and Makefile are available on GitHub. I wrote a fixip_osx function to allow verifying IP checksums:
https://github.com/thejohnfreeman/netutils/blob/master/lib/ip.c
void fixip_osx(struct ip* ip) {
    /* Something on my Mac subtracts the header length from `ip_len` and stores
     * it in host order (little endian). */
    u16_t ip_hdrlen = ip->ip_hl << 2;
    u16_t ip_totlen = ip->ip_len + ip_hdrlen;
    ip->ip_len = htons(ip_totlen);
}
However, it's still a problem for verifying ICMP checksums when the payload contains another IP header.
The problem exists whether I compile with Clang 3.2 (built from trunk) or GCC 4.7 (MacPorts port), so I assume the problem lies in either the sockets implementation (packaged with Mac OS) or in Mac OS X itself.
The BSD suite of platforms (excluding OpenBSD) presents the IP offset and length in host byte order; all other platforms present them in the received network byte order. This is a "feature", and is referenced in the man page for IP(4) - Internet Protocol (FreeBSD, OS X):
The ip_len and ip_off fields must be provided in host byte order.
All other fields must be provided in network byte order.
IP length can equal packet length - IP header length in FreeBSD/NetBSD.
Reference: Stevens/Fenner/Rudolph, Unix Network Programming Vol.1, p.739
I have to deal with these anomalies with a user space implementation of the PGM network protocol, specific code:
https://code.google.com/p/openpgm/source/browse/trunk/openpgm/pgm/packet_parse.c#76
It's actually quite annoying to detect with AutoConf; I think all packages have this hard-coded on a per-platform basis. I've seen a bug report (header byte order config options detected incorrectly) raised this week on this very issue.
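For what it's worth, a rough compile-time sketch of that normalization (the platform macros and the assumption that the header length was subtracted are guesses; as noted above, real projects tend to hard-code or probe this per platform):

#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/ip.h>

// Rough sketch: put ip_len back into network byte order and re-add the header
// length on platforms assumed to present it host-ordered with the header
// subtracted. Whether the subtraction happens varies by platform and version.
static void normalize_ip_len(struct ip* hdr)
{
#if defined(__APPLE__) || defined(__FreeBSD__) || defined(__NetBSD__)
    hdr->ip_len = htons(hdr->ip_len + (hdr->ip_hl << 2));
#else
    (void)hdr;  // already network byte order with the header length included
#endif
}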
It is very unlikely that Mac itself is doing that; that would fundamentally break the IP protocol if it were. More likely, whatever is capturing the packets and delivering them to recvfrom() (presumably you are doing a promiscuous network capture, right?) is transforming the data after Mac is done processing it. Wireshark operates at a lower level and has access to the actual network data.

What's CRC with 64b in and 32b out?

I'm developing a software utility to transfer some data to a PCIe board. To guard against data transfer faults, I've added a CRC field to every packet so that the PCIe board can verify the received data against the CRC value.
Then we found out that the CRC value failed to pass verification.
I'm using boost::crc_32_type to generate the CRC value, while the hardware guy told me that the CRC implementation in the board is from http://www.easics.com/webtools/crctool and that they're using the 64-bit data bus width version of CRC32 - ETHERNET / AAL5.
So, is it possible to use boost::crc_32_type to work with the one they're using?
Any suggestions will be greatly appreciated!
[edit 2013.02.20]
the working CRC type has the following definition:
boost::crc_optimal<32, 0x04C11DB7, 0xFFFFFFFF, 0x0, false, false>
the order of every 8 bytes must be reversed before they are processed
std::for_each is used instead of process_bytes to get the result; I still don't quite understand the difference between them, though.
You can use crc_32_type; first you have to make sure your bytes go in the same order as the bytes on the hardware side. The convention used by the EASICS code is that the first byte in the stream goes into Data[63:56].
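Putting the edit and this answer together, a sketch of what the host-side computation might look like (assuming the payload length is a multiple of 8 bytes to match the 64-bit bus; not verified against the actual hardware):

#include <boost/crc.hpp>
#include <algorithm>
#include <cstdint>
#include <cstddef>

// CRC-32 polynomial 0x04C11DB7, initial remainder 0xFFFFFFFF, no final XOR,
// no input/output reflection, with each 64-bit word byte-reversed before it
// is fed in, so the first byte of the stream lands in Data[63:56].
uint32_t crc32_64bit_bus(const uint8_t* data, std::size_t len)
{
    boost::crc_optimal<32, 0x04C11DB7, 0xFFFFFFFF, 0x0, false, false> crc;
    for (std::size_t i = 0; i + 8 <= len; i += 8) {
        uint8_t word[8];
        std::reverse_copy(data + i, data + i + 8, word);  // swap byte order within the 64-bit word
        crc = std::for_each(word, word + 8, crc);         // feed each byte, as in the edit above
    }
    return crc.checksum();                                // trailing bytes (if any) are ignored here
}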

MIDIPacketList, numPackets is always 1

I'm processing MIDI on the iPad and everything is working fine; I can log everything that comes in and it all works as expected. However, when trying to receive long messages (i.e. SysEx), I only get one packet with a maximum of 256 bytes and nothing afterwards.
Using the code provided by Apple:
MIDIPacket *packet = &packetList->packet[0];
for (int i = 0; i < packetList->numPackets; ++i) {
    // ...
    packet = MIDIPacketNext(packet);
}
packetList->numPackets is always 1. After I get that first message, no other callback methods are called until a 'new' sysex message is sent. I don't think my MIDI processing method would be called with the full packetList (which could potentially be any size); I would have thought I would receive the data as a stream. Is this correct?
After digging around the only thing I could find was this: http://lists.apple.com/archives/coreaudio-api/2010/May/msg00189.html, which mentions the exact same thing but was not much help. I understand I probably need to implement buffering, but I can't even see anything past the first 256 bytes so I'm not sure where to even start with it.
My gut feeling here is that the system is either cramming the entire sysex message into one packet, or breaking it up into multiple packets. According to the CoreMidi documentation, the data field of the MIDIPacket structure has some interesting properties:
A variable-length stream of MIDI messages. Running status is not allowed. In the case of system-exclusive messages, a packet may only contain a single message, or portion of one, with no other MIDI events.
The MIDI messages in the packet must always be complete, except for system-exclusive.
(This is declared to be 256 bytes in length so clients don't have to create custom data structures in simple situations.)
So basically, you should look at the declared length field of the MIDIPacket and see if it is larger than 256. According to the spec, 256 bytes is just the standard allocation, but that array can hold more if necessary. You might find that the entire message has been crammed into that array.
Otherwise, it seems that the system is breaking the sysex messages up into multiple packets. Since the spec says that running status is not allowed, then it would have to send multiple packets, each with a leading 0xF0 byte. You would then need to create your own internal buffer to store the contents of these messages, stripping away the status bytes or header as necessary, and appending the data to your buffer until you read a 0xF7 byte which denotes the end of the sequence.
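If the messages do turn out to be split across packets, the buffering could look something like this (a simplified sketch: it assumes continuation packets carry raw sysex bytes and it ignores interleaved real-time messages; handleCompleteSysex is a hypothetical callback, not part of CoreMIDI):

#include <CoreMIDI/CoreMIDI.h>
#include <vector>

void handleCompleteSysex(const std::vector<Byte>& message);  // hypothetical, your code

static std::vector<Byte> sysexBuffer;
static bool inSysex = false;

// Append incoming bytes to a buffer between the 0xF0 (start of exclusive)
// and 0xF7 (end of exclusive) markers, then hand the complete message off.
void accumulateSysex(const MIDIPacketList* packetList)
{
    const MIDIPacket* packet = &packetList->packet[0];
    for (UInt32 i = 0; i < packetList->numPackets; ++i) {
        for (UInt16 j = 0; j < packet->length; ++j) {
            Byte b = packet->data[j];
            if (b == 0xF0) {                  // start of a new sysex message
                inSysex = true;
                sysexBuffer.clear();
            }
            if (inSysex)
                sysexBuffer.push_back(b);
            if (b == 0xF7 && inSysex) {       // end of exclusive: message complete
                inSysex = false;
                handleCompleteSysex(sysexBuffer);
                sysexBuffer.clear();
            }
        }
        packet = MIDIPacketNext(packet);
    }
}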
I had a similar issue on iOS. You are right, the MIDI packet count is always 1.
In my case, when receiving multiple MIDI events with the same timestamp (MIDI events received at the same time), iOS does not split those events into multiple packets, as you might expect.
But fortunately nothing is lost! Instead of receiving multiple packets, each with its own byte count, you receive a single packet with multiple events in it, and the number of bytes increases accordingly.
So here is what you have to do:
In your MIDI IN callback, parse all packets received (always 1 on iOS); then for each packet, check the length of the packet as well as the MIDI status, and loop through that packet to retrieve all the MIDI events it contains.
For instance, if the packet contains 9 bytes and the MIDI status is a Note On (a 3-byte message), that means your packet contains more than a single Note On: you must parse the first Note On (bytes 0 to 2), then check the following MIDI status from byte 3, and so on, as in the sketch after this answer.
Hope this helps ...
Jerome
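A simplified sketch of that walk for 3-byte channel messages (real code also has to handle 1- and 2-byte messages and sysex; handleNoteOn is a hypothetical callback):

#include <CoreMIDI/CoreMIDI.h>

void handleNoteOn(Byte channel, Byte note, Byte velocity);   // hypothetical, your code

// Walk one packet that may contain several complete 3-byte channel messages,
// e.g. three Note Ons in a 9-byte packet.
void walkPacket(const MIDIPacket* packet)
{
    for (UInt16 i = 0; i + 3 <= packet->length; i += 3) {
        Byte status = packet->data[i];
        if ((status & 0xF0) == 0x90)                          // Note On on any channel
            handleNoteOn(status & 0x0F, packet->data[i + 1], packet->data[i + 2]);
    }
}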
There is a good reference for how to walk through a MIDI packet in this file of a GitHub project: https://github.com/krevis/MIDIApps/blob/master/Frameworks/SnoizeMIDI/SMMessageParser.m
(Not mine, but it helped me solve the problems that got me to this thread)
