WinPCap packet reassembly

I am writing an application using WinPCap that will capture packets on the network and allow me to troubleshoot some of our network applications. Everything works fine until the packets are fragmented.
WinPCap appears to be sending me incorrect offsets and fragmentation options for the packets.
Here is the data for each packet.
Packet.IpV4.Fragmentation.Offset = 136
Packet.IpV4.Fragmentation.Options = None
Packet.IpV4.Fragmentation._value = 17
Packet.IpV4.Identification = 61876
I always get 5 of these packets (I'm sending the same test data every time) with these values. I thought the Offset and Options should change depending on the packet so I could reassemble the payload data correctly.
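For reference, in the raw IPv4 header all of the fragmentation information lives in one 16-bit word: three flag bits (reserved, DF, MF) followed by a 13-bit fragment offset measured in 8-byte units. Below is a minimal sketch of decoding it straight from the captured bytes, independent of the WinPCap/Pcap.Net wrapper; the struct and function names are only illustrative.

/* Sketch: decode the IPv4 identification and flags/fragment-offset
 * fields from a raw header. 'ip' points at the first byte of the IPv4
 * header inside the captured frame. */
#include <stdint.h>

struct ipv4_frag_info {
    int      dont_fragment;   /* DF flag */
    int      more_fragments;  /* MF flag: set on every fragment except the last */
    uint32_t offset_bytes;    /* where this fragment's payload belongs */
    uint16_t identification;  /* identical for all fragments of one datagram */
};

static struct ipv4_frag_info read_frag_info(const uint8_t *ip)
{
    struct ipv4_frag_info info;
    uint16_t fword = (uint16_t)((ip[6] << 8) | ip[7]);    /* header bytes 6-7 */

    info.identification = (uint16_t)((ip[4] << 8) | ip[5]);
    info.dont_fragment  = (fword >> 14) & 1;
    info.more_fragments = (fword >> 13) & 1;
    info.offset_bytes   = (uint32_t)(fword & 0x1FFF) * 8; /* 13 bits, 8-byte units */
    return info;
}

Reassembly then means grouping fragments by (source, destination, protocol, Identification) and copying each payload to offset_bytes until a fragment with MF clear arrives. Incidentally, 17 * 8 = 136, which matches the _value / Offset pair in the dump above: one looks like the raw 13-bit field and the other the same offset converted to bytes.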
I'm now at a loss as to how to use WinPCap to get packet data for anything that is fragmented.
Anyone have any ideas? Any help would be greatly appreciated.

Related

Zero-length packets for USB control transfers

In the context of a DFU driver, I'm trying to respond to a USB control IN transfer with a packet of length zero (not a ZLP in the sense of terminating a transfer that is a multiple of the max packet size; the response itself is just zero bytes long). However, the host returns with a timeout condition. I tried both the dfu-util tool with the corresponding protocol, and a minimal working example with pyusb that just issues a control IN transfer of some length while the device returns no data.
My key question is: Do I achieve this by responding with a NAK or should I set the endpoint valid but without any data? The specs are rather vague about this, imo.
Here are some technical details since I'm not sure where the problem is:
Host: Linux Kernel 5.16.10, dfu-util and pyusb (presumably) both using libusb 0.1.12
Device: STM32L1 with ChibiOS 21.11.1 USB stack (sends NAK in the above situation, I also tried to modify it to send a zero-length packet without success)
It sounds like you are programming the firmware of a device, and you want your device to give a response that is 0 bytes long when the host starts a control read transfer.
You can't simply send a NAK token: that is what the device does when the data isn't ready yet, and it causes the host to try again later to read the data.
Instead, you must actually send a 0-length IN packet to the host. When the host receives this packet, it sees that the packet is shorter than the maximum packet size, so it knows the data phase of the control transfer is done, and it moves on to the status stage.
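For illustration, here is a minimal sketch of that in device firmware. The names are generic placeholders rather than the ChibiOS (or any particular stack's) API: usb_setup_packet_t, the DFU_UPLOAD handling, upload_is_empty() and usb_ep0_start_in() all stand in for whatever the real stack provides.

#include <stdbool.h>
#include <stddef.h>

/* Sketch of a control-IN request handler for the "nothing left to upload"
 * case: arm endpoint 0 with a zero-byte IN transfer instead of NAKing. */
static bool handle_setup(const usb_setup_packet_t *setup)
{
    if (setup->bRequest == DFU_UPLOAD && upload_is_empty()) {
        /* Queue a zero-length IN packet: the host reads a short (0-byte)
         * packet, ends the data stage, and moves on to the status stage. */
        usb_ep0_start_in(NULL, 0);
        return true;
    }
    /* Leaving the request unhandled would typically stall EP0 instead. */
    return false;
}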

Embarcadero RAD Studio & Indy. Error message “Package Size Too Big”

I use Embarcadero RAD Studio & Indy and I have a problem.
I am using the IdUDPServer component and trying to send a file via UDP messages.
The file is successfully transferred if it is not large.
However, the error message “Package Size Too Big” appears if the file is large.
My code:
FileAPP->ReadBuffer(Buffer, Size);
IdUDPServer1->SendBuffer("10.6.1.255", 34004, RawToBytes(&Buffer, Size));
I see two ways to solve this problem:
1) Compose small messages and send these messages in a loop.
This is an easy way to solve the problem; however, I would also need to change the remote application, which is currently difficult. I do not have much time.
2) I want to find the condition in the source code where this error message is generated.
Maybe I can fix the condition and it will work for my specific task.
I have added some pictures. This is all the information that I now have.
Screenshots from help file: Picture 1, Picture 2, Picture 3
Error message: Picture 4
Please help me if you know the solution.
"Package Size Too Big" means you tried to send a datagram that is too large for UDP to handle. UDP has an effective max limit of 65507 bytes of payload data per datagram. You can't send a large file in a single datagram, it just won't fit.
So yes, you will have to break up your file into smaller chunks that are sent in separate datagrams. And because UDP is connection-less and does not guarantee delivery, you will also have to solve the issues of lost packets and out-of-order packets. You will have to put a rolling sequence number inside your packets so that the receiver can reassemble them in the proper order. And you will have to implement a re-transmission mechanism to re-send packets that the receiver did not receive.
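A bare-bones sketch of that framing is below. It is plain C rather than Indy-specific code, and the header layout, chunk size, and udp_send callback are only illustrative (a real protocol would also fix the byte order of the header fields).

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Example on-the-wire layout for one file chunk: a rolling sequence
 * number and length, followed by the payload. 1400 payload bytes keeps
 * the whole datagram comfortably inside a typical Ethernet MTU. */
#define CHUNK_PAYLOAD 1400

struct file_chunk {
    uint32_t seq;                     /* rolling sequence number */
    uint16_t len;                     /* number of valid payload bytes */
    uint8_t  payload[CHUNK_PAYLOAD];
};

/* udp_send() stands in for whatever actually transmits one datagram
 * (e.g. the SendBuffer call in the question's code). */
void send_file(const uint8_t *data, size_t size,
               void (*udp_send)(const void *buf, size_t len))
{
    struct file_chunk chunk;
    uint32_t seq = 0;
    size_t pos = 0;

    while (pos < size) {
        size_t n = size - pos;
        if (n > CHUNK_PAYLOAD)
            n = CHUNK_PAYLOAD;

        chunk.seq = seq++;
        chunk.len = (uint16_t)n;        /* a short chunk marks the end of the file */
        memcpy(chunk.payload, data + pos, n);

        /* The receiver reorders by seq and requests anything missing;
         * the retransmission side is omitted here. */
        udp_send(&chunk, sizeof(chunk.seq) + sizeof(chunk.len) + n);
        pos += n;
    }
}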
I would suggest not using TIdUDPClient/TIdUDPServer to implement file transfers manually, if possible. Indy has TIdTrivialFTP/TIdTrivialFTPServer components that implement the standardized UDP-based TFTP protocol, which handles these kinds of details for you. By default, TFTP has a limit of ~32MB per file (512 bytes per datagram * 65535 datagrams max), but you can increase that limit using the TIdTrivialFTP.RequestedBlockSize property to request permission from a TFTP server to send more bytes per datagram. RequestedBlockSize is set to 1500 bytes by default, which can accommodate files up to ~93.7MB in size (though in practice it should not be set higher than 1468 bytes so TFTP datagrams won't exceed the MTU size of Ethernet packets, thus limiting the max file size closer to ~91.7MB). RequestedBlockSize can be set as high as 65464 bytes per datagram to handle files up to ~3.9GB (at the risk of fragmenting UDP packets at the IP layer).
It is also possible for TFTP to handle unlimited file sizes with smaller datagram sizes, though TIdTrivialFTP/TIdTrivialFTPServer do not currently implement this; that behavior is not standardized in the TFTP protocol, but it is commonly implemented as an extension in 3rd-party implementations.
Otherwise, you should consider using TCP instead of UDP for your file transfers. TCP doesn't suffer from these problems.

RFduino - Faulty Transmissions

I have a project where I send data to an Android phone. I get data via the serial port to the RFduino and then send this data to the phone.
First I had 20-byte data chunks. I use serialEvent to set a flag and then read and send the data within the main loop. The serial baud rate is 9600, as recommended by the RFduino staff.
Now I have 40 bytes of data. The RFduino reads 40 bytes of serial data and sends it in two parts (see code). The first byte of each part is an identifier used to distinguish between the different packets.
After some successful transmissions the data becomes corrupt. Usually, once the transmission starts being faulty, I have to reinitiate the connection to get correct data.
This is a basic idea of the code:
// Shared between loop() and serialEvent(); 'data' holds one 40-byte frame.
char data[40];
bool mData = false;

void loop() {
  if (mData)
  {
    mData = false;
    // Read the full 40-byte frame from the UART, then push it over BLE
    // in two 20-byte sends (the per-send BLE payload limit).
    Serial.readBytes(data, 40);
    while (!RFduinoBLE.send(&data[0], 20));
    while (!RFduinoBLE.send(&data[20], 20));
  }
}

void serialEvent(void) {
  mData = true;
}
I checked the serial data with a logic analyzer and there is nothing wrong with it. I also used Wireshark to check the BLE transmissions. Both 20-byte chunks are sent within the same connection interval. The data sent by the RFduino becomes corrupt after some time.
When I omit the second packet and just send 20 Bytes, the problem does not occur.
I assume that the radio interferes with the reading of the serial port. I tried to use while(RFduinoBLE.radioActive); before the Serial.read, but I think the data is buffered before that, so this did not change anything.
Additionally, I tried to lower the transmit power to the minimum, without any improvement.
I also tried different connection intervals. I have new data every 128 ms, which limits the maximum connection interval. None of these changes helped at all.
I've read that the BLE radio takes priority for about 5-6 ms at a time and that there is only a 6-byte buffer for incoming serial data. At 9600 baud roughly one byte arrives per millisecond, so that buffer would overflow while the radio is busy.
I haven't looked for sources for those statements yet, but lowering the baud rate to 4800 does seem to reduce the faulty data.
Still, this is not a satisfying data rate, and the transmission faults still occur. Not as often anymore, but they do.
I've run out of ideas how to fix this and would appreciate any thoughts!
I mean it must be possible to send 40 bytes every 128ms... that's not that much.
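One idea that follows from the buffer-overflow theory above (just a sketch, not tested on an RFduino): drain the UART a few bytes at a time on every pass through loop() instead of blocking in readBytes() for all 40, so nothing has to survive in the tiny hardware buffer while the radio has priority.

// Sketch: accumulate the 40-byte frame incrementally so the small UART
// buffer is emptied on every loop() pass. Assumes the frames arrive
// back-to-back with no sync byte, so a real version may need framing checks.
char frame[40];
uint8_t filled = 0;

void loop() {
  while (Serial.available() > 0 && filled < sizeof(frame)) {
    frame[filled++] = (char)Serial.read();
  }

  if (filled == sizeof(frame)) {
    // Same two-part BLE send as before, once a full frame is buffered.
    while (!RFduinoBLE.send(&frame[0], 20));
    while (!RFduinoBLE.send(&frame[20], 20));
    filled = 0;
  }
}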
I already posted this question in the RFduino Forum - without answer until now - and I wonder if anyone experienced similar issues with the RFduino.

TCP retransmission for the same packet but with a different TCP payload

I'm developing an Ethernet driver on the Linux platform. I found that when a TCP retransmission occurred, the TCP payloads of multiple retransmitted packets with the same sequence number were different. I can't understand why this would happen. In my driver, I just allocated a normal network device without any specific flags. By the way, the TCP checksum field was also wrong in these retransmission packets; the checksum in all the other types of TCP packets, such as SYN, ACK, and DUP ACK, was correct.
I captured the packets with Wireshark, which means the packets I captured had not been handled by my driver; they came straight from the TCP stack in the Linux kernel. But when I tested with other Ethernet devices and drivers, this problem didn't happen. So my questions are the following.
Is it possible for the TCP stack to retransmit the same packet with a different payload?
Which kernel parameters could cause this problem?
How can my driver cause this problem?
Thanks for any replies.
I found the reason for this problem. It was a very foolish mistake.
To make all the DMA buffer addresses (in my driver, skb->data) aligned to 4 bytes, I called memmove to shift the data. But the data referred to by skb->data is shared with the whole TCP/IP stack in the kernel. So after this wrong operation, when a TCP retransmission occurs, the TCP stack still uses its original reference into that buffer, while the bytes underneath have been shifted. That's why the checksum, which was computed over the original data, looks wrong in Wireshark. The code in my driver was like the following.
u32 skb_len = skb->len;
u32 align = check_aligned(skb);   /* misalignment of skb->data, in bytes */

if (!align)
        return skb;

/* Shift the payload so skb->data becomes 4-byte aligned. This rewrites
 * the buffer in place, which is the bug: the data is shared with the
 * TCP stack, so a later retransmission sends the shifted bytes. */
skb_push(skb, align);
memmove(skb->data, skb->data + align, skb_len);
skb_trim(skb, skb_len);
return skb;
I hope my experience can help others avoid this foolish mistake.
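For contrast, a rough sketch of a safer pattern: treat the shared skb data as read-only and copy the frame into a driver-owned, aligned DMA buffer instead of shifting skb->data in place. Here my_priv, bounce_buf and start_dma are placeholders for whatever the real driver uses, not kernel APIs.

/* Sketch: copy the frame into a driver-owned, 4-byte-aligned bounce
 * buffer for the DMA engine, leaving the shared skb data untouched. */
static void xmit_via_bounce_buffer(struct my_priv *priv, struct sk_buff *skb)
{
        void *dst = priv->bounce_buf;        /* aligned when allocated */

        /* skb_copy_bits() only reads from the skb, so a retransmission
         * still sees the original payload and a matching checksum. */
        skb_copy_bits(skb, 0, dst, skb->len);
        start_dma(priv, dst, skb->len);      /* hypothetical hardware kick-off */
        dev_kfree_skb_any(skb);
}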

SnmpExtensionTrap has a size limit?

I am working on a problem in an SNMP extension agent on Windows, which passes traps to snmp.exe via the SnmpExtensionTrap callback.
We added a couple of fields to the agent recently, and I am starting to see that some traps are getting lost. When I intercept the call in a debugger and reduce the length of some strings, the same traps that would otherwise have been lost go through.
I cannot seem to find any reference to a size limit on the data passed via SnmpExtensionTrap. Does anyone know of one?
I would expect the trap size to be limited by the UDP packet size, since SNMP runs over the datagram-oriented UDP protocol.
The maximum size of a UDP datagram is 64 KB (65,535 bytes, which after the IP and UDP headers leaves about 65,507 bytes of payload), but you'll have to take into account the SNMP overhead plus any limitations of the transport you're running over (e.g. Ethernet).
