A registered netfilter hook can get the packet from the Linux kernel: the kernel receives the packet, looks for registered hooks, and passes the packet to them.
The general flow would be:
1. The NIC receives the frame.
2. The NIC places it in the DMA RX ring.
3. The kernel's networking subsystem takes it from the DMA RX ring.
But is there a way to get the packet before it enters the Linux networking subsystem? (That may be a broad term; I mean the kernel networking code that handles the packet first.) That is, my driver should get the packets before they go into the Linux networking stack.
I am a learner and I am trying to write a piece of code that does packet processing in the kernel.
Please correct my understanding if it is wrong, and point me to good material to read.
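For reference, registering a netfilter hook looks roughly like this. This is only a minimal sketch, assuming a reasonably recent kernel (the hook-function signature and nf_register_net_hook() changed across versions); the module and function names are placeholders:

    #include <linux/module.h>
    #include <linux/skbuff.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>
    #include <linux/ip.h>
    #include <net/net_namespace.h>

    /* Called for every IPv4 packet arriving at the PRE_ROUTING hook point. */
    static unsigned int pkt_hook(void *priv, struct sk_buff *skb,
                                 const struct nf_hook_state *state)
    {
        struct iphdr *iph = ip_hdr(skb);

        /* Example policy: silently drop ICMP, let everything else through. */
        if (iph && iph->protocol == IPPROTO_ICMP)
            return NF_DROP;

        return NF_ACCEPT;
    }

    static struct nf_hook_ops pkt_ops = {
        .hook     = pkt_hook,
        .pf       = NFPROTO_IPV4,
        .hooknum  = NF_INET_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,   /* run before other PRE_ROUTING hooks */
    };

    static int __init pkt_init(void)
    {
        return nf_register_net_hook(&init_net, &pkt_ops);
    }

    static void __exit pkt_exit(void)
    {
        nf_unregister_net_hook(&init_net, &pkt_ops);
    }

    module_init(pkt_init);
    module_exit(pkt_exit);
    MODULE_LICENSE("GPL");

Note that even at NF_IP_PRI_FIRST this runs only after the driver has handed the packet to the IP layer, so by itself it does not get you in front of the networking stack.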
Why not just modify the NIC driver? It is the one that gets the ISR first, so you should be able to filter the packets there.
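If patching the driver itself is too invasive, another option worth mentioning (not part of the answer above, and only a sketch assuming a recent kernel and a hypothetical interface name "eth0") is the per-device rx_handler mechanism that the bridge and bonding drivers use; it lets a module claim received packets in the device layer, before the protocol stack and netfilter see them:

    #include <linux/module.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/rtnetlink.h>
    #include <net/net_namespace.h>

    static struct net_device *dev;

    /* Runs from the core receive path, before IP/netfilter processing. */
    static rx_handler_result_t my_rx_handler(struct sk_buff **pskb)
    {
        struct sk_buff *skb = *pskb;

        if (skb->len < 64) {             /* arbitrary example filter */
            kfree_skb(skb);
            return RX_HANDLER_CONSUMED;  /* packet never reaches the stack */
        }
        return RX_HANDLER_PASS;          /* hand it on to the normal stack */
    }

    static int __init rxh_init(void)
    {
        int err;

        dev = dev_get_by_name(&init_net, "eth0");  /* hypothetical NIC name */
        if (!dev)
            return -ENODEV;

        rtnl_lock();
        err = netdev_rx_handler_register(dev, my_rx_handler, NULL);
        rtnl_unlock();
        if (err)
            dev_put(dev);
        return err;
    }

    static void __exit rxh_exit(void)
    {
        rtnl_lock();
        netdev_rx_handler_unregister(dev);
        rtnl_unlock();
        dev_put(dev);
    }

    module_init(rxh_init);
    module_exit(rxh_exit);
    MODULE_LICENSE("GPL");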
Let's talk about Windows user program A, which uses sockets to communicate with the server.
Question a) In summary, when a packet arrives at the LAN card, is there a technique by which the driver or the kernel automatically writes the packet data into process A?
While analyzing the actual program A, I completely suspended (froze) it through the Windows Task Manager or a debugger.
After that, the server sent a packet and I checked the packet through Wireshark.
Then, while checking process A's memory through the debugger, I found the packet captured in Wireshark in the data area of process A.
(Program A is not connected with Windows services or other processes at all; I checked all the parts, and only the fundamental question a) remains.)
I used x64dbg, WinDbg and Wireshark.
I am looking at implementing USB communication on an MCU which has a USB engine built into it. Basically you have access to the pipes/endpoints.
I am a little bit confused about the USB stack now. It appears that drivers operate at a level above the pipe/endpoint setup, so the pipe/endpoint layer is like a middle layer that drivers are built on. Is this correct?
Secondly, I am interested in simulating serial communication over USB. It appears Windows has a premade driver, so on the computer side I do not need to program at the pipe level.
How do I find out what I need to implement on the MCU to make it behave correctly with the generic serial driver?
This is an answer to your second question regarding the serial communication.
The USB standard defines a communications device class (CDC) for serial communication. The required drivers on the host side are implemented by Windows, macOS, Linux and many more operating systems.
The relevant CDC subclass is PSTN. The relevant documents are found in Class definition for Communication Devices 1.2.
The device basically implements four endpoints:
Control endpoint for configuration requests (baud rate, DTR state etc.). Have a look at SetLineCoding, GetLineCoding and SetControlLineState (see the sketch after this list).
Bulk endpoint for USB to serial transmission
Bulk endpoint for serial to USB transmission
Interrupt endpoint for notifications (DCD state, errors). See SerialState.
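As a rough illustration of the control requests mentioned above (a sketch only; the usb_send()/usb_recv() calls in the comments are placeholders for whatever your MCU's USB engine provides, not a real API), the line coding exchanged by SetLineCoding/GetLineCoding is a 7-byte structure, and SetControlLineState carries DTR/RTS in wValue:

    #include <stdint.h>

    /* CDC PSTN class request codes (bRequest) */
    #define CDC_SET_LINE_CODING        0x20
    #define CDC_GET_LINE_CODING        0x21
    #define CDC_SET_CONTROL_LINE_STATE 0x22

    /* 7-byte payload of SET_LINE_CODING / GET_LINE_CODING */
    typedef struct __attribute__((packed)) {
        uint32_t dwDTERate;   /* baud rate, e.g. 115200 */
        uint8_t  bCharFormat; /* 0 = 1 stop bit, 1 = 1.5, 2 = 2 */
        uint8_t  bParityType; /* 0 = none, 1 = odd, 2 = even, 3 = mark, 4 = space */
        uint8_t  bDataBits;   /* 5, 6, 7, 8 or 16 */
    } cdc_line_coding_t;

    static cdc_line_coding_t line_coding = { 115200, 0, 0, 8 };

    /* Sketch of a class-request handler invoked for the control endpoint. */
    void handle_cdc_request(uint8_t bRequest, uint16_t wValue)
    {
        (void)wValue;  /* only SET_CONTROL_LINE_STATE uses it */

        switch (bRequest) {
        case CDC_GET_LINE_CODING:
            /* usb_send(EP0, &line_coding, sizeof line_coding); */
            break;
        case CDC_SET_LINE_CODING:
            /* usb_recv(EP0, &line_coding, sizeof line_coding);
               then reprogram the MCU's UART to match */
            break;
        case CDC_SET_CONTROL_LINE_STATE:
            /* wValue bit 0 = DTR, bit 1 = RTS */
            break;
        }
    }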
And of course you need to get the device descriptor right.
On top of that, you need to implement all the standard USB requests.
Chances are high that this has already been written for your MCU, both the standard requests and the serial communication. So why not use the existing code?
I am using a Zebra DS457 scanner to read bar and QR codes via a COM port (RS232). In my test environment I used an MSI terminal with Win10, and it worked on the real COM port without any problems. But on other devices (Win10 and Win7) there are issues: the software trigger does not come through, and the scanned information does not get sent back to the computer. When I use a USB to RS232 FTDI adapter I have no issues at all. But why? First I thought it was Win10 and that the legacy support could be better, but the adapter is better and faster on all devices. How is this possible? Maybe a driver-specific thing? I am using this adapter (link to conrad.de).
An FTDI serial port will impose a minimum latency between the time a character arrives over the wire and the time an application can see it, and between the time an application wants to send something and the time it goes over the wire. On older devices, these latencies were a minimum of 1 ms each, but I think some newer high-speed devices have reduced them to 125 µs. Further, data which arrives at just the wrong speed sometimes ends up with hundreds of milliseconds of additional latency, for reasons I don't quite understand.
On the other hand, an FTDI device can buffer 256 bytes of data from the wire, or 128 bytes of data from the USB port to be sent over the wire, and process RTS/CTS handshaking without any software intervention, abilities which are lacking in the UART chips used by PC serial ports. If software gives 128 bytes to an FTDI device, it will start sending them until the remote device deasserts its handshake line, whereupon the FTDI device will stop sending as soon as the current byte is complete; it will then resume transmission as soon as the remote device reasserts handshake. If an FTDI device receives enough data over the wire that its UART would be in danger of overflowing, it will automatically deassert its handshake output without requiring any software intervention.

The UART used in a PC serial port, by contrast, requires a fast interrupt handler to control or respond to the handshake wires. If an interrupt handler maintains a 4096-byte buffer, it may deassert the handshake wire once that buffer is 75% full, but nothing will deassert the handshake wire if the buffer is less than 75% full and 17 bytes arrive over the wire in quick succession before the UART interrupt handler runs. Worse, if transmit buffering is enabled and the PC has fed 16 bytes to the UART for transmission when the remote device deasserts its handshake line, those 16 bytes will be sent out whether or not the remote device is ready to receive them (and, based on the handshake wire, it very well might not be).
Thus, some applications can work much better with an FTDI UART, and some much better with an actual serial port.
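Worth adding: whether any of this handshaking happens at all depends on the application asking the driver for hardware flow control in the first place. On Windows that is requested through the DCB; a minimal sketch (the port name "COM3" is just an example):

    #include <windows.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "cannot open port\n");
            return 1;
        }

        DCB dcb;
        memset(&dcb, 0, sizeof dcb);
        dcb.DCBlength = sizeof dcb;
        if (!GetCommState(h, &dcb))
            return 1;

        dcb.BaudRate = CBR_115200;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;

        /* Ask the driver (FTDI or built-in UART) to do RTS/CTS handshaking. */
        dcb.fOutxCtsFlow = TRUE;                  /* pause TX while CTS is deasserted */
        dcb.fRtsControl  = RTS_CONTROL_HANDSHAKE; /* driver toggles RTS for RX buffering */

        if (!SetCommState(h, &dcb))
            return 1;

        /* ... ReadFile / WriteFile as usual ... */
        CloseHandle(h);
        return 0;
    }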
I'm writing an IPsec implementation for a microcontroller and I want to test it using a standard Linux box running Debian Lenny. Both devices should secure the communication between them using IPsec ESP in tunnel mode. The keys are set up manually using setkey. There is no (or at least there should be no) user-space program involved in processing an IPsec packet. Now I want to see how the packets I create are processed by the Linux kernel. To see the raw packets I capture them using tcpdump and analyze them with Wireshark.
What's the best way to obtain debug information about IPsec processing?
How can I figure out whether the packet is accepted by the kernel?
How can I view the reason for a packet to be dropped?
You can instrument the XFRM (or perhaps net/ipv4/esp4.c) kernel code to print out debug messages at the right spots.
For example, in net/ipv4/esp4.c there is a function esp_input() which has some error cases, but you'll see that most of the interesting stuff is in the net/xfrm/*.c code.
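If rebuilding the kernel is inconvenient, a kprobe gives you similar visibility without patching the source (this is a different technique than instrumenting the code directly, and only a sketch, assuming esp_input shows up in /proc/kallsyms):

    #include <linux/module.h>
    #include <linux/kprobes.h>

    /* Log every entry into esp_input() so you can see whether packets reach it. */
    static int esp_pre(struct kprobe *p, struct pt_regs *regs)
    {
        printk(KERN_DEBUG "esp_input() entered\n");
        return 0;
    }

    static struct kprobe kp = {
        .symbol_name = "esp_input",   /* check /proc/kallsyms for the exact name */
        .pre_handler = esp_pre,
    };

    static int __init kp_init(void)
    {
        return register_kprobe(&kp);
    }

    static void __exit kp_exit(void)
    {
        unregister_kprobe(&kp);
    }

    module_init(kp_init);
    module_exit(kp_exit);
    MODULE_LICENSE("GPL");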
That said, I didn't have a problem getting a custom IPsec implementation to interoperate with Linux. Following the RFC 43xx specs and verifying via Wireshark that the packets came out correctly seemed to do well. If you're having issues and don't want to instrument the kernel, you can set up iptables rules and count the number of packets (of various types) at each point.
Finally, be sure you've actually added a security policy (SP) as well as a security association (SA), and that you have set up your firewall rules properly.
I have an iptables rule set up to drop UDP/TCP packets with matching strings. However, my program, which captures packets using libpcap, is still able to see these packets.
Why is this, and what should the iptables rules be to drop the packets before they are seen by libpcap?
Is there any way, perhaps other than iptables rules, to drop these packets before they are seen by libpcap/tcpdump?
Yes, libpcap sees all the packets.
They are being captured before being processed by netfilter.
Did you try to change the priority of the netfilter hook you use? If you hook with the highest priority for incoming packets, you will get the packet before the packet socket kernel code, which is what libpcap uses to capture packets.
(I assume you are using Linux.)
EDIT:
Libpcap uses different ways to capture packets, depending on the OS. On Linux it uses a packet socket, which is implemented in kernel code using the netfilter framework.
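To make that concrete, here is roughly what a packet socket looks like from user space; this is only a bare-bones sketch of the mechanism libpcap builds on for Linux (needs root or CAP_NET_RAW, and no filtering or error handling beyond the basics):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>        /* htons */
    #include <linux/if_ether.h>   /* ETH_P_ALL */

    int main(void)
    {
        /* A raw packet socket receives every frame the device layer delivers. */
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        unsigned char frame[65536];
        for (;;) {
            ssize_t n = recvfrom(fd, frame, sizeof frame, 0, NULL, NULL);
            if (n < 0) {
                perror("recvfrom");
                break;
            }
            printf("captured a frame of %zd bytes\n", n);
        }

        close(fd);
        return 0;
    }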
There's no way for libpcap to see the packets before netfilter; netfilter is a kernel module and processes all packets before they hit user mode. It can even see the packets before the rest of the kernel's network stack sees them.
Could you explain further?
It's possible that libpcap is also setting hooks on netfilter that override the ones set by iptables. The real issue is that looking at what hooks are set on netfilter is far from trivial, and it can only be done in kernel mode. Investigate how libpcap gets the packets.