IPsec in Linux kernel - how to figure out what's going on - linux-kernel

I'm writing an IPsec implementation for a microcontroller and I want to test it using a standard Linux box running Debian Lenny. Both devices should secure the communication between them using IPsec ESP in tunnel mode. The keys are set up manually using setkey; there is no (or at least there should be no) user-space program involved in processing an IPsec packet. Now I want to see how the packets I create are processed by the Linux kernel. To see the raw packets I capture them using tcpdump and analyze them using Wireshark.
What's the best way to obtain debug information about IPsec processing?
How can I figure out whether the packet is accepted by the kernel?
How can I view the reason for a packet to be dropped?

You can instrument the XFRM code (or perhaps net/ipv4/esp4.c) in the kernel to print out debug messages at the right spots.
For example, net/ipv4/esp4.c has a function esp_input() with several error cases, but you'll see that most of the interesting stuff is in the net/xfrm/*.c code.
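For example, an extra printk in one of esp_input()'s error paths, along these lines (purely illustrative; the surrounding code and labels differ between kernel versions), will show up in dmesg when a packet is rejected:

/* Illustrative only: drop a line like this into the error paths of
 * esp_input() in net/ipv4/esp4.c, rebuild, and watch dmesg. */
printk(KERN_DEBUG "esp_input: dropping packet, spi=0x%x len=%u\n",
       ntohl(x->id.spi), skb->len);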
That said, I didn't have a problem interoperating a custom IPsec implementation with Linux. Following the RFC 43xx specs and verifying via Wireshark that the packets came out correctly seemed to do the job. If you're having issues and don't want to instrument the kernel, you can set up iptables rules and count the number of (various types of) packets at each point.
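For instance, rules with no target only count packets, so something like the following (the subnet is a placeholder for whatever comes out of your tunnel) shows how far your traffic gets:

# Count ESP packets as they arrive, and decapsulated packets from the tunnel
iptables -A INPUT -p esp
iptables -A INPUT -s 192.168.2.0/24
# Inspect the packet/byte counters
iptables -vnL INPUT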
Finally, be sure you've actually added a security policy (SP) as well as a security association (SA) and setup firewall rules properly.
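In case it helps, a minimal setkey configuration for ESP in tunnel mode might look like the sketch below. All addresses, SPIs and keys are made-up examples (and the key lengths must match the algorithms you pick), so substitute your own:

#!/usr/sbin/setkey -f
flush;
spdflush;

# ESP SAs, one per direction (tunnel endpoints 192.0.2.1 <-> 192.0.2.2)
add 192.0.2.2 192.0.2.1 esp 0x201 -m tunnel
    -E aes-cbc 0x00112233445566778899aabbccddeeff
    -A hmac-sha1 0x00112233445566778899aabbccddeeff00112233;
add 192.0.2.1 192.0.2.2 esp 0x202 -m tunnel
    -E aes-cbc 0xffeeddccbbaa99887766554433221100
    -A hmac-sha1 0xffeeddccbbaa99887766554433221100ffeeddcc;

# Matching security policies (the SP part mentioned above)
spdadd 192.168.2.0/24 192.168.1.0/24 any -P in  ipsec esp/tunnel/192.0.2.2-192.0.2.1/require;
spdadd 192.168.1.0/24 192.168.2.0/24 any -P out ipsec esp/tunnel/192.0.2.1-192.0.2.2/require;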

Related

Attach NDIS LWF driver to a Virtual Network Adapter? (Kerio)

We have an NDIS LWF driver, and it seems like it cannot get attached to Virtual Network Adapters, for example the one that Kerio Control VPN client creates (Kerio Virtual Network).
When I try to install the NDIS LWF on the network adapter manually by giving it our INF file (Install -> Service -> Have Disk), the driver doesn't appear in the network service list.
Then I found out that if I add nolower to FilterMediaTypes in the INF file, it does appear in the network service list, but even then, when I click OK, it doesn't get added to the list of items and doesn't get attached.
My question is: how can I attach to this Kerio Virtual Network Adapter in order to monitor its packets?
LWFs cannot bind to a network interface that has HKR, Ndi\Interfaces,LowerRange,,nolower in its INF. Generally speaking, the network interface ought to have at least one real LowerRange, and it's reasonable to ask the vendor to add one. For whatever it's worth, we (the Windows OS team) originally shipped the Bluetooth PAN adapter with nolower, and then later realized we needed to update it to have something there, so LWFs could bind to it. Perhaps that anecdote helps motivate this vendor to update their INF.
If the NDIS datapath uses a 14 byte Ethernet-like header and is roughly compatible with Ethernet's ideas of unicast & multicast, then ethernet is the correct thing to put in LowerRange. See the docs for more details.
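In INF terms, the vendor's fix is roughly a line like this in the adapter's Ndi\Interfaces section (exact flags field aside):

HKR, Ndi\Interfaces,LowerRange,,ethernet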
It's not supported to try and add the nolower token to your LWF driver INF's FilterMediaTypes; you can't reasonably expect to bind to any unknown type of interface. What if the next network adapter indicates packets in some yet-to-be-invented framing layer — how could your LWF possibly make sense of those packets? For that reason, nolower is not a binding interface; it's a special token that means "the empty list".
LWFs also cannot bind to CoNDIS network adapters. This is simply because the LWF programming model has never been extended to cover all the additional signaling for connection management.
I am not personally familiar with the "Kerio" network interface — I don't know if it has nolower in its INF or whether it's CoNDIS (!ndiskd would tell you this). If it's the former, you should ask that vendor to update their INF.

Developing a Mac OSX Network Driver for a Serial Port AT Command Based Modem

First allow me to say that I don't have any experience developing drivers for OSX, nor drivers for Windows. So, there are a lot of things that I don't understand about how drivers work; I'm sure it'll be evident in my question.
I have a modem that is able to open and close TCP/UDP sockets using AT commands. I would like to create some kind of program (kernel extension? driver?) that implements a network driver, converting the network interface calls into AT command serial messages.
That's the basic gist of it. I'm essentially asking if anybody can point me in the right direction / give me a high-level overview of how they would approach it, and which Apple guides to focus on.
The XNU networking stack -- like most network stacks -- expects network devices to send and receive IP packets directly. It isn't tooled to work with network devices that handle part of the network stack (like TCP or UDP) internally -- it won't be possible to implement a network driver which uses this device.
You might have more luck exposing this device as a SOCKS proxy. You will need to write a userspace daemon which listens on a TCP port on localhost (on the computer) and relays traffic to the serial device; once that's done, you can set the computer to use that device as a SOCKS proxy in the Networking control panel.
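To give a feel for the daemon, here is a bare-bones sketch of the relay loop only: it accepts one TCP connection on localhost and shuttles raw bytes to and from the serial device. The listening port and device path are made up, and the SOCKS handshake and the AT-command translation (the actual hard part) are left out entirely:

/* Minimal relay skeleton: localhost TCP <-> serial device (no SOCKS, no AT parsing). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    addr.sin_port = htons(1080);                      /* example port */
    bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
    listen(lsock, 1);

    int client = accept(lsock, NULL, NULL);           /* one connection at a time */
    int serial = open("/dev/cu.usbmodem1", O_RDWR);   /* hypothetical device path */

    char buf[4096];
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(client, &fds);
        FD_SET(serial, &fds);
        int maxfd = client > serial ? client : serial;
        if (select(maxfd + 1, &fds, NULL, NULL, NULL) < 0)
            break;

        /* Copy bytes in whichever direction has data. */
        if (FD_ISSET(client, &fds)) {
            ssize_t n = read(client, buf, sizeof(buf));
            if (n <= 0) break;
            write(serial, buf, n);
        }
        if (FD_ISSET(serial, &fds)) {
            ssize_t n = read(serial, buf, sizeof(buf));
            if (n <= 0) break;
            write(client, buf, n);
        }
    }
    close(client);
    close(serial);
    close(lsock);
    return 0;
}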
(As an aside: most devices that implement this type of interface have a very low limit on the number of open sockets -- often fewer than 10. They're unlikely to be able to handle the network load generated by a desktop OS.)

Is there a way to monitor what process sends UDP packets (source/dest IP and port) in Windows?

I discovered almost accidentally that my machine was sending UDP packets to, and receiving them from, a machine in Poland. Not that I have any problem with Poland, I just don't know why my laptop needs to communicate with a server there. Reverse DNS shows just the ISP providing the address to some end user. Using Wireshark, I could monitor the messages, which were indecipherable, as they were probably encrypted. All packets sent from my machine had the same source port, so clearly the application that sent them opened this UDP socket to use it. I am searching for ways to:
1) enumerate all sockets currently open in the system, including the process that created them and, for both TCP and UDP, which ports and addresses they are currently bound to;
2) because applications can open these sockets, use them, and close them right away, I would love to find (or perhaps even write) a program that, once started, would somehow get a notification each time a socket is created, or more importantly when it is bound to a source and/or destination address and port. For UDP, I would also love to be able to monitor/keep track of the destination IP addresses and ports that the socket has sent messages to.
I don't want to monitor the traffic itself, I have Wireshark if I want to view the traffic. I want to be able to then cross reference to discover what application is generating the packets. I want to know if it is from a process I trust, or if it is something I need to investigate further.
Does anybody know of any applications (for the Windows platform) that can do this? If not, any ideas about a .NET or Windows API that provides this capability, should I want to write it myself?
Edit:
After further research, it looks like the APIs to use are GetExtendedUdpTable and GetExtendedTcpTable; CodeProject.com has some samples wrapping these in .NET (see http://www.codeproject.com/Articles/14423/Getting-the-active-TCP-UDP-connections-using-the-G). So a combination of this API and some sniffer code would be needed to monitor and keep track of which hosts, on which ports and protocols, any particular application on your machine is talking to. If I ever get some free time, I'll consider creating this; if you know of an app that already does all this, please let me know.
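If you do end up writing it yourself, the native call is straightforward; here is a rough C sketch (not production code) that dumps the IPv4 UDP table with owning PIDs, linking against iphlpapi.lib and ws2_32.lib:

/* Rough sketch: enumerate IPv4 UDP endpoints with their owning process IDs. */
#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    WSADATA wsa;
    DWORD size = 0;
    PMIB_UDPTABLE_OWNER_PID table;
    DWORD i;

    WSAStartup(MAKEWORD(2, 2), &wsa);

    /* First call with a NULL buffer just reports the required size. */
    GetExtendedUdpTable(NULL, &size, TRUE, AF_INET, UDP_TABLE_OWNER_PID, 0);
    table = (PMIB_UDPTABLE_OWNER_PID)malloc(size);
    if (!table)
        return 1;

    if (GetExtendedUdpTable(table, &size, TRUE, AF_INET, UDP_TABLE_OWNER_PID, 0) == NO_ERROR) {
        for (i = 0; i < table->dwNumEntries; i++) {
            struct in_addr local;
            local.s_addr = table->table[i].dwLocalAddr;
            printf("PID %5lu  %s:%u\n",
                   (unsigned long)table->table[i].dwOwningPid,
                   inet_ntoa(local),
                   ntohs((u_short)table->table[i].dwLocalPort));
        }
    }
    free(table);
    WSACleanup();
    return 0;
}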
Try SysInternals TCPView. Despite its name, it handles UDP as well.
netstat -b to enumerate all ports along with the process names.
You can try using SysInternals' Process Monitor (ProcMon.exe or ProcMon64.exe).
It allows filtering processes by the "UDP Send" operation and provides detailed UDP connection data, including source and destination addresses (IP) and ports, etc.

Win32 sockets - Forcing ip packets to leave physical interfaces when sending to other local interfaces

Summary: I'm trying to create sockets to pass data between two physical interfaces that exist on the same machine, and Win32 sockets always forward the traffic directly within the kernel instead of pushing it out through the physical interfaces. Is there any way to disable this behavior, perhaps through device settings, registry tweaks, routing table shenanigans, or socket options? We're using Windows XP SP3.
Some background. I'm attempting to build some completely automated IP tests to exercise our custom IPv4 equipment. We have a large lab of Windows XP machines, and individual physical ethernet interfaces for each device we're connecting to. Our devices are effectively ethernet routers each with their own IPs.
We need to send data out our lab machines, through our devices, then back into the same computer. We will be sending Unicast and Multicast UDP, TCP, and broadcast IP traffic through the devices.
We want (and likely need) the traffic to originate on the same machine it is destined to.
To do this, we configure two separate NICs each with their own IP on their own subnet, for instance NIC #1 with 10.0.0.1/24 and NIC #2 with 10.0.1.1/24. Our devices then act like simple passthrough routers, and have two interfaces, one on the 10.0.0.0/24 subnet, one on the 10.0.1.0/24 subnet, which they just forward packets back and forth from.
To generate our data, we'd like to be able to use Win32 sockets, since they are well understood and well supported, they are what our customers use, and they would probably be the most rapid approach. Packet injection is probably feasible for UDP and broadcast IP, but very likely not for TCP. I'd entertain ideas that used packet injection, but would strongly prefer standard Win32 sockets.
As stated in the summary, the packets never leave the machine. I've googled like a madman and I've not found much. Any ideas?
Use Windows' command-line ROUTE utility. You can configure it so any IP packet sent to a specific IP address on a specific Subnet gets sent to another IP/device. For example:
route ADD <NIC_1_IP> MASK <NIC_1_SUBNET> <DEVICE_IP_CONNECTED_TO_NIC_2> METRIC 1
route ADD <NIC_2_IP> MASK <NIC_2_SUBNET> <DEVICE_IP_CONNECTED_TO_NIC_1> METRIC 1
Alternatively, if you know the index numbers of the NIC interfaces, you can specify them instead:
route ADD <NIC_1_IP> MASK <NIC_1_SUBNET> METRIC 1 IF <NIC_2_INTF>
route ADD <NIC_2_IP> MASK <NIC_2_SUBNET> METRIC 1 IF <NIC_1_INTF>
This way, whenever a packet is sent to NIC #1's IP, the packet goes to the device connected to NIC #2, which will then pass it on to NIC #1. And vice versa for packets sent to NIC #2's IP.
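With the addresses from this question (NIC #1 = 10.0.0.1, NIC #2 = 10.0.1.1), and assuming the device's two interfaces are 10.0.0.2 and 10.0.1.2, the first form would look something like this, here with host masks so only traffic addressed to each NIC's own IP is redirected:

route ADD 10.0.0.1 MASK 255.255.255.255 10.0.1.2 METRIC 1
route ADD 10.0.1.1 MASK 255.255.255.255 10.0.0.2 METRIC 1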
For instance, this is a useful technique for allowing WireShark to capture local IP traffic if the PC is connected to a network with a router. Packets from one local IP/Port to another local IP/Port can be bounced off the router back to the PC so they travel through physical interfaces that WireShark can monitor (WireShark will see duplicate copies of each local packet - one outbound and one inbound - but you can filter out the duplicates).
Winsock is always going to bring the packet data up into kernel space and deal with it there. That's the whole point of a generic API: any device is dealt with at the same "layer". If you want to stick with Winsock, I don't believe you can (or would want to) work around this behavior.
You can remove some of the buffer copying with TransmitPackets or TransmitFile, but not between two device interfaces.
That being said, are you having a performance issue with the additional buffer copying that Winsock performs? Or security concerns?
How about running the endpoints of your tester inside distinct virtual machines? Then you need only a single piece of hardware, but you'll have separate TCP/IP stacks that don't know each other are local. Most VM solutions pass the packet straight through the host unchanged; I don't think the host will grab the packet and send it straight to another VM unless you configure bridging between VMs, and in any case you'll bind each VM to a different physical network adapter.

iptables and libpcap

I have a rule set up to drop UDP/TCP packets with matching strings. However, my program, which captures packets using libpcap, is still able to see these packets.
Why is this, and what should the iptables rules be to drop packets before they are seen by libpcap?
Is there any way, perhaps other than iptables rules, to drop these packets before they are seen by libpcap/tcpdump?
Yes, libpcap sees all the packets.
They are captured before being processed by netfilter.
Did you try changing the priority of the netfilter hook you use? If you hook with the highest priority for incoming packets, it will get the packet before the packet socket kernel code, which is what libpcap uses to capture packets.
(I assume you are using Linux.)
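If you do have your own netfilter module, raising its priority looks roughly like this; note that the hook signature and registration calls have changed across kernel versions, so this form matches recent kernels rather than older ones:

/* Sketch of a netfilter module hooking PRE_ROUTING at the highest priority. */
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <net/net_namespace.h>

static unsigned int first_hook(void *priv, struct sk_buff *skb,
                               const struct nf_hook_state *state)
{
        /* Decide here whether to drop the packet; NF_DROP discards it. */
        return NF_ACCEPT;
}

static struct nf_hook_ops ops = {
        .hook     = first_hook,
        .pf       = NFPROTO_IPV4,
        .hooknum  = NF_INET_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,   /* run before every other PRE_ROUTING hook */
};

static int __init first_init(void)
{
        return nf_register_net_hook(&init_net, &ops);
}

static void __exit first_exit(void)
{
        nf_unregister_net_hook(&init_net, &ops);
}

module_init(first_init);
module_exit(first_exit);
MODULE_LICENSE("GPL");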
EDIT:
Libpcap uses different ways to capture packets depending on the OS. On Linux it uses a packet socket, which is implemented in kernel code using the netfilter framework.
There's no way for libpcap to see the packets before netfilter; netfilter is a kernel module and processes all packets before they reach user mode. It can even see the packets before the rest of the kernel stack does.
Could you explain further?
It's possible that libpcap is also setting hooks on netfilter that override the ones set by iptables. The real issue is that finding out what hooks are set on netfilter is far from trivial, and can only be done in kernel mode. Investigate how libpcap gets the packets.
