Receiving UDP on XP vs. Win7 - windows-7

I am using a simple UDP receiver written in C++. I upgraded one of my machines to Windows 7, and this line now hangs because the UDP stream is not reaching the running executable:
iResult = recv(sock, RxBuf, buffsize, 0);
The recv() call just hangs. I have used Wireshark to confirm that the UDP stream is active and correct, but I don't know what the problem is.
Any help would be appreciated.
(the UDP stream is broadcast)

Unless you have set sock to non-blocking, recv() will block until data is received. So if the program is blocking there, it's probably because it isn't receiving any datagrams.
A lot changed in Windows networking between XP and 7, so here are some things to check:
Check your bind() statement. Make sure you are really binding the port you think you are, and that you are checking for errors (see the sketch at the end of this answer).
Simply turning off the firewall in Windows does not completely disable it. There are many components, especially on Vista and later, which are active all the time.
When you first run an executable, Windows Vista and later ask you to confirm that it should have network access. If you click anything other than "Allow", the path to that executable may be blocked permanently, and adding an "Allow" rule later does not override that block. To unblock it, you must turn the firewall back on, dig down into "Windows Firewall with Advanced Security", and delete the offending rules from both the Inbound and Outbound lists. You might be amazed at what builds up in there.
You probably need to add an Inbound firewall rule for the UDP port that you are listening on, even if the firewall is turned off.
Other things to try: disable any anti-virus software, run your listener as Administrator, and use Wireshark or another packet sniffer to make sure the packets really are reaching the machine.
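To make the bind() check concrete, here is a minimal sketch of a Winsock UDP receiver with error checking around socket(), bind(), and recv(). The port number (5000) and buffer size are placeholders, not values taken from the question:

// Minimal Winsock UDP receiver sketch with error checking (placeholder port 5000).
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) {
        printf("WSAStartup failed\n");
        return 1;
    }

    SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (sock == INVALID_SOCKET) {
        printf("socket() failed: %d\n", WSAGetLastError());
        return 1;
    }

    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);   // listen on all interfaces, broadcasts included
    local.sin_port = htons(5000);                // the port the sender is broadcasting to

    if (bind(sock, (sockaddr*)&local, sizeof(local)) == SOCKET_ERROR) {
        printf("bind() failed: %d\n", WSAGetLastError());   // e.g. WSAEADDRINUSE, WSAEACCES
        return 1;
    }

    char RxBuf[1500];
    int iResult = recv(sock, RxBuf, sizeof(RxBuf), 0);      // blocks until a datagram arrives
    if (iResult == SOCKET_ERROR)
        printf("recv() failed: %d\n", WSAGetLastError());
    else
        printf("received %d bytes\n", iResult);

    closesocket(sock);
    WSACleanup();
    return 0;
}

If the firewall turns out to be the culprit, an inbound rule along the lines of netsh advfirewall firewall add rule name="UDP 5000 in" dir=in action=allow protocol=UDP localport=5000 (again with a placeholder port) is usually enough.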

Related

How do I close a socket (ipv4 and ipv6) connection on Windows from any process?

How do I close TCP v4 and TCP v6 connections on Windows? I don't want to kill the entire process that has the open connection, as that would obviously kick everyone else off that process. I need to do this from a separate process, so I will not have access to socket handles, etc. I am using the Windows API to get the TCP table, etc., so I know which connections are active.
One way might be to enumerate all open handles on the system, or at least the open handles of a given target process, until you find the SOCKET handle you are interested in (see HOWTO: Enumerate handles, Socket Handles, and C++ Get Handle of Open Sockets of a Program). I'm not sure, though, how you would retrieve the IP/port pairs of a SOCKET to compare against the active connection you are interested in, short of injecting remote getsockname()/getpeername() calls into the process that owns the SOCKET.
Once you have found the SOCKET handle you want, you can close it by calling DuplicateHandle() with the DUPLICATE_CLOSE_SOURCE flag [1].
[1]: This is how the "Close Handle" feature in Process Explorer works.
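For illustration, here is a rough sketch of that close-by-duplication trick, assuming you have already located the owning process ID and the remote SOCKET handle value through handle enumeration (both are placeholders here):

// Close a handle that lives in another process, Process Explorer style.
#include <windows.h>

bool CloseRemoteSocketHandle(DWORD ownerPid, HANDLE remoteHandle)
{
    HANDLE hProcess = OpenProcess(PROCESS_DUP_HANDLE, FALSE, ownerPid);
    if (hProcess == NULL)
        return false;

    // DUPLICATE_CLOSE_SOURCE closes the handle in the owning process;
    // we then close our local duplicate as well.
    HANDLE hLocal = NULL;
    BOOL ok = DuplicateHandle(hProcess, remoteHandle,
                              GetCurrentProcess(), &hLocal,
                              0, FALSE, DUPLICATE_CLOSE_SOURCE);
    if (hLocal != NULL)
        CloseHandle(hLocal);
    CloseHandle(hProcess);
    return ok != FALSE;
}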
Since I'm using C#, I cannot P/Invoke SetTcpEntry, even as administrator with an app.manifest file; it always fails with error 317. So I created a C++ .exe that uses SetTcpEntry to close a comma-separated list of IPv4 connections passed on the command line, and it works fine even without an app.manifest file. That solves kicking IPv4 connections.
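The helper .exe essentially makes one SetTcpEntry call per connection. A rough sketch of that call, with placeholder addresses and ports (it must run elevated):

// Forcibly close an established IPv4 TCP connection identified by its 4-tuple.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

DWORD CloseTcp4Connection(const char* localIp, unsigned short localPort,
                          const char* remoteIp, unsigned short remotePort)
{
    MIB_TCPROW row = {};
    row.dwState = MIB_TCP_STATE_DELETE_TCB;      // ask the stack to drop the TCB
    inet_pton(AF_INET, localIp, &row.dwLocalAddr);
    inet_pton(AF_INET, remoteIp, &row.dwRemoteAddr);
    row.dwLocalPort = htons(localPort);          // ports are kept in network byte order
    row.dwRemotePort = htons(remotePort);
    return SetTcpEntry(&row);                    // NO_ERROR on success
}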
I tried the handle-enumeration approach with NtQuerySystemInformation but never could get it working quite right, and it is a private, mostly undocumented API that seems unsafe to use.
So, for IPv6, I am using WinDivert and injecting the RST flag into IPv6 packets with certain IP addresses. It is as simple as setting the RST flag of an incoming packet before sending it on through with WinDivert. The downside is that if the client never sends another packet, the IPv6 socket stays open indefinitely.
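As a rough illustration of the idea only, assuming the current WinDivert 2.x API (the function names differ from the older Divert* names used in another answer on this page) and a deliberately broad placeholder filter; real code would also match the specific IPv6 addresses it wants to kick:

// Set the RST flag on matching inbound IPv6 TCP packets and reinject them.
#include <windows.h>
#include "windivert.h"

void RstInboundIpv6(void)
{
    HANDLE h = WinDivertOpen("inbound && ipv6 && tcp", WINDIVERT_LAYER_NETWORK, 0, 0);
    if (h == INVALID_HANDLE_VALUE)
        return;

    UINT8 packet[65535];
    UINT len;
    WINDIVERT_ADDRESS addr;

    while (WinDivertRecv(h, packet, sizeof(packet), &len, &addr))
    {
        PWINDIVERT_TCPHDR tcp = NULL;
        WinDivertHelperParsePacket(packet, len, NULL, NULL, NULL, NULL, NULL,
                                   &tcp, NULL, NULL, NULL, NULL, NULL);
        if (tcp != NULL)
        {
            tcp->Rst = 1;                                    // turn the packet into a reset
            WinDivertHelperCalcChecksums(packet, len, &addr, 0);
        }
        WinDivertSend(h, packet, len, NULL, &addr);          // pass it on to the local stack
    }
    WinDivertClose(h);
}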
Perhaps someday Microsoft will add a SetTcpEntry6 function, but until then this appears to be the only realistic way.
UPDATE 2022-05-01: found this gem at https://www.x86matthew.com/view_post?id=settcpentry6

How to prevent Windows from sending RST packet when trying to connect to somebody via Pcap.net?

I'm trying to use Pcap.Net to open a TCP connection.
I'm sending the following packet:
The server is responding with:
After this, Windows on its own sends the reset packet:
Why is this happening, and how do I block this behavior?
I'm doing this on Windows 7.
As Mr Harris says, you can use WinDivert to do what you want. For example, to do just the TCP handshake, you could write something like the following:
// TCP handshake using WinDivert:
HANDLE handle = DivertOpen("inbound && tcp.SrcPort == 80 && tcp.Syn && tcp.Ack", 0, 0, 0);
DivertSend(handle, synPacket, sizeof(synPacket), dstAddr, NULL);
...
DivertRecv(handle, synAckPacket, sizeof(synAckPacket), &srcAddr, &length);
...
DivertSend(handle, ackPacket, sizeof(ackPacket), dstAddr, NULL);
...
The DivertRecv() function redirects the server's response into user space before it is handled by the Windows TCP/IP stack, so no pesky TCP RST will be generated. DivertSend() injects packets.
This is the main difference between WinDivert and WinPcap: the latter is merely a packet sniffer, whereas the former can intercept, filter, or block traffic.
WinDivert is written in C, so you'd need to write your own .NET wrapper.
(usual disclosure: WinDivert is my project).
Essentially, the problem is that scapy (or, here, Pcap.Net) runs in user space, and the Windows kernel sees the SYN-ACK first. The kernel sends a TCP RST before you have a chance to do anything, because it has no socket open on the port number in question.
The typical solution (on Linux) is to firewall your kernel from sending an RST packet from that TCP port (12456) while you are running the script. The problem is that I don't think the Windows firewall lets you match on TCP flags when dropping packets, so it cannot be that granular.
Perhaps the easiest solution is to do this in a Linux VM and use iptables to implement the RST drops, as shown below.
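For reference, the usual iptables rule for this (using port 12456 from the question) stops the Linux kernel from emitting the RST itself:

# drop the kernel's own RSTs leaving the port your raw handshake uses
iptables -A OUTPUT -p tcp --tcp-flags RST RST --sport 12456 -j DROP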
Either use Boring Old Winsock to make a TCP connection to the server, rather than constructing your own TCP-over-IP-over-Ethernet packets and sending them to the server, or somehow convince the Windows Internet protocol stack to ignore the SYN+ACK (and all subsequent packets) you get from the server. Otherwise the stack sees the SYN+ACK, notices that no process has tried to set up a TCP connection from 192.168.1.3:12456 to 192.168.1.1:80 using the standard in-kernel networking stack (i.e., nobody has tried to set it up using Boring Old Winsock), and sends back an RST to tell the server that nobody is listening on port 12456 on this machine.
You might be able to do the latter using WinDivert. It does not itself appear to have a .NET wrapper, so you might have to look for one if you're going to use .NET rather than Boring Old Unmanaged C or Boring Old Unmanaged C++.

Is there a way to monitor what process sends UDP packets (source/dest IP and port) in Windows?

I discovered almost accidentally that my machine was sending and receiving UDP packets to a machine in Poland. Not that I have any problem with Poland, I just don't know why my laptop has the need to communicate with a server there. Reverse DNS shows just the ISP providing the address to some end user. Using Wireshark, I can monitor the messages, which were indecipherable as they were probably encrypted. All packets sent from my machine had the same source port, so clearly the application that sent them opened this UDP socket to use it. I am searching for ways to:
1) enumerate all sockets currently open on the system, including the process that created each one and, for both TCP and UDP, which ports and addresses they are currently bound to.
2) because applications can open these sockets, use them, and close them right away, I would love to find (or perhaps even write) a program that, once started, would somehow get notified each time a socket is created, or more importantly each time one is bound to a source and/or destination address and port. For UDP, I would also love to be able to monitor/keep track of the destination IP addresses and ports that socket has sent messages to.
I don't want to monitor the traffic itself, I have Wireshark if I want to view the traffic. I want to be able to then cross reference to discover what application is generating the packets. I want to know if it is from a process I trust, or if it is something I need to investigate further.
Does anybody know of any applications (for the Windows platform) that can do this? If not, any ideas about a .NET or Windows API that provides this capability, should I want to write it myself?
Edit:
After further research, it looks like the APIs to use are GetExtendedUdpTable and GetExtendedTcpTable; CodeProject has some samples wrapping these in .NET (see http://www.codeproject.com/Articles/14423/Getting-the-active-TCP-UDP-connections-using-the-G). So a combination of these APIs and some sniffer code would be needed to monitor and keep track of which hosts, ports, and protocols any particular application on your machine is talking to. If I ever get some free time, I'll consider creating this; if you know of an app that does all this, please let me know.
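As a sketch of part (1), enumerating IPv4 UDP endpoints together with their owning PIDs via GetExtendedUdpTable looks roughly like this; tracking per-datagram destinations would still need the sniffer piece:

// List local IPv4 UDP endpoints and the PID that owns each one.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <cstdio>
#include <vector>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

int main()
{
    DWORD size = 0;
    GetExtendedUdpTable(NULL, &size, TRUE, AF_INET, UDP_TABLE_OWNER_PID, 0);   // query required size

    std::vector<char> buf(size);
    if (GetExtendedUdpTable(buf.data(), &size, TRUE, AF_INET, UDP_TABLE_OWNER_PID, 0) != NO_ERROR)
        return 1;

    MIB_UDPTABLE_OWNER_PID* table = (MIB_UDPTABLE_OWNER_PID*)buf.data();
    for (DWORD i = 0; i < table->dwNumEntries; ++i)
    {
        MIB_UDPROW_OWNER_PID& row = table->table[i];
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &row.dwLocalAddr, ip, sizeof(ip));
        printf("%s:%u  pid=%lu\n", ip, ntohs((u_short)row.dwLocalPort), row.dwOwningPid);
    }
    return 0;
}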
Try SysInternals TCPView. Despite its name, it handles UDP as well.
Run netstat -b from an elevated prompt to enumerate all ports along with the owning process names.
You can try using SysInternals' Process Monitor (ProcMon.exe or ProcMon64.exe).
It allows filtering processes by the "UDP Send" operation and provides detailed UDP connection data, including source and destination addresses (IPs) and ports.

difference between "address in use" with bind() in Windows and on Linux - errno=98

I have a small TCP server that listens on a port. While debugging, it's common for me to CTRL-C the server in order to kill the process.
On Windows I'm able to restart the service quickly and the socket can be rebound. On Linux I have to wait a few minutes before bind() succeeds.
When bind() fails it returns errno=98, "Address already in use".
I'd like to better understand the differences in implementations. Windows sure is more friendly to the developer, but I kind of doubt Linux is doing the 'wrong thing'.
My best guess is that Linux is waiting until all possible clients have detected that the old socket is broken before allowing new sockets to be created. The only way it could do this is to wait for them to time out.
Is there a way to change this behavior during development on Linux? I'm hoping to duplicate the way Windows does this.
On Linux, you want to set the SO_REUSEADDR option on the socket. The relevant man page is socket(7). Here's an example of its usage. This question explains what happens.
Here's a duplicate of this answer.
On Linux, SO_REUSEADDR allows you to bind to an address unless an active connection is present; on Windows this is the default behaviour. On Windows, SO_REUSEADDR additionally allows you to bind multiple sockets to the same address. See here and here for more.
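A minimal sketch of the Linux/POSIX side, with a placeholder port; the only essential part is calling setsockopt() before bind():

/* TCP listener that can be restarted immediately after CTRL-C (placeholder port 8080). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int opt = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt)) < 0)
        perror("setsockopt(SO_REUSEADDR)");

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");   /* without SO_REUSEADDR this is the EADDRINUSE (errno 98) case */
        return 1;
    }
    listen(fd, 16);
    /* ... accept() loop ... */
    close(fd);
    return 0;
}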

Simulating a network down to particular process

I am trying to simulate a scenario where one process's connection to its server is down while another process's connection to a different server stays up. Just pulling the network cable won't work in my case, since I need the other process's connection to stay up.
Is there any tool for this kind of job? I am on Windows. Thanks!
There are a few layers at which you can simulate this. The easiest applies if your two servers listen on two distinct TCP ports: in that case you can run two TCP proxies and stop or pause one when you want to simulate a failure. On Windows I would suggest using tcpTrace for this.
Another option would be to have the two servers bound to two virtual NICs, which are bridged to the physical NIC. Of course if you have two physical NICs, you could bind each server process to a different physical NIC.
At a lower level, you can run a WAN simulator. Most simulators allow you to impair specific types of traffic or specific ports. One such simulator is Packetstorm.
One other method I would suggest is attaching a debugger to one process and halting all of its threads from the debugger. Often a process doesn't die but gets stuck in garbage collection or in a loop; since the sockets don't close, many 'high availability' solutions won't automatically fail over.
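If attaching a full debugger is inconvenient, a rough equivalent (my own sketch, not something from the answer above) is to suspend every thread of the target process; ResumeThread() undoes it:

// Freeze all threads of a process by PID (the caller supplies the PID).
#include <windows.h>
#include <tlhelp32.h>

void FreezeProcess(DWORD pid)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return;

    THREADENTRY32 te = { sizeof(te) };
    if (Thread32First(snap, &te))
    {
        do
        {
            if (te.th32OwnerProcessID == pid)
            {
                HANDLE t = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
                if (t) { SuspendThread(t); CloseHandle(t); }   // ResumeThread() to unfreeze
            }
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
}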
One approach would be to mock the relevant network connection code for the purposes of testing. In this case you would probably want to mock it returning whatever it usually would if the connection was down.
A poor man's approach, if you can use sleep/hibernate mode on your machine:
Set an Outbound rule in the Windows Firewall to disallow connections for a particular program (a netsh one-liner for this is sketched after the note below).
Already connected sockets stay connected: put the machine in sleep/hibernate mode for a brief moment to force those sockets to disconnect.
When the system is restored, the program cannot establish new connections.
New connections are made possible as soon as you disable the firewall rule.
Note that this does not simulate a network outage, because each connection attempt fails immediately with a permission error; but it does prevent the process from establishing new connections.
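For reference, the outbound block rule from the first step can be created and removed from an elevated prompt roughly like this (the program path is a placeholder):

netsh advfirewall firewall add rule name="Block MyApp outbound" dir=out action=block program="C:\path\to\myapp.exe" enable=yes
netsh advfirewall firewall delete rule name="Block MyApp outbound"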
