I'm seeing weird behaviour on Windows. I have two processes that talk to each other over UDP.
Scenario: one of the processes is up and the other is not. The running process tries to send a UDP message to the one that is down. The running process then gets a signal from the OS (or something else) that its socket is readable, as if it had received a message from the other process.
How come?
Something arrives on the same port, but it is not a real message sent by the other application. When you try to read it, you get exception number 10054, connection reset.
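What you are most likely seeing is Windows surfacing the ICMP "port unreachable" reply that came back for your own outgoing datagram: the socket is marked readable, and the next read fails with WSAECONNRESET (10054). If that report is unwanted, a minimal Winsock sketch (using the SIO_UDP_CONNRESET ioctl from mstcpip.h; error handling omitted) can switch it off:

#include <winsock2.h>
#include <mstcpip.h>

/* Tell Winsock not to report incoming ICMP "port unreachable" messages
   as WSAECONNRESET on this UDP socket. */
void disable_udp_connreset(SOCKET s)
{
    BOOL  bNewBehavior    = FALSE;   /* FALSE = stop reporting the resets */
    DWORD dwBytesReturned = 0;
    WSAIoctl(s, SIO_UDP_CONNRESET,
             &bNewBehavior, sizeof(bNewBehavior),
             NULL, 0, &dwBytesReturned, NULL, NULL);
}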
Our software (Nmap port scanner) needs to quickly determine the status of a non-blocking TCP socket connect(). We use select() to monitor a lot of sockets, and Windows is good at notifying us when one succeeds. But if the port is closed and the target sends a TCP RST, Windows will keep trying a few times before notifying the exceptfds, and the socket error is WSAECONNREFUSED as expected. Our application has its own timeout, though, and will usually mark the connection as timed-out before Windows gives up. We want to get as close as possible to the behavior of Linux, which is to notify with ECONNREFUSED immediately upon receipt of the first RST.
We have tried using the TCP_MAXRT socket option, and this works to get select() to signal us right away, but the result (for closed ports) is always WSAETIMEDOUT, which makes it impossible to distinguish closed (RST) from filtered/firewalled (network timeout), which puts us back at the original problem. Determining this distinction is a core feature of our application.
So what is the best way on Windows to find out if a non-blocking socket connect() has received a connection reset?
EDITED TO ADD: A core problem here is this line from Microsoft's documentation on the SO_ERROR socket option: "This per-socket error code is not always immediately set." If it were immediately set, we could check for it prior to the connect timeout.
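For reference, the pattern under discussion is roughly the sketch below: a non-blocking connect() polled with select(), with the failure reason read through SO_ERROR (the socket setup and the surrounding polling loop are assumed). The catch described above is that the SO_ERROR result may lag behind the first RST:

#include <winsock2.h>

/* Returns 1 = connected (port open), 0 = still pending, -1 = failed;
   on failure *err holds the Winsock error read from SO_ERROR. */
int poll_connect(SOCKET s, int *err)
{
    fd_set wfds, efds;
    struct timeval tv = { 0, 100000 };   /* 100 ms slice of our own timeout */
    FD_ZERO(&wfds);  FD_SET(s, &wfds);
    FD_ZERO(&efds);  FD_SET(s, &efds);
    if (select(0, NULL, &wfds, &efds, &tv) <= 0)
        return 0;                        /* nothing decided yet */
    if (FD_ISSET(s, &wfds))
        return 1;                        /* connect() completed */
    /* exceptfds fired: ask why (WSAECONNREFUSED = closed, WSAETIMEDOUT = filtered) */
    int len = sizeof(*err);
    getsockopt(s, SOL_SOCKET, SO_ERROR, (char *)err, &len);
    return -1;
}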
Trying to establish a TCP connection between a client and server. Both machines are Macs and are on the same LAN. The server's app listens on port 12345. After receiving "SYN, ACK" from the server, I send "ACK", but then my client automatically sends a "FIN, ACK" followed by "RST, ACK". So the TCP flow ends up being:
Client sends SYN.
SVR sends SYN, ACK.
Client sends ACK.
Client sends FIN, ACK.
Client sends RST, ACK.
SVR sends ACK.
SVR sends ACK.
Client sends RST.
Client sends RST.
From reading other posts with similar issues, it sounds like this could be happening because I'm trying to create the handshake manually at the user level, and the Unix kernel (operating at the system level) sees the "SYN, ACK" far before anything at the user level can respond, so it moves to close the connection, seeing it as open for no reason. A similar problem to what this Linux user experienced: Unwanted RST TCP packet with Scapy
Whereas iptables worked for the Linux user, should I use something like pf in Mac OS X to block/drop the FIN and RST messages? My client is running 10.9.5 and my server 10.10.3.
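If pf does turn out to be viable, the rules might look like this hypothetical pf.conf fragment on the client (untested; 10.0.100.5 is the server from the flow graph below), loaded with pfctl -f /etc/pf.conf and enabled with pfctl -e:

# Hypothetical rules: drop the kernel's outbound RST and FIN toward the
# server's port so the user-level handshake survives.
block drop out quick proto tcp from any to 10.0.100.5 port 12345 flags R/R
block drop out quick proto tcp from any to 10.0.100.5 port 12345 flags F/F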
Here's a flow graph of the TCP communication (server is 10.0.100.5 and client is 10.0.100.4):
I'm trying to use Pcap.Net to open a TCP connection.
I'm sending the following packet:
The server is responding with:
After this, Windows on its own sends the reset packet:
Why is this happening, and how do I block this behavior?
I'm doing this on Windows 7
As Mr Harris says, you can use WinDivert to do what you want. E.g. to just do the TCP handshake, you can write something like the following:
// TCP handshake using WinDivert (1.x API names; the packet buffers and
// DIVERT_ADDRESS structures are assumed to be set up by the caller):
HANDLE handle = DivertOpen("inbound && tcp.SrcPort == 80 && tcp.Syn && tcp.Ack", 0, 0, 0);
// Inject the hand-crafted SYN toward the server:
DivertSend(handle, synPacket, sizeof(synPacket), &dstAddr, NULL);
...
// Grab the SYN-ACK before the Windows stack can see it (and reset it):
DivertRecv(handle, synAckPacket, sizeof(synAckPacket), &srcAddr, &length);
...
// Complete the handshake with the final ACK:
DivertSend(handle, ackPacket, sizeof(ackPacket), &dstAddr, NULL);
...
The DivertRecv() function redirects the server response into user space before it is handled by the Windows TCP/IP stack. So no pesky TCP RST will be generated. DivertSend() injects packets.
This is the main difference between WinDivert and WinPcap. The latter is merely a packet sniffer, whereas the former can intercept/filter/block traffic.
WinDivert is written in C so you'd need to write your own .NET wrapper.
(usual disclosure: WinDivert is my project).
Essentially, the problem is that Scapy runs in user space, and the Windows kernel will receive the SYN-ACK first. Your Windows kernel will send a TCP RST, because it has no socket open on the port number in question, before you have a chance to do anything with Scapy.
The typical solution (in Linux) is to firewall your kernel from sending an RST packet on that TCP port (12456) while you are running the script... the problem is that I don't think the Windows firewall allows you to be this granular (i.e. matching on TCP flags) for packet drops.
Perhaps the easiest solution is to do this under a Linux VM and use iptables to implement the RST drops.
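Such a rule might look like this hypothetical one-liner inside the VM, dropping the kernel's outbound RSTs for the source port the script uses:

# Drop any RST the kernel tries to send from the handshake's source port.
iptables -A OUTPUT -p tcp --sport 12456 --tcp-flags RST RST -j DROP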
Either by using Boring Old Winsock to make a TCP connection to the server, rather than constructing your own TCP-over-IP-over-Ethernet packets and sending them to a server, or by somehow convincing the Windows Internet protocol stack to ignore the SYN+ACK (and all subsequent packets) you get from the server. If the stack doesn't see the SYN+ACK from the server, it won't notice that no process has tried to set up a TCP connection from 192.168.1.3:12456 to 192.168.1.1:80 using the standard in-kernel networking stack (i.e., nobody's tried to set it up using Boring Old Winsock), and it won't send back an RST to tell the server that there's nobody listening at port 12456 on the machine.
You might be able to do the latter using WinDivert. It does not itself appear to have a .NET wrapper, so you might have to look for one if you're going to use .NET rather than Boring Old Unmanaged C or Boring Old Unmanaged C++.
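For completeness, the first option ("Boring Old Winsock") is just the ordinary connect sequence; a minimal sketch, with the address and port from the question assumed:

#include <winsock2.h>
#include <ws2tcpip.h>

int main(void)
{
    /* Let the in-kernel stack own the handshake: it sends the SYN, accepts
       the SYN+ACK, and answers with the ACK, so it never emits an RST. */
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(80);
    inet_pton(AF_INET, "192.168.1.1", &srv.sin_addr);   /* server from the question */
    connect(s, (struct sockaddr *)&srv, sizeof(srv));

    closesocket(s);
    WSACleanup();
    return 0;
}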
My application needs to receive UDP packets from multiple destination ports (this is a bona fide application and not a sniffer). Therefore, I have chosen to use a PF_PACKET socket and to do port filtering at the application level.
Here's how I create the socket:
/* Raw packet socket: captures all protocols at the link layer
   (requires root/CAP_NET_RAW; see <linux/if_packet.h> and <linux/if_ether.h>) */
int g_rawSocket = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
I am correctly receiving UDP packets. However, the kernel on which the application runs is sending ICMP packets of type 'Destination unreachable' and code 'Port unreachable' to the remote device that is sending packets to my app. I guess that this is because I have not bound a port number to the socket. However, I wonder if it is appropriate to use bind with a PF_PACKET socket, especially as I need to bind multiple ports to it, which I guess is not possible.
Any comments please?
No, it can't be bound to a specific port, since it operates at a lower level than the transport (UDP/TCP) layer. However, you could open one regular UDP (AF_INET/SOCK_DGRAM) socket per port and listen to all of them, using select() for example; as far as I know you can bind and listen to as many sockets as you want, as long as you don't exceed your process's limit on open file descriptors.
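A minimal sketch of that approach (the port list is hypothetical; error handling omitted):

#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define NPORTS 3

/* One ordinary UDP socket per port, multiplexed with select(). The kernel
   now owns the ports, so it stops answering with ICMP "port unreachable". */
void serve_ports(void)
{
    int ports[NPORTS] = { 5000, 5001, 5002 };   /* hypothetical ports */
    int socks[NPORTS], maxfd = -1;

    for (int i = 0; i < NPORTS; i++) {
        struct sockaddr_in a;
        memset(&a, 0, sizeof(a));
        a.sin_family      = AF_INET;
        a.sin_addr.s_addr = htonl(INADDR_ANY);
        a.sin_port        = htons(ports[i]);
        socks[i] = socket(AF_INET, SOCK_DGRAM, 0);
        bind(socks[i], (struct sockaddr *)&a, sizeof(a));
        if (socks[i] > maxfd)
            maxfd = socks[i];
    }

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        for (int i = 0; i < NPORTS; i++)
            FD_SET(socks[i], &rfds);
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            break;
        for (int i = 0; i < NPORTS; i++)
            if (FD_ISSET(socks[i], &rfds)) {
                char buf[2048];
                recv(socks[i], buf, sizeof(buf), 0);   /* handle the datagram */
            }
    }
}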
I have also done the same thing in my application.
In my case I created as many sockets as I needed and bound each one to its particular port, but I am not listening on any of them. Then I created one raw socket:
int sock_raw = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
and received all the traffic through it, without any ICMP being generated.
So I think you either have to bind all the ports to avoid the ICMP replies, or do some kernel hacking, such as stopping or removing the ICMP-generation code in the Linux kernel source and rebuilding it.
I need to close UDP socket which has unsent data immediately.
There is the SO_LINGER option for TCP sockets, but I couldn't find anything similar for UDP.
It's on Windows.
Thanks in advance.
Update 0:
To give some background on this question: my application has a 1st thread that opens/binds/closes the socket, while a 2nd thread sends datagrams through it.
In some cases, after the socket is closed (error code = 0), the bind function returns error code 10048, "Address already in use". I found out that after close() executes, the port is still in use (via the netstat command). Maybe I'm asking the wrong question and the reason for this behavior is something else?
For all application purposes, once your send() returns, the packet is "sent". There's no send buffer like in TCP, and you have no control over the NIC packet queue. A normal close() is all you need.
Edit 0:
@EJP, here's a quote from UNP for you (Section 2.11, "UDP Output"):
This time, we show the socket send buffer as a dashed box because it doesn't really exist. A UDP socket has a send buffer size (which we can change with the SO_SNDBUF socket option, Section 7.5), but this is simply an upper limit on the maximum-sized UDP datagram that can be written to the socket. If an application writes a datagram larger than the socket send buffer size, EMSGSIZE is returned. Since UDP is unreliable, it does not need to keep a copy of the application's data and does not need an actual send buffer. (The application data is normally copied into a kernel buffer of some form as it passes down the protocol stack, but this copy is discarded by the datalink layer after the data is transmitted.)
This is what I meant in my answer: you have no control over the send buffer, so "for all application purposes" it does not exist.
I was having this problem with a Windows UDP socket as well. After hours of trying everything, I finally found that my problem was that I was calling socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP) on the main thread to create the socket, calling bind(...) and recvfrom() on a worker thread, and then, after closing the worker thread, calling closesocket(...) on the main thread. None of the functions returned an error, but for some reason doing this leaves the UDP address/port combination in use (so a future call to bind() triggers error 10048 WSAEADDRINUSE, and netstat -abot -p UDP also shows the port still in use until the whole application is closed). The solution was to move the socket(...) and closesocket(...) calls into the worker thread.
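A sketch of the fixed arrangement described above (names and port are hypothetical): the socket's entire lifetime stays on the worker thread.

#include <winsock2.h>
#include <windows.h>

/* Hypothetical worker thread: create, bind, use, and close the socket here. */
DWORD WINAPI UdpWorker(LPVOID param)
{
    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);   /* created on this thread */

    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(12345);                   /* hypothetical port */
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    char buf[1500];
    recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);          /* ... receive work ... */

    closesocket(s);                                        /* closed on this thread too */
    return 0;
}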
Other than weird issues like the case above, there is normally no way for a UDP socket to be left open after calling closesocket() on it. Microsoft explains that no connection is maintained for a UDP socket and there is no need to call shutdown() or any other function first. Usually the reason a TCP socket is left open after calling closesocket() is that it wasn't disconnected gracefully and sits for about 4 minutes in the TIME_WAIT state, letting stray or retransmitted segments die off before the address is reused. In the case above, netstat showed the UDP socket never closed until the application exited, even if I waited 30+ minutes.
If you're using a wrapper around Winsock, like the .NET framework, I've also read that some features, such as setting up async callbacks, can leave a UDP socket bound and open if you don't clean up the callbacks correctly, but I don't think there is anything like that in the Win32 Winsock API itself.
Just close it. There's nothing in UDP that says that pending data will be sent, unlike TCP.