Boost asio network disconnection handling

I am using Boost Asio for my TCP server, and I am using async_read_some for reading.
The application works fine while the network is connected; normal connection closings (EOF, abrupt close) are handled correctly.
But my problem is that I get no error when the network cable is unplugged: the socket stays open, and I only get an error once I write to the socket. I understand this is simply how sockets work.
Question: can this be handled in Boost Asio by any method?

You may want to set "keep alive" on the socket (see Socket closed notification). Try:
socket.set_option(boost::asio::socket_base::keep_alive(true));
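Note that with default OS settings keep-alive can take a long time (often hours) to notice a dead link, so you usually also want to shorten the probe timing. Below is a minimal sketch, assuming an already-connected boost::asio::ip::tcp::socket and a Windows build; the SIO_KEEPALIVE_VALS timings are illustrative only, not recommendations. Once the probes go unanswered, the pending async_read_some completes with an error:

#include <boost/asio.hpp>
#ifdef _WIN32
#include <mstcpip.h>   // tcp_keepalive, SIO_KEEPALIVE_VALS
#endif

void enable_keep_alive(boost::asio::ip::tcp::socket& socket)
{
    // Turn on TCP keep-alive probing for this connection.
    socket.set_option(boost::asio::socket_base::keep_alive(true));

#ifdef _WIN32
    // Shorten the probe timing so a dead link is detected in seconds, not hours.
    tcp_keepalive ka = {};
    ka.onoff = 1;
    ka.keepalivetime = 10000;     // ms of idle time before the first probe
    ka.keepaliveinterval = 3000;  // ms between unanswered probes
    DWORD bytes = 0;
    WSAIoctl(socket.native_handle(), SIO_KEEPALIVE_VALS,
             &ka, sizeof(ka), NULL, 0, &bytes, NULL, NULL);
#endif
}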

Related

Hook the socket connection of an application on windows/macos

I have an application that, when launched, opens a socket connection to a certain server. I want to change the IP and port of that socket connection to redirect it to my proxy server, in order to capture all the packets.
I want to do this on both macOS and Windows. I know there is a function on macOS that allows modifying the memory of a process, so the question is: how do I find the socket structure in memory while the application is running, and how do I know when the socket connection is being established?
On Windows, I have heard a DLL can be used for this, but I am really lost and don't really understand it.
Any thoughts?

Windows sockets: How to immediately detect TCP RST on nonblocking connect()?

Our software (Nmap port scanner) needs to quickly determine the status of a non-blocking TCP socket connect(). We use select() to monitor a lot of sockets, and Windows is good at notifying us when one succeeds. But if the port is closed and the target sends a TCP RST, Windows will keep trying a few times before notifying the exceptfds, and the socket error is WSAECONNREFUSED as expected. Our application has its own timeout, though, and will usually mark the connection as timed-out before Windows gives up. We want to get as close as possible to the behavior of Linux, which is to notify with ECONNREFUSED immediately upon receipt of the first RST.
We have tried using the TCP_MAXRT socket option, and this works to get select() to signal us right away, but the result (for closed ports) is always WSAETIMEDOUT, which makes it impossible to distinguish closed (RST) from filtered/firewalled (network timeout), which puts us back at the original problem. Determining this distinction is a core feature of our application.
So what is the best way on Windows to find out if a non-blocking socket connect() has received a connection reset?
EDITED TO ADD: A core problem here is this line from Microsoft's documentation on the SO_ERROR socket option: "This per-socket error code is not always immediately set." If it were immediately set, we could check for it prior to the connect timeout.
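For reference, the pattern being discussed looks roughly like the sketch below: a non-blocking connect(), select() on the write/except sets, then SO_ERROR for the outcome. WSAStartup and error handling are omitted and the target address is a placeholder; the whole point of the question is that on Windows the WSAECONNREFUSED result may not appear here until the connect retries are exhausted.

#include <winsock2.h>
#include <ws2tcpip.h>

int probe_port(const sockaddr_in* target)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    u_long nonblocking = 1;
    ioctlsocket(s, FIONBIO, &nonblocking);

    connect(s, (const sockaddr*)target, (int)sizeof(*target));   // returns WSAEWOULDBLOCK

    fd_set writefds, exceptfds;
    FD_ZERO(&writefds);  FD_SET(s, &writefds);
    FD_ZERO(&exceptfds); FD_SET(s, &exceptfds);
    timeval tv = { 2, 0 };                                   // application-level timeout
    int n = select(0, NULL, &writefds, &exceptfds, &tv);

    int err = 0;
    int len = (int)sizeof(err);
    if (n > 0)
        getsockopt(s, SOL_SOCKET, SO_ERROR, (char*)&err, &len);  // outcome of the connect
    else
        err = WSAETIMEDOUT;                                  // no answer within our timeout

    closesocket(s);
    return err;   // 0 = open, WSAECONNREFUSED = closed, WSAETIMEDOUT = filtered/no response
}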

Winsock Kernel - Reusing socket returns error 0xc0000184

I am developing a WDF driver that uses the Winsock Kernel (WSK) module for networking. I have run into a problem when I try to reuse a socket. I can successfully create, bind, and connect the socket. After that I disconnect it with WskDisconnect, which completes with a success status. However, when I try to reconnect using the same socket I get error 0xc0000184 (STATUS_INVALID_DEVICE_STATE). Any ideas why I can't reuse my socket? Binding fails as well.

How to prevent Windows from sending RST packet when trying to connect to somebody via Pcap.net?

I'm trying to use Pcap.Net to open a TCP connection.
I'm sending the following packet:
The server is responding with:
After this, Windows on its own sends the reset packet:
Why is this happening, and how do I block this behavior?
I'm doing this on Windows 7.
As Mr Harris says, you can use WinDivert to do what you want. E.g. to just do the TCP handshake, you can write something like the following:
// TCP handshake using WinDivert:
// Open a handle whose filter matches the inbound SYN-ACK from port 80, so that
// packet is diverted to user space instead of reaching the Windows TCP/IP stack.
HANDLE handle = DivertOpen("inbound && tcp.SrcPort == 80 && tcp.Syn && tcp.Ack", 0, 0, 0);
// Inject the hand-crafted SYN.
DivertSend(handle, synPacket, sizeof(synPacket), dstAddr, NULL);
...
// Read the diverted SYN-ACK; the kernel never sees it, so it sends no RST.
DivertRecv(handle, synAckPacket, sizeof(synAckPacket), &srcAddr, &length);
...
// Inject the final ACK to complete the handshake.
DivertSend(handle, ackPacket, sizeof(ackPacket), dstAddr, NULL);
...
The DivertRecv() function redirects the server response into user space before it is handled by the Windows TCP/IP stack. So no pesky TCP RST will be generated. DivertSend() injects packets.
This is the main difference between WinDivert and WinPcap. The latter is merely a packet sniffer, whereas the former can intercept/filter/block traffic.
WinDivert is written in C, so you'd need to write your own .NET wrapper.
(usual disclosure: WinDivert is my project).
Essentially, the problem is that scapy runs in user space, and the Windows kernel receives the SYN-ACK first. The kernel sends a TCP RST because it has no socket open on the port number in question, before you have a chance to do anything with scapy.
The typical solution (on Linux) is to firewall your kernel so it does not send an RST on that TCP port (12456) while you are running the script. The problem is that I don't think Windows Firewall lets you be this granular (i.e. matching on TCP flags) when dropping packets.
Perhaps the easiest solution is to do this in a Linux VM and use iptables to implement the RST drops.
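For example, on Linux the usual rule drops the kernel's own outgoing RSTs for the source port the script is using (12456 is the port from the question; adjust as needed):
iptables -A OUTPUT -p tcp --tcp-flags RST RST --sport 12456 -j DROP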
Either use Boring Old Winsock to make a TCP connection to the server, rather than constructing your own TCP-over-IP-over-Ethernet packets and sending them yourself, or somehow convince the Windows Internet protocol stack to ignore the SYN+ACK (and all subsequent packets) you get from the server. If the stack never sees the SYN+ACK, it won't notice that no process has set up a TCP connection from 192.168.1.3:12456 to 192.168.1.1:80 using the standard in-kernel networking stack (i.e., nobody has tried to set it up using Boring Old Winsock), and it won't send back an RST to tell the server that nobody is listening on port 12456 on the machine.
You might be able to do the latter using WinDivert. It does not itself appear to have a .NET wrapper, so you might have to look for one if you're going to use .NET rather than Boring Old Unmanaged C or Boring Old Unmanaged C++.
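For completeness, the "Boring Old Winsock" route is just an ordinary connect(), with the kernel handling the handshake and all the ACKs for you. A minimal sketch, using the example 192.168.1.1:80 address from above, with error handling omitted:

#include <winsock2.h>
#include <ws2tcpip.h>

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    sockaddr_in server = {};
    server.sin_family = AF_INET;
    server.sin_port = htons(80);
    inet_pton(AF_INET, "192.168.1.1", &server.sin_addr);

    connect(s, (const sockaddr*)&server, sizeof(server));
    // ... send()/recv() as needed; the kernel completes the handshake itself.

    closesocket(s);
    WSACleanup();
    return 0;
}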

How to close Winsock UDP socket hard?

I need to immediately close a UDP socket that has unsent data.
There is the SO_LINGER option for TCP sockets, but I couldn't find anything equivalent for UDP.
It's on Windows.
Thanks in advance.
Update 0:
Some background for this question: in my application, the 1st thread opens/binds/closes the socket and the 2nd thread sends datagrams on it.
In some cases, after the socket is closed (error code = 0), a subsequent bind() returns error code 10048 "Address already in use". I found (via the netstat command) that the port is still in use after close() has run. Maybe I am asking the wrong question and the reason for this behavior is something else?
For all application purposes, once your send() returns, the packet is "sent". There is no send buffer like in TCP, and you have no control over the NIC packet queue. A normal close() is all you need.
Edit 0:
@EJP, here's a quote from UNP for you (Section 2.11, "UDP Output"):
This time, we show the socket send buffer as a dashed box because it doesn't really exist. A UDP socket has a send buffer size (which we can change with the SO_SNDBUF socket option, Section 7.5), but this is simply an upper limit on the maximum-sized UDP datagram that can be written to the socket. If an application writes a datagram larger than the socket send buffer size, EMSGSIZE is returned. Since UDP is unreliable, it does not need to keep a copy of the application's data and does not need an actual send buffer. (The application data is normally copied into a kernel buffer of some form as it passes down the protocol stack, but this copy is discarded by the datalink layer after the data is transmitted.)
This is what I meant in my answer: you have no control over the send buffer, so "for all application purposes" it does not exist.
I was having this problem with a Windows UDP socket as well. After hours of trying everything, I finally found that my problem was this: I was calling socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP) on the main thread to create the socket, calling bind(...) and recvfrom() on a worker thread, and then, after closing the worker thread, calling closesocket(...) on the main thread. None of the functions returned an error, but for some reason doing this leaves the UDP address/port combination in use (so a future call to bind() triggers error 10048 WSAEADDRINUSE, and netstat -abot -p UDP also shows the port still in use until the whole application is closed). The solution was to move the socket(...) and closesocket(...) calls into the worker thread.
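A minimal sketch of the workaround described above, with the socket created, bound, read from, and closed all on the same worker thread (the port number and buffer size are placeholders; error handling omitted):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <thread>

void udp_worker()
{
    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);    // created on this thread...

    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(12345);
    bind(s, (const sockaddr*)&local, sizeof(local));

    char buf[1500];
    sockaddr_in from = {};
    int fromlen = sizeof(from);
    recvfrom(s, buf, sizeof(buf), 0, (sockaddr*)&from, &fromlen);

    closesocket(s);                                         // ...and closed on this thread
}

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    std::thread worker(udp_worker);
    worker.join();
    WSACleanup();
    return 0;
}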
Other than weird issues like the case above, there is normally no way for a UDP socket to be left open after calling closesocket() on it. Microsoft explains that there is no connection maintained with a UDP socket and no need to call shutdown() or any other function. Usually the reason a TCP socket is left open after calling closesocket() is that it wasn't shut down gracefully and it sits for about 4 minutes in the TIME_WAIT state, waiting for possible additional data, before it actually closes. In the case above, netstat showed the UDP socket never closed until the application exited, even if I waited 30+ minutes.
If you're using a wrapper around Winsock such as the .NET Framework, I've also read that some features, like setting up async callbacks, can leave a UDP socket bound if you don't clean the callbacks up correctly, but I don't think there is anything like that in the Win32 Winsock API that could cause this.
Just close it. There's nothing in UDP that says that pending data will be sent, unlike TCP.
