Close network kernel socket - macOS

I'm developing a network kernel extension and trying to intercept packets; in the DataOut callback I return EJUSTRETURN to swallow the packets I'm interested in. Now I want to send the same data out, but on a different socket. To achieve this I used:
errno_t errorRet = 0;
socket_t newSocket;
errorRet = sock_socket(AF_INET, SOCK_STREAM, IPPROTO_TCP, sockectUpCallBack, cookie, &newSocket);
errorRet = sock_bind(newSocket, (struct sockaddr *)&localAddress);
errorRet = sock_connect(newSocket, (struct sockaddr *)&remoteAddress, MSG_DONTWAIT);
This works, and sock_connect() returns EINPROGRESS (36, "Operation now in progress"). Now my question is: is it possible to close the socket the packet was previously sent through?
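If you mean the socket created above with sock_socket(), the kernel socket KPI lets you tear it down directly. A minimal sketch, assuming the newSocket from the code above (whether you can or should close the application's original socket from inside a socket filter is a separate question):
#include <sys/kpi_socket.h>
/* optionally stop further I/O first */
sock_shutdown(newSocket, SHUT_RDWR);
/* release the kext-owned socket created with sock_socket() */
sock_close(newSocket);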

Related

Raw socket for directing IPv6 datagrams to the kernel

I'm looking to inject IPv6 datagrams available in user space (received through a scheme that first requires some unwrapping, which is performed in user space) into a suitable raw socket for further processing by the Linux kernel. This is fairly simple to do with IPv4 using the following code:
int fd=socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
struct sockaddr_ll sa;
memset(&sa, 0, sizeof(sa));
// iph is the IPv4 datagram unwrapped in user space and ready to be
// sent to the kernel
if (sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa)) != iplen) {
// Error processing.
}
The above injects full IPv4 packets (including the IPv4 headers), and the IPv4 payload gets processed appropriately by the Linux stack. How should the above be modified for use with IPv6 packets? The following adjustments I tried did not work:
int fd=socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL));
sa.sll_family=AF_PACKET;
sa.sll_protocol=htons(ETH_P_IPV6);
sa.sll_halen=ETH_ALEN;
sa.sll_ifindex=2; // <index of eth0>
if (sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa)) != iplen) {
// Error processing.
}
Any thoughts on why the above doesn't work with raw IPv6 datagrams? 'tcpdump ip6' does show the IPv6 packets I'm inserting, which suggests the kernel sees them! It just happens to be ignoring them as well.
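One technique worth mentioning, different from the AF_PACKET approach in the question, is to hand the unwrapped datagrams to the kernel through a TUN device: anything written to the TUN file descriptor enters the local IP stack as a received packet, for IPv4 and IPv6 alike. A rough sketch (interface naming, addressing, and routes left out; the wiring below is only an assumption, not something from the question):
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int open_tun(const char *name)            /* e.g. "tun0" */
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;  /* raw IP packets, no extra header */
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
/* after bringing the interface up: write(fd, ip6_packet, ip6_len); */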

Linux USB driver: Interrupt URBs

I suppose I actually have two separate questions, but I think that they are related enough to include them both. The context is a Linux USB device driver (not userspace).
After transmitting a request URB, how do I receive the response once my complete callback is called?
How can I use interrupt URBs for single request/response pairs, and not as actual continuous interrupt polling (as they are intended)?
So for some background: I'm working on a driver for the Microchip MCP2210, a USB-to-SPI protocol converter with GPIO (USB 2.0, datasheet here). The device advertises itself as a generic HID and exposes two interrupt endpoints (an IN and an OUT) as well as its control endpoint.
I am starting from a working (but alpha-quality) demo driver written by somebody else and kindly shared with the community. However, it is a HID driver, and the mechanism it uses to communicate with the device is very expensive: sending a 64-byte message requires allocating a 6 KiB HID report struct, and the allocation is sometimes performed in interrupt context, requiring GFP_ATOMIC. We'll be accessing this from an embedded low-memory device.
I'm new to USB drivers and still pretty green with Linux device drivers in general. However, I'm trying to convert this to a plain-jane USB driver (not HID) so I can use the less expensive interrupt URBs for my communications. Here is my code for transmitting my request. For the sake of (attempted) brevity, I'm not including the definitions of my structs, etc., but please let me know if you need more of my code. dev->cur_cmd is where I keep the command currently being processed.
/* use a local for brevity */
cmd = dev->cur_cmd;

if (cmd->state == MCP2210_CMD_STATE_NEW) {
        usb_fill_int_urb(dev->int_out_urb,
                         dev->udev,
                         usb_sndintpipe(dev->udev, dev->int_out_ep->desc.bEndpointAddress),
                         &dev->out_buffer,
                         sizeof(dev->out_buffer), /* always 64 bytes */
                         cmd->type->complete,
                         cmd,
                         dev->int_out_ep->desc.bInterval);

        ret = usb_submit_urb(dev->int_out_urb, GFP_KERNEL);
        if (ret) {
                /* snipped: handle error */
        }
        cmd->state = MCP2210_CMD_STATE_XMITED;
}
And here is my complete fn:
/* note that by "ctrl" I mean a control command, not the control endpoint */
static void ctrl_complete(struct urb *urb)
{
        struct mcp2210_device *dev = urb->context;
        struct mcp2210_command *cmd = dev->cur_cmd;
        int ret;

        if (unlikely(!cmd || !cmd->dev)) {
                printk(KERN_ERR "mcp2210: ctrl_complete called w/o valid cmd "
                                "or dev\n");
                return;
        }

        switch (cmd->state) {
        /* Time to rx the response */
        case MCP2210_CMD_STATE_XMITED:
                /* FIXME: I think that I need to check the response URB's
                 * status to find out if it was even transmitted or not */
                usb_fill_int_urb(dev->int_in_urb,
                                 dev->udev,
                                 usb_sndintpipe(dev->udev, dev->int_in_ep->desc.bEndpointAddress),
                                 &dev->in_buffer,
                                 sizeof(dev->in_buffer),
                                 cmd->type->complete,
                                 dev,
                                 dev->int_in_ep->desc.bInterval);

                ret = usb_submit_urb(dev->int_in_urb, GFP_KERNEL);
                if (ret) {
                        dev_err(&dev->udev->dev,
                                "while attempting to rx response, "
                                "usb_submit_urb returned %d\n", ret);
                        free_cur_cmd(dev);
                        return;
                }
                cmd->state = MCP2210_CMD_STATE_RXED;
                return;

        /* got response, now process it */
        case MCP2210_CMD_STATE_RXED:
                process_response(cmd);

        default:
                dev_err(&dev->udev->dev, "ctrl_complete called with unexpected state: %d", cmd->state);
                free_cur_cmd(dev);
        }
}
So am I at least close here? Secondly, both dev->int_out_ep->desc.bInterval and dev->int_in_ep->desc.bInterval are equal to 1; will this keep sending my request every 125 microseconds? And if so, how do I say "OK, thanks, now stop this interrupt"? The MCP2210 offers only one configuration and one interface, and that interface has just the two interrupt endpoints. (I know every device also has the control endpoint; I'm just not sure where it fits into the picture.)
Rather than spam this question with the full lsusb -v output, I'm going to pastebin it.
Typically, request/response communication works as follows:
Submit the response URB;
submit the request URB;
in the request completion handler, if the request was not actually sent, cancel the response URB and abort;
in the response completion handler, handle the response data.
All that asynchronous completion handler stuff is a big hassle if you have a single URB that is completed almost immediately; therefore, there is the helper function usb_interrupt_msg() which works synchronously.
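For example, a rough synchronous sketch of the request/response exchange (the function name is hypothetical, the dev->... fields are borrowed from the question's code, and usb_interrupt_msg() sleeps, so it must not be called from atomic context):
static int mcp2210_xfer_sync(struct mcp2210_device *dev)
{
        int actual = 0;
        int ret;

        /* send the 64-byte request on the interrupt OUT endpoint */
        ret = usb_interrupt_msg(dev->udev,
                        usb_sndintpipe(dev->udev, dev->int_out_ep->desc.bEndpointAddress),
                        dev->out_buffer, sizeof(dev->out_buffer), &actual,
                        1000 /* ms */);
        if (ret)
                return ret;

        /* then read the 64-byte response on the interrupt IN endpoint */
        ret = usb_interrupt_msg(dev->udev,
                        usb_rcvintpipe(dev->udev, dev->int_in_ep->desc.bEndpointAddress),
                        dev->in_buffer, sizeof(dev->in_buffer), &actual,
                        1000 /* ms */);
        return ret;
}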
URBs to be used for polling must be resubmitted (typically from the completion handler).
If you do not resubmit the URB, no polling happens.
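A minimal sketch of that resubmit pattern (hypothetical handler name; the device fields again follow the question's code):
static void poll_complete(struct urb *urb)
{
        struct mcp2210_device *dev = urb->context;

        /* a non-zero status (e.g. -ENOENT, -ECONNRESET) means the URB was
         * killed or the device went away; do not resubmit in that case */
        if (urb->status)
                return;

        /* ... consume urb->transfer_buffer here ... */

        /* resubmit to keep polling; to stop, simply do not resubmit
         * (or kill the URB with usb_kill_urb() from process context) */
        if (usb_submit_urb(urb, GFP_ATOMIC))
                dev_err(&dev->udev->dev, "failed to resubmit polling URB\n");
}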

Get response from server using the same socket?

I'm writing a small application with a client and a server: the client sends a question and the server answers.
I managed to do the first part: the server gets the question from the client, does some work, and sends back an answer. I just can't figure out how to tell the client to wait for a response from the server.
This is my client code:
char* ipAddress = (char*)malloc(15);
wcstombs(ipAddress, (TCHAR*)argv[1], 15);
DWORD port = wcstod(argv[2], _T('\0'));
DWORD numOfThreads = wcstod(argv[3], _T('\0;'));
DWORD method = wcstod(argv[4], _T('\0;'));
//initialize windows sockets service
WSADATA wsaData;
int iResult = WSAStartup(MAKEWORD(2,2), &wsaData);
assert(iResult==NO_ERROR);
//prepare server address
sockaddr_in server_addr;
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = inet_addr(ipAddress);
server_addr.sin_port = htons(port);
//create socket
SOCKET hClientSocket= socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
assert(hClientSocket!=INVALID_SOCKET);
//connect to server
int nRes=connect(hClientSocket, (SOCKADDR*)&server_addr, sizeof(server_addr));
assert(nRes!=SOCKET_ERROR);
char* buf = "GET /count.htm HTTP/1.1\r\nHost: 127.0.0.1:666\r\nAccept: text/html,application/xhtml+xml\r\nAccept-Language: en-us\r\nAccept-Encoding: gzip, deflate\r\nUser-Agent: Mozilla/5.0\r\n\r\n";
int nBytesToSend= strlen(buf);
int iPos=0;
while(nBytesToSend)
{
int nSent=send(hClientSocket,buf,nBytesToSend,0);
assert(nSent!=SOCKET_ERROR);
nBytesToSend-=nSent;
iPos+=nSent;
}
closesocket(hClientSocket);
int nLen = sizeof(server_addr);
SOCKET hRecvSocket=accept(hClientSocket,(SOCKADDR*)&server_addr, &nLen);
assert(hRecvSocket!=INVALID_SOCKET);
//prepare buffer for incoming data
char serverBuff[256];
int nLeft=sizeof(serverBuff);
iPos=0;
do //loop till there are no more data
{
int nNumBytes=recv(hRecvSocket,serverBuff+iPos,nLeft,0);
//check if the server closed the connection
if(!nNumBytes)
break;
assert(nNumBytes!=SOCKET_ERROR);
//update free space and pointer to next byte
nLeft-=nNumBytes;
iPos+=nNumBytes;
}while(1);
The assertion after the SOCKET hRecvSocket=accept(hClientSocket,(SOCKADDR*)&server_addr, &nLen); line fails.
Remove the closesocket and accept calls after your send loop. accept is for servers listening for incoming connections, not for clients that are already connected.
After your send() loop completes, go straight into your recv() loop. That should solve your immediate problem.
Also, your send loop forgets to reference iPos on the buffer as I think you intended. This is what you wanted:
int nSent=send(hClientSocket,buf+iPos,nBytesToSend,0);
In network programming, sockets will fail due to network conditions beyond your control, so asserts on network calls are not always appropriate. It is better to expect failure and be prepared to handle it. Typically, closing the socket and the active connection is the way to handle most errors.
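Putting those fixes together, a sketch of the corrected client flow might look like this (same variable names as in the question; errors left as asserts purely for brevity):
// send the whole request, advancing through the buffer with iPos
int nBytesToSend = strlen(buf);
int iPos = 0;
while (nBytesToSend)
{
    int nSent = send(hClientSocket, buf + iPos, nBytesToSend, 0);
    assert(nSent != SOCKET_ERROR);
    nBytesToSend -= nSent;
    iPos += nSent;
}
// then read the reply on the SAME socket - no closesocket/accept in between
char serverBuff[256];
int nLeft = sizeof(serverBuff);
iPos = 0;
do
{
    int nNumBytes = recv(hClientSocket, serverBuff + iPos, nLeft, 0);
    if (!nNumBytes)              // server closed the connection
        break;
    assert(nNumBytes != SOCKET_ERROR);
    nLeft -= nNumBytes;
    iPos += nNumBytes;
} while (nLeft > 0);
// close only once you are done receiving
closesocket(hClientSocket);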

In Win32, is there a way to test if a socket is non-blocking?

In Win32, is there a way to test if a socket is non-blocking?
Under POSIX systems, I'd do something like the following:
int is_non_blocking(int sock_fd) {
    int flags = fcntl(sock_fd, F_GETFL, 0);
    return flags & O_NONBLOCK;
}
However, Windows sockets don't support fcntl(). The non-blocking mode is set using ioctl with FIONBIO, but there doesn't appear to be a way to get the current non-blocking mode using ioctl.
Is there some other call on Windows that I can use to determine if the socket is currently in non-blocking mode?
A slightly longer answer would be: no, but you will usually know whether or not it is, because the behavior is relatively well-defined.
All sockets are blocking unless you explicitly ioctlsocket() them with FIONBIO or hand them to either WSAAsyncSelect or WSAEventSelect. The latter two functions "secretly" change the socket to non-blocking.
Since you know whether you have called one of those three functions, even though you cannot query the status, it is still known. The obvious exception is a socket that comes from some third-party library, where you don't know exactly what has been done to it.
Sidenote: Funnily, a socket can be blocking and overlapped at the same time, which does not immediately seem intuitive, but it kind of makes sense because they come from opposite paradigms (readiness vs completion).
Previously, you could call WSAIsBlocking to determine this. If you are managing legacy code, this may still be an option.
Otherwise, you could write a simple abstraction layer over the socket API. Since all sockets are blocking by default, you could maintain an internal flag and force all socket ops through your API so you always know the state.
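For instance, a bare-bones sketch of such a wrapper (the tracked_socket type and function names are made up for illustration):
typedef struct {
    SOCKET s;
    int non_blocking;   /* what we last requested; sockets start out blocking */
} tracked_socket;

int tracked_set_nonblocking(tracked_socket *ts, int enable)
{
    u_long mode = enable ? 1 : 0;
    if (ioctlsocket(ts->s, FIONBIO, &mode) != NO_ERROR)
        return -1;      /* leave the cached flag unchanged on failure */
    ts->non_blocking = enable;
    return 0;
}

int tracked_is_nonblocking(const tracked_socket *ts)
{
    return ts->non_blocking;    /* no OS query needed */
}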
Here is a cross-platform snippet to set/get the blocking mode, although it doesn't do exactly what you want:
/// @author Stephen Dunn
/// @date 10/12/15
bool set_blocking_mode(const int &socket, bool is_blocking)
{
    bool ret = true;

#ifdef WIN32
    /// @note windows sockets are created in blocking mode by default
    // currently on windows, there is no easy way to obtain the socket's current blocking mode since WSAIsBlocking was deprecated
    u_long flags = is_blocking ? 0 : 1;
    ret = NO_ERROR == ioctlsocket(socket, FIONBIO, &flags);
#else
    const int flags = fcntl(socket, F_GETFL, 0);
    if ((flags & O_NONBLOCK) && !is_blocking) { info("set_blocking_mode(): socket was already in non-blocking mode"); return ret; }
    if (!(flags & O_NONBLOCK) && is_blocking) { info("set_blocking_mode(): socket was already in blocking mode"); return ret; }
    ret = 0 == fcntl(socket, F_SETFL, is_blocking ? flags ^ O_NONBLOCK : flags | O_NONBLOCK);
#endif

    return ret;
}
I agree with the accepted answer: there is no official way to determine the blocking state of a socket on Windows. If you get a socket from a third party (say you are a TLS library and receive the socket from the upper layer), you cannot tell whether it is in blocking state or not.
Despite this, I have a working, unofficial, and limited solution for the problem that has worked for me for a long time.
I attempt to read 0 bytes from the socket. If it is a blocking socket, recv() returns 0; if it is non-blocking, it returns -1 and GetLastError() returns WSAEWOULDBLOCK.
int IsBlocking(SOCKET s)
{
    int r = 0;
    unsigned char b[1];

    r = recv(s, (char *)b, 0, 0);
    if (r == 0)
        return 1;
    else if (r == -1 && GetLastError() == WSAEWOULDBLOCK)
        return 0;

    return -1; /* In case it is a connection (TCP) socket that is not in a connected state, you will get error 10060 here */
}
Caveats:
Works with UDP sockets
Works with connected TCP sockets
Doesn't work with unconnected TCP sockets
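For example, a quick (hypothetical) way to exercise it on a UDP socket; per the answer above, the first call should report blocking and the second non-blocking:
SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
printf("blocking? %d\n", IsBlocking(s));   /* sockets start out blocking */

u_long on = 1;
ioctlsocket(s, FIONBIO, &on);              /* switch to non-blocking */
printf("blocking? %d\n", IsBlocking(s));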

Using a specific network interface for a socket in windows

Is there a reliable way in Windows, apart from changing the routing table, to force a newly created socket to use a specific network interface? I understand that bind() to the interface's IP address does not guarantee this.
(Ok second time lucky..)
FYI there's another question here perform connect() on specific network adapter along the same lines...
According to The Cable Guy:
Windows XP and Windows Server 2003 use the weak host model for sends and receives for all IPv4 interfaces and the strong host model for sends and receives for all IPv6 interfaces. You cannot configure this behavior. The Next Generation TCP/IP stack in Windows Vista and Windows Server 2008 supports strong host sends and receives for both IPv4 and IPv6 by default on all interfaces except the Teredo tunneling interface for a Teredo host-specific relay.
So to answer your question (properly, this time): on Windows XP and Windows Server 2003, no for IPv4 but yes for IPv6; on Windows Vista and Windows Server 2008, yes (except in certain circumstances).
Also, from http://www.codeguru.com/forum/showthread.php?t=487139:
On Windows, a call to bind() affects card selection only for incoming traffic, not outgoing traffic. Thus, on a client running in a multi-homed system (i.e., more than one interface card), it's the network stack that selects the card to use, and it makes its selection based solely on the destination IP, which in turn is based on the routing table. A call to bind() will not affect the choice of the card in any way.
It's got something to do with something called a "Weak End System" ("Weak E/S") model. Vista changed to a strong E/S model, so the issue might not arise under Vista. But all prior versions of Windows used the weak E/S model.
With a weak E/S model, it's the routing table that decides which card is used for outgoing traffic in a multihomed system.
See if these threads offer some insight:
"Local socket binding on multihomed host in Windows XP does not work" at http://www.codeguru.com/forum/showthread.php?t=452337
"How to connect a port to a specified Networkcard?" at http://www.codeguru.com/forum/showthread.php?t=451117. This thread mentions the CreateIpForwardEntry() function, which (I think) can be used to create an entry in the routing table so that all outgoing IP traffic to a specified server is routed via a specified adapter.
"Working with 2 Ethernet cards" at http://www.codeguru.com/forum/showthread.php?t=448863
"Strange bind behavior on multihomed system" at http://www.codeguru.com/forum/showthread.php?t=452368
Hope that helps!
I'm not sure why you say bind is not working reliably. Granted, I have not done exhaustive testing, but the following solution worked for me (Win10, Visual Studio 2019). I needed to send a broadcast message via a particular NIC, where multiple NICs might be present on a computer. In the snippet below, I want the broadcast message to go out on the NIC whose IP ends in .202.106.
In summary:
create a socket
create a sockaddr_in address with the IP address of the NIC you want to send FROM
bind the socket to that FROM sockaddr_in
create another sockaddr_in with the IP of your broadcast address (255.255.255.255)
do a sendto, passing the socket created in step 1 and the sockaddr of the broadcast address.
static WSADATA wsaData;
static int ServoSendPort = 8888;
static char ServoSendNetwork[] = "192.168.202.106";
static char ServoSendBroadcast[] = "192.168.255.255";
... < snip >
if ( WSAStartup(MAKEWORD(2,2), &wsaData) != NO_ERROR )
return false;
// Make a UDP socket
SOCKET ServoSendSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
int iOptVal = TRUE;
int iOptLen = sizeof(int);
int RetVal = setsockopt(ServoSendSocket, SOL_SOCKET, SO_BROADCAST, (char*)&iOptVal, iOptLen);
// Bind it to a particular interface
sockaddr_in ServoBindAddr={0};
ServoBindAddr.sin_family = AF_INET;
ServoBindAddr.sin_addr.s_addr = inet_addr( ServoSendNetwork ); // target NIC
ServoBindAddr.sin_port = htons( ServoSendPort );
int bindRetVal = bind( ServoSendSocket, (sockaddr*) &ServoBindAddr, sizeof(ServoBindAddr) );
if (bindRetVal == SOCKET_ERROR )
{
int ErrorCode = WSAGetLastError();
CString errMsg;
errMsg.Format ( _T("rats! bind() didn't work! Error code %d\n"), ErrorCode );
OutputDebugString( errMsg );
}
// now create the address to send to...
sockaddr_in ServoSendAddr={0};
ServoSendAddr.sin_family = AF_INET;
ServoSendAddr.sin_addr.s_addr = inet_addr( ServoSendBroadcast ); //
ServoSendAddr.sin_port = htons( ServoSendPort );
...
#define NUM_BYTES_SERVO_SEND 20
unsigned char sendBuf[NUM_BYTES_SERVO_SEND];
int BufLen = NUM_BYTES_SERVO_SEND;
ServoSocketStatus = sendto(ServoSendSocket, (char*)sendBuf, BufLen, 0, (SOCKADDR *) &ServoSendAddr, sizeof(ServoSendAddr));
if(ServoSocketStatus == SOCKET_ERROR)
{
ServoUdpSendBytes = WSAGetLastError();
CString message;
message.Format(_T("Error transmitting UDP message to Servo Controller: %d."), ServoSocketStatus);
OutputDebugString(message);
return false;
}
