Using a specific network interface for a socket in Windows

Is there a reliable way in Windows, apart from changing the routing table, to force a newly created socket to use a specific network interface? I understand that bind() to the interface's IP address does not guarantee this.

(Ok second time lucky..)
FYI, there's another question here, "perform connect() on specific network adapter", along the same lines...
According to The Cable Guy
Windows XP and Windows Server® 2003 use the weak host model for sends and receives for all IPv4 interfaces and the strong host model for sends and receives for all IPv6 interfaces. You cannot configure this behavior. The Next Generation TCP/IP stack in Windows Vista and Windows Server 2008 supports strong host sends and receives for both IPv4 and IPv6 by default on all interfaces except the Teredo tunneling interface for a Teredo host-specific relay.
So to answer your question (properly, this time): in Windows XP and Windows Server 2003, no for IPv4 but yes for IPv6. For Windows Vista and Windows Server 2008, yes (except in certain circumstances).
Also from http://www.codeguru.com/forum/showthread.php?t=487139
On Windows, a call to bind() affects card selection only for incoming traffic, not outgoing traffic. Thus, on a client running in a multi-homed system (i.e., more than one interface card), it's the network stack that selects the card to use, and it makes its selection based solely on the destination IP, which in turn is based on the routing table. A call to bind() will not affect the choice of the card in any way.
It's got something to do with something called a "Weak End System" ("Weak E/S") model. Vista changed to a strong E/S model, so the issue might not arise under Vista. But all prior versions of Windows used the weak E/S model.
With a weak E/S model, it's the routing table that decides which card is used for outgoing traffic in a multihomed system.
See if these threads offer some insight:
"Local socket binding on multihomed host in Windows XP does not work" at http://www.codeguru.com/forum/showthread.php?t=452337
"How to connect a port to a specified Networkcard?" at http://www.codeguru.com/forum/showthread.php?t=451117. This thread mentions the CreateIpForwardEntry() function, which (I think) can be used to create an entry in the routing table so that all outgoing IP traffic to a specified server is routed via a specified adapter (a minimal sketch follows below).
"Working with 2 Ethernet cards" at http://www.codeguru.com/forum/showthread.php?t=448863
"Strange bind behavior on multihomed system" at http://www.codeguru.com/forum/showthread.php?t=452368
Hope that helps!
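Since CreateIpForwardEntry() came up above, here is a minimal, hedged sketch of adding a host route so that traffic to one server leaves through a chosen adapter. The destination, gateway, and interface index below are placeholders you would look up yourself (e.g. via GetAdaptersInfo or route print); this only illustrates the IP Helper API call, it is not code from any of the threads linked above, and it needs administrator rights to succeed.

#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <stdio.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

int main()
{
    MIB_IPFORWARDROW route = {0};
    route.dwForwardDest    = inet_addr("203.0.113.10");    // server to reach (placeholder)
    route.dwForwardMask    = inet_addr("255.255.255.255"); // /32 host route
    route.dwForwardNextHop = inet_addr("192.168.202.1");   // gateway on the chosen NIC (placeholder)
    route.dwForwardIfIndex = 11;                           // interface index of the chosen NIC (placeholder)
    route.dwForwardProto   = MIB_IPPROTO_NETMGMT;          // required value for manually added routes
    route.dwForwardMetric1 = 1;

    DWORD err = CreateIpForwardEntry(&route);              // requires elevation
    if (err != NO_ERROR)
        printf("CreateIpForwardEntry failed: %lu\n", err);
    return 0;
}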

I'm not sure why you say bind is not working reliably. Granted I have not done exhaustive testing, but the following solution worked for me (Win10, Visual Studio 2019). I needed to send a broadcast message via a particular NIC, where multiple NICs might be present on a computer. In the snippet below, I want the broadcast message to go out on the NIC with IP of .202.106.
In summary:
create a socket
create a sockaddr_in address with the IP address of the NIC you want to send FROM
bind the socket to that FROM sockaddr_in
create another sockaddr_in with your broadcast address (255.255.255.255, or a subnet-directed broadcast such as the 192.168.255.255 used below)
do a sendto, passing the socket created in step 1 and the sockaddr of the broadcast address.
static WSADATA wsaData;
static int ServoSendPort = 8888;
static char ServoSendNetwork[] = "192.168.202.106";
static char ServoSendBroadcast[] = "192.168.255.255";
... < snip >
if (WSAStartup(MAKEWORD(2,2), &wsaData) != NO_ERROR)
    return false;
// Make a UDP socket
SOCKET ServoSendSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
int iOptVal = TRUE;
int iOptLen = sizeof(int);
int RetVal = setsockopt(ServoSendSocket, SOL_SOCKET, SO_BROADCAST, (char*)&iOptVal, iOptLen);
// Bind it to a particular interface
sockaddr_in ServoBindAddr={0};
ServoBindAddr.sin_family = AF_INET;
ServoBindAddr.sin_addr.s_addr = inet_addr( ServoSendNetwork ); // target NIC
ServoBindAddr.sin_port = htons( ServoSendPort );
int bindRetVal = bind( ServoSendSocket, (sockaddr*) &ServoBindAddr, sizeof(ServoBindAddr) );
if (bindRetVal == SOCKET_ERROR)
{
    int ErrorCode = WSAGetLastError();
    CString errMsg;
    errMsg.Format( _T("rats! bind() didn't work! Error code %d\n"), ErrorCode );
    OutputDebugString( errMsg );
}
// now create the address to send to...
sockaddr_in ServoSendAddr={0};
ServoSendAddr.sin_family = AF_INET;
ServoSendAddr.sin_addr.s_addr = inet_addr( ServoSendBroadcast ); //
ServoSendAddr.sin_port = htons( ServoSendPort );
...
#define NUM_BYTES_SERVO_SEND 20
unsigned char sendBuf[NUM_BYTES_SERVO_SEND];
int BufLen = NUM_BYTES_SERVO_SEND;
ServoSocketStatus = sendto(ServoSendSocket, (char*)sendBuf, BufLen, 0, (SOCKADDR *) &ServoSendAddr, sizeof(ServoSendAddr));
if (ServoSocketStatus == SOCKET_ERROR)
{
    int ErrorCode = WSAGetLastError();
    CString message;
    message.Format(_T("Error transmitting UDP message to Servo Controller: %d."), ErrorCode);
    OutputDebugString(message);
    return false;
}
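As a side note, if you need to discover programmatically which local IPv4 addresses are available to bind the sending socket to, here is a hedged sketch using GetAdaptersAddresses(). The buffer handling is deliberately minimal (no retry on ERROR_BUFFER_OVERFLOW), so treat it as a starting point rather than production code.

#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

// Print the friendly name and IPv4 address of each adapter.
void ListIPv4Adapters()
{
    ULONG size = 16 * 1024; // start with a 16 KB buffer; a real program retries on overflow
    PIP_ADAPTER_ADDRESSES addrs = (PIP_ADAPTER_ADDRESSES)malloc(size);
    if (GetAdaptersAddresses(AF_INET, GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST,
                             NULL, addrs, &size) == NO_ERROR)
    {
        for (PIP_ADAPTER_ADDRESSES a = addrs; a != NULL; a = a->Next)
        {
            for (PIP_ADAPTER_UNICAST_ADDRESS u = a->FirstUnicastAddress; u != NULL; u = u->Next)
            {
                char ip[INET_ADDRSTRLEN] = "";
                sockaddr_in* sin = (sockaddr_in*)u->Address.lpSockaddr;
                inet_ntop(AF_INET, &sin->sin_addr, ip, sizeof(ip));
                printf("%ls -> %s\n", a->FriendlyName, ip); // candidate address to bind() to
            }
        }
    }
    free(addrs);
}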

Related

winsock2: How to get the ipv4/ipv6 address of a connected client after server side code calls `accept()`

There are other similar questions on this site, but they either do not relate to winsock2 or they are suitable only for use with IPv4 address spaces. The default compiler for Visual Studio 2019 produces an error when the inet_ntoa() function is used, hence an IPv4 and IPv6 solution is required.
I did once produce the code to do this for a Linux system; however, I am currently at work and do not have access to it. It may or may not be "copy and paste"-able into a Windows environment with winsock2. (Edit: I will of course add that code later this evening, but of course it might not be useful.)
The following contains an example; however, it is for client-side code, not server-side code.
https://www.winsocketdotnetworkprogramming.com/winsock2programming/winsock2advancedInternet3c.html
Here, the getaddrinfo() function is used to obtain a structure containing matching ipv4 and ipv6 addresses. To obtain this information there is some interaction with DNS, which is not required in this case.
I have some server code which calls accept() (after bind and listen) to accept a client connection. I want to be able to print the client ip address and port to stdout.
The most closely related question on this site is here. However, the answer uses inet_ntoa() and is only IPv4-compatible.
What I have so far is sketched out like this:
SOCKET acceptSocket = INVALID_SOCKET;
SOCKADDR_IN addr; // both of these are NOT like standard unix sockets
// I don't know how they differ and if they can be used with standard
// unix-like function calls (eg: inet_ntop)
int addrlen = sizeof addr;
acceptSocket = accept(listenSocket, (SOCKADDR*)&addr, &addrlen);
if (acceptSocket == INVALID_SOCKET)
{
    // some stuff
}
else
{
    const std::size_t addrbuflen = INET6_ADDRSTRLEN;
    char addrbuf[addrbuflen] = {'\0'};
    inet_ntop(AF_INET, (void*)addr.sin_addr, (PSTR)addrbuf, addrbuflen);
    // above line does not compile and mixes unix style function calls
    // with winsock2 structures
    std::cout << addrbuf << ':' << addr.sin_port << std::endl;
}
getpeername()
int ret = getpeername(acceptSocket, addrbuf, &addrbuflen);
// addrbuf cannot convert from char[65] to sockaddr*
if(ret == ???)
{
// TODO
}
You need to access the SOCKADDR. This is effectively a discriminated union. The first field tells you whether it's an IPv4 (== AF_INET) or IPv6 (== AF_INET6) address. Depending on that, you cast the addr pointer to either struct sockaddr_in* or struct sockaddr_in6*, and then read off the IP address from the relevant field.
C++ code snippet in VS2019:
char* CPortListener::get_ip_str(struct sockaddr* sa, char* s, size_t maxlen)
{
    switch (sa->sa_family) {
    case AF_INET:
        inet_ntop(AF_INET, &(((struct sockaddr_in*)sa)->sin_addr),
                  s, maxlen);
        break;
    case AF_INET6:
        inet_ntop(AF_INET6, &(((struct sockaddr_in6*)sa)->sin6_addr),
                  s, maxlen);
        break;
    default:
        strncpy(s, "Unknown AF", maxlen);
        return NULL;
    }
    return s;
}
Example:
{
    ...
    char s[INET6_ADDRSTRLEN];
    sockaddr_storage ca;
    socklen_t al = sizeof(ca);
    SOCKET recv = accept(sd, (sockaddr*)&ca, &al);
    pObj->m_ip = get_ip_str(((sockaddr*)&ca), s, sizeof(s));
}
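To also print the client's port (which the question asks for), the same sockaddr_storage can be interrogated in a family-agnostic way. A small hedged sketch, with the helper name chosen here only for illustration:

// Requires the same headers as the snippet above (<winsock2.h>, <ws2tcpip.h>).
static unsigned short get_port(const struct sockaddr* sa)
{
    // sin_port / sin6_port are stored in network byte order, hence ntohs()
    switch (sa->sa_family) {
    case AF_INET:
        return ntohs(((const struct sockaddr_in*)sa)->sin_port);
    case AF_INET6:
        return ntohs(((const struct sockaddr_in6*)sa)->sin6_port);
    default:
        return 0;
    }
}
// Usage, continuing the example above:
//   std::cout << pObj->m_ip << ':' << get_port((sockaddr*)&ca) << std::endl;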

Raw socket for directing IPv6 datagrams to the kernel

I’m looking to inject IPv6 datagrams available in the user space (and received through a scheme that first requires some unwrapping that's performed in the user space) to a suitable raw socket for further processing by the Linux kernel. This is fairly simple to do with IPv4 using the following code:
int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
struct sockaddr_ll sa;
memset(&sa, 0, sizeof(sa));
// iph points to the IPv4 datagram unwrapped in the user space and ready to be
// sent to the kernel; iplen is its length
if (sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa)) != iplen) {
    // Error processing.
}
The above injects full IPv4 packets (including the IPv4 headers), and the IPv4 payload gets processed appropriately by the Linux stack. How should the above be modified for use with IPv6 packets? The following adjustments I tried did not work:
int fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL));
sa.sll_family = AF_PACKET;
sa.sll_protocol = htons(ETH_P_IPV6);
sa.sll_halen = ETH_ALEN;
sa.sll_ifindex = 2; // <index of eth0>
if (sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa)) != iplen) {
    // Error processing.
}
Any thoughts on why the above doesn't work with raw IPv6 datagrams? 'tcpdump ip6' does show the IPv6 packets I'm inserting, which suggests the kernel sees them! It just happens to be ignoring them as well.

How to correctly receive data using ZeroMQ?

I have two machines on the same network.
The first machine binds a socket to its own IP address (120.0.0.1) and receives any data arriving on the port 5555 it .bind()-ed to:
zmq::context_t context{1};
zmq::socket_t socket{context, ZMQ_SUB};
socket.setsockopt(ZMQ_SUBSCRIBE, "lidar");
socket.bind("tcp://120.0.0.1:5555");
while (true)
{
    zmq::message_t message;
    auto recv = socket.recv(message);
    ROS_INFO("Value: %d", recv.value());
}
The second machine, having an IP address 120.0.0.248, connects to the first machine and sends the messages to it:
sock.connect("tcp://120.0.0.1:5555");
while (1) {
    double nodes[8192];
    sock.send(zmq::buffer("lidar"), zmq::send_flags::sndmore);
    sock.send(zmq::buffer(nodes, (((int)(count)) * 8)));
}
But for some reason, I cannot receive any messages on the first machine and it gets stuck on auto recv = socket.recv(message);.
What is a correct way for such communication?

Why does pcap_sendpacket fail on Thunderbolt interface?

In a multi platform project I am using pcap to get a list of all network interfaces, open each (user cannot select which interfaces to use) and send/receive packets (Ethernet type 0x88e1/HomePlugAV) on each. This works fine on Windows and on Mac OS X, but sometimes on Mac OS X pcap_sendpacket fails after some time on the interface that networksetup -listallhardwareports lists as "Hardware Port: Thunderbolt 1". The error is:
send: No buffer space available
When the program is run after the machine has booted, it takes some time until the error occurs. Once the error has occurred and I stop my program, the error occurs immediately when I restart the program without rebooting the machine.
ifconfig -v en9:
en9: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 index 8
eflags=80<TXSTART>
options=60<TSO4,TSO6>
ether b2:00:1e:94:9b:c1
media: autoselect <full-duplex>
status: inactive
type: Ethernet
scheduler: QFQ
networksetup -listallhardwareports (only the relevant parts):
Hardware Port: Thunderbolt 1
Device: en9
Ethernet Address: b2:00:1e:94:9b:c1
Tests show that on OS X 10.9 the interface is not up initially, but on OS X 10.9.2 and 10.9.3 the interface is up and running after booting.
On OS X 10.9 ifconfig initially says:
en5: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500 index 8
After ifconfig en5 up the problematic behavior is the same on OS X 10.9.
Why does pcap_sendpacket fail on the Thunderbolt adapter?
How can my program detect that this is a troubling interface before opening it? I know I could open the interface and try to send one packet, but I'd prefer to do a clean detection beforehand.
As a workaround, you can ignore the "Thunderbolt 1" interface:
#include <stdio.h>
#include <string.h>
#include <pcap/pcap.h>
#include <CoreFoundation/CoreFoundation.h>
#include <SystemConfiguration/SCNetworkConfiguration.h>

const char thunderbolt[] = "Thunderbolt 1";

// Build with -framework CoreFoundation -framework SystemConfiguration
int main(int argc, char * argv[])
{
    // See: https://opensource.apple.com/source/configd/configd-596.13/SystemConfiguration.fproj/SCNetworkInterface.c
    // get Ethernet, Firewire, Thunderbolt, and AirPort interfaces
    CFArrayRef niArrayRef = SCNetworkInterfaceCopyAll();
    // Find the Thunderbolt interface
    char thunderboltInterface[4] = "";
    if (niArrayRef) {
        CFIndex cnt = CFArrayGetCount(niArrayRef);
        for (CFIndex idx = 0; idx < cnt; ++idx) {
            SCNetworkInterfaceRef tSCNetworkInterfaceRef = (SCNetworkInterfaceRef)CFArrayGetValueAtIndex(niArrayRef, idx);
            if (tSCNetworkInterfaceRef) {
                CFStringRef BSDName = SCNetworkInterfaceGetBSDName(tSCNetworkInterfaceRef);
                const char * interfaceName = (BSDName == NULL) ? "none" : CFStringGetCStringPtr(BSDName, kCFStringEncodingUTF8);
                CFStringRef localizedDisplayName = SCNetworkInterfaceGetLocalizedDisplayName(tSCNetworkInterfaceRef);
                const char * interfaceType = (localizedDisplayName == NULL) ? "none" : CFStringGetCStringPtr(localizedDisplayName, kCFStringEncodingUTF8);
                printf("%s : %s\n", interfaceName, interfaceType);
                if (strcmp(interfaceType, thunderbolt) == 0) {
                    // Make a copy this time
                    CFStringGetCString(BSDName, thunderboltInterface, sizeof(thunderboltInterface), kCFStringEncodingUTF8);
                }
            }
        }
    }
    printf("%s => %s\n", thunderbolt, thunderboltInterface);
    if (niArrayRef)
        CFRelease(niArrayRef);
    return 0;
}
I'm guessing from
When the program is run after the machine has booted, it takes some time until the error occurs. Once the error has occurred and I stop my program, the error occurs immediately when I restart the program without rebooting the machine.
that what's probably happening here is that the interface isn't active, so packets given to it to send aren't transmitted (and the mbuf(s) for them freed), and aren't discarded, but are, instead, just left in the interface's queue to be transmitted. Eventually either the queue fills up or an attempt to allocate some resource for the packet fails, and the interface's driver returns an ENOBUFS error.
This is arguably an OS X bug.
From
In a multi platform project I am using pcap to get a list of all network interfaces, open each (user cannot select which interfaces to use) and send/receive packets (Ethernet type 0x88e1/HomePlugAV) on each.
I suspect you aren't sending on all interfaces; not all interfaces have a link-layer header type that has an Ethernet type field - for example, lo0 doesn't.
If you're constructing Ethernet packets, you would only want to send on interfaces with a link-layer header type (as returned by pcap_datalink()) of DLT_EN10MB ("10MB" is a historical artifact; it refers to all Ethernet types except for the old experimental 3MB Xerox Ethernet, which had a different link-layer header).
You probably also don't want to bother with interfaces that aren't "active" in some sense (some sense other than "is up"); unfortunately, there's no platform-independent API to determine that, so you're going to have to fall back on #ifdefs here. That would probably rule out interfaces where the packets would pile up unsent and eventually cause an ENOBUFS error.
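A minimal sketch of that filtering idea, assuming plain libpcap only (device handling kept deliberately simple): enumerate the capture devices, open each one, and keep it only when pcap_datalink() reports DLT_EN10MB.

#include <stdio.h>
#include <pcap/pcap.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_if_t *alldevs, *d;

    if (pcap_findalldevs(&alldevs, errbuf) == -1) {
        fprintf(stderr, "pcap_findalldevs: %s\n", errbuf);
        return 1;
    }
    for (d = alldevs; d != NULL; d = d->next) {
        pcap_t *p = pcap_open_live(d->name, 65535, 0, 100, errbuf);
        if (p == NULL)
            continue; // could not open; skip it
        if (pcap_datalink(p) == DLT_EN10MB)
            printf("%s: Ethernet link-layer header, candidate for sending\n", d->name);
        else
            printf("%s: not Ethernet, skipping\n", d->name);
        pcap_close(p);
    }
    pcap_freealldevs(alldevs);
    return 0;
}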

Winsock returns 10061 on connect only to localhost

I don't understand what's happening. If I create a socket to anywhere other than the local machine, it works fine; connecting to the local machine itself (tried as "localhost", "127.0.0.1", and the external IP of the machine) fails with 10061.
If I create a socket to an address with nothing listening on that port, I get a 10060 (timeout) but not a 10061, which makes sense. So why am I getting connection refused when going to localhost?
I tried disabling the firewall just in case it was messing things up, but that is not it.
I am doing all the WSA initialization stuff before this.
_socketToServer = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (_socketToServer == -1) {
    return false;
}
p_int = (int*)malloc(sizeof(int));
*p_int = 1;
if ((setsockopt(_socketToServer, SOL_SOCKET, SO_REUSEADDR,
                (char*)p_int, sizeof(int)) == -1) ||
    (setsockopt(_socketToServer, SOL_SOCKET, SO_KEEPALIVE,
                (char*)p_int, sizeof(int)) == -1)) {
    free(p_int);
    return false;
}
free(p_int);
struct sockaddr_in my_addr;
my_addr.sin_family = AF_INET;
my_addr.sin_port = htons(_serverPort);
memset(&(my_addr.sin_zero), 0, 8);
my_addr.sin_addr.s_addr = inet_addr(_serverIP);
if (connect(_socketToServer, (struct sockaddr*)&my_addr, sizeof(my_addr))
        == SOCKET_ERROR) {
    DWORD error = GetLastError(); // here is where I get the 10061
    return false;
}
Any ideas?
You are not guaranteed to get a WSAETIMEDOUT error when connecting to a non-listening port on another machine. Any number of different errors can occur. However, a WSAETIMEDOUT error typically only occurs if the socket cannot reach the target machine on the network before connect() times out. If it can reach the target machine, a WSAECONNREFUSED error means the target machine is acknowledging the connect() request and replying that the requested port cannot accept the connection at that time, because either it is not listening or its backlog is full (there is no way to differentiate which).
So, when you connect to localhost, you will pretty much always get a WSAECONNREFUSED error when connecting to a non-listening port, because you are connecting to the same machine and there is no delay in determining the port's listening status. It has nothing to do with firewalls or anti-malware. This is just normal behavior.
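To make the distinction concrete, here is a small hedged sketch (not the asker's code) that checks WSAGetLastError() after a failed connect() and branches on the two codes discussed above:

#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

// Call this right after connect() returns SOCKET_ERROR.
static void report_connect_error(void)
{
    int err = WSAGetLastError();
    switch (err) {
    case WSAECONNREFUSED: // 10061: machine reached, but port not accepting connections
        printf("Connection refused (10061): nothing listening, or backlog full\n");
        break;
    case WSAETIMEDOUT:    // 10060: target machine could not be reached before the timeout
        printf("Connection timed out (10060)\n");
        break;
    default:
        printf("connect() failed with error %d\n", err);
        break;
    }
}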
