Let's say I have Windows 7 with one real network interface and a few loopback interfaces.
I have an IOCP-enabled server that accepts connections from clients.
I'm trying to simulate real client connections to the server as closely as possible.
My client code simply establishes X socket connections
(note that the client binds to a given interface):
const Int32 remotePort = 12345;
const Int32 MaxSockets = 60000;
Socket[] s = new Socket[MaxSockets];
// Bind to the local IP given on the command line; port 0 lets the stack pick an ephemeral port.
IPEndPoint bindEndpoint = new IPEndPoint(IPAddress.Parse(args[0]), 0);
for (Int32 i = 0; i < MaxSockets; i++)
{
    s[i] = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    s[i].SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    s[i].Bind(bindEndpoint);
    s[i].Connect(args[1], remotePort);
    IPEndPoint socketInfo = (IPEndPoint)s[i].LocalEndPoint;
    Console.WriteLine("Connected socket {0} {1} : {2}", i, socketInfo.Address, socketInfo.Port);
}
On a loopback interface I have several IPs that I use for binding.
In addition, I also bind on the real interface.
I ran into a problem when the number of open sockets reaches around 64K per machine:
Unhandled Exception: System.Net.Sockets.SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full
I've tried several things, none of which helped:
- setting MaxUserPort to its maximum value and some other recommended TCP/IP registry settings (a sketch of the registry change is shown right after this list);
- running two servers on different interfaces (real and loopback) and using several clients.
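For reference, the MaxUserPort change boils down to writing a DWORD under the Tcpip\Parameters key; a minimal C sketch (the helper name set_max_user_port is mine, the maximum value accepted is 65534, and a reboot is required before the enlarged dynamic port range takes effect):

#include <windows.h>
#include <stdio.h>

/* Sketch: raise MaxUserPort to its maximum (65534). Requires administrator
   rights, and a reboot before the enlarged dynamic port range takes effect. */
int set_max_user_port(void)
{
    HKEY key;
    DWORD value = 65534; /* maximum value accepted for MaxUserPort */
    LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                            "SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters",
                            0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS) {
        printf("RegOpenKeyExA failed: %ld\n", rc);
        return -1;
    }
    rc = RegSetValueExA(key, "MaxUserPort", 0, REG_DWORD,
                        (const BYTE *)&value, sizeof(value));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : -1;
}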
Is this a known limitation in Windows, or is it possible to overcome it somehow?
Thanks for the help!
I found the following on a Microsoft page:
... HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort
registry subkey is defined as the maximum port up to which ports may be allocated for wildcard binds. The value of the MaxUserPort registry entry defines the dynamic port range...
So, if I force the endpoint to use a specific port, e.g.
IPEndPoint bindEndpoint = new IPEndPoint(IPAddress.Parse(args[0]), 54321);
then I can open more than 64K simultaneous sockets on the system.
In your code example, you are calling Bind(bindEndpoint), but you do not show how bindEndpoint is defined. Check that:
- your system actually has multiple IP addresses (loopback does not count);
- you are actually setting the IP address of the endpoint to a real IP address (not loopback);
- the binds are being spread across multiple IP addresses.
The loopback address does not count because many systems treat it specially for routing and binding purposes, so binding to ports on loopback may be consuming ports across all addresses, the same as if you were binding to INADDR_ANY (0.0.0.0).
Both TCP and UDP use an unsigned 16-bit integer to designate port number. I don't imagine any implementation in any operating system is going to be able to open more than 65535 sockets per bound address at best. Additionally, I wouldn't be surprised if Windows doesn't implement fully isolated state tables for each adapter or each bound address but instead relies on a global state table. If that is the case, it would be a Windows network architecture limit instead of a soft, configurable limit.
I've developed a load-testing tool.
Running on Windows 10 with 16 GB of RAM, it could create 60,000 connections to the server successfully.
But when it tries to create more connections, the tool soon reports "socket WinError 10055 No Buffer Space Available".
According to this article, I think the limitation is the overall socket buffer size of the whole OS, not the number of open file handles.
Related
I'm developing an application with several serial ports. Each of these ports is handled by a different thread and has its own QSerialPort object. From a hardware point of view, they are connected hierarchically: there is one main device connected to the PC with a USB cable (1 COM port), and several other devices connected to this main device, each of them having its own COM port. The main device can turn the power supply to these child ports on/off.
In the application, the ports are handled asynchronously. Each device object runs in its own loop. If its port is open, it reads the incoming data. If the port is closed, it tries to open it on every loop iteration until it succeeds. Each QSerialPort object handles errors via the errorOccurred signal. If it receives a DeviceNotFoundError, PermissionError, or ResourceError, the port is closed (if it was open) and the looping continues as described above.
The problem is that this serial-communication part of the application is crashing (segmentation fault). I have spent days looking for the issue but with no results so far. To better understand what is going on, I wanted to ask here. Could it be a problem for QSerialPort if the main device turns off the power supply for the child ports while they are open and working? Or if the power supply is turned off while the child ports are being opened/closed or any other operation is being executed on them? (I don't want to include the specific executable code, as it's part of a bigger application and it would be hard to make a runnable example from it. I'd like to discuss just the concepts described above, if possible.)
Thanks for any help or ideas!
UPDATE
Creating the object that owns the QSerialPort and moving it to a different thread:
QThread *t = new QThread(this);
SomeObject *o = new SomeObject(this);
o->moveToThread(t);
t->start();
Later in the SomeObject:
QSerialPort *port = new QSerialPort();
Try to destroy the port object before powering off, and recreate it after powering on:
QSerialPort *port = new QSerialPort();
//init and use
//....
delete port;
port = nullptr;
//turn the power off
//turn the power on
port = new QSerialPort();
Is there a way, through the Windows API, to determine which is the primary/default network adapter?
For example, if I have a PC with two network cards, I need to know which one is used by the system to access internet, similarly if I have a network adapter and a virtual adapter.
I tried GetAdaptersAddresses, but it doesn't show which one is preferred; maybe GetBestInterface would?
How about using GetAdaptersInfo and looking for an IP range that satisfies your requirement?
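If "primary" means the adapter the system would use to reach the Internet, GetBestInterface can map a destination address to an interface index, which you can then match against the GetAdaptersInfo list. A minimal sketch (the function name print_default_adapter is mine, and 8.8.8.8 is just an arbitrary public destination; link against iphlpapi.lib):

#include <winsock2.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>

#pragma comment(lib, "iphlpapi.lib")

/* Sketch: ask the routing table which interface would be used to reach
   8.8.8.8, then print the matching adapter from GetAdaptersInfo. */
int print_default_adapter(void)
{
    DWORD ifIndex = 0;
    IPAddr dest = 0x08080808; /* 8.8.8.8 (all bytes equal, so byte order is moot) */
    if (GetBestInterface(dest, &ifIndex) != NO_ERROR)
        return -1;

    ULONG size = 0;
    GetAdaptersInfo(NULL, &size); /* first call just reports the required buffer size */
    IP_ADAPTER_INFO *info = (IP_ADAPTER_INFO *)malloc(size);
    if (info != NULL && GetAdaptersInfo(info, &size) == NO_ERROR) {
        for (IP_ADAPTER_INFO *a = info; a != NULL; a = a->Next) {
            if (a->Index == ifIndex) {
                printf("Default adapter: %s (%s)\n",
                       a->Description, a->IpAddressList.IpAddress.String);
                break;
            }
        }
    }
    free(info);
    return 0;
}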
Alternatively, I came across this (WMI):
https://msdn.microsoft.com/en-us/library/windows/desktop/aa394216(v=vs.85).aspx
~snip:
Once you have done so, you will likely have reduced your list to one or two configured adapters.
You can also use the following procedure to find the default adapter:
1. Run the following query: "SELECT InterfaceIndex, Destination FROM Win32_IP4RouteTable WHERE Destination='0.0.0.0'". You should only have one default network destination 0.0.0.0.
2. Use the InterfaceIndex to retrieve the Network Adapter you want: "SELECT * FROM Win32_NetworkAdapter WHERE InterfaceIndex=" + insertVariableHere
Here's a CodeProject article claiming to determine the default:
http://www.codeproject.com/Articles/13421/Getting-the-Physical-MAC-address-of-a-Network-Inte
Getting the Physical (MAC) address of a Network Interface Card and finding out if it is the primary adapter on a multi-homed system
Finding out if the adapter with the given index is the primary adapter
In order to find out if the adapter with the given index is the primary adapter, I had to add a function to the dialog class CNetCfgDlg. This code iterates over the m_pAdapters array, comparing the given adapter index with the index for each adapter in the array. If the given adapter index is equal to the smallest index of all adapters in the array, then it is the primary adapter.
One more thing to consider: there's the 'Automatic Metric' setting for each adapter, and the adapter with the lowest metric appears to be the preferred one (although I'm not sure how to access this metric setting programmatically):
http://www.softminer.net/2011/09/setting-default-network-adapter-in.html
This SO answer explains how to determine the local IP address that is used to connect to the Internet (e.g. to Google's DNS servers); you can then compare this local IP address with the list returned by GetAdaptersAddresses to determine which network card was used for Internet access.
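A minimal sketch of that trick (the helper name is mine; connecting a UDP socket does not actually send any packets, it just makes the stack choose a route and a local address, which getsockname then reports):

#include <winsock2.h>
#include <ws2tcpip.h>

#pragma comment(lib, "ws2_32.lib")

/* Sketch: "connect" a UDP socket to a public address (no packet is sent)
   and ask which local address the stack picked for that route. */
int get_outgoing_local_ip(char *buf, size_t buflen)
{
    WSADATA wsa;
    int rc = -1;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return -1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in remote = {0};
    remote.sin_family = AF_INET;
    remote.sin_port = htons(53);                     /* arbitrary port */
    inet_pton(AF_INET, "8.8.8.8", &remote.sin_addr); /* arbitrary public IP */

    if (s != INVALID_SOCKET &&
        connect(s, (struct sockaddr *)&remote, sizeof(remote)) == 0) {
        struct sockaddr_in local;
        int len = sizeof(local);
        if (getsockname(s, (struct sockaddr *)&local, &len) == 0) {
            inet_ntop(AF_INET, &local.sin_addr, buf, buflen);
            rc = 0;
        }
    }
    closesocket(s);
    WSACleanup();
    return rc;
}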
While trying to learn more about Linux kernel networking, I wrote a kernel module that contains a protocol which runs on top of TCP; it's almost an application-layer protocol I'm experimenting with. The calls are passed in via the normal system-call interface as executed from userspace.
So network calls from within my (layer-above-TCP) module generally look like this:
ret = sock->ops->connect(sock, (struct sockaddr *) &myprot.daddr,
sizeof(myprot.daddr), flags);
I've used sendmsg/recvmsg successfully within my KM to send and receive data from a client to a server (running on two separate kernel instances). The calls within the KM generally look as follows:
ret = sock->ops->sendmsg(iocb, myprot.skt, &msg, sizeof(struct msghdr));
ret = sock->ops->recvmsg(iocb, sock, msg, total_len, flags);
What I'm trying to understand now is how and when to use sk_buff to do the same thing, i.e. when to use calls such as those above, and when to access the network stack directly via sk_buff to send and receive data.
I've found many examples of how to send and receive data from within transport layers using sk_buff, but nothing from a layer above the transport that is also contained in a kernel module and using sk_buff.
Update for clarification.
I've overridden struct proto_ops and replaced the member methods for my own protocol's use; these do correspond to system calls from user space. I do understand that sk_buff is the buffer system for the kernel and is where packets are enqueued. However, I don't see any reason why I can't use the protocol-specific functions of struct proto_ops, which also handle sockets and the data enqueued on them (though at a higher level). So it seems to me there are two ways to access sk_buffs, depending upon where one wants to access them.
If I'm working in the transport layer and want to access data anywhere within the network stack (e.g. transport, IP, MAC), I could access sk_buffs directly; but if I am working above the transport layer, I would use the abstracted protocol-specific member functions that correspond to system calls. After all, they both eventually work on sk_buffs.
I guess my confusion, or what I'm trying to confirm I'm doing right or wrong by understanding the difference between these two ways to access sk_buffs and from where, is this: if I'm sending data over a transport such as TCP from within the kernel, then I can just make use of the proto_ops calls that relate to TCP, unless I need more control, in which case I would make use of the lower-level skb functions to manage the queues.
I'm not sure I understand, because you want to use two different things for the same purpose. The proto_ops in sock->ops are the operations invoked during the corresponding system call. The sk_buff is the socket buffer system of the kernel; it is where packets are enqueued.
You cannot do the same thing with sk_buff that you do with proto_ops; if that were possible, one of these structures would be useless.
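As an illustration of the proto_ops-level path from inside a kernel module, the kernel also provides the kernel_sendmsg()/kernel_recvmsg() helpers, which wrap sock->ops->sendmsg()/recvmsg() and build the iterator over a kernel-space kvec for you. A rough sketch (the function name is mine, and details such as the msghdr setup vary between kernel versions):

#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>

/* Sketch: send a kernel-space buffer over an already-connected struct socket
 * using the kernel_sendmsg() helper instead of calling sock->ops->sendmsg()
 * directly. Error handling is minimal. */
static int myprot_send(struct socket *sock, void *buf, size_t len)
{
    struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
    struct kvec vec = { .iov_base = buf, .iov_len = len };

    /* kernel_sendmsg() sets up the iterator over the kvec, so the buffer
     * may live in kernel memory rather than userspace. */
    return kernel_sendmsg(sock, &msg, &vec, 1, len);
}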
I have a computer which is connected to external devices via serial communication (i.e. RS-232/RS-422 over physical or emulated serial ports). They communicate with each other through frequent data exchange (30 Hz), but with only small data packets (less than 16 bytes per packet).
The most critical requirement of the communication is low latency or delay between transmitting and receiving.
The data exchange pattern is handshake-like. One host device initiates communication and keeps sending notifications to a client device. The client device needs to reply to every notification from the host device as quickly as possible (this is exactly where the low latency needs to be achieved). The data packets of notifications and replies are well defined; namely, the data length is known.
And basically data loss is not allowed.
I have used the following common Win32 API functions to do the I/O reads/writes in a synchronous manner:
CreateFile, ReadFile, WriteFile
A client device uses ReadFile to read data from a host device. Once the client has read the complete data packet, whose length is known, it uses WriteFile to reply to the host device with the corresponding data packet. The reads and writes are always sequential, without concurrency.
Somehow the communication is not fast enough; the time between sending and receiving data is too long. I guess it could be a problem with serial-port buffering or interrupts.
Here I summarize some possible actions to reduce the delay.
Please give me some suggestions and corrections :)
- call CreateFile with the FILE_FLAG_NO_BUFFERING flag? I am not sure if this flag is relevant in this context.
- call FlushFileBuffers after each WriteFile? Or any other action that can notify/interrupt the serial port to transmit data immediately?
- set a higher priority for the thread and process handling the serial communication (see the sketch after this list)
- set the latency timer or transfer size for emulated devices (via their driver). But what about a physical serial port?
- is there anything on Windows equivalent to setserial/low_latency under Linux?
- disable the FIFO?
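For the priority item above, a minimal sketch of what that could look like (whether it helps depends on whether scheduling delay is actually part of the problem; REALTIME_PRIORITY_CLASS needs administrator rights, so HIGH_PRIORITY_CLASS is used here):

#include <windows.h>

/* Sketch: boost the process and the calling (serial-handling) thread so the
   reply to a notification is scheduled with as little delay as possible. */
void boost_serial_thread(void)
{
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}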
thanks in advance!
I solved this in my case by setting the comm timeouts to {MAXDWORD,0,0,0,0}.
After years of struggling with this, on this very day I finally was able to make my serial-comms terminal thingy fast enough with Microsoft's CDC-class USB UART driver (USBSER.SYS, which is now built into Windows 10, making it actually usable).
Apparently the aforementioned set of values is a special value that sets minimal timeouts as well as minimal latency (at least with the Microsoft driver, or so it seems to me anyway) and also causes ReadFile to return immediately if no new characters are in the receive buffer.
Here's my code (Visual C++ 2008, project character set changed from "Unicode" to "Not set" to avoid the LPCWSTR type-cast problem with portname) to open the port:
#include <windows.h>
#include <stdio.h>
#include <string.h>

static HANDLE port=0;
static COMMTIMEOUTS originalTimeouts;
static bool ComSetParams(HANDLE port,int baud); // defined below

static bool OpenComPort(char* p,int targetSpeed) { // e.g. OpenComPort ("COM7",115200);
    char portname[16];
    sprintf(portname,"\\\\.\\%s",p);
    port=CreateFile(portname,GENERIC_READ|GENERIC_WRITE,0,0,OPEN_EXISTING,0,0);
    if(port==INVALID_HANDLE_VALUE) { // CreateFile returns INVALID_HANDLE_VALUE on failure, not 0
        printf("COM port is not valid: %s\n",portname);
        return false;
    }
    if(!GetCommTimeouts(port,&originalTimeouts)) {
        printf("Cannot get comm timeouts\n");
        return false;
    }
    COMMTIMEOUTS newTimeouts={MAXDWORD,0,0,0,0}; // the "magic" low-latency timeouts
    SetCommTimeouts(port,&newTimeouts);
    if(!ComSetParams(port,targetSpeed)) {
        SetCommTimeouts(port,&originalTimeouts);
        CloseHandle(port);
        printf("Failed to set COM parameters\n");
        return false;
    }
    printf("Successfully set COM parameters\n");
    return true;
}
static bool ComSetParams(HANDLE port,int baud) {
    DCB dcb;
    memset(&dcb,0,sizeof(dcb));
    dcb.DCBlength=sizeof(dcb);
    dcb.BaudRate=baud;
    dcb.fBinary=1;
    dcb.Parity=NOPARITY;
    dcb.StopBits=ONESTOPBIT;
    dcb.ByteSize=8;
    return SetCommState(port,&dcb)!=0;
}
And here's a USB trace of it working. Please note the OUT transactions (output bytes) followed by IN transactions (input bytes) and then more OUT transactions (output bytes), all within 3 milliseconds.
And finally, since you are reading this, you might be interested to see my function that sends and receives characters over the UART:
unsigned char outbuf[16384];
unsigned char inbuf[16384];
unsigned char *inLast = inbuf;
unsigned char *inP = inbuf;
unsigned long bytesWritten;
unsigned long bytesReceived;
// Read character from UART and while doing that, send keypresses to UART.
unsigned char vgetc() {
    while (inP >= inLast) { // My input buffer is empty, try to read from UART
        while (_kbhit()) { // If keyboard input is available, send it to UART
            outbuf[0] = _getch(); // Get keyboard character
            WriteFile(port,outbuf,1,&bytesWritten,NULL); // send keychar to UART
        }
        ReadFile(port,inbuf,1024,&bytesReceived,NULL);
        inP = inbuf;
        inLast = &inbuf[bytesReceived];
    }
    return *inP++;
}
Large transfers are handled elsewhere in code.
On a final note, apparently this is the first fast UART code I've managed to write since abandoning DOS in 1998. O, doest the time fly when thou art having fun.
This is where I found the relevant information: http://www.egmont.com.pl/addi-data/instrukcje/standard_driver.pdf
I have experienced a similar problem with a serial port.
In my case I resolved the problem by decreasing the latency of the serial port.
You can change the latency of every port (which by default is set to 16 ms) using the Control Panel.
You can find the method here:
http://www.chipkin.com/reducing-latency-on-com-ports/
Good Luck!!!
I am working on a problem in an SNMP extension agent on Windows, which passes traps to snmp.exe via the SnmpExtensionTrap callback.
We added a couple of fields to the agent recently, and I am starting to see that some traps are getting lost. When I intercept the call in the debugger and reduce the length of some strings, the same traps that would otherwise have been lost go through.
I cannot seem to find any reference to a size limit on the data passed via SnmpExtensionTrap. Does anyone know of one?
I would expect the trap size to be limited by the UDP packet size, since SNMP runs over the datagram-oriented UDP protocol.
The maximum size of a UDP packet is 64 KB, but you'll have to take into account the SNMP overhead plus any limitations of the transport you're running over (e.g. Ethernet).