Socket Buffer Size Tweaking - Guest OS versus Host OS - Windows

I have server-to-client communication taking place in my application that utilizes UDP multicasting. A necessary "tweak" that I have had to make in this setup is to increase the receive buffer size on the client.
In Windows, I have been achieving this by modifying the registry key on the client (receiver):
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Afd\Parameters]
DefaultReceiveWindow = (32-bit DWORD of desired size)
This has worked very well, greatly reducing/minimizing UDP datagram loss.
Now I am attempting to install a client application and make the same change on a Windows VM (guest) running on a Windows host. Due to a lack of permissions, so far I have only been able to modify this registry setting inside the guest OS, and it does not seem to be working as I'm used to - I still encounter a lot of datagram loss, as if the change did nothing.
Is it safe to assume that this receive buffer size change needs to be made on both the guest and the host OS for the intended benefit to occur? It seems like that would be the case, but - understandably - I am meeting some organizational resistance to making such a change on the host, as there are other (unrelated) guest OSes that could also be affected.
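For reference, a per-socket alternative that does not require registry (or host) changes is setsockopt() with SO_RCVBUF on the receiving socket; whether it helps inside the VM is a separate question, but it is easy to try. A minimal Winsock sketch (the 4 MB figure is just an example):

    // Minimal sketch: enlarge the per-socket receive buffer before joining the
    // multicast group. Error handling trimmed for brevity.
    #include <winsock2.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        int desired = 4 * 1024 * 1024;   // example: request a 4 MB receive buffer
        setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (const char*)&desired, sizeof(desired));

        int actual = 0, len = sizeof(actual);
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char*)&actual, &len);
        printf("SO_RCVBUF is now %d bytes\n", actual);   // shows what the stack actually granted

        closesocket(sock);
        WSACleanup();
        return 0;
    }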

Is it possible to detect which interface is connected to device with known IP without changing interface network?

My knowledge of networking is very basic so please bear with what I am sure are some very fundamental questions.
Is it possible to determine which interface/adapter (on a machine with multiple physical and virtual interfaces/adapters) is directly connected to a device with a known static IPv4 address without changing the network (i.e. the first 3 octets of its IPv4 address) of the interface/adapter beforehand?
I am asking this because I am trying to automate the setting of static IPv4 addresses on interfaces/adapters across multiple machines so that they can communicate with the device, and I want it to work even if the machines do not all use the same interface/adapter.
My original idea was to brute-force it by:
1. Getting the device's IPv4 address as user input.
2. Getting all physical interfaces/adapters connected to an "Unidentified network".
3. Saving their previous network settings somewhere, then looping through them and configuring each to use an appropriate static IPv4 address based on the input from step 1.
4. Attempting to ping the device for a specified duration; if a timeout occurs, reverting the interface/adapter's network settings to the original ones, otherwise keeping the new settings and breaking out of the loop.
My current worry is that the above process may inadvertently cause original network settings to be lost if some unpredictable crash were to occur during it.
So it would be ideal if I could skip steps 2 and 3 and immediately detect which interface/adapter needs to have its IP address changed. But based on what I have read, this does not seem possible at all: if you could already detect the device, there would be no need to set a static IP address on the interface/adapter in the first place.
I've also seen that using the ARP cache would work if the first-time connection on each machine is done manually and the appropriate entry is then set to persistent so that it survives reboots, but I would ideally want even first-time connections to be automated if possible.
Can anyone provide any insight as to whether I am attempting the impossible?
Or is there a better way to achieve what I want?
I have seen some recommendations to use Wireshark to sniff for incoming ARP packets, but that would require installing it on all the machines I would be deploying my automated approach on, which may not be feasible.
For context, I am automating this on Windows 10 and would prefer using PowerShell/Bash/Python.
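On the ARP-cache idea: once a machine has talked to the device at least once, GetIpNetTable() from the IP Helper API will report which interface index holds an ARP entry for the device's address, without reconfiguring anything; as noted above, that does not help with a fully automated first-time connection. A C++ sketch for illustration (the device address is a placeholder):

    // Sketch: find which interface (by index) has an ARP entry for the device.
    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <stdio.h>
    #include <vector>
    #pragma comment(lib, "iphlpapi.lib")
    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        DWORD target = inet_addr("192.168.0.216");      // placeholder: the device's known IPv4 address

        ULONG size = 0;
        GetIpNetTable(NULL, &size, FALSE);              // first call only asks for the required buffer size
        std::vector<unsigned char> buf(size);
        PMIB_IPNETTABLE table = (PMIB_IPNETTABLE)buf.data();

        if (GetIpNetTable(table, &size, FALSE) == NO_ERROR) {
            for (DWORD i = 0; i < table->dwNumEntries; ++i) {
                if (table->table[i].dwAddr == target)
                    printf("ARP entry found on interface index %lu\n", table->table[i].dwIndex);
            }
        }
        WSACleanup();
        return 0;
    }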
Edit 1
Thanks to the link provided in the comment, I have gained a better understanding of IPv4 terminology.
So I guess I can reformulate my question.
Device
    IPv4: 192.168.0.216/24
    Network: 192.168.0.0
Machine
    First interface/adapter
        IPv4: 169.254.19.133/16
        Network: 169.254.0.0
    Second interface/adapter
        IPv4: 169.254.27.245/16
        Network: 169.254.0.0
If there is already a physical connection between the second interface/adapter and the device, is there a way to detect (not communicate with) the device without having to change the network of the second interface/adapter beforehand?
Or is brute-forcing it the only way to achieve my goal?
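If brute-forcing does turn out to be necessary, the risk of losing the original settings can be reduced by adding a temporary second address instead of replacing the existing one: AddIPAddress() from the IP Helper API attaches an extra address to an interface, and DeleteIPAddress() removes it again, leaving the original configuration untouched. A hedged C++ sketch using the example addresses above (the temporary address and the interface index are placeholders):

    // Sketch: temporarily add 192.168.0.50/24 (placeholder) to interface index 12
    // (placeholder), ping the device, then remove the temporary address again.
    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <icmpapi.h>
    #include <stdio.h>
    #include <vector>
    #pragma comment(lib, "iphlpapi.lib")
    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        IPAddr tempAddr = inet_addr("192.168.0.50");    // placeholder address on the device's network
        IPAddr tempMask = inet_addr("255.255.255.0");
        IPAddr device   = inet_addr("192.168.0.216");   // the device from the example above
        DWORD  ifIndex  = 12;                           // placeholder interface index

        ULONG nteContext = 0, nteInstance = 0;
        if (AddIPAddress(tempAddr, tempMask, ifIndex, &nteContext, &nteInstance) != NO_ERROR) {
            printf("AddIPAddress failed\n");
            return 1;
        }

        HANDLE hIcmp = IcmpCreateFile();
        char payload[] = "probe";
        std::vector<char> reply(sizeof(ICMP_ECHO_REPLY) + sizeof(payload));
        DWORD got = IcmpSendEcho(hIcmp, device, payload, (WORD)sizeof(payload),
                                 NULL, reply.data(), (DWORD)reply.size(), 1000 /* ms */);
        printf(got ? "Device answered on this interface\n" : "No reply - try the next interface\n");
        IcmpCloseHandle(hIcmp);

        DeleteIPAddress(nteContext);                    // always revert the temporary address
        WSACleanup();
        return 0;
    }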

Receiving UDP on XP vs. Win7

I am using a simple UDP receiver written in C++. I upgraded one of my machines to Windows 7, and this line is now getting held up because the UDP stream is not getting through to the running executable:
iResult = recv(sock, RxBuf, buffsize, 0);
The recv function just hangs. I have used Wireshark to make sure the UDP stream is active and correct, but I don't know what the problem is.
Any help would be appreciated.
(the UDP stream is broadcast)
Unless you have set sock to non-blocking, recv() will block until data is received. So if the program is blocking there, it's probably because it isn't receiving any datagrams.
A lot changed in Windows networking between XP and 7, so here are some things to check:
Check your bind() statement. Make sure you are really binding the port you think you are and that you are checking for errors (see the sketch after this list).
Simply turning off the firewall in Windows does not completely disable it. There are many components, especially on Vista and later, which are active all the time.
When you first run an executable, Windows Vista and later will ask you to confirm that it should have network access. If you click anything other than "Allow", then the path to that executable may be blocked forever. Adding an "Allow" rule does not override this block. To unblock it, you must turn the firewall back on, and dig down to "Windows Firewall with Advanced Security" to delete the offending rules from both Incoming and Outgoing. You might be amazed at what can build up in there.
You probably need to add an Incoming firewall rule for the UDP port that you are listening on, even if the firewall is turned off.
Other things to try: disable any anti-virus software, run your listener as Admin, and use Wireshark or another packet sniffer to make sure the packets really are reaching the machine.
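A minimal, self-contained receiver that binds explicitly, checks each call, and times out instead of blocking forever can help separate "nothing is arriving" from "the socket is set up wrong"; port 5000 is just an example, substitute the port your sender uses:

    // Minimal UDP receiver: bind to all interfaces, report errors, and time out
    // instead of blocking forever so you can tell "nothing arrived" from "recv hung".
    #include <winsock2.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        sockaddr_in local = {};
        local.sin_family = AF_INET;
        local.sin_port = htons(5000);                   // example port - use your sender's port
        local.sin_addr.s_addr = htonl(INADDR_ANY);      // listen on all interfaces

        if (bind(sock, (sockaddr*)&local, sizeof(local)) == SOCKET_ERROR) {
            printf("bind failed: %d\n", WSAGetLastError());
            return 1;
        }

        DWORD timeoutMs = 5000;                         // fail fast instead of blocking forever
        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (const char*)&timeoutMs, sizeof(timeoutMs));

        char buf[1500];
        int n = recv(sock, buf, sizeof(buf), 0);
        if (n == SOCKET_ERROR)
            printf("recv failed or timed out: %d\n", WSAGetLastError());
        else
            printf("received %d bytes\n", n);

        closesocket(sock);
        WSACleanup();
        return 0;
    }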

IPsec in Linux kernel - how to figure out what's going on

I'm writing an IPsec implementation for a microcontroller and I want to test it using a standard Linux box running Debian Lenny. Both devices should secure the communication between them using IPsec ESP in tunnel mode. The keys are set up manually using setkey. There is no (or at least should be no) user-space program involved in processing an IPsec packet. Now I want to see how the packets I create are processed by the Linux kernel. To see the raw packets I capture them using tcpdump and analyze them using Wireshark.
What's the best way to obtain debug information about IPsec processing?
How can I figure out whether the packet is accepted by the kernel?
How can I view the reason for a packet to be dropped?
You can instrument the XFRM code (or perhaps net/ipv4/esp4.c) in the kernel to print debug messages at the right spots.
For example, net/ipv4/esp4.c contains a function esp_input() with some error cases, but you'll see that most of the interesting stuff is in the net/xfrm/*.c code.
That said, I didn't have a problem getting a custom IPsec implementation to interoperate with Linux. Following the RFC 43xx specs and verifying that the packets came out correctly via Wireshark seemed to work well. If you're having issues and don't want to instrument the kernel, you can set up iptables rules and count the number of packets (of various types) at each point.
Finally, be sure you have actually added a security policy (SP) as well as a security association (SA), and that your firewall rules are set up properly.
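One more data point, assuming the kernel was built with CONFIG_XFRM_STATISTICS: the counters in /proc/net/xfrm_stat (XfrmInNoStates, XfrmInStateProtoError, and so on) are incremented when inbound IPsec packets are dropped, which often tells you why without instrumenting anything. Reading the file (or simply cat-ing it) before and after a test packet shows which counter moved; a trivial sketch:

    // Print the XFRM drop counters; run before and after sending a test packet
    // and diff the output to see which counter moved.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main()
    {
        std::ifstream stats("/proc/net/xfrm_stat");     // requires CONFIG_XFRM_STATISTICS
        if (!stats) {
            std::cerr << "xfrm statistics not available in this kernel\n";
            return 1;
        }
        std::string line;
        while (std::getline(stats, line))
            std::cout << line << '\n';
        return 0;
    }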

How to speed up UDP communications in Windows XP applications

I am doing some maintenance on software and have a problem that I do not understand.
The application was developed with Microsoft Visual C++ 6 and runs on Windows XP. It consists of 21 applications that communicate with each other via UDP sockets. It is a simulation of an embedded avionics system, used to debug the system in a PC environment. Each of the applications simulates a node in the embedded system, and the embedded network is simulated over UDP. The system originally ran on multiple PCs but can now run on a single quad-core machine.
The system works, but the communication is annoyingly slow. However, opening Internet Explorer and visiting a web site or two sets something that causes my applications to suddenly communicate very fast with each other.
So my question is: what did Internet Explorer set when visiting a web site, so that my application can set it too? None of the original authors of the system are still around and I have very little Windows programming experience.
It might not be a Windows problem after all:
Check your API usage, check your buffers, and check for error messages with GetLastError(); even when a call doesn't return INVALID (-1) and stop the program, and your program seems to run perfectly, it may have useful warnings.
Check the acknowledgement (ACK) behaviour; it's a common issue when transferring large amounts of data over network connections (90% chance this is your problem). Here is a useful article on that subject: support.microsoft.com/kb/823764
If neither of those helps, check your driver version against the manufacturer's website.
As a last resort, some ideas:
. Use the program at www.lvllord.de to increase the maximum number of half-open connections from 8 to 50.
. Using a server edition of Windows can boost some internet-based programs.
. Using the sockets API from multiple threads can confuse things if you use more than two sockets on different threads in a multithreaded application; try to optimize performance by using async sockets or something similar (msdn.microsoft.com/en-us/library/ms738551(VS.85).aspx).
Regarding what Internet Explorer might have set: it may be ACK-related (check it on Wikipedia); if not, it will be the receive window size. Both settings are invisible to users, but can be set via programs such as TuneUp Utilities or other general tools for hidden network settings. It might just do the trick.
If the protocol above UDP implements reliability, the speed loss will be due to massive UDP packet loss on localhost. UDP performance on localhost is terrible; your best bet is to wrap the socket API inside a TCP layer.
If it's UDP broadcast or multicast, you will have to look at implementing a broker process to multiplex the messages over TCP.
It might be easier to look at existing messaging APIs that work well at the interprocess level, such as ZeroMQ.
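To make that last suggestion concrete, here is a rough sketch of a ZeroMQ publisher over local TCP using the libzmq C API; the endpoint, port, and message are placeholders, and each simulated node could publish like this while subscribing to the others:

    // Rough sketch of a ZeroMQ publisher on the loopback interface.
    // Subscribers would zmq_connect() to the same endpoint and set ZMQ_SUBSCRIBE.
    #include <zmq.h>
    #include <string.h>

    int main()
    {
        void* ctx = zmq_ctx_new();
        void* pub = zmq_socket(ctx, ZMQ_PUB);
        zmq_bind(pub, "tcp://127.0.0.1:5556");          // placeholder local endpoint

        const char msg[] = "node7:heartbeat";           // placeholder message
        zmq_send(pub, msg, strlen(msg), 0);

        zmq_close(pub);
        zmq_ctx_destroy(ctx);
        return 0;
    }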
Try using Wireshark to see what Internet Explorer is doing.

Sockets vs named pipes for local IPC on Windows?

Are there any reasons for favoring named pipes over sockets for local IPC (both using the Win API) - effectiveness-wise, resource-wise, or otherwise - given that both behave very much alike (and are likely to be abstracted by a similar interface anyway), in an application that is likely to already use sockets for network purposes anyway?
I can name at least the addressing issue: port numbers for sockets versus filenames for pipes. Also, named pipes (AFAIK) won't alert the firewall (block/unblock dialog), although blocked applications can still communicate via sockets locally. Anything else to take into account?
In the case of using sockets, are there any Winsock settings/flags that are recommended when using sockets locally?
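For concreteness on the addressing difference: a local named pipe server end looks roughly like the sketch below (the pipe name is just an example), and a client opens the same name with CreateFile and then uses ReadFile/WriteFile:

    // Sketch of a named pipe server end for local IPC.
    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        HANDLE pipe = CreateNamedPipeA(
            "\\\\.\\pipe\\MyLocalIpcPipe",              // example pipe name
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            1,                                          // max instances
            4096, 4096,                                 // out/in buffer sizes
            0,                                          // default timeout
            NULL);                                      // default security attributes
        if (pipe == INVALID_HANDLE_VALUE) {
            printf("CreateNamedPipe failed: %lu\n", GetLastError());
            return 1;
        }

        if (ConnectNamedPipe(pipe, NULL)) {             // wait for a client to connect
            char buf[256];
            DWORD read = 0;
            if (ReadFile(pipe, buf, sizeof(buf), &read, NULL))
                printf("got %lu bytes from the client\n", read);
        }
        CloseHandle(pipe);
        return 0;
    }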
Some subtle differences:
Sockets won't work for local IPC if you don't have a functioning adapter. How common is it to have a PC without a functioning adapter? Well, I got bitten when somebody tried to demonstrate our software to a customer on a laptop that was not plugged in to a network or a power supply (so the OS disabled the network card to save power) and the wireless adapter was disabled (because the laptop user didn't use wireless). You can get around this by installing a loopback adapter but that's not ideal.
Firewall software can cause problems with establishing TCP/IP connections. It's not supposed to be an issue for local IPC, but I'm not convinced. Named pipes can have firewalls too.
You may have issues due to the privileges needed to create named pipes, or to create new instances of named pipes. For instance, I was running multiple servers using the same named pipe (probably not a good idea, but this was for testing) and some failed in CreateNamedPipe because the first server to create the pipe was running in Administrator mode (because it was launched from Visual Studio in Administrator mode) while the rest were launched from the command line with normal UAC level.
Although the article mentioned by Rubens is mostly about IPC over a network, it does make the point that "Local named pipes runs in kernel mode and is extremely fast".
Another solution you may want to consider is a named shared memory region. It is a little bit of work because you have to establish a flow control protocol yourself, but I have used this successfully in the past where speed was the most important thing.
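A bare-bones sketch of that last option, using a named file mapping backed by the paging file (the name and size are examples); a second process would open the same name with OpenFileMapping and map the same view, and the flow-control protocol mentioned above is still up to you:

    // Sketch: create a named shared-memory region and write a message into it.
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main()
    {
        const DWORD size = 4096;                        // example region size
        HANDLE mapping = CreateFileMappingA(
            INVALID_HANDLE_VALUE,                       // backed by the system paging file
            NULL, PAGE_READWRITE,
            0, size,
            "Local\\MySharedRegion");                   // example name
        if (!mapping) {
            printf("CreateFileMapping failed: %lu\n", GetLastError());
            return 1;
        }

        char* view = (char*)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, size);
        if (view) {
            strcpy_s(view, size, "hello from process A");   // readers still need your own flow-control protocol
            UnmapViewOfFile(view);
        }
        CloseHandle(mapping);
        return 0;
    }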
