How to Monitor the UDP Buffers in Windows XP

I’m trying to optimize the communication in a large system which is based on UDP.
By optimizing, I mean minimizing packet loss.
(Yes, I know the inherent limitations of UDP; please don't suggest another protocol.)
We have several .exe files, each with several threads, and we use setsockopt with SO_SNDBUF and SO_RCVBUF to increase the buffers.
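Roughly like this (a minimal sketch; the 4 MB figure is only an example, and the stack may grant less than requested):

    // Minimal Winsock sketch (WSAStartup assumed done elsewhere).
    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    bool enlarge_buffers(SOCKET s, int bytes)   // e.g. bytes = 4 * 1024 * 1024
    {
        // Request bigger send/receive buffers from the stack.
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                       reinterpret_cast<const char*>(&bytes), sizeof(bytes)) != 0)
            return false;
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       reinterpret_cast<const char*>(&bytes), sizeof(bytes)) != 0)
            return false;

        // Read the receive buffer size back: the stack may grant less
        // than requested, so never assume the requested size took effect.
        int granted = 0;
        int len = sizeof(granted);
        getsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   reinterpret_cast<char*>(&granted), &len);
        return granted >= bytes;
    }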
We have found that setting very large buffers for many sockets degrades overall performance (more packet loss).
But how can we monitor the effect of these increased buffer sizes? Especially on the receive side, I would like to see whether any port has messages discarded due to lack of buffer space.
Please suggest how this can be done (WinDbg in user or kernel mode, special tools, or writing something ourselves).
EDIT:
@EdChum:
I have already used Wireshark, and yes, it's painful to correlate the packets on the wire with the packets received by the application. And I have seen several occasions where the packet is on the wire (captured by Wireshark) but not received by the application.
The lost packet is usually a small packet to a multicast destination, sent with very little time gap after a big unicast packet. The receiver of the unicast loses the multicast, but the others receive it.
My suspicion is that XP sometimes suffers from buffer starvation somewhere in the NDIS or IP layers and therefore silently drops packets. If there is a counter somewhere, I could get this confirmed.

Not sure how to do this using WinDbg, but I would use either NetMon or Wireshark to monitor the packets and see if any are being discarded. It will be painful, depending on how easy the problem is to reproduce, and you will need to learn how to filter the packets so that the display shows only what you are interested in, but the help for both of those apps is very useful.
You have to listen on a physical interface, not the loopback address, in order to monitor the packets.
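As for the counter asked about in the EDIT: Windows does keep system-wide UDP statistics. netstat -s shows them under "UDP Statistics", and the "Receive Errors" line counts received datagrams that could not be delivered, which typically includes drops caused by a full socket receive buffer. You can poll the same counters programmatically through the IP Helper API; a minimal sketch (these counters are system-wide, and as far as I know XP has no per-port equivalent):

    // System-wide UDP counters via the IP Helper API (available on XP).
    // dwInErrors counts received datagrams that could not be delivered
    // for reasons other than "no listener on that port"; a full socket
    // receive buffer is the usual cause. dwNoPorts is the no-listener case.
    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <cstdio>
    #pragma comment(lib, "iphlpapi.lib")

    void dump_udp_stats()
    {
        MIB_UDPSTATS stats = {};
        if (GetUdpStatistics(&stats) == NO_ERROR)
            std::printf("in=%lu noports=%lu inerrors=%lu out=%lu\n",
                        stats.dwInDatagrams, stats.dwNoPorts,
                        stats.dwInErrors, stats.dwOutDatagrams);
    }

Sample it before and after reproducing the loss; if dwInErrors climbs while Wireshark shows the packet on the wire, the datagram was dropped inside the stack rather than on the network.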

Related

ZeroMQ reliability on bad links

I am wondering how ZeroMQ behaves if messages are delivered over a bad-quality link, e.g. a very unstable, low-level serial connection which might drop individual bytes.
Of course, in such a case the affected message will be lost, but will ZeroMQ be able to recover with the next message? Will it find the start of the next message again in every case?
Thank you!
Connection reliability is mostly the responsibility of the TCP protocol: if a socket believes it's connected, then the message is getting through. If packets are lost, then TCP detects that and attempts to retransmit them (see here for more info). This all happens "for free" as far as ZMQ is concerned; any connection type using TCP will behave the same way.
When the TCP connection is lost, which, presumably, could occur if the connection is very unreliable and the message never gets through after repeated attempts by TCP, then ZMQ adds another, separate layer of reliability on top of that, allowing your application to reconnect.
What happens with the original or subsequent messages during this outage depends on the ZMQ socket type you've chosen. Some socket types drop messages, some socket types queue them. If the message was already in transit, it may be lost because the sending socket has relinquished control over it.
Generally, if you want absolute reliability in message delivery, you'll be writing that yourself in your application, with your own confirmations that messages have been received. In most cases something less than total reliability is needed, and you'll just rely on TCP and ZMQ to get the job mostly done. If you're so focused on performance that even TCP's reliability will slow you down too much, and you'd rather discard lost data and move on, you'll need to use UDP. I've heard of people using UDP with ZMQ, but I haven't tried it and I don't believe it's fully supported across the board.
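As an illustration of rolling your own confirmations, here is a minimal sketch using REQ/REP, where the reply itself serves as the acknowledgment (libzmq C API; the endpoint address is made up):

    // Application-level acknowledgment sketch with the libzmq C API.
    // REQ/REP enforces a strict send/receive lockstep, so the reply
    // doubles as the "ack". The endpoint address is hypothetical.
    #include <zmq.h>
    #include <cstdio>

    int main()
    {
        void* ctx = zmq_ctx_new();
        void* req = zmq_socket(ctx, ZMQ_REQ);

        int timeout_ms = 1000;   // stop waiting for the ack after 1 s
        zmq_setsockopt(req, ZMQ_RCVTIMEO, &timeout_ms, sizeof(timeout_ms));
        zmq_connect(req, "tcp://127.0.0.1:5555");

        const char msg[] = "position-update-42";
        zmq_send(req, msg, sizeof(msg), 0);

        char ack[16];
        if (zmq_recv(req, ack, sizeof(ack), 0) == -1)
            std::printf("no ack within 1 s: treat the message as lost\n");

        zmq_close(req);
        zmq_ctx_destroy(ctx);
        return 0;
    }

Note that a plain REQ socket that misses a reply is stuck in its lockstep, so in practice you would recreate the socket (or use ZMQ_REQ_RELAXED where available) before resending.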

TCP Retransmit and TCPCopy when using loopback device on Windows 7

I have two programs running on the same Windows 7 System which connect via TCP. The server transmits unencoded VGA resolution images to the client in regular intervals.
The problem is that, from time to time, the transmission speed goes down by a factor of ~10 and stays that way for a while, or until the client process is restarted.
I used the Sysinternals Process Monitor to get some insight into what is going on.
When the transmission speed is reduced, I can see that, following an initial TCP Send event on the server side, I eventually (after a couple of receive/send pairs) get a number of TCPCopy events on the client side, followed by a ~300 ms pause in which no TCP events are recorded, followed by a TCP Retransmit event on the server side. I only get those TCPCopy events and the Retransmit event when the speed is reduced.
I tried to find out what the TCPCopy event is all about but did not find a lot on the internet.
I have two questions:
What is the TCPCopy event?
What does the TCPCopy event and the Retransmit event tell me about the problems in the TCP connection?
TCPCopy events are sometimes caused by antivirus software. In many reports I saw on the web, deactivating the antivirus software fixed the issue, especially with ESET NOD32. Please try deactivating your antivirus software on both the server and the client side and check again.

How many SNMP packets/sec can Windows Server 2003/2008/2012 handle?

We are monitoring more than 400 devices via SNMP; there is no limitation on the number of nodes to monitor, as we are licensed for unlimited nodes.
The problem is that alarms are malfunctioning. The monitoring software team told us that Windows servers cannot handle more than 100 SNMP packets per second. Is that true?
Windows does not process the SNMP packets; it only hands them over to the monitoring software, just like any other network packet. To say that Windows cannot handle 100 SNMP packets per second is to say that Windows cannot handle 100 packets of any kind per second.
That does not mean it is impossible for Windows to be the weakest link, but there are other more likely bottlenecks:
Your server hardware (mostly CPU and the network interface).
Your network (cabling, routers, switches, VPN connections, proxies, etc.).
The devices you are monitoring. Devices like IP phones and printers do not have a lot of processing power and may not be able to keep up with the SNMP requests from the server.
The monitoring software itself.

WebSockets, UDP, and benchmarks

HTML5 WebSockets currently use a form of TCP communication. However, for real-time games, TCP just won't cut it (which is a great reason to use some other platform, like native). As I probably need UDP to continue my project, I'd like to know whether the specs for HTML6, or whatever comes next, will support UDP.
Also, are there any reliable benchmarks for WebSockets that would compare the WS protocol to a low-level, direct socket protocol?
On a LAN, you can get round-trip times for messages over WebSocket of 200 microseconds (from browser JS to WebSocket server and back), which is similar to raw ICMP pings. On a MAN it's around 10 ms, on a WAN (over residential ADSL to a server in the same country) around 30 ms, and so on up to around 120-200 ms via 3.5G. The point is: WebSocket adds virtually no latency on top of what the network gives you anyway.
The wire-level overhead of WebSocket (compared to raw TCP) is between 2 octets (unmasked payload shorter than 126 octets) and 14 octets (masked payload longer than 64 KiB) per message; these numbers assume the message is not fragmented into multiple WebSocket frames. Very low.
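Those numbers follow directly from the RFC 6455 framing rules: a 2-octet base header, plus 2 or 8 octets of extended payload length for larger messages, plus 4 octets of masking key on client-to-server frames. A small helper function (just an illustration) makes the arithmetic concrete:

    // Per-frame WebSocket header size, per the RFC 6455 framing rules.
    // masked = true for client-to-server frames (clients must mask).
    #include <cstddef>
    #include <cstdint>

    std::size_t ws_header_octets(std::uint64_t payload_len, bool masked)
    {
        std::size_t n = 2;                       // FIN/opcode + mask bit/7-bit length
        if (payload_len > 65535)     n += 8;     // 64-bit extended length
        else if (payload_len >= 126) n += 2;     // 16-bit extended length
        if (masked)                  n += 4;     // masking key
        return n;   // 2 (unmasked, short) up to 14 (masked, > 64 KiB)
    }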
For a more detailed analysis of WebSocket wire-level overhead, please see this blog post - this includes analysis covering layers beyond WebSocket also.
More than that: with a WebSocket implementation capable of streaming processing, you can (after the initial WebSocket handshake) start a single WebSocket message and frame in each direction and then send up to 2^63 octets with no overhead at all. Essentially this renders WebSocket a fancy prelude for raw TCP. Caveat: intermediaries may fragment the traffic at their own discretion. However, if you run WSS (that is, secure WS = TLS), no intermediaries can interfere, and there you are: raw TCP with an HTTP-compatible prelude (the WS handshake).
WebRTC uses RTP (which is UDP-based) for media transport, but it needs a signaling channel in addition (which can be WebSocket, for example). RTP is optimized for loss-tolerant real-time media transport. "Real-time games" often means transferring not media but things like player positions. WebSocket will work for that.
Note: WebRTC transport can run over plain RTP or be secured via SRTP. See "RTP profiles" here.
I would recommend developing your game using WebSockets on a local wired network and then moving to the WebRTC Data Channel API once it is available. As @oberstet correctly notes, WebSocket average latencies are basically equivalent to raw TCP or UDP, especially on a local network, so it should be fine for your development phase. The WebRTC Data Channel API is designed to be very similar to WebSockets (once the connection is established), so it should be fairly simple to integrate once it is widely available.
Your question implies that UDP is probably what you want for a low-latency game, and there is truth to that. You may be aware of this already since you are writing a game, but for those who aren't, here is a quick primer on TCP vs. UDP for real-time games:
TCP is an in-order, reliable transport mechanism, while UDP is best-effort. TCP will deliver all the data that is sent, in the order it was sent. UDP packets are delivered as they arrive, may be out of order, and may have gaps (on a congested network, UDP packets are dropped before TCP packets). TCP sounds like a big improvement, and it is for most types of network traffic, but those features come at a cost: a delayed or dropped packet delays all the following packets as well (to guarantee in-order delivery).
Real-time games generally can't tolerate the kind of delays that can result from TCP sockets, so they use UDP for most of the game traffic and have mechanisms to deal with dropped and out-of-order data (e.g. adding sequence numbers to the payload data). It's not such a big deal if you miss one position update of the enemy player, because a couple of milliseconds later you will receive another one (and probably won't even notice). But if you don't get position updates for 500 ms and then suddenly get them all at once, that results in terrible gameplay.
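To make the sequence-number idea concrete, here is a minimal receiver-side sketch (the packet layout is made up for the example):

    // Receiver-side sketch: drop stale UDP updates via a sequence number.
    // The packet layout is hypothetical; real games pack this tighter.
    #include <cstdint>

    struct PositionUpdate {
        std::uint32_t seq;   // sender increments this for every datagram
        float x, y, z;       // entity position
    };

    struct EntityState {
        std::uint32_t last_seq = 0;
        float x = 0, y = 0, z = 0;
    };

    // Returns true if the update was applied, false if it was stale.
    bool apply_update(EntityState& e, const PositionUpdate& u)
    {
        // Signed difference handles wraparound of the 32-bit counter.
        if (static_cast<std::int32_t>(u.seq - e.last_seq) <= 0)
            return false;    // same as or older than what we have: drop it
        e.last_seq = u.seq;
        e.x = u.x; e.y = u.y; e.z = u.z;
        return true;
    }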
All that said, on a local wired network, packets are almost never delayed or dropped and so TCP is perfectly fine as an initial development target. Once the WebRTC Data Channel API is available then you might consider moving to that. The current proposal has configurable reliability based on retries or timers.
Here are some references:
WebRTC Introduction
WebRTC FAQ
WebRTC Data Channel Proposal
To make a long story short, if you want to use TCP for multiplayer games, you need to use what we call adaptive streaming techniques. In other words, you need to make sure that the amount of real-time data sent to synchronize the game world among the clients is governed by the currently available bandwidth and latency for each client.
Dynamic throttling, conflation, delta delivery, and other such mechanisms are adaptive streaming techniques. They don't magically make TCP as efficient as UDP, but they make it usable enough for several types of games.
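As a concrete illustration of conflation (a sketch of the general idea, not any particular product's implementation): when the link can't keep up, you overwrite the pending update for an entity instead of queueing another one, so a slow client always gets the latest state rather than a growing backlog.

    // Conflation sketch: keep only the newest pending update per entity,
    // so a slow client receives the latest state, not a growing backlog.
    #include <cstdint>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct Update { float x, y, z; };

    class ConflatingQueue {
    public:
        // Called at the game-tick rate: overwrites any pending update.
        void push(std::uint32_t entity_id, const Update& u) {
            pending_[entity_id] = u;
        }

        // Called only when the socket is writable: drains what is left.
        std::vector<std::pair<std::uint32_t, Update>> drain() {
            std::vector<std::pair<std::uint32_t, Update>> out(
                pending_.begin(), pending_.end());
            pending_.clear();
            return out;
        }

    private:
        std::unordered_map<std::uint32_t, Update> pending_;
    };

Draining only when the TCP socket is writable is what ties the send rate to the bandwidth actually available to each client.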
I tried to explain these techniques in an article: Optimizing Multiplayer 3D Game Synchronization Over the Web (http://blog.lightstreamer.com/2013/10/optimizing-multiplayer-3d-game.html).
I also gave a talk on this topic last month at HTML5 Developer Conference in San Francisco. The video has just been made available on YouTube: http://www.youtube.com/watch?v=cSEx3mhsoHg
There's no UDP support for WebSockets (there really should be), but you can apparently use WebRTC's RTCDataChannel API for UDP-like communication. There's a good article here:
http://www.html5rocks.com/en/tutorials/webrtc/datachannels/
RTCDataChannel actually uses SCTP, which has configurable reliability and ordered delivery. You can make it act like UDP by telling it to deliver messages unordered and setting the maximum number of retransmits to 0.
I haven't tried any of this though.
I'd like to know if the specs for HTML6 or whatever will support UDP?
WebSockets won't. One of the benefits of WebSockets is that they piggyback on the existing HTTP connection. This means that to proxies and firewalls WebSockets look like HTTP, so they don't get blocked.
It's likely that arbitrary UDP connections will never be part of any web specification because of security concerns. The closest thing to what you're after will likely come as part of WebRTC and its associated JSEP protocol.
are there any reliable benchmarks ... that ... compare the WS protocol to a low-level, direct socket protocol?
Not that I'm aware of. I'm going to go out on a limb and predict WebSockets will be slower ;)

How can I record the network traffic used by Tibco RV?

We are using TIBCO RVRD on both Unix and Windows as the messaging system. Just wondering: other than buying HAWK from TIBCO, is there any way to measure the network usage before and after RVRD compression?
There is a really great tool for this called Rai Insight.
Basically, it can sit on a box and silently listen to all the multicast data and present statistics, even in real time. We used it to monitor traffic-flow spikes with just a few seconds' delay.
It can give you traffic statistics broken down by multicast group, service number, or even sending machine: peak/average traffic flow, peak/average retransmission rate, everything you can think of.
I haven't really used it for this, but the RVRD web GUI (by default at http://server:7580) provides some statistics on inbound/outbound messages and bytes.
