I am trying to measure latency to a server that I don't control. This is in a colocated environment, so the latency is on the order of 500 us (.5 ms).
I understand that Cisco gear frequently deprioritizes ICMP traffic, making ping times unreliable. Is there a way for me to tell if this is the case on the gear I am traversing?
Can I use TCP acknowledgements to determine the minimum latency to the remote server? To do this, I would somehow need to force the remote server to send a TCP ack immediately on receiving my data.
Try hping. You can send acks and measure the latency:
hping -A -p 80 host
or with a SYN:
hping -S -p 80 host
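For repeated probes, a minimal sketch (assuming the hping3 variant of the tool and that port 80 is reachable on the target) sends a fixed number of SYNs and prints a per-reply RTT plus a min/avg/max summary at the end:
hping3 -S -p 80 -c 10 host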
Also note that deprioritization on a layer-2 link is unlikely (though possible). In addition, seeing ARP respond more slowly than ICMP doesn't necessarily mean ICMP isn't deprioritized; it might just mean the traffic isn't heavy enough to hit the rate-limiting threshold.
ARP will almost always be slower because it broadcasts and may suffer port-queuing at the switch. You could unicast ARP, but that might look suspicious if anyone is looking for it.
You could try using arping, which does a ping using ARPs.
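For instance, a small sketch (assuming the Linux arping utility and a reachable IPv4 neighbour; the address below is only a placeholder) that sends five ARP requests and reports the reply times:
arping -c 5 192.168.1.1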
Consider this scenario:
I have a 10 Mbps internet connection. One user is downloading a 1 GB file from the internet over TCP (call it TCP.file) and another user is downloading a 1 GB file over UDP (call it UDP.file).
Assume there are no issues with the server (or upstream) serving the files over TCP and UDP. There is no packet inspection or anything like that, just a plain router.
Which download will complete first? Will it be TCP.file or UDP.file? And why?
I have two arguments:
1) TCP.file will complete first, because TCP has flow control and can gobble up the entire 10 Mbps of bandwidth.
2) UDP.file will complete first, since UDP has less overhead and no flow control, so it can also consume the entire 10 Mbps of bandwidth.
Can anyone tell which is correct?
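One way to check this empirically rather than argue about it is to compare TCP and UDP throughput with iperf3, assuming you have (or can run) an iperf3 server at the far end; the server name below is only a placeholder:
iperf3 -c example-server
iperf3 -c example-server -u -b 10M
The first run measures TCP throughput; the second offers a 10 Mbps UDP stream and reports how much of it actually arrives.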
How can I simulate packet loss on macOS?
I want to test the different scenarios for MQTT QoS levels.
Are there any frameworks or simple terminal programs for this?
I can't speak for OS X, but there are a couple of ways to do it on Linux, so maybe you could try them in a VM.
You can use iptables to drop a given percentage of packets using the statistic match (the xt_statistic module), which is documented in iptables-extensions (see the sketch below).
Use something like the CORE Network Emulator. This lets you create whole networks and set bandwidth and packet-drop rates. They provide a pre-installed VMware image.
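A minimal sketch of the iptables option above, assuming a Linux VM and that randomly dropping about 10% of all inbound packets is acceptable for the test (the second command removes the rule again when you are done):
iptables -A INPUT -m statistic --mode random --probability 0.1 -j DROP
iptables -D INPUT -m statistic --mode random --probability 0.1 -j DROP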
Because MQTT uses TCP, it is not typically affected by IP/Ethernet packet loss until the loss gets serious enough that timeouts occur and the whole TCP connection is dropped. MQTT message re-transmission only occurs when a connection drops and is then re-established.
As such, you may be better off using something like a proxy or TCP port-forwarder between your client and server, so that you can simulate the connection dropping.
socat is an example of a simple TCP port forwarder:
http://www.dest-unreach.org/socat/
The following command listens on port 2883 and forwards connections to port 1883 on the same machine:
socat TCP-LISTEN:2883,reuseaddr,fork TCP:localhost:1883
Typing Ctrl-C will cause it to drop the TCP connection.
socat is available from homebrew on Mac OS X.
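To use it for the test, point the MQTT client at the forwarded port instead of the broker itself; for example, with the mosquitto command-line client (an assumed client choice, and the topic is only a placeholder):
mosquitto_sub -h localhost -p 2883 -t 'test/#'
Killing socat then simulates the broker connection dropping while the broker itself stays up.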
I'm trying to see the results of an incoming ping on a target windows machine. This is needed to verify that the ping, which is running in a background thread, is being sent from the originator.
I have tried netstat to no avail. Are there any other approaches I could try?
Thanks.
Ping is an ICMP packet and doesn't create a TCP connection (hence you won't see it in netstat). On Linux, I'd add a rule to the firewall.
The simplest solution for your case might be to open a connection and then close it. That will show up in the output of netstat in a CLOSE_WAIT or TIME_WAIT state for a while.
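A sketch of the Linux firewall-rule idea mentioned above (assuming iptables; this logs every incoming echo request to the kernel log, so remove the rule again when you are done):
iptables -A INPUT -p icmp --icmp-type echo-request -j LOG --log-prefix "ping-in: "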
As Aaron Digulla already noted, ping is ICMP. This also means the originator is even less trustworthy than with TCP; there is no SYN/ACK handshake. You just get an IP packet on your host, and you have to trust the header fields. Anyone can spoof those header fields, with almost no restrictions (it might be a bit challenging to get a packet claiming to come from 127.0.0.1 past a router).
Therefore, ICMP is not suitable for verification tasks. You need a challenge/response protocol. TCP works reasonably well as long as you can trust the network but not necessarily all hosts on it (a reasonable assumption for the Internet, though not strong enough for financial transactions, which is why they use SSL).
I recently turned on Windows Firewall logging on my computer and started tracking incoming and outgoing connections. Something curious about the logfiles is that I have noticed numerous UDP packets (in fact, it constitutes basically all of my incoming traffic) that don't have my host as destination or source showing up in the logs.
I thought this might be an implementation detail of UDP (the packets hopping across my computer within the subnet), but reading up on UDP on Wikipedia didn't enlighten me any further, and I don't see why my computer should be forwarding these packets in the first place.
Any ideas?
Edit 1: Here is what a log file line with the mysterious UDP packet looks like:
2008-10-11 16:04:31 ALLOW UDP 18.243.7.218 239.255.255.250 49152 3702 0 - - - - - - - RECEIVE
Is 239.255.255.250 a broadcast address? Now that you mention it, the UDP packets I'm seeing have very specific destinations, basically 224.0.0.252, 239.255.255.250, 18.243.255.255. I also get phantom ICMP pings addressed to 224.0.0.1.
The packets addressed to IPs starting with 239 and 224 are multicast packets. This is a way to address traffic to a group of computers without broadcasting it to an entire network. It is used by various legitimate protocols.
224.0.0.252 is the address used by the Link-Local Multicast Name Resolution (LLMNR) protocol.
239.255.255.250 is the address used by the Simple Service Discovery Protocol.
224.0.0.1 is the all hosts address, used by your router to see who on your network is willing to participate in multicast conversations.
The ones addressed to 18.243.255.255 look like subnet broadcasts; broadcast is likewise used by many legitimate protocols, such as NetBIOS name resolution.
As recommended by Luka, a good protocol analyzer like Wireshark will tell you precisely what each of these packets are and what they contain.
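If you want to narrow the capture down to just this traffic, a one-liner using Wireshark's command-line companion tshark (assuming it is installed and you have capture privileges) might look like:
tshark -f "dst net 224.0.0.0/4 or dst host 18.243.255.255"
The capture filter keeps only multicast destinations plus that subnet broadcast address.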
It depends on the type of connection you are on.
On most cable-modem ISPs you are basically on the same LAN as your neighbours, and can usually see some of their traffic (like broadcasts).
I'd recommend you install a packet sniffer and see what is really going on.
A good multi-platform packet sniffer is Wireshark.
Hard to say without analyzing the log data, but they could be broadcast packets on the segment, in which case your system would listen to them. This is possible in both IPv4 and IPv6.
Your system should not be forwarding them unless it's set up to route, but it can certainly be listening to packets all the time (various network protocols use UDP).
I wonder whether UNIX domain socket connections to PostgreSQL are faster than TCP connections from localhost under a high concurrency rate, and if so, by how much?
Postgres core developer Bruce Momjian has blogged about this topic. Momjian states, "Unix-domain socket communication is measurably faster." He measured query network performance showing that the local domain socket was 33% faster than using the TCP/IP stack.
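If you want numbers for your own workload, a rough way to measure it is pgbench, which ships with PostgreSQL. A sketch, assuming a local cluster whose socket directory is /var/run/postgresql and a scratch database named test (both are assumptions; adjust for your install):
pgbench -i -h /var/run/postgresql test
pgbench -c 10 -T 30 -h /var/run/postgresql test
pgbench -c 10 -T 30 -h 127.0.0.1 test
The first run initializes the test tables; the other two report transactions per second over the UNIX socket and over loopback TCP respectively.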
UNIX domain sockets should offer better performance than TCP sockets over loopback interface (less copying of data, fewer context switches), but I don't know whether the performance increase can be demonstrated with PostgreSQL.
I found a small comparison on the FreeBSD mailinglist: http://lists.freebsd.org/pipermail/freebsd-performance/2005-February/001143.html.
I believe that UNIX domain sockets in theory give better throughput than TCP sockets on the loopback interface, but in practice the difference is probably negligible.
Data carried over UNIX domain sockets don't have to go up and down through the IP stack layers.
re: Alexander's answer. AFAIK you shouldn't get more than one context switch or data copy in each direction (i.e. for each read() or write()), which is why I believe the difference will be negligible. The IP stack doesn't need to copy the packet as it moves between layers, but it does have to manipulate internal data structures to add and remove the higher-layer packet headers.
AFAIK, UNIX domain sockets (UDS) work like system pipes: they carry only the data, with no checksums or other protocol overhead, and no three-way handshake the way TCP sockets have...
PS: so maybe UDS will be a bit faster.
TCP connections to localhost never leave the kernel on most systems and skip most of the work a real network path would require, so the difference compared to UNIX domain sockets is usually negligible to none. However, this is not guaranteed in any way; it is just how it usually works out, so you should not depend on it.