I am using QTCPSocket to connect to a TCP server (which is running on Ubuntu). The server sends, at minimum, a 1-byte packet every 40 ms. My application is real-time, so it is important that I receive data as fast as possible, even at the cost of extra network traffic.
Once I have connected a TCP client from Windows, I start receiving packets. However, the readyRead() signal from the QTCPSocket is only emitted once every 200 ms (with 5 bytes in the packet). I have looked at the packets in Wireshark; they really are 5-byte packets coming across.
However, using QTCPSocket on Mac (the exact same code, in fact), I get individual packets every time: all of the 1-byte packets that are sent arrive as single-byte packets, which is great.
I tried creating a raw Windows socket (not using QTCPSocket) and got identical behaviour to QTCPSocket on Windows.
What is the difference that causes the Mac socket to receive packets at a much higher time resolution? Is there something I can set with setsockopt() that will prevent this 200 ms buffering from occurring?
I am aware that setting TCP_NODELAY on the server side will probably solve my problem, but seeing as the Mac TCP Client works as intended, there must be a way to get the same behaviour on Windows.
Setting mySocket->setSocketOption(QAbstractSocket::LowDelayOption, 1); on the server side is the only way I have found to remedy this problem.
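For context, a minimal sketch of where that call would go on the server side, assuming a plain QTcpServer setup (the class and member names here are illustrative): LowDelayOption maps onto TCP_NODELAY on the underlying socket, so it has to be applied to each accepted connection.

#include <QTcpServer>
#include <QTcpSocket>

// Hypothetical slot connected to QTcpServer::newConnection;
// m_server is an illustrative member holding the listening QTcpServer.
void MyServer::onNewConnection()
{
    QTcpSocket *client = m_server->nextPendingConnection();
    // Disable Nagle's algorithm for this connection so small writes
    // are pushed onto the wire immediately instead of being coalesced.
    client->setSocketOption(QAbstractSocket::LowDelayOption, 1);
}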
For others who stumble upon this coming from search engines:
The above (correct) answer by oggmonster can also be described by:
// Disable Nagle's algorithm on the sending socket so small segments
// are transmitted immediately instead of being coalesced.
int on = 1;
if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&on, sizeof(on)))
{
    return -1;
}
You need to acknowledge each byte of data you receive to give the reply ACKs some data to piggyback on. Talk to whoever designed your protocol.
Trying to answer questions like "why does it work on X and not on Y" is only useful when one of the two behaviors is incorrect. If the protocol has no application-level acknowledgements, then both behaviors are correct. If one of them shouldn't be, then the protocol needs a mechanism to control that, such as application-layer acknowledgements; if it doesn't have one, the protocol is broken. Trying to figure out why a broken protocol doesn't work is pointless -- it doesn't work because it's broken.
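To make the piggy-backing point concrete, here is a minimal sketch (plain BSD-style sockets; processData() is a hypothetical application handler, error handling omitted) of a receive loop that sends a one-byte application-level acknowledgement for every chunk it reads. Those outgoing bytes give the receiver's TCP ACKs something to ride on instead of waiting out the delayed-ACK timer.

#include <sys/socket.h>

void processData(const char *data, ssize_t len);  // hypothetical handler

// Sketch only: 'sock' is an already-connected TCP socket.
void receiveLoop(int sock)
{
    char buf[4096];
    for (;;)
    {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0)
            break;                 // connection closed or error
        processData(buf, n);       // handle the application data
        char ack = 1;              // application-level acknowledgement
        send(sock, &ack, 1, 0);    // reply data for the TCP ACK to piggyback on
    }
}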
Related
The first time I skimmed the zeromq docs, I assumed that the sender high watermark was there to ensure that the sender did not get too far ahead of the receiver. Now that I'm looking at it more carefully, it seems that this can't possibly be true, since the wire protocol doesn't have any concept of ACKs so the sender can't know whether the receiver is keeping up or is way behind. After staring at jeromq code in the debugger for way too long, it seems that the watermark is actually a purely "within-same-process" mechanism to ensure that the application thread that's writing to the ZMQ socket does not get too far ahead of the background thread that's responsible for taking messages off the ZMQ socket and writing bytes into the OS's TCP socket.
It seems like a rather fringe thing to worry about, relative to how much attention it gets in the docs. It doesn't even seem like a great way to control memory usage: with a high water mark of 10, fifteen messages of 2 kB each are not allowed, but five messages of 100 MB each are, so memory usage is still pretty unpredictable.
Am I understanding all this correctly, or am I hopelessly confused?
I think another sign that it's not there to prevent a sender getting too far ahead of the receiver is that if one sets the HWM to 0, that is taken as infinity, not actually zero. For 0 to mean zero, there would have to be some to-ing and fro-ing with the receiver so the sender could know the socket was actually empty throughout the whole connection.
I wish that 0 did mean zero, because then ZeroMQ could implement both Actor Model and Communicating Sequential Processes architectures. But it doesn't, so it can't.
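As a rough illustration of the HWM being a purely local throttle, here is a sketch using the libzmq C API (the endpoint, HWM value and message count are made up, and the exact point at which EAGAIN appears is only approximately the HWM): once roughly ZMQ_SNDHWM messages are queued inside the sender's own process, a non-blocking send starts failing, regardless of what any receiver is doing.

#include <zmq.h>
#include <stdio.h>

int main()
{
    void *ctx  = zmq_ctx_new();
    void *push = zmq_socket(ctx, ZMQ_PUSH);

    int hwm = 10;                                        // illustrative value
    zmq_setsockopt(push, ZMQ_SNDHWM, &hwm, sizeof hwm);  // must be set before connect
    zmq_connect(push, "tcp://127.0.0.1:5555");           // made-up endpoint, no receiver running

    for (int i = 0; i < 100; ++i)
    {
        if (zmq_send(push, "x", 1, ZMQ_DONTWAIT) == -1 && zmq_errno() == EAGAIN)
        {
            // The sender-side pipe is full: the HWM throttled us locally.
            printf("HWM reached after %d messages\n", i);
            break;
        }
    }

    zmq_close(push);
    zmq_ctx_term(ctx);
    return 0;
}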
Possible Uses
Nonetheless, a potentially useful aspect is related to the fact that ZeroMQ follows the Actor Model. Suppose one were sending messages, and it kind of mattered whether or not those messages got through. In the situation where the link has collapsed (something ZeroMQ's heartbeat can tell you about pretty quickly), messages already sent are potentially lost forever. However, if the HWM is being used to throttle the rate at which the application sends messages, then the number of messages lost when the link breaks is minimised.
Obviously with CSP - the perfect architecture so far as I'm concerned! - you lose no messages (because the acts of sending and receiving are an execution rendezvous; the send won't complete until the receive has also completed).
What I have done in the past is to queue up messages for transmission in the sending application, sending them as and when the socket / connection can ingest them. Having the outbound message queue in the sending application's control (instead of in ZeroMQ's control) means that sender state can potentially get ahead of the transfer of messages, but still recover easily from a network connection fault.
I have written systems where a sender has a choice of two pathways to send messages through - prime and spare - and if the link to prime has collapsed, the sender continues to send to spare instead. Having queued the messages inside the application and not in the socket allows the sender's state to get ahead of the actual transfer of messages, knowing that if a link goes down it still has all the unsent outbound messages that have been generated in the meantime. These can then be directed at spare instead, without having to rewind the sender's internal state (which could be really tricky) to the last known successful transfer.
Something like that, anyway.
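A very rough sketch of that idea, using the libzmq C API again (the socket types are placeholders, and primeLinkUp would come from whatever health check, e.g. heartbeat state, you have): the application keeps its own outbound queue and drains it with non-blocking sends, so unsent messages stay under its control and can be redirected if the primary path dies.

#include <zmq.h>
#include <deque>
#include <string>

// Queue owned by the application, not by ZeroMQ.
std::deque<std::string> outbound;

void enqueueMessage(const std::string &msg)
{
    outbound.push_back(msg);        // sender state can run ahead of transmission
}

// 'prime' and 'spare' are assumed to be already-connected ZMQ_PUSH sockets.
void drainQueue(void *prime, void *spare, bool primeLinkUp)
{
    void *target = primeLinkUp ? prime : spare;   // redirect if prime has collapsed
    while (!outbound.empty())
    {
        const std::string &msg = outbound.front();
        if (zmq_send(target, msg.data(), msg.size(), ZMQ_DONTWAIT) == -1)
            break;                  // socket can't ingest more right now; try again later
        outbound.pop_front();       // only discard once ZeroMQ has accepted it
    }
}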
"Why not send to both prime and spare anyway?" is a valid question. Well, sometimes things can be complicated...
I am using iperf traffic generation and a hard timeout as an extension to the simple_switch_13.py code in mininet with the RYU SDN controller. I am using a linear topology with 8 switches, and I set the hard timeout to 5 seconds.
I am working with only one flow. I started iperf traffic between two hosts (let's say h1 to h7; the terms are the same as those used in the mininet linear topology) for 10 seconds. When the flow starts, ARP packets are generated in the network. An ARP reply from h7 is then sent towards h1, which creates seven packet-in messages (from s7, s6, ..., s1); the respective flow rules are installed in the switches and the reply finally reaches h1. Then h1 sends the TCP flow to h7, which also creates seven packet-in messages (from s1, s2, ..., s7); the respective flow rules are installed in the switches and the flow reaches h7. So far everything works fine.
But once the timeout (5 seconds) expires, the flow rules in the switches are deleted. Because the flow is still in the network, what should happen is that the controller receives one packet-in message and the switch buffers the rest of the packets, so that once the respective flow rule is installed in the switch's flow table, the buffered packets can use it. But that is not happening. The controller receives a lot of packet-in messages before the flow rule gets installed (every packet that arrives at the switch is sent to the controller). What might be the reason for so many packet-in messages? Are the switch buffers not working properly (even though I am getting packet-in messages with a buffer_id)? How can I solve this issue?
This also happens with an idle timeout, and with a UDP flow at the start (i.e. when h1 begins communicating with h7) the switches along the path generate a lot of packet-in messages.
After doing a lot of research, I understand that it is not a problem with the hard timeout or idle timeout. It happens when a flow with a high data rate hits a switch that does not yet have a flow rule for it: the switch then sends a lot of packet-in messages for the same flow. It is not queuing (or perhaps not storing) the rest of the flow's packets after sending one packet-in for that flow. How can I solve this issue in mininet?
If we send two messages over the same HTML5 WebSocket a split millisecond apart from each other, is it theoretically possible for the messages to arrive in a different order than they were sent?
Short answer: No.
Long answer:
WebSocket runs over TCP, so at that level @EJP's answer applies. WebSocket can be "intercepted" by intermediaries (like WS proxies): those are allowed to reorder WebSocket control frames (i.e. WS pings/pongs), but not message frames when no WebSocket extension is in place. If there is a negotiated extension in place that in principle allows reordering, then an intermediary may only do so if it understands the extension and the reordering rules that apply.
It's not possible for them to arrive in your application out of order. Anything can happen on the network, but TCP will only present you the bytes in the order they were sent.
At the network layer, TCP is supposed to guarantee that messages arrive in order. At the application layer, errors can occur in the code and cause your messages to be handled out of order in the logic of your code. The culprit could be the network stack your application is using or your application code itself.
If you asked me, "can my Node.js application guarantee sending and receiving messages in order?", I'm going to have to say no. I've run WebSocket applications connected over Wi-Fi under high latency and low signal. It causes very strange behaviour, as if packets are dropped and messages are out of sequence.
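If you want to tell whether any apparent reordering is happening in your own code rather than on the wire, one simple approach (purely illustrative, not taken from the answers above) is to tag each message with a sequence number at the sender and check it at the receiver; seqOf() stands for however your message format exposes the sender's counter.

#include <cstdint>
#include <cstdio>

// Next sequence number we expect to see from the sender.
static uint64_t expected = 0;

void onMessage(uint64_t seq /* = seqOf(message), hypothetical accessor */)
{
    if (seq != expected)
        std::printf("out of order: got %llu, expected %llu\n",
                    (unsigned long long)seq, (unsigned long long)expected);
    expected = seq + 1;
}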
This article is a good read https://samsaffron.com/archive/2015/12/29/websockets-caution-required
I would like to have a lightning-fast website focussed on mobile. Therefore I would like to inline as many graphics, styles and scripts as possible and use only one or two fast HTTP requests to display the first part of the page.
My question is: how much can I inline, and how big may my document get before it has to be split across multiple round trips?
As far as I know, HTTP uses TCP to send the IP packets, and TCP has a window limiting how far apart the last sent and the highest acknowledged packet may be, and it scales this window.
But how much payload can be transported before the server has to wait for an ACK from my client in the worst case (first window sent, no ACKs received yet)? And what does it depend on: the browser, the OS, the device?
It depends on the size of the socket receive buffer in the receiver.
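If you want to see what that limit actually is on a given client, a small sketch (BSD-style sockets, illustrative only, error handling minimal) is to query SO_RCVBUF on the receiving socket; the TCP receive window the client advertises is derived from this buffer.

#include <sys/socket.h>
#include <cstdio>

// 'sock' is assumed to be the client's connected TCP socket.
void printReceiveBuffer(int sock)
{
    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
        std::printf("socket receive buffer: %d bytes\n", rcvbuf);
}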
Is it possible to have winsock's send function block until the packet being sent is received at the other end?
My end goal is to be able to send 5-20 MB files while still being able to send small 1 KB packets on the same connection. So I was thinking I would have it block until the receiver receives the packet. That way, if another small packet is queued, it won't be stuck waiting for the rest of the large file to be transferred.
Just use two separate TCP connections. They can even connect to the same host and port; the port number at your end will be different.
Stop-and-wait handshaking over any network (i.e. not loopback) would be miserably slow.
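A minimal sketch of that suggestion (Winsock-flavoured, host and port are made up, error handling omitted for brevity): open two sockets to the same server, use one for the bulk file and the other for the small control packets; the OS assigns each a different local port automatically.

#include <winsock2.h>
#include <ws2tcpip.h>

// Illustrative helper: connect a fresh TCP socket to the given server.
SOCKET connectTo(const char *host, const char *port)
{
    addrinfo hints = {}, *res = nullptr;
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    getaddrinfo(host, port, &hints, &res);

    SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    connect(s, res->ai_addr, (int)res->ai_addrlen);
    freeaddrinfo(res);
    return s;
}

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    // Two independent connections to the same host/port (values are made up):
    // send the large file on 'bulk' and the small packets on 'control',
    // so the small packets never queue behind the large transfer.
    SOCKET bulk    = connectTo("example.com", "5000");
    SOCKET control = connectTo("example.com", "5000");

    closesocket(control);
    closesocket(bulk);
    WSACleanup();
    return 0;
}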
You could send the size of your packages instead:
struct MyNetworkPackage {
    int size;    // number of bytes in 'data', sent first
    char* data;  // payload; send the 'size' bytes it points to, not the pointer itself
};
If you begin by sending the size, you can deduce on the other side which data belongs to which package.
I've tried to explain winsock in this answer as well.
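A rough sketch of that length-prefix idea (plain sockets, names illustrative, error handling omitted; byte order handled with htonl/ntohl): the sender writes the size first and then the payload, and the receiver reads the size and then keeps reading until exactly that many bytes have arrived.

#include <arpa/inet.h>   // htonl/ntohl (on Windows: winsock2.h)
#include <sys/socket.h>
#include <cstdint>
#include <string>
#include <vector>

// Sender: 4-byte size prefix first, then the payload bytes.
void sendPackage(int sock, const std::string &payload)
{
    uint32_t size = htonl((uint32_t)payload.size());
    send(sock, (const char*)&size, sizeof(size), 0);
    send(sock, payload.data(), payload.size(), 0);
}

// Receiver: read the size, then read until the whole payload has arrived.
std::string recvPackage(int sock)
{
    uint32_t size = 0;
    recv(sock, (char*)&size, sizeof(size), MSG_WAITALL);
    size = ntohl(size);

    std::vector<char> buf(size);
    size_t got = 0;
    while (got < size)
    {
        ssize_t n = recv(sock, buf.data() + got, size - got, 0);
        if (n <= 0)
            break;                  // connection closed or error
        got += (size_t)n;
    }
    return std::string(buf.data(), got);
}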
No. And you don't need to do this.
I am not sure what we would achieve with this. Mixing traffic on a TCP stream won't serve any purpose. Can you explain what exactly you need to do? Above all, with TCP we can never be sure that the other application has actually received the packet (it might just be sitting in its TCP buffers)...
It seems that send() will block if a previous packet is still being sent, i.e. if the socket's send buffer is full.