In simuLTE, the LteHandoverManager tries to send a self-constructed "X2HandoverControlMsg" packet to the SCTP layer for further processing. After receiving the packet, however, the SCTP layer discards it and sends a self-defined "DATA" packet to the IP layer.
The problem is: I want to send customized packets constructed in the LteHandoverManager and pass them through the layers, including SCTP, IP and PPP, all the way down to the destination node. Does anybody know if it is possible to do that, and how?
I want to send a UDP packet from bash (perhaps using netcat or socat) and then receive the one-packet reply, or time-out after three seconds.
(Strictly, the listening needs to start before the initial packet is sent.)
Is this possible, or do I need to write my own small C program?
Netcat lets me either send or receive, so it won't do the job.
Likewise for socat.
Perhaps there is already a UDP request/response tool, but I don't know how to find it.
I need to write my own program for this use case.
Socat's datagram mode works bidirectionally. In particular, something like
socat -t 3 - UDP-DATAGRAM:localhost:7777,bind=:6666
might come close to your requirements.
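If socat isn't available, the "small C program" route is not much work either. Here is a rough sketch under the same three-second requirement; the host, port, and payload are placeholders. Because the socket is open before the request goes out, a fast reply is queued rather than lost, which covers the "listen before sending" concern:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Give up on the reply after three seconds. */
    struct timeval tv = { .tv_sec = 3, .tv_usec = 0 };
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    struct sockaddr_in dest = { 0 };
    dest.sin_family = AF_INET;
    dest.sin_port = htons(7777);                      /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);  /* placeholder host */

    const char req[] = "request";                     /* placeholder payload */
    sendto(sock, req, sizeof(req) - 1, 0, (struct sockaddr *)&dest, sizeof(dest));

    char buf[1500];
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
    if (n < 0) { perror("recvfrom (timed out?)"); close(sock); return 2; }

    fwrite(buf, 1, (size_t)n, stdout);
    close(sock);
    return 0;
}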
I am sending and receiving JSON data through a TCP socket. It works fine for smaller amounts of data, around 200 bytes or so, but at about 10 KB it only receives part of the data. I have tried all the different TCP data-retrieval calls I can find (read, gets, gets.chomp, recv), but I cannot find one that works for all of my tests.
Here is the code I have now:
socket = TCPSocket.new '10.11.50.xx', 13338
response = socket.recv(1000000000)
I have also tried adding a timeout but I could not get it to work:
socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, 1)
I am not sure what I am missing. Any help would be appreciated.
It's badly documented in the Ruby docs, but I think TCPSocket#recv actually just calls the recv system call. That one (see man 2 recv) reads a number of bytes from the stream that is determined by the kernel, though never more than the application specifies. To receive a larger "message", you will need to call it in a loop.
But there is an easier way: because TCPSocket indirectly inherits from the IO class, you get all of its methods for free, including IO#read which does read as many bytes as you specify (if possible).
You will also need to implement a way to delimit your messages (a sketch of the length-header approach follows this list):
use fixed-length messages
send the length of the message up front in a (fixed-size) header
use some kind of terminator, e.g. a NULL byte
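For illustration, here is a rough sketch of the length-header approach in plain C (Ruby's IO#read performs the same loop-until-complete internally); the 4-byte big-endian length prefix is an assumption for the example, not something from the question:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly `len` bytes have arrived.
   Returns 0 on success, -1 on error or EOF. */
static int recv_all(int sock, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(sock, p, len, 0);
        if (n <= 0)
            return -1;      /* error or connection closed mid-message */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read one length-prefixed message: a 4-byte big-endian length,
   then the payload. The caller must free() the result. */
static char *recv_message(int sock, uint32_t *out_len)
{
    uint32_t netlen;
    if (recv_all(sock, &netlen, sizeof(netlen)) < 0)
        return NULL;
    *out_len = ntohl(netlen);

    char *msg = malloc(*out_len);
    if (msg == NULL || recv_all(sock, msg, *out_len) < 0) {
        free(msg);
        return NULL;
    }
    return msg;
}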
I am using QTcpSocket to connect to a TCP server (which is running on Ubuntu). The server sends, at minimum, a 1-byte packet every 40 ms. My application is real-time, so it is important that I receive data as fast as possible, even at the cost of extra network traffic.
Once I have connected a TCP client from Windows, I start receiving packets. However, the readyRead() signal from the QTcpSocket is only emitted once every 200 ms (with 5 bytes in the packet). I have looked at the packets in Wireshark; they really are 5-byte packets coming across.
However, using QTcpSocket on Mac (the exact same code, in fact), I get individual packets every time: all of my 1-byte packets arrive as single-byte packets, which is great.
I tried creating a raw Windows socket (not using QTcpSocket) and got identical behaviour to QTcpSocket on Windows.
What difference causes the Mac socket to receive packets at a much higher time resolution? Is there something I can set in setsockopt() to prevent this 200 ms buffering from occurring?
I am aware that setting TCP_NODELAY on the server side will probably solve my problem, but seeing as the Mac TCP client works as intended, there must be a way to get the same behaviour on Windows.
Setting mySocket->setSocketOption(QAbstractSocket::LowDelayOption, 1); on the server side is the only way I have found to remedy this problem.
For others who stumble upon this coming from search engines:
The above (correct) answer by oggmonster can also be described by:
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm so small writes are sent immediately. */
int on = 1;
if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&on, sizeof(on)))
{
    return -1;
}
You need to acknowledge each byte of data you receive to give the reply ACKs some data to piggyback on. Talk to whoever designed your protocol.
Trying to answer questions like "why does it work on X and not on Y" is only useful when at least one of the behaviors is incorrect. If the protocol has no application-level acknowledgements, then both behaviors are correct. If one of them shouldn't be correct, then the protocol needs a mechanism to control that, such as application-level acknowledgements. If it has no such mechanism, the protocol is broken. Trying to figure out why a broken protocol doesn't work is pointless: it doesn't work because it's broken.
Can you explain to me what exactly the SO_SNDBUF and SO_RCVBUF options are?
OK, for some reason the OS buffers the outgoing/incoming data, but I'd like to clarify this subject.
What is their role (generally)?
Are they per-socket buffers?
Is there a connection between Transport layer's buffers (the TCP buffer, for example) and these buffers?
Do they have a different behaviour/role when using stream sockets (TCP) and when using connectionless sockets (UDP)?
A good article will be great too.
I googled it but didn't find any useful information.
The "SO_" prefix is for "socket option", so yes, these are per-socket settings for the per-socket buffers. There are usually system-wide defaults and maximum values.
SO_RCVBUF is simpler to understand: it is the size of the buffer the kernel allocates to hold data arriving on the given socket between the time it arrives over the network and the time it is read by the program that owns the socket. With TCP, if data arrives and you aren't reading it, the buffer fills up and the sender is told to slow down (via the TCP window adjustment mechanism). For UDP, once the buffer is full, new packets are simply discarded.
SO_SNDBUF, I think, only matters for TCP (with UDP, whatever you send goes directly out to the network). For TCP, the buffer can fill up either because the remote side isn't reading (the remote buffer becomes full, TCP communicates this fact to your kernel, and your kernel stops sending, accumulating the data in the local buffer until it fills up), or because of a network problem, where the kernel isn't getting acknowledgements for the data it sends and slows down its transmission until, eventually, the outgoing buffer fills up. Either way, further write() calls to this socket by the application will block (or return EAGAIN if you've set the O_NONBLOCK option).
This all is best described in the Unix Network Programming book.
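For what it's worth, the fill-up-and-EAGAIN behaviour is easy to observe. A small sketch, using a Unix-domain socketpair as a stand-in for a TCP connection (its send buffer fills the same way when the peer isn't reading):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

    /* Make the writing end non-blocking so a full buffer yields EAGAIN
       instead of blocking the write() call. */
    fcntl(sv[0], F_SETFL, O_NONBLOCK);

    char chunk[4096] = { 0 };
    long total = 0;
    for (;;) {
        ssize_t n = write(sv[0], chunk, sizeof(chunk));
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;      /* send buffer is full: nobody is reading sv[1] */
            perror("write");
            return 1;
        }
        total += n;
    }
    printf("buffered %ld bytes before EAGAIN\n", total);
    close(sv[0]);
    close(sv[1]);
    return 0;
}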
What is their role (generally)?
Data that you want to send over a socket is copied into the send buffer of the socket, so your code doesn't have to wait (i.e., block) until the data has actually been sent out to the network. When the send call returns successfully, this only means that the data has been placed into the send buffer, from where the protocol implementation will read it as soon as it is ready to send that data over the network.
Keep in mind that multiple sockets from multiple processes may all want to send data at the same time, yet at any given time only one data packet can be sent over a network line. While a send is in progress, all other senders have to wait, and once the line is free, the implementation processes one pending send request after another.
Data that arrives from the network is written into the receive buffer of the socket by the protocol implementation, where it waits until your code reads it. Otherwise all receiving would have to stop until your code had processed the incoming packet; this way, your code can do other things while a packet arrives in the background. And again, the interface is shared, so the system must prevent a situation where other processes cannot receive their network data just because your process is refusing to process its own incoming data.
Are they per-socket buffers?
Yes. Every socket has its own set of buffers.
Is there a connection between Transport layer's buffers (the TCP buffer, for example) and these buffers?
I'm not sure what you mean by "TCP buffers" but if you are referring to the TCP receive and send windows, the answer is yes.
TCP tells the other side regularly how much room is left in your receive buffer, so the other side will never send more data than fits into it. If your receive buffer is full, the other side stops sending completely until there's room again, which will be the case as soon as you read some data from it.
So if you cannot read data often enough to keep your socket buffer from filling up, increasing the receive buffer size can prevent the TCP connection from having to pause its data transfer.
On the other hand, if the send buffer is full, the socket will not accept any more data from your code. Any attempt to send will either block or fail with an error (for a non-blocking socket) until there's room again.
And since TCP can only work with the data currently in the send buffer, the send buffer size also influences TCP's sending behavior. The sending strategy can depend on various factors, one of which is the amount of data known to be pending. If your send buffer is just 2 KB, TCP never sees more than 2 KB awaiting transmission, even though your app may know that much more data is going to follow. If your send buffer is 256 KB and you put 128 KB of data into it, TCP knows that it has to send 128 KB for this connection, and this may (and most likely will) influence the sending strategy it uses.
Do they have a different behaviour/role when using stream sockets (TCP) and when using connectionless sockets (UDP)?
Yes. With TCP, the data you send is just a stream of bytes. There is no relationship between the bytes and the packets being sent out. Sending 80 bytes could mean sending one packet with 80 bytes, or sending 10 packets with 8 bytes each. TCP decides that on its own. The same holds for incoming data: if there are 200 bytes in your receive buffer, you cannot know how they got there; the bytes you read from a TCP socket may have been transported in any number of packets. So despite transporting data in chunks over packet-based networks, a TCP connection behaves like a serial line link.
UDP, on the other hand, sends datagrams. If you place 80 bytes into the send buffer of a UDP socket, those 80 bytes are guaranteed to go out in a single UDP packet containing 80 bytes of payload. Data is sent in exactly the chunks you write into the send buffer: write one byte at a time, 80 times, and 80 packets are sent out, each containing one byte. If you tell a TCP socket to send 200 bytes but there is only room for 100 bytes in the send buffer, TCP adds those 100 bytes and lets you know that 100 of your 200 bytes were accepted. A UDP socket, on the other hand, will block or fail with an error: either all 200 bytes fit or nothing fits; there is no partial fit with UDP.
The same applies to receiving: datagrams, not bytes, are stored in the UDP receive buffer. If a TCP socket first receives 80 bytes of data and then 200 bytes, you can perform one read call that returns all 280 bytes at once. If a UDP socket first receives an 80-byte datagram and then a 200-byte datagram, and you request 280 bytes, you get exactly 80 bytes, because all data returned by a single read call must come from the same datagram. You cannot read across datagram borders. Also note that if you request only 20 bytes, you receive the first 20 bytes of the datagram and the other 60 bytes are discarded; the next read returns data from the next datagram (the 200-byte one).
So, the difference in two sentences: TCP sockets store bytes in their socket buffers, UDP sockets store datagrams. And datagrams must fit completely: an incoming datagram that cannot fit entirely into the socket buffer is silently discarded, even if the buffer has some room available.
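To make the datagram-boundary behaviour concrete, here is a small sketch with two UDP sockets on the loopback interface (the port number is an arbitrary placeholder). The first read asks for 280 bytes but gets only the first 80-byte datagram; the second read asks for 20 bytes and silently discards the other 180 bytes of the 200-byte datagram:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Receiver bound to a loopback port (the port number is arbitrary). */
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7777);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    bind(rx, (struct sockaddr *)&addr, sizeof(addr));

    /* Sender: one 80-byte datagram, then one 200-byte datagram. */
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    char payload[200];
    memset(payload, 'x', sizeof(payload));
    sendto(tx, payload, 80, 0, (struct sockaddr *)&addr, sizeof(addr));
    sendto(tx, payload, 200, 0, (struct sockaddr *)&addr, sizeof(addr));

    /* Ask for 280 bytes: only the first datagram's 80 bytes come back. */
    char buf[280];
    ssize_t n = recv(rx, buf, sizeof(buf), 0);
    printf("first read:  %zd bytes\n", n);   /* prints 80 */

    /* Ask for 20 bytes: the remaining 180 bytes of the second
       datagram are silently discarded. */
    n = recv(rx, buf, 20, 0);
    printf("second read: %zd bytes\n", n);   /* prints 20 */

    close(tx);
    close(rx);
    return 0;
}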
In Windows, the send buffer does have an effect with UDP. If you blast packets out faster than the network can transmit them, you eventually fill the socket's output buffer and SendTo fails with "would block". Increasing SO_SNDBUF helps with this. I had to increase both the send and receive buffers for a test I was doing to find the maximum packet rate I could achieve between a Windows box and a Linux box. I could also have handled the send side by detecting the "would block" error code, sleeping a bit, and retrying, but pumping up the send buffer size was simpler.
The default in Windows is 8 KB, which seems needlessly small in this era of PCs with GBs of RAM!
Searching Google for "SO_RECVBUF msdn" gave me...
http://msdn.microsoft.com/en-us/library/ms740476(VS.85).aspx
which answers your "are they per socket" with these lines from the options table:
SO_RCVBUF (int): Specifies the total per-socket buffer space reserved for receives.
SO_SNDBUF (int): Specifies the total per-socket buffer space reserved for sends.
With more detail later on:
SO_RCVBUF and SO_SNDBUF
When a Windows Sockets implementation supports the SO_RCVBUF and SO_SNDBUF options, an application can request different buffer sizes (larger or smaller). The call to setsockopt can succeed even when the implementation did not provide the whole amount requested. An application must call getsockopt with the same option to check the buffer size actually provided.
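In plain BSD-sockets C, that set-then-verify step might look like the sketch below (the same calls exist in Winsock; the 256 KB request is an arbitrary example, and note that Linux, for instance, may report back double the requested value for bookkeeping overhead):

#include <stdio.h>
#include <sys/socket.h>

int set_rcvbuf(int sock)
{
    int requested = 256 * 1024;    /* arbitrary example size */
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

    /* The implementation may clamp (or otherwise adjust) the value,
       so read back what was actually provided. */
    int granted = 0;
    socklen_t len = sizeof(granted);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &granted, &len);
    printf("requested %d, got %d\n", requested, granted);
    return granted;
}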
The above answers didn't address all the questions, especially the relationship between the socket buffers and the TCP buffers.
I think they are different things at different layers: the TCP buffer is the consumer of the socket buffer.
The socket buffers (input and output) are IO buffers accessed via system calls from application code in user space.
For example, with the output buffer, the application code can:
Send data immediately while the buffer has room, and block when the buffer is full.
Set the buffer size.
Flush the data in the buffer to the underlying storage (the TCP send buffer).
Close the output buffer by closing the stream.
The TCP buffers (send and receive) are in kernel space; only the OS can access them.
For example, with the TCP send buffer, the TCP protocol implementation can:
Send packets and accept ACKs.
Guarantee delivery and ordering of packets.
Control congestion by resizing the window of in-flight packets.
By the way, the UDP protocol doesn't have such buffers, but a UDP socket can still have IO buffers.
These are my understanding and I'm more than happy to get any feedback/modification/correction.