This is more of an observation, and also a request for suggestions on the best way to handle this scenario.
I have two threads: one just pumps in data, and the other receives the data and does a lot of work before sending it to another socket. The two threads are connected via a UNIX domain socket. The socket type is datagram, the AF_UNIX analogue of UDP. I did not want a stream (TCP-style) socket because it is stream based, which means that if there is only a little space left in the queue, my data gets split and sent in pieces. That is bad, since I am sending data that must not be split; hence I used SOCK_DGRAM. Interestingly, when the sending thread overwhelms the receiving thread by pumping in data too fast, at some point the domain socket buffer fills up and sendto() returns ENOBUFS. I was under the impression that in this situation sendto() would block until buffer space became available, which would be my desired behaviour, but that does not seem to be the case. I currently work around the problem in a rather awkward way.
CPU Yield method
If I get ENOBUFS, I call sched_yield(), since there is no pthread_yield() on OS X. After that I try to resend, and if that fails I keep doing the same until the datagram is accepted. This is bad, as I am wasting CPU cycles doing something useless. I would love it if sendto() blocked.
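In code, the workaround looks roughly like this (a minimal sketch of the busy-retry loop described above; the function name is made up):

#include <errno.h>
#include <sched.h>
#include <sys/socket.h>

// Spin until the kernel accepts the datagram, yielding the CPU between
// attempts so the receiving thread gets a chance to drain the buffer.
ssize_t send_with_yield(int fd, const void *buf, size_t len)
{
    ssize_t n;
    while ((n = send(fd, buf, len, 0)) < 0 && errno == ENOBUFS)
        sched_yield();
    return n;  // datagram queued, or a different error left in errno
}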
Sleep method
I tried to solve the same issue using sleep(1) instead of sched_yield(), but this is of no use, as sleep() puts my whole process to sleep instead of just the sending thread.
Neither approach works for me, and I am running out of options. Can someone suggest the best way to handle this issue? Are there clever tricks I am not aware of that can cut the wasted CPU cycles? By the way, what the man page says about sendto() is wrong, based on this discussion: http://lists.freebsd.org/pipermail/freebsd-hackers/2004-January/005385.html
The UDP code in the kernel:
The udp_output function in /sys/netinet/udp_usrreq.c seems clear:
/*
 * Calculate data length and get a mbuf
 * for UDP and IP headers.
 */
M_PREPEND(m, sizeof(struct udpiphdr), M_DONTWAIT);
if (m == 0) {
        error = ENOBUFS;
        if (addr)
                splx(s);
        goto release;
}
I'm not sure why sendto() isn't blocking for you... but you might try calling this function before each call to sendto():
#include <stdio.h>
#include <sys/select.h>

// Won't return until there is space available on the socket for writing
void WaitUntilSocketIsReadyForWrite(int socketFD)
{
    fd_set writeSet;
    FD_ZERO(&writeSet);
    FD_SET(socketFD, &writeSet);
    if (select(socketFD + 1, NULL, &writeSet, NULL, NULL) < 0) perror("select");
}
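For example, a hypothetical wrapper around that (on some systems select() can report a datagram socket writable and the very next send can still hit ENOBUFS, hence the retry):

#include <errno.h>
#include <sys/socket.h>

// Hypothetical wrapper: wait for writability, then send; retry if the
// buffer filled up again between select() and the actual send call.
ssize_t BlockingSendTo(int fd, const void *buf, size_t len)
{
    ssize_t n;
    do {
        WaitUntilSocketIsReadyForWrite(fd);
        n = send(fd, buf, len, 0);
    } while (n < 0 && errno == ENOBUFS);
    return n;
}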
Btw how big are the packets that you are trying to send?
sendto() on OS X really is non-blocking (that is what the M_DONTWAIT flag is for).
I suggest you use a stream-based connection and just receive the whole data on the other side using the MSG_WAITALL flag of the recv function. If your data has a strict structure, then it is simple: just pass the correct size to recv. If not, then first send a fixed-size control packet with the size of the next chunk of data, and then the data itself. On the receiver side, you would wait for the fixed-size control packet and then for the amount of data indicated by the control packet.
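A rough sketch of that scheme (helper names are mine; byte-order conversion and partial-send handling are omitted for brevity):

#include <stdint.h>
#include <sys/socket.h>

// Sender: a fixed-size control packet (the payload size) then the data.
void send_framed(int fd, const void *data, uint32_t len)
{
    send(fd, &len, sizeof(len), 0);  // control packet: size of next chunk
    send(fd, data, len, 0);          // the data itself
}

// Receiver: MSG_WAITALL makes recv() block until the full count arrives.
uint32_t recv_framed(int fd, void *buf, uint32_t bufsize)
{
    uint32_t len = 0;
    recv(fd, &len, sizeof(len), MSG_WAITALL);  // fixed-size control packet
    if (len > bufsize)
        len = bufsize;                         // illustrative clamp only
    recv(fd, buf, len, MSG_WAITALL);           // exactly len bytes of data
    return len;
}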
I am trying to see if I can get a response from the ctrader server.
I am getting no response and it seems to hang at "s.recv(1024)", so I am not sure what could be going wrong here. I have limited experience with sockets and network coding.
I have checked my login credentials and all seems OK.
Note: I am aware of many FIX engines that are available for this purpose, but I wanted to try this on my own.
ctrader FIX guides
require 'socket'

hostname = "h51.p.ctrader.com"
port = 5201

# Constructing a FIX logon message to see what the ctrader server returns.
# Target ('|' shown in place of the SOH delimiter, credentials redacted):
# 8=FIX.4.4|9=123|35=A|49=demo.ctrader.*******|56=cServer|57=QUOTE|50=QUOTE|34=1|52=20220127-16:49:31|98=0|108=30|553=********|554=*******|10=155|
SOH = "\x01" # real FIX fields are delimited with SOH (0x01), not '|'
body = ["35=A", "49=demo.ctrader.*******", "56=cServer", "57=QUOTE",
        "50=QUOTE", "34=1", "52=#{Time.now.utc.strftime('%Y%m%d-%H:%M:%S')}",
        "98=0", "108=30", "553=********", "554=*******"].join(SOH) + SOH
header = "8=FIX.4.4#{SOH}9=#{body.length}#{SOH}"
checksum = format("%03d", (header + body).bytes.sum % 256)
fix_message = header + body + "10=#{checksum}#{SOH}"

s = TCPSocket.new(hostname, port)
s.send(fix_message.force_encoding("ASCII"), 0)
print fix_message
puts s.recv(1024)
s.close
Sockets are by default blocking on read. When you call recv that call will block if no data is available.
The fact that your recv call is not returning anything, would be an indication that the server did not send you any reply at all; the call is blocking waiting for incoming data.
If you would use read instead, then the call will block until all the requested data has been received.
So calling recv(1024) will block until 1 or more bytes are available.
Calling read(1024) will block until all 1024 bytes have been received.
Note that you cannot rely on a single recv call to return a full message, even if the sender sent you everything you need. Multiple recv calls may be required to construct the full message.
Also note that the FIX protocol gives the msg length at the start of each message. So after you get enough data to see the msg length, you could call read to ensure you get the rest.
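Sketched in C-style code (the same idea applies in Ruby): read until the length is known, then block for the rest. parse_remaining() is a hypothetical helper, and the FIX details (the 7-byte "10=xxx" trailer after the body) are glossed over:

#include <sys/socket.h>

// Hypothetical helper: once tag 9 (BodyLength) is visible in buf, returns
// how many bytes of the message are still missing; -1 if header incomplete.
int parse_remaining(const char *buf, int got);

int recv_fix_message(int fd, char *buf, int bufsize)
{
    int got = 0, need = -1;
    while (need < 0) {                       // read until length is known
        int n = recv(fd, buf + got, bufsize - got, 0);
        if (n <= 0)
            return -1;                       // error or connection closed
        got += n;
        need = parse_remaining(buf, got);
    }
    if (recv(fd, buf + got, need, MSG_WAITALL) != need)
        return -1;                           // connection died mid-message
    return got + need;                       // total bytes in the message
}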
If you do not want your recv or read calls to block when no data (or incomplete data) is available, then you need to use non-blocking IO for your reads instead. This is a complex topic which you need to research, but it is often used when you don't want to block and need to read arbitrary-length messages. You can look here for some tips.
Another option would be to use something like EventMachine instead, which makes it easier to deal with sockets in situations like this, without having to worry about blocking in your code.
In the simple case of NdisSend, I free the packet and its buffer if the returned status is not NDIS_STATUS_PENDING.
So what is the proper way of freeing the buffers that I allocated with NdisAllocatePacket and NdisAllocateBuffer in the case of NdisSendPackets?
On the NdisSendPackets page, MSDN says:
"The caller of NdisSendPackets should test the returned status for each packet in such an array individually when its ProtocolSendComplete function is called with the completion Status"
But on the ProtocolSendComplete page, MSDN says:
"This function is a required driver function that completes the processing of a protocol-initiated send previously passed to NdisSendPackets or NdisSend, which returned NDIS_STATUS_PENDING."
So, based on what I gathered, SendComplete is called only for packets that returned NDIS_STATUS_PENDING.
But why does it say that I "should test the returned status" in SendComplete, if it is only called for NDIS_STATUS_PENDING?!
The main problem I am facing right now is that even though I loop through every packet in the array after NdisSendPackets and free only the ones that are not NDIS_STATUS_PENDING, I got a bugcheck in NdisFreeBuffer for "Attempt to free pool which was already freed" (BAD_POOL_CALLER), even though I am sure I did not free it myself beforehand. I moved all my buffer freeing into SendComplete and I no longer get the bugcheck, so it seems my assumption was right and SendComplete gets called for every packet, not just the ones that return NDIS_STATUS_PENDING.
And another side question:
Do all NICs support multi-packet send/receive? Do I need to worry when I use a packet array with NdisSendPackets and NdisMIndicateReceivePacket instead of a single packet, or do all NICs properly support multi-packet send/receive?
Based on the following code, I built a version of an echo server, but with a threaded delay. This was built because I've noticed that upon initial connection, my first send is sent back to the client, but the client does not receive it until a second send. My real-world use case is that I need to send messages to the server, do a lot of processing, and then send the result back... say 10-30 seconds later (could be hours in some cases).
http://www.wangafu.net/~nickm/libevent-book/Ref8_listener.html
So here is my code. For brevity's sake, I have only included the libevent-related code, not the threading code or other stuff. When debugging, a new connection is set up, the string buffer is filled properly, and debugging reveals that the writes go through successfully.
http://pastebin.com/g02S2RTi
But I only receive the echo from the send-before-last. I send numbers from the client to validate this, and when I send a 1 from the client, I receive nothing from the server via echo... even though the server is definitely writing to the buffer using evbuffer_add (I have also tried bufferevent_write_buffer).
When I then send a 2 from the client, I receive the 1 from the previous send. It's like my writes are being cached... I have turned off Nagle.
So, my question is: Does libevent cache sends using the following method?
evbuffer_add( outputBuffer, buffer, length );
Is there a way to flush this cache? Is there some other method to mark the cache as finished or complete? Can I force a send? It never sends on its own... I have even put in delays. Replacing evbuffer_add with send() works perfectly every time.
Most likely you are affected by the Nagle algorithm: it buffers outgoing data before sending it to the network. Take a look at this article: TCP/IP options for high-performance data transmission.
Here is an example of how to disable it:
#include <sys/socket.h>    /* setsockopt */
#include <netinet/tcp.h>   /* TCP_NODELAY */

int flag = 1;
int result = setsockopt(sock,            /* socket affected */
                        IPPROTO_TCP,     /* set option at TCP level */
                        TCP_NODELAY,     /* name of option */
                        (char *) &flag,  /* the cast is historical cruft */
                        sizeof(int));    /* length of option value */
I am programming a client application that sends TCP/IP packets to a server. Because of timeout issues, I want to start a timer as soon as the ACK packet is returned (so there can be no timeout while the packet has not yet reached the server). I want to use the WinAPI.
Setting the socket to blocking mode doesn't help, because the send call returns as soon as the data is written into the buffer (if I am not mistaken). Is there a way to make send block until the ACK is returned, or is there another way to do this without writing my own TCP implementation?
Regards
It sounds like you want the minimum implementation that achieves your goal. In that case you should set your socket to blocking and, after the send (which blocks until all the data is handed off), call recv, which in turn will block until the acknowledgement is received or the server end closes or aborts the connection. Note that TCP's own ACK segments are never visible to recv(); the acknowledgement here has to be an application-level reply sent by the server.
If you wanted to go further, you would have to structure your client application to support asynchronous communication. There are a few techniques with varying degrees of complexity: polling with select() is simple, the event model using WSAEventSelect/WSAWaitForMultipleEvents is challenging, and the I/O completion port model is very complicated.
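A minimal sketch of the simple blocking variant (Winsock; as noted above, this waits for an application-level reply from the server, since TCP's ACK segments never surface through recv()):

#include <winsock2.h>

// Send the request, then block until the server's application-level
// acknowledgement arrives or the connection is closed/aborted.
int SendAndWaitForReply(SOCKET s, const char *buf, int len)
{
    char reply[64];
    if (send(s, buf, len, 0) == SOCKET_ERROR)  // blocks until all queued
        return -1;
    return recv(s, reply, sizeof(reply), 0);   // blocks until reply/close
}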
Pseudocode... It will wait until the ack is received, after which you can call whatever functionality you want. I chose a made-up function, send_data(), which would then send information over the socket after receiving the ack.
data = ''
while True:
    readable, writable, errors = select([socket], [], [])
    if socket in readable:
        data += recv(socket)
        if is_ack(data):
            timer.start()  # not sure why you want this
            break
send_data(socket)
I am looking to send a large message, > 1 MB, through the Windows sockets send API. Is there an efficient way to do this? I do not want to loop and send the data in chunks. I have read somewhere that you can increase the socket buffer size and that could help. Could anyone please elaborate on this? Any help is appreciated.
You should, and in fact must, loop to send the data in chunks.
As explained in Beej's networking guide:
"send() returns the number of bytes actually sent out—this might be less than the number you told it to send! See, sometimes you tell it to send a whole gob of data and it just can't handle it. It'll fire off as much of the data as it can, and trust you to send the rest later."
This implies that even if you set the packet size to 1 MB, the send() function may not send all of it, and you are forced to loop until the bytes sent by your calls to send() add up to the total you are trying to send. In fact, the greater the size of the packet, the more likely it is that send() will not send it all.
Aside from all that, you don't want to send 1MB packets because if they get lost, you will have to transmit the entire 1MB packet again, whereas if you lost a 1K packet, retransmitting it is not a big deal.
In summary, you will have to loop your send() calls, and the receiver will even have to loop their recv() calls too. You will likely need to prepend a small header to each packet to tell the receiver how many bytes are being sent so the receiver can loop the appropriate number of times.
I suggest you take a look at Beej's network guide for more detailed info about send() and recv() and how to deal with this problem. It can be found at http://beej.us/guide/bgnet/output/print/bgnet_USLetter.pdf
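For the receiving side, the corresponding loop might look like this (a sketch; it assumes the sender prepends the total byte count as described above):

#include <winsock2.h>

// recv() may return fewer bytes than requested, so loop until exactly
// `len` bytes have arrived or the connection is closed.
int RecvAll(SOCKET s, char *buf, int len)
{
    int got = 0;
    while (got < len) {
        int n = recv(s, buf + got, len - got, 0);
        if (n <= 0)
            return -1;   // error or connection closed early
        got += n;
    }
    return got;
}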
Why don't you want to send it in chunks?
That's the way to do it in 99% of the cases.
What makes you think that sending in chunks is inefficient? The OS is likely to chunk large "send" calls anyway, and may coalesce small ones.
Likewise on the receiving side the client should be looping anyway as there's no guarantee of getting all the data in one go.
The Windows sockets subsystem is not obliged to send the whole buffer you provide anyway. You can't force it, since some network-level protocols have an upper limit on the packet size.
As a practical matter, you can actually allocate a large buffer and send in one call using Winsock. If you are not messing with socket buffer sizes, the buffer will generally be copied into kernel mode for sending anyway.
There is a theoretical possibility that it will return without sending everything, however, so you really should loop for correctness. The chunks you send should, however, be large (64 KB or thereabouts) to avoid repeated kernel transitions.
If you want to do a loop after all, you can use this C++ code:
#include <winsock2.h>
#include <cstring>   // memcpy
#include <string>

#define DEFAULT_BUFLEN 1452

// Sends the string in DEFAULT_BUFLEN-sized chunks, looping until every
// byte has been handed to send(). Returns the number of bytes actually
// sent; throws the WSA error code on failure.
int SendStr(const SOCKET &ConnectSocket, const std::string &str, int totalLen)
{
    char sndbuf[DEFAULT_BUFLEN];
    int count = 0;

    while (count < totalLen) {
        int len = totalLen - count;            // bytes left to send
        if (len > DEFAULT_BUFLEN)
            len = DEFAULT_BUFLEN;              // cap at one chunk
        memcpy(sndbuf, str.data() + count, len);
        int iResult = send(ConnectSocket, sndbuf, len, 0);
        if (iResult == SOCKET_ERROR)
            throw WSAGetLastError();
        if (iResult <= 0)
            break;                             // connection closed
        count += iResult;                      // advance by bytes sent
    }
    return count;
}
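A hypothetical call site (assuming WSAStartup has succeeded, ConnectSocket is a connected stream socket, and <cstdio> is included for fprintf):

std::string payload(2 * 1024 * 1024, 'x');   // 2 MB of dummy data
try {
    int sent = SendStr(ConnectSocket, payload, (int)payload.size());
    if (sent < (int)payload.size())
        fprintf(stderr, "connection closed after %d bytes\n", sent);
}
catch (int wsaError) {
    fprintf(stderr, "send failed with WSA error %d\n", wsaError);
}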