Is it possible to have Winsock's send function block until the packet being sent is received at the other end?
My end goal is to be able to send 5-20 MB files while still being able to send small 1 KB packets on the same connection. So I was thinking I would have send block until the receiver has received the packet; that way, if another small packet is queued, it won't be stuck waiting behind the rest of the large file transfer.
Just use two separate TCP connections. They can even connect to the same host and port; the local port number at your end will be different.
Stop-and-wait handshaking over any network (i.e. not loopback) would be miserably slow.
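For illustration, a minimal sketch of the two-connection idea (shown here in Go; the host, port, and the bulk/control naming are assumptions, not from the answer above):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Two independent TCP connections to the same server: one for
	// bulk file data, one for small control packets. The address
	// is a placeholder.
	bulk, err := net.Dial("tcp", "example.com:9000")
	if err != nil {
		panic(err)
	}
	defer bulk.Close()

	control, err := net.Dial("tcp", "example.com:9000")
	if err != nil {
		panic(err)
	}
	defer control.Close()

	// Same remote endpoint, different local ports.
	fmt.Println(bulk.LocalAddr(), control.LocalAddr())
}

A large transfer in flight on the bulk connection never delays a small write on the control connection, because the two connections have independent send buffers.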
You could send the size of your packages instead:

struct MyNetworkPackage {
    int size;
    char* data;
};
If you begin by sending the size, the other side can deduce which data belongs to which package.
I've tried to explain Winsock in this answer as well.
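The same length-prefix idea as a runnable sketch (the struct above is C; this example uses Go and assumes a 4-byte big-endian size field):

package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"net"
)

// writePackage sends one length-prefixed package: a 4-byte big-endian
// size followed by the payload, mirroring the struct above.
func writePackage(conn net.Conn, data []byte) error {
	var size [4]byte
	binary.BigEndian.PutUint32(size[:], uint32(len(data)))
	if _, err := conn.Write(size[:]); err != nil {
		return err
	}
	_, err := conn.Write(data)
	return err
}

// readPackage reads the size first, so the receiver knows exactly
// where this package ends and the next one begins.
func readPackage(conn net.Conn) ([]byte, error) {
	var size [4]byte
	if _, err := io.ReadFull(conn, size[:]); err != nil {
		return nil, err
	}
	data := make([]byte, binary.BigEndian.Uint32(size[:]))
	_, err := io.ReadFull(conn, data)
	return data, err
}

func main() {
	a, b := net.Pipe() // in-memory connection, just for the demo
	go writePackage(a, []byte("hello"))
	pkg, _ := readPackage(b)
	fmt.Printf("received %q\n", pkg)
}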
No. And you don't need to do this.
I am not sure what we would achieve with this. Mixing traffic on a TCP stream won't serve any purpose. Can you explain what exactly you need to do? Above all, with TCP we can never be sure that the other application has actually received the packet (it might just be sitting in its TCP buffers).
It seems that 'send' will block if there is a previous packet still being sent.
I'm looking for a proper way to have one goroutine send request packets to specific servers while a second goroutine receives the responses and handles them, perhaps even spawning a new goroutine to handle each response.
The architecture of the game is that there are multiple masterservers, which can be asked for IP lists of registered servers.
After getting the IPs and ports from the masterservers, each IP is sent a request for its data: server name, map, players, and so on.
Also, are there better ways to handle this?
Currently I am creating a goroutine per request that also waits for a response afterwards.
The wait for a response times out after 35 ms. On each retry, 1.2 times the previous number of request packets are sent as a small burst, and the timeout is doubled.
I'd like to know if there are better strategies that have proven to be more robust and have a lower latency, that are not too complex.
Edit:
I only create the client-side sockets. If there is no better approach, I would have the client send UDP request packets whose sender address is that of a different socket, so that the answers arrive on that separate socket, which acts somewhat like a server collecting all the response packets. The point is to separate the sending socket from the receiving socket.
This question is tagged client-server because one of the sockets is supposed to act like a server, even though all it does is receive the expected answers to request packets sent by the client socket.
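A minimal sketch of one way to shape this, assuming a single shared UDP socket (the addresses, request payload, and 2-second overall deadline are placeholders):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// One unconnected UDP socket can both send to many servers and
	// receive all of their replies, and *net.UDPConn is safe for
	// concurrent use, so sender and receiver can be separate
	// goroutines sharing the same socket.
	conn, err := net.ListenUDP("udp", &net.UDPAddr{})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	servers := []string{"192.0.2.1:27015", "192.0.2.2:27015"}
	request := []byte("\xff\xff\xff\xffgetinfo")

	// Sender goroutine: one request per server.
	go func() {
		for _, s := range servers {
			if addr, err := net.ResolveUDPAddr("udp", s); err == nil {
				conn.WriteToUDP(request, addr)
			}
		}
	}()

	// Receiver: collect replies until an overall deadline, handing
	// each one to its own goroutine for parsing.
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 1500)
	for {
		n, from, err := conn.ReadFromUDP(buf)
		if err != nil {
			break // deadline reached (or socket closed)
		}
		resp := append([]byte(nil), buf[:n]...)
		go func(from *net.UDPAddr, resp []byte) {
			fmt.Printf("%v answered with %d bytes\n", from, len(resp))
		}(from, resp)
	}
}

Since replies from every server arrive on the same unconnected socket, the separate server-like receiving socket described in the edit isn't strictly necessary.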
I need to be able to validate TOS/DSCP marks on response data from a set of HTTP servers. Given a list of target URLs to test, is there a way in Go to generate the HTTP request and then examine the response's TCP packet details in order to obtain the TOS value?
My assumption at this point is that it may require creating a socket and then dynamically generating a TCP packet that contains the HTTP request payload. I've been searching around for libraries that would aid in this task, but haven't found anything specific yet.
Note: a simple TCP connection will not provide enough data - the target servers in question will alter TOS/DSCP marks dynamically based on the HTTP server name (so essentially, a single physical server will respond with different TOS marks depending on the vHost requested), so it is important to be able to verify the TOS on actual HTTP response packets, and not something simple like a ping. The TOS values in the TCP 3-way handshake cannot be trusted either - it must be a packet containing the HTTP data.
I did end up solving this problem using gopacket/pcap and net/http.
In a nutshell, I wrote a function that creates a channel and then calls a goroutine that does the actual packet capture and parsing. The goroutine passes the captured TOS value back over the channel; the original function performs the HTTP request and then reads the channel to get the TOS result. Still a bit of a work in progress, but so far this solution seems to be working fairly well.
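A trimmed-down sketch of that approach with gopacket/pcap and net/http (the device name, BPF filter, and 5-second timeout are assumptions to adapt):

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
	"github.com/google/gopacket/pcap"
)

// tosOfResponse starts a capture goroutine that reports the TOS byte
// over a channel, then makes the HTTP request and waits for the result.
func tosOfResponse(url, device, filter string) (uint8, error) {
	handle, err := pcap.OpenLive(device, 1600, false, pcap.BlockForever)
	if err != nil {
		return 0, err
	}
	defer handle.Close()
	if err := handle.SetBPFFilter(filter); err != nil { // e.g. "tcp and src port 80"
		return 0, err
	}

	tosCh := make(chan uint8, 1)
	go func() {
		src := gopacket.NewPacketSource(handle, handle.LinkType())
		for pkt := range src.Packets() {
			// Report the TOS/DSCP byte of the first matching IPv4 packet.
			if l := pkt.Layer(layers.LayerTypeIPv4); l != nil {
				tosCh <- l.(*layers.IPv4).TOS
				return
			}
		}
	}()

	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()

	select {
	case tos := <-tosCh:
		return tos, nil
	case <-time.After(5 * time.Second):
		return 0, fmt.Errorf("no matching packet captured")
	}
}

func main() {
	tos, err := tosOfResponse("http://example.com/", "eth0", "tcp and src port 80")
	if err != nil {
		panic(err)
	}
	fmt.Printf("TOS: 0x%02x\n", tos)
}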
One server - ZMQ_ROUTER, many clients - ZMQ_DEALER
How can the server (ZMQ_ROUTER) send a message to all clients (ZMQ_DEALER)?
UPD:
I know there is the PUB-SUB pattern, and that is really what I need. But I want to use only the current ROUTER-DEALER sockets. Is that possible?
Yes, but it won't be the answer you would like to hear. I don't think there is a flag or socket option for this. What you can do:
Track the connected dealers manually, then loop and send the same payload to every connected dealer, as in the sketch below. If you send large messages, you can zero-copy the payload so you don't have to allocate the memory every time.
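A minimal sketch of that bookkeeping, shown with the pebbe/zmq4 Go binding (the binding, endpoint, and payload are my own choices); note that a ROUTER can only address a dealer after receiving at least one message from it:

package main

import (
	zmq "github.com/pebbe/zmq4"
)

func main() {
	router, err := zmq.NewSocket(zmq.ROUTER)
	if err != nil {
		panic(err)
	}
	defer router.Close()
	router.Bind("tcp://*:5555")

	known := map[string]bool{} // dealer identities seen so far

	for {
		// A ROUTER receive arrives as [identity, payload...]; record
		// the identity so this dealer can be addressed later.
		msg, err := router.RecvMessage(0)
		if err != nil || len(msg) < 2 {
			continue
		}
		known[msg[0]] = true

		// "Broadcast": loop over every identity seen so far and send
		// the same payload, prefixed with that dealer's identity.
		for id := range known {
			router.SendMessage(id, "update for everyone")
		}
	}
}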
I am using QTcpSocket to connect to a TCP server (which is running on Ubuntu). The server sends, at minimum, a 1-byte packet every 40 ms. My application is real-time, so it is important that I receive data as fast as possible, even at the cost of extra network traffic.
Once I have connected a TCP client from Windows, I start receiving packets. However, the readyRead() signal from the QTcpSocket is only emitted once every 200 ms (with 5 bytes in the packet). I have looked at the traffic in Wireshark, and the data really is arriving as 5-byte packets.
However, using QTcpSocket on Mac (the exact same code, in fact), I get individual packets every time: all of my 1-byte packets arrive as single-byte packets, which is great.
I tried creating a raw Windows socket (not using QTcpSocket) and got identical behaviour to QTcpSocket on Windows.
What is the difference that lets the Mac socket receive packets at a much higher time resolution? Is there something I can set in setsockopt() that will prevent this 200 ms buffering from occurring?
I am aware that setting TCP_NODELAY on the server side will probably solve my problem, but seeing as the Mac TCP client works as intended, there must be a way to get the same behaviour on Windows.
Setting mySocket->setSocketOption(QAbstractSocket::LowDelayOption, 1); on the server side is the only way I have found to remedy this problem.
For others who stumble upon this coming from search engines:
The above (correct) answer by oggmonster can also be described by:
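// Disable Nagle's algorithm so small segments are sent immediately
// instead of being coalesced: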
int on = 1;
if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&on, sizeof(on)))
{
    return -1;
}
You need to acknowledge each byte of data you receive to give the reply ACKs some data to piggyback on. Talk to whoever designed your protocol.
Trying to answer questions like "why does it work on X and not on Y" is only useful when one of the behaviors is actually incorrect. If the protocol has no application-level acknowledgements, then both behaviors are correct. If one of them shouldn't be correct, then the protocol needs a mechanism to control that, such as application-layer acknowledgements; if it doesn't have one, the protocol is broken. Trying to figure out why a broken protocol doesn't work is pointless: it doesn't work because it's broken.
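As an illustration, a sketch of an application-level acknowledgement (in Go; the 1-byte framing and the ACK byte are assumptions). The receiver answers every payload it reads, so the TCP ACK can piggyback on real reply data instead of waiting for the delayed-ACK timer:

package main

import (
	"io"
	"net"
)

// serve reads one-byte payloads and answers each with a one-byte
// application-level ACK, giving the kernel's TCP ACK something to
// piggyback on.
func serve(conn net.Conn) {
	defer conn.Close()
	buf := make([]byte, 1)
	for {
		if _, err := io.ReadFull(conn, buf); err != nil {
			return
		}
		// Handle buf[0] here, then acknowledge it.
		if _, err := conn.Write([]byte{0x06}); err != nil { // ASCII ACK
			return
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:9000") // placeholder address
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go serve(conn)
	}
}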
I need to write an Order Manager that routes client (stock, FX, whatever) orders to the proper exchange. The clients want to send orders but know nothing about FIX or other proprietary protocols, only an internal (normalized) format for sending orders. I have applications (servers) that each connect through FIX/Binary/etc. connections to each FIX/etc. provider.

I would like a broker program between the clients and the servers that takes a normalized order and turns it into the proper format for a given FIX/etc. provider, and takes messages from the servers and turns them back into the normalized format for the clients. It is OK for the clients to specify a route, but it is up to the broker program to carry messages about that order back and forth between clients and servers. So somehow the output [fills, partial fills, errors, etc.] from the servers has to be routed back to the right client.
I have studied the ZMQ topologies, and REQ->ROUTER->DEALER doesn't work [the code works - I mean it is the wrong topology] since the servers are not identical.
//This topology doesn't work because the servers are not identical
#include "zhelpers.hpp"

int main (int argc, char *argv[])
{
    // Prepare our context and sockets
    zmq::context_t context(1);
    zmq::socket_t frontend(context, ZMQ_ROUTER);
    zmq::socket_t backend(context, ZMQ_DEALER); // ZMQ_ROUTER here? Can't get it to work
    frontend.bind("tcp://*:5559");
    backend.bind("tcp://*:5560");

    // Start built-in device
    zmq::device(ZMQ_QUEUE, frontend, backend);
    return 0;
}
I thought that maybe a ROUTER->ROUTER topology is correct instead, but I can't get the code to work: the clients send orders but never get responses back, so I must be doing something wrong. I thought that using ZMQ_IDENTITY was the correct thing to do, but I can't get that to work either, and it seems as if ZMQ is moving away from ZMQ_IDENTITY?
Can someone give a simple example of three ZMQ programs [not in separate threads, three separate processes] that show the correct way to do this?
Look at the MajorDomo example in the Guide: http://zguide.zeromq.org/page:all#toc71
You'd use a worker pool per exchange.
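To make the routing concrete, here is a sketch of a broker along those lines using the pebbe/zmq4 Go binding (the endpoints, route names, and the [client id, route, order] frame layout are assumptions; clients are assumed to be DEALER sockets):

package main

import (
	zmq "github.com/pebbe/zmq4"
)

func main() {
	// Client-facing ROUTER: remembers which client sent each order.
	frontend, _ := zmq.NewSocket(zmq.ROUTER)
	defer frontend.Close()
	frontend.Bind("tcp://*:5559")

	// One backend socket per exchange, keyed by the client's route frame.
	backends := map[string]*zmq.Socket{}
	for route, endpoint := range map[string]string{
		"FIX-A": "tcp://localhost:5560",
		"FIX-B": "tcp://localhost:5561",
	} {
		s, _ := zmq.NewSocket(zmq.DEALER)
		s.Connect(endpoint)
		backends[route] = s
	}

	poller := zmq.NewPoller()
	poller.Add(frontend, zmq.POLLIN)
	for _, s := range backends {
		poller.Add(s, zmq.POLLIN)
	}

	for {
		polled, _ := poller.Poll(-1)
		for _, p := range polled {
			if p.Socket == frontend {
				// From a client: [client id, route, order]. Forward
				// [client id, order] to the chosen exchange server.
				msg, _ := frontend.RecvMessage(0)
				if len(msg) == 3 {
					if backend, ok := backends[msg[1]]; ok {
						backend.SendMessage(msg[0], msg[2])
					}
				}
			} else {
				// From a server: [client id, fill/partial/error]. The
				// client id frame lets the ROUTER deliver it back to
				// the right client.
				msg, _ := p.Socket.RecvMessage(0)
				if len(msg) == 2 {
					frontend.SendMessage(msg[0], msg[1])
				}
			}
		}
	}
}

Each exchange-facing server must send the client-id frame back with every fill or error; that frame is what lets the frontend ROUTER deliver the message to the right client.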
Responding to:
ROUTER->ROUTER topology instead is correct, but I can't get the code to work
My understanding is that ZMQ sockets come in pairs to enable certain patterns:
PAIR
REQ/REP
PUB/SUB
PUSH/PULL
Only the PAIR socket type can talk to another socket of type PAIR, and it behaves much like a normal socket.
Every other socket type has a complementary socket type it communicates with. For example, a REQ socket can talk to a REP socket, but a REQ socket cannot talk to another REQ socket (see the minimal example below).
In ROUTER/DEALER, ROUTER talks to DEALER; the guide does describe ROUTER-to-ROUTER as possible, but it is tricky, since each side must know the other's identity before it can send anything.
My understanding could be wrong, but this is what I have gathered from the examples so far.
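For example, a minimal valid pairing (REQ with REP), sketched with the pebbe/zmq4 Go binding (the inproc endpoint name is arbitrary):

package main

import (
	"fmt"

	zmq "github.com/pebbe/zmq4"
)

func main() {
	// REQ may only talk to REP (or ROUTER); this is the smallest
	// working pairing.
	rep, err := zmq.NewSocket(zmq.REP)
	if err != nil {
		panic(err)
	}
	defer rep.Close()
	rep.Bind("inproc://pairing")

	req, _ := zmq.NewSocket(zmq.REQ)
	defer req.Close()
	req.Connect("inproc://pairing")

	req.Send("ping", 0)
	msg, _ := rep.Recv(0)
	rep.Send("pong", 0)
	reply, _ := req.Recv(0)
	fmt.Println(msg, reply) // prints: ping pong
}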