EMI/UCP is a protocol for communicating with SMS gateways.
I am unsure how to handle the network connection.
Do you open a socket for each logical bundle of data (and close it afterwards, of course), or do you reuse the same socket?
How do you handle out-of-sequence responses in both cases?
My use case is sending a couple of SMS messages with status requests (submit short message); each generates four messages of traffic (an operation 51 and its ACK, plus an operation 53 from the gateway and its ACK).
If I open two sockets, can I be confident that each dialogue is performed on the same socket, or is that a false assumption?
If I use a single socket, how do I distinguish the messages of the two conversations? From the OAdC (originator address) only?
Answering the last question: you have to match the timestamp in the UCP 51 response message (ACK). It is the SCTS field.
The correlation id is the pair SCTS and AdC (address code: the recipient's phone number).
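A minimal sketch of that correlation in Python. The "AdC:SCTS" layout of the ACK's SM field and the helper names here are assumptions for illustration; check the EMI/UCP specification for your gateway's exact field positions.

```python
# Sketch: correlate UCP operation-51 ACKs with operation-53 delivery
# notifications using (AdC, SCTS) as the key. Field layouts are
# simplified assumptions, not a full EMI/UCP parser.

def ack_key(ack_sm_field: str):
    """The SM field of a UCP 51 positive ACK carries 'AdC:SCTS'."""
    adc, scts = ack_sm_field.split(":", 1)
    return adc, scts

class Correlator:
    def __init__(self):
        self.pending = {}  # (adc, scts) -> original submit info

    def on_submit_ack(self, sm_field: str, submit_info):
        # Remember which submit this (AdC, SCTS) pair belongs to.
        self.pending[ack_key(sm_field)] = submit_info

    def on_delivery_notification(self, adc: str, scts: str):
        # Return the submit this operation-53 notification matches, if known.
        return self.pending.pop((adc, scts), None)

c = Correlator()
c.on_submit_ack("0612345678:090196103258", submit_info="msg-1")
assert c.on_delivery_notification("0612345678", "090196103258") == "msg-1"
```

With this key, both conversations can safely share one socket: each operation 53 is matched to its submit by (AdC, SCTS) rather than by which socket it arrived on.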
Assume we have a pub-sub pattern in ZMQ with many subscribers, one publisher, and a message of 3 GB. Does the publisher send n × O(m), where n is the number of subscribers and m is the 3 GB message size, or does it upload the 3 GB only once, with the subscribers somehow downloading it, thereby avoiding n × O(m)?
According to the zmq docs, pub-sub is a multicast pattern:
"ZeroMQ’s low-level patterns have their different characters. Pub-sub
addresses an old messaging problem, which is multicast or group
messaging"
So I expect not n × O(m) but just O(m). Am I correct?
It all depends on the transport you choose, not just the ZeroMQ pattern (in this case pub/sub).
If you choose tcp, there will be one copy of the data per subscriber sent from the host you are running on, because TCP has a separate connection to each one. If you choose pgm (reliable multicast), one copy is sent from the host, and it ends up being fanned out to each subscriber by a router downstream.
There is also a newer radio/dish pattern that supports basic multicast but you lose the publisher side subscription filtering.
In general you only send once through the PUB socket and it gets sent to all subscribers.
See docs here: https://zeromq.org/socket-api/#publish-subscribe-pattern
PUB socket
A PUB socket is used by a publisher to distribute data. Messages sent are distributed in a fan out fashion to all connected peers. This socket type is not able to receive any messages.
When a PUB socket enters the mute state due to having reached the high water mark for a subscriber, then any messages that would be sent to the subscriber in question shall instead be dropped until the mute state ends. The send function never blocks for this socket type.
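To illustrate the single send call fanning out, here is a sketch using pyzmq over the inproc transport (the endpoint name and subscriber count are arbitrary; over tcp the library still writes one copy of the bytes per subscriber connection, as described above):

```python
import time
import zmq

ctx = zmq.Context()

# One publisher, bound first so inproc subscribers can connect to it.
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://demo")

subs = []
for _ in range(3):
    s = ctx.socket(zmq.SUB)
    s.connect("inproc://demo")
    s.setsockopt(zmq.SUBSCRIBE, b"")  # subscribe to everything
    subs.append(s)

time.sleep(0.2)  # let subscriptions propagate (slow-joiner problem)

pub.send(b"payload")  # a single send call on the PUB socket ...

for s in subs:
    assert s.recv() == b"payload"  # ... is delivered to every subscriber
```

The application calls send once; how many copies actually cross the wire is decided by the transport underneath, which is the point of the answer above.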
I'm looking for a proper way to have one goroutine send request packets to specific servers while a second goroutine receives and handles the responses, perhaps even creating a new goroutine to handle each response.
The architecture of the game is that there are multiple master servers, which can be asked for IP lists of registered servers.
After getting the IPs and ports from the master servers, each IP is sent a request for its data: server name, map, players, and so on.
Also, are there better ways to handle this?
Currently I create a goroutine per request, which then also waits for the response.
Waiting for a response times out after 35 ms, after which 1.2 times the previous number of request packets is sent, giving a small burst of requests; the timeout is also doubled on every retry.
I'd like to know whether there are strategies that have proven more robust and lower-latency without being too complex.
Edit:
I only create the client-side sockets. If there is no better approach, I would have a client that sends UDP request packets carrying a different socket's address as the sender, so that the answers are received on a separate socket that acts somewhat like a server, collecting all the response packets. This separates the sending socket from the receiving socket.
This question is tagged client-server because one of the sockets is supposed to act like a server, even though all it does is receive the expected answers to the request packets sent by the client socket.
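The retry schedule described above (35 ms initial timeout doubled per retry, burst size growing by a factor of 1.2) can be separated out as a pure function. This Python sketch only illustrates the schedule itself, not the Go socket handling; the rounding of the burst size is an assumption:

```python
import math

def retry_schedule(max_retries: int,
                   initial_timeout_ms: float = 35.0,
                   burst_growth: float = 1.2):
    """Yield (attempt, burst_size, timeout_ms) for each retry round.

    The timeout doubles on every retry; the number of request packets
    per round grows by a factor of 1.2 (rounded up), as described above.
    """
    timeout = initial_timeout_ms
    burst = 1.0
    for attempt in range(max_retries):
        yield attempt, math.ceil(burst), timeout
        burst *= burst_growth
        timeout *= 2.0

rounds = list(retry_schedule(4))
assert rounds[0] == (0, 1, 35.0)
assert rounds[1] == (1, 2, 70.0)   # ceil(1.2) == 2, timeout doubled
assert rounds[3][2] == 280.0
```

Keeping the schedule as a deterministic function makes it easy to tune and unit-test independently of the goroutines doing the actual I/O.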
If we send two messages over the same HTML5 WebSocket a split millisecond apart from each other,
is it theoretically possible for the messages to arrive in a different order than they were sent?
Short answer: No.
Long answer:
WebSocket runs over TCP, so at that level EJP's answer applies. WebSocket can be "intercepted" by intermediaries (such as WS proxies): those are allowed to reorder WebSocket control frames (i.e. WS pings/pongs), but not message frames when no WebSocket extension is in place. If there is a negotiated extension in place that in principle allows reordering, then an intermediary may only do so if it understands the extension and the reordering rules that apply.
It's not possible for them to arrive in your application out of order. Anything can happen on the network, but TCP will only present you the bytes in the order they were sent.
At the network layer, TCP is supposed to guarantee that messages arrive in order. At the application layer, errors in the code can cause your messages to be handled out of order in the logic of your program. The culprit could be the network stack your application uses or your application code itself.
If you asked me whether my Node.js application can guarantee sending and receiving messages in order, I'd have to say no. I've run WebSocket applications over WiFi under high latency and low signal; it causes very strange behavior, as if packets were dropped and messages arrived out of sequence.
This article is a good read: https://samsaffron.com/archive/2015/12/29/websockets-caution-required
I am experimenting with ZeroMQ where I want to create a server that does :
REQ-PIPELINE-REPLY
I want to sequentially receive data query requests, push them through an inproc pipeline to parallelise the data query, and have a sink merge the data back. After merging the data, the sink sends it back as the reply to the request.
Is this possible? How would it look? I am not sure if the push/pull will preserve the client's address for the REP socket to send back to.
Assuming that each client has only a single request out at any one time.
Is this possible?
Yes, but with different socket types.
How would it look?
(in C)
What you may want to do is shift the external server socket from a ZMQ_REP socket to a ZMQ_ROUTER socket. ROUTER/DEALER sockets carry identities, which allow you to have multiple requests in your pipeline and still respond to each one correctly.
The Asynchronous Client/Server Pattern:
http://zguide.zeromq.org/php:chapter3#The-Asynchronous-Client-Server-Pattern
The only hitch is that you need to manage the multiple parts of the ZMQ message. The first part is the identity, the second is an empty delimiter, and the third is the data. As long as you send the identity frame back with the reply, it will guide your response's data to the correct client. I wrapped my requests in a struct:
struct msg {
    zmq_msg_t *identity; /* ROUTER-assigned peer identity frame */
    zmq_msg_t *nullMsg;  /* empty delimiter frame */
    zmq_msg_t *data;     /* payload frame */
};
Make sure to use zmq_msg_more when receiving messages, and to set the ZMQ_SNDMORE flag correctly when sending.
I am not sure if the push/pull will preserve the client's address for the REP socket to send back to.
You are correct: a push/pull pattern would not allow the return address to be preserved across multiple clients.
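To make the identity mechanics concrete, here is a sketch of the ROUTER end in pyzmq (the answer above uses libzmq in C; the frame layout [identity, empty delimiter, data] is the same, and the endpoint name is arbitrary):

```python
import zmq

ctx = zmq.Context()

# External-facing ROUTER instead of REP, so replies can be routed by identity.
router = ctx.socket(zmq.ROUTER)
router.bind("inproc://frontend")

# A REQ client, standing in for one of many.
req = ctx.socket(zmq.REQ)
req.connect("inproc://frontend")
req.send(b"query")

# ROUTER prepends the peer identity; the REQ socket adds the empty delimiter.
identity, empty, data = router.recv_multipart()
assert empty == b""
assert data == b"query"

# ... here (identity, data) would travel through the PUSH/PULL pipeline
# to the sink, which hands the merged result back to this thread ...

# Replying: lead with the identity frame so the ROUTER routes it correctly.
router.send_multipart([identity, b"", b"result for " + data])
assert req.recv() == b"result for query"
```

Because the identity travels with the data through the pipeline, the sink's merged reply finds its way back to the right client even with several requests in flight.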
From the WebSocket RFC 6455, it is possible for control frames to be interleaved with the fragments of a message.
I don't understand the need for this, as it makes the design more complex for both the sending and the receiving side.
Currently, a control frame can be "Close", "Ping" or "Pong" (everything else is reserved).
If the control frame is "Close", receiving the rest of the fragmented message is useless, so no interleaving would be required (the fragmenting side could just send the "Close" opcode and stop sending further fragments, since nothing may be sent after a "Close").
If the control frame is "Ping" or "Pong", it does not make any sense either. The fragmenting side is sending data to the client, so why would it ping the client to ask whether it is alive (the send system call already gives it that information)? And why would it reply to a ping immediately, while it is actually in the middle of sending data to the client?
So why do we need this mechanism (interleaved control frames) at all?
It is there to detect half-open connections: http://blog.stephencleary.com/2009/05/detection-of-half-open-dropped.html
The other side could be sending you data while being unable to receive yours. By interleaving pings and pongs, you can check that the other end can still understand your messages and reply to them.
It does not make things much more complex: you have to read delimited frames anyway; when you encounter a control frame, take the appropriate action and continue reading frames.
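That reading loop can be sketched as follows. The (fin, opcode, payload) tuples are a simplified stand-in for real RFC 6455 frame parsing; only the interleaving logic is shown:

```python
# Simplified frames: (fin, opcode, payload). Real wire-format parsing per
# RFC 6455 is omitted; the point is control frames arriving mid-fragmentation.
TEXT, CONT, PING, PONG = 0x1, 0x0, 0x9, 0xA

def read_message(frames, send_pong):
    """Reassemble one fragmented message, answering pings in between."""
    parts = []
    for fin, opcode, payload in frames:
        if opcode == PING:        # control frame: act on it, keep reading
            send_pong(payload)
            continue
        parts.append(payload)     # data frame (first fragment or continuation)
        if fin:                   # FIN bit set: message is complete
            return b"".join(parts)
    raise ValueError("stream ended mid-message")

pongs = []
frames = [
    (False, TEXT, b"hel"),
    (True,  PING, b"hb"),   # interleaved ping (control frames always have FIN)
    (True,  CONT, b"lo"),
]
assert read_message(iter(frames), pongs.append) == b"hello"
assert pongs == [b"hb"]
```

The receiver answers the ping without abandoning the partially reassembled message, which is exactly the behavior the interleaving rule exists to permit.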
http://www.whatwg.org/specs/web-apps/current-work/multipage/network.html#ping-and-pong-frames
Ping and Pong frames
The WebSocket protocol specification defines Ping and Pong frames that
can be used for keep-alive, heart-beats, network status probing,
latency instrumentation, and so forth. These are not currently exposed
in the API.
User agents may send ping and unsolicited pong frames as desired, for
example in an attempt to maintain local network NAT mappings, to
detect failed connections, or to display latency metrics to the user.
User agents must not use pings or unsolicited pongs to aid the server;
it is assumed that servers will solicit pongs whenever appropriate for
the server's needs.