I'm trying to use lighttpd (v1.4.49) with mod_wstunnel.
$HTTP["url"] =~ "^/websocket" {
wstunnel.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "50007" ) ) )
wstunnel.frame-type = "text"
wstunnel.ping-interval = 30
}
The backend TCP server sends single-line JSON messages that should be received by the WebSocket client's onmessage handler.
However, sometimes two successive messages are concatenated by mod_wstunnel and delivered (and passed to onmessage) as one message.
Is there any "end-of-message" token I could send to explicitly "tell" mod_wstunnel that the message is complete?
Thanks,
Sam
You should fix your application if it depends on framing at the WebSocket layer. See https://www.rfc-editor.org/rfc/rfc6455#section-5.4
Unless specified otherwise by an extension, frames have no semantic
meaning. An intermediary might coalesce and/or split frames, if no
extensions were negotiated by the client and the server or if some
extensions were negotiated, but the intermediary understood all the
extensions negotiated and knows how to coalesce and/or split frames
in the presence of these extensions. One implication of this is that
in absence of extensions, senders and receivers must not depend on
the presence of specific frame boundaries.
Your backend is sending JSON and knows nothing about WebSockets, so it cannot specify how mod_wstunnel should frame the data. Your client app should not depend on the WebSocket framing, but if you want to mitigate this on the server side, your backend could pause between sending each JSON message. It would be better to fix your app to process complete JSON messages one at a time.
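If the client can be changed, the usual fix is to treat the incoming WebSocket messages as a byte stream and reassemble application messages on a delimiter. A minimal Python sketch (the class name is illustrative), assuming each JSON message from the backend is newline-terminated:

```python
import json

class NewlineJSONReassembler:
    """Buffers incoming WebSocket message payloads and yields one
    object per newline-terminated JSON line, regardless of how the
    intermediary split or coalesced the frames."""
    def __init__(self):
        self._buf = ""

    def feed(self, payload):
        # Append whatever arrived, then emit every complete line.
        self._buf += payload
        while "\n" in self._buf:
            line, self._buf = self._buf.split("\n", 1)
            if line.strip():
                yield json.loads(line)

r = NewlineJSONReassembler()
# Two JSON messages coalesced into a single WebSocket message:
msgs = list(r.feed('{"a": 1}\n{"b": 2}\n'))
```

The same feed() call also handles the opposite case, where one JSON message is split across several WebSocket messages: the partial line simply stays in the buffer until its newline arrives.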
Trying to see if I can get a response from the ctrader server.
I'm getting no response and it seems to hang at "s.recv(1024)", so I'm not sure what could be going wrong here. I have limited experience with sockets and network coding.
I have checked my login credentials and all seems OK.
Note: I am aware of many FIX engines that are available for this purpose, but wanted to try this on my own.
ctrader FIX guides
require 'socket'
hostname = "h51.p.ctrader.com"
port = 5201
#constructing a fix message to see what ctrader server returns
#8=FIX.4.4|9=123|35=A|49=demo.ctrader.*******|56=cServer|57=QUOTE|50=QUOTE|34=1|52=20220127-16:49:31|98=0|108=30|553=********|554=*******|10=155|
# bodylengthsum, bodylength and checksumcalc are computed elsewhere (not shown)
fix_message = "8=FIX.4.4|9=#{bodylengthsum}|" + bodylength + "10=#{checksumcalc}|"
s = TCPSocket.new(hostname, port)
s.send(fix_message.force_encoding("ASCII"),0)
print fix_message
puts s.recv(1024)
s.close
Sockets are by default blocking on read. When you call recv that call will block if no data is available.
The fact that your recv call is not returning anything is an indication that the server did not send you any reply at all; the call is blocking while waiting for incoming data.
If you used read instead, the call would block until all of the requested data had been received.
So calling recv(1024) will block until 1 or more bytes are available.
Calling read(1024) will block until all 1024 bytes have been received.
Note that you cannot rely on a single recv call to return a full message, even if the sender sent you everything you need. Multiple recv calls may be required to construct the full message.
Also note that the FIX protocol gives the msg length at the start of each message. So after you get enough data to see the msg length, you could call read to ensure you get the rest.
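That length-based approach can be sketched as follows, here in Python for illustration (the function name is made up, and a file-like stream stands in for a real socket, which you would wrap with sock.makefile('rb')). Note that real FIX uses the SOH byte (0x01) as the field delimiter, not "|":

```python
import io

SOH = b"\x01"

def read_fix_message(stream):
    """Read one FIX message by first parsing BodyLength (tag 9),
    then reading exactly the remaining bytes: the body plus the
    7-byte trailer ("10=xxx" + SOH)."""
    header = b""
    # Read byte-by-byte until both the 8= and 9= fields are complete.
    while header.count(SOH) < 2:
        b = stream.read(1)
        if not b:
            raise EOFError("connection closed mid-header")
        header += b
    # header now looks like b"8=FIX.4.4\x019=10\x01"
    body_len = int(header.split(SOH)[1].split(b"=")[1])
    rest = stream.read(body_len + 7)  # body plus "10=xxx\x01" trailer
    return header + rest

# Demo with an in-memory stream instead of a live connection:
raw = b"8=FIX.4.4\x019=10\x0135=A\x0149=X\x0110=000\x01"
msg = read_fix_message(io.BytesIO(raw))
```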
If you do not want your recv or read calls to block when no data (or incomplete data) is available, then you need to use non-blocking IO for your reads. This is a complex topic which you need to research, but it is often used when you don't want to block and need to read arbitrary-length messages. You can look here for some tips.
Another option would be to use something like EventMachine instead, which makes it easier to deal with sockets in situations like this, without having to worry about blocking in your code.
According to the spec for websockets protocol 13 (RFC 6455), the payload length for any given frame can be 0.
frame-payload-data ; n*8 bits in
; length, where
; n >= 0
I am building a websocket client to this spec, but when I send echo.websocket.org a frame with an empty payload, I get nothing back. I experience the same using their GUI.
This is troublesome for me, since the way I'm building my client somewhat requires me to send empty frames when I FIN a multi-frame message.
Is this merely a bug in the Echo Test server? Do a substantial number of server implementations drop frames with empty payloads?
And if this is a bug in Echo Test, does anyone know how I might get in touch with them? The KAAZING site only has tech support contact info for their own products.
If you send a data frame with no payload, there is nothing to echo back, so this behaviour is correct. However, it would also be standard-conformant to send back a data frame with zero payload. The main question is whether the application layer is informed at all when a data frame with no payload is received; this is probably not the case in most implementations.
With TCP this is similar: a TCP keepalive is a segment with zero payload. It is ACKed by the remote TCP stack, but the application layer is not informed about it (i.e. select() does not return and a read() syscall remains blocked), which is the expected behaviour.
An application-layer protocol should not rely on frame boundaries to structure the data, but should expect a stream of bytes without regard to how they are transported.
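For reference, a zero-length frame is trivially constructible under RFC 6455. A minimal Python sketch of a masked client-to-server frame (the function name is illustrative, and only the short length form is handled):

```python
import os

def build_client_frame(opcode, payload=b"", fin=True):
    """Build a masked client-to-server WebSocket frame (RFC 6455).
    A zero-length payload is legal; the frame is then just the 2-byte
    header plus the 4-byte masking key."""
    assert len(payload) < 126  # keep the sketch to the short length form
    header = bytes([(0x80 if fin else 0) | opcode, 0x80 | len(payload)])
    mask = os.urandom(4)
    masked = bytes(b ^ mask[i % 4] for i, b in enumerate(payload))
    return header + mask + masked

# A final, empty continuation frame ending a fragmented message:
frame = build_client_frame(0x0, b"", fin=True)
```

Such a frame is 6 bytes on the wire: FIN bit set, opcode 0 (continuation), mask bit set, payload length 0.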
I just tried the echo test on websocket.org with empty payloads and it seems to work fine using Chrome, Safari and Firefox (latest versions of each). Which browser are you using?
Btw, that demo program doesn't abide by any "echo protocol" (afaik), so there's no formal specification that dictates what to do on empty data in a WebSocket set of frames.
If you need help using WebSocket, there are Kaazing forums: http://developer.kaazing.com/forums.
When I am using the native WebSocket API, I can see just the payload in my Chrome console for sockets.
But when I use socket.io with their emit event, I can see some strange numbers before my actual payload. I understand that the colors mean that you either sent or received the data, but what do numbers like 42, 3, 2, 430, 420, 5 mean?
Is there a place I can get a full list of these numbers with descriptions?
The code which generates it is kind of big, so I just post small snippets.
Client side always look like this:
socket.emit('joinC', room, function(color){ ... });
Server side looks like this:
io.sockets.in(room).emit('moveS', {...});
I know you asked a while ago, but the information remains for those who are researching.
I did an analysis with reverse engineering in version 2.3.0 (socket.io) and 3.4.2 (engine.io) and got the following:
The first number is the type of communication for engine.io, using the enumerator:
0 = "open"
1 = "close"
2 = "ping"
3 = "pong"
4 = "message"
5 = "upgrade"
6 = "noop"
The second number is the type of action for socket.io, using the enumerator:
0 = "CONNECT"
1 = "DISCONNECT"
2 = "EVENT"
3 = "ACK"
4 = "ERROR"
5 = "BINARY_EVENT"
6 = "BINARY_ACK"
There is other optional information that can be passed on, such as namespace and ID, but I will not go into that part.
After these codes, the parser expects a JSON array, where index 0 is the name of the event and index 1 is the argument.
So the instruction 42["moveS",{"from":"g1","to":"f3"}] is a message for engine.io (4) and an event for socket.io (2), which will emit the "moveS" action passing the JSON {"from":"g1","to":"f3"} as a parameter (actually JSON.parse('{"from":"g1","to":"f3"}')).
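Putting the two enumerations together, a rough Python sketch of decoding such a console string (names are illustrative; this ignores namespaces and ack IDs, so it only handles simple packets like 42[...]):

```python
import json
import re

ENGINE_IO = {0: "open", 1: "close", 2: "ping", 3: "pong",
             4: "message", 5: "upgrade", 6: "noop"}
SOCKET_IO = {0: "CONNECT", 1: "DISCONNECT", 2: "EVENT", 3: "ACK",
             4: "ERROR", 5: "BINARY_EVENT", 6: "BINARY_ACK"}

def decode(packet):
    """Split a text packet into engine.io type, socket.io type, and
    the JSON payload that follows the leading digits."""
    m = re.match(r"^(\d)(\d)?(.*)$", packet, re.S)
    etype = ENGINE_IO[int(m.group(1))]
    stype = SOCKET_IO[int(m.group(2))] if m.group(2) else None
    payload = json.loads(m.group(3)) if m.group(3) else None
    return etype, stype, payload

result = decode('42["moveS",{"from":"g1","to":"f3"}]')
```

So a bare "2" decodes as an engine.io ping with no socket.io part, while "42[...]" is an engine.io message carrying a socket.io event.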
Hope this helps. =D
Websockets allow you to send data back and forth over a full-duplex communication channel.
Socket.IO on the other hand is a realtime application framework that uses WebSockets as a transport, adding features like namespacing connections, rooms, fallback to other transports, etc. To build all those features, the messages exchanged back and forth must carry some semantics so that Socket.IO knows what they mean (i.e. what kind of message it is: event, error, etc.) and what to do with them. For that it uses a protocol that frames the message with some codes identifying its semantics. That's what you are seeing with those numbers.
Unfortunately the Socket.IO documentation is very terse and it's hard to understand exactly how those codes are combined and parsed. To get their exact meaning I think one needs to look at the Socket.IO source code.
EDIT from a socket.io Github issue:
This is handled in socket.io-parser and engine.io-parser, which are implementations of socket.io-protocol and engine.io-protocol respectively. You can find the protocol description for socket.io here and for engine.io here.
The encoding sections in these documents are of interest when looking at the actual data that is sent through the transports. The socket.io protocol handles encoding of metadata, like namespaces, into a format the engine.io protocol can handle.
Fairly new to ZeroMQ and trying to get a basic pub/sub to work. When I run the following (sub starting before pub), the publisher finishes but the subscriber hangs, having not received all the messages. Why?
I think the socket is being closed before all the messages have been sent? Is there a way of ensuring all messages are received?
Publisher:
import zmq
import random
import time
import tnetstring
context=zmq.Context()
socket=context.socket(zmq.PUB)
socket.bind("tcp://*:5556")
y=0
for x in xrange(5000):
    st = random.randrange(1,10)
    data = []
    data.append(random.randrange(1,100000))
    data.append(int(time.time()))
    data.append(random.uniform(1.0,10.0))
    s = tnetstring.dumps(data)
    print 'Sending ...%d %s' % (st,s)
    socket.send("%d %s" % (st,s))
    print "Messages sent: %d" % x
    y+=1
print '*** SERVER FINISHED. # MESSAGES SENT = ' + str(y)
Subscriber:
import sys
import zmq
import tnetstring
# Socket to talk to server
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")
filter = "" # get all messages
socket.setsockopt(zmq.SUBSCRIBE, filter)
x=0
while True:
    topic,data = socket.recv().split()
    print "Topic: %s, Data = %s. Total # Messages = %d" % (topic,data,x)
    x+=1
In ZeroMQ, clients and servers always try to reconnect; they won't go down if the other side disconnects (because in many cases you'd want them to resume talking if the other side comes up again). So in your test code, the client will just wait until the server starts sending messages again, unless you stop recv()ing messages at some point.
In your specific instance, you may want to investigate using socket.close() and context.term(); context.term() blocks until all queued messages have been sent. You also have the slow-joiner problem: you can add a sleep after the bind but before you start publishing. This works in a test case, but you will want to understand whether that is a real solution or a band-aid.
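A minimal pyzmq sketch of both mitigations, with the subscriber in the same process for illustration (the port number is arbitrary, and the sleep is the band-aid discussed above):

```python
import time
import zmq

context = zmq.Context()

pub = context.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5599")

sub = context.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5599")
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to everything

# Crude slow-joiner mitigation: give the SUB time to finish connecting
# before the first send, otherwise early messages are silently dropped.
time.sleep(0.5)

for x in range(5):
    pub.send_string("topic message-%d" % x)

received = [sub.recv_string() for _ in range(5)]

# close() both sockets, then term(); term() blocks until queued
# messages are flushed (subject to each socket's LINGER setting).
pub.close()
sub.close()
context.term()
```

Without the sleep, some or all of the five messages would typically be lost, because PUB drops messages for subscribers whose connection handshake has not completed yet.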
You need to think of the PUB/SUB pattern like a radio. The sender and receiver are both asynchronous. The Publisher will continue to send even if no one is listening. The subscriber will only receive data if it is listening. If the network goes down in the middle, the data will be lost.
You need to understand this in order to design your messages. For example, if you design your messages to be "idempotent", it doesn't matter if you lose data. An example of this would be a status type message. It doesn't matter if you have any of the previous statuses. The latest one is correct and message loss doesn't matter. The benefits to this approach is that you end up with a more robust and performant system. The downsides are when you can't design your messages this way.
Your example includes a type of message that requires no loss. Another type of message would be transactional. For example, if you just sent the deltas of what changed in your system, you would not be able to lose the messages. Database replication is often managed this way which is why db replication is often so fragile. To try to provide guarantees, you need to do a couple things. One thing is to add a persistent cache. Each message sent needs to be logged in the persistent cache. Each message needs to be assigned a unique id (preferably a sequence) so that the clients can determine if they are missing a message. A second socket (ROUTER/REQ) needs to be added for the client to request the missing messages individually. Alternatively, you could just use the secondary socket to request resending over the PUB/SUB. The clients would then all receive the messages again (which works for the multicast version). The clients would ignore the messages they had already seen. NOTE: this follows the MAJORDOMO pattern found in the ZeroMQ guide.
An alternative approach is to create your own broker using the ROUTER/DEALER sockets. When the ROUTER socket saw each DEALER connect, it would store its ID. When the ROUTER needed to send data, it would iterate over all client IDs and publish the message. Each message should contain a sequence so that the client can know what missing messages to request. NOTE: this is a sort of reimplementation of Kafka from linkedin.
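The gap-detection step of the sequence-numbering scheme described above can be sketched as follows (the names are illustrative, not from any library):

```python
def missing_sequences(last_seen, new_seq):
    """Return the sequence numbers the subscriber never received,
    given the last sequence it saw and the one that just arrived.
    These are the IDs it would request over the secondary
    ROUTER/REQ recovery socket."""
    return list(range(last_seen + 1, new_seq))

# Subscriber saw message 4, then message 8 arrived:
missing = missing_sequences(4, 8)
```

Consecutive sequence numbers produce an empty list, so the recovery socket is only used when a gap is actually detected.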
I'm working on a Ruby TCP client/server app using GServer and TCPSocket. I've run into a problem that I don't understand. My TCPSocket client successfully connects to my GServer, but I can only send data using puts. Calls to TCPSocket.send or TCPSocket.write do nothing. Is there some magic that I'm missing?
tcp_client = TCPSocket.new( ipaddr, port )
tcp_client.puts( 'Z' ) # -> GServer receives "Z\n"
But if I use write or send...
tcp_client = TCPSocket.new( ipaddr, port )
tcp_client.write( 'Z' ) # -> nothing is received
tcp_client.send( 'Z' ) # -> nothing is received
Thanks for the help
Additional information:
The behavior is the same on Linux & Windows.
Flushing the socket after write doesn't change the behavior.
Are you sure the problem isn't on the server side? Are you using some method to read that expects a string or something ending in "\n"?
With buffering taken care of in the previous posts, to address the question of whether the data is being sent, consider capturing the data on the wire using something like Wireshark. If the data you are sending is seen on the wire but the server doesn't act on it, the problem is on the server side.
Otherwise, if the data isn't going onto the line, TCP may hold onto data to avoid sending a single segment with only a few bytes in it (see Nagle's Algorithm). Depending on your OS or TCP vendor you may have different behaviour, but most TCP stacks support the TCP_NODELAY option which may help get the data out in a more timely manner.
tcp_client.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
This can help debugging, but typically shouldn't be left in production code if throughput is higher priority than responsiveness.
Try explicitly flushing:
tcp_client = TCPSocket.new( ipaddr, port )
tcp_client.write( 'Z' )
tcp_client.send( 'Z' )
tcp_client.flush
This way, the output is buffered at most only until the point at which you decide it should be sent out.
Hi there, the reason is related to the fact that puts automatically appends a newline (LF) to your string.
If you want to use send or write, you need to add it yourself, so for instance that would be:
tcp_client.send( "Z\r\n", 0 )
I had the same problem, so after reading from the socket I had to explicitly strip the trailing "\n" by doing the following (String#chomp would do the same):
client_socket.gets.gsub(/\n$/, '')