Vert.x SockJS disconnects after a few seconds of the server being busy - websocket

I need some help understanding where the disconnects occur (SockJS, Vert.x) and how the timeouts can be configured.
I am creating a SockJSServer along with an event bus bridge. The problem I observe is frequent WebSocket disconnects. Looking at the WebSocket frames, I see pings every 5 seconds and heartbeats at the 1/2-second interval I configured (so that setting does seem to take effect). However, once the heartbeats are delayed for longer than 5 seconds, a disconnect arrives with the message c[3000,'Go away']. As far as I can tell this happens when the server is busy (doing something else on a separate thread).
I have searched the Vert.x documentation and looked over the Vert.x code and found a few configuration parameters (which appear to differ across versions and documentation).
.putNumber("ping_interval", 120000)
.putNumber("session_timeout", 1200000)
.putNumber("heartbeat_period",500)
To be absolutely sure, I have tried different configs, but none of them appeared to have any impact. At this point I think I have hit a dead end and need some help.
Vert.x version: 2.1P3
Server snippet:
final SockJSServer server = vertx.createSockJSServer(httpServer);
server.bridge(new JsonObject().putString("prefix", "/eventbus")
                  .putNumber("ping_interval", 120000)
                  .putNumber("session_timeout", 1200000)
                  .putNumber("heartbeat_period", 500),
              new JsonArray().addObject(new JsonObject()),
              new JsonArray().addObject(new JsonObject()));
Client code:
var eventBus = new EventBus('//hostX:12001/eventbus');

When you receive a SOCKET_IDLE event, you must not complete the event with a "true" parameter: for this event type, "true" means proceed with the default action, which is to close the idle socket. Complete it with "false" instead to keep the connection open:
SockJSHandler.create(vertx, handlerOptions).bridge(options, event -> {
    boolean result = true;
    switch (event.type()) {
        case SOCKET_CREATED:
            LOGGER.info("Socket created");
            break;
        case SOCKET_IDLE:
            // completing with false rejects the default action, so the idle socket stays open
            result = false;
            break;
        case SOCKET_CLOSED:
            LOGGER.info("Socket closed");
            break;
    }
    event.complete(result);
});
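For reference, the handlerOptions and options objects used above are not shown. Here is a minimal sketch of how they might be built, assuming Vert.x 3.x (where the SockJS heartbeat and session timeout live on SockJSHandlerOptions, and the bridge ping timeout that drives SOCKET_IDLE lives on BridgeOptions); the millisecond values are placeholders, and depending on your exact 3.x version PermittedOptions may come from io.vertx.ext.bridge instead:
import io.vertx.ext.web.handler.sockjs.*;

// Sketch only: tune the values to how long the server is allowed to stay busy.
SockJSHandlerOptions handlerOptions = new SockJSHandlerOptions()
        .setHeartbeatInterval(2000)    // SockJS heartbeat frames, in ms
        .setSessionTimeout(120000);    // how long a session survives without a client connection, in ms

BridgeOptions options = new BridgeOptions()
        .setPingTimeout(120000)        // bridge ping timeout, in ms
        .addInboundPermitted(new PermittedOptions())    // permit-all, like the empty JsonObject in 2.x
        .addOutboundPermitted(new PermittedOptions());
As far as I understand, the ping timeout is the criterion behind SOCKET_IDLE, so it is the value to raise if rejecting the event as shown above is not enough.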

Related

NetMQ PUSH socket blocks indefinitely when it reaches HWM

I'm using NetMQ (Nuget 3.3.2.2) on .NET 4.5 and I have a single fast generator process with a PUSH socket, and a single slow consumer process using a PULL socket. If I send enough messages to hit the sending HWM, the sending process blocks the thread indefinitely.
Some contrived (generator) code which illustrates the problem:
using (var ctx = NetMQContext.Create())
using (var pushSocket = ctx.CreatePushSocket())
{
    pushSocket.Connect("tcp://127.0.0.1:42404");
    for (int i = 1; i <= 100000; i++)
    {
        // generate the body inside the loop so 'i' is in scope
        var template = GenerateMessageBody(i);
        pushSocket.SendMoreFrame("SampleMessage").SendFrame(Messages.SerializeToByteArray(template));
        if (i % 1000 == 0)
            Console.WriteLine("Sent " + i + " messages");
    }
    Console.WriteLine("All finished");
    Console.ReadKey();
}
On my configuration, this will usually report that it has sent about 5000 or 6000 messages, and will then simply block. If I set the send HWM to a large value (or 0), then it sends all of the messages as expected.
It looks like it's waiting to receive another command before it tries again, here (SocketBase.TrySend):
// Oops, we couldn't send the message. Wait for the next
// command, process it and try to send the message again.
// If timeout is reached in the meantime, return EAGAIN.
while (true)
{
ProcessCommands(timeoutMillis, false);
From what I've read in the 0MQ guide, blocking on a full PUSH socket is the correct behaviour (and is what I want it to do); however, I would expect it to recover once the consumer has cleared its queue.
Short of using some sort of TrySend pattern and dealing with the block myself, is there some option I can set or some other facility I can use to have the PUSH socket attempt to resend blocked messages periodically?

Sending events between two Meteor servers

Is there a way to send events between two Meteor servers? I know I can connect Server1 to Server2 (and vice versa) using DDP.connect and just call methods between the two servers. This will not work for me, because one of my servers (Server1) will be at my house behind a dynamic IP and firewall. DDP.connect requires a url. What is the best way, if any, to communicate between the two servers? I am thinking the only way to do this would be something like Socket.io where Server1 connects to Server2. I'm not sure if this can be done in Meteor though. Thanks.
You can do this by using DDP.connect to connect server 1 to server 2 on startup, then subscribing to a collection that server 2 publishes, for example:
On Server 2 (known URL):
var Events = new Meteor.Collection("events");
Meteor.publish("events", function () {
  return Events.find({});
});
On Server 1 (at your house):
var EventConnection = DDP.connect("<server 2 URL>");
var Events = new Meteor.Collection("events", {connection: EventConnection});
EventConnection.subscribe("events");
Events.find({}).observe({
  added: function (newEvent) {
    // do something with newEvent
  }
});
Then, whenever server 2 adds an object to the Events collection, you will get it on server 1 via the connection. Watch out, though - every time server 1 connects to server 2 it will get all previous events as well. If you don't want that to happen, you need to use the ready callback on subscribe:
Revised code for Server 1:
var EventConnection = DDP.connect("<server 2 URL>");
var Events = new Meteor.Collection("events", {connection: EventConnection});
EventConnection.subscribe("events", function () {
  Events.find({}).observe({
    added: function (newEvent) {
      // do something with newEvent
    }
  });
});

routing files with zeromq (jeromq)

I'm trying to implement a "file dispatcher" on ZMQ (actually JeroMQ, as I'd rather avoid JNI).
What I need is to load balance incoming files to processors:
each file is handled only by one processor
files are potentially large so I need to manage the file transfer
Ideally I would like something like https://github.com/zeromq/filemq but
with a push/pull behaviour rather than publish/subscribe
being able to handle the received file rather than writing it to disk
My idea is to use a mix of taskvent/tasksink and asyncsrv samples.
Client side:
one PULL socket to be notified of a file to be processed
one DEALER socket to handle the (async) file transfer chunk by chunk
Server side:
one PUSH socket to dispatch incoming file (names)
one ROUTER socket to handle file requests
a few DEALER workers managing the file transfers for clients and connected to the router via an inproc proxy
My first question is: does this seem like the right way to go? Anything simpler maybe?
My second question is: my current implementation gets stuck on sending out the actual file data.
Clients are notified by the server and issue a request.
The server worker gets the request and writes the response back to the inproc queue, but the response never seems to leave the server (I can't see it in Wireshark) and the client is stuck on poller.poll awaiting the response.
It's not a matter of sockets being full and dropping data; I'm starting with very small files sent in one go.
Any insight?
Thanks!
==================
Following raffian's advice I simplified my code, removing the extra push/pull socket (it does make sense now that you say it).
I'm left with the "non-working" socket!
Here's my current code. It has many flaws that are out of scope for now (client ID, next chunk, etc.).
For now, I'm just trying to get both sides talking to each other, roughly in the sequence below.
Server
object FileDispatcher extends App
{
  val context = ZMQ.context(1)
  // server is the frontend that pushes filenames to clients and receives requests
  val server = context.socket(ZMQ.ROUTER)
  server.bind("tcp://*:5565")
  // backend handles clients requests
  val backend = context.socket(ZMQ.DEALER)
  backend.bind("inproc://backend")
  // files to dispatch given in arguments
  args.toList.foreach { filepath =>
    println(s"publish $filepath")
    server.send("newfile".getBytes(), ZMQ.SNDMORE)
    server.send(filepath.getBytes(), 0)
  }
  // multithreaded server: router hands out requests to DEALER workers via an inproc queue
  val NB_WORKERS = 1
  val workers = List.fill(NB_WORKERS)(new Thread(new ServerWorker(context)))
  workers foreach (_.start)
  ZMQ.proxy(server, backend, null)
}
class ServerWorker(ctx: ZMQ.Context) extends Runnable
{
  override def run()
  {
    val worker = ctx.socket(ZMQ.DEALER)
    worker.connect("inproc://backend")
    while (true)
    {
      val zmsg = ZMsg.recvMsg(worker)
      zmsg.pop // drop inner queue envelope (?)
      val cmd = zmsg.pop // cmd is used to continue/stop
      cmd.toString match {
        case "get" =>
          val file = zmsg.pop.toString
          println(s"clientReq: cmd: $cmd , file:$file")
          // 1- brute force: ignore cmd and send full file in one go!
          worker.send("eof".getBytes, ZMQ.SNDMORE) // header indicates this is the last chunk
          val bytes = io.Source.fromFile(file).mkString("").getBytes // dirty read, for testing only!
          worker.send(bytes, 0)
          println(s"${bytes.size} bytes sent for $file: " + new String(bytes))
        case x => println("cmd " + x + " not implemented!")
      }
    }
  }
}
Client
object FileHandler extends App
{
  val context = ZMQ.context(1)
  // client is notified of new files then fetches file from server
  val client = context.socket(ZMQ.DEALER)
  client.connect("tcp://*:5565")
  val poller = new ZMQ.Poller(1) // "poll" responses
  poller.register(client, ZMQ.Poller.POLLIN)
  while (true)
  {
    poller.poll
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    // header is the command/action
    cmd.toString match {
      case "newfile" => startDownload(data.toString)                  // message content is the filename to fetch
      case "chunk"   => gotChunk(data.toString, zmsg.pop.getData)     // filename, chunk
      case "eof"     => endDownload(data.toString, zmsg.pop.getData)  // filename, last chunk
    }
  }
  def startDownload(filename: String)
  {
    println("got notification: start download for " + filename)
    client.send("get".getBytes, ZMQ.SNDMORE) // command header
    client.send(filename.getBytes, 0)
  }
  def gotChunk(filename: String, bytes: Array[Byte])
  {
    println("got chunk for " + filename + ": " + new String(bytes)) // callback the user here
    client.send("next".getBytes, ZMQ.SNDMORE)
    client.send(filename.getBytes, 0)
  }
  def endDownload(filename: String, bytes: Array[Byte])
  {
    println("got eof for " + filename + ": " + new String(bytes)) // callback the user here
  }
}
On the client, you don't need PULL with DEALER.
DEALER is PUSH and PULL combined, so use DEALER only; your code will be simpler.
Same goes for the server: unless you're doing something special, you don't need PUSH with ROUTER, since ROUTER is bidirectional.
the server worker gets the request, and writes the response back to the inproc queue but the response never seems to go out of the server (can't see it in wireshark) and the client is stuck on the poller.poll awaiting the response.
Code Problems
In the server, you're dispatching files with args.toList.foreach before starting the proxy; this is probably why nothing is leaving the server. Start the proxy first, then use it. Also, once you call ZMQ.proxy(..), the call blocks indefinitely, so you'll need a separate thread to send the filepaths.
The client may have an issue with the poller. The typical pattern for polling is:
ZMQ.Poller items = new ZMQ.Poller(1);
items.register(receiver, ZMQ.Poller.POLLIN);
while (true) {
    items.poll(TIMEOUT);
    if (items.pollin(0)) {
        message = receiver.recv(0);
        // handle the message here
    }
}
In the above code we 1) poll until the timeout expires, 2) then check whether the polled socket has messages, and 3) only if it does, read them with receiver.recv(0). In your code, you poll and then drop straight into recv() without checking; you need to check whether the poller reported messages for that socket before calling recv(), otherwise the receive will hang when there are no messages.

How to continuously read JMS messages in a thread and acknowledge them based on their JMSMessageID in another thread?

I've written a continuous JMS message receiver:
Here, I'm using CLIENT_ACKNOWLEDGE because I don't want this thread to acknowledge the messages.
(...)
connection.start();
// not transacted; CLIENT_ACKNOWLEDGE so this thread does not acknowledge automatically
session = connection.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
queue = session.createQueue(QueueId);
receiver = session.createReceiver(queue);
while (true) {
    TextMessage message = (TextMessage) receiver.receive(1000);
    if (message != null) {
        // NB: I can only pass Strings to the other thread
        sendMessageToOtherThread(message.getText(), message.getJMSMessageID());
    }
    // TODO Implement criteria to exit the loop here
}
In another thread, I'll do something as follows (after successful processing).
This runs on a distinct JMS connection, executed simultaneously:
public void AcknowledgeMessage(String messageId) {
    if (this.first) {
        this.connection.start();
        this.session = this.connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        this.queue = this.session.createQueue(this.QueueId);
    }
    QueueReceiver receiver = this.session.createReceiver(this.queue, "JMSMessageID='" + messageId + "'");
    Message AckMessage = receiver.receive(2000);
    receiver.close();
}
It appears that the message is not found (AckMessage is null after the timeout) whereas it does exist in the queue.
I suspect the message is being held by the continuous input thread; indeed, when firing AcknowledgeMessage() alone, it works fine.
Is there a cleaner way to retrieve one message, based on its queue ID and message ID?
Also, I feel like there could be a risk of a memory leak in the continuous reader if it has to remember the messages or IDs for a long time. Is that concern justified?
If I use a QueueBrowser to avoid impacting the acknowledge thread, it looks like I cannot have this continuous input feed. Right?
More context: I'm using ActiveMQ, and the two threads are two custom "Steps" of a Pentaho Kettle transformation.
NB: the code samples are simplified to focus on the issue.
Well, you can't read that message twice, since you have already read it in the first thread.
ActiveMQ will not delete the message, as you have not acknowledged it, but it won't be visible until you drop the JMS connection (I'm not sure whether ActiveMQ also applies a long timeout here).
So you will have to use the original message and call message.acknowledge();.
Note, however, that sessions are not thread safe, so be careful if you do this from two different threads.
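A rough sketch of that idea, assuming the consumer loop from the question is kept: hold on to the received Message objects and let the consuming thread itself call acknowledge() once the worker thread reports a JMSMessageID as processed, so the session is only ever touched from one thread (inFlight, processedIds and the loop shape are illustrative, not an existing API):
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jms.Message;
import javax.jms.TextMessage;

// Shared between the two threads
final Map<String, Message> inFlight = new ConcurrentHashMap<>();
final BlockingQueue<String> processedIds = new LinkedBlockingQueue<>();

// Consumer thread ('receiver' and 'sendMessageToOtherThread' are the ones from the question)
while (true) {
    TextMessage message = (TextMessage) receiver.receive(1000);
    if (message != null) {
        inFlight.put(message.getJMSMessageID(), message);
        sendMessageToOtherThread(message.getText(), message.getJMSMessageID());
    }
    // Acknowledge here, on the thread that owns the session
    String doneId;
    while ((doneId = processedIds.poll()) != null) {
        Message done = inFlight.remove(doneId);
        if (done != null) {
            // Note: with CLIENT_ACKNOWLEDGE this acknowledges every message consumed
            // so far on this session, not only this one.
            done.acknowledge();
        }
    }
}

// Worker thread, after successful processing of a message:
processedIds.add(messageId);
The memory concern from the question then reduces to the size of inFlight, which only grows for messages that are received but never reported back as processed.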

Azure Worker: Read a message from the Azure queue in a mutex way

The run method of my worker role is:
public override void Run()
{
    Message msg = null;
    while (true)
    {
        msg = queue.GetMessage();
        if (msg != null && msg.DequeueCount == 1) {
            //delete message
            ...
            //execute operations
            ...
        }
        else if (msg != null && msg.DequeueCount > 1) {
            //delete message
            ...
        }
        else {
            int randomTime = ...
            Thread.Sleep(randomTime);
        }
    }
}
For my performance tests I would like each message to be analysed by only one worker (I am not considering worker failures).
But it seems, from my tests, that two workers can pick up the same message and both read a DequeueCount equal to 1. Is that possible?
Is there a way to allow only one worker to read a given message, in a "mutex" way?
How are you retrieving the message? If you use PeekMessage(), the message remains visible to all workers. If you use GetMessage(), the message is returned only to the worker that gets it first, but only for the invisibility timeout, either the one you specify or the default of 30 seconds. You have to delete the message before the invisibility timeout expires.
Check out the Queue Service API for more information. I am sure there is something wrong in your code; I use queues and they behave as documented, both in dev storage and in production storage. You may want to explicitly pass a higher visibility timeout to GetMessage, and make sure you do not sleep longer than the visibility timeout.
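To make that concrete, here is a rough sketch of the get/process/delete cycle the answer describes, written with the Azure Storage Java client (the question's worker role is .NET, but the queue semantics are the same; pollOnce and the sleep value are illustrative, not from the question):
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.queue.CloudQueue;
import com.microsoft.azure.storage.queue.CloudQueueMessage;

// Sketch only: 'queue' is an already-initialized CloudQueue.
void pollOnce(CloudQueue queue) throws StorageException, InterruptedException {
    CloudQueueMessage msg = queue.retrieveMessage(); // the message becomes invisible to other workers (default: 30 seconds)
    if (msg != null) {
        if (msg.getDequeueCount() == 1) {
            // execute operations...
        }
        // Delete before the invisibility timeout expires, otherwise the message
        // reappears and another worker will dequeue it (DequeueCount > 1).
        queue.deleteMessage(msg);
    } else {
        Thread.sleep(1000); // placeholder back-off
    }
}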
