Sending events between two Meteor servers - websocket

Is there a way to send events between two Meteor servers? I know I can connect Server1 to Server2 (and vice versa) using DDP.connect and just call methods between the two servers. This will not work for me, because one of my servers (Server1) will be at my house behind a dynamic IP and firewall. DDP.connect requires a url. What is the best way, if any, to communicate between the two servers? I am thinking the only way to do this would be something like Socket.io where Server1 connects to Server2. I'm not sure if this can be done in Meteor though. Thanks.

You can do this by using DDP.connect to connect server 1 to server 2 on startup, then subscribing to a collection that server 2 publishes, for example:
On Server 2 (known URL):
var Events = new Meteor.Collection("events");
Meteor.publish("events", function () {
  return Events.find({});
});
On Server 1 (at your house):
var EventConnection = DDP.connect("<server 2 URL>");
var Events = new Meteor.Collection("events", {connection: EventConnection});
EventConnection.subscribe("events");
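// note: added will also fire once for every event already in the collection
// when observe() is first set up (see the warning below)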
Events.find({}).observe({
  added: function (newEvent) {
    // do something with newEvent
  }
});
Then, whenever server 2 adds an object to the Events collection, you will get it on server 1 via the connection. Watch out, though - every time server 1 connects to server 2 it will get all previous events as well. If you don't want that to happen, you need to use the ready callback on subscribe:
Revised code for Server 1:
var EventConnection = DDP.connect("<server 2 URL>");
var Events = new Meteor.Collection("events", {connection: EventConnection});
EventConnection.subscribe("events", function () {
  Events.find({}).observe({
    added: function (newEvent) {
      // do something with newEvent
    }
  });
});

Related

Vercel serverless function timeout, using functions that take more than 10 sec to execute

I am building a pet project, a multiplayer quiz, using Next.js deployed on Vercel.
Everything works perfectly on localhost, but when I deploy it on Vercel as a cloud function (in an API route) I run into the problem that a serverless function can only run for 10 seconds.
So I want to understand what the best practice is for handling this problem.
A version of the game loop in an API route looks like this:
export async function quizGameProcess(
  roomInitData: InitGameData,
  questions: QuestionInDB[],
) {
  let questionNumber = 0;
  let maxPlayerPoints = 0;
  const pointsToWin = 10;
  while (maxPlayerPoints < pointsToWin) {
    const currentQuestion = questions[questionNumber];
    // Wait 5 seconds before start
    await new Promise(resolve => setTimeout(resolve, 5000));
    // Show question to players for 15 seconds
    await questionShowInRoom(roomInitData, currentQuestion);
    await new Promise(resolve => setTimeout(resolve, 15000)); // <====== Everything works great until this moment
    // Show the right answer for 5 seconds
    await AnswerPushInRoom(roomInitData);
    await new Promise(resolve => setTimeout(resolve, 5000));
    maxPlayerPoints = await countPlayerPoints(roomInitData)
    ...
    questionNumber++
So I need 15 seconds to show players the question, and the cloud function returns an error when it is invoked.
The questionShowInRoom() function just changes a string in the database from:
room = {activeWindow: prepareToStart}
to
room = {activeWindow: question}
After 15 seconds it must change it to:
room = {activeWindow: showAnswer}
So the function must return something within 10 seconds, but as soon as it returns, the route stops executing.
I can't use a VPS because the project must stay as one Next.js project folder, must be easy to maintain in one place, and must be free.
So if I divide the code and make some 'worker', how should it be invoked? By some other route? Isn't that bad practice?
Or should the frontend just poll every second, trying to invoke it until the timestamp difference becomes more than 15 seconds? That looks like a strange solution.
So what is the best practice to handle this problem?

How to safely write to one file from many verticle instances in vert.x 3.2?

Instead of using a logger or database server I'd like to append information to one file from possibly many verticle instances.
There are versions of methods for writing asynchronously to a file.
Can I assume that Vert.x handles the synchronisation between the writes, so that they don't interfere, when using the versions of the methods marked as "async"?
There seems to be a rule that one can rely on Vert.x providing all isolation between concurrent processing out of the box. But is that true in the case of file writes?
Could you please include in the answer a code snippet that shows how to open and write to one file from many verticle instances with the finest possible granularity, e.g. for logging requests?
I wouldn't recommend writing to a single file with many different "writers". For concurrent logging I would stick to the Single Writer principle.
Create a verticle which subscribes to the event bus and listens for messages to be logged. Let's call this verticle Logger; it listens on system.logger:
EventBus eb = vertx.eventBus();
eb.consumer("system.logger", message -> {
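// single writer: only this handler ever touches the log file; handlers of a standard
// verticle run on its event loop one message at a time, so writes never interleave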
// write to file
});
Verticles that want to log something send a message to the Logger verticle:
eventBus.send("system.logger", "foobar");
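// any verticle can safely send to "system.logger"; the string payload is the line to log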
Appending to an existing file works something like this (untested):
vertx.fileSystem().open("file.log", new OpenOptions(), result -> {
    if (result.succeeded()) {
        AsyncFile file = result.result();
        Buffer buff = Buffer.buffer(message); // message from the consumer
        // writeOffset must point at the current end of the file; track it yourself and
        // advance it by buff.length() after each write, otherwise the writes overlap
        file.write(buff, writeOffset, ar -> {
            if (ar.succeeded()) {
                System.out.println("done");
            } else {
                System.err.println("write failed: " + ar.cause());
            }
        });
    } else {
        System.err.println("open file failed " + result.cause());
    }
});
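Putting the two snippets together, a logger verticle might look roughly like this. It's an untested sketch in the same spirit as the code above; the class name, file name and log format are just placeholders:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.file.AsyncFile;
import io.vertx.core.file.OpenOptions;

// Untested sketch: one verticle owns the file, every other verticle just sends messages.
public class LoggerVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.fileSystem().open("app.log", new OpenOptions(), result -> {
            if (result.succeeded()) {
                AsyncFile file = result.result();
                // Messages arrive on this verticle's event loop one at a time,
                // so the writes below are naturally serialised (single writer).
                vertx.eventBus().consumer("system.logger", message -> {
                    file.write(Buffer.buffer(message.body() + "\n"));
                });
            } else {
                System.err.println("open file failed " + result.cause());
            }
        });
    }
}
Note that a freshly opened AsyncFile starts writing at the beginning of the file, so to keep an existing log across restarts you would have to move the write position to the end of the file first.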

Vert.x SockJS disconnects after a few seconds of the server being busy

I need some help understanding where the disconnects occur (SockJS, Vert.x) and how the timeouts can be configured.
I am creating a SockJSServer along with an event bus bridge. The problem I observe is frequent WebSocket disconnects. Looking at the WebSocket frames, I see pings every 5 seconds and heartbeats, which I configured at every half second (that setting does seem to take effect). However, once heartbeats are delayed for longer than 5 seconds, a disconnect follows with the message c[3000,'Go away']. This happens when the server is busy (doing something else on a separate thread).
I have searched the Vert.x documentation, looked over the Vert.x code, and found a few configuration parameters (which appear to differ across versions and documentation):
.putNumber("ping_interval", 120000)
.putNumber("session_timeout", 1200000)
.putNumber("heartbeat_period",500)
To be absolutely sure, I have tried different configurations, but none appeared to have any impact. At this point I think I have hit a dead end and need some help.
Vertx version 2.1P3
Server snippet:
final SockJSServer server = vertx.createSockJSServer(httpServer);
server.bridge(new JsonObject().putString("prefix", "/eventbus")
.putNumber("ping_interval", 120000)
.putNumber("session_timeout", 1200000)
.putNumber("heartbeat_period",500),
new JsonArray().addObject(new JsonObject()),
new JsonArray().addObject(new JsonObject()));
Client code:
var eventBus = new EventBus('//hostX:12001/eventbus');
When you receive a SOCKET_IDLE event, you can't complete the event with a "true" parameter, as it indicates the socket must be closed:
SockJSHandler.create(vertx, handlerOptions).bridge(options, event -> {
    boolean result = true;
    switch (event.type()) {
        case SOCKET_CREATED:
            LOGGER.info("Socket created");
            break;
        case SOCKET_IDLE:
            // don't complete the event with true here, or the bridge will close the idle socket
            result = false;
            return;
        case SOCKET_CLOSED:
            LOGGER.info("Socket closed");
            break;
    }
    event.complete(result);
});
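For reference, the handlerOptions and options objects used above could be built along these lines. This is only a sketch from memory of the Vert.x 3 API, so verify the setter names against the SockJSHandlerOptions and BridgeOptions docs for your version:
// Sketch only: double-check these option names against your Vert.x 3 version.
SockJSHandlerOptions handlerOptions = new SockJSHandlerOptions()
        .setHeartbeatInterval(2000)   // server-side heartbeat, in milliseconds
        .setSessionTimeout(120000);   // how long a session may sit without a receiving connection

// Permit everything on the bridge, mirroring the empty JsonObjects in the Vert.x 2 config above
BridgeOptions options = new BridgeOptions()
        .addInboundPermitted(new PermittedOptions())
        .addOutboundPermitted(new PermittedOptions());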

Lync 2013 - consuming 180 ringing responses from a forked request

Is it possible to configure Lync 2013 to send only a single 180/183 ringing response back upstream when an INVITE to Lync triggers multiple INVITEs to Lync subscriber endpoints that each end up generating a 180/183 message?
In the case of simultaneous ring, I want Lync to consume all these 180s to avoid unnecessary messaging back to the originator INVITE'ing Lync, which is behind an SBC.
It seems to be acting as a forking proxy rather than a B2BUA.
You're right that Lync forks calls. If a user has multiple endpoints, Lync will fork the call to each endpoint, and in return each endpoint will return a ringing response.
You can create an MSPL script to catch 180 responses. Since MSPL is stateless, it requires a backing application (a ServerApplication) that checks whether a 180 response has already been sent for the current call and blocks subsequent ringing responses. On the assumption that the CallID header is identical for all forked requests, you can then decide which responses to send on and which to drop.
A simple MSPL would be something like:
<lc:applicationManifest
    lc:appUri="http://www.contoso.com/DefaultRoutingScript"
    xmlns:lc="http://schemas.microsoft.com/lcs/2006/05">
  <lc:responseFilter reasonCodes="1XX" />
  <lc:proxyByDefault action="true" />
  <lc:splScript><![CDATA[
    if (sipResponse && sipResponse.StatusCode == 180)
    {
        Dispatch("OnResponse");
    }
  ]]></lc:splScript>
</lc:applicationManifest>
Then in your server application you handle the OnResponse event, I imagine something like this:
public void OnResponse(object sender, ResponseReceivedEventArgs e)
{
    if (e.Response.StatusCode == 180)
    {
        // forked requests share the same Call-ID, so use it to spot ringing responses
        // that have already been forwarded for this call
        var callIdHeader = e.Response.AllHeaders.FindFirst(Header.StandardHeaderType.CallID);
        if (callIdHeader != null)
        {
            var callId = callIdHeader.Value;
            if (ShouldSendRingingResponse(callId))
            {
                e.ClientTransaction.ServerTransaction.SendResponse(e.Response);
            }
        }
    }
}

public bool ShouldSendRingingResponse(string callId) { .... }
Then you can create some logic in the ShouldSendRingingResponse function to see whether to send the 180 response or not.
Note that I did not test this, it's just a basic outline of how I would attempt to handle the situation.
There isn't a way to prevent this in Lync itself; however, an AudioCodes SBC is typically deployed as well, and it has an option to handle this scenario.
Multiple 18x:
The device supports the interworking of different support for multiple 18x responses (including 180 Ringing, 181 Call is Being Forwarded, 182 Call Queued, and 183 Session Progress) that are forwarded to the caller. The UA can be configured as supporting only receipt of the first 18x response (i.e., the device forwards only this response to the caller), or receipt of multiple 18x responses (default). This is configured by the IP Profile parameter, 'SBC Remote Multiple 18x Support'.

Routing files with ZeroMQ (JeroMQ)

I'm trying to implement a "file dispatcher" on ZeroMQ (actually JeroMQ; I'd rather avoid JNI).
What I need is to load balance incoming files to processors:
each file is handled only by one processor
files are potentially large so I need to manage the file transfer
Ideally I would like something like https://github.com/zeromq/filemq but
with a push/pull behaviour rather than publish/subscribe
being able to handle the received file rather than writing it to disk
My idea is to use a mix of taskvent/tasksink and asyncsrv samples.
Client side:
one PULL socket to be notified of a file to be processed
one DEALER socket to handle the (async) file transfer chunk by chunk
Server side:
one PUSH socket to dispatch incoming file (names)
one ROUTER socket to handle file requests
a few DEALER workers managing the file transfers for clients and connected to the router via an inproc proxy
My first question is: does this seem like the right way to go? Anything simpler maybe?
My second question is: my current implementation gets stuck on sending out the actual file data.
clients are notified by the server, and issue a request.
the server worker gets the request and writes the response back to the inproc queue, but the response never seems to leave the server (I can't see it in Wireshark) and the client is stuck on poller.poll awaiting the response.
It's not a matter of sockets being full and dropping data; I'm starting with very small files sent in one go.
Any insight?
Thanks!
==================
Following raffian's advice, I simplified my code, removing the extra push/pull socket (it does make sense now that you say it).
I'm left with the "non-working" socket!
Here's my current code. It has many flaws that are out of scope for now (client ID, next chunk, etc.).
For now, I'm just trying to get the two sides talking to each other, roughly in that sequence.
Server
object FileDispatcher extends App
{
  val context = ZMQ.context(1)
  // server is the frontend that pushes filenames to clients and receives requests
  val server = context.socket(ZMQ.ROUTER)
  server.bind("tcp://*:5565")
  // backend handles clients requests
  val backend = context.socket(ZMQ.DEALER)
  backend.bind("inproc://backend")
  // files to dispatch given in arguments
  args.toList.foreach { filepath =>
    println(s"publish $filepath")
    server.send("newfile".getBytes(), ZMQ.SNDMORE)
    server.send(filepath.getBytes(), 0)
  }
  // multithreaded server: router hands out requests to DEALER workers via a inproc queue
  val NB_WORKERS = 1
  val workers = List.fill(NB_WORKERS)(new Thread(new ServerWorker(context)))
  workers foreach (_.start)
  ZMQ.proxy(server, backend, null)
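  // note: ZMQ.proxy blocks until the context is terminated, so nothing after this line runs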
}
class ServerWorker(ctx: ZMQ.Context) extends Runnable
{
  override def run()
  {
    val worker = ctx.socket(ZMQ.DEALER)
    worker.connect("inproc://backend")
    while (true)
    {
      val zmsg = ZMsg.recvMsg(worker)
      zmsg.pop // drop inner queue envelope (?)
      val cmd = zmsg.pop //cmd is used to continue/stop
      cmd.toString match {
        case "get" =>
          val file = zmsg.pop.toString
          println(s"clientReq: cmd: $cmd , file:$file")
          //1- brute force: ignore cmd and send full file in one go!
          worker.send("eof".getBytes, ZMQ.SNDMORE) //header indicates this is the last chunk
          val bytes = io.Source.fromFile(file).mkString("").getBytes //dirty read, for testing only!
          worker.send(bytes, 0)
          println(s"${bytes.size} bytes sent for $file: "+new String(bytes))
        case x => println("cmd "+x+" not implemented!")
      }
    }
  }
}
Client
object FileHandler extends App
{
  val context = ZMQ.context(1)
  // client is notified of new files then fetches file from server
  val client = context.socket(ZMQ.DEALER)
  client.connect("tcp://*:5565")
  val poller = new ZMQ.Poller(1) //"poll" responses
  poller.register(client, ZMQ.Poller.POLLIN)
  while (true)
  {
    poller.poll
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    // header is the command/action
    cmd.toString match {
      case "newfile" => startDownload(data.toString) // message content is the filename to fetch
      case "chunk" => gotChunk(data.toString, zmsg.pop.getData) //filename, chunk
      case "eof" => endDownload(data.toString, zmsg.pop.getData) //filename, last chunk
    }
  }
  def startDownload(filename: String)
  {
    println("got notification: start download for "+filename)
    client.send("get".getBytes, ZMQ.SNDMORE) //command header
    client.send(filename.getBytes, 0)
  }
  def gotChunk(filename: String, bytes: Array[Byte])
  {
    println("got chunk for "+filename+": "+new String(bytes)) //callback the user here
    client.send("next".getBytes, ZMQ.SNDMORE)
    client.send(filename.getBytes, 0)
  }
  def endDownload(filename: String, bytes: Array[Byte])
  {
    println("got eof for "+filename+": "+new String(bytes)) //callback the user here
  }
}
On the client, you don't need PULL with DEALER.
DEALER is PUSH and PULL combined, so use DEALER only; your code will be simpler.
The same goes for the server: unless you're doing something special, you don't need PUSH with ROUTER; ROUTER is bidirectional.
the server worker gets the request, and writes the response back to
the inproc queue but the response never seems to go out of the server
(can't see it in wireshark) and the client is stuck on the poller.poll
awaiting the response.
Code Problems
In the server, you're dispatching files with args.toList.foreach before starting the proxy; this is probably why nothing is leaving the server. Start the proxy first, then use it. Also, once you call ZMQ.proxy(..), the code blocks indefinitely, so you'll need a separate thread to send the filepaths.
The client may have an issue with the poller. The typical pattern for polling is:
ZMQ.Poller items = new ZMQ.Poller(1);
items.register(receiver, ZMQ.Poller.POLLIN);
while (true) {
    items.poll(TIMEOUT);
    if (items.pollin(0)) {
        message = receiver.recv(0);
        // handle the message
    }
}
In the above code: 1) poll until the timeout, 2) then check for messages, and 3) if any are available, get them with receiver.recv(0). In your code, you poll and then drop straight into recv() without checking. You need to check whether the poller has messages for the polled socket before calling recv(); otherwise the receiver will hang if there are no messages.
