How to ensure that clients are ready before emitting from socket.io server - socket.io

I'm trying to make a two player competitive maze game using socket.io. To ensure that both players get the same maze, I want to send a seed to both clients, where the client then generates a maze based on said seed. However, when the second player joins, only the first player (who was already in the room) receives the emission.
Here is the relevant server-side room and seed emission code:
// Find and join an unfilled room
var roomID = 0;
while (typeof io.sockets.adapter.rooms[roomID.toString()] !== 'undefined' && io.sockets.adapter.rooms[roomID.toString()].length >= 2)
    roomID++;
socket.join(roomID.toString());
console.log('A user from ' + socket.conn.remoteAddress + ' has connected to room ' + roomID);

// Seed announcement
if (io.sockets.adapter.rooms[roomID.toString()].length == 2) {
    var seed = Math.random().toString();
    socket.in(roomID).emit('seed', seed);
    console.log("announcing seed " + seed + " to room " + roomID);
}

socket.on('seedAck', function(msg) {
    console.log(msg);
});
On the client side, I have some code that responds to the server with the seed, to check whether each client is receiving it properly.
socket.on('seed', function(msg) {
    // Some other code here...
    socket.emit('seedAck', 'client has received seed ' + msg);
});
Here is what the server sees:
A user from ::1 has connected to room 0
A user from ::1 has connected to room 0
announcing seed 0.936373041709885 to room 0
client has received seed 0.936373041709885
To verify that only the client that was already in the room received the seed, I refreshed the first client, and this time only the second client received the seed.
I believe what's happening is that the server is sending the seed before the second client is ready. However, after multiple Google searches, I could not come to a solution. I was considering adding a button to the client, requiring the user to press it first (and thus ensuring that the client is ready), but that requires some tedious bookkeeping. I've also considered some sort of callback function, but I haven't figured out how to properly implement it.
Is my only option to manually keep track of when both clients are ready, or is there a better solution that's integrated within socket.io?
Edit: I've modified my code so that it waits for both clients to send a message saying they're ready before sending the seed. Server-side, I have this:
socket.on('ready', function(msg) {
    console.log(msg);
    // Seed announcement
    if (io.sockets.adapter.rooms[roomID.toString()].length == 2) {
        var seed = Math.random().toString();
        socket.to(roomID).emit('seed', seed);
        console.log("announcing seed " + seed + " to room " + roomID);
    }
});
while client-side, I now have this:
socket.on('seed', function(msg) {
    // Some other code here ...
    socket.emit('seedAck', 'client has received seed ' + msg);
});

socket.on('connect', function() {
    socket.emit('ready', 'client is ready!');
});
However, the same problem persists, as shown by the server output:
A user from ::1 has connected to room 0
client is ready!
A user from ::1 has connected to room 0
client is ready!
announcing seed 0.48290129541419446 to room 0
client has received seed 0.48290129541419446
The second client still does not properly receive the seed.

A small modification to my edited server code fixed the problem:
socket.on('ready', function(msg) {
    console.log(msg);
    // Seed announcement
    if (io.sockets.adapter.rooms[roomID.toString()].length == 2) {
        var seed = Math.random().toString();
        io.sockets.in(roomID.toString()).emit('seed', seed); // Modified line
        console.log("announcing seed " + seed + " to room " + roomID);
    }
});
This solution was taken from this SO question.
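For reference, the fix works because of socket.io's broadcast scopes rather than timing: emitting through a particular socket (socket.to(room) or socket.in(room)) reaches everyone in the room except that socket, while emitting through the server object (io.sockets.in(room)) reaches every member, sender included. Since the 'ready' message that triggers the seed comes from the last client to check in, the earlier form skipped exactly that client. A minimal sketch of the two forms, using the same names as the code above:
// Sketch of the two broadcast scopes (same socket.io API as used above).
// Addressed through the emitting socket: every socket in the room EXCEPT the sender.
socket.to(roomID.toString()).emit('seed', seed);

// Addressed through the server object: every socket in the room, sender included.
io.sockets.in(roomID.toString()).emit('seed', seed);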

Related

Pushing data to websocket browser client in Lua

I want to use a NodeMCU device (Lua-based top level) to act as a websocket server for one or more browser clients.
Luckily, there is code to do this here: NodeMCU Websocket Server
(courtesy of @creationix and/or @moononournation)
This works as described and I am able to send a message from the client to the NodeMCU server, which then responds based on the received message. Great.
My questions are:
How can I send messages to the client without them having to be sent as a response to a client request (standalone sending of data)? When I try to call socket.send(), socket is not found as a variable, which I understand, but I cannot work out how to do it! :(
Why does the decode() function output the extra variable? What is this for? I'm assuming it will be for packet overflow, but I can never seem to make it return anything, regardless of my message length.
In the listen method, why has the author added a queuing system? Is this essential, or is it for applications that may receive multiple simultaneous messages? Ideally, I'd like to remove it.
I have simplified the code as below:
(excluding the decode() and encode() functions - please see the link above for the full script)
net.createServer(net.TCP):listen(80, function(conn)
  local buffer = false
  local socket = {}
  local queue = {}
  local waiting = false

  local function onSend()
    if queue[1] then
      local data = table.remove(queue, 1)
      return conn:send(data, onSend)
    end
    waiting = false
  end

  function socket.send(...)
    local data = encode(...)
    if not waiting then
      waiting = true
      conn:send(data, onSend)
    else
      queue[#queue + 1] = data
    end
  end

  conn:on("receive", function(_, chunk)
    if buffer then
      buffer = buffer .. chunk
      while true do
        local extra, payload, opcode = decode(buffer)
        if opcode == 8 then
          print("Websocket client disconnected")
        end
        --print(type(extra), payload, opcode)
        if not extra then return end
        buffer = extra
        socket.onmessage(payload, opcode)
      end
    end
    local _, e, method = string.find(chunk, "([A-Z]+) /[^\r]* HTTP/%d%.%d\r\n")
    local key, name, value
    for name, value in string.gmatch(chunk, "([^ ]+): *([^\r]+)\r\n") do
      if string.lower(name) == "sec-websocket-key" then
        key = value
        break
      end
    end
    if method == "GET" and key then
      acceptkey = crypto.toBase64(crypto.hash("sha1", key .. "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"))
      conn:send(
        "HTTP/1.1 101 Switching Protocols\r\n" ..
        "Upgrade: websocket\r\nConnection: Upgrade\r\n" ..
        "Sec-WebSocket-Accept: " .. acceptkey .. "\r\n\r\n",
        function ()
          print("New websocket client connected")
          function socket.onmessage(payload, opcode)
            socket.send("GOT YOUR DATA", 1)
            print("PAYLOAD = " .. payload)
            --print("OPCODE = " .. opcode)
          end
        end)
      buffer = ""
    else
      conn:send(
        "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 12\r\n\r\nHello World!",
        conn.close)
    end
  end)
end)
I can only answer one question; the others may be better suited for the library author. Besides, SO is a format where you normally ask one question at a time.
How can I send messages to the client without them having to be sent as a response to a client request (standalone sending of data)?
You can't. Without the client contacting the server first and establishing a socket connection, the server wouldn't know where to send the messages. Even with SSE (server-sent events), it's the client that first initiates a connection to the server.

How can I send messages to a specific client using Faye Websockets?

I've been working on a web application which is essentially a web messenger using Sinatra. My goal is to have all messages encrypted using PGP and to have full-duplex communication between clients using Faye websockets.
My main problem is being able to send messages to a specific client using Faye. On top of that, all messages in a single chatroom are saved twice, once for each person, since they are PGP encrypted.
So far I've thought of starting up a new socket object for every client and storing them in a hash. I do not know if this approach is the most efficient one. I have seen that socket.io, for example, allows you to emit to a specific client, but that doesn't seem possible with Faye websockets? I am also considering a pub/sub model, but once again I am not sure.
Any advice is appreciated, thanks!
I am iodine's author, so I might be biased in my approach.
I would consider naming a channel by the user ID (i.e., user1...user201983) and sending the message to the user's channel.
I think Faye will support this. I know that when using the iodine native websockets and builtin pub/sub, this is quite effective.
So far I've thought of starting up a new socket object for every client and storing them in a hash...
This is a very common mistake, often seen in simple examples.
It works only in single-process environments, and you will then have to recode the whole logic in order to scale your application.
The channel approach allows you to scale using Redis or any other Pub/Sub service without recoding your application's logic.
Here's a quick example you can run from the Ruby terminal (irb). I'm using plezi.io just to make it a bit shorter to code:
require 'plezi'

class Example
  def index
    "Use Websockets to connect."
  end

  def pre_connect
    if(!params[:id])
      puts "an attempt to connect without credentials was made."
      return false
    end
    return true
  end

  def on_open
    subscribe channel: params[:id]
  end

  def on_message data
    begin
      msg = JSON.parse(data)
      if(!msg["to"] || !msg["data"])
        puts "JSON message error", data
        return
      end
      msg["from"] = params[:id]
      publish channel: msg["to"].to_s, message: msg.to_json
    rescue => e
      puts "JSON parsing failed!", e.message
    end
  end
end

Plezi.route "/", Example
Iodine.threads = 1
exit
To test this example, use a Javascript client, maybe something like this:
// in browser tab 1
var id = 1;
ws = new WebSocket("ws://localhost:3000/" + id);
ws.onopen = function(e) { console.log("opened connection"); };
ws.onclose = function(e) { console.log("closed connection"); };
ws.onmessage = function(e) { console.log(e.data); };
ws.send_to = function(to, data) {
    this.send(JSON.stringify({to: to, data: data}));
}.bind(ws);

// in browser tab 2
var id = 2;
ws = new WebSocket("ws://localhost:3000/" + id);
ws.onopen = function(e) { console.log("opened connection"); };
ws.onclose = function(e) { console.log("closed connection"); };
ws.onmessage = function(e) { console.log(e.data); };
ws.send_to = function(to, data) {
    this.send(JSON.stringify({to: to, data: data}));
}.bind(ws);

ws.send_to(1, "hello!");

Sending events between two Meteor servers

Is there a way to send events between two Meteor servers? I know I can connect Server1 to Server2 (and vice versa) using DDP.connect and just call methods between the two servers. This will not work for me, because one of my servers (Server1) will be at my house, behind a dynamic IP and a firewall, and DDP.connect requires a URL. What is the best way, if any, to communicate between the two servers? I am thinking the only way to do this would be something like Socket.io, where Server1 connects to Server2. I'm not sure if this can be done in Meteor, though. Thanks.
You can do this by using DDP.connect to connect server 1 to server 2 on startup, then subscribing to a collection that server 2 publishes, for example:
On Server 2 (known URL):
var Events = new Meteor.Collection("events");

Meteor.publish("events", function () {
    return Events.find({});
});
On Server 1 (at your house):
var EventConnection = DDP.connect("<server 2 URL>");
var Events = new Meteor.Collection("events", {connection: EventConnection});

EventConnection.subscribe("events");

Events.find({}).observe({
    added: function (newEvent) {
        // do something with newEvent
    }
});
Then, whenever server 2 adds an object to the Events collection, you will get it on server 1 via the connection. Watch out, though - every time server 1 connects to server 2 it will get all previous events as well. If you don't want that to happen, you need to use the ready callback on subscribe:
Revised code for Server 1:
var EventConnection = DDP.connect("<server 2 URL>");
var Events = new Meteor.Collection("events", {connection: EventConnection});

EventConnection.subscribe("events", function () {
    Events.find({}).observe({
        added: function (newEvent) {
            // do something with newEvent
        }
    });
});
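For completeness, "emitting" an event from server 2 is then just an insert into the published collection; the field names below (eventType, payload) are made up for illustration:
// On Server 2: inserting a document is what triggers the added callback on server 1.
// eventType and payload are hypothetical field names, not part of any API.
Events.insert({
    eventType: "something-happened",
    payload: {answer: 42},
    createdAt: new Date()
});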

Why does fetching data from SQLite block Node.js?

I want to fetch a huge amount of archive data (5-12 million rows) from an SQLite database and export it to a CSV file. While doing this, the whole server is blocked: no other connection can be handled by the server (for example, I couldn't open the website in another browser tab).
Node.JS server part:
function exportArchiveData(response, query){
    response.setHeader('Content-type', 'text/csv');
    response.setHeader('Content-disposition', 'attachment; filename=archive.csv');
    db.fetchAllArchiveData(
        query.ID,
        function(error, data){
            if(!error)
                response.write(data.A + ';' + data.B + ';' + data.C + '\n');
        },
        function(error, retrievedRows){
            response.end();
        });
};
Sqlite DB module:
module.exports.SS.prototype.fetchAllArchiveData = function (a, callback, complete) {
    var self = this;
    // self.sensorSqliteDb.all(
    self.sensorSqliteDb.each(
        'SELECT A, B, C ' +
        'FROM AD WHERE ' +
        ' A="' + a + '"' +
        ' ORDER BY C ASC' +
        ';',
        callback,
        complete
    );
};
I also created an index on AD with CREATE INDEX IAD ON AD(A, C), and EXPLAIN QUERY PLAN shows that this index is used by the SQLite engine.
Still, when I call exportArchiveData the server sends the data properly, but no other action can be performed during that time. I have a huge amount of data to send (5-12 million rows), so it takes ~3 minutes.
How can I prevent this from blocking whole server?
I thought that if I used each() with its callbacks, the server would be more responsive. Also, memory usage is huge (about 3 GB and even more). Can I prevent this somehow?
In answer to comments, I would like to add some clarifications:
I use node-sqlite3 from developmentseed. It should be asynchronous and non-blocking, and it is: while the statement is being prepared, I can still request the main page. But once the server starts serving data, the Node.js server is blocked. I guess that's because the request for the home page is a single callback, while there are millions of callback invocations from each() handling the archive data.
If I use the sqlite3 tool from the Linux command line, I do not get rows immediately, but that is not the problem as long as node-sqlite3 is non-blocking.
Yes, I'm hitting the CPU max. What is worse, when I request twice as much data, all the memory is used up and then the server freezes forever.
OK, I handle this problem the following way.
Instead of using Database#each I use Database#prepare with multiple Statement#get.
What is more, I found that running out of memory was caused by the response buffer filling up. So now I request the next row only after the previous one has been written and the response buffer has room for new data. It works perfectly, and the server is no longer blocked (only while the statement is being prepared).
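In other words, this is the standard Node backpressure pattern: respect the return value of response.write() and resume on the 'drain' event. A standalone sketch of just that mechanism, with no sqlite involved, to show the shape of it:
// Minimal backpressure demo: stream many lines without buffering them all in memory.
var http = require('http');

http.createServer(function (req, res) {
    res.setHeader('Content-type', 'text/csv');
    var i = 0;
    var total = 5000000;
    function writeNext() {
        while (i < total) {
            var ok = res.write('row ' + (i++) + '\n');
            if (!ok) {
                // Buffer full: resume only after the response drains.
                res.once('drain', writeNext);
                return;
            }
        }
        res.end();
    }
    writeNext();
}).listen(8080);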
Sqlite module:
module.exports.SS.prototype.fetchAllArchiveData = function (a) {
    var self = this;
    var statement = self.Db.prepare(
        'SELECT A, B, C ' +
        'FROM AD WHERE ' +
        ' A="' + a + '"' +
        ' ORDER BY C ASC' +
        ';',
        function(error){
            if(error != null){
                console.log(error);
            }
        }
    );
    return statement;
};
Server side:
function exportArchiveData(response, query){
    var respRet = null;
    var i = 0;
    var statement = db.fetchAllArchiveData(query.ID);
    var getcallback = function(err, row){
        if(err != null){
            console.log(err);
            return;
        }
        if(typeof(row) != 'undefined'){
            respRet = response.write(row.A + ';' + row.B + ';' + row.C + '\n');
            console.log(i++ + ' ' + respRet);
            if(respRet){
                // The response buffer still has room: fetch the next row right away.
                statement.get(getcallback);
            }else{
                // The response buffer is full: wait for it to drain before fetching
                // more rows. once() avoids piling up a new handler on every stall.
                console.log('should wait on drain');
                response.once('drain', function(){
                    console.log('drain - fetch the next row');
                    statement.get(getcallback);
                });
            }
        }else{
            // No more rows.
            response.end();
        }
    };
    statement.get(function(err, row){
        response.setHeader('Content-type', 'text/csv');
        response.setHeader('Content-disposition', 'attachment; filename=archive.csv');
        getcallback(err, row);
    });
};

Redis / RabbitMQ - Pub / Sub - Performances

I wrote a little test for a simple scenario:
One publisher and one subscriber
Publisher sends 1,000,000 messages
Subscriber receives the 1,000,000 messages
First test with RabbitMQ, fanout exchange, RabbitMQ node type RAM: 320 seconds
Second test with Redis, basic pub/sub: 24 seconds
Am I missing something? Why such a difference? Is this a configuration problem or something?
First scenario: one node.js process for the subscriber, one for the publisher, each with one connection to RabbitMQ via the amqp node module.
Second scenario: one node.js process for the subscriber, one for the publisher, each with one connection to Redis.
Any help in understanding this is welcome... I can share the code if needed.
I'm pretty new to all of this.
What I need is a high-performance pub/sub messaging system, ideally with clustering capabilities.
To run my test, I just launch the RabbitMQ server (default configuration) and use the following:
Publisher.js
var sys = require('sys');
var amqp = require('amqp');

var nb_messages = process.argv[2];
var connection = amqp.createConnection({url: 'amqp://guest:guest@localhost:5672'});

connection.addListener('ready', function () {
    exchangeName = 'myexchange';
    var start = end = null;
    var exchange = connection.exchange(exchangeName, {type: 'fanout'}, function(exchange){
        start = (new Date()).getTime();
        for(i = 1; i <= nb_messages; i++){
            if (i % 1000 == 0){
                console.log("x");
            }
            exchange.publish("", "hello");
        }
        end = (new Date()).getTime();
        console.log("Publishing duration: " + ((end - start)/1000) + " sec");
        process.exit(0);
    });
});
Subscriber.js
var sys = require('sys');
var amqp = require('amqp');

var nb_messages = process.argv[2];
var connection = amqp.createConnection({url: 'amqp://guest:guest@localhost:5672'});

connection.addListener('ready', function () {
    exchangeName = 'myexchange';
    queueName = 'myqueue' + Math.random();
    var queue = connection.queue(queueName, function (queue) {
        queue.bind(exchangeName, "");
        queue.start = false;
        queue.nb_messages = 0;
        queue.subscribe(function (message) {
            if (!queue.start){
                queue.start = (new Date()).getTime();
            }
            queue.nb_messages++;
            if (queue.nb_messages % 1000 == 0){
                console.log('+');
            }
            if (queue.nb_messages >= nb_messages){
                queue.end = (new Date()).getTime();
                console.log("Ending at " + queue.end);
                console.log("Receive duration: " + ((queue.end - queue.start)/1000));
                process.exit(0);
            }
        });
    });
});
Check to ensure the following; the sketch after this list shows roughly how these settings map onto the node amqp module:
Your RabbitMQ queue is not configured as persistent (since that would require disk writes for each message)
Your prefetch count on the subscriber side is 0
You are not using transactions or publisher confirms
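A rough sketch of how those settings might look with the node amqp module used in the test scripts above; the option names (durable, autoDelete, deliveryMode, ack, prefetchCount) are my reading of that module's API, so double-check them against its documentation:
// Hypothetical tuning of the test above; verify option names against the amqp module docs.
var amqp = require('amqp');
var connection = amqp.createConnection({url: 'amqp://guest:guest@localhost:5672'});

connection.addListener('ready', function () {
    // Non-persistent, auto-deleted exchange: no disk writes per message.
    var exchange = connection.exchange('myexchange', {type: 'fanout', durable: false, autoDelete: true}, function (exchange) {
        // deliveryMode 1 = transient (non-persistent) messages.
        exchange.publish("", "hello", {deliveryMode: 1});
    });

    var queue = connection.queue('myqueue', {durable: false, autoDelete: true}, function (queue) {
        queue.bind('myexchange', "");
        // No-ack subscription keeps the consumer path cheap;
        // prefetchCount is only meaningful when ack is true.
        queue.subscribe({ack: false, prefetchCount: 0}, function (message) {
            // handle message
        });
    });
});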
There are other things which could be tuned, but without knowing the details of your test it's hard to guess. I would just make sure that you are comparing "apples to apples".
Most messaging products can be made to go as fast as humanly possible at the expense of various guarantees (like delivery assurance, etc) so make sure you understand your application's requirements first. If your only requirement is for data to get shoveled from point A to point B and you can tolerate the loss of some messages, pretty much every messaging system out there can do that, and do it well. The harder part is figuring out what you need beyond raw speed, and tuning to meet those requirements as well.
