Exchange data between a node.js script and the client's JavaScript - AJAX

I have the following situation, where the "headers already sent" error occurs when sending multiple responses from the server to the client for a single AJAX request:
It is something I expected, since I opted for AJAX instead of sockets. Is there another way to exchange data between the server and the client, like using browserify to translate an emitter script for the client? I suppose I can't escape sockets, so I would welcome advice on a simpler library, as socket.io seems too complex for such a small operation.
//-------------------------
Update:
Here is the node.js code as requested.
var maxRunning = 1;
var test_de_rf = ['rennen', 'ausgehen'];
// (presumably inside an Express route handler, where `res` is in scope)
function callHandler(word, cb) {
    console.log("word is - " + word);
    gender.gender_function_rf(word, function (result_rf) {
        console.log(result_rf);
        res.send(result_rf); // Here I send data back to the ajax call
        setTimeout(function () {
            cb(null);
        }, 3000);
    });
}
async.eachLimit(test_de_rf, maxRunning, function (item, done) {
    callHandler(item, function (err) {
        if (err) throw new Error(err);
        done();
    });
}, function (err) {
    if (err) throw new Error(err);
    console.log('done');
});

res.send() sends and finishes an HTTP response. You can only call it once per request, because the request is finished and done after calling it. It is a fairly high-level way of sending a response (it does everything in one call).
If you want several different functions to contribute to a response, you can use the lower-level methods on the response object, such as res.setHeader(), res.writeHead(), res.write() (which you can call multiple times) and res.end() (which indicates the end of the response).
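Applied to the question's code, one fix is to accumulate the results and respond exactly once when the whole batch finishes. A minimal sketch reusing the names from above:

var results = [];
async.eachLimit(test_de_rf, maxRunning, function (item, done) {
    gender.gender_function_rf(item, function (result_rf) {
        results.push(result_rf); // accumulate instead of calling res.send() per word
        done();
    });
}, function (err) {
    res.json({ error: err || null, results: results }); // exactly one response per request
});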
You can use the standard webSocket API in the browser with a webSocket module for server-side support, or you can use socket.io, which offers both client and server support plus a number of higher-level features (such as automatic reconnect, automatic fallback to HTTP polling if webSockets are not supported, etc.).
All that said, if what you really want is the ability to just send some data from server to client whenever you want, then a webSocket really is the better way to go. It is a persistent connection, is supported by all modern browsers, and allows the server to send data unsolicited to the client at any time. I'd hardly say socket.io is complex. The doc isn't particularly great at explaining things (not uncommon in the open-source world; the node.js doc isn't particularly great either), but I've always been able to figure advanced things out by looking at a few runtime data structures in the debugger and/or at the source code.
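For scale, a minimal socket.io sketch of that push direction (the event name and payload are made up; assumes npm install socket.io and an existing httpServer):

// Server: push data to the client whenever you want
var io = require('socket.io')(httpServer);
io.on('connection', function (socket) {
    socket.emit('word-result', { word: 'rennen', result: '...' });
});
// Client (browser): just listen for the pushed event
var socket = io();
socket.on('word-result', function (msg) {
    console.log(msg.word, msg.result);
});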

Dropzone.js - Multiple file upload without duplicated response

TLDR;
I managed to simplify my question after a good night's sleep. Here's the simpler question.
I want to upload N files to a server, which would process them together and return a single response (e.g. Total foobars in all files combined = XYZ).
What's the best way to send this single response back to the client?
Thanks.
Below is the old question, left behind as a lesson for me.
I'm using Dropzone.js to build drag-and-drop functionality into my app.
Please note: I know there are a couple of questions already that discuss multifile uploads. But they are different from my question. They talk about how to get a single callback call instead of multiple ones.
My issue is related to the situation where I drag and drop multiple files into the dropzone, but am seeing the single server response being duplicated multiple times. Here is my config:
Dropzone.options.inner = {
    init: function () {
        this.on("dragenter", function (e) {
            $('#inner').addClass('drag-over');
            //// TODO - find out WTF this isn't working (low priority)
        });
        this.on("completemultiple", function (file, resp) {
            //// TODO
        });
    },
    url: "php/...upload...php",
    timeout: 120000, // 2m
    uploadMultiple: true,
    autoProcessQueue: false,
    clickable: false
};
//// ... Some other stuff
//// ...
$(document).ready(function () {
    $('#inner').click(function () {
        Dropzone.forElement('.dropzone').processQueue();
    });
});
In the beginning I intercepted the "complete" event rather than "completemultiple". That resulted in its handler being invoked multiple separate times (once for each file), even though the server-side PHP was only invoked once. Each invocation returned a duplicate copy of the same server-side message.
I didn't want that, so I changed it to "completemultiple", and now I can confirm that the handler only gets called once with an array of files - but the single server response is now buried within each returned file object, each carrying a duplicate copy of the exact same response.
It doesn't matter ultimately, because it is the same message after all. But the aesthetics of the thing now seem off, which indicates to me that I'm doing something wrong: the response seems to indicate two independent uploads, yet they were part of a single invocation of the server-side PHP. Why make the client "believe" there were two separate upload requests when the server-side script only has one opportunity to respond? (That is, the PHP is not sending back different messages for each file - should it? And if so, what's the best way to do it?)
How can I make it so that, in a scenario where it's all-or-none, I get a single response back from the PHP script?
This is especially important to me because my server response will contain the status and some other data. The script does more than simply receive the uploaded files (hence the longer timeout).
I thought maybe that's a sign that I should separate the uploading part from the processing part, and trigger the processing once the upload is complete.
But that means the server-side upload script can't clean up after itself; it needs to persist data beyond its own life. It also now needs to return a handle to this data to the client, which would dispatch the server-side processor in a separate AJAX call, passing it this handle - and that subsequent call has to clean up the files left by the uploader once it is done processing them.
This seems the less elegant solution. Is this something I just need to get used to, or is there a better way of accomplishing what I want?
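Concretely, that two-phase flow might look like this on the client (process.php and the handle field are hypothetical placeholders, not real endpoints):

// Hypothetical sketch: phase 1 uploads via Dropzone, phase 2 processes by handle
this.on("completemultiple", function (files) {
    // the upload script persisted the files and returned a handle to them
    var handle = JSON.parse(files[0].xhr.response).handle;
    $.post('php/process.php', { handle: handle }, function (result) {
        // process.php crunches the files, cleans them up and returns one combined result
        console.log(result);
    });
});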
Also, any other free tips and hints from the front-end gurus in my network will be gratefully accepted.
Thanks.
The following approach works. Until something better can be found.
Dropzone.options.inner = {
    // . . .
    init: function () {
        this.on("completemultiple", function (file) {
            // "file" is actually an array of files here; every element carries
            // a copy of the same server response, so read it from the first one
            var code = JSON.parse(file[0].xhr.response).code;
            var data = { "code": code };
            $.post('php/......php', data, function (res) {
                // TODO - surface the res back to the user
            });
        });
    }
};

Synchronize POSTs to an API in Angular

I'm trying to synchronize my POSTs to an endpoint in Angular. I did see some examples of doing a synchronized GET but had trouble understanding the examples well enough to apply them to POSTs.
The POSTs are pretty simple, at least from my perspective as the front-end developer. I send an object with a parent group ID and a sub group ID to a /parentgroups endpoint. On the backend, however, async calls cause the data to get overwritten.
Apologies for the lack of an example, but I am pretty far from having one that's close to working how I need. My code is still async: a loop over calls to $http.post().
You actually cannot do real synchronous (as in blocking) HTTP calls in Angular; it forces you to use async. If you can't do it with callbacks, then you have a problem with your architecture that the entire team should focus on solving ASAP. If your current architecture requires the frontend to make blocking calls, then your architecture is quite simply broken and needs to be fixed.
Anyway, while I recommend against it, you could always register your requests in a list, and then in each callback pop the next request from the list and run it. That way you can keep pushing requests into the list without knowing in advance how many there will be. Something like this (untested, but the general principle should hold):
var requestList = [];

function runNext() {
    // Guard against calling shift()() on an empty list after the last request
    if (requestList.length) {
        requestList.shift()();
    }
}

requestList.push(function () {
    $http.post('/someUrl', {})
        .success(function (data, status, headers, config) {
            // Remove the next request from the list and call it
            runNext();
        });
});

requestList.push(function () {
    $http.post('/someOtherUrl', {})
        .success(function (data, status, headers, config) {
            runNext();
        });
});

// Start the first request
runNext();
This is fairly clean, but still a bit of a hack. It would probably work fine, but I would take a good long look at why the API forces you to do something like this.
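For what it's worth, the same serialization can also be expressed by chaining promises instead of managing a queue by hand. A sketch, assuming Angular 1.x, with the endpoint and items as placeholders:

// Sketch: each POST starts only after the previous one resolves
function postSequentially($http, $q, items) {
    return items.reduce(function (prev, item) {
        return prev.then(function () {
            return $http.post('/parentgroups', item);
        });
    }, $q.when());
}
// Usage: postSequentially($http, $q, groups) returns a promise that
// resolves once the last POST has finished.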

Sending events from server to client(s) in Meteor

Is there a way to send events from the server to all or some clients without using collections?
I want to send events with some custom data to clients. While Meteor is very good at doing this with collections, in this case the added complexity and storage is not needed.
On the server there is no need for Mongo storage or local collections.
The client only needs to be alerted that it received an event from the server and act accordingly to the data.
I know this is fairly easy with sockjs, but it's very difficult to access sockjs from the server.
Meteor.Error does something similar to this.
Note: the package below is now deprecated and does not work for Meteor versions >0.9.
You can use the following package, which originally aims to broadcast messages from clients to the server and back to clients:
http://arunoda.github.io/meteor-streams/
No collection, no MongoDB behind it; usage is as follows (not tested):
stream = new Meteor.Stream('streamName'); // defined on both client and server side

if (Meteor.isClient) {
    stream.on("channelName", function (message) {
        console.log("message: " + message);
    });
}

if (Meteor.isServer) {
    setInterval(function () {
        stream.emit("channelName", 'This is my message!');
    }, 1000);
}
You should use Collections.
The "added complexity and storage" isn't a factor if all you do is create a collection, add a single property to it and update that.
Collections are just a shape for data communication between server and client, and they happen to build on mongo, which is really nice if you want to use them like a database. But at their most basic, they're just a way of saying "I want to store some information known as X", which hooks into the publish/subscribe architecture that you should want to take advantage of.
In the future, other databases will be exposed in addition to Mongo. I could see there being a smart package at some stage that strips Collections down to their most basic functionality like you're proposing. Maybe you could write it!
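As a sketch of that minimal usage (the collection and field names here are made up):

// A collection used purely as a message bus: one document, updated in place
var Events = new Meteor.Collection('events');
if (Meteor.isServer) {
    Meteor.publish('events', function () { return Events.find(); });
    // anywhere on the server, "fire" an event by updating the single document:
    Events.upsert({ _id: 'latest' }, { $set: { payload: 'something happened', at: new Date() } });
}
if (Meteor.isClient) {
    Meteor.subscribe('events');
    Events.find('latest').observe({
        added: function (doc) { console.log(doc.payload); },
        changed: function (doc) { console.log(doc.payload); }
    });
}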
I feel for @Rui, and the fact that using a Collection just to send a message does feel cumbersome.
At the same time, once you have several such messages to send around, it is convenient to have a collection named something like settings or similar where you keep them.
The best package I have found is Streamy. It allows you to send to everybody, or to just one specific user:
https://github.com/YuukanOO/streamy
meteor add yuukan:streamy
Send a message to everybody:
Streamy.broadcast('ddpEvent', { data: 'something happened for all' });
Listen for the message on the client:
// Attach a handler for a specific message
Streamy.on('ddpEvent', function (d, s) {
    console.log(d.data);
});
Send a message to one user (by id):
var socket = Streamy.socketsForUsers(["nJyQvECmkBSXDZEN2"])._sockets[0];
Streamy.emit('ddpEvent', { data: 'something happened for you' }, socket);

ExpressJS backend hanging with too many requests

I have an express app running with Sequelize.js as an ORM. My express app receives requests from my main Rails app, and because of the cross-domain policy, these requests are performed with getJSON.
On the client, the request is fired when the user hits a key.
Everything goes fine, and express logs the queries being performed (and the JSON being served) each time the user hits the key. Even hitting quickly, it performs OK. But whenever I leave the key pressed (or when several clients hit the key very quickly), it starts firing lots of requests, and at some point the server just hangs: all requests from then on are left pending (I can see that in the Network tab of Chrome Dev Tools) and slowly start to time out. I have to reboot the server to make it respond again.
The server code for my request is:
models.Comment.findAllPublic(req.params.pId, req.params.sId, function (comments) {
    var json = comments.map(function (comment) {
        var com = {};
        ['user_id', 'user_avatar', 'user_slug', 'user_name', 'created_at', 'text', 'private', 'is_speaker_note'].forEach(function (key) {
            com[key] = comment[key];
        });
        return com;
    });
    res.json({ comments: json });
});
And the findAllPublic method from the Comment model (this is a Sequelize model) is:
findAllPublicAndMyNotes: function (current_user, presentationId, slideId, cb) {
    db.query(
        "SELECT * FROM `comments` WHERE commentable_type='Slide' " +
        "AND commentable_id=(SELECT id FROM `slides` WHERE `order_in_presentation`=" + slideId +
        " AND `presentation_id`=" + presentationId + ") " +
        "AND (`private` IS FALSE OR (`private` IS TRUE AND `user_id`=" + current_user +
        " AND `is_speaker_note` IS FALSE))",
        self.Comment
    ).on('success', cb).on('failure', function (err) { console.log(err); });
}
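As an aside, concatenating request parameters into the SQL string like this is an injection risk. A sketch of the same kind of query with bound parameters (replacements syntax as in later Sequelize versions; adjust to the version you run):

// Sketch: bound parameters instead of string concatenation
db.query(
    "SELECT * FROM comments WHERE commentable_type = 'Slide' AND commentable_id = " +
    "(SELECT id FROM slides WHERE order_in_presentation = ? AND presentation_id = ?)",
    { replacements: [slideId, presentationId], type: db.QueryTypes.SELECT }
).then(function (rows) { /* ... */ });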
How can I avoid the server getting stuck? Am I leaving some blocking code in the request that may slowly hang the server as new requests are made?
At first I thought it could be a problem with the forEach when composing the JSON object from the Sequelize model, but I also tried leaving the callback for the MySQL query empty, just responding with empty JSON, and it also froze.
Maybe it is a problem with the MySQL connector? When the server gets stuck I can still open the MySQL console and run queries against my database, and it responds, so I don't know if that's the problem.
I know I could just handle the key event on the client to prevent it from firing too many requests when the key is held down, but the problem also appears when several clients hit the key repeatedly and concurrently.
Any thoughts? Thanks in advance for the help :D
Two things:
It seems like you have some code path where res.render is never called. It could be that the database you're connecting to is dropping the connection to your Express server after the absurd number of requests, so the callback never fires (and there's no database.on('close', function() { /* handle disconnect from DB, perhaps auto-restarting */ }) code to catch it).
Your client-side code should detect when an AJAX request fired on keypress is still pending while a new one is being started, and cancel the old one. I'm guessing getJSON is a jQuery method? Assuming it's jQuery's, you need something like the following:
var currKeyRequest = null;

function callOnKeyUp() {
    var searchText = $('#myInputBox').val(); // .val(), not .value, on a jQuery object
    if (currKeyRequest) {
        currKeyRequest.abort(); // jqXHR exposes abort(), which cancels the pending request
        currKeyRequest = null;
    }
    currKeyRequest = $.getJSON('path/to/server', function (json) {
        currKeyRequest = null;
        // Use JSON code
    });
}
This way, you reduce the load on the client and the latency of the autocomplete functionality (though why not use the jQuery UI autocomplete, if that's what you're after?), and you can save the server some load as well if the keypresses come faster than the handshake with the server (possible with a good touch-typist a few hours' flight away).

How do notifications such as "xxxx commented on your post" get pushed to the front end in Facebook? [duplicate]

I have read some posts about this topic and the answers are comet, reverse ajax, http streaming, server push, etc.
How does incoming mail notification on Gmail works?
How is GMail Chat able to make AJAX requests without client interaction?
I would like to know if there are any code references that I can follow to write a very simple example. Many posts or websites just talk about the technology; it is hard to find complete sample code. Also, it seems many methods can be used to implement comet, e.g. a hidden IFrame or XMLHttpRequest. In my opinion, using XMLHttpRequest is the better choice. What do you think of the pros and cons of the different methods? Which one does Gmail use?
I know it needs to be done on both the server side and the client side.
Is there any PHP and Javascript sample code?
The way Facebook does this is pretty interesting.
A common method of doing such notifications is to poll a script on the server (using AJAX) on a given interval (perhaps every few seconds) to check if something has happened. However, this can be pretty network-intensive, and you often make pointless requests because nothing has happened.
The way Facebook does it is the comet approach: rather than polling on an interval, as soon as one poll completes it issues another one. However, each request to the script on the server has an extremely long timeout, and the server only responds to the request once something has happened. You can see this happening if you bring up Firebug's Console tab while on Facebook, with requests to a script possibly taking minutes. It is quite ingenious really, since this method cuts down immediately on both the number of requests and how often you have to send them. You effectively now have an event framework that allows the server to 'fire' events.
Behind this, in terms of the actual content returned from those polls, it's a JSON response with what appears to be a list of events and info about them. It's minified, though, so it is a bit hard to read.
In terms of the actual technology, AJAX is the way to go here, because you can control request timeouts and many other things. I'd recommend (Stack Overflow cliche here) using jQuery for the AJAX; it'll take a lot of the cross-compatibility problems away. In terms of PHP, you could simply poll an event log database table in your PHP script and only return to the client when something happens. There are, I expect, many ways of implementing this.
Implementing:
Server Side:
There appear to be a few implementations of comet libraries in PHP, but to be honest, it really is very simple. Something perhaps like the following pseudocode:
while (!has_event_happened()) {
    sleep(5);
}
echo json_encode(get_events());
The has_event_happened function would just check if anything had happened in an events table or something, and the get_events function would return a list of the new rows in the table. It depends on the context of the problem, really.
Don't forget to raise your PHP max execution time, otherwise the script will time out early!
Client Side:
Take a look at the jQuery plugin for doing Comet interaction:
Project homepage: http://plugins.jquery.com/project/Comet
Google Code: https://code.google.com/archive/p/jquerycomet/ - Appears to have some sort of example usage in the subversion repository.
That said, the plugin seems to add a fair bit of complexity; it really is very simple on the client. Perhaps (with jQuery) something like:
function doPoll() {
    $.get("events.php", {}, function (result) {
        $.each(result.events, function (i, event) { // note: $.each passes (index, value)
            // do something with your event
        });
        // this effectively causes the poll to run again as
        // soon as the response comes back
        doPoll();
    }, 'json');
}
$(document).ready(function () {
    $.ajaxSetup({
        timeout: 1000 * 60 // set a global AJAX timeout of a minute
    });
    doPoll(); // do the first poll
});
The whole thing depends a lot on how your existing architecture is put together.
Update
As I continue to receive upvotes on this, I think it is reasonable to remember that this answer is 4 years old. The web has grown at a really fast pace, so please be mindful about this answer.
I had the same issue recently and researched about the subject.
The solution given is called long polling, and to use it correctly you must be sure that your AJAX request has a "large" timeout and always make a new request after the current one ends (whether by timeout, error or success).
Long Polling - Client
Here, to keep the code short, I will use jQuery:
function pollTask() {
    $.ajax({
        url: '/api/Polling',
        async: true,      // by default it's async, but...
        dataType: 'json', // or the dataType you are working with
        timeout: 10000,   // IMPORTANT! this is a 10 second timeout
        cache: false
    }).done(function (eventList) {
        // Handle your data here
        var data;
        for (var eventName in eventList) {
            data = eventList[eventName];
            dispatcher.handle(eventName, data); // handle the `eventName` with `data`
        }
    }).always(pollTask);
}
It is important to remember that (from the jQuery docs):
In jQuery 1.4.x and below, the XMLHttpRequest object will be in an invalid state if the request times out; accessing any object members may throw an exception. In Firefox 3.0+ only, script and JSONP requests cannot be cancelled by a timeout; the script will run even if it arrives after the timeout period.
Long Polling - Server
It is not in any specific language, but it would be something like this:
function handleRequest () {
    // keep waiting only while nothing has happened AND we have not timed out yet
    while (!anythingHappened() && !hasTimedOut()) { sleep(2); }
    return events();
}
Here, hasTimedOut will make sure your code does not wait forever, and anythingHappened will check if any event happened. The sleep releases your thread to do other stuff while nothing happens. The events function will return a dictionary of events (or any other data structure you may prefer) in JSON format (or any other format you prefer).
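For contrast, an event-driven server like Node would not block in a sleep loop. A sketch with Express, where emitter is an assumed EventEmitter that fires when something happens:

// Sketch: long-poll endpoint that responds on an event or on timeout
app.get('/api/Polling', function (req, res) {
    function onEvent(data) {
        clearTimeout(timer);
        res.json([data]); // something happened: respond immediately
    }
    var timer = setTimeout(function () {
        emitter.removeListener('something-happened', onEvent);
        res.json([]); // timed out: respond with an empty event list
    }, 30000);
    emitter.once('something-happened', onEvent);
});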
It surely solves the problem, but if you are concerned about scalability and performance, as I was when researching, you might consider another solution I found.
Solution
Use sockets!
On the client side, to avoid any compatibility issues, use socket.io. It tries to use webSockets directly and falls back to other solutions when sockets are not available.
On the server side, create a server using NodeJS (example here). The client will subscribe to a channel (observer pattern) created with the server. Whenever a notification has to be sent, it is published to this channel and the subscriber (client) gets notified.
If you don't like this solution, try APE (Ajax Push Engine).
Hope I helped.
According to a slideshow about Facebook's messaging system, Facebook uses comet technology to "push" messages to web browsers. Facebook's comet server is built on the open-sourced Erlang web server mochiweb.
(In the architecture diagram from that slideshow, the phrase "channel clusters" means "comet servers".)
Many other big web sites build their own comet servers, because every company's needs differ. But building your own on top of an open-source comet server is a good approach.
You can try icomet, a C1000K C++ comet server built with libevent. icomet also provides a JavaScript library that is easy to use, as simple as:
var comet = new iComet({
    sign_url: 'http://' + app_host + '/sign?obj=' + obj,
    sub_url: 'http://' + icomet_host + '/sub',
    callback: function (msg) {
        // on server push
        alert(msg.content);
    }
});
icomet supports a wide range of browsers and OSes, including Safari (iOS, Mac), IE (Windows), Firefox, Chrome, etc.
Facebook uses MQTT instead of HTTP. Push is better than polling: with HTTP we need to poll the server continuously, but via MQTT the server pushes messages to the clients.
Comparison between MQTT and HTTP: http://www.youtube.com/watch?v=-KNPXPmx88E
Note: my answer fits mobile devices best.
One important issue with long polling is error handling.
There are two types of errors:
1. The request might time out, in which case the client should re-establish the connection immediately. This is a normal event in long polling when no messages have arrived.
2. A network error or an execution error. This is an actual error, which the client should accept gracefully while waiting for the server to come back online.
The main issue is that if your error handler re-establishes the connection immediately for a type 2 error as well, the clients would DOS the server.
Both answers with code samples miss this.
function longPoll() {
    var shouldDelay = false;
    $.ajax({
        url: 'poll.php',
        async: true,      // by default it's async, but...
        dataType: 'json', // or the dataType you are working with
        timeout: 10000,   // IMPORTANT! this is a 10 second timeout
        cache: false
    }).done(function (data, textStatus, jqXHR) {
        // do something with data...
    }).fail(function (jqXHR, textStatus, errorThrown) {
        shouldDelay = textStatus !== "timeout";
    }).always(function () {
        // in case of network error, throttle, otherwise we DOS ourselves;
        // if it was a timeout, it's normal operation: go again immediately
        var delay = shouldDelay ? 10000 : 0;
        window.setTimeout(longPoll, delay);
    });
}
longPoll(); // fire the first handler
