I'm developing a Flex application that brings data from the server using RemoteObject. I am using AMFPHP for server-side remoting. The problem is that when I call a specific method using AMFPHP's service browser, the result data comes up in 9-10 seconds. But when I call the same method from my Flex application, it takes 20-40 seconds!
The code that sends a remote object request to my server is:
remoteObject.destination = "decoyDestination";
remoteObject.source = "PHP/manager1";
remoteObject.endpoint = "http://insight2.ultralysis.com/0Amfphp/Amfphp/";
remoteObject.addEventListener(FaultEvent.FAULT,handleFault);
remoteObject.addEventListener(ResultEvent.RESULT,handleResult);
var params:Object = new Object();
params.action = "default";
params.ValueVolume = 1;
timer.start();
remoteObject.init(params);
And my result handler is:
private function handleResult(event:ResultEvent):void
{
    timer.stop();
    CursorManager.removeAllCursors();
    // assumes the Timer fires 60 times per second, so currentCount/60 approximates seconds
    Alert.show("result found at: " + timer.currentCount / 60 + " seconds");
}
The average timing is at least 30 seconds. From what I know about remoting with AMFPHP, it should be much faster. Am I missing something?
Note: using Flash Builder's built-in Network Monitor I can see that a request is being sent, but the response time and elapsed time are always blank, even after the response is received.
Any kind of help will be appreciated. Thanks in advance.
A few things I would like you to try:
Having the Network Monitor turned on will cause a performance hit, so turn it off.
The service browser is obviously not running in debug mode. If you export a release build of your project and call these services, you should see the response much quicker, in the 9-10 seconds you would expect. Running the app in debug mode always takes more time; looking at the response times, I'm thinking that you are getting a lot of data from the server, which obviously takes longer to process in debug mode.
Timeouts
I'm using Oat++ to make a REST API client/server for a Windows application. I'm designing the requests to be small and fast, so I want the timeouts to be quick. It's not catastrophic if a request times out; I just don't want it to block for a long time. "Give up quickly" is my motto for this.
ConnectionMonitor
I see what looks like the facility for this: ConnectionMonitor. I attempted to adapt this to my application.
auto objectMapper = oatpp::parser::json::mapping::ObjectMapper::createShared();
auto config = oatpp::openssl::Config::createShared();
auto connectionProvider = oatpp::openssl::client::ConnectionProvider::createShared(config, {server_addr, server_port});
auto monitor = std::make_shared<oatpp::network::monitor::ConnectionMonitor>(connectionProvider);
monitor->addMetricsChecker(
    std::make_shared<oatpp::network::monitor::ConnectionInactivityChecker>(
        std::chrono::seconds(5),  // the two durations are the read- and write-inactivity timeouts
        std::chrono::seconds(5)
    )
);
auto requestExecutor = oatpp::web::client::HttpRequestExecutor::createShared(monitor);
auto client = MyClient::createShared(requestExecutor, objectMapper);
(I'm not using the AppComponents recommendation because in this application, I wanted parameters for the objects normally created in it—e.g. ObjectMapper, Config, ConnectionProvider, HttpRequestExecutor, etc.—to come from the object, not hardcoded defaults.)
Results
When I invoke one of the client API calls, it times out right around 21 seconds, which is the same timeout I see when I don't do any of the ConnectionMonitor stuff and just pass connectionProvider to the constructor for requestExecutor.
When I monitor network traffic, I can see something being attempted every couple of seconds. Will that thwart the timeout?
Without the monitor code the client works fine, and it also works with the monitor code, so long as I don't kill the server or block the connection (which is how I test the timeout).
What am I doing wrong?
Is there a way I can diagnose this problem?
I've been doing a bit of performance review on an app I have. It's a back-end Kotlin app that just pulls in some data, does a bit of data transformation, and dumps it out; nothing too fancy. One thing that caught my eye was the final bit of execution, where we dump our final data onto a queue. At first I noticed that when we start up the app, the final network call takes a very long time, sometimes over a second. Normally we run this network call in a coroutine to stop that last call from blocking everything, but I started timing the coroutine and the network call separately and got some odd results: from what I can see, the coroutine can take forever to launch/complete compared to the network call itself. It's entirely possible I'm not recording things correctly, but this is the general timing approach I have:
val coroutineTime = Instant.now().toEpochMilli()
GlobalScope.launch {
    executionTime = measureTimeMillis { /* do message sending */ }
    totalTime = Instant.now().toEpochMilli() - coroutineTime
    // log out executionTime and totalTime
}
Now, what I'll see is something like:
- totalTime = ~800ms
- executionTime = ~150ms
These aren't one-offs either; I have multiple of these processes going on at once (up to 10 threads, I think), and the first total times are always significantly longer than the actual executionTime/network call. Eventually, after a dozen or so messages, the overhead calms down and these times become equivalent at about 15ms, but having nearly 700ms of overhead on coroutine start-up seems insane to me.
Is this normal/expected behavior? I've tested this in a separate app and see similar but less extreme results, where the first coroutine takes about 70ms to boot up. I'm struggling to find any other examples of this type of discussion outside of Kotlin being used in Android development.
As a first note, it's almost never a good idea to use GlobalScope unless you really know what you're doing; this is why it was marked as a delicate API. You should instead use a scope that is appropriately closed (following the lifecycle of whatever component launches this work).
Now, AFAIK, GlobalScope runs on the default dispatcher, so maybe this is due to a cold start of that default thread pool. Later on, it could also be a problem to use this dispatcher for network calls, depending on the number of concurrent coroutines you have. It would be more appropriate to use Dispatchers.IO instead for IO-bound work (or a custom thread pool).
It still doesn't explain the cold start, but I would first change that before investigating.
This is expected behavior if you use coroutines inappropriately ;-)
My guess is that your message sending is a blocking operation. By default, GlobalScope.launch() dispatches coroutines with Dispatchers.Default, which is designed for CPU-intensive operations; it has a limited number of threads, and you should never block when using it. If you do, you may run out of threads, and coroutines will need to wait until some blocking operations finish.
If you need to run blocking or IO code, you should use Dispatchers.IO instead:
GlobalScope.launch(Dispatchers.IO) {
    // blocking or IO-bound work such as the message sending goes here
}
I was facing a similar issue. I have a function that loads some data from shared prefs, makes some calculations on the data (all this done in Dispatchers.Default), and returns the result on Dispatchers.Main. I measured how long it took the coroutine to actually start executing the block inside Dispatchers.Main.launch { } after the calculations are done (the time from tag2 to tag3 below), and got about 950ms (!!). Here is the function:
fun someName() {
    CoroutineScope(Dispatchers.Default).launch {
        val time = System.currentTimeMillis()
        // load data and calculations
        Log.d("tag2", "load and calculations took " + (System.currentTimeMillis() - time))
        CoroutineScope(Dispatchers.Main.immediate).launch {
            Log.d("tag3", "reached main thread code " + (System.currentTimeMillis() - time))
            // do something
            Log.d("tag4", "do something took " + (System.currentTimeMillis() - time))
        }
    }
}
But then I realized this happens during app launch, while the main thread is busy creating all the UI, so even with .immediate it takes time until the main thread gets to execute the dispatched code. I then tried running this function after the app had already started and was idle, and found that tag2 to tag3 takes about 1ms (!!) (with .immediate). So it looks like when dispatching something on a coroutine, if the target thread isn't busy, it will start immediately.
I have an express app running with Sequelize.js as an ORM. My express app receives requests from my main Rails app, and because of the cross-domain policy, these requests are performed with getJSON.
On the client, the request is fired when the user hits a key.
Everything goes fine, and express logs the queries being performed (and the JSON being served) each time the user hits the key. Even hitting it quickly, it performs OK. But whenever I leave the key pressed (or maybe several clients hit the key very quickly), as it starts firing lots of requests, at some point the server just hangs: all the requests from that point on are left pending (I can see that in the Network tab of Chrome Dev Tools), and they slowly start to time out. I have to reboot the server to make it respond again.
The server code for my request is:
models.Comment.findAllPublic(req.params.pId, req.params.sId, function(comments){
  var json = comments.map(function(comment){
    var com = {};
    ['user_id', 'user_avatar', 'user_slug', 'user_name', 'created_at', 'text', 'private', 'is_speaker_note'].forEach(function(key){
      com[key] = comment[key];
    });
    return com;
  });
  res.json({comments: json});
});
And the findAllPublic method from the Comment model (this is a Sequelize model) is:
findAllPublicAndMyNotes: function(current_user, presentationId, slideId, cb){
  db.query(
    "SELECT * FROM `comments` WHERE commentable_type='Slide' " +
    "AND commentable_id=(SELECT id FROM `slides` WHERE `order_in_presentation`=" + slideId + " AND `presentation_id`=" + presentationId + ") " +
    "AND (`private` IS FALSE OR (`private` IS TRUE AND `user_id`=" + current_user + " AND `is_speaker_note` IS FALSE))",
    self.Comment
  ).on('success', cb).on('failure', function(err){ console.log(err); });
}
How can I keep the server from getting stuck? Am I leaving some blocking code in the request that may slowly hang the server as new requests are made?
At first I thought it could be a problem with the forEach when composing the JSON object from the Sequelize model, but I also tried leaving the callback for the MySQL query empty, just responding with empty JSON, and it also froze.
Maybe it is a problem with the MySQL connector? When the server gets stuck I can still open the MySQL console and perform queries on my database, and it responds, so I don't know if that's the problem.
I know I could just control the key event to prevent it from firing too many requests when the key gets pressed for a long time, but the problem seems to appear also when several clients hit the key repeatedly and concurrently.
Any thoughts? Thanks in advance for the help :D
Two things:
It seems like you have some path where res.json is never called. It could be that the database you're connecting to is dropping the connection to your Express server after the absurd number of requests, and the callback is never fired (and there's no database.on('close', function() { /* handle disconnect from DB, perhaps auto-restarting */ }) code to catch it); see the reconnect sketch after the code below.
Your client-side code should detect when an AJAX request fired on keypress is still pending while a new one is being started, and cancel the old one. I'm guessing getJSON is a jQuery method? Assuming it's jQuery's, you need something like the following:
var currKeyRequest = null;
function callOnKeyUp() {
  var searchText = $('#myInputBox').val();
  if (currKeyRequest) {
    currKeyRequest.abort(); // jqXHR objects are cancelled with abort()
    currKeyRequest = null;
  }
  currKeyRequest = $.getJSON('path/to/server', function(json) {
    currKeyRequest = null;
    // Use JSON code
  });
}
This way, you reduce the load on the client and the latency of the autocomplete functionality (but why not use the jQuery UI autocomplete, if that's what you're after?), and you can save the server from some of the load as well if the keypresses are faster than the handshake with the server (possible with a good touch-typist a few hours' flight away).
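For the first point, here is a rough sketch of catching a dropped database connection. It assumes the plain mysql npm package rather than whatever connector Sequelize uses internally, so treat it as an illustration of the pattern, not a drop-in fix:
var mysql = require('mysql');

function connectWithRetry() {
  var connection = mysql.createConnection({ /* your connection config */ });
  connection.on('error', function(err) {
    // The mysql driver emits PROTOCOL_CONNECTION_LOST when the server
    // drops the connection; reconnect instead of leaving callbacks hanging.
    if (err.code === 'PROTOCOL_CONNECTION_LOST') {
      connectWithRetry();
    } else {
      throw err;
    }
  });
  connection.connect();
  return connection;
}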
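Another client-side option, complementary to aborting stale requests, is to debounce the keyup handler so a held-down key fires at most one request per pause. A minimal sketch, with an illustrative 300ms delay:
function debounce(fn, delayMs) {
  var timerId = null;
  return function() {
    if (timerId) clearTimeout(timerId);
    timerId = setTimeout(fn, delayMs);
  };
}

// Fire callOnKeyUp only after the user has paused typing for 300ms.
$('#myInputBox').keyup(debounce(callOnKeyUp, 300));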
I've been scouring the web trying to find a straight answer to this. Does anyone know the default timeout lengths for AJAX requests by browser? Also by version, if it has changed?
According to the specs, the timeout value defaults to zero, which means there is no timeout. However, you can set a timeout value on the XHR.timeout property; the value is in milliseconds.
Sources:
http://www.w3.org/TR/2011/WD-XMLHttpRequest2-20110816/#the-timeout-attribute
http://msdn.microsoft.com/en-us/library/cc304105(v=vs.85).aspx
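For example, a minimal sketch of setting a timeout on a raw XMLHttpRequest (the URL and the 5-second value are placeholders):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/data', true); // the timeout only applies to asynchronous requests
xhr.timeout = 5000; // in milliseconds; the default of 0 means no timeout
xhr.ontimeout = function() {
  console.log('Request timed out');
};
xhr.onload = function() {
  console.log('Response received with status ' + xhr.status);
};
xhr.send();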
I don't think browsers have a default timeout for AJAX; there are only synchronous and asynchronous requests. A synchronous request freezes JavaScript execution until the request returns; an asynchronous request does not freeze JavaScript execution, it simply takes the request out of the execution flow, and if you have a callback function it will execute the function alongside the running scripts (similar to a thread).
sync flow:
running JS script
        |
       ajax
(wait for response)
        |
 execute callback
        |
running JS script

async flow:
running JS script
        |
       ajax --------------------
        |                      |
running JS script      execute callback
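A quick way to see the async flow in action, with setTimeout standing in for the AJAX call:
console.log('running JS script');
setTimeout(function() {
  console.log('execute callback'); // runs later, outside the main flow
}, 0);
console.log('still running JS script'); // logs before the callback fires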
I did a modest amount of testing. To test, I loaded my website, stopped the local server, and then attempted an AJAX request. I set the timeout to something low, like 1000ms, until I could ensure I had minimal code (you must set xhr.timeout after open() and before send()).
Once I got it working, my initial goal was to determine the appropriate amount of time to allow. However, I was surprised by how quickly the timeout would be outright ignored by browsers, so my goal shifted to determining the maximum timeout before error handling was no longer viable; past these fairly short spans of time, your timeout handler will not run at all. What I found was pretty pathetic:
Chrome 60: 995ms; 996ms will throw a dirty evil error into the console.
Firefox 52 ESR: ~3000ms; the position of the mouse or some other issue may cause no response at around or just under three seconds.
So...
xhr.open(method, url, true);
xhr.timeout = 995; // REALLY short; set after open() and before send()
xhr.ontimeout = function () {
  // Code will only execute at or below the *effective* timeouts listed above.
  // Good spot to make a second attempt.
};
xhr.send(null);
So if your timeout is set higher than 995ms, Chrome will ignore your handler and puke a dirty evil error onto the nice clean console that you worked hard to keep empty. Firefox is not much better: some unreliable requests just time out well beyond any patience I have, and in doing so ignore the ontimeout handler.
Browsers do have a timeout value; the behavior depends on the browser. Chrome has a timeout value of 5 minutes, and after 5 minutes it resends the AJAX call.
I'm not a node.js master, so I'd like to have more points of view about this.
I'm creating an HTTP node.js web server that must handle not only lots of concurrent connections but also long running jobs. By default node.js runs in a single process, and if there's a piece of code that takes a long time to execute, any subsequent connection must wait until the code finishes what it's doing for the previous connection.
For example:
var http = require('http');
http.createServer(function (req, res) {
  doSomething(); // This takes a long time to execute
  // Return a response
}).listen(1337, "127.0.0.1");
So I was thinking to run all the long running jobs in separate threads using the node-webworker library:
var http = require('http');
var sys = require('sys');
var Worker = require('webworker');
http.createServer(function (req, res) {
  var w = new Worker('doSomething.js'); // This takes a long time to execute
  // Return a response
}).listen(1337, "127.0.0.1");
And to make the whole thing more performant, I thought I would also use cluster to create a new node process for each CPU core.
In this way I expect to balance the client connections across different processes with cluster (let's say 4 node processes if I run it on a quad-core), and then execute the long running jobs on separate threads with node-webworker. Roughly, the cluster part would look like the sketch below.
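(A sketch of what I have in mind, not my actual code:)
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker process per CPU core.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Each worker runs the HTTP server; the long running jobs would still
  // be handed off to a webworker inside the request handler.
  http.createServer(function (req, res) {
    res.end('ok');
  }).listen(1337, "127.0.0.1");
}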
Is there something wrong with this configuration?
I see that this post is a few months old, but I wanted to provide a comment to this in the event that someone comes along.
"By default node.js runs on one process, and if there's a piece of code that takes a long time to execute any subsequent connection must wait until the code ends what it's doing on the previous connection."
^-- This is not entirely true. If doSomething() is required to complete before you send back the response, then yes; but if it isn't, you can make use of the asynchronous functionality available in the core of Node.js and return immediately, while the item processes in the background.
A quick example of what I'm explaining can be seen by adding the following code in your server:
setTimeout(function(){
  console.log("Done with 5 second item");
}, 5000);
If you hit the server a few times, you will get an immediate response on the client side, and eventually see the console fill with the messages seconds after the response was sent.
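In the context of the original server, the idea looks roughly like this; it's a sketch where setTimeout stands in for a genuinely asynchronous doSomething():
var http = require('http');
http.createServer(function (req, res) {
  // Return a response immediately...
  res.end('accepted');
  // ...while the slow work finishes in the background.
  setTimeout(function(){
    console.log("Done with 5 second item");
  }, 5000);
}).listen(1337, "127.0.0.1");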
Why don't you just copy and paste your code into a file and run it over JXcore, like:
$ jx mt-keep:4 mysourcefile.js
and see how it performs. If you need real multithreading without leaving the safety of single threading, try JX; it's 100% Node.JS 0.12+ compatible. You can spawn the threads and run a whole node.js app inside each of them separately.
You might want to check out Q-Oper8 instead as it should provide a more flexible architecture for this kind of thing. Full info at:
https://github.com/robtweed/Q-Oper8