Long polling from Rails 3.2 - Heroku

I have a CarrierWave upload that takes a long time to process, but it must finish before the user can continue to the next action in the browser. I'm using an Ajax file upload on the front end, so the app UI gives progress updates on the upload and processing. This works fine in my dev environment because my dev server's timeout is relatively long, but not so well on Heroku, since Cedar times out the request after 30 seconds if no response is sent. I've been trying to create a streaming response that sends a space every couple of seconds until the processing has completed, by creating a response object that responds to each, like this:
class LongPoller
  def initialize(yield_every = 2, task)
    @yield_every = yield_every
    @task = task
  end

  # Rack calls each on the response body: yield a space as a keep-alive
  # while the background task runs, then yield its result as JSON.
  def each
    t = Thread.new(&@task)
    while t.alive?
      sleep @yield_every
      yield ' '
    end
    yield t.value.to_json
  end
end
This isn't working as expected though, because Thin seems to be batching the responses and not sending them back to the client.
Anyone have any ideas how I can get this to work?
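For reference, this is roughly how I'm wiring the poller into the controller; a minimal sketch where the action name and the LongRunningUpload call are placeholders for my actual processing:

class UploadsController < ApplicationController
  def process_upload
    headers['Cache-Control'] = 'no-cache'
    # Thin should call each on this body and flush every yielded chunk.
    # LongRunningUpload.perform stands in for the real CarrierWave work.
    self.response_body = LongPoller.new(2, lambda { LongRunningUpload.perform(params) })
  end
end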

Related

Http Post - long delay at state ExecuteRequestHandler

I have an ASP.NET MVC website hosted on Windows Server 2012 R2 Standard which uses KnockoutJS to display data in a grid. This server is dedicated to the process that I'm having trouble with; it does not serve any other requests.
An ajax call is made to a "GetRecords" action of a controller. This returns data for a couple of dozen records very quickly.
The user is able to make amendments to the data and submit them for update. The Knockout code makes another Ajax call, this time posting the records. At this point the site "hangs" for a long time (over 10 minutes), but it does complete successfully and the updated data is persisted to a database. During the "hang" time, the CPU for the IIS worker process hovers around the 50% mark.
I'm trying to figure out what's causing the delay. It seems that the delay happens before the first line of code of the controller action is reached. I've added trace statements to the action, and I can see that once the first line is executed, the action completes within a couple of seconds.
From the IIS manager, I've drilled into "Worker Processes"\"Current Requests" during the time the page is "hung". I can see that the State is listed as "ExecuteRequestHandler" and the Module Name is "ManagedPipelineHandler". There are no other "Current Requests" displayed.
Using the Chrome dev tools, I've captured the JSON being posted for the update; it is approximately 4 MB in size.
I've ruled out the problem being caused by bandwidth because I've tested from a browser running locally (on the web server), and I get the same delay.
Also, when I post the same number of records to the same site hosted on my dev VM, it works fine and completes end-to-end in under 3 seconds.
Any suggestions on steps I can take to improve the performance of the post?
I have created a process dump of the IIS worker process when it is in the "hanging" state, this is available at: onedrive link
It seems that "Thread 28" is causing the issue, since it has a "Time spent in user mode" value of over 2 minutes; I requested the process dump about 2 minutes after making the HTTP POST request from the website. The post did eventually complete OK after about 20 minutes.
I was able to work around this problem by bypassing the MVC model binding. The view model parameter (editBatchVm) that was passed into the controller method has been replaced. So, instead of:
public void ResubmitRejectedVouchersAsNewBatch(EditBatchViewModel editBatchVm)
{
I now have:
public void ResubmitRejectedVouchersAsNewBatch()
{
    // Read the raw request body and deserialize it with Json.NET,
    // bypassing the MVC model binder entirely.
    string requestData = "";
    using (var reader = new StreamReader(Request.InputStream))
    {
        requestData = reader.ReadToEnd();
    }
    EditBatchViewModel editBatchVm = JsonConvert.DeserializeObject<EditBatchViewModel>(requestData);
    // ... rest of the action unchanged ...
}
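With this change the post completes quickly, which suggests the time was being spent inside the default model binder (presumably while it walked and validated the large posted object graph) rather than in my own code.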

Morbo server works only after constant refresh

I am developing a web application with Mojolicious. The morbo development server is a wonderful thing that works great, but once I start returning complicated hashes on the stash and then rendering a webpage, the morbo server starts to act funny. In my browser, if I navigate to one of those webpages that use a complicated hash, the browser tells me that the connection has been reset. I have to refresh about 10-12 times before the page will load.
For example:
The code below shows one of my app controllers. It simply gets a json object from an AJAX request, and then returns a different json object. It works fine, except that the browser demands to be refreshed a thousand times before it will load.
package MyApp::Controller::Library;
use Mojo::Base 'Mojolicious::Controller';
use Mojo::Asset::File;
use MyApp::Model::Generate;
use MyApp::Model::Database;
use MyApp::Model::IpDatabase;
use Mojo::JSON qw(decode_json);

# Receives a JSON object from an AJAX request and
# sends the necessary information back to be
# displayed in a table.
sub list_ajax_catch {
    my $self  = shift;
    my $json  = $self->param('data');
    my $input = decode_json $json;

    $self->render(
        json => {
            "Object A" => {
                "name"        => "Object A's Name",
                "description" => "A Description for Object A",
                "height"      => "10",
                "width"       => "5",
            }
        }
    );
}

1;
The problem is not limited to this instance. It seems that any time there is a lot of processing on the server, the connection gets reset. It doesn't matter which browser (I've tried Chrome, IE, Firefox, and others, on multiple computers), and it doesn't even matter whether I'm sending or receiving data between the HTML and the app. All that seems to trigger it is any amount of processing in my web app beyond just rendering templates, BUT if I am running Hypnotoad, everything is fine.
This example is not one that requires a lot of processing, but it does cause the browser to reset, and as you can see, it shouldn't take long to run or freeze anything up. I thought the problem was a timeout issue, but by default the timeout doesn't happen until after 15 seconds, so it can't be that.
I have figured out the problem! This has been an issue for me for over a month now and I am so glad that it is working again. My problem was that when I started the morbo development server, I used the following command:
morbo -w ~/web_dev/my_app script/my_app
The -w allows me to watch a directory for changes so that I don't have to restart the app each time I change some of my JavaScript files. My problem was that the directory I watched also contained my log files, so each time I went to my webpage, the logs would change and the server would restart.
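The fix was to point -w only at directories that don't contain the logs (morbo accepts the flag multiple times; the paths below just reflect my layout):

morbo -w ~/web_dev/my_app/lib -w ~/web_dev/my_app/templates -w ~/web_dev/my_app/public script/my_app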

webpage.open() never calls callback

I'm using PhantomJS 1.8.2 to run some Jasmine unit tests using JsTestDriver. The tests run fine using Chrome, but about half the time when using PhantomJS, the test result is that no test cases were found.
I've narrowed the issue down to PhantomJS failing to open the local JsTestDriver page (http://localhost:9876/capture). Here's how to reproduce this; about 50% of the time, the "Loaded ... with status ..." message is never shown:
Start JsTestDriver server locally
Run phantomjs phantomjs-jstd-bridge.js
The file phantomjs-jstd-bridge.js looks like this:
var page = require('webpage').create();
var url = 'http://localhost:9876/capture';
console.log('Loading ' + url);
page.open(url, function(status) {
    console.log('Loaded ' + url + ' with status ' + status);
});
The first log line (Loading ...) is always displayed, but the second line coming from the callback is only printed about half the time.
What could be the cause for this? Opening the URL in question in a web browser works fine every time.
Is there any way to get more info on why PhantomJS does not call the callback?
Check the tips mentioned on the Troubleshooting wiki page. Particularly useful is tracking the network transfer activity, as it may indicate whether some resources are not being sent properly, or reveal other similar problems.
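For example, handlers like these (registered before calling page.open) log every request and response, which should make it clearer where the load stalls; this is just a sketch using PhantomJS's standard resource callbacks:

page.onResourceRequested = function(request) {
    console.log('Requested: ' + request.url);
};
page.onResourceReceived = function(response) {
    console.log('Received: ' + response.url + ' (stage: ' + response.stage + ')');
};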

Realtime display using redis pubsub in ruby?

I have a stream of data coming to me via HTTP hits, and I want to display it in realtime. I have started pushing the HTTP hit data to a Redis pub/sub channel; now I want to show it to users.
I want to update a user's screen as soon as I get some data on the Redis channel. I want to use Ruby, as that is the language I am comfortable with.
I would use Sinatra's stream feature coupled with EventSource on the client side. That leaves IE out, though.
Here's some mostly functional server-side code, adapted from https://github.com/redis/redis-rb/blob/master/examples/pubsub.rb (another option is https://github.com/pietern/hiredis-rb):
require 'sinatra'
require 'redis'

get '/the_stream', provides: 'text/event-stream' do
  stream :keep_open do |out|
    redis = Redis.new
    redis.subscribe(:channel1, :channel2) do |on|
      on.message do |channel, msg|
        out << "data: #{msg}\n\n" # This is an EventSource message
      end
    end
  end
end
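You can sanity-check the stream from the command line before wiring up the browser (assuming Sinatra's default port; -N turns off curl's output buffering):

curl -N http://localhost:4567/the_stream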
Client side. Most modern browsers support EventSource, except IE:
var stream = new EventSource('/the_stream');
stream.onmessage = function(e) {
    alert("I just got this from the server: " + e.data);
};
As far as I know you can also do this via Faye; check this link out.
There are a couple of other approaches you can try:
I once built a long-polling server using Thin and Sinatra to achieve something like this; if you wish, you can do the same.
I know of a few Flash clients, like this and this, that you can use to connect directly to Redis.
There is an EventMachine WebSocket implementation you can use and hook up with HTML5 WebSockets, with Flash as a fallback for non-HTML5 browsers.
Websocket-Rack
Other approaches you can try (just a suggestion, since most of them aren't written in Ruby):
Juggernaut (I don't think it's based on the Redis pub/sub thing; there also used to be a Ruby client, though I'm not sure about it now)
Socket.io
Webd.is
NULLMQ (not Redis pub/sub, but a ZeroMQ implementation in JavaScript)
There are a few other approaches you can find if you Google around. :)
Hope this helps.

Implementing Timeout on BackgroundTransferService transfer request - Windows Phone

I am using BackgroundTransferService to download a file from the internet.
The pseudocode goes something like this:
BackgroundTransferRequest transferRequest = new BackgroundTransferRequest(transferUri);
transferRequest.Method = "GET";
transferRequest.Tag = "myTag";
transferRequest.TransferPreferences = TransferPreferences.AllowCellularAndBattery;
BackgroundTransferService.Add(transferRequest);
After this, I add an event handler to handle the transfer when it is completed.
I am only using the TransferStatusChanged event handler, not TransferProgressChanged:
transferRequests = BackgroundTransferService.Requests;
transferRequests.Last().TransferStatusChanged += new EventHandler<BackgroundTransferEventArgs>(transfer_TransferStatusChanged);
In transfer_TransferStatusChanged() I do whatever I want to do with my downloaded file, or handle the failure cases (404 etc.).
The problem is that my downloads go on for an indefinite time if there is no 404 response from the server (for example, when there is no such server, e.g. www.googlea.com/myfilename). I want to implement a timeout for such a scenario. How can I do that?
There is no built in support for such a scenario. You'll have to build in the timeout support yourself.
Be careful of transferring large files though, as the transfer could be done in parts and over a very long period of time, depending on connectivity and battery charge level.
Of course, you may want to add a check that a file exists before making the transfer request and if you have any control over the server you should make sure that the correct responses are being sent too.
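One way to roll your own timeout is a DispatcherTimer that removes the request if it hasn't finished in time. This is just a sketch; the five-minute limit is arbitrary, and you'd also want to stop the timer from your TransferStatusChanged handler once the transfer completes:

var timeout = new DispatcherTimer { Interval = TimeSpan.FromMinutes(5) };
timeout.Tick += (s, e) =>
{
    timeout.Stop();
    if (transferRequest.TransferStatus != TransferStatus.Completed)
    {
        // Remove cancels an in-progress request as well as removing it
        // from the service's queue.
        BackgroundTransferService.Remove(transferRequest);
        // Notify the user / schedule a retry here as appropriate.
    }
};
timeout.Start();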
