Generating PDFs via Delayed Job while maintaining a RESTful pattern - ruby

Currently I am running a Rails app on Heroku, and everything is working great with the exception of generating PDF documents that sometimes contain thousands of records. Heroku has a built-in timeout of 30 seconds, so if a request takes more than 30 seconds, it's abandoned.
That's fine, since they offer built-in delayed_job support. However, all of the PDFs I generate follow a typical RESTful pattern. For instance, a request to "/posts.pdf" generates a PDF (using Prawn and Prawnto) and delivers it to the browser.
So my basic question is: how do I create dynamically generated PDFs with delayed_job while maintaining the basic RESTful patterns Rails so conveniently provides? Thanks.

You could do something like the following (see the sketch after this list):
Send a request to generate the PDF: POST /generate_new_pdf
Have that action return the ID of the new PDF before it's created
Poll the endpoint for that resource ID until it's done (returning 202s in the meantime): GET /pdfs/:id
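A minimal sketch of that flow with delayed_job and Prawn could look something like this. The GeneratedPdf model, its status and file_path columns, and the pdfs#create / pdfs#show actions are hypothetical names for illustration; the POST is simply mapped onto a standard pdfs resource rather than a custom /generate_new_pdf route:

# app/controllers/pdfs_controller.rb
class PdfsController < ApplicationController
  # POST /pdfs - enqueue generation and hand back the record's ID right away
  def create
    pdf = GeneratedPdf.create!(status: "pending")
    pdf.delay.generate!   # delayed_job pushes the generate! call onto the queue
    render json: { id: pdf.id }, status: :accepted
  end

  # GET /pdfs/:id - 202 while the worker is busy, the file once it's done
  def show
    pdf = GeneratedPdf.find(params[:id])
    if pdf.status == "done"
      send_file pdf.file_path, type: "application/pdf", disposition: "attachment"
    else
      head :accepted
    end
  end
end

# app/models/generated_pdf.rb
class GeneratedPdf < ActiveRecord::Base
  def generate!
    path = Rails.root.join("tmp", "posts_#{id}.pdf").to_s
    Prawn::Document.generate(path) do |doc|
      Post.find_each { |post| doc.text(post.title) }  # the slow part now runs off-request
    end
    update_attributes(status: "done", file_path: path)
  end
end

The browser (or a bit of JavaScript) keeps hitting GET /pdfs/:id and follows through to the download once the 202s stop. Note that on Heroku the web and worker dynos don't share a filesystem, so in practice you'd write the finished file to something like S3 rather than tmp.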

Related

How do I show progress of a long-running server operation (Web API Commanding) in the Lightswitch HTML client?

I have a VS 2013 Lightswitch HTML Client application to which I've added a button that makes a Web API REST post. This basically 'refreshes' the data in the table from the original upstream source. This is all working correctly, but the operation takes a few minutes, and I want to report status to the user as it runs.
Right now, I've tried attaching a simple Refresh when the post returns as follows:
$.post("/api/data/", "Refresh", function (response) {
    screen.getData().then(function (newData) { screen.reQuery(); });
});
This doesn't actually seem to do a refresh (screen.reQuery is apparently the wrong call), but the better option would be to instead have the server report progress of this long-running operation.
One thought I had would be to have the server call return data in the form of "percent done" in the response as it processes it, but I don't know if this would be delivered to the client piecemeal, nor the best way to display this to the user in Lightswitch.
I'm open to other third-party libraries that might help with this, but I'd like to stick with WebAPI for commanding instead of adding something like SignalR for now, if possible. Thanks!
In general, running operations that take minutes inside a single server request does not seem like the best idea.
A reasonable alternative is to create a single call that will in turn create multiple Web Jobs (see Azure Web Jobs for more info). The work will be broken into smaller individual tasks, and your HTML client will query the Web Jobs rather than your Web API.

4GL and Magento SOAP API. Need a simple example

My company runs a 4GL application internally. It's very old and no one really knows how to improve/develop for it since the developers are long gone.
I need to make a simple SOAP call to my Magento web store. There are tons of examples online in a multitude of languages, but I can't find a single 4GL (OpenEdge ABL) example.
I'm trying to set SKUs to Out of Stock status.
Does anyone have a simple example that I can look at, or at least a starting point since there seems to be so little information on 4GL on the web.
Example of the call I need in PHP:
<?php
$proxy = new SoapClient('http://www.domain.com/api/soap/?wsdl');
$sessionId = $proxy->login('admin', 'password');
$proxy->call($sessionId, 'product_stock.update', array('sku123', array('qty'=>50, 'is_in_stock'=>1)));
For version 10.2B there's built-in support for consuming web services in Progress ABL.
This is a basic tutorial of how to create a client for a SOAP-based web service in ABL. It's not best practices or in any way complete. Just a quick guide to get started.
1. Analyse the WSDL
There's a built in tool available via command line that lets you analyse a WSDL and create documentation about available services, datatypes, syntax etc. Invoke it on your wsdl like this:
proenv> bprowsdldoc yourwsdl-file c:\temp\docs
The wsdl can be local or remote. If it's remote you specify the URL; if it's local you can specify just the complete local path. Documentation in HTML format will end up in c:\temp\docs. Open up index.html in that folder.
2. Create a basic client
In the index.html document there's a number of headings. Click the link under "Port types". In the Port Type document you will find some useful data.
Copy-and-paste the example in "Connection Details" into your Progress Editor. It should look something like this (names of services and procedures will be different - they are defined in the wsdl):
DEFINE VARIABLE hWebService AS HANDLE NO-UNDO. /* handle to the web service "server" */
DEFINE VARIABLE hYYY        AS HANDLE NO-UNDO. /* handle to the port type */
CREATE SERVER hWebService.
hWebService:CONNECT("-WSDL 'file_or_url_to_wsdl.wsdl'"). /* local path or URL to the wsdl */
RUN XXX SET hYYY ON hWebService. /* XXX = port type name as defined in the wsdl */
If you run this code your client is connected to the web service but it's still not doing anything.
Further down the same document there's a heading called "Operation (internal procedure) details". This is where the actual web service is invoked. It will look something like the code below. It actually shows two ways of making the same call, one functional and one procedural, so choose whichever you prefer and insert it into your editor (I usually use the procedural one for no real reason other than old habits):
DEFINE VARIABLE strXMLRequest AS CHARACTER NO-UNDO.
DEFINE VARIABLE ProcessXMLResult AS CHARACTER NO-UNDO.
FUNCTION ProcessXML RETURNS CHARACTER
(INPUT strXMLRequest AS CHARACTER)
IN hYYY.
/* Function invocation of ProcessXML operation. */
ProcessXMLResult = ProcessXML(strXMLRequest).
/* Procedure invocation of ProcessXML operation. */
RUN ProcessXML IN hYYY (INPUT strXMLRequest, OUTPUT ProcessXMLResult).
Now all you need to end your program is disconnecting and cleaning up. So insert:
hWebService:DISCONNECT().
DELETE OBJECT hWebService.
If you've followed all steps you should have a skeleton for invoking a web service. The only problem is that you need to handle the in- and out-data.
3. Handle the answer and the request
Depending on how the web service is built this can be easy (if you only input and output simple data like strings and numbers) or quite complicated (if you input and output entire XML documents). The documentation you created in step one lists all datatypes (in the index.html document), but it doesn't offer any support for creating any needed XML documents. There's specific Progress documentation available on how to work with XML...
The better approach is to take a look at the official documentation. There you will find everything above and more - how to handle errors etc.
Here is an overview of all 10.2B documentation and here is the PDF named Web Services.
Here is a link to a complete (but actually not so good) example in the Progress KnowledgeBase where a client and corresponding request/response xml is created and handled.
Look at these chapters:
6 - Creating an ABL Client from WSDL
7 - Connecting to Web Services from ABL
8 - Invoking Web Service Operations from ABL
That will basically take you through the entire process from start to finish.

Grails store and fetch data on client side

Background: We are using grails 2.1.1. We are not using any DB as of now. We make a web service call for each response on another server.
Now the problem is, there is a web service call which returns some static data in XML form, and this data is usable throughout the application. The size of the XML is around 40 KB. This XML contains static data like project_list, status_type_list etc., and we have to use it in various dropdowns and menu items on different gsp pages.
So, please suggest the best way to handle this data, so that it doesn't affect our page load time and browsing experience, and so that we can easily use the data on the client side.
Responding to your comment on the question: I would prefer annotation-based caching over the plugin, if the requirement is as simple as you state.
If the calls are being made from server-side and you want to cache the results of the parsed XML then you can do something like:
@Cacheable("staticDataCache")
def getStaticDataFromXML() {
    // call the web service, parse the XML and return the resulting maps/lists
}
You can then use the above method to pull the maps, lists whatever data structure you've used to store the result and it will pull it from the cache.
and then another service method to flush the cache, which you can call frequently from a Job.
@CacheFlush("staticDataCache")
def flushStaticDataCache() {}
Use the cache plugin to cache the static XML data, and then add some policy as to when the cache should be updated (e.g. using a job to check every hour whether the XML has changed).

Facebook test users with user-to-user requests

I asked this question last week but only got 8 views.
A part of the application I'm working on requires creating a ton of user-to-user requests and validating they all get processed correctly in the application. This requires countless hours of QA work and could be automated with a simple script like
users_api = Koala::Facebook::TestUsers.new(config)
users = users_api.create_network(10, true, "email,user_likes,publish_actions")
users.permutation(2) do |u1, u2|
  graph = Koala::Facebook::API.new(u1['access_token'])
  requests_types.each do |req|
    graph # .user_to_user_request(u2, req) Oh noes I can't do this part
  end
end
Everything I've seen points to the fact that it's impossible to create user-to-user requests in a script, even for test users. Is there any other (automated) way to do this?
Edit
What I'm trying to find is a way to create user-to-user requests. The validation would still be manual by the QA team. The problem we're facing is that they need to create 90 requests and make sure they didn't skip a single one, then validate the data.
The solution to this is a tricky one. You probably have two options, depending on what you need.
The first is to manually provide access tokens for tests. That would require creating several fictional users or gathering access tokens from friends via the API Explorer. This is of course very inconvenient, but it's probably needed for the second idea, so I'm mentioning it. The question is how many users you will need to test; in most situations 3-4 users should be enough to cover the test cases.
The second idea requires actually running the test suite once using the first idea and recording the results using gems like webmock or fakeweb. This lets you capture what the API responds with and replay it in later tests without needing to regenerate tokens. It should also speed up your tests significantly, since you avoid waiting on each request to the FB API.
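As a rough illustration of the second idea, once you've captured a real Graph API response you can replay it with webmock so later runs never touch Facebook. Everything below (the URL pattern and the JSON body) is a placeholder rather than a real recording; webmock itself only stubs, so the actual capturing would be done by hand or with something like VCR:

require 'webmock'
include WebMock::API
WebMock.enable!

# Replay a canned Graph API response instead of hitting Facebook on every run
stub_request(:get, %r{graph\.facebook\.com/me})
  .to_return(status: 200,
             body: '{"id": "100000000000001", "name": "Test User"}',
             headers: { 'Content-Type' => 'application/json' })

Any Koala call that resolves to that endpoint now gets the canned JSON back immediately, so the suite no longer depends on live tokens.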

How do you measure the progress of a web service call?

I have an ASP.NET web service which does some heavy lifting, like, say, some file operations or generating Excel sheets from a bunch of Crystal Reports. I don't want to be blocked by calling this web service, so I want to make the web service call asynchronous. Also, I want to call this web service from a web page, and I want some mechanism which will allow me to keep polling the server so that I can show some indicator of progress on the screen, like, say, the number of files that have been processed. Please note that I do not want a notification on completion of the web method call; rather, I want a live progress status. How do I go about it?
Write a separate method on the server that you can query by passing the ID of the job that has been scheduled and which returns an approximate value between 0-100 (or 0.0 and 1.0, or whatever) of how far along it is.
E.g. in REST-style, you could make a GET request to http://yourserver.com/app/jobstatus/4133/ which would return a simple '52' as text/plain. Then you just have to query that every (second? two seconds? ten seconds?) to see how far along it is.
How you actually accomplish the monitoring on the backend hugely depends on what your process is and how it works.
I think XML web services are slow, so creating multiple methods and polling for progress will be extremely slow and will generate a huge load on the server. I wouldn't do it in a production environment. I see the same (but smaller) problems with database polling.
Try SOAP extensions instead; they implement an event-driven model. See Adding a Progress Bar to Your Web Service Client Application on MSDN.
You can also use SoapExtensions to notify your client of the download/process progress. The server then sends events to the client, and nothing in the client has to be changed if you don't use it.
This allows for something like the following in your client:
//...
private localhost.MyWebServiceService _myWebService = new localhost.MyWebServiceService();
_myWebService.processDelegate += ProgressUpdate;
_myWebService.CallHeavyMethod();
//...

private void ProgressUpdate(object sender, ProgressEventArgs e)
{
    double progress = ((double)e.ProcessedSize / (double)e.TotalSize) * 100.0;
    // Show progress...
}
Have the initial "start report generation" web service call create a task in some task pool, and return the caller the ID of the task.
Then, provide another method that returns the "percent done" for a given taskId.
Provide a third method that returns the actual result for a completed task.
Easiest way would be to have the Web Service update a field on a database with the progress of the call, and then create a Web Service that queries that field and returns the value.
Make the web service return some sort of task ID or session ID. Make another web method to query with that ID, which returns the information needed (% completion, list of files, whatever). Poll this method at some interval from the client.
Use a database to store the process information; if you keep it in the memory of the web service, it will not scale well in a web farm environment, as the task may run on a different server than the one you are polling.
EDIT: I just saw another similar answer, and a comment on it. The commenter is right - you can use an in-memory table to avoid disk operations while still using a separate DB server.
