Use EC2 for PDF Generation, provide public URL to user - laravel

I have developed an application which allows users to select multiple "transactions"; each of these is directly related to a PDF file.
When a user multi-selects them and "prints" them, these PDF files are merged into one longer file for ease of printing.
Currently, "transaction" PDFs are generated on request, and so is the merging.
I'm trying to scale this up by relying on Amazon infrastructure, and a few questions have arisen.
Should I implement a queue for the PDF generation per "transaction"? If so, how can I provide the user a seamless experience? We don't want them to "wait".
Can I use EC2 to generate these PDF files for me? If so, can I provide a "public" link for the user to download the file directly from Amazon, instead of using our resources?
Thanks a lot!
EDIT ---- More details
User inputs some information through a regular form
System generates a PDF per request, using the provided information for the document
The generated PDF is kept in Amazon S3
We provide an API which allows you to "print" multiple PDFs at once; to do so, we merge the selected PDF files from S3 into one file for ease of printing
When you multi-print documents, a new window opens with the merged file directly; the user needs to wait around 20 seconds for it to display
We want to offload the resources used to generate the PDFs onto Amazon infrastructure, but we need to keep the same flow, meaning we should provide an instant public link for the user to download & print the files.
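For reference, a minimal sketch of what such a merge step can look like, assuming the selected files have already been fetched from S3 to local paths and that the setasign/fpdi-fpdf package (one common choice, not necessarily what we use) is installed:

```php
<?php
// Merge several local PDF files into one, page by page, using FPDI.
require 'vendor/autoload.php';

use setasign\Fpdi\Fpdi;

function mergePdfs(array $paths, string $outputPath): void
{
    $pdf = new Fpdi();

    foreach ($paths as $path) {
        // setSourceFile() returns the number of pages in the source PDF.
        $pageCount = $pdf->setSourceFile($path);
        for ($pageNo = 1; $pageNo <= $pageCount; $pageNo++) {
            $tplId = $pdf->importPage($pageNo);
            $size  = $pdf->getTemplateSize($tplId);
            // Create a page matching the source page's size and orientation.
            $pdf->AddPage($size['orientation'], [$size['width'], $size['height']]);
            $pdf->useTemplate($tplId);
        }
    }

    $pdf->Output('F', $outputPath); // 'F' = write to a local file
}
```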

Based on my understanding, you just need the link to be created immediately after the user requests the file, while the PDF merge happens in parallel. Here is an idea that might work in your situation.
First, create a unique PDF file name (a random string works well). At the same time, generate the PDF in the background, writing it under exactly the name you created in the first step. This gives the user an instant file name and download link, even though the file itself is still being created.
Make sure both steps run at the same time: use a queue worker (or threads) if using PHP, or the event loop if using Node.js. Also have the download endpoint respond gracefully (for example, retry or show a "still processing" page) until the file exists, so the user never hits a 404 "file not found" error.
Transferring files from EC2 to S3 also adds some latency. But if you want to preserve the files for later or repeated use, then S3 is a good idea, since it can simply serve the PDF files for fast delivery; as we know, S3 is made for static media storage. Otherwise, simply compute everything and serve the files from EC2.
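In Laravel terms, a minimal sketch of this pattern (the MergePdfs job and the controller below are illustrative names, and an S3 disk plus a queue are assumed to be configured):

```php
<?php

namespace App\Http\Controllers;

use App\Jobs\MergePdfs; // hypothetical queued job that writes the merged PDF to S3
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

class PrintController extends Controller
{
    public function printMerged(Request $request)
    {
        // 1. Decide the final S3 key before doing any work.
        $key = 'merged/' . Str::random(40) . '.pdf';

        // 2. Queue the merge; the worker writes its output to exactly this key.
        MergePdfs::dispatch($request->input('transaction_ids'), $key);

        // 3. Hand back a time-limited public link immediately. The client
        //    should retry until the object exists, since S3 returns an error
        //    for the URL until the worker has finished writing the file.
        $url = Storage::disk('s3')->temporaryUrl($key, now()->addMinutes(30));

        return response()->json(['url' => $url]);
    }
}
```

The key point is that the S3 object key is chosen up front, so the pre-signed URL can be returned instantly while the merge runs on a worker instead of the web request.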

Related

Is it possible to create a Google Document directly from a Template's JSON

Since the Docs API allows for retrieval of Google Docs via JSON, is there a way to directly inject that JSON data, with alterations, into a new Google Document? I have several Google Documents that are used as templates for contract agreements, and the data points for all the information to be entered into the templates are already stored in a database.
My goal would be to store the template's JSON server-side and use that data to generate a current agreement and a modified agreement by iterating through the placeholder values, inserting database / user input values at those placeholders, and then creating a new Google Doc from that JSON that can be downloaded, skipping the step of creating Google-side copies, grabbing their IDs, and calling something like batchUpdate.
My inspiration for this came from Federico Tomassetti's post: A template system for Google Docs: Google Drive automation and PDF generation with Google Execution API.
However, Federico creates copies of the template and then fills each copy using Apps Script. I'm currently searching for a solution that would allow agreements to be fully generated before being inserted, to reduce overhead and simplify permissions, etc., skipping the step of creating copies and then editing them.
I'm just surprised there isn't some way to upload directly without having to call a GET on the template every time, create copies, and then edit those copies via batchUpdate. Hopefully I'm missing some API call that exists, but so far I haven't found anything.
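For context, the copy-then-fill flow being avoided looks roughly like this as raw REST calls; a sketch only, with placeholder document IDs and names, assuming a valid OAuth2 token with Drive and Docs scopes:

```php
<?php
// Assumptions: $accessToken is a valid OAuth2 token; the template ID and
// placeholder text below are illustrative.
$accessToken = 'YOUR_OAUTH2_TOKEN';
$templateId  = 'YOUR_TEMPLATE_DOC_ID';
$clientName  = 'Jane Doe';

function callGoogle(string $url, array $body, string $accessToken): array
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode($body),
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => [
            'Authorization: Bearer ' . $accessToken,
            'Content-Type: application/json',
        ],
    ]);
    $response = curl_exec($ch);
    curl_close($ch);
    return json_decode($response, true);
}

// 1. Copy the template document (Drive API v3).
$copy = callGoogle(
    "https://www.googleapis.com/drive/v3/files/{$templateId}/copy",
    ['name' => 'Agreement for ' . $clientName],
    $accessToken
);

// 2. Replace a placeholder in the copy (Docs API documents.batchUpdate).
callGoogle(
    "https://docs.googleapis.com/v1/documents/{$copy['id']}:batchUpdate",
    ['requests' => [[
        'replaceAllText' => [
            'containsText' => ['text' => '{{client_name}}', 'matchCase' => true],
            'replaceText'  => $clientName,
        ],
    ]]],
    $accessToken
);
```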

How can I upload multiple files from urls directly to cloud storage

I've tried some of the services out there, including droplet, ctrlq.org/save, and some other sites that support fetching a file directly from a URL and uploading it to Dropbox, Google Drive and the like, without the user having to store the file on a local disk.
The problem is that none of these services support multiple URLs or batch uploading, but I have quite a few URLs and I really need a service where I can paste them in, separated by newlines or semicolons, and have the files uploaded to Dropbox (or any other cloud storage).
Any help would be gladly appreciated.
The Dropbox Saver JavaScript control allows you to save up to 100 files to the user's Dropbox in one shot. You'll need to programmatically create the button using Dropbox.createSaveButton as explained in the linked page.
It seems like the 100-file limit (at any one time) is universal, but you might find that it isn't the case when using the Dropbox REST API. It looks possible to do this with Node.js server-side (OAuth and POSTs) or JavaScript client-side (automating FileReader). I'll review and try to add content so these aren't just links.
If you can leave a page open for about 20 minutes despite the "technical limitations", the Dropbox could be loaded 100 files at a time this way, assuming each upload takes less than 2 seconds; it's also an easy place to hook in a progress indicator.
If you're preloading the Dropbox once yourself, or the initial load is compatible with manual action, perhaps mapping a drive and unzipping an archive of your links to it would work. If your list of links isn't extremely volatile, then the REST API could be used to synchronize changes.
Edit: I forgot to include this page on CloudConvert, which unzips archives containing up to 100 files into Dropbox. Your use case doesn't seem to include retrieving the actual content at your servers (generating zip files), sending the automation list to the browser, and then having the browser extract to Dropbox, but it's another option.
The Dropbox API now offers the ability to save a file into Dropbox directly via a URL. There's a blog post about it here:
https://blogs.dropbox.com/developers/2015/06/programmatically-saving-a-url-to-dropbox/
The documentation can be found here:
https://www.dropbox.com/developers/core/docs#save-url
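With the current v2 API the equivalent endpoint is files/save_url; a minimal loop over a URL list might look like this (a sketch; $token is assumed to be a valid Dropbox OAuth access token, and the target paths are illustrative):

```php
<?php
// Queue a list of remote URLs for saving into Dropbox via /2/files/save_url.
// Dropbox fetches each URL server-side; the call is asynchronous.
$token = 'YOUR_DROPBOX_ACCESS_TOKEN';

$urls = [
    'https://example.com/report-1.pdf',
    'https://example.com/report-2.pdf',
];

foreach ($urls as $i => $url) {
    $ch = curl_init('https://api.dropboxapi.com/2/files/save_url');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => [
            'Authorization: Bearer ' . $token,
            'Content-Type: application/json',
        ],
        CURLOPT_POSTFIELDS => json_encode([
            'path' => sprintf('/saved/file-%d.pdf', $i),
            'url'  => $url,
        ]),
    ]);
    // The response contains an async_job_id that can be polled via
    // /2/files/save_url/check_job_status if you need completion status.
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);
}
```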

Retrieve the user response saved in a file in an app hosted on Cloudbees

I have hosted a Tomcat application on CloudBees which allows users to edit some XML files and saves them. I need to download these files and save them locally for my personal use; however, I could not find a way to do this. I tried the 'download source' option, but it downloads the original files that I had uploaded, not the edited versions. My application is able to access the edited versions, so clearly everything is being saved all right. Getting these files back is extremely critical and necessary for me and is, in fact, the whole motive of this app. Kindly tell me if there is some way to get the files back from CloudBees, or any other free Java hosting site which would allow me to do it.
It's not very clear from your question how your app is currently dealing with these files, but I'll take a swing at providing some general info.
To support editing and downloading of files, your app design would need to address the following issues:
How do users edit/upload the changed XML?
Where does your app store the changed XML?
How does your app retrieve the edited XML and make it available for download?
For #1, you will need to provide an edit or upload interface in your app for manipulating the XML files. I'm assuming this is something your app has already solved using a form of some kind.
For #2, you need to pick an approach for storing the files that is appropriate for your app's needs and the runtime environment where it will be deployed. For instance, on CloudBees (or most other cloud platforms), it's important to understand that the local filesystem of the app can be used for temporary storage, but it is not clustered and it will be wiped each time the app is updated or restarted. If these XML files need to be available forever, you will need to store them in a persistent location that is external to the application's runtime instance. Most developers use databases (such as the CloudBees MySQL service) to store persistent data in this way. In general, your app can store these files anywhere, but it needs to manage how to store them and how to retrieve them later.
For #3, to allow a user to download the changed files, you will need to implement your own mechanism for retrieving the file from its persistent location, and then send it back to the user's browser. If you want something like right-click "Save As" to work, then your app will just need to support a URL that can display the edited XML file directly in the browser. If your app then provides a link to that URL, users can download it using RightClick+SaveAs. If you want the user to be able to click on a button/link and trigger a Save As dialog automatically, then you'd need to write a URL handler (Servlet) that serves the XML content up using a Content-Disposition header (see this StackOverflow article). This header will tell the browser that the file is supposed to be saved to disk, and allows you to provide a default file name.
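The header mechanics are the same in any stack; here is a minimal illustration in PHP of the download-handler idea (a Java Servlet would set the same headers via response.setHeader). loadEditedXml() is a hypothetical helper that reads the edited file from wherever it is persisted:

```php
<?php
// Fetch the edited XML from the persistent store (e.g. a database).
// loadEditedXml() is a hypothetical retrieval helper, not a real API.
$xml = loadEditedXml($_GET['id']);

header('Content-Type: application/xml');
// "attachment" tells the browser to save the file to disk instead of
// displaying it, and suggests a default file name for the dialog.
header('Content-Disposition: attachment; filename="edited.xml"');
header('Content-Length: ' . strlen($xml));

echo $xml;
```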

saving a document to the internet so it can be shared by other users

I have a working Cocoa app that creates a database file and stores it locally. What I would like to do is store that file on a remote server so that different users of my app at different locations would be sharing the same file. My thought was to store the file on a website or FTP server, such as www.mydomain.com/mydatafile.
Forgetting about issues like two users attempting to access the file simultaneously for the moment, can someone point me to an example of how to properly construct the URL to be used?
I'm thinking that it should be a fairly simple process with two parts, the first of which is a cocoa NSURL question, and the second which is really more of a w3 issue:
Create the URL to the file itself, and
Append the username and password required to log in to the FTP site.
Any nudges in the right direction would be appreciated!
* edit *
I should mention that the file I would like to be shared by multiple users, is basically several custom objects stored as a file with NSKeyedArchiver...
I suggest you integrate your app with a cloud-based document storage, sharing, and editing service like Google Docs/Drive, unless you are going to provide very specific file formats native to your app or are doing something out of the ordinary.
Using something like this would save you time, and users won't have to create yet another login ID.

Image File Uploads Security

I am implementing a feature on my site to allow users to upload image files (AI, PDF, JPEG, GIF, TIFF). I know this can be very risky, but I was wondering what kind of security checks I should put in place to make sure these files do not cause my site any harm.
OR
Should I use something like Dropbox to host my images? If I do this, is it possible to get these images whenever I want so I can display them in the browser to the user?
Image uploads are fine, because you know what you want: an image.
The first rule is never to trust the client, so let the user upload the file (you may want to add an upload size limit).
Second, you have to ensure that the image really is an image, so:
Check the MIME type of the file (don't go by the file extension; use a real MIME-type check like the file shell command or an appropriate library)
To really make sure the file is OK, open and reprocess it using an image library like GD, ImageMagick, etc., and save the result to disk (keep in mind this takes some resources!). This will also filter out corrupted images.
An uploaded file usually doesn't harm the site itself but the users who download the file.
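A minimal PHP sketch of both checks, using finfo for the MIME type and GD for the reprocessing; only JPEG is handled here for brevity:

```php
<?php
// Validate an uploaded file as a real JPEG, then re-encode it to disk.
function validateAndStoreJpeg(string $tmpPath, string $destPath): bool
{
    // 1. Check the actual content's MIME type, not the file extension.
    $finfo = new finfo(FILEINFO_MIME_TYPE);
    if ($finfo->file($tmpPath) !== 'image/jpeg') {
        return false;
    }

    // 2. Reprocess: decoding and re-encoding strips anything that is not
    //    real image data, and fails outright on corrupted files.
    $image = @imagecreatefromjpeg($tmpPath);
    if ($image === false) {
        return false;
    }

    $ok = imagejpeg($image, $destPath, 90); // re-encode at quality 90
    imagedestroy($image);

    return $ok;
}
```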
I've dealt with the file-upload part of a project I worked on.
Some high-level suggestions to complement sled's answer:
The MIME type the browser sends is based on the file extension, so it's not useful on its own (the file has not been uploaded to the server yet at that point; the MIME type is just a 'guess' based on the extension).
So the solutions would be:
Do the content check client-side (before sending the HTTP request)
When you receive the whole file over HTTP, do the check server-side before persisting it to disk.
Other Suggestions:
The simple file extension check (whether by filename or MIME type) is the basic security measure that also has to be present.
Folder permissions: don't allow execute permissions, and don't allow the user to create new folders (they might create a sub-folder with execute permissions).
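A short sketch of the storage side: a fixed upload directory, a server-chosen random file name (never the client's), and non-executable permissions. The paths are illustrative:

```php
<?php
// Store an uploaded image under a name we choose, with safe permissions.
$uploadDir = '/var/www/uploads'; // ideally outside the web root

// Never reuse the client-supplied file name; generate our own.
$name = bin2hex(random_bytes(16)) . '.jpg';
$dest = $uploadDir . '/' . $name;

if (!move_uploaded_file($_FILES['image']['tmp_name'], $dest)) {
    http_response_code(400);
    exit('Upload failed');
}

chmod($dest, 0644); // readable by the web server, but not executable
```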
