How can I upload multiple files from URLs directly to cloud storage?

I've tried some of the services out there, including Droplet, ctrlq.org/save, and other sites that support fetching a file directly from a URL and uploading it to Dropbox, Google Drive, and the like, without the user having to store the file on a local disk.
The problem is that none of these services support multiple URLs or batch uploading, but I have quite a few URLs and I really need a service where I can put them in, separated by newlines or semicolons, and have the files uploaded to Dropbox (or any other cloud storage).
Any help would be greatly appreciated.

The Dropbox Saver JavaScript control allows you to save up to 100 files to the user's Dropbox in one shot. You'll need to programmatically create the button using Dropbox.createSaveButton as explained in the linked page.
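A rough sketch of how that could look for this question's use case, assuming you've registered a Dropbox app key (the element IDs and helper name are mine; the Saver's documented options include files, success, progress, cancel, and error):

    <!-- Dropbox Saver drop-in; data-app-key is your own app's key -->
    <script src="https://www.dropbox.com/static/api/2/dropins.js"
            id="dropboxjs" data-app-key="YOUR_APP_KEY"></script>

    <textarea id="url-list" placeholder="One URL per line, or ; separated"></textarea>
    <button onclick="buildSaver()">Build save button</button>
    <div id="saver"></div>

    <script>
    function buildSaver() {
      // Split the pasted list on newlines or semicolons, as the question
      // asks, and build the {url, filename} descriptors the Saver expects.
      var files = document.getElementById('url-list').value
        .split(/[\n;]+/)
        .map(function (u) { return u.trim(); })
        .filter(Boolean)
        .map(function (u) {
          return { url: u, filename: u.split('/').pop() };
        });

      var button = Dropbox.createSaveButton({
        files: files.slice(0, 100), // the Saver caps one call at 100 files
        success: function () { console.log('All files saved'); },
        progress: function (p) { console.log('Progress: ' + p); },
        error: function (msg) { console.error('Save failed: ' + msg); }
      });
      document.getElementById('saver').appendChild(button);
    }
    </script>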

It seems like the 100-file limit (at any one time) is universal, but it may not apply when using the Dropbox REST API. It looks possible to do this server-side with Node.js (OAuth and POSTs) or client-side in JavaScript (automating FileReader). I'll review and try to add content so these aren't just pointers.
If you can leave a page open for about 20 minutes due to "technical limitations", the Dropbox should be loadable 100 files at a time like that, assuming each upload takes less than 2 seconds; the Saver also gives you an easy hook for adding a progress indicator. A sketch of this batching approach is below.
If you're preloading the Dropbox once yourself, or if the initial load can be done manually, mapping Dropbox as a drive and unzipping an archive of your links to it might work. If your list of links isn't extremely volatile, the REST API could then be used to synchronize changes.
Edit: I forgot to include this page on CloudConvert, which unzips archives containing up to 100 files into Dropbox. Your use case doesn't seem to include retrieving the actual content at your servers (to generate zip files), sending the automation list to the browser, and then having the browser extract to Dropbox, but it's another option.
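And a concrete sketch of the 100-at-a-time batching idea, building on the snippet above (helper name mine): chunk the list into groups of 100 and render one Saver button per chunk, which the user clicks through while the page stays open.

    // Render one save button per batch of 100 URLs (the Saver's
    // per-call limit); the user clicks through them one by one.
    function renderBatches(urls, container) {
      for (var i = 0; i < urls.length; i += 100) {
        var files = urls.slice(i, i + 100).map(function (u) {
          return { url: u, filename: u.split('/').pop() };
        });
        container.appendChild(Dropbox.createSaveButton({
          files: files,
          progress: function (p) { console.log('batch progress: ' + p); },
          success: function () { console.log('batch saved'); }
        }));
      }
    }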

The Dropbox API now offers the ability to save a file into Dropbox directly via a URL. There's a blog post about it here:
https://blogs.dropbox.com/developers/2015/06/programmatically-saving-a-url-to-dropbox/
The documentation can be found here:
https://www.dropbox.com/developers/core/docs#save-url
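The linked documentation is for the old Core (v1) API; its v2 successor is the files/save_url endpoint, which queues an asynchronous save job per URL. A minimal sketch (Node 18+ for the global fetch; the token variable and destination folder are my own assumptions):

    // Queue a server-side "save this URL into Dropbox" job per URL.
    // Assumption: DROPBOX_TOKEN is an OAuth 2 access token for your app.
    const ACCESS_TOKEN = process.env.DROPBOX_TOKEN;

    async function saveUrl(url) {
      const res = await fetch('https://api.dropboxapi.com/2/files/save_url', {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer ' + ACCESS_TOKEN,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          path: '/saved/' + url.split('/').pop(), // destination in Dropbox
          url: url
        })
      });
      return res.json(); // contains an async job id you can poll
    }

    const urls = ['https://example.com/a.pdf', 'https://example.com/b.pdf'];
    for (const u of urls) {
      saveUrl(u).then((job) => console.log(u, '->', job));
    }

Because each save runs server-side at Dropbox, this sidesteps the original batch problem: you just loop over the URL list.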

Use EC2 for PDF Generation, provide public URL to user

I have developed an application which allows users to select multiple "transactions"; each of these is directly related to a PDF file.
When a User multi-selects them, and "prints" them, these PDF files are merged into one longer file to provide ease of print.
Currently, "transaction" PDFs are generated on request, and so is PDF-merging.
As I try to scale this up on Amazon infrastructure, a few questions have arisen:
Should I implement a queue for the PDF generation per "transaction"? If so, how can I provide the user a seamless experience? We don't want them to "wait"
Can I use EC2 to generate these PDF files for me? If so, can I provide a "public" link for the user to download the file directly from Amazon, instead of using our resources.
Thanks a lot!
EDIT ---- More details
User inputs some information through a regular form
System generates a PDF per request, using the provided information for the document
The PDF generated by the system is kept under Amazon S3
We provide an API which allows you to "print" multiple PDFs at once; to do so, we merge the selected PDF files from S3 into one file for ease of print.
When you multi-print documents, a new window is opened containing the merged file; the user needs to wait around 20 seconds for it to display.
We want to offload the resources used to generate the PDFs onto Amazon infrastructure, but we need to keep the same flow; that is, we should provide an instant public link for the user to download and print the files.
If I understand correctly, you just need the link to be created immediately, right after the user requests the file, while the PDF merge runs in parallel. I have an idea for doing that which may work in your situation; a sketch follows below.
Start with some logic that creates a unique PDF file name, with a random string as the name. At the same time, generate the PDF in the background, giving it the same name you created in the first step. This gives the user an instant file name and download link, even though file creation is still in progress.
Make sure you use threads (if using PHP) or the event loop (if using Node.js) to run both steps at the same time. This is how you avoid a 404 "file not found" error.
Transferring files from EC2 to S3 also adds latency. But if you want to preserve the files for later or repeated use, S3 is a good idea, since it can simply serve the PDF files for faster delivery; as we know, S3 is commonly used for static media storage. Otherwise, compute everything and generate the files on EC2.
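A minimal Node/Express sketch of that flow, under some loud assumptions: the bucket name and route are made up, mergePdfs() is a hypothetical helper, and S3 access goes through the aws-sdk. The presigned URL can be returned immediately because signing happens locally, before the object exists.

    const express = require('express');
    const crypto = require('crypto');
    const AWS = require('aws-sdk');

    const app = express();
    const s3 = new AWS.S3();
    const BUCKET = 'my-pdf-bucket'; // assumption: your bucket name

    // Hypothetical helper: merges the selected transaction PDFs
    // into a single Buffer.
    async function mergePdfs(transactionIds) { /* ... */ }

    app.post('/print', express.json(), (req, res) => {
      // Step 1: unique random file name, link returned immediately.
      const key = 'merged/' + crypto.randomBytes(16).toString('hex') + '.pdf';
      const url = s3.getSignedUrl('getObject', {
        Bucket: BUCKET, Key: key, Expires: 3600
      });
      res.json({ url }); // the user gets an instant link

      // Step 2: in the background, merge and upload under the same key.
      mergePdfs(req.body.transactionIds)
        .then((buffer) => s3.upload({
          Bucket: BUCKET, Key: key, Body: buffer,
          ContentType: 'application/pdf'
        }).promise())
        .catch((err) => console.error('merge/upload failed', err));
    });

    app.listen(3000);

In this sketch the link returns a 404 until the background upload finishes, so the window that opens it should poll (or show a spinner) until the file is actually there.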

Realtime drive javascript example not working - Google API

I set up the Realtime drive example shown here: https://developers.google.com/drive/realtime/realtime-quickstart
On this site: http://shuub.com
The thing is that when I access the link from a different browser (logged in to a different Google account), it won't load the file.
All I need is to edit some plain text with another user, without needing a Google account; it doesn't even need to be saved after closing the site. Is that possible?
Thanks for reading.
The thing is that when I access the link from a different browser (logged in to a different Google account), it won't load the file.
Probably you need to share the file with the other user first.
Open Google Drive in your Browser. If you did not modify the example code, your file should be located in the root folder. It's probably named "New Realtime Quickstart File". Right-click on the file and share it with the other user by adding his account to the list and granting all permissions.
All I need is to edit some plain text with another user, without needing a Google account; it doesn't even need to be saved after closing the site. Is that possible?
The website you have linked is not reachable, so I don't know what you want to do exactly.
For that use case (no saving, no login) you could also use other, simpler techniques, like Mozilla's TogetherJS (you can try it on jsfiddle.net), or a tool like Etherpad.
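For reference, embedding TogetherJS is roughly this, going by its documented quick-start pattern (treat it as a sketch):

    <!-- Load TogetherJS and start a session on click; collaborators
         just open the shared link, no Google account needed. -->
    <script src="https://togetherjs.com/togetherjs-min.js"></script>
    <textarea placeholder="Edit plain text together..."></textarea>
    <button onclick="TogetherJS(this); return false;">
      Start collaborating
    </button>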

Retrieve the user response saved in a file in an app hosted on Cloudbees

I have hosted a Tomcat application on CloudBees which allows users to edit some XML files and save them. I need to download and save these files locally for my personal use. However, I could not find a way to do this. I tried the 'download source' option, but it downloads the original files that I had uploaded, not the edited versions. My application is able to access the edited versions, so clearly everything is being saved all right. Getting these files back is extremely critical and necessary for me and is, in fact, the whole motive of this app. Kindly tell me if there is some way to get back the files on CloudBees, or any other free Java hosting site that would allow me to do it.
It's not very clear from your question how your app is currently dealing with these files, but I'll take a swing at providing some general info.
To support editing and downloading of files, your app design would need to address the following issues:
How do users edit/upload the changed XML?
Where does your app store the changed XML?
How does your app retrieve the edited XML and make it available for download?
For #1, you will need to provide an edit or upload interface in your app for manipulating the XML files. I'm assuming this is something your app has already solved using a form of some kind.
For #2, you need to pick an approach for storing the files that is appropriate for your app's needs and the runtime environment where your app will be deployed. For instance, on CloudBees (or most other cloud platforms), it's important to understand that the local filesystem of the app can be used for temporary storage, but it is not clustered and it will be wiped away each time the app is updated or restarted. If these XML files need to be available forever, you will need to store them in a persistent location that is external to the application's runtime instance. Most developers use databases (such as the CloudBees MySQL service) to store persistent data in this way. In general, your app can store these files anywhere, but your app needs to manage how to store them and how to retrieve them later.
For #3, to allow a user to download the changed files, you will need to implement your own mechanism for retrieving the file from its persistent location, and then send it back to the user's browser. If you want something like right-click "Save As" to work, then your app will just need to support a URL that can display the edited XML file directly in the browser. If your app then provides a link to that URL, users can download it using RightClick+SaveAs. If you want the user to be able to click on a button/link and trigger a Save As dialog automatically, then you'd need to write a URL handler (Servlet) that serves the XML content up using a Content-Disposition header (see this StackOverflow article). This header will tell the browser that the file is supposed to be saved to disk, and allows you to provide a default file name.
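The paragraph above describes a Java Servlet; purely to illustrate the Content-Disposition mechanics, here is the same idea sketched in Node/Express instead (the route and the file-lookup helper are mine):

    const express = require('express');
    const app = express();

    // Hypothetical helper: loads the edited XML from wherever you
    // persisted it (e.g. a database), keyed by id.
    async function loadEditedXml(id) { /* ... returns a string */ }

    app.get('/download/:id', async (req, res) => {
      const xml = await loadEditedXml(req.params.id);
      if (xml == null) return res.sendStatus(404);
      // Content-Disposition tells the browser to save to disk rather
      // than display, and supplies a default file name.
      res.set('Content-Type', 'application/xml');
      res.set('Content-Disposition',
              'attachment; filename="' + req.params.id + '.xml"');
      res.send(xml);
    });

    app.listen(3000);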

Making websites available offline

I am using HTML5 offline storage. The goal is to make the whole site available offline, so intuitively, no server requests means all the pages need to be on the client. The only way I know of to accomplish this is to make the site into one page and then show/hide portions with jQuery as the user "navigates". Is there a better way?
The HTML5 offline spec allows multiple pages to be saved offline, so you don't need to put all your content onto one page.
EDIT: link to the spec: http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
Be careful that your jQuery include does not still point to the cloud. You'll need to save the relevant .js files locally.
N.B. If your whole site can be generated and saved as individual .html files, then all you need to do is save these files in the correct (relative) directory structure.
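As a concrete example, a minimal cache manifest for a multi-page site could look like this (file names are placeholders); each page opts in via <html manifest="site.appcache">:

    CACHE MANIFEST
    # v1 - change this comment to force clients to re-download

    CACHE:
    index.html
    about.html
    css/site.css
    js/jquery.min.js

    NETWORK:
    *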

How to cache images and html files in PhoneGap

I need a way to cache images and HTML files in PhoneGap from my site. I'm planning for users to see the site without an internet connection just as they would with one. But I can only find information about storing SQL data; how can I store images (and use them later)?
To cache images, check out this library, of which I'm the creator: imgcache.js. It's designed for the very purpose of caching images using the local filesystem. If you check out the examples, you will see that it can also detect when an image fails to load (because you're offline or have a very bad connection) and then replaces it automatically with the cached image. The user of the webapp doesn't even notice being offline.
As for HTML pages: if they're static HTML files, they could be stored locally in the web app (file:// in PhoneGap).
If they're dynamically generated pages, check the localStorage API if you have a small amount of data, otherwise the filesystem API.
For my web app I retrieve only JSON data from my server (and process/render it using Backbone+Underscore). The JSON payload is stored in localStorage. If the application goes offline, it fetches the JSON data from localStorage instead of the server (a home-baked fork of Backbone.dualStorage).
You then get the full offline experience: pages+images.
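Typical usage, going by imgcache.js's documented jQuery-style API (a sketch; the selector is an example):

    // Initialise the cache, then cache-or-reuse a given <img> element.
    ImgCache.init(function () {
      var target = $('img#offline-photo'); // example selector
      ImgCache.isCached(target.attr('src'), function (path, success) {
        if (success) {
          // Already cached: point the img at the local copy.
          ImgCache.useCachedFile(target);
        } else {
          // Not cached yet: download it, then swap the src.
          ImgCache.cacheFile(target.attr('src'), function () {
            ImgCache.useCachedFile(target);
          });
        }
      });
    });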
Caching like you might need for simple offline operation is not exactly that easy.
Your first option is the cache manifest. It has some limitations (like the size of the cache) but might work for you since it was designed to do what you want.
Another option is to store content on the disk of the device using the filesystem APIs. This has some drawbacks, like security and the fact that you have to load the file from a path/URL that is different from the one you would normally load it from on the web. Check out the hydra plugin for an example of this.
One final option might be to store stuff in localStorage (which has the benefit of being private on all platforms) and then pull it out of there when needed... that means base64'ing all your images though, so it's a pretty big departure from standard caching.
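A sketch of that localStorage route, using a canvas to base64 an image (the helper names are mine; mind the roughly 5 MB quota):

    // Convert an already-loaded <img> to a data URL and store it.
    function cacheImage(img, key) {
      var canvas = document.createElement('canvas');
      canvas.width = img.naturalWidth;
      canvas.height = img.naturalHeight;
      canvas.getContext('2d').drawImage(img, 0, 0);
      // toDataURL base64-encodes the pixels (same-origin or
      // CORS-enabled images only).
      localStorage.setItem(key, canvas.toDataURL('image/png'));
    }

    // Later, while offline: pull it back out.
    function restoreImage(img, key) {
      var dataUrl = localStorage.getItem(key);
      if (dataUrl) img.src = dataUrl;
    }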
Caching is very much possible on Android OS, but on Apple, as stated above, there are limitations on image size, cache size, etc.
If you are willing to allow caching on iOS, you can use the cache manifest to do so, but keep the drawbacks and limitations in mind.
Also, if you save files to the Documents folder of your app, Apple will reject the app. The reason is that since iOS 6 the system backs up all data under the Documents folder to iCloud, so Apple does not allow big data, like images or JSON files that could simply be synced from your server again, to be kept in this folder.
A good workaround is to use LocalFileSystem.TEMPORARY instead. It does not save the data to Library/Caches, but to the app's temp folder, which is neither auto-backed-up to iCloud nor auto-deleted.
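A sketch of that workaround with the Cordova file and file-transfer plugins (the URL and file name are placeholders):

    // Ask for the TEMPORARY filesystem, then download an image into it.
    window.requestFileSystem(LocalFileSystem.TEMPORARY, 0, function (fs) {
      var localPath = fs.root.toURL() + 'cached-image.jpg';
      var ft = new FileTransfer();
      ft.download(
        encodeURI('https://example.com/image.jpg'), // placeholder URL
        localPath,
        function (entry) { console.log('saved to ' + entry.toURL()); },
        function (err) { console.error('download failed', err); }
      );
    }, function (err) { console.error('filesystem error', err); });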
