Delete files before uploading via Fine Uploader

(using Fine Uploader with Traditional endpoints)
Application flow:
User will upload several files
Server will perform some backend operations on these files
Files will then be deleted
Because of these "backend operations" it would be cleaner to delete all the files in the upload directory before any new files are uploaded. Fine Uploader has a lot of callbacks; my thinking was to handle the validateBatch event and kick off my delete operation there, but how do I then tell Fine Uploader to continue?
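One approach, going by Fine Uploader's documented support for "promissory" event handlers: return a qq.Promise from onValidateBatch, resolve it once the server-side cleanup finishes, and the batch proceeds (failing it cancels the batch). The /cleanup endpoint below is a hypothetical stand-in for whatever deletes your upload directory; treat this as a sketch, not a confirmed recipe.

    // Defer the batch until a server-side cleanup finishes.
    var uploader = new qq.FineUploader({
        element: document.getElementById('uploader'),
        request: { endpoint: '/uploads' },
        callbacks: {
            onValidateBatch: function () {
                var promise = new qq.Promise();
                // Kick off the delete (hypothetical endpoint); resolving
                // lets Fine Uploader continue, failing cancels the batch.
                fetch('/cleanup', { method: 'POST' })
                    .then(function () { promise.success(); })
                    .catch(function () { promise.failure('cleanup failed'); });
                return promise;
            }
        }
    });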

Related

Strapi: Upload files to a specific media library folder using strapi upload api

I'm trying to upload files to Strapi's media library through its upload API.
Right now, all the files are getting uploaded to the "API Uploads" folder.
Is there a way to upload the files to a folder of my choice?
I couldn't find any details about it in the documentation.
Currently this is not possible, according to the documentation last updated on December 14, 2022: https://docs.strapi.io/developer-docs/latest/plugins/upload.html#endpoints
Folders are an admin panel feature and are not part of the REST or the GraphQL API. Files uploaded through the REST or GraphQL API are located in the automatically created "API Uploads" folder.
There is a feature request on this, so users can vote: https://feedback.strapi.io/feature-requests/p/support-for-media-folders-in-the-content-api
(In the feature request there's discussion of a hacky way to do it; I did not test it myself.)
Also, if you have to create many folders (like I had to), you would have to do it manually through the interface; my solution was to automate it with a script that creates them directly in the DB table.
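For what it's worth, a sketch of that kind of script, assuming a Strapi v4 install where media-library folders live in an upload_folders table with name, path_id, and path columns. The table and column names are assumptions; inspect your own schema before running anything like this.

    // Bulk-create media-library folders by inserting rows directly.
    const knex = require('knex')({
      client: 'pg',                       // adjust to your DB client
      connection: process.env.DATABASE_URL,
    });

    async function createFolders(names) {
      // path_id must be unique, so continue from the current maximum.
      const [{ max }] = await knex('upload_folders').max('path_id as max');
      let nextId = (max || 0) + 1;

      for (const name of names) {
        await knex('upload_folders').insert({
          name,
          path_id: nextId,
          path: '/' + nextId,             // top-level folder path
          created_at: new Date(),
          updated_at: new Date(),
        });
        nextId += 1;
      }
      await knex.destroy();
    }

    createFolders(['invoices', 'receipts', 'contracts']);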

Use EC2 for PDF Generation, provide public URL to user

I have developed an application which allows users to select multiple "transactions"; each of these is directly related to a PDF file.
When a User multi-selects them, and "prints" them, these PDF files are merged into one longer file to provide ease of print.
Currently, "transaction" PDFs are generated on request, and so is PDF-merging.
I'm trying to scale this up by relying on Amazon infrastructure, and some questions have arisen:
Should I implement a queue for the PDF generation per "transaction"? If so, how can I provide the user a seamless experience? We don't want them to "wait".
Can I use EC2 to generate these PDF files for me? If so, can I provide a "public" link for the user to download the file directly from Amazon, instead of using our resources?
Thanks a lot!
EDIT ---- More details
User inputs some information through a regular form
System generates a PDF per request, using the provided information for the document
The PDFs generated by the system are kept in Amazon S3
We provide an API which allows you to "print" multiple PDFs at once; to do so, we merge the selected PDF files from S3 into one file for ease of print
When you multi-print documents, a new window opens with the merged file directly; the user has to wait around 20 seconds for it to display.
We want to offload the resources used to generate the PDFs onto Amazon infrastructure, but we need to keep the same flow, meaning we should provide an instant public link for the user to download & print the files.
From my understanding, you just need the link to be created immediately, right after the user requests the file, while the PDF merge is created in parallel. I have an idea for that, and it may work in your situation.
First, start with some logic that creates a unique PDF file name, e.g. a random string. At the same time, generate the PDF in the background, giving it exactly the name created in the first step. This gives the user an instant file name and download link, even while the file creation is still in progress.
Make sure you use threads (PHP) or the event loop (Node.js) to run both steps at the same time, so the user does not hit a 404 "file not found" error.
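A minimal Node.js sketch of that flow: create the key up front, fire off the merge without awaiting it, and return a pre-signed S3 link immediately. mergePdfs() is a hypothetical helper (fetch the selected PDFs from S3 and merge them into one Buffer), and the bucket name is a placeholder.

    const crypto = require('crypto');
    const AWS = require('aws-sdk');

    const s3 = new AWS.S3();
    const BUCKET = 'my-pdf-bucket'; // placeholder bucket name

    function requestMergedPdf(transactionKeys) {
      // 1. Create the unique name up front so the link exists immediately.
      const key = 'merged/' + crypto.randomBytes(16).toString('hex') + '.pdf';

      // 2. Start the merge without awaiting it (the event loop keeps the
      //    request handler free) and upload the result under the same key.
      //    mergePdfs() is a hypothetical helper returning Promise<Buffer>.
      mergePdfs(transactionKeys)
        .then(buffer => s3.putObject({
          Bucket: BUCKET,
          Key: key,
          Body: buffer,
          ContentType: 'application/pdf',
        }).promise())
        .catch(err => console.error('merge failed', err));

      // 3. Hand back a time-limited link right away; the client may need
      //    to retry until the object actually appears in S3.
      return s3.getSignedUrl('getObject', {
        Bucket: BUCKET,
        Key: key,
        Expires: 3600, // seconds
      });
    }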
Transferring files from EC2 to S3 would also add some latency. But if you want to preserve files for later or repeated use, S3 is a good idea, as it can simply serve the PDF files for faster delivery; S3 is, after all, made for static media storage. Otherwise, simply compute everything and serve the files from EC2.

How to program an event that detects when a file is uploaded to Yammer?

I want an event that is thrown when a file is uploaded to Yammer. I am not able to do it with the Yammer JavaScript SDK.
You don't specify how quickly you need to respond to a file being uploaded. The Data Export API allows you to download just the files.csv, which includes a list of uploaded files. You can consume this to know which new files have been uploaded.
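A rough polling sketch along those lines, with the caveat that the export endpoint shape and the files.csv layout are assumptions drawn from the Data Export documentation rather than tested facts:

    const AdmZip = require('adm-zip');      // npm install adm-zip

    const TOKEN = process.env.YAMMER_TOKEN; // verified-admin OAuth token
    const seen = new Set();                 // file ids already reported

    async function checkForNewFiles(sinceIso) {
      // Assumption: the export endpoint accepts model/since parameters
      // and returns a zip archive containing Files.csv.
      // Uses the global fetch available in Node 18+.
      const res = await fetch(
        'https://www.yammer.com/api/v1/export?model=Files&since=' + sinceIso,
        { headers: { Authorization: 'Bearer ' + TOKEN } }
      );
      const zip = new AdmZip(Buffer.from(await res.arrayBuffer()));
      const csv = zip.readAsText('Files.csv');

      // Assumption: the first column is the file id; skip the header row.
      for (const line of csv.split('\n').slice(1)) {
        const id = line.split(',')[0];
        if (id && !seen.has(id)) {
          seen.add(id);
          console.log('new file uploaded:', id);
        }
      }
    }

    // Poll every five minutes for files uploaded in the last day.
    setInterval(() => {
      const since = new Date(Date.now() - 864e5).toISOString();
      checkForNewFiles(since).catch(console.error);
    }, 5 * 60 * 1000);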

How can I upload multiple files from urls directly to cloud storage

I've tried some of the services out there, including Droplet, ctrlq.org/save, and other sites that support fetching a file directly from a URL and uploading it to Dropbox, Google Drive, and the like, without the user having to store the file on a local disk.
Now the problem is that none of these services support multiple URLs or batch uploading, but I have quite a few URLs and I really need a service where I can put them in, split by newlines or semicolons, and have the files uploaded to Dropbox (or any other cloud storage).
Any help would be gladly appreciated.
The Dropbox Saver JavaScript control allows you to save up to 100 files to the user's Dropbox in one shot. You'll need to programmatically create the button using Dropbox.createSaveButton as explained in the linked page.
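A minimal sketch of that approach, assuming dropins.js is already loaded on the page with your app key; the element ids for the URL textarea and the button slot are hypothetical:

    // Turn a pasted list of URLs into one Saver invocation (max 100 files).
    var urls = document.getElementById('url-list').value.split(/[;\n]+/);

    var button = Dropbox.createSaveButton({
      files: urls.map(function (url) {
        return { url: url, filename: url.split('/').pop() };
      }),
      success: function () { console.log('All files saved.'); },
      error: function (msg) { console.error('Save failed:', msg); }
    });

    document.getElementById('saver-slot').appendChild(button);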
It seems like the 100-file limit (at any one time) is universal, but you might find that it isn't the case when using the Dropbox REST API. It looks possible to do this server-side with Node.js (OAuth and POSTs) or client-side with JavaScript (automating FileReader). I'll review and try to add content so these aren't just links.
If you can leave a page open for about 20 minutes despite "technical limitations", the Dropbox should be loadable 100 files at a time that way, assuming each upload takes less than 2 seconds; adding a progress indicator is an easy hook.
If you're preloading the Dropbox once yourself, or the initial load is compatible with manual action, perhaps mapping a drive and unzipping an archive of your links to it would work. If your list of links isn't extremely volatile, then the REST API could be used to synchronize changes.
Edit: Forgot to include this page on CloudConvert, which unzips archives containing up to 100 files into Dropbox. Your use case doesn't seem to include retrieving the actual content at your servers (generated zip files), sending the automation list to the browser and then having the browser extract to Dropbox, but it's another option.
The Dropbox API now offers the ability to save a file into Dropbox directly via a URL. There's a blog post about it here:
https://blogs.dropbox.com/developers/2015/06/programmatically-saving-a-url-to-dropbox/
The documentation can be found here:
https://www.dropbox.com/developers/core/docs#save-url
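A sketch of batch-saving with that endpoint, looping over the URL list. The request shape is taken from the Core API docs linked above and may have changed since, so verify it before relying on this:

    // Uses the global fetch available in Node 18+.
    const TOKEN = process.env.DROPBOX_TOKEN; // OAuth access token

    async function saveUrls(urls) {
      for (const url of urls) {
        const name = url.split('/').pop() || 'download';
        const res = await fetch(
          'https://api.dropbox.com/1/save_url/auto/' + encodeURIComponent(name),
          {
            method: 'POST',
            headers: {
              Authorization: 'Bearer ' + TOKEN,
              'Content-Type': 'application/x-www-form-urlencoded',
            },
            body: 'url=' + encodeURIComponent(url),
          }
        );
        // Each call returns a job id that /save_url_job/<id> can poll.
        console.log(url, '->', await res.json());
      }
    }

    // Split a pasted list on newlines or semicolons, as described above.
    saveUrls('http://example.com/a.pdf;http://example.com/b.pdf'.split(/[;\n]+/));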

Fine Uploader initial file list, origin of list?

I understand how to respond to Fine Uploader in order to populate the file list with previously uploaded files. However, I am unsure which event or procedure is best to follow in order to record the files a user might upload. Initially one might retain the list of files via the success POST, but that does not cover uploads that failed, were paused, or were stopped; those list items need to be reported to the server as they happen, since there will be no success POST for them. Is there a built-in mechanism for this, or should I build my own, posting back to the server whatever is in the list whenever it changes, then recalling all of it in the initial file list call?
Your initial file list should only ever be populated with files that have been successfully uploaded in a previous session. If you are using Fine Uploader S3 (and I know you are from previous discussions) this would mean that files that have been associated with upload success calls are the only ones you should ever include in your initial file list. How you gather the data for the initial file list is entirely dependent upon the inner workings of your application. Presumably, you have a DB that contains metadata along with state for all uploaded files.
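For reference, a minimal sketch of wiring up the initial file list with Fine Uploader S3's session option. The endpoint name is an assumption; your server would return a JSON array describing previously successful uploads (for S3, each item carries name, uuid, s3Key, and s3Bucket):

    var uploader = new qq.s3.FineUploader({
        element: document.getElementById('uploader'),
        request: { endpoint: 'https://my-bucket.s3.amazonaws.com' },
        session: {
            // Hypothetical route; should respond with e.g.
            // [{ "name": "a.pdf", "uuid": "...",
            //    "s3Key": "...", "s3Bucket": "my-bucket" }]
            endpoint: '/uploads/session'
        }
    });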
