There are already several articles about starting downloads from Flutter web.
I'll link this answer as an example:
https://stackoverflow.com/a/64075629/15537341
The procedure is always similar: request something from a server, maybe convert the body bytes to base64, and then use an AnchorElement to start the download.
It works perfectly for small files. Let's say 30 MB, no problem.
The whole file has to be loaded into the browser first, then the user starts the download.
What to do if the file is 10 GB?
Is there a way to read a stream from the server and write a stream to the user's download? Or is another way preferable, like copying the file to a special folder that is directly hosted by the webserver?
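One way to read the second idea in the question: instead of pulling all the bytes into the Flutter app, point the AnchorElement at a URL that the web server streams itself, so the browser's own download manager writes the file to disk. Below is a minimal server-side sketch, written in PHP only because the rest of this page already uses PHP; the path and filename are placeholders, not part of the original question.

<?php
// Hypothetical endpoint that streams a large file in small chunks.
// The browser (pointed here by an AnchorElement href) handles the saving,
// so the 10 GB file is never buffered inside the Flutter web app.
$path = '/data/big-file.bin'; // placeholder path
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="big-file.bin"');
header('Content-Length: ' . filesize($path));
$in = fopen($path, 'rb');
while (!feof($in)) {
    echo fread($in, 8192); // send 8 KB at a time
    flush();               // keep memory usage flat
}
fclose($in);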
My goal is to download a large zip file (15 GB) and extract it to Google Cloud Storage using Laravel Storage (https://laravel.com/docs/8.x/filesystem) and https://github.com/spatie/laravel-google-cloud-storage.
My "wish" is to sort of stream the file to Cloud Storage, so I do not need to store the file locally on my server (because it is running in multiple instances, and I want to keep the disk footprint as small as possible).
Currently, there does not seem to be a way to do this without having to save the zip file on the server. Which is not ideal in my situation.
Another idea is to use a Google Cloud Function (e.g. with Python) to download, extract, and store the file. However, Google Cloud Functions seem to be limited to a maximum timeout of 9 minutes (540 seconds). I don't think that will be enough time to download and extract 15 GB...
Any ideas on how to approach this?
You should be able to use streams for uploading big files. Here's example code to achieve it:
$disk = Storage::disk('gcs');
// Passing a read stream instead of the file contents lets Laravel/Flysystem
// upload the file chunk by chunk instead of loading it all into memory.
$disk->put($destFile, fopen($sourceZipFile, 'r'));
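If the source is a remote URL rather than a local file, the same idea should still work, assuming allow_url_fopen is enabled: fopen() returns a read stream for the HTTP response and Laravel passes it through to the bucket, so the zip never has to be written to local disk first. The URL and destination path below are placeholders, and this sketch does not cover extracting the archive.

$disk = Storage::disk('gcs');
$source = fopen('https://example.com/archive.zip', 'r'); // remote read stream (assumes allow_url_fopen)
$disk->put('archives/archive.zip', $source);             // streamed to the bucket chunk by chunk
if (is_resource($source)) {
    fclose($source);
}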
In short, my API will accept file uploads. I (ultimately) store them in S3, but to avoid uploading them to S3 during the same request, I queue the upload process and do it in the background.
Originally I stored the file on the server, queued the file path, and then, in the job, grabbed the contents from that path on the server and sent them to S3.
I develop/stage on a single server. My production environment will sit behind a load balancer with 2-3 servers. I realised that my jobs will fail 2/3 of the time, as the file I reference in the job may live on a different server than the one running the job.
I realised I could base64_encode the file contents and store that in Redis instead (as opposed to storing the path of the file), using the following:
$contents = base64_encode(file_get_contents($file)); // reads the whole file into memory, then base64-encodes it
UploadFileToCloud::dispatch($filePath, $contents, $lead)->onQueue('s3-uploads'); // the encoded contents travel through Redis with the job
I have quite a large Redis store, so I am confident I can do this for lots of small files (most likely in my case), but some files can be quite large.
I am starting to have concerns that I may run into issues with this method, most likely my Redis store running out of memory.
I have thought about using a shared drive between all my instances and reverting to my original method of storing only the file path, but I'm unsure.
Another issue: if a file upload fails and it's a big file, can the failed_jobs table handle the amount of data of, for example, a base64-encoded 20 MB PDF?
Is base64-encoding the file contents and queuing that the best method? Or can anyone recommend an alternative way to queue file uploads in a multi-server environment?
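For comparison, here is a rough sketch of the alternative mentioned above: queue only a path that every instance can resolve (e.g. on a shared mount or a temporary bucket) instead of the base64 payload. The 'shared' disk name, the $finalPath key, and the reduced job signature are assumptions for illustration, not part of the original code.

// Store the upload once on a disk that all instances behind the load balancer can reach.
$path = Storage::disk('shared')->putFile('pending-uploads', $file);

// Queue only the small path string instead of the base64-encoded contents.
UploadFileToCloud::dispatch($path, $lead)->onQueue('s3-uploads');

// Inside the job: stream from the shared disk to S3, then remove the temporary copy.
Storage::disk('s3')->put($finalPath, Storage::disk('shared')->readStream($path)); // $finalPath: your target key
Storage::disk('shared')->delete($path);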
We wanted to let our clients review the live streams we make. We checked the option 'Record all live streams' in the Wowza Engine Manager. We know that the streams are being saved inside the Wowza content folder, but since our engine runs on an EC2 instance, we could find no easy way for our clients to watch them other than downloading them through the console.
Can the manager be configured to show the videos there, like it is on Wowza Streaming Cloud?
In my case I set up a webserver (Apache 2) on the same machine, listening on port 8080 (Wowza uses 80 for HLS streaming), then set a symbolic link from /var/www/html/content to {Wowza installation folder}/content. This way users can reach the recordings at http://yourserver.com:8080/content.
By default Apache will list all files in the folder; if a file is .mp4 the browser will play the video, and if a file is .flv it will be downloaded.
If it's an option for you, you can move your recordings to S3. You would first mount an S3 bucket in your filesystem (s3fs), then configure the ModuleMediaWriterFileMover module to move the recorded files to the mount directory.
A better approach:
Move the files to an S3 bucket as soon as they are ready.
Wowza actually has a module for this (of course it does, everybody needs it)
https://www.wowza.com/forums/content.php?813-How-to-upload-recorded-media-to-an-Amazon-S3-bucket-(ModuleS3Upload)
So, as with every other module:
1- include the module files in the lib folder
2- go to the Engine Manager UI and add the module
3- set your keys and bucket in the module properties
Restart and done. It works like a charm, and no files are uploaded before they are ready.
Note: be careful, because unless you name each stream with a timestamp (as I am doing), Amazon will overwrite the file when one with the same name is uploaded.
Is it possible to create a stream in Wowza from multiple files, so that these files would be played one after another in a row? As far as I know, I can only stream from a single file in the content directory.
1.) I would like to split that one file for my own reasons (to add some security to it, etc.), then create a playlist from these multiple files and publish it for streaming, so it won't take as much time as the second way.
2.) Or do I need to put these multiple files back together and then publish the playlist?
I would also like to consider the time it takes to create the playlist, even with a big file. I am using ffmpeg to split the file into smaller pieces with a script.
That way it would be automatic: when a user requests a stream, I run the script that splits the files and creates the playlist for the user.
I hope I'm not approaching this the wrong way. Please help.
On the Wowza website you can download this module to create playlists on the server side without concatenating your files.
You may also want to check the com.wowza.wms.stream.publish.Stream class, which enables you to create a stream on the server side and attach playlist items to it. And this post will help you get started with creating dynamic playlists if you need that.
I am using a Liferay custom portlet, and in it I am using JasperReports. My problem is: how can I download the PDF report directly to the client machine?
Right now I am storing the file on the server first, then providing the user a URL for downloading the PDF. But how can I store the file directly on the client machine if I have the PDF file's output stream?
Or, if I can somehow know when the user clicks the download link and the file has finished downloading, and I then want to delete the downloaded file from the server, how can I do that? If anyone can guide me...
I'm not sure what you're asking for is possible, but I would be interested in seeing someone correct that statement.
Servers really shouldn't be directly storing files on a client machine, as that violates the intent of the client-server relationship. A client has to make a request for the file, and then the client can save that file (e.g. like an FTP download). Servers just don't manipulate client machines as they see fit.
As far as knowing when a file has been downloaded, there isn't anything in a portlet you can do to detect that. You can use a ResourceRequest and the serveResource method to serve a file, but nothing in the portlet API will inform your portlet that the download is complete or that it wasn't interrupted by something.
As an alternative, you might try simply having a cron job that cleans out old files. In that case, make sure to inform users how long they have to successfully download the file.