Is this even possible? I tried using FFMPEG, but it only lets me dump it to a file on the server host, where I don't have much free space, so I'm wondering if it's somehow possible to dump it somewhere else, like an FTP server?
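For reference, the two-step workflow described above (dump locally with ffmpeg, then push the file to FTP) looks roughly like this in Python; the source URL, host, credentials and file names are placeholders, so treat it as a sketch rather than a ready-made answer.

import subprocess
from ftplib import FTP

# Hypothetical source and output name; replace with the real stream/file.
SOURCE = "rtmp://example.com/live/stream"
LOCAL_FILE = "dump.mp4"

# Step 1: dump to a local file with ffmpeg (the part that eats local disk space).
subprocess.run(["ffmpeg", "-i", SOURCE, "-c", "copy", LOCAL_FILE], check=True)

# Step 2: push the finished file to an FTP server so the local copy can be removed.
with FTP("ftp.example.com") as ftp:
    ftp.login("user", "password")
    with open(LOCAL_FILE, "rb") as f:
        ftp.storbinary(f"STOR {LOCAL_FILE}", f)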
Related
There are already several articles about starting downloads from Flutter web.
I'll link this answer as an example:
https://stackoverflow.com/a/64075629/15537341
The procedure is always similar: request something from a server, maybe convert the body bytes to base64, and then use the AnchorElement to start the download.
It works perfectly for small files. Let's say 30 MB, no problem.
The whole file has to be loaded into the browser first, then the user starts the download.
What to do if the file is 10 GB?
Is there a way to read a stream from the server and write a stream to the user's download? Or is another way preferable, like copying the file to a special folder that is directly hosted by the webserver?
I am using a scraper and uploading my data into Redshift using EC2. I would prefer not to upload the data into S3 first. My code is in a Jupyter Notebook. However, I get the "String contains invalid or unsupported UTF8 codepoints. Bad UTF8 hex sequence: 80 (error 3)" error that I see a lot of other people have asked about previously. I even found a page on Redshift that walks through using a Remote Desktop. However, as I said before, I would prefer not going through S3. Is this possible?
Currently using psycopg2 to connect to the database. I figured it wouldn't work, but I tried just putting in acceptinvchars after the database user/password line, and it said that ACCEPTINVCHARS isn't defined.
If you want to copy data to Redshift right from your notebook, you have to compose valid INSERT statements and execute them against the existing table in Redshift. However, the throughput of this approach is quite low. I don't know how much data you plan to write, but I guess scrapers should have higher throughput than that. You can instead first write the output of your Python script to the same EC2 instance and use the COPY command.
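A minimal sketch of the INSERT approach with psycopg2, assuming a hypothetical table my_table with two text columns (the connection details are placeholders):

import psycopg2

# Placeholder connection details; reuse whatever you already pass to psycopg2.
conn = psycopg2.connect(host="my-cluster.redshift.amazonaws.com", port=5439,
                        dbname="dev", user="user", password="password")

rows = [("value1", "value2"), ("value3", "value4")]  # scraper output

with conn, conn.cursor() as cur:
    # One INSERT per row; this is why throughput is low for large volumes.
    cur.executemany("INSERT INTO my_table (col_a, col_b) VALUES (%s, %s)", rows)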
More info on copying from an EC2 instance here: COPY from Remote Host (SSH)
As for your error, you likely have accented letters in your input, and you need to use LATIN1 encoding everywhere.
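If the scraped text really is Latin-1 being fed through as UTF-8, one hedged way to clean it up in Python before inserting is something like this (the byte string is just an illustration):

# 0x80 is not valid UTF-8, which triggers Redshift's "Bad UTF8 hex sequence: 80".
raw = b"Caf\x80"

# Decode as Latin-1 (every byte value is valid there); psycopg2 then sends proper UTF-8.
clean = raw.decode("latin-1")

# Or simply replace anything that isn't valid UTF-8.
clean_utf8 = raw.decode("utf-8", errors="replace")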
I used ffmpeg + libx264 to convert a file to H.264, then uploaded the file to Hadoop. I used WebHDFS to access the file over HTTP, but it cannot be played online. If I download the file over HTTP, it plays fine in an HTML5 video element. My English is poor, I hope you know what I mean.
I found the reason: the format of the converted file was broken. I needed to use the qt-faststart tool to fix it, and then it plays. Thanks.
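For anyone hitting the same thing: qt-faststart moves the MP4 moov atom to the front of the file, which is usually what makes the difference between "plays only after a full download" and "plays while streaming over HTTP". A small sketch of both options (file names are placeholders):

import subprocess

# Option 1: repair an already-converted file with qt-faststart.
subprocess.run(["qt-faststart", "input.mp4", "output.mp4"], check=True)

# Option 2: ask ffmpeg to do the same thing during conversion.
subprocess.run(["ffmpeg", "-i", "input.avi", "-c:v", "libx264",
                "-movflags", "+faststart", "output.mp4"], check=True)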
Is it possible to create a stream in Wowza from multiple files, so that these files would be played one after another? As far as I know, I can only stream from a single file in the content directory.
1.) I would like to split that one file for my own reasons (to add some security to it, etc.), then create a playlist from these multiple files and publish it for streaming, so it won't take as much time as the second way.
2.) Or do I need to put these multiple files back together and then publish the playlist?
I would also like to consider how long it takes to create the playlist, even for a big file. I am using ffmpeg to split the file into smaller pieces with a script.
It would therefore be automatic: when a user requests a stream, I run the script that splits the file and creates the playlist for the user.
I hope I'm not going about this the wrong way. Help please.
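For context, the splitting script mentioned above is along these lines (a sketch using ffmpeg's segment muxer; the 60-second chunk length and file names are placeholders):

import subprocess

# Split source.mp4 into ~60-second pieces without re-encoding, producing
# part_000.mp4, part_001.mp4, ... to build the playlist from.
subprocess.run(["ffmpeg", "-i", "source.mp4", "-c", "copy",
                "-f", "segment", "-segment_time", "60",
                "-reset_timestamps", "1", "part_%03d.mp4"], check=True)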
On the Wowza website you can download this module to create playlists on the server side without concatenating your files.
You may also want to check the com.wowza.wms.stream.publish.Stream class, which enables you to create a Stream on the server side and attach playlist items to it. And this post will help you get started on how to create dynamic playlists if you need that.
Here's the scenario: a client uploads a Sybase dump file (gzipped) to our local FTP server. We have an automated process which picks these up and then moves them to a different server within the network where the database server resides. Unfortunately, this transfer is over a WAN, which for large files takes a long time, and sometimes our clients forget to FTP in binary mode, which results in 10GB of transfer over our WAN all for nothing, as the dump file can't be loaded at the other end. What I'd like to do is verify the integrity of the dump file on the local server before sending it out over the WAN, but I can't just try to "load" the dump file, as we don't have Sybase installed (and can't install it). Are there any tools or bits of code that I can use to do this?
There are a few things you can do from the command line. The first, on the sending side, is to generate md5sums of the files.
$ md5sum *.dmp
2bddf3cd8b04010183dd3295ce7594ff pubs_1.dmp
7510e0250c8d68bae3e0e794c211e60b pubs_2.dmp
091fe54fa5fd81d8c109cc7835d37f4a pubs_3.dmp
On the client side, they can run the same. Secondly, Sybase dumps are usually done with the compress option. If this option is used, you can also test the file integrity by uncompressing the files via the command line. This isn't as complete, but it will verify the 32-bit CRC checksum that is part of the compressed format.
$ gunzip --test *.dmp
gunzip: pubs_3.dmp: unexpected end of file
Neither of these methods validates that Sybase will be able to load the file, but they do help ensure the file isn't corrupt.
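If the automated pick-up process should run these checks itself before the WAN transfer, both can be scripted; a sketch in Python (the file name is a placeholder):

import gzip
import hashlib

DUMP = "pubs_1.dmp"

# Same check as md5sum: hash the file and compare against the client's value.
md5 = hashlib.md5()
with open(DUMP, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        md5.update(chunk)
print(DUMP, md5.hexdigest())

# Same check as gunzip --test: read the whole archive so the trailing CRC is verified.
try:
    with gzip.open(DUMP, "rb") as gz:
        while gz.read(1024 * 1024):
            pass
    print("gzip integrity OK")
except (OSError, EOFError) as exc:
    print("gzip check failed:", exc)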
There is no way to really verify the integrity of the dump file without loading it in some way with a Backup Server. The client should know whether the dump was successful or not via the backup log or output during the dump.
But to solve your problem you should use SFTP or SCP; all transfers are done in binary, which alleviates your problem.
Ensure that they are also using compression in the dump; a value of 1-3 is more than enough. This should reduce your network traffic as well.
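If the clients can be switched to SFTP, the upload is binary-safe by design; a sketch using paramiko (host, credentials and paths are placeholders):

import paramiko

# Placeholder connection details for the receiving server.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("sftp.example.com", username="client", password="password")

# SFTP has no ASCII mode, so the dump arrives byte-for-byte intact.
sftp = client.open_sftp()
sftp.put("pubs_1.dmp", "/incoming/pubs_1.dmp")
sftp.close()
client.close()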