Using WebHDFS to play video over HTTP - hadoop

I used ffmpeg + libx264 to convert a file to H.264 and then uploaded it to Hadoop. I used WebHDFS to access the file over HTTP, but it cannot be played while streaming; if I download the file over HTTP first, it plays fine in an HTML5 video element.
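For reference, this is roughly how I fetch the file over WebHDFS (a sketch; the namenode host and path are placeholders for my setup, and -L is needed because the namenode redirects the read to a datanode):

# open the file through the WebHDFS REST API; the port is 9870 on Hadoop 3.x (50070 on 2.x)
curl -L "http://namenode.example.com:9870/webhdfs/v1/videos/movie.mp4?op=OPEN"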

I found the cause: the converted MP4 had its moov atom (the index metadata) at the end of the file, so it could not be played progressively over HTTP. Running the qt-faststart tool to move that metadata to the front of the file fixed it, and the video now plays. Thanks.
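For anyone hitting the same problem, the fix looks roughly like this (a sketch; file names are placeholders):

# move the moov atom to the front so the file can play while it is still downloading
qt-faststart input.mp4 output.mp4
# alternatively, ffmpeg can do the same relocation while remuxing, without re-encoding
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4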

Related

How to enable H.264 codecs in spice-server?

I'm new to the spice-server source code, and I want to test the streaming codecs of SPICE. I have tried the streaming-video=all option to enable streaming codecs, but it only gives me MJPEG. I know the SPICE server is able to stream in H.264, but I don't know how to enable it. Any suggestions?
// reds.cpp
// Modify the default codecs, set 'gstreamer:h264;' to be the first.
static const char default_video_codecs[] = "gstreamer:h264;" GSTREAMER_CODECS;
Note: you need to set up the GStreamer library first; spice-server will check for it. Your SPICE client must also support gstreamer:h264.
If it does not, spice-server will fall back to the default MJPEG codec (see the dcc_create_video_encoder function in the video-stream.cpp file).
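As a quick sanity check before rebuilding, you can verify that an H.264 encoder element is actually available to GStreamer on the server (a sketch, assuming GStreamer 1.x with its command-line tools installed):

# prints the element details if the x264 encoder plugin is installed, errors out otherwise
gst-inspect-1.0 x264enc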

Access Haivision SRT statistics from an ffmpeg command

I have been successfully streaming with ffmpeg using the Haivision SRT protocol. I now need to access the SRT statistics described here: https://github.com/Haivision/srt/blob/master/docs/API/statistics.md
Can anyone help me understand how to access these statistics from an ffmpeg command line that is streaming over SRT?

Flutter web: Download large files by reading a stream from the server?

There are already several articles about starting downloads from Flutter web.
I'll link this answer as an example:
https://stackoverflow.com/a/64075629/15537341
The procedure is always similar: request something from a server, maybe convert the body bytes to base64, and then use an AnchorElement to start the download.
It works perfectly for small files; say 30 MB, no problem.
But the whole file has to be loaded into the browser first, and only then does the user's download start.
What to do if the file is 10 GB?
Is there a way to read a stream from the server and write it to the user's download as a stream? Or is another approach preferable, such as copying the file to a special folder that is directly hosted by the webserver?

How to dump raw RTSP stream to FTP?

Is this even possible? I tried using FFmpeg, but it only lets me dump the stream to a file on the server host, where I don't have much free space. So I'm wondering whether it is somehow possible to dump it to some other place, such as an FTP server?
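ffmpeg can write directly to an FTP URL through its built-in ftp protocol, so nothing has to be stored on the local host. A minimal sketch (credentials, host names, and paths are placeholders):

# copy the RTSP stream without re-encoding and write it straight to an FTP server
ffmpeg -i rtsp://camera.example.com/stream -c copy -f mpegts ftp://user:password@ftp.example.com/dumps/capture.ts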

How can I watch the recorded live streams using Wowza Engine?

We want to let our clients review the live streams they have made. We checked the 'Record all live streams' option in Wowza Streaming Engine Manager. We know that the streams are being saved inside the Wowza content folder, but since our engine is located in an EC2 instance, we could find no easy way for our clients to watch them other than downloading them through the console.
Can the manager be configured to show the videos there, like it is on Wowza Streaming Cloud?
In my case I set up a web server (Apache 2) on the same machine listening on port 8080 (Wowza uses 80 for HLS streaming), then I set a symbolic link from /var/www/html/content to {Wowza installation Folder}/content. This way the users can reach the recordings at http://yourserver.com:8080/content.
By default Apache will list all files in the folder; if a file is .mp4 the browser will play the video, and if it is .flv it will be downloaded.
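On a Debian/Ubuntu box the setup is roughly this (a sketch; the Wowza path below is the default Linux install location and may differ on your system, and the port also has to be changed in the <VirtualHost *:80> line of your site config):

# make Apache listen on 8080 instead of 80
sudo sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
# expose the Wowza recordings through Apache's docroot
sudo ln -s /usr/local/WowzaStreamingEngine/content /var/www/html/content
sudo systemctl restart apache2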
If it's an option for you, you can move your recordings to S3. You would first mount an S3 bucket in your filesystem (s3fs), then configure the ModuleMediaWriterFileMover module to move the recorded files to the mount directory.
A better approach: move the files to an S3 bucket as soon as they are ready.
Wowza actually has a module for this (of course it does, everybody needs it):
https://www.wowza.com/forums/content.php?813-How-to-upload-recorded-media-to-an-Amazon-S3-bucket-(ModuleS3Upload)
So, as with every other module:
1- include the module jar in the lib folder (a copy-command sketch follows below)
2- go to the Engine Manager UI and add the module
3- set your keys and bucket in the manager properties
Restart and done. Works like a charm, and no files are uploaded before they are ready.
Note: be careful, because unless you name each stream with a timestamp like I do, Amazon will overwrite the file when one with the same name is uploaded.
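For step 1, the copy looks roughly like this (a sketch; the jar name follows Wowza's wse-plugin-s3upload GitHub project and the path is the default Linux install location, so both are assumptions to adjust for your system):

# assumption: default Linux install path and stock service name; adjust for your setup
sudo cp wse-plugin-s3upload.jar /usr/local/WowzaStreamingEngine/lib/
sudo service WowzaStreamingEngine restart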
