Can Chromecast play an HLS stream with the H.264 video codec and no audio stream in it?
I have created an HLS stream without audio, using the supported H.264 Baseline profile. I can play this stream in VLC via the m3u8 file, but the same video will not play on the Chromecast default receiver.
I had the same problem. It seems Chromecast does not support HLS without audio.
The interesting thing is that if you play a plain video file (e.g. an MP4) without audio, Chromecast handles it fine.
To work around the issue, I suggest adding a silent audio track to your TS segments. With FFmpeg you can do this by adding -f lavfi -i anullsrc to the command line you use to create the m3u8 files.
You can find more information about anullsrc here: https://trac.ffmpeg.org/wiki/Null
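For example, a command along these lines should do it (file names and segment settings are placeholders, adjust them for your setup):

ffmpeg -i input.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
    -map 0:v:0 -map 1:a:0 -c:v libx264 -profile:v baseline -c:a aac -shortest \
    -f hls -hls_time 6 -hls_list_size 0 out.m3u8

Here anullsrc generates an endless silent stereo track, -shortest stops encoding when the video ends, and the AAC silence gets muxed into every TS segment so the default receiver has an audio stream to work with.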
Here are the supported media types for Google Cast.
Video codecs (Chromecast 1st and 2nd Gen.)
H.264 High Profile up to level 4.1 (the processor can decode up to 720p/60fps or 1080p/30fps)
Delivery methods and adaptive streaming protocols
HTTP Live Streaming (HLS)
AES-128 encryption
Raw MP3 segments can be part of an HLS media playlist
I've implemented an HLS service with ffmpeg (which pulls a live stream from nginx-rtmp). That all works fine, but now I'm wondering what kind of programming pattern I should be using to get live captioning to work.
I'm planning on using ffmpeg to output the incoming mp4 stream to multiple WAV chunks (i.e., the same way HLS fMP4 parts are created), and then sending those chunks over to Azure Cognitive Services for speech-to-text recognition. My question is, what do I do when I receive the speech results? Do I dump that vtt file into the same directory as my HLS chunks, and then serve that up using a single m3u8 file (with audio/video tracks along with the text track)?
Currently ffmpeg is updating the m3u8 playlist for HLS clients; would it be possible for me to create the m3u8 playlist just for the vtt files, and serve that concurrently with the "regular" HLS playlist? Also, time synchronization would seem to be difficult, because I'll be sending discrete WAV files over to Azure, so the vtt timestamps are going to be relative to the chunk I'm sending.
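For reference, this is roughly the master playlist layout I have in mind, with a separate media playlist just for the VTT chunks (all URIs are placeholders):

#EXTM3U
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",LANGUAGE="en",DEFAULT=YES,AUTOSELECT=YES,URI="captions/captions.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,CODECS="avc1.4d401f,mp4a.40.2",SUBTITLES="subs"
video/stream.m3u8

where captions/captions.m3u8 would be an ordinary media playlist whose segments are the .vtt files, kept roughly in step with the video segment timeline.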
Help! I've done searches online, and I grasp the various issues, but I'm not sure how to plumb them all together.
I'm trying to stream my video as H.264 so I can play it on an HTML5 page through the video tag. I have found a lot of examples showing how to publish a video file to an RTMP stream, but I can barely find an example for H.264.
Here is the only example I can find:
ffmpeg -f dshow -i video="Virtual-Camera" -preset ultrafast -vcodec libx264 -tune zerolatency -b 900k -f mpegts udp://10.1.0.102:1234
This seems to fit my needs, but I don't know what kind of server udp://10.1.0.102:1234 is.
If it started with rtmp://10.1.0.102, then I would know it's an RTMP server and that I'd have to set up nginx with the RTMP module. But what is a UDP server? What do I have to do to set one up?
Thanks a lot.
This ffmpeg command line streams MPEG-2 TS over UDP.
So ffmpeg acts as a live encoder here, and it's not a bad choice for that job.
Having a live encoder in place is only half of it, though: to stream to a web page you also need streaming server software that will ingest (receive) this live stream and convert it to a format playable by the HTML5 video tag. That format is likely to be WebRTC.
You can use Wowza or Unreal Media Server; they will ingest your MPEG-2 TS stream and deliver it to the web page as a WebRTC stream.
udp:// is not a streaming format as such; it just tells you the stream is being sent over UDP (instead of TCP). The actual format is MPEG-TS, which you can see from -f mpegts.
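There is nothing to "set up" for plain UDP; ffmpeg just fires packets at that address and port, and whatever listens on the port receives them. As a quick sanity check outside the browser (assuming you run this on the machine the encoder is sending to), something like this should show the stream:

ffplay -f mpegts udp://0.0.0.0:1234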
If you want to play it in a normal browser, you will need to provide it in a different format. For live video there isn't really a universally supported format that you can just use with the video tag: Microsoft Edge and Apple Safari both support HLS natively, but Chrome and Firefox lack any native support for a live streaming format.
With HLS, you can use hls.js and get playback in pretty much all browsers. ffmpeg can output HLS natively; you would just need a web server as well.
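Roughly, keeping your capture settings, the HLS variant could look something like this (the output path is just wherever your web server serves static files from):

ffmpeg -f dshow -i video="Virtual-Camera" -vcodec libx264 -preset ultrafast -tune zerolatency -b:v 900k \
    -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /var/www/html/live.m3u8

The page then loads live.m3u8 via hls.js (or natively in Safari/Edge).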
I have an IP camera which sends out a live stream over RTSP/UDP, and I want to display this stream in the browser and have it work in the major browsers and on mobile (both iOS and Android). To achieve this I want to convert the stream to HTTP Live Streaming (HLS) on the server before sending it to the client. Now I've read that not very long ago Apple added support for fragmented MP4 (fMP4) as the segment format for HLS, whereas normally the stream would be sent in MPEG-TS format. fMP4 is also the format that MPEG-DASH supports, and MPEG-DASH might be the industry standard in a few years.
Now my question is: what are the advantages and disadvantages of fMP4 and MPEG-TS?
EDIT: According to the technical notes for HLS from Apple, live streams must be encoded as MPEG-TS streams (https://developer.apple.com/library/content/technotes/tn2224/_index.html#//apple_ref/doc/uid/DTS40009745-CH1-ENCODEYOURVARIANTS). Is there a reason for this or is this information outdated?
fMP4 is likely to replace TS as the standard. It has less overhead and is required for HEVC, but the main advantage is compatibility with DASH, i.e. you can generate both HLS and DASH from the same files, which helps with compute and storage costs. For your particular use case, HLS with TS probably has more coverage (due to old devices and players) than HLS with fMP4, but HLS+DASH with fMP4 is what I would choose.
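If you end up packaging with ffmpeg, switching between the two is basically one flag; a rough sketch (input and output paths are placeholders):

ffmpeg -i input.mp4 -c copy -f hls -hls_time 6 -hls_segment_type mpegts ts/prog.m3u8
ffmpeg -i input.mp4 -c copy -f hls -hls_time 6 -hls_segment_type fmp4 fmp4/prog.m3u8

The fMP4 variant also writes an init segment that the playlist references, which is what lets the same media segments be reused for DASH.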
On a project, we have a camera that provides an RTSP stream (video & audio, encoded in H.264). We need to make the stream available in all browsers (desktop & mobile).
I've seen some solutions:
Convert the stream to HLS (iOS) and MPEG-DASH (other browsers) with FFmpeg on a server (roughly sketched below)
Video only with jsmpeg
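For the first option, I imagine something along these lines (camera URL and output paths are placeholders):

ffmpeg -i rtsp://camera.local/stream -c:v copy -c:a aac \
    -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /var/www/hls/cam.m3u8

ffmpeg -i rtsp://camera.local/stream -c:v copy -c:a aac \
    -f dash -seg_duration 2 -window_size 5 -streaming 1 /var/www/dash/cam.mpd

but even with 2-second segments the end-to-end delay is still several seconds.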
The problem is that we need truly live streaming (e.g. the user can capture pictures/video from the live feed), so a low-latency solution is a requirement of the project.
Any ideas?
I'm trying to understand the feasibility of a live streaming solution.
I want to grab WebRTC streams (audio and video), send them to a server, and transform them into chunks that can be served to an HTML5 video tag or a DASH player, using the WebM container (VP8 and Opus codecs).
I also looked into ffmpeg, ffserver and gstreamer but...
My question is: how do I feed in the live WebRTC streams and transform them into HTTP chunks (live, DASH-compatible)?
Anyone achieved something like this?
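To be concrete about the packaging side, once the media is out of the browser and on the server I imagine something like this (a rough sketch using ffmpeg's dash muxer; captured.webm stands in for whatever the server has pulled from the WebRTC peer):

ffmpeg -re -i captured.webm -c:v libvpx -b:v 1M -c:a libopus \
    -f dash -dash_segment_type webm -seg_duration 2 -window_size 5 -streaming 1 manifest.mpd

What I don't see is how to feed that pipeline continuously from a live WebRTC session rather than from a file.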