Possible to Combine Live m3u8 stream with PIP overlay from WebRTC source? - ffmpeg

Can someone tell me what server-side technology (perhaps ffmpeg), one could use in order to:
1) display this full-screen live-streaming video:
http://aolhdshls-lh.akamaihd.net/i/gould_1#134793/master.m3u8
2) and overlay it in the lower-right corner with a live video coming from a webRTC video-chat stream?
3) and send that combined stream into a new m3u8 live-stream
4) Note that it needs to be a server-side solution: we cannot launch multiple video players in this case (the resulting stream has to be passed to Smart TVs, which only have one video decoder at a time)
The closest example I've found so far is this article:
https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
Which isn't really live, nor is it really doing overlays.
Any advice is greatly appreciated.

Let me clarify what you want in this case:
The input video is a live HLS stream plus a WebRTC feed: what about delay? Is latency important for your use case?
Overlaying video or an image onto the stream: this requires decoding the input, filtering it, and encoding it again, so it needs a lot of CPU, and even more if the input is 1080p.
Re-packaging into a new HLS stream: you must set quite a few encoding options to make sure the TS fragments work well. The most important ones are the GOP size and the TS segment duration.
You also need a web server to serve the m3u8 index file; you can use nginx or Apache.
What I give you in this answer is an ffmpeg command line that overlays an image on the input HLS stream and re-creates the TS segments.
The following command line does what you want in steps 1 to 3:
ffmpeg \
-re -i "http://aolhdshls-lh.akamaihd.net/i/gould_1#134793/master.m3u8" \
-i "[OVERLAY_IMAGE].png" \
-filter_complex "[0:v][1:v]overlay=main_w-overlay_w:main_h-overlay_h[output]" \
-map "[output]" -map 0:a -c:v libx264 -c:a aac -strict -2 \
-f ssegment -segment_list out.list out%03d.ts
This is a basic command line that overlays the image on your input HLS stream, then creates the TS segments and the index file.
I don't have much further experience with HLS, so it may work without any tuning options, but you will probably need to tune it for your workload. You should also look into a web server to serve the m3u8, but that won't be hard.
The GOP size (-g) and the segment duration (segment_time) will, as I said, be the key points of your tuning.
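As a rough illustration of that tuning (the input URL, overlay file, frame rate and durations below are placeholders of my own, not values from the original answer): with a 25 fps source, a GOP of 50 frames and a 2-second segment duration keep every segment starting on a keyframe.
ffmpeg -re -i "http://example.com/live/master.m3u8" -i overlay.png \
-filter_complex "[0:v][1:v]overlay=main_w-overlay_w:main_h-overlay_h[output]" \
-map "[output]" -map 0:a \
-c:v libx264 -g 50 -sc_threshold 0 -c:a aac -strict -2 \
-f ssegment -segment_time 2 -segment_list_type m3u8 -segment_list out.m3u8 out%03d.ts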

Related

How can I convert any mp4 to an adv8dvbt23.ts file?

I can download http://www.w6rz.net/adv8dvbt23.ts,
and there are many sample DVB-T TS files there.
But I want to convert my own video file to a TS file for DVB-T.
First I checked on Google, but I could not find any answer.
Maybe this does not make sense, or my way of thinking is wrong.
Can FFmpeg be used for this?
There does not seem to be any parameter for transmit mode, QAM/64-QAM constellation, or guard interval.
As I explained already:
ffmpeg doesn't know anything about RF things like the constellation type; it is just a tool to transcode between different video formats. .ts is for "transport stream", and it's the video container format that DVB uses. The GNU Radio transmit flowgraphs, on the other hand, know nothing about video things: all they do is take the bits from a file. So that file needs to be in a format that the receiver will understand, and that's why I instructed you to use ffmpeg with the parameters you need. Since I don't know which bitrate you're planning on transmitting, I can't help you with how to use ffmpeg.
So you need to generate video data that your DVB-T receiver understands, but, more importantly, you need to put it in a container that ensures a constant bitrate.
As pointed out in a different comment to your ham.stackexchange.com question about the topic, your prime source of examples would be GNU Radio's own gr-dtv module; when you look into gnuradio/gr-dtv/examples/README.dvbt, you'll find a link to https://github.com/drmpeg/dtv-utils , W6RZ's own tooling :)
There you'll find the tools necessary to calculate the exact stream bitrate you need your MPEG transport stream to have. Remember, a DVB-T transmitter has to transmit at a constant bits per second, so your video container must be constant-bitrate. That's why a transport stream pads the video data to achieve constant rate.
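A quick sanity check on those numbers (using the illustrative rates from the command below, not figures from the tool): roughly 2 Mb/s of video plus 192 kb/s of audio is already about 2.2 Mb/s of payload, so the muxrate you feed ffmpeg must sit above that, and the mpegts muxer fills the remaining capacity with null TS packets to hold the rate constant.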
Then you'll use ffmpeg to transcode your video and put it into the transport stream container:
# -s 720x576 -r 25: resolution and frame rate; a good choice, since most TVs will deal with it.
# -flags cgop+ilme -sc_threshold 1000000000: MPEG-2 codec options (closed GOPs, no scene-cut keyframes).
# -b:v with -minrate/-maxrate enforces a constant video bitrate (defines video quality);
#   it must be lower than the stream bitrate, i.e. < muxrate minus the audio bitrate.
# -acodec mp2 -ac 2 -b:a 192k: audio codec, channels and bitrate.
# -muxrate: the constant transport-stream rate calculated with the tool above.
# -f mpegts: specify that you want an MPEG transport stream container as output.
ffmpeg -re -i inputvideo.mpeg \
  -vcodec mpeg2video \
  -s 720x576 -r 25 \
  -flags cgop+ilme -sc_threshold 1000000000 \
  -b:v 2M -minrate:v 2M -maxrate:v 2M \
  -acodec mp2 -ac 2 -b:a 192k \
  -muxrate ${RATE_FROM_TOOL} \
  -f mpegts \
  outputfile.ts

Pushing on-the-fly transcoded video to embedded http results in no seek bar

I'm trying to achieve a simple home-based solution for streaming/transcoding video to a low-end machine that is unable to play the file properly.
I'm trying to do it with ffmpeg (as ffserver will be discontinued).
I found out that ffmpeg has a built-in HTTP server that can be used for this.
The application I'm testing with (for the seek bar) is VLC.
I'm probably doing something wrong here (or trying to do something that others do with other applications).
The ffmpeg command I use is:
d:\ffmpeg\bin\ffmpeg.exe -r 24 -i "D:\test.mkv" -threads 2 ^
  -vf scale=1280:720 -c:v libx264 -preset medium -crf 20 -maxrate 1000k -bufsize 2000k ^
  -c:a ac3 -seekable 1 -movflags faststart -listen 1 -f mpegts http://127.0.0.1:8080/test.mpegts
This also gives me the ability to start watching when I want (as opposed to using rtmp via udp, which starts the video as soon as it is transcoded).
I read about moving the moov atom to the beginning of the file, which should be handled by -movflags faststart.
I also checked the -re option, without any luck; the -r option is just there to suppress the "Past duration 0.xx too large" warning, which I read is a normal thing.
The test file is one of many, with different encoder settings etc.
The settings above give me a seek bar, but it doesn't work, and there is no overall duration (and no progress bar); when I switch from mpegts to matroska/mkv I see the duration of the video (and progress) but no seek bar.
If it's possible with ffmpeg alone, I would prefer to stick with it as a standalone solution, without extra rtmp or other servers.
After some time I got to the point where:
The seek bar is a thing on the player side. HLS version 6 supports pointing to a start item, whereas a v3 player starts wherever it wants (no more than 3 items from the end of the list).
Playback and seeking depend on the player (Safari on iOS supports it, others don't); also, ffserver is not needed to push the content.
In the end it works fine without seeking, and if seeking is needed, support it on your end with a player/JS player or via middleware like a proxy video server.
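If you try the HLS route mentioned above, a rough sketch of the command (the segment duration, output path and AAC audio are my own assumptions, not from the answer) is to let ffmpeg write a growing "event" playlist whose segments a JS player can seek within, and serve the folder with any static web server:
d:\ffmpeg\bin\ffmpeg.exe -i "D:\test.mkv" -threads 2 ^
  -vf scale=1280:720 -c:v libx264 -preset medium -crf 20 -maxrate 1000k -bufsize 2000k ^
  -c:a aac -b:a 128k ^
  -f hls -hls_time 4 -hls_list_size 0 -hls_playlist_type event "D:\hls\test.m3u8"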

Creating HLS variants with FFMPEG

I am starting with a high-res video file and I would like to create 3 variants, low quality, mid quality, and high quality, for mobile streaming. I want these low/mid/high variants to be segmented into TS pieces that the m3u8 file will point to. Is there a way to do this in one line with ffmpeg?
I have successfully generated an m3u8 file and TS segments with ffmpeg; do I need to do this 3 times and set the specs for low/mid/high? If so, how do I get a single m3u8 file to point to all variants, as opposed to one for each variant?
This is the command I used to generate the m3u8 file along with the ts segments.
ffmpeg -i C:\Users\george\Desktop\video\hos.mp4 -strict -2 -acodec aac -vcodec libx264 -crf 25 C:\Users\user\Desktop\video\hos_Phone.m3u8
Yes, you need to encode all variants and generate the media playlists first (the playlists containing the segments).
If you want you can do it in one command, since ffmpeg supports multiple inputs/outputs. E.g.:
ffmpeg -i input \
... [encoding parameters 1] ... output1 \
... [encoding parameters 2] ... output2 \
....[encoding parameters 3] ... output3
You must provide the variants in multiple qualities/bitrates but the aspect ratio should remain the same. Keeping the aspect ratio was initially mandatory but in the latest HLS authoring guide it's downgraded to a recommendation.
All variant streams must be keyframe aligned, so set a GOP size using the -g option, disable scene-cut detection, and use a segment duration (hls_time) which is a multiple of your keyframe interval; see the sketch below.
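For illustration, a sketch of a single command that produces three keyframe-aligned variants (the resolutions, bitrates, 2-second GOP at 25 fps and 4-second segments are assumptions of mine, not values from the answer):
ffmpeg -i input.mp4 \
  -vf scale=-2:360 -c:v libx264 -b:v 800k -g 50 -sc_threshold 0 -c:a aac -b:a 96k \
  -f hls -hls_time 4 -hls_list_size 0 low.m3u8 \
  -vf scale=-2:540 -c:v libx264 -b:v 2000k -g 50 -sc_threshold 0 -c:a aac -b:a 128k \
  -f hls -hls_time 4 -hls_list_size 0 medium.m3u8 \
  -vf scale=-2:720 -c:v libx264 -b:v 4500k -g 50 -sc_threshold 0 -c:a aac -b:a 128k \
  -f hls -hls_time 4 -hls_list_size 0 high.m3u8
Each output gets its own scale filter, codec settings and playlist, while the shared -g/-sc_threshold values keep the keyframes aligned across variants.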
Once you have all three m3u8 media playlists you can manually create the master playlist which points to each media playlist.
Example from the Apple HLS documentation, you must change the bandwidth, codecs, resolution and playlist filenames according to your own encoding options:
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2855600,CODECS="avc1.4d001f,mp4a.40.2",RESOLUTION=960x540
medium.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=5605600,CODECS="avc1.640028,mp4a.40.2",RESOLUTION=1280x720
high.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1755600,CODECS="avc1.42001f,mp4a.40.2",RESOLUTION=640x360
low.m3u8
The aspect ratio does not have to be the same; that makes no sense. How could you know what the client can play? Aspect ratios are 4:3 for non-HD and 16:9 for HD variants.
You don't want to do all your variants in one ffmpeg command if you need segment times to be consistent.
Also watch out when transcoding downward: if you go from 1080 to 360, there might be issues. One that I often get is that the audio degrades and sounds weird. I try to go down no more than half if I want high quality.
@DavidC That hex encodes the codec profile and level.

ffmpeg live stream overlay issues: when any one of the streams is lost, the other streams get stuck

What we have done so far:
We have a video chat client which has a set of 9 video streams (users) with
the h.264 codec using Adobe FMS. Now, using ffmpeg, we are able to combine these
streams into one stream using the overlay (video) and amix (audio) filters.
We are able to send the single combined stream to a live streaming service.
The stream of the active speaker is shown at a bigger size using the scale
filter of ffmpeg.
Code as follows:
ffmpeg -i "rtmp://localhost/live/mystream" -i "rtmp://localhost/live/mystream2" -i "rtmp://localhost/live/mystream3" \
  -filter_complex "nullsrc=size=300x300[b1];[0:v]setpts=PTS-STARTPTS,scale=100x100[s1];[1:v]setpts=PTS-STARTPTS,scale=200x200[s2];[2:v]setpts=PTS-STARTPTS,scale=100x100[s3];[b1][s1]overlay=shortest=1[b1+s1];[b1+s1][s2]overlay=shortest=1[b1+s2];[b1+s2][s3]overlay=shortest=1:x=100" \
  out.mp4
Help is needed with the following 2 major issues; any help would be appreciated.
Whenever the active speaker changes, the stream of that user should be shown at a bigger size. Is this possible without restarting the ffmpeg process?
Right now, if one of the 9 streams stops, the ffmpeg process crashes.

What parameters or software is best to use to convert .MP4 to .FLV

I'm on Windows 7 and I have many .MP4 videos that I want to convert to .FLV. I have tried ffmpeg and Free FLV Converter, but each time the results are not what I'm looking for.
I want a video of the same quality (or almost the same, still looking good) and a smaller file size, because each time I have tried so far, the resulting video looks pretty bad and the file size actually increases.
How can I get a good-looking video, smaller in size and in .FLV?
Thanks a lot!
First, see slhck's blog post on superuser for a good FFmpeg tutorial. FLV is a container format and can support several different video formats such as H.264 and audio formats such as AAC and MP3. The MP4 container can also support H.264 and AAC, so if your input uses these formats then you can simply "copy and paste" the video and audio from the mp4 to the flv. This will preserve the quality because there is no re-encoding. These two examples do the same thing, which is copying video and audio from the mp4 to the flv, but the ffmpeg syntax varies depending on your ffmpeg version. If one doesn't work then try the other:
ffmpeg -i input.mp4 -c copy output.flv
ffmpeg -i input.mp4 -vcodec copy -acodec copy output.flv
However, you did not supply any information about your input, so these examples may not work for you. To reduce the file size you will need to re-encode. The link I provided shows how to do that. Pay special attention to the Constant Rate Factor section.
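As a rough illustration of that re-encoding route (the CRF value, preset and audio bitrate below are assumptions of mine, not from the answer; a higher CRF gives a smaller file at lower quality):
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k output.flv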
