I have two RTMP streams, one for audio playback and one for streaming video. I mix them using ffmpeg, but I want to change the delay between audio and video while ffmpeg is running. How can I do that?
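As far as I know, the ffmpeg CLI cannot change an input offset once it is running; a fixed delay can only be set at launch with -itsoffset, and changing it means restarting the process with a new value. A minimal sketch, assuming placeholder RTMP URLs and a 0.5-second audio delay:

ffmpeg -i rtmp://server/live/video -itsoffset 0.5 -i rtmp://server/live/audio -map 0:v -map 1:a -c:v copy -c:a aac -f flv rtmp://server/live/out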
Related
I'm developing a video streaming website using MSE.
Each video is converted to fragmented MP4 (h264,aac => avc1,mp4a).
It works very well, but what if I wanted to use the WebM format? YouTube and Facebook sometimes use it.
I want to know how to get an index (like the sidx atom in fMP4) from VP8, VP9, or Vorbis streams.
I use Bento4 and ffmpeg to get metadata from the video and audio,
but Bento4 is for MP4 only, and I use MP4BoxJS to parse the index in the browser with JavaScript.
Should I use ffmpeg, or something else, to create fragmented WebM and get the index/stream info so I can append segments to an MSE SourceBuffer? The stream also needs to be seekable.
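For what it's worth, WebM's analogue of the sidx index is the Cues element, and ffmpeg's webm muxer can write DASH-conformant fragmented files directly. A sketch along the lines of ffmpeg's VP9 DASH documentation, with placeholder file names:

ffmpeg -i input.mp4 -c:v libvpx-vp9 -an -f webm -dash 1 video.webm
ffmpeg -i input.mp4 -c:a libopus -vn -f webm -dash 1 audio.webm
ffmpeg -f webm_dash_manifest -i video.webm -f webm_dash_manifest -i audio.webm -c copy -map 0 -map 1 -f webm_dash_manifest -adaptation_sets "id=0,streams=0 id=1,streams=1" manifest.mpd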
I need to live-stream RTMP-based video to a webpage, and the end result should be dynamic and adaptive (DASH).
The FFmpeg command below works for a single stream, but it isn't adaptive and offers no low/high quality selection.
ffmpeg -i rtmp://source.mysite.com/live/9 temp/manifest.mpd
I need something like a 1080p RTMP input with 240p, 360p, 480p, 720p, and 1080p outputs in a single DASH manifest.
Can somebody guide me on how to get a stable, multi-bitrate adaptive result here?
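A sketch of a multi-rendition command using ffmpeg's dash muxer; the bitrates, rendition count, and scale targets are placeholders to adapt, and the remaining heights would be added the same way:

ffmpeg -i rtmp://source.mysite.com/live/9 \
  -filter_complex "[0:v]split=3[v1][v2][v3];[v1]scale=-2:240[v240];[v2]scale=-2:480[v480];[v3]scale=-2:1080[v1080]" \
  -map "[v240]" -map "[v480]" -map "[v1080]" -map 0:a \
  -c:v libx264 -b:v:0 400k -b:v:1 1200k -b:v:2 4500k -c:a aac \
  -use_template 1 -use_timeline 1 \
  -adaptation_sets "id=0,streams=v id=1,streams=a" \
  -f dash temp/manifest.mpd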
I use ffmpeg for video streaming, but my source is images: the images are converted to video and then streamed. I want the encoder to wait for new images, but ffmpeg starts, encodes all the existing images, and then exits.
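One common workaround is to feed the images over a pipe, so ffmpeg blocks waiting for more data instead of exiting when the input is exhausted. A sketch, where the producer process, framerate, and output URL are placeholders:

your_image_producer | ffmpeg -f image2pipe -framerate 1 -i - -c:v libx264 -pix_fmt yuv420p -f flv rtmp://server/live/stream

ffmpeg keeps encoding for as long as the producer keeps the pipe open.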
I need a video file whose audio and video track durations are always the same. The file must contain an audio track even if the source has no audio track. How do I tell ffmpeg to add a silent audio track when the source has no audio track? Also, if the source has an audio track of a different duration than the video, I need ffmpeg to append silent audio to make the output audio and video the same duration. Is this possible in one line with ffmpeg?
The command below will add a silent track of the same length* as the video, if there is no audio** in the source file.
ffmpeg -i video -f lavfi -i anullsrc=cl=1 -shortest -c:v libx264 -c:a aac output.mov
*Since video frame duration and audio frame duration aren't usually identical, the lengths won't be exactly the same.
**When -map is not specified, ffmpeg selects a single audio stream from among the inputs: the one with the highest channel count. If two or more streams have the same number of channels, it selects the stream with the lowest index. anullsrc here has one channel, so it will be passed over except when the source has no audio.
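For the second part of the question (source audio shorter than the video), a sketch using the apad filter, which appends silence to the audio stream; combined with -shortest, the padding stops at the video's end:

ffmpeg -i video -af apad -c:v libx264 -c:a aac -shortest output.mov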
I'm trying to make a batch of videos for uploading to YouTube. My emphasis is on the audio (mostly MP3, with some WMA). I have several image files that need to be picked up at random to go with the audio, i.e., display an image for 5 seconds before showing the next. I want the video to stop when the audio stream ends. How should I use ffmpeg to achieve this?
Ref:
http://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
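A sketch adapting that wiki recipe; the filenames and the 5-second interval are placeholders, and random order would be arranged by shuffling the numbered file list beforehand:

ffmpeg -loop 1 -framerate 1/5 -i img%03d.jpg -i audio.mp3 -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest out.mp4

-framerate 1/5 shows each image for 5 seconds, -loop 1 repeats the image sequence so the video never runs out first, and -shortest ends the output when the audio ends.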