I need to live-stream RTMP-based video to a webpage, and the end result should be dynamic and adaptive (DASH).
The FFmpeg command below works for a single stream, but the result is not adaptive; there are no low/high quality options.
ffmpeg -i rtmp://source.mysite.com/live/9 temp/manifest.mpd
I need something like a 1080p RTMP input and 240p, 360p, 480p, 720p and 1080p outputs in a single DASH manifest.
Can somebody guide me on how to get a stable, multi-bitrate adaptive result here?
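From the ffmpeg dash muxer docs, I believe something along these lines might be what I need: map the video several times and give each copy its own size and bitrate. Untested, and the codecs, bitrates, and three-step ladder below are placeholders rather than a proven configuration:
ffmpeg -i rtmp://source.mysite.com/live/9 \
  -map 0:v -map 0:v -map 0:v -map 0:a \
  -c:v libx264 -c:a aac \
  -s:v:0 426x240 -b:v:0 400k \
  -s:v:1 1280x720 -b:v:1 2800k \
  -s:v:2 1920x1080 -b:v:2 4500k \
  -adaptation_sets "id=0,streams=v id=1,streams=a" \
  -f dash temp/manifest.mpd
Presumably the 360p and 480p steps would just be more -map 0:v copies with their own -s:v:N and -b:v:N pairs.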
I used a number of JPEG files to create a timelapse video with ffmpeg. Individually, the images look fine.
The source images were captured by a mirrorless digital camera in JPEG format.
If I upload the timelapse video to YouTube, the video is clear and without any artifacts: https://www.youtube.com/watch?v=Qs-1ahCrb0Y
However, if I play the video file locally on macOS in the Photos or QuickTime apps, or on iOS, there are artifacts in the video. Here are some examples:
[two screenshots showing the artifacts]
This is the ffmpeg command I used to generate the video:
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" -pix_fmt yuv420p -profile:v baseline output.mp4
What additional parameter can I use to remove those artifacts?
Edit: [file info screenshot]
The video plays without issue in VLC.
The H.264 codec standard defines levels. A level represents the resources a decoder needs to smoothly process a stream. Usually, levels are only pertinent for hardware players. However, some software players may also have been designed with a level ceiling; apparently, that's the case with Apple's players.
Your video's frame size is 6000x4000, for which the player has to support level 6.0, a relatively recent addition to the standard. I suggest you halve the resolution:
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" -vf scale=iw/2:ih/2,format=yuv420p -profile:v baseline out.mp4
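If you want to confirm what level a player must support, ffprobe can report the profile and level of the encoded stream (the level is printed as an integer, e.g. 50 for level 5.0):
ffprobe -v error -select_streams v:0 -show_entries stream=profile,level -of default=noprint_wrappers=1 out.mp4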
I am investigating the possibility of storing video streams coming from a few sources, already encoded in H.264, without transcoding, as the device I would like to use for this project won't be capable of transcoding the combined video on the fly.
What I am looking for is two or more pictures side by side (not video concatenation) packed into mp4/avi/mkv.
I believe the mkv container supports this kind of packaging, but I've not been able to find appropriate options for ffmpeg or another tool to store it this way. What I get instead is a very slow transcode into one big H.264 stream.
If your player can handle it, just make it perform the side-by-side view. No encoding or muxing required.
mpv video player
Example using mpv:
mpv --lavfi-complex="[vid1][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
The above example assumes each input has the same height. Otherwise you will have to add the scale, scale2ref, pad, and/or crop filters. A simple example using the crop filter to remove 20 pixels from the height:
mpv --lavfi-complex="[vid1]crop=iw:ih-20[c];[c][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
See the mpv documentation and FFmpeg Filters for more info.
Just specify multiple inputs.
ffmpeg -i [input 1] -i [input 2] ... -map 0 -map 1 ... -codec copy -f matroska [output]
As for the "side-by-side" part, it's up to the player to determine the presentation. If you don't control the player and you need a specific layout or presentation, then you must "burn" all these video streams into a new one and encode it as a single stream; a sketch follows.
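For that burn-in case, a minimal sketch using the hstack and amix filters, assuming two inputs of equal height and re-encoding with libx264/AAC (filenames are illustrative):
ffmpeg -i input1.mp4 -i input2.mp4 \
  -filter_complex "[0:v][1:v]hstack=inputs=2[v];[0:a][1:a]amix=inputs=2[a]" \
  -map "[v]" -map "[a]" -c:v libx264 -c:a aac output.mp4
Note this gives up the no-transcode requirement; it's only relevant when the player can't do the layout itself.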
I would like to ask someone who knows FFmpeg well.
As you can see (screenshot omitted), I already know how to set the timecodes outlined in green borders, but I don't know whether there is any way to set the video timecode.
Thank you for your help.
This is only possible with ffmpeg if you are ready to re-encode the video stream as MPEG-2, e.g.
ffmpeg -i input -c:v mpeg2video -gop_timecode "03:04:05:06" output
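As a hedged aside, untested here: if a container-level timecode track is acceptable instead of MPEG-2 GOP timecodes, ffmpeg's -timecode option should be able to write one into a MOV output without re-encoding the video:
ffmpeg -i input -c copy -timecode "03:04:05:06" output.mov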
I'm attempting to write a script that will merge 2 separate video files into 1 wider one, in which both videos play back simultaneously. I have it mostly figured out, but when I view the final output, the video that I'm overlaying is extremely slow.
Here's what I'm doing:
Expand the left video to the final video dimensions
ffmpeg -i left.avi -vf "pad=640:240:0:0:black" left_wide.avi
Overlay the right video on top of the left one
ffmpeg -i left_wide.avi -vf "movie=right.avi [mv]; [in][mv] overlay=320:0" combined_video.avi
In the resulting video, the right video plays back at about half the speed of the left one. Any idea how I can get these files to sync up?
As user 65Fbef05 said, both videos must have the same framerate;
use the -r option, and the framerate must be the same for both videos.
To find the framerate use:
ffmpeg -i video1
ffmpeg -i video2
and look for the line which contains "Stream #0.0: Video:";
on that line you'll find the fps of the movie.
I also don't know what problems you'll encounter by mixing 2 audio tracks.
For my part, I would use the audio from the movie that is overlaid and discard the rest.
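For reference, a single-pass sketch doing the pad and overlay in one command, normalizing both inputs with the fps filter first. The 25 fps target is an assumption; use whatever rate matches your sources. Audio is taken from right.avi only, per the suggestion above:
ffmpeg -i left.avi -i right.avi \
  -filter_complex "[0:v]fps=25,pad=640:240:0:0:black[l];[1:v]fps=25[r];[l][r]overlay=320:0[v]" \
  -map "[v]" -map 1:a combined_video.avi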
I have a C# program generating JPEG images in real time, and I need to (continuously) generate a video from the images and stream it (also in real time).
I've used ffmpeg to transcode an input video source and stream it; doesn't ffmpeg have an option to take the input as a set of images (which are still being generated) and make the video out of them?
Cheers
Actually I used VLC for the streaming...
Actually I just found out that I could do:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
But I need to tell ffmpeg to keep doing it; I mean, if it doesn't find another image, ffmpeg should wait for one to be generated. Is this possible?
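One possibility I'm considering, assuming the C# process can write each JPEG to ffmpeg's standard input instead of to files: with the image2pipe demuxer, ffmpeg blocks waiting for more data as long as the pipe stays open, so new frames can be fed indefinitely. The rate, codec, and UDP target below are illustrative:
# read MJPEG frames from stdin and stream H.264 over MPEG-TS
ffmpeg -f image2pipe -framerate 25 -c:v mjpeg -i - \
  -c:v libx264 -pix_fmt yuv420p -f mpegts udp://127.0.0.1:1234
From C#, this would mean starting ffmpeg as a Process with RedirectStandardInput and writing each JPEG's bytes to its stdin.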