I would like to ask someone who knows FFmpeg well.
As you can see, I already know how to set the timecodes shown in the green borders,
but I don't know whether there is any way to set the video timecode.
Thank you for your help.
This is only possible with ffmpeg if you are willing to re-encode the video stream as MPEG-2, e.g.
ffmpeg -i input -c:v mpeg2video -gop_timecode "03:04:05:06" output
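To spot-check that the timecode actually landed in the stream, ffprobe's frame output can help; on reasonably recent builds the MPEG-2 GOP timecode shows up as frame side data (exact field names may vary by version):
ffprobe -select_streams v:0 -show_frames output | grep -i timecode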
I am a beginner programmer and am trying to use ffmpeg. I am trying to convert a bunch of images to a video and add an audio background. Can anybody help me and tell me how to loop the audio as required by the length of the generated video?
P.S. The number of images varies, so can we implement something that dynamically loops the audio as required?
Use
ffmpeg -i images%d.jpg -f lavfi -i amovie=audio.mp3:loop=0,asetpts=N/SR/TB -shortest out.mp4
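If the result needs to play in common players, a slightly fuller variant of the same command can help; the 5 fps input rate and the libx264/yuv420p/AAC settings below are assumptions, not requirements:
ffmpeg -framerate 5 -i images%d.jpg -f lavfi -i amovie=audio.mp3:loop=0,asetpts=N/SR/TB -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest out.mp4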
I need a video file whose audio and video track durations are always the same. The file must contain an audio track even if the source has no audio track. How do I tell ffmpeg to add a silent audio track when the source has none? Also, if the source has an audio track that is a different duration than the video, I need ffmpeg to append silent audio to make the output audio and video the same duration. Is this possible in one line with ffmpeg?
The command below will add a silent track of the same length* as the video, if there is no audio** in the source file.
ffmpeg -i video -f lavfi -i anullsrc=cl=1 -shortest -c:v libx264 -c:a aac output.mov
*Since video frame duration and audio frame duration aren't usually identical, the lengths won't be exactly the same.
**when map is not specified, ffmpeg selects a single audio stream from among the inputs with the highest channel count. If there are two or more streams with the same number of channels, it selects the stream with the lowest index. anullsrc here has one channel, so it will be passed over unless the source has no audio.
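For the other case in the question (the source already has audio, but it is shorter than the video), the apad filter can pad the audio with silence and -shortest then trims it to the video length; a sketch, with the codecs chosen only as an example:
ffmpeg -i video -af apad -c:v copy -c:a aac -shortest output.mov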
I would like to transcode a video stream using the ffmpeg tool and change only the video stream resolution, i.e. all other video and audio parameters should remain the same.
According to the ffmpeg man page, the following command line should provide the desired result:
ffmpeg -i input.mp4 -vcodec copy -acodec copy -s WxH output.avi
The video codec of the input stream is compatible with the AVI container.
The actual result is that the resolution remains unchanged, and it seems that the stream is just repacked into the AVI container.
The resolution of the output stream is changed successfully without the -vcodec copy option, but the video codec is changed: h264 (Constrained Baseline) -> mpeg4 (Simple Profile).
When you copy a video stream, you cannot change any of its parameters, since… well, you're copying it. ffmpeg won't touch it in any way, so it can't change the dimensions, frame rate, et cetera.
Also, ffmpeg always chooses a default video codec if you don't specify one. For AVI files, that's mpeg4.
If you want H.264 video, choose -c:v libx264 instead (or -vcodec libx264 which is the same). If you need to keep the original profile, use -profile:v baseline.
Two things:
When you change the size, you will re-encode the video. This lowers the quality and might considerably harm it. To compensate, you might need to set a higher quality level by lowering the Constant Rate Factor below its default of 23, e.g. with -crf 20. Experiment and see how your video looks. If you have the time, add -preset slow (or slower, veryslow), which will give you better compression.
Not that it matters in your case, since your input uses the Constrained Baseline profile, but note that H.264 in AVI is not properly supported, at least when B pictures are used. Baseline doesn't support B pictures, so you should be fine; however, the file might not play back on some devices or players if you use the Main profile or anything above. I would rather mux it into an MP4 or MKV container, especially since your input file is MP4 anyway.
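Putting both answers together, a command along these lines should resize the video while leaving the audio untouched; the 1280x720 target size and the MP4 output are just placeholders for your own values:
ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -crf 20 -preset slow -vf scale=1280:720 -c:a copy output.mp4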
I'm attempting to write a script that will merge 2 separate video files into 1 wider one, in which both videos play back simultaneously. I have it mostly figured out, but when I view the final output, the video that I'm overlaying is extremely slow.
Here's what I'm doing:
Expand the left video to the final video dimensions
ffmpeg -i left.avi -vf "pad=640:240:0:0:black" left_wide.avi
Overlay the right video on top of the left one
ffmpeg -i left_wide.avi -vf "movie=right.avi [mv]; [in][mv] overlay=320:0" combined_video.avi
In the resulting video, the playback on the right video is about half the speed of the left video. Any idea how I can get these files to sync up?
As user 65Fbef05 said, both videos must have the same frame rate.
Use -r to set the frame rate; it must be the same in both videos.
To find the framerate use:
ffmpeg -i video1
ffmpeg -i video2
and look for the line which contains "Stream #0.0: Video:"
on that line you'll find the fps of the movie.
Also, I don't know what problems you'll encounter by mixing 2 audio tracks.
For my part, I would use the audio from the movie that is being overlaid
and discard the rest.
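For reference, both steps can also be done in a single pass while forcing a common frame rate with the fps filter; 25 fps is just an assumed target, and -map 1:a? takes the audio from the overlaid right.avi as suggested above:
ffmpeg -i left.avi -i right.avi -filter_complex "[0:v]fps=25,pad=640:240:0:0:black[l];[1:v]fps=25[r];[l][r]overlay=320:0[v]" -map "[v]" -map 1:a? combined_video.avi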
I have a C# program generating JPEG images in real time. I need to (continuously) generate a video from the images and stream it (also in real time).
I've used ffmpeg to transcode an input video source and stream it; doesn't ffmpeg have an option to take the input as a set of images (continuously being generated) and make the video out of them?
Cheers
Actually, I used VLC for the streaming...
Actually, I just found out that I could do:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
But I need to tell ffmpeg to keep doing it; I mean, if it doesn't find another image, ffmpeg should wait for one to be generated... is this possible?
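As far as I know there is no built-in "wait for the next file" mode, but the usual workaround is to pipe the JPEGs into ffmpeg's standard input as your program produces them, so the input only ends when you close the pipe. A sketch, with the MPEG-TS/UDP target purely as an example destination (your C# code would write each JPEG to the process's stdin):
ffmpeg -f image2pipe -vcodec mjpeg -i - -f mpegts udp://127.0.0.1:1234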