I would like to transcode a video stream using the ffmpeg tool and change only the video stream resolution, i.e. the other video and audio parameters should remain the same.
According to the ffmpeg man page, the following command line should provide the desired result:
ffmpeg -i input.mp4 -vcodec copy -acodec copy -s WxH output.avi
The video codec of the input stream is compatible with the avi container.
The actual result is that the resolution remains unchanged and it seems that the stream is just repacked into the avi container.
The resolution of the output stream is changed successfully without the -vcodec copy option, but the video codec is changed: h264 (Constrained Baseline) -> mpeg4 (Simple Profile).
When you copy a video stream, you cannot change any of its parameters, since… well, you're copying it. ffmpeg won't touch it in any way, so it can't change the dimensions, frame rate, et cetera.
Also, ffmpeg always chooses a default video codec if you don't specify one. For AVI files, that's mpeg4.
If you want H.264 video, choose -c:v libx264 instead (or -vcodec libx264 which is the same). If you need to keep the original profile, use -profile:v baseline.
Two things:
When you change the size, you will recode the video. This lowers the quality and might considerably harm the video. To compensate, you might need to set a higher quality level. You do this by setting the Constant Rate Factor to anything below the default of 23, e.g. with -crf 20. Experiment and see how your video looks. If you have the time, add -preset slow (or slower, veryslow), which will give you better compression.
Not that it matters in your case, since your input uses the Constrained Baseline profile, but note that H.264 in AVI is not properly supported, at least when B pictures are used. Baseline doesn't support B pictures, so you should be fine, but a file using the Main profile or anything above might not play back on some devices or players. I would rather mux it into an MP4 or MKV container, especially since your input file is MP4 anyway.
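Putting both points together, a command along these lines should do what you ask (a sketch only; substitute your target dimensions for W:H and adjust the CRF and preset to taste):
ffmpeg -i input.mp4 -c:a copy -c:v libx264 -profile:v baseline -preset slow -crf 20 -vf scale=W:H output.mp4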
I used a number of jpeg files to create a timelapse video with ffmpeg. Individually they are visually ok.
These source images are captured by a mirrorless digital camera in JPEG format.
If I upload the timelapse video to YouTube, the video is clear and without any artifacts: https://www.youtube.com/watch?v=Qs-1ahCrb0Y
However, if I play the video file locally on macOS in the Photos or QuickTime apps, or on iOS, there are artifacts in the video (example screenshots not included).
This is the ffmpeg command I used to generate the video:
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" -pix_fmt yuv420p -profile baseline output.mp4
What additional parameter can I use to remove those artifacts?
Edit:
File info
The video plays without issue in VLC.
The H.264 codec standard defines levels. The level represents the resources required by a decoder to smoothly process a stream. Usually, levels are only pertinent for hardware players. However, some software players may have been designed with a level ceiling. Apparently, that's the case with Apple's players.
Your video's frame size is 6000x4000, for which the player has to support level 6.0, a recent addition to the standard (roughly two years old). I suggest you halve the resolution:
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" -vf scale=iw/2:ih/2,format=yuv420p -profile baseline out.mp4
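If you want to verify which profile and level the output actually carries (just a way to check, not part of the fix), ffprobe can report them:
ffprobe -v error -select_streams v:0 -show_entries stream=width,height,profile,level -of default=noprint_wrappers=1 out.mp4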
Good day,
I'm currently writing a bash script which records the screen under certain conditions. The problem is that only .avi works as a file extension for recording the screen. This script is going to be used on a Raspberry Pi, and even on a decent virtual machine I currently only get 10-20 fps (the goal would be around 30 fps). I think .avi is not suited for my project, but .mpeg and .mp4 are not working for recording. I tried recording to .avi and then converting to .mp4, but I have limited memory and .avi is just too big. I currently use the following command:
ffmpeg -f x11grab -y -r 30 -s 960x750 -i :0.0+0,100 -vcodec huffyuv ./Videos/out_$now.avi
($now is the current date and time.)
So I wanted to know if I need some special packages for ffmpeg to record to, for example, .mp4, or if there are other file formats available for ffmpeg screen recording.
Edit:
I found that the codec libx264 for mp4 works, but the fps drop until they hit 5 fps, which is definitely too low. The recorded video looked like a fast-forward version of the recorded screen.
With mpeg4 for mpeg I reached over 30 fps, but the video quality was very bad.
It appears that even my big avi files look like they are playing in fast forward. Is there something I'm doing wrong?
Is there a good middle ground where I get decent video quality, good fps (20+), and a file which isn't too big?
Edit 2:
I tried recording with .avi and converting it afterwards. Just converting with ffmpeg -i test.avi -c:a aac -b:a 128k -c:v libx264 -crf 23 output.mp4
resulted in the same frame drops as when I was recording straight to .mp4. But when I cut a little bit off the beginning of the video and named the output file .mp4, the size became much smaller. When I started the cut at 0:00:00 (so effectively just converting), it only changed the file format without converting, so the size stayed the same. Any ideas?
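For what it's worth, a possible middle ground (a sketch only; the ultrafast preset and CRF value are guesses that would need testing on the Pi) is to record straight to MP4 with libx264, trading compression efficiency for encoding speed:
ffmpeg -f x11grab -framerate 30 -video_size 960x750 -i :0.0+0,100 -c:v libx264 -preset ultrafast -crf 28 -pix_fmt yuv420p ./Videos/out_$now.mp4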
I am investigating the possibility of storing video streams which come from a few sources already encoded in H.264, without video transcoding, as the device I would like to use for this project won't be capable of transcoding the combined video on the fly.
What I am looking for is two or more pictures side by side (not video concatenation) packed into mp4/avi/mkv.
I believe the mkv container supports this kind of packaging, but I've not been able to find appropriate options for ffmpeg or another tool to store it this way. What it does instead is a very slow transcode into one big H.264 stream.
If your player can handle it, just make it perform the side-by-side view. No encoding or muxing required.
mpv video player
Example using mpv:
mpv --lavfi-complex="[vid1][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
The above example assumes each input has the same height. Otherwise you will have to add the scale, scale2ref, pad, and/or crop filters. Simple example using the crop filter to remove 20 pixels from the height:
mpv --lavfi-complex="[vid1]crop=iw:ih-20[c];[c][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
See the mpv documentation and FFmpeg Filters for more info.
Just specify multiple inputs.
ffmpeg -i [input 1] -i [input 2] ... -map 0 -map 1 ... -codec copy -f matroska [output]
As for the "side-to-side" part, it's up to the player to determine the presentation. If you don't control the player and you need a specific layout or presentation, then you must "burn" all these video streams into a new one and encode it as a new single stream.
So I have a Raspberry Pi app that records output from the on-board camera. These files are recorded as H264. After a user presses a button I want to display a portion of that video with OMXPlayer. OMXPlayer always needs an MP4 container (it always ignores FPS).
I don't want to wrap the entire H264 into an MP4 as that takes too much time.
My solution would be trim the last 30 seconds and place into MP4 container. Can I do this in one step without copying the entire content of the H264 into the MP4 first?
I don't want to re-encode this and I'm looking for the fastest operation possible.
This will be fast; just do a stream copy of the file to an MP4 container.
ffmpeg -sseof -30 -i INPUTFILE -c:v copy -c:a copy out.mp4
I need a video file whose audio and video track durations are always the same. The file must contain an audio track even if the source has no audio track. How do I tell ffmpeg to add a silent audio track when the source has none? Also, if the source has an audio track with a different duration than the video, I need ffmpeg to append silent audio to make the output audio and video the same duration. Is this possible in one line with ffmpeg?
The command below will add a silent track of the same length* as the video, if there is no audio** in the source file.
ffmpeg -i video -f lavfi -i anullsrc=cl=1 -shortest -c:v libx264 -c:a aac output.mov
*Since video frame duration and audio frame duration aren't usually identical, the lengths won't be exactly the same.
**When map is not specified, ffmpeg selects a single audio stream from among the inputs: the one with the highest channel count. If two or more streams have the same number of channels, it selects the stream with the lowest index. anullsrc here has one channel, so it will be passed over except when the source has no audio.
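The question also asks about a source whose audio is shorter than the video. That case isn't handled by the command above; one possible approach (an assumption on my part, not tested against your files) is the apad filter, which pads the existing audio with silence, combined with -shortest so the output stops at the video's end:
ffmpeg -i video -af apad -c:v copy -c:a aac -shortest output.mov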