I am currently using ffmpeg to convert a custom container media format to mp4. It is straightforward to dump all the h.264 frames to one file and the aac audio to another. Then I can combine the two and create an mp4 file with ffmpeg.
The problem is that the video source isn't always perfect. From time to time frames are dropped, arrive late, etc. This causes an A/V sync issue, since the PTS is generated at a constant rate by ffmpeg. The source format I am using has the PTS value, but I can't figure out a way to pass it to ffmpeg along with the raw h.264 frames.
I suppose it would be possible to create a demuxer for the custom format, but it seems like a lot of effort. I looked into ffmpeg's .nut container format, thinking that I might be able to convert from the custom container to .nut first. Unfortunately, it turned out to be more complex than it looks on the surface.
It seems like there should be an easy way to pass a frame and its PTS value to ffmpeg, but I haven't come across it yet. Any help would be appreciated.
Here is the ffmpeg command I am using:
ffmpeg -f s16le -ac 1 -ar 48k -i source.audio -framerate 20 -i source.video -c:a aac -b:a 64k -r 20 -c:v h264_nvenc -rc:v vbr_hq -cq:v 19 -n out.mp4
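One workaround, if an intermediate container is acceptable: MKVToolNix's mkvmerge can attach an external timestamp file to a raw H.264 stream, and ffmpeg will then honor those timestamps when muxing. This is only a sketch under assumptions: the file names are placeholders, the timestamp values are invented for illustration, and the raw stream is renamed to source.h264 so mkvmerge detects it. First write a "timestamp format v2" file with one presentation time in milliseconds per frame, taken from the custom container:

# timestamp format v2
0
50
100

Then mux the raw stream with those timestamps and let ffmpeg stream-copy the result:

mkvmerge --timestamps 0:timestamps.txt -o video.mkv source.h264
ffmpeg -f s16le -ac 1 -ar 48k -i source.audio -i video.mkv -c:a aac -b:a 64k -c:v copy -n out.mp4

If you still want to re-encode with NVENC, swap -c:v copy back for your h264_nvenc options.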
I found various articles on changing the fps with ffmpeg, but none of them matches my exact purpose.
There is an ffmpeg command like below:
ffmpeg -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -f mp4
This remuxes my camera stream to fragmented mp4 perfectly.
Is there a way to force ffmpeg to lower the FPS to save bandwidth?
E.g. the camera streams 30 fps and needs 1 Mbps as fMP4 (sample numbers!).
I'd like to know if it's possible to lower the FPS and get an output stream that fits in 500 kbps (50% of the original would be enough), without re-encoding.
ffmpeg -r 1 -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -f mp4
and
ffmpeg -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -r 1 -f mp4
do not seem to work.
A temporally coded video stream (like one using the H.264 codec) cannot have intermediate packets dropped arbitrarily, so this is not possible. Only whole GOPs, or the trailing part of a GOP, may be dropped.
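If re-encoding is acceptable, lowering the frame rate does cut the bandwidth; a minimal sketch (the x264 settings and output name are placeholders, not from the question):

ffmpeg -i RTSPCAMERAPRODUCEH264 -r 1 -an -c:v libx264 -preset veryfast -movflags +frag_keyframe+empty_moov -f mp4 output.mp4

Here -r 1 is an output option, so frames are dropped after decoding and the stream is encoded fresh at 1 fps.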
In an attempt to make my recordings more digestible for video editors, I'm trying to re-encode my files to DNxHR. After a few hiccups I got solid output with this command:
ffmpeg -ss 00:08:20 -i \\ASEXYCAPTUREPC\Users\djcim\Videos\Main\Magewell\Mage00.ts -map 0 -c:v dnxhd `
-profile:v dnxhr_hq -b:v 250M -acodec copy -ss 00:00:10 -t 00:00:20 `
S:\Videos\SavedClips\COD\Magewell\Test_Mage.mov
However, it doesn't seem to be honoring my specified bitrate. When I probe the file with ffprobe:
ffprobe -i [input file] -show_streams
it says bit_rate=1739980800, which comes out to about 1700M if I'm not mistaken, far above the 250M I want.
Not sure if this is the only factor, but the result is huge files, around 12GB a minute, while the source file itself is 10 minutes long but only 15GB.
The source file is also encoded / recorded using FFmpeg and has a resolution / frame rate of 3440x1440 @ 100 FPS with a 250M bitrate.
Any ideas? Really hoping to get these files much smaller.
The DNx encoders don't accept bespoke ratecontrol. The quantization parameters are fixed. Output bitrate is a function of frame size, framerate and pixel format.
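Since the bitrate follows from those three factors, the only levers are a lighter profile, a smaller frame size, or a lower frame rate. A sketch with assumed values, not a drop-in for the exact command above (input/output names are placeholders; dnxhr_sq instead of dnxhr_hq, scaled to 1920 wide, halved to 50 fps, and the ignored -b:v dropped):

ffmpeg -ss 00:08:20 -i input.ts -map 0 -c:v dnxhd -profile:v dnxhr_sq -vf "scale=1920:-2,fps=50" -acodec copy -ss 00:00:10 -t 00:00:20 output.mov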
When I encode videos with FFmpeg, I would like to put a jpg image before the very first video frame, because when I embed the video on a webpage with the HTML5 "video" tag, it shows the very first frame as a splash image. Alternatively, I want to encode an image into a one-frame video and concatenate it with my encoded video. I don't want to use the "poster" attribute of the HTML5 "video" element.
You can use the concat filter to do that. The exact command depends on how long you want your splash screen to be. I am pretty sure you don't want a 1-frame splash screen, which lasts about 1/25 to 1/30 of a second, depending on the video ;)
The Answer
First, you need to get the frame rate of the video. Try ffmpeg -i INPUT and find the tbr value. E.g.
$ ffmpeg -i a.mkv
ffmpeg version N-62860-g9173602 Copyright (c) 2000-2014 the FFmpeg developers
built on Apr 30 2014 21:42:15 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
[...]
Input #0, matroska,webm, from 'a.mkv':
  Metadata:
    ENCODER         : Lavf55.37.101
  Duration: 00:00:10.08, start: 0.080000, bitrate: 23 kb/s
    Stream #0:0: Video: h264 (High 4:4:4 Predictive), yuv444p, 320x240 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 1k tbn, 50 tbc (default)
At least one output file must be specified
In the above example, it shows 25 tbr. Remember this number.
Second, you need to concatenate the image with the video. Try this command:
ffmpeg -loop 1 -framerate FPS -t SECONDS -i IMAGE \
-t SECONDS -f lavfi -i aevalsrc=0 \
-i INPUTVIDEO \
-filter_complex '[0:0] [1:0] [2:0] [2:1] concat=n=2:v=1:a=1' \
[OPTIONS] OUTPUT
If your video doesn't have audio, try this:
ffmpeg -loop 1 -framerate FPS -t SECONDS -i IMAGE \
-i INPUTVIDEO \
-filter_complex '[0:0] [1:0] concat=n=2:v=1:a=0' \
[OPTIONS] OUTPUT
FPS = the tbr value obtained in step 1
SECONDS = duration you want the image to be shown.
IMAGE = the image name
INPUTVIDEO = the original video name
[OPTIONS] = optional encoding parameters (such as -vcodec libx264 or -b:a 160k)
OUTPUT = the output video file name
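For instance, with the 25 tbr input from step 1, a 3-second splash, and assumed file names (splash.png, input.mp4), the filled-in command would look like:

ffmpeg -loop 1 -framerate 25 -t 3 -i splash.png \
    -t 3 -f lavfi -i aevalsrc=0 \
    -i input.mp4 \
    -filter_complex '[0:0] [1:0] [2:0] [2:1] concat=n=2:v=1:a=1' \
    -c:v libx264 -crf 23 -c:a aac -b:a 128k output.mp4

One caveat: aevalsrc defaults to 44.1 kHz mono, so if the main audio differs, match it (e.g. aevalsrc='0|0:s=48000' for 48 kHz stereo), and the image should have the same dimensions as the video.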
How Does This Work?
Let's split the command line I used:
-loop 1 -framerate FPS -t SECONDS -i IMAGE: this basically means: open the image and loop over it to make a video of SECONDS seconds at FPS frames per second. The reason it needs the same FPS as the input video is that the concat filter we will use later requires it.
-t SECONDS -f lavfi -i aevalsrc=0: this means: generate SECONDS seconds of silence (the expression 0 produces silent samples). You need the silence to fill up the time while the splash image is shown. This isn't needed if the original video has no audio.
-i INPUTVIDEO: open the video itself.
-filter_complex '[0:0] [1:0] [2:0] [2:1] concat=n=2:v=1:a=1': this is the best part. You take file 0 stream 0 (the image-video), file 1 stream 0 (the silent audio), and file 2 streams 0 and 1 (the real input video and audio), and concatenate them together. The options n, v, and a mean: 2 segments, 1 output video, and 1 output audio.
[OPTIONS] OUTPUT: this just means to encode the video to the output file name. If you are using HTML5 streaming, you'd probably want to use -c:v libx264 -crf 23 -c:a libfdk_aac (or -c:a libfaac) -b:a 128k for H.264 video and AAC audio.
Further information
You can check out the documentation for the image2 demuxer which is the core of the magic behind -loop 1.
Documentation for concat filter is also helpful.
Another good source of information is the FFmpeg wiki on concatenation.
The answer above works for me, but in my case it took too long to execute (perhaps because it re-encodes the entire video). I found another solution that's much faster. The basic idea is:
Create a "video" that only has the image.
Concatenate the above video with the original one, without re-encoding.
Create a video that only has the image:
ffmpeg -loop 1 -framerate 30 -i image.jpg -c:v libx264 -t 3 -pix_fmt yuv420p image.mp4
Note the -framerate 30 option. It has to be the same as the main video's frame rate. Also, the image should have the same dimensions as the main video. -t 3 specifies the length of the generated video in seconds.
Convert the videos to MPEG-2 transport stream
According to the official ffmpeg documentation, only certain formats can be concatenated using the concat protocol; this includes MPEG-2 transport streams. And since we have 2 MP4 videos, they can be losslessly converted to MPEG-2 TS:
ffmpeg -i image.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts image.ts
and for the main video:
ffmpeg -i video.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts video.ts
Concatenate the MPEG-2 TS files
Now use the following command to concatenate the above intermediate files:
ffmpeg -i "concat:image.ts|video.ts" -c copy -bsf:a aac_adtstoasc output.mp4
Although there are 4 commands to run, combined they're still much faster than re-encoding the entire video.
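For what it's worth, recent ffmpeg builds also offer the concat demuxer, which can stream-copy the two MP4s directly from a list file and skip the intermediate .ts step, as long as both files share identical codec parameters (a sketch using the same file names):

echo "file 'image.mp4'" > mylist.txt
echo "file 'video.mp4'" >> mylist.txt
ffmpeg -f concat -i mylist.txt -c copy output.mp4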
My solution. It shows an image for 5 seconds before the video and scales the video to 1280x720, padding with black as needed. The image should have a 16:9 aspect ratio.
ffmpeg -i video.mp4 -i image.png -filter_complex \
"color=c=black:size=1280x720 [temp]; \
[temp][1:v] overlay=x=0:y=0:enable='between(t,0,5)' [temp]; \
[0:v] setpts=PTS+5/TB, scale=1280:720:force_original_aspect_ratio=decrease, pad=1280:720:-1:-1:color=black [v:0]; \
[temp][v:0] overlay=x=0:y=0:shortest=1:enable='gt(t,5)' [v]; \
[0:a] asetpts=PTS+5/TB [a]" \
-map "[v]" -map "[a]" -preset veryfast output.mp4
I have a raw YUV video file that I want to do some basic editing on in Adobe Premiere CS6, but it won't recognize the file. I thought of using ffmpeg to convert it to something Premiere would take in, but I want this to be lossless, because afterwards I will need it in YUV format again. I thought of avi, mov, and prores, but I can't seem to figure out the proper ffmpeg command line and how to ensure it is lossless.
Thanks for your help.
Yes, this is possible. It is normal that you can't open that raw video file, since it is just raw data in one giant file, without any headers. So Adobe Premiere doesn't know what the frame size is, what the framerate is, etc.
First make sure you have downloaded the FFmpeg command line tool. After installing it, you can start converting by running a command with parameters. There are some parameters you have to fill in yourself before starting the conversion:
What type of the YUV pixel format are you using? The most common format is YUV4:2:0 planar 8-bit (YUV420p). You can type ffmpeg -pix_fmts to get a list of all available formats.
What is the framerate? In my example I will use -r 25 fps.
What encoder do you want to use? The libx264 (H.264) encoder is a great one for lossless compression.
What is your framesize? In my example I will use -s 1920x1080
Then we get this command to do the compression:
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -r 25 -pix_fmt yuv420p -i inputfile.yuv -c:v libx264 -preset ultrafast -qp 0 output.mp4
A little explanation of all other parameters:
With -f rawvideo you set the input format to a raw video container
With -vcodec rawvideo you set the input file as not compressed
With -i inputfile.yuv you set your input file
With -c:v libx264 you set the encoder to encode the video to libx264.
The -preset ultrafast setting only speeds up the encoding, so your file size will be bigger than with veryslow.
With -qp 0 you set the maximum quality: 0 is lossless, 51 is the worst quality on the quantizer scale.
Then output.mp4 is your new container to store your data in.
After you are done in Adobe Premiere, you can convert it back to a YUV file by inverting almost all the parameters. FFmpeg recognizes what's inside the mp4 container, so you don't need to provide format parameters for the input.
ffmpeg -i input.mp4 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 rawvideo.yuv
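To verify the round trip really was lossless, you can compare checksums of the decoded frames with ffmpeg's md5 muxer (a quick sanity check, using the same assumed parameters as above):

ffmpeg -f rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 -i inputfile.yuv -f md5 -
ffmpeg -i output.mp4 -f md5 -

If both commands print the same MD5, no information was lost.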