ffmpeg -r option - ffmpeg

I am trying to use ffmpeg (under linux) to add a small title to a video. So, I use:
ffmpeg -i hk.avi -r 30000/1001 -metadata title="SOF" hk_titled.avi
The addition of the title seems to work, but the output file is about a third of the size of the input file, and I was wondering why that is. Is this at the expense of video quality? I am unsure. How do I preserve the same quality/size as the input file?
The main point I am unable to figure out is the use of the -r option. Going through the ffmpeg docs, it seems that -r is the frame rate in frames per second (the input video is 23.9 fps). 30000/1001 works out to about 29.97 fps, but I was unsure if I should be using this value.
Thanks for your time.

The default settings for ffmpeg do not always provide good quality output when you encode, but this depends on your output format and the available encoders. With your output, ffmpeg will use the default video bitrate of -b 200k (or -b:v 200k in newer syntax), which is why the file shrinks and the quality drops.
However, you can tell ffmpeg to simply copy the input streams without re-encoding and this is recommended if you just want to add or edit metadata. These examples do the same thing but use different syntax depending on your ffmpeg version:
ffmpeg -i hk.avi -vcodec copy -acodec copy -metadata title="SOF" hk_titled.avi
ffmpeg -i hk.avi -c copy -metadata title="SOF" hk_titled.avi
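If you do want to re-encode rather than stream copy, a rough sketch is to raise the quality explicitly instead of relying on the 200k default; this assumes the default MPEG-4 video encoder that ffmpeg typically picks for .avi output:
ffmpeg -i hk.avi -q:v 2 -c:a copy -metadata title="SOF" hk_titled.avi
Here -q:v 2 requests a high-quality variable bitrate from the video encoder and -c:a copy leaves the audio untouched.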

Related

ffmpeg enforces bitrate value other than what specified

I have a folder containing 1701 image frames named "frame0000.jpg", "frame0001.jpg",..., "frame1700.jpg". When I try to convert them to a video using this command:
ffmpeg -r:1751/61 -b:2400k -i frame%3d.jpg video1.avi
It produces a video with a bitrate of 717 kb/s and 25 frames/second (so the FPS is also different from what I specified!); hence, it has poor quality.
I read a lot about this issue such as:
FFMPEG ignores bitrate
But couldn't find the solution to my case.
Any help is appreciated.
Fixed command:
ffmpeg -framerate 1751/61 -i frame%04d.jpg -b:v 2400k video1.avi
Option placement is important
Syntax is:
ffmpeg [input options] -i input [output options] output
Use valid options
-r:1751/61 is incorrect. Use -framerate 1751/61. The image demuxer prefers -framerate, not -r.
-b:2400k is incorrect. Use -b:v 2400k.
Given frames named frame0000.jpg (four digits, zero-padded), the input pattern should be frame%04d.jpg rather than frame%3d.jpg.
Refer to the log
It should have provided errors to help you determine the problem:
Invalid stream specifier: 1751/61
Option b (video bitrate (please use -b:v)) cannot be applied to input -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.
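As a sanity check (a quick sketch), ffprobe can confirm which frame rate and bitrate actually ended up in the output:
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,bit_rate video1.avi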

ffmpeg: crop video into two grayscale sub-videos; guarantee monotonic frames; and get timestamps

The need
Hello, I need to extract two regions of a .h264 video file via the crop filter into two files. The output videos need to be monochrome and have the .mp4 extension. The encoding (or format?) should guarantee that the video frames are organized monotonically. Finally, I need to get the timestamps for both files (which I'd bet are the same timestamps I would get from the input file; see below).
In the end I would be happy to do everything in one command via an elegant one-liner (via a complex filter, I guess), but I am starting with multiple steps to break it down into simpler problems.
Along this path I have run into many difficulties, and despite having searched in many places I don't seem to find solutions that work. Unfortunately I'm no expert in ffmpeg or video conversion, so the more I search and the more details I discover, the less I actually solve.
Below you find some of my attempts to work with the following options:
-filter:v "crop=400:ih:260:0,format=gray" to do the crop and the monochrome conversion
-vf showinfo possibly combined with -vsync 0 or -copyts to get the timestamps via stderr redirection &> filename
-c:v mjpeg to force monotonic frames (are there other ways?)
1. cropping each region and obtaining monochrome videos
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" outL.mp4
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:1280:0,format=gray" outR.mp4
The issue here is that in the output files the frames are not organized monotonically (I don't understand why; how would that make sense in any video format? I can't say whether it comes from the input file).
EDIT. Maybe it is not the frames but the packets, as returned by PyAV's .demux() method, that are not monotonic (see "instructions to reproduce..." below).
I was advised to run ffmpeg -i outL.mp4 outL.mjpeg afterwards, but this produces two videos that look very pixellated (at least when playing them with ffplay) despite being, surprisingly, 4x bigger than the input. Needless to say, I need both monotonic frames and lossless conversion.
EDIT. I acknowledge the advice to specify -q:v 1; this fixes the pixellation effect but produces an even bigger file, ~12x the size. Is it necessary? (see "instructions to reproduce..." below)
2. getting the timestamps
I found this piece of advice, but I don't want to generate hundreds of image files, so I tried the following:
$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 &>tsL.txt
$ ffmpeg -y -hide_banner -i outR.mp4 -vf showinfo -vsync 0 &>tsR.txt
The issue here is that I don't get any output because ffmpeg claims it needs an output file.
The need to produce an output file, and the worry that the timestamps could be lost in the previous conversions, leads me back to a first attempt at a one-liner, where I am also testing the -copyts option and forcing the encoding with -c:v mjpeg as per the advice mentioned above (I don't know if it is in the right position, though):
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray" -vf showinfo -c:v mjpeg eyeL.mp4 &>tsL.txt
This does not work: surprisingly, the output .mp4 I get is the same as the input. If instead I put the -vf showinfo option just before the stderr redirection, I get no redirected output:
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:260:0,format=gray" -c:v mjpeg outR.mp4 -vf showinfo dummy.mp4 &>tsR.txt
In this case I get the desired timestamps output (too much of it: I will need some way to grab only the pts and pts_time data out of it), but I have to produce a big dummy file. The worst thing, anyway, is that the mjpeg encoding again produces a low-resolution, very pixellated video.
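If a pass is only needed for the showinfo log, one possible way to avoid the dummy file (a sketch, assuming nothing else is wanted from that run) is the null muxer, which decodes and filters but discards the output:
ffmpeg -hide_banner -i outL.mp4 -vf showinfo -f null - 2> tsL.txt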
I admit that the logic of how to place the options and the output files on the command line is obscure to me. The possible combinations are many, and the more options I try the more complicated it gets, and I am not getting much closer to a solution.
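For reference, a possible single-command sketch (assuming showinfo can simply be appended to the same filter chain, so it is not overridden by a separate -vf): each output file takes the options written immediately before it, and the showinfo lines still go to stderr:
ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray,showinfo" -c:v mjpeg outL.mp4 2> tsL.txt
A second output (for example the right-hand crop) could be appended to the same command with its own -filter:v and filename.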
3. [EDIT] instructions how to reproduce this
get a .h264 video
turn it into .mp4 with the ffmpeg command $ ffmpeg -i inVideo.h264 out.mp4
run the following python cell in a jupyter-notebook
see that the packet timestamps have diffs both greater than and less than zero
%matplotlib inline
import av
import numpy as np
import matplotlib.pyplot as mpl

fname, ext = "outL.direct", "mp4"

# packet timestamps straight from the demuxer
cont = av.open(f"{fname}.{ext}")
pk_pts = np.array([p.pts for p in cont.demux(video=0) if p.pts is not None])

# reopen the container (demux() above consumed it) and collect decoded-frame timestamps
cont = av.open(f"{fname}.{ext}")
fm_pts = np.array([f.pts for f in cont.decode(video=0) if f.pts is not None])

print(pk_pts.shape, fm_pts.shape)

# successive differences: negative values reveal non-monotonic timestamps
mpl.subplot(211)
mpl.plot(np.diff(pk_pts))
mpl.subplot(212)
mpl.plot(np.diff(fm_pts))
finally, also create the mjpeg-encoded files in the following ways, and check packet monotonicity with the same script (also compare file sizes; see the ffprobe sketch after these commands for an alternative check)
$ ffmpeg -i inVideo.h264 out.mjpeg
$ ffmpeg -i inVideo.h264 -c:v mjpeg out.c_mjpeg.mp4
$ ffmpeg -i inVideo.h264 -c:v mjpeg -q:v 1 out.c_mjpeg_q1.mp4
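As an alternative to the Python script, the packet timestamps can also be dumped with ffprobe (a sketch, assuming ffprobe is installed alongside ffmpeg) and inspected for negative jumps:
$ ffprobe -v error -select_streams v:0 -show_entries packet=pts -of csv=p=0 out.c_mjpeg.mp4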
Finally, the question
What is a working way / the right way to do it?
Any hints, even about single steps and how to combine them correctly, will be appreciated. Also, I am not limited to the command line: I would be happy to try a more programmatic solution in Python (jupyter notebook) instead, if someone points me in that direction.

Using ffmpeg to convert an SEC file

I need to convert an SEC file into any video format that I can share and/or upload to Youtube. MP4, etc.
I'm a complete newbie at all things terminal. I've tried:
ffmpeg -i video.sec video.mp4
ffmpeg -i video.sec -bsf:v h264_mp4toannexb -c:v copy video.avi
ffmpeg -i video.sec -b 256k -vcodec h264 -acodec aac video.mp4
I don't understand what any of these mean; they're just examples I found online. However, whatever I try returns this error:
Invalid data found when processing input
Any thoughts? Thanks!
I had to add the following option so it would skip the SEC's custom header.
-skip_initial_bytes 48
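For example (a sketch; the 48-byte offset is assumed to match this particular recorder's header), placed as an input option before -i:
ffmpeg -skip_initial_bytes 48 -i video.sec video.mp4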
I know this is old, but I was trying to figure this out as well. What ended up finally working for me was this command:
./ffmpeg -f h264 -i INPUT.sec -filter:v "setpts=4*PTS" OUTPUT.avi
The -f h264 was the part I was missing, and the -filter:v "setpts=4*PTS" part slows it back down to the original speed. You can also change the .avi at the end to whichever format works best for you.
I hope this helps someone out :)
OK, just to clear up some recent threads…
The Samsung DVR used here was an SRD-440. RB kindly sent me a test file: a .BU file with an associated .db2 file. This was a bit of a surprise, as with all older Samsung DVRs the .bu files could only be played back in the DVR. I mentioned this here: https://spreadys.wordpress.com/2014/07/21/ifsec-samsung-exports/
It appears that Samsung have caught on, and the BU file is now playable because it is an H264/AVC stream conforming to a standard profile. I have updated the IFSEC post mentioned above to highlight this change.
Back to RB’s stream: the challenge was to get these files viewable in WMV format. They were all field-based, at 704×288.
The speed of playback is controlled by the Samsung software using the .db2 file. As a result, the metadata and timing information in the video stream was wrong, which caused speed issues, and then quality issues when attempting to correct them.
Consequently, I found it necessary to force an input rate and generate new presentation timestamps by placing those options BEFORE the input file.
The following FFmpeg string did the job…
ffmpeg -r 12 -fflags genpts -i FILE.bu -vf scale=704:528 -sws_flags lanczos -q:v 2 FILE.wmv
Remember, this is for preview – analysis would be completed differently due to the scaling, the interpolation method, and the WMV compression!
As it's likely that RB may have quite a few .bu files in a folder, I placed this into a batch file to transcode the whole lot within a few minutes… more on batch files coming in a new post soon!
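A rough shell-loop equivalent of that batch file (a sketch only; the original was a Windows batch file and the filenames are assumed):
for f in *.bu; do ffmpeg -r 12 -fflags genpts -i "$f" -vf scale=704:528 -sws_flags lanczos -q:v 2 "${f%.bu}.wmv"; done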
https://spreadys.wordpress.com/2014/07/21/ifsec-samsung-exports/
or
ffmpeg -i (name of file).sec (name of final file).mp4
ffmpeg -i (name of file).sec -filter:v "setpts=3.3*PTS" (name of final_file).mp4

Join multiple flv with ffmpeg

I am trying to join two flv files using the concat demuxer in ffmpeg-1.1. I have created a list named mylist.txt and placed the two flv files into it. The problem I am facing is that the output of the first file in mylist.txt streams perfectly, but the video breaks into pieces when it comes to the second file. It looks like I am using the wrong options with concat; please guide me toward suitable commands for the concat option. The following are the commands and configurations I am using for transcoding the .flv files:
mylist.txt
file '/root/1.flv'
file '/root/2.flv'
ffmpeg command :-
ffmpeg -re -f concat -i /root/mylist.txt -acodec copy -vcodec copy output.flv
Following link is the output of ffmpeg command :-
http://pastebin.com/P3uaUDEd
Unless the 2 files were encoded identically (and even if they were, it could still be a problem), you would need to transcode the audio and video so that things like timestamps, bitrates, resolutions and other codec internals are correct in both streams. Change your -acodec copy and -vcodec copy to the codecs of your choice (libx264 and MP3/AAC are good choices).
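A sketch of that re-encode (encoder names assumed; adjust quality settings to taste). On newer ffmpeg versions you may also need to add -safe 0 before -i because the list uses absolute paths:
ffmpeg -f concat -i /root/mylist.txt -c:v libx264 -c:a aac output.flv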

Can you splice a 1 min clip out of a larger file, without transcoding it?

I have a site that allows people to upload large video files in various formats (avi, mp4, mkv and flv). I need to generate a 1 minute "sample" from the larger file that has been uploaded, and the sample needs to be in the same format, have the same frame dimensions and bit-rate as the original file. Is there a way to simply cut out a section of the file into a new file? Preferably in ffmpeg (or any other tool if ffmpeg is impossible).
First you'll want to understand how video files actually work. Here's a set of tutorials explaining that: Overly Simplistic Guide to Internet Video.
Then, you can try a variety of tools that may help with slicing out a sample. One is flvtool (if your input is FLV); another is FFmpeg. With FFmpeg you can specify a start time and stop time, and it will attempt to cut out just what you ask for (but it will have to find the nearest key-frame to begin slicing at).
Here's the FFmpeg command to read a file called input.flv, start 15 seconds into the video, and then cut out the next 60 seconds, but otherwise keep the same parameters for the audio codec and video codec, and write it to an output file:
ffmpeg -i input.flv -ss 15 -t 60 -acodec copy -vcodec copy output.flv
Finally, if you want, you can write computer code in C or C++ (using FFmpeg's libav libraries) or Java (using Xuggler) to programmatically do this, but that's pretty advanced for your use case.
If you are having problems keeping audio and video synced up, as I was, the following may help (found on another website):
ffmpeg -sameq -i file.flv -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 -copyts output.flv
As Evan notes, the approach in the accepted answer can result in loss of A/V sync. However, his solution is not correct because -sameq has been removed.
As stated at https://trac.ffmpeg.org/wiki/Seeking, the -ss option should come before -i, not after. This fixed the issue for me.
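For example, a sketch combining both points, keeping the stream copy from the accepted answer but seeking before the input:
ffmpeg -ss 00:01:00 -i input.flv -t 60 -c copy output.flv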
Another option is to use the -fs switch. Example:
ffmpeg -i ip.mkv -fs 500M -c copy ~/Movies/reservoir/carbohydrates.mkv
This extracts 500 megabytes (500×1000×1000 bytes plus 'muxing overhead') from the selected source; as the name suggests, the cut is based on file size rather than duration.
One love. And respect.
