I used an older version of ffmpeg on a project and now I'm planning on switching over to avconv, and I was wondering if the -fps filter has been changed or deprecated? If it has been changed, what is the current substitute for the fps filter in avconv? Many thanks
I have found that -r works quite well. But you are correct, the fps filter does seem to have seen better days: it almost works sometimes, but not others.
Make sure to use -r right before the output file; otherwise, depending on how you write your command, it can be interpreted as the frame rate of one of the input streams.
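For example, placed after the input, -r sets the output frame rate; placed before the input, it overrides the input's frame rate instead (filenames here are just placeholders):
avconv -i "input.mp4" -r 30 "output.mp4"
avconv -r 30 -i "input.mp4" "output.mp4"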
By default, the following unassuming FFmpeg command:
ffmpeg -i "input.mp4" "output.mkv"
...will lossily re-encode a file unless it has the -c copy flag added, which instead copies the streams into the new container without any encoding. I remember not realising this as a beginner to FFmpeg years ago and being surprised when I found out, and ever since then it's something I've wondered about but not got around to asking.
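For comparison, this is the stream-copy variant, which remuxes the existing streams into the new container and leaves the encoded data untouched:
ffmpeg -i "input.mp4" -c copy "output.mkv"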
The main justification for this behaviour that comes to mind is that encoding is the much more common operation, and it might be annoying to have to pass an extra -encode flag for most uses.
Was this ever one of the reasons cited for this design decision? Has the issue ever been discussed on the FFmpeg mailing lists, or has it remained unquestioned since being written in the days of Fabrice Bellard?
Hi there, I am aiming to record 1-hour videos at 500x375 from a Raspberry Pi (running 64-bit Bullseye), which need to be recorded in such a way that they can survive unexpected program termination or system shutdown.
Currently I am using a bash script utilising libcamera-vid and libav:
libcamera-vid -t $filmDuration --framerate 5 --width 500 --height 375 --nopreview --codec libav --libav-format avi -o "$(date +%Y%m%d_%H%M.avi)" --tuning-file /usr/share/libcamera/ipa/raspberrypi/imx219_noir.json
I initially encoded h.264 as mp4 but found that any interruption of the script would corrupt the file, and I lack the understanding to work around this (though I suspect a method exists). The avi format, on the other hand, seems more robust, so I moved to using it, but I am having a fairly serious issue whereby the file appears to think the video is running at 600 fps rather than 5.
As far as I can tell this is not the case, and there has been no loss in video duration of the kind I would expect if the frames were being condensed, but the machine learning toolkit (utilising OpenCV) that these videos are recorded for takes the fps information as part of its novel video analysis, effectively making it unable to analyse them.
I am not sure why exactly this is occurring or how to fix it, but any advice would be very welcome, including any suggestions for other encoding software or for recording to mp4 in a way that avoids corruption.
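One approach often suggested for interruption-tolerant MP4 recording is a fragmented MP4. A sketch of how that might look with this pipeline, piping the raw H.264 stream into ffmpeg rather than letting libcamera-vid mux it (an arrangement assumed here, not tested on this setup):
libcamera-vid -t $filmDuration --framerate 5 --width 500 --height 375 --nopreview --codec h264 -o - | ffmpeg -framerate 5 -i - -c copy -movflags +frag_keyframe+empty_moov "$(date +%Y%m%d_%H%M).mp4"
With +frag_keyframe+empty_moov the index is written incrementally, so the file should remain playable up to the last completed fragment even after an abrupt stop.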
Not resolved as such, but after opening an issue at the libcamera-apps repo, this behaviour has been replicated and confirmed to be unintended.
While a similar issue affecting the mkv format, which incorrectly reported its fps (as 30, according to ffprobe), has been fixed, the issue with avi files incorrectly reporting fps currently has not.
Edit: A new update to libcamera-apps has now fixed the avi issue as well, according to the latest commit.
What I did so far:
I learned from this answer that I can use negative mapping to remove unwanted streams (extra audio, subtitles) from my video files.
I then proceeded to apply it to a few dozen files in a folder using a simple for /r loop in Windows' cmd, along the lines of the example below. Since I thought of this process as a kind of trim, I didn't care about my original files and wanted ffmpeg to replace them, which of course it cannot.
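The loop looked roughly like this (the exact stream selectors varied per file, so these are illustrative; %%F is batch-file syntax, use %F directly on the command line):
for /r %%F in (*.mkv) do ffmpeg -i "%%F" -map 0 -map -0:a:1 -map -0:s -c copy "%%~dpnF_new.mkv"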
I tried to search a bit further and find ways to work around this issue without simply using a new destination and manually replacing the files afterwards, but had no luck.
However, a lot of my findings seemed to indicate that ffmpeg can use external temporary files for some of its functions, even though I couldn't really find more on it.
What I want to do:
So, is there any way I can make ffmpeg remove those extra streams and then replace the original file somehow? I'll also need to apply this to multiple files, but I don't think that would be a big issue...
I really need this to be done with ffmpeg, as learning the tool to its full extent is a long-term goal of mine and I want to keep working on that curve. As for batch/cmd, I prefer it because I haven't properly learned a programming language yet (even if I often meddle with a few), but I would be happy to take suggestions of any kind for handling ffmpeg!
Thank you!
Not possible with ffmpeg alone
ffmpeg can't do in-place file changes.
The output must be a new file.
However, deleting or replacing the original file with the new file should be trivial in your batch script.
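A minimal sketch of that in a batch file, assuming .mkv inputs and the negative mapping from the question; -f matroska is needed because the temporary name has no recognisable extension, and move /y only runs if ffmpeg exits successfully:
for /r %%F in (*.mkv) do (
    ffmpeg -i "%%F" -map 0 -map -0:s -c copy -f matroska "%%F.tmp" && move /y "%%F.tmp" "%%F"
)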
I saw some vague references while searching, and also stumbled upon the cache protocol and -hls_flags temp_file.
The cache protocol allows some limited seeking during playback of live inputs. -hls_flags temp_file is only usable with the HLS muxer and creates a file named filename.tmp, which is then renamed once the active segment completes. Neither is usable for what you want to do.
Here's the output showing 2177 frames converted: https://i.imgur.com/VP4M7Zl.png. It is ignoring the further 1000-odd frames in the input set.
There's nothing unusual AFAICS about input JPG 2177.
What might cause this? Where should I look for more info?
This can happen if the numbering isn't perfectly contiguous, e.g. output02177.jpg is followed by output02179.jpg, or if output02178.jpg is zero-sized.
Edit: I just remembered that you're using a very old build of ffmpeg. Upgrade to 4.0 or newer, and change -r 60 to -framerate 60.
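With a newer build, the full command would look something like this (the filename pattern and codec are assumptions based on the frames above):
ffmpeg -framerate 60 -i output%05d.jpg -c:v libx264 -pix_fmt yuv420p out.mp4
If the sequence doesn't start at or near 0, also pass -start_number with the first frame's number before -i.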
file1.wav is 25 minutes long. file2.wav is 20 seconds long. file2.wav is delayed to the end of file1.wav and the two are "amixed" together. The delay works perfectly and file2.wav overlays into the correct location at the end of the 25-minute file1.wav. My problem is the blending of the two clips together: I believe the dropout_transition (even though it's set to 0) still creates an audible, undesirable "dip" before and after file2.wav overlays onto file1.wav. Is there a way to ensure that no "dips" happen at all? The two clips are already well balanced against each other using mixing software, so I don't want their levels to change at all, but I also don't want distortion. Is this possible? Is it possible to use amerge instead of amix as an alternative? I tried, but I can't figure out the correct syntax. Help from geniuses appreciated!
I've tried various dropout_transition and volume settings... this is as close as I've come to the desired results. Like I said, I can't figure out the correct syntax to use amerge instead of amix.
ffmpeg -i file1.wav -i file2.wav -filter_complex "[1]adelay=70751488S|70751488S,volume=1[b];[0][b]amix=inputs=2:duration=first:dropout_transition=0,volume=2" /output.wav
It works! Here's the solution.
1) As stated, make sure that your dropout_transition equals the length of the longest file in seconds. Thank you for this.
2) The two files I was using were 48 kHz/16-bit WAV files. Make sure that you export 32-bit floating-point WAV files to be mixed together. From the manual: amix only supports "floating point samples; If the amix input has integer samples then aresample will be automatically inserted to perform the conversion to float samples." Apparently this works, but aresample somehow messes up the transitions when you don't use 32-bit WAV files. No idea why; just avoid the conversion and use only 32-bit WAV files.
3) Note: when combining two .wav files, the total output is lowered by 6 dB. This is to prevent distortion and is normal.
So the question is: why did aresample mess things up for me?
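For anyone reproducing this, the pre-conversion to 32-bit float can itself be done with ffmpeg; pcm_f32le is the 32-bit floating-point WAV encoder, and the filenames are assumed from above:
ffmpeg -i file1.wav -c:a pcm_f32le file1_f32.wav
ffmpeg -i file2.wav -c:a pcm_f32le file2_f32.wav
Newer ffmpeg builds also add a normalize option to amix (e.g. amix=inputs=2:duration=first:normalize=0) that disables the automatic input scaling, which may be another way around the level dip, though I haven't verified it against this exact case.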