I have two .OGG files of similar size, FPS and duration. My goal is to combine them into a side-by-side presentation using FFMPEG. To this end I've tried the following cmd:
ffmpeg -i subject.ogg -vf "[in]pad=3*iw:3*ih[left];movie=clinician.ogg[right];[left] [right]overlay=100:0[out]" combined.ogg
Suffice it to say that the resulting video is unplayable. During the combination process FFMPEG prints lots of errors that read like:
[Parsed_overlay_2 @ 0x1eb7d3e0] Buffer queue overflow, dropping
What is this telling me?
Note:
both source files are playable
I padded the 'output' to be rather large in an attempt to understand the params
the placement of the 2nd video at 100:0 is arbitrary. Once I get the cmd working I'll move it to a better location in the output.
both videos began life as .FLV recorded from web cameras. I converted them to .ogg as FFMPEG didn't want to combine two .FLV files. If there is a better route to this, please let me know.
So - what's wrong with my parameters and what am I doing to cause these FFMPEG errors?
EDIT:
ffmpeg -i clinician.ogg
Input #0, ogg, from 'clinician.ogg':
Duration: 00:05:20.98, start: 0.001000, bitrate: 2273 kb/s
Stream #0:0: Video: theora, yuv420p, 500x500 [SAR 1:1 DAR 1:1], 1k tbr, 1k tbn, 1k tbc
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Stream #0:1: Audio: vorbis, 8000 Hz, stereo, s16
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
ffmpeg -i subject.ogg
Input #0, ogg, from 'subject.ogg':
Duration: 00:05:17.60, start: 0.001000, bitrate: 1341 kb/s
Stream #0:0: Video: theora, yuv420p, 300x300 [SAR 1:1 DAR 1:1], 83.33 tbr, 1k tbn, 1k tbc
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Stream #0:1: Audio: vorbis, 8000 Hz, stereo, s16
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Converting to x264 was a great suggestion. That seemed to turn the tide.
Here are some notes for posterity:
To convert FLV to x264 and correct audio sync issues:
ffmpeg -y -i subject_s_2242_r_1658.flv -async 1 -ac 2 -strict -2 -acodec vorbis \
-c:v libx264 -preset slow -crf 22 subject.mkv
To merge two x264 files into a single side-by-side file and put the two mono audio tracks into stereo in the resulting file:
ffmpeg -y -i clinician.mkv -vf "movie=subject.mkv[right];pad=iw*2:ih:0:0[left];[left][right]overlay=500:0" \
-filter_complex "amovie=clinician.mkv[l];amovie=subject.mkv[r];[l][r] amerge" final.mkv
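One more note for posterity: on newer ffmpeg builds the same merge can be done with a single -filter_complex over two normal inputs, which avoids the movie/amovie source filters entirely (a likely contributor to the buffer-queue errors). This is a sketch, not a tested command from the thread; the file names and the 500-pixel height come from the probes above, and the scale step assumes the 300x300 subject video should be stretched to match the clinician video's height:

```shell
#!/bin/sh
# Sketch only: build the single-command merge. The filtergraph scales the
# 300x300 subject video to the clinician video's 500px height (sizes taken
# from the probes above), stacks the two side by side with hstack, and
# merges the two audio tracks with amerge.
VF='[1:v]scale=-1:500[r];[0:v][r]hstack=inputs=2[v]'
AF='[0:a][1:a]amerge=inputs=2[a]'
CMD="ffmpeg -i clinician.mkv -i subject.mkv -filter_complex \"$VF;$AF\" -map [v] -map [a] -ac 2 combined.mkv"
# echo instead of executing, so the sketch runs without ffmpeg installed:
echo "$CMD"
```

hstack requires both inputs to have the same height, which is why the scale step comes first.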
I was unable to install AVISYNTH (running on CentOS 6.2) but it does look like a great solution.
It is probably easiest to do this using Avisynth.
Make the following input.avs file:
a = AviSource("first.avi")
b = AviSource("second.avi")
StackHorizontal(a,b)
Then run ffmpeg -i input.avs output.avi ... plus any other options you want.
EDIT: Another way to do it (not fast) is to dump the frames from both files to png and combine them with ImageMagick (for example montage) or similar image processing tools.
#!/bin/bash
ffmpeg -i first.avi first_%05d.png
ffmpeg -i second.avi second_%05d.png
for file in first_*.png ; do montage ${file} ${file/first/second} ${file/first/output} ; done
ffmpeg -i output_%05d.png output.avi
This actually lets you do a lot more image processing than just side-by-side; you can do arbitrary scale/overlay/background/etc. The problem is that the N-th frame from one file may not be at exactly the same time as the N-th frame from the other file if they are variable frame rate; this is something that AviSynth handles perfectly for you. If the clips are constant frame rate, that is not a problem.
Combining clips by making a new clip containing both like this (whether through avisynth or not) requires recompressing the video, and reduces video quality/increases file size.
I am not sure how to read ogg files into Avisynth, but there is probably a way. Check the FAQ on input formats.
Side comment: The choice of theora/ogg is strange. Better: H.264 in mp4 container.
I have a WebM file that I was trying to convert to MP4 using ffmpeg, but it failed to create the MP4. The info about the file is as follows.
ffmpeg -i 54ebe077-96fc-4ace-9a38-f13c58807322.webm -hide_banner
Input #0, matroska,webm, from '54ebe077-96fc-4ace-9a38-f13c58807322.webm':
Metadata:
encoder : Lavf56.40.101
creation_time : 2019-10-22T11:19:12.000000Z
Duration: 00:00:24.16, start: 0.000000, bitrate: 41 kb/s
Stream #0:0: Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
Stream #0:1: Audio: opus, 48000 Hz, mono, fltp (default)
At least one output file must be specified
I tried to convert it using the following command:
ffmpeg -i 54ebe077-96fc-4ace-9a38-f13c58807322.webm -qscale 0 out.mp4
It throws errors
[opus @ 0x56489c7f9840] LBRR frames is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[opus @ 0x56489c7f9840] Error decoding a SILK frame.
[opus @ 0x56489c7f9840] Error decoding an Opus frame.
Too many packets buffered for output stream 0:1.
[aac @ 0x56489c82d640] Qavg: 59180.625
[aac @ 0x56489c82d640] 2 frames left in the queue on closing
Conversion failed!
How can I fix this issue? I have played the file in VLC and can hear the sound from the source file, but the conversion fails.
Your ffmpeg is too old
Update your ffmpeg:
Download an already compiled ffmpeg
Or see compile instructions at FFmpeg Wiki
This was ticket #4641: Error decoding SILK frame. The fix is newer than the most current release branch (FFmpeg 4.3 as of writing this), so you have to get a build from the git master branch (either of the links above will do), or wait for FFmpeg 4.4.
If you can't update
If you can't update your ffmpeg, the old workaround is to use libopus to decode:
ffmpeg -c:a libopus -i input ...
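If you're unsure whether an installed build already contains the fix, here is a rough sketch of a version check. It is pure shell under stated assumptions: release builds report a plain major.minor version (git master builds report strings like N-99999-g... and would need separate handling), and 4.4 is the first release expected to carry the fix, per the ticket above:

```shell
#!/bin/sh
# Sketch: compare "major.minor" version strings; exits 0 if the first
# argument is >= the second. Assumes plain numeric release versions.
version_ge() {
    cur_major=${1%%.*}
    cur_rest=${1#*.}; cur_minor=${cur_rest%%.*}
    req_major=${2%%.*}
    req_rest=${2#*.}; req_minor=${req_rest%%.*}
    [ "$cur_major" -gt "$req_major" ] ||
        { [ "$cur_major" -eq "$req_major" ] && [ "$cur_minor" -ge "$req_minor" ]; }
}
# Real usage would parse the first line of: ffmpeg -version
if version_ge "4.3" "4.4"; then
    echo "fix present"
else
    echo "update needed (get a git master build)"   # this branch runs for 4.3
fi
```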
I am looking to encode a 4k video shot with iPhone 6s in VP9 in the best quality possible.
For reference, stream data of the video I would like to encode, via ffprobe:
Duration: 00:00:10.48, start: 0.000000, bitrate: 46047 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 3840x2160, 45959 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
Metadata:
creation_time : 2017-03-13T21:12:56.000000Z
handler_name : Core Media Data Handler
encoder : H.264
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 79 kb/s (default)
Metadata:
creation_time : 2017-03-13T21:12:56.000000Z
handler_name : Core Media Data Handler
I am using the following FFmpeg commands, based on these instructions (see Best Quality (Slowest) Recommended Settings section).
ffmpeg -i INPUT.mov -c:v libvpx-vp9 -pass 1 -b:v 46000K -threads 4 -speed 4 -g 9999 -an -f webm -y /dev/null
ffmpeg -i INPUT.mov -c:v libvpx-vp9 -pass 2 -b:v 46000K -threads 4 -speed 0 -g 9999 -an -f webm OUTPUT.webm
Is there a best practice to select an optimal -b:v value such that the resulting video is visually indistinguishable from the original? I have tried values ranging from 36000K-46000K, but these result in massive files with an overall bitrate exceeding the target bitrate.
Thanks in advance!
You just have to experiment with different, much lower bit rates and view the results. I watch for artifacts: does hair still look good? Cloth? Lettering, like on road signs and store windows? No blockiness? No bleeding of dark and light at sharp edges? No echoes? I find motion blur in the original hard to judge; I have to compare side by side to tell the difference between that and compression artifacts.
Try 1/10th of 36000k. I find vp9 at a nominal 400k bit rate works great on 1280x720 video. (ffmpeg with libvpx-vp9 overshoots, and I typically end up with a 20% higher actual bit rate, 480k) 4K is 3840x2160, 9x the size of 1280x720, so it would seem a 3600k bit rate should produce good results.
Another guide is that VP9 is reportedly about equal in quality to H.264 at half the bit rate. Video that looks good at a 1000k bit rate in H.264 should look good at 500k in VP9.
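The pixel-count scaling rule above can be written out as a quick calculation; the 400k reference figure for 1280x720 is the one quoted in this answer:

```shell
#!/bin/sh
# Scale a known-good bit rate by pixel count:
#   target_rate = reference_rate * (target_pixels / reference_pixels)
REF_RATE_K=400                  # nominal VP9 rate that looks good at 1280x720
REF_PIXELS=$((1280 * 720))      # 921600
TARGET_PIXELS=$((3840 * 2160))  # 8294400, i.e. 9x the pixels
TARGET_RATE_K=$(( REF_RATE_K * TARGET_PIXELS / REF_PIXELS ))
echo "${TARGET_RATE_K}k"        # prints 3600k - a starting point for -b:v
```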
Want to batch convert a bunch of different video files from the CLI instead of Roland's old and slow drag-and-drop-one-file-at-a-time software. I have used ffprobe in OS X Terminal here. This shows what the software did to the file, and I want to do the same. MJPEG AVI I get, but for the rest, how would my ffmpeg syntax look to achieve this result after converting?
Example: My ffprobe give me this
Input #0, avi, from 'P10_0001.AVI':
Metadata:
comment :
encoder : Roland Corporation
Duration: 00:03:17.64, start: 0.000000, bitrate: 16694 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 640x480, 15285 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
What would the ffmpeg syntax look like to do this with a new file.
I've been trying some simple ones but those are not accepted by the machine (Edirol p-10) and I hope someone can point me in the right direction. :)
Edit:
OK. The command I want involves 3 files:
A file that has the correct codec and everything needed to work with the machine: P10_0001.AVI
A file that does not have the correct format (codec etc.): softvision.mpg
A new file, just like file 2 but with the codec of file 1: P10_0002.AVI
ffmpeg -i gradomat.mpg -framerate 25 -vf scale=640:480 -vcodec mjpeg -pix_fmt yuvj422p -b:v 15285k -b:a 1411k -acodec pcm_s16le -ar 44100 -ac 2 -metadata encoder="Roland Corporation" P10_000X.AVI
I think this solved it temporarily, but the problem is that I have to write that myself; it would have been better if ffprobe gave me that syntax instead.
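For the batch-conversion goal, the per-file command above can be wrapped in a small loop. This is a sketch under assumptions: the inputs are *.mpg files in the current directory, the outputs continue the P10_XXXX.AVI numbering starting at 2 (since P10_0001.AVI already exists), and echo is left in so it prints the commands instead of running ffmpeg:

```shell
#!/bin/sh
# Sketch: batch-convert every .mpg in the current directory using the
# settings worked out above. Remove "echo" to actually run ffmpeg.
n=2
for f in *.mpg; do
    out=$(printf 'P10_%04d.AVI' "$n")
    echo ffmpeg -i "$f" -framerate 25 -vf scale=640:480 -vcodec mjpeg \
        -pix_fmt yuvj422p -b:v 15285k -acodec pcm_s16le -ar 44100 -ac 2 \
        -metadata encoder="Roland Corporation" "$out"
    n=$((n + 1))
done
```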
This is also a solution, but in python.
https://github.com/cskonopka/rolandp10fp
I am trying to convert some different video formats to FLV using ffmpeg, but it seems that only some videos go through.
ffmpeg -i /var/www/tmp/91640.avi -ar 22050 -ab 32 -f flv /var/www/videos/91640.flv
here is some debug info:
Seems stream 0 codec frame rate differs from container frame rate: 23.98 (65535/2733) -> 23.98 (5000000/208541)
Input #0, avi, from '/var/www/tmp/91640.avi':
Duration: 00:01:12.82, start: 0.000000, bitrate: 5022 kb/s
Stream #0.0: Video: mpeg4, yuv420p, 1280x528 [PAR 1:1 DAR 80:33], 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Output #0, flv, to '/var/www/videos/91640.flv':
Stream #0.0: Video: flv, yuv420p, 1280x528 [PAR 1:1 DAR 80:33], q=2-31, 200 kb/s, 90k tbn, 23.98 tbc
Stream #0.1: Audio: adpcm_swf, 22050 Hz, 5.1, s16, 0 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Error while opening codec for output stream #0.1 - maybe incorrect parameters such as bit_rate, rate, width or height
Also, if I try to grab one frame and convert it to JPEG, I get an error as well:
ffmpeg -i /var/www/tmp/91640.avi -an -ss 00:00:03 -t 00:00:01 -r 1 -y /var/www/videos/91640.jpg
debug info
...
[mpeg4 @ 0x1d7d810]Invalid and inefficient vfw-avi packed B frames detected
av_interleaved_write_frame(): I/O error occurred
Usually that means that input file is truncated and/or corrupted.
I'm thinking that the image grab fails because the video conversion failed in the first place, but I'm not sure.
Any ideas what goes wrong?
Bits, not kbits
From your console output:
WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Use 32k, not just 32.
Only stereo or mono is supported
The encoder adpcm_swf only supports mono or stereo, so add -ac 2 as an output option. The console output would have suggested this if you were using a recent ffmpeg build.
Use -vframes 1 for single image outputs
Instead of -t 00:00:01 -r 1 use -vframes 1.
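Putting the first three fixes together, the question's two commands would become something like this sketch (paths taken from the question; the commands are echoed rather than executed):

```shell
#!/bin/sh
# Sketch: the question's commands with the three fixes above applied
# (32k not 32, -ac 2 for adpcm_swf, -vframes 1 for the thumbnail).
CONVERT="ffmpeg -i /var/www/tmp/91640.avi -ar 22050 -ab 32k -ac 2 -f flv /var/www/videos/91640.flv"
THUMB="ffmpeg -i /var/www/tmp/91640.avi -an -ss 00:00:03 -vframes 1 -y /var/www/videos/91640.jpg"
echo "$CONVERT"
echo "$THUMB"
```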
A better encoder
Instead of using the encoders flv and adpcm_swf, I recommend libx264 and libmp3lame:
ffmpeg -i input -vcodec libx264 -preset medium -crf 23 -acodec libmp3lame -ar 44100 -q:a 5 output.flv
-preset – Controls the encoding speed to compression ratio. Use the slowest preset you have patience for: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow.
-crf – Constant Rate Factor. A lower value is a higher quality. Range is 0-51 for this encoder. 0 is lossless, 18 is roughly "visually lossless", 23 is default, and 51 is worst quality. Use the highest value that still gives an acceptable quality.
-q:a – Audio quality for libmp3lame. Range is 0-9 for this encoder. A lower value is a higher quality.
Also see
FFmpeg and x264 Encoding Guide
Encoding VBR (Variable Bit Rate) mp3 audio
I have checked the FFMpeg documentation and many forums and figured out the correct command-line to extract subtitles from an .MP4 video should look like so:
ffmpeg -i video.mp4 -vn -an -codec:s:0 srt out.srt
However, I get the following error, which lends me to question whether this is feasible at all:
Error while opening encoder for output stream #0:0 - maybe incorrect parameters
such as bit_rate, rate, width or height
Using ffmpeg -codecs, I can confirm that ffmpeg should be able to encode subrip subtitles.
Using ffmpeg -i video.mp4, I can see that there are two subtitle tracks embedded in the video:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
...
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 720x572 [SAR 64:45 DAR 256:143], 1341 kb/s, 25 fps, 25 tbr, 90k tbn, 180k tbc
Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 191 kb/s
Stream #0:2(fra): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 191 kb/s
Stream #0:3(eng): Subtitle: dvd_subtitle (mp4s / 0x7334706D)
Stream #0:4(und): Subtitle: mov_text (text / 0x74786574)
EDIT
I have tested with the simplified command-line shown in the comments but I still get the same error. Here is a link to the detailed verbose output from running the command. I have also tried to completely disable metadata and chapters in the resulting output but that still produces the same error.
I eventually figured out why I did not succeed:
The specified command-line would have been perfectly fine if the subtitles from the source video were encoded in a text-based representation. However, as can be seen in the output to the ffmpeg -i command-line, the subtitles are encoded in the "dvd_subtitle" format.
The dvd_subtitle format stores bitmaps for each subtitle in the video. Therefore, there is no way ffmpeg would be able to translate the bitmaps into text.
For this task, one has to resort to OCR-based software which assists the user in identifying each subtitle as text from its bitmap representation.
(There is a secondary text-based subtitle in the source video, but I don't know where it came from and it is not seen by most popular players. For all intents and purposes, this "mov_text" subtitle seems to be a stub placeholder, probably an artifact of the conversion from the original DVD.)
Just FYI (can't comment due to rep yet), but extracting SRT from an MP4 will result in a file formatted as mov_text, not regular SRT. It will still get added and work, but it's like changing mp4 to m4v: while it usually works, things don't behave the same. mov_text is horrible for manually adjusting font/size etc. Your best bet is to download and test an SRT from the web!
This will work, but will result in a mov_text-coded srt file:
ffmpeg -i in.mp4 out.srt
Try using the map option if there are too many streams in the input file. The syntax would be:
ffmpeg -i video.mp4 -map 0:4 out.srt
Since there are two subtitle streams in your video, the first at 0:3 is a dvd_subtitle and cannot be converted to srt, so we convert the second subtitle stream at 0:4, which is mov_text, a text-based ("soft") subtitle that can easily be converted.
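If you'd rather not hard-code 0:4, here is a rough sketch of pulling the mov_text stream specifier out of the stream listing. The here-doc stands in for the output of `ffmpeg -i video.mp4 2>&1` (the two lines are copied from the question); a real script would pipe that in, or better, use ffprobe:

```shell
#!/bin/sh
# Sketch: find the mov_text subtitle stream specifier in an ffmpeg -i
# listing, so the -map argument doesn't have to be hard-coded.
listing() {
cat <<'EOF'
    Stream #0:3(eng): Subtitle: dvd_subtitle (mp4s / 0x7334706D)
    Stream #0:4(und): Subtitle: mov_text (text / 0x74786574)
EOF
}
# Keep only the mov_text line, then cut out the "N:M" after the '#':
MAP=$(listing | grep 'Subtitle: mov_text' | sed 's/.*#\([0-9]*:[0-9]*\).*/\1/')
echo "ffmpeg -i video.mp4 -map $MAP out.srt"   # prints the command with -map 0:4
```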