I have a WebM audio file that I was trying to convert to MP4 using ffmpeg, but it failed to create the MP4. The info about the file is as follows:
ffmpeg -i 54ebe077-96fc-4ace-9a38-f13c58807322.webm -hide_banner
Input #0, matroska,webm, from '54ebe077-96fc-4ace-9a38-f13c58807322.webm':
Metadata:
encoder : Lavf56.40.101
creation_time : 2019-10-22T11:19:12.000000Z
Duration: 00:00:24.16, start: 0.000000, bitrate: 41 kb/s
Stream #0:0: Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
Stream #0:1: Audio: opus, 48000 Hz, mono, fltp (default)
At least one output file must be specified
I tried to convert it using the following command:
ffmpeg -i 54ebe077-96fc-4ace-9a38-f13c58807322.webm -qscale 0 out.mp4
It throws these errors:
[opus @ 0x56489c7f9840] LBRR frames is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[opus @ 0x56489c7f9840] Error decoding a SILK frame.
[opus @ 0x56489c7f9840] Error decoding an Opus frame.
Too many packets buffered for output stream 0:1.
[aac @ 0x56489c82d640] Qavg: 59180.625
[aac @ 0x56489c82d640] 2 frames left in the queue on closing
Conversion failed!
How do I fix this issue? I can play the file in VLC and hear the sound from the source file, but the conversion fails.
Your ffmpeg is too old
Update your ffmpeg:
Download an already compiled ffmpeg
Or see compile instructions at FFmpeg Wiki
This was ticket #4641: Error decoding SILK frame. The fix is newer than the most current release branch (FFmpeg 4.3 as of writing this), so you have to get a build from the git master branch (either of the links above will do), or wait for FFmpeg 4.4.
If you can't update
If you can't update your ffmpeg, the old workaround is to use libopus to decode:
ffmpeg -c:a libopus -i input ...
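Applied to the file from the question, that might look like the following (a sketch; the libx264/AAC output codecs are my assumption, not part of the original answer). Note that -c:a libopus must come before -i so it acts as the decoder for the input rather than the encoder for the output:
ffmpeg -c:a libopus -i 54ebe077-96fc-4ace-9a38-f13c58807322.webm -c:v libx264 -c:a aac out.mp4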
I've been using this command to convert my public rtmp audio/video stream to a local mp3 audio icecast2 stream, but I have been unable to do the same for both video and audio.
[Audio Only] (This works fine)
ffmpeg -re -i rtmp://162.142.xx.xxx:xxx/stream -vn -codec:a libmp3lame -b:a 128k -f mp3 -content_type audio/mpeg icecast://source:password@192.168.1.xxx:80/live
I've tried to re-write in order to support video, but I keep hitting dead ends
[Audio & Video Attempt] (this does not work)
ffmpeg -re -i rtmp://162.142.xx.xxx:xxx/stream -codec:v -f mpeg4 -b:v -f mpeg4 -content_type video/mpeg4 icecast://source:password@192.168.1.xxx:80/live
When I run this command, it gives me the error below asking for a suitable format.
$ ffmpeg -re -i rtmp://162.142.xx.xxx:xxx/stream -codec:v -f mpeg4 -b:v -f mpeg4 -content_type video/mpeg4 icecast://source:password@192.168.1.xxx:80/live
[h264 @ 0x5598ffbb8980] co located POCs unavailable
[h264 @ 0x5598ffbb8980] mmco: unref short failure
Input #0, flv, from 'rtmp://162.142.xx.xxx:xxx/stream':
Metadata:
|RtmpSampleAccess: true
Server : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
displayWidth : 1280
displayHeight : 720
fps : 48
videokeyframe_frequency: 0
profile :
level :
Duration: 00:00:00.00, start: 28117.779000, bitrate: N/A
Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp, 327 kb/s
Stream #0:1: Video: h264 (High), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 2560 kb/s, 48 fps, 48 tbr, 1k tbn
[NULL @ 0x5598ffb8bec0] Unable to find a suitable output format for 'mpeg4'
mpeg4: Invalid argument
I am positive that icecast2 can support video streams; however, on the few occasions that I was able to stream to it successfully, it only showed an empty video embed.
I've rewritten the command for A/V multiple times while referencing the ffmpeg documentation, but my attempt above seems to be the closest (concept-wise) that I have gotten.
What flags/formatting might I be missing that are causing the stream not to work?
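For what it's worth, Icecast generally expects an Ogg or WebM container for video mounts, and the 'Unable to find a suitable output format' error above appears to come from -codec:v consuming -f as its argument, which leaves the bare word mpeg4 to be parsed as an output URL. A minimal sketch of a command shaped around those constraints (the Theora/Vorbis codecs and bitrates are my assumptions, not something from the original post):
ffmpeg -re -i rtmp://162.142.xx.xxx:xxx/stream -c:v libtheora -q:v 7 -c:a libvorbis -b:a 128k \
  -f ogg -content_type application/ogg icecast://source:password@192.168.1.xxx:80/live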
I want to batch convert a bunch of different video files from the CLI instead of using Roland's old, slow, drag-and-drop-one-file-at-a-time software. I have used ffprobe in the OS X Terminal here. This shows what the software did to the file, and I want to do the same. MJPEG AVI I get, but for the rest, what would my ffmpeg syntax look like to achieve this result after converting?
Example: My ffprobe gives me this:
Input #0, avi, from 'P10_0001.AVI':
Metadata:
comment :
encoder : Roland Corporation
Duration: 00:03:17.64, start: 0.000000, bitrate: 16694 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 640x480, 15285 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
What would the ffmpeg syntax look like to do this with a new file?
I've tried some simple ones, but those are not accepted by the machine (Edirol P-10), and I hope someone can point me in the right direction. :)
Edit:
OK. The syntax I want involves three files:
A file that has the correct codec and everything to work with the machine: P10_0001.AVI
A file that does not have the correct format (codec etc.): softvision.mpg
A new file just like file 2 but with the codec of file number 1: P10_0002.AVI
ffmpeg -i gradomat.mpg -framerate 25 -vf scale=640:480 -vcodec mjpeg -pix_fmt yuvj422p -b:v 15285k -b:a 1411k -acodec pcm_s16le -ar 44100 -ac 2 -metadata encoder="Roland Corporation" P10_000X.AVI
I think this solved it temporarily, but the problem is that I have to write that myself; it would have been better if ffprobe gave me that syntax instead.
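To batch convert a whole folder from the shell, a loop around that same command should do it (a sketch; the *.mpg glob, the output naming, and the use of -r 25 for the output frame rate are my assumptions):
# convert every .mpg in the current directory to the Edirol P-10's MJPEG/PCM AVI format
for f in *.mpg; do
  ffmpeg -i "$f" -r 25 -vf scale=640:480 -c:v mjpeg -pix_fmt yuvj422p -b:v 15285k \
    -c:a pcm_s16le -ar 44100 -ac 2 -metadata encoder="Roland Corporation" "${f%.mpg}.AVI"
done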
This is also a solution, but in Python:
https://github.com/cskonopka/rolandp10fp
I am trying to convert some different video formats to FLV using ffmpeg, but it seems that only some videos go through.
ffmpeg -i /var/www/tmp/91640.avi -ar 22050 -ab 32 -f flv /var/www/videos/91640.flv
Here is some debug info:
Seems stream 0 codec frame rate differs from container frame rate: 23.98 (65535/2733) -> 23.98 (5000000/208541)
Input #0, avi, from '/var/www/tmp/91640.avi':
Duration: 00:01:12.82, start: 0.000000, bitrate: 5022 kb/s
Stream #0.0: Video: mpeg4, yuv420p, 1280x528 [PAR 1:1 DAR 80:33], 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Output #0, flv, to '/var/www/videos/91640.flv':
Stream #0.0: Video: flv, yuv420p, 1280x528 [PAR 1:1 DAR 80:33], q=2-31, 200 kb/s, 90k tbn, 23.98 tbc
Stream #0.1: Audio: adpcm_swf, 22050 Hz, 5.1, s16, 0 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Error while opening codec for output stream #0.1 - maybe incorrect parameters such as bit_rate, rate, width or height
Also, if I try to grab one frame and convert it to JPEG, I get an error as well:
ffmpeg -i /var/www/tmp/91640.avi -an -ss 00:00:03 -t 00:00:01 -r 1 -y /var/www/videos/91640.jpg
debug info
...
[mpeg4 @ 0x1d7d810]Invalid and inefficient vfw-avi packed B frames detected
av_interleaved_write_frame(): I/O error occurred
Usually that means that input file is truncated and/or corrupted.
I'm thinking that the image grab fails because the video conversion failed in the first place, but I'm not sure.
Any ideas what goes wrong?
Bits, not kbits
From your console output:
WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Use 32k, not just 32.
Only stereo or mono is supported
The encoder adpcm_swf only supports mono or stereo, so add -ac 2 as an output option. The console output would have suggested this if you were using a recent ffmpeg build.
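Putting those two fixes together, the original command might become something like this (a sketch; -b:a is simply the newer spelling of -ab):
ffmpeg -i /var/www/tmp/91640.avi -ar 22050 -b:a 32k -ac 2 -f flv /var/www/videos/91640.flv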
Use -vframes 1 for single image outputs
Instead of -t 00:00:01 -r 1 use -vframes 1.
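Applied to the command from the question, that would be roughly (a sketch reusing the same paths):
ffmpeg -i /var/www/tmp/91640.avi -an -ss 00:00:03 -vframes 1 -y /var/www/videos/91640.jpg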
A better encoder
Instead of using the encoders flv and adpcm_swf, I recommend libx264 and libmp3lame:
ffmpeg -i input -vcodec libx264 -preset medium -crf 23 -acodec libmp3lame -ar 44100 -q:a 5 output.flv
-preset – Controls the encoding speed to compression ratio. Use the slowest preset you have patience for: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow.
-crf – Constant Rate Factor. A lower value is a higher quality. Range is 0-51 for this encoder. 0 is lossless, 18 is roughly "visually lossless", 23 is default, and 51 is worst quality. Use the highest value that still gives an acceptable quality.
-q:a – Audio quality for libmp3lame. Range is 0-9 for this encoder. A lower value is a higher quality.
Also see
FFmpeg and x264 Encoding Guide
Encoding VBR (Variable Bit Rate) mp3 audio
My video file shows the metadata below with ffprobe/ffmpeg:
Duration: 00:44:27.52, start: 1333.760000, bitrate: 335 kb/s
Stream #0.0(und): Video: h264 (Main), yuv420p, 640x480, 25 tbr, 90k tbn, 50 tbc
Note: The file does not contain audio.
I am trying to convert this video file to another video file using ffmpeg/avconv.
This works (but re-encodes the H.264 video to MPEG-4):
ffmpeg -i input.mp4 output.mp4
and it generates an output file of the proper duration (44:27 minus the 1333-second start offset = 22:14).
This does not work:
ffmpeg -i input.mp4 -vcodec copy output.mp4
It generates a file without video.
The console output is:
$ avconv -i input.mp4 -vcodec copy output.mp4
avconv version 0.8.9-6:0.8.9-0ubuntu0.13.10.1, Copyright (c) 2000-2013 the Libav developers
built on Nov 9 2013 19:09:46 with gcc 4.8.1
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
Metadata:
major_brand : dash
minor_version : 0
compatible_brands: iso6avc1mp41
creation_time : 2014-01-19 22:43:21
Duration: 00:44:27.52, start: 1333.760000, bitrate: 335 kb/s
Stream #0.0(und): Video: h264 (Main), yuv420p, 640x480, 25 tbr, 90k tbn, 50 tbc
Metadata:
creation_time : 2014-01-19 22:43:21
Output #0, mp4, to 'output.mp4':
Metadata:
major_brand : dash
minor_version : 0
compatible_brands: iso6avc1mp41
creation_time : 2014-01-19 22:43:21
encoder : Lavf53.21.1
Stream #0.0(und): Video: ![0][0][0] / 0x0021, yuv420p, 640x480, q=2-31, 90k tbn, 90k tbc
Metadata:
creation_time : 2014-01-19 22:43:21
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press ctrl-c to stop encoding
frame= 0 fps= 0 q=-1.0 Lsize= 0kB time=10000000000.00 bitrate= 0.0kbits/s
video:0kB audio:0kB global headers:0kB muxing overhead inf%
FFmpeg development is very active
When experiencing an issue, it is best to get a new build of ffmpeg from the FFmpeg project to ensure that you are not encountering a bug that has already been fixed.
Ubuntu uses a fork
Ubuntu does not use ffmpeg from FFmpeg, but an old, fake version from a fork. See Who can tell me the difference and relation between ffmpeg, libav, and avconv?
Get ffmpeg
You can:
Simply download a build of ffmpeg, or
Follow a step-by-step guide to compile ffmpeg.
Using the build
The build is easy. You just download, extract, and execute (notice the ./ before ffmpeg):
wget http://ffmpeg.gusari.org/static/32bit/ffmpeg.static.32bit.$(date +"%F").tar.gz
tar xzvf ffmpeg.static.32bit.$(date +"%F").tar.gz
./ffmpeg -i input -codec copy -map 0 output
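With a current build, the stream copy from the question should then work unchanged, e.g. (a minimal sketch using the file names from the question):
./ffmpeg -i input.mp4 -map 0 -c copy output.mp4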
Compiling
Compiling ffmpeg allows you to customize it how you like. The compile guide is non-invasive and easy to undo.
Reporting a bug
If a recent build still has the suspected bug then you can get help at the ffmpeg-user mailing list, or perform a search at the FFmpeg Bug Tracker and report it if it is a new bug. If you report the bug make sure to:
Check that you are using a recent build.
Provide the complete ffmpeg command and the complete ffmpeg console output.
Provide all necessary samples.
Use the minimal command that still shows the issue.
Provide any additional information that is useful for others who will attempt to duplicate the issue.
I have two .ogg files of similar size, FPS, and duration. My goal is to combine them into a side-by-side presentation using FFmpeg. To this end I've tried the following command:
ffmpeg -i subject.ogg -vf "[in]pad=3*iw:3*ih[left];movie=clinician.ogg[right];[left] [right]overlay=100:0[out]" combined.ogg
Suffice it to say that the resulting video is not playable. During the combination process, FFmpeg prints lots of errors that read like:
[Parsed_overlay_2 @ 0x1eb7d3e0] Buffer queue overflow, dropping
What is this telling me?
Note:
Both source files are playable.
I padded the 'output' to be rather large in an attempt to understand the params.
The placement of the second video at 100:0 is arbitrary. Once I get the command working I'll move it to a better location in the output.
Both videos began life as .flv files recorded from web cameras. I converted them to .ogg as FFmpeg didn't want to combine two .flv files. If there is a better route to this, please let me know.
So, what's wrong with my parameters, and what am I doing to cause these FFmpeg errors?
EDIT:
ffmpeg -i clinician.ogg
Input #0, ogg, from 'clinician.ogg':
Duration: 00:05:20.98, start: 0.001000, bitrate: 2273 kb/s
Stream #0:0: Video: theora, yuv420p, 500x500 [SAR 1:1 DAR 1:1], 1k tbr, 1k tbn, 1k tbc
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Stream #0:1: Audio: vorbis, 8000 Hz, stereo, s16
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
ffmpeg -i subject.ogg
Input #0, ogg, from 'subject.ogg':
Duration: 00:05:17.60, start: 0.001000, bitrate: 1341 kb/s
Stream #0:0: Video: theora, yuv420p, 300x300 [SAR 1:1 DAR 1:1], 83.33 tbr, 1k tbn, 1k tbc
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Stream #0:1: Audio: vorbis, 8000 Hz, stereo, s16
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Converting to x264 was a great suggestion. That seemed to turn the tide.
Here are some notes for posterity:
To convert FLV to x264 and correct audio sync issues:
ffmpeg -y -i subject_s_2242_r_1658.flv -async 1 -ac 2 -strict -2 -acodec vorbis \
-c:v libx264 -preset slow -crf 22 subject.mkv
To merge two x264 files into a single side-by-side file and put the two mono audio tracks into stereo in the resulting file:
ffmpeg -y -i clinician.mkv -vf "movie=subject.mkv[right];pad=iw*2:ih:0:0[left];[left][right]overlay=500:0" \
-filter_complex "amovie=clinician.mkv[l];amovie=subject.mkv[r];[l][r] amerge" final.mkv
I was unable to install AVISYNTH (running on CentOS 6.2) but it does look like a great solution.
It is probably easiest to do this using Avisynth.
Make the following input.avs file:
a = AviSource("first.avi")
b = AviSource("second.avi")
StackHorizontal(a,b)
Then run ffmpeg -i input.avs output.avi ... plus any other options you want.
EDIT: Another way to do it (not fast) is to dump the frames from both files to png and combine them with ImageMagick (for example montage) or similar image processing tools.
#!/bin/bash
ffmpeg -i first.avi first_%05d.png
ffmpeg -i second.avi second_%05d.png
for file in first_*.png ; do montage ${file} ${file/first/second} -geometry +0+0 ${file/first/output} ; done
ffmpeg -i output_%05d.png output.avi
This actually lets you do a lot more image processing than just side-by-side; you can do arbitrary scale/overlay/background/etc. The problem is that the N-th frame from one file may not be at exactly the same time as the N-th frame from the other file if they are variable frame rate; this is something that AviSynth handles perfectly for you. If the clips are constant frame rate, that is not a problem.
Combining clips by making a new clip containing both like this (whether through avisynth or not) requires recompressing the video, and reduces video quality/increases file size.
I am not sure how to read ogg files into Avisynth, but there is probably a way. Check the FAQ on input formats.
Side comment: The choice of theora/ogg is strange. Better: H.264 in mp4 container.
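For what it's worth, a recent FFmpeg can also build the side-by-side clip in one pass with the hstack filter and encode straight to H.264/MP4 as suggested above (a sketch, not from the original answer; scaling the smaller clip to match heights and downmixing the merged audio to stereo are my assumptions):
ffmpeg -i clinician.ogg -i subject.ogg \
  -filter_complex "[1:v]scale=-2:500[r];[0:v][r]hstack[v];[0:a][1:a]amerge=inputs=2[a]" \
  -map "[v]" -map "[a]" -ac 2 -c:v libx264 -crf 22 -c:a aac combined.mp4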