Does WebM support cover art? - ffmpeg

I am converting MP3 to WebM, and the MP3 file includes a video stream for the cover art.
ffprobe filename.mp3
...
Stream #0:0: Audio: mp3, 22050 Hz, stereo, fltp, 64 kb/s
Stream #0:1: Video: mjpeg (Baseline), yuvj444p(pc, bt470bg/unknown/unknown), 300x300, 90k tbr, 90k tbn, 90k tbc (attached pic)
Using ffmpeg with the libopus codec to convert the file produces a VP9 video stream that doesn't work well. I noticed:
VLC Player doesn't show the duration, and the progress scrubber doesn't move during playback.
Android Media Player doesn't show the cover art image for the track.
ffprobe filename.webm
...
Input #0, matroska,webm, from 'webm_bad/B01___01_Matthew_____ENGWEBN2DA.webm':
...
Stream #0:0: Video: vp9 (Profile 1), yuv444p(tv, progressive), 300x300, SAR 1:1 DAR 1:1, 1k tbr, 1k tbn, 1k tbc (default)
If I try to use the -vcodec copy option, I get this error:
[webm @ 0x7fdddf028e00] Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:1 --
Does WebM support cover art? If so, how do I transfer the MP3 cover art over using ffmpeg (or other tool)?

No, WebM does not support cover art.
From the FAQ:
The WebM file structure is based on the Matroska media container.
The cover art in a Matroska container is stored in an attachment:
Attachment Elements can be used to store related cover art, [...]
A WebM container does not support attachments:
Attachment

WebM Support | Element Name      | Description
Unsupported  | Attachments       | Contain attached files.
Unsupported  | AttachedFile      | An attached file.
Unsupported  | FileDescription   | A human-friendly name for the attached file.
Unsupported  | FileName          | Filename of the attached file.
Unsupported  | FileMimeType      | MIME type of the file.
Unsupported  | FileData          | The data of the file.
Unsupported  | FileUID           | Unique ID representing the file, as random as possible.
Unsupported  | FileReferral      | A binary value that a track/codec can refer to when the attachment is needed.
Unsupported  | FileUsedStartTime | DivX font extension
Unsupported  | FileUsedEndTime   | DivX font extension
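If plain Matroska output is acceptable instead of WebM, the cover can stay as a real attachment. A minimal sketch using ffmpeg's -attach option (the .mka name is just the usual convention for audio-only Matroska, and cover.jpeg is assumed to have been extracted beforehand, e.g. as shown further below):
ffmpeg -i filename.mp3 -map 0:a -c:a libopus -b:a 64k -attach cover.jpeg -metadata:s:t mimetype=image/jpeg filename.mka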
You could also consider using a different container. Opus audio streams, like the ones in a WebM container, are supported by other containers:
Opus was originally specified for encapsulation in Ogg containers
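For instance, a minimal sketch that keeps only the audio (cover art in Ogg lives in a METADATA_BLOCK_PICTURE tag, and support for writing it varies by ffmpeg version, so you may need a tool like opusenc --picture to re-embed the art afterwards):
ffmpeg -i filename.mp3 -vn -c:a libopus -b:a 64k filename.opus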
If you still want to use WebM, an alternative would be to create a video stream with a still image along with an audio stream. The FFmpeg wiki covers that topic in the Slideshow page. Combining that with this answer, which explains how to extract the cover art of an MP3 file, you could do the following:
ffmpeg -i filename.mp3 -an -c:v copy cover.jpeg
ffmpeg -loop 1 -i cover.jpeg -i filename.mp3 -c:v libvpx-vp9 -c:a libopus -b:a 64k -shortest filename.webm
64k matches the bitrate shown in your ffprobe output.
The encoding might be slow with the second command. The Encode/Youtube page in the FFmpeg wiki shows an example command to create a video with a still image that uses the -framerate 2 option, like this:
ffmpeg -loop 1 -framerate 2 -i cover.jpeg -i filename.mp3 -c:v libvpx-vp9 -c:a libopus -b:a 64k -shortest filename.webm
For some reason I do not know, the output video of that last command could not be played by my VLC; the player crashed. 6 was the minimum -framerate that did not crash my player, so be careful.

Related

How can I convert WebM file to WebP file with transparency?

I tried it with ffmpeg:
ffmpeg -i input.webm output.webp
input.webm has a transparent background, but the alpha channel becomes white in the WebP output. I think that means the alpha channel doesn't carry over.
I extracted frames with this command:
ffmpeg -i input.xxx -c:v libwebp output_%03d.webp
And it also gives me webp files with white background.
How can I convert it properly, with the alpha channel? Or should I convert from another format (extension)?
Use the -c:v libvpx option before the input to change the decoder like in this example for the first frame (-frames:v 1):
ffmpeg -c:v libvpx -i input.webm -frames:v 1 -c:v libwebp -y output.webp
This comment says that:
FFmpeg's native VPx decoders don't decode alpha. You have to use the libvpx decoder
You can check your decoders using ffmpeg -decoders | grep libvpx and you should see an output like this:
V....D libvpx libvpx VP8 (codec vp8)
V....D libvpx-vp9 libvpx VP9 (codec vp9)
According to that output, libvpx would be the decoder for VP8 and libvpx-vp9 for VP9.
You can check the codec of your video using ffprobe input.webm. You should see an output like this:
Stream #0:0(eng): Video: vp8, yuv420p(progressive), 640x360, SAR 1:1 DAR 16:9, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
alpha_mode : 1
For converting a whole webm (VP8) to an animated webp use:
ffmpeg -c:v libvpx -i input.webm output.webp
For converting a whole webm (VP9) to an animated webp use:
ffmpeg -c:v libvpx-vp9 -i input.webm output.webp
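Depending on your ffmpeg build, the WebP muxer also exposes a -loop option to control animation looping (0 meaning loop forever); a sketch for the VP9 case:
ffmpeg -c:v libvpx-vp9 -i input.webm -c:v libwebp -loop 0 output.webp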

Is there a function to add a title to multiple videos? FFmpeg

I'm getting started with FFmpeg to add a title video to a few dozen videos I have. What would be the proper command to do this?
Use the concat demuxer
There are several methods to join/merge/concatenate one video to another. This method uses the concat demuxer in ffmpeg to join the title video to the main video. Although there are several steps, it has the advantage that it does not re-encode the video you are adding a title to. So the process is quick and the quality is preserved.
Example
Check the attributes of the video you want to add a title to; in this example it is named main.mp4. When making the title video, you will need to ensure that it matches these attributes.
ffmpeg -i main.mp4
...
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 988 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Generate the title video. Make sure the title video matches the attributes of the main file so it can concatenate properly. This example uses the color and anullsrc source filters to make 5 seconds of black video and silent audio, and the drawtext filter to make text:
ffmpeg -f lavfi -i color=size=1280x720:rate=30000/1001:duration=5:color=black -f lavfi -i anullsrc=sample_rate=44100:channel_layout=stereo -vf "drawtext=text='your title':fontcolor=white:fontsize=48:x=(w-text_w)/2:y=(h-text_h)/2" -c:v libx264 -profile:v main -c:a aac -shortest title.mp4
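Before concatenating, it's worth confirming that the generated title really matches; a quick check with ffprobe (compare the printed codec, resolution, frame rate, and audio parameters against main.mp4):
ffprobe -v error -show_entries stream=codec_name,width,height,r_frame_rate,sample_rate,channel_layout title.mp4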
Make a text file named input.txt. This will be used by the concat demuxer and lists the files that you want to concatenate.
file 'title.mp4'
file 'main.mp4'
Finally, concatenate the title video to the main video with the concat demuxer:
ffmpeg -f concat -i input.txt -c copy output.mp4
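If the list uses absolute paths or unusual characters, the concat demuxer will reject them as "unsafe" unless you add -safe 0:
ffmpeg -f concat -safe 0 -i input.txt -c copy output.mp4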
Batch mode
ffmpeg does not have a batch mode to do this automatically for a folder of videos. However, it can be done with shell scripting; that is a whole new topic that deserves its own question, but see How do you convert an entire directory with ffmpeg? for some examples, and the sketch below.
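As a sketch of such a script (bash; it assumes every MP4 in the directory shares the attributes of title.mp4, and the out/ directory name is made up for the example):
#!/bin/bash
mkdir -p out
for f in *.mp4; do
  # Skip the title clip itself.
  [ "$f" = "title.mp4" ] && continue
  # Build a fresh concat list for this video.
  printf "file '%s'\nfile '%s'\n" title.mp4 "$f" > list.txt
  # Stream copy both inputs; no re-encoding, so it is fast and lossless.
  ffmpeg -f concat -i list.txt -c copy "out/$f"
done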

Is it possible to extract SubRip (SRT) subtitles from an MP4 video with ffmpeg?

I have checked the FFmpeg documentation and many forums and figured out that the correct command line to extract subtitles from an .MP4 video should look like this:
ffmpeg -i video.mp4 -vn -an -codec:s:0 srt out.srt
However, I get the following error, which leads me to question whether this is feasible at all:
Error while opening encoder for output stream #0:0 - maybe incorrect parameters
such as bit_rate, rate, width or height
Using ffmpeg -codecs, I can confirm that ffmpeg should be able to encode subrip subtitles.
Using ffmpeg -i video.mp4, I can see that there are two subtitle tracks embedded in the video:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
...
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 720x572 [SAR 64:45 DAR 256:143], 1341 kb/s, 25 fps, 25 tbr, 90k tbn, 180k tbc
Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 191 kb/s
Stream #0:2(fra): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 191 kb/s
Stream #0:3(eng): Subtitle: dvd_subtitle (mp4s / 0x7334706D)
Stream #0:4(und): Subtitle: mov_text (text / 0x74786574)
EDIT
I have tested with the simplified command-line shown in the comments but I still get the same error. Here is a link to the detailed verbose output from running the command. I have also tried to completely disable metadata and chapters in the resulting output but that still produces the same error.
I eventually figured out why I did not succeed:
The specified command line would have been perfectly fine if the subtitles in the source video were encoded in a text-based representation. However, as can be seen in the output of the ffmpeg -i command, the subtitles are encoded in the "dvd_subtitle" format.
The dvd_subtitle format stores bitmaps for each subtitle in the video. Therefore, there is no way ffmpeg would be able to translate the bitmaps into text.
For this task, one has to resort to OCR-based software that assists the user in identifying each subtitle as text from its bitmap representation.
(There is a secondary text-based subtitle in the source video, but I don't know where it came from, and it is not seen by most popular players. For all intents and purposes, this "mov_text" subtitle seems to be a stub placeholder, probably an artifact of the conversion from the original DVD.)
Just FYI (can't comment due to rep yet): extracting an SRT from an MP4 will result in a file formatted as mov_text, not regular SRT. It will still get added and work, but it's like changing mp4 to m4v: it usually works, yet things don't behave quite the same way. mov_text is horrible for manually adjusting font/size etc. Your best bet is to download and test an SRT from the web!
This will work, but it will result in a mov_text-coded SRT file:
ffmpeg -i in.mp4 out.srt
Try using the -map option if there are multiple streams in the input file. The syntax would be:
ffmpeg -i video.mp4 -map 0:4 out.srt
There are two subtitle streams in your video: the first, 0:3, is dvd_subtitle (bitmap-based), so it cannot be converted to SRT; the second, on stream 0:4, is mov_text, a text-based ("soft") subtitle, so it can be converted easily.
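Equivalently, you can address the stream by subtitle index rather than absolute index; a sketch where 0:s:1 means "the second subtitle stream of input 0", regardless of how many video or audio streams precede it:
ffmpeg -i video.mp4 -map 0:s:1 out.srt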

Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height

I am using this command to convert avi, mov, and m4v video files to flv format via FFmpeg:
/usr/local/bin/ffmpeg -i '/home/public_html/files/video_1355440448.m4v' -s '640x360' -sameq -ab '64k' -ar '44100' -f 'flv' -y /home/public_html/files/video_1355440448.flv
[flv @ 0x68b1a80] requested bitrate is too low
Output #0, flv, to '/home/files/1355472099-50cadce349290.flv':
Stream #0.0: Video: flv, yuv420p, 640x360, q=2-31, pass 2, 200 kb/s, 90k tbn, 25 tbc
Stream #0.1: Audio: adpcm_swf, 44100 Hz, 2 channels, s16, 64 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height
-------------------------------
RESULT
-------------------------------
Execute error. Output for file "/home/public_html/files/video_1355472099.avi" was found, but the file contained no data. Please check the available codecs compiled with FFmpeg can support this type of conversion. You can check the encode decode availability by inspecting the output array from PHPVideoToolkit::getFFmpegInfo().
But if I run the same command manually, it works:
/usr/local/bin/ffmpeg -i '/home/public_html/files/video_1355440448.m4v' -s '640x360' -sameq -ab '64k' -ar '44100' -f 'flv' -y /home/public_html/files/video_1355440448.flv
This is because you have two streams, and the output must be encoded and then resized; see your output messages:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
... you use adpcm_swf audio and yuv420p video
The answer is very simple: you need to set copy as your audio codec...
See my example with mpeg4/yuv420p video and ac3 audio...
ffmpeg -i input.mkv -vf scale=720:-1 -acodec copy -threads 12 output.mkv
This changes the width to 720; the -1 tells scale to choose a height that preserves the aspect ratio. You also need to use:
-acodec copy -threads 12
If you don't use this, you will get an error.
For example, when I used it, the output encoding messages showed me this, and it worked well:
[h264 @ 0x874e4a0] missing picture in access unit
Last message repeated 1163 times
size=5974kB time=53.47 bitrate= 915.3kbits/s
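Note that most encoders also require even dimensions; with scale you can use -2 instead of -1 to have the computed side rounded to an even value, as in this minimal sketch:
ffmpeg -i input.mkv -vf scale=720:-2 -acodec copy output.mkv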
For an flv output file, you need to use something like this:
ffmpeg -i input.mp4 -c:v libx264 -crf 19 output.flv
You are getting this error message:
[flv # 0x68b1a80] requested bitrate is too low
You need to change the bitrate to a valid value. It is better if you use a different audio codec:
-acodec libmp3lame
And remove the -sameq option. This option does NOT mean 'same quality'. It actually means 'same quantizers'!
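Putting those suggestions together, a sketch of the asker's command with a valid audio codec and an explicit video bitrate in place of -sameq (the 700k figure is an arbitrary assumption; tune it to your quality target):
ffmpeg -i video_1355440448.m4v -s 640x360 -b:v 700k -acodec libmp3lame -b:a 64k -ar 44100 -f flv -y video_1355440448.flv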
I had a similar problem due to size constraints. The original image size was odd (width=1343), meaning that when I tried to specify a new size with -s, any rounding error caused problems. Make sure the new image size keeps exactly the same aspect ratio!
I got the same issue (requested bitrate is too low) and resolved it by lowering the bitrate, adding -b:a 32k.

FFmpeg - errors when combining videos

I have two .OGG files of similar size, FPS and duration. My goal is to combine them into a side-by-side presentation using FFMPEG. To this end I've tried the following cmd:
ffmpeg -i subject.ogg -vf "[in]pad=3*iw:3*ih[left];movie=clinician.ogg[right];[left] [right]overlay=100:0[out]" combined.ogg
Suffice to say that the resultant video is non-playable. During the combination process FFMPEG prints lots of errors that read like:
[Parsed_overlay_2 @ 0x1eb7d3e0] Buffer queue overflow, dropping
What is this telling me?
Note:
both source files are playable
I padded the 'output' to be rather large in an attempt to understand the params
the placement of the 2nd video at 100:0 is arbitrary. Once I get the cmd working I'll move it to a better location in the output.
both videos began life as .FLV files recorded from web cameras. I converted them to .ogg because FFmpeg didn't want to combine two .FLV files. If there is a better route to this, please let me know.
So - what's wrong with my parameters and what am I doing to cause these FFMPEG errors?
EDIT:
ffmpeg -i clinician.ogg
Input #0, ogg, from 'clinician.ogg':
Duration: 00:05:20.98, start: 0.001000, bitrate: 2273 kb/s
Stream #0:0: Video: theora, yuv420p, 500x500 [SAR 1:1 DAR 1:1], 1k tbr, 1k tbn, 1k tbc
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Stream #0:1: Audio: vorbis, 8000 Hz, stereo, s16
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
ffmpeg -i subject.ogg
Input #0, ogg, from 'subject.ogg':
Duration: 00:05:17.60, start: 0.001000, bitrate: 1341 kb/s
Stream #0:0: Video: theora, yuv420p, 300x300 [SAR 1:1 DAR 1:1], 83.33 tbr, 1k tbn, 1k tbc
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Stream #0:1: Audio: vorbis, 8000 Hz, stereo, s16
Metadata:
SERVER : Red5 Server 1.0.0 RC1 $Rev: 4193 $
CANSEEKTOEND : true
ENCODER : Lavf54.31.100
Converting to x264 was a great suggestion. That seemed to turn the tide.
Here are some notes for posterity:
to convert flv to x264 and correct audio sync issues:
ffmpeg -y -i subject_s_2242_r_1658.flv -async 1 -ac 2 -strict -2 -acodec vorbis \
-c:v libx264 -preset slow -crf 22 subject.mkv
to merge two x264 files into a single side-by-side file and put the two mono audio tracks into stereo in the resultant file:
ffmpeg -y -i clinician.mkv -vf "movie=subject.mkv[right];pad=iw*2:ih:0:0[left];[left][right]overlay=500:0" \
-filter_complex "amovie=clinician.mkv[l];amovie=subject.mkv[r];[l][r] amerge" final.mkv
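(On newer ffmpeg builds, the same side-by-side merge can be done in one pass with hstack and amerge; a sketch, assuming the 300px subject video is scaled up to match the clinician video's 500px height:)
ffmpeg -i clinician.mkv -i subject.mkv -filter_complex "[1:v]scale=-1:500[right];[0:v][right]hstack[v];[0:a][1:a]amerge=inputs=2[a]" -map "[v]" -map "[a]" -ac 2 side_by_side.mkv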
I was unable to install AVISYNTH (running on CentOS 6.2) but it does look like a great solution.
It is probably easiest to do this using Avisynth.
Make the following input.avs file:
a = AviSource("first.avi")
b = AviSource("second.avi")
StackHorizontal(a,b)
Then run ffmpeg -i input.avs output.avi ... plus any other options you want.
EDIT: Another way to do it (not fast) is to dump the frames from both files to png and combine them with ImageMagick (for example montage) or similar image processing tools.
#!/bin/bash
# Dump every frame of each clip to numbered PNG files.
ffmpeg -i first.avi first_%05d.png
ffmpeg -i second.avi second_%05d.png
# Stitch matching frames side by side with ImageMagick's montage.
for file in first_*.png ; do montage ${file} ${file/first/second} ${file/first/output} ; done
# Re-encode the combined frames into a video.
ffmpeg -i output_%05d.png output.avi
This actually lets you do a lot more image processing than just side-by-side: you can do arbitrary scaling, overlays, backgrounds, etc. The problem is that the N-th frame of one file may not fall at exactly the same time as the N-th frame of the other file if they are variable frame rate; this is something that AviSynth handles perfectly for you. If the clips are constant frame rate, that is not a problem.
Combining clips by making a new clip containing both like this (whether through avisynth or not) requires recompressing the video, and reduces video quality/increases file size.
I am not sure how to read ogg files into Avisynth, but there is probably a way. Check the FAQ on input formats.
Side comment: The choice of theora/ogg is strange. Better: H.264 in mp4 container.
