Extract alpha channel from ProRes with ffmpeg

I'm trying to extract the alpha channel from a ProRes (mov) file as greyscale into a separate mp4 file (to emulate video with transparency on an HTML page later):
ffmpeg -i in.mov -hide_banner -f mp4 -vcodec libx264 -vf alphaextract,format=yuv420p out.mp4
but I don't get a filled alpha channel, only a sort of border of it. I'm pretty sure the original file is fine (I tried with different files), and encoding it to webm showed correct transparency.
(Screenshots: what I get from ffmpeg vs. how the original file looks.)

It's a bug; patched in git master.
A workaround for older versions is:
ffmpeg -i in.mov -vf format=yuva444p16le,alphaextract,format=yuv420p -c:v libx264 out.mp4
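If the goal is to serve the color and alpha parts as two separate MP4s for the HTML page, a minimal sketch could look like this (the output file names are examples; the color pass simply drops the alpha by converting to yuv420p):
# Color part, alpha discarded:
ffmpeg -i in.mov -vf format=yuv420p -c:v libx264 color.mp4
# Alpha part as greyscale, using the workaround pixel-format chain above:
ffmpeg -i in.mov -vf format=yuva444p16le,alphaextract,format=yuv420p -c:v libx264 alpha.mp4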

Related

Hardcoding subtitles from DVD or VOB file with ffmpeg

I have some DVDs that I would like to encode so that I can play them on a Chromecast, with subtitles. It seems that Chromecast only supports text-based subtitle formats, while DVD subtitles are in a bitmap format, so I need to hardcode the subtitles onto the video stream.
First I use vobcopy to create a VOB file:
vobcopy -I /dev/sr0
Next I want to use ffmpeg to encode it as a video stream in a format that is supported by the Chromecast. This is the closest I've come so far (based on the ffmpeg documentation):
ffmpeg -analyzeduration 100M -probesize 100M -i in.vob \
-filter_complex "[0:v:0][0:s:0]overlay[vid]" -map "[vid]" \
-map 0:3 -codec:v libx264 -crf 20 -codec:a copy out.mkv
The -filter_complex "[0:v:0][0:s:0]overlay[vid]" parameter should overlay the first subtitle stream on the first video stream (-map 0:3 selects the audio). This partially works, but the subtitles are only shown for a fraction of a second (I'm guessing one frame).
How can I make the subtitles display for the correct duration?
I'm using ffmpeg 4.4.1 on Linux, but I've also tried the latest snapshot version, and tried gstreamer and vlc (but didn't get far).
The only solution I found that worked perfectly was a tedious multi-stage process.
1. Copy the DVD with vobcopy:
vobcopy -I /dev/sr0
2. Extract the subtitles in vobsub format using mencoder. This command will write subs.idx and subs.sub. The idx file can be edited if necessary to tweak the appearance of the subtitles.
mencoder *.vob -nosound -ovc frameno -o /dev/null \
-vobsuboutindex 0 -sid 0 -vobsubout subs
3. Copy the audio and video from the VOB into an mkv file. ffprobe can be used to identify the relevant video and audio stream numbers.
ffmpeg -fflags genpts -i *vob -map 0:1 -map 0:3 \
-codec:v copy -codec:a copy copied_av.mkv
4. Merge the subtitles with the audio/video stream.
mkvmerge -o merged.mkv copied_av.mkv subs.sub subs.idx
5. Then ffmpeg will work reliably with the mkv file to write hardcoded subtitles to the video stream:
ffmpeg -i merged.mkv -filter_complex "[0:v:0][0:s:0]overlay[vid]" \
-map [vid] -map 0:1 -codec:v libx264 -codec:a copy hardcoded.mkv
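For convenience, the stages above could be collected into one small shell script. This is only a sketch: it assumes the disc rips to a single VOB in the current directory and that the stream indices (0:1 video, 0:3 audio) match your ffprobe output.
#!/bin/sh
# 1. Rip the DVD to a VOB file in the current directory.
vobcopy -I /dev/sr0
# 2. Extract subtitle track 0 to subs.idx / subs.sub.
mencoder *.vob -nosound -ovc frameno -o /dev/null \
  -vobsuboutindex 0 -sid 0 -vobsubout subs
# 3. Copy the video and audio streams into an mkv (indices from ffprobe).
ffmpeg -fflags genpts -i *.vob -map 0:1 -map 0:3 \
  -codec:v copy -codec:a copy copied_av.mkv
# 4. Mux in the vobsub subtitles.
mkvmerge -o merged.mkv copied_av.mkv subs.sub subs.idx
# 5. Burn the subtitles into the video stream.
ffmpeg -i merged.mkv -filter_complex "[0:v:0][0:s:0]overlay[vid]" \
  -map "[vid]" -map 0:1 -codec:v libx264 -codec:a copy hardcoded.mkv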

Converting a large number of MP3 files to videos for YouTube, each using the same JPEG image

I am looking for a way to convert a large number of MP3 files to videos, each using the same image. Efficient processing time is important.
I tried the following:
ffmpeg -i image.jpg -i audio.mp3 -vcodec libx264 video.mp4
VLC media player played the resulting video file with the correct sound, but a blank screen.
Microsoft Media Player played the sound and showed the intended image. I uploaded the video to YouTube and received the message:
"The video has failed to process. Please make sure you are uploading a supported file type."
How can I make this work?
Create video:
ffmpeg -framerate 6 -loop 1 -i input.jpg -c:v libx264 -vf format=yuv420p -t 00:10:00 video.mp4
The duration (-t) should be greater than or equal to the duration of the longest MP3.
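If you are unsure of the durations, ffprobe can report them; a quick sketch (the file name is an example):
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 audio.mp3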
Now stream copy the same video for each MP3:
ffmpeg -i video.mp4 -i audio.mp3 -map 0:v -map 1:a -c copy -movflags +faststart -shortest output.mp4
Some notes regarding compatibility:
MP3 audio in an MP4 container does not have universal support, but it will be fine for YouTube. If your target players do not like it, then add -c:a aac after -c copy to output AAC audio instead.
If your target player does not like the low frame rate, then increase the -framerate value or add the -r output option with an appropriate value, such as -r 15. Again, YouTube should be able to handle it.
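Since the question involves a large number of MP3 files, the stream-copy step can be wrapped in a shell loop. A sketch assuming bash and that all MP3s sit in the current directory next to video.mp4:
for f in *.mp3; do
  # Reuse the pre-rendered video and pair it with each audio file via stream copy.
  ffmpeg -i video.mp4 -i "$f" -map 0:v -map 1:a -c copy -movflags +faststart -shortest "${f%.mp3}.mp4"
done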

Create muted video and black screen video with FFmpeg

I'm trying to use FFmpeg to generate the following from a local mp4 file:
1. A copy of the original video with no audio.
2. A copy of the original video with audio but without visuals (a black screen instead). This file also needs to be in mp4 format.
After reading through the documentation I am struggling to get the terminal commands right. To remove the audio I have tried this command without any success:
ffmpeg -i file.mp4 -map 0:0 -map 0:2 -acodec copy -vcodec copy
Could anyone guide me towards how to accomplish this?
Create black video and silent audio
Use the color and anullsrc source filters. Example making a 10-second output: 1280x720, 25 fps, stereo audio at a 44100 Hz sample rate:
ffmpeg -f lavfi -i color=size=1280x720:rate=25:color=black -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t 10 output.mp4
Remove audio
Only keep video:
ffmpeg -i input.mp4 -map 0:v -c copy output.mp4
Keep everything except audio:
ffmpeg -i input.mp4 -map 0 -map -0:a -c copy output.mp4
See FFmpeg Wiki: Map for more info on -map.
Make video black but keep the audio
Using the drawbox filter.
ffmpeg -i input.mp4 -vf drawbox=color=black:t=fill -c:a copy output.mp4
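Both requested files can also be produced in a single invocation by combining the mappings above; a sketch (the output names are examples, and the black video is re-encoded, with libx264 by default):
ffmpeg -i input.mp4 -filter_complex "[0:v]drawbox=color=black:t=fill[blk]" \
  -map 0:v -c:v copy -an muted.mp4 \
  -map "[blk]" -map 0:a -c:a copy black.mp4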
Generate silent audio
See How to add a new audio (not mixing) into a video using ffmpeg? and refer to the anullsrc example.
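A minimal sketch of the anullsrc approach referenced there, assuming you want to add a silent stereo track to an existing video (the sample rate and audio codec here are assumptions):
ffmpeg -i input.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
  -map 0:v -map 1:a -c:v copy -c:a aac -shortest output.mp4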
To remove the audio you can use this:
ffmpeg -i file.mp4 -c copy -an file-nosound.mp4
Notice the -an option; the documentation describes it as:
-an (output): Disable audio recording.
To keep audio but "replace" the video with a black screen, you could do this:
ffmpeg -i file.mp4 -i image.png -filter_complex overlay out.mp4
Here image.png is a black image that is placed on top of the video. There should be better ways of fully removing the frames; for example, you could extract the audio and later create a new video with the audio over a black background, as sketched below.
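A sketch of that two-step alternative, assuming a 1280x720, 25 fps output and AAC source audio (these values and the file names are placeholders):
# Extract the audio track without re-encoding (assumes AAC audio, hence .m4a).
ffmpeg -i file.mp4 -vn -c:a copy audio.m4a
# Generate a black video and mux the audio back in; -shortest ends it with the audio.
ffmpeg -f lavfi -i color=size=1280x720:rate=25:color=black -i audio.m4a -c:v libx264 -c:a copy -shortest black_with_audio.mp4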

ffmpeg command to merge file with good quality

I am looking for a better command that can merge audio and video files into one with better quality.
I found this command from Muaz Khan's WebRTC APIs.
ffmpeg -i {$audioFile} -i {$videoFile} -map 0:0 -map 1:0 {$mergedFileName}
Later, on the server, I had to add "-strict -2" to this command, because ffmpeg reported that the command is experimental and that "-strict -2" must be added in order to use it.
It works, but my 2.2 MB video file (.webm) and 1.5 MB audio file (.wav) were merged into a new .webm file of only 422.5 KB, and this new video lags.
I also want the duration metadata to already be written to the resulting video file.
Is there a command that produces a merged file without lag, where both the video and the audio are of good quality?
Use
ffmpeg -i {$audioFile} -i {$videoFile} -map 0:0 -c:a libopus -map 1:0 -c:v copy {$mergedFileName}
This will encode only the audio, leaving the video intact. Use libvorbis if libopus isn't present in your FFmpeg.
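With the file types mentioned in the question, that would look something like this (the file names are examples):
ffmpeg -i audio.wav -i video.webm -map 0:0 -c:a libopus -map 1:0 -c:v copy merged.webm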

How to add scaling to my ffmpeg command

I want to convert video from any format to mp4, so I am using this command:
ffmpeg -i ttt.mp4 -vcodec copy -acodec copy test.mp4
This works perfectly, but now I also want to add scaling to it, e.g. -s 320:240.
There are also many other commands for converting, like:
ffmpeg -i inputfile.avi -s 320x240 outputfile.avi
but after converting with that command, the video does not play in an HTML5 player, so it does not work for me. How do I add scaling to my own command?
Please provide a solution for this. Thanks in advance.
You have several problems:
In your command you have -vcodec copy; you cannot scale video without re-encoding.
In the command you randomly found on the Internet, they are using AVI, which is not HTML5-compatible.
What you should do is:
ffmpeg -i INPUT -s 320x240 -acodec copy OUT.mp4
Adding to Timothy_G:
Video copy (-c:v copy) will ignore ffmpeg's video filter chain, so no scaling is available (man ffmpeg is a great source of information that you will not find on Google). Notice that once you start decoding-filtering-encoding (i.e., no copy) the process will be much slower (100× slower or even more). libx264 is recommended if you want compatibility with all browsers.
$ ffmpeg -i INPUT -s 320x240 -threads 4 -c:a copy -c:v libx264 OUT.mp4
vp9 will provide nearly 50% extra bandwidth saving, but only for supported browsers (Firefox/Chrome), and the encoding will be much slower compared to libx264 (which itself is much slower than -c:v copy):
$ ffmpeg -i INPUT -s 320x240 -c:a copy -c:v vp9 OUT.webm
Notice that there is a set of formats (containers) accepted by browsers (most accept mp4, some also webm, ...) and for each format there is a set of accepted audio/video codecs. For example, you can use mp3 or aac with an mp4 file (container), but not with webm files.
http://en.wikipedia.org/wiki/HTML5_video#Supported_video_formats
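If you would rather not hard-code both dimensions, the scale filter can keep the aspect ratio by computing one side; a sketch (the 320 width is just an example):
ffmpeg -i INPUT -vf scale=320:-2 -c:a copy -c:v libx264 OUT.mp4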
