I have an ffmpeg build that includes the VMAF library. I can use it to calculate the VMAF score of a distorted video against a reference video with a command like this:
ffmpeg -i distorted.mp4 -i original.mp4 -filter_complex "[0:v]scale=640:480:flags=bicubic[main];[main][1:v]libvmaf=model_path=model/vmaf_v0.6.1.json:log_path=log.json" -f null -
Now, I remember there was a way to get VMAF scores while performing a regular ffmpeg encode. How can I do both at the same time?
I want to encode a video like this, while also calculating the VMAF of the output file:
ffmpeg -i original.mp4 -crf 27 -s 640x480 out.mp4
You should be able to use [the `tee` muxer](http://ffmpeg.org/ffmpeg-formats.html#tee-1) to save the file and pipe the encoded frames to another ffmpeg process. Something like this should work for you:
ffmpeg -i original.mp4 -crf 27 -s 640x480 -f tee "out.mp4 | [f=mp4]-" \
| ffmpeg -i - -i original.mp4 -filter_complex ...
(the \ splits the command across two lines for readability; on Windows, remove it and put everything on one line)
Here is what works on my Windows PC (thanks to @Rotem for his help):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: | ffmpeg -i in.mp4 -f nut -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4
The main issue that @Rotem and I missed is that we need to terminate libvmaf's output. Also, raw H.264 does not carry header info, and using `nut` alleviates that issue.
There are a couple of caveats:
Testing with the testsrc example that @Rotem suggested in the comment below does not produce any libvmaf log, at least as far as I can see, but in debug mode you can see that the filter is getting initialized.
You'll likely see a [nut @ 0000026b123afb80] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8) message in the log. This just means that frames are being piped in faster than the second ffmpeg process consumes them. FFmpeg blocks on both ends of the pipe, so no data should be lost.
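If you want to address it, you can raise the queue size on the piped input of the second ffmpeg (a sketch; 1024 is an arbitrary value, and -thread_queue_size must come before the -i it applies to):
ffmpeg -i in.mp4 -f nut -thread_queue_size 1024 -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4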
For full disclosure, I posted my Python test script on GitHub. It just runs the shell command, so it should be easy to follow even if you don't use Python.
I know it’s possible to get a part of a file using the command:
ffmpeg -ss 00:02:00.000 -i input.mp4 -t 00:00:05.000 out.mp4
But is it possible to combine multiple videos with text and other effects?
I want to create an output from the following:
File1.mp4:
Read from 00:02:00.000 to 00:02:05.000
File2.mp4:
Read from 00:00:00.000 to 00:01:30.000
Insert overlay image "logo.png" for 20 seconds
File3.mp4:
Insert the whole file
Insert text from 00:00:10.000 to 00:00:30.000
It can be done with FFmpeg, but FFmpeg isn't really an 'editor', so the command gets long, unwieldy, and prone to execution errors as the number of input clips and effects grows.
That said, one way to do this is using the concat filter.
ffmpeg -i file1.mp4 -i file2.mp4 -i file3.mp4 -loop 1 -t 20 -i logo.png \
-filter_complex "[0:v]trim=120:125,setpts=PTS-STARTPTS[v1];
[1:v]trim=duration=90,setpts=PTS-STARTPTS[vt2];
[vt2][3:v]overlay=eof_action=pass[v2];
[2:v]drawtext=enable='between(t,10,30)':fontfile=font.ttf:text='Hello World'[v3];
[0:a]atrim=120:125,asetpts=PTS-STARTPTS[a1];
[1:a]atrim=duration=90,asetpts=PTS-STARTPTS[a2];
[v1][a1][v2][a2][v3][2:a]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4
I haven't specified any encoding parameters like codec, bitrate, etc.; I'm assuming you're familiar with those. I also haven't specified arguments for overlay or drawtext, like position; consult the documentation for a guide to those.
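For illustration, position arguments might look like this (a sketch; the coordinates and the 10-pixel margin are arbitrary choices, not from the command above):
overlay=x=main_w-overlay_w-10:y=10:eof_action=pass
drawtext=enable='between(t,10,30)':fontfile=font.ttf:text='Hello World':x=(w-text_w)/2:y=h-text_h-20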
I want to loop the same video 4 times and output it as a single video using ffmpeg.
So I created a command like this:
ffmpeg -loop 4 -i input.mp4 -c copy output.mp4
but when I run it, it gives an error like this:
Option loop not found.
How can I do this without the error? Please help me.
In recent versions, it's
ffmpeg -stream_loop 4 -i input.mp4 -c copy output.mp4
Due to a bug, the above does not work with MP4s. But if you first remux to an MKV, it works for me:
ffmpeg -i input.mp4 -c copy output.mkv
then,
ffmpeg -stream_loop 4 -i output.mkv -c copy output.mp4
I've found an equivalent workaround using input concatenation, for outdated/buggy versions of -stream_loop:
ffmpeg -f concat -safe 0 -i "video-source.txt" -f concat -safe 0 -i "audio-source.txt" -c copy -map 0:0 -map 1:0 -fflags +genpts -t 10:00:00.0 /path/to/output.ext
This will loop video and audio independently of each other and force-stop the output at the 10-hour mark.
Both text files consist of the line
file '/path/to/file.ext'
but you must make sure to repeat this line enough times to cover the intended output duration.
For example, if your total video time is less than the total audio time, the video will stop earlier than intended while the audio keeps playing, until either -t 10:00:00.0 is reached or the audio ends prematurely.
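A quick way to generate such a list file (a sketch; the 500 repetitions and the path are placeholders you should adjust):
for i in $(seq 500); do echo "file '/path/to/file.ext'"; done > video-source.txt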
I am trying to record my desktop and save it as videos but ffmpeg fails.
Here is the terminal output:
$ ffmpeg -f alsa -i pulse -r 30 -s 1366x768 -f x11grab -i :0.0 -vcodec libx264 - preset ultrafast -crf 0 -y screencast.mp4
...
Unable to find a suitable output format for 'pipe:'
typo
Use -preset, not - preset (notice the space). ffmpeg uses - to indicate a pipe, so your typo is being interpreted as a piped output.
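With that fixed, the command becomes:
ffmpeg -f alsa -i pulse -r 30 -s 1366x768 -f x11grab -i :0.0 -vcodec libx264 -preset ultrafast -crf 0 -y screencast.mp4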
pipe requires the -f option
For users who get the same error but actually want to output to a pipe: you have to tell ffmpeg which muxer the pipe should use.
Do this with the -f output option. Examples: -f mpegts, -f nut, -f wav, -f matroska. Which one to use depends on your video/audio formats and your particular use case.
You can see a list of muxers with ffmpeg -muxers (not all muxers can be used with pipe).
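For example, to pipe an encode into ffplay for a quick preview (a sketch; mpegts is just one reasonable choice of muxer here):
ffmpeg -i input.mp4 -c:v libx264 -f mpegts - | ffplay -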
I'm trying to convert a GIF file to a webm file using the command below, which works fine. However, I'm wondering: is it also possible to reverse it with ffmpeg, or would I need to reverse it with ImageMagick first and then convert it with ffmpeg?
ffmpeg -i your_gif.gif -c:v libvpx -crf 12 -b:v 500K output.webm
Any help is appreciated
The script posted here might help you.
The script is in bash, but ripping out the individual commands should work on Windows as well.
https://github.com/WhatIsThisImNotGoodWithComputers/ffmpeg-webm-scripts
These are the relevant lines of code (note that they need to be edited for your setup):
ffmpeg -i "${INPUT_FILE}" -ss $START_TIME -to $TO_TIME -an -qscale 1 $TEMP_FOLDER/%06d.jpg
cat $(ls -r $TEMP_FOLDER/*jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -c:v libvpx -crf 20 -b:v $FRAMERATE $CROPSCALE -threads 0 -an $OUTPUT_FILE
You basically have to convert all the frames to JPEG stills and then encode them back into a webm in reverse order.
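Adapted to the GIF case, a minimal sketch might look like this (frames/ is a hypothetical temporary directory; the encoding options mirror the command in the question, and the original GIF timing is not preserved unless you add -framerate on the piped input):
mkdir frames
ffmpeg -i your_gif.gif -qscale 1 frames/%06d.jpg
cat $(ls -r frames/*.jpg) | ffmpeg -f image2pipe -vcodec mjpeg -i - -c:v libvpx -crf 12 -b:v 500K output.webm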
You can see what codecs ffmpeg supports with ffmpeg -codecs (mentioned in ffmpeg --help). Running ffmpeg -codecs | grep -i gif on mine says it supports gif.
ffmpeg checks extensions to determine the file type if you don't override it, so
ffmpeg -i onoz.webm onoz.gif
does the trick just fine.
Does anyone know if it is possible to encode a video using ffmpeg in reverse? (So the resulting video plays in reverse?)
I think I can do it by generating images for each frame (a folder of images labelled 1.jpg, 2.jpg, etc.), then writing a script to reverse the image names, and then re-encoding the video from these files.
Does anyone know of a quicker way?
This is an FLV video.
Thank you
No, it isn't possible to encode a video in reverse using ffmpeg without dumping it to images and back again (though see the newer reverse-filter solution further down). There are a number of guides available online that show you how to do it, notably:
http://ubuntuforums.org/showthread.php?t=1353893
and
https://sites.google.com/site/linuxencoding/ffmpeg-tips
The latter is as follows:
Dump all video frames
$ ffmpeg -i input.mkv -an -qscale 1 %06d.jpg
Dump audio
$ ffmpeg -i input.mkv -vn -ac 2 audio.wav
Reverse audio
$ sox -V audio.wav backwards.wav reverse
Cat video frames in reverse order to FFmpeg as input
$ cat $(ls -r *jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -i backwards.wav -vcodec libx264 -vpre slow -crf 20 -threads 0 -acodec flac output.mkv
Use mencoder to deinterlace PAL dv and double the frame rate from 25 to 50, then pipe to FFmpeg.
$ mencoder input.dv -of rawvideo -ofps 50 -ovc raw -vf yadif=3,format=i420 -nosound -really-quiet -o - | ffmpeg -vsync 0 -f rawvideo -s 720x576 -r 50 -pix_fmt yuv420p -i - -vcodec libx264 -vpre slow -crf 20 -threads 0 video.mkv
I've created a script for this based on Andrew Stubbs' answer:
https://gist.github.com/hfossli/6003302
It can be used like so:
./ffmpeg_sox_reverse.sh -i Desktop/input.dv -o test.mp4
New Solution
A much simpler method exists now: simply use the following command (adjusting input.mkv and reversed.mkv accordingly):
ffmpeg -i input.mkv -af areverse -vf reverse reversed.mkv
The -af areverse option reverses the audio and -vf reverse reverses the video. The video and audio will be in sync automatically in the output file reversed.mkv; there is no need to worry about the input frame rate or anything else.
On one video where I only specified -vf reverse (reversing the video but not the audio), the output file didn't play correctly in MKV format, but it did work when I changed the output to MP4. (Reversing video but not audio is an uncommon use case, but if you run into this issue, try changing the output format.) On large input videos that exceed the RAM available in your computer, this method may not work, and you may need to chop up the input file or use the old solution below.
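If you do need to chop up a large input, one approach is to reverse the pieces separately and concatenate them in swapped order (a rough sketch; the 60-second split point is arbitrary and assumes a two-piece split, and the stream copy at the end works because both pieces are encoded with the same default settings):
ffmpeg -i input.mkv -t 60 -af areverse -vf reverse part2.mkv
ffmpeg -ss 60 -i input.mkv -af areverse -vf reverse part1.mkv
printf "file 'part1.mkv'\nfile 'part2.mkv'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy reversed.mkv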
Old Solution
One issue is that the frame rate can vary depending on the video; many answers depend on a specific frame rate (like "-r 25" for 25 frames per second). If the frame rate of the video is different, the reversed audio and video will go out of sync.
You can of course adjust the frame rate manually each time (you can get it by running ffmpeg -i video.mkv and looking for the number in front of fps; this is sometimes a decimal number like 23.98). But with some bash code you can easily extract the fps, store it in a variable, and pass it to the programs automatically.
Based on this I've created the following bash script to do that. Simply chmod +x it and run it: ./make-reversed-video.sh input.mkv output.mkv. The code is as follows:
#!/bin/bash
#Partially based on https://nhs.io/reverse/, but with some modifications, including automatic extraction of the frame rate.
#Get parameters. The temp folder is optional and defaults to /tmp/create_reversed_video.
VIDEO_FILE=$1
OUTPUT_FILE=$2
TEMP_FOLDER=${3:-/tmp/create_reversed_video}
echo "Using input file: $VIDEO_FILE"
echo "Using output file: $OUTPUT_FILE"
mkdir -p "$TEMP_FOLDER"
#Get frame rate.
FRAME_RATE=$(ffmpeg -i "$VIDEO_FILE" 2>&1 | grep -o -P '[0-9. ]+fps' | grep -o -P '[0-9.]+')
echo "The frame rate is: $FRAME_RATE"
#Extract audio from video.
ffmpeg -i "$VIDEO_FILE" -vn -ac 2 "$TEMP_FOLDER/audio.wav"
#Reverse the audio.
sox -V "$TEMP_FOLDER/audio.wav" "$TEMP_FOLDER/backwards.wav" reverse
#Extract each video frame as an image.
ffmpeg -i "$VIDEO_FILE" -an -qscale 1 "$TEMP_FOLDER/%06d.jpg"
#Recombine into reversed video.
ls -1 "$TEMP_FOLDER"/*.jpg | sort -r | xargs cat | ffmpeg -framerate "$FRAME_RATE" -f image2pipe -i - -i "$TEMP_FOLDER/backwards.wav" "$OUTPUT_FILE"
#Delete temporary files.
rm -rf "$TEMP_FOLDER"
I've tested it and it works well on my Ubuntu 18.04 machine on lots of videos (after installing the dependencies like sox). Please let me know if it works on other Linux distributions and versions.