ffmpeg php convert in background

Here is the deal: I converted my file with the following command:
$output = exec("ffmpeg -i ".$directory_path_full." -ar 22050 -ab 32 -f flv -s 320x240 ".$directory_path.$file_name.".flv");
But now I need to tell the database that processing is over. How do I do that?
Once the conversion is done, the row in the videos table should have its converted column set to 1.
I also found this script:
$output = shell_exec('ffmpeg ' . escapeshellarg($directory_path_full) . ' ' . escapeshellarg($directory_path.$file_name.".flv"));
And again: how do I update the database once it has completed?

exec() will not return until whatever you're executing completes, so basically:
exec("ffmpeg blah blah blah", $output, $return_var);
if ($return_var === 0) { // ffmpeg exits with 0 on success
... update database to indicate success ...
}
Don't assume the conversion succeeded and blindly update the database; always check whether it actually succeeded.
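Putting the two halves together, here is a minimal sketch under a few assumptions: $pdo is an existing PDO connection, the videos table has id and converted columns, and $video_id identifies the row being processed (those names are placeholders, adjust them to your schema):
$cmd = "ffmpeg -i ".escapeshellarg($directory_path_full)
     ." -ar 22050 -ab 32 -f flv -s 320x240 "
     .escapeshellarg($directory_path.$file_name.".flv");

exec($cmd, $output, $return_var);

if ($return_var === 0) {
    // ffmpeg exited with 0, so the conversion worked: flag the row as converted
    $stmt = $pdo->prepare("UPDATE videos SET converted = 1 WHERE id = ?");
    $stmt->execute(array($video_id)); // $video_id is a placeholder for your row id
} else {
    // conversion failed: do NOT set converted = 1; log ffmpeg's output instead
    error_log("ffmpeg failed with code $return_var: ".implode("\n", $output));
}
Note that exec() only captures stdout; ffmpeg writes its messages to stderr, so append 2>&1 to $cmd if you also want those lines in $output.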

Related

Use ffmpeg to record 2 webcams on raspberry pi

I want to record 2 webcams using ffmpeg. I have a simple Python script, but it doesn't work when I run the 2 subprocesses at the same time.
import datetime
import os
import subprocess

ROOT_PATH = os.getenv("ROOT_PATH", "/home/pi")
ENCODING = os.getenv("ENCODING", "copy")
new_dir = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
RECORDINGS_PATH1 = os.getenv("RECORDINGS_PATH", "RecordingsCam1")
RECORDINGS_PATH2 = os.getenv("RECORDINGS_PATH", "RecordingsCam2")
recording_path1 = os.path.join(ROOT_PATH, RECORDINGS_PATH1, new_dir)
recording_path2 = os.path.join(ROOT_PATH, RECORDINGS_PATH2, new_dir)
os.mkdir(recording_path1)
os.mkdir(recording_path2)
segments_path1 = os.path.join(recording_path1, "%03d.avi")
segments_path2 = os.path.join(recording_path2, "%03d.avi")
record1 = "ffmpeg -nostdin -i /dev/video0 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path1)
record2 = "ffmpeg -nostdin -i /dev/video2 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path2)
subprocess.Popen(record1, shell=True)
subprocess.Popen(record2, shell=True)
Also, I tried capturing the 2 sources side by side, but it gives the error: `Filtering and streamcopy cannot be used together.`
This has nothing to do with running two processes at the same time. FFmpeg clearly states that it cannot find /dev/video0 and /dev/video2. It seems your video cameras are not detected. You can check this with the following command:
$ ls /dev/ | grep video
This will list all devices which have video in their name. If video0 and video2 do not exist, it's clear why FFmpeg gives such an error. If they do exist, I do not know how to resolve this; you may try running the FFmpeg commands directly in a terminal.

How to Create video from selected images of a folder using FFMPEG?

For the time being I am doing:
ProcessStartInfo ffmpeg = new ProcessStartInfo();
ffmpeg.CreateNoWindow = false;
ffmpeg.UseShellExecute = false;
ffmpeg.FileName = @"e:\ffmpeg\ffmpeg.exe";
ffmpeg.Arguments = "for file in (D:\\Day\\*.jpg); do ffmpeg -i \"$file\" -vf fps=1/60 -q:v 3 \"D:\\images\\out.mp4\"; done;";
ffmpeg.RedirectStandardOutput = true;
Process x = Process.Start(ffmpeg);
Here I'm getting an exception saying the system cannot find the specified file.
For the time being I'm considering all the files in D:\Day\*.jpg, but actually I need to query individual files from a list.
Where am I going wrong in the above scenario?
You need to create a separate text file with the image names and use that text file to create your video.
Inside frameList.txt:
file 'D:\20180205_054616_831.jpg'
file 'D:\20180205_054616_911.jpg'
file 'D:\20180205_054617_31.jpg'
file 'D:\20180205_054617_111.jpg'
and in the Arguments of the process use:
"-report -y -r 15/1 -f concat -safe 0 -i frameList.txt -c:v libx264 -s 1920x1080 -b:v 2000k -vf fps=15,format=yuv420p out.mp4"

If conditions in a loop breaking ffmpeg zoom command

I have built a bash script where I am trying to zoom in on an image with ffmpeg for 10s:
ffmpeg -r 25 -i image0.jpg -filter_complex "scale=-2:10*ih,zoompan=z='min(zoom+0.0015,1.5)':d=250:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)',scale=-2:720" -y -shortest -c:v libx264 -pix_fmt yuv420p temp_1.mp4
This command is included in a while loop, with two "if" conditions at the beginning of the loop:
first=1017
i=0
while read status author mySource myFormat urlIllustration credit shot_id originalShot categories title_EN length_title_EN text_EN tags_EN title_FR length_title_FR text_FR tags_FR title_BR length_title_BR text_BR tags_BR; do
    if [ $myFormat != "diaporama" ]; then
        let "i = i + 1"
        continue
    fi
    if [ "$shot_id" -lt "$first" ]; then
        let "i = i + 1"
        continue
    fi
    rm temp_1.mp4
    ffmpeg -r 25 -i image0.jpg -filter_complex "scale=-2:10*ih,zoompan=z='min(zoom+0.0015,1.5)':d=250:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)',scale=-2:720" -y -shortest -c:v libx264 -pix_fmt yuv420p temp_1.mp4
    let "i = i + 1"
done <../data.tsv
echo "All done."
(I have removed stuff from the loop; this is the minimal code that reproduces the problem.)
Now the weird bug: if I run this code as-is, the video I am trying to generate will not be 10s long, only 1-2s long, and ffmpeg exits with the error "[out_0_0 @ 0x2fa4c00] 100 buffers queued in out_0_0, something may be wrong."
Now if I remove one of the "if" conditions at the beginning of my loop (the first or the second, it doesn't matter), the video is generated fine and is 10s long.
What could be the cause of this problem?

How to convert a .rtp file (recorded using RTP Proxy, codec G711) to a .wav file

I need to convert a .rtp file (which has been recorded using RTP proxy) to a .wav file.
If anyone knows how it can be done, please share your solutions.
Thanks in advance :)
A little late to the party perhaps, but I recently had the same problem and thought I should share my solution here in case someone else has this question. I also used RTP-proxy to capture audio streams, which were saved as two .rtp files, one for each channel, where .o. is the output of the party initiating the call (caller) and .a. is the party receiving the call (callee).
Solution 1.
RTP-proxy has a built-in module called "extractaudio" which does the WAV conversion for you. The documentation is lacking, to say the least, but you can use it from the command line as follows:
extractaudio -F wav -B /path/to/rtp /path/of/outfile.wav
This will convert one RTP file at a time to a WAV file. By default the module encodes the created WAV files with GSM encoding. If this is undesired you can pass -D pcm_16 as an extra argument to switch the encoding to Linear PCM 16, which is a much better format for retaining audio quality. I extracted WAV files this way programmatically from Python, using subprocesses to make the command-line calls.
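For example, a PCM-encoded extraction would look something like this (the exact flag placement is my assumption; check extractaudio's usage output for the definitive syntax):
extractaudio -D pcm_16 -F wav -B /path/to/rtp /path/of/outfile.wav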
Solution 2.
You can extract the raw RTP data directly and convert it to a WAV file using 3rd-party software like SoX or FFmpeg. This solution requires SoX, FFmpeg and tshark as dependencies. You could do without tshark if you opened the RTP file yourself and extracted the UDP data, but it can be done easily with tshark.
Here is my code for it (Python 2.7.9):
import os
import subprocess
import shlex
import binascii

FILENAME = "my_file"
WORKING_DIR = os.path.dirname(os.path.realpath(__file__))
IN_FILE_O = "%s/%s.o.rtp" % (WORKING_DIR, FILENAME)
IN_FILE_A = "%s/%s.a.rtp" % (WORKING_DIR, FILENAME)

# SoX/FFmpeg commands for turning the raw payload of each codec into a WAV file
conversion_list = {"PCMU": "sox -t ul -r 8000 -c 1 %s %s",
                   "GSM":  "sox -t gsm -r 8000 -c 1 %s %s",
                   "PCMA": "sox -t al -r 8000 -c 1 %s %s",
                   "G722": "ffmpeg -f g722 -i %s -acodec pcm_s16le -ar 16000 -ac 1 %s",
                   "G729": "ffmpeg -f g729 -i %s -acodec pcm_s16le -ar 8000 -ac 1 %s"
                   }

if __name__ == "__main__":
    # dump the hex payload of every packet with tshark
    args_o = "tshark -n -r " + IN_FILE_O + " -T fields -e data"
    args_a = "tshark -n -r " + IN_FILE_A + " -T fields -e data"
    f_o = WORKING_DIR + "/" + "payload_o.g722"
    f_a = WORKING_DIR + "/" + "payload_a.g722"
    payload_o = subprocess.Popen(shlex.split(args_o), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True).communicate()[0]
    payload_a = subprocess.Popen(shlex.split(args_a), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True).communicate()[0]

    if os.path.exists(f_o):
        os.remove(f_o)
    if os.path.exists(f_a):
        os.remove(f_a)

    with open(f_o, "ab") as new_codec:
        payload = payload_o.split("\n")
        for line in payload:
            line = line.rstrip()
            tmp = "%s.o: " % FILENAME
            # walk the hex string two characters (one byte) at a time,
            # skipping the first 12 bytes (the fixed RTP header)
            for index, (op, code) in enumerate(zip(line[0::2], line[1::2])):
                if index > 11:
                    new_codec.write(binascii.unhexlify(op + code))

    with open(f_a, "ab") as new_codec:
        payload = payload_a.split("\n")
        for line in payload:
            line = line.rstrip()
            tmp = "%s.a: " % FILENAME
            for index, (op, code) in enumerate(zip(line[0::2], line[1::2])):
                if index > 11:
                    new_codec.write(binascii.unhexlify(op + code))

    owav = WORKING_DIR + "/" + "%s.o.wav" % FILENAME
    awav = WORKING_DIR + "/" + "%s.a.wav" % FILENAME
    if os.path.exists(owav):
        os.remove(owav)
    if os.path.exists(awav):
        os.remove(awav)

    print("Creating %s with %s" % (owav, f_o))
    print("Creating %s with %s" % (awav, f_a))
    # run the actual conversion (G722 is hardcoded here; see the note below)
    subprocess.Popen(shlex.split(conversion_list["G722"] % (f_o, owav)), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True).communicate()[0]
    subprocess.Popen(shlex.split(conversion_list["G722"] % (f_a, awav)), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True).communicate()[0]
I have G722 hardcoded as the input encoding in my solution, but it should work with any type of input encoding, given that you have the correct SoX/FFmpeg command for it; I've added a few different encodings in a predefined dict. The drawback of this solution is that you have to know the encoding of the call recorded in the RTP file. I tried to find a parameter in the RTP file equivalent to the rtp.p_type found in PCAP files, which indicates the codec used, but didn't have any luck; I'm not familiar enough with RTP files, though, so it might be present in the data somewhere. Another drawback is that the produced audio files can sometimes be shorter than the original audio. I'm assuming this is due to silence suppression, in which case it could be fixed by inserting silence yourself at the places where the timestamps indicate silence has been removed (not transmitted).
A great way to view information about RTP files is the tshark command:
tshark -n -r /path/to/file.rtp
Hope it will help someone!
EDIT:
I found another question about detecting the encoding within an RTP file.

Can ffmpeg show a progress bar?

I am converting a .avi file to a .flv file using ffmpeg. As it takes a long time to convert a file, I would like to display a progress bar. Can someone please guide me on how to go about this?
I know that ffmpeg somehow has to output the progress to a text file and I have to read it using AJAX calls. But how do I get ffmpeg to output the progress to the text file?
I've been playing around with this for a few days. That "ffmpegprogress" thing helped, but it was very hard to get working with my setup, and the code was hard to read.
In order to show the progress of ffmpeg you need to do the following:
run the ffmpeg command from PHP without waiting for it to finish (for me, this was the hardest part)
tell ffmpeg to send its output to a file
from the front end (AJAX, Flash, whatever) hit either that file directly or a PHP file that can pull the progress out of ffmpeg's output.
Here's how I solved each part:
1.
I got the following idea from "ffmpegprogress". This is what he did: one PHP file calls another through an http socket. The 2nd one actually runs the "exec" and the first file just hangs up on it. For me his implementation was too complex. He was using "fsockopen". I like CURL. So here's what I did:
$url = "http://".$_SERVER["HTTP_HOST"]."/path/to/exec/exec.php";
curl_setopt($curlH, CURLOPT_URL, $url);
$postData = "&cmd=".urlencode($cmd);
$postData .= "&outFile=".urlencode("path/to/output.txt");
curl_setopt($curlH, CURLOPT_POST, TRUE);
curl_setopt($curlH, CURLOPT_POSTFIELDS, $postData);
curl_setopt($curlH, CURLOPT_RETURNTRANSFER, TRUE);
// # this is the key!
curl_setopt($curlH, CURLOPT_TIMEOUT, 1);
$result = curl_exec($curlH);
Setting CURLOPT_TIMEOUT to 1 means it will wait 1 second for a response. Preferably that would be lower. There is also the CURLOPT_TIMEOUT_MS which takes milliseconds, but it didn't work for me.
After 1 second, CURL hangs up, but the exec command still runs. Part 1 solved.
BTW - A few people were suggesting using the "nohup" command for this. But that didn't seem to work for me.
*ALSO! Having a php file on your server that can execute code directly on the command line is an obvious security risk. You should have a password, or encode the post data in some way.
2.
The "exec.php" script above must also tell ffmpeg to output to a file. Here's code for that:
exec("ffmpeg -i path/to/input.mov path/to/output.flv 1> path/to/output.txt 2>&1");
Note the "1> path/to/output.txt 2>&1". I'm no command line expert, but from what I can tell this line says "send normal output to this file, AND send errors to the same place". Check out this url for more info: http://tldp.org/LDP/abs/html/io-redirection.html
3.
From the front end, call a PHP script and give it the location of the output.txt file. That PHP file will then pull the progress out of the text file. Here's how I did that:
// # read the ffmpeg output file whose path the front end passed in (assumed here to be in $txt)
$content = @file_get_contents($txt);
// # get duration of source
preg_match("/Duration: (.*?), start:/", $content, $matches);
$rawDuration = $matches[1];
// # rawDuration is in 00:00:00.00 format. This converts it to seconds.
$ar = array_reverse(explode(":", $rawDuration));
$duration = floatval($ar[0]);
if (!empty($ar[1])) $duration += intval($ar[1]) * 60;
if (!empty($ar[2])) $duration += intval($ar[2]) * 60 * 60;
// # get the current time
preg_match_all("/time=(.*?) bitrate/", $content, $matches);
$last = array_pop($matches);
// # this is needed if there is more than one match
if (is_array($last)) {
$last = array_pop($last);
}
$curTime = floatval($last);
// # finally, progress is easy
$progress = $curTime/$duration;
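To hand that value back to the AJAX caller, the PHP script can simply print it, for example as JSON (a minimal sketch; the field name is my own choice):
// return the progress ($progress is 0..1 here, so multiply by 100 for a percentage)
header('Content-Type: application/json');
echo json_encode(array("progress" => round($progress * 100)));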
Hope this helps someone.
There is an article in Russian which describes how to solve your problem.
The point is to catch the Duration value before encoding and to catch the time=... values during encoding.
--skipped--
Duration: 00:00:24.9, start: 0.000000, bitrate: 331 kb/s
--skipped--
frame= 41 q=7.0 size= 116kB time=1.6 bitrate= 579.7kbits/s
frame= 78 q=12.0 size= 189kB time=3.1 bitrate= 497.2kbits/s
frame= 115 q=13.0 size= 254kB time=4.6 bitrate= 452.3kbits/s
--skipped--
It's very simple if you use the pv (pipe viewer) command. To do this, transform
ffmpeg -i input.avi {arguments}
to
pv input.avi | ffmpeg -i pipe:0 -v warning {arguments}
No need to get into coding!
You can do it with ffmpeg's -progress argument and nc:
WATCHER_PORT=9998
DURATION=$(ffprobe -select_streams v:0 -show_entries "stream=duration" \
    -of compact "$INPUT_FILE" | sed 's!.*=\(.*\)!\1!g')
# print "out_time of DURATION" for every progress report ffmpeg sends
nc -l $WATCHER_PORT | while read -r line; do
    echo "$line" | sed -n "s/out_time=\(.*\)/\1 of $DURATION/p"
done &
ffmpeg -y -i "$INPUT_FILE" -progress localhost:$WATCHER_PORT $OUTPUT_ARGS
FFmpeg uses stdout for outputting media data and stderr for logging/progress information. You just have to redirect stderr to a file or to the stdin of a process able to handle it.
With a unix shell this is something like:
ffmpeg {ffmpeg arguments} 2> logFile
or
ffmpeg {ffmpeg arguments} 2>&1 | processFFmpegLog
Anyway, you have to run ffmpeg as a separate thread or process.
Sadly, ffmpeg itself still cannot show a progress bar, and many of the aforementioned bash- or Python-based stop-gap solutions have become dated and nonfunctional.
Thus, I recommend giving the brand-new ffmpeg-progressbar-cli a try:
It's a wrapper for the ffmpeg executable, showing a colored, centered progress bar and the remaining time.
Also, it's open-source, based on Node.js and actively developed, handling most of the mentioned quirks (full disclosure: I'm its current lead developer).
If you just need to hide all the info and show the default progress (as ffmpeg does on its last line), you can use the -stats option:
ffmpeg -v warning -hide_banner -stats ${your_params}
I found the ffpb Python package (pip install ffpb), which passes arguments transparently to FFmpeg. Due to its robustness, it doesn't need much maintenance; the last release is from Apr 29, 2019.
https://github.com/althonos/ffpb
JavaScript should tell PHP to start converting [1] and then do [2] ...
[1] PHP: start the conversion and write its status to a file (see above):
exec("ffmpeg -i path/to/input.mov path/to/output.flv 1>path/to/output.txt 2>&1");
For the second part we just need JavaScript to read the file.
The following example uses dojo.request for AJAX, but you could use jQuery, vanilla JS, or whatever as well:
[2] js: grab the progress from the file:
var _progress = function(i){
    i++;
    // THIS MUST BE THE PATH OF THE .txt FILE SPECIFIED IN [1] :
    var logfile = 'path/to/output.txt';
    /* (example requires dojo) */
    request.post(logfile).then(function(content){
        // AJAX success
        var duration = 0, time = 0, progress = 0;
        var resArr = [];
        // get duration of source
        var matches = (content) ? content.match(/Duration: (.*?), start:/) : [];
        if( matches.length>0 ){
            var rawDuration = matches[1];
            // convert rawDuration from 00:00:00.00 to seconds.
            var ar = rawDuration.split(":").reverse();
            duration = parseFloat(ar[0]);
            if (ar[1]) duration += parseInt(ar[1]) * 60;
            if (ar[2]) duration += parseInt(ar[2]) * 60 * 60;
            // get the time
            matches = content.match(/time=(.*?) bitrate/g);
            console.log( matches );
            if( matches.length>0 ){
                var rawTime = matches.pop();
                // needed if there is more than one match
                if (lang.isArray(rawTime)){
                    rawTime = rawTime.pop().replace('time=','').replace(' bitrate','');
                } else {
                    rawTime = rawTime.replace('time=','').replace(' bitrate','');
                }
                // convert rawTime from 00:00:00.00 to seconds.
                ar = rawTime.split(":").reverse();
                time = parseFloat(ar[0]);
                if (ar[1]) time += parseInt(ar[1]) * 60;
                if (ar[2]) time += parseInt(ar[2]) * 60 * 60;
                // calculate the progress
                progress = Math.round((time/duration) * 100);
            }
            resArr['status'] = 200;
            resArr['duration'] = duration;
            resArr['current'] = time;
            resArr['progress'] = progress;
            console.log(resArr);
            /* UPDATE YOUR PROGRESSBAR HERE with above values ... */
            if(progress==0 && i>20){
                // TODO err - giving up after 8 sec. no progress - handle progress errors here
                console.log('{"status":-400, "error":"there is no progress while we tried to encode the video" }');
                return;
            } else if(progress<100){
                setTimeout(function(){ _progress(i); }, 400);
            }
        } else if( content.indexOf('Permission denied') > -1) {
            // TODO - err - ffmpeg is not executable ...
            console.log('{"status":-400, "error":"ffmpeg : Permission denied, either for ffmpeg or upload location ..." }');
        }
    },
    function(err){
        // AJAX error
        if(i<20){
            // retry
            setTimeout(function(){ _progress(0); }, 400);
        } else {
            console.log('{"status":-400, "error":"there is no progress while we tried to encode the video" }');
            console.log( err );
        }
        return;
    });
}
setTimeout(function(){ _progress(0); }, 800);
These answers that use multiple tools/consoles complicate things too much.
pv is a good option but has the noted drawback of missing non-sequential data.
Just use the progress utility: run ffmpeg as normal, then monitor it from another console with progress -m -c ffmpeg
Calling PHP's system function blocks that thread, so you'll need to spawn one HTTP request to perform the conversion and another, polling one to read the txt file that's being generated.
Or, better yet, have clients submit the video for conversion and make another process responsible for performing it. That way the client's connection won't time out while waiting for the system call to terminate. Polling is done in the same way as above.
I had problems with the second PHP part, so I am using this instead:
$log = @file_get_contents($txt);
preg_match("/Duration:([^,]+)/", $log, $matches);
list($hours, $minutes, $seconds) = explode(":", $matches[1]);
$seconds = (($hours * 3600) + ($minutes * 60) + $seconds);
$seconds = round($seconds);
$page = join("",file("$txt"));
$kw = explode("time=", $page);
$last = array_pop($kw);
$values = explode(' ', $last);
$curTime = round($values[0]);
$percent_extracted = round((($curTime * 100)/($seconds)));
This outputs perfectly.
I would still like to see something for multiple uploads with another progress bar: this one passes the percentage for the current file, and an overall progress bar would sit on top of that. Almost there.
Also, if people are having a hard time getting:
exec("ffmpeg -i path/to/input.mov path/to/output.flv 1> path/to/output.txt 2>&1");
to work, try:
exec("ffmpeg -i path/to/input.mov path/to/output.flv 1>path/to/output.txt 2>&1");
"1> path" to "1>path" OR "2> path" to "2>path"
Took me awhile to figure it out. FFMPEG kept failing. Worked when I changed to no space.
By taking the progress bar from this link, I created a simple script that shows the actual frame of the video being encoded and displays a progress bar at the bottom of it... at the same time that ffmpeg encodes the video, of course.
First, we have to get the duration, width and height of the video to create the bar. As the color filter can't get this information from the file, we have to get it first with ffprobe and then use it with ffmpeg.
#!/bin/bash
video_duration=`ffprobe -v quiet -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$1"`
video_width=`ffprobe -v quiet -select_streams v -show_entries stream=width -of csv=p=0:s=x "$1"`
video_height=`ffprobe -v quiet -select_streams v -show_entries stream=height -of csv=p=0:s=x "$1"`
five_percent=`expr $video_height / 20`
#echo $video_duration
#echo $video_width
#echo $video_height
#echo $five_percent
ffmpeg -i "$1" -filter_complex "color=c=red:s='$video_width'x$five_percent[bar];[0][bar]overlay=-w+(w/$video_duration)*t:H-h:shortest=1[bar]" "$2" -map [bar] -f xv display
Then, use the script as:
sh encode_with_bar.sh video_in.mkv video_out.mp4
Performance: the filter used is very simple... but everything added consumes additional CPU. Testing a 10MB video file on my computer, this is the difference:
Without script: 14.46 seconds
With script: 17.05 seconds (18% more)
Yes, almost 20% more. For short videos, it's nice. For larger files, it's probably not a good idea 🤷.
This is my solution:
I use ffpb and Python's subprocess to track ffmpeg's progress, then I push the status to a database (e.g. Redis) to display a progress bar on a website.
import subprocess

cmd = 'ffpb -i 400MB.mp4 400MB.avi'

process = subprocess.Popen(
    cmd,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    shell=True,
    encoding='utf-8',
    errors='replace'
)

while True:
    realtime_output = process.stdout.readline()
    if realtime_output == '' and process.poll() is not None:
        break
    if realtime_output:
        print(realtime_output.strip(), flush=True)
        print('Push status to Redis...')
After some investigation of this library I see that the implementation should follow these steps:
Calculate the resulting duration, e.g. get the initial duration (subtract any trimmed seconds if required):
ffprobe -v error -show_entries format=duration -of csv=p=0 input.mkv
The result must be parsed as a float and stored, e.g. in JavaScript:
const duration = parseFloat(result);
In the task, specify an additional flag so ffmpeg will output the processed duration:
ffmpeg -i input.mkv -progress pipe:1 ...
Or the same but pipe to stderr:
ffmpeg -i input.mkv -progress pipe:2 ...
Extract the processed duration (from stdout or stderr), e.g. in JavaScript:
const matchedResult = chunkResult.match(/out_time_ms=(\d+)/);
if (Array.isArray(matchedResult)) {
const currentTimeMs = Math.round(Number(matchedResult[1]) / 1000000);
}
Compare the processed duration with the total duration:
currentTimeMs / duration
Keeping it simple… here's a brief summary of the possibilities for using a progress bar.
No problem with pv (istefani's suggestion) for converting, e.g., YouTube videos. It does not work for videos downloaded from Odysee though; I got this error message:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fee39005400] stream 0, offset 0x30: partial file
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fee39005400] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none(tv, bt709), 1280x720, 604 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
I ended up using ffpb, which still works great for me; no issues so far.
Using progress -m -c ffmpeg … is interesting, but one needs to open another console to run it after executing the normal ffmpeg command in the first console (not convenient if running from a shell script).
ffmpeg-progress-yield seems to be an excellent alternative to ffpb, but does not show (at least, for me) the name of the video being converted; it shows “test” instead of the actual filename.
Finally, ffmpeg-progressbar-cli is very similar to ffpb and ffmpeg-progress-yield, but apparently is no longer maintained; I haven't tried it.
