How to get better quality converting MP4 to WMV with ffmpeg?

I am converting MP4 files to WMV with these two rescaling commands:
ffmpeg -i test.mp4 -y -vf scale=-1:360 test1.wmv
ffmpeg -i test.mp4 -y -vf scale=-1:720 test2.wmv
I've also tried:
ffmpeg -g 1 -b 16000k -i test1.mp4 test1.wmv
However, the .wmv files that are produced are noticeably blocky and grainy.
These are the sizes:
test.mp4 - 106 MB
test1.wmv - 6 MB
test2.wmv - 16 MB
How can I increase the quality/size of the resulting .wmv files (the size of the .wmv files is of no concern)?

Consider the following command instead (note that the final answer section of the referenced post contains some outdated commands):
ffmpeg -i test.mp4 -c:v wmv2 -b:v 1024k -c:a wmav2 -b:a 192k test1.wmv
REFERENCES
https://askubuntu.com/questions/352920/fastest-way-to-convert-videos-batch-or-single

You can simply use the -sameq parameter ("use same quantizer as source"), which produces a much larger video file (227 MB) but with excellent quality.
ffmpeg -sameq -i test.mp4 -y -vf scale=-1:360 test1.wmv
In newer versions of ffmpeg the '-sameq' flag has been removed. To achieve similar results, use the '-qscale' flag with a value of 0:
ffmpeg -i test.mp4 -y -qscale 0 -vf scale=-1:360 test1.wmv

Working answer in 2020, producing an output video without blockiness:
ffmpeg -i input.mp4 -q:v 1 -q:a 1 output.wmv

One thing I discovered after many frustrating attempts at improving the final quality: if you don't specify a bitrate, ffmpeg uses a quite low average. Try -b 1000k as a starting point, and experiment with increasing or decreasing it until you reach the desired result. Your file will be correspondingly bigger or smaller.
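For example, a minimal sketch assuming the same test.mp4 input and the WMV2/WMA2 codecs used elsewhere in this thread (the 4000k value is only an illustrative starting point, not a recommendation):
ffmpeg -i test.mp4 -vf scale=-1:720 -c:v wmv2 -b:v 4000k -c:a wmav2 -b:a 192k test2.wmv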

I used this and it turned out quite well:
ffmpeg -i "file1.mp4" -q:v 0 -c:v wmv2 -b:v 1024k -c:a wmav2 -b:a 192k test2.wmv

Related

mp4 video written with ffmpeg has different first frame based on total number of frames

I'm trying to read and write videos using ffmpeg, and I've run into an interesting phenomenon: the first frame differs between videos I create from the same frames, only with different lengths.
The commands I'm running to reproduce the problem:
ffmpeg -i <some_video>.mp4 -frames:v 20 -q:v 3 resource_images/00%04d.png
ffmpeg -hide_banner -loglevel error -framerate 30 -y -i resource_images/00%04d.png -c:v libx264 -pix_fmt yuv420p -frames:v 20 long_video.mp4 -y
ffmpeg -hide_banner -loglevel error -framerate 30 -y -i resource_images/00%04d.png -c:v libx264 -pix_fmt yuv420p -frames:v 10 short_video.mp4 -y
ffmpeg -i long_video.mp4 -vf "select=eq(n,0)" -q:v 3 long_frame0.png -y
ffmpeg -i short_video.mp4 -vf "select=eq(n,0)" -q:v 3 short_frame0.png -y
The images long_frame0.png and short_frame0.png are different (I loaded them using Python and compared them, there are many differences).
I find it very peculiar: these are very short videos, I'm comparing their first frames, and those frames are keyframes of their respective videos (I checked using ffprobe).
What is the cause of this issue and how do I overcome it to create a consistent first frame for a video, regardless of the video length?
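For reference, one way to confirm which frames are keyframes is an ffprobe query along these lines (a sketch, not necessarily the exact check used above):
ffprobe -v error -select_streams v:0 -show_entries frame=key_frame,pict_type,pts_time -of csv short_video.mp4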

How to replace the video track in a video file with a still image?

I am trying to use ffmpeg to replace the video track in a video file with a still image. I tried some commands I found in other questions, such as these:
ffmpeg -i x.png -i orig.mp4 final.mp4
ffmpeg -r 1/5 -i x.png -r 30 -i orig.mp4 final.mp4
But these didn't work, and I'm not sure which of the arguments are actually required. The output should be accepted by YouTube as a valid video; I was able to simply remove the video track, but apparently you can't upload a video without one.
You can try looping the still image like this:
ffmpeg -loop 1 -i x.png -i orig.mp4 final.mp4
Then you can tweak the encoding process by introducing the following quality parameters:
ffmpeg -loop 1 -i x.png -i orig.mp4 -crf 22 -preset slow final.mp4
These options are described in the FFmpeg H.264 encoding guide (-crf controls quality, -preset trades encoding speed for compression efficiency).
If your colorspace gets rejected by YouTube you can try adding: -pix_fmt yuv420p.
Solution: A final command looks something like this:
ffmpeg -loop 1 -i x.png -i orig.mp4 -map 0 -map 1:a -c:v libx264 -pix_fmt yuv420p -crf 22 -preset slow -c:a copy -shortest final.mp4
-shortest ends the output when the shortest input (the audio) ends; alternatively you can set an explicit duration, e.g. -t 30 for 30 seconds.
Using -c:a copy directly copies the original audio without re-encoding it, which is faster.
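Before uploading, a quick ffprobe check (a sketch) can confirm that the output actually contains both a video and an audio stream:
ffprobe -v error -show_entries stream=index,codec_type,codec_name -of csv final.mp4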

How to specify how lossy/lossless a .webm conversion will be (in ffmpeg)?

I can't seem to understand how to make the conversion lossless (or at least visually lossless). The output has some fast-moving parts at times and becomes blocky there; I would like to keep it as lossless as possible while still maintaining some compression. What would I have to tweak at the command line? Thank you.
ffmpeg -c:v libvpx-vp9 -i in.webm -c:v libvpx -vf scale=400:416,hue=h=45:s=1 -auto-alt-ref 0 out.webm
According to FFmpeg Wiki: VP9, "two-pass is the recommended encoding method for libvpx-vp9 as some quality-enhancing encoder features are only available in 2-pass mode". Example of your command:
ffmpeg -c:v libvpx-vp9 -i in.webm -c:v libvpx-vp9 -vf scale=400:416,hue=h=45:s=1 -b:v 0 -crf 30 -pass 1 -an -f null /dev/null
ffmpeg -c:v libvpx-vp9 -i in.webm -c:v libvpx-vp9 -vf scale=400:416,hue=h=45:s=1 -b:v 0 -crf 30 -pass 2 -c:a copy output.webm
The CRF value can be from 0–63. Lower values mean better quality. Recommended values range from 15–35, with 31 being recommended for 1080p HD video. For more info see Google - Getting Started with VP9.
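If file size is a secondary concern, libvpx-vp9 also offers a true lossless mode; a sketch based on the command above (same filters, audio copied) would be:
ffmpeg -c:v libvpx-vp9 -i in.webm -c:v libvpx-vp9 -vf scale=400:416,hue=h=45:s=1 -lossless 1 -c:a copy output.webm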

FFMPEG: How to avoid audio/video desync in output of crossfaded clips when input is variable frame rate video

I'm doing screen recordings of gameplay (Dota 2) using my NVIDIA GeForce Experience hardware recording (NVENC encoder). This creates a variable frame rate output video. My NVIDIA settings are 60 fps and 15000 kbps. I have paid a guy to make a program that generates scripts which, given start/stop timepoints, extract clips from the video and merge them with a crossfade. See the example code below. The script works for many input recordings but fails often: the audio and video are desynchronized (usually an audio delay of about 0.5 seconds) in many of the clips. I think it fails more when the frame rate dropped more during recording. He does not know how to fix the problem, and I wonder if anyone could point out whether anything in the script (example below) could be fixed?
Processing speed is quite important (currently, making a 10 min 'highlight' video takes about 7-10 min), so solutions that increase processing time substantially are unfortunately not of much interest. His approach has been to work on audio and video separately and merge them at the end. He already has a program that generates ffmpeg commands for various scenarios (also adding overlays, music, intro/outro), so small fixes to his code would be preferable to a dramatic redesign of the logic. But if nothing else fixes the problem, a redesign is OK. Using tools other than ffmpeg is also OK, but it should be automatable (scripts/CLI) and not increase processing times too much.
Running the program "mediainfo" on the input video shows that framerate dropped quite low for this input video:
Frame rate mode: Variable
Frame rate : 60.000 FPS
Minimum frame rate: 3.059 FPS
Maximum frame rate: 63.739 FPS
Full report here: https://pastebin.com/TX061Wih
The input video can be downloaded from dropbox here (6 GB):
https://www.dropbox.com/s/ftwdgapazbi62pr/fullgame.mp4?dl=0
Here is an example of a script asked to extract two clips from the input video, at 9:57 (41 s long) and 15:45 (28 s long), and merge them with a 0.5 s crossfade. There might be some code remnants from options that are not used in this example (overlays, music, intro/outro). With the input video above, this creates audio/video desync.
6 commands executed in sequence:
ffmpeg.exe -loglevel warning -ss 00:09:57 -i fullgame.mp4 -t 00:00:41 -filter_complex "[0:a]afade=t=out:st=40.5:d=0.5[a1]" -map "[a1]" -y out_temp_00.mp4.wav
ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:09:57 -t 00:00:41 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_00.mp4.ts
ffmpeg.exe -loglevel warning -ss 00:15:45 -i fullgame.mp4 -t 00:00:28 -filter_complex "[0:a]afade=t=in:st=0:d=0.5[a1]" -map "[a1]" -y out_temp_01.mp4.wav
ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_01.mp4.ts
ffmpeg.exe -loglevel warning -i out_temp_00.mp4.wav -i out_temp_01.mp4.wav -y -filter_complex "[0:a]adelay=0|0[a0];[1:a]adelay=40500|40500[a1];[a0][a1]amix=inputs=2:dropout_transition=68.5,atrim=duration=68.5[outa0];[outa0]loudnorm[outa]" -map "[outa]" -ar 48000 -acodec aac -strict -2 fullgame_Output.mp4.aac
ffmpeg.exe -loglevel warning -i out_temp_00.mp4.ts -i out_temp_01.mp4.ts -y -i fullgame_Output.mp4.aac -filter_complex "[0:v]trim=start=0.5,setpts=PTS-STARTPTS[0c];[1:v]trim=start=0.5,setpts=PTS-STARTPTS[1c];[0:v]trim=40.5:41,setpts=PTS-STARTPTS[fo];[1:v]trim=0:0.5[fi];[fi]format=pix_fmts=yuva420p,fade=t=in:st=0:d=0.5:alpha=1[z];[fo]format=pix_fmts=yuva420p,fade=t=out:st=0:d=0.5:alpha=1[x];[z]fifo[w];[x]fifo[q];[q][w]overlay[r];[0c][r][1c]concat=n=3[outv]" -map "[outv]" -map 2:a -shortest -acodec copy -vcodec libx264 -preset ultrafast -b 15000k -aspect 1920:1080 fullgame_Output.mp4
P.S.
I already asked for help in an ffmpeg chat room. One guy said he knew what the problem was, but didn't know how to fix it(?):
[00:10] <kepstin> oh, wait, you're using -vcodec copy
[00:10] <kepstin> that explains everything.
[00:10] <kepstin> when you're using -vcodec copy, the start time (set with -ss) is rounded to the nearest keyframe
[00:10] <kepstin> it's not exact
[00:11] <kepstin> depending on the keyframe interval, this will result in possibly quite large shifts
[00:11] <kepstin> (also, your commands are applying audio filters on commands with -an, which is confusing/contradictory)
[00:12] <birdboy88> so the problem is that the audio temporary clips are not being extracted from the same exact timepoints?
[00:13] <kepstin> birdboy88: yeah, your audio is being re-encoded to wav so it's being cut sample-accurate, but the video's not being precisely cut.
[00:16] <birdboy88> kepstin: so I need to use slow seek (?) to extract video accurately? Or somehow extract audio only where there are video keyframes?
[00:17] <kepstin> birdboy88: i don't know how to extract audio starting at video keyframes with ffmpeg cli. You're already doing slow seek, which doesn't help (you should move the -ss option to before the -i option to speed it up)
[00:17] <kepstin> if you want accurate video cutting when saving to a file, you have to re-encode the video
[00:18] <kepstin> (doing this in a single ffmpeg command means you don't have to save to a file, so you can avoid the issue)
[00:18] * kepstin is off for a bit now
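To illustrate kepstin's point: an accurate cut requires re-encoding the video instead of -vcodec copy, and placing -ss before -i speeds up the seek. A sketch (not part of the original script; the codec settings simply mirror the final encode used further down) might look like:
ffmpeg -ss 00:09:57 -t 00:00:41 -i fullgame.mp4 -c:v libx264 -preset ultrafast -b:v 15000k -c:a aac out_temp_00_accurate.mp4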
EDIT:
Everything is done with the latest ffmpeg version.
I was unable to get Gyan's code to work. It always loses some audio (the audio track ends up being either 40.5 s or 27.5 s long, so only one of the audio clips is used). This is the only version working for me (the changes were adelay=40500|40500 and amix=inputs=2[a0];[a0]loudnorm):
ffmpeg -i fullgame.mp4 -filter_complex "[0]split=2[vpre][vpost];
[0]asplit=2[apre][apost];
[vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=40500|40500[apost-t];
[vpre-t][vpost-t]overlay[v];
[apre-t][apost-t]amix=inputs=2[a0];[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Then I tried a similar setup with 3 clips, but on one machine I got the error "Error while filtering: Cannot allocate memory", and on my machine with 16 GB of memory the processing speed is 0.02x! Any way to avoid this? This is the code I tried:
ffmpeg -i fullgame.mp4 -filter_complex "[0]split=3[vpre][vpost][v3];
[0]asplit=3[apre][apost][a3];
[vpre]trim=start=357:duration=41,setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start=357:duration=41,asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start=795:duration=28,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,fade=t=out:st=40.5:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start=795:duration=28,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,afade=t=out:st=27.5:d=0.5,adelay=40500|40500[apost-t];
[v3]trim=start=95:duration=30,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5,setpts=PTS+41+28-0.5/TB[v3-t];
[a3]atrim=start=95:duration=30,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=68500|68500[a3-t];
[vpre-t][vpost-t]overlay[v1];
[v1][v3-t]overlay[v];
[apre-t][apost-t][a3-t]amix=inputs=3[a0];
[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Just do it in one command.
Besides the keyframe seek issue, which is real, your present sequence has an error in the last command: [0:v]trim=start=0.5...[0c] trims out the first 0.5 seconds and will cause a desync of its own. Since this is the first clip, it should be [0:v]trim=0:40.5.
The full single command should be
ffmpeg -i fullgame.mp4 -filter_complex
"[0]split=2[vpre][vpost];[0]asplit=2[apre][apost];
[vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5[apost-t];
[vpre-t][vpost-t]overlay[v];
[apre-t][apost-t]acrossfade=d=0.5,loudnorm,aresample=48000[a]"
-map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Your original sequence had -strict -2 for audio AAC encoding. That hasn't been needed since Dec 2015. You have a very old version of ffmpeg if your ffmpeg throws an error without it. Upgrade first.
I did not test the above with your file, as it will take too long to filter 16 min of Full HD 60 fps video, but I tested the below faster command and it works fine with the latest git build of ffmpeg:
ffmpeg -ss 00:09:57 -t 00:00:41 -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -i fullgame.mp4 -filter_complex
"[0]afade=t=out:st=40.5:d=0.5[apre-t];
[1]format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[1]afade=t=in:st=0:d=0.5[apost-t];
[0][vpost-t]overlay[v];
[apre-t][apost-t]acrossfade=d=0.5,loudnorm,aresample=48000:ocl=stereo[a]"
-map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4

What command convert mjpeg IP camera streaming to mp4 file with lowest CPU usage?

Like the question above, I want to find out which ffmpeg command can help me reduce CPU usage when running 50 IP cameras (running the same command 50 times).
My ffmpeg command:
ffmpeg -f mjpeg -y -use_wallclock_as_timestamps 1 -i 'http://x.x.x.x:8090/test1?.mjpg' -r 3 -reconnect 1 -loglevel 16 -c:v mjpeg -an -qscale 10 -copyts '1.mp4'
50 commands like that take 200% CPU on my computer (4 cores).
I want this computer to handle 150 cameras; any advice?
=========================================================
Using -c:v copy makes it faster, but the file size is terrible.
I tried to slow the frame rate down to 3 with -r 3 or -framerate 3 to decrease the file size, but it wasn't successful (because vcodec copy can't do that).
Is there any option to force the input frame rate to 3?
(sorry for my bad English)
By setting -c:v mjpeg you are decoding and re-encoding the stream. Set -c:v copy to copy the data without re-encoding it.
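A sketch of the copy variant of the original command (same hypothetical stream URL; note that -r and -qscale have no effect when the video is only copied):
ffmpeg -f mjpeg -y -use_wallclock_as_timestamps 1 -i 'http://x.x.x.x:8090/test1?.mjpg' -c:v copy -an -loglevel 16 '1.mp4'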
ffmpeg -re -i 'rtsp://user:password@10.10.10.30/rtsp_tunnel' -pix_fmt yuv420p -c:v libx264 -preset ultrafast -profile:v baseline -crf 18 -f h264 udp://0.0.0.0:3001
