How to properly compress single-image videos with ffmpeg?

I have a video that I am converting through an automated system. It is 7 MB and is basically an MP3 with a single image.
When it is converted it magically becomes 17 MB. My guess is that it's looping through the images instead of compressing them. The video was downloaded from YouTube.
Here is the command I'm converting it with:
/usr/local/bin/ffmpeg -i '/home/site/www-video/Upload/Temp/9d40b683eb2e8e8a036d64c741d04e01.flv' -pass 1 -vcodec libx264 -vpre fast_firstpass -s 480x360 -g 12 -fs 524288000 -vsync 2 -threads 0 -f rawvideo -an -y /dev/null
&&
/usr/local/bin/ffmpeg -i '/home/site/www-video/Upload/Temp/9d40b683eb2e8e8a036d64c741d04e01.flv' -pass 2 -acodec copy -vcodec libx264 -vpre fast -b 512k -g 12 -s 480x360 -fs 524288000 -vsync 2 -threads 0 -y /home/site/www-video/Upload/Temp/15616/video.flv
As you can see, I'm converting it to the same format, yet it magically gains 10 MB.

I fixed the problem: ffmpeg was increasing the bitrate. I had to write some PHP code that reads the source video's bitrate and, if it is lower than 512k, sets the output bitrate to match it.
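For anyone hitting the same issue, here is a rough shell sketch of that logic using ffprobe instead of PHP (input.flv and output.flv are placeholders, ffprobe is assumed to be installed, and the 512k cap matches the command above):
# Read the overall bitrate of the source file, in bits per second.
SRC_BR=$(ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1:nokey=1 input.flv)
# Use the source bitrate if it is below 512k, otherwise cap at 512k.
if [ "$SRC_BR" -lt 512000 ]; then OUT_BR="$SRC_BR"; else OUT_BR=512000; fi
# Re-encode with the capped bitrate so the output cannot balloon past the source.
ffmpeg -i input.flv -acodec copy -vcodec libx264 -b:v "$OUT_BR" -s 480x360 -y output.flv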

Related

How to overlay a PNG onto a piped video source with audio mixed in via ffmpeg?

I'm successfully streaming silent video with music added from my Raspberry Pi (Raspbian) to YouTube via ffmpeg, with the help of this GitHub gist and this post:
raspivid -o - -t 0 -vf -hf -w 1280 -h 720 -fps 25 -b 4000000 | \
ffmpeg -i music.wav \
-f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental \
-f flv rtmp://a.rtmp.youtube.com/live2/STREAMKEY
The last step of my project is to add a transparent, full-width/height PNG overlay to the video (1280x720 in my case). I've seen a few related answers such as this one and this one.
With the added complexity of piping in a camera feed, mixing in an audio source and outputting to a video stream, I haven't succeeded in adding the image overlay. Where/how would I add a transparent image overlay in the example above?
The ffmpeg part will be:
ffmpeg -i music.wav \
-f h264 -i - -i overlay.png \
-filter_complex "[1][2]overlay" \
-vcodec libx264 -preset ultrafast -tune zerolatency -acodec aac -ab 128k -g 50 -strict experimental \
-f flv rtmp://a.rtmp.youtube.com/live2/STREAMKEY
Since you're altering the video contents, copy can't be used, and the video has to be re-encoded.
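Combining that with the raspivid pipe from the question, the whole pipeline would look roughly like this (an untested sketch; overlay.png and STREAMKEY are placeholders):
raspivid -o - -t 0 -vf -hf -w 1280 -h 720 -fps 25 -b 4000000 | \
ffmpeg -i music.wav \
-f h264 -i - -i overlay.png \
-filter_complex "[1][2]overlay" \
-vcodec libx264 -preset ultrafast -tune zerolatency -acodec aac -ab 128k -g 50 -strict experimental \
-f flv rtmp://a.rtmp.youtube.com/live2/STREAMKEY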

What command converts an mjpeg IP camera stream to an mp4 file with the lowest CPU usage?

Like the question above, I want to find out what ffmpeg command can help me reduce CPU usage when running 50 IP cameras (running the same command 50 times).
My ffmpeg command:
ffmpeg -f mjpeg -y -use_wallclock_as_timestamps 1 -i 'http://x.x.x.x:8090/test1?.mjpg' -r 3 -reconnect 1 -loglevel 16 -c:v mjpeg -an -qscale 10 -copyts '1.mp4'
50 commands like that take 200% CPU on my computer (4 cores).
I want this computer to be able to handle 150 cameras; any advice?
=========================================================
Using -c:v copy makes it faster, but the file size is terrible.
I tried slowing the frame rate down to 3 with -r 3 or -framerate 3 to decrease the file size, but without success (because vcodec copy can't do that).
Is there any option to force the input frame rate to 3?
(sorry for my bad English)
By setting -c:v mjpeg you are decoding and re-encoding the stream. Set -c:v copy to copy the data without re-encoding it.
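Based on the command in the question, the stream-copy variant would look roughly like this (a sketch: -r and -qscale are dropped because they have no effect with stream copy, and -reconnect is moved before -i since it is an input protocol option):
ffmpeg -f mjpeg -use_wallclock_as_timestamps 1 -reconnect 1 \
  -i 'http://x.x.x.x:8090/test1?.mjpg' \
  -loglevel 16 -c:v copy -an -copyts -y '1.mp4'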
ffmpeg -re -i 'rtsp://user:password@10.10.10.30/rtsp_tunnel' -pix_fmt yuv420p -c:v libx264 -preset ultrafast -profile:v baseline -crf 18 -f h264 udp://0.0.0.0:3001

BMDCapture (bmdtools) capture with Ultrastudio Mini Recorder results in color problems

I'm trying to get a Blackmagic Ultrastudio Mini Recorder to stream via avconv to HLS. To test, it's hooked up to an AppleTV and this is the command I'm using:
./bmdcapture -m 14 -C 0 -F nut -f pipe:1 | avconv -vsync passthrough -y -i - -vcodec copy -pix_fmt yuyv422 -strict experimental -f hls -hls_list_size 999 +live -strict experimental out.m3u8
However, the colors are all messed up, suggesting the color format is set incorrectly. The input format is 1280x720 @ 59.94 FPS (which is correct) and I've set the format to yuyv422 (though nothing else I've set it to has fixed the error).
Got it!
The Mini Recorder captures at 10 bits rather than 8 (which I had assumed, since Adobe's live encoder said it would be 8).
Here is the fixed code:
./bmdcapture -m 14 -p yuv10 -C 0 -F nut -f pipe:1 | avconv -vsync passthrough -y -i - -vcodec copy -pix_fmt uyvy422 -strict experimental -f hls -hls_list_size 999 +live -strict experimental out.m3u8

Convert raw .y4m video to mpg video with a certain GOP

I want to compress a raw .y4m video to mpg and then extract the frames from the mpg video. I need the GOP of the compression to be IBBPBBPBBPBBPBBIBBP... (15:2).
I used this command:
ffmpeg -i video.y4m -vcodec libx264 -sameq -y -r 30 output.avi 2>list.txt
ffmpeg -i output.avi -vcodec libx264 -sameq -vf showinfo -r 30 -f image2 -y image%3d.jpeg 2>list1.txt
The output contains only 2 I-frames, 100 P-frames and 198 B-frames; that is not a 15:2 GOP. What should I do?
I need one I-frame every 15 frames, and the pattern to be IBBPBBP...
Sorry, I'm new to ffmpeg. Please help me; this is the input to my project and an important step for me.
Try this (according to http://ffmpeg.org/ffmpeg.html#Video-Encoders):
ffmpeg -i video.y4m -vcodec libx264 -g 15 -y -r 30 output.avi
I think the -sameq option (meaning "same quantizers") is not needed in your case.
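If you need the GOP pattern enforced strictly, libx264 can be pinned down further (a sketch, assuming an ffmpeg build where -x264-params is available; otherwise -x264opts takes the same string). Here keyint=15 fixes the GOP length, bframes=2 with b-adapt=0 forces exactly two B-frames between references, and scenecut=0 stops extra I-frames from being inserted:
ffmpeg -i video.y4m -vcodec libx264 -r 30 \
  -x264-params "keyint=15:min-keyint=15:bframes=2:b-adapt=0:scenecut=0" \
  -y output.avi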

How to get better quality converting MP4 to WMV with ffmpeg?

I am converting MP4 files to WMV with these two rescaling commands:
ffmpeg -i test.mp4 -y -vf scale=-1:360 test1.wmv
ffmpeg -i test.mp4 -y -vf scale=-1:720 test2.wmv
I've also tried:
ffmpeg -g 1 -b 16000k -i test1.mp4 test1.wmv
However, the .wmv files that are produced are blocky and grainy, as can be seen in a small section of a video screenshot.
These are the sizes:
test.mp4 - 106 MB
test1.wmv - 6 MB
test2.wmv - 16 MB
How can I increase the quality/size of the resulting .wmv files (the size of the .wmv files is of no concern)?
Consider the following command instead (some of the commands in the other answers are outdated):
ffmpeg -i test.mp4 -c:v wmv2 -b:v 1024k -c:a wmav2 -b:a 192k test1.wmv
REFERENCES
https://askubuntu.com/questions/352920/fastest-way-to-convert-videos-batch-or-single
You can simply use the -sameq parameter ("use same quantizer as source"), which produces a much larger video file (227 MB) but with excellent quality.
ffmpeg -sameq -i test.mp4 -y -vf scale=-1:360 test1.wmv
In newer versions of ffmpeg the '-sameq' flag has been removed. To achieve similar results, use the 'qscale' flag with a value of 0:
ffmpeg -i test.mp4 -qscale 0 -vf scale=-1:360 test1.wmv
Working answer in 2020, producing an output video without blockiness:
ffmpeg -i input.mp4 -q:v 1 -q:a 1 output.wmv
One thing I discovered after many frustrating attempts at improving the final quality: if you don't specify a bitrate, ffmpeg uses a rather low average. Try -b 1000k as a starting point, and experiment with increasing or decreasing it until you reach the desired result. Your file will be correspondingly bigger or smaller.
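For example, something along these lines, using the file names from the question (a sketch; -b:v is the video-specific form of -b, and the value is just a starting point to adjust to taste):
ffmpeg -i test.mp4 -b:v 1000k -vf scale=-1:360 test1.wmv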
I used this and it turned out quite well:
ffmpeg -i "file1.mp4" -q:v 0 -c:v wmv2 -b:v 1024k -c:a wmav2 -b:a 192k test2.wmv
