Why is the converted video's file size greater than the original file size? - ffmpeg

I am using ffmpeg to convert videos to MP4. It's working fine and the output plays with high quality, no problem. But the worst case is that I uploaded a 14 MB file and after converting it becomes a 30 MB file. I am using the following script to convert:
exec("ffmpeg -i videowithaudio.flv -vcodec libx264 -vpre hq -vpre ipod640 -b 250k -bt 50k -acodec libfaac -ab 56k -ac 2 -s 480x320 video_out_file.mp4 > output1.txt 2> apperror1.txt"); //webkit compatible
I am using PHP to execute this command. Could you please help me reduce the file size from 30 MB (getting close to the uploaded file size is OK) while keeping the same quality?

Files converted from FLV to MP4 will usually end up larger than the source file. FLV files are generally smaller than other formats; that's why YouTube converts all files to FLV.
You can use the -sameq parameter to retain the quality of the video while getting a smaller output file.
Example 1:
ffmpeg -i input.flv -sameq -ar 22050 output.mp4
Example 2:
exec("/usr/bin/ffmpeg -y -i input.flv -acodec libfaac -sameq -ar 44100 -ab 96k -coder ac -me_range 16 -subq 5 -sc_threshold 40 -b 1600k -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -i_qfactor 0.71 -keyint_min 25 -b_strategy 1 -g 250 -r 20 output.mp4");
I created this command after a lot of searching and it fulfills my requirements; using it you can get a somewhat smaller file size with the same quality.
Hope this works for you also.

You should try playing around with your key frame interval (-g). Frames between key frames only store the pixels that differ from the previous frame. If your key frames are too far apart, the intermediate frames end up carrying most of the pixels anyway (increasing size); if they are too close together, the sheer number of key frames increases the file size.
Note that the optimal key frame interval will be different for each video, so you need to find a middle ground.
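For example, a minimal sketch based on the command from the question (the -g 50 here, i.e. one key frame every two seconds at an assumed 25 fps, is only a starting point to experiment with):
ffmpeg -i videowithaudio.flv -vcodec libx264 -vpre hq -vpre ipod640 -b 250k -bt 50k -g 50 -acodec libfaac -ab 56k -ac 2 -s 480x320 video_out_file.mp4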

exec("ffmpeg -i videowithaudio.flv -vcodec libx264 -vpre hq -vpre ipod640 -b 250k -bt 50k -acodec libfaac -ab 56k -ac 2 -s 480x320 video_out_file.mp4 > output1.txt 2> apperror1.txt"); //webkit compatible
Important for the file size is the bitrate. The bitrate specifies how many bits to use per second of video. If you decrease the bitrate, the file size will also become smaller.
You are currently using 250 kbit/s for video (-b 250k) and 56 kbit/s for audio (-ab 56k), so you have to decrease those numbers. For example, you can try -b 100k -ab 32k. But keep in mind that the quality will also decrease when you decrease the bitrate. If the quality becomes too bad, you can also decrease the frame rate or the frame size, so the remaining bits are spread over fewer pixels per second.
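For instance, a sketch of the command above with those lower bitrates (the numbers, including the proportionally reduced -bt tolerance, are only a starting point to adjust after checking the quality):
ffmpeg -i videowithaudio.flv -vcodec libx264 -vpre hq -vpre ipod640 -b 100k -bt 25k -acodec libfaac -ab 32k -ac 2 -s 480x320 video_out_file.mp4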

Related

ffmpeg how to show each frame for 1 second?

I have a video, recorded at 7 fps. It's 17 seconds long and has 122 frames.
I want to keep all frames but show them 1 per second; I want the same video to last 122 seconds. I don't want to lose information, but I also don't want the file size to increase.
How can I do that? All the ffmpeg options I see change the frame rate but keep the duration or create/drop frames.
You want to slow down your video without re-encoding it so you can (possibly) keep the file size. You must strip the timestamps by exporting the video to a raw bitstream format:
ffmpeg -i input.mp4 -map 0:v -c:v copy -bsf:v h264_mp4toannexb raw.h264
and then remux it with:
ffmpeg -fflags +genpts -r 1 -i raw.h264 -c:v copy output.mp4
-r 1 sets the framerate to 1.
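To double-check the result, ffprobe (assuming it is installed alongside ffmpeg) can report the new duration, which should be roughly 122 seconds:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 output.mp4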

While ffmpeg is recording, I want it to create a smaller and lower quality video

Currently I am using this...
ffmpeg -video_size 1920x1080 -framerate 1 -f x11grab -i :0.0+0,0 -f pulse -ac 2 -i default -t 00:00:10 Output.mkv
While ffmpeg is recording a video, I want it to significantly reduce both the size and quality compared to the ffmpeg command above.
In case you are curious, I am recording brief quality assurance videos to ensure a simple little web scraper I wrote in Python is scraping data properly (specifically, that it is clicking on a particular button, at a particular time, on a particular web page). My Python script triggers the command above to start recording my screen a few seconds before my Python script is supposed to click on that button.
Of course, to verify that a button on a web page has been clicked, a low-quality, low-resolution video normally suffices.
For libx264/libx265 the most important option to reduce both the size and quality is -crf. This option controls quality. A value of 51 provides the worst quality. If it's too terrible then use a lower number.
ffmpeg -video_size 1920x1080 -framerate 1 -f x11grab -i :0.0+0,0 -f pulse -channels 2 -i default -t 00:00:10 -c:v libx264 -crf 51 -c:a libopus Output.mkv
See FFmpeg Wiki: H.264.
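If -crf 51 turns out to be unwatchable, a middle ground (just a sketch; the 960-pixel width and -crf 35 are only example values) is to combine a moderate CRF with a downscale:
ffmpeg -video_size 1920x1080 -framerate 1 -f x11grab -i :0.0+0,0 -f pulse -channels 2 -i default -t 00:00:10 -vf scale=960:-2 -c:v libx264 -crf 35 -c:a libopus Output.mkv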
To significantly reduce both the size and quality of Output.mkv you can use the following ffmpeg configuration:
crop: iw-(cut width in pixels):ih-(cut height in pixels)
scale: to set the aspect ratio and resolution (example: 1700x800)
crf: to set quality, where 0 is lossless, 23 is the default, and 51 is the worst possible. A lower value is higher quality, and a subjectively sane range is 18-28. Consider 18 to be visually lossless.
bitrate: -b:v value, -minrate value and -maxrate max value (example: -b:v 4000K -minrate 2000K -maxrate 6000K)
preset: in theory slow gives the best quality/size trade-off; you can also try ultrafast, superfast, veryfast, faster, fast, medium (the default), slow, slower or veryslow.
Crop part of the video, change the resolution, change the CRF and the bitrate. Do the video first on its own, then add the audio in a separate pass; don't mix them in one step. To repeat, because it is important: encode/decode audio and video separately, then mux everything together, but start with the video as in this example (a sketch of the audio pass and the final mux follows the command below):
ffmpeg -i Output.mkv -map 0:v -vf crop=iw-150:ih-85,scale=ih*16/9:ih,scale=1072:732,setsar=1 -c:v libx264 -crf 17 -b:v 4000K -maxrate 6000K -bufsize 4M -movflags -faststart -preset veryfast -dn video-new.mkv
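A rough sketch of the separate audio pass and the final mux described above (the Opus codec, the 96k bitrate and the file names are only example choices):
ffmpeg -i Output.mkv -map 0:a -c:a libopus -b:a 96k audio-new.mka
ffmpeg -i video-new.mkv -i audio-new.mka -map 0:v -map 1:a -c copy final.mkv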

FFMPEG: Youtube streaming quality and speed issues

I am trying to make a reliable stream from my Icecast/Shoutcast servers to Youtube live. The command that I use is:
ffmpeg -v verbose -framerate 30 -loop 1 -i /var/image.jpg -re -i http://127.0.0.1:4700/radio -c:v libx264 -preset ultrafast -b:v 2250k -maxrate 6000k -bufsize 6000k -c:a copy -ab 128k -s 1920x1080 -framerate 30 -g 60 -keyint_min 60 -f flv rtmp://a.rtmp.youtube.com/live2/xxx
As you can see, I am using the recommended bitrate for YouTube, inserting keyframes every 2 seconds, and streaming at 30 frames per second.
The stream works, but after running for some time two things happen:
FFmpeg's speed falls from 1x to something like 0.998x
YouTube starts complaining that the video stream speed is slow, marks the quality as bad, and sometimes the video starts buffering.
Why is this happening? CPU load is normal and connectivity is fine (the stream is running on a 1 Gb/s dedicated server).
Since in my example above I am streaming a single image as the logo of the stream, I also tried generating a short 30-second video from that image and broadcasting that video instead of the image, but that did not help either.
The command I used for conversion:
ffmpeg -framerate 30 -loop 1 -i /var/image.jpg -c:v libx264 -preset ultrafast -tune stillimage -b:v 2250k -minrate 2250k -maxrate 6000k -bufsize 6000k -framerate 30 -g 60 -keyint_min 60 -t 30 out4.mp4
And broadcast with
ffmpeg -stream_loop -1 -i out4.mp4 -re -i http://127.0.0.1:4700/radio -c:v copy -c:a copy -framerate 30 -g 60 -keyint_min 60 -f flv rtmp://a.rtmp.youtube.com/live2/xxx
ffmpeg version is 4.1.1
Are you sure that your original stream is really keeping up with the wall-clock?
Depending on how it's encoded, there are possibilities that it gets heavily skewed. This ultimately leads to buffer underruns (or overruns if it's too fast) and the player complaining/skipping.
Can you try to dump several hours' worth of the stream to a file and then stream that with FFmpeg? If that works, then it's a strong indication that your original stream timing (sample rate) is off.
Getting the sample rate right is why professional/expensive sound cards use high precision Quartz-Crystal controlled oscillators. Purely virtual processing (e.g. files get encoded into a stream) can easily get skewed, especially inside virtual machines. Also, cheap USB sound cards are often among the worst offenders in terms of frequency accuracy and stability.
FFmpeg might have an option to deal with too slow input. Keywords could be 'padding' or 'missing samples'.
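If the input audio really does run slightly slow, one option to look at is ffmpeg's aresample filter with the async option, which stretches or pads audio to match its timestamps. A sketch based on the original command (this requires re-encoding the audio instead of copying it, and the aac codec here is only an example):
ffmpeg -v verbose -framerate 30 -loop 1 -i /var/image.jpg -re -i http://127.0.0.1:4700/radio -c:v libx264 -preset ultrafast -b:v 2250k -maxrate 6000k -bufsize 6000k -af aresample=async=1 -c:a aac -b:a 128k -s 1920x1080 -g 60 -keyint_min 60 -f flv rtmp://a.rtmp.youtube.com/live2/xxx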
YouTube's "...buffer....." error is not a buffer issue on your PC; it simply means the data you are sending to YouTube is too small.
1) Note that [-preset ultrafast] and [-preset fast] do not make a big difference.
2) Change your broadcast ffmpeg command: for example, [-b:v 2250k] to [-b:v 15000k], and set the fps to 12 with the [-r 12] option.
It's going to be:
ffmpeg -stream_loop -1 -i out4.mp4 -re -i http://127.0.0.1:4700/radio -preset fast -r 12 -framerate 30 -g 60 -video_track_timescale 1000 -b:v 15000k -f flv rtmp://a.rtmp.youtube.com/live2/xxx
I hope this works for you! (^v^)Y

Scale watermark in ffmpeg based on video size [duplicate]

I have some videos and I want to add a watermark to them,
but the problem is that in every video the watermark size is different
(in some videos the watermark is smaller and in some it is bigger; I think it is because the input video size differs).
Here is my ffmpeg command (only the link is different):
ffmpeg -i "http://VIDEO-LINK" -i "/var/www/logo/logo.png" -filter_complex 'overlay=17:17' -vcodec h264 -crf 25 -preset veryfast -maxrate 600k -bufsize 600k -aspect '640:360' -s '640:360' -acodec libfdk_aac -hls_time 10 -hls_wrap 10 -start_number 1 -y "1.m3u8"
Is there a way to make the watermark a percentage of, or fixed relative to, the output, which is 640x360?
Because if the input video is 640x360, this command shows a big watermark,
and if the input is 1280x720 the watermark is very small.
You can use the scale2ref filter.
-filter_complex "[1][0]scale2ref=iw/8:ih/8[wm][vid];[vid][wm]overlay=17:17[out]"
If the aspect ratio of your watermark is not the same as your video inputs, then the scale2ref will distort your logo. It's best to perform a one-time operation where the logo is padded so that the image has the same aspect ratio as your videos.
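As a sketch of that one-time padding step (this assumes a 16:9 output such as 640x360 and pads the logo with transparent pixels; the output file name is only an example):
ffmpeg -i /var/www/logo/logo.png -vf "pad='max(iw,ih*16/9)':'max(ih,iw*9/16)':(ow-iw)/2:(oh-ih)/2:color=black@0" logo_16x9.png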

Does the scenecut feature of H.264 increase the file size?

When I use this command:
ffmpeg -i original.mp4 -codec:v:0 libx264 -b:v 650k -crf 21 -minrate:v 0k -maxrate:v 750k -bufsize:v 5000k -r 30 -preset slow -x264opts "no-scenecut" -vcodec libx264 -force_key_frames "expr:bitor(eq(t,0),gte(t,prev_forced_t+5))" -f mp4 test.mp4
I always get smaller file size than from this command (same command but without: -x264opts "no-scenecut"):
ffmpeg -i original.mp4 -codec:v:0 libx264 -b:v 650k -crf 21 -minrate:v 0k -maxrate:v 750k -bufsize:v 5000k -r 30 -preset slow -vcodec libx264 -force_key_frames "expr:bitor(eq(t,0),gte(t,prev_forced_t+5))" -f mp4 test.mp4
I thought that scenecut inserts an I-frame only if it is more efficient to use an I-frame instead of a P- or B-frame.
In what cases do we need to use the scenecut feature?
When a scenecut triggers, it places an IDR frame if the distance from the previous keyframe is greater than min-keyint, or a plain I-frame otherwise.
Here's some pseudo-code posted on the doom9.org forum (adding it here for future reference):
encode current frame as (a really fast approximation of) a P-frame and an I-frame.
if ((distance from previous keyframe) > keyint) then
    set IDR-frame
else if (1 - (bit size of P-frame) / (bit size of I-frame) < (scenecut / 100) * (distance from previous keyframe) / keyint) then
    if ((distance from previous keyframe) >= minkeyint) then
        set IDR-frame
    else
        set I-frame
else
    set P-frame
encode frame for real.
You should use scenecut when you don't need a fixed GOP / forced keyframes. If you're trying to encode for ABR delivery then you can alternatively use two-pass encoding and generate a stat file for the highest resolution on pass-1 and then reuse it on pass-2 for each rendition.
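For reference, a minimal two-pass sketch (the file names and bitrate are placeholders, and reusing the pass-1 stat file across renditions is a further x264-specific step not shown here):
ffmpeg -y -i original.mp4 -c:v libx264 -b:v 650k -preset slow -pass 1 -an -f null /dev/null
ffmpeg -i original.mp4 -c:v libx264 -b:v 650k -preset slow -pass 2 -c:a aac test.mp4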
