I am creating a small script to stream an image to an RTMP server, but the FFmpeg command is taking 100% CPU. Please have a look at my code.
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -loop 1 -i "Digital-Wallet-.jpg" -t 00:30:00 -r 1 -c:v libx264 -c:a aac -preset:v ultrafast -pix_fmt yuv420p -f flv "rtmp://rtmpserver"
Encoding is CPU intensive. Remove -r 1 and add -framerate 1, -re, and -shortest.
ffmpeg -f lavfi -i anullsrc -loop 1 -framerate 1 -re -i "Digital-Wallet-.jpg" -t 00:30:00 -c:v libx264 -c:a aac -preset:v ultrafast -pix_fmt yuv420p -shortest -f flv "rtmp://rtmpserver"
The default image demuxer frame rate is 25, so your command was unnecessarily converting 25 frames per second down to 1 frame per second, which is inefficient. The above changes fix that.
-re will slow down the reading of the input to the native frame rate of the input. It is useful for real-time output and live streaming. Otherwise ffmpeg will attempt to encode as fast as possible.
I added -shortest to end the output when the shortest stream ends (the image), because anullsrc otherwise generates audio indefinitely.
I have a video with 30 FPS, a duration of 10 s, and 300 frames. How can I convert it to 25 FPS without dropping frames?
I suppose -r or fps=fps=25 is some kind of resampling method, or it's simply not working.
My commands are like:
ffmpeg -i input.flv -vf "scale=800:450, fps=25" output1.flv
or
ffmpeg -i input.flv -filter:v fps=fps=25 -c:v libx264 -c:a copy -pix_fmt yuv420p -profile:v high -f mp4 -vf scale=800:450 output2.mp4
The result is that output1.flv dropped frames, and output2.mp4 didn't work.
If you're re-encoding the video stream, set the frame rate as an input option, which re-times all 300 frames to 25 fps (stretching the duration to 12 s) instead of dropping any:
ffmpeg -r 25 -i input.flv ...
If there's audio, you'll have to slow its tempo to match by adding
-af atempo=0.833
where 0.833 ≈ 25/30, the same factor by which the video's duration stretches.
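Put together, a full command might look like this (a sketch; the x264/AAC codec choices and output name are assumptions, and the audio must be re-encoded since atempo is a filter):
ffmpeg -r 25 -i input.flv -af atempo=0.833 -c:v libx264 -c:a aac output.mp4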
I use ffmpeg to convert a sequence of images to a video, and I find that after I feed the first image to it, it takes almost 6 seconds before ffmpeg outputs the first video frame.
I use the following command:
ffmpeg -f image2pipe -r 100 -i pipe:0 -f flv -r 100 -tune zerolatency -preset ultrafast -bufsize 2M -codec:v libx264 -codec:a libmp3lame -bf 0 -muxdelay 0.001 -s 478x850 -b:v 2M pipe:1
Are my options right?
Or is something else causing this?
How can I get the first video frame out quickly once I feed in the first image?
Change the probesize and analyzeduration input options; by default ffmpeg buffers a chunk of input to detect stream parameters before emitting anything.
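For example (a sketch based on your command; the values 32 and 0 are the minimums and may need tuning for your input):
ffmpeg -probesize 32 -analyzeduration 0 -f image2pipe -r 100 -i pipe:0 -f flv -r 100 -tune zerolatency -preset ultrafast -bufsize 2M -codec:v libx264 -codec:a libmp3lame -bf 0 -muxdelay 0.001 -s 478x850 -b:v 2M pipe:1
Both options must appear before -i so they apply to the input.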
I am trying to encode a video to webm for playing through a HTML5 video tag. I have these settings...
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:a 128k -b:v 1M -c:a libopus output.webm
The results aren't great; the video has lost a lot of its sharpness. Looking at the original file, I can see the bitrate is 1694 kb/s.
Are there any settings I can add or change to improve the output? Would maybe a 2 pass encode improve things?
Try with
ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 30 -b:v 0 -b:a 128k -c:a libopus output.webm
Adjust the CRF value until the quality/size tradeoff is acceptable. Lower values produce bigger but better files (for VP9 the scale runs from 0 to 63).
Try to run two passes:
ffmpeg -i file.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 1 -an -f webm -y /dev/null
ffmpeg -i file.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 2 output.webm
From https://trac.ffmpeg.org/wiki/Encode/VP9
I'm creating a video that:
uses a still image as a source
has a text overlay
fades in and out
has a silent stereo audio track.
So far, I have this, and it (almost) works correctly:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
The only problem is that the fade out doesn't seem to go to black, even though this is a 150-frame video and I believe I am following the ffmpeg documentation correctly.
The resulting video is here:
http://video.blivenyc.com/vid-from-image/turtle11.mp4
Any thoughts?
Well, I'm not sure why, but this works, even though it appears to be equivalent:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=t=in:st=0:d=1,fade=t=out:st=4:d=1 -acodec aac turtle12.mp4
Basically, frame-based syntax:
fade=in:0:60,fade=out:90:60
gets substituted with the time-based:
fade=t=in:st=0:d=1,fade=t=out:st=4:d=1
And somehow it works. Not sure why this is.
The video stream on which the fade filter operates is not 150 frames long. Input and output framerates are different here. The use of -r to set output rate happens after all filtering is done. At that stage, ffmpeg will drop or duplicate frames to obtain the output rate.
The input rate for an image or image sequence is 25 unless expressly set otherwise. In your command there is no override, so it's 25. That means the stream is only 125 frames long (5 seconds × 25), so a 60-frame fade out starting at frame 90 would need to run until frame 150; it is cut off 25 frames early and never reaches black. ffmpeg then duplicates 5 frames per input second to bring the output up to 30 fps.
To get the desired result, use
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -framerate 30 -i turtle-2.jpg -c:v libx264 -t 5 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
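Alternatively, if you keep the default 25 fps input, you can make the frame-based syntax fit the 125 available frames by starting the fade out at frame 65 (65 + 60 = 125). A sketch of the same command with only the fade numbers changed (turtle13.mp4 is just a placeholder name):
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:65:60 -acodec aac turtle13.mp4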
THE INPUT FILES
An overlay image that is being updated every 5 seconds by a Python script
A small MP4 file that will be looped by a concat input
An MP3 file as audio source
THE COMMAND (UPDATED)
This is the command I'm currently using to combine and stream the inputs.
ffmpeg -re -i music.mp3 -f concat -i videoincludes.txt
-r 1 -loop 1 -f image2 -i overlay.png
-c:v libx264 -c:a aac -shortest -crf 23 -pix_fmt yuv420p
-maxrate 2500k -bufsize 2500k -preset ultrafast -r 30 -g 60 -b:v 2000k -b:a 192k -ar 44100
-filter_complex "[1:v][2:v] overlay=0:0" -map 0:a -strict -2
-f flv rtmp://a.rtmp.youtube.com/live2/{key}
Also tried using -framerate 1 instead of -r 1.
THE ISSUE
So the issue is that the image doesn't always update. Sometimes it updates every couple of seconds at the start but stops after 10-20 seconds, with no difference in the log output; sometimes it never updates at all.
I can, however, confirm that the image is being updated by the Python script; FFmpeg is just not picking this up.
I read that setting the input format of the image to image2 should allow it to update, so I am not sure what is wrong or what I can do to improve it.
I'm working on the same task, and I think I've finally found the answer.
Because the streams differ from each other, we must reset their timestamps with setpts=PTS-STARTPTS so that they all begin at the same zero timestamp. Also, try using image2pipe instead of image2.
This is your command with the timestamp reset:
ffmpeg -re -i music.mp3 -f concat -i videoincludes.txt
-r 1 -loop 1 -f image2pipe -i overlay.png
-c:v libx264 -c:a aac -shortest -crf 23 -pix_fmt yuv420p
-maxrate 2500k -bufsize 2500k -preset ultrafast -r 30 -g 60 -b:v 2000k -b:a 192k -ar 44100
-filter_complex "[1:v]setpts=PTS-STARTPTS[out_main]; [2:v]setpts=PTS-STARTPTS[out_overlay]; [out_main][out_overlay]overlay=0:0" -map 0:a -strict -2
-f flv rtmp://a.rtmp.youtube.com/live2/{key}
P.S. I think there is no need for -r or -framerate anymore.