I am trying to reset the PTS on an input stream, generate new PTS, and publish the stream to RTMP.
ffmpeg -re -f lavfi -i "movie=${SOURCE}:s=0+1[out0][out1];[0:v]setpts=N/(FRAME_RATE*TB),[0:a]asetpts=N/(FRAME_RATE*TB)" \
-r 24 -crf 20 \
-c:v libx264 \
-c:a aac -ar 44100 -ab 128k -ac 2 -strict -2 \
-f flv ${DEST}
If I remove the setpts and asetpts filters, the command works. But I need to apply setpts and asetpts at the source, before it is given to the encoder.
Please help.
Alter the PTS outside the source graph.
ffmpeg -re -f lavfi -i "movie=${SOURCE}:s=0+1" \
-vf setpts=N/FRAME_RATE/TB -af asetpts=N/SR/TB \
-r 24 -crf 20 \
-c:v libx264 \
-c:a aac -ar 44100 -ab 128k -ac 2 -strict -2 \
-f flv ${DEST}
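If you specifically want the filters attached in the same filtergraph as the source rather than as output filters, a -filter_complex form should also work. This is a sketch, assuming the lavfi input exposes one video and one audio stream; it still applies the filters before the encoder:
ffmpeg -re -f lavfi -i "movie=${SOURCE}:s=0+1" \
-filter_complex "[0:v]setpts=N/FRAME_RATE/TB[v];[0:a]asetpts=N/SR/TB[a]" \
-map "[v]" -map "[a]" \
-r 24 -crf 20 \
-c:v libx264 \
-c:a aac -ar 44100 -ab 128k -ac 2 -strict -2 \
-f flv ${DEST}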
When I run the command below, only the 1280x720 line works. The other two lines do not work. What am I missing?
ffmpeg -i test.mp4 -s:v:0 1280x720 -c:a aac -c:v:0 libx264 -b:v:0 2000k \
-s:v:1 640x480 -c:a aac -c:v:1 libx264 -b:v:1 1000k \
-s:v:2 320x240 -c:a aac -c:v:2 libx264 -b:v:2 600k \
-f hls -hls_playlist_type vod -master_pl_name test.m3u8 \
-hls_segment_filename test_%v/test%06d.ts -use_localtime_mkdir 1 stream_%v.m3u8
You need to map the input video stream for each output video stream explicitly like this:
ffmpeg -i test.mp4 \
-map 0:v:0 -s:v:0 1280x720 -c:v:0 libx264 -b:v:0 2000k \
-map 0:v:0 -s:v:1 640x480 -c:v:1 libx264 -b:v:1 1000k \
-map 0:v:0 -s:v:2 320x240 -c:v:2 libx264 -b:v:2 600k \
-c:a aac \
-f hls -hls_playlist_type vod -master_pl_name test.m3u8 \
-hls_segment_filename test_%v/test%06d.ts -use_localtime_mkdir 1 stream_%v.m3u8
A transport stream over IP generated by ffmpeg is not detected by the DVB receiver. The receiver status is "PCR not detected". I am using the following command:
ffmpeg -re -i testvideo.mp4 -map 0:v:0 -map 0:a:0 -pix_fmt yuv420p -r 25 -s 720x576 -aspect 4:3 -qmin 2 -qmax 35 -b:v 1000k -minrate 1000k -maxrate 1000k -bufsize 500k -vcodec libx264 -acodec aac -ab 128k -ac 2 -f mpegts -mpegts_original_network_id 1 -mpegts_transport_stream_id 1 -mpegts_service_id 1 -mpegts_pmt_start_pid 4096 -streamid 0:289 -streamid 1:337 -program title="service1":st=0:st=1 -metadata service_provider="MYCALL" -muxrate 2000k -metadata service_name="My Station ID" -y "udp://239.0.0.1:5000?pkt_size=1316&localaddr=192.168.100.114"
I have a 2-second 1920x1080 video and I want the file size to be 3 MB.
I tried the method below:
const fileSize = 3000; // kilobytes
const duration = 2; // seconds
const videoBitRate = Math.round((fileSize * 8) / duration); // kbit/s
So videoBitRate is 12000 (kbit/s) right now.
Then I use two-pass encoding:
ffmpeg -y -i input -c:v libx264 -preset medium -b:v 12000k -pass 1 -c:a aac -b:a 128k -f mp4 /dev/null && \
ffmpeg -i input -c:v libx264 -preset medium -b:v 12000k -pass 2 -c:a aac -b:a 128k output.mp4
Expected file size: 3 MB
Actual file size: 2.6 MB
If I use a 35-second video instead,
videoBitRate = 685k
ffmpeg -y -i input -c:v libx264 -preset medium -b:v 685k -pass 1 -c:a aac -b:a 128k -f mp4 /dev/null && \
ffmpeg -i input -c:v libx264 -preset medium -b:v 685k -pass 2 -c:a aac -b:a 128k output.mp4
Expected file size: 3 MB
Actual file size: 3.6 MB
What am I doing wrong?
Isn't there a more accurate way to calculate this? Why are the results always off?
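One thing the calculation above leaves out: the 3 MB budget also has to cover the 128 kbit/s audio track (-b:a 128k) and some MP4 container overhead, and two-pass rate control only approximates the average bitrate, which is especially loose on a 2-second clip. A minimal sketch of a budget that subtracts the audio first, in shell arithmetic with the same example values:
TARGET_KB=3000   # desired output size in kilobytes
DURATION=2       # clip length in seconds
AUDIO_KBPS=128   # matches -b:a 128k
# total bit budget per second minus the audio track; shave off a few percent more
# if MP4 container overhead still pushes the result over the target
VIDEO_KBPS=$(( TARGET_KB * 8 / DURATION - AUDIO_KBPS ))
echo "use -b:v ${VIDEO_KBPS}k"   # 11872k for the 2-second example
In the 35-second case, 128 kbit/s of audio over 35 s is about 560 KB on its own, which accounts for most of the gap between the expected 3 MB and the actual 3.6 MB.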
I'm coming to you today because I just want to add an overlay to my YouTube feed. I already have all the code that plays the video, but I cannot add an image. Here is the code I am currently using; I have seen on another post how to add an image, but I cannot get it to work:
function goto {
VBR="2000k" # Bitrate de la vidéo en sortie
FPS="30" # FPS de la vidéo en sortie
QUAL="fast" # Preset de qualité FFMPEG
YOUTUBE_URL="rtmp://x.rtmp.youtube.com/live2" # URL de base RTMP youtube
result="$(ls Video | shuf -n 1)"
SOURCE="Video/${result}" # Source UDP (voir les annonces SAP)
KEY="ergtre498ter64t" # Clé à récupérer sur l'event youtube
ffmpeg \
-i "$SOURCE" -deinterlace -vf realtime -af arealtime \
-vcodec libx264 -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) -b:v $VBR \
-acodec libmp3lame -ar 44100 -threads 6 -qscale 3 -b:a 712000 -bufsize 512k \
-framerate 2 -f flv "$YOUTUBE_URL/$KEY"
goto
}
goto
And here is the code I found:
ffmpeg -i input.mp4 -i image.png \
-filter_complex "[0:v][1:v] overlay=25:25:enable='between(t,0,20)'" \
-pix_fmt yuv420p -c:a copy \
output.mp4
The FFmpeg command would be:
ffmpeg \
-i "$SOURCE" -i "$IMAGE" -filter_complex "[0]yadif[m];[m][1]overlay=25:25,realtime" -af arealtime \
-vcodec libx264 -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) -b:v $VBR \
-acodec libmp3lame -ar 44100 -threads 6 -qscale 3 -b:a 712000 -bufsize 512k \
-f flv "$YOUTUBE_URL/$KEY"
$IMAGE should be set to your image file URL.
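For example, it can be defined next to the other variables inside the function (hypothetical filename):
IMAGE="overlay.png" # hypothetical path to the watermark image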
I am trying to apply a watermark and also scale it to the current video size via an ffmpeg command.
Here is my initial command, which works without the watermark:
ffmpeg -v 0 -vcodec h264_qsv -i 'udp://#some.ip:1234?fifo_size=1000000&overrun_nonfatal=1&buffer_size=1000000' -vf scale=iw:ih -profile baseline -acodec aac -ac 1 -ar 44100 -ab 64k -deinterlace -vcodec h264_qsv -bufsize 4000k -maxrate 3500k -preset veryfast -vb 2000k -f flv rtmp://127.0.0.1/app/720
Now I tried to add the picture as a watermark. There was a conflict when using -vf scale=-1:ih*.5, so to eliminate the problem I used -s 1280x720 to specify the resolution for the video stream. It worked, but not properly.
ffmpeg -v 0 -vcodec h264_qsv -i 'udp://#some.ip:1234?fifo_size=1000000&overrun_nonfatal=1&buffer_size=1000000' -i logo.png -filter_complex "overlay=10:10" -s 1280x720 -profile baseline -acodec aac -ac 1 -ar 44100 -ab 64k -deinterlace -vcodec h264_qsv -bufsize 4000k -maxrate 3500k -preset veryfast -vb 2000k -f flv rtmp://some.ip/app/720
The problem:
How can I specify both the video and logo (watermark) sizes in the ffmpeg command so that they don't conflict with each other and auto-adjust the way -vf scale=-1:ih*.5 does?
Thank you!
The scale2ref filter allows one to scale a video/image stream with reference to the dimensions of another video or image stream,
e.g.
ffmpeg -v 0 -vcodec h264_qsv -i 'udp://#some.ip:1234?fifo_size=1000000&overrun_nonfatal=1&buffer_size=1000000' \
-loop 1 -i logo.png \
-filter_complex "[1:v][0:v]scale2ref=iw/8:-1[logo][0v];[0v][logo]overlay=10:10[v]" \
-map "[v]" -map 0:a \
-profile baseline -acodec aac -ac 1 -ar 44100 -ab 64k \
-deinterlace -vcodec h264_qsv -bufsize 4000k -maxrate 3500k \
-preset veryfast -vb 2000k \
-f flv rtmp://some.ip/app/720
Here [1:v], the logo image, is being scaled to 1/8th the width of [0:v], the H.264 stream.
For the command given in the comments:
ffmpeg -v 0 -vcodec h264_qsv -i 'input' \
-loop 1 -i logo.png \
-filter_complex "[0:v]scale=iw:ih[v0]; \
[1:v][v0]scale2ref=iw/8:-1[logo][0v];[0v][logo]overlay=10:10[v]" \
-map "[v]" -map 0:a \
-profile baseline -acodec aac -ac 1 -ar 44100 -ab 64k \
-deinterlace -vcodec h264_qsv -bufsize 4000k -maxrate 3500k \
-preset veryfast -vb 2000k \
-f flv out1 \
-filter_complex "[0:v]scale=-1:ih/2[v0]; \
[1:v][v0]scale2ref=iw/8:-1[logo][0v];[0v][logo]overlay=10:10[v2]" \
-map "[v2]" -map 0:a \
-profile baseline -acodec aac -ac 1 -ar 44100 -ab 64k \
-deinterlace -vcodec h264_qsv -bufsize 4000k -maxrate 2000k \
-preset veryfast -vb 1000k \
-f flv out2 \
-filter_complex "[0:v]scale=-1:ih/4[v0]; \
[1:v][v0]scale2ref=iw/8:-1[logo][0v];[0v][logo]overlay=10:10[v3]" \
-map "[v3]" -map 0:a \
-profile baseline -acodec aac -ac 1 -ar 44100 -ab 64k \
-deinterlace -vcodec h264_qsv -bufsize 4000k -maxrate 1000k \
-preset veryfast -vb 512k \
-f flv out3 \