I want to combine the output from an RTSP stream into both an HLS stream and several image stills. I can do this fine separately (obviously), but I'm having trouble combining things. Can I get a quick hand?
Here are my commands (each works on its own):
Outputting HLS streams:
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
-c:v copy -b:v 64K -f flv rtmp://localhost/hls/stream_low \
-c:v copy -b:v 512K -f flv rtmp://localhost/hls/stream_high
Outputting image stills:
ffmpeg -hide_banner -i "$RTSP_URL" -y \
-vframes 1 -vf "scale=1920:-1" -q:v 10 out/screenshot_1920x1080.jpeg \
-vframes 1 -vf "scale=640:-1" -q:v 10 out/screenshot_640x360.jpeg \
-vframes 1 -vf "scale=384:-1" -q:v 10 out/screenshot_384x216.jpeg \
-vframes 1 -vf "scale=128:-1" -q:v 10 out/screenshot_128x72.jpeg
Any help is appreciated (I also posted a bounty ^_^)
Thanks guys!
Simply combine the two commands:
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
-c:v copy -b:v 64K -f flv rtmp://localhost/hls/stream_low \
-c:v copy -b:v 512K -f flv rtmp://localhost/hls/stream_high \
-vframes 1 -vf "scale=1920:-1" -q:v 10 out/screenshot_1920x1080.jpeg \
-vframes 1 -vf "scale=640:-1" -q:v 10 out/screenshot_640x360.jpeg \
-vframes 1 -vf "scale=384:-1" -q:v 10 out/screenshot_384x216.jpeg \
-vframes 1 -vf "scale=128:-1" -q:v 10 out/screenshot_128x72.jpeg
Note that your "HLS" streams are actually RTMP streams, as the output protocol shows. Also, with -c:v copy there is no video encoding, so -b:v has no effect.
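If you want ffmpeg itself to produce actual HLS output, a minimal sketch using the hls muxer looks like this; the playlist path, segment settings, and the re-encode bitrate are illustrative assumptions, not taken from your command (re-encoding with libx264 is also what makes -b:v meaningful again):
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
-c:v libx264 -b:v 512k \
-f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
out/stream_high.m3u8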
Related
I'm using the following to stream an image to YouTube:
ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 -thread_queue_size 1080 \
-loop 1 -re -i ./image.png \
-i ./track.mp3 \
-pix_fmt yuv420p -c:v libx264 -qp:v 19 -profile:v high -rc:v cbr_ld_hq -level:v 4.2 -r:v 60 -g:v 120 -bf:v 3 -refs:v 16 -preset fast -f flv rtmp://a.rtmp.youtube.com/live2/xxx
The looping of the image (to keep the stream going) works, but the sound does not loop.
Remember that FFmpeg input options apply per input, so -loop 1 is specified only for the -i ./image.png input, and -i ./track.mp3 has no input options at all. To loop the audio track, you need the -stream_loop input option, like this:
ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 -thread_queue_size 1080 \
-loop 1 -re -i ./image.png \
-stream_loop -1 -i ./track.mp3 \
...
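For reference, a complete version of that sketch might look like the following; the x264/AAC output settings here are illustrative assumptions rather than the ones from your original command:
ffmpeg -loop 1 -re -i ./image.png \
-stream_loop -1 -i ./track.mp3 \
-c:v libx264 -pix_fmt yuv420p -tune stillimage -r 30 -g 120 \
-c:a aac -b:a 128k \
-f flv rtmp://a.rtmp.youtube.com/live2/xxx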
I'll update this question with my FFmpeg version:
ffmpeg -version
ffmpeg version 4.3.1-4ubuntu1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 10 (Ubuntu 10.2.0-9ubuntu2)
I run this command to use FFmpeg to stream to YouTube:
ffmpeg -y -threads 12 \
-loop 1 -framerate 30 -re \
-i ./1280x720.jpg \
-i ./audio.mp3 \
-video_size 1280x720 \
-vcodec libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -crf 23 -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-r 30 -g 60 \
-f flv rtmp://a.rtmp.youtube.com/live2/xxxx 2>&1 | tee _LOG
The stream is excellent for 45-53 minutes, then I'll get an error like this from FFmpeg:
[flv @ 0x56077027cd80] Delay between the first packet and last packet in the muxing queue is 10034000 > 10000000: forcing output
Then YouTube starts to say that no data is being received and that the stream will end, which it does.
This is the full log: http://0x0.st/-zUH.txt
Your MP3 duration is 00:49:57.42, so the stream messes up after it ends. Loop the audio with -stream_loop -1 and add -re for real-time reading of the input:
ffmpeg -y \
-loop 1 -framerate 30 -re -i ./1280x720.jpg \
-re -stream_loop -1 -i ./audio.mp3 \
-c:v libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -c:a aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-g 60 -f flv rtmp://a.rtmp.youtube.com/live2/xxxx
Alternatively, remove -re -stream_loop -1 and add the output option -shortest if you want the stream to end when the audio ends.
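For clarity, that -shortest variant would look roughly like this (same settings as above, untested):
ffmpeg -y \
-loop 1 -framerate 30 -re -i ./1280x720.jpg \
-i ./audio.mp3 \
-c:v libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -c:a aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-g 60 -shortest -f flv rtmp://a.rtmp.youtube.com/live2/xxxx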
Unrelated changes:
No need to set -threads. Let ffmpeg choose automatically.
-video_size 1280x720 is an input option for certain demuxers and does nothing in your command. Removed. Your input is already 1280x720 anyway: otherwise, see Resizing videos with ffmpeg to fit a specific size.
-b:v and -crf are mutually exclusive. In your case -b:v is being ignored. For streaming you probably want to use -b:v. Removed -crf.
You already set the frame rate with -framerate 30 so -r 30 is not needed. Removed.
Recommend using the slowest -preset that still encodes fast enough.
I am trying to make a video from a bundle of image files and then apply an overlay on top of it. Another requirement is to make the video loop 3x. It is simply not working.
The first three paths point to the same image bundle (a folder containing images such as DSC0001_0013.jpg, DSC0002_0013.jpg, etc.).
Observed symptoms:
The script runs indefinitely, produces a video file of 0 KB, and I have to abort it using CTRL+C.
This is my script.
ffmpeg
-start_number 1 -framerate 3/1
-i "C:\Users\xxx\AppData\Local\xxx\xxx\xxx\xxx\xxx\xxx\xxx\963d9d9b8e1\DSC%04d_0013.jpg"
-i "C:\Users\xxx\AppData\Local\xxx\xxx\projects\xxx\xxx\xxx\xxx\963d9d9b8e1\DSC%04d_0013.jpg"
-i "C:\Users\xxx\AppData\Local\xxx\xxx\projects\xxx\xxx\xxx\xxx\963d9d9b8e1\DSC%04d_0013.jpg"
-i "C:\Users\xxx\AppData\Local\xxx\xxx\projects\1237\1138\overlay.png"
-i "C:\Users\xxx\AppData\Local\xxx\xxx\projects\1237\1138\overlay.png"
-i "C:\Users\xxx\AppData\Local\xxx\xxx\projects\1237\1138\overlay.png"
-filter_complex " [0:v]scale=600x900[scaled1]; [1:v]scale=600x900[scaled2]; [2:v]scale=600x900[scaled3]; [scaled1][3:v]overlay[tmp1]; [scaled2][4:v]overlay[tmp2]; [scaled3][5:v]overlay[tmp3]; [tmp1][tmp2][tmp3]concat=n=3[scaled] "
-map [scaled] -r 10 -vcodec libx264 -pix_fmt yuv420p -crf 23 "C:\Users\xxx\Documents\Projets\2020\xxx\video test ffmpeg\test.mp4"
Use the -stream_loop option. Note that its value is the number of additional repetitions, so -stream_loop 3 plays the sequence four times in total; use -stream_loop 2 if you want exactly three plays:
ffmpeg -stream_loop 3 -framerate 3/1 -i DSC%04d_0013.jpg -i overlay.png -filter_complex "[0]scale=600:900[bg];[bg][1]overlay=format=auto,format=yuv420p[v]" -map "[v]" -r 10 -c:v libx264 -crf 23 output.mp4
@llogan, this is our solution:
ffmpeg
-start_number 1 -framerate 3/1
-i DSC%04d_0013.jpg
-loop 1 -i overlay.png
-filter_complex "
[0:v]scale=600x900[scaled];
[scaled][1:v]overlay,trim=duration=3,loop=loop=2:size=9[tmp]
" -map [tmp] -r 10 -vcodec libx264 -pix_fmt yuv420p -crf 23
test.mp4
At -framerate 3/1 the trimmed 3-second segment holds 9 frames, and loop=loop=2:size=9 repeats that 9-frame block twice more, i.e. three plays in total.
I am trying to use FFmpeg to split different clips, concatenate them, and then reencode the concatenated stream. Here is the command line that I would like to use with 2 input clips (actually I would like to use more than 2, but 2 would suffice for illustrating this problem) as example:
./ffmpeg -y -noautorotate -ss 4.9 -i in0.ts -noautorotate -i in1.ts \
-threads 0 -map_chapters -1 -write_tmcd 0 \
-metadata location= -max_muxing_queue_size 2000 -f mp4 \
-movflags faststart -filter_complex "[0:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v0];[1:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v1];[v0][0:a:0][v1][1:a:0]concat=n=2:v=1:a=1[cat_v][cat_a]" \
-map "[cat_a]" -acodec aac -ac 2 -ar 44100 -b:a 160k -async 1 \
-sn -map "[cat_v]" -vcodec libx264 -profile:v baseline -level 4 -b:v \
5400k -preset medium -x264opts ref=3:keyint=90 \
-r 30000/1001 -vsync 1 -metadata:s:v rotate= -pix_fmt yuv420p outputfile01.mp4
But FFmpeg hangs, stuck at frame 0. The in0.ts has its last key frame at 4s. If I change -ss 4.9 to -ss X where X <= 4.0, there is no issue.
My FFmpeg version is 3.3. I am aware that this problem does not exist in FFmpeg 4.0.x onwards or in FFmpeg 3.2.x but exists in 3.3.x and 3.4.x. Can someone help me understand exactly what bug has been introduced in 3.3.x and 3.4.x that there is this problem?
-ss before -i relies on seeking via the demuxer. For files with inter-coded video streams, the seek target will be a keyframe. The callback seek function in the MPEG-TS demuxer returns the first keyframe after the specified point.
BTW, I can reproduce the effect with the latest builds. Why do you say the behaviour doesn't occur in 4.0 or 3.2?
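The difference is easy to demonstrate in isolation. The pair of commands below contrasts input seeking (demuxer-level, snaps to a keyframe) with output seeking (decode and discard, frame-accurate but slower); the output file names are placeholders:
# Input seeking: the demuxer jumps to a keyframe around 4.9s (fast; inexact with -c copy)
ffmpeg -ss 4.9 -i in0.ts -c copy cut_fast.ts
# Output seeking: decode from the start and discard frames up to 4.9s (slower; exact)
ffmpeg -i in0.ts -ss 4.9 cut_exact.ts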
To achieve the intended result, you can use the trim filters:
./ffmpeg -y -noautorotate -i in0.ts -noautorotate -i in1.ts \
-filter_complex "[0:v:0]yadif=deint=interlaced,trim=4.9,setpts=PTS-STARTPTS,scale=1280:720:flags=bicubic,setdar=1.7777778[v0];[1:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v1];[0:a:0]atrim=4.9,asetpts=PTS-STARTPTS[a0];[v0][a0][v1][1:a:0]concat=n=2:v=1:a=1[cat_v][cat_a]" \
-sn -map "[cat_a]" -async 1 -ac 2 -ar 44100 -c:a aac -b:a 160k \
-map "[cat_v]" -r 30000/1001 -vsync 1 -pix_fmt yuv420p -c:v libx264 \
-threads 0 -profile:v baseline -level:v 4 -b:v 5400k -preset medium -x264opts ref=3:keyint=90 \
-map_chapters -1 -metadata location= -metadata:s:v rotate= \
-max_muxing_queue_size 2000 -f mp4 -write_tmcd 0 -movflags faststart outputfile01.mp4
I am trying to stream a video loop to justin.tv using FFmpeg. I have managed to loop an image sequence and combine it with line-in audio:
ffmpeg -loop 1 -i imageSequence%04d.jpg -f alsa -ac 2 -ar 22050 -ab 64k \
-i pulse -acodec adpcm_swf -r 10 -vcodec flv \
-f flv rtmp://live.justin.tv/app/<yourStreamKeyHere>
Is it possible to do this with a video file?
Definitely possible. Recent versions of ffmpeg have a -stream_loop flag that allows you to loop the input as many times as required.
The gotcha is that if you don't regenerate the PTS from the source, ffmpeg will drop frames after the first loop (as the timestamps suddenly go back in time). To avoid this, tell ffmpeg to generate increasing timestamps across loops with -fflags +genpts (it has to come before the -i argument).
Here's an example ffmpeg call (replace $F with your input file). It generates two output streams, and the -stream_loop -1 argument tells ffmpeg to loop the input continuously. The output here targets a similar broadcast ingest (MetaCDN); adjust to your requirements.
ffmpeg -threads 2 -re -fflags +genpts -stream_loop -1 -i $F \
-s 640x360 -ac 2 -f flv -vcodec libx264 -profile:v baseline -b:v 600k -maxrate 600k -bufsize 600k -r 24 -ar 44100 -g 48 -c:a libfdk_aac -b:a 64k "rtmp://publish.live.metacdn.com/2050C7/dfsdfsd/lowquality_664?hello&adbe-live-event=lowquality_" \
-s 1920x1080 -ac 2 -f flv -vcodec libx264 -profile:v baseline -b:v 2000k -maxrate 2000k -bufsize 2000k -r 24 -ar 44100 -g 48 -c:a libfdk_aac -b:a 64k "rtmp://publish.live.metacdn.com/2050C7/dfsdfsd/highquality_2064?mate&adbe-live-event=highquality_"
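Note that libfdk_aac is only available in builds configured with --enable-libfdk-aac (it is non-free); on a stock build, the native aac encoder is a drop-in replacement. A trimmed single-output sketch with a placeholder ingest URL:
ffmpeg -re -fflags +genpts -stream_loop -1 -i "$F" \
-s 640x360 -c:v libx264 -profile:v baseline -b:v 600k -r 24 -g 48 \
-ac 2 -ar 44100 -c:a aac -b:a 64k \
-f flv "rtmp://example.com/app/streamkey"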
Sinclair Media found a solution by using the lavfi movie source and appending :loop=0 to the file name:
This is untested:
ffmpeg -f lavfi -re -i movie=StreamTest.avi:loop=0 \
-acodec libfaac -b:a 64k -pix_fmt yuv420p -vcodec libx264 \
-x264opts level=41 -r 25 -profile:v baseline -b:v 1500k \
-maxrate 2000k -force_key_frames 50 -s 640x360 -map 0 -flags \
-global_header -f segment -segment_list index_1500.m3u8 \
-segment_time 10 -segment_format mpegts \
-segment_list_type m3u8 segmented.ts
But it should create a local "index_1500.m3u8" file that streams the video in "StreamTest.avi".
I just reuse Rob's answer with a few modifications in order to serve a file as a live stream:
ffmpeg -threads 2 -re -fflags +genpts -stream_loop -1 -i gvf.mp4 -c copy -f mpegts -mpegts_service_id 102 -metadata service_name=My_channel -metadata service_provider=My_Self -max_interleave_delta 0 -use_wallclock_as_timestamps 1 -flush_packets 1 "udp://233.0.0.1:1001?localaddr=10.60.4.237&pkt_size=188"
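To sanity-check the multicast output from a machine on the same network, something like this should work (address and port taken from the command above):
ffplay "udp://233.0.0.1:1001"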