rtmp lavfi & image2 queue blocking - ffmpeg

[image2 # 0x7fb9ffc105e0] Thread message queue blocking; consider raising the thread_queue_size option (current value: 1024)
How can this be resolved? Even after adding the lavfi input and raising the queue value, the error is still present:
ffmpeg -f image2 -loop 1 -framerate 1 -thread_queue_size 512 -i bg.png \
-f lavfi -thread_queue_size 512 -i "amovie=audio.m4a:loop=0,asetpts=N/SR/TB" \
-vf "realtime" \
-f flv rtmp://server.dev:1935/live/mystream
No errors without lavfi:
ffmpeg -f image2 -loop 1 -framerate 1 -i bg.png \
-vf "realtime" \
-f flv rtmp://server.dev:1935/live/mystream

Related

FFMPEG command playing the audios all at once

I want each audio file to start at its designated time, but they all play at the beginning of the video. Here is the command:
ffmpeg -y -i ./temp/background.mp4 \
-itsoffset 1 -i ./temp/tts/0.mp3 \
-itsoffset 3 -i ./temp/tts/1.mp3 \
-itsoffset 4 -i ./temp/tts/10.mp3 \
-itsoffset 7 -i ./temp/tts/11.mp3 \
-itsoffset 8 -i ./temp/tts/12.mp3 \
-itsoffset 10 -i ./temp/tts/13.mp3 \
-itsoffset 12 -i ./temp/tts/14.mp3 \
-itsoffset 14 -i ./temp/tts/2.mp3 \
-itsoffset 15 -i ./temp/tts/3.mp3 \
-itsoffset 18 -i ./temp/tts/4.mp3 \
-itsoffset 20 -i ./temp/tts/5.mp3 \
-itsoffset 22 -i ./temp/tts/6.mp3 \
-itsoffset 23 -i ./temp/tts/7.mp3 \
-itsoffset 25 -i ./temp/tts/8.mp3 \
-itsoffset 27 -i ./temp/tts/9.mp3 \
-filter_complex amix=inputs=5[a] -map 0:v -map [a] -c:v copy -async 1 -c:a aac ./temp/withoutsubs.mp4
I was expecting every audio to start at its designated time.
amix doesn't sync by timestamp, so -itsoffset alone has no audible effect on the mix. You have to pad the head of each mp3 with silence so that it starts from timestamp 0, like this:
ffmpeg -y \
-i ./temp/background.mp4 \
-itsoffset 1 -i ./temp/tts/0.mp3 \
-itsoffset 3 -i ./temp/tts/1.mp3 \
-itsoffset 4 -i ./temp/tts/2.mp3 \
-filter_complex \
"[1]aresample=async=1:first_pts=0[1a];
[2]aresample=async=1:first_pts=0[2a];
[3]aresample=async=1:first_pts=0[3a];
[1a][2a][3a]amix=inputs=3[a]" \
-map 0:v -map [a] \
-c:v copy -c:a aac \
./temp/withoutsubs.mp4
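With fifteen clips this filter graph is tedious to type out; a bash loop can generate both the -itsoffset inputs and the aresample/amix graph. A sketch under the assumption that the files follow the ./temp/tts/N.mp3 naming above (offsets shortened to three entries for illustration):

```shell
#!/bin/bash
# Build ffmpeg arguments for mixing N delayed mp3s over one video.
# offsets[i] is the start time (seconds) of ./temp/tts/i.mp3 -- example values.
offsets=(1 3 4)

inputs=(-i ./temp/background.mp4)
filter=""
labels=""
for i in "${!offsets[@]}"; do
  inputs+=(-itsoffset "${offsets[$i]}" -i "./temp/tts/$i.mp3")
  n=$((i + 1))                       # input index 0 is the video
  filter+="[$n]aresample=async=1:first_pts=0[${n}a];"
  labels+="[${n}a]"
done
filter+="${labels}amix=inputs=${#offsets[@]}[a]"

# The full command would then be:
# ffmpeg -y "${inputs[@]}" -filter_complex "$filter" \
#        -map 0:v -map "[a]" -c:v copy -c:a aac ./temp/withoutsubs.mp4
echo "$filter"
```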

Change ffmpeg input while streaming

Is there a way to change the ffmpeg input while streaming to RTMP?
I have this bash script
#! /bin/bash
VBR="1500k"
FPS="24"
QUAL="superfast"
RTMP_URL="rtmp://live.live/live"
KEY="xxxxxxxxxxxxxxxxxxxxx"
VIDEO_SOURCE="video.mp4"
AUDIO_SOURCE="song.mp3"
NP_SOURCE="song.txt"
FONT="font.ttf"
ffmpeg \
-re -f lavfi -i "movie=filename=$VIDEO_SOURCE:loop=0, setpts=N/(FRAME_RATE*TB)" \
-thread_queue_size 512 -i "$AUDIO_SOURCE" \
-map 0:v:0 -map 1:a:0 \
-map_metadata:g 1:g \
-vf drawtext="fontsize=25: fontfile=$FONT: \
box=1: boxcolor=black#0.5: boxborderw=20: \
textfile=$NP_SOURCE: reload=1: fontcolor=white#0.8: x=50: y=50" \
-vcodec libx264 -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) -b:v $VBR \
-acodec libmp3lame -ar 44100 -threads 6 -qscale:v 3 -b:a 320000 -bufsize 512k \
-f flv "$RTMP_URL/$KEY"
What I want is to be able to change VIDEO_SOURCE on the fly. I was thinking it might work to point the input at a directory and then swap the video inside that directory while streaming, but I'm new to shell scripts and don't know how to do that.
This is a complete guess, based on what little I know about how ffmpeg handles interactive input:
while :; do
ffmpeg \
-re -f lavfi -i "movie=filename=$VIDEO_SOURCE:loop=0, setpts=N/(FRAME_RATE*TB)" \
-thread_queue_size 512 -i "$AUDIO_SOURCE" \
-map 0:v:0 -map 1:a:0 \
-map_metadata:g 1:g \
-vf drawtext="fontsize=25: fontfile=$FONT: \
box=1: boxcolor=black#0.5: boxborderw=20: \
textfile=$NP_SOURCE: reload=1: fontcolor=white#0.8: x=50: y=50" \
-vcodec libx264 -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) -b:v $VBR \
-acodec libmp3lame -ar 44100 -threads 6 -qscale:v 3 -b:a 320000 -bufsize 512k \
-f flv "$RTMP_URL/$KEY"
read -p "Next movie?" VIDEO_SOURCE
[ "$VIDEO_SOURCE" = q ] && break
done
ffmpeg should(?) exit if you send q to standard input. Your script will then prompt you for a new value for VIDEO_SOURCE. If you type q again, the loop exits. Otherwise, it restarts ffmpeg with the new video source file.
If this works, you can perhaps adapt it for something closer to your needs.
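Another route worth mentioning (an alternative technique, not part of the answer above) is ffmpeg's concat demuxer: stream from a playlist file and rewrite the playlist between items; a playlist whose last entry names the playlist itself loops forever and picks up edits on each pass. A minimal sketch of generating such a playlist (file names are placeholders):

```shell
#!/bin/bash
# Write an ffconcat playlist that ffmpeg can read with:
#   ffmpeg -re -f concat -safe 0 -i playlist.txt ... -f flv "$RTMP_URL/$KEY"
# Rewriting the file name in the playlist changes what plays next.
playlist="playlist.txt"
make_playlist() {
  {
    echo "ffconcat version 1.0"
    echo "file '$1'"
    echo "file '$playlist'"   # the playlist references itself, so playback loops
  } > "$playlist"
}

make_playlist "video.mp4"
cat "$playlist"
```

To switch videos mid-stream you would call make_playlist again with the new file; ffmpeg re-reads the playlist when it reaches the self-reference.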

Unable to create a video (.mp4) from images (.jpeg)

When I execute the ffmpeg command I get this error:
Unable to find a suitable output format for ffmpeg
String strCommand = "ffmpeg -loop 1 -t 3 -i " + list.get(0)
    + " -loop 1 -t 3 -i " + list.get(1)
    + " -loop 1 -t 3 -i " + list.get(2)
    + " -loop 1 -t 3 -i " + list.get(3)
    + " -filter_complex [0:v]trim=duration=3,fade=t=out:st=2.5:d=0.5[v0];"
    + "[1:v]trim=duration=3,fade=t=in:st=0:d=0.5,fade=t=out:st=2.5:d=0.5[v1];"
    + "[2:v]trim=duration=3,fade=t=in:st=0:d=0.5,fade=t=out:st=2.5:d=0.5[v2];"
    + "[3:v]trim=duration=3,fade=t=in:st=0:d=0.5,fade=t=out:st=2.5:d=0.5[v3];"
    + "[v0][v1][v2][v3]concat=n=4:v=1:a=0,format=yuv420p[v]"
    + " -map [v] -preset ultrafast " + getVideoFilePath();
Your command seems to be OK. Please make sure that getVideoFilePath() returns a path with a supported extension, for example /path/.../slide.mp4. Here is a working example ffmpeg command line:
ffmpeg \
-loop 1 -t 3 -i pic001.jpg \
-loop 1 -t 3 -i pic002.jpg \
-loop 1 -t 3 -i pic003.jpg \
-loop 1 -t 3 -i pic004.jpg \
-filter_complex "\
[0:v]trim=duration=3,fade=t=out:st=2.5:d=0.5[v0];\
[1:v]trim=duration=3,fade=t=in:st=0:d=0.5,fade=t=out:st=2.5:d=0.5[v1];\
[2:v]trim=duration=3,fade=t=in:st=0:d=0.5,fade=t=out:st=2.5:d=0.5[v2];\
[3:v]trim=duration=3,fade=t=in:st=0:d=0.5,fade=t=out:st=2.5:d=0.5[v3];\
[v0][v1][v2][v3]concat=n=4:v=1:a=0,format=yuv420p[v]" \
-map "[v]" -preset ultrafast slide.mp4
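For more than a handful of images, the graph above can be generated rather than typed; a bash sketch (image names are placeholders) that builds the same trim/fade/concat filter for any list of files:

```shell
#!/bin/bash
# Generate the trim/fade/concat filtergraph for a list of images.
imgs=(pic001.jpg pic002.jpg pic003.jpg pic004.jpg)
n=${#imgs[@]}

inputs=()
filter=""
labels=""
for i in "${!imgs[@]}"; do
  inputs+=(-loop 1 -t 3 -i "${imgs[$i]}")
  chain="[$i:v]trim=duration=3"
  if [ "$i" -gt 0 ]; then
    chain+=",fade=t=in:st=0:d=0.5"     # no fade-in on the first image
  fi
  chain+=",fade=t=out:st=2.5:d=0.5[v$i];"
  filter+="$chain"
  labels+="[v$i]"
done
filter+="${labels}concat=n=$n:v=1:a=0,format=yuv420p[v]"

# Run as: ffmpeg "${inputs[@]}" -filter_complex "$filter" -map "[v]" slide.mp4
echo "$filter"
```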

FFMpeg : Add audio stream via -filter_complex

In Reference to https://superuser.com/questions/833232/create-video-with-5-images-with-fadein-out-effect-in-ffmpeg
I have a bunch of images and I want to make a video from them.
Below is my FFmpeg command. The problem is: how can I append an audio file (with a chosen bitrate) to this -filter_complex pipeline?
ffmpeg \
-loop 1 -i input0.png \
-loop 1 -i input1.png \
-loop 1 -i input2.png \
-loop 1 -i input3.png \
-loop 1 -i input4.png \
-filter_complex \
"[0:v]trim=duration=15,fade=t=out:st=14.5:d=0.5[v0]; \
[1:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v1]; \
[2:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v2]; \
[3:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v3]; \
[4:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v4]; \
[v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" out.mp4
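One common way to do this (an assumption, since no answer is given here) is to add the audio as an extra input rather than inside -filter_complex, then -map it next to [v] and set its bitrate with -b:a. A sketch assembling the command as a bash array, shortened to two image inputs for brevity; audio.mp3 and the 192k bitrate are placeholders:

```shell
#!/bin/bash
# Append an audio input to the slideshow command and map it with a bitrate.
cmd=(ffmpeg
  -loop 1 -i input0.png
  -loop 1 -i input1.png
  -i audio.mp3                  # input 2: the audio track
  -filter_complex
  "[0:v]trim=duration=15,fade=t=out:st=14.5:d=0.5[v0]; \
   [1:v]trim=duration=15,fade=t=in:st=0:d=0.5[v1]; \
   [v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]"
  -map "[v]" -map 2:a           # video from the graph, audio from input 2
  -c:a aac -b:a 192k            # audio codec and bitrate
  -shortest out.mp4)            # stop when the shorter stream ends

printf '%s\n' "${cmd[@]}"
```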

Inserting variables as command arguments in a bash script

I have a script for screencasting on Linux, and I want to modify parts of it based on user input. At the moment I can't even get it to insert an argument into a command from a variable.
#!/bin/bash -x
fps="30"
capturesize="hd1080"
outputsize="720"
filter=-vf 'scale=${outputsize}'
avconv \
-f x11grab -r $fps -s $capturesize -i :0.0 \
-f alsa -ac 2 -i pulse \
-f alsa -ac 2 -i pulse \
-f alsa -ac 1 -i pulse \
-map 0:0 -map 1:0 -map 2:0 -map 3:0 \
-vcodec libx264 \
$filter \
-pre:v lossless_ultrafast \
-acodec libmp3lame \
-threads 4 \
-y $@
$fps and $capturesize are evaluated properly but $filter assignment gives a nice little:
+ filter=-vf
+ 'scale=${outputsize}'
~/bin/screencap: line 9: scale=${outputsize}: command not found
Changing the line to:
filter="-vf 'scale=$outputsize'"
Gives an even less pleasant:
+ filter='-vf '\''scale=720'\'''
+ avconv -f x11grab [...] -vf ''\''scale=720'\''' [...]
[...]
No such filter: 'scale=720'
Error opening filters!
Use an array. It has the added benefit of protecting items that contain spaces, something you can't do if you store everything in a single space-separated string.
#!/bin/bash -x
fps="30"
capturesize="hd1080"
outputsize="720"
filter=( -vf "scale=${outputsize}" )
avconv \
-f x11grab -r "$fps" -s "$capturesize" -i :0.0 \
-f alsa -ac 2 -i pulse \
-f alsa -ac 2 -i pulse \
-f alsa -ac 1 -i pulse \
-map 0:0 -map 1:0 -map 2:0 -map 3:0 \
-vcodec libx264 \
"${filter[@]}" \
-pre:v lossless_ultrafast \
-acodec libmp3lame \
-threads 4 \
-y "$@"
You can put all the options in a single array; a small example:
options=( -f x11grab -r "$fps" -s "$capturesize" )
options+=( -i :0.0 )
# etc
avconv "${options[@]}" -y "$@"
A plain string also works in this particular case, because neither -vf nor scale=720 contains whitespace (expand it unquoted, as $filter):
filter="-vf scale=${outputsize}"
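The difference is easy to demonstrate without avconv; a minimal sketch showing why the array form survives arguments with spaces while the string forms do not:

```shell
#!/bin/bash
# Count how many words a command actually receives.
count_args() { echo "$#"; }

filter_str='-vf scale=720'
filter_arr=( -vf "scale=720" )

count_args $filter_str          # unquoted string: splits into 2 words -- works here
count_args "$filter_str"        # quoted string: 1 word, avconv would see one bogus option
count_args "${filter_arr[@]}"   # array: always 2 words

spaced=( -vf "drawtext=text=hello world" )
count_args "${spaced[@]}"       # still 2 words; the string version would split into 3
```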
