I have built a bash script in which I am trying to zoom into an image with ffmpeg for 10 seconds:
ffmpeg -r 25 -i image0.jpg -filter_complex "scale=-2:10*ih,zoompan=z='min(zoom+0.0015,1.5)':d=250:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)',scale=-2:720" -y -shortest -c:v libx264 -pix_fmt yuv420p temp_1.mp4
This command is included in a while loop, with two "if" conditions at the beginning of the loop:
first=1017
i=0
while read status author mySource myFormat urlIllustration credit shot_id originalShot categories title_EN length_title_EN text_EN tags_EN title_FR length_title_FR text_FR tags_FR title_BR length_title_BR text_BR tags_BR; do
if [ "$myFormat" != "diaporama" ]; then
let "i = i + 1"
continue
fi
if [ "$shot_id" -lt "$first" ]; then
let "i = i + 1"
continue
fi
rm temp_1.mp4
ffmpeg -r 25 -i image0.jpg -filter_complex "scale=-2:10*ih,zoompan=z='min(zoom+0.0015,1.5)':d=250:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)',scale=-2:720" -y -shortest -c:v libx264 -pix_fmt yuv420p temp_1.mp4
let "i = i + 1"
done <../data.tsv
echo "All done."
(I have removed stuff from the loop; this is the minimal code that captures the problem.)
Now the weird bug: if I run this code as is, the video I am trying to generate will not be 10 s long, only 1-2 s, and ffmpeg exits with the error "[out_0_0 @ 0x2fa4c00] 100 buffers queued in out_0_0, something may be wrong."
However, if I remove either of the two "if" conditions at the beginning of the loop (the first or the second, it doesn't matter), the video is generated fine and is 10 s long.
What could be the cause of this problem?
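One common cause worth checking (an assumption, not confirmed here): ffmpeg reads from standard input, so inside a `while read` loop it can swallow the remaining lines of data.tsv, which would also explain why changing the "if" conditions changes the behavior. Adding `-nostdin` (or redirecting `< /dev/null` on the ffmpeg line) isolates it. A minimal repro sketch:

```shell
# Sketch: inside a `while read` loop, ffmpeg without -nostdin can consume
# the loop's remaining stdin lines. With -nostdin, all three iterations run.
printf 'a\nb\nc\n' > lines.txt
count=0
while read -r line; do
  ffmpeg -nostdin -hide_banner -v error -y \
    -f lavfi -i testsrc=size=64x48:rate=25:duration=0.2 "clip_${count}.mp4"
  count=$((count + 1))
done < lines.txt
echo "$count"             # prints 3
```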
Currently I create two PNG images with ImageMagick (convert -size 640x480 xc:#FF0000 hex1.png, and likewise hex2.png) and save both files.
Now I need the following, but I have no idea how to do it (maybe with ffmpeg?):
Create a 640x480 video, for example 10 seconds long, like this:
0.00s (hex1) > 0.55s (hex2) > 1.10s (hex1) > 1.65s (hex2) > 2.20s (hex1) ... until the 10 seconds are reached.
The hex1 and hex2 images should always morph/fade hex1 => hex2 => hex1, ...
The timing is critical: each step must be exactly 0.55 seconds.
Maybe I can generate the hex colors directly in the same way, without first creating PNG images for this purpose.
Can anybody help me with the best way to do this?
Thank you so much and many greets, iceget
Currently I have created only a single-image video with this command:
ffmpeg -loop 1 -i hex1.png -c:v libx264 -t 10 -pix_fmt yuv420p video.mp4
Here is one approach to achieving your goal without pre-creating the images:
ffmpeg -hide_banner -y \
-f lavfi -i color=c=0x0000ff:size=640x480:duration=0.55:rate=20 \
-filter_complex \
"[0]fade=type=out:duration=0.55:color=0xffff00[fade_first_color]; \
[fade_first_color]split[fade_first_color1][fade_first_color2]; \
[fade_first_color1]reverse[fade_second_color]; \
[fade_first_color2][fade_second_color]concat[fade_cycle]; \
[fade_cycle]loop=loop=10/(0.55*2):size=0.55*2*20,trim=duration=10" \
flicker.mp4
Since the loop filter operates on frames, not seconds, and you have strict timing constraints, you should choose only FPS rates for which an integer number of frames fits into the 0.55-second period (e.g. 20, 40, 60).
The filtergraph is self-explanatory.
The result of this command is a video that fades back and forth between the two colors.
An almost universal way (added in response to the OP's follow-up questions)
#!/bin/bash
# Input parameters
color_1=0x0000ff
color_2=0xffff00
segment_duration=0.55
total_duration=10
# Magic calculations
sd_numerator=${segment_duration#*.}
sd_denominator=$(( 10**${#sd_numerator} ))
FPS=$(ffprobe -v error -f lavfi "aevalsrc=print('$sd_denominator/gcd($sd_numerator,$sd_denominator)'\,16):s=1:d=1" 2>&1)
FPS=${FPS%.*}
# Prepare an output slightly longer than total_duration
# and mark the cut point with a forced keyframe
ffmpeg -hide_banner -y \
-f lavfi -i color=c=$color_1:size=640x480:duration=$segment_duration:rate=$FPS \
-filter_complex \
"[0]fade=type=out:duration=$segment_duration:color=$color_2[fade_first_color]; \
[fade_first_color]split[fade_first_color1][fade_first_color2]; \
[fade_first_color1]reverse[fade_second_color]; \
[fade_first_color2][fade_second_color]concat[fade_cycle]; \
[fade_cycle]loop=loop=ceil($total_duration/($segment_duration*2))+1: \
size=$segment_duration*2*$FPS,fps=fps=25" \
-force_key_frames $total_duration \
flicker_temp.mp4
# Fine cut of total_duration
ffmpeg -hide_banner -y -i flicker_temp.mp4 -to $total_duration flicker_${total_duration}s.mp4
# Clean up
rm flicker_temp.mp4
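The "magic calculations" can be checked by hand: for segment_duration=0.55 the fractional part is 55, the denominator is 10^2 = 100, and FPS = 100/gcd(55,100) = 100/5 = 20, the lowest rate at which 0.55 s is a whole number of frames. A pure-bash sketch of the same arithmetic (an alternative to the ffprobe trick, shown for the 0.55 case only):

```shell
# Pure-bash check of the FPS derivation for segment_duration=0.55.
# Caveat: fractional parts with leading zeros (e.g. 0.05) would need
# base-10 forcing before arithmetic expansion.
sd=0.55
num=${sd#*.}              # fractional digits: 55
den=$((10 ** ${#num}))    # 10^2 = 100
a=$num b=$den
while ((b != 0)); do      # Euclid's algorithm for the gcd
  t=$((a % b)); a=$b; b=$t
done
echo $((den / a))         # lowest integer FPS: prints 20
```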
I want to record two webcams using ffmpeg. I have a simple Python script, but it doesn't work when I run the two subprocesses at the same time.
import datetime
import os
import subprocess

ROOT_PATH = os.getenv("ROOT_PATH", "/home/pi")
ENCODING = os.getenv("ENCODING", "copy")
new_dir = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
RECORDINGS_PATH1 = os.getenv("RECORDINGS_PATH", "RecordingsCam1")
RECORDINGS_PATH2 = os.getenv("RECORDINGS_PATH", "RecordingsCam2")
recording_path1 = os.path.join(ROOT_PATH, RECORDINGS_PATH1, new_dir)
recording_path2 = os.path.join(ROOT_PATH, RECORDINGS_PATH2, new_dir)
os.mkdir(recording_path1)
os.mkdir(recording_path2)
segments_path1 = os.path.join(recording_path1, "%03d.avi")
segments_path2 = os.path.join(recording_path2, "%03d.avi")
record1 = "ffmpeg -nostdin -i /dev/video0 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path1)
record2 = "ffmpeg -nostdin -i /dev/video2 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path2)
subprocess.Popen(record1, shell=True)
subprocess.Popen(record2, shell=True)
Also, I tried capturing the two sources side by side, but it gives the error: `Filtering and streamcopy cannot be used together.`
This has nothing to do with running two processes at the same time. FFmpeg clearly states that it cannot find /dev/video0 and /dev/video2, so it seems your video cameras are not detected. You can check this with the following command:
$ ls /dev/ | grep video
This will list all devices with "video" in their name. If video0 and video2 do not exist, it is clear why FFmpeg gives that error. If they do exist, I do not know how to resolve it; you may try running the FFmpeg commands directly in a terminal.
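Regarding the side-by-side attempt: "Filtering and streamcopy cannot be used together" means `-c:v copy` cannot be combined with a filtergraph, so the combined stream has to be re-encoded. A hedged sketch, with lavfi test sources standing in for /dev/video0 and /dev/video2:

```shell
# Sketch: hstack the two inputs and re-encode, since `-c:v copy` is
# impossible once a filtergraph touches the frames. The lavfi test
# sources below are stand-ins for the two camera devices.
ffmpeg -hide_banner -y \
  -f lavfi -i testsrc=size=320x240:rate=25:duration=1 \
  -f lavfi -i testsrc2=size=320x240:rate=25:duration=1 \
  -filter_complex "[0:v][1:v]hstack=inputs=2[out]" \
  -map "[out]" -c:v libx264 -pix_fmt yuv420p side_by_side.mp4
```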
I'm working with ffmpeg to choose the best thumbnail for my video, and the selection should be based on a slider.
As per the requirement, multiple thumbnails are not needed, just a single long film-strip image; the thumbnail is then selected via the slider and saved.
I used the command below to get the long film-strip thumbnail:
ffmpeg -loglevel panic -y -i "video.mp4" -frames 1 -q:v 1 -vf "select=not(mod(n\,40)),scale=-1:120,tile=100x1" video_preview.jpg
I followed the instructions from this tutorial and am able to get the long film-strip image. Moving the image in the slider works fine.
My question is: how can I select a particular frame from that slider / film strip? How can I calculate the exact time offset from the slider position and then execute a command to extract that frame?
In one of my projects I implemented the scenario below. In the code I get the video duration from an ffprobe command; if the duration is less than 0.5 seconds, I set a different thumbnail interval. In your case, you should set the time interval for the thumbnail creation accordingly. Hope this helps.
$dur = 'ffprobe -i '.$video.' -show_entries format=duration -v quiet -of csv="p=0"';
$duration= exec($dur);
if($duration < 0.5) {
$interval = 0.1;
} else {
$interval = 0.5;
}
// screenshot size
$size = '320x240';
// ffmpeg command
$cmd = "ffmpeg -i $video -deinterlace -an -ss $interval -f mjpeg -t 1 -r 1 -y $image";
exec($cmd);
You can try this:
ffmpeg -vsync 0 -ss duration -t 0.0001 -noaccurate_seek -i filename -ss 00:30 -t 0.0001 -noaccurate_seek -i filename -filter_complex "[0:v][1:v]concat=n=2[con];[con]scale=80:60:force_original_aspect_ratio=decrease[sc];[sc]tile=2x1[out]" -map "[out]:v" -frames 1 -f image2 filmStrip.jpg
This produces a strip of two frames.
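To map a slider position back to a timestamp: with select=not(mod(n,40)), tile cell k (0-based) shows source frame k*40, so its time is k*40/fps seconds. A hedged sketch; the testsrc input and the fps value are assumptions standing in for the real video:

```shell
# Generate a stand-in for video.mp4 (assumption: the real clip runs at 25 fps).
ffmpeg -hide_banner -y -f lavfi -i testsrc=size=320x240:rate=25:duration=15 video.mp4
fps=25
k=7                       # tile cell picked via the slider (0-based)
# cell k shows source frame k*40 => timestamp k*40/fps seconds
ts=$(awk -v k="$k" -v fps="$fps" 'BEGIN { printf "%.3f", k * 40 / fps }')
echo "$ts"                # prints 11.200
ffmpeg -hide_banner -y -ss "$ts" -i video.mp4 -frames:v 1 -q:v 2 thumb.jpg
```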
I have the following bash script:
#!/bin/bash
#-_-_-_--- CONFIGURATION ---_-_-_-
CAMERA_COUNT=2
SRC_WIDTH=320
SRC_HEIGHT=240
MARGIN=320
#-_-_-_--- END CONFIGURATION---_-_-_-
CHARS=( {A..Z} )
FULL_WIDTH=$((SRC_WIDTH * CAMERA_COUNT))
FFMPEG="ffmpeg "
for ((i = 0; i < CAMERA_COUNT; i++))
do
FFMPEG+="-f flv -i rtmp://127.0.0.1:1935/live/PanoView${CHARS[i]} "
done
FFMPEG+="-filter_complex \"nullsrc=size=$((SRC_WIDTH * CAMERA_COUNT))x${SRC_HEIGHT} [base]; [base][0:v] overlay [tmp1]; "
for ((i = 1; i < CAMERA_COUNT - 1; i++))
do
FFMPEG+="[tmp${i}][${i}:v] overlay=x=$((SRC_WIDTH * i)) [tmp$((i + 1))];"
done
FFMPEG+="[tmp$((CAMERA_COUNT - 1))][$((CAMERA_COUNT - 1)):v] overlay=x=$((SRC_WIDTH * (CAMERA_COUNT - 1)))\" -f flv -b:v 1M -an -r 25 rtmp://127.0.0.1:1935/live/test"
$FFMPEG
echo $FFMPEG
The output of the last echo:
ffmpeg -f flv -i rtmp://127.0.0.1:1935/live/PanoViewA -f flv -i rtmp://127.0.0.1:1935/live/PanoViewB -filter_complex "nullsrc=size=640x240 [base]; [base][0:v] overlay [tmp1]; [tmp1][1:v] overlay=x=320" -f flv -b:v 1M -an -r 25 rtmp://127.0.0.1:1935/live/test
When I execute the compound command in the FFMPEG variable in the script, I get the error message:
[AVFilterGraph @ 0x7fffed837620] No such filter: '"nullsrc'
Error initializing complex filters.
Invalid argument
However, if I copy the command that the script echoes at the end, paste it into a terminal, and execute it, everything works as expected. What could be the reason?
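A likely explanation, offered as an assumption: bash performs quote removal before variable expansion, so the double quotes stored in $FFMPEG reach ffmpeg as literal characters and word splitting breaks the filtergraph apart (hence `No such filter: '"nullsrc'`). Building the command as a bash array keeps the filtergraph a single argument; a minimal sketch with a lavfi source in place of the RTMP inputs:

```shell
# Build the command as an array: each element stays exactly one argument,
# so the filtergraph needs no embedded quote characters at all.
args=(ffmpeg -hide_banner -v error -y
      -f lavfi -i "testsrc=size=320x240:rate=25:duration=1"
      -filter_complex "nullsrc=size=640x240:duration=1 [base]; [base][0:v] overlay [out]"
      -map "[out]" -f null -)
"${args[@]}"
```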
For the time being I am doing
ProcessStartInfo ffmpeg = new ProcessStartInfo();
ffmpeg.CreateNoWindow = false;
ffmpeg.UseShellExecute = false;
ffmpeg.FileName = "e:\ffmpeg\ffmpeg.exe";
ffmpeg.Arguments = "for file in (D:\\Day\\*.jpg); do ffmpeg -i \"$file\" -vf fps=1/60 -q:v 3 \"D:\\images\\out.mp4\"; done;";
ffmpeg.RedirectStandardOutput = true;
Process x = Process.Start(ffmpeg);
Here I'm getting an exception saying the system cannot find the specified file.
For the time being I'm taking all the files matching D:\Day\*.jpg, but actually I need to query individual files from a list.
Where am I going wrong in the above scenario?
You need to create a separate text file with the image names and use that text file to create your video.
Inside frameList.txt:
file 'D:\20180205_054616_831.jpg'
file 'D:\20180205_054616_911.jpg'
file 'D:\20180205_054617_31.jpg'
file 'D:\20180205_054617_111.jpg'
and in the Arguments of the process use:
"-report -y -r 15/1 -f concat -safe 0 -i frameList.txt -c:v libx264 -s 1920x1080 -b:v 2000k -vf fps=15,format=yuv420p out.mp4"