ffmpeg choose exact frame from long film strip - ffmpeg

I'm working with ffmpeg to choose the best thumbnail for my video, and the selection will be based on a slider.
Per the requirement, multiple separate thumbnails are not needed: just a single long film-strip image, from which the thumbnail is selected with the slider and saved.
I used the command below to get the long-strip thumbnail:
ffmpeg -loglevel panic -y -i "video.mp4" -frames 1 -q:v 1 -vf "select=not(mod(n\,40)),scale=-1:120,tile=100x1" video_preview.jpg
I followed the instructions from this tutorial and I'm able to get the long film image. This part works fine: the image moves with the slider as expected.
My question is: how can I select a particular frame from that slider / film strip? How can I calculate the exact timestamp from the slider position and then run a command to extract that frame?
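One way to map the slider position back to a timestamp (my own sketch, not from the original post; it assumes the strip was built with select=not(mod(n\,40)) and tile=100x1 as above, and that you know the source frame rate):

```python
def slider_to_timestamp(slider_x, tile_width, frame_step=40, fps=25.0):
    """Map a pixel position on the film strip to a timestamp in the video.

    slider_x:   horizontal pixel position of the slider on the strip
    tile_width: width in pixels of one tile in the strip
    frame_step: every Nth frame was selected (40 in the command above)
    fps:        frame rate of the source video (assumed known)
    """
    tile_index = int(slider_x // tile_width)  # which tile the slider is over
    frame_number = tile_index * frame_step    # source frame shown in that tile
    return frame_number / fps                 # timestamp in seconds
```

You can then extract that exact frame with something like: ffmpeg -ss <timestamp> -i video.mp4 -frames:v 1 thumb.jpg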

In one of my projects I implemented the scenario below. In the code I get the video duration from the ffprobe command; if the duration is less than 0.5 seconds I set a shorter thumbnail interval. In your case you should set the time interval for thumbnail creation accordingly. Hope this helps.
$dur = 'ffprobe -i '.$video.' -show_entries format=duration -v quiet -of csv="p=0"';
$duration = exec($dur);
if ($duration < 0.5) {
    $interval = 0.1;
} else {
    $interval = 0.5;
}
// screenshot size
$size = '320x240';
// ffmpeg command
$cmd = "ffmpeg -i $video -deinterlace -an -ss $interval -f mjpeg -t 1 -r 1 -y -s $size $image";
exec($cmd);

You can try this (replace duration and filename with your own values; it seeks to two points, grabs one frame at each, and tiles them side by side):
ffmpeg -vsync 0 -ss duration -t 0.0001 -noaccurate_seek -i filename -ss 00:30 -t 0.0001 -noaccurate_seek -i filename -filter_complex "[0:v][1:v]concat=n=2[con];[con]scale=80:60:force_original_aspect_ratio=decrease[sc];[sc]tile=2x1[out]" -map "[out]" -frames 1 -f image2 filmStrip.jpg
This produces a two-frame strip.

Related

ffmpeg - create a morph video between 2 images (solid hex-color PNGs) alternating every 0.55 seconds, for a length of 10 seconds

Currently I create two PNG images with ImageMagick (convert -size 640x480 xc:#FF0000 hex1.png, and similarly hex2.png) and save both files.
Now I need the following (but I have no idea how to do it; maybe with ffmpeg?):
Create a 640x480 video, for example 10 seconds long, like this:
0.00s (hex1) > 0.55s (hex2) > 1.10s (hex1) > 1.65s (hex2) > 2.20s (hex1) ... until the 10 seconds are reached.
The hex1 and hex2 images should always morph/fade hex1 => hex2 => hex1, ...
But the timing is critical: the interval must always be exactly 0.55 seconds.
Maybe I can generate the hex colors directly the same way, without first creating PNG images for this purpose.
Can anybody help me find the best way to do this?
Thank you so much and many greets, iceget
Currently I have created only a single video from one image, with this command:
ffmpeg -loop 1 -i hex1.png -c:v libx264 -t 10 -pix_fmt yuv420p video.mp4
Here is one approach to achieving your goal without pre-creating images:
ffmpeg -hide_banner -y \
-f lavfi -i color=c=0x0000ff:size=640x480:duration=0.55:rate=20 \
-filter_complex \
"[0]fade=type=out:duration=0.55:color=0xffff00[fade_first_color]; \
[fade_first_color]split[fade_first_color1][fade_first_color2]; \
[fade_first_color1]reverse[fade_second_color]; \
[fade_first_color2][fade_second_color]concat[fade_cycle]; \
[fade_cycle]loop=loop=10/(0.55*2):size=0.55*2*20,trim=duration=10" \
flicker.mp4
Since the loop filter operates on frames, not seconds, and you have timing constraints, you should choose only FPS rates corresponding to an integer number of frames within the 0.55-second period (e.g. 20, 40, 60).
The filtergraph is self-explanatory.
Almost universal way (added in response to the OP's new questions)
#!/bin/bash
# Input parameters
color_1=0x0000ff
color_2=0xffff00
segment_duration=0.55
total_duration=10
# Magic calculations
sd_numerator=${segment_duration#*.}
sd_denominator=$(( 10**${#sd_numerator} ))
FPS=$(ffprobe -v error -f lavfi "aevalsrc=print('$sd_denominator/gcd($sd_numerator,$sd_denominator)'\,16):s=1:d=1" 2>&1)
FPS=${FPS%.*}
# Prepare an output a little longer than total_duration
# and mark the cut point with a forced keyframe
ffmpeg -hide_banner -y \
-f lavfi -i color=c=$color_1:size=640x480:duration=$segment_duration:rate=$FPS \
-filter_complex \
"[0]fade=type=out:duration=$segment_duration:color=$color_2[fade_first_color]; \
[fade_first_color]split[fade_first_color1][fade_first_color2]; \
[fade_first_color1]reverse[fade_second_color]; \
[fade_first_color2][fade_second_color]concat[fade_cycle]; \
[fade_cycle]loop=loop=ceil($total_duration/($segment_duration*2))+1: \
size=$segment_duration*2*$FPS,fps=fps=25" \
-force_key_frames $total_duration \
flicker_temp.mp4
# Fine cut of total_duration
ffmpeg -hide_banner -y -i flicker_temp.mp4 -to $total_duration flicker_${total_duration}s.mp4
# Clean up
rm flicker_temp.mp4
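The "magic calculations" in the script find the smallest integer frame rate at which the segment duration is a whole number of frames. The same idea in Python, as a sketch for checking candidate values (not part of the original script):

```python
from math import gcd

def min_fps(segment_duration: str) -> int:
    """Smallest integer FPS such that segment_duration spans a whole number of frames."""
    frac = segment_duration.split(".")[1]  # fractional digits, e.g. "55"
    numerator = int(frac)                  # 55
    denominator = 10 ** len(frac)          # 100
    # 0.55 s = 55/100 s; reduce the fraction, the reduced denominator is the FPS
    return denominator // gcd(numerator, denominator)
```

For 0.55 s this yields 20 fps (exactly 11 frames per segment); any integer multiple of 20 also works, matching the rates suggested above.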

Use ffmpeg to record 2 webcams on raspberry pi

I want to record 2 webcams using ffmpeg. I have a simple Python script, but it doesn't work when I run the two subprocesses at the same time.
import datetime
import os
import subprocess

ROOT_PATH = os.getenv("ROOT_PATH", "/home/pi")
ENCODING = os.getenv("ENCODING", "copy")
new_dir = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
# Note: both lookups read the same env var name, so setting RECORDINGS_PATH
# sends both cameras to the same directory; only the defaults differ.
RECORDINGS_PATH1 = os.getenv("RECORDINGS_PATH", "RecordingsCam1")
RECORDINGS_PATH2 = os.getenv("RECORDINGS_PATH", "RecordingsCam2")
recording_path1 = os.path.join(ROOT_PATH, RECORDINGS_PATH1, new_dir)
recording_path2 = os.path.join(ROOT_PATH, RECORDINGS_PATH2, new_dir)
os.makedirs(recording_path1)  # makedirs also creates missing parent dirs
os.makedirs(recording_path2)
segments_path1 = os.path.join(recording_path1, "%03d.avi")
segments_path2 = os.path.join(recording_path2, "%03d.avi")
record1 = "ffmpeg -nostdin -i /dev/video0 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path1)
record2 = "ffmpeg -nostdin -i /dev/video2 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path2)
subprocess.Popen(record1, shell=True)
subprocess.Popen(record2, shell=True)
Also, I tried capturing the two sources side by side, but it gives the error: `Filtering and streamcopy cannot be used together`.
This has nothing to do with running two processes at the same time. FFmpeg states that it cannot find /dev/video0 and /dev/video2, which suggests your cameras are not being detected. You can check this with the following command:
$ ls /dev/ | grep video
This lists all devices with "video" in their name. If video0 and video2 do not exist, it's clear why FFmpeg gives this error. If they do exist, I do not know how to resolve this; you may try running the FFmpeg commands directly in a terminal.
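Separately, passing the command to Popen as an argument list (instead of a shell string) avoids quoting issues once both recorders do run. A minimal sketch, assuming the same options as in the question:

```python
import subprocess

def build_record_cmd(device, encoding, segments_path):
    """Build the ffmpeg segment-recording command as an argument list."""
    return [
        "ffmpeg", "-nostdin",
        "-i", device,             # e.g. /dev/video0
        "-c:v", encoding,         # e.g. "copy"
        "-an", "-sn", "-dn",      # drop audio, subtitle, data streams
        "-segment_time", "30",    # 30-second segments
        "-f", "segment", segments_path,
    ]

# procs = [subprocess.Popen(build_record_cmd(d, "copy", p))
#          for d, p in [("/dev/video0", "cam1/%03d.avi"),
#                       ("/dev/video2", "cam2/%03d.avi")]]
```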

ffmpeg downloading parts of Youtube videos but for some of them they have a black screen for a few seconds

So I'm using ffmpeg to download some YouTube videos with specific start and stop times. My code looks like os.system("ffmpeg -i $(youtube-dl --no-check-certificate -f 18 --get-url %s) -ss %s -to %s -c:v copy -c:a copy %s" % (l, y, z, w)), where the variables are the file name, the URL, and the start and stop times. Some of the videos come out just fine; others have a black screen and only a portion of the video, and a few have just audio. My time is formatted as x.y, where x is seconds and y is milliseconds. Is this the issue, so I need to transform it to the 00:00:00.0 format? Any help is appreciated.
os.system("ffmpeg -ss %s -i $(youtube-dl --no-check-certificate -f 18 --get-url %s) -t %s -c:v copy -c:a copy %s"% (l, y, z, w))
-ss starts the video at the given position, in 00:00:00.0000 format
-t is the duration of the scene in seconds
For example, to extract a scene starting at second 30 with a duration of 3 seconds:
os.system("ffmpeg -ss 00:00:30.0000 -i $(youtube-dl --no-check-certificate -f 18 --get-url %s) -t 3 -c:v copy -c:a copy %s"% (l, y, z, w))
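If your times are plain x.y seconds, a small helper can convert them to the HH:MM:SS.mmm form used above (my own sketch; note ffmpeg's -ss also accepts plain seconds, so this is mainly for consistency with the example):

```python
def to_timestamp(seconds: float) -> str:
    """Convert seconds (e.g. 30.5) to ffmpeg's HH:MM:SS.mmm timestamp form."""
    h = int(seconds // 3600)
    m = int(seconds % 3600 // 60)
    s = seconds % 60
    return f"{h:02d}:{m:02d}:{s:06.3f}"
```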
Try this with Python :)
Adding '-c:a', 'copy' to the ffmpeg command line helps with the black picture / frames at the start of the video.
import os
import subprocess

import youtube_dl

def ydl_info():
    ydl_opts = {
        'format': 'bestvideo[height<=720][tbr>1][filesize>0.05M]',
        'outtmpl': '%(id)s.%(ext)s',  # Template for output names.
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        info = ydl.extract_info(
            'https://www.youtube.com/watch?v=HZhWTjnIn78',
            download=False  # False: just extract the info
        )
    return info

def ffmpeg_cut():
    info = ydl_info()
    # If there are several media formats, take the media url of the last one:
    URL = info['formats'][-1]['url']
    START = args.start_time  # args is assumed to come from argparse
    END = '00:00:03.00'
    OUTPUT = os.path.join(args.output, args.name + args.format)
    print('Output:', OUTPUT)
    # ffmpeg -ss 00:00:15.00 -i "OUTPUT-OF-FIRST-URL" -t 00:00:10.00 -c copy out.mp4
    subprocess.call([
        'ffmpeg',
        '-i', URL,
        '-ss', START,
        '-t', END,
        '-c:a', 'copy', OUTPUT,  # '-c:v copy' copies only video; '-c:a copy' only audio
    ])
    return None

ffmpeg overlay voice on a song with fade in and fade out

I have a fade-in and fade-out problem; I used the code below, but it doesn't completely solve it.
I have a voice track, voice.mp3, of voice_length seconds, and a song that is longer than the voice.
I want to mix the voice into the song starting at time start_mix_time.
When the voice starts, the song volume should drop to 0.2, and when the voice ends the volume should return to 1.0.
For example, with a 10 s voice track: the song starts playing; at 3 s it starts fading down to volume 0.2; at 5 s the voice starts over the song; 10 seconds later the song fades back up to volume 1 and plays to the end.
Here is a sample :
ffmpeg -i song1.mp3 -i voice2.mp3 -filter_complex "[0]asplit[a][b]; \
[a]atrim=duration=voice_length,volume='1-max(0.25*(t-start_mix_time-2),0)':eval=frame[pre]; \
[b]atrim=start=start_mix_time,asetpts=PTS-STARTPTS[song]; [song][1]amix=inputs=2:duration=first:dropout_transition=2[post]; \
[pre][post]concat=n=2:v=0:a=1[mixed]" \
-map "[mixed]" output.mp3
@Mulvya
For the example given: the volume fades from 1 to 0.2 between t=3 and t=5, then fades back to 1 from t=15 to t=17.
ffmpeg -i song.mp3 -i voice.mp3 -filter_complex
"[0]volume='1-max((t-start_fade_t)*0.8/2,0)':eval=frame:enable='between(t,3,5)',volume=0.4:eval=frame:enable='between(t,5,15)',volume='0.2+max((t-end_fade_t)*0.8/2,0)':eval=frame:enable='between(t,15,17)'[song]; \
[1]adelay=5000|5000[vo]; \
[song][vo]amix=inputs=2:duration=first:dropout_transition=0.01" \
output.mp3
Three volume filters are applied to the song - one for fade-in, one for fade-out and one during the overlay. Since the amix filter reduces the volume of its inputs, the overlay volume filter value is set to double the desired volume.
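The intended envelope can be sanity-checked numerically. This is my own illustration of the desired song volume over time for the example, before the doubling that compensates for amix:

```python
def song_volume(t, fade_out_start=3.0, fade_out_end=5.0,
                fade_in_start=15.0, fade_in_end=17.0, min_vol=0.2):
    """Desired song volume at time t: full, ramp down, hold under voice, ramp up, full."""
    if t < fade_out_start:
        return 1.0
    if t < fade_out_end:   # linear fade 1.0 -> min_vol
        return 1.0 - (t - fade_out_start) * (1.0 - min_vol) / (fade_out_end - fade_out_start)
    if t < fade_in_start:  # hold while the voice plays
        return min_vol
    if t < fade_in_end:    # linear fade min_vol -> 1.0
        return min_vol + (t - fade_in_start) * (1.0 - min_vol) / (fade_in_end - fade_in_start)
    return 1.0
```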
Finally, I solved it parametrically with this:
ffmpeg -i song.mp3 -i voice.mp3 -filter_complex \
"[0]volume='1-((t-start_song_fade_out)*fade_power) + min_song_volume':eval=frame:enable='between(t,start_song_fade_out,end_song_fade_out)', \
volume=min_song_volume:eval=frame:enable='between(t,end_song_fade_out,end_song_fade_out + voice_duration - 1)', \
volume='(t-end_song_fade_out + voice_duration - 1)*fade_power + min_song_volume' \
:eval=frame:enable='between(t,end_song_fade_out + voice_duration - 1,start_song_fade_in_back + end_song_fade_out-start_song_fade_out)' \
[song]; [1]adelay='(end_song_fade_out*1000)'|'(end_song_fade_out*1000)'[vo]; \
[song][vo]amix=inputs=2:duration=first:dropout_transition=0.01" -id3v2_version 3 output.mp3

ffmpeg 1 image with many frames

Is it possible to create thumbnails of a video using ffmpeg in this format:
I need to output a single image with vertical shots every 10 seconds.
I only know how to create one image with one frame:
<?php
$ffmpeg = '/usr/local/bin/ffmpeg';
$video = '1.mp4';
$image = '1.png';
$interval = 1;
$size = '300x210';
$cmd = "$ffmpeg -i $video -deinterlace -an -ss $interval -f mjpeg -t 1 -r 1 -y -s $size $image 2>&1";
$return = `$cmd`;
?>
You can do this with one ffmpeg command.
Example
ffmpeg -i alone_in_the_wilderness.mp4 -filter_complex \
"select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)',yadif,scale=240:-1,tile=1x3" \
-vframes 1 -t 30 -q:v 4 strip.jpg
Example with borders
tile=1x3:margin=10:padding=10
Also see
select, yadif, scale, tile filters documentation
Combine multiple images to form a strip of images ffmpeg
FFmpeg output screenshot gallery
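To adapt the tile layout to a video of arbitrary length, the number of rows follows from the duration. A sketch of my own, assuming one shot every 10 seconds as in the command above:

```python
import math

def tile_layout(duration_s, interval_s=10):
    """Tile filter argument for one vertical shot every interval_s seconds."""
    rows = math.ceil(duration_s / interval_s)  # round up so the tail is covered
    return f"tile=1x{rows}"
```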
Alternatively, you can extract one frame every 10 seconds with ffmpeg (e.g. 1.png, 2.png, 3.png) in a for loop and then stack the images vertically using ImageMagick:
convert 1.png 2.png 3.png -append vertical.png
