Is there any faster way to join images into a video?

I'm using ffmpeg, but it takes more than 10 minutes to convert 2000 images to mp4:
ffmpeg -y -r 1/5 -f concat -safe 0 -i ffmcommand.txt -c:v libx264 -vf fps=25,format=yuv420p out22.mp4
How could I do this faster? Please help me.
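No answer is quoted for this one, but note that 2000 images at 5 seconds each is about 10000 seconds of output, which at 25 fps is roughly 250,000 frames to encode, so most of the time goes into libx264. A common general speed-up (my suggestion, not from this thread) is a faster encoder preset, trading file size for encoding speed:
ffmpeg -y -r 1/5 -f concat -safe 0 -i ffmcommand.txt -c:v libx264 -preset ultrafast -vf fps=25,format=yuv420p out22.mp4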

Related

ffmpeg: HLS to mp4

I have one m3u8 file with ts segments. I am trying to convert a part of it to mp4 using the below command.
ffmpeg -i playlist.m3u8 -ss 30 -t 120 -c copy -bsf:a aac_adtstoasc -flags +global_header -y output.mp4
I manually calculated where my segments are located and concatenated them to form output.ts, and then converted that to mp4 using the commands below.
ffmpeg -f concat -safe 0 -i <(for f in ./*.ts; do echo "file '$PWD/$f'"; done) -c copy output.ts
ffmpeg -i output.ts -c copy -bsf:a aac_adtstoasc -flags +global_header -y output.mp4
I found that the second approach takes far less time than the first one, on the order of tens of seconds. Please let me know whether the comparison makes sense and why there is so much difference between the two.
I was using -ss incorrectly for the live stream.
-ss has to be used alongside -live_start_index 0, before the input file option -i input.m3u8.
For the live-streaming-from-FFmpeg part, one should use -f hls -hls_playlist_type event rather than -f segment -segment_list_flags live for seeking to work on a live stream.
As mentioned in the documentation for -ss, the seek doesn't start exactly at the 15th second, and the duration is also not honoured (< 30 secs).
ffmpeg -live_start_index 0 -ss 15 -i playlist.m3u8 -t 00:00:30 -c copy -bsf:a aac_adtstoasc -flags +global_header -y input.mp4
When used with transcoding (i.e. without -c copy) and with -accurate_seek, the duration is fine, but the seek position is the same as with -c copy.
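A minimal sketch of that transcoding variant, assuming H.264/AAC output (the codec choices and the output name are my assumptions, not from the original post):
ffmpeg -live_start_index 0 -accurate_seek -ss 15 -i playlist.m3u8 -t 00:00:30 -c:v libx264 -c:a aac -y accurate.mp4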

FFmpeg slideshow concat outputs only the last image

I'm trying to produce an image slideshow with ffmpeg concat.
The problem is that the output video only shows the last image from my input file of images.
The input:
file '/var/www/html/docroot/types/video/images/img0.jpg'
duration 10
file '/var/www/html/docroot/types/video/images/img1.jpg'
duration 10
file '/var/www/html/docroot/types/video/images/img2.jpg'
duration 10
The command:
ffmpeg -y -r 1/10 -f concat -safe 0 -i /var/www/html/docroot/types/video/info.txt -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2,fps=30,format=yuv420p" /var/www/html/docroot/types/video/output.mp4
And in the output I have something like this: (animated GIF of the broken output omitted)
Remove -r 1/10 and ,fps=30:
ffmpeg -y -f concat -safe 0 -i /var/www/html/docroot/types/video/info.txt -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2,format=yuv420p" /var/www/html/docroot/types/video/output.mp4
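Separately, the concat demuxer has a commonly documented quirk for slideshows (this comes from the ffmpeg slideshow documentation, not from the answer above): the duration of the last entry may be ignored unless the final file is listed one extra time, e.g. the input file would end with
file '/var/www/html/docroot/types/video/images/img2.jpg'
duration 10
file '/var/www/html/docroot/types/video/images/img2.jpg'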

animation between images using FFmpeg

Hi, I am new to FFmpeg.
I have made a video from a slideshow of sequential images (img001.jpg, img002.jpg, img003.jpg, ...) using the following command on Ubuntu 14.04:
ffmpeg -framerate 1/5 -i img%03d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p -vf scale=320:240 out.mp4
But now I want to put an animation such as a fade-in/fade-out between each pair of sequential images in the generated video.
Can anybody help me with how to do this? I have searched a lot but could not figure it out.
The best way to do this is to create an intermediate MPEG for each image and then concatenate them all into a video. For example, say you have 5 images; you would run this for each of the images to create the intermediate MPEGs with a fade-in at the beginning and a fade-out at the end.
ffmpeg -y -loop 1 -i image -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" -c:v mpeg2video -t 5 -q:v 1 image-1.mpeg
where -t is the duration, in seconds, of each image. Once you have all of these MPEGs, you use ffmpeg's concat filter to combine them all into an mp4.
ffmpeg -y -i image-1.mpeg -i image-2.mpeg -i image-3.mpeg -i image-4.mpeg -i image-5.mpeg -filter_complex '[0:v][1:v][2:v][3:v][4:v] concat=n=5:v=1 [v]' -map '[v]' -c:v libx264 -s 1280x720 -aspect 16:9 -q:v 1 -pix_fmt yuv420p output.mp4
This gives you the desired video and is the simplest and highest quality solution with ffmpeg. Let me know if you have any questions about how the above command works.
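If your ffmpeg build is 4.3 or newer, the xfade filter is an alternative way to get a crossfade between two stills in a single command. This is only a sketch under my own assumptions (both images have the same dimensions, arbitrary 5-second durations and a 1-second fade), not part of the answer above:
ffmpeg -y -loop 1 -t 5 -i img001.jpg -loop 1 -t 5 -i img002.jpg -filter_complex "[0:v][1:v]xfade=transition=fade:duration=1:offset=4,format=yuv420p" -c:v libx264 crossfade.mp4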

Ffmpeg to convert gif to webm with reverse function

I'm trying to convert a GIF file to a WebM file using the command below, which works fine. However, I'm wondering whether it's also possible to reverse it using ffmpeg, or would I need to reverse it with ImageMagick first and then convert it using ffmpeg?
ffmpeg -i your_gif.gif -c:v libvpx -crf 12 -b:v 500K output.webm
Any help is appreciated
The script posted here might help you.
It appears to be written in bash, but lifting the commands out of it should work on Windows as well.
https://github.com/WhatIsThisImNotGoodWithComputers/ffmpeg-webm-scripts
These are the relevant lines of code (note that they need to be edited for your setup):
ffmpeg -i "${INPUT_FILE}" -ss $START_TIME -to $TO_TIME -an -qscale 1 $TEMP_FOLDER/%06d.jpg
cat $(ls -r $TEMP_FOLDER/*jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -c:v libvpx -crf 20 -b:v $FRAMERATE $CROPSCALE -threads 0 -an $OUTPUT_FILE
You basically have to convert all stills to jpgs and then back into webm, but in reverse order.
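If your ffmpeg build is recent enough, the reverse video filter can skip the intermediate stills entirely; note that it buffers the whole clip in memory, so it is only practical for short GIFs. This is a general ffmpeg feature, not something from the linked script:
ffmpeg -i your_gif.gif -vf reverse -c:v libvpx -crf 12 -b:v 500K reversed.webm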
As ffmpeg --help mentions, you can see what codecs ffmpeg supports with ffmpeg -codecs; ffmpeg -codecs | grep -i gif on my machine says it supports GIF.
ffmpeg picks the output format from the file extension if you don't override it, so
ffmpeg -i onoz.webm onoz.gif
does the trick just fine.

convert raw video .y4m to mpg video with certain GOP

I want to compress a raw .y4m video to mpg, and then extract the frames from the mpg video. I need the GOP structure of the compression to be IBBPBBPBBPBBPBB IBBP..., i.e. 15:2.
I used this command:
ffmpeg -i video.y4m -vcodec libx264 -sameq -y -r 30 output.avi 2>list.txt
ffmpeg -i output.avi -vcodec libx264 -y -sameq -vf showinfo -y -f image2 image%3d.jpeg -r 30 2>list1.txt
The output contains only 2 I-frames, 100 P-frames and 198 B-frames, so it is not a 15:2 GOP. What should I do?
I need one I-frame every 15 frames, and the pattern to be IBBPBBP...
Sorry, I'm new to ffmpeg. Please help me; this is the input to my project and an important step for me.
Try (according to http://ffmpeg.org/ffmpeg.html#Video-Encoders)
ffmpeg -i video.y4m -vcodec libx264 -g 15 -y -r 30 output.avi
I think the option -sameq (which means "same quantizers") is not needed in your case.
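To also get closer to the IBBP pattern (two B-frames between reference frames) in addition to the 15-frame GOP, a sketch along these lines might help; -bf caps consecutive B-frames, and disabling adaptive B-frame placement via -x264opts b-adapt=0 is my own assumption for keeping the pattern regular, since x264 may otherwise vary it:
ffmpeg -i video.y4m -vcodec libx264 -g 15 -bf 2 -x264opts b-adapt=0 -y -r 30 output.avi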
