It works very well, but the video does not end when the scrolling text runs out; it keeps running forever, showing only the background image after the text is gone. My command is below. I want the output to stop when the text is over. Please help me complete it.
ffmpeg -loop 1 -i "C:\Users\Cu\Desktop\Lam\Lam\a.jpg" -i "C:\Users\Cu\Desktop\Lam\Lam\a.mp3" -vf drawtext='fontfile="Arial"\:style=bold:fontsize=70:textfile="C\:/Users/CuEm/Desktop/91990756.txt":fontcolor=#FFFFFF':x=0:y=h-20*t,format=yuv420p,scale=852x480,setsar=1:1 -vcodec libx264 -b:v 1000k -preset superfast "C:\Users\Cu\Desktop\b_o.mp4"
The drawtext filter has no concept of completion or progress. Assuming that you want the video to stop when the audio does, add -shortest.
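For example, a sketch based on the command from the question (everything unchanged except the added -shortest, so the output ends with the audio, the shortest non-looped input):
ffmpeg -loop 1 -i "C:\Users\Cu\Desktop\Lam\Lam\a.jpg" -i "C:\Users\Cu\Desktop\Lam\Lam\a.mp3" -vf drawtext='fontfile="Arial"\:style=bold:fontsize=70:textfile="C\:/Users/CuEm/Desktop/91990756.txt":fontcolor=#FFFFFF':x=0:y=h-20*t,format=yuv420p,scale=852x480,setsar=1:1 -vcodec libx264 -b:v 1000k -preset superfast -shortest "C:\Users\Cu\Desktop\b_o.mp4"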
Related
Before posting I have searched and found similar questions on stackoverflow (I list some below) - none have helped me towards a solution, hence this post. My case differs from most posts I have seen in how long each image is shown within the movie file.
A camera captures 1 image every 30 seconds. I need to stream them, preferably via HLS, so I wrap 2 images in an MP4 and then convert the MP4 to mpegts. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, and each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops and how I can get it to play all the TS files that I expect to create, not just the first TS file? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 with AMI2 Linux.
I have four jpgs named image11.jpg, image12.jpg, image21.jpg, image22.jpg - The images look near-identical, as only the timestamp in the top left changes.
The following command creates 1.mp4, using image11.jpg and image12.jpg, each image displayed for 30 seconds, for a total mp4 duration of 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an mpegts file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps except specific to image21.jpg and image22.jpg, creating 2.mp4 and 2.ts
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
Thus now I have 1.mp4, 1.ts, 2.mp4, 2.ts and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
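If the segment really is one minute long, that should print something like (exact decimals may differ):
{
    "format": {
        "duration": "60.000000"
    }
}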
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60.000
#EXTINF:60.0000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time in the ffprobe -show_streams output, my guess is that they all start at zero/near-zero which will cause playback to fail after your first segment.
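For example, a quick check (a sketch; run it against each segment and compare the values):
ffprobe -v quiet -show_entries stream=start_pts,start_time -print_format json 1.ts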
You can still produce them independently but you will want to use something like -output_ts_offset to correctly set the timestamps for subsequent segments.
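A sketch for the second segment, assuming the first one is exactly 60 seconds long: shift its timestamps by 60 when remuxing, so it starts where segment 1 ends:
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -output_ts_offset 60 -f mpegts 2.ts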
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg will loop continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image over the existing test.jpg, which in effect changes what is streamed out.
Note: the command below is all one line; I have inserted line breaks to aid reading. The order of the parameters is important: -loop and -fflags +genpts, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native frame rate, i.e. stream in real time
-loop 1 turns input looping on (0 turns it off)
-fflags +genpts is something I only half understand. PTS (presentation timestamps) I believe tell the player when each frame should be shown, and without this flag the timestamps reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received, a separate script overwrites this image (see the sketch after this list). Combined with -loop, this means the ffmpeg output will pick up the new image.
-c:v libx264 selects H.264 video encoding for the output
-vf fps=25 sets the output frame rate; removing this, or using a different value, resulted in my output segments not lasting 30 seconds.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but that did not work in my environment). I believe yuvj420p is the full-range colour variant commonly used by JPEGs, and this switch ensures I can process a wider choice of input images.
-crf 30 sets the constant rate factor, i.e. the quality/compression trade-off. (Note that lower values mean higher quality and less compression, so 30 is fairly compressed; choose a lower value if image quality matters, as it did for my client.)
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic that goes with the loop. I believe it keeps the connection to my streaming server open and reduces the risk of DISCONTINUITY in the playlist.
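For completeness, a minimal sketch of the separate updater script mentioned above (the incoming file name upload.jpg is hypothetical). The point is to write to a temporary file first and then rename, so ffmpeg never reads a half-written test.jpg:
#!/bin/sh
# Hypothetical updater, called whenever the camera posts a new image.
# Copy to a temp file on the same filesystem, then rename atomically.
cp upload.jpg test.jpg.tmp
mv test.jpg.tmp test.jpg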
I hope this helps someone going forward.
The following links helped nudge me forward, and I share them as they might help others to improve upon my solution
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recursion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide
I am cutting out silent parts of a 45-minute video (a lecture).
To do this, I use a filter to select, say one hundred, non-silent parts (I already know their start and end times).
ffmpeg -i in.mp4
-vf "select='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', setpts=N/FRAME_RATE/TB"
-af "aselect='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', asetpts=N/SR/TB"
-c:a aac -c:v libx264 out.mp4
It works, but at the end of the video the images are delayed relative to the audio.
After reading this answer I also added
-shortest -avoid_negative_ts make_zero -fflags +genpts
at the end of the command. It didn't help.
As audio and video are concatenated independently, I'm not surprised that tiny timing errors due to the finite frame rate add up.
Is there a solution that doesn't involve saving every non-silent part as a file?
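For reference, a concrete two-segment instance of the command above (the times are made up):
ffmpeg -i in.mp4 -vf "select='between(t,5,65)+between(t,80,140)', setpts=N/FRAME_RATE/TB" -af "aselect='between(t,5,65)+between(t,80,140)', asetpts=N/SR/TB" -c:a aac -c:v libx264 out.mp4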
I am trying to apply a filter to only the first few seconds of a video clip - and leave the rest of the video unchanged.
why?
I got some video clips that I wanted to put on a website. Unfortunately those clips start with a black background, which does not fit the website's design. Therefore I set out to change the background to transparent.
I got that filter working from many of the great answers here (thanks to Gyan) and those videos are playing fine in common browsers:
ffmpeg -i ${1} -filter_complex "[0]split[m][a];
[a]geq='if(lt(lum(X,Y),16),0,255)',hue=s=0[al];
[m][al]alphamerge,format=yuva420p" -c:v libvpx-vp9 -b:v 0 -crf 18 -an -auto-alt-ref 0 ${1}.webm
The problem now: of course this replaces all black pixels throughout the video, which leads to many artefacts later on. Therefore I am searching for a way to apply that filter only to the first 5-ish seconds.
I think I need a second split and a crop or a trim and a concat filter with a timestamp - but I can't make it work :(
ffmpeg -i ${1} -filter_complex "[0]split[f][s];
[f]trim=start=0:duration=5[ft];
[s]trim=start=6[st];
[st]split[m][a];
[a]geq='if(lt(lum(X,Y),16),0,255)',hue=s=0[al];
[m][al]alphamerge,format=yuva420p[mal];
[ft][mal]concat" -c:v libvpx-vp9 -b:v 0 -crf 18 -an -auto-alt-ref 0 ${1}.webm
/edit: I am changing the subject slightly, to reflect the actual problem.
Use
ffmpeg -i ${1} -filter_complex "[0]split[m][a];
[a]geq='if(lt(lum(X,Y),16),0,255)',hue=s=0,drawbox=c=white:t=fill:enable='gte(t,6)'[al];
[m][al]alphamerge,format=yuva420p" -c:v libvpx-vp9 -b:v 0 -crf 18 -an -auto-alt-ref 0 ${1}.webm
Since we're adding an alpha plane, it has to be added to all frames. We just want to skip transparency after a certain point, so we use the drawbox filter to fill it with white starting at 6 seconds, before merging with the main video.
I have an mp4 that I want to overlay on top of a jpeg. The command I'm using is:
ffmpeg -y -i background.jpg -i video.mp4 -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy output.mp4
But for some reason the output is 0 seconds long, although the thumbnail does show the first frame of the video centred on the image properly.
I have tried using -t 4 to set the output's length to 4 seconds, but that does not work.
I am doing this on windows.
You need to loop the image. Since it then loops indefinitely, you must use the shortest option in overlay so the output ends when video.mp4 ends.
ffmpeg -loop 1 -i background.jpg -i video.mp4 -filter_complex \
"overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2:shortest=1" \
-codec:a copy -movflags +faststart output.mp4
See overlay documentation for more info.
Well, you should loop the image for the duration of the video. To do that, add -loop 1 before the input image; the image input then has an infinite duration. To control it, specify -shortest before the output file, which trims all the streams to the shortest duration among them. Alternatively, you can use -t to limit the image duration to the video length. This will do what you want.
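A sketch of the full command with those flags, using the same overlay as the other answer:
ffmpeg -loop 1 -i background.jpg -i video.mp4 -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -shortest -codec:a copy output.mp4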
Hope this helps!
I'm trying to create a video using the following code:
ffmpeg -loop 1 -r 5 -i video.png -r 5 -i progress.png -filter_complex "overlay=x='if(gte(t,0), -W+(t)*5, NAN)':y=H-h" -i video.mp3 -acodec copy video.mp4
I have the following files
video.png
this is a 1280x720 px still frame that is simply a background with a waveform of the video.mp3 file
progress.png
this is simply a 1280x100 px semi-transparent image that should be animated (from 0 to 100% of the width of the video.png file) in order to simulate a "fill up" animation.
My issues are as following:
The video is not in sync with the audio: the progress bar is way off; instead of finishing at the end of the song, it just keeps going on and on and on and on...
Also... it just keeps going on and on! I let it create a 1-hour video and it never stopped.
I know I'm missing something in the filter, but I have no idea how I could fix it.
Could someone lend me some help?
As Pranav said, use -shortest at the end of the command (before the output file) to sort out the duration issue.
Now, to sync the progress bar, you've got to figure out how far your overlaid picture needs to move per second. This is simple: you need to move your picture by "width of your video / duration of your video" pixels per second.
For instance, if you've got a 3-minute song and a video width of 1280:
3 minutes = 3x60 = 180 seconds.
"Width of your video / Duration of your video" = 1280 / 180 = 7.11 pixels / second.
7.11 is the value to use instead of 5 in -W+(t)*5.
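If you don't want to do that arithmetic by hand, here is a sketch in shell (assuming the audio file is video.mp3 and the frame width is 1280):
# Get the audio duration in seconds with ffprobe
DUR=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 video.mp3)
# Pixels the overlay must travel per second: width / duration
awk -v d="$DUR" 'BEGIN { printf "%.2f\n", 1280 / d }'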
I hope this is clear enough.
Sorry for answering so late!
@AJ29: I decided to give up on the whole overlay video deal and you were spot on.
The overlay filter was processing with yuv420, thus moving the image in 2px increments; setting overlay=format=yuv444 made the animation pixel-by-pixel.
ffmpeg -y -i "wanderhouse - Sugar.mp3" -loop 1 -i frame.png -i waveform.png -i progress.png -filter_complex "[1][2]overlay=x=0:y=H-h:eval=init[over];[over]drawtext=fontfile=impact.ttf:fontsize=72:text='WANDERHOUSE- - SUGAR':x=(w-text_w)/2:y=(h-170-text_h):fontcolor=white:shadowy=2:shadowx=2:shadowcolor=black[text];[text][3] overlay=x='floor(if(lt(-W+(n)*0.248182258846,0), -W+(n)*0.248182258846))':y=H-h:format=yuv444" -shortest -acodec copy -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -crf 10 video.mp4 -report
Is what I came up with in the end. 0.248182258846 is the number of pixels I have to move progress.png each frame (n), i.e. the overlay width divided by (duration in seconds x frame rate).
Thanks for your tips!
@mark4o: I figured that out, eventually. I tried going the qtrle way of creating a video with an alpha channel and then setting it on top of my "other" video, but I failed miserably at syncing them and I gave up.
The end result of my work: https://www.youtube.com/watch?v=H8uHTIXO0p0