Okay, I know this question has been asked a bajillion times. However, I have one small addition that I haven't been able to find in my googling.
I'm certainly not a pro at FFmpeg... I've been using the standard speed-up/slow-down template; the one I'm using is:
ffmpeg -i input.mp4 -filter:v "setpts=PTS/60" -an output.mp4
I'm currently working with an hour-long 4K/60 FPS video... I want to shrink it down to about 30 seconds, so I'm using PTS/100, and I don't need audio... the problem is, this is taking FOREVER... which I completely expected.
But as I'm sitting here waiting for it to finish... I can't help but wonder: is there a faster/more efficient way to accomplish this? I know FFmpeg has a lot of quirks regarding the order of options, which affects seek speed, presets, etc.
You can use
ffmpeg -itsscale 0.016667 -i input.mp4 -c copy -an output.mp4
where 0.016667 is 1/60.
However, this will keep all frames, and if the input timebase doesn't have sufficient resolution, you'll get incorrect timestamps. You can work around that by creating a temp file first:
ffmpeg -i input.mp4 -c copy -video_track_timescale 90k -an temp.mp4
and then running the first command on this temp file.
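Putting both steps together for the 100x case in the question (a sketch; -itsscale multiplies the input timestamps, so 0.01 = 1/100):
ffmpeg -i input.mp4 -c copy -video_track_timescale 90k -an temp.mp4
ffmpeg -itsscale 0.01 -i temp.mp4 -c copy output.mp4
Since both steps are stream copies, nothing is re-encoded, which is why this is far faster than re-encoding through setpts.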
This sequence of commands may also help solve the issue (extract one frame per minute as images, then re-encode them into a video):
ffmpeg -i source.avi -r 0.016667 image/image%05d.bmp
ffmpeg -i image/image%05d.bmp -vcodec libx264 -b:v 500k -f avi video.avi
Before posting I searched and found similar questions on Stack Overflow (some listed below), but none helped me towards a solution, hence this post. The duration for which each image is shown within the movie file differs from most posts I have seen so far.
A camera captures one image every 30 seconds. I need to stream them, preferably via HLS, so I wrap two images in an MP4 and then convert the MP4 to MPEG-TS. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, and each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops and how I can get it to play all the TS files I expect to create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 instance with AMI2 Linux.
I have four JPGs named image11.jpg, image12.jpg, image21.jpg, image22.jpg. The images look near-identical, as only the timestamp in the top left changes.
The following command creates 1.mp4 using image11.jpg and image12.jpg, each image displayed for 30 seconds, for a total duration of 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an MPEG-TS file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps for image21.jpg and image22.jpg, creating 2.mp4 and 2.ts:
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
So now I have 1.mp4, 1.ts, 2.mp4, and 2.ts, and all four play fine individually.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60
#EXTINF:60.000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time values in the ffprobe -show_streams output; my guess is that they all start at zero/near zero, which will cause playback to fail after your first segment.
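To check, something like this should print those values for each segment (mirroring the ffprobe call you already ran):
ffprobe -v quiet -show_entries stream=start_pts,start_time -print_format json 1.ts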
You can still produce them independently, but you will want to use something like -output_ts_offset to correctly set the timestamps for subsequent segments.
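As a sketch, assuming the first segment is exactly 60 seconds long (the offset is in seconds), the second segment could be produced with its timestamps shifted accordingly:
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -output_ts_offset 60 -f mpegts 2.ts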
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg will loop continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image over the existing test.jpg, which in effect changes what is streamed out.
Note that the command below is all one line; I have added line breaks to aid reading. The order of the parameters is important: -loop and -fflags +genpts, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -fifo_format flv -attempt_recovery 1 -recovery_wait_time 1
rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native (real-time) frame rate
-loop 1 turns looping on (0 turns it off)
-fflags +genpts is something I only half understand. PTS (the presentation timestamp) determines when each frame is shown, and without this flag I believe the PTS is reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received via a separate script, they overwrite this image. Combined with -loop, this means the ffmpeg output will pick up the new image.
-c:v libx264 selects H.264 video encoding
-vf fps=25 sets the output frame rate. Removing this, or using a different value, resulted in my output stream not being 30 seconds.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but that did not work in my environment). I believe JPEGs come in different colour ranges, and this switch lets me process a wider choice.
-crf 30 sets the quality/size trade-off. Note that lower CRF values mean higher quality and less compression (the x264 default is 23), so where quality matters most (important for my client) a lower value is the safer choice.
-f fifo -fifo_format flv -attempt_recovery 1 -recovery_wait_time 1 rtmp://localhost:5555/video/test is part of the magic that goes with -loop. I believe it keeps the connection open to my streaming server and reduces the risk of DISCONTINUITY tags in the playlist.
I hope this helps someone going forward.
The following links helped nudge me forward, and I share them as they might help others improve upon my solution:
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recursion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide
I am developing an application that makes system calls to FFmpeg.
I found a way to get the drawtext filter isolated and faded out, but the render time increased about 5x.
I just want to see if there is something obviously wrong with the command I came up with.
ffmpeg -y -i input.mp4 -c:v libx264 -filter_complex "[0]scale=1920:1080,format=rgba,split[base][text];[text]drawtext=fontfile=font1.ttf:text='Text1':fontcolor='white':fontsize=34:box=1:boxcolor=mediumpurple:boxborderw=50:x=0:y=690,format=yuva444p,drawtext=fontfile=./resources/fonts/font2.ttf:text='Text2':fontcolor='white':fontsize=26:x=0:y=725,fade=t=out:st=12:d=0.2:alpha=1[title];[base][title]overlay" -force_key_frames "expr:gte(t,n_forced*0.05)" output.mp4
Yes! There is something obviously wrong with it. There is no need to split the stream and process both.
After some trial and error, I was able to put together this command, which runs much, much faster; there is virtually no overhead to adding the fading title card.
ffmpeg -i input.mp4 -filter_complex "scale=1920:1080,drawtext=fontfile=font1.ttf:text='Text1':fontcolor='white':fontsize=34:box=1:boxcolor=mediumpurple:boxborderw=50:x=12:y=690:alpha='min(between(t,0,2.2),lerp(0,1,(1+((2-t)/0.2))))',drawtext=fontfile=font2.ttf:text='Text2':fontcolor='white':fontsize=26:x=12:y=730:alpha='min(between(t,0,2.2),lerp(0,1,(1+((2-t)/0.2))))'" output.mp4
The need
Hello, I need to extract two regions of an .h264 video file via the crop filter into two files. The output videos need to be monochrome, with the .mp4 extension. The encoding (or format?) should guarantee that video frames are organized monotonically. Finally, I need to get the timestamps for both files (which, I'd bet, are the same timestamps I would get from the input file; see below).
In the end I will be happy to do everything in one command via an elegant one-liner (via a complex filter, I guess), but I am starting with multiple steps to break it down into simpler problems.
Along this path I run into many difficulties, and despite having searched in many places I don't seem to find solutions that work. Unfortunately I'm no expert in ffmpeg or video conversion, so the more I search and the more details I discover, the less I manage to solve.
Below are some of my attempts, working with the following options:
-filter:v "crop=400:ih:260:0,format=gray" to do the crop and the monochrome conversion
-vf showinfo possibly combined with -vsync 0 or -copyts to get the timestamps via stderr redirection &> filename
-c:v mjpeg to force monotony of frames (are there other ways?)
1. cropping each region and obtaining monochrome videos
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" outL.mp4
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:1280:0,format=gray" outR.mp4
The issue here is that in the output files the frames are not organized monotonically (I don't understand why; how would that make sense in any video format? I can't say whether it comes from the input file).
EDIT: Maybe it is not the frames but the packets, as returned by PyAV's .demux() method, that are not monotonic (see "instructions to reproduce..." below).
I got the advice to run ffmpeg -i outL.mp4 outL.mjpeg afterwards, but this produces two videos that look very pixellated (at least when played with ffplay) despite being, surprisingly, 4x bigger than the input. Needless to say, I need both monotonic frames and lossless conversion.
EDIT: I acknowledge the advice to specify -q:v 1; this fixes the pixellation but produces an even bigger file, ~12x in size. Is that necessary? (See "instructions to reproduce..." below.)
2. getting the timestamps
I found this piece of advice, but I don't want to generate hundreds of image files, so I tried the following:
$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 &>tsL.txt
$ ffmpeg -y -hide_banner -i outR.mp4 -vf showinfo -vsync 0 &>tsR.txt
The issue here is that I don't get any timestamps, because ffmpeg claims it needs an output file.
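(For reference, ffmpeg's null muxer can satisfy the output-file requirement without writing anything; a sketch keeping the options above:
ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 -f null - &>tsL.txt
The trailing - is the dummy output "file" consumed by the null muxer.)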
The need to produce an output file, and the doubt that the timestamps could be lost in the previous conversions, lead me back to a first attempt at a one-liner, where I also test the -copyts option and force the encoding with -c:v mjpeg as per the advice mentioned above (I don't know whether it's in the right position, though):
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray" -vf showinfo -c:v mjpeg eyeL.mp4 &>tsL.txt
This does not work: surprisingly, the output .mp4 I get is the same as the input (apparently because -vf is just an alias for -filter:v, so the showinfo graph replaces the crop graph). If instead I put the -vf showinfo option just before the stderr redirection, I get no redirected output:
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:260:0,format=gray" -c:v mjpeg outR.mp4 -vf showinfo dummy.mp4 &>tsR.txt
In this case I get the desired timestamp output (too much of it: I will need some way to grab only the pts and pts_time data), but I have to produce a big dummy file. Worst of all, the mjpeg encoding again produces a low-resolution, very pixellated video.
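Grabbing just those fields is a plain text-processing step; a possible sketch with grep (assuming showinfo's usual "pts: N pts_time:T" log format):
grep -o 'pts_time:[0-9.]*' tsL.txt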
I admit that the logic of how to place the options and the output files on the command line is obscure to me. The possible combinations are many, and the more options I try, the more complicated it gets, without bringing me much closer to the solution.
3. [EDIT] instructions to reproduce this
get a .h264 video
turn it into .mp4 with the ffmpeg command $ ffmpeg -i inVideo.h264 out.mp4
run the following python cell in a jupyter-notebook
see that the packet timestamps have diffs both greater than and less than zero
%matplotlib inline
import av                      # PyAV, the Python bindings for FFmpeg
import numpy as np
import matplotlib.pyplot as mpl

fname, ext = "out", "mp4"      # the file created in step 2 above
# packet timestamps, straight from the demuxer
cont = av.open(f"{fname}.{ext}")
pk_pts = np.array([p.pts for p in cont.demux(video=0) if p.pts is not None])
# reopen the container (the iterator is exhausted) and take decoded-frame timestamps
cont = av.open(f"{fname}.{ext}")
fm_pts = np.array([f.pts for f in cont.decode(video=0) if f.pts is not None])
print(pk_pts.shape, fm_pts.shape)
# plot successive differences; negative values mean non-monotonic timestamps
mpl.subplot(211)
mpl.plot(np.diff(pk_pts))
mpl.subplot(212)
mpl.plot(np.diff(fm_pts))
finally, create the mjpeg-encoded files in various ways, and check packet monotonicity with the same script (see also the file sizes)
$ ffmpeg -i inVideo.h264 out.mjpeg
$ ffmpeg -i inVideo.h264 -c:v mjpeg out.c_mjpeg.mp4
$ ffmpeg -i inVideo.h264 -c:v mjpeg -q:v 1 out.c_mjpeg_q1.mp4
Finally, the question
What is a working way / the right way to do it?
Any hints, even about single steps and how to combine them correctly, will be appreciated. Also, I am not limited to the command line; I would be able to try a more programmatic solution in Python (Jupyter notebook) if someone points me in that direction.
I'm first burning the subtitles of an MKV and then adding a watermark, which is taking very long to convert one video. It takes about 2x the time, I guess. For example, on my current server each command takes 30 minutes. My server may not be good enough, but I was wondering: is there a way to do this in one command instead? Will it affect the speed? I really have almost zero knowledge of ffmpeg.
Here is the command for burning the subtitles. I'm using Python to run it:
ffmpeg -i /Users/Test/Desktop/test.mkv -vf subtitles=/Users/Test/Desktop/test.mkv -c:v libx264 -c:a aac -preset ultrafast -strict -2 /Users/Test/Desktop/test.mp4
The command for adding the watermark:
ffmpeg -i /Users/Test/Desktop/test.mp4 -i /Users/Test/Desktop/watermark-logo.png -filter_complex "[1][0]scale2ref=w='iw*10/100':h='ow/mdar'[wm][vid]; [vid][wm]overlay=main_w-overlay_w-5:main_h-overlay_h-5" /Users/Test/Desktop/output.mp4
If there are more ways to speed this up, kindly let me know. All I want is to achieve this faster while keeping the best result.
Thank you.
First apply the subtitles to the video, and then feed that to scale2ref inside the complex filtergraph.
Use
ffmpeg -i /Users/Test/Desktop/test.mkv -i /Users/Test/Desktop/watermark-logo.png -filter_complex "[0]subtitles=/Users/Test/Desktop/test.mkv[v];[1][v]scale2ref=w='iw*10/100':h='ow/mdar'[wm][vid]; [vid][wm]overlay=main_w-overlay_w-5:main_h-overlay_h-5" -preset fast /Users/Test/Desktop/output.mp4
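One more possible time-saver, assuming the MKV's audio track is already MP4-compatible (e.g. AAC): stream-copy the audio instead of re-encoding it by adding -c:a copy:
ffmpeg -i /Users/Test/Desktop/test.mkv -i /Users/Test/Desktop/watermark-logo.png -filter_complex "[0]subtitles=/Users/Test/Desktop/test.mkv[v];[1][v]scale2ref=w='iw*10/100':h='ow/mdar'[wm][vid]; [vid][wm]overlay=main_w-overlay_w-5:main_h-overlay_h-5" -c:a copy -preset fast /Users/Test/Desktop/output.mp4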
I've been using ffmpeg quite a lot in the past few weeks, and recently I've encountered a very annoying issue: when I use ffmpeg with an input stream (usually just a URL as the input) and try to set a start time (with the -ss option), I always get a warning that says "could not seek to position: XXX".
Then ffmpeg just starts to download the file, and it outputs nothing until it has downloaded enough data to reach my desired start time.
I'll give an example:
I use this command to execute ffmpeg:
ffmpeg -ss 50 -re -i https://ascent.usbank.com/acp/videos/041114ascent.flv -b:a 128k -ac 2 -acodec libvorbis -b:v 1024k -vcodec libtheora -strict 2 -preset ultrafast -tune zerolatency -pix_fmt yuv420p -f ogg pipe:1
and I get the warning message
https://ascent.usbank.com/acp/videos/041114ascent.flv: could not seek to position 50.000
Then it takes about 30 seconds until ffmpeg starts to output data to stdout. When I try this with longer videos (and longer seek times), it takes even longer.
My question is: what can I do? I guess it's impossible for ffmpeg to seek when it hasn't got the whole input stream... Am I wrong? Or is there any other solution?
Of course, I'm trying to avoid downloading the entire file from the web...
Thanks in advance!
Roee.
I guess you can't really do anything about it other than buffer the FLV locally and (eventually) seek within that.
Whether or not an HTTP resource allows seeking largely depends on the capabilities of the server, unfortunately...
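A minimal sketch of that buffer-locally approach (assuming curl is available; the local seek is then fast because the demuxer can use the file's index):
curl -o local.flv https://ascent.usbank.com/acp/videos/041114ascent.flv
ffmpeg -ss 50 -i local.flv -b:a 128k -ac 2 -acodec libvorbis -b:v 1024k -vcodec libtheora -pix_fmt yuv420p -f ogg pipe:1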