I have, say, 5 segments of audio, and I want to set each segment to a specific volume; but when I concat the segments, I'd like there to be a fade from the previous volume to the next volume.
I see the afade filter is for simply fading in/out. I see acrossfade, which would most likely be desirable, except that the video the audio will play over will not be cross-faded (acrossfade overlaps the segments, which would shorten the audio relative to the video).
I'm wondering if this can be done with something like aeval, or if there are any good ideas out there.
Maybe someone can explain this filter function or where to learn about the syntax:
Fade volume after time 10 with an annihilation period of 5 seconds:
volume='if(lt(t,10),1,max(1-(t-10)/5,0))':eval=frame
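Reading that expression: for t < 10 it returns 1 (unity gain), and afterwards it ramps linearly from 1 down to 0 over 5 seconds, with max(...,0) clamping it at silence; eval=frame makes ffmpeg re-evaluate the expression for every frame instead of only once. The same pattern can ramp between two arbitrary levels, which is what a segment-to-segment fade needs. A sketch (the times and levels here are made up): fade from 1.0 down to 0.5 between t=10 and t=12, then hold 0.5:
volume='if(lt(t,10),1,if(lt(t,12),1-0.25*(t-10),0.5))':eval=frame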
I realized I can use afade and just manipulate the start time of the fade, by calculating how long it takes to fade down to the next volume on a linear scale.
So to end at 50% volume on a 15-second video, I would do something like
afade=t=out:st=14:d=2
meaning I start a 2-second linear fade with 1 second of audio left, therefore leaving it at 50% volume when the clip ends.
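In full command form that would look something like this (file names are placeholders; afade's default curve is the linear "tri" curve, so the halfway point of the fade really is 50%):
ffmpeg -i in.wav -af "afade=t=out:st=14:d=2" out.wav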
Problem Description:
We have a camera that is sending video of a live sports game at 30 frames per second.
On the other side we have a screen that immediately displays every frame that arrives.
Assumptions
* frames will arrive in order
1. What will be the experience for a person who is watching the screen?
2. What can we do in order to improve it?
Your playback will have a highly variable framerate, which would cause visible artifacts during any smooth movement ...
To remedy this you need to implement an image FIFO that covers a bigger time span than your worst delay difference (ideally at least 2× more). So if you have a 300 ms - 100 ms delay difference and 30 fps, then the minimal FIFO size is:
n = 2 * (300-100) * 0.001 * 30 = 12 images
Now the playback should go like this:
1. init playback
Simply keep buffering incoming images into the FIFO until the FIFO is half full (i.e. it holds enough images to cover the biggest delay difference).
2. playback
Any incoming image is inserted into the FIFO at the time it is received (unless the FIFO is full, in which case you either wait until you have room for the new image or skip the frame). Meanwhile, in some thread or timer that runs in parallel, you fetch an image from the FIFO every 1/30 seconds and render it (if the FIFO is empty you re-use the last image, and you can even go back to step 1 again).
3. playback stop
Once the FIFO has been empty for longer than some threshold (no new frames are incoming), you stop the playback.
The FIFO size reserve and the point at which to start playback depend on the timing properties of the image source (so that it neither overruns nor underruns the FIFO)...
In case you need to implement your own FIFO class, a cyclic buffer of constant size is your friend (so you do not need to copy all the stored images on FIFO in/out operations).
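A minimal sketch of such a constant-size cyclic buffer (Python is used here purely as illustration; the class and method names are made up, and a real player would call put from the network thread and get from the render timer):

from threading import Lock

class FrameFIFO:
    def __init__(self, capacity):            # e.g. the 12 images computed above
        self.buf = [None] * capacity          # fixed storage, never reallocated
        self.head = 0                         # next slot to write
        self.tail = 0                         # next slot to read
        self.count = 0
        self.lock = Lock()                    # put/get run on different threads

    def put(self, frame):
        # Insert a frame; returns False when full (caller waits or drops the frame).
        with self.lock:
            if self.count == len(self.buf):
                return False
            self.buf[self.head] = frame
            self.head = (self.head + 1) % len(self.buf)
            self.count += 1
            return True

    def get(self):
        # Fetch the oldest frame, or None when empty (caller re-renders the last image).
        with self.lock:
            if self.count == 0:
                return None
            frame = self.buf[self.tail]
            self.tail = (self.tail + 1) % len(self.buf)
            self.count -= 1
            return frame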
I am looking for an easy way to set the time points for the fade filter, particularly for the fade-out at the end. Ideally this would be a time-based format. I understand the fade filter works based on frames, but is there a way to change that to a timestamp? In particular, I have a hard time getting the number of the last frame. Some means to tell the fade filter to start the fade 0.5 sec before the end would be awesome. Maybe something like:
-filter:v 'fade=out:-0.5:0.3'
Read: 'start the fade-out 0.5 sec before the end and have the fade take 0.3 sec', i.e. have 0.2 sec of black at the end.
I would also be OK if this worked in frame counts.
My grief right now is that the frame count reported e.g. by ffprobe seems to be roughly half of what it really is. A fade filter applied using the count provided by ffprobe turns my video black about halfway through the clip. I'm not sure what I'm missing there.
Again, my primary question is: how do I determine the correct frame number for the fade-out filter?
Thanks,
Gert
The fade filter does take time input, e.g. -vf fade=out:st=23:d=2. This starts a 2-second fade-out at t=23 seconds.
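There is no end-relative syntax in the fade filter itself, but you can compute the start time from the clip duration with ffprobe (a sketch; file names are placeholders):
dur=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4)
ffmpeg -i input.mp4 -vf "fade=out:st=$(echo "$dur - 0.5" | bc):d=0.3" output.mp4
With st = duration - 0.5 and d=0.3, the fade finishes 0.2 sec before the end, leaving the 0.2 sec of black asked for above.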
Problem
I want to convert a long movie into a series of animated GIFs.
Each GIF needs to be <5MB.
Is there any way to determine how large a GIF will be while it is being encoded?
Progress So Far
I can split the movie into individual frames:
ffmpeg -i movie.ogv -r 25 frameTemp.%05d.gif
I can then use convert from ImageMagick to create GIFs. However, I can't find a way to determine the likely file size before running the command.
Alternatively, I can split the movie into chunks:
ffmpeg -i movie.ogv -vcodec copy -ss 00:00:00 -t 00:20:00 output1.ogv
But I've no way of knowing whether, when I convert the file to a GIF, it will be under 5MB.
A 10-second scene with a lot of action may be over 5MB (bad!), while a static scene could be under 5MB (not a problem, but not very efficient).
Ideas
I think that what I want to do is convert the entire movie into a GIF, then find a way to split it by file size.
Looking at ImageMagick, I can split a GIF into frames, but I don't see a way to split it into animated GIFs of a certain size / length.
So, is this possible?
There currently is no "stop at this filesize" option in avconv that I'm aware of. It could, of course, be hacked together quite quickly, but the libav project doesn't do quick hacks, so it'll likely appear in ffmpeg first.
In addition to this, you are facing the problem that animated GIF is a very old format and thus does some rather strange things. Let me explain the way it normally works:
You create a series of frames from first to last and put them on top of one another.
You make all the "future" frames invisible, and set them to appear at their specific times.
In order to make the file smaller, you look "below" each new frame, and if the pixel underneath is the same, you set that particular pixel to transparent so the old value shows through.
That third step is the only temporal compression done in an animated GIF; without it the file size would be much larger (since every pixel would be saved again and again).
However, if you are unsure where the last split was, you cannot determine whether a pixel is the same as in the previous frames. After all, that particular frame may end up being the very first one in a file.
If the limit of 5MiB is soft enough to allow going a little over it, you can probably put something together that just keeps adding frame after frame and calculating the resulting file size right away. As soon as one file goes over the limit, just stop and use the next frame as the starting point for the next file.
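A rough sketch of that greedy approach in Python (it assumes the frames were already extracted as frameTemp.00001.gif etc. by the ffmpeg step in the question, and that ImageMagick's convert is on the PATH; -delay 4 is 4/100 s per frame, matching the 25 fps extraction):

import glob, os, subprocess

LIMIT = 5 * 1024 * 1024                        # the 5 MB target from the question
frames = sorted(glob.glob("frameTemp.*.gif"))  # frames extracted by ffmpeg above

chunk, part = [], 1
for frame in frames:
    chunk.append(frame)
    out = "output%03d.gif" % part
    # Re-encode the current chunk and measure the result (slow but simple:
    # the whole chunk is rebuilt for every added frame).
    subprocess.run(["convert", "-delay", "4", "-loop", "0"] + chunk + [out], check=True)
    if os.path.getsize(out) > LIMIT:
        # Slightly over the soft limit: keep this part as-is and start the
        # next file with the following frame.
        chunk, part = [], part + 1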
I am looking through the docs for ffmpeg and am having a hard time finding my way around.
Is there a way to add blurring to a video for a specific time range in the video?
Example: a 1-minute video. I want to blur seconds 30-35.
Is this possible, and if so, what does the command look like?
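One way to do this is ffmpeg's timeline editing: filters that support it accept an enable expression, so a blur such as boxblur can be switched on only for that range. A sketch (file names and blur strength are placeholders):
ffmpeg -i input.mp4 -vf "boxblur=10:enable='between(t,30,35)'" -c:a copy output.mp4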
I'm trying to figure out how the seek function using the left/right arrows in ffplay works.
I went into their open-source code and tried to change the values from 10/-10 to different values so I could see whether the seek moves correctly, but after a few attempts I saw that the movie position after using either the left or right arrow wasn't moving to exactly the value I specified.
For example, when I used the default value 10 and the movie was at 00:10:00, after pressing the right arrow, which is supposed to move the movie to 00:20:00, I got something like 00:21:35, and it was not constant.
I tried that on a variety of movies and got different results each time.
Does anyone have an idea what I'm doing wrong, or can anyone explain how the seek works in ffplay?
Video seeking precision depends on a variety of factors, but mainly PTS, DTS, and GOP length. A GOP (Group of Pictures) starts with an I-frame (a self-contained picture). When you seek, ffplay is probably just trying to find the closest I-frame that has a PTS (Presentation Timestamp) greater than 20. What complicates things even further is that not all videos have a fixed GOP length, so seeking 10 seconds forward from different positions will not always add the 11.35 seconds you saw.
Check out this article on GOPs:
http://en.wikipedia.org/wiki/Group_of_pictures
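To see where a seek can actually land, you can list the keyframe (I-frame) timestamps of a file (the input name is a placeholder; depending on your ffprobe version the field may be pkt_pts_time instead of pts_time):
ffprobe -v error -select_streams v:0 -skip_frame nokey -show_entries frame=pts_time -of csv=p=0 input.mp4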