(Excuse my English, it's a bit rough; I'm from LA.)
I'm trying to finish a transcoding process in VB6.0. I'm working with ffmpeg, which is a very good transcoder. To finish the project I want a progress bar for the transcoding process, but it's turning out to be very hard. First I need to understand how a program can calculate the time remaining for the process, given these inputs:
Average Bitrate
Frame rate
Starting file size
I'm trying: file size (KB) / average bitrate (KB/s).
In theory this should work, but the calculated time comes out much smaller than the real processing time. Does anybody have an idea about this? What is the formula (snippet) to calculate the time remaining in a transcoding process? On this wonderful site I have found many answers for my projects.
The bitrate won't help you in calculating progress.
If you have the file length in seconds, and the frame rate, and ffmpeg outputs what frame it's processing right now, you can calculate the approximate time.
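For example, a rough sketch of the arithmetic (the duration, frame rate and current frame number below are illustrative values; in practice you would get the first two by probing the input file and the last one by parsing ffmpeg's progress output):

#include <cstdio>

int main() {
    double duration_seconds = 120.0;  // assumed length of the input clip
    double frame_rate       = 25.0;   // assumed frames per second
    long   current_frame    = 1500;   // assumed frame ffmpeg last reported

    double total_frames = duration_seconds * frame_rate;
    double progress     = current_frame / total_frames;  // fraction done, 0.0 .. 1.0

    std::printf("Progress: %.1f%%\n", progress * 100.0);
    return 0;
}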
The general solution for "time remaining," given:
A number total_units that represents the size, number of units, etc. to be processed
A number units_processed that represents how many units have been processed so far
A number start_seconds that gives the time, in seconds, at which the operation started
is:
seconds_elapsed = current time - start time
seconds_per_unit = seconds_elapsed / units_processed
units_left = total_units - units_processed
seconds_remaining = units_left * seconds_per_unit
This algorithm does best when the times to process each unit are nearly the same, or at least when the time/unit has little correlation with elapsed time. It stinks on ice if time/unit varies with elapsed time.
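As a minimal sketch in C++ (illustrative only; remember to guard against units_processed being zero before dividing):

#include <cstdio>

double seconds_remaining(double total_units, double units_processed,
                         double seconds_elapsed) {
    double seconds_per_unit = seconds_elapsed / units_processed;  // requires units_processed > 0
    double units_left       = total_units - units_processed;
    return units_left * seconds_per_unit;
}

int main() {
    // Example: 1000 frames total, 250 processed, 20 seconds elapsed -> 60 seconds left.
    std::printf("%.1f s remaining\n", seconds_remaining(1000, 250, 20));
    return 0;
}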
This is a bit of a logical question
I am tracking an object in a video running at N fps. In a practical system, the frames don't arrive with an exact gap and there may even be frame drops.
-> I am also provided with a timestamp for every frame.
-> I start my track at frame X and end at frame Y.
I have divided my video into grids (spatially), and at each instance I place the object into one of the grids.
Now the simplest case of this issue is as follows,
Suppose the object was identified in just 1 frame, what should be the duration for that track?
Options:
1. Exclusive duration computation
Duration = end time - start time = 0 for this instance
2. Inclusive duration computation
Duration = end time - start time + 1 = 1* for this instance
*let us assume we have the information in milliseconds for now
3. Add the frame gap. Since we know the FPS, we can compute 1/N as the frame gap
Duration = end time - start time + 1/N
4. Add the average time spent in a grid by computing the speed of the person. I am not sure how to compute this, since it depends on the previous three definitions of duration
Is there any other metric I can use?
Thank you
If an object appears at frame X and disappears at X+1, the end time should be the time of frame X+1, not frame X. Now, the duration is simply computed as end time - start time, which in this case will be the frame gap.
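A small sketch of that rule, assuming you keep the per-frame timestamps mentioned in the question (and that a timestamp exists for the frame after the last one of the track):

#include <vector>

// Duration of a track first seen at first_frame and last seen at last_frame.
// Even a single-frame track (first_frame == last_frame) gets one frame gap.
double track_duration_ms(const std::vector<double>& timestamps_ms,
                         int first_frame, int last_frame) {
    return timestamps_ms[last_frame + 1] - timestamps_ms[first_frame];
}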
This is a two-part question:
I have a fluid flow sensor connected to an NI-9361 on my DAQ. For those that don't know, that's a pulse counter card. Nonetheless, from the data read from the card, I'm able to calculate the fluid flowing through the device in gallons per hour, minute, second, etc. But what I need to do is the following:
Calculate total number of gallons of fluid that has flowed through the sensor since the loop began running
if possible, store that total so that it can be incremented next time the program runs
I know how to calculate it by hand; I'm just not sure how to achieve the running summation required to calculate the total amount of fluid that has passed through the sensor, or how to store the variable being incremented for the next program execution. I'm presuming the latter would involve writing a TDMS file, then opening and reading back the data, unless there's a better way?
Edit:
Below is the code used to determine GPM flow through my sensor. This setup is in accordance with the 9361 manual; it executes and yields proper results.
See this link for details:
http://zone.ni.com/reference/en-XX/help/373197L-01/criodevicehelp/crio-9361/
I can extrapolate how many gallons flow per second (or per sample period). The 1526.99 scalar is the flow sensor manufacturer's constant: the number of pulses per gallon passing through the sensor. The 9361 is set to frequency/period mode, so I'm calculating cycles per second and dividing by the constant (cycles per gallon) to get gallons per second/minute.
I suppose I could get a time reference by looking at the sample period, so I guess the better question is, how do I keep an incrementing sum?
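In LabVIEW the running sum would normally live in a shift register (or feedback node) that you add to on every loop iteration, with the total written out to disk (TDMS or a plain file) when the loop finishes and read back at start-up. Expressed as a C++-style sketch of just that logic (the file name, sample period and loop structure below are illustrative assumptions, not part of the original setup):

#include <fstream>

int main() {
    double total_gallons = 0.0;

    // Restore the running total from the previous run, if a saved value exists.
    std::ifstream in("total_gallons.txt");
    if (in) in >> total_gallons;

    bool running = true;
    while (running) {
        // Gallons per minute from the 9361: (pulses per second / 1526.99) * 60
        double gallons_per_min = 0.0;  // placeholder for the measured value
        double sample_period_s = 0.1;  // assumed loop/sample period in seconds

        // Integrate the flow over this sample period and accumulate it.
        total_gallons += (gallons_per_min / 60.0) * sample_period_s;

        running = false;  // loop termination is application-specific
    }

    // Persist the total so the next run can keep incrementing it.
    std::ofstream out("total_gallons.txt");
    out << total_gallons;
    return 0;
}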
My beacons have advertisement interval of 330ms. I use an iOS device to scan the advertisement packet whose scanning rate is 1 scan per second on average. I want to use the moving average filter to smooth the fluctuating RSSI values. Considering the walking speed of 1.2 m/s and the advertisement interval of 330 ms, what should be the size of a window in the moving average filter? Is there any mathematical relationship between them?
Thank you.
There is no one correct answer here. It is a trade-off between noise in the distance estimate and lag time.
The larger (and longer) your statistical sample, the more lag there will be in a running average. A 20-second window will tell you where you were on average over the last 20 seconds, and filter out a lot of noise. A 5-second running average will tell you where you were on average over the last 5 seconds, but with much more noise in the calculation.
How much lag you can tolerate and how much noise you can tolerate both depend on your use case. Use cases that are very time-sensitive may sacrifice accuracy for the sake of less lag. Conversely, use cases needing greater accuracy may accept more lag to filter out more noise in the estimate.
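For what it's worth, the filter itself is simple; the window length is the only real decision. A sketch (the window of 5 samples is only an example; with roughly 1 scan per second it corresponds to about 5 seconds of averaging, during which a person walking at 1.2 m/s moves about 6 m):

#include <cstdio>
#include <deque>
#include <numeric>

// Moving average over the last `window` RSSI samples.
class MovingAverage {
public:
    explicit MovingAverage(std::size_t window) : window_(window) {}

    double add(double rssi) {
        samples_.push_back(rssi);
        if (samples_.size() > window_) samples_.pop_front();
        return std::accumulate(samples_.begin(), samples_.end(), 0.0) / samples_.size();
    }

private:
    std::size_t window_;
    std::deque<double> samples_;
};

int main() {
    MovingAverage avg(5);  // ~5 seconds of history at 1 scan per second
    double readings[] = {-70.0, -72.0, -68.0, -75.0, -71.0, -69.0};
    for (double rssi : readings)
        std::printf("smoothed: %.1f dBm\n", avg.add(rssi));
    return 0;
}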
I've been working on an Arduino (ATMega328p) prototype that has to log data during certain events. An LSM6DS33 sensor is used to generate 6 values (2 bytes each) at a sample rate of 104 Hz. This data needs to be logged for a period of 500-20000ms.
In my code, I generate an interrupt every 1/104 sec using Timer1. When this interrupt occurs, data is read from the sensor, calibrated and then written to an SD card. Normally, this is not an issue. Reading the data from the sensor takes ~3350us, calibrating ~5us and writing ~550us. This means a total cycle takes ~4000us, whereas 9615us is available.
In order to save power, I wish to lower the voltage to 3.3V. According to the Atmel datasheet, this also means that the clock frequency should be lowered to 8MHz. Assuming everything runs half as fast, a measurement cycle would still be possible because ~8000us < 9615us.
After some testing (still 5V @ 16MHz), however, I noticed that every now and then, a write cycle would take ~1880us instead of ~550us. I am using the SdFat library to write to and test SD cards (RawWrite example). The following results came in when I tested the card:
Start raw write of 100000 KB
Target rate: 100 KB/sec
Target time: 100 seconds
Min block write time: 1244 micros
Max block write time: 12324 micros
Avg block write time: 1247 micros
As seen, the average time to write is fairly consistent, but sometimes a peak duration of 10x the average occurs! According to the writer of the library, this is because the SD card needs some erase cycles in between a certain number of write cycles. This causes a write delay (src: post #18). This delay, however, pushes the time required for a cycle out of the available 9615us bracket, because the total measurement cycle would be 10672us.
The data I am trying to write, is first put into a string using sprintf:
char buf[80] = "";  // large enough for six tab-separated signed values plus the terminator
sprintf(buf, "%li\t%li\t%li\t%li\t%li\t%li", rawData[0], rawData[1], rawData[2], rawData[3], rawData[4], rawData[5]);
myLog.println(buf);
This writes the data to a .txt file. But at my sample rate, only 21*104 = 2184 B/s would be needed. Lowering the speed of the RawWrite example to 6 KB/s causes the SD card to write without hitting an extended write delay. Yet my code still gets them, even though less data is written.
My question is: how do I prevent this delay from occurring (if possible)? And if not possible, how can I work around it? It would help if I understood why exactly the delay occurs, because the interval is not always the same (every 10-15 writes).
Some additional info:
The sketch currently uses 69% of RAM (2kB) with variables. Creating two 512-byte buffers - as suggested in the same forum - is not possible for me.
Initially, I used two strings. Merging them into one didn't affect the write speed significantly.
I don't know how to work around the delay, but I experienced a more stable and faster write time when writing to a binary file instead of a ".csv" or ".txt" file.
The following link provides a fine script for writing data as a binary struct to the SD card. (There are some small typos in his example, but they are easily fixed.)
https://hackingmajenkoblog.wordpress.com/2016/03/25/fast-efficient-data-storage-on-an-arduino/
This will not help you with the time variation, but it might minimize the writing time and thus mitigate the timing issue.
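Along those lines, here is a minimal sketch of logging fixed-size binary records with SdFat (the chip-select pin, file name and record layout are assumptions; it simply replaces the sprintf/text line with one small fixed-size write):

#include <SPI.h>
#include <SdFat.h>

const uint8_t SD_CS_PIN = 10;  // assumed chip-select pin

// One packed record: a timestamp plus the 6 raw int16 values, 16 bytes total
// instead of a ~21-character text line.
struct Record {
  uint32_t millis_stamp;
  int16_t  raw[6];
};

SdFat sd;
SdFile logFile;

void setup() {
  sd.begin(SD_CS_PIN);
  logFile.open("LOG.BIN", O_WRITE | O_CREAT | O_AT_END);
}

void loop() {
  Record r;
  r.millis_stamp = millis();
  for (uint8_t i = 0; i < 6; i++) r.raw[i] = 0;  // fill with the calibrated sensor data here

  logFile.write(&r, sizeof(r));  // single fixed-size binary write, no text formatting
  // Call logFile.sync() periodically (not every sample) to limit flush overhead.
}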
I've started looking into lwjgl and I'm particularly having trouble understanding how Delta works. I have browsed other questions and websites related to this but it is still a confusing topic to wrap my head around. It would be great if someone here can help me out so please bear with me.
I understand that the delta time for 60 FPS would be about 16, and around double that if the frame rate is 30. I don't understand how this is calculated. Is it the time it takes between frames? Sorry for the noobish question.
private long getTime() {
return (Sys.getTime() * 1000) / Sys.getTimerResolution();
}
private int getDelta() {
long currentTime = getTime();
int delta = (int)(currentTime - lastTime);
lastTime = getTime();
return delta;
}
As opiop65 already said, the delta time is simply the time spent between your last frame's beginning and your current frame's beginning.
How does it work?
Delta time can be expressed in any unit: nanoseconds, milliseconds (usually the standard) or seconds. As you said, delta time is 16 when the game is running at 60 FPS and 32 when the game runs at 30 FPS. As for the why, it's simple: in order for a game to run at 60 frames per second it has to produce a frame every 1000/60 (= 16.666667) milliseconds, but if it is running at 30 frames per second then it has to produce a frame every 1000/30 (= 33.333333) milliseconds.
But why do we use delta time?
We use delta time because we want movement and all sorts of other things to be time-dependent and not frame-dependent. Let's say you want one of your game's characters to move 1 unit horizontally per second. How do you do that? Obviously, you can't just add 1 to the character's X location, because it would get moved 1*x times per second, where x is your FPS (assuming you update the character every frame). That would mean that if somebody runs the game at 1 FPS their character would move 1 unit per second, whereas if somebody runs the game at 5000 FPS their character would move 5000 units per second. Of course that is unacceptable.
One could say you should move the character 1/16.6667 units on every update, but then again, if somebody runs at 1 FPS they move 1/16.6667 units per second, as opposed to the person running at 5000 FPS, who moves 5000*(1/16.6667) units per second.
Yes, you can enable V-Sync, but what if somebody has a 120Hz monitor (or even higher) and not 60Hz?
Yes, you can lock the framerate, but your players wouldn't be too happy about that. Also, that wouldn't stop the character from slowing down when the game drops below 60 FPS. So what now?
Delta time to the rescue!
All you have to do is move your character 1*delta on every update.
Delta time is small if the game runs at a high FPS and large if it runs at a low FPS. This makes the character of someone running at a higher FPS move in smaller amounts but more frequently, and the character of someone running at a lower FPS move in larger amounts but less frequently, so in the end they cover equal distances over the same time.
Please note that it does matter what unit you use when multiplying with the delta time:
If you use milliseconds, then at 60 FPS your delta would be 16.6667, ending up with 1 * 16.6667 = 16.6667 units of movement every frame. However, if you measure your delta time in seconds, then at 60 FPS your delta would be 0.016667, meaning your character would move 0.016667 units every frame.
This is not something you should worry about, just keep it in mind.
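Putting the units point together, a tiny illustrative sketch (plain C++-style rather than LWJGL code; it assumes the delta has been converted to seconds before use):

struct Character {
    float x = 0.0f;
    float speed = 1.0f;  // units per second

    void update(float deltaSeconds) {
        x += speed * deltaSeconds;  // at 60 FPS: 1 * 0.016667 per frame, i.e. 1 unit per second
    }
};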
Delta time is simply the time it takes for one frame to "dispose" of itself and then another to display on the screen. It's basically the time between frames, as you put it. From Google:
Mathematics. an incremental change in a variable.
Let's pick apart your code.
return (Sys.getTime() * 1000) / Sys.getTimerResolution();
This line simply returns the current time in (I believe) milliseconds?
long currentTime = getTime();
int delta = (int)(currentTime - lastTime);
lastTime = getTime();
return delta;
The first line simply gets the current time. The second line then calculates delta by subtracting the lastTime variable (the time when the last frame was displayed) from the current time (the time when the current frame was displayed). Then lastTime is set to the current time, which is when the current frame is displayed. It's really simple when you think about it; it's just the change in time between frames.