remaining time in the progress dialog - user-interface

In a GUI application I would like to show a progress dialog displaying how much time is left for the task to complete. How can I get the remaining time before the task ends and count it down?

How much time remains is something no one but your application (or you) can know.

Assuming you have the code for this GUI application, to determine remaining time you simply need to know the total time a task takes and subtract the amount of time that passed since the start of the task.
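In the simplest case, that subtraction looks like the sketch below (Python, with a made-up total-duration estimate; how you obtain that estimate is entirely application-specific):

```python
import time

def remaining_seconds(start_time, estimated_total):
    """Seconds left, given the task's start time (from time.monotonic())
    and an estimate of its total duration in seconds."""
    elapsed = time.monotonic() - start_time
    return max(0.0, estimated_total - elapsed)

start = time.monotonic()
# ... task does some work here ...
print(f"about {remaining_seconds(start, 120.0):.0f} s remaining")
```

You would then refresh the dialog's label on a timer, recomputing the value each tick.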

Related

How to get remaining time of a shell script execution?

I have a script which does different things as per the requirement.
It takes options from the user and executes the commands accordingly.
So the question is: how can I find the remaining time of the total execution?
Thanks
You'll probably have to make an estimate of how much time each command will take before you run them and then subtract from the total as each command completes.
You can improve on this by readjusting the total and remaining time when each command completes based on the difference between your estimate and the actual time.
To get it even sharper, you can record times over invocations and use that to make a better guess but I think that would be overkill for just a script.
It's not possible however to get it exactly correct. Cf. http://xkcd.com/612/
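That readjustment could be sketched like this (in Python rather than shell, for clarity; the per-command estimates are whatever you hard-code up front):

```python
def remaining_estimate(finished_actual, estimates):
    """Re-estimate the remaining time of a command sequence.

    finished_actual: actual durations of the commands that have completed
    estimates: up-front duration estimates for every command, in order

    The estimates of the commands still to run are scaled by how far off
    the estimates have been so far.
    """
    n_done = len(finished_actual)
    still_to_run = sum(estimates[n_done:])
    estimated_so_far = sum(estimates[:n_done])
    if estimated_so_far == 0:
        return still_to_run  # nothing finished yet; trust the raw estimates
    correction = sum(finished_actual) / estimated_so_far
    return still_to_run * correction

# Three commands estimated at 10 s each; the first actually took 20 s,
# so the two remaining commands are now expected to take about 40 s.
print(remaining_estimate([20.0], [10.0, 10.0, 10.0]))  # 40.0
```

This is exactly the kind of guess the answer describes: it gets better as more commands finish, but it can never be exact.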

Do CSS animations work on strict time basis?

That is to say, if I set an animation to take 1 second, will it always take exactly 1 second (i.e. skip frames in order to achieve that interval)?
Part 2 of my question involves how to utilize CSS animations in asynchronous Javascript calls. I would need to be able to recall the animation once it had completed.
The times are not guaranteed to be exactly correct. To demonstrate, I set up a test case that shows the times vary a bit from the 1 second mark. As for the second part of your question, you can use the animationend event to restart the animation, or you can set it to iterate (as I've done in my example).
Update: It's hard to simulate the browser choking, but I have noticed significant deviation in the animation when it has choked naturally. For example, upon loading the page, my Firebug started up, which caused the first animation to drop to 0.574 seconds, almost half my original value. So it looks like the browser does try to compensate a bit, but it may also overcompensate. I have also seen times as long as 2 seconds, so I don't think you can say that the timing is going to be exact in any way.
Update 2: I was able to get the browser to choke (I had to count to 1000000 in Firefox... I'm impressed), and the quick answer to your question is no, it does not do any compensation to try to keep the time accurate. It just chokes and does not animate. Mind you, that was a tight loop, so it may perform better if it can fit other calculations in between, but I don't know that for sure.
The answer to your question basically is all here at MDN. The gist of it is that:
The times are not perfectly accurate, but they're close.
There are events that fire at various times during animations (and transitions). You can attach event handlers to them.

Android - have progress bar increase smoothly instead of jumping in bigger steps.

I have a sequence of situations, let's say 5 situations.
The point is that I do not know how long each of them will last; I only know when each of them has finished.
So imagine this method is called at unpredictable moments. It is not actually random, but I cannot know in advance when it will be called:
public void refresh(int i){
}
Now I can update the progress bar by incrementing it each time refresh is called, but the user experience will be: the bar jumps by 20%, then nothing moves for a while, then another 20% jump, and so on.
What I want to achieve is something smoother.
I know this is not an easy task, but I'm sure someone has met this problem before, so what are the workarounds?
If the updates are going to arrive essentially at random, you might consider using a "spinner" instead of a progress bar. The end result is almost the same: the user has some idea that the process is underway. If you want to give a little more information, you can update a text label next to the spinner to indicate which part of the sequence is underway.
I've encountered this before. What I did was estimate how much of the 100% progress each of my "situations" would need. So task 1 would go up to 15%, task 2 might go up to 50%, and so on; this was just a rough estimate. Then, within each situation, I would track that individual task's own completeness from 0 to 100% and increment it accordingly. Finally, I would do the math to map that onto the overall progress based on the task I was in. So if I was in task 2, I would scale the task's own percentage against the 35% of the total progress that I estimated task 2 would actually take. I hope this helps you.
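A minimal Python sketch of that mapping (the weights below are hypothetical estimates in the spirit of the 15%/35% figures above, not values from the original post):

```python
def overall_progress(weights, task_index, task_fraction):
    """Map a sub-task's own completion onto the overall progress bar.

    weights: estimated share of total work per situation (sums to 1.0)
    task_index: 0-based index of the situation currently running
    task_fraction: completion of that situation, from 0.0 to 1.0
    """
    completed = sum(weights[:task_index])  # fully finished situations
    return completed + weights[task_index] * task_fraction

# Hypothetical estimates: task 1 is 15% of the total, task 2 is 35%, ...
weights = [0.15, 0.35, 0.20, 0.20, 0.10]
print(round(overall_progress(weights, 1, 0.5), 3))  # halfway through task 2 -> 0.325
```

Each situation then only has to report its own fraction; the mapping keeps the bar monotonic even when the weight estimates are off.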
Quick question: is a progress bar really what you want?
From what I read, you have no time estimate; you only know that each chunk is 20% of the whole, not how long each chunk takes to run, yet you want the bar to move more smoothly than in large 20% jumps.
Since you only have 5 measurement points, unless you can add another layer of progress reporting within each task, you are stuck guessing.
If you want to guess, you could make a rough estimate of the time each task will take. Depending on the task at hand, hard-coding an expected time may give you a good or a bad estimate. The trick you are asking for is something that looks like progress (hiding the fact that you are at an uncertain point within operation 1 of 5).
Now have the counter move up slowly toward the next mark over the expected time (say 1 minute), but slow the progress down as you approach that time. The bar will move faster or slower as you approach the expected next point. You will have movement, but it will be pure guesswork: if a task ends early, you have some "missing" percent that makes the next jump larger; if it runs slow, the bar slows down more and more.
This is a lot of extra work to show something that is misleading at best. If the goal is just to show that progress has not stalled, you might want to look at other options alongside the progress bar.
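One way to implement that "creep toward the next mark but never arrive" behaviour is exponential easing. This is a sketch of the idea, not code from the answer; the segment bounds and the expected duration are hypothetical:

```python
import math

def smoothed_position(segment_start, segment_end, elapsed, expected):
    """Ease from segment_start toward segment_end as elapsed time grows,
    slowing down so the bar never quite reaches the mark until the real
    completion signal arrives (at which point you jump to segment_end)."""
    fraction = 1.0 - math.exp(-elapsed / expected)  # 0.0, approaching 1.0
    return segment_start + (segment_end - segment_start) * fraction

# Chunk 2 of 5 covers 20%..40% of the bar; we expect it to take ~60 s.
print(smoothed_position(0.20, 0.40, 0.0, 60.0))  # 0.2 (just started)
```

Driven from a UI timer every few hundred milliseconds, this gives continuous movement that decelerates when the estimate runs out, which is exactly the misleading-but-reassuring effect described above.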

When timing how long a quick process runs, how many runs should be used?

Let's say I am going to run process X and see how long it takes.
I am going to save into a database the date I ran this process and the time it took. I want to know what to put into the DB.
Process X almost always runs under 1500ms, so this is a short process. It usually runs between 500 and 1500ms, quite a range (3x difference).
My question is, how many "runs" should be saved into the DB as a single run?
- Every run saved into the DB as its own row?
- 5 runs, averaged, then save that time?
- 10 runs, averaged?
- 20 runs, remove anything more than 2 std deviations away, and save everything inside that range?
Does anyone have good information backing up one of these approaches?
Save the data for every run into its own row. Then later you can use and analyze the data however you like; i.e., all the other options you listed can be computed after the fact. It's not really possible for someone else to draw meaningful conclusions about how to average or analyze the data without knowing more about what's going on.
The fastest run is the one that most accurately times only your code.
All slower runs are slower because of noise introduced by the operating system scheduler.
The variance you experience is going to differ from machine to machine, and even on identical machines, the set of runnable processes will introduce noise.
None of the above. Bran is close, though. You should save every measurement, but don't average them. The average (arithmetic mean) can be very misleading in this type of analysis. The reason is that some of your measurements will be much longer than the others. This will happen because things can interfere with your process, even on 'clean' test systems. It can also happen because your process may not be as deterministic as you might think.
Some people think that simply taking more samples (running more iterations) and averaging the measurements will give them better data. It doesn't. The more you run, the more likely it is that you will encounter a perturbing event, thus making the average overly high.
A better way to do this is to run as many measurements as you can (time permitting). 100 is not a bad number, but 30 or so can be enough.
Then sort these by magnitude and graph them. Note that this is not a standard distribution. Compute some simple statistics: mean, median, min, max, lower quartile, upper quartile.
Contrary to some guidance, do not throw away outside values or 'outliers'. These are often the most interesting measurements. For example, you may establish a nice baseline, then look for departures. Understanding these departures will help you fully understand how your process works, how the system affects your process, and what can interfere with it. It will often readily expose bugs.
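As a rough Python sketch of those summary statistics (the sample timings below are made up for illustration):

```python
import statistics

def summarize(samples_ms):
    """Keep every measurement; summarize without averaging away outliers."""
    s = sorted(samples_ms)
    q1, _, q3 = statistics.quantiles(s, n=4)  # lower/upper quartiles
    return {
        "min": s[0],
        "max": s[-1],
        "median": statistics.median(s),
        "mean": statistics.mean(s),
        "q1": q1,
        "q3": q3,
    }

runs = [510, 650, 700, 720, 810, 905, 1480]  # made-up timings in ms
print(summarize(runs)["median"])  # 720
```

Note how the single 1480 ms outlier pulls the mean well above the median; that gap is itself a useful signal about interference.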
Depends what kind of data you want. I'd say one line per run initially, then analyze the data, go from there. Maybe store a min/max/average of X runs if you want to consolidate it.
http://en.wikipedia.org/wiki/Sample_size
Bryan is right: you need to investigate more. If your code has that much variance even "most" of the time, then you might have a lot of fluctuation in your test environment because of other processes, OS paging, or other factors. If not, it seems that you have code paths doing wildly varying amounts of work, and coming up with a single number per run to describe the performance of such a multi-modal system is not going to tell you much. So I'd say isolate your setup as much as possible, run at least 30 trials, and get a feel for what your performance curve looks like. Once you have that, you can use that Wikipedia page to come up with a number that will tell you how many trials you need to run per code change to see if the performance has increased or decreased with some level of statistical significance.
While saying, "Save every run," is nice, it might not be practical in your case. However, I do think that storing only the average eliminates too much data. I like storing the average of ten runs, but instead of storing just the average, I'd also store the max and min values, so that I can get a feel for the spread of the data in addition to its center.
The max and min information in particular will tell you how often corner cases arise. Is the 1500ms case a one-in-1000 outlier? Or is it something that recurs on a regular basis?

Time remaining from NSProgressIndicator

I am developing an application in Cocoa. I need to calculate the time remaining from an NSProgressIndicator. Is that possible?
Thanks in advance.
This is a "how long is a piece of string" question. Without knowing more about what your app is doing, there is no way we can answer it sufficiently.
You are responsible for updating the progress indicator to give the user an idea of how far through a task you are. How you obtain that information (if it is obtainable) will determine how you update the progress indicator.
A progress indicator has no concept of time. If it isn't indeterminate, it has a minimum value, current value, and maximum value. That's pretty much it.
Tracking the current value, the time you're taking, and your estimate for how long you have yet to take is your job.
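The usual way to do that job is linear extrapolation from the indicator's values. Here is a Python sketch of the arithmetic under that assumption (the 0-100 range is hypothetical, and the Cocoa wiring is left to the app):

```python
def estimate_remaining(elapsed, current, min_value=0.0, max_value=100.0):
    """Extrapolate the remaining time from the elapsed time and a
    progress indicator's current/min/max values. Assumes a roughly
    constant rate of progress, which is rarely exactly true."""
    done = current - min_value
    if done <= 0:
        return None  # nothing completed yet; nothing to extrapolate from
    total = elapsed * (max_value - min_value) / done
    return total - elapsed

# 10 s elapsed at 25% done -> about 30 s to go
print(estimate_remaining(10.0, 25.0))  # 30.0
```

In practice you would recompute this every time you update the indicator's current value, and smooth the result, since early samples swing wildly.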
