Best practice update interval for desktop application progress bar - user-interface

I have a progress bar in a desktop application user interface (written in JavaFX). In general, is there an ideal update interval (in milliseconds) for progress bars whose progress changes continuously, such as during a file copy or download? Right now I am getting good results with a 20 ms update interval; that is, I have a timer thread that updates the progress bar every 20 ms. My reasoning is that a 20 ms interval corresponds to 50 updates per second, which is above 30 FPS, the rate at which the human eye supposedly stops seeing individual frames. Is there any reason to avoid lower intervals, such as 1 ms? What is the best practice for this?

Use a Task for your operation and call updateProgress on the task as often as you like. Bind your ProgressBar's progress property to the task's progress property. The JavaFX system will coalesce any superfluous updates so that the progress property is updated once per pulse. By default the JavaFX system will process pulses 60 times a second.
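A minimal sketch of that setup, assuming JavaFX 8 or later; the copy loop, chunk size, and totalBytes below are placeholders for whatever work you are actually doing:

    import javafx.application.Application;
    import javafx.concurrent.Task;
    import javafx.scene.Scene;
    import javafx.scene.control.ProgressBar;
    import javafx.scene.layout.StackPane;
    import javafx.stage.Stage;

    public class CopyProgressDemo extends Application {

        @Override
        public void start(Stage stage) {
            // Worker that reports progress as often as it likes; JavaFX coalesces
            // the calls so the bound property changes at most once per pulse.
            Task<Void> copyTask = new Task<Void>() {
                @Override
                protected Void call() throws Exception {
                    long totalBytes = 1_000_000;           // placeholder for the real size
                    for (long done = 0; done <= totalBytes; done += 4_096) {
                        // ... copy the next chunk here ...
                        Thread.sleep(1);                   // simulate work
                        updateProgress(done, totalBytes);  // safe to call from this worker thread
                    }
                    return null;
                }
            };

            ProgressBar bar = new ProgressBar();
            bar.progressProperty().bind(copyTask.progressProperty());

            new Thread(copyTask, "copy-worker").start();   // keep the work off the FX thread

            stage.setScene(new Scene(new StackPane(bar), 300, 80));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }

No separate timer thread is needed: the binding picks up each coalesced progress change automatically.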

Related

Timeout for Volume Shadow Copy (programmatically)

When I start a Volume Shadow Copy programmatically, what is the maximum time I should wait before the shadow copy is prepared, i.e., before I proceed with taking the backup? Right now it takes about 10 seconds on my machine, but I need a timeout for when it is deployed elsewhere.
This article (which describes the flow of events that happen when you create a Shadow Copy using a "Hardware Provider") has a nice sequence diagram that illustrates the timeout in the 'I/O Flush & Hold' window: https://msdn.microsoft.com/en-us/library/windows/desktop/aa384615(v=vs.85).aspx
The total timeout enforced by VSS in this window is 60 seconds. Hence, you can expect the IVssAsync::Wait() call, on the IVssAsync pointer returned by DoSnapshotSet(), to time out automatically after 60 seconds.
As such, you do not need to implement a timeout yourself, since VSS does this for you.

How to change the resolution time calculation in JIRA?

Is it possible to change the resolution time calculation to start not with the issue creation time, but rather with the time when an issue was transferred into a certain state?
The use case is as follows: we use a kanban-ish development method, where we create most issues/features/stories in a backlog upfront; this kills the usefulness of the resolution time gadget. In our case, the lead/resolution time should rather be calculated from the time an issue is pulled into the selected issues column.
As this calculation is the basis for multiple gadgets, maybe it could be changed per gadget in order to avoid unforeseen issues with other gadgets?
There is a service level management tool, SLAdiator (http://sladiator.com), which calculates resolution/reaction times based on the duration a ticket has spent in a certain status (or statuses). You can view these tickets online as well as get reports.

Without using DoEvents, how can I detect if a button has been pressed?

Currently, I call DoEvents in order to check if Button Foo in Form Bar has been clicked. This approach works but it takes too much processing power, delaying the program.
I believe the delay could be reduced if I could check only whether Button Foo has been clicked, instead of everything else DoEvents has to process for all the other forms.
Any ideas on how can I check if Button Foo was clicked?
VB6 was not really designed for what you seem to be doing (some sort of long-running, straight-line code that never returns control to the message loop). Normally such a task would be delegated to a worker thread, and in VB6 that usually means an external component implemented in C++.
There are only a few kinds of approaches you can take for this sort of ad-hoc logic:
Hacks creating separate threads via API calls, not very reliable in VB6 for a number of reasons.
A tricky thread-per-object ActiveX EXE implementing a class to handle your long-running workload.
A separate non-interactive worker process to be run and monitored by your GUI program.
That's pretty much it.
The prescribed method of doing this sort of thing is described in the VB6 documentation. You break your long-running loop up and invert the logic into a repeatable "quantum" of work (like n iterations of your processing loop), and maintain the state of your workload in Form-global data. Then you use a Timer control with its interval set to 1 or 16 (hardly matters, they normally take at least 16ms to trigger) and run your workload quantum within its event handler.
So if you simply had a loop that currently iterates 100,000 times doing something you might break it up so that it runs 500 times for each Timer tick. The quantum size will probably need to be tuned based on what is done within the loop - 500 is just a value chosen for illustration. You'll want to adjust this until it leaves the UI responsive without starving your background workload too much (slowing completion down).
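Here is the same "quantum of work per timer tick" pattern sketched in Java/JavaFX (the language of the main question on this page) purely for illustration; the 100,000-iteration workload and quantum of 500 are just the example numbers from the paragraph above, not tuned values:

    import javafx.animation.Animation;
    import javafx.animation.KeyFrame;
    import javafx.animation.Timeline;
    import javafx.util.Duration;

    public class ChunkedWork {

        private static final int TOTAL = 100_000;  // illustrative workload size
        private static final int QUANTUM = 500;    // iterations per tick; tune for responsiveness

        // UI-thread timer that fires roughly every 16 ms, much like the VB6 Timer control.
        private final Timeline timer =
                new Timeline(new KeyFrame(Duration.millis(16), e -> runQuantum()));

        private int nextIteration = 0;             // workload state kept outside the handler

        public void start() {
            timer.setCycleCount(Animation.INDEFINITE);
            timer.play();
        }

        private void runQuantum() {
            int limit = Math.min(nextIteration + QUANTUM, TOTAL);
            while (nextIteration < limit) {
                // ... one iteration of the long-running work goes here ...
                nextIteration++;
            }
            if (nextIteration >= TOTAL) {
                timer.stop();                      // finished; stop ticking
            }
        }
    }

Because the handler only does a bounded quantum per tick, button clicks and other UI events are processed in between ticks.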
If your code never calls DoEvents and never periodically finishes running (returning to the message loop), your app won't even know the button has been pressed. The DoEvents call lets Windows, and your application, catch up on all pending notifications.
The correct way to resolve this is a worker thread (see this article on how to do something like this in VB6), but failing that, a periodic DoEvents is required, together with some re-entrancy blocking around the call into the long-running code.

DispatcherTimer behavior in WP7 application

I am writing an audio recording application for WP7. I have a DispatcherTimer object in my ViewModel class that, while recording is happening, counts the elapsed seconds to show the user the length of the recording. I have the following problem with the app:
The tick interval for the DispatcherTimer is set to one second (1000 ms).
When I press the start button, the DispatcherTimer starts.
When I press the stop button, the DispatcherTimer's thread exits (after about a second, though I didn't intend it to work that way).
If I press the start button too swiftly after pressing stop (less than a second in between), my DispatcherTimer fails to start again, since it hasn't actually stopped yet (its thread hasn't exited).
Basically, my biggest concern is: why does the DispatcherTimer have to wait until its next tick to realize that it has been stopped, and why does the thread it created to perform its ticks have to wait until then to exit?
How can I work around this problem? Thank you.
DispatcherTimers are not guaranteed to execute exactly when the time interval occurs, but they are guaranteed not to execute before the time interval occurs. This is because DispatcherTimer operations are placed on the Dispatcher queue like other operations; when the DispatcherTimer operation executes depends on the other jobs in the queue and their priorities.
Reference: http://msdn.microsoft.com/en-us/library/system.windows.threading.dispatchertimer(v=VS.95).aspx
You would be better off using a System.Threading.Timer, a timer class that fires on a separate thread. It is a good fit for purely numerical timing, where you're not trying to update the UI.
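For what it's worth, the same split can be sketched in Java (the language of this page's main question) rather than C#, since the idea carries over: a background scheduler does the purely numerical counting off the UI thread, and cancelling it takes effect immediately rather than waiting for the next tick. The class and method names here are made up for the sketch:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class RecordingClock {

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final AtomicLong elapsed = new AtomicLong();  // seconds counted so far
        private ScheduledFuture<?> tick;

        public void start() {
            elapsed.set(0);
            // Count one second at a time on the scheduler's own thread.
            tick = scheduler.scheduleAtFixedRate(
                    elapsed::incrementAndGet, 1, 1, TimeUnit.SECONDS);
        }

        public void stop() {
            if (tick != null) {
                tick.cancel(false);  // takes effect right away; no waiting for the next tick
            }
        }

        public long elapsedSeconds() {
            return elapsed.get();    // read this from the UI when you need to display it
        }
    }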

Run code for a certain length of time and kill if necessary

I'm using the Scripting Bridge to query iTunes from my Cocoa application. Sometimes iTunes pops up a window (for example, if an iPod needs updating), and while that popup window is open I can't get any information from iTunes. So if I request information from iTunes when it's in this state, my application completely locks up until the popup window is dismissed.
So I need some sort of mechanism where I can ask iTunes something simple on a separate thread to see if I can get a response from it, and if that separate thread doesn't receive a response within a short period of time, my main thread will just kill that thread and thus know not to query iTunes at that particular time.
Any ideas how to create such a mechanism? I searched for ways to kill a thread but haven't found any.
Your problem has nothing to do with threads; it's that your timeout is too long. Whatever you're doing should fail after about a minute.
To fix this, send a setTimeout: message to the SBApplication object, passing the amount of time you want it to wait. The value is in ticks, of which there are exactly 60 per second (so, for example, a 10-second timeout would be 600 ticks).
(Some sources say 60.15, and Apple's own docs say “approximately” 60, but I just measured ten minutes' worth of TickCount, and the result of the division by 600 seconds is exactly 60.0. The code I used:
NSLog(#"Ticks per second: %f", (end - start) / (60.0 * numMinutes)); where end and start are results from TickCount.)
Check out NSOperation/NSOperationQueue.
