Do CSS animations work on a strict time basis?

That is to say, if I set an animation to take 1 second, will it always take exactly 1 second (i.e. skip frames in order to achieve that interval)?
Part 2 of my question involves how to use CSS animations with asynchronous JavaScript calls. I would need to be able to re-trigger the animation once it has completed.

The times are not guaranteed to be exactly correct. To demonstrate, I set up a test case that shows the times vary a bit from the 1 second mark. As for the second part of your question, you can use the animationend event to restart the animation, or you can set it to iterate (as I've done in my example).
Update It's hard to simulate the browser choking, but I have noticed significant deviation in the animation when it has choked naturally. For example, upon loading the page, my Firebug started up, which caused the first animation to drop to 0.574 seconds, almost half my original value. So it looks like the browser does try to compensate a bit, but it may also overcompensate. I have also seen times as long as 2 seconds, so I don't think you can say that the timing is going to be exact in any way.
Update 2 So, I was able to get the browser to choke (I had to count to 1000000 in FF... I'm impressed), and the quick answer to your question is no, it does not do any compensation to keep the time accurate. It just chokes and does not animate. Mind you, that is a tight loop, so it may perform better if it can fit other calculations in, but I don't know that for sure.
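A minimal sketch of the restart approach (the element id and class name here are made up for illustration):

const box = document.getElementById('box');

box.addEventListener('animationend', function () {
  // Removing and re-adding the class re-triggers the CSS animation.
  box.classList.remove('slide');
  void box.offsetWidth;   // force a reflow so the removal takes effect
  box.classList.add('slide');
});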

The answer to your question basically is all here at MDN. The gist of it is that:
The times are not perfectly accurate, but they're close.
There are events that fire at various times during animations (and transitions). You can attach event handlers to them.
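For example, a small sketch of attaching handlers to check how close the actual duration comes to the declared one (the element id and the 1 second figure are assumptions):

const el = document.getElementById('box');
let startTime = 0;

el.addEventListener('animationstart', function () {
  startTime = performance.now();
});

el.addEventListener('animationend', function () {
  const elapsed = performance.now() - startTime;
  // With a declared 1s animation this is usually close to 1000 ms, but not exact.
  console.log('animation took ' + elapsed.toFixed(1) + ' ms');
});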

Related

performance of layered canvases vs manual drawImage()

I've written a small graphics engine for my game that has multiple canvases in a tree (these basically represent layers). Whenever something in a layer changes, the engine marks the affected layers as "soiled", and in the render code the lowest affected layer is copied to its parent via drawImage(), which is then copied to its parent and so on up to the root layer (the onscreen canvas). This can result in multiple drawImage() calls per frame but also prevents re-rendering anything below the affected layer. However, in frames where nothing changes, no rendering or drawImage() calls take place, and in frames where only foreground objects move, rendering and drawImage() calls are minimal.
I'd like to compare this to using multiple onscreen canvases as layers, as described in this article:
http://www.ibm.com/developerworks/library/wa-canvashtml5layering/
In the onscreen canvas approach, we handle rendering on a per-layer basis and let the browser handle displaying the layers on screen properly. From the research I've done and everything I've read, this seems to be generally accepted as likely more efficient than handling it manually with drawImage(). So my question is, can the browser determine what needs to be re-rendered more efficiently than I can, despite my insider knowledge of exactly what has changed each frame?
I already know the answer to this question is "Do it both ways and benchmark." But in order to get accurate data I need real-world application, and that is months away. By then if I have an acceptable approach I will have bigger fish to fry. So I'm hoping someone has been down this road and can provide some insight into this.
The browser cannot determine anything when it comes to the canvas element and its rendering, as it is a passive element - everything in it is user-rendered by means of JavaScript. The only thing the browser does is pipe what's on the canvas to the display (and, more annoyingly, clear it from time to time when its bitmap needs to be re-allocated).
There is unfortunately no golden rule/answer as to what the best optimization is, as this will vary from case to case. There are many techniques that could be mentioned, but they are merely tools you can use; you will still have to figure out what the right tool, or the right combination of tools, is for your specific case. Perhaps layering is good in one case and brings nothing to another.
Optimization in general is very much an in-depth analysis and break-down of the patterns specific to the scenario, which are then isolated and optimized. The process is often experiment, benchmark, re-adjust, experiment, benchmark, re-adjust, experiment, benchmark, re-adjust... Of course, experience reduces this process to a minimum, but even with experience the specifics come in a variety of combinations that still require some fine-tuning from case to case (given they are not identical).
Even if you find a good recipe for your current project, it is not a given that it will work optimally for your next project. This is one reason no one can give an exact answer to this question.
However, when it comes to canvas, what you want to achieve is a minimum of clear operations and a minimum of area to redraw (drawImage or shapes). The point of layers is to group elements together to enable this goal.
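As a rough, hypothetical sketch of that idea - a flat list of offscreen layer canvases, each with a dirty flag, composited onto the visible canvas (the canvas id, sizes, and per-layer drawing functions are all invented):

const screen = document.getElementById('game').getContext('2d');

// Invented per-layer drawing functions, purely for illustration.
const drawBackground = ctx => { ctx.fillStyle = '#246'; ctx.fillRect(0, 0, 800, 600); };
const drawSprites    = ctx => { ctx.fillStyle = '#fc0'; ctx.fillRect(100, 100, 32, 32); };

const layers = [drawBackground, drawSprites].map(draw => {
  const canvas = document.createElement('canvas');
  canvas.width = 800;
  canvas.height = 600;
  return { canvas, draw, dirty: true };
});

function render() {
  // Redraw only the layers that were marked dirty since the last frame.
  for (const layer of layers) {
    if (layer.dirty) {
      const ctx = layer.canvas.getContext('2d');
      ctx.clearRect(0, 0, layer.canvas.width, layer.canvas.height);
      layer.draw(ctx);
      layer.dirty = false;
    }
  }
  // Composite bottom-to-top onto the visible canvas.
  screen.clearRect(0, 0, 800, 600);
  for (const layer of layers) screen.drawImage(layer.canvas, 0, 0);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);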

How to draw graphs using d3.js for a big dataset?

I tried creating 10 line charts, each with 3000 points, at a 300x300 SVG size. It crashed my browser; I checked the task manager, and the Google renderer was going crazy, with memory utilization at 1.2 GB and CPU utilization at 100%.
There's no easy solution for things like this. You can scrutinize your code and make it as efficient as possible, but no matter what, if your code needs to do hundreds of thousands of operations in one "thread" things will freeze up.
A general solution to avoid this freeze-up is to split the drawing process into smaller tasks, which you call asynchronously (i.e. from inside a setTimeout). This way the browser doesn't lock up for extended periods while it runs your JS code, and perhaps (I'm no expert on this) the garbage collector has a chance to clean things up midway too.
The result is not a faster overall draw time, but to a user it "feels" faster, because the browser doesn't freeze. And you can even add a progress bar then.
Some drawing operations can't be broken down into sub-tasks. For example, you can't split up svg.line(), the d3 function that generates your graph's path definitions. However, you can split up the drawing code of the 10 charts such that it draws one chart at a time on every tick of a setTimeout. You can also similarly split up the preparation of the data from the actual drawing.
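A hedged sketch of what that chunking can look like (drawChart and datasets stand in for your own d3 drawing code and data):

function drawChartsInChunks(datasets, drawChart, onDone) {
  let i = 0;
  function drawNext() {
    drawChart(datasets[i]);        // draw one chart; this still blocks, but only briefly
    i += 1;
    if (i < datasets.length) {
      setTimeout(drawNext, 0);     // yield back to the browser before the next chart
    } else if (onDone) {
      onDone();
    }
  }
  drawNext();
}

// e.g. drawChartsInChunks(allSeries, drawLineChart, () => console.log('done'));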
I wrote an answer to a different scenario but a similar problem here: CSS transitions blocked by JavaScript

Android - have a progress bar increase smoothly instead of jumping in big steps

I have a sequence of situations, let's say 5 situations.
The point is, I do not know how long each of them will last. I just know when each of those situations has finished.
So imagine this method is called at unpredictable moments - it is not random, but I cannot know when it will be called:
public void refresh(int i){
}
Now I can update the progress bar by increasing it each time refresh() is called, but the user experience will be: the bar jumps by 20%, then nothing moves for a while, then it jumps another 20% and gets stuck again, and so on...
The thing I want to achieve is something smoother.
I know this is not an easy task, but I know that someone has met this problem before, so what are the workarounds?
If the updates are going to arrive essentially at random, you might consider using a "spinner" instead of a progress bar. The end result is almost the same: the user has some idea that the process is underway. If you want to give a little more information, you can update a text label next to the spinner to indicate which part of the sequence is underway.
I've encountered this before. What I did was to estimate how much of the 100% progress each one of my "situations" would need. So, task 1 would go up to 15% and task 2 might go up to 50%, etc. Keep in mind that this was just a general estimate. Then, within each situation, I would calculate that individual task's own 100% completeness and increment that accordingly. Then, I would do the math to calculate the actual progress based on the given task I was in. So, if I was in task 2, I would be calculating based on 100% progress of that specific task scaled against the 35% of the total progress that I estimated task 2 would actually take. I hope this helps you.
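A small, hedged sketch of that arithmetic (shown as JavaScript for brevity; the weights are invented, and the same calculation carries straight over to Java):

// Estimated share of total progress for each of the 5 situations (must sum to 1.0).
const weights = [0.15, 0.35, 0.20, 0.20, 0.10];

function overallProgress(taskIndex, taskFraction) {
  // All fully completed tasks contribute their whole weight...
  const completed = weights.slice(0, taskIndex).reduce((a, b) => a + b, 0);
  // ...plus the fraction of the current task, scaled by its weight.
  return (completed + taskFraction * weights[taskIndex]) * 100;
}

// Halfway through task 2 (index 1): 15% + 0.5 * 35% = 32.5%
console.log(overallProgress(1, 0.5));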
Quick question: is it really a progress bar you want?
From what I read, you have no estimate of the time, only that each chunk is 20% of the work; you do not know how long each chunk takes, but you want the bar to move more smoothly than in big 20% jumps.
As you only have 5 measurement points, unless you can add another layer of progress reporting inside each task, you are stuck guessing.
If you want to guess, you could make a rough estimate of the time each task will take. Depending on the task at hand, you might be able to make a good or bad estimate by hard-coding an expected time. The trick you are asking for is something that looks like progress (hiding the fact that you are at an uncertain point within one operation out of five).
Now have the counter move up slowly toward the next mark over the estimated time (say, 1 minute), but slow the progress down as you approach that time. This means the bar will appear to move faster or slower as you approach the expected next point. You will have progress, but it will be pure guessing. If a task ends early, you have some "missing" % to make the next progress gap larger; if it runs slow, the bar slows down more and more.
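A hedged sketch of that trick (the tick interval, easing factor, and updateBar() are all invented for illustration; the same logic can be written on Android with a Handler or an animator):

// Stand-in for the real UI update.
const updateBar = pct => console.log(pct.toFixed(1) + '%');

let shown = 0;      // percentage currently displayed
let target = 20;    // next milestone we are creeping toward

const ticker = setInterval(() => {
  // Move a fraction of the remaining distance, so the bar approaches
  // the milestone but never quite reaches it on its own.
  shown += (target - shown) * 0.05;
  updateBar(shown);
}, 100);

function refresh(i) {          // called when situation i (1..5) has finished
  shown = i * 20;              // snap to the real milestone
  target = Math.min(100, shown + 20);
  updateBar(shown);
  if (i === 5) clearInterval(ticker);
}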
This is a lot of extra work to show something that is misleading at best. To show that progress has not stalled, you might want to look at other options alongside the progress bar.

When timing how long a quick process runs, how many runs should be used?

Let's say I am going to run process X and see how long it takes.
I am going to save into a database the date I ran this process and the time it took. I want to know what to put into the DB.
Process X almost always runs under 1500ms, so this is a short process. It usually runs between 500 and 1500ms, quite a range (3x difference).
My question is, how many "runs" should be saved into the DB as a single run?
Every run saved into the DB as its own row?
5 runs, averaged, then save that time?
10 runs, averaged?
20 runs, remove anything more than 2 standard deviations away, and save everything inside that range?
Does anyone have any good info backing them up on this?
Save the data for every run into its own row. Then later you can use and analyze the data however you like; i.e., all of the other options you listed can be performed after the fact. It's not really possible for someone else to draw meaningful conclusions about how to average/analyze the data without knowing more about what's going on.
The fastest run is the one that most accurately times only your code.
All slower runs are slower because of noise introduced by the operating system scheduler.
The variance you experience is going to differ from machine to machine, and even on identical machines, the set of runnable processes will introduce noise.
None of the above. Bryan is close, though. You should save every measurement, but don't average them. The average (arithmetic mean) can be very misleading in this type of analysis. The reason is that some of your measurements will be much longer than the others. This will happen because things can interfere with your process - even on 'clean' test systems. It can also happen because your process may not be as deterministic as you might think.
Some people think that simply taking more samples (running more iterations) and averaging the measurements will give them better data. It doesn't. The more you run, the more likely it is that you will encounter a perturbing event, thus making the average overly high.
A better way to do this is to run as many measurements as you can (time permitting). 100 is not a bad number, but 30-ish can be enough.
Then, sort these by magnitude and graph them. Note that this is not a normal distribution. Compute some simple statistics: mean, median, min, max, lower quartile, upper quartile.
Contrary to some guidance, do not 'throw away' outside values or 'outliers'. These are often the most interesting measurements. For example, you may establish a nice baseline, then look for departures. Understanding these departures will help you fully understand how your process works, how the system affects your process, and what can interfere with your process. It will often readily expose bugs.
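For instance, a small sketch of that analysis over raw run times (the sample numbers are made up):

const runsMs = [512, 540, 498, 1510, 620, 575, 530, 505, 995, 560];

// Linear-interpolated quantile over a sorted array.
function quantile(sorted, q) {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos);
  const hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

const sorted = [...runsMs].sort((a, b) => a - b);
console.log({
  min: sorted[0],
  lowerQuartile: quantile(sorted, 0.25),
  median: quantile(sorted, 0.5),
  upperQuartile: quantile(sorted, 0.75),
  max: sorted[sorted.length - 1],
  mean: sorted.reduce((a, b) => a + b, 0) / sorted.length   // note how the outliers pull this up
});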
Depends what kind of data you want. I'd say one line per run initially, then analyze the data, go from there. Maybe store a min/max/average of X runs if you want to consolidate it.
http://en.wikipedia.org/wiki/Sample_size
Bryan is right - you need to investigate more. If your code has that much variance even "most" of the time, then you might have a lot of fluctuation in your test environment because of other processes, OS paging, or other factors. If not, it seems that you have code paths doing wildly varying amounts of work, and coming up with a single number per run to describe the performance of such a multi-modal system is not going to tell you much. So I'd say isolate your setup as much as possible, run at least 30 trials, and get a feel for what your performance curve looks like. Once you have that, you can use that Wikipedia page to come up with a number that will tell you how many trials you need to run per code change to see whether the performance has increased or decreased with some level of statistical significance.
While saying, "Save every run," is nice, it might not be practical in your case. However, I do think that storing only the average eliminates too much data. I like storing the average of ten runs, but instead of storing just the average, I'd also store the max and min values, so that I can get a feel for the spread of the data in addition to its center.
The max and min information in particular will tell you how often corner cases arise. Is the 1500ms case a one-in-1000 outlier? Or is it something that recurs on a regular basis?

What is the shortest perceivable application response delay?

A delay will always occur between a user action and an application response.
It is well known that the lower the response delay, the greater the feeling of the application responding instantaneously. It is also commonly known that a delay of up to 100ms is generally not perceivable. But what about a delay of 110ms?
What is the shortest application response delay that can be perceived?
I'm interested in any solid evidence, general thoughts and opinions.
The 100 ms threshold was established over 30 years ago. See:
Card, S. K., Robertson, G. G., and Mackinlay, J. D. (1991). The information visualizer: An information workspace. Proc. ACM CHI'91 Conf. (New Orleans, LA, 28 April-2 May), 181-188.
Miller, R. B. (1968). Response time in man-computer conversational transactions. Proc. AFIPS Fall Joint Computer Conference Vol. 33, 267-277.
Myers, B. A. (1985). The importance of percent-done progress indicators for computer-human interfaces. Proc. ACM CHI'85 Conf. (San Francisco, CA, 14-18 April), 11-17.
What I remember learning was that any latency of more than 1/10th of a second (100ms) for the appearance of letters after typing them begins to negatively impact productivity (you instinctively slow down, less sure you have typed correctly, for example), but that below that level of latency productivity is essentially flat.
Given that description, it's possible that a latency of less than 100ms might be perceivable as not being instantaneous (for example, trained baseball umpires can probably resolve the order of two events even closer together than 100ms), but it is fast enough to be considered an immediate response for feedback, as far as effects on productivity. A latency of 100ms and greater is definitely perceivable, even if it's still reasonably fast.
That's for visual feedback that a specific input has been received. Then there'd be a standard of responsiveness in a requested operation. If you click on a form button, getting visual feedback of that click (eg. the button displays a "depressed" look) within 100ms is still ideal, but after that you expect something else to happen. If nothing happens within a second or two, as others have said, you really wonder if it took the click or ignored it, thus the standard of displaying some sort of "working..." indicator when an operation might take more than a second before showing a clear effect (eg. waiting for a new window to pop up).
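A hedged sketch of that pattern in the browser (the element ids and doWork() are assumptions):

const button = document.getElementById('submit');
const spinner = document.getElementById('working');

button.addEventListener('click', async () => {
  button.disabled = true;                 // immediate feedback on the click itself
  const timer = setTimeout(() => {
    spinner.hidden = false;               // "working..." only appears if we are slow
  }, 1000);
  try {
    await doWork();                       // stand-in for the actual operation
  } finally {
    clearTimeout(timer);
    spinner.hidden = true;
    button.disabled = false;
  }
});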
New research as of January, 2014:
http://newsoffice.mit.edu/2014/in-the-blink-of-an-eye-0116
...a team of neuroscientists from MIT has found that the human brain can process entire images that the eye sees for as little as 13 milliseconds... That speed is far faster than the 100 milliseconds suggested by previous studies...
At the San Francisco Opera house, we routinely set up precise delay settings for each of our speakers. We can detect 5 millisecond changes in the delay times to our speakers. When you make such subtle changes, you change where the sound sources from. Often we want sound to sound as if it's coming from someplace other than where the speakers are; precise delay adjustments make this possible. Sound delays of 15 milliseconds are very obvious even to untrained ears because they radically shift where the sound sources from. A simple test to prove this is to play sound through multiple speakers, and have the subject close their eyes and point to where the sound is coming from. Now make a slight change in the delay time to one of the speakers of just a few milliseconds, and have the person point again to where the sound is coming from. Making changes in delay times is acoustically very similar to moving the actual speakers.
I don't think anecdotes or opinions are really valid for answers here. This question touches on the psychology of user experience and the sub-conscious mind. The human brain is powerful and fast and mere milliseconds do count and are registered. I am no expert but I know there is much science behind e.g. what Matt Jacobsen mentioned. Check out Google's study here http://services.google.com/fh/files/blogs/google_delayexp.pdf for an idea of how much it can affect site traffic.
Here's another study by Akamai - 2 second response time
http://www.akamai.com/html/about/press/releases/2009/press_091409.html (From https://ux.stackexchange.com/questions/5529/once-apon-a-time-there-was-a-10-seconds-to-load-a-page-rule-what-is-it-nowa )
Does anyone have any other studies to share?
Persistence of vision is around 100ms so it should be a reasonable visual feedback delay. 110ms should make no difference, as it is an approximate value. In practice you won't notice a delay below 200ms.
From memory, studies have shown that users lose patience and retry an operation after around 2s of inactivity (in the absence of feedback), e.g. after clicking a confirm or action button. So plan on using some kind of animation if the action takes longer than 1s.
I worked on an application that had an explicit business goal of being blindingly fast, and we had a maximum allowed server time of 150ms for processing a full web page.
No solid evidence but for our own application, we allow a maximum of one second between a user action and feedback. If it does take longer, a "waiting box" should be shown.
A user should see "something" happening within a second of causing an action.
Use the dual of the test for visual spatial resolution (two parallel black bars, with an equal width and an equal gap between them; reduce the angular subtense until they appear to be one line, i.e. scale down or simply move away. The point at which they seem to merge into one line shows the threshold).
Use a function generator to blink an LED on for an interval, then off, then on, then off - the same time delay for each interval - and repeat the pattern while gradually decreasing that delay. This is the same as above, but with time in place of space.
Imagine an oscilloscope image like so:
_________/^d^\_d_/^d^\_________
I note that at a 41 ms interval I perceive only one longer blink, but at 42 ms I perceive it as an extremely rapid double blink. Thus, the threshold is ~42 ms. This probably varies depending on the person, age, condition, etc.
This is close to 24 fps, which is probably why cinema works at that presentation rate.
Reaction time to see something and then decide to react, say by clicking the mouse, is much longer again. Thus, it's not surprising that experiments requiring a reaction response to measure yield a longer time, but that longer delay wasn't what you were asking for, and the above experiment is easy and illuminating!
But note also - smoothly moving animations require the visual cortex to work harder, delaying visual comprehension. This delay is 'hidden' from perception, so longer delays (several hundred ms) can be 'hidden' by just providing something that's difficult to see properly because it is moving.
The effect that hides it is called Chronostasis. Basically, glancing somewhere 'new' requires the visual cortex to work harder to 'de-render' / 'recognise' the scene. This takes a remarkably long time, during which your consciousness is essentially 'paused'.
Once looking at a mostly-constant scene, only changes need this processing, so smaller/faster changes are possible and your perceptual experience resumes, and faster/smaller movements are detectable.
The detection of changes visually is processed basically on your retina. Your eyes also have a natural 'bandpass' response -- stare unblinkingly at anything for sufficient time, and at sufficient distance for saccades to be unable to change the image much, and you will find your visual feed fading out to 'grey'. This is what gives us our 'white balance', and is somewhat similar to the automatic gain control on analogue radio/tv.
The point is that your eyes themselves have a time constant to respond, but this is actually dependent on the strength of the stimulus (the brightness of the LED, in our case).
Too bright, and the ability of your retinal cells to 'relax' back from the brightness, ie, respond to the 'sudden dark', is compromised.
The effect which keeps you seeing bright things after the light has stopped is called 'persistence of vision', and old cathode-ray picture tubes more or less depend heavily on it for them to work at all.
This is the one that's usually 100 ms or so, but it's not a 'sharp' interval - it's more like an exponential roll-off, and again, its duration changes depending on how bright the stimulus is relative to how dark-adjusted (i.e., sensitive) the eye is at that moment.
For duller, faster changes, especially changes outside your fovea, you will perceive even higher rates easily. Eg, flickering lights. Those outer parts of your retina (most of the area, actually) are adapted to detecting movement, and bringing it to your attention. So it makes sense that although lacking spatial resolution, they have greater time resolution / shorter response rate.
But this also means animating things usually requires even finer time steps, otherwise 'jumpiness' is perceptible, mostly due to that faster response.
Note all the scaling/sliding full screen animations iOS uses -- these essentially exploit chronostasis to hide technically unavoidable loading delays, giving the perception that those products respond instantly and smoothly at all times.
So, show something different within 42 ms -> instant response.
Keep animating otherwise useless hard-to-see-properly visuals continuously at high frame rates, then stop suddenly when done -> hides the delay so long as enough is visually busy, and the delay isn't too long. (probably 250ms is pushing the friendship).
This also seems to tee up with other's perceptions of input lag, for example : http://danluu.com/input-lag/
100ms is totally wrong. You can prove this yourself using your fingers, a desk, and a watch with visible seconds. Synchronising to the watch's seconds, drum out beats on the desk continuously such that 16 beats are drummed out every second. I chose 16 because it is natural to drum out multiples of two, so it's like four strong beats with three weak beats in between. Adjacent beats are clearly discernible by their sound. The beats are separated by about 60ms, so even 60 ms is actually still too high. Therefore the threshold is way below 100ms, especially if sound is involved.
For instance, a drum app or a keyboard app needs a delay of more like 30ms, or else it gets really annoying, because you hear the sound coming from the physical button / pad / key well before the sound comes out of the speakers. Software like ASIO and jack were made specifically to deal with this issue, so no excuses. If your drum app has a 100ms delay, I will hate you.
The situation for VoIP and high powered gaming is actually worse, because you need to react to events in real time, and in music, at least you get to plan ahead at least a little. For an average human reaction time of 200ms, a further 100ms delay is an enormous penalty. It noticeably changes the conversational flow of VoIP. In gaming, 200ms reaction time is generous, especially if the players have a lot of practice.
For a reasonably current scholarly article, try How Much Faster is Fast Enough? User Perception of Latency & Latency Improvements in Direct and Indirect Touch (PDF). While the main focus was on the JND (Just Noticeable Difference) of delay, there is some good background on absolute delay perception, and they also acknowledge and account for 60Hz monitors (16.7 ms repaint times) in their second experiment.
I am a cognitive neuroscientist who studies visual perception and cognition.
The paper by Mary Potter mentioned above regards the minimum time required to categorize a visual stimulus. However, understand that this is under laboratory conditions in the absence of any other visual stimuli, which certainly would not be the case in the real world user experience.
The typical benchmark for a stimulus-response / input-stimulus interaction, that is, the average amount of time for an individual's minimum reaction speed or input-response detection, is ~200ms. To be certain there is no detectable difference, this threshold could be lowered to around 100ms. Below this threshold, the temporal dynamics of your cognitive processes take longer to compute the event than the event itself, so there is nearly no chance of any ability to detect or differentiate it. You could go lower, to say 50 ms, but it really wouldn't be necessary. At 10 ms you've gone into the territory of overkill.
For web applications, 200ms is considered an unnoticeable delay, while 500ms is acceptable.

Resources