The HammerJS docs mention the following:
domEvents: false
Let Hammer also fire DOM events. This is a bit slower, so disabled by default.
What kind of measurable effect does "a bit slower" refer to? How does it impact performance, and under what circumstances?
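For reference, this is roughly how the option is toggled (a minimal sketch; the element, gesture, and handlers are just placeholders, and I haven't measured the difference yet):

    declare const Hammer: any; // provided by the hammer.js script tag

    const pane = document.querySelector("#pane") as HTMLElement;

    // With domEvents: true, Hammer also dispatches real, bubbling DOM events for each
    // recognized gesture, in addition to calling its own listeners - this is the extra
    // work the docs warn is "a bit slower".
    const mc = new Hammer(pane, { domEvents: true });

    mc.on("swipe", (ev: any) => console.log("Hammer listener:", ev.type));

    // Only fires because domEvents is enabled; with the default (false),
    // gestures never reach ordinary DOM event handlers.
    pane.addEventListener("swipe", (ev) => console.log("DOM listener:", ev.type));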
I'm writing a Chrome extension and I want to measure how it affects performance; specifically, at the moment I'm interested in how it affects page load times.
I picked a certain page I want to test, recorded it with Fiddler, and I use this recording with Fiddler's AutoResponder. This allows me to measure load times without network traffic delays.
Using this technique I found out that my extension adds ~1200ms to the load time. Now I'm trying to figure out what causes the delay and I'm having trouble understanding the DevTools Performance results.
First of all, it seems there's a discrepancy in the reported load time:
On one hand, the summary shows a range of ~13s, but on the other hand, the load event arrived after ~10s (which I also corroborated using performance.timing.loadEventEnd - performance.timing.navigationStart):
The second thing I don't quite understand is how the numbers add up (or rather don't add up). For example, here's a grouping of different categories during load:
Neither of these columns sums to 10s or to 13s.
When I group by domain I can get different rows for the extension and for the rest of the stuff:
But it seems that the extension only adds ~250ms, which is much lower than the difference in load times I'm seeing.
I assume that these numbers represent just CPU time, and do not include any wait time. Is this correct? If so, it's OK that the numbers don't add up and it's possible that the extension doesn't spend all its time doing CPU bound work.
Then there's also the mysterious [Chrome extensions overhead], which doesn't explain the difference in load times either. Judging by the fact that it's a separate line from my extension, I thought they were mutually exclusive, but if I dive deeper into the specifics, I find my extension's functions under the [Chrome extensions overhead] subdomain:
So to summarize, this is what I want to be able to do:
Calculate the total CPU time my extension uses - it seems it's not enough to look under the extension's name, and its functions might also appear in other groups.
Understand whether the delay in load time is caused by CPU processing or by synchronous waiting. If it's the latter, find where my extension is doing a synchronous wait, because I'm pretty sure that I didn't call any blocking APIs.
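One rough way I've been trying to tell busy CPU time apart from waiting is to log long tasks during load (a minimal sketch; it only shows when the main thread is blocked for more than 50ms, not which code is responsible):

    // Log "long tasks" (main-thread busy periods > 50 ms) while the page loads.
    // Many long tasks suggest the delay is CPU-bound; few or none suggest waiting.
    const observer = new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
            console.log(`long task: ${entry.duration.toFixed(0)} ms at ${entry.startTime.toFixed(0)} ms`);
        }
    });
    observer.observe({ entryTypes: ["longtask"] });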
Update
Eventually I found out that the reason for the slowdown was that we were also activating Chrome accessibility whenever our extension was running, and that's what caused the drastic slowdown. Without accessibility the extension had only a very minor effect. I still wonder, though, how I could have seen in the profiler that my problem was accessibility. It could have saved me a ton of time... I will try to look at it again later.
I've just realised that the code I'm going through sometimes uses the load/DOMContentLoaded events and sometimes uses the readystatechange event.
I'm trying to make it consistent, but I really don't know which one is better (I'm thinking about performance and resource use on mobile).
The only difference I can see is that readystatechange always applies to document, while load/DOMContentLoaded can be attached to window or document.
Does this mean that the way browsers implement them differs in a way that matters, or is this just an over-engineered question?
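For context, these are the three variants I keep running into (a minimal sketch; the handlers are placeholders):

    // Fires once the initial HTML has been parsed; stylesheets/images may still be loading.
    document.addEventListener("DOMContentLoaded", () => console.log("DOMContentLoaded"));

    // Fires once the whole page, including images and other subresources, has loaded.
    window.addEventListener("load", () => console.log("load"));

    // readystatechange fires as document.readyState moves through
    // "loading" -> "interactive" (roughly DOMContentLoaded) -> "complete" (roughly load).
    document.addEventListener("readystatechange", () => {
        console.log("readyState:", document.readyState);
    });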
I am developing a web application with a somewhat complex user interface. It seems like it might be a good idea to back the UI with a corresponding state machine, defining the transitions possible between various states and the corresponding behavior.
The perceived benefits are that the code for controlling the behavior is structured consistently, and that the state of the UI can be persisted and resumed easily.
Can anyone who has tried this lend any insights into this approach? Are there any pitfalls I need to be aware of?
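To make the idea concrete, this is roughly the shape I have in mind (a minimal sketch; the states, events, and transitions are made up for illustration):

    // A table-driven UI state machine: all legal transitions live in one place.
    type UiState = "viewing" | "editing" | "saving";
    type UiEvent = "edit" | "save" | "saved" | "cancel";

    const transitions: Record<UiState, Partial<Record<UiEvent, UiState>>> = {
        viewing: { edit: "editing" },
        editing: { save: "saving", cancel: "viewing" },
        saving:  { saved: "viewing" },
    };

    let current: UiState = "viewing";

    function dispatch(event: UiEvent): void {
        const next = transitions[current][event];
        if (!next) {
            console.warn(`ignored event "${event}" in state "${current}"`);
            return; // illegal transitions are simply ignored
        }
        current = next;
        // `current` is a single serializable value, which is what would make
        // persisting and resuming the UI state straightforward.
    }

    dispatch("edit");  // viewing -> editing
    dispatch("save");  // editing -> saving
    dispatch("saved"); // saving  -> viewing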
Off the top of my head, these are a bit obvious, but still, since nobody has replied yet:
- I'd advise persisting the application state server-side, indexed by a session variable/user ID, for security and flexibility reasons;
- interfaces are better modeled by an event-based approach, IMHO, but this depends a bit on which layer of the UI you're developing, and also on your language of choice. You may be able to attach some logic to item triggers and to the items themselves.
By an event-based approach, I'm referring to the technique that some more visually oriented environments (Adobe Flex, Oracle Forms, and also HTML, in a somewhat limited fashion) use. In a nutshell, you have triggers (item.on_click, label.on_mouse_over, text_field.on_record_update) which you use to drive the states of the interface.
One very common caveat of this kind of approach (distributed control) is endless loops: you have an item that enables another item, which, when enabled, fires its own triggers and eventually causes the first item to fire the same trigger again. This is often not obvious while developing, but it shows up very commonly in testing.
Some languages/environments offer some protection against the more obvious cases, but this is something to be on the lookout for (a small guard against it is sketched below).
This is probably useful for your approach.
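To illustrate the endless-loop caveat above, here is a minimal sketch of a guard (the Item class and its trigger names are made up, not any particular framework's API):

    // Two items that update each other would normally recurse forever;
    // the `notifying` flag breaks the cycle.
    class Item {
        private listeners: Array<(value: string) => void> = [];
        private notifying = false; // re-entrancy guard
        private value = "";

        onChange(listener: (value: string) => void): void {
            this.listeners.push(listener);
        }

        setValue(value: string): void {
            if (this.value === value) return; // no-op updates don't fire triggers
            this.value = value;
            if (this.notifying) return;       // already inside a notification cycle: stop the loop
            this.notifying = true;
            try {
                this.listeners.forEach((l) => l(value));
            } finally {
                this.notifying = false;
            }
        }
    }

    const a = new Item();
    const b = new Item();
    a.onChange((v) => b.setValue(v + "!"));
    b.onChange((v) => a.setValue(v));
    a.setValue("hello"); // without the guard this would recurse until the stack overflows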
In this MSO bug report, our very own waffles makes the following observation:
This bug also happens to be a heisenbug, when debugging it if your first breakpoint is too early, stepping through shows that everything is good.
(Ref: Wikipedia's entry on Heisenbugs)
How is it even possible for the location of a breakpoint to make a difference in whether a bug appears?
(Yes, I know the Wikipedia article answers this, but I thought it'd be a good question for SO to have the answer to, and I bet SO can do better anyways.)
If there is any kind of asynchronous activity going on (threads, I/O, interrupts, etc.), then it can give rise to heisenbugs. Setting breakpoints in different locations affects the relative timing of the main thread and the asynchronous events, which can then result in related bugs either showing up or disappearing.
A common source is timing, in particular with multiple threads.
Let's say you have a GUI app with some event handlers and a bug where a table selection is not handled correctly, perhaps because Swing sometimes starts updating the table before your event is handled.
By pausing a thread at a breakpoint, you may change the order in which the table component receives events and thus you might see a different outcome with and without the breakpoint. It's a very common problem, and one of the things that can make debugging complex GUI apps with lots of events really painful.
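As a rough illustration of the timing effect (not Swing; just a made-up asynchronous example), whether the handler below sees the updated data depends entirely on how long the code before it takes to run, which is exactly what pausing at a breakpoint changes:

    // Made-up illustration of a timing-dependent ("heisen") bug.
    let tableData: string[] | null = null;

    // An asynchronous update, e.g. a framework refreshing the table model.
    setTimeout(() => { tableData = ["row 1", "row 2"]; }, 10);

    function onSelectionChanged(): void {
        // Bug: assumes the asynchronous update has already happened.
        if (tableData === null) {
            console.log("BUG: selection handled before the table was populated");
        } else {
            console.log("OK:", tableData.length, "rows");
        }
    }

    // Normally this runs well before the 10 ms timer fires, so the bug shows up.
    // Pause on this line at a breakpoint for a moment and the timer wins the race,
    // so the bug seems to vanish while you're debugging.
    setTimeout(onSelectionChanged, 0);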
I've always wondered when and where is the best time to cache a property value... Some cases seem pretty simple, like the one below...
    public DateTime FirstRequest {
        get {
            // Lazily capture the timestamp on first access; later reads return the cached value.
            if (this.m_FirstRequest == null) {
                this.m_FirstRequest = DateTime.Now;
            }
            return (DateTime)this.m_FirstRequest;
        }
    }
    private DateTime? m_FirstRequest;
But what about some more complicated situations?
A value that comes from a database, but remains true after it's been selected.
A value that is stored in a built-in cache and might expire from time to time.
A value that has to be calculated first?
A value that requires some time to initialize. 0.001s, 0.1s, 1s, 5s???
A value that is set, but something else may come and set it to null to flag that it should be repopulated.
...and so on. There seem to be limitless situations.
What do you think is the point at which a property can no longer take care of itself and instead requires something else to populate its value?
[EDIT]
I see suggestions that I'm optimizing too early, etc. But my question is about when it is time to optimize. Caching everything isn't what I'm asking about; rather, when it is time to cache, whose responsibility should it be?
In general, you should get the code working first, optimize later, and then only make the optimizations that profiling says will help you.
I think you need to turn your question the other way around lest you fall into a trap of optimizing too early.
When do you think is the point that a property no longer needs recalculating on every call and instead uses some form of caching?
Caching a value is an optimization and should therefore not be done as the norm. There are some cases where it is clearly worth doing, but in most cases you should implement the property to work correctly every time, doing the appropriate work to get the value, and only look to optimize it once you've profiled and shown that optimization is required.
Some reasons why caching is or is not a good idea:
- Don't cache if the value is prone to frequent change
- Do cache if it never changes and you are responsible for it never changing
- Don't cache if you are not responsible for providing the value as you're then relying on someone else's implementation
There are many other reasons for and against caching a value but there is certainly no hard and fast rule on when to cache and when not to cache - each case is different.
If you have to cache...
Assuming that you've determined some form of caching is the way to go, then how you perform that caching depends on what you are caching, why, and how the value is provided to you.
For example, if it is a singleton object or a timestamp as in your example, a simple "is it set?" condition that sets the value once is a valid approach (that or create the instance during construction). However, if it's hitting a database and the database tells you when it changes, you could cache the value based on a dirty flag that gets dirtied whenever the database value says it has changed. Of course, if you have no notification of changes, then you may have to either refresh the value on every call, or introduce a minimum wait time between retrievals (accepting that the value may not always be exact, of course).
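To make the dirty-flag idea concrete, here is a minimal sketch in TypeScript (the fetch function and the change notification are placeholders; the same pattern translates directly to a C# property):

    // A dirty-flag cache: the value is only re-fetched after something marks it dirty.
    class CachedValue<T> {
        private value: T | undefined;
        private dirty = true; // starts dirty so the first read fetches

        constructor(private fetchValue: () => T) {}

        // Called by whatever tells us the underlying value has changed.
        markDirty(): void {
            this.dirty = true;
        }

        get(): T {
            if (this.dirty) {
                this.value = this.fetchValue(); // refresh only when needed
                this.dirty = false;
            }
            return this.value as T;
        }
    }

    // Stand-in for a real database query.
    function loadPriceFromDatabase(): number {
        return 42;
    }

    const price = new CachedValue(loadPriceFromDatabase);
    price.get();        // hits the source
    price.get();        // served from the cache
    price.markDirty();  // e.g. a change notification arrived
    price.get();        // hits the source again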
Whatever the situation, you should always consider the pros and cons of each approach and consider the scenarios under which the value is referenced. Do the consumers always need the most recent value or can they cope with being slightly behind? Are you in control of the value's source? Does the source provide notification of changes (or can you make it provide such notifications)? What is the reliability of the source? There are many factors that can affect your approach.
Considering the scenarios you gave...
Again, assuming that caching is needed.
A value that comes from a database, but remains true after it's been selected.
If the database is guaranteed to retain that behavior, you can just poll the value the first time it is requested and cache it thereafter. If you can't guarantee the behavior of the database, you may want to be more careful.
A value that is stored in a built-in cache and might expire from time to time.
I would use a dirty flag approach where the cache is marked dirty from time to time to indicate that the cached value needs refreshing, assuming that you know when "time to time" is. If you don't, consider a timer that indicates the cache is dirty at regular intervals.
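If there is no notification at all, a minimal sketch of the expiry variant might look like this (it checks the age on each read rather than running an actual timer, and maxAgeMs is an assumption about how stale a value you can tolerate):

    // Time-based expiry: the cached value counts as dirty once it is older than maxAgeMs.
    class ExpiringValue<T> {
        private value: T | undefined;
        private fetchedAt = 0;

        constructor(private fetchValue: () => T, private maxAgeMs: number) {}

        get(): T {
            const now = Date.now();
            if (this.value === undefined || now - this.fetchedAt > this.maxAgeMs) {
                this.value = this.fetchValue();
                this.fetchedAt = now;
            }
            return this.value as T; // set above whenever it was missing or stale
        }
    }

    // Usage: refetched at most once per 30 seconds.
    const exchangeRate = new ExpiringValue(() => 1.08 /* stand-in for a real lookup */, 30_000);
    exchangeRate.get();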
A value that has to be calculated first?
I would judge this based on the value. Even if you think caching is needed, the compiler may have optimised it already. However, assuming caching is needed, if the calculation is lengthy it might be better to do it during construction, or via an interface such as ISupportInitialize.
A value that requires some time to initialize. 0.001s, 0.1s, 1s, 5s???
I'd have the property do no calculation for this and implement an event that indicates when the value changes (this is a useful approach in any situation where the value might change). Once the initialization is complete, the event fires, allowing consumers to get the value. You should also consider that this might not be suited for a property; instead, consider an asynchronous approach such as a method with a callback.
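A rough sketch of that shape, assuming the slow initialization can be kicked off in the background (all the names here are made up):

    // Slow initialization runs in the background; consumers are notified via a
    // callback instead of blocking on a property getter.
    class SlowValue<T> {
        private value: T | undefined;
        private readonly listeners: Array<(value: T) => void> = [];

        constructor(initialize: () => Promise<T>) {
            initialize().then((v) => {
                this.value = v;
                this.listeners.forEach((l) => l(v)); // the "value is ready" event
            });
        }

        // Invokes the callback immediately if the value is already available.
        onReady(listener: (value: T) => void): void {
            if (this.value !== undefined) {
                listener(this.value);
            } else {
                this.listeners.push(listener);
            }
        }
    }

    // Usage: the consumer reacts when the value becomes available instead of waiting.
    const config = new SlowValue(async () => {
        await new Promise((resolve) => setTimeout(resolve, 1000)); // stand-in for the slow init
        return { theme: "dark" };
    });
    config.onReady((v) => console.log("ready:", v));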
A value that is set, but something else may come and set it to null to flag that it should be repopulated.
This is just a special case of the dirty flag approach discussed in point 2.