I've just realised that the code I'm going through sometimes uses the load/DOMContentLoaded events and sometimes uses the onreadystatechange event.
I'm trying to make it consistent, but I really don't know which one is better (I'm thinking about performance and resource use on mobile).
The only difference I can see is that onreadystatechange always applies to document, while load/DOMContentLoaded is registered on window/document.
Does this mean that the way browsers implement them differs in a way that matters, or is this just an over-engineered question?
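For context, here is a simplified sketch of the patterns I'm comparing (init() is just a placeholder for the startup code):

// 1. Fires once the DOM is parsed (before images and stylesheets finish loading).
document.addEventListener('DOMContentLoaded', () => init());

// 2. Fires once the whole page (images, stylesheets, iframes) has loaded.
window.addEventListener('load', () => init());

// 3. readyState moves from 'loading' to 'interactive' to 'complete';
//    'interactive' roughly lines up with DOMContentLoaded, 'complete' with load.
document.onreadystatechange = () => {
  if (document.readyState === 'interactive') init();
};

function init() { /* startup code */ }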
I have a project running Ember 3.20. We are currently in the process of migrating from classic to Glimmer-based components and have come across some expensive computational patterns which would benefit from caching.
My question is: what is the best approach to adding caching to getters in Glimmer components? It looks like there are currently a few ways to do this:
@cached via tracked-toolbox - I believe this was released prior to the Ember cached API. I didn't peek under the hood, but it has a @cached decorator which might collide with a future Ember @cached.
ember-cache-primitive-polyfill - Mentioned in the Ember docs as a polyfill for the Ember cached API (3.22), but the syntax isn't as concise as the @cached decorator.
ember-cached-decorator-polyfill - Related to RFC 566; appears to be built on option 2 with a more ergonomic syntax.
Upgrade to 3.22 - Trying to avoid bumping Ember unless there is a significant benefit. At a glance, I didn't see @cached included here, though.
Any additional insight/guidelines into how expensive a getter should be to warrant it being cached? For example, preventing re-renders seems a fairly obvious use case but there can be a wide range of what developers might consider an "expensive" computation.
There are two categories of things here:
The two @cached decorators.
The caching primitives introduced via RFC 0566.
In the vast majority of Ember or Glimmer app or normal library code, you’ll just be using the decorator. You’d only ever really reach for the caching primitives if you were building some low-level library code yourself (not never, but not exactly common, either).
As for the @cached decorators, they have basically the same semantics. The tracked-toolbox version was research that fed into the development of the primitive that Glimmer ships (and Ember uses), and so ember-cached-decorator-polyfill is implemented using the actual public API, polyfilling it via ember-cache-primitive-polyfill if necessary.
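A rough sketch of what each category looks like (import paths follow RFC 0566 and the polyfills; treat the details as illustrative rather than exact):

import { tracked, cached } from '@glimmer/tracking';
import { createCache, getValue } from '@glimmer/tracking/primitives/cache';

// 1. The decorator: memoizes the getter until the tracked state it read changes.
class Person {
  @tracked firstName = 'Ada';
  @tracked lastName = 'Lovelace';

  @cached
  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  }
}

// 2. The low-level primitive the decorator is built on, for library code:
const person = new Person();
const fullNameCache = createCache(() => `${person.firstName} ${person.lastName}`);
getValue(fullNameCache); // recomputes only when the tracked state it read has changed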
In terms of the performance characteristics, you don’t even actually need to think about it in terms of preventing re-renders: that’s not how the system works anyway. (See this blog post I wrote last year (2020) for a deep dive on how re-rendering gets scheduled in Ember and Glimmer using the autotracking concepts.) It’s also worth remembering that caching is not free! So it’s not as simple as “this thing costs something, so I should cache it”—the caching has to pay for itself to be worth it, and it costs both memory use and CPU time to create and to check caches.
With that caveat firmly in mind, I tend to think of “expense” here in the following categories:
am I rendering this hundreds or thousands of times?
does rendering this cause a long-running computation that will impact render (i.e. on the order of multiple milliseconds)?
does this trigger asynchronous behavior?
(especially) does this trigger an API call?
In a lot of normal app code, the only getters you’ll really need to decorate with @cached are getters which produce API calls based on the component's arguments. Since the getter will otherwise be invoked every time it is referenced, you will end up with multiple API calls, which can produce a situation where the apparent state in the UI flips back and forth as references to different promises resolve.
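A hedged sketch of that last case (the component and endpoint are made up):

import Component from '@glimmer/component';
import { cached } from '@glimmer/tracking';

export default class UserCard extends Component {
  // Without @cached, every reference to `this.user` in the template kicks off a
  // new fetch; with it, the request happens once per change to `this.args.userId`.
  @cached
  get user() {
    return fetch(`/api/users/${this.args.userId}`).then((response) => response.json());
  }
}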
A simple search for DoEvents brings up lots of results that lead, basically, to:
DoEvents is evil. Don't use it. Use threading instead.
The reasons generally cited are:
Re-entrancy issues
Poor performance
Usability issues (e.g. drag/drop over a disabled window)
But some notable Win32 functions such as TrackPopupMenu and DoDragDrop perform their own message processing to keep the UI responsive, just like DoEvents does.
And yet, none of these functions seems to suffer from these issues (performance, re-entrancy, etc.).
How do they do it? How do they avoid the problems cited with DoEvents? (Or do they?)
DoEvents() is dangerous. But I bet you do lots of dangerous things every day. Just yesterday I set off a few explosive devices (future readers: note the original post date relative to a certain American holiday). With care, we can sometimes account for the dangers. Of course, that means knowing and understanding what the dangers are:
Re-entrancy issues. There are actually two dangers here:
Part of the problem here has to do with the call stack. If you call .DoEvents() in a loop that itself handles messages that use DoEvents(), and so on, you're building a pretty deep call stack. It's easy to over-use DoEvents() and accidentally fill up your call stack, resulting in a StackOverflowException. If you're only using .DoEvents() in one or two places, you're probably okay. If it's the first tool you reach for whenever you have a long-running process, you can easily find yourself in trouble here. Even one use in the wrong place can make it possible for a user to force a StackOverflowException (sometimes just by holding down the enter key), and that can be a security issue.
It is sometimes possible to find the same method on the call stack twice. If you didn't build the method with this in mind (hint: you probably didn't), then bad things can happen. If everything passed in to the method is a value type, and there is no dependence on things outside of the method, you might be fine. But otherwise, you need to think carefully about what happens if your entire method were to run again before control is returned to you at the point where .DoEvents() is called. What parameters or resources outside of your method might be modified that you did not expect? Does your method change any objects, where both instances on the stack might be acting on the same object?
Performance Issues. DoEvents() can give the illusion of multithreading, but it's not real multithreading. This has at least three real dangers:
When you call DoEvents(), you are giving control of your existing thread back to the message pump. The message pump might in turn give control to something else, and that something else might take a while. The result is that your original operation could take much longer to finish than if it were in a thread by itself that never yields control, and definitely longer than it needs to.
Duplication of work. Since it's possible to find yourself running the same method twice, and we already know this method is expensive/long-running (or you wouldn't need DoEvents() in the first place), even if you accounted for all the external dependencies mentioned above so there are no adverse side effects, you may still end up duplicating a lot of work.
The other issue is the extreme version of the first: a potential to deadlock. If something else in your program depends on your process finishing, and will block until it does, and that thing is called by the message pump from DoEvents(), your app will get stuck and become unresponsive. This may sound far-fetched, but in practice it's surprisingly easy to do accidentally, and the crashes are very hard to find and debug later. This is at the root of some of the hung app situations you may have experienced on your own computer.
Usability Issues. These are side effects that result from not properly accounting for the other dangers. There's nothing new here as long as you've addressed the issues above appropriately.
If you can be sure you accounted for all these things, then go ahead. But really, if DoEvents() is the first place you look to solve UI responsiveness/updating issues, you're probably not accounting for all of those issues correctly. If it's not the first place you look, there are enough other options that I would question how you made it to considering DoEvents() at all. Today, DoEvents() exists mainly for compatibility with older code that came into being before other credible options were available, and as a crutch for newer programmers who haven't yet gained enough experience to be exposed to the other options.
The reality is that most of the time, at least in the .NET world, a BackgroundWorker component is nearly as easy, at least once you've done it once or twice, and it will do the job in a safe way. More recently, the async/await pattern or the use of a Task can be much more effective and safe, without needing to delve into full-blown multi-threaded code on your own.
Back in 16-bit Windows days, when every task shared a single thread, the only way to keep a program responsive within a tight loop was DoEvents. It is this non-modal usage that is discouraged in favor of threads. Here's a typical example:
' Process image
For y = 1 To height
    For x = 1 To width
        ProcessPixel x, y
    Next x
    DoEvents ' <-- DON'T DO THIS -- just put the whole loop in another thread
Next y
For modal things (like tracking a popup), it is likely to still be OK.
I may be wrong, but it seems to me that DoDragDrop and TrackPopupMenu are rather special cases, in that they take over the UI, so don't have the reentrancy problem (which I think is the main reason people describe DoEvents as "Evil").
Personally I don't think it's helpful to dismiss a feature as "Evil" - rather explain the pitfalls so that people can decide for themselves. In the case of DoEvents there are rare cases where it's still reasonable to use it, for example while a modal progress dialog is displayed, where the user can't interact with the rest of the UI so there is no re-entrancy issue.
Of course, if by "Evil" you mean "something you shouldn't use without fully understanding the pitfalls", then I agree that DoEvents is evil.
Is there any trick to navigate to another page through a synchronous process?
I'm guessing this is linked to your other question (http://stackoverflow.com/questions/4593463/about-synchronized-navigation).
Synchronous processes are (generally) bad for perceived performance. This perception is amplified on mobile devices (such as phones).
Because of this, most operations on WP7 happen asynchronously. As a general rule you should learn to work with this behaviour rather than against it as it's there for a reason.
If you have a specific issue where you wish to perform an operation synchronously then post the specific details and we'll advise accordingly.
I'm sure you've all seen them. Line of Business UIs that have logic such as: "When ComboA is selected, query for values based on that selection, and populate Textbox B", or "When ButtonC is pressed, disable Textboxes C and D", and on and on ... it gets particularly bad when you can have multiple permutations of the logic above.
If presented with the chance to redesign one of these lovely screens, how would you approach it? Would you put a wizard in front of the UI? Would you keep the single-screen paradigm but use some other pattern to make the logic of the UI state maintainable? What would be the process you use to determine how this would ideally be presented and implemented?
Not that I think this should matter for the responses, but I am currently presented with just this "opportunity", and it's an ASP.NET web page that uses javascript to respond to the user's choices, disable controls, and make ajax calls for additional data.
Something you might want to look at is whether some of those dependencies imply that, while these elements look similar and serve the same functions, they should actually be split out over multiple pages that are similar but different. Someone may have grouped them onto one page because there were enough similarities.
If you can, try looking at the problem as if it were not implemented at all: how would you structure the user interface if you had to build it now? If that is too radically different and existing users would have major problems, you might have to compromise. But as Elie said, look at it from the user's view. They are the ones who have to work with your product.
I would model the state of the whole UI in one object. That object should keep track of the state each UI element should be in, including the list of options in a combo box (and which option is selected, of course).
Having one state object means you can correctly re-draw the whole screen and not end up with broken states in the UI. Of course, refreshing all the components each time anything changes is not the way to go, so I would refresh them via callbacks from each of the setters in the state object. This would also allow you to have two UIs over the same state if you ever want that.
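A minimal plain-JavaScript sketch of that idea (the widget names and refresh functions are made up):

// One object owns the state; setters notify listeners so only the affected
// widgets are refreshed instead of redrawing the whole screen.
function createUiState() {
  const listeners = [];
  const state = { country: null, cities: [], selectedCity: null, saveEnabled: false };

  return {
    get: (key) => state[key],
    set(key, value) {
      state[key] = value;
      listeners.forEach((listener) => listener(key, value));
    },
    onChange: (listener) => listeners.push(listener),
  };
}

const ui = createUiState();
ui.onChange((key) => {
  if (key === 'cities') refreshCityCombo(ui.get('cities'));           // placeholder renderer
  if (key === 'saveEnabled') toggleSaveButton(ui.get('saveEnabled')); // placeholder renderer
});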
Start with the KISS principle, and work from there. Don't over-engineer the solution, and try to think about the problem from the user's POV. Your first impression of what would make a good layout is probably close to what you should be building, as a good UI is intuitive.
That being said, single screen versus multiple screens, JavaScript or AJAX, it doesn't really matter. If it looks good, is easy to understand, and behind the scenes, is well commented and written with clear code, it works. It should be maintainable, so aim for modular code blocks with clear functionality.
I think what is most important is the user experience and to a lesser extent the maintainability of the code. On the web I try to minimize the roundtrips as much as possible so I'm not sure that I would take a wizard approach since that would lead the user through multiple pages or require replacing nearly the entire page via AJAX (which just seems wrong). I typically work with my customers to capture the functionality that they require, though I aim at the functionality, not the implementation. I might mock up a few examples to show them alternatives or just hand draw them on a whiteboard to give them ideas. I don't mind doing "hard" or "complex" things in the app if the result is a much improved user interface. Of course, I make it as simple as I can and definitely use good practices, even in Javascript.
I have some forms that communicate with the server using AJAX for real reasons: cascading combos, suggestions, multiple correlated selections (e.g. I have {elementary} knowledge of {French} [add], and {good} knowledge of {German} [add]...). I also have some regular fields that I handle through GET.
The thing is, once I've made the connection to the server side, it would be easier for me to push all the data that way. Is that going too far? What if I have no reason for AJAX in the first place, and I still use it to push form data? I would feel obligated to provide a fallback for people with JavaScript off, but most of the underlying logic would be the same, so it doesn't seem to me like much of an overhead. It is the kind of data I would push through POST anyway, so I'm not losing GET parameters that would be good for something.
Any reason not to do things this way?
User experience is an important part of any software product. If you can improve the experience by providing better interactions, there's no reason not to do it.
Make sure, though, that you write unobtrusive and degradable JavaScript, so users with screen readers or JavaScript-disabled browsers can still complete the interactions.
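A minimal sketch of what that can look like (the form id, URL, and fetch-based submit are illustrative; the markup is a normal form that still posts without JavaScript):

// Assume markup like: <form id="signup" action="/signup" method="post"> ... </form>
var form = document.getElementById('signup');
if (form) {
  form.addEventListener('submit', function (event) {
    event.preventDefault(); // take over the submit only when JavaScript is running
    fetch(form.action, { method: 'POST', body: new FormData(form) })
      .then(function (response) { /* update the page in place */ })
      .catch(function () { form.submit(); }); // fall back to the plain form post on error
  });
}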
The only problem with this strategy is that you're in a lot deeper trouble if someone decides they want a non-JavaScript solution as well. I think it's fairly wise to use the "least fancy" mechanism that will get the desired result on the web. If it's just a form post, then keep it a form post unless there's a reason to do otherwise.