I was naturally drawn to Ember's nice API/design/syntax compared to the competitors, but was very saddened to see that its performance was significantly worse. (For example, see the now well-known http://jsfiddle.net/samdelagarza/ntMdB/167/ .) My eyes tell me Ember is at least 4 times slower than Backbone in Chrome.
Version 0.9.6 of EmberJS apparently contains many performance fixes, in particular around bindings and rendering. However, the above benchmark still performs poorly with this version of Ember.
I see the above benchmark as demonstrative of one framework's binding cost. I come from Flex, where bindings perform well enough that you don't have to constantly ask whether the 5 bindings per renderer (multiplied by maybe 20 renderers) you want to use will be too much of an overhead. Ease of use is nice, but only if good enough performance is maintained. (Even more so since HTML5 also often targets mobile devices.)
As it stands, I tend to think the beauty of Ember is not worth the performance hit compared to some of its competitors; we're talking about big apps with many bindings here, else you wouldn't need such a framework in the first place. I could live with Ember performing slightly less well; after all, it brings more to the table.
So my questions are fairly general and open:
Is the Ember part of the benchmark written well enough that it shows a genuine issue?
Are the 0.9.6 performance updates maybe very low key?
Are the areas of bad performance identified by the main contributors?
This isn't really an issue of bindings being slow, but of doing more DOM updates than necessary. We have been doing some investigation into this particular issue, and we have some ideas for how to coalesce these multiple operations into one, so I do expect this to improve in the future.
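To make the coalescing idea concrete, here is a minimal sketch (not Ember's actual implementation; `view.render()` is a placeholder): many property changes mark a view dirty, but the DOM is only touched once per frame.

```javascript
// A sketch of coalescing: instead of touching the DOM on every property
// change, mark views dirty and flush them all in one batch per frame.
var dirtyViews = [];
var flushScheduled = false;

function viewChanged(view) {
  if (dirtyViews.indexOf(view) === -1) dirtyViews.push(view); // dedupe: N changes, 1 update
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(function () {
      var views = dirtyViews;
      dirtyViews = [];
      flushScheduled = false;
      views.forEach(function (v) { v.render(); }); // single DOM pass per frame
    });
  }
}
```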
That said, I can't see that this is a realistic benchmark. I would never recommend doing heavy animation in Ember (or with Backbone, for that matter). In standard app development, you shouldn't ever have to update that many different views simultaneously at that frequency.
If you can point out slow areas in a normal app, we would be very happy to investigate. Performance is of great concern to us, and if things are truly slow during normal operation, we would consider that a bug. But, like I said, performant binding-driven animation is not one of our goals, nor do I know of anyone for whom it is. Ember generally plays well with other libraries, so it should be possible to plug in an animation library and do the animations outside of Ember.
Related
I recently started doing some investigation into asm.js and I've played with a few of the demos online. I must say that the Unreal demo was quite impressive... I've been developing an app using Three.js for many months now. It works beautifully on fast machines, but on lower-end ones it tends to struggle. When I ran the Unreal demo on my lower-end machines, it worked like a dream. My question is: what place might asm.js have with Three.js - could it vastly speed up the engine? Is it worthwhile investigating or developing a solution that utilises both, switching between them based on the browser? Also, are there any plans for Three.js to take advantage of it in the future?
I come from a C++ background and would be quite interested in the prospect of developing something. But at the same time it would mean having to re-learn the language, and even more problematic might be the large amount of time it would take to get to a usable point.
What are your thoughts?
This is my opinion:
First and foremost, asm.js isn't really meant to be written by hand. Although, having said that, it certainly is possible to write it by hand, since it has a validator. The Unreal demo is something that has been compiled into asm.js with Emscripten, and it doesn't need to interact with other code outside the code that gets compiled. It generates highly optimized code because the Unreal demo is already highly optimized C++ code: it gets optimized by a compiler, and then gets another round of optimisations through asm.js.
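For illustration, a minimal hand-written asm.js module might look like this; the "use asm" pragma and the `|0` coercions are what the validator checks:

```javascript
function AsmModule(stdlib, foreign, heap) {
  "use asm";               // opts this function body into asm.js validation
  function add(x, y) {
    x = x | 0;             // parameter type annotation: 32-bit int
    y = y | 0;
    return (x + y) | 0;    // return type annotation: 32-bit int
  }
  return { add: add };
}

var mod = AsmModule(window, null, new ArrayBuffer(0x10000)); // 64 KB heap
mod.add(2, 3); // 5; compiled ahead-of-time in Firefox, run as plain JS elsewhere
```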
Secondly, asm.js is actually only supported by Firefox. All other browsers can execute it, but on most of them it still incurs a performance penalty if you compare asm.js code against normal JavaScript code that does the same thing. Just search jsperf.com for examples of this.
Okay, those are some general guidelines about asm.js. Now let's talk about Three.js.
Firstly, because Three.js has to interact with user code, it isn't easy to write it as an asm.js library, because of asm.js's many restrictions (no objects).
Secondly, Three.js will not gain a whole lot of performance in the kinds of calculations asm.js is strong at, but it will gain more from future updates to the browsers. (For instance, fast creation of typed arrays in Chrome, which is currently a pain point in Three.js, is coming soon; see the V8 issue.)
Thirdly, code in asm.js needs to manage its own memory, which would mean that Three.js has to figure out a way to make large apps work with limited memory, or make every application very memory hungry.
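To make that concrete: an asm.js module sees only one flat, pre-allocated typed-array heap, so a library has to run its own allocator on top of it. A rough sketch (not real Three.js code; the module and function names are illustrative):

```javascript
// The module's entire "address space" is one buffer allocated up front.
var heap = new ArrayBuffer(16 * 1024 * 1024); // e.g. 16 MB, fixed for the app's lifetime

function VecModule(stdlib, foreign, heap) {
  "use asm";
  var f64 = new stdlib.Float64Array(heap); // all data lives in this flat heap

  function scale(ptr, n, s) {              // ptr is a byte offset, not an object
    ptr = ptr | 0;
    n = n | 0;
    s = +s;
    var i = 0;
    for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
      f64[(ptr + (i << 3)) >> 3] = +f64[(ptr + (i << 3)) >> 3] * s;
    }
  }
  return { scale: scale };
}
```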
Fourthly, comparing the Unreal demo with Three.js is a bit unfair, because Three.js tries to let everyone write 3D apps, while the Unreal engine is a highly optimized engine for 3D games.
As you've noticed, I'm mostly against asm.js in Three.js. But that's because it's too early to tell what the best way to go is. There is a high probability that asm.js will get a place in Three.js eventually, but for more limited use - as a renderer only, for instance. For now, there are still too many unsolved questions around asm.js.
But if you want to use asm.js and C++, then I recommend Emscripten, which was used to build the Unreal demo.
This is of course my opinion, but I think it somewhat represents what @Mr.doob and @WestLangley had in mind. And sorry about the long post.
The best way to find out is to write a small demo in C (by hand) and compile it to asm.js and run it; then write the same small demo in JS with Three.js (by hand) and run that; and compare the differences in both developer experience and performance.
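For the comparison itself, a simple timing harness is enough to start with (a sketch; `runThreeDemo` and `runAsmDemo` are placeholders for the two builds):

```javascript
// Run each implementation many times and compare wall-clock time.
function bench(label, fn, iterations) {
  var start = performance.now(); // high-resolution timer
  for (var i = 0; i < iterations; i++) fn();
  var elapsed = performance.now() - start;
  console.log(label + ': ' + elapsed.toFixed(1) + ' ms');
}

bench('hand-written Three.js demo', runThreeDemo, 100);
bench('emscripten/asm.js demo', runAsmDemo, 100);
```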
Interesting notes on the performance of these technologies. So what are you saying - which one would you choose for a project? I'm looking at one of these technologies for a project.
http://paulhammant.com/2012/04/12/performance-testing-knockout-angular-and-backbone-with-selenium2/
I don't think that this post is conclusive in downgrading angular.js due to performance problems. So your question basically comes down to comparing these three technologies...
They solve very different kinds of problems: backbone.js is in fact only a library for building event-based MV* architectures, while knockout.js and angular.js are more opinionated frameworks. So it's really comparing apples to oranges... But people try anyway: http://codebrief.com/2012/01/the-top-10-javascript-mvc-frameworks-reviewed/
None of the frameworks are made for performance. They are made to give direction to the developer.
Backbone is by far the least performant, but even with Backbone, if it's tuned right, you can get a high FPS on tablets, mobiles and desktops.
Rendering performance means (see the sketch after this list):
only create DOM elements once, update DOM with new model contents
use object pooling as much as possible
defer image loading/parsing until the last possible minute
watch out for JavaScript that triggers CSS relayout
tie your render loop to the browser's paint loop
be smart about when to use GPU layers and compositing
opt out of the Garbage Collector as much as possible to maintain high frame rates
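Here is a minimal sketch tying several of those points together: pooled DOM nodes created once, updates that only touch changed content, and a render loop driven by the browser's paint schedule (the `models` array and `#list` container are placeholders, not PerfView's actual API):

```javascript
var list = document.querySelector('#list'); // placeholder container
var models = [];                            // placeholder: your model objects
var rows = [];                              // one DOM node per model, created once
var pool = [];                              // recycled nodes (object pooling)

function acquireRow() {
  var el = pool.pop() || document.createElement('li'); // reuse before creating
  list.appendChild(el);
  return el;
}

function renderLoop() {
  for (var i = 0; i < models.length; i++) {
    var el = rows[i] || (rows[i] = acquireRow());
    if (el.textContent !== models[i].label) { // update in place, don't recreate
      el.textContent = models[i].label;
    }
  }
  requestAnimationFrame(renderLoop); // tied to the browser's paint loop
}
requestAnimationFrame(renderLoop);
```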
I have PerfView on GitHub, which extends Backbone to make rendering performant: https://github.com/puppybits/BackboneJS-PerfView . It can maintain 120 FPS in Chrome and 56 FPS on an iPad with some real-world examples.
Since I am a Lone Developer, I have to think about every aspect of the systems I am working on. Lately I've been thinking about performance of my two websites, and ways to improve it. Sites like StackOverflow proclaim, "performance is a feature." However, "premature optimization is the root of all evil," and none of my customers have complained yet about the sites' performance.
My question is, is performance always important? Should performance always be a feature?
Note: I don't think this question is the same as this one, as that poster is asking when to consider performance and I am asking if the answer to that question is always, and if so, why. I also don't think this question should be CW, as I believe there is an answer and reasoning for that answer.
Adequate performance is always important.
Absolute fastest possible performance is almost never important.
It's always worth keeping an eye on performance and being aware of anything outrageously non-optimal that you're doing (particularly at a design/architecture level) but that's not the same as micro-optimising every line of code.
Performance != Optimization.
Performance is a feature indeed, but premature optimization will cost you time and will not yield the same result as when you optimize the parts that need optimization. And you can't really know which parts need optimization until you can actually profile something.
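Even the simplest measurement beats guessing; for example, the console timing API (the function name here is a placeholder for whatever code you suspect is slow):

```javascript
console.time('render');    // start a named timer
renderAllViews();          // placeholder: the code under suspicion
console.timeEnd('render'); // logs e.g. "render: 132ms"
```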
Performance is the feature that your clients will not tell you about if it's missing, unless it's really painfully slow and they're forced to use your product. Existing customers may report it in the end, but new customers will simply not bother if the performance isn't there.
You need to know what performance you need, and formulate it as a requirement. Then, you have to meet your own requirement.
That 'root of all evil' quote is almost always misused and misunderstood.
Designing your application to perform well can mostly be done with just good design. Good design != premature optimization, and it's utterly ridiculous to go off writing crap code and blowing off doing a better job on the design as an 'evil' waste. Now, I'm not specifically talking about you here... but I see people do this a lot.
It usually saves you time to do a good job on the design. If you emphasize that, you'll get better at it... and get faster and faster at writing systems that perform well from the start.
Understanding what kinds of structures and access methods work best in certain situations is key here.
Sure, if your app becomes truly massive or has insane speed requirements, you may find yourself doing tricked-out optimizations that make your code uglier or harder to maintain... and it would be wrong to do those things before you need to.
But that is absolutely NOT the same thing as making an effort to understand and use the right algorithms or data patterns or whatever in the first place.
Your users are probably not going to complain about bad performance if it's bearable. They possibly wouldn't even know it could be faster. Reacting to complaints as a primary driver is a bad way to operate. Sure, you need to address complaints you receive... but a lack of them does not mean there isn't a problem. The fact that you are considering improving performance is a bit of an indicator right there. Was it just a whim, or is some part of you telling you it should be better? Why did you consider improving it?
Just don't go crazy doing unnecessary stuff.
Keep performance in mind but given your situation it would be unwise to spend too much time up front on it.
Performance is important but it's often hard to know where your bottleneck will be. Therefore I'd suggest planning to dedicate some time to this feature once you've got something to work with.
Thus you need to set up metrics that are important to your clients and to you, and keep and analyse these measurements. Then estimate how long and how much each improvement would take to implement. Now you can aim at getting as much bang for your buck/time as possible.
If it's a website, it would be wise to note your page size and performance using Firebug + YSlow and/or Google Page Speed. Again, know which advice applies to a small site like yours and which only applies to Yahoo and Google.
Jackson’s Rules of Optimization:
Rule 1. Don’t do it.
Rule 2 (for experts only). Don’t do it yet—that is, not until you have a perfectly clear and unoptimized solution.
—M. A. Jackson
Extracted from Code Complete, 2nd edition.
To give a generalized answer to a general question:
First make it work, then make it right, then make it fast.
http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast
This puts a more constructive perspective on "premature optimization is the root of all evil".
So to parallel Jon Skeet's answer, adequate performance (as part of making something work, and making it right) is always important. Even then it can often be addressed after other functionality.
Jon Skeet's 'adequate' nails it, with the additional provision that for a library you don't yet know what's adequate, so it's better to err on the safe side.
It is one of the many stakes you must not get wrong, and the quality of your app is largely determined by its weakest link.
Performance is definitely always important in a certain sense - maybe not the one you mean: namely in all phases of development.
In Big O notation, what's inside the parentheses is largely decided by design, both component isolation and data storage. The choice of algorithm will usually only affect best/worst-case behavior (unless you start with decidedly substandard algorithms). Code optimizations will mostly affect the constant factor, which shouldn't be neglected either.
But that's true for all aspects of code: in any stage, you have a good chance to fail any aspect - stability, maintainability, compatibility etc. Performance needs to be balanced, so that no aspect is left behind.
In most applications, 90% or more of execution time is spent in 10% or less of the code. There is usually little use in optimizing code other than those 10%.
Performance is only important to the extent that developing the performance improvement takes less time than the total amount of time that will be saved for the user(s).
The result is that if you're developing something for millions... yeah, it's important to save them time. If you're coding up a tool for your own use... it might be more trouble than it's worth to save a minute or even an hour or more.
(This is clearly not a rule set in stone... there are times when performance is truly critical no matter how much development time it takes.)
There should be a balance to everything. Cost (or time to develop) vs Performance for instance. More performance = more cost. If a requirement of the system being built is high performance then the cost should not matter, but if cost is a factor then you optimize within reason. After a while, your return on investment suffers in that more performance does not bring in more returns.
The importance of performance is IMHO highly correlated with your problem set. If you are creating a site with an expectation of heavy load and a lot of server-side processing, then you might want to put some more time into performance (otherwise your site might end up being unusable). However, for most applications the time put into optimizing the performance of a website is not going to pay off - users won't notice the difference.
So I guess it breaks down to this:
Will users notice the improvements?
How does this improvement compare to competing sites?
If users will notice AND the improvement would be enough to differentiate you from the competition - performance is an important feature - otherwise not so much. (To a point - I don't recommend ignoring it entirely - you don't want your site to turtle along after all).
No. Fast enough is generally good enough.
It's not necessarily true, however, that your client's ideas about "fast enough" should trump your own. If you think it's fast enough and your client doesn't, then yes, you need to accommodate your ideas to theirs. But if your client thinks it's fast enough and you don't, you should seriously consider going with your opinion, not theirs (since you may be more knowledgeable about performance standards in the wider world).
How important performance is depends largely and foremost on what you do.
For example, if you write a library that can be used in any environment, this can hardly ever have too much performance. In some environments, a 10% performance advantage can be a major feature for a library.
If you, OTOH, write an application, there's always a point where it is fast enough. Users will neither realize nor care whether a button press reacts within 0.05 or 0.2 seconds - even though that's a factor of 4.
However, it is always easier to get working code faster than it is to get fast code working.
No. Performance is not important.
Lack of performance is important.
Performance is something to be designed in from the outset, not tacked on at the end. For the past 15 years I have been working in the performance engineering space, and the cause of most project failures that I work on is a lack of requirements on performance. A couple of posts have noted "fast enough" as an observation, and whether your expectation matches that of your clients, but what about when your client, your architectural team, your platform engineering team, your functional test team, your performance test team and your operations team all have different expectations on performance, none of which have been committed to stone and measured against? Bad magic, to be certain.
Capture those expectations on the part of your clients. Commit them to a specific, objective, measurable requirement that you can evaluate at each stage of production of your software. Expectations may not be uniform: one section of your app/code may need to be faster than others, and not every customer will have the same expectations on what is considered acceptable. Having this information will force you to confront decisions in the design and implementation that you may have overlooked in the past, and it will result in a product which is a better match for your clients' expectations.
In most cases it is better to refactor than to rewrite a full codebase, but we have quite an interesting situation. In our application the business layer is pretty good, with unit tests, separation of concerns, etc. It does have some problems, but it can be refactored.
However, the UI layer is outdated. It is ASP.NET + some AJAX, but we want to migrate to a pure AJAX application (ExtJS + REST). The application is quite large and has about 100 separate screens. What would you advise?
I'm very familiar with your app Michael. I've also been in that exact situation twice in the past few years. There is ENORMOUS benefit in rewriting from scratch. You can't incrementally improve where you are. See this from Kathy Sierra.
Bite the bullet and redesign from the ground up. You have an opportunity to blow away the competition, but it's never going to happen with incremental improvements.
I don't see the prospect of a truly helpful, generic answer. Given complete freedom I'd agree with Glen: bite the bullet and go for the whole thing. However, commercial realities may prevent that; if the application is big enough, then the up-front investment may just be too big. Then the question is whether you can possibly get to where you need to be in increments.
For example, suppose that your app follows a common pattern: there are selection screens where the items to be worked on are picked by a user, and they lead to data entry screens. Then, can you rework the selection screens with nice AJAX stuff but retain the data entry screens? Address those in the next iteration.
All you Stackoverflowers,
I was wondering why GUI code is responsible for sucking away many, many CPU cycles. In principle, the graphical rendering is far less complex than Doom (although most corporate GUIs will introduce lots of window dressing). The event-handling layer also seems to be a heavy cost; however, it seems that a well-written implementation should switch between contexts efficiently on modern processors with a lot of memory/cache.
If anybody has run a profiler on their big GUI application, or a common API itself, I'm interested in where the bottlenecks lie.
Possible explanations (that I imagine) may be:
High levels of abstraction between hardware and application interface
Lots of levels of indirection to the correct code to execute
Low priority (compared to other processes)
Misbehaving applications flooding API with calls
Excessive object orientation?
Outright poor design choices in the API (not just issues, but design philosophy)
Some GUI frameworks are much better than others, so I'd like to hear varied perspectives. For example, the Unix/X11 system is much different from Windows, and even from WinForms.
Edit: Now a community wiki - go for it. I have one more thing to add -- I'm an algorithms guy in school and would be interested if there are inefficient algorithms in GUI code and which they are. Then again, it's probably just the implementation overhead.
I've no idea generally, but I'd like to add another item to your list: font rendering and calculations. Finding vector glyphs in a font and converting them to bitmap representations with anti-aliasing is no small task. And often it needs to be done twice: first to calculate the width/height of the text for positioning, and then to actually draw the text at the right coordinates.
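The two passes are visible even in the HTML5 canvas API, as a rough illustration:

```javascript
// Pass 1: compute glyph metrics for layout; pass 2: rasterize with anti-aliasing.
var ctx = document.createElement('canvas').getContext('2d');
ctx.font = '16px sans-serif';
var width = ctx.measureText('Hello, world').width; // measure for positioning
ctx.fillText('Hello, world', 10, 20);              // actually draw the glyphs
```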
Also, most drawing code today relies on clipping mechanisms to update just a part of the GUI. So, if just one part needs to be redrawn, the code actually redraws the whole window behind the scenes, and then takes just the needed part to actually update.
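In canvas terms the idea looks roughly like this (a sketch; `drawWholeWindow` stands in for the window's complete drawing code):

```javascript
// Dirty-rect repaint: the full drawing code runs, but output is clipped to
// the damaged region, so only that part of the surface actually changes.
function repaint(ctx, dirty) {
  ctx.save();
  ctx.beginPath();
  ctx.rect(dirty.x, dirty.y, dirty.width, dirty.height);
  ctx.clip();           // everything drawn outside this rect is discarded
  drawWholeWindow(ctx); // placeholder for the complete draw routine
  ctx.restore();
}
```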
Added:
In the comments I found this:
I'm also very interested in this. It can't be that the gui is rendered using only the cpu because if you don't have proper drivers for your gfx-card, desktop graphics render incredibly slow. If you have gfx-drivers however desktop-gfx go kinda fast but never as fast as a directx/opengl app.
Here's the deal as I understand it: every graphics card out there today supports a generic interface for drawing. I'm not sure if it's called "VESA", "SVGA", or if those are just old names from the past. Anyway, this interface involves doing everything through interrupts - for every pixel there is an interrupt call, or something like that. The proper VGA driver, however, is able to take advantage of DMA and other enhancements that make the whole process way less CPU-intensive.
Added 2: Ah, and for OpenGL/DirectX - that's another feature of today's graphics cards. They are optimized for 3D operations in exclusive mode; that's where the speed comes from. The normal GUI just utilizes basic 2D drawing procedures, so it has to send the contents of the whole screen every time it wants an update. 3D applications, however, send a bunch of textures and triangle definitions to the VRAM (video RAM) and then just reuse them for drawing. They just say something like "take triangle set #38 with texture set #25 and draw them". All these things are cached in the VRAM, so this is again way faster.
I'm not sure, but I would suspect that the modern 3D-accelerated GUIs (Vista Aero, compiz on Linux, etc.) also might take advantage of this. They could send common bitmaps to the VGA up front and then just reuse them directly from the VRAM. Any application-drawn surfaces however would still need to be sent directly every time for updates.
Added 3: More ideas. :) The modern GUIs for Windows, Linux, etc. are widget-oriented (that's control-oriented for Windows speakers). The problem with this is that each widget has its own drawing code and associated drawing surface (more or less). When the window needs to get redrawn, it calls the drawing code for all its child widgets, which in turn call the drawing code for their child widgets, etc. Every widget redraws its whole surface, even though some of it is obscured by other widgets.

With the above-mentioned clipping techniques, some of this drawn information is immediately discarded to reduce flickering and other artifacts. But it's still lots of manual drawing code that includes bitmap blitting, stretching, skewing, drawing lines, text, flood-filling, etc., and all of it gets translated into a series of putpixel calls that get filtered through clipping filters/masks and other stuff. Ah, yes, and alpha blending has also become popular today for nice effects, which means even more work.

So... yes, you could say this is because of lots of abstraction and indirection. But... could you really do it any better? I don't think so. Only 3D techniques might help, because they take advantage of the GPU for alpha calculations and clipping.
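A stripped-down sketch of that widget-oriented drawing model (the names are illustrative):

```javascript
// Each widget paints its entire surface, then recurses into its children,
// which may overdraw parts of the parent -- hence the redundant work.
function paintTree(widget, ctx) {
  widget.draw(ctx);                  // draws the widget's whole area
  widget.children.forEach(function (child) {
    ctx.save();
    ctx.translate(child.x, child.y); // children paint in local coordinates
    paintTree(child, ctx);
    ctx.restore();
  });
}
```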
Let's begin by saying that writing libraries is much harder than writing stand-alone code. The requirement that your abstraction be reusable in as many contexts as possible, including contexts which you haven't thought of yet, makes the task challenging even for experienced programmers.
Amongst libraries, writing a GUI toolkit is a famously difficult problem. This is because the programs which use GUI libraries range over a very wide variety of domains with very different needs. Mr Why and Martin DeMollo discussed the requirements placed on GUI libraries a little while ago.
Writing the GUI widgets themselves is difficult because computer users are very sensitive to minute details of the behavior of the interface. Non-native widgets never feel right, do they? In order to get a non-native widget right -- in order to get any widget right, in fact -- you need to spend an inordinate amount of time tweaking the details of its behavior.
So, GUIs are slow because of the inefficiencies introduced by the abstraction mechanisms used to create highly reusable components, added to the shortness of the time available to optimize the code once so much time has been spent just getting the behavior right.
Uhm, that's quite a lot.
The most simple but probably obvious answer is that the programmers behind these GUI apps are really bad programmers. You can go a long way writing code which does the most bizarre things and it will be faster, but few people seem to care how to do this, or they deem it an expensive, non-profitable waste of effort.
To set things straight: off-loading computations to the GPU won't necessarily fix any problems. The GPU is just like the CPU, except it's less general-purpose and more of a data-parallel processor. It can do graphics computations exceptionally well. Whatever graphics API/OS and driver combination you have doesn't really matter that much... well, OK, with Vista as an example, they changed the desktop composition engine. This engine is far better at compositing only that which has changed, and since the number one bottleneck for GUI apps is redrawing, that is a neat optimization strategy. The idea is to virtualize your computational needs and only update the smallest change every time.
Win32 sends WM_PAINT messages to windows when they need to be redrawn; this can be a result of windows occluding each other. However, it's up to the window itself to figure out what actually changed. More often than not nothing did change, or the change was trivial enough that it could have just been performed on top of whatever topmost surface you had.
This kind of graphics handling doesn't necessarily exist today. I would say that people have refrained from writing really efficient, virtualizing rendering solutions because the benefit/cost ratio is rather bad (low benefit, high cost).
Something Windows Presentation Foundation (WPF) does, which I think is far superior to most other GUI APIs, is that it splits layout updates and rendering updates into two separate passes. And while WPF is managed code, the rendering engine is not. What happens with rendering is that the managed WPF rendering engine builds a command queue (this is what DirectX and OpenGL do), which is then handed off to the native rendering engine. What's a bit more elegant here is that WPF will then try to retain any computation which didn't change the visual state - a trick, if you will, where you avoid costly rendering calls for things that don't have to be rendered (virtualizing).
In contrast to WM_PAINT, which tells a Win32 window to repaint itself, a WPF app would check what parts of that window require repainting and only repaint the smallest change.
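The retained command-queue idea, sketched in JavaScript rather than WPF's actual internals (`buildDrawCommands` is a placeholder for the recording step):

```javascript
// Retained rendering: draw commands are recorded once and replayed each
// frame; the queue is rebuilt only when the visual state actually changes.
var commandQueue = [];

function invalidate(visualTree) {
  commandQueue = buildDrawCommands(visualTree); // placeholder: re-record on change only
}

function renderFrame(ctx) {
  commandQueue.forEach(function (draw) { draw(ctx); }); // cheap replay of recorded work
  requestAnimationFrame(function () { renderFrame(ctx); });
}
```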
Now WPF is not supreme; it's a solid effort from Microsoft, but it's not the holy grail yet... the code which runs the pipeline could still be improved, and the memory footprint of any managed app is still more than I would want. But I hope this is the kind of answer you are looking for.
WPF is able to do some things asynchronously rather decently, which is a huge deal if you want to make a really responsive, low-latency/low-CPU UI. Asynchronous operation is more than off-loading work onto a different thread.
To summarize: a slow and expensive GUI means too much repainting, and the kind of repainting which is very expensive, i.e. of the entire surface area.
It does to some degree depend on the language. You might have noticed that Java and REALbasic applications are a fair bit slower than their C-based (C++, C#, Objective-C) counterparts.
However, GUI applications are much more complex than command-line apps. The Terminal window needs only to draw a simple window that doesn't support buttons.
There are also multiple loops for extra inputs and features.
I think that you can find some interesting thoughts on this topic in "Window System Design: If I had it to do over again in 2002" by James Gosling (the Java guy, also known for his work on pre-X11 windowing systems), available online here [PDF].
The article focuses on the positive side (how to make it fast), not on the negative side (what's making it slow), but it is still a good read on the topic.