I use the :last-child selector plenty of times, mostly when using border-bottom in a list, where I use border: none; on the last child, or when using margins. So my question is: is the :last-child selector bad from a performance point of view?
Also, I've heard that it was left out of the CSS2 specification because :first-child is easy for the browser to detect, but detecting :last-child requires looping back through the children.
If it was deferred from CSS2 over performance concerns but then introduced in Selectors 3, I suspect that's because performance is no longer the issue it once was.
Remember that :last-child is the definitive and only way to select the last child of a parent (besides :nth-last-child(1), obviously). If browser implementers no longer have performance concerns, neither should we as authors.
The only compelling reason I can think of for overriding a border style with :first-child as opposed to :last-child is to allow compatibility with IE7 and IE8. If that boosts performance, let that be a side effect. If you don't need IE7 and IE8 support, then you shouldn't feel compelled to use :first-child over :last-child. Even if browser performance is absolutely critical, you should be addressing it the proper way by testing and benchmarking, not by premature optimization.
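For illustration, here is a minimal sketch of both patterns (the .menu and .legacy-menu class names are just placeholders):

```css
/* Modern approach: draw separators with border-bottom
   and remove the one on the last item */
.menu li {
  border-bottom: 1px solid #ccc;
}
.menu li:last-child {
  border-bottom: none;
}

/* IE7/IE8-friendly alternative: draw border-top instead
   and remove the one on the first item */
.legacy-menu li {
  border-top: 1px solid #ccc;
}
.legacy-menu li:first-child {
  border-top: none;
}
```

Both produce the same visual result; the second just avoids :last-child for the sake of old IE, not for speed.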
In general, the core CSS selectors perform well enough that most of us should not be worried about using them. Yes, some of them do perform worse than others, but even the worst performing ones are unlikely to be the main bottleneck in your site.
Unless you've already optimised everything else to perfection, I would advise not worrying about this. Use a profiling tool like YSlow to find the real performance issues on your site and fix those.
In any case, even if there is a noticeable performance implication for a given CSS selector (or any other browser feature), I would say it's the browser makers' responsibility to fix it, not yours to work around it.
I believe it's still the simplest and cheapest way to get the last child.
By that I mean that any other way of selecting the last child will perform worse, because it won't have had the same standardization work from the W3C community (and the corresponding optimization by browser implementers) behind it.
We are trying to optimize the LCP of our site by implementing infinite scroll on our web pages, either by lazy loading with the Intersection Observer API or by using the CSS content-visibility property.
I need to know which one is more effective.
Thanks,
This is somewhat dependent on your site in question. I would immediately say that setting content-visibility: auto on sections below the fold is a known pattern for improving rendering performance (just be sure to use contain-intrinsic-size as well to avoid layout shifts). Letting the browser do as much as possible with just HTML and CSS will typically result in better performance (provided it is all done correctly).
However, the better puzzle may be looking into what is causing the LCP measurements to be slow. If there are multiple images on the page, changing the fetchpriority of images can tell the browser to load the main image first. Looking at script order and making sure all code is deferred when possible is another easy win for LCP. Most of the time, there are other opportunities to more significantly change the LCP measurement (unless the DOM is truly massive, e.g. David Bowie's news page).
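As a minimal sketch of the content-visibility pattern mentioned above (the .below-fold class name and the 800px estimate are assumptions for illustration):

```css
/* Let the browser skip rendering work for off-screen sections
   until they get close to the viewport */
.below-fold {
  content-visibility: auto;
  /* Reserve an estimated height so skipped sections still occupy space,
     which prevents layout shifts while scrolling */
  contain-intrinsic-size: auto 800px;
}
```

The fetchpriority hint goes on the LCP image itself, along the lines of <img src="hero.jpg" fetchpriority="high" alt="...">, where the file name is of course a placeholder.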
Spoiler alert: Outlining is not what I'm searching for.
Context
I'm a great fan of Code Contracts, at least on a philosophical level, but sometimes the number of code-contract statements gets to a level where the readability of the core functional code feels significantly impaired.
The same goes for XML docs: when you write them as you should, I've noticed that a large amount of fluff between the purely functional parts of the code can significantly reduce how efficiently you perceive the functional aspect of the code.
Now of course we can use outlining to at least reduce the amount of visible XML docs in that case, although I'd argue it would be more beneficial if you could tell VS to hide those lines entirely, and so reduce the complexity of using the outlining tool for the functional parts of the code that you sometimes want to focus on.
And of course, outlining does not work in cases like code contracts, unless you try to wrap them in regions, consequently introducing even more overhead.
Question
Does Visual Studio have any infrastructure that would make it possible to completely hide certain lines of code, while maintaining correct line numbering and marking where lines are hidden, separately from the outlining functionality?
A couple of toggle buttons somewhere to totally hide stuff like XML docs and code contracts would be totally awesome, and even more awesome if you could manually configure what to hide, like custom logging function calls.
I was naturally drawn to Ember's nice API/design/syntax compared to the competitors, but was very saddened to see that the performance was significantly worse. (For example, see the now well known http://jsfiddle.net/samdelagarza/ntMdB/167/ .) My eyes tell me it's at least 4 times slower than Backbone in Chrome.
Version 0.9.6 of EmberJS apparently has many performance fixes, in particular around bindings and rendering. However, the above benchmark still performs poorly when using this version of Ember.
I see the above benchmark as demonstrative of one framework's binding cost. I come from Flex where bindings perform well enough that you don't have to constantly think whether these 5 bindings per renderer (multiplied by maybe 20 renderers) you want to use aren't going to be too much of an overhead. Ease of use is nice, but only if good enough performance is maintained. (Even more so since HTML5 also often targets mobiles).
As it stands, I tend to think the beauty of Ember is not worth the performance hit compared to some of its competitors, as we're talking about big apps with many bindings here; otherwise you wouldn't need such a framework in the first place. I could live with Ember performing slightly less well; after all, it brings more to the table.
So my questions are fairly general and open:
Is the Ember part of the benchmark written well enough that it shows a genuine issue?
Are the 0.9.6 performance updates maybe very low key?
Are the areas of bad performance identified by the main contributors?
This isn't really an issue of bindings being slow, but doing more DOM updates than necessary. We have been doing some investigation into this particular issue and we have some ideas for how to coalesce these multiple operations into one, so I do expect this to improve in the future.
That said, I can't see that this is a realistic benchmark. I would never recommend doing heavy animation in Ember (or with Backbone, for that matter). In standard app development, you shouldn't ever have to update that many different views simultaneously and at that frequency.
If you can point out slow areas in a normal app, we would be very happy to investigate. Performance is of great concern to us, and if things are truly slow during normal operation, we would consider that a bug. But, like I said, performant binding-driven animation is not one of our goals, nor do I know of anyone for whom it is. Ember generally plays well with other libraries, so it should be possible to plug in an animation library to do the animations outside of Ember.
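As a generic illustration of the kind of coalescing mentioned above (this is not Ember's actual mechanism, just a common pattern for batching DOM writes):

```typescript
// Generic write-coalescing sketch: queue DOM mutations and flush them once
// per animation frame instead of touching the DOM on every binding update.
type DomWrite = () => void;

const pendingWrites: DomWrite[] = [];
let flushScheduled = false;

function scheduleWrite(write: DomWrite): void {
  pendingWrites.push(write);
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(() => {
      flushScheduled = false;
      // Apply all queued mutations in one batch to avoid repeated reflows.
      const writes = pendingWrites.splice(0, pendingWrites.length);
      for (const write of writes) {
        write();
      }
    });
  }
}

// Usage: many bindings can call scheduleWrite(); the DOM is updated once per
// frame. Assumes an element with id "counter" exists on the page.
scheduleWrite(() => {
  document.getElementById("counter")!.textContent = String(Date.now());
});
```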
Would anyone enlighten me with a comprehensive performance comparison between XPath and DOM in different scenarios? I've read some questions on SO like xPath vs DOM API, which one has a better performance and XPath or querySelector?, but none of them mentions specific cases. Here are some things I could start with.
No iteration involved. getElementById('foobar') vs //*[@id='foobar']. Is the former consistently faster than the latter? What if the latter is optimized, e.g. /html/body/div[@id='foo']/div[@id='foobar']?
Iteration involved. getElementsByX and then traversing the child nodes vs generating an XPath snapshot and then iterating over the snapshot items.
Axes involved. getElementsByX and then walking the next siblings vs //following-sibling::foobar.
Different implementations. Different browsers and libraries implement XPath and DOM differently. Which browser's implementation of XPath is better?
As the answer to xPath vs DOM API, which one has a better performance says, an average programmer may screw up when implementing complicated tasks (e.g. with multiple axes involved) the DOM way, while XPath is guaranteed to be optimized. Therefore, my question only concerns the simple selections that can be done both ways.
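For concreteness, here is a rough sketch of the kind of head-to-head timing I have in mind for the first case (the id 'foobar' and the iteration count are arbitrary):

```typescript
// Rough timing sketch for case 1: getElementById vs. an equivalent XPath query.
const ITERATIONS = 100_000;

console.time("DOM getElementById");
for (let i = 0; i < ITERATIONS; i++) {
  document.getElementById("foobar");
}
console.timeEnd("DOM getElementById");

console.time("XPath //*[@id='foobar']");
for (let i = 0; i < ITERATIONS; i++) {
  document.evaluate(
    "//*[@id='foobar']",
    document,
    null,
    XPathResult.FIRST_ORDERED_NODE_TYPE,
    null
  );
}
console.timeEnd("XPath //*[@id='foobar']");
```

Thanks for any comment.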
XPath and DOM are both specifications, not implementations. You can't ask questions about the performance of a spec, only about specific implementations. There's at least a ten-to-one difference between a fast XPath engine and a slow one: and they may be optimized for different things, e.g. some spend a lot of time optimizing a query on the assumption it will be executed multiple times, which might be the wrong thing to do for single-shot execution. The one thing one can say is that the performance of XPath depends more on the engine you are using, and the performance of DOM depends more on the competence of the application programmer, because it's a lower-level interface. Of course all programmers consider themselves to be much better than average...
This page has a section where you can run tests to compare the two and see the results in different browsers. For instance, for Chrome, XPath is reported as 100% slower than getElementById.
See getElementById vs QuerySelector for more information.
I agree with Michael that it may depend on the implementation, but I would generally say that DOM is faster. The reason is that I don't see any way you can optimize the parsed document to make XPath faster.
If you're traversing HTML and not XML, a specialized parser is able to index all the ids and classes in the document. This makes getElementById and getElementsByClassName much faster.
With XPath, there's only one way to find an element with that id: by traversing, either top down or bottom up. You may be able to memoize repeated queries (or partial queries), but I don't see any other optimization that can be done.
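As a sketch of the memoization idea (a hypothetical helper, not something the browser does for you; the cache would also need to be invalidated whenever the document changes):

```typescript
// Hypothetical helper: cache the results of repeated XPath queries against a
// document that is not mutated between lookups.
const xpathCache = new Map<string, Node | null>();

function queryXPathCached(expression: string, context: Document): Node | null {
  if (xpathCache.has(expression)) {
    return xpathCache.get(expression) ?? null;
  }
  const result = context.evaluate(
    expression,
    context,
    null,
    XPathResult.FIRST_ORDERED_NODE_TYPE,
    null
  ).singleNodeValue;
  xpathCache.set(expression, result);
  return result;
}
```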
When writing application code, it's generally accepted that premature micro-optimization is evil, and that profiling first is essential, and there is some debate about how much, if any, higher level optimization to do up front. However, I haven't seen any guidelines for when/how to optimize generic code that will be part of a library or framework, where you never know exactly how your code will be used in the future. What are some guidelines for this? Is premature micro-optimization still evil? How should performance be balanced with other design goals such as ease of use, ease of demonstrating correctness, ease of implementation, and flexibility?
"How should performance be balanced with other design goals...?"
Get it to work.
Optimize it until it cannot be optimized further.
Note the order. "Avoid premature optimization" means: optimize it after it works.
Optimization is still very, very important. Premature optimization does not mean NO optimization. It means optimize after it works.
I would say that optimization must take a back seat to other design goals such as ease of use, ease of demonstrating correctness, ease of implementation, and flexibility.
Try to write your code intelligently using good practices and avoiding the obvious pitfalls. Still, don't optimize until you can do it with a profiler and real use cases.
You will still encounter some use cases you never thought of but you can't optimize for them if you never thought of them.
A well designed framework will usually be a reasonably performing one too.
I heard an interesting and very enlightening discussion about the famous Knuth quote on a podcast recently (I think it was Deep Fried Bytes), which I'll try to summarize:
Everyone knows the famous quote: Premature optimization is the root of all evil.
However, that's only half of it. The full quote is:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Look at this carefully - say about 97% of the time.
The other side of that statement is that about 3% of the time, "small" efficiencies are critical.
My monitor displays about 50 lines of code. Statistically, at least 1-2 lines of code on every screen will contain something performance sensitive! Following the common wisdom of 'do it now, optimize it later' doesn't seem like such a cunning plan when you think that on every screen you have a possible performance issue.
IMHO you should always be thinking about performance. You shouldn't expend a great deal of effort or sacrifice maintainability for it until proven by profiling/testing, but you should definitely have it in the back of your mind.
I'd personally apply this to generic code like this:
You are bound to have some code somewhere, which when you wrote it you thought "this will be slow", or "this is a dumb algorithm, but it's not important right now, so I'll fix it later." As you're in a shared library and you can't assert that method A will only ever get called with 5 items, you should go in and clean all this stuff up.
Once you've sorted those things out, I wouldn't bother going much further. Maybe run the profiler over your unit tests to make sure nothing dumb has snuck through, but otherwise wait for feedback from the consumers of your library.
My rule of thumb is:
don't optimize
The full rule is actually:
if you don't have a metric, don't optimize
This means that if you haven't measured the performance and generated a concrete metric, you shouldn't be doing anything to make the code perform better.
After all: without a metric, how do you know what to optimize?
Once you have done some profiling, you may actually be surprised by where the performance bottlenecks of your system are ... in my experience, it is often the case that relatively minor changes can have a drastic impact.
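As a tiny example of what having a metric can mean in practice (processOrders here is just a made-up stand-in workload):

```typescript
// Minimal "get a metric first" sketch: measure the code path you suspect is
// slow before changing anything. Replace processOrders with your real code.
function processOrders(batchSize: number): number {
  let total = 0;
  for (let i = 0; i < batchSize; i++) {
    total += Math.sqrt(i);
  }
  return total;
}

const start = performance.now();
processOrders(1_000_000);
const elapsedMs = performance.now() - start;
console.log(`processOrders took ${elapsedMs.toFixed(1)} ms`);
// Only once numbers like this exist does it make sense to decide what to optimize.
```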
You're right that it's not always clear where the best bang for your buck is. Your best bet is to be a user of your framework as well as its designer.
Employ your own framework in a non-trivial application and try to exercise the whole range of functionality. The more you use it, the clearer it will become which things most need to be optimal.
Also, get feedback and suggestions from other users as frequently as possible. You will inevitably find that other people want to do things with your framework that you would never think of.
I think the best approach is to have a really good set of use cases for how your framework will be exercised. Only then will you have any good idea of whether the performance is adequate for its intended use.
Sure, you're never going to know how somebody is going to use your framework in the future (in the early years of my career, it never failed to amaze me the creative ways that users put my software to use - ways I'd never envisaged!) but having thought about how you think it will be used should get you most of the way there.