Could anyone point me to a comprehensive performance comparison between XPath and the DOM API in different scenarios? I've read some questions on SO, like xPath vs DOM API, which one has a better performance and XPath or querySelector?, but none of them mentions specific cases. Here are some things I could start with.
No iteration involved. getElementById('foobar') vs //*[@id='foobar']. Is the former consistently faster than the latter? What if the latter is optimized, e.g. /html/body/div[@id='foo']/div[@id='foobar']? (A sketch of both styles follows this list.)
Iteration involved. getElementByX, then traversing the child nodes, vs generating an XPath snapshot and traversing the snapshot items.
Axes involved. getElementByX, then walking the next siblings, vs //following-sibling::foobar.
Different implementations. Different browsers and libraries implement XPath and DOM differently. Which browser's implementation of XPath is better?
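For concreteness, here is roughly what I mean by the two styles in cases 1 and 2; the id, class name, and expressions are only placeholders:

```javascript
// Case 1: single lookup by id.
const viaDom = document.getElementById('foobar');

const viaXPath = document.evaluate(
  "//*[@id='foobar']", document, null,
  XPathResult.FIRST_ORDERED_NODE_TYPE, null
).singleNodeValue;

// Case 2: collect a set of nodes, then iterate.
for (const el of document.getElementsByClassName('item')) {
  // ... work on el
}

const snapshot = document.evaluate(
  "//*[contains(concat(' ', normalize-space(@class), ' '), ' item ')]",
  document, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null
);
for (let i = 0; i < snapshot.snapshotLength; i++) {
  const el = snapshot.snapshotItem(i);
  // ... work on el
}
```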
As the answer to xPath vs DOM API, which one has a better performance says, an average programmer may screw up when implementing complicated tasks (e.g. multiple axes involved) the DOM way, while an XPath engine is presumably optimized for them. Therefore, my question is only about simple selections that can be done in both ways.
Thanks for any comment.
XPath and DOM are both specifications, not implementations. You can't ask questions about the performance of a spec, only about specific implementations. There's at least a ten-to-one difference between a fast XPath engine and a slow one: and they may be optimized for different things, e.g. some spend a lot of time optimizing a query on the assumption it will be executed multiple times, which might be the wrong thing to do for single-shot execution. The one thing one can say is that the performance of XPath depends more on the engine you are using, and the performance of DOM depends more on the competence of the application programmer, because it's a lower-level interface. Of course all programmers consider themselves to be much better than average...
This page has a section where you can run tests to compare the two and see the results in different browsers. For instance, in Chrome, XPath is reported as 100% slower than getElementById.
See getElementById vs QuerySelector for more information.
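If you want to reproduce this locally, a rough timing harness might look like the following (numbers will vary by browser and document size, and 'foobar' is a placeholder id that has to exist in the page):

```javascript
// Time each lookup over many iterations and print the totals.
function time(label, fn, iterations = 100000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  console.log(label, (performance.now() - start).toFixed(1), 'ms');
}

time('getElementById', () => document.getElementById('foobar'));
time('XPath', () => document.evaluate(
  "//*[@id='foobar']", document, null,
  XPathResult.FIRST_ORDERED_NODE_TYPE, null
).singleNodeValue);
```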
I agree with Michael that it may depend on the implementation, but I would generally say that DOM is faster. The reason is that I don't see a way to preprocess the parsed document to make XPath faster.
If you're traversing HTML rather than XML, a specialized parser can index all the ids and classes in the document. This makes getElementById and getElementsByClassName much faster.
With XPath, there's only one way to find the element with that id: by traversing, either top down or bottom up. You may be able to memoize repeated queries (or partial queries), but I don't see much other optimization that can be done.
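For what it's worth, a naive sketch of memoizing repeated XPath snapshot queries might look like this (the cache is keyed only by the expression string and goes stale as soon as the document mutates):

```javascript
const xpathCache = new Map();

// Return the snapshot items for an expression, computing them only once.
function cachedSnapshot(expr, context = document) {
  if (!xpathCache.has(expr)) {
    const result = document.evaluate(
      expr, context, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null
    );
    const nodes = [];
    for (let i = 0; i < result.snapshotLength; i++) {
      nodes.push(result.snapshotItem(i));
    }
    xpathCache.set(expr, nodes);
  }
  return xpathCache.get(expr);
}
```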
We are trying to optimize the LCP of our site by implementing infinite scroll on our web pages, either by lazy loading with the Intersection Observer API or with the CSS content-visibility property.
I need to know which one is more effective.
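For reference, this is roughly what the Intersection Observer variant would look like on our side (the sentinel id and loadNextPage() are placeholders):

```javascript
// Observe a sentinel element at the end of the list and load more items
// shortly before it scrolls into view.
const sentinel = document.querySelector('#scroll-sentinel');

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      loadNextPage(); // placeholder: fetch and append the next batch of items
    }
  }
}, { rootMargin: '200px' });

observer.observe(sentinel);
```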
Thanks,
This is somewhat dependent on the site in question. I would immediately say that setting content-visibility: auto on sections below the fold is a known pattern for improving rendering performance (just be sure to use contain-intrinsic-size as well to avoid layout shifts). Letting the browser do as much as possible with just HTML and CSS will typically result in better performance (provided it is all done correctly).
However, the better question may be what is causing the LCP measurement to be slow in the first place. If there are multiple images on the page, setting fetchpriority on the main image can tell the browser to load it first. Reviewing script order and making sure code is deferred where possible is another easy win for LCP. Most of the time there are other opportunities that move the LCP measurement more significantly (unless the DOM is truly massive, e.g. David Bowie's news page).
I have an XPath expression $x/descendant-or-self::*/@y which I have changed to $x//@y, as it improved the performance.
Does this change have any other impact?
As explained in the W3C XPath Recommendation, // is shorthand for /descendant-or-self::node()/, so strictly speaking there is a slight difference (node() rather than *). But since attributes can only occur on elements, I think this replacement is safe.
That might also explain why you see a performance boost, since MarkLogic has to worry less about whether there really are elements in between.
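If you want to convince yourself that the two forms select the same attributes, any XPath engine will do; here is a sketch using a browser's document.evaluate as a stand-in for MarkLogic, with a made-up sample document and the context node playing the role of $x:

```javascript
const doc = new DOMParser().parseFromString(
  '<root y="1"><a y="2"><b y="3"/></a><a/></root>',
  'application/xml'
);

// Count the attribute nodes selected by an expression relative to the root element.
function count(expr) {
  return doc.evaluate(expr, doc.documentElement, null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null).snapshotLength;
}

console.log(count('descendant-or-self::*/@y')); // explicit form
console.log(count('.//@y'));                    // shorthand form
// Both should report 3, since attributes can only occur on elements.
```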
HTH!
I use the :last-child selector plenty of times, mostly with border-bottom in a list where I set border: none; on the last child, or when using margins. So my question is: is the :last-child selector bad from a performance point of view?
Also, I've heard that it was left out of the CSS2 specification because :first-child is easy for the browser to detect, but to detect :last-child it needs to loop back.
If it was deferred from CSS2 over performance concerns but then introduced in Selectors 3, I suspect that's because performance is no longer the issue it previously was.
Remember that :last-child is the definitive and only way to select the last child of a parent (besides :nth-last-child(1), obviously). If browser implementers no longer have performance concerns, neither should we as authors.
The only compelling reason I can think of for overriding a border style with :first-child as opposed to :last-child is to allow compatibility with IE7 and IE8. If that boosts performance, let that be a side effect. If you don't need IE7 and IE8 support, then you shouldn't feel compelled to use :first-child over :last-child. Even if browser performance is absolutely critical, you should be addressing it the proper way by testing and benchmarking, not by premature optimization.
In general, the core CSS selectors perform well enough that most of us should not be worried about using them. Yes, some of them do perform worse than others, but even the worst performing ones are unlikely to be the main bottleneck in your site.
Unless you've already optimised everything else to perfection, I would advise not worrying about this. Use a profiling tool like YSlow to find the real performance issues on your site and fix those.
In any case, even if there is a noticeable performance implication for a given CSS selector (or any other browser feature), I would say that it's the browser makers' responsibility to fix it, not yours to work around it.
I believe it's still the simplest and cheapest way to get the last child.
By that I mean that any other way of getting the last child will perform worse, because it won't have the specification and implementation work behind it that :last-child does.
I have tried to be disciplined about decomposing code into small reusable methods when possible. As the project grows, I find myself re-implementing the exact same methods.
I would like to know how to deal with this in an automated way. I am not looking for an IDE-specific solution. Relying on method names alone may not be sufficient. Unix tools and scripting solutions would be extremely welcome. Answers such as "take care" are not the solutions I am seeking.
I think the cheapest solution to implement might be to use Google Desktop. A more accurate solution would probably be much harder to implement: treat your code base as a collection of documents where the identifiers (or the tokens within the identifiers) are the words of each document, and then use document clustering techniques to find the closest matching code to a query. I'm aware of some research along those lines, but nothing close to out-of-the-box code that you could use. You might try looking on Google Code Search for something. I don't think they offer a desktop version, but you might be able to find some applicable code you can adapt.
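As a very rough illustration of that idea (not the research mentioned above, just a toy Node.js script; the directory, file extension, and similarity threshold are placeholders, and it only looks at the top level of one directory):

```javascript
// Tokenize identifiers per file and flag file pairs whose token sets overlap heavily.
const fs = require('fs');
const path = require('path');

function tokens(source) {
  // Split identifiers like getUserName / get_user_name into lower-case word tokens.
  const ids = source.match(/[A-Za-z_][A-Za-z0-9_]*/g) || [];
  return new Set(
    ids.flatMap(id => id.split(/_|(?=[A-Z])/))
       .map(t => t.toLowerCase())
       .filter(Boolean)
  );
}

function jaccard(a, b) {
  let inter = 0;
  for (const t of a) if (b.has(t)) inter++;
  const union = a.size + b.size - inter;
  return union === 0 ? 0 : inter / union;
}

const dir = process.argv[2] || '.';
const files = fs.readdirSync(dir).filter(f => f.endsWith('.js')); // adjust the extension
const docs = files.map(f => ({ f, t: tokens(fs.readFileSync(path.join(dir, f), 'utf8')) }));

for (let i = 0; i < docs.length; i++) {
  for (let j = i + 1; j < docs.length; j++) {
    const s = jaccard(docs[i].t, docs[j].t);
    if (s > 0.6) console.log(`${docs[i].f} ~ ${docs[j].f}: similarity ${s.toFixed(2)}`);
  }
}
```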
Edit: And here's a list of somebody's favorite code search engines. I don't know whether any are adaptable for local use.
Edit2: Source Code Search Engine is mentioned in the comments following the list of code search engines. It appears to be a commercial product (with a free evaluation version) that is intended to be used for searching local code.
I'm searching for reliable data on the performance of OpenGL functions. A site that could, for example:
...tell me how much more efficient glInterleavedArrays is compared to a gl*Pointer-based implementation, with or without strides. If applicable, show the comparisons on nVidia vs. ATI cards vs. embedded systems.
...tell me how much of a boost is gained from using VBOs vs. non-buffered data for static, dynamic, and stream data.
I'd like to find a site that has "no-bullshit" performance data, not just vague statements like "glInterleavedArrays are usually faster than direct gl*Pointer usage".
Is there such a dream site? Or at least somewhere I can get answers to the aforementioned questions?
(Yes, I know that nothing beats hand-profiling, but the fact that something runs faster on my machine doesn't mean it's faster in general on all cards...)
It's more about application-level benchmarking than measuring the performance of individual features, but it might be possible to learn something from SPECviewperf, especially if you can find out more about which OpenGL mode each benchmark uses to perform its rendering. The benchmark seems to include some options to tweak the usage of display lists, vertex arrays, etc., but I don't think SPEC's published results go into any analysis of the effects of changing these from the defaults. They don't seem to have any VBO coverage yet.