Polymer SVG iconset and performance with multiple copies of the same icon

I am using the Polymer SVG iconset and the iron-icon element in my page. Let's say I have a large number of rows in a table and each row has a couple of icons in it (using iron-icon). These icons are repeated in every row. When I inspect the DOM, I see that each iron-icon in each row has the same icon SVG as part of the DOM (inside the shadow root of iron-icon).
Isn't this a huge performance bottleneck? IE11 is slow at parsing the DOM and this can cause further slowness. Would a font-based icon set be more optimized here? Is Polymer's approach of using an SVG iconset wrong?

From my experience, the performance issue is not the size of the DOM itself, but the JS API interaction with it. The way Polymer implements the icons, it acts as a polyfill for Web Components custom elements. What actually happens in older browsers that don't understand their declaration is that if you write
<iron-icon icon="search"></iron-icon>
a script cycles through the DOM and replaces what are considered unknown elements with DOM elements the browser understands (you'd have to look in the DOM inspector to see what is actually used in a specific browser).
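Roughly speaking (this is only an illustrative sketch, not Polymer's actual code), the effect of that upgrade pass is something like the following, which is also why each instance ends up carrying its own copy of the SVG:
// Illustrative sketch only, not Polymer's actual implementation.
// Each <iron-icon> instance gets its own copy of the SVG stamped into it,
// which is why the same icon markup shows up in every table row.
var iconMarkup = {
  search: '<svg viewBox="0 0 24 24"><path d="..."/></svg>'  // hypothetical icon registry
};
var icons = document.querySelectorAll('iron-icon');
for (var i = 0; i < icons.length; i++) {
  var name = icons[i].getAttribute('icon');
  if (iconMarkup[name]) {
    icons[i].innerHTML = iconMarkup[name];  // one full SVG copy per instance
  }
}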
A more direct approach could be to use something that IE understands natively, for example the SVG sprite pattern. Include an invisible <svg> element that contains symbols
<svg display="none">
  <symbol id="search" viewBox="..."><path d="..." /></symbol>
  ...
</svg>
and reference them
<svg class="icon"><use xlink:href="#search"/></svg>
If you can achieve that when compiling the page server-side, it avoids the use of scripting in the client and should give a nice performance boost.
Even if your table cells are constructed client-side, adding these elements to the DOM directly might still be faster than first adding something that a script has to replace later in a second run. (But that is only my guess, without experience to back it up.)
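Applied to the table from the question, each row then only carries lightweight <use> references back to the single hidden symbol (the #edit id here is made up):
<table>
  <tr>
    <td>First row</td>
    <td><svg class="icon"><use xlink:href="#search"/></svg></td>
    <td><svg class="icon"><use xlink:href="#edit"/></svg></td>
  </tr>
  <!-- further rows repeat only the <use> references, not the path data -->
</table>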

Related

Spritesheet performance using canvas vs div + js

I would like to know whether anybody has experience with which of these would be better:
1.) Using a spritesheet and drawing it on a canvas element.
2.) Using a spritesheet with a normal div and moving the spritesheet via JS or CSS.
Thanks, Luka
Moving spritesheets with CSS is generally faster, as most of the logic is done internally by the browser in compiled code, while doing it in JavaScript adds overhead due to JavaScript itself.
You won't be able to avoid JavaScript completely of course, but reducing the number of calls through JavaScript helps performance (in general, and this is also why you probably want to avoid jQuery for this specific purpose, as jQuery comes with an overhead of its own).
With canvas you have more options in terms of altering the sprite-sheet, but if you don't need this I would recommend you use CSS and plain JavaScript where needed.
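As a minimal sketch of the CSS route (the file name, frame size and frame count are assumptions): a steps() animation on background-position lets the browser advance the sprite sheet without any per-frame JavaScript.
.sprite {
  width: 64px;
  height: 64px;
  background: url(spritesheet.png) no-repeat 0 0;  /* assumed: 10 frames of 64px, laid out horizontally */
}
.sprite.playing {
  animation: play 1s steps(10) infinite;
}
@keyframes play {
  to { background-position: -640px 0; }
}
JavaScript is then only needed to start or stop the animation, e.g. element.classList.add('playing').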

Re-rendering SVGs from the cache. Recomputed or remembered?

I can't find an answer to this question: when the browser takes an SVG from the cache, does it re-compute the XML or not? Does it store the 'image' that it has already created? (How?)
I would have thought not, but then I've noticed how fast repeated SVGs load.
I've also noticed slowness on a page logo (in mobile browsers), which makes me think they re-compute the SVG, so I've moved to PNGs (which are obviously cached) for mobile to save a lot of computational work for low-end phones.
So maybe the answer depends on the browser / browser type / browser settings?
My SVGs are compressed SVGZ files, by the way.
Sometimes it does and sometimes it doesn't. Most browsers go to some effort not to rerender things unless they have to. There is a buffered-rendering property in SVG 1.2 Tiny that may help if you're using Opera; other browsers try to do things automatically without requiring such hints, though.
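For reference, that hint is just an attribute on the element whose rendering you want buffered (whether it has any effect depends on the browser, at the time essentially Opera):
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <!-- "static" suggests this group rarely changes and may be cached as a bitmap -->
  <g buffered-rendering="static">
    <path d="..." />
  </g>
</svg>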
Browsers generally don't cache SVG content as a simple bitmap, though. They do cache things like the absolute positions and sizes of shapes and text with transforms applied, the CSS tree structure, gradients, etc., and then use this information to redraw the content much more quickly than having to work it out on each repaint. Such information also allows browsers to cope with JavaScript and SMIL animation of parts of the SVG content, as well as user scrolling.

What drives "reflow/layout" times in Chrome and other browsers

When using an application developed with Backbone.js, Chrome freezes for about 7-10 seconds when adding the content of a large document, retrieved with an AJAX call, to the DOM. Chrome's event timeline shows that the main issue is a single 'layout' event that takes about 6-8 seconds (times measured on a modern MacBook Air, if that matters).
The content being loaded is about 800 KB of uncompressed HTML and roughly 15,000 DOM nodes; memory usage after the content is loaded is about 30-35 MB. It's a large document, but such a long freeze just doesn't feel right.
Is such a large 'layout' time to be expected for a document like that, or is this a sign of other issues (like too-complex CSS rules, bad HTML structure, etc.)?
What other factors besides document size may have an impact on the performance of the 'layout' event?
Besides the obvious and probably right solution of breaking the content into pieces, is there any trick that can be done to make it easier for the browser to compute the layout event? (I am thinking of something like placing the monster content inside an iframe or a div with fixed positioning, or avoiding specific CSS features inside the content.)
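To illustrate the "breaking the content into pieces" option, one rough sketch (function and variable names are made up) is to append the fetched rows over several animation frames, so that no single layout pass has to handle all ~15,000 nodes at once:
// Rough sketch: append detached nodes in small chunks, one chunk per frame,
// so each 'layout' pass stays small instead of one multi-second pass.
function appendInChunks(container, nodes, chunkSize) {
  var i = 0;
  function step() {
    var fragment = document.createDocumentFragment();
    for (var end = Math.min(i + chunkSize, nodes.length); i < end; i++) {
      fragment.appendChild(nodes[i]);
    }
    container.appendChild(fragment);  // triggers a small layout for this chunk only
    if (i < nodes.length) {
      requestAnimationFrame(step);
    }
  }
  requestAnimationFrame(step);
}
// e.g. appendInChunks(document.getElementById('results'), rowElements, 200);
// where rowElements is an array of detached DOM nodes built from the AJAX response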

Render css3 transformed elements to canvas / image

Basically, I want to save a certain DOM element of my page as an image, and store this on a server (and also allow the user to save the image to a local disk). I reckon the only way of doing this currently is to render to a canvas, which allows me to send the image data via AJAX and also create image elements in the DOM. I found a promising library for this; however, my DOM element has
multiple transparent backgrounds
CSS 3D transforms
And html2canvas simply fails there. Is there currently any way to neatly save an image representation of the current state of a DOM element, with all its CSS3 glory?
Browsers may never allow a DOM element to be truly rendered as it is to a canvas, because there are very serious security concerns around being able to do that.
Your best bet is html2canvas plus your own hacks. You may simply need to implement your own render code in a customised way. Multiple backgrounds should be doable with drawImage calls, and you may be able to work in css3 transforms when canvas 2D gets setTransform() (which I think is only in the next version of the spec).
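A hand-rolled render along those lines might look roughly like this (the image variables are placeholders for already-loaded layers, and the matrix only approximates a 2D slice of the 3D transform):
var canvas = document.createElement('canvas');
canvas.width = 400;   // assumed element size
canvas.height = 300;
var ctx = canvas.getContext('2d');
// stack the transparent backgrounds back to front
ctx.drawImage(backgroundLayer1, 0, 0);
ctx.drawImage(backgroundLayer2, 0, 0);
// approximate the CSS transform with a 2D matrix (a, b, c, d, e, f)
ctx.setTransform(1, 0.2, 0, 1, 40, 0);
ctx.drawImage(contentLayer, 0, 0);
ctx.setTransform(1, 0, 0, 1, 0, 0);  // reset to the identity matrix
var dataUrl = canvas.toDataURL('image/png');  // send via AJAX or offer for download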
At this stage of CSS3 development and cross-browser support, this is probably not possible without writing your own html2canvas extension.
You can try digging into the Google Chrome bug reporter, as it allows you to send a snapshot of the current web page. But I think that is some internal function which isn't available in the JS API.
Also, I think this can be easily abused for spying on users, so don't expect any official support from browser development teams.

Determine character index in HTML source given a DOMRange from a WebKit selection

I'm attempting to synchronize a DOMRange (representing a user selection from a Cocoa WebView) to the original HTML source currently rendered in that view, as a kind of Dreamweaver-style split editor.
My first idea was to get the DOMRange object's startContainer and offset and walk up the DOM tree from there, accumulating the overall character offset up to the body tag.
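A rough sketch of that walk (it measures the serialized DOM rather than the original source, and glosses over ancestor tag lengths, which is part of the trouble described below):
// Rough sketch: accumulate a character offset for (node, offsetInNode) by
// summing the lengths of everything before it, walking up to <body>.
function charOffsetInBody(node, offsetInNode) {
  var total = offsetInNode;
  while (node && node !== document.body) {
    for (var sib = node.previousSibling; sib; sib = sib.previousSibling) {
      total += (sib.nodeType === Node.TEXT_NODE)
        ? sib.textContent.length
        : (sib.outerHTML || sib.textContent).length;  // comments etc. are approximated
    }
    // note: the length of each ancestor's opening tag is not counted here
    node = node.parentNode;
  }
  return total;
}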
Unfortunately this task presents some problems:
Clearly the document's outerHTML will differ from the original HTML source if the DOM was manipulated via JavaScript or the parser needed to clean up malformed tags.
I can't figure out how to get the character offset of a node within its parent node's content (e.g., 4 characters to target in <p>some<div>target</div>text</p>), and normalize doesn't seem to make this any easier.
Trying to account for some of the problems in #1, or just going from HTML source to WebView will probably require separately parsing the HTML and then correlating the two DOM-trees.
One ray of hope is that HTML5 specifies a standard parsing algorithm for dealing with invalid HTML (which WebKit has since adopted), so in theory it should be possible to use an off-the-shelf HTML5 parser to generate the same tree as WebKit — right?
This is the most similar existing question I could find, but it's for a slightly different problem:
Getting source HTML from a WebView in Cocoa
Your problem #1 is actually not so bad; you can just turn off JS interpretation.
Look at QWebSettings::JavascriptEnabled, or just drop this in before you load any html:
QWebSettings::globalSettings()->setAttribute(QWebSettings::JavascriptEnabled, false);
That should leave your DOM un-mangled by JS. Good luck!
