Is off-screen SVG rendered? - performance

Scenario: I have an SVG image that I can zoom in and out of. Depending on the zoom level, I will display more or less detail on the visible part.
The question is: should I take care not to display details on the parts that are not currently visible (off-screen), or is the rendering engine smart enough to skip (clip) those parts before they are rendered?

Yes, browsers are usually clever enough to not render things outside the viewport area.
Note however that the browser still needs to traverse the entire document tree, so even things outside the viewport area can have an impact. It's usually enough to mark the non-interesting subtrees with display="none" to let the browser skip over them when traversing. On small documents that's usually not something that you need to worry about.
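As a concrete illustration, here is a minimal sketch of that approach, assuming the optional details live in <g class="detail"> subtrees of an inline SVG and that zoom/pan is done by updating the viewBox (both of which are assumptions, not part of the original question):

```typescript
// Cache each detail group's bounding box up front, while everything is still
// displayed (getBBox() is unreliable on non-rendered elements in some browsers).
const detailBoxes = new Map<SVGGElement, DOMRect>();

function cacheDetailBoxes(svg: SVGSVGElement): void {
  svg.querySelectorAll<SVGGElement>("g.detail").forEach((g) => {
    detailBoxes.set(g, g.getBBox());
  });
}

// After every zoom/pan, hide the detail subtrees that fall outside the
// current viewBox so the browser can skip them while traversing the tree.
function cullDetailGroups(svg: SVGSVGElement): void {
  const vp = svg.viewBox.baseVal; // visible region in user units
  detailBoxes.forEach((box, g) => {
    const visible =
      box.x < vp.x + vp.width &&
      box.x + box.width > vp.x &&
      box.y < vp.y + vp.height &&
      box.y + box.height > vp.y;
    // display="none" lets the browser skip the whole subtree when traversing.
    g.setAttribute("display", visible ? "inline" : "none");
  });
}
```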

I would guess that clipping to the current viewport is always applied. But you are probably changing the DOM when you update the detail visibility, and restricting those updates to the visible parts only can make a difference.
The easiest way to find out is to measure, though. Make two prototypes, one with manual clipping and one without, and look for differences in rendering speed across various renderers.
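If it helps, here is a rough way to compare the two prototypes: measure the average frame time while a scripted zoom or pan runs (the duration and the idea of averaging raw requestAnimationFrame deltas are just assumptions for the sketch):

```typescript
// Minimal sketch: average frame time over a fixed window, using rAF deltas.
function measureFrameTimes(durationMs = 3000): Promise<number> {
  return new Promise<number>((resolve) => {
    const deltas: number[] = [];
    let last = performance.now();
    const start = last;

    function frame(now: number): void {
      deltas.push(now - last); // time since the previous frame, in ms
      last = now;
      if (now - start < durationMs) {
        requestAnimationFrame(frame);
      } else {
        resolve(deltas.reduce((a, b) => a + b, 0) / deltas.length);
      }
    }
    requestAnimationFrame(frame);
  });
}

// Usage: start the scripted zoom in each prototype, then
// measureFrameTimes().then((avg) => console.log(`avg frame: ${avg.toFixed(1)} ms`));
```

Run the same scripted interaction in both prototypes and in several browsers; a consistently lower average frame time is the difference you are looking for.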

Related

Custom infinite scrolling with items being removed while being scrolled out of view

The problem
Today's modern websites may use the infinite scrolling technique to replace paged lists and give their users a more seamless experience.
This is all nice and dandy as long as users don't scroll too far down, at which point the document becomes very complex, with a huge number of DOM nodes. There are of course ways to mitigate this problem (e.g. replacing elements that have scrolled out above the top with a single DIV of appropriate height), but they are either complex to implement or still have their flaws.
The idea
I was wondering whether anyone has already seen an implementation where items that get scrolled out (top or bottom) somehow become smaller and fainter until they disappear and get replaced by their adjacent items.
I'm thinking of an experience that mixes:
scrolling
fading
scaling
Fading and scaling can be seen on Medium.com when you get to the bottom of any article and click the title of the next recommended one displayed below. If you pay attention when you click, you can see the original article disappearing while being replaced by a new article sliding up.
Content scrolling could be done this way, and infinite scrolling would be much smoother and less resource-consuming, as elements would be replaced on the fly and in place.
The number of simultaneously displayed items of course depends on item size. In the case of Medium-like articles it would likely be a single article, which would also scroll until you reached its very bottom (or top). In the case of Facebook-like posts it would be many more items at once, as they don't take up as much vertical space.
Cover Flow works in a somewhat similar way: it displays the middle item completely while the rest are either hidden or scaled/transformed.
The question
Has anybody seen such an implementation on the web? If done properly it would actually make for a much nicer infinite scrolling experience without hogging our browsers.
But to make my question clearer and non-debatable: can you provide a working (albeit simplified) example of such an experience?
Requirements:
when an item gets scrolled out it disappears (using fade, scale or both)
when items appear at the bottom (or at the top when scrolling up) they should display in the opposite way to scrolled-out items
pressing the usual scrolling keys (Home, End, Page Up, Page Down and Space) should work
invisible items should be removed from the DOM
scrolling should also somehow be available via some sort of scrollbar
I think I might have found what you were looking for.
http://engineering.linkedin.com/linkedin-ipad-5-techniques-smooth-infinite-scrolling-html5
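For reference, this is not the LinkedIn technique from the link above, just a minimal sketch of the "spacer div" mitigation mentioned in the question: items scrolled far above the viewport are removed from the DOM and a single spacer keeps the scrollbar honest. The element ids, the fixed item height and the class name are all hypothetical:

```typescript
const ITEM_HEIGHT = 120;                                  // assumed fixed row height, in px
const container = document.getElementById("feed")!;       // hypothetical scrollable element
const spacer = document.getElementById("top-spacer")!;    // hypothetical empty div above the items
let firstRenderedIndex = 0;

container.addEventListener("scroll", () => {
  const shouldStartAt = Math.floor(container.scrollTop / ITEM_HEIGHT);

  // Remove rows that scrolled out above, and grow the spacer to compensate
  // so the scrollbar position and range stay correct.
  while (firstRenderedIndex < shouldStartAt) {
    container.querySelector(".item")?.remove();
    firstRenderedIndex++;
  }
  spacer.style.height = `${firstRenderedIndex * ITEM_HEIGHT}px`;

  // Re-inserting rows when scrolling back up, and appending/fading new rows at
  // the bottom, would follow the same pattern in the opposite direction.
});
```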

Are off-stage DisplayObjects in Flash still slowing down my game?

How does Flash deal with elements that are off-stage?
Obviously Flash doesn't actually render them (because they don't appear anywhere on-screen), but does the rendering process still run for them, slowing down my game as much as it would if the elements were on-screen?
Or does Flash intelligently ignore elements that don't fall into a renderable area?
Should I manually manage removing objects from the DisplayList and adding them back as they exit and enter the stage, or is this going to be irrelevant?
Yes, they are slowing down your game.
In one of my early experiments I developed a sidescroller game with many NPCs scattered around the map, not all visible on the same screen. I still had to run their logic, but they weren't on the screen. Performance was significantly better once I removed them from the display list while they were irrelevant (by simply checking their x position in relation to the 'camera'). Again, I'm not talking about additional code and events that may be attached to them, just plain graphical children of a MovieClip.
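The visibility check itself is trivial; sketched below in TypeScript purely for illustration (the original answer has no code, and the names, margin and on/off flag are hypothetical; in AS3 the on/off switch would be container.addChild/removeChild):

```typescript
interface Npc {
  x: number;              // world-space position
  width: number;
  onDisplayList: boolean; // whether the sprite is currently a display-list child
}

function cullNpc(npc: Npc, cameraX: number, screenWidth: number, margin = 100): void {
  // Visible if the NPC overlaps the camera window, padded by a small margin.
  const visible =
    npc.x + npc.width > cameraX - margin &&
    npc.x < cameraX + screenWidth + margin;

  if (visible && !npc.onDisplayList) {
    npc.onDisplayList = true;   // addChild(npc) in AS3
  } else if (!visible && npc.onDisplayList) {
    npc.onDisplayList = false;  // removeChild(npc) in AS3
  }
}
```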
The best practice though, in my experience, is drawing the objects into bitmaps. Of course if you're too deep into your game already this may be irrelevant, but if you have the time to invest, this is one of the best ways to get the most out of AS3 for 2D games. I found some of the best tutorials on bitmaps and AS3 at 8bitrocket:
http://www.8bitrocket.com/books/the-essential-guide-to-flash-games/ I can elaborate on the subject if you want, but I think I'm going off topic here.
Even if some display objects are outside the stage area, their code is still executed. If they have any animation playing, that might slow down performance.
The question arises: why do we need to keep unused items outside the stage area? If you need to 'cache' the MovieClips for faster loading, then load them in a keyframe that control will never reach. For example, load the display objects you want to show in frame 1, put a stop() in that frame's actions panel and make it a keyframe, then load the unused animations in frame 2. Since there is a stop() in frame 1, control never reaches frame 2, but the display objects are still cached.
Or, if you have code in the unused display objects and thus need to load them along with the main game components, try putting stop() in the frames of the unused display objects so that they don't animate.

When to use PresentationParameters.BackBufferWidth vs .Viewport.Width

I had to shorten the calls to make the question more readable, but...
When is it correct or incorrect to use one or the other?
I guess in most cases it's the same, as you just have the one Viewport, but if you go split-screen I guess you'll have more.
Usually you want the viewport size, as this is the region within which rendering actually takes place.
If you ever add anything like split-screen or picture-in-picture rendering, then you must use the viewport. So you may as well use it to begin with.
You should use the backbuffer size only when that is what you actually want. For example, you want the backbuffer when taking screenshots, or setting viewport positions.
I've got a more detailed answer to a very similar question over on the game dev site.

Multiple Core Animation Viewports

I have a complex structure of CALayers forming a motion graphics system that can be manipulated by the user. This is being displayed in the main window as part of the UI. I am looking for a good way to display multiple small sections of the CALayer stack on a second display as "viewports", which will likely be at a higher resolution than the main view. I am aware that I could render them out and redraw them, but I want to maintain the resolution independence of the CALayers.
My thought process was something to the effect of adding the main CALayer to multiple superlayers and then using a combination of masks and transforms to get the viewport to display the portion needed. Unfortunately, a CALayer can only have one superlayer.
Is there any good way to achieve this? Thanks in advance.
Unfortunately I think you'll need to maintain multiple CALayer stacks, one for each view. Since all the sets of layers should just be reflecting the state of a single model it should be relatively straightforward to keep them in sync.
You could optimise the zoomed view to only manage layers that are actually visible, which would cut down on resource usage.

What affects browser page rendering performance?

By browser rendering performance I mean things like: scrolling, moving elements in animated fashion, z-order changes.
In particular I get a tremendous slowdown in Firefox 3.6 and IE8 when I move an image over my page using the top and left styles. I have no problems with Chrome 8.
With Firebug I tried hiding page elements one by one, and the largest improvement by far came from the page-wide background JPEG that I use. I wonder how it affects performance, since the image is moving above another element that obscures the background. This other element is a partly transparent PNG (though not in the part where the movement happens); maybe this has something to do with it? I use a lot of transparency and CSS3 effects, and somehow they slow everything down, even things that look completely unrelated.
Overall I get the impression that the browser is rerendering the whole page when something is moving, instead of only the affected pixels.
Any educated guess as to why all this happens?
EDIT: Any picture or text that sits below my moving image causes it to slow down a lot when it passes over them. The moving image itself has a transparent background, but changing it to opaque had almost no effect.
Moving a transparent element (particularly an element with a shadow) over a fixed background forces it to be recomposited every frame. Opaque shadowless elements on the other hand can be moved with a simple blit.
If you want to see a huge slowdown in most browsers, make a page with a bunch of elements with border-radius and box-shadow, then set the background of the page to background-attachment:fixed.
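If you want to try that yourself, here is a small sketch that generates such a page; every value (the count, sizes, and the bg.jpg path) is arbitrary and only meant to force per-frame recompositing:

```typescript
// Fixed-attachment background: the page content has to be recomposited
// against it on every scroll step instead of simply scrolling the pixels.
document.body.style.background = "url(bg.jpg) fixed";

// Lots of rounded, shadowed, semi-transparent boxes to composite each frame.
for (let i = 0; i < 200; i++) {
  const box = document.createElement("div");
  box.style.cssText = `
    width: 300px; height: 100px; margin: 20px auto;
    border-radius: 12px;
    box-shadow: 0 4px 30px rgba(0, 0, 0, 0.5);
    background: rgba(255, 255, 255, 0.8);
  `;
  document.body.appendChild(box);
}
// Now scroll the page and watch the frame rate drop in most browsers.
```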

Resources