I have a website with 3 full-screen background images that use a custom parallax scrolling script built on requestAnimationFrame and transform: translate3d() for its animation. I did this because, after a fair amount of research on visual performance, this seemed like the best alternative to using canvas.
My problem is that the page runs very smoothly in Firefox 29.1 (because it is almost certainly using the computer's GPU to render the page and composite layers), but for some reason Chrome has some major bottlenecks.
I am getting tremendous performance drops (well below 30 fps) when I scroll down the page, and an interesting thing I noticed is that they happen precisely as one of the background images being animated by the script (set with background-size: cover) enters the viewport.
A repaint operation is happening because the background image is being resized to fit the viewport width/height, and that is causing a tremendous performance hit. Given that the GPU isn't being used properly by my Chrome, but also that I would like the page to scroll silky smooth even without hardware acceleration: is there a method of preloading images/frames and having them already resized before they are scrolled onto the screen? Something like a frame-buffering technique that ensures all the calculations and resizing are finished well before the user scrolls to that image?
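One approach, sketched below, is to decode and scale each background image into an offscreen canvas at viewport size while the page loads, so the expensive resampling happens once up front instead of during scrolling. The file names and the .parallax-layer selector are placeholders for illustration, not from the actual page:

```javascript
// A sketch, not the original script: pre-scale each background image to the
// viewport size once at load time, so the browser never has to resample it
// during scroll-driven repaints.
function preScale(url, onDone) {
  var img = new Image();
  img.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;
    // The expensive resampling happens exactly once, here, instead of on
    // every repaint while scrolling.
    canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
    onDone(canvas);
  };
  img.src = url;
}

var layers = document.querySelectorAll('.parallax-layer'); // placeholder selector
['bg1.jpg', 'bg2.jpg', 'bg3.jpg'].forEach(function (url, i) {
  preScale(url, function (canvas) {
    // The layer now gets a bitmap that is already the right size, so
    // background-size: cover has no work left to do.
    layers[i].style.backgroundImage = 'url(' + canvas.toDataURL() + ')';
    layers[i].style.backgroundSize = '100% 100%';
  });
});
```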
I'm trying to implement a cross-platform UI library that uses as few system resources as possible. I'm considering either writing my own software renderer or using OpenGL.
For stationary controls everything's fine; I can repaint only when needed. However, when it comes to implementing animations, especially animated blinking carets like the 'phase' caret in Sublime Text, I don't see an easy way to balance resource usage and performance.
For a blinking caret, the caret needs to be redrawn very frequently (15-20 times per second at least, I guess). On one hand, the software renderer supports partial redraw but is far too slow to be practical (3-4 fps for large redraw regions, say 1000x800, which makes animations impossible). On the other hand, OpenGL doesn't support partial redraw very well as far as I know, which means the whole screen would need to be rendered at 15-20 fps constantly.
So my question is:
How are carets usually implemented in various UI systems?
Is there any way to have OpenGL render to only a portion of the screen?
I know that glViewport enables rendering to part of the screen, but due to double buffering and other issues the rest of the screen is not kept as it was, so I would still need to render the whole screen again.
First you need to ask yourself:
Do I really need to partially redraw the screen?
OpenGL, or rather the GPU, can draw thousands of triangles with ease. So before you start fiddling with partial redraws of the screen, you should benchmark and see whether it's worth looking into at all.
This doesn't, however, mean that you have to redraw the screen endlessly. You can still redraw it only when changes happen.
Thus if you have a cursor blinking every 500 ms, then you redraw once every 500 ms. If you have an animation running, then you redraw continuously while that animation is playing (or whenever the animation makes a change that requires redrawing).
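As a rough illustration of that scheduling idea (in JavaScript for brevity; the same pattern applies to a native OpenGL loop), you can keep a dirty flag and only kick off a full repaint when something actually changed. requestRedraw() is a name made up for this sketch, not an API from the question:

```javascript
// Redraw-on-demand: nothing is painted until some piece of state changes.
var dirty = false;

function requestRedraw() {
  if (!dirty) {
    dirty = true;
    requestAnimationFrame(function () {
      dirty = false;
      render(); // repaint the whole scene once
    });
  }
}

// The blinking caret only forces a repaint twice per second, not 60 times.
var caretVisible = true;
setInterval(function () {
  caretVisible = !caretVisible;
  requestRedraw();
}, 500);

function render() {
  // ...draw the UI, drawing the caret only when caretVisible is true...
}
```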
This is what Chrome, Firefox, etc. do. You can see it if you open the Developer Tools (F12) and go to the Timeline tab.
Take a look at the following screenshot. The first row of the timeline shows how often Chrome redraws the window.
The first section shows a lot of continuous redrawing, which was because I was scrolling around on the page.
The last section shows a single redraw roughly every 500 ms, which was the cursor blinking in a textbox.
Open the image in a new tab to see better what's going on.
Note that it doesn't tell you whether Chrome is redrawing the whole window or only parts of it; it only shows the frequency of the redraws. (If you want to see the redrawn regions, both Firefox and Chrome have a "Show Paint Rectangles" option.)
To get around the problem with double buffering and partial redraws, you could instead draw to a framebuffer object. Then you can use glScissor() as much as you want. If you have various things that are static and only a few dynamic things, you could even have multiple framebuffer objects, draw the static content once, and continuously update only the framebuffer containing the dynamic content.
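Here is a sketch of that idea, shown with WebGL since its calls mirror desktop GL almost one-to-one (glScissor becomes gl.scissor, and so on); `gl` is assumed to be an existing context, and the sizes and caret rectangle are illustrative:

```javascript
// Render into a texture-backed framebuffer that persists across frames.
var fbo = gl.createFramebuffer();
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1000, 800, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);

// Illustrative caret rectangle.
var caretX = 100, caretY = 200, caretW = 2, caretH = 16;

// Restrict drawing to the caret's rectangle; the rest of the texture keeps
// its contents from previous frames, so only this region is redrawn.
gl.enable(gl.SCISSOR_TEST);
gl.scissor(caretX, caretY, caretW, caretH);
gl.clearColor(0, 0, 0, 1);
gl.clear(gl.COLOR_BUFFER_BIT);
// ...draw the caret here...
gl.disable(gl.SCISSOR_TEST);

// Each frame, present the persistent texture by drawing a full-screen quad
// that samples `tex` into the default framebuffer.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
```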
However (and I can't emphasize this enough), benchmark and check whether this is even needed. Having two framebuffer objects could be more expensive than just always redrawing everything. The same goes for, say, having a buffer for each rectangle, in contrast to packing all the rectangles into a single buffer.
Lastly, to give an example, let's take NanoGUI (a minimalistic GUI library for OpenGL): NanoGUI continuously redraws the screen.
The problem with not simply redrawing the screen continuously is that you now need a system for issuing redraws. Calling setText() on a label now has to call back and tell the window to redraw. And what if the parent panel the label was added to isn't visible? Then setText() just issued a redundant redraw of the screen.
The point I'm trying to make is that a system for issuing redraws can be more error-prone. So unless continuous redrawing actually turns out to be a problem, it is definitely the better starting point.
I have noticed that when I view an image in a browser, using either the zoom provided in the settings or a webpage's style attributes, the pixelation is either negligible or unnoticeable. But when you use programs such as Paint, Photoshop, or Windows Photo Viewer, you start to notice pixelation. Does anyone know how the browser zooms its image contents?
Here is a sample image: the one on the right is from Paint, while the one on the left is from viewing in Chrome. The zoom is set at 500%.
For fonts, I believe it has to do with font sizing. Say you are in a word processor and type something up; when you increase the font size, the text gets bigger. A similar thing happens in a web browser when you zoom in.
An image, on the other hand, has a fixed resolution, so as you zoom in the pixels become larger and more noticeable; this blockiness is called aliasing. Many times a program, browser, etc. will try to smooth the edges in the image to make the pixels look less blocky to the eye; this is called anti-aliasing.
As far as the actual algorithms behind Paint or a web browser go, I am unsure; it may take some more research to find out.
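One way to see the smoothing at work yourself: browsers interpolate (typically bilinear or bicubic) when scaling an image, and a canvas lets you toggle that behavior. A small sketch follows; 'photo.png' and the canvas id are placeholders:

```javascript
// Draw the same image scaled 5x twice: once with smoothing on (the
// interpolated look of browser zoom) and once with smoothing off (the
// blocky, Paint-style result).
var img = new Image();
img.onload = function () {
  var ctx = document.getElementById('demo').getContext('2d'); // existing <canvas>

  ctx.imageSmoothingEnabled = true;   // interpolated, like browser zoom
  ctx.drawImage(img, 0, 0, img.width * 5, img.height * 5);

  ctx.imageSmoothingEnabled = false;  // nearest-neighbour, visible pixels
  ctx.drawImage(img, img.width * 5, 0, img.width * 5, img.height * 5);
};
img.src = 'photo.png';
```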
I just finished designing this splash page for my startup: http://beta.mergenote.com/
Load time and initial render are snappy across all the browsers I've tested. (I haven't looked at any IE versions, but I showed it to friends, who all felt it was similar enough to Chrome that they didn't notice anything; they aren't web devs, though, so if you spot something let me know.)
The page uses jQuery for a simple slideshow, and for parallax scrolling it uses skrollr: https://github.com/Prinzhorn/skrollr
It uses an SVG sprite whose width and height have been set to 3x the largest rendered size of any of its icons (because of an Opera and Firefox rendering issue where SVGs don't get redrawn at their final size).
On Chrome/Safari the site is smooth and fast, with no issues. On Firefox and Opera (especially Firefox) the page takes a very long time to repaint on either resize or scroll events, and the animations are all extremely choppy.
I suspect it may be the SVG sprite, but I'm really not sure. The problems I'm having may have intensified slightly when I sized it larger, but they were there beforehand.
Any ideas?
For me it's pretty obvious that the SVGs are the issue. I disabled them one by one and the page is now fast (it was lagging a lot before).
Even if a single SVG comes into the viewport, the page starts lagging immediately.
"It uses an SVG sprite, whose width and height have been set to 3x the largest rendered size of any of its icons"
Could you elaborate? This SVG is 2250 by 10350 pixels; it will take a huge amount of RAM to rasterize. It could just as well be 225x1035.
I'm using a canvas that updates every millisecond. With an empty background my application performs well, and after adding a background based on a tiled image it seems to work about the same. I'm thinking of adding some new features to my application and was wondering: is it better to use a separate static canvas as a background, or a background based on a tiled image?
I made a test of drawImage performance back in October that tested images versus a canvas: http://jsperf.com/canvas-vs-image
It seems that drawing from an image is faster in Firefox and Opera, slower in IE9, and about the same in Chrome.
I would think that a tiled image would be better, especially if it were large, but I wouldn't worry much about which one you pick until it is really time to optimize down the road.
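If you do go the static-canvas route, a common layering pattern is two stacked canvases: paint the tiles once on the back one and only ever clear and redraw the front one. A rough sketch, assuming two absolutely positioned `<canvas>` elements with made-up ids and a placeholder tile image:

```javascript
// Two stacked canvases: "background" is painted once, "foreground" is the
// only layer touched by the animation loop.
var bg = document.getElementById('background').getContext('2d');
var fg = document.getElementById('foreground').getContext('2d');

// Paint the tiled background a single time on the static layer.
var tile = new Image();
tile.onload = function () {
  bg.fillStyle = bg.createPattern(tile, 'repeat');
  bg.fillRect(0, 0, bg.canvas.width, bg.canvas.height);
};
tile.src = 'tile.png'; // placeholder

// The loop clears and redraws only the foreground; the background canvas
// is never redrawn after its initial paint.
function frame() {
  fg.clearRect(0, 0, fg.canvas.width, fg.canvas.height);
  // ...draw moving sprites on fg here...
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```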
By browser rendering performance I mean things like scrolling, moving elements in an animated fashion, and z-order changes.
In particular, I get tremendous slowdowns in Firefox 3.6 and IE8 when I move an image with top/left styles over my page. I have no problems with Chrome 8.
With Firebug I tried hiding page elements one by one, and the largest improvement by far came from the page-wide background JPEG that I use. I wonder how it affects performance, given that the image moves above another element that obscures the background. That other element is a partly transparent PNG (though not in the part where the movement happens); maybe this has something to do with it? I use a lot of transparency and CSS3 effects, and somehow they slow everything down, even things that look completely unrelated.
Overall I get the impression that the browser re-renders the whole page when something moves, instead of only the affected pixels.
Any educated guess as to why all this happens?
EDIT: Any picture or text that sits below my moving image causes it to slow down a lot when passing over it. The moving image itself has a transparent background, but changing it to opaque had almost no effect.
Moving a transparent element (particularly an element with a shadow) over a fixed background forces it to be recomposited every frame. Opaque shadowless elements on the other hand can be moved with a simple blit.
If you want to see a huge slowdown in most browsers, make a page with a bunch of elements with border-radius and box-shadow, then set the background of the page to background-attachment:fixed.
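Related to this: in browsers that support it, moving the element with a transform instead of top/left lets the compositor shift a cached layer rather than repainting the pixels underneath. (The older browsers in the question, Firefox 3.6 and IE8, won't benefit, but it is the standard fix today.) A minimal sketch with a made-up selector:

```javascript
// Animating top/left forces layout and repaint each frame; a translate3d()
// transform can be moved by the compositor without touching the pixels
// of the page underneath. '#mover' is a placeholder selector.
var el = document.querySelector('#mover');
var x = 0;

function step() {
  x += 2;
  el.style.transform = 'translate3d(' + x + 'px, 0, 0)';
  if (x < 500) requestAnimationFrame(step);
}
requestAnimationFrame(step);
```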