I'm using a tile engine to generate huge maps based on an array. The map is divided into zones, and only tiles in zones around the "view" are drawn, which gives great performance on giant maps and smooth movement.
However, I've hit a limit that only affects Firefox, and I cannot figure it out.
At first I thought it was just because I'm using giant arrays. Firefox can handle a 100x100 map of 64px square tiles without error, but anything above that gives a "stop script" error and locks up FF.
At the same time, IE, Chrome, Safari, and even my 2.5-year-old HTC Android phone can generate maps of 500x300+ tiles (the phone only runs at 4fps, but it at least makes it through the initialization and draws the map, unlike Firefox on my desktop PC). That's 150,000 tiles, compared to Firefox choking at 10,000. How can my 2.5-year-old phone generate maps 15x+ larger than Firefox running on my desktop?
In Resource Monitor my CPU and RAM max out immediately in FF, which then gives the "close document to prevent data loss" error. Since my phone can handle much larger maps than my desktop, I'm inclined to believe there's a bug in how FF handles these loops compared to IE, Chrome, Safari, Opera, and my phone, which all handle much larger loops.
Here's the version that works in all browsers, including FF, with a 100x100 tile map: http://simplehotkey.com/TileEngine/tiles/main.html
Here's a version with a 500x100 tile map (50,000 tiles) that chokes FF but runs on all the other browsers and at least loads on my phone: http://simplehotkey.com/Tiles/main.html
Does anyone have an explanation for why an older phone can generate a map 15x larger than FF can handle on my desktop?
Are all 150,000 tiles being shown at once? Or are they just loaded but not used until needed?
A decoded 64px square image uses 16KB of RAM (64 × 64 pixels × 4 bytes). So 10,000 of them would use about 160MB of RAM, and 150,000 tiles would use over 2GB. What I see here is Firefox using so much RAM that it starts swapping.
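As a rough back-of-envelope check (a minimal sketch; it assumes every tile is a separately decoded RGBA image at 4 bytes per pixel, which may not match the engine exactly):

// Rough memory estimate for decoded tile images, assuming 4 bytes per pixel (RGBA).
var bytesPerTile = 64 * 64 * 4; // 16,384 bytes, i.e. 16KB per 64px square tile

function mapMemoryMB(tilesWide, tilesHigh) {
    return (tilesWide * tilesHigh * bytesPerTile) / (1024 * 1024);
}

console.log(mapMemoryMB(100, 100)); // ~156 MB for 10,000 tiles
console.log(mapMemoryMB(500, 300)); // ~2344 MB, i.e. 2+ GB for 150,000 tiles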
Today's displays come in a huge range of sizes and resolutions. For example, my 34.5cm × 19.5cm display (a diagonal of 39.6cm or 15.6") has 1366 × 768 pixels, whereas the MacBook Pro (3rd generation) with a 15" diagonal has 2880 × 1800 pixels.
Multiple people have complained that everything is too small on such high-resolution displays (see example). That is simple to explain when developers use pixels to define their GUI. For "traditional" displays this is not a big problem, as the pixels are about the same size on most monitors. But on the new monitors with much higher pixel density, the pixels are simply smaller.
So how can / should user interface developers deal with that problem? Is it possible to get the physical size of the screen? Is it possible to set physical sizes instead of pixel-based ones? Is that still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
(While CSS seems to support cm, when I try it here, the rendered size is not the size I set.)
how can / should user interface developers deal with that problem?
Use a toolkit or framework that supports resolution independence. WPF is built from the ground up to be resolution-independent, but even older frameworks like Windows Forms can learn new tricks. OSX/iOS and Windows (or the browser, if we're talking about the web) may try to take care of the problem with automatic scaling, but if bitmap graphics are involved, developers might need to provide different bitmaps, as in Android (which faces the widest range of resolutions and densities of any OS).
Is it possible to get the physical size of the screen?
No, and developers shouldn't care about it. Developers should only care about the class of the device (say, a different UI for tablets and smartphones), and perhaps the DPI to decide which bitmap resource to use. Vector resources and fonts should be scaled by the framework.
Is that still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
Depends on when you last read about it. Windows support is still spotty, even for its own built-in apps, and while anyone developing in WPF or UWP has it easy, don't expect major third-party apps to join in soon. OSX display scaling seems to work a bit better, while modern mobile OSes either run on a limited range of resolutions (iOS and Windows Phone) or handle every resolution imaginable quite nicely (Android).
There are a few ways to deal with different screen sizes. For example, when I make mobile apps in Java, I either use DIP (density-independent pixels, which stay at a fixed physical size) or make objects occupy a percentage of the screen with simple math. For web development, you can use vw and vh (viewport width and viewport height): adding these to the end of a value instead of px makes the object take up a percentage of the viewport, so 100vh takes 100% of the viewport height. Then what I think is the best way to do it, though time consuming, is to use a library like Bootstrap that automatically resizes elements, even when the window is resized. W3Schools has a good tutorial on Bootstrap, and more detailed explanations of any of these options can be found with an easy Google search.
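As a rough illustration of the percentage-of-the-screen approach (a minimal sketch in plain JavaScript; the 'box' element id and the 50%/30% figures are made up for the example, and vw/vh achieve the same thing declaratively in CSS):

// Size an element to a fraction of the viewport and keep it in sync on resize.
var box = document.getElementById('box'); // hypothetical element

function resizeBox() {
    box.style.width = (window.innerWidth * 0.5) + 'px';   // 50% of the viewport width
    box.style.height = (window.innerHeight * 0.3) + 'px'; // 30% of the viewport height
}

window.addEventListener('resize', resizeBox);
resizeBox();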
Designing a GUI in today's era of display diversity is a real challenge. I would suggest several hints, mainly about GUI application design:
Never set or expect a constant pixel size for text - the user can change it from the OS system settings. Use real-world measures for the text and check its pixel size when drawing. Provide some way to fit text of arbitrary size within the boundaries of the window.
Never set or expect a constant pixel size for GUI widgets. Try to position them in the window adaptively, according to the size of the window. Most GUI widget toolkits today provide such facilities.
Never set or expect a constant pixel size for dialog windows. Let the OS choose the size for you and then use what you get (X). Or, if you need to set some size and position yourself (Windows), define it as a percentage of the screen size.
If possible, use scalable image formats for icons; SVG is great for icons. Using sets of bitmap icons in different sizes is acceptable, but it is highly suboptimal in memory use and still will not provide perfect scaling in most cases.
I am using iText PDF 5.4 along with the Java2D interface (java.awt.Graphics canvas), and I have a severe problem with gradients.
I am painting many rectangular shapes whose paint is a LinearGradientPaint. This results in large files (e.g., 10 MB), and trying to open the result in e.g. Preview.app brings the computer to a total halt. The problem seems to be memory usage, because the first few dozen boxes paint rather quickly and then performance degrades roughly linearly with more boxes, which means a typical page takes >10 minutes to open.
Adobe Acrobat is also slow, but at least it takes some 4 or 5 seconds instead of several minutes.
Is this a bug in iText? Is there a setting or tweak in iText that controls the representation of gradients? I guess it decomposes them into hundreds of separate paint commands instead of using a native gradient construct (if such a thing exists; I know it exists in SVG, but for PDF I have no clue).
The constraint is that I have to stay within awt.Graphics; I cannot rewrite my rendering code to avoid Java2D.
An alternative idea would be to use Apache Batik and output to SVG instead. There is an example that shows how to enable the correct transcoding of LinearGradientPaint to the SVG equivalent.
EDIT: There seems to be a new Java2D-to-SVG library JFreeSVG. Recent changes indicate that gradients are implemented.
I'm having some real difficulty with offscreen rendering with HTML5!
The code I have written runs perfectly fine in Firefox; using drawImage with canvas elements draws smooth images and does so very quickly.
However, using Chrome 21, the drawImage performance is just terrible. I'm unable to see where I'm going wrong.
Here is a jsfiddle with some sample code:
http://jsfiddle.net/DXgum/3/
In Firefox I can get about 60fps, but Chrome only delivers a framerate of 10fps.
The performance does not differ whether I'm using in-DOM canvas elements or ones that are not in the DOM.
Rendering without buffering on Chrome is faster than Firefox, so I'm actually not sure why Chrome is having such a problem with drawImage.
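For reference, the buffering pattern in question looks roughly like this (a minimal sketch rather than the exact fiddle code; the element id and the use of requestAnimationFrame are assumptions):

// Render the scene into an offscreen canvas once, then blit it to the visible
// canvas each frame with drawImage (the call that is slow here in Chrome 21).
var visible = document.getElementById('view'); // assumed on-page <canvas>
var ctx = visible.getContext('2d');

var buffer = document.createElement('canvas'); // offscreen, not in the DOM
buffer.width = visible.width;
buffer.height = visible.height;
var bctx = buffer.getContext('2d');
// ... draw tiles/sprites into bctx here ...

function frame() {
    ctx.clearRect(0, 0, visible.width, visible.height);
    ctx.drawImage(buffer, 0, 0); // blit the pre-rendered buffer
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);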
Any help or light on the matter would be greatly appreciated!
System Information:
Windows 7 32bit
Intel QX9650
Nvidia GTS 250
4GB DDR2 RAM
Chrome Version: 21.0.1180.60 m
Firefox Version: 14.0.1
I've been struggling with this for years. In one version of Chrome it's fine, in the next release it's slow again... My ultimate solution is a little hacky, but it works.
Using your fiddle, I was able to determine for sure that performance massively drops off if the size of the canvas is <= 256.
My canvas was 200 x 200.
I simply made my canvas 258 x 258, changed all the math to accommodate the new dimensions, and used the CSS zoom property on the canvas's wrapper div to shrink the appearance of the canvas back down to approximately the original size.
Doing this, I went from 30-40 fps to a nice steady 60.
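Roughly, the workaround looks like this (a minimal sketch; the element ids follow the 200 → 258 example above, and note that zoom is a non-standard CSS property that Chrome/WebKit support):

// Render at 258x258, above the slow <= 256 threshold, then visually shrink the
// wrapper back to roughly the original 200x200 with CSS zoom.
var wrapper = document.getElementById('canvas-wrapper'); // assumed wrapper div
var canvas = document.getElementById('view');            // assumed canvas element

canvas.width = 258;
canvas.height = 258;
wrapper.style.zoom = String(200 / 258); // about 0.775, so it appears ~200px again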
I'm currently working on a little site for my family. One of the things I wanted to do was to make a basic 'making of' stop-motion video. I could assemble it and upload it to Vimeo or something but I thought it was a perfect opportunity to use nothing but HTML, CSS, and Javascript.
I've got everything styled and my JS is working, etc., except that it performs atrociously in Chrome and Safari. Interestingly, it works great in Firefox, and I'm not supporting IE yet. I'm hoping for 8 to 12 frames per second with music playing, which I haven't bothered trying yet because of this; anything less than that is bad performance. Currently I'm getting roughly 3 fps in Firefox (acceptable, but not what I was looking for), and in Chrome and Safari I'm getting roughly 0.6795 fps.
When running the Chrome Profiler, I get the following (relevant) output.
99.96% 99.96% (program)
0.03% 0.03% (garbage collector)
0.01% 0.01% script.js:5:nextSlide
I've never used the Profiler before, but I believe this is showing me that my JS is not what's hitting performance so hard.
I've published a test page that documents the performance differences that you can visit with Chrome and Firefox.
I've also discovered that this seems to be related to the images being cycled. Cycling different, simpler images seems to work just fine in both Chrome and Firefox, despite the fact that Chrome is still a little more power hungry than Firefox.
Further proof of this conclusion, though the result is entirely unacceptable, is demonstrated here, after running the images through convert -compress JPEG -quality 1. They cycle much more efficiently, but of course the quality is terrible.
I have run these test pages in Chrome (16.0.912.63), Safari (5.1.2 (6534.52.7)), WebKit nightly (Version 5.1.2 (6534.52.7, r102985)), and Mobile Safari (latest as of 2011/12/28) and only Mobile Safari performs as well as FireFox. The desktop browsers were tested on a MacBook Pro.
2.7 GHz Intel Core i7
8 GB 1333 MHz DDR3
Interestingly, Mobile Safari on an iPad 2 performs as well as Firefox when rendering the test page. Though Mobile Safari is based on WebKit, in this instance it performs entirely differently.
Decreasing the setTimeout delay from 244 to 144 also doesn't seem to do anything. I arrived at 244 entirely arbitrarily, as it became clear early on that the display timing didn't correspond at all directly to the delay I passed in. This leads me to believe that I'm already rendering the slide show as quickly as each browser can manage.
So my question is, how can I make this performant in WebKit?
You can debug page performance in Chrome using the Timeline tab in the Chrome developer tools. The problem with your script is that your repaint cycle is simply too expensive: it currently takes 1.35s to repaint every frame.
The bad performance has nothing to do with the quality of the JPEG images (although the image quality also affects the page render time). The problem is that you are updating the z-index, which causes Chrome to repaint all the images instead of just the next frame (you have an O(n) image slider!).
Browsers try to do the minimum possible work in response to a change; e.g., changing an element's color will cause a repaint of only that element.
Changing an element's z-index property is basically the same as removing a node from the tree and adding another node back. This will cause layout and repaint of the element, its children, and possibly its siblings. My guess is that in Chrome the siblings are being repainted too, which explains the horrible performance.
A way to fix this problem is to update the opacity property instead of the z-index. Unlike z-index, opacity does not modify the DOM tree; it only tells the renderer to ignore that element, which stays 'physically' present in the DOM. That means only one element gets repainted, not all the siblings and children.
This simple change in your CSS should do the trick:
.making-of .slide#slide-top {
opacity: 1;
/* z-index: 5000; */
}
.making-of .slide {
position: fixed;
/* z-index: 4000; */
opacity: 0;
....
}
And this is the result: the repaint went from 1.35s to 1ms.
EDIT:
Here is a jsfiddle using the opacity solution; I also added CSS3 transitions (just for fun!):
http://jsfiddle.net/KN7Y5/3/
More info on how the browser rendering works:
http://www.html5rocks.com/en/tutorials/internals/howbrowserswork/
I took a look at the code on your site and found two things that are limiting the speed.
1) In the JavaScript, you have a timeout of approximately 1/4 second (244 milliseconds). This means that your best-case frame rate is about 4 FPS (frames per second). This can be fixed by simply reducing the delay to match the frame rate that you actually want. I see that your most recent edit addresses this point, but I didn't want to ignore it since it is ultimately critical to achieving the higher frame rates that you want.
2) You are using z-index to control which image is visible. In the general case, z-index handling allows for objects that have different sizes and positions to be ordered so that you can control which object is visible at locations where two or more objects overlap. In your case, all of the objects overlap perfectly, and the z-index approach works fine except for one major problem: browsers don't optimize z-index processing for this case and therefore they are actually processing every image on every frame. I verified this by creating a modified version of your demo which used twice as many images -- the FPS was reduced by nearly a factor of 2 (in other words, it took 4 times as long to display the entire set).
I hacked together an alternative approach that achieved a much higher FPS (60 or more) under both Chrome and Firefox. The gist of it was that I used the display property instead of manipulating z-index:
.making-of .slide#other {
display: none;
}
.making-of .slide#slide-top {
display: inline;
}
and the JavaScript:
function nextSlide() {
...
topSlide.id='other';
nextTopSlide.id='slide-top';
...
setTimeout(nextSlide, 1);
...
}
I made some changes in the HTML too, notably including id="other" in the tag for each image.
So why is WebKit so slow? As has been pointed out in other comments, the extra-poor performance that you are seeing on WebKit seems to be Mac-specific. My best guess is that the Mac version of WebKit is not actually using the "turbo" version of libjpeg (despite the fact that it is listed in the credits). In your test, JPEG decompression could very well be the gating factor if it is actually decompressing every image on every frame (as is likely the case). Benchmarking of libjpeg-turbo has shown about a 5x improvement in decompression speed. This roughly matches the difference that you are seeing between Firefox and Chrome (3 FPS vs. 0.6795 FPS).
For more notes on libjpeg-turbo and how this hypothesis explains some of your other results, see my other answer.
The key, in my experience, is to keep as few images as possible in the DOM and in JavaScript arrays, so don't load all of them at once; keep it to a minimum. Also make sure you destroy DOM elements that have already been used, as well as the JavaScript objects holding the images, as a kind of manual garbage collection. This will improve performance.
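A minimal sketch of that idea (it assumes the slides are <img> elements kept in a plain array, and that a slide really can be dropped once it has been shown):

// Remove an already-shown slide from the DOM and drop references to it so the
// browser can reclaim the decoded image memory.
function releaseSlide(img, slides) {
    if (img.parentNode) {
        img.parentNode.removeChild(img); // take it out of the DOM
    }
    var i = slides.indexOf(img);
    if (i !== -1) {
        slides.splice(i, 1); // drop the JavaScript reference too
    }
}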
Random guess: GPU acceleration. It is device-dependent, and there is a big race among browsers now.
You could try with a more recent Chrome like the canaries, http://tools.google.com/dlpage/chromesxs (it's 18.x now), just to get more data.
about:version in Chrome should give you version of WebKit.
Also, have you tried existing slideshow solutions like http://jquery.malsup.com/cycle/ ? I wonder if playing with the z-index is the bottleneck here... maybe having only 1-2 images displayed (all the rest using display:none) would help. This is again a guess.
The best way to achieve better performance when it comes to graphics is to compress them, while keeping the quality you want.
If you are using Linux, I have used the JPEG compression tool http://linuxpoison.blogspot.com/2011/01/utility-to-optimize-compress-jpeg-files.html before. It doesn't hurt quality as much as the ImageMagick example you gave.
Also http://trimage.org/ has JPG support, and would be my first recommendation!
If you are on Windows, maybe something like this:
http://www.trans4mind.com/personal_development/convertImage/index.html
I have not tested the Windows method, and I'm not even sure it supports batch processing.
Hope that helps!
P.S. For PNGs I sometimes use http://pmt.sourceforge.net/pngcrush/ with or without http://trimage.org/
There has been some relatively recent work on the JPEG image compression library that is used in many applications including browsers such as Firefox and Chrome. This new library achieves a significant speed increase by using special media-processing instructions available in modern CPUs. It may simply be that your version of Chrome doesn't use the new library.
Your question requests a way to fix your images, but that shouldn't be necessary -- after all, some other browsers work fine. Therefore, the fix should be in the browser (and browsers are constantly being improved).
You said that you improved Chrome's speed by dramatically reducing the quality or complexity of your images. This could be explained by the fact that for areas of very low detail, the JPEG decompression algorithm can bypass a lot of the work that it would normally need to perform. If an 8x8 tile of pixels can be reduced to a single color, then decompression of that tile becomes a very simple matter.
This Wikipedia article provides some additional info and sources. It says that Chrome version 11 has the new library. You can enter "chrome://credits" in your location bar and see if it references "libjpeg-turbo". "libjpeg" is the original library and "libjpeg-turbo" is the optimized version.
One other possibility is that libjpeg-turbo isn't supported in Webkit on the Mac (although I don't specifically know that). There is a hint as to why that might be the case posted here.
P.S. You may get better decompression speed by compressing with a different algorithm, such as PNG (although your compression ratios will likely suffer). On the other hand, maybe you should use HTML5 video, probably with the WebM format.
I tested it in Opera and it ran slow as hell. I noticed that Opera had queued 150+ images to download; it could be worth a try to download ~20 at a time.
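A rough sketch of that batching idea (assuming the image URLs are known up front; the batch size of 20 and the callback name are just for illustration):

// Load images in small batches so the browser isn't queuing 150+ downloads at once.
function loadInBatches(urls, batchSize, onAllLoaded) {
    var index = 0;
    var images = [];

    function loadNextBatch() {
        if (index >= urls.length) {
            onAllLoaded(images);
            return;
        }
        var batch = urls.slice(index, index + batchSize);
        index += batchSize;

        var remaining = batch.length;
        batch.forEach(function (url) {
            var img = new Image();
            img.onload = img.onerror = function () {
                if (--remaining === 0) {
                    loadNextBatch(); // start the next batch once this one finishes
                }
            };
            img.src = url;
            images.push(img);
        });
    }

    loadNextBatch();
}

// Example: loadInBatches(slideUrls, 20, startSlideshow);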
An alternative approach would be to render this content as a video - it is ideal for this kind of thing and can easily contain audio and subtitles. You can access each pixel from each frame using JavaScript if you want to get funky.
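If you do want to get at individual pixels, the usual trick is to draw the current video frame onto a canvas and read it back (a minimal sketch; the video element id is an assumption):

// Copy the current frame of a <video> element onto a canvas and read its pixels.
var video = document.getElementById('makingOfVideo'); // hypothetical <video> element
var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');

video.addEventListener('play', function () {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0); // draws the frame currently being shown
    var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data; // RGBA bytes
    console.log('first pixel RGBA:', pixels[0], pixels[1], pixels[2], pixels[3]);
});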
Suppose we're making a strategy game (think Civilization) in a web browser. The game has a visible map portion - say 30x30 squares. Each square is 30x30px and has several overlaid images - the terrain, resources, units, roads, etc. The classical way of drawing this would be with a huge <table> where each cell would contain absolutely positioned images. It would probably be rendered in Javascript to reduce traffic. But it's still several thousand images and a huge table.
Can the browser take it? Will performance stay within acceptable limits? Alternatively, I could keep a pre-rendered map image with as many overlays baked in as possible, but that would be more work, I think.
You should really look into using the canvas element, which does not require the browser to store and compute the whole layout and all the other DOM overhead.
That being said, a modern browser on a high-performance workstation can display hundreds of images at the same time, as demonstrated by the FishIETank demo. However, many devices - ranging from smartphones to old PCs - cannot. Oh, and using a table is probably slower than a div with position:relative or absolute containing absolutely positioned images.
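A minimal sketch of the canvas approach for the 30x30 viewport described above (the tile size, map structure, and image objects are assumptions for illustration):

// Draw a 30x30 viewport of 30px tiles onto a canvas, layering terrain and units.
var canvas = document.getElementById('map'); // assumed <canvas width="900" height="900">
var ctx = canvas.getContext('2d');
var TILE = 30;

// map[y][x] = { terrain: Image, unit: Image or null }, built elsewhere
function drawViewport(map, originX, originY) {
    for (var y = 0; y < 30; y++) {
        for (var x = 0; x < 30; x++) {
            var cell = map[originY + y][originX + x];
            ctx.drawImage(cell.terrain, x * TILE, y * TILE, TILE, TILE);
            if (cell.unit) {
                ctx.drawImage(cell.unit, x * TILE, y * TILE, TILE, TILE); // overlay layer
            }
        }
    }
}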
Look at online games like Grepolis; they already do this sort of grid-based game, and modern browsers can handle it easily.