Windows Phone 7 memory management - performance

I'd like to know if there are any specific strategies for handling memory, especially with respect to image caching, on the Windows Phone. I have a very graphics-intensive Silverlight app which needs to keep the graphics it retrieves from the internet, and the user needs to be able to roam about it freely - but the memory requirement becomes quite huge after using the app for a couple of minutes.
I have tried setting the images' UriSource to null, but I need to maintain the image backgrounds when I come back to the page. I'm at a loss because there isn't much information on the internet. The built-in profiler showed me "Texture Memory Dominant" and asked me to Analyze Heap Memory to resolve the issue, but I'm still clueless about these.
Any pointers to move forward?

My answer will be general, like your question. I presume you know for sure that the problem is the images. (Because even a simple ListBox with a few hundred text items can cost you many MB.)
If you search the web you'll find plenty of links such as this one. But a general analysis is easy to do.
Take an image of the WP7 screen size, i.e. 480x800. A 32-bit bitmap (which I suppose is what WP7 uses once the image is opened) takes roughly 1.5 MB (a simple multiplication: 480 x 800 x 4 bytes).
The same image as a JPEG file can be 10x smaller (at high-quality compression), or smaller still.
Now, what happens behind the scenes when you use the construction
<Image Source="http://..."/>?
(In the absence of any information from you, this is what I suppose you use.)
WP7 downloads the image and adds it to the image cache. The cache apparently tracks usage through the Uri pointing to the image.
Next the image gets opened, i.e. converted to a bitmap at the image's native size. The image gets downsampled in this process if it would exceed the maximum WP7 texture size.
You can customize the bitmap size as described here. If you care about quality, you should use a downscale factor of 2, 4, or 8. For JPEG these factors are by far the fastest option. (Well, I have no idea whether you know the image resolution before the image gets loaded into the Image control. It is not too difficult to get this info from a JPEG file, but right now I have no idea how it can easily be done on WP7.)
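For illustration, a minimal sketch of decoding to a reduced size up front. I believe WP7 exposes this through Microsoft.Phone.PictureDecoder.DecodeJpeg, but treat the exact class and signature as an assumption; the point is simply to decode to bounded dimensions rather than native size:

    // Sketch only: decode a downloaded JPEG stream straight to a bitmap no
    // larger than the WP7 screen, so the full-resolution bitmap never has to
    // exist in memory. PictureDecoder.DecodeJpeg is my assumption about the
    // WP7 API for this; verify it against the SDK you target.
    using System.IO;
    using System.Windows.Controls;
    using System.Windows.Media.Imaging;
    using Microsoft.Phone;

    public static class ImageLoading
    {
        public static void SetDownscaledSource(Image target, Stream jpegStream)
        {
            // 480x800 is the WP7 screen size discussed above.
            WriteableBitmap bitmap = PictureDecoder.DecodeJpeg(jpegStream, 480, 800);
            target.Source = bitmap;
        }
    }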
The bitmap gets freed (my speculation) when the control's Source is set to null. The downloaded image is purged from the cache when the UriSource is set to null. (This is reported plenty of times on the web.)
If you take all this info together, it should be possible to (kind of) control your use of the image cache. You can roughly estimate each image's size and decide which images remain in the cache. Maybe it will need some tricks, such as storing the Uri objects in your own structures and releasing them as needed. I am not saying this is easy to do, but it is certainly possible.
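As a minimal sketch of the release trick described above (the names are mine, and the page is assumed to keep the original Uri somewhere so the image can be re-assigned when you navigate back):

    // Sketch only: drop both the decoded bitmap and the cache reference when
    // leaving the page, and re-create the source when returning. Re-assigning
    // the Uri will re-download (or re-use the cache entry, if it survived).
    using System;
    using System.Windows.Controls;
    using System.Windows.Media.Imaging;

    public static class ImageMemory
    {
        public static void Release(Image image)
        {
            var bitmap = image.Source as BitmapImage;
            if (bitmap != null)
            {
                bitmap.UriSource = null;   // purge the cache entry
            }
            image.Source = null;           // free the decoded bitmap
        }

        public static void Restore(Image image, Uri uri)
        {
            image.Source = new BitmapImage(uri);
        }
    }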

Related

Can't Optimize Huge Sprite Sheet in Unity

I was shocked to find that a game I had just created takes up a whopping 330 megabytes. According to the Editor Log, my textures are to blame:
From the list I started at the top with the Chieftain Walk animation spritesheet. The file was huge, so I opened it in Photoshop and decreased the image resolution dramatically.
However, even after saving in Photoshop, the Editor Log claims that the texture takes up the same amount of memory. What am I doing wrong, and also, when does the Editor Log update? Is it upon building the game? Many thanks.
First of all, you don't need to reduce the resolution of the actual PNG file. When Unity builds the player, it will store the imported uncompressed file in its Data folder near the executable. The size of that texture will be whatever your importer settings say; by default it is 2048x2048, if I remember correctly. If you change the importer settings for your texture, the PNG file (which is what the editor uses) will remain the same, but the texture object (which is used in the actual standalone build) will become much smaller.
Also, is there any particular reason why you didn't make it square, like 512x512? Always make it a square and a power of 2; if not, Unity will be unable to make any optimizations for your sprites.
EDIT:
In the texture import settings, set Max Size lower and your game will take less memory (both on the hard drive and in RAM/GPU memory when the game is running). You can also add compression; it will take even less space but will take longer to load in game. When loaded it will take the same amount of RAM/GPU memory as the non-compressed version. A win on app size, a loss on load performance. (Test it out and choose what is better for you.)
Why power of 2 and square, well:
By ensuring the texture dimensions are a power of two, the graphics pipeline can take advantage of optimizations related to working with powers of two. For example, it can be faster to divide and multiply by powers of two. It will also be easier for Unity to create mip-maps (they might take more memory if the texture is not square). There are many sources on the internet about mip-mapping.
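If you have many sprite sheets, the same import settings can also be applied from an editor script instead of by hand. A minimal sketch (the folder name and size cap are placeholders, and TextureImporterCompression assumes a Unity 5.5+ API; older versions use textureFormat instead):

    // Editor-only sketch: place under an Editor/ folder. Caps the imported
    // texture size and enables compression for anything under a "Sprites"
    // folder; adjust the path and numbers to your project.
    using UnityEditor;

    public class SpriteImportSettings : AssetPostprocessor
    {
        void OnPreprocessTexture()
        {
            if (!assetPath.Contains("/Sprites/"))
                return;

            TextureImporter importer = (TextureImporter)assetImporter;
            importer.maxTextureSize = 1024;          // cap the imported size
            importer.textureCompression = TextureImporterCompression.Compressed;
            importer.mipmapEnabled = false;          // sprites rarely need mip-maps
        }
    }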

XNA Texture loading speed (for extra large Texture sizes)

[Skip to the bottom for the question only]
While developing my XNA game I came upon another horrible XNA limitation: Texture2Ds (at least on my PC) can't have dimensions larger than 2048*2048. No problem - I quickly wrote my custom texture class, which uses a [System.Drawing.] Bitmap by default, splits the texture into smaller Texture2Ds as needed, and displays them as appropriate.
When I made this change I also had to update the method that loads the textures. When loading the Texture2Ds in the old version I used Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures above the limit, so if I tried to load/store a, say, 4092*2048 png file I ended up with a 2048*2048 Texture2D in my app. Therefore I switched to loading the images using [System.Drawing.] Image.FromFile and then casting to a Bitmap, as that doesn't seem to have any such limitation. (Later converting this Bitmap to a list of Texture2Ds.)
The problem is that loading the textures this way is noticeably slower, because now even those images that are under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to analyze an image file and check its dimensions (width, height) before even loading it into my application. If it is under the texture limit I can load it straight into a Texture2D, without the need to load it into a Bitmap and then convert it into a single-element Texture2D list.
Is there any (clean and preferably very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I'd guess that the slowest part here is the file opening/seeking (probably hardware-bound, when it comes to HDDs) rather than streaming the contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
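For reference, opting into HiDef is a one-line change in the game's constructor. The field and class names below are the ones the standard XNA project template generates, so treat them as assumptions about your setup:

    // Sketch: request the HiDef profile, which raises the texture size limit
    // to 4096x4096 (requires hardware that supports it).
    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        graphics.GraphicsProfile = GraphicsProfile.HiDef;
        Content.RootDirectory = "Content";
    }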
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
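That linked answer isn't quoted here, but the usual trick for PNG is to read the width and height straight out of the IHDR chunk at the start of the file, so only a couple of dozen bytes ever get read. A minimal sketch (PNG only; JPEG and BMP headers need their own parsing):

    // Sketch: read a PNG's dimensions without decoding any pixel data.
    // Layout: 8-byte signature, 4-byte IHDR length, 4-byte "IHDR" tag,
    // then width and height as 4-byte big-endian integers.
    using System.IO;

    public static class PngInfo
    {
        public static void GetSize(string path, out int width, out int height)
        {
            using (var reader = new BinaryReader(File.OpenRead(path)))
            {
                reader.ReadBytes(16);               // signature + length + "IHDR"
                byte[] w = reader.ReadBytes(4);     // width, big-endian
                byte[] h = reader.ReadBytes(4);     // height, big-endian
                width = (w[0] << 24) | (w[1] << 16) | (w[2] << 8) | w[3];
                height = (h[0] << 24) | (h[1] << 16) | (h[2] << 8) | h[3];
            }
        }
    }

If both dimensions come back at or below the limit, you can hand the file straight to Texture2D.FromStream; otherwise fall back to your Bitmap-splitting path.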

Jank Busting large fluid images

I have a page with many large fluid images. http://altarjewelry.com/gallery
I want to get a smooth 60fps webapp feel while scrolling. The Chrome DevTools tell me my paint times are the biggest problem (which you can check for yourself while scrolling). I'm assuming this is due to my many large fluid images.
I've read every article on HTML5Rocks about performance. I found many good tips on JS performance but no help optimizing large image paint times, other than using small fixed-size images, which is not an option for me as I'm building a responsive site.
I'm already serving up responsive images depending on the client.
Thank you for your help.
Not really sure how your gallery looks, because it never loaded from the URL in your post, and I don't know if that's a javascript issue or what - but I'll take a stab at helping you come up with a solution. Image optimization is image optimization, regardless of whether or not you're building a responsive site.
Approach and Design Considerations
Do you really need one large, high resolution image for each item, at the same DPI/PPI and compression, that should be responsive?
Or, should you serve appropriately sized images at differing DPI/PPI and compression, to different displays, all of which are still used in a responsive application?
Popular Convention
You're showing a gallery, and typically, you want smaller representations of the actual image--thumbnails or placeholders, generally of lower resolution, which link to the actual image at a higher resolution. This is an accepted design approach, and if you're going to vary from it, be sure it's with good reason.
The Lowest Common Denominator
If you're building a responsive site, some users will obviously be on mobile devices, which may have resolutions as small as 320 pixels wide. Consider things like that, and this: even if someone shows up on a desktop, are you going to have huge, full-width images loading? They will take forever to load, and visitors will never see your gallery. How is your gallery to look on a wide-screen desktop? If your intention is to have one image span the full width of the page, and to load the same image regardless of the device accessing your site, you may be using responsive design, but you'll find that's far from best or even good practice.
The Flip-Side, Large/Wide Screens
Why not have four gallery images going across a desktop? Or more? And if that's the case, they're likely to have a maximum size in any case. I honestly don't know, because I've tried to load your site a few times and get nothing. But consider that if there's a practical maximum size for your gallery images in the initial display, say six images at 200 pixels each across a 1200-pixel max layout width (or are you using a %-based framework at 100% of the display width? Even responsive sites often limit the max width of the content area, and these details would all help in determining a more appropriate answer), solutions begin to emerge.
Since no image needs to be wider than 200 pixels in that case, and on a phone your columns might display only one image that you want full width, you can work with a maximum initial image width of 480px.
Higher Quality, Smaller Files
We'll assume you want them high quality. That's fine. You still need to reduce file size, and you do that with compression. Now, you may feel that compressing a photo to 50% or even more makes it blurry, and it certainly will at low ppi (pixels per inch) settings.
The Secret To Better Compression
What you need to do is change your image editor settings from traditional defaults like 72 or 90 ppi, crank them up to 300, 400, 500, or more, and THEN apply compression. If that image is 480px wide and you've only got 72ppi, compression will quickly erode quality. However, having several hundred extra pixels per inch allows more information to be stored. Then you can apply much higher compression rates and shrink file sizes down quite a bit more.
The Oversized Image Approach
Another trick is to do the same thing and slightly oversize the image. If 480px is the max size for your thumbnails/small pics, make them actually 540-600px wide, at 400-500ppi, and compress them at really high settings. The browser will resize them down to the max width of 480px... but then you take a small performance hit there. Everything is a trade-off. You can also blur backgrounds in images, allowing the foreground/main focus of the photo to be of higher quality while the background requires less information, yielding smaller file sizes.
Not Suitable For Batch Processing
This should be done individually for each image; batch editing generally does not get the most out of this technique, because the color information is so different in each photo. One photo might be at its best quality and smallest size for your purposes at 300ppi and 50% quality, another at 500ppi and 35% quality. You'll want to do this not just for your gallery thumbnails but for multiple sizes of each image. No point in serving up a 1400px-wide, full-size desktop image to someone who's browsing your site on a 480px-wide display, after all. Use media queries to serve up an appropriately sized image, and have a small, medium and large variant. Done right, you don't even need to be serving larger images to those browsing with phones... the gallery images they are viewing are good enough.
The compression setting becomes less of a factor in the final image quality as the number of pixels you have to compress goes up: the more pixels there are to work with, the better the quality at higher compression settings.
Design Considerations and Smart Image Loading
Break It Up Into Smaller Content Chunks
Also, consider the process/design of your gallery. Do you have 20 items? 100? 400? Are you trying to show them all on one page? Break it up into small numbers...12-20 per page. Smaller and fewer images will load faster, and can remain responsive, with links for those who want a larger or higher quality image. No need to show a huge, high quality image to someone browsing with their phone.
Pre-fetching and Loading
Server side scripting, and even some javascript solutions can help with this. You might do things like limiting each gallery page to four rows of four images, and then after page load, have a javascript that pre-fetches the first four images that will display on page 2. If your visitor goes to page two after scrolling through page one, the first four images are loaded in cache, and display quickly while the others load normally, giving the experience of a faster page load.
If the visitor goes to another page on the site instead, you haven't wasted bandwidth on the other twelve images of page two; the pre-fetch only cost you the bandwidth of four. Smart design might reuse those first four images from page two of the gallery elsewhere on the site, so that the first gallery page visit actually speeds up a page load elsewhere and doesn't in fact waste bandwidth loading four unnecessary images. Think the process through, and solutions will suggest themselves.
Resources
Anyway, here are relevant articles/posts/links you may find helpful in understanding all of this:
Are Compressive Images A Good Solution For High Resolution Displays?
http://www.vanseodesign.com/web-design/compressive-image-tests/
Reducing image sizes (ResponsiveDesign.is)
https://responsivedesign.is/articles/reducing-image-sizes
How to serve high-resolution website images for retina displays (search benfrain.com for this post)
And a tool you might find useful: adaptive-images.com

dealing with large number of pictures

My program saves images from a camera at a rate of about 30 fps.
Moving these images around, or browsing them in Windows Explorer, takes a long time.
My question is:
Is storing them as a video file a better approach, so that moving the files wouldn't take so long? (If so, how can I open a large video file and quickly get a specified frame number? Is this approach faster?)
It's very unlikely that a video file will load or seek any faster than individual image files. Not to mention this is going to be a lot more difficult to handle than images, and probably introduce a bunch of extra, unnecessary dependencies to your application.
If you really need a way to speed up image browsing, you should look into generating separate thumbnail images. Thumbnails are copies of the original images with significant reductions in size and quality, suitable for display purposes only. That makes them significantly smaller than the originals, which should speed up their loading and rendering substantially. This trick is used all over graphics packages and file managers. It also lets you delay the expensive process of loading the associated full image until after the user selects a particular thumbnail to open.
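The question doesn't say which framework the capture program uses; assuming .NET with System.Drawing, a minimal thumbnail-generation sketch could look like this (the paths and the 160px width are placeholders):

    // Sketch: write a small JPEG thumbnail next to each captured frame so the
    // browsing UI only ever loads the small files until a frame is selected.
    using System.Drawing;
    using System.Drawing.Imaging;

    public static class Thumbnails
    {
        public static void Save(string sourcePath, string thumbPath, int thumbWidth = 160)
        {
            using (var original = Image.FromFile(sourcePath))
            {
                int thumbHeight = original.Height * thumbWidth / original.Width;  // keep aspect ratio
                using (var thumb = new Bitmap(thumbWidth, thumbHeight))
                using (var g = Graphics.FromImage(thumb))
                {
                    g.DrawImage(original, 0, 0, thumbWidth, thumbHeight);
                    thumb.Save(thumbPath, ImageFormat.Jpeg);
                }
            }
        }
    }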

Tips for reducing Core Animation memory usage

So here's the situation:
I have a CALayer that is the size of my screen, and I'm setting its contents property to a 2 MB JPEG that's roughly 3500 x 2000 pixels in size with a resolution of 240ppi.
I'd expect there to be some overhead involved in using the CALayer, but my sample application (which does only exactly what's above) shows usage of about 33 MB RSIZE, 22 MB RPRVT and 30 MB RSHRD. I've noticed that these numbers are much better when running the application as a 64-bit process than as a 32-bit one.
I'm doing everything I can think of in the real application that this example comes from, including resampling my CGImageRefs to only be the size of the layer, but this seems extraneous to me - shouldn't it be simpler?
Has anyone come across good methods to reduce the amount of memory CALayers and CGImageRefs use?
First, you're going to run into problems with an image that size in a plain CALayer, because you may hit the texture size limit of 2048 x 2048 (depending on your graphics card). Applications like this are what CATiledLayer is designed for. Bill Dudney has some code examples on his blog (a large PDF), as well as with the code that accompanies his book.
It isn't surprising to me that such a large image would take so much memory, given that it will be stored as an uncompressed bitmap in your CGImage (3500 x 2000 pixels at 4 bytes per pixel is roughly 28 MB on its own). Aside from scaling your image down to the resolution you need and tiling it with CATiledLayer, I can't think of much. Are you releasing the CGImageRef once you've assigned it to the contents of the CALayer? You won't need to hang onto it at that point.
