I'm a first-time poster. I have a problem that seems common to a lot of people, but I can't find or decipher an answer anywhere.
The game I'm making has 3 rather large sprites on screen (230 pixels high by various widths).
Each sprite has 3 1024x1024 character sheets from which the animation frames are taken.
I've experimented with PVRs, but at that size they look absolutely horrible, so I want to keep PNGs.
From other information I believe the device can handle 3 1024x1024 PNGs in memory without memory warnings.
My problem is that I end up with 9, because if I don't use imageNamed, the time it takes to load a sheet from disk is unacceptable.
I have created an NSMutableDictionary to store the data from the PNG image sheets in, but here lies my problem.
How do I add decompressed PNG data to the NSMutableDictionary without using imageNamed? If I add it via imageWithContentsOfFile, there is still a massive loading time when I get the image from the cache.
I need to somehow store the sheets decompressed in the NSMutableDictionary and assign them to the sprites' images (the sprites are UIImageViews) without the behind-the-scenes caching I get with imageNamed.
I also need to completely free a PNG from texture memory when I change the sheet a sprite grabs its frames from (not from my NSMutableDictionary cache).
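Here is a sketch of the direction I'm experimenting with, in Swift for brevity (my real code is Objective-C with an NSMutableDictionary); the SheetCache name and the draw-once-to-force-decoding trick are my own guesses, not confirmed behaviour:

```swift
import UIKit

// Hypothetical sheet cache: decode each PNG once up front, keep the
// decoded bitmap under my control, and evict it explicitly -- none of
// which the private cache behind imageNamed lets me do.
final class SheetCache {
    private var sheets: [String: UIImage] = [:]

    func sheet(named name: String) -> UIImage? {
        if let cached = sheets[name] { return cached }
        guard let path = Bundle.main.path(forResource: name, ofType: "png"),
              let compressed = UIImage(contentsOfFile: path) else { return nil }
        // Drawing once into an offscreen context forces the PNG to be
        // decompressed now, instead of lazily the first time it is shown.
        let renderer = UIGraphicsImageRenderer(size: compressed.size)
        let decoded = renderer.image { _ in compressed.draw(at: .zero) }
        sheets[name] = decoded
        return decoded
    }

    // Dropping the only strong reference lets the decoded bitmap be freed
    // (assuming no UIImageView still holds it).
    func evict(_ name: String) {
        sheets[name] = nil
    }
}
```

The idea is to pay the decompression cost once when a sheet is first requested, assign the decoded image straight to the UIImageView, and call evict() (plus replace the image view's image) when a sprite switches sheets.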
thanks in advance -
Related
I generate bitmap images with Cocoa using NSImage or CGImage (UIImage would probably work similarly).
How do I determine the maximum image size I can generate?
I guess it should bear some relation to the available memory?
Tech curiosity aside, it is always bad to allocate big images.
Apple (especially on iPhone/iOS) has worked hard on tiling and "tiled" classes (e.g. CATiledLayer) to display very large images.
I saw this demonstrated at WWDC in past years (see the PhotoScroller sample code: https://developer.apple.com/library/content/samplecode/PhotoScroller/Introduction/Intro.html).
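A minimal sketch of the idea, assuming a UIKit view and a hypothetical tileProvider callback (this is my reading of the PhotoScroller approach, not its actual code):

```swift
import UIKit

// A view backed by CATiledLayer: the system asks for the image one small
// tile at a time instead of decoding the whole thing up front.
final class TiledImageView: UIView {
    // Hypothetical callback that returns the tile covering `rect`,
    // e.g. loaded from pre-cut tile files on disk.
    var tileProvider: ((CGRect) -> UIImage?)?

    override class var layerClass: AnyClass { CATiledLayer.self }

    override func draw(_ rect: CGRect) {
        // CATiledLayer calls draw(_:) once per visible tile, with `rect`
        // covering only that tile, so memory use stays bounded.
        tileProvider?(rect)?.draw(in: rect)
    }
}
```

Only the tiles currently on screen are ever decoded; the large image is typically pre-cut into tile files offline, which, as I recall, is what the PhotoScroller sample demonstrates.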
With pretty much every PageSpeed test I run on my websites I get the comment "Optimize images by lossless compressing image X", which often increases my page rank a lot when fixed.
I already save EVERY image with Photoshop's "Save for Web", but I was wondering how I could losslessly compress my images even more. As far as I know I'm already doing everything I can.
Really wondering..
Off-topic, but I noticed that Google's PageSpeed uses a Retina device for its checks, since all my Retina images got loaded instead of the regular ones. Since those are larger than the display area, I got a 1/100 score on the mobile segment. Haha.
This was a real issue with many of my sites; however, I use the free version of Kraken to losslessly compress all of my images, and this passes the Google test, thus boosting rankings!
https://kraken.io/web-interface
I must have used this for well over 10,000 images already!
The images you create in programs like Photoshop and Illustrator look amazing, but the file sizes are often very large. This is because the images are kept in a format that makes them easy to manipulate in different ways. If you put these files on your website as-is, it would be very slow to load. Optimizing your images for the web means saving or converting them to a web-friendly format suited to what each image contains.
How does it work?
There are two forms of compression we need to understand: lossy and lossless.
Images saved in a lossy format will look slightly different from the original when uncompressed, though usually only on very close inspection. Lossy compression is good for the web because the images use a small amount of memory yet can look sufficiently like the original.
Images saved in a lossless format retain all the information needed to reproduce the original image exactly. For this reason these files carry a lot more data and, in return, have a much larger file size.
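To give a rough sense of scale (my own ballpark figures, not from a benchmark): a detailed 1,200×800 photo might be several MB as a lossless PNG but only a few hundred KB as an 80-quality JPEG, with a difference you would only spot on close inspection.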
We can also optimize images for the web by saving them at the appropriate dimensions. Resizing the image on the page with CSS is handy, but the browser still downloads the entire original file, then resizes it and displays it.
Can you imagine taking a poster-sized image and using it as a thumbnail? The little 20px by 20px image would take as long to load as the original poster, when we could have been loading a 20px image the whole time.
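Concretely (rough, illustrative numbers): a true 20px by 20px thumbnail is typically a kilobyte or two on disk, while a 2,000×3,000 poster-sized JPEG can easily run to several MB, so shrinking the poster with CSS still downloads roughly a thousand times more data than the page actually needs.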
How to Optimize Images?
In simple terms, optimizing an image means removing all the unnecessary data saved within it, reducing the file size to suit where the image is used on your website. Optimizing images for the web can reduce your total page weight by up to 80%.
Full optimization of images can be quite an art to perfect as there are such a wide variety of images you might be dealing with. Here are the most common ways to optimize your images for the web.
Reduce the white space around images – some developers use whitespace for padding, which is a big no-no. Crop your images to remove any whitespace around them and use CSS to provide the padding.
Use proper file formats. If you have icons, bullets, or any graphics without too many colors, use a format such as GIF and save the file with a reduced color palette. For more detailed graphics, use the JPG format and lower the quality.
Save your images at the proper dimensions. If you are having to use HTML or CSS to resize your images, stop right there. Save the image at the desired size to reduce the file size.
To resize your images you will need some kind of image editor. For basic work, a simple program such as GIMP is enough; for more advanced optimization, save the files from Photoshop, Illustrator, or Fireworks.
[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2Ds (at least on my PC) can't have dimensions larger than 2048*2048. No problem; I quickly wrote a custom texture class that uses a [System.Drawing.] Bitmap by default, splits the texture into smaller Texture2Ds when necessary, and displays them as appropriate.
When I made this change I also had to update the texture-loading method. In the old version I loaded Texture2Ds with Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures above the limit, so if I tried to load, say, a 4092*2048 PNG file I ended up with a 2048*2048 Texture2D in my app. I therefore switched to loading images with [System.Drawing.] Image.FromFile and casting to Bitmap, which doesn't seem to have any such limitation. (Later I convert this Bitmap to a list of Texture2Ds.)
The problem is that loading textures this way is noticeably slower, because now even images under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I'm actually looking for a way to check an image file's dimensions (width and height) before loading it into my application: if it's under the texture limit I can load it straight into a Texture2D, with no detour through a Bitmap and a single-element Texture2D list.
Is there any (clean and, ideally, very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I'd guess the slowest part here is opening/seeking the file (probably hardware-bound on HDDs) rather than streaming its contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
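For PNGs specifically, the first 24 bytes are enough: the width and height sit at fixed offsets inside the IHDR chunk. A sketch of the idea (in Swift here for consistency with the rest of this page; the byte offsets are what matter, and they port directly to a BinaryReader in C#):

```swift
import Foundation

// Read just a PNG's IHDR chunk to get its dimensions, without decoding
// any pixel data. Layout: 8-byte signature, 4-byte chunk length,
// 4-byte "IHDR" tag, then width and height as big-endian UInt32s.
func pngDimensions(atPath path: String) -> (width: Int, height: Int)? {
    guard let handle = FileHandle(forReadingAtPath: path) else { return nil }
    defer { handle.closeFile() }
    let header = handle.readData(ofLength: 24)
    let signature = Data([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])
    guard header.count == 24, header.prefix(8) == signature else { return nil }
    func bigEndianInt(at offset: Int) -> Int {
        header[offset..<offset + 4].reduce(0) { ($0 << 8) | Int($1) }
    }
    return (bigEndianInt(at: 16), bigEndianInt(at: 20))
}
```

Since only 24 bytes are read, the cost is dominated by opening the file, which also answers the "is it worth it" part: you avoid decoding megabytes of pixel data just to learn the size.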
I'm porting some CF 2.0 VB.Net apps to a newer handset that has twice the screen resolution. So I have to double the dimensions of everything, otherwise it all gets squished into the top left-hand corner of the screen.
One screen had a bitmap that was 250 KB in size, and after I doubled its dimensions it naturally blew out to 1 MB. That isn't great on a handheld, so I fired up IrfanView and converted it to a .GIF. The .GIF was only 60 KB, with no discernible change in image quality.
To me it seems a no-brainer: convert all bitmaps to GIF (or JPG) and get the same results for a fraction of the disk space (and probably quicker form-loading times).
But does anyone know of a situation where you would use a bitmap in preference to a GIF/JPEG? I can't find any.
I really can't think of any realistic example where you would prefer a bitmap to a GIF. Since GIF is a lossless format (at least for images that fit within its 256-color palette), you lose no information when storing images. So after reading the file, your app has the same image data as if it had read a bitmap. And as you said, the file is smaller and will therefore probably be read from disk faster.
JPEG is different because it is a lossy format, meaning you lose information when storing images in it. You will need to decide whether that loss of information matters in your app.
Bitmap files would be preferable if and only if reading the larger file from disk were faster than reading the smaller file and decompressing it in memory.
And to be precise, you do prefer bitmaps when holding images in main memory, so your code can work on the pixel data easily. That is actually what you most likely already have once you've loaded a file using an image library.
To cut a long story short, a BMP is stored as a plain series of pixels along with their colours. This is useful if you want to do things such as pattern recognition or movement detection.
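A tiny illustration of why raw pixel access matters for that kind of work, sketched in Swift with CoreGraphics (illustrative only; the RGB channel order is an assumption and depends on the actual pixel format):

```swift
import CoreGraphics
import Foundation

// Once an image is a raw bitmap in memory, recognition code can index
// any pixel directly, with no decompression step in the hot loop.
// Channel order is assumed RGB(A); real code must check the format.
func pixel(atX x: Int, y: Int, in image: CGImage) -> (r: UInt8, g: UInt8, b: UInt8)? {
    guard let data = image.dataProvider?.data as Data? else { return nil }
    let bytesPerPixel = image.bitsPerPixel / 8
    let offset = y * image.bytesPerRow + x * bytesPerPixel
    guard offset + 2 < data.count else { return nil }
    return (data[offset], data[offset + 1], data[offset + 2])
}
```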
Bitmaps are also typically used for their convenience – you can knock one up in Paint without specialist graphics software.
I am building a map system that requires a large image (13K pixels wide by 20K pixels tall at native size) to be overlaid onto an area of the US covering about 20 kilometers or so. I have the image's file size down to 23 MB in JPG format and it loads onto the map fairly quickly. I can zoom in and out and it looks great, and it's even located exactly where I need it to be (geographically). However, that 23 MB file is causing Firefox to consume an additional 1 GB of memory!!! I am using the Memory Restart extension in Firefox, and without the image overlay the memory usage is about 360 MB to 400 MB, which seems to be the norm for regular usage, browsing other websites and so on. But when I add the image layer, memory usage jumps to 1.4 GB. I'm at a complete loss to explain WHY that is and how to fix it. Any ideas would be greatly appreciated.
Andrew
The file only takes up 23 MB as a JPEG. However, the JPEG format is compressed, and any program (such as Firefox) that wants to actually render the image has to uncompress it and store every pixel in memory. You have 13k by 20k pixels, which makes 260 million pixels. Figure at least 3 bytes of color info per pixel and that's 780 MB. It might be using 4 bytes, to keep each pixel aligned on a word boundary, which would be 1040 MB.
As for how to fix it, well, I don't know if you can, except by reducing the image size. If the image contains only a small number of colors (for instance, a simple diagram drawn in a few primary colors), you might be able to save it in a format that uses indexed colors, and then Firefox might be able to render it using less memory per pixel. It all depends on the rendering code.
Depending on what you're doing, perhaps you could set things up so that the whole image is at lower resolution, then when the user zooms in they get a higher-resolution image that covers less area.
Edit: to clarify that last bit: right now you have the entire photograph at full resolution, which is simple but needs a lot of memory. An alternative would be to have the entire photograph at reduced resolution (maximum expected screen resolution), which would take less memory; then when the user zooms in, you have the image at full resolution, but not the entire image - just the part that's been zoomed in (which likewise needs less memory).
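For scale (my own back-of-the-envelope numbers): a screen-resolution copy at, say, 2,000×1,500 is 3 million pixels, which is roughly 12 MB uncompressed at 4 bytes per pixel, versus the ~1 GB figure above for the full 13K×20K image.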
I can think of two approaches: break up the big image into "tiles" and load the ones you need (not sure how well that would work), or use something like ImageMagick to construct the smaller image on-the-fly. You'd probably want to use caching if you do it that way, and you might need to code up a little "please wait" message to show while it's being constructed, since it could take several seconds to process such a large image.