I want to compress my PNG sprite textures a bit so they don't take up so much memory when I build for mobile devices. Since I use a lot of GIF animations, this is crucial for my game's performance.
I looked for an answer, but the threads I found were more than a year old and not about sprites, so I figured I'd ask my own version. An image that's 224 KB on disk takes up 1.6 MB in Unity with Generate Mip Maps turned off.
So: how do I compress the PNG sprite textures in my game?
Try setting Format to Automatic Compressed.
For this to work on all mobile platforms, make sure the source image has dimensions that are a power of 2 (actually I think a multiple of 4 should be enough, but powers of 2 keep you on the safe side).
This will lower the quality of the image but should save you some space.
Turning mip maps off is again a good idea, as mip maps increase the size of the image by about 33%.
And one more thing, make sure that Non Power of 2 is set to None, that may be the reason you see an increase in size at the moment.
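For what it's worth, the size on disk has nothing to do with the size in memory: once imported, the texture is stored uncompressed unless you pick a compressed format. Here is a quick back-of-the-envelope sketch; the 640x640 resolution and the 8-bits-per-pixel ETC2 figure are my own assumptions to make the 1.6 MB number come out, not something from your post:

```python
def texture_bytes(width, height, bytes_per_pixel, mipmaps=False):
    """Approximate GPU memory for a texture. A full mipmap chain adds
    roughly one third on top (1/4 + 1/16 + ... ~= 1/3)."""
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

# hypothetical 640x640 sprite (a size that matches the ~1.6 MB figure)
print(texture_bytes(640, 640, 4))        # RGBA32, no mips   -> 1638400 bytes (~1.6 MB)
print(texture_bytes(640, 640, 4, True))  # RGBA32 with mips  -> ~2.2 MB
print(texture_bytes(640, 640, 1))        # ETC2-style 8 bpp  -> ~0.4 MB
```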
I was shocked to find that a game I had just created takes up a whopping 330 megabytes. According to the Editor Log, my textures are to blame:
From the list I started at the top with the Chieftain Walk animation spritesheet. The file was huge, so I opened it in Photoshop and decreased the image resolution dramatically.
However, even after saving in Photoshop, the Editor Log claims that the texture takes up the same amount of memory. What am I doing wrong, and also, when does the Editor Log update? Is it upon building the game? Many thanks.
First of all, you don't need to reduce the resolution of the actual PNG file. When Unity builds the player, it stores the imported (uncompressed) texture in the Data folder next to the executable, and the size of that texture is whatever your importer settings say. By default the max size is 2048x2048, if I remember correctly. If you change the importer settings for your texture, the PNG file (the one in the editor) stays the same, but the texture object (the one used in the actual standalone build) becomes much smaller.
Also, is there any particular reason why you didn't make it square, like 512x512? Always make it square and a power of 2; otherwise Unity will be unable to make some optimizations for your sprites.
EDIT:
These are the texture import settings; set Max Size lower and your game will take less memory (both on disk and in RAM/GPU memory while the game is running). You can also enable compression; it will take even less space, but it will take longer to load in game, and once loaded it takes the same amount of RAM/GPU memory as an uncompressed texture. A win on app size, a loss on load performance. (Test it out and choose what is better for you.)
Why power of 2 and square, well:
By ensuring the texture dimensions are a power of two, the graphics pipeline can take advantage of optimizations related to working with powers of two; for example, it can be faster to divide and multiply by powers of two. It also makes it easier for Unity to create mip maps (they may take more memory if the texture is not square). There are many sources on the internet about mip mapping.
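To make the power-of-two point concrete, here is a throwaway sketch of my own (nothing Unity-specific): a power-of-two texture halves cleanly all the way down to 1x1, which is exactly the chain of sizes a full mip map set needs, while an arbitrary size needs rounding at every level.

```python
def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

def mip_chain(width, height):
    """Sizes of every mip level down to 1x1 (each level halves both axes)."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels.append((width, height))
    return levels

print(is_power_of_two(512))   # True  -> halves cleanly: 512, 256, ..., 2, 1
print(is_power_of_two(500))   # False -> levels need rounding: 500, 250, 125, 62, ...
print(mip_chain(512, 512))
```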
At work, I work with very large images.
I currently do my rendering via SDL2.
The max texture size on the graphics card my machine uses is 8192x8192.
Because my data sets are larger than what will fit in a single texture, I split my image into multiple textures after it is loaded, and tile them.
However, I have found that this comes at a very steep cost. Rendering only 4 textures around 5K by 5K (pixels) each completely tanks the framerate!
Conventional wisdom tells me that the fewer texture swaps the better, but with such large images I've found myself between a rock and a hard place.
One thing I've considered is that perhaps if I were to chunk the images up into many small textures, I could take advantage of culling, which would hopefully be a net win. But there's a big problem with that approach: I need to be able to zoom out.
Another option would be to downscale the images. This seems promising, as the analysis I am doing on the images does not require the high resolution that the images provide.
I know that OpenGL has mipmapping, but I am inexperienced with OpenGL and am wary of diving into it for a work project. I am not aware of a good way to downscale the images within the confines of SDL2, and for reasons specific to the work I am doing, scaling the images down offline (before I load them) is not appealing.
What is the best approach for me to get the highest framerate in this situation?
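To be concrete about what I mean by chunking and culling above, the per-frame tile selection would be something like this (the tile size, grid dimensions, and viewport numbers are just placeholders):

```python
def visible_tiles(view_x, view_y, view_w, view_h, tile_size, cols, rows):
    """Return (col, row) indices of tiles overlapping the viewport,
    so only those textures get drawn each frame."""
    first_col = max(view_x // tile_size, 0)
    first_row = max(view_y // tile_size, 0)
    last_col = min((view_x + view_w) // tile_size, cols - 1)
    last_row = min((view_y + view_h) // tile_size, rows - 1)
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]

# e.g. a 20000x20000 image cut into 2048-pixel tiles, 1920x1080 view at (3000, 4000)
print(visible_tiles(3000, 4000, 1920, 1080, 2048, 10, 10))  # -> 4 tiles instead of 100
```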
I've found that the maximum texture size my OpenGL implementation supports is 8192, but the image I'm working with is 16997x15931. As you can see in this link, I've completed the class COpenGLControl and customized it for my own use to work with a smaller 7697x7309 image, and added different navigation tasks for it.
Render an outlined red rectangle on top a 2D texture in OpenGL
but now, in the last stages of the work, I've decided to change the part that applies the texture so it can handle images bigger than 8192.
Questions:
Is it possible with my OpenGL?
What concept should I study: mipmaps, multi-texturing?
Will it improve the performance of the code?
Right now my program uses 271 MB of RAM just to show this smaller image (7697x7309), and I'm going to add an image-processing/filtering task to it (the code is already written as a console application and will be combined with this project). I have put all my effort into optimizing that code, but it still uses 376 MB of RAM for the 7697x7309 image. So I think the final project would use up to 700 MB of RAM for images around 7000x7000. Obviously, for the bigger image (16997x15931) the RAM usage will be a lot higher!
So I'm looking for a concept to handle images bigger than the MAX_TEXTURE_SIZE and also optimize the performance of the program
More Questions:
What concept should I study in OpenGL to achieve the above goal?
Could you explain a little about the concept you suggest?
I've asked the question on Game Development too, but decided to repeat it here since it may get more viewers. As soon as I get an answer, I will delete the question from one of the sites, so don't worry about multiple postings.
I will try to sum up my comments for the original question.
know your actual OpenGL version: maybe you can load some modern extensions and work with an even more recent version of OpenGL.
if possible, take a look at sparse textures (megatextures): ARB_sparse_texture or AMD_sparse_texture
to reduce memory you can use some texture compression:
How to: load DDS files in OpenGL.
another simple idea: split the huge texture into four smaller textures (from 16k x 16k into four 8k x 8k) and render four quads (see the sketch after this list).
maybe you can use OpenCL or CUDA to do the work?
regarding mipmaps: they are a set of smaller versions of your input texture. Mipmaps improve performance and the final quality of filtering, but you need about 33% more memory for a texture with a full mipmap chain. In your case they could be very helpful: for instance, when you look at a wall from a huge distance you do not have to use the full (large) texture, a small version of it is enough. g-truc on mipmaps
In general there are a lot of options, but it depends on your experience which is simpler and fastest to implement.
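To make the splitting idea concrete, here is a rough sketch of the cutting step only (it uses Pillow purely for illustration; the file name is hypothetical, and the actual GL upload and rendering of each tile is left out):

```python
from PIL import Image

Image.MAX_IMAGE_PIXELS = None   # the source is far larger than Pillow's default safety limit
MAX_TEX = 8192                  # the maximum texture size reported by the driver

def split_into_tiles(path):
    """Cut an oversized image into tiles no larger than MAX_TEX on a side;
    each tile can then be uploaded as its own texture and drawn as a quad."""
    img = Image.open(path)
    w, h = img.size
    tiles = []
    for y in range(0, h, MAX_TEX):
        for x in range(0, w, MAX_TEX):
            box = (x, y, min(x + MAX_TEX, w), min(y + MAX_TEX, h))
            tiles.append((box, img.crop(box)))
    return tiles

for box, tile in split_into_tiles("huge_16997x15931.png"):  # hypothetical file name
    print(box, tile.size)
```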
I am working on a gameplay scene which needs to load 27 texture atlases (each one 1024 * 1024) before entering the game scene,
but sometimes my game crashes because it receives a memory warning.
I know 27 texture atlases will use:
4 bytes * 27 * 1024 * 1024 = 108 MB of memory
which is a huge amount, but I really need to load them before entering the game.
Is there anyway to solve my issue?
Any ideas will be very appreciated!
BTW:
I am using cocos2d 1.0.1
Best suggestion is to review your design, and the 'need' for preloading all these textures. I tend to pre-load only the textures that are most frequently used (animations and static map objects).
For example, I have textures for animating walks on a map for 16 character classes. I regrouped the 'idle' animations into 4 textures and preload these, because initially, when a soldier enters the scene, it idles. The moving animations are in separate textures that are loaded in real time, as a function of the direction of travel, for each character class in movement. When the characters stop walking (idle), I remove unused textures from the cache, as well as unused sprite frames.
Also, there are other avenues for memory management. You could use a 16-bit format for certain textures (RGBA8888 is the default). You may also gain by converting to the compressed PVR format (once again this is lossy, but could be fine for some textures).
Look here and there to learn more about picture formats in cocos2d and their relationship to memory consumption (as well as load and rendering speeds). But once again, before you start optimizing, make certain you have no alternative to the preload-everything approach.
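To put rough numbers on the pixel-format point, here is a back-of-the-envelope sketch (the PVR 4-bits-per-pixel figure is the usual PVRTC variant; whether the lossy formats look acceptable depends entirely on your art):

```python
ATLASES = 27
SIDE = 1024  # each atlas is 1024 x 1024 pixels

def total_mb(bits_per_pixel):
    """Total memory for all atlases at a given pixel format, in MB."""
    return ATLASES * SIDE * SIDE * bits_per_pixel / 8 / (1024 * 1024)

print(total_mb(32))  # RGBA8888 (the default)  -> 108.0 MB
print(total_mb(16))  # RGBA4444                ->  54.0 MB
print(total_mb(4))   # PVR, 4 bits per pixel   ->  13.5 MB
```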
Use JPG instead of PNG. It will lose the transparency, but you can restore transparency with a separate alpha image for each texture. That will help you reduce the size to almost half of what you are using now.
I am building a map system that requires a large image (native 13K pixels wide by 20K pixels tall) to be overlaid onto an area of the US covering about 20 kilometers or so. I have the file size of the image in JPG format down to 23 MB and it loads onto the map fairly quickly. I can zoom in and out and it looks great. It's even located exactly where I need it to be (geographically). However, that 23 MB file is causing Firefox to consume an additional 1 GB of memory!!! I am using the Memory Restart extension on Firefox and without the image overlay, the memory usage is about 360 MB to 400 MB, which seems to be about the norm for regular usage, browsing other websites etc. But when I add the image layer, the memory usage jumps to 1.4 GB. I'm at a complete loss to explain WHY that is and how to fix it. Any ideas would be greatly appreciated.
Andrew
The file only takes up 23 MB as a JPEG. However, the JPEG format is compressed, and any program (such as FireFox) that wants to actually render the image has to uncompress it and store every pixel in memory. You have 13k by 20k pixels, which makes 260M pixels. Figure at least 3 bytes of color info per pixel, that's 780 MB. It might be using 4 bytes, to have each pixel aligned at a word boundary, which would be 1040 MB.
As for how to fix it, well, I don't know if you can, except by reducing the image size. If the image contains only a small number of colors (for instance, a simple diagram drawn in a few primary colors), you might be able to save it in some format that uses indexed colors, and then FireFox might be able to render it using less memory per pixel. It all depends on the rendering code.
Depending on what you're doing, perhaps you could set things up so that the whole image is at lower resolution, then when the user zooms in they get a higher-resolution image that covers less area.
Edit: to clarify that last bit: right now you have the entire photograph at full resolution, which is simple but needs a lot of memory. An alternative would be to have the entire photograph at reduced resolution (maximum expected screen resolution), which would take less memory; then when the user zooms in, you have the image at full resolution, but not the entire image - just the part that's been zoomed in (which likewise needs less memory).
I can think of two approaches: break up the big image into "tiles" and load the ones you need (not sure how well that would work), or use something like ImageMagick to construct the smaller image on-the-fly. You'd probably want to use caching if you do it that way, and you might need to code up a little "please wait" message to show while it's being constructed, since it could take several seconds to process such a large image.
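As a very rough sketch of that second approach (using Pillow here rather than ImageMagick; the file names and coordinates are placeholders, and the caching and "please wait" plumbing is left out):

```python
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # the source is roughly 13k x 20k pixels

def make_overview(src_path, out_path, max_side=2048):
    """Save a screen-resolution copy for the zoomed-out view."""
    img = Image.open(src_path)
    img.thumbnail((max_side, max_side))  # resizes in place, keeps the aspect ratio
    img.save(out_path, quality=85)

def crop_region(src_path, out_path, left, top, right, bottom):
    """Cut a full-resolution region out of the big image for a zoomed-in view."""
    img = Image.open(src_path)
    img.crop((left, top, right, bottom)).save(out_path, quality=85)

make_overview("overlay_full.jpg", "overlay_overview.jpg")
crop_region("overlay_full.jpg", "overlay_zoom.jpg", 4000, 6000, 6000, 8000)
```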