avoid massive memory usage in openlayers with image overlay - firefox

I am building a map system that requires a large image (native 13K pixels wide by 20K pixels tall) to be overlaid onto an area of the US covering about 20 kilometers or so. I have the file size of the image in JPG format down to 23 MB and it loads onto the map fairly quickly. I can zoom in and out and it looks great. It's even located exactly where I need it to be (geographically). However, that 23 MB file is causing Firefox to consume an additional 1 GB of memory! I am using the Memory Restart extension on Firefox, and without the image overlay the memory usage is about 360 MB to 400 MB, which seems to be about the norm for regular usage, browsing other websites, etc. But when I add the image layer, the memory usage jumps to 1.4 GB. I'm at a complete loss to explain WHY that is and how to fix it. Any ideas would be greatly appreciated.
Andrew

The file only takes up 23 MB as a JPEG. However, the JPEG format is compressed, and any program (such as Firefox) that wants to actually render the image has to uncompress it and store every pixel in memory. You have 13k by 20k pixels, which makes 260M pixels. Figure at least 3 bytes of color info per pixel; that's 780 MB. It might be using 4 bytes, to have each pixel aligned at a word boundary, which would be 1040 MB.
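A quick back-of-the-envelope check, as plain shell arithmetic (the 3- and 4-byte-per-pixel figures are assumptions about the renderer, as above):
# decoded size = width x height x bytes per pixel
echo $((13000 * 20000 * 3))   # 780000000 bytes, about 780 MB at 24-bit color
echo $((13000 * 20000 * 4))   # 1040000000 bytes, about 1.04 GB at 32-bit color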
As for how to fix it, well, I don't know if you can, except by reducing the image size. If the image contains only a small number of colors (for instance, a simple diagram drawn in a few primary colors), you might be able to save it in some format that uses indexed colors, and then Firefox might be able to render it using less memory per pixel. It all depends on the rendering code.
Depending on what you're doing, perhaps you could set things up so that the whole image is at lower resolution, then when the user zooms in they get a higher-resolution image that covers less area.
Edit: to clarify that last bit: right now you have the entire photograph at full resolution, which is simple but needs a lot of memory. An alternative would be to have the entire photograph at reduced resolution (maximum expected screen resolution), which would take less memory; then when the user zooms in, you have the image at full resolution, but not the entire image - just the part that's been zoomed in (which likewise needs less memory).
I can think of two approaches: break up the big image into "tiles" and load the ones you need (not sure how well that would work), or use something like ImageMagick to construct the smaller image on-the-fly. You'd probably want to use caching if you do it that way, and you might need to code up a little "please wait" message to show while it's being constructed, since it could take several seconds to process such a large image.
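If the image is georeferenced (say, as a GeoTIFF), one way to produce such tiles is GDAL's gdal2tiles.py. This is only a sketch; the input filename and zoom range here are placeholders:
# build an XYZ tile pyramid for zoom levels 0-7; the tiles/ directory
# can then be served and added to the map as a tiled layer
gdal2tiles.py --zoom=0-7 overlay_georeferenced.tif tiles/
The map client then fetches only the tiles in view, so the browser never has to decode the whole 260-megapixel image at once.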

Related

How do I fix terrible image scaling in Apache FOP?

We're currently using Apache FOP to render a bunch of images in PDF format, but we've noticed that the quality of just about any image is complete garbage, and anything with text in it is incredibly unreadable.
I have looked through the documentation about source and target resolutions here.
And I thought that maybe I could force-fit some good image quality in by assuming a default (source) resolution of 72 pixels per inch for all bitmaps/PNGs/etc. and requesting a target resolution of 300 pixels per inch. I figured this would theoretically cause FOP to cram way more image into a smaller space, netting me an increase in quality with less need to smash my images into such a small space. I also did the math: before, when things were 72/72, it seemed like we were getting 5 * 72 = 360 pixels, Nokia-phone screen resolution, so I was like no wonder they are terrible quality.
Well, it seems like even with a target PPI of 300, instead of netting me a generous space of about 1500 pixels, I'm still getting the same crappy Nokia-phone quality. I tried many combinations of source and target PPI (300/300, 72/300, and 300/72 respectively), but nothing seems to cause any form of resampling or better image output. In the end I'm always left with this.
This is an actual screenshot of my PDF at almost full screen size. It may be worth noting that we are using the org.apache.commons.codec.binary.Base64 encoder to take images from memory and turn them into strings to embed into the PDF. I don't know what kind of compression, if any, is being done through this encoder, but I hope to dump a string to the filesystem soon to take a look and make sure that isn't the issue.
In the meantime, does anyone know what PPI settings I might have messed up, or what better options I may have to render clearer, cleaner images out to PDF with FOP?
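For reference, the two settings mentioned above normally live in FOP's configuration file (fop.xconf); a minimal sketch of the relevant entries, assuming a stock FOP setup and the values from the question:
<fop version="1.0">
  <!-- resolution (dpi) assumed for images that don't declare one -->
  <source-resolution>72</source-resolution>
  <!-- resolution (dpi) used when FOP itself rasterizes content -->
  <target-resolution>300</target-resolution>
</fop>
Note that target-resolution mainly affects bitmaps FOP generates itself; it does not resample already-embedded images, which may be why the combinations above changed nothing.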

Do we really need separate thumbnail images?

I understand the use of thumbnails in network applications, but assuming all the images are in the application itself (a photo application), do we still need thumbnail images for performance reasons, or is it fine for the device to resize the actual image at run time?
Since the question is too opinion-based, I am going to ask more quantitatively.
The images are 500x500, about 200-300kb size jpg.
There will be about 200 images.
It is targeted for iPhone 4 and higher, so that would be the minimum hardware spec users will have.
The maximum memory used should not exceed 20% of the device's capacity.
Will the application in this case need separate thumbnail images?
It depends on your application. Just test performance and memory usage on device.
If you show a lot of images and/or they change very quickly (like when you are scrolling UITableView with a lot of images) you will probably have to use thumbnails.
UPDATE:
When an image is shown it takes width * height * 3 bytes of memory (width * height * 4 for images with an alpha channel). Ten 2592 x 1936 photos stored in memory will require about 200 MB of RAM. That is too much. You definitely have to use thumbnails.
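Running the same formula on the numbers in the question (shell arithmetic; assumes 4 bytes per pixel and that all images stay decoded at once):
# one decoded 500 x 500 RGBA image, and all 200 of them
echo $((500 * 500 * 4))         # 1000000 bytes, about 1 MB each
echo $((500 * 500 * 4 * 200))   # 200000000 bytes, about 200 MB total
So even these small images need thumbnails (or aggressive unloading) if many are displayed or cached at once.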
Your question is a bit lacking in detail, but I assume you're asking whether, for say a photo album app, you can just throw around full-size UIImages and let a UIImageView resize them to fit the screen, or whether you need to resize them yourself.
You absolutely need to resize.
An image taken by an iPhone camera will be several megabytes in compressed file size, more in actual bytes used to represent pixels. The dimensions of the image will be far greater than the screen dimensions of the device. The memory use is very high, particularly if you're thinking of showing multiple "thumbnails". It's not so much a CPU issue (once the image has been rendered it doesn't need re-rendering) but a memory one, and you're severely memory constrained on a mobile device.
Each doubling in size of an image (e.g. from a 100x100 to a 200x200) represents a four-fold increase in the memory needed to hold it.

File format limits in pixel size for png images?

Is there a file format limit to the PNG pixel size?
I am trying to visualize a 30,000 x 30,000 pixel PNG image with Firefox, but I get an error. The image opens correctly in Preview.app, although very slowly. The file size is not big, just around 3 MiB (1-bit black/white image). I am wondering if there's a technical file-format reason for this.
A naive implementation would require the image to be decompressed to 2.7 GB in memory before it is displayed. This would clearly be too large for a normal 32-bit program to handle.
The PNG specification places almost no limit on the width and height of an image: these are 4-byte integers, and the spec caps each at 2^31 - 1 (2147483647) only to accommodate languages that have trouble with unsigned 4-byte values. http://www.libpng.org/pub/png/spec/iso/index-object.html#11IHDR
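You can check those two fields directly in a file: width and height are the first two big-endian 32-bit values of the IHDR chunk, at byte offsets 16 and 20. A quick sketch with xxd:
# dump the 8 bytes holding the IHDR width and height
xxd -s 16 -l 8 image.png
# a 30000 x 30000 image shows 0x00007530 for both fields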
That is an odd image, but I am sure there is a reason to have such a huge image.
I can't really address the size limit, but I can address a way to get around it. Create a set of tiles of some size, and then as the user scrolls, bring tiles into view using CSS to position them correctly. You might even be able to get away with bringing up all the tiles at once, with a slew of smaller images.
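Generating the tiles can be a one-liner with ImageMagick (a sketch; the 1024x1024 tile size and the filenames are arbitrary choices):
# cut the big image into 1024x1024 tiles, numbered left-to-right, top-to-bottom
convert big.png -crop 1024x1024 +repage tile_%04d.png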
But I am curious: what is the application that needs such a huge image displayed without scaling it down?
Erick

Reducing the file size of a very large images, without changing the image dimensions

Consider an application handling uploading of potentially very large PNG files.
All uploaded files must be stored to disk for later retrieval. However, the PNG files can be up to 30 MB in size, while disk storage limitations give a maximum per-file size of 1 MB.
The problem is to take an input PNG of file size up to 30 MB and produce an output PNG of file size below 1 MB.
This operation will obviously be lossy, and a reduction in image quality, colors, etc. is not a problem. However, one thing that must not be changed is the image dimensions. Hence, an input file of dimensions 800x600 must produce an output file of dimensions 800x600.
The requirements outlined above are strict and cannot be changed.
Using ImageMagick (or some other open source tool) how would you go about reducing the file size of input PNG-files of size ~30 MB to a maximum of 1 MB per file, without changing image dimensions?
PNG is not a lossy image format, so you would likely need to convert the image into another format-- most likely JPEG. JPEG has a settable "quality" factor-- you could simply keep reducing the quality factor until you got an image that was small enough. All of this can be done without changing the image resolution.
Obviously, depending on the image, the loss of visual quality may be substantial. JPEG does best for "true life" images, such as pictures from cameras. It does not do as well for logos, screen shots, or other images with "sharp" transitions from light to dark. (PNG, on the other hand, has the opposite behavior-- it's best for logos, etc.)
However, at 800x600, it likely will be very easy to get a JPEG down under 1MB. (I would be very surprised to see a 30MB file at those smallish dimensions.) In fact, even uncompressed, the image would only be around 1.4MB:
800 pixels * 600 pixels * 3 bytes of color per pixel = 1,440,000 bytes = 1.4MB
Therefore, you only need a 1.4:1 compression ratio to get the image down to 1MB. Depending on the type of image, the PNG compression may very well provide that level of compression. If not, JPEG almost certainly could-- JPEG compression ratios on the order of 10:1 are not uncommon. Again, the quality / size of the output will depend on the type of image.
Finally, while I have not used ImageMagick in a little while, I'm almost certain there are options to re-compress an image using a specific quality factor. Read through the docs, and start experimenting!
EDIT: Looks like it should, indeed, be pretty easy with ImageMagick. From the docs:
convert input.png -quality 75 output.jpg
Just keep playing with the quality value until you get a suitable output.
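Depending on your ImageMagick version, you may also be able to target the file size directly with the jpeg:extent define, which searches for a quality setting that fits under the limit (worth verifying on your build):
convert input.png -define jpeg:extent=1MB output.jpg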
Your example is troublesome because a 30MB image at 800x600 resolution is storing 500 bits per pixel. Clearly wildly unrealistic. Please give us real numbers.
Meanwhile, the "cheap and cheerful" approach I would try would be as follows: scale the image down by a factor of 6, then scale it back up by a factor of 6, then run it through PNG compression. If you get lucky, you'll reduce image size by a factor of 36. If you get unlucky the savings will be more like 6.
pngtopnm big.png | pnmscale -reduce 6 | pnmscale 6 | pnmtopng > big-smooth.png
If that's not enough you can toss a ppmquant in the middle (on the small image) to reduce the number of colors. (The examples are netpbm/pbmplus, which I have always found easier to understand than ImageMagick.)
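For instance, the same pipeline with a quantization step on the shrunken image might look like this (a sketch; 64 colors is an arbitrary choice):
pngtopnm big.png | pnmscale -reduce 6 | ppmquant 64 | pnmscale 6 | pnmtopng > big-reduced.png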
To know whether such a solution is reasonable, we have to know the true numbers of your problem.
Also, if you are really going to throw away the information permanently, you are almost certainly better off using JPEG compression, which is designed to lose information reasonably gracefully. Is there some reason JPEG is not appropriate for your application?
Since the size of an image file is directly related to the image dimensions and the number of colours, you seem to have only one choice: reduce the number of colours.
And ~30MB down to 1MB is a very large reduction.
It would be difficult to achieve this ratio with a conversion to monochrome.
It depends a lot on what you want at the end. I often like to reduce the number of colors while preserving the size; in many cases the reduced color count does not matter. Here is an example of reducing the colors to 254:
convert -colors 254 in.png out.png
You can try the pngquant utility. It is very simple to install and to use. And it can compress your PNGs a lot without visible quality loss.
Once you install it try something like this:
pngquant yourfile.png
pngquant --quality=0-70 yourfile.png
For my demo image (generated by imagemagick) the first command reduces 350KB to 110KB, and the second one reduces it to 65KB.
Step 1: Decrease the image to 1/16 of its original size.
Step 2: Decrease the amount of colors.
Step 3: Increase the size of the image back to its original size.
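In ImageMagick terms those three steps might look like this (a sketch; it reads "1/16 of its size" as a quarter of each dimension, and 64 colors is an arbitrary choice):
# shrink to 25% per side, quantize, then scale back to the original size
convert in.png -resize 25% -colors 64 -resize 400% out.png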
I know you want to preserve the pixel size, but can you reduce the pixel size and adjust the DPI stored with the image so that the display size is preserved? It depends on what client you'll be using to view the images, but most should observe it. If you are using the images on the web, then you can just set the pixel size of the <img> tag.
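A sketch of that idea with ImageMagick (it assumes the original was tagged at 72 dpi; halving the pixels while doubling the stored density keeps the physical print size the same):
# half the pixel dimensions, double the resolution metadata
convert in.png -resize 50% -units PixelsPerInch -density 144 out.png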
It depends on the type of image: is it a real-life picture or a computer-generated image?
For real-life images PNG will do very little; it might not compress at all. Use JPG for those. If the image has a limited number of different colors (it can have 24-bit depth, but the number of unique colors is low), PNG can compress quite nicely.
PNG is basically an implementation of zip for images, so if a lot of pixels are the same you can get a rather nice compression ratio. If you need lossless compression, don't do any resizing.
Use optipng; it reduces the file size without loss.
http://optipng.sourceforge.net/
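Typical usage (-o7 is the slowest, most thorough optimization level; lower levels run faster):
optipng -o7 file.png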
Try ImageOptim (https://imageoptim.com/mac); it is free and open source.
If you want to reduce the image file size in Ubuntu, you can try GIMP.
I have tried a couple of image editing apps in Ubuntu and this seemed to be the best among them.
Installation:
Open terminal
Type: sudo apt install gimp-plugin-registry
Enter your admin password when asked. You'll need a net connection for this.
Once installed, open the image with the GIMP image editor. Then go to: File > Export As > click the 'Export' button.
You will get a small window; check the box "Show preview in image window". Once you check this option, you will see the current file size along with a quality slider.
Adjust the quality level to increase/decrease the file size.
Once you are done adjusting, click the 'Export' button again to save the file.
Right-click on the image, select Open with Paint, click Resize, select Pixels, and change the horizontal value to 250 or 200.
That's all there is to it. This is the fastest way for those who are using Windows XP or Windows 7.

Tips for reducing Core Animation memory usage

So here's the situation:
I have a CALayer that is the size of my screen, and I'm setting the contents property to a 2 MB JPEG that's roughly 3500 x 2000 pixels in size with a resolution of 240 ppi.
I'd expect there to be a slight overhead involved in using the CALayer, but my sample application (which does only exactly what's above) shows usage of about 33 MB RSIZE, 22 MB RPVT and 30 MB RSHRD. I've noticed that these numbers are much better when running the application as a 64-bit process than as a 32-bit one.
I'm doing everything I can think of in the real application that this example comes from, including resampling my CGImageRefs to only be the size of the layer, but this seems extraneous to me - shouldn't it be simpler?
Has anyone come across good methods to reduce the amount of memory CALayers and CGImageRefs use?
First, you're going to run into problems with an image that size in a plain CALayer, because you may hit the texture size limit of 2048 x 2048 (depending on your graphics card). Applications like this are what CATiledLayer is designed for. Bill Dudney has some code examples on his blog (a large PDF), as well as with the code that accompanies his book.
It isn't surprising to me that such a large image would take so much memory, given that it will be stored as an uncompressed bitmap in your CGImage. Aside from scaling your image to the resolution you need, and tiling it with CATiledLayer, I can't think of much. Are you releasing the CGImageRef once you've assigned it to the contents of the CALayer? You won't need to hang onto it at that point.
