Random access to a huge image?

I'm looking for a way to store a huge image so that small regions can be accessed randomly, without loading the whole image into memory. The focus is on speed and a low memory footprint.
Let's say the image is 50k × 50k 32-bit pixels, amounting to ~9 GB (currently it is stored as a single TIFF file). One thing I found looks quite promising: HDF5 lets me store and randomly access (integer) arrays of arbitrary size and dimension, so I could put my image's pixels into such a database.
What (other) (simpler?) methods for efficient random pixel access are out there?
I would really like to store the image as a whole and avoid creating tiles, for several reasons: the requested region dimensions are not constant or predictable; I want to load only the pixels I really need into memory; and I would like to avoid the processing effort involved in tiling the image.
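For reference, here is a minimal sketch of the HDF5 route using the h5py Python bindings (the file name, chunk size, and example region below are my own placeholders). The point of chunked storage is that a read of a small region only touches the chunks that overlap it:

    import h5py
    import numpy as np

    # One-time conversion: store the image as a chunked HDF5 dataset.
    # The chunk size (e.g. 256x256) is a tuning knob: a region read
    # only loads the chunks that overlap the requested window.
    with h5py.File("image.h5", "w") as f:
        dset = f.create_dataset("pixels", shape=(50000, 50000),
                                dtype=np.uint32, chunks=(256, 256))
        # ... fill 'dset' here, e.g. strip by strip from the TIFF ...

    # Later, random access: only the relevant chunks hit memory.
    with h5py.File("image.h5", "r") as f:
        region = f["pixels"][10000:10200, 20000:20500]  # 200x500 window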

Related

Who stores the image in zeros and ones

I want to know what stores images, videos, or audio as zeros and ones. I know that an image is stored as zeros and ones by encoding the color of each pixel in binary, and something similar happens for other types of data. But my question is: if I create an image in some image-editing application and save it on my computer, what or who actually stores the colors of each pixel in binary form?
There are two types of images:
acquired from a device (camera, scanner), which measures the amount of light in the RGB channels for every pixel and converts it to a binary value; writing the values to memory is handled by the device driver.
synthesized from a geometric model with surface and lighting characteristics by pure computation, where every pixel value is produced "out of nothing"; this is done by a rendering program.
After the image has been written to RAM, it can be transferred to disk for long-term storage.
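As a toy illustration of that last step (my own minimal sketch in Python, not what any particular application does), "storing colors in binary" is just writing each pixel's color components out as bytes:

    # A 2x2 image; each pixel is three bytes: (red, green, blue).
    pixels = [
        (255, 0, 0), (0, 255, 0),      # red, green
        (0, 0, 255), (255, 255, 255),  # blue, white
    ]

    # The application (or its image library) converts each color
    # component to a byte and writes it out; the OS and disk driver
    # take care of getting those bits onto the storage medium.
    with open("tiny.raw", "wb") as f:
        for r, g, b in pixels:
            f.write(bytes([r, g, b]))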

Windows Phone 7 memory management

I'd like to know if there are any specific strategies for handling memory on Windows Phone, especially with respect to image caching. I have a very graphics-intensive Silverlight app which needs to keep the graphics it retrieves from the internet, and the user needs to be able to roam about freely, but the memory requirement becomes quite huge after using the app for a couple of minutes.
I have tried setting the image's UriSource to null but I need to maintain the image backgrounds when I come back to the page. I'm at a loss because there isn't much information on the internet. The inbuilt profiling showed me "Texture Memory Dominant" and asked me to Analyze Heap Memory to resolve the issue, but I'm still clueless about these.
Any pointers to move forward?
My answer will be general, like your question. I presume that you know for sure that the problem is in images. (A simple ListBox with a few hundred text items can cost you many MB.)
If you search the web you'll find plenty of links such as this one. But a general analysis is easy to do.
Take an image of the WP7 screen size, i.e. 480x800. A 32-bit bitmap (which I suppose is what WP7 uses once the image is opened) takes roughly 1.5 MB (a simple multiplication: 480 × 800 × 4 bytes).
The same image as a JPEG file can be 10x smaller (at high-quality compression) or even less.
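Spelling out that multiplication (a quick sketch; 4 bytes per pixel is the 32-bit assumption from above):

    width, height = 480, 800     # WP7 screen resolution
    bytes_per_pixel = 4          # 32-bit bitmap, assumed above

    decoded = width * height * bytes_per_pixel
    print(decoded / 2**20)       # ~1.46 MB once decoded in memory

    # A high-quality JPEG of the same image is often ~10x smaller
    # on disk, but that says nothing about its decoded size.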
Now consider what is done behind the scenes when you use the construction
<Image Source="http://..."/>
(in the absence of any information from you, this is what I suppose you use).
WP7 downloads the image and adds it to the cache. The cache apparently tracks the use of the Uri pointing to the image.
Next, the image gets opened, i.e. converted to a bitmap at the native image size. The image is downsampled in this process if it would exceed the maximum WP7 texture size.
You can customize the bitmap size as described here. If you care about quality, you should use a downscale factor of 2, 4, or 8; for JPEG these factors are by far the fastest option. (Well, I have no idea if you know the image resolution before the image gets loaded into the Image control. It is not too difficult to get this info from a jpg file, but right now I have no idea how it can be done easily on WP7.)
The bitmap gets freed (my speculation) if the control's Source is set to null. The downloaded image is purged from the cache when the Uri is set to null. (This is reported on the web plenty of times.)
If you take all this info together, it should be possible to (kind of) control your use of the image cache. You can roughly estimate each image's size and decide which images remain in the cache. Maybe it will need some tricks, such as storing Uri objects in your own structures and releasing them as needed, as sketched below. I am not saying this is easy to do, but it is certainly possible.
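To illustrate the general idea (a language-agnostic sketch in Python with hypothetical names, not WP7 API code): track a rough decoded-size estimate per Uri yourself, and release the least recently used entries once a budget is exceeded.

    from collections import OrderedDict

    class ImageBudgetCache:
        # Tracks estimated decoded sizes per Uri and evicts the least
        # recently used entries once a memory budget is exceeded.
        def __init__(self, budget_bytes):
            self.budget = budget_bytes
            self.entries = OrderedDict()   # uri -> estimated bytes
            self.total = 0

        def add(self, uri, width, height, bytes_per_pixel=4):
            size = width * height * bytes_per_pixel  # rough estimate
            self.total += size
            self.entries[uri] = size
            self.entries.move_to_end(uri)
            while self.total > self.budget and len(self.entries) > 1:
                old_uri, old_size = self.entries.popitem(last=False)
                self.total -= old_size
                self.release(old_uri)

        def release(self, uri):
            # On WP7 this is where you would set the control's
            # Source / UriSource to null so the bitmap can be freed.
            print("release", uri)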

Avoid massive memory usage in OpenLayers with image overlay

I am building a map system that requires a large image (native 13K pixels wide by 20K pixels tall) to be overlaid onto an area of the US covering about 20 kilometers. I have the file size of the image in JPG format down to 23 MB and it loads onto the map fairly quickly. I can zoom in and out and it looks great, and it's even located exactly where I need it to be (geographically). However, that 23 MB file is causing Firefox to consume an additional 1 GB of memory! I am using the Memory Restart extension for Firefox, and without the image overlay the memory usage is about 360 MB to 400 MB, which seems to be about the norm for regular usage, browsing other websites etc. But when I add the image layer, the memory usage jumps to 1.4 GB. I'm at a complete loss to explain WHY that is and how to fix it. Any ideas would be greatly appreciated.
Andrew
The file only takes up 23 MB as a JPEG. However, the JPEG format is compressed, and any program (such as Firefox) that wants to actually render the image has to decompress it and store every pixel in memory. You have 13k by 20k pixels, which makes 260M pixels. Figure at least 3 bytes of color info per pixel, and that's 780 MB; it might even be using 4 bytes, to keep each pixel aligned at a word boundary, which would be 1040 MB.
As for how to fix it, I don't know if you can, except by reducing the image size. If the image contains only a small number of colors (for instance, a simple diagram drawn in a few primary colors), you might be able to save it in a format that uses indexed colors, and then Firefox might be able to render it using less memory per pixel. It all depends on the rendering code.
Depending on what you're doing, perhaps you could set things up so that the whole image is at lower resolution, then when the user zooms in they get a higher-resolution image that covers less area.
Edit: to clarify that last bit: right now you have the entire photograph at full resolution, which is simple but needs a lot of memory. An alternative would be to have the entire photograph at reduced resolution (maximum expected screen resolution), which would take less memory; then when the user zooms in, you have the image at full resolution, but not the entire image - just the part that's been zoomed in (which likewise needs less memory).
I can think of two approaches: break up the big image into "tiles" and load the ones you need (not sure how well that would work), or use something like ImageMagick to construct the smaller image on-the-fly. You'd probably want to use caching if you do it that way, and you might need to code up a little "please wait" message to show while it's being constructed, since it could take several seconds to process such a large image.
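A sketch of the second idea (ImageMagick's classic convert tool does have a -crop operator; the paths, coordinates, and window size here are placeholders). The server-side process still decodes the big image, but the browser only ever receives a small crop:

    import subprocess

    def extract_region(src, dst, x, y, width, height):
        # Cut a width x height window at offset (x, y) out of a large
        # image using ImageMagick's -crop geometry (WxH+X+Y) operator.
        geometry = "%dx%d+%d+%d" % (width, height, x, y)
        subprocess.run(["convert", src, "-crop", geometry, dst],
                       check=True)

    # e.g. a 1024x768 window starting at pixel (4000, 6500):
    extract_region("overlay.jpg", "view.jpg", 4000, 6500, 1024, 768)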

How to create extremely large vectors

I want to store the intensity of each pixel of an image in an n*n matrix. I am currently storing it in a vector, but for extremely large dimensions the program crashes because it runs out of memory. How do I solve this problem?
If your RAM is too small to hold all the information, you will need to use other means of storage, maybe swapping to your hard disk. What kind of information is the intensity? A floating-point number? How many pixels do your large images have? I suspect that your storage class simply creates too much overhead. Which language are you using? Can you supply some code snippets?
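One concrete way to fall back to the hard disk (a sketch in Python with numpy, since the question doesn't name a language; the side length and dtype are placeholders) is a memory-mapped array, where the OS pages in only the parts you actually touch:

    import numpy as np

    n = 50000  # side length of the n*n intensity matrix (placeholder)

    # File-backed array: the data lives on disk and the OS pages in
    # only the regions you touch, instead of holding n*n floats in RAM.
    intensity = np.memmap("intensity.dat", dtype=np.float32,
                          mode="w+", shape=(n, n))

    intensity[123, 456] = 0.5        # write one pixel's intensity
    block = intensity[0:100, 0:100]  # read a small block
    intensity.flush()                # push pending writes to disk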

Data Structure for large and detailed maps

Does anyone have a recommendation of data structures for relatively large maps with high resolution, something like 400 miles x 400 miles at 10-15 ft resolution? Using a 2D array, that would be roughly 200K x 200K cells.
The map only needs to store the elevation and terrain (earth, water, rock, etc.), and I don't think storing tiles is a good strategy.
Thank you!
It depends on what you need to do with it: view it, store it, analyze it, etc...
One thing I can say, however, is that the file will be HUGE at your stated resolution, and you should consider splitting it up into at least a few tiles, and ideally into 1x1-mile tiles.
The list of raster formats supported by GDAL could serve as a good starting point for exploring various formats, keeping in mind that many software packages (GRASS, ArcGIS, etc.) use GDAL to read and write most raster formats. Note also that some file formats have maximum sizes which may prevent you from using them for your very large file.
For analysis and non-viewable storage, HDF5 format might be of interest.
If you want people to see the data as a map over the web, then creating small image tile overlays will be the fastest approach to sharing such a large dataset.
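For example, the GDAL Python bindings can read just a window of a large raster without touching the rest (the file name and window offsets below are placeholders):

    from osgeo import gdal

    ds = gdal.Open("elevation.tif")
    band = ds.GetRasterBand(1)

    # ReadAsArray(xoff, yoff, xsize, ysize) reads only the requested
    # window, so the full raster never has to fit in memory.
    window = band.ReadAsArray(100000, 100000, 512, 512)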
