Loading a single pixel from an image without loading the entire image

I want to load a single pixel from a large satellite image for an operation. Loading the entire image is a performance concern, so I want to create a solution that only loads the required pixel. Would this even be possible? Preferred technologies are .NET and shell scripting, though any appropriate technologies which can solve this problem are welcome. I also have control over the creation of the image file, so other file formats, etc. are possible.
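Since you control how the file is created, one hedged sketch of an approach: re-save the satellite image as raw, uncompressed pixel data with a known layout, then seek directly to the byte offset of the pixel you need. The width, bytes-per-pixel, and path below are illustrative assumptions, not values from your setup.

```csharp
using System.IO;

// A sketch of the seek-based approach, assuming the image is re-saved as
// raw, uncompressed pixel data: known width, 3 bytes per pixel (RGB),
// rows stored top to bottom. Width and pixel layout are assumptions.
static class RawPixelReader
{
    const int Width = 40000;     // image width in pixels (assumed)
    const int BytesPerPixel = 3; // 24-bit RGB (assumed)

    public static byte[] ReadPixel(string path, int x, int y)
    {
        long offset = ((long)y * Width + x) * BytesPerPixel;
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            fs.Seek(offset, SeekOrigin.Begin);
            var pixel = new byte[BytesPerPixel];
            if (fs.Read(pixel, 0, BytesPerPixel) != BytesPerPixel)
                throw new EndOfStreamException("Pixel lies outside the file.");
            return pixel; // { R, G, B }
        }
    }
}
```

The same arithmetic works for an uncompressed BMP if you add the pixel-data offset from the header and account for BMP's usual bottom-up row order. Compressed formats like PNG or JPEG can't be addressed this way, since decoding any one pixel needs surrounding scanline or block context; a tiled format such as tiled TIFF is a middle ground if you need small regions rather than single pixels.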

Related

How to optimize images for SEO & Google's Pagespeed & Improve web-saving

Pretty much every PageSpeed test I do for any of my websites gives me the comment "Optimize images by lossless compressing image X", and fixing that often improves my page rank a lot.
I already save EVERY image with Photoshop's 'Save for Web', but I was wondering how I could "optimize images by compressing lossless" even more. As far as I know I'm already doing everything I can.
Really wondering..
Off-topic, but I noticed that Google's PageSpeed tests with a Retina device, since all my Retina images got loaded instead of the regular ones. Since these are larger than the display area, I got a 1/100 score on the mobile segment. Haha.
This was a real issue with many of my sites; however, I use the free version of Kraken to lossily compress all of my images, and this passes the Google test, thus boosting rankings!
https://kraken.io/web-interface
I must have used this for well over 10,000 images already!
The images you create in programs like Photoshop and Illustrator look amazing, but the file sizes are often very large. This is because the images are made in a format that makes them easier to manipulate in different ways. If you put these files on your website, it will be very slow to load. Optimizing your images for the web means saving or exporting your images in a web-friendly format, depending on what the image contains.
How does it work?
There are two forms of compression that we need to understand, Lossy and Lossless.
Images saved in a lossy format will look slightly different from the original image when uncompressed, though usually only on very close inspection. Lossy compression is good for the web because the images use a small amount of memory while remaining close enough to the original.
Images saved in a lossless format retain all the information needed to reproduce the original image exactly. For this reason these images carry a lot more data, and in return have a much larger file size.
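If you want to see the trade-off concretely, here is a small .NET sketch (file names are placeholders) that re-encodes the same image both ways and prints the resulting sizes:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

// Re-encode one image as lossless PNG and lossy JPEG, then compare sizes.
// "photo.jpg" is a placeholder path.
class CompressionComparison
{
    static void Main()
    {
        using (var img = Image.FromFile("photo.jpg"))
        {
            img.Save("out-lossless.png", ImageFormat.Png); // lossless, larger
            img.Save("out-lossy.jpg", ImageFormat.Jpeg);   // lossy, smaller
        }
        Console.WriteLine("PNG:  {0:N0} bytes", new FileInfo("out-lossless.png").Length);
        Console.WriteLine("JPEG: {0:N0} bytes", new FileInfo("out-lossy.jpg").Length);
    }
}
```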
We can also optimize images for the web by saving them at the appropriate dimensions. Resizing the image on the page with CSS may look right, but the browser still downloads the entire original file, then resizes it and displays it.
Can you imagine taking a poster-sized image and using it as a thumbnail? The little 20px by 20px image would take as long to load as the original poster, when we could have been loading a 20px image the whole time.
How to Optimize Images?
In simple terms, optimizing your images means removing all the unnecessary data saved within each image to reduce its file size, based on where it is being used on your website. Optimizing images for the web can reduce your total page size by up to 80%.
Full optimization of images can be quite an art to perfect as there are such a wide variety of images you might be dealing with. Here are the most common ways to optimize your images for the web.
Reduce the white space around images – some developers use whitespace for padding which is a big no-no. Crop your images to remove any whitespace around the image and use CSS to provide padding.
Use proper file formats. If you have icons, bullets, or any graphics without too many colors, use a format such as GIF and save the file with a lower color count. If you have more detailed graphics, use the JPG file format and reduce the quality.
Save your images in the proper dimensions. If you are having to use HTML or CSS to resize your images, stop right there. Save the image in the desired size to reduce the file size.
To resize your images you will have to use some kind of program. For basic compression you can use a simple editing program such as GIMP. For more advanced optimization you will have to save specific files in Photoshop, Illustrator, or Fireworks. A scripted alternative is sketched below.
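For bulk work, the resize-and-recompress step can be scripted. A minimal .NET sketch using System.Drawing, with illustrative file names and an assumed quality value:

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.Linq;

// Resize an image to its display size and re-encode as JPEG at a lower
// quality. Paths and the quality value are illustrative.
class ImageOptimizer
{
    static void ResizeAndSave(string inputPath, string outputPath,
                              int targetWidth, int targetHeight, long jpegQuality)
    {
        using (var source = Image.FromFile(inputPath))
        using (var resized = new Bitmap(targetWidth, targetHeight))
        using (var g = Graphics.FromImage(resized))
        {
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(source, 0, 0, targetWidth, targetHeight);

            // Locate the JPEG encoder and pass it a quality setting.
            var jpegEncoder = ImageCodecInfo.GetImageEncoders()
                .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
            var parameters = new EncoderParameters(1);
            parameters.Param[0] = new EncoderParameter(Encoder.Quality, jpegQuality);
            resized.Save(outputPath, jpegEncoder, parameters);
        }
    }
}
```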

XNA Texture loading speed (for extra large Texture sizes)

[Skip to the bottom for the question only]
While developing my XNA game I came up against another horrible XNA limitation: Texture2Ds (at least on my PC) can't have dimensions greater than 2048*2048. No problem: I quickly wrote my own custom texture class, which uses a [System.Drawing.] Bitmap by default, splits the texture into smaller Texture2Ds as needed, and displays them as appropriate.
When I made this change I also had to update the texture-loading method. In the old version I loaded Texture2Ds with Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures above the limit, so if I tried to load, say, a 4092*2048 PNG file I ended up with a 2048*2048 Texture2D in my app. I therefore switched to loading images with [System.Drawing.] Image.FromFile and casting the result to a Bitmap, which doesn't seem to have any such limitation (later converting this Bitmap to a list of Texture2Ds).
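For reference, that tile-by-tile Bitmap-to-Texture2D conversion can be sketched roughly as follows (a hedged sketch, assuming 32bpp ARGB source data and XNA 4.0; the caller is assumed to clamp tiles to the 2048*2048 limit):

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using Microsoft.Xna.Framework.Graphics;
using XnaColor = Microsoft.Xna.Framework.Color;

// Copy one tile of a System.Drawing Bitmap into an XNA Texture2D.
// Assumes 32bpp ARGB source data; tiles must fit the Texture2D limit.
static class TextureSplitter
{
    public static Texture2D TileToTexture(GraphicsDevice device, Bitmap bmp, Rectangle tileRect)
    {
        BitmapData data = bmp.LockBits(tileRect, ImageLockMode.ReadOnly,
                                       PixelFormat.Format32bppArgb);
        var pixels = new XnaColor[tileRect.Width * tileRect.Height];
        try
        {
            var bytes = new byte[data.Stride * tileRect.Height];
            Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
            for (int y = 0; y < tileRect.Height; y++)
                for (int x = 0; x < tileRect.Width; x++)
                {
                    int i = y * data.Stride + x * 4; // bytes are B,G,R,A in memory
                    pixels[y * tileRect.Width + x] =
                        new XnaColor(bytes[i + 2], bytes[i + 1], bytes[i], bytes[i + 3]);
                }
        }
        finally { bmp.UnlockBits(data); }

        var texture = new Texture2D(device, tileRect.Width, tileRect.Height);
        texture.SetData(pixels);
        return texture;
    }
}
```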
The problem is that loading the textures this way is noticeably slower, because now even images under the 2048*2048 limit get loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to check an image file's dimensions (width and height) before loading it into my application: if it is under the texture limit I can load it straight into a Texture2D, without loading it into a Bitmap and converting it into a single-element Texture2D list.
Is there any (clean and ideally very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I would guess the slowest part here is opening and seeking the file (probably hardware-bound on HDDs), not streaming its contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
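For PNG files specifically, the dimensions sit at fixed offsets inside the IHDR chunk, so a header-only read might look like this (a sketch for well-formed PNGs only; JPEG and GIF store their sizes differently):

```csharp
using System.IO;

// Read a PNG's dimensions from its first 24 bytes: an 8-byte signature,
// then the IHDR chunk with big-endian width at offset 16 and height at 20.
static class PngHeader
{
    public static void GetDimensions(string path, out int width, out int height)
    {
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            var buf = new byte[24];
            if (fs.Read(buf, 0, 24) != 24 ||
                buf[1] != (byte)'P' || buf[2] != (byte)'N' || buf[3] != (byte)'G')
                throw new InvalidDataException("Not a valid PNG file.");

            // Big-endian 32-bit width and height inside the IHDR chunk.
            width  = (buf[16] << 24) | (buf[17] << 16) | (buf[18] << 8) | buf[19];
            height = (buf[20] << 24) | (buf[21] << 16) | (buf[22] << 8) | buf[23];
        }
    }
}
```

With a check like that in place, files under the limit can go straight through Texture2D.FromStream(), and only oversized ones take the slower Bitmap path.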

Are there any benefits to using bitmaps?

I'm porting some CF 2.0 VB.Net apps to a newer version of a handset that has twice the screen resolution, so I have to double the dimensions of everything, otherwise it all gets squished up into the top left-hand corner of the screen.
One screen had a bitmap that was 250KB in size, and after I doubled the dimensions it naturally blew out to 1MB. This isn't great on a handheld, so I fired up IrfanView and converted it to a GIF. The GIF was only 60KB, with no discernible change in image quality.
To me it seems a no-brainer: convert all bitmaps to GIF (or JPG) and get the same results for a fraction of the disk space (and probably quicker form-loading times).
But does anyone know of a situation where you would use a bitmap in preference to a GIF/JPEG? I cannot find any.
I really can't think of any realistic example where you would prefer a bitmap to a GIF. Since GIF is a lossless format, you lose no information when storing images, so after reading the file in your app you will have the same image data as if you had read a bitmap. And like you said: the file will be smaller, and so will probably be read from disk faster.
JPEG is different because it's a lossy format, meaning you will lose information when storing images in it. You will need to decide if the loss of information is meaningful in your app.
Bitmaps would be preferable if and only if reading files from disk were faster than decompressing the file in memory.
To be precise, you would prefer bitmaps when storing images in main memory, so you can work easily on the data in your code. That is actually what you most likely already have once you've loaded a file using an image library.
To cut a long story short, a BMP is stored as a series of pixels along with their colours, which is useful if you want to do things such as pattern recognition or movement detection (a small illustration follows below).
Bitmaps are typically used for their convenience - you can knock one up in paint without having specialist graphics software.
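To make the raw-pixel point concrete, here is a toy sketch of crude movement detection over two same-sized frames (GetPixel is slow; production code would use LockBits, but the idea is the same):

```csharp
using System;
using System.Drawing;

// Compare two frames pixel-by-pixel and report what fraction changed.
// Assumes both bitmaps have identical dimensions; threshold is arbitrary.
class MotionDetector
{
    static double ChangedFraction(Bitmap a, Bitmap b, int threshold)
    {
        int changed = 0;
        for (int y = 0; y < a.Height; y++)
            for (int x = 0; x < a.Width; x++)
            {
                Color pa = a.GetPixel(x, y);
                Color pb = b.GetPixel(x, y);
                int diff = Math.Abs(pa.R - pb.R) + Math.Abs(pa.G - pb.G)
                         + Math.Abs(pa.B - pb.B);
                if (diff > threshold) changed++;
            }
        return (double)changed / (a.Width * a.Height);
    }
}
```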

What is the best way to dynamically resize images?

The problem:
We have large product images that we want thumbnails of at various sizes, but we don't want to be stuck batch-processing the images in Photoshop. We want a dynamic way of resizing images that won't add extra load time while an image is processed on the backend.
Amazon does this somehow with their ecommerce solution. When you upload an image it resizes the image in square format and then gives you every size imaginable, e.g. 150x150, 149x149, etc. Starting at the largest size of the image, if you upload a 1024x900 image it will resize it to 1023x899, 1023x1203 (adding white space where needed), and so on at every pixel size until it gets to 1x1px, then somehow store all the images on the server (if it even does that).
"there's got to be a better way"
Any suggestions on the best way to handle image resize on the fly?
Dynamic image processing can be incredibly fast, and it's a much better solution than generating every possible combination of sizes for an uploaded image.
The open-source ImageResizing.Net library allows dynamic crop/zoom and resizing, and with the WIC plugin can often have round-trip times of less than 20ms. That's hard to beat. It also offers disk caching and Amazon CloudFront & S3 support if you want to scale with the big folks. It's used by 20K-60K websites, and some servers host upwards of 20TB of images.
I'm pretty sure Amazon uses cached dynamic image resizing for their eCommerce solution. Pre-generating image versions is a very 2001-era solution.
[full disclosure: I'm the author.]
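For anyone wanting to roll the cache-then-resize approach by hand rather than use a library, a minimal sketch (this is not the ImageResizing.Net API; paths and the naming scheme are illustrative):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

// Resize on the first request, then serve the cached file afterwards.
static class ResizeCache
{
    public static string GetResized(string sourcePath, string cacheDir, int width, int height)
    {
        string cachePath = Path.Combine(cacheDir,
            Path.GetFileNameWithoutExtension(sourcePath) + "_" + width + "x" + height + ".jpg");

        // Reuse the cached version when it exists and is newer than the source.
        if (File.Exists(cachePath) &&
            File.GetLastWriteTimeUtc(cachePath) >= File.GetLastWriteTimeUtc(sourcePath))
            return cachePath;

        using (var source = Image.FromFile(sourcePath))
        using (var resized = new Bitmap(width, height))
        using (var g = Graphics.FromImage(resized))
        {
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(source, 0, 0, width, height);
            Directory.CreateDirectory(cacheDir);
            resized.Save(cachePath, ImageFormat.Jpeg);
        }
        return cachePath;
    }
}
```

The first request for a given size pays the resize cost once; every request after that is a plain static-file read, which is essentially what the cached dynamic services do at scale.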

dealing with large number of pictures

My program saves images from a camera at a rate of about 30fps.
Moving these images, or browsing them in Windows Explorer, takes a long time.
My question is:
Would storing them as a video file be a better approach, so that moving the files wouldn't take so long? (If so, how do I open a large video file and fetch a specific frame number quickly? Would that approach be faster?)
It's very unlikely that a video file will load or seek any faster than individual image files. Not to mention it will be a lot more difficult to handle than images, and will probably introduce a bunch of extra, unnecessary dependencies into your application.
If you really need a way to speed up image browsing, you should look into generating separate thumbnail images. Thumbnails are scaled-down copies of the original images, with significant reductions in size and quality, suitable for display purposes only. This makes the thumbnails significantly smaller than the originals, which speeds up their loading and rendering substantially. This trick is used all over graphics packages and file managers, and it lets you delay the expensive load of the full image until the user actually selects a particular thumbnail to open.
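A minimal sketch of that thumbnail generation in .NET (the size cap and the "_thumb" naming scheme are illustrative choices, not requirements):

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

// Write a small companion JPEG next to each captured frame so that
// browsing only touches the tiny files.
static class ThumbnailGenerator
{
    public static void CreateThumbnail(string imagePath, int maxEdge)
    {
        using (var source = Image.FromFile(imagePath))
        {
            // Preserve aspect ratio while capping the longest edge.
            double scale = (double)maxEdge / Math.Max(source.Width, source.Height);
            int w = Math.Max(1, (int)(source.Width * scale));
            int h = Math.Max(1, (int)(source.Height * scale));

            using (var thumb = new Bitmap(source, w, h))
            {
                string thumbPath = Path.Combine(
                    Path.GetDirectoryName(imagePath) ?? "",
                    Path.GetFileNameWithoutExtension(imagePath) + "_thumb.jpg");
                thumb.Save(thumbPath, ImageFormat.Jpeg);
            }
        }
    }
}
```

Generating the thumbnail once at capture time (or in a background pass) keeps the 30fps save path unaffected while making later browsing cheap.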
