Render images progressively in an MFC-based application

Browsers can render progressive images progressively, and images can only be decoded progressively if they were encoded progressively: for example, GIF or PNG images saved with the "interlaced" option, or JPEG images saved with the "progressive" option.
I want to render progressive images in my MFC-based application just as a browser does.
The Windows Imaging Component (WIC) provides the IWICProgressiveLevelControl interface for decoding an image progressively.
But I can't find any example showing how to stream and display an image progressively at the same time using IWICProgressiveLevelControl.
Any advice would be appreciated. Thanks.

There's a good sample here:
https://code.msdn.microsoft.com/Windows-Imaging-Component-3af3cd49
Once you've used IWICProgressiveLevelControl::SetCurrentLevel to select the scan, the decoder will behave normally but only use the scans up to and including the one you selected. So any call to CopyPixels, or any IWICBitmapSource component in your chain, will receive the fully decoded image at the selected scan level.
The trick, as demonstrated in the sample, is that you can't use IWICProgressiveLevelControl::GetLevelCount and select the max level immediately if you don't know whether the complete file is available. As the documentation for the sample states,
IWICProgressiveLevelControl allows you to control which progressive level of detail to use on the frame decode. It also allows you to query the total number of progressive levels in the file; however it is not recommended to use this method on JPEG images because the total count is not known until the entire image has been downloaded, defeating the purpose of progressive decode. Instead, this sample demonstrates the recommended practice of iteratively requesting increasing levels of detail until WIC returns WINCODEC_ERR_INVALIDPROGRESSIVELEVEL.
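As a rough illustration (not taken from the linked sample), a minimal sketch of that loop in C++ might look like the following. It assumes you already have an IWICImagingFactory and an IStream over the (possibly still downloading) file; error handling is trimmed and the actual drawing into the MFC view is only indicated in comments.

    // Sketch: iteratively decode increasing progressive levels with WIC.
    // Link against windowscodecs.lib. pFactory and pStream are assumed to exist.
    #include <wincodec.h>
    #include <atlbase.h>   // CComPtr

    void RenderProgressively(IWICImagingFactory* pFactory, IStream* pStream)
    {
        CComPtr<IWICBitmapDecoder> decoder;
        if (FAILED(pFactory->CreateDecoderFromStream(
                pStream, nullptr, WICDecodeMetadataCacheOnDemand, &decoder)))
            return;

        CComPtr<IWICBitmapFrameDecode> frame;
        if (FAILED(decoder->GetFrame(0, &frame)))
            return;

        CComPtr<IWICProgressiveLevelControl> progressive;
        if (FAILED(frame->QueryInterface(IID_PPV_ARGS(&progressive))))
            return;   // the frame is not progressively encoded

        for (UINT level = 0; ; ++level)
        {
            HRESULT hr = progressive->SetCurrentLevel(level);
            if (hr == WINCODEC_ERR_INVALIDPROGRESSIVELEVEL)
                break;        // no further scan is available (yet)
            if (FAILED(hr))
                return;

            // The frame now acts as a normal IWICBitmapSource limited to the
            // scans up to 'level': run it through an IWICFormatConverter,
            // CopyPixels into a DIB and blit it to the MFC view (or hand it
            // to Direct2D), then request the next level.
        }
    }

In a real application you would repeat this as more data arrives on the stream, redrawing whenever a new level becomes decodable.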

Related

Does JPEG compression destroy any embedded malicious code inside an image?

I'm trying to figure out the best way to "clean" images that come from an unauthorized source (app visitors) before opening them, similar to what WhatsApp does.
Scanning each image with antivirus software is probably not very efficient at large scale, so I came to the assumption that rewriting each incoming image by compressing it with JPEG could result in a clean image without malicious code inside it.
From what I have read so far, JPEG compression should destroy any hidden content and reorder the data structure of the image, which should result in a safe image.
What do you think? Am I on the right path to overcome this issue?
There is no code in a JPEG stream. In fact, I don't know of any image format that directs the decoder to execute code.
The worst thing I can think of would be a JPEG stream that, say, embedded malicious code in a thumbnail, COM, or APPn marker. Then another application would have to look for that image and load the code.
Even this requires something else to get onto your system to execute the JPEG "code", and it would be a lot of trouble for something that could be accomplished much more easily.
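For context, a small illustrative sketch (not part of the original answer) that walks the segment headers at the start of a JPEG stream: COM (FF FE) and APPn (FF E0 through FF EF) segments are exactly where thumbnails and other non-pixel payloads live, and re-encoding typically drops or rewrites them.

    // Illustrative sketch: enumerate JPEG metadata segments (COM / APPn), the
    // places where thumbnails and other non-pixel payloads are stored.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    void ListMetadataSegments(const std::vector<uint8_t>& jpeg)
    {
        // A well-formed JPEG starts with SOI (FF D8); each following header
        // segment is FF <marker> plus a 16-bit big-endian length that counts
        // the length field itself.
        size_t pos = 2;
        while (pos + 3 < jpeg.size() && jpeg[pos] == 0xFF)
        {
            const uint8_t marker = jpeg[pos + 1];
            if (marker == 0xD9 || marker == 0xDA)   // EOI or SOS: pixel data follows
                break;
            const uint16_t length =
                static_cast<uint16_t>((jpeg[pos + 2] << 8) | jpeg[pos + 3]);
            if (marker == 0xFE || (marker >= 0xE0 && marker <= 0xEF))
                std::printf("segment FF %02X carries %u bytes of non-pixel data\n",
                            marker, length - 2);
            pos += 2 + length;
        }
    }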

Better thumbnail creation of raw images

I'm building a web application (RoR) that manages images that are in raw image format. I need to create thumbnail/web versions of these images to be displayed on the site. Currently, I'm using imagemagick, which delegates to dcraw to produce the jpeg thumbnail. The problem I'm running into is that the thumbnail deviates from the look of the original; the image gets darker and the white balance is sometimes heavily shifted.
I'm assuming that the raw format's default settings can't be read by dcraw, and thus it's left guessing how to parameterize the raw conversion. I can play around with customizing these settings, but it seems that getting it right on one image causes others to be further off the mark.
Is there a better way to do this in order to get a result that more closely mimics what I might see in a raw viewer like Photoshop, or even Mac OS X Preview? Given that Mac OS X supports a variety of digital camera raw formats, is there any way to utilize the OS's ability to render preview images (especially considering that result is what is expected)?
The raw images that I'm using are 3FR and FFF files (both from Hasselblad).
I can post samples if people are interested.
Thanks
Look at "sips" and "Resizing images using the command line" to get you started.

HTML5 Canvas toDataURL 8 bit?

In a web app I am currently creating, the user has to provide images that get stored server-side in a database. To minimize server load I am handling image resizing client-side courtesy of the HTML5 Canvas, and getting the user to pre-approve the quality of the resized image.
The issue I have run into is this: the file size of the resized image is big. If I resize the same image with Paint.NET I can get a perfectly decent, lightweight 8-bit PNG image. Even the 32-bit Paint.NET image is smaller than the one that turns up on the server via toDataURL. I tried playing around with the toDataURL quality parameter, but changing it has no effect whatsoever: exactly the same data size.
I should mention that I am testing with Chrome 20.0.1132.57 m and that the only browsers relevant to the app are the desktop versions of Chrome and Safari.
I know I could do some server-side image processing, but I want to avoid that if possible. Question: what, if anything, can I do to cut down on the image file size sent out from the browser?
Browsers may happily ignore any quality parameter given to toDataURL and the like. I don't believe honoring it is mandated by the specification.
The only way to control the quality exactly would be to:
1. Write your own PNG compressor in JS, or use an existing one such as https://github.com/imaya/CanvasTool.PngEncoder
2. Dump the <canvas> data to an ArrayBuffer
3. Pass this to a Web Worker
4. Let the Web Worker compress it using your PNG compressor library
I believe JPEG/PNG encoding and decoding solutions for JavaScript already exist.
Alternatively, you may try canvas.mozGetAsFile() / canvas.toBlob(), but I believe browsers still won't honour quality parameters there either.
https://developer.mozilla.org/en/DOM/HTMLCanvasElement/

Where does directshow get image dimensions from?

We are using a DirectShow interface to capture images from a video stream. These images are presented in a fixed-size window.
Once we have captured an image we store it as a bitmap. Downstream we have the ability to add annotation to the image, for example letters in a fixed size font.
In one of our desktop environments, the annotation has started appearing at half the size that it normally appears at. This implies that the image we are merging the text onto has dimensions that are maybe twice as large.
The system this happens on is a shared resource, as in some unknown individual has installed software on it that differs from our baseline.
We have two approaches: the first is to reimage the system to get our default text-size behaviour back; the second is to figure out how DirectShow manages image dimensions so that we can set the scaling on the image correctly.
A survey of the DirectShow literature indicates that the above is not a trivial task. The original work was done by another team that did not document what they did. Can anybody point us in the direction of which DirectShow object we need to deal with to properly size the sampled image?
DirectShow, as a framework, does not deal with resolutions directly. Your video source (such as capture hardware) is capable of providing a video feed in a certain resolution, which you can possibly change. You normally use IAMStreamConfig, as described in Configure the Video Output Format, in order to choose the capture resolution.
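As an illustration only (a sketch that assumes you can obtain the capture filter's output pin elsewhere), choosing a capture resolution through IAMStreamConfig before the pin is connected might look roughly like this:

    // Sketch: request a specific capture resolution through IAMStreamConfig.
    // Link strmiids.lib for the format GUIDs. pCapturePin is the capture
    // filter's (not yet connected) output pin, obtained elsewhere.
    #include <dshow.h>
    #include <atlbase.h>   // CComPtr

    HRESULT SetCaptureResolution(IPin* pCapturePin, LONG width, LONG height)
    {
        CComPtr<IAMStreamConfig> config;
        HRESULT hr = pCapturePin->QueryInterface(IID_PPV_ARGS(&config));
        if (FAILED(hr))
            return hr;

        AM_MEDIA_TYPE* pmt = nullptr;
        hr = config->GetFormat(&pmt);      // format currently proposed by the device
        if (FAILED(hr))
            return hr;

        if (pmt->formattype == FORMAT_VideoInfo && pmt->pbFormat != nullptr)
        {
            VIDEOINFOHEADER* vih = reinterpret_cast<VIDEOINFOHEADER*>(pmt->pbFormat);
            vih->bmiHeader.biWidth  = width;
            vih->bmiHeader.biHeight = height;
            hr = config->SetFormat(pmt);   // must be done before the pin connects
        }

        // Free the media type returned by GetFormat.
        if (pmt->cbFormat)
            CoTaskMemFree(pmt->pbFormat);
        if (pmt->pUnk)
            pmt->pUnk->Release();
        CoTaskMemFree(pmt);
        return hr;
    }

The device may reject sizes it does not support; IAMStreamConfig::GetStreamCaps enumerates the formats it will accept.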
Sometimes you cannot affect the capture resolution and you need to resample the image from whatever dimensions you captured it at. There is no stock filter for this; however, Media Foundation provides a suitable Video Resizer DSP which does most of the task. Unfortunately it does not fit into a DirectShow pipeline smoothly, so you need some fitting and/or a custom filter for resizing.
When filters connect in DirectShow, they agree on an AM_MEDIA_TYPE. For video connections you will find a VIDEOINFOHEADER with a BITMAPINFOHEADER, and this header has biWidth and biHeight fields.
Try to build the FilterGraph manually (with GraphEdit or GraphStudioNext) and inspect these fields.
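The same inspection can also be done in code rather than in GraphEdit. A hedged sketch, assuming you can reach the connected output pin in question:

    // Sketch: read the negotiated frame size from a connected DirectShow pin.
    // Link strmiids.lib for FORMAT_VideoInfo.
    #include <dshow.h>

    HRESULT GetNegotiatedFrameSize(IPin* pPin, LONG* pWidth, LONG* pHeight)
    {
        AM_MEDIA_TYPE mt = {};
        HRESULT hr = pPin->ConnectionMediaType(&mt);   // fails if the pin is not connected
        if (FAILED(hr))
            return hr;

        hr = E_FAIL;
        if (mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER))
        {
            const VIDEOINFOHEADER* vih =
                reinterpret_cast<const VIDEOINFOHEADER*>(mt.pbFormat);
            *pWidth  = vih->bmiHeader.biWidth;
            *pHeight = vih->bmiHeader.biHeight;   // may be negative for top-down frames
            hr = S_OK;
        }

        // Free the format block that ConnectionMediaType allocated.
        if (mt.cbFormat)
            CoTaskMemFree(mt.pbFormat);
        if (mt.pUnk)
            mt.pUnk->Release();
        return hr;
    }

If the dimensions reported here already come out doubled, the problem lies upstream of your annotation code (in the source or an intermediate filter's output format), not in the bitmap you draw onto.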

Best practices for creating thumbnails with GAE's Image API

For each photo in my datastore, I'm creating 3 thumbnails (small, medium, and large). I'm having a hard time figuring out what API functions to use on the original photos to get a balance between quality and file size for the thumbnails. The file size for the thumbs always seems to be too large.
GAE's Image API has many options for images (such as im_feeling_lucky(), converting from PNG to JPEG, and adjusting JPEG quality) and I'd like to know what functions to use and in which order to achieve the optimal setting for these thumbnails.
The easiest way to do this is simply to use get_serving_url to get a public URL for a scaled version of the image you can use as a thumbnail. This removes the need for you to create and separately store thumbnailed images.
