How can I find out if a `QPixmap` has transparency?

I have some QGraphicsPixmapItems that will contain images of any kind.
I will have to perform certain processing that is very simple if the image contained in the item is fully opaque, but more complicated if there is transparency, so I want to separate these two situations.
How can I find out whether a QGraphicsPixmapItem or a QPixmap has transparency?
(the only thing I found so far is QPixmap::mask():
Extracts a bitmap mask from the pixmap's alpha channel.
Warning: This is potentially an expensive operation. The mask of the pixmap is extracted dynamically from the pixeldata.
I'm not sure what I would do with it...
Alternatively, I can iterate through the pixel data myself until I find a pixel with some transparency... either approach seems inefficient.)
Update:
After implementing it myself, I found an older similar question:
Checking if a QImage has an alpha channel
(seems nobody else found an alternative to iterating through all pixel data)
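For what it's worth, a minimal sketch of that pixel-scanning implementation (the helper name is mine): convert the QPixmap to a QImage, bail out early if there is no alpha channel at all, and otherwise stop at the first pixel that is not fully opaque.

```cpp
#include <QImage>
#include <QPixmap>

// Returns true if any pixel of the pixmap is not fully opaque.
// hasAlphaChannel() gives a cheap early-out for pixmaps that cannot
// contain transparency in the first place.
bool hasTransparency(const QPixmap &pixmap)
{
    if (!pixmap.hasAlphaChannel())
        return false;   // no alpha channel at all => fully opaque

    const QImage image = pixmap.toImage().convertToFormat(QImage::Format_ARGB32);
    for (int y = 0; y < image.height(); ++y) {
        const QRgb *line = reinterpret_cast<const QRgb *>(image.constScanLine(y));
        for (int x = 0; x < image.width(); ++x) {
            if (qAlpha(line[x]) < 255)
                return true;    // found a (partially) transparent pixel
        }
    }
    return false;
}
```

For a QGraphicsPixmapItem you would simply call this on item->pixmap(). In the worst case (a fully opaque image that nevertheless has an alpha channel) it still scans every pixel, which matches the conclusion of the linked question.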

Related

iOS: Merging two images, combining the content where it overlaps

I would like to combine two images that partially contain content and are otherwise transparent (alpha = 0). Where the content of the two images overlaps, I would like to use half the color value (alpha = 0.5) from the first image combined with half the color value of the other image. All pixels that still do not contain content should remain transparent. I can't seem to find a convenient way to do this using Core Graphics or Core Image, or maybe I am missing something... Does anyone have any tips on how to do this?
If anyone else encounters this problem:
I was able to solve it by using pixel-wise processing inspired by this answer https://stackoverflow.com/a/31661519/3652610
and alpha blending described here https://stackoverflow.com/a/727339
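The linked answers boil down to getting at the raw pixel data and blending per pixel. A rough, framework-agnostic sketch of that idea, assuming both images have already been extracted into straight (non-premultiplied) RGBA8 buffers of the same size; the function name and the half-and-half blend are just an illustration of the rule described in the question:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Merge two same-sized straight-alpha RGBA8 buffers:
//  - both transparent      -> stays transparent
//  - only one has content  -> copy that pixel through
//  - both have content     -> average colour and alpha (the "half of each" rule)
std::vector<std::uint8_t> mergeHalfAndHalf(const std::vector<std::uint8_t> &a,
                                           const std::vector<std::uint8_t> &b)
{
    std::vector<std::uint8_t> out(a.size(), 0);
    for (std::size_t i = 0; i + 3 < a.size(); i += 4) {
        const std::uint8_t aAlpha = a[i + 3];
        const std::uint8_t bAlpha = b[i + 3];
        if (aAlpha == 0 && bAlpha == 0) {
            continue;                                   // neither image has content here
        } else if (bAlpha == 0) {
            std::copy(a.begin() + i, a.begin() + i + 4, out.begin() + i);
        } else if (aAlpha == 0) {
            std::copy(b.begin() + i, b.begin() + i + 4, out.begin() + i);
        } else {                                        // overlap: half of each image
            for (int c = 0; c < 4; ++c)
                out[i + c] = static_cast<std::uint8_t>((a[i + c] + b[i + c]) / 2);
        }
    }
    return out;
}
```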

How do I know if an image is "Premultiplied Alpha"?

I heard that premultiplied alpha is what is needed when doing layer blending and the like. How do I know whether my original image is premultiplied alpha?
You can't.
The only thing you can check is whether it is not premultiplied. To do that, go over all the pixels and see if there is a color value higher than the alpha would permit: `if (max(col.r, col.g, col.b) > 255 * alpha) // not premultiplied`. Any other case is ambiguous and could or could not be premultiplied. Your best guess is probably to assume it isn't, as that's the case for most PNGs.
Edit: actually, not even the check I posted is reliable, as there are a lot of PNGs out there with a white matte, so the image would have to include parts with an alpha of 0 to determine the matte color first.
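To make the heuristic concrete, here is a small sketch (the name is mine) that scans a straight RGBA8 buffer; with 8-bit channels, "higher than the alpha would permit" simply means a colour byte larger than the alpha byte. Per the edit above, a negative result is definitive, but a positive one only means "could be premultiplied".

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Returns false as soon as a pixel's colour exceeds what premultiplication
// by its alpha would allow; returns true otherwise, which only means
// "could be premultiplied", not "is premultiplied".
bool couldBePremultiplied(const std::uint8_t *rgba, std::size_t pixelCount)
{
    for (std::size_t i = 0; i < pixelCount * 4; i += 4) {
        const std::uint8_t maxColour = std::max({rgba[i], rgba[i + 1], rgba[i + 2]});
        if (maxColour > rgba[i + 3])    // channel brighter than alpha allows
            return false;               // definitely not premultiplied
    }
    return true;
}
```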
Android's Bitmap stores images loaded from PNG with premultiplied alpha, and you can't get the non-premultiplied original colours back from it in the usual way.
In order to load images without the RGB channels being premultiplied, I have to use the third-party PNGDecoder from here: http://twl.l33tlabs.org/#downloads

How can I deblur an image in matlab?

I need to remove the blur from this image:
Image source: http://www.flickr.com/photos/63036721@N02/5733034767/
Any Ideas?
Although previous answers are right when they say that you can't recover lost information, you could investigate a little and make a few guesses.
I downloaded your image in what seems to be the original size (75x75) and you can see here a zoomed segment (one little square = one pixel)
It seems a pretty linear grayscale! Let's verify it by plotting the intensities of the central row. In Mathematica:
ListLinePlot[First /@ ImageData[i][[38]][[1 ;; 15]]]
So, it is effectively linear, starting at zero and ending at one.
So you may guess it was originally a B&W image, linearly blurred.
The easiest way to deblur that (not always giving good results, but good enough in your case) is to binarize the image with a 0.5 threshold. Like this:
And this is a possible way. Just remember we are guessing a lot here!
HTH!
You cannot generally retrieve missing information.
If you know what it is an image of (in this case a Gaussian or Airy profile, so it's probably an out-of-focus image of a point source), you can determine the characteristics of the point.
Another technique is to try to determine the characteristics of the blurring, especially if you have many images from the same blurred system, and then iteratively create a possible source image, blur it by that convolution, and compare it to the blurred image.
This is the general technique used to make radio astronomy source maps (images), and it was used for the flawed Hubble Space Telescope images.
When working with images one of the most common things is to use a convolution filter. There is a "sharpen" filter that does what it can to remove blur from an image. An example of a sharpen filter can be found here:
http://www.panoramafactory.com/sharpness/sharpness.html
Some programs, like MATLAB, make convolution really easy: `conv2(A,B)`.
And most decent photo-editing programs have these filters under some name or another (usually "sharpen").
But keep in mind that filters can only do so much. In theory, the actual information has been lost by the blurring process and it is impossible to perfectly reconstruct the initial image (no matter what TV will lead you to believe).
In this case it seems like you have a very simple image with only black and white. Knowing this about your image you could always use a simple threshold. Set everything above a certain threshold to white, and everything below to black. Once again most photo editing software makes this really easy.
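In MATLAB the thresholding really is a one-liner (im2bw(I, 0.5), or imbinarize in newer releases). For completeness, the same operation on a raw 8-bit grayscale buffer is just as short; a generic sketch:

```cpp
#include <cstddef>
#include <cstdint>

// 0.5 threshold on an 8-bit grayscale buffer: pixels at or above 128
// become white, everything else becomes black.
void binarize(std::uint8_t *gray, std::size_t pixelCount)
{
    for (std::size_t i = 0; i < pixelCount; ++i)
        gray[i] = (gray[i] >= 128) ? 255 : 0;
}
```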
You cannot retrieve missing information, but under certain assumptions you can sharpen.
Try unsharp masking.

Create and write paletted RGBA PNG using NSImage

I'm trying to create a paletted PNG image (8 bits per pixel) that uses an RGBA palette (32 bits per palette entry) using the Cocoa framework*.
I've tried a few combinations for the [NSBitmapImageRep initWithBitmapDataPlanes:…] method. It seems to create an appropriate bitmap for bitsPerSample:2 bitsPerPixel:8.
However, when I try to write such bitmap with [NSBitmapImageRep representationUsingType:NSPNGFileType…] I get:
libpng error: Invalid bit depth for RGBA image
If I try other bit depths, I get a 32-bit-per-pixel (non-paletted) image.
*) I know I could just use libpng, but that's not an answer I'm looking for.
2 bits per sample, 8 per pixel will not get you an indexed PNG--it will, in theory, create an RGBA PNG file with 2 bits per sample, just as it suggests. Now, such an image has 256 possible colour values per pixel (including alpha channel) but it's not indexed in the sense of having a colour lookup table.
To my knowledge, there is no way to specify a colour palette when using NSBitmapImageRep. You will probably have to use libpng directly to get the effect you want. (By the way, it doesn't matter if you aren't looking for this answer. It's still the correct answer to this particular problem and saying "no!" isn't going to change the universe around you.)
However, before you do that, if you tell us why you think/know you need an indexed PNG, we may be able to point you toward a better or simpler solution.
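If you do end up dropping down to libpng, writing an indexed PNG with per-entry alpha means an 8-bit PNG_COLOR_TYPE_PALETTE image whose palette goes into the PLTE chunk and whose alpha values go into a tRNS chunk. A rough sketch (the function name and parameter layout are illustrative; error handling is minimal):

```cpp
#include <csetjmp>
#include <cstdio>
#include <png.h>

// Write an 8-bit paletted PNG with per-entry alpha.
// 'indices' holds width*height palette indices; 'palette'/'alphas' hold up
// to 256 RGB entries and their alpha values.
bool writeIndexedPng(const char *path, const png_byte *indices,
                     int width, int height,
                     const png_color *palette, const png_byte *alphas,
                     int paletteSize)
{
    FILE *fp = std::fopen(path, "wb");
    if (!fp)
        return false;

    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING,
                                              nullptr, nullptr, nullptr);
    png_infop info = png ? png_create_info_struct(png) : nullptr;
    if (!info || setjmp(png_jmpbuf(png))) {        // libpng reports errors via longjmp
        png_destroy_write_struct(&png, &info);
        std::fclose(fp);
        return false;
    }

    png_init_io(png, fp);
    png_set_IHDR(png, info, width, height, 8, PNG_COLOR_TYPE_PALETTE,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                 PNG_FILTER_TYPE_DEFAULT);
    png_set_PLTE(png, info, palette, paletteSize);          // RGB palette entries
    png_set_tRNS(png, info, alphas, paletteSize, nullptr);  // per-entry alpha
    png_write_info(png, info);

    for (int y = 0; y < height; ++y)    // one row of 8-bit indices per scanline
        png_write_row(png, const_cast<png_bytep>(indices + y * width));

    png_write_end(png, info);
    png_destroy_write_struct(&png, &info);
    std::fclose(fp);
    return true;
}
```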

Draw part of CGImage

I have an application that draws images from a CGImage.
The CGImage itself is loaded using CGImageSourceCreateImageAtIndex to create an image from a PNG file.
This forms part of a sprite engine - there are multiple sprite images on a single PNG file, so each sprite has a CGRect defining where it is found on the CGImage.
The problem is, CGContextDrawImage only takes a destination rect and stretches the source CGImage to fill it.
So, to draw each sprite image we need to create multiple CGImages from the original source, using CGImageCreateWithImageInRect().
I thought at first that this would be a 'cheap' operation - it doesn't seem necessary for each CGImage to contain its own copy of the image's bits - however, profiling has revealed that CGImageCreateWithImageInRect() is a rather expensive operation.
Is there a better way to draw a sub-section of a CGImage onto a CGContext, so I don't need to call CGImageCreateWithImageInRect() so often?
Given the lack of a source rectangle, and the ease of making a CGImage from a rect on another CGImage, I began to suspect that CGImage implements copy-on-write semantics, where a CGImage made from a CGImage would refer to a sub-rect of the same physical bits as its parent.
Profiling seems to prove this wrong :/
I was in the same boat as you. CGImageCreateWithImageInRect() worked better for my needs but previously I had attempted to convert to an NSImage, and prior to that I was clipping the context I was drawing in, and translating so that CGContextDrawImage() would draw the right data into the clipped region.
Of all of the solutions I tried:
Clipping and translating was prohibitively taxing on the CPU. It was too slow. Even slightly increasing the amount of bitmap data had significant performance impacts, suggesting that this approach lacks any sort of scalability.
Conversion to NSImage was relatively efficient, at least for the data we were using. There didn't seem to be any duplication of bitmap data that I could see, which was mostly what I was afraid of going from one image object to another.
At one point I converted to a CIImage, as this class also allows drawing subregions of the image. This seemed to be slower than converting to NSImage, but did offer me the chance to fiddle around with the bitmap by passing through some of the Core Image filters.
Using CGImageCreateWithImageInRect() was the fastest of the lot; maybe this has been optimised since you had last used it. The documentation for this function says the resulting image retains a reference to the original image, this seems to agree with what you had assumed regarding copy-on-write semantics. In my benchmarks, there appears to be no duplication of data but maybe I'm reading the results wrong. We went with this method because it was not only the fastest but it seemed like a more “clean” approach, keeping the whole process in one framework.
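For reference, the CGImageCreateWithImageInRect() route amounts to something like the following per sprite (a sketch; names are illustrative). Since the sub-image only references the parent sheet, caching the CGImageRef per sprite instead of recreating it every frame is usually the main win:

```cpp
#include <CoreGraphics/CoreGraphics.h>

// Draw the portion 'spriteRect' of 'sheet' into 'destRect' of 'ctx'.
// The sub-image retains a reference to the sheet rather than copying the
// pixels, but creating it still has a per-call cost, so cache it if the
// same sprite is drawn repeatedly.
void drawSprite(CGContextRef ctx, CGImageRef sheet,
                CGRect spriteRect, CGRect destRect)
{
    CGImageRef sprite = CGImageCreateWithImageInRect(sheet, spriteRect);
    if (sprite != NULL) {
        CGContextDrawImage(ctx, destRect, sprite);
        CGImageRelease(sprite);
    }
}
```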
Create an NSImage with the CGImage. An NSImage object makes it easy to draw only some section of it to a destination rectangle.
I believe the recommendation is to use a clipping region.
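Spelled out, the clipping approach clips to the destination rect and then draws the whole sheet scaled and offset so that the wanted source rect lands exactly on the destination. A hedged sketch, assuming the source rect is expressed in the same bottom-left-origin coordinates that CGContextDrawImage uses when drawing the full image; in a flipped context the y offset needs the usual adjustment:

```cpp
#include <CoreGraphics/CoreGraphics.h>

// Draw 'srcRect' of 'sheet' into 'destRect' of 'ctx' without creating a
// sub-image: clip to the destination, then draw the whole sheet so that
// srcRect maps onto destRect.
void drawSubImageByClipping(CGContextRef ctx, CGImageRef sheet,
                            CGRect srcRect, CGRect destRect)
{
    const CGFloat scaleX = destRect.size.width  / srcRect.size.width;
    const CGFloat scaleY = destRect.size.height / srcRect.size.height;

    CGContextSaveGState(ctx);
    CGContextClipToRect(ctx, destRect);       // nothing outside destRect is touched

    const CGRect fullRect = CGRectMake(
        destRect.origin.x - srcRect.origin.x * scaleX,
        destRect.origin.y - srcRect.origin.y * scaleY,
        CGImageGetWidth(sheet)  * scaleX,
        CGImageGetHeight(sheet) * scaleY);

    CGContextDrawImage(ctx, fullRect, sheet); // clipped to destRect
    CGContextRestoreGState(ctx);
}
```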
I had a similar problem when writing a simple 2D tile-based game.
The only way I got decent performance was to:
1) Pre-render the tilesheet CGImage into a CGBitmapContext using CGContextDrawImage()
2) Create another CGBitmapContext as an offscreen rendering buffer, with the same size as the UIView I was drawing in, and same pixel format as the context from (1).
3) Write my own fast blit routine to copy a region (CGRect) of pixels from the bitmap context created in (1) to the bitmap context created in (2). This is pretty easy: just simple memory copying (plus some extra per-pixel work to do alpha blending if needed), keeping in mind that the rasters are stored in reverse order in the buffer (the last row of pixels in the image is at the beginning of the buffer). A sketch of such a blit is included after this answer.
4) Once a frame had been drawn, draw the offscreen buffer in the view using CGContextDrawImage().
As far as I could tell, every time you call CGImageCreateWithImageInRect(), it decodes the entire PNG file into a raw bitmap, then copies the desired region of the bitmap to the destination context.
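As a rough illustration of step 3, assuming both bitmap contexts were created with the same 32-bit-per-pixel format and no alpha blending is needed, the blit can be a straight row-by-row memcpy (bounds checking and any row flipping against image coordinates are left to the caller):

```cpp
#include <CoreGraphics/CoreGraphics.h>
#include <cstdint>
#include <cstring>

// Copy 'srcRect' (in raw buffer pixel coordinates) from the pre-rendered
// tilesheet context to position (dstX, dstY) of the offscreen context,
// one memcpy per row. Both contexts must share the same 32-bit format.
void blit(CGContextRef src, CGContextRef dst,
          CGRect srcRect, int dstX, int dstY)
{
    const std::uint8_t *srcBase =
        static_cast<const std::uint8_t *>(CGBitmapContextGetData(src));
    std::uint8_t *dstBase = static_cast<std::uint8_t *>(CGBitmapContextGetData(dst));
    const std::size_t srcStride = CGBitmapContextGetBytesPerRow(src);
    const std::size_t dstStride = CGBitmapContextGetBytesPerRow(dst);
    const std::size_t bytesPerPixel = 4;

    const int w  = static_cast<int>(srcRect.size.width);
    const int h  = static_cast<int>(srcRect.size.height);
    const int sx = static_cast<int>(srcRect.origin.x);
    const int sy = static_cast<int>(srcRect.origin.y);

    for (int row = 0; row < h; ++row) {
        const std::uint8_t *from = srcBase + (sy + row) * srcStride + sx * bytesPerPixel;
        std::uint8_t *to = dstBase + (dstY + row) * dstStride + dstX * bytesPerPixel;
        std::memcpy(to, from, w * bytesPerPixel);
    }
}
```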
