Does anyone know a better way to check whether an image contains a (semi-)transparent pixel besides going through all the pixels and checking their alpha channel?
[pseudo]
for each pixel in image:
    if pixel.alpha != 0xff:
        return true
return false
Thanks in advance.
You could use BufferedImage.getType()
or
ColorModel.hasAlpha()
to check whether the image has an alpha channel at all.
If it does, you will still have to check the individual pixels to see whether any of them is actually non-opaque.
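For the BufferedImage case, a minimal sketch of that two-step check (the helper name hasTranslucentPixel is mine, not part of the API):

import java.awt.image.BufferedImage;

public final class AlphaCheck {

    // True if the image has an alpha channel and at least one pixel is not fully opaque.
    public static boolean hasTranslucentPixel(BufferedImage img) {
        // Cheap pre-check: no alpha channel means no transparency at all.
        if (!img.getColorModel().hasAlpha()) {
            return false;
        }
        // Otherwise the pixels themselves still have to be inspected.
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int alpha = (img.getRGB(x, y) >>> 24) & 0xff; // getRGB returns packed ARGB
                if (alpha != 0xff) {
                    return true;
                }
            }
        }
        return false;
    }
}

If the per-pixel getRGB calls are too slow on large images, pulling the whole ARGB array at once with getRGB(0, 0, w, h, null, 0, w), or reading only the alpha plane via getAlphaRaster(), avoids the per-call overhead.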
Yes, there is a better way than simply iterating over all pixels: if you already have a mip-map stored for the alpha channel, you can check it from the top (coarsest) level downwards for any non-opaque pixels.
JAI supports this: put the alpha channel or the whole image into a javax.media.jai.ImageMIPMap and then iterate its levels from top to bottom using getImage(int level).
Some keywords for googling: Gaussian/Laplacian image pyramids, mipmaps.
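I'm not confident quoting the exact ImageMIPMap constructor signatures from memory, so here is the underlying idea sketched in plain Java without JAI: precompute a pyramid that stores the minimum alpha of each 2x2 block, so the coarsest level answers the question in a single lookup. The class and method names are mine:

import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public final class MinAlphaPyramid {

    // Level 0 is the full-resolution alpha plane; every further level halves both
    // dimensions and stores the minimum alpha of the 2x2 block beneath each cell.
    public static List<int[][]> build(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        int[][] level = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                level[y][x] = (img.getRGB(x, y) >>> 24) & 0xff;

        List<int[][]> pyramid = new ArrayList<>();
        pyramid.add(level);
        while (w > 1 || h > 1) {
            int nw = Math.max(1, w / 2), nh = Math.max(1, h / 2);
            int[][] next = new int[nh][nw];
            for (int y = 0; y < nh; y++)
                for (int x = 0; x < nw; x++) {
                    int min = 255;
                    for (int dy = 0; dy < 2; dy++)
                        for (int dx = 0; dx < 2; dx++)
                            min = Math.min(min, level[Math.min(2 * y + dy, h - 1)][Math.min(2 * x + dx, w - 1)]);
                    next[y][x] = min;
                }
            pyramid.add(next);
            level = next;
            w = nw;
            h = nh;
        }
        return pyramid;
    }

    // Once the pyramid is stored, the opacity check is a single lookup at the coarsest level.
    public static boolean hasTranslucentPixel(List<int[][]> pyramid) {
        int[][] top = pyramid.get(pyramid.size() - 1);
        return top[0][0] != 0xff;
    }
}

Building the pyramid costs more than one plain scan, so this only pays off when the pyramid is already kept around for other reasons, exactly as the answer above assumes.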
I would like to combine two images that partially contain content and are otherwise transparent (alpha = 0). Where the content of the two images overlaps, I would like to use half the color value (alpha = 0.5) from the first image combined with half the color value of the other image. All pixels that still do not contain content should stay transparent. I can't seem to find a convenient way to do this using Core Graphics or Core Image, or maybe I am missing something. Does anyone have any tips on how to do this?
If anyone else encounters this problem:
I was able to solve it by using pixel-wise processing inspired by this answer https://stackoverflow.com/a/31661519/3652610
and the alpha blending described here: https://stackoverflow.com/a/727339
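For anyone who lands here and doesn't want to chase the links: the rule described in the question (average the colours where both images have content, pass a pixel through where only one image has content, stay transparent elsewhere) comes down to a per-pixel loop. The sketch below is platform-neutral Java rather than actual Core Graphics/Core Image code, the combine name is made up, and averaging the alphas in the overlap case is just one reading of the question:

import java.awt.image.BufferedImage;

public final class HalfBlend {

    // Both inputs are assumed to be the same size; getRGB/setRGB work in non-premultiplied ARGB.
    public static BufferedImage combine(BufferedImage a, BufferedImage b) {
        int w = a.getWidth(), h = a.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int pa = a.getRGB(x, y), pb = b.getRGB(x, y);
                int alphaA = pa >>> 24, alphaB = pb >>> 24;
                int argb;
                if (alphaA == 0 && alphaB == 0) {
                    argb = 0;        // no content in either image: stay transparent
                } else if (alphaB == 0) {
                    argb = pa;       // only the first image has content here
                } else if (alphaA == 0) {
                    argb = pb;       // only the second image has content here
                } else {
                    // overlap: average each colour channel (and the alpha) of the two pixels
                    int r = (((pa >> 16) & 0xff) + ((pb >> 16) & 0xff)) / 2;
                    int g = (((pa >> 8) & 0xff) + ((pb >> 8) & 0xff)) / 2;
                    int bl = ((pa & 0xff) + (pb & 0xff)) / 2;
                    int al = (alphaA + alphaB) / 2;
                    argb = (al << 24) | (r << 16) | (g << 8) | bl;
                }
                out.setRGB(x, y, argb);
            }
        }
        return out;
    }
}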
I am using Kinect2 with Matlab; however, the depth images shown in the video stream are much brighter than the ones I save from Matlab.
Does anyone know the solution to this problem?
Firstly, you should provide the code that you are using at the moment so we can see where you are going wrong. This is generic advice for posting on any forum: include all of your information so others can help.
If you use the histogram to check your depth values, you will see that the image is a uint8 image with values from 0 to 255. Since the depth distances are scaled down to this grayscale range, the values end up compressed, and displaying them with imshow does not provide enough contrast.
An easy workaround for displaying such images is to apply some form of histogram equalization, for example:
figure(1);
C = adapthisteq(A, 'ClipLimit', 0.02, 'Distribution', 'rayleigh');  % A is the saved uint8 depth image
imshow(C);
The image will then be contrast-adjusted for display.
I used mat2gray and it solved the problem: it rescales the data to the [0, 1] range based on the actual minimum and maximum values, so imshow gets the full display range.
I am new to image processing in Matlab. My first aim is to implement the article and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: masks are defined in Figure 7, but I could not obtain the mask data set, so I am using a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to the Lena image, but I don't want to use Paint =) On the other hand, is it possible to merge the images while keeping the Lena image intact?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but that's an easy fix, e.g. resize the mask with imresize so the dimensions match.
First of all, you should know that this paper was published on arXiv. When a paper is only published on arXiv, it is always a good idea to find out more about the author and/or the university behind it.
Trust me on that: you do not need to waste your time on this paper.
I understand what you want, but it is not a good idea to get the mask by taking a screenshot. The pixel values you obtain from a screenshot may not be the same as the original values, and the zoom level may change the size, so you need to make sure the sizes are the same.
If you do go the screenshot route: take the screenshot and paste the image.
Crop the mask.
Convert RGB to grayscale.
Threshold the grayscale image to get the binary mask.
If you save the image as JPEG, compression distortions around the high-frequency edges will change the edge shape, so save it in a lossless format (e.g. PNG) instead.
I heard that premultiplied alpha is what you want when doing layer blending etc. How do I know whether my original image uses premultiplied alpha?
You can't.
The only thing that you can check is whether it is definitely not premultiplied. To do that, go over all the pixels and see if there is a colour value higher than the alpha would permit: if (max(col.r, col.g, col.b) > 255 * alpha) // not premultiplied. Any other case is ambiguous and could or could not be premultiplied. Your best guess is probably to assume that images are not premultiplied, as that is the case for most PNGs.
Edit: actually, not even the check I posted is reliable on its own, as there are a lot of PNGs out there with a white matte; the image would have to include parts with an alpha of 0 so you could determine the matte colour first.
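For reference, the "definitely not premultiplied" test described above, written out for raw interleaved RGBA bytes as they come out of a decoder (the helper name is mine):

public final class PremulCheck {

    // rgba holds interleaved 8-bit R,G,B,A samples exactly as decoded from the file.
    // Returns true only when some pixel has a colour sample larger than its alpha,
    // which is impossible for correctly premultiplied data; false is inconclusive.
    public static boolean isDefinitelyNotPremultiplied(byte[] rgba) {
        for (int i = 0; i + 3 < rgba.length; i += 4) {
            int r = rgba[i] & 0xff, g = rgba[i + 1] & 0xff, b = rgba[i + 2] & 0xff;
            int a = rgba[i + 3] & 0xff;
            if (Math.max(r, Math.max(g, b)) > a) {
                return true;   // colour exceeds alpha: cannot be premultiplied
            }
        }
        return false;          // ambiguous: premultiplied, or just "safe" colours throughout
    }
}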
Android's Bitmap stores images loaded from PNG with premultiplied alpha, and you can't get the non-premultiplied original colours back from it in the usual way.
In order to load images without the RGB channels being premultiplied, I had to use the third-party PNGDecoder from here: http://twl.l33tlabs.org/#downloads
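From memory, that decoder is used roughly like the sketch below; treat the class and method names as something to double-check against the download rather than as gospel:

import java.io.FileInputStream;
import java.io.InputStream;
import java.nio.ByteBuffer;
import de.matthiasmann.twl.utils.PNGDecoder;

public final class LoadStraightAlphaPng {

    // Returns the pixels as straight (non-premultiplied) RGBA bytes, 4 per pixel.
    public static ByteBuffer load(String path) throws Exception {
        try (InputStream in = new FileInputStream(path)) {
            PNGDecoder decoder = new PNGDecoder(in);
            int w = decoder.getWidth();
            int h = decoder.getHeight();
            ByteBuffer buffer = ByteBuffer.allocateDirect(4 * w * h);
            decoder.decode(buffer, w * 4, PNGDecoder.Format.RGBA);
            buffer.flip();
            return buffer;
        }
    }
}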
I want to achieve the same result as turning off one of the channels in Photoshop. I was about to try looping through every pixel and changing the colors. Is there a better way to do this?
Use Core Image's Color Matrix filter. The array of vectors can be bewildering, but it's very powerful. In your case, you'll want to set the vector for the channel you want to turn off to all-zeroes.
Obviously, this will only work for RGB images, since Core Image only works for RGB images. You can make it work for gray images (turn off R, G, and B to turn off the K channel), but not for CMYK.
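For the record, that Core Image filter is CIColorMatrix, and setting the input vector of the unwanted channel to all zeroes is the whole trick. Outside of Core Image, the same zero-out-one-channel idea can be illustrated in plain Java with java.awt.image.RescaleOp, again without a hand-written per-pixel loop (an illustration of the idea only, not the Core Image call):

import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public final class ChannelOff {

    // Returns a copy of an RGB image with the green channel forced to zero.
    // One scale factor per colour band: 1 keeps a band, 0 turns it off.
    public static BufferedImage dropGreen(BufferedImage rgbImage) {
        float[] scales  = {1f, 0f, 1f};   // R, G, B
        float[] offsets = {0f, 0f, 0f};
        RescaleOp op = new RescaleOp(scales, offsets, null);
        return op.filter(rgbImage, null);
    }
}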