Folks,
I have read a number of articles on the Discrete Wavelet Transform (DWT) and looked at some sample code as well. However, I am not clear on what exactly DWT achieves.
Here is what I understand. For a two dimensional image in YUV format, I can pass in the Y plane (brightness) to DWT function as a parameter. The function returns me a matrix of the original width and height containing coefficient values.
What are these coefficient values telling me? Is it how fast or slow the brightness of a pixel has changed compared to its neighbors?
Further, the returned matrix is rearranged into four quarters. As the coefficients have been rearranged, I no longer know which coefficient belongs to which pixel. This is confusing. If I cannot associate a coefficient with its corresponding pixel location, how can I really use the coefficients?
A little bit of background. I am looking at hiding some information in an image as an invisible watermark. From what I understand, DWT can help me identify the best region to hide the information. However, I have not been able to put the whole picture together.
Ok. I figured out how DWT works. I was under the assumption that each generated coefficient corresponds directly to a pixel of the original image. However, the transform converts the input luma into a completely different set of values. It is possible to run the inverse transform on those values to obtain the original values again.
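For anyone else puzzled by the four quarters, here is a minimal sketch of that round trip using the PyWavelets package (an assumption on my part; your DWT routine may differ), showing the coefficient quarters and the inverse transform:

import numpy as np
import pywt

y_plane = np.random.rand(256, 256)          # stand-in for the Y (luma) plane

# Single-level 2D DWT: LL is the low-frequency approximation, while the other
# three quarters (LH, HL, HH) hold horizontal, vertical and diagonal detail.
LL, (LH, HL, HH) = pywt.dwt2(y_plane, 'haar')

# The transform is invertible: the original luma comes back (up to
# floating-point error) from the four coefficient quarters.
reconstructed = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
print(np.allclose(y_plane, reconstructed))  # True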
Regards,
Peter
I have two images of the same height and width that look similar, but they are not identical pixel by pixel: one of the images is shifted to the right by a few pixels.
I am currently using the ImageMagick compare command. It reports differences because it compares pixel by pixel. I also tried its fuzz attribute.
Please suggest another tool or approach for comparing such images.
I don't know what you're really trying to achieve, but if you want a metric to express the similarity between the two images without taking the displacement into account, then maybe you should work in the frequency domain.
For instance, the magnitude part of the DFT of your images should be nearly identical (a pure shift only affects the phase), so the difference between the two magnitude spectra should be practically null.
In fact, thanks to the Fourier shift theorem, you can even estimate the displacement offset by combining the two DFTs and taking the inverse DFT of that combination: the result peaks at the offset.
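A rough NumPy sketch of that idea, usually called phase correlation (the names are mine, not taken from any particular tool):

import numpy as np

def estimate_shift(img_a, img_b):
    # DFTs of the two (equally sized, grayscale) images.
    F_a = np.fft.fft2(img_a)
    F_b = np.fft.fft2(img_b)
    # The magnitudes are (nearly) identical; the shift lives in the phase.
    # Normalised cross-power spectrum, then inverse DFT -> a peak at the offset.
    cross_power = F_a * np.conj(F_b)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power)
    dy, dx = np.unravel_index(np.argmax(np.abs(correlation)), correlation.shape)
    return dy, dx   # offsets are modulo the image size (wrap-around)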
I have the following image:
What I want to do is "id" the individual strips based on their dominant color. What is the best approach to do this?
What I've done is use the image's value channel (HSV) and build a histogram of its occurrences. The problem is that strip0 gives [27=32191, 28=5433, others=8] and strip1 gives [26=7107, 27=23111, others=22], so I can't get a definitive distinction.
The project's main goal is to compare an actual yellow-colored paper to the strips and determine which strip is the most similar.
First, since you know the boundaries of each strip in the reference image, the only problem possible here is that your reference image is noisy. A relatively overkill way to handle that is clustering the colors in each strip and taking the cluster's centroid as the representative color of the strip. In order to get a more meaningful response here, consider the CIELAB colorspace for this step. Doing this, and converting the results back to RGB, for the first strip I get the rgb triplet (0.949375, 0.879872, 0.147898), and for the second strip (0.945324, 0.857322, 0.129756) (each channel in range [0, 1]).
When you get a new image, you perform the same operation. But there are a lot of potential problems here. For instance, how are you handling the white balance in this input image? Supposing you have no such problem, then it is only a matter of finding the reference color nearest to the one you just computed by the same process. To find the nearest color you also have to use a colorspace that is meaningful for such comparisons, and CIELAB is recommended again since the well-established Delta-E functions are defined on it. See http://en.wikipedia.org/wiki/Color_difference for several such metrics, the simplest being the Euclidean distance in CIELAB.
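Not part of the recipe above, but a minimal scikit-image sketch of that nearest-color step (an assumption on my side; any CIELAB/Delta-E implementation will do), using the two triplets computed earlier and a made-up test color:

import numpy as np
from skimage import color

reference_rgb = np.array([
    [0.949375, 0.879872, 0.147898],   # strip 0 (from the clustering step)
    [0.945324, 0.857322, 0.129756],   # strip 1
])
test_rgb = np.array([[0.94, 0.86, 0.14]])  # color measured from the new image

# Convert to CIELAB and use the simplest Delta-E: Euclidean distance in Lab.
reference_lab = color.rgb2lab(reference_rgb.reshape(1, -1, 3))[0]
test_lab = color.rgb2lab(test_rgb.reshape(1, -1, 3))[0]
distances = color.deltaE_cie76(test_lab, reference_lab)
print(np.argmin(distances))   # index of the nearest reference strip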
Calibrate your equipment. If you do not calibrate your equipment, you will have arbitrary errors between the test sample and the reference. Lighting is part of your equipment.
Use edge detection and your knowledge of the reference strip's geometry (strips are equal width) to determine sampling regions. For each sampling region, extract an internal patch.
For the test strip, compute an image where each pixel is the max difference within a sampling window (e.g. 5x5). This will let you identify a relatively homogeneous region which is dissimilar to the outside region (i.e. the paper). Extract a patch.
Use downsampling to find an integrated color for each patch per svnpenn's advice. You can look at other computation methods later, but this should work quite well.
For weights wh, ws, wv, compute similarity = wh*abs(h0-h1) + ws*abs(s0-s1) + wv*abs(v0-v1) between the test color and each reference color. You can look at other distance measures later, but this should work quite well. Start with equal weights. One perk to this method is that it behaves well regardless of the dimension or combination of dimensions under which the reference strip varies.
Sort the results to find the most similar and second most similar matches. Note that similarity is set up so zero is an exact match, and a big number is a poor match. Use the ratio of these two results to estimate the quality of the most similar match - if the first two matches are very close, it's probably not a great match to either.
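For what it's worth, a minimal Python sketch of the last two steps (the example colors and equal weights are placeholders, not values from the answer):

import colorsys

def hsv_similarity(c0, c1, wh=1.0, ws=1.0, wv=1.0):
    # Zero means an exact match; larger numbers mean a worse match.
    h0, s0, v0 = c0
    h1, s1, v1 = c1
    return wh * abs(h0 - h1) + ws * abs(s0 - s1) + wv * abs(v0 - v1)

test = colorsys.rgb_to_hsv(0.94, 0.86, 0.14)
references = [colorsys.rgb_to_hsv(0.949, 0.880, 0.148),   # strip 0
              colorsys.rgb_to_hsv(0.945, 0.857, 0.130)]   # strip 1

scores = sorted((hsv_similarity(test, ref), i) for i, ref in enumerate(references))
best, second = scores[0], scores[1]
print("best match:", best[1], "quality ratio:", best[0] / second[0])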
You can scan through all the colors and use a hashtable to keep track of how many pixels of each color there are.
Take those numbers and, remembering which colors they correspond to, sort them in decreasing order.
Look at the sorted list of numbers and find the difference between each consecutive pair of numbers. Keep track of the indices in the list of the two numbers that produced each difference. Sort this difference list.
Look at the maximum number in the difference list. You now have the biggest drop-off between two sets of pixels. Go find which was the bigger one. Everything with this number of pixels and above is a dominant color. Everything below is a sub-dominant color. Now you know how many dominant colors you have, and what they are.
Should be pretty easy from there to do whatever it is you want to do.
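A rough Python sketch of the counting and biggest-drop-off idea (the name pixels is a stand-in for however you iterate the image's (r, g, b) tuples):

from collections import Counter

counts = Counter(pixels)                       # color -> number of pixels
ranked = counts.most_common()                  # sorted in decreasing order

# The largest drop between consecutive counts marks the dominant/sub-dominant split.
gaps = [(ranked[i][1] - ranked[i + 1][1], i) for i in range(len(ranked) - 1)]
split = max(gaps)[1]

dominant = [color for color, _ in ranked[:split + 1]]
sub_dominant = [color for color, _ in ranked[split + 1:]]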
The only time this wouldn't work is if some of the noise was of the same color as a strip, so much so that it corrupted your data.
In this case, you would use a different approach, which you can also use in the first case - looking at runs. Go through the pixels, and each time you find a new color, look at how many of the following pixels are of the same color.
Use the method described earlier to cluster the colors into dominant and non-dominant, for the same result.
In both cases, if you know that the picture is of vertical strips, you could limit the number of horizontal lines of colors you look at to make things go faster.
You could split the image into sections, then resize each section to one pixel. This is an example using the whole image:
$ convert Y82IirS.jpg -resize 1x1 txt:
# ImageMagick pixel enumeration: 1,1,255,srgb
0,0: (220,176, 44) #DCB02C srgb(220,176,44)
Let's say I take a picture of a sheet on a table at an angle, i.e. not frontally. Of course, I will get a perspectively stretched image.
Does anyone know an easy algorithm to "normalize" the area back to a squared one, given that all 4 corner points in the source (the taken photo) are defined, e.g. clicked by the user?
The interpolation can be simple; I do not need algorithms for smooth borders. Nearest neighbour is enough (i.e. simply copying the pixel from the source position whose coordinates match the rounded values of the calculated position, ignoring the decimals).
OpenCV has it all.
Basically, you get the 4 points and use
http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#getperspectivetransform
(which basically inverts a matrix inside) to obtain the perspective transformation matrix, and then use
http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#warpperspective
to apply the perspective transformation on the image.
The algorithms inside the latter can be found in http://en.wikipedia.org/wiki/Texture_mapping
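A minimal sketch with OpenCV's Python bindings (the corner coordinates and file names are placeholders for the user-clicked points and your own image):

import cv2
import numpy as np

img = cv2.imread("photo.jpg")

# Corners of the sheet in the photo, in the same order as the target corners.
src = np.float32([[120, 80], [530, 95], [560, 610], [90, 590]])
# Target rectangle, e.g. 400x400 for a "squared" output.
dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

M = cv2.getPerspectiveTransform(src, dst)
# Nearest neighbour is enough per the question; INTER_LINEAR also works.
warped = cv2.warpPerspective(img, M, (400, 400), flags=cv2.INTER_NEAREST)
cv2.imwrite("normalized.jpg", warped)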
I'd like to implement a filter that allows resampling of an image by moving a number of control points that mark edges and tangent directions. The goal is to be able to freely transform an image as seen in Photoshop when you use "Free Transform" and choose the warp mode "Custom". The image is fitted into some kind of spline patch (if that is a valid name) that can be manipulated.
I understand how simple splines (paths) work but how do you connect them to form a patch?
And how can you sample such a patch to render the morphed image? For each pixel in the target I'd need to know what pixel in the source image corresponds. I don't even know where to start searching...
Any helpful info (keywords, links, papers, reference implementations) is greatly appreciated!
This document will get you a good insight into warping: http://www.gson.org/thesis/warping-thesis.pdf
However, this will include filtering out high frequencies, which will make the implementation a lot more complicated but will give a better result.
An easy way to accomplish what you want to do would be to loop through every pixel in your final image, plug the coordinates into your splines and retrieve the pixel in your original image. This pixel might have coordinates 0.4/1.2 so you could bilinearly interpolate between 0/1, 1/1, 0/2 and 1/2.
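Here is a rough NumPy sketch of that inverse-mapping loop (inverse_map is a stand-in for your spline evaluation, not something defined in the thesis):

import numpy as np

def warp(src, inverse_map, out_shape):
    h, w = out_shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            sx, sy = inverse_map(x, y)           # e.g. (0.4, 1.2)
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if 0 <= x0 < src.shape[1] - 1 and 0 <= y0 < src.shape[0] - 1:
                fx, fy = sx - x0, sy - y0
                # Bilinear interpolation between the four neighbouring pixels.
                top = (1 - fx) * src[y0, x0] + fx * src[y0, x0 + 1]
                bottom = (1 - fx) * src[y0 + 1, x0] + fx * src[y0 + 1, x0 + 1]
                out[y, x] = (1 - fy) * top + fy * bottom
    return out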
As for splines: there are many resources and solutions online for the 1D case. As for 2D it gets a bit trickier to find helpful resources.
A simple example for the 1D case: http://www-users.cselabs.umn.edu/classes/Spring-2009/csci2031/quad_spline.pdf
Here's a great guide for the 2D case: http://en.wikipedia.org/wiki/Bicubic_interpolation
Based on this you could derive your own spline scheme for the 2D case. Define a bivariate polynomial (in x and y) and set your constraints to solve for its coefficients.
Just keep in mind that the borders of the spline patches have to be consistent (both in value and derivative) to avoid ugly jumps.
Good luck!
I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code but I'm struggling with finding out what will create similar effects. The closest reference I could find was the iWarp filter in Gimp but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors, but it's simple enough to subtract 0.5 so that you can represent negative vectors too.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind, obviously; working out a proper liquify effect is quite complex and I'll leave it to someone more qualified).
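Not a proper liquify, but a rough NumPy/OpenCV sketch of the displacement-map idea (the sine-wave offsets are a made-up toy distortion; in a shader you would do the same per-pixel lookup in HLSL):

import cv2
import numpy as np

img = cv2.imread("input.png")
h, w = img.shape[:2]

# Distortion map: two float channels holding signed x/y offsets in pixels
# (the "subtract 0.5" trick from above, already applied here).
offset_x = np.zeros((h, w), np.float32)
offset_y = np.zeros((h, w), np.float32)
offset_x[:, :] = 10 * np.sin(np.linspace(0, 4 * np.pi, w))   # toy distortion

# For every output pixel, sample the input at (x + offset, y + offset).
grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
warped = cv2.remap(img, grid_x + offset_x, grid_y + offset_y, cv2.INTER_LINEAR)
cv2.imwrite("warped.png", warped)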
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
Now, when the user clicks on a location and moves the mouse, he's changing that grid location.
The new grid is then projected back into the user's 2D viewable space.
Check this tutorial about a way to implement the Liquify filter with JavaScript. Basically, in the tutorial, the effect is achieved by transforming the pixel's Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
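A rough NumPy/OpenCV sketch of that polar-coordinate trick (the center point and radius are made-up parameters, not taken from the tutorial):

import cv2
import numpy as np

img = cv2.imread("input.png")
h, w = img.shape[:2]
cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0

# For each output pixel, compute polar coordinates (r, alpha) around the
# center, apply sqrt to the normalised radius, and sample the source there.
grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
dx, dy = grid_x - cx, grid_y - cy
r = np.sqrt(dx * dx + dy * dy)
alpha = np.arctan2(dy, dx)
r_new = np.where(r < radius, np.sqrt(r / radius) * radius, r)
map_x = (cx + r_new * np.cos(alpha)).astype(np.float32)
map_y = (cy + r_new * np.sin(alpha)).astype(np.float32)
distorted = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("distorted.png", distorted)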