How to convert cv2.addWeighted and cv2.GaussianBlur into MATLAB?

I have this Python code:
cv2.addWeighted(src1, 4, cv2.GaussianBlur(src1, (0, 0), 10), -4, 128)
How can I convert it to Matlab? So far I got this:
f = imread0('X.jpg');
g = imfilter(f, fspecial('gaussian',[size(f,1),size(f,2)],10));
alpha = 4;
beta = -4;
f1 = f*alpha+g*beta+128;
I want to subtract local mean color image.
Input image:
Blending output from OpenCV:

The documentation for cv2.addWeighted has the definition such that:
cv2.addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype]]) → dst
Also, the documentation states that the operation performed on the output is:
dst = saturate(src1*alpha + src2*beta + gamma)
(source: opencv.org)
Therefore, what your code is doing is exactly correct... at least for cv2.addWeighted. You take alpha and multiply it by the first image, take beta and multiply it by the second image, then add gamma on top. The only intricacy left to deal with is saturate, which means that any value falling outside the dynamic range of the data type gets capped at the nearest bound. Because negatives can occur in the result, saturating simply means setting any negative value to 0 and any value greater than the expected maximum to that maximum. In this case, you'll want to make any value larger than 1 equal to 1. As such, it's a good idea to convert your image to double through im2double, because you want the additions and subtractions beyond the dynamic range to happen first, and only then saturate. If you stick with the image's default precision (uint8), clipping happens during the intermediate arithmetic, before your own saturate step, and that will give you the wrong results. Because you're doing this double conversion, you'll also want to change the 128 you add for gamma to 0.5 to compensate.
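As a quick illustration of that premature clipping (a minimal sketch, not part of the original code; the values are made up):
%// In uint8, each intermediate result saturates to [0, 255], so information
%// is lost before the final saturate step. In double, the arithmetic runs
%// freely and saturation is applied once at the end.
a = uint8(200);
b = uint8(180);
bad  = a*4 - b*4 + 128;                          %// a*4 and b*4 both clip to 255 -> bad = 128
good = uint8(double(a)*4 - double(b)*4 + 128);   %// 800 - 720 + 128 -> good = 208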
Now, the only slight problem is your Gaussian blur. Looking at the documentation, by doing cv2.GaussianBlur(src1, (0, 0), 10), you are telling OpenCV to infer the mask size while the standard deviation is 10. MATLAB does not infer the size of the mask for you, so you need to do this yourself. A common practice is to take six times the standard deviation, take the floor and add 1, for both the horizontal and vertical dimensions of the mask. You can see my post here for the justification of why this is common practice: By which measures should I set the size of my Gaussian filter in MATLAB?
Therefore, in MATLAB, you would do this with your Gaussian blur instead. BTW, it's simply imread, not imread0:
f = im2double(imread('http://i.stack.imgur.com/kl3Md.jpg')); %// Change - Reading image directly from StackOverflow
sigma = 10; %// Change
sz = 1 + floor(6*sigma); %// Change
g = imfilter(f, fspecial('gaussian', sz, sigma)); %// Change
%// Rest of the code is the same
alpha = 4;
beta = -4;
f1 = f*alpha+g*beta+0.5; %// Change
%// Saturate
f1(f1 > 1) = 1;
f1(f1 < 0) = 0;
I get this image:
Take note that there is a slight difference in the way this appears between OpenCV and MATLAB... especially the haloing around the eye. This is because OpenCV does something different when inferring the mask size for the Gaussian blur. I'm not sure exactly what it does, but specifying the mask size from the standard deviation as shown above is one of the most common heuristics. Play around with the standard deviation until you get something you like.
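If you want to experiment, a small sketch like this (using the variables from the code above) will sweep a few standard deviations and display each result:
%// Sketch - try a few standard deviations and compare the results visually
for sigma = [5 10 15 20]
    sz = 1 + floor(6*sigma);
    g = imfilter(f, fspecial('gaussian', sz, sigma));
    f1 = 4*f - 4*g + 0.5;
    f1 = min(max(f1, 0), 1);  %// Saturate to [0, 1]
    figure; imshow(f1); title(['sigma = ' num2str(sigma)]);
end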

Related

Average set of color images and standard deviation

I am learning image analysis and trying to average set of color images and get standard deviation at each pixel
I have done this, but not by averaging each RGB channel separately (e.g. rchannel = I(:,:,1)).
filelist = dir('dir1/*.jpg');
ims = zeros(215, 300, 3);
for i=1:length(filelist)
    imname = ['dir1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    ims = ims + rgbim;
end
avgset1 = ims/length(filelist);
figure;
imshow(avgset1);
I am not sure if this is correct. I am confused as to how averaging images is useful.
Also, I couldn't get the matrix holding standard deviation.
Any help is appreciated.
If you are concerned with finding the mean RGB image, then your code is correct. What I like is that you converted the images using im2double before accumulating the mean, so everything is in double precision. As Parag said, finding the mean image is very useful, especially in machine learning. It is common to find the mean image of a set of images before doing image classification, as it allows the dynamic range of each pixel to be within a normalized range. This helps the training of the learning algorithm converge quickly to the optimum solution and provide the best set of parameters to facilitate the best classification accuracy.
If you want to find the mean RGB colour which is the average colour over all images, then no your code is not correct.
You have summed over all channels individually, and that sum is stored in sumrgbims, so what's left is to take this image, sum over each spatial dimension, and divide by the total number of pixels and images so that the sum becomes an average. Two calls to sum over the first and second dimensions, chained together, will do it. This produces a 1 x 1 x 3 vector, so use squeeze afterwards to remove the singleton dimensions and get a 3 x 1 vector representing the mean RGB colour over all images.
Therefore:
mean_colour = squeeze(sum(sum(sumrgbims, 1), 2)) / (size(sumrgbims,1) * size(sumrgbims,2) * length(filelist));
To address your second question, I'm assuming you want to find the standard deviation of each pixel value over all images. What you will have to do is accumulate the square of each image in addition to accumulating each image inside the loop. After that, recall that the standard deviation is the square root of the variance, and the variance is the mean of the squares minus the square of the mean. We have the mean image, so you just have to square the mean image and subtract this from the average sum of squares. Just to be sure our math is right, suppose we have a signal X with mean mu and N values; the variance is then:
variance = (1/N) * sum(X.^2) - mu^2    (source: Science Buddies)
The standard deviation would simply be the square root of the above result. We would thus calculate this for each pixel independently. Therefore you can modify your loop to do that for you:
filelist = dir('set1/*.jpg');
sumrgbims = zeros(215, 300, 3);
sum2rgbims = sumrgbims; % New - for standard deviation
for i=1:length(filelist)
    imname = ['set1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    sumrgbims = sumrgbims + rgbim;
    sum2rgbims = sum2rgbims + rgbim.^2; % New
end
rgbavgset1 = sumrgbims/length(filelist);
% New - find standard deviation
rgbstdset1 = ((sum2rgbims / length(filelist)) - rgbavgset1.^2).^(0.5);
figure;
imshow(rgbavgset1, []);
% New - display standard deviation image
figure;
imshow(rgbstdset1, []);
Also, note that I've scaled the display in each imshow call so that the smallest value gets mapped to 0 and the largest value gets mapped to 1. This does not change the actual contents of the images; it is just for display purposes.

Laplacian Image Filtering and Sharpening Images in MATLAB

I am trying to "translate" what's mentioned in Gonzalez and Woods (2nd Edition) about the Laplacian filter.
I've read in the image and created the filter. However, when I try to display the result (by subtraction, since the center element is negative), I don't get the image shown in the textbook.
I think the main reason is the "scaling". However, I'm not sure how exactly to do that. From what I understand, some online resources say that the scaling is just so that the values are between 0 and 255. From my code, I see that the values are already within that range.
I would really appreciate any pointers.
Below is the original image I used:
Below is my code, and the resultant sharpened image.
Thanks!
clc;
close all;
a = rgb2gray(imread('e:\moon.png'));
lap = [1 1 1; 1 -8 1; 1 1 1];
resp = uint8(filter2(lap, a, 'same'));
sharpened = imsubtract(a, resp);
figure;
subplot(1,3,1);imshow(a); title('Original image');
subplot(1,3,2);imshow(resp); title('Laplacian filtered image');
subplot(1,3,3);imshow(sharpened); title('Sharpened image');
I have a few tips for you:
This is just a little thing, but filter2 performs correlation. You actually need to perform convolution, which rotates the kernel by 180 degrees before performing the weighted sum between neighbourhoods of pixels and the kernel. However, because the kernel is symmetric, convolution and correlation produce the same result in this case (see the quick check after these tips).
I would recommend you use imfilter to facilitate the filtering as you are using methods from the Image Processing Toolbox already. It's faster than filter2 or conv2 and takes advantage of the Intel Integrated Performance Primitives.
I highly recommend you do everything in double precision first, then convert back to uint8 when you're done. Use im2double to convert your image (most likely uint8) to double precision. Doing the sharpening in double maintains precision; prematurely casting to uint8 and then performing the subtraction will give you unintended side effects. uint8 caps results that are negative or beyond 255, and this may also be a reason why you're not getting the right results. Therefore, convert the image to double, filter the image, sharpen the result by subtracting the filtered (Laplacian) response from the image, and then convert back to uint8 with im2uint8.
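As a quick check of the first tip (a small sketch, not part of the original code; any grayscale test image will do):
%// A symmetric kernel is unchanged by the 180 degree flip, so correlation
%// and convolution give the same filtered output.
lap = [1 1 1; 1 -8 1; 1 1 1];
isequal(lap, rot90(lap, 2))                %// true - flipping is a no-op
a = im2double(imread('cameraman.tif'));    %// any test image will do
r_corr = imfilter(a, lap);                 %// imfilter defaults to correlation
r_conv = imfilter(a, lap, 'conv');         %// explicit convolution
max(abs(r_corr(:) - r_conv(:)))            %// 0, up to floating point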
You've also provided a link to the pipeline that you're trying to imitate: http://www.idlcoyote.com/ip_tips/sharpen.html
The differences between your code and the link are:
The kernel has a positive centre. Therefore the 1s are negative while the centre is +8 and you'll have to add the filtered result to the original image.
In the link, they normalize the filtered response so that the minimum is 0 and the maximum is 1.
Once you add the filtered response onto the original image, you also normalize this result so that the minimum is 0 and the maximum is 1.
You perform a linear contrast enhancement so that intensity 60 becomes the new minimum and intensity 200 becomes the new maximum. You can use imadjust to do this. The function takes in an image as well as two arrays - The first array is the input minimum and maximum intensity and the second array is where the minimum and maximum should map to. As such, I'd like to map the input intensity 60 to the output intensity 0 and the input intensity 200 to the output intensity 255. Make sure the intensities specified are between 0 and 1 though so you'll have to divide each quantity by 255 as stated in the documentation.
As such:
clc;
close all;
a = im2double(imread('moon.png')); %// Read in your image
lap = [-1 -1 -1; -1 8 -1; -1 -1 -1]; %// Change - Centre is now positive
resp = imfilter(a, lap, 'conv'); %// Change
%// Change - Normalize the response image
minR = min(resp(:));
maxR = max(resp(:));
resp = (resp - minR) / (maxR - minR);
%// Change - Adding to original image now
sharpened = a + resp;
%// Change - Normalize the sharpened result
minA = min(sharpened(:));
maxA = max(sharpened(:));
sharpened = (sharpened - minA) / (maxA - minA);
%// Change - Perform linear contrast enhancement
sharpened = imadjust(sharpened, [60/255 200/255], [0 1]);
figure;
subplot(1,3,1);imshow(a); title('Original image');
subplot(1,3,2);imshow(resp); title('Laplacian filtered image');
subplot(1,3,3);imshow(sharpened); title('Sharpened image');
I get this figure now... which seems to agree with the figures seen in the link:

High Pass Butterworth Filter on images in MATLAB

I need to implement a high pass Butterworth filter in MATLAB for the purposes of image filtering. I have implemented one but it looks like it doesn't work. Here is the code I have written. Can anyone tell me what is wrong?
n=1;
d=50;
A=1.5;
im=imread('imagex.jpg');
h=size(im,1);
w=size(im,2);
[x y]=meshgrid(-floor(w/2):floor(w-1/2),-floor(h/2):floor(h-1/2));
hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
image_2Dfilter=fftshift(fft2(im));
Image_butterworth=image_2Dfilter;
imshow(Image_butterworth);
ifftshow(Image_butterworth);
For one thing, there is no such command called ifftshow. Secondly, you aren't filtering anything. All you're doing is visualizing the spectrum of the image.
In terms of visualizing the spectrum, how you're doing it right now is very dangerous. You are visualizing the coefficients at each spatial frequency component, which are complex-valued in nature. If you want to visualize the spectrum in a way that makes sense to most of us, it's better to take a look at either the magnitude or the phase. Since a Butterworth filter acts on the magnitude of the spectrum, it's best to look at the magnitude here.
You can find the magnitude of the spectrum by using the abs function. Even when you do that, if you did imshow directly on the magnitude, you will get a visualization that is zero everywhere except for the middle. This is because the DC component is so large and the rest of the spectrum is small in comparison.
Let me show you an example. This is the cameraman image that is part of the image processing toolbox:
im = imread('cameraman.tif');
figure;
imshow(im);
Now, let's visualize the spectrum, ensuring that the DC component is in the centre of the image - you already did this with fftshift. It's also a good idea to cast the image to double to ensure the best precision of the data. In addition, make sure you apply abs to find the magnitude:
fftim = fftshift(fft2(double(im)));
mag = abs(fftim);
figure;
imshow(mag, []);
As you can see, it's not very useful due to the reason that I mentioned. A better way to visualize the spectrum of the image is usually to apply a log transformation to the spectrum. This is also useful if you want to de-mean or remove the mean so that the dynamic range fits better for display. In other words, you would add 1 to the magnitude, then apply a logarithm to the magnitude so that higher values can taper off. It doesn't matter which base you use, so I'll just use the natural logarithm which is encapsulated by the log command:
figure;
imshow(log(1 + mag), []);
Now that's much better. Let's get onto your filtering mechanism. Your Butterworth filter is slightly incorrect: the meshgrid of coordinates is slightly wrong, because the -1 at the end of each interval needs to go outside the floor call:
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
Remember, you are defining a symmetric interval about the centre of the image, and what you had originally wasn't correct. I'd also like to mention that this is a high-pass filter, so the output should look like an edge detection. In addition, the definition of the Butterworth high-pass filter is incorrect. The correct definition of the filter in the frequency domain is:
H(u,v) = 1 / (1 + B * (D0 / D(u,v))^(2n))
D(u,v) is the distance from the centre of the image in the frequency domain, D0 is the cutoff distance and B is a scale factor controlling the desired gain at the cutoff distance. n is the order of the filter. D0 in your case is d = 50. In practice, B = sqrt(2) - 1 so that at the cutoff distance D0, H(u,v) = 1 / sqrt(2) = 0.707, which is the 3 dB cutoff frequency mostly seen in electronic circuit filters. Sometimes you'll see B set to 1 for simplicity, but it's common to set B = sqrt(2) - 1.
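As a quick numeric check of that cutoff gain (a sketch, not part of the original answer):
%// At the cutoff distance D = D0, (D0/D)^(2n) = 1, so the gain becomes
%// 1/(1 + B) = 1/sqrt(2), roughly -3 dB, regardless of the order n.
n  = 1;
D0 = 50;
B  = sqrt(2) - 1;
H_cutoff = 1 / (1 + B * (D0/D0)^(2*n))     %// 0.7071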
However, your current code isn't doing any filtering. To filter in the frequency domain, you simply multiply the spectrum of the image with the spectrum of the filter itself. This is equivalent to convolution in the spatial domain. Once you do that, you simply undo the fftshift that was performed on the image, take the inverse FFT and then eliminate any imaginary components that are due to numerical imprecision. Also, let's cast to uint8 to make sure that we respect the original image type.
That can be done like so:
%// Your code with meshgrid fix
n=1;
d=50;
h=size(im,1);
w=size(im,2);
fftim = fftshift(fft2(double(im)));
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
%hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
%%%%%%// New code
B = sqrt(2) - 1; %// Define B
D = sqrt(x.^2 + y.^2); %// Define distance to centre
hhp = 1 ./ (1 + B * ((d ./ D).^(2 * n)));
out_spec_centre = fftim .* hhp;
%// Uncentre spectrum
out_spec = ifftshift(out_spec_centre);
%// Inverse FFT, get real components, and cast
out = uint8(real(ifft2(out_spec)));
%// Show image
imshow(out);
If you want to see what the filtered spectrum looks like, just do this:
figure;
imshow(log(1 + abs(out_spec_centre)), []);
We get:
This makes sense. You see that in the middle of the spectrum, it's slightly darker in comparison to the outer edges of the spectrum. That's because with the high-pass Butterworth filter, you are amplifying the higher frequency terms and it gets visualized to be a higher intensity.
Now, out contains your filtered image, and we finally get this:
That looks like a fine result! However, naively casting the image to uint8 truncates any negative values to 0 and any values greater than 255 to 255. Because this is an edge detection, you want to detect both the negative and positive transitions... so a good idea would be to normalize the output so that it ranges from [0,1], then multiply by 255 and cast to uint8. This way, no change in the image is visualized as gray, negative changes are visualized as dark and positive changes are visualized as white... so you'd do something like this:
%// Your code with meshgrid fix
n=1;
d=50;
h=size(im,1);
w=size(im,2);
fftim = fftshift(fft2(double(im)));
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
%hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
%%%%%%// New code
B = sqrt(2) - 1; %// Define B
D = sqrt(x.^2 + y.^2); %// Define distance to centre
hhp = 1 ./ (1 + B * ((d ./ D).^(2 * n)));
out_spec_centre = fftim .* hhp;
%// Uncentre spectrum
out_spec = ifftshift(out_spec_centre);
%// Inverse FFT, get real components
out = real(ifft2(out_spec));
%// Normalize and cast
out = (out - min(out(:))) / (max(out(:)) - min(out(:)));
out = uint8(255*out);
%// Show image
imshow(out);
We get this:
I think that you should work a little bit differently:
n=1;
D0=50; % renamed from d0; d usually denotes the distance (u^2+v^2)^(1/2)
A=1.5; % normally the amplitude is 1
im=imread('cameraman.jpg');
[M,N]=size(im); % an easy way to get the height and width
% compute the 2D Fourier transform in order to multiply
F=fft2(double(im));
% compute your filter over an M x N grid (the real part is taken after the inverse FFT)
u=0:(M-1);
v=0:(N-1);
idx=find(u>M/2);
u(idx)=u(idx)-M;
idy=find(v>N/2);
v(idy)=v(idy)-N;
[V,U]=meshgrid(v,u);
D=sqrt(U.^2+V.^2);
H =A * (1./(1 + (D0./D).^(2*n)));
% multiply element by element
G=H.*F;
g=real(ifft2(double(G)));
subplot(1,2,1); imshow(im); title('Input image');
subplot(1,2,2); imshow(g,[ ]); title('filtered image');

Resize an image with bilinear interpolation without imresize

I've found some methods to enlarge an image but there is no solution to shrink an image. I'm currently using the nearest neighbor method. How could I do this with bilinear interpolation without using the imresize function in MATLAB?
In your comments, you mentioned you wanted to resize an image using bilinear interpolation. Bear in mind that the bilinear interpolation algorithm is size independent. You can very well use the same algorithm for enlarging an image as well as shrinking an image. The right scale factors to sample the pixel locations are dependent on the output dimensions you specify. This doesn't change the core algorithm by the way.
Before I start with any code, I'm going to refer you to Richard Alan Peters II's digital image processing slides on interpolation, specifically slide #59. It has a great illustration as well as pseudocode on how to do bilinear interpolation that is MATLAB friendly. To be self-contained, I'm going to include his slide here so we can follow along and code it:
Please be advised that this only resamples the image. If you actually want to match MATLAB's output, you need to disable anti-aliasing.
MATLAB by default will perform anti-aliasing on the images to ensure the output looks visually pleasing. If you'd like to compare apples with apples, make sure you disable anti-aliasing when comparing between this implementation and MATLAB's imresize function.
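For reference, imresize exposes that switch through a name-value pair; here's a small sketch (using the onion.png example that appears later):
%// Resize with MATLAB's own bilinear method but with anti-aliasing turned
%// off, so the comparison against the implementation below is like-for-like.
im  = imread('onion.png');
ref = imresize(im, [68 99], 'bilinear', 'Antialiasing', false);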
Let's write a function that will do this for us. This function will take in an image read in through imread (which can be either colour or grayscale) and a two-element array specifying the output dimensions of the final resized image. The first element of this array is the number of rows and the second element is the number of columns. We will simply go through the algorithm and calculate the output pixel colours / grayscale values using the pseudocode:
function [out] = bilinearInterpolation(im, out_dims)
%// Get some necessary variables first
in_rows = size(im,1);
in_cols = size(im,2);
out_rows = out_dims(1);
out_cols = out_dims(2);
%// Let S_R = R / R'
S_R = in_rows / out_rows;
%// Let S_C = C / C'
S_C = in_cols / out_cols;
%// Define grid of co-ordinates in our image
%// Generate (x,y) pairs for each point in our image
[cf, rf] = meshgrid(1 : out_cols, 1 : out_rows);
%// Let r_f = r'*S_R for r = 1,...,R'
%// Let c_f = c'*S_C for c = 1,...,C'
rf = rf * S_R;
cf = cf * S_C;
%// Let r = floor(rf) and c = floor(cf)
r = floor(rf);
c = floor(cf);
%// Any values out of range, cap
r(r < 1) = 1;
c(c < 1) = 1;
r(r > in_rows - 1) = in_rows - 1;
c(c > in_cols - 1) = in_cols - 1;
%// Let delta_R = rf - r and delta_C = cf - c
delta_R = rf - r;
delta_C = cf - c;
%// Final line of algorithm
%// Get column major indices for each point we wish
%// to access
in1_ind = sub2ind([in_rows, in_cols], r, c);
in2_ind = sub2ind([in_rows, in_cols], r+1,c);
in3_ind = sub2ind([in_rows, in_cols], r, c+1);
in4_ind = sub2ind([in_rows, in_cols], r+1, c+1);
%// Now interpolate
%// Go through each channel for the case of colour
%// Create output image that is the same class as input
out = zeros(out_rows, out_cols, size(im, 3));
out = cast(out, class(im));
for idx = 1 : size(im, 3)
    chan = double(im(:,:,idx)); %// Get i'th channel
    %// Interpolate the channel
    tmp = chan(in1_ind).*(1 - delta_R).*(1 - delta_C) + ...
          chan(in2_ind).*(delta_R).*(1 - delta_C) + ...
          chan(in3_ind).*(1 - delta_R).*(delta_C) + ...
          chan(in4_ind).*(delta_R).*(delta_C);
    out(:,:,idx) = cast(tmp, class(im));
end
Take the above code, copy and paste it into a file called bilinearInterpolation.m and save it. Make sure you change your working directory where you've saved this file.
Except for sub2ind and perhaps meshgrid, everything seems to be in accordance with the algorithm. meshgrid is very easy to explain. All you're doing is specifying a 2D grid of (x,y) co-ordinates, where each location in your image has a pair of (x,y) or column and row co-ordinates. Creating a grid through meshgrid avoids any for loops as we will have generated all of the right pixel locations from the algorithm that we need before we continue.
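For instance, a tiny illustration of what meshgrid hands back (not part of the function above):
%// Every (column, row) pair of a 2 x 3 grid, generated in one call
[cf, rf] = meshgrid(1:3, 1:2)
%// cf =              rf =
%//     1  2  3           1  1  1
%//     1  2  3           2  2  2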
How sub2ind works is that it takes in a row and column location in a 2D matrix (well... it can really be any number of dimensions you want), and it outputs a single linear index. If you're not aware of how MATLAB indexes into matrices, there are two ways you can access an element in a matrix. You can use the row and column to get what you want, or you can use a column-major index. Take a look at this matrix example I have below:
A =
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
If we want to access the number 9, we can do A(2,4) which is what most people tend to default to. There is another way to access the number 9 using a single number, which is A(11)... now how is that the case? MATLAB lays out the memory of its matrices in column-major format. This means that if you were to take this matrix and stack all of its columns together in a single array, it would look like this:
A =
1
6
11
2
7
12
3
8
13
4
9
14
5
10
15
Now, if you want to access element number 9, you would need to access the 11th element of this array. Going back to the interpolation bit, sub2ind is crucial if you want to vectorize accessing the elements in your image to do the interpolation without any for loops. If you look at the last line of the pseudocode, we want to access elements at r, c, r+1 and c+1. Note that all of these are 2D arrays, where each element in each of the matching locations in all of these arrays tells us the four pixels we need to sample from in order to produce the final output pixel. The output of sub2ind is a set of 2D arrays of the same size as the output image. The key here is that sub2ind applied to r, c, r+1 and c+1 gives us the column-major indices into the image, and using these as indices into the image gives us exactly the pixel values we want.
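Tying that back to the 3 x 5 matrix above (a small check, not part of the function):
%// sub2ind converts (row, column) subscripts into column-major indices,
%// and it accepts whole arrays of subscripts at once.
sub2ind([3 5], 2, 4)            %// returns 11, so A(2,4) and A(11) agree
sub2ind([3 5], [2 3], [4 5])    %// returns [11 15] - vectorized lookup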
There are some important subtleties I'd like to add when implementing the algorithm:
You need to make sure that any indices used to access the image when interpolating are kept in range so you don't go out of bounds: anything below 1 is set to 1. For the right and bottom edges you cap at one below the maximum row or column, because the interpolation also accesses the pixel one over to the right and one below. This makes sure that you're still within bounds.
You also need to make sure that the output image is cast to the same class as the input image.
I ran through a for loop to interpolate each channel on its own. You could do this intelligently using bsxfun, but I decided to use a for loop for simplicity, and so that you are able to follow along with the algorithm.
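For reference, here is a sketch of what that loop-free version could look like (it reuses the variables computed inside bilinearInterpolation, uses implicit expansion rather than bsxfun, and therefore assumes MATLAB R2016b or newer):
%// Offset each channel's column-major indices by one image plane so that
%// all channels are interpolated in a single expression.
nChan   = size(im, 3);
offsets = reshape((0:nChan-1) * in_rows * in_cols, 1, 1, []);
imd     = double(im);
tmp = imd(in1_ind + offsets) .* (1 - delta_R) .* (1 - delta_C) + ...
      imd(in2_ind + offsets) .* delta_R       .* (1 - delta_C) + ...
      imd(in3_ind + offsets) .* (1 - delta_R) .* delta_C       + ...
      imd(in4_ind + offsets) .* delta_R       .* delta_C;
out = cast(tmp, class(im));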
As an example to show this works, let's use the onion.png image that is part of MATLAB's system path. The original dimensions of this image are 135 x 198. Let's interpolate this image by making it larger, going to 270 x 396 which is twice the size of the original image:
im = imread('onion.png');
out = bilinearInterpolation(im, [270 396]);
figure;
imshow(im);
figure;
imshow(out);
The above code will interpolate the image by increasing each dimension by twice as much, then show a figure with the original image and another figure with the scaled up image. This is what I get for both:
Similarly, let's shrink the image down by half as much:
im = imread('onion.png');
out = bilinearInterpolation(im, [68 99]);
figure;
imshow(im);
figure;
imshow(out);
Note that half of 135 is 67.5 for the rows, but I rounded up to 68. This is what I get:
One thing I've noticed in practice is that upsampling with bilinear interpolation has decent performance in comparison to other schemes like bicubic... or even Lanczos. However, when you're shrinking an image, because you're removing detail, nearest neighbour is very much sufficient; I find bilinear or bicubic to be overkill. I'm not sure what your application is, but play around with the different interpolation algorithms and see what you like out of the results. Bicubic is another story, and I'll leave that to you as an exercise. The slides I referred you to do have material on bicubic interpolation if you're interested.
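If you just want to do that comparison quickly, imresize exposes the usual methods; here's a small sketch (not part of the function above):
%// Shrink the same image with three common interpolation methods and
%// compare the results side by side.
im = imread('onion.png');
out_nn  = imresize(im, [68 99], 'nearest');
out_bil = imresize(im, [68 99], 'bilinear');
out_bic = imresize(im, [68 99], 'bicubic');
figure;
subplot(1,3,1); imshow(out_nn);  title('Nearest neighbour');
subplot(1,3,2); imshow(out_bil); title('Bilinear');
subplot(1,3,3); imshow(out_bic); title('Bicubic');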
Good luck!

Comparing 2 images intensities

I have two images, 1 and 2. I want to get the V (intensity) value of the HSV images; then I want to make the V (intensity) values of the first image match the V (intensity) values of the second image.
I used this code to get the v
v = image1(:, :, 3);
u = image2(:, :, 3);
How do I make both u and v the same value?
Thanks,
It definitely sounds like you want to do some kind of histogram equalization. As a first attempt you can try Matlab's histeq function. (You should also read the documentation for imhist.) To make the intensity values in image1 more closely match those in image2 (they will almost certainly never become identical) you would do something like this:
v = image1(:, :, 3);
u = image2(:, :, 3);
targetHist = imhist(u);
newV = histeq(v, targetHist);
newImage1 = cat(3, image1(:, :, 1), image1(:, :, 2), newV);
% or, alternatively %
image1(:, :, 3) = newV;
If this technique doesn't give you the results you need, there are other methods of histogram equalization that you can use, including adaptive techniques that equalize intensities by regions.
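One such adaptive technique available in the Image Processing Toolbox is CLAHE via adapthisteq; a minimal sketch (tune the parameters to your data):
% Contrast-limited adaptive histogram equalization works on local tiles
% rather than the whole image, which helps when lighting varies by region.
newV = adapthisteq(v);      % default 8 x 8 tiles
image1(:, :, 3) = newV;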
Edit:
Here's a little about what the code is doing. For more information you can look at the Algorithm section of the link to histeq I gave above.
Given a reference image (in this case image2) in HSV format, we're taking the Value channel (or Intensity, or Brightness channel) and using imhist to divide the intensity levels in the image into 256 bins (by default), with each bin containing the number of pixels that have that intensity value.
histeq in this usage actually performs histogram matching: it computes the histogram of the input image and transforms its intensities so that the result's histogram approximately matches the target histogram. For instance, if the target histogram has all of its bins empty except for bin 100, which has 300 pixels, and your image has all of its bins empty except for bin 80, which has 300 pixels, histeq will increase the intensity of each pixel in the image by 20. A very simplistic example, but hopefully you get the idea.
It would be very helpful to plot the three histograms - the V channel of image1, the V channel of image2, and the equalized result - to see how the intensity levels of image1 have changed:
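A minimal sketch of that comparison (variable names follow the snippet above):
% Plot the value-channel histograms before matching, the target histogram,
% and the matched result.
figure;
subplot(1,3,1); imhist(v);    title('image1 V channel (before)');
subplot(1,3,2); imhist(u);    title('image2 V channel (target)');
subplot(1,3,3); imhist(newV); title('image1 V channel (after histeq)');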
