For a matrix (say A), using skewness(A(:)) we can easily get the skewness of the whole matrix. But when I do the same thing for an image (which is also a matrix), it's not working.
Say I'm running the following code:
>> I=imread('lenna.jpg');
>> s=skewness(I(:))
The error is:
Integers can only be combined with integers of the same class, or scalar doubles.
Error in ==> skewness at 39
x0 = x - repmat(nanmean(x,dim), tile);
I is of type uint8 after imread(); you can convert it to double first using im2double().
Try
>> I=imread('lenna.jpg');
>> I2 = im2double(I);
>> s=skewness(I2(:))
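As a side note, skewness is invariant to positive linear rescaling, so a plain double cast should give the same answer as im2double (which just divides uint8 values by 255). A quick sketch to confirm, reusing the image from the question:
>> I = imread('lenna.jpg');
>> s1 = skewness(double(I(:)))    % raw intensities, 0-255
>> s2 = skewness(im2double(I(:))) % same intensities scaled to [0,1]
s1 and s2 should agree up to round-off, since skewness(a*x + b) equals skewness(x) for any a > 0.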
I have two images to fuse, A and B, and I have fused them into an image F.
Now I want the difference between the fused image F and the image B.
I have executed my code, but I am not getting the desired result: the normalized difference image I get is not the required one.
[Images omitted: inputs A and B, the fused image F, the difference I get, and the required difference.]
The values of the difference image are normalized to the range 0 to 1. The code used is:
difference=F-B;
figure,imshow(difference);
normImage = mat2gray(difference);
figure,imshow(normImage);
Can anyone please help? Thank you.
Using:
R = mat2gray(im2double(F)-im2double(B));
My result is the required difference image.
To see why conversion to double is important, look at an area of the image where B(y,x) > F(y,x), such as (343, 280) in your sample images.
>> F(343,280)
ans = 32
>> B(343,280)
ans = 107
Mathematically, we'd expect 32-107 to equal -75, but:
>> F(343,280) - B(343,280)
ans = 0
This is because both F and B are arrays of uint8:
>> class(F)
ans = uint8
>> class(B)
ans = uint8
As an unsigned integer, uint8 can't take a negative value, so any attempt to assign a negative value to a uint8 variable results in 0. Since both operands are uint8, the result is uint8. Trying to cast that value to a double after it has already been clamped to within the range 0-255 would simply result in a double variable with a value of 0. (The same thing also happens at the upper end of the range. Try uint8(444).)
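You can see the clamping directly at the MATLAB prompt (standalone values, not taken from the images):
>> uint8(32) - uint8(107) % mathematically -75, clamped to 0
ans = 0
>> uint8(444)             % above 255, clamped to 255
ans = 255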
Casting F and B to a signed type (one big enough to hold the range -max to +max, or -255 to 255 in this case) will take care of the math problem:
>> int16(F(343,280)) - int16(B(343,280))
ans = -75
For images, though, casting to double feels more natural and gives you more precision than integers when you're doing calculations and rescaling. Plus, there's this handy im2double function we can use that not only casts the array to doubles, but rescales everything to be between 0 and 1:
>> Fd = im2double(F);
>> Fd(343,280)
ans = 0.1255 % 32.0/255.0
>> Bd = im2double(B);
>> Bd(343,280)
ans = 0.4196 % 107.0/255.0
But now when we try to subtract the two, we actually get a negative value as expected:
>> Fd(343,280) - Bd(343,280)
ans = -0.2941 % -75.0/255.0
So, im2double(F)-im2double(B) gives us double values between -1.0 and 1.0. mat2gray takes care of scaling those values back to a range of 0.0 to 1.0 for display.
Note: I chose the coordinates (343,280) very carefully because that's where F-B is most negative. If you're curious about how the conversion happens and what values get scaled to what, you can also have a look at (53,266).
I am writing a program where I need to delete duplicate points stored in a matrix. The problem is that when it comes to checking whether those points are in the matrix, MATLAB can't find them, although they exist.
In the following code, intersections function gets the intersection points:
[points(:,1), points(:,2)] = intersections(...
obj.modifiedVGVertices(1,:), obj.modifiedVGVertices(2,:), ...
[vertex1(1) vertex2(1)], [vertex1(2) vertex2(2)]);
The result:
>> points
points =
12.0000 15.0000
33.0000 24.0000
33.0000 24.0000
>> vertex1
vertex1 =
12
15
>> vertex2
vertex2 =
33
24
Two points (vertex1 and vertex2) should be eliminated from the result. This should be done by the commands below:
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
After doing that, we have this unexpected outcome:
>> points
points =
33.0000 24.0000
The outcome should be an empty matrix. As you can see, the first (or second?) pair of [33.0000 24.0000] has been eliminated, but not the second one.
Then I checked these two expressions:
>> points(1) ~= vertex2(1)
ans =
0
>> points(2) ~= vertex2(2)
ans =
1 % <-- It means 24.0000 is not equal to 24.0000?
What is the problem?
More surprisingly, I made a new script that has only these commands:
points = [12.0000 15.0000
33.0000 24.0000
33.0000 24.0000];
vertex1 = [12 ; 15];
vertex2 = [33 ; 24];
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
The result is as expected:
>> points
points =
Empty matrix: 0-by-2
The problem you're having relates to how floating-point numbers are represented on a computer. A more detailed discussion of floating-point representations appears towards the end of my answer (The "Floating-point representation" section). The TL;DR version: because computers have finite amounts of memory, numbers can only be represented with finite precision. Thus, the accuracy of floating-point numbers is limited to a certain number of decimal places (about 16 significant digits for double-precision values, the default used in MATLAB).
Actual vs. displayed precision
Now to address the specific example in the question... while 24.0000 and 24.0000 are displayed in the same manner, it turns out that they actually differ by very small decimal amounts in this case. You don't see it because MATLAB only displays 4 significant digits by default, keeping the overall display neat and tidy. If you want to see the full precision, you should either issue the format long command or view a hexadecimal representation of the number:
>> pi
ans =
3.1416
>> format long
>> pi
ans =
3.141592653589793
>> num2hex(pi)
ans =
400921fb54442d18
Initialized values vs. computed values
Since there are only a finite number of values that can be represented for a floating-point number, it's possible for a computation to result in a value that falls between two of these representations. In such a case, the result has to be rounded off to one of them. This introduces a small machine-precision error. This also means that initializing a value directly or by some computation can give slightly different results. For example, the value 0.1 doesn't have an exact floating-point representation (i.e. it gets slightly rounded off), and so you end up with counter-intuitive results like this due to the way round-off errors accumulate:
>> a=sum([0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]); % Sum 10 0.1s
>> b=1; % Initialize to 1
>> a == b
ans =
logical
0 % They are unequal!
>> num2hex(a) % Let's check their hex representation to confirm
ans =
3fefffffffffffff
>> num2hex(b)
ans =
3ff0000000000000
How to correctly handle floating-point comparisons
Since floating-point values can differ by very small amounts, any comparisons should be done by checking that the values are within some range (i.e. tolerance) of one another, as opposed to exactly equal to each other. For example:
a = 24;
b = 24.000001;
tolerance = 0.001;
if abs(a-b) < tolerance, disp('Equal!'); end
will display "Equal!".
You could then change your code to something like:
points = points((abs(points(:,1)-vertex1(1)) > tolerance) | ...
(abs(points(:,2)-vertex1(2)) > tolerance),:)
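As a side note, MATLAB R2015a and later provide ismembertol, which performs this kind of tolerance-based row comparison for you; a sketch for the duplicate-removal task (the tolerance value here is illustrative):
verts = [vertex1.'; vertex2.'];                       % points to remove, one per row
keep = ~ismembertol(points, verts, 1e-6, 'ByRows', true);
points = points(keep, :);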
Floating-point representation
A good overview of floating-point numbers (and specifically the IEEE 754 standard for floating-point arithmetic) is What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
A binary floating-point number is actually represented by three integers: a sign bit s, a significand (or coefficient/fraction) b, and an exponent e. For the double-precision floating-point format, each number is represented by 64 bits laid out in memory as follows: 1 sign bit (bit 63), 11 exponent bits (bits 62-52), and 52 significand bits (bits 51-0).
The real value of a normalized number can then be found with the following formula:
value = (-1)^s * (1 + b*2^-52) * 2^(e-1023)
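As a quick check of this formula, we can pull the three fields out of pi's bit pattern with typecast and bit operations (an illustrative decoding, not part of the original discussion):
>> u = typecast(pi, 'uint64');                          % raw 64 bits
>> s = double(bitshift(u, -63));                        % sign bit
>> e = double(bitand(bitshift(u, -52), uint64(2047))); % 11 exponent bits
>> b = double(bitand(u, uint64(2^52 - 1)));             % 52 significand bits
>> (-1)^s * (1 + b*2^-52) * 2^(e - 1023)
ans =
   3.141592653589793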
This format allows for number representations in the range 10^-308 to 10^308. For MATLAB you can get these limits from realmin and realmax:
>> realmin
ans =
2.225073858507201e-308
>> realmax
ans =
1.797693134862316e+308
Since there are a finite number of bits used to represent a floating-point number, there are only so many finite numbers that can be represented within the above given range. Computations will often result in a value that doesn't exactly match one of these finite representations, so the values must be rounded off. These machine-precision errors make themselves evident in different ways, as discussed in the above examples.
In order to better understand these round-off errors it's useful to look at the relative floating-point accuracy provided by the function eps, which quantifies the distance from a given number to the next largest floating-point representation:
>> eps(1)
ans =
2.220446049250313e-16
>> eps(1000)
ans =
1.136868377216160e-13
Notice that the precision is relative to the size of a given number being represented; larger numbers will have larger distances between floating-point representations, and will thus have fewer digits of precision following the decimal point. This can be an important consideration with some calculations. Consider the following example:
>> format long % Display full precision
>> x = rand(1, 10); % Get 10 random values between 0 and 1
>> a = mean(x) % Take the mean
a =
0.587307428244141
>> b = mean(x+10000)-10000 % Take the mean at a different scale, then shift back
b =
0.587307428244458
Note that when we shift the values of x from the range [0 1] to the range [10000 10001], compute a mean, then subtract the mean offset for comparison, we get a value that differs for the last 3 significant digits. This illustrates how an offset or scaling of data can change the accuracy of calculations performed on it, which is something that has to be accounted for with certain problems.
Look at this article: The Perils of Floating Point. Though its examples are in FORTRAN, it applies to virtually any modern programming language, including MATLAB. Your problem (and its solution) is described in the "Safe Comparisons" section.
Type
format long g
This command will show the full value of the number. It's likely to be something like 24.00000021321 != 24.00000123124.
Try writing:
0.1 + 0.1 + 0.1 == 0.3
Warning: you might be surprised by the result!
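(For the record, the sum of three double-precision 0.1s lands just above the closest double to 0.3, so the comparison is false:)
>> 0.1 + 0.1 + 0.1 == 0.3
ans =
     0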
Maybe the two numbers are really 24.0 and 24.000000001, but you're not seeing all the decimal places.
Check out the MATLAB eps function.
MATLAB uses floating-point math with up to 16 digits of precision (only 5 are displayed by default).
I am working on error control in WMSNs. I want to transmit a video through a binary symmetric channel with error probability p. So I have frames (images) in each GOP, each represented by a matrix.
Each matrix element has a decimal value which might be positive or negative. As explained here, I need to convert this whole matrix to a binary stream. I used
reshape(dec2bin(typecast(b,'uint8'),8).',1,[])
to convert the elements to a binary stream, but I cannot get back the exact numbers using
typecast(uint8(bin2dec(reshape(m,8,[]).')),'double')
Also, I think that to get the right bit error rate I have to convert the whole matrix to a single bit stream, which I'm not sure how to do, and then convert it back to a matrix of measured image values again.
I think you need
m = reshape(dec2bin(typecast(b(:),'uint8'),8).',1,[]);
Note that this reads the matrix in Matlab's standard, column-major order (down, then across).
Then you can convert back with
b_recovered = reshape(typecast(uint8(bin2dec(reshape(m,8,[]).')),'double'),size(b));
Since typecast converts data types without changing the underlying data, this process entails no loss of accuracy. For example,
>> b = randn(2,3)
b =
-0.241247174335006 0.540703471823211 0.526269662140438
0.908207564087271 -0.507829312416083 -1.067884765919437
>> m = reshape(dec2bin(typecast(b(:),'uint8'),8).',1,[])
m =
101100101011100000000010111110100010111111100001110011101011111101001010100000100011011101001111000010010001000011101101001111110010100100001111000010100101111001110001010011011110000100111111001011101101111100011000010000100010001101000000111000001011111100010010101001010111100001111001001100111101011111100000001111110101110001010100000110000101011000001110000101101111000110111111
>> b_recovered = reshape(typecast(uint8(bin2dec(reshape(m,8,[]).')),'double'),size(b))
b_recovered =
-0.241247174335006 0.540703471823211 0.526269662140438
0.908207564087271 -0.507829312416083 -1.067884765919437
>> b==b_recovered
ans =
2×3 logical array
1 1 1
1 1 1
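To simulate the binary symmetric channel itself, once you have the bit stream m you can flip each bit independently with probability p. A minimal sketch (p and the '0'/'1' char representation follow the question; the rest is illustrative):
p = 0.01;                        % channel bit-error probability
bits = (m == '1');               % '0'/'1' char stream -> logical bit vector
flips = rand(size(bits)) < p;    % which bits the channel corrupts
noisy = xor(bits, flips);        % flip the selected bits
mNoisy = char('0' + noisy);      % back to a '0'/'1' char stream
ber = nnz(flips)/numel(bits)     % empirical bit error rate
Bear in mind that a flipped bit in the exponent field of an IEEE 754 double can change the decoded value enormously, so distortion measured on the decoded matrix can behave very differently from the raw bit error rate.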
I am trying to experiment with Eigen's JacobiSVD. In particular, I am trying to reconstruct the input matrix from its singular value decomposition (http://eigen.tuxfamily.org/dox/classEigen_1_1JacobiSVD.html).
Eigen::MatrixXf m = Eigen::MatrixXf::Random(3,3);
Eigen::JacobiSVD<Eigen::MatrixXf, Eigen::NoQRPreconditioner> svd(m, Eigen::ComputeFullU | Eigen::ComputeFullV);
Eigen::VectorXf SVec = svd.singularValues();
Eigen::MatrixXf S = Eigen::MatrixXf::Identity(3,3);
S(0,0) = SVec(0);
S(1,1) = SVec(1);
S(2,2) = SVec(2);
Eigen::MatrixXf recon = svd.matrixU() * S * svd.matrixV().transpose();
cout<< "diff : \n"<< m - recon << endl;
I know that internally the SVD is computed by an iterative method, so I can never get a perfect reconstruction. The errors are on the order of 10^-7. With the above code the output is:
diff :
9.53674e-07 1.2517e-06 -2.98023e-07
-4.47035e-08 1.3113e-06 8.9407e-07
5.96046e-07 -9.53674e-07 -7.7486e-07
For my application this error is too high; I am aiming for an error in the range 10^-10 to 10^-12. My question is how to set the threshold for the decomposition.
NOTE: In the docs I noticed that there is a method setThreshold(), but it clearly states that this does not set a threshold for the decomposition, but for the comparison of singular values with zero.
NOTE: As far as possible, I do not wish to go to double. Is it even possible with float?
A single-precision floating-point number (a 32-bit float) has between six and nine significant decimal digits, so your requirement of 10^-10 is impossible (assuming the values are around 0.5f). A double-precision floating-point number (a 64-bit double) has 15-17 significant decimal digits, so it should work as long as the values aren't on the order of 10^6 or larger.
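You can check this limit directly from the machine epsilon (shown here in MATLAB, consistent with the rest of this page); the ~10^-7 errors observed above are exactly the spacing of single-precision values near 1:
>> eps(single(1)) % spacing of single-precision floats near 1
ans = 1.1921e-07
>> eps(1)         % spacing of double-precision floats near 1
ans = 2.2204e-16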
I don't have enough memory to simply create a diagonal D-by-D matrix, since D is large. I keep getting an 'out of memory' error.
Instead of performing the M x D x D operations of the first multiplication, I perform only M x D operations, but my code still takes ages to run.
Can anybody find a more effective way to perform the multiplication A'*B*A? Here's what I've attempted so far:
D = 20000;
M = 25;
A = floor(rand(D,M)*10);
B = floor(rand(1,D)*10);
result = zeros(D,M);   % preallocate so the loop doesn't grow the array
for i = 1:D
    for j = 1:M
        result(i,j) = A(i,j) * B(1,i);   % scale row i of A by the i-th diagonal element
    end
end
manual = result' * A;   % M-by-M
auto = A'*diag(B)*A;    % M-by-M, but forms the D-by-D diag(B)
isequal(manual,auto)
One option that should solve your problem is using sparse matrices. Here's an example:
D = 20000;
M = 25;
A = floor(rand(D,M).*10); %# A D-by-M matrix
diagB = rand(1,D).*10; %# Main diagonal of B
B = sparse(1:D,1:D,diagB); %# A sparse D-by-D diagonal matrix
result = (A.'*B)*A; %# An M-by-M result
Another option would be to replicate the D elements along the main diagonal of B to create an M-by-D matrix using the function REPMAT, then use element-wise multiplication with A.':
B = repmat(diagB,M,1); %# Replicate diagB to create an M-by-D matrix
result = (A.'.*B)*A; %# An M-by-M result
And yet another option would be to use the function BSXFUN:
result = bsxfun(@times,A.',diagB)*A; %# An M-by-M result
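On MATLAB R2016b and later, implicit expansion makes the bsxfun call unnecessary; assuming the same A and diagB as above:
result = (A.' .* diagB) * A; %# An M-by-M result via implicit expansion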
Maybe I'm having a bit of a brainfart here, but can't you turn your DxD matrix into a DxM matrix (with M copies of the vector you're given) and then use .* on the last two matrices rather than matrix-multiplying them (and then, of course, do a normal matrix multiply with the first one)?
You are getting "out of memory" because MATLAB cannot find a chunk of memory large enough to accommodate the entire matrix. There are different techniques to avoid this error, described in the MATLAB documentation.
In MATLAB you generally do not need to program explicit loops, because you can use the * operator. There is a technique for speeding up matrix multiplication when it is done with explicit loops; here is an example in C#. Its key idea is that a (potentially large) matrix can be split into smaller matrices. To hold these smaller matrices in MATLAB you can use a cell array. It is much more probable that the system will find enough RAM to accommodate two smaller sub-matrices than the resulting large matrix.
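For this particular A'*B*A computation, that splitting idea might look like the following sketch, which accumulates M-by-M partial sums over blocks of rows so that no D-by-D matrix is ever formed (the block size K is a hypothetical choice; A, B, D, and M are as in the question):
K = 5000;                          % rows per block (hypothetical choice)
result = zeros(M, M);
for k = 1:K:D
    idx = k:min(k+K-1, D);         % rows in this block
    Ak = A(idx, :);                % block of rows of A
    bk = B(1, idx).';              % matching slice of the diagonal
    result = result + Ak.' * bsxfun(@times, bk, Ak);   % M-by-M partial sum
end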