How to output a struct array when I use the nlfilter function?

I have the following function, which calculates statistical parameters. I would like to pass this function to nlfilter to do the calculation for a whole image, but the output of nlfilter must be a scalar.
How can I convert this to a function handle suitable for use with nlfilter so I can save the output of the function getStatistics2?
The output of getStatistics2 is a struct array.
function [out] = getStatistics2(D)
D = double(D);
% out.MAX = max(D); % maximum
% out.MIN = min(D); % minimum
out.MEA = mean(D);          % mean
out.MAD = mad(D);           % mean absolute deviation: y = mean(abs(X - mean(X)))
out.MED = median(D);        % median
out.RAN = max(D) - min(D);  % range
out.RMS = rms(D);           % root mean square
out.STD = std(D);           % standard deviation
out.VAR = var(D);           % variance

This is an interesting question. What's interesting is that your approach is almost perfect. The only reason it fails is that struct cannot be constructed from a numeric scalar input (e.g. struct(3)). The reason I mention this is that somewhere during the execution of nlfilter (specifically in mkconstarray.m), it calls the following code:
repmat(feval(class, value), size);
Where:
class is 'struct'.
value is 0.
size is the size() of the input image, e.g. [100,100].
... and this fails because feval('struct', 0) is equivalent to struct(0), which we already know to be invalid.
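You can reproduce that failing allocation on its own; a minimal sketch (the exact error text may vary between MATLAB releases):
value = 0;
sz = [100, 100];
try
    repmat(feval('struct', value), sz)   % what mkconstarray effectively runs
catch err
    disp(err.message)   % struct cannot be built from a numeric scalar, so this errors
end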
So what do we do? Create a custom class that can be constructed this way!
Here's an example of one such class:
classdef MyStatsClass % Value class
    properties (GetAccess = public, SetAccess = private)
        MAX@double scalar = NaN; % Maximum
        MIN@double scalar = NaN; % Minimum
        MEA@double scalar = NaN; % Mean
        MAD@double scalar = NaN; % Mean absolute deviation y = mean(abs(X-mean(x)))
        MED@double scalar = NaN; % Median
        RMS@double scalar = NaN; % Root mean square
        STD@double scalar = NaN; % Standard deviation
        VAR@double scalar = NaN; % Variance
        RAN@double scalar = NaN; % Range
    end % properties

    methods (Access = public)
        %% Constructor:
        function obj = MyStatsClass(vec)
            %% Special case:
            if (nargin == 0) || (numel(vec) == 1) && (vec == 0)
                % This happens during nlfilter allocation
                return
            end
            %% Regular case:
            obj.MAX = max(vec(:));
            obj.MIN = min(vec(:));
            obj.MEA = mean(vec(:));
            obj.MAD = mad(vec(:));
            obj.MED = median(vec(:));
            obj.RMS = rms(vec(:));
            obj.STD = std(vec(:));
            obj.VAR = var(vec(:));
            obj.RAN = obj.MAX - obj.MIN;
        end % default constructor
    end % public methods
end % classdef
And here's how you can use it:
function imF = q35693068(outputAsStruct)
if nargin == 0 || ~islogical(outputAsStruct) || ~isscalar(outputAsStruct)
    outputAsStruct = false;
end
rng(35693068); % Set the random seed, for repeatability
WINDOW_SZ = 3;
im = randn(100);
imF = nlfilter(im, [WINDOW_SZ WINDOW_SZ], @MyStatsClass);
% If output is strictly needed as a struct:
if outputAsStruct
    warning off MATLAB:structOnObject
    imF = arrayfun(@struct, imF);
    warning on MATLAB:structOnObject
end
Notice that I have added an optional input (outputAsStruct) that can force the output to be a struct array (and not an array of the type of our custom class, which is functionally identical to a read-only struct).
Notice also that by default nlfilter pads your array with zeros, which means that the (1,1) output will operate on an array that looks like this (assuming WINDOW_SZ=3):
[0 0 0
0 1.8096 0.5189
0 -0.3434 0.6586]
and not on im(1:WINDOW_SZ,1:WINDOW_SZ) which is:
[ 1.8096 0.5189 0.2811
-0.3434 0.6586 0.8919
-0.1525 0.7549 0.4497]
the "expected result" for im(1:WINDOW_SZ,1:WINDOW_SZ) will be found further "inside" the output array (in the case of WINDOW_SZ=3 at index (2,2)).

Related

difference in results while using pixel value or int

While using MATLAB for image processing (specifically, improving an image with fuzzy logic) I found a really strange thing. My fuzzy function is correct; I tested it on random values, and it is basically a set of simple linear functions.
function f = Udark(z)
    if z < 50
        f = 1;
    elseif z > 125
        f = 0;
    elseif (z >= 50) && (z <= 125)
        f = -z/75 + 125/75;
    end
end
where z is the value of a pixel (in grayscale). Now there is a really strange thing going on.
The troublesome line is f = -z/75 + 125/75;, and the input comes from an image A. The function gives really different results depending on what is used as the input: if I use a plain variable p = 99, the output of the function is 0.3467 as it should be, whereas if I use A(i,j) it gives me f = 2. Since that is clearly impossible, I do not know where the problem is. I thought that maybe it is a matter of the variable's type, but if I change it to uint8 it stays the same... If you know what's going on, please let me know :)
1. Changed line:
f = (125/75) - (z/75);
After editing the third condition, the resultant/transformed image has no pixel values of 2. I am not sure whether you intend to work with decimals; if you do, converting the image with the im2double() function and scaling it up by a factor of 255 might suit your needs. See heading 3 for rounding details.
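A minimal sketch of that im2double route (the file name is taken from the code below; this snippet is not part of the original processing script):
I = rgb2gray(imread("RGB_Image.png"));
Id = im2double(I) * 255;   % pixel values back on the 0-255 scale, but stored as doubles
f = Udark(Id(1, 1));       % the division in Udark now keeps its decimals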
2. Reading in Image and Testing:
%Reading in the image and applying the function%
Image = imread("RGB_Image.png");
Greyscale_Image = rgb2gray(Image);
[Image_Height, Image_Width] = size(Greyscale_Image);
Transformed_Image = zeros(Image_Height, Image_Width);

for Row = 1: +1: Image_Height
    for Column = 1: +1: Image_Width
        Pixel_Value = Greyscale_Image(Row, Column);
        [Transformed_Pixel_Value] = Udark(Pixel_Value);
        Transformed_Image(Row, Column) = Transformed_Pixel_Value;
    end
end

subplot(1,2,1); imshow(Greyscale_Image);
subplot(1,2,2); imshow(Transformed_Image);

%Checking that no transformed pixel falls in this impossible range%
Check = (Transformed_Image > (125/75)) & (Transformed_Image ~= 1);
Check_Flag = any(Check, 'all');

%Function to transform pixel values%
function f = Udark(z)
    if z < 50
        f = 1;
    elseif z > 125
        f = 0;
    elseif (z >= 50) && (z <= 125)
        f = (125/75) - (z/75);
    end
end
3. Evaluating the Specifics of the Third Condition
Working with integers (uint8) will force the values to be rounded to the nearest integer. For any input in the range (50, 125], the result will therefore evaluate to 1 or 0.
f = -z/75 + 125/75;
If z = 50.1,
-50.1/75 + 125/75 = 74.9/75 ≈ 0.9987 → rounds to 1
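For comparison, here is a quick illustration (a sketch run at the command line, not part of the original answer) of why the original expression returns 2 when z is a uint8 pixel value:
z = uint8(99);
f = -z/75 + 125/75   % returns uint8(2)
% -z saturates to 0 because uint8 cannot be negative, 0/75 is 0, and adding
% the double 125/75 = 1.6667 to a uint8 rounds the result up to 2.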
Using MATLAB version: R2019b

Is it possible to do a poisson distribution with the probabilities based on integers?

Working within Solidity and the Ethereum EVM, decimals don't exist. Is there a way I could still mathematically create a Poisson distribution using integers? It doesn't have to be perfect, i.e. rounding or losing some digits may be acceptable.
Let me preface by stating that what follows is not going to be (directly) helpful to you with Ethereum/Solidity. However, it produces probability tables that you might be able to use for your work.
I ended up intrigued by the question of how accurately you could express the Poisson probabilities as rationals, so I put together the following script in Ruby to try things out:
def rational_poisson(lmbda)
  Hash.new.tap do |h| # create a hash and pass it to this block as 'h'.
    # Make all components of the calculations rational to allow
    # cancellations to occur wherever possible when dividing
    e_to_minus_lambda = Math.exp(-lmbda).to_r
    factorial = 1r
    lmbda = lmbda.to_r
    power = 1r
    (0...).each do |x|
      unless x == 0
        power *= lmbda
        factorial *= x
      end
      value = (e_to_minus_lambda / factorial) * power
      # the following double inversion/conversion bounds the result
      # by the significant bits in the mantissa of a float
      approx = Rational(1, (1 / value).to_f)
      h[x] = approx
      break if x > lmbda && approx.numerator <= 1
    end
  end
end

if __FILE__ == $PROGRAM_NAME
  lmbda = (ARGV.shift || 2.0).to_f # read in a lambda (defaults to 2.0)
  pmf = rational_poisson(lmbda)    # create the pmf for a Poisson with that lambda
  pmf.each { |key, value| puts "p(#{key}) = #{value} = #{value.to_f}" }
  puts "cumulative error = #{1.0 - pmf.values.inject(&:+)}" # does it sum to 1?
end
Things to know as you glance through the code: appending .to_r to a value or expression converts it to a rational, i.e., a ratio of two integers; values with an r suffix are rational constants; and (0...).each is an open-ended iterator which loops until the break condition is met.
That little script produces results such as:
localhost:pjs$ ruby poisson_rational.rb 1.0
p(0) = 2251799813685248/6121026514868073 = 0.36787944117144233
p(1) = 2251799813685248/6121026514868073 = 0.36787944117144233
p(2) = 1125899906842624/6121026514868073 = 0.18393972058572117
p(3) = 281474976710656/4590769886151055 = 0.061313240195240384
p(4) = 70368744177664/4590769886151055 = 0.015328310048810096
p(5) = 17592186044416/5738462357688819 = 0.003065662009762019
p(6) = 1099511627776/2151923384133307 = 0.0005109436682936699
p(7) = 274877906944/3765865922233287 = 7.299195261338141e-05
p(8) = 34359738368/3765865922233287 = 9.123994076672677e-06
p(9) = 67108864/66196861914257 = 1.0137771196302974e-06
p(10) = 33554432/330984309571285 = 1.0137771196302975e-07
p(11) = 33554432/3640827405284135 = 9.216155633002704e-09
p(12) = 4194304/5461241107926203 = 7.68012969416892e-10
p(13) = 524288/8874516800380079 = 5.907792072437631e-11
p(14) = 32768/7765202200332569 = 4.2198514803125934e-12
p(15) = 256/909984632851473 = 2.8132343202083955e-13
p(16) = 16/909984632851473 = 1.7582714501302472e-14
p(17) = 1/966858672404690 = 1.0342773236060278e-15
cumulative error = 0.0

Image Processing: Algorithm taking too long in MATLAB

I am working in MATLAB to process two 512x512 images, the domain image and the range image. What I am trying to accomplish is the following:
Divide both the domain and range images into 8x8 pixel blocks.
For each 8x8 block in the domain image, apply a linear transformation to it and compare each of the 4096 transformed blocks with each of the 4096 range blocks.
Compute the error in each case between the transformed block and the range image block, and find the minimum error.
Finally, for each 8x8 range block, keep the id of the 8x8 domain block for which the error was minimum (the error between the range block and the transformed domain block).
To achieve this, I have written the following code:
RangeImagecolor = imread('input.png');  %input is 512x512
DomainImagecolor = imread('input.png'); %Range and Domain images are identical
RangeImagetemp = rgb2gray(RangeImagecolor);
DomainImagetemp = rgb2gray(DomainImagecolor);
RangeImage = im2double(RangeImagetemp);
DomainImage = im2double(DomainImagetemp);

%For the (k,l)th 8x8 range image block
for k = 1:64
    for l = 1:64
        minerror = 9999;
        min_i = 0;
        min_j = 0;
        for i = 1:64
            for j = 1:64
                %here I compute for the (i,j)th domain block, the transformed domain block stored in D_trans
                error = 0;
                D_trans = zeros(8,8);
                R = zeros(8,8); %Contains the pixel values of the (k,l)th range block
                for m = 1:8
                    for n = 1:8
                        R(m,n) = RangeImage(8*k-8+m, 8*l-8+n);
                        %ApplyTransformation can depend on (k,l) so I can't compute the transformation outside the k,l loop.
                        [m_dash, n_dash] = ApplyTransformation(8*i-8+m, 8*j-8+n);
                        D_trans(m,n) = DomainImage(m_dash, n_dash);
                        error = error + (R(m,n) - D_trans(m,n))^2;
                    end
                end
                if (error < minerror)
                    minerror = error;
                    min_i = i;
                    min_j = j;
                end
            end
        end
    end
end
As an example ApplyTransformation, one can use the identity transformation:
function [x_dash,y_dash] = Iden(x,y)
x_dash = x;
y_dash = y;
end
Now the problem I am facing is the high computation time. The order of computation in the above code is 64^5 = 2^30, which is of the order of 10^9. Such a computation should take at worst minutes or an hour, yet it takes about 40 minutes to compute just 50 iterations. I don't know why the code is running so slowly.
Thanks for reading my question.
You can use im2col* to convert each image to column format, so that every 8x8 block forms a column of a [64 x 4096] matrix. Then apply the transformation to each column and use bsxfun to vectorize the computation of the error.
DomainImage = rand(512);
RangeImage = rand(512);
DomainImage_col = im2col(DomainImage, [8 8], 'distinct');
R = im2col(RangeImage, [8 8], 'distinct');
[x, y] = ndgrid(1:8);

function [x_dash, y_dash] = ApplyTransformation(x,y)
    x_dash = x;
    y_dash = y;
end

[x_dash, y_dash] = ApplyTransformation(x, y);
idx = sub2ind([8 8], x_dash, y_dash);
D_trans = DomainImage_col(idx,:);   % transformation is reduced to matrix indexing
Error = 0;
for mn = 1:64
    Error = Error + bsxfun(@minus, R(mn,:).', D_trans(mn,:)).^2; % Error(kl,ij): rows index range blocks, columns index domain blocks
end
[minerror, min_ij] = min(Error, [], 2);     % linear index of the best domain block for each range block
[min_i, min_j] = ind2sub([64 64], min_ij);  % convert linear index to subscripts
Explanation:
Our goal is to reduce the number of loops as much as possible. For that we should avoid element-by-element indexing and instead use vectorization, and nested loops should be collapsed into a single loop. As a first step we can write a more optimized, but still loop-based, version:
min_ij = zeros(4096,1);
for kl = 1:4096            %%% => 1:size(R,2)
    minerror = 9999;
    min_ij(kl) = 0;
    for ij = 1:4096        %%% => 1:size(D_trans,2)
        Error = 0;
        for mn = 1:64
            Error = Error + (R(mn,kl) - D_trans(mn,ij)).^2;
        end
        if (Error < minerror)
            minerror = Error;
            min_ij(kl) = ij;
        end
    end
end
We can rearrange the loops, making the innermost loop the outermost one, and separate the computation of the minimum from the computation of the error.
% Computation of the error
Error = zeros(4096,4096);
for mn = 1:64
    for kl = 1:4096
        for ij = 1:4096
            Error(kl,ij) = Error(kl,ij) + (R(mn,kl) - D_trans(mn,ij)).^2;
        end
    end
end

% Computation of the min
min_ij = zeros(4096,1);
for kl = 1:4096
    minerror = 9999;
    min_ij(kl) = 0;
    for ij = 1:4096
        if (Error(kl,ij) < minerror)
            minerror = Error(kl,ij);
            min_ij(kl) = ij;
        end
    end
end
Now the code is arranged in a way that can best be vectorized:
Error = 0;
for mn = 1:64
    Error = Error + bsxfun(@minus, R(mn,:).', D_trans(mn,:)).^2; % Error(kl,ij)
end
[minerror, min_ij] = min(Error, [], 2);
[min_i, min_j] = ind2sub([64 64], min_ij);
*If you don't have the Image Processing Toolbox a more efficient implementation of im2col can be found here.
*The whole computation takes less than a minute.
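For reference, here is a minimal reshape-based sketch of the 'distinct' mode used above (my own sketch, not the linked implementation; it assumes the image size is an exact multiple of the block size):
function cols = im2col_distinct_sketch(A, blk)
    % Split A into non-overlapping blk(1)-by-blk(2) blocks and return each
    % block as one column, in the same column-major block order as
    % im2col(A, blk, 'distinct').
    [r, c] = size(A);
    A = reshape(A, blk(1), r/blk(1), blk(2), c/blk(2));
    A = permute(A, [1 3 2 4]);            % gather each block's rows and columns together
    cols = reshape(A, blk(1)*blk(2), []); % one column per block
end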
First things first: as posted, your code doesn't actually do anything with the result. You likely do something with the minimum-error indices and only forgot to paste it here, or still need to code that bit. Never mind for now.
One big issue with your code is that you recompute the transformation inside the loops over both the output (range) blocks and the source (domain) blocks; 64^5 iterations of a complex operation are bound to be slow. Rather, you should calculate all the transformations once and save them:
allTransMats = cell(64);
for i = 1 : 64
    for j = 1 : 64
        allTransMats{i,j} = getTransformation(DomainImage, i, j);
    end
end

function D_trans = getTransformation(DomainImage, i, j)
    D_trans = zeros(8);
    for m = 1 : 8
        for n = 1 : 8
            [m_dash, n_dash] = ApplyTransformation(8*i-8+m, 8*j-8+n);
            D_trans(m,n) = DomainImage(m_dash, n_dash);
        end
    end
end
This computes allTransMats and sits OUTSIDE the k, l loop, preferably as a simple function.
Now you run your big k, l, i, j loop, where you compare all the elements as needed. The comparison can also be done block-wise instead of filling a small 8x8 matrix element by element:
m = 1 : 8;
n = m;
for ...
    R = RangeImage(...); % This will give an 8x8 output, as n and m are vectors.
    D = allTransMats{i,j};
    difference = sum(sum((R-D).^2));
    if (difference < minDifference) ...
end
Even though this is a simple no-transformation case, this speeds up the code a lot.
Finally, are you sure you need to compare each block of the transformed output with every block in the source? Typically you compare block1(a,b) with block2(a,b), i.e. blocks (or pixels) at the same position.
EDIT: allTransMats requires k and l too. Ouch. There is NO WAY to make this fast for a single iteration, as you require 64^5 calls to ApplyTransformation (or a vectorization of that function, but even then it might not be fast - we would have to see the function to help here).
Therefore, I will reiterate my advice to generate all transformations first and then perform a lookup: the upper part of this answer with the allTransMats generation should be changed to use all 4 loops and generate allTransMats{i,j,k,l}. It WILL be slow; there is no way around that, as mentioned above. But it is a cost you pay once: after saving allTransMats, all further image analyses can simply load it instead of generating it again.
But ... what do you even do? A transformation that depends on the source and destination block indices plus the pixel indices (6 values in total) sounds like a mistake somewhere, or a prime candidate to optimize instead of all the rest.

How to pass a parameter that changes with time to SciLab ode?

I'm trying to solve a heat transfer problem using SciLab's ode function. The thing is, one of the parameters changes with time: h(t). (The ODE itself is given as an image in the original question.)
My question is: how can I pass an argument to the ode function that changes over time?
ode allows extra function parameters to be passed as a list:
It may happen that the simulator f needs extra arguments. In this case, we can use the following feature. The f argument can also be a list lst=list(f,u1,u2,...un) where f is a Scilab function with syntax ydot = f(t,y,u1,u2,...,un) and u1, u2, ..., un are extra arguments which are automatically passed to the simulator simuf.
If the extra parameter is a function of t:
function y = f(t,y,h)
    // define y here depending on t and h(t), e.g. y = t + h(t)
endfunction

function y = h(t)
    // define h(t) here, e.g. y = t
endfunction

// define y0, t0 and t
y = ode(y0, t0, t, list(f,h)) // this will pass the h function as a parameter
If the extra parameter is a vector H of values sampled at times T, we want to extract the term corresponding to the current t. Since ode only evaluates the solution at the times it needs, an idea is to find Ti < t < Tj whenever a computation is performed and interpolate to get Hi < h < Hj.
This is rather ugly but totally works:
function y = h(t,T,H)
    res = abs(t - T)            // looking for the nearest value of t in T
    minres = min(res)           // getting the smallest distance
    lower = find(res==minres)   // getting the index: T(lower)
    res(res==minres) = %inf     // looking for the 2nd nearest value of t in T: nearest is set to inf
    minres = min(res)           // getting the smallest distance
    upper = find(minres==res)   // getting the index: T(upper)
    // Now t is between T(lower) (nearest) and T(upper) (farthest) (! T(lower) may be > T(upper))
    y = ((T(upper)-t)*H(lower)+(t-T(lower))*H(upper))/(T(upper)-T(lower)) // linear interpolation between H(lower) and H(upper)
endfunction

function ydot = f(t,y,h,T,H)
    hi = h(t,T,H)   // if Ti < t < Tj then Hi < h(t,T,H) < Hj
    disp([t,hi])    // with H = T, hi = t
    ydot = y^2 - y*sin(t) + cos(t) - hi   // example of where to use hi
endfunction
// use base example of `ode`
y0=0;
t0=0;
t=0:0.1:%pi;
H = t // simple example
y = ode(y0,t0,t,list(f,h,t,H));
plot(t,y)

K-means for image compression only gives black-and-white result

I'm doing this exercise by Andrew Ng about using k-means to reduce the number of colors in an image. The problem is that my code only gives a black-and-white image :( . I have checked every step in the algorithm but it still won't give the correct result. Please help me, thank you very much.
Here is the link of the exercise, and here is the dataset.
The correct result is given in the link of the exercise. And here is my black-and-white image:
Here is my code:
function [] = KMeans()
Image = double(imread('bird_small.tiff'));
[rows, cols, RGB] = size(Image);
Points = reshape(Image, rows * cols, RGB);
K = 16;
Centroids = zeros(K, RGB);
s = RandStream('mt19937ar','Seed',0);

% Initialization:
% Pick out K random colours and make sure they are all different
% from each other! This prevents the situation where two of the means
% are assigned to the exact same colour, therefore we don't have to
% worry about division by zero in the E-step
% However, if K = 16 for example, and there are only 15 colours in the
% image, then this while loop will never exit!!! This needs to be
% addressed in the future :(
% TODO: Vectorize this part!
done = false;
while done == false
    RowIndex = randperm(s, rows);
    ColIndex = randperm(s, cols);
    RowIndex = RowIndex(1:K);
    ColIndex = ColIndex(1:K);
    for i = 1 : K
        for j = 1 : RGB
            Centroids(i,j) = Image(RowIndex(i), ColIndex(i), j);
        end
    end
    Centroids = sort(Centroids, 2);
    Centroids = unique(Centroids, 'rows');
    if size(Centroids, 1) == K
        done = true;
    end
end

% imshow(imread('bird_small.tiff'))
%
% for i = 1 : K
%     hold on;
%     plot(RowIndex(i), ColIndex(i), 'r+', 'MarkerSize', 50)
% end

eps = 0.01; % Epsilon
IterNum = 0;
while 1
    % E-step: Estimate membership given parameters
    % Membership: The centroid that each colour is assigned to
    % Parameters: Location of centroids
    Dist = pdist2(Points, Centroids, 'euclidean');
    [~, WhichCentroid] = min(Dist, [], 2);

    % M-step: Estimate parameters given membership
    % Membership: The centroid that each colour is assigned to
    % Parameters: Location of centroids
    % TODO: Vectorize this part!
    OldCentroids = Centroids;
    for i = 1 : K
        PointsInCentroid = Points((find(WhichCentroid == i))', :);
        NumOfPoints = size(PointsInCentroid, 1);
        % Note that NumOfPoints is never equal to 0, as a result of
        % the initialization. Or .... ???????
        if NumOfPoints ~= 0
            Centroids(i,:) = sum(PointsInCentroid, 1) / NumOfPoints;
        end
    end

    % Check for convergence: Here we use the L2 distance
    IterNum = IterNum + 1;
    Margins = sqrt(sum((Centroids - OldCentroids).^2, 2));
    if sum(Margins > eps) == 0
        break;
    end
end
IterNum;
Centroids;

% Load the larger image
[LargerImage, ColorMap] = imread('bird_large.tiff');
LargerImage = double(LargerImage);
[largeRows, largeCols, ~] = size(LargerImage); % RGB is always 3
% Dist = zeros(size(Centroids,1), RGB);
% TODO: Vectorize this part!
% Replace each of the pixels with the nearest centroid
for i = 1 : largeRows
    for j = 1 : largeCols
        Dist = pdist2(Centroids, reshape(LargerImage(i,j,:), 1, RGB), 'euclidean');
        [~, WhichCentroid] = min(Dist);
        LargerImage(i,j,:) = Centroids(WhichCentroid);
    end
end

% Display new image
imshow(uint8(round(LargerImage)), ColorMap)
imwrite(uint8(round(LargerImage)), 'D:\Hoctap\bird_kmeans.tiff');
You're indexing into Centroids with a single linear index.
Centroids(WhichCentroid)
This is going to return a single value (specifically the red value for that centroid). When you assign this to LargerImage(i,j,:), the same value is assigned to all RGB channels, resulting in a grayscale image.
You likely want to grab all columns of the selected centroid to provide the red, green, and blue values to assign to LargerImage(i,j,:). You can do this by using a colon : to specify all columns of Centroids in the row indicated by WhichCentroid.
LargerImage(i,j,:) = Centroids(WhichCentroid,:);
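The per-pixel replacement loop near the end can also be vectorized; here is a sketch using the same variable names as the question (not part of the original answer):
Pixels = reshape(LargerImage, largeRows * largeCols, RGB);               % one row per pixel
[~, WhichCentroid] = min(pdist2(Pixels, Centroids, 'euclidean'), [], 2); % nearest centroid per pixel
LargerImage = reshape(Centroids(WhichCentroid, :), largeRows, largeCols, RGB);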
