Recomposed matrix from Decomposed matrix doesn't match - opengl-es

This is a matrix knowledge question; I am talking about XNA, but only as a reference.
I decomposed a matrix in XNA, took the decomposed values, and then tried to rebuild the matrix from those values, but the resulting matrix does not match the original one.
I tried normalizing the quaternion.
I tried generating a rotation matrix from the quaternion.
I tried swapping the order of the transformations: SRT, STR, TRS, TSR, RST, RTS.
Why am I doing this? I am writing my own model importer and comparing my results against XNA using the same model. I read almost the same source SRT (up to some decimal differences) as XNA's decomposed values, but my resulting matrix didn't match XNA's, so I went back to basics and tried to decompose/recompose the XNA matrix itself, only to find it doesn't match either.
These are the original XNA matrix values:
?this.XNAModel.Bones[0].Transform
{{M11:1.331581E-06 M12:-5.551115E-17 M13:1 M14:0}
{M21:1 M22:-4.16881E-11 M23:-1.331581E-06 M24:0}
{M31:4.16881E-11 M32:1 M33:8.15331E-23 M34:0}
{M41:0.03756338 M42:37.46099 M43:2.230549 M44:1} }
Decomposition (lFlag is true):
bool lFlag = this.XNAModel.Bones[0].Transform.Decompose(out lDecScale, out lDecRotation, out lDecTranslation);
//decomposed values
?lDecScale
{X:1 Y:1 Z:1}
?lDecRotation //quat
{X:-0.5000003 Y:-0.4999996 Z:-0.4999996 W:0.5000004}
?lDecTranslation
{X:0.03756338 Y:37.46099 Z:2.230549}
Recomposing the matrix from the decomposed values; I've tried all the order combinations:
//lDecRotation.Normalize();
Matrix lRecompose = Matrix.CreateScale(lDecScale) *
Matrix.CreateFromQuaternion(lDecRotation) * Matrix.CreateTranslation(lDecTranslation);
Result using SRT with the quaternion not normalized; doesn't match the original matrix:
?lRecompose
{{M11:1.430511E-06 M12:-5.960464E-08 M13:0.9999999 M14:0}
{M21:0.9999999 M22:1.192093E-07 M23:-1.370907E-06 M24:0}
{M31:-5.960464E-08 M32:0.9999999 M33:1.192093E-07 M34:0}
{M41:0.03756338 M42:37.46099 M43:2.230549 M44:1} }
Result using SRT with the quaternion normalized; still doesn't match the original matrix:
?lRecompose
{{M11:1.192093E-06 M12:-5.960464E-08 M13:1 M14:0}
{M21:1 M22:-1.192093E-07 M23:-1.370907E-06 M24:0}
{M31:-5.960464E-08 M32:1 M33:-1.192093E-07 M34:0}
{M41:0.03756338 M42:37.46099 M43:2.230549 M44:1} }
This is what my model importer reads:
?this.ModelNew.Bones[0].Scale
{X:1 Y:1 Z:1}
?this.ModelNew.Bones[0].Rotation
{X:-0.0002303041 Y:-8.604798E-05 Z:-5.438289}
There is a small difference between this result and the decomposed one from XNA.
//My importer's quaternion, built from the rotation vector above converted to radians
?lQuat
{X:-0.4999999 Y:-0.5 Z:-0.5 W:0.4999999}
//XNA's decomposed quaternion, for comparison
{X:-0.5000003 Y:-0.4999996 Z:-0.4999996 W:0.5000004}
?this.ModelNew.Bones[0].Translation
{X:0.03756338 Y:37.46099 Z:2.230549}

Depending on how your graphics API stores matrices (row-major vs. column-major) and how your modelling software exports them, you may have to transpose the matrix before decomposing it, and then transpose the result after recomposing, to get back the original matrix.
By the way, the correct order for recomposing is to translate first, then rotate, then scale (i.e. T * R * S in matrix-multiplication order when working with column vectors).
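As a hedged sketch of that transpose round trip using the XNA Matrix API (whether the transposes are needed depends on your exporter's convention; this is illustration, not a definitive fix):
// Sketch only: transpose, decompose, recompose, transpose back.
Matrix original = this.XNAModel.Bones[0].Transform;
Matrix m = Matrix.Transpose(original); // switch row/column convention

Vector3 scale, translation;
Quaternion rotation;
m.Decompose(out scale, out rotation, out translation);

Matrix recomposed = Matrix.CreateScale(scale)
    * Matrix.CreateFromQuaternion(rotation)
    * Matrix.CreateTranslation(translation);
recomposed = Matrix.Transpose(recomposed); // back to the original convention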

Related

How do I do face recognition with an Oracle BLOB column?

I have a table in Oracle that contains people's names and photos. How do I build face recognition that can identify a name just by taking a picture from the camera?
What techniques can I use?
Firstly, do not store the raw images in the BLOB column. You should store the vector representation of the raw images instead. The following Python code block will find the vector representation of a face image.
#!pip install deepface
from deepface.basemodels import VGGFace, Facenet
from deepface.commons import functions

model = VGGFace.loadModel()  # you can use Google FaceNet instead of VGG-Face
target_size = model.layers[0].input_shape

# preprocess_face detects the facial area and aligns it
img = functions.preprocess_face(img="img.jpg", target_size=target_size)
representation = model.predict(img)[0, :]
Here, you can pass either an exact image path like img.jpg or a 3D array to the img argument of preprocess_face. In this way, you will store the vector representations in the BLOB column of the Oracle database.
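For the storage step, here is a hedged sketch of serializing the vector into a BLOB; the table and column names (face_db, name, embedding) and the commented-out cursor call are hypothetical, only the numpy serialization is essential:
import numpy as np

blob = np.asarray(representation, dtype=np.float32).tobytes()  # vector -> bytes
# cursor.execute("INSERT INTO face_db (name, embedding) VALUES (:1, :2)",
#                (person_name, blob))
# ...and to restore a vector when reading rows back:
restored = np.frombuffer(blob, dtype=np.float32)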
When you have a new face image and want to find its identity in the database, find its representation again.
# preprocess_face detects the facial area and aligns it
target_img = functions.preprocess_face(img="target.jpg", target_size=target_size)
target_representation = model.predict(target_img)[0, :]
Now you have the vector representation of the target image and the vector representations of the database images. You need to find the similarity between the target representation and each stored representation.
Euclidean distance is the easiest way to compare vectors.
import numpy as np

def findEuclideanDistance(source_representation, test_representation):
    euclidean_distance = source_representation - test_representation
    euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
    euclidean_distance = np.sqrt(euclidean_distance)
    return euclidean_distance
We will compare each database instance to the target. Suppose the representations of the database instances are stored in a representations list.
distances = []
for i in range(0, len(representations)):
    source_representation = representations[i]
    # find the distance between target_representation and source_representation
    distance = findEuclideanDistance(source_representation, target_representation)
    distances.append(distance)
The distances list stores the distance of each item in the database to the target. We need to find the lowest distance, so use argmin (numpy was imported above):
idx = np.argmin(distances)
idx is the index of the best match in the database.

VTK - How to read Tensors/Matrix per cell from a NIFTI Image?

I'm trying to implement an MRI-DTI real-time fiber-tracking visualization tool based on VTK.
Therefore we need to read the DTI tensors/matrices per cell stored in a NIFTI image (.nii), and I really can't figure out how to do this.
It's not a problem to retrieve a single scalar value from the NIFTI file, but I don't know how to get the tensor (3x3/4x4 matrix).
We would really appreciate any help!
Since vtkNIFTIImageReader is supposed to read a tensor NIFTI image as a multi-component vtkImageData, we tried this:
vtkSmartPointer<vtkImageExtractComponents> extractTupel1 = vtkSmartPointer<vtkImageExtractComponents>::New();
extractTupel1->SetInputConnection(reader->GetOutputPort());
extractTupel1->SetComponents(0,1,2);
extractTupel1->Update();
vtkSmartPointer<vtkImageExtractComponents> extractTupel2 = vtkSmartPointer<vtkImageExtractComponents>::New();
extractTupel2->SetInputConnection(reader->GetOutputPort());
extractTupel2->SetComponents(3, 4, 5);
extractTupel2->Update();
vtkSmartPointer<vtkImageExtractComponents> extractTupel3 = vtkSmartPointer<vtkImageExtractComponents>::New();
extractTupel3->SetInputConnection(reader->GetOutputPort());
extractTupel3->SetComponents(6, 7, 8);
extractTupel3->Update();
extractTupel1->GetOutput()->GetPoint(pointId, tupel1);
extractTupel2->GetOutput()->GetPoint(pointId, tupel2);
extractTupel3->GetOutput()->GetPoint(pointId, tupel3);
But it doesn't work. Maybe the GetPoint method is the wrong choice?
Please help :)
Answer by David Gobbi; many thanks to him:
No, the GetPoint() method will not return the tensor value; it returns the coordinates of the voxel. So vtkImageExtractComponents is not necessary here either.
A vtkImageData always stores the voxel values as its "Scalars" array, even if the voxel values are not scalar quantities.
A simple (but inefficient) way to get the scalar values is this method:
GetScalarComponentAsDouble(int x, int y, int z, int component)
For each voxel, you would call this method 9 times with component = 0..8.
A much more efficient way of getting the tensors is to get the scalar array from the data, and then look up the tensors via the pointId:
reader->Update();
vtkDataArray *tensors = reader->GetOutput()->GetPointData()->GetScalars();
double tensor[9];
tensors->GetTuple(pointId, tensor);
This is orders of magnitude more efficient than GetScalarComponentAsDouble().
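As a hedged sketch of how that lookup extends to the whole volume (the file name is hypothetical; the 9-components-per-point layout is taken from the answer above):
vtkSmartPointer<vtkNIFTIImageReader> reader =
    vtkSmartPointer<vtkNIFTIImageReader>::New();
reader->SetFileName("tensors.nii"); // hypothetical file name
reader->Update();

vtkDataArray *tensors = reader->GetOutput()->GetPointData()->GetScalars();
vtkIdType numberOfVoxels = tensors->GetNumberOfTuples();
for (vtkIdType pointId = 0; pointId < numberOfVoxels; ++pointId)
{
    double tensor[9];
    tensors->GetTuple(pointId, tensor); // 9 components: row-major 3x3 tensor
    // ... feed the tensor into the fiber-tracking pipeline ...
}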

Matlab smoothing

How can I apply a user-defined mask given as a vector, e.g. [1 1 1]?
img=imread('xxx.jpg');
mask=[1,1,1];
f=conv2(img,mask);
"Undefined function 'conv2' for input arguments of type 'double' and attributes 'full 3d real'."
Color images are 3-dimensional arrays (x, y, color). conv2 is only defined for 2 dimensions, so it won't work directly on a 3-dimensional array.
You can use the n-dimensional convolution convn() instead of conv2(). Another possibility is to take each color channel separately and run conv2() on it, as sketched below.
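A minimal sketch of the per-channel approach (the 'same' shape option and the 1/3 normalization are illustrative choices, not requirements):
img = im2double(imread('xxx.jpg')); % convert to double so conv2 accepts it
mask = [1 1 1] / 3;                 % normalized averaging mask
smoothed = zeros(size(img));
for k = 1:size(img, 3)              % loop over color channels
    smoothed(:, :, k) = conv2(img(:, :, k), mask, 'same');
end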
If you want to apply a binary mask to your image, you can try the following example:
Im2 = rgb2gray(fr);
fr = Im2 .* uint8(mask);

Add two images in MATLAB

I am trying to overlay an activation map over a baseline vasculature image, but I keep getting the error below:
X and Y must have the same size and class or Y must be a scalar double.
I resized each to 400x400, so I thought it would work, but no dice. Is there something I am missing? It is fairly straightforward for a GUI I am working on. Any help would be appreciated.
a = imread('Vasculature.tif');
b = imresize(a, [400, 400]);
c = imread('activation.tif');
d = imresize(c, [400, 400]);
e = imadd(b, d);
Could it be the bit depth or dpi?
I think one of your images is RGB (size(...,3)==3) and the other is grayscale (size(...,3)==1). Say the vasculature image a is grayscale and the activation image c is RGB. To convert a to RGB to match c, use ind2rgb, then add.
aRGB = ind2rgb(a,gray(256)); % assuming uint8
Alternatively, you could do aRGB = repmat(a,[1 1 3]);.
Or to put the activation image into grayscale:
cGray = rgb2gray(c);
Also, according to the documentation for imadd, the two images must be:
nonsparse numeric arrays with the same size and class
To get the uint8 and uint16 images to match, use the im2uint8 or im2uint16 functions to convert, or just rescale and cast (e.g. b_uint8 = uint8(double(b)*255/65535);), as sketched below.
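A minimal sketch of matching class before imadd, reusing the variable names from the question (assumes the Image Processing Toolbox):
b8 = im2uint8(b);   % rescales uint16 [0, 65535] down to uint8 [0, 255]
d8 = im2uint8(d);   % no-op if d is already uint8
e = imadd(b8, d8);  % sizes already match after imresize
imshow(e);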
Note that in some versions of MATLAB there is a bug with displaying 16-bit images. The fix depends on whether the image is RGB or grayscale, and on the platform (Windows vs. Linux). If you run into problems displaying 16-bit images, use imshow, which has the fix, or use the following code for integer data type images after image or imagesc:
function fixint16disp(img)
if any(strcmp(class(img), {'int16', 'uint16'}))
    if size(img, 3) == 1
        colormap(gray(65535));
    end
    if ispc
        set(gcf, 'Renderer', 'zbuffer');
    end
end
chappjc's answer is just fine; I want to add a more general answer to the question of how to solve the error message
X and Y must have the same size and class or Y must be a scalar double
General solving strategy:
1. Find the line at which the error occurs.
2. Try to understand the error message (see the sketch after this list):
a. "... must have the same size ...":
Check the sizes of the input arguments.
Try to understand the meaning of your code for the given (type of) input parameters. Is the error message reasonable? What do you want to achieve?
Useful command: size(A) returns the size of A.
b. "... must have the same class ...":
Check the data types of the input arguments.
Which common data type is reasonable?
Convert it to the chosen data type.
Useful command: whos A returns all the meta information of A, i.e. size, data type, ...
3. Implement the solution: your favorite search engine and the MATLAB documentation are your best friends.
4. Be happy: you solved your problem and learned something new.
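A minimal sketch of the size and class checks from step 2; the imresize call is just one hypothetical way to reconcile sizes:
whos X Y                                       % inspect size and class of both operands
if ~isequal(size(X), size(Y))
    Y = imresize(Y, [size(X, 1), size(X, 2)]); % match spatial size
end
if ~strcmp(class(X), class(Y))
    Y = cast(Y, 'like', X);                    % match class
end
Z = X + Y;                                     % the operation now succeeds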
A simple code:
a = imread('image1.jpg');
b = imresize(a, [400, 400]);
subplot(3,1,1), imshow(b), title('image 1');
c = imread('image2.jpg');
d = imresize(c, [400, 400]);
subplot(3,1,2), imshow(d), title('image 2');
[x1, y1, ~] = size(b) % height and width of 1st image
[x2, y2, ~] = size(d) % height and width of 2nd image
im3 = zeros(x1, y1, 'like', b); % preallocate with the same class as b
for i = 1:x1 % assumes grayscale inputs; loop over a third index for RGB
    for j = 1:y1
        im3(i, j) = b(i, j) + d(i, j); % saturating integer addition
    end
end
subplot(3,1,3), imshow(im3), title('Resultant Image');

Matlab: Coding with arrays and writing data to an Excel sheet

I have obtained the blood vessels of an eye in an image variable ves. I found the number of connected components (8-connectivity) as blobs. For each blob I need to calculate the Area, MajorAxisLength, and Centroid, and store these values in a matrix testfv (each row corresponding to one property). For a single blob, Area is a 1x1 value, Centroid is a 1x2 vector, and MajorAxisLength is a 1x1 value. So I guess the number of cells required to store the Areas, Centroids, and MajorAxisLengths varies with the number of blobs, and using just one testfv to store these values as I have done would be wrong.
Is it possible? This is the code I tried (I assumed that testfv has 25 columns, which allows me to store up to 8 blobs' info):
[labeledImage, numberOfBlobs] = bwlabel(ves, 8);
col = numberOfBlobs * 2;
testfv = zeros(3, col);
for i = 1:col
    blobMeasurements = regionprops(labeledImage, 'Area');
    testfv(1, col) = [blobMeasurements.Area];
    blobMeasurements = regionprops(labeledImage, 'MajorAxisLength');
    testfv(2, col) = [blobMeasurements.MajorAxisLength];
    blobMeasurements = regionprops(labeledImage, 'Centroid');
    testfv(3, col) = [blobMeasurements.Centroid];
end
I am getting the following error:
??? Subscripted assignment dimension mismatch.
Error in ==> alpha1 at 191
testfv(1,col) = [blobMeasurements.Area];
Also, I need to write the data in the testfv matrix to an Excel sheet file. How do I do that?
I would really appreciate the help, as I am new to Matlab.
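A hedged sketch of one way to avoid the dimension mismatch and export the result, assuming each blob becomes one row of [Area, MajorAxisLength, CentroidX, CentroidY] and the output file name is arbitrary:
[labeledImage, numberOfBlobs] = bwlabel(ves, 8);
stats = regionprops(labeledImage, 'Area', 'MajorAxisLength', 'Centroid');
testfv = zeros(numberOfBlobs, 4); % one row per blob
for k = 1:numberOfBlobs
    testfv(k, :) = [stats(k).Area, stats(k).MajorAxisLength, stats(k).Centroid];
end
xlswrite('blobs.xlsx', testfv);   % write the matrix to an Excel sheet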
