How do I do face recognition with a BLOB column in Oracle?

I have a table in Oracle that contains the names and photos of people. How do I build face recognition that can recognize a person's name just from a picture taken with the camera?
What techniques can I use?

Firstly, do not store the raw images in the BLOB column. Instead, store the vector representation (embedding) of each raw image. The following Python code block finds the vector representation of a face image.
#!pip install deepface
from deepface.basemodels import VGGFace
from deepface.commons import functions

model = VGGFace.loadModel()  # you can use Google FaceNet instead of VGG-Face
target_size = model.layers[0].input_shape[1:3]  # spatial size only, e.g. (224, 224)

# preprocess_face detects the facial area and aligns it
img = functions.preprocess_face(img="img.jpg", target_size=target_size)
representation = model.predict(img)[0, :]
Here, you can pass either an exact image path such as img.jpg or a 3D array to the img argument of preprocess_face. In this way, you will store the vector representations in the BLOB column of the Oracle database.
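For illustration, here is a minimal sketch of that round trip with the cx_Oracle driver. The connection string and the FACES(NAME, EMBEDDING) table are assumptions; adjust them to your schema.
#!pip install cx_Oracle
import cx_Oracle
import numpy as np

# Assumed schema: CREATE TABLE faces (name VARCHAR2(100), embedding BLOB)
connection = cx_Oracle.connect("user/password@localhost/orclpdb")
cursor = connection.cursor()

# Serialize the embedding to raw bytes before writing it to the BLOB column
cursor.execute(
    "INSERT INTO faces (name, embedding) VALUES (:1, :2)",
    ["alice", representation.astype(np.float32).tobytes()],
)
connection.commit()

# Restore each numpy vector from the raw bytes when reading back
cursor.execute("SELECT name, embedding FROM faces")
representations = [np.frombuffer(blob.read(), dtype=np.float32) for name, blob in cursor]
Storing a fixed dtype such as float32 matters here: frombuffer can only reconstruct the vector if the bytes were written with the same dtype.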
When you have a new face image and want to find its identity in the database, find its representation again.
# preprocess_face detects the facial area and aligns it
target_img = functions.preprocess_face(img="target.jpg", target_size=target_size)
target_representation = model.predict(target_img)[0, :]
Now you have the vector representation of the target image and the vector representations of the database images. You need to compute a similarity score between the target representation and each database representation.
Euclidean distance is the easiest way to compare two vectors.
import numpy as np

def findEuclideanDistance(source_representation, test_representation):
    # L2 distance: sqrt of the sum of squared differences
    euclidean_distance = source_representation - test_representation
    euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
    euclidean_distance = np.sqrt(euclidean_distance)
    return euclidean_distance
We will compare each database instance to the target. Suppose that the representations of the database instances are stored in a representations list.
distances = []
for i in range(0, len(representations)):
    source_representation = representations[i]
    # find the distance between target_representation and source_representation
    distance = findEuclideanDistance(source_representation, target_representation)
    distances.append(distance)
The distances list stores the distance from each database item to the target. We need to find the lowest distance.
idx = np.argmin(distances)
Note that this is argmin, not argmax: the best match is the entry with the lowest distance. idx is the index of the matched identity in the database.
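In practice, also compare that lowest distance against a threshold before trusting the match; otherwise every query will "match" somebody. A minimal sketch, where the threshold value is a placeholder assumption you must tune for your model (deepface's published per-model thresholds are a reasonable starting point):
THRESHOLD = 0.6  # assumed value; tune on a validation set for your model

if distances[idx] <= THRESHOLD:
    print("Identified as database entry", idx)
else:
    print("No match: this face is not in the database")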

Related

Coordinates and Field values

Suppose I have loaded a dataset as follows:
ds = yt.load('pltxxx')
The dataset includes the following fields:
density, mag_vort, tracer, x_velocity, y_velocity
One can simply plot mag_vort, which in this case is the magnitude of vorticity over the 2D domain, by means of:
slc = yt.SlicePlot(ds, 'z', 'mag_vort')
If I want to export the x-coordinates, y-coordinates and vorticity magnitude to a txt file (or a numpy array), or plot them via a matplotlib scatter plot
plt.scatter(x_coor, y_coor, c=mag_vort)
is there an easy way to extract that information from the dataset?
You can use a data object (in this case we use the all_data data object) to access the field values for the 'x', 'y', and 'mag_vort' fields:
ad = ds.all_data()
x = ad['x']
y = ad['y']
mag_vort = ad['mag_vort']
The arrays you get back from accessing a data object are YTArray instances. YTArray is a subclass of numpy's ndarray that has units attached.
Before you pass these arrays to matplotlib, convert them to whichever units you want to do the plot in, then cast them to numpy arrays:
import numpy as np
import matplotlib.pyplot as plt

x_plot = np.array(x.to('km'))
y_plot = np.array(y.to('km'))
plt.scatter(x_plot, y_plot, c=np.array(mag_vort))
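To get the txt file the question asks about, one straightforward way (a sketch, assuming a three-column x, y, mag_vort layout) is numpy's savetxt:
import numpy as np

# Stack the coordinates and field values as columns and dump to text
data = np.column_stack([x_plot, y_plot, np.array(mag_vort)])
np.savetxt('mag_vort.txt', data, header='x_km y_km mag_vort')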

Python regionprops sci-kit image

I am using scikit-image to get the "regionprops" of a segmented image. I then wish to replace each segment label with its corresponding statistic (e.g. eccentricity).
from skimage import segmentation
from skimage.measure import regionprops

# a segmented image
labels = segmentation.slic(img1, compactness=10, n_segments=200)
propimage = labels

# props loop
for region in regionprops(labels):
    eccentricity = region.eccentricity
    propimage[propimage == region] = eccentricity
This runs, but the propimage values do not change from their original labels
I have also tried:
for i in range(0, max(labels)):
    prop = regions[i].eccentricity  # the way to calculate a single prop
    propimage[i] = prop
This delivers this error
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I am a recent migrant from matlab where I have implemented this, but the data structures used are completely different.
Can anyone help me with this?
Thanks
Use ndimage from scipy: its sum() function can aggregate using your label array.
import numpy as np
from scipy import ndimage as nd

sizes = nd.sum(label_file[0] > 0, labels=label_file[0], index=np.arange(0, label_file[1]))
You can then evaluate the distribution with numpy.histogram and so on.
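For the original goal of painting each segment with its eccentricity, here is a minimal sketch that avoids both pitfalls in the question's loop (propimage aliasing labels, and comparing the array against a region object). It assumes the labels start at 1, which newer scikit-image versions can guarantee via slic(..., start_label=1); regionprops ignores label 0:
import numpy as np
from skimage.measure import regionprops

propimage = np.zeros(labels.shape, dtype=float)  # a fresh array, not an alias of labels
for region in regionprops(labels):
    # region.label is the integer id of this segment in the label image
    propimage[labels == region.label] = region.eccentricity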

Matlab: Coding with arrays and writing data to excel sheet

I have obtained the blood vessels of an eye in an image variable ves. I found the number of connected components (8-connectivity) as blobs. For each blob I need to calculate the Area, MajorAxisLength and Centroid, and store these values in a matrix testfv (each row corresponding to one property). For a single blob, Area returns a 1x1 struct, Centroid returns a 1x2 struct, and MajorAxisLength returns a 1x1 struct. So I guess the number of cells required to store the Areas, Centroids and MajorAxisLengths varies with the number of blobs, so using just one testfv to store these values as I have done would be wrong.
Is it possible? This is the code I tried (I assumed that testfv has 25 columns, which allows me to store up to 8 blobs' info):
[labeledImage, numberOfBlobs] = bwlabel(ves, 8);
col = numberOfBlobs*2;
testfv = zeros(3, col);
for i = 1:col
    blobMeasurements = regionprops(labeledImage, 'Area');
    testfv(1,col) = [blobMeasurements.Area];
    blobMeasurements = regionprops(labeledImage, 'MajorAxisLength');
    testfv(2,col) = [blobMeasurements.MajorAxisLength];
    blobMeasurements = regionprops(labeledImage, 'Centroid');
    testfv(3,col) = [blobMeasurements.Centroid];
end
I am getting the following error:
??? Subscripted assignment dimension mismatch.
Error in ==> alpha1 at 191
testfv(1,col) = [blobMeasurements.Area];
Also, I need to write the data of the testfv matrix to an Excel sheet file. How do I do that?
I would really appreciate the help, as I am new to Matlab.

Recomposed matrix from Decomposed matrix doesn't match

This is a matrix knowledge question; I am talking about XNA, but only as a reference.
I decomposed a matrix in XNA and got the decomposed values, then tried to create the matrix again from those values, and the resultant matrix does not match the original one.
I tried to normalize the quaternion.
I tried to generate a rotation matrix from the quaternion.
I tried swapping the order of the transformations: SRT, STR, TRS, TSR, RST, RTS.
Why am I doing this? I am creating my own model importer and I am comparing my results with XNA using the same model, so I am reading almost the same source SRT (up to some decimal difference) as XNA's decomposed values, but my resultant matrix didn't match XNA's. So I went back to basics and tried to decompose/recompose the XNA matrix, but I found it doesn't match either.
These are the Original XNA Matrix values
?this.XNAModel.Bones[0].Transform
{{M11:1.331581E-06 M12:-5.551115E-17 M13:1 M14:0}
{M21:1 M22:-4.16881E-11 M23:-1.331581E-06 M24:0}
{M31:4.16881E-11 M32:1 M33:8.15331E-23 M34:0}
{M41:0.03756338 M42:37.46099 M43:2.230549 M44:1} }
The decomposition succeeds; lFlag is true:
bool lFlag = this.XNAModel.Bones[0].Transform.Decompose(out lDecScale, out lDecRotation, out lDecTranslation);
//decomposed values
?lDecScale
{X:1 Y:1 Z:1}
?lDecRotation //quat
{X:-0.5000003 Y:-0.4999996 Z:-0.4999996 W:0.5000004}
?lDecTranslation
{X:0.03756338 Y:37.46099 Z:2.230549}
Recompose the matrix from the decomposed values; I've tried all the order combinations, SRT shown here:
//lDecRotation.Normalize();
Matrix lRecompose = Matrix.CreateScale(lDecScale) *
Matrix.CreateFromQuaternion(lDecRotation) * Matrix.CreateTranslation(lDecTranslation);
Result with the quaternion not normalized, using SRT; doesn't match the original matrix:
?lRecompose
{{M11:1.430511E-06 M12:-5.960464E-08 M13:0.9999999 M14:0}
{M21:0.9999999 M22:1.192093E-07 M23:-1.370907E-06 M24:0}
{M31:-5.960464E-08 M32:0.9999999 M33:1.192093E-07 M34:0}
{M41:0.03756338 M42:37.46099 M43:2.230549 M44:1} }
Result with the quaternion normalized, using SRT; doesn't match the original matrix:
?lRecompose
{{M11:1.192093E-06 M12:-5.960464E-08 M13:1 M14:0}
{M21:1 M22:-1.192093E-07 M23:-1.370907E-06 M24:0}
{M31:-5.960464E-08 M32:1 M33:-1.192093E-07 M34:0}
{M41:0.03756338 M42:37.46099 M43:2.230549 M44:1} }
This is what my model importer reads:
?this.ModelNew.Bones[0].Scale
{X:1 Y:1 Z:1}
?this.ModelNew.Bones[0].Rotation
{X:-0.0002303041 Y:-8.604798E-05 Z:-5.438289}
There is a small difference between this result and the decomposed one from XNA:
//My importer, based on the above Rotation Vector, converted to radians
?lQuat {X:-0.4999999 Y:-0.5 Z:-0.5 W:0.4999999}
//XNA
{X:-0.5000003 Y:-0.4999996 Z:-0.4999996 W:0.5000004}
?this.ModelNew.Bones[0].Translation
{X:0.03756338 Y:37.46099 Z:2.230549}
Depending on how your graphics API manages matrices, and how your modelling software exports them, you may have to transpose the matrix before decomposing it, and then transpose the result after recomposing it to get the same result as the original matrix.
By the way, the correct order for recomposing is to translate first, then rotate, then scale.
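For reference, a sketch of that ordering written in both common matrix conventions; in each, the translation matrix is written outermost and the scale acts on the vertex first:

$$v' = T\,R\,S\,v \quad \text{(column vectors)} \qquad v' = v\,S\,R\,T \quad \text{(row vectors, as XNA uses)}$$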

Finding area of the image

I used the connected component labeling algorithm (bwconncomp) to label the different parts of a binary image (MATLAB). Now I need to calculate the area of the different labels and remove the labels with smaller areas. Can I use the default area-finding command, or are there specific commands for that in Matlab?
From the documentation:
CC = bwconncomp(BW) returns the connected components CC found in BW.
The binary image BW can have any dimension. CC is a structure with
four fields...
The final field in CC is PixelIdxList, which is:
[a] 1-by-NumObjects cell array where the kth element in the cell array is
a vector containing the linear indices of the pixels in the kth object.
You can find the area of each label by looking at the length of the corresponding entry in the cell array. Something like:
areas_in_pixels = cellfun(@length, CC.PixelIdxList);
The PixelIdxList is a cell array, each member of which contains the linear indexes of the pixels present in that connected component. The line of code above finds the length of each cell in the cell array - i.e. the number of pixels in each connected component.
I've used cellfun to keep the code short and efficient. A different way of writing the same thing would be something like:
areas_in_pixels = nan(1, length(CC.PixelIdxList));
for i = 1:length(CC.PixelIdxList)
    areas_in_pixels(i) = length(CC.PixelIdxList{i});
end
For each connected component, you can then find the size of that component in pixels by accessing an element in areas_in_pixels:
areas_in_pixels(34) % area of connected component number 34
If you don't want to write lots of code like the above, just use MATLAB's built-in functions. Label your components, and from the properties of each component you can find its area. Suppose Bw is the binary image:
[B,L] = bwboundaries(Bw, 'noholes');
stats = regionprops(L, 'Area', 'Perimeter');
for k = 1:length(B)
    area(k) = stats(k).Area;
end
You can make this better still by avoiding the for loop with the following:
[B,L] = bwboundaries(Bw,'noholes');
stats = regionprops(L,'Area','perimeter');
area = [stats.Area];
