I am trying to plot an image from time-domain signals stored at each pixel of Imagespace. I want to implement a Gaussian high-pass filter, but I am not getting the sharp filtered images I expect.
for i = 1:165
    for j = 1:212
        v = reshape(Imagespace(i,j,:), [1024,1]);
        v1 = abs(fft(v));
        E = v1;
        c = 0.5;
        gauss = 1 - exp((-E(10).^2)/(2*c^2));
        signal = gauss.*E;
        filtered = abs(ifft(signal));
        IS(i,j) = (max(filtered)./max(tvldata.Ref/2));
    end
end
I have a table in Oracle that contains people's names and photos. How do I build face recognition that can identify a person's name just from a picture taken with the camera?
What techniques can I use?
Firstly, do not store the raw images in the BLOB column. You should store the vector representations of the raw images instead. The following Python code block will find the vector representation of a face image.
#!pip install deepface
from deepface.basemodels import VGGFace, Facenet
from deepface.commons import functions  # preprocess_face lives here

model = VGGFace.loadModel()  # you can use Google FaceNet instead of VGG-Face
target_size = model.layers[0].input_shape

# preprocess_face detects the facial area and aligns it
img = functions.preprocess_face(img="img.jpg", target_size=target_size)
representation = model.predict(img)[0, :]
Here, you can pass either an exact image path like img.jpg or a 3D array to the img argument of preprocess_face. This way, you will store the vector representations in the BLOB column of the Oracle database.
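The storage step itself is not shown above. Here is a minimal sketch of one way to do it, using the representation vector computed above; the faces table, its column names, and the cx_Oracle driver are illustrative assumptions, not part of the original answer.
import numpy as np
# import cx_Oracle  # hypothetical driver choice; any Oracle driver that accepts bytes for BLOBs works

# Serialize the embedding to raw bytes so it can be written to a BLOB column.
blob_bytes = np.asarray(representation, dtype=np.float32).tobytes()

# cursor.execute("INSERT INTO faces (name, embedding) VALUES (:1, :2)",
#                ["alice", blob_bytes])  # assumed table and column names

# When reading a row back, restore the vector from the raw bytes.
restored_representation = np.frombuffer(blob_bytes, dtype=np.float32)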
When you have a new face image and want to find its identity in the database, find its representation again.
# preprocess_face detects the facial area and aligns it
target_img = functions.preprocess_face(img="target.jpg", target_size=target_size)
target_representation = model.predict(target_img)[0, :]
Now you have the vector representation of the target image and the vector representations of the database images. You need to compute a similarity score between the target representation and each of the database representations.
Euclidean distance is the easiest way to compare vectors.
import numpy as np

def findEuclideanDistance(source_representation, test_representation):
    euclidean_distance = source_representation - test_representation
    euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
    euclidean_distance = np.sqrt(euclidean_distance)
    return euclidean_distance
We will compare each database instance to the target. Suppose that the representations of the database instances are stored in a representations object.
distances = []
for i in range(0, len(representations)):
    source_representation = representations[i]
    # find the distance between target_representation and source_representation
    distance = findEuclideanDistance(source_representation, target_representation)
    distances.append(distance)
The distances list stores the distance from each item in the database to the target. We need to find the lowest distance.
idx = np.argmin(distances)
idx is the index of the database entry that best matches the target image.
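One step not covered above: in practice you usually accept the closest match only if its distance falls below a threshold; otherwise the face is treated as unknown. A minimal sketch, assuming a names list stored alongside the embeddings and an illustrative threshold value (both are assumptions, not from the answer above):
names = ["alice", "bob", "carol"]  # hypothetical labels, parallel to representations
threshold = 0.55                   # illustrative value; tune it on your own data

if distances[idx] < threshold:
    print("Identified as:", names[idx])
else:
    print("Face not found in the database")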
I want to read in the right ascension (in hours), declination (in degrees), and size (in arcmin) of a catalogue of galaxies and draw all of them in a large image of a specified pixel size.
I tried converting the ra, dec and size into pixels to create a Bounds object for each galaxy, but get an error that "BoundsI must be initialized with integer values." I understand that pixels have to be integers...
But is there a way to center the large image at a specified ra and dec, then input the ra and dec of each galaxy as parameters to draw it in?
Thank you in advance!
GalSim uses the CelestialCoord class to handle coordinates in the sky and any of a number of WCS classes to handle the conversion from pixels to celestial coordinates.
The two demos in the tutorial series that use a CelestialWCS (the base class for WCS classes that use celestial coordinates for their world coordinate system) are demo11 and demo13. So you might want to take a look at them. However, neither one does something very close to what you're doing.
So here's a script that more or less does what you described.
import galsim
import numpy
# Make some random input data so we can run this.
# You would use values from your input catalog.
ngal = 20
numpy.random.seed(123)
ra = 15 + 0.02*numpy.random.random( (ngal) ) # hours
dec = -34 + 0.3*numpy.random.random( (ngal) ) # degrees
size = 0.1 * numpy.random.random( (ngal) ) # arcmin
e1 = 0.5 * numpy.random.random( (ngal) ) - 0.25
e2 = 0.5 * numpy.random.random( (ngal) ) - 0.25
# arcsec is usually the more natural units for sizes, so let's
# convert to that here to make things simpler later.
# There are options throughout GalSim to do things in different
# units, such as arcmin, but arcsec is the default, so it will
# be simpler if we don't have to worry about that.
size *= 60 # size now in arcsec
# Some plausible location at which to center the image.
# Note that we are now attaching the right units to these
# so GalSim knows what angle they correspond to.
cen_ra = numpy.mean(ra) * galsim.hours
cen_dec = numpy.mean(dec) * galsim.degrees
# GalSim uses CelestialCoord to handle celestial coordinates.
# It knows how to do all the correct spherical geometry calculations.
cen_coord = galsim.CelestialCoord(cen_ra, cen_dec)
print 'cen_coord = ',cen_coord.ra.hms(), cen_coord.dec.dms()
# Define some reasonable pixel size.
pixel_scale = 0.4 # arcsec / pixel
# Make the full image of some size.
# Powers of two are typical, but not required.
image_size = 2048
image = galsim.Image(image_size, image_size)
# Define the WCS we'll use to connect pixels to celestial coords.
# For real data, this would usually be read from the FITS header.
# Here, we'll need to make our own. The simplest one that properly
# handles celestial coordinates is TanWCS. It first goes from
# pixels to a local tangent plane using a linear affine transformation.
# Then it projects that tangent plane into the spherical sky coordinates.
# In our case, we can just let the affine transformation be a uniform
# square pixel grid with its origin at the center of the image.
affine_wcs = galsim.PixelScale(pixel_scale).affine().withOrigin(image.center())
wcs = galsim.TanWCS(affine_wcs, world_origin=cen_coord)
image.wcs = wcs # Tell the image to use this WCS
for i in range(ngal):
    # Get the celestial coord of the galaxy
    coord = galsim.CelestialCoord(ra[i]*galsim.hours, dec[i]*galsim.degrees)
    print 'gal coord = ',coord.ra.hms(), coord.dec.dms()

    # Where is it in the image?
    image_pos = wcs.toImage(coord)
    print 'position in image = ',image_pos

    # Make some model of the galaxy.
    flux = size[i]**2 * 1000  # Make bigger things brighter...
    gal = galsim.Exponential(half_light_radius=size[i], flux=flux)
    gal = gal.shear(e1=e1[i], e2=e2[i])

    # Pull out a cutout around where we want the galaxy to be.
    # The bounds needs to be in integers.
    # The fractional part of the position will go into offset when we draw.
    ix = int(image_pos.x)
    iy = int(image_pos.y)
    bounds = galsim.BoundsI(ix-64, ix+64, iy-64, iy+64)

    # This might be (partially) off the full image, so get the overlap region.
    bounds = bounds & image.bounds
    if not bounds.isDefined():
        print '  This galaxy is completely off the image.'
        continue

    # This is the portion of the full image where we will draw. If you try to
    # draw onto the full image, it will use a lot of memory, but if you go too
    # small, you might see artifacts at the edges. You might need to
    # experiment a bit with what is a good size cutout.
    sub_image = image[bounds]

    # Draw the galaxy.
    # GalSim by default will center the object at the "true center" of the
    # image. We actually want it centered at image_pos, so provide the
    # difference as the offset parameter.
    # Also, the default is to overwrite the image. But we want to add to
    # the existing image in case galaxies overlap. Hence add_to_image=True.
    gal.drawImage(image=sub_image, offset=image_pos - sub_image.trueCenter(),
                  add_to_image=True)

# Probably want to add a little noise...
image.addNoise(galsim.GaussianNoise(sigma=0.5))

# Write to a file.
image.write('output.fits')
GalSim deals with image bounds and locations using image coordinates. The way to connect true positions on the sky (RA, dec) to image coordinates is through the World Coordinate System (WCS) functionality in GalSim. I gather from your description that there is a simple mapping from RA/dec to pixel coordinates (i.e., there are no distortions).
So basically, you would set up a simple WCS defining the (RA, dec) center of the big image and its pixel scale. Then for a given galaxy (RA, dec), you can use the "toImage" method of the WCS to figure out where on the big image the galaxy should live. Any subimage bounds can be constructed using that information.
For a simple example with a trivial world coordinate system, you can check out demo10 in the GalSim repository.
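To make that recipe concrete, here is a minimal sketch of just the WCS setup and the toImage call, written against the same GalSim 1.x API as the longer script above; the pixel scale, image size, and coordinates are illustrative values.
import galsim

pixel_scale = 0.4                  # arcsec / pixel, illustrative
image = galsim.Image(2048, 2048)

# (RA, dec) at which the big image is centered (illustrative values).
cen_coord = galsim.CelestialCoord(15.0 * galsim.hours, -34.0 * galsim.degrees)

# Uniform square pixels plus a tangent-plane projection about cen_coord.
affine = galsim.PixelScale(pixel_scale).affine().withOrigin(image.center())
image.wcs = galsim.TanWCS(affine, world_origin=cen_coord)

# For each catalogue entry, toImage gives the galaxy's position on the big image.
gal_coord = galsim.CelestialCoord(15.01 * galsim.hours, -33.95 * galsim.degrees)
image_pos = image.wcs.toImage(gal_coord)   # a PositionD; truncate to ints for BoundsI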
In a conv-net model, I know how to visualize the filters; we can do itorch.image(model:get(1).weight).
But how can I efficiently visualize the output images after the convolution, especially the images in the second or third layer of a deep neural network?
Thanks.
Similarly to weight, you can use:
itorch.image(model:get(1).output)
To visualize the weights:
-- visualizing weights
n = nn.SpatialConvolution(1,64,16,16)
itorch.image(n.weight)
To visualize the feature maps:
-- initialize a simple conv layer
n = nn.SpatialConvolution(1,16,12,12)
-- push lena through net :)
res = n:forward(image.rgb2y(image.lena()))
-- res here is a 16x501x501 volume. We view it now as 16 separate sheets of size 1x501x501 using the :view function
res = res:view(res:size(1), 1, res:size(2), res:size(3))
itorch.image(res)
For more: https://github.com/torch/tutorials/blob/master/1_get_started.ipynb
My project is to detect human activity through stored video clips.
I am successfully able to do the following:
Get the Motion History Image (MHI) from a video using OpenCV
Train and classify the set of images using Matlab
However, I want to use Matlab in order to get the Motion History Image (MHI). Is it possible, and if yes can someone guide me? Thank you.
I have attached a sample Motion History Image (MHI)
I have used the following code for MHI:
http://www.ece.iastate.edu/~alexs/classes/2007_Fall_401/code/09_MotionHistory/motempl.c
MHI is just a way of implementing motion detection (and it uses silhouettes as its basis).
Let's suppose that the silhouette of the most recent object has been created. It also uses a timestamp to identify whether the current silhouette is recent or not. The older silhouettes have to be compared with the current silhouette in order to detect movement, so earlier silhouettes are also saved in the image, with an earlier timestamp.
MHI describes the changes of some moving objects over the image sequence. Basically, you only need to maintain an image in which every pixel encodes time information: whether the silhouette is recent, i.e. where the movement occurred at a given time.
Therefore the implementation of MHI is very simple, e.g.:
function MHI = MHI(fg)

% Initialize the output, MHI a.k.a. H(x,y,t,T)
MHI = fg;

% Define MHI parameter T
T = 15; % # of frames being considered; maximal value of MHI.

% Load the first frame
frame1 = fg{1};

% Get dimensions of the frames
[y_max, x_max] = size(frame1);

% Compute H(x,y,1,T) (the first MHI)
MHI{1} = fg{1} .* T;

% Start global loop for each frame
for frameIndex = 2:length(fg)
    % Load current frame from image cell
    frame = fg{frameIndex};

    % Begin looping through each point
    for y = 1:y_max
        for x = 1:x_max
            if (frame(y,x) == 255)
                MHI{frameIndex}(y,x) = T;
            else
                if (MHI{frameIndex-1}(y,x) > 1)
                    MHI{frameIndex}(y,x) = MHI{frameIndex-1}(y,x) - 1;
                else
                    MHI{frameIndex}(y,x) = 0;
                end
            end
        end
    end
end
Code from: https://searchcode.com/codesearch/view/8509149/
Update #1:
Try to draw it as follows:
% showMHI.m
% Input frame number and motion history vector to display normalized MHI
% at the specified frame.
function showMHI(n, motion_history)
frameDisp = motion_history{n};
frameDisp = double(frameDisp);
frameDisp = frameDisp ./ 15;
figure, imshow(frameDisp)
title('MHI Image');
I want to extract only the leaf from an image.
The background is plain white paper (A4) and there is some shadow.
I have applied some methods (structuring elements, edge detection with filters) but I cannot find a general approach that works for all the images.
These are some examples.
Are there better methods for this problem?
Thank you.
Another example. The result I got, using:
hsv_I = rgb2hsv(I);
Is = hsv_I(:,:,2);
Is_d = imdilate(Is,strel('diamond',4));
Is_e = imerode(Is,strel('diamond',2));
Is_de = imerode(Is_d,strel('disk',2));
Is_def = imfill(Is_de,'holes');
Is_defe = imerode(Is_def,strel('disk',5));
Then Is_defe is a mask used to segment the leaf.
But the method I used is very specific; I cannot apply it in general.
If you have the Image Processing Toolbox, you could do as follows:
The code below first estimates the threshold with the graythresh function, thresholds the image, and fills holes with the imfill function. Suppose I is a cell array containing your RGB images:
for k = 1:length(I)
    t = graythresh(rgb2gray(I{k}));
    BW{k} = imfill(~im2bw(I{k}, t), 'holes');
    subplot(length(I), 1, k), imshow(BW{k});
end