Is there any API to clip the gradients of a network, or do I need to implement it myself?
Best,
Afshin
I found one:
Tensor at::clamp(const Tensor &self, c10::optional<Scalar> min = c10::nullopt, c10::optional<Scalar> max = c10::nullopt), though I also implemented it myself.
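For anyone on the Python side of the same question, here is a minimal sketch (assumes a standard nn.Module called model, after loss.backward() has run):
import torch  # needed for the clip_grad_norm_ alternative below

def clip_gradients(model, clip_value=1.0):
    # element-wise clamp of each parameter's gradient, same idea as at::clamp
    for p in model.parameters():
        if p.grad is not None:
            p.grad.clamp_(-clip_value, clip_value)

# or clip by the global norm with the built-in utility:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)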
I have a problem where I need to compare analytical and numerical flux around a circular geometry defined by x^2 + y^2 = 0.5^2.
The flux is defined by grad(u).n, where I choose u_analytical to be (x^2 + y^2, 0) in two dimensions.
The n in the formula is the surface normal of the circle, so I think it is
(2x/sqrt(4x^2+4y^2), 2y/sqrt(4x^2+4y^2)). So my flux in the x direction only is
4x^2/sqrt(4x^2+4y^2) + 4y^2/sqrt(4x^2+4y^2), but my numerical solution is far from that. Am I making a fundamental mistake here?
Thanks in advance.
I guess I figured this out after reading some other related/similar posts. The easiest way to think about it is to convert the surface integral of the flux, \int_S (grad(u).n) dS, into the volume integral \int_V div(grad(u)) dV using the divergence theorem. From there I know div(grad(u)) = 4, and the volume integral runs over the area of the circle, pi*r^2, which makes 4*pi*r^2 the analytical flux.
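Writing the check out explicitly (same argument; u here is the first component x^2 + y^2, and n the outward unit normal):
\nabla u = (2x,\, 2y), \qquad n = \frac{(x,\, y)}{r}, \qquad \nabla u \cdot n = \frac{2(x^2 + y^2)}{r} = 2r
\oint_S \nabla u \cdot n \, dS = 2r \cdot 2\pi r = 4\pi r^2 = 4 \cdot \pi r^2 = \int_V \nabla^2 u \, dV
For r = 0.5 both sides give a total flux of \pi.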
The code:
A = rgb2gray(imread('Capture.PNG'));
imshow(A)
[centers, radii, metric] = imfindcircles(A,[12 17]);  % [12 17] is the radius search range in pixels
I don't understand why this won't work, because the picture dimensions are 155x185 and I used ImageJ to find the diameter of one sphere (approx 30 pixels), i.e. a radius of about 15, which is inside the [12 17] range.
My goal is to identify the individual spheres. Thanks!
I'm sorry, I don't have the Image Processing Toolbox right now, but I've done things like this before.
My guess is that you'll have to do edge detection first (https://www.mathworks.com/help/images/edge-detection.html#responsive_offcanvas). You should probably set the threshold low to get lots of edges and then refine them with morphological operators. You might be able to get away without refining the initial set of edges if you play with the Sensitivity and EdgeThreshold parameters of imfindcircles. Looks fun!
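If you want to prototype without the toolbox, here is the same pipeline (edges first, then circle detection) sketched with OpenCV in Python; the parameter values are guesses you would have to tune:
import cv2

img = cv2.imread('Capture.PNG', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)          # low thresholds -> lots of edges
cv2.imwrite('edges.png', edges)          # inspect the edge map before refining
# HoughCircles runs its own Canny pass internally, so it takes the grayscale image
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150,   # upper Canny threshold used internally
                           param2=20,    # accumulator threshold; lower -> more circles
                           minRadius=12, maxRadius=17)
print(0 if circles is None else circles.shape[1], 'circles found')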
I know this question may not be a perfect fit for the forum, but I think I can get help from the many smart image-processing folks here. I have an image that contains both texture and non-texture regions. How can I detect the regions that are texture? Could you suggest an algorithm or a parameter to distinguish non-texture regions from texture regions?
Thank you so much
UPDATE:
Based on the suggestion about the gray-level matrix, I used a tool to extract texture features. However, I don't know which is best for my case. Please look at my results and help me decide which feature to choose.
@rayryeng: Could you tell me the purpose of the Neighboring Gray-Level Dependence Matrix (NGLDM)? How would I use it in my case?
You can use texture descriptors such as those used in MPEG-7:
Homogeneous Texture Descriptor (HTD)
Texture Browsing Descriptor (TBD)
Edge Histogram Descriptor (EHD)
You can find the details in scientific papers such as "Evaluation and comparison of texture descriptors proposed in MPEG-7" or "Texture Descriptors in MPEG-7".
A basic way to compute texture descriptors is to use Gabor filters; some of the MPEG-7 descriptors are based on them.
You can also take a look at the Grey-Level Co-occurrence Matrix texture measurements.
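To make the co-occurrence route concrete, here is a small sketch with scikit-image; sliding it over patches and thresholding, say, the contrast score is one simple way to separate texture from non-texture (the patch size and the threshold are yours to tune):
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # spelled greycomatrix/greycoprops in older versions

def glcm_stats(patch):
    # patch: a 2-D uint8 grayscale region; textured regions typically score
    # higher on contrast and lower on homogeneity/energy than flat regions
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi/2],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ('contrast', 'homogeneity', 'energy', 'correlation')}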
I am not sure if this is a valid way, or whether anybody else uses this approach (I could not find any scholarly papers), but I have an intuitive approach that I have used a couple of times and that has worked fine for me.
I calculate the number of valid SURF features in an image and sort images by that count. My intuition is that as the number of features increases, the texture level increases too. Below is my Matlab function that extracts the number of features:
function [num_pts] = im2surf_feature(im)
% IM2SURF_FEATURE  Count the valid SURF interest points in an image.
%   Accepts either an image array or a filename.
if nargin>=1 && ischar(im) && exist(im, 'file')
    im = imread(im);
end
if size(im,3)==3
    im = rgb2gray(im);  % SURF detection expects a grayscale image
end
ptsI1 = detectSURFFeatures(im);
% extractFeatures discards points whose descriptors cannot be computed
[~, validPtsI1] = extractFeatures(im, ptsI1);
num_pts = size(validPtsI1,1);
end
detectSURFFeatures and extractFeatures are Matlab functions.
Note: I know this is a very late answer, but maybe someone can use it or give me feedback as to why this method is good or bad.
I would like to know something about wrinkle detection in Matlab:
I thought of using the Hough Transform, but it did not work for this. Is there another approach I could try?
I even thought of using Sobel, Canny, and other edge detectors, but when I read their documentation, they do not seem to be quite what I need.
close all
clear all
clc
Image = imread('imagename.jpg');
GrayImage = rgb2gray(Image);
FiltImage = edge(GrayImage ,'sobel');
imshow(FiltImage)
I want all the wrinkles as white pixels and the rest of the image as black.
I borrowed the method used in vessel detection from the paper "Hessian-based Multiscale Vessel Enhancement Filtering" by Frangi et al. There is a Matlab implementation, FrangiFilter2D, that works on 2D vessel images, and I tried to apply it to wrinkle detection.
% fix the filter scale at 5 px; BlackWhite = true looks for dark ridges (wrinkles)
options = struct('FrangiScaleRange', [5 5], 'FrangiScaleRatio', 1, 'FrangiBetaOne', 1,...
    'FrangiBetaTwo', 500, 'verbose',true,'BlackWhite',true);
[outIm,whatScale,Direction] = FrangiFilter2D(double(GrayImage), options);
imshow(uint8(outIm/max(outIm(:))*255))  % 255, not 256, so the maximum does not clip
It looks better than pure edge extraction, though some improvement is needed by (i) tuning the parameters and (ii) combining it with other image-processing strategies.
Matlab has a ton of fun tools that you can essentially play with in combination to detect the wrinkles. Here are some things to look at.
1). Study thresholding and see how it applies to your situation; it will help you a lot because of the contrast between the wrinkles and the rest of the face (see the sketch after this list).
2). Remember you can add and subtract images.
3). Study the watershed algorithm if you feel adventurous.
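For point 1, a tiny thresholding sketch (OpenCV/Python; 'face.jpg' is a placeholder, and whether the wrinkles separate cleanly depends on the lighting):
import cv2

gray = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder filename
# Otsu picks the threshold automatically; wrinkles are usually darker than
# the surrounding skin, so THRESH_BINARY_INV leaves them white on black
_, wrinkles = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imwrite('wrinkles.png', wrinkles)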
I'm trying to think of a fast algorithm for the following issue.
Given a goal image G and two images A and B, determine which of A or B is more similar to G. Note that images A, B, and G all have the same dimensions.
By more similar, I mean it looks more like image G overall.
Any ideas for algorithms? I am doing this in Objective-C and can scan each and every pixel in images A, B, and G.
I implemented the following: scan every pixel and accumulate the absolute error in the red, green, and blue values, for A against G and for B against G. The image with the lower total error is the more similar one. It works okay, but it is extremely slow.
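Just to pin down what I am computing, the comparison above is equivalent to this numpy sketch (my real code is Objective-C):
import numpy as np

def sad(img, goal):
    # sum of absolute differences over all pixels and RGB channels
    return np.abs(img.astype(np.int64) - goal.astype(np.int64)).sum()

# A is the better match when sad(img_A, img_G) < sad(img_B, img_G)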
It is not possible to do better than O(X*Y), where X and Y are the image dimensions, since you need to scan each pixel of the input anyway.
However, one technique you can try is to scan random pixels and accumulate the differences. Once the running scores for A and B diverge enough, you can stop early.
import random

def dissimilar(img1, img2, x, y):
    # absolute difference of the RGB values at one pixel
    return sum(abs(a - b) for a, b in zip(img1[y][x], img2[y][x]))

# X, Y are the dimensions
sim_A = 0
sim_B = 0
# keep sampling until one image pulls clearly ahead
while abs(sim_A - sim_B) < MAX_DISSIMILARITY:
    rand_x = random.randrange(X)
    rand_y = random.randrange(Y)
    sim_A += dissimilar(img_G, img_A, rand_x, rand_y)
    sim_B += dissimilar(img_G, img_B, rand_x, rand_y)
You may try the SIFT algorithm (Scale-Invariant Feature Transform). Since you want to find which image is MORE similar to the goal image, I guess this is the best fit. It extracts the invariant features of the image (features that don't change with changes in illumination, scale, perspective, etc.) and builds a feature vector from them. You can then use this feature vector to compare against the other images. You may check this and this for further reference.
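A rough sketch of that in Python with OpenCV (SIFT_create needs a reasonably recent OpenCV build; 0.75 is Lowe's usual ratio-test value):
import cv2

def sift_match_count(goal, candidate):
    # count ratio-test matches between the goal image and a candidate
    sift = cv2.SIFT_create()
    _, des_g = sift.detectAndCompute(goal, None)
    _, des_c = sift.detectAndCompute(candidate, None)
    matches = cv2.BFMatcher().knnMatch(des_g, des_c, k=2)
    return sum(1 for m, n in matches if m.distance < 0.75 * n.distance)

# the candidate with more matches is "more similar" to the goal image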
Ideally, a computer vision library makes things way simpler (I guess it might be difficult to read and write images in Objective-C without one). OpenCV (the open-source computer vision library) is best suited for stuff like this; it has many built-in functions for common image and video tasks.
Hope this helps :)
I would recommend checking out OpenCV, which is an image processing library. I don't think it has Objective-C support, but I think it is a better starting place than writing your own algorithm. Usually better not to reinvent the wheel unless you are doing it for personal practice.
The best way I found is the following.
First, invert all the pixels of the target image to make its opposite. This is the most dissimilar image.
Then, to compare an image to the target, compute how far it is from that most dissimilar image. The farther away it is, the better the match.
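A literal numpy rendering of that idea (treat it as a sketch; whether it beats plain absolute error is debatable):
import numpy as np

def score(img, goal):
    # distance from the inverted goal image; the larger the distance,
    # the closer the image is to the goal, per the reasoning above
    anti = 255 - goal  # the "most dissimilar" image
    return np.abs(img.astype(np.int64) - anti.astype(np.int64)).sum()

# pick A over B when score(img_A, img_G) > score(img_B, img_G)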