How can I start writing a Gaussian radial basis function in Mathematica? Please provide code as a reference if possible. I have already tried, but I still could not get it to run. Please give me some guidance on how to run it.
Without more details, it's impossible to give any better answer than this.
(* _?NumericQ lets the function accept integers and exact numbers as well as reals; parameter defaults to 1 *)
GaussianRadialBasis[x_?NumericQ, parameter_: 1] := Exp[-parameter*x^2];
I have an idea for an app that takes a printed page with a square in each of the four corners and lets you measure objects on the paper as long as at least two of the squares are visible. I want a user to be able to take the picture from a less-than-perfect angle and still have the objects measured accurately.
I'm unable to figure out exactly how to find information on this subject due to my lack of knowledge in the area. I've found examples of OpenCV code that do some interesting transforms and the like, but I haven't yet worked out how to phrase what I'm asking in simpler terms.
Does anyone know of papers or mathematical concepts I can look up to get further into this project?
I'm not quite sure how or whom to ask other than the people on this forum; sorry for the somewhat vague question.
What you describe is very reminiscent of augmented reality marker tracking. Maybe you can start by searching these words on a search engine of your choice.
A single marker, if done correctly, can be used to identify it without confusing it with other markers AND to determine how the surface is placed in 3D space in front of the camera.
But that's all very difficult and advanced stuff; I'd strongly advise NOT trying to implement something like this yourself, as it would take years of research. The realistic route is to use a ready-made open-source library that outputs the data you need for your app.
Such a library may not even exist. In that case you'll have to buy one; given how niche your problem is, that would be perfectly plausible.
Here I'll cover only the programming side; if you want, you can dig into the mathematical side starting from these examples. Most of the functions you need are available in OpenCV. Here are some pointers in Python (a short sketch combining these steps follows the list):
To detect the printed paper, you can use the cv2.findContours function. The outermost contour is probably the paper, but you need to test this on actual images. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
If the paper is skewed (not photographed at a perfect angle), you can find the angle with cv2.minAreaRect, which returns the angle of the contour found above. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html (part 7b).
If you want to rotate the paper, use cv2.warpAffine. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
To detect the objects on the paper, there are several methods. The easiest is to reuse the contours above. If the objects have distinctive colors, you can also detect them with a color filter. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
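A minimal sketch tying these steps together, assuming OpenCV 4.x in Python. The file name "page.jpg", the Otsu pre-threshold, and the 500x700 output size are placeholders; it also uses cv2.getPerspectiveTransform/cv2.warpPerspective rather than cv2.warpAffine, because a photo taken from an angle is a perspective distortion rather than a pure rotation, and on real photos you would still need to order the four corners consistently:

import cv2
import numpy as np

img = cv2.imread("page.jpg")                       # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# The paper is assumed to be the largest external contour in the frame.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
paper = max(contours, key=cv2.contourArea)

# minAreaRect gives the rotation angle of the paper outline (step 2 above).
(cx, cy), (w, h), angle = cv2.minAreaRect(paper)
print("estimated skew angle:", angle)

# Approximate the outline with 4 corners and warp to a top-down view.
quad = cv2.approxPolyDP(paper, 0.02 * cv2.arcLength(paper, True), True)
if len(quad) == 4:
    src = quad.reshape(4, 2).astype(np.float32)
    dst = np.float32([[0, 0], [500, 0], [500, 700], [0, 700]])  # arbitrary output size
    M = cv2.getPerspectiveTransform(src, dst)
    top_down = cv2.warpPerspective(img, M, (500, 700))
    cv2.imwrite("top_down.jpg", top_down)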
I am trying to reproduce locally linear embedding (LLE) in MATLAB. I am able to get the Y vectors, but what I am unclear about is computing the residual variance. I was wondering if someone has run into this issue and has an algorithm for how they went about programming it in MATLAB?
Thanks
The dimensionality reduction toolbox implements LLE (among many other nonlinear dimensionality reduction techniques). It's very easy to use and a great toolbox, so if you don't really need to implement it yourself you could just use it, or otherwise look into its code for inspiration.
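If you do want to compute it yourself: in the Isomap/LLE literature, residual variance is usually reported as 1 - R^2, where R is the correlation between the pairwise distances in the original data and the pairwise distances in the embedding Y. Here is a minimal sketch of that definition, shown in Python/NumPy for illustration; the arrays X and Y and the Euclidean metric are placeholders, and MATLAB's pdist and corrcoef give you the same pieces:

import numpy as np
from scipy.spatial.distance import pdist

def residual_variance(X, Y):
    """Residual variance between input data X (n x D) and embedding Y (n x d),
    using the common definition 1 - R^2 on the pairwise-distance vectors."""
    dx = pdist(X)          # Euclidean distances in the original space
    dy = pdist(Y)          # Euclidean distances in the embedding
    r = np.corrcoef(dx, dy)[0, 1]
    return 1.0 - r ** 2

# Example with placeholder data:
X = np.random.rand(100, 10)   # pretend high-dimensional input
Y = X[:, :2]                  # pretend 2-D embedding
print(residual_variance(X, Y))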
I am trying to allow the user to draw a freehand shape and then, using a best-guess algorithm, convert the freehand stroke to an actual shape. I hope to keep it pretty simple at first, probably just ellipses and rectangles. I'm trying to find a good starting point. Is there a library available that does this, or a set of algorithms that would be useful? Any help to get me started would be great; I'm having trouble finding the proper terms to search for.
googling "pattern recognition geometric shapes handwritten" returns hits including A Simple Approach to Recognise Geometric Shapes Interactively
I would like some help from the OpenCV aficionados here.
I would like to know which direction to take (and any advice or pieces of code) on how to morph two faces together with a given ratio, say 10% of the first and 90% of the second.
I have seen functions like cvWarpAffine and cvMakeScanlines but I am not sure how to use them.
So if somebody could help me here, I'll be very grateful.
Thanks in advance.
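Not a full answer, but a hedged starting point for the ratio part: a plain cross-dissolve with cv2.addWeighted gives the 10%/90% blend (the file names and the common size below are placeholders). A true morph additionally warps matching facial landmarks toward each other, which is where per-triangle affine warps such as cvWarpAffine/cv2.warpAffine come in; this sketch does not do that part.

import cv2

# Placeholder file names; both images are resized to a common size before blending.
face1 = cv2.imread("face1.jpg")
face2 = cv2.imread("face2.jpg")
face2 = cv2.resize(face2, (face1.shape[1], face1.shape[0]))

alpha = 0.10  # 10% of the first face, 90% of the second
blend = cv2.addWeighted(face1, alpha, face2, 1.0 - alpha, 0.0)
cv2.imwrite("blend.jpg", blend)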
Unless the images compared are the exact same images, you would not go very far with this.
This is an artificial intelligence problem and needs to be solved as such. A typical solution involves:
Normalising the data (removing noise, skew, ...) from the images
Extracting features (turning each image into a smaller set of data)
Training a machine learning model (typically a classifier) on your matched examples
Testing the result
Refining the previous steps according to the results until you get good recognition
The choice of OpenCV functions used depends on your feature extraction method. Have a look at Eigenface.
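Purely as an illustration of the feature-extraction step described above, here is a minimal eigenfaces-style sketch in Python/NumPy. The random arrays stand in for your flattened, equally sized grayscale face crops, and the nearest-neighbour lookup is a stand-in for a real classifier:

import numpy as np

# Placeholder data: 20 "face images", each flattened to a 64*64 = 4096 vector.
rng = np.random.default_rng(0)
faces = rng.random((20, 64 * 64)).astype(np.float32)
labels = np.arange(20) % 4            # pretend there are 4 different people

# Eigenfaces = principal components of the mean-centred training faces.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]                  # keep the 10 strongest components

# Feature extraction: project each face onto the eigenfaces.
features = centred @ eigenfaces.T

# "Classifier": nearest neighbour in eigenface space for a new (placeholder) face.
probe = rng.random(64 * 64).astype(np.float32)
probe_feat = (probe - mean_face) @ eigenfaces.T
nearest = np.argmin(np.linalg.norm(features - probe_feat, axis=1))
print("predicted label:", labels[nearest])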
What thresholding technique should I apply to the image in order to highlight the bright regions inside the image as well as the outer boundary?
The im2bw function does not give a good result.
Help!!
Edit: Most of my images have the following histogram
Edit: Found a triangle threshold method that suits my work :)
Your question isn't very easy to answer since you don't really define what an ideal solution should accomplish.
Have you tried im2bw(yourImage, 0.1)? That is, using a threshold to decide which parts should be black and which shouldn't. I got decent results with that (depending on the purpose, of course). Try it, and if it isn't good enough, tell us in what way you need to improve it and I will try to help with some more advanced techniques!
EDIT: Using thresholds 0.1 and 0.01 respectively; perhaps something around 0.05 would be good?
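Since the question's edit says a triangle threshold method worked: for reference, OpenCV exposes that method directly. A minimal sketch in Python (the file name is a placeholder, and the input must be a single-channel 8-bit image); on the MATLAB side, graythresh gives an automatic Otsu threshold if you prefer to stay there:

import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# The triangle method picks the threshold automatically from the histogram;
# the 0 passed as the threshold value is ignored when THRESH_TRIANGLE is set.
thresh_val, binary = cv2.threshold(img, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_TRIANGLE)
print("threshold chosen by the triangle method:", thresh_val)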
It sounds like what you want to do is image segmentation (see http://en.wikipedia.org/wiki/Segmentation_(image_processing) ).
Many such methods are based on the Chan-Vese model, which identifies the region of interest by solving an optimization problem involving a level set function. Since you're using MATLAB, this code: http://www.stanford.edu/~tagoldst/Tom_Goldstein/Split_Bregman.html should do a good job of finding the regions you are interested in.
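If a Python detour is acceptable for experimenting, scikit-image also ships a Chan-Vese implementation, so you can try the idea quickly before committing to the MATLAB code above. A minimal sketch; the file name is a placeholder and mu, which controls how smooth the segmentation boundary is, will likely need tuning:

import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import chan_vese

image = img_as_float(io.imread("image.png", as_gray=True))  # placeholder file name

# chan_vese returns a boolean mask separating the bright region from the background.
mask = chan_vese(image, mu=0.25)
print("segmented pixels:", int(np.count_nonzero(mask)))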