I am trying to reproduce locally linear embedding (LLE) in MATLAB. I am able to get the Y vectors, but what I am unclear about is computing the residual variance. I was wondering if someone has run into this issue and has an algorithm for how they went about programming it in MATLAB?
Thanks
The dimensionality reduction toolbox implements LLE (among many other nonlinear dimensionality reduction techniques). It is very easy to use and a great toolbox, so if you don't really need to implement it yourself you could use it directly, or otherwise look into their code for inspiration.
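If you do want to compute it yourself: the "residual variance" usually reported for LLE/Isomap embeddings (following Tenenbaum et al.) is 1 - R^2, where R is the correlation between the pairwise distances in the input space and in the embedding. A sketch in Python (in MATLAB the equivalent is essentially `1 - corr(pdist(X)', pdist(Y)')^2`); note that some variants use graph/geodesic distances on the input side, so check which your reference uses:

```python
import numpy as np

def pairwise_dists(Z):
    """Condensed (upper-triangle) vector of Euclidean pairwise distances."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(Z), k=1)
    return np.sqrt(d2[iu])

def residual_variance(X, Y):
    """1 - R^2 between pairwise distances in the input space X and in the
    embedding Y. This is the Isomap-paper definition, commonly reused for LLE."""
    r = np.corrcoef(pairwise_dists(X), pairwise_dists(Y))[0, 1]
    return 1.0 - r ** 2
```

The value is 0 when the embedding preserves all pairwise distances up to an affine relation, and approaches 1 as the distance structure is lost.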
I have implemented the marker-less (so not like OpenCV) watershed algorithm proposed in a 1991 paper by Vincent and Soille.
I have also implemented a distance transform, which I apply before the watershed.
It works well in a good number of cases, but sometimes it produces some oversegmentation. I have corrected part of this by Gaussian-filtering the distance-transform image.
I am planning to correct the rest by thresholding the watersheds, i.e. keeping only watershed lines whose height exceeds a threshold.
Considering that this paper is quite old (1991) I am wondering if anyone knows of papers or resources that explain something similar to what I intend to do.
Notes:
1) I am not using OpenCV; I am implementing everything myself from the papers.
2) I am going for a marker-less watershed.
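The depth-thresholding idea you describe is essentially the h-minima transform from the morphology literature (Soille): before flooding, fill every regional minimum whose depth is less than h, so only significant basins produce watershed regions. It can be computed as a morphological reconstruction by erosion of f + h above f. A sketch in Python (SciPy is used only for the grey erosion; the function name is my own):

```python
import numpy as np
from scipy import ndimage as ndi

def h_minima_suppress(f, h):
    """Fill regional minima of f whose depth is < h, via morphological
    reconstruction by erosion of the marker (f + h) bounded below by f.
    Feed the result to the watershed instead of f."""
    marker = f + h
    while True:
        eroded = ndi.grey_erosion(marker, size=(3, 3))  # spread low values
        new = np.maximum(eroded, f)                     # but never go below f
        if np.array_equal(new, marker):                 # stable -> reconstructed
            return new
        marker = new
```

Shallow minima are raised to the level of their surroundings (so they generate no basin), while deep minima survive with their depth reduced by h.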
I'm working on a small program for optical mark recognition.
The processing of the scanned form consists of two steps:
1) Find the form in the scanned image, deskew it and crop the borders.
2) With this "normalized" form, I can simply search the marks by using coordinates from the original document and so on.
For the first step, I'm currently using the homography functions from OpenCV and a perspective transform to map the points. I also tried the SurfDetector.
However, both approaches are quite slow and do not really meet the speed requirements for scanning forms from a document scanner.
Can anyone point me to an alternative algorithm/solution for this specific problem?
Thanks in advance!
Try the ORB or FAST detector: they should be faster than SURF (documentation here).
If those don't meet your speed requirement you should probably use a different approach. Do you need scale and rotation invariance? If not, you could try cross-correlation.
The Viola-Jones cascade classifier is pretty quick. It is used in OpenCV for face detection, but you can train it for a different purpose. Depending on the appearance of what you call your "form", you could also use simpler algorithms such as cross-correlation, as Muffo suggested.
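To make the cross-correlation idea concrete: assuming the scans are roughly axis-aligned and the form carries a known registration mark (both assumptions on my part), you can locate the mark by normalized cross-correlation. A brute-force NumPy sketch (use an FFT-based version for production speed):

```python
import numpy as np

def find_mark_ncc(image, template):
    """Locate `template` in `image` by normalized cross-correlation.
    Returns (row, col) of the best-matching top-left corner.
    Brute force over all positions; fine for small templates."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            if denom == 0:
                continue  # flat window, correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Locating two or three such marks gives you the corner points needed for the deskew/crop step without any feature detector.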
I found this very interesting code for a total variation filter: tvmfilter.
The additional functions this code uses are very confusing, but the denoising is far better than all the filters I have tried so far.
Update: I have figured out the code on my own :)
His additional function "tv" denoises with the ROF model which has been a major research topic for two decades now. See http://www.ipol.im/pub/algo/g_tv_denoising/ for a summary of current methods.
Briefly, the idea behind ROF is to approximate the given noisy image with a piecewise constant image by solving an optimization problem which penalizes the total variation (i.e. the l1-norm of the gradient) of the image.
The reason this performs well is that the other denoising methods you are probably working with smooth the image by convolution with a Gaussian (i.e. they penalize the l2-norm of the gradient, which amounts to solving the heat equation on the image). While fast to compute, denoising by smoothing blurs edges and thus results in poor image quality. l1-norm optimization preserves edges.
It's not clear how Guy solves the TV problem in the code you linked. He references the original ROF paper, so it's possible he's just using the original method (gradient descent), which is quite slow to converge. I suggest you give this code/paper a try: http://www.stanford.edu/~tagoldst/Tom_Goldstein/Split_Bregman.html as it's probably faster than the .m file you are using.
Also, as was mentioned in the comments, you will get better denoising (i.e. higher SNR) using nonlocal means. However, nonlocal means takes much longer to run, as it requires searching the entire image for similar patches and computing weights based on them.
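For intuition about what the original gradient-descent approach looks like, here is a minimal NumPy sketch of descent on the smoothed objective TV_eps(u) + lam/2*||u - f||^2. This is not Guy's code; the eps term smooths the l1 norm so plain gradient descent applies, and split Bregman will converge far faster:

```python
import numpy as np

def tv_denoise(f, lam=2.0, eps=0.1, tau=0.02, iters=300):
    """Gradient descent on TV_eps(u) + lam/2 * ||u - f||^2, where
    TV_eps uses sqrt(ux^2 + uy^2 + eps^2) to smooth the l1 term.
    Simple but slow; tau must be small relative to eps for stability."""
    u = f.astype(float).copy()
    for _ in range(iters):
        # forward differences, Neumann boundary (zero gradient at far edge)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        # divergence: negative adjoint of the forward difference operator
        div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
        u -= tau * (lam * (u - f) - div)
    return u
```

Larger lam keeps the result closer to the noisy input; smaller lam smooths more aggressively (and loses more contrast), which is the usual ROF trade-off.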
My requirements:
A user should be able to draw something by hand. Then, after he lifts his pen (or finger), an algorithm smooths the drawing and transforms it into some basic shapes.
To get started I want to transform a drawing into the rectangle that resembles the original as closely as possible. (Naturally this won't work if the user intentionally draws something else.) Right now I'm calculating average x and y positions, and I'm distinguishing between horizontal and vertical lines, but the result is not yet a rectangle, only a set of orthogonal lines.
I wondered if there is some well-known algorithm for that, because I saw it a few times at some touchscreen applications. Do you have some reading tip?
Update: Maybe a pattern recognition algorithm would help me. There are some phones which ask the user to draw a pattern to unlock the screen.
P.S.: I think this question is not related to a particular programming language, but if you're interested, I will build a web application with RaphaelGWT.
The Douglas-Peucker algorithm is used in geography (to simplify a GPS track, for instance); I guess it could be used here as well.
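A sketch of how Douglas-Peucker works: keep the two endpoints, find the point farthest from the chord between them, and recurse on both halves whenever that distance exceeds a tolerance; otherwise drop everything between the endpoints.

```python
import numpy as np

def douglas_peucker(points, tol):
    """Simplify a 2D polyline. Keeps endpoints; recursively keeps the
    point farthest from the chord if its distance exceeds `tol`."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts.tolist()
    a, b = pts[0], pts[-1]
    ab = b - a
    L = np.hypot(*ab)
    if L == 0:
        d = np.hypot(*(pts - a).T)  # degenerate chord: distance to endpoint
    else:
        # perpendicular distance of each point to the chord a-b
        d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])) / L
    i = int(np.argmax(d))
    if d[i] > tol:
        left = douglas_peucker(pts[:i + 1], tol)
        right = douglas_peucker(pts[i:], tol)
        return left[:-1] + right  # drop duplicated split point
    return [pts[0].tolist(), pts[-1].tolist()]
```

For the rectangle use case, you would simplify the stroke down to a few corner points first, then snap those corners to an axis-aligned (or rotated) rectangle.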
Based on your description I guess you're looking for a vectorization algorithm. Here are some pointers that might help you:
https://en.wikipedia.org/wiki/Image_tracing
http://outliner.codeplex.com/ - open source vectorizer of the edges in the raster pictures.
http://code.google.com/p/shapelogic/wiki/vectorization - describes different vectorization algorithm implementations
http://cardhouse.com/computer/vector.htm
There are a lot of resources on vectorization algorithms, so I'm sure you'll be able to find something that fits your needs. I don't know how complex these algorithms are to implement, though.
I hope my title makes sense, but here is what I am trying to do:
I have n rectangles, each with width Wn and height Hn, that I need to arrange on a two-dimensional (x, y) plane so that the bounding rectangle containing them all takes up the least area. I also need to know which (x, y) position corresponds to which rectangle.
I would prefer something in pseudo-code, but I can work with many languages.
Thanks for helping.
This is a difficult problem to solve optimally, but there are solutions that are not too hard to implement and that provide good approximations for many uses (e.g. for textures). Try googling for "rect(angle) packing"; you will find a lot of code that solves the problem.
Sounds NP-complete to me, kinda like the knapsack problem. That means there is no known efficient exact algorithm, only good approximations.
It's a variant of 2D bin packing. The fact that your container is flexible allows for more optimization, while also making the problem more challenging (as in difficult). Drools Planner (open source, Java) can handle this. At least one implementation with Drools Planner exists (see the user mailing list), though it is not open source. If you need an open source example to start from, the cloud balance example in drools-planner-examples is probably a good one.
For an overview of the algorithms you can use, see my answer on a similar question.
Your problem is known as the 2D packing problem. Even the 1D problem is NP-hard. See here for a good article on some approaches along with sample C# code.
Also, see the following questions:
How can I programmatically determine how to fit smaller boxes into a larger package?
Where can I find open source 2d bin packing algorithms?
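To make the "good approximation" idea concrete: if you fix a strip width (you can try several candidate widths and keep the bounding box with the least area), first-fit decreasing-height shelf packing is one of the simplest schemes. A sketch (the function name and shelf representation are my own; this is an approximation, not an optimal packer):

```python
def shelf_pack(rects, bin_width):
    """First-fit decreasing-height shelf packing into a strip of fixed width.
    rects: list of (w, h) pairs. Returns (placements, total_height), where
    placements[i] is the (x, y) of rect i's lower-left corner."""
    order = sorted(range(len(rects)), key=lambda i: rects[i][1], reverse=True)
    shelves = []            # each shelf: [y_base, shelf_height, x_cursor]
    y_top = 0
    placements = [None] * len(rects)
    for i in order:
        w, h = rects[i]
        if w > bin_width:
            raise ValueError("rectangle wider than the bin")
        for shelf in shelves:
            # first shelf with enough horizontal room and enough height
            if shelf[2] + w <= bin_width and h <= shelf[1]:
                placements[i] = (shelf[2], shelf[0])
                shelf[2] += w
                break
        else:
            # open a new shelf on top; tallest-first order keeps shelves tight
            shelves.append([y_top, h, w])
            placements[i] = (0, y_top)
            y_top += h
    return placements, y_top
```

Sorting by decreasing height first is what keeps the wasted space above each shelf small; the result directly answers the "which (x, y) belongs to which rectangle" part of the question.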