I am finding it hard to understand the process of Linear Discriminant Analysis (LDA), and I was wondering if someone could explain it as a simple step-by-step process in English. I understand LDA is closely related to Principal Component Analysis (PCA), but I have no idea how it produces its probabilities with such great precision, or how the training data relates to the actual dataset. I have referred to a few documents and did not get much out of them; they only made it more confusing and complicated.
PCA (Principal Component Analysis) is unsupervised, i.e. it does not use class-label information. Therefore, discriminative information is not necessarily preserved.
Minimizes the projection error.
Maximizes the variance of projected points.
Example: Reducing the number of features of a face (Face detection).
LDA (Linear Discriminant Analysis): A PCA that takes class-labels into consideration, hence, it's supervised.
Maximizes distance between classes.
Minimizes distance within classes.
Example: Separating faces into male and female clusters (Face recognition).
With regard to the step-by-step process, you can easily find an implementation with a Google search.
Regarding the classification:
Project input x into PCA subspace U, and calculate its projection a
Project a into LDA subspace V
Find the class with the closest center
In simple words, project the input x and then check which cluster center it is closest to.
Image from K. Etemad, R. Chellapa, Discriminant analysis for recognition of human faces. J. Opt. Soc. Am. A, Vol. 14, No. 8, August 1997
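A minimal NumPy sketch of that classification step; it assumes the PCA basis U, the LDA basis V, the training mean, and the projected class centers have already been computed during training:

import numpy as np

# U: PCA basis (d x k), V: LDA basis (k x m), mean: training mean (d,)
# centers: class centers already projected into the LDA subspace, shape (n_classes, m)
def classify(x, U, V, mean, centers, labels):
    a = U.T @ (x - mean)                     # project input x into the PCA subspace
    b = V.T @ a                              # project the PCA coordinates into the LDA subspace
    d = np.linalg.norm(centers - b, axis=1)  # distance to every class center
    return labels[np.argmin(d)]              # class with the closest center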
I want to program software that calculates the best combination of materials to use, based on parameters such as tensile strength, elastic modulus, stiffness, and the results of certain tests on those materials. Each of these factors will be weighted differently in a weighted decision matrix (WDM). Is there an algorithm that would allow me to find the best combination without actually going through all the combinations and doing each individual calculation? I will be working with a lot of data, so efficiency is important.
I tried researching algorithms like Kruskal's and other things, but I'm not very familiar with them.
The first step is to write down an equation for the number that you want to optimize.
If you can do that and the equation has no squares or other nonlinear terms, then this is the classical linear programming problem: https://en.wikipedia.org/wiki/Linear_programming
Your equation needs to look something like this:
max O = n1 * p1 + n2 * p2 - n3 * p3 ...
If so, then your best bet is to choose a linear programming package (ask Google) with a good introductory tutorial and plug your problem into that. After a day or so on a steep learning curve, your problem will become almost trivial.
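For instance, a minimal sketch with SciPy's linprog; the objective coefficients and the single constraint below are made-up placeholders, not values from your problem:

from scipy.optimize import linprog

# Hypothetical example: maximize O = 5*n1 + 3*n2 - 2*n3.
# linprog minimizes, so the objective coefficients are negated.
c = [-5, -3, 2]

# Made-up constraint: n1 + n2 + n3 <= 100 (e.g. a limit on total material used)
A_ub = [[1, 1, 1]]
b_ub = [100]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, -res.fun)   # optimal n1, n2, n3 and the maximized objective value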
If you cannot do that, then you will need to use some sort of hill climbing algorithm - probably best to hire an expert to help with that.
I have the latitude and longitude of N societies and the order count of each society. I also have the latitude and longitude of a warehouse from which the trucks will be deployed and sent to these societies (like Amazon deliveries). A truck can deliver at most 350 orders (order count < 350), so there is no need to consider societies with an order count above 350 (we would generally send two trucks there, or a bigger truck). Now I need to determine a pattern in which the trucks should be deployed such that a minimum number of trips occur.
Assuming the distance 'X' between two societies or between a society and the warehouse, as computed by this script, is accurate, how do we solve this? I first thought we could solve it as a subset-sum problem, maybe? It looks like DP on graphs to me, a travelling salesman problem with an unlimited number of salesmen.
There are no restrictions on the number of trucks.
This is a typical Travelling Salesman Problem (TSP), which is known to be NP-hard. That means if you are looking for the optimal solution you have to test a combinatorial number of tours, and as you know, 350! is tremendous.
Nevertheless, as Henry suggests, you can look for a good solution which is not necessarily the best. Many algorithms, called "heuristics", let you find a good solution very efficiently. Just have a look here for some examples: https://en.wikipedia.org/wiki/Travelling_salesman_problem
The simplest heuristic may be a greedy solution: always take the closest unvisited society as the next stop.
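A minimal sketch of that greedy idea; dist is assumed to be your distance function from the script, points are assumed to be hashable (e.g. (lat, lon) tuples), and truck capacity is ignored here:

def greedy_route(start, points, dist):
    # Visit all points starting from `start`, always moving to the
    # closest unvisited point. `dist(a, b)` is your distance function.
    route, current = [], start
    unvisited = set(points)
    while unvisited:
        nxt = min(unvisited, key=lambda p: dist(current, p))
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return route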
I know that I can find a polynomial regression's coefficients by computing (X'X)^-1 * X'y (where X' is the transpose; see Wikipedia for details).
This is one way of finding the coefficients; there is (as far as I know) at least one other way, which is to minimize a cost function using gradient descent. The former method seems to be the easier to implement (I did it in C++; I have the latter in Matlab).
What I wanted to know is the advantage of one of these methods over the other.
On a particular dataset with very few points, I found that I couldn't get a satisfactory solution using (X'X)^-1 * X'y, but gradient descent worked fine and I could get an estimation function that made sense.
So what's wrong with the matrix solution compared to gradient descent? And how would one test a regression's results when all the details are hidden from the user?
Both methods are equivalent. The iterative method is much more computationally efficient thanks to lower storage requirements and the avoidance of an explicit matrix inverse. It outperforms the closed-form (matrix equation) method especially when X is huge and sparse.
Make sure the number of rows of X is larger than the number of columns of X to avoid an underdetermined problem. Also check the condition number of X'X to see whether the problem is ill-posed. If it is, you may add a small regularization term to the closed form, (X'X + lambda * I)^(-1) * X'y, where lambda is a small value and I is the identity matrix.
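For example, a small NumPy sketch of the regularized closed form; the polynomial degree and the lambda value are only illustrative:

import numpy as np

def fit_closed_form(X, y, lam=1e-8):
    # Solve (X'X + lam*I) w = X'y. Using solve() instead of an explicit
    # inverse is numerically safer, especially when X'X is ill-conditioned.
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

# Example: recover y = 1 + 2x + 3x^2 from a few noisy points
x = np.linspace(-1, 1, 20)
X = np.vander(x, 3, increasing=True)            # columns: 1, x, x^2
y = 1 + 2 * x + 3 * x**2 + 0.01 * np.random.randn(20)
print(fit_closed_form(X, y))                     # approximately [1, 2, 3]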
I am trying to understand basic chess algorithms. I have not read the literature in depth yet but after some cogitating here is my attempt:
1) Assign weight values to the pieces (e.g. a bishop is more valuable than a pawn)
2) Define heuristic function that attaches a value to a particular move
3) Build minimax tree to store all possible moves. Prune the tree via alpha/beta pruning.
4) Traverse the tree to find the best move for each player
Is this the core "big picture" idea of chess algorithms? Can someone point me to resources that go more in depth regarding chess algorithms?
Following is an overview of chess engine development.
1. Create a board representation.
In an object-oriented language, this will be an object that will represent a chess board in memory. The options at this stage are:
Bitboards
0x88
8x8
Bitboards are the recommended approach, for many reasons.
2. Create an evaluation function.
This simply takes a board and the side to evaluate as arguments and returns a score. The method signature will look something like:
int Evaluate(Board boardPosition, int sideToEvaluateFor);
This is where you use the weights assigned to each piece. This is also where you would use any heuristics if you so desire. A simple evaluation function would add weights of sideToEvaluateFor's pieces and subtract weights of the opposite side's pieces. Such an evaluation function is of course too naive for a real chess engine.
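For illustration only, a toy material-count evaluation in Python; the piece weights and the board encoding (a flat list of piece codes, uppercase for White, lowercase for Black) are assumptions of this sketch, not something a real engine would use as-is:

# Toy evaluation: material balance, positive if `side` is ahead.
WEIGHTS = {'p': 100, 'n': 320, 'b': 330, 'r': 500, 'q': 900, 'k': 0}

def evaluate(board, side):            # board: list of piece codes, side: 'w' or 'b'
    score = 0
    for piece in board:
        value = WEIGHTS[piece.lower()]
        score += value if piece.isupper() else -value
    return score if side == 'w' else -score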
3. Create a search function.
This will be, like you said, something along the lines of a MiniMax search with Alpha-Beta pruning. Some of the popular search algorithms are:
NegaMax
NegaScout
MTD(f)
The basic idea is to try all the different variations to a certain maximum depth and choose the move recommended by the variation which results in the highest score. The score for each variation is the score returned by the evaluation method for the board position at the maximum depth.
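A compact sketch of NegaMax with alpha-beta pruning in Python; the evaluate, generate_moves, make_move, undo_move, and opposite helpers are hypothetical and not shown:

def negamax(board, depth, alpha, beta, side):
    # Return the best score for `side`, searching `depth` plies ahead.
    if depth == 0:
        return evaluate(board, side)
    best = -float('inf')
    for move in generate_moves(board, side):
        make_move(board, move)
        score = -negamax(board, depth - 1, -beta, -alpha, opposite(side))
        undo_move(board, move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:             # beta cut-off: the opponent will avoid this line
            break
    return best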
For an example of a chess engine in C#, have a look at https://github.com/bytefire/shutranj, which I put together recently. A better open-source engine to look at is Stockfish (https://github.com/mcostalba/Stockfish), which is written in C++.
I'm not sure if this is a stupid question, but I couldn't really find anything on Google. Given a few data points for a function f(x), would it be possible to brute-force what the function f(x) itself might be?
This will rely on some prior knowledge of f(x).
If you know that the function is constant, one point is enough; a line, then two points, etc. for polynomial functions.
But if you have no restrictions, this isn't possible. Assuming function here means something like a real-valued function on the real numbers, there are (uncountably) infinitely many functions which will take the specified values on any finite set of data points.
This is mostly a math question. It depends on the number of data points that are available. You are basically fitting data to a function. You need two data points for a straight line, etc. A commercial solution is TableCurve 2D, http://en.wikipedia.org/wiki/TableCurve_2D. I would search for nonlinear fitting on Google.
Fitting algorithms are also described in Numerical Recipes (http://en.wikipedia.org/wiki/Numerical_Recipes). The simplest algorithm would look at the deviations between the assumed function and the data points. If you assume a certain error on your data points, you can calculate chi-square as a measure of the goodness of your fit.
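For instance, a small NumPy sketch, assuming you guess that f is a quadratic and that each data point carries a known measurement error (the numbers here are made up):

import numpy as np

# A few data points believed to come from some unknown f(x)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])
sigma = 0.3                                    # assumed measurement error per point

coeffs = np.polyfit(x, y, deg=2)               # guess: f is a quadratic
residuals = y - np.polyval(coeffs, x)
chi_square = np.sum((residuals / sigma) ** 2)  # small value => the guess fits the data well
print(coeffs, chi_square)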