How to select the number of cluster centroids in k-means [closed] - algorithm

I am going through a list of algorithms that I found and trying to implement them for learning purposes. Right now I am coding k-means and am confused about the following.
How do you know how many clusters there are in the original data set?
Is there any particular format I have to follow in choosing the initial cluster centroids, besides all centroids having to be different? For example, does the algorithm converge if I choose cluster centroids that are different but close together?
Any advice would be appreciated
Thanks

With k-means you are minimizing a sum of squared distances. One approach is to try all plausible values of k. As k increases the sum of squared distances should decrease, but if you plot the result you may see that the sum of squared distances decreases quite sharply up to some value of k, and then much more slowly after that. The last value that gave you a sharp decrease is then the most plausible value of k.
k-means isn't guaranteed to find the best possible answer on each run, and it is sensitive to the starting values you give it. One way to reduce problems from this is to start it many times, with different starting values, and pick the best answer. It looks a bit odd if the sum of squared distances for a larger k is actually larger than for a smaller k. One way to avoid this is to use the best answer found for k clusters as the basis (with slight modifications) for one of the starting points for k+1 clusters.
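A minimal sketch of this "elbow" approach, assuming scikit-learn and matplotlib are available; the data X is a placeholder, and n_init restarts k-means several times per k, which also addresses the sensitivity to starting values mentioned above:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    X = np.random.rand(300, 2)                 # placeholder data set
    ks = range(1, 11)
    sse = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]
    plt.plot(ks, sse, marker="o")              # inertia_ = sum of squared distances to nearest centroid
    plt.xlabel("k")
    plt.ylabel("sum of squared distances")
    plt.show()                                 # look for the last k with a sharp drop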

In standard k-means the value of K is chosen by you, sometimes based on the problem itself (when you know how many classes exist, or how many clusters you want), other times a more or less arbitrary value. Typically the first iteration consists of randomly selecting K points from the dataset to serve as centroids; in the following iterations the centroids are adjusted.
After checking the k-means algorithm, I suggest you also look at k-means++, which is an improvement on the original: it chooses the initial centroids more carefully, which helps avoid the poor clusterings the standard k-means algorithm sometimes produces with purely random initialization.
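For reference, a minimal NumPy sketch of the k-means++ seeding step (each new centroid is picked with probability proportional to its squared distance from the nearest centroid chosen so far); the function and variable names here are my own:

    import numpy as np

    def kmeanspp_init(X, k, seed=0):
        rng = np.random.default_rng(seed)
        centroids = [X[rng.integers(len(X))]]                 # first centroid: uniform at random
        for _ in range(k - 1):
            # squared distance of every point to its nearest already-chosen centroid
            d2 = ((X[:, None, :] - np.array(centroids)[None, :, :]) ** 2).sum(-1).min(axis=1)
            centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
        return np.array(centroids)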
If you need more specific details on implementation of some machine learning algorithm, please let me know.

Related

Minimum number of Trips [closed]

I have latitude and longitude points for N societies, along with the order count for each society. I also have the latitude and longitude of a warehouse from which trucks will be deployed and sent to these societies (like Amazon deliveries). A truck can deliver a maximum of 350 orders (every order count is < 350), so there is no need to consider societies with an order count above 350 (we would generally send two trucks, or a bigger truck, there). Now I need to determine a pattern for deploying the trucks such that a minimum number of trips occurs.
Assuming that the distance 'X' between two societies (or between a society and the warehouse), as computed by this script, is accurate, how do we solve this? I first thought we could solve it as a sum-of-subsets problem, maybe? It seems like DP on graphs to me, a travelling salesman problem with an unlimited number of salesmen.
There are no restrictions on the number of trucks.
This is essentially a Travelling Salesman Problem (TSP), which is known to be NP-hard. That means that if you are looking for the optimal solution, you essentially have to examine an enormous number of combinations, and as you know, 350! is tremendous.
Nevertheless, as Henry suggests, you can look for a good solution which is not necessarily the best. Many algorithms called "heuristics" let you find a good solution in a very efficient way. Have a look here for some examples: https://en.wikipedia.org/wiki/Travelling_salesman_problem.
The simplest heuristic may be a greedy approach: always take the closest unvisited society as the next stop.
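A minimal sketch of that greedy idea, assuming a dist(a, b) function (e.g. backed by the script mentioned in the question) and societies given as (location, order_count) pairs; each truck keeps driving to the closest unvisited society that still fits under the 350-order capacity:

    CAPACITY = 350

    def plan_trips(warehouse, societies, dist):
        # societies: list of (location, order_count); dist(a, b) is assumed to be given
        remaining = list(societies)
        trips = []
        while remaining:
            pos, load, route = warehouse, 0, []
            while True:
                # closest unvisited society that still fits in this truck
                fits = [s for s in remaining if load + s[1] <= CAPACITY]
                if not fits:
                    break
                nxt = min(fits, key=lambda s: dist(pos, s[0]))
                route.append(nxt)
                load, pos = load + nxt[1], nxt[0]
                remaining.remove(nxt)
            trips.append(route)
        return trips

This will not generally be optimal (greedy choices can strand far-away societies in trips of their own), but it gives a baseline to compare better heuristics against.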

Fast hill climbing algorithm that can stabilize when near optimal [closed]

I have a floating point number x in [1, 500] that generates a binary outcome y = 1 with some probability p. I'm trying to find the x that generates the most 1s, i.e. has the highest p. I'm assuming there's only one maximum.
Is there an algorithm that can converge quickly to the x with the highest p while making sure it doesn't jump around too much after it has been reached, e.g. once it's within 0.1% of the optimal x? Specifically, it would be great if it stabilized once it is within 0.1% of the optimal x.
I know we can do this with simulated annealing, but I don't think I should hard-code the temperature schedule, because I need to use the same algorithm when x could be from [1, 3000] or the p distribution is different.
This paper describes a smart hill-climbing algorithm. The basic idea is that you take n samples as starting points. The algorithm is as follows (simplified to one dimension for your problem):
1. Take n sample points in the search space. The paper uses Latin Hypercube Sampling, since the data there is assumed to have many dimensions; in your one-dimensional case you can just use ordinary random sampling.
2. For each sample point, gather points from its local neighborhood and fit a quadratic curve to them. Take the maximum of the quadratic curve as a new candidate. If the objective value of the new candidate is actually higher than the previous one, move the sample point to the candidate. Repeat this step with a smaller local neighborhood each iteration.
3. Use the best of the sample points.
4. Restart: repeat steps 2 and 3 and compare the maxima. If there is no improvement, stop; if there is, repeat again.
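A rough one-dimensional sketch of these steps in Python, assuming NumPy. estimate_p is a stand-in for however you actually measure the success probability at a given x (here a made-up bell-shaped p, averaged over repeated binary trials), and the sample count, neighborhood width and shrink factor are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    LOW, HIGH = 1.0, 500.0

    def estimate_p(x, trials=200):
        # stand-in objective: the true p(x) is unknown; this curve is hypothetical, for illustration only
        true_p = np.exp(-((x - 237.0) / 60.0) ** 2)
        return rng.binomial(trials, true_p) / trials   # average of binary outcomes

    def smart_hill_climb(n_samples=5, rounds=6, width=60.0, shrink=0.5):
        samples = rng.uniform(LOW, HIGH, n_samples)    # step 1: initial sample points
        best_x, best_p = None, -1.0
        for x in samples:
            w, fx = width, estimate_p(x)
            for _ in range(rounds):                    # step 2: repeated local quadratic fits
                xs = np.clip(rng.uniform(x - w, x + w, 9), LOW, HIGH)
                ys = np.array([estimate_p(v) for v in xs])
                a, b, c = np.polyfit(xs, ys, 2)
                cand = np.clip(-b / (2 * a), LOW, HIGH) if a < 0 else xs[ys.argmax()]
                fc = estimate_p(cand)
                if fc > fx:                            # move only on a real improvement
                    x, fx = cand, fc
                w *= shrink                            # shrink the neighborhood
            if fx > best_p:                            # step 3: keep the best sample point
                best_x, best_p = x, fx
        return best_x, best_p

Step 4 (restarting) is then just calling smart_hill_climb again and keeping the better of the two results. Because the objective is noisy, averaging more trials per estimate is what ultimately keeps the answer from jumping around near the optimum.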

Closest point to another point on a hypersphere [closed]

I have n (about 10^5) points on a hypersphere of dimension m (between 10^4 to 10^6).
I am going to make a bunch of queries of the form "given a point p, find the closest of the n points to p". I'll make about n of these queries.
(Not sure if the hypersphere fact helps at all.)
The simple naive algorithm to solve this is, for each query, to compare p to all other n points. Doing this n times ends up with a runtime of O(n^2 m), which is far too big for me to be able to compute.
Is there a more efficient algorithm I can use? If I could get it to O(nm) with some log factors that'd be great.
Probably not. Having many dimensions makes efficient indexing extremely hard. That is why people look for opportunities to reduce the number of dimensions to something manageable.
See https://en.wikipedia.org/wiki/Curse_of_dimensionality and https://en.wikipedia.org/wiki/Dimensionality_reduction for more.
Divide your space up into hypercubes -- call these cells -- with edge size chosen so that on average you'll have one point per cube. You'll want a map from hypercells to the set of points they contain.
Then, given a point, check its hypercell for other points. If it is empty, look at the adjacent hypercells (I'd recommend a literal hypercube of hypercells for simplicity rather than some approximation to a hypersphere built out of hypercells). Check that for other points. Keep repeating until you get a point. Assuming your points are randomly distributed, odds are high that you'll find a second point within 1-2 expansions.
Once you find a point, check all hypercells that could possibly contain a closer point. This is necessary because the point you found may be near a corner of its cell, so a closer point could lie just outside the block of hypercells you've inspected so far.
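A minimal sketch of that cell map in Python, illustrated in a low number of dimensions; the edge size is assumed to be chosen so that cells hold about one point each. (In 10^4+ dimensions the number of neighboring cells explodes, which is exactly the curse-of-dimensionality caveat from the first answer.)

    import math
    from collections import defaultdict
    from itertools import product

    def build_grid(points, edge):
        grid = defaultdict(list)                                  # cell index -> points in that cell
        for p in points:
            grid[tuple(int(math.floor(c / edge)) for c in p)].append(p)
        return grid

    def nearest(query, grid, edge, dim):
        home = tuple(int(math.floor(c / edge)) for c in query)
        best, best_d, r = None, float("inf"), 0
        while True:
            # scan every cell within Chebyshev cell-distance r of the query's cell
            # (earlier cells get re-scanned; fine for a sketch)
            for offset in product(range(-r, r + 1), repeat=dim):
                cell = tuple(h + o for h, o in zip(home, offset))
                for p in grid.get(cell, []):
                    d = math.dist(p, query)
                    if d < best_d:
                        best, best_d = p, d
            # any point outside this block of cells is at least r * edge away, so we can stop
            if best is not None and best_d <= r * edge:
                return best
            r += 1

    pts = [(0.1, 0.2), (3.4, 1.1), (2.0, 2.0)]
    print(nearest((2.1, 1.9), build_grid(pts, edge=1.0), edge=1.0, dim=2))   # -> (2.0, 2.0)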

Powers of a half that sum to one [closed]

Call every subunitary ratio with its denominator a power of 2 a perplex.
Number 1 can be written in many ways as a sum of perplexes.
Call every sum of perplexes a zeta.
Two zetas are distinct if and only if one of the zetas has at least one perplex that the other does not have. In the image shown above, the last two zetas are considered to be the same.
Find the number of ways 1 can be written as a zeta with N perplexes. Because this number can be big, calculate it modulo 100003.
Please don't post the code, but rather the algorithm. Be as precise as you can.
This problem was given at a contest and the official solution, written in the Romanian language, has been uploaded at https://www.dropbox.com/s/ulvp9of5b3bfgm0/1112_descr_P2_fractii2.docx?dl=0 , as a docx file. (you can use google translate)
I do not understand what the author of the solution meant to say there.
Well, this reminds me of BFS (breadth-first search) algorithms, where you radiate out from a single point to find multiple solutions with different permutations.
Here you can use recursion, and set the base case to be when N perplexes have been reached in that call of the recursive function.
So you can say:
    def expand(splits_left, terms, picked, zetas):
        if splits_left == 0:
            zetas.add(tuple(sorted(terms)))      # hashtable of multisets deduplicates equal zetas
            return
        clone = list(terms)                      # clone the current list of terms
        clone.remove(picked)
        clone += [picked / 2, picked / 2]        # replace picked with two halves
        for x in set(clone):                     # recurse on every distinct value in the clone
            expand(splits_left - 1, clone, x, zetas)
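A hypothetical way to invoke it (splitting 1 a total of N - 1 times produces N perplexes, and the modulus comes from the problem statement):

    zetas = set()
    expand(N - 1, [1.0], 1.0, zetas)             # N >= 2 is the desired number of perplexes
    print(len(zetas) % 100003)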
This is a rough, very inefficient but short way to do it. There are obviously a lot of ways you can prune the algorithm (reduce the number of duplicates going into the hashtable, avoid cloning as much as possible, etc.), but because it enumerates every way of splitting and then enters the resulting multiset of values into a hashtable, the hashtable's equality comparison will see that a sequence already exists (such as your last two zetas) and reject the duplicate. That way, you'll be left with the answer you want.
The efficiency of the current algorithm is roughly O(|E|^N), where |E| is the number of distinct values that can appear in the list by the end of all insertions, and N is the number of insertions (or, as you said, the number of perplexes). Obviously this isn't optimal, but it does work.
Hope this helps!

How few questions can I ask to guess a number? [closed]

Suppose I ask a person to select a number between 1 and 1200 in his mind, and I can only ask questions to which he will reply YES or NO. How many questions will I need to ask before I arrive at the number he selected?
I am looking for the least possible number of questions. Any proven solution would be appreciated.
To determine which of the numbers between 1 and n was chosen, you will need to ask at least log2(n) questions. There is no possible way to do better.
The intuition for this answer is as follows. Suppose that you ask a total of k questions. The maximum number of different possible answer sequences you can receive to those questions, even if they depend on one another, is 2^k. Since there are n possible numbers that could be picked, you need to choose k such that
2^k ≥ n
which happens precisely when
k ≥ log2(n)
In other words, you have to ask at least log2(n) questions to even have enough different possible outcomes to associate each possible number with some outcome. Since the number of questions must always be a natural number, the minimum number of questions you can ask must be at least ⌈log2(n)⌉.
This is purely a lower bound on the answer. At this point, we can't rule out the possibility that you need far more questions than this. However, the fact that we know about the binary search algorithm means that you never need more than ⌈log2(n)⌉ questions to get the answer, since this is the number of questions you'd ask if you were doing a binary search. This means binary search has to be optimal, since there is no possible way of asking fewer questions.
Hope this helps!
The log base 2 of 1200, rounded up to an integer: that's 11. Basically, every question cuts the possible range in half, so you just continue with a binary search until the possible range has length 1.
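A small sketch of that halving strategy; ask is a stand-in for however you actually get the YES/NO answer:

    def guess(ask, low=1, high=1200):
        # ask("Is your number <= 600?") must return True or False
        questions = 0
        while low < high:
            mid = (low + high) // 2
            questions += 1
            if ask("Is your number <= %d?" % mid):
                high = mid
            else:
                low = mid + 1
        return low, questions                  # at most ceil(log2(1200)) = 11 questions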
Ask for all the bits in the number. 11 questions are enough for that.
Edit: I would argue that it's impossible to do better, due to the bijection between the binary and decimal representations, at least for the worst case.
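The bit-by-bit variant from this answer looks like this (ask is again a stand-in; every number from 1 to 1200 fits in 11 bits):

    def guess_by_bits(ask, bits=11):
        n = 0
        for i in range(bits):
            if ask("Is bit %d of your number set?" % i):
                n |= 1 << i
        return n                               # always exactly 11 questions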
This is also a classic example of the adversary argument method, which is used to establish lower bounds on complexity. In our case, the person who knows the number is the adversary, so he will wisely adjust his answer each time you ask a new question. How does he determine his answer at each step? Suppose, for example, the number is between 1 and 100.
You ask: is n >= 50? He may say YES or NO; both are equally good for him since the two intervals are equal. Assume he says yes.
Then you ask about a number with 50 <= n <= 100, say: is n >= 80? He should say NO, even if the number he picked is larger than 80, because 50 <= n < 80 is the larger interval. Now the number may be between 50 and 80.
Keeping this up, he guarantees the maximum number of questions, which is log n, since the interval size decreases as in binary search.
