[Closed 8 years ago as needing details or clarity.]
I have data in this "weird" format. It goes like this:

[v1, v2, v3, v4, ..., vn] where n > 0

Each "v" is a 2×1 vector. Example:

v3 = [timestamp, event] (type [<string>, <string>])

OK, now my question is: what kind of classifier can I use, or is better suited, for data in this format? For example, would k-NN work better than the perceptron algorithm? I just want an idea of how to move forward.
The choice of classification algorithm depends on your task. If you have labeled training data, you can use a Bayesian classifier, a neural network, a support vector machine, k-NN, or the perceptron. If you just want to discover the structure of your "weird data" without labels, try k-means first.
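Whichever algorithm you pick, the sequences of [timestamp, event] string pairs first need to become fixed-length numeric vectors. A minimal sketch, assuming hypothetical data and a simple hour-of-day plus event-histogram featurization (the feature choice is my own illustration, not the only option):

```python
from datetime import datetime
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical sample: each record is a sequence of [timestamp, event] pairs.
records = [
    [["2015-01-01 09:00", "login"], ["2015-01-01 09:05", "click"]],
    [["2015-01-01 20:00", "login"], ["2015-01-01 20:30", "logout"]],
]

# Map each distinct event string to a histogram index.
events = sorted({e for rec in records for _, e in rec})
event_idx = {e: i for i, e in enumerate(events)}

def featurize(rec):
    # Fixed-length features: hour of the first event + event-count histogram.
    hist = np.zeros(len(events))
    for _, e in rec:
        hist[event_idx[e]] += 1
    first_hour = datetime.strptime(rec[0][0], "%Y-%m-%d %H:%M").hour
    return np.concatenate(([first_hour], hist))

X = np.vstack([featurize(r) for r in records])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

Once the data is in this vector form, the same `X` can feed k-NN, a perceptron, or any of the other classifiers mentioned above.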
[Closed 2 months ago as needing more focus.]
I have a task where I need to place a particular object (a hanger) at a standard distance along a span. The rules are:

- Each object should be placed at a given standard distance from the adjacent one, where possible.
- There is a maximum distance between adjacent objects that must never be exceeded.
- The same standard and maximum distance rules apply from the start and the end of the span.
- There are some given portions where object placement must be avoided.

I'm not even able to start, or to decide which algorithm to use. If anyone has a suggestion on how to achieve this, or a related source, please let me know.
[Closed 5 years ago as needing details or clarity.]
Can any algorithm that performs automatic learning be called a "machine learning algorithm"? Or is this designation reserved for the well-known ML algorithms such as SVM, feature selection, and so on?
Any algorithm that learns a task on its own and gets better at it is considered machine learning, even if it is as simple as computing a joint probability. The only condition is automated learning, that's all.
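As a toy illustration of that point (my own example, not from the question): even a frequency counter that estimates a joint probability qualifies, since its estimate improves automatically as more observations arrive:

```python
from collections import Counter

# Estimating P(weather, activity) from observed pairs. The estimate "learns":
# it gets better purely by being fed more data, with no manual tuning.
counts, total = Counter(), 0

def observe(pair):
    global total
    counts[pair] += 1
    total += 1

def joint_prob(pair):
    return counts[pair] / total if total else 0.0

for obs in [("sunny", "walk"), ("sunny", "walk"), ("rainy", "read")]:
    observe(obs)

print(joint_prob(("sunny", "walk")))
```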
[Closed 7 years ago as needing details or clarity.]
I want to enhance an image using PSO-based gray-level image enhancement. I have the algorithm (pso paper), but I don't understand how to get the particles for my image.
You only need to carefully read section B, "Proposed methodology". It says something like this:

> Now our aim is to find the best set of values for these four parameters which can produce the optimal result and to perform this work PSO is used. P number of particles are initialized, each with four parameters a, b, c, and k by the random values within their range and corresponding random velocities.
So there you have your particle generation: each particle is a set of four random values (a, b, c, and k) within their ranges, together with a random velocity for each.
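A minimal sketch of that initialization step, where the parameter ranges and the particle count are illustrative assumptions (the real ranges come from the paper):

```python
import numpy as np

# Initialize P particles for PSO, each with four parameters (a, b, c, k)
# drawn uniformly within assumed ranges, plus small random velocities.
rng = np.random.default_rng(0)
P = 30  # number of particles (assumed)
ranges = {"a": (0.0, 1.5), "b": (0.0, 0.5), "c": (0.0, 1.0), "k": (0.5, 1.5)}

lo = np.array([r[0] for r in ranges.values()])
hi = np.array([r[1] for r in ranges.values()])

positions = lo + (hi - lo) * rng.random((P, 4))      # one row = one particle
velocities = (hi - lo) * (rng.random((P, 4)) - 0.5)  # random initial velocities

print(positions.shape, velocities.shape)
```

Each row of `positions` is then scored by the paper's fitness function on the enhanced image, and the standard PSO velocity/position updates move the swarm toward the best-scoring parameter set.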
[Closed 7 years ago as opinion-based.]
I am currently reading a programming textbook, and as I encounter the algorithms it uses, I find it necessary to understand how they work by working through them. Is there a standard, efficient way to work through simple algorithms on paper?
Write the algorithm down on paper, together with the variables and any graphs or structures it uses. Then follow the algorithm step by step and note how the variables, graphs, etc. change.
Time slices. Make a table where the column headers are the variables involved and the row headers are step numbers. Fill row zero with the initial values, if any; each subsequent row shows the result of applying the current step to the previous row.
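The time-slice method can be illustrated with a small program that prints such a table for Euclid's gcd algorithm (my own example):

```python
# Print a trace table for Euclid's gcd: one column per variable,
# one row per step, starting from row zero with the initial values.
def gcd_trace(a, b):
    print(f"{'step':>4} {'a':>4} {'b':>4}")
    step = 0
    print(f"{step:>4} {a:>4} {b:>4}")   # row zero: initial values
    while b:
        a, b = b, a % b                 # one algorithm step
        step += 1
        print(f"{step:>4} {a:>4} {b:>4}")
    return a

gcd_trace(48, 18)
```

On paper you would fill in exactly the rows this program prints, one per step, until the algorithm terminates.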
[Closed 3 years ago as needing more focus.]
I want to use bag-of-words features for image classification. How can I visualize the codebook? I extract SIFT descriptors at keypoints and then run k-means to do the clustering.
e.g., http://fias.uni-frankfurt.de/~triesch/courses/260object/papers/Fei-Fei_CVPR2005.pdf (figure 4)
The 174-word codebook is visualized as patches. The paper mentions two types of representation: 11×11 pixel patches and SIFT descriptors. Figure 4 shows a result based on the former representation after k-means clustering. A codebook based on SIFT descriptors cannot be visualized directly (it would be a weird 174×128 image). Of course, you can find the closest SIFT descriptor to each query and visualize a patch around the corresponding keypoint. Hope it helps.
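A minimal sketch of the patch-based approach, using random arrays in place of real 11×11 image patches and a small codebook (the paper uses 174 words):

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster flattened 11x11 patches; each cluster center, reshaped back to
# 11x11, is one "visual word" of the codebook. In practice the patches
# would be sampled around detected keypoints instead of random data.
rng = np.random.default_rng(0)
patches = rng.random((500, 11 * 11))  # 500 flattened 11x11 patches

k = 16                                # codebook size (174 in the paper)
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(patches)

words = codebook.cluster_centers_.reshape(k, 11, 11)
print(words.shape)
```

The `words` array can then be plotted as a grid of small images (e.g. with matplotlib's `imshow`), which is essentially what figure 4 of the paper shows.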