How to visualize the bag-of-words codebook (image classification)? [closed]

I want to use bag-of-words features for image classification. How can I visualize the codebook?
I detect keypoints, compute SIFT descriptors, and then run k-means to do the clustering.
See, e.g., figure 4 in http://fias.uni-frankfurt.de/~triesch/courses/260object/papers/Fei-Fei_CVPR2005.pdf

Each of the 174 codewords is visualized as an image patch. The paper mentions two types of representation: one is an 11x11 pixel patch and the other is a SIFT descriptor. Fig. 4 shows the result for the former representation after k-means clustering. A codebook built on SIFT descriptors cannot be visualized directly (it would be a weird 174x128 image). Of course, you can find the SIFT descriptor closest to each cluster center and visualize the patch around the corresponding keypoint. Hope it helps.
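Below is a rough Python sketch of that idea, assuming OpenCV (with SIFT) and scikit-learn are available; the image paths, patch size, and montage step are placeholders, not the paper's actual pipeline:

# Build a SIFT codebook with k-means and visualize each codeword by the
# 11x11 patch around the keypoint whose descriptor is closest to the center.
import cv2
import numpy as np
from sklearn.cluster import KMeans

image_paths = ["img1.jpg", "img2.jpg"]  # placeholder: your training images
sift = cv2.SIFT_create()

descriptors, patches = [], []
for path in image_paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kps, des = sift.detectAndCompute(img, None)
    if des is None:
        continue
    for kp, d in zip(kps, des):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        r = 5  # half-width of an 11x11 patch
        if r <= x < img.shape[1] - r and r <= y < img.shape[0] - r:
            descriptors.append(d)
            patches.append(img[y - r:y + r + 1, x - r:x + r + 1])

descriptors = np.array(descriptors)
# Needs at least as many descriptors as clusters (174 as in the paper).
kmeans = KMeans(n_clusters=174, n_init=10).fit(descriptors)

# For each codeword, keep the patch whose descriptor is closest to the center.
codebook_patches = []
for center in kmeans.cluster_centers_:
    nearest = np.argmin(np.linalg.norm(descriptors - center, axis=1))
    codebook_patches.append(patches[nearest])

montage = cv2.hconcat(codebook_patches[:20])  # first 20 codewords side by side
cv2.imwrite("codebook_preview.png", montage)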

Related

Is there an algorithm to optimize hanger placement? [closed]

I have a job where I need to place a particular object (a hanger) at a standard spacing.
The rules are:
Each object should be placed at a given standard distance from its neighbours.
There is a maximum distance between adjacent objects that must never be exceeded.
The same standard-distance and maximum-distance rules apply at the start and at the end.
There are also some regions where objects must not be placed.
I'm not even able to start, or to decide which algorithm to use.
If anyone has a suggestion for how I can achieve this, or a related source, please let me know.

How is automatic image optimization performed? [closed]

How is automatic image optimization performed? I'm looking for a general understanding, or pointers to where I can read more about this. For example, the second image below was produced from the first automatically, within a few seconds, by an image optimization service.
In this example the saturation, brightness, hue, and various other parameters were changed.
[Original and optimized example images were attached here.]
One possible method to produce a similar result is to convert to the LAB colorspace, stretch the histogram to the full dynamic range separately for the L channel and for the A and B channels (the same stretch for both), then convert back to the (s)RGB colorspace.
[Input and output images for this method were attached here.]
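A rough Python/OpenCV sketch of that idea (the filenames are placeholders, and "the same" is read here as applying one common stretch to both A and B so their balance is preserved):

import cv2
import numpy as np

img = cv2.imread("input.jpg")  # placeholder filename
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
L, A, B = cv2.split(lab)

# Stretch L to the full [0, 255] range on its own.
L = (L - L.min()) / max(float(L.max() - L.min()), 1e-6) * 255.0

# Stretch A and B with one common affine transform.
ab_min = float(min(A.min(), B.min()))
ab_max = float(max(A.max(), B.max()))
scale = 255.0 / max(ab_max - ab_min, 1e-6)
A = (A - ab_min) * scale
B = (B - ab_min) * scale

out = cv2.cvtColor(cv2.merge([L, A, B]).astype(np.uint8), cv2.COLOR_LAB2BGR)
cv2.imwrite("output.jpg", out)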

Find duplicate images algorithm [closed]

I want to create a program that finds duplicate images in a directory, something like this app does, and I wonder what algorithm I could use to determine whether two images are the same.
Any suggestion is welcome.
This task can be solved by perceptual hashing, depending on your use case, combined with a data structure for nearest-neighbor search in high dimensions (k-d tree, ball tree, ...) that can (somewhat) replace brute-force search.
There are tons of approaches for images: DCT-based, wavelet-based, statistics-based, feature-based, CNN-based (and more).
Their designs are usually based on different assumptions about the task, e.g. should rotations be allowed or not?
A Google Scholar search for perceptual image hashing will list a lot of papers. You can also look for the term image fingerprinting.
Here is some older, ugly Python/Cython code doing the statistics-based approach.
Remark: digiKam can do that for you too. It uses some older Haar-wavelet-based approach, I think.
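As a concrete starting point, here is a minimal Python sketch of one of the simplest perceptual hashes (an average hash), much cruder than the approaches listed above; it assumes OpenCV and NumPy, and the filenames and threshold are placeholders:

import cv2
import numpy as np

def average_hash(path, hash_size=8):
    # Downscale to hash_size x hash_size and threshold at the mean brightness.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    small = cv2.resize(img, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()  # 64-bit boolean fingerprint

def hamming_distance(h1, h2):
    return int(np.count_nonzero(h1 != h2))

# Two images are likely duplicates when their hashes differ in only a few bits;
# the threshold (here 5) depends on how tolerant you want to be.
if hamming_distance(average_hash("a.jpg"), average_hash("b.jpg")) <= 5:
    print("probable duplicates")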

how to efficiently work through algorithms on paper [closed]

I am currently reading a programming textbook, and as I encounter the different algorithms used in the book I find it necessary to understand how they work by working through them. Is there a standard and efficient way to work through simple algorithms on paper?
Write the algorithm down on paper, along with the graphs and variables it uses.
Then follow the algorithm step by step and note how the variables, graphs, etc. change.
Time slices. Make a table where the column headers are the variables involved and the row headers are step numbers. Fill in row zero with the initial values (if any); each subsequent row shows the result of applying the current step to the previous row.
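As a small made-up example, tracing a loop that finds the maximum of [3, 1, 4] could be laid out like this (i is the loop index, x the element examined):

step | i | x | max
  0  | - | - |  3    (max initialized with the first element)
  1  | 1 | 1 |  3    (1 < 3, max unchanged)
  2  | 2 | 4 |  4    (4 > 3, max updated)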

Getting the General Idea of Vector Classifiers [closed]

I have this "weird" data format. It goes like this:
[v1, v2, v3, v4, ..., vn] where n > 0
Each "v" is a 2x1 vector. Example:
v3 = [timestamp, event] (type [<string>, <string>])
My question is: what kind of classifier can I use, or is better suited, for data in this format? For example, would k-NN be better, or the perceptron algorithm?
I just want to get an idea of how to move on.
The choice of classification algorithm depends on your task. If you have labeled training data, you can use a Bayesian classifier, a neural network, a support vector machine, k-NN, or a perceptron. If you just want to discover the structure of your "weird data", try k-means first.
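A minimal Python sketch of the supervised route, assuming scikit-learn (the sample data and labels are placeholders, and each [timestamp, event] pair is treated as one sample; variable-length sequences would first have to be aggregated into fixed-length feature vectors):

from datetime import datetime
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import LabelEncoder

samples = [["2015-06-01 12:00:00", "login"],
           ["2015-06-01 12:05:30", "click"],
           ["2015-06-01 13:00:00", "logout"]]  # placeholder data
labels = ["normal", "normal", "suspicious"]    # placeholder labels

# Encode the event strings as integers and the timestamps as seconds.
event_encoder = LabelEncoder()
events = event_encoder.fit_transform([e for _, e in samples])
times = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S").timestamp() for t, _ in samples]

X = np.column_stack([times, events])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict([[times[0], events[0]]]))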
