Searching through a list [closed] - algorithm

I'm reading about AI, and in the notes it is mentioned:
A lookup table in chess would have roughly 35^100 entries.
But what does this mean? Is there any way we could find out how long it would take the computer to search through the table and find its entry? Would we assume there is some order, or that there is no order?

The number of atoms in the known universe is estimated to be around 10^80, which is much less than 35^100. With current technology, at least a few thousand atoms are required to store a single bit, and each entry of your table would presumably take multiple bits. You would need some truly advanced technology to implement the memory of such a computer.
So the answer is: with current technology it is not a matter of time, it is simply impossible.
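For scale, here is a quick back-of-the-envelope check in plain Python (the atom estimate is the one quoted above):

    import math

    # How does 35^100 compare to the ~10^80 atoms in the known universe?
    log_entries = 100 * math.log10(35)                 # log10 of the table size
    print(f"35^100 is about 10^{log_entries:.1f}")     # -> 10^154.4
    print(f"ratio to the atom count: about 10^{log_entries - 80:.0f}")  # -> 10^74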

Related

Is there an algorithm to optimize hanger placement? [closed]

I have a job where I need to place a particular object (a hanger) at a standard distance.
The rules are:
Each object should be placed at a given standard distance from its neighbours.
There is a maximum distance between adjacent objects which must never be violated.
Similar standard and maximum distance rules apply at the start and end of the span.
There are also some given portions where object placement must be avoided.
I'm not even able to start, and I don't know which algorithm to use.
If anyone has any suggestions on how I can achieve this, or some related sources, please let me know.
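One possible starting point is a greedy sweep along the span: always try the standard spacing first, and when that position lands in a forbidden portion, jump just past it, or fall back to just before it if jumping past would exceed the maximum. A minimal sketch, assuming a 1-D span with forbidden portions given as (start, end) pairs; every name here is illustrative:

    def place_hangers(length, standard, maximum, forbidden):
        """Greedy 1-D placement: prefer `standard` spacing, never exceed
        `maximum`, avoid the open intervals in `forbidden`. Returns the
        positions, or None if a forbidden portion is too wide to bridge."""
        def blocked(x):
            return any(a < x < b for a, b in forbidden)

        positions, prev = [], 0.0   # treat the start of the span as a virtual hanger
        while length - prev > maximum:   # end rule: the last gap must be <= maximum
            pos = prev + standard
            if blocked(pos):
                a, b = next((a, b) for a, b in forbidden if a < pos < b)
                if b - prev <= maximum and b <= length:
                    pos = b          # jump past the forbidden portion
                elif a > prev:
                    pos = a          # fall back to just before it
                else:
                    return None      # portion too wide to bridge
            positions.append(pos)
            prev = pos
        return positions

    print(place_hangers(100, 10, 12, forbidden=[(28, 33)]))
    # -> [10, 20, 28, 38, 48, 58, 68, 78, 88]

A greedy sweep is not guaranteed to be optimal; if the placement needs to be globally best by some cost, the same recurrence can be solved with dynamic programming over candidate positions.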

I am at a crossroads in my program and was wondering which path would be more efficient time-wise [closed]

I currently see two ways to code the next step of my program (there are probably more), but the two routes I have are as follows.
1. I take the factors of the lowest number and loop through the other numbers to see if they share those common factors.
2. I find the factors of the lowest number and add them to a list. I then find the factors of the other numbers that do not exceed the lowest and add them to the same list. I then run through the list to check which is the highest number that appears x times.
I am leaning towards 1, but I'm not sure.
Sorry if this is too ambiguous; thanks.
Well, given the ambiguity, as stated: the first requires fewer steps and avoids allocating a data structure.
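If the end goal is the greatest common factor of the list (which the description suggests), a minimal sketch of the first route, with illustrative names:

    def common_factor(numbers):
        """Route 1: test the factors of the smallest number, largest
        first, against all the other numbers."""
        smallest = min(numbers)
        for candidate in range(smallest, 0, -1):
            if smallest % candidate == 0 and all(n % candidate == 0 for n in numbers):
                return candidate

    print(common_factor([12, 18, 30]))  # -> 6

In practice, functools.reduce(math.gcd, numbers) computes the same result via Euclid's algorithm, with no factoring and no list allocation at all.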

Algorithm for finding similar words [closed]

In order to support users learning English, I want to make a multiple-choice quiz using the vocabulary that the user is studying.
For example, if the user is learning "angel", then I need an algorithm to produce some similar words such as "angle" and "angled".
As another example, if the user is learning "accountant", then I need an algorithm to produce some similar words such as "accounttant", "acountant", and "acounttant".
You could compute the Levenshtein distance from the starting word to each word in your vocabulary and pick the two or three closest ones.
Depending on how many words are in your dictionary, this might take a long time, so I would recommend bailing out after a certain (small) number of steps; i.e. if you have made three edits and still haven't arrived at the target word, stop and move on to the next one.
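A minimal sketch of that idea; the cutoff of 3 is the suggestion above, and `vocabulary` is an assumed word list:

    def levenshtein(a, b, cutoff=3):
        """Edit distance between a and b, giving up early once every cell
        in the current row of the DP table exceeds `cutoff`."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            if min(curr) > cutoff:
                return cutoff + 1  # "too far apart"; the exact value no longer matters
            prev = curr
        return prev[-1]

    # Pick the two or three closest words (index 0 is the word itself):
    # choices = sorted(vocabulary, key=lambda w: levenshtein("angel", w))[1:4]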

Find plagiarism in bulk articles [closed]

I have a collection of 20,000 master articles, and I will receive about 400,000 articles of one or two pages every day. I am trying to determine whether each of these 400k articles is a copy or a modified version of one of the master articles (a threshold of above 60% plagiarism is fine with me).
What algorithms and technologies should I use to tackle this problem in an efficient and timely manner?
Thanks
Fingerprint the articles (i.e. intelligently hash them based on word frequency) and then look for statistical connections between the fingerprints. If some pairs look suspicious, do a brute-force search for matching strings on just those.
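A minimal sketch of the fingerprinting step, using k-word shingles and Jaccard similarity; the shingle size and hash are illustrative choices, and at 400k articles a day you would layer something like MinHash/LSH on top rather than compare all pairs:

    import hashlib

    def fingerprint(text, k=5):
        """The set of hashes of every k-word shingle in the text."""
        words = text.lower().split()
        return {hashlib.md5(" ".join(words[i:i + k]).encode()).hexdigest()
                for i in range(len(words) - k + 1)}

    def similarity(fp_a, fp_b):
        """Jaccard similarity of two fingerprints, from 0.0 to 1.0."""
        union = fp_a | fp_b
        return len(fp_a & fp_b) / len(union) if union else 0.0

    # Flag candidates over the 60% threshold, then verify only those
    # with a brute-force string comparison:
    # if similarity(fingerprint(new_article), fingerprint(master)) > 0.6: ...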

How to calculate the relevance of two words or phrases? [closed]

I need an algorithm to calculate and measure the relevance of two words or phrases, e.g. "Apple" and "iPad".
Can anybody give me some hints or related books on such topics?
Thanks.
Have a look at mutual information and tf-idf. These are methods that are frequently used in information retrieval. The former quantifies the mutual dependence of two variables (each variable can be a phrase). The latter was traditionally used by search engines to prioritize results that were relevant to a particular query.
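For instance, a minimal sketch of pointwise mutual information estimated from document co-occurrence counts (all the counts below are made-up illustrative numbers, assumed positive):

    import math

    def pmi(count_both, count_x, count_y, total_docs):
        """Pointwise mutual information of two terms, from the fraction
        of documents containing each term and both together."""
        p_xy = count_both / total_docs
        p_x = count_x / total_docs
        p_y = count_y / total_docs
        return math.log2(p_xy / (p_x * p_y))

    # "Apple" in 10,000 docs, "iPad" in 2,000, both in 1,500 of 1,000,000:
    print(pmi(1_500, 10_000, 2_000, 1_000_000))  # ~6.2, strongly associated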
