Let's suppose that androids which are physically indistinguishable from humans are a reality.
What would be an algorithm to make them interact with human beings if we want them to:
1) be indistinguishable from regular people in behavior, and
2) be as equally friendly to everyone as possible?
I understand that writing such an algorithm is very hard. I can, however, imagine an android simulating human behavior fairly well with some sort of machine learning technique.
But how would we train it? Collecting the data would itself be a major problem.
Which machine learning technique would be ideal?
If you consider requirement 1 to be a hard requirement, such an algorithm would have to pass the Turing Test, at least to some extent, so it would be a pretty advanced (world-class) algorithm.
Your problem basically amounts to passing the Turing Test, so check the linked article for the scientific literature produced by people working on this problem.
Assuming data availability and processing power are essentially unbounded, I believe an Artificial Neural Network would be the best candidate to base such an algorithm on.
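As a concrete illustration of what "training" means here, below is a minimal sketch of a tiny neural network learning a toy function (XOR) by backpropagation. The architecture, dataset, and hyperparameters are arbitrary demonstration choices, nowhere near the scale a human-behavior model would need.

```python
# Minimal sketch: a tiny two-layer neural network trained with
# backpropagation on XOR. Architecture, data, and learning rate
# are arbitrary toy choices -- nothing like a behavior model.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # gradient at output
    d_h = (d_out @ W2.T) * h * (1 - h)        # backpropagated to hidden
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```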
As a course project, I have to implement an algorithm on an FPGA. Currently I'm considering arithmetic algorithms; ideas like implementing the four basic operators for floating-point numbers come to mind. As I'm new to such topics, I'd be grateful if anyone could suggest an algorithm that is worthwhile to implement.
Your question is very vague, and there are countless algorithms you could implement. Here are some suggestions at different difficulty levels:
Very easy
Audio volume control.
Audio echo.
These are technically not "worthwhile" to implement in hardware, but audio processing usually makes for an impressive live demonstration, even if the algorithm is very easy.
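As a rough software reference model of the echo idea (the kind of model you might check your RTL against), here is a minimal sketch; the delay length and feedback gain are arbitrary example values:

```python
# Minimal software model of an audio echo:
#   out[n] = in[n] + gain * in[n - delay]
# On an FPGA the delay line would typically be a block-RAM ring
# buffer; the delay and gain here are arbitrary example values.
def echo(samples, delay=8000, gain=0.5):
    out = []
    for n, x in enumerate(samples):
        delayed = samples[n - delay] if n >= delay else 0
        out.append(x + gain * delayed)
    return out
```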
Easy
FIR or IIR filters (low pass, high pass, band pass, ...)
CRC
Checksum
These algorithms are implemented in hardware all the time; they are very typical examples, yet still quite easy to implement.
If you start out with audio volume control or echo, you can later add filters to make it a little more advanced.
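A fixed-point FIR filter, for instance, maps naturally onto an FPGA's multiply-accumulate (DSP) blocks. Below is a minimal software reference model; the coefficients and bit widths are arbitrary illustrative choices:

```python
# Software reference model of a fixed-point FIR filter.
# Coefficients are scaled by 2**15, as one might do before
# mapping the multiply-accumulate onto FPGA DSP slices.
# The coefficient values are arbitrary illustrative choices.
FRAC_BITS = 15
COEFFS = [round(c * 2**FRAC_BITS) for c in (0.25, 0.5, 0.25)]  # crude low-pass

def fir(samples):
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, c in enumerate(COEFFS):
            if n - k >= 0:
                acc += c * samples[n - k]   # integer multiply-accumulate
        out.append(acc >> FRAC_BITS)        # rescale back to sample range
    return out
```

In hardware, each tap becomes a register stage feeding a multiplier, and the final right shift is essentially free.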
Medium/hard
Various cryptographic algorithms: SHA, AES, ...
FFT
JPEG compression
Regarding floating-point algorithms: you would typically never use floating-point math in an FPGA unless you absolutely have to.
Any algorithm that can be done with fixed-point math should be implemented in fixed-point math.
You would also never use division in an FPGA unless you absolutely have to. It is desirable to replace all divisions with multiplications whenever possible.
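To illustrate that last point: division by a constant can be replaced by multiplication with a precomputed fixed-point reciprocal followed by a shift. A minimal sketch, where the divisor and bit width are arbitrary examples:

```python
# Replacing division by a constant with a fixed-point
# reciprocal multiply -- cheap on an FPGA, unlike a divider.
# Divisor and bit width are arbitrary illustrative choices.
DIVISOR = 7
SHIFT = 16
RECIP = (1 << SHIFT) // DIVISOR + 1   # rounded-up reciprocal

def div_by_7(x):
    return (x * RECIP) >> SHIFT       # one multiply + one shift

for x in range(1000):
    assert div_by_7(x) == x // 7
```

The rounded-up reciprocal keeps the result exact only for sufficiently small inputs; widening the shift extends the exact range at the cost of a wider multiplier.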
Let's say that my definition of 'rudimentary programming' refers to the fundamental tools employed for a computer to perform a task.
Considering programming rudiments, the learning spectrum usually looks something like this:
Variables, data types and variable memory
Arrays/Lists and their manipulation
Looping and conditionals
Functions
Classes
Multithreading/multiprocessing
Streams (hard-disk and web)
My question is, have I missed any of the major rudiments? Is there a 'next' to the spectrum that still eludes me?
I think you missed the most important one: algorithms. Understanding their complexity, knowing when to use them, why to use them, and, more importantly, how to implement them.
I'm pretty sure that you already know a lot about algorithms, but if you think that your tool knowledge (i.e., the programming languages) is good enough, you should start focusing more on algorithms.
A great book to start with is Introduction to Algorithms by Thomas H. Cormen.
I'm very impressed by how plagiarism checkers (such as the Turnitin website) work, but how do they do it so effectively? I'm new to this area, so is there any word-matching algorithm, or anything similar, that is used for detecting similar sentences?
Thank you very much.
I'm sure many real-world plagiarism detection systems use more sophisticated schemes, but the general problem of measuring how far apart two things are is called the edit distance. That link includes links to many common algorithms used for this purpose. The gist is answering the question "How many edits must I perform to turn one input into the other?" The challenge for real-world systems is performing this across a large corpus in an efficient manner. A related problem is the longest common subsequence, which might also be useful in such schemes for identifying passages that are copied verbatim.
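For reference, here is a minimal sketch of the classic dynamic-programming computation of the Levenshtein distance, one common form of edit distance; a real plagiarism detector would of course work at a much larger scale:

```python
# Classic dynamic-programming Levenshtein edit distance:
# the minimum number of single-character insertions, deletions,
# and substitutions needed to turn `a` into `b`.
def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the distance between the current prefix of a
    # and b[:j]; only two rows of the DP table are kept.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute (or match)
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```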
There are many Online Judges (OJs) for ACM/ICPC questions, and there is another Online Judge for interview questions, named LeetCode (http://leetcode.com).
I think these OJs are very useful for learning algorithms. I am now going to learn data mining algorithms. Is there any OJ for data mining questions?
Thank you very much.
There is MLcomp, where you can submit an algorithm and have it run on a number of data sets to judge how well it does.
Plus, there is Kaggle, which hosts various classification competitions.
And of course you can take classes at Coursera. These are fairly introductory, but in order to get submission points you need to reproduce the known performance.
In particular, the first (MLcomp) also allows you to run several standard algorithms, such as naive Bayes and SVM, and see how well they do. Obviously, your own implementation should then perform similarly.
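To illustrate that kind of baseline comparison, here is a minimal sketch using scikit-learn; the dataset and models are arbitrary examples, not what MLcomp actually runs:

```python
# Compare two standard baselines, naive Bayes and an SVM,
# on a small benchmark dataset via cross-validation -- the same
# kind of sanity check an online judge performs at larger scale.
# Dataset and model choices are arbitrary illustrative examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
for model in (GaussianNB(), SVC()):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, scores.mean().round(3))
```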
Unfortunately, both are pretty much focused on supervised machine learning (i.e., classification and regression). There is very little in the unsupervised domain of clustering and outlier detection. On unlabeled data, things get hard even to evaluate locally, so any kind of online judging there is pretty much an unsolved problem. What you can do is largely one-class classification, or you just strip the labels before running the algorithm.
I'm good at algorithms, but not as good at converting real-world problems into them: studying a situation thoroughly and formulating it as an algorithm. I would like to know if there is any book or paper that teaches you to demystify a situation and formulate it as an algorithm. (It's much like training your mental ability to break a situation down and come up with an algorithm crisply.)
Pointers on some of the ways to approach these kinds of problems, and any accessible learning links/material, would help me a lot.
Note: I know SO doesn't allow asking for opinions or anything vague (I don't mind my question being downvoted), but I am asking about a concrete problem and hope to get some good information from some of the great minds here.
The word that best fits as a direct answer is "experience". There is no magical formula for converting a real-world problem into algorithms that solve it. As an analogy, there are no predefined patterns for solving a mathematical problem. It is the mind's task to express the solution, based on fundamental knowledge and on experience accumulated through constant learning.