Suggesting Implementation of an Algorithm on FPGA [closed] - vhdl

As a course project, I have to implement an algorithm on an FPGA. Currently I'm considering arithmetic algorithms, and ideas like implementing the four basic operations for floating-point numbers come to mind. As I'm new to such topics, I'd be thankful if anyone could suggest an algorithm that is worthwhile to implement.

Your question is very vague, and there are infinitely many algorithms you could implement. Here are some suggestions at different difficulty levels:
Very easy
Audio volume control.
Audio echo.
These are technically not "worthwhile" to implement in hardware, but audio processing usually makes for an impressive live demonstration, even if the algorithm is very easy.
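To show how simple an audio echo really is, here is a minimal behavioral sketch in Python (not synthesizable HDL) that you might write as a reference model before porting to VHDL; the delay and gain values are arbitrary placeholders.

    # Behavioral sketch of a simple audio echo: y[n] = x[n] + gain * x[n - delay].
    # Delay and gain are arbitrary; on an FPGA the delay line would be a block RAM.
    def echo(samples, delay=3, gain=0.5):
        out = []
        for n, x in enumerate(samples):
            delayed = samples[n - delay] if n >= delay else 0.0
            out.append(x + gain * delayed)
        return out

    print(echo([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))  # [1.0, 0.0, 0.0, 0.5, 0.0, 0.0]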
Easy
FIR or IIR filters (low pass, high pass, band pass, ...)
CRC
Checksum
These algorithms are implemented in hardware all the time. They are very typical examples, yet still quite easy to implement.
If you start out with audio volume control or echo, you can later add filters to make it a little bit more advanced.
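As a rough reference model for the FIR suggestion, here is a short Python sketch of a 4-tap FIR filter; the tap values are made-up placeholders, and a real design would use fixed-point coefficients in HDL rather than Python floats.

    # Behavioral reference model of a 4-tap FIR filter: y[n] = sum(h[k] * x[n-k]).
    # The coefficients below are arbitrary placeholders, not a designed filter.
    def fir_filter(samples, taps=(0.25, 0.25, 0.25, 0.25)):
        delay_line = [0.0] * len(taps)          # shift register, as in hardware
        out = []
        for x in samples:
            delay_line = [x] + delay_line[:-1]  # shift the new sample in
            out.append(sum(h * d for h, d in zip(taps, delay_line)))
        return out

    # Example: a 4-tap moving average smooths an impulse into a short plateau.
    print(fir_filter([1.0, 0.0, 0.0, 0.0, 0.0]))  # [0.25, 0.25, 0.25, 0.25, 0.0]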
Medium/hard
Various encryption algorithms, SHA, AES, ...
FFT
JPEG compression
Regarding floating point algorithms: You typically would never use floating point math in an FPGA unless you absolutely have to.
Any algorithm that can be done with fixed-point math should be implemented in fixed-point math.
You would also never use division in an FPGA, unless you absolutely have to. It is desirable to replace all divisions with multiplications whenever possible.
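As a rough illustration of both points, here is a Python sketch (formats and values chosen arbitrarily, Q1.15-style scaling assumed) of a fixed-point multiply and of replacing a division by a constant with a multiplication by a precomputed reciprocal, which is the usual trick when the divisor is known at design time.

    # Fixed-point sketch: represent fractional values as scaled integers (Q1.15 here).
    FRAC_BITS = 15
    SCALE = 1 << FRAC_BITS

    def to_fixed(x):      # real -> Q1.15 integer
        return int(round(x * SCALE))

    def fixed_mul(a, b):  # Q1.15 * Q1.15 -> Q1.15 (the shift restores the format)
        return (a * b) >> FRAC_BITS

    # Division by a constant, e.g. x / 3, replaced by multiplication with a
    # precomputed reciprocal -- no hardware divider needed.
    RECIP_3 = to_fixed(1.0 / 3.0)

    x = to_fixed(0.6)
    print(fixed_mul(x, to_fixed(0.5)) / SCALE)  # ~0.3
    print(fixed_mul(x, RECIP_3) / SCALE)        # ~0.2, i.e. 0.6 / 3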

Related

How does SVM work? [closed]

Is it possible to provide a high-level, but specific explanation of how SVM algorithms work?
By high-level I mean it does not need to dig into the specifics of all the different types of SVM, parameters, none of that. By specific I mean an answer that explains the algebra, versus solely a geometric interpretation.
I understand it will find a decision boundary that separates the data points from your training set into two pre-labeled categories. I also understand it will seek to do so by finding the widest possible gap between the categories and drawing the separation boundary through it. What I would like to know is how it makes that determination. I am not looking for code, rather an explanation of the calculations performed and the logic.
I know it has something to do with orthogonality, but the specific steps are very "fuzzy" everywhere I could find an explanation.
Here's a video that covers one seminal algorithm quite nicely. The big revelations for me are (1) optimizing the square of the critical metric, giving us a value that's always positive, so that minimizing the square (still easily differentiable) gives us the optimum; and (2) using a simple, but not-quite-obvious "kernel trick" to make the vector classifications easy to compute.
Watch carefully how the unwanted terms disappear, leaving N+1 vectors to define the gap space in N dimensions.
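As a small, hedged illustration of the "kernel trick" mentioned above (this is not the optimization from the video): the classifier only ever needs dot products between samples, so a kernel function can stand in for dot products computed in an implicit higher-dimensional feature space. The sketch below contrasts an explicit quadratic feature map with the equivalent degree-2 polynomial kernel; the feature map convention and example values are mine.

    import numpy as np

    # Kernel trick illustration: a degree-2 polynomial kernel equals the dot
    # product of explicitly expanded quadratic features, without building them.
    def phi(x):
        # explicit feature map for (x1, x2) -> degree-2 monomials (one convention)
        x1, x2 = x
        return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2,
                         np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

    def poly_kernel(x, z):
        return (np.dot(x, z) + 1.0) ** 2

    x = np.array([1.0, 2.0])
    z = np.array([3.0, -1.0])
    print(np.dot(phi(x), phi(z)))  # 4.0
    print(poly_kernel(x, z))       # 4.0 -- same value, no explicit expansion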
I'll give you a few small details that will help you continue understanding how SVM works.
To keep everything simple, assume 2 dimensions and linearly separable data. The general idea in SVM is to find a hyperplane that maximizes the margin between the two classes. Each of your data points is a vector from the origin. Once you propose a hyperplane, you project each data vector onto the vector defining the hyperplane (its normal) and check which side of the hyperplane the projection falls on; that is how you assign the two classes.
This is a very simple way of seeing it; you can then go into more detail by following some papers or videos.
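To make the projection idea concrete, here is a tiny Python sketch of how a point is classified once a hyperplane w·x + b = 0 has been chosen: project the sample onto the normal vector w and look at the sign. The values of w and b below are made up; training is what actually chooses them.

    import numpy as np

    # Classify a 2D point against a fixed hyperplane w.x + b = 0.
    # w and b are made-up here; the SVM training step is what picks them.
    w = np.array([1.0, -1.0])   # normal vector of the hyperplane
    b = 0.0

    def classify(x):
        # sign of the projection of x onto w, shifted by b
        return 1 if np.dot(w, x) + b >= 0 else -1

    print(classify(np.array([2.0, 1.0])))   # +1: on the positive side
    print(classify(np.array([1.0, 3.0])))   # -1: on the negative side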

OS as algorithm? [closed]

Can an Operating System be considered as an algorithm? Focus on the finiteness property, please. I have contradicting views on it right now with one prof. telling me something and the other something else.
The answer depends on picky little details in your definition of the word 'algorithm', which are not relevant in any practical context.
"Yes" is a very good answer, since the OS is an algorithm for computing the next state of the kernel given the previous state and a hardware or software interrupt. The computer runs this algorithm every time an interrupt occurs.
But if you are being asked to "focus on the finiteness property", then whoever is asking probably wants you to say "no", because the OS doesn't necessarily ever terminate... (except that when you characterize it as I did above, it does :-)
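The "OS as a state-transition algorithm" reading above can be caricatured in a few lines of Python. This is only a toy to make the framing concrete, not a claim about how any real kernel is structured; the state fields and interrupt names are invented.

    # Toy caricature of "the OS computes the next kernel state from the current
    # state and an interrupt". Each call terminates; the enclosing loop need not.
    def next_state(state, interrupt):
        if interrupt == "timer":
            state["ticks"] += 1          # e.g. scheduler bookkeeping
        elif interrupt == "syscall":
            state["pending"] += 1        # e.g. queue work for a process
        return state

    state = {"ticks": 0, "pending": 0}
    for irq in ["timer", "syscall", "timer"]:   # the "infinite" part, truncated
        state = next_state(state, irq)
    print(state)  # {'ticks': 2, 'pending': 1}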
By definition, an Operating System cannot be called an algorithm.
Let us take a look at what an algorithm is:
"a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer."
The Operating System is composed of a set of rules (in the software code itself) that allow the user to perform tasks on a system, but it is not itself defined as a set of rules for solving one problem.
With that said, the Operating System itself is not an algorithm, but we can write an algorithm for how to use it. We can also write algorithms for Operating Systems, defining how they should work, but calling the Operating System itself an algorithm does not make much sense. The Operating System is just a piece of software like any other, though considerably bigger and more complex. The question is, would you call MS Word or Photoshop an algorithm?
The Operating System is, however, composed of several algorithms.
I'm sure people will have differing views on this matter.
From Merriam-Webster: "a procedure for solving a mathematical problem ... in a finite number of steps that frequently involves repetition of an operation". The problem with an OS is that even if you are talking about a fixed distribution, so that it consists of a discrete step-by-step procedure, it is not made for solving "a problem". It is made for solving many problems. It consists of many algorithms, but it is not a discrete algorithm in and of itself.

What makes a task difficult or 'complex' to machine learn? Regarding complexity of pattern, not computationally [closed]

Like many, I am interested in machine learning. I have taken a class on this topic and have been reading some papers. I am interested in finding out what makes a problem difficult to solve with machine learning. Ideally, I want to learn how the complexity of a problem, with regard to machine learning, can be quantified or expressed.
Obviously, if a pattern is very noisy, one can look at the update techniques of different algorithms and observe that some particular machine learning algorithm incorrectly updates itself in the wrong direction due to a noisy label, but this is a very qualitative argument rather than analytical / quantifiable reasoning.
So, how can the complexity of a problem or pattern be quantified to reflect the difficulty a machine learning algorithm faces? Maybe something from information theory; I really do not have an idea.
In the theory of machine learning, the VC dimension of the domain is usually used to quantify how hard it is to learn.
A domain is said to have VC dimension k if there is a set of k samples such that, regardless of their labels, the proposed model can "shatter" them (split them perfectly using some configuration of the model).
The Wikipedia page uses the 2D plane as the domain, with a linear separator as the model. Its figures demonstrate that there is an arrangement of 3 points in 2D such that one can fit a linear separator to split them, whatever the labels are; however, for every 4 points in 2D, there is some assignment of labels such that a linear separator cannot split them.
Thus, the VC dimension of 2D space with a linear separator is 3.
Also, if the VC dimension of a domain and a model is infinite, the problem is said to be not learnable.
If you have a strong enough mathematical background and are interested in the theory of machine learning, you can try following the lecture by Amnon Shashua about PAC learning.
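As a hedged, brute-force illustration of "shattering": the sketch below checks, via a small feasibility linear program, whether a linear separator can realize every labeling of a given point set (it assumes NumPy and SciPy are available; the helper names are mine). With 3 non-collinear points in 2D it succeeds for all 8 labelings; with the 4 "XOR" corner points it fails for some labeling, matching the VC dimension of 3 quoted above.

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def linearly_separable(points, labels):
        """Feasibility LP: find w, b with y_i * (w . x_i + b) >= 1 for all i."""
        pts = np.asarray(points, dtype=float)
        n, d = pts.shape
        # Variables are (w_1..w_d, b); constraint rows are -y_i*[x_i, 1] <= -1.
        A_ub = np.array([-y * np.append(x, 1.0) for x, y in zip(pts, labels)])
        b_ub = -np.ones(n)
        res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (d + 1), method="highs")
        return res.status == 0  # status 0 means a feasible separator exists

    def shattered(points):
        """True if every +/-1 labeling of the points is linearly separable."""
        return all(linearly_separable(points, labels)
                   for labels in itertools.product([-1, 1], repeat=len(points)))

    print(shattered([(0, 0), (1, 0), (0, 1)]))          # True: 3 points shattered
    print(shattered([(0, 0), (1, 0), (0, 1), (1, 1)]))  # False: XOR labeling fails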

How would be an algorithm to simulate human interaction? [closed]

Let's suppose that androids which are physically alike humans are a reality.
What would be an algorithm to make it interact with human beings if we want it to:
1) be indistinguishable from regular people in behavior
2) be equally friendly to everyone, as far as possible?
I understand that it is very hard to write an algorithm like that. I can, however, imagine an android simulating human behavior fairly well with some sort of machine learning technique.
But how would we train it? The act of collecting data would also be a big big problem.
Which machine learning technique would be ideal?
If you consider requirement 1 to be a hard requirement, such an algorithm would beat the Turing Test at least to some extent, so it would be a pretty advanced (world-class) algorithm.
Your problem basically equates to beating the Turing Test, so check the linked article to see the scientific literature produced by people working on this problem.
Assuming data availability and processing power are basically unbounded, I believe an Artificial Neural Network would be the most promising candidate to base such an algorithm on.

Strong encryption done by hand [closed]

One of the distinguishing features of a good encryption algorithm is that it is easy to encrypt and hard to crack. Are there any that are easy enough for average folk to remember and calculate by hand, and that still stand up to brute-force attacks on a computer?
Imagine, a prisoner (with pen and paper) sending a message to another inmate, and the guards seize the handwritten message - and put their prison-crypto-cracking department on it.
Currently, I am thinking TEA is the best candidate, but it is pretty hard to remember, I think.
Yes, there are examples of strong cryptographic algorithms which can be implemented by hand. For example, in Neal Stephenson's classic, Cryptonomicon, there's an algorithm called Solitaire (or Pontifex), developed by Bruce Schneier for use with a deck of playing cards. Wikipedia has an explanation, and the author's home page has a full description.
One-time pads are doable by hand and impossible to crack, unless the opponent gets hold of the pad. Have each prisoner make up a bunch of one-time pads, number them according to some scheme, and have them exchange the pads; then, when transmitting the message, have a set of cues as to which pad will be used, e.g. if you hand it over at this part of the prison or with this gesture, then use this pad, and so on.
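The one-time pad arithmetic really is hand-computable: each letter of the message is shifted by the corresponding letter of the pad, modulo 26. Here is a short Python sketch of exactly that arithmetic (the message and pad are made up); by hand you would do the same additions with a 0-25 letter table.

    import string

    A2I = {c: i for i, c in enumerate(string.ascii_uppercase)}
    I2A = dict(enumerate(string.ascii_uppercase))

    def otp(text, pad, decrypt=False):
        """Shift each letter by the matching pad letter, mod 26 (subtract to decrypt)."""
        sign = -1 if decrypt else 1
        return "".join(I2A[(A2I[t] + sign * A2I[p]) % 26] for t, p in zip(text, pad))

    message = "MEETATDAWN"                 # made-up example
    pad     = "XQJZKVUBRM"                 # must be truly random and never reused
    cipher  = otp(message, pad)
    print(cipher)                          # 'JUNSKOXBNZ'
    print(otp(cipher, pad, decrypt=True))  # 'MEETATDAWN'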
Bruce Schneier's solitaire cipher is designed to be operated by hand using only a deck of cards. There is also the VIC cipher actually used by a Soviet spy in the 1950s. Both are cumbersome to actually operate by hand, though it is possible.
