OS as algorithm? [closed]

Can an Operating System be considered an algorithm? Focus on the finiteness property, please. I currently have conflicting views on this: one professor tells me one thing and the other tells me something else.

The answer depends on picky little details in your definition of the word 'algorithm', details which are not relevant in any practical context.
"Yes" is a very good answer, since the OS is an algorithm for computing the next state of the kernel given the previous state and a hardware or software interrupt. The computer runs this algorithm every time an interrupt occurs.
But if you are being asked to "focus on the finiteness property", then whoever is asking probably wants you to say "no", because the OS doesn't necessarily ever terminate... (except that when you characterize it as I did above, it does :-)
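To make the state-transition view concrete, here is a minimal sketch in Python (all names such as KernelState and handle_interrupt are invented for illustration; this is not any real kernel's API). Each call is a finite, terminating algorithm; the "endless" OS is just this algorithm run once per interrupt:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class KernelState:
        ready_queue: list = field(default_factory=list)   # runnable task ids
        current_task: Optional[int] = None

    def handle_interrupt(state: KernelState, interrupt: str) -> KernelState:
        """One finite step: previous state + interrupt -> next state."""
        if interrupt == "timer":
            # Round-robin scheduling: preempt the current task, run the next.
            if state.current_task is not None:
                state.ready_queue.append(state.current_task)
            if state.ready_queue:
                state.current_task = state.ready_queue.pop(0)
        elif interrupt.startswith("spawn:"):
            state.ready_queue.append(int(interrupt.split(":")[1]))
        return state

    state = KernelState()
    for irq in ["spawn:1", "spawn:2", "timer", "timer"]:
        state = handle_interrupt(state, irq)
    print(state)   # KernelState(ready_queue=[1], current_task=2)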

By definition, an Operating System cannot be called an algorithm.
Let us take a look at what an algorithm is:
"a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer."
The Operating System is composed of a set of rules (in the software code itself) which allow the user to perform tasks on a system, but it is not itself defined as a set of rules.
With this said, the Operating System itself is not an algorithm, but we can write an algorithm on how to use it. We can also write algorithms for Operating Systems, defining how they should work, but to call the Operating System itself an algorithm does not make much sense. The Operating System is just a piece of software like any other, though considerably bigger and more complex. The question is: would you call MS Word or Photoshop an algorithm?
The Operating System is, however, composed of several algorithms.
I'm sure people will have differing views on this matter.

From Merriam-Webster: "a procedure for solving a mathematical problem ... in a finite number of steps that frequently involves repetition of an operation". The problem with an OS is that even if you are talking about a fixed distribution, so that it consists of a discrete step-by-step procedure, it is not made for solving "a problem". It is made for solving many problems. It consists of many algorithms, but it is not a discrete algorithm in and of itself.

Related

When to use declarative programming over imperative programming [closed]

As far as I know, the main difference between declarative and imperative programming is that in declarative programming you specify what the problem is, while in imperative programming you state exactly how to solve it.
However, it is not entirely clear to me when to use one over the other. Imagine you are required to solve a certain problem: based on which properties do you decide to tackle it declaratively (e.g. using Prolog) or imperatively (e.g. using Java)? For what kinds of problems would you prefer one over the other?
Imperative programming is closer to what the actual machine performs. It is quite a low-level form of programming, and the more complex your application grows, the harder it becomes to grasp all the details at such a low level. On the plus side, being close to the machine, you can write quite performant code if you are good at it.
Declarative programming is more abstract and higher level: With comparatively little code, you can express quite sophisticated relationships in a way that can be more easily seen to be correct.
To see an important difference, compare for example pure Prolog with Java: suppose I take away one of the rules in a Prolog program. I know a priori that this can at most make the program more specific: some things that held previously may now no longer hold.
On the other hand, suppose I take away a statement in a Java program: Nothing much can be said about the effect in general. The program may even no longer compile.
So, changes in an imperative program can have unforeseen effects and are extremely hard to reason about, because there are few guarantees and invariants, and many things are implicit in some global state of the program. This makes imperative programming very error-prone.
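The contrast can be illustrated even within a single language. A minimal sketch in Python (standing in for the Prolog/Java comparison; the task and data are made up): the imperative version spells out every step and mutates intermediate state, while the declarative version only states what the result is:

    numbers = [3, 8, 5, 12, 7, 2]

    # Imperative: say HOW, step by step, mutating intermediate state.
    squares_imp = []
    for n in numbers:
        if n % 2 == 0:
            squares_imp.append(n * n)

    # Declarative: say WHAT the result is, as a single property.
    squares_dec = [n * n for n in numbers if n % 2 == 0]

    assert squares_imp == squares_dec == [64, 144, 4]

    # Echoing the point above: dropping the filter from the declarative
    # version changes the meaning in a predictable, local way; dropping an
    # arbitrary statement (say, the append) from the imperative loop
    # silently yields [].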

Suggesting Implementation of an Algorithm on FPGA [closed]

As a course project, I have to implement an algorithm on an FPGA. Currently I'm considering arithmetic algorithms, and ideas like implementing the 4 basic operators for floating-point numbers come to mind. As I'm new to such topics, I'd be thankful if anyone could suggest an algorithm that is worthwhile to implement.
Your question is very vague, and there are infinitely many algorithms you could implement. Some suggestions at different difficulty levels:
Very easy
Audio volume control.
Audio echo.
These are technically not "worthwhile" to implement in hardware, but audio processing usually makes for impressive live demonstrations, even if the algorithm is very easy.
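For a feel of how simple these are, here is a minimal sketch of an integer-math echo effect in Python (the delay and gain values are arbitrary examples); in hardware the delay line becomes a block RAM and the gain a shift:

    # y[n] = x[n] + x[n - D] / 2: add an attenuated, delayed copy.
    def echo(samples, delay=8000, gain_shift=1):
        out = []
        for n, x in enumerate(samples):
            delayed = samples[n - delay] if n >= delay else 0
            out.append(x + (delayed >> gain_shift))
        return out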
Easy
FIR or IIR filters (low pass, high pass, band pass, ...)
CRC
Checksum
These algorithms are implemented in hardware all the time; they are very typical examples, yet still quite easy to implement.
If you start out with audio volume control or echo, you can later add filters to make it a little bit more advanced.
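Whichever algorithm you pick, a common workflow is to first write a bit-accurate software reference model and later compare the HDL output against it. A minimal sketch in Python for a 4-tap moving-average FIR filter (tap count, coefficients and the test vector are arbitrary example choices):

    COEFFS = [1, 1, 1, 1]   # moving average; coefficient sum is 4

    def fir_reference(samples):
        """Yield one filtered output per input sample, integer math only."""
        delay_line = [0] * len(COEFFS)           # models the shift register
        for x in samples:
            delay_line = [x] + delay_line[:-1]   # shift in the new sample
            acc = sum(c * s for c, s in zip(COEFFS, delay_line))
            yield acc // len(COEFFS)             # scale back (>> 2 in hardware)

    # A hardware testbench would feed the same vectors to the HDL design
    # and assert bit-for-bit equality with this model.
    print(list(fir_reference([0, 4, 8, 8, 8])))  # [0, 1, 3, 5, 7]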
Medium/hard
Various encryption algorithms, SHA, AES, ...
FFT
JPEG compression
Regarding floating-point algorithms: you would typically never use floating-point math in an FPGA unless you absolutely have to.
All algorithms that can be done with fixed-point math should be implemented in fixed-point math.
You would also never use division in an FPGA unless you absolutely have to. It is desirable to replace all divisions with multiplications whenever possible.
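As an illustration of that last point, here is a small Python sketch of the standard trick of replacing division by a constant with a multiply and a shift, which maps directly onto FPGA multipliers (the constants are example choices, and the exact valid range must be verified before use):

    SHIFT = 16
    RECIP_10 = (1 << SHIFT) // 10 + 1    # ceil(65536 / 10) = 6554

    def div10(x):
        # x // 10 computed with only a multiply and a shift.
        return (x * RECIP_10) >> SHIFT

    # Exact for 0 <= x < 16384 with these constants; widen SHIFT
    # (and the multiplier) to cover a larger input range.
    assert all(div10(x) == x // 10 for x in range(16384))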

What makes a task difficult or 'complex' to machine learn? Regarding complexity of the pattern, not computational complexity [closed]

Like many others, I am interested in machine learning. I have taken a class on this topic and have been reading some papers. I am interested in finding out what makes a problem difficult to solve with machine learning. Ideally, I want to learn how the complexity of a problem can be quantified or expressed with regard to machine learning.
Obviously, if a pattern is very noisy, one can look at the update techniques of different algorithms and observe that some particular machine learning algorithm incorrectly updates itself in the wrong direction due to a noisy label, but this is qualitative arguing rather than analytical / quantifiable reasoning.
So, how can the complexity of a problem or pattern be quantified so that it reflects the difficulty a machine learning algorithm faces? Maybe something from information theory; I really do not have an idea.
In the theory of machine learning, the VC dimension of the domain is usually used to quantify "how hard it is to learn".
A domain is said to have VC dimension k if there is a set of k samples such that, regardless of their labels, the suggested model can "shatter" them (split them perfectly using some configuration of the model).
The Wikipedia page offers the example of points in 2D as the domain, with a linear separator as the model. There is a placement of 3 points in 2D such that a linear separator can split them whatever the labels are. However, for every 4 points in 2D, there is some assignment of labels such that no linear separator can split them.
Thus, the VC dimension of 2D space with a linear separator is 3.
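The shattering claim is easy to verify by machine. A minimal sketch in Python, using a hand-rolled perceptron (which is guaranteed to converge whenever a separating line exists): every labeling of 3 points in general position is separable, while the XOR labeling of 4 points is not:

    from itertools import product

    def separable(points, labels, epochs=1000):
        """True if a perceptron finds a line splitting points by labels."""
        w = [0.0, 0.0, 0.0]                        # two weights plus a bias
        for _ in range(epochs):
            errors = 0
            for (x, y), t in zip(points, labels):  # labels are +1 or -1
                if t * (w[0] * x + w[1] * y + w[2]) <= 0:
                    w = [w[0] + t * x, w[1] + t * y, w[2] + t]
                    errors += 1
            if errors == 0:
                return True                        # perfect split found
        return False                               # gave up: not separable

    three = [(0, 0), (1, 0), (0, 1)]
    print(all(separable(three, ls) for ls in product([1, -1], repeat=3)))
    # True: all 8 labelings are split, so the VC dimension is at least 3.

    four = [(0, 0), (1, 1), (1, 0), (0, 1)]
    print(separable(four, [1, 1, -1, -1]))
    # False: the XOR labeling defeats every line, so it is less than 4.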
Also, if the VC dimension of a domain and model is infinite, the problem is said to be not learnable.
If you have a strong enough mathematical background and are interested in the theory of machine learning, you can try following Amnon Shashua's lecture about PAC learning.

What would an algorithm to simulate human interaction look like? [closed]

Let's suppose that androids that are physically like humans are a reality.
What would be an algorithm to make it interact with human beings if we want it to:
1) be indistinguishable from regular people in behavior
2) be as equally friendly to everyone as possible?
I understand that it is very hard to write an algorithm like that. I can, however, imagine an android simulating human behavior fairly well with some sort of machine learning technique.
But how would we train it? The act of collecting the data would also be a very big problem.
Which machine learning technique would be ideal?
If you consider requirement 1 to be a hard requirement, such an algorithm would beat the Turing Test at least to some extent, so it would be a pretty advanced (world-class) algorithm.
Your problem basically equates to beating the Turing Test, so check the linked article to see the scientific literature produced by people working on this problem.
Assuming massive data availability and essentially unbounded processing power, I believe an artificial neural network would be the best candidate to base such an algorithm on.

Which function in Pascal is the fastest one: while, for or repeat? [closed]

I solved a problem in Pascal mostly using the for loop and I didn't meet the time limit, even though the problem is solved correctly and in the best way. So is one of these other constructs perhaps faster, or did I make a mistake and solve it wrongly?
Short answer: try it out! Just make 3 versions of your solution, test each against a good mass of data, and record how much time each takes. If you still don't meet the time limit, try a faster PC or review your solution.
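The measure-don't-guess approach looks the same in any language. An illustrative sketch in Python (Pascal's repeat..until would simply be a third variant): one function per loop construct, identical work, timed on the same input:

    import timeit

    N = 1_000_000

    def with_for():
        total = 0
        for i in range(N):
            total += i
        return total

    def with_while():
        total, i = 0, 0
        while i < N:
            total += i
            i += 1
        return total

    assert with_for() == with_while()   # same work, same result

    for fn in (with_for, with_while):
        # Best of 5 runs smooths out scheduler noise on the test machine.
        t = min(timeit.repeat(fn, number=1, repeat=5))
        print(f"{fn.__name__}: {t:.3f} s")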
while and repeat are not functions, but indicate looping control structures intrinsic to the programming language.
Neither is faster, neither is slower. Both are faster, both are slower. A general answer is not possible, and the more work you do inside the loop, the less relevant any difference, if any, becomes.
If you didn't solve your exercise, then the problem is not the choice of the loop. It could be the algorithm you chose, it could be a mistake you made, or it could be that the testing machine didn't have enough processing time left for your program to finish in time.
Assuming a sensible compiler, there should be no performance difference between them.
Even if there were a difference, it would likely be negligible -- unnoticeable by humans, and easily within experimental error for computed results.
Since this is a school assignment, I'd suggest reviewing the material taught in the last week; probably something there will hint at a better way to solve the problem.
Originally, for was considered faster because it didn't allow changing the loop variable, which enabled certain kinds of optimizations, especially if the value of the loop variable wasn't used.
Nowadays, with fewer memory and time constraints on optimization, the various forms are easily transformed into each other, and the whole discussion becomes academic.
Note that most modern compilers (Delphi and later) add another check to the for statement, verifying that the upper bound is greater than or equal to the lower bound. If you have to answer such a question for homework, check your compiler very thoroughly.
