Which function in Pascal is the fastest one: while, for or repeat? [closed] - performance

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 9 years ago.
I solved one problem in Pascal using mostly the for function and I didn't meet the time limit, even though the problem is solved correctly in the best way I know of. So is either of the other functions faster, or did I make a mistake and solve it wrongly?

Short answer: try it out! Just make three versions of your solution, test each against a good mass of data, and record how much time each takes. If you still can't meet the time limit you want, try a faster PC or review your solution.
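For illustration, here is a minimal timing sketch (Free Pascal, using GetTickCount64 from SysUtils); Solve is a placeholder name standing in for one version of your solution, so you would build one such program per loop variant and compare the reported times on the same input data:

    program TimeSolution;
    { Minimal timing sketch (Free Pascal). Solve is a placeholder for one
      version of your solution; build one program per loop variant and
      compare the reported times on the same input data. }

    uses
      SysUtils;

    procedure Solve;
    var
      i: LongInt;
      Sum: Int64;
    begin
      { placeholder workload - replace with your actual solution }
      Sum := 0;
      for i := 1 to 100000000 do
        Sum := Sum + i;
      WriteLn('checksum: ', Sum);  { print something so the work is not optimized away }
    end;

    var
      Start: QWord;
    begin
      Start := GetTickCount64;     { millisecond tick counter from SysUtils }
      Solve;
      WriteLn('elapsed: ', GetTickCount64 - Start, ' ms');
    end.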

while and repeat are not functions; they are looping control structures intrinsic to the programming language.
None of them is inherently faster or slower; each can come out ahead or behind depending on the situation. A general answer is not possible, and the more work you do inside the loop, the less relevant any difference, if there is one, becomes.
If you didn't solve your exercise within the limit, then the problem is not the choice of loop. It could be the algorithm you chose, it could be a mistake you made, or it could be that the testing machine didn't leave your program enough processing time to finish in time.

Assuming a sensible compiler, there should be no performance difference between them.
Even if there were a difference, it would likely be negligible -- unnoticeable by humans, and easily within experimental error for computed results.
Since this is a school assignment, I'd suggest reviewing the material taught in the last week; probably something there will hint at a better way to solve the problem.

Originally, for was considered faster because it didn't allow changing the loop variable, which enabled certain kinds of optimizations, especially if the value of the loop variable wasn't used after the loop.
Nowadays, with far fewer memory and time constraints on optimization, the various forms are easily transformed into each other, and the whole discussion becomes academic.
Note that most modern compilers (Delphi and later) add another check to the for statement, to verify that the upper bound is greater than or equal to the lower bound before entering the loop. If you have to answer such a question for homework, check your compiler very thoroughly.
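To make the equivalence concrete, here is a minimal sketch of the same summation written with each of the three loop forms; a reasonable compiler produces essentially the same code for all of them, apart from the extra empty-range check a Delphi-style for loop may add:

    program LoopForms;
    var
      i, Sum: Integer;
    begin
      { for: the compiler controls the loop variable and may check the
        bounds once up front (an empty range skips the loop entirely) }
      Sum := 0;
      for i := 1 to 10 do
        Sum := Sum + i;
      WriteLn('for:    ', Sum);

      { while: the condition is tested before every iteration,
        including the first }
      Sum := 0;
      i := 1;
      while i <= 10 do
      begin
        Sum := Sum + i;
        Inc(i);
      end;
      WriteLn('while:  ', Sum);

      { repeat..until: the condition is tested after each iteration,
        so the body always runs at least once }
      Sum := 0;
      i := 1;
      repeat
        Sum := Sum + i;
        Inc(i);
      until i > 10;
      WriteLn('repeat: ', Sum);
    end.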

Related

What's the usual practice to avoid stack overflow in recursive algorithms? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I used to think iterative algorithms are always superior to recursive ones due to the potential stack overflow issue, as long as one doesn't mind the extra coding effort. Therefore, if I were to build a utility function to be used over and over again, I should always go with iteration. Is that logic correct? Or is there any standard trick to avoid stack overflow in recursion even for very large N? Still assume the data structure itself is not too big to fit in RAM.
Basically you're right: for common compilers, iteration is superior in both memory consumption and performance. But do not forget there are languages, usually focused on functional programming and/or AI, that are optimized for recursion, and with them recursion is superior instead...
Anyway, if I can, I always use iteration for the reasons you mentioned. However, the recursive approach is often much simpler than the iterative one, and sometimes the amount of coding required to convert it to iteration is too much... In such cases you can do the following:
limit heap/stack trashing
Simply get rid of as many operands, return values and local variables as you can, as each recursive call makes a copy/instance of them...
You can use static variables for the locals, or even global ones for the operands, but beware not to break the functionality by doing this.
You would not believe how many times I have seen an array passed by value as an operand into a recursion...
limit recursion
If you have a multi-split recursion (one recursive layer makes more than one recursive call), then you can keep some internal global counter of how many recursive calls are currently active...
If you hit some threshold value, do not make further recursive calls... instead schedule the pending work into some global array/list (a work queue, or a priority queue if ordering matters) and, once the active recursions return, process this queue until it is empty. However, this approach is not always applicable. Nice examples of this approach are Flood Fill and grid A* path finding... (a sketch of the queue-based idea follows after this list).
increase heap/stack size
Executables have a header that tells the OS how much memory to allocate for their stack and heap... so just find the setting in your compiler/linker/IDE and set the value to a reasonable size.
I am sure there are more techniques, but these are the ones I use...
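As a concrete illustration of the queue-based idea referred to above, here is a minimal Flood Fill sketch in Pascal (keeping with the first question on this page); it assumes a small fixed-size global grid, and the names Push, FloodFill and the queue layout are just illustrative. Pending cells go into an explicit FIFO queue and are processed in a loop, so there is no recursion depth and no per-call copy of the grid:

    program FloodFillQueue;
    { Queue-based flood fill: instead of one recursive call per cell
      (which can overflow the stack on large areas), pending cells go
      into an explicit FIFO queue. The grid is global, so nothing big
      is copied per call. }

    const
      W = 8;
      H = 8;

    type
      TCell = record
        x, y: Integer;
      end;

    var
      Grid: array[0..W - 1, 0..H - 1] of Integer;
      Queue: array[0..W * H - 1] of TCell;  { worst case: every cell queued once }

    procedure FloodFill(sx, sy, Target, Fill: Integer);
    var
      Head, Tail: Integer;
      c: TCell;

      procedure Push(x, y: Integer);
      begin
        { only enqueue cells that still carry the target colour }
        if (x >= 0) and (x < W) and (y >= 0) and (y < H) and
           (Grid[x, y] = Target) then
        begin
          Grid[x, y] := Fill;               { mark now so no cell is queued twice }
          Queue[Tail].x := x;
          Queue[Tail].y := y;
          Inc(Tail);
        end;
      end;

    begin
      if Target = Fill then Exit;
      Head := 0;
      Tail := 0;
      Push(sx, sy);
      while Head < Tail do                  { a loop instead of recursion }
      begin
        c := Queue[Head];
        Inc(Head);
        Push(c.x + 1, c.y);
        Push(c.x - 1, c.y);
        Push(c.x, c.y + 1);
        Push(c.x, c.y - 1);
      end;
    end;

    var
      x, y: Integer;
    begin
      for x := 0 to W - 1 do                { start with the whole grid in colour 0 }
        for y := 0 to H - 1 do
          Grid[x, y] := 0;
      FloodFill(0, 0, 0, 1);                { flood-fill colour 1 from (0, 0) }
      for y := 0 to H - 1 do
      begin
        for x := 0 to W - 1 do
          Write(Grid[x, y]);
        WriteLn;
      end;
    end.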

What are Some Ways or Common Methods to Tell if an Unsupervised Learning Algorithm is Correct [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I know that part of the beauty, mystery, and complexity of unsupervised learning is that it extracts information out of lots of data that humans could not figure out on their own. However, are there any ways of knowing if the algorithm is right? For example, say it is looking at stock trends and it makes some deduction about a certain stock. Without actually seeing how it plays out, is there any way of knowing that it is right? The data it trained off of could be wrong or, more importantly, your algorithm could've just drawn the wrong conclusions. Obviously there are mathematical measures such as loss, but currently do we just have to live with the fact that the algorithm may be wrong? What are some of the ways that we can measure how correct an unsupervised learning algorithm is (or is "correct" just an ambitious term)?
In short:
Without actually seeing how it plays out, is there any way of knowing that it is right?
No. If there was, then you wouldn't even need your original algorithm. Just make random predictions and use your oracle to tell if they're right or not.
The data it trained off of could be wrong
ML algorithms learn based on data. If it's wrong, they will learn wrong. If you were only ever told that 1+1=3, would you have any reason to question it?
more importantly, your algorithm could've just drawn the wrong conclusions
No conclusion is wrong if it is supported by the data. It might not be the one you're after (see https://www.jefftk.com/p/detecting-tanks), in which case you should get data that better describes what you're after.
but currently do we just have to live with the fact that the algorithm may be wrong?
Yes, and we probably always will. Are humans ever always right about something? You could be wrong about very basic things under the right circumstances. And we're much smarter than current AIs.
What are some of the ways that we can measure how correct an unsupervised learning algorithm is (or is "correct" just an ambitious term)?
It's very ambitious. You could check the results manually, if you're good enough at the problem you're trying to solve. If you want to classify images into dogs and cats, that's probably simple enough for you as a human to judge. Apply the algorithm and check some of the predictions manually to get an idea of how well it did.
If you want to have something that plays Go really well, challenge the world champion.
It depends on the problem.
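For instance, the "check some of the predictions manually" idea can be as simple as sampling a few outputs and asking a human whether they agree. Here is a minimal sketch (written in Pascal to match the rest of this page; the Items and Predictions data are made-up placeholders):

    program SpotCheck;
    { Manual spot-check: show a random sample of the algorithm's outputs
      to a human reviewer and report the agreement rate. The data below
      are made-up placeholders. }

    const
      NItems = 6;
      SampleSize = 3;

    var
      Items: array[0..NItems - 1] of string =
        ('img_001', 'img_002', 'img_003', 'img_004', 'img_005', 'img_006');
      Predictions: array[0..NItems - 1] of string =
        ('cat', 'dog', 'cat', 'cat', 'dog', 'dog');
      i, Idx, Agreed: Integer;
      Answer: string;
    begin
      Randomize;
      Agreed := 0;
      for i := 1 to SampleSize do
      begin
        Idx := Random(NItems);               { pick a random prediction }
        WriteLn('Item ', Items[Idx], ' was labelled "', Predictions[Idx], '".');
        Write('Do you agree? (y/n): ');
        ReadLn(Answer);
        if (Answer = 'y') or (Answer = 'Y') then
          Inc(Agreed);
      end;
      WriteLn('Agreement on the sample: ', Agreed, '/', SampleSize);
    end.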

OS as algorithm? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
Can an Operating System be considered an algorithm? Focus on the finiteness property, please. I am getting contradictory views on it right now, with one prof. telling me one thing and the other something else.
The answer depends on picky little details in your definition of the word 'algorithm', which are not relevant in any practical context.
"Yes" is a very good answer, since the OS is an algorithm for computing the next state of the kernel given the previous state and a hardware or software interrupt. The computer runs this algorithm every time an interrupt occurs.
But if you are being asked to "focus on the finiteness property", then whoever is asking probably wants you to say "no", because the OS doesn't necessarily ever terminate... (except that when you characterize it as I did above, it does :-)
By definition, an Operating System cannot be called an algorithm.
Let us take a look at what an algorithm is:
"a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer."
The Operating System is composed of sets of rules (in the software code itself) which allow the user to perform tasks on a system, but it is not itself defined as a single set of rules.
With this said, the Operating System itself is not an algorithm, but we can write an algorithm on how to use it. We can also write algorithms for Operating Systems, defining how they should work, but to call the Operating System itself an algorithm does not make much sense. The Operating System is just a piece of software like any other, though considerably bigger and more complex. The question is, would you call MS Word or Photoshop an algorithm?
The Operating System is, however, composed of several algorithms.
I'm sure people will have differing views on this matter.
From Merriam-Webster: "a procedure for solving a mathematical problem ... in a finite number of steps that frequently involves repetition of an operation". The problem with an OS is that, even if you are talking about a fixed distribution, so that it consists of a discrete step-by-step procedure, it is not made for solving "a problem". It is made for solving many problems. It consists of many algorithms, but it is not a discrete algorithm in and of itself.

When to use declarative programming over imperative programming [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
As far as I know, the main difference between declarative and imperative programming is that in declarative programming you specify what the problem is, while in imperative programming you state exactly how to solve it.
However, it is not entirely clear to me when to use one over the other. Imagine you are required to solve a certain problem: based on which properties do you decide to tackle it declaratively (e.g. using Prolog) or imperatively (e.g. using Java)? For what kind of problems would you prefer one over the other?
Imperative programming is closer to what the actual machine performs. This is a quite low level form of programming, and the more complex your application grows, the harder it will be for you to grasp all details at such a low level. On the plus side, being close to the machine, you can write quite performant code if you are good at that.
Declarative programming is more abstract and higher level: With comparatively little code, you can express quite sophisticated relationships in a way that can be more easily seen to be correct.
To see an important difference, compare for example pure Prolog with Java: Suppose I take away one of the rules in a Prolog program. I know a priori that this can make the program at most more specific: Some things that held previously may now no longer hold.
On the other hand, suppose I take away a statement in a Java program: Nothing much can be said about the effect in general. The program may even no longer compile.
So, changes in an imperative program can have very unforeseen effects, and are extremely hard to reason about, because there are few guarantees and invariants, and many things are implicit in some global state of the program. This makes imperative programming very error-prone.

How many of you are recording their historical project-data - for future estimates and how are you doing it? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
When working on a project - I always estimate my tasks and calculate how long it will take me to finish. So in the end I get a time-span in which the project should be finished (it rarely is).
My question is: do you record the data and assumptions used in your estimates during a project and use them for later projects or for refined estimates on the same project?
If so - how do you record such data and how do you store them?
I used an Excel sheet - but somehow (cannot imagine how that happened ;)) I tend to forget to fill in new assumptions or newly gained information. It is also not really readable or useful for evaluating my predictions after finishing the project - to learn from them for the next project.
Sounds like what Joel wrote FogBugz for.
I had a discussion with a friend recently about a pragmatic variation of this, more specifically the feasibility of using the coarse-level evidence of when code is checked in.
Provided you work in a reasonably cohesive manner, your checkins can be related, at least through the files involved, to some work units and the elapsed time used to determine an average productivity.
This fits well with the Evidence-based Scheduling approach included in FogBugz. If you happen to have been spending time on other things to an unusual degree, then in future you will be more productive than the checkin rate suggests. Any error is on the safe side of over-allocating time.
The main flaw, for me, in an approach like this is that I typically interweave at least two projects, often more, in different repositories and languages. I would need to pull the details together and make a rough allocation of relative time between them to achieve the same thing. In a more focused team, I think repository date stamps may be good enough.
Isn't that what project managers are for? ;)
