AllDistinct(a_1, ..., a_n)
    if (n = 1)
        return True
    for i := n down to 2
    begin
        if (LinearSearch(a_1, ..., a_{i-1}; a_i) != 0)
            return False
    end
    return True
Give a big-O bound on the running time of AllDistinct. For full credit, you must show
work or briefly explain your answer.
So the actual answer for this, according to the solution, is O(n^2). However, since big-O is an upper bound on the running time, could I have answered O(n^100000) and gotten away with it? There's no way they can take points off for that, since it's technically correct, right? The more useful O(n^2) is obvious for this algorithm, but I ask because we might get a more difficult algorithm on the upcoming exam, and in case I can't figure out the tight bound, I could make up some extremely large bound and it should still be correct, right?
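(For what it's worth, here is a rough Python transcription of the pseudocode above; the 1-based indexing and the convention that LinearSearch returns 0 when the element is absent are my reading of the exam, not something it states. Counting comparisons gives the tight bound: iteration i scans at most i-1 elements, so the total is at most 1 + 2 + ... + (n-1) = n(n-1)/2, which is O(n^2).)

def linear_search(items, target):
    # Return the 1-based position of target in items, or 0 if it is absent
    # (the return convention the pseudocode appears to assume).
    for pos, value in enumerate(items, start=1):
        if value == target:
            return pos
    return 0

def all_distinct(a):
    # Direct transcription of the exam pseudocode; a holds a_1, ..., a_n.
    n = len(a)
    if n == 1:
        return True
    for i in range(n, 1, -1):                        # i = n down to 2
        if linear_search(a[:i - 1], a[i - 1]) != 0:  # search a_1..a_{i-1} for a_i
            return False
    return True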
Yes, if a function is in O(n^2), it is also in O(n^1000).
Whether you'll get full (or any) credit for answering this way depends on the person grading your exam of course, so I can't tell you that (probably not though). But yes, it is technically correct.
If you do decide to go this way, though, you should probably choose something like O(n^n) or O(Ackermann(n)), since, for example, exponential functions are not in O(n^1000).
Another problem is that you will probably be asked to prove the bound as well. This will be hard to do if you don't actually know the running time of the function. "n^n is really large, so the running time will probably be less than that" is not a proof. On the upside, though, if you manage to correctly prove that the function is in O(n^n), you'll probably get at least partial credit.
That would be a trivial answer to the question. Although correct, it tells you nothing and is thus worthless. It's not about right or wrong, it's about bad and good. The better your answer, the more points you'll get for it. The question does not promise credit for a terribly loose bound. Bad answers get bad marks.
(Asking for Big Theta would be a harder question. I would play nice :)
No.
It might feel all clever, a "Ha! I got you!" answer, but that's not the idea (and you know that).
If the professor asks you for the big-O of it, you can answer with whatever big-O you believe is right, but you must prove it, since the question says: "For full credit, you must show work or briefly explain your answer."
Big-O is not useless. For a given problem it's easy to end up with an upper bound (big-O) that is larger than necessary; e.g.
The sorting problem: you have simple bubble sort and you can prove that it is n^2 (right?), so an upper bound for the sorting problem is n^2 (because there exists an algorithm that solves it in that time). But if you carry on with the maths, you see that the problem has a lower bound of log(n!). So n^2 was a good answer until you prove the log(n!) bound. There are many problems for which we only know a big-O upper bound and not a matching lower bound, so it's not useless.
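(Aside, to connect the two bounds just mentioned: the comparison-sorting lower bound of $\log_2(n!)$ comparisons is itself $\Theta(n \log n)$, which is why mergesort matches it. A standard two-line estimate:
$$\log(n!) = \sum_{i=1}^{n} \log i \;\le\; n \log n, \qquad \log(n!) \;\ge\; \sum_{i=\lceil n/2 \rceil}^{n} \log i \;\ge\; \frac{n}{2} \log \frac{n}{2},$$
so $\log(n!) = \Theta(n \log n)$.)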
If you can show that a program halts, you can always compute its big-O with some math, though it is not always easy (there is even amortized complexity); still, it's simpler than proving a lower bound for the problem. So finding a big-O is not the hard part of algorithm analysis, but it's not useless either.
The important thing is that you understand what it means; then, if you can derive any big-O for that program, you can write it on that exam paper, which is itself a function from Student to number... and good luck.
At a guess, you'd probably have to talk to the professor, and argue with him a bit to even get partial credit for an answer like that. Depending on how much he values theory vs. practicality, he might give you partial credit, or he might give no credit -- but I can hardly imagine a professor who'd give any credit without your explicitly pointing out how it's (semi-)correct, and some might not even then.
I was a prof. Profs make up exam questions, and those can have bugs. It's embarrassing when you have to throw out a question because it has a bug and people can give trivial answers. In this case the bug is "a big-O bound". Making exam questions is tricky, because you don't want to err on the side of saying too much, like some kind of airtight lawyer statement, because that will confuse people even more.
After all, the reason for doing this is, hopefully, you'll learn something useful. If you see an ambiguous question like this, the prof will appreciate it if you say something like "I assume you mean a good big-O bound."
Related
While looking for a reasonably fast algorithm to calculate a square root of a number up to n digits I have stumbled upon this algorithm:
https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Binary_numeral_system_(base_2)
I must admit, this is a beautiful piece of code, but the explanation provided on Wikipedia doesn't really speak to me. I have tried to understand it for several hours now and I simply have no idea how it works - I have done some example calculations on paper, but it didn't seem to help.
So that's why I'm asking this question here; an explanation would be really useful.
Also, if this method is supposed to be much quicker, why isn't it the one used in the standard C library?
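(For readers who land here: below is a Python sketch of the base-2 digit-by-digit method described on that Wikipedia page; the function name and structure are my own transcription, so treat it as illustrative rather than authoritative. The idea is to build the root one bit at a time, keeping res as the root found so far and n as the remaining remainder.)

def isqrt_binary(n):
    # Digit-by-digit (base-2) integer square root: returns floor(sqrt(n)).
    if n < 0:
        raise ValueError("square root of a negative number")
    if n == 0:
        return 0
    res = 0
    bit = 1 << (2 * ((n.bit_length() - 1) // 2))   # highest power of 4 <= n
    while bit:
        if n >= res + bit:
            n -= res + bit
            res = (res >> 1) + bit   # append a 1 bit to the root
        else:
            res >>= 1                # append a 0 bit to the root
        bit >>= 2                    # move to the next lower bit pair
    return res

# e.g. isqrt_binary(10) == 3, isqrt_binary(16) == 4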
I'm reading chapters 2 and 3 of CLRS, and I get stuck so often, especially on the problems provided at the end of each chapter, that I wonder whether this much effort will ever be worthwhile. I can't even understand the solutions online, like this one: http://clrs.skanev.com/02/problems/01.html
I heard that this book is one of the most popular textbooks for university CS classes, but do people skip the intricate parts and just memorize the important facts, like that insertion sort has this order of growth and merge sort has that order of growth, and move on?
Isn't it just enough to be familiar with many useful algorithms to have about as much understanding of computer science as people with a degree in CS do in general?
Understanding isn't about memorization. It's about being able to apply the knowledge to solve problems. The textbook problems are quite simple compared to most real-life problems. So, skipping these simply means you're not learning at all, and you certainly won't be able to apply any of it in real life. You're memorizing, but you can't use what you've memorized.
TL;DR: The proof of being able to use the knowledge is the ability to solve problems, and textbook problems are simple.‡ One doesn't go without the other.
‡ Knuth's texts are a notable exception: he also offers some borderline intractable problems, and everything in between :)
The point is that "people with a degree in CS ... in general" can work out the order of growth of an algorithm. That's why people go to the effort of learning this stuff. If you just want to be able to say "mergesort is O(n log n)", then indeed, all you need is to see and memorise that fact. If you want to be able to work out the O() of an algorithm, even when it's one you've never seen before - then you need these methods.
Today there was a question on SO, where the author was given an NP-complete problem during an interview and he obviously hadn't been told that it was one.
What is the purpose of asking such questions? What behavior does the interviewer expect when asking such things? A proof? Useful heuristics? And is it even legitimate to ask one if it's not a well-known NP-complete problem everyone should know about? (There are plenty of them.)
Completely legitimate to me. If you are a computer science professional, there is a good chance that you can either argue informally why the problem seems to be hard, or (even better) provide a sketch of a reduction from a known NP-hard problem.
Many real-world problems eventually turn out to be NP-hard, and Stack Overflow also has, now and then, questions about the complexity of a problem which turns out to be a difficult one (NP-hard, for instance). It is an important part of a CS professional's toolbox to be able to recognize, and to argue for, problems which are known to be difficult to solve.
I don't see any problem with asking something like this. Also, programmers should NOT be expected to recognize NP-complete problems by rote. They should, however, be able to identify that their algorithm is potentially slow regardless of whether a given problem is NP-complete.
Sure, why not? NP-complete doesn't mean unsolvable, it just means your solution will be slow. You may be looking to see if the candidate will choose the brute-force solution, or try a dynamic programming solution. And this type of question can lead into questions about runtime and other useful theory.
There's a category of interview questions that are illegal in some countries, usually pertaining to personal details that are none of the employer's business. That aside, any question is fair game if the interviewer feels it'll help get an idea of the interviewee's capabilities!
If you're hiring for a position that calls for a thinker rather than just a code monkey, it may be useful to throw this kind of problem at the applicant. Who cares if the problem is "well known" to be NP-complete? If the guy is good, he'll come to that understanding while analyzing the problem. That may well be the result the interviewer wants to see, or the applicant can go on to do some more pre-analysis and describe how he'd brute-force the problem, or what optimizations he can think of to make it more manageable.
It's good to ask a question that is hard to answer, to see how a programmer reasons through a problem.
But it all depends on how the interviewer asks the question and prompts the programmer towards a solution if they aren't a mathematical genius (i.e. to see how they reason, and how they react to questions like "that's a good start, but what if..."), rather than to check whether they can produce an optimal solution in 4.3 seconds.
It's worth remembering that interviews are highly stressful affairs in which many people find such questions very difficult to answer well - a much simpler question will usually suffice without putting the interviewee under undue stress/pressure.
Doing it deliberately to see how they deal with stress is just stupid - that isn't the sort of stress a programmer has to deal with in their job, so you're not testing anything worthwhile.
I think it's valid to ask a question you know the interviewee won't know the answer to.
Everyone encounters problems they don't know the answer to. This type of question will give you insight as to what the interviewee's internal process is. If they logically conclude things and start to formulate a correct answer, even if it's not the best dynamic programming algorithm for it, it shows that they can reason well and discover an answer.
Also, since they likely don't know everything about the problem, this sort of question lets you see how comfortable the interviewee is with asking for help or clarification.
I think the best way to answer this type of question is to ask for any clarifications if something is missing or not well known, and then postulate an answer, pointing out why you think it is correct, and why it likely isn't the best solution.
I don't see a problem with this, but I do somewhat question the usefulness of these sorts of questions in interviews in general.
The benefit of asking questions like this, as an interviewer, is to see how the person approaches a problem, and how they think. If you tell them to talk it out, you can find out quite a bit about how they will approach a difficult problem.
That being said, during an interview, most people aren't at their best - so throwing something that's somewhat "tricky" like this is often overkill, IMO.
It's sort of mean to ask nigh-impossible questions without informing the interviewee of it, but in observed problem solving, the question is often asked so that you may demonstrate critical thinking skills, how you approach problem solving, and how you handle pressure or failure.
I've been asked interview questions I couldn't solve, and I don't think I've ever "failed" an interview because of it.
No, it's rude and a sign that the interviewer just likes being in a position of power. Haha, peon! I know the answer, and you don't! And boy, do I love to make you squirm trying to come up with it!
About the only way it could be even slightly valid as a useful interview question is if it were a well-known question or one that was somehow obviously NP-complete, and asked in a way that encouraged discussion of feasibility.
Is it fair to ask in an interview how to factorise numbers?
That's not known to be NP-C, but no polynomial-time solution[*] is known, so it is certainly not known to be in P.
I think the answer to both my question, and the original question, is "yes", and for the same reasons. Some problems have no solution which scales well, but do need to be solved anyway for certain inputs. If you need programmers who can handle such problems, there's a good way to let them prove it in interview, and that's to pitch them one and see whether they freak out.
If someone claims a CompSci background, then they should even be able to provide good solutions to certain NP-C problems on demand, such as solving the knapsack problem with dynamic programming. I would consider it pointless asking an applicant for a programming job to take a problem they've never seen before, and actually prove it NP-complete (for example by reducing knapsack to the specified problem). You don't need very many programmers per company who can do that (usually 0), and all you'll likely discover is how long the candidate keeps at it before attempting to change the subject and do something more valuable with the interview time...
[*] polynomial in the size of the input in bits, that is. You often see people discussing algorithmic complexity of integer problems like factorisation in terms of the size of the number represented by the input, e.g. "sqrt(N) trial divisions". But that's not how NP and NP-C are defined.
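(As an example of the "good solutions to certain NP-C problems on demand" the answer mentions, here's the textbook dynamic-programming routine for 0/1 knapsack; a sketch with names of my own choosing. Note that the O(n * capacity) running time is pseudo-polynomial: polynomial in the numeric value of the capacity, not in the number of bits used to write it down, which is why it doesn't contradict NP-completeness.)

def knapsack_01(values, weights, capacity):
    # Classic 0/1 knapsack by dynamic programming: O(len(values) * capacity).
    # best[w] = best total value achievable with total weight <= w.
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate weights downwards so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# e.g. knapsack_01([60, 100, 120], [10, 20, 30], 50) == 220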
That is evil!
If the interviewer asks an NP-complete question in an interview, the only response they can reasonably expect is that the interviewee respond with a proof that the problem is NP-complete. In a low-stress environment like a university homework question, this usually takes a bright student 2-3 or more hours. The proof itself can take several pages to write out completely, perhaps several hours of work in itself. In a high-stress environment like an interview, you can expect that the interviewee may not even recognize that the problem is NP-complete.
The only reasonable alternative is that the interviewee produce an approximation algorithm; however, in this case the interviewer should make it explicitly clear that they are fine with approximations.
Even so, most approximation algorithms only come within a factor of 2 of the correct answer.
I guess there is one more alternative: the interviewee suggests that a search-type algorithm may be the most suitable (take, for example, integer-domain optimization problems, which are NP-complete; most approximation algorithms use a branch-and-bound search spin on the simplex algorithm to produce decent results).
There is nothing wrong with giving an NP-complete problem as a programming challenge during an interview. I only see something wrong with expecting to find a polynomial-time solution to the problem during the interview.
An interviewer should want to see how a candidate deals with a variety of situations -- including situations that the candidate can't find an easy solution to. "Impossible" questions show how the candidate reacts when there's no simple solution. Does the candidate just give up? How many different approaches does the candidate try? How far-reaching are the solutions tried? When does the candidate ask for help -- and how? Does the candidate complain that the problem "isn't fair"?
In short, such an interview question isn't about solving P=NP... the answer it's looking for is psychological.
I prefer asking them to prove that P != NP or P == NP. Someday a candidate will answer it, I'll steal their answer and be famous!
On a more serious note, though, I think it's completely fair. Most NP-complete problems are easy to solve; the solutions just run very slowly. Unless the job requires them to know a lot about complexity theory, all they need to demonstrate is that they understand the solution will be slow. Bonus points if they know it's non-polynomial time, gold star if they know it's NP-complete.
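(A concrete instance of "easy to solve, just very slowly": a brute-force subset-sum search. This is my own toy illustration, not something from the answer; it checks every subset, so it is obviously correct and obviously exponential, which is exactly the kind of observation the answer says earns the bonus points.)

from itertools import combinations

def subset_sum(nums, target):
    # Try every subset (2^n of them) and return one that hits the target,
    # or None if no subset does. Correct, but exponential in len(nums).
    for size in range(len(nums) + 1):
        for combo in combinations(nums, size):
            if sum(combo) == target:
                return combo
    return None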
If such a question were given before an interview (to be answered at the interview), I would say it's OK. But solving such a difficult problem on the spot is definitely not going to be done well by any programmer, and if the programmer does do it well, that just means they can act on the spot (which isn't always the best thing for programming, since designing things needs time and checking every possible flaw) or that they have seen a similar problem before.
Edit:
Or possibly a discussion about the problem would be good, like laying down a plan of action whether or not you completely solve it, and discussing how feasible it is and whether there is a fast (but difficult) way to do it. I would not say that the interviewee should have to write down over 50 lines of C code in an interview to solve it, though.
When we start getting into algorithm design and more discrete computer science topics, we end up having to prove things all of the time. Every time I've seen somebody ask how to become really good at proofs, the common (and possibly lazy) answer is "practice".
Practicing is all fine if you have the basics down, but how do you get into the mind set for mathematical proofs? When did induction click? What resources are best for teaching these topics? What foundation topics should be researched prior to indulging in proof-writing?
They aren't being lazy; practice is the only way. Take classes in which you have to do proofs, and look online for lecture notes and old exams with answers from other colleges that go over proofs.
I'll start off my answer by admitting that as a CS student, I had a really tough time grasping a formal way of thinking, and it's never easy, unless you have a talent for it.
I'm afraid there is no better answer than practice and study.
A formal mathematical and algorithmic way of thinking and visioning problems is a skill which first demands a very deep understanding of the subjects you are dealing with. Second, it requires you have good knowledge of existing proofs. Try to envision yourself as some of the great scientists who came up with the algorithms you are studying. Understand how you would have tried to tackle that specific problem. Then see how they proved the correctness of their algorithm.
I can only recommend the greatest textbook in this subject which is Intro to Algorithms by CLRS. If you go through it from start to finish, including every exercise, you will enhance your skills.
Practice is really the only way, but it can be helped along by reading proofs as well. I won't touch on practice because the other answerers have covered everything I can think of, so I'll just talk about what I mean by reading.
Textbooks are very fond of writing out the "important" proofs. It's very nice, because they often prove very powerful statements, and are really fancy. But just as you shouldn't learn to be a world-class gymnast from day 1 by emulating an Olympian (as in, you'll probably break your spine), you shouldn't read any really big proofs (at first). What I found helpful was reading smaller proofs, usually from returned homework (I assume you're a student) or occasionally a textbook that wises up.
The reason why I think reading proofs is helpful is that there is a small set of "tricks" or "ideas" that constitute huge chunks of schoolwork proofs, and even more advanced ones. Data structure properties and recurrence relations usually involve proof by induction; proofs involving computability with finite state machines sometimes use the pigeonhole principle, and more rarely the idea of diagonalization (very infrequent, don't worry about it). And of course, just about every other proof uses proof by contradiction. I'm sure there are other handy tools that have slipped my mind, but I hope you get the idea.
Figuring out when, how, and why you'd approach a problem with one particular method or another is what takes practice and experience. I suggest reading proofs in addition to practice because it can often show you creative ways of using a proving method you've already encountered.
As a final note, try to remember when you first learned to program. How did you get better? Proving things and programming things are not too dissimilar, in my opinion. :)
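(To make the induction "trick" mentioned above concrete, a minimal homework-sized example:
Claim: $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ for all $n \ge 1$.
Base case ($n = 1$): both sides equal $1$.
Inductive step: assume the claim holds for $n$; then
$$\sum_{i=1}^{n+1} i = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2},$$
which is the claim for $n + 1$.)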
You get into the mind set for doing mathematical proofs by becoming a mathematician. I don't mean the last statement in a tautological way, but realize that a mathematical proof, as published in a mathematical journal, is something of a rhetorical artifact; i.e., it is a proof because a body of mathematicians agree that it is a proof. Ideally, the arguments in the proof could all be reduced to symbolic logic, but this is not how it is done in practice. The utter failure of computer-generated proofs to do valuable mathematics provides some evidence for this.
I get into the mind set by doing proofs and having them accepted by other mathematicians. I agree with the others that "practice" is essential. You don't do proofs unless you try, try, and try. Often the light dawns slowly.
The best resources are, of course, other mathematicians, and reading proofs. There are very few, if any, who can do true mathematical proofs without being part of the mathematical community.
I'm afraid that "practice" really is the best answer here.
It's very similar to programming: once you get the hang of it, you find patterns which solve problems particularly well, and you can create a picture of the high-level design of novel systems which you've never implemented before. However, neophyte programmers aren't aware of patterns: they hack away at code until they accidentally stumble on some solution which appears to "work".
When you're given a problem to prove, you can usually identify its properties ("Do I have a set of distinct objects?", "Am I generating permutations?", "Am I looking to minimize/maximize some value?", etc.). Sooner or later, proofs will clump together into vaguely similar groups, where techniques used to solve one problem can easily apply to novel variations.
Recommended reading:
The Algorithm Design Manual by Steven Skiena.
I have no idea. Probably the same way you get good at composing music.
When I try to prove something I'm not following some fixed strategy, I just think about the problem. Then [undefined amount of time] later, my mind returns a result and I jump up to write it down.
But practicing definitely helps. When I started trying to prove extremely simple statements, like DeMorgan's laws, I was completely hopeless. So I sat down and did the fifty or so optional example problems on a worksheet we were given. Now it feels natural to prove something.
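(For reference, the kind of "extremely simple statement" meant here really is only a few lines once it clicks; one of De Morgan's laws for sets, for instance. For any $x$:
$$x \in \overline{A \cup B} \iff x \notin A \cup B \iff (x \notin A) \wedge (x \notin B) \iff x \in \overline{A} \cap \overline{B},$$
so $\overline{A \cup B} = \overline{A} \cap \overline{B}$.)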
Practice and study make perfect sense, agreed. Some tricks that I found useful:
Make notes on everything you study (I've tried just reading books -- a lot of the material simply passes through).
In addition to the previous point: do all (or most) proofs by yourself, using the book/lecture notes as a guide; a lot of proofs contain phrases like "we can now see that XXX", and XXX is not always a trivial conclusion.
Do the exercises; for example, the CLRS book has dozens of them. Exercises are a good way to absorb the ideas behind the algorithms and their correctness proofs.
If you want to better understand the internals of algorithm -- consider participating in online programming contests like UVa's.
Recently in an interview I was asked several questions related to the Big-O of various algorithms that came up in the course of the technical questions. I don't think I did very well on this... In the ten years since I took the programming courses where we were asked to calculate the Big-O of algorithms, I have not had one discussion about the 'Big-O' of anything I have worked on or designed. I have been involved in many discussions with other team members and with the architects I have worked with about the complexity and speed of code, but I have never been part of a team that actually used Big-O calculations on a real project. The discussions are always "is there a better or more efficient way to do this given our understanding of our data?" Never "what is the complexity of this algorithm?"
I was wondering if people actually have discussions about the "Big-O" of their code in the real word?
It's not so much using it, it's more that you understand the implications.
There are programmers who do not realise the consequence of using an O(N^2) sorting algorithm.
I doubt many apart from those working in academia would use Big-O Complexity Analysis in anger day-to-day.
No needless n-squared
In my experience you don't have many discussions about it, because it doesn't need discussing. In practice, in my experience, all that ever happens is you discover something is slow and see that it's O(n^2) when in fact it could be O(n log n) or O(n), and then you go and change it. There's no discussion other than "that's n-squared, go fix it".
So yes, in my experience you do use it pretty commonly, but only in the basest sense of "decrease the order of the polynomial", and not in some highly tuned analysis of "yes, but if we switch to this crazy algorithm, we'll go from log N down to the inverse of the Ackermann function" or some such nonsense. Anything less than a polynomial, and the theory goes out the window and you switch to profiling (e.g. even to decide between O(n) and O(n log n), measure real data).
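(A made-up but typical instance of "that's n-squared, go fix it": intersecting two lists. The names are mine; the point is just that the fix is usually this mechanical.)

def common_items_slow(xs, ys):
    # O(len(xs) * len(ys)): `x in ys` scans the list ys every time.
    return [x for x in xs if x in ys]

def common_items_fast(xs, ys):
    # Roughly O(len(xs) + len(ys)): hashing makes each membership check O(1) on average.
    ys_set = set(ys)
    return [x for x in xs if x in ys_set]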
Big-O notation is rather theoretical, while in practice, you are more interested in actual profiling results which give you a hard number as to how your performance is.
You might have two sorting algorithms which by the book have O(n^2) and O(nlogn) upper bounds, but profiling results might show that the more efficient one might have some overhead (which is not reflected in the theoretical bound you found for it) and for the specific problem set you are dealing with, you might choose the theoretically-less-efficient sorting algorithm.
Bottom line: in real life, profiling results usually take precedence over theoretical runtime bounds.
I do, all the time. When you have to deal with "large" numbers, typically in my case: users, rows in database, promotion codes, etc., you have to know and take into account the Big-O of your algorithms.
For example, an algorithm that generates random promotion codes for distribution could be used to generate billions of codes... Using an O(N^2) algorithm to generate unique codes means weeks of CPU time, whereas an O(N) one means hours.
Another typical example is queries inside loops in code (bad!). People look up a table, then perform a query for each row... this brings the order up to N^2. You can usually change the code to use SQL properly and get orders of N or N log N.
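(A sketch of the promotion-code point, with invented details: keeping already-issued codes in a set makes each uniqueness check O(1) on average, so generating N codes is roughly O(N) as long as collisions are rare; checking each candidate against a plain list would push it towards O(N^2).)

import random
import string

def unique_codes(count, length=8):
    # Generate `count` distinct random promotion codes.
    alphabet = string.ascii_uppercase + string.digits
    codes = set()
    while len(codes) < count:
        code = ''.join(random.choice(alphabet) for _ in range(length))
        codes.add(code)   # set insert/lookup is O(1) on average
    return codes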
So, in my experience, profiling is useful ONLY AFTER the correct class of algorithm is used. I use profiling to catch bad behaviour, such as understanding why an application bound by "small" numbers under-performs.
The answer from my personal experience is - No. Probably the reason is that I use only simple, well understood algorithms and data structures. Their complexity analysis was already done and published decades ago. Why we should avoid fancy algorithms is better explained by Rob Pike here. In short, a practitioner almost never has to invent new algorithms and, as a consequence, almost never has to use Big-O.
Well that doesn't mean that you should not be proficient in Big-O. A project might demand the design and analysis of an altogether new algorithm. For some real-world examples, please read the "war stories" in Skiena's The Algorithm Design Manual.
To the extent that I know that three nested for-loops are probably worse than one nested for-loop. In other words, I use it as a reference gut feeling.
I have never calculated an algorithm's Big-O outside of academia. If I have two ways to approach a certain problem, if my gut feeling says that one will have a lower Big-O than the other one, I'll probably instinctively take the smaller one, without further analysis.
On the other hand, if I know for certain the size of n that comes into my algorithm, and I know for certain it to be relatively small (say, under 100 elements), I might take the most legible one (I like to know what my code does even one month after it has been written). After all, the difference between 100^2 and 100^3 executions is hardly noticeable by the user with today's computers (until proven otherwise).
But, as others have pointed out, the profiler has the last and definite word: If the code I write executes slowly, I trust the profiler more than any theoretical rule, and fix accordingly.
I try to hold off on optimizations until profiling data proves they are needed. Unless, of course, it is blatantly obvious at design time that one algorithm will be more efficient than the other options (without adding too much complexity to the project).
Yes, I use it. And no, it's not often "discussed", just like we don't often discuss whether "orderCount" or "xyz" is a better variable name.
Usually, you don't sit down and analyze it, but you develop a gut feeling based on what you know, and can pretty much estimate the O-complexity on the fly in most cases.
I typically give it a moment's thought when I have to perform a lot of list operations. Am I doing any needless O(n^2) complexity stuff, that could have been done in linear time? How many passes am I making over the list? It's not something you need to make a formal analysis of, but without knowledge of big-O notation, it becomes a lot harder to do accurately.
If you want your software to perform acceptably on larger input sizes, then you need to consider the big-O complexity of your algorithms, formally or informally. Profiling is great for telling you how the program performs now, but if you're using an O(2^n) algorithm, your profiler will tell you that everything is just fine as long as your input size is tiny. And then your input size grows, and runtime explodes.
People often dismiss big-O notation as "theoretical", or "useless", or "less important than profiling". Which just indicates that they don't understand what big-O complexity is for. It solves a different problem than a profiler does. Both are essential in writing software with good performance. But profiling is ultimately a reactive tool. It tells you where your problem is, once the problem exists.
Big-O complexity proactively tells you which parts of your code are going to blow up if you run it on larger inputs. A profiler can not tell you that.
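(A toy illustration of "the profiler says everything is fine until the input grows", not taken from the answer: the naive recursive Fibonacci takes exponential time, while the memoized version computes each subproblem once.)

from functools import lru_cache

def fib_naive(n):
    # Exponential time: unnoticeable at n = 20, hopeless well before n = 60.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Only n + 1 distinct subproblems, each computed once and cached.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)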
No. I don't use Big-O complexity in 'real world' situations.
My view on the whole issue is this (maybe wrong, but it's just my take):
The Big-O complexity stuff is ultimately there to help you understand how efficient an algorithm is. If, from experience or by other means, you understand the algorithms you are dealing with and are able to use the right algorithm in the right place, that's all that matters.
If you know this Big-O stuff and are able to use it properly, well and good.
If you don't know how to talk about algorithms and their efficiency in the mathematical way - the Big-O stuff - but you know what really matters - the best algorithm to use in a situation - that's perfectly fine.
If you don't know either, it's bad.
Although you rarely need to do deep big-o analysis of a piece of code, it's important to know what it means and to be able to quickly evaluate the complexity of the code you're writing and the consequences it might have.
At development time it often feels like it's "good enough". Eh, no one will ever put more than 100 elements in this array, right? Then, one day, someone will put 1000 elements in the array (trust users on that: if the code allows it, one of them will do it). And that n^2 algorithm that was good enough is now a big performance problem.
It's sometimes useful the other way around: if you know that you functionally have to make n^2 operations and the complexity of your algorithm happens to be n^3, there might be something you can do about it to make it n^2. Once it's n^2, you'll have to work on smaller optimizations.
Conversely, if you just wrote a sorting algorithm and find out it has linear complexity, you can be sure that there's a problem with it. (Of course, in real life, occasions where you have to write your own sorting algorithm are rare, but I once saw someone in an interview who was plainly satisfied with his single-for-loop sorting algorithm.)
Yes, for server-side code, one bottleneck can mean you can't scale, because you get diminishing returns no matter how much hardware you throw at the problem.
That being said, there are often other reasons for scalability problems, such as blocking on file- and network-access, which are much slower than any internal computation you'll see, which is why profiling is more important than BigO.