Same algorithm execution time on different computers? [closed]

If an algorithm with O(n log n) time complexity is executed in two seconds on one computer, how long does it take a 100 times faster computer to execute the same algorithm? Is it 2/100 seconds? As far as I know, Big O notation is a function of the input size and has nothing to do with the execution time of the same algorithm on different computers. Am I right?

The complexity does not say anything about the absolute running time.
Example: I'm working on a very complex algorithm in a small team, and the exact same algorithm runs in 30 minutes on my computer, in 60 minutes on a team member's computer (with performance similar to mine), and in 15 minutes on a netbook with a very slow processor but a different OS and a special extension. The running time also differs between compilers.
The complexity only gives you a hint about how much more time your algorithm needs as the input grows.
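To make that growth argument concrete, here is a minimal sketch of my own (not from the original answer) that times Python's built-in sort at doubling input sizes. The absolute times will differ from machine to machine, but the ratio between consecutive rows should stay slightly above 2, which is what O(n log n) growth predicts when the input doubles.

import random
import time

def time_sort(n):
    """Wall-clock time to sort n random floats with the built-in sort."""
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    data.sort()
    return time.perf_counter() - t0

prev = None
for n in [100_000, 200_000, 400_000, 800_000]:
    t = time_sort(n)
    if prev is None:
        print(f"n={n:>7}  time={t:.4f}s")
    else:
        print(f"n={n:>7}  time={t:.4f}s  ratio vs previous={t / prev:.2f}")
    prev = t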

Related

The number of swaps in a sorting algorithm [closed]

Recently I was learning sorting algorithms, and a description in a book said that the number of comparisons, the number of swaps, and the extra space used determine the performance. I was confused by that. Why does the number of swaps hurt performance?
By "performance" I mean the running time of the code. I have seen another post that mentioned swapping elements is vastly more expensive:
"In practice, swapping elements is vastly more expensive than comparing. This is even more pronounced when elements are far apart, due to caching. So, on modern hardware, algorithms that tend to swap less - and when they do swap, move elements the furthest toward their final destination - tend to win out."
I want to understand the effect of swaps on sorting algorithms. I'm new to this, please help.
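One way to see the effect is to instrument two classic sorts and count comparisons and swaps separately. The sketch below is my own illustration (not from the question or the quoted post): both algorithms perform on the order of n^2 comparisons, but selection sort does at most n - 1 swaps while bubble sort can swap on the order of n^2 times, which is one reason their running times diverge on the same input.

import random

def bubble_sort(a):
    """Return (comparisons, swaps) for a simple bubble sort of list a."""
    comps = swaps = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comps += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return comps, swaps

def selection_sort(a):
    """Return (comparisons, swaps); at most len(a) - 1 swaps are performed."""
    comps = swaps = 0
    for i in range(len(a)):
        smallest = i
        for j in range(i + 1, len(a)):
            comps += 1
            if a[j] < a[smallest]:
                smallest = j
        if smallest != i:
            a[i], a[smallest] = a[smallest], a[i]
            swaps += 1
    return comps, swaps

data = [random.randint(0, 1000) for _ in range(500)]
print("bubble   :", bubble_sort(data[:]))
print("selection:", selection_sort(data[:]))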

Performance analysis of Sorting Algorithms [closed]

I am trying to compare the performance of a couple of sorting algorithms. Are there any existing benchmarks that can make my task easier? If not, I want to create my own benchmark. How would I achieve that?
A couple of things I need to consider:
Test on different possible input permutations
Test on different scales of input size
Keep the hardware configuration consistent across all the algorithms
A major challenge is implementing the sorting algorithms themselves, because if one of my implementations happens to be inefficient, the benchmark will produce inaccurate results. How would I tackle this?
If someone comes up with their own sorting algorithm tomorrow, how would they compare it with the other sorting algorithms?
I am flexible about the programming language, but I would really appreciate it if someone could suggest some functions available in Python.
Well, I think you are having trouble understanding what a doubling ratio test is. I only know the basics of Python, so I got this code from here:
#!/usr/bin/python
import time

# measure wall time
t0 = time.time()
# call the main function of your sorting class here; once the sorting
# process ends, the time taken by the algorithm is printed below
procedure()
print(time.time() - t0, "seconds wall time")
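Since the answer names the doubling ratio test but does not show it, here is a minimal sketch of one (my own illustration; run_once, doubling_ratio_test, and the random input are placeholder names, so substitute your own sorting function and input generator). Time the sort at size n and 2n; log2 of the ratio T(2n)/T(n) estimates the exponent b in a model T(n) ~ a * n^b.

import math
import random
import time

def run_once(sort_fn, n):
    """Wall time for sort_fn on a fresh random list of length n."""
    data = [random.random() for _ in range(n)]
    t0 = time.time()
    sort_fn(data)
    return time.time() - t0

def doubling_ratio_test(sort_fn, start=100_000, rounds=5):
    n = start
    prev = run_once(sort_fn, n)
    for _ in range(rounds):
        n *= 2
        t = run_once(sort_fn, n)
        # log2(T(2n)/T(n)) approximates the exponent b in T(n) ~ a * n^b
        print(f"n={n:>9}  time={t:.3f}s  estimated exponent={math.log2(t / prev):.2f}")
        prev = t

doubling_ratio_test(lambda a: a.sort())   # swap in your own sorting function here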

How to figure out if a number is present in a very large dataset [closed]

I was wondering what would be a suitable answer to the interview question "Given a very large set of numbers, write a service that will return whether a number is present within 500 ms". There would be trillions of numbers. The question was supposed to test my knowledge of scalability and architecture. I answered that I would break up the set of numbers into multiple buckets and assign each bucket to a specific server, very much like a HashMap dividing its keys into buckets. Each server would maintain something like a bit array that marks whether a number is present. He then asked what if the numbers are very sparse, in which case I would use a balanced binary search tree such as a red-black or AVL tree. I guess there are multiple solutions to this problem. What would the other answers be?
A trillion is 10^12. The size of a bigint is 8 bytes, so you have 10^12 * 8 bytes = 8 TB (about 7.27 TiB).
Now you can easily buy an 8 TB disk for $500, and it is not hard to buy a 16 TB disk. So you can just store all of the numbers on one machine, with no need for any fancy multi-machine setup. Then you sort all of them once, which takes O(n log n), approximately 2.8 * 10^13 operations.
On my machine a Go program can execute approximately 10^9 operations in 0.6 seconds, so nothing stops a C program from sorting this many integers in about 5 hours. This is done only once. After that, looking up a number is a binary search of about log2(10^12) ≈ 40 operations, which is fewer than 50 disk seeks and fits comfortably within the 500 ms budget.
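As a toy, in-memory version of the "sort once, then binary-search" idea (an illustration of mine, scaled down from the on-disk trillions in the question), Python's bisect module answers membership queries in O(log n) after a single sort:

import bisect
import random

# Build and sort the dataset once (a small in-memory stand-in for the
# trillions of on-disk integers described in the question).
numbers = sorted(random.sample(range(10**9), 1_000_000))

def contains(sorted_nums, x):
    """Binary search: True if x is present, using about log2(n) comparisons."""
    i = bisect.bisect_left(sorted_nums, x)
    return i < len(sorted_nums) and sorted_nums[i] == x

print(contains(numbers, numbers[12345]))  # True
print(contains(numbers, -1))              # False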

Measuring performance and scalability of MPI programs [closed]

I want to measure the scalability and performance of an MPI program I wrote. So far I have used the MPI_Barrier function and a stopwatch library to measure the time. The problem is that the computation time depends a lot on the current load on my CPU and RAM, so I get different results every time. Moreover, my program runs in a VMware virtual machine, which I need in order to use Unix.
How can I get an objective measure of the times? I want to see whether or not my program scales well.
In general, the way most people measure time in their MPI programs is to use MPI_Wtime, since it's supposed to be a portable way to get the system time. That will give you a decent real-time (wall-clock) result.
If you're looking to measure CPU time instead of real time, that's a very different and much more difficult problem. Usually the way most people handle that is to run their benchmarks on an otherwise quiet system.
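A common pattern is to put a barrier before and after the region of interest so all ranks start together, then report the maximum elapsed time across ranks, since the slowest rank determines scalability. Here is a minimal sketch using the mpi4py binding (my assumption; the original program may well be in C, where MPI_Barrier and MPI_Wtime follow the same pattern):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

comm.Barrier()            # make sure every rank starts the timed region together
t0 = MPI.Wtime()

# ... the computation you want to time goes here ...

comm.Barrier()            # wait for the slowest rank before stopping the clock
elapsed = MPI.Wtime() - t0

# Report the maximum across ranks: the slowest rank bounds the program's speed.
max_elapsed = comm.reduce(elapsed, op=MPI.MAX, root=0)
if rank == 0:
    print(f"elapsed (max over ranks): {max_elapsed:.6f} s")

Run it with something like mpiexec -n 4 python timing.py, repeating the run several times on an otherwise idle machine to reduce the noise mentioned in the question.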

What Is "Time" When Talking About Big-O Notation? [closed]

I am getting ahead on next semester's classes and had a question about Big-O notation. What is the time factor measured in? Is it a measure of milliseconds, nanoseconds, or just an arbitrary measure based on the number of inputs, n, used to compare different versions of algorithms?
It kinda sorta depends on how exactly you define the notation (there are many different definitions that ultimately describe the same thing). If you define it on Turing machines, time is the number of computation steps performed. On real machines it'd be similar - for instance, the number of atomic instructions performed. As some of the comments have pointed out, the unit of time doesn't really matter anyway, because what's measured is the asymptotic performance, that is, how the performance changes with increasing input sizes.
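To illustrate "time as a count of steps" (a toy example of my own, not from the answer): count a basic operation explicitly and watch how the count grows with n. The count is the same on any machine, regardless of clock speed, which is exactly the machine-independent quantity Big-O talks about.

def count_comparisons_linear_search(data, target):
    """Count the basic operation (element comparisons) instead of seconds."""
    comparisons = 0
    for x in data:
        comparisons += 1
        if x == target:
            break
    return comparisons

# Worst case (target absent): the count grows linearly with n on any machine.
for n in [10, 100, 1000, 10000]:
    print(n, count_comparisons_linear_search(list(range(n)), -1))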
Note that this isn't really a programming question and probably not a good fit for this site. It's more of a CompSci thing, but I think the Computer Science Stack Exchange site is meant for postgraduates.

Resources