Not understanding the formula to calculate the execution time of a program task - CPU

"Suppose your CPU is rated at 2.5 GHz and you execute a program task on your app that requires the execution of 16 million ISA3 instructions. If each ISA3 instructions executes an average of 3.7 mc2 instructions, what is the execution time of the program task?"
I know the answer; it's 0.024.
I just don't understand why that's the answer. I don't understand where the numbers go, or whether you have to convert something by a factor of 1000, or what.
Someone walk me through it?
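A walkthrough of the arithmetic, assuming (as this kind of exercise usually intends) that each Mc2 microinstruction takes exactly one clock cycle:

    # 16 million ISA3 instructions, each averaging 3.7 Mc2 instructions,
    # on a 2.5 GHz CPU (2.5e9 cycles per second).
    clock_rate = 2.5e9        # cycles per second
    isa3_count = 16e6         # ISA3 instructions
    mc2_per_isa3 = 3.7        # average Mc2 instructions per ISA3 instruction

    total_cycles = isa3_count * mc2_per_isa3   # 59.2 million cycles
    print(total_cycles / clock_rate)           # 0.02368 s, i.e. ~0.024 s

No conversion to 1000 of anything is involved: "giga" in 2.5 GHz just means 2.5 x 10^9 cycles per second, and dividing cycles by cycles-per-second leaves seconds.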

Related

Need help understanding Amdahl's Law for program optimization

Amdahl's law as explained by Intel states that "Parallelizing the part of your program where it spends 80% of its time cannot speed it up by more than a factor of five, no matter how many cores you run it on".
Can somebody explain where the number five comes from?
T is the time taken for the whole task in serial;
P is the portion of time to execute, in serial, the parallelizable part;
1 - P is the portion of time to execute, in serial, the non-parallelizable part: 1 - 0.80 = 0.20
So the program can be sped up by a maximum of a factor of 2, not 5!
References:
https://workforce.libretexts.org/Bookshelves/Information_Technology/Information_Technology_Hardware/Advanced_Computer_Organization_Architecture_(Njoroge)/02%3A_Multiprocessing/2.01%3A_The_Amdahl's_law
https://www.intel.com/content/www/us/en/develop/documentation/advisor-user-guide/top/model-threading-designs/annotate-code-for-deeper-analysis/annotate-code-to-model-parallelism/use-amdahl-law.html
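For what it's worth, here is a quick numeric check, assuming the standard form of Amdahl's law, S = 1/((1 - P) + P/N), where P is the parallelizable fraction and N is the number of cores:

    # Amdahl's law: speedup S for parallel fraction p on n cores.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.80  # 80% of the serial time is parallelizable
    for n in (2, 4, 16, 256, 1_000_000):
        print(n, round(amdahl_speedup(p, n), 3))
    # 2 1.667
    # 4 2.5
    # 16 4.0
    # 256 4.923
    # 1000000 5.0

As N grows, the P/N term vanishes and the speedup approaches 1/(1 - P) = 1/0.20 = 5. That asymptote is where Intel's "factor of five" comes from: no matter how many cores you add, the 20% serial part caps the overall speedup at 5x.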

What is the difference between MIPS and Execution time

When it comes to rating the performance of a processor, is calculating the Million Instructions Per Second (MIPS) a practical measure to use?
Or is finding the Execution Time (IC x CPI x 1/CR) the main thing to use?
Imagine you have one CPU that does 100 million tiny instructions (which don't do much on their own) per second. Next, imagine another CPU that needs only a quarter as many instructions to do the same work and can do 50 million of those larger instructions per second. The second CPU has half the MIPS but is twice as fast.
Now imagine you have 2 CPUs that both execute the exact same instructions, where one CPU runs at 1 GHz, can do 5 instructions per cycle, and stalls rarely; and the other runs at 4 GHz, can only do 2 instructions per cycle, and spends a lot more time stalled doing nothing (due to cache misses, branch mispredictions, etc.). In this case the 1 GHz CPU might be significantly faster than the 4 GHz CPU.
Finally, imagine you have 2 CPUs that both execute the exact same instructions, have exactly the same clock frequency, execute the same number of instructions per cycle, and spend exactly the same amount of time stalled. One CPU overheats easily and has to under-clock itself to a crawl after 250 milliseconds of sustained work just to avoid melting itself, while the other can run at max speed continuously without ever overheating.
Execution time is how long it takes to do some work taking everything into account (and can be extremely different for different types of work); while MIPS is like a real estate agent determining how much a building is worth by measuring the weight of a rubber chicken.
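To put numbers on the second scenario, here is a toy calculation using Execution Time = IC x CPI x (1/Clock Rate); the instruction count and the stall-inflated CPI are made-up values for illustration only:

    # Both CPUs execute the same 1 billion instructions (made-up count).
    ic = 1e9

    # CPU A: 1 GHz, 5 instructions per cycle -> CPI = 0.2, rarely stalls.
    time_a = ic * 0.2 * (1 / 1e9)    # 0.2 s

    # CPU B: 4 GHz, 2 instructions per cycle -> ideal CPI = 0.5, but
    # suppose cache misses and mispredictions inflate its CPI to 1.0.
    time_b = ic * 1.0 * (1 / 4e9)    # 0.25 s

    print(time_a, time_b)  # the 1 GHz CPU finishes first

Both CPUs would post impressive MIPS figures, but only the execution time tells you which one actually finishes the work sooner.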

H2O - Not seeing much speed-up after moving to powerful machine

I am running a Python program that calls H2O for deep learning (training and testing). The program runs in a loop of 20 iterations, and each iteration calls H2ODeepLearningEstimator() 4 times, along with the associated predict() and model_performance() calls. I am doing h2o.remove_all() and cleaning up all data-related Python objects after each iteration.
Data size: training set of 80,000 rows with 122 features (all float), with 20% used for validation (10-fold CV); test set of 20,000 rows. Doing binary classification.
Machine 1: Windows 7, 4 core, Xeon, each core 3.5GHz, Memory 32 GB
Takes about 24 hours to complete
Machine 2: CentOS 7, 20 core, Xeon, each core 2.0GHz, Memory 128 GB
Takes about 17 hours to complete
I am using h2o.init(nthreads=-1, max_mem_size = 96)
So, the speed-up is not that much.
My questions:
1) Is the speed-up typical?
2) What can I do to achieve substantial speed-up?
2.1) Will adding more cores help?
2.2) Are there any H2O configuration or tips that I am missing?
Thanks very much.
- Mohammad,
Graduate student
If the training time is the main effort, and you have enough memory, then the speed-up will be proportional to cores times core-speed: Machine 2 gives 20 x 2.0 = 40 versus Machine 1's 4 x 3.5 = 14. So you might have expected a 40/14 ~ 2.86x speed-up (i.e. your 24 hrs coming down to the 8-10 hour range).
There is a typo in your h2o.init(): 96 should be "96g". However, I think that was a typo when writing the question, as h2o.init() would return an error message. (And H2O would fail to start if you'd tried "96", with the quotes but without the "g".)
You didn't show your h2o.deeplearning() command, but I am guessing you are using early stopping. And that can be unpredictable. So, what might have happened is that your first 24hr run did, say, 1000 epochs, but your second 17hr run did 2000 epochs. (1000 vs. 2000 would be quite an extreme difference, though.)
It might be that you are spending too much time scoring. If you've not touched the defaults, this is unlikely. But you could experiment with train_samples_per_iteration (e.g. set it to 10 times the number of your training rows).
What can I do to achieve substantial speed-up?
Stop using cross-validation. That might be a bit controversial, but personally I think 80,000 training rows is going to be enough to do an 80%/10%/10% split into train/valid/test. That will be 5-10 times quicker.
If it is for a paper, and you want to show more confidence in the results, once you have your final model, and you've checked that test score is close to valid score, then rebuild it a couple of times using a different seed for the 80/10/10 split, and confirm you end up with the same metrics. (*)
*: By the way, take a look at the score for each of the 10 cv models you've already made; if they are fairly close to each other, then this approach should work well. If they are all over the place, you might have to re-consider the train/valid/test splits - or just think about what it is in your data that might be causing that sensitivity.
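A minimal sketch of those two changes (the "96g" fix and an 80/10/10 split in place of 10-fold CV), assuming the standard h2o-py API; the file path, response column, and seed below are placeholders:

    import h2o
    from h2o.estimators import H2ODeepLearningEstimator

    h2o.init(nthreads=-1, max_mem_size="96g")   # "96g", not 96

    df = h2o.import_file("data.csv")            # placeholder path
    train, valid, test = df.split_frame(ratios=[0.8, 0.1], seed=1234)

    # One train/valid/test split instead of 10-fold CV, plus a larger
    # train_samples_per_iteration to spend less time scoring.
    model = H2ODeepLearningEstimator(train_samples_per_iteration=800_000)
    model.train(y="response",                   # placeholder column name
                training_frame=train, validation_frame=valid)
    print(model.model_performance(test_data=test))

The 800,000 for train_samples_per_iteration follows the 10-times-training-rows suggestion above.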

Solving computer performance with Amdahl's law speedup factor

A program runs in 100 s, and multiply instructions account for 80% of that time. The program needs to be made 2 times faster. What speedup must the multiply instructions achieve to reach that overall speedup?
Please help me solve this question using Amdahl's law.
Hint:
The running time is 80 s for multiplies and 20 s for the rest.
You need to bring this down to 30 s + 20 s (50 s total, i.e. the required 2x overall speedup).
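Working the hint through:

    # 100 s total = 80 s of multiplies + 20 s of everything else.
    t_mult, t_rest = 80.0, 20.0

    # A 2x overall speedup means 50 s total; the 20 s can't shrink,
    # so the multiplies must drop to 50 - 20 = 30 s.
    t_target = (t_mult + t_rest) / 2    # 50 s
    t_mult_new = t_target - t_rest      # 30 s

    print(t_mult / t_mult_new)          # 80/30 ~ 2.67x multiply speedup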

What is the performance of 10 processors capable of 200 MFLOPS running code which is 10% sequential and 90% parallelizable?

A simple problem from Wilkinson and Allen's Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers. I'm working through the exercises at the end of the first chapter and want to make sure that I'm on the right track. The full question is:
1-11 A multiprocessor consists of 10 processors, each capable of a peak execution rate of 200 MFLOPs (millions of floating point operations per second). What is the performance of the system as measured in MFLOPs when 10% of the code is sequential and 90% is parallelizable?
I assume the question wants me to find the number of operations per second of a serial processor which would take the same amount of time to run the program as the multiprocessor.
I think I'm right in thinking that 10% of the program is run at 200 MFLOPs, and 90% is run at 2,000 MFLOPs, and that I can average these speeds to find the performance of the multiprocessor in MFLOPs:
1/10 * 200 + 9/10 * 2000 = 1820 MFLOPs
So when running a program which is 10% serial and 90% parallelizable the performance of the multiprocessor is 1820 MFLOPs.
Is my approach correct?
ps: I understand that this isn't exactly how this would work in reality because it's far more complex, but I would like to know if I'm grasping the concepts.
Your calculation would be fine if 90% of the time, all 10 processors were fully utilized, and 10% of the time, just 1 processor was in use. However, I don't think that is a reasonable interpretation of the problem. I think it is more reasonable to assume that if a single processor were used, 10% of its computations would be on the sequential part, and 90% of its computations would be on the parallelizable part.
One possibility is that the sequential part and parallelizable parts can be run in parallel. Then one processor could run the sequential part, and the other 9 processors could do the parallelizable part. All processors would be fully used, and the result would be 2000 MFLOPS.
Another possibility is that the sequential part needs to be run first, and then the parallelizable part. If a single processor needed 1 hour to do the first part, and 9 hours to do the second, then it would take 10 processors 1 + 0.9 = 1.9 hours total, for an average of about (1*200 + 0.9*2000)/1.9 ~ 1053 MFLOPS.
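A small sketch putting numbers on both interpretations:

    n_procs, peak = 10, 200.0        # processors, MFLOPS each
    seq, par = 0.1, 0.9              # fractions of single-processor time

    # Interpretation 1: the sequential and parallel parts overlap, so
    # all 10 processors stay busy the whole time.
    print(n_procs * peak)            # 2000.0 MFLOPS

    # Interpretation 2: sequential first (1 processor), then parallel
    # (all 10 processors). Normalize total single-processor time to 1.
    t_multi = seq + par / n_procs    # 0.1 + 0.09 = 0.19
    work = peak * (seq + par)        # total work done at 200 MFLOPS
    print(work / t_multi)            # ~1052.6 MFLOPS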
