Average instruction time

Let's say we have an average of one page fault every 20,000,000 instructions, a normal instruction takes 2 nanoseconds, and a page fault causes the instruction to take an additional 10 milliseconds. What is the average instruction time, taking page faults into account?

Out of every 20,000,000 instructions, one of them will page-fault.
Therefore, the 20,000,000 instructions will take
(2 nanoseconds * 20,000,000) + 10 milliseconds
Take that result (which is the total time for 20,000,000 instructions) and divide it by the number of instructions to get the time per instruction.

What is the average instruction time, taking page faults into account?
The average instruction time is the total time, divided by the number of instructions.
So: what's the total time for 20,000,000 instructions?

2.5 nanoseconds? Pretty simple arithmetic, I guess.

If 1 in 20,000,000 instructions causes a page fault then you have a page fault rate of:
Page Fault Rate = (1/20000000)
You can then calculate your average time per instruction:
Average Time = (1 - Page Fault Rate) * 2 ns + (Page Fault Rate * 10 ms)
Comes to 2.5 ns / instruction
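For completeness, here is a minimal Python sketch of that arithmetic, showing both routes to the same ~2.5 ns figure (the variable names are just illustrative):

    # Average instruction time with page faults: two equivalent ways to get ~2.5 ns.
    normal_ns = 2                      # a normal instruction takes 2 ns
    fault_penalty_ns = 10e6            # a page fault adds 10 ms = 10,000,000 ns
    insns_per_fault = 20_000_000       # one fault every 20,000,000 instructions

    # Way 1: total time for a block of 20,000,000 instructions, divided by the count.
    block_ns = normal_ns * insns_per_fault + fault_penalty_ns
    print(block_ns / insns_per_fault)  # 2.5 ns

    # Way 2: weighted average using the page-fault rate (the 2 ns of the faulting
    # instruction is negligible next to 10 ms, so this also comes out to ~2.5 ns).
    fault_rate = 1 / insns_per_fault
    print((1 - fault_rate) * normal_ns + fault_rate * fault_penalty_ns)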

Related

What is the difference between MIPS and Execution time

When it comes to rating the performance of a processor, is calculating the Million Instructions Per Second (MIPS) a practical measure to use?
Or is finding the Execution Time (IC x CPI x 1/CR) the main thing to use?
Imagine you have one CPU that does 100 million tiny little instructions (that don't do much on their own) per second. Next, imagine another CPU that can do 50 million larger instructions per second, where you only need a quarter as many instructions to do the same work. The second CPU has half as many MIPS but is twice as fast.
Now imagine you have 2 CPUs that both execute the exact same instructions, where one CPU runs at 1 GHz, can do 5 instructions per cycle, and stalls rarely; and the other CPU runs at 4 GHz, can only do 2 instructions per cycle, and spends a lot more time stalled doing nothing (due to cache misses, branch mispredictions, etc.). In this case the 1 GHz CPU might be significantly faster than the 4 GHz CPU.
Finally, imagine you have 2 CPUs that both execute the exact same instructions, both have exactly the same clock frequency, both execute the same number of instructions per cycle, and both spend exactly the same amount of time stalled. One CPU overheats easily and has to "under-clock" itself to a crawl after 250 milliseconds of not being idle just to avoid melting itself, and the other CPU can go at max speed continuously without ever overheating.
Execution time is how long it takes to do some work taking everything into account (and can be extremely different for different types of work); while MIPS is like a real estate agent determining how much a building is worth by measuring the weight of a rubber chicken.
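As a toy illustration of the first comparison above (the instruction counts and rates are the made-up ones from that paragraph, and I'm assuming the job needs 100 million of the small instructions), execution time is instruction count divided by instructions per second, so the lower-MIPS CPU wins:

    # MIPS vs execution time, using the made-up numbers above (illustrative only).
    job_insns_a, rate_a = 100e6, 100e6   # CPU A: tiny instructions, 100 MIPS
    job_insns_b, rate_b = 25e6, 50e6     # CPU B: needs 1/4 the instructions, 50 MIPS

    time_a = job_insns_a / rate_a        # 1.0 second
    time_b = job_insns_b / rate_b        # 0.5 second
    print(time_a, time_b)                # half the MIPS, twice as fast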

Calculating Cycles Per Instruction

From what I understand, to calculate CPI, it's the percentage of the type of instruction multiplied by the number of cycles right? Does the type of machine have any part of this calculation whatsoever?
I have a problem that asks me if a change should be recommended.
Machine 1: 40% R - 5 Cycles, 30% lw - 6 Cycles, 15% sw - 6 Cycles, 15% beq - 3 Cycles, on a 2.5 GHz machine
Machine 2: 40% R - 5 Cycles, 30% lw - 6 Cycles, 15% sw - 6 Cycles, 15% beq - 4 Cycles, on a 2.7 GHz machine
By my calculations, machine 1 has 5.15 CPI while machine 2 has 5.3 CPI. Is it okay to ignore the GHz of the machine and say that the change would not be a good idea or do I have to factor the machine in?
I think the point is to evaluate a design change that makes an instruction take more clocks, but allows you to raise the clock frequency. (i.e. leaning towards a speed-demon design like Pentium 4, instead of brainiac like Apple's A7/A8 ARM cores. http://www.lighterra.com/papers/modernmicroprocessors/)
So you need to calculate instructions per second to see which one will get more work done in the same amount of real time. i.e. (clock/sec) / (clocks/insn) = insn/sec, cancelling out the clocks from the units.
Your CPI calculation looks OK; I didn't check the arithmetic, but yes, it's a weighted average of the cycle counts according to the instruction mix.
These numbers are obviously super simplified; any CPU worth building at 2.5GHz would have some kind of branch prediction so the cost of a branch isn't just a 3 or 4 instruction bubble. And taking ~5 cycles per instruction on average is pathetic. (Most pipelined designs aim for at least 1 instruction per clock.)
Caches and superscalar CPUs also lead to complex interactions between instructions depending on whether they depend on earlier results or not.
But this is sort of like what you might do if considering increasing the L1d cache load-use latency by 1 cycle (for example), if that took it off the critical path and let you raise the clock frequency. Or vice versa, tightening up the latency or reducing the number of pipeline stages on something at the cost of reducing frequency.
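If it helps, here is a small Python sketch of that arithmetic: a weighted-average CPI from the given instruction mix, then (clock/sec) / (clocks/insn) = instructions/sec for each machine (the dictionaries simply encode the numbers from the question):

    # Weighted-average CPI and instructions/second for the two machines in the question.
    mix = {"R": 0.40, "lw": 0.30, "sw": 0.15, "beq": 0.15}
    cycles_m1 = {"R": 5, "lw": 6, "sw": 6, "beq": 3}   # 2.5 GHz machine
    cycles_m2 = {"R": 5, "lw": 6, "sw": 6, "beq": 4}   # 2.7 GHz machine

    def weighted_cpi(cycles):
        return sum(mix[op] * cycles[op] for op in mix)

    cpi1, cpi2 = weighted_cpi(cycles_m1), weighted_cpi(cycles_m2)   # 5.15 and 5.30
    ips1 = 2.5e9 / cpi1   # (clocks/sec) / (clocks/insn) = instructions/sec
    ips2 = 2.7e9 / cpi2
    print(cpi1, cpi2)     # 5.15 5.3
    print(ips1, ips2)     # ~4.85e8 vs ~5.09e8, so machine 2 gets more work done per second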
Cycles per instruction is a count of cycles; GHz doesn't matter as far as that average goes. But from your numbers we can see that one instruction takes more clocks on one machine, and the processors run at different speeds.
So while it takes more cycles to do the same job on the faster processor, the higher clock rate DOES compensate for that, so it seems clear this is a question about whether the processor speed makes up for the extra clocks.
5.15 cycles/instruction divided by 2.5 (giga) cycles/second; the cycles cancel out and you get
2.06 seconds per (giga) instruction, i.e. 2.06 (nano)seconds per instruction
5.30 / 2.7 = 1.96296 (nano)seconds per instruction
The faster one takes slightly less time per instruction, so it will run the program faster.
Another way to see this is to check the math.
Take 100 instructions of this mix on each machine. 15 of them are beq: at 3 cycles each that is 45 cycles, and the other 85 instructions take 470 cycles, so 515 cycles total on the slower machine (which is just the 5.15 CPI again). The same 15 beq take 4 cycles each on the faster machine, so 530 cycles total for the same instructions.
515 cycles at 2.5 GHz vs 530 cycles at 2.7 GHz.
We want the amount of time.
Hz is cycles/second and we want seconds on top,
so we want
cycles / (cycles/second), so the cycles cancel out and seconds are left on top.
1/2.5 = 0.400 ns per cycle (400 picoseconds)
1/2.7 = 0.3704 ns per cycle
0.400 * 515 = 206.0 units of time (ns)
0.3704 * 530 = 196.3 units of time (ns)
So despite taking 15 more cycles per 100 instructions, the processor speed difference is enough to compensate.
2.7/2.5 = 1.08
530/515 = 1.029
so 2.5 * 1.029 ≈ 2.57, so a processor at about 2.57 GHz or faster would run this program faster.
Now, what were the rules for changing computers? Is less time defined as a reason to change computers? What is the definition of better? How much more power does the faster one consume? It might take less time, but the power consumption might not be linear, so it may draw more watts despite taking less time. I assume the question is not that detailed, meaning it is vague, meaning it is a poorly written question on its own, so it comes down to what the textbook or lecture defined as the threshold for changing to the other processor.
Disclaimer: don't blame me if you miss this question on your homework/test.
Outside an academic exercise like this, the real world is full of pipelined processors (not all of them, but most of the ones folks write programs for), and you basically can't put a number on clock cycles per instruction type in a way that lets you do this calculation, because of a laundry list of factors. Make sure you understand that: it's a nice exercise, but that specific exercise is difficult and dangerous to attempt on real-world processors. Dangerous in that, however hard you work, you may be measuring something incorrectly, jumping to the wrong conclusions, and making bad recommendations as a result. At the same time, there is very much the reality that a faster clock does improve some percentage of the execution while another percentage suffers, and the question is whether there is a net gain or loss. Or a newer processor design, faster or slower, may have features that perform better than an older processor's, but not every feature will be better; there is a tradeoff, and then we get into what "better" means.
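And a short Python version of the per-100-instructions check above, including the break-even clock frequency (numbers straight from the question's mix; nothing here is measured on real hardware):

    # Cycle counts for 100 instructions of the given mix, and the break-even clock.
    beq_slow = 15 * 3                     # 45 cycles for the 15 beq on the slower machine
    beq_fast = 15 * 4                     # 60 cycles for the same 15 beq on the faster one
    others = 40 * 5 + 30 * 6 + 15 * 6     # 470 cycles for the other 85 instructions

    total_slow = others + beq_slow        # 515 cycles at 2.5 GHz
    total_fast = others + beq_fast        # 530 cycles at 2.7 GHz

    print(total_slow / 2.5)               # 206.0 ns per 100 instructions
    print(total_fast / 2.7)               # ~196.3 ns per 100 instructions
    print(2.5 * total_fast / total_slow)  # ~2.57 GHz: break-even clock for the change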

Average access time with cache misses

The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10 nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves 100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write operations. The cache hit ratio is 0.9. What is the average memory access time?
The question asks for the time taken for "100 fetch operations, 60 operand read operations and 40 memory operand write operations", divided by the total number of memory operations.
Total number of memory operations = 100 + 60 + 40 = 200
Time taken for 100 fetch operations (a fetch is a read)
= 100 * ((0.9 * 1) + (0.1 * 5))   // 0.9 is the cache hit ratio; 1 ns is the read time on a hit, 5 ns on a miss
= 140 ns
Time taken for 60 read operations
= 60 * ((0.9 * 1) + (0.1 * 5))
= 84 ns
Time taken for 40 write operations
= 40 * ((0.9 * 2) + (0.1 * 10)) = 112 ns   // 2 ns and 10 ns are the write times on a cache hit and miss respectively
So the total time taken for 200 operations = 140 + 84 + 112 = 336 ns
Average time taken = time per operation = 336 / 200 = 1.68 ns
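Here is the same calculation as a small Python sketch (times and hit ratio from the question; instruction fetches are treated as reads):

    # Average memory access time over the 200 accesses in the question.
    hit = 0.9
    read_hit_ns, read_miss_ns = 1, 5      # read time on a cache hit / miss
    write_hit_ns, write_miss_ns = 2, 10   # write time on a cache hit / miss

    avg_read_ns = hit * read_hit_ns + (1 - hit) * read_miss_ns     # 1.4 ns
    avg_write_ns = hit * write_hit_ns + (1 - hit) * write_miss_ns  # 2.8 ns

    # 100 instruction fetches + 60 operand reads are reads; 40 operand writes are writes.
    total_ns = (100 + 60) * avg_read_ns + 40 * avg_write_ns        # 140 + 84 + 112 = 336
    print(total_ns / 200)                                          # 1.68 ns per access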

Memory Access time in 2 level Paging

Consider a system with a two-level paging scheme in which a regular memory access takes 150 nanoseconds, and servicing a page fault takes 8 ms. An average instruction takes 100 ns of CPU time and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is one in every 10,000 instructions. What is the effective average instruction execution time?
This was asked in GATE 2004. To solve the question, I would follow this approach:
T(memory access avg) = 0.90 * 150 + 0.10 * (150 + 150 + 150) = 180 ns
(150 ns for the level-1 page table, 150 ns for the level-2 page table, and 150 ns for the memory access itself)
T(effective) = 100 + 2 * 180 + (1/10000) * 8 * 10^6 = 1260 ns
Is this approach correct? I also have the following doubts:
There won't be a page fault when there is a TLB hit, because the most frequently used pages have to be in memory. Is that correct?
What is the size of the page table for a process? Say, for a 32-bit virtual address, do we allocate a page table with 2^32 entries for every process? How are memory limits managed in paging?
Please explain these concepts.
I would suggest the following:
Here, 100 ns goes to instruction execution (no difference of opinion there).
Now, the TLB hit ratio is given as 90%, so whenever there is a TLB miss we have to do 2 extra memory accesses, since it is a 2-level paging scheme.
And irrespective of TLB hit or miss, 2 * (150 + 8*10^6 * 1/20000) should be added, which is the memory access time for the contents plus the page-fault overhead (the fault rate per access is 1/20000, since each instruction makes two accesses).
I think your expression assumes that, for an instruction, whenever a TLB hit occurs for the first access it also occurs for the second,
so you assume hit-hit or miss-miss, while since the given TLB hit ratio of 90% is per access and not per instruction, I feel there should be all 4 possible combinations:
hit-hit, miss-miss, hit-miss, miss-hit.
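For reference, a minimal Python sketch of the expression from the question, under the same assumptions it makes (a TLB miss costs two extra 150 ns page-table lookups, and the 8 ms page-fault service time is charged once per 10,000 instructions):

    # Effective average instruction time per the expression above (all times in ns).
    mem_ns = 150                  # one memory access
    tlb_hit = 0.90
    cpu_ns = 100                  # CPU time per instruction
    page_fault_ns = 8e6           # 8 ms page-fault service time
    fault_rate = 1 / 10_000       # one page fault per 10,000 instructions

    # TLB hit: just the memory access. TLB miss: two page-table lookups plus the access.
    avg_access_ns = tlb_hit * mem_ns + (1 - tlb_hit) * (3 * mem_ns)   # 180 ns

    print(cpu_ns + 2 * avg_access_ns + fault_rate * page_fault_ns)    # 1260 ns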

understanding CPI and cache access

These are previous homework problems, but I am using them as exam review. I am changing numbers around from what is actually in the problem. I just want to make sure I have a grasp on the concepts. I already have the answers, just need clarification that I understand them. This is not homework but review work.
Anyway, this focuses on aspects of CPI.
The first problem:
An application running on a 1GHz processor has 30% load-store instructions, 30% arithmetic, and 40% branch instructions. The individual CPIs are 3 for load-store, 4 for arithmetic, 5 for branch instructions. Determine the overall CPI of this program on the given processor.
My answer: The overall CPI is the sum of the sub-CPIs, each weighted by the fraction of instructions of that type, i.e. 3*0.3 + 4*0.3 + 5*0.4 = 0.9 + 1.2 + 2 = 4.1
Now, the processor is enhanced to run at 1.6GHz. The CPIs of the branch instructions remain the same but load-store and arithmetic instruction CPIs both increase to 6 cycles. A new compiler is in use which eliminates 30% of branch instructions and 10% of load-stores. Determine the new overall CPI and the factor by which the application will be faster or slower.
My answer: Once again, the new CPI is just the sum of its parts. However, the parts have changed and this must be accounted for. Branch instructions will drop by 30% (0.4*0.7=0.28) and load-stores will drop by 10% (0.3*0.9=0.27); arithmetic instructions will now account for the rest of the instructions (1-0.28-0.27=0.45), or 45%. These will be multiplied by the new sub-CPIs to get: 6*0.45+6*0.27+5*0.28=5.72.
Now, the processor enhancement is 60% faster, and the CPI is greater by (5.72-4.1)/4.1 = 39.5%. Thus, the application will run roughly 0.6*0.395 = 23.7% faster.
Now, the second problem:
A new processor with a load/store architecture has an ideal CPI of 1.25. Typical applications on this processor are a mix of 50% arithmetic and logic, 25% conditional branching and 25% load/store. Memory is accessed via a separate data and instruction cache, with a 5% instruction cache miss rate and 10% data miss rate. The penalty of any cache miss is 100 cycles and hits don't produce any penalties.
What is the effective CPI?
My answer: The effective CPI is the ideal CPI, plus the stalled cycles per instruction due to cache access. The ideal CPI is, as given, 1.25. The stalled cycles per instruction is (0.1*100*0.25) + (0.05*100*1) = 7.5. 0.1*100*0.25 is the data miss rate multiplied by the stalled cycle penalty which is also multiplied by the load/store percentage (which is where the data accesses take place); 0.05*100*1 is the instruction miss rate, which is the instruction cache miss rate times the stalled cycle penalty, instruction access take place in 100% of the program, so this is multiplied by 1. Following from this, the effective CPI is 1.25 + 7.5 = 8.75.
What is the misses per 1000 instruction for typical applications and what is the average memory access time (in clock cycles) for typical applications?
My answers: The misses per 1000 instructions is equal to the stalled cycles per instruction due to cache access (as given above: 7.5), divided by 1000, which equals 7.5/1000 = 0.0075
When discussing the average memory access time (AMAT), we first must talk about the total number of accesses here, which is the percentage of data accesses (25%) plus the percentage of instruction accesses (100%), or 125%=1.25. The data accesses are .25/1.25 and the instruction accesses are 1/1.25.
The AMAT equals the percentage of data accesses (.25/1.25) multiplied by the sum of the hit time (1) and the data miss rate multiplied by the miss penalty (0.1*100), or (.25/1.25)(1+0.1*100) and this is added to the percentage of instruction accesses (1/1.25) multiplied by the sum of the hit time (1) and the instruction miss rate multiplied by the miss penalty (0.05*100), or (1/1.25)(1+0.05*100). Put together, the AMAT is (.25/1.25)(1+0.1*100)+(1/1.25)(1+0.05*100)=7.
Once again, sorry for the wall of text. If I am wrong, please try to help me understand how I am wrong. I tried to show all my work to make it as easy as possible to understand. Thanks in advance.
There's an error in the last part of your question. When they ask:
What is the misses per 1000 instruction for typical applications and what is the average memory
access time (in clock cycles) for typical applications?
what's needed here is the number of misses you will get for every 1000 instructions, which in this case would be 1000*1*0.05 for instruction cache misses and 1000*0.25*0.1 for data cache misses. This equals 75 misses per 1000 instructions.
To calculate the AMAT, you use the formula AMAT = hit time + (miss rate*miss penalty)
In this case, your miss rate is 75/1000 and your miss penalty is 100 cycles. The hit time is given as 1.25 cycles (your ideal CPI!).
Hope this helps and all the best for your exam!
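Here is a small Python sketch of the misses-per-1000-instructions count and the effective-CPI arithmetic discussed above (the AMAT figure depends on which hit time you assume, so it is left out here):

    # Misses per 1000 instructions and effective CPI for the second problem.
    ideal_cpi = 1.25
    i_miss_rate, d_miss_rate = 0.05, 0.10
    data_accesses_per_insn = 0.25     # only the 25% load/store instructions access data
    miss_penalty = 100                # cycles per miss

    misses_per_insn = 1.0 * i_miss_rate + data_accesses_per_insn * d_miss_rate   # 0.075
    print(1000 * misses_per_insn)                                 # 75 misses per 1000 insns

    stall_cycles = misses_per_insn * miss_penalty                 # 7.5 stall cycles/insn
    print(ideal_cpi + stall_cycles)                               # 8.75 effective CPI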

Resources