Are CPU ticks and jiffies the same thing, or are they different? If they are different, can anyone explain the difference between them?
Thanks in advance.
How to convert the time difference given by the GPU timer while rendering into the equivalent CPU timing?
Let's say glGetQueryObjectuiv(query, GL_QUERY_RESULT, &elapsed_time) returns the elapsed time for that query. I presume this elapsed time corresponds to the GPU frequency.
How do I get the corresponding CPU time that is equivalent to the GPU elapsed time?
It's a timer query - it returns a time in nanoseconds. Time doesn't change with frequency ...
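For reference, here is a minimal sketch of how such a timer query is typically issued and read back, assuming an OpenGL 3.3+ context and a function loader such as GLEW are already set up (neither is mentioned in the original question). The point is that the result is already in nanoseconds, so it can be compared directly with any CPU-side clock that also reports nanoseconds; no frequency conversion is involved.

    /* Minimal sketch: measure GPU time for one batch of work with a
     * GL_TIME_ELAPSED query. Assumes a current OpenGL 3.3+ context and
     * GLEW (or a similar loader) already initialized. */
    #include <GL/glew.h>
    #include <stdio.h>

    void time_draw_call(void (*draw)(void)) {
        GLuint query;
        glGenQueries(1, &query);

        glBeginQuery(GL_TIME_ELAPSED, query);
        draw();                              /* the GPU work being measured */
        glEndQuery(GL_TIME_ELAPSED);

        GLuint64 elapsed_ns = 0;
        /* GL_QUERY_RESULT blocks until the GPU has finished and the
         * result (in nanoseconds) is available. */
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);

        printf("GPU elapsed: %llu ns (%.3f ms)\n",
               (unsigned long long)elapsed_ns, elapsed_ns / 1.0e6);

        glDeleteQueries(1, &query);
    }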
Are CPU instructions per cycle and FLOPS per cycle the same thing?
How do I find the instructions per cycle of my CPU?
Intel(R) Core(TM) i5-2540M CPU @ 2.60GHz
No, they are not the same.
FLOPS is floating-point operations per second. This is used as a measurement because, historically, floating-point operations have been very expensive compared to other operations.
Instructions per cycle would be an average over every kind of instruction executed per cycle. You would need a specific test case to measure it, because the number of instructions that can be performed per cycle varies a lot depending on what the instructions do. For example, if the first instruction performed is a branch and the CPU doesn't predict the branch correctly, that may be the only instruction executed that cycle.
On this page you can find that this specific processor has these measured speeds:
2.42 GFLOPS/core
9.65 GFLOPS/computer
That is giga floating-point operations per second. You would divide it by the CPU frequency if you wanted floating-point operations per cycle.
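As a rough illustration of that division, using the 2.42 GFLOPS/core figure quoted above and the 2.60 GHz nominal clock of the i5-2540M from the question (turbo frequencies are ignored here):

    /* Sketch: convert a measured GFLOPS-per-core figure into
     * floating-point operations per cycle by dividing by the clock. */
    #include <stdio.h>

    int main(void) {
        double gflops_per_core = 2.42;  /* measured value quoted above */
        double clock_ghz       = 2.60;  /* i5-2540M nominal frequency */

        double flops_per_cycle = gflops_per_core / clock_ghz; /* ~0.93 */
        printf("~%.2f floating-point operations per cycle\n", flops_per_cycle);
        return 0;
    }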
I'm using time/1 to measure CPU time in YAP Prolog and I'm getting, for example:
514.000 CPU in 0.022 seconds (2336363% CPU)
yes
What I'd like to ask is: what is the interpretation of these numbers? Does 514.000 represent CPU seconds? What are the "0.022 seconds" and the CPU percentage that follow?
Thank you
There are only two hard things in Computer Science: cache invalidation
and naming things.
-- Phil Karlton
My app is reporting CPU time, and people reasonably want to know how much time this is out of, so they can compute % CPU utilized. My question is, what's the name for the wall clock time times the number of CPUs?
If you add up the total user, system, idle, etc. time for a system, you get the total wall clock time, times the number of CPUs. What's a good name for that? According to Wikipedia, CPU time is:
CPU time (or CPU usage, process time) is the amount of time for which
a central processing unit (CPU) was used for processing instructions
of a computer program, as opposed to, for example, waiting for
input/output (I/O) operations.
"total time" suggests just wall clock time, and doesn't connote that over a 10 second span, a four-cpu system would have 40 seconds of "total time."
Total Wall Clock Time of all CPUs
Naming things is hard, so why waste a good 'un once you've found it?
Aggregate time: 15 of 40 seconds.
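For what it's worth, here is a minimal sketch of that bookkeeping. The numbers (a 10-second window, 4 CPUs, 15 seconds of CPU time consumed) are hypothetical, chosen to match the 15-of-40-seconds example above:

    /* Sketch: "aggregate time" is wall-clock time multiplied by the
     * number of CPUs; utilization is CPU time as a share of that. */
    #include <stdio.h>

    int main(void) {
        double wall_seconds = 10.0;   /* length of the measurement window */
        int    num_cpus     = 4;      /* logical CPUs in the system */
        double cpu_seconds  = 15.0;   /* CPU time consumed by the app */

        double aggregate   = wall_seconds * num_cpus;            /* 40 s */
        double utilization = 100.0 * cpu_seconds / aggregate;    /* 37.5% */

        printf("Aggregate time: %.1f of %.1f seconds (%.1f%% CPU)\n",
               cpu_seconds, aggregate, utilization);
        return 0;
    }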
The interrupt service routine (ISR) for a device transfers 4 bytes of data from the
device on each device interrupt. On each interrupt, the ISR executes 90 instructions
with each instruction taking 2 clock cycles to execute. The CPU takes 20 clock cycles
to respond to an interrupt request before the ISR starts to execute instructions.
Calculate the maximum data rate, in bits per second, that can be input from this
device, if the CPU clock frequency is 100MHz.
Any help on how to solve this will be appreciated.
What I'm thinking: 90 instructions x 2 cycles = 180 cycles,
plus the 20-cycle response delay = 200 cycles per interrupt,
so at 100 MHz = 100 million cycles per second, that's 100 million / 200 = 500,000 interrupts per second, each transferring 4 bytes,
so 2 million bytes or 16 million bits per second.
I think it's right, but I'm not 100% sure. Can anyone confirm?
Cheers
Your calculation looks good to me. If you want an "engineering answer" then I'd add a 10% margin, something like: "Theoretical max data rate is 16 Mbits per second. Using a 10% margin, no more than 14.4 Mbits per second."
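Here is the same arithmetic written out, with the figures from the question hard-coded (100 MHz clock, 20-cycle response latency, 90 instructions at 2 cycles each, 4 bytes per interrupt); it prints 16 million bits per second:

    /* Sketch of the ISR data-rate calculation from the question. */
    #include <stdio.h>

    int main(void) {
        const double clock_hz        = 100e6;   /* 100 MHz CPU clock */
        const double response_cycles = 20;      /* cycles before the ISR starts */
        const double isr_cycles      = 90 * 2;  /* 90 instructions x 2 cycles */
        const double bytes_per_irq   = 4;

        double cycles_per_irq = response_cycles + isr_cycles;      /* 200 */
        double irqs_per_sec   = clock_hz / cycles_per_irq;         /* 500,000 */
        double bits_per_sec   = irqs_per_sec * bytes_per_irq * 8;  /* 16,000,000 */

        printf("Max data rate: %.0f bits/s (%.1f Mbit/s)\n",
               bits_per_sec, bits_per_sec / 1e6);
        return 0;
    }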