I am currently writing a program that identifies processes which are hung or out of control and are using an entire CPU core. The program then terminates them so that CPU usage can be kept under control.
However, I have run into a problem: When I execute the 'tasklist' command on Windows, it outputs this:
Image Name: Blockland.exe
PID: 4880
Session Name: Console
Session#: 6
Mem Usage: 127,544 K
Status: Running
User Name: [removed]\[removed]
CPU Time: 0:00:22
Window Title: C:\HammerHost\Blockland\Blockland.exe
So I know that the line which says "CPU Time" indicates the total processor time used by the program ever since it started.
But let's suppose there are 4 CPU cores on the system. Does this mean that the process used 22 seconds of one core, and therefore the equivalent of 5.5 seconds across the whole 4-core CPU? Or does it mean that the process used 22 seconds of the entire CPU?
It's the total CPU time across all cores. So, if the task used 10 seconds on one core and then 15 seconds later on a different core it would report 25 seconds. If it used 5 seconds on all four cores simultaneously, it would report 20 seconds.
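To make that concrete, here is a tiny Python sketch of the bookkeeping (illustrative only; the helper function is mine, and the numbers are the ones from the example above):

```python
def total_cpu_time(slices):
    """Sum CPU-time slices; each slice is (seconds, cores_busy).

    CPU time accumulates per core, so a slice that keeps n cores busy
    for t seconds contributes n * t seconds of CPU time.
    """
    return sum(seconds * cores for seconds, cores in slices)

# 10 s on one core, then 15 s later on a different core -> 25 s
print(total_cpu_time([(10, 1), (15, 1)]))   # 25
# 5 s on all four cores simultaneously -> 20 s
print(total_cpu_time([(5, 4)]))             # 20
```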
Related
I was analyzing a mini-dump of one of my processes using WinDbg. I used the .time command to see the process times, and I got the result below. I was expecting Process Uptime = Kernel Time + User Time, which was not the case. Does anybody know why, or is my interpretation wrong?
0:035> .time
Debug session time: Tue May 5 14:30:24.000 2020 (UTC - 7:00)
System Uptime: not available
Process Uptime: 3 days 5:29:22.000
Kernel time: 0 days 9:06:26.000
User time: 11 days 18:50:47.000
The kernel & user times match the CPU / Kernel & User Times displayed in Process Explorer under the Performance tab, and are likely related to the times returned by GetProcessTimes. They add up to the Total Time displayed in Process Explorer, or the CPU Time displayed in Task Manager for the same process.
This "CPU time" is the total time across all CPUs, and does not include time the process spent sleeping, waiting, or otherwise sitting idle. Because of that it can be either (a) smaller than the process "uptime" which is simply the time difference between the start and end times, in the case of mostly idle processes, or (b) larger than the process uptime in the case of heavy usage across multiple CPUs.
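To make the arithmetic concrete, here is a small Python check using the numbers from the .time output above (the conversion helper is just for illustration):

```python
def hms_to_seconds(days, h, m, s):
    """Convert 'D days H:MM:SS' to total seconds."""
    return ((days * 24 + h) * 60 + m) * 60 + s

uptime = hms_to_seconds(3, 5, 29, 22)    # Process Uptime:  3 days 5:29:22
kernel = hms_to_seconds(0, 9, 6, 26)     # Kernel time:     0 days 9:06:26
user   = hms_to_seconds(11, 18, 50, 47)  # User time:      11 days 18:50:47

cpu_time = kernel + user
print(cpu_time > uptime)            # True: more CPU time than wall-clock time
print(round(cpu_time / uptime, 1))  # ~3.8 cores busy on average
```

So this process kept roughly 3.8 cores busy on average, which is exactly case (b): heavy usage across multiple CPUs.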
I have a Fortran program. I compile it with gfortran and then run it as time ./a.out.
The program completes and outputs the runtime as:
real 0m36.037s
user 0m36.028s
sys 0m0.004s
i.e. ~36 seconds
Now suppose I want to run this script multiple times, in parallel. For this I am using GNU Parallel.
Using the lscpu command tells me that I have 8 CPUs, with 2 threads per core and 4 cores per socket.
I create some file example.txt of the form,
time ./a.out
time ./a.out
time ./a.out
time ./a.out
...
which goes on for 8 lines.
I can then run these in parallel on 8 cores as,
parallel -j 8 :::: example.txt
In this case I would expect the runtime of each instance to still be 36 seconds, and the total runtime to be ~36 seconds. In actuality, however, the runtime of each instance roughly doubles.
If I instead run on 4 cores instead of 8 (-j 4) the problem disappears, and each script reverts to taking 36 seconds to run.
What is the cause of this? I have heard talk in the past of 'overheads', but I am not sure exactly what is meant by this.
What is happening is that you have only one socket with 4 physical cores in it.
Those are the real cores of your machine.
The total number of CPUs you see as output of lscpu is calculated using the following formula: #sockets * #cores_per_socket * #threads_per_core.
In your case it is 1*4*2=8.
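As a quick sanity check of that formula in Python (the values are hardcoded from the lscpu output described in the question):

```python
# Values reported by lscpu for this machine
sockets = 1
cores_per_socket = 4
threads_per_core = 2

logical_cpus = sockets * cores_per_socket * threads_per_core
physical_cores = sockets * cores_per_socket

print(logical_cpus)    # 8: what lscpu reports as "CPU(s)"
print(physical_cores)  # 4: independent execution units that do the real work
```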
Threads per core are a sort of virtual CPU, and they do not always perform like real CPUs, especially for compute-intensive processing (this technology is called hyper-threading).
Hence, when you try to squeeze two threads onto one core, they end up executing almost serially.
Take a look at this article for more info.
I have a production CPU issue: after days of regular activity, the CPU suddenly starts to spike. I saved a dump file and ran the !runaway command to get the list of threads consuming the most CPU time. The output is below:
User Mode Time
Thread Time
21:110 0 days 10:51:39.781
19:f84 0 days 10:41:59.671
5:cc4 0 days 0:53:25.343
48:74 0 days 0:34:20.140
47:1670 0 days 0:34:09.812
13:460 0 days 0:32:57.640
8:14d4 0 days 0:19:30.546
7:d90 0 days 0:03:15.000
23:1520 0 days 0:02:21.984
22:ca0 0 days 0:02:08.375
24:72c 0 days 0:02:01.640
29:10ac 0 days 0:01:58.671
27:1088 0 days 0:01:44.390
As you can see, the output shows two threads, 21 and 19, that consume more than 20 hours of CPU time combined. I was able to track the call stack of one of those threads like so:
~21s
!CLRStack
The output doesn't matter at the moment; let's call it the "X call stack".
What I would like is an explanation of the !runaway command's output. From what I understand, a dump file is a snapshot of the current state of the application. So my questions are:
How can the !runaway command show a value of 10:51 hours for thread 21, when the dumping process only took a few seconds?
Does it mean that the specific "instance" of the X call stack I found with the !CLRStack command has been hanging for more than 10 hours? Or is it the total time thread 21 has spent across all of its executions of the X call stack? If so, it seems strange that thread 21 is responsible for so many executions of the X call stack, since the origin is a web request (the runtime should assign a random thread to each call).
I have a speculation that may answer both questions: maybe WinDbg calculates the time by taking the thread's actual call stack time and dividing it by the duration of the dumping process. So if, for example, the specific execution of the X call stack took 1 second and the whole dumping process took 3 seconds (33%), while the process had been running for a total of 24 hours, the output would show:
8 hours (33% of 24 hours)
Am I right, or completely got it wrong?
This answer is intended to be comprehensible for the OP. It's not intended to be correct down to all bits and bytes.
[...] and dividing it by the scope of the dumping process [...]
This understanding is probably the root of all evil: dumping a process only gives you the state of the process at a certain point in time. The duration of dumping the process is 0.0 seconds as far as the process is concerned, since all threads are suspended during the operation (in time relative to your process, nothing changes and time stands still; wall-clock time of course keeps moving).
You are thinking of dumping a process as monitoring it over a longer period of time, which is not the case. Dumping a process just takes time because it involves disk activity etc.
So no, there is no "scope" and thus you cannot (it's really hard) measure performance issues with crash dumps.
How can the runaway command shows 10:51 hours value for thread 21, [...]
How can your C# program know how long the program is running if you only have a timer event that fires every second? The answer is: it uses a variable and increases the value.
That's roughly how Windows does it. Windows is responsible for thread scheduling and each time it re-schedules threads, it updates a variable that contains the thread time.
When the crash dump is written, that information, which the OS has been collecting all along, is simply included in the dump.
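Here is a deliberately simplified Python model of that bookkeeping. This is not how Windows is actually implemented; it just illustrates that a snapshot of a counter is enough, with no monitoring window at all (the slice lengths below are made up):

```python
# Toy model: at every reschedule, the (toy) scheduler adds the elapsed
# time slice to a per-thread counter. A crash dump taken later just
# snapshots these counters - no observation period is needed.
cpu_time = {}  # thread id -> accumulated CPU seconds

def account_slice(thread_id, slice_seconds):
    """Called by the scheduler when a thread is switched out."""
    cpu_time[thread_id] = cpu_time.get(thread_id, 0.0) + slice_seconds

# Thread 21 happened to be scheduled many times over several days...
for _ in range(1000):
    account_slice(21, 39.1)   # ...accumulating ~10.9 hours in total
account_slice(19, 5.0)

# !runaway only has to read the counter at dump time:
print(round(cpu_time[21] / 3600, 1))   # 10.9 (hours), from the counter alone
```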
[...] when the dumping process only took a few seconds?
Since the crash dump is taken by a thread of WinDbg, the time for that is accounted on that thread. You would need to debug WinDbg and do !runaway on a WinDbg thread to see how much CPU time that took. Potentially a nice exercise and the .dbgdbg (debug the debugger) command may be new to you; other than that, this particular case is not really helpful.
Does it mean that the specific "instance" of the X callstack I've found with the !CLRStack command is hang more than 10 hours?
No. It means that at the point in time when you created the crash dump, that specific method was executed. Not more, not less.
This information is unrelated to !runaway, because the thread may have been doing something totally different for a long time, but that ended just a moment ago.
or it's the total time the 21 thread executed his whole X callstacks executions?
No. A crash dump does not contain such detailed performance data. You need a performance profiler like JetBrains dotTrace to get that information. A profiler will look at call stacks very often, then aggregate identical call stacks and derive CPU time per call stack.
A certain computer system runs in a multi-programming environment using a non-preemptive algorithm. In this system, two processes A and B are stored in the process queue, and A has a higher priority than B. The table below shows the estimated execution time for each process; for example, process A uses the CPU, I/O, and then the CPU sequentially for 30, 60, and 30 milliseconds respectively. Which of the following is the estimated time in milliseconds to complete both A and B? Here, the multi-processing overhead of the OS is negligibly small. In addition, CPU and I/O operations can be executed concurrently, but the I/O operations for A and B cannot be performed in parallel.
UNIT: millisecond

        CPU    I/O    CPU
A        30     60     30
B        45     45     --
Please help me... I need to explain this in front of the class tomorrow, but I can't seem to get the idea of it.
A has the highest priority, but since the system is non-preemptive, this is only a tiebreaker when both processes need a resource at the same time.
At t=0, A gets the CPU for 30 ms, B waits as it needs the CPU.
At t=30, A releases the CPU, B gets the CPU for 45 ms, while A gets the I/O for 60 ms.
At t=75, the CPU sits idle as B is waiting for A to finish I/O, and A is not ready to use the CPU.
At t=90, A releases I/O and gets the CPU for another 30 ms, while B gets the I/O for 45 ms.
At t=120, A releases the CPU and is finished.
At t=135, B releases I/O and is finished.
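The timeline above can be replayed with a few lines of Python (the variable names are mine; all durations come from the table):

```python
# Replaying the non-preemptive schedule step by step (milliseconds).
a_cpu1, a_io, a_cpu2 = 30, 60, 30   # process A: CPU, I/O, CPU
b_cpu, b_io = 45, 45                # process B: CPU, I/O

t = a_cpu1                          # t=30:  A releases the CPU
a_io_done = t + a_io                # t=90:  A's I/O finishes
b_cpu_done = t + b_cpu              # t=75:  B releases the CPU

a_done = a_io_done + a_cpu2         # t=120: A's second CPU burst ends
b_io_start = max(b_cpu_done, a_io_done)  # I/O device is busy with A until t=90
b_done = b_io_start + b_io          # t=135: B's I/O finishes

print(max(a_done, b_done))          # 135
```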
It takes the longest path:
Non-preemptive (cooperative) multitasking means that the processes effectively share the CPU and the I/O device, so in the worst case the completion time is the longest chain of resource usage.
CPU:        B = 45 is longer than A = 30   -> 45
I/O:        A = 60 and B = 45              -> 45 + 60
CPU again:  A = 30                         -> 45 + 60 + 30 = 135
I will explain in brief; please elaborate for your classroom discussion.
For your answer of 135: while process A waits for its I/O, the CPU is given to process B, so B's CPU burst overlaps with A's I/O. The completion time for A and B is therefore
process A CPU + process A I/O + process B I/O
30 + 60 + 45 = 135 ms
A CPU executes, on average, 60 machine instructions per μs. Suppose that a program processes a file of records, where reading and writing a record each take 10 μs. If the program needs to execute 120 machine instructions between each read and write operation, what is the CPU utilization?
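Assuming the I/O does not overlap with the computation (the usual reading of this kind of textbook question), the arithmetic can be sketched in Python:

```python
# Per record: read (10 us) + 120 instructions + write (10 us),
# assuming I/O and computation do not overlap.
instructions_per_us = 60
read_us = write_us = 10.0

cpu_us = 120 / instructions_per_us   # 2 us of computation per record
io_us = read_us + write_us           # 20 us of I/O per record

utilization = cpu_us / (cpu_us + io_us)
print(f"{utilization:.1%}")          # 9.1%
```

So the CPU is busy for only 2 μs out of every 22 μs cycle, roughly 9.1% utilization under that no-overlap assumption.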