When I pause my music from the lock screen, the elapsed-time counter stops, but when I press play again it jumps ahead by the amount of time it was paused. I tried setting the rate to 0 on pause, but that just makes the whole time display disappear.
Does anyone have any other info on this?
I was analyzing a mini-dump of one of my processes using WinDbg. I used the .time command to see the process times and got the result below. I was expecting Process Uptime = Kernel Time + User Time, which was not the case. Does anybody know why, or is my interpretation wrong?
0:035> .time
Debug session time: Tue May 5 14:30:24.000 2020 (UTC - 7:00)
System Uptime: not available
Process Uptime: 3 days 5:29:22.000
Kernel time: 0 days 9:06:26.000
User time: 11 days 18:50:47.000
The kernel & user times match the CPU / Kernel & User Times displayed in Process Explorer under the Performance tab, and are likely related to the times returned by GetProcessTimes. They add up to the Total Time displayed in Process Explorer, or the CPU Time displayed in Task Manager for the same process.
This "CPU time" is the total time across all CPUs, and does not include time the process spent sleeping, waiting, or otherwise sitting idle. Because of that it can be either (a) smaller than the process "uptime" which is simply the time difference between the start and end times, in the case of mostly idle processes, or (b) larger than the process uptime in the case of heavy usage across multiple CPUs.
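For reference, here is a minimal sketch (my own illustration, not code from the question or answer) that reads those same counters with GetProcessTimes and compares them against wall-clock uptime derived from the process creation time:

#include <windows.h>
#include <stdio.h>

// FILETIME is expressed in 100-nanosecond units.
static double FileTimeToSeconds(const FILETIME& ft) {
    ULARGE_INTEGER u;
    u.LowPart = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 1e7;
}

int main() {
    FILETIME creation, exitTime, kernel, user;
    if (!GetProcessTimes(GetCurrentProcess(), &creation, &exitTime, &kernel, &user))
        return 1;

    FILETIME nowFt;
    GetSystemTimeAsFileTime(&nowFt);

    double uptime = FileTimeToSeconds(nowFt) - FileTimeToSeconds(creation);
    double cpu    = FileTimeToSeconds(kernel) + FileTimeToSeconds(user);

    // For a mostly idle process, cpu is much smaller than uptime; for a busy
    // process running on several CPUs at once, cpu can exceed uptime, as in
    // the .time output above.
    printf("uptime: %.1f s, kernel+user CPU time: %.1f s\n", uptime, cpu);
    return 0;
}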
I have a production CPU issue: after days of regular activity, the CPU suddenly starts to peak. I saved a dump file and ran the !runaway command to get the list of threads consuming the most CPU time. The output is below:
User Mode Time
Thread Time
21:110 0 days 10:51:39.781
19:f84 0 days 10:41:59.671
5:cc4 0 days 0:53:25.343
48:74 0 days 0:34:20.140
47:1670 0 days 0:34:09.812
13:460 0 days 0:32:57.640
8:14d4 0 days 0:19:30.546
7:d90 0 days 0:03:15.000
23:1520 0 days 0:02:21.984
22:ca0 0 days 0:02:08.375
24:72c 0 days 0:02:01.640
29:10ac 0 days 0:01:58.671
27:1088 0 days 0:01:44.390
As you can see, the output shows two threads, 21 and 19, that consume more than 20 hours of CPU time combined. I was able to track the callstack of one of those threads like so:
~21s
!CLRStack
The output doesn't matter at the moment; let's call it the "X callstack".
What I would like is an explanation of the !runaway command output. From what I understand, a dump file is a snapshot of the current state of the application, so my questions are:
How can the !runaway command show a value of 10:51 hours for thread 21, when the dumping process only took a few seconds?
Does it mean that the specific "instance" of the X callstack I found with the !CLRStack command has been hanging for more than 10 hours? Or is it the total time thread 21 has spent across all of its executions of the X callstack? If so, it seems strange that thread 21 would be responsible for so many executions of the X callstack, since as far as I know the origin is a web request (the runtime should assign an arbitrary thread to each call).
I have a speculation that may answer those two questions:
Maybe WinDbg calculates the time by taking the actual time of the thread's callstack and dividing it by the scope of the dumping process. So if, for example, the specific execution of the X callstack took 1 second and the whole dumping process took 3 seconds (33%), while the process had been running for a total of 24 hours, the output would show:
8 hours (33% of 24 hours)
Am I right, or have I got it completely wrong?
This answer is intended to be comprehensible for the OP. It's not intended to be correct down to every bit and byte.
[...] and dividing it by the scope of the dumping process [...]
This understanding is probably the root of all evil: dumping a process only gives you the state of the process at a certain point in time. As far as the process is concerned, the duration of dumping it is 0.0 seconds, since all threads are suspended during the operation (in the process's relative time, nothing has changed and time is standing still; of course wall-clock time keeps moving).
You are thinking of dumping a process as monitoring it over a longer period of time, which is not the case. Dumping a process just takes time because it involves disk activity etc.
So no, there is no "scope", and thus you cannot (or at least it's really hard to) measure performance issues with crash dumps.
How can the !runaway command show a value of 10:51 hours for thread 21, [...]
How can your C# program know how long it has been running if you only have a timer event that fires every second? The answer is: it uses a variable and increments the value.
That's roughly how Windows does it. Windows is responsible for thread scheduling and each time it re-schedules threads, it updates a variable that contains the thread time.
When the crash dump is written, that information, which the OS has been collecting all along, is included in the dump.
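To make that concrete, here is a minimal sketch (my illustration, not from the answer): the per-thread kernel and user counters that !runaway reports are maintained by the scheduler and can be read at any time with GetThreadTimes; the dump simply snapshots them.

#include <windows.h>
#include <stdio.h>

// FILETIME is expressed in 100-nanosecond units.
static double FileTimeToSeconds(const FILETIME& ft) {
    ULARGE_INTEGER u;
    u.LowPart = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 1e7;
}

int main() {
    FILETIME creation, exitTime, kernel, user;
    // Query the CPU time the OS has already accumulated for the current thread.
    if (GetThreadTimes(GetCurrentThread(), &creation, &exitTime, &kernel, &user)) {
        printf("this thread has used %.3f s user / %.3f s kernel CPU time so far\n",
               FileTimeToSeconds(user), FileTimeToSeconds(kernel));
    }
    return 0;
}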
[...] when the dumping process only took a few seconds?
Since the crash dump is taken by a thread of WinDbg, the time for that is accounted to that thread. You would need to debug WinDbg and do !runaway on a WinDbg thread to see how much CPU time that took. Potentially a nice exercise, and the .dbgdbg (debug the debugger) command may be new to you; other than that, this particular case is not really helpful.
Does it mean that the specific "instance" of the X callstack I found with the !CLRStack command has been hanging for more than 10 hours?
No. It means that at the point in time when you created the crash dump, that specific method was executed. Not more, not less.
This information is unrelated to !runaway, because the thread may have been doing something totally different for a long time, but that ended just a moment ago.
or is it the total time thread 21 has spent across all of its executions of the X callstack?
No. A crash dump does not contain such detailed performance data. You need a performance profiler like JetBrains dotTrace to get that information. A profiler will look at call stacks very often, then aggregate identical call stacks and derive CPU time per call stack.
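As a purely conceptual sketch of that last sentence (captureCallStack() below is a hypothetical stand-in, not a real API; a real profiler such as dotTrace has its own machinery for this): a sampling profiler records the current call stack at a fixed interval, aggregates identical stacks, and attributes roughly interval times sample-count of time to each one. A crash dump, by contrast, contains exactly one such sample per thread.

#include <chrono>
#include <iostream>
#include <map>
#include <string>
#include <thread>

// Hypothetical stand-in: a real profiler would walk the target thread's stack
// and resolve symbols; here we just return a fixed string so the sketch runs.
std::string captureCallStack() { return "Main -> HandleRequest -> ComputeHash"; }

int main() {
    const auto interval = std::chrono::milliseconds(10);
    std::map<std::string, long long> samples;

    for (int i = 0; i < 500; ++i) {             // sample for about 5 seconds
        ++samples[captureCallStack()];
        std::this_thread::sleep_for(interval);
    }

    for (const auto& [stack, count] : samples)  // estimated time per call stack
        std::cout << count * interval.count() << " ms in:\n" << stack << "\n";
    return 0;
}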
After streaming a still image (with x264) for a long period of time, the transition to live video makes the CPU spike to 100% for a period of time roughly proportional to how long the still image was streamed. More specifically, transitioning after a minute results in a CPU spike lasting about 15 seconds; transitioning after 30 minutes results in a spike lasting closer to 3 minutes.
Does this symptom make any sense and is there anything I can do about it?
Does anyone know how to avoid Windows 7 sometimes pausing for 300-600 ms, even freezing the system time and the multimedia timer (so if you measure time before and after such a pause, it measures 0 ms, while the performance counter in fact does measure the pause correctly)? CPU load is pretty low (10%). The system uses a new MLC SSD; do these still have stutter issues?
I found this behaviour by measuring timestamps from a camera grabbing at 6 frames per second. I logged when images came in, and looking at the grabbing log, the times between images were fine. I then added a warning whenever the time between them was 20% too fast or 20% too slow, and sometimes (once per hour, sometimes only after 4 hours) got 300-600 ms warnings, followed by some "too fast" warnings (the image buffer suddenly delivers, in a burst, the images that built up during the 300-600 ms pause). However, the times in the log entries show that the system time wasn't updated during this period.
Log timestamps are given by GetLocalTime(LPSYSTEMTIME), and the time between grabbed images is measured with the performance counter. When I use the multimedia timer to measure the time between new images, its duration matches what you get by subtracting the timestamps in the log, which made it seem weird that it reported extra images only 0-30 ms apart.
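For illustration, here is a minimal sketch (my reconstruction of the measurement setup, not the poster's code) of the two clocks being compared: a wall-clock timestamp from GetLocalTime next to an interval measured with QueryPerformanceCounter. If the system stalls while the system time is frozen, the two disagree, as in the log further down.

#include <windows.h>
#include <stdio.h>

int main() {
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    for (;;) {
        Sleep(166);  // stand-in for "a new image arrived" (ideal frame interval)

        QueryPerformanceCounter(&now);
        double qpcMs = 1000.0 * (now.QuadPart - prev.QuadPart) / freq.QuadPart;
        prev = now;

        SYSTEMTIME st;
        GetLocalTime(&st);  // the timestamp that goes into the log
        printf("[%02d:%02d:%02d:%03d] New Image, QPC delta: %.0f ms\n",
               st.wHour, st.wMinute, st.wSecond, st.wMilliseconds, qpcMs);
    }
}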
I tried all kinds of tweaks and driver updates for the network interface, and different cameras, with no luck.
166 ms is the ideal time between images, but here is an example of "bursts" of missing time slots and of the discrepancy between the system time and the performance counter:
[03:06:09:48:22:615]New Image
[03:06:09:48:22:781]New Image
[03:06:09:48:22:949]New Image
[03:06:09:48:22:974]New Image. Warning Time since last: 224ms
[03:06:09:48:23:083]New Image
[03:06:09:48:23:238]New Image. Warning Time since last: 454ms
[03:06:09:48:23:261]New Image. Warning Time since last: 224ms
[03:06:09:48:23:415]New Image. Warning Time since last: 353ms
[03:06:09:48:23:551]New Image
[03:06:09:48:23:583]New Image. Warning Time since last: 330ms
[03:06:09:48:23:734]New Image. Warning Time since last: 451ms
[03:06:09:48:23:754]New Image. Warning Time since last: 119ms
[03:06:09:48:23:854]New Image
[03:06:09:48:24:020]New Image
[03:06:09:48:24:186]New Image
[03:06:09:48:24:354]New Image
[03:06:09:48:24:520]New Image
[03:06:09:48:24:686]New Image
So it all comes down to this question:
What phenomenon can cause the system time and the multimedia timer to lock up with the rest of the system, so that the pause is masked in the timings while the performance counter still keeps time, and how can I fix it?
I fixed this by installing a new network driver and disabling Hyper-Threading, Turbo Boost, and CPU P-states.
The last 3 lines of wget -i urls.txt:
FINISHED --2012-05-16 12:58:08--
Total wall clock time: 1h 56m 52s
Downloaded: 1069 files, 746M in 1h 52m 49s (113 KB/s)
There are two different times:
1h 56m 52s
1h 52m 49s
Why are they different? What do they stand for?
Wall-clock time, or real time, is the human perception of the passage of time: the time we experience as human users. In this case wget may have needed less than the real time to do its actual work; the real time is the sum of the time the software spent doing its real job and the time it spent waiting for resources like the hard disk, the network, etc.
When you have a wall-clock time and a shorter time, the shorter time is usually user time, and the missing time is system time (time spent in the kernel) or time spent waiting for something like a file descriptor (but I have not checked whether that's the case with wget). If you are curious, run time wget http://some.url, or look into /proc/<wget-pid>/stat while it's running (assuming you are on Linux).
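As a small illustration of the distinction drawn above (wall-clock time versus time actually spent working), sketched in C++ rather than with wget itself: waiting on something advances wall-clock time but not CPU time.

#include <chrono>
#include <cstdio>
#include <ctime>
#include <thread>

int main() {
    auto wallStart = std::chrono::steady_clock::now();
    std::clock_t cpuStart = std::clock();  // on POSIX, clock() measures CPU time

    volatile double x = 0;
    for (long i = 0; i < 100000000L; ++i) x += i;          // real work: uses CPU
    std::this_thread::sleep_for(std::chrono::seconds(2));  // waiting: uses no CPU

    double wall = std::chrono::duration<double>(
                      std::chrono::steady_clock::now() - wallStart).count();
    double cpu  = double(std::clock() - cpuStart) / CLOCKS_PER_SEC;

    std::printf("wall clock: %.2f s, CPU: %.2f s\n", wall, cpu);  // wall > cpu
    return 0;
}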