I have a custom JFR event. I found that RecordedEvent.getStartTime() is actually a couple of seconds later than the time when the event was really created and committed. So what time does getStartTime() show?
In my case I added the current time to my event and read it back while parsing the JFR file. But how can I get it for built-in events, like jdk.ExecutionSample?
There's a field in built-in events, getLong("startTime"), but it gives strange numbers that don't look like the current time in millis. What is it?
By default JFR uses the invariant TSC for taking timestamps (which is not the clock source used by System.currentTimeMillis() or System.nanoTime()).
The invariant TSC allows JFR to have very low overhead, but on some CPUs or in some scenarios the clock may drift. You can use the command-line flag:
-XX:-UseFastUnorderedTimeStamps
to get a more accurate clock, but at a higher overhead.
The value you get from event.getLong("startTime") is raw ticks, typically only useful if you want to compare with some other system that uses the same timing mechanism.
Related
I'm writing a Chrome extension and I want to measure how it affects performance, specifically currently I'm interested in how it affects page load times.
I picked a certain page I want to test, recorded it with Fiddler and I use this recording as the AutoResponder in Fiddler. This allows me to measure load times without networking traffic delays.
Using this technique I found out that my extension adds ~1200ms to the load time. Now I'm trying to figure out what causes the delay and I'm having trouble understanding the DevTools Performance results.
First of all, it seems there's a discrepancy in the reported load time:
On one hand, the summary shows a range of ~13s, but on the other hand, the load event arrived after ~10s (which I also corroborated using performance.timing.loadEventEnd - performance.timing.navigationStart):
The second thing I don't quite understand is how the numbers add up (or rather don't add up). For example, here's a grouping of different categories during load:
Neither of these columns sums to 10s or to 13s.
When I group by domain I can get different rows for the extension and for the rest of the stuff:
But it seems that the extension only adds 250ms which is much lower than the exhibited difference in load times.
I assume that these numbers represent just CPU time, and do not include any wait time. Is this correct? If so, it's OK that the numbers don't add up and it's possible that the extension doesn't spend all its time doing CPU bound work.
Then there's also the mysterious [Chrome extensions overhead], which doesn't explain the difference in load times either. Judging by the fact that it's a separate line from my extension, I thought they were mutually exclusive, but if I dive deeper into the specifics, I find my extension's functions under the [Chrome extensions overhead] subdomain:
So to summarize, this is what I want to be able to do:
Calculate the total CPU time my extension uses - it seems it's not enough to look under the extension's name, and its functions might also appear in other groups.
Understand whether the delay in load time is caused by CPU processing or by synchronous waiting. If it's the latter, find where my extension is doing a synchronous wait, because I'm pretty sure that I didn't call any blocking APIs.
Update
Eventually I found out that the reason for the slowdown was that we also activated Chrome accessibility whenever our extension was running, and that's what caused the drastic drop in performance. Without accessibility the extension had a very minor effect. I still wonder, though, how I could have seen in the profiler that my problem was accessibility. It could have saved me a ton of time... I will try to look at it again later.
I have an event from the realtime world which generates an interrupt. I need to register this event against one of the Linux kernel timescales, such as CLOCK_MONOTONIC or CLOCK_REALTIME, with the goal of establishing when the event occurred in real calendar time. What is the currently recommended way to do this? My Google search found some patches submitted back in 2011 to support it, but the interrupt-handling code has been heavily revised since then and I don't see a reference to timestamps anymore.
For my intended application the accuracy requirements are low (1 ms). Still, I would like to know how to do this properly. I should think it's possible to get into the microsecond range, if one can exclude the possibility of higher-priority interrupts.
If you need only low precision, you could get away with reading jiffies.
However, if CONFIG_HZ is less than 1000, you will not even get 1 ms resolution.
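As a rough illustration, a minimal sketch of an interrupt handler that timestamps events with jiffies (the handler name and its registration via request_irq() are assumptions, not taken from the question) might look like this:

#include <linux/jiffies.h>
#include <linux/interrupt.h>
#include <linux/printk.h>

static unsigned long last_event;

/* Hypothetical handler, registered elsewhere with request_irq(). */
static irqreturn_t my_event_isr(int irq, void *dev_id)
{
    unsigned long now = jiffies;

    /* Resolution is one tick (1/CONFIG_HZ s), so deltas are only that coarse. */
    pr_info("delta since previous event: %u ms\n",
            jiffies_to_msecs(now - last_event));
    last_event = now;
    return IRQ_HANDLED;
}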
For a high-resolution timestamp, see how firewire-cdev.c does it:
case CLOCK_REALTIME: getnstimeofday(&ts); break;
case CLOCK_MONOTONIC: ktime_get_ts(&ts); break;
case CLOCK_MONOTONIC_RAW: getrawmonotonic(&ts); break;
If I understood your needs right, you may use the getnstimeofday() function for this purpose.
If you need the high-precision monotonic clock value (which is usually a good idea) you should look at the ktime_get_ts() function (defined in linux/ktime.h). getnstimeofday(), suggested in the other answer, returns the "wall" time, which may actually appear to go backward on occasion, resulting in unexpected behavior for some applications.
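For reference, here is a minimal sketch of a handler that captures both clocks. It assumes a reasonably recent kernel where the timespec64-based helpers (ktime_get_ts64()/ktime_get_real_ts64()) replace the older ktime_get_ts()/getnstimeofday(); the handler name is made up, and a real driver would record the timestamp and defer the printing:

#include <linux/ktime.h>
#include <linux/timekeeping.h>
#include <linux/interrupt.h>
#include <linux/printk.h>

static irqreturn_t my_event_isr(int irq, void *dev_id)
{
    struct timespec64 mono, wall;

    ktime_get_ts64(&mono);        /* CLOCK_MONOTONIC: never jumps backward */
    ktime_get_real_ts64(&wall);   /* CLOCK_REALTIME: calendar ("wall") time */

    pr_info("event: mono=%lld.%09ld real=%lld.%09ld\n",
            (long long)mono.tv_sec, mono.tv_nsec,
            (long long)wall.tv_sec, wall.tv_nsec);
    return IRQ_HANDLED;
}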
I wonder how I can set up the Time Profiler instrument to show me the calls made within a certain period of time. I don't want it to show me all the calls over the entire run.
Is this possible?
I've been trying with flags but I don't see anything I can change.
Basically I want to focus on a certain peak.
Option-drag on the timeline in Instruments to include only results from that time range. Simple as that, really.
There is no way (that I'm aware of anyway) to trigger arbitrary flags in Instruments from user code. I've come up with a couple of alternatives.
The simplest one I've found is to put a call to sleep(1) right before and right after the stuff I want to look at, this means that I can easily identify a period of total idle right before and after the zone of interest. Crude, but effective.
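For example (a rough C sketch; do_interesting_work() is just a placeholder for whatever you actually want to profile):

#include <unistd.h>

/* Placeholder for the code you actually want to profile. */
static void do_interesting_work(void)
{
    volatile unsigned long i, sum = 0;
    for (i = 0; i < 100000000UL; i++)
        sum += i;
}

int main(void)
{
    sleep(1);                 /* clearly visible idle gap before the region */
    do_interesting_work();
    sleep(1);                 /* and another one right after it */
    return 0;
}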
The other alternative is that you can use Instruments' custom instrument mechanism to instrument certain calls. This can, similarly, give you other items on the timeline that you can use for reference. These can be challenging to create and get just right, so most often I just use the cruder method described above.
HTH
Not sure when this changed, but in Xcode 11, Option-dragging only performs a zoom. It doesn't change the range of reported data. Cmd-dragging does nothing. The key is to just drag across the range, but do it in the content region of the Time Profiler plot area - NOT the "time bar" at the top.
What does REALTIME_PRIORITY_CLASS (with THREAD_PRIORITY_TIME_CRITICAL) actually do?
Does it:
Prevent interrupts from firing on the processor?
Prevent context switching from happening on the processor (unless the thread sleeps)?
If it does prevent the above from happening:
How come when I run a program on a processor with this flag, I still get inconsistent timing results? Shouldn't the program take the same amount of time every time, if there's nothing interrupting it?
If it does NOT prevent the above from happening:
Why does my system (mouse, keyboard, etc.) lock up if I use it incorrectly? Shouldn't drivers still get some processor time?
It basically tells the system scheduler to allot time only to your thread until your thread gives it up (via Sleep or SwitchToThread) or dies. As for the timing not being the same: the OS still runs in between each run, which can change RAM contents, caching, etc. Secondly, most timing is inaccurate, so it will fluctuate (especially system-quantum-based timing like GetTickCount). The OS may also have other things going on, like power saving/dynamic frequency adjustment, so your best bet would be to use RDTSC, though even with that you might notice other stuff running (especially if you can run more than one physical thread).
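For illustration, a minimal sketch of how a process and thread would opt into this scheduling class and then time a section with the high-resolution counter (using QueryPerformanceCounter rather than raw RDTSC); note that REALTIME_PRIORITY_CLASS generally needs the "increase scheduling priority" privilege and may otherwise be silently downgraded:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;

    /* May require elevated privileges; check the return values in real code. */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    /* ... the work you want to time ... */
    QueryPerformanceCounter(&t1);

    printf("elapsed: %.3f us\n",
           (t1.QuadPart - t0.QuadPart) * 1e6 / (double)freq.QuadPart);
    return 0;
}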
Can I monitor changes to the system time adjustment (which is changed by SetSystemTimeAdjustment())?
I need to monitor such changes for a high-accuracy real-time spectrogram view.
Note:
I know that WM_TIMECHANGE is sent whenever the system time is changed. This is not what I'm asking for.
This MSDN Magazine article indicates that there is no notification mechanism in the OS - you need to monitor changes by polling GetSystemTimeAdjustment(). From "Implementing a Continuously Updating, High-Resolution Time Provider for Windows" by Johan Nilsson (MSDN Magazine, March 2004):
There are a couple of problems with this, though. The first is that enabling (and changing) the time adjustment alters your reference frequency—the flow of time. The second, which is a bigger problem, is that there is no notification sent by the system when the time adjustment is changed, enabled, or disabled. Changing the time adjustment even by the minimum possible increment (one 100-nanosecond unit) on a system with a default time increment of 156250 units causes a change in the reference frequency of 6.4 PPM (1/156250). Once again, this might not sound like much, but considering that you might want to keep within 50 microseconds from the system time, it could mean that you exceed that limit after a few seconds without resynchronization.
To lessen the impact of such adjustments, the time provider has to monitor the current time adjustment settings. With no help from the operating system itself, this is implemented by calling the SetSystemTimeAdjustment companion API GetSystemTimeAdjustment. By performing this check repeatedly at short enough intervals and adjusting the internal frequency as needed, you can avoid drifting too far from the system time.
It's possible that there's been OS-level support for notification added since the article was published, but I didn't find anything documented.
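A minimal polling sketch along those lines (the one-second interval is arbitrary; a real time provider would fold the new adjustment into its frequency model as the article describes):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD lastAdj, adj, incr;
    BOOL disabled;

    GetSystemTimeAdjustment(&lastAdj, &incr, &disabled);   /* baseline reading */

    for (;;) {
        Sleep(1000);   /* choose the polling interval to match your drift budget */
        if (GetSystemTimeAdjustment(&adj, &incr, &disabled) && adj != lastAdj) {
            printf("time adjustment changed: %lu -> %lu (increment %lu, disabled=%d)\n",
                   lastAdj, adj, incr, disabled);
            lastAdj = adj;
        }
    }
}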
I'd assume that WM_TIMECHANGE is sent to WndProc(), so you will want to override WndProc with something like this:
private const int WM_TIMECHANGE = 0x001E;

protected override void WndProc(ref Message m)
{
    if (m.Msg == WM_TIMECHANGE)
    {
        // do stuff
    }
    base.WndProc(ref m);
}