I'd like to program my own clock based on an octal numeral system. From what I've gathered, JavaScript is browser-friendly but inaccurate at time intervals. What would be a good language or environment to code something like this? To give an example for this specific time system, there would be 64 hours in a day, 64 minutes in an hour, and 64 seconds in a minute. This works out to 1 octal second being equivalent to 0.32958984375 SI seconds.
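Whatever tool ends up being used, the conversion itself is plain arithmetic: 86400 SI seconds per day divided by 64 × 64 × 64 octal seconds per day gives exactly the 0.32958984375 figure above. Here is a minimal sketch of that conversion (written in C++ only to illustrate the math; the same arithmetic ports directly to JavaScript for a browser display):

    #include <cstdio>
    #include <ctime>

    int main() {
        // 64 * 64 * 64 = 262144 octal seconds per day;
        // 86400 / 262144 = 0.32958984375 SI seconds per octal second.
        const double si_per_octal_second = 86400.0 / (64.0 * 64.0 * 64.0);

        // SI seconds elapsed since local midnight.
        std::time_t now = std::time(nullptr);
        std::tm local = *std::localtime(&now);
        unsigned long si_since_midnight =
            local.tm_hour * 3600UL + local.tm_min * 60UL + local.tm_sec;

        // Convert to whole octal seconds, then split into hours/minutes/seconds,
        // each of which runs from 0 to 63 (00 to 77 in octal notation).
        unsigned long octal_seconds =
            static_cast<unsigned long>(si_since_midnight / si_per_octal_second);
        unsigned long h = octal_seconds / (64 * 64);
        unsigned long m = (octal_seconds / 64) % 64;
        unsigned long s = octal_seconds % 64;

        std::printf("%02lo:%02lo:%02lo\n", h, m, s);  // %lo prints each field in octal
        return 0;
    }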
Related
I would like to know if it is possible to easily modify Linux perf (the stat subcommand) to report over an interval of cycles (or instructions per cycle) instead of an interval of time. The goal is to improve the precision of the counters collected per interval; the time unit is not accurate enough.
A friend suggested this idea, and after looking at the source code a little, my understanding (I may be wrong) is that we would:
create a condition on a time computed with rdtsc (or something like clock_gettime)
create a "wait" in the perf process
launch the program under test
check whether the time condition is met: either keep waiting, or break out of the wait while saving the counter information from the mapped registers into perf (and call the wait function again if the run is not over)
I would like a result like this:
cycles       counts  unit  events
 10000        25000        instructions
 10000          450        branch-misses
 20000        21000        instructions
 20000          850        branch-misses
Unfortunately, I see a big problem: I would be using the result of a counter as a condition before I actually have that result. Or should I continuously read the counter(s) that define my "interval condition"? I also read that, for a time interval, we shouldn't collect counters more often than every 100 ms because it generates overhead. If I read some counters every 10000 cycles, would I have the same problem? I don't know where this overhead comes from (system calls?).
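I'm not aware of a stock option for this: perf stat -I takes its interval in milliseconds, so an exact cycle-based interval would mean patching perf itself. As a rough user-space approximation (a sketch, not a modification of perf), you can open the counters yourself with perf_event_open(2), poll them, and print a row each time the cycle count passes another 10000-cycle boundary. The loop body below merely stands in for the program under test:

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Open one hardware counter for the calling process on any CPU.
    static int open_counter(uint32_t type, uint64_t config) {
        perf_event_attr attr;
        std::memset(&attr, 0, sizeof(attr));
        attr.type = type;
        attr.size = sizeof(attr);
        attr.config = config;
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        attr.exclude_hv = 1;
        return static_cast<int>(
            syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0));
    }

    int main() {
        int cycles_fd = open_counter(PERF_TYPE_HARDWARE, PERF_COUNT_HW_CPU_CYCLES);
        int instr_fd  = open_counter(PERF_TYPE_HARDWARE, PERF_COUNT_HW_INSTRUCTIONS);
        if (cycles_fd < 0 || instr_fd < 0) { std::perror("perf_event_open"); return 1; }

        ioctl(cycles_fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(instr_fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(cycles_fd, PERF_EVENT_IOC_ENABLE, 0);
        ioctl(instr_fd, PERF_EVENT_IOC_ENABLE, 0);

        const uint64_t interval = 10000;        // desired cycle interval
        uint64_t next_boundary = interval;

        volatile double sink = 0.0;
        for (int i = 0; i < 1000000; ++i) {
            sink = sink + i * 0.5;              // stand-in for the program under test

            uint64_t cycles = 0, instructions = 0;
            read(cycles_fd, &cycles, sizeof(cycles));             // each read() is a
            read(instr_fd, &instructions, sizeof(instructions));  // system call
            if (cycles >= next_boundary) {
                std::printf("%12llu %12llu  instructions\n",
                            (unsigned long long)cycles,
                            (unsigned long long)instructions);
                // Jump to the next boundary past the current count, since a
                // single poll usually costs far more than 10000 cycles.
                next_boundary = (cycles / interval + 1) * interval;
            }
        }
        return 0;
    }

Each read() here is a system call, which is where the overhead you ask about comes from: at a granularity of 10000 cycles the cost of polling dwarfs the work being measured, for the same reason the tool discourages time intervals below 100 ms. Adding branch-misses would just be another open_counter(PERF_TYPE_HARDWARE, PERF_COUNT_HW_BRANCH_MISSES) call.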
I'm trying to implement some sort of template matching, which requires calling a function more than ten thousand times per frame. I've managed to reduce the execution time of my function to a few microseconds. However, about 1 in 5 executions takes considerably longer to run: while the function usually finishes in less than 20 microseconds, these cases can take 100 microseconds or more.
Trying to find the part of the function that has fluctuating execution time, I realized that big fluctuations appear in many parts, almost randomly. And this "ghost" time is added even in parts that take constant time. For example, iterating through a specific number of vectors and taking their dot product with a specific vector fluctuates from 3 microseconds to 20+.
All the tests I did seem to indicate that the fluctuation has nothing to do with the varying data but instead it's just random at some parts of the code. Of course I could be wrong and maybe all these parts that have fluctuations contain something that causes them. But my main question is specific and that's why I don't provide a snippet or runtime data:
Are fluctuations of execution time from 3 microseconds to 20+ microseconds normal for constant-time functions operating on the same amount of data? Could the CPU occasionally be doing something else that causes these ghost times?
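Yes, jitter of that magnitude is normal at microsecond scale: interrupts, preemption by other threads, frequency scaling, and cache/TLB effects all show up as occasional multi-microsecond spikes that have nothing to do with your data. Before blaming the code, it helps to time many calls and look at the distribution rather than at individual runs. A minimal sketch of that approach, where do_match() is just a hypothetical stand-in for your function:

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Hypothetical constant-work stand-in: one dot product over fixed-size data.
    double do_match(const std::vector<double>& a, const std::vector<double>& b) {
        double dot = 0.0;
        for (size_t i = 0; i < a.size() && i < b.size(); ++i) dot += a[i] * b[i];
        return dot;
    }

    int main() {
        std::vector<double> a(4096, 1.0), b(4096, 2.0);
        std::vector<long long> ns;
        ns.reserve(10000);

        volatile double sink = 0.0;   // keep the optimizer from removing the work
        for (int i = 0; i < 10000; ++i) {
            auto t0 = std::chrono::steady_clock::now();
            sink = sink + do_match(a, b);
            auto t1 = std::chrono::steady_clock::now();
            ns.push_back(
                std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count());
        }

        std::sort(ns.begin(), ns.end());
        std::printf("min %lld ns, median %lld ns, p99 %lld ns, max %lld ns\n",
                    ns.front(), ns[ns.size() / 2],
                    ns[ns.size() * 99 / 100], ns.back());
        return 0;
    }

If the minimum and median stay tight while only the tail (p99/max) jumps around, the spikes are most likely external (scheduling, interrupts, power management) rather than in the function itself; pinning the thread to one core and disabling frequency scaling while benchmarking usually narrows the tail.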
Is there some type of conversion for ticks to a unit of real time? I would like my program to simulate a 48 hour experiment. Does anyone have any suggestions on how to do this? Thanks!
How quickly do things change in your experiment? From system dynamics, a good rule of thumb is to have a discrete clock tick 4 times during the smallest interval of real time in which something meaningful happens in the modelled system. For example, if you would expect to see changes every minute, then you would have 4 ticks each minute (and your ABM rules about updating the system would be calculated on the basis of 15 seconds) and then run the simulation for 11,520 (= 48 × 60 × 4) ticks.
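The arithmetic in that rule of thumb, spelled out (nothing here beyond the numbers already given above):

    #include <cstdio>

    int main() {
        const double experiment_hours = 48.0;
        const double smallest_interval_minutes = 1.0;  // "something meaningful happens every minute"
        const double ticks_per_interval = 4.0;         // system-dynamics rule of thumb

        double minutes_per_tick = smallest_interval_minutes / ticks_per_interval;  // 0.25 min = 15 s
        double total_ticks = experiment_hours * 60.0 / minutes_per_tick;           // 11,520

        std::printf("seconds of real time per tick: %.0f\n", minutes_per_tick * 60.0);
        std::printf("ticks for the 48-hour run:     %.0f\n", total_ticks);
        return 0;
    }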
The man page for prstat (on Solaris 10 in my case) notes that the CPU % output is the "percentage of recent CPU time". I am trying to understand in more depth what "recent" means in this context - is it a defined amount of time prior to the sample, does it relate to the sampling interval, etc.? I'd appreciate any insights, particularly with references to supporting documentation. I've searched but haven't been able to find a good answer. Thanks!
Adrian
The kernel maintains the data that you see at the bottom of the prstat display - those three numbers - and it keeps figures for each process as well.
uptime shows you the same three numbers. Those are the 'recent' load averages - the line at the bottom of prstat - over 1 minute, 5 minutes, and 15 minutes.
Recent == one minute's worth of sampling (the last 60 seconds). Those numbers are averages, which is why the figures and the process list usually change right after you first start prstat.
On the first pass you may see processes like nscd that show a lot of CPU simply because they have been up for a long time; the first display iteration is entirely historical. After that, the numbers reflect recent == the last one-minute average.
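As a conceptual illustration only (this is not the Solaris kernel's actual code, just the general idea of a decaying "recent" average), here is why a freshly started sampler takes a few intervals to settle: each new sample is folded into the previous value, and old history fades out over roughly a minute.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double sample_interval_s = 5.0;   // prstat's default refresh interval
        const double window_s = 60.0;           // "recent" == about one minute
        const double decay = std::exp(-sample_interval_s / window_s);

        double recent_pct = 0.0;                // a new observer starts with no history
        const double instantaneous_pct[] = {80, 80, 80, 10, 10, 10, 10, 10, 10, 10, 10, 10};

        for (double sample : instantaneous_pct) {
            // Fold the newest sample in; older samples fade geometrically.
            recent_pct = decay * recent_pct + (1.0 - decay) * sample;
            std::printf("reported %%CPU: %.1f\n", recent_pct);
        }
        return 0;
    }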
You should consider enabling sar sampling to get a much better picture.
Want a reference? Try:
http://www.amazon.com/Solaris-Internals-OpenSolaris-Architecture-Edition/dp/0131482092
A "date" command on USS says:
Wed Jan 22 17:51:30 EST 2014
A couple of seconds later, a TSO TIME command says:
IKJ56650I TIME-04:51:58 PM. CPU-00:00:02 SERVICE-196896 SESSION-07:08:30 JANUARY 22,2014
(There's a one-hour time zone difference.) TSO TIME tracks, via eyeball, very closely to the time in system log entries. Any idea why the "date" command might be 28 seconds off?
Thanks.
The difference is due to the handling of leap seconds. Applications that access the hardware clock directly (STCK/STCKE instructions) often forget about leap seconds, and so they will be off by about 30 seconds. Smarter apps use the system time conversion routines, which factor in leap seconds automatically. Here's an example of how this happens: http://www-01.ibm.com/support/docview.wss?uid=isg1OA41950
Having said that, POSIX or the Single Unix Specification (which z/OS UNIX Services adheres to) may in fact specify the behavior of the "date" command. Here's what SUS says under "Seconds Since the Epoch":
A value that approximates the number of seconds that have elapsed
since the Epoch...As represented in seconds since the Epoch, each and
every day shall be accounted for by exactly 86400 seconds.
By my reading, the comment about every day having exactly 86400 seconds suggests that the UNIX specification intentionally does not want leap seconds counted. If this is the case, then IBM is merely following the letter of the law with respect to how the time is displayed.
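To make the arithmetic concrete (the values below are hypothetical; the 28-second figure just mirrors the offset observed in the question and is not looked up anywhere): two displays derived from the same clock will disagree by exactly the leap-second correction, depending on whether that correction has been applied.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Hypothetical clock reading, expressed as seconds since the UNIX epoch,
        // before any leap-second correction is applied.
        int64_t uncorrected_seconds = 1390431118;

        // Correction configured on the system; 28 s here only mirrors the offset
        // seen in the question and is a placeholder, not an authoritative value.
        int64_t leap_second_correction = 28;

        // The same instant after the correction, i.e. a value consistent with
        // SUS's "exactly 86400 seconds per day" accounting.
        int64_t corrected_seconds = uncorrected_seconds - leap_second_correction;

        std::printf("the two displays differ by %lld seconds\n",
                    (long long)(uncorrected_seconds - corrected_seconds));
        return 0;
    }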