Why is z/OS USS "date" command output different from TSO TIME?

A "date" command on USS says:
Wed Jan 22 17:51:30 EST 2014
A couple of seconds later, a TSO TIME command says:
IKJ56650I TIME-04:51:58 PM. CPU-00:00:02 SERVICE-196896 SESSION-07:08:30 JANUARY 22,2014
(There's a one-hour time zone difference.) Eyeballing it, TSO TIME tracks the time in system log entries very closely. Any idea why the "date" command might be 28 seconds off?
Thanks.

The difference is due to the handling of leap seconds. Applications that access the hardware clock directly (STCK/STCKE instructions) often forget about leap seconds, and so they will be off by about 30 seconds. Smarter apps use the system time-conversion routines, which factor in leap seconds automatically. Here's an example of how this happens: http://www-01.ibm.com/support/docview.wss?uid=isg1OA41950
Having said that, POSIX and the Single UNIX Specification (which z/OS UNIX System Services adheres to) may in fact specify the behavior of the "date" command. Here's what SUS says under "Seconds Since the Epoch":
A value that approximates the number of seconds that have elapsed
since the Epoch...As represented in seconds since the Epoch, each and
every day shall be accounted for by exactly 86400 seconds.
By my reading, the comment about every day having exactly 86400 seconds suggests that the UNIX specification intentionally does not want leap seconds counted. If this is the case, then IBM is merely following the letter of the law with respect to how the time is displayed.
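To make that concrete, here is a minimal Python sketch of the SUS "Seconds Since the Epoch" formula (the constants are the spec's own; the example timestamp is the one from the question). Notice that no term anywhere accounts for leap seconds: every day contributes exactly 86400 seconds.

def sus_epoch_seconds(tm_year, tm_yday, tm_hour, tm_min, tm_sec):
    # Arguments mirror C's struct tm: tm_year is years since 1900,
    # tm_yday is days since Jan 1 (0-based). Integer division mirrors
    # the C expressions in the spec.
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000 + ((tm_year - 69) // 4) * 86400
            - ((tm_year - 1) // 100) * 86400
            + ((tm_year + 299) // 400) * 86400)

# 2014-01-22 22:51:30 UTC (the USS timestamp above; EST is UTC-5):
print(sus_epoch_seconds(114, 21, 22, 51, 30))  # 1390431090

A clock source that does count the couple dozen leap seconds accumulated since 1972 will therefore disagree with a formula like this by a few dozen seconds, which is the size of the gap observed above.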

Related

WinDbg runaway command output explained

I have a production CPU issue: after days of regular activity, the CPU suddenly starts to peak. I've saved the dump file and run the !runaway command to get the list of the threads consuming the most CPU time. The output is below:
User Mode Time
Thread Time
21:110 0 days 10:51:39.781
19:f84 0 days 10:41:59.671
5:cc4 0 days 0:53:25.343
48:74 0 days 0:34:20.140
47:1670 0 days 0:34:09.812
13:460 0 days 0:32:57.640
8:14d4 0 days 0:19:30.546
7:d90 0 days 0:03:15.000
23:1520 0 days 0:02:21.984
22:ca0 0 days 0:02:08.375
24:72c 0 days 0:02:01.640
29:10ac 0 days 0:01:58.671
27:1088 0 days 0:01:44.390
As you can see, the output shows I have 2 threads, 21 & 19, that consume more than 20 hours of CPU time combined. I was able to track the callstack of one of those threads like so:
~21s
!CLRStack
The output doesn't matter at the moment; let's call it the "X callstack".
What I would like is an explanation of the !runaway command output. From what I understand, a dump file is a snapshot of the current state of the application, so my questions are:
How can the !runaway command show a 10:51 hours value for thread 21, when the dumping process only took a few seconds?
Does it mean that the specific "instance" of the X callstack I found with the !CLRStack command has been hanging for more than 10 hours? Or is it the total time thread 21 has spent across all its executions of the X callstack? If so, it seems strange that thread 21 is responsible for so many executions of the X callstack, since as far as I know the origin is a web request (the runtime should assign an arbitrary thread to each call).
I have a speculation that may answer those 2 questions:
Maybe WinDbg calculates the time by taking the thread's actual callstack time and dividing it by the scope of the dumping process. So if, for example, the specific execution of the X callstack took 1 second and the whole dumping process took 3 seconds (33%), while the process had been running for a total of 24 hours, the output would show:
8 hours (33% of 24 hours)
Am I right, or have I got it completely wrong?
This answer is intended to be comprehensible for the OP. It's not intended to be correct down to every bit and byte.
[...] and dividing it by the scope of the dumping process [...]
This understanding is probably the root of all evil: dumping a process only gives you the state of the process at a certain point in time. From the process's point of view, the duration of dumping is 0.0 seconds, since all threads are suspended during the operation. (In time relative to your process, nothing has changed and time is standing still; of course, wall clock time keeps moving.)
You are thinking of dumping a process as monitoring it over a longer period of time, which is not the case. Dumping a process merely takes time because it involves disk activity etc.
So no, there is no "scope", and thus you cannot (or at least it's really hard to) measure performance issues with crash dumps.
[...] How can the !runaway command show a 10:51 hours value for thread 21, [...]
How can your C# program know how long it has been running if all you have is a timer event that fires every second? The answer: it uses a variable and keeps increasing the value.
That's roughly how Windows does it. Windows is responsible for thread scheduling, and each time it re-schedules threads, it updates a variable that holds the thread's accumulated CPU time.
When the crash dump is written, that information, which the OS has been collecting all along, is simply included in the dump.
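A minimal Python sketch of that accounting (thread_time() reads the same kind of per-thread counter the OS maintains; the busy loop is an arbitrary stand-in for real work):

import threading, time

def busy(n):
    x = 0
    for i in range(n):
        x += i * i
    # thread_time() returns CPU time consumed by this thread alone:
    # an ever-growing counter, just like the figures !runaway prints.
    print(f"{threading.current_thread().name}: "
          f"{time.thread_time():.3f} s of CPU time accumulated")

t = threading.Thread(target=busy, args=(10_000_000,), name="worker")
t.start()
t.join()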
[...] when the dumping process only took a few seconds?
Since the crash dump is taken by a thread of WinDbg, the time for that is accounted to that thread. You would need to debug WinDbg and run !runaway on a WinDbg thread to see how much CPU time that took. Potentially a nice exercise, and the .dbgdbg (debug the debugger) command may be new to you; other than that, this particular case is not really helpful.
Does it mean that the specific "instance" of the X callstack I found with the !CLRStack command has been hanging for more than 10 hours?
No. It means that at the point in time when you created the crash dump, that specific method was executed. Not more, not less.
This information is unrelated to !runaway, because the thread may have been doing something totally different for a long time, but that ended just a moment ago.
[...] or is it the total time thread 21 has spent across all its executions of the X callstack?
No. A crash dump does not contain such detailed performance data. You need a performance profiler like JetBrains dotTrace to get that information. A profiler looks at callstacks very often, then aggregates identical call stacks and derives CPU time per call stack.
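For intuition, here's a minimal sketch of that sampling idea in Python (illustrative only; real profilers work at a much lower level, and sys._current_frames is CPython-specific): capture every thread's stack repeatedly and count identical stacks. A crash dump is essentially a single one of these samples.

import collections, sys, time, traceback

def sample_stacks(samples=50, interval=0.01):
    counts = collections.Counter()
    for _ in range(samples):
        for frame in sys._current_frames().values():
            # Reduce each thread's stack to a tuple of function names
            # so identical stacks can be aggregated and counted.
            stack = tuple(f.name for f in traceback.extract_stack(frame))
            counts[stack] += 1
        time.sleep(interval)
    return counts  # stacks seen most often approximate where time is spent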

ticks and hours in NetLogo

Is there some type of conversion for ticks to a unit of real time? I would like my program to simulate a 48 hour experiment. Does anyone have any suggestions on how to do this? Thanks!
How quickly do things change in your experiment? From system dynamics, a good rule of thumb is to have a discrete clock tick 4 times during the smallest interval of real time in which something meaningful happens in the modelled system. For example, if you would expect to see changes every minute, then you would have 4 ticks each minute (and your ABM rules about updating the system would be calculated on the basis of 15 seconds) and then run the simulation for 11,520 (=48x60x4) ticks.
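As a minimal sketch of that arithmetic (assuming one-minute granularity and the 4-ticks-per-interval rule of thumb from above):

TICKS_PER_MINUTE = 4                      # 4 ticks per smallest meaningful interval
SECONDS_PER_TICK = 60 / TICKS_PER_MINUTE  # each tick represents 15 s of real time

def ticks_for_experiment(hours):
    return hours * 60 * TICKS_PER_MINUTE

print(ticks_for_experiment(48))  # 11520 ticks for the 48-hour experiment
print(SECONDS_PER_TICK)          # 15.0

Rates expressed per minute in the real system would then be divided by TICKS_PER_MINUTE when applied per tick.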

Solaris prstat - definition of "recent" time used in percentages

The man page for prstat (on Solaris 10 in my case) notes that that CPU % output is the "percentage of recent CPU time". I am trying to understand in more depth what "recent" means in this context - is it a defined amount of time prior to the sample, does it relate to the sampling interval, etc? Appreciate any insights, particularly with references to supporting documentation. I've searched but haven't been able to find a good answer. Thanks!
Adrian
The kernel maintains the data you see at the bottom of the display - those three numbers - and it keeps recent CPU accounting for each process.
uptime shows you the same three numbers. Those are the 'recent' load averages - the line at the bottom of prstat - over 1 minute, 5 minutes, and 15 minutes.
Recent == 1 minute's worth of sampling (the last 60 seconds). Those numbers are averages, which is why, when you first start prstat, the numbers and processes usually change.
On the first pass you may see processes like nscd that have used lots of CPU but have been up for a long time. The first display iteration is completely historical; after that, the numbers reflect recent == the last one-minute average.
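To picture how such a 'recent' percentage behaves, here's a minimal Python sketch using a simple exponential decay (purely illustrative; the actual kernel bookkeeping behind the pr_pctcpu field in /proc differs in detail):

def update_recent_pct(prev_pct, sample_pct, decay=0.8):
    # Blend each new sample into a running value; older activity
    # fades quickly, so a past CPU burst stops dominating within
    # roughly a minute of samples.
    return decay * prev_pct + (1 - decay) * sample_pct

recent = 0.0
for sample in [90, 90, 90, 5, 5, 5, 5]:  # a burst, then near-idle
    recent = update_recent_pct(recent, sample)
    print(f"{recent:5.1f}%")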
You should consider enabling sar sampling to get a much better picture.
Want a reference? Try:
http://www.amazon.com/Solaris-Internals-OpenSolaris-Architecture-Edition/dp/0131482092

Interval With Microseconds (Or Faster)

I'd like to program my own clock based on an octal numeral system. From what I've gathered, JavaScript is browser-friendly but inaccurate at time intervals. What's a good language to code something like this in? To give an example for this specific time system, there would be 64 hours in a day, 64 minutes in an hour, and 64 seconds in a minute. This would result in 1 octal second being equivalent to 0.32958984375 SI seconds.
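Whatever environment you pick, the conversion itself is plain arithmetic. Here's a minimal sketch in Python using the question's own numbers (a day split into 64 x 64 x 64 units, so one octal second is 86400 / 64**3 = 0.32958984375 SI seconds):

import datetime

def octal_time(now=None):
    now = now or datetime.datetime.now()
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    si_seconds = (now - midnight).total_seconds()
    total = int(si_seconds / (86400 / 64**3))  # octal seconds elapsed today
    h, rest = divmod(total, 64 * 64)
    m, s = divmod(rest, 64)
    # Each field runs 0-63, displayed as two base-8 digits.
    return f"{h:02o}:{m:02o}:{s:02o}"

print(octal_time())

For a live display you would re-render on a timer slightly faster than one octal second (about 330 ms) and recompute from the wall clock each time, so timer drift never accumulates.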

How to handle mass database manipulation every second - threading?

I have a very hard problem:
I have roughly 20-50 objects which I MUST (that is a given for this problem, please don't spend time thinking around it) put through a piece of logic EVERY SECOND.
The logic itself needs roughly 200-600 milliseconds (90% of the time it's 200 ms, 10% of the time 600 ms).
I've tried to find a way to make it smaller, but there isn't one. I must get an object from the DB, run a lot of if-else logic, and update it. Even if I reduce the logic to 50 ms or less, a variable object count of up to 50 will break my neck with the 1-second timer, because 50 x 50 ms = 2.5 seconds. So a tick takes longer than the tick rate should be.
So my only idea, and not a very smart one I think, is to open a separate thread for every object, with a main thread for coordination. The main thread opens x other threads, so only this spawning has to take under 1 second. After its logic has run, each thread can kill itself and we're all happy, aren't we?
Given the answers so far, I will explain my problem:
I'm trying to build an auction site, so I have up to 50 auctions running at the same moment - nothing special. Every single second I need to look at the auction list, see if an auction's remaining time is 00:00:01, and if it is, bid automatically (it's a feature that users can set up).
So: get 50 objects in a list, iterate through them, check whether an automatic bid is needed, and do it.
With 50 objects and the processing time you've given, on average you are doing 12 seconds' worth of processing every second. Assuming you have 4 cores, you can get this down to an execution time of 4 seconds via threading. Every second. This means that you're going to start off behind and slip further behind as time goes on.
I know you said you tried to think of a way to make it more efficient, but couldn't, but I fear you're going to have to. The problem as stated now is computationally intractable. You're either going to have to process the objects in a rotating window (so each object gets hit once every 4th cycle or so), or you need to make your processing run faster.
First: Profile, if you haven't already. Figure out what section of your code are taking time, etc. I'd go after that database - how long is the I/O of the objects from the database taking? Can you cache that I/O? (If you're manipulating the same 50 objects, don't load them every second.)
Let's address your threads idea: if you want multiple threads, don't create and destroy them every second. Create your X threads and leave them be; creating and destroying them are expensive operations. You might find that fewer threads work better, such as 1 or 2 per core, since you may reduce the time spent on context switches.
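A minimal sketch of that advice in Python (the pool size, the cached loader, and the per-object work are all illustrative assumptions):

import time
from concurrent.futures import ThreadPoolExecutor

def load_objects_from_cache():
    # Hypothetical stand-in for the cached DB read suggested above.
    return range(50)

def process(obj):
    time.sleep(0.2)  # placeholder for the 200-600 ms of per-object logic

pool = ThreadPoolExecutor(max_workers=8)  # created once, e.g. 2 per core
while True:
    start = time.monotonic()
    list(pool.map(process, load_objects_from_cache()))  # reuse the same threads
    time.sleep(max(0.0, 1.0 - (time.monotonic() - start)))  # wait for next tick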
To expand on Jonathan Leffler's comment on the question, as the OP requested: (This answer is a wiki)
Say you have these three things being auctioned, ending at the times indicated:
10 Apples - ends at 1:05:00 PM
20 Blueberries - ends at 2:00:00 PM
15 Pears - ends at 3:50:00 PM
If the current time is 1:00:00 PM, then sleep for 4 minutes, 58 seconds (since the closest item ends in 5 minutes). We use the 2 seconds then for processing - adjust that threshold as needed. Once we're done with the apples, we'll sleep for (2 PM - now() - 2s), for the blueberries.
Note that when we wake up at 1:04:58 PM to process the apples auction, we do not touch the blueberries or the pears -- we know that they're still way out in the future, so we don't care.
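Here's a minimal Python sketch of that sleep-until-the-next-deadline approach, using a min-heap of end times (the items and offsets mirror the example above; place_automatic_bid is a hypothetical stand-in for the bidding step):

import heapq
import time
from datetime import datetime, timedelta

def place_automatic_bid(item):
    print(f"bidding on {item} at {datetime.now():%H:%M:%S}")

now = datetime.now()
auctions = [
    (now + timedelta(minutes=5), "10 Apples"),
    (now + timedelta(hours=1), "20 Blueberries"),
    (now + timedelta(hours=2, minutes=50), "15 Pears"),
]
heapq.heapify(auctions)  # soonest end time first

LEAD = timedelta(seconds=2)  # wake 2 s early for processing; tune as needed
while auctions:
    end_time, item = heapq.heappop(auctions)
    wait = (end_time - LEAD - datetime.now()).total_seconds()
    if wait > 0:
        time.sleep(wait)  # everything else ends later, so ignore it until then
    place_automatic_bid(item)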
