How accurate is client unix time?

I'm curious about how accurate a client's browser/javascript unix time is, as I don't have a very good understanding of digital/computer timekeeping.
For instance, if my server tells the client to do something at a given unix time, will it actually happen at the same time on multiple computers in the US, Europe, and China?
Thanks.

Unix time, or POSIX time, is a system for describing instants in time, defined as the number of seconds elapsed since midnight Coordinated Universal Time (UTC) on Thursday, January 1, 1970. Times before that date are defined but negative, and leap seconds (which are declared by the International Earth Rotation and Reference Systems Service) are not counted.
The linked article is a good reference and compares Unix time to other time systems; I strongly recommend reading it.
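As a small illustration of that definition, here is a minimal C++ sketch (one of many ways to read the clock) that prints the current Unix timestamp as whole seconds since the epoch:

    #include <chrono>
    #include <iostream>

    int main() {
        // Unix time: whole seconds elapsed since 1970-01-01 00:00:00 UTC,
        // not counting leap seconds.
        const auto now = std::chrono::system_clock::now();
        const auto unix_seconds = std::chrono::duration_cast<std::chrono::seconds>(
            now.time_since_epoch()).count();
        std::cout << "Current Unix time: " << unix_seconds << "\n";
    }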

Probably not, since computer clock time tends to be rather arbitrary.
However, if you control all of those computers and can ensure that they are synchronized using NTP or some such service, you might be able to sync all of those actions even using JavaScript.
I wouldn't trust client time on the World Wide Web.
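To make "synchronize against a common reference" concrete, here is a rough C++ sketch of a single NTP-style offset sample (essentially Cristian's algorithm): ask a reference server for its time, assume the network delay is symmetric, and schedule actions in the server's timeline rather than trusting the raw local clock. The fetch_server_unix_ms helper is a hypothetical placeholder, not a real API.

    #include <chrono>
    #include <cstdint>
    #include <iostream>

    // Local Unix time in milliseconds.
    std::int64_t local_unix_ms() {
        using namespace std::chrono;
        return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
    }

    // Hypothetical placeholder for an HTTP/NTP request to the reference server.
    // Here it just returns the local clock so the sketch compiles and runs.
    std::int64_t fetch_server_unix_ms() { return local_unix_ms(); }

    int main() {
        // One NTP-style sample: assume the network delay is symmetric, so the
        // server's reply describes the instant halfway through the round trip.
        const std::int64_t t0 = local_unix_ms();
        const std::int64_t server_ms = fetch_server_unix_ms();
        const std::int64_t t1 = local_unix_ms();

        const std::int64_t offset_ms = server_ms - (t0 + (t1 - t0) / 2);
        std::cout << "Estimated local clock offset: " << offset_ms << " ms\n";

        // To fire an action at server time T, wait until local_unix_ms() + offset_ms >= T.
    }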

Related

Which timing method should I use to measure changes in time

After delving into VBA benchmarking (see also) I'm not satisfied those answers go into sufficient detail. From a similar question about timing in Go, I see there is a difference between measuring absolute time and changes in time. For absolute time, a "wall clock" should be used, which can be synchronised between machines using the Network Time Protocol, for example.
Meanwhile, "monotonic clocks" should be used to measure differences in time, as these are not subject to leap seconds or (according to that linked Go answer) changes in the frequency of the clock.
Have I got that right? Is there anything else to consider?
Assuming those definitions are correct, which category do each of these clocks belong to, or in other words, which of these clocks will give me the most accurate measurement of changes in time:
VBA Time
VBA Timer
WinApi GetTickCount
WinApi GetSystemTimePreciseAsFileTime
WinApi QueryPerformanceFrequency + QueryPerformanceCounter
Or is it something else?
I may be overlooking other approaches. I say this because some languages like Java get time in nanoseconds rather than microseconds; how is this possible? Surely the Windows API will tap into the most accurate hardware timer available, which gives microsecond resolution. What's Java doing, and can I copy that?
PS, I have no idea how to tag this, please add as you think appropriate
QueryPerformanceFrequency and QueryPerformanceCounter are going to give you the highest-resolution timer. They are essentially a wrapper around the rdtsc instruction, which counts elapsed CPU cycles.
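For reference, a minimal sketch of the usual QPC pattern (Windows only; error handling omitted): the counter ticks divided by the frequency give elapsed seconds.

    #include <windows.h>
    #include <iostream>

    int main() {
        LARGE_INTEGER freq, start, stop;
        QueryPerformanceFrequency(&freq);   // ticks per second, fixed at boot
        QueryPerformanceCounter(&start);

        Sleep(100);                         // the work being measured

        QueryPerformanceCounter(&stop);
        const double seconds =
            static_cast<double>(stop.QuadPart - start.QuadPart) / freq.QuadPart;
        std::cout << "Elapsed: " << seconds * 1000.0 << " ms\n";
    }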

Accurate time delta for moderate time intervals: GetTickCount64 vs QueryPerformanceCounter

There are lots of questions (here, here, here) about mechanisms for getting monotonic time on Windows and their various gotchas and pitfalls. I'm particularly interested in the accuracy (not precision) of the main options.
I'm looking to measure elapsed time on a single machine, when the time is on the order of multiple minutes to an hour. What I know so far:
QueryPerformanceCounter is great for short time intervals, but the frequency reported by QueryPerformanceFrequency can have an error on the order of 500 PPM, which translates to an error of about 2 seconds over an hour.
More concerning is that even on fairly recent processors, folks are seeing QPC misbehavior.
Microsoft recommends QPC above all else for short-term duration measurements. But short-term isn't defined in any absolute numbers.
GetTickCount64 is often cited as a nice, reliable, less precise alternative to QPC.
I've not found any good details about the accuracy of GetTickCount64. While it is less precise than QPC, how does its accuracy compare? What kind of error might I expect over an hour?
Some programs play with its resolution by using timeBeginPeriod, although I don't think this affects accuracy?
The docs talk about how GetTickCount64's resolution is not affected by adjustments made by the GetSystemTimeAdjustment function. Hopefully this means GetTickCount64 is monotonic and not adjusted ever? It is unusual wording...
GetSystemTimePreciseAsFileTime is an option for same-machine time deltas if I disable automatic time adjustment via SetSystemTimeAdjustment. It is backed by QPC. Is there any benefit to using this over QPC directly? (Perhaps it does sanitization or thread affinity tricks to avoid some of the issues encountered by direct QPC calls?)
One SO Q&A I found linked to this blog post, which has been particularly useful to read. While it doesn't answer my question directly, it dives into how QPC works on Windows, and how the common Linux monotonic time basically uses the same thing.
The gist is that both of them use rdtsc when an invariant TSC is available on modern hardware.
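For an empirical check on your own hardware, one hedged sketch is to sample both clocks around the same long sleep and see how far they diverge. Note that this only measures their mutual disagreement, not their absolute accuracy; for that you would compare both against an external reference such as an NTP-disciplined clock.

    #include <windows.h>
    #include <iostream>

    int main() {
        LARGE_INTEGER freq, qpc0, qpc1;
        QueryPerformanceFrequency(&freq);

        const ULONGLONG tick0 = GetTickCount64();   // ms since boot, ~10-16 ms granularity
        QueryPerformanceCounter(&qpc0);

        Sleep(60 * 1000);   // one minute here; lengthen to an hour for a real drift test

        const ULONGLONG tick1 = GetTickCount64();
        QueryPerformanceCounter(&qpc1);

        const double qpc_ms  = 1000.0 * (qpc1.QuadPart - qpc0.QuadPart) / freq.QuadPart;
        const double tick_ms = static_cast<double>(tick1 - tick0);

        std::cout << "QPC elapsed:            " << qpc_ms  << " ms\n";
        std::cout << "GetTickCount64 elapsed: " << tick_ms << " ms\n";
        std::cout << "Disagreement:           " << (qpc_ms - tick_ms) << " ms\n";
    }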

How to find out timeGetTime precision?

The timeGetTime function documentation says:
The default precision of the timeGetTime function can be five milliseconds or more, depending on the machine. You can use the timeBeginPeriod and timeEndPeriod functions to increase the precision of timeGetTime.
So the precision is system-dependent. But what if I don't want to increase the precision and just want to know what it is on the current system? Is there a standard way (e.g. an API) to get it? Or should I just poll timeGetTime for a while, look at what comes out, and deduce it from there?
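If you do go the polling route you suggest, a small sketch along these lines works; it assumes you link against winmm.lib and simply records the step sizes at which timeGetTime advances.

    #include <windows.h>
    #include <mmsystem.h>   // timeGetTime
    #include <iostream>
    #pragma comment(lib, "winmm.lib")   // MSVC: link the multimedia timer library

    int main() {
        // Busy-wait until timeGetTime() advances and record the step sizes;
        // the steps are the effective granularity (often 1 ms or ~15.6 ms).
        DWORD prev = timeGetTime();
        for (int transitions = 0; transitions < 10; ) {
            const DWORD now = timeGetTime();
            if (now != prev) {
                std::cout << "step: " << (now - prev) << " ms\n";
                prev = now;
                ++transitions;
            }
        }
    }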
I'd suggest using the GetSystemTimeAsFileTime function. It has low overhead and reflects the system clock. See this answer for more details about the granularity of time and the APIs for querying timer resolutions (e.g. NtQueryTimerResolution). Code to find out how the system file time increments can be found there too.
Windows 8 and Server 2012 provide the new GetSystemTimePreciseAsFileTime function, which is supposed to be more accurate. MSDN states it retrieves the time "with the highest possible level of precision (<1us)". However, this only works on Windows 8 and Server 2012, and there is very little documentation about how this additional accuracy is obtained. It seems MS is taking a Linux-like (gettimeofday) approach, combining the performance counter with the system clock.
This post may be of interest to you too.
Edit: As of February 2014 there is some more detailed information about time matters on MSDN: Acquiring high-resolution time stamps.
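A hedged sketch of the "observe how the file time increments" idea mentioned above: poll GetSystemTimeAsFileTime in a tight loop and print the step sizes. On Windows 8 / Server 2012 and later you can swap in GetSystemTimePreciseAsFileTime to compare its much finer steps.

    #include <windows.h>
    #include <iostream>

    // Convert FILETIME (100 ns units since 1601-01-01) to a 64-bit integer.
    static unsigned long long to_u64(const FILETIME& ft) {
        ULARGE_INTEGER u;
        u.LowPart  = ft.dwLowDateTime;
        u.HighPart = ft.dwHighDateTime;
        return u.QuadPart;
    }

    int main() {
        // Watch how the system file time steps forward; the step size is the
        // effective granularity of GetSystemTimeAsFileTime on this machine.
        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);
        unsigned long long prev = to_u64(ft);

        for (int transitions = 0; transitions < 5; ) {
            GetSystemTimeAsFileTime(&ft);
            const unsigned long long now = to_u64(ft);
            if (now != prev) {
                std::cout << "step: " << (now - prev) / 10.0 << " us\n";  // 100 ns -> us
                prev = now;
                ++transitions;
            }
        }

        // On Windows 8 / Server 2012 and later, repeat the loop with
        // GetSystemTimePreciseAsFileTime(&ft); to compare the precise variant.
    }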

Response time for a vxml application

I'm developing a VoiceXML application, and I want to define part of its performance requirements by specifying a maximum response time for an answer from the application.
The application runs on a server realized by two virtual machines, and you can call it over an ISDN connection.
I'm looking for a few things:
- Are there any "scientific sources" that describe the length of a "normal" response time in a dialog between two people, or between a voice application and a human?
--> What response time can I assume? (For example, when the user chooses "option 1", after two seconds the application says "you chose option 1" - but what time is realistic? Are two seconds already too long, so that the user "feels" this is not a "real" conversation?)
(- Are there any special delays when calling an application from the telephone network?)
thanks in advance
I haven't seen any studies that show this information, but I would expect it to be highly contextual. Simple responses to simple questions would be quick, but complex questions with long answers would take longer. And, from a human perspective, there are all sorts of emotional responses and sociological behaviors that affect the speed of a person's response.
If I understand your goal correctly, you want to specify a maximum response time for the IVR (VoiceXML system) to respond back to a caller.
Even this can be contextual. If the machine needs to look up some data, it might take longer. In systems I've built, if it's more than 2 seconds, we've played a "please wait" or other transitional message.
In practice, systems are fairly responsive. If they don't seem reasonably responsive, you have a problem or some other artifact in place. And, with machines, people expect a little bit more of a delay than a normal human operator (with humans, there is also a lot of non-verbal noise that lets the user know their input is being accepted, like keyboard sounds).
As for delays on the phone network, not so much any more. But, some international calls or weird routing can still introduce some unnatural delays.
To be more specific, 2 seconds is too long. If you know you have server delays, add in some sort of audio cue to let people know their input has been taken. I've seen a few shops add a cute (for the first 50 times) processing sound to let users know their speech was recognized.

Do you inflate your estimated project completion dates? [closed]

If so why? How much?
I tend to inflate mine a little because I can be overly optimistic.
Hofstadter's Law: Any computing project will take twice as long as you think it will — even when you take into account Hofstadter's Law.
If you inflate your estimate based on past experiences to try and compensate for your inherent optimism, then you aren't inflating. You are trying to provide an accurate estimate. If however you inflate so that you will always have fluff time, that's not so good.
Oh yes, I've learnt to always multiply my initial estimate by two. That's why FogBugz's Evidence-Based Scheduling tool is so useful.
Any organization that asks its programmers to estimate time for coarse-grained features is fundamentally broken.
Steps to unbreak:
1) Hire technical program managers. Developers can double as these folks if needed.
2) Put any feature request, change request, or bug into a database immediately when it comes in. (My org uses Trac, which doesn't completely suck.)
3) Have your PMs break those requests into steps that each take a week or less.
4) At a weekly meeting, your PMs decide which tickets they want done that week (possibly with input from marketing, etc.). They assign those tickets to developers.
5) Developers finish as many of their assigned tickets as possible. And/or, they argue with the PMs about tasks they think are longer than a week in duration. Tickets are adjusted, split, reassigned, etc., as necessary.
6) Code gets written and checked in every week. QA always has something to do. The highest-priority changes get done first. Marketing knows exactly what's coming down the pipe, and when. And ultimately:
7) Your company falls on the right side of the 20% success rate for software projects.
It's not rocket science. The key is step 3. If marketing wants something that seems complicated, your PMs (with developer input) figure out what the first step is that will take less than a week. If the PMs are not technical, all is lost.
Drawbacks to this approach:
When marketing asks, "how long will it take to get [X]?", they don't get an estimate. But we all know, and so do they, that the estimates they got before were pure fiction. At least now they can see proof, every week, that [X] is being worked on.
We, as developers, have fewer options for what we work on each week. This is indubitably true. Two points, though: first, good teams involve the developers in the decisions about what tickets will be assigned. Second, IMO, this actually makes my life better.
Nothing is as disheartening as realizing at the 1-month mark that the 2-month estimate I gave is hopelessly inadequate, but can't be changed, because it's already in the official marketing literature. Either I piss off the higher-ups by changing my estimate, risking a bad review and/or missing my bonus, or I do a lot of unpaid overtime. I've realized that a lot of overtime is not the mark of a bad developer, or the mark of a "passionate" one - it's the product of a toxic culture.
And yeah, a lot of this stuff is covered under (variously) XP, "agile," SCRUM, etc., but it's not really that complicated. You don't need a book or a consultant to do it. You just need the corporate will.
The Scotty Rule:
make your best guess
round up to the nearest whole number
quadruple that (thanks Adam!)
increase to the next higher unit of measure
Example:
you think it will take 3.5 hours
round that to 4 hours
quadruple that to 16 hours
shift it up to 16 days
Ta-daa! You're a miracle worker when you get it done in less than 8 days.
Typically yes, but I have two strategies:
Always provide estimates as a range (e.g. 1d-2d) rather than a single number. The difference between the numbers tells the project manager something about your confidence, and allows them to plan better.
Use something like FogBugz's Evidence-Based Scheduling, or a personal spreadsheet, to compare your historical estimates to the time you actually took. That'll give you a better idea than always doubling. Not least because doubling might not be enough!
I'll be able to answer this in 3-6 weeks.
It's not called "inflating" — it's called "making them remotely realistic."
Take whatever estimate you think appropriate. Then double it.
Don't forget that you (an engineer) actually estimate in ideal hours (a scrum term), while management works in real hours.
The difference is that ideal hours are time without interruption (with a 30-minute warm-up after each interruption). Ideal hours don't include time in meetings, time for lunch, normal chit-chat, etc.
Take all of these into consideration to convert ideal hours into real hours.
Example:
Estimated time 40 hours (ideal)
Management will assume that is 1 week real time.
If you convert that 40 hours to real time:
Assume one meeting per day (duration 1 hour)
one break for lunch per day (1 hour)
plus 20% overhead for chit-chat, bathroom breaks, getting coffee, etc.
An 8-hour day is now 5 hours of work time (8 - meeting - lunch - warm-up).
Times 80% efficiency = 4 hours of ideal time per day.
Thus your 40-hour ideal estimate will take 80 hours of real time to finish.
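If it helps to see that arithmetic in one place, here is a small illustrative sketch; the figures (one hour of meetings, one hour for lunch, an hour of warm-up, 80% efficiency) are just the assumptions from the example above, not universal constants.

    #include <iostream>

    int main() {
        // Assumptions taken from the example above; tune for your own workplace.
        const double ideal_estimate_hours = 40.0;  // the engineer's estimate
        const double workday_hours        = 8.0;
        const double meetings_per_day     = 1.0;   // hours
        const double lunch_per_day        = 1.0;   // hours
        const double warmup_per_day       = 1.0;   // ~30 min after each interruption
        const double efficiency           = 0.80;  // chit-chat, coffee, etc.

        const double ideal_hours_per_day =
            (workday_hours - meetings_per_day - lunch_per_day - warmup_per_day) * efficiency;

        const double real_days  = ideal_estimate_hours / ideal_hours_per_day;
        const double real_hours = real_days * workday_hours;

        std::cout << "Ideal hours per day: " << ideal_hours_per_day << "\n";  // 4
        std::cout << "Real days needed:    " << real_days << "\n";            // 10
        std::cout << "Real hours needed:   " << real_hours << "\n";           // 80
    }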
Kirk : Mr. Scott, have you always multiplied your repair estimates by a factor of four?
Scotty : Certainly, sir. How else can I keep my reputation as a miracle worker?
A good rule of thumb is to estimate how long it will take and then add half again as much time to cover the following problems:
The requirements will change
You will get pulled onto another project for a quick fix
The new guy at the next desk will need help with something
The time needed to refactor parts of the project because you found a better way to do things
<sneaky> Instead of inflating your project's estimate, inflate each task individually. It's harder for your superiors to challenge your estimates this way, because who's going to argue with you over minutes.
</sneaky>
But seriously, through using EBS I found that people are usually much better at estimating small tasks than large ones. If you estimate your project at 4 months, it could very well be 7 months before it's done; or it might not. If your estimate of a task is 35 minutes, on the other hand, it's usually about right.
FogBugz's EBS system shows you a graph of your estimation history, and from my experience (looking at other people's graphs as well) people are indeed much better at estimating short tasks. So my suggestion is to switch from doing voodoo multiplication of your projects as totals, and start breaking them down upfront into lots of very small tasks that you're much better at estimating.
Then multiply the whole thing by 3.14.
A lot depends on how detailed you want to get, but additional "buffer" time should be based on a risk assessment at the task level, where you put in various amounts of buffer:
High Risk: 50% to 100%
Medium Risk: 25% to 50%
Low Risk: 10% to 25% (all dependent on prior project experience).
Risk areas include:
est. of requirement coverage (#1 risk area is missing components at the design and requirement levels)
knowledge of technology being used
knowledge/confidence in your resources
external factors such as other projects impacting yours, resource changes, etc.
So, for a given task (or group of tasks) covering component A, if the initial estimate is 5 days and it's considered high risk based on requirements coverage, you could add between 50% and 100%, giving 7.5 to 10 days.
Six weeks.
Industry standard: every request will take six weeks. Some will be longer, some will be shorter, everything averages out in the end.
Also, if you wait long enough, it no longer becomes an issue. I can't tell you how many times I've gone through that firedrill only to have the project/feature cut.
I wouldn't say I inflate them, so much as I try to set more realistic expectations based on past experience.
You can calculate project durations in two ways. One is to work out all the tasks involved and figure out how long each will take, factoring in delays, meetings, problems, etc. This figure always looks woefully short, which is why people always say things like "double it". The other comes with experience: after delivering a few projects you'll be able to tell very quickly, just by looking briefly at a spec, how long it will take, and, invariably, it will be double the figure arrived at by the first method...
It's a better idea to add specific buffer time for things like debugging and testing than to just inflate the total time. Also, by taking the time up front to really plan out the pieces of the work, you'll make the estimation itself much easier (and probably the coding, too).
If anything, make a point of recording all of your estimates and comparing them to actual completion time, to get a sense of how much you tend to underestimate and under what conditions. This way you can more accurately "inflate".
I wouldn't say I inflate them but I do like to use a template for all possible tasks that could be involved in the project.
Not all tasks in the list are applicable to every project, but having the list means I don't let any tasks slip through the cracks by forgetting to allow time for them.
As you find new tasks are necessary, add them to your list.
This way you'll have a realistic estimate.
I tend to be optimistic about what's achievable, and so I estimate on the low side. But I know that about myself, so I add on an extra 15-20%.
I also keep track of my actuals versus my estimates, and I make sure the time involved does not include other interruptions; see the accepted answer to my SO question on how to get back into the flow.
HTH
cheers
I wouldn't call additional estimated time on a project "inflated" unless you actually do complete your projects well before your original estimation. If you make a habit of always completing the project well before your original estimated time, then project leaders will get wise and expect it earlier.
What are your estimates based on?
If they're based on nothing but a vague intuition of how much code it would require and how long it would take to write that code, then you'd better pad them a LOT to account for subtasks you didn't think of, communication and synchronization overhead, and unexpected problems. Of course, that kind of estimate is nearly worthless anyway.
OTOH, if your estimates are based on concrete knowledge of how long it took last time to do a task of that scope with the given technology and number of developers, then inflation should not be necessary, since the inflationary factors above are already included in those past experiences. Of course, there will probably be new factors whose influence on the current project you can't foresee; such risks justify a certain amount of additional padding.
This is part of the reason why Agile teams estimate tasks in story points (an arbitrary and relative measurement unit), then as the project progresses track the team's velocity (story points completed per day). With this data you can then theoretically compute your completion date with accuracy.
I take my worst case scenario, double it, and it's still not enough.
Under-promise, over-deliver.
Oh yes, the general rule from long, hard experience is: give the project your best time estimate, double it, and that's about how long it will actually take!
We have to, because our idiot manager always reduces them without any justification whatever. Of course, as soon as he realizes we do this, we're stuck in an arms race...
I fully expect to be the first person to submit a two-year estimate to change the wording of a dialog.
sigh.
As many have said, it's a delicate balance between experience and risk.
Always start by breaking down the project into manageable pieces; in fact, into pieces you can easily imagine yourself starting and finishing in the same day.
When you don't know how to do something (like when it's the first time) the risk goes up
When your risk goes up, that's where you start with your best guess, then double it to cover some of the unexpected, but remember, you are doing that on a small piece of the project, not the whole project itself
The risk also goes up when there's a factor you don't control, like the quality of an input, or that library that seems like it can do everything you want but that you've never tested
Of course, when you gain experience on a specific task (like connecting your models to the database), the risk goes down
Sum everything up to get your subtotal...
Then, on the whole project, always add about another 20-30% (that number will change depending on your company) for all the answers/documents/okays you will be waiting for, the meetings everyone keeps forgetting, the changes of mind during the project, and so on... that's what we call the human/political factor
And again add another 30-40% to account for testing and the corrections that go beyond the tests you usually do yourself... such as when you first show it to your boss or to the customer
Of course, if you look at all this, it ends up that you can simplify it with the magical "double it" formula, but the difference is that you'll know what you can squeeze into a tight deadline, what you can commit to, which tasks are dangerous, and how to build your schedule around the important milestones.
I'm pretty sure that if you note the time spent on each pure "coding" task and compare it to your estimates in relation to its riskiness, you won't be far off. The thing is, it's not easy to think of all the small pieces ahead of time and to be realistic (rather than optimistic) about what you can do without hitting any hurdles.
I say when I can get it done. I make sure that change requests are followed up with a new estimate, not a "Yes, I can do that" that never mentions it will take more time. The person requesting the change will not assume it will take longer.
Of course, you'd have to be an idiot not to add 25-50%.
The problem is when the idiot next to you keeps coming up with estimates which are 25-50% lower than yours and the PM thinks you are stupid/slow/swinging it.
(Has anyone else noticed project managers never seem to compare estimates with actuals?)
I always double my estimates, for the following reasons:
1) Buffer for Murphy's Law. Something's always gonna go wrong somewhere that you can't account for.
2) Underestimation. Programmers always think things are easy to do. "Oh yeah, it'll take just a few days."
3) Bargaining space. Upper Management always thinks that schedules can be shortened. "Just make the developers work harder!" This allows you to give them what they want. Of course, overuse of this (more than once) will train them to assume you're always overestimating.
Note: It's always best to put buffer at the end of the project schedule, and not for each task. And never tell developers that the buffer exists, otherwise Parkinson's Law (Work expands so as to fill the time available for its completion) will take effect instead. Sometimes I do tell Upper Management that the buffer exists, but obviously I don't give them reason #3 as justification. This, of course depends on how much your boss trusts you to be truthful.
