I'm torn between two worlds: I have a very intuitive (but intricate) mechanism in a command-line tool, and I'm wondering to what extent I should explain it.
I can go the simple way, not explaining it at all and trusting my users to figure it out themselves, but then some users might never discover this particular feature.
I can go the scary way and put a lot of mathematical notation into the help output and the man pages, but then users might think it's too complicated and develop an inexplicable fear of my tool or this particular feature.
How can I address both experimental and, let's say, conservative users (the ones that don't go the extra mile when something isn't explained properly)?
Details:
The tool is about date and time arithmetic, in particular calculating durations between two dates and/or times, and formatting the results according to format specs.
My internal design uses a multiplication table like this:
     x   d   t   dt
x    x   x   x   x
d    x   D   x   D
t    x   x   T   x
dt   x   D   x   S
where x is unknown (unparsable) input, d is a date, t is a time, and dt is a datetime; D is a date duration (resolution 1 day), T is a time duration (resolution 1 second), and S is a time-stamp duration (resolution 1 second).
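If it helps to see the table operationally, here is a minimal Ruby sketch of it as a lookup (names hypothetical, not my actual implementation): a hash maps pairs of input types to the resulting duration type, and anything not listed falls back to x.

RESULT_TYPE = {
  %w[d d]   => "D",  # date with date         => date duration
  %w[d dt]  => "D",  # date with datetime     => date duration
  %w[dt d]  => "D",
  %w[t t]   => "T",  # time with time         => time duration
  %w[dt dt] => "S",  # datetime with datetime => time-stamp duration
}
RESULT_TYPE.fetch(%w[dt dt], "x")  # => "S"
RESULT_TYPE.fetch(%w[d t], "x")    # => "x" (date and time don't combine)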
Now the result depends on the duration type and the format specifiers given, and I'm really lacking a succinct way of explaining this, so I do it by example:
'%d' will return the duration in days (like 12 days)
'%w' will return the duration in weeks (like 1 week)
'%w %d' will return the duration in weeks and days (like 1 week and 5 days)
...
'%S' will return the duration in seconds (e.g. 86464 seconds)
'%M' will return the duration in minutes (e.g. 1441 minutes)
'%H' will return the duration in hours (e.g. 24 hours)
'%H %M %S' will return the duration in hours, minutes and seconds (24h 1m 4s)
'%H %S' will return the duration in hours and seconds (24h 64s)
...
I could probably get the point across with just these few examples, but there's no formal explanation or anything in there.
For clarity:
The issue I'm trying to address is that you can combine any of the flags (seconds, hours, days, months, etc.) and the program will "intelligently" give you a result. Like %Y %d would give you a year and the number of days (in the range 0 to 365) whereas %Y %m %d would give you the days in the range 0 to 30 (because the rest is "captured" in the month).
Example: %Y %d gives 1 year 90 days whereas %Y %w %d gives 1 year 12 weeks 6 days
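To make the "capture" rule concrete, here is a simplified Ruby sketch of the decomposition (names hypothetical; it assumes a 365-day year and a 30-day month purely for illustration): the requested specifiers are sorted from largest to smallest unit, and each unit consumes as much of the remainder as it can.

UNIT_SECONDS = {
  "%Y" => 365 * 86_400,  # assumption: 1 year = 365 days
  "%m" => 30 * 86_400,   # assumption: 1 month = 30 days
  "%w" => 7 * 86_400,
  "%d" => 86_400,
  "%H" => 3_600,
  "%M" => 60,
  "%S" => 1,
}

def decompose(total_seconds, specs)
  # Largest unit first; each unit takes what it can from the remainder.
  specs.sort_by { |s| -UNIT_SECONDS.fetch(s) }.map do |spec|
    value, total_seconds = total_seconds.divmod(UNIT_SECONDS[spec])
    [spec, value]
  end.to_h
end

decompose(455 * 86_400, %w[%Y %d])     # => {"%Y"=>1, "%d"=>90}
decompose(455 * 86_400, %w[%Y %w %d])  # => {"%Y"=>1, "%w"=>12, "%d"=>6}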
If you're looking to create help text within the tool itself, look at the help for the Linux date command.
Alternatively, you could do something like this:
$ your_app --help
usage: your_app [OPTIONS] [FORMAT]
Returns the elapsed time between blah blah....
FORMAT:
// list formats here
OPTIONS:
--help Display this help text
--help-detailed Display more extensive help text
--help-examples Display example uses
If I were the user, I'd want --help to list all of the options as a reference, and I'd want the man pages to include as much detail as possible. I tend to use --help as a reminder and the man pages as the authoritative reference.
And no matter how well-written the text may be, a few concrete examples are always valuable.
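For the tiered --help idea above, here is a minimal sketch using Ruby's OptionParser (the DETAILED_HELP and EXAMPLES constants are placeholders, not real help content):

require "optparse"

DETAILED_HELP = "long-form explanation of the duration rules goes here"
EXAMPLES = "your_app 2012-03-01 2012-03-13 -f '%w %d'  =>  1 week 5 days"

parser = OptionParser.new do |opts|
  opts.banner = "usage: your_app [OPTIONS] [FORMAT]"
  opts.on("--help", "Display this help text") { puts opts; exit }
  opts.on("--help-detailed", "Display more extensive help text") { puts DETAILED_HELP; exit }
  opts.on("--help-examples", "Display example uses") { puts EXAMPLES; exit }
end
parser.parse!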
I run a job daily. Today, that job took 1:45:09. I have a lot of such durations for that job from the past weeks and I want to show them graphically in a simple column chart. On the Y axis I want duration ticks from 0:00:00 to 5:00:00 or so, so that I can easily compare the runtimes from the past weeks and see if the job is gradually taking longer and longer.
I have read and implemented a lot of answers from StackOverflow and other internet resources, but none of them fit my purpose. When using Unix timestamps (seconds since 1970, etc.) I get columns that are all the same height, and Y-axis ticks in years from 1970 to now instead of hours.
Another option was to just calculate the minutes or seconds. Then the differences become apparent, but instead of time elements on the Y axis and in the tooltips I get plain integers.
Can someone show me how to achieve my goal in a fiddle? The question looks common enough to me for any monitoring software.
-- EDIT --
Here is a Photoshop sample of what I am trying to achieve:
On the Y-axis: a time scale. In the tooltip: date, objectname and time taken.
-- END EDIT --
BTW, I have no chart type preference. The usual column charts just seem to fit the purpose.
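In case it helps clarify what I mean: converting each duration to plain seconds for the column values is easy enough (minimal Ruby sketch below, names mine); the part I'm missing is getting the chart to format those seconds back into time ticks and tooltips.

def duration_to_seconds(str)
  h, m, s = str.split(":").map(&:to_i)
  h * 3600 + m * 60 + s
end

def seconds_to_label(total)
  h, rest = total.divmod(3600)
  m, s = rest.divmod(60)
  format("%d:%02d:%02d", h, m, s)
end

duration_to_seconds("1:45:09")  # => 6309
seconds_to_label(6309)          # => "1:45:09"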
Thanks for any help!
ISO 8601:2004 defines a time interval format which can be expressed in various formats, including "c) by a start and a duration" and "d) by a duration and an end".
Going by the Wikipedia article alone, it seems like the examples only provide for start and end to be "time points", e.g. 2007-03-01T13:00:00Z/P1Y2M10DT2H30M would be a time interval of 1 year, 2 months, 10 days, 2 hours and 30 minutes starting on 2007-03-01 13:00 UTC.
Is it possible to represent a time interval which starts or ends after some duration? For example, P1Y/P1Y2M10DT2H30M would be the same duration but "starting" after 1 year.
Such syntax would be useful to model relative time intervals, especially when combined with repeating qualifiers. For example, a monthly retirement payout could reasonably be expressed as R/P65Y/1M.
The expression "P1Y/P1Y2M10DT2H30M" is not defined in ISO-8601. This paper mentions following four variants of a "time interval":
a) Start and end are defined as points in time, example =>
2019-08-27/2019-08-29
b) Duration without any fixed anchor on the (date)- or timeline,
example => P3D
c) Start as point in time and a duration, example => 2019-08-27/P3D
d) A duration and the end as point in time, example =>
P3D/2019-08-29
So your question "Is it possible to represent a time interval which start or ends after some duration?" can be answered by a clear "No". And honestly said, a double duration expression will confuse most users.
I am building an expert system that will run as a web service (i.e. continuously).
Some of the rules in it are coded procedurally and deal with intervals — the rule processor maps over a set of user's events and calculates their total duration within a certain time-frame which is defined in relative terms (like N years ago). This result is then compared with a required threshold to determine whether the rule passes.
So for example the rule calculates for how long you were employed from 3 years ago to 1 year ago and passes if it's more than 9 months.
I have no problem calculating the durations. The difficult part is that I need to display to the user not simply whether the particular rule passed, but also the exact date when this "true" is due to become "false". Ideally, I'd love to display one more step ahead, i.e. when "false" switches back to "true" again, if there's data for this, of course. So on the day when the total duration of their employment for the last year drops below 6 months, the rule reruns, the result changes, and they get an email: "hey, your result has just changed, you no longer qualify, but in 5 months you will qualify once again".
     |                                     |               |
_____|||1|||_______|||2|||__________|||3|||________|||4|||...
     |                                     |               |
  3 y. ago ------------------------- 1 y. ago            Now
min 9 months work experience is required
In the example above the user qualifies, but is going to stop qualifying; we need to tell them up front "expect this to happen in 44 days" (the system also schedules a background job for that date), and when that will reverse back to true.
     |                                   |                 |
____________________|1|__________________||||||||2||||||||...
     |                                   |                 |
  3 y. ago ------------------------ 1 y. ago             Now
min 9 months work experience is required
In this one the user doesn't qualify, we need to tell them when they are going to start to qualify.
    |                                                     |
_____|||1|||___________|||||||2|||||||_________|||3|||____...
    |                                                     |
 1 y. ago ---------------------------------------------- Now
at least 6 months of work experience is required
And here, when they are due to stop qualifying: there's no event going on for them currently, so once these events roll far enough to the left, it's over until the user changes their CV and the engine re-runs with a new dataset.
I hope it's clear what I want to do. Is there a smart algorithm that can help me here? Or do I just brute-force the solution?
UPD:
The solution I am developing lies in creating a 2-dimensional graph where each point signifies a date (x-axis value) when the curve of total duration for the timeframe (y-axis value) changes direction. There are 4 such breakpoints for any given event. This graph will allow me to do a linear interpolation between two values to find when exactly the duration line crosses the threshold. I am currently writing this in Ruby.
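A minimal Ruby sketch of that interpolation step (names hypothetical): given the breakpoints as (date, total duration) pairs sorted by date, find the segment that straddles the threshold and interpolate within it.

require "date"

def crossing_date(breakpoints, threshold)
  breakpoints.each_cons(2) do |(d1, v1), (d2, v2)|
    # Skip segments that stay on one side of the threshold.
    next unless (v1 - threshold) * (v2 - threshold) <= 0 && v1 != v2
    fraction = (threshold - v1).to_f / (v2 - v1)
    return d1 + ((d2 - d1) * fraction).round
  end
  nil # the curve never crosses the threshold in the covered range
end

points = [[Date.new(2024, 1, 1), 300], [Date.new(2024, 3, 1), 240]]
crossing_date(points, 270)  # => 2024-01-31 (halfway along the segment)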
It occurred to me that I'm not aware of a mechanism to store dates before 1 January 1970 as Unix timestamps. Since that date is the Unix "epoch", this isn't much of a surprise.
But, even though it's not designed for that, I still wish to store dates in the far past in Unix format. I need this for reasons.
So my question is: how would one go about making Unix timestamps contain "invalid" but still working dates? Would storing a negative amount of seconds work? Can we even store negative amounts of seconds in a Unix timestamp? I mean, isn't it unsigned?
Also, if I'm correct, I could only store dates as far back as 13 December 1901, 20:45:52. Could this be extended any further back in history by any means?
Unix Time is usually a 32-bit number of whole seconds from the first moment of 1970 in UTC, the epoch being 1 January 1970 00:00:00 UTC. That means a range of about 136 years, with about half on either side of the epoch. Negative numbers are earlier, zero is the epoch, and positive numbers are later. For a signed 32-bit integer, the values range from 1901-12-13 20:45:52 UTC to 2038-01-19 03:14:07 UTC.
This is not written in stone. Well, it is written, but in a bunch of different stones. Older ones say 32-bit, newer ones 64-bit. Some specifications say that the meaning is "implementation-defined". Some Unix systems use an unsigned integer to extend only into the future past the epoch, but the usual practice has been a signed number. Some use a float rather than an integer. For details, see the Wikipedia article on Unix Time, and this Question.
So, basically, your Question makes no sense on its own. You have to know the context: your programming language (standard C, other C, Java, etc.), environment (POSIX-compliant or not), particular software library, database store, or application.
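For example, in Ruby (whose Time is not limited to 32 bits) a negative count simply means before the epoch; whether your particular stack accepts this is exactly the context question above:

Time.at(0).utc               # => 1970-01-01 00:00:00 UTC (the epoch)
Time.at(-86_400).utc         # => 1969-12-31 00:00:00 UTC (one day earlier)
Time.at(-2_208_988_800).utc  # => 1900-01-01 00:00:00 UTC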
Avoid Count-From-Epoch
Add to this lack of specificity the fact that a couple dozen other epochs have been used by various software systems, some extremely popular and common. Examples include January 1, 1601 for NTFS file system & COBOL, January 1, 1980 for various FAT file systems, January 1, 2001 for Apple Cocoa, and January 0, 1900 for Excel & Lotus 1-2-3 spreadsheets.
Further add the fact that different granularities of count have been used. Besides whole seconds, some systems use milliseconds, microseconds, or nanoseconds.
I recommend against tracking date-time as a count-from-epoch. Instead use specific data types where available in your programming language or database.
ISO 8601
When data types are not available, or when exchanging data, follow the ISO 8601 standard which defines sensible string formats for various kinds of date-time values.
Date
2015-07-29
A date-time with an offset from UTC (Z is zero/Zulu, i.e. UTC; note the padding zero on the offset)
2015-07-29T14:59:08Z
2001-02-13T12:34:56.123+05:30
Week (with or without day of week)
2015-W31
2015-W31-3
Ordinal date (day-of-year)
2015-210
Interval
"2007-03-01T13:00:00Z/2008-05-11T15:30:00Z"
Duration (format of PnYnMnDTnHnMnS)
P3Y6M4DT12H30M5S = "period of three years, six months, four days, twelve hours, thirty minutes, and five seconds"
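If you happen to work in Ruby, the standard library can emit several of the forms above directly; a small sketch using the same dates as the examples:

require "date"
require "time"  # adds Time#iso8601

Date.new(2015, 7, 29).iso8601               # => "2015-07-29"
Time.utc(2015, 7, 29, 14, 59, 8).iso8601    # => "2015-07-29T14:59:08Z"
Date.new(2015, 7, 29).strftime("%G-W%V-%u") # => "2015-W31-3" (ISO week date)
Date.new(2015, 7, 29).yday                  # => 210 (ordinal date 2015-210)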
Search StackOverflow.com for many more Questions and Answers on these topics.
I need to store a time log for employees. We need to know that they spent, for example, 20 minutes working on project A and 1 hr 30 minutes working on project B.
Before I run a migration, I'd like to understand the best practice for storing this sort of information. We don't need to know anything about dates or times, so I don't want to get into date/time duration calculations; it's just user-entered hours and minutes. However, we will certainly have to do calculations at some point, e.g. the employee mentioned above worked a total of 1 hr 50 minutes.
I was going to create 2 integer fields. One for the hours and one for the minutes, and deal with the calculations later. Does that make sense, or am I making a mistake at the first hurdle?
Thanks
Just store the duration in seconds as an integer value.
You can use the chronic_duration gem if you want to easily parse human input into durations, for example to convert "1 hour and 40 minutes" into 100 minutes or 6000 seconds.
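A minimal sketch of that approach (plain Ruby, names mine): store integer seconds per entry, sum for totals, and format only when displaying.

entries = [20 * 60, 90 * 60]         # 20 min on project A, 1 h 30 min on B
total = entries.sum                  # => 6600 seconds
hours, rest = total.divmod(3600)
minutes = rest / 60
format("%dh %02dm", hours, minutes)  # => "1h 50m"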
No Ruby magic for your specific purposes, just store your time in minutes (integer).
So you will have something like
@employee.time_spent_on_project_a = params[:hour].to_i * 60 + params[:minute].to_i
and store it as an integer field
However, I have to mention that Ruby makes dealing with dates/times very easy. If you would rather record the time than have input fields, you can do something like:
@start_time = Time.now
# ...
@end_time = Time.now
and then you can just save your value as @end_time - @start_time (a decimal number of seconds). So you might want to reconsider... or not, depending on how much precision you want.