I wrote a formula that calculates the time difference between two datetime cells in Google Sheets.
Here is the formula for the time difference in minutes.
=(HOUR(B2)-HOUR(A2)+DAYS(B2,A2)*24)*60 + MINUTE(B2)-MINUTE(A2)
where A2 is the start datetime, and B2 is the end datetime.
I feel like Google Sheets should have a simpler way of calculating time differences, but I can't figure out which function to use. Does Google Sheets convert to and from Unix timestamps, for example? Is there a better way?
In hours:
=24*(B2-A2)
In minutes:
=24*60*(B2-A2)
Or, to cover the whole column at once, use these in row 2 (the IFERROR(1/(1/…)) wrapper simply blanks out rows where the result would be 0, such as empty rows).
In hours:
=INDEX(IFERROR(1/(1/(ROUND(24*(B2:B-A2:A), 2)))))
In minutes:
=INDEX(IFERROR(1/(1/(ROUND(1440*(B2:B-A2:A), 2)))))
I am admittedly bad at formulas so I apologize if this comes off as stupid:
I need to compare two times for my swimmers and can't figure out a clear way to write a formula to compare their time vs. a cut time they are shooting for.
Ex:
Swimmer has a 2:44.01 in her event, and needs a 2:41.96 for a specific meet.
In this instance, her time is in B3 and the cut time is in C3. How do I write a formula that will display the difference in those times in cell C4, right below the cut time?
Everything I have seen requires an hh:mm:ss.00 format or similar, but how do I write it so that I don't have so many stinkin' 0's in each cell? I wish it would allow a simple mm:ss.00 without needing hours for the value to register as a time.
My other issue is that in C4, where I want the difference to show up, how do I start that equation? Do I start with SUM or ELAPSED or what? I'm out of my league here so please help!
try this in C4:
=TEXT(VALUE("00:"&B3)-VALUE("00:"&C3), "m:ss.00")
...and format all fields as Plain text
I have a sheet where I record my working hours (this is more for me to remind me to stop working than anything else). For every day, I have three possible shifts - early, normal & late, and I have a formula which will sum up any times put into these columns and give me the daily total hours.
To summarise the duration of time spent working in a day, I use the following formula: =(C41-B41)+(E41-D41)+12+(G41-F41) which is:
early end time minus early start time
normal end time minus normal start time PLUS 12 hours
late end time minus late start time
Which gives me output like this:
What I cannot seem to achieve is the ability to sum the daily totals into something which shows me the total hours worked over one week. If I attempt to sum the daily totals together for the example image shown, I get some wild figure such as 1487:25:00 when formatted as 'Duration', or 23:25:00 when formatted as 'Time'!
All my cells where I record the hours worked are formatted as 'Time'
When using arithmetic on date and time values in Google Sheets, it's important to remember that the internal representation is numeric: a date is the number of days since the sheet's epoch (December 30, 1899), and a time is a fraction of a day.
It follows that if you want to add 12 hours to a duration, you should not write "+12", because that in fact adds 12 days. Add "+12/24" instead. In other words, try the following formula instead of the one you are using now:
=(C41-B41)+(E41-D41)+(12/24+G41-F41)
Once each daily total is a correct duration, summing the daily cells and formatting the result as 'Duration' gives a sensible weekly total (the wild 1487:25:00 figure comes from those extra 12 days being added on every working day).
I am working with time series data that omit data for the weekend. When graphing these time series in D3 v4 the graph interpolates over the weekend. See the following URL for an illustration (including code, data, and graph output):
No records for weekend
Instead, I want a gap at the weekend; graph stopping on Friday and resuming on Monday.
I could fix the problem by creating dummy records for the weekend, with the value 'NA', and using D3's line.defined method, as shown in the following:
Data has NA records
However, generating dummy records feels to me like excessively heavy lifting. Is there a simple, natural way to get D3 to leave a gap when time series records are missing?
Is there a simple, natural way to get D3 to leave a gap when time series records are missing?
Unfortunately no, that's the normal behaviour of a time scale. According to Mike Bostock, D3 creator,
A d3 time scale should be used when you want to display time as a continuous, quantitative variable, such as when you want to take into account the fact that days can range from 23-25 hours due to daylight savings changes, and years can vary from 365-366 days due to leap years.
So the time scale was designed with continuous time in mind.
Your current approach in the line generator...
.defined(function(d) { return !isNaN(d.value); })
... doesn't work, because all the dates in your CSV have values, so D3 will connect the dots.
That being said, if you want to keep the gap, just use dummy records (null or any non-numeric value) for the weekends together with line.defined, as in your second link. For example:
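Here is a minimal sketch of that idea with D3 v4, assuming data is your parsed CSV as {date, value} objects sorted by date, and that x, y and svg are already set up as in your code; the helper name padWeekends is just illustrative:

// Insert a single null-valued dummy record wherever two consecutive records
// are more than one day apart (e.g. Friday -> Monday), so line.defined can
// break the path there.
function padWeekends(rows) {
  var oneDay = 24 * 60 * 60 * 1000;
  var padded = [];
  rows.forEach(function (d, i) {
    padded.push(d);
    var next = rows[i + 1];
    if (next && (next.date - d.date) > oneDay) {
      padded.push({ date: new Date(d.date.getTime() + oneDay), value: null });
    }
  });
  return padded;
}

var line = d3.line()
  .defined(function (d) { return d.value !== null && !isNaN(d.value); })
  .x(function (d) { return x(d.date); })
  .y(function (d) { return y(d.value); });

// Bind the padded data, so the line stops on Friday and resumes on Monday.
svg.append("path")
  .datum(padWeekends(data))
  .attr("class", "line")
  .attr("d", line);

The time scale stays continuous (the weekend still occupies horizontal space), but the line itself shows a gap.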
I don't have a real question; I'm more seeking creative input for a problem.
I want to compare two (most likely unequal) Date values and calculate a ratio of their similarity. For example, comparing 08.01.2013 and 10.01.2013 should give a relatively high value, but 08.01.2013 and 17.04.1998 should give a really low one.
But now I'm not sure how exactly I should calculate the similarity. First I was thinking about turning the Date values into Strings and then using the edit distance on them (the number of single-character operations needed to transform one String into another). This seems like a good idea for some cases and I'll definitely implement it, but I also need an appropriate calculation for something like 31.01.2013 and 02.02.2013.
Why not use the difference in days between the two dates as a starting point?
It is "low" for similar dates and "high" for dissimilar ones; from there, a little arithmetic turns it into a "similarity ratio" that matches your requirements.
Consider a fixed reference date "early enough" in the past if you get stuck.
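A quick sketch of that idea in JavaScript (adapt to your language of choice), assuming similarity should be 1.0 for identical dates and fall towards 0 as they drift apart; the function name and the decay constant are my own choices, not anything standard:

// Similarity from the difference in days: 1 for equal dates, about 0.5 at
// scaleDays apart, approaching 0 for dates far apart.
function dateSimilarity(a, b, scaleDays) {
  scaleDays = scaleDays || 365; // how quickly similarity decays (assumed, tune to taste)
  var msPerDay = 24 * 60 * 60 * 1000;
  var diffDays = Math.abs(a - b) / msPerDay;
  return 1 / (1 + diffDays / scaleDays);
}

console.log(dateSimilarity(new Date(2013, 0, 8), new Date(2013, 0, 10))); // ~0.99 (two days apart)
console.log(dateSimilarity(new Date(2013, 0, 8), new Date(1998, 3, 17))); // ~0.06 (about 15 years apart)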
The edit distance can be calculated using the Levenshtein distance.
A change in the year would mean a lot more "distance" than a change in the day.
The usual way to compare dates would be to calculate the distance in days or hours. To do that, you'd convert both dates into serial day numbers. Microsoft, for instance, offers a DateDiff() function for date comparisons and distance calculations.
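And if you do go the string route, here is a plain Levenshtein distance sketch (again in JavaScript, purely for illustration); note the weakness mentioned above: a change in the year costs barely more edits than a change in the day, so you would want to weight the fields or combine it with the day-difference approach:

// Classic dynamic-programming Levenshtein distance between two strings.
function levenshtein(a, b) {
  var dp = [];
  for (var i = 0; i <= a.length; i++) {
    dp[i] = [i]; // cost of deleting i characters
  }
  for (var j = 0; j <= b.length; j++) {
    dp[0][j] = j; // cost of inserting j characters
  }
  for (var i = 1; i <= a.length; i++) {
    for (var j = 1; j <= b.length; j++) {
      var cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,       // deletion
        dp[i][j - 1] + 1,       // insertion
        dp[i - 1][j - 1] + cost // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

console.log(levenshtein("08.01.2013", "10.01.2013")); // 2 (two days apart)
console.log(levenshtein("08.01.2013", "08.01.1998")); // 4 -- still "close" by edit distance, yet 15 years apart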
I have an array of date => value pairs, like this:
"2010-10-12 14:58:36" =>13.4
"2010-10-17 14:58:36" =>12
"2010-10-22 14:58:36" =>17.6
"2010-10-27 14:58:36" =>22
"2010-11-01 14:58:36" =>10
[...]
I use these date-value pairs to draw a graph in JavaScript.
Now I would like to mark the dates that are "very special".
My problem (and question) is: which aspects should I consider in order to find those specific dates?
As a human, I would pick the date "2010-10-17 14:58:36", because "something" must have happened on that date: the value rises by 5.6 points to the next date, which is the biggest step up, followed by one more big step up. On the other hand, the date "2010-10-27 14:58:36" is also a "highlight", because it is
the top of all values, and
the date after which the biggest step down occurs.
So as a human, I would choose both dates.
My problem is: what could such an algorithm look like?
I tried averaging the values for n dates before and after the current value, which results in an accumulation of those specific dates at the beginning and at the end of the graph.
So I tried to find the biggest percentage step up (relative to the previous date), but I'm not sure whether that really finds the specific dates I'm looking for.
How would you tackle the problem?
Thank you.
Looks like a financial stock analysis problem :-) You are looking for time series analysis; this is a statistical topic. I'd recommend using the R programming language to play with it (you can do complex statistical things very quickly). There are dozens of specialised packages, certainly financial ones too. Once you know what you want, you can implement the solution in any other language.
Just try googling "time series analysis r".
EDIT: note that R is very powerful; I'd bet there are tools for calling R packages from other languages.
If you have information over a timeline, you could use interpolation.
Polynomial interpolation will give you an approximating polynomial that goes through the points.
What's nice about this is that you can then use mathematical analysis, which is easy on polynomials, to find interesting points (large gradients, min/max points, etc.).
You also get an approximation of how the function behaves, so you could extrapolate "future" points and see what may happen in the near future.
Of course, looking into the future isn't very accurate, but forms of interpolation are used in analytics to spot trends and behaviours.
And of course, it's easy to plot a polynomial, which is always nice.
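A minimal sketch of that approach in JavaScript, assuming the dates have been converted to numeric x values (here, day offsets from the first record); the Lagrange form is used only as a simple way to evaluate the interpolating polynomial, and the function names are my own:

// Evaluate the unique polynomial through the points (xs[i], ys[i]) at x,
// using the Lagrange form.
function lagrangeInterpolate(xs, ys, x) {
  var result = 0;
  for (var i = 0; i < xs.length; i++) {
    var term = ys[i];
    for (var j = 0; j < xs.length; j++) {
      if (j !== i) {
        term *= (x - xs[j]) / (xs[i] - xs[j]);
      }
    }
    result += term;
  }
  return result;
}

// Approximate the derivative of the interpolated polynomial at x: large
// absolute values mark steep rises or falls, sign changes mark local min/max.
function derivativeAt(xs, ys, x, h) {
  h = h || 1e-3;
  return (lagrangeInterpolate(xs, ys, x + h) - lagrangeInterpolate(xs, ys, x - h)) / (2 * h);
}

var xs = [0, 5, 10, 15, 20];       // days since the first record
var ys = [13.4, 12, 17.6, 22, 10]; // the values from the question
console.log(derivativeAt(xs, ys, 7.5));  // positive slope (~ +1.2/day): the rise towards 2010-10-22
console.log(derivativeAt(xs, ys, 17.5)); // negative slope (~ -2.2/day): the drop after 2010-10-27

Bear in mind that high-degree polynomial interpolation can oscillate badly on longer series, so in practice you would interpolate piecewise (e.g. with splines).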
This is really a question of statistics (http://en.wikipedia.org/wiki/Statistics), and of the context of your data and what you're looking to highlight. For example, the fact that between 12/10 and 17/10 the data moved down 1.4 units may be more useful in some scenarios than a larger positive step change.
You need sample data on which to build a function that can calculate an expected value for any given date; for instance, by averaging the values of the day before, the same weekday of the previous week, of the previous month, and so on. After that, decide on a threshold: interesting dates are those for which the real value falls outside expected value ± threshold.
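A minimal sketch of this in JavaScript, assuming the series is an array of {date, value} objects sorted by date, and using a plain average of the previous windowSize points as the expected value (the richer expected value described above, e.g. same weekday of the previous week, would slot into the same place); the window size and threshold are knobs to tune on real data:

// Flag dates whose value falls outside expected +/- threshold, where
// "expected" is the average of the previous windowSize values.
function findSpecialDates(series, windowSize, threshold) {
  var special = [];
  for (var i = windowSize; i < series.length; i++) {
    var sum = 0;
    for (var j = i - windowSize; j < i; j++) {
      sum += series[j].value;
    }
    var expected = sum / windowSize;
    if (Math.abs(series[i].value - expected) > threshold) {
      special.push(series[i].date);
    }
  }
  return special;
}

var series = [
  { date: "2010-10-12 14:58:36", value: 13.4 },
  { date: "2010-10-17 14:58:36", value: 12 },
  { date: "2010-10-22 14:58:36", value: 17.6 },
  { date: "2010-10-27 14:58:36", value: 22 },
  { date: "2010-11-01 14:58:36", value: 10 }
];

console.log(findSpecialDates(series, 2, 5));
// -> ["2010-10-27 14:58:36", "2010-11-01 14:58:36"] with these toy settings

Note this flags the date where the deviation shows up; if, as in the question, you prefer to mark the date just before a big jump, push series[i - 1].date instead.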