Calculate the time since a specific input - ksh

I need to calculate the time in hours since a specific input.
I tried a few different approaches and tools but couldn't get any result yet.
So to summarize the requirement: I just need the time elapsed from a specific input until now.
E.g., in a simple way, I need the time elapsed since 6/12 4:40 PM.

If your script is running at time t0 and it's still running at time t1 and you want to know how many hours have elapsed between t0 and t1, I think the easiest way to do that is to use the UNIX epoch.
So at time t0 store the epoch time:
t0="$(date '+%s')"
and then at time t1 get the epoch time again:
t1="$(date '+%s')"
then you can just query the difference between the two times to get the seconds, and divide by 3600 if you want that in hours:
(( elapsed = t1 - t0 ))
(( hours = elapsed / 3600 ))
Korn shell arithmetic does integer division here, though, so fractional hours are truncated. You might prefer to stick with the number of seconds elapsed.
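Putting those pieces together, a minimal sketch (the fixed start time shown in the comment is illustrative, and the `-d` flag is GNU date only):

```shell
#!/bin/ksh
# Capture the start time as seconds since the UNIX epoch
t0=$(date '+%s')

# ... the long-running work goes here ...

# Capture the end time the same way
t1=$(date '+%s')

# Integer arithmetic: elapsed seconds, then whole hours
(( elapsed = t1 - t0 ))
(( hours = elapsed / 3600 ))
echo "elapsed: ${elapsed}s (about ${hours}h)"

# With GNU date you can also convert a fixed start time instead:
# t0=$(date -d '2013-06-12 16:40' '+%s')
```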

Gnuplot - incremented minutes to days

I have a CNV data file where the first column is "minutes from start":
Minutes;Temperature
0;15.5
60;15.8
120;15.6
180;16.1
....
I would like to plot this data with x-axis as time (DAYS), so that every 1440 minutes is 1 day, then comes day 2... etc. What is the best way to do this?
Simply divide the minutes by 60 and then by 24 (i.e., by 1440). Then you will have days.
Note that column values such as $1 are always treated as floats, so you don't have to worry about gnuplot's integer division (which can produce unexpected results if you're not aware of it).
plot "YourFile.dat" u ($1/1440):2 with lines

Summing times in Google sheets

I have a sheet where I record my working hours (this is more for me to remind me to stop working than anything else). For every day, I have three possible shifts - early, normal & late, and I have a formula which will sum up any times put into these columns and give me the daily total hours.
To summarise the duration of time spent working in a day, I use the following formula: =(C41-B41)+(E41-D41)+12+(G41-F41) which is:
early end time minus early start time
normal end time minus normal start time PLUS 12 hours
late end time minus late start time
Which gives me output like this:
What I cannot seem to achieve is, the ability to sum the daily totals into something which shows me the total hours worked over 1-week. If I attempt to sum the daily totals together for the example image shown, I get some wild figure such as 1487:25:00 when formatting as 'Duration' or 23:25:00 when formatted as 'Time'!
All my cells where I record the hours worked are formatted as 'Time'
When using arithmetic operations on date values in Google Sheets, it's important to remember that the internal representation of a date is numeric: the value counts days (the serial-number epoch is December 30, 1899), and a time of day is a fraction of a day.
What follows from that is that if you want to add 12 hours to a time duration, you should not write "+12", because that will in fact add 12 days. Instead add "+12/24". In other words, try the following formula instead of the one you are using now:
=(C41-B41)+(E41-D41)+(12/24+G41-F41)
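The serial-number arithmetic is easy to sanity-check outside Sheets. A sketch in Python (the shift times here are made-up values, not taken from the question):

```python
# Google Sheets stores dates/times as serial numbers where 1.0 is one
# day, so a clock time is a fraction of a day: 09:00 -> 9/24.
def as_day_fraction(hours, minutes=0):
    return (hours + minutes / 60) / 24

# Hypothetical day: early 06:00-08:00, normal entered as 9:00-5:00
# (hence the +12/24 correction), late 19:00-21:00.
early  = as_day_fraction(8)  - as_day_fraction(6)
normal = as_day_fraction(5)  - as_day_fraction(9) + 12 / 24
late   = as_day_fraction(21) - as_day_fraction(19)

total = early + normal + late

# Multiply by 24 to read the duration back in hours (2 + 8 + 2)
print(round(total * 24, 6))  # -> 12.0
```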

Library method for seconds elapsed since start of year

I want to calculate the number of seconds elapsed since the start of the current year. A straightforward approach would be to get the current date and the date on the start of the year and subtract the two but I was wondering if there was a library method that could do that for me.
This would help my year to date calculations look prettier.
current_time = Time.new
current_time - Time.new(current_time.year)
This will return a Float of the number of seconds since the start of the current year. See Time for more information.
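Wrapped as a small helper for year-to-date use (the method name is my own, not from any library):

```ruby
# Seconds elapsed since the start of the current year.
# Time.new(year) with only a year argument gives Jan 1, 00:00 local time.
def seconds_since_year_start(now = Time.now)
  now - Time.new(now.year)
end

# Year-to-date hours, e.g. for a report:
hours_ytd = seconds_since_year_start / 3600.0
```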

Unknown timestamp reference date

I'm currently dealing with a system which uses an unknown timestamp mechanism.
The system is running on a Windows machine, so my first thought was that it uses some kind of Windows epoch for its timestamps, but it appears it does not.
My goal is to convert these timestamps to Unix timestamps.
A few examples:
The following timestamp: 2111441659 converts to: 2013-10-01 11:59
2111441998 to 2013-10-01 17:14
2111443876 to 2013-10-02 14:36
2111444089 to 2013-10-02 17:57
(All dates are GMT+2)
I've tried to calculate the reference date using the data above, but somehow I get a different result with every single timestamp.
Could anybody shed some light on this rather odd problem?
Thanks in advance!
To me the number seems too small to be milliseconds. My first guess was then seconds, but looking at the speed at which this number varies, I think minutes is a better guess. Doing some math on it: 2111441659/60/24/365 = 4017.20254756, which suggests the epoch might be sometime in the year -2000?
Here is a list of common epochs in computing, but the year -2000 is not really there :) How are you obtaining this timestamp?
P.S. Are you sure the year is set to 2013 on this machine and not to 4013? :) That would then fit with the .NET epoch of January 1, Year 1.
In order to distinguish your timestamp from Unix timestamp, let's call yours The Counter.
So we have four counter values with their corresponding DateTime value. The first thing to do is calculate the counter's unit correspondence to a real time unit, let's say a second.
In order to do that, we need (1) the difference d between two counter values and (2) the difference s between their corresponding DateTimes, in seconds.
Considering the first two values we have d1=2111441998-2111441659=339. The difference between 2013-10-01 11:59 and 2013-10-01 17:14 (in seconds) is s1=18900. Consequently, the counter's unit corresponds to u1=s1/d1=55.7522123894 seconds.
But if we do the same with pairs #2 and #3, we will find that u2=40.9584664536 seconds.
Similarly, pairs #3 and #4 give us u3=56.6197183114 seconds.
My conclusion therefore, is that there's no alignment between the counter values and the corresponding DateTimes provided. That's the reason why you get a different result with each sample.
Finally, after many hours of comparing the timestamps with the datetimes, trying to discover the logic between them, I've found the answer by reverse engineering the software which generates the timestamps.
It turns out that the integer timestamps are actually bitwise representations* of the datetimes.
In pseudocode:
year = TimeStamp >> 20;
month = (TimeStamp >> 16) & 15;
day = (TimeStamp >> 11) & 31;
hour = (TimeStamp >> 6) & 31;
minute = TimeStamp & 63;
*I'm not sure if this is the correct term for it, if not, please correct me.
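The pseudocode above translates directly. A sketch in Python, including the conversion to a Unix timestamp that the question asked for (the GMT+2 offset is taken from the question's examples):

```python
from datetime import datetime, timezone, timedelta

def decode(ts):
    """Unpack the bit-packed fields: y..y mmmm ddddd hhhhh mmmmmm."""
    year   = ts >> 20
    month  = (ts >> 16) & 0xF   # 4 bits
    day    = (ts >> 11) & 0x1F  # 5 bits
    hour   = (ts >> 6)  & 0x1F  # 5 bits
    minute = ts & 0x3F          # 6 bits
    return year, month, day, hour, minute

def to_unix(ts, utc_offset_hours=2):
    """Convert the packed timestamp to a Unix timestamp."""
    y, mo, d, h, mi = decode(ts)
    tz = timezone(timedelta(hours=utc_offset_hours))
    return int(datetime(y, mo, d, h, mi, tzinfo=tz).timestamp())

print(decode(2111441659))  # -> (2013, 10, 1, 11, 59)
```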

Why is this MATLAB code faster?

I have written some code in two different ways in MATLAB. Firstly, I used two for loops, which seems stupid at first glance:
Initial = [zeros(10,1) ones(10,1)];
for xpop=1:10
for nvar=1:10
Parent(xpop,nvar) = Initial(nvar,1)+(Initial(nvar,2)-Initial(nvar,1))*rand();
end
end
In the second scheme, I tried to do a vectorized computation (I assumed it would be faster):
Parent = repmat(Initial(:,1),1,10) + rand(10,10).*(repmat(Initial(:,2),1,10)-repmat(Initial(:,1),1,10));
The elapsed times from three different runs of the code (loop version first, then the vectorized version, in each pair) are:
Elapsed time is 0.000456 seconds.
Elapsed time is 0.006342 seconds.
Elapsed time is 0.000457 seconds.
Elapsed time is 0.006147 seconds.
Elapsed time is 0.000471 seconds.
Elapsed time is 0.006433 seconds.
Why is the first scheme faster than the second? Is it really doing two stupid for loops inside the '.*' command?
Your test setup is simply too small to show the advantages of vectorization.
Initial = [zeros(10,1) ones(10,1)];
Elapsed time is 0.000078 seconds.
Elapsed time is 0.000995 seconds.
Now for a larger problem:
Initial = [zeros(1000,1) ones(1000,1)];
Elapsed time is 2.797949 seconds.
Elapsed time is 0.049859 seconds.
It is good for you to test these things. However, you need to learn how to do these tests to gain good information.
First of all, the time taken is terribly small, so repeated tests are always best. Second, use a tool like timeit. It does all of the work for you, eliminating many sources of error, although it needs to have its target encapsulated as a function.
Next, there are issues with TINY problems. Your test case is trivially small. In fact, there are many reasons for code taking time. Consider function overhead and startup costs. A function takes time to call, since there is overhead to set up and destroy function workspaces. Also, a GOOD function will have error testing, and offer several options. But for that to happen, it must check to see if those options were set. So time is spent, often doing nothing of value, because you just want to use the function in some simple form. This means that when you call functions to vectorize a tiny computation, it may actually take more time than if you just did the unvectorized form inline. So small test cases are often misleading. (I was going to add a timing comparison for a larger problem, but by then Marc had already done so in his answer. See the vast difference for larger problems.)
You should also learn to use bsxfun, a tool designed to optimize certain computations of the form you are testing. Again, small problems will often NOT show much gain in speed, if any.
Next, there are issues with JIT, the acceleration in place in MATLAB to optimize some simple codes. If that (invisible to you) tool manages to handle well the code you are testing, then it will appear as if the loop is faster.
It is good to do some tests, so let's make a comparison. Since your examples are all mainly inline, I'll just put a big loop around each case. This will reduce one of the large sources of testing error.
Ktot = 100;
N = 10;
Initial = [zeros(N,1) ones(N,1)];
tic
for k = 1:Ktot
for xpop=1:N
for nvar=1:N
Parent(xpop,nvar) = Initial(nvar,1)+(Initial(nvar,2)-Initial(nvar,1))*rand();
end
end
end
toc
tic
for k = 1:Ktot
Parent = repmat(Initial(:,1),1,N) + rand(N,N).*(repmat(Initial(:,2),1,N)-repmat(Initial(:,1),1,N));
end
toc
Can you improve your vectorized form? Why do two repmats, when one will work as well?
tic
for k = 1:Ktot
Parent = repmat(Initial(:,1),1,N) + rand(N,N).*repmat(Initial(:,2)-Initial(:,1),1,N);
end
toc
What about bsxfun?
tic
for k = 1:Ktot
Parent = bsxfun(@plus,Initial(:,1),bsxfun(@times,rand(N,N),Initial(:,2)-Initial(:,1)));
end
toc
So, with N = 10 and Ktot = 100, I see times like this:
Elapsed time is 0.003935 seconds.
Elapsed time is 0.012250 seconds.
Elapsed time is 0.008269 seconds.
Elapsed time is 0.004304 seconds.
Again, this is a small problem. What happens if we expand the problem? Try N = 100, instead of N = 10.
Elapsed time is 0.131186 seconds.
Elapsed time is 0.031671 seconds.
Elapsed time is 0.027205 seconds.
Elapsed time is 0.019763 seconds.
So there we saw things sorting out a bit more logically. Now the bsxfun variant is starting to show some gains. Next, go up to N = 1000.
Elapsed time is 12.288608 seconds.
Elapsed time is 3.412531 seconds.
Elapsed time is 2.690691 seconds.
Elapsed time is 1.626599 seconds.
Essentially, all of these codes do the same work, it is just that some are more efficient in how they structure the problem, while some have more overhead. As we see in the larger problems, explicit loops fall flat.