I am currently trying to measure the execution time of a Ballerina function. For that I need to get the timestamp before and after calling the function.
How can I get the timestamp in milliseconds using Ballerina?
I have tried the time module and did not find a direct way to get it.
For the common use case of calculating elapsed time, it is better to use the following API instead of time:utcNow():
decimal now = time:monotonicNow();
Do note, time:monotonicNow() does not guarantee an accurate reading of the UTC timestamp. It only guarantees continuity, i.e. it guarantees consistently increasing values in subsequent calls, with nanosecond precision.
The time:utcNow() function provides all the details needed to get an accurate timestamp in milliseconds. It returns a tuple of length 2. The first member of the tuple is an int representing the integral number of seconds since the epoch. The second member is a decimal giving the fraction of a second. The default precision is nanoseconds. You can manipulate the time:Utc tuple as follows to get the time in milliseconds.
time:Utc now = time:utcNow();
// Combine the whole-second and fractional-second parts, then scale to milliseconds.
int timeInMillis = <int>((<decimal>now[0] + now[1]) * 1000);
Do note, intervals computed from this can come out negative due to clock synchronization.
I am using the QueryPerformanceCounter Windows syscall in order to get a high-precision timestamp.
I need to convert it to the Unix epoch (in nanoseconds), as I am going to pass the value to an API that needs it in this format.
Could anybody help me understand how to accomplish this?
QueryPerformanceCounter does not return a timestamp with a fixed offset to the current time (as in UTC, or the time shown by a wall clock), so you cannot convert it to Unix time.
However, time differences measured using QueryPerformanceCounter can be converted to nanoseconds (or any time unit) by dividing by the result of QueryPerformanceFrequency (to get seconds) and multiplying by, e.g., 10^9 to get nanoseconds.
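If you happen to be doing this from .NET, here is a minimal C# sketch of that difference-based conversion. It assumes Stopwatch, which on Windows is backed by QueryPerformanceCounter/QueryPerformanceFrequency; the workload being timed is a placeholder.
using System;
using System.Diagnostics;

class ElapsedNanoseconds
{
    static void Main()
    {
        // GetTimestamp() returns the raw high-resolution counter value
        // (QueryPerformanceCounter on Windows); Frequency is counter ticks per second.
        long start = Stopwatch.GetTimestamp();

        // ... the work being measured goes here ...

        long elapsedTicks = Stopwatch.GetTimestamp() - start;

        // ticks / Frequency gives seconds; scale by 10^9 to get nanoseconds.
        double elapsedNanoseconds = elapsedTicks * (1e9 / Stopwatch.Frequency);
        Console.WriteLine($"Elapsed: {elapsedNanoseconds} ns");
    }
}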
As per comments above, QueryPerformanceCounter is not the right way to go.
I have found GetSystemTimePreciseAsFileTime, which suits my needs.
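For anyone who lands here wanting the actual conversion, a hedged C# sketch is below. The P/Invoke declaration and the out long marshalling of FILETIME are assumptions (they rely on FILETIME being a 64-bit count of 100-nanosecond intervals since 1601-01-01 UTC, read on little-endian Windows); the constant 116444736000000000 is the offset between that epoch and the Unix epoch, expressed in 100-nanosecond units.
using System;
using System.Runtime.InteropServices;

class PreciseUnixNanoseconds
{
    // FILETIME is a 64-bit count of 100-ns intervals since 1601-01-01 UTC;
    // treating it as an out long is an assumption that holds on little-endian Windows.
    [DllImport("kernel32.dll")]
    static extern void GetSystemTimePreciseAsFileTime(out long fileTime);

    // Offset between the FILETIME epoch (1601) and the Unix epoch (1970), in 100-ns units.
    const long UnixEpochAsFileTime = 116444736000000000L;

    static void Main()
    {
        GetSystemTimePreciseAsFileTime(out long fileTime);

        // Shift to the Unix epoch, then convert 100-ns units to nanoseconds.
        long unixNanoseconds = (fileTime - UnixEpochAsFileTime) * 100L;
        Console.WriteLine(unixNanoseconds);
    }
}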
A hardware sensor is sampled precisely (with a precise sampling period) using a real-time unit. However, the time value is not sent to the database together with the sampled value. Instead, the time of insertion of the record into the database is stored for the sample. The DATETIME type is used, and the GETDATE() function is used to get the current time (Microsoft SQL Server).
How can I reconstruct the precise sampling times?
As the sampling interval is (should be) exactly 60 seconds, there was no need earlier for a more precise solution. (This is an old, third-party solution with a lot of historical samples, so it is not possible to fix the design.)
For processing of the samples, I need to reconstruct the correct time instances for the samples. There is no problem with shifting the time of the whole sequence (that is, it does not matter whether the start time is rather off, not absolute). On the other hand, the sampling interval should be detected as precisely as possible. I also cannot be sure that the sampling interval was exactly 60 seconds (as mentioned above), or that it was really constant (say, slight differences based on the temperature of the device).
When processing the samples, I want to get:
start time
the sampling interval
the sequence of the sample values
When reconstructing the samples, I need to convert it back to tuples:
time of the sample
value of the sample
Because of that, for the sequence with n samples, the time of the last sample should be equal to start_time + sampling_interval * (n - 1), and it should be reasonably close to the original end time stored in the database.
Think of it as the stored sample times slightly oscillating around the real sample times (a constant delay between sampling and insertion into the database is not a problem here).
I was thinking about calculating the mean value and the corrected standard deviation of the intervals computed from consecutive sample times.
Discontinuity detection: if a calculated interval is more than 3 sigma away from the mean, I would consider it a discontinuity of the sampled curve (say, the machine was switched off, or some external event led to missing samples). In that case, I want to start processing a new sequence. (The sampling frequency could also have changed.)
Is there any well-known approach to this problem? If so, can you point me to the article(s)? Or can you give me the name or acronym of the algorithm?
+1 to looking at the difference sequence. We can model the difference sequence as the sum of a low-frequency truth (the true rate of the samples, slowly varying over time) and high-frequency noise (the random delay in getting the sample into the database). You want a low-pass filter to remove the latter.
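A minimal C# sketch of one way to put this together, under stated assumptions: it uses the question's own mean-plus-3-sigma rule to split the sequence at discontinuities, then estimates one interval per segment from its gaps (a very crude low-pass filter; a moving average or a least-squares fit of time against sample index would be the more faithful version of the suggestion above). The 3-sigma threshold, the use of seconds, and the variable names are all assumptions.
using System;
using System.Collections.Generic;
using System.Linq;

class SampleTimeReconstruction
{
    static void Main()
    {
        // Hypothetical: insertion times read from the DATETIME column, in order.
        var recorded = new List<DateTime>();
        // ... populate 'recorded' from the database ...
        if (recorded.Count < 3) return;

        // Difference sequence: gap between consecutive recorded times, in seconds.
        double[] gaps = recorded.Zip(recorded.Skip(1), (a, b) => (b - a).TotalSeconds).ToArray();

        // Mean and corrected standard deviation of the gaps.
        // (With long outages in the data, a robust estimate such as the median
        // and MAD would be preferable, since outliers inflate sigma.)
        double mean = gaps.Average();
        double sigma = Math.Sqrt(gaps.Select(g => (g - mean) * (g - mean)).Sum() / (gaps.Length - 1));

        int segmentStart = 0;
        for (int i = 0; i < gaps.Length; i++)
        {
            bool discontinuity = Math.Abs(gaps[i] - mean) > 3 * sigma;
            bool lastGap = i == gaps.Length - 1;
            if (!discontinuity && !lastGap) continue;

            int segmentEnd = discontinuity ? i : i + 1;   // index of the last sample in the segment
            int n = segmentEnd - segmentStart + 1;
            if (n >= 2)
            {
                // Estimated interval: segment span divided by its number of gaps.
                double interval = (recorded[segmentEnd] - recorded[segmentStart]).TotalSeconds / (n - 1);
                DateTime start = recorded[segmentStart];

                // Reconstructed sample times: start_time + sampling_interval * k.
                for (int k = 0; k < n; k++)
                    Console.WriteLine(start.AddSeconds(interval * k).ToString("O"));
            }
            segmentStart = segmentEnd + 1;
        }
    }
}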
I would like to know a DB2 column type that could represent 25 hours or more, as TIME has a limited range of 0 to 24 hours.
Typically, a time duration is stored as an integer data type representing the number of time units appropriate for your precision requirements. One common example is Unix time, which is the number of seconds since 1 January 1970.
Internally DB2 implements duration (for example, when you subtract one TIME value from another) as a DECIMAL, so this is a good option if you plan to perform arithmetic operations between DB2 TIME and your duration value. To quote the manual:
A time duration represents a number of hours, minutes, and seconds, expressed as a DECIMAL(6,0) number. To be properly interpreted, the number must have the format hhmmss., where hh represents the number of hours, mm the number of minutes, and ss the number of seconds.
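To make the two representations concrete, here is a small hedged C# sketch (the helper names and sample values are made up): it encodes a duration both as a plain number of seconds and as the hhmmss-style DECIMAL(6,0) number described in the quote. Note that the six-digit format caps out at 99 hours.
using System;

class Db2DurationEncoding
{
    // Duration as a plain integer number of seconds (the "integer time units" option).
    static long ToTotalSeconds(TimeSpan d) => (long)d.TotalSeconds;

    // Duration as the hhmmss-style DECIMAL(6,0) number DB2 uses for time durations.
    // Hypothetical helper; hours above 99 do not fit in six digits.
    static long ToHhmmss(TimeSpan d) =>
        (long)Math.Floor(d.TotalHours) * 10000 + d.Minutes * 100 + d.Seconds;

    static TimeSpan FromHhmmss(long v) =>
        new TimeSpan((int)(v / 10000), (int)(v / 100 % 100), (int)(v % 100));

    static void Main()
    {
        var d = new TimeSpan(25, 30, 15);          // 25 hours, 30 minutes, 15 seconds

        Console.WriteLine(ToTotalSeconds(d));      // 91815
        Console.WriteLine(ToHhmmss(d));            // 253015
        Console.WriteLine(FromHhmmss(253015));     // 1.01:30:15
    }
}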
What you need to do is implement a UDT: a User Defined Type. With a UDT you can perform operations on this data type. As @mustaccio has said, the best thing is to implement this as a decimal.
https://www.toadworld.com/platforms/ibmdb2/w/wiki/7675.user-defined-data-types-udt
http://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.admin.structypes.doc/doc/c0006438.html
When I execute my test plan in JMeter for 10, 50, 100... virtual users with a ramp-up period of 30 seconds and a loop count of 1, I'm not getting the Average response time that I calculate with Average Time = (Min Time + Max Time) / 2.
Please check my attached image for the differences in Average time.
Can anyone please suggest how I should understand this?
Thanks in advance.
Average: This is the average elapsed time of a set of results. It is the arithmetic mean of all the samples' response times.
The following equation shows how the average value (μ) is calculated:
\mu = \frac{1}{n} \sum_{i=1}^{n} x_i
An important thing to understand is that the mean value can be very misleading, as it does not show you how close (or far) your values are to the average. The main thing you should focus on is the standard deviation.
The standard deviation (σ) measures the mean distance of the values to their average (μ). In other words, it gives us a good idea of the dispersion or variability of the measures to their mean value.
The following equation shows how the standard deviation (σ) is calculated:
\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2}
So it is wise to interpret the standard deviation as well, since the mean could be the same for very different distributions of response times! If the deviation is low compared to the mean, it indicates that your measures are not dispersed (they are mostly close to the mean) and that the mean value is significant.
Min - The lowest elapsed time (response time) for the samples with the same label.
Max - The longest elapsed time (response time) for the samples with the same label.
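For illustration, a small C# sketch of how these statistics relate to each other (the response-time values are made up); it also shows why (Min + Max) / 2 generally differs from the arithmetic mean:
using System;
using System.Linq;

class ResponseTimeStats
{
    static void Main()
    {
        // Hypothetical sampler response times in milliseconds.
        double[] responseTimes = { 120, 135, 118, 410, 126, 131 };

        int n = responseTimes.Length;
        double mean = responseTimes.Average();                 // μ = (1/n) Σ xᵢ
        double sigma = Math.Sqrt(responseTimes
            .Select(x => (x - mean) * (x - mean))
            .Sum() / n);                                       // σ = √((1/n) Σ (xᵢ - μ)²)

        Console.WriteLine($"Average: {mean:F1} ms");
        Console.WriteLine($"Std. deviation: {sigma:F1} ms");
        Console.WriteLine($"Min: {responseTimes.Min()} ms");
        Console.WriteLine($"Max: {responseTimes.Max()} ms");
        // Note how (Min + Max) / 2 differs from the arithmetic mean:
        Console.WriteLine($"(Min + Max) / 2: {(responseTimes.Min() + responseTimes.Max()) / 2} ms");
    }
}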
For further detail you could go through the JMeter documentation and this blog. It will really help you understand the concept.
I have a requirement that goes as follows (trust me, I'm way too old for homework grin)
I have a bunch of tasks that run with various frequencies. They also have a start "seed" date/time. The start seed is sometime in the past; it could be one minute ago, or it could be 5 years ago.
I need to calculate the next run time for the task, using the start seed date/time and the frequency - it cannot simply be "now" + the task frequency (for those of you who have scheduled jobs on MS SQL Server this is a familiar concept)
Now the silly way to do it is to take the start seed and keep adding the frequency until it becomes greater than "now". That's hardly optimal. The naive way would be to take the start seed date, change it to today's date, leave the time as is, and then add the frequency until it's greater than now; but that only works if the occurrences fall at the same times every day, i.e. if 24 hours is an exact multiple of the frequency.
So what's the best/quickest way to do this? Bonus points for a C# solution, but this is generic enough to make an interesting puzzle for any language :)
A better method would be to take the difference between the start timestamp and the current timestamp, divide that by the frequency, round the resulting multiplier up to the nearest integer, multiply by the frequency again, and add that to the start timestamp once more.
The act of rounding up will provide the proper offset.
Your answer would essentially be this:
next_time = ceiling((now - seed)/frequency) * frequency + seed
Using the ceiling function ensures that next_time will be >= now.
You would have to do the necessary conversions to be able to perform this arithmetic on the dates (e.g., translate to Unix time, which is the number of seconds since Jan 1, 1970).
I am not familiar with C# so I can't offer the code, but I assume that C# has date/time utility classes for dealing with date/time arithmetic operations.
Interesting puzzle, thanks for the challenge :)
This should do it in C#. It could almost certainly be slimmed down, but it's verbose enough to explain what's going on.
// Initialise with the date the event started, and the frequency
DateTime startDate = new DateTime(2009, 8, 1, 9, 0, 0);
TimeSpan frequency = new TimeSpan(0, 15, 0);
// Store the current time (so that it doesn't change during the following calculations)
DateTime now = DateTime.Now;
// Calculate the time that has elapsed since the event started
TimeSpan pastTimeSpan = now.Subtract(startDate);
// Divide the elapsed period by the frequency and take the remainder,
// i.e. the time that has passed since the last occurrence
TimeSpan remainingTimeSpan = new TimeSpan(pastTimeSpan.Ticks % frequency.Ticks);
// Calculate the last occurrence of the event
DateTime lastOccurrence = now.Subtract(remainingTimeSpan);
// Calculate the next occurrence of the event
DateTime nextOccurrence = lastOccurrence.Add(frequency);
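Reusing startDate, frequency and pastTimeSpan from above, a slimmed-down variant based on the ceiling formula from the other answer might look like this (a sketch; it assumes the seed is in the past):
// next_time = ceiling((now - seed) / frequency) * frequency + seed
long periodsElapsed = (pastTimeSpan.Ticks + frequency.Ticks - 1) / frequency.Ticks; // integer ceiling for a positive elapsed span
DateTime nextOccurrenceDirect = startDate.AddTicks(periodsElapsed * frequency.Ticks);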