std::chrono::milliseconds .count() returns in microseconds? - c++11

I am trying to log milliseconds of time that has elapsed over a period of time.
I have a class like this
// class member declarations
class MyClass {
    std::chrono::high_resolution_clock::time_point m_start; // set when timing starts
    std::chrono::system_clock::duration m_elapsed;          // elapsed time since m_start
};
I have 2 methods in the class. One, func1CalledFromMainThread, is called from the main thread.
// Class methods
using namespace std::chrono;
void MyClass::func1CalledFromMainThread() {
    m_start = std::chrono::high_resolution_clock::now();
}
And another one, func2CalledFromADifferentThread, is called from a different thread:
void MyClass::func2CalledFromADifferentThread() {
    // after some time the following code runs from a different thread
    auto end = high_resolution_clock::now();
    m_elapsed = duration_cast<milliseconds>(end - m_start);
    std::cout << "Elapsed time in milliseconds is " << m_elapsed.count()/1000 << std::endl;
}
The issue is in the cout logging. I see that I have to divide by 1000 to get milliseconds out of m_elapsed. Doesn't count() return the count of std::chrono::milliseconds here? Why should I have to divide by 1000? Does count() always return microseconds, or am I making a mistake?

count returns the number of ticks of the type on which you invoke it. If you wrote this:
duration_cast<milliseconds>(end - m_start).count()
it would correctly give you the number of milliseconds. However, you're not storing the result in std::chrono::milliseconds, you're storing it in std::chrono::system_clock::duration (the type of m_elapsed). Therefore, m_elapsed.count() returns the number of ticks in std::chrono::system_clock::duration's frequency, which is probably microseconds on your platform.
In other words, you're immediately undoing the cast to milliseconds by storing the result in something other than milliseconds.
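For illustration, here is a minimal sketch of that fix applied to the question's class (start/stop are stand-ins for the question's two methods; the member now has type std::chrono::milliseconds, so count() yields milliseconds directly):
#include <chrono>
#include <iostream>

class MyClass {
    std::chrono::high_resolution_clock::time_point m_start;
    std::chrono::milliseconds m_elapsed{}; // stores milliseconds, not clock ticks

public:
    void start() { m_start = std::chrono::high_resolution_clock::now(); }

    void stop() {
        auto end = std::chrono::high_resolution_clock::now();
        m_elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - m_start);
        std::cout << "Elapsed: " << m_elapsed.count() << " ms\n"; // no /1000 needed
    }
};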

You are storing the duration in system_clock::duration units and not in milliseconds.
The problem in your case is that std::chrono::system_clock::duration does not use milliseconds as its tick unit.
When executing the line m_elapsed = duration_cast<milliseconds>(end - m_start);, no matter whether you first convert the value to milliseconds with a duration_cast, the tick count will always be converted back to system_clock::duration's unit, which happens to be microseconds on your platform.
I would simply declare m_elapsed as std::chrono::duration<long, std::milli> and it should work as expected.
Have a look at the doc page for more info.
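For example, a minimal sketch of that member declaration (count() then yields milliseconds directly):
std::chrono::duration<long, std::milli> m_elapsed; // m_elapsed.count() is now a millisecond count
Note that std::chrono::milliseconds is a ready-made alias for such a millisecond duration type.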

I faced a similar problem and resolved it with a simple change
from:
std::chrono::system_clock::duration m_elapsed;
to:
std::chrono::milliseconds m_elapsed;
so the stored type matches the duration_cast (a bare auto member declaration would need an initializer).

Related

How to Get Current CPU Usage Like Task Manager - Vb.net

I'm working on a program that needs to get the current CPU usage. How can I achieve that in VB.Net?
I tried about 4 code samples but I still get 0% every time. Here is one example of what I've used: Link
Thanks in advance,
Anes08
Though it is not allowed to answer such questions, here's something that might help you get started:
Dim cpu As New System.Diagnostics.PerformanceCounter
cpu.CategoryName = "Processor"
cpu.CounterName = "% Processor Time"
cpu.InstanceName = "_Total"
MessageBox.Show(cpu.NextValue.ToString + "%")
If it doesn't work, here's a better version:
Dim cpu As PerformanceCounter ' Declare at class level
' On form load (actually you need to initialize it first)
cpu = New PerformanceCounter("Processor", "% Processor Time", "_Total")
' Finally, get the value:
MsgBox(cpu.NextValue & "%") ' Use .ToString if required
You can use LblCpuUsage.Text = CombinedAllCpuUsageOfEachThread.NextValue().
There is a helper library to get that information: the Performance Data Helper (see Using the PDH Functions to Consume Counter Data (Windows)).
Microsoft examples are in C, but there are also corresponding VB (not .Net) functions: Performance Counters Functions for Visual Basic (Windows).
For me, I wanted an average. There were a couple of problems with getting CPU utilization that seemed like they should have an easy packaged solution, but I didn't see one.
The first is of course that a value of 0 on the first request is useless. Since you already know that the first response is 0, why doesn't the function just take that into account and return the true .NextValue()?
The second problem is that an instantaneous reading may be wildly inaccurate when trying to make decisions on what resources your app may have available to it since it could be spiking, or between spikes.
My solution was to do a for loop that cycles through and gives you an average for the past few seconds. You can adjust the counter to make it shorter or longer (as long as it is more than 2).
public static float ProcessorUtilization;

public static float GetAverageCPU()
{
    PerformanceCounter cpuCounter = new PerformanceCounter("Process", "% Processor Time", Process.GetCurrentProcess().ProcessName);
    ProcessorUtilization = 0; // reset the accumulator between calls
    for (int i = 0; i < 11; ++i)
    {
        // Give the counter time between samples; back-to-back reads are meaningless.
        System.Threading.Thread.Sleep(250);
        ProcessorUtilization += (cpuCounter.NextValue() / Environment.ProcessorCount);
    }
    // Remember the first value is 0, so we don't want to average that in.
    Console.WriteLine(ProcessorUtilization / 10);
    return ProcessorUtilization / 10;
}

Return random event out of last 2 events

Assume the following Esper events (Esper Tryout page):
StockTick={symbol='event1', price=1}
t=t.plus(10 seconds)
StockTick={symbol='event2', price=2}
t=t.plus(10 seconds)
StockTick={symbol='event3', price=3}
t=t.plus(10 seconds)
StockTick={symbol='event4', price=4}
I would like to return a random event out of the last two events every 3 seconds.
I started with the following attempt, but it does not work because the random variable never changes once it is initialized.
create schema StockTick(symbol string, price double);
create variable int size = 2;
create variable int rand = cast(Math.random()*size, int)+1;
#Name('Out')
select prevtail(price) from StockTick#length(rand) output every 3 seconds;
I am grateful for any ideas.
I can think of a bunch of possible ways:
turn off the cache (http://esper.espertech.com/release-7.0.0/esper-reference/html_single/index.html#config-engine-expression-udfcache)
or declare a variable of type Random
or use a JavaScript script
or have a user-defined function
or use a custom aggregation function

Boost: How to print/convert posix_time::ptime in milliseconds from Epoch?

I am having trouble converting posix_time::ptime to a timestamp represented by time_t or posix_time::milliseconds, or any other appropriate type which can be easily printed (from Epoch).
I actually just need to print the timestamp represented by the posix_time::ptime in milliseconds, so if there is an easy way to print in that format, I don't actually need the conversion.
This code will print the number of milliseconds since 1941-12-07T00:00:00. Obviously, you can choose whatever epoch suits your need.
void print_ptime_in_ms_from_epoch(const boost::posix_time::ptime& pt)
{
    using boost::posix_time::ptime;
    using namespace boost::gregorian;
    std::cout << (pt - ptime(date(1941, Dec, 7))).total_milliseconds() << "\n";
}
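For example, to print the current UTC time in milliseconds since the usual Unix epoch (a minimal usage sketch; microsec_clock::universal_time() is part of Boost.Date_Time):
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;
    using namespace boost::gregorian;
    ptime now = microsec_clock::universal_time();
    // milliseconds since 1970-01-01T00:00:00 UTC
    std::cout << (now - ptime(date(1970, Jan, 1))).total_milliseconds() << "\n";
    return 0;
}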

How to time an operation in milliseconds in Ruby?

I want to figure out how many milliseconds a particular function takes. I looked high and low, but could not find a way to get the time in Ruby with millisecond precision.
How do you do this? In most programming languages it's just something like:
start = now.milliseconds
myfunction()
end = now.milliseconds
time = end - start
You can use ruby's Time class. For example:
t1 = Time.now
# processing...
t2 = Time.now
delta = t2 - t1 # in seconds
Now, delta is a float object and you can get as fine-grained a result as the class will provide.
You can also use the built-in Benchmark.measure function:
require "benchmark"
puts(Benchmark.measure { sleep 0.5 })
Prints:
0.000000 0.000000 0.000000 ( 0.501134)
Using Time.now (which returns the wall-clock time) as a baseline has a couple of issues which can result in unexpected behavior. This is caused by the fact that the wall-clock time is subject to changes such as inserted leap seconds or time slewing to adjust the local time to a reference time.
If there is e.g. a leap second inserted during measurement, it will be off by a second. Similarly, depending on local system conditions, you might have to deal with daylight-saving time, quicker or slower running clocks, or the clock even jumping back in time, resulting in a negative duration, and many other issues.
A solution to this is to use a different type of clock: a monotonic clock. This type of clock has different properties than the wall clock.
It increments monotonically, i.e. it never goes back, and it increases at a constant rate. With that, it does not represent the wall clock (i.e. the time you read from a clock on your wall) but a timestamp you can compare with a later timestamp to get a difference.
In Ruby, you can use such a timestamp with Process.clock_gettime(Process::CLOCK_MONOTONIC) like follows:
t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
# => 63988.576809828
sleep 1.5 # do some work
t2 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
# => 63990.08359163
delta = t2 - t1
# => 1.5067818019961123
delta_in_milliseconds = delta * 1000
# => 1506.7818019961123
The Process.clock_gettime method returns a timestamp as a float with fractional seconds. The actual number returned has no defined meaning (that you should rely on). However, you can be sure that the next call will return a larger number and by comparing the values, you can get the real time difference.
These attributes make the method a prime candidate for measuring time differences without seeing your program fail at the least opportune time (e.g. at midnight on New Year's Eve when another leap second is inserted).
The Process::CLOCK_MONOTONIC constant used here is available on all modern Linux, BSD, and macOS systems as well as the Windows Subsystem for Linux. It is however not yet available for "raw" Windows systems. There, you can use the GetTickCount64 system call instead of Process.clock_gettime, which also returns a timer value with millisecond granularity on Windows (>= Windows Vista, Windows Server 2008).
With Ruby, you can call this function like this:
require 'fiddle'
# Get a reference to the function once
GetTickCount64 = Fiddle::Function.new(
  Fiddle.dlopen('kernel32.dll')['GetTickCount64'],
  [],
  -Fiddle::TYPE_LONG_LONG # unsigned long long
)
timestamp = GetTickCount64.call / 1000.0
# => 63988.576809828
You should take a look at the benchmark module to perform benchmarks. However, as a quick and dirty timing method you can use something like this:
def time
  now = Time.now.to_f
  yield
  endd = Time.now.to_f # "end" is a reserved word in Ruby
  endd - now
end
Note the use of Time.now.to_f, which, unlike to_i, won't truncate to seconds.
Also, we can create a simple function to log any block of code:
def log_time
  start_at = Time.now
  yield if block_given?
  execution_time = (Time.now - start_at).round(2)
  puts "Execution time: #{execution_time}s"
end
log_time { sleep(2.545) } # Execution time: 2.55s
Use Time.now.to_f
The absolute_time gem is a drop-in replacement for Benchmark, but uses native instructions to be far more accurate.
If you use
date = Time.now.to_i
you're obtaining the time in seconds, which is far from accurate, especially if you are timing little chunks of code.
Time.now.to_i returns the seconds passed since 1970-01-01. Knowing this, you can do
date1 = Time.now.to_f
date2 = Time.now.to_f
diff = date2 - date1
With this you will have the difference in seconds. If you want it in milliseconds, just add to the code
diff = diff * 1000
I have a gem which can profile your ruby methods (instance or class): https://github.com/igorkasyanchuk/benchmark_methods.
No more code like this:
t = Time.now
user.calculate_report
puts Time.now - t
Now you can do:
benchmark :calculate_report # in class
And just call your method
user.calculate_report

How do you measure the time a function takes to execute?

How can you measure the amount of time a function will take to execute?
This is a relatively short function and the execution time would probably be in the millisecond range.
This particular question relates to an embedded system, programmed in C or C++.
The best way to do that on an embedded system is to set an external hardware pin when you enter the function and clear it when you leave the function. This is done preferably with a little assembly instruction so you don't skew your results too much.
Edit: One of the benefits is that you can do it in your actual application and you don't need any special test code. External debug pins like that are (should be!) standard practice for every embedded system.
There are three potential solutions:
Hardware Solution:
Use a free output pin on the processor and hook an oscilloscope or logic analyzer to the pin. Initialize the pin to a low state; just before calling the function you want to measure, assert the pin to a high state, and just after returning from the function, deassert the pin.
*io_pin = 1;
myfunc();
*io_pin = 0;
Bookworm solution:
If the function is fairly small, and you can manage the disassembled code, you can crack open the processor architecture databook and count the cycles it will take the processor to execute each instruction. This will give you the number of cycles required.
Time = # cycles / processor clock rate
This is easier to do for smaller functions, or code written in assembler (for a PIC microcontroller for example)
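For example, a function the databook says costs 120 cycles on a 16 MHz core takes 120 / 16,000,000 ≈ 7.5 µs.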
Timestamp counter solution:
Some processors have a timestamp counter which increments at a rapid rate (every few processor clock ticks). Simply read the timestamp before and after the function.
This will give you the elapsed time, but beware that you might have to deal with the counter rollover.
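For example, on x86 targets you can read the timestamp counter with the compiler's __rdtsc intrinsic; a minimal sketch (assuming GCC or Clang and <x86intrin.h>; embedded parts will expose their own counter register instead):
#include <x86intrin.h> // __rdtsc()
#include <cstdint>
#include <cstdio>

static void myFunction() {
    volatile int x = 0; // placeholder work standing in for the function under test
    for (int i = 0; i < 1000; ++i) x += i;
}

int main() {
    uint64_t before = __rdtsc(); // read the timestamp counter
    myFunction();
    uint64_t after = __rdtsc();
    // Divide the cycle delta by the core clock rate to convert to seconds.
    printf("elapsed cycles: %llu\n", (unsigned long long)(after - before));
    return 0;
}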
Invoke it in a loop with a ton of invocations, then divide by the number of invocations to get the average time.
So:
// begin timing
auto begin = std::chrono::steady_clock::now();
for (int i = 0; i < 10000; i++) {
    invokeFunction();
}
// end timing; divide by 10000 to get the average time per call
auto perCall = (std::chrono::steady_clock::now() - begin) / 10000;
If you're using Linux, you can time a program's runtime by typing at the command line:
time [program_name]
If you run only the function in main() (assuming C++), the rest of the app's time should be negligible.
I repeat the function call a lot of times (millions) but also employ the following method to discount the loop overhead:
start = getTicks();
repeat n times {
    myFunction();
    myFunction();
}
lap = getTicks();
repeat n times {
    myFunction();
}
finish = getTicks();
// overhead + function + function
elapsed1 = lap - start;
// overhead + function
elapsed2 = finish - lap;
// overhead + function + function - overhead - function = function
ntimes = elapsed1 - elapsed2;
once = ntimes / n; // Average time it took for one function call, sans loop overhead
Instead of calling the function twice in the first loop and once in the second loop, you could just call it once in the first loop and not at all (i.e. an empty loop) in the second; however, the empty loop could be optimized out by the compiler, giving you negative timing results :)
start_time = timer
function()
exec_time = timer - start_time
Windows XP/NT Embedded or Windows CE/Mobile
You can use QueryPerformanceCounter() to get the value of a VERY FAST counter before and after your function. Then you subtract those 64-bit values and get a delta of "ticks". Using QueryPerformanceFrequency() you can convert the "delta ticks" to an actual time unit. You can refer to the MSDN documentation about those WIN32 calls.
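A minimal sketch of that sequence (QueryPerformanceCounter and QueryPerformanceFrequency are the documented WIN32 calls; myFunction is a placeholder for the code under test):
#include <windows.h>
#include <cstdio>

static void myFunction() { /* code under test */ }

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq); // counter ticks per second
    QueryPerformanceCounter(&t0);
    myFunction();
    QueryPerformanceCounter(&t1);
    double ms = (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}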
Other embedded systems
Without operating systems or with only basic OSes you will have to:
program one of the internal CPU timers to run and count freely.
configure it to generate an interrupt when the timer overflows, and in this interrupt routine increment a "carry" variable (this is so you can actually measure time longer than the resolution of the timer chosen).
before your function you save BOTH the "carry" value and the value of the CPU register holding the running ticks for the counting timer you configured.
do the same after your function
subtract them to get a delta of counter ticks.
from there it is just a matter of knowing how long a tick lasts on your CPU/hardware, given the external clock and the prescaler (clock division) you configured while setting up your timer. You multiply that "tick length" by the "delta ticks" you just got.
VERY IMPORTANT: Do not forget to disable interrupts before and restore them after reading those timer values (both the carry and the register value), otherwise you risk saving inconsistent values.
NOTES
This is very fast because it is only a few assembly instructions to disable interrupts, save two integer values and re-enable interrupts. The actual subtraction and conversion to real time units occurs OUTSIDE the zone of time measurement, that is, AFTER your function.
You may wish to put that code into a function to reuse it all around, but it may slow things down a bit because of the function call and the pushing of all the registers to the stack, plus the parameters, then popping them again. In an embedded system this may be significant. It may be better, in C, to use MACROS instead, or to write your own assembly routine saving/restoring only the relevant registers.
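A minimal sketch of that read sequence; TIMER_COUNT, disable_interrupts() and enable_interrupts() are hypothetical stand-ins for your platform's timer register and interrupt-control intrinsics:
extern volatile unsigned long g_carry;    // incremented by the timer-overflow ISR
extern volatile unsigned int TIMER_COUNT; // hypothetical free-running timer register
extern void disable_interrupts(void);     // hypothetical platform intrinsic
extern void enable_interrupts(void);      // hypothetical platform intrinsic

struct TimerSnapshot {
    unsigned long carry;
    unsigned int ticks;
};

static TimerSnapshot read_timer(void) {
    TimerSnapshot s;
    disable_interrupts(); // keep carry and ticks consistent with each other
    s.carry = g_carry;
    s.ticks = TIMER_COUNT;
    enable_interrupts();
    return s;
}
// Take one snapshot before the function and one after, subtract the combined
// (carry, ticks) values, then multiply the delta by your configured tick length.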
It depends on your embedded platform and what type of timing you are looking for. For embedded Linux, there are several ways you can accomplish this. If you wish to measure the amount of CPU time used by your function, you can do the following:
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#define SEC_TO_NSEC(s) ((s) * 1000 * 1000 * 1000)
int work_function(int c) {
    // do some work here
    int i, j;
    int foo = 0;
    for (i = 0; i < 1000; i++) {
        for (j = 0; j < 1000; j++) {
            foo ^= i + j;
        }
    }
    return foo;
}

int main(int argc, char *argv[]) {
    struct timespec pre;
    struct timespec post;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &pre);
    work_function(0);
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &post);
    printf("time %ld\n",
        (SEC_TO_NSEC(post.tv_sec) + post.tv_nsec) -
        (SEC_TO_NSEC(pre.tv_sec) + pre.tv_nsec));
    return 0;
}
You will need to link this with the realtime library, just use the following to compile your code:
gcc -o test test.c -lrt
You may also want to read the man page on clock_gettime; there are some issues with running this code on SMP-based systems that could invalidate your testing. You could use something like sched_setaffinity() or the command line cpuset to force the code onto only one core.
If you are looking to measure user and system time, then you could use times(NULL), which returns something like jiffies. Or you can change the parameter for clock_gettime() from CLOCK_THREAD_CPUTIME_ID to CLOCK_MONOTONIC... but be careful of wraparound with CLOCK_MONOTONIC.
For other platforms, you are on your own.
Drew
I always implement an interrupt-driven ticker routine. This updates a counter that counts the number of milliseconds since start-up. This counter is then accessed with a GetTickCount() function.
Example:
#define TICK_INTERVAL 1 // milliseconds between ticker interrupts

static unsigned long tickCounter;

interrupt ticker(void)
{
    tickCounter += TICK_INTERVAL;
    ...
}

unsigned long GetTickCount(void)
{
    return tickCounter;
}
In your code you would time the code as follows:
int function(void)
{
    unsigned long time = GetTickCount();
    do something ...
    printf("Time is %lu", GetTickCount() - time);
}
In OS X terminal (and probably Unix, too), use "time":
time python function.py
If the code is .Net, use the Stopwatch class (.Net 2.0+), NOT DateTime.Now. DateTime.Now isn't updated accurately enough and will give you crazy results.
If you're looking for sub-millisecond resolution, try one of these timing methods. They'll all get you resolution in at least the tens or hundreds of microseconds:
If it's embedded Linux, look at Linux timers:
http://linux.die.net/man/3/clock_gettime
Embedded Java, look at nanoTime(), though I'm not sure this is in the embedded edition:
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#nanoTime()
If you want to get at the hardware counters, try PAPI:
http://icl.cs.utk.edu/papi/
Otherwise you can always go to assembler. You could look at the PAPI source for your architecture if you need some help with this.
