I am having trouble converting a posix_time::ptime to a timestamp represented by time_t, posix_time::milliseconds, or any other appropriate type that can easily be printed as a count from the epoch.
I actually just need to print the timestamp represented by the posix_time::ptime in milliseconds, so if there is an easy way to print it in that format, I don't need the conversion at all.
This code will print the number of milliseconds since 1941-12-07T00:00:00. Obviously, you can choose whatever epoch suits your needs.
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

void print_ptime_in_ms_from_epoch(const boost::posix_time::ptime& pt)
{
    using boost::posix_time::ptime;
    using namespace boost::gregorian;
    std::cout << (pt - ptime(date(1941, Dec, 7))).total_milliseconds() << "\n";
}
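If what you need is milliseconds since the Unix epoch, the same idea applies with 1970-01-01 as the reference point; a minimal sketch (the function name is just illustrative):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

// Milliseconds elapsed since the Unix epoch, 1970-01-01T00:00:00.
long long ms_since_unix_epoch(const boost::posix_time::ptime& pt)
{
    using boost::posix_time::ptime;
    using boost::gregorian::date;
    return (pt - ptime(date(1970, 1, 1))).total_milliseconds();
}

int main()
{
    std::cout << ms_since_unix_epoch(boost::posix_time::microsec_clock::universal_time()) << "\n";
}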
Keep in mind that Precision is based on the total number of digits, not on the decimal places. I need a way to set the decimal places, and all I can find is Precision, so I am trying to work with it by accounting for the number of digits in the whole-number part, but that is not working.
I want to do some math and return the result with a set Precision. The function takes a string holding a very large number with a very large number of decimal places and returns it as a string limited to the number of decimals passed in as precision:
#include <boost/multiprecision/mpfr.hpp>
#include <QString>

QString multiply(const QString &mThis, const QString &mThat, const unsigned int &precision)
{
    using boost::multiprecision::mpfr_float_backend;
    typedef boost::multiprecision::number<mpfr_float_backend<0> > my_mpfr_float;
    my_mpfr_float aThis(mThis.toStdString());
    my_mpfr_float aThat(mThat.toStdString());
    my_mpfr_float::default_precision(precision);
    my_mpfr_float ans = (aThis * aThat);
    return QString::fromStdString(ans.str());
}
I have tried it without the typedef; same problem.
MathWizard::multiply("123456789.123456789", "123456789.123456789", 20);
Each input has 18 digits of Precision (9 + 9); maybe I should ask for 30. This call returns a result with 22 digits:
15241578780673678.51562
instead of one with 20:
15241578780673678.516
So why is it off by 2?
I would like to change the Precision after doing the math, but it seems you have to set it before, which is not how the examples Boost shows do it. Even then it still does not return the correct value, and setting the Precision afterwards does not change the value at all.
Update: Compare what I did to what they say works in this Post:
how to change at runtime number precision with boost::multiprecision
typedef number<gmp_float<0> > mpf_float;
mpfr_float a = 2;
mpfr_float::default_precision(1000);
std::cout << mpfr_float::default_precision() << std::endl;
std::cout << sqrt(a) << std::endl; // print root-2
I have noticed differences between gmp_float / mpf_float (using boost/multiprecision/gmp.hpp) and mpfr_float; mpfr_float gives me closer precision. For example, if I take the number (1/137):
mpf_float, with Precision set to 1:
0.007299270072992700729927007299270072992700729927007299270073
23 digits when set to 13:
0.00729927007299270072993
mpfr_float, with Precision set to 1:
0.007299270072992700729929
16 digits when set to 13:
0.0072992700729928
With only 1 Precision I would expect my answer to have one decimal at most.
The other data types behave similarly; I tried them all, and this code works the same for all of the data types described here:
boost 1.69.0: multiprecision Chapter 1
I also must point out that I rely on Qt, since this function is used in a QtQuick QML Felgo app. I actually could not figure out how to convert this to a string without it being converted to exponent notation, even though I used ans.str() in both cases; my guess is that fromStdString does something different than std::string(ans.str()).
I figure that if I cannot work this out, I will just do string rounding to get the correct Precision.
std::stringstream ss;
ss.imbue(std::locale(""));
// Note: std::setprecision affects numeric values streamed in; ans.str() is already a string, so it passes through unchanged.
ss << std::fixed << std::setprecision(int(precision)) << ans.str();
qDebug() << "divide(" << mThis << "/" << mThat << " # " << precision << " =" << QString::fromStdString(ss.str()) << ")";
return QString::fromStdString(ss.str());
I still could not get away without using QString, and this did not work either: it returns 16 digits instead of 13. I know that is a different question; I post it only to show that my alternatives do not work any better at this point. Also note that the divide function works the same as multiply; I used that example to show the math has nothing to do with this. All the samples they show me do not seem to work correctly, and I do not understand why, so just to make the steps clear:
Create the back end: typedef boost::multiprecision::number<boost::multiprecision::mpfr_float_backend<0> > my_mpfr_float;
Set Precision: my_mpfr_float::default_precision(precision);
Set initial value of variable: my_mpfr_float aThis(mThis.toStdString());
Do some math if you want, and return the value with the correct Precision (see the sketch below).
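For reference, here is a minimal standalone sketch of those steps in that exact order, reusing the names and inputs from above (the precision value 20 is just the one from my example call):

#include <boost/multiprecision/mpfr.hpp>
#include <iostream>

int main()
{
    // Step 1: create the back end.
    typedef boost::multiprecision::number<boost::multiprecision::mpfr_float_backend<0> > my_mpfr_float;

    // Step 2: set the default precision (in decimal digits) before constructing values.
    my_mpfr_float::default_precision(20);

    // Step 3: set the initial values of the variables.
    my_mpfr_float aThis("123456789.123456789");
    my_mpfr_float aThat("123456789.123456789");

    // Step 4: do the math and print the result.
    my_mpfr_float ans = aThis * aThat;
    std::cout << ans.str() << "\n";
}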
I must be missing something.
I know I can just get the length of the string and, if it is longer than Precision, check whether the digit at Precision + 1 is greater than 5; if so, add 1 and return a substring of (0, Precision), and be done with all this "correct" way of doing things. I could even do this in JavaScript after the return and just forget about doing it the correct way. But I still think I am just missing something, because I cannot believe this is the way this is actually supposed to work.
Submitted Bug Report: https://github.com/boostorg/multiprecision/issues/127
I am trying to log the number of milliseconds that have elapsed over a period of time.
I have a class like this
#include <chrono>

// class member declarations
class MyClass {
    std::chrono::high_resolution_clock::time_point m_start;
    std::chrono::system_clock::duration m_elapsed;
};
I have 2 methods in the class. One, func1CalledFromMainThread, is called from main.
// Class methods
using namespace std::chrono;

void MyClass::func1CalledFromMainThread() {
    m_start = std::chrono::high_resolution_clock::now();
}
The other one, func2CalledFromADifferentThread, is called from a different thread:
void MyClass::func2CalledFromADifferentThread() {
    // after some time following line of code runs from a different thread
    auto end = high_resolution_clock::now();
    m_elapsed = duration_cast<milliseconds>(end - m_start);
    std::cout << "Elapsed time in milliseconds is " << m_elapsed.count()/1000 << std::endl;
}
The issue is in the cout logging. I see that I have to divide by 1000 to get milliseconds out of m_elapsed. Doesn't count() return the number of std::chrono::milliseconds here? Why should I have to divide by 1000? Does count() always return microseconds, or am I making a mistake?
count returns the number of ticks of the type on which you invoke it. If you wrote this:
duration_cast<milliseconds>(end - m_start).count()
it would correctly give you the number of milliseconds. However, you're not storing the result in std::chrono::milliseconds, you're storing it in std::chrono::system_clock::duration (the type of m_elapsed). Therefore, m_elapsed.count() returns the number of ticks in std::chrono::system_clock::duration's frequency, which is probably microseconds on your platform.
In other words, you're immediately undoing the cast to milliseconds by storing the result in something other than milliseconds.
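For instance, here is a minimal standalone sketch (just for illustration) where the result is kept in std::chrono::milliseconds, so that count() is already a count of milliseconds:

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;

    auto start = high_resolution_clock::now();
    // ... do some work ...
    auto end = high_resolution_clock::now();

    // Store the result in std::chrono::milliseconds, so count() reports milliseconds directly.
    milliseconds elapsed = duration_cast<milliseconds>(end - start);
    std::cout << "Elapsed time in milliseconds is " << elapsed.count() << std::endl;
}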
You are storing the duration in system_clock::duration units, not in milliseconds.
The problem in your case is that std::chrono::system_clock::duration does not use milliseconds as its tick unit.
When executing the line m_elapsed = duration_cast<milliseconds>(end - m_start);, no matter whether you first convert the value to milliseconds with a duration_cast, the tick count is converted back into system_clock::duration's unit, which here happens to be microseconds.
I would simply declare m_elapsed as std::chrono::duration<long, std::milli> and it should work as expected.
Have a look at the doc page for more info
I faced a similar problem and resolved it with a simple change,
from:
std::chrono::system_clock::duration m_elapsed;
to a local variable whose type is deduced from the cast itself:
auto m_elapsed = duration_cast<milliseconds>(end - m_start);
I'm trying to print the number of microseconds to a file:
high_resolution_clock::time_point t1 = high_resolution_clock::now();
high_resolution_clock::time_point t2 = high_resolution_clock::now();
auto duration1 = duration_cast<microseconds> (t2-t1).count();
fprintf(file, "%lu, %lu\n", duration1, duration1);
In the file I can see that the first column has values around 2000, but the values in the second column are always zero.
Am I using fprintf correctly (the %lu format specifier), and why does it print the second variable as zero in the file?
The count function returns a value whose type is called rep, which according to this std::duration reference is
an arithmetic type representing the number of ticks
Since you don't know the exact type, you can't safely use any printf function to print the values: if you use the wrong format specifier you get undefined behavior (which is very likely what you have here).
This will be easily solved if you use C++ streams instead, since the correct "output" operator << will automatically be selected to handle the type.
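A minimal sketch of that approach (the back-to-back now() calls are just for illustration):

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;

    auto t1 = high_resolution_clock::now();
    auto t2 = high_resolution_clock::now();

    auto duration1 = duration_cast<microseconds>(t2 - t1).count();

    // operator<< picks the right overload for duration1's type, whatever rep happens to be.
    std::cout << duration1 << ", " << duration1 << "\n";
}

Alternatively, casting the value to a fixed-width type such as long long and using the matching format specifier (%lld) would also avoid the mismatch.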
The GNU date command is one of the few implementations with the ability to return nanoseconds:
%N nanoseconds (000000000..999999999)
For example, we can get the number of nanoseconds since epoch by combining the %s and %N symbols.
$ date +%s%N
1402513692992913666
Now then, to the actual question. Where does GNU's date command get such a precise representation of time?
Note: I'm asking where GNU date is getting its time information from (C calls? /proc?), not where the computer itself is (hardware).
Use The Source, Luke! If you look in the source of GNU date, you'll see (in lib/gettime.c):
# if defined CLOCK_REALTIME && HAVE_CLOCK_GETTIME
  if (clock_gettime (CLOCK_REALTIME, ts) == 0)
    return;
# endif
  {
    struct timeval tv;
    gettimeofday (&tv, NULL);
    ts->tv_sec = tv.tv_sec;
    ts->tv_nsec = tv.tv_usec * 1000;
  }
So the answer to the question is clock_gettime(CLOCK_REALTIME) where that's available, and gettimeofday() otherwise (multiplying up from microseconds).
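For comparison, here is a minimal sketch of doing the same thing directly yourself (assuming a POSIX system where clock_gettime and CLOCK_REALTIME are available):

#include <stdio.h>
#include <time.h>

int main()
{
    struct timespec ts;

    // CLOCK_REALTIME is the wall-clock time that date reports.
    if (clock_gettime(CLOCK_REALTIME, &ts) == 0)
    {
        // Seconds and nanoseconds combined into nanoseconds since the Unix epoch,
        // which is essentially what `date +%s%N` prints.
        unsigned long long ns =
            (unsigned long long)ts.tv_sec * 1000000000ULL + (unsigned long long)ts.tv_nsec;
        printf("%llu\n", ns);
    }
    return 0;
}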
I'm considering to use Protocol Buffers for data exchange between a Linux and a Windows based system.
What's the recommended format for sending date/time (timestamp) values? The field should be small when serialized.
There has been a Timestamp message type since protobuf 3.0; here is how to use it in a model:
syntax = "proto3";

import "google/protobuf/timestamp.proto";

message MyMessage {
    google.protobuf.Timestamp my_field = 1;
}
The timestamp.proto file contains examples of using Timestamp, including ones relevant to Linux and Windows programs.
Example 1: Compute Timestamp from POSIX time().
Timestamp timestamp;
timestamp.set_seconds(time(NULL));
timestamp.set_nanos(0);
Example 2: Compute Timestamp from POSIX gettimeofday().
struct timeval tv;
gettimeofday(&tv, NULL);
Timestamp timestamp;
timestamp.set_seconds(tv.tv_sec);
timestamp.set_nanos(tv.tv_usec * 1000);
Example 3: Compute Timestamp from Win32 GetSystemTimeAsFileTime().
FILETIME ft;
GetSystemTimeAsFileTime(&ft);
UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
// A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
// is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
Timestamp timestamp;
timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
Although you aren't saying which languages you are using or what kind of precision you need, I would suggest using Unix time encoded into an int64. It is fairly easy to handle in most languages and platforms (see here for a Windows example), and Protobufs will use a varint encoding, keeping the size small without limiting the representable range too much.
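The value itself is straightforward to produce in C++ with std::chrono; a minimal sketch (millisecond resolution and the variable name are just illustrative, and on common platforms system_clock counts from the Unix epoch):

#include <chrono>
#include <cstdint>
#include <iostream>

int main()
{
    using namespace std::chrono;

    // Milliseconds since the Unix epoch, ready to be stored in a protobuf int64 field.
    std::int64_t unix_time_ms =
        duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();

    std::cout << unix_time_ms << "\n";
}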
In the latest protobuf version (3.0), Timestamp is available for C# as a WellKnownType. Check this