MinGW boost random_device compile error - c++11

I need some random numbers for a simulation and am experimenting with the C++11 random library using the MinGW Distro from nuwen.net.
As has been discussed in several other threads, e.g. Why do I get the same sequence for every run with std::random_device with mingw gcc4.8.1?, random_device does not generate a random seed, i.e. the code below, compiled with GCC, generates the same sequence of numbers on every run.
// test4.cpp
// MinGW Distro - nuwen.net
// Compile with g++ -Wall -std=c++14 test4.cpp -o test4
#include <iostream>
#include <random>
using namespace std;

int main() {
    random_device rd;
    mt19937 mt(rd());
    uniform_int_distribution<int> dist(0, 99);
    for (int i = 0; i < 16; ++i) {
        cout << dist(mt) << " ";
    }
    cout << endl;
}
Run 1: 56 72 34 91 0 59 87 51 95 97 16 66 31 52 70 78
Run 2: 56 72 34 91 0 59 87 51 95 97 16 66 31 52 70 78
To solve this problem it has been suggested to use the Boost library, and the code would then look something like the following, adapted from How do I use boost::random_device to generate a cryptographically secure 64 bit integer? and from A way change the seed of boost::random in every different program run:
// test5.cpp
// MinGW Distro - nuwen.net
// Compile with g++ -Wall -std=c++14 test5.cpp -o test5
#include <boost/random.hpp>
#include <boost/random/random_device.hpp>
#include <iostream>
#include <random>
using namespace std;

int main() {
    boost::random_device rd;
    mt19937 mt(rd());
    uniform_int_distribution<int> dist(0, 99);
    for (int i = 0; i < 16; ++i) {
        cout << dist(mt) << " ";
    }
    cout << endl;
}
But this code won't build; the linker reports "undefined reference to 'boost::random::random_device::random_device()'". Note that both random.hpp and random_device.hpp are available in the include directory. Can anyone suggest what is wrong with the code, or with how it is compiled?

Linking the code against the Boost libraries libboost_random.a and libboost_system.a seems to solve the problem: the executable generates a different list of random numbers on each run.
// test5.cpp
// MinGW Distro - nuwen.net
// g++ -std=c++14 test5.cpp -o test5 E:\MinGW\lib\libboost_random.a E:\MinGW\lib\libboost_system.a
#include <boost/random.hpp>
#include <boost/random/random_device.hpp>
#include <iostream>
#include <random>
using namespace std;

int main() {
    boost::random_device rd;
    boost::mt19937 mt(rd());
    uniform_int_distribution<int> dist(0, 99);
    for (int i = 0; i < 16; ++i) {
        cout << dist(mt) << " ";
    }
    cout << endl;
}
Run 1: 20 89 31 30 74 3 93 43 68 4 64 38 74 37 4 69
Run 2: 40 85 99 72 99 29 95 32 98 73 95 88 37 59 79 66
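As an aside, if you would rather not link against Boost at all, a commonly suggested workaround on the affected MinGW builds is to fold a time-based value into the seed instead of trusting std::random_device. A minimal sketch (fine for simulations, not suitable for cryptography):

// workaround sketch: seed mt19937 from a high-resolution timestamp,
// since std::random_device is deterministic on this MinGW build
#include <chrono>
#include <iostream>
#include <random>
using namespace std;

int main() {
    auto ticks = chrono::high_resolution_clock::now().time_since_epoch().count();
    seed_seq seq{static_cast<unsigned>(ticks), static_cast<unsigned>(ticks >> 32)};
    mt19937 mt(seq);
    uniform_int_distribution<int> dist(0, 99);
    for (int i = 0; i < 16; ++i) {
        cout << dist(mt) << " ";
    }
    cout << endl;
}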

Related

How to implement this script about concurrency in c++ 11

I am implementing a multi-threaded concurrency exercise in C++11, but I am stuck.
int product_val = 0;
- thread 1: increment product_val, notify thread 2, and wait for thread 2 to print product_val;
- thread 2: wait, decrement product_val, and print product_val
#include <iostream>
#include <thread>
#include <condition_variable>
#include <mutex>
#include <chrono>
#include <queue>
using namespace std;

int product_val = 0;
std::condition_variable cond;
std::mutex sync;

int main() {
    // thread 2
    std::thread con = std::thread([&]() {
        while (1)
        {
            std::unique_lock<std::mutex> l(sync);
            cond.wait(l);
            product_val--;
            printf("Consumer product_val = %d \n", product_val);
            l.unlock();
        }
    });
    // thread 1 (main thread) process
    for (int i = 0; i < 5; i++)
    {
        std::unique_lock<std::mutex> l(sync);
        product_val++;
        std::cout << "producer product val " << product_val;
        cond.notify_one();
        l.unlock();
        l.lock();
        while (product_val)
        {
        }
        std::cout << "producer product val " << product_val;
        l.unlock();
    }
    return 0;
}
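For reference, a minimal sketch of how the handshake could be structured so the two threads alternate without busy-waiting: wait on the condition variable with a predicate over a produced flag, and have each side notify the other after changing it (the flag and the predicate overload of wait are the additions here):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int product_val = 0;
bool produced = false; // guarded by m
std::condition_variable cond;
std::mutex m;

int main() {
    std::thread con([] {
        for (int i = 0; i < 5; i++) {
            std::unique_lock<std::mutex> l(m);
            cond.wait(l, [] { return produced; }); // immune to lost/spurious wakeups
            product_val--;
            produced = false;
            std::cout << "consumer product_val = " << product_val << "\n";
            cond.notify_one(); // wake the producer
        }
    });
    for (int i = 0; i < 5; i++) {
        std::unique_lock<std::mutex> l(m);
        product_val++;
        produced = true;
        std::cout << "producer product_val = " << product_val << "\n";
        cond.notify_one();
        cond.wait(l, [] { return !produced; }); // wait for the consumer's turn
    }
    con.join();
    return 0;
}

The predicate form of wait() is what fixes the original code: a bare cond.wait(l) can miss a notify that fires before the wait begins, and the empty while (product_val) {} loop spins while holding the mutex, so the consumer can never reacquire it to decrement product_val.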

Repeating values in a random bytes generator in c++

I have made a random bytes generator for the initialization vector of a CBC-mode AES implementation:
#include <iostream>
#include <random>
#include <climits>
#include <algorithm>
#include <functional>
#include <stdio.h>

using bytes_randomizer = std::independent_bits_engine<std::default_random_engine, CHAR_BIT, uint8_t>;

int main()
{
    bytes_randomizer br;
    char x[3];
    uint8_t data[100];
    std::generate(std::begin(data), std::end(data), std::ref(br));
    for (int i = 0; i < 100; i++)
    {
        sprintf(x, "%x", data[i]);
        std::cout << x << "\n";
    }
}
But the problem is that it gives the same sequence over and over. I found a suggested solution on Stack Overflow, which is to use srand(), but that seems to work only for rand().
Any solutions to this? Also, is there a better way to generate a nonce for an unpredictable initialization vector?
Error C2338: invalid template argument for independent_bits_engine: N4659 29.6.1.1 [rand.req.genl]/1f requires one of unsigned short, unsigned int, unsigned long, or unsigned long long
Error C2338 note: char, signed char, unsigned char, int8_t, and uint8_t are not allowed
You can't use uint8_t in independent_bits_engine, at least on Visual Studio 2017. I don't know where or how you managed to compile it.
As DeiDei's answer suggests, seeding the engine is an important part of getting random values. The same goes for rand():
srand(time(nullptr)); is required to get varying values from rand().
You can use:
using bytes_randomizer = std::independent_bits_engine<std::default_random_engine, CHAR_BIT, unsigned long>;
std::random_device rd;
bytes_randomizer br(rd());
Some example output:
25
94
bd
6d
6c
a4
You need to seed the engine, otherwise a default seed will be used which will give you the same sequence every time. This is the same as the usage of srand and rand.
Try:
std::random_device rd;
bytes_randomizer br(rd());
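Putting both answers together, a minimal runnable sketch: unsigned long as the engine's result type (one of the types the standard permits), narrowed back to bytes on output, and std::random_device as the seed source (which, per the first question above, may itself be deterministic on some MinGW builds):

#include <algorithm>
#include <climits>
#include <cstdint>
#include <cstdio>
#include <random>

// unsigned long satisfies the standard's requirements for the result type
using bytes_randomizer = std::independent_bits_engine<std::default_random_engine, CHAR_BIT, unsigned long>;

int main() {
    std::random_device rd;
    bytes_randomizer br(rd()); // seeded, so each run differs
    std::uint8_t data[16];
    std::generate(std::begin(data), std::end(data),
                  [&] { return static_cast<std::uint8_t>(br()); }); // values fit in a byte
    for (std::uint8_t b : data)
        std::printf("%02x ", b); // two hex digits per byte
    std::printf("\n");
}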

IEEE-754 double precision and splitting method

When you compute elementary functions, you apply argument reduction with constants, especially in the implementation of exp(x). In all these implementations, any correction with ln(2) is done in two steps: ln(2) is split into two numbers:
static const double ln2p1 = 0.693145751953125;
static const double ln2p2 = 1.42860682030941723212E-6;
// then ln(2) = ln2p1 + ln2p2
Then any computation with ln(2) is done by:
blablabla -= ln2p1
blablabla -= ln2p2
I know this is done to avoid rounding effects. But why these two numbers specifically? Does anyone have an idea how these two numbers are obtained?
Thank you !
Following the first comment, I am completing this post with more material and a very strange question. I worked with my team and we agree that the point is to potentially double the precision by splitting the number ln(2) in two. For this, two transformations are applied, the first one:
1) c_h = floor(2^k ln(2))/2^k
2) c_l = ln(2) - c_h
where k indicates the precision. It looks like in the Cephes library (~1980), k was fixed at 9 for float, 16 for double, and also 16 for long double (why, I do not know). So for double, c_h has a precision of 16 bits while c_l has 52 bits.
From this, I wrote the following program to determine c_h with 52-bit precision.
#include <iostream>
#include <math.h>
#include <stdint.h> // for int64_t
#include <stdlib.h> // for atof
#include <iomanip>

enum precision { nine = 9, sixteen = 16, fiftytwo = 52 };

int64_t k_helper(double x) {
    return floor(x / log(2));
}

template<class C>
double z_helper(double x, int64_t k) {
    x -= k * C::c_h;
    x -= k * C::c_l;
    return x;
}

template<precision p>
struct coeff {};

template<>
struct coeff<nine> {
    constexpr const static double c_h = 0.693359375;
    constexpr const static double c_l = -2.12194440e-4;
};

template<>
struct coeff<sixteen> {
    constexpr const static double c_h = 6.93145751953125E-1;
    constexpr const static double c_l = 1.42860682030941723212E-6;
};

template<>
struct coeff<fiftytwo> {
    constexpr const static double c_h = 0.6931471805599453972490664455108344554901123046875;
    constexpr const static double c_l = -8.78318343240526578874146121703272447458793199905066E-17;
};

int main(int argc, const char* argv[]) {
    double x = atof(argv[1]);
    int64_t k = k_helper(x);
    double z_9 = z_helper<coeff<nine> >(x, k);
    double z_16 = z_helper<coeff<sixteen> >(x, k);
    double z_52 = z_helper<coeff<fiftytwo> >(x, k);
    std::cout << std::setprecision(16) << " 9 bits precisions " << z_9 << "\n"
              << " 16 bits precisions " << z_16 << "\n"
              << " 52 bits precisions " << z_52 << "\n";
    return 0;
}
If I compute now for a set of different values I get:
bash-3.2$ g++ -std=c++11 main.cpp
bash-3.2$ ./a.out 1
9 bits precisions 0.30685281944
16 bits precisions 0.3068528194400547
52 bits precisions 0.3068528194400547
bash-3.2$ ./a.out 2
9 bits precisions 0.61370563888
16 bits precisions 0.6137056388801094
52 bits precisions 0.6137056388801094
bash-3.2$ ./a.out 100
9 bits precisions 0.18680599936
16 bits precisions 0.1868059993678755
52 bits precisions 0.1868059993678755
bash-3.2$ ./a.out 200
9 bits precisions 0.37361199872
16 bits precisions 0.3736119987357509
52 bits precisions 0.3736119987357509
bash-3.2$ ./a.out 300
9 bits precisions 0.56041799808
16 bits precisions 0.5604179981036264
52 bits precisions 0.5604179981036548
bash-3.2$ ./a.out 400
9 bits precisions 0.05407681688
16 bits precisions 0.05407681691155647
52 bits precisions 0.05407681691155469
bash-3.2$ ./a.out 500
9 bits precisions 0.24088281624
16 bits precisions 0.2408828162794319
52 bits precisions 0.2408828162794586
bash-3.2$ ./a.out 600
9 bits precisions 0.4276888156
16 bits precisions 0.4276888156473074
52 bits precisions 0.4276888156473056
bash-3.2$ ./a.out 700
9 bits precisions 0.61449481496
16 bits precisions 0.6144948150151828
52 bits precisions 0.6144948150151526
It looks like a difference appears when x becomes larger than 300. I had a look at the implementation in glibc:
http://osxr.org:8080/glibc/source/sysdeps/ieee754/ldbl-128/s_expm1l.c
presently it uses the 16-bit precision for c_h (line 84).
Well, I am probably missing something with the IEEE standard, and I cannot imagine a precision error in glibc. What do you think?
Best,
ln2p1 is exactly 45426/65536. This can be obtained by round(65536 * ln(2)). ln2p2 is simply the remainder. So what is special about the two numbers is the denominator, 65536 (2^16).
From what I found, most algorithms using this constant can be traced back to the Cephes library, which was first released in 1984, when 16-bit computing was still dominant; that probably explains why 2^16 was chosen.
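The derivation is easy to check. A small sketch that reconstructs both constants from round(2^16 * ln(2)); the remainder computed here in double should agree closely with the published ln2p2, since double's own rounding error in ln(2) is far below 1e-6:

#include <cmath>
#include <cstdio>

int main() {
    const double ln2 = std::log(2.0);
    const double ln2p1 = std::round(65536.0 * ln2) / 65536.0; // exactly 45426/65536
    const double ln2p2 = ln2 - ln2p1;                         // low-order remainder
    std::printf("ln2p1 = %.17g\n", ln2p1); // 0.693145751953125
    std::printf("ln2p2 = %.17g\n", ln2p2); // ~1.4286068203094172e-06
}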

macos: CCCrypt() decrypt output does not match original plaintext

In the code below, the decrypted text does not match the original plaintext: the first 12 bytes are messed up. Note that block cipher padding has been disabled. I have tried different values for BUF_SIZE, all multiples of 16; every time, the first 12 bytes of the decrypted data are wrong. Here's the output:
plain buf[32]:
11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
outlen=32
outlen=32
dec buf[32]:
0C 08 01 46 6D 3D FC E9 98 0A 2D E1 AF A3 95 3A
0B 31 1B 9D 11 11 11 11 11 11 11 11 11 11 11 11
Here's the code:
#include <stdio.h>
#include <string.h>
#include <CommonCrypto/CommonCryptor.h>

static void
dumpbuf(const char* label, const unsigned char* pkt, unsigned int len)
{
    const int bytesPerLine = 16;
    if (label) {
        printf("%s[%d]:\n", label, len);
    }
    for (int i = 0; i < int(len); i++) {
        if (i && ((i % bytesPerLine) == 0)) {
            printf("\n");
        }
        unsigned int c = (unsigned int)pkt[i] & 0xFFu;
        printf("%02X ", c);
    }
    printf("\n");
}

int main(int argc, char* argv[])
{
    unsigned char key[16];
    unsigned char iv[16];
    memset(key, 0x22, sizeof(key));
    memset(iv, 0x33, sizeof(iv));

#define BUF_SIZE 32
    unsigned char plainBuf[BUF_SIZE];
    unsigned char encBuf[BUF_SIZE];
    memset(plainBuf, 0x11, sizeof(plainBuf));
    dumpbuf("plain buf", plainBuf, sizeof(plainBuf));

    int outlen;
    CCCryptorStatus status;
    status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, 0,
                     key, kCCKeySizeAES128, iv, plainBuf, sizeof(plainBuf),
                     encBuf, sizeof(encBuf), (size_t*)&outlen);
    if (kCCSuccess != status) {
        fprintf(stderr, "FEcipher: CCCrypt failure\n");
        return -1;
    }
    printf("outlen=%d\n", outlen);

    status = CCCrypt(kCCDecrypt, kCCAlgorithmAES128, 0,
                     key, kCCKeySizeAES128, iv, encBuf, sizeof(encBuf),
                     plainBuf, sizeof(plainBuf), (size_t*)&outlen);
    if (kCCSuccess != status) {
        fprintf(stderr, "FEcipher: CCCrypt failure\n");
        return -1;
    }
    printf("outlen=%d\n", outlen);
    dumpbuf("dec buf", plainBuf, sizeof(plainBuf));

    return 0;
}
Thanks,
Hari
@owlstead, thanks for your response. CBC is the default - you don't need to specify anything special in the options to enable it.
The same code using CCCrypt() was working before. I don't know what changed - maybe a new library was installed during an update. Instead of using the convenience function CCCrypt() I'm now using the Create/Update/Final API; that works, so I have a workaround.
outlen should be size_t, not int.
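A minimal sketch of that fix, applied to the encrypt call from the question (the decrypt call needs the same change). The size mismatch could well explain the corruption: on a 64-bit build, CCCrypt() writes a full 8-byte size_t through its last parameter, so casting the address of a 4-byte int is undefined behavior that can clobber adjacent stack memory.

// reusing key, iv, plainBuf and encBuf from the question's code
size_t outlen = 0; // a real size_t, as CCCrypt() expects - no cast needed
CCCryptorStatus status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, 0,
                                 key, kCCKeySizeAES128, iv,
                                 plainBuf, sizeof(plainBuf),
                                 encBuf, sizeof(encBuf), &outlen);
printf("outlen=%zu\n", outlen);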

Boost thread does not sleep properly, am I missing something?

Consider the following example, on a Windows 7 Core i7 laptop (VC++ 2010) and on 64-bit Ubuntu 12.04 LTS with GCC 4.6.3:
#include <iostream>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/thread.hpp>

typedef boost::posix_time::ptime Time;
typedef boost::posix_time::time_duration TimeDuration;

int main()
{
    Time t1;
    Time t2;
    TimeDuration dt;
    boost::posix_time::microseconds tosleep = boost::posix_time::microseconds(100);
    for (int i = 0; i < 10; i++) {
        t1 = boost::posix_time::microsec_clock::local_time();
        //std::cout << i << std::endl; // on win7 without this all outputs are 0
        boost::this_thread::sleep(tosleep);
        t2 = boost::posix_time::microsec_clock::local_time();
        dt = t2 - t1;
        long long msec = dt.total_microseconds();
        std::cout << msec << std::endl;
    }
    return 0;
}
I was expecting my thread to sleep for a constant 100 microseconds, but the output is something weird:
=========== AND OUTPUT ==================
arm2arm#cudastartub:~$ g++ -O3 sleepme.cpp -lboost_thread
arm2arm#cudastartub:~$ ./a.out
726
346
312
311
513
327
394
311
306
445
Does Boost have some overhead? What do I need for real-time systems, where microseconds are important?
On Windows one can't Sleep() for less than 1 ms, so your sleep(tosleep) call is equivalent to sleep(0). (See also this link.)
Of course, std::cout << i << std::endl also takes some time...
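If sub-millisecond pacing is genuinely required, one common (if CPU-hungry) workaround is to spin on a high-resolution clock rather than sleep. A minimal C++11 sketch, where spin_wait is a hypothetical helper, not a Boost API:

#include <chrono>

// Busy-wait until the requested interval has elapsed. This trades CPU time
// for accuracy, since OS sleeps are bounded by the scheduler's timer granularity.
void spin_wait(std::chrono::microseconds us) {
    auto end = std::chrono::steady_clock::now() + us;
    while (std::chrono::steady_clock::now() < end) {
        // spin; for hard guarantees a real-time OS or hardware timer is needed
    }
}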
