The documentation for GMP seems to list only the following algorithms for random number generation:
gmp_randinit_mt, the Mersenne Twister;
gmp_randinit_lc_2exp and gmp_randinit_lc_2exp_size, linear congruential.
There is also gmp_randinit_default, but it points to gmp_randinit_mt.
Neither the Mersenne Twister nor linear congruential generators should be used for cryptography.
What do people usually do, then, when they want to use GMP to build cryptographic code?
(Using a cryptographic API for encryption/decryption/etc. doesn't help, because I'd actually be implementing a new algorithm, which crypto libraries do not provide.)
Disclaimer: I have only "tinkered" with RNGs, and that was over a year ago.
If you are on a Linux box, the solution is relatively simple and non-deterministic: just open and read a desired number of bits from /dev/urandom. If you need a large number of random bits for your program, however, you might want to read a smaller number of bits from /dev/urandom and use them as the seed for a PRNG.
boost offers a number of PRNGs and a non-deterministic RNG, random_device. random_device uses the very same /dev/urandom on Linux and a similar (IIRC) facility on Windows, so it is an option if you need Windows or cross-platform support.
Of course, you might just want or need to write a function based on your favored RNG using GMP's types and functions.
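For illustration, here is a minimal, untested sketch of seeding GMP's generator directly from /dev/urandom (Linux only; the 32-byte seed size is an arbitrary choice, and the MT output itself is still not cryptographic-strength):
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    unsigned char buf[32];
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f || fread(buf, 1, sizeof buf, f) != sizeof buf)
        return 1;
    fclose(f);

    mpz_t seed;
    mpz_init(seed);
    mpz_import(seed, sizeof buf, 1, 1, 0, 0, buf);  /* raw bytes -> big integer */

    gmp_randstate_t state;
    gmp_randinit_mt(state);        /* still the Mersenne Twister underneath */
    gmp_randseed(state, seed);

    mpz_t r;
    mpz_init(r);
    mpz_urandomb(r, state, 256);   /* 256 random bits */
    gmp_printf("%Zx\n", r);

    mpz_clears(seed, r, NULL);
    gmp_randclear(state);
    return 0;
}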
Edit:
#include <stdio.h>
#include <gmp.h>
#include <boost/random/random_device.hpp>

int main(int argc, char *argv[])
{
    unsigned min_digits = 30;
    unsigned max_digits = 50;
    unsigned quantity = 1000;   // How many numbers do you want?
    unsigned sequence = 10;     // How many numbers before reseeding?

    mpz_t rmin;
    mpz_init(rmin);
    mpz_ui_pow_ui(rmin, 10, min_digits - 1);   // smallest acceptable value

    mpz_t rmax;
    mpz_init(rmax);
    mpz_ui_pow_ui(rmax, 10, max_digits);       // exclusive upper bound

    gmp_randstate_t rstate;
    gmp_randinit_mt(rstate);

    mpz_t rnum;
    mpz_init(rnum);

    boost::random::random_device rdev;

    for (unsigned i = 0; i < quantity; i++) {
        if (!(i % sequence))
            gmp_randseed_ui(rstate, rdev());   // reseed from the non-deterministic source

        do {
            mpz_urandomm(rnum, rstate, rmax);
        } while (mpz_cmp(rnum, rmin) < 0);     // reject numbers with too few digits

        gmp_printf("%Zd\n", rnum);
    }

    return 0;
}
I have implemented a random sequence generator in Python and want to test the results with TestU01, but I don't understand how to provide input to the test suite. Also, please suggest how long a bit sequence I need to generate in order to test it.
TestU01 is a library and doesn't come with executables. It mostly provides methods for testing C generators that implement the unif01_Gen interface defined in unif01.h. See guideshorttest01.pdf.
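For illustration, wrapping a 32-bit C generator for one of the batteries looks roughly like this (a sketch only; my_rand32 is a placeholder toy xorshift, not your generator):
#include "unif01.h"
#include "bbattery.h"

/* Placeholder generator: replace with the one you want to test.
   It must return 32 random bits per call as an unsigned int. */
static unsigned int my_rand32 (void)
{
    static unsigned int x = 123456789u;   /* toy xorshift32 state */
    x ^= x << 13;  x ^= x >> 17;  x ^= x << 5;
    return x;
}

int main (void)
{
    unif01_Gen *gen = unif01_CreateExternGenBits ("my_rand32", my_rand32);
    bbattery_SmallCrush (gen);            /* run the SmallCrush battery */
    unif01_DeleteExternGenBits (gen);
    return 0;
}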
However, it does come with a few methods which test binary files. Here is a short program which calls them:
#include <stdio.h>
#include "gdef.h"
#include "swrite.h"
#include "bbattery.h"

int main (int argc, char *argv[])
{
    if (argc != 2) {
        printf("Specify a binary file of random bits: ./test <path>\n");
        return 0;
    }

    /* Determine the file size in bits. */
    FILE* fp = fopen(argv[1], "rb");
    fseek(fp, 0L, SEEK_END);
    size_t sz = ftell(fp) * 8;
    fclose(fp);
    printf("Reading binary file %s of size %zu bits\n", argv[1], sz);

    swrite_Basic = FALSE;   /* suppress detailed per-test output */
    bbattery_RabbitFile (argv[1], sz);
    bbattery_AlphabitFile (argv[1], sz);
    bbattery_FIPS_140_2File (argv[1]);
    return 0;
}
After installing TestU01 (it's in the Arch/Manjaro AUR, in case that helps), compile it with: gcc test.c -o test -ltestu01
Here is a Python program which generates a random binary file. Note that the tests work on 32-bit blocks, and I suggest sticking to that when generating the file.
import struct
from random import Random

size = 1024*1024                      # total number of bytes to write
rand = Random()
with open("bits", "wb") as f:
    for i in range(size//4):          # one 32-bit block per iteration
        value = rand.getrandbits(32)
        s = struct.pack('I', value)
        f.write(s)
There is also a version of SmallCrush which reads a text file of about 5 million floats. See bbattery_SmallCrushFile. I haven't tried it, but make sure the floats are written with many digits as the conversion to/from text can break the test.
I don't know much about the theory of testing RNGs, so I can't answer how long of a sequence you need. The TestU01 guide is detailed and might answer your questions.
Maybe I got the whole idea wrong, but in my attempt to create an RNG using what is provided in the <random> header, I found a lack of uniformity in the least significant bits of the random numbers. In fact, most of the time when I look at the numbers in hexadecimal in the Visual Studio 2010 debugger, they end in 0x00.
So I wrote this little bit of code to confirm my suspicion:
#include <random>
#include <climits>

int main()
{
    std::random_device rd;
    std::mt19937_64 generator(rd());
    std::uniform_int_distribution<unsigned long long> distribution(0, ULLONG_MAX);

    int cnt[256] = {0};                        // histogram of the low byte
    for (int i = 0; i < 10000; i++)
        cnt[distribution(generator) & 0xFF]++;
}
It turns out that about 95% of the random numbers generated end with 0x00.
Did I get the idea of uniformity wrong?
What do I have to do in order to generate uniformly distributed 64-bit unsigned integers?
I intended to use them as the basis for lower-resolution random numbers, which I could derive by masking them with the corresponding 0xF...F pattern.
Performance is not an issue. Flexibility is.
Is it possible to generate random numbers within a device function without preallocating all the states? I would like to generate and use them in real time. I need them for Monte Carlo simulations: which generators are most suitable for this purpose? Also, the numbers generated below are single precision; is it possible to get them in double precision?
#include <iostream>
#include <cstdio>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <curand_kernel.h>
#include <ctime>

__global__ void cudaRand(float *d_out, unsigned long seed)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    curandState state;
    curand_init(seed, i, 0, &state);     // same seed for all threads, per-thread sequence i
    d_out[i] = curand_uniform(&state);   // uniform float in (0, 1]
}

int main(int argc, char** argv)
{
    size_t N = 1 << 4;
    float *v = new float[N];
    float *d_out;
    cudaMalloc((void**)&d_out, N * sizeof(float));

    // generate random numbers
    cudaRand<<<1, N>>>(d_out, time(NULL));

    cudaMemcpy(v, d_out, N * sizeof(float), cudaMemcpyDeviceToHost);
    for (size_t i = 0; i < N; i++)
    {
        printf("out: %f\n", v[i]);
    }

    cudaFree(d_out);
    delete[] v;
    return 0;
}
UPDATE
#include <iostream>
#include <cstdio>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <curand_kernel.h>
#include <ctime>

__global__ void cudaRand(double *d_out)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    curandState state;
    curand_init((unsigned long long)clock() + i, 0, 0, &state);  // per-thread seed
    d_out[i] = curand_uniform_double(&state);
}

int main(int argc, char** argv)
{
    size_t N = 1 << 4;
    double *h_v = new double[N];
    double *d_out;
    cudaMalloc((void**)&d_out, N * sizeof(double));

    // generate random numbers
    cudaRand<<<1, N>>>(d_out);

    cudaMemcpy(h_v, d_out, N * sizeof(double), cudaMemcpyDeviceToHost);
    for (size_t i = 0; i < N; i++)
        printf("out: %f\n", h_v[i]);

    cudaFree(d_out);
    delete[] h_v;
    return 0;
}
This is how I dealt with a similar situation in the past, within a __device__/__global__ function:
int tId = threadIdx.x + (blockIdx.x * blockDim.x);
curandState state;
curand_init((unsigned long long)clock() + tId, 0, 0, &state);
double rand1 = curand_uniform_double(&state);
double rand2 = curand_uniform_double(&state);
So just use curand_uniform_double for generating random doubles. Also, I believe you don't want the same seed for all of the threads; that is what I am trying to achieve by using clock() + tId instead. This way the odds of any two threads getting the same rand1/rand2 are close to nil.
EDIT:
However, based on the comments below, the proposed approach may lead to biased results:
JackOLantern pointed me to this part of the curand documentation:
Sequences generated with different seeds usually do not have statistically correlated values, but some choices of seeds may give statistically correlated sequences.
There is also a devtalk thread devoted to improving the performance of curand_init, in which the proposed way to speed up curand initialization is:
One thing you can do is use different seeds for each thread and a fixed subsequence of 0 and offset of 0.
But the same poster later states:
The downside is that you lose some of the nice mathematical properties between threads. It is possible that there is a bad interaction between the hash function that initializes the generator state from the seed and the periodicity of the generators. If that happens, you might get two threads with highly correlated outputs for some seeds. I don't know of any problems like this, and even if they do exist they will most likely be rare.
So it is basically up to you whether you want better performance (as I did) or fully unbiased results. If the latter is what you desire, then the solution proposed by JackOLantern is the correct one, i.e. initialize curand as:
curand_init((unsigned long long)clock(), tId, 0, &state)
Using non-zero values for the offset and subsequence parameters, however, decreases performance. For more info on these parameters you can review this SO thread and the curand documentation.
I see that JackOLantern stated in a comment that:
I would say it is not recommandable to call curand_init and curand_uniform_double from within the same kernel from two reasons ........ Second, curand_init initializes the pseudorandom number generator and sets all of its parameters, so I'm afraid your approach will be somewhat slow.
I dealt with this over several pages in my thesis; I tried various approaches to get different random numbers in each thread, and creating a curandState in each of the threads turned out to be the most viable solution for me. I needed to generate ~10 random numbers in each thread, and among others I tried:
developing my own simple random number generator (a linear congruential generator) whose initialization was basically free; however, performance suffered greatly when generating numbers, so in the end having a curandState in each thread turned out to be superior,
pre-allocating curandStates and reusing them - this was memory heavy, and when I decreased the number of preallocated states I had to use non-zero values for the offset/subsequence parameters of curand_init in order to get rid of the bias, which in turn decreased performance when generating numbers (a rough sketch of this pattern is shown below).
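For reference, a rough sketch of that pre-allocation pattern (one state per thread, filled once by a setup kernel and reused by later kernels; the names and launch configuration are placeholders):
#include <curand_kernel.h>

// One persistent state per thread; the array is cudaMalloc'ed once on the host
// (sizeof(curandState) * number_of_threads) and passed to both kernels.
__global__ void setupStates(curandState *states, unsigned long long seed)
{
    int tId = threadIdx.x + blockIdx.x * blockDim.x;
    curand_init(seed, tId, 0, &states[tId]);    // same seed, per-thread subsequence
}

__global__ void useStates(curandState *states, double *d_out)
{
    int tId = threadIdx.x + blockIdx.x * blockDim.x;
    curandState local = states[tId];            // copy the state to registers
    d_out[tId] = curand_uniform_double(&local);
    states[tId] = local;                        // save it back so the sequence continues
}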
So after making a thorough analysis I decided to indeed call curand_init and curand_uniform_double in each thread. The only problem was the number of registers these states occupy, so I had to be careful with block sizes not to exceed the maximum number of registers available per block.
That's all I have to say about the provided solution, which I was finally able to test; it works just fine on my machine/GPU. I ran the code from the UPDATE section of the question above, and 16 different random numbers were displayed correctly in the console. Therefore I advise you to perform proper error checking after executing the kernel to see what went wrong inside. This topic is covered very well in this SO thread.
In the IEEE 754 standard, the minimum strictly positive (subnormal) value is 2^-16493 ≈ 10^-4965, using the quadruple-precision floating-point format. Why does GCC reject anything lower than 10^-4949? I'm looking for an explanation of the different things that could be going on underneath which determine the limit to be 10^-4949 rather than 10^-4965.
#include <stdio.h>

void prt_ldbl(long double decker) {
    unsigned char *desmond = (unsigned char *) &decker;
    int i;
    for (i = 0; i < sizeof (decker); i++) {
        printf ("%02X ", desmond[i]);
    }
    printf ("\n");
}

int main()
{
    long double x = 1e-4955L;
    prt_ldbl(x);
}
I'm using GNU GCC version 4.8.1 online - not sure which architecture it's running on (which I realize may be the culprit). Please feel free to post your findings from different architectures.
Your long double type may not be(*) quadruple-precision. It may simply be the 387 80-bit extended-double format. This format has the same number of bits for the exponent as quad-precision, but many fewer significand bits, so the minimum value that would be representable in it sounds about right (2^-16445).
(*) Your long double is likely not to be quad-precision, because no processor implements quad-precision in hardware. The compiler can always implement quad-precision in software, but it is much more likely to map long double to double-precision, to extended-double or to double-double.
The smallest 80-bit long double is around 2^(-16382-63) = 2^-16445 ≈ 10^-4951, not 2^-16493.4. So the compiler is entirely correct; your number is smaller than the smallest subnormal.
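As a quick check of which format your compiler's long double actually is, you can print the <float.h> parameters; the smallest subnormal is 2^(LDBL_MIN_EXP - LDBL_MANT_DIG), and LDBL_TRUE_MIN is only available from C11 on, hence the guard:
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* For x87 extended: 2^(-16381 - 64) = 2^-16445, matching the limit above. */
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);   /* 64 for x87 extended, 113 for quad */
    printf("LDBL_MIN_EXP  = %d\n", LDBL_MIN_EXP);
    printf("LDBL_MIN      = %Lg\n", LDBL_MIN);       /* smallest normal */
#ifdef LDBL_TRUE_MIN
    printf("LDBL_TRUE_MIN = %Lg\n", LDBL_TRUE_MIN);  /* smallest subnormal, C11 */
#endif
    return 0;
}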
I have a set of integer values and I want to sort them using Thrust. Is there a possibility of using only some of the high/low bits in this sort? If possible, I do not want to use a user-defined comparator, because that changes the algorithm used from radix sort to merge sort and increases the elapsed time quite a lot.
I think that when all the numbers have the same value in a given bit, that bit is skipped while sorting, so is it feasible to use the lowest possible number of bits and hope it will be sufficient? (i.e., for 5 bits, use a char with 8 bits and set the upper 3 bits to 0)
Example:
sort<4, 0>(myvector.begin(), myvector.end())
sort<4, 1>(myvector.begin(), myvector.end())
sort using only 4 bits, high or low.
Something similar to
http://www.moderngpu.com/sort/mgpusort.html
Thrust's interface abstracts away algorithm implementation details such as the fact that one of the current sorting strategies is a radix sort. Due to the possibility that the underlying sort implementation could change from version to version, backend to backend, or even invocation to invocation, there is no way for the user to communicate the number of bits to sort.
Fortunately, such explicit information generally isn't necessary. When appropriate, Thrust's current sorting implementation will inspect the sorting keys and omit superfluous computation amidst zeroed bits.
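One way to exploit that behavior (a rough sketch, assuming the low 4 bits are the ones that matter): materialize a masked copy of the keys, whose upper bits are all zero, and sort the original values by that copy, which avoids a custom comparator:
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/sort.h>

// Keep only the low 4 bits of each key (the bits assumed to matter here).
struct low4_bits
{
    __host__ __device__ unsigned int operator()(unsigned int x) const
    {
        return x & 0xFu;
    }
};

int main()
{
    thrust::device_vector<unsigned int> values(4);
    values[0] = 0x25; values[1] = 0x13; values[2] = 0x31; values[3] = 0x4A;

    // Materialize masked keys (upper bits all zero) and sort the values by them.
    thrust::device_vector<unsigned int> keys(values.size());
    thrust::transform(values.begin(), values.end(), keys.begin(), low4_bits());
    thrust::sort_by_key(keys.begin(), keys.end(), values.begin());
    return 0;
}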
How about using transform_iterator?
Here is a short example (sorting by a single bit), and you can write your own unary function for your purpose.
#include <iostream>
#include <thrust/device_vector.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/sort.h>

using namespace std;

struct and_func : public thrust::unary_function<int,int>
{
    __host__ __device__
    int operator()(int x)
    {
        return 8&x;
    }
};

int main()
{
    thrust::device_vector<int> d_vec(4);
    d_vec[0] = 10;
    d_vec[1] = 8;
    d_vec[2] = 12;
    d_vec[3] = 1;

    thrust::sort_by_key(thrust::make_transform_iterator(d_vec.begin(), and_func()),
                        thrust::make_transform_iterator(d_vec.end(), and_func()),
                        d_vec.begin());

    for (int i = 0; i < 4; i++)
        cout<<d_vec[i]<<" ";
    cout<<"\n"<<endl;
    return 0;
}