C++ 0xC0000094: Integer division by zero

This code works fine up to 100000, but if you input 1000000 it starts to give the error C++ 0xC0000094: Integer division by zero. I am sure it is something about floating point. I tried all the combinations of (/fp:precise), (/fp:strict), (/fp:except) and (/fp:except-) but had no positive result.
#include "stdafx.h"
#include "time.h"
#include "math.h"
#include "iostream"
#define unlikely(x)(x)
int main()
{
    using namespace std;
begin:
    int k;
    cout << "Please enter the nth prime you want: ";
    cin >> k;
    int cloc = clock();
    int* p; p = new int[k];
    int i, j, v, n = 0;
    for (p[0] = 2, i = 3; n < k - 1; i += 2)
        for (j = 1; unlikely((v = p[j], pow(v, 2) > i)) ? !(p[++n] = i) : (i % v); ++j);
    cout << "The " << k << "th prime is " << p[n] << "\nIt took me " << clock() - cloc << " milliseconds to find your prime.\n";
    goto begin;
}

The code displayed in the question does not initialize p[1] or assign a value to it. In the for loop that sets j=1, p[j] is used in an assignment to v. This results in an unknown value for v. Apparently, it happens to be zero, which causes a division by zero in the expression i%v.
As this code is undocumented, poorly structured, and unreadable, the proper solution is to discard it and start from scratch.
Floating point has no bearing on the problem, although the use of pow(v, 2) to calculate v² is a poor choice; v*v would serve better. However, some systems print the misleading message “Floating exception” when an integer division by zero occurs. In spite of the message, this is an error in an integer operation.

boost::multiprecision random number with fixed seed and variable precision

When using a fixed seed in an RNG, results are not reproducible when the precision is varied. Namely, if one changes the template argument cpp_dec_float<xxx> and runs the following code, a different output is seen for each change in precision.
#include <iostream>
#include <iomanip>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <boost/multiprecision/cpp_int.hpp>
#include <random>
#include <boost/random.hpp>
typedef boost::multiprecision::cpp_dec_float<350> mp_backend; // <--- change me
typedef boost::multiprecision::number<mp_backend, boost::multiprecision::et_off> big_float;
typedef boost::random::independent_bits_engine<boost::mt19937, std::numeric_limits<big_float>::digits, boost::multiprecision::cpp_int> generator;
int main()
{
    std::cout << std::setprecision(std::numeric_limits<big_float>::digits10) << std::showpoint;
    auto ur = boost::random::uniform_real_distribution<big_float>(big_float(0), big_float(1));
    generator gen = generator(42); // fixed seed
    std::cout << ur(gen) << std::endl;
    return 0;
}
Seems reasonable, I guess. But how do I make it so that, for n digits of precision, a fixed seed will produce a number x that agrees with y in the first n digits, where y is the number produced at n+1 digits of precision? e.g.
x = 0.213099234 // n = 9
y = 0.2130992347 // n = 10
...
To add to the excellent answer by #user14717, to get reproducible results you would have to:
Use a wide (wider than output mantissa + 1) random-bits generator. Let's say you need MP doubles with no more than a 128-bit mantissa; then use a bits generator which produces 128-bit output. Internally, it could be some standard RNG like a Mersenne Twister chaining words together to achieve the desired width.
Use your own uniform_real_distribution, which converts those 128 bits to the mantissa.
And at the end, DISCARD the rest of the bits in the 128-bit pack.
This approach guarantees you'll get the same real output, the only difference being in precision.
The way these distributions work is to shift random bits into the mantissa of the floating point number. If you change the precision, you consume more of these bits on every call, so you get different random sequences.
I see no way for you to achieve your goal without writing your own uniform_real_distribution. You probably need two integer RNGs, one which fills the most significant bits, and another which fills the least significant bits.

Is there a random number generator which can skip/drop N draws in O(1)?

Is there any (non-cryptographic) pseudo-random number generator that can skip/drop N draws in O(1), or maybe O(log N), but in any case less than O(N)?
Especially for parallel applications, a generator of this type would be advantageous. Imagine you want to generate an array of random numbers. One could write a parallel program for this task and seed the random number generator for each thread independently. However, the numbers in the array would then not be the same as in the sequential case (except maybe for the first half).
If a random number generator of the above type existed, the first thread could seed with the seed used for the sequential implementation. The second thread could also seed with this seed and then drop/skip the N/2 samples generated by the first thread. The output array would then be identical to the serial case (easy testing) but still be generated in less time.
Below is some pseudo code.
#define _POSIX_C_SOURCE 1
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
void rand_r_skip(unsigned int *p_seed, int N)
{
    /* Stupid O(N) implementation */
    for (int i = 0; i < N; i++)
    {
        rand_r(p_seed);
    }
}

int main()
{
    int N = 1000000;
    unsigned int seed = 1234;
    int *arr = (int *)malloc(sizeof(int) * N);
#pragma omp parallel firstprivate(N, seed, arr) num_threads(2)
    {
        if (omp_get_thread_num() == 1)
        {
            // skip the samples, obviously doesn't exist
            rand_r_skip(&seed, N / 2);
        }
#pragma omp for schedule(static)
        for (int i = 0; i < N; i++)
        {
            arr[i] = rand_r(&seed);
        }
    }
    return 0;
}
Thank you all very much for your help. I do know that there might be a proof that such a generator cannot exist and be "pseudo-random" at the same time. I am very grateful for any hints on where to find further information.
Sure. The Linear Congruential Generator and its descendants can skip the generation of N numbers in O(log N) time. It is based on a paper by F. Brown, link.
Here is an implementation of the idea, C++11.
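In case the link rots, here is a minimal self-contained sketch of the same O(log N) jump-ahead (the constants are Knuth's MMIX LCG parameters, chosen here for illustration): the n-step transform x -> A^n·x + (A^(n-1) + ... + A + 1)·C is built by square-and-multiply composition of the one-step transform, all modulo 2^64.

```cpp
#include <cstdint>

struct Lcg {
    std::uint64_t state;
    static constexpr std::uint64_t A = 6364136223846793005ULL; // MMIX multiplier
    static constexpr std::uint64_t C = 1442695040888963407ULL; // MMIX increment

    std::uint64_t next() { state = A * state + C; return state; }

    // Advance the generator by n steps in O(log n). An affine transform
    // x -> m*x + a is represented by the pair (m, a); composing two such
    // transforms is again affine, so we square-and-multiply on the pair.
    void skip(std::uint64_t n) {
        std::uint64_t acc_mult = 1, acc_add = 0; // identity transform
        std::uint64_t cur_mult = A, cur_add = C; // one-step transform
        while (n > 0) {
            if (n & 1) {
                acc_mult *= cur_mult;
                acc_add = acc_add * cur_mult + cur_add;
            }
            cur_add = (cur_mult + 1) * cur_add;  // compose cur with itself
            cur_mult *= cur_mult;
            n >>= 1;
        }
        state = acc_mult * state + acc_add;
    }
};
```

Seeding two Lcg instances identically, calling skip(N/2) on the second, and then drawing from both reproduces the serial sequence in two halves, exactly the pattern the OpenMP pseudo code above asks for.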
As kindly indicated by Severin Pappadeux, the C, C++ and Haskell implementations of a PCG variant developed by M.E. O'Neill provide an interface to such jump-ahead/jump-back functionality: herein.
The function names are advance and backstep, which are briefly documented here and here, respectively.
Quoting from the webpage (accessed at the time of writing):
... a random number generator is like a book that lists page after page of statistically random numbers. The seed gives us a starting point, but sometimes it is useful to be able to move forward or backwards in the sequence, and to be able to do so efficiently.
The C++ implementation of the PCG generation scheme provides advance to efficiently jump forwards and backstep to efficiently jump backwards.
Chris Dodd wrote the following:
An obvious candidate would be any symmetric crypto cipher in counter mode.

Strange Bank (AtCoder Beginner Contest 099)

To make it difficult to withdraw money, a certain bank allows its customers to withdraw only one of the following amounts in one operation:
1 yen (the currency of Japan)
6 yen, 6^2(=36) yen, 6^3(=216) yen, ...
9 yen, 9^2(=81) yen, 9^3(=729) yen, ...
At least how many operations are required to withdraw exactly N yen in total?
It is not allowed to re-deposit the money you withdrew.
Constraints
1≤N≤100000
N is an integer.
Input is given from Standard Input in the following format:
N
Output
If at least x operations are required to withdraw exactly N yen in total, print x.
Sample Input 1
127
Sample Output 1
4
By withdrawing 1 yen, 9 yen, 36(=6^2) yen and 81(=9^2) yen, we can withdraw 127 yen in four operations.
It seemed like a simple greedy problem to me, so that was the approach I used, but I got a different result for one of the samples and figured out that it will not always be greedy.
#include <iostream>
#include <queue>
#include <stack>
#include <algorithm>
#include <functional>
#include <cmath>
using namespace std;
int intlog(int base, long int x) {
    return (int)(log(x) / log(base));
}

int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    long int n; cin >> n;
    int result = 0;
    while (n > 0)
    {
        int base_9 = intlog(9, n); int base_6 = intlog(6, n);
        int val;
        val = max(pow(9, base_9), pow(6, base_6));
        //cout << pow(9, base_9) << " " << pow(6, base_6) << "\n";
        val = max(val, 1);
        if (n <= 14 && n >= 12)
            val = 6;
        n -= val;
        //cout << n << "\n";
        result++;
    }
    cout << result;
    return 0;
}
For n between 12 and 14 we have to pick 6 rather than 9, because reaching zero then takes fewer steps.
It got AC for only 18/22 test cases. Please help me understand my mistake.
Greedy will not work here, as choosing the answer greedily, i.e. taking the locally optimal amount at every step, does not guarantee the best end result (you can see that in your example). Instead you should traverse every possible scenario at each step to figure out the overall optimal result.
Now let's see how we can do that. The maximum input can be 10^5, and we can withdraw any one of only the following 12 values in one operation:
[1, 6, 9, 36(=6^2), 81(=9^2), 216(=6^3), 729(=9^3), 1296(=6^4), 6561(=9^4), 7776(=6^5), 46656(=6^6), 59049(=9^5)]
because 6^7 and 9^6 would be more than 100000.
So at each step with value n we will try to take each possible (i.e. less than or equal to n) element arr[i] from the above array and then recursively solve the subproblem for n-arr[i] until we reach zero.
solve(n)
    if n == 0
        return 0;
    ans = n;
    for (int i = 0; i < arr.length; i++)
        if (n >= arr[i])
            ans = min(ans, 1 + solve(n - arr[i]));
    return ans;
This is a very expensive recursive solution (exponential time without memoization). We will try to optimize it. If you work through some sample cases, you will see that the subproblems overlap, i.e. there are duplicate subproblems. Here Dynamic Programming comes into the picture: you can store every subproblem's solution so it can be re-used later. So we can modify our solution as follows:
solve(n)
    if n == 0
        return 0;
    if (dp[n] is seen)
        return dp[n];
    ans = n;
    for (int i = 0; i < arr.length; i++)
        if (n >= arr[i])
            ans = min(ans, 1 + solve(n - arr[i]));
    return dp[n] = ans;
The time complexity of the DP solution is O(n*12).

Iteration 3u invokes undefined behavior

#include <iostream>
int main()
{
    for (int i = 0; i < 4; ++i)
        std::cout << i * 1000000000 << std::endl;
}
I'm getting a warning from gcc whenever I try to compile this:
warning: iteration 3u invokes undefined behavior [-Waggressive-loop-optimizations]
std::cout << i * 1000000000 << std::endl;
What's the cause of this warning?
Signed integer overflow (and, strictly speaking, there is no such thing as "unsigned integer overflow") means undefined behaviour. The standard says:
Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
I suspect that it's something like: (1) because every iteration with i of any value larger than 2 has undefined behavior, (2) the compiler can assume that i <= 2 for optimization purposes, so (3) the loop condition is always true, and (4) the loop is optimized into an infinite loop.
What is going on is a case of strength reduction, more specifically induction variable elimination. The compiler eliminates the multiplication by emitting code that instead adds 1000000000 to an accumulator on each iteration (and changes the loop condition accordingly). This is a perfectly valid optimization under the "as if" rule, as this program could not observe the difference were it well-behaved. Alas, it's not, and the optimization "leaks".

generate random long unsigned C

This is my code:
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <crypt.h>
#include <string.h>
#include <stdlib.h>
int main(void){
    int i;
    unsigned long seed[2];

    /* Generate a (not very) random seed */
    seed[0] = time(NULL);
    seed[1] = getpid() ^ (seed[0] >> 14 & 0x30000);
    printf("Seed 0: %lu ; Seed 1: %lu", seed[0], seed[1]);
    return 0;
}
I want to generate a very random seed that will be used in a hash function, but I don't know how to do it!
You can read the random bits you need from /dev/random.
When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.(http://www.kernel.org/doc/man-pages/online/pages/man4/random.4.html)
#include <fcntl.h>   /* open, O_RDONLY */
#include <unistd.h>  /* read, close */

int randomSrc = open("/dev/random", O_RDONLY);
unsigned long seed[2];
if (randomSrc < 0 || read(randomSrc, seed, sizeof seed) != (ssize_t)sizeof seed) {
    /* handle the error */
}
close(randomSrc);
Go for the Mersenne Twister; it is a widely used pseudorandom number generator, since it is very fast, has a very long period and a very good distribution. Do not attempt to write your own implementation; use any of the available ones.
Because the algorithm is deterministic, you can't get truly random numbers, only pseudo-random ones. For most cases what you have there is plenty; if you go overboard, e.g.
MAC address + IP address + free space on HD + current free memory + epoch time in ms...
then you risk crippling the performance of your algorithm.
If your solution is interactive, then you could set the user a short typing task and get them to generate the random data for you: measure the time between keystrokes and multiply that by the code of the key they pressed. Even if they re-type the same string, the timing will be slightly off. You could mix it up a bit, too: take mod 10 of the seconds when they start and only count those keystrokes.
But if you really, really want 100% random numbers, then you could use the ANU Quantum Vacuum Random Number Generator - article.
There is a project on GitHub; it's a pretty awesome way to beat the bad guys.
