I have written this code to check if a number is prime (for numbers up to 10^9+7).
Is this a good method?
What will be the time complexity?
What I have done is build an unordered_set which stores the prime numbers up to sqrt(n).
When checking whether a number is prime, it first checks whether the number is less than the maximum number in the table.
If it is, the number is looked up in the table, so the complexity should be O(1) in this case.
If it is greater, the number is put through a divisibility test against the numbers in the set of primes.
#include<iostream>
#include<set>
#include<math.h>
#include<unordered_set>
#define sqrt10e9 31623
using namespace std;

unordered_set<long long> primeSet = { 2, 3 }; // used for fast lookups

void generate_prime_set(long range) // generates the primes up to sqrt(10^9+7)
{
    bool flag;
    set<long long> tempPrimeSet = { 2, 3 }; // a temporary ordered set is used for generation
    set<long long>::iterator j;
    for (int i = 3; i <= range; i = i + 2)
    {
        flag = true;
        for (j = tempPrimeSet.begin(); *j * *j <= i; ++j)
        {
            if (i % (*j) == 0)
            {
                flag = false;
                break;
            }
        }
        if (flag)
        {
            primeSet.insert(i);
            tempPrimeSet.insert(i);
        }
    }
}

bool is_prime(long long i, unordered_set<long long> primeSet)
{
    bool flag = true;
    if (i <= sqrt10e9) // if the number is within the lookup table, search it there
        return primeSet.count(i);
    // otherwise iterate through the table and trial-divide
    for (unordered_set<long long>::iterator j = primeSet.begin(); j != primeSet.end(); ++j)
    {
        if (*j * *j <= i && i % (*j) == 0)
        {
            flag = false;
            break;
        }
    }
    return flag;
}

int main()
{
    generate_prime_set(sqrt10e9);
    cout << primeSet.size() << "\n";
    cout << is_prime(9999991, primeSet);
    return 0;
}
This doesn't strike me as a particularly efficient way to do the job at hand.
Although it probably won't make a big difference in the end, the efficient way to generate all the primes up to some specific limit is clearly to use a sieve--the sieve of Eratosthenes is simple and fast. There are a couple of modifications that can be faster, but for the small size you're dealing with, they're probably not worthwhile.
These normally produce their output in a more compact format than you're currently using as well. In particular, you typically dedicate just one bit to each possible prime (i.e., each odd number) and end up with it zeroed if the number is composite, and set if it's prime (you can, of course, reverse the sense if you prefer).
Since you only need one bit for each odd number from 3 to 31623, this requires only about 16 K bits, or about 2K bytes--a truly minuscule amount of memory by modern standards (especially: little enough to fit in L1 cache quite easily).
Since the bits are stored in order, it's also trivial to test only the candidate factors up to the square root of the number you're testing, instead of testing against all the numbers in the table (including those greater than the square root, which is obviously a waste of time). This also optimizes access to the memory in case some of it's not in the cache (i.e., you can access all the data in order, making life as easy as possible for the hardware prefetcher).
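To make that layout concrete, here is a minimal sketch (mine, not this answer's exact code) of an odds-only, bit-packed sieve in Python; for the limit of 31623 used above, the bitmap comes to roughly 2 KB:

def odd_sieve(limit):
    # One bit per odd number: bit k stands for 2*k + 3; a set bit means prime.
    nbits = (limit - 1) // 2                   # counts 3, 5, 7, ..., <= limit
    bits = bytearray(b"\xff" * ((nbits + 7) // 8))

    def clear(k):                              # mark 2*k + 3 composite
        bits[k >> 3] &= ~(1 << (k & 7))

    def test(k):                               # is 2*k + 3 still marked prime?
        return (bits[k >> 3] >> (k & 7)) & 1

    i = 3
    while i * i <= limit:
        if test((i - 3) // 2):
            # start crossing off at i*i; a step of 2*i skips the even multiples
            for m in range(i * i, limit + 1, 2 * i):
                clear((m - 3) // 2)
        i += 2
    return [2] + [2 * k + 3 for k in range(nbits) if test(k)]

primes = odd_sieve(31623)    # all primes needed for trial division up to ~10^9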
If you wanted to optimize further, I'd consider just using the sieve to find all primes up to 10^9+7, and look up inputs. Whether this is a win will depend (heavily) upon the number of queries you can expect to receive. A quick check shows that a simple implementation of the Sieve of Eratosthenes can find all primes up to 10^9 in about 17 seconds. After that, each query is (of course) essentially instantaneous (i.e., the cost of a single memory read). This does require around 120 megabytes of memory for the result of the sieve, which would once have been a major consideration, but (except on fairly limited systems) normally isn't any more.
The very short answer: do research on the subject, starting with the term "Miller-Rabin"
The short answer is no:
Looking for factors of a number is a poor way to check for primality
Exhaustively searching through primes is a poor way to look for factors
Especially if you search through every prime, rather than just those less than or equal to the square root of the number
Running a primality test on each candidate is a poor way to generate a list of primes
Also, you should pass primeSet by reference rather than by value, if it really needs to be a parameter.
Note: testing small primes to see if they divide a number is a useful first step of a primality test, but should generally only be used for the smallest primes before switching to a better method
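As a concrete follow-up to the Miller-Rabin pointer above (this sketch is mine, not part of the original answer), here is a deterministic version in Python; the witness set {2, 3, 5, 7} is known to be sufficient for all n below 3,215,031,751, which comfortably covers the 10^9+7 range from the question:

def is_prime(n):
    # Small-prime pre-check, as recommended above
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    # Miller-Rabin rounds; these witnesses are deterministic below 3,215,031,751
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False    # a is a witness that n is composite
    return True

print(is_prime(10**9 + 7))    # True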
No, it's not a very good way to determine if a number is prime. Here is pseudocode for a simple primality test that is sufficient for numbers in your range; I'll leave it to you to translate to C++:
function isPrime(n)
    d := 2
    while d * d <= n
        if n % d == 0
            return False
        d := d + 1
    return True
This works by trying every potential divisor up to the square root of the input number n; if no divisor has been found, then the input number cannot be composite, i.e., of the form n = p × q, because at least one of the two factors p or q would have to be less than or equal to the square root of n while the other is greater than or equal to it.
There are better ways to determine primality; for instance, after initially checking if the number is even (and hence prime only if n = 2), it is only necessary to test odd potential divisors, halving the amount of work necessary. If you have a list of primes up to the square root of n, you can use that list as trial divisors and make the process even faster. And there are other techniques for larger n.
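The answer above deliberately leaves the C++ translation as an exercise, so purely to illustrate the even/odd refinement just described, here is a sketch in Python (my code, not the answer's):

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:          # even numbers: only 2 is prime
        return n == 2
    d = 3
    while d * d <= n:       # test odd divisors only, halving the work
        if n % d == 0:
            return False
        d += 2
    return True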
But that should be enough to get you started. When you are ready for more, come back here and ask more questions.
I can only suggest a way to use a library function in Java to check the primality of a number. As for the other questions, I do not have any answers.
The java.math.BigInteger.isProbablePrime(int certainty) method returns true if this BigInteger is probably prime, and false if it's definitely composite. If certainty is ≤ 0, true is returned. You should try to use it in your code, so try rewriting your code in Java.
Parameters
certainty - a measure of the uncertainty that the caller is willing to tolerate: if the call returns true the probability that this BigInteger is prime exceeds (1 - 1/2^certainty). The execution time of this method is proportional to the value of this parameter.
Return Value
This method returns true if this BigInteger is probably prime, false if it's definitely composite.
Example
The following example shows the usage of the java.math.BigInteger.isProbablePrime() method.
import java.math.*;

public class BigIntegerDemo {
    public static void main(String[] args) {
        // create 2 BigInteger objects
        BigInteger bi1, bi2;
        // create 3 Boolean objects
        Boolean b1, b2, b3;
        // assign values to bi1, bi2
        bi1 = new BigInteger("7");
        bi2 = new BigInteger("9");
        // perform isProbablePrime on bi1, bi2
        b1 = bi1.isProbablePrime(1);
        b2 = bi2.isProbablePrime(1);
        b3 = bi2.isProbablePrime(-1);
        String str1 = bi1 + " is prime with certainty 1 is " + b1;
        String str2 = bi2 + " is prime with certainty 1 is " + b2;
        String str3 = bi2 + " is prime with certainty -1 is " + b3;
        // print b1, b2, b3 values
        System.out.println(str1);
        System.out.println(str2);
        System.out.println(str3);
    }
}
Output
7 is prime with certainty 1 is true
9 is prime with certainty 1 is false
9 is prime with certainty -1 is true
Related
Given that the first number divisible by all of (1,2,...,10) is 2520,
and given that the first number divisible by all of (1,2,...,20) is 232792560,
find the first number divisible by all of (1,2,...,100) (all consecutive numbers from 1 to 100).
The program should run in less than a minute.
I'm writing the solution in Java, and I'm facing two problems:
How can I compute this if the solution itself is a number so huge that it cannot be handled by primitive types?
I tried using BigInteger, but I'm doing many additions and divisions and I don't know if this is increasing my time complexity.
How can I calculate this in less than a minute? The solutions I have thought of so far haven't even stopped running.
This is my Java code (using big integers):
public static boolean solved(int start, int end, BigInteger answer) {
    for (int i = start; i <= end; i++) {
        // not solved if answer is not divisible by some i in [start, end]
        if (answer.mod(BigInteger.valueOf(i)).compareTo(BigInteger.ZERO) != 0) {
            return false;
        }
    }
    return true;
}

public static void main(String[] args) {
    BigInteger answer = new BigInteger("232792560");
    BigInteger y = new BigInteger("232792560");
    while (!solved(21, 100, answer)) {
        answer = answer.add(y);
    }
    System.out.println(answer);
}
I take advantage of the fact that I already know the solution for (1,...,20).
Currently it simply does not stop.
I thought I could improve it by changing the function solved to check only the values we care about.
For example:
100 = 25,50,100
99 = 33,99
98 = 49,98
97 = 97
96 = 24,32,48,96
And so on. But this simple calculation of identifying the smallest set of numbers to check has become a problem in itself, to which I haven't found a solution. Of course, the running time should stay under a minute either way.
Any other ideas?
The first number that can be divided by all elements of some set (which is what you have there, despite the slightly different formulation) is also known as the Least Common Multiple of that set. LCM(a, b, c) = LCM(LCM(a, b), c) and so on, so in general, it can be computed by taking n - 1 pairwise LCMs where n is the number of items in the set. BigInteger does not have an lcm function, but the LCM can be computed via a * b / gcd(a, b) so in Java with BigInteger:
static BigInteger lcm(BigInteger a, BigInteger b) {
    return a.multiply(b).divide(a.gcd(b));
}
For 1 to 20, computing the LCM in that way indeed results in 232792560. It's easy to do it up to 100 too.
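For instance (this snippet is mine, written in Python for its built-in big integers; the Java lcm above works the same way), folding lcm over the whole range:

from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Fold lcm over 1..100; Python integers grow as needed
print(reduce(lcm, range(1, 101)))
# 69720375229712477164533808935312303556800

This takes n - 1 gcd computations in total, far below the one-minute budget.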
Find all max prime powers in your range and take their product.
E.g. 1-10: 2^3, 3^2, 5^1, 7^1: the product is 2520, which is the right answer. You could find the primes via the sieve of Eratosthenes or just download them from a list of primes.
As 100 is small, you can work this out by producing the prime factorization of all numbers from 2 to 100 and keeping the largest exponent of each prime among all factorizations. In fact, trial division by 2, 3, 5 and 7 is enough to check primality up to 100, and there are just 25 primes to consider. You can implement a simple sieve to find the primes and perform the factorizations.
After you have found all exponents of the prime decomposition of the lcm, you can either leave this as the answer or perform the multiplications explicitly.
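A sketch of this prime-power approach in Python (my illustration, not the answers' own code):

def lcm_upto(n):
    # Sieve of Eratosthenes for the primes up to n
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    result = 1
    for p in range(2, n + 1):
        if sieve[p]:
            pk = p
            while pk * p <= n:    # raise p to the largest power still <= n
                pk *= p
            result *= pk          # multiply in the max prime power
    return result

print(lcm_upto(10))     # 2520
print(lcm_upto(100))    # same 41-digit value as the lcm fold above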
The Fibonacci sequence is obtained by starting with 0 and 1 and then adding the two last numbers to get the next one.
All positive integers can be represented as a sum of a set of Fibonacci numbers without repetition. For example: 13 can be the sum of the sets {13}, {5,8} or {2,3,8}. But, as we have seen, some numbers have more than one set whose sum is the number. If we add the constraint that the sets cannot have two consecutive Fibonacci numbers, then we have a unique representation for each number.
We will use a binary sequence (just zeros and ones) to encode that. For example, 17 = 1 + 3 + 13. Then, 17 = 100101. See figure 2 for a detailed explanation.
I want to turn some integers into this representation, but the integers may be very big. How do I do this efficiently?
The problem itself is simple. You always pick the largest Fibonacci number less than the remainder. You can ignore the constraint on consecutive numbers (if you ever needed two consecutive ones, the next number is the sum of both, so you should have picked that one instead of the initial two).
So the problem remains how to quickly find the largest fibonacci number less than some number X.
There's a known trick: starting with the matrix (call it M)
1 1
1 0
you can compute Fibonacci numbers by matrix multiplication (the x-th number is M^x). More details here: https://www.nayuki.io/page/fast-fibonacci-algorithms . The end result is that you can compute the number you're looking for in O(log N) matrix multiplications.
You'll need large-number arithmetic (multiplications and additions) if the values don't fit into existing types.
Also, store the matrices corresponding to the powers of two the first time you compute them, since you'll need them again for subsequent results.
Overall this should be O((log N)^2 * cost of large-number multiplications/additions).
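The matrix trick can also be written as the equivalent "fast doubling" recurrences (obtained by squaring M); a minimal Python sketch, with the large-number arithmetic handled by Python's built-in integers:

def fib_pair(n):
    # Return (F(n), F(n+1)) in O(log n) steps via fast doubling.
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)        # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)            # F(2k)   = F(k) * (2*F(k+1) - F(k))
    d = a * a + b * b              # F(2k+1) = F(k)^2 + F(k+1)^2
    return (d, c + d) if n & 1 else (c, d)

print(fib_pair(100)[0])            # 354224848179261915075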
First I want to tell you that I really liked this question. I didn't know that all positive integers can be represented as a sum of a set of Fibonacci numbers without repetition; I saw the proof by induction and it was awesome.
To answer your question, I think we have to figure out how the representation is created. I think the easy way is to repeatedly find the largest Fibonacci number not exceeding what remains of the number.
For example, if we want to represent 40:
We have Fib(9) = 34 and Fib(10) = 55, so the first element in the representation is Fib(9).
Since 40 - Fib(9) = 6, and Fib(5) = 5 and Fib(6) = 8, the next element is Fib(5). So we have 40 = Fib(9) + Fib(5) + Fib(2).
Allow me to write this in C#
class Program
{
    static void Main(string[] args)
    {
        List<int> fibPresentation = new List<int>();
        int numberToPresent = Convert.ToInt32(Console.ReadLine());
        while (numberToPresent > 0)
        {
            int k = 1;
            while (CalculateFib(k) <= numberToPresent)
            {
                k++;
            }
            numberToPresent = numberToPresent - CalculateFib(k - 1);
            fibPresentation.Add(k - 1);
        }
    }

    static int CalculateFib(int n)
    {
        if (n == 1)
            return 1;

        int a = 0;
        int b = 1;
        // In n steps compute the Fibonacci sequence iteratively.
        for (int i = 0; i < n; i++)
        {
            int temp = a;
            a = b;
            b = temp + b;
        }
        return a;
    }
}
Your result will be in fibPresentation
This encoding is more accurately called the "Zeckendorf representation": see https://en.wikipedia.org/wiki/Fibonacci_coding
A greedy approach works (see https://en.wikipedia.org/wiki/Zeckendorf%27s_theorem) and here's some Python code that converts a number to this representation. It uses the first 100 Fibonacci numbers and works correctly for all inputs up to 927372692193078999175 (and incorrectly for any larger inputs).
fibs = [0, 1]
for _ in range(100):
    fibs.append(fibs[-2] + fibs[-1])

def zeck(n):
    i = len(fibs) - 1
    r = 0
    while n:
        if fibs[i] <= n:
            r |= 1 << (i - 2)
            n -= fibs[i]
        i -= 1
    return r

print(bin(zeck(17)))
The output is:
0b100101
As the greedy approach seems to work, it suffices to be able to invert the relation N = Fn.
By Binet's formula, Fn = [φ^n/√5], where the brackets denote the nearest integer. Then with n = floor(logφ(√5·N)) you are very close to the solution.
17 => n = floor(7.5599...) => F7 = 13
4 => n = floor(4.5531) => F4 = 3
1 => n = floor(1.6722) => F1 = 1
(I do not exclude that some n values can be off by one.)
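A small Python sketch of this inversion (my illustration; double-precision floats keep the rounded Binet formula exact only up to roughly F(70), and the explicit correction loops absorb the off-by-one cases mentioned above):

import math

SQRT5 = math.sqrt(5)
PHI = (1 + SQRT5) / 2

def fib(k):
    # Rounded Binet formula; exact up to about k = 70 in double precision
    return round(PHI ** k / SQRT5)

def index_of_largest_fib_at_most(N):
    n = int(math.log(N * SQRT5) / math.log(PHI))   # initial estimate
    while fib(n + 1) <= N:                         # correct a low estimate
        n += 1
    while fib(n) > N:                              # correct a high estimate
        n -= 1
    return n

print(index_of_largest_fib_at_most(17))   # 7, since F7 = 13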
I'm not sure whether this is efficient enough for you, but you could simply use backtracking to find a (the) valid representation.
I would start the backtracking by taking the biggest possible Fibonacci number and only switch to smaller ones if the consecutive-numbers or only-once constraint is violated.
P.S. I have referred to this as random, but it is really a seed-based random shuffle: the seed will be generated by a PRNG, but the same seed always produces the same "random" ordering.
I am currently trying to find a method to assist in doing 2 things:
1) Generate a non-repeating sequence
This will take 2 arguments: Seed and N. It will generate a sequence of size N, populated with the numbers 1 to N, with no repetitions.
I have found a few good methods to do this, but most of them are ruled out by the feasibility of the second requirement.
2) Extract an entry from the sequence
This will take 3 arguments: Seed, N, and I. It determines what value would appear at position I in a sequence generated with Seed and N. However, in order to work with what I have in mind, it absolutely cannot generate the whole sequence and pick out an element.
I initially worked with pre-calculating the sequences and then querying them, but this only really works in test cases, as the number of seeds and the values of N that will be used would create a database running into petabytes.
From what I can tell, implementing requirement 1 in terms of requirement 2 would be the ideal approach.
i.e. a sequence is generated by:
function Generate_Sequence(int S, int N) {
    int[] sequence = new int[N];
    for (int i = 0; i < N; i++) {
        sequence[i] = Extract_From_Sequence(S, N, i);
    }
    return sequence;
}
For Example
GS = Generate Sequence
ES = Extract from Sequence
for:
S = 1
N = 5
I = 4
GS(S, N) = { 4, 2, 5, 1, 3 }
ES(S, N, I) = 1
let S = 2
GS(S, N) = { 3, 5, 2, 4, 1 }
ES(S, N, I) = 4
One way to do this is to apply a permutation to the bit positions of the index. Assume for now that N is a power of two (the general case is discussed below).
Use the seed S to generate a permutation σ over the set {1, 2, ..., log(N)}. Then permute the bits of I according to σ to obtain I'. In other words, the bit of I' at position σ(x) is taken from the bit of I at position x.
One problem with this method is its linearity (it is closed under the XOR operation). To overcome this, you can find a number p with gcd(p, N) = 1 (this can be done easily even for very large N) and generate a random number q < N using the seed S. The output of Extract_From_Sequence(S, N, I) would then be (p*I' + q) mod N.
Now the case where N is not a power of two. The problem arises when I' falls outside the range [1, N]. In that case, return the most significant bits of I' to their initial positions until the resulting value falls into the desired range: swap the bit at position σ(log(N)) with the bit at position log(N), and so on.
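A Python sketch of the power-of-two case (my illustration of the construction above; it works on the range [0, N) rather than [1, N], and the helper names are mine):

import random

def extract_from_sequence(seed, n, i):
    # Bit-permutation construction: maps each index i in [0, n) to a
    # unique value in [0, n), assuming n is a power of two.
    assert n & (n - 1) == 0, "this sketch assumes n is a power of two"
    k = n.bit_length() - 1                # number of index bits
    rng = random.Random(seed)             # deterministic given the seed
    sigma = list(range(k))
    rng.shuffle(sigma)                    # the bit-position permutation
    p = rng.randrange(1, n, 2)            # odd, hence gcd(p, n) = 1
    q = rng.randrange(n)
    permuted = 0
    for pos in range(k):                  # move the bit at pos to sigma[pos]
        if (i >> pos) & 1:
            permuted |= 1 << sigma[pos]
    return (p * permuted + q) % n

# Generating the sequence is then just extraction applied to every index:
seq = [extract_from_sequence(1, 8, i) for i in range(8)]
assert sorted(seq) == list(range(8))      # a permutation of 0..7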
I am going through an algorithms and data structures textbook and came across this question:
1-28. Write a function to perform integer division without using
either the / or * operators. Find a fast way to do it.
How can we come up with a fast way to do it?
I like this solution: https://stackoverflow.com/a/34506599/1008519, but I find it somewhat hard to reason about (especially the |-part). This solution makes a little more sense in my head:
var divide = function (dividend, divisor) {
    // Handle 0 divisor
    if (divisor === 0) {
        return NaN;
    }

    // Handle negative numbers
    var isNegative = false;
    if (dividend < 0) {
        // Change sign
        dividend = ~dividend + 1;
        isNegative = !isNegative;
    }
    if (divisor < 0) {
        // Change sign
        divisor = ~divisor + 1;
        isNegative = !isNegative;
    }

    /**
     * Main algorithm
     */
    var result = 1;
    var denominator = divisor;
    // Double the denominator value with bitwise shifts until bigger than the dividend
    while (dividend > denominator) {
        denominator <<= 1;
        result <<= 1;
    }
    // Subtract the divisor value until the denominator is no bigger than the dividend
    while (denominator > dividend) {
        denominator -= divisor;
        result -= 1;
    }

    // If exactly one of dividend and divisor was negative, change the sign of the result
    if (isNegative) {
        result = ~result + 1;
    }
    return result;
}
We initialize our result to 1 (since we are going to double our denominator until it is bigger than the dividend)
Double our denominator (with bitwise shifts) until it is bigger than the dividend
Since we know our denominator is bigger than our dividend, we can subtract the divisor from it until it is no longer bigger than the dividend
Return the result, since the denominator is now as close to the dividend as possible using the divisor
Here are some test runs:
console.log(divide(-16, 3)); // -5
console.log(divide(16, 3)); // 5
console.log(divide(16, 33)); // 0
console.log(divide(16, 0)); // NaN
console.log(divide(384, 15)); // 25
Here is a gist of the solution: https://gist.github.com/mlunoe/e34f14cff4d5c57dd90a5626266c4130
Typically, when an algorithms textbook says fast, it means in terms of computational complexity: the number of operations per bit of input. In general, textbooks don't care about constants, so whether an algorithm takes two operations per bit or a hundred operations per bit on an input of n bits, we say it takes O(n) time. To see why, compare an O(n^2) (polynomial, in this case quadratic) algorithm with an O(n) algorithm that does 100 operations per bit: imagine the two lines y = 100x and y = x^2. Your teacher probably made you do an exercise in algebra (or was it calculus? it's a key concept in divergence/convergence) asking which one is bigger as x approaches infinity. With a little algebra, you can see the graphs intersect at x = 100, and y = x^2 is larger for all x greater than 100; past that point, the polynomial algorithm starts to lose badly.
As far as most textbooks are concerned, O(nlgn) or better is considered "fast". One example of a really bad algorithm to solve this problem would be the following:
crappyMultiplicationAlg(int a, int b)
    int product = 0
    while (b > 0)
        product = product + a
        b = b - 1
    return product
This algorithm basically uses b as a counter and just keeps adding a to a running total each time b counts down. To calculate how "fast" the algorithm is (in terms of algorithmic complexity), we count how many times its different components run. In this case, we only have a loop and some initialization (which is negligible here; ignore it). How many times does the loop run? You may be saying, "Hey, it only runs b times! That may not even be half the input. That's way better than O(n) time!"
The trick here is that we are concerned with the size of the input in terms of storage... and we all (should) know that to store a number n, we need lg(n) bits. In other words, if we have x bits, we can store any (unsigned) number up to (2^x)-1. As a result, if we are using a standard 4-byte integer, the value could be up to 2^32 - 1, which is a number well into the billions, if my memory serves me right. If you don't trust me, run this algorithm with a number like 10,000,000 and see how long it takes. Still not convinced? Use a long with a number like 1,000,000,000.
Since you didn't ask for help with the algorithm, I'll leave it for you as a homework exercise (not trying to be a jerk, I am a total geek and love algorithm problems). If you need help with it, feel free to ask! I already typed up some hints by accident since I didn't read your question properly at first.
EDIT: I accidentally wrote a crappy multiplication algorithm above. An example of a really terrible division algorithm (I cheated) would be:
AbsolutelyTerribleDivisionAlg(int a, int b)
    int quotient = 0
    while crappyMultiplicationAlg(b, quotient) < a
        quotient = quotient + 1
    return quotient
This algorithm is bad for a whole bunch of reasons, not the least of which is the use of my crappy multiplication algorithm (which will be called more than once even on a relatively "tame" run). Even if we were allowed to use the * operator though, this is still a really bad algorithm, largely due to the same mechanism used in my awful mult alg.
PS: There may be a fence-post error or two in my two algorithms... I posted them more for conceptual clarity than correctness. No matter how accurate they are at doing multiplication or division, though, never use them. They will give your laptop herpes and then cause it to burn up in a sulfur-y implosion of sadness.
I don't know what you mean by fast...and this seems like a basic question to test your thought process.
A simple function can use a counter and keep subtracting the divisor from the dividend until the dividend drops below the divisor. This is an O(n) process.
int divide(int n, int d){
    int c = 0;
    while (1) {
        n -= d;
        if (n >= 0)
            c++;
        else
            break;
    }
    return c;
}
Another way can be using shift operator, which should do it in log(n) steps.
int divide(int n, int d){
    if (d <= 0)
        return -1;
    int k = d;               // remember the original divisor
    int c = 0, index = 1;
    while (n > d) {          // scale the divisor up by powers of two
        d <<= 1;
        index <<= 1;
    }
    while (1) {
        if (k > n)           // remainder is below the original divisor: done
            return c;
        if (n >= d) {        // this power-of-two multiple still fits
            c |= index;      // set the corresponding quotient bit
            n -= d;
        }
        index >>= 1;
        d >>= 1;
    }
}
This is just like the long division we do in high-school mathematics.
PS: If you need a better explanation, I will provide one. Just ask in the comments.
EDIT: edited the code wrt Erobrere's comment.
The simplest way to perform a division is by successive subtraction: subtract b from a as long as what remains is at least b. The quotient is the number of subtractions performed.
This can be pretty slow, as you will perform q subtractions and tests.
With a=28 and b=3,
28-3-3-3-3-3-3-3-3-3=1
the quotient is 9 and the remainder 1.
The next idea that comes to mind is to subtract several copies of b in a single go. We can try 2b or 4b or 8b..., as these numbers are easy to compute with additions. We can go as far as possible, as long as the multiple of b does not exceed a.
In the example, 2³·3 is the largest multiple which is possible:
28 ≥ 2³·3
So we subtract 8 times 3 in a single go, getting
28 - 2³·3 = 4
Now we continue to reduce the remainder with the lower multiples, 2², 2 and 1, when possible:
4 - 2²·3 < 0
4 - 2·3 < 0
4 - 1·3 = 1
Then our quotient is 2³ + 1 = 9 and the remainder is 1.
As you can check, every multiple of b is tried once only, and the total number of attempts equals the number of doublings required to reach a. This number is just the number of bits required to write q, which is much smaller than q itself.
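A compact Python sketch of this doubling scheme (my transcription of the procedure above, not the answer's own code):

def divide(a, b):
    # Repeatedly subtract the largest power-of-two multiple of b that fits.
    q = 0
    while a >= b:
        multiple, power = b, 1
        while multiple + multiple <= a:   # grow b, 2b, 4b, ... while it fits
            multiple += multiple
            power += power
        a -= multiple                     # subtract that multiple in one go
        q += power
    return q, a                           # quotient and remainder

print(divide(28, 3))    # (9, 1), matching the worked example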
This is not the fastest solution, but I think it's readable enough and works:
def weird_div(dividend, divisor):
    if divisor == 0:
        return None

    dend = abs(dividend)
    dsor = abs(divisor)
    result = 0

    # This is the core algorithm, the rest is just for ensuring it works with negatives and 0
    while dend >= dsor:
        dend -= dsor
        result += 1

    # Let's handle negative numbers too
    if (dividend < 0 and divisor > 0) or (dividend > 0 and divisor < 0):
        return -result
    else:
        return result

# Let's test it:
print("49 divided by 7 is {}".format(weird_div(49, 7)))
print("100 divided by 7 is {} (Discards the remainder) ".format(weird_div(100, 7)))
print("-49 divided by 7 is {}".format(weird_div(-49, 7)))
print("49 divided by -7 is {}".format(weird_div(49, -7)))
print("-49 divided by -7 is {}".format(weird_div(-49, -7)))
print("0 divided by 7 is {}".format(weird_div(0, 7)))
print("49 divided by 0 is {}".format(weird_div(49, 0)))
It prints the following results:
49 divided by 7 is 7
100 divided by 7 is 14 (Discards the remainder)
-49 divided by 7 is -7
49 divided by -7 is -7
-49 divided by -7 is 7
0 divided by 7 is 0
49 divided by 0 is None
unsigned bitdiv (unsigned a, unsigned d)
{
    unsigned res, c;
    // Scale the divisor up by powers of two until it exceeds a
    for (c = d; c <= a; c <<= 1) {;}
    // Walk back down, taking each power-of-two multiple that still fits
    for (res = 0; (c >>= 1) >= d; ) {
        res <<= 1;
        if (a >= c) { res++; a -= c; }
    }
    return res;
}
The pseudo code:
count = 0
while (dividend >= divisor)
    dividend -= divisor
    count++
// count is your answer
How can I write two functions for generating random numbers that support next and previous?
I mean: how can I write two functions, next_number() and previous_number(), such that next_number() generates a new random number and previous_number() returns the previously generated random number?
for example:
int next_number()
{
// ...?
}
int previous_number()
{
// ...?
}
int num;
// Forward random number generating.
// ---> 54, 86, 32, 46, 17
num = next_number(); // num = 54
num = next_number(); // num = 86
num = next_number(); // num = 32
num = next_number(); // num = 46
num = next_number(); // num = 17
// Backward random number generating.
// <--- 17, 46, 32, 86, 54
num = previous_number(); // num = 46
num = previous_number(); // num = 32
num = previous_number(); // num = 86
num = previous_number(); // num = 54
You can trivially do this with a Pseudo-Random Function (PRF).
Such functions take a key and a value, and output a pseudo-random number based on them. You'd select a key from /dev/random that remains the same for the run of the program, and then feed the function an integer that you increment to go forward or decrement to go back.
Here's an example in pseudo-code:
initialize():
    Key = sufficiently many bytes from /dev/random
    N = 0

next_number():
    N = N + 1
    return my_prf(Key, N)

previous_number():
    N = N - 1
    return my_prf(Key, N)
Strong, Pseudo-Random Functions are found in most cryptography libraries. As rici points out, you can also use any encryption function (encryption functions are pseudo-random permutations, a subset of PRFs, and the period is so huge that the difference doesn't matter).
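A runnable Python sketch of this construction (mine, not the answer's; it uses HMAC-SHA256 as the PRF and os.urandom in place of reading /dev/random directly):

import hmac, hashlib, os

class ReversibleRandom:
    def __init__(self):
        self.key = os.urandom(32)    # fixed for the run of the program
        self.n = 0                   # the counter that moves both ways

    def _prf(self, n):
        # Keyed PRF: HMAC-SHA256 of the counter, truncated to 32 bits
        msg = n.to_bytes(8, "big", signed=True)
        digest = hmac.new(self.key, msg, hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big")

    def next_number(self):
        self.n += 1
        return self._prf(self.n)

    def previous_number(self):
        self.n -= 1
        return self._prf(self.n)

rr = ReversibleRandom()
a, b = rr.next_number(), rr.next_number()
assert rr.previous_number() == a     # stepping back reproduces the sequence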
Some linear congruential generators (a common but not very good PRNG) are reversible.
They work by next = (a * previous + c) mod m. That's reversible if a has a modular multiplicative inverse mod m. That's often the case, because m is often a power of two and a is usually odd.
For example for the "MSVC" parameters from the table from the first link:
m = 2^32
a = 214013
c = 2531011
The reverse is:
previous = (current - 2531011) * 0xb9b33155;
With types chosen to make it work modulo 2^32.
Suppose you have a linear congruential sequence S defined by
S[0] = seed
S[i] = (p * S[i-1] + k) % m
for some p, m, k such that gcd(p, m) == 1. Then you can find q such that (p * q) % m == 1 and compute:
S[i-1] = (q * (S[i] - k)) % m
In other words: if you pick suitable p and precompute q, you can traverse your sequence in either order in O(1) time.
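A quick Python sketch of this reversal, using the "MSVC" parameters quoted above (pow(p, -1, m) needs Python 3.8+):

m = 2 ** 32
p, k = 214013, 2531011
q = pow(p, -1, m)        # modular inverse of p; equals 0xb9b33155 here

def next_state(s):
    return (p * s + k) % m

def prev_state(s):
    return (q * (s - k)) % m

s = 12345
assert prev_state(next_state(s)) == s    # O(1) in either direction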
A reasonably simple way of generating an indexable pseudo-random sequence -- that is, a sequence which looks random, but can be traversed in either direction -- is to choose some (reasonably good) encryption algorithm and a fixed encryption key, and then define:
sequence(i): encrypt(i, known_key)
You don't need to know the value of i, because you can decrypt it from the number:
next(r): encrypt(decrypt(r, known_key) + 1, known_key)
prev(r): encrypt(decrypt(r, known_key) - 1, known_key)
Consequently, i does not have to be a small integer; since the only arithmetic you need to do on it is addition and subtraction of a small integer, a bignum implementation is trivial. So if you wanted 128-bit pseudorandom numbers, you could set the first i to be a 128-bit random number extracted from /dev/random.
You have to keep the entire value of i in static storage, and the period of the pseudorandom numbers cannot be greater than the range of i. That will be true of any solution to this problem, though: since the next() and prev() operators are required to be functions, every value has a unique successor and predecessor, and thus can only appear once in the cycle of values. That's quite different from the Mersenne twister, for example, whose cycle is much larger than 2^32.
I think what you are asking for is a random number generator that is deterministic. This does not make sense, because if it is deterministic, it's not random. The only solution is to generate a list of random numbers and then step back and forward in this list.
PS! I know that essentially all software PRNGs are deterministic. You can of course use this to create the functionality you need, but don't fool yourself: it has nothing to do with randomness. If your software design requires a deterministic PRNG, then you could probably skip the PRNG part entirely.