Find the total number of distinct non-decreasing arrays possible - algorithm

Given the exact number of elements the array must contain (call it r) and the maximum value of the last element of the array (call it n), find the total number of distinct non-decreasing arrays possible (all elements of the array must be >= 0).
Example: if r=3 and n=2, then some possible non-decreasing arrays are {0,0,2}, {0,0,1}, {0,0,0}, {1,2,2}, etc.
I need the number of such arrays.
I tried to solve it using recursion and memoization, but it is too slow.
Here is my code (ll means long long):
ll solve(ll i, ll curlevel)
{
    // bounds checks must come before the memo lookup (i may be negative)
    if (i < 0 || curlevel > r)
        return 0;
    if (dp[i][curlevel] != -1)
        return dp[i][curlevel];
    if (curlevel == r)
        return dp[i][curlevel] = 1;
    ll ans = 0;
    for (ll k = i; k >= 0; k--)
        ans += solve(k, curlevel + 1);
    return dp[i][curlevel] = ans;
}
I call this function as follows.
for (ll i = n; i >= 0; i--)
{
    res += solve(i, 1);
}
I am looking for a faster way to do this.

Let's take some non-decreasing sequence which qualifies, and encode it using 0s and 1s. The decoding algorithm is simple:
Set the_value to 0
For each element in the coded sequence:
If the element is 0, output the_value.
If the element is 1, add 1 to the_value.
Now, I claim that any non-decreasing sequence can be encoded with a sequence of exactly r 0s (because we need to output exactly r values) and n 1s (because we cannot exceed the value n), and every such coded sequence corresponds to a unique non-decreasing sequence. (The encoding algorithm and the proof of bijection are left as exercises.)
So the number of uncoded sequences is the same as the number of coded sequences. But the number of coded sequences is simply the number of ways of choosing r positions to insert a 0 from the n+r positions in the coded sequence. Hence the number of possibilities is n+r choose r, or (n+r)!/(n!*r!).
These numbers grow rapidly, and you will need bignum arithmetic to compute them for even moderately sized r and n. For example, if n and r are both 2000, then the count of sequences is a number with 1203 digits, approximately 1.66 * 10^1202.
Obviously, it is futile to attempt to enumerate a set of sequences of this size. For small values of r and n, the sequences can be enumerated in amortized time O(1) per sequence, using the standard lexicographical enumeration algorithm, which takes a sequence and produces the next sequence in lexicographical order:
Find the rightmost element of the sequence which can be made larger. (In this case, find the rightmost element of the sequence which is not equal to n.) If there is no such element, all sequences have been enumerated.
Advance the element which has been found. (In this case, add 1 to the element.)
Set all subsequent elements (if any) to their smallest possible values. (In this case, set all subsequent elements to the new value of the element found in step 1.)
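For small n and r, the closed form can be sanity-checked against brute-force enumeration; a quick Python sketch (the function names are mine):

```python
from math import comb
from itertools import combinations_with_replacement

def count_nondecreasing(n, r):
    # Closed form derived above: (n + r) choose r
    return comb(n + r, r)

def brute_force(n, r):
    # Enumerate all non-decreasing length-r sequences over {0, ..., n}
    return sum(1 for _ in combinations_with_replacement(range(n + 1), r))

print(count_nondecreasing(2, 3), brute_force(2, 3))  # 10 10
```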

Note for the added part: it boils down to combinations with repetition, taken in non-descending order. For n we have the symbols 0..n (i.e. n+1 symbols) and r is the length. With this in mind, and this answer on math.stackexchange, we get a simple formula: C((n+1)+r-1, r) = C(n+r, r).


How to Partition an array into N smallest sum subsets?

How can I partition an integer array into N contiguous subsets such that the largest subset sum is as small as possible?
For example, the array consists of 11 elements and I need 6 subsets out of it:
{2,1,1,3,4,4,3,2,1,2,3}
The subsets: {2,1,1,3}, {4}, {4,3}, {3,2}, {1,2}, {3}. Minimum sum = 7.
An alternative answer: {2,1,1}, {3,4}, {4}, {3,2}, {1,2}, {3}. Minimum sum = 7.
Note: the order in which the numbers appear in the original set must be maintained while partitioning.
One possible approach is to binary search for the answer.
We need a procedure that would check whether we can partition the set using only sums equal or below a parameter, S. Let's call this procedure onlySumsBelow(S).
We can use a greedy solution to implement onlySumsBelow(S). Always add as many elements as possible in each subset, and stop just before reaching a sum larger than S (I am assuming here that we don't have negative elements, which may complicate the discussion). If we cannot reach the end of the sequence using the allowed number of subsets, then the sum is not valid (it is too small).
function onlySumsBelow(S) {
    partitionsUsed = 1;
    currentSum = 0;
    for each value in sequence {
        if (value > S) return false;
        if (currentSum + value > S) {
            // start a new partition
            currentSum = value;
            partitionsUsed++;
        } else {
            currentSum += value;
        }
    }
    return partitionsUsed <= N;
}
Once we have the onlySumsBelow(S) procedure, we can binary search for the answer, starting with an interval that at the left end has a value that ensures that the searched answer is not below (e.g. 0) and at the right end has a large enough number that ensures that the searched answer is not above (e.g. the sum of all numbers in the sequence).
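A compact Python version of the binary search plus the greedy check, under the same non-negative-elements assumption (the function name is mine):

```python
def min_max_subset_sum(a, n_subsets):
    """Smallest S such that a splits into at most n_subsets contiguous
    groups, each with sum <= S. Assumes non-negative values."""
    def only_sums_below(s):
        used, cur = 1, 0
        for v in a:
            if v > s:
                return False
            if cur + v > s:
                used += 1  # start a new partition
                cur = v
            else:
                cur += v
        return used <= n_subsets

    lo, hi = max(a), sum(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if only_sums_below(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_max_subset_sum([7, 2, 5, 10, 8], 2))  # 18
```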
If efficiency is not a concern, instead of binary searching you can simply try candidate answers one by one, starting from a small enough value (e.g. the sum of all numbers divided by N) and increasing by one until reaching a feasible solution.
Remark: without the note in the end of the question (that restricts us to taking into account subsets of numbers that appear at neighboring positions in the input) the problem is NP-complete, since it is a generalization of the Partition problem, which only uses two sets.

Given a permutation's lexicographic number, is it possible to get any item in it in O(1)

I want to know whether the task explained below is even theoretically possible, and if so how I could do it.
You are given a space of N elements (i.e. all numbers between 0 and N-1.) Let's look at the space of all permutations on that space, and call it S. The ith member of S, which can be marked S[i], is the permutation with the lexicographic number i.
For example, if N is 3, then S is this list of permutations:
S[0]: 0, 1, 2
S[1]: 0, 2, 1
S[2]: 1, 0, 2
S[3]: 1, 2, 0
S[4]: 2, 0, 1
S[5]: 2, 1, 0
(Of course, when looking at a big N, this space becomes very large, N! to be exact.)
Now, I already know how to get the permutation by its index number i, and I already know how to do the reverse (get the lexicographic number of a given permutation.) But I want something better.
Some permutations can be huge by themselves. For example, if you're looking at N=10^20. (The size of S would be (10^20)! which I believe is the biggest number I ever mentioned in a Stack Overflow question :)
If you're looking at just a random permutation on that space, it would be so big that you wouldn't be able to store the whole thing on your harddrive, let alone calculate each one of the items by lexicographic number. What I want is to be able to do item access on that permutation, and also get the index of each item. That is, given N and i to specify a permutation, have one function that takes an index number and find the number that resides in that index, and another function that takes a number and finds in which index it resides. I want to do that in O(1), so I don't need to store or iterate over each member in the permutation.
Crazy, you say? Impossible? That may be. But consider this: A block cipher, like AES, is essentially a permutation, and it almost accomplishes the tasks I outlined above. AES has a block size of 16 bytes, meaning that N is 256^16 which is around 10^38. (The size of S, not that it matters, is a staggering (256^16)!, or around 10^85070591730234615865843651857942052838, which beats my recent record for "biggest number mentioned on Stack Overflow" :)
Each AES encryption key specifies a single permutation on N=256^16. That permutation couldn't be stored whole on your computer, because it has more members than there are atoms in the solar system. But it allows you item access. By encrypting data using AES, you're looking at the data block by block, and for each block (a member of range(N)) you output the encrypted block, which is the member of range(N) that sits at the original block's index in the permutation. And when you're decrypting, you're doing the reverse (finding the index number of a block). I believe this is done in O(1); I'm not sure, but in any case it's very fast.
The problem with using AES or any other block cipher is that it limits you to very specific N, and it probably only captures a tiny fraction of the possible permutations, while I want to be able to use any N I like, and do item access on any permutation S[i] that I like.
Is it possible to get O(1) item access on a permutation, given size N and permutation number i? If so, how?
(If I'm lucky enough to get code answers here, I'd appreciate if they'll be in Python.)
UPDATE:
Some people pointed out the sad fact that the permutation number itself would be so huge, that just reading the number would make the task non-feasible. Then, I'd like to revise my question: Given access to the factoradic representation of a permutation's lexicographic number, is it possible to get any item in the permutation in O(as small as possible)?
The secret to doing this is to "count in base factorial".
In the same way that 134 = 1*10^2 + 3*10 + 4, we also have 134 = 1*5! + 0*4! + 2*3! + 1*2! + 0*1! => 10210 in factorial notation (include 1!, exclude 0!). To represent every number up to N!, you need N-1 factoradic digits, where the digit in the k! place can be as large as k. Up to a bit of confusion about what you call 0, this factorial representation is exactly the lexicographic number of a permutation.
You can use this insight to solve Euler Problem 24 by hand. So I will do that here, and you will see how to solve your problem. We want the millionth permutation of 0-9. In factorial representation, 1000000 => 2662512200. Now to convert that to the permutation: I start with the digits 0,1,2,3,4,5,6,7,8,9. The first factoradic digit is 2, so I select the third element (counting from 0), which is 2. That leaves the list 0,1,3,4,5,6,7,8,9; the next digit is 6, so I take the seventh element, which is 7, and so on, giving 2783915604.
However, this assumes that you start your lexicographic ordering at 0, if you actually start it at one, you have to subtract 1 from it, which gives 2783915460. Which is indeed the millionth permutation of the numbers 0-9.
You can obviously reverse this procedure, and hence convert back and forth easily between the lexicographic number and the permutation that it represents.
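A Python sketch of the decode direction (index to permutation); the factoradic digits are computed on the fly with divmod, and the function name is mine:

```python
import math

def nth_permutation(items, index):
    """Return the permutation of items with (0-based) lexicographic number index."""
    items = list(items)
    result = []
    for pos in range(len(items) - 1, -1, -1):
        digit, index = divmod(index, math.factorial(pos))
        result.append(items.pop(digit))  # pick and remove the chosen element
    return result

# Euler Problem 24: the millionth permutation of 0-9 is at 0-based index 999999.
print(''.join(map(str, nth_permutation(range(10), 999999))))  # 2783915460
```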
I am not entirely clear what it is that you want to do here, but understanding the above procedure should help. For example, it's clear that the lexicographic number represents an ordering, which could be used as the key in a hash table. And you can order such numbers by comparing digits left to right, so once you have inserted a number you never have to work out its factorial representation again.
Your question is a bit moot, because your input size for an arbitrary permutation index has size log(N!) (assuming you want to represent all possible permutations) which is Theta(N log N), so if N is really large then just reading the input of the permutation index would take too long, certainly much longer than O(1). It may be possible to store the permutation index in such a way that if you already had it stored, then you could access elements in O(1) time. But probably any such method would be equivalent to just storing the permutation in contiguous memory (which also has Theta(N log N) size), and if you store the permutation directly in memory then the question becomes trivial assuming you can do O(1) memory access. (However you still need to account for the size of the bit encoding of the element, which is O(log N)).
In the spirit of your encryption analogy, perhaps you should specify a small SUBSET of permutations according to some property, and ask if O(1) or O(log N) element access is possible for that small subset.
Edit:
I misunderstood the question, but it was not in vain. Working through it helped me understand: the factoradic representation of a permutation's lexicographic number is almost the same as the permutation itself. In fact, the first digit of the factoradic representation is the same as the first element of the corresponding permutation (assuming your space consists of the numbers from 0 to N-1). Knowing this, there is not really a point in storing the index rather than the permutation itself. To see how to convert the lexicographic number into a permutation, read below.
See also this wikipedia link about Lehmer code.
Original post:
In the space S there are N elements that can fill the first slot, and (N-1)! permutations begin with each of them. So floor(i/(N-1)!) is the first element (let's call it a). The subset of S that starts with a consists of (N-1)! elements: the possible permutations of the set N \ {a}. Now you can get the second element: it is at position floor((i % (N-1)!) / (N-2)!) among the remaining elements. Repeat the process and you have the permutation.
The reverse is just as simple. Start with i=0. Take the second-to-last element of the permutation. Make a set of the last two elements, and find the element's position in it (it's either the 0th element or the 1st); call this position j. Then i += j*1!. Repeat with the third-to-last element (weight 2!), and so on. (You can start with the last element too, but it will always be the 0th of its possibilities.)
Java-ish pseudocode:
find_by_index(List N, int i){
    String str = "";
    for(int l = N.length-1; l >= 0; l--){
        int pos = i / fact(l);
        str += N.get(pos);
        N.remove(pos);
        i %= fact(l);
    }
    return str;
}
find_index(String str){
    OrderedList N = new OrderedList();
    int i = 0;
    for(int l = str.length-1; l >= 0; l--){
        String item = str.charAt(l);
        int pos = N.add(item);  // add returns the insertion position
        i += pos * fact(str.length-1-l);
    }
    return i;
}
find_by_index should run in O(n) assuming that N is pre-ordered, while find_index is O(n*log(n)) (where n is the size of the N space).
After some research on Wikipedia, I designed this algorithm:
def getPick(fact_num_list):
    """fact_num_list should be a list with the factorial number representation;
    getPick will return a tuple"""
    result = []  # Desired pick
    # This will hold all the numbers still pickable; not actually a set, but a
    # list instead
    inputset = list(range(len(fact_num_list)))
    for fnl in fact_num_list:
        result.append(inputset[fnl])
        del inputset[fnl]  # Make sure we can't pick the number again
    return tuple(result)
Obviously, this won't reach O(1), because we have to "pick" every number. Since we do a single for loop, and assuming all operations inside are O(1), getPick runs in O(n).
If we need to convert from base 10 to factorial base, this is an aux function:
import math

def base10_baseFactorial(number):
    """Converts a base-10 number into a factorial-base number. Output is a list
    for better handling of digits over 36 (after using all 0-9 and A-Z)"""
    loop = 1
    # Find the largest loop such that loop! <= number
    while math.factorial(loop) <= number:
        loop += 1
    loop -= 1  # the while loop overshoots by one
    result = []
    while loop > 0:
        denominator = math.factorial(loop)
        digit, number = divmod(number, denominator)
        result.append(digit)
        loop -= 1
    result.append(0)  # the 0! digit is always 0
    return result
Again, this runs in O(n) because of the while loops.
Summing up, the best time we can achieve is O(n).
PS: I'm not a native English speaker, so spelling and phrasing errors may appear. Apologies in advance, and let me know if you can't get around something.
All correct algorithms for accessing the kth item of a permutation stored in factoradic form must read the first k digits. This is because, regardless of the values of the other digits among the first k, it makes a difference whether an unread digit is a 0 or takes on its maximum value. That this is the case can be seen by tracing the canonical correct decoding program in two parallel executions.
For example, if we want to decode the third digit of the permutation coded 1?0: for 100, that digit decodes to 2, and for 110, it decodes to 0.

Check if array B is a permutation of A

I tried to find a solution to this but couldn't get much out of my head.
We are given two unsorted integer arrays A and B. We have to check whether array B is a permutation of A. How can this be done? Even XORing the numbers won't work, as there are counterexamples that have the same XOR value but are not permutations of each other.
A solution needs to be O(n) time and O(1) space.
Any help is welcome!!
Thanks.
The question is theoretical, but you can do it in O(n) time and O(1) space. Allocate an array of 2^32 counters and set them all to zero. This is an O(1) step because the array has constant size. Then iterate through the two arrays. For array A, increment the counters corresponding to the integers read. For array B, decrement them. If you run into a negative counter value during iteration of array B, stop --- the arrays are not permutations of each other. Otherwise at the end (assuming A and B have the same size, a prerequisite) the counter array is all zero and the two arrays are permutations of each other.
This is an O(1) space, O(n) time solution. However, it is not practical, but it would easily pass as a solution to the interview question. At least it should.
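A practical variant of the same counting idea, trading the fixed 2^32-entry counter array for a dictionary of counters (O(n) extra space rather than O(1), but the early-exit logic is identical):

```python
from collections import Counter

def is_permutation(a, b):
    # Count occurrences in A, then decrement while scanning B;
    # a negative counter means B has an element A lacks.
    if len(a) != len(b):
        return False
    counts = Counter(a)
    for x in b:
        counts[x] -= 1
        if counts[x] < 0:
            return False
    return True

print(is_permutation([3, 1, 2, 3], [2, 3, 3, 1]))  # True
print(is_permutation([1, 2, 2], [1, 1, 2]))        # False
```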
More obscure solutions
Using a nondeterministic model of computation, checking that the two arrays are not permutations of each other can be done in O(1) space and O(n) time by guessing an element that has a differing count in the two arrays, and then counting the instances of that element in both of them.
In a randomized model of computation, construct a random commutative hash function and calculate the hash values for the two arrays. If the hash values differ, the arrays are not permutations of each other. Otherwise they might be. Repeat many times to bring the probability of error below the desired threshold. This is also an O(1) space, O(n) time approach, but randomized.
In a parallel computation model, let n be the size of the input array. Allocate n threads. Every thread i = 1 .. n reads the ith number from the first array; let that be x. Then the same thread counts the number of occurrences of x in the first array, and checks for the same count in the second array. Every single thread uses O(1) space and O(n) time.
Interpret an integer array [ a1, ..., an ] as the polynomial x^a1 + x^a2 + ... + x^an, where x is a free variable, and check numerically for the equivalence of the two polynomials obtained. Using floating-point arithmetic gives an O(1) space, O(n) time operation, but it is not an exact method, because of rounding errors and because the numerical check for equivalence is probabilistic. Alternatively, interpret the polynomials over the integers modulo a prime number, and perform the same probabilistic check.
If we are allowed to freely access a large list of primes, you can solve this problem by leveraging properties of prime factorization.
For both arrays, calculate the product of Prime[i] for each integer i in the array, where Prime[i] is the ith prime number. The products are equal iff the arrays are permutations of one another.
Prime factorization helps here for two reasons.
Multiplication is commutative and associative, so the ordering of the operands when calculating the product is irrelevant. (Some alluded to the fact that if the arrays were sorted, this problem would be trivial. By multiplying, we are implicitly sorting.)
Prime numbers multiply losslessly. If we are given a number and told it is the product of only prime numbers, we can calculate exactly which prime numbers were fed into it and exactly how many.
Example:
a = 1,1,3,4
b = 4,1,3,1
Product of ith primes in a = 2 * 2 * 5 * 7 = 140
Product of ith primes in b = 7 * 2 * 5 * 2 = 140
That said, we probably aren't allowed access to a list of primes, but this seems a good solution otherwise, so I thought I'd post it.
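If generating the primes ourselves is acceptable, the idea can be sketched as follows (the function names are mine; positive integer elements are assumed):

```python
import math

def first_primes(k):
    # Simple trial division; adequate for small element values.
    primes = []
    candidate = 2
    while len(primes) < k:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def same_multiset_by_primes(a, b):
    # Map value i to the i-th prime; the products are equal iff the
    # arrays are permutations of each other (Python ints don't overflow).
    if len(a) != len(b):
        return False
    primes = first_primes(max(max(a), max(b)))
    product = lambda xs: math.prod(primes[x - 1] for x in xs)
    return product(a) == product(b)

print(same_multiset_by_primes([1, 1, 3, 4], [4, 1, 3, 1]))  # True
```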
I apologize for posting this as an answer as it should really be a comment on antti.huima's answer, but I don't have the reputation yet to comment.
The size of each counter seems to be O(log(n)) bits, as it depends on the number of instances of a given value in the input array.
For example, let the input array A be all 1's with a length of (2^32) + 1. This will require a counter of size 33 bits to encode (which, in practice, would double the size of the array, but let's stay with theory). Double the size of A (still all 1 values) and you need 65 bits for each counter, and so on.
This is a very nit-picky argument, but these interview questions tend to be very nit-picky.
If we need not sort this in-place, then the following approach might work:
Create a HashMap with the array element as key and its number of occurrences as value (to handle multiple occurrences of the same number).
Traverse array A.
Insert the array elements in the HashMap.
Next, traverse array B.
Search for each element of B in the HashMap: if it is absent, fail; if the corresponding value is 1, delete the entry; else decrement the value by 1.
If we are able to process the entire array B and the HashMap is empty at the end: success. Else: failure.
The HashMap uses space proportional to the number of distinct elements, and you traverse each array only once.
Not sure if this is what you are looking for. Let me know if I have missed any constraint about space/time.
You're given two constraints: computational O(n), where n means the total length of both A and B, and memory O(1).
If two series A and B are permutations of each other, then there's also a series C resulting from permuting either A or B. So the problem is permuting both A and B into series C_A and C_B and comparing them.
One such permutation is sorting. There are several sorting algorithms which work in place, so you can sort A and B in place. In the best case, smoothsort sorts with O(n) computational and O(1) memory complexity; in the worst case with O(n log n) / O(1).
The per-element comparison then happens in O(n), but since in O notation O(2*n) = O(n), using smoothsort plus comparison gives you an O(n) / O(1) check of whether two series are permutations of each other. In the worst case, however, it is O(n log n) / O(1).
The solution needs to be O(n) time and with space O(1).
This leaves out sorting and the space O(1) requirement is a hint that you probably should make a hash of the strings and compare them.
If you have access to a prime number list do as cheeken's solution.
Note: If the interviewer says you don't have access to a prime number list. Then generate the prime numbers and store them. This is O(1) because the Alphabet length is a constant.
Else here's my alternative idea. I will define the Alphabet as = {a,b,c,d,e} for simplicity.
The values for the letters are defined as:
a, b, c, d, e
1, 2, 4, 8, 16
note: if the interviewer says this is not allowed, then make a lookup table for the Alphabet, this takes O(1) space because the size of the Alphabet is a constant
Define a function which can find the distinct letters in a string.
// set bit value of char c in variable i and return result
distinct(char c, int i) : int
E.g. distinct('a', 0) returns 1
E.g. distinct('a', 1) returns 1
E.g. distinct('b', 1) returns 3
Thus if you iterate the string "aab" the distinct function should give 3 as the result
Define a function which can calculate the sum of the letters in a string.
// return sum of c and i
sum(char c, int i) : int
E.g. sum('a', 0) returns 1
E.g. sum('a', 1) returns 2
E.g. sum('b', 2) returns 4
Thus if you iterate the string "aab" the sum function should give 4 as the result
Define a function which can calculate the length of the letters in a string.
// return length of string s
length(string s) : int
E.g. length("aab") returns 3
Running the methods on two strings and comparing the results takes O(n) running time. Storing the hash values takes O(1) in space.
e.g.
distinct of "aab" => 3
distinct of "aba" => 3
sum of "aab" => 4
sum of "aba" => 4
length of "aab" => 3
length of "aba" => 3
Since all the values are equal for both strings, they must be a permutation of each other.
EDIT: The solutions is not correct with the given alphabet values as pointed out in the comments.
You can convert one of the two arrays into an in-place hash table. This will not be exactly O(N), but it will come close in non-pathological cases.
Just use [number % N] as its desired index, or in the chain that starts there. If any element has to be replaced, it can be placed at the index where the offending element started. Rinse, wash, repeat.
UPDATE:
This is a similar (N=M) in-place hash table; it used chaining, but it could be downgraded to open addressing.
I'd use a randomized algorithm that has a low chance of error.
The key is to use a universal hash function.
def hash_array(array, hash_fn):
    cur = 0
    for item in array:
        # sum (rather than XOR) so that duplicate elements do not cancel out
        cur += hash_fn(item)
    return cur

def are_perm(a1, a2):
    hash_fn = pick_random_universal_hash_func()
    return hash_array(a1, hash_fn) == hash_array(a2, hash_fn)
If the arrays are permutations, it will always be right. If they are different, the algorithm might incorrectly say that they are the same, but it will do so with very low probability. Further, you can get an exponential decrease in chance for error with a linear amount of work by asking many are_perm() questions on the same input, if it ever says no, then they are definitely not permutations of each other.
I just found a counterexample, so the assumption below is incorrect.
I cannot prove it, but I thought this might possibly be true.
Since all elements of the arrays are integers, suppose each array has 2 elements,
and we have
a1 + a2 = s
a1 * a2 = m
b1 + b2 = s
b1 * b2 = m
then {a1, a2} == {b1, b2}.
If this were true, it would also hold for arrays with n elements. So we could compare the sum and product of each array: if they are equal, one is a permutation of the other.

Generating permutations with sub-linear memory

I am wondering if there is a sufficiently simple algorithm for generating permutations of N elements, say 1..N, which uses less than O(N) memory. It does not have to compute the n-th permutation, but it must be able to compute all permutations.
Of course, this algorithm should be a generator of some kind, or use some internal data structure that needs less than O(N) memory, since returning the result as a vector of size N already violates the restriction to sub-linear memory.
Let's assume that the permutation is being generated one entry at a time. The state of the generator must encode the set of elements remaining (to see this, run the generator to completion), and since no possibility can be excluded, the generator state needs at least n bits.
Maybe you can, with factoradic numbers. You can extract the resulting permutation from it step by step, so you never have to have the entire result in memory.
But the reason I started with "maybe" is that I'm not sure how the size of the factoradic number itself grows. If it fits in a 32-bit integer or something like that, N would be limited to a constant, so O(N) would equal O(1); hence we have to use an array for it, and I'm unsure how big it will be in terms of N.
I think the answer has to be "no".
Consider the generator for N-element permutations as a state machine: it must contain at least as many states as there are permutations, else it will start repeating before it finishes generating all of them.
There are N! such permutations, which will require at least ceil(log2(N!)) bits to represent. Stirling's approximation tells us log2(N!) is O(N log N), so we will be unable to create such a generator with sub-linear memory.
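The bound is easy to compute exactly with integer arithmetic; a small sketch (the function name is mine):

```python
import math

def generator_state_bits(n):
    # A generator that visits all n! permutations needs at least
    # ceil(log2(n!)) bits of state; (x - 1).bit_length() is ceil(log2(x)).
    return (math.factorial(n) - 1).bit_length()

print(generator_state_bits(10))   # 22
print(generator_state_bits(100))  # grows like n log n, per Stirling
```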
The C++ algorithm next_permutation performs an in-place rearrangement of a sequence into its next permutation, or returns false when no further permutations exist. The algorithm is as follows:
template <class BidirectionalIterator>
bool next_permutation(BidirectionalIterator first, BidirectionalIterator last) {
    if (first == last) return false;  // False for empty ranges.
    BidirectionalIterator i = first;
    ++i;
    if (i == last) return false;  // False for single-element ranges.
    i = last;
    --i;
    for (;;) {
        BidirectionalIterator ii = i--;
        // Find an element *n < *(n + 1).
        if (*i < *ii) {
            BidirectionalIterator j = last;
            // Find the last *m with *n < *m.
            while (!(*i < *--j)) {}
            // Swap *n and *m, and reverse from n + 1 to the end.
            iter_swap(i, j);
            reverse(ii, last);
            return true;
        }
        // No n was found.
        if (i == first) {
            // Reverse the sequence back to its original order.
            reverse(first, last);
            return false;
        }
    }
}
This uses constant extra space (just the iterators) per permutation generated, beyond the sequence itself. Do you consider that linear?
I think that to even store your result (which will be an ordered list of N items) will be O(N) in memory, no?
Anyhow, to answer your later question about picking a permutation at random, here's a technique that will be better than just producing all N! possibilities in a list, say, and then picking an index randomly. If we can just pick the index randomly and generate the associated permutation from it, we're much better off.
We can imagine the dictionary order on your words/permutations, and associate a unique number to these based on the word's/permutation's order of appearance in the dictionary. E.g., words on three characters would be
perm. index
012 <----> 0
021 <----> 1
102 <----> 2
120 <----> 3
201 <----> 4
210 <----> 5
You'll see later why it was easiest to use the numbers we did, but others could be accommodated with a bit more work.
To choose one at random, you could pick its associated index randomly from the range 0 ... N!-1 with a uniform probability (the simplest implementation of this is clearly out of the question for even moderately large N, I know, but I think there are decent workarounds) and then determine its associated permutation. Notice that the list begins with all the permutations of the last N-1 elements, keeping the first digit fixed equal to 0. After those possibilities are exhausted, we generate all those that start with 1. After these next (N-1)! permutations are exhausted, we generate those that start with a 2. Etc. Thus we can determine the leading digit is Floor[R / (N-1)!], where R was the index in the sense shown above. See now why we zero indexed, too?
To generate the remaining N-1 digits in the permutation, let's say that we determined Floor[R/(N-1)!] = a0. Start with the list {0, ..., N-1} - {a0} (set subtraction). We want the Qth permutation of this list, for Q = R mod (N-1)!. Except for accounting for the fact that there's a missing digit, this is just the same as the problem we've just solved. Recurse.

question on array and number

I have a problem. For example, we have the array
int a[] = new int[]{a1, a2, a3, a4, ..., an};
The task is to fill the same array with elements that are not already in the array.
For example, a = {1,3,4,5,6,7} could be filled with {2,8,9,12,13,90} or other numbers, but not with {1,12,13,14,110}, because 1 is already in the array.
Thanks.
Interesting problem.
If the array is of signed integers, I believe it is possible in O(n) time and O(1) space, with no overflows, assuming the length is small enough to permit such a thing to happen.
The basic idea is as follows:
We have n numbers. On dividing those numbers by n+1, we get n remainders, so at least one of the remainders in {0, 1, 2, ..., n} must be missing (say r). We fill the array with numbers whose remainder modulo n+1 is r.
First, we add a multiple of n+1 to all negative numbers to make them positive.
Next we walk the array and find the remainder of each number modulo n+1. If a number's remainder is j, we set a[j] to -a[j] if a[j] is positive. (If we encounter already-negated numbers while walking, we use the negated version when taking the remainder.)
We also keep one extra int for remainder = n.
At the end, we walk the array again to see if there are any positive numbers (there will be one, or the extra int for remainder = n will be unset); its index is a missing remainder r.
Once we have the remainder, it is easy to generate n numbers with that remainder. Of course, we could always generate just one such number and fill the whole array with it, as the problem never said anything about unique numbers.
If the array were of unsigned integers, we could probably still do this with better bookkeeping.
For instance, we could use the first n/log n integers as a bit array to record which remainders have been seen, with some extra O(1) integers to hold numbers temporarily.
For example: tmp = a[0], find its remainder and set the appropriate bit of a[0] (after setting a[0] to zero first); then tmp = a[1], set the corresponding bit, etc. We never overwrite a number before we have taken its remainder.
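A sketch of the remainder idea in Python, using an explicit seen-array of size n+1 instead of the in-place sign-flipping trick (so O(n) extra space; the function name is mine):

```python
def fill_with_new(a):
    n = len(a)
    # n numbers can cover at most n of the n+1 remainders mod n+1,
    # so some remainder r must be missing.
    seen = [False] * (n + 1)
    for x in a:
        seen[x % (n + 1)] = True
    r = seen.index(False)
    # Every number congruent to r mod n+1 is guaranteed absent from a.
    return [r + k * (n + 1) for k in range(1, n + 1)]

print(fill_with_new([1, 3, 4, 5, 6, 7]))  # [9, 16, 23, 30, 37, 44]
```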
Just get the highest number in the array, then fill a new array with values above it; such values cannot collide with the existing elements. Obtaining the highest number can be done in the same loop.
Assuming 12,4,3,5,7,8,89, it'll detect 89 as the highest value. It then creates a new array and fills it with 90..96; then discards the old array.
