Efficiently grab some subsets that meet criteria [duplicate] - ruby

Given a set of consecutive numbers from 1 to n, I'm trying to find the number of subsets that do not contain consecutive numbers.
E.g., for the set [1, 2, 3], some possible subsets are [1, 2] and [1, 3]. The former would not be counted while the latter would be, since 1 and 3 are not consecutive numbers.
Here is what I have:
def f(n)
  consecutives = Array(1..n)
  stop = (n / 2.0).round
  (1..stop).flat_map { |x|
    consecutives.combination(x).select { |combo|
      consecutive = false
      combo.each_cons(2) do |l, r|
        consecutive = l.next == r
        break if consecutive
      end
      combo.length == 1 || !consecutive
    }
  }.size
end
It works, but I need it to work faster, under 12 seconds for n <= 75. How do I optimize this method so I can handle high n values no sweat?
I looked at:
Check if array is an ordered subset
How do I return a group of sequential numbers that might exist in an array?
Check if an array is subset of another array in Ruby
and some others. I can't seem to find an answer.
Suggested duplicate is Count the total number of subsets that don't have consecutive elements, although that question is slightly different: I was asking for this optimization in Ruby, and I do not want the empty subset counted in my answer. That question would have been very helpful had I found it initially, though! But SergGr's answer is exactly what I was looking for.

Although #user3150716's idea is correct, the details are wrong. In particular, you can see that for n = 3 there are 4 subsets: [1], [2], [3], [1,3], while his formula gives only 3. That is because he missed the subset [3] (i.e. the subset consisting of just [i]), and that error accumulates for larger n. Also, I think it is easier to reason about if you start from 1 rather than n. So the correct formulas would be
f(1) = 1
f(2) = 2
f(n) = f(n-1) + f(n-2) + 1
Those formulas are easy to code using a simple loop in constant space and O(n) speed:
def f(n)
  return 1 if n == 1
  return 2 if n == 2
  # calculate
  #   f(n) = f(n-1) + f(n-2) + 1
  # using a simple loop
  v2 = 1
  v1 = 2
  i = 3
  while i <= n do
    i += 1
    v1, v2 = v1 + v2 + 1, v1
  end
  v1
end
You can see this online together with the original code here
This should be pretty fast for any n <= 75. For much larger n you might require some additional tricks like noticing that f(n) is actually one less than a Fibonacci number
f(n) = Fib(n+2) - 1
and there is a closed formula for Fibonacci number that theoretically can be computed faster for big n.
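For illustration, here is a minimal fast-doubling sketch of that identity (in Python rather than Ruby, for brevity; the helper name fib_pair is my own):

def fib_pair(n):
    # returns (Fib(n), Fib(n+1)) by fast doubling, in O(log n) steps
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)
    c = a * (2 * b - a)   # Fib(2m), where m = n // 2
    d = a * a + b * b     # Fib(2m+1)
    return (d, c + d) if n & 1 else (c, d)

def f(n):
    # f(n) = Fib(n+2) - 1, per the identity above
    return fib_pair(n + 2)[0] - 1

print(f(3))   # => 4: the subsets [1], [2], [3], [1, 3]
print(f(75))  # effectively instant for n = 75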

Let the number of subsets with no consecutive numbers from {i, ..., n} be f(i). Then f(i) is the sum of:
1) f(i+1), the number of such subsets without i in them;
2) f(i+2) + 1, the number of such subsets with i in them (hence leaving i+1 out of the subset).
So,
f(i) = f(i+1) + f(i+2) + 1
f(n) = 1
f(n-1) = 2
f(1) will be your answer.
You can solve it using matrix exponentiation (http://zobayer.blogspot.in/2010/11/matrix-exponentiation.html) in O(log n) time.
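A minimal sketch of that matrix-exponentiation idea (my own code, in Python, using the equivalent forward form f(n) = f(n-1) + f(n-2) + 1 of the recurrence):

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(M, e):
    R = [[int(i == j) for j in range(3)] for i in range(3)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def f(n):
    if n <= 2:
        return n  # f(1) = 1, f(2) = 2
    # [f(n), f(n-1), 1] = M^(n-2) * [f(2), f(1), 1]
    M = [[1, 1, 1],
         [1, 0, 0],
         [0, 0, 1]]
    P = mat_pow(M, n - 2)
    return P[0][0] * 2 + P[0][1] * 1 + P[0][2]

print(f(3))  # => 4

For a modular answer one would reduce each sum inside mat_mul by the modulus.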

Related

Find the number of pairs in an array with product between l and r inclusive

This is the actual question; however, it simplifies to:
Find all semiprimes (numbers which are products of 2 distinct prime factors, e.g. 6 = 2*3) in the range L to R.
There will be multiple queries for L and R.
We can't precompute the semiprimes, as N is large, but we can store the primes, as they only go up to 10^6 as per the question.
Now, assume I have all the primes from a Sieve of Eratosthenes. I need all possible pairs of primes with product between L and R. So the question simplifies to: given a sorted array, find all possible pairs with products between L and R inclusive.
I am including the part of the code in the editorial which does this:
for (int i = 0; i < cnt && ar[i] <= R; i++)
{
    int lower = L / ar[i];
    if (L % ar[i] > 0)
        lower++;                   // lower = ceil(L / ar[i])
    lower = max(lower, ar[i] + 1); // the second factor must be a larger prime
    int upper = R / ar[i];
    if (upper < lower)
        continue;
    // count primes in [lower, upper] by binary search
    ans += upper_bound(ar.begin(), ar.end(), upper) -
           lower_bound(ar.begin(), ar.end(), lower);
}
Here's one approach; it may not be faster, but it seems reasonable.
The number of primes below 10^8 is around 5*10^6 (reference: https://en.wikipedia.org/wiki/Prime-counting_function).
But we may not have to keep all the primes, as that would be rather inefficient; we can keep the semiprimes only.
There's already a generative process for semiprimes: each semiprime is a product of 2 distinct prime factors.
So we can keep an array which stores all the semiprimes; as there will be at most 10^5 semiprimes in the range, we can sort that array. For each query, we just binary search on the array to find the number of elements in the range.
So, how do we save the semiprimes?
We can slightly modify the Sieve of Eratosthenes to generate them.
The idea is to keep a countDivision array which stores a division budget of 2 for each integer in the range. An integer is a semiprime exactly when two prime divisions use up its budget (countDivision reaches 0) and reduce it to 1 (v reaches 1).
def createSemiPrimeSieve(n):
    # v initially stores the indices; after the sieve runs, v[i] == 1
    # (together with countDivision[i] == 0) marks i as a semiprime
    v = [0 for i in range(n + 1)]
    for i in range(1, n + 1):
        v[i] = i
    # countDivision starts at 2 for every number and counts the remaining
    # prime divisions: a semiprime has exactly 2 prime factors, so after
    # dividing by both primes we get countDivision[x] == 0 and v[x] == 1.
    # If a number a is prime, then countDivision[a] == 2 and v[a] == a.
    countDivision = [0 for i in range(n + 1)]
    for i in range(n + 1):
        countDivision[i] = 2
    for i in range(2, n + 1, 1):
        # v[i] == i and countDivision[i] == 2 means i is prime: no smaller
        # prime has divided it yet (same reason if countDivision[i] != 2)
        if (v[i] == i and countDivision[i] == 2):
            # j runs over each multiple of i
            for j in range(2 * i, n + 1, i):
                if (countDivision[j] > 0):
                    # divide the number by i once and store the quotient
                    v[j] = int(v[j] / i)
                    countDivision[j] -= 1
    # collect all semiprimes
    res = []
    for i in range(2, n + 1, 1):
        # v[i] == 1 and countDivision[i] == 0 means the number had
        # exactly two (distinct) prime divisors
        if (v[i] == 1 and countDivision[i] == 0):
            res.append(i)
    return res
Credit: https://www.geeksforgeeks.org/print-all-semi-prime-numbers-less-than-or-equal-to-n/
But generating has the same complexity as the sieve, which is O(n log log n). If (R-L) < 10^5 this approach will pass, but as (R-L) can be as big as 10^8, it's not feasible.
Another approach is to count instead of generating.
Let's work on an example query:
L = 2, R = 10
Now, let's say we know all the primes up to 10^6 (as p and q can't be more than 10^6):
primes = [2, 3, 5, 7, 11, ...]
The number of primes below 10^6 is less than 10^5 (so we can store them in an array), and the time complexity is also manageable.
Now, we can scan our primes array to count the contribution for each prime to generate semiprimes in range (L, R).
First, let's start with 2: how many semiprimes will we generate with the help of 2?
Look at primes = [2, 3, 5, 7, 11, ...].
We can't choose 2, because 2 and 2 are not distinct (p and q must be different). But 3 is in, as 2*3 <= 10, and so is 5, as 2*5 <= 10.
How to count this?
We will take the lower bound as ceil(L / primes[i]), but we have to make sure we don't consider the current prime again (p and q must be different), so we take the max of that and primes[i] + 1.
For 2, our start number is 3 (because 2 + 1 = 3; we can't start at 1 or 2, since allowing 2 would count invalid cases like 2*2 = 4). Our end number is 10 // 2 = 5. How many primes are there within the range [3, 5]? There are 2, and that can be found via a simple binary search.
The rest is easy: we binary search for how many primes are in the range (max(ceil(L / primes[i]), primes[i] + 1), R // primes[i]).
This has a pre-processing time complexity of O(10^6 log log 10^6) for the sieve plus O(10^6 log 10^6), and each query does a binary search per prime, i.e. roughly O(π(10^6) log 10^6) per query.
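A small Python sketch of this counting approach (the function names are mine; it assumes, as above, that the smaller factor is a prime up to 10^6):

from bisect import bisect_left, bisect_right

def sieve_primes(limit):
    # plain Sieve of Eratosthenes; returns the sorted list of primes <= limit
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]

def count_semiprimes(L, R, primes):
    # counts numbers p*q in [L, R] with p < q both prime
    ans = 0
    for p in primes:
        if p * p > R:                    # p is the smaller factor, so p*q > p*p
            break
        lower = max(-(-L // p), p + 1)   # ceil(L/p), and q must exceed p
        upper = R // p
        if upper >= lower:
            # primes q with lower <= q <= upper, via binary search
            ans += bisect_right(primes, upper) - bisect_left(primes, lower)
    return ans

primes = sieve_primes(10 ** 6)
print(count_semiprimes(2, 10, primes))  # => 2 (the semiprimes 6 and 10)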

Analyze the run time of a nested for loops algorithm

Say I have the following code:
def func(A, n):
    for i = 0 to n-1:
        for k = i+1 to n-1:
            for l = k+1 to n-1:
                if A[i] + A[k] + A[l] = 0:
                    return True
A is an array, and n denotes the length of A.
As I read it, the code checks if any 3 consecutive integers in A sum up to 0. I see the time complexity as
T(n) = (n-2)(n-1)(n-2)+O(1) => O(n^3)
Is this correct, or am I missing something? I have a hard time finding reading material about this (and I own CLRS)
You have the functionality wrong: it checks to see whether any three elements (not necessarily consecutive) add up to 0. To improve execution time, it considers them only in index order: i < k < l.
You are correct about the complexity. Although each loop takes a short-cut, that short-cut is merely a scalar divisor on the number of iterations. Each loop is still O(n).
As for the coding, you already have most of it done -- and Stack Overflow is not a coding service. Give it your best shot; if that doesn't work and you're stuck, post another question.
If you really want to teach yourself a new technique, look up Python's itertools module. You can use it to generate all the combinations of triples, and then merely check sum(triple) in each case. In fact, you can use the built-in any function to check whether any one triple sums to 0, which could reduce your function body to a single line of Python code.
I'll leave that research to you. You'll learn other neat stuff on the way.
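For reference, here is one way that line could look (my own sketch, not part of the original answer):

from itertools import combinations

def func(A):
    # combinations(A, 3) yields each index-ordered triple exactly once;
    # any() short-circuits as soon as one triple sums to 0
    return any(sum(t) == 0 for t in combinations(A, 3))

print(func([3, 1, -4, 2, 5]))  # True, since 3 + 1 + (-4) == 0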
Addition for OP's comment.
Let's set N to 4, and look at what happens:
i = 0
    for k = 1 to 3
        ... three k loops
i = 1
    for k = 2 to 3
        ... two k loops
i = 2
    for k = 3 to 3
        ... one k loop
The number of k-loop iterations is the "triangle" number of n-1: 3 + 2 + 1 = 6. Let m = n-1; the formula is T(m) = m(m+1)/2.
Now, propagate the same logic to the l loops: summing those triangle numbers gives the third-order "pyramid" (tetrahedral) formula P(m) = m(m+1)(m+2)/6, here with m = n-2.
In terms of n, that is n(n-1)(n-2)/6 = C(n,3) executions of the innermost test. When you multiply this out, you get a straightforward cubic formula in n.
Here is the sequence for n=5:
0 1 2
0 1 3
0 1 4
change k
0 2 3
0 2 4
change k
0 3 4
change k
change k
change i
1 2 3
1 2 4
change k
1 3 4
change k
change k
change i
2 3 4
BTW, l is a bad variable name, easily confused with 1.

Number of different binary sequences of length n generated using exactly k flip operations

Consider a binary sequence b of length N. Initially, all the bits are set to 0. We define a flip operation with 2 arguments, flip(L,R), such that:
All bits with indices between L and R are "flipped", meaning a bit with value 1 becomes a bit with value 0 and vice-versa. More exactly, for all i in range [L,R]: b[i] = !b[i].
Nothing happens to bits outside the specified range.
You are asked to determine the number of possible different sequences that can be obtained using exactly K flip operations modulo an arbitrary given number, let's call it MOD.
More specifically, each test contains on the first line a number T, the number of queries to be given. Then there are T queries, each one being of the form N, K, MOD with the meaning from above.
1 ≤ N, K ≤ 300 000
T ≤ 250
2 ≤ MOD ≤ 1 000 000 007
Sum of all N-s in a test is ≤ 600 000
time limit: 2 seconds
memory limit: 65536 kbytes
Example :
Input :
1
2 1 1000
Output :
3
Explanation :
There is a single query. The initial sequence is 00. We can do the following operations :
flip(1,1) ⇒ 10
flip(2,2) ⇒ 01
flip(1,2) ⇒ 11
So there are 3 possible sequences that can be generated using exactly 1 flip.
Some quick observations that I've made, although I'm not sure they are totally correct :
If K is big enough, that is, if we have a big enough number of flips at our disposal, we should be able to obtain 2^n sequences.
If K=1, then the result we're looking for is N(N+1)/2. It's also C(n,1)+C(n,2), where C is the binomial coefficient.
Currently trying a brute force approach to see if I can spot a rule of some kind. I think this is a sum of some binomial coefficients, but I'm not sure.
I've also come across a somewhat simpler variant of this problem, where the flip operation only flips a single specified bit. In that case, the result is
C(n,k) + C(n,k-2) + C(n,k-4) + ... + C(n, 1 or 0). Of course, there's the special case where k > n, but it's not a huge difference. Anyway, it's pretty easy to understand why that happens, and I guess it's worth noting.
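A quick brute-force check of that single-bit-flip formula for small n and k (my own sketch; brute and formula are hypothetical helper names):

from math import comb

def brute(n, k):
    # all strings reachable from 0^n using exactly k single-bit flips
    seen = set()
    def go(bits, left):
        if left == 0:
            seen.add(tuple(bits))
            return
        for i in range(n):
            bits[i] ^= 1
            go(bits, left - 1)
            bits[i] ^= 1
    go([0] * n, k)
    return len(seen)

def formula(n, k):
    # C(n,k) + C(n,k-2) + ... down to C(n,1) or C(n,0), capped at n when k > n
    return sum(comb(n, j) for j in range(k % 2, min(k, n) + 1, 2))

for n in range(1, 6):
    for k in range(1, 6):
        assert brute(n, k) == formula(n, k)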
Here are a few ideas:
We may assume that no flip operation occurs twice (otherwise, we can assume that it did not happen). It does affect the number of operations, but I'll talk about it later.
We may assume that no two segments intersect. Indeed, if L1 < L2 < R1 < R2, we can just do the (L1, L2 - 1) and (R1 + 1, R2) flips instead. The case when one segment is inside the other is handled similarly.
We may also assume that no two segments touch each other. Otherwise, we can glue them together and reduce the number of operations.
These observations give the following formula for the number of different sequences one can obtain by flipping exactly k segments without "redundant" flips: C(n + 1, 2 * k) (we choose 2 * k ends of segments. They are always different. The left end is exclusive).
If we were allowed to perform no more than K flips, the answer would be the sum for k = 0...K of C(n + 1, 2 * k).
Intuitively, it seems that it's possible to transform any sequence of no more than K flips into a sequence of exactly K flips (for instance, we can flip the same segment two more times and add 2 operations, or split a segment of more than two elements into two segments and add one operation).
Running the brute-force search (I know that it's not a real proof, but it looks correct combined with the observations mentioned above) suggests that the answer is this sum minus 1 if n or k is equal to 1, and exactly the sum otherwise.
That is, the result is C(n + 1, 0) + C(n + 1, 2) + ... + C(n + 1, 2 * K) - d, where d = 1 if n = 1 or k = 1 and 0 otherwise.
Here is code I used to look for patterns running a brute force search and to verify that the formula is correct for small n and k:
reachable = set()
was = set()

def other(c):
    """
    Returns '1' if c == '0' and '0' otherwise
    """
    return '0' if c == '1' else '1'

def flipped(s, l, r):
    """
    Flips the [l, r] segment of the string s and returns the result
    """
    res = s[:l]
    for i in range(l, r + 1):
        res += other(s[i])
    res += s[r + 1:]
    return res

def go(xs, k):
    """
    Exhaustive search. was is used to speed up the search: we avoid checking
    the same string with the same number of remaining operations twice.
    """
    p = (xs, k)
    if p in was:
        return
    was.add(p)
    if k == 0:
        reachable.add(xs)
        return
    for l in range(len(xs)):
        for r in range(l, len(xs)):
            go(flipped(xs, l, r), k - 1)

def calc_naive(n, k):
    """
    Counts the number of reachable sequences by running an exhaustive search
    """
    xs = '0' * n
    global reachable
    global was
    was = set()
    reachable = set()
    go(xs, k)
    return len(reachable)

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

def cnk(n, k):
    if k > n:
        return 0
    return fact(n) // fact(k) // fact(n - k)

def solve(n, k):
    """
    Uses the formula shown above to compute the answer
    """
    res = 0
    for i in range(k + 1):
        res += cnk(n + 1, 2 * i)
    if k == 1 or n == 1:
        res -= 1
    return res

if __name__ == '__main__':
    # Check that the formula gives the right answer for small values of n and k
    for n in range(1, 11):
        for k in range(1, 11):
            assert calc_naive(n, k) == solve(n, k)
This solution is much better than the exhaustive search. For instance, it can run in O(N * K) time per test case if we compute the coefficients using Pascal's triangle. Unfortunately, it is not fast enough. I know how to solve it more efficiently for prime MOD (using Lucas' theorem), but I do not have a solution in the general case.
Multiplicative modular inverses can't solve this problem immediately as k! or (n - k)! may not have an inverse modulo MOD.
Note: I assumed that C(n, m) is defined for all non-negative n and m and is equal to 0 if n < m.
I think I know how to solve it for an arbitrary MOD now.
Let's factorize MOD into prime factors p1^a1 * p2^a2 * ... * pn^an. Now we can solve this problem for each prime factor independently and combine the results using the Chinese remainder theorem.
Let's fix a prime p. Let's assume that p^a|MOD (that is, we need to get the result modulo p^a). We can precompute all p-free parts of the factorial and the maximum power of p that divides the factorial for all 0 <= n <= N in linear time using something like this:
powers = [0] * (N + 1)
p_free = [i for i in range(N + 1)]
p_free[0] = 1
for cur_p in powers of p <= N:
    i = cur_p
    while i <= N:
        powers[i] += 1
        p_free[i] //= p
        i += cur_p
Now the p-free part of the factorial is the product of p_free[i] for all i <= n and the power of p that divides n! is the prefix sum of the powers.
Now we can divide two factorials: the p-free part is coprime with p^a so it always has an inverse. The powers of p are just subtracted.
We're almost there. One more observation: we can precompute the inverses of p-free parts in linear time. Let's compute the inverse for the p-free part of N! using Euclid's algorithm. Now we can iterate over all i from N to 0. The inverse of the p-free part of i! is the inverse for i + 1 times p_free[i] (it's easy to prove it if we rewrite the inverse of the p-free part as a product using the fact that elements coprime with p^a form an abelian group under multiplication).
This algorithm runs in O(N * number_of_prime_factors + the time to solve the system using the Chinese remainder theorem + sqrt(MOD)) time per test case. Now it looks good enough.
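Here is a compact sketch of that bookkeeping for a single prime power (my own code; it assumes MOD = p^a, omits the CRT combination step, and uses Python's pow(x, -1, m) for the inverse instead of the linear-time inverse table described above):

def binom_mod_prime_power(N, p, a):
    mod = p ** a
    # powers[i] = exponent of p in i; p_free[i] = i with all factors p removed
    powers = [0] * (N + 1)
    p_free = list(range(N + 1))
    p_free[0] = 1
    cur = p
    while cur <= N:
        for i in range(cur, N + 1, cur):
            powers[i] += 1
            p_free[i] //= p
        cur *= p
    # prefix products/sums give the p-free part and the p-exponent of n!
    fact_free = [1] * (N + 1)
    fact_pow = [0] * (N + 1)
    for i in range(1, N + 1):
        fact_free[i] = fact_free[i - 1] * p_free[i] % mod
        fact_pow[i] = fact_pow[i - 1] + powers[i]

    def binom(n, k):
        if k < 0 or k > n:
            return 0
        e = fact_pow[n] - fact_pow[k] - fact_pow[n - k]  # power of p in C(n,k)
        if e >= a:
            return 0
        den = fact_free[k] * fact_free[n - k] % mod
        # p-free parts are coprime with p^a, so the inverse exists
        return fact_free[n] * pow(den, -1, mod) * pow(p, e, mod) % mod

    return binom

binom = binom_mod_prime_power(20, 2, 3)  # C(n, k) modulo 8
assert binom(6, 2) == 15 % 8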
You're on a good path with binomial-coefficients already. There are several factors to consider:
Think of your number as a binary-string of length n. Now we can create another array counting the number of times a bit will be flipped:
[0, 1, 0, 0, 1] number
[a, b, c, d, e] number of flips.
But even numbers of flips of a given bit all lead to the same result, and so do all odd numbers of flips. So basically the relevant part of the distribution can be represented mod 2.
Logical next question: how many different combinations of even and odd values are available? We'll take care of the ordering later on; for now, just assume the flipping-array is ordered descending, for simplicity. We start off with k as the only flipping-number in the array. Now we want to add a flip. Since the whole flipping-array is used mod 2, we need to remove two from the value of k to achieve this and insert them into the array separately. E.g.:
[5, 0, 0, 0]  mod 2  [1, 0, 0, 0]
[3, 1, 1, 0]  mod 2  [1, 1, 1, 0]
[4, 1, 0, 0]  mod 2  [0, 1, 0, 0]
As the last example shows (remember we're operating modulo 2 in the final result), moving a single 1 doesn't change the number of flips in the final outcome. Thus we always have to flip an even number of bits in the flipping-array. If k is even, so will the number of flipped bits be, and the same applies vice versa, no matter what the value of n is.
So now the question is of course how many different ways of filling the array are available? For simplicity we'll start with mod 2 right away.
Obviously we start with 1 flipped bit if k is odd, and otherwise with 0. And we always add 2 flipped bits. We can continue with this until we either have flipped all n bits (or at least as many as we can flip)
v = (k % 2 == n % 2) ? n : n - 1
or we can't spread k further over the array.
v = k
Putting this together:
noOfAvailableFlips:
    if k < n:
        return k
    else:
        return (k % 2 == n % 2) ? n : n - 1
So far so good: there are always v / 2 flipping-arrays (mod 2) that differ by the number of flipped bits. Now we come to the next part: permuting these arrays. This is just a simple permutation function (permutation with repetition, to be precise):
flipArrayNo(flippedbits):
    return factorial(n) / (factorial(flippedbits) * factorial(n - flippedbits))
Putting it all together:
solutionsByFlipping(n, k):
    res = 0
    for i in [k % 2, noOfAvailableFlips(), step=2]:
        res += flipArrayNo(i)
    return res
This also shows that for sufficiently large numbers we can't obtain 2^n sequences, for the simple reason that we cannot arrange operations as we please. The number of flips that actually affect the outcome will always be either even or odd, depending upon k. There's no way around this. The best result one can get is 2^(n-1) sequences.
For completeness, here's a dynamic program. It can deal easily with an arbitrary modulus since it is based on sums, but unfortunately I haven't found a way to speed it up beyond O(n * k).
Let a[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 1. Let b[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 0.
Then:
# we can append 1 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in 1, or to any arrangement of (k-1) non-adjacent blocks of contiguous
# 1's that ends in 0:
a[n][k] = a[n - 1][k] + b[n - 1][k - 1]
# we can append 0 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in either 0 or 1:
b[n][k] = b[n - 1][k] + a[n - 1][k]
# complete answer would be sum (a[n][i] + b[n][i]) for i = 0 to k
I wonder if the following observations might be useful: (1) a[n][k] and b[n][k] are zero when n < 2*k - 1, and (2) on the flip side, for values of k greater than ⌊(n + 1) / 2⌋ the overall answer seems to be identical.
Python code (full matrices are defined for simplicity, but I think only one row of each would actually be needed, space-wise, for a bottom-up method):
a = [[0] * 11 for i in range(0, 11)]
b = [([1] + [0] * 10) for i in range(0, 11)]

def f(n, k):
    return fa(n, k) + fb(n, k)

def fa(n, k):
    global a
    if a[n][k] or n == 0 or k == 0:
        return a[n][k]
    elif n == 2 * k - 1:
        a[n][k] = 1
        return 1
    else:
        a[n][k] = fb(n - 1, k - 1) + fa(n - 1, k)
        return a[n][k]

def fb(n, k):
    global b
    if b[n][k] or n == 0 or n == 2 * k - 1:
        return b[n][k]
    else:
        b[n][k] = fb(n - 1, k) + fa(n - 1, k)
        return b[n][k]

def g(n, k):
    return sum([f(n, i) for i in range(0, k + 1)])

# example
print(g(10, 10))
for i in range(0, 11):
    print(a[i])
print()
for i in range(0, 11):
    print(b[i])

Generating a stateless, pseudo-random permutation of integers from 0 to n?

Question spawned from this one. The problem can be formulated as follows:
Given two positive integers n and m, with m <= n, is there a way to find a suite of numbers which cycles and covers all possible values from 0 to n?
As a basic example, take n = 3: for any current value between 0 and 3, we can compute the next value as:
next = (current+3) % 4
This will cycle: 1 -> 0 -> 3 -> 2 -> 1, etc. I found this solution by "chance", and it even generalizes ((i + n) % (n + 1) for any n), though I cannot prove it mathematically. And it is a little too obvious.
Are there better ways to generate such a permutation?
I'm not sure what you intend m in the question to refer to, or how you're defining "a suite of numbers". However, one way of getting a cycle of numbers is to use a recursion (or iteration) of the form:
next = f(current)
for some function f. For example, linear congruential RNGs use the iteration:
x = ( a · x + c ) mod m where 0 < a, c < m
They don't always produce all values from 0 to m-1, but under certain circumstances they do:
c and m are relatively prime
a - 1 is divisible by every prime factor of m (not including m)
if m is divisible by 4, a - 1 is divisible by 4.
(This is the Hull-Dobell theorem.)
Note that a = c = 1 satisfies the above criteria for any m. Furthermore, if m is prime, the criteria force a = 1 (though any c works), and if m is a power of 2, then the criteria are satisfied by any a, c such that a == 1 mod 4 and c == 1 mod 2. However, for certain values of m (e.g. 6), the only value of a which will work is 1.
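For example, here is a tiny full-period LCG (my own sketch, choosing m as a power of 2 so the criteria are easy to satisfy: c odd and a == 1 mod 4):

def lcg_cycle(m, a=5, c=3, start=0):
    # yields all m values of x -> (a*x + c) % m exactly once
    x = start
    for _ in range(m):
        yield x
        x = (a * x + c) % m

print(list(lcg_cycle(8)))  # [0, 3, 2, 5, 4, 7, 6, 1]: each of 0..7 once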
This might not qualify as "stateless", but I don't think that there is any strictly stateless solution; for example, you might look for some function f such that:
f(0), f(1),... f(m-1)
is a permutation of
0, 1, ..., m-1
so that you could generate the cycle by calling f(i) for successive values of i. But that's still state, since you have to remember the last value of i you used.
Incrementing each subsequent number by any step that does not share a common prime divisor with (n-m+1) would cover the whole sequence. E.g., for the sequence [2..11] (10 numbers), incrementing by 3, 7, or 9 would work, but 2, 4, 5, 6, and 8 would not, because they share a common divisor with 10 (2 and/or 5).
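A small illustration of that coprime-step claim (my own sketch):

from math import gcd

def cycle(lo, hi, step):
    # walks [lo..hi] with the given step, wrapping around modulo the size
    size = hi - lo + 1
    assert gcd(step, size) == 1  # coprime step => every value is visited
    x = lo
    for _ in range(size):
        yield x
        x = lo + (x - lo + step) % size

print(list(cycle(2, 11, 3)))  # each of 2..11 appears exactly once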
EDIT
I took out the shuffling idea since it seems that you want to increment by the same number each time. If you want a truly "random" sequence that has m as the first element, just take m out and place it at the beginning. I'm not sure how that helps you, though.

Number of Positive Solutions to a1 x1+a2 x2+......+an xn=k (k<=10^18)

The question is: find the number of solutions to a1x1 + a2x2 + ... + anxn = k, with constraints:
1) ai > 0 and ai <= 15
2) n > 0 and n <= 15
3) xi >= 0
I was able to formulate a dynamic programming solution, but it is running too long for k > 10^10. Please guide me to a more efficient solution.
The code
int dp[] = new int[16];
dp[0] = 1;
BigInteger seen = new BigInteger("0");
while (true)
{
    for (int i = 0; i < arr[0]; i++)
    {
        if (dp[0] == 0)
            break;
        dp[arr[i + 1]] = (dp[arr[i + 1]] + dp[0]) % 1000000007;
    }
    for (int i = 1; i < 15; i++)
        dp[i - 1] = dp[i];
    seen = seen.add(new BigInteger("1"));
    if (seen.compareTo(n) == 0)
        break;
}
System.out.println(dp[0]);
arr is the array containing the coefficients, and the answer should be taken mod 1000000007, as the number of ways does not fit into an int.
Update for real problem:
The actual problem is much simpler. However, it's hard to be helpful without spoiling it entirely.
Stripping it down to the bare essentials, the problem is
Given k distinct positive integers L1, ..., Lk and a nonnegative integer n, how many different finite sequences (a1, ..., ar) are there such that
1. for all i (1 <= i <= r), ai is one of the Lj, and
2. a1 + ... + ar = n?
(In other words, the number of compositions of n using only the given Lj.)
For convenience, you are also told that all the Lj are <= 15 (and hence k <= 15), and n <= 10^18. And, so that the entire computation can be carried out using 64-bit integers (the number of sequences grows exponentially with n, so you wouldn't have enough memory to store the exact number for large n), you should only calculate the remainder of the sequence count modulo 1000000007.
To solve such a problem, start by looking at the simplest cases first. The very simplest case is when only one L is given: then evidently there is one admissible sequence if n is a multiple of L, and no admissible sequence if n mod L != 0. That doesn't help yet. So consider the next simplest case, two L values given. Suppose those are 1 and 2.
0 has one composition, the empty sequence: N(0) = 1
1 has one composition, (1): N(1) = 1
2 has two compositions, (1,1); (2): N(2) = 2
3 has three compositions, (1,1,1);(1,2);(2,1): N(3) = 3
4 has five compositions, (1,1,1,1);(1,1,2);(1,2,1);(2,1,1);(2,2): N(4) = 5
5 has eight compositions, (1,1,1,1,1);(1,1,1,2);(1,1,2,1);(1,2,1,1);(2,1,1,1);(1,2,2);(2,1,2);(2,2,1): N(5) = 8
You may see it now, or need a few more terms, but you'll notice that you get the Fibonacci sequence (shifted by one), N(n) = F(n+1); thus the sequence N(n) satisfies the recurrence relation
N(n) = N(n-1) + N(n-2)
(for n >= 2; we have not yet proved that, so far it's a hypothesis based on pattern-spotting). Now, can we see that without calculating many values? Of course: there are two types of admissible sequences, those ending with 1 and those ending with 2. Since that partitioning of the admissible sequences restricts only the last element, the number of admissible sequences summing to n and ending with 1 is N(n-1), and the number ending with 2 is N(n-2).
That reasoning immediately generalises, given L1 < L2 < ... < Lk, for all n >= Lk, we have
N(n) = N(n-L1) + N(n-L2) + ... + N(n-Lk)
with the obvious interpretation if we're only interested in N(n) % m.
Umm, that linear recurrence still leaves calculating N(n) as an O(n) task?
Yes, but researching a few of the mentioned keywords quickly leads to an algorithm needing only O(log n) steps ;)
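To make the hint concrete, here is a hedged sketch of that O(log n) idea via matrix exponentiation of the linear recurrence (my own code and names; count_compositions is not from the original post):

MOD = 1000000007

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) % MOD
             for j in range(n)] for i in range(n)]

def mat_pow(M, e):
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def count_compositions(parts, n):
    # number of compositions of n using the given parts, modulo MOD
    d = max(parts)
    # base values N(0), ..., N(d-1) by direct DP
    base = [0] * d
    base[0] = 1
    for i in range(1, d):
        base[i] = sum(base[i - L] for L in parts if L <= i) % MOD
    if n < d:
        return base[n]
    # companion matrix of N(n) = sum over parts L of N(n - L)
    M = [[0] * d for _ in range(d)]
    for L in parts:
        M[0][L - 1] = 1
    for i in range(1, d):
        M[i][i - 1] = 1
    P = mat_pow(M, n - d + 1)
    # state vector is [N(d-1), N(d-2), ..., N(0)]
    return sum(P[0][j] * base[d - 1 - j] for j in range(d)) % MOD

print(count_compositions([1, 2], 5))  # => 8, matching N(5) above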
Algorithm for misinterpreted problem, no longer relevant, but may still be interesting:
The question looks a little SPOJish, so I won't give a complete algorithm (at least, not before I've googled around a bit to check if it's a contest question). I hope no restriction has been omitted in the description, such as that permutations of such representations should only contribute one to the count, that would considerably complicate the matter. So I count 1*3 + 2*4 = 11 and 2*4 + 1*3 = 11 as two different solutions.
Some notations first. For m-tuples of numbers, let < | > denote the canonical bilinear pairing, i.e.
<a|x> = a_1*x_1 + ... + a_m*x_m. For a positive integer B, let A_B = {1, 2, ..., B} be the set of positive integers not exceeding B. Let N denote the set of natural numbers, i.e. of nonnegative integers.
For 0 <= m, k and B > 0, let C(B,m,k) = card { (a,x) \in A_B^m × N^m : <a|x> = k }.
Your problem is then to find \sum_{m=1}^{15} C(15,m,k) (modulo 1000000007).
For completeness, let us mention that C(B,0,k) = if k == 0 then 1 else 0, which can be helpful in theoretical considerations. For the case of a positive number of summands, we easily find the recursion formula
C(B,m+1,k) = \sum_{j = 0}^k C(B,1,j) * C(B,m,k-j)
By induction, C(B,m,_) is the convolution¹ of m factors C(B,1,_). Calculating the convolution of two known functions up to k is O(k^2), so if C(B,1,_) is known, that gives an O(n*k^2) algorithm to compute C(B,m,k), 1 <= m <= n. Okay for small k, but our galaxy won't live to see you calculating C(15,15,10^18) that way. So, can we do better? Well, if you're familiar with the Laplace-transformation, you'll know that an analogous transformation will convert the convolution product to a pointwise product, which is much easier to calculate. However, although the transformation is in this case easy to compute, the inverse is not. Any other idea? Why, yes, let's take a closer look at C(B,1,_).
C(B,1,k) = card { a \in A_B : (k/a) is an integer }
In other words, C(B,1,k) is the number of divisors of k not exceeding B. Let us denote that by d_B(k). It is immediately clear that 1 <= d_B(k) <= B. For B = 2, evidently d_2(k) = 1 if k is odd and 2 if k is even. d_3(k) = 3 if and only if k is divisible by 2 and by 3, hence iff k is a multiple of 6; d_3(k) = 2 if and only if one of 2, 3 divides k but not the other, that is, iff k % 6 \in {2,3,4}; and finally, d_3(k) = 1 iff neither 2 nor 3 divides k, i.e. iff gcd(k,6) = 1, iff k % 6 \in {1,5}. So we've seen that d_2 is periodic with period 2 and d_3 is periodic with period 6. Generally, similar reasoning shows that d_B is periodic for all B, and the minimal positive period divides B!.
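A quick check of that periodicity claim (my own sketch; lcm(1..B) is a period, since every a <= B divides it, and it in turn divides B!):

from functools import reduce
from math import gcd

def d(B, k):
    # number of divisors of k that do not exceed B
    return sum(1 for a in range(1, B + 1) if k % a == 0)

B = 4
P = reduce(lambda x, y: x * y // gcd(x, y), range(1, B + 1))  # lcm(1..4) = 12
assert all(d(B, k) == d(B, k + P) for k in range(1, 200))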
Given any positive period P of C(B,1,_) = d_B, we can split the sum in the convolution (k = q*P+r, 0 <= r < P):
C(B,m+1, q*P+r) = \sum_{c=0}^{q-1} (\sum_{j=0}^{P-1} d_B(j) * C(B,m,(q-c)*P + (r-j))) + \sum_{j=0}^{r} d_B(j) * C(B,m,r-j)
The functions C(B,m,_) are no longer periodic for m >= 2, but there are simple formulae to obtain C(B,m,q*P+r) from C(B,m,r). Thus, with C(B,1,_) = d_B and C(B,m,_) known up to P, calculating C(B,m+1,_) up to P is an O(P^2) task², and getting the data necessary for calculating C(B,m+1,k) for arbitrarily large k needs m such convolutions, hence that's O(m*P^2).
Then finding C(B,m,k) for 1 <= m <= n and arbitrarily large k is O(n^2*P^2) in time and O(n^2*P) in space.
For B = 15, we have 15! = 1.307674368 * 10^12, so using that for P isn't feasible. Fortunately, the smallest positive period of d_15 is much smaller, so you get something workable. From a rough estimate, I would still expect the calculation of C(15,15,k) to take time more appropriately measured in hours than seconds, but it's an improvement over O(k) which would take years (for k in the region of 10^18).
¹ The convolution used here is (f \ast g)(k) = \sum_{j = 0}^k f(j)*g(k-j).
² Assuming all arithmetic operations are O(1); if, as in the OP, only the residue modulo some M > 0 is desired, that holds if all intermediate calculations are done modulo M.
