How to get all the combinations in order - algorithm

For example
Input
2,1,3
Output
1,1,1
1,1,2
1,1,3
2,1,1
2,1,2
2,1,3

If I understand the question correctly, then this should work (the code is in Haskell and will produce the results in a different order than the example):
combinations [] = []
combinations [x]
    | x > 0 = [x] : combinations [(x-1)]
    | otherwise = []
combinations (x:xs)
    | x > 0 = (map (\c -> x:c) (combinations xs)) ++ combinations ((x-1):xs)
    | otherwise = []
Or this, to get it in the same order as you gave (it is also just a nicer solution):
combinations' [x] = [[c]|c<-[1..x]]
combinations' (x:xs) = [c:d|c<-[1..x],d<-combinations' xs]
It will take me a bit to produce an answer in an "imperative" language (C, Java, etc). This is the kind of thing where functional languages shine.
Okay, so in Java.
Disclaimer: this code is more or less just a direct translation of the Haskell. It isn't clean, or the best way of doing things. I have not tested it, or really given it enough thought to make sure it is correct
public List<List<Integer>> combinations(List<Integer> workwith){
    List<List<Integer>> d = new LinkedList<List<Integer>>();
    if(workwith.size() == 1){
        int max = workwith.get(0);
        for(int i = 1; i <= max; i++){
            List<Integer> toAdd = new LinkedList<Integer>();
            toAdd.add(i);
            d.add(toAdd);
        }
        return d;
    }
    Integer max = workwith.remove(0);
    List<List<Integer>> back = combinations(workwith);
    for(int i = 1; i <= max; i++){
        for(List<Integer> b : back){
            List<Integer> toAdd = new LinkedList<Integer>();
            toAdd.add(i);
            toAdd.addAll(b);
            d.add(toAdd);
        }
    }
    return d;
}

a is the input vector
int prod = 1;
for (int i = 0; i < a.size(); i++) prod *= a[i]; // the number of lines in the output
for (int i = 0; i < prod; i++){
    vector<int> b(a.size()); // the current output line
    int rem = i;             // decode a copy so the loop counter is not destroyed
    for (int j = a.size()-1; j >= 0; j--){ // for each output calculate its values
        b[j] = rem % a[j];   // each value will be between 0 and a[j]-1
        rem /= a[j];
    }
    for (int j = 0; j < a.size(); j++) // output it
        cout << b[j] + 1 << " ";
    cout << endl;
}

Not the most efficient way to do it, but here's a C implementation:
/* Assumes output is allocated with enough room for 'len' ints. */
/* Generates the 'num'-th combination in 'output'. */
void get_comb_number(int num, int len, int *input, int *output) {
    int i;
    for (i = len - 1; i >= 0; --i) {
        output[i] = (num % input[i]) + 1;
        num /= input[i];
    }
}
Then you can just loop from 0 to the product of the input (for the example above it would be 2*1*3 = 6), calling get_comb_number for each and print out each combination. The code is slightly inefficient because it has to call a function for each combination and has to do all the mods and divisions for each combination, but IMO the simplicity of the code makes up for it if you don't need the efficiency. Note that the combination number will overflow somewhat quickly, but assuming 32-bit ints, you'll be spending several minutes just generating all the combinations at that point and much much longer trying to print them all.
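As a hedged illustration of that driver loop, here is a self-contained Python sketch that inlines the same mixed-radix decode (the function and variable names are my own):
from math import prod

def all_combinations(limits):
    # Yield every tuple whose j-th entry ranges over 1..limits[j], in the order shown in the question.
    total = prod(limits)                           # e.g. 2*1*3 = 6 combinations for the example input
    for num in range(total):
        combo = [0] * len(limits)
        for j in range(len(limits) - 1, -1, -1):   # decode num as mixed-radix digits, rightmost first
            combo[j] = num % limits[j] + 1
            num //= limits[j]
        yield tuple(combo)

for c in all_combinations([2, 1, 3]):
    print(c)   # (1, 1, 1), (1, 1, 2), (1, 1, 3), (2, 1, 1), (2, 1, 2), (2, 1, 3)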

Related

How to convert this recursive function to a dp based solution?

This is the recursive function
def integerPartition(m, n):
    if(n==0):
        return 0
    if(m ==0):
        return 1
    if(m<0):
        return 0
    return integerPartition(m,n-1) + integerPartition(m-n,n)
and this is what I have done in C++:
// n -> no. of persons
// m -> amount of money to be distributed
// dp table of order (n+1)*(m+1)
long long int dp[n+1][m+1] ;
//initializing values to 0
for(i = 0; i<=n ; i++)
for(j = 0; j<= m ; j++)
dp[i][j] = 0;
Print(n,m,dp);
cout << "\n";
//Case 1 - if there is no persons i.e n = 0 answer will be 0
//Case 2 - if there is no money i.e. m = 0 there is only 1 way answer will be 1
for ( i = 1; i<= n ; i++ )
dp[i][0] = 1;
dp[i][i] = 1;
Print(n,m,dp);
for ( i = 1; i<= n ; i++){
for ( j = 1; j<= m ; j++){
dp[i][j] = dp[i][j-1] ;
if(i>=j){
dp[i][j] += dp[i-j][j];
}
// else if(i==j){
// dp[i][j] += 1;
// }
}
}
but the answers I am getting do not match the recursive one. I don't understand what I am missing; if anyone can help me correct it, I will be thankful. Since I have just started with dynamic programming, I really am not able to figure it out.
Some issues:
You seem to use non-local variables for your for loops. This is bad practice and can lead to errors that are difficult to debug. Instead, do for (int i = 1; ...) etc.
dp[i][i] = 1; is not part of the for loop. You would have detected this if you had defined i as a variable local to the for loop.
It is good practice to always use braces for the body of a for loop (also if, else, ...etc.), even if there is only one statement in the body.
dp[i][i] = 1; is also a bad assignment: it is simply not true that integerPartition(i, i) always returns 1. It happens to be true for i = 1, but not in general; for instance, integerPartition(4, 4) returns 5.
Just remove this line.
In the final nested for loop you are mixing up the row/column in your dp array. Note that you had reserved the first dimension for n and the second dimension for m, so opposite to the parameter order.
That is fine, but you do not stick to that decision in this for loop. Instead of dp[i][j-1] you should have written dp[i-1][j], and instead of dp[i-j][j] you should have
written dp[i][j-i]. And so the if condition should be adapted accordingly.
There is no return statement in your version, but maybe you just forgot to include it in the question. It should be
return dp[n][m];
Here is the corrected code:
long long int dp[n+1][m+1];
for(int i = 0; i <=n; i++) {
for(int j = 0; j <= m; j++) {
dp[i][j] = 0;
}
}
for (int i = 1; i <= n; i++) {
dp[i][0] = 1;
}
for (int i = 1; i <= n; i++){
for (int j = 1; j <= m ; j++) {
dp[i][j] = dp[i-1][j];
if (j >= i) {
dp[i][j] += dp[i][j-i];
}
}
}
return dp[n][m];
Not sure that this technically is DP, but if your goal is to get the benefits of DP, memoization might be a better approach.
The idea is made up of 2 parts:
At the start of each call to integerPartition, look up in a table (your dp will do nicely) to see if that computation has already been done, and if it has, just return the value stored in the table.
Just before any point where integerPartition is to return a value, store it in the table.
Note that this means you don't need to try to "pivot" the original code -- it proceeds as it did originally, so you are almost guaranteed to get the same results, but without as much unnecessary re-computation (at the cost of extra storage).
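A minimal Python sketch of that memoization idea, keeping the original function's shape and adding a dictionary as the lookup table (the extra memo argument is my own addition):
def integerPartition(m, n, memo=None):
    if memo is None:
        memo = {}
    if n == 0:
        return 0
    if m == 0:
        return 1
    if m < 0:
        return 0
    if (m, n) in memo:                 # step 1: reuse a previously stored result
        return memo[(m, n)]
    result = integerPartition(m, n - 1, memo) + integerPartition(m - n, n, memo)
    memo[(m, n)] = result              # step 2: store the value just before returning it
    return result

print(integerPartition(4, 4))          # 5, matching the plain recursive version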
So, on the basis of your code comments: I am going to assume you only want 1 when n > 0 and m = 0, according to your recursive code; but in the dp code you interchanged them, that is, i goes up to n and j goes up to m.
So, updating your code (try to find the mistake):
// n -> no. of persons
// m -> amount of money to be distributed
// dp table of order (n+1)*(m+1)
long long int dp[n+1][m+1] ;
//initializing values to 0
for(i = 0; i<=n ; i++)
for(j = 0; j<= m ; j++)
dp[i][j] = 0;
Print(n,m,dp);
cout << "\n";
//Case 1 - if there is no persons i.e n = 0 answer will be 0
//Case 2 - if there is no money i.e. m = 0 there is only 1 way answer will be 1
for ( i = 1; i<= n; i++){
dp[i][0] = 0;
}
for(int j = 1; j <= m; j++){
dp[0][j] = 1;
}
Print(n,m,dp);
for ( i = 1; i<= n ; i++){
for ( j = 1; j<= m ; j++){
dp[i][j] = dp[i][j-1] ;
if(i>=j){
dp[i][j] += dp[i-j][j];
}
// else if(i==j){
// dp[i][j] += 1;
// }
}
}

Google Foobar, maximum unique visits under a resource limit, negative weights in graph

I'm having trouble figuring out the type of problem this is. I'm still a student and haven't taken a graph theory/linear optimization class yet.
The only thing I know for sure is to check for negative cycles, as this means you can rack the resource limit up to infinity, allowing for you to pick up each rabbit. I don't know the "reason" to pick the next path. I also don't know when to terminate, as you could keep using all of the edges and make the resource limit drop below 0 forever, but never escape.
I'm not really looking for code (as this is a coding challenge), only the type of problem this is (e.g. Max Flow, Longest Path, Shortest Path, etc.). If you know an algorithm that fits this already, that would be extra awesome. Thanks.
The time it takes to move from your starting point to all of the bunnies and to the bulkhead will be given to you in a square matrix of integers. Each row will tell you the time it takes to get to the start, first bunny, second bunny, ..., last bunny, and the bulkhead in that order. The order of the rows follows the same pattern (start, each bunny, bulkhead). The bunnies can jump into your arms, so picking them up is instantaneous, and arriving at the bulkhead at the same time as it seals still allows for a successful, if dramatic, escape. (Don't worry, any bunnies you don't pick up will be able to escape with you since they no longer have to carry the ones you did pick up.) You can revisit different spots if you wish, and moving to the bulkhead doesn't mean you have to immediately leave - you can move to and from the bulkhead to pick up additional bunnies if time permits.
In addition to spending time traveling between bunnies, some paths interact with the space station's security checkpoints and add time back to the clock. Adding time to the clock will delay the closing of the bulkhead doors, and if the time goes back up to 0 or a positive number after the doors have already closed, it triggers the bulkhead to reopen. Therefore, it might be possible to walk in a circle and keep gaining time: that is, each time a path is traversed, the same amount of time is used or added.
Write a function of the form answer(times, time_limit) to calculate the most bunnies you can pick up and which bunnies they are, while still escaping through the bulkhead before the doors close for good. If there are multiple sets of bunnies of the same size, return the set of bunnies with the lowest prisoner IDs (as indexes) in sorted order. The bunnies are represented as a sorted list by prisoner ID, with the first bunny being 0. There are at most 5 bunnies, and time_limit is a non-negative integer that is at most 999.
It's a planning problem, basically. The generic approach to planning is to identify the possible states of the world, the initial state, transitions between states, and the final states. Then search the graph that this data imply, most simply using breadth-first search.
For this problem, the relevant state is (1) how much time is left (2) which rabbits we've picked up (3) where we are right now. This means 1,000 clock settings (I'll talk about added time in a minute) times 2^5 = 32 subsets of bunnies times 7 positions = 224,000 possible states, which is a lot for a human but not a computer.
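A quick sanity check of that count, as plain arithmetic in Python using the bounds from the problem statement:
time_settings = 1000      # time_limit is at most 999, so 0..999 remaining
bunny_subsets = 2 ** 5    # each of the (at most) 5 bunnies is either picked up or not
positions = 7             # start + 5 bunnies + bulkhead
print(time_settings * bunny_subsets * positions)   # 224000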
We can deal with added time by swiping a trick from Johnson's algorithm. As Tymur suggests in a comment, run Bellman--Ford and either find a negative cycle (in which case all rabbits can be saved by running around the negative cycle enough times first) or potentials that, when applied, make all times nonnegative. Don't forget to adjust the starting time by the difference in potential between the starting position and the bulkhead.
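Here is a hedged Python sketch of that reweighting step: a textbook Bellman-Ford run from a virtual zero-weight source, followed by the Johnson-style adjustment. It only illustrates the idea; the function name and structure are my own, not the code used in the solutions here:
def reweight(times):
    # Returns (potentials h, adjusted matrix) with all adjusted weights nonnegative,
    # or None if a negative cycle exists (in which case every bunny can be saved).
    n = len(times)
    h = [0] * n                              # distances from a virtual source with 0-weight edges to all nodes
    for _ in range(n):                       # n relaxation passes suffice for n+1 vertices
        for u in range(n):
            for v in range(n):
                if u != v and h[u] + times[u][v] < h[v]:
                    h[v] = h[u] + times[u][v]
    for u in range(n):                       # one extra pass: any further improvement means a negative cycle
        for v in range(n):
            if u != v and h[u] + times[u][v] < h[v]:
                return None
    adjusted = [[times[u][v] + h[u] - h[v] for v in range(n)] for u in range(n)]
    return h, adjusted                       # shift the starting time budget by h[start] - h[bulkhead], as noted above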
There you go. I started Google Foobar yesterday, and I'll be starting Level 5 shortly. This was my 2nd problem at level 4. The solution is fast enough, as I memoized the states without using the utils class. Anyway, I loved the experience. This was by far the best problem I have solved here, since I got to use Floyd-Warshall (to find the negative cycle if it exists), Bellman-Ford (as a utility function for the weight-readjustment step popularly used in algorithms like Johnson's and Suurballe's), Johnson (weight readjustment!), DFS (for recursing over steps) and even memoization using a self-designed hashing function :)
Happy Coding!!
public class Solution
{
public static final int INF = 100000000;
public static final int MEMO_SIZE = 10000;
public static int[] lookup;
public static int[] lookup_for_bunnies;
public static int getHashValue(int[] state, int loc)
{
int hashval = 0;
for(int i = 0; i < state.length; i++)
hashval += state[i] * (1 << i);
hashval += (1 << loc) * 100;
return hashval % MEMO_SIZE;
}
public static boolean findNegativeCycle(int[][] times)
{
int i, j, k;
int checkSum = 0;
int V = times.length;
int[][] graph = new int[V][V];
for(i = 0; i < V; i++)
for(j = 0; j < V; j++)
{
graph[i][j] = times[i][j];
checkSum += times[i][j];
}
if(checkSum == 0)
return true;
for(k = 0; k < V; k++)
for(i = 0; i < V; i++)
for(j = 0; j < V; j++)
if(graph[i][j] > graph[i][k] + graph[k][j])
graph[i][j] = graph[i][k] + graph[k][j];
for(i = 0; i < V; i++)
if(graph[i][i] < 0)
return true;
return false;
}
public static void dfs(int[][] times, int[] state, int loc, int tm, int[] res)
{
int V = times.length;
if(loc == V - 1)
{
int rescued = countArr(state);
int maxRescued = countArr(res);
if(maxRescued < rescued)
for(int i = 0; i < V; i++)
res[i] = state[i];
if(rescued == V - 2)
return;
}
else if(loc > 0)
state[loc] = 1;
int hashval = getHashValue(state, loc);
if(tm < lookup[hashval])
return;
else if(tm == lookup[hashval] && countArr(state) <= lookup_for_bunnies[loc])
return;
else
{
lookup_for_bunnies[loc] = countArr(state);
lookup[hashval] = tm;
for(int i = 0; i < V; i++)
{
if(i != loc && (tm - times[loc][i]) >= 0)
{
boolean stateCache = state[i] == 1;
dfs(times, state, i, tm - times[loc][i], res);
if(stateCache)
state[i] = 1;
else
state[i] = 0;
}
}
}
}
public static int countArr(int[] arr)
{
int counter = 0;
for(int i = 0; i < arr.length; i++)
if(arr[i] == 1)
counter++;
return counter;
}
public static int bellmanFord(int[][] adj, int times_limit)
{
int V = adj.length;
int i, j, k;
int[][] graph = new int[V + 1][V + 1];
for(i = 1; i <= V; i++)
graph[i][0] = INF;
for(i = 0; i < V; i++)
for(j = 0; j < V; j++)
graph[i + 1][j + 1] = adj[i][j];
int[] distance = new int[V + 1] ;
for(i = 1; i <= V; i++)
distance[i] = INF;
for(i = 1; i <= V; i++)
for(j = 0; j <= V; j++)
{
int minDist = INF;
for(k = 0; k <= V; k++)
if(graph[k][j] != INF)
minDist = Math.min(minDist, distance[k] + graph[k][j]);
distance[j] = Math.min(distance[j], minDist);
}
for(i = 0; i < V; i++)
for(j = 0; j < V; j++)
adj[i][j] += distance[i + 1] - distance[j + 1];
return times_limit + distance[1] - distance[V];
}
public static int[] solution(int[][] times, int times_limit)
{
int V = times.length;
if(V == 2)
return new int[]{};
if(findNegativeCycle(times))
{
int ans[] = new int[times.length - 2];
for(int i = 0; i < ans.length; i++)
ans[i] = i;
return ans;
}
lookup = new int[MEMO_SIZE];
lookup_for_bunnies = new int[V];
for(int i = 0; i < V; i++)
lookup_for_bunnies[i] = -1;
times_limit = bellmanFord(times, times_limit);
int initial[] = new int[V];
int res[] = new int[V];
dfs(times, initial, 0, times_limit, res);
int len = countArr(res);
int ans[] = new int[len];
int counter = 0;
for(int i = 0; i < res.length; i++)
if(res[i] == 1)
{
ans[counter++] = i - 1;
if(counter == len)
break;
}
return ans;
}
}

Maximum subarray sum modulo M

Most of us are familiar with the maximum sum subarray problem. I came across a variant of this problem which asks the programmer to output the maximum of all subarray sums modulo some number M.
The naive approach to solve this variant would be to find all possible subarray sums (which would be of the order of N^2 where N is the size of the array). Of course, this is not good enough. The question is - how can we do better?
Example: Let us consider the following array:
6 6 11 15 12 1
Let M = 13. In this case, subarray 6 6 (or 12 or 6 6 11 15 or 11 15 12) will yield maximum sum ( = 12 ).
We can do this as follows:
Maintain an array sum where sum[i] contains the prefix sum modulo M of the elements from 0 to i.
For each index i, we need to find the maximum subarray sum that ends at this index:
For each subarray (start + 1, i), we know that the mod sum of this subarray is
int a = (sum[i] - sum[start] + M) % M
So, we can only achieve a subarray sum larger than sum[i] if sum[start] is larger than sum[i] and as close to sum[i] as possible.
This can be done easily if you use a binary search tree.
Pseudo code:
int[] sum;
sum[0] = A[0];
Tree tree;
tree.add(sum[0]);
int result = sum[0];
for(int i = 1; i < n; i++){
sum[i] = sum[i - 1] + A[i];
sum[i] %= M;
int a = tree.getMinimumValueLargerThan(sum[i]);
result = max((sum[i] - a + M) % M, result);
tree.add(sum[i]);
}
print result;
Time complexity: O(n log n)
Let A be our input array with zero-based indexing. We can reduce A modulo M without changing the result.
First of all, let's reduce the problem to a slightly easier one by computing an array P representing the prefix sums of A, modulo M:
A = 6 6 11 2 12 1
P = 6 12 10 12 11 12
Now let's process the possible left borders of our solution subarrays in decreasing order. This means that we will first determine the optimal solution that starts at index n - 1, then the one that starts at index n - 2 etc.
In our example, if we chose i = 3 as our left border, the possible subarray sums are represented by the suffix P[3..n-1] plus a constant a = A[i] - P[i]:
a = A[3] - P[3] = 2 - 12 = 3 (mod 13)
P + a = * * * 2 1 2
The global maximum will occur for one of these left borders. Since we can insert the suffix values from right to left, we have now reduced the problem to the following:
Given a set of values S and integers x and M, find the maximum of S + x modulo M
This one is easy: Just use a balanced binary search tree to manage the elements of S. Given a query x, we want to find the largest value in S that is smaller than M - x (that is the case where no overflow occurs when adding x). If there is no such value, just use the largest value of S. Both can be done in O(log |S|) time.
Total runtime of this solution: O(n log n)
Here's some C++ code to compute the maximum sum. It would need some minor adaptions to also return the borders of the optimal subarray:
#include <bits/stdc++.h>
using namespace std;
int max_mod_sum(const vector<int>& A, int M) {
vector<int> P(A.size());
for (int i = 0; i < A.size(); ++i)
P[i] = (A[i] + (i > 0 ? P[i-1] : 0)) % M;
set<int> S;
int res = 0;
for (int i = A.size() - 1; i >= 0; --i) {
S.insert(P[i]);
int a = (A[i] - P[i] + M) % M;
auto it = S.lower_bound(M - a);
if (it != begin(S))
res = max(res, *prev(it) + a);
res = max(res, (*prev(end(S)) + a) % M);
}
return res;
}
int main() {
// random testing to the rescue
for (int i = 0; i < 1000; ++i) {
int M = rand() % 1000 + 1, n = rand() % 1000 + 1;
vector<int> A(n);
for (int i = 0; i< n; ++i)
A[i] = rand() % M;
int should_be = 0;
for (int i = 0; i < n; ++i) {
int sum = 0;
for (int j = i; j < n; ++j) {
sum = (sum + A[j]) % M;
should_be = max(should_be, sum);
}
}
assert(should_be == max_mod_sum(A, M));
}
}
For me, all the explanations here were awful, since I didn't get the searching/sorting part. How we search and sort was unclear.
We all know that we need to build prefixSum, meaning sum of all elems from 0 to i with modulo m
I guess, what we are looking for is clear.
Knowing that subarray[i][j] = (prefix[i] - prefix[j] + m) % m (indicating the modulo sum from index i to j), our maxima when given prefix[i] is always that prefix[j] which is as close as possible to prefix[i], but slightly bigger.
E.g. for m = 8, prefix[i] being 5, we are looking for the next value after 5, which is in our prefixArray.
For efficient search (binary search) we sort the prefixes.
What we cannot do is build the prefixSum first, then iterate again from 0 to n and look for an index in the sorted prefix array, because we might find an endIndex which is smaller than our startIndex, which is no good.
Therefore, what we do is we iterate from 0 to n indicating the endIndex of our potential max subarray sum and then look in our sorted prefix array, (which is empty at the beginning) which contains sorted prefixes between 0 and endIndex.
import bisect

def maximumSum(coll, m):
    n = len(coll)
    maxSum, prefixSum = 0, 0
    sortedPrefixes = []
    for endIndex in range(n):
        prefixSum = (prefixSum + coll[endIndex]) % m
        maxSum = max(maxSum, prefixSum)
        startIndex = bisect.bisect_right(sortedPrefixes, prefixSum)
        if startIndex < len(sortedPrefixes):
            maxSum = max(maxSum, prefixSum - sortedPrefixes[startIndex] + m)
        bisect.insort(sortedPrefixes, prefixSum)
    return maxSum
From your question, it seems that you have created an array to store the cumulative sums (Prefix Sum Array), and are calculating the sum of the sub-array arr[i:j] as (sum[j] - sum[i] + M) % M. (arr and sum denote the given array and the prefix sum array respectively)
Calculating the sum of every sub-array results in a O(n*n) algorithm.
The question that arises is -
Do we really need to consider the sum of every sub-array to reach the desired maximum?
No!
For a given value of j, the value (sum[j] - sum[i] + M) % M is maximum when sum[i] is just greater than sum[j]; if sum[i] = sum[j] + 1, the difference is M - 1, the best possible.
Looking up such a sum[i] with a balanced BST or a binary search over sorted prefix sums reduces the algorithm to O(n log n). (A short sketch follows the link below.)
You can take a look at this explanation! https://www.youtube.com/watch?v=u_ft5jCDZXk
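A minimal Python sketch of that observation, using bisect on a sorted list of prefix sums as the "just greater" lookup (my own illustration, structurally the same as the other Python answers in this thread):
import bisect

def max_mod_subarray(arr, M):
    best, prefix = 0, 0
    seen = []                                    # sorted prefix sums seen so far
    for x in arr:
        prefix = (prefix + x) % M
        best = max(best, prefix)                 # subarray starting at index 0
        pos = bisect.bisect_right(seen, prefix)  # first earlier prefix strictly greater than the current one
        if pos < len(seen):
            best = max(best, prefix - seen[pos] + M)
        bisect.insort(seen, prefix)
    return best

print(max_mod_subarray([6, 6, 11, 15, 12, 1], 13))  # 12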
There are already a bunch of great solutions listed here, but I wanted to add one that has O(nlogn) runtime without using a balanced binary tree, which isn't in the Python standard library. This solution isn't my idea, but I had to think a bit as to why it worked. Here's the code, explanation below:
def maximumSum(a, m):
    prefixSums = [(0, -1)]
    for idx, el in enumerate(a):
        prefixSums.append(((prefixSums[-1][0] + el) % m, idx))
    prefixSums = sorted(prefixSums)
    maxSeen = prefixSums[-1][0]
    for (a, a_idx), (b, b_idx) in zip(prefixSums[:-1], prefixSums[1:]):
        if a_idx > b_idx and b > a:
            maxSeen = max((a-b) % m, maxSeen)
    return maxSeen
As with the other solutions, we first calculate the prefix sums, but this time we also keep track of the index of the prefix sum. We then sort the prefix sums, as we want to find the smallest difference between prefix sums modulo m - sorting lets us just look at adjacent elements as they have the smallest difference.
At this point you might think we're neglecting an essential part of the problem - we want the smallest difference between prefix sums, but the larger prefix sum needs to appear before the smaller prefix sum (meaning it has a smaller index). In the solutions using trees, we ensure that by adding prefix sums one by one and recalculating the best solution.
However, it turns out that we can look at adjacent elements and just ignore ones that don't satisfy our index requirement. This confused me for some time, but the key realization is that the optimal solution will always come from two adjacent elements. I'll prove this via a contradiction. Let's say that the optimal solution comes from two non-adjacent prefix sums x and z, where z > x (the list is sorted!), with x at array index k and z at array index i, and k > i:
x ... z
k ... i
Let's consider one of the numbers between x and z, and let's call it y with index j. Since the list is sorted, x < y < z.
x ... y ... z
k ... j ... i
The prefix sum y must have index j < i, otherwise it would be part of a better solution with z. But if j < i, then j < k and y and x form a better solution than z and x! So any elements between x and z must form a better solution with one of the two, which contradicts our original assumption. Therefore the optimal solution must come from adjacent prefix sums in the sorted list.
Here is Java code for the maximum subarray sum modulo M. We handle the case where we cannot find the least element in the tree strictly greater than s[i].
public static long maxModulo(long[] a, final long k) {
long[] s = new long[a.length];
TreeSet<Long> tree = new TreeSet<>();
s[0] = a[0] % k;
tree.add(s[0]);
long result = s[0];
for (int i = 1; i < a.length; i++) {
s[i] = (s[i - 1] + a[i]) % k;
// find least element in the tree strictly greater than s[i]
Long v = tree.higher(s[i]);
if (v == null) {
// can't find v, then compare v and s[i]
result = Math.max(s[i], result);
} else {
result = Math.max((s[i] - v + k) % k, result);
}
tree.add(s[i]);
}
return result;
}
Few points from my side that might hopefully help someone understand the problem better.
You do not need to add +M in the modulo calculation: in Python the % operator handles negative numbers well (the result takes the sign of the modulus), so a % M == (a + M) % M.
As mentioned, the trick is to build the proxy sum table such that
proxy[n] = (a[1] + ... a[n]) % M
This then allows one to represent the maxSubarraySum[i, j] as
maxSubarraySum[i, j] = (proxy[j] - proxy[i]) % M
The implementation trick is to build the proxy table as we iterate through the elements, instead of first pre-building it and then using it. This is because for each new element a[i] we want to compute proxy[i] and find a proxy[j] that is bigger than, but as close as possible to, proxy[i] (ideally bigger by 1, because this results in a remainder of M - 1). For this we need a clever data structure that keeps the proxy table sorted as we build it and
lets us quickly find the closest bigger element to proxy[i]. bisect.bisect_right is a good choice in Python.
See my Python implementation below (hope this helps but I am aware this might not necessarily be as concise as others' solutions):
import bisect

def maximumSum(a, m):
    prefix_sum = [a[0] % m]
    prefix_sum_sorted = [a[0] % m]
    current_max = prefix_sum_sorted[0]
    for elem in a[1:]:
        prefix_sum_next = (prefix_sum[-1] + elem) % m
        prefix_sum.append(prefix_sum_next)
        idx_closest_bigger = bisect.bisect_right(prefix_sum_sorted, prefix_sum_next)
        if idx_closest_bigger >= len(prefix_sum_sorted):
            current_max = max(current_max, prefix_sum_next)
            bisect.insort_right(prefix_sum_sorted, prefix_sum_next)
            continue
        if prefix_sum_sorted[idx_closest_bigger] > prefix_sum_next:
            current_max = max(current_max, (prefix_sum_next - prefix_sum_sorted[idx_closest_bigger]) % m)
        bisect.insort_right(prefix_sum_sorted, prefix_sum_next)
    return current_max
Total java implementation with O(n*log(n))
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.TreeSet;
import java.util.stream.Stream;
public class MaximizeSumMod {
public static void main(String[] args) throws Exception{
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
Long times = Long.valueOf(in.readLine());
while(times --> 0){
long[] pair = Stream.of(in.readLine().split(" ")).mapToLong(Long::parseLong).toArray();
long mod = pair[1];
long[] numbers = Stream.of(in.readLine().split(" ")).mapToLong(Long::parseLong).toArray();
printMaxMod(numbers,mod);
}
}
private static void printMaxMod(long[] numbers, Long mod) {
Long maxSoFar = (numbers[numbers.length-1] + numbers[numbers.length-2])%mod;
maxSoFar = (maxSoFar > (numbers[0]%mod)) ? maxSoFar : numbers[0]%mod;
numbers[0] %=mod;
for (Long i = 1L; i < numbers.length; i++) {
long currentNumber = numbers[i.intValue()]%mod;
maxSoFar = maxSoFar > currentNumber ? maxSoFar : currentNumber;
numbers[i.intValue()] = (currentNumber + numbers[i.intValue()-1])%mod;
maxSoFar = maxSoFar > numbers[i.intValue()] ? maxSoFar : numbers[i.intValue()];
}
if(mod.equals(maxSoFar+1) || numbers.length == 2){
System.out.println(maxSoFar);
return;
}
long previousNumber = numbers[0];
TreeSet<Long> set = new TreeSet<>();
set.add(previousNumber);
for (Long i = 2L; i < numbers.length; i++) {
Long currentNumber = numbers[i.intValue()];
Long ceiling = set.ceiling(currentNumber);
if(ceiling == null){
set.add(numbers[i.intValue()-1]);
continue;
}
if(ceiling.equals(currentNumber)){
set.remove(ceiling);
Long greaterCeiling = set.ceiling(currentNumber);
if(greaterCeiling == null){
set.add(ceiling);
set.add(numbers[i.intValue()-1]);
continue;
}
set.add(ceiling);
ceiling = greaterCeiling;
}
Long newMax = (currentNumber - ceiling + mod);
maxSoFar = maxSoFar > newMax ? maxSoFar :newMax;
set.add(numbers[i.intValue()-1]);
}
System.out.println(maxSoFar);
}
}
Adding STL C++11 code based on the solution suggested by #Pham Trung. Might be handy.
#include <iostream>
#include <set>
int main() {
int N;
std::cin>>N;
for (int nn=0;nn<N;nn++){
long long n,m;
std::set<long long> mSet;
long long maxVal = 0; //positive input values
long long sumVal = 0;
std::cin>>n>>m;
mSet.insert(m);
for (long long q=0;q<n;q++){
long long tmp;
std::cin>>tmp;
sumVal = (sumVal + tmp)%m;
auto itSub = mSet.upper_bound(sumVal);
maxVal = std::max(maxVal,(m + sumVal - *itSub)%m);
mSet.insert(sumVal);
}
std::cout<<maxVal<<"\n";
}
}
As you can read on Wikipedia, there is a solution called Kadane's algorithm, which computes the maximum subarray sum by tracking the maximum subarray ending at position i for all positions i, iterating once over the array. This solves the problem with runtime complexity O(n).
Unfortunately, I think that Kadane's algorithm isn't able to find all possible solutions when more than one solution exists.
An implementation in Java (I haven't tested it):
public int[] kadanesAlgorithm (int[] array) {
int start_old = 0;
int start = 0;
int end = 0;
int found_max = 0;
int max = 0; // start at 0 so the first iteration doesn't count array[0] twice
for(int i = 0; i<array.length; i++) {
max = Math.max(array[i], max + array[i]);
found_max = Math.max(found_max, max);
if(max < 0)
start = i+1;
else if(max == found_max) {
start_old=start;
end = i;
}
}
return Arrays.copyOfRange(array, start_old, end+1);
}
I feel my thoughts are aligned with what has been posted already, but just in case - a Kotlin O(N log N) solution:
val seen = sortedSetOf(0L)
var prev = 0L
return max(a.map { x ->
val z = (prev + x) % m
prev = z
seen.add(z)
seen.higher(z)?.let{ y ->
(z - y + m) % m
} ?: z
})
Implementation in java using treeset...
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.TreeSet;
public class Main {
public static void main(String[] args) throws IOException {
BufferedReader read = new BufferedReader(new InputStreamReader(System.in)) ;
String[] str = read.readLine().trim().split(" ") ;
int n = Integer.parseInt(str[0]) ;
long m = Long.parseLong(str[1]) ;
str = read.readLine().trim().split(" ") ;
long[] arr = new long[n] ;
for(int i=0; i<n; i++) {
arr[i] = Long.parseLong(str[i]) ;
}
long maxCount = 0L ;
TreeSet<Long> tree = new TreeSet<>() ;
tree.add(0L) ;
long prefix = 0L ;
for(int i=0; i<n; i++) {
prefix = (prefix + arr[i]) % m ;
maxCount = Math.max(prefix, maxCount) ;
Long temp = tree.higher(prefix) ;
System.out.println(temp);
if(temp != null) {
maxCount = Math.max((prefix-temp+m)%m, maxCount) ;
}
//System.out.println(maxCount);
tree.add(prefix) ;
}
System.out.println(maxCount);
}
}
Here is one implementation of the solution in Java for this problem, which uses a TreeSet for an optimized solution!
public static long maximumSum2(long[] arr, long n, long m)
{
long x = 0;
long prefix = 0;
long maxim = 0;
TreeSet<Long> S = new TreeSet<Long>();
S.add((long)0);
// Traversing the array.
for (int i = 0; i < n; i++)
{
// Finding prefix sum.
prefix = (prefix + arr[i]) % m;
// Finding maximum of prefix sum.
maxim = Math.max(maxim, prefix);
// Finding iterator poing to the first
// element that is not less than value
// "prefix + 1", i.e., greater than or
// equal to this value.
long it = S.higher(prefix)!=null?S.higher(prefix):0;
// boolean isFound = false;
// for (long j : S)
// {
// if (j >= prefix + 1)
// if(isFound == false) {
// it = j;
// isFound = true;
// }
// else {
// if(j < it) {
// it = j;
// }
// }
// }
if (it != 0)
{
maxim = Math.max(maxim, prefix - it + m);
}
// adding prefix in the set.
S.add(prefix);
}
return maxim;
}
public static int MaxSequence(int[] arr)
{
int maxSum = 0;
int partialSum = 0;
int negative = 0;
for (int i = 0; i < arr.Length; i++)
{
if (arr[i] < 0)
{
negative++;
}
}
if (negative == arr.Length)
{
return 0;
}
foreach (int item in arr)
{
partialSum += item;
maxSum = Math.Max(maxSum, partialSum);
if (partialSum < 0)
{
partialSum = 0;
}
}
return maxSum;
}
Modify Kadane's algorithm to keep track of the number of occurrences. Below is the code.
#python3
#source: https://github.com/harishvc/challenges/blob/master/dp-largest-sum-sublist-modulo.py
#Time complexity: O(n)
#Space complexity: O(n)
def maxContiguousSum(a, K):
    sum_so_far = 0
    max_sum = 0
    count = {}  #keep track of occurrence
    for i in range(0, len(a)):
        sum_so_far += a[i]
        sum_so_far = sum_so_far % K
        if sum_so_far > 0:
            max_sum = max(max_sum, sum_so_far)
            if sum_so_far in count.keys():
                count[sum_so_far] += 1
            else:
                count[sum_so_far] = 1
        else:
            assert sum_so_far == 0, "Logic error"  # after % K the running sum is never negative
            #IMPORTANT: reset sum_so_far
            sum_so_far = 0
    return max_sum, count[max_sum]

a = [6, 6, 11, 15, 12, 1]
K = 13
max_sum, count = maxContiguousSum(a, K)
print("input >>> %s max sum=%d #occurrence=%d" % (a, max_sum, count))

Find unique number among 3n+1 numbers [duplicate]

This question already has answers here:
Finding an element in an array that isn't repeated a multiple of three times?
(4 answers)
Closed 7 years ago.
I have been asked this question in an interview.
Given that, there are 3n+1 numbers. n of those numbers occur in triplets, only 1 occurs single time. How do we find the unique number in linear time i.e., O(n) ? The numbers are not sorted.
Note that, if there were 2n+1 numbers, n of which occur in pairs, we could just XOR all the numbers to find the unique one. The interviewer told me that it can be done by bit manipulation.
Count the number of times that each bit occurs in the set of 3n+1 numbers.
Reduce each bit count modulo 3.
What is left is the bit pattern of the single number.
Oh, dreamzor (above) has beaten me to it.
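A minimal Python sketch of the bit-counting approach described above, assuming nonnegative integers that fit in 32 bits (the function name is my own):
def find_unique(nums):
    # Count how often each bit position is set across all numbers,
    # reduce each count modulo 3; the remaining bits form the unique number.
    bit_counts = [0] * 32
    for x in nums:
        for b in range(32):
            if (x >> b) & 1:
                bit_counts[b] += 1
    result = 0
    for b in range(32):
        if bit_counts[b] % 3:
            result |= 1 << b
    return result

print(find_unique([7, 7, 7, 42, 5, 5, 5]))  # 42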
You can invent a 3nary XOR (call it XOR3) operation which operates in base 3 instead of base 2 and simply takes each 3nary digit modulo 3 (when usual XOR takes 2nary digit modulo 2).
Then, if you XOR3 all the numbers (converting them to 3nary first) this way, you will be left with the unique number (in base 3 so you will need to convert it back).
The complexity is not exactly linear, though, because the conversions from/to base 3 require additional logarithmic time. However, if the range of numbers is constant then the conversion time is also constant.
Code on C++ (intentionally verbose):
#include <bits/stdc++.h>
using namespace std;

vector<int> to_base3(int num) {
vector<int> base3;
for (; num > 0; num /= 3) {
base3.push_back(num % 3);
}
return base3;
}
int from_base3(const vector<int> &base3) {
int num = 0;
for (int i = 0, three = 1; i < base3.size(); ++i, three *= 3) {
num += base3[i] * three;
}
return num;
}
int find_unique(const vector<int> &a) {
vector<int> unique_base3(20, 0); // up to 3^20
for (int num : a) {
vector<int> num_base3 = to_base3(num);
for (int i = 0; i < num_base3.size(); ++i) {
unique_base3[i] = (unique_base3[i] + num_base3[i]) % 3;
}
}
int unique_num = from_base3(unique_base3);
return unique_num;
}
int main() {
vector<int> rands { 1287318, 172381, 5144, 566546, 7123 };
vector<int> a;
for (int r : rands) {
for (int i = 0; i < 3; ++i) {
a.push_back(r);
}
}
a.push_back(13371337); // unique number
random_shuffle(a.begin(), a.end());
int unique_num = find_unique(a);
cout << unique_num << endl;
}
byte [] oneCount = new byte [32];
int [] test = {1,2,3,1,5,2,9,9,3,1,2,3,9};
for (int n: test) {
for (int bit = 0; bit < 32; bit++) {
if (((n >> bit) & 1) == 1) {
oneCount[bit]++;
oneCount[bit] = (byte)(oneCount[bit] % 3);
}
}
}
int result = 0;
int x = 1;
for (int bit = 0; bit < 32; bit++) {
result += oneCount[bit] * x;
x = x << 1;
}
System.out.print(result);
Looks like while I was coding, others gave the main idea

Implement Number division by multiplication method [duplicate]

I was asked this question in a job interview, and I'd like to know how others would solve it. I'm most comfortable with Java, but solutions in other languages are welcome.
Given an array of numbers, nums, return an array of numbers products, where products[i] is the product of all nums[j], j != i.
Input : [1, 2, 3, 4, 5]
Output: [(2*3*4*5), (1*3*4*5), (1*2*4*5), (1*2*3*5), (1*2*3*4)]
= [120, 60, 40, 30, 24]
You must do this in O(N) without using division.
An explanation of polygenelubricants' method is:
The trick is to construct the arrays (in the case for 4 elements):
{ 1, a[0], a[0]*a[1], a[0]*a[1]*a[2], }
{ a[1]*a[2]*a[3], a[2]*a[3], a[3], 1, }
Both of which can be done in O(n) by starting at the left and right edges respectively.
Then, multiplying the two arrays element-by-element gives the required result.
My code would look something like this:
int a[N] // This is the input
int products_below[N];
int p = 1;
for (int i = 0; i < N; ++i) {
products_below[i] = p;
p *= a[i];
}
int products_above[N];
p = 1;
for (int i = N - 1; i >= 0; --i) {
products_above[i] = p;
p *= a[i];
}
int products[N]; // This is the result
for (int i = 0; i < N; ++i) {
products[i] = products_below[i] * products_above[i];
}
If you need the solution be O(1) in space as well, you can do this (which is less clear in my opinion):
int a[N] // This is the input
int products[N];
// Get the products below the current index
int p = 1;
for (int i = 0; i < N; ++i) {
products[i] = p;
p *= a[i];
}
// Get the products above the current index
p = 1;
for (int i = N - 1; i >= 0; --i) {
products[i] *= p;
p *= a[i];
}
Here is a small recursive function (in C++) to do the modification in-place. It requires O(n) extra space (on stack) though. Assuming the array is in a and N holds the array length, we have:
int multiply(int *a, int fwdProduct, int indx) {
int revProduct = 1;
if (indx < N) {
revProduct = multiply(a, fwdProduct*a[indx], indx+1);
int cur = a[indx];
a[indx] = fwdProduct * revProduct;
revProduct *= cur;
}
return revProduct;
}
Here's my attempt to solve it in Java. Apologies for the non-standard formatting, but the code has a lot of duplication, and this is the best I can do to make it readable.
import java.util.Arrays;
public class Products {
static int[] products(int... nums) {
final int N = nums.length;
int[] prods = new int[N];
Arrays.fill(prods, 1);
for (int
i = 0, pi = 1 , j = N-1, pj = 1 ;
(i < N) && (j >= 0) ;
pi *= nums[i++] , pj *= nums[j--] )
{
prods[i] *= pi ; prods[j] *= pj ;
}
return prods;
}
public static void main(String[] args) {
System.out.println(
Arrays.toString(products(1, 2, 3, 4, 5))
); // prints "[120, 60, 40, 30, 24]"
}
}
The loop invariants are pi = nums[0] * nums[1] *.. nums[i-1] and pj = nums[N-1] * nums[N-2] *.. nums[j+1]. The i part on the left is the "prefix" logic, and the j part on the right is the "suffix" logic.
Recursive one-liner
Jasmeet gave a (beautiful!) recursive solution; I've turned it into this (hideous!) Java one-liner. It does in-place modification, with O(N) temporary space in the stack.
static int multiply(int[] nums, int p, int n) {
return (n == nums.length) ? 1
: nums[n] * (p = multiply(nums, nums[n] * (nums[n] = p), n + 1))
+ 0*(nums[n] *= p);
}
int[] arr = {1,2,3,4,5};
multiply(arr, 1, 0);
System.out.println(Arrays.toString(arr));
// prints "[120, 60, 40, 30, 24]"
Translating Michael Anderson's solution into Haskell:
otherProducts xs = zipWith (*) below above
  where below = scanl (*) 1 $ init xs
        above = tail $ scanr (*) 1 xs
Sneakily circumventing the "no divisions" rule:
from math import log, exp

total = 0.0
for i in range(len(a)):
    total += log(a[i])
output = [0.0] * len(a)
for i in range(len(a)):
    output[i] = exp(total - log(a[i]))  # note: floating point, so round if exact integers are needed
Here you go, simple and clean solution with O(N) complexity:
int[] a = {1,2,3,4,5};
int[] r = new int[a.length];
int x = 1;
r[0] = 1;
for (int i=1;i<a.length;i++){
r[i]=r[i-1]*a[i-1];
}
for (int i=a.length-1;i>0;i--){
x=x*a[i];
r[i-1]=x*r[i-1];
}
for (int i=0;i<r.length;i++){
System.out.println(r[i]);
}
Travel left -> right and keep saving the running product. Call it Past. -> O(n)
Travel right -> left and keep the running product. Call it Future. -> O(n)
Result[i] = Past[i-1] * Future[i+1] -> O(n)
with the boundary values Past[-1] = 1 and Future[n+1] = 1 (a sketch is below).
Overall O(n)
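A short Python sketch of that Past/Future idea (my own illustration; the array names follow the description above):
def products_except_self(nums):
    n = len(nums)
    past = [1] * n       # past[i]   = product of nums[0..i-1]
    future = [1] * n     # future[i] = product of nums[i+1..n-1]
    for i in range(1, n):
        past[i] = past[i - 1] * nums[i - 1]
    for i in range(n - 2, -1, -1):
        future[i] = future[i + 1] * nums[i + 1]
    return [past[i] * future[i] for i in range(n)]

print(products_except_self([1, 2, 3, 4, 5]))  # [120, 60, 40, 30, 24]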
C++, O(n):
long long prod = accumulate(in.begin(), in.end(), 1LL, multiplies<int>());
transform(in.begin(), in.end(), back_inserter(res),
bind1st(divides<long long>(), prod));
Here is my solution in modern C++. It makes use of std::transform and is pretty easy to remember.
Online code (wandbox).
#include<algorithm>
#include<iostream>
#include<vector>
using namespace std;
vector<int>& multiply_up(vector<int>& v){
v.insert(v.begin(),1);
transform(v.begin()+1, v.end()
,v.begin()
,v.begin()+1
,[](auto const& a, auto const& b) { return b*a; }
);
v.pop_back();
return v;
}
int main() {
vector<int> v = {1,2,3,4,5};
auto vr = v;
reverse(vr.begin(),vr.end());
multiply_up(v);
multiply_up(vr);
reverse(vr.begin(),vr.end());
transform(v.begin(),v.end()
,vr.begin()
,v.begin()
,[](auto const& a, auto const& b) { return b*a; }
);
for(auto& i: v) cout << i << " ";
}
Precalculate the product of the numbers to the left and to the right of each element.
For every element the desired value is the product of it's neigbors's products.
#include <stdio.h>
unsigned array[5] = { 1,2,3,4,5};
int main(void)
{
unsigned idx;
unsigned left[5]
, right[5];
left[0] = 1;
right[4] = 1;
/* calculate products of numbers to the left of [idx] */
for (idx=1; idx < 5; idx++) {
left[idx] = left[idx-1] * array[idx-1];
}
/* calculate products of numbers to the right of [idx] */
for (idx=4; idx-- > 0; ) {
right[idx] = right[idx+1] * array[idx+1];
}
for (idx=0; idx <5 ; idx++) {
printf("[%u] Product(%u*%u) = %u\n"
, idx, left[idx] , right[idx] , left[idx] * right[idx] );
}
return 0;
}
Result:
$ ./a.out
[0] Product(1*120) = 120
[1] Product(1*60) = 60
[2] Product(2*20) = 40
[3] Product(6*5) = 30
[4] Product(24*1) = 24
(UPDATE: now I look closer, this uses the same method as Michael Anderson, Daniel Migowski and polygenelubricants above)
Tricky:
Use the following:
public int[] calc(int[] params) {
    int n = params.length;
    int[] left = new int[n];   // left[i]  = product of params[0..i-1]
    int[] right = new int[n];  // right[i] = product of params[i+1..n-1]
    int fac1 = 1;
    int fac2 = 1;
    for( int i=0; i<n; i++ ) {
        left[i] = fac1;
        right[n-1-i] = fac2;
        fac1 = fac1 * params[i];
        fac2 = fac2 * params[n-1-i];
    }
    int[] results = new int[n];
    for( int i=0; i<n; i++ ) {
        results[i] = left[i] * right[i];
    }
    return results;
}
Yes, I may still have an off-by-one somewhere, but that's the way to solve it.
This is O(n^2) but f# is soooo beautiful:
List.fold (fun seed i -> List.mapi (fun j x -> if i=j+1 then x else x*i) seed)
[1;1;1;1;1]
[1..5]
There also is a O(N^(3/2)) non-optimal solution. It is quite interesting, though.
First, preprocess the partial products of blocks of size N^0.5 (this takes O(N) time). Then the answer for each number can be computed in about 2*O(N^0.5) time: you only need to multiply together the block products of the other ((N^0.5) - 1) blocks, and then multiply the result by the ((N^0.5) - 1) other numbers that belong to the current number's own block. Doing this for each number gives O(N^(3/2)) time overall (a sketch follows the example below).
Example:
4 6 7 2 3 1 9 5 8
partial results:
4*6*7 = 168
2*3*1 = 6
9*5*8 = 360
To calculate the result for the element 3, one multiplies the other groups' products, 168*360, and then multiplies by the remaining members of its own group, 2*1.
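A hedged Python sketch of that block decomposition (the block size, names and helpers are my own choices; math.prod and math.isqrt need Python 3.8+):
import math

def other_products_sqrt(a):
    n = len(a)
    s = max(1, math.isqrt(n))                       # block size, roughly sqrt(n)
    blocks = [a[i:i + s] for i in range(0, n, s)]
    block_prods = [math.prod(b) for b in blocks]    # O(n) preprocessing
    res = []
    for bi, block in enumerate(blocks):
        outside = 1
        for bj, p in enumerate(block_prods):        # product of all the other blocks
            if bj != bi:
                outside *= p
        for i in range(len(block)):
            inside = 1
            for j in range(len(block)):             # product of the other members of this block
                if j != i:
                    inside *= block[j]
            res.append(outside * inside)
    return res

print(other_products_sqrt([4, 6, 7, 2, 3, 1, 9, 5, 8]))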
public static void main(String[] args) {
int[] arr = { 1, 2, 3, 4, 5 };
int[] result = { 1, 1, 1, 1, 1 };
for (int i = 0; i < arr.length; i++) {
for (int j = 0; j < i; j++) {
result[i] *= arr[j];
}
for (int k = arr.length - 1; k > i; k--) {
result[i] *= arr[k];
}
}
for (int i : result) {
System.out.println(i);
}
}
This is the solution I came up with, and I find it very clear. What do you think?
Based on Billz answer--sorry I can't comment, but here is a scala version that correctly handles duplicate items in the list, and is probably O(n):
val list1 = List(1, 7, 3, 3, 4, 4)
val view = list1.view.zipWithIndex map { x => list1.view.patch(x._2, Nil, 1).reduceLeft(_*_)}
view.force
returns:
List(1008, 144, 336, 336, 252, 252)
Adding my JavaScript solution here, as I didn't find anyone suggesting this.
What is to divide, except to count the number of times you can extract a number from another number? I calculate the product of the whole array, then iterate over each element, repeatedly subtracting it from the total until what remains is less than the element:
//No division operation allowed
// keep substracting divisor from dividend, until dividend is zero or less than divisor
function calculateProducsExceptCurrent_NoDivision(input){
var res = [];
var totalProduct = 1;
//calculate the total product
for(var i = 0; i < input.length; i++){
totalProduct = totalProduct * input[i];
}
//populate the result array by "dividing" each value
for(var i = 0; i < input.length; i++){
var timesSubstracted = 0;
var divisor = input[i];
var dividend = totalProduct;
while(divisor <= dividend){
dividend = dividend - divisor;
timesSubstracted++;
}
res.push(timesSubstracted);
}
return res;
}
Just 2 passes up and down. Job done in O(N)
private static int[] multiply(int[] numbers) {
int[] multiplied = new int[numbers.length];
int total = 1;
multiplied[0] = 1;
for (int i = 1; i < numbers.length; i++) {
multiplied[i] = numbers[i - 1] * multiplied[i - 1];
}
for (int j = numbers.length - 2; j >= 0; j--) {
total *= numbers[j + 1];
multiplied[j] = total * multiplied[j];
}
return multiplied;
}
def productify(arr, prod, i):
    if i < len(arr):
        prod.append(arr[i - 1] * prod[i - 1]) if i > 0 else prod.append(1)
        retval = productify(arr, prod, i + 1)
        prod[i] *= retval
        return retval * arr[i]
    return 1

if __name__ == "__main__":
    arr = [1, 2, 3, 4, 5]
    prod = []
    productify(arr, prod, 0)
    print(prod)
Well, this solution can be considered that of C/C++.
Let's say we have an array "a" containing n elements
like a[n]; then the pseudocode would be as below.
for(j=0;j<n;j++)
{
    prod[j]=1;
    for (i=0;i<n;i++)
    {
        if(i==j)
            continue;
        else
            prod[j]=prod[j]*a[i];
    }
}
One more solution, using division, with two traversals:
Multiply all the elements together, and then divide that product by each element in turn.
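A tiny Python sketch of that division approach (my own illustration; note that the original question disallows division, and this breaks if the array contains a zero):
def products_with_division(nums):
    total = 1
    for x in nums:
        total *= x                        # first pass: product of everything
    return [total // x for x in nums]     # second pass: divide out each element

print(products_with_division([1, 2, 3, 4, 5]))  # [120, 60, 40, 30, 24]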
{-
Recursive solution using sqrt(n) subsets. Runs in O(n).
Recursively computes the solution on sqrt(n) subsets of size sqrt(n).
Then recurses on the product sum of each subset.
Then for each element in each subset, it computes the product with
the product sum of all other products.
Then flattens all subsets.
Recurrence on the run time is T(n) = sqrt(n)*T(sqrt(n)) + T(sqrt(n)) + n
Suppose that T(n) ≤ cn in O(n).
T(n) = sqrt(n)*T(sqrt(n)) + T(sqrt(n)) + n
     ≤ sqrt(n)*c*sqrt(n) + c*sqrt(n) + n
     ≤ c*n + c*sqrt(n) + n
     ≤ (2c+1)*n
     ∈ O(n)
Note that ceiling(sqrt(n)) can be computed using a binary search
and O(logn) iterations, if the sqrt instruction is not permitted.
-}
import Data.List (foldl')

otherProducts [] = []
otherProducts [x] = [1]
otherProducts [x,y] = [y,x]
otherProducts a = foldl' (++) [] $ zipWith (\s p -> map (*p) s) solvedSubsets subsetOtherProducts
    where
      n = length a
      -- Subset size. Require that 1 < s < n.
      s = ceiling $ sqrt $ fromIntegral n
      solvedSubsets = map otherProducts subsets
      subsetOtherProducts = otherProducts $ map product subsets
      subsets = reverse $ loop a []
          where loop [] acc = acc
                loop a acc = loop (drop s a) ((take s a):acc)
Here is my code:
int multiply(int a[],int n,int nextproduct,int i)
{
int prevproduct=1;
if(i>=n)
return prevproduct;
prevproduct=multiply(a,n,nextproduct*a[i],i+1);
printf(" i=%d > %d\n",i,prevproduct*nextproduct);
return prevproduct*a[i];
}
int main()
{
int a[]={2,4,1,3,5};
multiply(a,5,1,0);
return 0;
}
Here's a slightly functional example, using C#:
Func<long>[] backwards = new Func<long>[input.Length];
Func<long>[] forwards = new Func<long>[input.Length];
for (int i = 0; i < input.Length; ++i)
{
var localIndex = i;
backwards[i] = () => (localIndex > 0 ? backwards[localIndex - 1]() : 1) * input[localIndex];
forwards[i] = () => (localIndex < input.Length - 1 ? forwards[localIndex + 1]() : 1) * input[localIndex];
}
var output = new long[input.Length];
for (int i = 0; i < input.Length; ++i)
{
if (0 == i)
{
output[i] = forwards[i + 1]();
}
else if (input.Length - 1 == i)
{
output[i] = backwards[i - 1]();
}
else
{
output[i] = forwards[i + 1]() * backwards[i - 1]();
}
}
I'm not entirely certain that this is O(n), due to the semi-recursion of the created Funcs, but my tests seem to indicate that it's O(n) in time.
To be complete here is the code in Scala:
val list1 = List(1, 2, 3, 4, 5)
for (elem <- list1) println(list1.filter(_ != elem) reduceLeft(_*_))
This will print out the following:
120
60
40
30
24
The program will filter out the current elem (_ != elem); and multiply the new list with reduceLeft method. I think this will be O(n) if you use scala view or Iterator for lazy eval.
// This is the recursive solution in Java
// Called as following from main product(a,1,0);
public static double product(double[] a, double fwdprod, int index){
double revprod = 1;
if (index < a.length){
revprod = product(a, fwdprod*a[index], index+1);
double cur = a[index];
a[index] = fwdprod * revprod;
revprod *= cur;
}
return revprod;
}
A neat solution with O(n) runtime:
For each element calculate the product of all the elements that occur before that and it store in an array "pre".
For each element calculate the product of all the elements that occur after that element and store it in an array "post"
Create a final array "result", for an element i,
result[i] = pre[i-1]*post[i+1];
Here is the Python version
# This solution uses O(n) time and O(n) space
def productExceptSelf(self, nums):
    """
    :type nums: List[int]
    :rtype: List[int]
    """
    N = len(nums)
    if N == 0: return
    # Initialize lists of 1, size N
    l_prods, r_prods = [1]*N, [1]*N
    for i in range(1, N):
        l_prods[i] = l_prods[i-1] * nums[i-1]
    for i in reversed(range(N-1)):
        r_prods[i] = r_prods[i+1] * nums[i+1]
    result = [x*y for x,y in zip(l_prods,r_prods)]
    return result

# This solution uses O(n) time and O(1) extra space (not counting the output)
def productExceptSelfSpaceOptimized(self, nums):
    """
    :type nums: List[int]
    :rtype: List[int]
    """
    N = len(nums)
    if N == 0: return
    # Initialize list of 1, size N
    result = [1]*N
    for i in range(1, N):
        result[i] = result[i-1] * nums[i-1]
    r_prod = 1
    for i in reversed(range(N)):
        result[i] *= r_prod
        r_prod *= nums[i]
    return result
I'm used to C#:
public int[] ProductExceptSelf(int[] nums)
{
int[] returnArray = new int[nums.Length];
List<int> auxList = new List<int>();
int multTotal = 0;
// If no zeros are contained in the array you only have to calculate it once
if(!nums.Contains(0))
{
multTotal = nums.ToList().Aggregate((a, b) => a * b);
for (int i = 0; i < nums.Length; i++)
{
returnArray[i] = multTotal / nums[i];
}
}
else
{
for (int i = 0; i < nums.Length; i++)
{
auxList = nums.ToList();
auxList.RemoveAt(i);
if (!auxList.Contains(0))
{
returnArray[i] = auxList.Aggregate((a, b) => a * b);
}
else
{
returnArray[i] = 0;
}
}
}
return returnArray;
}
Here is simple Scala version in Linear O(n) time:
def getProductEff(in:Seq[Int]):Seq[Int] = {
//create a list which has product of every element to the left of this element
val fromLeft = in.foldLeft((1, Seq.empty[Int]))((ac, i) => (i * ac._1, ac._2 :+ ac._1))._2
//create a list which has product of every element to the right of this element, which is the same as the previous step but in reverse
val fromRight = in.reverse.foldLeft((1,Seq.empty[Int]))((ac,i) => (i * ac._1,ac._2 :+ ac._1))._2.reverse
//merge the two list by product at index
in.indices.map(i => fromLeft(i) * fromRight(i))
}
This works because essentially the answer is an array that has the product of all elements to the left and the product of all elements to the right of each index.
import java.util.Arrays;
public class Pratik
{
public static void main(String[] args)
{
int[] array = {2, 3, 4, 5, 6}; // OUTPUT: 360 240 180 144 120
int[] products = new int[array.length];
arrayProduct(array, products);
System.out.println(Arrays.toString(products));
}
public static void arrayProduct(int array[], int products[])
{
double sum = 0, EPSILON = 1e-9;
for(int i = 0; i < array.length; i++)
sum += Math.log(array[i]);
for(int i = 0; i < array.length; i++)
products[i] = (int) (EPSILON + Math.exp(sum - Math.log(array[i])));
}
}
OUTPUT:
[360, 240, 180, 144, 120]
Time complexity : O(n)
Space complexity: O(1)
