Puzzle.. solving product of values in array X - algorithm

Can you please help me solve this one?
You have an unordered array X of n integers. Find the array M containing n elements where Mi is the product of all integers in X except for Xi. You may not use division. You can use extra memory. (Hint: there are solutions faster than O(n^2).)
The basic O(n^2) solution and the one using division are easy, but I just can't come up with another solution that is faster than O(n^2).

Let left[i] be the product of all elements in X from 1..i, and let right[i] be the product of all elements in X from i..N. You can compute both in O(n) without division in the following way: left[i] = left[i - 1] * X[i] and right[i] = right[i + 1] * X[i].
Now we compute M as M[i] = left[i - 1] * right[i + 1].
Note: left and right are arrays.
Hope it is clear :)
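A minimal Python sketch of the same idea (0-based lists; here left[i] excludes X[i] itself, so M[i] = left[i] * right[i], which matches the left[i-1] * right[i+1] formulation above; names are illustrative):
def products_except_self(xs):
    n = len(xs)
    left = [1] * n    # left[i] = product of xs[0..i-1]
    right = [1] * n   # right[i] = product of xs[i+1..n-1]
    for i in range(1, n):
        left[i] = left[i - 1] * xs[i - 1]
    for i in range(n - 2, -1, -1):
        right[i] = right[i + 1] * xs[i + 1]
    return [left[i] * right[i] for i in range(n)]

print(products_except_self([1, 2, 3, 4, 5]))   # [120, 60, 40, 30, 24]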

Here's a solution in Python. I did the easy way with division to compare against the hard way without. Do I get the job?
L = [2, 1, 3, 5, 4]
prod = 1
for i in L: prod *= i
easy = map(lambda x: prod/x, L)
print easy

hard = [1]*len(L)
hmm = 1
for i in range(len(L) - 1):
    hmm *= L[i]
    hard[i + 1] *= hmm

huh = 1
for i in range(len(L) - 1, 0, -1):
    huh *= L[i]
    hard[i - 1] *= huh

print hard

O(n) - http://nbl.cewit.stonybrook.edu:60128/mediawiki/index.php/TADM2E_3.28
two passes -
int main (int argc, char **argv) {
    int array[] = {2, 5, 3, 4};
    int fwdprod[] = {1, 1, 1, 1};
    int backprod[] = {1, 1, 1, 1};
    int mi[] = {1, 1, 1, 1};
    int i, n = 4;
    for (i = 1; i <= n - 1; i++) {
        fwdprod[i] = fwdprod[i-1] * array[i-1];
    }
    for (i = n - 2; i >= 0; i--) {
        backprod[i] = backprod[i+1] * array[i+1];
    }
    for (i = 0; i <= n - 1; i++) {
        mi[i] = fwdprod[i] * backprod[i];
    }
    return 0;
}

Old but very cool. I've been asked this at an interview myself, and I've seen several solutions since, but this is my favorite, taken from
http://www.polygenelubricants.com/2010/04/on-all-other-products-no-division.html
static int[] products(int... nums) {
    final int N = nums.length;
    int[] prods = new int[N];
    java.util.Arrays.fill(prods, 1);
    for (int // pi----> * <----pj
            i = 0, pi = 1, j = N-1, pj = 1;
            (i < N) & (j >= 0);
            pi *= nums[i++], pj *= nums[j--])
    {
        prods[i] *= pi; prods[j] *= pj;
        System.out.println("pi up to this point is " + pi + "\n");
        System.out.println("pj up to this point is " + pj + "\n");
        System.out.println("prods[i]: " + prods[i] + " prods[j]: " + prods[j] + "\n");
    }
    return prods;
}
Here's what's going on, if you write out prods[i] for all the iterations, you'll see the following being calculated
prods[0], prods[n-1]
prods[1], prods[n-2]
prods[2], prods[n-3]
prods[3], prods[n-4]
.
.
.
prods[n-3], prods[2]
prods[n-2], prods[1]
prods[n-1], prods[0]
so each prods[i] gets hit twice, once going from head to tail and once going from tail to head, and both of these iterations accumulate the product as they traverse towards the center, so it's easy to see we'll get exactly what we need. We just need to be careful to see that each pass misses the element itself, and that's where it gets tricky. The key lies in the
pi *= nums[i++], pj *= nums[j--]
in the for loop's update clause itself and not in the body, so it does not happen until the end of the iteration. So for prods[0], it starts at 1*1 and only afterwards does pi get set to nums[0], so prods[0] misses the first element; for prods[1], it is multiplied by pi = nums[0], and only afterwards does pi pick up nums[1]; and so on.
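A rough Python transcription of that interleaved prefix/suffix accumulation (illustrative only, not part of the original answer):
def products(nums):
    n = len(nums)
    prods = [1] * n
    pi, pj = 1, 1              # running prefix and suffix products
    for i in range(n):
        j = n - 1 - i
        prods[i] *= pi         # pi is the product of nums[0..i-1]
        prods[j] *= pj         # pj is the product of nums[j+1..n-1]
        pi *= nums[i]
        pj *= nums[j]
    return prods

print(products([1, 2, 3, 4, 5]))   # [120, 60, 40, 30, 24]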

O(nlogn) approach:
int multiply(int arr[], int start, int end) {
    int mid;
    if (start > end) {
        return 1;
    }
    if (start == end) {
        return arr[start];
    }
    mid = (start + end) / 2;
    return (multiply(arr, start, mid) * multiply(arr, mid + 1, end));
}

int compute_mi(int arr[], int i, int n) {
    if ((i >= n) || (i < 0)) {
        return 0;
    }
    return (multiply(arr, 0, i - 1) * multiply(arr, i + 1, n - 1));
}

Here is my solution in Python: the easy way, but maybe with high computational cost?
def product_list(x):
    ans = [p for p in range(len(x))]
    for i in range(0, len(x)):
        a = 1
        for j in range(0, len(x)):
            if i != j:
                a = a * x[j]
        ans[i] = a
    return ans

Related

Print all combination of a set after pairing consecutive numbers

The question is: given a set of numbers, we have to write a recursive program which prints all possible combinations obtained by pairing consecutive numbers or leaving them single.
Example set: 1,2,3,4,5,6
Output:
1,2,3,4,5,6
12,3,4,5,6
1,23,4,5,6
1,2,34,5,6
1,2,3,45,6
1,2,3,4,56
12,34,5,6
12,3,45,6
12,3,4,56
1,23,45,6
1,23,4,56
1,2,34,56
12,34,56
I use C++ to code.
Suppose the given set is a (a[0], a[1], ..., a[n - 1]), the length of a is n, and the current answer is saved in b.
void dfs(int pos, int depth)
{
    if (pos >= n)
    {
        // all elements consumed: print the current combination
        for (int i = 0; i < depth; ++i)
            printf("%d%c", b[i], i == depth - 1 ? '\n' : ',');
    }
    else
    {
        // option 1: keep a[pos] on its own
        b[depth] = a[pos];
        dfs(pos + 1, depth + 1);
        // option 2: pair a[pos] with a[pos + 1] by concatenating their digits
        if (pos + 1 < n)
        {
            int c = 1, x = a[pos + 1];
            while (x) c *= 10, x /= 10;
            b[depth] = a[pos] * c + a[pos + 1];
            dfs(pos + 2, depth + 1);
        }
    }
}

Finding minimal absolute sum of a subarray

There's an array A containing (positive and negative) integers. Find a (contiguous) subarray whose elements' absolute sum is minimal, e.g.:
A = [2, -4, 6, -3, 9]
|(−4) + 6 + (−3)| = 1 <- minimal absolute sum
I've started by implementing a brute-force algorithm which was O(N^2) or O(N^3), though it produced correct results. But the task specifies:
complexity:
- expected worst-case time complexity is O(N*log(N))
- expected worst-case space complexity is O(N)
After some searching I thought that maybe Kadane's algorithm can be modified to fit this problem but I failed to do it.
My question is - is Kadane's algorithm the right way to go? If not, could you point me in the right direction (or name an algorithm that could help me here)? I don't want a ready-made code, I just need help in finding the right algorithm.
If you compute the partial sums, such as
2, 2 + (-4), 2 + (-4) + 6, 2 + (-4) + 6 + (-3), ...
Then the sum of any contiguous subarray is the difference of two of the partial sums. So to find the contiguous subarray whose absolute value is minimal, I suggest that you sort the partial sums and then find the two values which are closest together, and use the positions of these two partial sums in the original sequence to find the start and end of the sub-array with smallest absolute value.
The expensive bit here is the sort, so I think this runs in time O(n * log(n)).
This is C++ implementation of Saksow's algorithm.
int solution(vector<int> &A) {
    vector<int> P;
    int min = 20000;
    int dif = 0;
    P.resize(A.size() + 1);
    P[0] = 0;
    for (int i = 1; i < P.size(); i++)
    {
        P[i] = P[i-1] + A[i-1];
    }
    sort(P.begin(), P.end());
    for (int i = 1; i < P.size(); i++)
    {
        dif = P[i] - P[i-1];
        if (dif < min)
        {
            min = dif;
        }
    }
    return min;
}
I was doing this test on Codility and I found mcdowella's answer quite helpful, but not quite enough, I have to say: so here is a 2015 answer, guys!
We need to build the prefix sums of array A (called P here) like: P[0] = 0, P[1] = P[0] + A[0], P[2] = P[1] + A[1], ..., P[N] = P[N-1] + A[N-1]
The "min abs sum" of A will be the minimum absolute difference between 2 elements in P. So we just have to .sort() P and loop through it, taking 2 successive elements each time. This way we have O(N + N*log(N) + N), which is O(N*log(N)).
That's it!
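A minimal Python sketch of exactly those steps (assumes N >= 1, as Codility guarantees; names are illustrative):
def min_abs_sum_of_slice(A):
    P = [0]                    # prefix sums: any slice sum is P[j] - P[i]
    for a in A:
        P.append(P[-1] + a)
    P.sort()                   # the closest pair of prefix sums is now adjacent
    return min(P[i] - P[i - 1] for i in range(1, len(P)))

print(min_abs_sum_of_slice([2, -4, 6, -3, 9]))   # 1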
The answer is yes, Kadane's algorithm is definitely the way to go for solving your problem.
http://en.wikipedia.org/wiki/Maximum_subarray_problem
Source - I've closely worked with a PhD student whose entire PhD thesis was devoted to the maximum subarray problem.
def min_abs_subarray(a):
    s = [a[0]]
    for e in a[1:]:
        s.append(s[-1] + e)
    s = sorted(s)
    min = abs(s[0])
    t = s[0]
    for x in s[1:]:
        cur = abs(x)
        min = cur if cur < min else min
        cur = abs(t - x)
        min = cur if cur < min else min
        t = x
    return min
You can run Kadane's algorithm twice (or do it in one go) to find the minimum and maximum sum, where finding the minimum works in the same way as the maximum with reversed signs, and then calculate a new maximum by comparing their absolute values.
Source - someone's comment (I don't remember whose) on this site.
Here is an Iterative solution in python. It's 100% correct.
def solution(A):
    memo = []
    if not len(A):
        return 0
    for ind, val in enumerate(A):
        if ind == 0:
            memo.append([val, -1*val])
        else:
            newElem = []
            for i in memo[ind - 1]:
                newElem.append(i + val)
                newElem.append(i - val)
            memo.append(newElem)
    return min(abs(n) for n in memo.pop())
Short, sweet and works like a charm. JavaScript / Node.js solution:
function solution(A, i = 0, sum = 0) {
    // Edge case: if the array is empty
    if (A.length == 0) return 0;
    // Base case: for the last array element, add it to and subtract it from the sum
    // and return the minimum of the absolute values
    if (A.length - 1 === i) {
        return Math.min(Math.abs(sum + A[i]), Math.abs(sum - A[i]));
    }
    // Absolute value obtained by adding the element to the sum,
    // recursing to the next element
    let plus = Math.abs(solution(A, i + 1, sum + A[i]));
    // Absolute value obtained by subtracting the element from the sum
    let minus = Math.abs(solution(A, i + 1, sum - A[i]));
    return Math.min(plus, minus);
}
console.log(solution([-100, 3, 2, 4]))
Here is a C solution based on Kadane's algorithm.
Hopefully it's helpful.
#include <stdio.h>

int min(int a, int b)
{
    return (a >= b) ? b : a;
}

int min_slice(int A[], int N) {
    if (N == 0 || N > 1000000)
        return 0;
    int minTillHere = A[0];
    int minSoFar = A[0];
    int i;
    for (i = 1; i < N; i++) {
        minTillHere = min(A[i], minTillHere + A[i]);
        minSoFar = min(minSoFar, minTillHere);
    }
    return minSoFar;
}

int main() {
    int A[] = {3, 2, -6, 4, 0}, N = 5;
    //int A[] = {3, 2, 6, 4, 0}, N = 5;
    //int A[] = {-4, -8, -3, -2, -4, -10}, N = 6;
    printf("Minimum slice = %d \n", min_slice(A, N));
    return 0;
}
public static int solution(int[] A) {
    int minTillHere = A[0];
    int absMinTillHere = A[0];
    int minSoFar = A[0];
    int i;
    for (i = 1; i < A.length; i++) {
        absMinTillHere = Math.min(Math.abs(A[i]), Math.abs(minTillHere + A[i]));
        minTillHere = Math.min(A[i], minTillHere + A[i]);
        minSoFar = Math.min(Math.abs(minSoFar), absMinTillHere);
    }
    return minSoFar;
}
int main()
{
    int n; cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; i++) cin >> a[i];
    long long local_min = 0, global_min = LLONG_MAX;
    for (int i = 0; i < n; i++)
    {
        if (abs(local_min + a[i]) > abs(a[i]))
        {
            local_min = a[i];
        }
        else local_min += a[i];
        global_min = min(global_min, abs(local_min));
    }
    cout << global_min << endl;
}

Codility Peaks Complexity

I've just done the following Codility Peaks problem. The problem is as follows:
A non-empty zero-indexed array A consisting of N integers is given.
A peak is an array element which is larger than its neighbors. More precisely, it is an index P such that 0 < P < N − 1, A[P − 1] < A[P] and A[P] > A[P + 1].
For example, the following array A:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
has exactly three peaks: 3, 5, 10.
We want to divide this array into blocks containing the same number of elements. More precisely, we want to choose a number K that will yield the following blocks:
A[0], A[1], ..., A[K − 1],
A[K], A[K + 1], ..., A[2K − 1],
...
A[N − K], A[N − K + 1], ..., A[N − 1].
What's more, every block should contain at least one peak. Notice that extreme elements of the blocks (for example A[K − 1] or A[K]) can also be peaks, but only if they have both neighbors (including ones in adjacent blocks).
The goal is to find the maximum number of blocks into which the array A can be divided.
Array A can be divided into blocks as follows:
one block (1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2). This block contains three peaks.
two blocks (1, 2, 3, 4, 3, 4) and (1, 2, 3, 4, 6, 2). Every block has a peak.
three blocks (1, 2, 3, 4), (3, 4, 1, 2), (3, 4, 6, 2). Every block has a peak.
Notice in particular that the first block (1, 2, 3, 4) has a peak at A[3], because A[2] < A[3] > A[4], even though A[4] is in the adjacent block.
However, array A cannot be divided into four blocks, (1, 2, 3), (4, 3, 4), (1, 2, 3) and (4, 6, 2), because the (1, 2, 3) blocks do not contain a peak. Notice in particular that the (4, 3, 4) block contains two peaks: A[3] and A[5].
The maximum number of blocks that array A can be divided into is three.
Write a function:
class Solution { public int solution(int[] A); }
that, given a non-empty zero-indexed array A consisting of N integers, returns the maximum number of blocks into which A can be divided.
If A cannot be divided into some number of blocks, the function should return 0.
For example, given:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [0..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N*log(log(N)))
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
My Question
So I solved this with what appears to me to be the brute-force solution: go through every group size from 1..N, and check whether every group has at least one peak. For the first 15 minutes I was trying to figure out some more optimal way, since the required complexity is O(N*log(log(N))).
This is my "brute-force" code that passes all the tests, including the large ones, for a score of 100/100:
public int solution(int[] A) {
    int N = A.length;
    ArrayList<Integer> peaks = new ArrayList<Integer>();
    for (int i = 1; i < N-1; i++) {
        if (A[i] > A[i-1] && A[i] > A[i+1]) peaks.add(i);
    }
    for (int size = 1; size <= N; size++) {
        if (N % size != 0) continue;
        int find = 0;
        int groups = N/size;
        boolean ok = true;
        for (int peakIdx : peaks) {
            if (peakIdx/size > find) {
                ok = false;
                break;
            }
            if (peakIdx/size == find) find++;
        }
        if (find != groups) ok = false;
        if (ok) return groups;
    }
    return 0;
}
My question is how do I deduce that this is in fact O(N*log(log(N))), as it's not at all obvious to me, and I was surprised I pass the test cases. I'm looking for even the simplest complexity proof sketch that would convince me of this runtime. I would assume that a log(log(N)) factor means some kind of reduction of a problem by a square root on each iteration, but I have no idea how this applies to my problem. Thanks a lot for any help
You're completely right: to get the log log performance the problem needs to be reduced.
An n*log(log(n)) solution in Python is below. Codility no longer tests 'performance' on this problem (!) but the Python solution scores 100% for accuracy.
As you've already surmised:
The outer loop will be O(n) since it is testing whether each size of block is a clean divisor.
The inner loop must be O(log(log(n))) on average to give O(n*log(log(n))) overall.
We can get good inner-loop performance because we only need to run it for the d(n) block sizes that actually divide n, where d(n) is the number of divisors of n. We can store a prefix sum of peaks-so-far, which uses the O(n) space allowed by the problem specification. Checking whether a peak has occurred in each 'group' is then an O(1) lookup operation using the group start and end indices.
Following this logic, when the candidate block size is 3 the loop needs to perform n / 3 peak checks. The complexity becomes a sum: n/a + n/b + ... + n/n, where the denominators (a, b, ...) are the divisors of n.
Short story: the total cost of those checks over all divisors of n is O(n*log(log(n))).
Longer version:
If you've been doing the Codility Lessons you'll remember from the Lesson 8: Prime and composite numbers that the sum of harmonic number operations will give O(log(n)) complexity. We've got a reduced set, because we're only looking at factor denominators. Lesson 9: Sieve of Eratosthenes shows how the sum of reciprocals of primes is O(log(log(n))) and claims that 'the proof is non-trivial'. In this case Wikipedia tells us that the sum of divisors sigma(n) has an upper bound (see Robin's inequality, about half way down the page).
Does that completely answer your question? Suggestions on how to improve my python code are also very welcome!
def solution(data):
    length = len(data)
    # array ends can't be peaks, len < 3 must return 0
    if length < 3:
        return 0
    peaks = [0] * length
    # compute a list of 'peaks to the left' in O(n) time
    for index in range(2, length):
        peaks[index] = peaks[index - 1]
        # check if there was a peak to the left, add it to the count
        if data[index - 1] > data[index - 2] and data[index - 1] > data[index]:
            peaks[index] += 1
    # candidate is the block size we're going to test
    for candidate in range(3, length + 1):
        # skip if not a factor
        if length % candidate != 0:
            continue
        # test at each point n / block
        valid = True
        index = candidate
        while index != length:
            # if no peak in this block, break
            if peaks[index] == peaks[index - candidate]:
                valid = False
                break
            index += candidate
        # one additional check since peaks[length] is outside of array
        if index == length and peaks[index - 1] == peaks[index - candidate]:
            valid = False
        if valid:
            return length // candidate
    return 0
Credits:
Major kudos to #tmyklebu for his SO answer which helped me a lot.
I don't think the time complexity of your algorithm is O(N*log(log(N))).
However, it is certainly much less than O(N^2). This is because your inner loop is entered only k times, where k is the number of factors of N. The number of factors of an integer can be seen at this link: http://www.cut-the-knot.org/blue/NumberOfFactors.shtml
I may be inaccurate, but from the link it seems that
k ~ logN * logN * logN ...
Also, the inner loop has a complexity of O(N), since the number of peaks can be N/2 in the worst case.
Hence, in my opinion, the complexity of your algorithm is O(N*logN) at best, but it must be sufficient to clear all test cases.
#radicality
There's at least one point where you can optimize the number of passes in the second loop to O(sqrt(N)) -- collect divisors of N and iterate through them only.
That will make your algo a little less "brute force".
Problem definition allows for O(N) space complexity. You can store divisors without violating this condition.
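For reference, a small Python sketch of collecting the divisors in O(sqrt(N)) (illustrative, not from the comment above):
def divisors(n):
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    return small + large[::-1]   # all divisors of n in increasing order

print(divisors(12))   # [1, 2, 3, 4, 6, 12]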
This is my solution based on prefix sums. Hope it helps:
class Solution {
    public int solution(int[] A) {
        int n = A.length;
        int result = 1;
        if (n < 3)
            return 0;
        int[] prefixSums = new int[n];
        for (int i = 1; i < n-1; i++)
            if (A[i] > A[i-1] && A[i] > A[i+1])
                prefixSums[i] = prefixSums[i-1] + 1;
            else
                prefixSums[i] = prefixSums[i-1];
        prefixSums[n-1] = prefixSums[n-2];
        if (prefixSums[n-1] <= 1)
            return prefixSums[n-1];
        for (int i = 2; i <= prefixSums[n-2]; i++) {
            if (n % i != 0)
                continue;
            int prev = 0;
            boolean containsPeak = true;
            for (int j = n/i - 1; j < n; j += n/i) {
                if (prefixSums[j] == prev) {
                    containsPeak = false;
                    break;
                }
                prev = prefixSums[j];
            }
            if (containsPeak)
                result = i;
        }
        return result;
    }
}
def solution(A):
    length = len(A)
    if length <= 2:
        return 0
    peek_indexes = []
    for index in range(1, length-1):
        if A[index] > A[index - 1] and A[index] > A[index + 1]:
            peek_indexes.append(index)
    for block in range(3, int((length/2)+1)):
        if length % block == 0:
            index_to_check = 0
            temp_blocks = 0
            for peek_index in peek_indexes:
                if peek_index >= index_to_check and peek_index < index_to_check + block:
                    temp_blocks += 1
                    index_to_check = index_to_check + block
            if length/block == temp_blocks:
                return temp_blocks
    if len(peek_indexes) > 0:
        return 1
    else:
        return 0

print(solution([1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2, 1, 2, 5, 2]))
I just found the factors first, then iterated over A and tested every number of blocks to see which gives the greatest block division.
This is the code that got 100 (in Java):
https://app.codility.com/demo/results/training9593YB-39H/
A javascript solution with complexity of O(N * log(log(N))).
function solution(A) {
    let N = A.length;
    if (N < 3) return 0;
    let peaks = 0;
    let peaksTillNow = [ 0 ];
    let dividers = [];
    for (let i = 1; i < N - 1; i++) {
        if (A[i - 1] < A[i] && A[i] > A[i + 1]) peaks++;
        peaksTillNow.push(peaks);
        if (N % i === 0) dividers.push(i);
    }
    peaksTillNow.push(peaks);
    if (peaks === 0) return 0;
    let blocks;
    let result = 1;
    for (blocks of dividers) {
        let K = N / blocks;
        let prevPeaks = 0;
        let OK = true;
        for (let i = 1; i <= blocks; i++) {
            if (peaksTillNow[i * K - 1] > prevPeaks) {
                prevPeaks = peaksTillNow[i * K - 1];
            } else {
                OK = false;
                break;
            }
        }
        if (OK) result = blocks;
    }
    return result;
}
Solution with C# code
public int GetPeaks(int[] InputArray)
{
    List<int> lstPeaks = new List<int>();
    lstPeaks.Add(0);
    for (int Index = 1; Index < (InputArray.Length - 1); Index++)
    {
        if (InputArray[Index - 1] < InputArray[Index] && InputArray[Index] > InputArray[Index + 1])
        {
            lstPeaks.Add(1);
        }
        else
        {
            lstPeaks.Add(0);
        }
    }
    lstPeaks.Add(0);
    int totalEqBlocksWithPeaks = 0;
    for (int factor = 1; factor <= InputArray.Length; factor++)
    {
        if (InputArray.Length % factor == 0)
        {
            int BlockLength = InputArray.Length / factor;
            int BlockCount = factor;
            bool isAllBlocksHasPeak = true;
            for (int CountIndex = 1; CountIndex <= BlockCount; CountIndex++)
            {
                int BlockStartIndex = CountIndex == 1 ? 0 : (CountIndex - 1) * BlockLength;
                int BlockEndIndex = (CountIndex * BlockLength) - 1;
                if (!(lstPeaks.GetRange(BlockStartIndex, BlockLength).Sum() > 0))
                {
                    isAllBlocksHasPeak = false;
                }
            }
            if (isAllBlocksHasPeak)
                totalEqBlocksWithPeaks++;
        }
    }
    return totalEqBlocksWithPeaks;
}
There is actually an O(n) runtime complexity solution for this task, so this is a humble attempt to share it.
The trick to go from the proposed O(n * loglogn) solutions to O(n) is to calculate the maximum gap between any two peaks (or between a leading or trailing peak and the corresponding end of the array).
This can be done while building the peak hash in the first O(n) loop.
Then, if the gap between two consecutive peaks is 'g', the minimum group size must be 'g/2' (it is simply 'g' between the start and the first peak, or between the last peak and the end). Also, there will be at least one peak in any group once the group size reaches 'g', so the range of group sizes to check is: g/2, 1 + g/2, 2 + g/2, ..., g.
Therefore, the runtime is the sum over d = g/2, 1 + g/2, ..., g of n/d, where 'd' is the candidate group size:
sum over d = g/2, 1 + g/2, ..., g of n/d = n/(g/2) + n/(1 + g/2) + ... + n/g
For example, with g/2 = 5 (so g = 10), this is n/5 + n/6 + n/7 + n/8 + n/9 + n/10 = n*(1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10).
If you replace each term with the largest one, you get sum <= 6 * n/5 = n + 2n/10.
Now, generalising this, every term is replaced with n/(g/2), and the number of terms from g/2 to g is g/2 + 1.
So the whole sum is at most: n/(g/2) * (g/2 + 1) = n + 2n/g < 3n.
Therefore, the bound on the total number of operations is O(n).
The code, implementing this in C++, is here:
int solution(vector<int> &A)
{
    int sizeA = A.size();
    vector<bool> hash(sizeA, false);
    int min_group_size = 2;
    int pi = 0;
    for (int i = 1, pi = 0; i < sizeA - 1; ++i) {
        const int e = A[i];
        if (e > A[i - 1] && e > A[i + 1]) {
            hash[i] = true;
            int diff = i - pi;
            if (pi) diff /= 2;
            if (diff > min_group_size) min_group_size = diff;
            pi = i;
        }
    }
    min_group_size = min(min_group_size, sizeA - pi);
    vector<int> hash_next(sizeA, 0);
    for (int i = sizeA - 2; i >= 0; --i) {
        hash_next[i] = hash[i] ? i : hash_next[i + 1];
    }
    for (int group_size = min_group_size; group_size <= sizeA; ++group_size) {
        if (sizeA % group_size != 0) continue;
        int number_of_groups = sizeA / group_size;
        int group_index = 0;
        for (int peak_index = 0; peak_index < sizeA; peak_index = group_index * group_size) {
            peak_index = hash_next[peak_index];
            if (!peak_index) break;
            int lower_range = group_index * group_size;
            int upper_range = lower_range + group_size - 1;
            if (peak_index > upper_range) {
                break;
            }
            ++group_index;
        }
        if (number_of_groups == group_index) return number_of_groups;
    }
    return 0;
}
var prev, curr, total = 0;
for (var i = 1; i < A.length; i++) {
    if (curr == 0) {
        curr = A[i];
    } else {
        if (A[i] != curr) {
            if (prev != 0) {
                if ((prev < curr && A[i] < curr) || (prev > curr && A[i] > curr)) {
                    total += 1;
                }
            } else {
                prev = curr;
                total += 1;
            }
            prev = curr;
            curr = A[i];
        }
    }
}
if (prev != curr) {
    total += 1;
}
return total;
I agree with GnomeDePlume's answer... the piece that looks for the divisors in the proposed solution is O(N), and that could be decreased to O(sqrt(N)) by using the algorithm provided in the lesson text.
So, just adding to that, here is my solution using Java that solves the problem with the required complexity.
Be aware, it has way more code than yours - some cleanup (debug sysouts and comments) would always be possible :-)
public int solution(int[] A) {
    int result = 0;
    int N = A.length;
    // mark accumulated peaks
    int[] peaks = new int[N];
    int count = 0;
    for (int i = 1; i < N - 1; i++) {
        if (A[i-1] < A[i] && A[i+1] < A[i])
            count++;
        peaks[i] = count;
    }
    // set peaks count on last elem as it will be needed during div checks
    peaks[N-1] = count;
    // check count
    if (count > 0) {
        // if only one peak, will need the whole array
        if (count == 1)
            result = 1;
        else {
            // at this point (peaks > 1) we know at least the single group will satisfy the criteria
            // so set result to 1, then check for bigger numbers of groups
            result = 1;
            // for each divisor of N, check if that number of groups work
            Integer[] divisors = getDivisors(N);
            // result will be at least 1 at this point
            boolean candidate;
            int divisor, startIdx, endIdx;
            // check from top value to bottom - stop when one is found
            // for div 1 we know num groups is 1, and we already know that is the minimum. No need to check.
            // for div = N we know it's impossible, as all elements would have to be peaks (impossible by definition)
            for (int i = divisors.length - 2; i > 0; i--) {
                candidate = true;
                divisor = divisors[i];
                for (int j = 0; j < N; j += N/divisor) {
                    startIdx = (j == 0 ? j : j - 1);
                    endIdx = j + N/divisor - 1;
                    if (peaks[startIdx] == peaks[endIdx]) {
                        candidate = false;
                        break;
                    }
                }
                // if all groups had at least 1 peak, this is the result!
                if (candidate) {
                    result = divisor;
                    break;
                }
            }
        }
    }
    return result;
}

// returns ordered array of all divisors of N
private Integer[] getDivisors(int N) {
    Set<Integer> set = new TreeSet<Integer>();
    double sqrt = Math.sqrt(N);
    int i = 1;
    for (; i < sqrt; i++) {
        if (N % i == 0) {
            set.add(i);
            set.add(N/i);
        }
    }
    if (i * i == N)
        set.add(i);
    return set.toArray(new Integer[]{});
}
Thanks,
Davi

Discover long patterns

Given a sorted list of numbers, I would like to find the longest subsequence where the differences between successive elements are geometrically increasing. So if the list is
1, 2, 3, 4, 7, 15, 27, 30, 31, 81
then the subsequence is 1, 3, 7, 15, 31. Alternatively consider 1, 2, 5, 6, 11, 15, 23, 41, 47 which has subsequence 5, 11, 23, 47 with a = 3 and k = 2.
Can this be solved in O(n^2) time, where n is the length of the list?
I am interested both in the general case where the progression of differences is a*k, a*k^2, a*k^3, etc., where both a and k are integers, and in the special case where a = 1, so the progression of differences is k, k^2, k^3, etc.
Update
I have made an improvement to the algorithm so that it takes an average of O(M + N^2) time with memory needs of O(M + N). It is mainly the same as the protocol described below, but to calculate the possible factors A, K for each difference D, I preload a table. This table takes less than a second to construct for M = 10^7.
I have made a C implementation that takes less than 10 minutes to solve for N = 10^5 different random integer elements.
Here is the source code in C. To compile, just do: gcc -O3 -o findgeo findgeo.c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <memory.h>
#include <time.h>
struct Factor {
int a;
int k;
struct Factor *next;
};
struct Factor *factors = 0;
int factorsL=0;
void ConstructFactors(int R) {
int a,k,C;
int R2;
struct Factor *f;
float seconds;
clock_t end;
clock_t start = clock();
if (factors) free(factors);
factors = malloc (sizeof(struct Factor) *((R>>1) + 1));
R2 = R>>1 ;
for (a=0;a<=R2;a++) {
factors[a].a= a;
factors[a].k=1;
factors[a].next=NULL;
}
factorsL=R2+1;
R2 = floor(sqrt(R));
for (k=2; k<=R2; k++) {
a=1;
C=a*k*(k+1);
while (C<R) {
C >>= 1;
f=malloc(sizeof(struct Factor));
*f=factors[C];
factors[C].a=a;
factors[C].k=k;
factors[C].next=f;
a++;
C=a*k*(k+1);
}
}
end = clock();
seconds = (float)(end - start) / CLOCKS_PER_SEC;
printf("Construct Table: %f\n",seconds);
}
void DestructFactors() {
int i;
struct Factor *f;
for (i=0;i<factorsL;i++) {
while (factors[i].next) {
f=factors[i].next->next;
free(factors[i].next);
factors[i].next=f;
}
}
free(factors);
factors=NULL;
factorsL=0;
}
int ipow(int base, int exp)
{
int result = 1;
while (exp)
{
if (exp & 1)
result *= base;
exp >>= 1;
base *= base;
}
return result;
}
void findGeo(int **bestSolution, int *bestSolutionL,int *Arr, int L) {
int i,j,D;
int mustExistToBeBetter;
int R=Arr[L-1]-Arr[0];
int *possibleSolution;
int possibleSolutionL=0;
int exp;
int NextVal;
int idx;
int kMax,aMax;
float seconds;
clock_t end;
clock_t start = clock();
kMax = floor(sqrt(R));
aMax = floor(R/2);
ConstructFactors(R);
*bestSolutionL=2;
*bestSolution=malloc(0);
possibleSolution = malloc(sizeof(int)*(R+1));
struct Factor *f;
int *H=malloc(sizeof(int)*(R+1));
memset(H,0, sizeof(int)*(R+1));
for (i=0;i<L;i++) {
H[ Arr[i]-Arr[0] ]=1;
}
for (i=0; i<L-2;i++) {
for (j=i+2; j<L; j++) {
D=Arr[j]-Arr[i];
if (D & 1) continue;
f = factors + (D >>1);
while (f) {
idx=Arr[i] + f->a * f->k - Arr[0];
if ((f->k <= kMax)&& (f->a<aMax)&&(idx<=R)&&H[idx]) {
if (f->k ==1) {
mustExistToBeBetter = Arr[i] + f->a * (*bestSolutionL);
} else {
mustExistToBeBetter = Arr[i] + f->a * f->k * (ipow(f->k,*bestSolutionL) - 1)/(f->k-1);
}
if (mustExistToBeBetter< Arr[L-1]+1) {
idx= floor(mustExistToBeBetter - Arr[0]);
} else {
idx = R+1;
}
if ((idx<=R)&&H[idx]) {
possibleSolution[0]=Arr[i];
possibleSolution[1]=Arr[i] + f->a*f->k;
possibleSolution[2]=Arr[j];
possibleSolutionL=3;
exp = f->k * f->k * f->k;
NextVal = Arr[j] + f->a * exp;
idx=NextVal - Arr[0];
while ( (idx<=R) && H[idx]) {
possibleSolution[possibleSolutionL]=NextVal;
possibleSolutionL++;
exp = exp * f->k;
NextVal = NextVal + f->a * exp;
idx=NextVal - Arr[0];
}
if (possibleSolutionL > *bestSolutionL) {
free(*bestSolution);
*bestSolution = possibleSolution;
possibleSolution = malloc(sizeof(int)*(R+1));
*bestSolutionL=possibleSolutionL;
kMax= floor( pow (R, 1/ (*bestSolutionL) ));
aMax= floor(R / (*bestSolutionL));
}
}
}
f=f->next;
}
}
}
if (*bestSolutionL == 2) {
free(*bestSolution);
possibleSolutionL=0;
for (i=0; (i<2)&&(i<L); i++ ) {
possibleSolution[possibleSolutionL]=Arr[i];
possibleSolutionL++;
}
*bestSolution = possibleSolution;
*bestSolutionL=possibleSolutionL;
} else {
free(possibleSolution);
}
DestructFactors();
free(H);
end = clock();
seconds = (float)(end - start) / CLOCKS_PER_SEC;
printf("findGeo: %f\n",seconds);
}
int compareInt (const void * a, const void * b)
{
return *(int *)a - *(int *)b;
}
int main(void) {
int N=100000;
int R=10000000;
int *A = malloc(sizeof(int)*N);
int *Sol;
int SolL;
int i;
int *S=malloc(sizeof(int)*R);
for (i=0;i<R;i++) S[i]=i+1;
for (i=0;i<N;i++) {
int r = rand() % (R-i);
A[i]=S[r];
S[r]=S[R-i-1];
}
free(S);
qsort(A,N,sizeof(int),compareInt);
/*
int step = floor(R/N);
A[0]=1;
for (i=1;i<N;i++) {
A[i]=A[i-1]+step;
}
*/
findGeo(&Sol,&SolL,A,N);
printf("[");
for (i=0;i<SolL;i++) {
if (i>0) printf(",");
printf("%d",Sol[i]);
}
printf("]\n");
printf("Size: %d\n",SolL);
free(Sol);
free(A);
return EXIT_SUCCESS;
}
Demonstration
I will try to demonstrate that the algorithm I proposed runs in the claimed average time (O(M + N^2)) for a uniformly distributed random sequence. I'm not a mathematician and I am not used to doing this kind of demonstration, so please feel free to correct any error that you can see.
There are 4 nested loops; the first two are the N^2 factor. The M is for the calculation of the possible-factors table.
The third loop is executed only once on average for each pair. You can see this by checking the size of the pre-calculated factors table: its size is M when N -> inf, so the average number of steps for each pair is M/M = 1.
So the proof comes down to checking that the fourth loop (the one that traverses the successfully built sequences) is executed at most O(N^2) times over all the pairs.
To demonstrate that, I will consider two cases: one where M >> N and another where M ~= N, where M is the maximum difference of the initial array: M = S(n) - S(1).
For the first case, (M>>N) the probability to find a coincidence is p=N/M. To start a sequence, it must coincide the second and the b+1 element where b is the length of the best sequence until now. So the loop will enter times. And the average length of this series (supposing an infinite series) is . So the total number of times that the loop will be executed is . And this is close to 0 when M>>N. The problem here is when M~=N.
Now let's consider the case where M ~= N. Let's say that b is the best sequence length so far. For the case A = k = 1, the sequence must start before N - b, so the number of sequences will be N - b, and the number of times the loop runs will be at most (N - b)*b.
For A>1 and k=1 we can extrapolate to where d is M/N (the average distance between numbers). If we add for all A’s from 1 to dN/b then we see a top limit of:
For the cases where k>=2, we see that the sequence must start before , So the loop will enter an average of and adding for all As from 1 to dN/k^b, it gives a limit of
Here, the worst case is when b is minimum. Because we are considering minimum series, lets consider a very worst case of b= 2 so the number of passes for the 4th loop for a given k will be less than
.
And if we add all k’s from 2 to infinite will be:
So adding all the passes for k=1 and k>=2, we have a maximum of:
Note that d=M/N=1/p.
So we have two limits: one that goes to infinity when d = 1/p = M/N goes to 1, and another that goes to infinity when d goes to infinity. So our limit is the minimum of both, and the worst case is where the two equations cross. So if we solve the equation:
we see that the maximum is when d = 1.353
So it is demonstrated that the fourth loop will be processed fewer than 1.55*N^2 times in total.
Of course, this is for the average case. For the worst case I am not able to find a way to generate series whose fourth loop runs more than O(N^2) times, and I strongly believe they do not exist, but I am not mathematician enough to prove it.
Old Answer
Here is a solution with an average running time of O((n^2)*cube_root(M)), where M is the difference between the first and last elements of the array, and memory requirements of O(M+N).
1.- Construct an array H of length M so that H[i - S[0]] = true if i exists in the initial array and false if it does not.
2.- For each pair in the array S[j], S[i] do:
2.1 Check whether they can be the first and third elements of a possible solution. To do so, calculate all possible A, K pairs that satisfy the equation S(i) = S(j) + A*K + A*K^2. Check this SO question to see how to solve this problem. Then check that the second element exists: S[j] + A*K.
2.2 Also check that the element one position beyond the best solution we already have exists. For example, if the best solution so far is 4 elements long, then check that the element S[j] + A*K + A*K^2 + A*K^3 + A*K^4 exists.
2.3 If 2.1 and 2.2 are true, then iterate to see how long this series is and set it as the bestSolution if it is longer than the current best.
Here is the code in javascript:
function getAKs(A) {
if (A / 2 != Math.floor(A / 2)) return [];
var solution = [];
var i;
var SR3 = Math.pow(A, 1 / 3);
for (i = 1; i <= SR3; i++) {
var B, C;
C = i;
B = A / (C * (C + 1));
if (B == Math.floor(B)) {
solution.push([B, C]);
}
B = i;
C = (-1 + Math.sqrt(1 + 4 * A / B)) / 2;
if (C == Math.floor(C)) {
solution.push([B, C]);
}
}
return solution;
}
function getBestGeometricSequence(S) {
var i, j, k;
var bestSolution = [];
var H = Array(S[S.length-1]-S[0]);
for (i = 0; i < S.length; i++) H[S[i] - S[0]] = true;
for (i = 0; i < S.length; i++) {
for (j = 0; j < i; j++) {
var PossibleAKs = getAKs(S[i] - S[j]);
for (k = 0; k < PossibleAKs.length; k++) {
var A = PossibleAKs[k][0];
var K = PossibleAKs[k][1];
var mustExistToBeBetter;
if (K==1) {
mustExistToBeBetter = S[j] + A * bestSolution.length;
} else {
mustExistToBeBetter = S[j] + A * K * (Math.pow(K,bestSolution.length) - 1)/(K-1);
}
if ((H[S[j] + A * K - S[0]]) && (H[mustExistToBeBetter - S[0]])) {
var possibleSolution=[S[j],S[j] + A * K,S[i]];
exp = K * K * K;
var NextVal = S[i] + A * exp;
while (H[NextVal - S[0]] === true) {
possibleSolution.push(NextVal);
exp = exp * K;
NextVal = NextVal + A * exp;
}
if (possibleSolution.length > bestSolution.length) {
bestSolution = possibleSolution;
}
}
}
}
}
return bestSolution;
}
//var A= [ 1, 2, 3,5,7, 15, 27, 30,31, 81];
var A=[];
for (i=1;i<=3000;i++) {
A.push(i);
}
var sol=getBestGeometricSequence(A);
$("#result").html(JSON.stringify(sol));
You can check the code here: http://jsfiddle.net/6yHyR/1/
I maintain the other solution because I believe that it is still better when M is very big compared to N.
Just to start with something, here is a simple solution in JavaScript:
var input = [0.7, 1, 2, 3, 4, 7, 15, 27, 30, 31, 81],
output = [], indexes, values, i, index, value, i_max_length,
i1, i2, i3, j1, j2, j3, difference12a, difference23a, difference12b, difference23b,
scale_factor, common_ratio_a, common_ratio_b, common_ratio_c,
error, EPSILON = 1e-9, common_ratio_is_integer,
resultDiv = $("#result");
for (i1 = 0; i1 < input.length - 2; ++i1) {
for (i2 = i1 + 1; i2 < input.length - 1; ++i2) {
scale_factor = difference12a = input[i2] - input[i1];
for (i3 = i2 + 1; i3 < input.length; ++i3) {
difference23a = input[i3] - input[i2];
common_ratio_1a = difference23a / difference12a;
common_ratio_2a = Math.round(common_ratio_1a);
error = Math.abs((common_ratio_2a - common_ratio_1a) / common_ratio_1a);
common_ratio_is_integer = error < EPSILON;
if (common_ratio_2a > 1 && common_ratio_is_integer) {
indexes = [i1, i2, i3];
j1 = i2;
j2 = i3
difference12b = difference23a;
for (j3 = j2 + 1; j3 < input.length; ++j3) {
difference23b = input[j3] - input[j2];
common_ratio_1b = difference23b / difference12b;
common_ratio_2b = Math.round(common_ratio_1b);
error = Math.abs((common_ratio_2b - common_ratio_1b) / common_ratio_1b);
common_ratio_is_integer = error < EPSILON;
if (common_ratio_is_integer && common_ratio_2a === common_ratio_2b) {
indexes.push(j3);
j1 = j2;
j2 = j3
difference12b = difference23b;
}
}
values = [];
for (i = 0; i < indexes.length; ++i) {
index = indexes[i];
value = input[index];
values.push(value);
}
output.push(values);
}
}
}
}
if (output !== []) {
i_max_length = 0;
for (i = 1; i < output.length; ++i) {
if (output[i_max_length].length < output[i].length)
i_max_length = i;
}
for (i = 0; i < output.length; ++i) {
if (output[i_max_length].length == output[i].length)
resultDiv.append("<p>[" + output[i] + "]</p>");
}
}
Output:
[1, 3, 7, 15, 31]
I find the first three items of every subsequence candidate, calculate the scale factor and the common ratio from them, and if the common ratio is an integer, then I iterate over the remaining elements after the third one and add to the subsequence those that fit into the geometric progression defined by the first three items. As a last step, I select the subsequence(s) with the largest length.
In fact it is exactly the same question as Longest equally-spaced subsequence; you just have to consider the logarithm of your data. If the sequence is a, ak, ak^2, ak^3, its logarithm is ln(a), ln(a) + ln(k), ln(a) + 2ln(k), ln(a) + 3ln(k), so it is equally spaced. The opposite is of course true. There is a lot of different code in the question above.
I don't think the special case a = 1 can be solved more efficiently than by adapting an algorithm from the above.
Here is my solution in Javascript. It should be close to O(n^2), except maybe in some pathological cases.
function bsearch(Arr,Val, left,right) {
if (left == right) return left;
var m=Math.floor((left + right) /2);
if (Val <= Arr[m]) {
return bsearch(Arr,Val,left,m);
} else {
return bsearch(Arr,Val,m+1,right);
}
}
function findLongestGeometricSequence(S) {
var bestSolution=[];
var i,j,k;
var H={};
for (i=0;i<S.length;i++) H[S[i]]=true;
for (i=0;i<S.length;i++) {
for (j=0;j<i;j++) {
for (k=j+1;k<i;) {
var possibleSolution=[S[j],S[k],S[i]];
var K = (S[i] - S[k]) / (S[k] - S[j]);
var A = (S[k] - S[j]) * (S[k] - S[j]) / (S[i] - S[k]);
if ((Math.floor(K) == K) && (Math.floor(A)==A)) {
exp= K*K*K;
var NextVal= S[i] + A * exp;
while (H[NextVal] === true) {
possibleSolution.push(NextVal);
exp = exp * K;
NextVal= NextVal + A * exp;
}
if (possibleSolution.length > bestSolution.length)
bestSolution=possibleSolution;
K--;
} else {
K=Math.floor(K);
}
if (K>0) {
var NextPossibleMidValue= (S[i] + K*S[j]) / (K +1);
k++;
if (S[k]<NextPossibleMidValue) {
k=bsearch(S,NextPossibleMidValue, k+1, i);
}
} else {
k=i;
}
}
}
}
return bestSolution;
}
function Run() {
var MyS= [0.7, 1, 2, 3, 4, 5,6,7, 15, 27, 30,31, 81];
var sol = findLongestGeometricSequence(MyS);
alert(JSON.stringify(sol));
}
Small Explanation
If we take 3 numbers of the array S(j) < S(k) < S(i), then you can calculate a and k so that: S(k) = S(j) + a*k and S(i) = S(k) + a*k^2 (2 equations and 2 unknowns). With that in mind, you can check whether a number exists in the array such that S(next) = S(i) + a*k^3. If that is the case, then continue checking for S(next2) = S(next) + a*k^4 and so on.
This would be an O(n^3) solution, but you can take advantage of the fact that k must be an integer in order to limit the S(k) points selected.
In case a is known, then you can calculate k directly and you need to check only one number in the third loop, so this case will clearly be O(n^2).
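As a sanity check, the two equations can be solved for a and k directly; here is a small Python helper for that step (hypothetical names, integer arithmetic only):
def solve_a_k(sj, sk, si):
    # S(k) = S(j) + a*k and S(i) = S(k) + a*k^2
    d1, d2 = sk - sj, si - sk      # d1 = a*k, d2 = a*k^2
    if d1 <= 0 or d2 % d1 != 0:
        return None                # k would not be a positive integer
    k = d2 // d1
    if k <= 0 or d1 % k != 0:
        return None                # a would not be an integer
    return d1 // k, k              # (a, k)

print(solve_a_k(5, 11, 23))        # (3, 2): 5, 5 + 3*2 = 11, 11 + 3*4 = 23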
I think this task is related to the recently posted Longest equally-spaced subsequence. I've just modified my algorithm in Python a little bit:
from math import sqrt

def add_precalc(precalc, end, (a, k), count, res, N):
    if end + a * k ** res[1]["count"] > N: return
    x = end + a * k ** count
    if x > N or x < 0: return
    if precalc[x] is None: return
    if (a, k) not in precalc[x]:
        precalc[x][(a, k)] = count
    return

def factors(n):
    res = []
    for x in range(1, int(sqrt(n)) + 1):
        if n % x == 0:
            y = n / x
            res.append((x, y))
            res.append((y, x))
    return res

def work(input):
    precalc = [None] * (max(input) + 1)
    for x in input: precalc[x] = {}
    N = max(input)
    res = ((0, 0), {"end": 0, "count": 0})
    for i, x in enumerate(input):
        for y in input[i::-1]:
            for a, k in factors(x - y):
                if (a, k) in precalc[x]: continue
                add_precalc(precalc, x, (a, k), 2, res, N)
        for step, count in precalc[x].iteritems():
            count += 1
            if count > res[1]["count"]: res = (step, {"end": x, "count": count})
            add_precalc(precalc, x, step, count, res, N)
        precalc[x] = None
    d = [res[1]["end"]]
    for x in range(res[1]["count"] - 1, 0, -1):
        d.append(d[-1] - res[0][0] * res[0][1] ** x)
    d.reverse()
    return d
explanation
Traverse the array.
For each previously seen element of the array, calculate the factors of the difference between the current element and that previous element, then precalculate the next possible element of the sequence and save it to the precalc array.
So when arriving at element i, all possible sequences ending at element i are already in the precalc array, so we only have to calculate the next possible element and save it to precalc.
Currently there's one place in the algorithm that could be slow - the factorization of each previous number. I think it could be made faster with two optimizations:
a more effective factorization algorithm
a way to avoid looking at every previous element of the array, using the fact that the array is sorted and that there are already precalculated sequences
Python:
def subseq(a):
    seq = []
    aset = set(a)
    for i, x in enumerate(a):
        # elements after x
        for j, x2 in enumerate(a[i+1:]):
            j += i + 1  # enumerate starts j at 0, we want a[j] = x2
            bk = x2 - x  # b*k (assuming k and k's exponent start at 1)
            # given b*k, bruteforce values of k
            for k in range(1, bk + 1):
                items = [x, x2]  # our subsequence so far
                nextdist = bk * k  # what x3 - x2 should look like
                while items[-1] + nextdist in aset:
                    items.append(items[-1] + nextdist)
                    nextdist *= k
                if len(items) > len(seq):
                    seq = items
    return seq
Running time is O(dn^3), where d is the (average?) distance between two elements,
and n is of course len(a).

Implement Number division by multiplication method [duplicate]

I was asked this question in a job interview, and I'd like to know how others would solve it. I'm most comfortable with Java, but solutions in other languages are welcome.
Given an array of numbers, nums, return an array of numbers products, where products[i] is the product of all nums[j], j != i.
Input : [1, 2, 3, 4, 5]
Output: [(2*3*4*5), (1*3*4*5), (1*2*4*5), (1*2*3*5), (1*2*3*4)]
= [120, 60, 40, 30, 24]
You must do this in O(N) without using division.
An explanation of polygenelubricants method is:
The trick is to construct the arrays (in the case for 4 elements):
{ 1, a[0], a[0]*a[1], a[0]*a[1]*a[2], }
{ a[1]*a[2]*a[3], a[2]*a[3], a[3], 1, }
Both of which can be done in O(n) by starting at the left and right edges respectively.
Then, multiplying the two arrays element-by-element gives the required result.
My code would look something like this:
int a[N]; // This is the input
int products_below[N];
int p = 1;
for (int i = 0; i < N; ++i) {
    products_below[i] = p;
    p *= a[i];
}
int products_above[N];
p = 1;
for (int i = N - 1; i >= 0; --i) {
    products_above[i] = p;
    p *= a[i];
}
int products[N]; // This is the result
for (int i = 0; i < N; ++i) {
    products[i] = products_below[i] * products_above[i];
}
If you need the solution to be O(1) in space as well, you can do this (which is less clear in my opinion):
int a[N]; // This is the input
int products[N];
// Get the products below the current index
int p = 1;
for (int i = 0; i < N; ++i) {
    products[i] = p;
    p *= a[i];
}
// Get the products above the current index
p = 1;
for (int i = N - 1; i >= 0; --i) {
    products[i] *= p;
    p *= a[i];
}
Here is a small recursive function (in C++) to do the modification in-place. It requires O(n) extra space (on stack) though. Assuming the array is in a and N holds the array length, we have:
int multiply(int *a, int fwdProduct, int indx) {
    int revProduct = 1;
    if (indx < N) {
        revProduct = multiply(a, fwdProduct*a[indx], indx+1);
        int cur = a[indx];
        a[indx] = fwdProduct * revProduct;
        revProduct *= cur;
    }
    return revProduct;
}
Here's my attempt to solve it in Java. Apologies for the non-standard formatting, but the code has a lot of duplication, and this is the best I can do to make it readable.
import java.util.Arrays;

public class Products {
    static int[] products(int... nums) {
        final int N = nums.length;
        int[] prods = new int[N];
        Arrays.fill(prods, 1);
        for (int
                i = 0, pi = 1, j = N-1, pj = 1;
                (i < N) && (j >= 0);
                pi *= nums[i++], pj *= nums[j--])
        {
            prods[i] *= pi; prods[j] *= pj;
        }
        return prods;
    }
    public static void main(String[] args) {
        System.out.println(
            Arrays.toString(products(1, 2, 3, 4, 5))
        ); // prints "[120, 60, 40, 30, 24]"
    }
}
The loop invariants are pi = nums[0] * nums[1] *.. nums[i-1] and pj = nums[N-1] * nums[N-2] *.. nums[j+1]. The i part on the left is the "prefix" logic, and the j part on the right is the "suffix" logic.
Recursive one-liner
Jasmeet gave a (beautiful!) recursive solution; I've turned it into this (hideous!) Java one-liner. It does in-place modification, with O(N) temporary space in the stack.
static int multiply(int[] nums, int p, int n) {
    return (n == nums.length) ? 1
        : nums[n] * (p = multiply(nums, nums[n] * (nums[n] = p), n + 1))
          + 0*(nums[n] *= p);
}

int[] arr = {1,2,3,4,5};
multiply(arr, 1, 0);
System.out.println(Arrays.toString(arr));
// prints "[120, 60, 40, 30, 24]"
Translating Michael Anderson's solution into Haskell:
otherProducts xs = zipWith (*) below above
  where below = scanl (*) 1 $ init xs
        above = tail $ scanr (*) 1 xs
Sneakily circumventing the "no divisions" rule:
from math import log, exp

total = 0.0
for i in range(len(a)):
    total += log(a[i])
output = [0.0] * len(a)
for i in range(len(a)):
    output[i] = exp(total - log(a[i]))
Here you go, simple and clean solution with O(N) complexity:
int[] a = {1,2,3,4,5};
int[] r = new int[a.length];
int x = 1;
r[0] = 1;
for (int i = 1; i < a.length; i++) {
    r[i] = r[i-1] * a[i-1];
}
for (int i = a.length - 1; i > 0; i--) {
    x = x * a[i];
    r[i-1] = x * r[i-1];
}
for (int i = 0; i < r.length; i++) {
    System.out.println(r[i]);
}
Travel left -> right and keep saving the product. Call it Past. -> O(n)
Travel right -> left and keep the product. Call it Future. -> O(n)
Result[i] = Past[i-1] * Future[i+1] -> O(n)
Past[-1] = 1 and Future[n+1] = 1;
O(n)
C++, O(n):
long long prod = accumulate(in.begin(), in.end(), 1LL, multiplies<int>());
transform(in.begin(), in.end(), back_inserter(res),
bind1st(divides<long long>(), prod));
Here is my solution in modern C++. It makes use of std::transform and is pretty easy to remember.
Online code (wandbox).
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

vector<int>& multiply_up(vector<int>& v){
    v.insert(v.begin(), 1);
    transform(v.begin() + 1, v.end()
              , v.begin()
              , v.begin() + 1
              , [](auto const& a, auto const& b) { return b * a; }
              );
    v.pop_back();
    return v;
}

int main() {
    vector<int> v = {1,2,3,4,5};
    auto vr = v;
    reverse(vr.begin(), vr.end());
    multiply_up(v);
    multiply_up(vr);
    reverse(vr.begin(), vr.end());
    transform(v.begin(), v.end()
              , vr.begin()
              , v.begin()
              , [](auto const& a, auto const& b) { return b * a; }
              );
    for (auto& i : v) cout << i << " ";
}
Precalculate the product of the numbers to the left and to the right of each element.
For every element the desired value is the product of its neighbors' products.
#include <stdio.h>

unsigned array[5] = { 1, 2, 3, 4, 5 };

int main(void)
{
    unsigned idx;
    unsigned left[5], right[5];
    left[0] = 1;
    right[4] = 1;
    /* calculate products of numbers to the left of [idx] */
    for (idx = 1; idx < 5; idx++) {
        left[idx] = left[idx-1] * array[idx-1];
    }
    /* calculate products of numbers to the right of [idx] */
    for (idx = 4; idx-- > 0; ) {
        right[idx] = right[idx+1] * array[idx+1];
    }
    for (idx = 0; idx < 5; idx++) {
        printf("[%u] Product(%u*%u) = %u\n"
               , idx, left[idx], right[idx], left[idx] * right[idx]);
    }
    return 0;
}
Result:
$ ./a.out
[0] Product(1*120) = 120
[1] Product(1*60) = 60
[2] Product(2*20) = 40
[3] Product(6*5) = 30
[4] Product(24*1) = 24
(UPDATE: now I look closer, this uses the same method as Michael Anderson, Daniel Migowski and polygenelubricants above)
Tricky:
Use the following:
public int[] calc(int[] params) {
    int n = params.length;
    int[] left = new int[n];
    int[] right = new int[n];
    int fac1 = 1;
    int fac2 = 1;
    for (int i = 0; i < n; i++) {
        fac1 = fac1 * params[i];
        fac2 = fac2 * params[n - i];
        left[i] = fac1;
        right[i] = fac2;
    }
    int[] results = new int[n];
    for (int i = 0; i < n; i++) {
        results[i] = left[i] * right[i];
    }
    return results;
}
Yes, I am sure i missed some i-1 instead of i, but thats the way to solve it.
This is O(n^2) but f# is soooo beautiful:
List.fold (fun seed i -> List.mapi (fun j x -> if i=j+1 then x else x*i) seed)
[1;1;1;1;1]
[1..5]
There is also an O(N^(3/2)) non-optimal solution. It is quite interesting, though.
First preprocess the partial multiplication of each group of size N^0.5 (this is done in O(N) time). Then, the calculation for each number's other-values product can be done in 2*O(N^0.5) time (why? because you only need to multiply together the partial results of the other ((N^0.5) - 1) groups, and then multiply the result by the ((N^0.5) - 1) numbers that belong to the group of the current number). Doing this for each number, one gets O(N^(3/2)) time. See the sketch after the example below.
Example:
4 6 7 2 3 1 9 5 8
partial results:
4*6*7 = 168
2*3*1 = 6
9*5*8 = 360
To calculate the value for 3, one multiplies the other groups' values 168*360, and then multiplies by 2*1.
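A small Python sketch of this blocking idea (group size about sqrt(N); uses math.isqrt and math.prod from Python 3.8+; names are illustrative):
import math

def products_except_self_blocks(nums):
    n = len(nums)
    g = max(1, math.isqrt(n))                        # block size ~ sqrt(n)
    blocks = [nums[i:i + g] for i in range(0, n, g)]
    block_prods = [math.prod(b) for b in blocks]
    result = []
    for bi, block in enumerate(blocks):
        # product of all the other blocks' partial results
        other = math.prod(block_prods[:bi] + block_prods[bi + 1:])
        for j in range(len(block)):
            # product of the other members of this block
            within = math.prod(block[:j] + block[j + 1:])
            result.append(other * within)
    return result

print(products_except_self_blocks([4, 6, 7, 2, 3, 1, 9, 5, 8]))
For the element 3 above, this computes 168 * 360 from the other two blocks and then multiplies by 2 * 1 from its own block, as in the example.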
public static void main(String[] args) {
    int[] arr = { 1, 2, 3, 4, 5 };
    int[] result = { 1, 1, 1, 1, 1 };
    for (int i = 0; i < arr.length; i++) {
        for (int j = 0; j < i; j++) {
            result[i] *= arr[j];
        }
        for (int k = arr.length - 1; k > i; k--) {
            result[i] *= arr[k];
        }
    }
    for (int i : result) {
        System.out.println(i);
    }
}
This is the solution I came up with, and I found it so clear. What do you think?!
Based on Billz answer--sorry I can't comment, but here is a scala version that correctly handles duplicate items in the list, and is probably O(n):
val list1 = List(1, 7, 3, 3, 4, 4)
val view = list1.view.zipWithIndex map { x => list1.view.patch(x._2, Nil, 1).reduceLeft(_*_)}
view.force
returns:
List(1008, 144, 336, 336, 252, 252)
Adding my JavaScript solution here, as I didn't find anyone suggesting this.
What is dividing, except counting how many times you can extract one number from another? I went through calculating the product of the whole array, and then iterating over each element, subtracting the current element from that product until reaching zero:
// No division operation allowed
// keep subtracting divisor from dividend, until dividend is zero or less than divisor
function calculateProducsExceptCurrent_NoDivision(input){
    var res = [];
    var totalProduct = 1;
    // calculate the total product
    for (var i = 0; i < input.length; i++){
        totalProduct = totalProduct * input[i];
    }
    // populate the result array by "dividing" each value
    for (var i = 0; i < input.length; i++){
        var timesSubstracted = 0;
        var divisor = input[i];
        var dividend = totalProduct;
        while (divisor <= dividend){
            dividend = dividend - divisor;
            timesSubstracted++;
        }
        res.push(timesSubstracted);
    }
    return res;
}
Just 2 passes up and down. Job done in O(N)
private static int[] multiply(int[] numbers) {
    int[] multiplied = new int[numbers.length];
    int total = 1;
    multiplied[0] = 1;
    for (int i = 1; i < numbers.length; i++) {
        multiplied[i] = numbers[i - 1] * multiplied[i - 1];
    }
    for (int j = numbers.length - 2; j >= 0; j--) {
        total *= numbers[j + 1];
        multiplied[j] = total * multiplied[j];
    }
    return multiplied;
}
def productify(arr, prod, i):
    if i < len(arr):
        prod.append(arr[i - 1] * prod[i - 1]) if i > 0 else prod.append(1)
        retval = productify(arr, prod, i + 1)
        prod[i] *= retval
        return retval * arr[i]
    return 1

if __name__ == "__main__":
    arr = [1, 2, 3, 4, 5]
    prod = []
    productify(arr, prod, 0)
    print(prod)
Well, this solution can be considered that of C/C++.
Let's say we have an array "a" containing n elements
like a[n]; then the pseudo-code would be as below.
for (j = 0; j < n; j++)
{
    prod[j] = 1;
    for (i = 0; i < n; i++)
    {
        if (i == j)
            continue;
        else
            prod[j] = prod[j] * a[i];
    }
}
One more solution, using division, with two traversals:
Multiply all the elements together and then divide the product by each element in turn.
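In sketch form (Python; this uses division, so it only answers the relaxed version of the puzzle, and it assumes the array contains no zeros):
def products_with_division(nums):
    total = 1
    for x in nums:                       # first traversal: total product
        total *= x
    return [total // x for x in nums]    # second traversal: divide by each element

print(products_with_division([1, 2, 3, 4, 5]))   # [120, 60, 40, 30, 24]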
{-
Recursive solution using sqrt(n) subsets. Runs in O(n).
Recursively computes the solution on sqrt(n) subsets of size sqrt(n).
Then recurses on the product sum of each subset.
Then for each element in each subset, it computes the product with
the product sum of all other products.
Then flattens all subsets.
Recurrence on the run time is T(n) = sqrt(n)*T(sqrt(n)) + T(sqrt(n)) + n
Suppose that T(n) ≤ cn in O(n).
T(n) = sqrt(n)*T(sqrt(n)) + T(sqrt(n)) + n
     ≤ sqrt(n)*c*sqrt(n) + c*sqrt(n) + n
     ≤ c*n + c*sqrt(n) + n
     ≤ (2c+1)*n
     ∈ O(n)
Note that ceiling(sqrt(n)) can be computed using a binary search
and O(logn) iterations, if the sqrt instruction is not permitted.
-}

import Data.List (foldl')

otherProducts [] = []
otherProducts [x] = [1]
otherProducts [x,y] = [y,x]
otherProducts a = foldl' (++) [] $ zipWith (\s p -> map (*p) s) solvedSubsets subsetOtherProducts
  where
    n = length a
    -- Subset size. Require that 1 < s < n.
    s = ceiling $ sqrt $ fromIntegral n
    solvedSubsets = map otherProducts subsets
    subsetOtherProducts = otherProducts $ map product subsets
    subsets = reverse $ loop a []
      where loop [] acc = acc
            loop a acc = loop (drop s a) ((take s a):acc)
Here is my code:
#include <stdio.h>

int multiply(int a[], int n, int nextproduct, int i)
{
    int prevproduct = 1;
    if (i >= n)
        return prevproduct;
    prevproduct = multiply(a, n, nextproduct*a[i], i+1);
    printf(" i=%d > %d\n", i, prevproduct*nextproduct);
    return prevproduct*a[i];
}

int main()
{
    int a[] = {2, 4, 1, 3, 5};
    multiply(a, 5, 1, 0);
    return 0;
}
Here's a slightly functional example, using C#:
Func<long>[] backwards = new Func<long>[input.Length];
Func<long>[] forwards = new Func<long>[input.Length];

for (int i = 0; i < input.Length; ++i)
{
    var localIndex = i;
    backwards[i] = () => (localIndex > 0 ? backwards[localIndex - 1]() : 1) * input[localIndex];
    forwards[i] = () => (localIndex < input.Length - 1 ? forwards[localIndex + 1]() : 1) * input[localIndex];
}

var output = new long[input.Length];
for (int i = 0; i < input.Length; ++i)
{
    if (0 == i)
    {
        output[i] = forwards[i + 1]();
    }
    else if (input.Length - 1 == i)
    {
        output[i] = backwards[i - 1]();
    }
    else
    {
        output[i] = forwards[i + 1]() * backwards[i - 1]();
    }
}
I'm not entirely certain that this is O(n), due to the semi-recursion of the created Funcs, but my tests seem to indicate that it's O(n) in time.
To be complete here is the code in Scala:
val list1 = List(1, 2, 3, 4, 5)
for (elem <- list1) println(list1.filter(_ != elem) reduceLeft(_*_))
This will print out the following:
120
60
40
30
24
The program will filter out the current elem (_ != elem); and multiply the new list with reduceLeft method. I think this will be O(n) if you use scala view or Iterator for lazy eval.
// This is the recursive solution in Java
// Called as follows from main: product(a, 1, 0);
public static double product(double[] a, double fwdprod, int index) {
    double revprod = 1;
    if (index < a.length) {
        revprod = product(a, fwdprod*a[index], index+1);
        double cur = a[index];
        a[index] = fwdprod * revprod;
        revprod *= cur;
    }
    return revprod;
}
A neat solution with O(n) runtime:
For each element, calculate the product of all the elements that occur before it and store it in an array "pre".
For each element, calculate the product of all the elements that occur after it and store it in an array "post".
Create a final array "result"; for an element i,
result[i] = pre[i-1]*post[i+1];
Here is the Python version:
# This solution uses O(n) time and O(n) space
def productExceptSelf(self, nums):
    """
    :type nums: List[int]
    :rtype: List[int]
    """
    N = len(nums)
    if N == 0: return
    # Initialize lists of 1s, size N
    l_prods, r_prods = [1]*N, [1]*N
    for i in range(1, N):
        l_prods[i] = l_prods[i-1] * nums[i-1]
    for i in reversed(range(N-1)):
        r_prods[i] = r_prods[i+1] * nums[i+1]
    result = [x*y for x, y in zip(l_prods, r_prods)]
    return result

# This solution uses O(n) time and O(1) space
def productExceptSelfSpaceOptimized(self, nums):
    """
    :type nums: List[int]
    :rtype: List[int]
    """
    N = len(nums)
    if N == 0: return
    # Initialize list of 1s, size N
    result = [1]*N
    for i in range(1, N):
        result[i] = result[i-1] * nums[i-1]
    r_prod = 1
    for i in reversed(range(N)):
        result[i] *= r_prod
        r_prod *= nums[i]
    return result
I'm used to C#:
public int[] ProductExceptSelf(int[] nums)
{
    int[] returnArray = new int[nums.Length];
    List<int> auxList = new List<int>();
    int multTotal = 0;
    // If no zeros are contained in the array you only have to calculate it once
    if (!nums.Contains(0))
    {
        multTotal = nums.ToList().Aggregate((a, b) => a * b);
        for (int i = 0; i < nums.Length; i++)
        {
            returnArray[i] = multTotal / nums[i];
        }
    }
    else
    {
        for (int i = 0; i < nums.Length; i++)
        {
            auxList = nums.ToList();
            auxList.RemoveAt(i);
            if (!auxList.Contains(0))
            {
                returnArray[i] = auxList.Aggregate((a, b) => a * b);
            }
            else
            {
                returnArray[i] = 0;
            }
        }
    }
    return returnArray;
}
Here is a simple Scala version in linear O(n) time:
def getProductEff(in:Seq[Int]):Seq[Int] = {
//create a list which has product of every element to the left of this element
val fromLeft = in.foldLeft((1, Seq.empty[Int]))((ac, i) => (i * ac._1, ac._2 :+ ac._1))._2
//create a list which has product of every element to the right of this element, which is the same as the previous step but in reverse
val fromRight = in.reverse.foldLeft((1,Seq.empty[Int]))((ac,i) => (i * ac._1,ac._2 :+ ac._1))._2.reverse
//merge the two list by product at index
in.indices.map(i => fromLeft(i) * fromRight(i))
}
This works because essentially the answer is an array which has product of all elements to the left and to the right.
import java.util.Arrays;

public class Pratik
{
    public static void main(String[] args)
    {
        int[] array = {2, 3, 4, 5, 6};   // OUTPUT: 360 240 180 144 120
        int[] products = new int[array.length];
        arrayProduct(array, products);
        System.out.println(Arrays.toString(products));
    }

    public static void arrayProduct(int array[], int products[])
    {
        double sum = 0, EPSILON = 1e-9;
        for (int i = 0; i < array.length; i++)
            sum += Math.log(array[i]);
        for (int i = 0; i < array.length; i++)
            products[i] = (int) (EPSILON + Math.exp(sum - Math.log(array[i])));
    }
}
OUTPUT:
[360, 240, 180, 144, 120]
Time complexity : O(n)
Space complexity: O(1)
