Here is a question from HackerRank. I have a solution, but some test cases fail because the time limit is exceeded, and I don't know a better solution.
Find the sum of the elements in a subarray (every 0 in the subarray contributes the number x instead).
Input:
numbers: the main array (1-indexed)
queries: an array of queries; each query is [left index, right index, number x] (0-indexed)
Output:
an array of sums, one for each query.
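For example (an illustration of my own, consistent with the code below): with numbers = [5, 0, 10] and the single query [1, 3, 4], the subarray is 5, 0, 10 and the 0 contributes x = 4, so the output is [19] (5 + 4 + 10).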
My solution in C++:
vector<long> findSum(vector<int> numbers, vector<vector<int>> queries)
{
    vector<long> result;
    for (auto &q : queries)
    {
        long sum = 0;
        int l = q[0] - 1;  // convert the 1-indexed left bound
        int r = q[1] - 1;  // convert the 1-indexed right bound (inclusive)
        int x = q[2];
        for (int j = l; j <= r; j++)
            sum += numbers[j] == 0 ? x : numbers[j];
        result.push_back(sum);
    }
    return result;
}
As suggested by #Annity, you need to maintain two arrays:
A prefix-sum array: at each index, it holds the sum of all the previous numbers up to that point.
The same as above, but holding the total count of all the previous zeros.
You should avoid nested loops to reduce time complexity. Here is a solution in javascript:
function findSum(numbers, queries) {
let result = [];
let subArraySum = [];
let countZero = numbers[0] == 0 ? 1 : 0;
let zeroArr = [];
zeroArr[0] = countZero;
subArraySum[0] = numbers[0];
for (let i = 1; i <= numbers.length - 1; i++) {
if (numbers[i] == 0) {
countZero++;
zeroArr[i] = countZero;
}
else {
zeroArr[i] = countZero;
}
subArraySum[i] = subArraySum[i - 1] + numbers[i];
}
for (let q of queries) {
if (q.length == 3) {
let i = q[0];
let j = q[1];
let x = q[2];
let sum = 0;
sum = subArraySum[j - 1] - ((i - 2 < 0 ) ? 0 : subArraySum[i - 2]);
sum = sum + (zeroArr[j - 1] - ((i - 2 < 0 ) ? 0 : zeroArr[i - 2])) * x;
result.push(sum);
}
}
return result;
}
console.log(findSum([5, 10, 15], [
[1, 2, 5]
]))
console.log(findSum([0, 10, 15], [
[1, 2, 5]
]))
Here's the solution that occurred to me.
You need to create two new arrays - one for the sums and one for the number of zeroes. Eg. sums[] and zeroes[].
In these two arrays you will store the value for the sum and the number of zeroes from the first element to the current. This is how it goes:
Loop through all of the numbers.
For each number with index i save in sums[i] the cumulative sum of all the elements from the first to the current and in zeroes[i] - the number of zeroes from the first element to the current.
Then loop through all of the queries.
For every query calculate the sum:
You take the sum from the sums array for the element with the right index and deduct the sum for the element before the left index.
(sum = sums[r] - sums[l-1])
If you deduct the count of zeroes in the element before the left index from the number in the element on the right index - you will get the number of zeroes in the interval.
(zero = zeroes[r] - zeroes[l-1])
Then multiply it by the third element in the query and add it to the sum.
And this is how you have the sum for the query.
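A minimal sketch of this prefix-sum idea in Python (my own illustration of the steps above, not the original poster's code):
def find_sum(numbers, queries):
    n = len(numbers)
    sums = [0] * (n + 1)    # sums[i]   = sum of the first i elements
    zeroes = [0] * (n + 1)  # zeroes[i] = count of zeros among the first i elements
    for i, v in enumerate(numbers, start=1):
        sums[i] = sums[i - 1] + v
        zeroes[i] = zeroes[i - 1] + (1 if v == 0 else 0)
    result = []
    for l, r, x in queries:  # l and r are 1-indexed, r inclusive
        total = sums[r] - sums[l - 1]
        zero = zeroes[r] - zeroes[l - 1]
        result.append(total + zero * x)
    return result

print(find_sum([5, 0, 15], [[1, 3, 7]]))  # [27]  (5 + 7 + 15)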
This is in Python:
def findSum(numbers, queries):
    temp = []
    for b in range(len(queries)):
        tempRes = 0
        for k in range(queries[b][0] - 1, queries[b][1]):
            if numbers[k] == 0:
                tempRes = tempRes + queries[b][2]
            tempRes = tempRes + numbers[k]
        temp.append(tempRes)
    return temp
Your code takes linear time per query. Invest linear time once to precompute an array of partial sums; then each query can be answered in constant time.
public static List<Long> findSum(List<Integer> numbers, List<List<Integer>> queries) {
    ArrayList<Long> arr = new ArrayList<>();
    for (int i = 0; i < queries.size(); i++) {
        long sum = 0;
        for (int j = queries.get(i).get(0) - 1; j <= queries.get(i).get(1) - 1; j++) {
            if (numbers.get(j) == 0) {
                sum += queries.get(i).get(2);
            } else {
                sum += numbers.get(j);
            }
        }
        arr.add(sum);
    }
    return arr;
}
The logic is to accumulate, at each index, the sum of everything up to and including that index, with the position before the first index treated as 0.
Do the same for counting the zeros.
After that, for each query, just subtract the accumulated sum to the left of the range from the accumulated sum at the right index.
Do the same with the zero counts, but multiply that difference by the query's x.
There you have it. I can't post the answer.
JS Solution:
There are two ways to solve this problem. The brute-force solution (nested loop) is the one that fails with a timeout error during the tests, so you need to solve it with a better time complexity. Here is my answer: the 1st method is the O(N^2) brute force, and the 2nd method is optimized to O(N) to avoid the timeout error.
The answers are in JavaScript; I hope this helps you.
/*
-- Brainstorming --
Have 1-indexed Array [numbers] of length n;
Have numbers of queries (q). [2D Array]
Each Query => [startIdx i, endIdx r, number x]
Find => Sum of numbers between indexes i and r (both included)
if a zero occurred in the range, add x instead of it.
*/
// 1st Method: Brute force Solution [Nested Loop]
function findSum(numbers, queries) {
const sum = new Array(queries.length).fill(0);
for (let i = 0; i < queries.length; i++) {
// j starts at queries[i][0] - 1 because numbers is 1-indexed, not 0-indexed
for (let j = queries[i][0] - 1; j < queries[i][1]; j++) {
if (numbers[j] === 0) {
sum[i] += queries[i][2];
} else {
sum[i] += numbers[j];
}
}
}
return sum;
}
// 2nd Method: The Optimized Solution Single Loop
function findSum(numbers, queries) {
const sums = [];
const subArraySum = [];
const zerosArr = [];
let zerosCount = 0;
for (let i = 0; i < numbers.length; i++) {
if (numbers[i] === 0) {
zerosCount++;
zerosArr[i] = zerosCount;
} else {
zerosArr[i] = zerosCount;
}
subArraySum[i] = numbers[i] + (subArraySum[i - 1] || 0);
}
for (let q of queries) {
const i = q[0] - 1;
const r = q[1] - 1;
const x = q[2];
let finalSum = subArraySum[r] - (subArraySum[i - 1] || 0) + (zerosArr[r] - (zerosArr[i - 1] || 0)) * x;
sums.push(finalSum);
}
return sums;
}
console.log("1st Test: " + findSum([5,10,10], [[1,2,5]])) // 15
console.log("2nd Test: " + findSum([-5,0], [ [2, 2 ,20] , [1,2,10]])) // 20 , 5
console.log("3rd Test: " + findSum([-1,-1,1,-4,3,-3,-4], [ [1, 4 ,2]])) // -5
console.log("4th Test: " + findSum([1000000000], [[1, 1,100]]))// 1000000000
console.log("5th Test: " + findSum([-1000000000], [[1, 1,100]]))// -1000000000
Try this (in JavaScript):
function findSum(numbers, queries) {
// Write your code here
let sumArr = [];
for (let i = 0; i < queries.length; i++) {
let sum = 0;
let start = queries[i][0] - 1; // convert the 1-indexed left bound to a 0-indexed slice start
let end = queries[i][1]; // slice end is exclusive, so this includes the right bound
let x = queries[i][2];
const newArr = numbers.slice(start, end);
newArr.forEach((item) => {
if (item === 0) {
sum = sum + x;
}
sum = sum + item;
});
sumArr.push(sum);
}
return sumArr;
}
#!/usr/bin/python3
def solution(numbers, queries):
if not isinstance(numbers, list) and not isinstance(queries, list):
return
result = []
total = 0
for query in queries:
if len(query) == 3:
start = query[0]
end = query[1]
additional = query[2]
selection = numbers[start-1:end]
has_zeros = False
for el in selection:
if el == 0:
has_zeros = True
print(f"has_zeros: {has_zeros}")
print(f"selection: {selection}")
total = sum(selection)
if has_zeros:
total += additional
result.append(total)
return result
if "__main__" == __name__:
# 15
_input = [5, 10, 10]
_queries = [[1], [3], [1, 2, 5]]
print(f"input: {_input}")
print(f"queries: {_queries}")
_output = solution(_input, _queries)
print(f"ouput: {_output}")
Solution in python3 :
def findSum(numbers, queries):
a = [0]
b = [0]
for x in numbers:
a.append(a[-1] + x)
b.append(b[-1] + (x == 0))
return [a[r] - a[l - 1] + x * (b[r] - b[l - 1]) for l, r, x in queries]
We have a bit array like below
{1 0 1 0 0 1 0 1}
Number of bits in above array is 8
If we take range from [1,5] then number of bits in [1,5] range is [0 1 0 0 1].
If we flip this range then after flipping it will be [ 1 0 1 1 0]
So total number of 1's after flipping [1,5] range is [1 1 0 1 1 0 0 1] = 5
If we take range from [1,6] then number of bits in [1,6] range is [0 1 0 0 1 0].
If we flip this range then after flipping it will be [ 1 0 1 1 0 1]
So total number of 1's after flipping the [1,6] range is [1 1 0 1 1 0 1 1] = 6
So the answer is range [1,6] and after flipping we can get 6 1's in array
Is there a good algorithm that can solve this problem? I can only think of dynamic programming, because this problem can be broken down into subproblems which can be combined.
Inspired by #Nabb's comment, there is an easy way to solve this in linear time: by reducing the problem to maximal segment sum.
Transform all 0s to 1s and all 1s to -1s. The problem is then the same as minimizing the sum of the array after transforming. (the minimal sum contains most -1s in the transformed array, which corresponds to most 1s in the original problem).
We can calculate the sum as
sum(after flipping) = sum(non-flipped) - sum(flipped part before flipping)
because the sum of the flipped part is inverted. If we now express the non-flipped part as follows:
sum(non-flipped) = sum(original array) - sum(flipped part before flipping)
we find that we need to minimize
sum(after flipping) = sum(original array) - 2 sum(flipped part before flipping)
The first part is a constant, so we really need to maximize the sum of the flipped part. This is exactly what the maximum segment sum problem does.
I wrote a lengthy derivation on how to solve that problem in linear time a while ago, so now I'll only share the code. Below I updated the code to also store the boundaries. I chose javascript as the language, because it is so easy to test in the browser and because I don't have to make the types of variables x and y explicit.
var A = Array(1, 0, 1, 0, 0, 1, 0, 1);
var sum = 0;
// count the 1s in the original array and
// do the 0 -> 1 and 1 -> -1 conversion
for(var i = 0; i < A.length; i++) {
sum += A[i];
A[i] = A[i] == 0 ? 1 : -1;
}
// find the maximum subarray
var x = { value: 0, left: 0, right: 0 };
var y = { value: 0, left: 0 };
for (var n = 0; n < A.length; n++) {
// update y
if (y.value + A[n] >= 0) {
y.value += A[n];
} else {
y.left = n+1;
y.value = 0;
}
// update x
if (x.value < y.value) {
x.left = y.left;
x.right = n;
x.value = y.value;
}
}
// convert the result back
alert("result = " + (sum + x.value)
+ " in range [" + x.left + ", " + x.right + "]");
You can easily verify this in your browser. For instance in Chrome, press F12, click Console and paste this code. It should output
result = 6 in range [1, 4]
The solution uses Kadane's Algorithm.
We have to pick that substring where there are maximum number of 0s and minimum number of 1s, i.e., substring with max(count(0)-count(1)). So that after the flip, we can get maximum number of 1s in the final string.
Iterate over the string and keep a count. Increment this count whenever we encounter a 0 and decrement it when we encounter 1. The substring which will have the maximum value of this count will be our answer.
Here's a video by alGOds which explains the approach nicely. Do watch it if you have any doubts.
Link : https://youtu.be/cLVpE5q_-DE
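A minimal sketch of that counting idea in Python (my own illustration, not code from the video):
def best_flip_range(bits):
    # Kadane's algorithm on the scores: +1 for a 0 (gained if flipped), -1 for a 1 (lost if flipped)
    best, best_range = 0, None
    cur, cur_start = 0, 0
    for i, b in enumerate(bits):
        cur += 1 if b == 0 else -1
        if cur > best:
            best, best_range = cur, (cur_start, i)
        if cur < 0:
            cur, cur_start = 0, i + 1
    ones = sum(bits)
    return best_range, ones + best  # range to flip, total 1s after flipping

print(best_flip_range([1, 0, 1, 0, 0, 1, 0, 1]))  # ((1, 4), 6) -- one optimal range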
The following code uses the trivial algorithm and runs in O(n²).
#include <iostream>
#include <bitset>
#include <utility>
typedef std::pair<unsigned, unsigned> range_t;
template <std::size_t N>
range_t max_flip(const std::bitset<N>& bs){
int overall_score = 0;
range_t result = range_t{0,0};
for(std::size_t i = 0; i < N; ++i){
int score = bs[i] ? -1 : 1; // running gain of flipping the range [i, j]
int best = score;           // best gain over ranges starting at i
auto max = i;
for(std::size_t j = i + 1; j < N; ++j){
score += bs[j] ? -1 : 1;
if(score > best){
best = score;
max = j;
}
}
if(best > overall_score){
overall_score = best;
result = {i,max};
}
}
return result;
}
int main(){
std::bitset<8> bitfield(std::string("10100101"));
range_t range = max_flip(bitfield);
std::cout << range.first << " .. " << range.second << std::endl;
}
Attempt 2.0 in O(n)
Start at the beginning of the array and step through it until you reach a 0. When you reach the first 0, set count to 0, remember the start position, and continue stepping while counting: +1 for a 0 and -1 for a 1. If the count becomes negative, reset the count and continue until you reach the end. If you find another zero, set count to 0 and repeat the previous algorithm. At the end, you flip the range between the recorded start and end positions, if there is one.
void Flip( int* arr , int len )
{
int s = -1 , e = -1 , c ;
for( int i = 0 ; i < len ; i++ )
{
if( arr[i] == 0 )
{
c = 0 ;
s = i ;
e = i ;
for( int j = i ; j < len ; j++ , i++ )
{
if( arr[i] == 0 )
c++ ;
else
c-- ;
if( c < 0 )
{
s = -1 ;
e = -1 ;
break ;
}
if( arr[i] == 0 )
e = i ;
}
}
}
if( s > -1 )
for( int i = s ; i <= e ; i++ )
arr[i] ^= 1 ;
for( int i = 0 ; i < len ; i++ )
printf("%d " , arr[i] ) ;
}
int main(void)
{
int a[13] = {1,0,1,1,0,0,1,0,1,1,0,1,0} ;
Flip( a , 13 ) ;
return 0;
}
Not thoroughly tested, there may be bugs (edge cases), but it works in principle.
void maxones(int n)
{
int table[n+1][n+1], i, j, totalones = 0, max = INT_MIN, start_pos = 0, end_pos =0;
if(n == 0)
{
printf("Max no of ones from 0 to %d \n",sizeof(int) * 8 -1);
return;
}
/* sizeof(int) * 8 bits, count the total number of ones over all bit positions */
for(i=0; i<sizeof(n) * 8; i++)
totalones += (n & (1 << i)) ? 1 : 0;
/* Base conditions to be filled */
for(i=0; i<n; i++)
table[i][i] = (n & (1 << i)) ? totalones - 1 : totalones + 1;
for(i=0; i<n; i++ )
for(j=i+1; j<n; j++)
{
table[i][j] = table[i][j-1] + ((n & (1 << j)) ? -1 : 1);
if (max < table[i][j])
{
max = table[i][j];
start_pos = i;
end_pos = j;
}
}
printf("Max no of Ones on fliping bits from pos %d to pos %d \n",start_pos, end_pos);
}
int main()
{
int n;
printf("Enter number %d \n", &n);
maxones(n);
return 0;
}
Here is a recursive approach:
https://ideone.com/Su2Mmb
public static void main(String[] args) {
int [] input = {1, 0, 0, 1, 0, 0, 1,1,1,1, 0,1};
System.out.println(findMaxNumberOfOnes(input,0, input.length-1));
}
private static int findMaxNumberOfOnes(int[] input, int i, int j) {
if (i==j)
return 1;
int option1 = input[i] + findMaxNumberOfOnes(input, i+1, j);
int option2 = count(input , i , j, true);
int option3 = count(input, i, j, false);
int option4 =findMaxNumberOfOnes(input, i, j-1) +input[j];
return Math.max(option1, Math.max(option2,Math.max(option3,option4)));
}
private static int count(int[] input, int i, int j, boolean flipped) {
int a = flipped?0:1;
int count = 0;
while (i<=j){
count += (input[i++]==a)?1:0;
}
return count;
}
This problem can be solved using dynamic programming in linear time and space. You can create an array left where left[i] is the number of 1s in the subarray 0 to i (inclusive). So for two indices i and j:
case 1: i==j (only a single element is flipped): the result is sz-1 if there is no 0 in the array, or (the number of 1s)+1 if there is at least one 0 in the array
case 2: i less than j, result is:
left[i-1] (# of 1 on subarray 0 ~ i-1) +
(j-i+1-(left[j]-left[i-1])) (# of 0 on subarray i ~ j) +
left[sz-1]-left[j] (# of 1 on subarray j+1 ~ sz-1)
this equals to: (j-2*left[j])-(i-2*left[i-1])+left[sz-1]+1
So according to case 2, we need another array temp to store for every j min{i-2*left[i-1] where i<j}
So we can traverse the array, at each index j, we calculate the above case two (in constant time) and update final result if it's larger.
My code in c++:
int func(vector<int>& arr){
int res = 0;
int sz = arr.size();
vector<int> left(sz, 0);
for(int i=0; i<sz; i++){
left[i] = (arr[i]==1?1:0)+(i==0?0:left[i-1]);
}
bool all_1 = true;
for(int i=0; i<sz; i++){
if(arr[i] == 0) all_1=false;
}
if(all_1) return sz-1;
res = left[sz-1]+1;
vector<int> temp(sz, INT_MAX);
for(int i=1; i<sz; i++)
temp[i] = min(temp[i-1], i-2*left[i-1]);
for(int j=1; j<sz; j++){
int val = j+1-left[j]+(left[sz-1]-left[j]);
val = max(val, j-2*left[j]-temp[j]+left[sz-1]+1);
res = max(res, val);
}
return res;
}
I also thought the same way as #this mentioned, but there are bugs in his solution. My code after fixing the bug (see explanation below):
vector<int> Solution::flip(string arr) {
int s = -1 , e = -1 , c , len = arr.size(), S = -1, E = -1, Max = 0;
for( int i = 0 ; i < len ; i++ )
{
if( arr[i] == '0' )
{
c = 0 ;
s = i ;
e = i ;
for( int j = i ; j < len ; j++, i++ )
{
if( arr[j] == '0' )
c++ ;
else
c-- ;
//cout << c << " ";
if( c < 0 )
{
s = -1 ;
e = -1 ;
break ;
}
if( arr[j] == '0' )
e = i ;
if(c > Max){
S = s;
E = e;
Max = c;
}
}
}
}
vector<int> ans;
if( S > -1 ){
ans.push_back(S);
ans.push_back(E);
return ans;
}
else
return ans;
}
Explanation:
Start at the beginning of the array and step through it until you reach a 0. When you reach the first 0, set count to 0, remember the start position, and continue stepping while counting: +1 for a 0 and -1 for a 1. Max stores the maximum value of the count over all candidate sets [s, e]. If c becomes greater than Max, then the current set [s, e] is the best so far; hence update Max, S, E. If the count becomes negative, it means the number of '1' is greater than the number of '0' in the set [s, e], so reset the count c, the local start s, and the local end e, and continue until you reach the end. If you find another zero, set count to 0 and repeat the previous algorithm. The final values of S, E are the indices of the range in which the bits are to be flipped. If no such range exists (all bits are '1'), then S = -1, E = -1.
This Solution is also inspired by #Nabb's comment. I have created a new array with 0 replaced as 1 and 1 as -1. Then I used maximum subarray sum range problem's concept to solve it. The code is as below:
vector<int> Solution::flip(string A) {
vector<int> vec;
vector<int> res;
for(int i=0;i<A.length();i++){
if(A[i]=='1')
vec.push_back(-1);
else
vec.push_back(1);
}
int l=0,r=0,s=0;
int sum=0;
int sum_prev=INT_MIN;
for(int i=0;i<vec.size();i++){
sum+=vec[i];
if(sum_prev<sum){
sum_prev=sum;
l=s;
r=i;
}
if(sum<0){
sum=0;
s=i+1;
}
}
//cout<<"l: "<<l<<" r: "<<r<<endl;
if((l>=0 && r>0)||((l==0 && r==0) && A[0]=='0')){
res.push_back(l+1);
res.push_back(r+1);
}
return res;
}
Let me provide the solution and it is actually based on the Kadane's Algorithm.
The code is a bit long but most of them are comments written by me to help you understand a bit more.
Space complexity: O(1)
Time complexity: O(n)
# flip to zero to get max one
def flip_zero(nums):
# max number of 0 at index and global
max_nums_at_index, max_nums_global = None, None
start, end = None, None
for i in range(len(nums)):
if i == 0:
if nums[i] == 0:
# In position 0, if the digit is 0, then the count of zero will be 1
max_nums_at_index, max_nums_global = 1, 1
else:
# In position 0, if the digit is 1, then the count of zero will be 0
max_nums_at_index, max_nums_global = 0, 0
# Mark the start and end position of the substring
start, end = i, i
else:
# In other position, we need to consider we are going to added it or not
if nums[i] == 0:
# If the number is 0, then after we included it the count of zero will be increased by 1
# If we don't add it and means we will start a new subarray from current index
# the count of zero at current index will be 1
# So here we need to do comparison and see which one is bigger.
max_nums_at_index = max(max_nums_at_index + 1, 1)
# Check whether we start a new sub array, if yes, update the start index
if max_nums_at_index == 1:
start = i
else:
# If the number is 1, then after we include it the count of zero will remain unchange
# If we don't add it and means we will start a new array from current index
# the count of zero at current index will be 0
# So here we need to do comparison and see which one is bigger.
max_nums_at_index = max(max_nums_at_index, 0)
# Check whether we start a new sub array, if yes, update the start index
if max_nums_at_index == 0:
start = i
temp = max_nums_global
max_nums_global = max(max_nums_global, max_nums_at_index)
# Check whether the global max has been updated, if yes, update the end index
if max_nums_global != temp:
end = i
return [start, end]
And the result returned is [1, 6].
It can be done even more simply. See this Python example, O(n):
def flipBits_for_maximum_1s (a, n):
countOfOnes = 0
# find contiguous subarray with biggest sum
# of 'count of 0s' - 'count of 1s'
big = cur_big = 0
for x in a:
if x:
countOfOnes += 1
cur_big -= 1
else: cur_big += 1
if cur_big > big: big = cur_big
if (cur_big < 0): cur_big = 0;
return big + countOfOnes
I've just done the following Codility Peaks problem. The problem is as follows:
A non-empty zero-indexed array A consisting of N integers is given.
A peak is an array element which is larger than its neighbors. More precisely, it is an index P such that 0 < P < N − 1, A[P − 1] < A[P] and A[P] > A[P + 1].
For example, the following array A:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
has exactly three peaks: 3, 5, 10.
We want to divide this array into blocks containing the same number of elements. More precisely, we want to choose a number K that will yield the following blocks:
A[0], A[1], ..., A[K − 1],
A[K], A[K + 1], ..., A[2K − 1],
...
A[N − K], A[N − K + 1], ..., A[N − 1].
What's more, every block should contain at least one peak. Notice that extreme elements of the blocks (for example A[K − 1] or A[K]) can also be peaks, but only if they have both neighbors (including one in an adjacent block).
The goal is to find the maximum number of blocks into which the array A can be divided.
Array A can be divided into blocks as follows:
one block (1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2). This block contains three peaks.
two blocks (1, 2, 3, 4, 3, 4) and (1, 2, 3, 4, 6, 2). Every block has a peak.
three blocks (1, 2, 3, 4), (3, 4, 1, 2), (3, 4, 6, 2). Every block has a peak.
Notice in particular that the first block (1, 2, 3, 4) has a peak at A[3], because A[2] < A[3] > A[4], even though A[4] is in the adjacent block.
However, array A cannot be divided into four blocks, (1, 2, 3), (4, 3, 4), (1, 2, 3) and (4, 6, 2), because the (1, 2, 3) blocks do not contain a peak. Notice in particular that the (4, 3, 4) block contains two peaks: A[3] and A[5].
The maximum number of blocks that array A can be divided into is three.
Write a function:
class Solution { public int solution(int[] A); }
that, given a non-empty zero-indexed array A consisting of N integers, returns the maximum number of blocks into which A can be divided.
If A cannot be divided into some number of blocks, the function should return 0.
For example, given:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [0..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N*log(log(N)))
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
My Question
So I solved this with what appears to me to be the brute-force solution – go through every group size from 1..N, and check whether every group has at least one peak. For the first 15 minutes I was trying to figure out some more optimal way, since the required complexity is O(N*log(log(N))).
This is my "brute-force" code that passes all the tests, including the large ones, for a score of 100/100:
public int solution(int[] A) {
int N = A.length;
ArrayList<Integer> peaks = new ArrayList<Integer>();
for(int i = 1; i < N-1; i++){
if(A[i] > A[i-1] && A[i] > A[i+1]) peaks.add(i);
}
for(int size = 1; size <= N; size++){
if(N % size != 0) continue;
int find = 0;
int groups = N/size;
boolean ok = true;
for(int peakIdx : peaks){
if(peakIdx/size > find){
ok = false;
break;
}
if(peakIdx/size == find) find++;
}
if(find != groups) ok = false;
if(ok) return groups;
}
return 0;
}
My question is how do I deduce that this is in fact O(N*log(log(N))), as it's not at all obvious to me, and I was surprised I pass the test cases. I'm looking for even the simplest complexity proof sketch that would convince me of this runtime. I would assume that a log(log(N)) factor means some kind of reduction of a problem by a square root on each iteration, but I have no idea how this applies to my problem. Thanks a lot for any help
You're completely right: to get the log log performance the problem needs to be reduced.
An n.log(log(n)) solution in python is below. Codility no longer tests 'performance' on this problem (!) but the python solution scores 100% for accuracy.
As you've already surmised:
Outer loop will be O(n) since it is testing whether each size of block is a clean divisor
Inner loop must be O(log(log(n))) to give O(n log(log(n))) overall.
We can get good inner-loop performance because the inner loop only runs for the d(n) divisors of n. We can store a prefix sum of peaks-so-far, which uses the O(n) space allowed by the problem specification. Checking whether a peak has occurred in each 'group' is then an O(1) lookup operation using the group start and end indices.
Following this logic, when the candidate block size is 3 the loop needs to perform n / 3 peak checks. The complexity becomes a sum: n/a + n/b + ... + n/n where the denominators (a, b, ...) are the factors of n.
Short story: The complexity of n.d(n) operations is O(n.log(log(n))).
Longer version:
If you've been doing the Codility Lessons you'll remember from the Lesson 8: Prime and composite numbers that the sum of harmonic number operations will give O(log(n)) complexity. We've got a reduced set, because we're only looking at factor denominators. Lesson 9: Sieve of Eratosthenes shows how the sum of reciprocals of primes is O(log(log(n))) and claims that 'the proof is non-trivial'. In this case Wikipedia tells us that the sum of divisors sigma(n) has an upper bound (see Robin's inequality, about half way down the page).
Does that completely answer your question? Suggestions on how to improve my python code are also very welcome!
def solution(data):
length = len(data)
# array ends can't be peaks, len < 3 must return 0
if length < 3:
return 0
peaks = [0] * length
# compute a list of 'peaks to the left' in O(n) time
for index in range(2, length):
peaks[index] = peaks[index - 1]
# check if there was a peak to the left, add it to the count
if data[index - 1] > data[index - 2] and data[index - 1] > data[index]:
peaks[index] += 1
# candidate is the block size we're going to test
for candidate in range(3, length + 1):
# skip if not a factor
if length % candidate != 0:
continue
# test at each point n / block
valid = True
index = candidate
while index != length:
# if no peak in this block, break
if peaks[index] == peaks[index - candidate]:
valid = False
break
index += candidate
# one additional check since peaks[length] is outside of array
if index == length and peaks[index - 1] == peaks[index - candidate]:
valid = False
if valid:
return length // candidate
return 0
Credits:
Major kudos to #tmyklebu for his SO answer which helped me a lot.
I don't think that the time complexity of your algorithm is O(Nlog(logN)).
However, it is certainly much less than O(N^2). This is because your inner loop is entered only k times, where k is the number of factors of N. The number of factors of an integer can be seen in this link: http://www.cut-the-knot.org/blue/NumberOfFactors.shtml
I may be inaccurate but from the link it seems,
k ~ logN * logN * logN ...
Also, the inner loop has a complexity of O(N) since the number of peaks can be N/2 in the worst case.
Hence, in my opinion, the complexity of your algorithm is O(NlogN) at best but it must be sufficient to clear all test cases.
#radicality
There's at least one point where you can optimize the number of passes in the second loop to O(sqrt(N)) -- collect divisors of N and iterate through them only.
That will make your algo a little less "brute force".
Problem definition allows for O(N) space complexity. You can store divisors without violating this condition.
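A small sketch of collecting the divisors in O(sqrt(N)) in Python (my own illustration of the suggestion above):
def divisors(n):
    # Pair each small divisor d with its partner n // d.
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    return small + large[::-1]  # all divisors in increasing order

print(divisors(12))  # [1, 2, 3, 4, 6, 12]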
This is my solution based on prefix sums. Hope it helps:
class Solution {
public int solution(int[] A) {
int n = A.length;
int result = 1;
if (n < 3)
return 0;
int[] prefixSums = new int[n];
for (int i = 1; i < n-1; i++)
if (A[i] > A[i-1] && A[i] > A[i+1])
prefixSums[i] = prefixSums[i-1] + 1;
else
prefixSums[i] = prefixSums[i-1];
prefixSums[n-1] = prefixSums[n-2];
if (prefixSums[n-1] <= 1)
return prefixSums[n-1];
for (int i = 2; i <= prefixSums[n-2]; i++) {
if (n % i != 0)
continue;
int prev = 0;
boolean containsPeak = true;
for (int j = n/i - 1; j < n; j += n/i) {
if (prefixSums[j] == prev) {
containsPeak = false;
break;
}
prev = prefixSums[j];
}
if (containsPeak)
result = i;
}
return result;
}
}
def solution(A):
length = len(A)
if length <= 2:
return 0
peek_indexes = []
for index in range(1, length-1):
if A[index] > A[index - 1] and A[index] > A[index + 1]:
peek_indexes.append(index)
for block in range(3, int((length/2)+1)):
if length % block == 0:
index_to_check = 0
temp_blocks = 0
for peek_index in peek_indexes:
if peek_index >= index_to_check and peek_index < index_to_check + block:
temp_blocks += 1
index_to_check = index_to_check + block
if length/block == temp_blocks:
return temp_blocks
if len(peek_indexes) > 0:
return 1
else:
return 0
print(solution([1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2, 1, 2, 5, 2]))
I just found the factors first,
then iterated over A and tested every number of blocks to see which is the greatest valid block division.
This is the code that got 100 (in Java):
https://app.codility.com/demo/results/training9593YB-39H/
A javascript solution with complexity of O(N * log(log(N))).
function solution(A) {
let N = A.length;
if (N < 3) return 0;
let peaks = 0;
let peaksTillNow = [ 0 ];
let dividers = [];
for (let i = 1; i < N - 1; i++) {
if (A[i - 1] < A[i] && A[i] > A[i + 1]) peaks++;
peaksTillNow.push(peaks);
if (N % i === 0) dividers.push(i);
}
peaksTillNow.push(peaks);
if (peaks === 0) return 0;
let blocks;
let result = 1;
for (blocks of dividers) {
let K = N / blocks;
let prevPeaks = 0;
let OK = true;
for (let i = 1; i <= blocks; i++) {
if (peaksTillNow[i * K - 1] > prevPeaks) {
prevPeaks = peaksTillNow[i * K - 1];
} else {
OK = false;
break;
}
}
if (OK) result = blocks;
}
return result;
}
Solution with C# code
public int GetPeaks(int[] InputArray)
{
List<int> lstPeaks = new List<int>();
lstPeaks.Add(0);
for (int Index = 1; Index < (InputArray.Length - 1); Index++)
{
if (InputArray[Index - 1] < InputArray[Index] && InputArray[Index] > InputArray[Index + 1])
{
lstPeaks.Add(1);
}
else
{
lstPeaks.Add(0);
}
}
lstPeaks.Add(0);
int totalEqBlocksWithPeaks = 0;
for (int factor = 1; factor <= InputArray.Length; factor++)
{
if (InputArray.Length % factor == 0)
{
int BlockLength = InputArray.Length / factor;
int BlockCount = factor;
bool isAllBlocksHasPeak = true;
for (int CountIndex = 1; CountIndex <= BlockCount; CountIndex++)
{
int BlockStartIndex = CountIndex == 1 ? 0 : (CountIndex - 1) * BlockLength;
int BlockEndIndex = (CountIndex * BlockLength) - 1;
if (!(lstPeaks.GetRange(BlockStartIndex, BlockLength).Sum() > 0))
{
isAllBlocksHasPeak = false;
}
}
if (isAllBlocksHasPeak)
totalEqBlocksWithPeaks++;
}
}
return totalEqBlocksWithPeaks;
}
There is actually an O(n) runtime complexity solution for this task, so this is a humble attempt to share that.
The trick to go from the proposed O(n * loglogn) solutions to O(n) is to calculate the maximum gap between any two peaks (or a leading or trailing peak to the corresponding endpoint).
This can be done while building the peak hash in the first O(n) loop.
Then, if the gap between two consecutive peaks is 'g', the minimum group size must be 'g/2'. It will simply be 'g' between the start and the first peak, or between the last peak and the end. Also, there will be at least one peak in any group from group size 'g' upwards, so the range of group sizes to check is: g/2, 1+g/2, 2+g/2, ... g.
Therefore, the runtime is the sum over d = g/2, g/2+1, ..., g of n/d, where 'd' is the candidate group size.
sum over d = g/2, 1 + g/2, ..., g of n/d = n/(g/2) + n/(1 + g/2) + ... + n/g
if g/2 = 5 (g = 10), this is n/5 + n/6 + n/7 + n/8 + n/9 + n/10 = n(1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10)
If you replace each term with the largest one (1/5), you get sum <= 6 * (n/5), which is on the order of n.
Now, generalising this, every element is replaced with n / (g/2).
The number of items from g/2 to g is 1 + g/2 since there are (g - g/2 + 1) items.
So, the whole sum is: n/(g/2) * (g/2 + 1) = n + 2n/g < 3n.
Therefore, the bound on the total number of operations is O(n).
The code, implementing this in C++, is here:
int solution(vector<int> &A)
{
int sizeA = A.size();
vector<bool> hash(sizeA, false);
int min_group_size = 2;
int pi = 0;
for (int i = 1, pi = 0; i < sizeA - 1; ++i) {
const int e = A[i];
if (e > A[i - 1] && e > A[i + 1]) {
hash[i] = true;
int diff = i - pi;
if (pi) diff /= 2;
if (diff > min_group_size) min_group_size = diff;
pi = i;
}
}
min_group_size = min(min_group_size, sizeA - pi);
vector<int> hash_next(sizeA, 0);
for (int i = sizeA - 2; i >= 0; --i) {
hash_next[i] = hash[i] ? i : hash_next[i + 1];
}
for (int group_size = min_group_size; group_size <= sizeA; ++group_size) {
if (sizeA % group_size != 0) continue;
int number_of_groups = sizeA / group_size;
int group_index = 0;
for (int peak_index = 0; peak_index < sizeA; peak_index = group_index * group_size) {
peak_index = hash_next[peak_index];
if (!peak_index) break;
int lower_range = group_index * group_size;
int upper_range = lower_range + group_size - 1;
if (peak_index > upper_range) {
break;
}
++group_index;
}
if (number_of_groups == group_index) return number_of_groups;
}
return 0;
}
function solution(A) {
var prev, curr, total = 0;
for (var i=1; i<A.length; i++) {
if (curr == 0) {
curr = A[i];
} else {
if(A[i] != curr) {
if (prev != 0) {
if ((prev < curr && A[i] < curr) || (prev > curr && A[i] > curr)) {
total += 1;
}
} else {
prev = curr;
total += 1;
}
prev = curr;
curr = A[i];
}
}
}
if(prev != curr) {
total += 1;
}
return total;
}
I agree with GnomeDePlume's answer... the piece looking for the divisors in the proposed solution is O(N), and that could be decreased to O(sqrt(N)) by using the algorithm provided in the lesson text.
So, just adding to that, here is my solution in Java that solves the problem within the required complexity.
Be aware, it has way more code than yours - some cleanup (debug sysouts and comments) would always be possible :-)
public int solution(int[] A) {
int result = 0;
int N = A.length;
// mark accumulated peaks
int[] peaks = new int[N];
int count = 0;
for (int i = 1; i < N -1; i++) {
if (A[i-1] < A[i] && A[i+1] < A[i])
count++;
peaks[i] = count;
}
// set peaks count on last elem as it will be needed during div checks
peaks[N-1] = count;
// check count
if (count > 0) {
// if only one peak, will need the whole array
if (count == 1)
result = 1;
else {
// at this point (peaks > 1) we know at least the single group will satisfy the criteria
// so set result to 1, then check for bigger numbers of groups
result = 1;
// for each divisor of N, check if that number of groups work
Integer[] divisors = getDivisors(N);
// result will be at least 1 at this point
boolean candidate;
int divisor, startIdx, endIdx;
// check from top value to bottom - stop when one is found
// for div 1 we know num groups is 1, and we already know that is the minimum. No need to check.
// for div = N we know it's impossible, as all elements would have to be peaks (impossible by definition)
for (int i = divisors.length-2; i > 0; i--) {
candidate = true;
divisor = divisors[i];
for (int j = 0; j < N; j+= N/divisor) {
startIdx = (j == 0 ? j : j-1);
endIdx = j + N/divisor-1;
if (peaks[startIdx] == peaks[endIdx]) {
candidate = false;
break;
}
}
// if all groups had at least 1 peak, this is the result!
if (candidate) {
result = divisor;
break;
}
}
}
}
return result;
}
// returns ordered array of all divisors of N
private Integer[] getDivisors(int N) {
Set<Integer> set = new TreeSet<Integer>();
double sqrt = Math.sqrt(N);
int i = 1;
for (; i < sqrt; i++) {
if (N % i == 0) {
set.add(i);
set.add(N/i);
}
}
if (i * i == N)
set.add(i);
return set.toArray(new Integer[]{});
}
Thanks,
Davi
Given a sorted list of numbers, I would like to find the longest subsequence where the differences between successive elements are geometrically increasing. So if the list is
1, 2, 3, 4, 7, 15, 27, 30, 31, 81
then the subsequence is 1, 3, 7, 15, 31. Alternatively consider 1, 2, 5, 6, 11, 15, 23, 41, 47 which has subsequence 5, 11, 23, 47 with a = 3 and k = 2.
Can this be solved in O(n^2) time? Where n is the length of the list.
I am interested both in the general case where the progression of differences is ak, ak^2, ak^3, etc., where both a and k are integers, and in the special case where a = 1, so the progression of differences is k, k^2, k^3, etc.
Update
I have improved the algorithm so that it takes an average of O(M + N^2) time and O(M + N) memory. Mainly it is the same as the procedure described below, but to calculate the possible factors A, K for each difference D, I preload a table. This table takes less than a second to be constructed for M = 10^7.
I have made a C implementation that takes less than 10 minutes to solve for N = 10^5 different random integer elements.
Here is the source code in C: To execute just do: gcc -O3 -o findgeo findgeo.c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <memory.h>
#include <time.h>
struct Factor {
int a;
int k;
struct Factor *next;
};
struct Factor *factors = 0;
int factorsL=0;
void ConstructFactors(int R) {
int a,k,C;
int R2;
struct Factor *f;
float seconds;
clock_t end;
clock_t start = clock();
if (factors) free(factors);
factors = malloc (sizeof(struct Factor) *((R>>1) + 1));
R2 = R>>1 ;
for (a=0;a<=R2;a++) {
factors[a].a= a;
factors[a].k=1;
factors[a].next=NULL;
}
factorsL=R2+1;
R2 = floor(sqrt(R));
for (k=2; k<=R2; k++) {
a=1;
C=a*k*(k+1);
while (C<R) {
C >>= 1;
f=malloc(sizeof(struct Factor));
*f=factors[C];
factors[C].a=a;
factors[C].k=k;
factors[C].next=f;
a++;
C=a*k*(k+1);
}
}
end = clock();
seconds = (float)(end - start) / CLOCKS_PER_SEC;
printf("Construct Table: %f\n",seconds);
}
void DestructFactors() {
int i;
struct Factor *f;
for (i=0;i<factorsL;i++) {
while (factors[i].next) {
f=factors[i].next->next;
free(factors[i].next);
factors[i].next=f;
}
}
free(factors);
factors=NULL;
factorsL=0;
}
int ipow(int base, int exp)
{
int result = 1;
while (exp)
{
if (exp & 1)
result *= base;
exp >>= 1;
base *= base;
}
return result;
}
void findGeo(int **bestSolution, int *bestSolutionL,int *Arr, int L) {
int i,j,D;
int mustExistToBeBetter;
int R=Arr[L-1]-Arr[0];
int *possibleSolution;
int possibleSolutionL=0;
int exp;
int NextVal;
int idx;
int kMax,aMax;
float seconds;
clock_t end;
clock_t start = clock();
kMax = floor(sqrt(R));
aMax = floor(R/2);
ConstructFactors(R);
*bestSolutionL=2;
*bestSolution=malloc(0);
possibleSolution = malloc(sizeof(int)*(R+1));
struct Factor *f;
int *H=malloc(sizeof(int)*(R+1));
memset(H,0, sizeof(int)*(R+1));
for (i=0;i<L;i++) {
H[ Arr[i]-Arr[0] ]=1;
}
for (i=0; i<L-2;i++) {
for (j=i+2; j<L; j++) {
D=Arr[j]-Arr[i];
if (D & 1) continue;
f = factors + (D >>1);
while (f) {
idx=Arr[i] + f->a * f->k - Arr[0];
if ((f->k <= kMax)&& (f->a<aMax)&&(idx<=R)&&H[idx]) {
if (f->k ==1) {
mustExistToBeBetter = Arr[i] + f->a * (*bestSolutionL);
} else {
mustExistToBeBetter = Arr[i] + f->a * f->k * (ipow(f->k,*bestSolutionL) - 1)/(f->k-1);
}
if (mustExistToBeBetter< Arr[L-1]+1) {
idx= floor(mustExistToBeBetter - Arr[0]);
} else {
idx = R+1;
}
if ((idx<=R)&&H[idx]) {
possibleSolution[0]=Arr[i];
possibleSolution[1]=Arr[i] + f->a*f->k;
possibleSolution[2]=Arr[j];
possibleSolutionL=3;
exp = f->k * f->k * f->k;
NextVal = Arr[j] + f->a * exp;
idx=NextVal - Arr[0];
while ( (idx<=R) && H[idx]) {
possibleSolution[possibleSolutionL]=NextVal;
possibleSolutionL++;
exp = exp * f->k;
NextVal = NextVal + f->a * exp;
idx=NextVal - Arr[0];
}
if (possibleSolutionL > *bestSolutionL) {
free(*bestSolution);
*bestSolution = possibleSolution;
possibleSolution = malloc(sizeof(int)*(R+1));
*bestSolutionL=possibleSolutionL;
kMax= floor( pow (R, 1/ (*bestSolutionL) ));
aMax= floor(R / (*bestSolutionL));
}
}
}
f=f->next;
}
}
}
if (*bestSolutionL == 2) {
free(*bestSolution);
possibleSolutionL=0;
for (i=0; (i<2)&&(i<L); i++ ) {
possibleSolution[possibleSolutionL]=Arr[i];
possibleSolutionL++;
}
*bestSolution = possibleSolution;
*bestSolutionL=possibleSolutionL;
} else {
free(possibleSolution);
}
DestructFactors();
free(H);
end = clock();
seconds = (float)(end - start) / CLOCKS_PER_SEC;
printf("findGeo: %f\n",seconds);
}
int compareInt (const void * a, const void * b)
{
return *(int *)a - *(int *)b;
}
int main(void) {
int N=100000;
int R=10000000;
int *A = malloc(sizeof(int)*N);
int *Sol;
int SolL;
int i;
int *S=malloc(sizeof(int)*R);
for (i=0;i<R;i++) S[i]=i+1;
for (i=0;i<N;i++) {
int r = rand() % (R-i);
A[i]=S[r];
S[r]=S[R-i-1];
}
free(S);
qsort(A,N,sizeof(int),compareInt);
/*
int step = floor(R/N);
A[0]=1;
for (i=1;i<N;i++) {
A[i]=A[i-1]+step;
}
*/
findGeo(&Sol,&SolL,A,N);
printf("[");
for (i=0;i<SolL;i++) {
if (i>0) printf(",");
printf("%d",Sol[i]);
}
printf("]\n");
printf("Size: %d\n",SolL);
free(Sol);
free(A);
return EXIT_SUCCESS;
}
Demostration
I will try to demonstrate that the algorithm I proposed runs, on average, in the stated O(M + N^2) time for a uniformly distributed random sequence. I'm not a mathematician and I am not used to doing this kind of demonstration, so please feel free to correct any error that you can see.
There are 4 nested loops; the first two are the N^2 factor (the M is for the calculation of the possible-factors table).
The third loop is executed only once on average for each pair. You can see this by checking the size of the pre-calculated factors table: its size is M when N -> inf, so the average number of steps per pair is M/M = 1.
So the proof comes down to checking that the fourth loop (the one that traverses the generated sequences) is executed at most O(N^2) times over all pairs.
To demonstrate that, I will consider two cases: one where M >> N and another where M ~= N, where M is the maximum difference of the initial array: M = S(n) - S(1).
For the first case (M >> N), the probability of finding a coincidence is p = N/M. To start a sequence, both the second element and the (b+1)-th element must coincide, where b is the length of the best sequence so far. So the loop will enter [formula missing] times, and the average length of such a series (supposing an infinite series) is [formula missing]. So the total number of times the loop will be executed is [formula missing], and this is close to 0 when M >> N. The problem is when M ~= N.
Now let's consider the case where M ~= N. Let b be the best sequence length so far. For the case A = k = 1, the sequence must start before N - b, so the number of sequences will be N - b, and the number of passes through the loop will be at most (N - b)*b.
For A > 1 and k = 1 we can extrapolate to [formula missing], where d is M/N (the average distance between numbers). If we add over all A's from 1 to dN/b, we see an upper limit of: [formula missing]
For the cases where k >= 2, we see that the sequence must start before [formula missing], so the loop will enter an average of [formula missing] times, and adding over all A's from 1 to dN/k^b gives a limit of [formula missing]
Here, the worst case is when b is minimum. Because we are considering minimum series, let's consider the very worst case of b = 2, so the number of passes of the 4th loop for a given k will be less than [formula missing].
And if we add over all k's from 2 to infinity, the total will be: [formula missing]
So adding all the passes for k = 1 and k >= 2, we have a maximum of: [formula missing]
Note that d=M/N=1/p.
So we have two limits: one that goes to infinity when d = 1/p = M/N goes to 1, and another that goes to infinity when d goes to infinity. So our limit is the minimum of both, and the worst case is when the two equations cross. So if we solve the equation: [formula missing]
we see that the maximum is when d = 1.353.
So it is demonstrated that the fourth loop will be processed less than 1.55*N^2 times in total.
Of course, this is for the average case. For the worst case, I am not able to find a way to generate series whose fourth loop runs more than O(N^2) times, and I strongly believe they do not exist, but I am not mathematician enough to prove it.
Old Answer
Here is a solution with an average complexity of O((n^2)*cube_root(M)), where M is the difference between the first and last elements of the array, and memory requirements of O(M+N).
1.- Construct an array H of length M so that H[i - S[0]] = true if i exists in the initial array and false if it does not.
2.- For each pair in the array S[j], S[i] do:
2.1 Check if they can be the first and third elements of a possible solution. To do so, calculate all possible A, K pairs that meet the equation S(i) = S(j) + AK + AK^2. Check this SO question to see how to solve this problem. Then check that the second element exists: S[j] + A*K.
2.2 Also check that the element one position beyond the best solution we currently have exists. For example, if the best solution we have so far is 4 elements long, then check that the element S[j] + AK + AK^2 + AK^3 + AK^4 exists.
2.3 If 2.1 and 2.2 are true, then iterate to see how long this series is and set it as the bestSolution if it is longer than the current best.
Here is the code in javascript:
function getAKs(A) {
if (A / 2 != Math.floor(A / 2)) return [];
var solution = [];
var i;
var SR3 = Math.pow(A, 1 / 3);
for (i = 1; i <= SR3; i++) {
var B, C;
C = i;
B = A / (C * (C + 1));
if (B == Math.floor(B)) {
solution.push([B, C]);
}
B = i;
C = (-1 + Math.sqrt(1 + 4 * A / B)) / 2;
if (C == Math.floor(C)) {
solution.push([B, C]);
}
}
return solution;
}
function getBestGeometricSequence(S) {
var i, j, k;
var bestSolution = [];
var H = Array(S[S.length-1]-S[0]);
for (i = 0; i < S.length; i++) H[S[i] - S[0]] = true;
for (i = 0; i < S.length; i++) {
for (j = 0; j < i; j++) {
var PossibleAKs = getAKs(S[i] - S[j]);
for (k = 0; k < PossibleAKs.length; k++) {
var A = PossibleAKs[k][0];
var K = PossibleAKs[k][1];
var mustExistToBeBetter;
if (K==1) {
mustExistToBeBetter = S[j] + A * bestSolution.length;
} else {
mustExistToBeBetter = S[j] + A * K * (Math.pow(K,bestSolution.length) - 1)/(K-1);
}
if ((H[S[j] + A * K - S[0]]) && (H[mustExistToBeBetter - S[0]])) {
var possibleSolution=[S[j],S[j] + A * K,S[i]];
exp = K * K * K;
var NextVal = S[i] + A * exp;
while (H[NextVal - S[0]] === true) {
possibleSolution.push(NextVal);
exp = exp * K;
NextVal = NextVal + A * exp;
}
if (possibleSolution.length > bestSolution.length) {
bestSolution = possibleSolution;
}
}
}
}
}
return bestSolution;
}
//var A= [ 1, 2, 3,5,7, 15, 27, 30,31, 81];
var A=[];
for (i=1;i<=3000;i++) {
A.push(i);
}
var sol=getBestGeometricSequence(A);
$("#result").html(JSON.stringify(sol));
You can check the code here: http://jsfiddle.net/6yHyR/1/
I maintain the other solution because I believe that it is still better when M is very big compared to N.
Just to start with something, here is a simple solution in JavaScript:
var input = [0.7, 1, 2, 3, 4, 7, 15, 27, 30, 31, 81],
output = [], indexes, values, i, index, value, i_max_length,
i1, i2, i3, j1, j2, j3, difference12a, difference23a, difference12b, difference23b,
scale_factor, common_ratio_1a, common_ratio_2a, common_ratio_1b, common_ratio_2b,
error, EPSILON = 1e-9, common_ratio_is_integer,
resultDiv = $("#result");
for (i1 = 0; i1 < input.length - 2; ++i1) {
for (i2 = i1 + 1; i2 < input.length - 1; ++i2) {
scale_factor = difference12a = input[i2] - input[i1];
for (i3 = i2 + 1; i3 < input.length; ++i3) {
difference23a = input[i3] - input[i2];
common_ratio_1a = difference23a / difference12a;
common_ratio_2a = Math.round(common_ratio_1a);
error = Math.abs((common_ratio_2a - common_ratio_1a) / common_ratio_1a);
common_ratio_is_integer = error < EPSILON;
if (common_ratio_2a > 1 && common_ratio_is_integer) {
indexes = [i1, i2, i3];
j1 = i2;
j2 = i3
difference12b = difference23a;
for (j3 = j2 + 1; j3 < input.length; ++j3) {
difference23b = input[j3] - input[j2];
common_ratio_1b = difference23b / difference12b;
common_ratio_2b = Math.round(common_ratio_1b);
error = Math.abs((common_ratio_2b - common_ratio_1b) / common_ratio_1b);
common_ratio_is_integer = error < EPSILON;
if (common_ratio_is_integer && common_ratio_2a === common_ratio_2b) {
indexes.push(j3);
j1 = j2;
j2 = j3
difference12b = difference23b;
}
}
values = [];
for (i = 0; i < indexes.length; ++i) {
index = indexes[i];
value = input[index];
values.push(value);
}
output.push(values);
}
}
}
}
if (output.length > 0) {
i_max_length = 0;
for (i = 1; i < output.length; ++i) {
if (output[i_max_length].length < output[i].length)
i_max_length = i;
}
for (i = 0; i < output.length; ++i) {
if (output[i_max_length].length == output[i].length)
resultDiv.append("<p>[" + output[i] + "]</p>");
}
}
Output:
[1, 3, 7, 15, 31]
I find the first three items of every subsequence candidate, calculate the scale factor and the common ratio from them, and if the common ratio is an integer, then I iterate over the remaining elements after the third one and add to the subsequence those that fit into the geometric progression defined by the first three items. As a last step, I select the subsequence(s) with the largest length.
In fact it is exactly the same question as Longest equally-spaced subsequence; you just have to consider the logarithm of your data. If the sequence is a, ak, ak^2, ak^3, the logarithmic values are ln(a), ln(a) + ln(k), ln(a) + 2ln(k), ln(a) + 3ln(k), so they are equally spaced. The converse is of course also true. There is a lot of different code in the question above.
I don't think the special case a = 1 can be solved more efficiently than by adapting one of the algorithms above.
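For instance, 2, 6, 18, 54 (a = 2, k = 3) maps to ln 2, ln 2 + ln 3, ln 2 + 2 ln 3, ln 2 + 3 ln 3, which is an arithmetic progression with common difference ln 3.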
Here is my solution in JavaScript. It should be close to O(n^2), except maybe in some pathological cases.
function bsearch(Arr,Val, left,right) {
if (left == right) return left;
var m=Math.floor((left + right) /2);
if (Val <= Arr[m]) {
return bsearch(Arr,Val,left,m);
} else {
return bsearch(Arr,Val,m+1,right);
}
}
function findLongestGeometricSequence(S) {
var bestSolution=[];
var i,j,k;
var H={};
for (i=0;i<S.length;i++) H[S[i]]=true;
for (i=0;i<S.length;i++) {
for (j=0;j<i;j++) {
for (k=j+1;k<i;) {
var possibleSolution=[S[j],S[k],S[i]];
var K = (S[i] - S[k]) / (S[k] - S[j]);
var A = (S[k] - S[j]) * (S[k] - S[j]) / (S[i] - S[k]);
if ((Math.floor(K) == K) && (Math.floor(A)==A)) {
exp= K*K*K;
var NextVal= S[i] + A * exp;
while (H[NextVal] === true) {
possibleSolution.push(NextVal);
exp = exp * K;
NextVal= NextVal + A * exp;
}
if (possibleSolution.length > bestSolution.length)
bestSolution=possibleSolution;
K--;
} else {
K=Math.floor(K);
}
if (K>0) {
var NextPossibleMidValue= (S[i] + K*S[j]) / (K +1);
k++;
if (S[k]<NextPossibleMidValue) {
k=bsearch(S,NextPossibleMidValue, k+1, i);
}
} else {
k=i;
}
}
}
}
return bestSolution;
}
function Run() {
var MyS= [0.7, 1, 2, 3, 4, 5,6,7, 15, 27, 30,31, 81];
var sol = findLongestGeometricSequence(MyS);
alert(JSON.stringify(sol));
}
Small Explanation
If we take 3 numbers of the array S(j) < S(k) < S(i), we can calculate a and k so that: S(k) = S(j) + a*k and S(i) = S(k) + a*k^2 (2 equations and 2 unknowns). With that in mind, you can check whether a number exists in the array such that S(next) = S(i) + a*k^3. If that is the case, then continue checking for S(next2) = S(next) + a*k^4 and so on.
This would be an O(n^3) solution, but you can take advantage of the fact that k must be an integer in order to limit the S(k) points selected.
In case a is known, you can calculate k directly, and you need to check only one number in the third loop, so this case is clearly O(n^2).
I think this task is related to the recently posted Longest equally-spaced subsequence question. I've just modified my algorithm in Python a little bit:
from math import sqrt
def add_precalc(precalc, end, (a, k), count, res, N):
if end + a * k ** res[1]["count"] > N: return
x = end + a * k ** count
if x > N or x < 0: return
if precalc[x] is None: return
if (a, k) not in precalc[x]:
precalc[x][(a, k)] = count
return
def factors(n):
res = []
for x in range(1, int(sqrt(n)) + 1):
if n % x == 0:
y = n / x
res.append((x, y))
res.append((y, x))
return res
def work(input):
precalc = [None] * (max(input) + 1)
for x in input: precalc[x] = {}
N = max(input)
res = ((0, 0), {"end":0, "count":0})
for i, x in enumerate(input):
for y in input[i::-1]:
for a, k in factors(x - y):
if (a, k) in precalc[x]: continue
add_precalc(precalc, x, (a, k), 2, res, N)
for step, count in precalc[x].iteritems():
count += 1
if count > res[1]["count"]: res = (step, {"end":x, "count":count})
add_precalc(precalc, x, step, count, res, N)
precalc[x] = None
d = [res[1]["end"]]
for x in range(res[1]["count"] - 1, 0, -1):
d.append(d[-1] - res[0][0] * res[0][1] ** x)
d.reverse()
return d
Explanation
Traversing the array:
For each previous element of the array, calculate the factors of the difference between the current and that previous element, then precalculate the next possible element of the sequence and save it to the precalc array.
So when arriving at element i, all possible sequences ending at element i are already in the precalc array, so we only have to calculate the next possible element and save it to precalc.
Currently there's one place in the algorithm that could be slow - the factorization of each previous number. I think it could be made faster with two optimizations:
a more efficient factorization algorithm
a way to avoid looking at every element of the array, using the fact that the array is sorted and that sequences have already been precalculated
Python:
def subseq(a):
seq = []
aset = set(a)
for i, x in enumerate(a):
# elements after x
for j, x2 in enumerate(a[i+1:]):
j += i + 1 # enumerate starts j at 0, we want a[j] = x2
bk = x2 - x # b*k (assuming k and k's exponent start at 1)
# given b*k, bruteforce values of k
for k in range(1, bk + 1):
items = [x, x2] # our subsequence so far
nextdist = bk * k # what x3 - x2 should look like
while items[-1] + nextdist in aset:
items.append(items[-1] + nextdist)
nextdist *= k
if len(items) > len(seq):
seq = items
return seq
Running time is O(dn^3), where d is the (average?) distance between two elements,
and n is of course len(a).
I was asked this question in a job interview, and I'd like to know how others would solve it. I'm most comfortable with Java, but solutions in other languages are welcome.
Given an array of numbers, nums, return an array of numbers products, where products[i] is the product of all nums[j], j != i.
Input : [1, 2, 3, 4, 5]
Output: [(2*3*4*5), (1*3*4*5), (1*2*4*5), (1*2*3*5), (1*2*3*4)]
= [120, 60, 40, 30, 24]
You must do this in O(N) without using division.
An explanation of polygenelubricants method is:
The trick is to construct the arrays (in the case for 4 elements):
{ 1, a[0], a[0]*a[1], a[0]*a[1]*a[2], }
{ a[1]*a[2]*a[3], a[2]*a[3], a[3], 1, }
Both of which can be done in O(n) by starting at the left and right edges respectively.
Then, multiplying the two arrays element-by-element gives the required result.
My code would look something like this:
int a[N]; // This is the input
int products_below[N];
int p = 1;
for (int i = 0; i < N; ++i) {
products_below[i] = p;
p *= a[i];
}
int products_above[N];
p = 1;
for (int i = N - 1; i >= 0; --i) {
products_above[i] = p;
p *= a[i];
}
int products[N]; // This is the result
for (int i = 0; i < N; ++i) {
products[i] = products_below[i] * products_above[i];
}
If you need the solution be O(1) in space as well, you can do this (which is less clear in my opinion):
int a[N]; // This is the input
int products[N];
// Get the products below the current index
int p = 1;
for (int i = 0; i < N; ++i) {
products[i] = p;
p *= a[i];
}
// Get the products above the current index
p = 1;
for (int i = N - 1; i >= 0; --i) {
products[i] *= p;
p *= a[i];
}
Here is a small recursive function (in C++) to do the modification in-place. It requires O(n) extra space (on stack) though. Assuming the array is in a and N holds the array length, we have:
int multiply(int *a, int fwdProduct, int indx) {
int revProduct = 1;
if (indx < N) {
revProduct = multiply(a, fwdProduct*a[indx], indx+1);
int cur = a[indx];
a[indx] = fwdProduct * revProduct;
revProduct *= cur;
}
return revProduct;
}
Here's my attempt to solve it in Java. Apologies for the non-standard formatting, but the code has a lot of duplication, and this is the best I can do to make it readable.
import java.util.Arrays;
public class Products {
static int[] products(int... nums) {
final int N = nums.length;
int[] prods = new int[N];
Arrays.fill(prods, 1);
for (int
i = 0, pi = 1 , j = N-1, pj = 1 ;
(i < N) && (j >= 0) ;
pi *= nums[i++] , pj *= nums[j--] )
{
prods[i] *= pi ; prods[j] *= pj ;
}
return prods;
}
public static void main(String[] args) {
System.out.println(
Arrays.toString(products(1, 2, 3, 4, 5))
); // prints "[120, 60, 40, 30, 24]"
}
}
The loop invariants are pi = nums[0] * nums[1] *.. nums[i-1] and pj = nums[N-1] * nums[N-2] *.. nums[j+1]. The i part on the left is the "prefix" logic, and the j part on the right is the "suffix" logic.
Recursive one-liner
Jasmeet gave a (beautiful!) recursive solution; I've turned it into this (hideous!) Java one-liner. It does in-place modification, with O(N) temporary space in the stack.
static int multiply(int[] nums, int p, int n) {
return (n == nums.length) ? 1
: nums[n] * (p = multiply(nums, nums[n] * (nums[n] = p), n + 1))
+ 0*(nums[n] *= p);
}
int[] arr = {1,2,3,4,5};
multiply(arr, 1, 0);
System.out.println(Arrays.toString(arr));
// prints "[120, 60, 40, 30, 24]"
Translating Michael Anderson's solution into Haskell:
otherProducts xs = zipWith (*) below above
where below = scanl (*) 1 $ init xs
above = tail $ scanr (*) 1 xs
Sneakily circumventing the "no divisions" rule:
from math import exp, log

total = 0.0                       # sum of the logs of all elements
for i in range(len(a)):
    total += log(a[i])

output = [0.0] * len(a)
for i in range(len(a)):
    output[i] = exp(total - log(a[i]))   # "divide" by a[i] in log space
Here you go, simple and clean solution with O(N) complexity:
int[] a = {1,2,3,4,5};
int[] r = new int[a.length];
int x = 1;
r[0] = 1;
for (int i=1;i<a.length;i++){
r[i]=r[i-1]*a[i-1];
}
for (int i=a.length-1;i>0;i--){
x=x*a[i];
r[i-1]=x*r[i-1];
}
for (int i=0;i<r.length;i++){
System.out.println(r[i]);
}
Travel left->right and keep saving the running product. Call it Past. -> O(n)
Travel right->left and keep the running product. Call it Future. -> O(n)
Result[i] = Past[i-1] * Future[i+1] -> O(n)
Past[-1] = 1 and Future[n+1] = 1.
O(n) overall; see the sketch below.
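Here is a minimal Java sketch of the Past/Future idea above (the method and array names are my own, not from the answer): one pass builds the running products from the left, one pass from the right, and a final pass combines them using the stated boundary values.
// Sketch of the Past/Future (prefix/suffix product) approach described above.
static int[] productsExceptSelf(int[] nums) {
    int n = nums.length;
    int[] past = new int[n];    // past[i]   = nums[0] * ... * nums[i]
    int[] future = new int[n];  // future[i] = nums[i] * ... * nums[n-1]
    for (int i = 0; i < n; i++) {
        past[i] = (i == 0 ? 1 : past[i - 1]) * nums[i];
    }
    for (int i = n - 1; i >= 0; i--) {
        future[i] = (i == n - 1 ? 1 : future[i + 1]) * nums[i];
    }
    int[] result = new int[n];
    for (int i = 0; i < n; i++) {
        int left = (i == 0) ? 1 : past[i - 1];        // Past[-1] = 1
        int right = (i == n - 1) ? 1 : future[i + 1]; // Future[n] = 1
        result[i] = left * right;
    }
    return result;
}
For [1, 2, 3, 4, 5] this yields [120, 60, 40, 30, 24], matching the expected output.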
C++, O(n):
long long prod = accumulate(in.begin(), in.end(), 1LL, multiplies<long long>());
transform(in.begin(), in.end(), back_inserter(res),
          [prod](long long x) { return prod / x; });
Here is my solution in modern C++. It makes use of std::transform and is pretty easy to remember.
Online code (wandbox).
#include<algorithm>
#include<iostream>
#include<vector>
using namespace std;
vector<int>& multiply_up(vector<int>& v){
v.insert(v.begin(),1);
transform(v.begin()+1, v.end()
,v.begin()
,v.begin()+1
,[](auto const& a, auto const& b) { return b*a; }
);
v.pop_back();
return v;
}
int main() {
vector<int> v = {1,2,3,4,5};
auto vr = v;
reverse(vr.begin(),vr.end());
multiply_up(v);
multiply_up(vr);
reverse(vr.begin(),vr.end());
transform(v.begin(),v.end()
,vr.begin()
,v.begin()
,[](auto const& a, auto const& b) { return b*a; }
);
for(auto& i: v) cout << i << " ";
}
Precalculate the product of the numbers to the left and to the right of each element.
For every element the desired value is the product of its neighbors' products.
#include <stdio.h>
unsigned array[5] = { 1,2,3,4,5};
int main(void)
{
unsigned idx;
unsigned left[5]
, right[5];
left[0] = 1;
right[4] = 1;
/* calculate products of numbers to the left of [idx] */
for (idx=1; idx < 5; idx++) {
left[idx] = left[idx-1] * array[idx-1];
}
/* calculate products of numbers to the right of [idx] */
for (idx=4; idx-- > 0; ) {
right[idx] = right[idx+1] * array[idx+1];
}
for (idx=0; idx <5 ; idx++) {
printf("[%u] Product(%u*%u) = %u\n"
, idx, left[idx] , right[idx] , left[idx] * right[idx] );
}
return 0;
}
Result:
$ ./a.out
[0] Product(1*120) = 120
[1] Product(1*60) = 60
[2] Product(2*20) = 40
[3] Product(6*5) = 30
[4] Product(24*1) = 24
(UPDATE: now I look closer, this uses the same method as Michael Anderson, Daniel Migowski and polygenelubricants above)
Tricky:
Use the following:
public int[] calc(int[] params) {
    int n = params.length;
    int[] left = new int[n];   // left[i]  = product of params[0..i-1]
    int[] right = new int[n];  // right[i] = product of params[i+1..n-1]
    int fac1 = 1;
    int fac2 = 1;
    for (int i = 0; i < n; i++) {
        left[i] = fac1;
        right[n - 1 - i] = fac2;
        fac1 = fac1 * params[i];
        fac2 = fac2 * params[n - 1 - i];
    }
    int[] results = new int[n];
    for (int i = 0; i < n; i++) {
        results[i] = left[i] * right[i];
    }
    return results;
}
The off-by-one details (i versus i-1) are easy to get wrong, but that's the way to solve it.
This is O(n^2) but f# is soooo beautiful:
List.fold (fun seed i -> List.mapi (fun j x -> if i=j+1 then x else x*i) seed)
[1;1;1;1;1]
[1..5]
There is also an O(N^(3/2)) non-optimal solution. It is quite interesting, though.
First, precompute the product of each block of N^0.5 consecutive elements (this takes O(N) time). Then the product of all other values for a given number can be computed in O(N^0.5) time: multiply together the precomputed products of the other (N^0.5 - 1) blocks, and then multiply by the other (N^0.5 - 1) numbers inside the current number's own block. Doing this for each number gives O(N^(3/2)) total time.
Example:
4 6 7 2 3 1 9 5 8
partial results:
4*6*7 = 168
2*3*1 = 6
9*5*8 = 360
To calculate the value for 3, multiply the other groups' products, 168 * 360, and then multiply by the remaining elements of its own group, 2 * 1.
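Here is a rough Java sketch of that block idea (the names and the exact block-size choice are my assumptions, not from the answer): precompute one product per block of roughly sqrt(N) elements, then for each element combine the other blocks' products with the remaining members of its own block.
// Sketch of the O(N^(3/2)) block approach: one product per sqrt(N)-sized block,
// then, per element, combine the other blocks with the rest of its own block.
static long[] productsBySqrtBlocks(int[] a) {
    int n = a.length;
    if (n == 0) return new long[0];
    int blockSize = (int) Math.ceil(Math.sqrt(n));
    int blockCount = (n + blockSize - 1) / blockSize;

    long[] blockProduct = new long[blockCount];
    java.util.Arrays.fill(blockProduct, 1L);
    for (int i = 0; i < n; i++) {
        blockProduct[i / blockSize] *= a[i];
    }

    long[] result = new long[n];
    for (int i = 0; i < n; i++) {
        int myBlock = i / blockSize;
        long prod = 1L;
        for (int b = 0; b < blockCount; b++) {   // other blocks: O(sqrt(N))
            if (b != myBlock) prod *= blockProduct[b];
        }
        int start = myBlock * blockSize;
        int end = Math.min(start + blockSize, n);
        for (int j = start; j < end; j++) {      // own block, skipping a[i]: O(sqrt(N))
            if (j != i) prod *= a[j];
        }
        result[i] = prod;
    }
    return result;
}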
public static void main(String[] args) {
int[] arr = { 1, 2, 3, 4, 5 };
int[] result = { 1, 1, 1, 1, 1 };
for (int i = 0; i < arr.length; i++) {
for (int j = 0; j < i; j++) {
result[i] *= arr[j];
}
for (int k = arr.length - 1; k > i; k--) {
result[i] *= arr[k];
}
}
for (int i : result) {
System.out.println(i);
}
}
I came up with this solution and found it very clear. What do you think?
Based on Billz's answer -- sorry I can't comment, but here is a Scala version that correctly handles duplicate items in the list. Note that forcing the view still recomputes an O(n) product for every element, so it is O(n^2) rather than O(n):
val list1 = List(1, 7, 3, 3, 4, 4)
val view = list1.view.zipWithIndex map { x => list1.view.patch(x._2, Nil, 1).reduceLeft(_*_)}
view.force
returns:
List(1008, 144, 336, 336, 252, 252)
Adding my JavaScript solution here as I didn't find anyone suggesting this.
What is division, other than counting how many times you can subtract one number from another? I calculate the product of the whole array, then iterate over each element, subtracting the current element from the running dividend until it reaches zero:
// No division operation allowed
// keep subtracting the divisor from the dividend until the dividend is zero or less than the divisor
function calculateProducsExceptCurrent_NoDivision(input){
var res = [];
var totalProduct = 1;
//calculate the total product
for(var i = 0; i < input.length; i++){
totalProduct = totalProduct * input[i];
}
//populate the result array by "dividing" each value
for(var i = 0; i < input.length; i++){
var timesSubstracted = 0;
var divisor = input[i];
var dividend = totalProduct;
while(divisor <= dividend){
dividend = dividend - divisor;
timesSubstracted++;
}
res.push(timesSubstracted);
}
return res;
}
Just 2 passes up and down. Job done in O(N)
private static int[] multiply(int[] numbers) {
int[] multiplied = new int[numbers.length];
int total = 1;
multiplied[0] = 1;
for (int i = 1; i < numbers.length; i++) {
multiplied[i] = numbers[i - 1] * multiplied[i - 1];
}
for (int j = numbers.length - 2; j >= 0; j--) {
total *= numbers[j + 1];
multiplied[j] = total * multiplied[j];
}
return multiplied;
}
def productify(arr, prod, i):
if i < len(arr):
prod.append(arr[i - 1] * prod[i - 1]) if i > 0 else prod.append(1)
retval = productify(arr, prod, i + 1)
prod[i] *= retval
return retval * arr[i]
return 1
if __name__ == "__main__":
arr = [1, 2, 3, 4, 5]
prod = []
productify(arr, prod, 0)
print(prod)
Well, this solution can be considered C/C++-style.
Let's say we have an array "a" containing n elements,
like a[n]; then the pseudocode would be as below.
for(j=0;j<n;j++)
{
    prod[j]=1;
    for(i=0;i<n;i++)
    {
        if(i==j)
            continue;
        else
            prod[j]=prod[j]*a[i];
    }
}
One more solution, using division, with two traversals:
multiply all the elements together, then divide that total by each element in turn.
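A minimal Java sketch of this two-pass division variant (my own naming; it deliberately sidesteps the "no division" constraint and assumes the array contains no zeros):
// Product of all elements, then divide by each element in turn.
// Assumes no zeros in the input and ignores the "no division" constraint.
static long[] productsByDivision(int[] nums) {
    long total = 1L;
    for (int x : nums) {                     // first traversal: total product
        total *= x;
    }
    long[] result = new long[nums.length];
    for (int i = 0; i < nums.length; i++) {  // second traversal: divide out each element
        result[i] = total / nums[i];
    }
    return result;
}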
{-
Recursive solution using sqrt(n) subsets. Runs in O(n).
Recursively computes the solution on sqrt(n) subsets of size sqrt(n).
Then recurses on the product sum of each subset.
Then for each element in each subset, it computes the product with
the product sum of all other products.
Then flattens all subsets.
Recurrence on the run time is T(n) = sqrt(n)*T(sqrt(n)) + T(sqrt(n)) + n
Suppose inductively that T(sqrt(n)) ≤ c*sqrt(n).
T(n) = sqrt(n)*T(sqrt(n)) + T(sqrt(n)) + n
≤ sqrt(n)*c*sqrt(n) + c*sqrt(n) + n
≤ c*n + c*sqrt(n) + n
≤ (2c+1)*n
∈ O(n)
Note that ceiling(sqrt(n)) can be computed using a binary search
and O(logn) iterations, if the sqrt instruction is not permitted.
-}
import Data.List (foldl')

otherProducts [] = []
otherProducts [x] = [1]
otherProducts [x,y] = [y,x]
otherProducts a = foldl' (++) [] $ zipWith (\s p -> map (*p) s) solvedSubsets subsetOtherProducts
where
n = length a
-- Subset size. Require that 1 < s < n.
s = ceiling $ sqrt $ fromIntegral n
solvedSubsets = map otherProducts subsets
subsetOtherProducts = otherProducts $ map product subsets
subsets = reverse $ loop a []
where loop [] acc = acc
loop a acc = loop (drop s a) ((take s a):acc)
Here is my code:
#include <stdio.h>

int multiply(int a[], int n, int nextproduct, int i)
{
int prevproduct=1;
if(i>=n)
return prevproduct;
prevproduct=multiply(a,n,nextproduct*a[i],i+1);
printf(" i=%d > %d\n",i,prevproduct*nextproduct);
return prevproduct*a[i];
}
int main()
{
int a[]={2,4,1,3,5};
multiply(a,5,1,0);
return 0;
}
Here's a slightly functional example, using C#:
Func<long>[] backwards = new Func<long>[input.Length];
Func<long>[] forwards = new Func<long>[input.Length];
for (int i = 0; i < input.Length; ++i)
{
var localIndex = i;
backwards[i] = () => (localIndex > 0 ? backwards[localIndex - 1]() : 1) * input[localIndex];
forwards[i] = () => (localIndex < input.Length - 1 ? forwards[localIndex + 1]() : 1) * input[localIndex];
}
var output = new long[input.Length];
for (int i = 0; i < input.Length; ++i)
{
if (0 == i)
{
output[i] = forwards[i + 1]();
}
else if (input.Length - 1 == i)
{
output[i] = backwards[i - 1]();
}
else
{
output[i] = forwards[i + 1]() * backwards[i - 1]();
}
}
I'm not entirely certain of the complexity: because each created Func re-evaluates its whole chain on every call, each output element costs O(n), which makes this look like O(n^2) overall, even though my timing tests appeared roughly linear.
To be complete here is the code in Scala:
val list1 = List(1, 2, 3, 4, 5)
for (elem <- list1) println(list1.filter(_ != elem) reduceLeft(_*_))
This will print out the following:
120
60
40
30
24
The program filters out the current elem (_ != elem) and multiplies the remaining list with the reduceLeft method. Note that each element still triggers an O(n) filter and reduce, so this stays O(n^2) even with a Scala view or Iterator for lazy evaluation.
// This is the recursive solution in Java
// Called as following from main product(a,1,0);
public static double product(double[] a, double fwdprod, int index){
double revprod = 1;
if (index < a.length){
revprod = product(a, fwdprod*a[index], index+1);
double cur = a[index];
a[index] = fwdprod * revprod;
revprod *= cur;
}
return revprod;
}
A neat solution with O(n) runtime:
For each element, calculate the product of all the elements that occur before it and store it in an array "pre".
For each element, calculate the product of all the elements that occur after it and store it in an array "post".
Create a final array "result"; for an element i,
result[i] = pre[i-1] * post[i+1];
Here is the Python version:
# This solution use O(n) time and O(n) space
def productExceptSelf(self, nums):
"""
:type nums: List[int]
:rtype: List[int]
"""
N = len(nums)
if N == 0: return []
# Initialize a list of 1s, size N
l_prods, r_prods = [1]*N, [1]*N
for i in range(1, N):
l_prods[i] = l_prods[i-1] * nums[i-1]
for i in reversed(range(N-1)):
r_prods[i] = r_prods[i+1] * nums[i+1]
result = [x*y for x,y in zip(l_prods,r_prods)]
return result
# This solution use O(n) time and O(1) space
def productExceptSelfSpaceOptimized(self, nums):
"""
:type nums: List[int]
:rtype: List[int]
"""
N = len(nums)
if N == 0: return []
# Initialize a list of 1s, size N
result = [1]*N
for i in range(1, N):
result[i] = result[i-1] * nums[i-1]
r_prod = 1
for i in reversed(range(N)):
result[i] *= r_prod
r_prod *= nums[i]
return result
I'm used to C#:
public int[] ProductExceptSelf(int[] nums)
{
int[] returnArray = new int[nums.Length];
List<int> auxList = new List<int>();
int multTotal = 0;
// If no zeros are contained in the array you only have to calculate it once
if(!nums.Contains(0))
{
multTotal = nums.ToList().Aggregate((a, b) => a * b);
for (int i = 0; i < nums.Length; i++)
{
returnArray[i] = multTotal / nums[i];
}
}
else
{
for (int i = 0; i < nums.Length; i++)
{
auxList = nums.ToList();
auxList.RemoveAt(i);
if (!auxList.Contains(0))
{
returnArray[i] = auxList.Aggregate((a, b) => a * b);
}
else
{
returnArray[i] = 0;
}
}
}
return returnArray;
}
Here is a simple Scala version in linear O(n) time:
def getProductEff(in:Seq[Int]):Seq[Int] = {
//create a list which has product of every element to the left of this element
val fromLeft = in.foldLeft((1, Seq.empty[Int]))((ac, i) => (i * ac._1, ac._2 :+ ac._1))._2
//create a list which has product of every element to the right of this element, which is the same as the previous step but in reverse
val fromRight = in.reverse.foldLeft((1,Seq.empty[Int]))((ac,i) => (i * ac._1,ac._2 :+ ac._1))._2.reverse
//merge the two list by product at index
in.indices.map(i => fromLeft(i) * fromRight(i))
}
This works because, essentially, each entry of the answer is the product of all elements to its left multiplied by the product of all elements to its right.
import java.util.Arrays;
public class Pratik
{
public static void main(String[] args)
{
int[] array = {2, 3, 4, 5, 6}; // OUTPUT: 360 240 180 144 120
int[] products = new int[array.length];
arrayProduct(array, products);
System.out.println(Arrays.toString(products));
}
public static void arrayProduct(int array[], int products[])
{
double sum = 0, EPSILON = 1e-9;
for(int i = 0; i < array.length; i++)
sum += Math.log(array[i]);
for(int i = 0; i < array.length; i++)
products[i] = (int) (EPSILON + Math.exp(sum - Math.log(array[i])));
}
}
OUTPUT:
[360, 240, 180, 144, 120]
Time complexity : O(n)
Space complexity: O(1)