Bounded square sum algorithm

The problem goes as follows:
You are given two arrays of integers a and b, and two integers lower and upper.
Your task is to find the number of pairs (i, j) such that lower ≤ a[i] * a[i] + b[j] * b[j] ≤ upper.
Example:
For a = [3, -1, 9], b = [100, 5, -2], lower = 7, and upper = 99, the output should be boundedSquareSum(a, b, lower, upper) = 4.
There are only four pairs that satisfy the requirement:
If i = 0 and j = 1, then a[0] = 3, b[1] = 5, and 7 ≤ 3 * 3 + 5 * 5 = 9 + 25 = 36 ≤ 99.
If i = 0 and j = 2, then a[0] = 3, b[2] = -2, and 7 ≤ 3 * 3 + (-2) * (-2) = 9 + 4 = 13 ≤ 99.
If i = 1 and j = 1, then a[1] = -1, b[1] = 5, and 7 ≤ (-1) * (-1) + 5 * 5 = 1 + 25 = 26 ≤ 99.
If i = 2 and j = 2, then a[2] = 9, b[2] = -2, and 7 ≤ 9 * 9 + (-2) * (-2) = 81 + 4 = 85 ≤ 99.
For a = [1, 2, 3, -1, -2, -3], b = [10], lower = 0, and upper = 100, the output should be boundedSquareSum(a, b, lower, upper) = 0.
Since the array b contains only one element 10 and the array a does not contain 0, it is not possible to satisfy 0 ≤ a[i] * a[i] + 10 * 10 ≤ 100.
Now, I know there is a brute force way to solve this, but what would be the optimal solution for this problem?

Sort the smaller array using the absolute value of the elements, then for each element in the unsorted array, binary search the interval on the sorted one.
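For illustration, here is a minimal JavaScript sketch of that approach (not code from the original answer; the helper name lowerBound is mine, and it assumes integer inputs):

function lowerBound(arr, target) {
  // first index in the sorted array whose value is >= target
  let lo = 0, hi = arr.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

function boundedSquareSum(a, b, lower, upper) {
  // make b the smaller array, so we sort and search the cheaper one
  if (a.length < b.length) [a, b] = [b, a];
  const squares = b.map(x => x * x).sort((x, y) => x - y);
  let result = 0;
  for (const x of a) {
    const s = x * x;
    if (s > upper) continue;
    // count squares[j] with lower - s <= squares[j] <= upper - s
    result += lowerBound(squares, upper - s + 1) - lowerBound(squares, lower - s);
  }
  return result;
}

console.log(boundedSquareSum([3, -1, 9], [100, 5, -2], 7, 99)); // 4

This runs in O((m + n) log n) overall, where n is the length of the smaller array and m the length of the other.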

You can break out of the loop when the calculation goes higher than the upper limit.
It will reduce execution time.
function boundedSquareSum(a, b, lower, upper) {
  let result = 0;
  a = a.sort((i, j) => Math.abs(i) - Math.abs(j));
  b = b.sort((i, j) => Math.abs(i) - Math.abs(j));
  for (let i = 0; i < a.length; i++) {
    let aValue = a[i] ** 2;
    if (aValue > upper) {
      break; // Don't need to check further
    }
    for (let j = 0; j < b.length; j++) {
      let bValue = b[j] ** 2;
      let total = aValue + bValue;
      if (total > upper) {
        break; // Don't need to check further
      }
      if (total >= lower && total <= upper) {
        result++;
      }
    }
  }
  return result;
}


How can I find the minimum index of the array in this case?

We are given an array with n values.
Example: [1,4,5,6,6]
For each index i of the array a, we construct a new element of array b such that
b[i]= [a[i]/1] + [a[i+1]/2] + [a[i+2]/3] + ⋯ + [a[n]/(n−i+1)] where [.] denotes the greatest integer function.
We are given an integer k as well.
We have to find the minimum i such that b[i] ≤ k.
I know the brute-force O(n^2) algorithm to create the array b; can anybody suggest a way to solve it with a better time complexity?
For example, for the input [1,2,3], k=3, the output is 1 (the minimum index).
Here, a[1]=1; a[2]=2; a[3]=3;
Now, b[1] = [a[1]/1] + [a[2]/2] + [a[3]/3] = [1/1] + [2/2] + [3/3] = 3;
b[2] = [a[2]/1] + [a[3]/2] = [2/1] + [3/2] = 3;
b[3] = [a[3]/1] = [3/1] = 3 (obvious)
Now, we have to find the minimum index i such that b[i] <= k. Here k = 3 and b[1] <= 3, hence 1 is our answer! :-)
Constraints: time limit 2 seconds, 1 <= a[i] <= 10^5, 1 <= n <= 10^5, 1 <= k <= 10^9.
Here's an O(n √A)-time algorithm to compute the b array where n is the number of elements in the a array and A is the maximum element of the a array.
This algorithm computes the difference sequence of the b array (∆b = b[0], b[1] - b[0], b[2] - b[1], ..., b[n-1] - b[n-2]) and derives b itself as the cumulative sums. Since differencing is a linear operation, we can start with ∆b = 0, 0, ..., 0, loop over each element a[i], and add the difference sequence for [a[i]/1], [a[i]/2], [a[i]/3], ... at the appropriate spot. The key is that this difference sequence is sparse (fewer than 2√a[i] nonzero elements). For example, for a[i] = 36,
>>> [36//j for j in range(1,37)]
[36, 18, 12, 9, 7, 6, 5, 4, 4, 3, 3, 3, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> import operator
>>> list(map(operator.sub, _, [0] + _[:-1]))
[36, -18, -6, -3, -2, -1, -1, -1, 0, -1, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
We can derive the difference sequence from a subroutine that, given a positive integer r, returns all maximal pairs of positive integers (p, q) such that pq ≤ r.
See complete Python code below.
def maximal_pairs(r):
    p = 1
    q = r
    while p < q:
        yield (p, q)
        p += 1
        q = r // p
    while q > 0:
        p = r // q
        yield (p, q)
        q -= 1

def compute_b_fast(a):
    n = len(a)
    delta_b = [0] * n
    for i, ai in enumerate(a):
        previous_j = i
        for p, q in maximal_pairs(ai):
            delta_b[previous_j] += q
            j = i + p
            if j >= n:
                break
            delta_b[j] -= q
            previous_j = j
    for i in range(1, n):
        delta_b[i] += delta_b[i - 1]
    return delta_b

def compute_b_slow(a):
    n = len(a)
    b = [0] * n
    for i, ai in enumerate(a):
        for j in range(n - i):
            b[i + j] += ai // (j + 1)
    return b

for n in range(1, 100):
    print(list(maximal_pairs(n)))

lst = [1, 34, 3, 2, 9, 21, 3, 2, 2, 1]
print(compute_b_fast(lst))
print(compute_b_slow(lst))
This probably cannot reach the efficiency of David Eisenstat's answer but since I spent quite a long time figuring out an implementation, I thought I'd leave it up anyway. As it is, it seems about O(n^2).
The elements of b may be out of order, but sections of them (suffix sums of a fixed width) are not:
[a[1]/1] + [a[2]/2] + [a[3]/3]
|------ s2_1 -----|
|-s1_1-|
[a[2]/1] + [a[3]/2]
|------ s2_2 -----|
|-s1_2-|
[a[3]/1]
|-s1_3-|
s2_1 < s2_2
s1_1 < s1_2 < s1_3
Binary search for k on s1. Any result with an s1_i greater than k will rule out a section of ordered rows (rows are b_is).
Binary search for k on s2 on the remaining rows. Any result with an s2_i greater than k will rule out a section of ordered rows (rows are b_is).
This wouldn't help much since in the worst case, we'd have O(n^2 * log n) complexity, greater than O(n^2).
But we can also search horizontally. If we know that b_i ≤ k, then it will rule out both all rows with greater or equal length and the need to search smaller s(m)s, not because smaller s(m)s cannot produce a sum >= k, but because they will necessarily produce one with a higher i and we are looking for the minimum i.
JavaScript code:
var sum_width_iterations = 0
var total_width_summed = 0
var sum_width_cache = {}

function sum_width(A, i, width){
  let key = `${i},${width}`
  if (sum_width_cache.hasOwnProperty(key))
    return sum_width_cache[key]
  sum_width_iterations++
  total_width_summed += width
  let result = 0
  for (let j=A.length-width; j<A.length; j++)
    result += ~~(A[j] / (j + 1 - i))
  return sum_width_cache[key] = result
}

function get_b(A){
  let result = []
  A.map(function(a, i){
    result.push(sum_width(A, i, A.length - i))
  })
  return result
}

function find_s_greater_than_k(A, width, low, high, k){
  let mid = low + ((high - low) >> 1)
  let s = sum_width(A, mid, width)
  while (low <= high){
    mid = low + ((high - low) >> 1)
    s = sum_width(A, mid, width)
    if (s > k)
      high = mid - 1
    else
      low = mid + 1
  }
  return [mid, s]
}

function f(A, k, l, r){
  let n = A.length
  if (l > r){
    console.log(`l > r: l, r: ${l}, ${r}`)
    return [n + 1, Infinity]
  }
  let width = n - l
  console.log(`\n(call) width, l, r: ${width}, ${l}, ${r}`)
  let mid = l + ((r - l) >> 1)
  let mid_width = n - mid
  console.log(`mid: ${mid}`)
  console.log('mid_width: ' + mid_width)
  let highest_i = n - mid_width
  let [i, s] = find_s_greater_than_k(A, mid_width, 0, highest_i, k)
  console.log(`hi_i, s,i,k: ${highest_i}, ${s}, ${i}, ${k}`)
  if (mid_width == width)
    return [i, s]
  // either way we need to look left and down
  console.log(`calling left`)
  let [li, ls] = f(A, k, l, mid - 1)
  // if i is the highest, width is the width of b_i
  console.log(`got left: li, ls, i, high_i: ${li}, ${ls}, ${i}, ${highest_i}`)
  if (i == highest_i){
    console.log(`i == highest_i, s <= k: ${s <= k}`)
    // b_i is small enough
    if (s <= k){
      if (ls <= k)
        return [li, ls]
      else
        return [i, s]
    // b_i is larger than k
    } else {
      console.log(`b_i > k`)
      let [ri, rs] = f(A, k, mid + 1, r)
      console.log(`ri, rs: ${ri}, ${rs}`)
      if (ls <= k)
        return [li, ls]
      else if (rs <= k)
        return [ri, rs]
      else
        return [i, s]
    }
  // i < highest_i
  } else {
    console.log(`i < highest_i: high_i, i, s, li, ls, mid, mid_width, width, l, r: ${highest_i}, ${i}, ${s}, ${li}, ${ls}, ${mid}, ${mid_width}, ${width}, ${l}, ${r}`)
    // get the full sum for this b
    let b_i = sum_width(A, i, n - i)
    console.log(`b_i: ${b_i}`)
    // suffix sum is less than k so we cannot rule out either side
    if (s < k){
      console.log(`s < k`)
      let ll = l
      let lr = mid - 1
      let [lli, lls] = f(A, k, ll, lr)
      console.log(`ll, lr, lli, lls: ${ll}, ${lr}, ${lli}, ${lls}`)
      // b_i is a match so we don't need to look to the right
      if (b_i <= k){
        console.log(`b_i <= k: i, b_i: ${i}, ${b_i}`)
        if (lls <= k)
          return [lli, lls]
        else
          return [i, b_i]
      // b_i > k
      } else {
        console.log(`b_i > k: i, b_i: ${i}, ${b_i}`)
        let rl = mid + 1
        let rr = r
        let [rri, rrs] = f(A, k, rl, rr)
        console.log(`rl, rr, rri, rrs: ${rl}, ${rr}, ${rri}, ${rrs}`)
        // return the best of right and left sections
        if (lls <= k)
          return [lli, lls]
        else if (rrs <= k)
          return [rri, rrs]
        else
          return [i, b_i]
      }
    // suffix sum is greater than or equal to k so we can rule out
    // this and all higher rows (`b`s) that share this suffix
    } else {
      console.log(`s >= k`)
      let ll = l
      // the suffix rules out b_i and above
      let lr = i - 1
      let [lli, lls] = f(A, k, ll, lr)
      console.log(`ll, lr, lli, lls: ${ll}, ${lr}, ${lli}, ${lls}`)
      let rl = highest_i + 1
      let rr = r
      let [rri, rrs] = f(A, k, rl, rr)
      console.log(`rl, rr, rri, rrs: ${rl}, ${rr}, ${rri}, ${rrs}`)
      // return the best of right and left sections
      if (lls <= k)
        return [lli, lls]
      else if (rrs <= k)
        return [rri, rrs]
      else
        return [i, b_i]
    }
  }
}

let lst = [1, 2, 3, 1]
// b [3, 3, 3, 1]

lst = [ 1, 34, 3, 2, 9, 21, 3, 2, 2, 1]
// b [23, 41, 12, 13, 20, 22, 4, 3, 2, 1]

console.log(
  JSON.stringify(f(lst, 20, 0, lst.length)))
console.log(`sum_width_iterations: ${sum_width_iterations}`)
console.log(`total_width_summed: ${total_width_summed}`)
Why should calculating b[i] lead to O(n²)? If i = 1 it takes n steps, but if i = n it takes only one step to calculate b[i]...
You could improve your calculation by aborting the inner sum as soon as it exceeds k.
Let a in N^n
Let k in N

for (i1 := 1; i1 <= n; i1++)
    b := 0
    for (i2 := i1; i2 <= n; i2++)          // this loop is the calculation of b[i1]
        b := b + floor(a[i2] / (i2 - i1 + 1))
        if (b > k)
            break
    if (b <= k)                            // the inner sum never exceeded k
        return i1

Algorithm for the largest subarray of distinct values in linear time

I'm trying to come up with a fast algorithm for, given any array of length n, obtaining the largest subarray of distinct values.
For example, the largest subarray of distinct values of
[1, 4, 3, 2, 4, 2, 8, 1, 9]
would be
[4, 2, 8, 1, 9]
This is my current solution; I think it runs in O(n^2), because check_dups runs in linear time and it is called every time j or i increments.
arr = [0,...,n]
i = 0
j = 1
i_best = i
j_best = j
while i < n-1 and j < n:
    if check_dups(arr, i, j): // determines if there are duplicates in the subarray arr[i..j], in linear time
        i += 1
    else:
        if j - i > j_best - i_best:
            i_best = i
            j_best = j
        j += 1
return subarray(arr, i_best, j_best)
Does anyone have a better solution, in linear time?
Please note this is pseudocode and I'm not looking for an answer that relies on specific existing functions of a defined language (such as arr.contains()).
Thanks!
Consider the problem of finding the largest distinct-valued subarray ending at a particular index j. Conceptually this is straightforward: starting at arr[j], you go backwards and include all elements until you find a duplicate.
Let's use this intuition to solve this problem for all j from 0 up to length(arr). We need to know, at any point in the iteration, how far back we can go before we find a duplicate. That is, we need to know the least i such that subarray(arr, i, j) contains distinct values. (I'm assuming subarray treats the indices as inclusive.)
If we knew i at some point in the iteration (say, when j = k), can we quickly update i when j = k+1? Indeed, if we knew when was the last occurrence of arr[k+1], then we can update i := max(i, lastOccurrence(arr[k+1]) + 1). We can compute lastOccurrence in O(1) time with a HashMap.
Pseudocode:
arr = ... (from input)
map = empty HashMap
i = 0
i_best = 0
j_best = 0
for j from 0 to length(arr) - 1 inclusive:
    if map contains-key arr[j]:
        i = max(i, map[arr[j]] + 1)
    map[arr[j]] = j
    if j - i > j_best - i_best:
        i_best = i
        j_best = j
return subarray(arr, i_best, j_best)
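For reference, here is a direct JavaScript rendering of this pseudocode (a sketch; the function name and the use of a Map are mine):

function largestDistinctSubarray(arr) {
  const lastOccurrence = new Map();   // value -> last index at which it was seen
  let i = 0, iBest = 0, jBest = 0;
  for (let j = 0; j < arr.length; j++) {
    if (lastOccurrence.has(arr[j])) {
      // the window cannot start at or before the previous occurrence of arr[j]
      i = Math.max(i, lastOccurrence.get(arr[j]) + 1);
    }
    lastOccurrence.set(arr[j], j);
    if (j - i > jBest - iBest) {
      iBest = i;
      jBest = j;
    }
  }
  return arr.slice(iBest, jBest + 1);
}

console.log(largestDistinctSubarray([1, 4, 3, 2, 4, 2, 8, 1, 9])); // [4, 2, 8, 1, 9]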
We can adapt pkpnd's algorithm to use an array rather than hash map for an O(n log n) solution or potentially O(n) if your data allows for an O(n) stable sort, but you'd need to implement a stable sorting function that also provides the original indexes of the elements.
1 4 3 2 4 2 8 1 9
0 1 2 3 4 5 6 7 8 (indexes)
Sorted:
1 1 2 2 3 4 4 8 9
0 7 3 5 2 1 4 6 8 (indexes)
--- --- ---
Now, instead of a hash map, build a new array by iterating over the sorted array and inserting the last occurrence of each element according to the duplicate index arrangements. The final array would look like:
1 4 3 2 4 2 8 1 9
-1 -1 -1 -1 1 3 -1 0 -1 (previous occurrence)
We're now ready to run pkpnd's algorithm with a slight modification:
arr = ... (from input)
map = previous occurrence array
i = 0
i_best = 0
j_best = 0
for j from 0 to length(arr) - 1 inclusive:
    if map[j] >= 0:
        i = max(i, map[j] + 1)
    if j - i > j_best - i_best:
        i_best = i
        j_best = j
return subarray(arr, i_best, j_best)
JavaScript code:
function f(arr, map){
  let i = 0
  let i_best = 0
  let j_best = 0
  for (let j=0; j<arr.length; j++){
    if (map[j] >= 0)
      i = Math.max(i, map[j] + 1)
    if (j - i > j_best - i_best){
      i_best = i
      j_best = j
    }
  }
  return [i_best, j_best]
}

let arr = [ 1, 4, 3, 2, 4, 2, 8, 1, 9]
let map = [-1,-1,-1,-1, 1, 3,-1, 0,-1]
console.log(f(arr, map))

arr = [ 1, 2, 2, 2, 2, 2, 1]
map = [-1,-1, 1, 2, 3, 4, 0]
console.log(f(arr, map))
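For completeness, here is one way (my sketch, not part of the original answer) to build that previous-occurrence array by sorting value/index pairs. It relies on Array.prototype.sort being stable, as it is in modern JavaScript; a stable counting sort could replace it when the value range allows an O(n) sort:

function previousOccurrences(arr) {
  // pair each value with its original index, then sort by value;
  // a stable sort keeps equal values in increasing index order
  const byValue = arr.map((v, i) => [v, i]).sort((a, b) => a[0] - b[0]);
  const map = new Array(arr.length).fill(-1);
  for (let k = 1; k < byValue.length; k++) {
    if (byValue[k][0] === byValue[k - 1][0])
      map[byValue[k][1]] = byValue[k - 1][1];   // previous occurrence of the same value
  }
  return map;
}

console.log(previousOccurrences([1, 4, 3, 2, 4, 2, 8, 1, 9]))
// [-1, -1, -1, -1, 1, 3, -1, 0, -1]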
We can use a hash table (Dictionary in C#):
public int[] FindSubarrayWithDistinctEntities(int[] arr)
{
    Dictionary<int, int> dic = new Dictionary<int, int>(); // value -> last index seen
    Result r = new Result(); // struct containing start and end index for the best subarray
    int start = 0;           // left end of the current window of distinct values
    r.st = 0;
    r.end = 0;
    for (int i = 0; i < arr.Length; i++)
    {
        if (dic.ContainsKey(arr[i]))
        {
            // the window cannot start at or before the previous occurrence of arr[i]
            start = Math.Max(start, dic[arr[i]] + 1);
            dic.Remove(arr[i]);
        }
        dic.Add(arr[i], i);
        if (i - start > r.end - r.st)
        {
            r.st = start;
            r.end = i;
        }
    }
    return arr.Skip(r.st).Take(r.end - r.st + 1).ToArray();
}
Add every number to a HashSet if it isn't already in it. A HashSet's insert and lookup are both O(1), so the final result will be O(n).

Which solution is better in terms of space/time complexity?

I have 2 lists of integers. They are both sorted already. I want to find the elements (one from each list) that add up to a given number.
- My first idea is to iterate over the first list and use binary search to look for the number needed to sum to the given number. I know this will take O(n log n) time.
- The other is to store one of the lists in a hashtable/map (I don't really know the difference) and iterate over the other list, looking up the needed value. Does this take O(n) time and O(n) memory?
Overall, which would be better?
You are comparing them the right way, but each approach has different tradeoffs. Hashing is not a good choice if you have memory constraints, but if you have plenty of memory then yes, you can afford to do that.
You will also see the notion of a space-time tradeoff many times in computer science: you always gain something by giving up something else. Hashing runs in O(n) time with O(n) space; searching, in contrast, has O(n log n) time complexity but only O(1) space complexity.
Long story short, the scenario lets you decide which one to select. I have shown just one aspect; there can be many. Know the constraints and tradeoffs of each and you will be able to decide.
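As a concrete illustration of the hashing idea discussed above (a sketch; the function name is mine), assuming only one matching pair is needed:

function findPairWithSum(listA, listB, target) {
  const seen = new Set(listA);              // O(n) extra memory
  for (const y of listB) {
    if (seen.has(target - y)) {
      return [target - y, y];               // one element from each list
    }
  }
  return null;                              // no pair adds up to target
}

console.log(findPairWithSum([1, 2, 6, 8], [1, 3, 4, 9], 10)); // [6, 4]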
A better solution (time complexity O(n), space complexity O(1)):
Suppose there are 2 arrays a and b.
Now WLOG suppose a is sorted in ascending order and b in descending order (even if that is not the case, we can simply traverse them accordingly).
index1 = 0; index2 = 0;   // 0-based indexing
while (index1 <= N1 - 1 && index2 <= N2 - 1)
{
    if ((a[index1] + b[index2]) == x)
        // success
    else if ((a[index1] + b[index2]) > x)
        index2++;
    else
        index1++;
}
// failure: no such element.
Sort list A in ascending order and list B in descending order, and set a = 1 and b = 1. Then:
1. If A[a] + B[b] = T, record the pair, increment a, and repeat from 1.
2. Otherwise, if A[a] + B[b] < T, increment a and repeat from 1.
3. Otherwise, A[a] + B[b] > T; increment b and repeat from 1.
Naturally, if a or b exceeds the size of A or B, respectively, terminate.
Example:
A = 1, 2, 2, 6, 8, 10, 11
B = 9, 8, 4, 3, 1, 1
T = 10
a = 1, b = 1
A[a] + B[b] = A[1] + B[1] = 10; record; a = a + 1 = 2; repeat.
A[a] + B[b] = A[2] + B[1] = 11; b = b + 1 = 2; repeat.
A[a] + B[b] = A[2] + B[2] = 10; record; a = a + 1 = 3; repeat.
A[a] + B[b] = A[3] + B[2] = 10; record; a = a + 1 = 4; repeat.
A[a] + B[b] = A[4] + B[2] = 14; b = b + 1 = 3; repeat.
A[a] + B[b] = A[4] + B[3] = 10; record; a = a + 1 = 5; repeat.
A[a] + B[b] = A[5] + B[3] = 12; b = b + 1 = 4; repeat.
A[a] + B[b] = A[5] + B[4] = 11; b = b + 1 = 5; repeat.
A[a] + B[b] = A[5] + B[5] = 9; a = a + 1 = 6; repeat.
A[a] + B[b] = A[6] + B[5] = 11; b = b + 1 = 6; repeat.
A[a] + B[b] = A[6] + B[6] = 11; b = b + 1 = 7; repeat.
Terminate.
You can do this without additional space if instead of having B sorted in descending order, you set b = |B| and decrement it instead of incrementing it, effectively reading it backwards.
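Here is a minimal JavaScript sketch of that walk (my code; it reads an ascending B backwards rather than sorting it in descending order, as suggested above):

function pairsWithSum(A, B, T) {
  const pairs = [];
  let a = 0;                 // A is sorted ascending
  let b = B.length - 1;      // walk ascending B from the back, i.e. largest value first
  while (a < A.length && b >= 0) {
    const sum = A[a] + B[b];
    if (sum === T) {
      pairs.push([A[a], B[b]]);
      a++;
    } else if (sum < T) {
      a++;                   // need a larger value from A
    } else {
      b--;                   // need a smaller value from B
    }
  }
  return pairs;
}

// the example above, with B given in ascending order here
console.log(pairsWithSum([1, 2, 2, 6, 8, 10, 11], [1, 1, 3, 4, 8, 9], 10));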
The above procedure misses out on some duplicate answers where B has a string of duplicate values, for instance:
A = 2, 2, 2
B = 8, 8, 8
The algorithm as described above will yield three pairs, but you might want nine. This can be fixed by detecting this case, keeping separate counters ca and cb for the lengths of the runs of A[a] and B[b] you have seen, and adding ca * cb - ca copies of the last pair you added to the bag. In this example:
A = 2, 2, 2
B = 8, 8, 8
a = 1, b = 1
ca = 1, cb = 1
A[a] + B[b] = 10; record pair, a = a + 1 = 2, ca = ca + 1 = 2, repeat.
A[a] + B[b] = 10; record pair, a = a + 1 = 3, ca = ca + 1 = 3, repeat.
A[a] + B[b] = 10; record pair, a = a + 1 = 4;
a exceeds bounds, value of A[a] changed;
increment b to count run of B's;
b = b + 1 = 2, cb = cb + 1 = 2
b = b + 1 = 3, cb = cb + 1 = 3
b = b + 1 = 4;
b exceeds bounds, value of B[b] changed;
add ca * cb - ca = 3 * 3 - 3 = 6 copies of pair (2, 8).
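A sketch of the same idea in JavaScript (my code; it uses slightly different but equivalent bookkeeping, counting the run on each side when a match is found and adding ca * cb pairs at once):

function countPairsWithSum(A, B, T) {
  // A sorted ascending; walk B (also ascending) from the back
  let a = 0, b = B.length - 1, count = 0;
  while (a < A.length && b >= 0) {
    const sum = A[a] + B[b];
    if (sum < T) a++;
    else if (sum > T) b--;
    else {
      // count the run of equal values on each side and record ca * cb pairs
      const va = A[a], vb = B[b];
      let ca = 0, cb = 0;
      while (a < A.length && A[a] === va) { a++; ca++; }
      while (b >= 0 && B[b] === vb) { b--; cb++; }
      count += ca * cb;
    }
  }
  return count;
}

console.log(countPairsWithSum([2, 2, 2], [8, 8, 8], 10)); // 9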

Confusion Regarding deepest pit within an Array

I got this question as prerequisite for an interview,
A non-empty zero-indexed array A consisting of N integers is given. A pit in this array is any triplet of integers (P, Q, R) such that:
0 ≤ P < Q < R < N;
sequence [A[P], A[P+1], ..., A[Q]] is strictly decreasing, i.e. A[P] > A[P+1] > ... > A[Q];
sequence [A[Q], A[Q+1], ..., A[R]] is strictly increasing, i.e. A[Q] < A[Q+1] < ... < A[R].
The depth of a pit (P, Q, R) is the number min{A[P] − A[Q], A[R] − A[Q]}.
For example, consider array A consisting of 10 elements such that:
A[0] = 0
A[1] = 1
A[2] = 3
A[3] = -2
A[4] = 0
A[5] = 1
A[6] = 0
A[7] = -3
A[8] = 2
A[9] = 3
Triplet (2, 3, 4) is one of the pits in this array, because sequence [A[2], A[3]] is strictly decreasing (3 > −2) and sequence [A[3], A[4]] is strictly increasing (−2 < 0). Its depth is min{A[2] − A[3], A[4] − A[3]} = 2.
Triplet (2, 3, 5) is another pit with depth 3.
Triplet (5, 7, 8) is yet another pit with depth 4. There is no pit in this array deeper (i.e. having depth greater) than 4.
It says that Triplet (5, 7, 8) has the deepest pit depth of 4.
but doesn't Triplet (2, 7, 9) have depth 6, which would make it the deepest pit?
The corresponding values for Triplet (2, 7, 9) are (3, −3, 3), and it also seems to satisfy the conditions mentioned, i.e.
1) 0 ≤ P < Q < R < N
2) A[P] > A[P+1] > ... > A[Q] and A[Q] < A[Q+1] < ... < A[R]
so in this case min{A[P] − A[Q], A[R] − A[Q]} would be 6.
What am I missing here?
P.S. if you think this post does not belong here in this forum then please point out where should i post it.
Look at the sequence from P to Q, i.e. from index 2 to index 7.
It is 3, −2, 0, 1, 0, −3.
sequence [A[P], A[P+1], ..., A[Q]] is strictly decreasing, i.e. A[P] > A[P+1] > ... > A[Q];
The rule says that this should be a strictly decreasing sequence, but it isn't: 3 > −2, yet −2 is not greater than 0. That is where the sequence breaks.
From 7 to 9 there is no problem, as the sequence is increasing: −3 < 2 < 3.
An answer to the deepest pit problem in Swift:
func solution(_ array: [Int]) -> Int {
    // guarantee we have at least three elements
    if array.isEmpty {
        print("isEmpty")
        return -1
    }
    if array.count < 3 {
        print("is less than 3")
        return -1
    }
    // extremum points: local max or min points
    var extremumPoints = [Int]()
    // adding first element
    extremumPoints.append(array[0])
    // calculate extremum points from 1 to one before the last element
    for i in 1..<(array.count - 1) {
        let isRelativeExtremum = ((array[i] - array[i - 1]) * (array[i] - array[i + 1])) > 0
        // a point is a semi-extremum if it is equal to one neighbour but not the other
        let isSemiExtremum = ((array[i] != array[i - 1]) && (array[i] == array[i + 1])) || ((array[i] != array[i + 1]) && (array[i] == array[i - 1]))
        if isRelativeExtremum || isSemiExtremum {
            extremumPoints.append(array[i])
        }
    }
    // adding last element
    extremumPoints.append(array[array.count - 1])
    // we will hold the depths in this array
    var depthes = [Int]()
    for i in 1..<(extremumPoints.count - 1) {
        let isBottomOfaPit = extremumPoints[i] < extremumPoints[i - 1] && extremumPoints[i] < extremumPoints[i + 1]
        if isBottomOfaPit {
            let d1 = extremumPoints[i - 1] - extremumPoints[i]
            let d2 = extremumPoints[i + 1] - extremumPoints[i]
            let d = min(d1, d2)
            depthes.append(d)
        }
    }
    // deepest pit
    let deepestPit = depthes.max()
    return deepestPit ?? -1
}
//****************************
let A = [0,1,3,-2,0,1,0,-3,2,3]
let deepestPit = solution(A)
print(deepestPit) // 4
def deepest(A):
    max_depth = 0
    for q in range(1, len(A) - 1):
        # q can only be the bottom of a pit if it is a strict local minimum
        if A[q - 1] > A[q] < A[q + 1]:
            # extend P to the left while the values keep strictly decreasing towards q
            p = q
            while p > 0 and A[p - 1] > A[p]:
                p -= 1
            # extend R to the right while the values keep strictly increasing away from q
            r = q
            while r < len(A) - 1 and A[r + 1] > A[r]:
                r += 1
            max_depth = max(max_depth, min(A[p] - A[q], A[r] - A[q]))
    return max_depth

Very interesting program of building pyramid

I have come across this very interesting program of printing numbers in a pyramid.
If n = 1 then print the following,
1 2
4 3
if n = 2 then print the following,
1 2 3
8 9 4
7 6 5
if n = 3 then print the following,
1 2 3 4
12 13 14 5
11 16 15 6
10 9 8 7
I can print all these using quite a few loops and variables, but it feels very ad hoc. You might have noticed that the filling starts in one direction and continues until it finds a cell that is already filled. For example, 1, 2, 3, ..., 12 fill the outer edge until the path reaches 1 again, so after 12 it moves into the second row and prints 13, 14, and so on. It fills in a spiral, a bit like the snake game, where the snake keeps going until it hits itself.
I would like to know whether there is an algorithm behind this pyramid generation, or whether it is just a tricky, time-consuming program to write.
Thanks in advance. This is a very interesting, challenging program, so I kindly request no downvotes. :)
I made a small recursive algorithm for your problem.
public int Determine(int n, int x, int y)
{
    if (y == 0) return x + 1;               // Top
    if (x == n) return n + y + 1;           // Right
    if (y == n) return 3 * n - x + 1;       // Bottom
    if (x == 0) return 4 * n - y + 1;       // Left
    return 4 * n + Determine(n - 2, x - 1, y - 1);
}
You can call it by using a double for loop. x and y start at 0:
for (int y = 0; y <= n; y++)
    for (int x = 0; x <= n; x++)
        result[x, y] = Determine(n, x, y);
Here is some C code implementing the basic algorithm submitted by C.Zonnerberg; my example uses n=6 for a 6x6 array.
I had to make a few changes to get the output the way I expected it to look: I swapped most of the x's and y's, changed several of the n's to n-1, and changed the comparisons in the for loops from <= to <.
#include <stdio.h>

int Determine(int n, int x, int y);

int main(){
    int x, y, n;
    int result[6][6];
    n = 6;
    for (x = 0; x < n; x++){
        for (y = 0; y < n; y++) {
            result[x][y] = Determine(n, x, y);
            if (y == 0)
                printf("\n[%d,%d] = %2d, ", x, y, result[x][y]);
            else
                printf("[%d,%d] = %2d, ", x, y, result[x][y]);
        }
    }
    return 0;
}

int Determine(int n, int x, int y)
{
    if (x == 0) return y + 1;                 // Top
    if (y == n-1) return n + x;               // Right
    if (x == n-1) return 3 * (n-1) - y + 1;   // Bottom
    if (y == 0) return 4 * (n-1) - x + 1;     // Left
    return 4 * (n-1) + Determine(n - 2, x - 1, y - 1);
}
Output
[0,0] = 1, [0,1] = 2, [0,2] = 3, [0,3] = 4, [0,4] = 5, [0,5] = 6,
[1,0] = 20, [1,1] = 21, [1,2] = 22, [1,3] = 23, [1,4] = 24, [1,5] = 7,
[2,0] = 19, [2,1] = 32, [2,2] = 33, [2,3] = 34, [2,4] = 25, [2,5] = 8,
[3,0] = 18, [3,1] = 31, [3,2] = 36, [3,3] = 35, [3,4] = 26, [3,5] = 9,
[4,0] = 17, [4,1] = 30, [4,2] = 29, [4,3] = 28, [4,4] = 27, [4,5] = 10,
[5,0] = 16, [5,1] = 15, [5,2] = 14, [5,3] = 13, [5,4] = 12, [5,5] = 11,
With an all-zeros array, you could start with [row,col] = [0,0], fill in this space, then add [0,1] to position (one to the right) until it's at the end or runs into a non-zero.
Then go down (add [1,0]), filling in space until it's the end or runs into a non-zero.
Then go left (add [0,-1]), filling in space until it's the end or runs into a non-zero.
Then go up (add [-1,0]), filling in space until it's the end or runs into a non-zero.
and repeat...
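For what it's worth, here is a short JavaScript sketch of that walk-and-turn fill (my code; the grid size n+1 follows the question's examples, where n = 3 gives a 4x4 square):

function spiral(n) {
  const size = n + 1;
  const grid = Array.from({length: size}, () => new Array(size).fill(0));
  // direction deltas in clockwise order: right, down, left, up
  const deltas = [[0, 1], [1, 0], [0, -1], [-1, 0]];
  let row = 0, col = 0, d = 0;
  for (let value = 1; value <= size * size; value++) {
    grid[row][col] = value;
    let nr = row + deltas[d][0], nc = col + deltas[d][1];
    // turn clockwise when the next cell is off the grid or already filled
    if (nr < 0 || nr >= size || nc < 0 || nc >= size || grid[nr][nc] !== 0) {
      d = (d + 1) % 4;
      nr = row + deltas[d][0];
      nc = col + deltas[d][1];
    }
    row = nr;
    col = nc;
  }
  return grid;
}

console.log(spiral(3).map(r => r.join('\t')).join('\n'));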
