Getting the number of "friend" towers - algorithm

Given n towers numbered 1, 2, 3, ..., n, each with a height (h[i] is the height of tower i), and a number k.
Two towers a, b (with a < b) are considered friends iff:
b - a = k
h[a] == h[b]
max(h[a+1], h[a+2], ..., h[b-1]) <= h[a]
How many "friendships" are there?
The straightforward solution:
ans = 0
for i = 1, 2, 3, ..., n - k:
    if h[i] == h[i+k]:
        MAX = -infinity
        for j in (i, i+k):          # strictly between i and i+k
            MAX = max(MAX, h[j])
        if MAX <= h[i]:
            ans++
But I want the most efficient solution. Please help.
For a large n, the program will eat RAM; to reduce that, instead of an array I used a queue to hold the tower heights (when q.size() == k, just q.pop()). Even so, checking the 3rd condition naively must take a long time when k is large.

You can use a deque to get an O(n) algorithm.
At every step:
Remove too-old elements from the deque head (those with currentIndex - index >= k)
Remove elements from the tail that have no chance of becoming the maximum in the k-size window (those < currentValue)
Add the new element (its index) to the deque tail
This keeps the index of the maximum element of the k-size window at the head of the deque, so you can determine whether there is a larger value between two towers.
Description of the sliding minimum/maximum algorithm with pseudocode:
Can min/max of moving window achieve in O(N)?
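A minimal Python sketch of that deque idea applied directly to the tower problem (the function name and 0-based indexing are my own):

from collections import deque

def count_friends(h, k):
    dq = deque()                          # indices of a non-increasing run of heights
    ans = 0
    for i, v in enumerate(h):
        if dq and i - dq[0] >= k:
            dq.popleft()                  # too old: outside h[i-k+1 .. i-1]
        if i >= k and h[i] == h[i - k]:
            # the max of the in-between towers h[i-k+1 .. i-1] sits at dq[0]
            if not dq or h[dq[0]] <= v:
                ans += 1
        while dq and h[dq[-1]] < v:
            dq.pop()                      # can never become a window maximum
        dq.append(i)
    return ans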

Elaborating on my comment, you can use the answer to this question to build a queue that keeps track of the maximum element between two towers. Moving to the next element takes only O(1) amortized time. I made a simple implementation in pseudocode, assuming the language supports a standard stack (I would be surprised if it didn't). For an explanation, see the linked answer.
class TupleStack
    Stack stack

    void push(int x)
        if stack.isEmpty()
            stack.push((value: x, max: x))
        else
            stack.push((value: x, max: max(x, stack.peek().max)))

    int pop()
        return stack.pop().value

    bool isEmpty()
        return stack.isEmpty()

    int getMax()
        if isEmpty()
            return -infinity
        else
            return stack.peek().max

class MaxQueue
    TupleStack stack1
    TupleStack stack2

    void enqueue(int x)
        stack1.push(x)

    int dequeue()
        if stack2.isEmpty()
            while !stack1.isEmpty()
                stack2.push(stack1.pop())
        return stack2.pop()

    int getMax()
        return max(stack1.getMax(), stack2.getMax())
Your algorithm now becomes trivial. Put the first k elements in the queue. After that, repeatedly check whether two towers at distance k have the same height, check that the max in between (which is the max of the queue) is at most their height, and move on to the next pair. Updating the queue takes O(1) amortized time, so this algorithm runs in O(n), which is clearly optimal.
MaxQueue queue
for (int i = 1; i <= k; i++)   // add first k towers to queue
    queue.enqueue(h[i])
for (int i = k+1; i <= n; i++)
    if h[i] == h[i-k] and h[i] >= queue.getMax()
        ans++
    queue.enqueue(h[i])
    queue.dequeue()
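For reference, a minimal runnable Python transcription of the two-stack queue and the counting loop above (a sketch; 0-based indexing and the names are mine):

class MaxQueue:
    def __init__(self):
        self.in_stack = []     # entries are (value, max of this stack)
        self.out_stack = []

    @staticmethod
    def _push(stack, x):
        m = x if not stack else max(x, stack[-1][1])
        stack.append((x, m))

    def enqueue(self, x):
        self._push(self.in_stack, x)

    def dequeue(self):
        if not self.out_stack:
            while self.in_stack:
                self._push(self.out_stack, self.in_stack.pop()[0])
        return self.out_stack.pop()[0]

    def get_max(self):
        tops = [s[-1][1] for s in (self.in_stack, self.out_stack) if s]
        return max(tops) if tops else float('-inf')

def count_friends(h, k):
    queue = MaxQueue()
    for x in h[:k]:            # add first k towers to the queue
        queue.enqueue(x)
    ans = 0
    for i in range(k, len(h)):
        if h[i] == h[i - k] and h[i] >= queue.get_max():
            ans += 1
        queue.enqueue(h[i])
        queue.dequeue()
    return ans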

Related

Search a sorted integer array for an element equal to its index, where A may have duplicate entries

My question is very similar to Q1 and Q2, except that I want to deal with the case where the array may have duplicate entries.
Assume the array A consists of integers sorted in increasing order. If its entries are all distinct, you can do this easily in O(log n) with binary search. But if there are duplicate entries, it's more complicated. Here's my approach:
int binarySearchHelper(const vector<int>& A, int left, int right); // forward declaration

int search(const vector<int>& A) {
    int left = 0, right = A.size() - 1;
    return binarySearchHelper(A, left, right);
}

int binarySearchHelper(const vector<int>& A, int left, int right) {
    int indexFound = -1;
    if (left <= right) {
        int mid = left + (right - left) / 2;
        if (A[mid] == mid) {
            return mid;
        } else {
            // a fixed point right of mid requires A[mid] <= right
            if (A[mid] <= right) {
                indexFound = binarySearchHelper(A, mid + 1, right);
            }
            // a fixed point left of mid requires A[left] <= mid
            if (indexFound == -1 && A[left] <= mid) {
                indexFound = binarySearchHelper(A, left, mid - 1);
            }
        }
    }
    return indexFound;
}
In the worst case (A has no element equal to its index), binarySearchHelper makes 2 recursive calls with the input size halved at each level of recursion, meaning it has a worst-case time complexity of O(n). That's the same as the O(n) approach where you just read through the array in order. Is this really the best you can do? Also, is there a way to measure the algorithm's average time complexity? If not, is there some heuristic for deciding when to use the basic O(n) read-through approach and when to try a recursive approach such as mine?
If A has negative integers, then the check if (left <= right) in binarySearchHelper is necessary. For example, if A = [-1], the algorithm would otherwise recurse from bsh(A, 0, 0) to bsh(A, 1, 0) and then to bsh(A, 0, -1). My intuition leads me to believe the check if (left <= right) is necessary if and only if A has some negative integers. Can anyone help me verify this?
I would take a different approach. First I would eliminate all negative numbers in O(log n), simply by binary searching for the first non-negative number; this is allowed because no negative number can be equal to its (non-negative) index. Let's say the index of the first non-negative element is i.
Now keep doing the following until you find the element or find that it doesn't exist:
1. If i is not a valid index of A, return false.
2. If i < A[i], set i = A[i]. It would take A[i] - i duplicates for i to 'catch up' to A[i], so we can increment i by A[i] - i, which is equivalent to setting i to A[i]. Go to 1.
3. If i == A[i], return true (and the index, if you want it).
4. Otherwise (A[i] < i), find the first index j > i such that A[j] >= j. You can do this with a 'binary search from the left': increment i by 1, 2, 4, 8, etc. and then do a binary search on the last interval you found it in. If no such index exists, return false.
In the worst case the above is still O(n), but it has many tricks to speed it up well beyond that in better cases.
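A hedged Python sketch of this jumping idea (the names are mine; to stay safely correct, the galloping step in 4 is replaced by a plain linear advance, so the worst case is O(n), but the i = A[i] jumps still skip long stretches):

import bisect

def fixed_point(A):
    # A is sorted ascending, possibly with duplicates.
    # Returns an index i with A[i] == i, or -1 if none exists.
    n = len(A)
    i = bisect.bisect_left(A, 0)   # skip negatives: they can't match an index
    while i < n:
        if A[i] == i:
            return i
        if A[i] > i:
            i = A[i]               # safe jump: no j in (i, A[i]) can have A[j] == j
        else:
            i += 1                 # conservative advance (the answer gallops here)
    return -1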

Maximize sum of list with no more than k consecutive elements from input

I have an array of N numbers, and I want to remove only those elements whose removal leaves a list in which no more than K numbers are adjacent to each other anywhere. There can be multiple lists that satisfy this restriction; I want the one in which the sum of the remaining numbers is maximum, and I should print only that sum.
The algorithm that I have come up with so far has a time complexity of O(n^2). Is a better algorithm possible for this problem?
Link to the question.
Here's my attempt:
#include <stdio.h>

int main()
{
    // Total number of elements in the list
    int count = 6;
    // Maximum number of elements that can be together
    int maxTogether = 1;
    // The list of numbers
    int billboards[] = {4, 7, 2, 0, 8, 9};
    int maxSum = 0;
    for (int k = 0; k <= maxTogether; k++) {
        int sum = 0;
        int size = k;
        for (int i = 0; i < count; i++) {
            if (size != maxTogether) {
                sum += billboards[i];
                size++;
            } else {
                size = 0;
            }
        }
        printf("%i\n", sum);
        if (sum > maxSum) {
            maxSum = sum;
        }
    }
    return 0;
}
The O(NK) dynamic programming solution is fairly easy:
Let A[i] be the best sum of the elements to the left subject to the not-k-consecutive constraint (assuming we're removing the i-th element as well).
Then we can calculate A[i] by looking back K elements:
A[i] = 0
for j = 1 to k
    A[i] = max(A[i], A[i-j])
A[i] += input[i]
And, at the end, just look through the last k elements from A, adding the elements to the right to each and picking the best one.
But this is too slow.
Let's do better.
So A[i] finds the best from A[i-1], A[i-2], ..., A[i-K+1], A[i-K].
So A[i+1] finds the best from A[i], A[i-1], A[i-2], ..., A[i-K+1].
There's a lot of redundancy there - we already know the best from indices i-1 through i-K because of A[i]'s calculation, but then we find the best of all of those except i-K (with i) again in A[i+1].
So we can just store all of them in an ordered data structure, then remove A[i-K] and insert A[i]. My choice: a binary search tree to find the minimum, along with a circular array of size K+1 holding the tree nodes, so we can easily find the node we need to remove.
I swapped the problem around to make it slightly simpler - instead of finding the maximum of remaining elements, I find the minimum of removed elements and then return total sum - removed sum.
High-level pseudo-code:
for each i in input
    add (i + the smallest value in the BST) to the BST
    add the above node to the circular array
    if it wrapped around, remove the overwritten element from the BST
// now the remaining nodes in the BST are the last k elements
return (the total sum - the smallest value in the BST)
Running time:
O(n log k)
Java code:
int getBestSum(int[] input, int K)
{
    // Node wraps an int value; it's assumed to be comparable by value, with
    // ties broken (e.g. by insertion order) so equal sums aren't dropped by the set
    Node[] array = new Node[K+1];
    TreeSet<Node> nodes = new TreeSet<Node>();
    Node n = new Node(0);
    nodes.add(n);
    array[0] = n;
    int arrPos = 0;
    int sum = 0;
    for (int i: input)
    {
        sum += i;
        Node oldNode = nodes.first();            // smallest removed-sum so far
        Node newNode = new Node(oldNode.value + i);
        arrPos = (arrPos + 1) % array.length;
        if (array[arrPos] != null)
            nodes.remove(array[arrPos]);         // slide the (K+1)-size window
        array[arrPos] = newNode;
        nodes.add(newNode);
    }
    return sum - nodes.first().value;
}
getBestSum(new int[]{1,2,3,1,6,10}, 2) returns 21, as required.
Let f[i] be the maximum total value you can get from the first i numbers, given that you don't choose the last (i.e. the i-th) one. Then we have
f[i] = max {
    f[i-1],
    max { f[j] + sum(j + 1, i - 1) | (i - j) <= k }
}
You can use a heap-like data structure to maintain the candidates and get the maximum one in O(log n) time; keep a global delta (or similar) for the running sums, and pay attention to the range i - j <= k. A sketch follows.
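Here is a hedged Python sketch of this dynamic program, written in the equivalent minimize-what-you-remove form from the previous answer, and using a monotonic deque for the window minimum instead of a heap, which brings it down to O(n); the sentinel positions and names are my own:

from collections import deque

def best_sum(values, k):
    # f[i] = minimum total removed, given that position i is removed.
    # Positions run 0..n+1 with zero-cost sentinels at both ends;
    # consecutive removed positions j < i need i - j <= k + 1 so that
    # at most k kept elements sit between them.
    n = len(values)
    cost = [0] + list(values) + [0]
    f = [0] * (n + 2)
    window = deque([0])            # indices j, with f[j] non-decreasing
    for i in range(1, n + 2):
        while window and window[0] < i - (k + 1):
            window.popleft()       # j too far back
        f[i] = cost[i] + f[window[0]]
        while window and f[window[-1]] >= f[i]:
            window.pop()           # dominated by f[i]
        window.append(i)
    return sum(values) - f[n + 1]

print(best_sum([1, 2, 3, 1, 6, 10], 2))   # prints 21, matching the example above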
The following algorithm is of O(N*K) complexity.
Examine the first K elements (indices 0 to K-1) of the array. There can be at most one gap in this region.
Reason: if there were two gaps, there would be no reason to keep the earlier one.
For each index i among these K gap options, the following holds:
1. The sum up to i-1 is the present score of that option.
2. If the next gap is at a distance d, then the options for d are (K - i) to K.
For every possible position of the next gap, calculate the best sum up to that position among the options.
The latter part of the array can be traversed similarly, independently of the earlier gap history.
Traverse the array this way until the end.

How to determine at which index has a sorted array been rotated around?

Given an array, such as [7,8,9,0,1,2,3,4,5,6], is it possible to determine the index around which a rotation has occurred faster than O(n)?
With O(n), simply iterate through all the elements and mark the first decreasing element as the index.
A possibly better solution would be to iterate from both ends towards the middle, but this still has a worst case of O(n).
(EDIT: The below assumes that the elements are distinct. If they aren't distinct, I don't think there's anything better than just scanning the array.)
You can binary search it. I won't post any code, but here's the general idea. (I'll assume that a >= b for the rest of this; if a < b, then we know the array is still in its sorted order.)
Take the first element, calling it a, the last element, calling it b, and the middle element, calling it c.
If a < c, then you know that the pivot is between c and b, and you can recurse with c and b as your new ends. If a > c, then you know that the pivot is between a and c, and you recurse in that half (with a and c as ends) instead.
ADDENDUM: To extend this to cases with repeats: if we have a = c > b, then we recurse with c and b as our ends, while if a = c = b, we scan from a to c to see if there is some element d that differs. If no such element exists, then all of the numbers between a and c are equal, and thus we recurse with c and b as our ends. If it does exist, there are two scenarios:
a > d < b: here d is the smallest element, since we scanned from the left, and we're done.
a < d > b: here we know the answer is somewhere between d and b, and so we recurse with those as our ends.
In the best case scenario, we never have to use the equality case, giving us O(log n). Worst case, those scans encompass almost all of the array, giving us O(n).
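A small recursive Python sketch of the distinct-element case described above (my own naming; the duplicate handling from the addendum is omitted):

def find_pivot(a, lo, hi):
    # a[lo..hi] is an ascending run rotated some amount, distinct elements;
    # returns the index of the smallest element (the rotation point)
    if lo >= hi:
        return lo
    if a[lo] < a[hi]:
        return lo                              # slice already sorted: pivot at its start
    mid = (lo + hi) // 2
    if a[lo] <= a[mid]:
        return find_pivot(a, mid + 1, hi)      # left half sorted: pivot is to the right
    return find_pivot(a, lo, mid)              # pivot in the left half (including mid)

find_pivot([7, 8, 9, 0, 1, 2, 3, 4, 5, 6], 0, 9)   # 3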
For an array of size N, if the array has been rotated at least once and fewer than N times, I think this will work fine:
int low = 0, high = n - 1;
int mid = (low + high) / 2;
while (mid != low && mid != high)
{
    if (a[low] < a[mid])
        low = mid;
    else
        high = mid;
    mid = (low + high) / 2;
}
return high;
You can use a binary search. If you pick the middle value, here 1, you know the break is in the first half, because 7 > 1 < 6.
One observation is that the shift is equal to the index of the minimal element. So all you have to do is use binary search to find the minimal element. The only catch is that if the array has equal elements, the task gets a bit tricky: you cannot achieve better big-O efficiency than O(N) time, because you can have an input like [0, 0, 0, 0, ..., 100, 0, 0, 0, ..., 0], where you obviously cannot find the only non-zero element faster than linearly. But the following algorithm still achieves O(Mins + log(N)), where Mins is the number of minimal elements if array[0] is one of the minima (otherwise Mins = 0, giving no penalty).
l = 0;
r = len(array) - 1;
while( l < r && array[l] == array[r] ) {
    l = l + 1;
}
while( l < r ) {
    m = (l + r) / 2;
    if( array[m] > array[r] ) {
        l = m + 1;
    } else {
        r = m;
    }
}
// Here l is the answer: shifting the array l elements left will make it sorted
This works in O(log N) for arrays with unique elements and O(N) for arrays with non-unique elements (but is still faster than a naive solution for the majority of inputs).
Preconditions
The array is sorted in ascending order.
The array has been left-rotated.
Approach
First we need to find the index at which the smallest element sits.
The number of times the array has been rotated equals the difference between the length of the array and the index of the smallest element.
So the task is to find the index of the smallest element, which we can do in two ways.
Method 1
Just traverse the array; when the current element is greater than the next element, the next index is the index of the smallest element. In the worst case this takes O(n).
Method 2
Find the middle element using (lowIndex + highIndex) / 2.
Then we need to determine on which side of the middle element the smallest element lies, because it can be found either to the left or to the right of the middle element.
Compare the first element to the middle element: if the first element is greater than the middle element, the smallest element lies on the left side of the middle element, and
if the first element is smaller than the middle element, the smallest element lies on the right side of the middle element.
So this can be applied like a binary search, and in O(log(n)) we can find the index of the smallest element.
Using a recursive method:
static void Main(string[] args)
{
    var arr = new int[]{7,8,9,0,1,2,3,4,5,6};
    Console.WriteLine(FindRotation(arr));
}

private static int FindRotation(int[] arr)
{
    var mid = arr.Length / 2;
    return CheckRotation(arr, 0, mid, arr.Length - 1);
}

private static int CheckRotation(int[] arr, int start, int mid, int end)
{
    var returnVal = 0;
    if (start < end && end - start > 1)
    {
        if (arr[start] > arr[mid])
        {
            returnVal = CheckRotation(arr, start, start + ((mid - start) / 2), mid);
        }
        else if (arr[end] < arr[mid])
        {
            returnVal = CheckRotation(arr, mid, mid + ((end - mid) / 2), end);
        }
    }
    else
    {
        returnVal = end;
    }
    return returnVal;
}

Amortized Time Cost using Accounting Method

I wrote an algorithm to calculate the next lexicographic permutation of an array of integers (e.g. 123, 132, 213, 231, 312, 321). I don't think the code is necessary, but I included it below.
I think I have appropriately determined the worst-case time cost of O(n), where n is the number of elements in the array. I understand, however, that if you utilize "amortized cost" analysis, the time cost can be accurately shown to be O(1) on the average case.
Question:
I would like to learn the "ACCOUNTING METHOD" to show this is O(1), but am having difficulty understanding how to apply a cost to each operation. Accounting method link: Accounting_Method_Explained
Thoughts:
I've thought about applying a cost to changing a value at a position, or applying the cost to a swap, but it really doesn't make much sense to me yet.
public static int[] getNext(int[] array) {
    int temp;
    int j = array.length - 1;
    int k = array.length - 1;
    // Find the largest index j with a[j] < a[j+1], i.e. the next
    // adjacent pair of values not in descending order
    do {
        j--;
        if (j < 0) {
            // Edge case: this is the largest permutation, return reverse order
            for (int x = 0, y = array.length - 1; x < y; x++, y--) {
                temp = array[x];
                array[x] = array[y];
                array[y] = temp;
            }
            return array;
        }
    } while (array[j] > array[j+1]);
    // Find index k such that a[k] is the smallest integer
    // greater than a[j] to the right of a[j]
    for (; array[j] > array[k]; k--);
    // Swap the two elements found at j and k
    temp = array[k];
    array[k] = array[j];
    array[j] = temp;
    // Sort the elements to the right of j+1 in ascending order;
    // this makes sure you get the next smallest order
    // after swapping j and k
    int r = array.length - 1;
    int s = j + 1;
    while (r > s) {
        temp = array[s];
        array[s++] = array[r];
        array[r--] = temp;
    }
    return array;
} // end getNext
Measure running time in swaps, since the other work per iteration is worst-case O(#swaps).
The swap of array[j] and array[k] has virtual cost 2. The other swaps have virtual cost 0. Since at most one swap per iteration is costly, the running time per iteration is amortized constant (assuming that we don't go into debt).
To show that we don't go into debt, it suffices to show that, if the swap of array[j] and array[k] leaves a credit at position j, then every other swap involves a position with a credit available, which is consumed. Case analysis and induction reveal that, between iterations, if an item is larger than the one immediately following it, then it was put in its current position by a swap that left an as-yet unconsumed credit.
This problem is not a great candidate for the accounting method, given the comparatively simple potential function that can be used: the number of indices j such that array[j] > array[j + 1].
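A brief sketch of that potential argument (with the scaling of work units as an assumption: one call whose descending suffix has length $d$ costs at most $d + 1$ units):

$$\Phi(a) = \#\{\, j : a[j] > a[j+1] \,\}$$

$$\hat{c} \;=\; c + \Phi_{\text{after}} - \Phi_{\text{before}} \;\le\; (d + 1) + \bigl(1 - (d - 1)\bigr) \;=\; 3 \;=\; O(1),$$

since reversing the suffix destroys its $d - 1$ internal descents while the swap creates at most one new descent.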
From the aggregate analysis, we see T(n) < n! · e < n! · 3 over the full cycle of permutations, so we pay $3 for each operation, and that is enough for the total of n! operations. It is therefore an upper bound on the actual cost, so the total amortized cost per operation is O(1).

Find the x smallest integers in a list of length n

You have a list of n integers and you want the x smallest. For example,
x_smallest([1, 2, 5, 4, 3], 3) should return [1, 2, 3].
I'll vote up unique runtimes within reason and will give the green check to the best runtime.
I'll start with O(n * x): Create an array of length x. Iterate through the list x times, each time pulling out the next smallest integer.
Edits
You have no idea how big or small these numbers are ahead of time.
You don't care about the final order, you just want the x smallest.
This is already being handled in some solutions, but let's say that while you aren't guaranteed a unique list, you aren't going to get a degenerate list either, such as [1, 1, 1, 1, 1].
You can find the k-th smallest element in O(n) time (here k = x). This has been discussed on Stack Overflow before. There are relatively simple randomized algorithms, such as QuickSelect, that run in O(n) expected time, and more complicated algorithms that run in O(n) worst-case time.
Given the k-th smallest element, you can make one pass over the list to collect all elements less than it, plus enough copies of it to reach k elements, and you are done. (I assume that the result array does not need to be sorted.)
Overall run-time is O(n).
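A hedged Python sketch along these lines, using a random pivot (so expected O(n); the worst-case O(n) bound would need a median-of-medians pivot):

import random

def x_smallest(items, x):
    # Partition around a random pivot and recurse only into the side
    # that must still contain part of the answer; expected O(n) total.
    if x <= 0:
        return []
    if x >= len(items):
        return list(items)
    pivot = random.choice(items)
    less = [v for v in items if v < pivot]
    equal = [v for v in items if v == pivot]
    if x <= len(less):
        return x_smallest(less, x)
    if x <= len(less) + len(equal):
        return less + equal[:x - len(less)]
    greater = [v for v in items if v > pivot]
    return less + equal + x_smallest(greater, x - len(less) - len(equal))

x_smallest([1, 2, 5, 4, 3], 3)   # [1, 2, 3], in no particular order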
Maintain the list of the x smallest so far in sorted order in a skip list. Iterate through the array. For each element, find where it would be inserted in the skip list (log x time). If that position is in the interior of the list, it is one of the x smallest so far, so insert it and remove the element at the end of the list. Otherwise do nothing.
Time: O(n*log(x))
Alternative implementation: maintain the collection of the x smallest so far in a max-heap, compare each new element with the top element of the heap, and pop + insert the new element only if the new element is less than the top element. Since comparison with the top element is O(1) and pop/insert is O(log x), this is also O(n*log(x)).
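A minimal Python sketch of the max-heap variant (heapq is a min-heap, so the values are negated; the function name is mine):

import heapq

def x_smallest(items, x):
    if x <= 0:
        return []
    heap = [-v for v in items[:x]]   # max-heap of the x smallest, via negation
    heapq.heapify(heap)
    for v in items[x:]:
        if v < -heap[0]:             # smaller than the current x-th smallest
            heapq.heapreplace(heap, -v)
    return [-v for v in heap]        # the x smallest, in no particular order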
Add all n numbers to a heap and delete-min x of them. Complexity is O((n + x) log n). Since x is obviously less than n, it's O(n log n). (Building the heap bottom-up instead would improve this to O(n + x log n).)
If the range of numbers (L) is known, you can do a modified counting sort.
given L, x, input[]

counts <- array[0..L]
for each number in input
    increment counts[number]
next

# populate the output
index <- 0
xIndex <- 0
while xIndex < x and index <= L
    if counts[index] > 0 then
        decrement counts[index]
        output[xIndex] = index
        increment xIndex
    else
        increment index
    end if
loop
This has a runtime of O(n + L) (with memory overhead of O(L)) which makes it pretty attractive if the range is small (L < n log n).
def x_smallest(items, x):
    result = sorted(items[:x])
    for i in items[x:]:
        if i < result[-1]:
            result[-1] = i
            j = x - 1
            while j > 0 and result[j] < result[j-1]:
                result[j-1], result[j] = result[j], result[j-1]
                j -= 1
    return result
Worst case is O(x*n), but will typically be closer to O(n).
Pseudocode:
def x_smallest(array<int> arr, int limit)
    array<int> ret = new array[limit]
    fill(ret, INT_MAX)
    for i in arr
        for j in range(0..limit)
            if (i < ret[j])
                swap(i, ret[j])   # keep ret ordered; i carries the displaced value onward
            endif
        endfor
    endfor
    return ret
enddef
In pseudocode:
y = length of list / 2
if (x > y)
    iterate and pop off the (length - x) largest
else
    iterate and pop off the x smallest
O(n/2 * x)?
sort array
slice array 0 x
Choose the best sort algorithm and you're done: http://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms
You can sort then take the first x values?
Java: with QuickSort, O(n log n)

import java.util.Arrays;
import java.util.Random;

public class Main {

    public static void main(String[] args) {
        Random random = new Random(); // Random number generator
        int[] list = new int[1000];
        int length = 3;

        // Initialize array with positive random values
        for (int i = 0; i < list.length; i++) {
            list[i] = Math.abs(random.nextInt());
        }

        // Solution
        int[] output = findSmallest(list, length);

        // Display results
        for (int x : output)
            System.out.println(x);
    }

    private static int[] findSmallest(int[] list, int length) {
        // A tuned quicksort
        Arrays.sort(list);
        // Send back the correct number of elements
        return Arrays.copyOf(list, length);
    }
}

It's pretty fast.
private static int[] x_smallest(int[] input, int x)
{
    int[] output = new int[x];
    for (int i = 0; i < x; i++) { // O(x)
        output[i] = input[i];
    }
    for (int i = x; i < input.Length; i++) { // + O(n-x)
        int current = input[i];
        int temp;
        for (int j = 0; j < output.Length; j++) { // * O(x)
            if (current < output[j]) {
                temp = output[j];
                output[j] = current;
                current = temp;
            }
        }
    }
    return output;
}
Looking at the complexity:
O(x + (n-x) * x) -- assuming x is some constant, O(n)
What about using a splay tree? Because of the splay tree's unique approach to adaptive balancing, it makes for a slick implementation of the algorithm, with the added benefit of being able to enumerate the x items in order afterwards. Here is some pseudocode.
public SplayTree GetSmallest(int[] array, int x)
{
    var tree = new SplayTree();
    for (int i = 0; i < array.Length; i++)
    {
        int max = tree.GetLargest();
        if (array[i] < max || tree.Count < x)
        {
            if (tree.Count >= x)
            {
                tree.Remove(max);
            }
            tree.Add(array[i]);
        }
    }
    return tree;
}
The GetLargest and Remove operations have an amortized complexity of O(log(n)), but because the last accessed item bubbles to the top, they would normally be O(1). So the space complexity is O(x) and the runtime complexity is O(n*log(x)). If the array happens to already be ordered, this algorithm achieves its best-case complexity of O(n), with either an ascending or a descending ordered array. However, a very odd or peculiar ordering could result in O(n^2) complexity. Can you guess how the array would have to be ordered for that to happen?
In Scala, and probably other functional languages, it's a no-brainer:
scala> List (1, 3, 6, 4, 5, 1, 2, 9, 4) sortWith ( _<_ ) take 5
res18: List[Int] = List(1, 1, 2, 3, 4)
