I am given an empty stack, and I need to support three operations:
PUSH x: Push element x onto the stack
POP: Pop the top element
INC L R x: Increment the elements at positions L through R by x
After each query I need to report the top element of the stack. How can this be done when there can be up to 10^6 queries?
We can't update all the elements again and again, so please provide an efficient solution.
We can use a segment tree that supports your required operations in O(log n):
Increment all elements in a given range
For each node in your segment tree whose interval is fully included in the given range, increment a counter num_increments for it: this counter tells you how many times all the elements in that node's range were incremented. Only do this for the topmost such nodes; do not recursively go down to their children once you've done this.
Query the value at a given index
The answer to this is v[index] + number_of_increments. You can find the number of increments by finding the node associated with the index in the segment tree and keeping track of its parents' num_increments values as you walk down to it.
There are a couple of things to consider, depending on your exact problem:
For a given L, R, maybe set R = min(R, stack.Size), as it makes no sense to increment elements not yet in the stack. Or maybe it does for your problem; I don't know. If it does make sense for your problem, that makes things easier, and it invalidates my second point below;
What happens when you pop an element from the stack? This method will still mark its position as incremented, so if you push one back, it will consider it incremented by 1. Think about how you can also support decrement for a given index (it's similar to the query operation).
Incrementation by x instead of 1 should be easy to achieve.
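For illustration, here is a minimal Python sketch of this structure (my own names, not code from the question), assuming 0-based positions and that the base values are kept in a separate array v:

class RangeAddPointQuery:
    def __init__(self, n):
        self.n = n
        self.cnt = [0] * (4 * n)          # num_increments for each node

    def update(self, l, r, x, node=1, lo=0, hi=None):
        # add x to every position in [l, r]
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.cnt[node] += x           # topmost fully covered node: stop here
            return
        mid = (lo + hi) // 2
        self.update(l, r, x, 2 * node, lo, mid)
        self.update(l, r, x, 2 * node + 1, mid + 1, hi)

    def query(self, i, node=1, lo=0, hi=None):
        # total increment at position i = sum of counters on the root-to-leaf path
        if hi is None:
            hi = self.n - 1
        if lo == hi:
            return self.cnt[node]
        mid = (lo + hi) // 2
        if i <= mid:
            return self.cnt[node] + self.query(i, 2 * node, lo, mid)
        return self.cnt[node] + self.query(i, 2 * node + 1, mid + 1, hi)

With a stack of current size s, the top element is v[s - 1] + tree.query(s - 1); the pop/re-push issue mentioned above could be handled by cancelling the accumulated increments for that position, e.g. tree.update(i, i, -tree.query(i)).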
There will be more push than pop operations; otherwise the stack would be empty in the end. Look for the last push that doesn't have a corresponding pop; this is the element that will be on top of the stack in the end. Now simply increment this element for each appropriate inc operation.
Complexity for this method:
O(n) computation (two passes over the operations)
O(n) memory
where n is the total number of queries/operations.
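For what it's worth, a rough Python sketch of this idea (it only reports the final top element, as described above; the operation format is illustrative):

def final_top(ops):
    # ops: a list of ("PUSH", x), ("POP",) and ("INC", l, r, x) tuples, 1-based positions
    stack = []   # (push_time, value) of the elements currently on the stack
    incs = []    # (time, l, r, x) of all INC operations
    for t, op in enumerate(ops):
        if op[0] == "PUSH":
            stack.append((t, op[1]))
        elif op[0] == "POP":
            stack.pop()
        else:
            incs.append((t,) + op[1:])
    if not stack:
        return None                    # the stack ended up empty
    push_time, value = stack[-1]       # the last push without a corresponding pop
    depth = len(stack)                 # its position, fixed from the moment it was pushed
    for t, l, r, x in incs:
        if t > push_time and l <= depth <= r:
            value += x                 # apply every inc operation that covers it
    return value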
Suppose I have an image that looks like
x x x x
x x x x
x x x x
Dimensions might change, but this gives a rough idea.
I am curious whether there is an algorithm that will help me quickly check whether the point I am currently looking at is a corner, a point on one of the 4 sides, or a point inside the square itself, and also help me check all the points around the point I am currently looking at.
My current approach is to write a few helper functions that separately check whether the current coordinate is a corner, a point on one of the 4 sides, or a point inside the square, and within each helper function I use several loops to check all the neighboring points around the current point. But this approach feels extremely inefficient; I believe there must be a better way. Has anyone encountered this kind of question before?
Thanks.
Largely you are correct, but there should be no need to use loops. You can make your functions efficient by using some index calculations and direct access to a 1-dimensional array.
Imagine that your image is stored in a 1-dimensional array D. The image is of size (m,n). Hence the array will have a size of m x n. Each data point will have its ID as the index to the array D.
To access neighbors of ID = a, use the following offsets:
a-1, a+1 for left and right neighbors
a-m, a+m for bottom and top neighbors
a-m+1, a-m-1, a+m+1, a+m-1 for diagonal neighbors
After every offset you need to check for the following:
Is the neighbor index out of bounds for the array D?
Does the neighbor index wrap around the x-bounds? That is, check that abs((neighbor_id % m) - (a % m)) <= 1; otherwise neighbor_id is not a neighbor.
Of course, the second test assumes that your image is large enough (perhaps m > 3).
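For illustration, a small Python sketch of these checks (my own function names; assuming row-major storage with m values per row, so index a = y*m + x):

def neighbors(a, m, n):
    # indices of the up-to-8 neighbors of index a in the flat array D of size m*n
    result = []
    for offset in (-1, +1, -m, +m, -m - 1, -m + 1, +m - 1, +m + 1):
        b = a + offset
        if b < 0 or b >= m * n:
            continue                          # out of bounds of D
        if abs(b % m - a % m) > 1:
            continue                          # wrapped around a row boundary
        result.append(b)
    return result

def classify(a, m, n):
    # is a a corner, a point on one of the 4 sides, or an interior point?
    x, y = a % m, a // m
    on_x_edge = x == 0 or x == m - 1
    on_y_edge = y == 0 or y == n - 1
    if on_x_edge and on_y_edge:
        return "corner"
    if on_x_edge or on_y_edge:
        return "side"
    return "interior"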
A question on an old exam paper used for revision asks about a type of sorting that I cannot find the name of anywhere. Hopefully somebody here can help.
b. Produce an algorithm which will sort an array so that the largest
items are on the ends and the smallest in the middle. For example:
[2,6,5,9,12] might become [12,6,2,5,9]
Make one pass through the sequence to find the largest value, the second largest value, and the smallest value. Swap the largest to one end, the second largest to the other end, and the smallest to the middle. Voila: largest items are on the ends and the smallest is in the middle. Calling this a "sort" is silly.
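A quick Python sketch of that idea (it uses three simple scans instead of a literal single pass, just for clarity):

def ends_sort(a):
    n = len(a)
    if n < 3:
        return a
    i = max(range(n), key=lambda k: a[k])           # largest to the left end
    a[0], a[i] = a[i], a[0]
    j = max(range(1, n), key=lambda k: a[k])        # second largest to the right end
    a[n - 1], a[j] = a[j], a[n - 1]
    s = min(range(1, n - 1), key=lambda k: a[k])    # smallest to the middle
    a[n // 2], a[s] = a[s], a[n // 2]
    return a

For the example above, ends_sort([2, 6, 5, 9, 12]) gives [12, 6, 2, 5, 9].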
I guess the point is to create the algo yourself:
Just an idea:
biggest = value of first element
smallest = value of first element
for all elements of the array do:
    if value of current element > biggest:
        biggest = value of current element
        add biggest as last element of the array
    if value of current element < smallest:
        smallest = value of current element
end of for loop
move the last element of the array to the first position
# now the biggest is the first element, the second biggest number is the last one
put smallest at the middle position of the array [index max / 2, rounded up]
# now the smallest is in the middle
I hope it helps.
Thomas
Call every subunitary ratio (a ratio strictly less than 1) whose denominator is a power of 2 a perplex.
Number 1 can be written in many ways as a sum of perplexes.
Call every sum of perplexes a zeta.
Two zetas are distinct if and only if one of them has at least one perplex that the other does not have. In the image shown above, the last two zetas are considered to be the same.
Find the number of ways 1 can be written as a zeta with N perplexes. Because this number can be large, calculate it modulo 100003.
Please don't post the code, but rather the algorithm. Be as precise as you can.
This problem was given at a contest and the official solution, written in the Romanian language, has been uploaded at https://www.dropbox.com/s/ulvp9of5b3bfgm0/1112_descr_P2_fractii2.docx?dl=0 , as a docx file. (you can use google translate)
I do not understand what the author of the solution meant to say there.
Well, this reminds me of BFS (breadth-first search) algorithms, where you radiate out from a single point to find multiple solutions with different permutations.
Here you can use recursion, and set the base case as the point when N perplexes have been reached in that one call stack of the recursive function.
So you can say:
function(int N <-- perplexes, ArrayList<Double> currentNumbers, double dividedNum)
    if N == 0, then you're done: enter the currentNumbers array into a hashtable
    clone the currentNumbers ArrayList as cloneNumbers
    remove dividedNum from cloneNumbers and add two copies of dividedNum/2
    iterate through cloneNumbers:
        for every number x in cloneNumbers, call function(N - 1, cloneNumbers, x)
This is a rough, very inefficient but short way to do it. There are obviously many ways you can prune the algorithm (reduce the number of duplicates going into the hashtable, avoid cloning as much as possible, etc.). Because this generates every permutation of every number and then enters each sequence into a hashtable, the hashtable will use its equals() comparison to see that a sequence already exists (such as your last 2 zetas) and reject the duplicate. That way, you'll be left with the answer you want.
The efficiency of the current algorithm is O(|E|^N), where |E| is the number of values the array can contain at the end of all insertions, and N is the number of insertions (or, as you said, the number of perplexes). Obviously this isn't optimal, but it does work.
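To make the idea concrete, here is a brute-force Python sketch (my own formulation, using sorted tuples of Fractions in place of the ArrayList and hashtable; far too slow for contest limits, but it shows the enumeration):

from fractions import Fraction

def count_zetas(n):
    seen = set()      # every multiset (as a sorted tuple) already expanded
    results = set()   # distinct multisets with exactly n perplexes

    def expand(multiset):
        if multiset in seen:
            return
        seen.add(multiset)
        if len(multiset) == n:
            if all(x < 1 for x in multiset):   # a perplex is strictly less than 1
                results.add(multiset)
            return
        for i, x in enumerate(multiset):
            # replace one element x by two copies of x/2 and recurse
            child = tuple(sorted(multiset[:i] + multiset[i + 1:] + (x / 2, x / 2)))
            expand(child)

    expand((Fraction(1),))
    return len(results) % 100003

For example, count_zetas(3) == 1 ({1/2, 1/4, 1/4}) and count_zetas(4) == 2 ({1/2, 1/4, 1/8, 1/8} and {1/4, 1/4, 1/4, 1/4}).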
Hope this helps!
Hi, I am learning algorithms using Python. I was going through some sample questions in the book, one of which asks to show why the worst-case time of the extraction operation on a heap
implemented as an array is O(log n). I have no clue where to start on this, and I am getting close to the exam. Could anyone please help me prove this?
Thanks
Let's say we have a max heap. I will illustrate for n = 7, but the logic is the same for heaps of bigger size.
Worst case for extract happens when the root node has been changed to contain the smallest value of all the nodes (we extract the root in O(1) and put the last element of the array at the root).
Now when we call Max-Heapify on the root, the value will have to be exchanged down with one of its children at every level, until it reaches the lowest level.
This is because, after every swap, the value will still be smaller than both its children (since it is the minimum), until it reaches the lowest level where it has no more children.
In such a heap, the number of exchanges needed to max-heapify the root equals the height of the tree, which is log(n). So the worst-case running time is O(log n).
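A short Python sketch of that worst case (0-based array, max heap), where the value placed at the root sinks one level per swap:

def max_heapify(heap, i):
    # sift the value at index i down; at most one swap per level, i.e. O(log n) swaps
    n = len(heap)
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def extract_max(heap):
    top = heap[0]             # taking the root is O(1)
    heap[0] = heap[-1]        # move the last element of the array to the root
    heap.pop()
    if heap:
        max_heapify(heap, 0)  # O(log n) in the worst case
    return top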
Assuming a min priority queue:
Disregarding the actual extraction, which is O(1), remember that when you extract the smallest value from a heap, you swap the last element with the first one and then restore the heap property by comparing this element with its children and swapping accordingly. Using the heap property, the children of any node p are at indices 2p and 2p + 1. So in the worst case, to find the children of any node, one would have to check nodes:
{ 2p, 2*2p, 2*2*2p, ..., N - 1 }
{ 2p, 4p, 8p, ..., (2^(lg2 N))p }
or
{ (2^1)p, (2^2)p, (2^3)p, ..., (2^(lg2 N))p }
So as you can see, the maximum number of elements to check in the worst case is lg2(N)
This was a pretty rough way of showing this property; I apologize.
I need to propose an algorithm for the following: assume we have an array consisting of zeros and ones. The array is filled with zeros from the beginning of the array up to some index m, and all remaining indexes are filled with ones. I need to find this index m in O(log m) time.
Here is what I thought: this is like binary search. First I look at the middle element of the array; if it is zero, then I forget about the left part of the array and do the same for the right part, and continue like this until I encounter a one. If the middle element is one, then I forget about the right part and do the same for the left part of the array. Is this a correct O(log m) solution? Thanks
It is not "like" a binary search - it is a binary search. Unfortunately, it is O(log N), not O(log M).
To find the borderline in O(log M), start from the other end: try positions {1, 2, 4, 8, 16, ..., 2^i} and so on, until you hit a 1. Then do a binary search on the interval between 2^i and 2^(i+1), where 2^(i+1) is the first position where you discovered a 1.
Finding the first 1 takes O(log M), because the index is doubled on each iteration. After that, the binary search takes another O(log M), because the length of the interval 2^i..2^(i+1) is less than M as well.
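A small Python sketch of this, assuming the array contains at least one 1 (it returns the index of the first 1, from which the border m follows directly):

def find_first_one(a):
    hi = 1
    while hi < len(a) and a[hi] == 0:
        hi *= 2                      # probe positions 1, 2, 4, 8, ... until a 1 is found
    lo = hi // 2                     # start of the interval to search (the previous probe)
    hi = min(hi, len(a) - 1)         # end of the interval, clamped to the array
    while lo < hi:                   # ordinary binary search inside the last interval
        mid = (lo + hi) // 2
        if a[mid] == 0:
            lo = mid + 1
        else:
            hi = mid
    return lo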