Efficient way to find all zeros in a matrix? - algorithm

I am trying to think of an efficient algorithm to find the number of zeros in a row of a matrix, but can only think of an O(n^2) solution (i.e. by iterating over each row and column). Is there a more efficient way to count the zeros?
For example, given the matrix
3, 4, 5, 6
7, 8, 0, 9
10, 11, 12, 3
4, 0, 9, 10
I would report that there are two zeros.

Without storing any external information, no, you can't do any better than Θ(N^2). The rationale is simple - if you don't look at all N^2 locations in the matrix, then you can't guarantee that you've found all of the zeros and might end up giving the wrong answer back. For example, if I know that you look at fewer than N^2 locations, then I can run your algorithm on a matrix and see how many zeros you report. I could then look at the locations that you didn't access, replace them all with zeros, and run your algorithm again. Since your algorithm doesn't look at those locations, it can't know that they have zeros in them, and so at least one of the two runs of the algorithm would give back the wrong answer.
More generally, when designing algorithms to process data, a good way to see if you can do better than a certain runtime is to use this sort of "adversarial analysis." Ask yourself the question: if I run faster than some time O(f(n)), could an adversary manipulate the data in ways that change the answer but that I wouldn't be able to detect? This is the sort of analysis that, along with some more clever math, proves that comparison-based sorting algorithms cannot do any better than Ω(n log n) in the average case.
If the matrix has some other properties to it (for example, if it's sorted), then you might be able to do a better job than running in O(N^2). As an example, suppose that you know that all rows of the matrix are sorted. Then you can easily do a binary search on each row to determine how many zeros it contains, which takes O(N log N) time and is faster.
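As a rough illustration of that sorted-rows case, here is a minimal Python sketch (assuming each row is sorted in ascending order, so its zeros form a contiguous block that two binary searches can locate):

from bisect import bisect_left, bisect_right

def count_zeros_sorted_rows(matrix):
    # In a sorted row all zeros are contiguous, so two binary searches
    # per row find the block of zeros in O(log N) time.
    total = 0
    for row in matrix:
        total += bisect_right(row, 0) - bisect_left(row, 0)
    return total

print(count_zeros_sorted_rows([[-1, 0, 0, 3], [0, 1, 2, 4]]))  # 3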
Depending on the parameters of your setup, you might be able to get the algorithm to run faster if you assume that you're allowed to scan in parallel. For example, if your machine has K processors on it that can be dedicated to the task of scanning the matrix, then you could split the matrix into K roughly evenly-sized groups, have each processor count the number of zeros in its group, then sum the results of these computations up. This ends up giving you a runtime of Θ(N^2 / K), since the runtime is split across multiple cores.
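A minimal sketch of that parallel split, using Python's concurrent.futures (the helper names and the chunking scheme are illustrative assumptions, not part of the original answer):

from concurrent.futures import ProcessPoolExecutor

def count_zeros_chunk(rows):
    # Count the zeros in one horizontal slice of the matrix.
    return sum(row.count(0) for row in rows)

def count_zeros_parallel(matrix, k=4):
    # Split the rows into roughly k groups, count each group on its own
    # worker, then sum. The total work is still Theta(N^2); only the
    # wall-clock time shrinks to roughly Theta(N^2 / K).
    chunk = max(1, len(matrix) // k)
    groups = [matrix[i:i + chunk] for i in range(0, len(matrix), chunk)]
    with ProcessPoolExecutor(max_workers=k) as pool:
        return sum(pool.map(count_zeros_chunk, groups))

if __name__ == "__main__":
    m = [[3, 4, 5, 6], [7, 8, 0, 9], [10, 11, 12, 3], [4, 0, 9, 10]]
    print(count_zeros_parallel(m, k=2))  # 2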

Always O(n^2) - or rather O(n * m). You cannot avoid it.
But if you know that the matrix is sparse (only a few elements have nonzero values), you can store only the nonzero values together with the matrix dimensions. Then consider using hashing instead of storing the whole matrix - in general, create a hash which maps each row number to a nested hash of column number to value.
Example:
m =
[
0 0 0 0
0 2 0 0
0 0 1 0
0 0 1 0
]
Will be represented as:
row_numbers = 4
column_numbers = 4
hash = { 1 => { 1 => 2 }, 2 => { 2 => 1 }, 3 => { 2 => 1 } }
Then:
number_of_zeros = row_numbers * column_numbers - number_of_cells_in_hash(hash)
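A small Python sketch of that counting, using a dict of dicts for the sparse storage (0-indexed rows and columns, as an assumption):

def count_zeros_sparse(nonzero, n_rows, n_cols):
    # `nonzero` maps row -> {column: value} and holds only the nonzero cells,
    # so the zero count is simply total cells minus stored cells.
    stored = sum(len(cols) for cols in nonzero.values())
    return n_rows * n_cols - stored

nonzero = {1: {1: 2}, 2: {2: 1}, 3: {2: 1}}  # the example matrix above
print(count_zeros_sparse(nonzero, 4, 4))  # 13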

For any unsorted matrix it should be O(n), since we generally use 'n' for the total number of elements.
If the matrix contains X rows and Y columns, then X * Y = n.
E.g. a 4 x 4 unsorted matrix has 16 elements in total, so when we iterate over it with two nested loops we do 4 x 4 = 16 steps; that is O(n) because the total number of elements in the array is 16.
Many people voted for O(n^2) because they considered the matrix to be n x n.
Please correct me if my understanding is wrong.

Assuming that when you say "in a row of a matrix", you mean that you have the row index i and you want to count the number of zeros in the i-th row, you can do better than O(N^2).
Suppose N is the number of rows and M is the number of columns. Store your matrix as a single array [3,4,5,6,7,8,0,9,10,11,12,3,4,0,9,10]; then row i starts at index M*i.
Since arrays have constant-time access, this part doesn't depend on the size of the matrix. You can then iterate over the whole row by visiting the elements M*i + j for j from 0 to M-1, which is O(M), provided you know which row you want to visit and you are using an array.
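For instance, a minimal sketch of counting the zeros in row i of that flat, row-major array (the helper name is illustrative):

def count_zeros_in_row(flat, n_cols, i):
    # Row i occupies the slice [i * n_cols, (i + 1) * n_cols) of the flat
    # array, so counting its zeros touches only M = n_cols elements.
    return sum(1 for j in range(n_cols) if flat[i * n_cols + j] == 0)

flat = [3, 4, 5, 6, 7, 8, 0, 9, 10, 11, 12, 3, 4, 0, 9, 10]
print(count_zeros_in_row(flat, 4, 1))  # 1 (the zero in the second row)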

This is not a perfect answer for the reasons I'll explain, but it offers an alternative solution potentially faster than the one you described:
Since you don't need to know the position of the zeros in the matrix, you can flatten it into a 1D array.
After that, perform a quicksort on the elements; this typically gives O(n log n) performance, depending on how the values in the matrix are arranged.
Finally, count the zero elements at the beginning of the array until you reach a non-zero number.
In some cases this will be faster than checking every element, although in a worst-case scenario the quicksort will take O(n^2), which in addition to the zero counting at the end may be worse than iterating over each row and column.
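A short sketch of this approach (assuming non-negative entries, so that the zeros sort to the front):

def count_zeros_by_sorting(matrix):
    # Flatten, sort, then count leading zeros. Only valid if no entry is
    # negative; otherwise the zeros would not sit at the front.
    flat = sorted(x for row in matrix for x in row)  # O(n log n) expected
    count = 0
    for x in flat:
        if x != 0:
            break
        count += 1
    return count

m = [[3, 4, 5, 6], [7, 8, 0, 9], [10, 11, 12, 3], [4, 0, 9, 10]]
print(count_zeros_by_sorting(m))  # 2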

Assuming the given matrix is M, do an M + (-M) operation, but don't use the default +; use instead my_add(int a, int b) such that
int my_add(int a, int b) {
    return (a == 0 && b == 0) ? 1 : (a + b);
}
That will give you a matrix like
0 0 0 0
0 0 1 0
0 0 0 0
0 1 0 0
Now you create s := 0 and keep adding all elements to it: s += a[i][j].
You can even do both in one pass: s += my_add(a[i][j], (-1)*a[i][j])
But it is still O(m*n).
NOTE
To count the number of 1's you generally have to check all items in the matrix. Without operating on all elements I don't think you can tell the number of 1's, and looping over all elements is O(m*n). It can be faster than O(m*n) if and only if you can leave some elements unchecked and still report the number of 1's.
EDIT
However, if you move a 2x2 kernel over the matrix and hop by the kernel size, you will get (m*n)/k iterations, e.g. if you operate on the neighbouring elements a[i][j], a[i+1][j], a[i][j+1], a[i+1][j+1] while i < m and j < n.

Related

How to find 2 special elements in the array in O(n)

Let a1,...,an be a sequence of real numbers. Let m be the minimum of the sequence, and let M be the maximum of the sequence.
I proved that there exist 2 elements in the sequence, x, y, such that |x-y| <= (M-m)/n.
Now, is there a way to find an algorithm that finds such 2 elements in time complexity of O(n)?
I thought about sorting the sequence, but since I don't know anything about M I cannot use radix/bucket sort or any other linear-time algorithm that I'm familiar with.
I'd appreciate any idea.
Thanks in advance.
1. First find out n, M, m. If not already given, they can be determined in O(n).
2. Then create a memory storage of n+1 elements; we will use the storage for n+1 buckets with width w = (M-m)/n. The buckets cover the range of values equally: bucket 1 goes from [m; m+w[, bucket 2 from [m+w; m+2*w[, ..., bucket n from [m+(n-1)*w; m+n*w[ = [M-w; M[, and the (n+1)th bucket from [M; M+w[.
3. Now we go once through all the values and sort them into the buckets according to the assigned intervals. There should be at most 1 element per bucket. If a bucket is already filled, it means that two elements are closer together than the width of the half-open interval, i.e. we have found elements x, y with |x-y| < w = (M-m)/n.
4. If no such two elements are found, then afterwards n of the n+1 buckets are filled with one element each, and all those elements are sorted. We go once more through all the buckets and compare only the contents of neighbouring buckets, to check whether there are two elements which fulfil the condition. Due to the width of the buckets, the condition cannot be true for buckets that are not adjoining: for those the distance is always |x-y| > w.
(The fulfilment of the last inequality in step 4 is also the reason why the intervals are half-open and cannot be closed, and why we need n+1 buckets instead of n. An alternative would be to use n buckets and make the last bucket a special case with [M; M+w]. But O(n+1) = O(n), and using n+1 buckets is preferable to special-casing the last bucket.)
The running time is O(n) for step 1, 0 for step 2 - we actually do not do anything there, O(n) for step 3 and O(n) for step 4, as there is only 1 element per bucket. Altogether O(n).
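A minimal Python sketch of these steps (hypothetical names; it returns a pair within w when one exists, and None otherwise, which per the discussion below can only happen if the assumed bound fails to hold):

def close_pair(values):
    n = len(values)
    m, M = min(values), max(values)
    if m == M:
        return values[0], values[1]  # all values equal: any pair works
    w = (M - m) / n
    buckets = [None] * (n + 1)  # n+1 half-open buckets of width w
    for x in values:
        k = min(int((x - m) / w), n)  # bucket index, guarded against rounding
        if buckets[k] is not None:  # collision: two values closer than w
            return buckets[k], x
        buckets[k] = x
    # no collision: only neighbouring occupied buckets can hold a close pair
    prev = None
    for x in buckets:
        if x is None:
            continue
        if prev is not None and abs(x - prev) <= w:
            return prev, x
        prev = x
    return None

print(close_pair([0.0, 0.4, 1.1, 2.0, 3.0]))  # (0.0, 0.4)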
This task shows that sorting elements which are not close together, or coarse sorting without considering fine distances, can be done in O(n) instead of O(n*log(n)). It has useful applications. Numbers on computers are discrete, they have a finite precision. I have successfully used this sorting method for signal processing / fast sorting in real-time production code.
About @Damien's remark: the real threshold of (M-m)/(n-1) is provably true for every such sequence. I assumed in the answer so far that the sequence we are looking at is of a special kind where the stronger condition is true, or at least that, for all sequences where the stronger condition is true, we would find such elements in O(n).
If this was a small mistake of the OP instead (who said to have proven the stronger condition) and we should find two elements x, y with |x-y| <= (M-m)/(n-1) instead, we can simplify:
3'. We would do steps 1 to 3 as above, but with n buckets and the bucket width set to w = (M-m)/(n-1). Bucket n now goes from [M; M+w[.
For step 4 we would do the following alternative:
4'. All n buckets are filled with one element each. The element in bucket n has to be M and lies at the left boundary of its interval. The distance of this element y = M to any possible element x in the (n-1)th bucket is |M-x| <= w = (M-m)/(n-1), so we have found x and y which fulfil the condition, q.e.d.
First note that the real threshold should be (M-m)/(n-1).
The first step is to calculate the min m and max M elements, in O(N).
You calculate the mid value, mid = (m + M)/2.
You move the values less than mid to the beginning, and those greater than mid to the end of the array.
You select the part with the larger number of elements and you iterate until very few numbers are kept.
If both parts have the same number of elements, you can select either of them. If the remaining part has many more elements than n/2, then in order to maintain O(n) complexity you can keep only n/2 + 1 of them, as the goal is not to find the smallest difference, but just one difference small enough.
As indicated in a comment by @btilly, this solution could fail in some cases, for example with the input [0, 2.1, 2.9, 5]. To handle that, you need to calculate the max value of the left half and the min value of the right half, and to test whether the answer is not right_min - left_max. This doesn't change the O(n) complexity, even if the solution becomes less elegant.
Complexity of the search procedure: O(n) + O(n/2) + O(n/4) + ... + O(2) = O(2n) = O(n).
Damien is correct in his comment that the correct result is that there must be x, y such that |x-y| <= (M-m)/(n-1). If you have the sequence [0, 1, 2, 3, 4] you have 5 elements, but no two elements are closer than (M-m)/n = (4-0)/5 = 4/5.
With the right threshold, the solution is easy - find M and m by scanning through the input once, and then bucket the input into (n-1) buckets of size (M-m)/(n-1), putting values that are on the boundaries of a pair of buckets into both buckets. At least one bucket must have two values in it by the pigeon-hole principle.

Difficulties formulating a square matrix algorithm in O(n) time

I am looking back on some past papers, for my exam, and I have come across a square matrix algorithm question/analysis that I cannot for the life of me approach.
Basically, I am given a N by N matrix (a square matrix basically), and I need to implement a data structure that allows me to increase the size of the matrix by 1 (row + 1, column + 1) in O(n) time.
After coercing my tutor, I realise the best data structure would be an array of arrays, so essentially something like this: [ {1,2,3},{4,5,6},{7,8,9} ]. This would denote my matrix: row 1, row 2, row 3.
Now I need to be able to expand this matrix by 1 when the increase_size() method is called. I have already tried a naive solution, that is, create a new empty array of size 4 (since our previous matrix has 3 rows), append this array to our matrix_array, and then add a 0 to all the remaining arrays; however this takes O(n^2) time.
I believe there is something here that is related to the rows and columns, when we increase our matrix size we are essentially creating a new row and column, I believe this has something to do with the solution.
I have attached the question below.
Try an array of arrays:
M = [ A1, A2, ..., An ]
each array Ax contains the values a_{i,j} if max(i,j) == x.
I'll let you do the proofs.
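A rough sketch of how that layered layout might look in practice (the class and index layout are my own assumptions, not part of the answer): layer x stores every cell (i, j) with max(i, j) == x, so growing from n x n to (n+1) x (n+1) appends a single layer of 2n + 1 cells, which is O(n).

class GrowableMatrix:
    def __init__(self):
        self.layers = []  # layers[x] holds the 2x + 1 cells with max(i, j) == x

    def increase_size(self, fill=0):
        n = len(self.layers)
        self.layers.append([fill] * (2 * n + 1))  # O(n) work

    def _locate(self, i, j):
        x = max(i, j)
        # first x + 1 slots: column x (rows 0..x); remaining x slots: row x (cols 0..x-1)
        return (x, i) if j == x else (x, x + 1 + j)

    def get(self, i, j):
        x, k = self._locate(i, j)
        return self.layers[x][k]

    def set(self, i, j, value):
        x, k = self._locate(i, j)
        self.layers[x][k] = value

m = GrowableMatrix()
for _ in range(3):
    m.increase_size()
m.set(2, 0, 7)
print(m.get(2, 0), m.get(0, 0))  # 7 0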
The naive solution actually works in O(n):
consider any matrix of size n * n implemented as an array of arrays (where the arrays are dynamic);
to reach size n + 1, we first create a new array of size n + 1 and append it to the end of our matrix; this takes O(n) time;
then for every row in our matrix we append a 0 to the end; we have n + 1 rows and append 1 element to each, so this too takes O(n) (amortized) time;
so in total the runtime is O(n).
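As a tiny sketch of that naive grow (assuming Python lists as the dynamic arrays):

def increase_size(matrix, fill=0):
    # Append one cell to each existing row (n amortized O(1) appends),
    # then append one new row of n + 1 cells: O(n) in total.
    n = len(matrix)
    for row in matrix:
        row.append(fill)
    matrix.append([fill] * (n + 1))

m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
increase_size(m)
print(m)  # now a 4 x 4 matrix, new row and column filled with 0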

Can this be properly modeled with segment trees?

The problem I'm working on requires processing several queries on an array (the size of the array is less than 10k, the largest element is certainly less than 10^9).
A query consists of two integers, and one must find the total count of subarrays that have an equal count of these integers. There may be up to 5 * 10^5 queries.
For instance, given the array [1, 2, 1], and the query 1 2 we find that there are two subarrays with equal counts of 1 and 2, namely [1, 2] and [2, 1].
My initial approach was using dynamic programming in order to construct a map, such that memo[i][j] = the number of times the number i appears in the array, until index j. I would use this in a similar way one would use prefix sums, but instead frequencies would accumulate.
Constructing this map took me O(n^2). For each query, I'd do O(1) processing for each interval and increment the answer. This leads to a complexity of O((q + 1) * n * (n - 1) / 2) [q is the number of queries], which is to say O(n^2), but I also wanted to emphasize that daunting constant factor.
After some rearrangement, I'm trying to find out if there's a way to determine for every subarray the frequency count of each element. I strongly feel this problem is about segment trees and I've struggled with coming up with a proper model and this was the only thing I could think of.
However my approach doesn't seem to be too useful in this case, considering the complexity of combining nodes holding such a great amount of information, not to mention the memory overhead.
How can this be solved efficiently?
Idea 1
You can reduce the time for each query from O(n^2) to O(n) by computing the frequency count of the cumulative count difference:
from collections import defaultdict

def query(A, a, b):
    t = 0
    freq = defaultdict(int)
    freq[0] = 1
    for x in A:
        if x == a:
            t += 1
        elif x == b:
            t -= 1
        freq[t] += 1
    return sum(count * (count - 1) // 2 for count in freq.values())

print(query([1, 2, 1], 1, 2))
The idea is that t represents the total discrepancy between the count of the two elements.
If we find two positions in the array with the same total discrepancy we can conclude that the subarray between these positions must have an equal number.
The expression count*(count-1)/2 simply counts the number of ways of choosing two positions from the count which have the same discrepancy.
Example
For example, suppose we have the array [1,1,1,2,2,2]. The values for the cumulative discrepancy (number of 1's take away number of 2's) will be:
0,1,2,3,2,1,0
Each pair with the same number, corresponds to a subarray with equal count. e.g. looking at the pair of 2s we find that the range from position 2 to position 4 has equal count.
Idea 2
If this is still not fast enough, you could optimize the query function to quickly skip over all elements that are not equal to a or b. For example, you could prepare a list for each element value that contains all the locations of that element.
Once you have this list, you can then instantly jump to the next location of either a or b. For all intermediate values we know the discrepancy will not change, so you can update the frequency by the number of skipped elements (instead of always adding just 1 to the count).
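A sketch of this skipping idea (the helper names and the precomputed positions dict are my own; it assumes a != b): instead of walking the whole array, we walk only the occurrences of a and b, crediting every run of skipped prefixes to the frequency table in one step.

from collections import defaultdict

def query_fast(positions, n, a, b):
    # positions maps each value to the sorted list of its indices,
    # precomputed once in O(n) for the whole array.
    events = sorted([(i, +1) for i in positions.get(a, [])] +
                    [(i, -1) for i in positions.get(b, [])])
    freq = defaultdict(int)
    t = 0      # cumulative discrepancy (#a - #b)
    done = 0   # number of prefixes already credited
    for idx, delta in events:
        freq[t] += idx + 1 - done  # prefixes of length done..idx all share discrepancy t
        done = idx + 1
        t += delta
    freq[t] += n + 1 - done        # remaining prefixes, up to length n
    return sum(c * (c - 1) // 2 for c in freq.values())

A = [1, 2, 1]
positions = defaultdict(list)
for i, x in enumerate(A):
    positions[x].append(i)
print(query_fast(positions, len(A), 1, 2))  # 2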

algorithm - How to sort a 0/1 array with 2n/3 comparisons?

In the Algorithm Design Manual, there is the following exercise:
4-26 Consider the problem of sorting a sequence of n 0’s and 1’s using
comparisons. For each comparison of two values x and y, the algorithm
learns which of x < y, x = y, or x > y holds.
(a) Give an algorithm to sort in n − 1 comparisons in the worst case.
Show that your algorithm is optimal.
(b) Give an algorithm to sort in 2n/3 comparisons in the average case
(assuming each of the n inputs is 0 or 1 with equal probability). Show
that your algorithm is optimal.
For (a), I think it is fairly easy. I can choose a[n-1] as the pivot, then do something like the partition step in quicksort: scan 0 to n - 2 and find the point where the left side is all 0's and the right side is all 1's; this takes n - 1 comparisons.
But for (b), I can't get a clue. It says "each of the n inputs is 0 or 1 with equal probability", so I guess I can assume the numbers of 0's and 1's are equal? But how can I get a result related to 1/3? Divide the whole array into 3 groups?
Thanks
"0 or 1 with equal probability" is the condition for "average" case. Other cases may have worse timing.
Hint 1: 2/3 = 1/2 + 1/8 + 1/32 + 1/128 + ...
Hint 2: Consider the sequence as a sequence of pairs and compare the items in each pair. Half will return equal; half will not. Of the half that are unequal you know which item in the pair is 0 and which is 1, so those need no more comparisons.
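A rough Python sketch following these hints (my own construction, just to make the counting concrete): compare items in pairs; unequal pairs are fully determined by one comparison, equal pairs collapse into a single unresolved item for the next round. The expected comparison count is n/2 + n/8 + n/32 + ... = 2n/3.

import random

def comparisons_to_sort(bits):
    # Returns the number of comparisons used to determine every value
    # (after which sorting is trivial: output the zeros, then the ones).
    comparisons = 0
    items = list(bits)
    while len(items) > 1:
        survivors = []
        for i in range(0, len(items) - 1, 2):
            comparisons += 1                # compare the pair
            if items[i] == items[i + 1]:
                survivors.append(items[i])  # value still unknown, keep one copy
            # else: this comparison revealed which item is 0 and which is 1
        if len(items) % 2 == 1:
            survivors.append(items[-1])     # odd item carried to the next round
        items = survivors
    if len(items) == 1:
        comparisons += 1  # resolve the last unknown against an already-known value
    return comparisons

n = 100000
runs = [comparisons_to_sort([random.randint(0, 1) for _ in range(n)]) for _ in range(5)]
print(sum(runs) / len(runs) / n)  # close to 2/3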
No, it means that at any position, you have the same chance (probability) of the input value being 0 or 1. This gives you a first clue: your algorithm will be randomized.
The runtime will depend on some random variable, and you need to take the expected value to obtain the average-case complexity. Note that in this case you have to be precise in the complexity analysis, as the exercise requires an exact constant (2n/3 rather than simply O(n)).
Edit:
Hint: in the sorted array (the one you get at the end), what is the only thing that varies, given that you have only 2 possible element values?

Greatest GCD between some numbers

We've got some nonnegative numbers. We want to find the pair with maximum gcd. Actually this maximum is more important than the pair!
For example if we have:
2 4 5 15
gcd(2,4)=2
gcd(2,5)=1
gcd(2,15)=1
gcd(4,5)=1
gcd(4,15)=1
gcd(5,15)=5
The answer is 5.
You can use the Euclidean Algorithm to find the GCD of two numbers.
int gcd(int a, int b)
{
    while (b != 0)
    {
        int m = a % b;
        a = b;
        b = m;
    }
    return a;
}
If you want an alternative to the obvious algorithm, then assuming your numbers are in a bounded range, and you have plenty of memory, you can beat O(N^2) time, N being the number of values:
Create an array of a small integer type, indexes 1 to the max input. O(1)
For each value, increment the count at every index which is a factor of that value (make sure you don't wrap around). O(N).
Starting at the end of the array, scan back until you find a value >= 2. O(1)
That tells you the max gcd, but doesn't tell you which pair produced it. For your example input, the computed array looks like this:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
4 2 1 1 2 0 0 0 0 0 0 0 0 0 1
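A small Python sketch of this divisor-counting idea (names are illustrative; it enumerates each value's divisors in O(sqrt(value))):

def max_gcd(values):
    top = max(values)
    counts = [0] * (top + 1)
    for v in values:
        d = 1
        while d * d <= v:            # bump the counter at every divisor of v
            if v % d == 0:
                counts[d] += 1
                if d != v // d:
                    counts[v // d] += 1
            d += 1
    for g in range(top, 0, -1):      # scan back for a divisor shared by two inputs
        if counts[g] >= 2:
            return g
    return None                      # fewer than two values

print(max_gcd([2, 4, 5, 15]))  # 5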
I don't know whether this is actually any faster for the inputs you have to handle. The constant factors involved are large: the bound on your values and the time to factorise a value within that bound.
You don't have to factorise each value - you could use memoisation and/or a pregenerated list of primes. Which gives me the idea that if you are memoising the factorisation, you don't need the array:
Create an empty set of int, and a best-so-far value 1.
For each input integer:
if it's less than or equal to best-so-far, continue.
check whether it's in the set. If so, best-so-far = max(best-so-far, this-value), continue. If not:
add it to the set
repeat for all of its factors (larger than best-so-far).
Add/lookup in a set could be O(log N), although it depends what data structure you use. Each value has O(f(k)) factors, where k is the max value and I can't remember what the function f is...
The reason that you're finished with a value as soon as you encounter it in the set is that you've found a number which is a common factor of two input values. If you keep factorising, you'll only find smaller such numbers, which are not interesting.
I'm not quite sure what the best way is to repeat for the larger factors. I think in practice you might have to strike a balance: you don't want to do them quite in decreasing order because it's awkward to generate ordered factors, but you also don't want to actually find all the factors.
Even in the realms of O(N^2), you might be able to beat the use of the Euclidean algorithm:
Fully factorise each number, storing it as a sequence of exponents of primes (so for example 2 is {1}, 4 is {2}, 5 is {0, 0, 1}, 15 is {0, 1, 1}). Then you can calculate gcd(a,b) by taking the min value at each index and multiplying them back out. No idea whether this is faster than Euclid on average, but it might be. Obviously it uses a load more memory.
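For illustration, a sketch of that factorisation-based gcd (using a prime -> exponent map rather than a plain exponent list; trial division is just a placeholder factoriser):

from collections import Counter

def factorize(n):
    # trial division into a {prime: exponent} map; fine for small n
    f = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def gcd_from_factors(fa, fb):
    # gcd = product of each shared prime raised to the minimum exponent
    g = 1
    for p in fa.keys() & fb.keys():
        g *= p ** min(fa[p], fb[p])
    return g

print(gcd_from_factors(factorize(5), factorize(15)))  # 5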
The optimisations I can think of are:
1) start with the two biggest numbers since they are likely to have most prime factors and thus likely to have the most shared prime factors (and thus the highest GCD).
2) When calculating the GCDs of other pairs you can stop your Euclidean algorithm loop if you get below your current greatest GCD.
Off the top of my head I can't think of a way that you can work out the greatest GCD of a pair without trying to work out each pair individually (and optimise a bit as above).
Disclaimer: I've never looked at this problem before and the above is off the top of my head. There may be better ways and I may be wrong. I'm happy to discuss my thoughts in more length if anybody wants. :)
There is no O(n log n) solution to this problem in general. In fact, the worst case is O(n^2) in the number of items in the list. Consider the following set of numbers:
2^20 3^13 5^9 7^2*11^4 7^4*11^3
Only the GCD of the last two is greater than 1, but the only way to know that from looking at the GCDs is to try out every pair and notice that one of them is greater than 1.
So you're stuck with the boring brute-force try-every-pair approach, perhaps with a couple of clever optimizations to avoid doing needless work when you've already found a large GCD (while making sure that you don't miss anything).
With some constraints, e.g. the numbers in the array being within a given range, say 1 to 1e7, it is doable in O(N log N) / O(MAX * log MAX), where MAX is the maximum possible value in A.
Inspired by the sieve algorithm; I came across it in a Hackerrank challenge -- there it is done for two arrays. Check their editorial.
find min(A) and max(A) - O(N)
create a binary mask to mark which elements of A appear in the given range, for O(1) lookup; O(N) to build; O(MAX_RANGE) storage.
for every number a in the range (min(A), max(A)):
for aa = a; aa <= max(A); aa += a:
if aa is in A, increment a counter for a, and compare it to the current max_gcd once the counter reaches 2 (i.e. you have two numbers divisible by a);
store the top two candidates for each GCD candidate.
you could also ignore elements which are less than the current max_gcd; see the sketch below.
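A minimal Python sketch of this sieve-style counting (my own arrangement of the steps above; assumes positive, bounded values):

def max_gcd_sieve(A):
    top = max(A)
    present = [0] * (top + 1)            # occurrence count of each value
    for x in A:
        present[x] += 1
    for a in range(top, 0, -1):          # try candidate divisors from the top down
        count = 0
        for aa in range(a, top + 1, a):  # walk the multiples of a
            count += present[aa]
            if count >= 2:               # two elements divisible by a
                return a
    return None                          # fewer than two elements

print(max_gcd_sieve([2, 4, 5, 15]))  # 5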
Previous answer:
Still O(N^2) -- sort the array; should eliminate some of the unnecessary comparisons;
max_gcd = 1
# assuming you want pairs of distinct elements.
sort(a)  # assume in place
for ii = n - 1 : -1 : 0 do
    if a[ii] <= max_gcd
        break
    for jj = ii - 1 : -1 : 0 do
        if a[jj] <= max_gcd
            break
        current_gcd = GCD(a[ii], a[jj])
        if current_gcd > max_gcd:
            max_gcd = current_gcd
This should save some unnecessary computation.
There is a solution that would take O(n):
Let our numbers be a_i. First, calculate m=a_0*a_1*a_2*.... For each number a_i, calculate gcd(m/a_i, a_i). The number you are looking for is the maximum of these values.
I haven't proved that this is always true, but in your example, it works:
m=2*4*5*15=600,
max(gcd(m/2,2), gcd(m/4,4), gcd(m/5,5), gcd(m/15,15))=max(2, 2, 5, 5)=5
NOTE: This is not correct. If the number a_i has a factor p_j repeated twice, and if two other numbers also contain this factor p_j, then you get the incorrect result p_j^2 instead of p_j. For example, for the set 3, 5, 15, 25, you get 25 as the answer instead of 5.
However, you can still use this to quickly filter out numbers. For example, in the above case, once you determine the 25, you can first do the exhaustive search for a_3=25 with gcd(a_3, a_i) to find the real maximum, 5, then filter out gcd(m/a_i, a_i), i!=3 which are less than or equal to 5 (in the example above, this filters out all others).
Added for clarification and justification:
To see why this should work, note that gcd(a_i, a_j) divides gcd(m/a_i, a_i) for all j!=i.
Let's call gcd(m/a_i, a_i) as g_i, and max(gcd(a_i, a_j),j=1..n, j!=i) as r_i. What I say above is g_i=x_i*r_i, and x_i is an integer. It is obvious that r_i <= g_i, so in n gcd operations, we get an upper bound for r_i for all i.
The above claim is not very obvious. Let's examine it a bit deeper to see why it is true: the gcd of a_i and a_j is the product of all prime factors that appear in both a_i and a_j (by definition). Now, multiply a_j with another number, b. The gcd of a_i and b*a_j is either equal to gcd(a_i, a_j), or is a multiple of it, because b*a_j contains all prime factors of a_j, and some more prime factors contributed by b, which may also be included in the factorization of a_i. In fact, gcd(a_i, b*a_j)=gcd(a_i/gcd(a_i, a_j), b)*gcd(a_i, a_j), I think. But I can't see a way to make use of this. :)
Anyhow, in our construction, m/a_i is simply a shortcut to calculate the product of all a_j, where j=1..n, j!=i. As a result, gcd(m/a_i, a_i) contains all gcd(a_i, a_j) as a factor. So, obviously, the maximum of these individual gcd results will divide g_i.
Now, the largest g_i is of particular interest to us: it is either the maximum gcd itself (if x_i is 1), or a good candidate for being one. To do that, we do another n-1 gcd operations, and calculate r_i explicitly. Then, we drop all g_j less than or equal to r_i as candidates. If we don't have any other candidate left, we are done. If not, we pick up the next largest g_k, and calculate r_k. If r_k <= r_i, we drop g_k, and repeat with another g_k'. If r_k > r_i, we filter out remaining g_j <= r_k, and repeat.
I think it is possible to construct a number set that will make this algorithm run in O(n^2) (if we fail to filter out anything), but on random number sets, I think it will quickly get rid of large chunks of candidates.
pseudocode
function getGcdMax(array[])
    arrayUB = upperbound(array)
    if (arrayUB < 1)
        error
    pointerA = 0
    pointerB = 1
    gcdMax = 0
    do
        gcdMax = MAX(gcdMax, gcd(array[pointerA], array[pointerB]))
        pointerB++
        if (pointerB > arrayUB)
            pointerA++
            pointerB = pointerA + 1
    until (pointerB > arrayUB)
    return gcdMax
