Greatest GCD between some numbers - algorithm

We've got some nonnegative numbers. We want to find the pair with the maximum GCD; actually, the maximum itself matters more than the pair!
For example if we have:
2 4 5 15
gcd(2,4)=2
gcd(2,5)=1
gcd(2,15)=1
gcd(4,5)=1
gcd(4,15)=1
gcd(5,15)=5
The answer is 5.

You can use the Euclidean algorithm to find the GCD of two numbers:
int gcd(int a, int b)
{
    while (b != 0)
    {
        int m = a % b;
        a = b;
        b = m;
    }
    return a;
}

If you want an alternative to the obvious algorithm, then assuming your numbers are in a bounded range, and you have plenty of memory, you can beat O(N^2) time, N being the number of values:
Create an array of a small integer type, indexes 1 to the max input. O(1)
For each value, increment the count at every index that is a factor of that value (make sure you don't wrap around). O(N) values to process.
Starting at the end of the array, scan back until you find a value >= 2. O(1)
That tells you the max gcd, but doesn't tell you which pair produced it. For your example input, the computed array looks like this:
index:  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
count:  4  2  1  1  2  0  0  0  0  0  0  0  0  0  1
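Here is a rough sketch of that counting approach in Python (an assumption on my part: divisors are enumerated by plain trial division up to the square root; memoisation or a pregenerated prime list, as mentioned below, would do better):

def max_gcd_by_counting(values):
    # counts[d] = how many input values are divisible by d
    limit = max(values)
    counts = [0] * (limit + 1)
    for v in values:
        # enumerate the divisors of v by trial division up to sqrt(v)
        d = 1
        while d * d <= v:
            if v % d == 0:
                counts[d] += 1
                if d != v // d:
                    counts[v // d] += 1
            d += 1
    # scan back from the largest possible divisor
    for d in range(limit, 0, -1):
        if counts[d] >= 2:
            return d
    return None   # fewer than two values in the input

print(max_gcd_by_counting([2, 4, 5, 15]))   # 5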
I don't know whether this is actually any faster for the inputs you have to handle. The constant factors involved are large: the bound on your values and the time to factorise a value within that bound.
You don't have to factorise each value - you could use memoisation and/or a pregenerated list of primes. Which gives me the idea that if you are memoising the factorisation, you don't need the array:
Create an empty set of int, and a best-so-far value 1.
For each input integer:
if it's less than or equal to best-so-far, continue.
check whether it's in the set. If so, best-so-far = max(best-so-far, this-value), continue. If not:
add it to the set
repeat for all of its factors (larger than best-so-far).
Add/lookup in a set could be O(log N), although it depends what data structure you use. Each value has O(f(k)) factors, where k is the max value and I can't remember what the function f is...
The reason that you're finished with a value as soon as you encounter it in the set is that you've found a number which is a common factor of two input values. If you keep factorising, you'll only find smaller such numbers, which are not interesting.
I'm not quite sure what the best way is to repeat for the larger factors. I think in practice you might have to strike a balance: you don't want to do them quite in decreasing order because it's awkward to generate ordered factors, but you also don't want to actually find all the factors.
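For what it's worth, here is one way the set-based idea could look in Python (trial division again, recursing only into divisors that are still larger than best-so-far, in no particular order):

def max_gcd_with_divisor_set(values):
    seen = set()   # divisors (above best) of every value processed so far
    best = 1

    def visit(d):
        nonlocal best
        if d <= best:
            return
        if d in seen:
            # d divides two different input values, so it is a common divisor
            best = max(best, d)
            return
        seen.add(d)
        # recurse into the proper divisors of d
        k = 2
        while k * k <= d:
            if d % k == 0:
                visit(d // k)
                visit(k)
            k += 1

    for v in values:
        visit(v)
    return best

print(max_gcd_with_divisor_set([2, 4, 5, 15]))   # 5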
Even in the realms of O(N^2), you might be able to beat the use of the Euclidean algorithm:
Fully factorise each number, storing it as a sequence of exponents of primes (so for example 2 is {1}, 4 is {2}, 5 is {0, 0, 1}, 15 is {0, 1, 1}). Then you can calculate gcd(a,b) by taking the min value at each index and multiplying them back out. No idea whether this is faster than Euclid on average, but it might be. Obviously it uses a load more memory.
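A sketch of what that could look like (here the exponent vectors are stored as prime -> exponent maps rather than flat lists, which is just a convenience):

from collections import Counter

def factorise(n):
    # trial-division factorisation into a {prime: exponent} map
    f = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def gcd_from_factors(fa, fb):
    # take the minimum exponent of each shared prime and multiply back out
    g = 1
    for p in fa.keys() & fb.keys():
        g *= p ** min(fa[p], fb[p])
    return g

print(gcd_from_factors(factorise(5), factorise(15)))   # 5

Whether the per-pair work beats Euclid depends on how large the numbers are; at least the factorisation is paid once per number rather than once per pair.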

The optimisations I can think of are:
1) Start with the two biggest numbers, since they are likely to have the most prime factors and thus likely to have the most shared prime factors (and thus the highest GCD).
2) When calculating the GCDs of other pairs, you can stop your Euclidean algorithm loop once the smaller value in the loop drops to or below your current greatest GCD (the GCD can never exceed the smaller of the two values).
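For what they're worth, here is a sketch with both optimisations in one place (sorting descending covers 1), the early exit inside Euclid covers 2)):

def gcd_at_least(a, b, bound):
    # Euclid, but bail out once the result clearly cannot exceed 'bound'
    while b != 0:
        if b <= bound:
            return 0          # cannot beat the current best
        a, b = b, a % b
    return a

def max_pairwise_gcd(values):
    vs = sorted(values, reverse=True)    # optimisation 1: biggest numbers first
    best = 0
    for i in range(len(vs)):
        if vs[i] <= best:                # no later pair can beat best either
            break
        for j in range(i + 1, len(vs)):
            if vs[j] <= best:
                break
            best = max(best, gcd_at_least(vs[i], vs[j], best))
    return best

print(max_pairwise_gcd([2, 4, 5, 15]))   # 5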
Off the top of my head I can't think of a way that you can work out the greatest GCD of a pair without trying to work out each pair individually (and optimise a bit as above).
Disclaimer: I've never looked at this problem before and the above is off the top of my head. There may be better ways and I may be wrong. I'm happy to discuss my thoughts in more length if anybody wants. :)

There is no O(n log n) solution to this problem in general. In fact, the worst case is O(n^2) in the number of items in the list. Consider the following set of numbers:
2^20 3^13 5^9 7^2*11^4 7^4*11^3
Only the GCD of the last two is greater than 1, but the only way to know that from looking at the GCDs is to try out every pair and notice that one of them is greater than 1.
So you're stuck with the boring brute-force try-every-pair approach, perhaps with a couple of clever optimizations to avoid doing needless work when you've already found a large GCD (while making sure that you don't miss anything).

With some constraints, e.g. the numbers in the array are within a given range, say 1..1e7, it is doable in O(N log N) / O(MAX * log MAX), where MAX is the maximum possible value in A.
Inspired by the sieve algorithm; I came across it in a HackerRank challenge -- there it is done for two arrays. Check their editorial.
find min(A) and max(A) - O(N)
create a binary mask, to mark which elements of A appear in the given range, for O(1) lookup; O(N) to build; O(MAX_RANGE) storage.
for every candidate divisor a from 1 up to max(A):
  for aa = a; aa <= max(A); aa += a:
    if aa is in A, increment a counter for a; once the counter reaches 2 (i.e. two array elements are divisible by a), compare a to the current max_gcd;
store top two candidates for each GCD candidate.
could also ignore elements which are less than current max_gcd;
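A sketch of that approach (a slight variant: candidate divisors are tried from large to small so the scan can stop at the first divisor shared by two elements, and occurrence counts are used instead of a pure binary mask so duplicate values are handled too):

def max_gcd_sieve(values):
    hi = max(values)
    present = [0] * (hi + 1)
    for v in values:
        present[v] += 1
    best = 1
    for d in range(hi, 1, -1):              # candidate gcd, large to small
        if d <= best:
            break
        hits = 0
        for multiple in range(d, hi + 1, d):
            hits += present[multiple]
            if hits >= 2:                   # two values divisible by d
                best = d
                break
    return best

print(max_gcd_sieve([2, 4, 5, 15]))   # 5

The inner loops over the multiples add up to roughly MAX/1 + MAX/2 + ... + MAX/MAX steps, which is where the O(MAX * log MAX) bound comes from.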
Previous answer:
Still O(N^2) -- sort the array; should eliminate some of the unnecessary comparisons;
max_gcd = 1
# assuming you want pairs of distinct elements
sort(a)   # assume in place, ascending
for ii = n - 1 : -1 : 0 do
    if a[ii] <= max_gcd
        break
    for jj = ii - 1 : -1 : 0 do
        if a[jj] <= max_gcd
            break
        current_gcd = GCD(a[ii], a[jj])
        if current_gcd > max_gcd:
            max_gcd = current_gcd
This should save some unnecessary computation.

There is a solution that would take O(n):
Let our numbers be a_i. First, calculate m=a_0*a_1*a_2*.... For each number a_i, calculate gcd(m/a_i, a_i). The number you are looking for is the maximum of these values.
I haven't proved that this is always true, but in your example, it works:
m=2*4*5*15=600,
max(gcd(m/2,2), gcd(m/4,4), gcd(m/5,5), gcd(m/15,15))=max(2, 2, 5, 5)=5
NOTE: This is not correct. If the number a_i has a factor p_j repeated twice, and if two other numbers also contain this factor, p_j, then you get the incorrect result p_j^2 instead of p_j. For example, for the set 3, 5, 15, 25, you get 25 as the answer instead of 5.
However, you can still use this to quickly filter out numbers. For example, in the above case, once you determine the 25, you can first do the exhaustive search for a_3=25 with gcd(a_3, a_i) to find the real maximum, 5, then filter out gcd(m/a_i, a_i), i!=3 which are less than or equal to 5 (in the example above, this filters out all others).
Added for clarification and justification:
To see why this should work, note that gcd(a_i, a_j) divides gcd(m/a_i, a_i) for all j!=i.
Let's call gcd(m/a_i, a_i) as g_i, and max(gcd(a_i, a_j),j=1..n, j!=i) as r_i. What I say above is g_i=x_i*r_i, and x_i is an integer. It is obvious that r_i <= g_i, so in n gcd operations, we get an upper bound for r_i for all i.
The above claim is not very obvious. Let's examine it a bit deeper to see why it is true: the gcd of a_i and a_j is the product of all prime factors that appear in both a_i and a_j (by definition). Now, multiply a_j with another number, b. The gcd of a_i and b*a_j is either equal to gcd(a_i, a_j), or is a multiple of it, because b*a_j contains all prime factors of a_j, and some more prime factors contributed by b, which may also be included in the factorization of a_i. In fact, gcd(a_i, b*a_j)=gcd(a_i/gcd(a_i, a_j), b)*gcd(a_i, a_j), I think. But I can't see a way to make use of this. :)
Anyhow, in our construction, m/a_i is simply a shortcut to calculate the product of all a_j, where j=1..n, j!=i. As a result, gcd(m/a_i, a_i) contains all gcd(a_i, a_j) as a factor. So, obviously, the maximum of these individual gcd results will divide g_i.
Now, the largest g_i is of particular interest to us: it is either the maximum gcd itself (if x_i is 1), or a good candidate for being one. To do that, we do another n-1 gcd operations, and calculate r_i explicitly. Then, we drop all g_j less than or equal to r_i as candidates. If we don't have any other candidate left, we are done. If not, we pick up the next largest g_k, and calculate r_k. If r_k <= r_i, we drop g_k, and repeat with another g_k'. If r_k > r_i, we filter out remaining g_j <= r_k, and repeat.
I think it is possible to construct a number set that will make this algorithm run in O(n^2) (if we fail to filter out anything), but on random number sets, I think it will quickly get rid of large chunks of candidates.
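A sketch of the whole candidate-and-filter procedure (assuming positive integers; Python's big integers make the product m exact, at the price of more expensive gcds on m/a_i):

from math import gcd, prod

def max_pairwise_gcd_filtered(a):
    m = prod(a)
    # g[i] = gcd(m/a_i, a_i) is an upper bound on the best gcd a_i can reach
    g = [gcd(m // x, x) for x in a]
    best = 1
    # examine candidates in decreasing order of their upper bound
    for i in sorted(range(len(a)), key=lambda i: g[i], reverse=True):
        if g[i] <= best:
            break                     # no remaining candidate can do better
        r_i = max(gcd(a[i], a[j]) for j in range(len(a)) if j != i)
        best = max(best, r_i)
    return best

print(max_pairwise_gcd_filtered([2, 4, 5, 15]))    # 5
print(max_pairwise_gcd_filtered([3, 5, 15, 25]))   # 5, not 25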

pseudocode
function getGcdMax(array[])
    arrayUB = upperbound(array)
    if (arrayUB < 1)
        error   # need at least two elements
    pointerA = 0
    pointerB = 1
    gcdMax = 0
    do
        gcdMax = MAX(gcdMax, gcd(array[pointerA], array[pointerB]))
        pointerB++
        if (pointerB > arrayUB)
            pointerA++
            pointerB = pointerA + 1
    until (pointerB > arrayUB)
    return gcdMax

Related

What is the meaning of the average number of comparisons for an array of length n in QuickSort?

In this formula (and continuing this question), I think if we assume n equals 3 we have an array like A = {4, 5, 7},
and with the formula we get 8 for n = 3, which would mean 8 comparisons on average for an array of length 3, and that seems so weird!
What exactly happens in QuickSort that it needs 8 comparisons on average for such a short array? That many comparisons seems bad. Is that true?
I think comparing the array in 4 steps would be much faster than using QuickSort!
So your question seems to be about the formula
E[X] = sum_{i=1..n-1} sum_{j=i+1..n} p_{i,j}
in this image you uploaded in the other question.
Here E[X] means the expected value. In simpler terms: the value X will get on average if you do the experiment (sorting a random array) many many times.
p_{i,j} is the chance that item i and item j are compared to each other during the run of the algorithm. For a good algorithm, it happens that item i is compared to item k and item k to item j, which often makes it unnecessary to really compare item i and item j. Therefore, the better the algorithm, the lower this probability p_{i,j}.
In the worst case, for example when n is only 3, you can have p_{i,j} = 1. If you then calculate the formula, you get that for n=3, E[X]=3 because there are only three combinations for (i,j): (1,2), (1,3) and (2,3). This means that for n=3 always 3 comparisons are done. This is equal for all good algorithms, because you can not take advantage of item_i < item_k < item_j.
For larger n, there are many chances to take advantage of not needing to explicitly test all item_i versus all item_j. A detailed analysis is for example available in the Khan Academy article.
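If you want to convince yourself with numbers, here is a quick empirical check (a plain quicksort with the first element as pivot, which is an assumption on my part, not necessarily the variant analysed in the linked material; each non-pivot element is counted as one comparison against the pivot per partitioning step):

from itertools import permutations

def quicksort_count(a):
    if len(a) <= 1:
        return list(a), 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    left_sorted, left_count = quicksort_count(left)
    right_sorted, right_count = quicksort_count(right)
    return left_sorted + [pivot] + right_sorted, len(rest) + left_count + right_count

perms = list(permutations([4, 5, 7]))
average = sum(quicksort_count(list(p))[1] for p in perms) / len(perms)
print(average)   # about 2.67 comparisons on average for n = 3 -- nowhere near 8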

Select K unique random numbers from range with sum equal to S

I have a range
R = {0, ..., N}
and I would like to get K elements which have a sum equal to S, but the elements should be selected randomly.
So an easy brute-force method would be to determine all combinations of K numbers summing to S and picking one of the combinations at random.
I am trying to think about a recursive solution where a random number is selected and then the problem reduces to finding (K-1) random numbers with a sum equal to (S - K0), but this need not yield a solution.
Is there a better approach?
A sample would be:
R = {0,1,2,3,4,5}, S = 5, K = 2
Solutions: randomly pick one of {{1,4}; {2,3}; {0,5}}
In general, if K is big (and then N is too), and S is not too small, it is hard to predict, because there are too many combinations.
Brute force: try every combination. You are sure to find a solution if one exists, but if there are more than, say, a billion of them, it is almost impossible to list them all.
Your algorithm:
To choose at random, your algorithm is OK: take one number at random, then another, ...
But you make an assumption: that there exists a solution with the numbers you pick, and you don't know that.
So what? If statistically there exist many solutions, you could find one like that; perhaps, or perhaps not.
Some leads:
1 Use S/K
If every number is < S/K, it is impossible.
If every number is > S/K, it is impossible.
So let's assume that there are numbers < S/K, and others > S/K.
2 Keep only numbers < S; very useful if S is small.
3 Idea: if S is big, and the numbers small, there is a good chance that many combinations exist.
Idea of an algorithm:
1 take one number N1 at random
2 if N1 < S/K, take another one N2 > S/K
3 calculate N1+N2: if it is < 2*S/K, take another one N3 > S/K; otherwise take one < S/K
4 iterate at each step: if the running sum < n*S/K, take another one > S/K; otherwise take one < S/K
5 you can get better precision by replacing S/K with (S - (N1+N2+...))/(K-n)
If at some step you cannot find any suitable number, backtrack.
hope it helps
I would start with the Dirichlet distribution (https://en.wikipedia.org/wiki/Dirichlet_distribution). Using it, you could sample uniformly distributed random numbers X_i in (0..1), such that Sum_i X_i = 1.
For S <= N, it is easy to see that sampling beyond S is useless and should be rejected outright.
So, combining with acceptance/rejection, something along the lines
Divide interval [0...1] into S (or S+1 if 0 is allowed) equal bins.
Sample K numbers from Dirichlet distribution.
Map the sampled numbers to bin indices, so you have now sampled integers which are all at most S and have a sum equal to S.
If all integers are distinct, accept the sampling, otherwise reject the sampling and go to step 2
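A sketch of how that loop might look with NumPy (np.random.Generator.dirichlet does the Dirichlet draw; rounding the cumulative sums is one way to read the binning step so the integer parts always add up to exactly S):

import numpy as np

def sample_k_sum_s(n, k, s, max_tries=100000):
    rng = np.random.default_rng()
    for _ in range(max_tries):
        weights = rng.dirichlet(np.ones(k))
        cuts = np.floor(np.cumsum(weights) * s).astype(int)
        cuts[-1] = s                                  # last cut is exactly s
        parts = np.diff(np.concatenate(([0], cuts)))
        # reject repeated values or values outside the range 0..n
        if len(set(parts.tolist())) == k and parts.max() <= n:
            return sorted(int(p) for p in parts)
    return None

print(sample_k_sum_s(5, 2, 5))   # one of [0, 5], [1, 4], [2, 3]

Note that nothing here guarantees the accepted samples are uniform over all valid K-subsets; it only illustrates the acceptance/rejection skeleton.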

What is the probability that all priorities are unique for Permute-By-Sorting algorithm?

I hope someone can help me answer the following question. Thanks!
Here is a pseudo code of Permute-By-Sorting algorithm:
Permute-By-Sorting (A)
    n = A.length
    let P[1..n] be a new array
    for i = 1 to n
        P[i] = Random(1, n^3)
    sort A, using P as sort keys
In the above algorithm, the array P represents the priorities of the elements in array A. Line 4 chooses a random number between 1 and n^3.
The question is what is the probability that all priorities in P are unique? and how do I get the probability?
To reconcile the answers already given: for choice i = 0, ..., n - 1, given that no duplicates have been chosen yet, there are n^3 - i non-duplicate choices of n^3 total for the ith value. Thus the probability is the product for i = 0, ..., n - 1 of (1 - i/n^3).
sdcwc is using a union bound to lowerbound this probability by 1 - O(1/n). This estimate turns out to be basically right. The proof sketch is that (1 - i/n^3) is exp(-i/n^3 + O(i^2/n^6)), so the product is exp(-O(n^2)/n^3 + O(n^-3)), which is greater than or equal to 1 - O(n^2)/n^3 + O(n^-3) = 1 - O(1/n). I'm sure the fine folks on math.SE would be happy to do this derivation "properly" for you.
Others have given you the probability calculation, but I think you may be asking the wrong question.
I assume the reason you're asking about the probability of the priorities being unique, and the reason for choosing n^3 in the first place, is because you're hoping they will be unique, and choosing a large range relative to n seems to be a reasonable way of achieving uniqueness.
It is much easier to ensure that the values are unique. Simply populate the array of priorities with the numbers 1 .. n and then shuffle them with the Fisher-Yates algorithm (aka algorithm P from The Art of Computer Programming, volume 2, Seminumerical Algorithms, by Donald Knuth).
The sort would then be carried out with known unique priority values.
(There are also other ways of going about getting a random permutation. It is possible to generate the nth lexicographic permutation of a sequence using factoradic numbers (or, the factorial number system), and so generate the permutation for a randomly chosen value in [1 .. n!].)
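A sketch of the shuffle-based version (random.shuffle would do the same job in one call; the explicit loop is only there to show the Fisher-Yates steps):

import random

def permute_by_shuffling(a):
    # priorities are simply 1..n in a random order, unique by construction
    p = list(range(1, len(a) + 1))
    for i in range(len(p) - 1, 0, -1):
        j = random.randint(0, i)          # 0 <= j <= i
        p[i], p[j] = p[j], p[i]
    # sort A using P as the sort keys, as in Permute-By-Sorting
    return [x for _, x in sorted(zip(p, a))]

print(permute_by_shuffling(["a", "b", "c", "d"]))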
You are choosing n numbers from 1...n^3 and asking what is the probability that they are all unique.
There are (n^3) P n = (n^3)!/(n^3-n)! ways to choose the n numbers uniquely, and (n^3)^n ways to choose the n-numbers total.
So the probability of the numbers being unique is just the first equation divided by the second, which gives
(n^3)!
---------------------
(n^3 - n)! * (n^3)^n
Let A_ij be the event that the i-th and j-th elements collide. Obviously P(A_ij) = 1/n^3.
There are at most n^2 pairs, therefore the probability of at least one collision is at most n^2/n^3 = 1/n.
If you are interested in exact thing, see BlueRaja's answer, but in randomized algorithms it is usually enough to give this type of bound.
So the sort part is irrelevant
Assuming the "Random" is real random, the probability is just
n^3!
----------------
(n^3-n)!n^(3n)
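For concreteness, the exact probability can be evaluated from the product form given above and compared with the 1 - 1/n style bound:

def prob_all_unique(n):
    # exact probability that n draws from 1..n^3 are all distinct,
    # using the product form to avoid huge factorials
    p = 1.0
    for i in range(n):
        p *= 1 - i / n**3
    return p

for n in (10, 100, 1000):
    print(n, prob_all_unique(n), 1 - 1 / n)   # exact value vs. the cruder bound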

Finding even numbers in an array without using feedback

I saw this post: Finding even numbers in an array and I was thinking about how you could do it without feedback. Here's what I mean.
Given an array of length n containing at most e even numbers and a
function isEven that returns true if the input is even and false
otherwise, write a function that prints all the even numbers in the
array using the fewest number of calls to isEven.
The answer on the post was to use a binary search, which is neat since it doesn't require the array to be in order. The number of times you have to check if a number is even is e log n instead of n, because you do a binary search (log n) to find one even number each time (e times).
But that idea means that you divide the array in half, test for evenness, then decide which half to keep based on the result.
My question is whether or not you can beat n calls on a fixed testing scheme where you check all the numbers you want for evenness without knowing the outcome, and then figure out where the even numbers are after you've done all the tests based on the results. So I guess it's no-feedback or blind or some term like that.
I was thinking about this for a while and couldn't come up with anything. The binary search idea doesn't work at all with this constraint, but maybe something else does? Even getting down to n/2 calls instead of n (yes, I know they are the same big-O) would be good.
The technical term for "no-feedback or blind" is "non-adaptive". O(e log n) calls still suffice, but the algorithm is rather more involved.
Instead of testing the evenness of products, we're going to test the evenness of sums. Let E ≠ F be distinct subsets of {1, …, n}. If we have one array x_1, …, x_n with even numbers at positions E and another array y_1, …, y_n with even numbers at positions F, how many subsets J of {1, …, n} satisfy
(∑_{i in J} x_i) mod 2 ≠ (∑_{i in J} y_i) mod 2?
The answer is 2^(n-1). Let i be an index such that x_i mod 2 ≠ y_i mod 2. Let S be a subset of {1, …, i - 1, i + 1, …, n}. Either J = S is a solution or J = S ∪ {i} is a solution, but not both.
For every possible outcome E, we need to make calls that eliminate every other possible outcome F. Suppose we make 2e log n calls at random. For each pair E ≠ F, the probability that we still cannot distinguish E from F is (2^(n-1)/2^n)^(2e log n) = n^(-2e), because there are 2^n possible calls and only 2^(n-1) fail to distinguish. There are at most n^e + 1 choices of E and thus at most (n^e + 1) n^e / 2 pairs. By a union bound, the probability that there exists some indistinguishable pair is at most n^(-2e) * (n^e + 1) n^e / 2 < 1 (assuming we're looking at an interesting case where e ≥ 1 and n ≥ 2), so there exists a sequence of 2e log n calls that does the job.
Note that, while I've used randomness to show that a good sequence of calls exists, the resulting algorithm is deterministic (and, of course, non-adaptive, because we chose that sequence without knowledge of the outcomes).
You can use the Chinese Remainder Theorem to do this. I'm going to change your notation a bit.
Suppose you have N numbers of which at most E are even. Choose a sequence of distinct prime powers q_1, q_2, ..., q_k such that their product is at least N^E, i.e.
q_i = p_i^e_i
where p_i is prime and e_i > 0 is an integer and
q_1 * q_2 * ... * q_k >= N^E
Now make a bunch of 0-1 matrices. Let M_i be the q_i x N matrix where the entry in row r and column c has a 1 if c = r mod q_i and a 0 otherwise. For example, if q_i = 3^2, then row 2 has ones in columns 2, 11, 20, ..., 2 + 9j and 0 elsewhere.
Now stack these matrices vertically to get a Q x N matrix M, where Q = q_1 + q_2 + ... + q_k. The rows of M tell you which numbers to multiply together (the nonzero positions). This gives a total of Q products that you need to test for evenness. Call each row a "trial", and say that a "trial involves j" if the jth entry of that row is nonzero. The theorem you need is the following:
THEOREM: The number in position j is even if and only if all trials involving j are even.
So you do a total of Q trials and then look at the results. If you choose the prime powers intelligently, then Q should be significantly smaller than N. There are asymptotic results that show you can always get Q on the order of
(2E log N)^2 / 2log(2E log N)
This theorem is actually a corollary of the Chinese Remainder Theorem. The only place that I've seen this used is in Combinatorial Group Testing. Apparently the problem originally arose when testing soldiers coming back from WWII for syphilis.
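Here is a small sketch of one reading of that construction (plain primes instead of general prime powers, and each trial's outcome simulated as a single parity test on the product):

from math import prod

def build_tests(n, e):
    # pick primes until their product reaches n**e, then emit one trial
    # per residue class modulo each prime
    primes, product, c = [], 1, 2
    while product < n ** e:
        if all(c % p for p in primes):    # c is prime: every smaller prime is listed
            primes.append(c)
            product *= c
        c += 1
    tests = []
    for q in primes:
        for r in range(q):
            cols = [j for j in range(n) if j % q == r]
            if cols:
                tests.append(cols)        # one trial: multiply these entries together
    return tests

def find_even_positions(xs, e):
    tests = build_tests(len(xs), e)
    trial_even = [prod(xs[j] for j in cols) % 2 == 0 for cols in tests]
    # position j is reported even iff every trial involving j came out even
    return [j for j in range(len(xs))
            if all(trial_even[t] for t, cols in enumerate(tests) if j in cols)]

print(find_even_positions([3, 4, 5, 6, 7], e=2))   # -> [1, 3]

At toy sizes like this, the number of trials Q actually exceeds N; as noted above, the construction only pays off asymptotically.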
The problem you are facing is a form of group testing: a type of problem with the objective of reducing the cost of identifying certain elements of a set (up to d elements out of a set of N elements).
As you've already stated, there are two basic principles via which the testing may be carried out:
Non-adaptive Group Testing, where all the tests to be performed are decided a priori.
Adaptive Group Testing, where we perform several tests, basing each test on the outcome of previous tests. Obviously, adaptive testing has a potential to reduce the cost, compared to non-adaptive testing.
Theoretical bounds for both principles have been studied, and are available in this Wiki article, or this paper.
For adaptive testing, the upper bound is O(d*log(N)) (as already described in this answer).
For non-adaptive testing, it can be shown that the upper bound is O(d*d/log(d)*log(N)), which is obviously larger than the upper bound for adaptive testing by a factor of d/log(d).
This upper bound for non-adaptive testing comes from an algorithm which uses disjunct matrices: matrices of dimension T x N ("number of tests" x "number of elements"), where each entry is either true (if an element was included in a test) or false (if it wasn't), with the property that any subset of d columns must differ from every other column in at least a single row (test inclusion). This allows linear-time decoding (there are also "d-separable" matrices where fewer tests are needed, but the time complexity of their decoding is exponential and not computationally feasible).
Conclusion:
My question is whether or not you can beat n calls on a fixed testing scheme [...]
For such a scheme and a sufficiently large value of N, a disjunct matrix can be constructed which would have less than K * [d*d/log(d)*log(N)] rows. So, for large values of N, yes, you can beat it.
The underlying question (challenge) is kind of silly. If the binary-search answer is acceptable (where it sums subarrays and sends them to IsEven), then I can think of a way to do it with E or fewer calls to IsEven (assuming the numbers are integers, of course).
JavaScript to demonstrate
// sort the array by only the first bit of the number
A.sort(function(x,y) { return (x & 1) - (y & 1); });
// all of the evens will be at the beginning
for(var i=0; i < E && i < A.length; i++) {
if(IsEven(A[i]))
Print(A[i]);
else
break;
}
Not exactly a solution, but just few thoughts.
It is easy to see that if a solution exists for array length n that takes fewer than n tests, then for any array length m > n there is always a solution with fewer than m tests. So, if you have a solution for n = 2 or 3 or 4, then the problem is solved.
You can split the array into pairs of numbers and, for each pair: if the sum is odd, then exactly one of them is even; otherwise, if one of the numbers is even, then both of them are even. This way each pair takes either one or two tests. Best case: n/2 tests, worst case: n tests; if even and odd numbers are chosen with equal probability, then 3n/4 tests.
My hunch is there is no solution with less than n tests. Not sure how to prove it.
UPDATE: The second solution can be extended in the following way.
Check if the sum of two numbers is even. If it is odd, then exactly one of them is even. Otherwise label the pair as a "homogeneous set of size 2". Take two homogeneous sets of the same size n. Pick one number from each set and check if their sum is even. If it is even, combine these two sets into a "homogeneous set of size 2n". Otherwise, it implies that one of those sets consists purely of even numbers and the other one purely of odd numbers.
Best case: n/2 tests. Average case: 3*n/2. Worst case is still n. The worst case occurs only when all the numbers are even or all the numbers are odd.
If we can add and multiply array elements, then we can compute every Boolean function (up to complementation) on the low-order bits. Simulate a circuit that encodes the positions of the even numbers as a number from 0 to nC0 + nC1 + ... + nCe - 1 represented in binary and use calls to isEven to read off the bits.
Number of calls used: within 1 of the information-theoretic optimum.
See also fully homomorphic encryption.

Efficient way to find all zeros in a matrix?

I am thinking of an efficient algorithm to find the number of zeros in a row of a matrix but can only think of an O(n^2) solution (i.e. by iterating over each row and column). Is there a more efficient way to count the zeros?
For example, given the matrix
3, 4, 5, 6
7, 8, 0, 9
10, 11, 12, 3
4, 0, 9, 10
I would report that there are two zeros.
Without storing any external information, no, you can't do any better than Θ(N^2). The rationale is simple - if you don't look at all N^2 locations in the matrix, then you can't guarantee that you've found all of the zeros and might end up giving the wrong answer back. For example, if I know that you look at fewer than N^2 locations, then I can run your algorithm on a matrix and see how many zeros you report. I could then look at the locations that you didn't access, replace them all with zeros, and run your algorithm again. Since your algorithm doesn't look at those locations, it can't know that they have zeros in them, and so at least one of the two runs of the algorithm would give back the wrong answer.
More generally, when designing algorithms to process data, a good way to see if you can do better than certain runtimes is to use this sort of "adversarial analysis." Ask yourself the question: if I run faster than some time O(f(n)), could an adversary manipulate the data in ways that change the answer but I wouldn't be able to detect? This is the sort of analysis that, along with some more clever math, proves that comparison-based sorting algorithms cannot do any better than Ω(n log n) in the average case.
If the matrix has some other properties to it (for example, if it's sorted), then you might be able to do a better job than running in O(N^2). As an example, suppose that you know that all rows of the matrix are sorted. Then you can easily do a binary search on each row to determine how many zeros it contains, which takes O(N log N) time and is faster.
Depending on the parameters of your setup, you might be able to get the algorithm to run faster if you assume that you're allowed to scan in parallel. For example, if your machine has K processors on it that can be dedicated to the task of scanning the matrix, then you could split the matrix into K roughly evenly-sized groups, have each processor count the number of zeros in the group, then sum the results of these computations up. This ends up giving you a runtime of Θ(N^2 / K), since the runtime is split across multiple cores.
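A sketch of that splitting idea (threads are used here only to keep the example short and runnable; a real speedup on CPython would need processes or some other way around the GIL):

from concurrent.futures import ThreadPoolExecutor

def zeros_in_rows(rows):
    return sum(row.count(0) for row in rows)

def count_zeros_parallel(matrix, workers=2):
    # split the rows into 'workers' roughly equal chunks and count each chunk
    chunk = (len(matrix) + workers - 1) // workers
    parts = [matrix[i:i + chunk] for i in range(0, len(matrix), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(zeros_in_rows, parts))

m = [[3, 4, 5, 6],
     [7, 8, 0, 9],
     [10, 11, 12, 3],
     [4, 0, 9, 10]]
print(count_zeros_parallel(m))   # 2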
Always O(n^2) - or rather O(n x m). You cannot jump over it.
But if you know that the matrix is sparse (only a few elements have nonzero values), you can store only the values that are nonzero, plus the matrix size. Then consider using hashing instead of storing the whole matrix - generally, create a hash which maps a row number to a nested hash.
Example:
m =
[
0 0 0 0
0 2 0 0
0 0 1 0
0 0 1 0
]
Will be represented as:
row_numbers = 4
column_numbers = 4
hash = { 1 => { 1 => 2 }, 2 => { 2 => 1 }, 3 => { 2 => 1 } }   # rows and columns 0-indexed
Then:
number_of_zeros = row_numbers * column_numbers - number_of_cells_in_hash(hash)
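A sketch of the counting step on top of such a nested-hash representation:

def count_zeros_sparse(nonzero, n_rows, n_cols):
    # 'nonzero' maps row index -> { column index -> value }, storing only
    # the nonzero cells as described above
    stored_cells = sum(len(cols) for cols in nonzero.values())
    return n_rows * n_cols - stored_cells

m_sparse = {1: {1: 2}, 2: {2: 1}, 3: {2: 1}}   # the 4x4 example above
print(count_zeros_sparse(m_sparse, 4, 4))       # 13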
For any unsorted matrix it should be O(n), since we generally denote the total number of elements by n.
If the matrix contains X rows and Y columns, X * Y = n.
E.g. a 4 x 4 unsorted matrix has 16 elements in total, so when we iterate linearly with 2 loops we do 4 x 4 = 16 steps. That is O(n) because the total number of elements in the array is 16.
Many people voted for O(n^2) because they considered n x n as the matrix.
Please correct me if my understanding is wrong.
Assuming that when you say "in a row of a matrix", you mean that you have the row index i and you want to count the number of zeros in the i-th row, you can do better than O(N^2).
Suppose N is the number of rows and M is the number of columns, then store your
matrix as a single array [3,4,5,6,7,8,0,9,10,11,12,3,4,0,9,10]; then to access row i, you access the array at index M*i.
Since arrays have constant-time access, this part doesn't depend on the size of the matrix. You can then iterate over the whole row by visiting the elements M*i + j for j from 0 to M-1; this is O(M), provided you know which row you want to visit and you are using an array.
This is not a perfect answer for the reasons I'll explain, but it offers an alternative solution potentially faster than the one you described:
Since you don't need to know the position of the zeros in the matrix, you can flatten it into a 1D array.
After that, perform a quicksort on the elements, this may provide a performance of O(n log n), depending on the randomness of the matrix you feed in.
Finally, count the zero elements at the beginning of the array until you reach a non-zero number.
In some cases, this will be faster than checking every element, although in a worst-case scenario the quicksort will take O(n^2), which in addition to the zero counting at the end may be worse than iterating over each row and column.
Assuming the given matrix is M, do an M + (-M) operation, but do not use the default +; use instead my_add(int a, int b) such that
int my_add(int a, int b){
    return (a == 0 && b == 0) ? 1 : (a + b);
}
That will give you a matrix like
0 0 0 0
0 0 1 0
0 0 0 0
0 1 0 0
Now you create s := 0 and keep adding all elements to it: s += a[i][j].
You can even do both in one pass: s += my_add(a[i][j], (-1)*a[i][j]).
But it is still O(m*n).
NOTE
To count the number of 1's you generally have to check all items in the matrix. Without operating on all elements I don't think you can tell the number of 1's, and looping over all elements is O(m*n). It can be faster than O(m*n) only if you can leave some elements unchecked and still state the number of 1's.
EDIT
However, if you move a 2x2 kernel over the matrix and hop by its size, you will get (m*n)/k iterations, e.g. if you operate on the neighbouring elements a[i][j], a[i+1][j], a[i][j+1], a[i+1][j+1] while i < m and j < n.
