Calculating the complexity of the inclusion-exclusion principle

I have an equation that uses the inclusion-exclusion principle to calculate the probability of correlated events by removing the duplicate counting of intersections.
Now I want to know the complexity of this equation: what is the cost of computing the inclusion-exclusion principle in relation to the number of elements? Is it exponential?

Well, the formula has one term for every non-empty subset of the n elements, and there are 2^n - 1 such subsets. Therefore evaluating it directly is exponential in n.
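As a concrete (and deliberately naive) illustration, here is a minimal Python sketch that evaluates the formula over all non-empty subsets; the p_intersection callable, which returns the probability of the intersection of a given tuple of events, is an assumption for the example:

```python
from itertools import combinations

def union_probability(events, p_intersection):
    """P(A_1 u ... u A_n) by inclusion-exclusion.

    events: list of event labels.
    p_intersection: hypothetical callable taking a tuple of events and
    returning the probability of their joint occurrence.
    Iterates over all 2^n - 1 non-empty subsets, hence exponential cost.
    """
    n = len(events)
    total = 0.0
    for k in range(1, n + 1):
        sign = (-1) ** (k + 1)          # + for odd-sized subsets, - for even
        for subset in combinations(events, k):
            total += sign * p_intersection(subset)
    return total
```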

Related

What is the time complexity of finding the rank of a square matrix?

I need to find the number of linearly independent columns in a square n*n matrix. What is the time complexity of this operation?
Regular Gaussian elimination is O(n^3).
There are other approaches (e.g. iterative methods or algorithms for sparse matrices), but they usually don't have a straightforward complexity.
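For reference, a minimal sketch of the standard O(n^3) approach: Gaussian elimination with partial pivoting, counting the non-zero pivots. The plain list-of-lists representation and the eps tolerance are assumptions for the example:

```python
def matrix_rank(a, eps=1e-9):
    """Rank of a square matrix via Gaussian elimination, O(n^3)."""
    m = [row[:] for row in a]           # work on a copy
    n = len(m)
    rank = 0
    for col in range(n):
        # pick the row with the largest absolute value in this column
        pivot = max(range(rank, n), key=lambda r: abs(m[r][col]), default=None)
        if pivot is None or abs(m[pivot][col]) < eps:
            continue                    # no usable pivot in this column
        m[rank], m[pivot] = m[pivot], m[rank]
        # eliminate the column below the pivot row
        for r in range(rank + 1, n):
            factor = m[r][col] / m[rank][col]
            for c in range(col, n):
                m[r][c] -= factor * m[rank][c]
        rank += 1
    return rank
```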

Does an algorithm exist that finds the product of two n*n matrices in fewer than n^3 operations?

I read that there is an algorithm that can calculate the product of two matrices in roughly n^2.3 complexity, but I was unable to find the algorithm.
There have been several algorithms found for matrix multiplication with a big O less than n^3. But here's one of the problems with drawing conclusions from big O notation: it only gives the limiting behaviour as n goes to infinity. In this case a more useful metric is the total time complexity, which includes the coefficients and lower order terms.
For the general algorithm the time complexity could be An^3 + Bn^2 +...
For the case of the Coppersmith-Winograd algorithm the coefficient for the n^2.375477 term is so large that for all practical purposes the general algorithm with O(n^3) complexity is faster.
The same is true of the Strassen algorithm if it's applied all the way down to single elements. However,
there is a paper which claims that a hybrid algorithm, one that uses the Strassen algorithm for matrix blocks down to some size limit and then switches to the O(n^3) algorithm, is faster for large matrices.
So although algorithms with a smaller time complexity exist, the only useful one I'm aware of is the Strassen algorithm, and then only for large matrices (whatever "large" means).
Edit: Wikipedia actually has a nice summary of the algorithms for matrix multiplication. Here is a plot from that same link showing the reduction in omega for the different algorithms vs. the year they were discovered.
https://en.wikipedia.org/wiki/Matrix_multiplication#mediaviewer/File:Bound_on_matrix_multiplication_omega_over_time.svg
The Strassen Algorithm is able to multiply matrices with an asymptotic complexity smaller than O(n^3).
The Coppersmith–Winograd algorithm calculates the product of two NxN matrices in O(n^2.375477) asymptotic time.
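To make the hybrid idea from the answer above concrete, here is a rough Python sketch that recurses with Strassen's seven products and falls back to the classical O(n^3) product below a threshold. The power-of-two size requirement and the CROSSOVER value are assumptions for the example:

```python
CROSSOVER = 64  # hypothetical block size below which the classical product wins

def mat_mul_classical(a, b):
    """Schoolbook O(n^3) product of two n x n list-of-lists matrices."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

def add(x, y):
    return [[xi + yi for xi, yi in zip(rx, ry)] for rx, ry in zip(x, y)]

def sub(x, y):
    return [[xi - yi for xi, yi in zip(rx, ry)] for rx, ry in zip(x, y)]

def strassen(a, b):
    """Hybrid Strassen multiplication for n x n matrices, n a power of two."""
    n = len(a)
    if n <= CROSSOVER:
        return mat_mul_classical(a, b)
    h = n // 2
    # split both matrices into quadrants
    a11 = [row[:h] for row in a[:h]]; a12 = [row[h:] for row in a[:h]]
    a21 = [row[:h] for row in a[h:]]; a22 = [row[h:] for row in a[h:]]
    b11 = [row[:h] for row in b[:h]]; b12 = [row[h:] for row in b[:h]]
    b21 = [row[:h] for row in b[h:]]; b22 = [row[h:] for row in b[h:]]
    # the seven Strassen products (instead of eight recursive products)
    m1 = strassen(add(a11, a22), add(b11, b22))
    m2 = strassen(add(a21, a22), b11)
    m3 = strassen(a11, sub(b12, b22))
    m4 = strassen(a22, sub(b21, b11))
    m5 = strassen(add(a11, a12), b22)
    m6 = strassen(sub(a21, a11), add(b11, b12))
    m7 = strassen(sub(a12, a22), add(b21, b22))
    c11 = add(sub(add(m1, m4), m5), m7)
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(sub(add(m1, m3), m2), m6)
    # reassemble the result from the quadrants
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bottom = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bottom
```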

Rewrite O(N W) in terms of N

I have a question that asks me to rewrite the O(NW) complexity of the subset sum problem in terms of N only.
For those unfamiliar with it: given a set of weights, each with cost 1, the problem is to find the optimal selection of weights that achieves a given maximum weight.
The O(NW) covers both the space and time costs, where the space is for the 2-d matrix used in the dynamic programming solution. This problem is a special case of the knapsack problem.
I'm not sure how to approach this; the only thing I came up with was to take the sum of all the weights and use that as a general worst case. Thanks
If the weight W is not bounded, so the complexity must depend solely on N, there is at least an O(2^N) approach: try all possible subsets of the N elements and compute their sums.
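A minimal sketch of that brute-force O(2^N) idea, assuming we only want some subset that hits the target weight W exactly:

```python
from itertools import combinations

def subset_sum_bruteforce(weights, target):
    """Try every subset of the weights: 2^N subsets to check."""
    n = len(weights)
    for k in range(n + 1):
        for subset in combinations(weights, k):
            if sum(subset) == target:
                return subset           # found a subset summing to the target
    return None                         # no subset sums to the target
```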
If you are willing to use exponential space rather than polynomial space, you can solve the problem in O(n * 2^(n/2)) time and O(2^(n/2)) space. Split your set of n weights into two sets A and B of roughly equal size and compute the sums of all subsets of each. Hash every subset sum of A, and hash W - x for every subset sum x of B; a collision in the hash table between a subset of A and a subset of B gives you a subset that sums to W.
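And a sketch of that meet-in-the-middle idea, assuming a yes/no answer is enough; a Python set stands in for the hash table described above:

```python
from itertools import combinations

def all_subset_sums(weights):
    """Set of sums of every subset of `weights` (up to 2^len(weights) entries)."""
    sums = set()
    for k in range(len(weights) + 1):
        for subset in combinations(weights, k):
            sums.add(sum(subset))
    return sums

def subset_sum_mitm(weights, target):
    """Meet in the middle: O(n * 2^(n/2)) time, O(2^(n/2)) space."""
    half = len(weights) // 2
    a_sums = all_subset_sums(weights[:half])   # sums over the first half
    b_sums = all_subset_sums(weights[half:])   # sums over the second half
    # a sum x from A collides with target - x from B when a solution exists
    return any(target - x in b_sums for x in a_sums)
```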

Maximum sum/area submatrix

Input: an nxn matrix of positive/negative numbers and an integer k.
Output: the submatrix with at least k elements that maximizes the sum of its elements divided by its number of elements (i.e. the maximum average).
Is there any algorithm better than O(n^4) for this problem?
An FFT-based divide-and-conquer approach to this problem:
https://github.com/thearn/maximum-submatrix-sum
It's not as efficient as Kadane's approach (O(N^3) vs. O(N^3 log N)), but it does give a different take on constructing the solution.
There is an O(n^3) 2-d Kadane algorithm for finding the maximum sum submatrix (i.e. subrectangle) in an nxn matrix. (You can find posts on SO about it, or read online.) Once you understand how that algorithm works, it is clear that you can get an O(n^3) time solution for your problem if you can solve the problem of finding a maximum average subinterval of length at least m in a 1-d array of n numbers in O(n) time. This is indeed possible, see the paper cs.slu.edu/~goldwasser/publications/DensityPreprint.pdf
Thus there is an O(n^3) time solution for your problem.
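For reference, here is a sketch of the O(n^3) 2-d Kadane idea for the plain maximum-sum subrectangle (without the at-least-k-elements average constraint): fix a top row, accumulate column sums as the bottom row grows, and run the 1-d Kadane scan over the collapsed columns:

```python
def max_sum_submatrix(matrix):
    """Maximum-sum subrectangle of an n x n matrix in O(n^3) (2-d Kadane)."""
    n = len(matrix)
    best = float("-inf")
    for top in range(n):
        col_sums = [0] * n              # column sums between rows top..bottom
        for bottom in range(top, n):
            for c in range(n):
                col_sums[c] += matrix[bottom][c]
            # 1-d Kadane over the collapsed columns
            running = 0
            for value in col_sums:
                running = max(value, running + value)
                best = max(best, running)
    return best
```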

Integer multiplication algorithm using a divide and conquer approach?

As homework, I should implement integer multiplication on numbers of 1000 digits, using a divide-and-conquer approach that works below O(n). What algorithm should I look into?
The Schönhage–Strassen algorithm is one of the fastest multiplication algorithms known. It takes O(n log n log log n) time.
Fürer's algorithm is the fastest large number multiplication algorithm known so far and takes O(n log n * 2^(O(log* n))) time.
I don't think any multiplication algorithm could take less than, or even equal to, O(n) time. That's simply not possible.
Take a look at the Karatsuba algorithm. It involves a recursion step which you can easily model with divide-and-conquer.
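A compact sketch of Karatsuba on Python integers, splitting on decimal digit counts for clarity (real implementations split on machine words); the recursion runs in about O(n^1.585) rather than the schoolbook O(n^2):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's divide and conquer."""
    if x < 10 or y < 10:
        return x * y                     # base case: a single-digit operand
    # split both numbers around the middle of the longer one
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    # three recursive multiplications instead of four
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```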
