Calculating Cyclomatic Complexity

So I'm trying to understand cyclomatic complexity. Suppose I have a line of code that looks like a = b && c, where a, b, and c are bools. Would the complexity of this statement be 1, or would it be higher, as with an if statement, since you have to check both b and c? Thanks in advance.
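For intuition: many tools do count && as an extra decision point, because short-circuit evaluation makes the assignment branch internally. A minimal sketch in Python, whose "and" operator short-circuits the same way:

```python
def assign_direct(b, c):
    # a = b && c written directly (Python's short-circuit "and")
    return b and c

def assign_branched(b, c):
    # what the short-circuit actually does: a hidden if/else,
    # which is why && typically adds 1 to the cyclomatic count
    if b:
        return c
    return False
```

Both functions compute the same truth table, but the second form makes the extra branch, and hence the extra unit of complexity, explicit.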

Related

What is the recurrence relation and the asymptotic tight bound for the given pseudo-code?

I was given a piece of pseudo-code to find the recurrence relation and the asymptotic tight bound for, and for the life of me I can't figure out how to approach it.
Calc_a(n):
    if n == 1:
        return 1
    sum = 0
    for i = 1 to n-1:
        sum = sum + Calc_a(i)
    return sum
First of all, as I said, I was asked to find a recurrence relation for this code, and I tried tackling it by following what the code does for a few inputs.
I figured it returns double the amount it returned last time (except in the cases of 1 and 2, where it returns 1 both times).
So I thought to myself, it would probably be something like this:
T(n) = 2*T(n-1) + c
Because the sum it returns is equal to the number of times the method is called, I added c as a constant to account for the constant amount of work done on each call (i.e. the if and sum = lines).
The issue is that this doesn't fit with what they ask me to do in the second part:
To find a tight asymptotic bound for the relation. Hint: look both at T(n) and at T(n+1) at the same time.
But with the relation I found, it is very easy to prove that it is Omega(2^n), and also not that hard to prove that it is O(2^n).
Any help? Thanks in advance. :)
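For what it's worth, the doubling the asker observed is easy to confirm with a direct Python transcription of the pseudo-code:

```python
def calc_a(n):
    # direct transcription of Calc_a from the question
    if n == 1:
        return 1
    total = 0
    for i in range(1, n):  # i = 1 to n-1
        total += calc_a(i)
    return total

values = [calc_a(n) for n in range(1, 7)]
# values is [1, 1, 2, 4, 8, 16]: doubling from n = 2 onward
```

The doubling follows because T(n) = T(1) + ... + T(n-1) = T(n-1) + (T(1) + ... + T(n-2)) = 2*T(n-1) for n > 2, which is exactly what the hint about comparing T(n) and T(n+1) is pointing at.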

Computational complexity of unknown probability

Assume that there is a job and many workers are available.
The following code may be a bad optimization idea, but it is here just for analyzing the complexity.
A is a set of N workers:
while (A is not empty)
{
    B = empty set
    foreach a1 in A
    {
        foreach a2 in A
        {
            b = merge(a1, a2)
            if (b works better than a1 and b works better than a2)
                add b to B
        }
    }
    A = B
}
The problem is that the probability of "b works better than a1 and a2" is unknown.
So, how can the time complexity of the above code be estimated?
The cost of the two nested inner loops is independent of the probability of "b works better than a1 and a2".
However, the code seems a bit broken, as I don't see a guaranteed exit from the while loop.
Not considering the while loop, the time complexity of one pass is
O(|A|^2) = O(N^2).
If each pass shrinks A by at least one worker, the corresponding recurrence would be
T(N) = T(N-1) + C*N^2
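A minimal Python sketch of a single pass may make the per-pass cost concrete; merge and works_better below are hypothetical stand-ins for the operations named in the question, and the |A|^2 cost of one pass does not depend on how often the predicate succeeds:

```python
def one_pass(A, merge, works_better):
    # one iteration of the while loop: exactly |A|^2 merge/compare steps,
    # regardless of the (unknown) probability that the test succeeds
    B = set()
    for a1 in A:
        for a2 in A:
            b = merge(a1, a2)
            if works_better(b, a1) and works_better(b, a2):
                B.add(b)
    return B
```

Note that with, say, merge as addition of scores and works_better as a plain greater-than, B can come out larger than A, which illustrates the point above that the while loop has no guaranteed exit.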

Run time analysis for the following few cases?

I'm ranking the following from slowest growth to fastest growth:
a) 2^log(n)
b) 2^2^log(n)
c) n^(5/2)
d) 2^n^2
e) n^2*log(n)
I have a < b < e < c < d but I was told that's wrong. Can someone provide a helpful answer and explanation? Thank you.
b) is exponential, since exponentiation is right-associative. That is, 2^2^log(n) is equal to 2^(2^log(n)) = 2^n, not (2^2)^log(n) = 4^log(n). The relative order of the other four is correct; you just need to move b up to a higher position (which position I leave to you to figure out).
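The associativity point is easy to sanity-check numerically; a quick Python sketch, taking log to mean log base 2 (as the answer does) and picking n as a power of two so the logarithm is exact:

```python
import math

n = 256
log_n = int(math.log2(n))     # exact, since n is a power of two

right = 2 ** (2 ** log_n)     # right-associative reading: 2^(2^log n)
left = (2 ** 2) ** log_n      # left-associative reading:  4^log n

assert right == 2 ** n        # 2^(2^log n) = 2^n -- truly exponential
assert left == n ** 2         # 4^log n = n^2    -- merely polynomial
```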

Solving a recurrence relation in Merge Sort

I am learning about big O and recurrences.
I encountered a problem that mentioned,
T(n) = { 0, n = 1 ; T(n-1), n > 1 }
Can anyone tell me how to get to O(n^2) from this ?
The function in your question has complexity O(n). If it were O(n²), it would look like this:
T(x) = { 1, x = 0 ; n + T(x-1) , x > 0 }
where n is the number of calculations for T(x) when x ≠ 0.
I do not quite understand what you are trying to ask. But typically, O(n^2) algorithms feature the main operation being executed inside two levels of nested loops, like:
for (a = 0; a < n; a++) {
    for (b = 0; b < n; b++) {
        /* some of the main operations of the algorithm */
    }
}
Similarly, three-level nested loops containing the main operations of the algorithm are likely to have complexity O(n^3), and so on.
(Note: there are exceptions to this rule of thumb.)
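The first answer's O(n²) claim can be made concrete: the recurrence T(x) = x + T(x-1) with T(0) = 1 telescopes to x(x+1)/2 + 1, which is Theta(x²). A short Python check:

```python
def T(x):
    # T(0) = 1; T(x) = x + T(x-1) for x > 0  (the answer's O(n^2) recurrence)
    return 1 if x == 0 else x + T(x - 1)

# telescoping gives the closed form T(x) = x*(x+1)//2 + 1
assert all(T(x) == x * (x + 1) // 2 + 1 for x in range(50))
```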

Finding a Perfect Square efficiently

How do I find the first perfect square produced by the function f(n) = An² + Bn + C? B and C are given. A, B, C and n are always integers, and A is always 1. The problem is finding n.
Example: A=1, B=2182, C=3248
The answer for the first perfect square is n=16, because sqrt(f(16)) = 196.
My algorithm increments n and tests whether the square root is an integer.
This algorithm is very slow when B or C is large, because it takes n iterations to find the answer.
Is there a faster way to do this calculation? Is there a simple formula that can produce an answer?
What you are looking for are integer solutions to a special case of the general quadratic Diophantine equation¹:
Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
where you have
ax^2 + bx + c = y^2
so that A = a, B = 0, C = -1, D = b, E = 0, F = c where a, b, c are known integers and you are looking for unknown x and y that satisfy this equation. Once you recognize this, solutions to this general problem are in abundance. Mathematica can do it (use Reduce[eqn && Element[x|y, Integers], x, y]) and you can even find one implementation here including source code and an explanation of the method of solution.
1: You might recognize this as a conic section. It is, and people have been studying them for thousands of years. As such, our understanding of them is very deep and your problem is actually quite famous. The study of them is an immensely deep and still active area of mathematics.
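As a complement to the general Diophantine machinery, the A = 1 case also yields to an elementary difference-of-squares trick. Completing the square in n² + Bn + C = y² gives (2n + B)² − (2y)² = B² − 4C, so every solution corresponds to a factorization of D = B² − 4C. The sketch below is an illustration of that idea (not the implementation linked above), assuming D ≠ 0:

```python
def first_square_n(B, C):
    # Complete the square: n^2 + B*n + C = y^2
    #   =>  (2n + B)^2 - (2y)^2 = B^2 - 4C = D
    #   =>  (u - v)(u + v) = D   with u = 2n + B, v = 2y
    # so enumerate factor pairs lo * hi = D (about sqrt(|D|) of them).
    D = B * B - 4 * C                            # assumed nonzero here
    candidates = []
    d = 1
    while d * d <= abs(D):
        if D % d == 0:
            for lo in (d, -d, D // d, -(D // d)):
                hi = D // lo
                if (lo + hi) % 2 != 0:
                    continue                     # u, v would not be integers
                u, v = (lo + hi) // 2, (hi - lo) // 2
                if v % 2 == 0 and (u - B) % 2 == 0:
                    n = (u - B) // 2             # y = abs(v) // 2
                    if n >= 0:
                        candidates.append(n)
        d += 1
    return min(candidates) if candidates else None
```

For the question's example (B = 2182, C = 3248) this needs only about sqrt(|D|) ≈ 2200 trial divisions rather than an n-step incremental search, though for very large inputs the general solvers referenced above are the serious route.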
