Calculation of Cyclomatic Complexity [closed] - cyclomatic-complexity

I am at the learning stage of cyclomatic complexity (CC). For practice, I am calculating the cyclomatic complexity of two examples and want to confirm whether my answers are correct.
Referring to wikipedia, CC is given by M = E βˆ’ N + 2P where:
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components
Please help.
Example 1:
Here, E = 8, N = 9 and P = 1. Hence M = 8 - 9 + (2x1) = 1.
Example 2:
Here E = 11, N = 10 and P = 1. Hence M = 10 - 11 + (2x1) = 1.
Hence, for both examples, the CC is 1. Please let me know whether my calculation is correct.

You need to take more care to correctly insert the values into the formula.
In example 1, you say
Here, E = 8, N = 9 and P = 1
But actually, it's the other way round: 9 edges (=E), 8 nodes (=N), so you get a CC of 3.
In example 2, you have the values right: E=11, N=10, P=1. But you insert them in the wrong order in the formula; it actually should be 11 - 10 + (2x1) = 3.
Shortcut: if you have a picture of your graph, you can determine the cyclomatic complexity very easily. Just count the number of regions the background is divided into by the edges. In your first example, you have 2 inner regions (bordered by the edges) and one surrounding region, giving a CC of 3. The same goes for the second example. (This method obviously requires that edges do not cross each other.)

Also, if this helps: it is the number of conditional statements (if, while, for) + 1. In the above example there are 2 conditional statements, so 2 + 1 = 3. The cyclomatic complexity in this case is 3.
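As a minimal illustration of that counting rule (a hypothetical function of my own, not from the question), here is a Python sketch with exactly two decision points:

    def count_over_limit(values, limit):
        # Two decision points (one while, one if), so CC = 2 + 1 = 3.
        count = 0
        i = 0
        while i < len(values):       # decision point 1
            if values[i] > limit:    # decision point 2
                count += 1
            i += 1
        return count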

Just count the number of closed regions and add 1.
In your example above, the number of closed regions = 2, so the CC = 2 + 1 = 3.

P = the number of connected components
IN OTHER WORDS
P = the number of connected pieces of the graph. For a single program, method or function there is one connected component, so P = 1; when measuring several methods at once, P equals the number of methods.

Related

maximum possible value of the product [closed]

You are given n integers a_1, a_2, ..., a_n. Find the maximum value of max(a_l, a_{l+1}, ..., a_r) · min(a_l, a_{l+1}, ..., a_r) over all pairs (l, r) of integers for which 1 ≤ l < r ≤ n.
Input
The first line contains a single integer t (1 ≤ t ≤ 10000) — the number of test cases.
The first line of each test case contains a single integer n (2 ≤ n ≤ 10^5).
The second line of each test case contains n integers a_1, a_2, ..., a_n (1 ≤ a_i ≤ 10^6).
It is guaranteed that the sum of n over all test cases doesn't exceed 3·10^5.
Output
For each test case, print a single integer β€” the maximum possible value of the product from the statement.
Example
input
4
3
2 4 3
4
3 2 3 1
2
69 69
6
719313 273225 402638 473783 804745 323328
output
12
6
4761
381274500335
Let's say you have some input where the best answer is X. Among all ranges that achieve X, let R be the smallest. Then the min and max of the range must be at the endpoints, otherwise a smaller range would also yield X.
However, if R has more than 2 elements then it has some central elements which must be greater than the min of R. Shrinking R from the endpoint which has the min of the range would give us R' which has a larger min and the same max, so would yield X' > X, a contradiction.
Thus you only need to consider ranges of size 2.
tl;dr: Take the max product of adjacent members of the input.
2 [4 3]: 4*3 = 12
[3 2] 3 1: 3*2 = 6
[69 69]: 69*69 = 4761
719313 273225 402638 [473783 804745] 323328: 473783*804745 = 381274500335
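A minimal sketch of that observation in Python, assuming the input format from the statement:

    import sys

    def main():
        data = sys.stdin.buffer.read().split()
        t = int(data[0])
        pos = 1
        out = []
        for _ in range(t):
            n = int(data[pos]); pos += 1
            a = list(map(int, data[pos:pos + n])); pos += n
            # Only adjacent pairs matter, per the argument above.
            out.append(str(max(a[i] * a[i + 1] for i in range(n - 1))))
        print("\n".join(out))

    main()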

Determine the "difficulty" of quiz with multiple weights?

I'm trying to determine the "difficulty" of a quiz object.
My ultimate goal is to be able to create a "difficulty score" (DS) for any quiz. This would allow me to compare one quiz to another accurately, despite being made up of different questions/answers.
When creating my quiz object, I assign each question a "difficulty index" (DI), which is a number on a scale from 1-15.
15 = most difficult
1 = least difficult
Now, a straightforward way to measure this "difficulty score" could be to add up each question's "difficulty index" and then divide by the maximum possible "difficulty index" for the quiz (ex. 16/30 = 53.3% difficulty).
However, I also have multiple "weighting" properties associated with each question. These weights are again on a scale of 1-5.
5 = most impact
1 = least impact
The reason I have two weights instead of the more common one is so I can accommodate a scenario as follows...
If presenting the student with a very difficult question (DI = 15) and the student answers incorrectly, don't have it hurt their score so much, BUT if they get it correct, have it improve their score greatly. I call these my "positive" (PW) and "negative" (NW) weights.
Quiz Example A:
Question 1: DI = 1 | PW = 3 | NW = 3
Question 2: DI = 1 | PW = 3 | NW = 3
Question 3: DI = 1 | PW = 3 | NW = 3
Question 4: DI = 15 | PW = 5 | NW = 1
Quiz Example B:
Question 1: DI = 1 | PW = 3 | NW = 3
Question 2: DI = 1 | PW = 3 | NW = 3
Question 3: DI = 1 | PW = 3 | NW = 3
Question 4: DI = 15 | PW = 1 | NW = 5
Technically the above two quizzes are very similar BUT Quiz B should be more "difficult" because the hardest question will have the greatest impact on your score if you get it wrong.
My question now becomes how can I accurately determine the "difficulty score" when considering the complex weighting system?
Any help is greatly appreciated!
The challenge of course is to determine the difficulty score for each single question.
I suggest the following model:
Hardness (H): Define a hard question as one whose chance of being answered correctly is low. The hardest question is one where (1) the chance of answering correctly is equal to random choice (because it is inherently very hard), and (2) it has the largest number of possible answers. We'll define such a question as H = 15. On the other end of the scale, we'll define H = 0 for a question where the chance of answering correctly is 100% (because it is trivial; I know such a question will never appear).
Now define the hardness of each question by subjective interpolation (remember that one can always guess among the given options). For example, if an H = 15 question has 4 answers, then a question with similar inherent hardness but only 2 answers would be H = 7.5. Another example: if you believe an average student has a 62.5% chance of answering a question correctly, it would also be an H = 7.5 question (this is because H = 15 corresponds to a 25% chance of a correct answer, while H = 0 corresponds to 100%; the average is 62.5%).
Effect (E): Now, we'll measure the effect of PW and NW. For questions with 50% chance of answering correctly - the effect is E = 0.5*PW - 0.5*NW. For questions with 25% chance of answering correctly - the effect is E = 0.25*PW - 0.75*NW. For trivial question NW doesn't matter so the effect is E = PW.
Difficulty (DI): The last step is to integrate the hardness and the effect - and call it difficulty. I suggest DI = H - c*E, where c is some positive constant. You may want to normalize again.
Edit: Alternatively, you may try the following formula: DI = H * (1 - c*E), where the effect magnitude is not absolute, but relative to the question's hardness.
Clarification:
The teacher needs to estimate only one parameter about each question: What is the probability that an average student would answer this question correctly. His estimation, e, will be in the range [1/k, 1], where k is the number of answers.
The hardness, H, is a linear function of e such that 1/k is mapped to 15 and 1 is mapped to 0. The function is: H = 15 * k / (k-1) * (1-e)
The effect E depends on e, PW and NW. The formula is E = e*PW - (1-e)*NW
Example based on OP comments:
Question 1:
k = 4, e = 0.25 (hardest). Therefore H = 15
PW = 1, NW = 5, e = 0.25. Therefore E = 0.25*1 - 0.75*5 = -3.5
c = 5. DI = 15 - 5*(-3.5) = 32.5
Question 2:
k = 4, e = 0.95 (very easy). Therefore H = 1
PW = 1, NW = 5, e = 0.95. Therefore E = 0.95*1 - 0.05*5 = 0.7
c = 5. DI = 1 - 5*(0.7) = -2.5
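A small Python sketch of this model (function names and the default constant c are mine, following the answer's formulas), reproducing the two questions above:

    def hardness(e, k, h_max=15.0):
        # Map the success-rate estimate e in [1/k, 1] linearly onto
        # [h_max, 0]: e = 1/k gives h_max, e = 1 gives 0.
        return h_max * k / (k - 1) * (1 - e)

    def effect(e, pw, nw):
        # Expected score impact: success probability times the positive
        # weight, minus failure probability times the negative weight.
        return e * pw - (1 - e) * nw

    def difficulty(e, k, pw, nw, c=5.0):
        return hardness(e, k) - c * effect(e, pw, nw)

    print(difficulty(e=0.25, k=4, pw=1, nw=5))  # 32.5
    print(difficulty(e=0.95, k=4, pw=1, nw=5))  # -2.5 (up to float rounding)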
I'd say the core of the problem is that mathematically your example quizzes A and B are identical, except that quiz A awards the student 4 gratuitous bonus points (or, equivalently, quiz B arbitrarily takes 4 points away from them). If the same students take both of them, the score distribution will be the same, except shifted by 4 points. So while the two quizzes may feel different psychologically (because, let's face it, getting extra points feels good, and losing points feels bad, even if you technically did nothing to deserve it), finding an objective way to distinguish them seems tricky.
That said, one reasonable measure of "psychological difficulty" could simply be the average score (per question) that a randomly chosen student would be expected to get from the quiz. Of course, that's not something you can reliably calculate in advance, although you could estimate it from actual quiz results after the fact.
However, if you could somehow relate your (presumably arbitrary) difficulty ratings to the fraction of students likely to answer each question correctly, then you could use that to estimate the expected average score. So, for example, we might simply assume a linear relationship between question difficulty and success rate, with difficulty 1 corresponding to a 100% expected success rate and difficulty 15 corresponding to a 0% expected success rate. Then the expected average score S per question for the quiz could be calculated as:
S = avg(PW × X − NW × (1 − X))
where the average is taken over all questions in the quiz, PW and NW are the point weights for a correct and an incorrect answer respectively, DI is the question's difficulty rating, and X = (15 − DI) / 14 is the estimated success rate.
Of course, we might want to also account for the fact that, even if a student doesn't know the answer to a question, they can still guess. Basically this means that the estimated success rate X should not range from 0 to 1, but from 1/N to 1, where N is the number of options for the question. So, taking that into account, we can adjust the formula for X to be:
X = (1 + (N − 1) × (15 − DI) / 14) / N
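Here is a short sketch of that estimate in Python (the function name and the fixed option count are my assumptions; the formulas are the ones above):

    def expected_avg_score(questions, n_options=4):
        # questions: list of (DI, PW, NW) tuples, DI on the 1-15 scale.
        # x is the estimated success rate, adjusted for guessing among
        # n_options choices.
        total = 0.0
        for di, pw, nw in questions:
            x = (1 + (n_options - 1) * (15 - di) / 14) / n_options
            total += pw * x - nw * (1 - x)
        return total / len(questions)

    # Quiz B from the question, assuming 4 options per question:
    quiz_b = [(1, 3, 3), (1, 3, 3), (1, 3, 3), (15, 1, 5)]
    print(expected_avg_score(quiz_b))  # 1.375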
One problem with this estimated average score S as a difficulty measure is that it isn't bounded in either direction, and provides no obvious scale to indicate what counts as an "easy" quiz or a "hard" one. The fundamental problem here is that you haven't specified any limits for the question weights, so there's technically nothing to stop someone from making a question with, say, a positive or negative weight of one million points.
That said, if you do impose some reasonable limits on the weights (even if they're only recommendations), then you should be able to also establish reasonable thresholds on S for a quiz to be considered e.g. easy, moderate or hard. And even if you don't, you can still at least use it to rank quizzes relative to each other by difficulty.
Ps. One way to present the expected score in a UI might be to multiply it by the number of questions in the quiz, and display the result as "par" for the quiz. That way, students could roughly judge their own performance against the difficulty of the quiz by seeing whether they scored above or below par.

From point A to point B, can only move up and right, how many possible movements? [closed]

A friend of mine had a programming test yesterday. He passed the test anyway, but I'm curious about the answer.
Here is the test: suppose you are at point A and need to go to point B. You can only move up and right. How many possible paths are there?
The options he remembered were 16, 64, 3125.
What is the answer and how to explain that?
Rotate your picture. Your problem consists in finding the number of paths from a particular vertex in the Pascal graph to the root vertex. Labelling the vertices at level n by the integers 0, 1, ..., n, then the number of paths from vertex k at level n is the binomial coefficient "n choose k". In your case this is vertex k=4 at level n=8, and "n choose k"=70.
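For reference, the closed form can be evaluated directly; math.comb is available in Python 3.8+:

    from math import comb

    # 8 moves in total (4 up and 4 right); choose which 4 are "up":
    print(comb(8, 4))  # 70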
I disagree with the commentators that this question has nothing to do with programming. It can be solved fairly easily with a simple recursive function:
def n_ways(x, y):
    if x == 0 or y == 0:
        # we are on the "edge", there is only one way to get there
        return 1
    else:
        # the number of ways to get here equals the sum of the number of
        # ways to get to the point directly below and the number of ways
        # to get to the point directly left
        return n_ways(x - 1, y) + n_ways(x, y - 1)
Let's try it out. Point B is x position 4, y position 4.
>>> n_ways(x=4, y=4)
70
We can also use our algorithm to generate the diagram below:
>>> for y in reversed(range(5)):
...     for x in range(5):
...         n = n_ways(x, y)
...         print(str(n).rjust(2), end=' ')
...     print()
...
1 5 15 35 70
1 4 10 20 35
1 3 6 10 15
1 2 3 4 5
1 1 1 1 1
As Stéphane Laurent has already mentioned, you can easily find Pascal's Triangle hiding in this algorithm! :)
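As an aside (my addition, not part of the original answer): the plain recursion recomputes the same points many times, so a cached variant is worthwhile for larger grids:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def n_ways_cached(x, y):
        # Same recurrence as n_ways above, but each (x, y) is computed once.
        if x == 0 or y == 0:
            return 1
        return n_ways_cached(x - 1, y) + n_ways_cached(x, y - 1)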

Count ways to make sum N [closed]

For any non-negative integer K, suppose we have exactly two coins of value 2^K (i.e., two to the power of K).
Now we are given a long N. We need to find the number of different ways we can represent the value N with coins that we have.
(Two representations are considered different if there is a value that occurs a different number of times in the representations.)
Example: Let N = 6; then the answer is 3, as the following three representations are possible in this case:
{1, 1, 2, 2}
{1, 1, 4} and
{2, 4}
How can we do this if N can be up to 10^18?
We can start with the trivial solution where we have one 2 and the rest 1. This of course requires N to be at least 3. Otherwise, there is no solution:
coins1 = N - 2
coins2 = 1
nSolutions = 1
Let's first discuss the task of finding the number of possible merge operations on M equal coins, where each and every coin must be merged (i.e. only a single coin type is used). This is the number of possible integer divisions of M by 2. This procedure can cache intermediate results to speed up subsequent calls (see dynamic programming).
But how is this useful? From the current state we can repeatedly exchange two 1's for one 2. This can be done as long as at least three 1's remain, a total of floor(N/2 - 1) times. After each merge (actually after every second merge is sufficient), we have to check how often all the 2's can be merged. There are still 1's in the state, so we can't use additional coin types:
while(coins1 >= 3)
{
coins1 -= 2;
coins2 += 1;
nSolutions += 1;
nSolutions += findNumberOfMerges(coins2); //this could be improved
}
When the while loop exits, there are either one or two 1's left. If there is one left, we are done, because this one must always be there and we have checked all possible combinations representing the 2's in a different way.
if(coins1 == 1)
return nSolutions;
In the other case, we can merge once more. However, this will not result in a valid state because there is no second coin type. But we can merge once more to get a valid state (one 4 (becomes coins2) and the rest 2's (becomes coins1)):
coins1 = coins2 + 1 - 2;
coins2 = 1;
nSolutions += 1;
Now we have a similar state as at the beginning, so we can run the whole procedure again. And again, and again until it returns.
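Since the answer is given in pseudocode, here is a brute-force reference in Python for checking results on small N (my own helper, not the efficient procedure described above; it simply tries 0, 1 or 2 coins of each power of two):

    def count_ways(n, k=0):
        if n == 0:
            return 1
        if 2 ** k > n:
            return 0
        total = 0
        for c in range(3):  # use 0, 1 or 2 coins of value 2^k
            if c * 2 ** k <= n:
                total += count_ways(n - c * 2 ** k, k + 1)
        return total

    print(count_ways(6))  # 3, matching the example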

Updating a range and keeping track of the maximum value that occurs at every index [closed]

You are given an array, say A[], of N elements, initially all of them are equal to negative infinity.
Now you are asked to perform two types of queries (M queries in total):
Type 1. Given four integers l, r, a and d, you need to update the array A[] from index l to index r. What you need to do exactly is: for each index l+i (where 0 <= i <= r-l), which contains some value, say 'val', update that index with the maximum of a + i*d and val, i.e., A[l+i] = max(A[l+i], a + i*d).
Type 2. Given an integer 'i', you need to report the value of A[i].
Example :
let N = 5 and M = 4
initially A[] = {-inf, -inf, -inf, -inf, -inf}
query 1: l = 2, r = 4, a = 2, d = 3
new A[] = {-inf, 2, 5, 8, -inf}
query 2: i = 3
output = 5
query 3: l = 3, r = 5, a = 10, d = -6
new A[] = {-inf, 2, 10, 8, -2}
query 4: i = 5
output = -2
Note: the values of N and M can be as large as 100000, so I am looking for algorithms better than O(N*M).
Thanks.
Think about the problem this way: you are managing a collection of linear functions, each defined on an interval,
f_i(x) = a_i * x + b_i (l_i <= x <= r_i), subject to the following operations:
Add a new function
Find the maximum of all the functions added so far that are defined for a specific value of x
I see two possible approaches:
Store only the maximum ("upper hull") of all the functions, which is in turn a collection of piecewise linear functions, but with disjoint definition intervals. UPDATE: I initially thought this could be done with a simple binary search tree, but it's not as simple as that, so I would go with option 2
Follow a more standard approach and use a segment tree on the x range (the "array"), that stores sets of linear functions in its nodes. Now for a given x, you can walk up the tree to find out what linear functions are defined at that point and use the standard convex hull trick to find their maximum at x. Complexity: O(n + M * log n * log M)
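A sketch of option 2 in Python (class and method names are mine; each node's set of lines is scanned naively here instead of with the convex hull trick the answer describes, so the per-query cost is higher than the stated bound):

    import math

    class RangeLineMax:
        # Segment tree over indices [1, n]. Each node stores the lines
        # whose insertion interval fully covers the node's segment; a
        # point query evaluates every line along the root-to-leaf path.
        def __init__(self, n):
            self.n = n
            self.lines = [[] for _ in range(4 * n)]  # (slope, intercept)

        def add(self, l, r, slope, intercept, node=1, lo=1, hi=None):
            if hi is None:
                hi = self.n
            if r < lo or hi < l:
                return
            if l <= lo and hi <= r:
                self.lines[node].append((slope, intercept))
                return
            mid = (lo + hi) // 2
            self.add(l, r, slope, intercept, 2 * node, lo, mid)
            self.add(l, r, slope, intercept, 2 * node + 1, mid + 1, hi)

        def query(self, x, node=1, lo=1, hi=None):
            if hi is None:
                hi = self.n
            best = max((s * x + b for s, b in self.lines[node]),
                       default=-math.inf)
            if lo == hi:
                return best
            mid = (lo + hi) // 2
            if x <= mid:
                return max(best, self.query(x, 2 * node, lo, mid))
            return max(best, self.query(x, 2 * node + 1, mid + 1, hi))

A type-1 update (l, r, a, d) makes index x in [l, r] at least a + (x - l)*d, i.e. the line with slope d and intercept a - l*d. Reproducing the example from the question:

    t = RangeLineMax(5)
    t.add(2, 4, 3, 2 - 2 * 3)       # query 1: l=2, r=4, a=2, d=3
    print(t.query(3))               # 5
    t.add(3, 5, -6, 10 - 3 * (-6))  # query 3: l=3, r=5, a=10, d=-6
    print(t.query(5))               # -2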
