Cyclomatic Number [closed] - cyclomatic-complexity

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I am trying to understand McCabe's cyclomatic number, and I have learned what it is actually for: it is used to indicate the complexity of a program by directly measuring the number of linearly independent paths through the program's source code (from Wikipedia).
But I want to know which software entity and attribute it really measures.

Cyclomatic complexity (CC) is measured at the granularity of the function or method. Sometimes it is summed up for a class and called the Weighted Method Count (WMC), i.e. the sum of the CCs of every method in the class.

Cyclomatic complexity analyses the code: it looks for the loops and branches you have in the code and assumes that the more loops and branches there are, the more complex the code is.
Complexity is then linked to maintainability: it is assumed that the higher the complexity, the harder the code is to maintain.
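The counting rule behind this (CC = decision points + 1) can be sketched in a few lines with Python's ast module. This is a simplification for illustration: real tools also account for things like exception handlers, case labels, and short-circuit boolean operators in more nuanced ways.

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe's CC for a snippet: decision points + 1."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        # each branch or loop adds one linearly independent path
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' hides extra branches: one per extra operand
            decisions += len(node.values) - 1
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0:
            pass
    return "done"
"""
print(cyclomatic_complexity(code))  # 2 ifs + 1 for + 1 = 4
```

Summing this value over all methods of a class would give the WMC described above.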

This is used on methods and classes to measure their complexity.
A complexity of 3 is not bad for a method; if it is greater than 3, the method is a candidate for refactoring.
It encourages you to write small methods, which raises the possibility of code reuse.

Related

Performance analysis of Sorting Algorithms [closed]

Closed 5 years ago.
I am trying to compare the performance of a couple of sorting algorithms. Are there any existing benchmarks that can make my task easy? If not, I want to create my own benchmark. How would I achieve this?
A couple of things I need to consider:
Test on different possible input permutations
Test on different scales of input size
Keep the hardware configuration consistent across all the algorithms
The major challenge is in implementing the sorting algorithms, because if I implement one in an inefficient way it will generate inaccurate results. How would I tackle this?
If tomorrow someone comes up with his/her own sorting algorithm, how would he/she compare it with the other sorting algorithms?
I am flexible about the programming language, but I would really appreciate it if someone could suggest some functions available in Python.
Well, I think you are having trouble understanding what a doubling ratio test is. I know only the basics of Python, so I got this code from here:
#!/usr/bin/python
import time

# measure wall time
t0 = time.time()
procedure()  # call the main function of your sorting class here; when the
             # sorting process ends, the script prints the time it took
print(time.time() - t0, "seconds wall time")
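The doubling ratio test mentioned above can be sketched as follows. Here `sorted` stands in for the algorithm under test (swap in your own sort function), and the interpretation of the ratios is the usual rule of thumb, not an exact criterion:

```python
import random
import time

def doubling_ratio_test(sort_fn, start_n=1000, rounds=5):
    """Run sort_fn on random inputs, doubling n each round, and report
    the ratio of successive running times: roughly 2 suggests O(n),
    slightly above 2 suggests O(n log n), and roughly 4 suggests O(n^2)."""
    results = []
    n, prev = start_n, None
    for _ in range(rounds):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        sort_fn(data)
        elapsed = time.perf_counter() - t0
        if prev:
            print(f"n={n:>9}  time={elapsed:.4f}s  ratio={elapsed / prev:.2f}")
        results.append((n, elapsed))
        n, prev = n * 2, elapsed
    return results

doubling_ratio_test(sorted)
```

Because the test looks only at ratios between runs on the same machine, it also sidesteps the hardware-consistency concern: the absolute times cancel out.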

What is the difference between the design of algorithms and the analysis of algorithms? [closed]

Closed 5 years ago.
I am new to algorithms. What is the difference between the design of algorithms and the analysis of algorithms?
The design of an algorithm is the process of inventing the algorithm. You work out what steps to take, the order in which to take them, etc. (Think of it like writing the code for the algorithm). The analysis of an algorithm is where you work out mathematically how efficient it is, prove that it's correct in all cases, etc.
Think of the design as writing the code and the analysis as justifying why that code works and why it's efficient.
Algorithm design is a specific set of instructions for completing a task.
Such designs have also been called "recipes". Perhaps a more accurate description would be that algorithm designs are patterns for completing a task in an efficient way.
Analysis of algorithms is the determination of the amount of resources (such as time and storage) necessary to execute them. It is usually described as the running time (time complexity) and storage requirements (space complexity) of an algorithm, stated as a function relating the input length to the number of steps.
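The distinction can be made concrete with a standard example, binary search: the design is the code itself, and the analysis is the argument in the comment below it.

```python
def binary_search(items, target):
    """Design: the concrete steps. Assumes items is sorted ascending."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1

# Analysis: each iteration halves the search range, so at most about
# log2(n) + 1 iterations run -- time complexity O(log n); only lo, hi,
# and mid are stored -- space complexity O(1). Correctness follows from
# the invariant that the target, if present, always lies in [lo, hi].
```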

implementation of recursion and loops at different levels [closed]

Closed 8 years ago.
I've read posts where people say certain compilers will implement recursion as loops but the hardware implements loops as recursion and vice versa. If I have a recursive function and an iterative function in my program, can someone please explain how the compiler and hardware are going to interpret each one? Please also address any performance benefits of one over the other if the choice of implementation does not clearly favor one method like using recursion for mergesort.
OK, here is a brief answer:
1) A compiler can optimize tail-recursive calls, though the result is usually not a loop but a reuse of the stack frame. However, I have never heard of a compiler that converts a loop into recursion (and I do not see any point in doing so: it would use additional stack space, would likely run slower, and could change the semantics, producing a stack overflow instead of an infinite loop).
2) I would say it is not correct to speak of the hardware implementing loops, because the hardware itself does not implement loops. It has instructions (such as conditional jumps, arithmetic operations, and so on) which are used to implement loops.
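The frame-reuse point can be illustrated with a pair of equivalent functions. Note that CPython itself performs no tail-call optimization, so the iterative version below is effectively what such an optimization would produce: parameters become mutable locals, the call becomes a jump.

```python
def sum_recursive(n, acc=0):
    """Tail-recursive: the recursive call is the last action, so a
    compiler with tail-call optimization could reuse the stack frame.
    CPython does not, hence its recursion limit applies."""
    if n == 0:
        return acc
    return sum_recursive(n - 1, acc + n)

def sum_iterative(n):
    """The loop a tail-call-optimizing compiler would effectively emit."""
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

print(sum_recursive(100))   # 5050
print(sum_iterative(100))   # 5050
# sum_recursive(10**6) raises RecursionError in CPython, while
# sum_iterative(10**6) runs fine in constant stack space.
```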

What Is "Time" When Talking About bigO Notation [closed]

Closed 9 years ago.
I am getting ahead on next semester's classes and just had a question about big-O notation. What is the time factor measured in? Is it a measure of milliseconds, nanoseconds, or just an arbitrary measure based upon the number of inputs, n, used to compare different versions of algorithms?
It kinda sorta depends on how exactly you define the notation (there are many different definitions that ultimately describe the same thing). If we define it on Turing machines, time would be defined as the number of computation steps performed. On real machines, it'd be similar: for instance, the number of atomic instructions performed. As some of the comments have pointed out, the unit of time doesn't really matter anyway, because what's measured is the asymptotic performance, that is, how the performance changes with increasing input size.
Note that this isn't really a programming question and probably not a good fit for the site. It's more of a CompSci thing, but I think the CompSci Stack Exchange site is meant for postgraduates.
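The point that the unit is irrelevant can be illustrated by counting abstract "steps" instead of seconds; only the growth ratio between input sizes carries information. A small sketch:

```python
def count_steps_linear(n):
    """Count the 'steps' of a linear scan -- the unit is arbitrary."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def count_steps_quadratic(n):
    """Count the 'steps' of a nested double scan."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

# Doubling the input doubles the linear count but quadruples the
# quadratic one -- the same ratios you'd see whether each step took a
# nanosecond or an hour, which is why the time unit doesn't matter.
print(count_steps_linear(200) / count_steps_linear(100))      # 2.0
print(count_steps_quadratic(200) / count_steps_quadratic(100))  # 4.0
```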

What is the relation between J48 algorithm and decisionStump algorithm on weka? [closed]

Closed 9 years ago.
I have test results in Weka, and in some of the data sets there is not much difference between J48 and DecisionStump as the algorithm.
How can the J48 algorithm show no statistically significant difference from the DecisionStump algorithm when compared by accuracy (percent correct)? Can we find the relation by examining the algorithms or the structure of the data?
DecisionStump is intended to be a very basic building block for other classifiers, but perhaps your data happen to be adequately modeled with a simple classifier, in which case J48 will be unable to find a clever answer that is better. Degenerate cases of this are:
1) DecisionStump always produces the right answer because one of the predictors in fact completely predicts the right answer.
2) All of the predictors are completely useless, in which case DecisionStump is no different than everything else.
I'm not at all surprised because I keep seeing studies that say that no one model was spectacularly better than the others. See e.g. the abstract at http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.6753. Usually logistic regression is one of the "good enough" classifiers.
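Degenerate case 1 is easy to demonstrate. The sketch below uses a made-up toy data set and a hand-rolled stump (a single split on a single feature) rather than Weka's actual DecisionStump, but the logic is the same: when one predictor completely determines the label, a stump already scores 100%, so a full tree like J48 cannot be statistically better.

```python
def stump_accuracy(rows, feature, threshold):
    """A decision stump: one split on one feature.
    Predict class 1 when the feature exceeds the threshold, else 0."""
    correct = sum((x[feature] > threshold) == bool(y) for x, y in rows)
    return correct / len(rows)

# Hypothetical data set: feature 0 alone completely determines the
# label; feature 1 is pure noise.
data = [((0.2, 0.9), 0), ((0.4, 0.1), 0),
        ((0.6, 0.8), 1), ((0.9, 0.3), 1)]

print(stump_accuracy(data, feature=0, threshold=0.5))  # 1.0
# The stump is already perfect, so the comparison with any deeper
# tree is a tie by construction.
```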
