Consider this piece of Java code:
import java.util.Scanner;

class BreakWhileLoop {
    public static void main(String[] args) {
        int n;
        Scanner input = new Scanner(System.in);
        while (true) {
            System.out.println("Input an integer");
            n = input.nextInt();
            if (n == 0) {
                break;
            }
            System.out.println("You entered " + n);
        }
    }
}
Let's take this particular case: the user will always enter an integer other than 0.
1. Can I consider this code as an algorithm?
2. If yes, how do I calculate its complexity?
Thanks
To avoid trivial answers, let us relax the problem statement by removing the "except 0" condition.
Then yes, it is an algorithm; we can call it a 0-acceptor.
Assuming that user input takes constant time, the time complexity is O(N) where N is the length of the nonzero sequence.
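One way to make "the length of the nonzero sequence" concrete is to drive the same loop from a finite array instead of a Scanner (a minimal sketch; the method name acceptZero and the array-based driver are my own framing, not part of the question):

// Does O(N) work for a sequence of N nonzero values followed by a 0,
// mirroring the while-loop above; returns the number of values consumed.
static int acceptZero(int[] sequence) {
    int steps = 0;
    for (int n : sequence) {
        steps++;
        if (n == 0) {
            break;
        }
        System.out.println("You entered " + n);
    }
    return steps;
}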
"An algorithm is a finite sequence of well-defined instructions for
calculating a function (or executing a procedure) that terminates in a
well-defined ending state."
Taken from: https://softwareengineering.stackexchange.com/questions/69083/what-is-an-algorithm
If the user will always keep inputting values, then this is not an algorithm.
It will run forever. Time complexity is used to specify an upper bound on the time an algorithm runs as a function of its input. Since your code will run forever no matter what the input is, it is meaningless to talk about its time complexity.
Interaction is a more powerful paradigm than rule-based algorithms for computer problem solving, overturning the prevailing view that all computing is expressible as algorithms.
You can follow the details and the argument in Peter Wegner's well-known article "Why Interaction Is More Powerful Than Algorithms".
Related
I've been given a simple piece of pseudocode and told to determine the big-O running time of the method myMethod() by counting the approximate number of operations it performs. However, if another function is called within the method, do I include the operations from there as part of the running time of myMethod()?
I've been looking around the internet for answers but no luck so far, so I hope someone will be able to help me out here. Thank you.
static int doIt(int n)
{
    count = 0
    j = 1
    while j < n
    {
        count = count + 1
        j = j + 2
    }
    return count
}

static int myMethod(int n)
{
    i = 1
    while (i < n)
    {
        doIt(i)
        i = i * 2
    }
    return 1
}
Yes, the runtime of myMethod() includes everything it calls, unless the instructions say otherwise. The cost of each called function is also multiplied by the number of times it is invoked.
For example, your bottom function has a while loop that runs about lg n times. You then also have to account for how the cost of each call varies with its argument, because the calls you make depend on your input. Since it's big-O, you could take one upper bound and assume it for all invocations; however, your bound may not be tight. From a theoretical point of view this is fine, because big-O is an upper bound.
If this is for, say, a school assignment, you will probably not get full marks if your bounds aren't tight.
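For instance, a rough count for the pseudocode above (a sketch of the arithmetic, assuming each loop iteration costs constant time): doIt(i) performs about i/2 iterations, and myMethod(n) invokes it with i = 1, 2, 4, ..., so the total work is

\sum_{k=0}^{\lfloor \lg n \rfloor} \frac{2^k}{2} \;=\; \frac{2^{\lfloor \lg n \rfloor + 1} - 1}{2} \;<\; n \;\Longrightarrow\; \Theta(n),

whereas bounding every call by a loose O(n) per invocation would only give O(n \log n).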
Suppose we have a recursive function which only terminates if a randomly generated parameter meets some condition:
e.g.:
{
define (some-recursive-function)
    x = (random in range of 1 to 100000);
    if (x == 10)
    {
        return "this function has terminated";
    }
    else
    {
        (some-recursive-function)
    }
}
I understand that for infinite loops, there would not be a complexity defined. What about some function that definitely terminates, but after an unknown amount of time?
Finding the average time complexity for this would be fine. How would one go about finding the worst-case time complexity, if one exists?
Thank you in advance!
EDIT: As several have pointed out, I've completely missed the fact that there is no input to this function. Suppose instead, we have:
{
define (some-recursive-function n)
    x = (random in range of 1 to n);
    if (x == 10)
    {
        return "this function has terminated";
    }
    else
    {
        (some-recursive-function n)
    }
}
Would this change anything?
If there is no function of n which bounds the runtime of the function from above, then there just isn't an upper bound on the runtime. There could be a lower bound on the runtime, depending on the case. We can also speak about the expected runtime, and even put bounds on the expected runtime, but that is distinct from, on the one hand, bounds on the average case and, on the other hand, bounds on the runtime itself.
As it's currently written, there are no bounds at all when n is under 10: the function just doesn't terminate in any event. For n >= 10, there is still no upper bound on any of the cases - it can take arbitrarily long to finish - but the lower bound in any case is linear (you must at least read the value of n, which consists of N = ceiling(log n) bits; your method of choosing a random number no greater than n may require additional time and/or space). The case behavior here is fairly uninteresting.
If we consider the expected runtime of the function in terms of the value (not the length) of the input, we observe that there is a 1/n chance that any particular invocation picks the right random number (again, for n >= 10). The number of attempts needed is given by a geometric distribution, so the expectation is 1/(1/n) = n. The expected recursion depth is therefore a linear function of the value of the input, n, and hence an exponential function of the input size, N = log n. We recover an exact expression for the expectation; the upper and lower bounds on it are therefore both linear as well, and this covers all cases (best, worst, average, etc.). I say recursion depth because the runtime will also carry an additional factor of N = log n, or more, owing to the observation in the preceding paragraph.
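A minimal sketch that checks the geometric-distribution argument empirically (iterative rather than recursive, to avoid stack overflow; the class name, n, and trial count are my own choices):

import java.util.Random;

class ExpectedDepth {
    static final Random RNG = new Random();

    // Number of draws from 1..n until we hit 10 (requires n >= 10); this equals
    // the recursion depth of some-recursive-function for the same sequence of draws.
    static long attempts(int n) {
        long depth = 1;
        while (RNG.nextInt(n) + 1 != 10) {
            depth++;
        }
        return depth;
    }

    public static void main(String[] args) {
        int n = 1000;
        int trials = 100_000;
        long total = 0;
        for (int t = 0; t < trials; t++) {
            total += attempts(n);
        }
        // The average should come out close to n, as the expectation argument predicts.
        System.out.println("average attempts: " + (double) total / trials);
    }
}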
You need to know that there are "simple" formulas to calculate the complexity of a recursive algorithm, using, of course, a recurrence.
In this case we obviously need to know what the recursive algorithm is, because in the best case the (time) complexity is O(1), but in the worst case we need to add O(n) (taking into account that numbers may repeat) to the complexity of the algorithm itself.
I'll link this question/answer for convenience:
Determining complexity for recursive functions (Big O notation)
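For the randomized function above, for example, the expected recursion depth D(n) satisfies a one-line recurrence (a sketch, assuming each draw is independent and uniform over 1..n with n >= 10):

E[D(n)] = 1 + \left(1 - \frac{1}{n}\right) E[D(n)] \;\Longrightarrow\; E[D(n)] = n.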
I was trying to solve this question, and below is my code.
import java.util.*;
// you can write to stdout for debugging purposes, e.g.
// System.out.println("this is a debug message");
class Solution {
    public int solution(int[] A) {
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < A.length; i++) {
            set.add(A[i]);
        }
        return set.size();
        // write your code in Java SE 8
    }
}
My question is: what is the time complexity of this code? I assumed it to be O(n), n being the number of elements, but my test results say it has detected a time complexity of O(n*log n). Could you please tell me the correct answer with a brief explanation?
The expected, amortized cost of an insertion into a HashMap (and hence a HashSet, which is backed by one) in Java is O(1). This means that if you do a series of n insertions, then on average the runtime will be O(n), provided that you have a good hash function. The "on average" part refers to the fact that there is some inherent randomness in how objects get distributed, which means it's possible it might take longer than that. The "best" way to characterize the runtime would be "expected O(n)," with the understanding that if you get really, really unlucky it could be longer than that.
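If you want to sanity-check the linear growth yourself, here is a rough timing sketch (the class name, seed, and sizes are my own choices; JVM wall-clock timing is noisy, so treat the numbers as indicative only):

import java.util.HashSet;
import java.util.Random;
import java.util.Set;

class DistinctTiming {
    // Time n HashSet insertions; if they are amortized O(1) on average,
    // doubling n should roughly double the elapsed time.
    static long timeDistinct(int n) {
        int[] a = new Random(42).ints(n, 0, n).toArray();
        long start = System.nanoTime();
        Set<Integer> set = new HashSet<>();
        for (int x : a) {
            set.add(x);
        }
        long elapsed = System.nanoTime() - start;
        if (set.size() < 0) throw new AssertionError(); // keep the work from being optimized away
        return elapsed;
    }

    public static void main(String[] args) {
        for (int n = 1 << 18; n <= 1 << 22; n <<= 1) {
            System.out.println(n + " elements: " + timeDistinct(n) / 1_000_000 + " ms");
        }
    }
}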
Good morning, I am studying algorithms and how to calculate complexity in the presence of recursive calls, but I cannot find a reference on how a depth limit on the recursion affects the complexity calculation. For instance, this code:
countFamilyMembers(int level, ..., int count) {
    if (noOperationCondition) { // e.g. no need to process this item because of business rules like "member already counted"
        return count;
    } else if (level >= MAX_LEVEL) { // level validation: we only want to look up to a certain level
        return ++count; // last level to visit, then no more recursion
    } else {
        for (... each memberRelatives ...) { // can be a database lookup for relatives to explore
            count = countFamilyMembers(++level, ..., ++count);
        }
        return count;
    }
}
I think this is O(2^n) because of the recursive call in the loop. However, I have two main questions:
1. What happens if the loop's values are not related to the original input at all? Can that still be considered "n"?
2. The level validation certainly limits the recursive calls; how does this affect the complexity calculation?
Thanks for the clarifications. So we'll take n as some "best metric" on the number of relatives; this is also known as the "fan-out" in some paradigms.
Thus, you'll have 1 person at level 0, n at level 1, n^2 at level 2, and so on. A rough estimate of the return value ... and the number of operations (node visits, increments, etc.) is the sum of n^level for level ranging 0 to MAX_LEVEL. The dominant term is the highest exponent, n^MAX_LEVEL.
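Written out, the count is a geometric series (a sketch, assuming a fan-out of exactly n at every node):

\sum_{\ell=0}^{\text{MAX\_LEVEL}} n^{\ell} \;=\; \frac{n^{\text{MAX\_LEVEL}+1} - 1}{n - 1} \;=\; \Theta\!\left(n^{\text{MAX\_LEVEL}}\right).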
With the given information, I believe that's your answer: O(n^MAX_LEVEL), a.k.a. polynomial time.
Note that, if you happen to be given a value for n, even an upper bound for n, then this becomes a constant, and the complexity is O(1).
The Incompressibility Method is said to simplify the analysis of algorithms for the average case. From what I understand, this is because there is no need to compute all of the possible combinations of input for that algorithm and then derive an average complexity. Instead, a single incompressible string is taken as the input. As an incompressible string is typical, we can assume that this input can act as an accurate approximation of the average case.
I am lost in regard to actually applying the Incompressibility Method to an algorithm. As an aside, I am not a mathematician, but I think that this theory has practical applications in everyday programming.
Ultimately, I would like to learn how I can deduce the average case of any given algorithm, be it trivial or complex. Could somebody please demonstrate to me how the method can be applied to a simple algorithm? For instance, given an input string S, store all of the unique characters in S, then print each one individually:
void uniqueChars(String s) {
    char[] chars = new char[s.length()];
    int free_idx = 0;
    for (int i = 0; i < s.length(); i++) {
        // linear search over the characters collected so far
        boolean seen = false;
        for (int j = 0; j < free_idx; j++) {
            if (chars[j] == s.charAt(i)) {
                seen = true;
                break;
            }
        }
        if (!seen) {
            chars[free_idx] = s.charAt(i);
            free_idx++;
        }
    }
    for (int i = 0; i < free_idx; i++) {
        System.out.print(chars[i]);
    }
}
Only for the sake of argument. I think pseudo-code is sufficient. Assume a linear search for checking whether the array contains an element.
Better algorithms by which the theory can be demonstrated are acceptable, of course.
This question maybe nonsensical and impractical, but I would rather ask than hold misconceptions.
Reproducing my answer to the CS.SE question, for cross-reference purposes:
1. Kolmogorov Complexity (or Algorithmic Complexity) deals with optimal descriptions of "strings" (in the general sense of strings as sequences of symbols).
2. A string is (sufficiently) incompressible, or (sufficiently) algorithmically random, if its (algorithmic) description (Kolmogorov complexity K) is not less than its (literal) size; in other words, the optimal description of the string is the string itself (see the formula after this list).
3. A major result of the theory is that most strings are (algorithmically) random, or typical (which is also related to other areas, like Goedel's theorems, through Chaitin's work).
4. Kolmogorov Complexity is related to Probabilistic (or Shannon) Entropy; in fact, Entropy is an upper bound on KC. This relates analysis based on descriptive complexity to probability-based analysis, and the two can be interchanged.
5. Sometimes it might be easier to use probabilistic analysis, other times descriptive complexity (two views of the same thing, let's say).
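In symbols, the incompressibility condition from point 2 (one common convention, up to an additive constant c):

K(x) \;\ge\; |x| - c.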
So, in light of the above, assuming an algorithmically random input to an algorithm, one assumes the following:
The input is typical, thus the analysis describes the average-case scenario (point 3 above).
The input size is related in a certain way to its probability (point 2 above).
One can pass from the algorithmic view to the probabilistic view (point 4 above).