I was trying to solve this question, and below is my code.
import java.util.*;

class Solution {
    public int solution(int[] A) {
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < A.length; i++) {
            set.add(A[i]);
        }
        return set.size();
    }
}
My question is: what is the time complexity of this code? I assumed it to be O(n), n being the number of elements, but my test results say a time complexity of O(n*log n) was detected. Could you please tell me the correct answer with a brief explanation?
The expected, amortized cost of an insertion into a HashMap in Java is O(1). This means that if you do a series of n insertions into a HashMap, then on average the runtime will be O(n) provided that you have a good hash function. The "on average" part refers to the fact that there is some inherent randomness about how objects get distributed, which means that it's possible it might take longer than that. The "best" way to characterize the runtime would be "expected O(n)," but with an understanding that if you get really, really unlucky it could be longer than that.
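Note that HashSet is itself backed by a HashMap, so the same analysis applies to your code. To see why the "good hash function" caveat matters, here is a minimal sketch; the BadKey class is made up for illustration, and exact timings will vary (Java 8 additionally converts pathological buckets into balanced trees, which softens the worst case to O(log n) per operation):

import java.util.HashSet;
import java.util.Set;

public class HashCost {
    // A hypothetical key whose hashCode sends every instance to the
    // same bucket, defeating the hash table.
    static final class BadKey {
        final int value;
        BadKey(int value) { this.value = value; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).value == value;
        }
        @Override public int hashCode() { return 42; } // constant hash: all collisions
    }

    public static void main(String[] args) {
        int n = 50_000;

        Set<Integer> good = new HashSet<>();
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) good.add(i);             // expected O(n) total
        long t1 = System.nanoTime();

        Set<BadKey> bad = new HashSet<>();
        long t2 = System.nanoTime();
        for (int i = 0; i < n; i++) bad.add(new BadKey(i));  // every add collides
        long t3 = System.nanoTime();

        System.out.printf("good hash: %d ms, constant hash: %d ms%n",
                (t1 - t0) / 1_000_000, (t3 - t2) / 1_000_000);
    }
}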
Related
I've written this code for bubble sort. Can someone explain the time complexity of it? It works similarly to two for loops, but I still want to confirm the time complexity.
public int[] sortArray(int[] inpArr) {
    int i = 0;
    int j = 0;
    while (i != inpArr.length - 1 && j != inpArr.length - 1) {
        if (inpArr[i] > inpArr[i + 1]) {
            int temp = inpArr[i];
            inpArr[i] = inpArr[i + 1];
            inpArr[i + 1] = temp;
        } else {
            i++;
        }
        if (i == inpArr.length - 1) {
            j++;
            i = 0;
        }
    }
    return inpArr;
}
This would have O(n^2) time complexity. Actually, it is both O(n^2) and Θ(n^2).
Look at the logic of your code. You are performing the following:
1. Loop through the input array.
2. If the current item is bigger than the next, swap the two.
3. If that is not the case, increase the index (and essentially check the next item, so repeat steps 1-2).
4. Once your index is the length-1 of the input array, i.e. it has gone through the entire array, your index is reset (the i=0 line), j is increased, and the process restarts.
This essentially ensures that the given array will be looped through n-1 times, meaning that you have a WORST-CASE (Big O, or O(x)) time complexity of O(n^2), and given this code, your AVERAGE (Big Theta) time complexity will also be Θ(n^2).
Some bubble sort variants have a better BEST CASE (that is Big Omega, Ω(x), not "lambda"), namely Ω(n) on already-sorted input, but that is not achievable with this code: since i is reset after every pass, it always performs n-1 full passes regardless of the input order, so even the best case is Θ(n^2).
Your time complexity is O(n^2) in the worst case. A standard bubble sort with an early-exit check has an O(n) best case on already-sorted input, but this version never exits early, so even its best case performs Θ(n^2) comparisons; the average case simply involves fewer swaps. This is because you're essentially doing the same thing as having two for loops. If you're interested in algorithmic efficiency, I'd recommend checking out the pre-existing library sorts. The computer scientists who work on these things really are intense. For object arrays, Java's Arrays.sort() method uses Timsort, a merge-sort-based hybrid that originated in Python (for primitive arrays like int[] it uses a dual-pivot quicksort). The disadvantage of your (and every) bubble sort is that it's really inefficient for big, disordered arrays.
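For contrast, here is a sketch of the textbook two-loop bubble sort with the early-exit flag mentioned above (not the asker's code):

public int[] bubbleSort(int[] arr) {
    for (int pass = 0; pass < arr.length - 1; pass++) {
        boolean swapped = false;
        // each pass bubbles the largest remaining element to the end
        for (int i = 0; i < arr.length - 1 - pass; i++) {
            if (arr[i] > arr[i + 1]) {
                int temp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped) break;  // no swaps: already sorted, O(n) best case
    }
    return arr;
}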
I am battling to find the complexity of the given code. I think I am struggling with identifying the correct complexity and with how to actually analyze it. The code to be analyzed is as follows:
public void doThings(int[] arr, int start) {
    boolean found = false;
    int i = start;
    // bound is arr.length - 1 because i is incremented before the
    // array access (with i < arr.length this would read past the end)
    while (!found && i < arr.length - 1) {
        i++;
        if (arr[i] == 17) {
            found = true;
        }
    }
}
public void reorganize(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        doThings(arr, i);
    }
}
The questions are:
1) What is the best case complexity of the reorganize method and for what inputs does it occur?
2) What is the worst case complexity of the reorganize method and for what inputs does it occur?
My answers are:
1) For the reorganize method there are two possible best cases. The first is when the array length is 1, meaning the loops in reorganize and doThings each run exactly once. The other is when the ith item of the array is 17, meaning the doThings loop will not run completely on that ith iteration. Thus in both cases the best case is O(n).
2) The worst case would be when the number 17 is at the end of the array, or when it is not in the array at all. This will mean that the array is traversed on the order of n×n times, so the worst case would be O(n^2).
Could anyone please help me answer the questions correctly if mine are incorrect, and if possible explain the problem?
"best case" the array is empty, and you search nothing.
The worst case is that you look at every single element because you never see 17. All other cases are in between.
if (arr[i] == 17) { is the "hottest path" of the code, meaning it is run most often.
In the worst case it executes a total of n*(n-1)/2 times, because even when doThings sets found = true, the reorganize method doesn't know about that, doesn't end, and keeps launching new scans even though the remaining elements have already been examined.
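Flattened, the two methods above are just this nested loop (a sketch of the inlining, not new logic):

public void reorganizeFlattened(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        // the inner scan checks indices i+1 .. n-1, stopping early at a 17;
        // in the worst case (no 17 anywhere) that is n-1-i checks, and
        // (n-1) + (n-2) + ... + 0 = n*(n-1)/2 checks in total
        for (int j = i + 1; j < arr.length && arr[j] != 17; j++) {
            // searching for 17
        }
    }
}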
Basically, flatten the code without methods, and you have this question:
What is the Big-O of a nested loop, where number of iterations in the inner loop is determined by the current iteration of the outer loop?
Consider this piece of Java code:
import java.util.Scanner;

class BreakWhileLoop {
    public static void main(String[] args) {
        int n;
        Scanner input = new Scanner(System.in);
        while (true) {
            System.out.println("Input an integer");
            n = input.nextInt();
            if (n == 0) {
                break;
            }
            System.out.println("You entered " + n);
        }
    }
}
Let's take this particular case: the user always enters an integer other than 0.
1. Can I consider this code an algorithm?
2. If yes, how do I calculate its complexity?
Thanks
To avoid trivial answers, let us relax the problem statement by removing the "except 0" condition.
Then yes, it is an algorithm; we can call it a 0-acceptor.
Assuming that user input takes constant time, the time complexity is O(N) where N is the length of the nonzero sequence.
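To make that concrete, here is a sketch that counts iterations on a fixed input sequence (an array stands in for the interactive user, and the values are made up):

int[] inputs = {5, 3, 8, 0};  // N = 3 nonzero values, then the 0 terminator
int steps = 0;
for (int n : inputs) {
    steps++;                  // one loop iteration per value read
    if (n == 0) break;
}
System.out.println(steps);    // prints 4, i.e. N + 1, which is O(N)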
"An algorithm is a finite sequence of well-defined instructions for
calculating a function (or executing a procedure) that terminates in a
well-defined ending state."
As taken from: https://softwareengineering.stackexchange.com/questions/69083/what-is-an-algorithm
If the user keeps inputting values forever, then this is not an algorithm: it will run forever. Time complexity specifies an upper bound on the time an algorithm runs as a function of its input. Since your code will run forever no matter what the input is, it is meaningless to talk about its time complexity.
Interaction is a more powerful paradigm than rule-based algorithms for computer problem solving, overturning the prevailing view that all computing is expressible as algorithms.
You can follow the details and the argument for this in Wegner's well-known article, "Why Interaction Is More Powerful Than Algorithms".
The Incompressibility Method is said to simplify the analysis of algorithms for the average case. From what I understand, this is because there is no need to compute all of the possible combinations of input for that algorithm and then derive an average complexity. Instead, a single incompressible string is taken as the input. As an incompressible string is typical, we can assume that this input can act as an accurate approximation of the average case.
I am lost in regard to actually applying the Incompressibility Method to an algorithm. As an aside, I am not a mathematician, but I think this theory has practical applications in everyday programming.
Ultimately, I would like to learn how I can deduce the average case of any given algorithm, be it trivial or complex. Could somebody please demonstrate to me how the method can be applied to a simple algorithm? For instance, given an input string S, store all of the unique characters in S, then print each one individually:
void uniqueChars(String s) {
    char[] chars = new char[s.length()];
    int free_idx = 0;
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        // linear search: is c already stored?
        boolean seen = false;
        for (int j = 0; j < free_idx; j++) {
            if (chars[j] == c) {
                seen = true;
                break;
            }
        }
        if (!seen) {
            chars[free_idx] = c;
            free_idx++;
        }
    }
    for (int i = 0; i < free_idx; i++) {
        System.out.println(chars[i]);
    }
}
Only for the sake of argument. I think pseudo-code is sufficient. Assume a linear search for checking whether the array contains an element.
Better algorithms by which the theory can be demonstrated are acceptable, of course.
This question may be nonsensical and impractical, but I would rather ask than hold misconceptions.
Reproducing my answer to the CS.SE question, for cross-reference purposes:
1. Kolmogorov complexity (or algorithmic complexity) deals with optimal descriptions of "strings" (in the general sense of strings as sequences of symbols).
2. A string is (sufficiently) incompressible, or (sufficiently) algorithmically random, if its algorithmic description (Kolmogorov complexity K) is not less than its literal size. In other words, the optimal description of the string is the string itself.
3. A major result of the theory is that most strings are (algorithmically) random, or typical (this also relates to other areas, like Gödel's theorems, through Chaitin's work).
4. Kolmogorov complexity is related to probabilistic (Shannon) entropy; in fact, entropy is an upper bound on K. This connects analysis based on descriptive complexity to probability-based analysis, and the two can be interchangeable.
5. Sometimes it might be easier to use probabilistic analysis, other times descriptive complexity (two views of the same thing, let's say).
So in light of the above, assuming an algorithmically random input to an algorithm, one assumes the following:
- The input is typical, so the analysis describes the average-case scenario (point 3 above).
- The input size is related in a certain way to its probability (point 2 above).
- One can pass from the algorithmic view to the probabilistic view (point 4 above).
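As a crude concrete illustration of the probabilistic view (not the incompressibility method itself), one can estimate the average number of comparisons the uniqueChars code above performs over uniformly random, hence typical, strings. The alphabet size, string length, and trial count below are arbitrary assumptions:

import java.util.Random;

public class AvgCase {
    // Comparison count of the linear-search uniqueChars logic on s.
    static long countComparisons(String s) {
        char[] chars = new char[s.length()];
        int free_idx = 0;
        long comparisons = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            boolean seen = false;
            for (int j = 0; j < free_idx; j++) {
                comparisons++;
                if (chars[j] == c) { seen = true; break; }
            }
            if (!seen) chars[free_idx++] = c;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        int n = 1000, trials = 100;
        long total = 0;
        for (int t = 0; t < trials; t++) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < n; i++)
                sb.append((char) ('a' + rng.nextInt(26)));  // uniform random letters
            total += countComparisons(sb.toString());
        }
        System.out.println("average comparisons: " + total / trials);
    }
}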
I have a piece of code, and I think its complexity is O(n), but I am not sure. Can you please confirm?
int low = 0;
int high = array.length - 1;
while (low < high) {
    // skip nonzeros from the front (the low < high guards keep the
    // scans in bounds; without them the original could run past the ends)
    while (low < high && array[low] != 0)
        low++;
    // skip zeros from the back
    while (low < high && array[high] == 0)
        high--;
    // swap the zero at low with the nonzero at high
    int temp = array[low];
    array[low++] = array[high];
    array[high--] = temp;
}
Your program keeps increasing low and decreasing high until they meet, so it's O(n).
Your program appears to be a partition algorithm (it moves all the zeros to one side of the array), which is O(N), or linear time.
At the end of your program, the number of times you increment low plus the number of times you decrement high is going to be the length of the array, which is O(N).
A famous algorithm with a similar structure to this is the partition step in Quicksort. You might be able to find more detailed analysis if you search for that.
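For reference, here is a sketch of a Hoare-style partition, as used in many quicksort implementations; it has the same two-index structure, and it is O(n) for the same reason (each index only ever moves in one direction):

static int partition(int[] a, int lo, int hi) {
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo - 1;
    int j = hi + 1;
    while (true) {
        do { i++; } while (a[i] < pivot);   // advance from the left
        do { j--; } while (a[j] > pivot);   // retreat from the right
        if (i >= j) {
            return j;  // indices met: left part <= pivot, right part >= pivot
        }
        int tmp = a[i];  // swap the out-of-place pair
        a[i] = a[j];
        a[j] = tmp;
    }
}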
O(n)
You might be tempted to say O(n^2) because of the nested whiles, but you can replace the inner while loops with if conditions without changing the behavior; the outer loop repeats anyway, so the inner whiles only batch up work the outer loop would do.
You can also write it like this:
while (low < high) {
    if (arr[low] != 0)
        low++;
    if (arr[high] == 0)
        high--;
    // Rest of the things
}
Here the complexity is clearly O(n), and the code does exactly the same thing, so your code also has O(n) complexity.
Not necessarily.
If everything in the {} is O(1), then yes, it is O(n):
int low = 0;
int high = array.length - 1;
while (low < high) {
}
If something in the {} were O(n), then it would be O(n^2).
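For instance (the loop body here is made up purely to illustrate the point):

while (low < high) {
    // an O(n) body, e.g. rescanning the whole array on every iteration,
    // turns the O(n) outer loop into O(n^2) overall
    for (int k = 0; k < array.length; k++) {
        // some per-element work
    }
    low++;
}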