public static LinkedList third(int[] array) {
    LinkedList retval = null;
    for (int i = 0; i < 999 && i < array.length; i = i + 1) {
        retval = new LinkedList(array[i], retval);
    }
    return retval;
}
Why does this code give big-O = O(1)?
Because the loop will be executed at most 999 times, which is a constant, you can think of it as O(999) = O(c) = O(1), where c is a constant value.
If the value of i were not limited by 999, the loop would be executed array.length times and the complexity would be O(n), where n is the size of the input array.
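To make the contrast concrete, here is a minimal sketch (the method names `cappedSum` and `fullSum` are invented for illustration):

```java
public class LoopBounds {
    // Bounded loop: the body runs at most 999 times no matter how large
    // the input is, so the method is O(1).
    static long cappedSum(int[] array) {
        long sum = 0;
        for (int i = 0; i < 999 && i < array.length; i = i + 1) {
            sum += array[i];
        }
        return sum;
    }

    // Unbounded loop: the body runs array.length times, so the method is O(n).
    static long fullSum(int[] array) {
        long sum = 0;
        for (int i = 0; i < array.length; i = i + 1) {
            sum += array[i];
        }
        return sum;
    }
}
```

On an array of 2,000 ones, cappedSum performs 999 additions while fullSum performs 2,000; doubling the input doubles the work only for fullSum.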
The Question:
Write a function:
class Solution {
    public int solution(int[] A) {...}
}
that, given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A.
For example:
Given A = [1, 3, 6, 4, 1, 2], the function should return 5.
Given A = [1, 2, 3], the function should return 4.
Given A = [−1, −3], the function should return 1.
Assume that:
N is an integer within the range [1..100,000]; each element of array A is an integer within the range [−1,000,000..1,000,000].
Complexity:
expected worst-case time complexity is O(N); expected worst-case space complexity is O(N) (not counting the storage required for input arguments).
May I know why I got such a low score for my answer to this question?
My solution below:
public static int solution(int[] A) {
    int returnInt = 1;
    int maxInt = 0;
    if (A.length == 0)
        return returnInt;
    for (int i = 0; i < A.length; i++)
    {
        if (A[i] > maxInt)
            maxInt = A[i];
    }
    if (maxInt < returnInt)
        return returnInt;
    return maxInt % 2 == 0
        ? maxInt - 1
        : maxInt + 1;
}
My solution has only one for loop; I do not understand why I got such a low score.
You can use a HashSet<int> named exists to store all the positive items of A; then you can check which of the numbers 1..exists.Count is missing from exists (if none is missing, the answer is exists.Count + 1).
C# code:
public static int solution(int[] A) {
    if (A is null || A.Length <= 0)
        return 1;
    var exists = new HashSet<int>();
    foreach (int item in A)
        if (item > 0)
            exists.Add(item);
    for (int i = 1; i <= exists.Count; ++i)
        if (!exists.Contains(i))
            return i;
    return exists.Count + 1;
}
In the worst case we have:
Time complexity: O(n), provided we have a good hash function: the foreach loop is O(n) (adding to a hash set is O(1)), and the for (int i = 1; i <= exists.Count; ++i) loop is O(n) as well (Contains is O(1) for a hash set).
Space complexity: O(n) (the hash set).
If we can accept a slightly worse time complexity - O(n * log(n)) - we can get away with O(1) extra space:
C# code:
public static int solution(int[] A) {
    if (A is null || A.Length <= 0)
        return 1;
    Array.Sort(A);
    // prior tracks the last value seen, clamped into [0, A.Length];
    // A[i] > prior + 1 means prior + 1 is missing. Duplicates and
    // negatives satisfy A[i] <= prior + 1 and are skipped.
    for (int i = 0, prior = 0; i < A.Length; prior = Math.Clamp(A[i++], 0, A.Length))
        if (A[i] > prior + 1)
            return prior + 1;
    return Math.Clamp(A[A.Length - 1] + 1, 1, A.Length + 1);
}
OP's score is "low" certainly because the code is producing wrong answers.
return maxInt % 2 == 0 ? maxInt - 1 : maxInt + 1; makes little sense.
Simplify algorithm.
given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A.
Recognize that among the values [1...N+1], there must be at least 1 value not in A[], since A[] has, at most, N different values (pigeonhole principle).
Cost O(N) time, O(N) more space solution, no hash table, no BST:
Form an array B[N+1] of T/F values - set all to false. Index this array [1...N+1]. Cost O(N) time, O(N) more space.
Walk array A. For each A[i], test if A[i] <= N (and A[i] >= 1). If A[i] in range, set B[A[i]] = true. Cost O(N) time.
Walk array B. Find the first B[i] that is false, i is the answer. Cost O(N) time.
Sample C code:
#include <string.h>  // memset

size_t LowestMissingPositive(size_t N, const int A[N]) {
    unsigned char Used[N + 1];  // Used[v - 1] records whether value v occurred
    memset(Used, 0, sizeof Used);
    for (size_t i = 0; i < N; i++) {
        if (A[i] >= 1 && (unsigned) A[i] <= N) {
            Used[A[i] - 1] = 1;
        }
    }
    for (size_t i = 0; i <= N; i++) {
        if (!Used[i]) {
            return i + 1;
        }
    }
    // Code never expected to get here: only N values can mark N + 1 slots.
    return N + 1;
}
Note: "each element of array A is an integer within the range [−1,000,000..1,000,000]" is not really an important stipulation, other than that the type of A[] needs to handle the range, e.g. at least a 21-bit-wide integer type.
Create a list L with all integers from A which are bigger than 0. O(n)
Sort L. O(n lg(n))
If L is empty, or its first element is not 1, return 1.
Iterate through L, tracking the next expected value (starting at 1): skip duplicates, advance on a match, and return the expected value at the first gap; if there is no gap, return one past the largest value seen. O(n)
Total complexity = O(n + n lg(n)) = O(n lg(n)).
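The steps above could be sketched in Java like this (`smallestMissing` is a name chosen here, not from the question; duplicates are handled by tracking the next expected value):

```java
import java.util.Arrays;

public class SortSolution {
    static int smallestMissing(int[] A) {
        // Step 1 + 2: keep the positive values and sort them. O(n lg n)
        int[] L = Arrays.stream(A).filter(x -> x > 0).sorted().toArray();
        // Step 3 + 4: walk the sorted values, tracking the next value we
        // expect to see; the first gap is the answer. O(n)
        int expected = 1;
        for (int x : L) {
            if (x == expected) {
                expected++;          // found it, look for the next value
            } else if (x > expected) {
                return expected;     // gap: expected never occurs in A
            }
            // x < expected happens only for duplicates; skip them
        }
        return expected;             // no gap: one past the largest value seen
    }
}
```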
I was doing this example (https://www.learnhowtoprogram.com/computer-science/big-o-notation-and-binary-trees/big-o-practice) and the answer they gave was O(N^2), the reason being, quote, "there is a for loop inside a while loop." This is true, but isn't the time complexity of assigning array[index1] = array[index2]; and array[index2] = temporaryValue; n? Wouldn't that take the big-O to n^3?
function sortByValue(array) {
    function swap(array, index1, index2) {
        let temporaryValue = array[index1];
        array[index1] = array[index2];
        array[index2] = temporaryValue;
    }
    let count = 1;
    while (count < array.length) {
        let swapCount = 0;
        for (let i = 0; i < array.length - count; i++) {
            if (array[i] > array[i + 1]) {
                swap(array, i, i + 1);
                swapCount++;
            }
        }
        count++;
    }
    return array;
}
No, the time complexity of
array[index1] = array[index2];
array[index2] = temporaryValue;
is O(1). Accessing or assigning to an array element is constant time (O(1)), and each of these lines is just assigning a single value to another variable. It follows that the entire swap function is O(1). Then, since there is one loop nested inside one other loop that each can take up to array.length iterations, the sortByValue function is O(n^2).
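One way to see the O(n^2) bound is to count those constant-time comparison steps for the same nested loop structure (a sketch; `comparisons` is a hypothetical helper that only counts, it does not sort):

```java
public class BubbleCount {
    // Runs the same nested loops as sortByValue but only counts the
    // constant-time comparison steps instead of sorting anything.
    static long comparisons(int n) {
        long count = 0;
        for (int pass = 1; pass < n; pass++) {   // the while loop
            for (int i = 0; i < n - pass; i++) { // the inner for loop
                count++;                         // one O(1) comparison
            }
        }
        return count;
    }
}
```

The count comes out to n(n-1)/2, which is Θ(n^2): 45 for n = 10, 4950 for n = 100.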
I did this as a solution to one of the leetcode problems, but I'm not sure what the complexity of my algorithm is.
public String countAndSay(int n) {
    if (n == 1) return "1";
    String pre = countAndSay(n - 1);
    char[] prev = pre.toCharArray();
    int len = prev.length;
    if (len == 1 && prev[0] == '1') return "11";
    int idx = 1;
    int rep = 1;
    String res = "";
    while (idx <= len) {
        if (idx == len) {
            res += (Integer.toString(rep) + prev[idx - 1]);
            break;
        }
        if (prev[idx - 1] == prev[idx]) rep++;
        else {
            res += (Integer.toString(rep) + prev[idx - 1]);
            rep = 1;
        }
        idx++;
    }
    return res;
}
Since the recursion takes place n times and the loop is O(n), I feel like it should be O(n^2). Is that correct? If not, can you please explain why?
Here are a few facts:
The method calls itself recursively on input n-1.
The method produces the sequence known as look-and-say sequence.
The length of the resulting string grows exponentially with base λ, where λ = 1.303577269034... is Conway's constant, so the length is O(λ^n).
The loop is quadratic on the length of the string (because of the repeated string concatenations), so we have O((λ^n)^2) = O((λ^2)^n) for the loop.
Hence we can derive the following recurrence relation:
T(0) = 1
T(n) = T(n-1) + O((λ^2)^n)
The asymptotic behaviour of this relation is given by
T(n) ∈ Θ((λ^2)^n) = Θ(1.6993^n)
If you use a StringBuilder instead of doing the evil repeated string concatenations, you can get it down to Θ(λ^n) which would also be asymptotically optimal for this problem.
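For illustration, the same computation with a StringBuilder might look like this (a sketch, written iteratively here rather than recursively; the iteration changes nothing asymptotically but avoids deep recursion):

```java
public class LookAndSay {
    static String countAndSay(int n) {
        String s = "1";
        for (int step = 1; step < n; step++) {
            StringBuilder next = new StringBuilder(); // appends are amortized O(1)
            int i = 0;
            while (i < s.length()) {
                char c = s.charAt(i);
                int run = 0;
                while (i < s.length() && s.charAt(i) == c) { // count the run of c
                    run++;
                    i++;
                }
                next.append(run).append(c);
            }
            s = next.toString();
        }
        return s;
    }
}
```

Each step now does work linear in the length of the string, so the total is dominated by the last step, i.e. Θ(λ^n).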
I am trying to write a variation of insertion sort. In my algorithm, the swapping of values doesn't happen when finding the correct place for the item in hand. Instead, it uses a lookup table (an array containing "links" to smaller values in the main array at corresponding positions) to find the correct position of the item.

When we are done with all n elements in the main array, we haven't actually changed any of the elements in the main array itself, but an array named smaller will contain the links to the immediate smaller values at positions i, i+1, ... n in correspondence to every element i, i+1, ... n in the main array. Finally, we iterate through the array smaller, starting from the index where the largest value in the main array existed, and populate another empty array in the backward direction to finally get the sorted sequence.
Somewhat hacky/verbose implementation of the algorithm just described:
public static int[] sort(int[] a) {
    int length = a.length;
    int sorted[] = new int[length];
    int smaller[] = new int[length];
    // debug helpers
    long e = 0, t = 0;
    int large = 0;
    smaller[large] = -1;
    here:
    for (int i = 1; i < length; i++) {
        if (a[i] > a[large]) {
            smaller[i] = large;
            large = i;
            continue;
        }
        int prevLarge = large;
        int temp = prevLarge;
        long st = System.currentTimeMillis();
        while (prevLarge > -1 && a[prevLarge] >= a[i]) {
            e++;
            if (smaller[prevLarge] == -1) {
                smaller[i] = -1;
                smaller[prevLarge] = i;
                continue here;
            }
            temp = prevLarge;
            prevLarge = smaller[prevLarge];
        }
        long et = System.currentTimeMillis();
        t += (et - st);
        smaller[i] = prevLarge;
        smaller[temp] = i;
    }
    for (int i = length - 1; i >= 0; i--) {
        sorted[i] = a[large];
        large = smaller[large];
    }
    App.print("DevSort while loop execution: " + e);
    App.print("DevSort while loop time: " + t);
    return sorted;
}
The variables e and t contain the number of times the inner while loop is executed and total time taken to execute the while loop e times, respectively.
Here is a modified version of insertion sort:
public static int[] sort(int a[]) {
    int n = a.length;
    // debug helpers
    long e = 0, t = 0;
    for (int j = 1; j < n; j++) {
        int key = a[j];
        int i = j - 1;
        long st = System.currentTimeMillis();
        while ((i > -1) && (a[i] >= key)) {
            e++;
            // simply crap
            if (1 == 1) {
                int x = 0;
                int y = 1;
                int z = 2;
            }
            a[i + 1] = a[i];
            i--;
        }
        long et = System.currentTimeMillis();
        t += (et - st);
        a[i + 1] = key;
    }
    App.print("InsertSort while loop execution: " + e);
    App.print("InsertSort while loop time: " + t);
    return a;
}
The if block inside the while loop is introduced just to match the number of statements inside the while loop of my "hacky" algorithm. Note that the two variables e and t are also introduced in the modified insertion sort.
The confusing thing is that even though the while loop of insertion sort runs exactly the same number of times as the while loop inside my "hacky" algorithm, t for insertion sort is significantly smaller than t for my algorithm.
For a particular run, if n = 10,000:
Total time taken by insertion sort's while loop: 20ms
Total time taken by my algorithm's while loop: 98ms
If n = 100,000:
Total time taken by insertion sort's while loop: 1100ms
Total time taken by my algorithm's while loop: 25251ms
In fact, because the condition 1 == 1 is always true, insertion sort's if block inside the while loop must execute at least as often as the one inside the while loop of my algorithm. Can someone explain what's going on?
Two arrays containing same elements in the same order are being sorted using each algorithm.
What is the total running time of the following code:
I concluded that this code takes O(N log N) time to run: creating the object takes O(N), since the constructor's loop runs N times, and I assumed the for loop below takes log N time. But I am not completely sure about it; that's why I am asking here.
Z z = new Z(N);
for (int i = 0; i < N - 1; i = i + 2)
    z.update(i, i + 1);
Class Z:
public class Z
{
    int[] next, prev;

    Z(int N)
    {
        prev = new int[N];
        next = new int[N];
        for (int i = 0; i < N; ++i)
        // put element i in a list of its own
        {
            next[i] = i;
            prev[i] = i;
        }
    }

    int first(int i)
    // return first element of list containing i
    {
        while (i != prev[i]) i = prev[i];
        return i;
    }

    int last(int i)
    // return last element of list containing i
    {
        while (i != next[i]) i = next[i];
        return i;
    }

    void update(int i, int j)
    {
        int f = first(j);
        int l = last(i);
        next[l] = f;
        prev[f] = l;
    }

    boolean query(int i, int j)
    {
        return last(i) == last(j);
    }
}
The total running time is only O(N).
The constructor's loop has O(N) steps.
It creates the next/prev arrays as [0, 1, ..., N].
z.update(i,i+1) takes only O(1) time. Since update() is called only once for each pair (i, j = i+1), first(j) and last(i) will return j and i, respectively.
It is not possible to analyze the expected complexity of first() and last() under general conditions, as they could easily contain infinite loops (for instance, if called with 0 when next=[1,0]). However, in the example given, they will always skip the while loop entirely, as each call to these functions is on an index that has not yet been modified.
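That claim can be checked with a small sketch (a compact copy of the Z class from the question, wrapped in a hypothetical ZDemo class for illustration; before each update(i, i+1), first(i+1) and last(i) return immediately because those entries are still untouched):

```java
public class ZDemo {
    // Compact copy of the Z class from the question.
    static class Z {
        int[] next, prev;
        Z(int N) {
            prev = new int[N];
            next = new int[N];
            for (int i = 0; i < N; ++i) { next[i] = i; prev[i] = i; }
        }
        int first(int i) { while (i != prev[i]) i = prev[i]; return i; }
        int last(int i)  { while (i != next[i]) i = next[i]; return i; }
        void update(int i, int j) {
            int f = first(j);
            int l = last(i);
            next[l] = f;
            prev[f] = l;
        }
    }
}
```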
In the worst case a single call to update() (through first() and last()) can take O(N) time, and your for loop runs N/2 times, which is O(N) once you ignore the constant factor. That gives a total bound of O(N^2). There is no logarithm.
Here is my analysis:
Z z = new Z(N);                        // O(n)
for (int i = 0; i < N-1; i = i+2)      // O(n) iterations
    z.update(i,i+1);                   // O(1) per call
Hence, the total running time will be O(n).
int first(int i)
{
    // O(1) here: i always equals prev[i], because each call is made on an
    // index whose prev entry has not been modified yet
    while (i != prev[i]) i = prev[i];
    return i;
}

int last(int i)
{
    // O(1) here: i always equals next[i], for the same reason
    while (i != next[i]) i = next[i];
    return i;
}

void update(int i, int j)
{
    int f = first(j); // O(1)
    int l = last(i);  // O(1)
    next[l] = f;      // O(1)
    prev[f] = l;      // O(1)
}