This is a method I wrote that checks whether two strings are permutations of each other:
public static boolean permutation(String str1, String str2) {
    boolean terminate = false;
    for (int x = 0; x < str1.length(); x++) {
        for (int y = 0; y < str2.length(); y++) {
            if (y != str2.length() - 1) {
                if (str1.charAt(x) == str2.charAt(y)) {
                    str1.replace(String.valueOf(str1.charAt(x)), "");
                    str2.replace(String.valueOf(str2.charAt(y)), "");
                    break;
                }
            } else {
                if (str1.charAt(x) != str2.charAt(y)) {
                    return false;
                } else {
                    str1.replace(String.valueOf(str1.charAt(x)), "");
                    str2.replace(String.valueOf(str2.charAt(y)), "");
                }
            }
        }
    }
    return true;
}
As you can see in the code, I'm using a for loop inside another for loop, but I'm not checking every element of x against every element of y (though sometimes I do check all the elements of y against the current value of x). So I'm guessing this is O(n log n), since we are not comparing all the elements overall, but my gut tells me it might be O(n^2). Also, I'm removing characters once a certain condition is met, so the remaining indexes won't inspect the removed elements. In my opinion it is O(n log n), but in case it is not, I want to know why.
!UPDATE!: Due to my lack of knowledge (I wrote that algorithm in 7 minutes), the actual complexity was O(n^3): I didn't account for the String replace method, which is itself O(n) in time, and I also didn't check whether the inputs were the same length, so it returned true in some cases where it should have returned false. Here's the new code; judging by its structure, I would say it is O(n) in the best case and O(n^2) in the worst:
public static boolean permutation(String str1, String str2) {
    if (str1.length() != str2.length()) {
        return false;
    }
    HashSet<Character> string1 = new HashSet<>();
    for (int x = 0; x < str1.length(); x++) {
        string1.add(str1.charAt(x));
    }
    for (int x = 0; x < str1.length(); x++) {
        if (!string1.contains(str2.charAt(x))) {
            return false;
        }
    }
    return true;
}
You haven't considered the time complexity of the replace() method, which is O(N).
Agreed that you only process the strings under certain conditions, but the two nested for loops make the time complexity O(N^2) regardless.
Therefore, the overall time complexity = O(N^2 * N) = O(N^3).
To be precise, if the length of str1 is N and the length of str2 is M, the time complexity is ~O(N * M * (N + M)).
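For reference, both the nested loops and the calls to replace() can be avoided entirely: a single pass that tallies character frequencies runs in O(N) time. This sketch is my addition, not part of the original answer; it assumes the inputs contain only 8-bit characters (hence the 256-entry array), and unlike the HashSet version it also accounts for how many times each character occurs.

```java
// Hypothetical O(N) alternative (not the poster's code): count character
// frequencies in one array, incrementing for str1 and decrementing for str2.
public class PermutationCheck {
    public static boolean permutation(String str1, String str2) {
        if (str1.length() != str2.length()) {
            return false; // different lengths can never be permutations
        }
        int[] count = new int[256]; // assumes 8-bit characters
        for (int i = 0; i < str1.length(); i++) {
            count[str1.charAt(i)]++;
            count[str2.charAt(i)]--;
        }
        for (int c : count) {
            if (c != 0) {
                return false; // some character occurs a different number of times
            }
        }
        return true;
    }
}
```

Because each character is visited once and the final scan is a fixed 256 iterations, this stays O(N) time and O(1) space.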
I did this as a solution to one of the LeetCode problems, but I'm not sure what the complexity of my algorithm is.
public String countAndSay(int n) {
    if (n == 1) return "1";
    String pre = countAndSay(n - 1);
    char[] prev = pre.toCharArray();
    int len = prev.length;
    if (len == 1 && prev[0] == '1') return "11";
    int idx = 1;
    int rep = 1;
    String res = "";
    while (idx <= len) {
        if (idx == len) {
            res += (Integer.toString(rep) + prev[idx - 1]);
            break;
        }
        if (prev[idx - 1] == prev[idx]) rep++;
        else {
            res += (Integer.toString(rep) + prev[idx - 1]);
            rep = 1;
        }
        idx++;
    }
    return res;
}
Since the recursion takes place n times and the loop is O(n), I feel like it should be O(n^2). Is that correct? If not, can you please explain why?
Here are a few facts:
The method calls itself recursively on input n-1.
The method produces the sequence known as the look-and-say sequence.
The length of the resulting string grows exponentially with base λ, where λ = 1.303577269034... is Conway's constant, so the length is O(λ^n).
The loop is quadratic on the length of the string (because of the repeated string concatenations), so we have O((λ^n)^2) = O((λ^2)^n) for the loop.
Hence we can derive the following recurrence relation:
T(0) = 1
T(n) = T(n-1) + O((λ^2)^n)
The asymptotic behaviour of this relation is given by
T(n) ∈ Θ((λ^2)^n) = Θ(1.6993^n)
If you use a StringBuilder instead of doing the evil repeated string concatenations, you can get it down to Θ(λ^n) which would also be asymptotically optimal for this problem.
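As a sketch of that last suggestion (my own code, not the answerer's): the same recursion rewritten with a StringBuilder, so each level does work linear in the length of the string rather than quadratic.

```java
// Count-and-say with a StringBuilder: each level of the recursion now does
// O(length) work instead of the O(length^2) caused by repeated concatenation.
public class CountAndSay {
    public static String countAndSay(int n) {
        if (n == 1) return "1";
        String prev = countAndSay(n - 1);
        StringBuilder res = new StringBuilder();
        int i = 0;
        while (i < prev.length()) {
            char c = prev.charAt(i);
            int run = 0; // length of the current run of equal digits
            while (i < prev.length() && prev.charAt(i) == c) {
                run++;
                i++;
            }
            res.append(run).append(c); // say "run of length `run` of digit c"
        }
        return res.toString();
    }
}
```

The StringBuilder appends are amortized O(1), which is what brings the total down to Θ(λ^n).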
I'm new to algorithm analysis. I found this code online:
boolean p(int s, int t, int n) {
    if (n == 1) {
        if (e(s, t))
            return true;
        else
            return false;
    } else {
        for (i = 1; i <= n; i++) {
            if (p(s, i, n/2) and p(i, t, n/2))
                return true;
        }
    }
    return false;
}
How can I determine the recurrence relation described by the efficiency of this code?
Supposing that the function e(i, j) returns a boolean and takes O(1) time:
To find out the recurrence relation in terms of n, consider the following code fragment.
for (i = 1; i <= n; i++) {
    if (p(s, i, n/2) and p(i, t, n/2))
        return true;
}
We can see that the loop gets executed n times.
Notice that the loop is independent of the parameters s and t.
During each execution, this loop calls the function p() twice with parameter n/2.
Also notice that the other operations, including the function e(i, j), take only constant time.
Thus the recurrence relation of the problem is T(n) = 2n T(n/2) + O(1).
Here is the Happy Number question on LeetCode.
This is one of the solutions, using Floyd's cycle detection algorithm.
int digitSquareSum(int n) {
    int sum = 0, tmp;
    while (n) {
        tmp = n % 10;
        sum += tmp * tmp;
        n /= 10;
    }
    return sum;
}

bool isHappy(int n) {
    int slow, fast;
    slow = fast = n;
    do {
        slow = digitSquareSum(slow);
        fast = digitSquareSum(fast);
        fast = digitSquareSum(fast);
    } while (slow != fast);
    if (slow == 1) return 1;
    else return 0;
}
Is there a chance of an infinite loop?
There would only be an infinite loop if iterating digitSquareSum could grow without bound. But when it is called with an n-digit number, the result is always smaller than 100n, so this cannot happen: for n >= 4, the result is always smaller than the input number.
All of that ignores the fact that integers in most languages cannot be arbitrarily large; you would get an integer overflow if the result could grow mathematically to infinity. The result would then likely be wrong, but there would still not be an infinite loop.
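The shrinking bound is easy to check empirically. A quick sketch of my own (in Java, though the question's snippet is C-style): over all 4-digit numbers, the digit-square sum never exceeds 4 * 81 = 324, so it is always smaller than the input.

```java
// Verifies the bound used in the answer: for every 4-digit n,
// digitSquareSum(n) <= 4 * 81 = 324 < 1000 <= n, so iterating must shrink
// any number with 4 or more digits.
public class DigitSquareSumBound {
    static int digitSquareSum(int n) {
        int sum = 0;
        while (n > 0) {
            int d = n % 10;
            sum += d * d;
            n /= 10;
        }
        return sum;
    }

    public static void main(String[] args) {
        int max = 0;
        for (int n = 1000; n <= 9999; n++) {
            max = Math.max(max, digitSquareSum(n));
        }
        System.out.println(max); // 324, reached at 9999
    }
}
```

Since every trajectory eventually falls into the bounded range below 324, the sequence must repeat, and Floyd's algorithm is guaranteed to detect that cycle.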
I tried to work out a solution for validating an anagram in O(n) time and O(1) space.
I came up with a bitwise approach.
anagram(s, t) decides whether two strings are anagrams or not.
Example
Given s = "abcd", t = "dcab", return true.
Given s = "ab", t = "ab", return true.
Given s = "ab", t = "ac", return false.
time: n, space: 256
I already have the 256-space version:
public boolean anagram(String s, String t) {
    if (s == null || t == null) {
        return false;
    }
    if (s.length() != t.length()) {
        return false;
    }
    int n = s.length();
    int[] count = new int[256];
    for (int i = 0; i < n; i++) {
        count[s.charAt(i)]++;
        count[t.charAt(i)]--;
    }
    for (int i : count) {
        if (i != 0) {
            return false;
        }
    }
    return true;
}
My code with the bitwise solution:
It cannot pass this test case:
s: "az" , t: "by"
I know my code is wrong. I want to find an O(n) time, O(1) space solution without sorting, since sorting takes O(n log n) time.
The code below is incorrect.
time: n, space: 1
public boolean anagram(String s, String t) {
    if (s == null || t == null || s.length() != t.length()) {
        return false;
    }
    int n = s.length();
    int x = s.charAt(0) ^ t.charAt(0);
    for (int i = 1; i < n; i++) {
        x ^= s.charAt(i);
        x ^= t.charAt(i);
    }
    return x == 0;
}
I'm trying to work it out with just one extra space.
Maybe there is no way to do it.
public boolean anagram(String s, String t) {
    if (s == null || t == null || s.length() != t.length()) {
        return false;
    }
    int[] hash = new int[256]; // all 0 by default
    for (int i = s.length() - 1; i >= 0; --i) {
        hash[s.charAt(i)]++;
        hash[t.charAt(i)]--;
    }
    for (int i = 0; i < 256; ++i) {
        if (hash[i] != 0) return false;
    }
    return true;
}
The efficient way to detect anagrams is to sort their letters and compare the sorted words for equality.
Given the (presumably) short length of the words and the short size of the alphabet, different options are possible (straight selection sort, straight insertion sort, mergesort, counting sort, radix sort, optimized for small sizes).
A possible micro-optimization is to perform both sorts in parallel and conclude as soon as there is a discrepancy between the partially sorted words.
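A minimal sketch of the sort-and-compare idea (my addition, using the general-purpose Arrays.sort rather than one of the specialized sorts mentioned above):

```java
import java.util.Arrays;

// Sort both words and compare: anagrams sort to the same character sequence.
// O(n log n) time with Arrays.sort, without the parallel-sort optimization.
public class SortedAnagram {
    public static boolean anagram(String s, String t) {
        if (s == null || t == null || s.length() != t.length()) {
            return false;
        }
        char[] a = s.toCharArray();
        char[] b = t.toCharArray();
        Arrays.sort(a);
        Arrays.sort(b);
        return Arrays.equals(a, b);
    }
}
```

For the short words this question is about, the O(n log n) sorting cost is negligible in practice, but it is asymptotically worse than the counting approach below.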
s: "az", t: "by"
x = 'a' ^ 'b' = 3;
then x = 3 ^ 'z' = 121;
then x = 121 ^ 'y' = 0, since the ASCII value of 'y' is 121.
I think the approach you took to compare words for anagrams is not correct. Please revisit it.
I would rather use an array of counts for all the characters. If you know the string input is limited to alphabetic characters, use an array of size 26. But if the input could be any character, use an array of size 256.
This way the complexity will remain O(n) time and O(1) space.
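The size-26 variant mentioned above might look like this (a sketch of mine, assuming the inputs contain only lowercase letters 'a' to 'z'):

```java
// Count-array anagram check restricted to lowercase letters: O(n) time,
// O(1) space (one fixed 26-entry array). Assumes inputs are 'a'..'z' only.
public class AlphabetAnagram {
    public static boolean anagram(String s, String t) {
        if (s == null || t == null || s.length() != t.length()) {
            return false;
        }
        int[] count = new int[26];
        for (int i = 0; i < s.length(); i++) {
            count[s.charAt(i) - 'a']++; // add for s
            count[t.charAt(i) - 'a']--; // cancel for t
        }
        for (int c : count) {
            if (c != 0) return false; // some letter count didn't cancel out
        }
        return true;
    }
}
```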
I'm reading this Big O article (and some other book references) trying to figure out which changes affect my algorithm.
So, given the following O(N^2) code:
bool ContainsDuplicates(String[] strings)
{
    for (int i = 0; i < strings.Length; i++)
    {
        for (int j = 0; j < strings.Length; j++)
        {
            if (i == j) // Don't compare with self
            {
                continue;
            }
            if (strings[i] == strings[j])
            {
                return true;
            }
        }
    }
    return false;
}
I made the following change:
bool ContainsDuplicates(String[] strings)
{
    for (int i = 0; i < strings.Length; i++)
    {
        for (int j = 0; j < strings.Length; j++)
        {
            if (i != j) // Don't compare with self
            {
                if (strings[i] == strings[j])
                {
                    return true;
                }
            }
        }
    }
    return false;
}
Now both ifs are nested and the continue is removed. Did this algorithm really become O(N^2 + 1), and why?
As far as I can see, the if check was there before regardless, so I initially thought it would still be O(N^2).
Big O describes how execution time grows as a chosen parameter becomes large.
In your example, if we wanted to be exact, the formula would be:
Time taken = Time(start) + Time(external loop) * N + Time(continue) * N + Time(no continue) * N^2
Which can be rewritten as
Time taken = a + b * N + c * N^2
Now, as N becomes larger and larger, it's clear that overall this will be shaped like a parabola. The order zero and order one terms become irrelevant as N grows to infinity.
Time taken (large N) ~= c * N^2
Finally, since we are interested in a qualitative rather than a quantitative description, we simply describe the algorithm as N^2.
O(N^2) means that the algorithm will behave approximately as c * N^2 for large values of N.
It is a similar concept to o(x) in calculus (with the difference that small-o is for parameters going to zero).