I was wondering how the line "if(!(i%j)) break;" in the code below should be interpreted. Since the "!" symbol is a logical NOT, does it mean that the line in question reads as "if i mod j is equal to zero, invert and then break out of the loop"?
Many thanks
#include <stdio.h>

int main()
{
    /* local variable definition */
    int i, j;

    for (i = 2; i < 100; i++) {
        for (j = 2; j <= (i / j); j++)
            if (!(i % j))
                break;              /* i is divisible by j, so i is not prime */
        if (j > (i / j))
            printf("%d is prime\n", i);
    }

    return 0;
}
"if i mod j is equal to zero, invert and then break out of the loop"
Close: if i mod j equals zero then break.
if ( ! (i % j) ) break;
In C, 0 is false and anything else is true. So, when i % j is 0, ! (i % j) is 1, and thus true.
In C, if (number) evaluates to true for any non-zero number and to false for 0. Therefore, that condition reads as: if i mod j is equal to 0, or in other words, if i is a multiple of j.
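As a quick illustration (a toy example of my own, not from the question), !(i % j) and the explicit comparison (i % j) == 0 take the same branch:

#include <stdio.h>

int main(void)
{
    int i = 12, j = 4;              /* 12 % 4 == 0 */

    if (!(i % j))
        printf("!(i %% j) is %d, so the break would be taken\n", !(i % j));

    if ((i % j) == 0)
        printf("(i %% j) == 0 takes the same branch\n");

    return 0;
}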
So recently I stumbled across the following problem.
code 1:
for (long i = 1; i <= m; i++) {
    long j = (fullsum - 2 * (sum - i)) / 2;
    if (j >= m + 1 && j <= n) {
        swaps++;
    }
}
code 2:
for (long i = 1; i <= m; i++) {
    for (long j = m + 1; j <= n; j++) {
        if (sum - i + j == fullsum - sum - j + i) {
            swaps++;
            break;
        }
    }
}
Where fullsum = n*(n+1)/2, sum = m*(m+1)/2
1 <= N <= 10^9
1 <= M < N
Now my question: both pieces of code look logically identical to me, but code 2 gives the correct output while code 1 does not.
Can anyone explain the difference between them, why code 2 is correct while code 1 is not, and what the correct way of implementing code 1 would be?
Rewriting if(sum - i + j == fullsum - sum -j + i), we get
if(2*j == fullsum - 2*(sum-i))
In the first code, the value you are assigning to j is
long j = (fullsum - 2*(sum -i))/2;
The issue is integer division truncation. Suppose fullsum - 2*(sum - i) = 45 for some i. In code 2 the condition is false, since 2*j != 45 for every integer j.
However, in code 1, (fullsum - 2*(sum - i))/2 evaluates to 45/2 = 22 (floor division), so j = 22 may pass the range check and be counted as a valid result when it shouldn't have been.
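One way to fix code 1, then, is to accept a candidate only when the division is exact, i.e. when fullsum - 2*(sum - i) is even. A sketch (the count_swaps wrapper is my own; the variable names follow the question; use a 64-bit type if long is 32 bits on your platform, since fullsum can reach about 5*10^17):

long count_swaps(long n, long m) {
    long fullsum = n * (n + 1) / 2;
    long sum = m * (m + 1) / 2;
    long swaps = 0;

    for (long i = 1; i <= m; i++) {
        long num = fullsum - 2 * (sum - i);   // must equal 2*j exactly
        if (num % 2 != 0)
            continue;                         // no integer j satisfies 2*j == num
        long j = num / 2;
        if (j >= m + 1 && j <= n)
            swaps++;
    }
    return swaps;
}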
I believe that the following code is big theta of n^3; is this correct?
for (int i = 0; i < n; i++)
{   // A is an array of integers
    if (A[i] == 0) {
        for (int j = 0; j <= i; j++) {
            if (A[i] == 0) {
                for (int k = 0; k <= j; k++) {
                    A[i] = 1;
                }
            }
        }
    }
}
And that the following is big theta of n log(n):
for (int i = 1; i < n; i *= 2)
{
    func(i);
}

void func(int x) {
    if (x <= 1) return;
    func(x - 1);
}
because the for loop would run log(n) times, and func makes at most n recursive calls.
Thanks for the help!
Your intuition looks correct. Note that for the first snippet, if the input contains only non-zero elements, the time complexity drops down to big-theta(n). If you removed the checks, it would definitely be big-theta(n^3).
You are correct about the second snippet; however, the first is not Big-Theta(n^3), and n^3 is not even a tight upper bound. The key observation is that for each i, the innermost loop executes at most once.
Obviously, the worst case is when the array contains only zeros. However, A[i] is set to 1 on the first pass of the innermost loop, so all subsequent checks of if (A[i] == 0) for the same i evaluate to false and the innermost loop is not executed again until i increments. Therefore, the middle loop runs a total of 1 + 2 + 3 + ... + n = n * (n + 1) / 2 times, so the time complexity of the first snippet is O(n^2).
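If you want to verify this empirically, here is a small harness (my own, not from either post) that counts middle-loop iterations on an all-zero array; for n = 1000 it prints 500500 for both numbers, i.e. exactly n*(n+1)/2:

#include <stdio.h>

int main(void)
{
    enum { n = 1000 };
    static int A[n];                  /* static, so all elements start at 0 */
    long long iters = 0;              /* counts iterations of the middle loop */

    for (int i = 0; i < n; i++) {
        if (A[i] == 0) {
            for (int j = 0; j <= i; j++) {
                iters++;
                if (A[i] == 0) {
                    for (int k = 0; k <= j; k++) {
                        A[i] = 1;     /* after this, the j-loop does O(1) work */
                    }
                }
            }
        }
    }

    printf("%lld vs %lld\n", iters, (long long)n * (n + 1) / 2);
    return 0;
}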
Hope this helps!
for (i = 1; i < a; i++) {
    for (j = 1; j < b; j = j + 3) {
        if ((i + j) % 2 == 0)
            Func();
    }
}
In this case, I thought it is O(a*b) and Theta(a*b).
Did I analyze the complexity correctly?
First of all, you probably mean
if ((i + j) % 2 == 0)
instead of
if (i + j % 2 == 0)
since % binds tighter than +: when i is positive and j % 2 is non-negative, i + j % 2 is always positive and thus never equals zero, so Func() would not run at all.
Your answer is correct: the complexity is
a *       // from the first loop
b / 3 *   // from the second loop
1         // from the condition (checking it is constant work per iteration)
So you have
Θ(a * b / 3 * 1) = Θ(ab)
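A quick empirical check (my own harness; Func here is a hypothetical stand-in): count how many times the loop body runs and compare with a*b/3.

#include <stdio.h>

static void Func(void) { }            /* stand-in for the real Func() */

int main(void)
{
    long a = 2000, b = 3000;          /* arbitrary test sizes */
    long body = 0;                    /* inner-loop iterations */

    for (long i = 1; i < a; i++) {
        for (long j = 1; j < b; j = j + 3) {
            body++;                   /* the condition check is O(1) work */
            if ((i + j) % 2 == 0)
                Func();
        }
    }

    printf("body ran %ld times, a*b/3 = %ld\n", body, a * b / 3);
    return 0;
}

For a = 2000 and b = 3000 this prints 1999000 against 2000000: the counts differ only by lower-order terms, consistent with Θ(a*b).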
I'm trying to implement Arnold's cat map for N*N images; the forward step maps pixel (i, j) to ((i + j) mod N, (i + 2*j) mod N) and is implemented as follows:
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        desMatrix[(i + j) % N][(i + 2 * j) % N] = srcMatrix[i][j];
    }
}
To invert the process I do:
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        srcMatrix[(j - i) % N][(2 * i - j) % N] = desMatrix[i][j];
    }
}
Is the implementation correct?
It seems to me that for certain values of i and j I might get negative indexes from (j-i) and (2*i-j); how should I handle those cases, given that array indexes must be non-negative?
In general, when a modulo (%) operation needs to work on negative indexes, you can simply add the modulus as many times as needed. Since
x % N == ( x + a*N ) % N
for every natural number a, and in this case i and j are constrained to [0, N), you can write (N + i - j) and be sure that even if i is 0 and j is N-1 (or even N, for that matter), the result is non-negative. By the same token, (2*N + i - 2*j), or equivalently (i + 2*(N - j)), is always non-negative.
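A minimal sketch of that trick applied to the index expressions in your inverse loop (the wrap helper and its name are mine; whether those particular expressions are the correct inverse is a separate point, addressed below):

/* Wraps x into [0, N); valid whenever x > -N, which holds here
   because i and j are both in [0, N). For arbitrary x you would
   use ((x % N) + N) % N instead. */
static int wrap(int x, int N)
{
    return (x + N) % N;
}

/* ... inside the inverse loops ... */
srcMatrix[wrap(j - i, N)][wrap(2 * i - j, N)] = desMatrix[i][j];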
In this case, though, this is not necessary. To invert your map, you can repeat the forward step with the assignment reversed. Since the transformation matrix has unit determinant and is area-preserving, you're assured that you'll reach all your points eventually (i.e. covering M(i+1) yields a covering of M(i)).
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        newMatrix[i][j] = desMatrix[(i + j) % N][(i + 2 * j) % N];
    }
}
At this point newMatrix and srcMatrix ought to be identical.
(Actually, you're already running the reverse transformation as your forward one. The one I set up to reverse yours is the form commonly used for the forward transformation.)
How can I find the longest slice of a binary array that can be split into two parts such that in the left part 0 is the leader and in the right part 1 is the leader?
For example:
[1,1,0,1,0,0,1,1] should return 7, with the first part being [1,0,1,0,0] and the second part being [1,1].
I tried the following solution and it succeeds on some test cases, but I think it is not efficient:
public static int solution(int[] A)
{
    int length = A.Length;
    if (length < 2 || length > 100000)
        return 0;
    if (length == 2 && A[0] != A[1])
        return 0;
    if (length == 2 && A[0] == A[1])
        return 2;

    int zerosCount = 0;
    int OnesCount = 0;
    int start = 0;
    int end = 0;
    int count = 0;

    // left-hand side
    for (int i = 0; i < length; i++)
    {
        end = i;
        if (A[i] == 0)
            zerosCount++;
        if (A[i] == 1)
            OnesCount++;
        count = i;
        if (zerosCount == OnesCount)
        {
            start++;
            break;
        }
    }

    int zeros = 0;
    int ones = 0;

    // right-hand side
    for (int j = end + 1; j < length; j++)
    {
        count++;
        if (A[j] == 0)
            zeros++;
        if (A[j] == 1)
            ones++;
        if (zeros == ones)
        {
            end--;
            break;
        }
    }

    return count;
}
I agree that brute force has O(n^3) time complexity.
But this can be solved in linear time. I've implemented it in C; here is the code:
int f4(int* src, int n)
{
    int i;
    int sum;   /* running balance: +1 for every 1, -1 for every 0 */
    int min;
    int sta;
    int mid;
    int end;

    /* Find middle: a position where a 0 is followed by a 1
       and the prefix balance up to that point is minimal */
    sum = 0;
    mid = -1;
    for (i = 0; i < n - 1; i++)
    {
        if (src[i]) sum++;
        else sum--;
        if (src[i] == 0 && src[i + 1] == 1)
        {
            if (mid == -1 || sum < min)
            {
                min = sum;
                mid = i + 1;
            }
        }
    }
    if (mid == -1) return 0;

    /* Find start: the smallest index sta such that zeros
       outnumber ones in [sta, mid-1] */
    sum = 0;
    for (i = mid - 1; i >= 0; i--)
    {
        if (src[i]) sum++;
        else sum--;
        if (sum < 0) sta = i;
    }

    /* Find end: one past the largest index such that ones
       outnumber zeros in [mid, end-1] */
    sum = 0;
    for (i = mid; i < n; i++)
    {
        if (src[i]) sum++;
        else sum--;
        if (sum > 0) end = i + 1;
    }

    return end - sta;
}
This code has been tested against brute force: I ran both on all valid arrays of 10 elements (1024 combinations) and they produce the same results.
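For reference, here is a minimal harness I would use to try it on the example from the question (it assumes the f4 function above is in the same file); it should print 7:

#include <stdio.h>

int f4(int* src, int n);   /* defined above */

int main(void)
{
    int a[] = {1, 1, 0, 1, 0, 0, 1, 1};   /* example from the question */
    printf("%d\n", f4(a, 8));             /* expected output: 7 */
    return 0;
}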
If you liked this answer, don't forget to vote up :)
As promised, here's the update:
I've found a simple algorithm with linear time complexity that solves the problem.
The math:
Defining the input as int[] bits, we can define this function:
f(x) = {bits[x] = 0: -1; bits[x] = 1: 1}
Next step would be to create a basic integral of this function for the given input:
F(x) = f(x) + F(x - 1)
F(-1) = 0
This integral is from 0 to x.
F(x) is simply count(bits, 1, 0, x + 1) - count(bits, 0, 0, x + 1), i.e. the number of 1s minus the number of 0s in bits[0 .. x]. This can be used to define the following function: F(x , y) = F(y) - F(x), which is the same as count(bits , 1 , x , y + 1) - count(bits , 0 , x , y + 1) (the number of 1s minus the number of 0s in the range [x , y]; this is just to show how the algorithm basically works).
The slice we are looking for must satisfy the following: in the range [start , mid] 0 must be leading, in the range [mid , end] 1 must be leading, and end - start + 1 must be as large as possible. The mid we are searching for must therefore satisfy F(mid) < F(start) AND F(mid) < F(end). So the first step is to search for the minimum of F(x), which will be the mid (every other point is greater than the minimum and would thus result in a smaller, or at best equally large, range end - start + 1).
NOTE: this search can be optimized by taking the following into account: f(x) is always either 1 or -1. Thus, if f(x) returns 1 for the next n steps after the last minimum, the next possible index with a new minimum is at least 2*n steps away, since n 1s since the last minimum mean that n -1s are required afterwards just to get back down to it.
Given the x at which F(x) is minimal (call it mid), we can simply find start and end: the smallest s and the biggest b in [0 , length(bits) - 1] such that F(s) > F(mid) and F(b) > F(mid). Both can be found in linear time.
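To make this concrete with the example array from the question, [1,1,0,1,0,0,1,1]: f = [+1, +1, -1, +1, -1, -1, +1, +1], so F = [1, 2, 1, 2, 1, 0, 1, 2]. The minimum of F is 0, reached at index 5, which is exactly where the 0-led left part [1,0,1,0,0] ends and the 1-led part [1,1] begins. The smallest index with F > 0 is 0 and the biggest is 7, giving end - start = 7, the expected answer.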
Pseudocode:
input: int[] bits
output: int
//input verification left out

//transform the input in place into F(x)
int temp = 0;
for int i in [0 , length(bits) - 1]
    if bits[i] == 0
        --temp;
    else
        ++temp;
    bits[i] = temp;   //bits[i] now holds F(i)

//search the minimum of F(x)
int midIndex = -1
int mid = length(bits)   //acts as +infinity; F(x) can never reach this value
for int i in [0 , length(bits) - 1]
    if bits[i] > mid
        i += bits[i] - mid   //leave out the next n steps (see above)
    else if i > 0 AND i < length(bits) - 1 AND bits[i - 1] > bits[i] AND bits[i + 1] > bits[i]
        midIndex = i
        mid = bits[i]
if midIndex == -1
    return 0   //only 1s in the array

//search for the end index: the biggest index with F(end) > F(mid)
int end
for end in [length(bits) - 1 , midIndex]   //scan right to left
    if bits[end] > mid
        break

//search for the start index: the smallest index with F(start) > F(mid)
int start
for start in [0 , midIndex]                //scan left to right
    if bits[start] > mid
        break

return end - start