Querying in a range [L,R] - algorithm

Given a binary string (that is, a string consisting only of 0s and 1s), we have to perform two types of query on it:
Type 0: Given two indices l and r, print the value of the binary substring from l to r modulo 3.
Type 1: Given an index l, flip the value at that index if and only if the value at that index is 0.
I am trying to solve this using a BIT (binary indexed tree). My reasoning:
If the number in the range [l,r] is even, then: if the count of ones is even, the answer is 0, else 2.
If the number in the range [l,r] is odd, then: if the count of ones is even, the answer is 0, else 1.
But I am getting a wrong answer for some test cases. What is wrong with my approach?
public static void update(int i){
    while(A.length > i){
        A[i] += 1;
        i += i & -i;
    }
}
public static int ans(int i){
    int a = 0;
    while(i > 0){
        a += A[i];
        i -= i & -i;
    }
    return a;
}
Answering each query:
while(Q > 0){
    Q--;
    int x = in.nextInt();
    int l = in.nextInt() + 1;
    if(x == 1){
        if((ans(l) - ans(l-1)) == 0) update(l);
        continue;
    }
    int r = in.nextInt() + 1;
    int f = ans(r) - ans(r-1);
    if(f == 0){
        int sum = ans(r) - ans(l-1);
        if(sum % 2 == 0) System.out.println(0);
        else System.out.println(2);
    }else{
        int sum = ans(r) - ans(l-1);
        if(sum % 2 == 0) System.out.println(0);
        else System.out.println(1);
    }
}

When building a binary number from the left digit to the right digit, the part of the string parsed so far is itself a binary number. We'll call it n.
When you append a digit to the right of n, this shifts n left and then adds the digit (1 or 0), so n0 = 2*n and n1 = 2*n + 1.
Because you only care about the number modulo 3, you can keep track of just this value modulo 3.
You can note that
0 (mod 3) * 2 = 0 (mod 3)
0 (mod 3) * 2 + 1 = 1 (mod 3)
1 (mod 3) * 2 = 2 (mod 3)
1 (mod 3) * 2 + 1 = 0 (mod 3)
2 (mod 3) * 2 = 1 (mod 3)
2 (mod 3) * 2 + 1 = 2 (mod 3)
You can construct a simple FSM (finite state machine) to represent these relationships and simply feed it the section of the string you are interested in as input. Or implement it however else you want.
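For illustration, here is a minimal Java sketch of that idea (my own naming; it answers a single type-0 query by scanning the characters from l to r, so it does not use the BIT at all and only demonstrates the transition rule):

public class Mod3Query {
    // Sketch: value of s[l..r] (0-based, inclusive) interpreted as a binary number, modulo 3.
    // Each step applies the transition n -> (2*n + digit) mod 3 described above.
    static int rangeMod3(String s, int l, int r) {
        int n = 0;
        for (int i = l; i <= r; i++) {
            n = (2 * n + (s.charAt(i) - '0')) % 3;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(rangeMod3("101", 0, 2)); // 101 is 5 in binary, so this prints 2
    }
}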
Hopefully you realise that "Given an index l flip the value of that index if and only if the value at that index is 0." simply means set the value at l to 1.

Related

Algorithms. Add two n-bit binary numbers. What is a loop invariant of this problem?

I'm solving exercise 2.1-4 from CLRS, "Introduction to Algorithms".
The problem is described as:
Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n + 1)-element array C.
What is the loop invariant of this problem?
I have some thoughts about this question and wrote them as comments in my solution to this problem, written in Go.
package additoin_binary
/*
Loop invariant:
At the start of each iteration of the loop, the digits in the subarray r[len(r) - 1 - i:] are
a[len(a) - 1 - i:] + b[len(b) - 1 - i:] + carry | provided that (len(a) - 1 - i) and (len(b) - 1 - i) are non-negative
a[len(a) - 1 - i:] + carry | provided that (len(a) - 1 - i) is non-negative and (len(b) - 1 - i) is negative
carry | provided that (len(a) - 1 - i) and (len(b) - 1 - i) are negative
*** the carry for iteration i is 1 if a[(len(a) - 1) - i - 1] + b[(len(b) - 1) - i - 1] == 2, else 0
*/
func BinaryAddition(a, b []int) []int {
    // a and b can have an arbitrary number of digits.
    // We make sure the second term is not longer than the first term;
    // otherwise we swap them.
    if len(b) > len(a) {
        b, a = a, b
    }
    // loop invariant initialization:
    // before the first loop iteration (i=0), if b is empty, index b_i is out of the array range (b[-1]),
    // so we have no second term and sum = a.
    // Another way of thinking about it: we didn't get b as an argument, so sum = a as well.
    if len(b) == 0 {
        return a
    }
    // result array to store the sum
    r := make([]int, len(a)+1)
    // overflow of summing two bits (1 + 1)
    carry := 0
    // loop invariant maintenance:
    // we have the right digits (after addition) in r for indexes r[len(r) - 1 - i:]
    for i := 0; i < len(r); i++ {
        a_i := len(a) - 1 - i // index of the current digit of a, counting from the right
        b_i := len(b) - 1 - i // index of the current digit of b, counting from the right
        r_i := len(r) - 1 - i // index of the current digit of r, counting from the right
        var s int
        if b_i >= 0 && a_i >= 0 {
            s = a[a_i] + b[b_i] + carry
        } else if a_i >= 0 {
            s = a[a_i] + carry
        } else { // all indexes have run out of the game (a_i < 0, b_i < 0)
            s = carry
        }
        if s > 1 {
            r[r_i] = 0
            carry = 1
        } else {
            r[r_i] = s
            carry = 0
        }
    }
    // loop invariant termination:
    // i goes from 0 to len(r) - 1, so r[len(r) - 1 - (len(r) - 1):] => r[:]
    // This means that for every index in r we have the right sum.
    // *The leading digit r[0] can be 0, so we explicitly check for that before returning r.
    if r[0] == 0 {
        return r[1:]
    } else {
        return r
    }
}
Edit 1: I extended the original problem. So now arrays A and B can have arbitrary lengths, respectively m and n. Example A = [1,0,1], B = [1,0] (m=3, n=2)
Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n + 1)-element array C.
The problem guarantees that A and B are both n-element arrays; I think that is an important condition which simplifies the code.
What is a loop invariant?
In simple words, a loop invariant is some predicate (condition) that holds for every iteration of the loop.
In this problem, if we let len = len(C) and iterate i over [0, len), the loop invariant is that r[len-1-i:len] is always the sum of a[len-2-i:len-1] and b[len-2-i:len-1] in the lower i + 1 bits. Because each iteration makes one more bit correct, this can be used to prove that the algorithm is correct.
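As an aside, with the original guarantee that A and B are both n-element arrays, the loop needs no index juggling at all. Here is a hedged Java sketch (names are my own) that carries the invariant as a comment:

public class BinaryAdditionSketch {
    // Sketch: add two n-bit binary integers a and b (most significant bit first),
    // producing the (n+1)-element result array of CLRS exercise 2.1-4.
    static int[] addBinary(int[] a, int[] b) {
        int n = a.length;                  // assumes b.length == n
        int[] c = new int[n + 1];
        int carry = 0;
        // Invariant: before iteration i, c[n-i+1 .. n] holds the low i bits of
        // a[n-i .. n-1] + b[n-i .. n-1], and carry is the carry out of those bits.
        for (int i = 0; i < n; i++) {
            int s = a[n - 1 - i] + b[n - 1 - i] + carry;
            c[n - i] = s % 2;
            carry = s / 2;
        }
        c[0] = carry;                      // termination: c holds the full (n+1)-bit sum
        return c;
    }

    public static void main(String[] args) {
        // 101 (5) + 011 (3) = 1000 (8)
        System.out.println(java.util.Arrays.toString(addBinary(new int[]{1, 0, 1}, new int[]{0, 1, 1})));
    }
}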
The loop invariant condition can also be taken as the number of bits yet to be added, n - p (assuming you start by adding the LSB bits first, from right to left), where p is the current bit position and n is the size of the augend and addend bit sequences.

How many possible ways for a barcode to appear with a limitation constraint

This is one of the homework problems from a grader I've got. I've been struggling with this question for two days now. The topic is dynamic programming, and I have no idea how to make sense of it.
The details are the following.
A barcode consists of black and white vertical lines in different arrangements. For simplicity, we use a string of “0” and “1” to identify a barcode, such that “0” represents a black line while “1” represents a white line.
A barcode is designed to be robust to error, thus it has to follow some specific rules:
1) A barcode must consist of exactly N lines.
2) There can be no more than M consecutive lines of the same color. For example, when M=3, the barcode “01100001” is illegal because it contains four consecutive lines of the same color. However, 1001100 is legal.
3) We define “color changing” as follows: a color change occurs when two consecutive lines have different colors. For example, 1001100 has 3 color changes. A barcode must have exactly K color changes.
4) The first line is always a black line.
We are interested in knowing the number of possible barcodes with respect to given values of N, M and K.
Input
There is only one line, containing 3 integers N, M and K, where 1 <= N, M <= 30 and 0 <= K <= 30.
Output
The output must contain exactly one line giving the number of possible barcodes.
For example
Input
4 3 1
Output
3
Input
5 2 2
Output
3
Input
7 9 4
Output
15
At each step (the i-th line of the barcode) we have 2 options: choose it to be white or black, then depending on that, update your state (mm and kk).
Here is pseudo-Java code with comments; don't hesitate to ask if something is not clear:
static int n, m, k, memo[][][][];

static int dp(int i, int mm, int kk, int last) {
    if (mm > m || kk > k) return 0; // limitation constraints violated
    if (i == n) return kk == k ? 1 : 0; // the barcode is complete (i == n): count it only if it has exactly k color changes
    if (memo[i][mm][kk][last] != -1) return memo[i][mm][kk][last]; // memoization
    int ans = 0;
    ans += dp(i + 1, last == 1 ? mm + 1 : 1, kk + (last != 1 ? 1 : 0), 1); // choose black as the color of this line and update the state (mm, kk)
    ans += dp(i + 1, last == 0 ? mm + 1 : 1, kk + (last != 0 ? 1 : 0), 0); // choose white as the color of this line and update the state (mm, kk)
    return memo[i][mm][kk][last] = ans;
}

public static void main(String[] args) throws java.lang.Exception {
    n = 4; m = 3; k = 1;
    memo = new int[n + 1][m + 1][k + 1][2];
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= m; j++)
            for (int l = 0; l <= k; l++)
                Arrays.fill(memo[i][j][l], -1);
    System.out.print(dp(1, 1, 0, 1));
}
There is a quite simple recurrence relation, if T(N, M, K) is the output :
T(N, M, K) = T(N - 1, M, K - 1) + T(N - 2, M, K - 1) + ... + T(N - M, M, K - 1)
A valid barcode (N, M, K) is always a smaller valid barcode plus one new colour block, and the size of this new block can be anything from 1 to M.
Thanks to this relation, you can build an N x K table for the given M and solve the problem in O(NMK) with dynamic programming.
These rules should be enough to initialize the recurrence:
T(N, M, K) = 0 if (K >= N) and 1 if (K = N - 1)
T(N, M, K) = 0 if ((K+1) * M < N)
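For concreteness, here is a hedged Java sketch of that recurrence with memoization (names and the explicit K = 0 base case are mine; when K = 0 the barcode is a single block, which is possible exactly when N <= M):

import java.util.Arrays;

public class BarcodeCount {
    static int M;
    static long[][] memo; // memo[n][k], -1 = not computed yet

    // T(n, k): barcodes of n lines with exactly k colour changes and runs of at most M lines.
    // (The first line being black fixes every colour, so it does not add a dimension.)
    static long T(int n, int k) {
        if (k >= n) return 0;                 // k changes need at least k+1 lines
        if ((long) (k + 1) * M < n) return 0; // k+1 blocks of at most M lines cannot reach n
        if (k == 0) return 1;                 // a single block of length n (n <= M here)
        if (memo[n][k] != -1) return memo[n][k];
        long total = 0;
        for (int s = 1; s <= M && s < n; s++) // the last colour block has length s
            total += T(n - s, k - 1);
        return memo[n][k] = total;
    }

    public static void main(String[] args) {
        int N = 4, K = 1;
        M = 3;
        memo = new long[N + 1][K + 1];
        for (long[] row : memo) Arrays.fill(row, -1);
        System.out.println(T(N, K)); // prints 3 for N=4, M=3, K=1, matching the first example
    }
}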

Excess (-1) in base 4 representation

I've been trying to wrap my head around this one problem for the last couple of days, and I can't figure out a way to solve it. So, here it goes:
Given base 4 (that is, 0, 1, 2, 3 as digits for a number), find the excess (-1) in base 4 representation of any negative or positive integer number.
examples:
-6 = (-1)22
conversely, (-1)22 in excess (-1) of base 4 = 2 * 4^0 + 2 * 4^1 + (-1) * 4^2 = 2 + 8 - 16 = 10 - 16 = -6 in base 10
27 = 2(-1)(-1)
conversely, 2(-1)(-1) = (-1) * 4^0 + (-1) * 4^1 + 2 * 4^2 = -1 - 4 + 32 = 27
I did come up with a few algorithms for positive numbers, but none of them hold true for all negative numbers, so into the trash they went.
Can anyone give me some kind of clue here? Thanks!
----------------
Edit: I'm going to try to rephrase this question in such a way that it does not raise any confusions.
Consider the radix obtained by subtracting 1 from every digit, called the excess-(-1) of base 4. In this radix, any number can be represented using the digits -1, 0, 1, 2. So, the problem asks for an algorithm that gets as an input any integer number, and gives as output the representation of that given number.
Examples:
decimal -6 = -1 2 2 for the excess-(-1) of base 4.
To verify this, we take the representation -1 2 2 and transform it to a decimal number, starting from the right-most digit and using the generic base-n to base-10 algorithm, like so:
number = 2 * 4^0 + 2 * 4^1 + (-1) * 4^2 = 2 + 8 - 16 = -6
I don't know if "quaterit" is the correct word for the radix in this representation, but I'm going to use it anyway.
Since you say you already have an algorithm for positive numbers, I'll try to take a negative number as an input and write something that uses what you already have. The code below doesn't quite work, but I'll explain why at the end.
int[] BaseFourExcessForNegativeNumbers(int x) {
    int powerOfFour = 1;
    while (-powerOfFour > x) {
        powerOfFour *= 4;
    }
    int firstQuaterit = -1;
    int remainder = x + powerOfFour;
    int[] otherQuaterits;
    if (remainder >= 0) {
        otherQuaterits = BaseFourExcessForPositiveNumbers(remainder);
    } else {
        otherQuaterits = BaseFourExcessForNegativeNumbers(remainder);
    }
    int[] result = new int[otherQuaterits.Length + 1];
    result[0] = firstQuaterit;
    for (int index = 0; index < otherQuaterits.Length; ++index) {
        result[index + 1] = otherQuaterits[index];
    }
    return result;
}
The idea here is that every negative number x will start with a (-1) in this representation. If that (-1) is in the 4^n position, we want to find out how to represent x - (-1)*4^n to see how to represent the rest of the number.
The reason the code I wrote won't work is that it doesn't take into consideration the possibility that the second quaterit is a 0. If that happens, the array my code will produce will be missing that 0. In fact, if BaseFourExcessForPositiveNumbers is written in the same way, the resulting array will be missing every 0, but will otherwise be correct. A workaround is to keep track of which place the first quaterit takes, and then make the array that size, and fill it from the back to the front.
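A different way to sidestep the missing-zeros issue entirely (my own sketch, in Java, not the code above) is to peel off the lowest quaterit directly: pick the digit in {-1, 0, 1, 2} that is congruent to the number modulo 4, subtract it, and divide by 4. This handles positive, negative and zero inputs uniformly:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ExcessMinusOneBase4 {
    // Sketch: convert any integer to its excess-(-1) base-4 digits, most significant first.
    static List<Integer> toExcessMinusOne(int x) {
        List<Integer> digits = new ArrayList<>();
        if (x == 0) {
            digits.add(0);
            return digits;
        }
        while (x != 0) {
            int d = Math.floorMod(x + 1, 4) - 1; // digit in {-1, 0, 1, 2} with d congruent to x (mod 4)
            digits.add(d);
            x = (x - d) / 4;                     // exact division, since x - d is a multiple of 4
        }
        Collections.reverse(digits);             // most significant quaterit first
        return digits;
    }

    public static void main(String[] args) {
        System.out.println(toExcessMinusOne(-6)); // [-1, 2, 2], i.e. (-1)22
        System.out.println(toExcessMinusOne(27)); // [2, -1, -1], i.e. 2(-1)(-1)
    }
}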

Find number of binary numbers with certain constraints

This is more of a puzzle than a coding problem. I need to find how many binary numbers can be generated satisfying certain constraints. The inputs are
(integer) Len - Number of digits in the binary number
(integer) x
(integer) y
The binary number has to be such that any x adjacent digits taken from it contain at least y 1's.
For example -
Len = 6, x = 3, y = 2
0 1 1 0 1 1 - The length is 6; take any 3 adjacent digits from this and there will be 2 1's.
I had this C# coding question posed to me in an interview and I cannot figure out any algorithm to solve it. I'm not looking for code (although it's welcome); any sort of help or pointers are appreciated.
This problem can be solved using dynamic programming. The main idea is to group the binary numbers according to the last x-1 bits and the length of each binary number. If appending a bit sequence to one number yields a number satisfying the constraint, then appending the same bit sequence to any number in the same group results in a number satisfying the constraint also.
For example, x = 4, y = 2: both 01011 and 10011 have the same last 3 bits (011). Appending a 0 to each of them, resulting in 010110 and 100110, both satisfy the constraint.
Here is pseudo code:
mask = (1 << (x-1)) - 1
count[0][0] = 1
for(i = 0; i < Len; ++i) {
    for(j = 0; j < 1<<i && j < 1<<(x-1); ++j) {
        if(i < x-1 || count1Bit(j*2+1) >= y)
            count[i+1][(j*2+1) & mask] += count[i][j];
        if(i < x-1 || count1Bit(j*2) >= y)
            count[i+1][(j*2) & mask] += count[i][j];
    }
}
answer = 0
for(j = 0; j < 1<<(x-1); ++j)
    answer += count[Len][j];
This algorithm assumes that Len >= x. The time complexity is O(Len*2^x).
EDIT
The count1Bit(j) function counts the number of 1 in the binary representation of j.
The only inputs to this algorithm are Len, x, and y. It starts from an empty binary string [length 0, group 0], and iteratively tries to append 0 and 1 until the length equals Len. It also does the grouping and counts the number of binary strings satisfying the 1-bits constraint in each group. The output of this algorithm is answer, which is the number of binary strings (numbers) satisfying the constraints.
For a binary string in group [length i, group j], appending 0 to it results in a binary string in group [length i+1, group (j*2)%(2^(x-1))]; appending 1 to it results in a binary string in group [length i+1, group (j*2+1)%(2^(x-1))].
Let count[i,j] be the number of binary strings in group [length i, group j] satisfying the 1-bits constraint. If there are at least y 1's in the binary representation of j*2, then appending 0 to each of these count[i,j] binary strings yields a binary string in group [length i+1, group (j*2)%(2^(x-1))] which also satisfies the 1-bits constraint. Therefore, we can add count[i,j] into count[i+1,(j*2)%(2^(x-1))]. The case of appending 1 is similar.
The condition i<x-1 in the above algorithm is to keep the binary strings growing when length is less than x-1.
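To make the above concrete, here is a hedged, self-contained Java version of the same grouping DP (class and variable names are mine); for Len = 6, x = 3, y = 2 it prints 13:

public class ConstrainedBinaryCount {
    // Count binary strings of length len in which every window of x adjacent
    // bits contains at least y ones. Assumes len >= x, as noted above.
    static long countValid(int len, int x, int y) {
        int groups = 1 << (x - 1);            // group = the last x-1 bits
        int mask = groups - 1;
        long[][] count = new long[len + 1][groups];
        count[0][0] = 1;                      // the empty string
        for (int i = 0; i < len; i++) {
            int states = (i < x - 1) ? (1 << i) : groups;
            for (int j = 0; j < states; j++) {
                if (count[i][j] == 0) continue;
                for (int bit = 0; bit <= 1; bit++) {
                    int window = j * 2 + bit; // the last x bits after appending
                    if (i < x - 1 || Integer.bitCount(window) >= y)
                        count[i + 1][window & mask] += count[i][j];
                }
            }
        }
        long answer = 0;
        for (int j = 0; j < groups; j++) answer += count[len][j];
        return answer;
    }

    public static void main(String[] args) {
        System.out.println(countValid(6, 3, 2)); // 13
    }
}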
Using the example of LEN = 6, X = 3 and Y = 2...
Build an exhaustive bit pattern generator for X bits. A simple binary counter can do this. For example, if X = 3
then a counter from 0 to 7 will generate all possible bit patterns of length 3.
The patterns are:
000
001
010
011
100
101
110
111
Verify the adjacency requirement as the patterns are built. Reject any patterns that do not qualify.
Basically this boils down to rejecting any pattern containing fewer than 2 '1' bits (Y = 2). The list prunes down to:
011
101
110
111
For each member of the pruned list, prepend a '1' bit and retest the first X bits. Keep the new pattern if it passes the adjacency test. Do the same with a '0' bit. For example, this step proceeds as:
1011 <== Keep
1101 <== Keep
1110 <== Keep
1111 <== Keep
0011 <== Reject
0101 <== Reject
0110 <== Keep
0111 <== Keep
Which leaves:
1011
1101
1110
1111
0110
0111
Now repeat this process until the pruned set is empty or the members become LEN bits long. In the end, the only patterns left are:
111011
111101
111110
111111
110110
110111
101101
101110
101111
011011
011101
011110
011111
Count them up and you are done.
Note that you only need to test the first X bits on each iteration, because the rest of each pattern was already verified in prior steps.
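For reference, a hedged Java sketch of this prepend-and-prune enumeration (class and method names are my own); for LEN = 6, X = 3, Y = 2 it reports the 13 patterns listed above:

import java.util.ArrayList;
import java.util.List;

public class PrunedEnumeration {
    // Enumerate all LEN-bit strings in which every X-bit window has at least Y ones,
    // by starting from valid X-bit seeds and prepending one bit at a time.
    public static void main(String[] args) {
        int len = 6, x = 3, y = 2;
        List<String> patterns = new ArrayList<>();
        for (int v = 0; v < (1 << x); v++) {                 // all X-bit seeds
            String s = String.format("%" + x + "s", Integer.toBinaryString(v)).replace(' ', '0');
            if (ones(s) >= y) patterns.add(s);
        }
        for (int length = x; length < len; length++) {       // grow by prepending one bit
            List<String> next = new ArrayList<>();
            for (String p : patterns)
                for (char bit : new char[]{'1', '0'}) {
                    String q = bit + p;
                    if (ones(q.substring(0, x)) >= y) next.add(q); // only the new window needs checking
                }
            patterns = next;
        }
        System.out.println(patterns.size()); // 13
    }

    static int ones(String s) {
        int c = 0;
        for (char ch : s.toCharArray()) if (ch == '1') c++;
        return c;
    }
}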
Considering that the input values are variable and I wanted to see the actual output, I used a recursive algorithm to determine all combinations of 0 and 1 for a given length:
private static void BinaryNumberWithOnes(int n, int dump, int ones, string s = "")
{
    if (n == 0)
    {
        if (BinaryWithoutDumpCountContainsnumberOfOnes(s, dump, ones))
            Console.WriteLine(s);
        return;
    }
    BinaryNumberWithOnes(n - 1, dump, ones, s + "0");
    BinaryNumberWithOnes(n - 1, dump, ones, s + "1");
}
and BinaryWithoutDumpCountContainsnumberOfOnes to determine if the binary number meets the criteria
private static bool BinaryWithoutDumpCountContainsnumberOfOnes(string binaryNumber, int dump, int ones)
{
    int current = 0;
    int count = binaryNumber.Length;
    while (current + dump < count)
    {
        var fail = binaryNumber.Remove(current, dump).Replace("0", "").Length < ones;
        if (fail)
        {
            return false;
        }
        current++;
    }
    return true;
}
Calling BinaryNumberWithOnes(6, 3, 2) will output all binary numbers that match
010011
011011
011111
100011
100101
100111
101011
101101
101111
110011
110101
110110
110111
111011
111101
111110
111111
Sounds like a nested for loop would do the trick. Pseudocode (not tested).
value = '0101010111110101010111' // change this line to the format you need
for (i = 0; i <= (Len-x); i++) {   // loop over value from left to right
    kount = 0
    for (j = i; j < (i+x); j++) {  // count '1' bits in the next 'x' bits
        kount += value[j]          // add 0 or 1
    }
    if kount < y then return fail  // this window has fewer than y '1' bits
}
return success                     // every window passed
The naive approach would be a tree-recursive algorithm.
Our recursive method would slowly build the number up: e.g. it would start at xxxxxx, and return the sum of a call with 1xxxxx and a call with 0xxxxx, which themselves return the sum of calls with 10xxxx, 11xxxx and 00xxxx, 01xxxx, and so on. However, if the x/y conditions are NOT satisfied for the prefix it would build, it does NOT go down that path; and if it reaches a terminal condition (a number of the correct length has been built), it returns 1. (Note that since we're building the string up from left to right, you don't have to check x/y for the entire string, just the window that includes the newly added digit!)
By returning a sum over all calls, all of the returned 1s pool together and are returned by the initial call, equalling the number of constructed strings.
No idea what the big O notation for time complexity is for this one, it could be as bad as O(2^n)*O(checking x/y conditions) but it will prune lots of branches off the tree in most cases.
UPDATE: One insight I had is that all branches of the recursive tree can be 'merged' if they have identical last x digits so far, because then the same checks would be applied to all digits hereafter so you may as well double them up and save a lot of work. This now requires building the tree explicitly instead of implicitly via recursive calls, and maybe some kind of hashing scheme to detect when branches have identical x endings, but for large length it would provide a huge speedup.
My approach is to start by getting all binary numbers with the minimum number of 1's, which is easy enough: you just get every unique permutation of a binary window of length x with y 1's, and cycle each unique permutation "Len" times. By flipping the 0 bits of these seeds in every combination possible, we are guaranteed to iterate over all of the binary numbers that fit the criteria.
from itertools import permutations, cycle, combinations

def uniq(x):
    d = {}
    for i in x:
        d[i] = 1
    return d.keys()

def findn(l, x, y):
    window = []
    for i in xrange(y):
        window.append(1)
    for i in xrange(x - y):
        window.append(0)
    perms = uniq(permutations(window))
    seeds = []
    for p in perms:
        pr = cycle(p)
        seeds.append([pr.next() for i in xrange(l)])  ### a seed is a binary number fitting the criteria with minimum 1 bits
    bin_numbers = []
    for seed in seeds:
        if seed in bin_numbers:
            continue
        indexes = [i for i, x in enumerate(seed) if x == 0]  ### get indexes of 0 "bits"
        exit = False
        for i in xrange(len(indexes) + 1):
            if exit:
                break
            for combo in combinations(indexes, i):  ### combinatorically flipping the zero bits in the seed
                new_num = seed[:]
                for index in combo:
                    new_num[index] += 1
                if new_num in bin_numbers:
                    ### if our new binary number has been seen before
                    ### we can break out since we are doing a depth first traversal
                    exit = True
                    break
                else:
                    bin_numbers.append(new_num)
    print len(bin_numbers)

findn(6, 3, 2)
Growth of this approach is definitely exponential, but I thought I'd share my approach in case it helps someone else get to a lower complexity solution...
Set up a condition and introduce a simple helper variable.
L = 6, x = 3, y = 2; introduce d = x - y = 1.
Condition: if the list made of the hypothetical next value and the previous x - 1 element values has a number of 0-digits > d, the next concrete value must be 1; otherwise add two branches, with both 1 and 0 as the concrete value.
Start: check(Condition) => both 0 and 1 are allowed, since the zero count in the check does not exceed d.
Empty => add 0 and 1
Step 1:Check(Condition)
0 (if the next value were 0, the number of zeros among it and the previous x - 1 values would exceed d (= 1)) -> add 1 to the sequence
1 -> add both 0,1 in two different branches
Step 2: check(Condition)
01 -> add 1
10 -> add 1
11 -> add 0,1 in two different branches
Step 3:
011 -> add 0,1 in two branches
101 -> add 1 (if the next value were 0, it and the previous x - 1 values would form 010, so we prune and set only 1)
110 -> add 1
111 -> add 0,1
Step 4:
0110 -> obviously 1
0111 -> both 0,1
1011 -> both 0,1
1101 -> 1
1110 -> 1
1111 -> 0,1
Step 5:
01101 -> 1
01110 -> 1
01111 -> 0,1
10110 -> 1
10111 -> 0,1
11011 -> 0,1
11101 -> 1
11110 -> 1
11111 -> 0,1
Step 6 (Finish):
011011
011101
011110
011111
101101
101110
101111
110110
110111
111011
111101
111110
111111
Now count. I've tested this for L = 6, x = 4 and y = 2 too, but consider checking the algorithm for special cases and extended cases.
Note: I'm pretty sure some algorithm based on disposition theory would be a really massive improvement over my algorithm.
So in a series of Len binary digits, you are looking for an x-long segment that contains y 1's.
See the execution: http://ideone.com/xuaWaK
Here's my Algorithm in Java:
import java.util.*;
import java.lang.*;
class Main
{
    public static ArrayList<String> solve (String input, int x, int y)
    {
        int s = 0;
        ArrayList<String> matches = new ArrayList<String>();
        String segment = null;
        for (int i = 0; i < (input.length() - x); i++)
        {
            s = 0;
            segment = input.substring(i, (i + x));
            System.out.print(" i: " + i + " ");
            for (char c : segment.toCharArray())
            {
                System.out.print("*");
                if (c == '1')
                {
                    s = s + 1;
                }
            }
            if (s == y)
            {
                matches.add(segment);
            }
            System.out.println();
        }
        return matches;
    }
    public static void main (String [] args)
    {
        String input = "011010101001101110110110101010111011010101000110010";
        int x = 6;
        int y = 4;
        ArrayList<String> matches = null;
        matches = solve (input, x, y);
        for (String match : matches)
        {
            System.out.println(" > " + match);
        }
        System.out.println(" Number of matches is " + matches.size());
    }
}
The number of patterns of length X that contain at least Y 1 bits is countable. For the case x == y we know there is exactly one pattern of the 2^x possible patterns that meets the criteria. For smaller y we need to sum up the number of patterns which have excess 1 bits and the number of patterns that have exactly y bits.
choose(n, k) = n! / (k! (n - k)!)

numPatterns(x, y) {
    total = 0
    for (int j = x; j >= y; j--)
        total += choose(x, j)
    return total
}
For example :
X = 4, Y = 4 : 1 pattern
X = 4, Y = 3 : 1 + 4 = 5 patterns
X = 4, Y = 2 : 1 + 4 + 6 = 11 patterns
X = 4, Y = 1 : 1 + 4 + 6 + 4 = 15 patterns
X = 4, Y = 0 : 1 + 4 + 6 + 4 + 1 = 16
(all possible patterns have at least 0 1 bits)
So let M be the number of X length patterns that meet the Y criteria. Now, that X length pattern is a subset of N bits. There are (N - x + 1) "window" positions for the sub pattern, and 2^N total patterns possible. If we start with any of our M patterns, we know that appending a 1 to the right and shifting to the next window will result in one of our known M patterns. The question is, how many of the M patterns can we add a 0 to, shift right, and still have a valid pattern in M?
Since we are adding a zero, we have to be either shifting away from a zero, or we have to already be in an M where we have an excess of 1 bits. To flip that around, we can ask how many of the M patterns have exactly Y bits and start with a 1. Which is the same as "how many patterns of length X-1 have Y-1 bits", which we know how to answer:
shiftablePatternCount = M - choose(X-1, Y-1)
So starting with M possibilities, we are going to increase by shiftablePatternCount when we slide to the right. All patterns in the new window are in the set of M, with some patterns now duplicated. We are going to shift a number of times to fill up N by (N - X), each time increasing the count by shiftablePatternCount, so the full answer should be :
totalCountOfMatchingPatterns = M + (N - X)*shiftablePatternCount
Edit - I realized a mistake: I need to count the duplicates of the shiftable patterns that are generated. I think that's doable. (Still a draft.)
I am not sure about my answer, but here is my view. Just take a look at it:
Len = 4,
x = 3,
y = 2.
I just took out two patterns, because a pattern must contain at least y 1's:
X 1 1 X
1 X 1 X
X - represents "don't care"
Now the count for the 1st expression is 2 * 1 * 1 * 2 = 4,
and for the 2nd expression 1 * 2 * 1 * 2 = 4,
but 2 patterns are common to both, so minus 2: there will be a total of 6 patterns which satisfy the condition.
I happen to be using an algorithm similar to your problem; while trying to find a way to improve it, I found your question, so I will share it.
static int GetCount(int length, int oneBits){
    int result = 0;
    double count = Math.Pow(2, length);
    for (int i = 1; i <= count - 1; i++)
    {
        string str = Convert.ToString(i, 2).PadLeft(length, '0');
        if (str.ToCharArray().Count(c => c == '1') == oneBits)
        {
            result++;
        }
    }
    return result;
}
Not very efficient I think, but an elegant solution.

Find XOR of all numbers in a given range

You are given a large range [a,b] where 'a' and 'b' can be typically between 1 and 4,000,000,000 inclusive. You have to find out the XOR of all the numbers in the given range.
This problem was used in TopCoder SRM. I saw one of the solutions submitted in the match and I'm not able to figure out how its working.
Could someone help explain the winning solution:
long long f(long long a) {
    long long res[] = {a, 1, a+1, 0};
    return res[a % 4];
}

long long getXor(long long a, long long b) {
    return f(b) ^ f(a-1);
}
Here, getXor() is the actual function to calculate the XOR of all numbers in the passed range [a,b], and f() is a helper function.
This is a pretty clever solution -- it exploits the fact that there is a pattern in the running XORs. The f() function calculates the cumulative XOR of all numbers from 0 to a. Take a look at this table for 4-bit numbers:
0000 <- 0 [a]
0001 <- 1 [1]
0010 <- 3 [a+1]
0011 <- 0 [0]
0100 <- 4 [a]
0101 <- 1 [1]
0110 <- 7 [a+1]
0111 <- 0 [0]
1000 <- 8 [a]
1001 <- 1 [1]
1010 <- 11 [a+1]
1011 <- 0 [0]
1100 <- 12 [a]
1101 <- 1 [1]
1110 <- 15 [a+1]
1111 <- 0 [0]
The first column is the binary representation of the number a, followed by the decimal result of the running XOR and its relation to a. This happens because all the upper bits cancel and the lowest two bits cycle every 4. So, that's how to arrive at that little lookup table.
Now, consider for a general range of [a,b]. We can use f() to find the XOR for [0,a-1] and [0,b]. Since any value XOR'd with itself is zero, the f(a-1) just cancels out all the values in the XOR run less than a, leaving you with the XOR of the range [a,b].
Adding to FatalError's great answer, the line return f(b)^f(a-1); could be explained better. In short, it's because XOR has these wonderful properties:
It's associative - Place brackets wherever you want
It's commutative - that means you can move the operands around (they can "commute")
Here's both in action:
(a ^ b ^ c) ^ (d ^ e ^ f) = (f ^ e) ^ (d ^ a ^ b) ^ c
It reverses itself
Like this:
a ^ b = c
c ^ a = b
Add and multiply are two examples of other associative/commutative operators, but they don't reverse themselves.
First, let's define what we want and call it n:
n = (a ^ a+1 ^ a+2 .. ^ b)
If it helps, think of XOR (^) as if it was an add.
Let's also define the function:
f(b) = 0 ^ 1 ^ 2 ^ 3 ^ 4 .. ^ b
b is greater than a, so just by safely dropping in a few extra brackets (which we can because it's associative), we can also say this:
f(b) = ( 0 ^ 1 ^ 2 ^ 3 ^ 4 .. ^ (a-1) ) ^ (a ^ a+1 ^ a+2 .. ^ b)
Which simplifies to:
f(b) = f(a-1) ^ (a ^ a+1 ^ a+2 .. ^ b)
f(b) = f(a-1) ^ n
Next, we use that reversal property and commutativity to give us the magic line:
n = f(b) ^ f(a-1)
If you've been thinking of XOR like an add, you would've dropped in a subtract there. XOR is to XOR what add is to subtract!
How do I come up with this myself?
Remember the properties of logical operators. Work with them almost like an add or multiply if it helps. It feels unusual that and (&), xor (^) and or (|) are associative, but they are!
Run the naive implementation through first, look for patterns in the output, then start finding rules which confirm the pattern is true. Simplify your implementation even further and repeat. This is probably the route that the original creator took, highlighted by the fact that it's not completely optimal (i.e. use a switch statement rather than an array).
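To tie this back to the code in the question, here is a hedged Java sketch of the same trick using a switch, as suggested above (my own naming, not the original TopCoder submission):

public class RangeXor {
    // f(a) returns 0 ^ 1 ^ ... ^ a, using the 4-cycle described in the answers above.
    static long f(long a) {
        switch ((int) (a % 4)) {
            case 0:  return a;
            case 1:  return 1;
            case 2:  return a + 1;
            default: return 0;
        }
    }

    // XOR of every integer in [a, b], assuming 1 <= a <= b.
    static long rangeXor(long a, long b) {
        return f(b) ^ f(a - 1);
    }

    public static void main(String[] args) {
        System.out.println(rangeXor(3, 6));              // 3 ^ 4 ^ 5 ^ 6 = 4
        System.out.println(rangeXor(1, 4_000_000_000L)); // handles the full range without overflow
    }
}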
I found that the code below also works like the solution given in the question.
Maybe it is a little more optimized, but it is just what I got from observing the repetition shown in the accepted answer.
I would like to know/understand the mathematical proof behind the given code, like the one explained in the answer by Luke Briggs.
Here is that Java code:
public int findXORofRange(int m, int n) {
    int[] patternTracker;
    if (m % 2 == 0)
        patternTracker = new int[] {n, 1, n^1, 0};
    else
        patternTracker = new int[] {m, m^n, m-1, (m-1)^n};
    return patternTracker[(n-m) % 4];
}
I have solved the problem with recursion. I simply divide the range into two almost equal parts at every step.
public int recursion(int M, int N) {
    if (N - M == 1) {
        return M ^ N;
    } else {
        int pivot = this.calculatePivot(M, N);
        if (pivot + 1 == N) {
            return this.recursion(M, pivot) ^ N;
        } else {
            return this.recursion(M, pivot) ^ this.recursion(pivot + 1, N);
        }
    }
}

public int calculatePivot(int M, int N) {
    return (M + N) / 2;
}
Let me know your thoughts on the solution; I'm happy to get feedback for improvement. The proposed solution calculates the XOR recursively in O(N - M) time.
Thank you
To support XOR from 0 to N, the given code needs to be modified as below:
int f(int a) {
    int[] res = {a, 1, a+1, 0};
    return res[a % 4];
}

int getXor(int a, int b) {
    return f(b) ^ f(a);
}
Adding on even further to FatalError's answer, it's possible to prove by induction that the observed pattern in f() cycles every 4 numbers.
We're trying to prove that for every integer k >= 0,
f(4k + 1) = 1
f(4k + 2) = 4k + 3
f(4k + 3) = 0
f(4k + 4) = 4k + 4
where f(n) is 1 ^ 2 ^ ... ^ n.
As our base case, we can work out by hand that
f(1) = 1
f(2) = 1 ^ 2 = 3
f(3) = 3 ^ 3 = 0
f(4) = 0 ^ 4 = 4
For our inductive step, assume that these equations are true up to a particular integer 4x (i.e. f(4x) = 4x). We want to show that our equations are true for 4x + 1, 4x + 2, 4x + 3 and 4x + 4.
To help write and visualize the proof, we can let b(x) denote the binary (base-2) string representation of x, for example
b(7) = '111', b(9) = '1001'.
and
b(4x) = 'b(x)00'
b(4x + 1) = 'b(x)01'
b(4x + 2) = 'b(x)10'
b(4x + 3) = 'b(x)11'
Here is the inductive step:
Assume: f(4x) = 4x = 'b(x)00'
Then:
f(4x + 1) = f(4x) ^ (4x + 1) // by definition
= f(4x) ^ 'b(x)01' // by definition
= 'b(x)00' ^ 'b(x)01' // from assumption
= '01' // as b(x) ^ b(x) = 0
f(4x + 2) = f(4x + 1) ^ (4x + 2)
= f(4x + 1) ^ 'b(x)10'
= '01' ^ 'b(x)10'
= 'b(x)11' // this is 4x + 3
f(4x + 3) = f(4x + 2) ^ (4x + 3)
= f(4x + 2) ^ 'b(x)11'
= 'b(x)11' ^ 'b(x)11'
= '00'
For the last case, we don't use binary strings,
since we don't know what b(4x + 4) is.
f(4x + 4) = f(4x + 3) ^ (4x + 4)
= 0 ^ (4x + 4)
= 4x + 4
So the pattern holds for the next four numbers after 4x, completing the proof.
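If you prefer an empirical sanity check to go with the proof, a tiny Java loop (my own, purely illustrative) confirms the 4-cycle for small n:

public class XorPrefixCheck {
    // Verify that the running XOR 1 ^ 2 ^ ... ^ n matches the claimed pattern
    // (n, 1, n + 1, 0 for n % 4 == 0, 1, 2, 3) for n up to one million.
    public static void main(String[] args) {
        long running = 0;
        for (long n = 1; n <= 1_000_000; n++) {
            running ^= n;
            long expected;
            switch ((int) (n % 4)) {
                case 0:  expected = n;     break;
                case 1:  expected = 1;     break;
                case 2:  expected = n + 1; break;
                default: expected = 0;     break;
            }
            if (running != expected) throw new AssertionError("mismatch at n = " + n);
        }
        System.out.println("Pattern holds for all n up to 1,000,000");
    }
}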
