The Challenge
The shortest program by character count that outputs the n-bit Gray code. n will be an arbitrary number smaller than 100000 (raised from 1000 due to user suggestions) that is taken from standard input. The Gray code will be printed to standard output, like in the example.
Note: I don't expect the program to print the gray code in a reasonable time (n=100000 is overkill); I do expect it to start printing though.
Example
Input:
4
Expected Output:
0000
0001
0011
0010
0110
0111
0101
0100
1100
1101
1111
1110
1010
1011
1001
1000
Python - 53 chars
n=1<<input()
for x in range(n):print bin(n+x^x/2)[3:]
This 54 char version overcomes the limitation of range in Python2 so n=100000 works!
x,n=0,1<<input()
while n>x:print bin(n+x^x/2)[3:];x+=1
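For anyone puzzling over what the golfed line does: the k-th code word is k ^ (k >> 1), and thanks to operator precedence bin(n+x^x/2) is bin((n+x) ^ (x/2)), which works out to 2**n plus that Gray code, so slicing off the leading '0b1' leaves a zero-padded n-digit string. Here is a rough, ungolfed sketch of the same idea (not a competition entry):

def print_gray(n):
    size = 1 << n                      # 2**n codes in total
    for x in range(size):
        gray = x ^ (x >> 1)            # standard binary-reflected Gray code
        # bin(size + gray) is '0b1' followed by exactly n binary digits,
        # so slicing off the first three characters gives a zero-padded code.
        print(bin(size + gray)[3:])

print_gray(4)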
69 chars
G=lambda n:n and[x+y for x in'01'for y in G(n-1)[::1-2*int(x)]]or['']
75 chars
G=lambda n:n and['0'+x for x in G(n-1)]+['1'+x for x in G(n-1)[::-1]]or['']
APL (29 chars)
With the function F as (⌽ is the 'rotate' char)
z←x F y
z←(0,¨y),1,¨⌽y
This produces the Gray Code with 5 digits (⍴ is now the 'rho' char)
F/5⍴⊂0,1
The number '5' can be changed or be a variable.
(Sorry about the non-printable APL chars. SO won't let me post images as a new user)
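For readers who don't read APL: F prefixes 0 to every code in y and 1 to every code in the reverse of y, and the reduction applies that step repeatedly starting from the 1-bit code 0 1. A rough Python rendering of that fold (a sketch using strings instead of APL vectors, not part of the entry):

from functools import reduce

def step(y, _):
    # the ignored fold element plays the role of F's unused left argument
    return ['0' + c for c in y] + ['1' + c for c in reversed(y)]

# mirrors F/5⍴⊂0,1 : four applications of the step on the 1-bit code
print('\n'.join(reduce(step, range(4), ['0', '1'])))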
Impossible! language (54 chars, down from 58)
#l{'0,'1}1[;#l<][%;~['1%+].>.%['0%+].>.+//%1+]<>%[^].>
Test run:
./impossible gray.i! 5
Impossible v0.1.28
00000
00001
00011
00010
00110
00111
00101
00100
01100
01101
01111
01110
01010
01011
01001
01000
11000
11001
11011
11010
11110
11111
11101
11100
10100
10101
10111
10110
10010
10011
10001
10000
(actually I don't know if personal languages are allowed, since Impossible! is still under development, but I wanted to post it anyway..)
Golfscript - 27 chars
Reads from stdin, writes to stdout
~2\?:),{.2/^)+2base''*1>n}%
Sample run
$ echo 4 | ruby golfscript.rb gray.gs
0000
0001
0011
0010
0110
0111
0101
0100
1100
1101
1111
1110
1010
1011
1001
1000
Ruby - 49 chars
(1<<n=gets.to_i).times{|x|puts"%.#{n}b"%(x^x/2)}
This works for n=100000 with no problem
C++, 168 characters, not including whitespaces:
#include <iostream>
#include <string>

int r;

void x(std::string p, char f=48)
{
    if(!r--)std::cout<<p<<'\n';else
    {x(p+f);x(p+char(f^1),49);}
    r++;
}

int main() {
    std::cin>>r;
    x("");
    return 0;
}
Haskell, 82 characters:
f a=map('0':)a++map('1':)(reverse a)
main=interact$unlines.(iterate f[""]!!).read
Point-free style for teh win! (or at least 4 fewer strokes). Kudos to FUZxxl.
previous: 86 characters:
f a=map('0':)a++map('1':)(reverse a)
main=interact$ \s->unlines$iterate f[""]!!read s
Cut two strokes with interact, one with unlines.
older: 89 characters:
f a=map('0':)a++map('1':)(reverse a)
main=readLn>>= \s->putStr$concat$iterate f["\n"]!!s
Note that the laziness gets you your immediate output for free.
Mathematica 50 Chars
Nest[Join["0"<>#&/##,"1"<>#&/#Reverse##]&,{""},#]&
Thanks to A. Rex for suggestions!
Previous attempts
Here is my attempt in Mathematica (140 characters). I know that it isn't the shortest, but I think it is the easiest to follow if you are familiar with functional programming (though that could be my language bias showing). The addbit function takes an n-bit Gray code and returns an n+1 bit Gray code using the logic from the Wikipedia page. The MakeGray function applies addbit in a nested manner to a 1-bit Gray code, {{0}, {1}}, until an n-bit version is created. The FromCharacterCode step prints just the numbers, without the braces and commas that are in the output of addbit.
addbit[set_] :=
Join[Map[Prepend[#, 0] &, set], Map[Prepend[#, 1] &, Reverse[set]]]
MakeGray[n_] :=
Map[FromCharacterCode, Nest[addbit, {{0}, {1}}, n - 1] + 48]
Straightforward Python implementation of what's described in Constructing an n-bit Gray code on Wikipedia:
import sys

def _gray(n):
    if n == 1:
        return [0, 1]
    else:
        p = _gray(n-1)
        pr = [x + (1<<(n-1)) for x in p[::-1]]
        return p + pr

n = int(sys.argv[1])
for i in [("0"*n + bin(a)[2:])[-n:] for a in _gray(n)]:
    print i
(233 characters)
Test:
$ python gray.py 4
0000
0001
0011
0010
0110
0111
0101
0100
1100
1101
1111
1110
1010
1011
1001
1000
C, 203 Characters
Here's a sacrificial offering, in C:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char s[256];
    int b, i, j, m, g;
    gets(s);
    b = atoi(s);
    for (i = 0; i < 1 << b; ++i)
    {
        g = i ^ (i / 2);
        m = 1 << (b - 1);
        for (j = 0; j < b; ++j)
        {
            s[j] = (g & m) ? '1' : '0';
            m >>= 1;
        }
        s[j] = '\0';
        puts(s);
    }
    return 0;
}
C#, 143 characters (down from 149)
C# isn't the best language for code golf, but I thought I'd go at it anyway.
static void Main(){var s=1L<<int.Parse(Console.ReadLine());for(long i=0;i<s;i++){Console.WriteLine(Convert.ToString(s+i^i/2,2).Substring(1));}}
Readable version:
static void Main()
{
    var s = 1L << int.Parse(Console.ReadLine());
    for (long i = 0; i < s; i++)
    {
        Console.WriteLine(Convert.ToString(s + i ^ i / 2, 2).Substring(1));
    }
}
And here is my Fantom sacrificial offering
public static Str[]grayCode(Int i){if(i==1)return["0","1"];else{p:=grayCode(i-1);p.addAll(p.dup.reverse);p.each|s,c|{if(c<(p.size/2))p[c]="0"+s;else p[c]="1"+s;};return p}}
(177 char)
Or the expanded version:
public static Str[] grayCode(Int i)
{
  if (i==1) return ["0","1"]
  else{
    p := grayCode(i-1);
    p.addAll(p.dup.reverse);
    p.each |s,c|
    {
      if(c<(p.size/2))
      {
        p[c] = "0" + s
      }
      else
      {
        p[c] = "1" + s
      }
    }
    return p
  }
}
F#, 152 characters
let m=List.map;;let rec g l=function|1->l|x->g((m((+)"0")l)@(l|>List.rev|>m((+)"1")))(x - 1);;stdin.ReadLine()|>int|>g["0";"1"]|>List.iter(printfn "%s")
F#, 175 characters (down from 180), still too many
This morning I did another version, simplifying the recursive version, but alas due to recursion it wouldn't do the 100000.
Recursive solution:
let rec g m n l =
    if (m = n) then l
    else List.map ((+)"0") l @ List.map ((+)"1") (List.rev(l)) |> g (m+1) n

List.iter (fun x -> printfn "%s" x) (g 1 (int(stdin.ReadLine())) ["0";"1"]);;
After that was done I created a working version for the "100000" requirement. It's too long to compete with the other solutions shown here, and I probably re-invented the wheel several times over, but unlike many of the solutions I have seen here it will work with a very, very large number of bits. It was a good learning experience for an F# noob; I didn't bother to shorten it, since it's way too long anyway ;-)
Iterative solution: (working with 100000+)
let bits = stdin.ReadLine() |> int
let n = 1I <<< bits

let bitcount (n : bigint) =
    let mutable m = n
    let mutable c = 1
    while m > 1I do
        m <- m >>> 1
        c <- c + 1
    c

let rec traverseBits m (number: bigint) =
    let highbit = bigint(1 <<< m)
    if m > bitcount number
    then number
    else
        let lowbit = 1 <<< m-1
        if (highbit &&& number) > 0I
        then
            let newnum = number ^^^ bigint(lowbit)
            traverseBits (m+1) newnum
        else traverseBits (m+1) number

let res = seq {
    for i in 0I..n do
        yield traverseBits 1 i
}

let binary n m = seq {
    for i = m-1 downto 0 do
        let bit = bigint(1 <<< i)
        if bit &&& n > 0I
        then yield "1"
        else yield "0"
}

Seq.iter (fun x -> printfn "%s" (Seq.reduce (+) (binary x bits))) res
Lua, 156 chars
This is my throw at it in Lua, as close as I can get it.
LuaJIT (or lua with lua-bitop): 156 bytes
a=io.read()n,w,b=2^a,io.write,bit;for x=0,n-1 do t=b.bxor(n+x,b.rshift(x,1))for k=a-1,0,-1 do w(t%2^k==t%n and 0 or 1)t=t%2^k==t and t or t%2^k end w'\n'end
Lua 5.2: 154 bytes
a=io.read()n,w,b=2^a,io.write,bit32;for x=0,n-1 do t=b.XOR(n+x,b.SHR(x,1))for k=a-1,0,-1 do w(t%2^k==t%n and 0 or 1)t=t%2^k==t and t or t%2^k end w'\n'end
In cut-free Prolog (138 bytes if you remove the space after '<<'; submission editor truncates the last line without it):
b(N,D):-D=0->nl;Q is D-1,H is N>>Q/\1,write(H),b(N,Q).
c(N,D):-N=0;P is N xor(N//2),b(P,D),M is N-1,c(M,D).
:-read(N),X is 1<< N-1,c(X,N).
Ruby, 50 Chars
(2**n=gets.to_i).times{|i|puts"%0#{n}d"%i.to_s(2)}
Related
Given n, I have a binary pattern to be generated like this in a part of my application:
n = 0
0 -> 0
n = 1
0 -> 0
1 -> 1
n = 2
0 -> 00
1 -> 01
2 -> 10
3 -> 11
n = 3
0 -> 000
1 -> 001
2 -> 010
3 -> 100
4 -> 011
5 -> 101
6 -> 110
7 -> 111
n = 4
0 -> 0000
1 -> 0001
2 -> 0010
3 -> 0100
4 -> 1000
5 -> 0011
6 -> 0101
7 -> 1001
8 -> 0110
9 -> 1010
10 -> 1100
11 -> 0111
12 -> 1011
13 -> 1101
14 -> 1110
15 -> 1111
n = 5
0 -> 00000
1 -> 00001
2 -> 00010
3 -> 00100
4 -> 01000
5 -> 10000
6 -> 00011
7 -> 00101
8 -> 01001
9 -> 10001
10 -> 00110
11 -> 01010
12 -> 10010
13 -> 01100
14 -> 10100
15 -> 11000
16 -> 00111
17 -> 01011
18 -> 10011
19 -> 01101
20 -> 10101
21 -> 11001
22 -> 01110
23 -> 10110
24 -> 11010
25 -> 11100
26 -> 01111
27 -> 10111
28 -> 11011
29 -> 11101
30 -> 11110
31 -> 11111
I'll try to explain this algorithm the best way I can:
The algorithm has loops. In each loop, an extra bit is flipped. Then combinations are to be made out of it.
So in the first loop, no bits are 1s.
In the second loop, only one bit is 1. We need to first go through all possible combinations, in such an order that the leftmost bits are lit only after all combinations for the rightmost bits are over.
Similarly keep proceeding to further loops.
I'm not sure how to write efficient code for this. One thing I could think of is something like a DP solution. But could there be a more elegant, more mathematical solution, where I could put in 'n' and get the binary pattern equivalent?
You could use a recursive approach. In the main routine, increase the number of one-bits you want to produce (from 1 to n), and then call a recursive function that will do that job as follows:
It chooses a bit to set to 1, and then calls the function recursively to use the remaining bits to the right of it, to place one fewer one-bit.
Here is an implementation in JavaScript, with a demo run for n=4:
function * generateOnes(numDigits, numOnes) {
    if (numDigits === 0 || numOnes === 0) {
        yield 0;
    } else {
        for (let pos = numOnes - 1; pos < numDigits; pos++) {
            for (let result of generateOnes(pos, numOnes - 1)) {
                yield (1 << pos) | result;
            }
        }
    }
}

function * generate(numDigits) {
    for (let numOnes = 1; numOnes <= numDigits; numOnes++) {
        yield * generateOnes(numDigits, numOnes);
    }
}

// Demo with n=4:
for (let result of generate(4)) {
    console.log(result.toString(2).padStart(4, "0"));
}
Here is the equivalent in Python:
def generate_ones(num_digits, num_ones):
    if num_digits == 0 or num_ones == 0:
        yield 0
    else:
        for pos in range(num_ones - 1, num_digits):
            for result in generate_ones(pos, num_ones - 1):
                yield (1 << pos) | result

def generate(num_digits):
    for num_ones in range(1, num_digits + 1):
        yield from generate_ones(num_digits, num_ones)

# Demo with n=4:
for result in generate(4):
    print('{0:04b}'.format(result))
n=int(input())
a=[]
for i in range(2**n):
    Str = bin(i).replace('0b','')
    a.append(Str)
for i in range(len(a)):
    a[i] = '0'*(n-len(a[i])) + a[i]
for i in range(len(a)):
    print(a[i])
If you have any doubts about the code, comment below.
Supposing “We need to first go through all possible combinations, in such an order that the leftmost bits are lit only after all combinations for the rightmost bits are over” is correct and the example shown for n=4:
7 -> 1001
8 -> 0110
is wrong, then here is C code to iterate through the values as desired:
#include <stdio.h>

// Print the n-bit binary numeral for x.
static void PrintBinary(int n, unsigned x)
{
    putchar('\t');
    // Iterate through bit positions from high to low.
    for (int p = n-1; 0 <= p; --p)
        putchar('0' + ((x >> p) & 1));
    putchar('\n');
}

/* This is from Hacker’s Delight by Henry S. Warren, Jr., 2003,
   Addison-Wesley, Chapter 2 (“Basics”), Section 2-1 “Manipulating Rightmost
   Bits”, page 14.
*/
static unsigned snoob(unsigned x)
{
    /* Consider some bits in x dddd011...1100...00, where d is “do not care”
       and there are t bits in that trailing group of 1s. Then, in the code
       below:
       smallest is set to the trailing 1 bit.
       ripple adds to that bit, carrying to the next 0, producing
       dddd100...0000...00. Note that t 1 bits changed to 0s and one 0
       changed to 1, so ripple has t-1 fewer 1 bits than x does.
       ones is set to all bits that changed, dddd111...1100...0. It has
       t+1 bits set -- for the t 1s that changed to 0s and the 0 that
       changed to 1.
       ones/smallest aligns those bits to the right, leaving the lowest
       t+1 bits set. Shifting right two bits leaves t-1 bits set.
       Then ripple | ones restores t-1 1 bits in the low positions,
       resulting in t bits set.
    */
    unsigned smallest = x & -x;       // Find trailing 1 bit.
    unsigned ripple   = x + smallest; // Change it, carrying to next 0.
    unsigned ones     = x ^ ripple;   // Find all bits that changed.
    ones = ones/smallest >> 2;
    return ripple | ones;
}

/* Give a number of bits n, iterate through all values of n bits in order
   first by the number of bits set then by the binary value.
*/
static void Iterate(int n)
{
    printf("Patterns for n = %d:\n", n);
    // Iterate s through the numbers of bits set.
    for (int s = 0; s <= n; ++s)
    {
        /* Set s low bits. Note: If n can equal (or exceed) the number of
           bits in unsigned, "1u << s" is not defined by the C standard, and
           some alternative must be used.
        */
        unsigned i = (1u << s) - 1;
        // Find the highest value.
        unsigned h = i << n-s;
        PrintBinary(n, i);
        while (i < h)
        {
            i = snoob(i);
            PrintBinary(n, i);
        }
    }
}

int main(void)
{
    for (int n = 1; n <= 4; ++n)
        Iterate(n);
}
What is the meaning of (number) & (-number)? I have searched for it but was unable to find an explanation.
I want to use i & (-i) in a for loop, like:
for (i = 0; i <= n; i += i & (-i))
Assuming 2's complement (or that i is unsigned), -i is equal to ~i+1.
i & (~i + 1) is a trick to extract the lowest set bit of i.
It works because what +1 actually does is to set the lowest clear bit, and clear all bits lower than that. So the only bit that is set in both i and ~i+1 is the lowest set bit from i (that is, the lowest clear bit in ~i). The bits lower than that are clear in ~i+1, and the bits higher than that are non-equal between i and ~i.
Using it in a loop seems odd unless the loop body modifies i, because i = i & (-i) is an idempotent operation: doing it twice gives the same result again.
[Edit: in a comment elsewhere you point out that the code is actually i += i & (-i). So what that does for non-zero i is to clear the lowest group of set bits of i, and set the next clear bit above that, for example 101100 -> 110000. For i with no clear bit higher than the lowest set bit (including i = 0), it sets i to 0. So if it weren't for the fact that i starts at 0, each loop would increase i by at least twice as much as the previous loop, sometimes more, until eventually it exceeds n and breaks or goes to 0 and loops forever.
It would normally be inexcusable to write code like this without a comment, but depending on the domain of the problem maybe this is an "obvious" sequence of values to loop over.]
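A small Python demo of both points, just to make the prose above concrete (an illustrative sketch, not part of the original code):

# i & -i (equivalently i & (~i + 1)) extracts the lowest set bit:
for i in [0b101100, 0b1, 0b1000]:
    print(bin(i), '->', bin(i & -i))   # 0b101100 -> 0b100, 0b1 -> 0b1, 0b1000 -> 0b1000

# i += i & (-i) clears the lowest group of set bits and sets the next clear bit above it:
i = 0b101100
for _ in range(3):
    i += i & -i
    print(bin(i))                      # 0b110000, 0b1000000, 0b10000000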
I thought I'd just take a moment to show how this works. This code gives you the lowest set bit's value:
int i = 0xFFFFFFFF; //Last byte is 1111(base 2), -1(base 10)
int j = -i; //-(-1) == 1
int k = i&j; // 1111(2) = -1(10)
// & 0001(2) = 1(10)
// ------------------
// 0001(2) = 1(10). So the lowest set bit here is the 1's bit
int i = 0x80; //Last 2 bytes are 1000 0000(base 2), 128(base 10)
int j = -i; //-(128) == -128
int k = i&j; // ...0000 0000 1000 0000(2) = 128(10)
// & ...1111 1111 1000 0000(2) = -128(10)
// ---------------------------
// 1000 0000(2) = 128(10). So the lowest set bit here is the 128's bit
int i = 0xFFFFFFC0; //Last 2 bytes are 1100 0000(base 2), -64(base 10)
int j = -i; //-(-64) == 64
int k = i&j; // 1100 0000(2) = -64(10)
// & 0100 0000(2) = 64(10)
// ------------------
// 0100 0000(2) = 64(10). So the lowest set bit here is the 64's bit
It works the same for unsigned values, the result is always the lowest set bit's value.
Given your loop:
for(i=0;i<=n;i=i&(-i))
There are no bits set (i=0) so you're going to get back a 0 for the increment step of this operation. So this loop will go on forever unless n=0 or i is modified.
Assuming that negative values are using two's complement. Then -number can be calculated as (~number)+1, flip the bits and add 1.
For example if number = 92. Then this is what it would look like in binary:
number                  0000 0000 0000 0000 0000 0000 0101 1100
~number                 1111 1111 1111 1111 1111 1111 1010 0011
(~number) + 1           1111 1111 1111 1111 1111 1111 1010 0100
-number                 1111 1111 1111 1111 1111 1111 1010 0100
(number) & (-number)    0000 0000 0000 0000 0000 0000 0000 0100
You can see from the example above that (number) & (-number) gives you the least bit.
You can see the code run online on IDE One: http://ideone.com/WzpxSD
Here is some C++ code:

#include <iostream>
#include <bitset>
#include <stdio.h>
using namespace std;

void printIntBits(int num);
void printExpression(char *text, int value);

int main() {
    int number = 92;
    printExpression("number", number);
    printExpression("~number", ~number);
    printExpression("(~number) + 1", (~number) + 1);
    printExpression("-number", -number);
    printExpression("(number) & (-number)", (number) & (-number));
    return 0;
}

void printExpression(char *text, int value) {
    printf("%-20s", text);
    printIntBits(value);
    printf("\n");
}

void printIntBits(int num) {
    for(int i = 0; i < 8; i++) {
        int mask = (0xF0000000 >> (i * 4));
        int portion = (num & mask) >> ((7 - i) * 4);
        cout << " " << std::bitset<4>(portion);
    }
}
Also here is a version in C# .NET: https://dotnetfiddle.net/ai7Eq6
The operation i & -i is used for isolating the least significant non-zero bit of the corresponding integer.
In binary notation num can be represented as a1b, where a represents the binary digits above the lowest set bit and b represents the zeroes below it.
-num is equal to ~num + 1. b consists of all zeroes, so ~b consists of all ones:
-num = ~(a1b) + 1 => ~a 0 ~b + 1 => ~a 0 (1…1) + 1 => ~a 1 (0…0) => ~a 1 b
Now, num & -num => a1b & ~a 1 b => (0…0) 1 (0…0)
For example, if i = 5 and each iteration does i += i & (-i):

| iteration | i           | last set bit position | i & -i   |
|-----------|-------------|-----------------------|----------|
| 1         | 5 = 101     | 0                     | 1 (2^0)  |
| 2         | 6 = 110     | 1                     | 2 (2^1)  |
| 3         | 8 = 1000    | 3                     | 8 (2^3)  |
| 4         | 16 = 10000  | 4                     | 16 (2^4) |
| 5         | 32 = 100000 | 5                     | 32 (2^5) |

This operation is mainly used in Binary Indexed Trees to move up and down the tree.
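To illustrate that last point, here is a minimal Fenwick (Binary Indexed) tree sketch; the class name and structure are illustrative only, but it shows i -= i & -i walking a prefix query down and i += i & -i walking an update up:

class Fenwick:
    def __init__(self, n):
        self.t = [0] * (n + 1)           # 1-based indexing

    def update(self, i, delta):          # add delta at position i
        while i < len(self.t):
            self.t[i] += delta
            i += i & -i                  # move to the next node that covers i

    def query(self, i):                  # sum of positions 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i                  # drop the lowest set bit
        return s

f = Fenwick(8)
f.update(3, 5)
f.update(7, 2)
print(f.query(6), f.query(8))            # 5 7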
I am trying to infer a mapping scheme from set A to B (given below). Is there a way (Toolbox, long-forgotten File Exchange Gem, ...) to do that in Matlab?
My A and B are:
A = [8955573624 8727174542 6144057737 6697647320 1335549467 6669202192...
9276317113 5048034450 4757279524 1423969226 9729294957 4332046813...
0681780168 8231841017 9809242207 5584677643 6193476760 7203972648...
7286156579 5669792887 6789954237 8042954283 7426511939 4053045131...
8629149977 2997522935 9363344270 9890870146 9426932555 5755262458...
8327043690 0162545530 6451719711 5376165082 0595003112 5172323540...
9314878787 6822370777 8236826223 3097377830];
B = [000 001 001 003 003 004...
004 005 005 005 005 007...
007 009 009 009 010 010...
013 013 013 018 018 018...
018 019 019 019 020 020...
020 024 024 024 024 027...
027 027 027 028];
A brute-force method may be a good starting point. It at least gives one some place to start thinking about the problem. I include the code I used to find out that, for the first four numbers, the following order of operations on each of the 10 digits gives the 3-digit code:
@mod, @times, @rem, @mod, @times, @plus, @rem, @rem, @mod
However:
Elapsed time is 391.706191 seconds.
Code
data = [8955573624 000
        8727174542 001
        6144057737 001];
operations = {@plus, @minus, @times, @rdivide, @mod, @rem};
tic;
j = 1; % start from 1st row
while true
    a = data(j,1);
    b = num2str(a);                          % the 10 digits as characters
    digits = arrayfun(@str2double, b(:));    % numeric digit vector
    if j == 1 % Find a set of operations which converts from digits to the code
        value = NaN;
        trials = 0;
        while value ~= data(j,2) && trials < 1e3      % give up after 1e3 random attempts
            ops = datasample(operations, numel(digits)-1); % Random operations
            value = digits(1);
            for jj = 1:numel(digits)-1
                value = arrayfun(ops{jj}, value, digits(jj+1));
            end
            trials = trials + 1;
        end
    else % Test whether it works for j > 1
        value = digits(1);
        for jj = 1:numel(digits)-1
            value = arrayfun(ops{jj}, value, digits(jj+1));
        end
    end
    if value == data(j,2)
        if j == size(data,1); break; end
        j = j + 1;
    else
        j = 1;
    end
end
toc;
In terms of other things to try in the framework of this code:
Allowing for the digits to be tested as larger portions of the code. E.g. split the first code into 89,5,55,736,2,4 as opposed to only into single digits
Allowing other/more operations
Paralleling the attempts
Splitting the codes into digits before the while loop (<- Probably the easiest optimization to do here)
Trying the operations on all the codes at once (vectorising)
Changing both code and the answer into binary and trying to find a map there
Hope that helps. Even though it does not straight-up solve your problem, it might help you think about it in a new way.
I am solving this problem:
The count of ones in binary representation of integer number is called the weight of that number. The following algorithm finds the closest integer with the same weight. For example, for 123 (0111 1011)₂, the closest integer number is 125 (0111 1101)₂.
The O(n) solution, where n is the width of the input number, is to swap the positions of the first pair of consecutive bits that differ.
Could someone give me some hints for solving it in O(1) runtime and space?
Thanks
As already commented by ajayv this cannot really be done in O(1) as the answer always depends on the number of bits the input has. However, if we interpret the O(1) to mean that we have as an input some primitive integer data and all the logic and arithmetic operations we perform on that integer are O(1) (no loops over the bits), the problem can be solved in constant time. Of course, if we changed from 32bit integer to 64bit integer the running time would increase as the arithmetic operations would take longer on hardware.
One possible solution is to use the following functions. The first gives you a number where only the lowest set bit of x is set

int lowestBitSet(int x){
    return x & ~(x-1);
}

and the second the lowest bit not set

int lowestBitNotSet(int x){
    return ~x & (x+1);
}
If you work a few examples of these on paper, you'll see how they work.
Now you can find the bits you need to change using these two functions and then use the algorithm you already described.
A C++ implementation (not checking for cases where there is no answer):
unsigned int closestInt(unsigned int x){
    unsigned int ns=lowestBitNotSet(x);
    unsigned int s=lowestBitSet(x);
    if (ns>s){
        x|=ns;
        x^=ns>>1;
    }
    else{
        x^=s;
        x|=s>>1;
    }
    return x;
}
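As a quick sanity check against the example in the question (123 → 125), assuming the two helper functions above: for x = 123 = 0111 1011, lowestBitNotSet gives 0000 0100 and lowestBitSet gives 0000 0001; since ns > s, the code sets bit 2 (giving 0111 1111) and then clears bit 1, producing 0111 1101 = 125.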
To solve this problem in O(1) time complexity it can be considered that there are two main cases:
1) When LSB is '0':
In this case, the first '1' must be shifted with one position to the right.
Input : "10001000"
Out ::: "10000100"
2) When LSB is '1':
In this case the first '0' must be set to '1', and first '1' must be set to '0'.
Input : "10000111"
Out ::: "10001110"
The next method in Java represents one solution.
private static void findClosestInteger(String word) {   // ex: word = "10001000"
    System.out.println(word);              // Print initial binary format of the number
    int x = Integer.parseInt(word, 2);     // Convert String to int

    if((x & 1) == 0) {                     // Evaluates LSB value
        // Case when LSB = '0':
        // Input: x = 10001000
        int firstOne = x & ~(x - 1);       // get first '1' position (from right to left)
        // firstOne = 00001000
        x = x & (x - 1);                   // set first '1' to '0'
        // x = 10000000
        x = x | (firstOne >> 1);           // "shift" first '1' with one position to right
        // x = 10000100
    } else {
        // Case when LSB = '1':
        // Input: x = 10000111
        int firstZero = ~x & ~(~x - 1);    // get first '0' position (from right to left)
        // firstZero = 00001000
        x = x & (~1);                      // set first '1', which is the LSB, to '0'
        // x = 10000110
        x = x | firstZero;                 // set first '0' to '1'
        // x = 10001110
    }

    for(int i = word.length() - 1; i > -1; i--) {   // print the closest integer with same weight
        System.out.print("" + (((x & 1 << i) != 0) ? 1 : 0));
    }
}
The problem can be viewed as "which differing bits to swap in a bit representation of a number, so that the resultant number is closest to the original?"
So, if we were to swap bits at indices k1 and k2, with k2 > k1, the difference between the numbers would be 2^k2 - 2^k1. Our goal is to minimize this difference. Assuming that the bit representation is not all 0s or all 1s, a simple observation yields that the difference would be least if we kept |k2 - k1| as small as possible. The minimum value can be 1. So, if we're able to find two consecutive differing bits, starting from the least significant bit (index = 0), our job is done.
The case where bits starting from Least Significant Bit to the right most set bit are all 1s
k2
|
7 6 5 4 3 2 1 0
---------------
n: 1 1 1 0 1 0 1 1
rightmostSetBit: 0 0 0 0 0 0 0 1
rightmostNotSetBit: 0 0 0 0 0 1 0 0 rightmostNotSetBit > rightmostSetBit so,
difference: 0 0 0 0 0 0 1 0 i.e. rightmostNotSetBit - (rightmostNotSetBit >> 1):
---------------
n + difference: 1 1 1 0 1 1 0 1
The case where bits starting from Least Significant Bit to the right most set bit are all 0s
k2
|
7 6 5 4 3 2 1 0
---------------
n: 1 1 1 0 1 1 0 0
rightmostSetBit: 0 0 0 0 0 1 0 0
rightmostNotSetBit: 0 0 0 0 0 0 0 1 rightmostSetBit > rightmostNotSetBit so,
difference: 0 0 0 0 0 0 1 0 i.e. rightmostSetBit -(rightmostSetBit>> 1)
---------------
n - difference: 1 1 1 0 1 0 1 0
The edge case is, of course, the situation where we have all 0s or all 1s.
public static long closestToWeight(long n){
    if(n <= 0 /* If all 0s */ || (n+1) == Integer.MIN_VALUE /* n is MAX_INT */)
        return -1;
    long neg = ~n;
    long rightmostSetBit = n&~(n-1);
    long rightmostNotSetBit = neg&~(neg-1);
    if(rightmostNotSetBit > rightmostSetBit){
        return (n + (rightmostNotSetBit - (rightmostNotSetBit >> 1)));
    }
    return (n - (rightmostSetBit - (rightmostSetBit >> 1)));
}
Attempted the problem in Python. Can be viewed as a translation of Ari's solution with the edge case handled:
import sys

def closest_int_same_bit_count(x):
    # if all bits of x are 0 or 1, there can't be an answer
    if x & sys.maxsize in {sys.maxsize, 0}:
        raise ValueError("All bits are 0 or 1")

    rightmost_set_bit = x & ~(x - 1)
    next_un_set_bit = ~x & (x + 1)
    if next_un_set_bit > rightmost_set_bit:
        # 0 shifted to the right e.g 0111 -> 1011
        x ^= next_un_set_bit | next_un_set_bit >> 1
    else:
        # 1 shifted to the right 1000 -> 0100
        x ^= rightmost_set_bit | rightmost_set_bit >> 1
    return x
Similarly jigsawmnc's solution is provided below:
import sys

def closest_int_same_bit_count(x):
    # if all bits of x are 0 or 1, there can't be an answer
    if x & sys.maxsize in {sys.maxsize, 0}:
        raise ValueError("All bits are 0 or 1")

    rightmost_set_bit = x & ~(x - 1)
    next_un_set_bit = ~x & (x + 1)
    if next_un_set_bit > rightmost_set_bit:
        # 0 shifted to the right e.g 0111 -> 1011
        x += next_un_set_bit - (next_un_set_bit >> 1)
    else:
        # 1 shifted to the right 1000 -> 0100
        x -= rightmost_set_bit - (rightmost_set_bit >> 1)
    return x
Java Solution:
//Swap the two rightmost consecutive bits that are different
static void swapRightmostDifferingBits(long x) {
    for (int i = 0; i < 63; i++) {
        if ((((x >> i) & 1) ^ ((x >> (i + 1)) & 1)) == 1) {
            // then swap them or flip their bits
            long mask = (1L << i) | (1L << (i + 1));
            x = x ^ mask;
            System.out.println("x = " + x);
            return;
        }
    }
}
static void findClosestIntWithSameWeight(uint x)
{
    uint xWithfirstBitSettoZero = x & (x - 1);
    uint xWithOnlyfirstbitSet = x & ~(x - 1);
    uint xWithNextTofirstBitSet = xWithOnlyfirstbitSet >> 1;
    uint closestWeightNum = xWithfirstBitSettoZero | xWithNextTofirstBitSet;
    Console.WriteLine("Closest Weight for {0} is {1}", x, closestWeightNum);
}
Code in Python:

import math

def closest_int_same_bit_count(x):
    if (x & 1) != ((x >> 1) & 1):
        return x ^ 0x3
    diff = x ^ (x >> 1)
    rbs = diff & ~(diff - 1)
    i = int(math.log(rbs, 2))
    return x ^ (1 << i | 1 << i + 1)
A great explanation of this problem can be found on question 4.4 in EPI.
(Elements of Programming Interviews)
Another place would be this link on geeksforgeeks.org if you don't own the book.
(Time complexity may be wrong on this link)
Two things you should keep in mind here (hints if you're trying to solve this for yourself):
You can use x & (x - 1) to clear the lowest set bit (not to be confused with LSB - the least significant bit)
You can use x & ~(x - 1) to get/extract the lowest set bit
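A quick sanity check of those two identities in Python, using the 1011 1000 example that this answer walks through below:

x = 0b10111000                 # the 184 example used below
print(bin(x & (x - 1)))        # 0b10110000 : lowest set bit cleared
print(bin(x & ~(x - 1)))       # 0b1000     : lowest set bit extracted (the K1 below)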
If you know the O(n) solution, you know that we need to find the index of the first bit that differs from the LSB.
If you don't know what the LSB is:
0000 0000
        ^ // it's the bit all the way to the right of a binary string.
Take the base two number 1011 1000 (184 in decimal)
The first bit that differs from the LSB:
1011 1000
     ^ // this one
We'll record this as K1 = 0000 1000
Then we need to swap it with the very next bit to the right:
0000 1000
      ^ // this one
We'll record this as K2 = 0000 0100
Bitwise OR K1 and K2 together and you'll get a mask
mask = K1 | K2 // 0000 1000 | 0000 0100 -> 0000 1100
Bitwise XOR the mask with the original number and you'll have the correct output/swap
number ^ mask // 1011 1000 ^ 0000 1100 -> 1011 0100
Now before we pull everything together, we have to consider the fact that the LSB could be 1, and so could a run of bits right above it (e.g. 1000 1111). So we have to deal with two cases for the first bit that differs from the LSB; it may be a 1 or a 0.
First we have a conditional that tests whether the LSB is 1 or 0: x & 1
IF 1, return x XORed with the return of a helper function.
This helper function takes a second argument whose value depends on whether the condition is true or not: func(x, 0xFFFFFFFF) // if true // 0xFFFFFFFF: a mask with its low 32 bits set to 1
Otherwise we'll skip the if statement and return a similar expression, but with a different value provided to the second argument:
return x XORed with func(x, 0x00000000) // a mask with all bits set to 0. You could alternatively just pass 0, but I did this for consistency.
Our helper function returns a mask that we are going to XOR with the original number to get our output.
It takes two arguments, our original number and a mask, used in this expression:
(x ^ mask) & ~((x ^ mask) - 1)
which gives us a new number with the bit at index K1 always set to 1.
It then shifts that bit 1 to the right (i.e. index K2), then ORs it with itself to create our final mask:
0000 1000 >> 1 -> 0000 0100; 0000 0100 | 0000 1000 -> 0000 1100
This all implemented in C++ looks like:
unsigned long long int closestIntSameBitCount(unsigned long long int n)
{
    if (n & 1)
        return n ^= getSwapMask(n, 0xFFFFFFFF);
    return n ^= getSwapMask(n, 0x00000000);
}

// Helper function
unsigned long long int getSwapMask(unsigned long long int n, unsigned long long int mask)
{
    unsigned long long int swapBitMask = (n ^ mask) & ~((n ^ mask) - 1);
    return swapBitMask | (swapBitMask >> 1);
}
Keep note of the expression (x ^ mask) & ~((x ^ mask) - 1)
I'll now run through this code with my example 1011 1000:
// start of closestIntSameBitCount
if (0) // 1011 1000 & 1 -> 0000 0000
// start of getSwapMask
getSwapMask(1011 1000, 0x00000000)
swapBitMask = (x ^ mask) & ~1011 0111 // ((x ^ mask) - 1) = 1011 1000 ^ .... 0000 0000 -> 1011 1000 - 1 -> 1011 0111
swapBitMask = (x ^ mask) & 0100 1000 // ~1011 0111 -> 0100 1000
swapBitMask = 1011 1000 & 0100 1000 // (x ^ mask) = 1011 1000 ^ .... 0000 0000 -> 1011 1000
swapBitMask = 0000 1000 // 1011 1000 & 0100 1000 -> 0000 1000
return swapBitMask | 0000 0100 // (swapBitMask >> 1) = 0000 1000 >> 1 -> 0000 0100
return 0000 1100 // 0000 1000 | 0000 0100 -> 0000 11000
// end of getSwapMask
return 1011 0100 // 1011 1000 ^ 0000 11000 -> 1011 0100
// end of closestIntSameBitCount
Here is a full running example if you would like to compile and run it yourself:
#include <iostream>
#include <stdio.h>
#include <bitset>

unsigned long long int closestIntSameBitCount(unsigned long long int n);
unsigned long long int getSwapMask(unsigned long long int n, unsigned long long int mask);

int main()
{
    unsigned long long int number;
    printf("Pick a number: ");
    std::cin >> number;

    std::bitset<64> a(number);
    std::bitset<64> b(closestIntSameBitCount(number));

    std::cout << a
              << "\n"
              << b
              << std::endl;
}

unsigned long long int closestIntSameBitCount(unsigned long long int n)
{
    if (n & 1)
        return n ^= getSwapMask(n, 0xFFFFFFFF);
    return n ^= getSwapMask(n, 0x00000000);
}

// Helper function
unsigned long long int getSwapMask(unsigned long long int n, unsigned long long int mask)
{
    unsigned long long int swapBitMask = (n ^ mask) & ~((n ^ mask) - 1);
    return swapBitMask | (swapBitMask >> 1);
}
This was my solution to the problem. I guess @jigsawmnc explains pretty well why we need to keep |k2 - k1| to a minimum. So in order to find the closest integer with the same weight, we want to find the location where consecutive bits are flipped and then flip them again to get the answer. In order to do that, we can shift the number right by 1 and take the XOR with the original number: this sets bits at all locations where there is a flip. Find the least significant set bit of that XOR; this gives the smallest location to flip. Create a mask for that location and the next bit, take an XOR, and that should be the answer. This won't work if the digits are all 0 or all 1.
Here is the code for it.
def variant_closest_int(x: int) -> int:
    if x == 0 or ~x == 0:
        raise ValueError('All bits are 0 or 1')
    x_ = x >> 1
    lsb = x ^ x_
    mask_ = lsb & ~(lsb - 1)
    mask = mask_ | (mask_ << 1)
    return x ^ mask
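(For what it's worth, a quick check against the example from the question: variant_closest_int(123) returns 125, i.e. 0111 1011 → 0111 1101, which matches the expected answer.)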
My solution takes advantage of the parity of the integer. I think the way I got the LSB masks can be simplified.

def next_weighted_int(x):
    if x % 2 == 0:
        lsb_mask = (((x - 1) ^ x) >> 1) + 1  # Gets a mask for the first 1
        x ^= lsb_mask
        x |= (lsb_mask >> 1)
        return x
    lsb_mask = ((x ^ (x + 1)) >> 1) + 1  # Gets a mask for the first 0
    x |= lsb_mask
    x ^= (lsb_mask >> 1)
    return x
Just sharing my Python solution for this problem:

def closest_int_same_bit_count(a):
    x = a + (a & 1)              # change last bit to 0
    bit = (x & ~(x - 1))         # get last set bit
    return a ^ (bit | bit >> 1)  # swap set bit with unset bit
func findClosestIntegerWithTheSameWeight2(x int) int {
    rightMost0 := ^x & (x + 1)
    rightMost1 := x & (-x)
    if rightMost0 > 1 {
        return (x ^ rightMost0) ^ (rightMost0 >> 1)
    } else {
        return (x ^ rightMost1) ^ (rightMost1 >> 1)
    }
}
Given an integer 1 <= n <= 100000, how do I efficiently get all the integers which are subsequences of the integer n (considering n as a string of digits)? I want pseudocode for this.
e.g. input: n=121, output: 1, 12, 11, 2, 21 (the order of output has no importance)
e.g. input: n=132, output: 1, 13, 12, 3, 32, 2
Thanks in advance
Here's another version, using recursion. This one uses just integer division and modulo (both combined in divmod) to get the 'sub-integers'. It's done in Python, which is just as good as pseudo code...
def subints(n):
    d, r = divmod(n, 10)
    if d > 0:
        for s in subints(d):
            yield 10*s + r
            yield s
    yield r
Example: (a set of the resulting numbers would be enough; using list for better understanding)
>>> print list(subints(1234))
[1234, 123, 124, 12, 134, 13, 14, 1, 234, 23, 24, 2, 34, 3, 4]
n=121, length = 3
Generate all possible binary numbers of length 3. Look at the set bits in each binary number and print the corresponding digits from the original number.
Example 1
121 length=3
000 -> 0 (ignore this:no bit set)
001 -> 1 (__1)
010 -> 2 (_2_)
011 -> 21 (_21)
100 -> 1 (repeated: ignore this)
101 -> 11 (1_1)
110 -> 12 (12_)
111 -> 121 (121)
Example 2: n=1234
1234 length=4
0000 -> 0 (ignore this:no bit set)
0001 -> 4 (___4)
0010 -> 3 (__3_)
0011 -> 34 (__34)
0100 -> 2 (_2__)
0101 -> 24 (_2_4)
0110 -> 23 (_23_)
0111 -> 234 (_234)
1000 -> 1 (1___)
1001 -> 14 (1__4)
1010 -> 13 (1_3_)
1011 -> 134 (1_34)
1100 -> 12 (12__)
1101 -> 124 (12_4)
1110 -> 123 (123_)
1111 -> 1234 (1234)
The C code for the above algorithm is pasted below. I have not performed many optimizations, but the logic is the same.
Sorry for the unoptimized code.
#include<stdio.h>
#include<string.h>
#define SIZE 30

int main(void)
{
    int i, j, end, index, k, swap, len;
    char str[SIZE], buffer[SIZE];
    char dict[64][SIZE] = {{'\0'}};  // dictlength is used to store the number of subsequences collected so far;
                                     // 64 rows is enough for inputs up to 6 digits (n <= 100000)
    int dictlength = 0;

    scanf("%29s", str);                   // read the number as a string of digits
    len = strlen(str);
    for (i = 1; i < (1 << len); i++)      // every non-empty subset of digit positions
    {
        index = 0;
        end = len - 1;
        for (j = 0; j < len; j++, end--)  // bit j of i selects digit str[end]
        {
            if ((i >> j) & 1)
            {
                buffer[index] = str[end];
                index = index + 1;
            }
        }
        buffer[index] = '\0';
        for (k = 0, j = strlen(buffer) - 1; k < j; k++, j--)  // reverse buffer back to left-to-right order
        {
            swap = buffer[k];
            buffer[k] = buffer[j];
            buffer[j] = swap;
        }
        strcpy(dict[dictlength], buffer);
        dictlength++;
    }
    for (i = 0; i < dictlength; i++)
        puts(dict[i]);
    return 0;
}
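A quick check of the logic: for input 121 this prints 1, 2, 21, 1, 11, 12, 121, one line per non-empty bitmask, with duplicates kept, matching the walkthrough in Example 1 above.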
How about this:
static HashSet<string> Subseq(string input, HashSet<string> result = null, int pos = 0, string current = "")
{
    if (result == null)
    {
        result = new HashSet<string>();
    }
    if (pos == input.Length)
    {
        return result;
    }

    Subseq(input, result, pos + 1, current);

    current += input[pos];
    if (!result.Contains(current))
    {
        result.Add(current);
    }
    Subseq(input, result, pos + 1, current);

    return result;
}
Called as:
var result = Subseq("18923928");
If you don't consider "123" a subsequence of "123", then just add the condition current != input to the last if.