Converting Number to Binary String - C++11

The following is code to convert a number to a binary string. Can anyone tell me how ans.push_back((char)('0' + rem)) works?
class Solution {
public:
    string findDigitsInBinary(int n) {
        string ans;
        if (n == 0) return "0";
        while (n > 0) {
            int rem = n % 2;
            ans.push_back((char)('0' + rem));
            n /= 2;
        }
        reverse(ans.begin(), ans.end());
        return ans;
    }
};

To understand it, you just need to know that you can do arithmetic operations on char variables too. So the simple loop below is valid and will print 0123456789.

for (char c = '0'; c <= '9'; ++c)
    cout << c;

In your code, rem is either 0 or 1. So (char)('0' + rem) is either '0' or '1' as desired, corresponding to rem = 0 and rem = 1 respectively.

while (n > 0) {
    int rem = n % 2;
    ans.push_back((char)('0' + rem));
    n /= 2;
}

Focus on this loop, and suppose n is 5.
n > 0 is true, so we enter the loop. rem = n % 2 gives rem = 5 % 2 = 1.
In ans.push_back((char)('0' + rem)), the expression ('0' + rem) is (48 + 1), since the ASCII code of '0' is 48.
Converting 48 + 1 = 49 to char gives '1', which is pushed into ans; then n /= 2 makes n equal to 5 / 2 = 2. Control goes back to the while condition. The next two iterations push '0' (n becomes 1) and '1' (n becomes 0), so ans holds "101". After the loop you reverse the contents of ans and you have the binary string of n.

First you get rem as n % 2, so the value of rem can only be 0 or 1.
In ans.push_back((char)('0' + rem)); you need to append the corresponding character to the string, that is, either '0' or '1'. For this, '0' is taken as the base character and rem is simply added to it using its ASCII value. In such integer arithmetic the character '0' is treated as its ASCII code, 48, so after adding rem the result is either 48 + 0 = 48 or 48 + 1 = 49.
Finally, this value is cast back to char, with 48 being '0' and 49 being '1'.
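As a quick cross-check of that arithmetic, here is a minimal sketch in Python (ord and chr just make the char/int conversions explicit; the C++ expression '0' + rem behaves the same way):

for rem in (0, 1):
    code = ord('0') + rem    # 48 or 49
    print(code, chr(code))   # prints 48 0, then 49 1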

Related

Reading the bits of a natural number from LSB to MSB without built-ins in O(n)

Taking a natural number a as input, it is easy to read the bits of its binary form from MSB to LSB in O(n) time, n being its binary length, using only a for loop and elementary sums and subtractions. A left shift can be achieved by a + a, and the leading bit can be removed by subtracting the reference word 1000000...
def powerOfTwo(n):
    a = 1
    for j in range(0, n):
        a = (a + a)
    return a

def bitLenFast(n):
    len = 0
    if (n == 0):
        len = 1
    else:
        y = 1
        while (y <= n):
            y = (y + y)
            len = (len + 1)
    return len

def readAsBinary(x):
    len = bitLenFast(x)        # Length of input x in bits
    y = powerOfTwo(len - 1)    # Reference word 1000000...
    hBit = powerOfTwo(len)     # Deletes highest bit in left shift
    for i in range(0, len):
        if (x >= y):
            bit = 1
            x = ((x + x) - hBit)
        else:
            bit = 0
            x = (x + x)
        print(bit)
Is there an algorithm to parse it bit by bit from LSB to MSB in O(n) time, using only a while or a for loop and elementary operations (i.e. no bitwise built-in functions or operators)?
Apply your MSB-to-LSB algorithm to the number. Keep an accumulator A initialized to 0 and a place-value variable B initialized to 1. At each iteration, add B to A if the bit is set, and then double B by adding it to itself. You also need to keep track of the number of consecutive 0 bits: initialize a counter C to zero beforehand and, at each iteration, increment it if the bit is 0 or reset it to zero otherwise.
At the end you will have the number with the bits reversed in A. You can then output C leading zeros and then apply the algorithm to A to output the bits of the original number in LSB to MSB order.
This is an implementation of samgak's answer in JS, using 2 calls to (an adapted version of) the OP's code. Since the OP's code is O(n), and each added acc.push operation is O(1), the result is also O(n).
Therefore, the answer to OP's question is yes.
NOTE: updated to add leading zeroes as per samgak's updated answer.
function read_low_to_high(num, out) {
    const acc = {
        n: 0, // integer with bits in reverse order
        p: 1, // current power-of-two
        z: 0, // last run of zeroes, to prepend to result once finished
        push: (bit) => { // this is O(1)
            if (bit) {
                acc.n = acc.n + acc.p;
                acc.z = 0;
            } else {
                acc.z = acc.z + 1;
            }
            acc.p = acc.p + acc.p;
        }
    };
    // with n as log2(num) ...
    read_high_to_low(num, acc);       // O(n) - bits in reverse order
    for (let i = 0; i < acc.z; i++) { // O(n) - prepend zeroes
        out.push(0);
    }
    read_high_to_low(acc.n, out);     // O(n) - bits in expected order
}

function read_high_to_low(num, out) {
    let po2 = 1; // max power-of-two <= num
    let binlength = 1;
    while (po2 + po2 <= num) {
        po2 = po2 + po2;
        binlength++;
    }
    const hi = po2 + po2; // min power-of-two > num
    for (let i = 0; i < binlength; i++) {
        if (num >= po2) {
            out.push(1);
            num = num + num - hi;
        } else {
            out.push(0);
            num = num + num;
        }
    }
}

function test(i) {
    const a = i.toString(2)
        .split('').map(c => c - '0');
    const ra = a.slice().reverse();
    const b = [];
    read_high_to_low(i, b);
    const rb = [];
    read_low_to_high(i, rb);
    console.log(i,
        "high-to-low",
        JSON.stringify(a),
        JSON.stringify(b),
        "low-to-high",
        JSON.stringify(ra),
        JSON.stringify(rb)
    );
}

for (let i = 0; i < 16; i++) test(i);
Perhaps you want something like this:
value = 666
while value:
    next = value // 2   # integer division
    bit = value - next * 2
    print(bit, end=" ")
    value = next

Output: 0 1 0 1 1 0 0 1 0 1
For reading digits from least significant to most significant and determining the numerical value, there is; but to make a valid assertion about the run time it would be essential to know whether e.g. indexed access is constant time.
To accumulate the digits into a numerical value:
value ← 0, weight ← 1
foreach digit
  while 0 < digit
    value ← value + weight
    digit ← digit - 1
  weight ← weight + weight
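As a runnable illustration of that pseudocode, here is a Python sketch (the function name is mine); digits are supplied least significant first, and only additions are used:

def value_from_lsb_digits(digits):
    # Add the current place value 'weight' once per unit of each digit,
    # then double the weight by adding it to itself.
    value, weight = 0, 1
    for digit in digits:
        for _ in range(digit):
            value += weight
        weight += weight
    return value

# the bits of 666 read LSB to MSB, as printed by the loop above
print(value_from_lsb_digits([0, 1, 0, 1, 1, 0, 0, 1, 0, 1]))  # 666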
 

Count number of 1 digits in 11 to the power of N

I came across an interesting problem:
How would you count the number of 1 digits in the representation of 11 to the power of N, 0<N<=1000.
Let d be the number of 1 digits
N=2 11^2 = 121 d=2
N=3 11^3 = 1331 d=2
Worst-case time complexity expected: O(N^2).
The simple approach, where you compute the number and count the 1 digits by repeatedly taking the last digit and dividing by 10, does not work very well: 11^1000 is not even representable in any standard data type.
Powers of eleven can be stored as a string and calculated quite quickly that way, without a generalised arbitrary precision math package. All you need is multiply by ten and add.
For example, 11^1 is 11. To get the next power of 11 (11^2), you multiply by (10 + 1), which is effectively the number with a zero tacked on the end, added to the number itself: 110 + 11 = 121.
Similarly, 11^3 can then be calculated as: 1210 + 121 = 1331.
And so on:
  11^2    11^3     11^4      11^5       11^6
   110    1210    13310    146410    1610510
   +11    +121    +1331    +14641    +161051
   ---    ----    -----    ------    -------
   121    1331    14641    161051    1771561
So that's how I'd approach, at least initially.
By way of example, here's a Python function to raise 11 to the n'th power, using the method described (I am aware that Python supports arbitrary precision; keep in mind I'm just using it to demonstrate this as an algorithm, which is how the question was tagged):
def elevenToPowerOf(n):
    # Anything to the zero is 1.
    if n == 0: return "1"
    # Otherwise, num <- num * 10 + num, once for each level of power.
    num = "11"
    while n > 1:
        n = n - 1
        # Make multiply by eleven easy.
        ten = num + "0"
        num = "0" + num
        # Standard primary school algorithm for adding.
        newnum = ""
        carry = 0
        for dgt in range(len(ten) - 1, -1, -1):
            res = int(ten[dgt]) + int(num[dgt]) + carry
            carry = res // 10
            res = res % 10
            newnum = str(res) + newnum
        if carry == 1:
            newnum = "1" + newnum
        # Prepare for next multiplication.
        num = newnum
    # There you go, 11^n as a string.
    return num
And, for testing, a little program which works out those values for each power that you provide on the command line:
import sys
for idx in range(1, len(sys.argv)):
    try:
        power = int(sys.argv[idx])
    except ValueError:
        print("Invalid number [%s]" % (sys.argv[idx]))
        sys.exit(1)
    if power < 0:
        print("Negative powers not allowed [%d]" % (power))
        sys.exit(1)
    number = elevenToPowerOf(power)
    count = 0
    for ch in number:
        if ch == '1':
            count += 1
    print("11^%d is %s, has %d ones" % (power, number, count))
When you run that with:
time python3 prog.py 0 1 2 3 4 5 6 7 8 9 10 11 12 1000
you can see that it's both accurate (checked with bc) and fast (finished in about half a second):
11^0 is 1, has 1 ones
11^1 is 11, has 2 ones
11^2 is 121, has 2 ones
11^3 is 1331, has 2 ones
11^4 is 14641, has 2 ones
11^5 is 161051, has 3 ones
11^6 is 1771561, has 3 ones
11^7 is 19487171, has 3 ones
11^8 is 214358881, has 2 ones
11^9 is 2357947691, has 1 ones
11^10 is 25937424601, has 1 ones
11^11 is 285311670611, has 4 ones
11^12 is 3138428376721, has 2 ones
11^1000 is 2469932918005826334124088385085221477709733385238396234869182951830739390375433175367866116456946191973803561189036523363533798726571008961243792655536655282201820357872673322901148243453211756020067624545609411212063417307681204817377763465511222635167942816318177424600927358163388910854695041070577642045540560963004207926938348086979035423732739933235077042750354729095729602516751896320598857608367865475244863114521391548985943858154775884418927768284663678512441565517194156946312753546771163991252528017732162399536497445066348868438762510366191040118080751580689254476068034620047646422315123643119627205531371694188794408120267120500325775293645416335230014278578281272863450085145349124727476223298887655183167465713337723258182649072572861625150703747030550736347589416285606367521524529665763903537989935510874657420361426804068643262800901916285076966174176854351055183740078763891951775452021781225066361670593917001215032839838911476044840388663443684517735022039957481918726697789827894303408292584258328090724141496484460001, has 105 ones
real 0m0.609s
user 0m0.592s
sys 0m0.012s
That may not necessarily be O(n^2) but it should be fast enough for your domain constraints.
Of course, given those constraints, you can make it O(1) by using a method I call pre-generation. Simply write a program to generate an array you can plug into your program which contains a suitable function. The following Python program does exactly that, for the powers of eleven from 1 to 100 inclusive:
def mulBy11(num):
    # Same length to ease addition.
    ten = num + '0'
    num = '0' + num
    # Standard primary school algorithm for adding.
    result = ''
    carry = 0
    for idx in range(len(ten) - 1, -1, -1):
        digit = int(ten[idx]) + int(num[idx]) + carry
        carry = digit // 10
        digit = digit % 10
        result = str(digit) + result
    if carry == 1:
        result = '1' + result
    return result

num = '1'
print('int oneCountInPowerOf11(int n) {')
print('    static int numOnes[] = {-1', end='')
for power in range(1, 101):
    num = mulBy11(num)
    count = sum(1 for ch in num if ch == '1')
    print(',%d' % count, end='')
print('};')
print('    if ((n < 0) || (n >= sizeof(numOnes) / sizeof(*numOnes)))')
print('        return -1;')
print('    return numOnes[n];')
print('}')
The code output by this script is:
int oneCountInPowerOf11(int n) {
    static int numOnes[] = {-1,2,2,2,2,3,3,3,2,1,1,4,2,3,1,4,2,1,4,4,1,5,5,1,5,3,6,6,3,6,3,7,5,7,4,4,2,3,4,4,3,8,4,8,5,5,7,7,7,6,6,9,9,7,12,10,8,6,11,7,6,5,5,7,10,2,8,4,6,8,5,9,13,14,8,10,8,7,11,10,9,8,7,13,8,9,6,8,5,8,7,15,12,9,10,10,12,13,7,11,12};
    if ((n < 0) || (n >= sizeof(numOnes) / sizeof(*numOnes)))
        return -1;
    return numOnes[n];
}
which should be blindingly fast when plugged into a C program. On my system, the Python code itself (when you up the range to 1..1000) runs in about 0.6 seconds and the C code, when compiled, finds the number of ones in 11^1000 in 0.07 seconds.
Here's my concise solution.
def count1s(N):
    # When 11^(N-1) = result, 11^N = (10+1) * result = 10*result + result
    result = 1
    for i in range(N):
        result += 10 * result
    # Now count 1's
    count = 0
    for ch in str(result):
        if ch == '1':
            count += 1
    return count
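As a quick sanity check against the examples in the question:

assert count1s(2) == 2   # 11^2 = 121 has two 1 digits
assert count1s(3) == 2   # 11^3 = 1331 has two 1 digits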
In C#:

private static void Main(string[] args)
{
    var res = Elevento(1000);
    var countOf1 = res.Select(x => int.Parse(x.ToString())).Count(s => s == 1);
    Console.WriteLine(countOf1);
}

private static string Elevento(int n)
{
    if (n == 0) return "1";
    // Otherwise, num <- num * 10 + num, once for each level of power.
    var num = "11";
    while (n > 1)
    {
        n--;
        // Make multiply by eleven easy.
        var ten = num + "0";
        num = "0" + num;
        // Standard primary school algorithm for adding.
        var newnum = "";
        var carry = 0;
        foreach (var dgt in Enumerable.Range(0, ten.Length).Reverse())
        {
            var res = int.Parse(ten[dgt].ToString()) + int.Parse(num[dgt].ToString()) + carry;
            carry = res / 10;
            res = res % 10;
            newnum = res + newnum;
        }
        if (carry == 1)
            newnum = "1" + newnum;
        // Prepare for next multiplication.
        num = newnum;
    }
    // There you go, 11^n as a string.
    return num;
}

Any algorithm to find the double trouble number?

I was trying to code the double-trouble number problem, but I am not able to finalize the algorithm first.
Does anybody have any idea?
Problem Statement -
The numbers have the following property:
Whenever you right-rotate the number (that is, take away the last digit and put it in front of the number), you end up with double the original number. Numbers possessing this property were called double-trouble numbers. For example, X = 421052631578947368 is a double-trouble number, since 2X = 842105263157894736, which is a right rotation of X.
The number X is a double-trouble number in the number system with base 10. Any number system with base p >= 2, however, has many such double-trouble numbers. In the binary number system (base p = 2), for example, we have the double-trouble numbers 01 and 0101. Notice that the leading zeros are necessary here in order to obtain the proper number after right rotation.
In the binary number system the smallest double-trouble number is 01. In the decimal (p = 10) number system, the smallest double-trouble number is 052631578947368421. I need to write a program that computes, for a given base p of a number system, the smallest double-trouble number in that system.
Here's the brute force solution in JavaScript.
It starts with a digit, then prepends the double of the previous digit (plus carry).
After each iteration it tests whether the digits form a double-trouble number (it also tries the prepend-"0" corner/ambiguous case).
This implementation is only for base 10; you'll have to understand the algorithm and modify the code to create an arbitrary base abstraction.
Double Trouble Solver for base 10
// (digits * 2) == digits[n]:digits[1..n-1]
function isDT(digits) {
    var times2 = "";
    var carry = false;
    for (var i = digits.length - 1; i >= 0; i--) {
        var d = parseInt(digits.charAt(i));
        var d2 = "" + (d * 2 + (carry ? 1 : 0));
        carry = d2.length > 1;
        times2 = d2.charAt(d2.length > 1 ? 1 : 0) + times2;
    }
    if (carry) { times2 = "1" + times2; }
    return times2 == (digits.charAt(digits.length - 1) + digits.substring(0, digits.length - 1));
}

// generate a double trouble number from a starting digit
function makeDT(digits, carry) {
    var carry = carry || false;
    var digits = "" + digits;
    if (carry && isDT("1" + digits)) {
        return "1" + digits;
    } else if (isDT(digits)) {
        return digits;
    } else if (isDT("0" + digits)) {
        return "0" + digits;
    }
    var d = digits.charAt(0);
    var d2 = "" + (d * 2 + (carry ? 1 : 0));
    carry = d2.length > 1;
    digits = d2.charAt(d2.length > 1 ? 1 : 0) + digits;
    return makeDT(digits, carry);
}

//
alert(makeDT("9"));
alert(makeDT("8"));
alert(makeDT("7"));
alert(makeDT("6"));
alert(makeDT("5"));
alert(makeDT("4"));
alert(makeDT("3"));
alert(makeDT("2"));
alert(makeDT("1"));
EDIT Here's the jsfiddle http://jsfiddle.net/avbfae0w/
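The answer above is base-10 only. As a hedged sketch of the arbitrary-base generalization (my own construction, not the answerer's code), you can build the digits from the right: right rotation means that doubling digit d[i] plus the incoming carry must produce digit d[i+1], and the cycle closes when doubling the last digit plus carry yields exactly the starting digit with no carry out. (Algebraically, the rotation condition gives X * (2p - 1) = t * (p^n - 1), where t is the last digit and n the length, so a closing cycle always exists.) The string join below assumes p <= 10 so each digit is one character:

def smallest_double_trouble(p):
    # Try each possible last digit t; generate d[i+1] = (2*d[i] + carry) mod p
    # until the cycle closes back to t, then keep the smallest candidate.
    best = None
    for t in range(1, p):
        digits, carry = [t], 0
        for _ in range(2 * p + 2):   # at most 2*p distinct (digit, carry) states
            v = 2 * digits[-1] + carry
            if v == t:               # closes: digit t returns with no carry out
                cand = "".join(str(d) for d in reversed(digits))
                if best is None or (len(cand), cand) < (len(best), best):
                    best = cand
                break
            digits.append(v % p)
            carry = v // p
    return best

print(smallest_double_trouble(2))   # 01
print(smallest_double_trouble(10))  # 052631578947368421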

positional sum of 2 numbers

How do I sum 2 numbers digit by digit with pseudocode?
Note: you don't know the length of the numbers - whether they have tens, hundreds, thousands...
Units should be added to units, tens to tens, hundreds to hundreds...
If adding the units gives a value >= 10, you need to carry that ten over to the tens...
I tried
Start
Do
    Add digit(x) in A to Sum(x)
    Add digit(x) in B to Sum(x)
    If Sum(x) > 9, then (?????)
    digit(x) = digit(x+1)
while digit(x) in A and digit(x) in B is > 0

How do I show the result?
I am lost with that.....
Please help!
Try this,

n = minDigit(a, b) where a and b are the numbers.
m = maxDigit(a, b)
let sum be a number.
allocate maxDigit(a, b) + 1 memory for sum
carry = 0
for (i = 1 to n)
    temp = a[i] + b[i] + carry
    // reset carry
    carry = 0
    if (temp >= 10)
        carry = 1
        temp = temp - 10
    sum[i] = temp
// one last step to get the leftover carry
if (digits(a) == digits(b))
    sum[n + 1] = carry
    return
if (digits(a) > digits(b))
    toCopy = a
else
    toCopy = b
for (i = n + 1 to m)
    temp = toCopy[i] + carry
    // reset carry
    carry = 0
    if (temp >= 10)
        carry = 1
        temp = temp - 10
    sum[i] = temp
sum[m + 1] = carry

Let me know if it helps.
A and B are the integers you want to sum.
Note that the while loop ends when A, B, and the carry are all equal to zero.

carry = 0
sum = 0
d = 1
while (A > 0 or B > 0 or carry > 0)
    tmp = carry + A mod 10 + B mod 10
    sum = sum + (tmp mod 10) * d
    carry = tmp / 10
    d = d * 10
    A = A / 10
    B = B / 10
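For a runnable check of this pseudocode, here is a direct Python transcription (the function name is mine):

def positional_sum(a, b):
    # Digit-by-digit addition of two non-negative integers.
    carry, total, d = 0, 0, 1
    while a > 0 or b > 0 or carry > 0:
        tmp = carry + a % 10 + b % 10
        total += (tmp % 10) * d   # place the result digit at the current column
        carry = tmp // 10         # carry into the next column
        d *= 10                   # advance the place value
        a //= 10
        b //= 10
    return total

print(positional_sum(999, 1))   # 1000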

Number of 1s in the two's complement binary representations of integers in a range

This problem is from the 2011 Codesprint (http://csfall11.interviewstreet.com/):
One of the basics of Computer Science is knowing how numbers are represented in 2's complement. Imagine that you write down all numbers between A and B inclusive in 2's complement representation using 32 bits. How many 1's will you write down in all ?
Input:
The first line contains the number of test cases T (<1000). Each of the next T lines contains two integers A and B.
Output:
Output T lines, one corresponding to each test case.
Constraints:
-2^31 <= A <= B <= 2^31 - 1
Sample Input:
3
-2 0
-3 4
-1 4
Sample Output:
63
99
37
Explanation:
For the first case, -2 contains 31 1's followed by a 0, -1 contains 32 1's and 0 contains 0 1's. Thus the total is 63.
For the second case, the answer is 31 + 31 + 32 + 0 + 1 + 1 + 2 + 1 = 99
I realize that you can use the fact that the number of 1s in -X is equal to the number of 0s in the complement of (-X) = X-1 to speed up the search. The solution claims that there is a O(log X) recurrence relation for generating the answer but I do not understand it. The solution code can be viewed here: https://gist.github.com/1285119
I would appreciate it if someone could explain how this relation is derived!
Well, it's not that complicated...
The single-argument solve(int a) function is the key. It is short, so I will cut&paste it here:
long long solve(int a)
{
    if (a == 0) return 0;
    if (a % 2 == 0) return solve(a - 1) + __builtin_popcount(a);
    return ((long long)a + 1) / 2 + 2 * solve(a / 2);
}
It only works for non-negative a, and it counts the number of 1 bits in all integers from 0 to a inclusive.
The function has three cases:
a == 0 -> returns 0. Obviously.
a even -> returns the number of 1 bits in a plus solve(a-1). Also pretty obvious.
The final case is the interesting one. So, how do we count the number of 1 bits from 0 to an odd number a?
Consider all of the integers between 0 and a, and split them into two groups: The evens, and the odds. For example, if a is 5, you have two groups (in binary):
000 (aka. 0)
010 (aka. 2)
100 (aka. 4)
and
001 (aka 1)
011 (aka 3)
101 (aka 5)
Observe that these two groups must have the same size (because a is odd and the range is inclusive). To count how many 1 bits there are in each group, first count all but the last bits, then count the last bits.
All but the last bits looks like this:
00
01
10
...and it looks like this for both groups. The number of 1 bits here is just solve(a/2). (In this example, it is the number of 1 bits from 0 to 2. Also, recall that integer division in C/C++ rounds down.)
The last bit is zero for every number in the first group and one for every number in the second group, so those last bits contribute (a+1)/2 one bits to the total.
So the third case of the recursion is (a+1)/2 + 2*solve(a/2), with appropriate casts to long long to handle the case where a is INT_MAX (and thus a+1 overflows).
This is an O(log N) solution. To generalize it to solve(a,b), you just compute solve(b) - solve(a), plus the appropriate logic for worrying about negative numbers. That is what the two-argument solve(int a, int b) is doing.
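To make the range logic concrete, here is a hedged Python sketch of the same recurrence plus the negative-number handling (names are mine, not the contest solution's); it uses the fact that a 32-bit negative -k has 32 - popcount(k - 1) one bits:

def popcount(x):
    return bin(x).count("1")

def ones_upto(a):
    # 1 bits in the binary representations of 0..a, for a >= 0
    if a == 0:
        return 0
    if a % 2 == 0:
        return ones_upto(a - 1) + popcount(a)
    return (a + 1) // 2 + 2 * ones_upto(a // 2)

def ones_in_range(a, b):
    # 1 bits written for all 32-bit two's complement integers in [a, b]
    def neg_total(x):   # total for [x, -1], with x < 0
        k = -x
        return 32 * k - ones_upto(k - 1)
    if a >= 0:
        return ones_upto(b) - (ones_upto(a - 1) if a > 0 else 0)
    if b < 0:
        return neg_total(a) - (neg_total(b + 1) if b < -1 else 0)
    return neg_total(a) + ones_upto(b)

print(ones_in_range(-2, 0), ones_in_range(-3, 4), ones_in_range(-1, 4))  # 63 99 37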
Treat the input as a series of integers. Then for each integer do:
int NumberOfSetBits(int i)
{
    i = i - ((i >> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}
Also this is portable, unlike __builtin_popcount
See here: How to count the number of set bits in a 32-bit integer?
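For reference, the same SWAR computation transcribed to Python as a sanity check (my transcription, not part of the original answer; the final mask emulates C's 32-bit wraparound on the multiply):

def number_of_set_bits(i):
    # SWAR popcount of a 32-bit non-negative integer
    i = i - ((i >> 1) & 0x55555555)
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)
    return ((((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) & 0xFFFFFFFF) >> 24

assert all(number_of_set_bits(n) == bin(n).count("1") for n in range(1 << 16))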
When a is positive, a good explanation has already been posted. If a is negative, then on a 32-bit system each negative number -k between a and zero has 32 one bits minus the number of one bits in k - 1 (since -k = ~(k - 1)).
So, put more cleanly:
long long solve(int a) {
    if (a >= 0) {
        if (a == 0) return 0;
        else if ((a % 2) == 0) return solve(a - 1) + noOfSetBits(a);
        else return (2 * solve(a / 2)) + ((long long)a + 1) / 2;
    } else {
        a++;
        return ((long long)(-a) + 1) * 32 - solve(-a);
    }
}
In the following code, the bitsum of x is defined as the count of 1 bits in the two's complement representation of the numbers between 0 and x (inclusive), where Integer.MIN_VALUE <= x <= Integer.MAX_VALUE.
For example:
bitsum(0) is 0
bitsum(1) is 1
bitsum(2) is 2
bitsum(3) is 4
...etc
10987654321098765432109876543210 i % 10 for 0 <= i <= 31
00000000000000000000000000000000 0
00000000000000000000000000000001 1
00000000000000000000000000000010 2
00000000000000000000000000000011 3
00000000000000000000000000000100 4
00000000000000000000000000000101 ...
00000000000000000000000000000110
00000000000000000000000000000111 (2^i)-1
00000000000000000000000000001000 2^i
00000000000000000000000000001001 (2^i)+1
00000000000000000000000000001010 ...
00000000000000000000000000001011 x, 011 = x & (2^i)-1 = 3
00000000000000000000000000001100
00000000000000000000000000001101
00000000000000000000000000001110
00000000000000000000000000001111
00000000000000000000000000010000
00000000000000000000000000010001
00000000000000000000000000010010 18
...
01111111111111111111111111111111 Integer.MAX_VALUE
The formula for the bitsum, with i chosen so that 2^i <= x < 2^(i+1), is:
bitsum(x) = bitsum((2^i)-1) + 1 + x - 2^i + bitsum(x & ((2^i)-1))
Note that x - 2^i = x & ((2^i)-1).
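As a quick numeric check of this recurrence (a throwaway brute-force reference of my own, not the answer's code):

def bitsum_ref(x):
    # brute force: 1 bits in the representations of 0..x
    return sum(bin(k).count("1") for k in range(x + 1))

x, i = 0b1011, 3   # any x with 2**i <= x < 2**(i+1)
assert bitsum_ref(x) == bitsum_ref(2**i - 1) + 1 + (x - 2**i) + bitsum_ref(x & (2**i - 1))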
Negative numbers are handled slightly differently than positive numbers. In this case the number of zeros is subtracted from the total number of bits:
Integer.MIN_VALUE <= x < -1
Total number of bits: 32 * -x.
The number of zeros in a negative number x is equal to the number of ones in -x - 1.
public class TwosComplement {
    //t[i] is the bitsum of (2^i)-1 for i in 0 to 31.
    private static long[] t = new long[32];
    static {
        t[0] = 0;
        t[1] = 1;
        int p = 2;
        for (int i = 2; i < 32; i++) {
            t[i] = 2 * t[i-1] + p;
            p = p << 1;
        }
    }

    //count the bits between x and y inclusive
    public static long bitsum(int x, int y) {
        if (y > x && x > 0) {
            return bitsum(y) - bitsum(x - 1);
        } else if (y >= 0 && x == 0) {
            return bitsum(y);
        } else if (y == x) {
            return Integer.bitCount(y);
        } else if (x < 0 && y == 0) {
            return bitsum(x);
        } else if (x < 0 && x < y && y < 0) {
            return bitsum(x) - bitsum(y + 1);
        } else if (x < 0 && x < y && 0 < y) {
            return bitsum(x) + bitsum(y);
        }
        throw new RuntimeException(x + " " + y);
    }

    //count the bits between 0 and x
    public static long bitsum(int x) {
        if (x == 0) return 0;
        if (x < 0) {
            if (x == -1) {
                return 32;
            } else {
                long y = -(long)x;
                return 32 * y - bitsum((int)(y - 1));
            }
        } else {
            int n = x;
            int sum = 0;     //x & (2^i)-1
            int j = 0;
            int i = 1;       //i = 2^j
            int lsb = n & 1; //least significant bit
            n = n >>> 1;
            while (n != 0) {
                sum += lsb * i;
                lsb = n & 1;
                n = n >>> 1;
                i = i << 1;
                j++;
            }
            long tot = t[j] + 1 + sum + bitsum(sum);
            return tot;
        }
    }
}
