Iterative Factorial Implementation - algorithm

I found multiple iterative solutions on the net for computing the factorial of n. They look something like this:
int Factorial(int number)
{
    int factorial = 1;
    for (int i = 1; i <= number; i++)
        factorial *= i;
    return factorial;
}
Doesn't Factorial(0) = 1 and Factorial(1) = 1? Therefore, the counter variable inside the for loop should start at 2, since multiplying by 1 changes nothing.
for (int i = 2; i <= number; i++)
    factorial *= i;
Is there some reason why they used 1 as the starting number for the counter?

It doesn't matter - either 1 or 2 will work, as multiplying by 1 does nothing. However, most loops start with 0 or 1, and this just follows the pattern. Also, the definition of factorial is often stated as the product of all positive integers up to n, so this includes one. Essentially, 1 is, aesthetically, a better starting point.
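For what it's worth, a quick sanity check in Python (the helper name fact_from is made up for illustration) that both starting points produce identical results:
def fact_from(start, n):
    result = 1
    for i in range(start, n + 1):
        result *= i
    return result

print(all(fact_from(1, n) == fact_from(2, n) for n in range(10)))  # True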

For an iterative function in Python:
def iterative_factorial(n):
    x = 1
    for i in range(2, n + 1):
        x *= i
    return x

print(iterative_factorial(5))  # prints 120

find kth good number

Recently, in a competitive coding exam I got this question:
A good number is a number whose sum of digits is divisible by 5. Examples: 5 (5), 14 (1+4), 19 (1+9), 23 (2+3).
The question is: you are given an integer n and another integer k; you have to find the kth good number greater than n.
Constraints: 1 < k < 10^9
Sample test 1 -
input: n = 6, k = 5
output: 32
Explanation: the good numbers after 6 are 14, 19, 23, 28, 32 (the 5th is 32)
Sample test 2 -
input: n = 5, k = 1
output: 14
Explanation: 5 is a good number, but we need one greater than 5, so the answer is 14
I have tried the naive approach, i.e. for each number greater than n, checking whether it is good and looping until I have found k good numbers. Here is my code:
def solve(n, k):
    n += 1
    count = 0
    while count < k:
        if sum(map(int, str(n))) % 5 == 0:
            count += 1
        n += 1
    return n - 1
But the above code gave me TLE. How can I do it with better time complexity? I have searched the internet for a similar question but was unable to find one. Please help.
Let's start with a simple question:
I give you a list of five consecutive numbers. How many of those numbers are divisible by 5? (I'm not talking about sums of digits yet; just the numbers, like 18, 19, 20, 21, 22.)
No problem there, right? So a slightly different question:
In a list of ten consecutive numbers, how many are divisible by 5?
Still pretty easy, no? Now let's look at your "good" numbers. We'll start by introducing the function digit_sum(n), which is the sum of the digits in n. For now, we don't need to write that function; we just need to know it exists. And here's another simple question:
If n is a number which does not end in the digit 9 and s is digit_sum(n), what is digit_sum(n+1) in terms of s? (Try a few numbers if that's not immediately clear.) (Bonus question: why does it matter whether the last digit is 9? Or to put it another way, why doesn't it matter which digit other than 9 is at the end? What's special about 9?)
Ok, almost there. Let's put these two ideas together:
Suppose n ends with 0. How many of the ten numbers digit_sum(n), digit_sum(n+1), digit_sum(n+2), … digit_sum(n+9) are divisible by 5? (See question 2).
Does that help you find a quick way to compute the kth good number after n? Hopefully, the answer is yes. Now you just need to generalise a little bit.
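To make those hints concrete, here is a sketch in Python (not the poster's code; the names are mine). It relies on the observation above: in an aligned block 10*t .. 10*t+9 the digit sums are s, s+1, ..., s+9, so exactly two of them are divisible by 5, which lets us jump over whole blocks instead of testing every number.
def digit_sum(n):
    s = 0
    while n:
        n, r = divmod(n, 10)
        s += r
    return s

def kth_good_after(n, k):
    m = n + 1
    # scan one by one up to the next multiple of 10 (at most 9 steps)
    while m % 10 != 0:
        if digit_sum(m) % 5 == 0:
            k -= 1
            if k == 0:
                return m
        m += 1
    # m is now a multiple of 10: each aligned block of ten numbers contains
    # exactly two good numbers, so skip whole blocks while more than two are needed
    skip = (k - 1) // 2
    m += 10 * skip
    k -= 2 * skip
    # the answer (k is now 1 or 2) lies inside the block starting at m
    while True:
        if digit_sum(m) % 5 == 0:
            k -= 1
            if k == 0:
                return m
        m += 1

print(kth_good_after(6, 5))  # 32, matching sample test 1
print(kth_good_after(5, 1))  # 14, matching sample test 2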
Maybe your digit-sum algorithm is too time-consuming? Try this:
def digits(n):
    sum = 0
    while n:
        n, r = divmod(n, 10)
        sum += r
    return sum

def solve(n, k):
    n += 1
    count = 0
    while count < k:
        if digits(n) % 5 == 0:
            count += 1
        n += 1
    return n - 1
Still, it's a trivial solution, and you can always run it offline and just submit a lookup table.
And here is a link for this sequence: http://oeis.org/A227793
Here's a digit dynamic-program that can answer how many such numbers we can find from 1 to the parameter, k. The function uses O(num_digits) search space. We could use it to search for the kth good number with binary search.
The idea generally is that the number of good numbers that include digit d in the ith position, and have a prefix mod 5 of mod1 so far, is equal to the count of valid digit-suffixes that have the complementary mod, mod2, so that (mod1 + mod2) mod 5 = 0.
JavaScript code comparing with brute force:
function getDigits(k){
  const result = [];
  while (k){
    result.push(k % 10);
    k = ~~(k / 10);
  }
  return result.reverse();
}

function g(i, mod, isK, ds, memo){
  const key = String([i, mod, isK]);
  if (memo.hasOwnProperty(key))
    return memo[key];
  let result = 0;
  const max = isK ? ds[i] : 9;
  // Single digit
  if (i == ds.length - 1){
    for (let j=0; j<=max; j++)
      result += (j % 5 == mod);
    return memo[key] = result;
  }
  // Otherwise
  for (let j=0; j<=max; j++){
    const m = j % 5;
    const t = (5 + mod - m) % 5;
    const next = g(i+1, t, isK && j == max, ds, memo);
    result += next;
  }
  return memo[key] = result;
}

function f(k){
  if (k < 10)
    return (k > 4) & 1;
  const ds = getDigits(k);
  const memo = {};
  let result = -1;
  for (let i=0; i<=ds[0]; i++){
    const m = i % 5;
    const t = (5 - m) % 5;
    const next = g(1, t, i == ds[0], ds, memo);
    result += next;
  }
  return result;
}

function bruteForce(k){
  let result = 0;
  //let ns = [];
  for (let i=1; i<=k; i++){
    const ds = getDigits(i);
    const sum = ds.reduce((a, b) => a + b, 0);
    if (!(sum % 5)){
      //ns.push(i);
      result += 1;
    }
  }
  //console.log(ns);
  return result;
}

var k = 3000;
for (let i=1; i<k; i++){
  const _bruteForce = bruteForce(i);
  const _f = f(i);
  if (_bruteForce != _f){
    console.log(i);
    console.log(_bruteForce);
    console.log(_f);
    break;
  }
}
console.log('Done.');

Calculating a Factorial of a number in best time

I am given N numbers and I want to calculate the sum of their factorials modulo m.
For example:
4 100
12 18 2 11
Ans = (12! + 18! + 2! + 11!) % 100
The constraints are 1 < N < 10^5 and the numbers satisfy 1 < Ni < 10^17.
How do I calculate it efficiently? The naive recursive approach will fail, i.e.:
int fact(int n){
    if(n==1) return 1;
    return n*fact(n-1)%m;
}
If you precalculate the factorials, taking % m at every operation, and use the hint from the comments about factorials of numbers bigger than m, you get something like this:
fact = new int[m];
f = fact[0] = 1;
for (int i = 1; i < m; i++)
{
    f = (f * i) % m;
    fact[i] = f;
}

sum = 0
for each (n in numbers)
{
    if (n < m)
    {
        sum = (sum + fact[n]) % m
    }
}
I'm not sure if it's best but it should work in a reasonable amount of time.
Upd: the code can be optimized using the fact that if (j!) % m == 0 for some number j, then (n!) % m == 0 for every n > j, so in some cases (usually when m is not a prime number) it's not necessary to precalculate factorials for all numbers less than m.
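A hedged Python sketch of the same idea, including that early stop (the function name is mine, not from the thread):
def sum_of_factorials_mod(numbers, m):
    # fact[i] holds i! % m; stop precomputing once the running value hits 0,
    # because i! % m == 0 implies n! % m == 0 for every n >= i
    fact = [1 % m]
    f = 1 % m
    i = 1
    while f != 0 and i < m:
        f = (f * i) % m
        fact.append(f)
        i += 1
    total = 0
    for n in numbers:
        if n < len(fact):
            total = (total + fact[n]) % m
    return total

print(sum_of_factorials_mod([12, 18, 2, 11], 100))  # 2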
try this:
var numbers = [12, 18, 2, 11]

function fact(n) {
    if (n == 1) return 1;
    return n * fact(n - 1);
}

var accumulator = 0
$.each(numbers, function(index, value) {
    accumulator += fact(value)
})

var answer = accumulator % 100
alert(accumulator)
alert(answer)
you can see it running here:
http://jsfiddle.net/orw4gztf/1/

How to fix my numberOfDigits function

I came across some code where the number of digits was being determined by converting the number to a string and then using Len().
Function numOfDigits_len(n As Long) As Long
    numOfDigits_len = Len(Str(n)) - 1
End Function
Now although this works I knew it would be slow compared to any method that didn't use strings, so I wrote one that uses log().
Function numOfDigits_log(n As Long) As Long
    numOfDigits_log = Int(Log(n) / Log(10)) + 1
End Function
This cut the run time in half, which was great, but something weird was happening in a specific case:
n numOfDigits_log(n)
===== ====================
999 3
1000 3
1001 4
It would not handle 1000 properly. I figured it was because of floating point and rounding issues.
Function numOfDigits_loop(ByVal n As Long) As Long
    Do Until n = 0
        n = n \ 10
        numOfDigits_loop = numOfDigits_loop + 1
    Loop
End Function
I wrote this, which turned out to be ~10% slower once numbers got larger than 10^6, and the gap seems to grow slowly as n gets bigger. That would be fine if I were being pragmatic, but I would like to find something closer to ideal.
Now my question is: is there a way to use the log() method accurately? I could do something like
Function numOfDigits_log(n As Long) As Long
    numOfDigits_log = Int(Log(n) / Log(10) + 0.000000001) + 1
End Function
But it seems very "hacky". Is there a nicer way that's faster or as fast as the log() method?
Note: I realize this kind of optimization is pointless in a lot of cases, but now that I've come across this I would like to "fix" it.
I've answered this before, but I couldn't find it, so here are the basics:
int i = ... some number >= 0 ...
int n = 1;
if (i >= 100000000){i /= 100000000; n += 8;}
if (i >= 10000){i /= 10000; n += 4;}
if (i >= 100){i /= 100; n += 2;}
if (i >= 10){i /= 10; n += 1;}
That's in C, but you get the idea.
A while loop guarantees correctness, i.e. it doesn't use any floating-point calculations:
int numDigits = 0;
while (num != 0) {
    num /= 10;
    numDigits++;
}
You can also speed this up by using a larger divisor:
int numDigits = 0;
if (num >= 100000 || num <= -100000) {
    int prevNum;
    while (num != 0) {
        prevNum = num;
        num /= 100000;
        numDigits += 5;
    }
    num = prevNum;
    numDigits -= 5;
}
while (num != 0) {
    num /= 10;
    numDigits++;
}
You'll love this.
We live in a base 10 number system! That means all you have to do is ROUND UP.
The length of a number is ALWAYS ceiling(log10(n)). So for instance: 7456412 (a 7-digit number): log10(7456412) = 6.87...; round up and you have 7. log10(9999) = 3.9999...; round up and it's 4.
The special case is when you DON'T have to round, i.e. when you have an exact power of 10. For instance, log10(1000) = 3. If you can detect when you have a power of 10, add one to the log result and you win!
The way you could do this detection is something like:
double log10;
int clog10;
int length;
log10 = (Log(n) / Log(10)); // can also use a private static final long hardcoded for Log(10)
clog10 = ceiling(log10);
if (Int(log10) == clog10)
    length = clog10 + 1;
else
    length = clog10;
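For completeness, here is a hedged sketch (in Python rather than the question's VBA) of one way to keep the speed of the log() approach while correcting the rounding edge cases near powers of 10: take the log-based estimate, then nudge it by comparing against actual powers of 10.
import math

def num_digits(n):
    n = abs(n)
    if n == 0:
        return 1
    d = int(math.log10(n)) + 1   # fast estimate; may be off by one near powers of 10
    if 10 ** d <= n:             # estimate came out too low
        d += 1
    elif 10 ** (d - 1) > n:      # estimate came out too high
        d -= 1
    return d

print([num_digits(n) for n in (999, 1000, 1001)])  # [3, 4, 4]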

Fastest way to generate binomial coefficients

I need to calculate combinations for a number.
What is the fastest way to calculate nCp where n >> p?
I need a fast way to generate binomial coefficients for a polynomial equation, and I need to get the coefficients of all the terms and store them in an array.
(a+b)^n = a^n + nC1 a^(n-1) b + nC2 a^(n-2) b^2 + ... + nC(n-1) a b^(n-1) + b^n
What is the most efficient way to calculate nCp?
You can use dynamic programming in order to generate binomial coefficients.
You can create an array and then use an O(N^2) loop to fill it:
C[n, k] = C[n-1, k-1] + C[n-1, k];
where
C[n, 0] = C[n, n] = 1
After that, in your program you can get the C(n, k) value by just looking at your 2D array at the [n, k] indices.
UPDATE: something like this:
for (int k = 1; k <= K; k++) C[0][k] = 0;
for (int n = 0; n <= N; n++) C[n][0] = 1;

for (int n = 1; n <= N; n++)
    for (int k = 1; k <= K; k++)
        C[n][k] = C[n-1][k-1] + C[n-1][k];
where N, K are the maximum values of your n, k.
If you need to compute them for all n, Ribtoks's answer is probably the best.
For a single n, you're better off doing it like this:
C[0] = 1
for (int k = 0; k < n; ++k)
    C[k+1] = (C[k] * (n-k)) / (k+1)
The division is exact, if done after the multiplication.
And beware of overflowing with C[k] * (n-k) : use large enough integers.
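As a runnable illustration of that recurrence (Python; each intermediate value is itself a binomial coefficient, so the division is exact if done after the multiplication):
def binomial_row(n):
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))  # multiply first, then divide exactly
    return row

print(binomial_row(5))  # [1, 5, 10, 10, 5, 1]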
If you want complete expansions for large values of n, FFT convolution might be the fastest way. In the case of a binomial expansion with equal coefficients (e.g. a series of fair coin tosses) and an even order (e.g. number of tosses) you can exploit symmetries thus:
Theory
Represent the results of two coin tosses (e.g. half the difference between the total number of heads and tails) with the expression A + A*cos(Pi*n/N). N is the number of samples in your buffer - a binomial expansion of even order O will have O+1 coefficients and require a buffer of N >= O/2 + 1 samples - n is the sample number being generated, and A is a scale factor that will usually be either 2 (for generating binomial coefficients) or 0.5 (for generating a binomial probability distribution).
Notice that, in frequency, this expression resembles the binomial distribution of those two coin tosses - there are three symmetrical spikes at positions corresponding to the number (heads-tails)/2. Since modelling the overall probability distribution of independent events requires convolving their distributions, we want to convolve our expression in the frequency domain, which is equivalent to multiplication in the time domain.
In other words, by raising our cosine expression for the result of two tosses to a power (e.g. to simulate 500 tosses, raise it to the power of 250 since it already represents a pair), we can arrange for the binomial distribution for a large number to appear in the frequency domain. Since this is all real and even, we can substitute the DCT-I for the DFT to improve efficiency.
Algorithm
decide on a buffer size, N, that is at least O/2 + 1 and can be conveniently DCTed
initialise it with the expression pow(A + A*cos(Pi*n/N),O/2)
apply the forward DCT-I
read out the coefficients from the buffer - the first number is the central peak where heads=tails, and subsequent entries correspond to symmetrical pairs successively further from the centre
Accuracy
There's a limit to how high O can be before accumulated floating-point rounding errors rob you of accurate integer values for the coefficients, but I'd guess the number is pretty high. Double-precision floating-point can represent 53-bit integers with complete accuracy, and I'm going to ignore the rounding loss involved in the use of pow() because the generating expression will take place in FP registers, giving us an extra 11 bits of mantissa to absorb the rounding error on Intel platforms. So assuming we use a 1024-point DCT-I implemented via the FFT, that means losing 10 bits' accuracy to rounding error during the transform and not much else, leaving us with ~43 bits of clean representation. I don't know what order of binomial expansion generates coefficients of that size, but I dare say it's big enough for your needs.
Asymmetrical expansions
If you want the asymmetrical expansions for unequal coefficients of a and b, you'll need to use a two-sided (complex) DFT and a complex pow() function. Generate the expression A*A*e^(-Pi*i*n/N) + A*B + B*B*e^(+Pi*i*n/N) [using the complex pow() function to raise it to the power of half the expansion order] and DFT it. What you have in the buffer is, again, the central point (but not the maximum if A and B are very different) at offset zero, and it is followed by the upper half of the distribution. The upper half of the buffer will contain the lower half of the distribution, corresponding to heads-minus-tails values that are negative.
Notice that the source data is Hermitian symmetrical (the second half of the input buffer is the complex conjugate of the first), so this algorithm is not optimal and can be performed using a complex-to-complex FFT of half the required size for optimum efficiency.
Needless to say, all the complex exponentiation will chew more CPU time and hurt accuracy compared to the purely real algorithm for symmetrical distributions above.
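This is not the DCT-I trick described above, but a minimal illustration of the underlying "convolve by multiplying spectra" idea, using a plain real FFT in Python/numpy (the function name is mine); accuracy is limited by double precision exactly as discussed in the Accuracy section:
import numpy as np

def binomial_row_fft(order):
    size = order + 1                            # (1 + x)^order has order+1 coefficients
    spectrum = np.fft.rfft([1.0, 1.0], n=size)  # spectrum of the polynomial 1 + x
    coeffs = np.fft.irfft(spectrum ** order, n=size)
    return np.rint(coeffs).astype(np.int64)

print(binomial_row_fft(10))  # [1 10 45 120 210 252 210 120 45 10 1]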
This is my version:
def binomial(n, k):
    if k == 0:
        return 1
    elif 2*k > n:
        return binomial(n, n-k)
    else:
        e = n - k + 1
        for i in range(2, k+1):
            e *= (n - k + i)
            e //= i   # integer division is exact at every step
        return e
I recently wrote a piece of code that needed to compute a binomial coefficient about 10 million times. So I did a combined lookup-table/calculation approach that's still not too wasteful of memory. You might find it useful (and my code is in the public domain). The code is at
http://www.etceterology.com/fast-binomial-coefficients
It's been suggested that I inline the code here. A big honking lookup table seems like a waste, so here's the final function, and a Python script that generates the table:
extern long long bctable[]; /* See below */

long long binomial(int n, int k) {
    int i;
    long long b;
    assert(n >= 0 && k >= 0);
    if (0 == k || n == k) return 1LL;
    if (k > n) return 0LL;
    if (k > (n - k)) k = n - k;
    if (1 == k) return (long long)n;
    if (n <= 54 && k <= 54) {
        return bctable[(((n - 3) * (n - 3)) >> 2) + (k - 2)];
    }
    /* Last resort: actually calculate */
    b = 1LL;
    for (i = 1; i <= k; ++i) {
        b *= (n - (k - i));
        if (b < 0) return -1LL; /* Overflow */
        b /= i;
    }
    return b;
}
#!/usr/bin/env python3
import sys

class App(object):
    def __init__(self, max):
        self.table = [[0 for k in range(max + 1)] for n in range(max + 1)]
        self.max = max

    def build(self):
        for n in range(self.max + 1):
            for k in range(self.max + 1):
                if k == 0: b = 1
                elif k > n: b = 0
                elif k == n: b = 1
                elif k == 1: b = n
                elif k > n-k: b = self.table[n][n-k]
                else:
                    b = self.table[n-1][k] + self.table[n-1][k-1]
                self.table[n][k] = b

    def output(self, val):
        if val > 2**63: val = -1
        text = " {0}LL,".format(val)
        if self.column + len(text) > 76:
            print("\n ", end = "")
            self.column = 3
        print(text, end = "")
        self.column += len(text)

    def dump(self):
        count = 0
        print("long long bctable[] = {", end="")
        self.column = 999
        for n in range(self.max + 1):
            for k in range(self.max + 1):
                if n < 4 or k < 2 or k > n-k:
                    continue
                self.output(self.table[n][k])
                count += 1
        print("\n}}; /* {0} Entries */".format(count))

    def run(self):
        self.build()
        self.dump()
        return 0

def main(args):
    return App(54).run()

if __name__ == "__main__":
    sys.exit(main(sys.argv))
If you really only need the case where n is much larger than p, one way to go would be to use Stirling's formula for the factorials (if n >> 1 and p is of order one, use Stirling to approximate n! and (n-p)!, keep p! as it is, etc.).
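As a rough Python sketch of that idea, one can lean on the log-gamma function instead of writing out Stirling's series by hand; the rounding at the end is only trustworthy while the true value sits comfortably within double precision:
import math

def approx_binomial(n, p):
    log_c = math.lgamma(n + 1) - math.lgamma(p + 1) - math.lgamma(n - p + 1)
    return round(math.exp(log_c))

print(approx_binomial(1000, 3))  # 166167000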
The fastest reasonable approximation in my own benchmarking is the approximation used by the Apache Commons Maths library: http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/special/Gamma.html#logGamma(double)
My colleagues and I tried to see if we could beat it while using exact calculations rather than approximations. All approaches failed miserably (many orders of magnitude slower) except one, which was 2-3 times slower. The best performing approach uses https://math.stackexchange.com/a/202559/123948; here is the code (in Scala):
var i: Int = 0
var binCoeff: Double = 1
while (i < k) {
    binCoeff *= (n - i) / (k - i).toDouble
    i += 1
}
binCoeff
The really bad approaches were various attempts at implementing Pascal's Triangle using tail recursion.
nCp = n! / ( p! (n-p)! ) =
( n * (n-1) * (n-2) * ... * (n - p) * (n - p - 1) * ... * 1 ) /
( p * (p-1) * ... * 1 * (n - p) * (n - p - 1) * ... * 1 )
If we cancel the common terms of the numerator and the denominator, we are left with the minimal number of multiplications required. We can write a function in C that performs roughly 2p multiplications and one division to get nCp:
int binom(int p, int n) {
    if (p == 0) return 1;
    int num = n;
    int den = p;
    while (p > 1) {
        p--;
        num *= n - p;
        den *= p;
    }
    return num / den;
}
I was looking for the same thing and couldn't find it, so I wrote one myself that seems optimal for any binomial coefficient whose end result fits into a long.
// Calculate Binomial Coefficient
// Jeroen B.P. Vuurens
public static long binomialCoefficient(int n, int k) {
    // take the lowest possible k to reduce computing using: n over k = n over (n-k)
    k = java.lang.Math.min(k, n - k);

    // holds the high number: fi. (1000 over 990) holds 991..1000
    long highnumber[] = new long[k];
    for (int i = 0; i < k; i++)
        highnumber[i] = n - i; // the high number first, order is important

    // holds the dividers: fi. (1000 over 990) holds 2..10
    int dividers[] = new int[k - 1];
    for (int i = 0; i < k - 1; i++)
        dividers[i] = k - i;

    // for every divider there always exists a highnumber that can be divided by
    // it, the number of highnumbers being a sequence that equals the number of
    // dividers. Thus, the only trick needed is to divide in reverse order, so
    // divide the highest divider first, trying it on the highest highnumber first.
    // That way you do not need to do any tricks with primes.
    for (int divider : dividers) {
        boolean eliminated = false;
        for (int i = 0; i < k; i++) {
            if (highnumber[i] % divider == 0) {
                highnumber[i] /= divider;
                eliminated = true;
                break;
            }
        }
        if (!eliminated) throw new Error(n + "," + k + " divider=" + divider);
    }

    // multiply remainder of highnumbers
    long result = 1;
    for (long high : highnumber)
        result *= high;
    return result;
}
If I understand the notation in the question, you don't just want nCp, you actually want all of nC1, nC2, ... nC(n-1). If this is correct, we can leverage the following relationship to make this fairly trivial:
for all k > 0: nCk = prod_{i=1..k} ( (n-i+1) / i )
i.e. for all k > 0: nCk = nC(k-1) * (n-k+1) / k
Here's a python snippet implementing this approach:
def binomial_coef_seq(n, k):
    """Returns a list of all binomial terms from choose(n,0) up to choose(n,k)"""
    b = [1]
    for i in range(1, k+1):
        b.append(b[-1] * (n-i+1) // i)  # integer division is exact here
    return b
If you need all coefficients up to some k > ceiling(n/2), you can use symmetry to reduce the number of operations you need to perform by stopping at the coefficient for ceiling(n/2) and then just backfilling as far as you need.
import numpy as np

def binomial_coef_seq2(n, k):
    """Returns a list of all binomial terms from choose(n,0) up to choose(n,k)"""
    k2 = int(np.ceil(n / 2))
    k_requested = k
    use_symmetry = k > k2
    if use_symmetry:
        k = k2
    b = [1]
    for i in range(1, k+1):
        b.append(b[-1] * (n-i+1) // i)
    if use_symmetry:
        # choose(n, j) == choose(n, n-j): mirror the entries already computed
        b2 = b[n - k_requested : n - k2][::-1]
        b.extend(b2)
    return b
Time Complexity : O(denominator)
Space Complexity : O(1)
public class binomialCoeff {

    static double binomialcoeff(int numerator, int denominator)
    {
        double res = 1;

        // invalid numbers
        if (denominator > numerator || denominator < 0 || numerator < 0) {
            res = -1;
            return res;
        }

        // default values
        if (denominator == numerator || denominator == 0 || numerator == 0)
            return res;

        // Since C(n, k) = C(n, n-k)
        if (denominator > (numerator - denominator))
            denominator = numerator - denominator;

        // Calculate value of [n * (n-1) *---* (n-k+1)] / [k * (k-1) *----* 1]
        while (denominator >= 1)
        {
            res *= numerator;
            res = res / denominator;
            denominator--;
            numerator--;
        }
        return res;
    }

    /* Driver program to test above function */
    public static void main(String[] args)
    {
        int numerator = 120;
        int denominator = 20;
        System.out.println("Value of C(" + numerator + ", " + denominator + ") "
                + "is" + " " + binomialcoeff(numerator, denominator));
    }
}

Write a function to divide a number by 3 without using /, % and * operators. itoa() available?

I tried to solve it myself but I could not get any clue.
Please help me to solve this.
Are you supposed to use itoa() for this assignment? Because then you could use that to convert to a base 3 string, drop the last character, and then restore back to base 10.
Using the mathematical relation:
1/3 == Sum[1/2^(2n), {n, 1, Infinity}]
We have
int div3 (int x) {
    int64_t blown_up_x = x;
    for (int power = 1; power < 32; power += 2)
        blown_up_x += ((int64_t)x) << power;
    return (int)(blown_up_x >> 33);
}
If you can only use 32-bit integers,
int div3 (int x) {
    int two_third = 0, four_third = 0;
    for (int power = 0; power < 31; power += 2) {
        four_third += x >> power;
        two_third += x >> (power + 1);
    }
    return (four_third - two_third) >> 2;
}
The 4/3 - 2/3 treatment is used because x >> 1 is floor(x/2) instead of round(x/2).
EDIT: Oops, I misread the title's question. Multiply operator is forbidden as well.
Anyway, I believe it's good not to delete this answer for those who didn't know about dividing by non-power-of-two constants.
The solution is to multiply by a magic number and then extract the 32 leftmost bits:
dividing by 3 is equivalent to multiplying by 1431655766 and then shifting right by 32. In C (the intermediate product needs 64 bits, so it is widened before the shift):
int divideBy3(int n)
{
    return (int)(((int64_t)n * 1431655766) >> 32);
}
See Hacker's Delight Magic number calculator.
x/3 = e^(ln(x) - ln(3))
Here's a solution implemented in C++:
#include <iostream>

int letUserEnterANumber()
{
    int numberEnteredByUser;
    std::cin >> numberEnteredByUser;
    return numberEnteredByUser;
}

int divideByThree(int x)
{
    std::cout << "What is " << x << " divided by 3?" << std::endl;
    int answer = 0;
    while ( answer + answer + answer != x )
    {
        answer = letUserEnterANumber();
    }
    return answer;
}
;-)
if (number < 0) { // Edited after comments
    number = -(number);
}
quotient = 0;
while (number - 3 >= 0) { // Edited after comments..
    number = number - 3;
    quotient++;
} // after the loop exits, the value in number will give you the remainder
EDIT: Tested and working perfectly fine :(
Hope this helped. :-)
#include <stdlib.h>   /* itoa (non-standard), strtol */
#include <string.h>   /* strlen */

long divByThree(int x)
{
    char buf[100];
    itoa(x, buf, 3);            /* write x in base 3 */
    buf[strlen(buf) - 1] = 0;   /* dropping the last base-3 digit divides by 3 */
    char* tmp;
    long res = strtol(buf, &tmp, 3);
    return res;
}
Sounds like homework :)
I imagine you can write a function which iteratively divides a number. E.g. you can model what you do with a pen and a piece of paper to divide numbers. Or you can use shift operators and + to figure out whether your intermediate result is too small/big and iteratively apply corrections. I'm not going to write down the code though ...
unsigned int div3(unsigned int m) {
    unsigned long long n = m;
    n += n << 2;
    n += n << 4;
    n += n << 8;
    n += n << 16;
    return (n + m) >> 32;
}
int divideby3(int n)
{
    int x = 0;
    if (n < 3) { return 0; }
    while (n >= 3)
    {
        n = n - 3;
        x++;
    }
    return x;
}
You can use a property of numbers: a number is divisible by 3 if its digit sum is divisible by 3.
Take the individual digits from itoa() and then apply a switch to them recursively with additions and itoa().
Hope this helps
This is very easy, so easy I'm only going to hint at the answer --
Basic boolean logic gates (and, or, not, xor, ...) don't do division. Despite this handicap CPUs can do division. Your solution is obvious: find a reference which tells you how to build a divider out of boolean logic and write some code to implement that.
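As a hedged sketch of what such a shift-and-subtract ("hardware style") divider looks like when imitated in software (Python, non-negative inputs only; the name is mine): it is ordinary binary long division, bringing down one bit at a time and subtracting 3 whenever the running remainder allows it.
def div3_long_division(n, bits=32):
    quotient = 0
    remainder = 0
    for i in range(bits - 1, -1, -1):
        remainder = (remainder << 1) | ((n >> i) & 1)  # bring down the next bit
        quotient <<= 1
        if remainder >= 3:
            remainder -= 3
            quotient |= 1
    return quotient

print(div3_long_division(100))  # 33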
How about this, in some kind of Python-like pseudo-code? It splits the answer into an integer part and a fraction part. If you want to convert it to a floating-point representation, then I am not sure of the best way to do that.
x = <a number>
total = x
intpart = 0
fracpart = 0

% Find the integer part
while total >= 3
    total = total - 3
    intpart = intpart + 1

% Fraction is what remains
fracpart = total

print "%d / 3 = %d + %d/3" % (x, intpart, fracpart)
Note that this will not work for negative numbers. To fix that you need to modify the algorithm:
total = abs(x)
is_neg = abs(x) != x
....
if is_neg
print "%d / 3 = -(%d + %d/3)" % (x, intpart, fracpart)
for positive integer division (rounding down)
result = 0
while (result + result + result + 3 <= input)
    result += 1
return result
Convert 1/3 into binary:
1/3 = 0.01010101010101010101010101...
and then just "multiply" with this number using shifts and sums.
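A hedged sketch of that idea in Python (non-negative integers; the names are mine): add the right-shifted copies of n, then fix the small truncation error from the shifts with a short correction loop that only uses addition and comparison.
def div3_shift_add(n):
    q = 0
    shifted = n >> 2
    while shifted:
        q += shifted          # accumulate n * 0.01, n * 0.0001, ... (binary fractions)
        shifted >>= 2
    # each truncated shift loses a little, so q can be slightly below n // 3;
    # nudge it upward while one more unit of quotient still fits
    while (q + 1) + (q + 1) + (q + 1) <= n:
        q += 1
    return q

print(div3_shift_add(100))  # 33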
There is a solution posted on http://bbs.chinaunix.net/forum.php?mod=viewthread&tid=3776384&page=1&extra=#pid22323016
int DividedBy3(int A) {
    int p = 0;
    for (int i = 2; i <= 32; i += 2)
        p += A << i;
    return (-p);
}
Please say something about that, thanks:)
Here's an O(log(n)) way to do it with no bit shifting, so it can handle numbers up to and including your biggest register size.
(c-style code)
long long unsigned Div3(long long unsigned n)
{
    // base case:
    if (n < 6)
        return (n >= 3);

    long long unsigned division = 0;
    long long unsigned remainder = 0;

    // Used for results for only a single power of 2
    // Initialise for 2^0
    long long unsigned tmp_div = 0;
    long long unsigned tmp_rem = 1;

    for (long long unsigned pow_2 = 1; pow_2 && (pow_2 <= n); pow_2 += pow_2)
    {
        if (n & pow_2)
        {
            division += tmp_div;
            remainder += tmp_rem;
        }
        if (tmp_rem == 1)
        {
            tmp_div += tmp_div;
            tmp_rem = 2;
        }
        else
        {
            tmp_div += tmp_div + 1;
            tmp_rem = 1;
        }
    }
    return division + Div3(remainder);
}
It uses recursion, but note that the number drops exponentially in size at each iteration, so the time complexity (TC) is really:
O(TC) = O(log(n) + log(log(n)) + log(log(log(n))) + ... + z)
where z < 6.
Proof that it's O(log(n)):
We note that the number at each recursion strictly decreases (by at least 1):
So the series [log(log(n))] + [log(log(log(n)))] + [...] + [z] has at most log(log(n)) terms.
implies:
series <= log(log(n))*log(log(n))
implies:
O(TC) = O(log(n) + log(log(n))*log(log(n)))
Now we note for n sufficiently large:
sqrt(x) > log(x)
iff:
x/sqrt(x) > log(x)
implies:
x/log(x) > log(x)
iff:
x > log(x)*log(x)
So O(x) > O(log(x)*log(x))
Now let x = log(n)
implies:
O(log(n)) > O(log(log(n))*log(log(n)))
and given:
O(TC) = O(log(n) + log(log(n))*log(log(n)))
implies:
O(TC) = O(log(n))
Slow and naive, but it should work if the division is exact. Addition is allowed, right?
for number from 1 to input
    if number + number + number == input
        return number
Extending it for fractional divisors is left as an exercise to the reader.
Basically test for +1 and +2 I think...
