I came across this question in a CS job interview. I have no idea how to approach it, let alone implement the code…
Could I get some tips?
P.S. exp() is the function y = e^x and ln() is y = ln(x)
You can find the value in logarithmic time by binary searching for the answer. This is possible because log X is a monotonically increasing function.
(Plot of the monotonically increasing log function, courtesy of WolframAlpha.)
For example, if the value whose logarithm we have to calculate (call it X) is greater than 1, start with a guess for the answer, compute e^guess, and check whether the result is greater or smaller than X. Based on that comparison you can refine your limits, halving the search interval each time. The search stops once you are within a suitable tolerance of the answer.
double log(double X){
    const double error = 1e-9;  // acceptable tolerance
    double lo = 0;              // for X > 1, ln(X) lies in (0, X)
    double hi = X;
    while(true){
        double mid = (lo+hi)/2;
        double val = exp(mid);  // the given exp() primitive
        if(fabs(val-X) < error){
            return mid;
        }
        if(val > X){
            hi = mid;
        } else {
            lo = mid;
        }
    }
}
Similarly, if the value of X is smaller than 1, you can reduce this case to the one we have already considered, i.e. X greater than 1. For example, if X = 0.04, then
log 0.04 = log (4/100)
= (log 4) - (log 100)
If X is positive, then the logarithm can be found using Newton's method.
x_0 = 0
x_{n+1} = x_n - (exp(x_n) - X) / exp(x_n)
Convergence is very fast: quadratic once the iterate is near ln(X).
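A minimal sketch of that iteration in Java (my own illustration, not from the original answer; Math.exp stands in for the given exp() primitive, and eps is a caller-chosen tolerance):

// Newton's method for ln(X): find the root t of exp(t) - X = 0.
// The update t - (exp(t) - X)/exp(t) is algebraically rewritten as
// t - 1 + X*exp(-t), which stays finite even if exp(t) overflows.
static double ln(double X, double eps) {
    double t = 0.0;             // x_0 = 0
    while (true) {
        double next = t - 1.0 + X * Math.exp(-t);
        if (Math.abs(next - t) < eps) return next;
        t = next;
    }
}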
Adapting this answer to scale X into the range [1, e]. A few things we know about ln(x): it is only defined for x > 0, ln(1) = 0, and the result can be any number from -infinity to +infinity. Also ln(x^a) = a * ln(x), so in particular ln(1/x) = -ln(x), and ln(X/e) = ln(X) - ln(e), so ln(X) = ln(X/e) + 1.
double E = exp(1);

double ln(double X) {
    const double error = 1e-9;  // acceptable tolerance
    if(X <= 0) return NAN;      // ln is only defined for X > 0
    // use recursion to reduce X into a known range
    if(X < 1) {
        return - ln( 1 / X );
    }
    if(X > E) {
        return ln(X/E) + 1;
    }
    // X is now between 1 and e,
    // so ln(X) is between 0 and 1
    double lo = 0;
    double hi = 1;
    while(true){
        double mid = (lo+hi)/2;
        double val = exp(mid);
        if(fabs(val-X) < error){
            return mid;
        }
        if(val > X){
            hi = mid;
        } else {
            lo = mid;
        }
    }
}
If you look at the actual implementations of mathematical functions in libraries, they do quite a lot of prescaling work to narrow the range of the input, probably more aggressively than is done here.
I am trying to write a Carmichael function in Processing for some RSA encryption experiments, but the modulo operation seems to give many wrong answers.
Here is my code:
int carmichael(int n) {
    int checkIndex = 0;
    int m = 1;
    ArrayList<Integer> coprimes = findCoprimesLessThan(n);
    println(coprimes);
    for (m = 1; m < 50; m++) {
        for (checkIndex = 0; checkIndex < coprimes.size(); checkIndex++) {
            int a = coprimes.get(checkIndex);
            float mod = pow(a, m) % n;
            println(a, m, n, mod, pow(a, m), pow(a, m) % n);
            if (mod == 1) {
                continue;
            }
            if (mod != 1) {
                break;
            }
            return m;
        }
    }
    return 1;
}
And for an input of, say, 31, it loops forever (I cap m just for this reason, so it outputs 1 if it runs through every value without finding anything) when it should give 30. I believe I have narrowed the problem down to the modulo operation not working on large numbers, for example:
when a = 3, m = 30, and n = 31, my println statement gives this:
3 30 31 18.0 2.05891136E14 18.0
and all of that is correct except the modulo: it gives 18.0 when it should be 1.0. I am unsure of any way to get around this, as even doing a "manual modulus" like this:
while (mod >= n) {
    mod -= n;
}
results in exactly the same problem. All the research I have done into the Carmichael function has led me either to confusion or back here, which was no help.
My guess is you're hitting a limit of float precision.
Float values can only track a certain amount of precision. Try running this example program:
float one = 123456789;
float two = one + 1;
println(one == two);
You would expect this to print false, but if you run it, you'll see that it prints true instead. This is because we're outside the bounds of precision.
To get around this, you could upgrade to the double type. Double values have the same problem, but at a higher level of precision.
double one = 123456789;
double two = one + 1;
println(one == two);
Getting back to your code: by default Processing treats everything as a float value. This is fine for most cases, but if you need lots of precision then you're better off switching to double values.
int a = 3;
int m = 30;
int n = 31;
double p = Math.pow(a, m);
println(p);
double mod = p % n;
println(mod);
Note that I'm using Math.pow() instead of pow(). The Math.pow() function comes from Java and takes and returns double values instead of float values.
(By the way, this is the type of example program I was talking about in the comments.)
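If you need exact values for larger exponents, even double eventually runs out of bits. Since only pow(a, m) % n matters here, an alternative (my own sketch, not from the answer above) is to reduce modulo n after every multiplication, so the intermediate values never grow:

// (a^m) mod n in plain long arithmetic, square-and-multiply style.
// Intermediate products stay below n^2, so nothing overflows for n < 2^31.
static long modPow(long a, long m, long n) {
    long result = 1;
    a %= n;
    while (m > 0) {
        if ((m & 1) == 1) result = (result * a) % n;  // fold in this bit of m
        a = (a * a) % n;                              // square the base
        m >>= 1;
    }
    return result;
}

For example, modPow(3, 30, 31) returns 1, with no floating-point rounding involved.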
How can we develop a dynamic programming algorithm that calculates the minimum number of different primes that sum to x?
Assume the dynamic program computes, for each pair (x, p), the minimum number of different primes summing to x whose largest member is at most p. Can someone help?
If we assume the Goldbach conjecture is true, then every even integer > 2 is the sum of two primes.
So we know the answer if x is even (1 if x==2, or 2 otherwise).
If x is odd, then there are 3 cases:
x is prime (answer is 1)
x-2 is prime (answer is 2)
otherwise x-3 is an even number bigger than 2 (answer is 3)
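Under the conjecture, that case analysis is a few lines of code. A sketch in Java (my own illustration; isPrime is an assumed helper, trial division here):

// Minimum number of primes summing to x, assuming Goldbach.
static int minPrimes(int x) {
    if (x < 2) throw new IllegalArgumentException("no prime sum exists");
    if (isPrime(x)) return 1;       // covers x == 2 and all odd primes
    if (x % 2 == 0) return 2;       // even x > 2: Goldbach
    if (isPrime(x - 2)) return 2;   // odd x = 2 + prime
    return 3;                       // odd x = 3 + (even > 2), Goldbach again
}

static boolean isPrime(int n) {     // assumed helper: trial division
    if (n < 2) return false;
    for (int d = 2; (long) d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}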
First of all, you need a list of primes up to x. Let's call this array of integers primes.
Now we want to populate the array answer[x][p], where x is the sum of the primes and p is an upper bound on the primes in the set (the set may, but need not, include p itself).
There are 3 possibilities for answer[x][p] after all calculations:
1) if p=x and p is prime => answer[x][p] contains 1
2) if it's not possible to solve problem for given x, p => answer[x][p] contains -1
3) if it's possible to solve problem for given x, p => answer[x][p] contains number of primes
There is one more possible value for answer[x][p] during calculations:
4) we did not yet solve the problem for given x, p => answer[x][p] contains 0
It's quite obvious that 0 is not the answer for anything but x=0, so we are safe initializing array with 0 (and making special treatment for x=0).
To calculate answer[x][p] we can iterate (let q be the prime we are iterating on) through all primes up to and including p, and take the minimum over 1 + answer[x-q][q-1], skipping the cases where answer[x-q][q-1] = -1. The 1 accounts for q itself; answer[x-q][q-1] is computed by a recursive call or earlier in the calculation.
Now there's a small optimization: iterate the primes from higher to lower, and if x/q reaches the current answer, stop, because to make the sum x we would need at least x/q primes anyway. For example, we will not even consider q=2 for x=10, as we'd already have answer=3 (it actually does include 2 as one of the 3 primes, 2+3+5, but we've already got that through the recursive call answer(10-5, 4)); since 10/2=5, the best q=2 could give is 5 primes (and in fact no solution exists for q=2).
package ru.tieto.test;

import java.util.ArrayList;

public class Primers {
    static final int MAX_P = 10;
    static final int MAX_X = 10;

    public ArrayList<Integer> primes = new ArrayList<>();
    public int answer[][] = new int[MAX_X+1][MAX_P+1];

    public int answer(int x, int p) {
        if (x < 0)
            return -1;
        if (x == 0)
            return 0;
        if (answer[x][p] != 0)
            return answer[x][p];
        int max_prime_idx = -1;
        for (int i = 0;
             i < primes.size() && primes.get(i) <= p && primes.get(i) <= x;
             i++)
            max_prime_idx = i;
        if (max_prime_idx < 0) {
            answer[x][p] = -1;
            return -1;
        }
        int cur_answer = x+1;
        for (int i = max_prime_idx; i >= 0; i--) {
            int q = primes.get(i);
            if (x / q >= cur_answer)
                break;
            if (x == q) {
                cur_answer = 1;
                break;
            }
            int candidate = answer(x-q, q-1);
            if (candidate == -1)
                continue;
            if (candidate+1 < cur_answer)
                cur_answer = candidate+1;
        }
        if (cur_answer > x)
            answer[x][p] = -1;
        else
            answer[x][p] = cur_answer;
        return answer[x][p];
    }

    private void make_primes() {
        primes.add(2);
        for (int p = 3; p <= MAX_P; p = p+2) {
            boolean isPrime = true;
            for (Integer q : primes) {
                if (q*q > p)
                    break;
                if (p % q == 0) {
                    isPrime = false;
                    break;
                }
            }
            if (isPrime)
                primes.add(p);
        }
        // for (Integer q : primes)
        //     System.out.print(q+",");
        // System.out.println("<<");
    }

    private void init() {
        make_primes();
        for (int p = 0; p <= MAX_P; p++) {
            answer[0][p] = 0;
            answer[1][p] = -1;
        }
        for (int x = 2; x <= MAX_X; x++) {
            for (int p = 0; p <= MAX_P; p++)
                answer[x][p] = 0;
        }
        for (Integer p : primes)
            answer[p][p] = 1;
    }

    void run() {
        init();
        for (int x = 0; x <= MAX_X; x++)
            for (int p = 0; p <= MAX_P; p++)
                answer(x, p);
    }

    public static void main(String[] args) {
        Primers me = new Primers();
        me.run();
        // for (int x = 0; x <= MAX_X; x++) {
        //     System.out.print("x="+x+": {");
        //     for (int p = 0; p <= MAX_P; p++) {
        //         System.out.print(String.format("%2d=%-3d,", p, me.answer[x][p]));
        //     }
        //     System.out.println("}");
        // }
    }
}
Start with a list of all primes lower than x.
Take the largest, pmax, and solve the problem for (x - pmax); at this stage that will be easy, since x - pmax is low. Mark the primes you used and store the solution. Now take the largest prime still in the list and repeat until all the primes are either used or rejected. If (x - pmax) is high, the problem gets more complex.
That's your first-pass, brute-force algorithm. Get it working before considering how to speed things up; a sketch follows.
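A sketch of such a first pass in Java (my own reading of the above; primesDesc is assumed to hold the primes below x in descending order, and it backtracks over all choices rather than committing greedily, so it always finds the minimum):

import java.util.List;

// Try each prime <= x as the next summand, recursing on the remainder with
// only smaller primes allowed, which keeps the chosen primes distinct.
// Returns Integer.MAX_VALUE when no decomposition exists.
static int minDistinctPrimes(int x, List<Integer> primesDesc, int startIdx) {
    if (x == 0) return 0;
    int best = Integer.MAX_VALUE;
    for (int i = startIdx; i < primesDesc.size(); i++) {
        int p = primesDesc.get(i);
        if (p > x) continue;            // reject: too big for the remainder
        int rest = minDistinctPrimes(x - p, primesDesc, i + 1);
        if (rest != Integer.MAX_VALUE)
            best = Math.min(best, rest + 1);
    }
    return best;
}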
Assuming you're not using the Goldbach conjecture (otherwise see Peter de Rivaz's excellent answer): dynamic programming generally takes advantage of overlapping subproblems. Usually you go top-down, but in this case bottom-up may be simpler.
I suggest you sum various combinations of primes.
from itertools import combinations_with_replacement

lookup = {}
for r in range(1, 3):
    for primes in combinations_with_replacement(all_primes, r):
        s = sum(primes)
        lookup[s] = lookup.get(s, r)  # r is increasing, so only set it if it's not already there
This will start getting slow very quickly if you have a large number of primes. In that case, cap r at something like 1 or 2, whatever is fast enough for you; you will then be left with some numbers that aren't found. To solve for a number that doesn't have a solution in lookup, try breaking it into sums of numbers that are found in lookup (you may need to store the prime combos in lookup and dedupe those combinations).
Given

f(n) = 1 + x + x^2 + x^3 + … + x^n (n >= 0, n an integer)

and input x, n, how can we work out the result with greater efficiency?
It's a geometric progression. Noting that
(x-1)f(n) = x^{n+1} - 1
you get, for x ≠ 1,
f(n) = (x^{n+1} - 1)/(x - 1)
(for x = 1 the sum is simply n + 1).
This does n multiplies and n additions. It's easy to put the sum into closed form, but computing the closed form requires evaluating x^{n+1}, which could also end up doing n multiplies, though it doesn't require a divide.
Although this is actually valid C, think of it as pseudocode. A real implementation would check for negative n rather than looping through half the int number space. If you needed to apply this to an integer x rather than a floating-point x, this would definitely be the way to go.
double polysum(int n, double x) {
    double a = 1;
    while (n--) a = x * a + 1;
    return a;
}
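If you do want the closed form, x^{n+1} need not cost n multiplies: exponentiation by squaring computes it in O(log n). A sketch in Java (my own illustration; the x == 1 case is split off because the closed form would divide by zero):

static double power(double x, int e) {  // x^e by squaring, O(log e) multiplies
    double result = 1;
    while (e > 0) {
        if ((e & 1) == 1) result *= x;  // fold in the current bit of e
        x *= x;
        e >>= 1;
    }
    return result;
}

static double geomSum(double x, int n) {  // 1 + x + ... + x^n
    if (x == 1.0) return n + 1;
    return (power(x, n + 1) - 1) / (x - 1);
}

For x = 2, n = 10 this returns 2047, matching the naive loop below.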
public class Test {
    public static void main(String args[]) {
        int x = 2, n = 10;
        double sum = 0;
        for (int i = 0; i <= n; i++) {
            sum = sum + Math.pow(x, i);
        }
        System.out.println(sum);
    }
}
This is an interview question: "Given 2 integers x and y, check if x is an integer power of y" (e.g. for x = 8 and y = 2 the answer is "true", and for x = 10 and y = 2 "false").
The obvious solution is:

int n = y; while (n < x) n *= y; return n == x;
Now I am thinking about how to improve it.
Of course, I can check some special cases: e.g. x and y should be both odd or both even, i.e. we can check the least significant bits of x and y. However, I wonder if I can improve the core algorithm itself.
You'd do better to repeatedly divide y into x. The first time you get a non-zero remainder you know x is not an integer power of y.
while (x % y == 0)
    x = x / y
return x == 1
This deals with your odd/even point on the first iteration.
This means log_y(x) should be an integer. No loop needed: O(1) time.
public class PowerTest {

    public static boolean isPower(int x, int y) {
        double d = Math.log(Math.abs(x)) / Math.log(Math.abs(y));
        if ((x > 0 && y > 0) || (x < 0 && y < 0)) {
            if (d == (int) d) {
                return true;
            } else {
                return false;
            }
        } else if (x > 0 && y < 0) {
            if ((int) d % 2 == 0) {
                return true;
            } else {
                return false;
            }
        } else {
            return false;
        }
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        System.out.println(isPower(-32, -2));
        System.out.println(isPower(2, 8));
        System.out.println(isPower(8, 12));
        System.out.println(isPower(9, 9));
        System.out.println(isPower(-16, 2));
        System.out.println(isPower(-8, -2));
        System.out.println(isPower(16, -2));
        System.out.println(isPower(8, -2));
    }
}
This looks for the exponent in O(log N) steps:
#define MAX_POWERS 100

int is_power(unsigned long x, unsigned long y) {
    int i;
    unsigned long powers[MAX_POWERS];
    unsigned long last;
    last = powers[0] = y;
    for (i = 1; last < x; i++) {
        last *= last;  // note that last * last can overflow here!
        powers[i] = last;
    }
    while (x >= y) {
        unsigned long top = powers[--i];
        if (x >= top) {
            unsigned long x1 = x / top;
            if (x1 * top != x) return 0;
            x = x1;
        }
    }
    return (x == 1);
}
Negative numbers are not handled by this code, but it can be done easily with some conditional code when i = 1.
This looks to be pretty fast for positive numbers, as it finds the lower and upper limits for the desired power and then applies binary search.
#include <iostream>
#include <cmath>
using namespace std;

// x is the value to test, y the base.
bool isIntegerPower(int x, int y)
{
    int low = 0, high = 1;  // initialise high so a break on the first pass leaves it defined
    int exp = 1;
    int val = y;
    // Loop by doubling the exponent and
    // find low and high exponents between which the required exponent lies.
    while (1)
    {
        val = pow((double)y, exp);
        if (val == x)
            return true;
        else if (val > x)
            break;
        low = exp;
        exp = exp * 2;
        high = exp;
    }
    // Use binary search to find the actual integer exponent, if it exists.
    // Otherwise, return false: no integer power.
    int mid = (low + high) / 2;
    while (low < high)
    {
        val = pow((double)y, mid);
        if (val > x)
        {
            high = mid - 1;
        }
        else if (val == x)
        {
            return true;
        }
        else if (val < x)
        {
            low = mid + 1;
        }
        mid = (low + high) / 2;
    }
    return false;
}

int main()
{
    cout << isIntegerPower(1024, 2);
}
double a = 8;
double b = 64;
double n = Math.log(b) / Math.log(a);
double e = Math.ceil(n);
if ((n / e) == 1) {
    System.out.println("true");
} else {
    System.out.println("false");
}
I would implement the function like so:
bool IsWholeNumberPower(int x, int y)
{
    double power = log(x) / log(y);
    return floor(power) == power;
}
This shouldn't need a check within a delta, as is common with floating-point comparisons, since we're checking whole numbers.
On second thoughts, don't do this. It does not work for negative x and/or y. Note that all other log-based answers presented right now are also broken in exactly the same manner.
The following is a fast general solution (in Java):
static boolean isPow(int x, int y) {
    int logyx = (int) (Math.log(x) / Math.log(y));
    return pow(y, logyx) == x || pow(y, logyx + 1) == x;
}
Where pow() is an integer exponentiation function such as the following in Java:
static int pow(int a, int b) {
    return (int) Math.pow(a, b);
}
(This works due to the following guarantee provided by Math.pow: "If both arguments are integers, then the result is exactly equal to the mathematical result of raising the first argument to the power of the second argument...")
The reason to go with logarithms instead of repeated division is performance: while log is slower than division, it is slower by a small fixed multiple. At the same time it removes the need for a loop and therefore gives you a constant-time algorithm.
In cases where y is 2, there is a quick approach that avoids the need for a loop. This approach can be extended to cases where y is some larger power of 2.
If x is a power of 2, the binary representation of x has a single set bit. There is a fairly simple bit-fiddling algorithm for counting the bits in an integer in O(log n) time where n is the bit-width of an integer. Many processors also have specialised instructions that can handle this as a single operation, about as fast as (for example) an integer negation.
To extend the approach, though, first take a slightly different approach to checking for a single bit. First determine the position of the least significant bit. Again, there is a simple bit-fiddling algorithm, and many processors have fast specialised instructions.
If this bit is the only bit, then (1 << pos) == x. The advantage here is that if you're testing for a power of 4, you can test for pos % 2 == 0 (the single bit is at an even position). More generally, to test for a power of y where y = 2^k, test pos % k == 0; for y = 4 that is exactly the pos % 2 == 0 test.
In principle, you could do something similar for testing for powers of 3 and powers of powers of 3. The problem is that you'd need a machine that works in base 3, which is a tad unlikely. You can certainly test any value x to see if its representation in base y has a single non-zero digit, but you'd be doing more work than you're already doing. The above exploits the fact that computers work in binary.
Probably not worth doing in the real world, though.
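For the power-of-two cases the idea does collapse to a few lines in Java, where both bit tricks are exposed as intrinsics. A sketch of my own (it assumes y itself is a power of two, and treats 1 = y^0 as a power):

// Is x a power of y, for y in {2, 4, 8, 16, ...}? x must be a single set
// bit, and that bit's position must be a multiple of k, where y == 2^k.
static boolean isPowerOfPowerOfTwo(int x, int y) {
    if (x <= 0 || Integer.bitCount(x) != 1) return false;
    int k = Integer.numberOfTrailingZeros(y);  // k = log2(y) since y is 2^k
    int pos = Integer.numberOfTrailingZeros(x);
    return pos % k == 0;
}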
Here is a Python version which puts together the ideas of @salva and @Axn, modified so that it never generates numbers greater than those given and uses only simple storage (read: "no lists"), by repeatedly paring away at the number of interest:
def perfect_base(b, n):
    """Returns True if integer n can be expressed as b**e where
    e is a positive integer, else False."""
    assert b > 1 and n >= b and int(n) == n and int(b) == b
    # parity check
    if not b % 2:
        if n % 2:
            return False  # b,n is even,odd
        if b == 2:
            return n & (n - 1) == 0
        if not b & (b - 1) and n & (n - 1):
            return False  # b == 2**m but n != 2**M
    elif not n % 2:
        return False  # b,n is odd,even
    while n >= b:
        d = b
        while d <= n:
            n, r = divmod(n, d)
            if r:
                return False
            d *= d
    return n == 1
The previous answers are correct; I liked Paul's answer the best. It's simple and clean.
Here is the Java implementation of what he suggested:
public static boolean isPowerOfaNumber(int baseOrg, int powerOrg) {
    double base = baseOrg;
    double power = powerOrg;
    while (base % power == 0)
        base = base / power;
    // return true if base equals 1
    return base == 1;
}
In case the number is too large, use the log function to reduce the time complexity:
import math

base = int(input("Enter the base number: "))
for i in range(base, int(input("Enter the end of range: ")) + 1):
    if math.log(i) / math.log(base) % 1 == 0:
        print(i)
If you have access to the largest power of y that can be fitted inside the required datatype, this is a really slick way of solving this problem.
Let's say, for our case, y == 3. So we would need to check if x is a power of 3.
Given that we need to check if an integer x is a power of 3, let us start thinking about this problem in terms of what information is already at hand.
1162261467 is the largest power of 3 that fits into a Java int:
1162261467 = 3^19
If x is a power of 3, then x divides 3^19 evenly, so 1162261467 % x == 0. Conversely, because 3 is prime, the divisors of 3^19 are exactly the powers of 3, so the test has no false positives.
So, to check if a given integer x is a power of three, check if x > 0 && 1162261467 % x == 0.
Generalizing: to check if a given integer x is a power of a given integer y, check x > 0 && Y % x == 0, where Y is the largest power of y that fits into the integer datatype. Note this only works when y is prime: for composite y (say y = 4), other divisors of Y slip through (2 divides the largest power of 4 but is not a power of 4).
The general idea is that if A is some power of Y, A can be expressed as B / Y^a for some integer a, where B is the largest power of Y and A < B. The same principle applies for A > B, and the A = B case is elementary.
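For y = 3 the whole test is then one line. A sketch in Java (with the caveat above: the divisibility trick is only sound because 3 is prime):

static boolean isPowerOfThree(int x) {
    // 1162261467 == 3^19, the largest power of 3 that fits in an int;
    // its divisors are exactly the powers of 3.
    return x > 0 && 1162261467 % x == 0;
}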
I found this solution:
// Check whether A can be expressed as an integer power of another integer
int isPower(int A)
{
    double p;
    if (A == 1)
        return 1;
    for (int a = 2; a <= sqrt(A); ++a)  // start at 2: log(1) is 0, which would divide by zero
    {
        p = log(A) / log(a);
        if (p - int(p) < 0.000000001)
            return 1;
    }
    return 0;
}
I tried to solve it myself but I could not get anywhere. Please help me solve this.
Are you supposed to use itoa() for this assignment? If so, you could use it to convert to a base-3 string, drop the last character, and then convert back to base 10.
Using the mathematical relation:

1/3 = 1/4 + 1/16 + 1/64 + … = Sum[1/2^(2n), {n, 1, Infinity}]
We have
int div3 (int x) {
    int64_t blown_up_x = x;
    for (int power = 1; power < 32; power += 2)
        blown_up_x += ((int64_t)x) << power;
    return (int)(blown_up_x >> 33);
}
If you can only use 32-bit integers,
int div3 (int x) {
    int two_third = 0, four_third = 0;
    for (int power = 0; power < 31; power += 2) {
        four_third += x >> power;
        two_third += x >> (power + 1);
    }
    return (four_third - two_third) >> 2;
}
The 4/3 - 2/3 treatment is used because x >> 1 is floor(x/2) instead of round(x/2).
EDIT: Oops, I misread the title's question; the multiply operator is forbidden as well.
Anyway, I believe this answer is still worth keeping for those who didn't know about dividing by non-power-of-two constants.
The solution is to multiply by a magic number and then extract the 32 leftmost bits: dividing by 3 is equivalent to multiplying by 1431655766 and then shifting right by 32. In C:
int divideBy3(int n)
{
    // the product must be formed in 64 bits:
    // a 32-bit n * 1431655766 would overflow before the shift
    return (int)(((int64_t)n * 1431655766) >> 32);
}
See Hacker's Delight Magic number calculator.
x/3 = e^(ln(x) - ln(3))
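In Java that identity would look roughly like this (my own sketch; exp and log are inexact, so the result is rounded to the nearest integer, which matches x/3 exactly only when 3 divides x, and it assumes x > 0):

static int divideBy3(int x) {
    return (int) Math.round(Math.exp(Math.log(x) - Math.log(3)));
}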
Here's a solution implemented in C++:
#include <iostream>

int letUserEnterANumber()
{
    int numberEnteredByUser;
    std::cin >> numberEnteredByUser;
    return numberEnteredByUser;
}

int divideByThree(int x)
{
    std::cout << "What is " << x << " divided by 3?" << std::endl;
    int answer = 0;
    while ( answer + answer + answer != x )
    {
        answer = letUserEnterANumber();
    }
    return answer;
}
;-)
if (number < 0) { // Edited after comments
    number = -(number);
}
quotient = 0;
while (number - 3 >= 0) { // Edited after comments..
    number = number - 3;
    quotient++;
} // after the loop exits, the value in number gives you the remainder
EDIT: Tested and working perfectly fine :(
Hope this helped. :-)
long divByThree(int x)
{
    char buf[100];
    itoa(x, buf, 3);
    buf[strlen(buf) - 1] = 0;  // drop the last base-3 digit
    char* tmp;
    long res = strtol(buf, &tmp, 3);
    return res;
}
Sounds like homework :)
I imagine you can write a function which iteratively divides a number, e.g. you can model what you do with pen and paper to divide numbers. Or you can use shift operators and + to figure out whether your intermediate result is too small/big and iteratively apply corrections. I'm not going to write down the code, though…
unsigned int div3(unsigned int m) {
    unsigned long long n = m;
    n += n << 2;   // n *= 5      -> m * 0x5
    n += n << 4;   // n *= 17     -> m * 0x55
    n += n << 8;   // n *= 257    -> m * 0x5555
    n += n << 16;  // n *= 65537  -> m * 0x55555555
    return (n+m) >> 32;  // (m * 0x55555556) >> 32 == m / 3
}
int divideby3(int n)
{
    int x = 0;
    if (n < 3) { return 0; }
    while (n >= 3)
    {
        n = n - 3;
        x++;
    }
    return x;
}
You can use a property of numbers: a number is divisible by 3 if its digit sum is divisible by 3.
Take the individual digits from itoa(), then apply a switch on them recursively, using additions and itoa() again; a sketch of the divisibility test follows.
Hope this helps.
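A sketch of that divisibility test in Java (my own illustration; it only tells you whether 3 divides the number, not the quotient, and Integer.toString plays the role of itoa()):

// True iff 3 divides n: repeatedly replace n by its decimal digit sum.
static boolean divisibleBy3(int n) {
    n = Math.abs(n);
    while (n > 9) {
        int sum = 0;
        for (char c : Integer.toString(n).toCharArray())
            sum += c - '0';
        n = sum;
    }
    return n == 0 || n == 3 || n == 6 || n == 9;
}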
This is very easy, so easy I'm only going to hint at the answer.
Basic boolean logic gates (and, or, not, xor, …) don't do division. Despite this handicap, CPUs can do division. Your solution is obvious: find a reference which tells you how to build a divider with boolean logic, and write some code to implement that.
How about this, in some kind of Python-like pseudocode? It splits the answer into an integer part and a fraction part. If you want to convert it to a floating-point representation, I am not sure of the best way to do that.
x = <a number>
total = x
intpart = 0
fracpart = 0

% Find the integer part
while total >= 3
    total = total - 3
    intpart = intpart + 1

% Fraction is what remains
fracpart = total

print "%d / 3 = %d + %d/3" % (x, intpart, fracpart)
Note that this will not work for negative numbers. To fix that you need to modify the algorithm:
total = abs(x)
is_neg = abs(x) != x
....
if is_neg
    print "%d / 3 = -(%d + %d/3)" % (x, intpart, fracpart)
For positive integer division:

result = 0
while (result + result + result + 3 <= input)
    result += 1
return result
Convert 1/3 into binary:
1/3 = 0.01010101010101010101010101…b
and then just "multiply" with this number using shifts and sums.
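A sketch of that in Java (my own illustration; seeding the accumulator with x is my addition, it rounds the truncated series up so that exact multiples of 3 come out right):

// x/3 as fixed-point multiplication by 0.010101...b: accumulate shifted
// copies of x in 32.32 fixed point, then drop the 32 fraction bits.
// Seeding with x makes the multiplier ceil(2^32 / 3), which gives the
// exact quotient for all non-negative 32-bit x.
static int divideBy3(int x) {
    long acc = x;
    for (int shift = 2; shift <= 32; shift += 2)
        acc += ((long) x) << (32 - shift);  // adds x * 2^(-shift), scaled by 2^32
    return (int) (acc >>> 32);
}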
There is a solution posted on http://bbs.chinaunix.net/forum.php?mod=viewthread&tid=3776384&page=1&extra=#pid22323016
int DividedBy3(int A) {
    int p = 0;
    for (int i = 2; i <= 32; i += 2)
        p += A << i;
    return (-p);
}
Please say something about that, thanks:)
Here's an O(log(n)) way to do it with no bit shifting, so it can handle numbers up to and including your biggest register size.
(c-style code)
long long unsigned Div3 (long long unsigned n)
{
    // base case:
    if (n < 6)
        return (n >= 3);

    long long unsigned division = 0;
    long long unsigned remainder = 0;

    // Used for results for only a single power of 2;
    // initialised for 2^0
    long long unsigned tmp_div = 0;
    long long unsigned tmp_rem = 1;

    for (long long unsigned pow_2 = 1; pow_2 && (pow_2 <= n); pow_2 += pow_2)
    {
        if (n & pow_2)
        {
            division += tmp_div;
            remainder += tmp_rem;
        }
        if (tmp_rem == 1)
        {
            tmp_div += tmp_div;
            tmp_rem = 2;
        }
        else
        {
            tmp_div += tmp_div + 1;
            tmp_rem = 1;
        }
    }
    return division + Div3(remainder);
}
It uses recursion, but note that the number drops exponentially in size at each iteration, so the time complexity (TC) is really:
O(TC) = O(log(n) + log(log(n)) + log(log(log(n))) + ... + z)
where z < 6.
Proof that it's O(log(n)):
We note that the number at each recursion strictly decreases (by at least 1), so the series [log(log(n))] + [log(log(log(n)))] + […] + [z] has at most log(log(n)) terms, each at most log(log(n)).
implies:
series <= log(log(n))*log(log(n))
implies:
O(TC) = O(log(n) + log(log(n))*log(log(n)))
Now we note that for x sufficiently large:
sqrt(x) > log(x)
iff:
x/sqrt(x) > log(x)
implies:
x/log(x) > log(x)
iff:
x > log(x)*log(x)
So O(x) > O(log(x)*log(x))
Now let x = log(n)
implies:
O(log(n)) > O(log(log(n))*log(log(n)))
and given:
O(TC) = O(log(n) + log(log(n))*log(log(n)))
implies:
O(TC) = O(log(n))
Slow and naive, but it should work, if an exact divisor exists. Addition is allowed, right?
for number from 1 to input
    if number + number + number == input
        return number
Extending it for fractional divisors is left as an exercise to the reader.
Basically test for +1 and +2 I think...