How do I determine the bias of an algorithm?

Let's say I have an algorithm that is supposed to represent a coin flip. How do I determine the bias of this coin? Specifically, I have written the algorithm in this JSFiddle.
The fiddle runs a series of 20 tests. Each test flips the coin 100 times and tallies the results. At the end of the series it reports the ratio Heads/Tails over the total number of flips across all tests. This ratio seems to be approaching 1 (from both sides), but I have not done any rigorous testing of this.
Note, this is not homework. This is purely a personal interest.

You can't come up with a way to guarantee detecting a bias, but you can determine it to a certain degree of certainty (say 95%). What you do is test n times and count how many times you get heads, call this variable h.
Then if h / n < 0.5 - 1.96 * sqrt(0.25 / n) then the coin is biased towards tails (with a 95% probability) and if h / n > 0.5 + 1.96 * sqrt(0.25 / n) then the coin is biased towards heads.
This decision is based on something called normal approximation to a binomial distribution, you can read more about it here: http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Normal_approximation_interval
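As a minimal sketch in Python (flip() is a stand-in for whatever coin implementation you are testing, such as the JSFiddle one):
import math, random

def coin_bias_test(flip, n=10000, z=1.96):
    h = sum(flip() for _ in range(n))   # flip() returns 1 for heads, 0 for tails
    margin = z * math.sqrt(0.25 / n)    # z = 1.96 gives ~95% confidence
    if h / n < 0.5 - margin:
        return "biased towards tails"
    if h / n > 0.5 + margin:
        return "biased towards heads"
    return "no bias detected at this confidence level"

print(coin_bias_test(lambda: random.randint(0, 1)))  # a fair reference coin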

George Marsaglia's Diehard Tests
If you consider each Head/Tail as generating a 0 or 1 bit, you can use those bits to build random numbers and test their randomness.

Related

Sampling from Geometric distribution in constant time

I would like to know if there is any method to sample from the geometric distribution in constant time without using log, which can be hard to approximate. Thanks.
Without relying on logarithms, there is no algorithm to sample from a geometric(p) distribution in constant expected time. Rather, on a realistic computing model, such an algorithm's expected running time must grow at least as fast as 1 + log(1/p)/w, where w is the word size of the computer in bits (Bringmann and Friedrich 2013). The following algorithm, which is equivalent to one in the Bringmann paper, generates a geometric(px/py) random number without relying on logarithms, and when px/py is very small, the algorithm is considerably faster than the trivial algorithm of generating trials until a success:
Set pn to px, k to 0, and d to 0.
While pn*2 <= py, add 1 to k and multiply pn by 2.
With probability (1 − px/py)^(2^k), add 1 to d and repeat this step.
Generate a uniform random integer in [0, 2^k), call it m, then with probability (1 − px/py)^m, return d*2^k + m. Otherwise, repeat this step.
(The actual algorithm described in the Bringmann paper is in fact much more involved than this; see my note "On a Geometric Sampler".)
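In Python, a rough sketch of the four steps might look like this (bernoulli_power() is my own illustrative helper: it realizes an event of probability (1 - px/py)^n by direct simulation, a step the paper performs more cleverly):
import random

def bernoulli_power(px, py, n):
    # True with probability (1 - px/py)^n
    return all(random.randrange(py) >= px for _ in range(n))

def geometric(px, py):
    pn, k, d = px, 0, 0                      # step 1
    while pn * 2 <= py:                      # step 2: largest k with
        k += 1                               #   px * 2^k <= py
        pn *= 2
    while bernoulli_power(px, py, 2 ** k):   # step 3
        d += 1
    while True:                              # step 4
        m = random.randrange(2 ** k)
        if bernoulli_power(px, py, m):
            return d * 2 ** k + m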
REFERENCES:
Bringmann, K., and Friedrich, T., 2013, July. Exact and efficient generation of geometric random variates and random graphs, in International Colloquium on Automata, Languages, and Programming (pp. 267-278).

What makes this prime factorization so efficient?

I've been doing some Project Euler problems to learn/practice Lua, and my initial quick-and-dirty way of finding the largest prime factor of n was pretty bad, so I looked up some code to see how others were doing it (in an attempt to understand different factoring methodologies).
I ran across the following (originally in Python - this is my Lua):
function Main()
    local n = 102
    local i = 2
    while i^2 < n do
        while n%i==0 do n = n / i end
        i = i+1
    end
    print(n)
end
This factored huge numbers in a very short time - almost immediately. The thing I noticed about the algorithm that I wouldn't have divined:
n = n / i
This seems to be in all of the decent algorithms. I've worked it out on paper with smaller numbers and I can see that it makes the numbers converge, but I don't understand why this operation converges on the largest prime factor.
Can anyone explain?
In this case, i is the prime factor candidate. Consider, n is composed of the following prime numbers:
n = p1^n1 * p2^n2 * p3^n3
When i reaches p1, the statement n = n / i = n / p1 removes one occurrence of p1:
n / p1 = p1^(n1-1) * p2^n2 * p3^n3
The inner while iterates as long as there are p1s in n. Thus, after the iteration is complete (when i = i + 1 is executed), all occurrences of p1 have been removed and:
n' = p2^n2 * p3^n3
Let's skip some iterations until i reaches p3. The remaining n is then:
n'' = p3^n3
Here, we find a first mistake in the code. If n3 is 2, then the outer condition does not hold and we remain with p3^2. It should be while i^2 <= n.
As before, the inner while removes all occurrences of p3, leaving us with n'''=1. This is the second mistake. It should be while n%i==0 and n>i (not sure about the Lua syntax), which keeps the very last occurrence.
So the above code works for all numbers n where the largest prime factor occurs only once, by successively removing all other factors. For all other numbers, the mentioned corrections should make it work, too.
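Putting both corrections together, a quick Python transcription (my own sketch, not the poster's Lua) looks like this:
def largest_prime_factor(n):
    i = 2
    while i * i <= n:                 # first fix: <= instead of <
        while n % i == 0 and n > i:   # second fix: keep the last occurrence
            n //= i
        i += 1
    return n

print(largest_prime_factor(102))      # 17
print(largest_prime_factor(49))       # 7 (repeated largest prime factor)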
This eliminates all the known smaller prime factors from n, so n becomes smaller and sqrt(n) can be reached earlier. That is the performance boost: you no longer need to test numbers up to the square root of the original N. Say n is a million; it consists only of 2's and 5's, so naive trial division against all known primes would need to check all primes up to 1000, while dividing out the 2's yields 15625 and then dividing out the 5's yields 1, effectively factoring the big number in two steps. (By the way, the posted algorithm will then print 1! To fix this, if the loop exits with n = 1, return the last successful divisor i instead.) But this only helps with "common" numbers that have a single high prime factor and a bunch of smaller ones; factoring a number n = p*q, where both p and q are primes and close to each other, won't benefit from this boost.
The n = n/i line works because if you are seeking a prime other than the i you have just found as a divisor, the quotient is still divisible by that other prime, by the definition of prime factorization. Read here: https://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic . Also, this only works because i runs upward from 2, so you always divide by primes before their composites. Otherwise, if your number had 3 as its largest prime factor but was also divisible by 2 and you checked 6 first, you would spoil the principle of only dividing by primes (say with 72: if you first divide by 6 repeatedly, you end up with 2, while the answer is 3) by accidentally dividing by a composite of the largest prime.
This algorithm (when corrected) takes O(max(p2, sqrt(p1))) steps to find the prime factorization of n, where p1 is the largest prime factor and p2 is the second largest prime factor. In the case of a repeated largest prime factor, p1 = p2.
Knuth and Trabb Pardo studied the behavior of this algorithm in "Analysis of a Simple Factorization Algorithm", Theoretical Computer Science 3 (1976), 321-348. They argued against the usual analysis, such as computing the average number of steps taken when factoring integers up to n. Although a few numbers with large prime factors boost the average value, in a cryptographic context what may be more relevant is that some of the percentiles are quite low. For example, 44.7% of numbers satisfy max(sqrt(p1), p2) < n^(1/3), and 1.2% of numbers satisfy max(sqrt(p1), p2) < n^(1/5).
A simple improvement is to test the remainder for primality after you find a new prime factor. It is very fast to test whether a number is prime. This reduces the time to O(p2) by avoiding the trial divisions between p2 and sqrt(p1). The median size of the second largest prime is about n^0.21. This means it is feasible to factor many 45-digit numbers rapidly (in a few processor-seconds) using this improvement on trial division. By comparison, Pollard-rho factorization on a product of two primes takes O(sqrt(p2)) steps on average, according to one model.
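A sketch of that improvement in Python, assuming some fast is_prime() test (e.g. Miller-Rabin, see the last question on this page) is available:
def largest_prime_factor_fast(n, is_prime):
    # trial division, but stop as soon as the remaining cofactor is prime
    i = 2
    while i * i <= n:
        while n % i == 0:
            n //= i
            if n == 1:
                return i    # the factor just divided out was the largest
            if is_prime(n):
                return n    # skips the trial divisions between p2 and sqrt(p1)
        i += 1
    return n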

random number generator test

How will you test if the random number generator is generating actual random numbers?
My approach: first build a hash table of size M, where M is a prime number. Then take each number produced by the random number generator, take its value mod M,
and see whether it fills the whole table or only part of it.
That's my approach. Can we prove it with visualization?
Since I have very little knowledge about testing, can you suggest a thorough approach to this question? Thanks in advance.
You should be aware that you cannot guarantee the random number generator is working properly. Note that even with a perfectly uniform distribution on the range [1,10], there is a 10^-10 chance of getting ten 10s in a random sample of ten numbers.
Is it likely? Of course not.
So - what can we do?
We can statistically prove that the combination (10,10,....,10) is unlikely if the random number generator is indeed uniformly distributed. This concept is called Hypothesis testing. With this approach we can say "with certainty level of x% - we can reject the hypothesis that the data is taken from a uniform distribution".
A common way to do this is Pearson's Chi-Squared test. The idea is similar to yours: you fill in a table, check the observed (generated) number of values in each cell, and compute the expected number of values in each cell under the null hypothesis (in your case, the expected count is k/M, where M is the range's size and k is the total number of values drawn).
You then do some manipulation on the data (see the Wikipedia article for exactly what this manipulation is) and get a number: the test statistic. You then check whether this number is likely under the Chi-Square distribution. If it is, you cannot reject the null hypothesis; if it is not, you can be certain with x% certainty that the data is not taken from a uniform random generator.
EDIT: example:
You have a die (a cube), and you want to check if it is "fair" (uniformly distributed on [1,6]). Throw it 200 times (for example) and create the following table:
number:                 1     2     3     4     5     6
observed occurrences:  37    41    30    27    32    33
expected occurrences:  33.3  33.3  33.3  33.3  33.3  33.3
Now, according to Pearson's test, the statistic is:
X = ((37-33.3)^2)/33.3 + ((41-33.3)^2)/33.3 + ... + ((33-33.3)^2)/33.3
X = (13.69 + 59.29 + 10.89 + 39.69 + 1.69 + 0.09) / 33.3
X ≈ 3.76
For a random C~ChiSquare(5), the probability of being higher than 3.76 is ~0.58 (which is not improbable)(1).
So we cannot reject the null hypothesis, and we can conclude that the data is probably uniformly distributed in [1,6]
(1) We usually reject the null hypothesis if this probability is smaller than 0.05, but this is very case dependent.
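For what it's worth, the whole worked example can be reproduced in a few lines of Python, assuming scipy is available:
from scipy import stats

observed = [37, 41, 30, 27, 32, 33]        # the 200 throws tabulated above
chi2, p_value = stats.chisquare(observed)  # expected defaults to uniform
print(chi2, p_value)                       # ~3.76 and ~0.58
# p_value > 0.05, so we cannot reject the hypothesis that the die is fair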
My naive idea:
The generator follows a distribution (at least it should). Do a reasonable number of runs, then plot the values on a graph. Fit a regression curve to the points; if it correlates with the shape of the intended distribution, you're good. (This is also possible in 1D with projections and histograms, and fully automatable with the right tool, e.g. MATLAB.)
You can also use the Diehard tests, as mentioned before; that is surely better, but involves much less intuition, at least on your side.
Let's say you want to generate a uniform distribution on the interval [0, 1].
Then one possible test is
def fraction_in_interval(a, b, sample_size):
    counter = 0
    for i in range(sample_size):
        if a < random_being_tested() < b:
            counter += 1
    return counter / sample_size
And see if the result is close to b-a.
Of course you should define a function taking a, b between 0 and 1 as inputs that returns the difference between counter/sample-size and b-a. Loop through possible a, b, say multiples of 0.01 with a < b, and print out a, b whenever the difference is larger than a preset epsilon, say 0.001.
Those are the a, b for which there are too many outliers.
If you let sample-size be 5000, your random-being-tested will be called about 5000 * 5050 times in total (there are about 5050 such pairs on a 0.01 grid), hopefully not too bad.
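A hypothetical driver for this scheme, reusing fraction_in_interval() from above (random_being_tested() is still whatever generator you are checking):
def scan(sample_size=5000, eps=0.001):
    grid = [i / 100.0 for i in range(101)]   # multiples of 0.01
    for ai, a in enumerate(grid):
        for b in grid[ai + 1:]:              # all pairs with a < b
            diff = abs(fraction_in_interval(a, b, sample_size) - (b - a))
            if diff > eps:
                print(a, b, diff)            # interval with too many or too few hits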
I had the same problem.
When I finished writing my code (using an external RNG engine),
I looked at the results and found that all of them failed the Chi-Squared test whenever I had too many results.
My code generated a random number and kept buckets counting how many results fell in each range.
I don't know why the Chi-Squared test fails when I have a lot of results.
During my research I saw that C#'s Random.Next() fails over any range, and that some of the numbers have better odds than others; furthermore, I saw that the RNGCryptoServiceProvider provider does not handle big numbers well.
When trying to get numbers in the range 0-1,000,000,000, the numbers in the lower range 0-300M have better odds of appearing...
As a result, I use RNGCryptoServiceProvider, and if my range is higher than 100M I combine the number myself (RandomHigh*100M + RandomLow), where the ranges of both randoms are smaller than 100M, so it works.
Good luck!

Programming problem - Game of Blocks

Maybe you have an idea of how to solve the following problem.
John decided to buy his son Johnny some mathematical toys. One of his favorite toys is blocks of different colors. John has decided to buy blocks of C different colors. For each color he will buy googol (10^100) blocks. All blocks of the same color are of the same length, but blocks of different colors may vary in length.
Johnny has decided to use these blocks to make a large 1 x N block. He wonders how many ways he can do this. Two ways are considered different if there is a position where the color differs. The example shows a red block of size 5, a blue block of size 3 and a green block of size 3; there are 12 ways of making a large block of length 11.
Each test case starts with an integer 1 ≤ C ≤ 100. The next line consists of C integers; the ith integer 1 ≤ len_i ≤ 750 denotes the length of the ith color. The next line is a positive integer N ≤ 10^15.
This problem should be solved in 20 seconds for T <= 25 test cases. The answer should be calculated MOD 100000007 (prime number).
It can be reduced to a matrix exponentiation problem, which can be solved relatively efficiently in O(max(len_i)^2.376 * log N) using the Coppersmith-Winograd algorithm and fast exponentiation. But it seems that a more efficient algorithm is required, as Coppersmith-Winograd implies a large constant factor. Do you have any other ideas? It could possibly be a number theory or divide and conquer problem.
Firstly note the number of blocks of each colour you have is a complete red herring, since 10^100 > N always. So the number of blocks of each colour is practically infinite.
Now notice that at each position p (in a valid configuration that leaves no spaces, etc.) there must be a block of some color c. There are len[c] ways for this block to lie so that it still covers position p.
My idea is to try all possible colors and positions of a block at a fixed position (N/2, since that halves the range). For each case, there are b cells before this fixed colored block and a cells after it. So if we define a function ways(i) that returns the number of ways to tile i cells (with ways(0)=1), then the number of ways to tile the whole row with a fixed colored block at that position is ways(b)*ways(a). Adding up all possible configurations yields the answer for ways(i).
Now, I chose the fixed position to be N/2 since that halves the range, and you can halve a range at most ceil(log(N)) times. Since you are moving a block around N/2, you will have to calculate lengths from N/2-750 to N/2+750, where 750 is the maximum length a block can have. So you will have to calculate about 750*ceil(log(N)) lengths (a bit more because of the variance) to get the final answer.
So in order to get good performance you have to throw in memoisation, since this is inherently a recursive algorithm.
So using Python(since I was lazy and didn't want to write a big number class):
T = int(raw_input())
for case in xrange(T):
    # read in the data
    C = int(raw_input())
    lengths = map(int, raw_input().split())
    minlength = min(lengths)
    n = int(raw_input())
    # setup memoisation, note all lengths less than the minimum length are
    # set to 0 as the algorithm needs this
    memoise = {}
    memoise[0] = 1
    for length in xrange(1, minlength):
        memoise[length] = 0
    def solve(n):
        global memoise
        if n in memoise:
            return memoise[n]
        ans = 0
        for i in xrange(C):
            if lengths[i] > n:
                continue
            if lengths[i] == n:
                ans += 1
                ans %= 100000007
                continue
            for j in xrange(0, lengths[i]):
                b = n/2-lengths[i]+j
                a = n-(n/2+j)
                if b < 0 or a < 0:
                    continue
                ans += solve(b)*solve(a)
                ans %= 100000007
        memoise[n] = ans
        return memoise[n]
    solve(n)
    print "Case %d: %d" % (case+1, memoise[n])
Note I haven't exhaustively tested this, but I'm quite sure it will meet the 20 second time limit, if you translated this algorithm to C++ or somesuch.
EDIT: Running a test with N = 10^15 and a block of length 750, I get that memoise contains about 60000 elements, which means the non-lookup part of solve(n) is called about the same number of times.
A word of caution: in the case C=2, len1=1, len2=2, the answer will be the Nth Fibonacci number, and the Fibonacci numbers grow (approximately) exponentially with a growth factor of the golden ratio, phi ~ 1.61803399. For the huge value N=10^15, the answer will be about phi^(10^15), an enormous number. The answer will have storage requirements on the order of (ln(phi^(10^15))/ln(2)) / (8 * 2^40) ~ 79 terabytes. Since you can't even access 79 terabytes in 20 seconds, it's unlikely you can meet the speed requirements in this special case.
Your best hope occurs when C is not too large and len_i is large for all i. In such cases, the answer will still grow exponentially with N, but the growth factor may be much smaller.
I recommend that you first construct the integer matrix M which computes the (i+1, ..., i+k) terms of your sequence from the (i, ..., i+k-1) terms (only the last row of this matrix is interesting). Compute the first k entries "by hand", then calculate M^(10^15) with the repeated-squaring trick and apply it to the terms (0, ..., k-1).
The (integer) entries of the matrix will grow exponentially, perhaps too fast to handle. If this is the case, do the very same calculation, but modulo p, for several moderate-sized prime numbers p. This will allow you to obtain your answer modulo p, for various p, without using a matrix of bigints. After using enough primes that you know their product is larger than your answer, you can use the so-called "Chinese remainder theorem" to recover your answer from your mod-p answers.
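To make the matrix recommendation concrete, here is a minimal Python sketch of my own (not from the answer above); it works modulo the single prime the problem statement specifies, and for the multi-prime CRT variant you would run count_tilings() once per prime:
MOD = 100000007

def mat_mul(A, B, mod):
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % mod
             for j in range(k)] for i in range(k)]

def mat_pow(A, e, mod):
    k = len(A)
    R = [[int(i == j) for j in range(k)] for i in range(k)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A, mod)
        A = mat_mul(A, A, mod)
        e >>= 1
    return R

def count_tilings(lengths, n, mod=MOD):
    # ways(i) = sum over colors c of ways(i - len[c]); the companion matrix M
    # shifts the window (ways(i), ..., ways(i+k-1)) forward by one position.
    k = max(lengths)
    M = [[0] * k for _ in range(k)]
    for i in range(k - 1):
        M[i][i + 1] = 1
    for L in lengths:
        M[k - 1][k - L] += 1
    ways = [1] + [0] * (k - 1)          # ways(0..k-1), computed directly
    for i in range(1, k):
        ways[i] = sum(ways[i - L] for L in lengths if L <= i) % mod
    if n < k:
        return ways[n]
    Mn = mat_pow(M, n, mod)
    return sum(Mn[0][j] * ways[j] for j in range(k)) % mod

print(count_tilings([1, 2], 10))  # 89, matching the Fibonacci case above
With naive multiplication this is O(k^3 log N); the faster matrix multiplication discussed in the question would shave the exponent.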
I'd like to build on the earlier @JPvdMerwe solution with some improvements. In his answer, @JPvdMerwe uses a dynamic programming / memoisation approach, which I agree is the way to go on this problem. Dividing the problem recursively into two smaller problems and remembering previously computed results is quite efficient.
I'd like to suggest several improvements that would speed things up even further:
Instead of going over all the ways the block in the middle can be positioned, you only need to go over the first half and multiply the solution by 2, because the second half of the cases is symmetrical. For odd-length blocks you would still need to take the centered position as a separate case.
In general, iterative implementations can be several magnitudes faster than recursive ones. This is because a recursive implementation incurs bookkeeping overhead for each function call. It can be a challenge to convert a solution to its iterative cousin, but it is usually possible. The @JPvdMerwe solution can be made iterative by using a stack to store intermediate values.
Modulo operations are expensive, as are multiplications to a lesser extent. The number of multiplications and modulos can be decreased by approximately a factor of C=100 by switching the color loop with the position loop. This allows you to add the return values of several calls to solve() before doing a multiplication and modulo.
A good way to test the performance of a solution is with a pathological case. The following could be especially daunting: length 10^15, C=100, prime block sizes.
Hope this helps.
In the above answer,
ans += 1
ans %= 100000007
could be much faster without the general modulo:
ans += 1
if ans == 100000007:
    ans = 0
Please see TopCoder thread for a solution. No one was close enough to find the answer in this thread.

Prime factor of 300 000 000 000?

I need to find the prime factors of a number over 300 billion. I have a function that is adding to the list of them... very slowly! It has been running for about an hour now and I think it's got a fair distance to go still. Am I doing it completely wrong, or is this expected?
Edit: I'm trying to find the largest prime factor of the number 600851475143.
Edit:
Result:
{
    List<Int64> ListOfPrimeFactors = new List<Int64>();
    Int64 Number = 600851475143;
    Int64 DividingNumber = 2;
    while (DividingNumber < Number / DividingNumber)
    {
        if (Number % DividingNumber == 0)
        {
            ListOfPrimeFactors.Add(DividingNumber);
            Number = Number/DividingNumber;
        }
        else
            DividingNumber++;
    }
    ListOfPrimeFactors.Add(Number);
    listBox1.DataSource = ListOfPrimeFactors;
}
Are you remembering to divide the number that you're factorizing by each factor as you find them?
Say, for example, you find that 2 is a factor. You can add that to your list of factors, but then you divide the number that you're trying to factorise by that value.
Now you're only searching for the factors of 150 billion. Each time around you should start from the factor you just found. So if 2 was a factor, test 2 again. If the next factor you find is 3, there's no point testing from 2 again.
And so on...
Finding prime factors is difficult using brute force, which sounds like the technique you are using.
Here are a few tips to speed it up somewhat:
Start low, not high
Don't bother testing each potential factor to see whether it is prime--just test LIKELY prime numbers (odd numbers that end in 1,3,7 or 9)
Don't bother testing even numbers (all divisible by 2), or odds that end in 5 (all divisible by 5). Of course, don't actually skip 2 and 5!!
When you find a prime factor, make sure to divide it out--don't continue to use your massive original number. See my example below.
If you find a factor, make sure to test it AGAIN to see if it is in there multiple times. Your number could be 2x2x3x7x7x7x31 or something like that.
Stop when you reach >= sqrt(remaining large number)
Edit: A simple example:
You are finding the factors of 275.
Test 275 for divisibility by 2. Does 275/2 = int(275/2)? No. Failed.
Test 275 for divisibility by 3. Failed.
Skip 4!
Test 275 for divisibility by 5. YES! 275/5 = 55. So your NEW test number is now 55.
Test 55 for divisibility by 5. YES! 55/5 = 11. So your NEW test number is now 11.
BUT 5 > sqrt (11), so 11 is prime, and you can stop!
So 275 = 5 * 5 * 11
Make more sense?
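For illustration, here is a hypothetical Python sketch of that recipe (extended slightly: stepping through a repeating wheel of gaps also skips multiples of 3, not just of 2 and 5):
def prime_factors(n):
    factors = []
    for p in (2, 3, 5):                 # handle the small primes first
        while n % p == 0:
            factors.append(p)
            n //= p
    # candidates coprime to 2, 3 and 5: 7, 11, 13, 17, 19, 23, 29, 31, ...
    steps = (4, 2, 4, 2, 4, 6, 2, 6)
    p, i = 7, 0
    while p * p <= n:
        while n % p == 0:               # divide each factor out repeatedly
            factors.append(p)
            n //= p
        p += steps[i % 8]
        i += 1
    if n > 1:
        factors.append(n)               # whatever remains is prime
    return factors

print(prime_factors(275))   # [5, 5, 11], as in the worked example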
Factoring big numbers is a hard problem. So hard, in fact, that we rely on it to keep RSA secure. But take a look at the wikipedia page for some pointers to algorithms that can help. But for a number that small, it really shouldn't be taking that long, unless you are re-doing work over and over again that you don't have to somewhere.
For the brute-force solution, remember that you can do some mini-optimizations:
Check 2 specially, then only check odd numbers.
You only ever need to check up to the square-root of the number (if you find no factors by then, then the number is prime).
Once you find a factor, don't use the original number to find the next factor, divide it by the found factor, and search the new smaller number.
When you find a factor, divide it through as many times as you can. After that, you never need to check that number, or any smaller numbers again.
If you do all the above, each new factor you find will be prime, since any smaller factors have already been removed.
Here is an XSLT solution!
This XSLT transformation takes 0.109 sec.
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:saxon="http://saxon.sf.net/"
    xmlns:f="http://fxsl.sf.net/"
    exclude-result-prefixes="xs saxon f"
>
    <xsl:import href="../f/func-Primes.xsl"/>
    <xsl:output method="text"/>

    <xsl:template name="initial" match="/*">
        <xsl:sequence select="f:maxPrimeFactor(600851475143)"/>
    </xsl:template>

    <xsl:function name="f:maxPrimeFactor" as="xs:integer">
        <xsl:param name="pNum" as="xs:integer"/>
        <xsl:sequence select=
            "if(f:isPrime($pNum))
                then $pNum
                else
                  for $vEnd in xs:integer(floor(f:sqrt($pNum, 0.1E0))),
                      $vDiv1 in (2 to $vEnd)[$pNum mod . = 0][1],
                      $vDiv2 in $pNum idiv $vDiv1
                    return
                      max((f:maxPrimeFactor($vDiv1), f:maxPrimeFactor($vDiv2)))
            "/>
    </xsl:function>
</xsl:stylesheet>
This transformation produces the correct result (the maximum prime factor of 600851475143) in just 0.109 sec.:
6857
The transformation uses the f:sqrt() and f:isPrime() defined in FXSL 2.0 -- a library for functional programming in XSLT. FXSL is itself written entirely in XSLT.
f:isPrime() uses Fermat's little theorem, so that it can determine primality efficiently.
One last thing nobody has mentioned, perhaps because it seems obvious. Every time you find a factor and divide it out, keep trying the factor until it fails.
64 only has one prime factor, 2. You will find that out pretty trivially if you keep dividing out the 2 until you can't anymore.
$ time factor 300000000000 > /dev/null
real 0m0.027s
user 0m0.000s
sys 0m0.001s
You're doing something wrong if it's taking an hour. You might even have an infinite loop somewhere - make sure you're not using 32-bit ints.
The key to understanding why the square root is important is this: each factor of n below the square root of n has a corresponding factor above it. To see this, consider that if x is a factor of n, then n/x = m, which means that x*m = n, hence m is also a factor.
I wouldn't expect it to take very long at all - that's not a particularly large number.
Could you give us an example number which is causing your code difficulties?
Here's one site where you can get answers: Factoris - Online factorization service. It can do really big numbers, but it also can factorize algebraic expressions.
The fastest algorithms are sieve algorithms, and are based on arcane areas of discrete mathematics (over my head at least), complicated to implement and test.
The simplest approach to factoring is probably (as others have said) trial division, with the Sieve of Eratosthenes as the classic way to generate the test primes. Things to remember about using this to factor a number N:
general idea: you're checking an increasing sequence of possible integer factors x to see whether they evenly divide your candidate number N (in C/Java/Javascript, check whether N % x == 0), in which case N is not prime.
you just need to go up to sqrt(N), but don't actually calculate sqrt(N): loop as long as your test factor x passes the test x*x <= N
if you have the memory to save a bunch of previous primes, use only those as the test factors (and don't save a prime P if it fails the test P*P <= N_max, since you'll never use it again)
Even if you don't save the previous primes, for possible factors x just check 2 and all the odd numbers. Yes, it will take longer, but not that much longer for reasonably sized numbers. The prime-counting function and its approximations can tell you what fraction of numbers are prime; this fraction decreases very slowly. Even for 2^64 ≈ 1.8x10^19, roughly one out of every 43 numbers is prime (= one out of every 21.5 odd numbers is prime). For factors of numbers less than 2^64, those factors x are less than 2^32, where about one out of every 20 numbers is prime = one out of every 10 odd numbers is prime. So you'll have to test 10 times as many numbers, but the loop should be a bit faster and you don't have to mess around with storing all those primes.
There are also some older algorithms that are a little bit more complex but still fairly understandable. See Dixon's, Shanks' and Fermat's factoring algorithms. I read an article about one of these once, can't remember which one, but they're all fairly straightforward and use algebraic properties of the differences of squares.
If you're just testing whether a number N is prime, and you don't actually care about the factors themselves, use a probabilistic primality test. Miller-Rabin is the most standard one, I think.
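As a sketch, a standard Miller-Rabin implementation in Python looks like the following; the fixed witness set used here is known to make the test deterministic for all 64-bit inputs:
def is_prime(n):
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:             # write n - 1 as d * 2^s with d odd
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False          # a is a witness that n is composite
    return True

print(is_prime(600851475143))     # False: it factors as 71*839*1471*6857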
I spent some time on this since it just sucked me in. I won't paste the code here just yet. Instead see this factors.py gist if you're curious.
Mind you, I didn't know anything about factoring (still don't) before reading this question. It's just a Python implementation of BradC's answer above.
On my MacBook it takes 0.002 secs to factor the number mentioned in the question (600851475143).
There must obviously be much, much faster ways of doing this. My program takes 19 secs to compute the factors of 6008514751431331. But the Factoris service just spits out the answer in no-time.
The specific number is 300425737571? It trivially factors into 131 * 151 * 673 * 22567.
I don't see what all the fuss is...
Here's some Haskell goodness for you guys :)
primeFactors n = factor n primes
  where
    factor n (p:ps)
        | p*p > n        = [n]
        | n `mod` p /= 0 = factor n ps
        | otherwise      = p : factor (n `div` p) (p:ps)
    primes = 2 : filter ((==1) . length . primeFactors) [3,5..]
Took it about .5 seconds to find them, so I'd call that a success.
You could use the sieve of Eratosthenes to find the primes and see if your number is divisible by those you find.
You only need to check its remainder mod p for each prime p <= sqrt(N), where N is the number you are trying to factor. It really shouldn't take over an hour, even on a really slow computer or a TI-85.
Your algorithm must be FUBAR. This only takes about 0.1s on my 1.6 GHz netbook in Python. Python isn't known for its blazing speed. It does, however, have arbitrary precision integers...
import math
import operator

def factor(n):
    """Given the number n to factor, yield its prime factors.
    factor(1) yields one result: 1. Negative n is not supported."""
    M = math.sqrt(n)   # no factors larger than M
    p = 2              # candidate factor to test
    while p <= M:      # keep looking until pointless
        d, m = divmod(n, p)
        if m == 0:
            yield p             # p is a prime factor
            n = d               # divide n accordingly
            M = math.sqrt(n)    # and adjust M
        else:
            p += 1              # p didn't pan out, try the next candidate
    yield n   # whatever's left in n is a prime factor

def test_factor(n):
    f = factor(n)
    n2 = reduce(operator.mul, f)
    assert n2 == n

def example():
    n = 600851475143
    f = list(factor(n))
    assert reduce(operator.mul, f) == n
    print n, "=", "*".join(str(p) for p in f)

example()
# output:
# 600851475143 = 71*839*1471*6857
(This code seems to work in defiance of the fact that I don't know enough about number theory to fill a thimble.)
Just to expand/improve slightly on the "only test odd numbers that don't end in 5" suggestions...
All primes greater than 3 are either one more or one less than a multiple of 6 (6x + 1 or 6x - 1 for integer values of x).
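A quick sanity check of that observation (every integer is 6x + r, and r in {0, 2, 3, 4} makes it divisible by 2 or 3, so primes above 3 must have r = 1 or 5):
for p in (5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43):
    assert p % 6 in (1, 5)   # p is of the form 6x + 1 or 6x - 1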
It shouldn't take that long, even with a relatively naive brute force. For that specific number, I can factor it in my head in about one second.
You say you don't want solutions(?), but here's your "subtle" hint. The only prime factors of the number are the lowest three primes.
Semi-prime numbers of that size are used for encryption, so I am curious as to what you exactly want to use them for.
That aside, there currently are not good ways to find the prime factorization of large numbers in a relatively small amount of time.
