Finding Pythagorean triples (a,b,c) with a <= 200 - algorithm

In my previous post on this subject I made little progress (not blaming anyone except myself!), so I'll try to approach my problem statement differently.
How do I go about writing the algorithm to generate a list of primitive triples?
All I have to start with is:
a) the basic theorem: a^2 + b^2 = c^2
b) the fact that the small sides of the triple (a and b) need to be smaller than 'n'
(note: 'n' <= 200 for this purpose)
A professor gave me some hints, but alas I am still lost. I don't know where to start with building my loops. Do I need 2 or 3 loops? Do I loop through a and b, or do I need to introduce the 'n' variable into a loop of its own? These probably look like obvious hints to experienced programmers, but it seems I need more hand-holding still... any help will be appreciated!
A Pythagorean triple is a group of a,b,c where a^2 + b^2 = c^2. You need to find all a,b,c combinations which satisfy the above rule, starting at 0,0,0 up to 200,609,641. The first triple will be [3,4,5], the next will be [5,12,13], etc. n is the length of the small side a, so if n is 5 you need to check all triples with a=1, a=2, a=3, a=4, a=5 and find the two cases shown above as being Pythagorean.
EDIT
Thanks for all the submissions. This is what I came up with (using Python):
import math

for a in range(1, 201):            # a up to and including 200
    for b in range(a, a * a):      # crude upper bound for b; see below
        csqrd = a * a + b * b
        c = math.sqrt(csqrd)
        if math.floor(c) == c:
            print(a, b, int(c))
This DOES return the triple (200,609,641), where 200 is the upper limit for 'a', but computing the upper limit for 'b' remains tricky. Not sure how I would go about this... suggestions welcome :)
Thanks
Baba
p.s. I'm not looking for a solution but rather help in improving my problem-solving skills. (definitely needed :-) )

You only need two loops. Note that n is given, meaning you read it from the keyboard or from a file.
Once you read n, you simply loop a from 1; then, inside that loop, you loop b from a. Then you check if a <= n and if b <= n. If yes, you check if a^2 + b^2 is a square (if it can be written as c^2 where c is an integer). If yes, you output the corresponding triplet. You can stop the first loop once a > n and the second loop once b > n.
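A minimal sketch of that structure in Python (my own rendering, assuming n is read from the keyboard; the isqrt-based test is one way to check for a perfect square):

import math

n = int(input())                 # n is given, e.g. 200

a = 1
while a <= n:
    b = a
    while b <= n:
        csqrd = a * a + b * b
        c = math.isqrt(csqrd)    # integer square root (Python 3.8+)
        if c * c == csqrd:       # a^2 + b^2 is a perfect square
            print(a, b, c)
        b += 1
    a += 1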

To compute the upper limit of b ... certainly we can't go past a^2 + b^2 = (b+1)^2, since the gap between successive squares increases. Now, (b+1)^2 is b^2 + 2b + 1, so we can stop on b when a^2 < 2b + 1. (In fact, for odd a, the biggest triple is when b = (a^2 - 1)/2, and then a^2 + b^2 = (b+1)^2.)
Let's consider even a. Then, we need to consider a^2 + b^2 = (b+2)^2, since 2b+1 is necessarily odd. Now, (b+2)^2 - b^2 = 4b+4, so we're looking at a^2 = 4b+4, or b = (a^2 - 4)/4 as the highest b (and, as before, we know this b works).
Therefore, for a given a, you need to check values of b up to
(a^2 - 1)/2 (a odd)
(a^2 - 4)/4 (a even)
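A quick sketch of how those bounds tighten the b loop (my own framing of the formulas above):

import math

for a in range(3, 201):          # a = 1, 2 admit no triples (b_max < a)
    # largest b for which a^2 + b^2 can still be a perfect square
    b_max = (a * a - 1) // 2 if a % 2 else (a * a - 4) // 4
    for b in range(a, b_max + 1):
        csqrd = a * a + b * b
        c = math.isqrt(csqrd)
        if c * c == csqrd:
            print(a, b, c)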

Given any a and b, you can compute what c should be. You can also check whether the c you get is a whole number. With that in mind, you need to check all the a and b values and find the ones that give you a whole number for c.
This should take just two loops (one for a and one for b). Leave comments if you want more help, and let me know what problems you have.

So Pythagorean triples luckily have two properties that make this not so bad to solve:
First, all the numbers in a triple have to be integers (that means you can calculate a^2 + b^2 and you have a triple if c = sqrt(a^2 + b^2) is an integer, not a fractional value). Additionally, c is bounded by what a and b are.
So this should tell you how many variables you really have (which will guide your algorithm design, specifically how many for loops you need). The latter piece of information will tell you how long a range you need to iterate over. I've tried to be vague as per your request, but let me know if you'd like anything more specific.

Break the problem into subproblems. The first clue is that you have an upper bound n on the value of c. Let's start with c = 1 and see how many triples can be formed with:
a^2 + b^2 = 1
Now let a run from 1 to c-1. For each a, we have to check whether b = sqrt(c^2 - a^2) is an integer, i.e. whether b == int(b).
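A small sketch of that outer-loop-on-c structure (my own rendering, assuming the bound n applies to c):

import math

n = 200
for c in range(1, n + 1):
    for a in range(1, c):
        bsqrd = c * c - a * a
        b = math.isqrt(bsqrd)
        if b * b == bsqrd and a <= b:   # a <= b avoids listing each triple twice
            print(a, b, c)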

Leaving the formula and the language aside, you're trying to find every combination of two variables, a and b, so...
foreach A
    foreach B
        foreach C
            do something with B and A and eval with C
        end foreach C
    end foreach B
end foreach A
for ($x = 1; $x <= 200; $x++) {
    for ($y = 1; $y <= 200; $y++) {
        for ($z = 1; $z <= 200; $z++) {
            if ($x < $y) {
                if (pow($x, 2) + pow($y, 2) == pow($z, 2)) {
                    echo "$x, $y , $z<br/>";
                }
            }
        }
    }
}
3, 4 , 5
5, 12 , 13
6, 8 , 10
...
81, 108 , 135
84, 112 , 140
84, 135 , 159

Related

How can I descale x by n/d, when x*n overflows?

My problem is limited to unsigned integers of 256 bits.
I have a value x, and I need to descale it by the ratio n / d, where n < d.
The simple solution is of course x * n / d, but the problem is that x * n may overflow.
I am looking for any arithmetic trick which may help in reaching a result as accurate as possible.
Dividing each of n and d by gcd(n, d) before calculating x * n / d does not guarantee success.
Is there any process (iterative or otherwise) which I can use in order to solve this problem?
Note that I am willing to settle on an inaccurate solution, but I'd need to be able to estimate the error.
NOTE: Using integer division instead of normal division
Let us suppose
x = ad + b
n = cd + e
Then find a,b,c,e as follows:
a = x/d
b = x%d
c = n/d
e = n%d
Then,
nx/d = acd + ae + bc + be/d
CALCULATING be/d
1. Represent e in binary form
2. Find b/d, 2b/d, 4b/d, 8b/d, ..., 2^k*b/d (one term per bit of e) and their remainders
3. be/d = the sum of the quotients for the set bits of e, plus the sum of the corresponding remainders divided by d
Example:
e = 101 in binary = 4+1
be/d = (b/d + 4b/d) + (b%d + 4b%d)/d
FINDING b/d, 2b/d, 4b/d, ..., 2^k*b/d
quotient(2*ib/d) = 2*quotient(ib /d) + (2*remainder(ib /d))/d
remainder(2*ib/d) = (2*remainder(ib/d))%d
Executes in O(number of bits)
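A Python sketch of the whole decomposition (Python integers don't overflow, so this only demonstrates the arithmetic; the function name is mine). Since n < d, every intermediate stays bounded by the inputs rather than by x*n:

def muldiv(x, n, d):
    # compute (x * n) // d without ever forming x * n
    a, b = divmod(x, d)        # x = a*d + b, 0 <= b < d
    c, e = divmod(n, d)        # n = c*d + e, 0 <= e < d
    # (x*n)//d = a*c*d + a*e + b*c + (b*e)//d
    q = r = 0                  # running quotient/remainder of b*e divided by d
    qk, rk = 0, b              # quotient/remainder of (2^k * b) / d, k = 0
    bits = e
    while bits:
        if bits & 1:           # this bit of e contributes 2^k * b
            q, r = q + qk, r + rk
            q, r = q + r // d, r % d
        bits >>= 1
        qk, rk = 2 * qk + (2 * rk) // d, (2 * rk) % d
    return a * c * d + a * e + b * c + q

assert muldiv(10**40, 3, 7) == 10**40 * 3 // 7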

What's an algorithm to get a number closest to a constant that can evenly (within a margin) divide into two other constants?

So let's say I have numbers A = 1483 and B = 635. My X = 100.0.
Let's say my allowed MARGIN is 10.0.
What's the best way to get the closest number to X (can be floating point) that can divide into A and B with a remainder that is less than MARGIN?
For an answer K: A % K <= MARGIN and B % K <= MARGIN, with K being as close to X as possible, for example |K - X| < 100.
Let's try and write the problem with mathematical notations.
What you have is Euclidean divisions:
A = Q1*X + R1
B = Q2*X + R2
You want to find the minimal |x| such that
A = Q1'*(X+x) + R1' , |R1'| <= M
B = Q2'*(X+x) + R2' , |R2'| <= M
To help you find such x, you have relations like:
A = Q1*(X+x) + R1-Q1*x
B = Q2*(X+x) + R2-Q2*x
From here, you should first concentrate on how to solve the example you gave, then try and generalize.
1483 = 14*100 + 83 = 15*100 - 17
635 = 6*100 + 35 = 7*100 - 65
Should you take x > 0 in order to reduce R2 (35) down to 10, or x < 0 to increase R1 (-17) up to -10?
In the first case, x should be in interval [25/6 , 45/6] to bring |R2'| <= M, but at the same time it must be in interval [73/14 , 93/14] to bring |R1'| <= M.
Do these intervals overlap?
if yes you have a solution.
if no, then you have to try further (decrement quotients Q1' and/or Q2')
Just check with any decent interpreter (Squeak/Pharo Smalltalk here)
{25/6 . 45/6. 73/14 . 93/14} sorted
= {(25/6) . (73/14) . (93/14) . (15/2)}
So they overlap, starting at x=73/14.
But maybe you would get a closer x in the other direction?
I have not given an algorithm, just a clue, up to you to continue. But you see that increment does not have to be random (like 0.001).
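To make the interval idea concrete, here is a rough Python sketch (the quotient search range and the helper name are my own choices, not part of the answer): for each nearby pair of quotients (Q1', Q2'), the conditions A mod K <= M and B mod K <= M pin K into two intervals, and any overlap gives a valid K.

def closest_divisor(A, B, X, M, spread=3):
    best = None
    for q1 in range(max(1, round(A / X) - spread), round(A / X) + spread + 1):
        for q2 in range(max(1, round(B / X) - spread), round(B / X) + spread + 1):
            # 0 <= A - q1*K <= M pins K to [(A - M)/q1, A/q1]; same for B
            lo = max((A - M) / q1, (B - M) / q2)
            hi = min(A / q1, B / q2)
            if lo > hi:
                continue                    # the two intervals don't overlap
            K = min(max(X, lo), hi)         # point of the overlap closest to X
            if best is None or abs(K - X) < abs(best - X):
                best = K
    return best

print(closest_divisor(1483, 635, 100.0, 10.0))   # 105.214... = 100 + 73/14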
For now the best way I have found is a brute-force method: starting from the GCD of A and B and decreasing by a small interval (0.001), find the smallest c(K) where K >= X and c(x) = A % x + B % x.
If I had found a way to differentiate c(x) correctly, I would have liked to compute its gradient and use gradient descent to find the optimal value without brute force.

Algorithm to simplify boolean expressions

I want to simplify a very large boolean function of the form:
f(a1,a2,....,an)= (a1+a2+a5).(a2+a7+a11+a23+a34)......(a1+a3+an).
'.' means OR
'+' means AND
there may be 100 such terms (joined by '.' with each other)
the value of n may go up to 30.
Is there any feasible algorithm to simplify this?
NOTE: this is not a lab assignment, but a small part of my project on rule generation by rough sets, where f is a dissimilarity function.
The well-known ways to do this are:
if the number of variables is less than 5, use the Karnaugh Map Algorithm
if the number of variables is 5 or more, use the Quine McCluskey Algorithm
The second way is most commonly used on a computer. It's tabular and straightforward. The first way is the best way to do by hand and is more fun, but you can't use it reliably for anything more than 4 variables.
The typical method is to use boolean algebra to reduce the statement to its simplest form.
If, for example, you have something like:
(A AND B) OR (A AND C)
you can convert it to a more simple form:
A AND (B OR C)
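As a quick illustration of this kind of algebraic reduction, here is a sketch using sympy's simplify_logic (my choice of tool; the answers above don't prescribe one):

from sympy import symbols
from sympy.logic.boolalg import simplify_logic

A, B, C = symbols('A B C')
print(simplify_logic((A & B) | (A & C)))   # A & (B | C)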
If you represent the a values as an int or long where a1 has value 2, a2 has value 4, a3 has value 8, etc.:
long a = (a1 ? 1L << 1 : 0) | (a2 ? 1L << 2 : 0) | (a3 ? 1L << 3 : 0) | ...;
(wasting a bit to keep it simple and ignoring the fact that you'd be better off with an a0 = 1)
And you do the same for all of the terms:
long[] terms = ...;
terms[0] = (1L << 1) | (1L << 2) | (1L << 5);                              // a1+a2+a5
terms[1] = (1L << 2) | (1L << 7) | (1L << 11) | (1L << 23) | (1L << 34);   // a2+a7+a11+a23+a34
Then you can find the result:
foreach (var term in terms)
{
    if ((a & term) == term) return true;
}
return false;
BUT this only works well for up to n = 64. Above that it's messy.

Is it possible to compute the minimum of three numbers by using two comparisons at the same time?

I've been trying to think of some way to do two comparisons at the same time to find the greatest/least of three numbers. Arithmetic operations on them are considered "free" in this case.
That is to say, the classical way of finding the greater of two, and then comparing it to the third number isn't valid in this case because one comparison depends on the result of the other.
Is it possible to use two comparisons where this isn't the case? I was thinking maybe comparing the differences of the numbers somehow or their products or something, but came up with nothing.
Just to reemphasize: two comparisons are still done, just that neither comparison relies on the result of the other.
Great answers so far, thanks guys
Ignoring the possibility of equal values ("ties"), there are 3! = 6 possible orderings of three items. If a comparison yields exactly one bit, then two comparisons can only encode 2*2 = 4 possible configurations, and 4 < 6. In other words: you cannot decide the order of three items using two fixed comparisons.
Using a truth table:
a b c | min | a<b a<c b<c | condition needed using only a<b and a<c
------+-----+-------------+---------------------------------------
1 2 3 |  a  |  1   1   1  | (ab==1 && ac==1)
1 3 2 |  a  |  1   1   0  | ...
2 1 3 |  b  |  0   1   1  | (ab==0 && ac==1)
3 1 2 |  b  |  0   0   1  | (ab==0 && ac==0)  <<--- (*)
2 3 1 |  c  |  1   0   0  | (ab==1 && ac==0)
3 2 1 |  c  |  0   0   0  | (ab==0 && ac==0)  <<--- (*)
As you can see, you cannot distinguish the two cases marked by (*) when using only the a<b and a<c comparisons (choosing another set of two comparisons will of course fail similarly, by symmetry).
But it is a pity: we fail to encode the three possible outcomes using only two bits. (Yes, we could, but we'd need a third comparison, or we'd have to choose the second comparison based on the outcome of the first.)
I think it's possible (the following is for the min, according to the original form of the question):
B_lt_A = B < A
C_lt_min_A_B = C < (A + B - abs(A - B)) / 2
and then you combine these (I have to write it sequentially, but this is rather a 3-way switch):
if (C_lt_min_A_B) then C is the min
else if (B_lt_A) then B is the min
else A is the min
You might argue that the abs() implies a comparison, but that depends on the hardware. There is a trick to do it without comparison for integers. For IEEE 754 floating point it's just a matter of forcing the sign bit to zero.
Regarding (A + B - abs(A - B)) / 2: this is (A + B) / 2 - abs(A - B) / 2, i.e., the minimum of A and B is half the distance between A and B down from their midpoint. This can be applied again to yield min(A,B,C), but then you lose the identity of the minimum, i.e., you only know the value of the minimum, but not where it comes from.
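A short Python sketch of that scheme (the function name is mine; note both comparison results are computed from the original inputs only, so neither depends on the other):

def min3(A, B, C):
    B_lt_A = B < A                                  # comparison 1
    C_lt_min_A_B = C < (A + B - abs(A - B)) / 2     # comparison 2
    if C_lt_min_A_B:
        return C
    if B_lt_A:
        return B
    return A

print(min3(3, 1, 2))   # 1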
One day we may find that parallelizing the 2 comparisons gives a better turnaround time, or even throughput, in some situation. Who knows, maybe for some vectorization, or for some MapReduce, or for something we don't know about yet.
If you were only talking integers, I think you can do it with zero comparisons using some math and a bit fiddle. Given three int values a, b, and c:
int d = ((a + b) - Abs(a - b)) / 2; // find d = min(a,b)
int e = ((d + c) - Abs(d - c)) / 2; // find min(d,c)
with Abs(x) implemented as
int Abs(int x) {
    int mask = x >> 31;
    return (x + mask) ^ mask;
}
Not extensively tested, so I may have missed something. Credit for the Abs bit twiddle goes to these sources
How to compute the integer absolute value
http://graphics.stanford.edu/~seander/bithacks.html#IntegerAbs
From Bit Twiddling Hacks
r = y ^ ((x ^ y) & -(x < y)); // min(x, y)
min = r ^ ((z ^ r) & -(z < r)); // min(z, r)
Two comparisons!
How about this to find the minimum:
If (b < a)
    Swap(a, b)
If (c < a)
    Swap(a, c)
Return a;
You can do this with zero comparisons in theory, assuming 2's complement number representation (and that right shifting a signed number preserves its sign).
min(a, b) = (a+b-abs(a-b))/2
abs(a) = (2*(a >> bit_depth)+1) * a
and then
min(a,b,c) = min(min(a,b),c)
This works because assuming a >> bit_depth gives 0 for positive numbers and -1 for negative numbers then 2*(a>>bit_depth)+1 gives 1 for positive numbers and -1 for negative numbers. This gives the signum function and we get abs(a) = signum(a) * a.
Then it's just a matter of the min(a,b) formula. This can be demonstrated by going through the two possibilities:
case min(a,b) = a:
min(a,b) = (a+b - -(a-b))/2
min(a,b) = (a+b+a-b)/2
min(a,b) = a
case min(a,b) = b:
min(a,b) = (a+b-(a-b))/2
min(a,b) = (a+b-a+b)/2
min(a,b) = b
So the formula for min(a,b) works.
The assumptions above only apply to the abs() function, if you can get a 0-comparison abs() function for your data type then you're good to go.
For example, IEEE754 floating point data has a sign bit as the top bit so the absolute value simply means clearing that bit. This means you can also use floating point numbers.
And then you can extend this to min of N numbers in 0 comparisons.
In practice though, it's hard to imagine this method will beat anything not intentionally slower. This is all about using less than 3 independent comparisons, not about making something faster than the straightforward implementation in practice.
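A quick Python check of the scheme (Python's >> is an arithmetic shift; the 63 below plays the role of bit_depth and assumes values fit in 64 bits):

def iabs(a):
    sign = 2 * (a >> 63) + 1      # +1 for a >= 0, -1 for a < 0
    return sign * a

def imin(a, b):
    return (a + b - iabs(a - b)) // 2

def imin3(a, b, c):
    return imin(imin(a, b), c)

assert imin3(3, -7, 2) == -7
assert imin3(5, 5, 9) == 5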
if cos(1.5*atan2(sqrt(3)*(B-C), 2*A-B-C)) > 0 then
    A is the max
else
    if cos(1.5*atan2(sqrt(3)*(C-A), 2*B-C-A)) > 0 then
        B is the max
    else
        C is the max

Dynamic programming idiom for combinations

Consider the problem in which you have a value of N and you need to calculate how many ways you can sum up to N dollars using [1,2,5,10,20,50,100] Dollar bills.
Consider the classic DP solution:
C = [1, 2, 5, 10, 20, 50, 100]

def comb(p):
    if p == 0:
        return 1
    c = 0
    for x in C:
        if x <= p:
            c += comb(p - x)
    return c
It counts ordered sequences rather than unordered combinations. For example, comb(4) yields 5 results: [1,1,1,1], [2,1,1], [1,2,1], [1,1,2], [2,2], whereas there are actually only 3 ([2,1,1], [1,2,1], [1,1,2] are all the same).
What is the DP idiom for calculating this problem? (non-elegant solutions such as generating all possible solutions and removing duplicates are not welcome)
Not sure about any DP idioms, but you could try using Generating Functions.
What we need to find is the coefficient of x^N in
(1 + x + x^2 + ...)(1 + x^2 + x^4 + ...)(1 + x^5 + x^10 + ...)...(1 + x^100 + x^200 + ...)
(the exponent N decomposes as: number of times 1 appears * 1 + number of times 2 appears * 2 + ... + number of times 100 appears * 100)
which is the same as the reciprocal of
(1-x)(1-x^2)(1-x^5)(1-x^10)(1-x^20)(1-x^50)(1-x^100).
You can now factorize each term as a product of roots of unity, split the reciprocal into partial fractions (a one-time step), find the coefficient of x^N in each part (which will be of the form Polynomial/(x-w)), and add them up.
You could do some DP in calculating the roots of unity.
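A hedged sketch of extracting that coefficient directly, by multiplying in the truncated series 1/(1 - x^k) one factor at a time (my own shortcut: it skips the partial-fraction machinery but computes the same number):

def count_partitions(N, bills=(1, 2, 5, 10, 20, 50, 100)):
    coeffs = [1] + [0] * N               # the series "1", truncated at x^N
    for k in bills:
        # multiply by 1/(1 - x^k) = 1 + x^k + x^(2k) + ...
        for i in range(k, N + 1):
            coeffs[i] += coeffs[i - k]
    return coeffs[N]

print(count_partitions(4))   # 3: [1,1,1,1], [1,1,2], [2,2]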
You should not start from the beginning each time, but at most from where you came from at each depth.
That means you have to pass two parameters: the start index and the remaining total.
C = [1, 2, 5, 10, 20, 50, 100]

def comb(p, start=0):
    if p == 0:
        return 1
    c = 0
    for i, x in enumerate(C[start:]):
        if x <= p:
            c += comb(p - x, i + start)
    return c
or, equivalently (it might be more readable):
C = [1, 2, 5, 10, 20, 50, 100]

def comb(p, start=0):
    if p == 0:
        return 1
    c = 0
    for i in range(start, len(C)):
        x = C[i]
        if x <= p:
            c += comb(p - x, i)
    return c
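The recursion above recomputes the same (p, start) states many times; here is a sketch of the same idea memoized with functools.lru_cache (my addition), which makes it a genuine DP:

from functools import lru_cache

C = [1, 2, 5, 10, 20, 50, 100]

@lru_cache(maxsize=None)
def comb(p, start=0):
    if p == 0:
        return 1
    return sum(comb(p - C[i], i)
               for i in range(start, len(C)) if C[i] <= p)

print(comb(4))   # 3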
Terminology: what you are looking for are "integer partitions" into prescribed parts (you should replace "combinations" in the title).
Ignoring the "dynamic programming" part of the question, a routine for your problem is given in the first section of chapter 16 ("Integer partitions", p. 339ff) of the fxtbook, online at
http://www.jjj.de/fxt/#fxtbook
