Finding the continued fraction of 2^(1/3) to very high precision - algorithm

Here I'll write a0, a1, ..., an for the partial quotients of the continued fraction.
It is possible to find the continued fraction of a number by computing it to high precision and then applying the definition, but that requires at least O(n) bits of memory to find a0, a1, ..., an, and in practice it is much worse. Using double floating-point precision it is only possible to find a0, a1, ..., a19.
An alternative is to use the fact that if a, b, c are rational numbers then there exist unique rationals x, y, z such that 1/(a + b*2^(1/3) + c*2^(2/3)) = x + y*2^(1/3) + z*2^(2/3), namely
x = (a^2 - 2bc)/d,   y = (2c^2 - ab)/d,   z = (b^2 - ac)/d,   where d = a^3 + 2b^3 + 4c^3 - 6abc.
So if I represent x, y, and z to absolute precision using the Boost rational library, I can obtain floor(x + y*2^(1/3) + z*2^(2/3)) accurately using only double-precision approximations of 2^(1/3) and 2^(2/3), because I only need the value to be within 1/2 of the true one. Unfortunately the numerators and denominators of x, y, and z grow considerably fast, and if you use regular floats instead, the errors pile up quickly.
This way I was able to compute a0, a1, ..., a10000 in under an hour, but somehow Mathematica can do that in 2 seconds. Here's my code for reference:
#include <iostream>
#include <cstdint>
#include <boost/multiprecision/cpp_int.hpp>

namespace mp = boost::multiprecision;

int main()
{
    // Double approximations of 2^(1/3) and 2^(2/3); only used to take floors.
    const double t_1 = 1.259921049894873164767210607278228350570251;
    const double t_2 = 1.587401051968199474751705639272308260391493;
    // Current value is p + q*2^(1/3) + r*2^(2/3), with p, q, r exact rationals.
    mp::cpp_rational p = 0;
    mp::cpp_rational q = 1;
    mp::cpp_rational r = 0;
    for (unsigned int i = 1; i != 10001; ++i) {
        double p_f = static_cast<double>(p);
        double q_f = static_cast<double>(q);
        double r_f = static_cast<double>(r);
        uint64_t floor = p_f + t_1 * q_f + t_2 * r_f;
        std::cout << floor << ", ";
        p -= floor;
        //std::cout << floor << " " << p << " " << q << " " << r << std::endl;
        // Invert p + q*2^(1/3) + r*2^(2/3) exactly.
        mp::cpp_rational den = (p * p * p + 2 * q * q * q +
                                4 * r * r * r - 6 * p * q * r);
        mp::cpp_rational a = (p * p - 2 * q * r) / den;
        mp::cpp_rational b = (2 * r * r - p * q) / den;
        mp::cpp_rational c = (q * q - p * r) / den;
        p = a;
        q = b;
        r = c;
    }
    return 0;
}

The Lagrange algorithm
The algorithm is described for example in Knuth's book The Art of Computer Programming, vol 2 (Ex 13 in section 4.5.3 Analysis of Euclid's Algorithm, p. 375 in 3rd edition).
Let f be a polynomial with integer coefficients whose only real root is an irrational number x0 > 1. Then the Lagrange algorithm calculates the consecutive quotients of the continued fraction of x0.
I implemented it in Python:
def cf(a, N=10):
    """
    a : list - coefficients of the polynomial,
        i.e. f(x) = a[0] + a[1]*x + ... + a[n]*x^n
    N : number of quotients to output
    """
    # Degree of the polynomial
    n = len(a) - 1
    # List of consecutive quotients
    ans = []

    def shift_poly():
        """
        Replaces polynomial f(x) with f(x+1) (shifts its graph to the left).
        """
        for k in range(n):
            for j in range(n - 1, k - 1, -1):
                a[j] += a[j+1]

    for _ in range(N):
        quotient = 1
        shift_poly()
        # While the root is > 1 shift it left
        while sum(a) < 0:
            quotient += 1
            shift_poly()
        # Otherwise, we have the next quotient
        ans.append(quotient)
        # Replace polynomial f(x) with -x^n * f(1/x)
        a.reverse()
        a = [-x for x in a]
    return ans
It takes about 1s on my computer to run cf([-2, 0, 0, 1], 10000). (The coefficients correspond to the polynomial x^3 - 2 whose only real root is 2^(1/3).) The output agrees with the one from Wolfram Alpha.
Caveat
The coefficients of the polynomial maintained inside the function quickly become quite large integers. So this approach needs a bigint implementation in other languages (pure Python 3 handles it natively, but for example NumPy doesn't).

You might have more luck computing 2^(1/3) to high accuracy and then trying to derive the continued fraction from that, using interval arithmetic to determine if the accuracy is sufficient.
Here's my stab at this in Python, using Halley iteration to compute 2^(1/3) in fixed point. The dead code is an attempt to compute fixed-point reciprocals more efficiently than Python via Newton iteration -- no dice.
Timing from my machine is about thirty seconds, spent mostly trying to extract the continued fraction from the fixed point representation.
# Number of fractional bits in the fixed-point representation.
prec = 40000
# a is 2 in fixed point with 3*prec fractional bits.
a = 1 << (3 * prec + 1)
two_a = a << 1
# Initial guess 1.25 for 2^(1/3), in fixed point with prec fractional bits.
x = 5 << (prec - 2)
# Halley iteration: x' = x*(x^3 + 2a)/(2x^3 + a) converges to the cube root of a.
while True:
    x_cubed = x * x * x
    two_x_cubed = x_cubed << 1
    x_prime = x * (x_cubed + two_a) // (two_x_cubed + a)
    if -1 <= x_prime - x <= 1: break
    x = x_prime
# Extract the continued fraction: repeatedly take the integer part and
# replace x with the reciprocal of its fractional part.
cf = []
four_to_the_prec = 1 << (2 * prec)
for i in range(10000):
    q = x >> prec
    r = x - (q << prec)
    cf.append(q)
    if True:
        x = four_to_the_prec // r
    else:
        # Dead code: Newton iteration for the fixed-point reciprocal.
        x = 1 << (2 * prec - r.bit_length())
        while True:
            delta_x = (x * ((four_to_the_prec - r * x) >> prec)) >> prec
            if not delta_x: break
            x += delta_x
print(cf)


Given some rounded numbers, how to find the original fraction?

After asking this question on math.stackexchange.com I figured this might be a better place after all...
I have a small list of positive numbers rounded to (say) two decimals:
1.15 (can be 1.145 - 1.154999...)
1.92 (can be 1.915 - 1.924999...)
2.36 (can be 2.355 - 2.364999...)
2.63 (can be 2.625 - 2.634999...)
2.78 (can be 2.775 - 2.784999...)
3.14 (can be 3.135 - 3.144999...)
24.04 (can be 24.035 - 24.044999...)
I suspect that these numbers are fractions of integers and that all numerators or all denominators are equal. Choosing 100 as a common denominator would work in this case, that would leave the last value as 2404/100. But there could be a 'simpler' solution with much smaller integers.
How do I efficiently find the smallest common numerator and/or denominator? Or (if that is different) the one that would result in the smallest maximum denominator resp. numerator?
Of course I could brute force for small lists/numbers and few decimals. That would find 83/72, 138/72, 170/72, 189/72, 200/72, 226/72 and 1731/72 for this example.
Assuming the numbers don't have too many significant digits and aren't too big, you can try increasing the denominator until you find a valid solution. It is not just brute-forcing: the following script keeps working on the number that violates the constraints for as long as nothing is found, in the hope of pushing the denominator up faster, without having to recheck the non-problematic numbers.
It works based on the following formula:
x / y < a / b if x * b < a * y
This means a denominator d is valid if:
ceil(loNum * d / loDen) * hiDen < hiNum * d
The ceil(...) part calculates the smallest possible numerator satisfying the constraint of the low boundary, and the rest checks whether it also satisfies the high boundary.
It would be better to work with pure integer calculations, e.g. just longs in Java; then the ceil part becomes:
(loNum * d + loDen - 1) / loDen
function findRatios(arr) {
  let lo = [], hi = [], consecutive = 0, d = 1
  for (let i = 0; i < arr.length; i++) {
    let x = '' + arr[i], len = x.length, dot = x.indexOf('.'),
        num = parseInt(x.substr(0, dot) + x.substr(dot + 1)) * 10,
        den = Math.pow(10, len - dot),
        loGcd = gcd(num - 5, den), hiGcd = gcd(num + 5, den)
    lo[i] = {num: (num - 5) / loGcd, den: den / loGcd}
    hi[i] = {num: (num + 5) / hiGcd, den: den / hiGcd}
  }
  for (let index = 0; consecutive < arr.length; index = (index + 1) % arr.length) {
    if (!valid(d, lo[index], hi[index])) {
      consecutive = 1
      d++
      while (!valid(d, lo[index], hi[index]))
        d++
    } else {
      consecutive++
    }
  }
  for (let i = 0; i < arr.length; i++)
    console.log(Math.ceil(lo[i].num * d / lo[i].den) + ' / ' + d)
}

function gcd(x, y) {
  while (y) {
    let t = y
    y = x % y
    x = t
  }
  return x
}

function valid(d, lo, hi) {
  let n = Math.ceil(lo.num * d / lo.den)
  return n * hi.den < hi.num * d
}

findRatios([1.15, 1.92, 2.36, 2.63, 2.78, 3.14, 24.04])
findRatios([1.15, 1.92, 2.36, 2.63, 2.78, 3.14, 24.04])

Optimized point on line finding algorithm

I'm looking for an optimized integer-based point-on-line algorithm, where you can define the line using begin and end coordinates, and the point to find based on either an x or y input.
I know how to do this using dy/dx division but I'm looking for an algorithm that eliminates all divisions.
This is what I'm currently doing:
int lerpmult = ((px - v0.x)<<16) / (v1.x - v0.x);
vec2 result{px, v0.y + ((lerpmult*(v1.y - v0.y))>>16)};
The division in the first line is the problem I'm trying to eliminate.
One trick to solve this would be using the scalar product to determine the cosine of the angle between two vectors:
def line_test(a, b, p):
    v_ap = tuple(m - n for n, m in zip(a, p))
    v_ab = tuple(m - n for n, m in zip(a, b))
    scp = sum(m * n for m, n in zip(v_ap, v_ab))
    return scp > 0 and scp * scp == sum(n * n for n in v_ap) * sum(n * n for n in v_ab) and all(m <= n for m, n in zip(v_ap, v_ab))
The parameters of the above function are the end-points of the line (a and b) and the point p which we want to test.
Step by step the following happens in each line:
v_ap = tuple(m - n for n, m in zip(a, p))
We calculate the vector from a to p (v_ap)
v_ab = tuple(m - n for n, m in zip(a, b))
The vector from a to b (v_ab)
scp = sum(m * n for m, n in zip(v_ap, v_ab))
In this line the scalar product of v_ap and v_ab is calculated. The result is scp = cos(v_ab, v_ap) * euclidean_length(v_ab) * euclidean_length(v_ap), where the euclidean length of a vector is defined as sqrt(sum(n * n for n in vector)) (the standard definition of the geometric length of a vector).
return scp > 0 and scp * scp == sum(n * n for n in v_ap) * sum(n * n for n in v_ab) and all(m <= n for m, n in zip(v_ap, v_ab))
This line is pretty complex, so I'll break it down into a few parts:
scp * scp == sum(n * n for n in v_ap) * sum(n * n for n in v_ab)
Since division isn't allowed, we shouldn't use the square root either, since its calculation usually involves divisions. So instead of calculating the square root, we square both the euclidean lengths of the two vectors and the scalar product, thus eliminating the square-root calculation:
scp = cos(v_ab, v_ap) * euclidean_length(v_ab) * euclidean_length(v_ap) =
= cos(v_ab, v_ap) * sqrt(sum(n ^ 2 for n in v_ab)) * sqrt(sum(n ^ 2 for n in v_ap))
scp ^ 2 = cos(v_ab, v_ap) ^ 2 * sum(n ^ 2 for n in v_ab) * sum(n ^ 2 for n in v_ap)
The cosine of the angle between the two vectors should be 1 if they point in the same direction. So the square of the scalar product, if the vectors share the same direction, would be
euclidean_length(v_ap) ^ 2 * euclidean_length(v_ab) ^ 2
which we then compare to the square of the actual scalar product scp.
This however leaves one problem: taking the square eliminates the sign, which we check separately with the comparison scp > 0. Since the euclidean length is always positive, only the sign of the cosine determines the sign of scp. A negative value of scp means that the angle between v_ap and v_ab is more than pi / 2 (and at most pi). However the sign of scp gets lost when squaring, which means that we can only check whether the two vectors are parallel, not whether they point in the same direction. This problem is solved by checking scp > 0 in addition.
Last but not least we have to check whether the distance from a to p is shorter than the distance from a to b. This can be done by checking whether v_ap has a smaller length than v_ab. Since we already checked that the two vectors point in exactly the same direction, it is sufficient to check whether all elements in v_ap are at most as large as the corresponding element in v_ab, which is done by
all(m <= n for m, n in zip(v_ap, v_ab))
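For illustration, here is a quick check of line_test with a few 2-D integer points (these example calls are mine, not part of the original answer):
print(line_test((0, 0), (4, 4), (2, 2)))   # True:  (2, 2) lies on the segment
print(line_test((0, 0), (4, 4), (1, 2)))   # False: (1, 2) is off the line
print(line_test((0, 0), (4, 4), (5, 5)))   # False: on the line, but beyond b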
The answer you are looking for is as follows:
Let's say our line equation is Ax + By + C = 0. Then we just need these three coefficients (A, B and C).
Say this line goes through points P(P_x, P_y) and Q(Q_x, Q_y). Then it is easy to calculate the three coefficients:
A = P_y - Q_y,
B = Q_x - P_x,
C = -A * P_x - B * P_y
Once we have our line equation, we can easily calculate the x or y coordinate for a given y or x, respectively.
Here is my C++ template:
#include <iostream>
using namespace std;

// point struct
struct pt {
    int x, y;
};

// line struct
struct line {
    int a, b, c;
    // create line object
    line() {}
    line(pt p, pt q) {
        a = p.y - q.y;
        b = q.x - p.x;
        c = -a * p.x - b * p.y;
    }
    // a must be non-zero, otherwise a division by zero occurs
    int getX(int y) {
        return (-b * y - c) / a;
    }
    // b must be non-zero, otherwise a division by zero occurs
    int getY(int x) {
        return (-a * x - c) / b;
    }
};

int main() {
    pt p, q;
    p.x = 1, p.y = 2;
    q.x = 3, q.y = 6;
    line m = line(p, q);
    cout << "for y = 4, x = " << m.getX(4) << endl;
    cout << "for x = 2, y = " << m.getY(2) << endl;
    return 0;
}
Output:
for y = 4, x = 2
for x = 2, y = 4
Ref: http://e-maxx.ru/algo/segments_intersection

How to pick a number based on probability?

I want to select a random number from 0, 1, 2, 3, ..., n, but I want the chance of selecting k (where 0 < k < n) to be lower than the chance of selecting k - 1 by a multiplicative factor x, so x = (k - 1) / k. The bigger the number, the smaller the chance of picking it.
As an answer I want to see an implementation of the following method:
int pickANumber(n,x)
This is for a game that I am developing. I saw these questions as related, but they are not exactly the same:
How to pick an item by its probability
C Function for picking from a list where each element has a distinct probability
For n items, you need to solve the equations:
p1 + p2 + ... + pn = 1
p1 = p2 * x
p2 = p3 * x
...
p_n-1 = pn * x
Solving this gives you:
p1 + p2 + ... + pn = 1
(p2 * x) + (p3 * x) + ... + (pn * x) + pn = 1
((p3*x) * x) + ((p4*x) * x) + ... + ((pn*x) * x) + (pn*x) + pn = 1
....
pn* (x^(n-1) + x^(n-2) + ... +x^1 + x^0) = 1
pn*(1-x^n)/(1-x) = 1
pn = (1-x)/(1-x^n)
This gives you the probability you need to set to pn, and from it you can calculate the probabilities for all other p1,p2,...p_n-1
Now, you can use a "black box" RNG that chooses a number with a distribution, like those in the threads you mentioned.
A simple approach to do it is to set up an auxiliary array:
aux[i] = p1 + p2 + ... + pi
Now, draw a random number with uniform distribution between 0 and aux[n], and using binary search (the aux array is sorted), find the first index whose value in aux is greater than the uniform random number you drew.
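A minimal Python sketch of this scheme (the function and variable names are mine, and it assumes x != 1; the bisect module does the binary search):
import bisect
import random

def pick_a_number(n, x):
    # From the derivation above: pn = (1 - x)/(1 - x^n) and p_k = p_{k+1} * x,
    # hence p_k = pn * x^(n - k) for k = 1..n.
    pn = (1.0 - x) / (1.0 - x ** n)
    probs = [pn * x ** (n - k) for k in range(1, n + 1)]
    # aux[i] = p_1 + ... + p_(i+1): cumulative sums, sorted by construction.
    aux = []
    total = 0.0
    for p in probs:
        total += p
        aux.append(total)
    # Uniform draw between 0 and aux[n-1], then binary search for the first
    # cumulative value that exceeds it.
    u = random.uniform(0.0, aux[-1])
    return min(n, bisect.bisect_right(aux, u) + 1)

# Rough check: with x = 2 each number is twice as likely as the next one.
counts = {k: 0 for k in range(1, 6)}
for _ in range(100000):
    counts[pick_a_number(5, 2)] += 1
print(counts)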
Original answer, for subtraction (before the question was edited):
For n items, you need to solve the equation:
p1 + p2 + ... + pn = 1
p1 = p2 + x
p2 = p3 + x
...
p_n-1 = pn + x
Solving this gives you:
p1 + p2 + ... + pn = 1
(p2 + x) + (p3 + x) + ... + (pn + x) + pn = 1
((p3+x) + x) + ((p4+x) + x) + ... + ((pn+x) + x) + (pn+x) + pn = 1
....
n*pn + ((n-1) + (n-2) + ... + 1 + 0)*x = 1
n*pn + x*n*(n-1)/2 = 1
pn = 1/n - x*(n-1)/2
This gives you the probability you need to set to pn, and from it you can calculate the probabilities for all other p1,p2,...p_n-1
Now, you can use a "black box" RNG that chooses a number with a distribution, like those in the threads you mentioned.
Be advised, it is not guaranteed that there is a solution with 0 < p_i < 1 for all i; your requirements cannot guarantee one, and whether it exists depends on the values of n and x.
Edit: This answer was for the OP's original question, which was different in that each probability was supposed to be lower by a fixed amount than the previous one.
Well, let's see what the constraints say. You want to have P(k) = P(k - 1) - x. So we have:
P(0)
P(1) = P(0) - x
P(2) = P(0) - 2x
...
In addition, Sum_k P(k) = 1. Summing, we get:
1 = (n + 1) * P(0) - x * n * (n + 1) / 2,
This gives you an easy constraint between x and P(0). Solve for one in terms of the other.
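As a small sketch of that constraint (my own illustration, not part of the answer above): fix x, derive P(0) from the equation, and sample with a running cumulative sum.
import random

def pick_a_number_linear(n, x):
    # 1 = (n + 1)*P(0) - x*n*(n + 1)/2  =>  P(0) = 1/(n + 1) + x*n/2
    p0 = 1.0 / (n + 1) + x * n / 2.0
    probs = [p0 - k * x for k in range(n + 1)]   # P(k) = P(0) - k*x
    assert all(p >= 0 for p in probs)            # requires x <= 2/(n*(n + 1))
    u = random.uniform(0.0, 1.0)
    total = 0.0
    for k, p in enumerate(probs):
        total += p
        if u <= total:
            return k
    return n

# Example: n = 4, x = 0.05 gives P = [0.30, 0.25, 0.20, 0.15, 0.10].
print(pick_a_number_linear(4, 0.05))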
For this I would use the Mersenne Twister algorithm for a uniform distribution, which Boost provides, then have a mapping function to map the results of that uniform distribution to the actual number selected.
Here's a quick example of a potential implementation, although I left out the quadratic equation implementation since it is well known:
#include <cstdlib>
#include <ctime>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_int.hpp>
#include <boost/random/variate_generator.hpp>

// Quadratic-equation solver, intentionally left out (see the text above).
int quadratic_equation_for_positive_value(int a, int b, int c);

int f_of_xib(int x, int i, int b)
{
    return x * i * i / 2 + b * i;
}

int b_of_x(int x)
{
    // Since r = x when i = 1 (see below), b = r - r / 2.
    return x - x / 2;
}

int pickANumber(boost::mt19937& gen, int n, int x)
{
    // First, determine the range r required where the probability equals i * x,
    // since the probability of each increasing integer is x higher of occurring.
    // Let f(i) = r and, given f'(i) = x * i, then r = ( x * i^2 ) / 2 + b * i
    // where b = ( r - ( x * i^2 ) / 2 ) / i. Since r = x when i = 1 from the problem
    // definition, this reduces to b = r - r / 2. Therefore, to find r_max simply
    // plug in x to find b, then plug in n for i, x, and b to get r_max, since r_max
    // occurs when i == n.
    int b = b_of_x(x);
    int r_max = f_of_xib(x, n, b);
    boost::uniform_int<> range(0, r_max);
    boost::variate_generator<boost::mt19937&, boost::uniform_int<> > next(gen, range);
    // Now, to map the random number to the desired number, just find the positive
    // value for i when r is the returned random number, which boils down to finding
    // the non-negative root of 0 = ( x * i^2 ) / 2 + b * i - r.
    int random_number = next();
    return quadratic_equation_for_positive_value(1, b, random_number);
}

int main(int argc, char** argv)
{
    boost::mt19937 gen;
    gen.seed(static_cast<unsigned int>(time(0)));
    pickANumber(gen, 10, 1);
    system("pause");
}

Avoiding Brute Force: Counting Solutions

In a programming contest, a problem was:
Count all solutions to the equation: x + 4y + 4z = n. You will be
given n and you will determine the count of solutions. Assume x, y and z are positive integers.
I have considered using triple for loops (brute force), but it was inefficient, causing TIME LIMIT EXCEEDED (since n may be 1,000,000):
int sol = 0;
for (int i = 1; i <= n; i++)
{
    for (int j = 1; j <= n / 4; j++)
    {
        for (int k = 1; k <= n / 4; k++)
        {
            if (i + 4 * j + 4 * k == n)
                sol++;
        }
    }
}
My friend could solve the problem. When I asked him, he said that he didn't use brute force at all. Instead, he converted the equation to a 'series' (i.e. a summation). I asked him to tell me how, but he refused :)
Can I know how?
This is a particular case of the coin change problem, which is solved in general by dynamic programming.
But here we can work out a simple closed-form solution. I consider x, y, z > 0.
x + 4*(y+z)=n
Let y + z = q = p + 1 (q > 1, p > 0)
x+4*q=n
x+4*p=n-4
There are M = Floor((n-5)/4) variants for x and p, hence there are M possible values of
q = 2..M+1
For every q > 1 there are (q-1) ways to split it as y + z: q = 1 + (q-1) = 2 + (q-2) = ... = (q-1) + 1
So we have N=1 + 2 + 3 + ... + M = M * (M + 1)/2 solutions
Example:
n = 15;
M = (15 - 5) div 4 = 2
N = 3
(3,1,2),(3,2,1),(7,1,1)
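A quick Python sketch (my own) that checks this closed form against a straightforward brute-force count:
def count_closed_form(n):
    # N = M*(M + 1)/2 with M = floor((n - 5)/4), as derived above.
    M = max(0, (n - 5) // 4)
    return M * (M + 1) // 2

def count_brute_force(n):
    return sum(1
               for y in range(1, n // 4 + 1)
               for z in range(1, n // 4 + 1)
               if n - 4 * y - 4 * z >= 1)

for n in (15, 20, 100, 1000):
    assert count_closed_form(n) == count_brute_force(n)
print(count_closed_form(15))   # 3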
First note that n - x must be divisible by 4. Start by finding the smallest positive value that x can take:
start = 1
while ((n - start) % 4 != 0)
{
    start = start + 1
}
From now on, you know that x will take values from [start, start+4, start+8 ...]. Now you can count the number of solutions by a simple counting loop:
count = 0
for (x = start; x < n - 4; x = x + 4)
{
    y_z_sum = (n - x) / 4
    count = count + y_z_sum - 1
}
For each choice of x, we can compute the value of y+z. For each value for y+z, there are y+z-1 possible choices (since y ranges from 1 to y+z-1, assuming that y and z are both positive integers).
Instead of a brute force solution with O(n^3) running time, you can achieve O(n) this way.
This is a classic linear algebra problem. Please refer to any linear algebra textbook on how to solve a system of linear equations. One such method is called Gaussian Elimination.

Facebook Hacker Cup: Power Overwhelming

A lot of people at Facebook like to play Starcraft II™. Some of them have made a custom game using the Starcraft II™ map editor. In this game, you play as the noble Protoss defending your adopted homeworld of Shakuras from a massive Zerg army. You must do as much damage to the Zerg as possible before getting overwhelmed. You can only build two types of units, shield generators and warriors. Shield generators do no damage, but your army survives for one second per shield generator that you build. Warriors do one damage every second. Your army is instantly overrun after your shield generators expire. How many shield generators and how many warriors should you build to inflict the maximum amount of damage on the Zerg before your army is overrun? Because the Protoss value bravery, if there is more than one solution you should return the one that uses the most warriors.
Constraints
1 ≤ G (cost for one shield generator) ≤ 100
1 ≤ W (cost for one warrior) ≤ 100
G + W ≤ M (available funds) ≤ 1000000000000 (10^12)
Here's a solution whose complexity is O(W). Let g be the number of generators we build, and similarly let w be the number of warriors we build (and G, W be the corresponding prices per unit).
We note that we want to maximize w*g subject to w*W + g*G <= M.
First, we'll get rid of one of the variables. Note that if we choose a value for g, then obviously we should buy as many warriors as possible with the remaining amount of money M - g*G. In other words, w = floor((M-g*G)/W).
Now, the problem is to maximize g*floor((M-g*G)/W) subject to 0 <= g <= floor(M/G). We want to get rid of the floor, so let's consider W distinct cases. Let's write g = W*k + r, where 0 <= r < W is the remainder when dividing g by W.
The idea is now to fix r, and insert the expression for g and then let k be the variable in the equation. We'll get the following quadratic equation in k:
Let p = floor((M - r*G)/W), then the equation is (-GW) * k^2 + (Wp - rG)k + rp.
This is a quadratic in k which goes to negative infinity as k goes to plus or minus infinity, so it has a global maximum at k = -B/(2A). To find the maximum over legal values of k, we try the minimum legal value of k, the maximum legal value of k, and the two integers nearest the real-valued maximum, if they are within the legal range.
The overall maximum for all values of r is the one we are seeking. Since there are W values for r, and it takes O(1) to compute the maximum for a fixed value, the overall time is O(W).
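Here's a sketch of this O(W) procedure in Python (my own illustrative code, with my own function names); it checks the boundary values of k plus the two integers around the real-valued maximum for each residue r:
def max_damage(G, W, M):
    def damage(g):
        # Warriors bought with whatever money is left after g generators.
        return g * ((M - g * G) // W)

    best_g = 0
    for r in range(W):                        # consider g = W*k + r
        p = (M - r * G) // W
        A, B = -G * W, W * p - r * G          # damage(k) = A*k^2 + B*k + r*p
        k_max = (M // G - r) // W             # largest k with g*G <= M
        k_star = -B / (2.0 * A)               # real-valued maximum
        candidates = {0, k_max, int(k_star), int(k_star) + 1}
        for k in candidates:
            if 0 <= k <= k_max:
                g = W * k + r
                d, best = damage(g), damage(best_g)
                # Prefer more damage; on ties prefer fewer generators
                # (i.e. more warriors), as the problem statement asks.
                if d > best or (d == best and g < best_g):
                    best_g = g
    w = (M - best_g * G) // W
    return best_g, w, best_g * w

print(max_damage(1, 1, 10))   # (5, 5, 25)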
If you build g generators, and w warriors, you can do a total damage of
w (damage per time) × g (time until game-over).
The funds constraint restricts the value of g and w to W × w + G × g ≤ M.
If you build g generators, you can build at most (M - g × G)/W warriors, and do g × (M - g × G)/W damage.
This function has a maximum at g = M / (2 G), which results in M² / (4 G W) damage.
Summary:
Build M / (2 G) shield generators.
Build M / (2 W) warriors.
Do M² / (4 G W) damage.
Since you can only build integer amounts of any of the two units, this reduces to the optimization problem:
maximize g × w
with respect to g × G + w × W ≤ M and g, w ∈ ℤ+
The general problem of Integer Programming is NP-complete, so the best algorithm for this is to check all integer values close to the real-valued solution above.
If you find some pair (g_i, w_i) with total damage d_i, you only have to check values where g_j × w_j ≥ d_i. This and the original condition W × w + G × g ≤ M constrain the search-space with each item found.
F#-code:
let findBestSetup (G : int) (W : int) (M : int) =
    let mutable bestG = int (float M / (2.0 * float G))
    let mutable bestW = int (float M / (2.0 * float W))
    let mutable bestScore = bestG * bestW

    let maxW = (M + isqrt (M*M - 4 * bestScore * G * W)) / (2*G)
    let minW = (M - isqrt (M*M - 4 * bestScore * G * W)) / (2*G)

    for w = minW to maxW do
        // ceiling of (bestScore / w)
        let minG = (bestScore + w - 1) / w
        let maxG = (M - W*w)/G
        for g = minG to maxG do
            let score = g * w
            if score > bestScore || score = bestScore && w > bestW then
                bestG <- g
                bestW <- w
                bestScore <- score
    bestG, bestW, bestScore
This assumed W and G were the counts and the cost of each was equal to 1. So it's obsolete with the updated question.
Damage = LifeTime*DamagePerSecond = W * G
So you need to maximize W*G with the constraint G+W <= M. Since both Generators and Warriors are always good we can use G+W = M.
Thus the function we want to maximize becomes W*(M-W).
Now we set the derivative = 0:
M-2W=0
W = M/2
But since we need the solution to the discrete case (you can't have x.5 warriors and x.5 generators), we use the values closest to the continuous solution (this is optimal due to the properties of a parabola).
If M is even, the continuous solution is identical to the discrete solution. If M is odd, we have two closest solutions: one with one more warrior than generators, and one the other way round. And the OP said we should choose more warriors.
So the final solution is:
G = W = M/2 for even M
and G+1 = W = (M+1)/2 for odd M.
g = total generators
gc = generator cost
w = warriors
wc = warrior cost
m = money
d = total damage
g = (m - (w*wc))/gc
w = (m - (g*gc))/wc
d = g * w
d = ((m - (w*wc))/gc) * ((m - (g*gc))/wc)
d = ((m - (w*wc))/gc) * ((m - (((m - (w*wc))/gc)*gc))/wc) damage as a function of warriors
I then tried to compute an array of all damages then find max but of course it'd not complete in 6 mins with m in the trillions.
To find the max you'd have to differentiate that equation and find where it equals zero, which I've forgotten how to do, seeing as I haven't done math in about 6 years.
Not a really a solution but here goes.
The assumption is that you already get a high value of damage when the number of shields equals 1 (cannot equal zero or no damage will be done) and the number of warriors equals (m-g)/w. Iterating up should (again an assumption) reach the point of compromise between the number of shields and warriors where damage is maximized. This is handled by the bestDamage > calc branch.
There is quite likely a flaw in this reasoning, and it would be preferable to understand the maths behind the problem. As I haven't practised mathematics for a while, I'll just guess that this requires deriving a function.
long bestDamage = 0;
long numShields = 0;
long numWarriors = 0;
for( int k = 1;; k++ ){
    // Should move declaration outside of loop
    long calc = m / ( k * g ); // k = number of shields
    if( bestDamage < calc ) {
        bestDamage = calc;
    }
    if( bestDamage > calc ) {
        numShields = k;
        numWarriors = (m - (numShields*g))/w;
        break;
    }
}
System.out.println( "numShields:" + numShields );
System.out.println( "numWarriors:" + numWarriors );
System.out.println( bestDamage );
Since I solved this last night, I thought I'd post my C++ solution. The algorithm starts with an initial guess, located at the global maximum of the continuous case. Then it searches a little to the left/right of the initial guess, terminating early when the continuous case dips below an already established maximum. Interestingly, the 5 example answers posted by FB contained 3 wrong answers:
Case #1
ours: 21964379805 dmg: 723650970382348706550
theirs: 21964393379 dmg: 723650970382072360271 Wrong
Case #2
ours: 1652611083 dmg: 6790901372732348715
theirs: 1652611083 dmg: 6790901372732348715
Case #3
ours: 12472139015 dmg: 60666158566094902765
theirs: 12472102915 dmg: 60666158565585381950 Wrong
Case #4
ours: 6386438607 dmg: 10998633262062635721
theirs: 6386403897 dmg: 10998633261737360511 Wrong
Case #5
ours: 1991050385 dmg: 15857126540443542515
theirs: 1991050385 dmg: 15857126540443542515
Finally the code (it uses libgmpxx for large numbers). I doubt the code is optimal, but it does complete in 0.280ms on my personal computer for the example input given by FB....
#include <iostream>
#include <gmpxx.h>

using namespace std;

typedef mpz_class Integer;
typedef mpf_class Real;

static Integer getDamage( Integer g, Integer G, Integer W, Integer M)
{
    Integer w = (M - g * G) / W;
    return g * w;
}

static Integer optimize( Integer G, Integer W, Integer M)
{
    Integer initialNg = M / ( 2 * G);
    Integer bestNg = initialNg;
    Integer bestDamage = getDamage ( initialNg, G, W, M);

    // search left
    for( Integer gg = initialNg - 1 ; ; gg -- ) {
        Real bestTheoreticalDamage = gg * (M - gg * G) / (Real(W));
        if( bestTheoreticalDamage < bestDamage) break;
        Integer dd = getDamage ( gg, G, W, M);
        if( dd >= bestDamage) {
            bestDamage = dd;
            bestNg = gg;
        }
    }

    // search right
    for( Integer gg = initialNg + 1 ; ; gg ++ ) {
        Real bestTheoreticalDamage = gg * (M - gg * G) / (Real(W));
        if( bestTheoreticalDamage < bestDamage) break;
        Integer dd = getDamage ( gg, G, W, M);
        if( dd > bestDamage) {
            bestDamage = dd;
            bestNg = gg;
        }
    }
    return bestNg;
}

int main( int, char **)
{
    Integer N;
    cin >> N;
    for( int i = 0 ; i < N ; i ++ ) {
        cout << "Case #" << i << "\n";
        Integer G, W, M, FB;
        cin >> G >> W >> M >> FB;
        Integer g = optimize( G, W, M);
        Integer ourDamage = getDamage( g, G, W, M);
        Integer fbDamage = getDamage( FB, G, W, M);
        cout << " ours: " << g << " dmg: " << ourDamage << "\n"
             << " theirs: " << FB << " dmg: " << fbDamage << " "
             << (ourDamage > fbDamage ? "Wrong" : "") << "\n";
    }
}
