Verifying properties using Solver - algorithm

I have a system of equations of the form:
x1 * x2 * ... * xn = a, where each * can be either + or -.
I am building other equations of the same form, and I want to verify
whether they are satisfied by the first system.
My question is: Is there a solver that can affirm whether the given equation is satisfied or not?
Many thanks,
Cheers

This is a variation of the partition problem with a bias (you need one subset to end up larger than the other by a, instead of the two being equal). It can be addressed by adding a to the set and then solving the "regular" partition problem.
This problem is NP-Complete, but can be solved in pseudo-polynomial time using dynamic programming:
D(x, i) = false                          if x < 0
D(0, i) = true
D(x, 0) = false                          if x != 0
D(x, i) = D(x, i-1) OR D(x - arr[i], i-1)
And you are looking for a subset with sum (x1 + x2 + ... + xn + a) / 2.
The idea is to get 2 sets, one with a (let it be A) and one without it (let it be B).
Give all the elements in A (except a) a - sign, and all the elements in B a + sign.
Since sum(A) = sum(B), you get
sum(B)-(sum(A)-a) = sum(B) - sum(A) + a = 0 + a = a
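For concreteness, here is a minimal Python sketch of the pseudo-polynomial DP above. It assumes the elements are nonnegative integers and that a sign may be placed on every element (including the first); signed_sum_possible is a made-up name.

def signed_sum_possible(arr, a):
    # Decide whether +/- signs can be assigned to the elements of arr
    # so that the signed sum equals a, via subset sum on (sum(arr) + a) / 2.
    total = sum(arr)
    if total + a < 0 or (total + a) % 2 != 0:
        return False
    target = (total + a) // 2
    # D[x] is true iff some subset of the elements seen so far sums to x
    D = [False] * (target + 1)
    D[0] = True
    for v in arr:
        for x in range(target, v - 1, -1):
            D[x] = D[x] or D[x - v]
    return D[target]

# e.g. signed_sum_possible([1, 2, 3], 0) is True because 1 + 2 - 3 = 0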

Related

How can I descale x by n/d, when x*n overflows?

My problem is limited to unsigned integers of 256 bits.
I have a value x, and I need to descale it by the ratio n / d, where n < d.
The simple solution is of course x * n / d, but the problem is that x * n may overflow.
I am looking for any arithmetic trick which may help in reaching a result as accurate as possible.
Dividing each of n and d by gcd(n, d) before calculating x * n / d does not guarantee success.
Is there any process (iterative or otherwise) which I can use in order to solve this problem?
Note that I am willing to settle on an inaccurate solution, but I'd need to be able to estimate the error.
NOTE: integer division is used here instead of normal division.
Let us suppose
x = ad + b
n = cd + e
Then find a,b,c,e as follows:
a = x/d
b = x%d
c = n/d
e = n%d
Then,
nx/d = acd + ae + bc + be/d
CALCULATING be/d
1. Represent e in binary form
2. Find b/d, 2b/d, 4b/d, 8b/d, ..., 2^k b/d (one term for each bit of e) and their remainders
3. Find be/d by adding the quotients for the set bits of e, plus (the sum of the corresponding remainders)/d
Example:
e = 101 in binary = 4+1
be/d = (b/d + 4b/d) + (b%d + 4b%d)/d
FINDING b/d, 2b/d, 4b/d, ..., 2^k b/d
quotient(2*ib/d) = 2*quotient(ib/d) + (2*remainder(ib/d))/d
remainder(2*ib/d) = (2*remainder(ib/d))%d
This executes in O(number of bits) steps.
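A minimal Python sketch of this scheme (the function names are made up for illustration; Python integers do not overflow, so the point here is only that no intermediate value ever exceeds roughly 2*d or the final result, which is what matters for a fixed-width implementation):

def be_div_d(b, e, d):
    # floor(b * e / d) using only doubling and addition; assumes 0 <= b, e < d
    quot, rem = 0, 0          # running quotient/remainder of b * (bits of e so far) / d
    q_i, r_i = divmod(b, d)   # quotient/remainder of (2^i * b) / d, starting at i = 0
    while e:
        if e & 1:
            rem += r_i
            quot += q_i + rem // d
            rem %= d
        r_i *= 2                      # double: 2^(i+1) * b = 2 * (2^i * b)
        q_i = 2 * q_i + r_i // d
        r_i %= d
        e >>= 1
    return quot

def mul_div(x, n, d):
    # floor(x * n / d) without ever forming the product x * n
    a, b = divmod(x, d)
    c, e = divmod(n, d)
    return a*c*d + a*e + b*c + be_div_d(b, e, d)

# sanity check against exact arithmetic
assert mul_div(17, 5, 7) == 17 * 5 // 7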

Algorithm to evaluate best weights for weighted average

I have a data set of the form:
[9.1 5.6 7.4] => 8.5, [4.1 4.4 5.2] => 4.9, ... , x => y(x)
So x is a real vector of three elements and y is a scalar function.
I'm assuming a weighted average model of this data:
y(x) = (a * x[0] + b * x[1] + c * x[2]) / (a+b+c) + E(x)
where E is an unknown random error term.
I need an algorithm to find a, b, c that minimize the total sum of squared errors:
error = sum over all x of { E(x)^2 }
for a given data set.
Assume that the weights are normalized to sum to 1 (which happily is without loss of generality); then we can re-cast the problem with c = 1 - a - b, so we are actually solving for a and b.
With this we can write
error(a,b) = sum over all x { a x[0] + b x[1] + (1 - a - b) x[2] - y(x) }^2
Now it's just a question of taking the partial derivatives d_error/da and d_error/db and setting them to zero to find the minimum.
With some fiddling, you get a system of two equations in a and b.
C(X[0],X[0],X[2]) a + C(X[0],X[1],X[2]) b = C(X[0],Y,X[2])
C(X[1],X[0],X[2]) a + C(X[1],X[1],X[2]) b = C(X[1],Y,X[2])
The meaning of X[i] is the vector of all i'th components from the dataset x values.
The meaning of Y is the vector of all y(x) values.
The coefficient function C has the following meaning:
C(p, q, r) = sum over i { p[i] ( q[i] - r[i] ) }
I'll omit how to solve the 2x2 system unless this is a problem.
If we plug in the two-element data set you gave, we should get an exact fit, because with two data points and two unknowns the error can always be driven to zero. So for example the first equation's coefficients are:
C(X[0],X[0],X[2]) = 9.1(9.1 - 7.4) + 4.1(4.1 - 5.2) = 10.96
C(X[0],X[1],X[2]) = -19.66
C(X[0],Y,X[2]) = 8.78
Similarly, the second equation's coefficients are 4.68, -13.6, and 4.84.
Solving the 2x2 system produces: a = 0.42515, b = -0.20958. Therefore c = 0.78443.
Note that in this problem a negative coefficient results. There is nothing to guarantee the weights will be positive, though with "real" data sets they may well turn out to be.
Indeed if you compute weighted averages with these coefficients, they are 8.5 and 4.9.
For fun I also tried this data set:
X[0] X[1] X[2] Y
0.018056028 9.70442075 9.368093544 6.360312244
8.138752835 5.181373099 3.824747424 5.423581239
6.296398214 4.74405298 9.837741509 7.714662742
5.177385358 1.241610571 5.028388255 4.491743107
4.251033792 8.261317658 7.415111851 6.430957844
4.720645386 1.0721718 2.187147908 2.815078796
1.941872069 1.108191586 6.24591771 3.994268819
4.220448549 9.931055481 4.435085917 5.233711923
9.398867623 2.799376317 7.982096264 7.612485261
4.971020963 1.578519218 0.462459906 2.248086465
I generated the Y values with 1/3 x[0] + 1/6 x[1] + 1/2 x[2] + E, where E is a random number in [-0.1 .. +0.1]. If the algorithm is working correctly, we'd expect to recover roughly a = 1/3 and b = 1/6 from this data. Indeed we get a = 0.3472 and b = 0.1845.
OP has now said that his actual data are larger than 3-vectors. This method generalizes without much trouble. If the vectors are of length n, then you get an (n-1) x (n-1) system to solve.
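As a cross-check, here is a minimal numpy sketch that minimizes the same squared error directly with a least-squares solver instead of forming the 2x2 system by hand (the variable names are just for illustration):

import numpy as np

# the two data points from the question
X = np.array([[9.1, 5.6, 7.4],
              [4.1, 4.4, 5.2]])
Y = np.array([8.5, 4.9])

# with c = 1 - a - b the model becomes a*(x0 - x2) + b*(x1 - x2) = y - x2,
# an ordinary linear least-squares problem in (a, b)
M = np.column_stack([X[:, 0] - X[:, 2], X[:, 1] - X[:, 2]])
rhs = Y - X[:, 2]
(a, b), *_ = np.linalg.lstsq(M, rhs, rcond=None)
c = 1 - a - b
print(a, b, c)   # roughly 0.42515, -0.20958, 0.78443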

Mapping function for two integers

SO,
The problem
I have two integers which, in the first case, are positive, and in the second case are just any integers. I need to create a mapping function F from them to some other integer value, which will be:
The result should be an integer value; for the first case (x > 0, y > 0), a positive integer value
Symmetric. That means F(x, y) = F(y, x)
Unique. That means F(x0, y0) = F(x1, y1) <=> (x0 = x1 ^ y0 = y1) V (y0 = x1 ^ x0 = y1)
My approach
At first glance, for positive integers we could use an expression like F(x, y) = x^2 + y^2, but that will fail - for example, 89^2 + 23^2 = 13^2 + 91^2. As for the second (common) case - that's even more complicated.
Use-case
That may be useful when dealing with things which are supposed to be order-independent and need to be unique. For example, if we want to find the Cartesian product of many arrays and we want the result to be unique independent of order, i.e. <x,z,y> is equal to <x,y,z>. It may be done with:
function decartProductPair($one, $two, $unique=false)
{
    $result = [];
    for($i=0; $i<count($one); $i++)
    {
        for($j=0; $j<count($two); $j++)
        {
            if($unique)
            {
                if($i!=$j)
                {
                    $result[$i*$i+$j*$j]=array_merge((array)$one[$i],(array)$two[$j]);
                    // ^
                    // |
                    // +----//this is the place where F(i,j) is needed
                }
            }
            else
            {
                $result[]=array_merge((array)$one[$i], (array)$two[$j]);
            }
        }
    }
    return array_values($result);
}
Another use-case is to properly group sender and receiver in some SQL table, so that different sender/receiver pairs are distinguished while the grouping stays symmetric in sender and receiver. Something like:
SELECT
    COUNT(1) AS message_count,
    sender,
    receiver
FROM
    test
GROUP BY
    -- this is the place where F(sender, receiver) is needed:
    sender*sender + receiver*receiver
(By posting these samples I wanted to show that the issue is certainly related to programming.)
The question
As mentioned, the question is: what can be used as F? I want F to be as simple as possible. Keep in mind the two cases:
Integer x>0, y>0. F(x,y) > 0
Any integer x, y and so any integer F(x,y) as a result
Maybe F isn't just an expression but some algorithm to find the desired result for any x, y (so I'm tagging with algorithm too). However, an expression is better because it's more likely that it can be used in SQL or PHP or whatever. Feel free to edit the tagging because I'm not sure if two tags here are enough.
Simplest solution: f(x, y) = x^5 + y^5
No positive integer is known which can be written as the sum of two fifth powers in more than one way.
As of now, this is an unsolved math problem.
You need a MAX_INTEGER constant, and the result will need to hold MAX_INTEGER**2 (say: be a long, if both are int's). In that case, one such function is:
f(x,y) = min(x,y)*MAX_INTEGER + max(x,y)
But I propose a different solution: use a hash function (say md5) of the string resulting from the concatenation of str(min(x,y)), a separator (say ".") and str(max(x,y)). That is:
f(x,y) = md5(str(min(x,y)) + "." + str(max(x,y)))
It is not unique, but collisions are very rare, and probably OK for most use cases. If you are still worried about collisions, save the actual {x, y} along with f(x, y), and check whether a collision has happened.
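A minimal Python sketch of both suggestions (MAX_INTEGER and the function names are assumptions made for illustration):

import hashlib

MAX_INTEGER = 2**31   # assumed upper bound on the inputs

def f_exact(x, y):
    # unique and symmetric as long as 0 <= x, y < MAX_INTEGER
    # (the result needs a type wide enough to hold MAX_INTEGER**2)
    return min(x, y) * MAX_INTEGER + max(x, y)

def f_hash(x, y):
    # symmetric, but only probabilistically unique (md5 collisions are possible)
    key = str(min(x, y)) + "." + str(max(x, y))
    return hashlib.md5(key.encode()).hexdigest()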
Sort input numbers and interleave their bits:
x = 5
y = 3
Step 1. Sorting: 3, 5
Step 2. Mixing bits: 11, 101 -> 1_1_, 1_0_1 -> 11011 = 27
So, F(3, 5) = 27
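A small Python sketch of this bit interleaving, matching the worked example above (interleave is a made-up name; it assumes nonnegative integers):

def interleave(x, y):
    # sort the pair, then put the larger value's bits at the even positions
    # and the smaller value's bits at the odd positions
    lo, hi = sorted((x, y))
    result, pos = 0, 0
    while lo or hi:
        result |= (hi & 1) << pos
        result |= (lo & 1) << (pos + 1)
        lo >>= 1
        hi >>= 1
        pos += 2
    return result

# interleave(3, 5) == interleave(5, 3) == 27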
A compact representation is x*(x+3)/2 + y*(x+1) + (y*(y-1))/2, which comes from an arrangement like this:
      x ->
y    0  1  3  6 10 15
|    2  4  7 11 16
v    5  8 12 17
     9 13 18
    14 19
    20
According to [Stackoverflow:mapping-two-integers-to-one-in-a-unique-and-deterministic-way][1], if we symmetrize the formula we would have the following:
(x + y) * (x + y + 1) / 2 + min(x, y)
This should work: since (x + y) * (x + y + 1) / 2 + x is unique for ordered pairs (it is the Cantor pairing function), applying it to the canonical ordering (max(x, y), min(x, y)) makes the symmetric formula above unique for unordered pairs as well.
[1]: Mapping two integers to one, in a unique and deterministic way
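A short Python sketch of this symmetric pairing (the function name is just for illustration; it assumes nonnegative integers, matching the first case in the question):

def sym_pair(x, y):
    # Cantor pairing applied to (max, min): symmetric and collision-free
    # for nonnegative integers
    return (x + y) * (x + y + 1) // 2 + min(x, y)

assert sym_pair(3, 5) == sym_pair(5, 3)
assert sym_pair(3, 5) != sym_pair(2, 6)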

shoot projectile (straight trajectory) at moving target in 3 dimensions

I already googled for the problem but only found either 2D solutions or formulas that didn't work for me (I found this formula that looks nice: http://www.ogre3d.org/forums/viewtopic.php?f=10&t=55796 but it seems not to be correct).
I am given:
Vec3 cannonPos;
Vec3 targetPos;
Vec3 targetVelocityVec;
float bulletSpeed;
What I'm looking for is the time t such that
targetPos + t * targetVelocityVec
is the intersection point at which to aim the cannon and shoot.
I'm looking for a simple, inexpensive formula for t (by simple I just mean not making many unnecessary vector-space transformations and the like).
Thanks!
The real problem is finding out where in space the bullet can intersect the target's path. The bullet speed is constant, so in a certain amount of time it will travel the same distance regardless of the direction in which we fire it. This means that its position after time t will always lie on a sphere centered at the cannon.
This sphere can be expressed mathematically as:
(x-x_b0)^2 + (y-y_b0)^2 + (z-z_b0)^2 = (bulletSpeed * t)^2 (eq 1)
x_b0, y_b0 and z_b0 denote the position of the cannon. You can find the time t by solving this equation for t using the equation provided in your question:
targetPos+t*targetVelocityVec (eq 2)
(eq 2) is a vector equation and can be decomposed into three separate equations:
x = x_t0 + t * v_x
y = y_t0 + t * v_y
z = z_t0 + t * v_z
These three equations can be inserted into (eq 1):
(x_t0 + t * v_x - x_b0)^2 + (y_t0 + t * v_y - y_b0)^2 + (z_t0 + t * v_z - z_b0)^2 = (bulletSpeed * t)^2
This equation contains only known variables and can be solved for t. By assigning the constant part of the quadratic subexpressions to constants we can simplify the calculation:
c_1 = x_t0 - x_b0
c_2 = y_t0 - y_b0
c_3 = z_t0 - z_b0
(v_b = bulletSpeed)
(t * v_x + c_1)^2 + (t * v_y + c_2)^2 + (t * v_z + c_3)^2 = (v_b * t)^2
Rearrange it as a standard quadratic equation:
(v_x^2+v_y^2+v_z^2-v_b^2)t^2 + 2*(v_x*c_1+v_y*c_2+v_z*c_3)t + (c_1^2+c_2^2+c_3^2) = 0
This is easily solvable using the standard formula. It can result in zero, one or two solutions. Zero solutions (not counting complex solutions) means that there's no possible way for the bullet to reach the target. One solution will probably happen very rarely, when the target trajectory intersects with the very edge of the sphere. Two solutions will be the most common scenario. A negative solution means that you can't hit the target, since you would need to fire the bullet into the past. These are all conditions you'll have to check for.
When you've solved the equation you can find the point of collision by plugging t back into (eq 2). In Python-like pseudocode:
from math import sqrt

# setup all needed variables (assuming the Vec3 inputs unpack into components)
x_b0, y_b0, z_b0 = cannonPos
x_t0, y_t0, z_t0 = targetPos
v_x, v_y, v_z = targetVelocityVec
v_b = bulletSpeed

c_1 = x_t0 - x_b0
c_2 = y_t0 - y_b0
c_3 = z_t0 - z_b0

a = v_x**2 + v_y**2 + v_z**2 - v_b**2
b = 2*(v_x*c_1 + v_y*c_2 + v_z*c_3)
c = c_1**2 + c_2**2 + c_3**2

if b**2 < 4*a*c:
    # no real solutions: the bullet can never reach the target
    raise ValueError("no real solutions")

p = -b/(2*a)
q = sqrt(b**2 - 4*a*c)/(2*a)
t1 = p - q
t2 = p + q

if t1 < 0 and t2 < 0:
    # no positive solutions, all possible trajectories are in the past
    raise ValueError("no positive solutions")

# we want to hit it at the earliest possible (non-negative) time
t = min(s for s in (t1, t2) if s >= 0)

# calculate point of collision
x = x_t0 + t*v_x
y = y_t0 + t*v_y
z = z_t0 + t*v_z

Possible ways to calculate X = A - inv(B) * Y * inv(B) and X = Y + A' * inv(B) * A

I have two problems. I have to calculate two equations:
X = A - inv(B) * Y * inv(B)
and
X = Y + A' * inv(B) * A
where, A, B and Y are known p*p matrices (p can be small or large, depends the situation). Matrices are quite dense, without any structure (except B being non-singular of course).
Is it possible to solve for X in those equations without inverting the matrix B? I have to evaluate these equations n times, with n in the hundreds or thousands, and all the matrices change over time.
Thank you very much.
If you can express your updates to your matrix B in the following terms:
Bnew = B + u*s*v
then you can express an update to inv(B) explicitly using the Sherman-Morrison-Woodbury formula:
inv(B + u*s*v) = inv(B) - inv(B)*u*inv(inv(s) + v*inv(B)*u)*v*inv(B)
If u and v are vectors (column and row, respectively) and s is a scalar, then this expression simplifies to:
inv(B + u*s*v) = inv(B) - s*inv(B)*u*v*inv(B)/(1 + s*v*inv(B)*u)
You would only have to calculate inv(B) once and then update it when it changes with no additional inversions.
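For the scalar (rank-1) case, a minimal numpy sketch of this update might look like the following (sherman_morrison_update is a made-up name; u and v are 1-D arrays and s is a scalar):

import numpy as np

def sherman_morrison_update(B_inv, u, s, v):
    # returns inv(B + s * outer(u, v)) given B_inv = inv(B)
    Bu = B_inv @ u              # inv(B) u
    vB = v @ B_inv              # v inv(B)
    denom = 1.0 + s * (v @ Bu)  # must be nonzero for the update to exist
    return B_inv - s * np.outer(Bu, vB) / denom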
It may be preferable not to calculate the full inverse at all, and instead just perform "matrix divisions" (linear solves) against Y and (Ynew - Y), or A and (Anew - A), depending on the size of n with respect to p in your problem.
Memoize inv(B), i.e. only invert B when it changes, and keep the inverse around.
If changes to B are small, possibly you could use a delta-approximation.
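Along those lines, here is a minimal numpy sketch that evaluates both expressions with linear solves instead of an explicit inverse (the function names are just for illustration; A' is taken to mean the transpose of A):

import numpy as np

def eval_x1(A, B, Y):
    # X = A - inv(B) * Y * inv(B), via two solves
    Z = np.linalg.solve(B, Y)                 # Z = inv(B) Y
    return A - np.linalg.solve(B.T, Z.T).T    # Z inv(B) = solve(B.T, Z.T).T

def eval_x2(A, B, Y):
    # X = Y + A' * inv(B) * A, via one solve
    return Y + A.T @ np.linalg.solve(B, A)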

Resources