I'm given a string 2*x + 5 - (3*x-2)=x + 5 and I need to solve for x. My thought process is that I'd convert it to an expression tree, something like,
              =
            /   \
          -       +
        /   \    / \
       +     -  x   5
      / \   / \
     *   5 *   2
    / \   / \
   2   x 3   x
But how do I actually reduce the tree from here? Any other ideas?
You have to reduce it using axioms from algebra, for example:
a * (b + c) -> (a * b) + (a * c)
This is done by checking the types of each node in the parse tree. Once the expression is fully expanded into terms, you can then check that they are actually linear, etc.
The values in the tree will be either variables or numbers. It isn't very neat to represent these as classes inheriting from some AbstractTreeNode class, however, because C++ doesn't have multiple dispatch. So it is better to do it the 'C' way:
enum NodeType {
    Number,
    Variable,
    Operator    // to represent + and *
};

struct Node {
    NodeType type;
    // union { char* name; int value; Node* children[2]; }
    // pseudocode, but you need something like this for the
    // variable name ("x"), the numerical value, and the children
};
Now you can query the types of a node and its children using switch/case.
As I said earlier, idiomatic C++ code would use virtual functions, but virtual functions only provide single dispatch, not the multiple dispatch needed to solve this cleanly. (You would need to store the type anyway.)
Then you group terms, etc and solve the equation.
You can have rules to normalise the tree, for example
constant + variable -> variable + constant
Would put x always on the left of a term. Then x * 2 + x * 4 could be simplified more easily
var * constant + var * constant -> (sum of constants) * var
In your example...
First, simplify the '=' by moving the terms (as per the rule above)
The right hand side will be -1 * (x + 5), becoming -1 * x + -1 * 5. The left hand side will be harder - consider replacing a - b with a + -1 * b.
Eventually,
2x + 5 + -3x + 2 + -x + -5 = 0
Then you can group terms whichever way you want (by scanning along, etc.).
(2 + -3 + -1) x + 5 + 2 + -5 = 0
Sum them up and when you have mx + c, solve it.
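For illustration, here is a minimal Python sketch of that grouping step. It assumes the string has already been parsed into nested tuples (the tuple layout and function names are my own, not part of the answer): each subtree is reduced to a pair (m, c) meaning m*x + c, and the equation is then solved as m*x + c = 0.

def linear(node):
    # reduce a subtree to (m, c) such that it equals m*x + c
    if node == 'x':
        return (1, 0)
    if isinstance(node, (int, float)):
        return (0, node)
    op, left, right = node
    ml, cl = linear(left)
    mr, cr = linear(right)
    if op == '+':
        return (ml + mr, cl + cr)
    if op == '-':
        return (ml - mr, cl - cr)
    if op == '*':
        if ml == 0:                      # constant * linear
            return (cl * mr, cl * cr)
        if mr == 0:                      # linear * constant
            return (ml * cr, cl * cr)
        raise ValueError("not linear")
    raise ValueError("unknown operator " + op)

def solve(lhs, rhs):
    m, c = linear(('-', lhs, rhs))       # move everything to one side
    return -c / m                        # solve m*x + c = 0

# 2*x + 5 - (3*x - 2) = x + 5
lhs = ('-', ('+', ('*', 2, 'x'), 5), ('-', ('*', 3, 'x'), 2))
rhs = ('+', 'x', 5)
print(solve(lhs, rhs))                   # 1.0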
Assuming you have a first-order equation, check all the leaves on each side. On each side, keep two bins: one to add up all the leaves that are multiples of x and one for all the leaves that are constants. Either add to a bin or multiply each bin as you step up the tree along each branch from the leaves. You will end up with something that is conceptually like
a*x + b = c*x + d
At that point, you can just solve
x = (d - b) / (a - c)
Assuming the equation can reduce to f(x) = 0, and f(x) = a * x + b.
You can transform all the leaves in the expression tree to the form f(x), for example: 2 -> 0 * x + 2, 3 * x -> 3 * x + 0. Then you can do the arithmetic operations on f(x) within the expression tree, and finally solve the equation f(x) = 0.
If the function is much more complicated than a polynomial, you can do a binary search on x, using the expression tree to evaluate the left and right sides of the equation.
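A minimal Python sketch of that fallback (the bracket [-100, 100], the iteration count, and the use of a plain lambda in place of the tree evaluation are assumptions for illustration; it requires the two endpoints to bracket a sign change):

def bisect_solve(f, lo, hi, iters=60):
    # assumes f(lo) and f(hi) have opposite signs
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f(x) = lhs(x) - rhs(x); here evaluated directly instead of via the tree
print(bisect_solve(lambda x: (2*x + 5 - (3*x - 2)) - (x + 5), -100, 100))   # ~1.0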
My problem is limited to unsigned integers of 256 bits.
I have a value x, and I need to descale it by the ratio n / d, where n < d.
The simple solution is of course x * n / d, but the problem is that x * n may overflow.
I am looking for any arithmetic trick which may help in reaching a result as accurate as possible.
Dividing each of n and d by gcd(n, d) before calculating x * n / d does not guarantee success.
Is there any process (iterative or otherwise) which I can use in order to solve this problem?
Note that I am willing to settle on an inaccurate solution, but I'd need to be able to estimate the error.
NOTE: Using integer division instead of normal division
Let us suppose
x = ad + b
n = cd + e
Then find a,b,c,e as follows:
a = x/d
b = x%d
c = n/d
e = n%d
Then,
nx/d = acd + ae + bc + be/d
CALCULATING be/d
1. Represent e in binary form
2. Find b/d, 2b/d, 4b/d, 8b/d, ..., (2^k)*b/d (doubling up to the highest set bit of e) and their remainders
3. Find be/d = b*binary terms + their remainders
Example:
e = 101 in binary = 4+1
be/d = (b/d + 4b/d) + (b%d + 4b%d)/d
FINDING b/d, 2b/d, 4b/d, ..., (2^k)*b/d
quotient(2*i*b/d) = 2*quotient(i*b/d) + (2*remainder(i*b/d))/d
remainder(2*i*b/d) = (2*remainder(i*b/d))%d
Executes in O(number of bits)
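For illustration, a small Python sketch of this decomposition (the function name muldiv is mine; in Python, big integers make the overflow concern moot, so this only demonstrates the bookkeeping: every intermediate value stays comparable in magnitude to x, n and d):

def muldiv(x, n, d):
    a, b = divmod(x, d)        # x = a*d + b,  0 <= b < d
    c, e = divmod(n, d)        # n = c*d + e,  0 <= e < d
    # n*x/d = a*c*d + a*e + b*c + b*e/d, and only b*e/d needs care
    q = r = 0                  # quotient/remainder of (processed bits of e)*b / d
    qb, rb = divmod(b, d)      # quotient/remainder of (2^i * b) / d, starting at i = 0
    for i in range(e.bit_length()):
        if (e >> i) & 1:
            q, r = q + qb, r + rb
            q, r = q + r // d, r % d
        qb, rb = 2*qb + (2*rb) // d, (2*rb) % d   # step from 2^i*b to 2^(i+1)*b
    return a*c*d + a*e + b*c + q

# quick check against the naive formula
assert muldiv(123456789012345678901234567890, 700, 997) == 123456789012345678901234567890 * 700 // 997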
I have been working on a Hackerearth Problem. Here is the problem statement:
We have three variables a, b and c. We need to convert a to b, and the following operations are allowed:
1. Can decrement by 1.
2. Can decrement by 2.
3. Can multiply by c.
Find the minimum number of steps required to convert a to b.
Here is the algorithm I came up with:
Initialize count to 0.
Loop until a === b:
1. Perform (x = a * c), (y = a - 1) and (z = a - 2).
2. Among x, y and z, choose the one whose absolute difference with b is the least.
3. Update the value of a to the value chosen among x, y and z.
4. Increment the count by 1.
I can pass the basic test case, but all my advanced cases are failing. I guess my logic is correct, but due to its complexity it seems to fail.
Can someone suggest a more optimized solution?
Edit 1
Sample Code
function findMinStep(arr) {
    let a = parseInt(arr[0]);
    let b = parseInt(arr[1]);
    let c = parseInt(arr[2]);
    let numOfSteps = 0;
    while (a !== b) {
        let multiply = Math.abs(b - (a * c));
        let decrement = Math.abs(b - (a - 1));
        let doubleDecrement = Math.abs(b - (a - 2));
        let abs = Math.min(multiply, decrement, doubleDecrement);
        if (abs === multiply) a = a * c;
        else if (abs === decrement) a -= 1;
        else a -= 2;
        numOfSteps += 1;
    }
    return numOfSteps.toString();
}
Sample Input: a = 3, b = 10, c = 2
Explanation: Multiply 3 with 2 to get 6, subtract 1 from 6 to get 5, multiply 5 with 2 to get 10.
Reason for tagging both Python and JS: Comfortable with both but I am not looking for code, just an optimized algorithm and analytical thinking.
Edit 2:
function findMinStep(arr) {
    let a = parseInt(arr[0]);
    let b = parseInt(arr[1]);
    let c = parseInt(arr[2]);
    let depth = 0;
    let queue = [a, 'flag'];
    if (a === b) return 0;
    if (a > b) {
        let output = Math.floor((a - b) / 2);
        if ((a - b) % 2) return output + 1;
        return output;
    }
    while (true) {
        let current = queue.shift();
        if (current === 'flag') {
            depth += 1;
            queue.push('flag');
            continue;
        }
        let multiple = current * c;
        let decrement = current - 1;
        let doubleDecrement = current - 2;
        if (multiple !== b) queue.push(multiple);
        else return depth + 1;
        if (decrement !== b) queue.push(decrement);
        else return depth + 1;
        if (doubleDecrement !== b) queue.push(doubleDecrement);
        else return depth + 1;
    }
}
Still times out. Any more suggestions?
Link to the question for your reference.
BFS
A greedy approach won't work here.
However, it is already on the right track. Consider the graph G, where each node represents a value and each edge represents one of the operations and connects two values that are related by that operation (e.g. 4 and 3 are connected by "subtract 1"). Using this graph, we can easily perform a BFS to find the shortest path:
def a_to_b(a, b, c):
    visited = set()
    state = {a}
    depth = 0
    while b not in state:
        visited |= state
        state = {v - 1 for v in state if v - 1 not in visited} | \
                {v - 2 for v in state if v - 2 not in visited} | \
                {v * c for v in state if v * c not in visited}
        depth += 1
    return depth
This code systematically tests all possible combinations of operations, level by level, until it reaches b. I.e. it generates all values that can be reached with a single operation from a, then all values that can be reached with two operations, etc., until b is among the generated values.
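For the sample input from the question (a = 3, b = 10, c = 2), a_to_b(3, 10, 2) returns 3, matching the multiply, subtract, multiply sequence given in the explanation.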
In depth analysis
(Assuming c >= 0, but can be generalized)
So far for the standard approach that works with little analysis. This approach has the advantage that it works for any problem of this kind and is easy to implement. However, it isn't very efficient and will reach its limits fairly fast once the numbers grow. So instead I'll show a way to analyze the problem in depth and gain a (far) more performant solution:
In a first step this answer will analyze the problem:
We need operations -->op such that a -->op b and -->op is a sequence of
subtract 1
subtract 2
multiply by c
First of all, what happens if we first subtract and afterwards multiply?
(a - x) * c = a * c - x * c
Next what happens, if we first multiply and afterwards subtract?
a * c - x'
Positional systems
Well, there's no simplifying transformation for this. But we've got the basic pieces to analyze more complicated chains of operations. Let's see what happens when we chain subtractions and multiplications alternatingly:
(((a - x) * c - x') * c - x'') * c - x'''=
((a * c - x * c - x') * c - x'') * c - x''' =
(a * c^2 - x * c^2 - x' * c - x'') * c - x''' =
a * c^3 - x * c^3 - x' * c^2 - x'' * c - x'''
Looks familiar? We're one step away from defining the difference between a and b in a positional system base c:
a * c^3 - x * c^3 - x' * c^2 - x'' * c - x''' = b
x * c^3 + x' * c^2 + x'' * c + x''' = a * c^3 - b
Unfortunately the above is still not quite what we need. All we can tell is that the LHS of the equation will always be >= 0. In general, we first need to derive the proper exponent n (3 in the above example), i.e. the smallest nonnegative n such that a * c^n - b >= 0. Solving the equation for the individual coefficients (x, x', ...), where all coefficients are nonnegative, is then a fairly trivial task.
We can show two things from the above:
if a < b and a < 0, there is no solution
solving as above and transforming all coefficients into the appropriate operations leads to the optimal solution
Proof of optimality
The second statement above can be proven by induction over n.
n = 0: In this case a - b < c, so there is only one -->op
n + 1: let d = a * c^(n + 1) - b. Let d' = d - m * c^(n + 1), where m is chosen, such that d' is minimal and nonnegative. Per induction-hypothesis d' can be generated optimally via a positional system. Leaving a difference of exactly m * c^n. This difference can not be covered more efficiently via lower-order terms than by m / 2 subtractions.
Algorithm (The TLDR-part)
Consider a * c^n - b as a number in base c and try to find its digits. The final number should have n + 1 digits, where each digit represents a certain number of subtractions. Multiple subtractions are represented by a single digit by adding up the subtracted values; e.g. the digit 5 means -2 -2 -1. Working from the most significant to the least significant digit, the algorithm operates as follows:
1. perform the subtractions as specified by the digit
2. if the current digit was the last, terminate
3. multiply by c and repeat from 1. with the next digit
E.g.:
a = 3, b = 10, c = 2
choose n = 2
a * c^n - b = 3 * 4 - 10 = 2
2 in binary is 010
steps performed: 3 - 0 = 3, 3 * 2 = 6, 6 - 1 = 5, 5 * 2 = 10
or
a = 2, b = 25, c = 6
choose n = 2
a * c^n - b = 47
47 base 6 is 115
steps performed: 2 - 1 = 1, 1 * 6 = 6, 6 - 1 = 5, 5 * 6 = 30, 30 - 2 - 2 - 1 = 25
in python:
def a_to_b(a, b, c):
    # calculate n: the smallest exponent with a * c^n - b >= 0
    n = 0
    pow_c = 1
    while a * pow_c - b < 0:
        n += 1
        pow_c *= c
    # calculate coefficients (the digits of a * c^n - b in base c)
    d = a * pow_c - b
    coeff = []
    for i in range(0, n + 1):
        coeff.append(d // pow_c)  # calculate the ith digit and append
        d %= pow_c                # remainder after eliminating the ith term
        pow_c //= c
    # sum up subtractions and multiplications as defined by the coefficients
    return n + sum(digit // 2 + digit % 2 for digit in coeff)
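A quick sanity check of this against the two worked examples above:

print(a_to_b(3, 10, 2))   # 3
print(a_to_b(2, 25, 6))   # 7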
The problem
I have two integers, which are, in the first case, positive, and in the second case, arbitrary integers. I need to create a mapping function F from them to some other integer value, which should be:
Result should be an integer value; for the first case (x > 0, y > 0), a positive integer value
Symmetric. That means F(x, y) = F(y, x)
Unique. That means F(x0, y0) = F(x1, y1) <=> (x0 = x1 ^ y0 = y1) V (y0 = x1 ^ x0 = y1)
My approach
At first glance, for positive integers we could use an expression like F(x, y) = x^2 + y^2, but that will fail - for example, 89^2 + 23^2 = 13^2 + 91^2. As for the second (general) case, that's even more complicated.
Use-case
That may be useful when dealing with things which are supposed to be order-independent and need to be unique. For example, if we want to find the cartesian product of many arrays and we want the result to be unique independently of order, i.e. <x,z,y> is equal to <x,y,z>. It may be done with:
function decartProductPair($one, $two, $unique = false)
{
    $result = [];
    for ($i = 0; $i < count($one); $i++)
    {
        for ($j = 0; $j < count($two); $j++)
        {
            if ($unique)
            {
                if ($i != $j)
                {
                    $result[$i*$i + $j*$j] = array_merge((array)$one[$i], (array)$two[$j]);
                    //       ^
                    //       |
                    //       +---- this is the place where F(i,j) is needed
                }
            }
            else
            {
                $result[] = array_merge((array)$one[$i], (array)$two[$j]);
            }
        }
    }
    return array_values($result);
}
Another use-case is to properly group sender and receiver in some SQL table, so that different sender/receiver pairs are distinguished while staying symmetric. Something like:
SELECT
    COUNT(1) AS message_count,
    sender,
    receiver
FROM
    test
GROUP BY
    -- this is the place where F(sender, receiver) is needed:
    sender*sender + receiver*receiver
(By posting these samples I wanted to show that the issue is certainly related to programming.)
The question
As mentioned, the question is: what can be used as F? I want F to be as simple as possible. Keep in mind two cases:
Integer x>0, y>0. F(x,y) > 0
Any integer x, y and so any integer F(x,y) as a result
Maybe F isn't just an expression but some algorithm to find the desired result for any x, y (so I'm tagging with algorithm too). However, an expression is better, because it's more likely that it could be used in SQL or PHP or whatever. Feel free to edit the tagging, because I'm not sure whether two tags here are enough.
Most simple solution: f(x,y) = x^5 + y^5
No positive integer is known which can be written as the sum of two fifth powers in more than one way.
As of now, this is an unsolved math problem.
You need a MAX_INTEGER constant, and the result will need to hold MAX_INTEGER**2 (say: be a long, if both are int's). In that case, one such function is:
f(x,y) = min(x,y)*MAX_INTEGER + max(x,y)
But I propose a different solution: use a hash function (say md5) of the string resulting from the concatenation of str(min(x,y)), a separator (say ".") and str(max(x,y)). That is:
f(x,y) = md5(str(min(x,y)) + "." + str(max(x,y)))
It is not unique, but collisions are very rare, and probably OK for most use cases. If still worried about collisions, save the actual {x,y} along with f(x,y), and check whether collisions happened.
Sort input numbers and interleave their bits:
x = 5
y = 3
Step 1. Sorting: 3, 5
Step 2. Mixing bits: 11, 101 -> 1_1_, 1_0_1 -> 11011 = 27
So, F(3, 5) = 27
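A minimal Python sketch of this interleaving (the function name is mine, and it assumes non-negative integers; bits of the larger number go to the even positions and bits of the smaller to the odd positions, which reproduces F(3, 5) = 27):

def interleave(x, y):
    a, b = sorted((x, y))                 # a = min, b = max
    result, i = 0, 0
    while a or b:
        result |= (b & 1) << (2 * i)      # larger number -> even bit positions
        result |= (a & 1) << (2 * i + 1)  # smaller number -> odd bit positions
        a >>= 1
        b >>= 1
        i += 1
    return result

print(interleave(3, 5), interleave(5, 3))   # 27 27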
A compact representation is x*(x+3)/2 + y*(x+1) + (y*(y-1))/2, which comes from an arrangement like this:
      x ->
y    0   1   3   6  10  15
|        2   4   7  11  16
v            5   8  12  17
                 9  13  18
                    14  19
                        20
According to [Stackoverflow:mapping-two-integers-to-one-in-a-unique-and-deterministic-way][1], if we symmetrize the formula we would have the following:
(x + y) * (x + y + 1) / 2 + min(x, y)
This might just work: since
(x + y) * (x + y + 1) / 2 + x
is unique, the symmetrized formula above is also unique (it is the same formula applied to the pair ordered so that the smaller value takes the place of x).
[1]: Mapping two integers to one, in a unique and deterministic way
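A quick brute-force check of the symmetrized formula in Python (non-negative case only; the helper name f and the test range are arbitrary choices for the check):

def f(x, y):
    return (x + y) * (x + y + 1) // 2 + min(x, y)

seen = {}
for x in range(200):
    for y in range(200):
        assert f(x, y) == f(y, x)                               # symmetric
        assert seen.setdefault(f(x, y), {x, y}) == {x, y}       # unique per unordered pair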
I already googled for the problem but only found either 2D solutions or formulas that didn't work for me (I found this formula that looks nice: http://www.ogre3d.org/forums/viewtopic.php?f=10&t=55796 but it seems not to be correct).
I am given:
Vec3 cannonPos;
Vec3 targetPos;
Vec3 targetVelocityVec;
float bulletSpeed;
What I'm looking for is the time t such that
targetPos + t * targetVelocityVec
is the intersection point where to aim the cannon and shoot.
I'm looking for a simple, inexpensive formula for t (by simple I just mean not making many unnecessary vector-space transformations and the like).
thanks!
The real problem is finding out where in space the bullet can intersect the target's path. The bullet speed is constant, so in a certain amount of time it will travel the same distance regardless of the direction in which we fire it. This means that its position after time t will always lie on a sphere. Here's an ugly illustration in 2D:
This sphere can be expressed mathematically as:
(x-x_b0)^2 + (y-y_b0)^2 + (z-z_b0)^2 = (bulletSpeed * t)^2 (eq 1)
x_b0, y_b0 and z_b0 denote the position of the cannon. You can find the time t by solving this equation for t using the equation provided in your question:
targetPos+t*targetVelocityVec (eq 2)
(eq 2) is a vector equation and can be decomposed into three separate equations:
x = x_t0 + t * v_x
y = y_t0 + t * v_y
z = z_t0 + t * v_z
These three equations can be inserted into (eq 1):
(x_t0 + t * v_x - x_b0)^2 + (y_t0 + t * v_y - y_b0)^2 + (z_t0 + t * v_z - z_b0)^2 = (bulletSpeed * t)^2
This equation contains only known variables and can be solved for t. By assigning the constant part of the quadratic subexpressions to constants we can simplify the calculation:
c_1 = x_t0 - x_b0
c_2 = y_t0 - y_b0
c_3 = z_t0 - z_b0
(v_b = bulletSpeed)
(t * v_x + c_1)^2 + (t * v_y + c_2)^2 + (t * v_z + c_3)^2 = (v_b * t)^2
Rearrange it as a standard quadratic equation:
(v_x^2+v_y^2+v_z^2-v_b^2)t^2 + 2*(v_x*c_1+v_y*c_2+v_z*c_3)t + (c_1^2+c_2^2+c_3^2) = 0
This is easily solvable using the standard formula. It can result in zero, one or two solutions. Zero solutions (not counting complex solutions) means that there's no possible way for the bullet to reach the target. One solution will probably happen very rarely, when the target trajectory intersects with the very edge of the sphere. Two solutions will be the most common scenario. A negative solution means that you can't hit the target, since you would need to fire the bullet into the past. These are all conditions you'll have to check for.
When you've solved the equation you can find the point of collision by putting t back into (eq 2). In pseudo code:
# setup all needed variables
c_1 = x_t0 - x_b0
c_2 = y_t0 - y_b0
c_3 = z_t0 - z_b0
v_b = bulletSpeed
# ... and so on

a = v_x^2 + v_y^2 + v_z^2 - v_b^2
b = 2*(v_x*c_1 + v_y*c_2 + v_z*c_3)
c = c_1^2 + c_2^2 + c_3^2

if b^2 < 4*a*c:
    # no real solutions
    raise error

p = -b/(2*a)
q = sqrt(b^2 - 4*a*c)/(2*a)

t1 = p - q
t2 = p + q

if t1 < 0 and t2 < 0:
    # no positive solutions, all possible trajectories are in the past
    raise error

# we want to hit it at the earliest possible non-negative time,
# so discard a negative root if there is one
if t1 < 0:   t = t2
elif t2 < 0: t = t1
else:        t = min(t1, t2)

# calculate point of collision
x = x_t0 + t * v_x
y = y_t0 + t * v_y
z = z_t0 + t * v_z
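For completeness, a direct Python translation of the pseudo code above (a sketch; the function name, tuple-based vectors and return convention are my own choices). It returns the earliest non-negative t, or None if the target cannot be hit:

import math

def intercept_time(cannon_pos, target_pos, target_vel, bullet_speed):
    c1, c2, c3 = (t - s for t, s in zip(target_pos, cannon_pos))
    a = sum(v * v for v in target_vel) - bullet_speed ** 2
    b = 2 * (target_vel[0] * c1 + target_vel[1] * c2 + target_vel[2] * c3)
    c = c1 * c1 + c2 * c2 + c3 * c3
    if a == 0:                                   # bullet exactly as fast as the target
        return -c / b if b < 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                              # no real solutions
    q = math.sqrt(disc)
    candidates = [t for t in ((-b - q) / (2 * a), (-b + q) / (2 * a)) if t >= 0]
    return min(candidates) if candidates else None

# target 10 units away on x, drifting along y at speed 1, bullet speed 2:
t = intercept_time((0, 0, 0), (10, 0, 0), (0, 1, 0), 2.0)
# aim point is targetPos + t * targetVelocityVec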
Consider the problem in which you have a value of N and you need to calculate how many ways you can sum up to N dollars using [1,2,5,10,20,50,100] Dollar bills.
Consider the classic DP solution:
C = [1,2,5,10,20,50,100]
def comb(p):
    if p == 0:
        return 1
    c = 0
    for x in C:
        if x <= p:
            c += comb(p - x)
    return c
It does not take into account that the order of the summed parts shouldn't matter. For example, comb(4) will yield 5 results: [1,1,1,1], [2,1,1], [1,2,1], [1,1,2], [2,2], whereas there are actually only 3 distinct results ([2,1,1], [1,2,1] and [1,1,2] are all the same).
What is the DP idiom for calculating this problem? (non-elegant solutions such as generating all possible solutions and removing duplicates are not welcome)
Not sure about any DP idioms, but you could try using Generating Functions.
What we need to find is the coefficient of x^N in
(1 + x + x^2 + ...)(1 + x^2 + x^4 + ...)(1 + x^5 + x^10 + ...)...(1 + x^100 + x^200 + ...)
(picking x^(k*b) from the factor for bill b means using that bill k times, so the exponent adds up as: number of times 1 appears * 1 + number of times 2 appears * 2 + number of times 5 appears * 5 + ...)
This product is the same as the reciprocal of
(1-x)(1-x^2)(1-x^5)(1-x^10)(1-x^20)(1-x^50)(1-x^100).
You can now factorize each in terms of products of roots of unity, split the reciprocal in terms of Partial Fractions (which is a one time step) and find the coefficient of x^N in each (which will be of the form Polynomial/(x-w)) and add them up.
You could do some DP in calculating the roots of unity.
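If you want that coefficient directly, rather than going through partial fractions, multiplying the truncated series factor by factor is enough; this is exactly the classic unordered coin-change DP. A short Python sketch (function name and argument defaults are mine):

def partitions(n, parts=(1, 2, 5, 10, 20, 50, 100)):
    # coeff[i] = coefficient of x^i in the product of the series processed so far
    coeff = [0] * (n + 1)
    coeff[0] = 1
    for k in parts:
        for i in range(k, n + 1):
            coeff[i] += coeff[i - k]    # multiply by 1/(1 - x^k), truncated at x^n
    return coeff[n]

print(partitions(4))   # 3 -> [1,1,1,1], [1,1,2], [2,2]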
You should not start from the beginning each time, but at most from where you came from at each depth.
That means you have to pass two parameters: the start index and the remaining total.
C = [1, 2, 5, 10, 20, 50, 100]
def comb(p, start=0):
    if p == 0:
        return 1
    c = 0
    for i, x in enumerate(C[start:]):
        if x <= p:
            c += comb(p - x, i + start)
    return c
or equivalent (it might be more readable)
C = [1, 2, 5, 10, 20, 50, 100]
def comb(p, start=0):
    if p == 0:
        return 1
    c = 0
    for i in range(start, len(C)):
        x = C[i]
        if x <= p:
            c += comb(p - x, i)
    return c
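To turn the recursion above into actual dynamic programming, it is enough to memoize on (p, start); one way to do that (my addition, using functools, not part of the original answer):

from functools import lru_cache

C = (1, 2, 5, 10, 20, 50, 100)

@lru_cache(maxsize=None)
def comb(p, start=0):
    if p == 0:
        return 1
    total = 0
    for i in range(start, len(C)):
        if C[i] <= p:
            total += comb(p - C[i], i)
    return total

print(comb(4))   # 3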
Terminology: What you are looking for is the "integer partitions"
into prescribed parts (you should replace "combinations" in the title).
Ignoring the "dynamic programming" part of the question, a routine
for your problem is given in the first section of chapter 16
("Integer partitions", p.339ff) of the fxtbook, online at
http://www.jjj.de/fxt/#fxtbook