Calculate probability that M out of N events will appear - algorithm

I have the probabilities of 110 independent events.
For every count m, I want to calculate the probability that exactly m distinct events will occur.
For example, if we have only three events:
A = 0.45
B = 0.65
C = 0.73
# Probability of no event
P[0] = (1-A)*(1-B)*(1-C)
# Probability of exactly one event
P[1] = A*(1-B)*(1-C)+(1-A)*B*(1-C)+(1-A)*(1-B)*C
# Probability of exactly two events
P[2] = A*B*(1-C)+A*(1-B)*C+(1-A)*B*C
# Probability of exactly three events
P[3] = A*B*C
Is it realistic to compute this for 110 events in under 1 hour?
If yes, how can it be done in any programming language?

Let the probabilities be p_1, p_2, ..., p_n. You're in essence trying to expand the polynomial
(1 - p_1 + p_1 x) (1 - p_2 + p_2 x) ... (1 - p_n + p_n x),
where the probability of getting m events is the coefficient of x^m. Rather than computing all 2^n monomials and summing them, you can simplify after each multiplication, keeping only the coefficient list. Each of the n multiplications updates at most n + 1 coefficients, so the whole computation is O(n^2); for n = 110 that is roughly 12,000 multiply-adds, far below your one-hour budget. In Python:
def f(ps):
    # coefs[m] holds the probability that exactly m of the processed events occur
    coefs = [1]
    for p in ps:
        coefs.append(0)
        # update in place, from the highest coefficient down
        for i in range(len(coefs) - 1, 0, -1):
            coefs[i] = coefs[i] * (1 - p) + coefs[i - 1] * p
        coefs[0] *= 1 - p
    return coefs
Sample execution (note the floating point error).
>>> f([0.45, 0.65, 0.73])
[0.05197500000000001, 0.279575, 0.454925, 0.21352500000000002]

Related

Finding Probability of Multinomial distribution

A bag contains 5 dice, each with six faces, with probabilities $p_1$=$p_2$=$p_3$=$2p_4$=$2p_5$=$3p_6$. What is the probability of selecting two dice with face 4 and three dice with face 1?
Someone has tried R code for this problem (shown in the picture), but I do not understand how the probability is obtained. Kindly explain the answer to this problem.
First, find the p_i values from the given equalities and the following constraint (express every p_i in terms of p_6 to solve for it, then recover the value of each p_i):
p_1 + p_2 + p_3 + p_4 + p_5 + p_6 = 1
To find the probability, we need 2 of the 5 dice to show face 4, which has probability p_4^2, and the other 3 dice to show face 1, which has probability p_1^3.
Now, to understand the last step, you should read the explanation of the dmultinom(x, size, prob) function (from this post):
Generate multinomially distributed random number vectors and compute multinomial probabilities. If x is a K-component vector, dmultinom(x, prob) is the probability
P(X[1]=x[1], …, X[K]=x[K]) = C * prod(j=1, …, K) p[j]^x[j], where C is the ‘multinomial coefficient’ C = N! / (x[1]! * … * x[K]!) and N = sum(j=1, …, K) x[j].
By definition, each component X[j] is binomially distributed as Bin(size, prob[j]) for j = 1, …, K.
Therefore, dmultinom(x, n, p) computes C(5,2) * p_1^3 * p_4^2, as x = (3, 0, 0, 2, 0, 0), n = 5, and p = (p_1, p_2, p_3, p_4, p_5, p_6).
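For completeness, here is a minimal Python sketch of the same computation (hand-rolled rather than calling R's dmultinom):
from math import factorial

# Solve the constraint: with t = p_1, the equalities give
# p = (t, t, t, t/2, t/2, t/3), and summing to 1 yields (13/3)*t = 1.
t = 3 / 13
p = [t, t, t, t / 2, t / 2, t / 3]
x = [3, 0, 0, 2, 0, 0]  # three dice show face 1, two dice show face 4

coef = factorial(sum(x))
for xi in x:
    coef //= factorial(xi)  # multinomial coefficient: 5! / (3! * 2!) = 10

prob = coef
for pi, xi in zip(p, x):
    prob *= pi ** xi  # C * p_1^3 * p_4^2
print(prob)  # ~0.00164, matching dmultinom(x, prob = p) in R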

Algorithm to solve a Hacker earth problem

I have been working on a Hackerearth Problem. Here is the problem statement:
We have three variables a, b and c. We need to convert a to b, and the following operations are allowed:
1. Can decrement by 1.
2. Can decrement by 2.
3. Can multiply by c.
Find the minimum number of steps required to convert a to b.
Here is the algorithm I came up with:
Initialize count to 0.
Loop until a === b:
1. Perform (x = a * c), (y = a - 1) and (z = a - 2).
2. Among x, y and z, choose the one whose absolute difference with b is the least.
3. Update the value of a to the value chosen among x, y and z.
4. Increment the count by 1.
I can pass the basic test case, but all my advanced cases are failing. I guess my logic is correct, but it seems to fail due to the complexity.
Can someone suggest a more optimized solution?
Edit 1
Sample Code
function findMinStep(arr) {
    let a = parseInt(arr[0]);
    let b = parseInt(arr[1]);
    let c = parseInt(arr[2]);
    let numOfSteps = 0;
    while (a !== b) {
        let multiply = Math.abs(b - (a * c));
        let decrement = Math.abs(b - (a - 1));
        let doubleDecrement = Math.abs(b - (a - 2));
        let abs = Math.min(multiply, decrement, doubleDecrement);
        if (abs === multiply) a = a * c;
        else if (abs === decrement) a -= 1;
        else a -= 2;
        numOfSteps += 1;
    }
    return numOfSteps.toString();
}
Sample Input: a = 3, b = 10, c = 2
Explanation: Multiply 3 with 2 to get 6, subtract 1 from 6 to get 5, multiply 5 with 2 to get 10.
Reason for tagging both Python and JS: I am comfortable with both, but I am not looking for code, just an optimized algorithm and analytical thinking.
Edit 2:
function findMinStep(arr) {
    let a = parseInt(arr[0]);
    let b = parseInt(arr[1]);
    let c = parseInt(arr[2]);
    let depth = 0;
    let queue = [a, 'flag'];
    if (a === b) return 0;
    if (a > b) {
        let output = Math.floor((a - b) / 2);
        if ((a - b) % 2) return output + 1;
        return output;
    }
    while (true) {
        let current = queue.shift();
        if (current === 'flag') {
            depth += 1;
            queue.push('flag');
            continue;
        }
        let multiple = current * c;
        let decrement = current - 1;
        let doubleDecrement = current - 2;
        if (multiple !== b) queue.push(multiple);
        else return depth + 1;
        if (decrement !== b) queue.push(decrement);
        else return depth + 1;
        if (doubleDecrement !== b) queue.push(doubleDecrement);
        else return depth + 1;
    }
}
Still times out. Any more suggestions?
Link to the question for your reference.
BFS
A greedy approach won't work here.
However, your second attempt is already on the right track. Consider the graph G, where each node represents a value and each edge represents one of the operations, connecting two values that are related by that operation (e.g. 4 and 3 are connected by "subtract 1"). Using this graph, we can perform a BFS to find the shortest path:
def a_to_b(a, b, c):
    visited = set()
    state = {a}
    depth = 0
    while b not in state:
        visited |= state
        # expand the frontier by one operation in every direction
        state = {v - 1 for v in state if v - 1 not in visited} | \
                {v - 2 for v in state if v - 2 not in visited} | \
                {v * c for v in state if v * c not in visited}
        depth += 1
    return depth
This code systematically tests all possible combinations of operations, level by level, until it reaches b: generate all values that can be reached with a single operation from a, then all values that can be reached with two operations, and so on, until b is among the generated values.
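Sample execution for the input from the question:
>>> a_to_b(3, 10, 2)
3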
In depth analysis
(Assuming c >= 0, but can be generalized)
So far for the standard approach that works with little analysis. It has the advantage that it works for any problem of this kind and is easy to implement. However, it isn't very efficient and will reach its limits fairly fast once the numbers grow. So instead, I'll show a way to analyze the problem in depth and obtain a (far) more performant solution:
As a first step, this answer will analyze the problem:
We need operations -->op such that a -->op b and -->op is a sequence of
subtract 1
subtract 2
multiply by c
First of all, what happens if we first subtract and afterwards multiply?
(a - x) * c = a * c - x * c
Next, what happens if we first multiply and afterwards subtract?
a * c - x'
Positional systems
Well, there's no simplifying transformation for this. But we've got the basic pieces to analyze more complicated chains of operations. Let's see what happens when we chain subtractions and multiplications alternatingly:
(((a - x) * c - x') * c - x'') * c - x''' =
((a * c - x * c - x') * c - x'') * c - x''' =
(a * c^2 - x * c^2 - x' * c - x'') * c - x''' =
a * c^3 - x * c^3 - x' * c^2 - x'' * c - x'''
Looks familiar? We're one step away from defining the difference between a and b in a positional system base c:
a * c^3 - x * c^3 - x' * c^2 - x'' * c - x''' = b
x * c^3 + x' * c^2 + x'' * c + x''' = a * c^3 - b
Unfortunately the above is still not quite what we need. All we can tell is that the LHS of the equation will always be >= 0. In general, we first need to derive the proper exponent n (3 in the above example), chosen minimal such that a * c^n - b >= 0. Solving the equation for the individual coefficients (x, x', ...), all of them non-negative, is then a fairly trivial task.
We can show two things from the above:
if a < b and a < 0, there is no solution
solving as above and transforming all coefficients into the appropriate operations leads to the optimal solution
Proof of optimality
The second statement above can be proven by induction over n.
n = 0: In this case a - b < c, so there is only one -->op
n + 1: let d = a * c^(n + 1) - b. Let d' = d - m * c^(n + 1), where m is chosen such that d' is minimal and nonnegative. By the induction hypothesis, d' can be generated optimally via a positional system, leaving a difference of exactly m * c^n. This difference cannot be covered more efficiently via lower-order terms than by ceil(m / 2) subtractions.
Algorithm (The TLDR-part)
Consider a * c^n - b as a number in base c and try to find its digits. The final number should have n + 1 digits, where each digit represents a certain number of subtractions. Multiple subtractions are represented by a single digit by adding the subtracted values; e.g. the digit 5 means -2 -2 -1. Working from the most significant to the least significant digit, the algorithm operates as follows:
perform the subtractions as specified by the digit
if the current digit was the last, terminate
multiply by c and repeat from 1. with the next digit
E.g.:
a = 3, b = 10, c = 2
choose n = 2
a * c^n - b = 3 * 4 - 10 = 2
2 in binary is 010
steps performed: 3 - 0 = 3, 3 * 2 = 6, 6 - 1 = 5, 5 * 2 = 10
or
a = 2, b = 25, c = 6
choose n = 2
a * c^n - b = 47
47 base 6 is 115
steps performed: 2 - 1 = 1, 1 * 6 = 6, 6 - 1 = 5, 5 * 6 = 30, 30 - 2 - 2 - 1 = 25
In Python:
def a_to_b(a, b, c):
    # calculate n: the smallest exponent with a * c^n - b >= 0
    n = 0
    pow_c = 1
    while a * pow_c - b < 0:
        n += 1
        pow_c *= c
    # calculate coefficients (the "digits", most significant first)
    d = a * pow_c - b
    coeff = []
    for i in range(0, n + 1):
        coeff.append(d // pow_c)  # calculate x and append to terms
        d %= pow_c                # remainder after eliminating ith term
        pow_c //= c
    # n multiplications plus ceil(digit / 2) subtractions per digit
    return n + sum(x // 2 + x % 2 for x in coeff)
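A quick check against the two worked examples above:
>>> a_to_b(3, 10, 2)
3
>>> a_to_b(2, 25, 6)
7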

Number of ways to reach N from 0 using only 2 or 3?

I am solving this problem where we need to reach from X = 0 to X = N. We can only take a step of 2 or 3 at a time.
For each step of 2 we have a probability of 0.2, and for each step of 3 we have a probability of 0.8. How can we find the total probability of reaching N?
e.g. for reaching 5:
2+3 with probability = 0.2 * 0.8 = 0.16
3+2 with probability = 0.8 * 0.2 = 0.16, total = 0.32.
My initial thoughts:
The number of ways can be found with a simple Fibonacci-like recurrence:
f(n) = f(n-3) + f(n-2)
But how do we remember the numbers so that we can multiply them to find the probability?
This can be solved using dynamic programming.
Let F(N) = the probability of reaching 0 using only steps of 2 and 3 when the starting number is N.
F(N) = 0.2*F(N-2) + 0.8*F(N-3)
Base case:
F(0) = 1 and F(k) = 0 where k < 0
So the DP code would be something like this:
F[0] = 1;
for (int i = 1; i <= N; i++) {
    if (i >= 3)
        F[i] = 0.2*F[i-2] + 0.8*F[i-3];
    else if (i >= 2)
        F[i] = 0.2*F[i-2];
    else
        F[i] = 0;
}
return F[N];
This algorithm runs in O(N) time.
Some clarifications about this solution: I assume the only allowed operation for generating the number from 2s and 3s is addition (your definition would allow subtraction as well) and that the input numbers are always valid (2 <= input). Definition: a unique row of numbers means that no other row with the same numbers of 3s and 2s in another order is in scope.
We can reduce the problem into multiple smaller problems:
Problem A: finding all sequences of numbers that can sum up to the given number. (Unique rows of numbers only)
Start by finding the minimum number of 3s required to build the given number, which is simply input % 2. The maximum number of 3s that can be used to build the input can be calculated this way:
int max_3 = input / 3;
if (input - max_3 * 3 == 1)
    --max_3;
Now all sequences of numbers that sum up to input must contain between input % 2 and max_3 3s, and the count of 3s must have the same parity as input. The number of 2s can be easily calculated from a given number of 3s.
Problem B: calculating the probability for a given list and it's permutations to be the result
For each unique row of numbers, we can easily derive all permutations. Since these consist of the same numbers, they are equally likely to appear and produce the same sum. The likeliness of one permutation can be calculated easily from the row: 0.8 ^ number_of_3s * 0.2 ^ number_of_2s. The next step is to count the distinct permutations of a row with a specific number of 2s and 3s: (number_of_2s + number_of_3s)! / (number_of_3s! * number_of_2s!).
Now from theory to practice
Since the math is given, the rest is pretty straight forward:
define prob:
    input: int num
    output: double
    double result = 0.0
    int min_3s = num % 2
    int max_3s = num / 3
    if (num - max_3s * 3 == 1)
        --max_3s
    for int c3s in [min_3s, min_3s + 2, ..., max_3s]   // the 3s-count parity is fixed by num
        int c2s = (num - c3s * 3) / 2
        double p = 0.8 ^ c3s * 0.2 ^ c2s
        p *= (c3s + c2s)! / (c3s! * c2s!)
        result += p
    return result
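Here is a direct Python transcription of the pseudocode above (math.comb supplies the distinct-permutation count; the naming is mine):
from math import comb

def prob(num):
    # probability of landing exactly on num using +2 (p = 0.2) and +3 (p = 0.8)
    result = 0.0
    min_3s = num % 2
    max_3s = num // 3
    if num - max_3s * 3 == 1:
        max_3s -= 1
    for c3s in range(min_3s, max_3s + 1, 2):  # 3s-count parity is fixed by num
        c2s = (num - c3s * 3) // 2
        p = 0.8 ** c3s * 0.2 ** c2s
        result += p * comb(c3s + c2s, c3s)
    return result

print(prob(5))  # 0.32
print(prob(6))  # 0.648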
Instead of jumping into the programming, you can use math.
Let p(n) be the probability that you reach the location that is n steps away.
Base cases:
p(0)=1
p(1)=0
p(2)=0.2
Linear recurrence relation
p(n+3)=0.2 p(n+1) + 0.8 p(n)
You can solve this in closed form by finding the exponential solutions of the linear recurrence relation.
c^3 = 0.2 c + 0.8
c = 1, (-5 +- sqrt(55)i)/10
Although this was cubic, c=1 will always be a solution in this type of problem since there is a constant nonzero solution.
Because the roots are distinct, all solutions are of the form a1(1)^n + a2((-5+sqrt(55)i)/10)^n + a3((-5-sqrt(55)i)/10)^n. You can solve for a1, a2, and a3 using the initial conditions:
a1=5/14
a2=(99+sqrt(55)i)/308
a3=(99-sqrt(55)i)/308
This gives you a nonrecursive formula for p(n):
p(n)=5/14+(99+sqrt(55)i)/308((-5+sqrt(55)i)/10)^n+(99-sqrt(55)i)/308((-5-sqrt(55)i)/10)^n
One nice property of the non-recursive formula is that you can read off the asymptotic value of 5/14, but that's also clear because the average value of a jump is 2(1/5)+ 3(4/5) = 14/5, and you almost surely hit a set with density 1/(14/5) of the integers. You can use the magnitudes of the other roots, 2/sqrt(5)~0.894, to see how rapidly the probabilities approach the asymptotics.
5/14 - (|a2|+|a3|) 0.894^n < p(n) < 5/14 + (|a2|+|a3|) 0.894^n
|5/14 - p(n)| < (|a2|+|a3|) 0.894^n
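A short numeric sanity check (not part of the original answer) comparing the closed form with the recurrence:
import math

s = math.sqrt(55)
r2 = (-5 + s * 1j) / 10
r3 = (-5 - s * 1j) / 10
a1, a2, a3 = 5 / 14, (99 + s * 1j) / 308, (99 - s * 1j) / 308

def p_closed(n):
    # imaginary parts cancel since a3, r3 are the conjugates of a2, r2
    return (a1 + a2 * r2 ** n + a3 * r3 ** n).real

# DP recurrence for comparison: p(n) = 0.2 p(n-2) + 0.8 p(n-3)
p = [1.0, 0.0, 0.2]
for n in range(3, 11):
    p.append(0.2 * p[n - 2] + 0.8 * p[n - 3])

for n in range(11):
    assert abs(p[n] - p_closed(n)) < 1e-9
print(p_closed(5))  # 0.32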
f(n, p) = f(n-3, p*0.8) + f(n-2, p*0.2)
Start p at 1.
If n = 0, return p; if n < 0, return 0.
Instead of using the (terribly inefficient) recursive algorithm, start from the start and calculate in how many ways you can reach subsequent steps, i.e. using 'dynamic programming'. This way, you can easily calculate the probabilities and also have a complexity of only O(n) to calculate everything up to step n.
For each step, memorize the possible ways of reaching that step, if any (no matter how), and the probability of reaching that step. For the zeroth step (the start) this is (1, 1.0).
steps = [(1, 1.0)]
Now, for each consecutive step n, get the previously computed possible ways poss and probability prob to reach steps n-2 and n-3 (or (0, 0.0) in case of n < 2 or n < 3 respectively), add those to the combined possibilities and probability to reach that new step, and add them to the list.
for n in range(1, 10):
    poss2, prob2 = steps[n-2] if n >= 2 else (0, 0.0)
    poss3, prob3 = steps[n-3] if n >= 3 else (0, 0.0)
    steps.append((poss2 + poss3, prob2 * 0.2 + prob3 * 0.8))
Now you can just get the numbers from that list:
>>> for n, (poss, prob) in enumerate(steps):
...     print("%s\t%s\t%s" % (n, poss, prob))
0 1 1.0
1 0 0.0
2 1 0.2
3 1 0.8
4 1 0.04
5 2 0.32 <-- 2 ways to get to 5 with combined prob. of 0.32
6 2 0.648
7 3 0.096
8 4 0.3856
9 5 0.5376
(Code is in Python)
Note that this will get you both the number of possible ways of reaching a certain step (e.g. "first 2, then 3" or "first 3, then 2" for 5), and the probability to reach that step in one go. Of course, if you need only the probability, you can just use single numbers instead of tuples.

random increasing sequence with O(1) access to any element?

I have an interesting math/CS problem. I need to sample from a possibly infinite random sequence of increasing values, X, with X(i) > X(i-1), with some distribution between them. You could think of this as the sum of a different sequence D of uniform random numbers in [0,d). This is easy to do if you start from the first one and go from there; you just add a random amount to the sum each time. But the catch is, I want to be able to get any element of the sequence in faster than O(n) time, ideally O(1), without storing the whole list. To be concrete, let's say I pick d=1, so one possibility for D (given a particular seed) and its associated X is:
D={.1, .5, .2, .9, .3, .3, .6 ...} // standard random sequence, elements in [0,1)
X={.1, .6, .8, 1.7, 2.0, 2.3, 2.9, ...} // increasing random values; partial sum of D
(I don't really care about D, I'm just showing one conceptual way to construct X, my sequence of interest.) Now I want to be able to compute the value of X[1] or X[1000] or X[1000000] equally fast, without storing all the values of X or D. Can anyone point me to some clever algorithm or a way to think about this?
(Yes, what I'm looking for is random access into a random sequence -- with two different meanings of random. Makes it hard to google for!)
Since D is pseudorandom, there's a space-time tradeoff possible: O(sqrt(n))-time retrievals using O(sqrt(n)) storage locations (or, in general, O(n**alpha)-time retrievals using O(n**(1-alpha)) storage locations). Assume zero-based indexing and that X[n] = D[0] + D[1] + ... + D[n-1]. Compute and store
Y[s] = X[s**2]
for all s**2 <= n in the range of interest. To look up X[n], let s = floor(sqrt(n)) and return
Y[s] + D[s**2] + D[s**2+1] + ... + D[n-1].
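A minimal sketch of this tradeoff, assuming a deterministic per-index generator D(i); the hash-based uniform below is a hypothetical stand-in for whatever pseudorandom sequence you actually use:
import hashlib, math

def D(i):
    # deterministic pseudo-uniform value in [0, 1) for index i
    digest = hashlib.sha256(str(i).encode()).digest()
    return int.from_bytes(digest[:8], 'big') / 2 ** 64

N = 10 ** 6
root = math.isqrt(N)
Y = [0.0] * (root + 1)  # Y[s] = X[s**2], precomputed once in O(N)
acc, k = 0.0, 0
for s in range(1, root + 1):
    while k < s * s:
        acc += D(k)
        k += 1
    Y[s] = acc

def X(n):
    # O(sqrt(n)) lookup: start from the nearest stored checkpoint below n
    assert n <= N
    s = math.isqrt(n)
    return Y[s] + sum(D(i) for i in range(s * s, n))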
EDIT: here's the start of an approach based on the following idea.
Let Dist(1) be the uniform distribution on [0, d) and let Dist(k) for k > 1 be the distribution of the sum of k independent samples from Dist(1). We need fast, deterministic methods to (i) pseudorandomly sample Dist(2**p) and (ii) given that X and Y are distributed as Dist(2**p), pseudorandomly sample X conditioned on the outcome of X + Y.
Now imagine that the D array constitutes the leaves of a complete binary tree of size 2**q. The values at interior nodes are the sums of the values at their two children. The naive way is to fill the D array directly, but then it takes a long time to compute the root entry. The way I'm proposing is to sample the root from Dist(2**q). Then, sample one child according to Dist(2**(q-1)) given the root's value. This determines the value of the other, since the sum is fixed. Work recursively down the tree. In this way, we look up tree values in time O(q).
Here's an implementation for Gaussian D. I'm not sure it's working properly.
import hashlib, math

def random_oracle(seed):
    # deterministic pseudo-uniform value in [0, 1) derived from the seed
    h = hashlib.sha512()
    h.update(str(seed).encode())
    x = 0.0
    for b in h.digest():
        x = (x + b) / 256.0
    return x

def sample_gaussian(variance, seed):
    # Box-Muller transform driven by the random oracle
    u0 = random_oracle(2 * seed)
    u1 = random_oracle(2 * seed + 1)
    return math.sqrt(-2.0 * variance * math.log(1.0 - u0)) * math.cos(2.0 * math.pi * u1)

def sample_children(sum_outcome, sum_variance, seed):
    # for iid Gaussian children, the difference X - Y is independent of the
    # sum X + Y and has the same variance; sample it, then recover X and Y
    difference_outcome = sample_gaussian(sum_variance, seed)
    return (sum_outcome + difference_outcome) / 2.0, (sum_outcome - difference_outcome) / 2.0

def sample_X(height, i):
    # walk down the implicit binary tree of partial sums: O(height) per query
    assert 0 <= i <= 2 ** height
    total = 0.0
    z = sample_gaussian(2 ** height, 0)
    seed = 1
    for j in range(height, 0, -1):
        x, y = sample_children(z, 2 ** j, seed)
        assert abs((x + y) - z) <= 1e-09
        seed *= 2
        if i >= 2 ** (j - 1):
            # the index lies in the right subtree; accumulate the left sum
            i -= 2 ** (j - 1)
            total += x
            z = y
            seed += 1
        else:
            z = x
    return total

def test(height):
    X = [sample_X(height, i) for i in range(2 ** height + 1)]
    D = [X[i + 1] - X[i] for i in range(2 ** height)]
    mean = sum(D) / len(D)
    variance = sum((d - mean) ** 2 for d in D) / (len(D) - 1)
    print(mean, math.sqrt(variance))
    D.sort()
    with open('data', 'w') as f:
        for d in D:
            print(d, file=f)

if __name__ == '__main__':
    test(10)
If you do not record the values in X that you have previously generated, there is no way to guarantee that the elements of X you generate on the fly will be in increasing order. Furthermore, it seems there is no way to avoid O(n) worst-case time per query unless you know how to quickly generate the CDF of the sum of the first m random variables in D for any choice of m.
If you want the ith value X(i) from a particular realization, I can't see how you could do this without generating the sequence up to i. Perhaps somebody else can come up with something clever.
Would you be willing to accept a value which is plausible in the sense that it has the same distribution as the X(i)'s you would observe across multiple realizations of the X process? If so, it should be pretty easy. X(i) will be asymptotically normally distributed with mean i/2 (since it's the sum of the Dk's for k=1,...,i, the D's are Uniform(0,1), and the expected value of a D is 1/2) and variance i/12 (since the variance of a D is 1/12 and the variance of a sum of independent random variables is the sum of their variances).
Because of the asymptotic aspect, I'd pick some threshold value of i to switch over from direct summing to using the normal approximation. For example, with a threshold of i = 12, you would sum actual uniforms for values of i from 1 to 11 and generate a Normal(i/2, sqrt(i/12)) value for i >= 12. That's an O(1) algorithm since the total work is bounded by your threshold, and the results produced will be distributionally representative of what you would see if you actually went through the summing.
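A sketch of that idea with d = 1 (the threshold of 12 follows the example above; values are drawn fresh each call, i.e. distributionally representative rather than repeatable):
import math, random

THRESHOLD = 12

def plausible_X(i):
    if i < THRESHOLD:
        # small i: actually sum i uniforms from [0, 1)
        return sum(random.random() for _ in range(i))
    # large i: X(i) is approximately Normal(mean i/2, std dev sqrt(i/12))
    return random.gauss(i / 2.0, math.sqrt(i / 12.0))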

Algorithm to evaluate best weights for weighted average

I have a data set of the form:
[9.1 5.6 7.4] => 8.5, [4.1 4.4 5.2] => 4.9, ... , x => y(x)
So x is a real vector of three elements and y is a scalar function.
I'm assuming a weighted average model of this data:
y(x) = (a * x[0] + b * x[1] + c * x[2]) / (a+b+c) + E(x)
where E is an unknown random error term.
I need an algorithm to find a,b,c, that minimizes total sum square error:
error = sum over all x of { E(x)^2 }
for a given data set.
Assume the weights are normalized to sum to 1 (which happily is without loss of generality). Then we can recast the problem with c = 1 - a - b, so we are actually solving only for a and b.
With this we can write
error(a,b) = sum over all x { a x[0] + b x[1] + (1 - a - b) x[2] - y(x) }^2
Now it's just a question of taking the partial derivatives d_error/da and d_error/db and setting them to zero to find the minimum.
With some fiddling, you get a system of two equations in a and b.
C(X[0],X[0],X[2]) a + C(X[0],X[1],X[2]) b = C(X[0],Y,X[2])
C(X[1],X[0],X[2]) a + C(X[1],X[1],X[2]) b = C(X[1],Y,X[2])
Here X[i] means the vector of all i-th components from the dataset's x values, and Y means the vector of all y(x) values.
The coefficient function C has the following meaning:
C(p, q, r) = sum over i { p[i] ( q[i] - r[i] ) }
I'll omit how to solve the 2x2 system unless this is a problem.
If we plug in the two-element data set you gave, we should get precise coefficients because you can always approximate two points perfectly with a line. So for example the first equation coefficients are:
C(X[0],X[0],X[2]) = 9.1(9.1 - 7.4) + 4.1(4.1 - 5.2) = 10.96
C(X[0],X[1],X[2]) = -19.66
C(X[0],Y,X[2]) = 8.78
Similarly for the second equation: C(X[1],X[0],X[2]) = 4.68, C(X[1],X[1],X[2]) = -13.6, C(X[1],Y,X[2]) = 4.84.
Solving the 2x2 system produces: a = 0.42515, b = -0.20958. Therefore c = 0.78443.
Note that in this problem a negative coefficient results. There is nothing to guarantee the coefficients will be positive, and even "real" data sets may produce this result.
Indeed if you compute weighted averages with these coefficients, they are 8.5 and 4.9.
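For concreteness, a small Python check of those numbers:
X0, X1, X2 = [9.1, 4.1], [5.6, 4.4], [7.4, 5.2]
Y = [8.5, 4.9]

def C(p, q, r):
    # C(p, q, r) = sum over i of p[i] * (q[i] - r[i])
    return sum(pi * (qi - ri) for pi, qi, ri in zip(p, q, r))

# the 2x2 system, solved by Cramer's rule
A11, A12, B1 = C(X0, X0, X2), C(X0, X1, X2), C(X0, Y, X2)
A21, A22, B2 = C(X1, X0, X2), C(X1, X1, X2), C(X1, Y, X2)
det = A11 * A22 - A12 * A21
a = (B1 * A22 - A12 * B2) / det
b = (A11 * B2 - A21 * B1) / det
c = 1 - a - b
print(a, b, c)  # ~0.42515, -0.20958, 0.78443
for x0, x1, x2, y in zip(X0, X1, X2, Y):
    print(a * x0 + b * x1 + c * x2, y)  # reproduces 8.5 and 4.9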
For fun I also tried this data set:
X[0] X[1] X[2] Y
0.018056028 9.70442075 9.368093544 6.360312244
8.138752835 5.181373099 3.824747424 5.423581239
6.296398214 4.74405298 9.837741509 7.714662742
5.177385358 1.241610571 5.028388255 4.491743107
4.251033792 8.261317658 7.415111851 6.430957844
4.720645386 1.0721718 2.187147908 2.815078796
1.941872069 1.108191586 6.24591771 3.994268819
4.220448549 9.931055481 4.435085917 5.233711923
9.398867623 2.799376317 7.982096264 7.612485261
4.971020963 1.578519218 0.462459906 2.248086465
I generated the Y values with 1/3 x[0] + 1/6 x[1] + 1/2 x[2] + E where E is a random number in [-0.1..+0.1]. If the algorithm is working correctly we'd expect to get roughly a = 1/3 and b = 1/6 from this result. Indeed we get a = .3472 and b = .1845.
OP has now said that his actual data are larger than 3-vectors. This method generalizes without much trouble. If the vectors are of length n, then you get an n-1 x n-1 system to solve.
