I want a reference, pseudo-algorithm, or actual algorithm for quaternion GCD. I need it to find the four squares that sum to any given integer $n$. I've done all the other work, but I'm stuck on this step, since there's no information on Wikipedia or arXiv on how to do such a GCD.
Thanks.
I tried to extend the Gaussian integer (complex) GCD, but with no success.
The key for these things is the Euclidean algorithm:
from itertools import product
from math import isqrt

def euclidean_rightdiv_hurwitz(B, D):
    # Returns q, r such that B = q*D + r,
    # with r == 0 or norm(r) < norm(D).
    # norm() and quaternion() are assumed quaternion helpers.
    nor = norm(D)
    bound = isqrt(nor)  # norm(r) < nor forces each coordinate into -bound..bound
    for a, b, c, d in product(range(-bound, bound + 1), repeat=4):
        r = quaternion(a, b, c, d)
        if r == 0 or norm(r) < nor:
            diff = B - r
            if hurwitz_is_rightdivisible(diff, D):
                return diff / D, r
To implement hurwitz_is_rightdivisible, notice that if diff is right-divisible by D, then diff*inv(D) must itself be a Hurwitz integer, so just compute it and check each coordinate.
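A minimal sketch of that check in Python, assuming hypothetical helpers inv() and coordinates() for quaternion inversion and for reading off the four coordinates as exact fractions (a Hurwitz integer has all four coordinates integers, or all four half-odd-integers):

from fractions import Fraction

def hurwitz_is_rightdivisible(diff, D):
    # Sketch only: inv() and coordinates() are assumed quaternion helpers.
    q = diff * inv(D)
    doubled = [2 * Fraction(x) for x in coordinates(q)]
    if any(x.denominator != 1 for x in doubled):
        return False
    # All integer coordinates (all doubled values even) or all
    # half-odd-integer coordinates (all doubled values odd).
    parities = set(int(x) % 2 for x in doubled)
    return len(parities) == 1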
How should I write this:
(d*a)mod(b)=1
in order to make it work properly in Ruby? I tried it on Wolfram, but their solution:
(da(b, d))/(dd) = -a/d
doesn't help me. I know a and b. I need to solve (d*a)mod(b)=1 for d in the form d=....
It's not clear what you're asking, and, depending on what you mean, a solution may be impossible.
First off, (da(b, d))/(dd) = -a/d is not a solution to that equation; rather, it's a misinterpretation of the notation used for partial derivatives. What Wolfram Alpha actually gave you was the partial derivative $\frac{\partial a(b,d)}{\partial d} = -\frac{a}{d}$, which is entirely unrelated.
Secondly, if you're trying to solve (d*a)mod(b)=1 for d, you may be out of luck. If a and b have a common prime factor, no value of d satisfies the equation, because (d*a) mod b is always a multiple of gcd(a, b) and so can never equal 1. If a and b are coprime, you can use the formula given in LutzL's answer.
Additionally, if you're looking to perform symbolic manipulation of equations, Ruby is likely not the proper tool. Consider using a CAS, like Python's SymPy or Wolfram Mathematica.
Finally, if you're just trying to compute (d*a)mod(b), the modulo operator in Ruby is %, so you'd write (d*a)%(b).
You are looking for the modular inverse of a modulo b.
For any two numbers a, b, the extended Euclidean algorithm
g, u, v = xgcd(a, b)
gives coefficients u, v such that
u*a + v*b = g
where g is the greatest common divisor. You need a and b coprime (preferably by ensuring that b is a prime number) to get g = 1; then you can set d = u, reduced mod b, as in the example below.
def xgcd(a, b):
    if b == 0:
        return (a, 1, 0)
    q, r = divmod(a, b)   # a = q*b + r
    g, u, v = xgcd(b, r)  # g = u*b + v*r = u*b + v*(a - q*b) = v*a + (u - q*v)*b
    return (g, v, u - q*v)
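For example (values chosen arbitrarily for illustration):

g, u, v = xgcd(7, 40)
print(g)              # 1, since 7 and 40 are coprime
d = u % 40            # normalize the inverse into 0..b-1
print(d, (d*7) % 40)  # 23 1, so d = 23 solves (d*a) mod b = 1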
This is my first question on this website. I'm looking at a Matlab problem and don't seem to know how to do it. Before I type the question, I want to make it clear that I'm looking for an UNDERSTANDING, NOT an ANSWER. Although, I must admit, I won't be angry if an answer is posted. But more importantly, I need to understand this.
"The matrix factorization LU = PA can be used to compute the determinant of A. We have
det(L)det(U) = det(P)det(A).
Because L is triangular with ones on the diagonal, det(L) = 1. Because U is triangular, det(U) = $u_{11} u_{22} \cdots u_{nn}$. Because P is a permutation, det(P) = +1 if the number of interchanges is even and −1 if it is odd. So
det(A) = $\pm u_{11} u_{22} \cdots u_{nn}$.
Modify the lutx function so that it returns four outputs.
function [L,U,p,sig] = lutx(A)
%LU Triangular factorization
% [L,U,p,sig] = lutx(A) computes a unit lower triangular
% matrix L, an upper triangular matrix U, a permutation
% vector p, and a scalar sig, so that L*U = A(p,:) and
% sig = +1 or -1 if p is an even or odd permutation.
Write a function mydet(A) that uses your modified lutx to compute the
determinant of A. In Matlab, the product $u_{11} u_{22} \cdots u_{nn}$ can be computed by the expression prod(diag(U))."
The lutx code can be found here:
I'm having difficulty understanding the concept of the problem, and also the code that needs to be written. Any help would be very appreciated. Thank you.
As you mentioned in your problem, in the equation
det(L)det(U) = det(P)det(A)
the lutx function actually decomposes the input matrix and returns the factors: if you give it the matrix A, it will calculate L, U, and p. You can check the source code.
In your problem, three out of the four determinants are known or easily computed, so you can use the lutx function to find det(A).
because :
det(A) = det(L)det(U) / det(P);
so what you can do is this:
[L,U,p,sig] = lutx(A); % here I am using the modified version of lutx that you mentioned
DetA = 1 * prod(diag(U)) * sig;
because det(L) = 1 (I spell it out in the previous line of code just for understanding), det(U) = prod(diag(U)), and sig gives the sign.
Finally, you can compare your result with the built-in Matlab function det(A).
The exercise appears to be mainly to compute sig, which lutx currently doesn't return. Note that simply counting how many entries of p moved away from their original position does not give the parity (a 3-cycle displaces three entries yet is an even permutation). Two reliable routes: initialize sig = 1 inside lutx and flip its sign at every row interchange, or compute the parity from the cycle decomposition of p, as sketched below.
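A sketch of the cycle-decomposition route, in Python for illustration (the exercise itself wants this inside lutx in Matlab; p is taken 0-based here):

def perm_sign(p):
    # Sign of a permutation vector p: (-1)**(n - number_of_cycles).
    n = len(p)
    visited = [False] * n
    cycles = 0
    for k in range(n):
        if not visited[k]:
            cycles += 1
            j = k
            while not visited[j]:
                visited[j] = True
                j = p[j]
    return 1 if (n - cycles) % 2 == 0 else -1

print(perm_sign([1, 2, 0]))  # +1: a 3-cycle is even
print(perm_sign([1, 0, 2]))  # -1: a single swap is odd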
I need an N × length(L) matrix of Legendre polynomials evaluated over L, for arbitrary N.
Is there a better way of computing the matrix than explicitly evaluating the polynomial vector for each row? The code snippet for this approach (N = 4) is here:
L = linspace(-1,1,800);
# How to do this in a better way?
G = [legendre_Pl(0,L); legendre_Pl(1,L); legendre_Pl(2,L); legendre_Pl(3,L)];
Thanks,
Vojta
Create an anonymous function (note that Octave uses @, not #). Documentation at http://www.gnu.org/software/octave/doc/interpreter/Anonymous-Functions.html
f = @(n) legendre_Pl(n, L);
Then use arrayfun to apply the function f to the array of degrees 0:N-1 (matching the degrees 0..3 in your snippet). Documentation at http://www.gnu.org/software/octave/doc/interpreter/Function-Application.html
CellArray = arrayfun(f, 0:N-1, "UniformOutput", false);
That gives you a cell array. If you want the answer in a matrix, reshape to a column of cells so the rows stack vertically, and use cell2mat:
G = cell2mat(CellArray(:));
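If you are open to doing this outside Octave, NumPy ships a ready-made builder for exactly this matrix; a short sketch in Python, using numpy.polynomial.legendre.legvander:

import numpy as np
from numpy.polynomial.legendre import legvander

L = np.linspace(-1, 1, 800)
N = 4
G = legvander(L, N - 1).T  # row k holds P_k evaluated over L, k = 0..N-1
print(G.shape)             # (4, 800)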
Maybe I should ask this in Mathoverflow but here it goes:
I have 2 data sets (sets of x and y coordinates) with different numbers of elements. I have to find the best match between them by stretching one of the data sets (multiplying all x by a factor m and all y by a factor n) and moving it around (adding p and q to all x and y respectively).
Basically these 2 sets represent different curves, and I have to fit curve B (which has fewer elements) to some segment of curve A (which has many more elements).
How can I find the values m, n, p, and q for the closest match?
Answers can be pseudo code, C, Java or Python. Thanks.
Following is a solution for finding the values m, n, p, and q when, after transforming the first curve, it matches exactly a part of the second curve:
Basically, we have to solve the following matrix equation:
[m n][x y]' + [p q]' = [X Y]'    ... (1)
where [x y]' and [X Y]' are the coordinates of the first and second curves respectively. Let's assume the first curve has l coordinates in total and the second curve has h.
(1) implies,
[mx+p ny+q]' = [X Y]'
i.e. we have to solve:
mx_1+p = X_k, mx_2+p = X_{k+1}, ..., mx_l+p = X_{k+l-1}
ny_1+q = Y_k, ny_2+q = Y_{k+1}, ..., ny_l+q = Y_{k+l-1}
where 1 <= k <= h-l+1 (so that k+l-1 <= h)
We can solve it in the following naive way:
for (i = 1 to h-l+1){
    (m,p) = SOLVE(x_1, X_i, x_2, X_{i+1})  // 2 unknowns, 2 equations
    (n,q) = SOLVE(y_1, Y_i, y_2, Y_{i+1})  // 2 unknowns, 2 equations
    for (j = 3 to l){
        if(m*x_j + p != X_{i+j-1}) break; // m, p found from the first 2 points don't fit the rest
        if(n*y_j + q != Y_{i+j-1}) break; // n, q found from the first 2 points don't fit the rest
    }
    if(j > l){ // match found
        return i; // the smallest index into the 2nd curve's coordinates where a match starts
    }
}
return -1; // no match found
I am not sure if there can be an optimized version of this.
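If no exact match exists and you want the closest fit in a least-squares sense, here is a hedged Python sketch (assuming NumPy arrays x, y, X, Y, and that the l points of the first curve map onto l consecutive points of the second):

import numpy as np

def best_affine_match(x, y, X, Y):
    # For each alignment offset k, fit (m, p) and (n, q) by 1-D least
    # squares and keep the offset with the smallest summed residual.
    l, h = len(x), len(X)
    A_x = np.column_stack([x, np.ones(l)])
    A_y = np.column_stack([y, np.ones(l)])
    best_k, best_params, best_res = -1, None, float('inf')
    for k in range(h - l + 1):
        (m, p), rx, _, _ = np.linalg.lstsq(A_x, X[k:k+l], rcond=None)
        (n, q), ry, _, _ = np.linalg.lstsq(A_y, Y[k:k+l], rcond=None)
        res = (rx.sum() if rx.size else 0.0) + (ry.sum() if ry.size else 0.0)
        if res < best_res:
            best_k, best_params, best_res = k, (m, n, p, q), res
    return best_k, best_params, best_res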
Consider this way of solving the Subset sum problem:
def subset_summing_to_zero(activities):
    subsets = {0: []}
    for (activity, cost) in activities.iteritems():
        old_subsets = subsets
        subsets = {}
        for (prev_sum, subset) in old_subsets.iteritems():
            subsets[prev_sum] = subset
            new_sum = prev_sum + cost
            new_subset = subset + [activity]
            if 0 == new_sum:
                new_subset.sort()
                return new_subset
            else:
                subsets[new_sum] = new_subset
    return []
I have it from here:
http://news.ycombinator.com/item?id=2267392
There is also a comment which says that it is possible to make it "more efficient".
How?
Also, are there any other ways to solve the problem which are at least as fast as the one above?
Edit
I'm interested in any kind of idea which would lead to speed-up. I found:
https://en.wikipedia.org/wiki/Subset_sum_problem#cite_note-Pisinger09-2
which mentions a linear time algorithm. But I don't have the paper, perhaps you, dear people, know how it works? An implementation perhaps? Completely different approach perhaps?
Edit 2
There is now a follow-up:
Fast solution to Subset sum algorithm by Pisinger
I respect the alacrity with which you're trying to solve this problem! Unfortunately, you're trying to solve a problem that's NP-complete, meaning that any further improvement that breaks the polynomial time barrier will prove that P = NP.
The implementation you pulled from Hacker News appears to be consistent with the pseudo-polytime dynamic programming solution, where any additional improvements must, by definition, progress the state of current research into this problem and all of its algorithmic isoforms. In other words: while a constant speedup is possible, you're very unlikely to see an algorithmic improvement to this solution to the problem in the context of this thread.
However, you can use an approximate algorithm if you require a polytime solution with a tolerable degree of error. In pseudocode blatantly stolen from Wikipedia, this would be:
initialize a list S to contain one element 0.
for each i from 1 to N do
    let T be a list consisting of xi + y, for all y in S
    let U be the union of T and S
    sort U
    make S empty
    let y be the smallest element of U
    add y to S
    for each element z of U in increasing order do
        // trim the list by eliminating numbers close to one another
        // and throw out elements greater than s
        if y + cs/N < z ≤ s, set y = z and add z to S
if S contains a number between (1 − c)s and s, output yes, otherwise no
Python implementation, preserving the original terms as closely as possible:
from bisect import bisect

def ssum(X, c, s):
    """ Simple impl. of the polytime approximate subset sum algorithm
    Returns True if the subset exists within our given error; False otherwise
    """
    S = [0]
    N = len(X)
    for xi in X:
        T = [xi + y for y in S]
        U = set().union(T, S)
        U = sorted(U)  # Coercion to list
        S = []
        y = U[0]
        S.append(y)
        for z in U:
            if y + (c * s) / N < z and z <= s:
                y = z
                S.append(z)
    if not c:  # For zero error, check equivalence
        return S[bisect(S, s) - 1] == s
    return bisect(S, (1 - c) * s) != bisect(S, s)
... where X is your bag of terms, c is your precision (between 0 and 1), and s is the target sum.
For more details, see the Wikipedia article.
(Additional reference, further reading on CSTheory.SE)
While my previous answer describes the polytime approximate algorithm for this problem, a request was specifically made for an implementation of Pisinger's polytime dynamic programming solution when all xi in X are positive:
def balsub(X, c):
    """ Simple impl. of Pisinger's generalization of KP for subset sum problems
    satisfying xi >= 0, for all xi in X. Returns the state table "st"; see the
    note below for how to read an exact-sum answer off it.
    """
    if not X:
        return False
    X = sorted(X)
    n = len(X)
    r = X[-1]
    # Break item: the smallest b (1-indexed) with x_1 + ... + x_b > c.
    w_sum, b = 0, n + 1
    for i, xi in enumerate(X):
        if w_sum + xi > c:
            b = i + 1
            break
        w_sum += xi
    stm1 = {}
    for u in range(c - r + 1, c + 1):
        stm1[u] = 0
    for u in range(c + 1, c + r + 1):
        stm1[u] = 1
    stm1[w_sum] = b  # the break solution x_1 + ... + x_{b-1} <= c
    for t in range(b, n + 1):
        st = dict(stm1)  # state s_t starts as a copy of s_{t-1}
        for u in range(c - r + 1, c + 1):  # item insertion
            u_tick = u + X[t - 1]
            st[u_tick] = max(st[u_tick], stm1[u])
        for u in reversed(range(c + 1, c + X[t - 1] + 1)):  # item removal
            for j in reversed(range(stm1[u], st[u])):
                u_tick = u - X[j - 1]
                st[u_tick] = max(st[u_tick], j)
        stm1 = st  # roll the state forward to the next item
    return stm1
Wow, that was headache-inducing. It still deserves proofreading, but the intended reading of the final state (an assumption on my part, checked only on small examples) is that a subset of X summing exactly to c exists iff st[c] > 0.
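A quick smoke test under that st[c] > 0 reading (values picked arbitrarily):

st = balsub([3, 5, 7], 12)
print(st[12] > 0)  # True: 5 + 7 == 12
st = balsub([3, 5, 7], 11)
print(st[11] > 0)  # False: no subset of {3, 5, 7} sums to 11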
I don't know much Python, but there is an approach called meet in the middle.
Pseudocode:
divide activities into two subarrays, A1 and A2
for both A1 and A2, calculate the subset sums H1 and H2, the way you do it in your question
for each (cost, a1) in H1
    if (H2.contains(-cost))
        return a1 + H2[-cost];
This will allow you to roughly double the number of elements of activities you can handle in reasonable time.
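A minimal sketch of that idea in Python, on a plain list of integers rather than the asker's activity dictionary (the function name is hypothetical):

from itertools import combinations

def subset_summing_to_zero_mitm(nums):
    # Meet in the middle: enumerate all subset sums of the left half,
    # then search the right half for a complementary sum.
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]
    sums = {}
    for k in range(len(left) + 1):
        for combo in combinations(left, k):
            s = sum(combo)
            if s not in sums or (not sums[s] and combo):
                sums[s] = combo  # prefer a nonempty witness for each sum
    for k in range(len(right) + 1):
        for combo in combinations(right, k):
            match = sums.get(-sum(combo))
            if match is not None and (match or combo):  # skip empty + empty
                return sorted(match + combo)
    return []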
I apologize for "discussing" the problem, but a "Subset Sum" problem where the x values are bounded is not the NP version of the problem. Dynamic programming solutions are known for bounded-x-value problems. That is done by representing the x values as sums of unit lengths. The dynamic programming solutions have a number of fundamental iterations that is linear in the total unit length of the x's. However, the Subset Sum problem is in NP when the precision of the numbers equals N, that is, when the number of base-2 place values needed to state the x's is N. For N = 40, the x's have to be in the billions. In the NP problem the unit length of the x's increases exponentially with N. That is why the dynamic programming solutions are not a polynomial-time solution to the NP Subset Sum problem. That being the case, there are still practical instances of the Subset Sum problem where the x's are bounded and the dynamic programming solution is valid.
Here are three ways to make the code more efficient:
The code stores a list of activities for each partial sum. It is more efficient in terms of both memory and time to just store the most recent activity needed to make the sum, and work out the rest by backtracking once a solution is found.
For each activity the dictionary is repopulated with the old contents (subsets[prev_sum] = subset). It is faster to simply grow a single dictionary.
Splitting the values in two and applying a meet in the middle approach.
Applying the first two optimisations results in the following code which is more than 5 times faster:
def subset_summing_to_zero2(activities):
    subsets = {0: -1}
    for (activity, cost) in activities.iteritems():
        for prev_sum in subsets.keys():
            new_sum = prev_sum + cost
            if 0 == new_sum:
                new_subset = [activity]
                while prev_sum:
                    activity = subsets[prev_sum]
                    new_subset.append(activity)
                    prev_sum -= activities[activity]
                return sorted(new_subset)
            if new_sum in subsets: continue
            subsets[new_sum] = activity
    return []
Also applying the third optimisation results in something like:
def subset_summing_to_zero3(activities):
    A = activities.items()
    mid = len(A) // 2

    def make_subsets(A):
        # Map each reachable sum to the last activity used to reach it.
        subsets = {0: -1}
        for (activity, cost) in A:
            for prev_sum in subsets.keys():
                new_sum = prev_sum + cost
                if new_sum and new_sum in subsets: continue
                subsets[new_sum] = activity
        return subsets

    subsets = make_subsets(A[:mid])
    subsets2 = make_subsets(A[mid:])

    def follow_trail(new_subset, subsets, s):
        # Backtrack from sum s, removing one recorded activity at a time.
        while s:
            activity = subsets[s]
            new_subset.append(activity)
            s -= activities[activity]

    # Note: this pairs a sum s from one half with -s from the other, so it
    # assumes the zero-sum subset spans both halves; a subset lying entirely
    # within one half shows up at s == 0 and is not recovered here.
    new_subset = []
    for s in subsets:
        if -s in subsets2:
            follow_trail(new_subset, subsets, s)
            follow_trail(new_subset, subsets2, -s)
            if len(new_subset):
                break
    return sorted(new_subset)
Define bound to be the largest absolute value of the elements.
The algorithmic benefit of the meet in the middle approach depends a lot on bound.
For a low bound (e.g. bound=1000 and n=300) the meet in the middle only gets a factor of about 2 improvement over the first improved method. This is because the dictionary called subsets is densely populated.
However, for a high bound (e.g. bound=100,000 and n=30) the meet in the middle takes 0.03 seconds compared to 2.5 seconds for the first improved method (and 18 seconds for the original code).
For high bounds, the meet in the middle will take about the square root of the number of operations of the normal method.
It may seem surprising that meet in the middle is only twice as fast for low bounds. The reason is that the number of operations in each iteration depends on the number of keys in the dictionary. After adding k activities we might expect there to be 2**k keys, but if bound is small then many of these keys will collide, so we will only have O(bound·k) keys instead.
Thought I'd share my Scala solution for the pseudo-polytime algorithm discussed in Wikipedia. It's a slightly modified version: it figures out how many unique subsets there are. This is closely related to a HackerRank problem described at https://www.hackerrank.com/challenges/functional-programming-the-sums-of-powers. The coding style might not be excellent; I'm still learning Scala :) Maybe this is still helpful for someone.
import java.io.ByteArrayInputStream

object Solution extends App {
  val input = "1000\n2"
  System.setIn(new ByteArrayInputStream(input.getBytes()))
  println(calculateNumberOfWays(readInt, readInt))

  def calculateNumberOfWays(X: Int, N: Int) = {
    val maxValue = Math.pow(X, 1.0 / N).toInt
    val listOfValues = (1 until maxValue + 1).toList
    val listOfPowers = listOfValues.map(value => Math.pow(value, N).toInt)
    val lists = (0 until maxValue).toList.foldLeft(List(List(0)): List[List[Int]])((newList, i) =>
      newList :+ (newList.last union (newList.last.map(y => y + listOfPowers.apply(i)).filter(z => z <= X)))
    )
    lists.last.count(_ == X)
  }
}