I am trying to extract the lower triangular part of a SymPy matrix. Since I could not find a tril method in SymPy, I defined:
def tril (M):
    m = M.copy()
    for row_index in range (m.rows):
        for col_index in range (row_index + 1, m.cols):
            m[row_index, col_index] = 0
    return (m)
It seems to work.
Is there a more elegant way to extract the lower triangular part of a SymPy matrix?
Is .copy() the recommended way to ensure the integrity of the original matrix?
In SymPy, M.lower_triangular(k) gives the lower triangular part of M: the entries on and below the kth diagonal, with everything above it zeroed. The default is k=0.
In [99]: M
Out[99]:
⎡a b c⎤
⎢ ⎥
⎢d e f⎥
⎢ ⎥
⎣g h i⎦
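For example, applied to the M above (a minimal sketch; lower_triangular takes an optional k argument selecting which diagonal to keep):

import sympy
from sympy.abc import a, b, c, d, e, f, g, h, i

M = sympy.Matrix([[a, b, c], [d, e, f], [g, h, i]])

M.lower_triangular()    # Matrix([[a, 0, 0], [d, e, 0], [g, h, i]])
M.lower_triangular(-1)  # strictly below the main diagonal:
                        # Matrix([[0, 0, 0], [d, 0, 0], [g, h, 0]])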
The other answer suggests using the np.tril function:
In [100]: np.tril(M)
Out[100]:
array([[a, 0, 0],
       [d, e, 0],
       [g, h, i]], dtype=object)
That converts M into a numpy array (object dtype because of the symbols), and the result is also a numpy array.
Your function returns a sympy.Matrix.
In [101]: def tril (M):
     ...:     m = M.copy()
     ...:     for row_index in range (m.rows):
     ...:         for col_index in range (row_index + 1, m.cols):
     ...:             m[row_index, col_index] = 0
     ...:     return (m)
     ...:
In [102]: tril(M)
Out[102]:
⎡a 0 0⎤
⎢ ⎥
⎢d e 0⎥
⎢ ⎥
⎣g h i⎦
As a general rule, mixing sympy and numpy leads to confusion, if not errors. numpy is best for numeric work. It can handle non-numeric objects like symbols, but the math is hit-or-miss.
The np.tril and np.triu functions are built on the np.tri function:
In [114]: np.tri(3).astype(int)
Out[114]:
array([[1, 0, 0],
       [1, 1, 0],
       [1, 1, 1]])
We can make a symbolic Matrix from this:
In [115]: m1 = Matrix(np.tri(3).astype(int))
In [116]: m1
Out[116]:
⎡1 0 0⎤
⎢ ⎥
⎢1 1 0⎥
⎢ ⎥
⎣1 1 1⎦
and do element-wise multiplication:
In [117]: M.multiply_elementwise(m1)
Out[117]:
⎡a 0 0⎤
⎢ ⎥
⎢d e 0⎥
⎢ ⎥
⎣g h i⎦
np.tri works by comparing a column array with a row:
In [123]: np.arange(3)[:,None]>=np.arange(3)
Out[123]:
array([[ True, False, False],
       [ True,  True, False],
       [ True,  True,  True]])
In [124]: _.astype(int)
Out[124]:
array([[1, 0, 0],
       [1, 1, 0],
       [1, 1, 1]])
Another answer suggests lower_triangular. It's interesting to look at its code:
def entry(i, j):
    return self[i, j] if i + k >= j else self.zero

return self._new(self.rows, self.cols, entry)
It applies an i + k >= j test to each element; _new must be iterating over the rows and columns.
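You can mimic that construction yourself with the Matrix(rows, cols, func) constructor, which calls func for every (row, col) index pair; a minimal sketch with k fixed at 0:

import sympy
from sympy.abc import a, b, c, d, e, f, g, h, i

M = sympy.Matrix([[a, b, c], [d, e, f], [g, h, i]])

# keep M[r, s] on and below the main diagonal (r >= s), zero everything above
tril_M = sympy.Matrix(M.rows, M.cols, lambda r, s: M[r, s] if r >= s else 0)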
You can simply use the numpy function:
import numpy as np
np.tril(M)
*Of course, as noted in the other answer, you should convert back via sympy.Matrix(np.tril(M)). But it depends on what you're going to do next.
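If you need a SymPy Matrix again afterwards, wrapping the NumPy result works; a minimal sketch of that round trip:

import numpy as np
import sympy
from sympy.abc import a, b, c, d, e, f, g, h, i

M = sympy.Matrix([[a, b, c], [d, e, f], [g, h, i]])

# np.tril returns an object-dtype ndarray; wrap it to get a sympy.Matrix back
M_lower = sympy.Matrix(np.tril(M))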
Find a matrix for the linear transformation T: R2 → R3 defined by T(x, y) = (13x - 9y, -x - 2y, -11x - 6y), with respect to the bases B = {(2, 3), (-3, -4)} for R2 and C = {(-1, 2, 2), (-4, 1, 3), (1, -1, -1)} for R3.
Here, the process should be to find the transformation of the basis vectors of B and express those as linear combinations of C; those coordinate vectors will then form the matrix of the linear transformation. Is my approach correct, or do I need to change something?
I will show you how to do it in Python with the sympy module:
import sympy

# Assuming that B(x, y) = (2, 3)*x + (-3, -4)*y, it can be expressed as a left multiplication by
B = sympy.Matrix(
    [[2, -3],
     [3, -4]])

# Then you apply T as a left multiplication by
T = sympy.Matrix(
    [[13, -9],
     [-1, -2],
     [-11, -6]])

# And finally, to get the representation in the basis C, you multiply the result
# by the inverse of C
C = sympy.Matrix(
    [[-1, -4, 1],
     [2, 1, -1],
     [2, 3, -1]])

combined = C.inv() * T * B
The combined transformation matrix yields:
[[-57, 77],
 [-16, 23],
 [-122, 166]]
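As a quick sanity check (a minimal sketch reusing the matrices above): each column of combined holds the C-coordinates of T applied to a basis vector of B, so multiplying by C on the left should reproduce T * B.

import sympy

B = sympy.Matrix([[2, -3], [3, -4]])
T = sympy.Matrix([[13, -9], [-1, -2], [-11, -6]])
C = sympy.Matrix([[-1, -4, 1], [2, 1, -1], [2, 3, -1]])

combined = C.inv() * T * B
assert C * combined == T * B  # change of basis is consistent
print(combined)               # Matrix([[-57, 77], [-16, 23], [-122, 166]])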
The accepted answer to this question provides an implementation of an algorithm that given two numbers k and n can generate all combinations (excluding permutations) of k positive integers which sum to n.
I'm looking for a very similar algorithm which essentially calculates the same thing, except that the requirement that the integers be strictly positive (> 0) is dropped, i.e. for k = 3, n = 4 the output should be
[0, 0, 4], [0, 1, 3], ... (in any order).
I have tried modifying the code snippet I linked but I have so far not had any success whatsoever. How can I efficiently implement this? (pseudo-code would be sufficient)
def partitions(Sum, K, lst, Minn = 0):
    '''Enumerates integer partitions of Sum'''
    if K == 0:
        if Sum == 0:
            print(lst)
        return
    for i in range(Minn, Sum + 1):
        partitions(Sum - i, K - 1, lst + [i], i)

partitions(6, 3, [])
[0, 0, 6]
[0, 1, 5]
[0, 2, 4]
[0, 3, 3]
[1, 1, 4]
[1, 2, 3]
[2, 2, 2]
This code is quite close to the idea in the linked answer; the only changes are that the lower limit is now 0 and, correspondingly, the stop value n - size + 1 has to be adjusted (here it is simply Sum + 1).
You could use the code provided in the other thread as is.
Then you want to get all of the sets for set sizes 1 to k, and if the current set size i is less than k, pad with 0's, i.e.
fun nonZeroSums (k, n)
    for i in 1 to k
        [pad with k - i 0's] concat sum_to_n(i, n)
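Here is a minimal Python sketch of that padding idea. sum_to_n below is only a stand-in for the generator from the linked answer (it yields the non-decreasing lists of positive integers summing to n), and sums_allowing_zeros plays the role of nonZeroSums in the pseudocode above:

def sum_to_n(parts, n, lowest=1):
    '''Yield non-decreasing lists of `parts` positive integers that sum to n.'''
    if parts == 1:
        yield [n]
        return
    for first in range(lowest, n // parts + 1):
        for rest in sum_to_n(parts - 1, n - first, first):
            yield [first] + rest

def sums_allowing_zeros(k, n):
    '''Yield k-element lists (zeros allowed) that sum to n.'''
    for i in range(1, k + 1):          # use i positive parts ...
        for combo in sum_to_n(i, n):   # ... padded with k - i zeros
            yield [0] * (k - i) + combo

print(list(sums_allowing_zeros(3, 4)))
# [[0, 0, 4], [0, 1, 3], [0, 2, 2], [1, 1, 2]]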
Any suggestions for a fast way to compute A * diag(e) * A^T * f for a dense matrix A and vectors e, f?
This is what I have now.
v[:] = 0
for i in range(N):
    for j in range(N):
        v[i] = v[i] + A[i,j]*e[j]*np.dot(A[:,j],f)
Thanks,
The comment by @rubenvb, suggesting A.dot(np.diag(e)).dot(A.transpose()).dot(f), should already make it really fast. But we don't really need to build the 2D diag(e) array there, and can thus skip one matrix multiplication. Additionally, we can swap the places of A.T and f and thus avoid the transpose too. A simplified and much more efficient solution thus evolves, like so -
A.dot(e*f.dot(A))
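A quick correctness check of this one-liner against the direct expression, as a minimal sketch on small random inputs:

import numpy as np

N = 5
A = np.random.rand(N, N)
e = np.random.rand(N)
f = np.random.rand(N)

direct = A.dot(np.diag(e)).dot(A.T).dot(f)  # A * diag(e) * A^T * f
fast = A.dot(e * f.dot(A))                  # same result, no N x N diag and no transpose

assert np.allclose(direct, fast)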
Here's a quick runtime test on decent-sized arrays for all the proposed approaches -
In [226]: # Setup inputs
     ...: N = 200
     ...: A = np.random.rand(N,N)
     ...: e = np.random.rand(N,)
     ...: f = np.random.rand(N,)
     ...:
In [227]: %timeit np.einsum('ij,j,kj,k', A, e, A, f) # @Warren Weckesser's soln
10 loops, best of 3: 77.6 ms per loop
In [228]: %timeit A.dot(np.diag(e)).dot(A.transpose()).dot(f) # @rubenvb's soln
10 loops, best of 3: 18.6 ms per loop
In [229]: %timeit A.dot(e*f.dot(A)) # Proposed here
10000 loops, best of 3: 100 µs per loop
The suggestion made by @rubenvb is probably the simplest way to do it. Another way is to use einsum.
Here's an example. I'll use the following a, e and f:
In [95]: a
Out[95]:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
In [96]: e
Out[96]: array([-1, 2, 3])
In [97]: f
Out[97]: array([5, 4, 1])
This is the direct translation of your formula into numpy code. It is basically the same as @rubenvb's suggestion:
In [98]: a.dot(np.diag(e)).dot(a.T).dot(f)
Out[98]: array([ 556, 1132, 1708])
Here's the einsum version:
In [99]: np.einsum('ij,j,jk,k', a, e, a.T, f)
Out[99]: array([ 556, 1132, 1708])
You can eliminate the need to transpose a by swapping the index labels associated with that argument:
In [100]: np.einsum('ij,j,kj,k', a, e, a, f)
Out[100]: array([ 556, 1132, 1708])
I'm trying to evaluate the partial derivative of the most general 3D rotation matrix, like this:
import sympy
from sympy import cos, sin, diff

phi, psi, theta = sympy.symbols("phi, psi, theta")
RMatrixPhi = sympy.Matrix([[cos(phi), sin(phi), 0],
                           [-sin(phi), cos(phi), 0],
                           [0, 0, 1]])
RMatrixPsi = sympy.Matrix([[cos(psi), 0, sin(psi)],
                           [0, 1, 0],
                           [-sin(psi), 0, cos(psi)]])
RMatrixTheta = sympy.Matrix([[1, 0, 0],
                             [0, cos(theta), sin(theta)],
                             [0, -sin(theta), cos(theta)]])
RMatrix = RMatrixPhi * RMatrixPsi * RMatrixTheta
D = diff(RMatrix, phi)
However, D is then a sympy.Derivative object, and I cannot get it evaluated; it's just printed out as Derivative(Matrix(...)).
The only way I could get it working is by writing
sympy.Matrix([sympy.diff(r, phi) for r in RMatrix]).reshape(3,3)
but that looks ugly. What's the right way to compute such derivatives?
The Matrix class has a method called diff which, according to the documentation ...
Docstring:
Calculate the derivative of each element in the matrix.
So use
RMatrix.diff(phi)
to perform element-wise differentiation.
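For instance, on the single-axis factor RMatrixPhi from the question (a minimal sketch, kept small so the output stays readable):

import sympy
from sympy import cos, sin

phi = sympy.symbols("phi")
RMatrixPhi = sympy.Matrix([[cos(phi), sin(phi), 0],
                           [-sin(phi), cos(phi), 0],
                           [0, 0, 1]])

print(RMatrixPhi.diff(phi))
# Matrix([[-sin(phi), cos(phi), 0], [-cos(phi), -sin(phi), 0], [0, 0, 0]])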
Let x denote a vector of p values (i.e. a data point in p-dimensional space).
I have two sets: a set A of n elements, A = {x1, ..., xn}, and a set B of m elements, B = {x1, ..., xm}, where |A| > 1 and |B| > 1. Given an integer k > 0, let dist(x, k, A) be a function which returns the mean Euclidean distance from x to its k nearest points in A, and dist(x, k, B) the mean Euclidean distance from x to its k nearest points in B.
I have the following algorithm:
Repeat
{
    A' = { x in A, such that dist(x, k, A) > dist(x, k, B) }
    B' = { x in B, such that dist(x, k, A) < dist(x, k, B) }
    A = { x in A such that x not in A' } U B'
    B = { x in B such that x not in B' } U A'
}
Until CONDITION == True
Termination: CONDITION is True when no more elements move from A to B or from B to A (that is, A' and B' become empty), or when |A| or |B| becomes less than or equal to 1.
1) Is it possible to prove that this algorithm terminates?
2) And if so, is it also possible to have an upper bound on the number of iterations required to terminate?
Note: the k nearest points to x in a set S means the k points in S (other than x) that have the smallest Euclidean distance to x.
It looks like this algorithm can loop forever, oscillating between two or more states. I determined this experimentally using the following Python program:
def mean(seq):
    if len(seq) == 0:
        raise IndexError("didn't expect empty sequence for mean")
    return sum(seq) / float(len(seq))

def dist(a,b):
    return abs(a-b)

def mean_dist(x, k, a):
    neighbors = {p for p in a if p != x}
    neighbors = sorted(neighbors, key=lambda p: dist(p,x))
    return mean([dist(x, p) for p in neighbors[:k]])

def frob(a,b,k, verbose = False):
    def show(msg):
        if verbose:
            print msg
    seen_pairs = set()
    iterations = 0
    while True:
        iterations += 1
        show("Iteration #{}".format(iterations))
        a_star = {x for x in a if mean_dist(x, k, a) > mean_dist(x,k,b)}
        b_star = {x for x in b if mean_dist(x, k, a) < mean_dist(x,k,b)}
        a_temp = {x for x in a if x not in a_star} | b_star
        b_temp = {x for x in b if x not in b_star} | a_star
        show("\tA`: {}".format(list(a_star)))
        show("\tB`: {}".format(list(b_star)))
        show("\tA becomes {}".format(list(a_temp)))
        show("\tB becomes {}".format(list(b_temp)))
        if a_temp == a and b_temp == b:
            return a, b
        key = (tuple(sorted(a_temp)), tuple(sorted(b_temp)))
        if key in seen_pairs:
            raise Exception("Infinite loop for values {} and {}".format(list(a_temp),list(b_temp)))
        seen_pairs.add(key)
        a = a_temp
        b = b_temp

import random

#creates a set of random integers, with the given number of elements.
def randSet(size):
    a = set()
    while len(a) < size:
        a.add(random.randint(0, 10))
    return a

size = 2
k = 1
#p equals one because I don't feel like doing vector math today
while True:
    a = randSet(size)
    b = randSet(size)
    try:
        frob(a,b, k)
    except IndexError as e:
        continue
    except Exception as e:
        print "infinite loop detected for initial inputs {} and {}".format(list(a), list(b))
        #run the algorithm again, but showing our work this time
        try:
            frob(a,b,k, True)
        except:
            pass
        break
Result:
infinite loop detected for initial inputs [10, 4] and [1, 5]
Iteration #1
    A`: [10, 4]
    B`: [1, 5]
    A becomes [1, 5]
    B becomes [10, 4]
Iteration #2
    A`: [1, 5]
    B`: [10, 4]
    A becomes [10, 4]
    B becomes [1, 5]
Iteration #3
    A`: [10, 4]
    B`: [1, 5]
    A becomes [1, 5]
    B becomes [10, 4]
In this case, the loop never terminates because A and B continually switch entirely. While experimenting with larger set sizes, I found a case where only some elements switch:
infinite loop detected for initial inputs [8, 1, 0] and [9, 4, 5]
Iteration #1
    A`: [8]
    B`: [9]
    A becomes [0, 1, 9]
    B becomes [8, 4, 5]
Iteration #2
    A`: [9]
    B`: [8]
    A becomes [0, 1, 8]
    B becomes [9, 4, 5]
Iteration #3
    A`: [8]
    B`: [9]
    A becomes [0, 1, 9]
    B becomes [8, 4, 5]
Here, elements 8 and 9 move back and forth while the other elements stay in place.