Multiply matrix by vector rowwise (sweep) - matrix

Does Stan provide a method for multiplying each row of a matrix by a vector, elementwise? I.e., if I had a matrix:
[1,2,3,
4,5,6]
and a vector:
[2,4,6]
the desired result would be a second matrix:
[2,8,18,
8,20,36]
I'm sure I can do this as a for loop, but it seems like something I should be able to do without it.

Stan has an elementwise multiplication operator: .*. It applies only to objects of the same type (e.g., two vectors, or two matrices). But we can use the rep_matrix() broadcast function to turn the vector into a matrix:
my_matrix .* rep_matrix(my_vector', rows(my_matrix))
If the vector is already a row vector in Stan, then the transposition is unnecessary:
my_matrix .* rep_matrix(my_row_vector, rows(my_matrix))
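As a quick numerical check of what the broadcasted product should look like, here is a minimal NumPy sketch (purely illustrative; the rep_matrix() call above is the Stan-native way to do it):
import numpy as np
m = np.array([[1, 2, 3],
              [4, 5, 6]])
v = np.array([2, 4, 6])
# NumPy broadcasting multiplies every row of m elementwise by v,
# which is what rep_matrix(my_vector', rows(my_matrix)) spells out in Stan.
print(m * v)
# [[ 2  8 18]
#  [ 8 20 36]]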

Related

Convert a vector of positive/negative elements to all positive elements in Julia?

Given a one dimensional vector in Julia with positive and negative entries, like A=[1;-3;5;-7], is there any function or command that can alter this vector so that its elements all become positive, so that it becomes A=[1;3;5;7]?
Vectorize over abs:
julia> abs.(A)
4-element Vector{Int64}:
1
3
5
7
Any function in Julia can be applied elementwise over an array by appending a dot . to its name.

Constructing a vector from a sequence in MacAulay2

I am in the following situation:
S=QQ[x_0..x_n];
for i from 0 to n do for j from i to n do d_{i,j} = x_i*x_j;
Now I would like to construct a vector whose elements are
d_{0,0}=x_0^2,d_{0,1}=x_0*x_1,...,d_{0,n}=x_0*x_n,d_{1,1}=x_1^2,d_{1,2}=x_1*x_2,...,d_{n,n}=x_n^2
How can I do this in MacAulay2? Thank you very much.
This may be what you are looking for.
m=ideal(S_*)
m^2_*
The _* operator gets the generators of an ideal. So, m is the maximal ideal, and you are looking for the generators of m^2.
Alternatively
flatten entries basis(2,S)
which simply gives you the vector-space basis of the ring S in degree 2.
In Macaulay2, vector refers to a column vector, so if we have the elements, we can construct a vector as follows:
SQ= for i from 0 to n list d_{i}
vector(SQ)
But since the vector you want is not a column vector, it's best to make a matrix:
d=mutableMatrix genericMatrix(S,n,n)
for i from 0 to n do for j from 0 to n do d_(i,j)=x_i*x_j
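Outside Macaulay2, the same list of products can be written down directly; here is a small Python/SymPy sketch just to make explicit which monomials are being collected (n is fixed to 3 for illustration):
import sympy as sp
n = 3
xs = sp.symbols(f"x0:{n + 1}")  # x0, x1, ..., xn
# all products x_i * x_j with i <= j, in the same order as the d_{i,j} above
d = [xs[i] * xs[j] for i in range(n + 1) for j in range(i, n + 1)]
print(d)  # [x0**2, x0*x1, x0*x2, x0*x3, x1**2, x1*x2, x1*x3, x2**2, x2*x3, x3**2]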

Is there a pushable/poppable hash function for stack-like objects?

I know of rolling hash functions that are similar to a hash on a bounded queue. Is there anything similar for stacks?
My use case is that I am doing a depth first search of possible program traces (with loop unrolling, so these stacks can get biiiiig) and I need to identify branching via these traces. Rather than store a bunch of stacks of depth 1000 I want to hash them so that I can index by int. However, if I have stacks of depth 10000+ this hash is going to be expensive, so I want to keep track of my last hash so that when I push/pop from my stack I can hash/unhash the new/old item respectively.
In particular, I am looking for a hash h(Object, Hash) with an unhash u(Object, Hash) with the property that for object x to be hashed we have:
u(x, h(x, baseHash)) = baseHash
Additionally, this hash shouldn't be commutative, since order matters.
One thought I had was matrix multiplication over GL(2, F(2^k)), maybe using a Cayley graph? For example, take two invertible matrices A_0, A_1, with inverses B_0 and B_1, in GL(2, F(2^k)), and compute the hash of an object x by first computing some integer hash with bits b31b30...b1b0, and then compute
H(x) = A_b31 . A_b30 . ... . A_b1 . A_b0
This has an inverse
U(x) = B_b0 . B_b1 . ... . B_b30 . B_b31.
Thus h(x, baseHash) = H(x) . baseHash and u(x, baseHash) = U(x) . baseHash, so that
u(x, h(x, base)) = U(x) . H(x) . base = base,
as desired.
This seems like it might be more expensive than is necessary, but for 2x2 matrices it shouldn't be too bad?
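For concreteness, here is a rough Python sketch of that construction, simplified to 2x2 matrices mod a prime p rather than GL(2, F(2^k)); the particular matrices A_0, A_1 and the prime are arbitrary illustrative choices:
p = (1 << 61) - 1
I2 = ((1, 0), (0, 1))
def mul(X, Y):  # 2x2 matrix product mod p
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))
def inv(X):     # 2x2 matrix inverse mod p
    d = pow((X[0][0] * X[1][1] - X[0][1] * X[1][0]) % p, -1, p)
    return (((X[1][1] * d) % p, (-X[0][1] * d) % p),
            ((-X[1][0] * d) % p, (X[0][0] * d) % p))
A = (((1, 1), (0, 1)), ((1, 0), (1, 1)))   # A_0, A_1 (both invertible)
B = (inv(A[0]), inv(A[1]))                 # B_0, B_1
def H(x):       # A_b31 . A_b30 . ... . A_b0 over the 32 bits of hash(x)
    bits, out = hash(x) & 0xFFFFFFFF, I2
    for i in range(31, -1, -1):
        out = mul(out, A[(bits >> i) & 1])
    return out
def U(x):       # B_b0 . B_b1 . ... . B_b31, the inverse of H(x)
    bits, out = hash(x) & 0xFFFFFFFF, I2
    for i in range(32):
        out = mul(out, B[(bits >> i) & 1])
    return out
def h(x, base):  # push: h(x, baseHash) = H(x) . baseHash
    return mul(H(x), base)
def u(x, base):  # pop: u(x, baseHash) = U(x) . baseHash
    return mul(U(x), base)
assert u("foo", h("foo", I2)) == I2   # u(x, h(x, base)) == base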
Most incremental hash functions can be made from two kinds of operations:
1) An invertible diffusion function that mixes up the previous hash. Invertible functions are chosen for this so that they don't lose information. Otherwise the hash would tend towards a few values; and
2) An invertible mixing function to mix new data into the hash. Invertible functions are used for this so that every part of the input has equivalent influence over the final hash value.
Since both these things are invertible, it's very easy to undo the last part of an incremental hash and "pop" off the previous value.
For instance, the most common kind of simple hash functions in use are polynomial hash functions. To update a previous hash value with a new input 'x', you calculate:
h' = h*A + x mod M
The multiplication is the diffusion function. In order for this to be invertible, A must have a multiplicative inverse mod M -- commonly either M is chosen to be prime, or M is a power of 2 and A is odd.
Because the multiplicative inverse exists, it's easy to pop off the last value from the hash, as long as you still have access to it:
h = (h' - x)*(1/A) mod M
You can use the extended Euclidean algorithm to find the inverse of A: https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
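For example, here is a minimal Python sketch of a pushable/poppable polynomial hash; the constants A and M are arbitrary illustrative choices (M is prime so that A has a multiplicative inverse mod M, which Python 3.8+'s pow(A, -1, M) computes directly):
M = (1 << 61) - 1          # a Mersenne prime
A = 1_000_003
A_INV = pow(A, -1, M)      # multiplicative inverse of A mod M
def push(h, x):
    # hash after pushing integer item x onto the stack: h' = h*A + x mod M
    return (h * A + x) % M
def pop(h, x):
    # hash after popping item x: h = (h' - x)*(1/A) mod M
    return ((h - x) * A_INV) % M
h = push(push(0, 17), 42)
assert pop(push(h, 99), 99) == h   # push then pop restores the previous hash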
Most other common non-cryptographic hashes, like CRCs, FNV, murmurHash, etc. are similarly easy to pop values off.
Some of these hashes have a final diffusion step after the incremental work, but that step is pretty much always invertible as well, to ensure that the hash can take on any value, so you can undo it to get back to the incremental part.
Diffusion operations are often made from sequences of primitive invertible operations. To undo them you would undo each operation in reverse order. Some of the common types you'll see are:
cyclic shifts
invertible multiplication (as above)
x = x XOR (x >> shift)
Feistel rounds (see https://simple.wikipedia.org/wiki/Feistel_cipher)
mixing operations are usually + or XOR.
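As a small illustration of undoing one of these primitives, here is a Python sketch that inverts the common x = x XOR (x >> shift) step on 64-bit values (the specific constants are arbitrary):
MASK = (1 << 64) - 1
def diffuse(x, s):
    return (x ^ (x >> s)) & MASK
def undiffuse(y, s):
    # re-applying the shifted XOR recovers s more high bits per pass
    x = y
    for _ in range(64 // s + 1):
        x = y ^ (x >> s)
    return x & MASK
x = 0xDEADBEEFCAFEF00D
assert undiffuse(diffuse(x, 17), 17) == x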

Multiplying row and column vectors using the .* operation

a =
1
2
3
b =
1 2 3
a.*b
ans =
1 2 3
2 4 6
3 6 9
I used the .* operator to multiply a row vector and a column vector in Octave to see the results. I don't understand how the answer is obtained.
This is because Octave (in a notable difference from Matlab) automatically broadcasts.
The * operator in Octave is the matrix multiplication operator. So in your case a*b would output (in Matlab as well)
a*b
ans =
1 2 3
2 4 6
3 6 9
which should be expected. The product of a 3-by-1 matrix with a 1-by-3 matrix would have dimensions 3-by-3 (inner dimensions must match, the result takes the outer dimensions).
However, the .* operator is the element-wise multiplication operator. That means that instead of matrix multiplication, it multiplies each corresponding element of the two inputs independently of the rest of the matrix. So [1,2,3].*[1,2,3] (or a'.*b) results in [1,4,9]. Again, this holds in both Matlab and Octave.
When using element-wise operations, the dimensions of the inputs normally have to match exactly. So [1,2,3].*[1,2] will throw an error because the dimensions do not match, and in Matlab your a.*b will throw an error as well. HOWEVER, in Octave it won't; instead it will automatically broadcast. You can imagine this as if Octave takes one of your inputs and replicates it along a singleton dimension (in a column vector, the second dimension is a singleton dimension because its size is 1) and then applies the operator element-wise. In your case you have two inputs with singleton dimensions (i.e. a column vector and a row vector), so it actually broadcasts twice and you effectively get (note that it does not actually expand the matrices in memory, and this is often far faster than using repmat)
[1,2,3;1,2,3;1,2,3].*[1,1,1;2,2,2;3,3,3]
which produces the result you see.
In Matlab, to achieve the same result you would have to explicitly call the bsxfun function (binary singleton expansion function) like so:
bsxfun(@times, a, b)
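NumPy follows the same broadcasting rule as Octave, so the behaviour is easy to reproduce there as an illustration (this is Python, not Octave/Matlab code):
import numpy as np
a = np.array([[1], [2], [3]])   # 3-by-1 column vector
b = np.array([[1, 2, 3]])       # 1-by-3 row vector
# each singleton dimension is (virtually) expanded to 3 before the
# element-wise multiply, giving the same 3-by-3 result as Octave's a.*b
print(a * b)
# [[1 2 3]
#  [2 4 6]
#  [3 6 9]]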

Construct a full rank matrix by adding vectors from the standard basis

I have an n x n singular matrix. I want to add k rows (which must be from the standard basis e1, e2, ..., en) to this matrix such that the new (n+k) x n matrix has full column rank. The number of added rows k must be minimal, and they can be added in any order (not just e1, e2, ...; it could be e4, e10, e1, ...).
Does anybody know a simple way to do this? Any help is appreciated.
You can achieve this by doing a QR decomposition with column pivoting, then taking the transpose of the last n-rank(A) columns of the permutation matrix.
In Matlab, this is achieved by the qr function (see the Matlab documentation):
r=rank(A);
[Q,R,E]=qr(A);
newA=[A;transpose(E(:,r+1:end))];
Each row of transpose(E(:,r+1:end)) will be a member of the standard basis, the rank of newA will be n, and n-r is also the minimal number of standard basis vectors you need to add.
Here is how this works:
QR decomposition with column pivoting is a standard procedure to decompose a matrix A into products:
A*E==Q*R
where Q is an orthogonal matrix if A is real, or a unitary matrix if A is complex; R is an upper triangular matrix, and E is a permutation matrix.
In short, the permutations are chosen so that each diagonal element dominates the off-diagonal elements in its row, and so that the magnitudes of the diagonal elements are non-increasing. A more detailed description can be found on the netlib QR factorization page.
Since Q and E are both orthogonal (or unitary) matrices, the rank of R is the same as the rank of A. To bring up the rank of A, we just need to find ways to increase the rank of R; and this is much more straightforward thanks to the structure of R as the result of pivoting, and the fact that it is upper triangular.
Now, with the requirement placed on the pivoting procedure, if any diagonal element of R is 0, the entire row has to be 0. The n-rank(A) rows of 0s at the bottom of R are responsible for the nullity. If we replaced the lower-right corner with an identity matrix, that new matrix would be full rank. We cannot really do the replacement, but we can append those rows to the bottom of R and form a new matrix that has full column rank:
B = [ 0 I ]   =>   newR = [ R ; B ]
Here the dimension of I is the nullity of A (and of R).
It is readily seen that rank(newR)=n. Then we can also define a new unitary Q matrix by expanding its dimensionality in a trivial manner:
newQ=[Q 0 ; 0 I]
With that, our new rank n matrix can be obtained as
newA = newQ*newR*transpose(E) = [Q*R ; B]*transpose(E) = [A ; B*transpose(E)]
Note that B is [0 I] and E is a permutation matrix, so B*transpose(E) is simply the transpose of the last n-rank(A) columns of E, and thus a set of rows drawn from the standard basis, which is just what you wanted!
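The same recipe is easy to try outside Matlab; here is a rough Python/SciPy sketch (the function name, tolerance, and test matrix are my own; scipy.linalg.qr with pivoting=True returns a permutation vector P such that A[:, P] == Q @ R):
import numpy as np
from scipy.linalg import qr
def basis_rows_to_add(A, tol=1e-10):
    # indices i such that appending the rows e_i makes A full column rank
    Q, R, P = qr(A, pivoting=True)
    diag = np.abs(np.diag(R))
    r = int(np.sum(diag > tol * diag[0]))   # numerical rank from R's diagonal
    return P[r:]                            # last n-rank(A) entries of the permutation
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 0., 1.]])                # rank 2
idx = basis_rows_to_add(A)
newA = np.vstack([A, np.eye(A.shape[1])[idx]])
print(idx, np.linalg.matrix_rank(newA))     # one basis index is added; new rank is 3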
Is n very large? The simplest solution without using any math would be to try adding each e_i and seeing if the rank increases. If it does, keep e_i; proceed until finished.
I like Xiaolei Zhu's solution because it's elegant, but another way to go that's even more computationally efficient is:
Determine if any rows, indexed by i, of your matrix A are all zero. If so, then the corresponding e_i must be concatenated.
After that process, you can simply concatenate any subset of the n - rank(A) columns of the identity matrix that you didn't add in step 1.
Rows/columns from the identity matrix can be added in any order; they do not need to be added in the usual order e1, e2, ... to make the matrix full rank in general.

Resources