$$ \partial_{t}v+\frac{(v \cdot \nabla)v}{A}=-\frac{\nabla p_{1}}{\rho_{0} A}-\frac{e(v \times B)}{A}-e(v \cdot \Omega)(B \cdot \nabla)v+\left(\frac{\nabla p_{1}}{\rho_{0}} \times \Omega\right) \cdot \nabla v-\frac{eE}{A}+e^{2}(E \cdot B)\Omega-(E \times \Omega) \cdot \nabla v $$
I have to find $p_{1}$ from the equation above, where $v=-\nabla\phi$. How do I do this in Mathematica? (Here a dot denotes differentiation with respect to $t$.)
I tried writing this out in Mathematica to solve for $p_{1}$, but it shows errors.
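A minimal sketch of one way to set this up (my addition, not the poster's notebook): work in 3D Cartesian coordinates, substitute $v=-\nabla\phi$, treat the components of $\nabla p_{1}$ as unknowns, and let Solve handle the resulting linear system. The vector placeholders B0, E0, W (standing in for $B$, $E$, $\Omega$) and the scalars A, e, ρ0 are assumptions for illustration.

vars = {x, y, z};
v = -Grad[ϕ[t, x, y, z], vars];           (* v = -∇ϕ *)
adv[f_] := v.Grad[f, vars];               (* (v·∇)f for a scalar f *)
B0 = {b1, b2, b3}; E0 = {e1, e2, e3}; W = {w1, w2, w3};
gp1 = {g1, g2, g3};                       (* stands in for ∇p1 *)
lhs = D[v, t] + (adv /@ v)/A;
rhs = -gp1/(ρ0 A) - e Cross[v, B0]/A -
   e (v.W) Map[B0.Grad[#, vars] &, v] +
   Map[Cross[gp1/ρ0, W].Grad[#, vars] &, v] -
   e E0/A + e^2 (E0.B0) W - Map[Cross[E0, W].Grad[#, vars] &, v];
Solve[Thread[lhs == rhs], gp1]

This recovers $\nabla p_{1}$ componentwise; $p_{1}$ itself would then come from integrating the result, which only works if the recovered field is curl-free.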
How do I compute a function that is a function of a function, written in general form, in Mathematica? Say I want to compute the total derivative of the following equation w.r.t. alpha (or a in my case): $v(a) = u(x(a), a)$
I have tried
v (a_) := u (x (a), a)
D[v (a), a]
But this gives the wrong output (out = v).
Desired output (from the screenshot, ignoring the blue arrow): $\frac{dv}{da} = \frac{\partial u}{\partial x}\frac{dx}{da} + \frac{\partial u}{\partial a}$
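The likely fix (my sketch, not an answer from the original thread): Mathematica uses square brackets for function application, so v (a_) is parsed as v multiplied by the pattern (a_), and D never sees a differentiable function, which is why the output collapses to v. With brackets the chain rule comes out as expected:

v[a_] := u[x[a], a]
D[v[a], a]
(* Derivative[0, 1][u][x[a], a] + Derivative[1, 0][u][x[a], a] x'[a] *)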
I want to get rid of the for-loop iteration by using PyTorch functions in my code, but the formula is complicated and I can't find a clue. Can the for-loop iteration below be replaced with a Torch operation?
import torch

B = 10
L = 20
H = 5
mat_A = torch.randn(B, L, L, H)
mat_B = torch.randn(L, B, B, H)
tmp_B = torch.zeros_like(mat_B)
for x in range(L):
    for y in range(B):
        for z in range(B):
            tmp_B[:, y, z, :] += mat_B[x, y, z, :] * mat_A[z, x, :, :]
This looks like a good setup for applying torch.einsum. However, we first need to make the : placeholders explicit by defining each individual accumulation term.
To do so, consider the shapes of your intermediate tensor results. The first, mat_B[x,y,z], is shaped (H,), while the second, mat_A[z,x], is shaped (L, H).
In pseudo-code, your initial operation is as follows:
for x, y, z in LxBxB:
    tmp_B[:, y, z, :] += mat_B[x, y, z, :] * mat_A[z, x, :, :]
Knowing this, we can reformulate your initial loop in pseudo-code as:
for x, y, z, l, h in LxBxBxLxH:
    tmp_B[l, y, z, h] += mat_B[x, y, z, h] * mat_A[z, x, l, h]
Therefore, we can apply torch.einsum by using the same notation as above:
>>> torch.einsum('xyzh,zxlh->lyzh', mat_B, mat_A)
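As a quick sanity check (my addition, not part of the original answer), the einsum result can be compared against the loop output, up to floating-point tolerance:

out = torch.einsum('xyzh,zxlh->lyzh', mat_B, mat_A)
print(torch.allclose(out, tmp_B, atol=1e-5))  # expected: True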
I'm puzzled by what I think is a mistake in a partial derivative I'm having Mathematica do for me.
Specifically, this is what I have:
I'm trying to take the partial derivative of the following w.r.t. the variable θ (apologies for the formatting):
f=(1/4)(-4e((1+θ)/2)ψ+eN((1+θ)/2)ψ+eN((1+θ)/2-θd)ψ)-s
But the solution Mathematica produces seems very different from the one I get when I take the derivative myself. While Mathematica says the partial derivative of f w.r.t. θ is:
(1/4)eψ(N-2)
By hand, I get and am quite confident the correct answer is instead:
(1/4)eψ(N(1-d)-2)
That is, Mathematica is producing something that drops the variable d when differentiating. I've explored different derivative-taking functions in Mathematica, and the possibility that some of the variables I'm using (such as d) might be protected or otherwise special, but I can't say I know why the answer is so far off. This is the first time in the notebook that d appears, so it is not set to 0. For context, I'm trying to confirm that the derivative of the function is positive for values of the variables in certain ranges, where 0 < d < 1/2. Doing this all by hand works, but I'm trying to confirm with Mathematica, as I will be dealing with more complicated functions and need to make sure Mathematica is producing the right derivatives.
You didn't add spaces in eN and θd, so Mathematica parses them as two other two-character variables.
Adding spaces between them gives your expected result:
f[θ,e,N,ψ,d,s] = (1/4) (-4 e ((1+θ)/2) ψ + e N ((1+θ)/2) ψ + e N ((1+θ)/2 - θ d) ψ) - s;
D[f[θ, e, N, ψ, d, s], θ] // FullSimplify
(* 1/4 e (-2 + N - d N) ψ *)
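One way to spot such stray symbols (my addition, not from the original answer) is to ask Mathematica which variables it actually parsed out of the spaceless input:

Variables[(1/4) (-4 e ((1 + θ)/2) ψ + eN ((1 + θ)/2) ψ + eN ((1 + θ)/2 - θd) ψ) - s]
(* {e, eN, s, θ, θd, ψ} *)

Here eN and θd show up as single symbols, which confirms the mis-parse.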
I want to recreate a solve function (solving Ax = b for x) for sparse matrices.
In the Julia documentation, it says that when we apply lufact() to a sparse matrix, it returns the following:
L, U, p, q, Rs = F[:(:)]
With the formula given in the Julia docs, LU = Rs.*A[p,q], I did some algebra and obtained the following formula:
x = U \ ( L \ (Rs.*b[p]) )
ipermute!(x,q)
This formula matched with the default F\b solver in Julia when the matrix is dense but the result is off when the matrix is sparse. Does anyone know why?
Use using LinearAlgebra, then B = lu(A); B\b. Julia returns a factorization object, and its dispatch on \ handles the rest.
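A minimal sketch of that approach (my example, assuming Julia ≥ 1.0 with SparseArrays): the factorization object applies the row scaling Rs and the permutations p, q internally, which is easy to get wrong by hand. In particular, since the documented identity is L*U == (Rs .* A)[p, q], the scaling has to be applied before the row permutation, so the manual formula likely needs (Rs .* b)[p] rather than Rs .* b[p].

using LinearAlgebra, SparseArrays

A = sprand(100, 100, 0.05) + 10I   # well-conditioned sparse test matrix
b = rand(100)

F = lu(A)   # sparse LU (UMFPACK); replaces the old lufact()
x = F \ b   # scaling and permutations are handled internally

@assert A * x ≈ b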