How to apply an index-dependent function to a numpy ndarray? - numpy-ndarray

So numpy ndarrays are quite handy in that you can just write f(A) for any one-dimensional function f and any ndarray A, and f is applied element-wise. I was also told that this is a very efficient way of applying a function to an ndarray while avoiding for loops. Avoid for loops, is what I have been told.
It turns out that I now need to apply a function f that depends not only on the value of each element but also on its index tuple, in order to return the correct value for each element. Is there a way to avoid for loops or explicit recursion and keep working with direct function application on ndarrays under these circumstances, or am I out of options?
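For concreteness, a minimal made-up illustration of the two cases:
import numpy as np
A = np.arange(6).reshape(2, 3)
f = lambda v: v ** 2   # element-wise: f needs only the value, so f(A) just works
print(f(A))
# but a function such as g(value, i, j) = value + i - j also needs each element's indices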

Use numpy.meshgrid to generate coordinate matrices corresponding to the index tuple of each element in the array, then let your function also depend on these coordinates.
For example, if a is a three-dimensional array, then
x, y, z = np.meshgrid(np.arange(a.shape[0]), np.arange(a.shape[1]), np.arange(a.shape[2]), indexing='ij')
gives three arrays x, y, z which contain the x, y and z coordinates at each location. The function on the array a is then extended by also passing it the index arrays:
f(a, x, y, z)
Be careful with the order of the indices/directions; check the indexing option of numpy.meshgrid.
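As a concrete sketch, with a made-up index-dependent function f(a, x, y, z) = a * (x + y + z) (the array and function below are only illustrative):
import numpy as np
a = np.arange(24).reshape(2, 3, 4)
x, y, z = np.meshgrid(np.arange(a.shape[0]), np.arange(a.shape[1]), np.arange(a.shape[2]), indexing='ij')
def f(a, x, y, z):
    return a * (x + y + z)   # each element combined with the sum of its own indices
result = f(a, x, y, z)       # still fully vectorized, no Python loops
print(result[1, 2, 3])       # a[1, 2, 3] * (1 + 2 + 3) = 23 * 6 = 138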

Related

How can I remove "for loop iteration" by using torch tensor operator?

I want to get rid of the "for loop iteration" by using PyTorch functions in my code, but the formula is complicated and I can't find a clue. Can the for-loop iteration below be replaced with a Torch operation?
B=10
L=20
H=5
mat_A=torch.randn(B,L,L,H)
mat_B=torch.randn(L,B,B,H)
tmp_B=torch.zeros_like(mat_B)
for x in range(L):
    for y in range(B):
        for z in range(B):
            tmp_B[:,y,z,:] += mat_B[x,y,z,:]*mat_A[z,x,:,:]
This looks like a good setup for applying torch.einsum. However, we first need to make the : placeholders explicit by defining each individual accumulation term.
To do so, consider the shapes of the intermediate results: the first, mat_B[x,y,z], is shaped (H,), while the second, mat_A[z,x], is shaped (L, H).
In pseudo-code your initial operation is as follows:
for x, y, z in LxBxB:
    tmp_B[:,y,z,:] += mat_B[x,y,z,:]*mat_A[z,x,:,:]
Knowing this, we can reformulate your initial loop in pseudo-code as:
for x, y, z, l, h in LxBxBxLxH:
    tmp_B[l,y,z,h] += mat_B[x,y,z,h]*mat_A[z,x,l,h]
Therefore, we can apply torch.einsum by using the same notation as above:
>>> torch.einsum('xyzh,zxlh->lyzh', mat_B, mat_A)
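As a quick sanity check, a minimal sketch comparing the einsum call against the original loop (same shapes as in the question):
import torch
B, L, H = 10, 20, 5
mat_A = torch.randn(B, L, L, H)
mat_B = torch.randn(L, B, B, H)
tmp_B = torch.zeros_like(mat_B)
for x in range(L):
    for y in range(B):
        for z in range(B):
            tmp_B[:, y, z, :] += mat_B[x, y, z, :] * mat_A[z, x, :, :]
out = torch.einsum('xyzh,zxlh->lyzh', mat_B, mat_A)
print(torch.allclose(tmp_B, out, atol=1e-4))  # expected: True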

representing large binary vector as problog fact/rule

In ProbLog, how do I represent the following as a probabilistic fact/rule:
a binary vector of size N in which P bits are 1, i.e. a bit is ON with probability P/N, where N > 1000?
I came up with this, but it seems iffy:
0.02::one(X) :- between(1,1000,X).
I want to use it later to calculate what happens if I apply two or more operations on such binary vectors, such as AND, OR, XOR, count, overlap, Hamming distance, but do it as modeling rather than simulation.
For example, if I OR 10 random vectors, what is the probable overlap count of this union vector and a new random vector?
Or what is the probability that they will overlap by X bits?
Questions like that.
PS> I suspect cplint is the same.
Another try, but I don't have an idea how to query for a 'single' result:
1/10::one(X,Y) :- vec(X), between(1,10,Y). %vec: N=10, P=?
vec(X) :- between(1,2,X). %num of vecs
%P=2 ??
two(A,B,C,D) :- one(1,A), one(2,B), A =\= B, one(1,C), one(2,D), C =\= D.
Based on #damianodamiono, so far:
P/N::vec(VID,P,N,_Bit).
prob_on([],[],_,_).
prob_on([H1|T1],[H2|T2],P,N):-
    vec(1,P,N,H1), vec(2,P,N,H2),
    prob_on(T1,T2,P,N).
query(prob_on([1],[1],2,10)).
query(prob_on([1,2,3,5],[1,6,9,2],2,10)).
I'm super happy to see that someone uses Probabilistic Logic Programming! Anyway, you usually do not need to create a list with 1000 elements and then attach 1000 probabilities. For example, if you want to state that each element of the list has probability P/N of being true (say 0.8), you can use (cplint and ProbLog have almost the same syntax, so you can run the programs on both):
0.8::on(_).
in the recursion.
For example:
8/10::on(_).
prob_on([]).
prob_on([H|T]):-
    on(H),
    prob_on(T).
and then ask (in cplint)
?- prob(prob_on([1,2,3]),Prob).
Prob = 0.512
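(This matches the closed form: each of the three list elements is on independently with probability 0.8, so Prob = 0.8^3 = 0.512.)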
In ProbLog, you need to add query(prob_on([1,2,3])) to the program instead. Note the use of the anonymous variable in the probabilistic fact on/1 (it is needed; the motivation is somewhat involved, so I omit it). If you want a probability that depends on the length of the list or on other variables, you can use flexible probabilities:
P/N::on(P,N).
and then call it in your predicate with
...
on(P,N),
...
where both P and N are ground when on/2 is called. In general, you can also add a body to the probabilistic fact (turning it into a probabilistic clause) and perform whatever operation you want.
With two lists:
8/10::on_1(_).
7/10::on_2(_).
prob_on([],[]).
prob_on([H1|T1],[H2|T2]):-
    on_1(H1),
    on_2(H2),
    prob_on(T1,T2).
?- prob(prob_on([1,2,3,5],[1,6,9,2]),Prob).
Prob = 0.09834496
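(Again, this is just the closed form: each of the four positions contributes a factor 0.8 * 0.7 = 0.56, so Prob = 0.56^4 = 0.09834496.)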
Hope this helps, let me know if something is still not clear.

How to overload arithmetic operators (+, *, -, /) for functions in c++?

I would like to implement a numerical integral whose integrand is evaluated at quadrature points, so something like integral(domain, f), where domain is the domain over which I want to integrate and f is the function to integrate. f is only a function of the Point p (the quadrature points) inside the domain and can have vector values (scalar being a particular case).
Since the function f can be, in general, a combination of different functions, I wonder how to overload arithmetic operators for functions.
I already found Implementing multiplication operator for mathematical functions C++, but it does not cover my question, because there the Function returns only x, while in my case I would like to have different Functions which can return a more complex function of x.
So, let f_1, ..., f_N be different functions which have the same return type, for example a std::array<double,M> of given length M, and which receive the same input Point p, i.e. for i = 1, ..., N:
std::array<double,M> f_i(Point p)
{
    std::array<double,M> x;
    // compute x somehow depending on i
    return x;
}
Then I would like to create f as a combination of the previous f_1, ..., f_N, e.g. f = f_1*f_2 + (f_3*f_4)*f_5 ... (here the operations are meant component-wise).
In this way I could evaluate f(p) inside integral(domain, f), obtaining for each quadrature point exactly:
f_1(p) *f_2(p)+(f_3(p)*f_4(p))*f_5(p)...
Edit:
I know I have to use functors rather than plain functions (which I used just to state the problem), but I am not able to figure out how to do this for my purpose.
Any hint?
Thank you
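No answer is included in this excerpt, so purely as an illustration of the general pattern being asked about (wrap callables in an object that overloads the arithmetic operators and defers evaluation until a point is supplied), here is a minimal Python sketch; the class name Fn and the component-wise semantics are assumptions, not part of the original question:
import numpy as np

class Fn:
    # Wraps a callable p -> vector so that +, - and * combine results component-wise.
    def __init__(self, func):
        self.func = func
    def __call__(self, p):
        return np.asarray(self.func(p))
    def __add__(self, other):
        return Fn(lambda p: self(p) + other(p))
    def __sub__(self, other):
        return Fn(lambda p: self(p) - other(p))
    def __mul__(self, other):
        return Fn(lambda p: self(p) * other(p))

f1 = Fn(lambda p: np.array([p[0], p[1]]))        # hypothetical component functions
f2 = Fn(lambda p: np.array([2.0, 3.0]))
f3 = Fn(lambda p: np.array([p[0] + p[1], 1.0]))
f = f1 * f2 + f3        # builds a new Fn; nothing is evaluated yet
print(f((1.0, 2.0)))    # [1*2 + 3, 2*3 + 1] = [5. 7.]
In C++ the analogous shape is a small class (templated, or holding a std::function) plus free operator+, operator- and operator* overloads that return a new object capturing both operands and evaluating them at the quadrature point.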

Mathematica transformation rules with trig functions

I'm an occasional Mathematica user and I am trying to transform an expression from spherical to Cartesian coordinates.
My function is defined as:
g[theta_, phi_] := Cos[phi](Sin[theta])^2 Sin[phi]
I'm hoping to transform that function using the following rules:
Sin[theta]Sin[phi] -> x
Cos[theta]-> y
Sin[theta]Cos[phi]-> z
in order to get the result:
zx
Here is the code I'm using to do that:
g[theta, phi] //. {Sin[theta]Sin[phi] -> x, Cos[theta] -> y, Sin[theta] Cos[phi] -> z}
And the result I get is:
Cos[phi] Sin[phi] Sin[theta]^2
So no transformation occurred.
Is there a function or an option I could add to help Mathematica figure out that the transformation is possible?
Thanks!
The rules do not fire because //. rewrites syntactically: in Cos[phi] Sin[phi] Sin[theta]^2 the factor Sin[theta] only occurs inside Sin[theta]^2, so the literal product Sin[theta] Sin[phi] never appears and none of the rules match. Perhaps this will be sufficient instead:
Assuming[Sin[theta]Sin[phi]==x&&Cos[theta]==y&&Sin[theta]Cos[phi]==z,
Simplify[Cos[phi]Sin[theta]^2 Sin[phi]]]
which instantly returns
x z
That doesn't show you the steps or rules it used to arrive at that result, but because it considered x z to be "simpler" than your trig expression the evaluation process went in that direction.
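For reference, the identity it effectively used is simply Cos[phi] Sin[theta]^2 Sin[phi] = (Sin[theta] Sin[phi]) (Sin[theta] Cos[phi]) = x z.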
There is a slightly more compact way of doing the same thing, if that matters. Simplify can accept a second argument: the assumptions that are taken to be true during the simplification. Thus
Simplify[Cos[phi]Sin[theta]^2 Sin[phi],
Sin[theta]Sin[phi]==x&&Cos[theta]==y&&Sin[theta]Cos[phi]==z]
will give you exactly the same result

Best way to do an iteration scheme

I hope this hasn't been asked before, if so I apologize.
EDIT: For clarity, the following notation will be used: boldface uppercase for matrices, boldface lowercase for vectors, and italics for scalars.
Suppose x0 is a vector, A and B are matrix functions, and f is a vector function.
I'm looking for the best way to do the following iteration scheme in Mathematica:
A0 = A(x0), B0=B(x0), f0 = f(x0)
x1 = Inverse(A0)(B0.x0 + f0)
A1 = A(x1), B1=B(x1), f1 = f(x1)
x2 = Inverse(A1)(B1.x1 + f1)
...
I know that a for-loop can do the trick, but I'm not very familiar with Mathematica, and I'm concerned that this may not be the most efficient way to do it. This is a justified concern, as I would like to define a function u(N) := xN and use it in further calculations.
I guess my questions are:
What's the most efficient way to program the scheme?
Is RecurrenceTable a way to go?
EDIT
It was a bit more complicated than I thought. I'm providing more details in order to obtain a more thorough response.
Before doing the recurrence, I'm having problems understanding how to program the functions A, B and f.
Matrices A and B are functions of the time step dt = 1/T and the space step dx = 1/M, where T and M are the numbers of points in the {0 < x < 1, 0 < t} region. This is also true for the vector function f.
The dependence of A, B and f on x is rather tricky:
A and B are upper and lower triangular matrices (like a tridiagonal matrix; I suppose we can call them multidiagonal), with defined constant values on their diagonals.
Given a point 0 < xs < 1, I need to determine its representative xn in the mesh (the closest one), and then substitute the nth row of A and B with the function v(x) (transposed, of course), and the nth row of f with the function w(x).
Summarizing, A = A(dt, dx, xs, x). The same is true for B and f.
Then I need to do the loop mentioned above, to define u(x) = step[T].
Hope I've explained myself.
I'm not sure if it's the best method, but I'd just use plain old memoization. You can represent an individual step as
xstep[x_] := Inverse[A[x]].(B[x].x + f[x])
and then
u[0] = x0
u[n_] := u[n] = xstep[u[n-1]]
If you know how many values you need in advance, and it's advantageous to precompute them all for some reason (e.g. you want to open a file, use its contents to calculate xN, and then free the memory), you could use NestList. Instead of the previous two lines, you'd do
xlist = NestList[xstep, x0, 10];
u[n_] := xlist[[n + 1]]  (* xlist[[1]] is x0, so u[0] still gives x0 *)
This will break if n > 10, of course (obviously, change 10 to suit your actual requirements).
Of course, it may be worth looking at your specific functions to see if you can make some algebraic simplifications.
I would probably write a function that accepts A0, B0, x0, and f0, and then returns A1, B1, x1, and f1 - say
step[A0_?MatrixQ, B0_?MatrixQ, x0_?VectorQ, f0_?VectorQ] := Module[...]
I would then Nest that function. It's hard to be more precise without more precise information.
Also, if your procedure is numerical, then you certainly don't want to compute Inverse[A0], as this is not a numerically stable operation. Rather, you should write
A0.x1 == B0.x0+f0
and then use a numerically stable solver to find x1. Of course, Mathematica's LinearSolve provides such an algorithm.
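If it helps to prototype the scheme outside Mathematica first, here is a minimal NumPy sketch of the same idea; A, B and f below are made-up placeholders, and the point is the use of a linear solver rather than an explicit inverse:
import numpy as np

def A(x): return np.diag(2.0 + x**2)       # hypothetical matrix function
def B(x): return 0.5 * np.eye(len(x))      # hypothetical matrix function
def f(x): return np.sin(x)                 # hypothetical vector function

def step(x):
    # solve A(x) . x_next == B(x) . x + f(x) instead of forming the inverse
    return np.linalg.solve(A(x), B(x) @ x + f(x))

x = np.array([1.0, 0.5, -0.3])   # x0
iterates = [x]
for _ in range(10):
    x = step(x)
    iterates.append(x)           # iterates[n] plays the role of u[n]
print(iterates[10])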
