Avoid Numpy Index For loop - performance

Is there any way to avoid using a second for loop for an operation like this?
for x in range(Size_1):
    for y in range(Size_2):
        k[x, y] = np.sqrt(x + y) - y
Or is there a better way to optimize this? Right now it is incredibly slow for large sizes.

Here's a vectorized solution with broadcasting:
X,Y = np.ogrid[:Size_1,:Size_2]
k_out = np.sqrt(X+Y) - Y
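For a quick sanity check, the broadcasted result can be compared against the loop version (a small sketch; the concrete Size_1/Size_2 values are just assumed test sizes):

import numpy as np

Size_1, Size_2 = 300, 400
X, Y = np.ogrid[:Size_1, :Size_2]   # X has shape (Size_1, 1), Y has shape (1, Size_2)
k_out = np.sqrt(X + Y) - Y          # broadcasts to shape (Size_1, Size_2)

k = np.empty((Size_1, Size_2))      # loop version for comparison
for x in range(Size_1):
    for y in range(Size_2):
        k[x, y] = np.sqrt(x + y) - y

assert np.allclose(k, k_out)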

Supplementing Divakar's solution: If Y and X are not new ranges but some preexisting vectors of numbers, use np.ix_:
Y, X = np.array([[1.3, 3.5, 2], [2.0, -1, 1]])  # unpack two example 1-D vectors
Y, X = np.ix_(Y, X)  # does the same as Y = Y[:, None]; X = X[None, :]
out = np.sqrt(Y+X) - X


Solve function not solving simultaneous equations

How do I solve a problem with 3 simultaneous equations? My code shown below is not giving the correct results.
I am attempting to find the maximum area (A) while the lengths x and y satisfy the constraint 2x + y = 960.
I have already looked at the documentation, and it seems that the format of my arguments is correct.
Solve[{2 x + y == 960, A == x*y, D[A] == 0}, {x, y}]
I am unsure about this, but it might be too complex for the Solve function, as it takes the derivative of one of the variables (D[A]).
However I am able to do this question by hand:
Rearrange 1st equation so that y = 960 - 2x
Substitute y into the 2nd equation so that A = x(960 - 2x) = 960x - 2x^2
Take the derivative: dA/dx = 960 - 4x, and solve 960 - 4x = 0
x = 240
Substitute x = 240 into y = 960 - 2x
y = 960 - 2(240) = 960 - 480 = 480
Therefore dimensions are 240 x 480.
I expect the output to be {240, 480}. Thanks :)
EDIT: Here is what I have typed into Mathematica:
Clear[x, y, A]
Solve[{2 x + y == 960, A == x*y, D[A, x] == 0}, {x, y}]
OUT: {{x -> 1/2 (480 - Sqrt[2] Sqrt[115200 - A]), y -> 480 + Sqrt[2] Sqrt[115200 - A]},
{x -> 1/2 (480 + Sqrt[2] Sqrt[115200 - A]), y -> 480 - Sqrt[2] Sqrt[115200 - A]}}
NMaximize[{x*y, 2 x + y == 960}, {x, y}]
OUT: {115200., {x -> 240., y -> 480.}}
Try this
NMaximize[{x*y, 2 x + y == 960}, {x, y}]
which maximizes the area subject to your constraint and instantly returns x -> 240, y -> 480.
The difficulty you were having was with D[A]: Mathematica needs to know which variable you are differentiating with respect to.
Perhaps something in this will help you understand what is happening with your derivative.
EDIT
Look at what Solve is going to be given:
Clear[x,y,A];
A == x*y;
D[A, x]
which gives 0. Why is that? You are taking the derivative of A with respect to x, but A has never been assigned any value; you have only declared that A and x*y are equal. Thus
Clear[x,y,A];
{2 x + y == 960, A == x*y, D[A, x] == 0}
is handing
{2*x + y == 960, A == x*y, True}
to Solve, which makes it less puzzling that Solve returns something with A still in it.
When some function in Mathematica isn't giving you the result that you expect or that makes sense then checking exactly what is being given to that function as arguments is always a good first step.
There are always several ways of doing anything in Mathematica, and some of those seem to make no sense at all.
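As a cross-check outside Mathematica, the same substitution argument can be reproduced with sympy in Python (a sketch; sympy is not part of the question, just a convenient solver):

import sympy as sp

x = sp.symbols('x', real=True)
A = x * (960 - 2*x)                 # substitute y = 960 - 2x into A = x*y
crit = sp.solve(sp.diff(A, x), x)   # dA/dx = 960 - 4x, so x = 240
print(crit[0], 960 - 2*crit[0], A.subs(x, crit[0]))  # 240 480 115200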

Order of unknowns in Prolog constraint logic programming (clpr)

I have:
:-use_module(library(clpr)).
comp(X, Y, Z) :-
    {X = Y * Z, Y = Z, Y > 0, Z > 0}.
Which with the query:
?-comp(X,3,Z).
Yields:
X = 9.0,
Z = 3.0
as expected. But why doesn't
comp(9,Y,Z).
also give me values for Y and Z? What I get is instead:
{Z>0.0,Y=Z,9-Y*Z=0.0},
{9-Y*Z=0.0},
{9-Y*Z=0.0}
Thanks!
Probably a weakness of the CLP(R) implementation you are using: the quadratic case doesn't work so well. After Y = Z, it is evident that X = Y**2, and then with X = 9 and Y > 0, you should easily get Y = 3. Which CLP(R) do you use?
A CLP(R) implementation need not support only linear equalities and inequalities. Using, for example, a Gröbner basis algorithm, a CLP(R) system could do more, even algebraically. Some computer algebra systems can do that easily.
So I guess it's not a problem of Prolog per se, rather of the library. Strictly speaking, CLP(X) only indicates a domain X. For the domain R of real numbers there is a wide variety of potential equation and inequation solvers.
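For instance, a general-purpose symbolic solver handles the quadratic case directly. A sketch in Python with sympy (my choice of tool here, not part of the question):

import sympy as sp

Y, Z = sp.symbols('Y Z', positive=True)  # encodes the constraints Y > 0, Z > 0
print(sp.solve([sp.Eq(9, Y * Z), sp.Eq(Y, Z)], [Y, Z], dict=True))
# [{Y: 3, Z: 3}]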
It works better with constraints over finite domains, using this module:
:-use_module(library(clpfd)).
comp(X, Y, Z) :-
    X #= Y * Z, Y #= Z, Y #> 0, Z #> 0.
With
comp(9,Y,Z).
I get:
Y = Z, Z = 3

Defining two random variables that depend on a single condition

In sympy, how can I define two random variables, X and Y, that depend on a common condition? For example, how do I solve a problem such as the following:
We throw a die. If it lands on 1, then X=1 and Y=0. If it lands on 2, then X=0 and Y=1. Otherwise, X=Y=0. What is the covariance of X and Y?
If X and Y are functions of some Z, then create Z and define X, Y through it. Piecewise helps with this:
from sympy.stats import *
Z = Die("Z", 6)
X = Piecewise((1, Eq(Z, 1)), (0, True))
Y = Piecewise((1, Eq(Z, 2)), (0, True))
print(covariance(X, Y)) # -1/36
The value is -1/36 because X and Y are never 1 at the same time, so E[XY] = 0 and Cov(X, Y) = E[XY] - E[X]E[Y] = -(1/6)(1/6).
Aside: If Y is a function of X, then create X first and then define Y in terms of it.
from sympy.stats import Bernoulli, covariance
X = Bernoulli("X", 1/6)
Y = 1 - X
print(covariance(X, Y))
Returns -0.138888888888889, i.e. -5/36: since Y = 1 - X, Cov(X, Y) = -Var(X) = -(1/6)(5/6).

Tensorflow debug or print statements

I am very new to TensorFlow and trying to learn it. I copied a program from a tutorial website. As I modified it, issues appeared and I have to debug. I am looking for help to understand how I can print certain values such as cost and optimizer; I have to see the value being updated in each iteration. I understand that nodes cannot be printed directly, but I take it that cost and optimizer are inputs, which should be printable, right?
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

plt.ion()
n_observations = 100
fig, ax = plt.subplots(1, 1)  # assumed setup: the original snippet uses fig/ax without defining them
xs = np.linspace(-3, 3, n_observations)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
Y_pred = tf.Variable(tf.random_normal([1]), name='bias')
for pow_i in range(1, 5):
    W = tf.Variable(tf.random_normal([1]), name='weight_%d' % pow_i)
    Y_pred = tf.add(tf.multiply(tf.pow(X, pow_i), W), Y_pred)
cost = tf.reduce_sum(tf.pow(Y_pred - Y, 2)) / (n_observations - 1)
d = tf.Print(cost, [cost, 2.0], message="Value of cost id:")
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
n_epochs = 10
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    prev_training_cost = 0.0
    for epoch_i in range(n_epochs):
        for (x, y) in zip(xs, ys):
            print("Msg2 x, y ", x, y, cost)
            sess.run(optimizer, feed_dict={X: x, Y: y})
            sess.run(d)
            print("Msg3 x, y ttt ", x, y, optimizer)
        training_cost = sess.run(cost, feed_dict={X: xs, Y: ys})
        print(training_cost)
        print("Msg3 cost, xs ys", cost, xs, ys)
        if epoch_i % 100 == 0:
            ax.plot(xs, Y_pred.eval(feed_dict={X: xs}, session=sess),
                    'k', alpha=epoch_i / n_epochs)
            fig.show()
            # plt.draw()
        # Allow the training to quit if we've reached a minimum
        if np.abs(prev_training_cost - training_cost) < 0.001:
            break
        prev_training_cost = training_cost
ax.set_ylim([-3, 3])
fig.show()
plt.waitforbuttonpress()
In your example, cost and optimizer refer to tensors in the graph, not inputs to your graph. They need to be fetched in a session.run call to print their Python values. For example, printing training_cost should print the cost, because it is the value of cost fetched by session.run. Similarly, if you take the value of optimizer returned from session.run(optimizer, ...), it should return a printable value.
If you are interested in debugging and printing values, check out:
tfdbg
tf.Print
Hope that helps!
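For example, a minimal sketch (using the names from the snippet above) that fetches the cost in the same call as the training step, inside the inner loop:

_, c = sess.run([optimizer, cost], feed_dict={X: x, Y: y})  # fetch both the op and the tensor
print("cost after this step:", c)                           # c is a plain Python float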

SICP - Which functions converge to fixed points?

In chapter 1, on fixed points, the book says we can find fixed points of certain functions by applying them repeatedly:
f(x), f(f(x)), f(f(f(x))), ...
What are those functions?
It doesn't work for y = 2y, but when I rewrite it as y = y/2 it works.
Does y need to get smaller every time? Or are there general attributes a function has to have for fixed points to be found by this method?
What conditions should it satisfy for this to work?
According to the Banach fixed-point theorem, such a point exists, and the iteration converges to it, if the mapping (function) is a contraction. That means that, for example, the iteration for y = 2x doesn't converge (even though 0 is a fixed point, 2x is not a contraction), while y = 0.999x does. In general, if f maps [a, b] to [a, b], then |f(x) - f(y)| should be at most c * |x - y| for some 0 <= c < 1 (for all x, y from [a, b]).
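Here is a minimal Python sketch of the iteration the book describes (the tolerance, starting guess, and step limit are arbitrary choices):

import math

def fixed_point(f, guess, tol=1e-6, max_steps=1000):
    # Repeatedly apply f until successive values are within tol of each other.
    for _ in range(max_steps):
        nxt = f(guess)
        if abs(nxt - guess) < tol:
            return nxt
        guess = nxt
    raise ValueError("iteration did not converge")

print(fixed_point(math.cos, 1.0))         # ~0.739085; cos is a contraction near its fixed point
print(fixed_point(lambda y: y / 2, 1.0))  # converges towards the fixed point 0
# fixed_point(lambda y: 2 * y, 1.0) raises: doubling is not a contraction, so the iteration diverges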
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along x is a fixed point of sin, only 0 is.
Different functions have different fixed points, if any. You can find more on fixed points of functions at Wikipedia.
