How can I make a program in Mathematica that recognizes this image and returns the radius of its circular part?
While curve extraction is possible, the radius can be obtained quite simply:
img = Import["https://i.stack.imgur.com/LENuK.jpg"];
{wd, ht} = ImageDimensions[img];
data = ImageData[img];
(* row 33 from the bottom crosses the scale: count white pixels from the left and from the right *)
p1 = LengthWhile[data[[-33]], # == {1., 1., 1.} &];
p2 = LengthWhile[Reverse[data[[-33]]], # == {1., 1., 1.} &];
p120 = wd - p1 - p2 - 1; (* pixel span of the 120-unit scale, ticks at -60 and 60 *)
(* row 245 from the bottom crosses the drop at its widest point *)
p3 = LengthWhile[data[[-245]], # == {1., 1., 1.} &];
p4 = LengthWhile[Reverse[data[[-245]]], # == {1., 1., 1.} &];
pdrop = wd - p3 - p4 - 1; (* pixel width of the drop at its widest point *)
radius = 120/p120*pdrop/2. (* drop width converted to scale units, halved *)
55.814
Further automation could detect the widest point of the drop itself; here it was found by testing to be line 245 (see the sample lines in the bottom image).
Making sense of the scale could be difficult to automate. The outermost ticks are at -60 and 60, a span of 120 units, which corresponds to p120 pixels in the image, while the width of the drop at its widest point is pdrop pixels.
As the sketch below shows, the circular part of the drop is limited by the widest points, so that width and the scale are all that is needed to find the radius.
Two lines are used to find the image scale and the outer bounds of the drop: lines 33 and 245, shown below coloured red.
Additional code
In the code below, the displayed value of r is calibrated against the scale (a pixel radius of 212.8 corresponds to 60 units on the axes).
img = Import["https://i.stack.imgur.com/LENuK.jpg"];
{wd, ht} = ImageDimensions[img];
Manipulate[
Graphics[{Rectangle[{0, 0}, {wd, ht}],
Inset[img, {0, 0}, {0, 0}, {wd, ht}],
Inset[Graphics[{Circle[{x, y}, r]},
ImageSize -> {wd, ht}, PlotRange -> {{0, wd}, {0, ht}}],
{0, 0}, {0, 0}, {wd, ht}],
Inset[
Style["r = " <> ToString[Round[60 r/212.8, 0.1]], 16],
{50, 510}]},
ImageSize -> {wd, ht}, PlotRange -> {{0, wd}, {0, ht}}],
{{x, 228}, 0, 300}, {{y, 247}, 0, 300}, {{r, 196}, 0, 300}]
I have the following tensor with dimensions (2, 3, 2, 2), where the dimensions represent (batch_size, channels, height, width):
tensor([[[[ 1.,  2.],
          [ 3.,  4.]],
         [[ 5.,  6.],
          [ 7.,  8.]],
         [[ 9., 10.],
          [11., 12.]]],
        [[[13., 14.],
          [15., 16.]],
         [[17., 18.],
          [19., 20.]],
         [[21., 22.],
          [23., 24.]]]])
I would like to convert this into the following tensor with dimensions (8, 3):
tensor([[ 1,  5,  9],
        [ 2,  6, 10],
        [ 3,  7, 11],
        [ 4,  8, 12],
        [13, 17, 21],
        [14, 18, 22],
        [15, 19, 23],
        [16, 20, 24]])
Essentially I would like to create a 1D vector over the elements of the matrices, i.e. one row per element position containing its values across the channels. I have tried many operations such as flatten and reshape, but I cannot figure out how to achieve this reshaping.
You can do it this way:
import torch
x = torch.Tensor(
[
[
[[1,2],[3,4]],
[[5,6],[7,8]],
[[9,10],[11,12]]],
[
[[13,14],[15,16]],
[[17,18],[19,20]],
[[21,22],[23,24]]]
]
)
# move the channel axis to the front, flatten the remaining axes, then transpose to (8, 3)
result = x.swapaxes(0, 1).reshape(3, -1).T
print(result)
# > tensor([[ 1., 5., 9.],
# > [ 2., 6., 10.],
# > [ 3., 7., 11.],
# > [ 4., 8., 12.],
# > [13., 17., 21.],
# > [14., 18., 22.],
# > [15., 19., 23.],
# > [16., 20., 24.]])
You could achieve this with an axes permutation followed by flattening the resulting tensor:
swap axis=1 (of size 3) with the last one, axis=-1, using torch.transpose (torch.swapaxes is an alias),
flatten everything but the last axis, i.e. from axis=0 to axis=-2, using torch.flatten.
This looks like:
>>> x.transpose(1, -1).flatten(0, -2)
tensor([[ 1., 5., 9.],
[ 3., 7., 11.],
[ 2., 6., 10.],
[ 4., 8., 12.],
[13., 17., 21.],
[15., 19., 23.],
[14., 18., 22.],
[16., 20., 24.]])
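Note that this ordering differs from the one requested in the question. A minimal sketch (not part of the original answers) that reproduces the exact requested order instead moves the channel axis last and collapses (batch, height, width) into rows:
import torch

# same values as the tensor in the question
x = torch.arange(1., 25.).reshape(2, 3, 2, 2)
# move channels last, then collapse (batch, height, width) into rows
y = x.permute(0, 2, 3, 1).reshape(-1, 3)
print(y)
# tensor([[ 1.,  5.,  9.],
#         [ 2.,  6., 10.],
#         [ 3.,  7., 11.],
#         [ 4.,  8., 12.],
#         [13., 17., 21.],
#         [14., 18., 22.],
#         [15., 19., 23.],
#         [16., 20., 24.]])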
I have a matrix whose shape is T×K (with K << T). I want to extend it into shape T×T and right-shift the i-th row by i steps.
For example:
inputs: T = 5 and K = 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
expected outputs:
1 2 3 0 0
0 1 2 3 0
0 0 1 2 3
0 0 0 1 2
0 0 0 0 1
My solution:
right_pad = T - K + 1
output = F.pad(input, (0, right_pad), 'constant', value=0)
output = output.view(-1)[:-T].view(T, T)
My solution causes the error "gradient computation has been modified by an in-place operation". Is there an efficient and feasible way to achieve my purpose?
Your function is fine and is not the cause of your error (tested with PyTorch 1.6.0; if you are using another version, please update your dependencies).
The code below works fine:
import torch
import torch.nn as nn
import torch.nn.functional as F
T = 5
K = 3
inputs = torch.tensor(
[[1, 2, 3,], [1, 2, 3,], [1, 2, 3,], [1, 2, 3,], [1, 2, 3,],],
requires_grad=True,
dtype=torch.float,
)
right_pad = T - K + 1
output = F.pad(inputs, (0, right_pad), "constant", value=0)
output = output.flatten()[:-T].reshape(T, T)
output.sum().backward()
print(inputs.grad)
Please notice I have explicitly specified dtype as torch.float, as you can't backpropagate through integer tensors.
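As a quick illustration (a minimal sketch, not part of the original answer), trying to create an integer tensor with requires_grad=True fails immediately:
import torch

# Integer tensors cannot track gradients; this raises a RuntimeError
# along the lines of "Only Tensors of floating point dtype can require gradients".
try:
    torch.tensor([[1, 2, 3]], requires_grad=True)
except RuntimeError as e:
    print(e)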
view and slice will never break backpropagation, as each gradient stays connected to its single value no matter whether the tensor is viewed as 1D, unsqueezed to 2D, or reshaped otherwise. Those operations do not modify the tensor in place. An in-place modification that would break the gradient is, for example:
output[0, 3] = 15.
Also, your solution returns this:
tensor([[1., 2., 3., 0., 0.],
[0., 1., 2., 3., 0.],
[0., 0., 1., 2., 3.],
[0., 0., 0., 1., 2.],
[3., 0., 0., 0., 1.]], grad_fn=<ViewBackward>)
so you have a 3 in the bottom-left corner. If that's not what you expect, add this line (element-wise multiplication by an upper-triangular matrix of ones) after output = output.flatten()[:-T].reshape(T, T):
output *= torch.triu(torch.ones_like(output))
which gives:
tensor([[1., 2., 3., 0., 0.],
[0., 1., 2., 3., 0.],
[0., 0., 1., 2., 3.],
[0., 0., 0., 1., 2.],
[0., 0., 0., 0., 1.]], grad_fn=<AsStridedBackward>)
And inputs.grad:
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 0.],
[1., 0., 0.]])
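A consolidated sketch combining the padding, reshape, and mask steps above (written with an out-of-place multiplication for the mask, which avoids any in-place concern):
import torch
import torch.nn.functional as F

T, K = 5, 3
inputs = torch.tensor([[1., 2., 3.]] * T, requires_grad=True)

out = F.pad(inputs, (0, T - K + 1), "constant", value=0)  # pad each row to length T + 1
out = out.flatten()[:-T].reshape(T, T)                    # drop the trailing zeros and reshape to T x T
out = out * torch.triu(torch.ones_like(out))              # zero the wrapped-around bottom-left corner
out.sum().backward()
print(out)
print(inputs.grad)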
You can do this column by column with PyTorch.
import numpy as np
import torch

T, K = 5, 3  # example sizes from the question
# input is a T x K tensor
input = torch.ones((T, K))
index = torch.tensor(np.linspace(0, T - 1, num=T, dtype=np.int64))  # row indices 0 .. T-1
output = torch.zeros((T, T))
output[index, index] = input[:, 0]  # column 0 fills the main diagonal
for k in range(1, K):
    # column k fills the k-th superdiagonal
    output[index[:-k], index[:-k] + k] = input[:-k, k]
print(output)
I have a 136x136 Hamiltonian (matrix) and need to find its eigenvalues.
It cannot be solved analytically with Eigenvalues[H], as that would require solving a 136th-order polynomial.
I need to solve it numerically by replacing the symbolic terms with values before computing its eigenvalues. However, the eigenvalues need to be plotted over a range of values of the symbolic term, for example -1 < x < 1.
Is there a method to numerically solve this and plot the eigenvalues over a range of values?
{
{10.1358 - 6.72029 x, 0., 0.},...
{0., 10.1358 - 6.72029 x, 0.},
{0., 0., 10.1358 - 6.72029 x},
{0., 0., 0.},
{0., 0., 0.},
{0., 0., 0.},
{0., 0., 0.},
{0., 0., 0.},
{0., 0.204252, 0.},
{0., 0., 0.267429}
...
A corner of the matrix is shown above as an example; the matrix is real and symmetric.
When I put the two objects below inside GraphicsRow, it seems to turn off their antialiasing. Can anyone see a way to Export the GraphicsRow in the example below with antialiasing?
[image of the exported GraphicsRow output (source: yaroslavvb.com)]
I tried various combinations of Style[#, Antialiasing -> True] and the Preferences settings with no luck.
The closest work-around I have is to Rasterize them at 4 times the resolution, but that has the side effect of changing the appearance of objects with AbsoluteThickness; for instance, the box around each object becomes faded out.
picA = Graphics3D[{Opacity[0.5],
GraphicsComplex[{{-1., 0., 0.}, {0., -1., 0.}, {0., 0., -1.}, {0.,
0., 1.}, {0., 1., 0.}, {1., 0.,
0.}}, {{{EdgeForm[GrayLevel[0.]],
GraphicsGroup[{Polygon[{{4, 5, 1}, {1, 5, 3}, {1, 3, 2}, {4,
1, 2}, {3, 5, 6}, {5, 4, 6}, {4, 2, 6}, {2, 3,
6}}]}]}, {}, {}, {}, {}}}]}];
picB = Graphics3D[{Opacity[0.5],
GraphicsComplex[{{-1., 0., 0.}, {-0.5, -0.8660254037844386,
0.}, {-0.5, 0.8660254037844386, 1.}, {0.,
0., -1.}, {0.5, -0.8660254037844386, 1.}, {0.5,
0.8660254037844386, 0.}, {1., 0.,
0.}}, {{{EdgeForm[GrayLevel[0.]],
GraphicsGroup[{Polygon[{{6, 7, 4}, {2, 1, 4}}],
Polygon[{{1, 2, 5, 3}, {6, 3, 5, 7}, {5, 2, 4, 7}, {3, 6, 4,
1}}]}]}, {}, {}, {}, {}}}]}];
GraphicsRow[{picA, picB}]
Just a quick comment: have you enabled anti-aliasing in the Preferences dialog?