I want to plot the solutions of two ODEs solved via the Laplace transform

I have transformed the two ODEs into Laplace-domain equations by hand, so now I want to invert those equations and plot the results in Python.
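Since the actual transformed equations are in the missing images, here is a minimal sketch of the workflow with a hypothetical Laplace-domain expression, assuming SymPy and Matplotlib are available; replace F with your own transform:

```python
import numpy as np
import sympy as sp
import matplotlib
matplotlib.use("Agg")  # headless backend, safe without a display
import matplotlib.pyplot as plt

s, t = sp.symbols("s t", positive=True)

# Hypothetical Laplace-domain solution of one ODE (placeholder)
F = 1 / ((s + 1) * (s + 2))

# Symbolic inverse Laplace transform back to the time domain
f = sp.inverse_laplace_transform(F, s, t)
f = f.subs(sp.Heaviside(t), 1)  # for t > 0 the Heaviside factor is 1

# Turn the symbolic result into a numeric function and plot it
f_num = sp.lambdify(t, f, modules=["numpy"])
ts = np.linspace(0.01, 5, 200)
plt.plot(ts, f_num(ts))
plt.xlabel("t")
plt.ylabel("f(t)")
plt.savefig("solution.png")
```

For two ODEs, repeat the inversion for each transformed equation and plot both curves on the same axes.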

Related

Can I use PCA for dimensionality reduction from 3D to 2D and apply curve fitting to get the polynomial equation?

I am trying to fit a third-order polynomial to 3D data. Can I apply PCA for dimensionality reduction (from 3D to 2D) and then apply curve fitting to get my third-order polynomial?
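A minimal sketch of the idea, using a plain SVD-based PCA (no extra dependencies) and NumPy's polyfit; the data here is synthetic and every name is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-D data: a cubic curve lying in a tilted plane, plus small noise
u = np.linspace(-1, 1, 200)
curve = np.column_stack([u, u**3, np.zeros_like(u)])
tilt = np.array([[1.0, 0.0, 0.0],
                 [0.0, 0.8, 0.6],
                 [0.0, -0.6, 0.8]])  # rotation embedding the plane in 3-D
data = curve @ tilt.T + rng.normal(scale=0.01, size=(200, 3))

# PCA via SVD: project onto the two directions of largest variance
mean = data.mean(axis=0)
_, svals, vt = np.linalg.svd(data - mean, full_matrices=False)
xy = (data - mean) @ vt[:2].T  # 200x2 reduced coordinates

# Third-order polynomial fit in the reduced 2-D coordinates
coeffs = np.polyfit(xy[:, 0], xy[:, 1], deg=3)
```

One caveat: this only works cleanly when the curve really lies near a 2-D subspace; otherwise the projection distorts the shape and the fitted coefficients lose their meaning.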

Rotation matrix of ligand in VMD

I am trying to get the rotation matrix of a ligand in order to find Euler angles, using the VMD software. Using the measure inertia command I can get the principal axes of rotation. How can I form a rotation matrix from the normalized principal axes? In one paper I found a solution related to ordering the eigenvalues of the moment-of-inertia tensor. Also, when I form the rotation matrix using the principal axes as columns, in some cases I get a matrix with determinant -1 (it should be +1 for a proper rotation matrix). How do I deal with such improper rotation matrices to get Euler angles?
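One common fix, sketched in NumPy under an assumed ZYX (yaw-pitch-roll) Euler convention; the axes matrix is hypothetical input, not VMD output:

```python
import numpy as np

def proper_rotation(axes):
    """Turn a matrix of principal axes (as columns) into a proper rotation.

    Principal axes from an inertia tensor have no preferred sign, so if the
    determinant is -1 we may flip one axis to get a proper rotation (+1).
    """
    R = np.array(axes, dtype=float)
    if np.linalg.det(R) < 0:
        R[:, 2] *= -1.0  # flipping an eigenvector keeps it an eigenvector
    return R

def euler_zyx(R):
    """Extract ZYX Euler angles (yaw, pitch, roll) from a rotation matrix."""
    pitch = np.arcsin(-R[2, 0])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll
```

Note that the extraction formula above assumes cos(pitch) != 0 (no gimbal lock) and that the convention must match whatever the paper or downstream tool expects.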

How to reshape pre-trained weights to input them to a 3D convolutional neural network?

I have pre-trained weights for a 3D convolutional layer from Matlab. The weights are a 5-D tensor with dimensions (512,4,4,4,160), i.e. [out_channels, filter_depth, filter_height, filter_width, in_channels].
Now I want to use them as the initial weights for fine-tuning with TensorFlow's tf.nn.conv3d. I see that the weight shape expected for 3D convolutions is (4,4,4,160,512), i.e. [filter_depth, filter_height, filter_width, in_channels, out_channels]. Can I just use tf.Variable().reshape(4,4,4,160,512)? I suspect the weights would be wrong if I simply reshape.
The tf.transpose operation can reorder axes: https://www.tensorflow.org/versions/r0.11/api_docs/python/array_ops.html#transpose
Given that the input tensor has shape (512,4,4,4,160), tf.transpose(input, perm=[1,2,3,4,0]) produces the desired shape (4,4,4,160,512); for comparison, perm=[4,1,2,3,0] would give (160,4,4,4,512).
Also, you may need to reverse your weights along one or more axes: in TensorFlow, convolutions are implemented as cross-correlations: https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#convolution
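A small NumPy demonstration of why transpose, not reshape, is needed, with the shapes scaled down from the question's (512,4,4,4,160):

```python
import numpy as np

# Scaled-down stand-in for the (512,4,4,4,160) weight tensor
w = np.arange(5 * 2 * 2 * 2 * 3).reshape(5, 2, 2, 2, 3)

# Reorder axes: [out, d, h, w, in] -> [d, h, w, in, out]
w_t = np.transpose(w, (1, 2, 3, 4, 0))

# Reshape produces the same shape but scrambles which weight goes where:
# it just reads the elements off in memory order into the new shape
w_r = w.reshape(2, 2, 2, 3, 5)

print(w_t.shape == w_r.shape)    # same shape
print(np.array_equal(w_t, w_r))  # but different element layout
```

The same distinction holds for tf.transpose versus tf.reshape: only the transpose keeps each filter weight attached to its original (depth, height, width, channel) position.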

Matlab - Image Formation - Matrix

I am doing a very interesting computer-vision project about how to create images "manually" with Matlab.
The teacher gave me three matrices: the illuminant matrix (called E), the camera sensitivity matrix (called R) and finally, the surface reflectance matrix (called S).
The matrix dimensions are as follows:
S: 31x512x512 (reflectance samples x x-dimension x y-dimension)
R: 31x3
E: 31x1
The teacher gave me also the following relationship:
P = transpose(C)*R = transpose(S)*diag(E)*R
Where C is the color-signal matrix.
Where P is the sensor response matrix.
The goal is to display the image formed by all the previous matrices. Therefore, we have to compute the P matrix.
The class of all the matrices is double.
This is what I have done:
Diag_E = diag(E); % Diagonal matrix of the illuminant E
S_reshaped = reshape(S, 31, 512*512); % Reshape the surface reflectance matrix
S_permute = permute(S_reshaped, [2 1]); % The output matrix is a 262144x31 matrix
Color_Signal_D65_buffer = S_permute * Diag_E;
Color_Signal_D65 = reshape(Color_Signal_D65_buffer, [512 512 31]); % This is the final color matrix
Image_D65_buffer = reshape(Color_Signal_D65, 512*512, 31) * R; % Apply the given formula
Image_D65 = reshape(Image_D65_buffer, [512 512 3]); % Image formation
Image_D65_norm = sqrt(sum(Image_D65.^2, 3)); % Compute the per-pixel norm
Image_D65_Normalized = bsxfun(@rdivide, Image_D65, Image_D65_norm); % Divide each pixel by its norm
figure
imshow(Image_D65_Normalized) % Display the image
However, it did not work at all. The output is an image, but the colors are completely wrong (there is too much blue in the image).
I think it could be a matrix-reshaping problem; I have tried all the possible combinations, but nothing works.
Thank you so much for your help
I've finally found the error. It was a problem in the normalization process: I was using the wrong formula.
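For reference, the same pipeline sketched in NumPy with a scaled-down image and a simple global-max normalization instead of the per-pixel norm (which was the reported bug); all data here is a random placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, h, w = 31, 8, 8  # 31 spectral samples, small stand-in image

S = rng.random((n_samples, h, w))  # surface reflectance
E = rng.random(n_samples)          # illuminant
R = rng.random((n_samples, 3))     # camera sensitivities (RGB)

# P = transpose(S) * diag(E) * R, evaluated per pixel
S_flat = S.reshape(n_samples, h * w).T  # (h*w) x 31
P = (S_flat * E) @ R                    # (h*w) x 3 sensor responses
img = P.reshape(h, w, 3)

# Normalize by the global maximum so all values land in [0, 1]
img = img / img.max()
```

Dividing each pixel by its own norm (as in the original code) discards intensity and skews hue, which matches the "too much blue" symptom; a single global scale preserves the relative channel responses.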

How to decompose a shape into a set of basic shapes?

I have a hand-drawn shape I as input. The format of I is a sequence of (x,y) coordinates. It could be a character in a language or some other shape. Given a set of basic shapes S (e.g. {vertical line, horizontal line, circle, semicircle}), I would like to decompose I in terms of S.
Is this a standard operation with a standard name?
Is there a standard algorithm/classifier to solve this problem?
As a supplement to @Don Reba's answer, I would attach the original publication on the generalized Hough transform for your reference. There you can see that the angle of a line is controlled by the theta parameter in the curve equation, and a semicircle by the coordinates xr and yr in the curve equation. The paper includes pseudocode for ellipse detection. The generalized Hough transform can even be used to detect arbitrary shapes by using directional information. On SO there was a simple Matlab implementation of the algorithm.
You could use the generalized Hough transform to match letter contours against basic shapes. You would need a different transform for every kind of shape.
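To make the idea concrete, here is a minimal classical Hough transform for straight lines over an (x, y) point sequence; it is a sketch of the voting scheme only, not the paper's generalized algorithm, which replaces the line equation with a per-shape template:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta)."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(pts[:, 0], pts[:, 1]).max() + 1.0
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        # Each point votes for every (theta, rho) line passing through it
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.digitize(r, rhos) - 1
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, rhos
```

Peaks in the accumulator (e.g. via np.unravel_index(acc.argmax(), acc.shape)) identify the dominant lines; the generalized version builds the same kind of accumulator over the parameters of circles, semicircles, or arbitrary templates.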
