Ansys APDL: use of the MVERT_6P function - matrix

I am deciphering an old Ansys APDL script. This code uses the MVERT_6P function like this: MVERT_6P,A1,A2,A3,A4,A5,A6,'MR0_R1',PAS. I can't find any information about this function in the Ansys help.
A1 through A6 are 3D coordinate points, and PAS = 1.
This matrix, MR0_R1, is the transition matrix from frame R0 to frame R1.
My question is: how exactly is this matrix constructed by the MVERT_6P function? Thank you.
My guess is that the function builds a matrix named MR0_R1 from the six 3D points A1 to A6 and the step parameter PAS = 1.

Related

3D triangulation using HALCON

My aim is to calibrate a pair of cameras and use them for simple measurement purposes. I have already calibrated them using HALCON and have all the necessary intrinsic and extrinsic camera parameters. The next step for me is to measure known lengths to verify my calibration accuracy. So far I have been using the method intersect_lines_of_sight, but it has given me unfavourable results: the lengths are off by a couple of centimeters. Is there any other method in HALCON that triangulates and gives me the 3D coordinates of a point? Or are there any leads as to how this can be done? Any help will be greatly appreciated.
Kindly let me know in case this post needs to be updated with code samples.
In HALCON there is also the operator reconstruct_points_stereo, with which you can reconstruct 3D points given the row and column coordinates of corresponding pixels. For this you will need to generate a StereoModel from your calibration data, which is then used in the operator reconstruct_points_stereo.
In your HALCON installation there is a standard HDevelop example that shows the use of this operator. The example is called reconstruct_points_stereo.hdev and can be found in the example browser of HDevelop.

Resolve matrix differential equation with sparse matrix and ojAlgo

I am developing a Java evolution tool with ojAlgo, and I am trying to solve a matrix differential equation in which A is a sparse matrix (for now the dimension of the matrix is 2000 x 2000; it will be scaled up later). A is not symmetric and uses only real values.
I did some research and tried to find a way to solve this equation (using SparseStore) in the GitHub wiki/javadoc, but I didn't find one. Can you help me find the methods/classes I should use?
Thank you
There is no direct/specific method to solve differential equations in ojAlgo. You have to know how to do it (using pen and paper) then ojAlgo can help you perform the calculations.
The main problem here is finding the eigenpairs, right?
Eigenvalue<Double> evd = Eigenvalue.PRIMITIVE.make(matrix); // create a decomposer suited to the matrix
evd.decompose(matrix);                                      // perform the eigenvalue decomposition
Array1D<ComplexNumber> values = evd.getEigenvalues();       // eigenvalues (possibly complex, since A is not symmetric)
MatrixStore<ComplexNumber> vectors = evd.getEigenvectors(); // eigenvectors, one per column
Eigenpair pair = evd.getEigenpair(0);                       // One of the pairs
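To sketch the pen-and-paper part: assuming the equation in question is the standard linear system x'(t) = A x(t) (an assumption, since the question does not spell the equation out) and A is diagonalizable, the eigenpairs give a closed-form solution:

```latex
% Linear system with initial condition:
\dot{x}(t) = A\,x(t), \qquad x(0) = x_0
% If A v_i = \lambda_i v_i for i = 1,\dots,n, the general solution is
x(t) = \sum_{i=1}^{n} c_i\, e^{\lambda_i t}\, v_i ,
% where the coefficients c = (c_1,\dots,c_n) come from the initial condition:
V c = x_0 , \qquad V = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix} .
```

With the eigenpairs from the decomposition above, evaluating x(t) is then just a linear solve for c followed by a weighted sum of the eigenvectors.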

How to get a mesh from Kinect face tracking?

How do I get the Kinect face tracking mesh?
this is the mesh: http://imgur.com/TV6dHBC
I have tried several ways, but could not make it work.
e.g.: http://msdn.microsoft.com/en-us/library/jj130970.aspx
3D Face Model Provided by IFTModel Interface
The Face Tracking SDK also tries to fit a 3D mask to the user’s face.
The 3D model is based on the Candide3 model
(http://www.icg.isy.liu.se/candide/) :
Note:
This model is not returned directly at each call to the Face Tracking
SDK, but can be computed from the AUs and SUs.
There is no direct functionality to do that. You have to use the triangle and vertex data to generate the vertex and index lists yourself.
The GetTriangles method gives you the faces (indices of the triangles' vertices, in clockwise order); you then use those indices into the array of vertices to build the 3D model. The array of vertices has to be reconstructed every frame from the AUs and SUs with the Get3DShape or GetProjectedShape (2D) functions.
For more, search for IFTModel (http://msdn.microsoft.com/en-us/library/jj130970.aspx) and for VisualizeFaceModel (sample code which can help in understanding the input parameters of Get3DShape).
(That sample uses GetProjectedShape, but the input parameters are nearly identical for both functions.)

Image Lucas-Kanade Optical Flow

I am new to optical flow in image space, and I am confused about whether the optical flow computed in OpenCV by the Lucas-Kanade method is a distance, a displacement, or a velocity. Perhaps I sound foolish, but I am really confused.
I feel it's velocity, but I just want to confirm.
I assume you refer to the OpenCV function calcOpticalFlowPyrLK.
This function tracks the position of interest points found in the old frame and returns their position in the new frame.
The Lucas-Kanade method estimates the local image flow (velocity) vector at a point p.
Concretely, the function computes the displacement of the interest points between two successive frames: the output vector contains the calculated new positions of the input features in the second image, as stated in the documentation: http://docs.opencv.org/2.4/modules/video/doc/motion_analysis_and_object_tracking.html
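To make the distinction concrete, here is a minimal plain-Java sketch (the point coordinates and the frame rate are made-up illustrative values): the raw tracker output is a per-frame displacement in pixels, and it only becomes a velocity once you divide by the frame interval, i.e. multiply by the frame rate.

```java
public class FlowVelocity {
    public static void main(String[] args) {
        // Hypothetical tracked point: position in the old frame and the
        // new position returned by the tracker for the next frame.
        double oldX = 100.0, oldY = 50.0;
        double newX = 103.0, newY = 54.0;
        double fps = 30.0; // assumed camera frame rate

        // The tracker output itself is a displacement in pixels per frame...
        double dx = newX - oldX;
        double dy = newY - oldY;
        double displacement = Math.hypot(dx, dy); // Euclidean length in px

        // ...which becomes a velocity (px/s) once divided by the frame
        // interval, i.e. multiplied by the frame rate.
        double velocity = displacement * fps;

        System.out.println(displacement + " " + velocity);
    }
}
```

So "displacement" and "velocity" are the same vector up to the constant factor given by the frame interval.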

distortion coefficients with OpenCV camera calibration

I'm writing in Visual C++ using the OpenCV library. I used the calibrateCamera function with a checkerboard pattern to extract the intrinsic, extrinsic and distortion values. The problem is that I don't know how to apply the distCoeffs matrix (1x5) to my 2D points on the CCD. Can someone help me?
Thanks in advance!
The relevant portion of the documentation is:
Radial distortion is corrected via the formulas:
x_corrected = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_corrected = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
Tangential distortion occurs because the image-taking lenses are not perfectly parallel to the imaging plane. Correcting it is done via the formulas:
x_corrected = x + [2 p_1 x y + p_2 (r^2 + 2x^2)]
y_corrected = y + [p_1 (r^2 + 2y^2) + 2 p_2 x y]
Here x and y are normalized image coordinates and r^2 = x^2 + y^2.
So we have five distortion parameters, which in OpenCV are organized in a one-row, five-column matrix:
Distortion_coefficients = (k_1, k_2, p_1, p_2, k_3)
You can also let OpenCV do the correction for you with undistort, undistortPoints, or initUndistortRectifyMap combined with remap.
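As a sketch of what those coefficients do, here is a small plain-Java implementation of the forward distortion model (ideal normalized point to distorted point) using the standard OpenCV coefficient order k1, k2, p1, p2, k3; the point and coefficient values in main are made up for illustration. Note that undistort/undistortPoints numerically invert this mapping.

```java
public class DistortionModel {
    // Apply the radial + tangential distortion model to a normalized
    // image point (x, y). Coefficients in OpenCV order: k1, k2, p1, p2, k3.
    static double[] distort(double x, double y,
                            double k1, double k2, double p1, double p2, double k3) {
        double r2 = x * x + y * y; // squared radius from the optical axis
        double r4 = r2 * r2;
        double r6 = r4 * r2;
        double radial = 1.0 + k1 * r2 + k2 * r4 + k3 * r6;
        double xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
        double yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
        return new double[] { xd, yd };
    }

    public static void main(String[] args) {
        // Sanity check: with all coefficients zero the point is unchanged.
        double[] p = distort(0.1, 0.2, 0, 0, 0, 0, 0);
        System.out.println(p[0] + " " + p[1]);
    }
}
```

To go from normalized coordinates to pixel coordinates, the distorted point is then mapped through the camera matrix (u = fx·xd + cx, v = fy·yd + cy).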
