Adding a spring constant to a global stiffness matrix (FEM)

I'm having some trouble assembling a global stiffness matrix for a portal frame.
In the first picture, the global stiffness matrix is shown for a portal where all supports are fully fixed. It is clear to me how to assemble the global stiffness matrix in that case. For the second picture, however, a 7x7 global stiffness matrix must be assembled due to the rotational spring stiffness of the support on the right. Could someone help me fill in the last column and row of the global stiffness matrix? Any help is greatly appreciated.
2 portals
I would like to learn how to fill in a global stiffness matrix when the matrix is larger than 6x6.
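In general, an elastic support contributes only to the diagonal entry of the degree of freedom it restrains: you assemble the member stiffness contributions as usual, then add the spring stiffness to the diagonal term of the extra rotational DOF. A minimal NumPy sketch of that step (the matrix size, DOF index, and stiffness value below are illustrative assumptions, not taken from the actual portal):

```python
import numpy as np

# Assumed sizes/values for illustration only.
n_dof = 7
K = np.zeros((n_dof, n_dof))

# ... assemble the usual member stiffness contributions into K here ...

k_spring = 5000.0   # assumed rotational spring stiffness at the right support
spring_dof = 6      # 0-based index of the extra rotational DOF

# The spring couples the support rotation only to itself, so it adds
# a single term on the diagonal of its own DOF.
K[spring_dof, spring_dof] += k_spring
```

The remaining entries of the last row and column are just the ordinary member stiffness terms that involve that rotation; the spring itself never produces off-diagonal terms.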

Related

3D triangulation using HALCON

My aim is to calibrate a pair of cameras and use them for simple measurement purposes. I have already calibrated them using HALCON and have all the necessary intrinsic and extrinsic camera parameters. The next step is to measure known lengths to verify my calibration accuracy. So far I have been using the method intersect_lines_of_sight to achieve this, which has given me unfavourable results: the lengths are off by a couple of centimeters. Is there any other method in HALCON that triangulates and gives me the 3D coordinates of a point? Or are there any leads as to how this can be done? Any help will be greatly appreciated.
Kindly let me know in case this post needs to be updated with code samples.
In HALCON there is also the operator reconstruct_points_stereo, with which you can reconstruct 3D points given the row and column coordinates of corresponding pixels. For this you will need to generate a StereoModel from your calibration data, which is then used in the operator reconstruct_points_stereo.
In your HALCON installation there is a standard HDevelop example that shows the use of this operator. The example is called reconstruct_points_stereo.hdev and can be found in the example browser of HDevelop.

Can you recommend a source of reference data for Fundamental matrix calculation

Specifically I'd ideally want images with point correspondences and a 'Gold Standard' calculated value of F and left and right epipoles. I could work with an Essential matrix and intrinsic and extrinsic camera properties too.
I know that I can construct F from two projection matrices, then generate left and right projected point coordinates from actual 3D points and apply Gaussian noise, but I'd really like to work with someone else's reference data: I'm trying to test the efficacy of my code, and writing more code to test the first batch of (possibly bad) code doesn't seem smart.
Thanks for any help
Regards
Dave
You should work with ground-truth datasets for multi-view reconstruction. I recommend using the Middlebury Multi-View Stereo datasets. Besides the image data in a lossless format, they provide camera parameters, such as camera pose and intrinsic camera calibration, as well as the possibility to evaluate your own multi-view reconstruction system.
The results may not be computed by "the" gold standard algorithm proposed in the book by Hartley and Zisserman, but you can use them to compute the fundamental matrices you require between two views.
To compute the fundamental matrix F from two projection matrices P1 and P2 refer to the code Andrew Zisserman provides.
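For reference, the standard construction from Hartley and Zisserman is F = [e2]_x P2 P1^+, where e2 = P2 C is the epipole in the second image and C is the camera centre of the first view. A small NumPy sketch of that formula (my own sketch, not Zisserman's code):

```python
import numpy as np

def fundamental_from_projections(P1, P2):
    """F = [e2]_x P2 pinv(P1), per Hartley & Zisserman."""
    # Camera centre of the first view: right null vector of P1.
    _, _, Vt = np.linalg.svd(P1)
    C = Vt[-1]                      # homogeneous 4-vector with P1 @ C = 0
    e2 = P2 @ C                     # epipole in the second image
    # Skew-symmetric cross-product matrix [e2]_x.
    e2_x = np.array([[0.0, -e2[2], e2[1]],
                     [e2[2], 0.0, -e2[0]],
                     [-e2[1], e2[0], 0.0]])
    return e2_x @ P2 @ np.linalg.pinv(P1)
```

Any synthetic correspondence (x1, x2) from a 3D point should then satisfy the epipolar constraint x2^T F x1 = 0 up to floating-point error, which gives you a quick self-check even before you bring in reference data.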

Distortion coefficients with OpenCV camera calibration

I'm writing in Visual C++ using the OpenCV library. I used the calibrateCamera function with a checkerboard pattern to extract intrinsic, extrinsic and distortion values. The problem is that I don't know how to use the distCoeffs matrix (1x5) on my 2D points on the CCD. Can someone help me?
Thanks in advance!
The relevant portion of the documentation is
Tangential distortion occurs because the image taking lenses are not perfectly parallel to the imaging plane. Correcting this is made via the formulas:
x_{corrected} = x + [ 2p_1xy + p_2(r^2+2x^2)]
y_{corrected} = y + [ p_1(r^2+ 2y^2)+ 2p_2xy]
So we have five distortion parameters, which in OpenCV are organized in a 5 column one row matrix:
Distortion_{coefficients}=(k_1 k_2 p_1 p_2 k_3)
You can also use undistort, undistortPoints, or initUndistortRectifyMap combined with remap.
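If you want to apply the model to your own points rather than call the built-in functions, the full OpenCV model combines the radial terms (k1, k2, k3) with the tangential formulas quoted above, acting on normalized coordinates x = (u - cx)/fx, y = (v - cy)/fy. A minimal sketch (the coefficient and point values in the test are illustrative):

```python
import numpy as np

def distort_normalized(x, y, k1, k2, p1, p2, k3):
    """Map an undistorted normalized point (x, y) to its distorted position."""
    r2 = x * x + y * y
    # Radial factor: 1 + k1*r^2 + k2*r^4 + k3*r^6
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Tangential terms as in the formulas above.
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

Note this is the forward model (ideal to distorted); going the other way, from measured distorted pixels back to ideal ones, has no closed form and is what undistortPoints solves iteratively.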

WebGL matrix depth not working properly

I'm very new to WebGL, but I'm getting close at understanding the basics.
I'm following the instructions in Jacob Seidelin's book where he explains some of the basics.
I tried rebuilding one of his examples (which is not directly explained in the book).
For some reason the depth in the uModelView matrix doesn't work in my application. I also don't get any errors using the WebGLDebugUtils.
When I set the z property of the uModelView matrix to 0, the front face of the cube fills up the screen, since I worked with -1 to 1 in the vertices.
Here is my source code: [removed]
The shaders are located in the index.html, but they shouldn't be the problem.
I'm using gl-matrix for the matrix transformations.
Thanks in advance.
You are not using mat4.perspective correctly. Check out the documentation:
https://github.com/toji/gl-matrix/blob/master/gl-matrix.js#L1722
You should either add the matrix as the last parameter (this is the preferred way since this does not allocate any new object):
mat4.perspective(fov, aspect, near, far, matrix);
or assign it to the matrix:
matrix = mat4.perspective(fov, aspect, near, far);

Liquify filter/iwarp

I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code but I'm struggling with finding out what will create similar effects. The closest reference I could find was the iWarp filter in Gimp but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about graphics programming or what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors. It's simple enough to subtract 0.5 so that you can represent negative vectors as well.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any map will generate a distortion of some kind, obviously; a proper liquify effect is quite complex, and I'll leave that to someone more qualified).
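The lookup described above can be sketched in a few lines of NumPy. This is a nearest-neighbour version using the R/G-channels-as-offsets convention with the 0.5 bias already subtracted; a real implementation would interpolate between source pixels:

```python
import numpy as np

def apply_distortion_map(image, offset_map):
    """Sample image at (x + dx, y + dy) taken from a per-pixel offset map.

    offset_map has shape (h, w, 2); channels 0/1 hold x/y offsets as
    fractions of the image size, centred on 0.5 so negative offsets
    can be represented (the 0.5 trick mentioned above).
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dx = np.round((offset_map[..., 0] - 0.5) * w).astype(int)
    dy = np.round((offset_map[..., 1] - 0.5) * h).astype(int)
    # Clamp so displaced coordinates stay inside the image.
    src_x = np.clip(xs + dx, 0, w - 1)
    src_y = np.clip(ys + dy, 0, h - 1)
    return image[src_y, src_x]
```

A map filled with 0.5 everywhere is then the identity, and painting smooth bumps into the two channels produces the local push/pull that liquify-style tools give you.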
I think liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
When the user clicks on a location and moves the mouse, they are changing the grid locations.
The new grid is then projected back into the user's 2D viewable space.
Check this tutorial about a way to implement the liquify filter with JavaScript. Basically, in the tutorial, the effect is achieved by transforming the pixel Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
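A rough NumPy version of that idea: convert each pixel to polar coordinates about the image centre, apply the square root to the normalized radius, and sample the source image at the remapped position. This is a nearest-neighbour sketch of the general technique; the exact mapping in the tutorial may differ:

```python
import numpy as np

def bulge(image):
    """Radial sqrt warp: pixels are pulled outward from the centre."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - cx, ys - cy
    # Cartesian -> polar about the centre.
    r = np.hypot(dx, dy)
    alpha = np.arctan2(dy, dx)
    r_max = r.max()
    # sqrt on the normalized radius, rescaled back to pixels.
    r_new = np.sqrt(r / r_max) * r_max
    # Polar -> Cartesian, clamped to the image bounds.
    src_x = np.clip(cx + r_new * np.cos(alpha), 0, w - 1).astype(int)
    src_y = np.clip(cy + r_new * np.sin(alpha), 0, h - 1).astype(int)
    return image[src_y, src_x]
```

Since sqrt(r/r_max) >= r/r_max on [0, 1], every pixel samples from a radius at or beyond its own, which is what produces the bulge; the centre pixel maps to itself.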
