How to solve simultaneous equations in Processing

I'm new to Processing and it's confusing me far more than Java or JavaScript ever did!
I have to solve simultaneous equations for a college assignment. (It's a class where they don't really explain to us what they're doing in the code.)
I understand how the code below solves two equations, but now they want us to do it with 3 equations. Does anyone know how I would do this? I imagined I would just have to add the extra entries to each matrix, but it is obviously more complicated than that.
The 3 equations I have are:
x+y+z=9
x+2y+3z=23
x+5y-3z=-7
The code for two equations is the following:
// import Jama.*;
// Solve 3x+2y=3
// -2x-y=-1
// AX=B
// X=InvA B
import java.io.StringWriter;

void setup()
{
  size(150, 110);
  fill(0, 0, 0);

  double[][] Aline12 = {{ 3,  2},    // Create a 2D array to store A
                        {-2, -1}};
  Matrix A = new Matrix(Aline12);    // Copy array to A Matrix data structure

  double[][] Bline12 = {{ 3},        // Create a 2D array to store B
                        {-1}};
  Matrix B = new Matrix(Bline12);    // Copy array to B Matrix data structure

  Matrix X = (A.inverse()).times(B); // Solve for X

  text("A", 10, 12);
  app_print(A, 0, 16);
  text("B", 110, 12);
  app_print(B, 100, 16);
  text("X", 10, 65);
  app_print(X, 0, 70);
}

// Method added to allow printing on the applet screen at (x, y)
void app_print(Matrix P, int x, int y)
{
  StringWriter stringWriter = new StringWriter();
  PrintWriter writer = new PrintWriter(stringWriter);
  P.print(writer, 5, 2);             // Jama Matrix.print(PrintWriter, width, decimals)
  text(stringWriter.toString(), x, y);
}

You solve it the same way you solve a system of 2 equations; just add the third variable. Also, in practice you almost never want to take the inverse of a matrix: there are better methods, such as LU decomposition, for solving Ax = B. Since you are using Jama, you can try the following snippet:
double[][] Aline12 = {{1.0, 1.0,  1.0},   // Create a 2D array to store A
                      {1.0, 2.0,  3.0},
                      {1.0, 5.0, -3.0}};
Matrix A = new Matrix(Aline12);           // Copy array to A Matrix data structure

double[][] Bline12 = {{ 9},               // Create a 2D array to store B
                      {23},
                      {-7}};
Matrix B = new Matrix(Bline12);           // Copy array to B Matrix data structure

Matrix X = A.solve(B);                    // Solve for X. See the Jama documentation on solve
Jama docs
http://math.nist.gov/javanumerics/jama/doc/
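If you want to make the LU decomposition mentioned above explicit, Jama exposes it directly. Here is a minimal standalone sketch of that route (assuming Jama is on the classpath; for a square matrix, A.solve(B) uses LU decomposition internally anyway):

import Jama.LUDecomposition;
import Jama.Matrix;

public class SolveThreeEquations {
  public static void main(String[] args) {
    // x + y + z = 9,  x + 2y + 3z = 23,  x + 5y - 3z = -7
    Matrix A = new Matrix(new double[][] {{1, 1,  1},
                                          {1, 2,  3},
                                          {1, 5, -3}});
    Matrix B = new Matrix(new double[][] {{ 9},
                                          {23},
                                          {-7}});

    LUDecomposition lu = new LUDecomposition(A);
    Matrix X = lu.solve(B);   // same result as A.solve(B)

    X.print(6, 2);            // prints the column vector x = 1, y = 2, z = 6
  }
}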

Related

Solve a task using Matrix Toolkit Java (MTJ)

I have a matrix A and a vector t, and I need to find the vector x such that A*x = t.
So there are just 2 steps: convert matrix A and vector t to triangular form, and then find vector x (or maybe it can be done in just one step using this library, I don't know). How can I do this using MTJ? There is really little documentation or information about MTJ.
I found how to do it:
double[][] matrix = new double[n][n];
double[] vector = new double[n];

Matrix A = new DenseMatrix(matrix);
Vector t = new DenseVector(vector);
Vector x = new DenseVector(n);

A.solve(t, x);
And then the answer is in x.
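For completeness, here is a minimal self-contained sketch of that approach, reusing the same 3x3 system as the Jama question above (this assumes MTJ is on the classpath; the classes live in the no.uib.cipr.matrix package):

import no.uib.cipr.matrix.DenseMatrix;
import no.uib.cipr.matrix.DenseVector;
import no.uib.cipr.matrix.Matrix;
import no.uib.cipr.matrix.Vector;

public class MtjSolveExample {
  public static void main(String[] args) {
    // Same system as above: x + y + z = 9, x + 2y + 3z = 23, x + 5y - 3z = -7
    Matrix A = new DenseMatrix(new double[][] {{1, 1,  1},
                                               {1, 2,  3},
                                               {1, 5, -3}});
    Vector t = new DenseVector(new double[] {9, 23, -7});
    Vector x = new DenseVector(3);

    A.solve(t, x);  // solves A * x = t, writing the result into x

    // Expected: 1.0, 2.0, 6.0
    System.out.println(x.get(0) + ", " + x.get(1) + ", " + x.get(2));
  }
}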

How can I append elements to a 3-dimensional array in Processing (v. 3.4)?

I am creating a program to render 3D graphics. I have a 3D array 'shapes' which contains all of the polygons to render. It is an array of polygons, where each polygon is itself an array of points, and each point is an array of 3 integer values (x, y, z co-ordinates). I have tried and failed to use the append() function. How else can I get it to work?
I've tried using the append() function, but it does not seem to work with multidimensional arrays.
int[][][] addPolyhedron(int[][][] shapes, int[][][] polyhedron)
{
  for (int i = 0; i < polyhedron.length; i++)
  {
    shapes = append(shapes, polyhedron[i]);
  }
  return shapes;
}
I wanted this to extend the array shapes to include all of the polygons in the array polyhedron. However, I receive an error message saying 'type mismatch, "java.lang.Object" does not match with "int[][][]".' Thanks in advance.
In Java, arrays (of any dimension) are not extendable - the size is defined, allocated and fixed upon instantiation. You want to add to (and therefore dynamically resize) shapes. Although Processing does provide the append() function, I think it is more appropriate to use the ArrayList built-in Java data type.
Your function could be refactored into something like this:
ArrayList<Integer[][]> addPolyhedron(ArrayList<Integer[][]> shapes, ArrayList<Integer[][]> polyhedron)
{
  shapes.addAll(polyhedron);
  return shapes;
}
Note that int[][] has become Integer[][] because an ArrayList cannot be declared with primitive types (int, boolean, float, etc.).
Adding an individual program-defined polygon to shapes would be done like this:
shapes.add(new Integer[][] {{1,2,5},{3,4,5},{6,5,4}}); // adds a triangle to shapes.
Getting coordinates from the shapes ArrayList would be done like this:
shapes.get(0)[1][1]; // returns 4.
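Putting it together, a minimal Processing-style sketch of this approach might look like the following (the polygon coordinates are made up purely for illustration):

import java.util.ArrayList;

ArrayList<Integer[][]> shapes = new ArrayList<Integer[][]>();

void setup()
{
  // Two made-up polygons, each an array of (x, y, z) points
  ArrayList<Integer[][]> polyhedron = new ArrayList<Integer[][]>();
  polyhedron.add(new Integer[][] {{1, 2, 5}, {3, 4, 5}, {6, 5, 4}});
  polyhedron.add(new Integer[][] {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}});

  shapes = addPolyhedron(shapes, polyhedron);

  println(shapes.size());        // 2 polygons stored
  println(shapes.get(0)[1][1]);  // 4
}

ArrayList<Integer[][]> addPolyhedron(ArrayList<Integer[][]> shapes, ArrayList<Integer[][]> polyhedron)
{
  shapes.addAll(polyhedron);
  return shapes;
}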

Few objects moving straight one by one in Unity

I'm trying to create some snake-like movement, but I can't work out an algorithm to move each body part right behind the one in front of it.
I want an automatically moving snake which consists of separate blocks (spheres). The snake should move along a path. I generate the path with a Bezier spline and have already implemented moving the first part (the future head) along it. The point for the head is obtained from the spline via the following API:
class BezierSpline
{
    Vector3 GetPoint(float progress) // 0 to 1
}
And then I have a SnakeMovement script:
public class SnakeMovement : MonoBehaviour
{
    public BezierSpline Path;
    public List<Transform> Parts;
    public float minDistance = 0.25f;
    public float speed = 1;
    //.....

    void Update()
    {
        Vector3 position = Path.GetPoint(progress);
        Parts.First().localPosition = position;
        Parts.First().LookAt(position + Path.GetDirection(progress));

        for (int i = 1; i < Parts.Count; i++)
        {
            Transform curBody = Parts[i];
            Transform prevBody = Parts[i - 1];

            float dist = Vector3.Distance(prevBody.position, curBody.position);

            Vector3 newP = prevBody.position;
            newP.y = Parts[0].position.y;

            float t = Time.deltaTime * dist / minDistance * curspeed;
            curBody.position = Vector3.Slerp(curBody.position, newP, t);
            curBody.rotation = Quaternion.Slerp(curBody.rotation, prevBody.rotation, t);
        }
        //....
    }
Right now, if I stop the head's movement, the other parts don't preserve their distance and keep moving towards the head's position. Another problem with the above algorithm is that the parts don't exactly follow the head's path; they can "cut" corners while turning.
The main idea is that the user/AI controls only the head (the first body part), and each following part should exactly repeat the head's path and preserve the distance to its neighbours.
For snake-like motion you are likely to get lots of strange behaviours if you treat the spheres as separate objects. While I can imagine it's possible to get it to work, I think this is not the best approach.
The first solution that comes to mind is to create a List, into which you insert, at index 0 on every frame, the position of the head of the snake.
The list grows, and all the other segments wait their turn: with a lag of x frames between segments, on each update segment y takes the position list[x*y].
If the list's Count() is greater than number_of_segments * lag, you RemoveAt(Count() - 1).
This can be optimized, since modifying the list is somewhat costly (a ring buffer would be better suited, but a Queue could also work; for starters I find Lists much easier to follow, and you can always optimize later). It may behave a bit awkwardly if your framerate varies a lot, but it should be very stable in general (as in: no unpredictable motion, since we only re-use the same recorded positions over and over). A rough sketch of this idea follows below.
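Here is a minimal, engine-agnostic sketch of that trail-buffer idea in plain Java (positions are just 3-element float arrays standing in for Unity's Vector3, and the class and method names are made up for illustration):

import java.util.ArrayList;
import java.util.List;

class SnakeTrail {
    private final List<float[]> trail = new ArrayList<float[]>(); // index 0 = newest head position
    private final int segmentCount;
    private final int lag; // frames between consecutive segments

    SnakeTrail(int segmentCount, int lag) {
        this.segmentCount = segmentCount;
        this.lag = lag;
    }

    // Call once per frame with the current head position.
    void recordHeadPosition(float[] headPosition) {
        trail.add(0, headPosition.clone());
        if (trail.size() > segmentCount * lag + 1) {
            trail.remove(trail.size() - 1); // drop positions no segment needs any more
        }
    }

    // Segment 0 is the head; segment n re-uses the position recorded n * lag frames ago.
    float[] positionOfSegment(int segment) {
        int index = Math.min(segment * lag, trail.size() - 1);
        return trail.get(index);
    }

    public static void main(String[] args) {
        SnakeTrail trail = new SnakeTrail(3, 5);
        for (int frame = 0; frame < 30; frame++) {
            float t = frame / 30f;
            trail.recordHeadPosition(new float[] {t, 0f, 0f}); // head moving along +X
        }
        float[] tail = trail.positionOfSegment(2);
        System.out.println(tail[0] + ", " + tail[1] + ", " + tail[2]);
    }
}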
Second method:
You mentioned using a Bezier spline to generate the path. Beziers are parametrized by a float t, so you have something like
SplineAt(t).
If you take your bezier_path_length and distance_between_segments, then segment n should have the position
SplineAt(t - n * distance_between_segments / bezier_path_length)

Camera Properties from Blender to generate Point Cloud

I used Blender to generate some color images and their corresponding depth maps, along with their camera properties (intrinsic and extrinsic).
Then I want to use this information to generate a 3D point cloud from these 2D images using 2D-to-3D projection techniques.
Here is the viewpoint of one of the cameras in Blender.
I wanted to have the rotation and translation matrix of the camera.
I used the code in this link camera matrix for Blender written by @rfabbri, and I used the method "get_3x4_RT_matrix_from_blender" to get the rotation matrix.
After that I want to do the 2D-to-3D projection with all of this information.
For the 2D-to-3D projection, I wrote the following code in Java:
static double[] projUVZtoXY(double u, double v, double d)
{
    // "u" and "v" are the pixel coordinates in the 2D image and
    // "d" is the depth of this pixel (distance of the point to the camera)
    double[] p = new double[]{u, v, 1};

    double[] translate = calibStruct.getM_Trans();     // Translation, from the T_world2cv matrix in the get_3x4_RT_matrix_from_blender method
    double[] rotation  = calibStruct.getM_RotMatrix(); // Rotation, from the R_world2cv matrix in the get_3x4_RT_matrix_from_blender method
    double[] K         = calibStruct.getM_K();         // Intrinsics, from the K matrix in the get_calibration_matrix_K_from_blender method

    double[][] invertR = invert33(rotation);           // This method gives me the R^-1 matrix
    double[][] invertK = invert33(K);                  // This method gives me the K^-1 matrix

    double[][] invK_mul_depth   = multiply33_scalar(invertK, d);      // multiply the scalar "d" into the "invertK" matrix
    double[]   invK_mul_depth_p = multiply33_31(invK_mul_depth, p);   // multiply the 3x3 matrix "invK_mul_depth" by the 3x1 vector "p"

    // subtract the translation vector from "invK_mul_depth_p"
    double[] d_InvK_p_trans = new double[]{invK_mul_depth_p[0] - translate[0],
                                           invK_mul_depth_p[1] - translate[1],
                                           invK_mul_depth_p[2] - translate[2]};

    double[] xyz = multiply33_31(invertR, d_InvK_p_trans);
    return xyz;
}
All of the above code is trying to implement this 3D warping algorithm, to back-project a (u, v) pixel with depth d to an (X, Y, Z) 3D point.
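For reference, what the code computes can be written compactly as follows, with $p = (u, v, 1)^T$ the homogeneous pixel, $K$ the intrinsic matrix, $R$ and $t$ the world-to-camera rotation and translation, and $d$ the depth:

$$X_{\text{world}} = R^{-1}\left(d\,K^{-1}p - t\right)$$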
But when I generate the 3D point cloud, it looks like this (in Meshlab):
It is the point cloud of just one image, the following image:
I can't understand what happened here, or why, in the 3D point cloud, all of the players in the image are repeated along a line!
Could anyone guess what is happening?
I think maybe the rotation matrix that I get from Blender is not correct. What do you think?
Thanks,
Mozhde

Rotate vector using Java 3D

I'm attempting to use Java 3D to rotate a vector. My goal is to create a transform that will make the vector parallel with the y-axis. To do this, I calculated the angle between the original vector and an identical vector except with a z value of 0 (original x, original y, 0 for z). I then did the same thing for the y-axis (original x, 0 for y, original z). I then used each angle to create two Transform3D objects, multiplied them together and applied the result to the vector. My code is as follows:
Transform3D yRotation = new Transform3D();
Transform3D zRotation = new Transform3D();

//create new normal vector
Vector3f normPoint = new Vector3f(normal.getX(), normal.getY(), normal.getZ());

//****Z rotation methods*****
Vector3f newNormPointZ = new Vector3f(normal.getX(), normal.getY(), 0.0F);
float zAngle = normPoint.angle(newNormPointZ);
zRotation.rotZ(zAngle);

//****Y rotation methods*****
Vector3f newNormPointY = new Vector3f(normal.getX(), 0.0F, normal.getZ());
float yAngle = normPoint.angle(newNormPointY);
yRotation.rotY(yAngle);

//combine the two rotations
yRotation.mul(zRotation);

System.out.println("before trans normal = " + normPoint.x + ", " + normPoint.y + ", " + normPoint.z);
//PRINT STATEMENT RETURNS: before trans normal = 0.069842085, 0.99316376, 0.09353002

//perform transform
yRotation.transform(normPoint);

System.out.println("normal trans = " + normPoint.x + ", " + normPoint.y + ", " + normPoint.z);
//PRINT STATEMENT RETURNS: normal trans = 0.09016449, 0.99534255, 0.03411238
I was hoping the transform would produce x and z values equal to, or very close to, 0. While the logic makes sense to me, I'm obviously missing something.
If your goal is to rotate the vector to be parallel to the y-axis, why not just set it manually, using the magnitude of the vector, to <0, MAGNITUDE, 0>?
Also, you should know that rotating a vector to point directly along +Y or -Y can cause some rotation implementations to break, since they operate according to the "world up" vector, i.e. <0, 1, 0>. You can solve this by building your own rotation system and using the "world out" vector <0, 0, 1> when rotating directly up.
If you have some other purpose for this, fastgraph helped me with building rotation matrices.
It's best to understand the math of what's going on so that you know what to do in the future. A small sketch of both options follows below.
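To illustrate both suggestions, here is a minimal sketch using javax.vecmath / Java 3D. This is not the original poster's code: option 1 simply replaces the vector, and option 2 uses a single axis-angle rotation rather than separate Y and Z rotations; the starting values are the ones from the print statement above.

import javax.media.j3d.Transform3D;
import javax.vecmath.AxisAngle4f;
import javax.vecmath.Vector3f;

public class AlignWithYAxis {
    public static void main(String[] args) {
        Vector3f normPoint = new Vector3f(0.069842085f, 0.99316376f, 0.09353002f);

        // Option 1: no rotation needed at all - just replace the vector
        // with one of the same magnitude pointing straight up.
        Vector3f straightUp = new Vector3f(0f, normPoint.length(), 0f);
        System.out.println("option 1: " + straightUp);

        // Option 2: build a single rotation about the axis perpendicular
        // to both normPoint and +Y, by the angle between them.
        Vector3f yAxis = new Vector3f(0f, 1f, 0f);
        Vector3f axis = new Vector3f();
        axis.cross(normPoint, yAxis);          // rotation axis
        float angle = normPoint.angle(yAxis);  // rotation angle in radians

        Transform3D rotation = new Transform3D();
        if (axis.length() > 1e-6f) {           // degenerate if normPoint is already (anti)parallel to Y
            axis.normalize();
            rotation.setRotation(new AxisAngle4f(axis, angle));
        }

        Vector3f rotated = new Vector3f(normPoint);
        rotation.transform(rotated);           // x and z should now be ~0
        System.out.println("option 2: " + rotated);
    }
}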
