ElasticFusion SLAM algorithm running with pose graph on the TUM RGB-D benchmark

I downloaded the Freiburg desk dataset from the TUM RGB-D SLAM Dataset and Benchmark and converted it to '.klg', the custom format used by the SLAM algorithm. I loaded this klg file into ElasticFusion and ran the SLAM algorithm. The 3D reconstruction output looks good enough during the live run.
Now I want to build the 3D reconstruction from the trajectory recorded in that run.
I retrieved the trajectory data of the previous run from the '.freiburg' file and converted it to the format ElasticFusion expects: I changed the timestamps from seconds to microseconds by multiplying them by 1000000, and separated the values with "," instead of " " (space).
This time I ran the algorithm with the "-p" flag and the path to the trajectory file. Below is my command:
/path_to_EF/./ElasticFusion -l /path_to_data/rgbd_dataset_freiburg1_desk/test2.klg -p /path_to_data/rgbd_dataset_freiburg1_desk/modified_freiburg.txt
I am expecting to get the same point cloud, but the result I am getting from the given data is far from that.
As you can see, its accuracy and reconstruction quality are far worse than in the previous run.
I do not have a problem with the trajectory itself: the graph below shows that the trajectory I retrieved from the previous run is close to the ground-truth data provided by the TUM RGB-D Benchmark.
Even when I run it with the ground-truth data, it does not build a nice 3D reconstruction. What can be the reason for such a result, and what am I missing?
Good suggestions and answers will be appreciated.

I successfully ran the code with a trajectory file.
(Your timestamps should be integers, and all parameters should be separated with spaces, not commas.)
Modify ElasticFusion/GUI/src/Tools/GroundTruthOdometry.cpp:
void GroundTruthOdometry::loadTrajectory(const std::string & filename)
{
    std::ifstream file;
    std::string line;
    file.open(filename.c_str());

    while (!file.eof())
    {
        unsigned long long int utime;
        float x, y, z, qx, qy, qz, qw;
        std::getline(file, line);
        int n = sscanf(line.c_str(), "%llu %f %f %f %f %f %f %f", &utime, &x, &y, &z, &qx, &qy, &qz, &qw);

        if(file.eof())
            break;

        assert(n == 8);

        Eigen::Quaternionf q(qw, qx, qy, qz);
        Eigen::Vector3f t(x, y, z);

        Eigen::Isometry3f T;
        T.setIdentity();
        T.pretranslate(t).rotate(q);
        camera_trajectory[utime] = T;
    }
}
Eigen::Matrix4f GroundTruthOdometry::getTransformation(uint64_t timestamp)
{
    Eigen::Matrix4f pose = Eigen::Matrix4f::Identity();

    if(last_utime != 0)
    {
        std::map<uint64_t, Eigen::Isometry3f>::const_iterator it = camera_trajectory.find(last_utime);
        if (it == camera_trajectory.end())
        {
            last_utime = timestamp;
            return pose;
        }
        pose = camera_trajectory[timestamp].matrix();
    }
    else
    {
        std::map<uint64_t, Eigen::Isometry3f>::const_iterator it = camera_trajectory.find(timestamp);
        Eigen::Isometry3f ident = it->second;
        pose = Eigen::Matrix4f::Identity();
        camera_trajectory[last_utime] = ident;
    }

    last_utime = timestamp;
    return pose;
}
Basically, just disable the M matrix; you can try it out.
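For reference, here is a minimal sketch of the file conversion this answer implies: reading a TUM-style trajectory (timestamps in seconds) and writing space-separated lines with integer microsecond timestamps. The function and file names are hypothetical and the snippet is untested against ElasticFusion itself:

#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>

// Convert a TUM trajectory file (timestamp in seconds, space-separated)
// into integer microsecond timestamps, keeping spaces as separators.
void convertTrajectory(const std::string & in, const std::string & out)
{
    std::ifstream src(in.c_str());
    std::ofstream dst(out.c_str());
    std::string line;
    while (std::getline(src, line))
    {
        if (line.empty() || line[0] == '#') continue; // skip TUM header comments
        std::istringstream ss(line);
        double t; float x, y, z, qx, qy, qz, qw;
        if (!(ss >> t >> x >> y >> z >> qx >> qy >> qz >> qw)) continue;
        uint64_t utime = static_cast<uint64_t>(t * 1000000.0); // seconds -> microseconds
        dst << utime << " " << x << " " << y << " " << z << " "
            << qx << " " << qy << " " << qz << " " << qw << "\n";
    }
}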

I took 3 scans: left-to-right, down-to-up and back-to-front. I observed that although the trajectory file seems correct, the reconstruction goes wrong. When I move the camera along the x axis, ElasticFusion moves it along the z axis, and similarly for the other axes. I worked out the transformation matrix between the two coordinate frames manually and applied it to the translation and rotation; it started to work afterwards (a sketch of such an axis remapping follows below).
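As an illustration, a hedged sketch of such an axis remapping with Eigen. The actual permutation depends on your camera convention, so the matrix below (x to z, y to x, z to y) is only an assumed example:

#include <Eigen/Dense>

// Hypothetical axis remapping: map camera x -> z, y -> x, z -> y.
// Adjust the entries of R to match the mismatch you actually observe.
Eigen::Matrix4f axisRemap()
{
    Eigen::Matrix4f R = Eigen::Matrix4f::Zero();
    R(2, 0) = 1.0f; // x axis becomes z
    R(0, 1) = 1.0f; // y axis becomes x
    R(1, 2) = 1.0f; // z axis becomes y
    R(3, 3) = 1.0f;
    return R;
}

// Apply the remap to every pose before handing it to ElasticFusion.
Eigen::Matrix4f remapPose(const Eigen::Matrix4f & pose)
{
    const Eigen::Matrix4f R = axisRemap();
    return R * pose * R.transpose(); // conjugation remaps both translation and orientation
}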

Related

How do I batch-simplify (one after the other) paths of a collection of (very large) SVG files, preferably with a command-line tool on UNIX?

I have perhaps two thousand SVG files, obtained as a result of scanning a number of brochures to greyscale JPG, then batch-binarising them and saving to monochrome TIFF using ScanTailor, then batch-vectorising the TIFFs (using the command-line utility ImageMagick and a quick FOR loop in Fish), and finally resizing/editing them by hand in Inkscape. You can see my workflow heavily favours command-line UNIX utilities; I'll use a graphical tool if necessary, but not for repetitive tasks. If it makes any difference at all, my preferred UNIX distribution is MacOS.
What is left, at the end of this process, is a ~750KB file containing essentially a complete mathematical description of the page. Every printed letter or stroke of the pen has its own path, and (because I didn't use any sort of despeckling algorithm) so does every meaningless artifact (although I made sure to clean up the edges of every page, for workflow reasons).
Most of the scans, though, were imperfect (300 ppi when 600 or 900 ppi would have been better); the binarisation algorithm (the Otsu method) wasn't perfect either, etc. All of the imperfections added up, so in most cases the paths are rather noisy. The path representing the printed capital letter H, for example, needs only eight nodes (corners), or sixteen if it has rounded terminals (ends). I'm willing to accept more than that, because after all the system isn't perfect, but when I see thirty nodes on the H, and it has scalloped sides under magnification, my eyes start to bleed.
I know this isn't anything to worry about when the pages (rendered as PNG) reach the print shop, because the Mark 1 Eyeball will smooth everything out, but I'm too much of a perfectionist to leave it like that.
To solve the problem, I tried selecting all paths in Inkscape with Cmd/A, then using the "simplify path" command by typing Cmd/L. What I expected was that Inkscape would smooth all the paths individually; what resulted was Inkscape smoothing everything collectively into one blurry mess.
I get the result I want if I select path number one and type Cmd/L, then path number two and again Cmd/L, but a representative page has over FOUR HUNDRED paths and this kind of workflow is essentially impracticable.
I know Inkscape has a (very badly documented) command-line mode, and there might perhaps be a script available to do what needs doing, but if it exists somewhere I can't find it. An ideal solution would be to do what I described above, but programmatically (shell script?), then a FOR loop to do it on every file in the directory.
The basic algorithm for path simplification is not that complicated, as long as you do not need to handle curves. So one avenue could be to write a script yourself. The following is an excerpt from a node.js script I have used to simplify polygons/polylines in maps. For GeoJSON files up to several MB in size it runs in 0.1-0.2 seconds on my (old) computer.
The idea (Ramer-Douglas-Peucker) is to take the first and last points of a polyline and find the intermediate point that deviates farthest from the straight line connecting them. If that deviation is below a threshold (the smallest deviation from a straight line you would be able to spot), all the intermediate points can be safely removed; otherwise that point is preserved, and the two sub-polylines on either side of it are examined recursively.
const sqEpsilon = ... // square (!) of the minimum distance to preserve

// takes a list of points in the form [[x,y],...]
function simplifyDP (points) {
    const len = points.length;
    const markers = new Uint8Array(len);
    markers[0] = markers[len - 1] = 1;
    simplifyDPStep(points, markers, 0, len - 1);
    return markers.reduce((pts, m, i) => {
        if (m) pts.push(points[i]);
        return pts;
    }, []);
}

function simplifyDPStep (points, markers, first, last) {
    let maxSqDist = 0, idx;
    // find the intermediate point farthest from the line (first, last)
    for (let i = first + 1; i <= last - 1; i++) {
        const sqDist = sqDistance(points[i], points[first], points[last]);
        if (sqDist > maxSqDist) {
            idx = i;
            maxSqDist = sqDist;
        }
    }
    // if it deviates too much, keep it and recurse into both halves
    if (maxSqDist > sqEpsilon) {
        markers[idx] = 1;
        simplifyDPStep(points, markers, first, idx);
        simplifyDPStep(points, markers, idx, last);
    }
}

// squared distance from point p to the segment (p1, p2)
function sqDistance(p, p1, p2) {
    let x = p1[0],
        y = p1[1];
    const dx = p2[0] - x,
          dy = p2[1] - y;
    const dot = dx * dx + dy * dy;
    if (dot > 0) {
        const t = ((p[0] - x) * dx + (p[1] - y) * dy) / dot;
        if (t > 1) {
            x = p2[0];
            y = p2[1];
        } else if (t > 0) {
            x += dx * t;
            y += dy * t;
        }
    }
    const cdx = p[0] - x;
    const cdy = p[1] - y;
    return cdx * cdx + cdy * cdy;
}

Storing motion vectors from calculated optical flow in a practical way which enables reconstruction of subsequent frames from initial keyframes

I am trying to store the motion detected by optical flow for the frames of a video sequence and then use these stored motion vectors to predict the already-known frames using just the first frame as a reference. I am currently using two Processing sketches: the first sketch draws a motion vector for every pixel grid cell (each 10 pixels wide and high), for every frame in the video sequence. A vector is only drawn in a cell if sufficient motion is detected there. The second sketch aims to reconstruct the video frames crudely from just the initial frame of the sequence, combined with the motion-vector information obtained from the first sketch.

My approach so far is as follows. I can determine the size, position and direction of each motion vector drawn in the first sketch from four variables. By creating four arrays (two for the motion vector's x and y coordinates and another two for its length in the x and y directions), every time a motion vector is drawn I can append each of the four variables to the corresponding array. This is done, via for loops, for each grid cell of an entire frame where a vector is drawn, and for each frame in the sequence. Once the arrays are full, I save them to text files as lists of strings.

I then load these strings into the second sketch, along with the first frame of the video sequence. I load the strings into variables inside a while loop in the draw function and convert them back into floats. I increment a variable by one each time the draw function is called; this moves on to the next frame. (I used a specific number as a separator in my text files, which appears at the end of every frame; the loop searches for this number and then increments the variable by one, breaking the while loop so that the draw function is called again for the subsequent frame.) For each frame, I can draw 10-by-10-pixel boxes and move them by the parameters obtained from the text files of the first sketch.

My problem is simply this: how do I draw the motion of a particular frame without letting what I have blitted to the screen for the previous frame affect what will be drawn for the next frame? My only way of getting my 10-by-10-pixel box is the get() function, which reads pixels that are already drawn to the screen.

Apologies for the length and complexity of my question. Any tips would be very much appreciated! I will add the code for the second sketch. I can also add the first sketch if required, but it's rather long and a lot of it is not my own. Here is the second sketch:
import processing.video.*;

Movie video;
PImage [] naturalMovie = new PImage [0];
String xlengths [];
String ylengths [];
String xpositions [];
String ypositions [];
int a = 0;
int c = 0;
int d = 0;
int p;
int gs = 10;

void setup(){
    size(640, 480, JAVA2D);
    xlengths = loadStrings("xlengths.txt");
    ylengths = loadStrings("ylengths.txt");
    xpositions = loadStrings("xpositions.txt");
    ypositions = loadStrings("ypositions.txt");
    video = new Movie(this, "sample1.mov");
    video.play();
    rectMode(CENTER);
}

void movieEvent(Movie m) {
    m.read();
    PImage f = createImage(m.width, m.height, ARGB);
    f.set(0, 0, m);
    f.resize(width, height);
    naturalMovie = (PImage []) append(naturalMovie, f);
    println("naturalMovie length: " + naturalMovie.length);
    p = naturalMovie.length - 1;
}

void draw() {
    if(naturalMovie.length >= p && p > 0){
        if (c == 0){
            image(naturalMovie[0], 0, 0);
        }
        d = c;
        while (c == d && c < xlengths.length){
            float u, v, x0, y0;
            u = float(xlengths[a]);
            v = float(ylengths[a]);
            x0 = float(xpositions[a]);
            y0 = float(ypositions[a]);
            if (u != 1.0E-19){
                //stroke(255,255,255);
                //line(x0,y0,x0+u,y0+v);
                PImage box;
                box = get(int(x0-gs/2), int(y0 - gs/2), gs, gs);
                image(box, x0-gs/2 +u, y0 - gs/2 +v, gs, gs);
                if (a < xlengths.length - 1){
                    a += 1;
                }
            }
            else if (u == 1.0E-19){
                if (a < xlengths.length - 1){
                    c += 1;
                    a += 1;
                }
            }
        }
    }
}
Word to the wise: most people aren't going to read that wall of text. Try to "dumb down" your posts so they get to the details right away, without any extra information. You'll also be better off if you post an MCVE instead of only giving us half your code. Note that this does not mean posting your entire project. Instead, start over with a blank sketch and only create the most basic code required to show the problem. Don't include any of your movie logic, and hardcode as much as possible. We should be able to copy and paste your code onto our own machines to run it and see the problem.
All of that being said, I think I understand what you're asking.
How do I draw the motion of a particular frame without letting what I have blitted to the screen for the previous frame affect what will be drawn for the next frame? My only way of getting my 10-by-10-pixel box is the get() function, which reads pixels that are already drawn to the screen.
Separate your program into a view and a model. Right now you're using the screen (the view) to store all of your information, which is going to cause you headaches. Instead, store the state of your program into a set of variables (the model). For you, this might just be a bunch of PVector instances.
Let's say I have an ArrayList<PVector> that holds the current position of all of my vectors:
ArrayList<PVector> currentPositions = new ArrayList<PVector>();

void setup() {
    size(500, 500);
    for (int i = 0; i < 100; i++) {
        currentPositions.add(new PVector(random(width), random(height)));
    }
}

void draw(){
    background(0);
    for(PVector vector : currentPositions){
        ellipse(vector.x, vector.y, 10, 10);
    }
}
Notice that I'm just hardcoding their positions to be random. This is what your MCVE should do as well. And then in the draw() function, I'm simply drawing each vector. This is like drawing a single frame for you.
Now that we have that, we can create a nextFrame() function that moves the vectors based on the ArrayList (our model) and not what's drawn on the screen!
void nextFrame(){
    for(PVector vector : currentPositions){
        vector.x += random(-2, 2);
        vector.y += random(-2, 2);
    }
}
Again, I'm just hardcoding a random movement, but you would be reading these offsets from your file. Then we just call nextFrame() as the last line of the draw() function, so the model advances once per frame and the screen is always redrawn from the model, never from its own previous contents.
If you're still having trouble, I highly recommend posting an MCVE similar to mine and posting a new question. Good luck.

Clustering objects on the GPU

My clustering algorithm is simple, and it goes like this.
The first object is grouped with all other objects whose distance to it is lower than X.
Then we go to the second object; if it is not included in the first group, we run the same algorithm on the remaining objects that are not included in the first group,
and so on (a CPU reference sketch of this follows below).
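For clarity, here is a minimal CPU sketch of the clustering described above (plain C++, hypothetical point type); it also makes the serial dependency between groups explicit:

#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Greedy threshold clustering: each not-yet-grouped point seeds a new cluster
// and absorbs every later ungrouped point within maxDist of the seed.
std::vector<int> cluster(const std::vector<Pt> & pts, float maxDist)
{
    std::vector<int> label(pts.size(), -1);
    int next = 0;
    for (size_t i = 0; i < pts.size(); ++i)
    {
        if (label[i] != -1) continue;   // already grouped by an earlier seed
        label[i] = next;
        for (size_t j = i + 1; j < pts.size(); ++j)
        {
            if (label[j] != -1) continue;
            float dx = pts[i].x - pts[j].x;
            float dy = pts[i].y - pts[j].y;
            if (std::sqrt(dx * dx + dy * dy) <= maxDist)
                label[j] = next;        // distances are measured to the seed only
        }
        ++next;
    }
    return label;
}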
I'm trying to do this algorithm on the GPU using the fragment shader.
First I write all the locations into an RGBA float texture, setting for each pixel its location (x, y); z and w are free for now. Then I draw my calculations to a result texture using the shader. In the end I read back the pixels of the result texture and process them in my code.
I have tried many variations of code, and multi-pass drawing, to perform my algorithm, but I'm not happy with the time performance.
The question is:
is there a way to perform this in a single pass over the texture (a single draw phase)?
My latest attempt is this algorithm, in my fragment shader:
precision highp float;

uniform sampler2D locs;
varying vec2 coord;
uniform float clusterDistance;

const float textureSize = 64.;

void main()
{
    // Getting my location
    vec4 currData = texture2D(locs, coord);
    float offsetPix = 1./textureSize/2.;
    vec2 coordIdx = (coord - offsetPix) * textureSize;

    // Getting the index of my location
    float myIdx = coordIdx.y * textureSize + coordIdx.x;

    int clusterIdx = 0;
    float clusterNum = 0.;

    // Running over all the locations before mine, looking for the first one close to me
    for (float i=0.;i<textureSize*textureSize;++i)
    {
        clusterNum = i +1.;
        // Reaching my own index means no close object was found, so we stop
        if (i == myIdx)
        {
            break;
        }
        else
        {
            vec2 pntLoc = vec2(mod(i, textureSize), floor(i/textureSize)) / textureSize+offsetPix;
            vec4 pnt = texture2D(locs, pntLoc);
            if (distance(currData.xy, pnt.xy) <= clusterDistance)
            {
                break;
            }
        }
    }

    // Write out the result
    gl_FragColor = vec4(currData.x, currData.y, clusterNum, 1.);
}
But the problem here is that the result can cause chain clustering. For example, if our data is {0,0}, {4,0}, {8,0} and the maximum grouping distance is 4, then the first point is close to the second, and the third is close to the second but not to the first. According to my algorithm, the third returns the index of the second, although the second is out of the picture because it is grouped with the first object, and the first is the reference object for distances.
Is it possible to read from the result texture while writing to it?
That would solve my problem, because then I could check the z value of the result when comparing distances.
No, you cannot read from and write to the same texture in a single pass (not with standard WebGL, and I think not at all in the way you intend).
Your algorithm seems rather serial in nature, not well suited to GPU/SIMD execution, but I may be misinterpreting your intent. Remember that the GPU may run a shader program for many data points (fragments/pixels in this case) at once, with no knowledge of each other's results.
You also can't really break out of a for loop on a SIMD architecture: the loop keeps iterating, and the changes are simply not written for fragments that broke out of it. In other words, there is no speed benefit. It's a different story if the break condition evaluates to the same value for all fragments.
You might want to look at other ways of clustering, like k-means (a sketch follows below).
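To make that suggestion concrete, here is a minimal, hedged k-means sketch in plain C++ (fixed iteration count, naive initialisation, hypothetical 2D point type). Note that the assignment step is independent per point, which is what maps naturally onto a shader:

#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Minimal k-means: returns the cluster label of each point.
// Assumes pts.size() >= k.
std::vector<int> kmeans(const std::vector<Pt> & pts, int k, int iters)
{
    std::vector<Pt> centers(pts.begin(), pts.begin() + k); // naive init: first k points
    std::vector<int> label(pts.size(), 0);
    for (int it = 0; it < iters; ++it)
    {
        // Assignment step: each point picks its nearest center (data-parallel).
        for (size_t i = 0; i < pts.size(); ++i)
        {
            float best = 1e30f;
            for (int c = 0; c < k; ++c)
            {
                float dx = pts[i].x - centers[c].x, dy = pts[i].y - centers[c].y;
                float d = dx * dx + dy * dy;
                if (d < best) { best = d; label[i] = c; }
            }
        }
        // Update step: recompute each center as the mean of its members.
        std::vector<Pt> sum(k, Pt{0.f, 0.f});
        std::vector<int> cnt(k, 0);
        for (size_t i = 0; i < pts.size(); ++i)
        {
            sum[label[i]].x += pts[i].x;
            sum[label[i]].y += pts[i].y;
            ++cnt[label[i]];
        }
        for (int c = 0; c < k; ++c)
            if (cnt[c] > 0)
                centers[c] = Pt{sum[c].x / cnt[c], sum[c].y / cnt[c]};
    }
    return label;
}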

Seemingly random crashes processing a point cloud with the Point Cloud Library - possibly a memory issue?

I am doing several steps of reprojections of a point cloud (around 40 million points initially, ~20 million during processing). The program crashes at seemingly random points in one of these two loops. If I run it with a smaller subset (~10 million points), everything works fine.
//Projection of the point cloud onto a sphere
pcl::PointCloud<pcl::PointXYZ>::Ptr projSphere(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud,int radius)
{
    //output cloud
    pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
    //time marker
    int startTime = time(NULL);
    cout<<"Start Sphere Projection"<<endl;
    //factor by which each point vector is multiplied to get a distance of radius to the origin
    float scalar;
    for (int i=0;i<cloud->size();i++)
    {
        if (i%1000000==0) cout<<i<<endl;
        //P
        pcl::PointXYZ tmpin=cloud->points.at(i);
        //P'
        pcl::PointXYZ tmpout;
        scalar=radius/(sqrt(pow(tmpin.x,2)+pow(tmpin.y,2)+pow(tmpin.z,2)));
        tmpout.x=tmpin.x*scalar;
        tmpout.y=tmpin.y*scalar;
        tmpout.z=tmpin.z*scalar;
        //Adding P' to the output cloud
        output->push_back(tmpout);
    }
    cout<<"Finished projection of "<<output->size()<<" points in "<<time(NULL)-startTime<<" seconds"<<endl;
    return(output);
}
//Stereographic projection
pcl::PointCloud<pcl::PointXYZ>::Ptr projStereo(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    //output cloud
    pcl::PointCloud<pcl::PointXYZ>::Ptr outputSt(new pcl::PointCloud<pcl::PointXYZ>);
    //time marker
    int startTime = time(NULL);
    cout<<"Start Stereographic Projection"<<endl;
    for (int i=0;i<cloud->size();i++)
    {
        //P
        if (i%1000000==0) cout<<i<<endl;
        pcl::PointXYZ tmpin=cloud->points.at(i);
        //P'
        pcl::PointXYZ tmpout;
        //equation
        tmpout.x=tmpin.x/(1.0+tmpin.z);
        tmpout.y=tmpin.y/(1.0+tmpin.z);
        tmpout.z=0;
        //Adding P' to the output cloud
        outputSt->push_back(tmpout);
    }
    cout<<"Finished projection of "<<outputSt->size()<<" points in "<<time(NULL)-startTime<<" seconds"<<endl;
    return(outputSt);
}
If I do all the steps independently, by saving/loading the point clouds on the hard disk and rerunning the program for each step, it also works fine. I'd like to provide the entire source files, but I'm not sure how, or whether that is necessary.
Thanks in advance
Edit 1:
After about a week I still have no idea what the issue might be. The crashes are somewhat random, but not entirely. I tried running the program under different system loads (freshly rebooted, with heavy-duty programs loaded, etc.); it makes no apparent difference. Since I thought it might be a memory issue, I tried moving the large objects from the stack to the heap (initialising them with new); that also made no difference. By far the largest object is the raw input file, which I open and close like this:
ifstream file;
file.open(infile);
/*......*/
file.close();
delete file;
Is that done properly, so that the memory is released once the method completes?
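As a point of reference, a standard C++ stream needs no delete (it is not a pointer). A minimal sketch of scope-based handling, with a hypothetical function name:

#include <fstream>
#include <string>

void readFile(const std::string & infile)
{
    std::ifstream file(infile.c_str()); // opened here
    std::string line;
    while (std::getline(file, line))
    {
        /* ... process the line ... */
    }
} // file is closed automatically when it goes out of scope (RAII)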
Edit again:
So I kept trying, and finally I managed to put all the steps into one function, like this:
void stereoTiffI(string infile, string outfile, int length)
{
    //set up file input
    cout<<"Opening file: "<< infile<<endl;
    ifstream file;
    file.open(infile);
    string line;
    //skip first lines
    for (int i=0;i<9;i++)
    {
        getline(file,line);
    }
    //output cloud
    pcl::PointCloud<pcl::PointXYZ> cloud;
    getline(file,line);
    //indexes for string parsing, coordinates and starting timer
    int i=0;
    int j=0;
    int k=0;
    float x=0;
    float y=0;
    float z=0;
    float intensity=0;
    float scalar=0;
    int startTime = time(NULL);
    pcl::PointXYZ tmp;
    //begin loop
    cout<<"Begin reading and projecting "<< infile<<endl;
    while (!file.eof())
    {
        getline(file,line);
        i=0;
        j=line.find(" ");
        x=atof(line.substr(i,j).c_str());
        i=line.find(" ",i)+1;
        j=line.find(" ",i)-i;
        y=atof(line.substr(i,j).c_str());
        i=line.find(" ",i)+1;
        j=line.find(" ",i)-i;
        z=atof(line.substr(i,j).c_str());
        //i=line.find(" ",i)+1;
        //j=line.find(" ",i)-i;
        //intensity=atof(line.substr(i,j).c_str());
        //leave out points below scanner height
        if (z>0)
        {
            //projection to a hemisphere with radius 1
            scalar=1/(sqrt(pow(x,2)+pow(y,2)+pow(z,2)));
            x=x*scalar;
            y=y*scalar;
            z=z*scalar;
            //stereographic projection
            x=x/(1.0+z);
            y=y/(1.0+z);
            z=0;
            tmp.x=x;
            tmp.y=y;
            tmp.z=z;
            //tmp.intensity=intensity;
            cloud.push_back(tmp);
            k++;
            if (k%1000000==0)cout<<k<<endl;
        }
    }
    cout<<"Finished producing projected cloud in: "<<time(NULL)-startTime<<" with "<<cloud.size()<<" points."<<endl;
}
And this actually works quite nicely and quickly. As a next step I tried to use the point type XYZI, because I also need the intensity of the scanned points. And guess what, the program crashes at around 17,000,000 points again, and again I have no idea why. Please help.
OK, I solved it. Dr. Memory gave me the right hint with a heap allocation error. After a bit of googling I enabled Large Addresses in Visual Studio (Properties -> Linker -> System), and everything works like a charm. (A 32-bit process is otherwise limited to 2 GB of address space, which a cloud of tens of millions of points can easily exhaust; building for 64-bit removes the limit altogether.)

3D Rotation Matrix deforms over time in Processing/Java

I'm working on a project where I want to generate a 3D mesh to represent a certain amount of data.
To create this mesh I want to use transformation matrices, so I created a class based upon the mathematical algorithms found on a couple of websites.
Everything seems to work (scale, translation), but as soon as I rotate a mesh on its x-axis it starts to deform after 2 to 3 complete rotations. It feels like my scale values are increasing, which deforms my mesh. I've been struggling with this problem for a couple of days, but I can't figure out what's going wrong.
To make things clearer, you can download my complete setup here.
I defined the coordinates of a box and put them through the transformation matrix before writing them to the screen.
This is the formula for rotating my object:
void appendRotation(float inXAngle, float inYAngle, float inZAngle, PVector inPivot ) {
    boolean setPivot = false;
    if (inPivot.x != 0 || inPivot.y != 0 || inPivot.z != 0) {
        setPivot = true;
    }
    // If setPivot = true, translate the position
    if (setPivot) {
        // Translations for the different axes need to be set separately
        if (inPivot.x != 0) { this.appendTranslation(inPivot.x,0,0); }
        if (inPivot.y != 0) { this.appendTranslation(0,inPivot.y,0); }
        if (inPivot.z != 0) { this.appendTranslation(0,0,inPivot.z); }
    }
    // Create a rotation matrix
    Matrix3D rotationMatrix = new Matrix3D();
    // x sin and cos
    float xSinCal = sin(radians(inXAngle));
    float xCosCal = cos(radians(inXAngle));
    // y sin and cos
    float ySinCal = sin(radians(inYAngle));
    float yCosCal = cos(radians(inYAngle));
    // z sin and cos
    float zSinCal = sin(radians(inZAngle));
    float zCosCal = cos(radians(inZAngle));
    // Rotate around x
    rotationMatrix.setIdentity();
    rotationMatrix.matrix[1][1] = xCosCal;
    rotationMatrix.matrix[1][2] = xSinCal;
    rotationMatrix.matrix[2][1] = -xSinCal;
    rotationMatrix.matrix[2][2] = xCosCal;
    // Add rotation to the basis matrix
    this.multiplyWith(rotationMatrix);
    // Rotate around y
    rotationMatrix.setIdentity();
    rotationMatrix.matrix[0][0] = yCosCal;
    rotationMatrix.matrix[0][2] = -ySinCal;
    rotationMatrix.matrix[2][0] = ySinCal;
    rotationMatrix.matrix[2][2] = yCosCal;
    // Add rotation to the basis matrix
    this.multiplyWith(rotationMatrix);
    // Rotate around z
    rotationMatrix.setIdentity();
    rotationMatrix.matrix[0][0] = zCosCal;
    rotationMatrix.matrix[0][1] = zSinCal;
    rotationMatrix.matrix[1][0] = -zSinCal;
    rotationMatrix.matrix[1][1] = zCosCal;
    // Add rotation to the basis matrix
    this.multiplyWith(rotationMatrix);
    // Untranslate the position
    if (setPivot) {
        // Translations for the different axes need to be set separately
        if (inPivot.x != 0) { this.appendTranslation(-inPivot.x,0,0); }
        if (inPivot.y != 0) { this.appendTranslation(0,-inPivot.y,0); }
        if (inPivot.z != 0) { this.appendTranslation(0,0,-inPivot.z); }
    }
}
Does anyone have a clue?
You never want to cumulatively multiply transformation matrices. This introduces numerical error into your matrices and causes problems such as scaling or skewing of the orthogonal components.
The correct method is to keep track of the cumulative pitch, yaw and roll angles, and then reconstruct the transformation matrix from those angles every update (see the sketch below).
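As an illustration of this answer, a minimal sketch in C++ with Eigen (the question's code is Processing, but the idea carries over directly): accumulate plain angles and build a fresh matrix each frame instead of multiplying into the old one. The struct and function names are hypothetical:

#include <Eigen/Dense>

// Hypothetical per-object state: cumulative Euler angles, in radians.
struct Orientation {
    float pitch = 0.f, yaw = 0.f, roll = 0.f;

    // Rebuild the rotation matrix from scratch each update, so numerical
    // error can never accumulate across frames.
    Eigen::Matrix3f matrix() const {
        return (Eigen::AngleAxisf(yaw,   Eigen::Vector3f::UnitY())
              * Eigen::AngleAxisf(pitch, Eigen::Vector3f::UnitX())
              * Eigen::AngleAxisf(roll,  Eigen::Vector3f::UnitZ())).toRotationMatrix();
    }
};

// Per-frame update: only the angles are accumulated, never the matrices.
void update(Orientation & o, float dPitch, float dYaw, float dRoll) {
    o.pitch += dPitch;
    o.yaw   += dYaw;
    o.roll  += dRoll;
}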
If there is any chance, avoid multiplying rotation matrices. Keep track of the cumulative rotation and compute a new rotation matrix at each step.
If it is impossible to avoid multiplying the rotation matrices, then renormalize them (starts on page 16). It works fine for me even for more than ten thousand multiplications.
However, I suspect that it will not help you: numerical errors usually require more than two steps to manifest themselves. It seems to me the reason for your problem is somewhere else.
Also, yaw, pitch and roll are not good for arbitrary rotations: Euler angles suffer from singularities and instability. Look at 38:25 (presentation by David Sachs):
http://www.youtube.com/watch?v=C7JQ7Rpwn2k
Good luck!
As #don mentions, try to avoid cumulative transformations, as you can run into all sorts of problems. Rotating one axis at a time can also lead you into gimbal lock issues, so try to do all rotations in one go.
Also, bear in mind that Processing comes with its own Matrix3D class called PMatrix3D, which has a rotate() method that takes an angle (in radians) and x, y, z values for the rotation axis.
Here is an example function that would rotate a bunch of PVectors:
PVector[] rotateVerts(PVector[] verts,float angle,PVector axis){
    int vl = verts.length;
    PVector[] clone = new PVector[vl];
    for(int i = 0; i<vl;i++) clone[i] = verts[i].get();
    //rotate using a matrix
    PMatrix3D rMat = new PMatrix3D();
    rMat.rotate(angle,axis.x,axis.y,axis.z);
    PVector[] dst = new PVector[vl];
    for(int i = 0; i<vl;i++) {
        dst[i] = new PVector();
        rMat.mult(clone[i],dst[i]);
    }
    return dst;
}
and here is an example using it.
HTH
A shot in the dark: I don't know the rules or the name of the programming language you are using, but this procedure looks suspicious:
void setIdentity() {
    this.matrix = identityMatrix;
}
Are you sure you are taking a copy of identityMatrix? If you are only copying a reference, then identityMatrix will be modified by later operations, and soon nothing will make sense.
Though the matrix renormalization suggested above probably works fine in practice, it is a bit ad hoc from a mathematical point of view. A better way is to represent the cumulative rotation as a quaternion, which is converted to a rotation matrix only when it is applied. A quaternion will also drift slowly away from unit length (though more slowly), but the important thing is that it has a well-defined renormalization (see the sketch below).
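For illustration, a minimal sketch of quaternion accumulation using Eigen's quaternion type (the surrounding struct is hypothetical; most math libraries offer the same operations):

#include <Eigen/Dense>

// Accumulate rotation as a quaternion; convert to a matrix only on use.
struct QuatOrientation {
    Eigen::Quaternionf q = Eigen::Quaternionf::Identity();

    // Compose an incremental rotation of 'angle' radians about 'axis'.
    void rotate(float angle, const Eigen::Vector3f & axis) {
        q = q * Eigen::Quaternionf(Eigen::AngleAxisf(angle, axis.normalized()));
        q.normalize(); // well-defined renormalization: rescale to unit length
    }

    Eigen::Matrix3f matrix() const { return q.toRotationMatrix(); }
};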
Good starting information for implementing this can be:
http://www.cprogramming.com/tutorial/3d/quaternions.html
http://www.scheib.net/school/library/quaternions.pdf
A useful academic reference can be:
K. Shoemake, "Animating rotation with quaternion curves," ACM SIGGRAPH Computer Graphics, vol. 19, no. 3, pp. 245-254, 1985. DOI: 10.1145/325165.325242
