Seemingly random crashes processing a point cloud with Point Cloud Library - possibly a memory issue?

I am doing several steps of reprojections of a point cloud (around 40 million points initially, ~20 million while processing). The program crashes at seemingly random points in one of these two loops. If I run it with a smaller subset (~10 million points), everything works fine.
//Projection of the point cloud onto a sphere
pcl::PointCloud<pcl::PointXYZ>::Ptr projSphere(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud, int radius)
{
    //output cloud
    pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
    //time marker
    int startTime = time(NULL);
    cout << "Start Sphere Projection" << endl;
    //factor by which each point vector is multiplied to get a distance of radius to the origin
    float scalar;
    for (int i = 0; i < cloud->size(); i++)
    {
        if (i % 1000000 == 0) cout << i << endl;
        //P
        pcl::PointXYZ tmpin = cloud->points.at(i);
        //P'
        pcl::PointXYZ tmpout;
        scalar = radius / (sqrt(pow(tmpin.x, 2) + pow(tmpin.y, 2) + pow(tmpin.z, 2)));
        tmpout.x = tmpin.x * scalar;
        tmpout.y = tmpin.y * scalar;
        tmpout.z = tmpin.z * scalar;
        //Adding P' to the output cloud
        output->push_back(tmpout);
    }
    cout << "Finished projection of " << output->size() << " points in " << time(NULL) - startTime << " seconds" << endl;
    return (output);
}
//Stereographic projection
pcl::PointCloud<pcl::PointXYZ>::Ptr projStereo(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    //output cloud
    pcl::PointCloud<pcl::PointXYZ>::Ptr outputSt(new pcl::PointCloud<pcl::PointXYZ>);
    //time marker
    int startTime = time(NULL);
    cout << "Start Stereographic Projection" << endl;
    for (int i = 0; i < cloud->size(); i++)
    {
        if (i % 1000000 == 0) cout << i << endl;
        //P
        pcl::PointXYZ tmpin = cloud->points.at(i);
        //P'
        pcl::PointXYZ tmpout;
        //projection equation: P' = (x/(1+z), y/(1+z), 0)
        tmpout.x = tmpin.x / (1.0 + tmpin.z);
        tmpout.y = tmpin.y / (1.0 + tmpin.z);
        tmpout.z = 0;
        //Adding P' to the output cloud
        outputSt->push_back(tmpout);
    }
    cout << "Finished projection of " << outputSt->size() << " points in " << time(NULL) - startTime << " seconds" << endl;
    return (outputSt);
}
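For context, the two functions are chained roughly like this (a sketch; the actual radius and the surrounding steps are in the full program):

//project onto a sphere of radius 1, then stereographically onto the plane z = 0
pcl::PointCloud<pcl::PointXYZ>::Ptr sphere = projSphere(cloud, 1);
pcl::PointCloud<pcl::PointXYZ>::Ptr plane = projStereo(sphere);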
If I do all the steps independently by saving/loading the point clouds on the hard disk and rerunning the program for each step, it also works fine. I'd like to provide the entire source files, but I'm not sure how/if that's necessary.
Thanks in advance
Edit 1:
After about a week I still have no idea what might be the issue here, since the crashes are somewhat random, but not really? I tested the program under different system workloads (freshly rebooted, with heavy-duty programs loaded, etc.); it makes no apparent difference. Since I thought it might be a memory issue, I tried to move the large objects from the stack to the heap (initialising them with new), which also made no difference. By far the largest object is the raw input file, which I open and close by:
ifstream file;
file.open(infile);
/*......*/
file.close();
delete file;
Is that properly done, so that after the method is completed the memory is released?
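For comparison, a minimal scope-based (RAII) sketch: an ifstream that is not allocated with new needs neither close() nor delete, because the stream releases the file handle automatically when it goes out of scope (the wrapper function below is hypothetical):

void readInput(const string &infile)
{
    ifstream file(infile);  //opened here
    /*......*/
}                           //file is closed and its memory released here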
Edit again:
So I kept trying, and finally I managed to put all the steps into one function like this:
void stereoTiffI(string infile, string outfile, int length)
{
    //set up file input
    cout << "Opening file: " << infile << endl;
    ifstream file;
    file.open(infile);
    string line;
    //skip the header lines
    for (int i = 0; i < 9; i++)
    {
        getline(file, line);
    }
    //output cloud
    pcl::PointCloud<pcl::PointXYZ> cloud;
    getline(file, line);
    //indexes for string parsing, coordinates, and starting timer
    int i = 0;
    int j = 0;
    int k = 0;
    float x = 0;
    float y = 0;
    float z = 0;
    float intensity = 0;
    float scalar = 0;
    int startTime = time(NULL);
    pcl::PointXYZ tmp;
    //begin loop
    cout << "Begin reading and projecting " << infile << endl;
    while (!file.eof())
    {
        getline(file, line);
        i = 0;
        j = line.find(" ");
        x = atof(line.substr(i, j).c_str());
        i = line.find(" ", i) + 1;
        j = line.find(" ", i) - i;
        y = atof(line.substr(i, j).c_str());
        i = line.find(" ", i) + 1;
        j = line.find(" ", i) - i;
        z = atof(line.substr(i, j).c_str());
        //i = line.find(" ", i) + 1;
        //j = line.find(" ", i) - i;
        //intensity = atof(line.substr(i, j).c_str());
        //leave out points below scanner height
        if (z > 0)
        {
            //projection onto a hemisphere with radius 1
            scalar = 1 / (sqrt(pow(x, 2) + pow(y, 2) + pow(z, 2)));
            x = x * scalar;
            y = y * scalar;
            z = z * scalar;
            //stereographic projection
            x = x / (1.0 + z);
            y = y / (1.0 + z);
            z = 0;
            tmp.x = x;
            tmp.y = y;
            tmp.z = z;
            //tmp.intensity = intensity;
            cloud.push_back(tmp);
            k++;
            if (k % 1000000 == 0) cout << k << endl;
        }
    }
    cout << "Finished producing projected cloud in: " << time(NULL) - startTime << " with " << cloud.size() << " points." << endl;
}
And this actually works quite nicely and quickly. As a next step I tried to use the point type PointXYZI, because I also need the intensity of the scanned points. And guess what, the program crashes at around 17000000 again, and again I have no idea why. Please help.

Ok, I solved it. Dr. Memory gave me the right hint by reporting a heap allocation error. After a bit of googling I enabled Large Addresses in Visual Studio (Properties -> Linker -> System -> Enable Large Addresses, i.e. the /LARGEADDRESSAWARE linker flag).
Everything works like a charm.
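That is consistent with a rough back-of-the-envelope estimate (assuming the usual PCL layouts, where pcl::PointXYZ is padded to 16 bytes and pcl::PointXYZI to 32 bytes for SSE alignment): 40 million XYZ points already occupy about 640 MB, and because push_back grows the underlying vector by reallocating, the old and new buffers briefly coexist. With intermediate clouds on top of that, a 32-bit process can exhaust its default 2 GB address space long before physical RAM runs out; /LARGEADDRESSAWARE raises that limit (to 4 GB for a 32-bit process on 64-bit Windows).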

Related

Has anyone encountered this striping artifact during "Ray Tracing in One Weekend"?

I am trying to port "Ray Tracing in One Weekend" to a Metal compute shader. I am encountering these stripe artifacts in my project:
Is it because my random generator does not work well?
Does anyone have a clue?
// Implemented with reference to https://www.pcg-random.org/
typedef struct { uint64_t state; uint64_t inc; } pcg32_random_t;

uint32_t pcg32_random_r(thread pcg32_random_t* rng)
{
    uint64_t oldstate = rng->state;
    rng->state = oldstate * 6364136223846793005ULL + rng->inc;
    uint32_t xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
    uint32_t rot = oldstate >> 59u;
    return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}

void pcg32_srandom_r(thread pcg32_random_t* rng, uint64_t initstate, uint64_t initseq)
{
    rng->state = 0U;
    rng->inc = (initseq << 1u) | 1u;
    pcg32_random_r(rng);
    rng->state += initstate;
    pcg32_random_r(rng);
}

// Generates a float between 0 and 1
float randomF(thread pcg32_random_t* rng)
{
    //return pcg32_random_r(rng)/float(UINT_MAX);
    return ldexp(float(pcg32_random_r(rng)), -32);
}

// Generates a float between x_min and x_max
float randomRange(thread pcg32_random_t* rng, float x_min, float x_max)
{
    return randomF(rng) * (x_max - x_min) + x_min;
}
I found this link. It says that the primary ray hit point is either above or below the sphere's surface a little bit due to floating-point precision error. It is a z-fighting problem.
Seeing these "circles aroune the image center" artifact is almost always a dead give-away for z-fighting (rays along any such "circle" always have the same distance to any flat object you're looking at, so they either all round up or all round down, giving you this artifact).
This z-fighting then translates to this artifact because sometimes all the rays on such a circle are inside the sphere (meaning their shadow rays get self-occluded by the sphere), or all outside (they do what they should). What you want to do is offset the ray origin of the shadow rays a tiny bit (say, 1e-3f) along the normal direction at the hitpoint.
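In shader code that amounts to something like the following sketch (hit, lightPos and the Ray struct are assumed names, not taken from the code above):

// Offset the shadow-ray origin along the surface normal so that
// floating-point error cannot leave it just below the surface.
float3 shadowOrigin = hit.position + 1e-3f * hit.normal;
float3 toLight = normalize(lightPos - shadowOrigin);
Ray shadowRay = { shadowOrigin, toLight };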
You may also want to read up on Carsten Waechter's article in Ray Tracing Gems 1 - it's for triangles, but explains the problem and potential solution very well.

ElasticFusion SLAM algorithm running with pose graph on TUM RGB-D benchmark

I downloaded the Freiburg desk dataset from the TUM RGB-D SLAM Dataset and Benchmark and converted it to '.klg', which is the custom format of the SLAM algorithm. I loaded this klg file into ElasticFusion and ran the SLAM algorithm. The 3D reconstruction output seems good enough while it runs.
Now I want to build the 3D reconstruction from the already built trajectory information.
I retrieved the trajectory data of the previous run from the '.freiburg' file and converted it to the format desired by ElasticFusion. I just changed the timestamp from seconds to microseconds by multiplying it by 1000000, and split the variables using "," instead of the " " space.
I ran the algorithm this time with the "-p" flag and the path to the trajectory file. Below is my command:
/path_to_EF/./ElasticFusion -l /path_to_data/rgbd_dataset_freiburg1_desk/test2.klg -p /path_to_data/rgbd_dataset_freiburg1_desk/modified_freiburg.txt
I am expecting to get the same point cloud, but the result I am getting with the given data is far from expected.
As you can see, its accuracy and reconstruction level are far worse than in the previous run.
I do not have a problem with the trajectory. The graph below shows that the trajectory I retrieved from the previous run is close to the ground-truth data provided by the TUM RGB-D Benchmark.
Even when I run it with the ground-truth data, it does not build a nice 3D reconstruction. What can be the reason, and what am I missing to get such a result?
Good suggestions and answers will be appreciated.
I successfully ran the code with the trajectory file.
(Your timestamp should be an integer, and all parameters should be separated with spaces.)
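For example, a minimal conversion sketch (assuming the standard TUM "timestamp tx ty tz qx qy qz qw" lines with the timestamp in seconds; the file names are placeholders):

#include <fstream>
#include <sstream>
#include <string>

int main()
{
    std::ifstream in("freiburg_trajectory.txt");  //placeholder input name
    std::ofstream out("modified_freiburg.txt");
    std::string line;
    while (std::getline(in, line))
    {
        if (line.empty() || line[0] == '#') continue;  //skip comment lines
        double t, tx, ty, tz, qx, qy, qz, qw;
        std::istringstream ss(line);
        ss >> t >> tx >> ty >> tz >> qx >> qy >> qz >> qw;
        //integer microseconds, space-separated
        out << (unsigned long long)(t * 1e6) << " "
            << tx << " " << ty << " " << tz << " "
            << qx << " " << qy << " " << qz << " " << qw << "\n";
    }
}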
Then modify ElasticFusion/GUI/src/Tools/GroundTruthOdometry.cpp:
void GroundTruthOdometry::loadTrajectory(const std::string & filename)
{
    std::ifstream file;
    std::string line;
    file.open(filename.c_str());
    while (!file.eof())
    {
        unsigned long long int utime;
        float x, y, z, qx, qy, qz, qw;
        std::getline(file, line);
        int n = sscanf(line.c_str(), "%llu %f %f %f %f %f %f %f", &utime, &x, &y, &z, &qx, &qy, &qz, &qw);
        if (file.eof())
            break;
        assert(n == 8);
        Eigen::Quaternionf q(qw, qx, qy, qz);
        Eigen::Vector3f t(x, y, z);
        Eigen::Isometry3f T;
        T.setIdentity();
        T.pretranslate(t).rotate(q);
        camera_trajectory[utime] = T;
    }
}

Eigen::Matrix4f GroundTruthOdometry::getTransformation(uint64_t timestamp)
{
    Eigen::Matrix4f pose = Eigen::Matrix4f::Identity();
    if (last_utime != 0)
    {
        std::map<uint64_t, Eigen::Isometry3f>::const_iterator it = camera_trajectory.find(last_utime);
        if (it == camera_trajectory.end())
        {
            last_utime = timestamp;
            return pose;
        }
        pose = camera_trajectory[timestamp].matrix();
    }
    else
    {
        std::map<uint64_t, Eigen::Isometry3f>::const_iterator it = camera_trajectory.find(timestamp);
        Eigen::Isometry3f ident = it->second;
        pose = Eigen::Matrix4f::Identity();
        camera_trajectory[last_utime] = ident;
    }
    last_utime = timestamp;
    return pose;
}
Basically this just disables the M matrix; you can try it out.
I took 3 scans: left-to-right, down-to-up and back-to-front. I observed that although the trajectory file seems correct, the reconstruction goes wrong. When I move the camera along the x axis, in ElasticFusion it moves along the z axis, and similarly for the other axes. I found the transformation matrix between the two frames manually and applied it to the translation and rotation. It started to work afterwards.
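In code, such a change of basis looks roughly like this (a sketch; the permutation below is only an assumed example, the actual matrix depends on your data):

//example axis permutation (assumed): remap (x, y, z) -> (y, z, x)
Eigen::Matrix3f R;
R << 0, 1, 0,
     0, 0, 1,
     1, 0, 0;
//apply the change of basis to translation and rotation
Eigen::Vector3f t_fixed = R * t;
Eigen::Quaternionf q_fixed(R * q.toRotationMatrix() * R.transpose());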

Drawing image(PGraphics) gives unwanted double image mirrored about x-axis. Processing 3

The code is supposed to fade and copy the window's image to a buffer f, then draw f back onto the window translated, rotated, and scaled. I am trying to create an effect like the feedback loop you get when you point a camera plugged into a TV at the TV.
I have tried everything I can think of and logged every variable I could think of, and still it just seems like image(f,0,0) is doing something wrong or unexpected.
What am I missing?
Pic of the double image mirrored about the x-axis:
PGraphics f;
int rect_size;
int midX;
int midY;

void setup() {
  size(1000, 1000, P2D);
  f = createGraphics(width, height, P2D);
  midX = width/2;
  midY = height/2;
  rect_size = 300;
  imageMode(CENTER);
  rectMode(CENTER);
  smooth();
  background(0, 0, 0);
  fill(0, 0);
  stroke(255, 255);
}

void draw() {
  fade_and_copy_pixels(f); //fades window pixels and then copies pixels to f
  background(0, 0, 0); //without this the corners don't get repainted
  //transform the display window (instead of f)
  pushMatrix();
  float scaling = 0.90; //x>1 makes the image bigger
  float rot = 5; //angle in degrees
  translate(midX, midY); //makes it so rotations are always around the center
  rotate(radians(rot));
  scale(scaling);
  imageMode(CENTER);
  image(f, 0, 0); //weird double image must have something not working around here
  popMatrix(); //returns window matrix to normal
  int x = mouseX;
  int y = mouseY;
  rectMode(CENTER);
  rect(x, y, rect_size, rect_size);
}

//fades window pixels and then copies pixels to f
void fade_and_copy_pixels(PGraphics f) {
  loadPixels(); //load window pixels; don't need because I am only reading pixels?
  f.loadPixels(); //loads the feedback loop's pixels
  //Loop through every pixel in the window.
  //It is faster to grab data from the pixels[] array, so don't use get and set; use this.
  for (int i = 0; i < pixels.length; i++) {
    //////////////FADE PIXELS in window and COPY to f:///////////////
    color p = pixels[i];
    //get color values: mask, then shift
    int r = (p & 0x00FF0000) >> 16;
    int g = (p & 0x0000FF00) >> 8;
    int b = p & 0x000000FF; //no need for shifting
    //reduce the value of each color proportionally;
    //fade_percent is between 0-1, with 0 fading to totally transparent and 1 not fading at all;
    //min is 0.0039 (when using the floor function and 255 as colorMode for colors)
    float fade_percent = 0.005; //0.05 = 5%
    int r_new = floor(float(r) - (float(r) * fade_percent));
    int g_new = floor(float(g) - (float(g) * fade_percent));
    int b_new = floor(float(b) - (float(b) * fade_percent));
    //maybe later rewrite in a way that saves what the difference is and rounds it differently,
    //e.g. faster at first and slower later;
    //round doesn't work because it never first subtracts one to get the ball rolling;
    //floor always subtracts at least 1 from each value each time; can't just subtract 1 every n loops;
    //keep a list of all the pixels as floats? too much memory?
    //I'll stick with floor for now;
    //the lowest percent that makes a difference with floor is 0.0039?... because that's slightly more than 1/255
    //shift back and OR together
    p = 0xFF000000 | (r_new << 16) | (g_new << 8) | b_new; //OR-ing the new values back into AARRGGBB
    f.pixels[i] = p;
    ////////pixels now copied
  }
  f.updatePixels();
}
This is a weird one. But let's start with a simpler MCVE that isolates the problem:
PGraphics f;

void setup() {
  size(500, 500, P2D);
  f = createGraphics(width, height, P2D);
}

void draw() {
  background(0);
  rect(mouseX, mouseY, 100, 100);
  copyPixels(f);
  image(f, 0, 0);
}

void copyPixels(PGraphics f) {
  loadPixels();
  f.loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    color p = pixels[i];
    f.pixels[i] = p;
  }
  f.updatePixels();
}
This code exhibits the same problem as your code, without any of the extra logic. I would expect this code to show a rectangle wherever the mouse is, but instead it shows a rectangle at a position reflected over the X axis. If the mouse is on the top of the window, the rectangle is at the bottom of the window, and vice-versa.
I think this is caused by the P2D renderer being OpenGL, which has an inverted Y axis (0 is at the bottom instead of the top). So it seems like when you copy the pixels over, they go from screen space to OpenGL space... or something. That definitely seems buggy though.
For now, there are two things that seem to fix the problem. First, you could just use the default renderer instead of P2D. That seems to fix the problem.
Or you could get rid of the for loop inside the copyPixels() function and just do f.pixels = pixels; for now. That also seems to fix the problem, but again it feels pretty buggy.
If somebody else (paging George) doesn't come along with a better explanation by tomorrow, I'd file a bug on Processing's GitHub. (I can do that for you if you want.)
Edit: I've filed an issue here, so hopefully we'll hear back from a developer in the next few days.
Edit Two: Looks like a fix has been implemented and should be available in the next release of Processing. If you need it now, you can always build Processing from source.
An easier fix that works like a charm:
add f.beginDraw(); before and f.endDraw(); after using f:
loadPixels(); //load windows pixels. dont need because I am only reading pixels?
f.loadPixels(); //loads feedback loops pixels
// Loop through every pixel in window
//it is faster to grab data from pixels[] array, so dont use get and set, use this
f.beginDraw();
and
f.updatePixels();
f.endDraw();
Processing must know when it is drawing into a buffer and when it is not.
In this image you can see that it works.

Storing motion vectors from calculated optical flow in a practical way which enables reconstruction of subsequent frames from initial keyframes

I am trying to store the motion detected by optical flow for the frames of a video sequence, and then use these stored motion vectors to predict the already known frames using just the first frame as a reference. I am currently using two Processing sketches: the first sketch draws a motion vector for every pixel grid cell (each 10 pixels wide and high). This is done for every frame in the video sequence. A vector is only drawn in a grid cell if there is sufficient motion detected. The second sketch aims to reconstruct the video frames crudely from just the initial frame of the video sequence combined with the motion-vector information from the first sketch.
My approach so far is as follows: I can determine the size, position and direction of each motion vector drawn in the first sketch from four variables. By creating four arrays (two for the motion vector's x and y coordinates and another two for its length in the x and y directions), every time a motion vector is drawn I append each of the four variables to these arrays. This is done for each grid cell throughout an entire frame where a vector is drawn, and for each frame in the sequence, via for loops. Once the arrays are full, I save them to a text file as a list of strings.

I then load these strings from the text file into the second sketch, along with the first frame of the video sequence. I load the strings into variables within a while loop in the draw function and convert them back into floats. I increment a variable by one each time the draw function is called; this moves on to the next frame. (I used a specific number as a separator in my text files, which appears at the end of every frame; the loop searches for this number and then increments the variable by one, thus breaking the while loop, and the draw function is called again for the subsequent frame.) For each frame, I can draw 10 by 10 pixel boxes and move them by the parameters read from the text files of the first sketch.

My problem is simply this: how do I draw the motion of a particular frame without letting what I blitted to the screen for the previous frame affect what will be drawn for the next frame? My only way of getting my 10 by 10 pixel box is the get() function, which reads pixels that are already drawn to the screen.
Apologies for the length and complexity of my question. Any tips would be very much appreciated! I will add the code for the second sketch. I can also add the first sketch if required, but it's rather long and a lot of it is not my own. Here is the second sketch:
import processing.video.*;

Movie video;
PImage[] naturalMovie = new PImage[0];
String xlengths[];
String ylengths[];
String xpositions[];
String ypositions[];
int a = 0;
int c = 0;
int d = 0;
int p;
int gs = 10;

void setup() {
  size(640, 480, JAVA2D);
  xlengths = loadStrings("xlengths.txt");
  ylengths = loadStrings("ylengths.txt");
  xpositions = loadStrings("xpositions.txt");
  ypositions = loadStrings("ypositions.txt");
  video = new Movie(this, "sample1.mov");
  video.play();
  rectMode(CENTER);
}

void movieEvent(Movie m) {
  m.read();
  PImage f = createImage(m.width, m.height, ARGB);
  f.set(0, 0, m);
  f.resize(width, height);
  naturalMovie = (PImage[]) append(naturalMovie, f);
  println("naturalMovie length: " + naturalMovie.length);
  p = naturalMovie.length - 1;
}

void draw() {
  if (naturalMovie.length >= p && p > 0) {
    if (c == 0) {
      image(naturalMovie[0], 0, 0);
    }
    d = c;
    while (c == d && c < xlengths.length) {
      float u, v, x0, y0;
      u = float(xlengths[a]);
      v = float(ylengths[a]);
      x0 = float(xpositions[a]);
      y0 = float(ypositions[a]);
      if (u != 1.0E-19) {
        //stroke(255, 255, 255);
        //line(x0, y0, x0+u, y0+v);
        PImage box;
        box = get(int(x0 - gs/2), int(y0 - gs/2), gs, gs);
        image(box, x0 - gs/2 + u, y0 - gs/2 + v, gs, gs);
        if (a < xlengths.length - 1) {
          a += 1;
        }
      } else if (u == 1.0E-19) {
        if (a < xlengths.length - 1) {
          c += 1;
          a += 1;
        }
      }
    }
  }
}
Word to the wise: most people aren't going to read that wall of text. Try to "dumb down" your posts so they get to the details right away, without any extra information. You'll also be better off if you post an MCVE instead of only giving us half your code. Note that this does not mean posting your entire project. Instead, start over with a blank sketch and only create the most basic code required to show the problem. Don't include any of your movie logic, and hardcode as much as possible. We should be able to copy and paste your code onto our own machines to run it and see the problem.
All of that being said, I think I understand what you're asking.
How do I draw the motion of a particular frame without letting what I blitted to the screen for the previous frame affect what will be drawn for the next frame? My only way of getting my 10 by 10 pixel box is the get() function, which reads pixels that are already drawn to the screen.
Separate your program into a view and a model. Right now you're using the screen (the view) to store all of your information, which is going to cause you headaches. Instead, store the state of your program into a set of variables (the model). For you, this might just be a bunch of PVector instances.
Let's say I have an ArrayList<PVector> that holds the current position of all of my vectors:
ArrayList<PVector> currentPositions = new ArrayList<PVector>();

void setup() {
  size(500, 500);
  for (int i = 0; i < 100; i++) {
    currentPositions.add(new PVector(random(width), random(height)));
  }
}

void draw() {
  background(0);
  for (PVector vector : currentPositions) {
    ellipse(vector.x, vector.y, 10, 10);
  }
}
Notice that I'm just hardcoding their positions to be random. This is what your MCVE should do as well. And then in the draw() function, I'm simply drawing each vector. This is like drawing a single frame for you.
Now that we have that, we can create a nextFrame() function that moves the vectors based on the ArrayList (our model) and not what's drawn on the screen!
void nextFrame() {
  for (PVector vector : currentPositions) {
    vector.x += random(-2, 2);
    vector.y += random(-2, 2);
  }
}
Again, I'm just hardcoding random movement, but you would read these values from your file. Then we just call the nextFrame() function as the last line in the draw() function:
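void draw() {
  //same draw() as in the sketch above, with one line added at the end
  background(0);
  for (PVector vector : currentPositions) {
    ellipse(vector.x, vector.y, 10, 10);
  }
  nextFrame(); //advance the model only after drawing the current state
}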
If you're still having trouble, I highly recommend posting an MCVE similar to mine and posting a new question. Good luck.

Vector out of range

I am trying to draw bounding boxes around contours using OpenCV. This is a real-time application where all the images are grabbed from a camera in real time, and the following is the important part of the code:
RTMotionDetector.h
vector<vector<Point>> *contours;
vector<vector<Point>> *contoursPoly;
RTMotionDetector.cpp
RTMotionDetector::RTMotionDetector(void)
{
    current = new Mat();
    currentGrey = new Mat();
    canny = new Mat();
    next = new Mat();
    absolute = new Mat();
    cam1 = new VideoCapture();
    cam2 = new VideoCapture();
    contours = new vector<vector<Point>>();
    contoursPoly = new vector<vector<Point>>();
    boundRect = new vector<Rect>();
}

double RTMotionDetector::getMSE(Mat I1, Mat I2)
{
    Mat s1;
    //Find difference
    cv::absdiff(I1, I2, s1); // |I1 - I2|
    imshow("Difference", s1);
    //Do Canny to get edges
    cv::Canny(s1, *canny, 30, 30, 3);
    imshow("Canny", *canny);
    //Find contours
    findContours(*canny, *contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    //System::Windows::Forms::MessageBox::Show("" + contours->size());
    //Draw contours
    drawContours(*current, *contours, -1, Scalar(0, 0, 255), 2);
    for (int i = 0; i < contours->size(); i++)
    {
        cv::approxPolyDP(Mat((*contours)[i]), (*contoursPoly)[i], 3, true);
        //boundRect[i] = boundingRect(contoursPoly[i]);
    }
}
As soon as the following part gets executed, I get an error:
cv::approxPolyDP(Mat((*contours)[i]),(*contoursPoly)[i],3,true);
Here is the error I am getting.
If I comment out that piece of code, then there are no issues. I know this is an out-of-bounds issue, but I really can't find a fix, maybe because I am new to Windows programming.
It is very important that contours stays a pointer instead of a local variable, because a local variable slowed the program down unbelievably.
You need to find which access to which vector has gone beyond its bounds.
You loop up to the size of contours,
for(int i=0;i<contours->size();i++)
but then access (*contoursPoly)[i].
I would hazard a guess that contoursPoly has gone beyond its bounds, which you can check by breaking into the debugger as suggested.
Changing the loop to
for(int i=0;i<contours->size() && i<contoursPoly->size();i++)
might solve the immediate problem.
Here
(*contoursPoly)[i]
you try to access something that doesn't exist: contoursPoly was constructed empty and never resized, so it has no element at index i.
What's more, the documentation says:
C++: void approxPolyDP(InputArray curve, OutputArray approxCurve, double epsilon, bool closed)
...
approxCurve - (...) The type should match the type of the input curve (...)
Here you have input - Mat and output - vector< Point >. Maybe that works too, IDK.
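One minimal fix along those lines, as a sketch (keeping the member pointers from the question): size the output vector to match the input before the loop, so that (*contoursPoly)[i] exists:

//give contoursPoly one (initially empty) polygon per contour
contoursPoly->resize(contours->size());
for (size_t i = 0; i < contours->size(); i++)
{
    cv::approxPolyDP(Mat((*contours)[i]), (*contoursPoly)[i], 3, true);
}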
