I have a question about displaying a 3D volume made from an array of 2D grayscale images using OpenGL.
To be more precise, I only want to display the central slices of the volume in x, y and z directions.
I have successfully done this just by looping through the volume data at the coordinates I want to show and painting them using GL_POINTS, as shown here (displaying the z direction central slice):
glBegin(GL_POINTS);                      // one begin/end pair, not one per point
for (x = 0; x < sizeX; x++) {
    for (y = 0; y < sizeY; y++) {
        color = volume(x, y, 0);         // grayscale intensity of this voxel
        glColor3f(color, color, color);
        glVertex3f(x, y, 0);
    }
}
glEnd();
I know the real-world dimensions of the voxels in mm, so I was thinking of displaying the voxels as cubes with those dimensions instead of GL_POINTS like I'm doing now. Is this a good way to do it? I have read somewhere that displaying voxels as cubes generally isn't a good idea.
Just to add, the volume consists of around 300 images with dimensions of 400x350px.
If your application is to display slices of dense volumetric data, I don't see the benefit of using either points or voxels. To me, the best way to achieve what you are asking for is to use simple planes that you texture-map with your image data. You mention the real-world dimensions of your grid data; if you want to visualize that, you could use boxes instead of planes. Using texture maps should be far more efficient for your purpose than GL_POINTS or a box per voxel.
There are many tutorials on the web that show how to do this (e.g. NeHe).
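As a minimal sketch of the textured-plane idea (in Python with PyOpenGL, using the same legacy fixed-function calls as the code above; slice_data, sizeX and sizeY are illustrative names for one grayscale slice stored as 8-bit values):

from OpenGL.GL import *   # PyOpenGL

def make_slice_texture(slice_data, sizeX, sizeY):
    # Upload one grayscale slice as a 2D texture (done once, not every frame).
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, sizeX, sizeY, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, slice_data)
    return tex

def draw_slice(tex, sizeX, sizeY, z):
    # One textured quad replaces the whole loop of GL_POINTS.
    glEnable(GL_TEXTURE_2D)
    glBindTexture(GL_TEXTURE_2D, tex)
    glBegin(GL_QUADS)
    glTexCoord2f(0, 0); glVertex3f(0,     0,     z)
    glTexCoord2f(1, 0); glVertex3f(sizeX, 0,     z)
    glTexCoord2f(1, 1); glVertex3f(sizeX, sizeY, z)
    glTexCoord2f(0, 1); glVertex3f(0,     sizeY, z)
    glEnd()
    glDisable(GL_TEXTURE_2D)

To show the real-world voxel size, you can scale the quad's vertices by the voxel dimensions in mm instead of using raw indices.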
I am working on fusing lidar and camera images in order to perform an object classification algorithm using a CNN.
I want to use the KITTI dataset, which provides synchronized lidar and RGB image data. A lidar is a 3D scanner, so its output is a 3D point cloud.
I want to use the depth information from the point cloud as a channel for the CNN, but I have never worked with point clouds, so I am asking for some help. Will projecting the point cloud onto the camera image plane (using the projection matrix provided by KITTI) give me the depth map that I want? Is the Python library pcl useful, or should I move to C++ libraries?
Thanks in advance for any suggestions.
I'm not sure what the projection matrix provided by KITTI includes, so the answer is: it depends. If the projection matrix only contains a transformation matrix, you cannot generate a depth map from it. The 2D image has distortion that comes from the camera lens, while the point cloud usually has none, so you cannot "precisely" map the point cloud onto the RGB image without the intrinsic and extrinsic parameters.
PCL is not required to do this.
A depth map is essentially a mapping of depth values onto the RGB image. You can treat each point in the point cloud (each laser return of the lidar) as a pixel of the RGB image. Therefore, I think all you need to do is find which point in the point cloud corresponds to the first pixel (top-left corner) of the RGB image, then read the depth values from the point cloud based on the RGB image resolution.
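As a rough sketch of that projection in Python/NumPy (assuming P is a 3×4 matrix mapping homogeneous lidar coordinates to image pixels, which is what KITTI's calibration matrices combine to give; all names are illustrative):

import numpy as np

def depth_map_from_points(points, P, height, width):
    # points: N x 3 lidar coordinates; P: 3 x 4 projection matrix.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous
    proj = (P @ pts_h.T).T                    # N x 3: (u*w, v*w, w)
    w = proj[:, 2]
    valid = w > 0                             # keep points in front of the camera
    u = np.round(proj[valid, 0] / w[valid]).astype(int)
    v = np.round(proj[valid, 1] / w[valid]).astype(int)
    depth = w[valid]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    dmap = np.full((height, width), np.nan)   # NaN where no lidar point lands
    for uu, vv, d in zip(u[inside], v[inside], depth[inside]):
        if np.isnan(dmap[vv, uu]) or d < dmap[vv, uu]:
            dmap[vv, uu] = d                  # keep the nearest return per pixel
    return dmap

The lidar is much sparser than the image, so the resulting map will have holes you may want to interpolate before feeding it to the CNN.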
You don't need the camera at all for this; it is all about the point cloud data. Say you have 10 million points and each point has x, y, z in meters (if the data is not in meters, convert it first). Then you need the position of the lidar: when you subtract the position of the car from all the points one by one, you move the lidar to the point (0,0,0), and then you can just print the points onto a white image.
The rest is simple math, and there are many ways to do it. The first that comes to my mind: think of RGB as a base-256 number. Say 1 cm is scaled to a change of 1 in blue, a 256 cm change equals a change of 1 in green, and a 256×256 = 65536 cm change equals a change of 1 in red. We know the camera is at (0,0,0), so if the RGB of a point is (1,0,0), that means it is 1×65536 + 0×256 + 0×1 = 65536 cm away from the camera. This could be done in C++. You can also use interpolation and closest-point algorithms to fill blanks if there are any.
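A tiny sketch of that base-256 encoding (in Python for brevity, though as the answer says it could equally be done in C++):

def depth_to_rgb(depth_cm):
    # Encode a depth in centimetres as base-256 digits (r, g, b).
    d = int(depth_cm)
    r = d // 65536          # 256*256 cm per unit of red
    g = (d // 256) % 256    # 256 cm per unit of green
    b = d % 256             # 1 cm per unit of blue
    return r, g, b

def rgb_to_depth(r, g, b):
    # Invert the encoding: depth in cm = r*65536 + g*256 + b.
    return r * 65536 + g * 256 + b

assert rgb_to_depth(*depth_to_rgb(65536)) == 65536   # the example above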
I'm trying to generate a height map for a spherical planet with Perlin noise. How can I make the left and right borders seamless? I have smoothed the height map at the poles, but I cannot figure out how to make the left and right sides wrap around.
This is how my textures look for now:
Mirroring (by y-axis)
This is great for making seamless background textures, but as mentioned, the texture must not contain distinct patterns, otherwise the mirroring becomes obvious. It can be used as a starting point for a texture generator.
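For illustration, the mirrored tile can be built in one line (NumPy sketch):

import numpy as np

def mirror_tile(tex):
    # Append the horizontal mirror; the left and right edges of the result
    # are now identical, so it wraps seamlessly in x.
    return np.hstack([tex, tex[:, ::-1]])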
Morphing
There are vector and raster morphs out there, depending on the content of the image. You can try a simple raster morph done by linear interpolation (the resolutions are the same, which is your case), but this can make the texture blurry, which can be disturbing on some images. For starters, you can try to morph the texture and its mirror together:
This is a cosine weight distribution (50%:50% at the sides and 100%:0% in the middle):
This is a constant weight distribution (50%:50%):
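A possible NumPy sketch of the cosine-weighted variant, blending a texture with its mirror so the left and right borders match while the middle stays untouched:

import numpy as np

def morph_with_mirror(tex):
    # Weight for the original texture: 0.5 at the left/right borders
    # (both edges then agree) and 1.0 in the middle (centre unchanged).
    w = tex.shape[1]
    mirror = tex[:, ::-1]
    x = np.arange(w)
    weight = 0.75 - 0.25 * np.cos(2.0 * np.pi * x / (w - 1))
    weight = weight.reshape(1, w, *([1] * (tex.ndim - 2)))  # broadcastable
    return weight * tex + (1.0 - weight) * mirror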
Adjusted texture generators
You can adjust your current texture generator to render seamlessly:
Create or use a seamless background texture (created by #1, #2, or even #3).
Add a random number of random features with a looped x axis, so if a feature goes out on the left it comes back in from the right, as sketched after this list:
x' = x % xs, where xs is the texture x-resolution.
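A minimal sketch of stamping one feature with a looped x axis (Python; feature is a small 2D array, names illustrative):

def draw_feature_wrapped(tex, x0, y0, feature):
    # Stamp feature onto tex, wrapping the x coordinate so anything that
    # goes out on one side comes back in on the other. Assumes the feature
    # fits vertically (the poles are handled separately, as you already do).
    xs = tex.shape[1]                 # texture x-resolution
    fh, fw = feature.shape[:2]
    for dy in range(fh):
        for dx in range(fw):
            tex[y0 + dy, (x0 + dx) % xs] = feature[dy, dx]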
I have a matrix 'A' with two columns containing 2D points (coordinates 'x' and 'y'). These points were earlier projected onto a plane from a 3D cloud, so they form a 2D shape of some object.
Second, I have a noisy image 'B' (a 4k × 4k matrix) with a similar (but translated and scaled) shape. What I want to do is correlate the points from matrix 'A' and use them as a binary mask for the object in image 'B'. Currently I don't have the slightest idea how to do it.
Thanks for any help.
Following off of what AnonSubmitter85 suggested with the pattern-recognition method, something like a SIFT (Scale-Invariant Feature Transform) detector might be helpful in the case of a scaled and rotated object. Similarly, MATLAB has a set of functions that do SURF (Speeded-Up Robust Features) detection:
http://www.mathworks.com/help/vision/examples/find-image-rotation-and-scale-using-automated-feature-matching.html
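The linked example uses MATLAB, but the same pipeline can be sketched in Python with OpenCV (assuming grayscale input images; everything beyond the OpenCV API is illustrative):

import cv2
import numpy as np

def estimate_scale_rotation(img_a, img_b):
    # Detect SIFT keypoints in both images, match the descriptors, and
    # estimate the similarity transform (rotation + scale + translation).
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC-fitted 2x3 similarity matrix (None if there are too few matches)
    M, inliers = cv2.estimateAffinePartial2D(src, dst)
    return M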
Hopefully this can stimulate some new ideas.
This question is regarding SIFT features and their use in MATLAB. I have some data containing the following information:
The x,y co-ordinates of each feature point
The scale at each feature point
The orientation at each feature point
The descriptor vector for each feature point. Each vector has 128 components.
I'm having a problem with drawing the keypoints onto the corresponding input image. Does anyone have an idea of how to do this? Are there any built-in functions I can use?
Thanks a lot.
If you just want to display the points themselves, read the x and y co-ordinates into separate arrays, display the image, use the hold on command, then plot the points. This will overlay your points on top of the image.
Assuming you have these co-ordinates already available and stored as x and y, and assuming you have your image available and stored as im, you can do this:
imshow(im);
hold on;
plot(x, y, 'b.');
Keep in mind that the x co-ordinates are horizontal and increase from the left to right. The y co-ordinates are vertical and increase from top to bottom. The origin of the co-ordinates is at the top left corner, and is at (1,1).
However, if you want to display the detection window, its size, and the orientation of the descriptor, then I suggest you take a look at the VLFeat library, as it has a nice function to do that for you.
Suggestion by Rafael Monteiro:
You can also modify each point so that it reflects the scale at which it was detected. If you want to do it this way, assume you have saved the scales in another array called scales, then try the following:
imshow(im);
hold on;
for i = 1 : length(x)
plot(x(i), y(i), 'b.', 'MarkerSize', 10*scales(i));
end
The constant factor of 10 is fairly arbitrary. I'm not sure how many scales you used to detect the keypoints, so if the scale is 1.0, this would display the keypoint at the default size when plotting. Play around with the number until you get results you're comfortable with. However, if scale doesn't matter to you, the first method is fine.
Basically, I am trying to achieve this: superimpose an arbitrary image onto a pre-defined uneven surface. (See examples below.)
I do not have a lot of experience with image processing or 3D algorithms, so here is the best method I can think of so far:
Predefine a set of coordinates (say, for a 10×10 grid we have 100 coordinates starting with (0,0), (0,10), (0,20), etc.). There will be 9×9 = 81 cells.
Record the transformation of each individual coordinate on the t-shirt image, e.g. (0,0) becomes (51,31), (0,10) becomes (51,35), etc.
Triangulate the original image into 81×2 = 162 triangles (two triangles for each cell). Transform each triangle of the image based on the coordinate transformations obtained in step 2 and draw it on the t-shirt image.
Problems/questions I have:
I don't know how to smooth out each triangle so that the image on the t-shirt does not look ragged.
Is there a better way to do it? I want to make sure I'm not reinventing the wheel before I proceed with an implementation.
Thanks!
This is called digital image warping. There was a popular graphics text on it in the 1990s (George Wolberg's Digital Image Warping, which grew out of his thesis). You can also find an article on it from Dr. Dobb's Journal.
Your process is essentially correct. If you work pixel by pixel rather than trying to use triangles, you'll avoid some of the problems you're facing. Scan across the pixels of the target bitmap, apply the local transformation based on the cell you're in to determine the coordinates of the corresponding pixel in the source bitmap, and copy that pixel over.
For a smoother result, do your coordinate transformations in floating point and interpolate the pixel values from the source image using something like bilinear interpolation.
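A minimal sketch of that pixel-by-pixel inverse mapping (Python/NumPy; transform(u, v) is a hypothetical function returning the floating-point source coordinates for a target pixel):

import numpy as np

def warp(source, transform, out_h, out_w):
    # For each target pixel, look up where it comes from in the source
    # and sample the source with bilinear interpolation.
    out = np.zeros((out_h, out_w))
    h, w = source.shape
    for v in range(out_h):
        for u in range(out_w):
            sx, sy = transform(u, v)
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                fx, fy = sx - x0, sy - y0
                top = (1 - fx) * source[y0, x0] + fx * source[y0, x0 + 1]
                bot = (1 - fx) * source[y0 + 1, x0] + fx * source[y0 + 1, x0 + 1]
                out[v, u] = (1 - fy) * top + fy * bot
    return out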
It's not really a solution to the problem, just a workaround: if you have a 3D model that represents the t-shirt, you can use DirectX/OpenGL and apply your image as a texture to the t-shirt. Then you can render the picture you want from any point of view.