Rendering images and voxelizing the images

I am using the ShapeNet dataset. From this dataset, I have 3D models in .obj format. I rendered images of these models using the pyrender library, which gives me an image like this:
Now I am using raycasting to voxelize this image. The voxel model I get looks like this:
I cannot understand why I am getting the white or light brown artifacts at the boundary of the object.
The only explanation I could come up with is that pixels at the boundary of the object blend two colors, so when I traverse the image as a numpy array I read an average of those two colors, which produces the artifacts. But I am not sure this is the correct reason.
If anyone has any idea what the reason could be, please let me know.
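If the blending explanation is correct, the artifacts come from anti-aliasing in the renderer: edge pixels mix the object color with the background. One way to test this is to snap every sampled pixel to the nearest of the model's known flat colors before voxelizing, so blended edge pixels cannot introduce new colors (or render without anti-aliasing, if the renderer allows it). A minimal NumPy sketch, where the palette of base colors is a hypothetical input:

import numpy as np

def snap_to_palette(img, palette):
    # Map each pixel to its closest palette color (Euclidean distance in RGB),
    # so anti-aliased edge pixels cannot create blended voxel colors.
    px = img.reshape(-1, 3).astype(float)
    pal = np.asarray(palette, dtype=float)
    idx = np.argmin(((px[:, None, :] - pal[None, :, :]) ** 2).sum(axis=-1), axis=1)
    return pal[idx].reshape(img.shape).astype(img.dtype)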

Related

Binarized grayscale image contains too much noise

I currently have a digital pathology image like this:
First, I convert the image to grayscale with the following code:
img = imread('DigitalPathology8.png');
figure;
imshow(img)
hsv = rgb2hsv(img);
s = hsv(:,:,2);   % use the saturation channel as the "grayscale" image
And I got this grayscale image:
Then I try to binarize this grayscale image with the following code:
bw = imbinarize(s, 'global');   % global threshold computed with Otsu's method
figure
subplot(2,1,1)
imshow(s)
subplot(2,1,2)
imshow(bw)
I got the image like this:
What's wrong with my code? When I applied the same algorithm to other images like this:
I get a binarized image in which only the blue cells are white and everything else, including the background, is black. So I expected the same result after applying the same code to the first image I mentioned.
Could someone please help me out?
You would be better off using rgb2gray() (see its documentation) for the conversion:
grey = rgb2gray(img);
This should get you something like this:
Instead of plain global thresholding, I would recommend a more sophisticated method such as Otsu's, which will get you much better results:
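In MATLAB, Otsu's threshold is what graythresh computes (bw = imbinarize(grey, graythresh(grey));). The method itself is short enough to sketch; for illustration, a NumPy version that picks the threshold maximizing the between-class variance of a uint8 grayscale image:

import numpy as np

def otsu_threshold(gray):
    # Histogram and bin probabilities of the uint8 image.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                    # background class probability
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]                        # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(256)
    # Between-class variance; Otsu picks the threshold that maximizes it.
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return int(np.argmax(sigma_b))

# Usage: bw = gray > otsu_threshold(gray)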
However, if you only want to extract the blue cells rather than a simple thresholded version of your image, you should use a completely different approach, such as maximum-entropy (MaxEntropy) thresholding on the grayscale image. This will give you something like this:
and this:
This thresholding method does not seem to be included in MATLAB, but a plugin can be found.
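If you would rather not depend on a plugin, the method (Kapur's maximum-entropy threshold) is straightforward to implement from the histogram; here is an illustrative NumPy version for a uint8 grayscale image:

import numpy as np

def max_entropy_threshold(gray):
    # Kapur's method: choose the split that maximizes the sum of the
    # entropies of the background and foreground histograms.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0      # background distribution
        q1 = p[t:][p[t:] > 0] / p1      # foreground distribution
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Usage: bw = gray >= max_entropy_threshold(gray)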
You could also try a completely different approach and detect the blue dots by thresholding based on color similarity:
With this approach you set every pixel to white whose color distance to the reference blue is smaller than a given threshold. This should give you something like this (red markings represent the foreground of the image):
Reference color:
For this approach I took the RGB color (17.3, 32.5, 54.5) as the reference color, and my maximum distance was 210. If you have ImageJ, you can try this approach interactively; a while back I wrote a plugin for that. As you can see, this approach also picks up the wrong cells, which is caused by the large distance threshold and the chosen reference color. These errors can be reduced by choosing a more appropriate reference color and a smaller distance threshold.
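For reference, this color-distance test is only a few lines in any array language; an illustrative NumPy version using the reference color and maximum distance from above:

import numpy as np
from PIL import Image

img = np.asarray(Image.open('DigitalPathology8.png').convert('RGB'), dtype=float)
ref = np.array([17.3, 32.5, 54.5])               # reference blue from above
dist = np.sqrt(((img - ref) ** 2).sum(axis=-1))  # per-pixel Euclidean RGB distance
mask = dist < 210.0                              # white where the pixel is close enough to blue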

Project Tango C API mapping specific 3D point to color

I am using the Project Tango C API. I subscribed to the color image and depth image callbacks (TangoService_connectOnPointCloudAvailable and TangoService_connectOnFrameAvailable), so I have a TangoPointCloud and a matching TangoImageBuffer. I have rendered them separately and they are valid. So I understand that I can now loop through each 3D point in the TangoPointCloud and do whatever I want with it. The trouble is that I also want the corresponding color for each 3D point.
The standard Tango examples cover a lot of ground, such as drawing depth and color separately or applying an OpenGL texture to the depth image, but there is no simple sample that maps a 3D point to its color.
I tried TangoSupport_projectCameraPointToDistortedPixel, but it gives strange results. I also tried the TangoXYZij approach, but it is obsolete.
If anyone has achieved this, please help; I have wasted two days going back and forth on this.
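For what it's worth, once a point is expressed in the color camera's frame, mapping it to a pixel is plain pinhole projection with the fx, fy, cx, cy values from TangoCameraIntrinsics. This sketch (plain Python for illustration, not Tango API code) ignores lens distortion and the depth-to-color frame transform, both of which still have to be applied:

def project_point(x, y, z, fx, fy, cx, cy):
    # Pinhole projection: (x, y, z) must already be in the color camera frame,
    # with z > 0 pointing away from the camera.
    u = int(round(fx * x / z + cx))
    v = int(round(fy * y / z + cy))
    return u, v   # sample the TangoImageBuffer at row v, column u if in bounds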

Detect quadrilateral from grayscale image

I'm looking for a method to detect quadrilaterals in grayscale images like this.
My current solution is based on HoughLines and has two problems:
As it is a parametric method, small changes in the input image give me two different rectangles.
The outputs are not precise, as the borders of the rectangle in the input image are thick.
Can you recommend another method for this? I am currently looking at this article, but it seems to be a slow method.
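One alternative worth trying before anything heavier is contour-based fitting, which tends to be more stable than HoughLines when the borders are thick: binarize, take the dominant contour, and approximate it with a four-vertex polygon. An illustrative OpenCV sketch (the file name and the 0.02 epsilon factor are placeholders to tune):

import cv2

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
quad_contour = max(contours, key=cv2.contourArea)           # largest blob
peri = cv2.arcLength(quad_contour, True)
approx = cv2.approxPolyDP(quad_contour, 0.02 * peri, True)  # polygon simplification
if len(approx) == 4:
    corners = approx.reshape(4, 2)   # the four corner points of the quadrilateral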

Google Maps Polygon Representation

I used Google Maps to make a project that records flood incidents in a certain area. I used polygons to represent those floods. Since the project is for planning purposes, we are required to output all of the historical flood data on a single map. My problem is that if I simply output all the polygons, the map looks messy and cluttered. So I was wondering what method I could use to represent these polygons in a better fashion. We were advised to use heatmaps, but I can't seem to find tutorials on how to turn polygons into heatmaps. Any suggestions would be appreciated. Thanks!
To turn a polygon into a heatmap, render the polygons in black with high transparency into a white bitmap. This should result in a grayscale image, which will be darker where many polygons overlap. Then convert the gray values of the bitmap into the hue value of a corresponding semi-transparent color bitmap.
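A sketch of that pipeline (names, canvas size, and colormap are placeholder choices; polygons is assumed to be a list of vertex lists already projected into pixel coordinates): each polygon deposits one unit of "ink", overlaps accumulate, and a colormap turns density into hue:

import numpy as np
from PIL import Image, ImageDraw
from matplotlib import cm

W, H = 800, 600
acc = np.zeros((H, W), dtype=float)
for poly in polygons:                         # poly: [(x0, y0), (x1, y1), ...]
    layer = Image.new('L', (W, H), 0)
    ImageDraw.Draw(layer).polygon(poly, fill=1)
    acc += np.asarray(layer, dtype=float)     # darker where polygons overlap
acc /= acc.max()                              # normalize overlap counts to [0, 1]
rgba = (cm.jet(acc) * 255).astype(np.uint8)   # map density to a hue ramp (RGBA)
Image.fromarray(rgba).save('heatmap.png')     # overlay semi-transparently on the map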
Why did the rendering look messy? Did you try rendering filled polygons with high transparency and no outline? That should result in areas that are more prone to flooding being more "highlighted".

How to use UV mapping in three.js

I use a wood texture image in my model. By default the texture is stretched over the model, as you can see on woodark. When I change the repeat, the texture stretches even more, and I do not understand why. I have searched for a basic explanation of how to use UV mapping correctly in my model, but I have only found examples with colored pixels.
Thanks for any answers.
You should make sure your textures have power-of-two dimensions (i.e. 256x256, 512x512, etc.). Textures of arbitrary dimensions (NPOT) cause all kinds of mapping trouble in WebGL.
If you are unable to resize textures server-side, you can do it client-side. This link has some sample javascript code, as well as other relevant information: http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences
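If you do have server-side control, the resize itself is tiny. For illustration, a small Python/Pillow helper that rounds both dimensions up to the next power of two (rounding down or to the nearest power also works; just pick one and stay consistent):

from PIL import Image

def resize_to_pot(path_in, path_out):
    # Round each dimension up to the next power of two and resample.
    img = Image.open(path_in)
    pot = lambda n: 1 << max(0, (n - 1).bit_length())
    img.resize((pot(img.width), pot(img.height)), Image.LANCZOS).save(path_out)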
