Converting object space normal mapping to tangent space normal mapping - three.js

I need to convert object space normal mapping (1st image) to tangent space normal mapping (2nd image).
HAVE: [object space normal map image]
NEED: [tangent space normal map image]
Is there some way to do that?

Related

UV Map color change

I have this UV map; it shows a black and white surface. Other normal maps work fine. What is wrong with this one?
This one works as requested:
Where is the problem? Are the colors incorrect?
The difference is between object space normal mapping (1st image) and tangent space normal mapping (2nd image).
You can read more about it at http://www.surlybird.com/tutorials/TangentSpace/ and http://docs.cryengine.com/display/SDKDOC4/Tangent+Space+Normal+Mapping
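For what it's worth, the core of such a conversion is a per-texel change of basis: express each object-space normal in the tangent/bitangent/normal (TBN) frame of the surface at that texel. Below is a minimal Python sketch (not three.js) of that idea; it assumes you can already evaluate the mesh's TBN basis for every texel, and tbn_at is a hypothetical placeholder you would have to implement for your asset (e.g. by rasterizing tangents into UV space).

```python
# Minimal sketch (Python, not three.js): convert an object-space normal map to
# tangent space by a per-texel change of basis. `tbn_at` is a hypothetical
# placeholder -- the real per-texel TBN has to come from your mesh's UV layout.
import numpy as np
from PIL import Image

def tbn_at(u, v):
    # Placeholder: return a 3x3 matrix whose columns are the object-space
    # tangent, bitangent and normal at texel (u, v). For a flat +Z-facing
    # plane with standard UVs this is just the identity.
    return np.eye(3)

img = np.asarray(Image.open("normal_object_space.png").convert("RGB"), dtype=np.float32)
h, w, _ = img.shape
out = np.empty_like(img)

for v in range(h):
    for u in range(w):
        n_obj = img[v, u] / 255.0 * 2.0 - 1.0      # decode [0,255] -> [-1,1]
        n_tan = tbn_at(u, v).T @ n_obj             # object space -> tangent space
        n_tan /= np.linalg.norm(n_tan) + 1e-8
        out[v, u] = (n_tan * 0.5 + 0.5) * 255.0    # re-encode to [0,255]

Image.fromarray(out.astype(np.uint8)).save("normal_tangent_space.png")
```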

Matching scaled and translated binary mask to noisy image of 2d object in MATLAB

So I have a 300x500 matrix A which contains a binary image of some object (background 0, object 1), and a noisy image B depicting the same object. I want to match the binary mask A to the image B. The object in the mask has exactly the same shape as the object in the noisy image. The problem is that the images have different sizes (both the planes themselves and the objects depicted on them). Moreover, the object in the mask is located in the middle of the plane, whereas in image B it is translated. Does anyone know a simple solution for how I can match these images?
Provided you don't rotate or scale your object, the peak in the cross-correlation should give you the shift between the two objects.
From the Signal Processing Toolbox you can use xcorr2(A, B) to do this. The help even has it as one of the examples.
The peak position indicates the offset to get from one to the other. The fact that one input is noisy will introduce some uncertainty in your answer, but this is inevitable as they do not exactly match.
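A Python analogue of the same idea, as a sketch with synthetic data (scipy.signal.correlate2d standing in for xcorr2; the array sizes and noise level are made up for illustration):

```python
import numpy as np
from scipy.signal import correlate2d

np.random.seed(0)

# Synthetic stand-ins for the binary mask A and the noisy image B.
A = np.zeros((60, 100)); A[25:35, 45:55] = 1.0   # object centred in A
B = np.zeros((80, 120)); B[50:60, 20:30] = 1.0   # same object, translated
B += 0.1 * np.random.randn(*B.shape)             # noise

# Subtract the means so the flat background does not bias the correlation.
c = correlate2d(B - B.mean(), A - A.mean(), mode="full")

peak = np.unravel_index(np.argmax(c), c.shape)
# With mode="full", zero shift sits at index (A.shape - 1), so the offset of
# the object in B relative to its position in A is:
shift = (peak[0] - (A.shape[0] - 1), peak[1] - (A.shape[1] - 1))
print(shift)   # roughly (25, -25) for this synthetic example
```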

How is size used in THREE.PointCloudMaterial?

I am trying to understand how the "size" attribute in THREE.PointCloudMaterial translates to the size of its points on the screen.
With an orthographic camera set at (-1, 1, 1, -1) and size = 1, the points do not fill half the screen, so apparently this parameter does not refer to camera space. Nor does it refer to pixels; at size = 1, the points are much larger than 1 pixel.
Furthermore, if I resize the browser window, changing its height, the points scale in size, while if I resize the window's width, the points do not scale in size (!?!)
Any clarification on how "size" gets translated to screen or camera space would be greatly appreciated.
In case it is of interest why I need to know this: I am trying to overlay a PointCloud with a THREE.PointCloudMaterial (with which I can use a texture map) over a second PointCloud that uses a ShaderMaterial (where I can send the size parameter straight to gl_PointSize and know exactly how big each point will be). I am having trouble matching up the point sizes in the two clouds.
Thanks!
-mike
Here, the code starts at line 368.
It uses gl_PointSize to rasterize a vertex. Two options are present, one with attenuation and one without. Without attenuation, the point gets rasterized to a fixed size in pixels. With attenuation, the size is divided by the depth, which creates a perspective effect. This happens in the vertex shader.
Looking at the code, it seems that the size is expressed in world units in the case of attenuation, and as a fixed pixel size if not.
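As a rough illustration of the attenuated case, here is a back-of-the-envelope sketch. It assumes the attenuated size works out to roughly size * scale / depth with the scale factor derived from the canvas height; that is an assumption, not a quote of the three.js source, but it would also explain why only height resizes affect the point size.

```python
# Hypothetical check, assuming attenuation of the form
# pixel_size ~= size * scale / depth, with scale proportional to the canvas
# height (an assumption, not the verified three.js formula).
def attenuated_point_size_px(size_world, depth, canvas_height_px):
    scale = canvas_height_px / 2.0          # assumed height-derived scale factor
    return size_world * scale / depth

# A size-1 point 10 units deep on a 600 px tall canvas would be ~30 px;
# doubling the canvas height doubles it, while changing the width does nothing.
print(attenuated_point_size_px(1.0, 10.0, 600))   # 30.0
print(attenuated_point_size_px(1.0, 10.0, 1200))  # 60.0
```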

Any ideas on how to remove small abandoned pixels in a PNG using OpenCV or another algorithm?

I have a PNG image like this:
The blue color represents transparency, and each circle is a group of pixels. I would like to find the biggest group and remove all the small pixels that are not grouped with the biggest one. In this example, the biggest one is the red circle, and I will keep it. But the green and yellow ones are too small, so I will remove them. After that, I will have something like this:
Any ideas? Thanks.
If you consider only the size of the objects, use the following algorithm: label the connected components of the mask image of the objects (all object pixels white, transparent ones black). Then compute the areas of the connected components and filter them. At this point you have a label map and a list of authorized labels, so you can read the label map and overwrite the mask image, setting every pixel to white if it has an authorized label.
OpenCV does not seem to have a labelling function, but cvFloodFill can do the same thing with several calls: for each unlabelled white pixel, call FloodFill with this pixel as the marker, and store the result in an array (of the size of the image) by assigning each newly filled pixel its label. Repeat this as long as unlabelled pixels remain.
Otherwise you can implement the connected-component labelling for binary images yourself; the algorithm is well known and easy to implement (maybe start from MATLAB's bwlabel).
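A note on the labelling step: recent OpenCV releases do ship a connected-components function, so in Python you can replace the flood-fill loop with cv2.connectedComponentsWithStats. The sketch below uses that shortcut; the file name and the idea of building the mask from the alpha channel are assumptions about your data.

```python
import cv2
import numpy as np

# Label the connected components of the object mask, then keep only the
# largest one. Assumes an RGBA PNG where the background is transparent.
img = cv2.imread("circles.png", cv2.IMREAD_UNCHANGED)          # BGRA
mask = (img[:, :, 3] > 0).astype(np.uint8)                     # object = opaque pixels

num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)

# stats[0] describes the background; among the real components keep the biggest area.
largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
keep = np.where(labels == largest, 255, 0).astype(np.uint8)

result = cv2.bitwise_and(img, img, mask=keep)                  # small blobs are blanked out
cv2.imwrite("largest_only.png", result)
```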
The handiest way to filter objects if you have a priori knowledge of their size is to use morphological operators. In your case, with OpenCV, once you've loaded your image (OpenCV supports PNG), you have to do an "opening", that is, an erosion followed by a dilation.
The small objects (smaller than the structuring element you chose) will disappear with the erosion, while the bigger ones will remain and be restored by the dilation.
(reference here, cv::morphologyEx).
The shape of the big object might be altered. If you're only doing detection, that is harmless, but if you want your object to remain untransformed you'll need to apply a "top hat" transform.
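In Python the opening is only a couple of lines; the 15x15 elliptical kernel below is just an example value that you would tune to the size of the blobs you want to discard.

```python
import cv2

# Morphological opening: the erosion removes blobs smaller than the structuring
# element, the dilation restores the surviving (larger) blobs to roughly their
# original extent.
mask = cv2.imread("circles_mask.png", cv2.IMREAD_GRAYSCALE)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cv2.imwrite("opened.png", opened)
```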

Does anyone know how to draw dotted lines in OpenGL ES using a fragment shader

I tried to use this tutorial:
http://korkd.com/2012/02/15/dashed-lines/#comment-32
but I don't know what sourcePoint, mv and a_position are.
If you have any other suggestions, please help...
sourcePoint is the starting point of the line in world space. It is a uniform, which means that the same value is used for the entire draw operation.
mv (also a uniform) is the modelview matrix, which transforms a point from model space to world space, so that a_position is using the same coordinate system as sourcePoint. It is the same thing as u_modelViewProjectionMatrix but without the projection transformation.
a_position is a varying, which means that the vertex shader sets a value for each vertex, and then the fragment shader gets an interpolated value for each pixel. So the value the fragment shader receives will be the position of the pixel in world space.
If you are still confused, I suggest reading up on how shaders work. It can be a tad confusing at first.
