I'm using RingGeometry in my scene. At the start the rings are rendered perfectly, but if I rotate the scene, artifacts appear on the rings.
I tried CircleGeometry and got the same result.
The circles and the scene have an alpha channel because I need to vary the rings' opacity.
(Screenshots: a ring and a circle, each shown before and after rotation; the artifacts appear only after rotating.)
Thank you @Marquizzo, that was it.
I had an invisible circle at the same position (used for raycaster.intersectObjects); even though it was at opacity 0, it was causing z-fighting.
I moved this invisible circle along the z-axis and the render is perfect now.
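A minimal sketch of that fix (the names pickCircle and ring are placeholders, not from the original scene):
// Even at opacity 0, the picking circle still writes to the depth buffer,
// so two coplanar meshes z-fight; a small z offset removes the overlap.
pickCircle.position.z = ring.position.z - 0.001; // any small offset works
// Alternatively, pickCircle.material.depthWrite = false should also stop
// the invisible mesh from competing for the depth buffer.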
This question already has answers here: 90 degree field of view without distortion in THREE.PerspectiveCamera
Is this an FOV issue with my perspective camera? In my scene, spheres look egg- or oval-shaped rather than spherical when they reach the edges of the screen. Does anyone know why this happens?
It sounds like you've encountered one of the unfortunate realities of 3D.
In any 3-dimensional scene, the view from a given point is most naturally thought of as a sphere. When we render a scene, we're rendering a piece of that sphere, but we need to somehow convert that piece of a sphere into a flat rectangle, since our computer screens are flat, not round.
So, in order to render a 3D scene as a rectangle, the software needs to use a projection. For 3D rendering, the most common projection is probably a rectilinear projection, also called a gnomonic projection. (On Wikipedia, see "Rectilinear lens" for a discussion of rectilinear projections in photography, and "Gnomonic projection" for a discussion of rectilinear projections in mapmaking.)
The biggest advantage of a rectilinear projection is that straight lines in the scene appear as straight lines in the rendering. A big disadvantage is that objects far from the center are distorted: small circles get turned into large ovals.
This phenomenon is an unalterable mathematical fact that no software will ever be able to overcome. However, there are things you may be able to do to mitigate the situation. One option is to use a narrower field of view. Another option is to use a different projection; the answers here have a few suggestions for how to do that: Three.js - Fisheye effect
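For the narrower-field-of-view option, a minimal three.js sketch (assuming a PerspectiveCamera named camera):
camera.fov = 45; // narrower than e.g. 90 degrees: edge distortion shrinks, but so does the visible scene
camera.updateProjectionMatrix(); // must be called after changing fov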
I am currently using three.js and trying to create a 3D experience with a semi-transparent material that can be viewed from all angles. I've noticed that, depending on the camera angle, only certain portions of the mesh are semi-transparent and show the content behind them. In the example below I've created two half cylinders and applied the same transparent material with the Stack Overflow logo. The half cylinder on the left properly shows the logo on the closest surface, as well as the surface behind it. The half cylinder on the right only shows the logo on the closest surface and fails to render the logo that wraps behind it. However, it does properly render the background image, so the material is still treated as transparent:
If I spin the orbital camera around 180 degrees, the side that originally failed to see through now works, and the other side exhibits the wrong behavior. This leads me to believe it's related to the camera position / depth sorting. The material in this case is a standard MeshPhongMaterial with transparent set to true, side set to DoubleSide, and a single map for the transparent Stack Overflow logo. The geometry is an open-ended CylinderGeometry. Any help would be greatly appreciated!
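For reference, a minimal sketch of the setup as described (the texture path and variable names are placeholders):
const texture = new THREE.TextureLoader().load('logo.png'); // transparent logo texture
const material = new THREE.MeshPhongMaterial({
  map: texture,
  transparent: true,
  side: THREE.DoubleSide,
});
// Open-ended half cylinder: the last three arguments are openEnded,
// thetaStart and thetaLength (Math.PI covers half the circumference)
const geometry = new THREE.CylinderGeometry(1, 1, 2, 32, 1, true, 0, Math.PI);
scene.add(new THREE.Mesh(geometry, material));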
In three.js I have some plant objects that are just double-sided faces with a texture on them. The texture has transparent pixels, and I have alphaTest set to 0.5. On one side of the face the texture shows and the transparent pixels are really transparent. On the other side, the transparent pixels are black, not transparent.
I have tried turning depthTest off. That does remove the black, but it introduces a lot of new and even worse problems. Still, it might be a clue that this has something to do with depth.
I also tried a custom depth shader with alphaTest set to 0.5, but that does not appear to do anything.
It is also not a lighting issue; I have tried lighting the dark side with several types of lights, but with no results.
It was a side effect of the OutlineEffect. When I turned that off, the plants were transparent on both sides. Luckily you can switch it on or off per material, so I do not need to remove it entirely.
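A sketch of that per-material switch, assuming the OutlineEffect from the three.js examples, which reads per-material settings from userData (plantTexture is a placeholder):
// Double-sided, alpha-tested material for the plants
const plantMaterial = new THREE.MeshLambertMaterial({
  map: plantTexture,
  alphaTest: 0.5,
  side: THREE.DoubleSide,
});
// Tell the OutlineEffect to skip this material; others keep their outlines
plantMaterial.userData.outlineParameters = { visible: false };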
With a camera inside a cylinder I capture an image. I want to detect whether there is any deformation due to a collision outside, and also on which side the collision occurs. The image inside the cylinder contains a lot of dots which form a grid. What is the best way to do this?
A simple way to detect the collision is to subtract the image without collision from the real image. If the result isn't "zero", something changed and probably a collision occurred. But this doesn't tell me on which side the cylinder deformed.
I already tried to do a projection of the points onto a plane, but I couldn't get it to work.
In this link you can find a question I posted with the projection problem and all the information about it: Projection of a image from inside a cylinder to a plane 2D [Matlab]
One idea is to use regionprops on the image and see which part of the image deformed, but I want to do something a little more complex: I want to measure the deformation, to get an idea of how much the cylinder deformed during the impact. This is the reason I thought about projecting onto the plane and measuring the distance the points moved. Do you have any idea how to do this in a simpler way?
Can someone help me, please?
Here's a little code/pseudo-code to try to help. In words:
I would subtract the before and after images and take the absolute value of the difference image. Then I would apply a threshold to decide whether a difference is just noise rather than a real change. Next, I would find the center of mass of the difference (weighted by its magnitude), which can be done easily with the Image Processing Toolbox (regionprops). The center of mass of the variation is a good estimate of where a "collision" occurred, i.e. a deformation in the cylinder.
So that would be something along the lines of:
% Convert to double so the subtraction can go negative before abs();
% with uint8 images, negative differences would otherwise clip to zero
diffIm = abs(double(originalIm) - double(afterIm));
% Threshold below which a difference is treated as noise (experiment-specific)
threshold = someNumber;
% Zero out sub-threshold pixels; keep the image 2-D so regionprops still works
diffIm(diffIm <= threshold) = 0;
% Tell regionprops that the whole image is one region by passing it an array
% of ones the size of the image, with diffIm as the measurement image
props = regionprops(ones(size(diffIm)), diffIm, 'WeightedCentroid');
% WeightedCentroid is the center of mass, weighted by the grayscale image diffIm
You now have the location of the centroid of deformation in your image space, and all you would need is a map to convert that to cylinder space (if you needed that), otherwise you could just plot the centroid over the original image for a visual output of where the code expects the collision occurred.
On another note, if you have control of your experimental setup, I would expect a checkerboard pattern to give you better results than the dots (because the dots are very spaced out, and if the collision only affects the white space you might not be able to detect it at all). A checkerboard means you have more edges that can be displaced, which is the brunt of what would be detected anyway. A checkerboard may also be easier to map to a plane if you were still trying to do that, because you know all the edges are either parallel or intersecting at right angles, and evenly spaced.
I have a math/vector/matrix question that I can't seem to work out.
I have 4 points in 3D space that represent the bounds of a surface.
I have written a raycast algorithm to get the intersection location of the mouse "ray" against the rectangle in the 3D scene.
The rectangle in the scene has a rotation and translation matrix applied to it so it can be moved anywhere in the scene, and my raycast system does correctly get the ray hit location on the surface.
My problem is that I now need to take the ray hit location, which is in world space, and work out where on the 2D surface of the rectangle the hit is.
I cannot work out how to do this.
Your hit point is in world space. To get the point in the same coordinate system as the original 4 points, calculate the inverse of the rotation-and-translation matrix and multiply the hit point by this inverse matrix.
The resulting point will be in the same coordinate system as the 4 points that represent the bounds of the surface.
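As a sketch in three.js terms (modelMatrix and hitPoint are assumed names; any matrix library works the same way):
// worldToLocal undoes the rectangle's rotation and translation
const worldToLocal = new THREE.Matrix4().copy(modelMatrix).invert();
const localHit = hitPoint.clone().applyMatrix4(worldToLocal);
// localHit is now in the rectangle's own coordinate system, the same space as
// the original 4 corner points; if the rectangle lies in its local XY plane,
// localHit.x and localHit.y are the 2D surface coordinates.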