Is there a way to compute the ratio between world units and pixels in Three.js? I need to determine how many units apart my objects need to be in order to be rendered 1 pixel apart on the screen.
The camera looks at the (x, y) plane from (0, 0, 10), and objects are drawn in 2D on the (x, y) plane at z=0.
<Canvas gl={{ alpha: true, antialias: true }} camera={{ position: [0, 0, 10] }}>
I cannot seem to figure out what the maths are or if there is any function that does it already...
I'm thinking I might have to compare the size of the canvas in pixels and in world units, but I don't know how to get that either. There's also this raycasting solution, but surely there has to be a way to just compute it, no?
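For what it's worth, here is the kind of computation I imagine is needed; a minimal sketch, assuming a perspective camera looking straight down the z-axis at the plane of interest:

// World-units-per-pixel for a perspective camera, derived from the
// vertical field of view and the distance to the plane of interest.
function worldUnitsPerPixel(camera, distance, canvasHeightPx) {
  const vFov = (camera.fov * Math.PI) / 180;               // vertical FOV in radians
  const visibleHeight = 2 * Math.tan(vFov / 2) * distance; // frustum height at that depth
  return visibleHeight / canvasHeightPx;
}

// Camera at (0, 0, 10) looking at z=0, so distance = 10:
// const ratio = worldUnitsPerPixel(camera, 10, renderer.domElement.clientHeight);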
I have a set of coordinates of a 6-image Cubemap (Front, Back, Left, Right, Top, Bottom) as follows:
[ [160, 314], Front; [253, 231], Front; [345, 273], Left; [347, 92], Bottom; ... ]
Each image is 500x500 px, with [0, 0] being the top-left corner.
I want to convert these coordinates to their equivalents in an equirectangular projection, for a 2500x1250 px image.
I don't need to convert the whole image, just the set of coordinates. Is there any straightforward conversion for a specific pixel?
convert your image + 2D coordinates to a 3D normalized vector
For this to work as intended, the point (0,0,0) must be the center of your cube map. So basically you need to add the U and V direction vectors, scaled by your coordinates, to the 3D position of the texture point (0,0). The direction vectors are just unit vectors where each axis coordinate is one of {-1, 0, +1} and only one axis coordinate is non-zero for each vector. Each side of the cube map has one combination ... which one depends on your conventions, which we do not know, as you did not share any specifics.
use a Cartesian to spherical coordinate system transformation
you do not need the radius, just the two angles ...
convert the spherical angles to your 2D texture coordinates
This step depends on your 2D texture geometry. The simplest is a rectangular texture (I think that is what you mean by equirectangular), but there are other mappings out there with specific features, and each requires a different conversion. Here are a few examples:
Bump-map a sphere with a texture map
How to do a shader to convert to azimuthal_equidistant
For the rectangular texture you just scale the spherical angles to the texture resolution...
U = lon * Usize/(2*Pi)
V = (lat+(Pi/2)) * Vsize/Pi
plus/minus some orientation signs to match your coordinate systems; a sketch of the whole pipeline follows below.
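Here is a minimal JavaScript sketch of all three steps together. The face orientation vectors and the longitude offset are assumptions (they depend on your conventions), so adapt them to your layout:

// Map a pixel (px, py) on one cube map face to equirectangular (U, V).
function cubeToEquirect(face, px, py, faceSize, Usize, Vsize) {
  // normalize pixel coordinates to [-1, +1] across the face
  const a = (2 * px) / faceSize - 1;
  const b = (2 * py) / faceSize - 1;
  // step 1: pixel -> 3D direction; one axis is fixed at +/-1 per face
  // (these vectors are a guessed convention, adapt them to yours)
  let x, y, z;
  switch (face) {
    case 'front':  x =  a; y = -b; z = -1; break;
    case 'back':   x = -a; y = -b; z =  1; break;
    case 'left':   x = -1; y = -b; z = -a; break;
    case 'right':  x =  1; y = -b; z =  a; break;
    case 'top':    x =  a; y =  1; z =  b; break;
    case 'bottom': x =  a; y = -1; z = -b; break;
  }
  // step 2: Cartesian -> spherical angles (the radius is not needed)
  const lon = Math.atan2(x, -z);                  // longitude in (-Pi, Pi]
  const lat = Math.asin(y / Math.hypot(x, y, z)); // latitude in [-Pi/2, Pi/2]
  // step 3: scale the angles into the texture resolution
  const U = ((lon + Math.PI) * Usize) / (2 * Math.PI);
  const V = ((lat + Math.PI / 2) * Vsize) / Math.PI;
  return [U, V];
}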
BTW, just found this (possibly a duplicate QA):
GLSL Shader to convert six textures to Equirectangular projection
I have a box 100m x 100m to act as a floor in a React VR test I am working on. I'd like to add a texture to it, but the tile texture just stretches over the entire surface rather than tiling as desired. Here is my component code, nothing special:
<Box
  dimWidth={100}
  dimDepth={100}
  dimHeight={0.5}
  texture={asset('check_floor_tile.jpg')}
  style={{
    color: '#333333',
    transform: [{translate: [0, -1, 0]}]
  }}
  lit
/>
I've had a look for examples without success; any help would be appreciated. Thanks.
You can now tile a texture across a surface by specifying repeat on the texture property of any component that extends BasicMesh (Box, Plane, Sphere, Cylinder, Model).
The functionality has been added to React VR via this PR.
<Plane
  texture={{
    ...asset('texture.jpg'),
    repeat: [4, 4],
  }}
/>
I'm creating an Android side-scrolling game using the libGDX library. I am using ImmediateModeRenderer20 in GL_TRIANGLE_STRIP mode to render 2D triangle strips that scroll infinitely. The rendering works fine; I have figured out how to use solid colors, gradients, and alternating patterns on the strip.
Is there any way to render a triangle strip but overlay it with a .png or a Texture or something like that?
I have looked into the texCoord(...) method in the ImmediateModeRenderer20 docs, but I haven't found any solid examples on how to use it.
If anyone needs any code snippets or images, let me know.
Yes, it's possible; I've recently attempted the same.
The rendering loop looks roughly like this; color, texCoord, and vertex are emitted once per vertex of the strip:
texture.bind();
immediateModeRenderer20.begin(camera().combined, GL20.GL_TRIANGLE_STRIP);
immediateModeRenderer20.color(new Color(1, 1, 1, 1));
immediateModeRenderer20.texCoord(textureCoordinate.x, textureCoordinate.y);
immediateModeRenderer20.vertex(point.x, point.y, 0f);
immediateModeRenderer20.end();
But the important thing is that you build your texture coordinates to match your triangles. In my case I would draw a rope like this one:
http://imgur.com/i0ohFoO
from a texture of a straight rope. To texture each triangle you will need texture coordinates x and y; remember that textures use a different coordinate system, from 0.0 to 1.0 for both x and y.
http://imgur.com/wxQ93KO
So your triangle vertices will need textureCoord values of:
x: 0.0, y: 0.0
x: 0.0, y: 1.0
x: triangle length, y: 0.0
x: triangle length, y: 1.0
and so on.
Is there any way to configure the camera in Three.js so that when a 2D object (line, plane, image) is rendered at z=0, it doesn't bleed (from perspective) into other pixels?
Ex:
var plane = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
plane.position.x = 4;
plane.position.y = 3;
scene.add(plane);
...
// Get canvas pixel information
context.readPixels(....);
If you examine the data from readPixels, I always find that the pixel is rendering into its surrounding pixels (ex: 3,3,0 may contain some color information), but I would like it to be pixel-perfect if the element that is drawn is on the z=0 plane.
You probably want to use THREE.OrthographicCamera for the 2d stuff instead of THREE.PerspectiveCamera. That way they are not affected by perspective projection.
Which pixels get rendered depends on where your camera is. If your camera is, for example, at z=1, then a lot of pixels will get rendered. If you move your camera to z=1000 then, due to perspective, maybe only one pixel of your geometry will get rendered.
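For the pixel-perfect case, here is a minimal sketch of an orthographic setup in which one world unit maps to exactly one pixel (sizing the frustum to the full canvas is an assumption; adapt it to your viewport):

// Orthographic frustum sized to the canvas in pixels, so 1 world unit == 1 pixel.
const width = renderer.domElement.width;
const height = renderer.domElement.height;
const camera = new THREE.OrthographicCamera(
  -width / 2, width / 2,   // left, right
  height / 2, -height / 2, // top, bottom
  0.1, 100                 // near, far
);
camera.position.z = 10; // still looking at the z=0 plane
// A 1x1 plane spanning, say, x in [3, 4] and y in [3, 4] now covers
// exactly one pixel, so readPixels should report no bleed at z=0.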
Please check this neat piece of code I found:
glEnable(GL_LINE_SMOOTH);
glColor4ub(0, 0, 0, 150);
mmDrawCircle( ccp(100, 100), 20, 0, 50, NO);
glLineWidth(40);
ccDrawLine(ccp(100, 100), ccp(100 + 100, 100));
mmDrawCircle( ccp(100+100, 100), 20, 0, 50, NO);
where mmDrawCircle and ccDrawLine just draw these shapes [FILLED] somehow... (ccp means a point with the given x, y coordinates respectively).
My problem... yes, you guessed it: the line overlaps with the circle, and both are translucent (semi-transparent). So the final shape is there, but the overlapping part becomes darker and the overall shape looks ugly; i.e., I would be fine if I were drawing with 255 alpha.
Is there a way to tell OpenGL to render only one of the shapes in the overlapping parts?
(The shape is obviously a rectangle with rounded edges .. half-circles..)
You could turn on GL_DEPTH_TEST and render the line first and a little closer to the camera. When you then render the circle below, the fragments of the line won't be touched.
(You can also use the stencil buffer for an effect like this).
Note that this might still look ugly. If you want to use anti-aliasing, you should think quite hard about which blending modes you apply and in what order you render the primitives.