How do you tile a texture across a surface in React VR?

I have a 100m x 100m box acting as a floor in a React VR test I'm working on. I'd like to apply a texture to it, but the tile texture just stretches over the entire surface rather than tiling as desired. Here is my component code, nothing special:
<Box
  dimWidth={100}
  dimDepth={100}
  dimHeight={0.5}
  texture={asset('check_floor_tile.jpg')}
  style={{
    color: '#333333',
    transform: [{translate: [0, -1, 0]}]
  }}
  lit
/>
I've had a look for examples without success; any help would be appreciated. Thanks.

You can now tile a texture across a surface by specifying repeat on the texture property of any component that extends BasicMesh (Box, Plane, Sphere, Cylinder, Model).
The functionality was added to React VR via this PR.
<Plane
  texture={{
    ...asset('texture.jpg'),
    repeat: [4, 4],
  }}
/>
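Applied to the Box from the original question, it would presumably look like this (the repeat counts are an arbitrary choice; pick whatever tile density you want):
<Box
  dimWidth={100}
  dimDepth={100}
  dimHeight={0.5}
  texture={{
    ...asset('check_floor_tile.jpg'),
    repeat: [50, 50], // tile 50 times along width and depth (arbitrary)
  }}
  style={{
    color: '#333333',
    transform: [{translate: [0, -1, 0]}]
  }}
  lit
/>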

Related

Is instancing applicable to a Three.js scene consisting of ExtrudeGeometry meshes with varying geometries?

I have a Three.js scene consisting of many buildings which are formed by stacking ExtrudeGeometry meshes (think buildings in Mapbox GL JS):
I'm creating these meshes using THREE.Shape and THREE.ExtrudeGeometry (I'm using react-three-fiber):
function coordsToShape(coords) {
  const shape = new Shape();
  let [x, z] = coords[0];
  shape.moveTo(x, z);
  for (const [x, z] of coords.slice(1)) {
    shape.lineTo(x, z);
  }
  return shape;
}

function Floor(props) {
  const {coords, bottom, top, color} = props;
  const shape = coordsToShape(coords);
  const geom = new ExtrudeGeometry(shape, {depth: top - bottom, bevelEnabled: false});
  return (
    <mesh castShadow geometry={geom} position={[0, bottom, 0]} rotation-x={-Math.PI / 2}>
      <meshPhongMaterial color={color} />
    </mesh>
  )
}
Then I stack the floors to produce the scene:
export default function App() {
  return (
    <Canvas>
      { /* lights, controls, etc. */ }
      <GroundPlane />
      <Floor coords={coords1} bottom={0} top={1} color="skyblue" />
      <Floor coords={coords2} bottom={1} top={3} color="pink" />
      <Floor coords={coords3} bottom={0} top={1} color="aqua" />
      <Floor coords={coords4} bottom={1} top={3} color="orange" />
    </Canvas>
  )
}
Full code/demo here. This results in one mesh for the ground plane and one for each building section, so five total.
I've read that using instancing to reduce the number of meshes is a good idea. Is instancing relevant to this scene? Most instancing examples show identical geometries with colors, positions and rotations varying. But can the geometry vary? Should I be using mergeBufferGeometries? But if I do that, will I still get the performance wins? Since I have coordinate arrays already, I'd also be happy using them to construct a large buffer of coordinates directly.
Is instancing relevant to this scene?
Instancing in general is applicable if you are going to render a large number of objects with the same geometry and material but with different transformations (and other per-instance properties such as color).
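As a generic illustration (not the building scene from the question, and assuming a reasonably recent three.js that provides InstancedMesh.setColorAt), instancing draws many copies of one geometry with per-instance matrices and colors in a single draw call:
import * as THREE from 'three';

const count = 500;
const mesh = new THREE.InstancedMesh(
  new THREE.BoxGeometry(1, 1, 1),   // the ONE shared geometry
  new THREE.MeshPhongMaterial(),    // the ONE shared material
  count
);

const dummy = new THREE.Object3D();
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
  dummy.position.set(Math.random() * 50, 0, Math.random() * 50);
  dummy.updateMatrix();
  mesh.setMatrixAt(i, dummy.matrix);                      // per-instance transform
  mesh.setColorAt(i, color.setHSL(i / count, 0.6, 0.5));  // per-instance color
}
scene.add(mesh); // scene is assumed to exist; all instances render in one draw call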
Merging geometries only makes sense if they can all share the same material. So if you need different colors per object, you can achieve this by defining vertex color data. Besides, merged geometries should be treated as static, since it is complicated to apply individual transformations once the data have been merged.
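A minimal sketch of that idea for the floors above (my own code, not the asker's; depending on your three.js version the helper is called mergeBufferGeometries or mergeGeometries, and shape1, shape2 and scene are assumed to exist):
import * as THREE from 'three';
import { mergeBufferGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

function coloredFloorGeometry(shape, bottom, top, color) {
  const geom = new THREE.ExtrudeGeometry(shape, {depth: top - bottom, bevelEnabled: false});
  geom.rotateX(-Math.PI / 2);
  geom.translate(0, bottom, 0);

  // Paint every vertex of this floor with the same color so the merged
  // geometry can use a single material with vertexColors enabled.
  const vertexCount = geom.attributes.position.count;
  const colors = new Float32Array(vertexCount * 3);
  const c = new THREE.Color(color);
  for (let i = 0; i < vertexCount; i++) colors.set([c.r, c.g, c.b], i * 3);
  geom.setAttribute('color', new THREE.BufferAttribute(colors, 3));
  return geom;
}

const merged = mergeBufferGeometries([
  coloredFloorGeometry(shape1, 0, 1, 'skyblue'),
  coloredFloorGeometry(shape2, 1, 3, 'pink'),
]);
const buildings = new THREE.Mesh(merged, new THREE.MeshPhongMaterial({vertexColors: true}));
scene.add(buildings); // all floors now share one mesh and one draw call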
Both approaches are intended to lower the number of draw calls in your app, which is an important performance metric. Try to use them whenever possible.

ThreeJS world unit to pixel conversion

Is there a way to compute the ratio between world units and pixels in Three.js? I need to determine how far apart (in world units) my objects need to be in order to be rendered 1 pixel apart on the screen.
The camera is looking at the (x,y) plane from a (0, 0, 10) coordinate, and objects are drawn in 2D on the (x,y) plane at z=0.
<Canvas gl={{ alpha: true, antialias: true }} camera={{ position: [0, 0, 10] }}>
I cannot seem to figure out what the maths are or if there is any function that does it already...
I'm thinking I might have to compare the size of the canvas in pixels and in world units, but I don't know how to get that either. There's also this raycasting solution, but surely there has to be a way to just compute it, no?
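A rough sketch of the calculation the question hints at (my own assumption of the setup, not code from the post): for a PerspectiveCamera looking straight at the z=0 plane, the visible height in world units follows from the vertical field of view and the camera distance:
function worldUnitsPerPixel(camera, canvasHeightPx) {
  const distance = camera.position.z;                      // camera at (0, 0, 10), plane at z = 0
  const vFov = THREE.MathUtils.degToRad(camera.fov);       // camera.fov is the vertical FOV in degrees
  const visibleHeight = 2 * distance * Math.tan(vFov / 2); // world units spanned vertically at z = 0
  return visibleHeight / canvasHeightPx;                   // world units covered by one screen pixel
}

// Objects would need roughly this much separation to land on different pixels:
// const minSpacing = worldUnitsPerPixel(camera, renderer.domElement.clientHeight);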

React Native Animated.event onScroll scale image from the top left corner rather center

Hello there. I have a square image and I've managed to get it to scale down to zero as the user scrolls down.
My ScrollView onScroll
onScroll={Animated.event(
[{nativeEvent: {contentOffset:{y: this.state.scrollY}}}])}
My interpolation works: once the user has scrolled past 164px and reaches 209px, the image is no longer visible.
let scale = this.state.scrollY.interpolate({inputRange: [164, 209], outputRange: [1, 0], extrapolate: "clamp"});
Here is my wrapper around the image with the transform (scale)
<Animated.View style={[styles.image, {opacity, transform: [{scale}]}]}>
Currently the image is scaled from its center point rather than from the top-left corner, which is what I want to achieve. Does anyone know how I can scale from the top-left corner with the transform style property? I've uploaded an image to demonstrate.
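One common workaround (a sketch of my own, not from the post) is to pair the scale with a compensating translation so the top-left corner appears to stay fixed; IMAGE_SIZE is a hypothetical constant for the square image's side length in pixels:
const IMAGE_SIZE = 200; // hypothetical side length of the square image

const scale = this.state.scrollY.interpolate({
  inputRange: [164, 209],
  outputRange: [1, 0],
  extrapolate: 'clamp',
});

// While the image shrinks (scale 1 -> 0) it loses IMAGE_SIZE pixels of size,
// so shifting it up and left by half of that keeps the top-left corner in place.
const translate = this.state.scrollY.interpolate({
  inputRange: [164, 209],
  outputRange: [0, -IMAGE_SIZE / 2],
  extrapolate: 'clamp',
});

<Animated.View
  style={[styles.image, {opacity, transform: [{translateX: translate}, {translateY: translate}, {scale}]}]}
/>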

How to render a texture on a triangle strip?

I'm creating an Android side-scrolling game using the libGDX library. I am using ImmediateModeRenderer20 in GL_TRIANGLE_STRIP mode to render 2D triangle strips that scroll infinitely. The rendering works fine; I have figured out how to use solid colors, gradients and alternating patterns on the strip.
Is there any way to render a triangle strip but overlay it with a .png or a Texture or something like that?
I have looked into the texCoord(...) method in the ImmediateModeRenderer20 docs but I haven't found any solid examples of how to use it.
If anyone needs any code snippets or images, let me know.
Yes, it's possible; I've recently attempted the same.
The rendering loop looks like this (the color, texCoord and vertex calls are repeated for every vertex of the strip):
texture.bind();
immediateModeRenderer20.begin(camera().combined, GL20.GL_TRIANGLE_STRIP);
// per vertex: its colour first, then its texture coordinate, then its position
immediateModeRenderer20.color(new Color(1, 1, 1, 1));
immediateModeRenderer20.texCoord(textureCoordinate.x, textureCoordinate.y);
immediateModeRenderer20.vertex(point.x, point.y, 0f);
immediateModeRenderer20.end();
But the important thing is that you build your texture coordinates to match your triangles. In my case I would draw a rope like this one:
http://imgur.com/i0ohFoO
from a texture of a straight rope. To texture each triangle you will need texture coordinates x and y - remember that textures use different coordinates: from 0.0 to 1.0 for both x and y.
http://imgur.com/wxQ93KO
So your successive triangle vertices will need textureCoord values of:
x: 0.0, y: 0.0
x: 0.0, y: 1.0
x: triangle length, y: 0.0
x: triangle length, y: 1.0
and so on.

Three.js pixel perfect at z=0 plane

Is there any way to configure the camera in Three.js so that when a 2D object (line, plane, image) is rendered at z=0, it doesn't bleed (from perspective) into neighbouring pixels?
Ex:
var plane = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
plane.position.x = 4;
plane.position.y = 3;
scene.add(plane);
...
// Get canvas pixel information
context.readPixels(....);
If you examine the data from readPixels, I always find that the pixel bleeds into its surrounding pixels (e.g. 3,3,0 may contain some color information), but I would like it to be pixel perfect when the element being drawn lies on the z=0 plane.
You probably want to use THREE.OrthographicCamera for the 2D stuff instead of THREE.PerspectiveCamera. That way it is not affected by perspective projection.
Which pixels get rendered depends on where your camera is. If your camera is at z=1, for example, a lot of pixels will get rendered. If you move your camera to z=1000 then, due to perspective, maybe only one pixel of your geometry will get rendered.
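A minimal sketch of that setup (my own assumptions: renderer, scene and material already exist, and there is no devicePixelRatio scaling): size the OrthographicCamera in pixels so that one world unit maps to exactly one screen pixel:
const width = renderer.domElement.width;
const height = renderer.domElement.height;

const camera = new THREE.OrthographicCamera(
  width / -2, width / 2,    // left, right
  height / 2, height / -2,  // top, bottom
  1, 1000                   // near, far
);
camera.position.z = 10;

// With this mapping a 1x1 plane placed on half-pixel centres fills exactly one pixel.
const plane = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
plane.position.set(4.5, 3.5, 0);
scene.add(plane);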
