Multiple scenes in VR mode and different cameras - three.js

I'm trying to implement this fiddle in an XR environment.
In the fiddle, the second scene is fixed to the screen, but it is not fixed to the Oculus/player camera...
Does anyone have a solution for keeping a scene or an object always in the top-right corner of the Oculus view? I suppose there is a mistake with the size and camera, but I cannot find it...
I'm not sure; I've racked my brain over this, and it's my first step into XR...
Portal second scene code:
import { useMemo, useRef, useState } from 'react'
import { Scene, Matrix4 } from 'three'
import { useThree, useFrame, createPortal } from '@react-three/fiber'
import { OrthographicCamera, useCamera } from '@react-three/drei'

function Viewcube() {
  const { gl, scene, camera, size } = useThree()
  const virtualScene = useMemo(() => new Scene(), [])
  const virtualCam = useRef()
  const ref = useRef()
  const [hover, set] = useState(null)
  const matrix = new Matrix4()
  useFrame(() => {
    // Keep the cube's orientation in sync with the inverse of the main camera
    matrix.copy(camera.matrix).invert()
    ref.current.quaternion.setFromRotationMatrix(matrix)
    // Render the main scene first, then the overlay scene on top of it
    gl.autoClear = true
    gl.render(scene, camera)
    gl.autoClear = false
    gl.clearDepth()
    gl.render(virtualScene, virtualCam.current)
  }, 1)
  return createPortal(
    <>
      <OrthographicCamera ref={virtualCam} makeDefault={false} position={[0, 0, 100]} />
      <mesh
        ref={ref}
        raycast={useCamera(virtualCam)}
        position={[size.width / 2 - 80, size.height / 2 - 80, 0]}
        onPointerOut={(e) => set(null)}
        onPointerMove={(e) => set(Math.floor(e.faceIndex / 2))}>
        {[...Array(6)].map((_, index) => (
          <meshLambertMaterial attachArray="material" key={index} color={hover === index ? 'hotpink' : 'white'} />
        ))}
        <boxGeometry args={[60, 60, 60]} />
      </mesh>
      <ambientLight intensity={0.5} />
      <pointLight position={[10, 10, 10]} intensity={0.5} />
    </>,
    virtualScene
  )
}
To summarize: I would like to display something fixed in the top-right corner of the Oculus view, even when moving my head, but I am stuck...
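For what it's worth, screen-space offsets such as size.width / 2 - 80 have no meaning in VR, where the headset pose drives the camera every frame. A common approach (a sketch in plain three.js, not a verified fix for this exact fiddle) is to parent the HUD object to the camera so that it stays head-locked:

// The camera must itself be in the scene graph for its children to render.
scene.add(camera);

const hud = new THREE.Mesh(
  new THREE.BoxGeometry(0.1, 0.1, 0.1),
  new THREE.MeshBasicMaterial({ color: 'hotpink' })
);
// Offset up and to the right, about a metre in front of the eyes.
hud.position.set(0.4, 0.3, -1);
camera.add(hud);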

Related

Three.js drag a model on x and z axis. React three fiber

I am trying to make models draggable in three.js. I want my model to follow my mouse when I move it. I am using react-three-fiber and @use-gesture/react. This is what I am trying to accomplish:
What I am trying to accomplish
Here is how my program looks:
My program
The difference is quite noticeable: in the good example, the model follows the mouse wherever it goes; in my program, that is not the case.
Here is my code for the cube:
const BasicOrangeBox = ({ setControlsDisabled, startPosition }: BasicOrangeBoxType) => {
  const { camera } = useThree();
  const [boxPosition, setBoxPosition] = useState(startPosition);
  const bind = useGesture({
    onDrag: ({ movement: [x, y] }) => {
      setControlsDisabled(true);
      setBoxPosition((prev) => {
        const newObj = { ...prev };
        newObj.x = newObj.x0 + x / 100;
        newObj.z = newObj.z0 + y / 100;
        return newObj;
      });
    },
    onDragEnd: () => {
      setControlsDisabled(false);
      setBoxPosition((prev) => {
        const newObj = { ...prev };
        newObj.x0 = newObj.x;
        newObj.z0 = newObj.z;
        return newObj;
      });
    },
  });
  return (
    <mesh {...bind()} position={[boxPosition.x, boxPosition.y, boxPosition.z]}>
      <boxGeometry />
      <meshBasicMaterial color={"orange"} />
    </mesh>
  );
};
Here is how I made a mesh cube draggable on only the x and z axes, as in the video.
Needed packages:
three
@react-three/fiber
@use-gesture/react
First, I created a plane that spans my whole viewport and assigned it to a useRef:
<mesh rotation={[MathUtils.degToRad(90), 0, 0]} ref={planeRef} position={[0, -0.01, 0]}>
  <planeGeometry args={[innerWidth, innerHeight]} />
  <meshBasicMaterial color={0xffffff} side={DoubleSide} />
</mesh>
Then I added that ref to a useContext so I can use it in different components.
Next, I got the raycaster from the useThree hook and the planeRef from the aforementioned useContext.
Then I used useGesture's onDrag and onDragEnd to enable and disable my OrbitControls.
Inside onDrag, I used the raycaster's intersectObjects method, passing an array with my plane as its only element. This gave me the x, y, z coordinates where my mouse intersects the plane (y is always 0).
Then I updated my box position.
Here is the full code snippet:
const BasicOrangeBox = ({ setControlsDisabled, startPosition }: BasicOrangeBoxType) => {
  const { raycaster } = useThree();
  const [boxPosition, setBoxPosition] = useState(startPosition);
  const planeRef = useContext(PlaneContext);
  const bind = useGesture({
    onDrag: () => {
      const intersects = raycaster.intersectObjects([planeRef]);
      if (intersects.length > 0) {
        const intersection = intersects[0];
        console.log(intersection.point.x);
        setBoxPosition({
          x: intersection.point.x,
          y: intersection.point.y,
          z: intersection.point.z,
        });
      }
      setControlsDisabled(true);
    },
    onDragEnd: () => {
      setControlsDisabled(false);
    },
  });
  return ( // @ts-ignore Ignores type error on next line
    <mesh {...bind()} position={[boxPosition.x, boxPosition.y, boxPosition.z]}>
      <boxGeometry />
      <meshBasicMaterial color={"orange"} />
    </mesh>
  );
};
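The PlaneContext used above is not shown in the answer; here is a minimal sketch of how it could be wired up (inferred from the snippet, so treat the exact shape as an assumption; the context value must be the plane mesh itself, since the drag handler calls raycaster.intersectObjects([planeRef])):

// PlaneContext.js — provides the drag plane to any component that needs it
import { createContext } from 'react';

export const PlaneContext = createContext(null);

// In the parent scene component, once the plane ref is populated:
// <PlaneContext.Provider value={planeRef.current}>
//   {/* the plane mesh with ref={planeRef}, BasicOrangeBox, controls, ... */}
// </PlaneContext.Provider>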

How to connect Redux-Saga with a Babylon React hook

I've been working on a project that uses a redux-saga React pattern to store and display the Babylon scene logic. My thought was to keep the Babylon code in a single JS file and then export it to a React component.
My question is: how can we send the data generated in the Babylon JS file by users outside of that file? (I have tried things like useState, but that only seems to work in React components, while my Babylon file only handles game logic.) I think if I can figure this out, I will be able to take further steps like connecting it with redux-saga.
My goal is, first of all, to bring params like the position x, y, z outside createScene.js so they can be used by redux-saga, so that if the user refreshes the page, the scene they created won't disappear.
React newbie here seeking suggestions, thanks in advance!
The React Babylon hook is below:
import SceneComponent from 'babylonjs-hook'
import styled from 'styled-components'
import 'App.css'
import { onRender, onSceneReady } from '../hooks/babylonjs/createScene'

const ThreeDEditPageMain = styled.div``

const ThreeDEditPage = () => (
  <ThreeDEditPageMain>
    <SceneComponent antialias onSceneReady={onSceneReady} onRender={onRender} id="my-canvas" />
  </ThreeDEditPageMain>
)

export default ThreeDEditPage
createScene.js is below:
import {
  ActionManager,
  ArcRotateCamera,
  Color3,
  ExecuteCodeAction,
  HemisphericLight,
  Mesh,
  MeshBuilder,
  StandardMaterial,
  Vector3,
  VertexBuffer,
} from '@babylonjs/core'

export const onSceneReady = scene => {
  // This creates and positions an arc-rotate camera (non-mesh)
  const camera = new ArcRotateCamera('camera1', 0.4, 0.4, 50, new Vector3(0, 5, -10), scene)
  // This targets the camera to scene origin
  camera.setTarget(Vector3.Zero())
  const canvas = scene.getEngine().getRenderingCanvas()
  // This attaches the camera to the canvas
  camera.attachControl(canvas, true)
  camera.wheelPrecision = 50
  // This creates a light, aiming 0,1,0 - to the sky (non-mesh)
  const light = new HemisphericLight('light', new Vector3(0, 1, 0), scene)
  // Default intensity is 1. Let's dim the light a small amount
  light.intensity = 0.7
  // Our built-in 'ground' shape.
  const ground = MeshBuilder.CreateGround(
    'ground',
    { width: 100, height: 100, subdivisions: 100 },
    scene,
  )
  ground.updateFacetData()
  // console.log(ground.facetNb)
  // Our built-in 'box' shape.
  const size = 4
  const box = MeshBuilder.CreateBox('box', { size }, scene)
  // Move the box upward 1/2 its height
  // box.position.y = 1
  box.position = new Vector3(size / 2, size / 2, size / 2)
  box.bakeCurrentTransformIntoVertices()
  box.isPickable = false
  const positions = ground.getVerticesData(VertexBuffer.PositionKind)
  // console.log(positions)
  const snappedPosition = new Vector3()
  box.position = snappedPosition
  scene.onPointerMove = e => {
    const pickingInfo = scene.pick(scene.pointerX, scene.pointerY)
    if (pickingInfo.hit && pickingInfo.pickedMesh.name === 'ground') {
      snappedPosition.x = Math.round(pickingInfo.pickedPoint.x)
      snappedPosition.y = Math.round(pickingInfo.pickedPoint.y)
      snappedPosition.z = Math.round(pickingInfo.pickedPoint.z)
    }
  }
  // click action for player
  ground.actionManager = new ActionManager(scene)
  ground.actionManager.registerAction(
    new ExecuteCodeAction(ActionManager.OnPickUpTrigger, () => {
      // player clicked
      console.log(
        `gen a new box at x:${snappedPosition.x}, y:${snappedPosition.y}, z:${snappedPosition.z}`,
      )
      const genBox = Mesh.CreateBox('box', 4, scene)
      genBox.position = new Vector3(snappedPosition.x, snappedPosition.y + 2, snappedPosition.z)
      const mat = new StandardMaterial('mat', scene)
      mat.diffuseColor = new Color3(Math.random(), Math.random(), Math.random()) // color stuff
      genBox.material = mat
    }),
  )
}

export function onRender(scene) {
}
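One common way to get data such as the generated box positions out of createScene.js is a small bridge module: the Babylon code emits events through it, and React (or a redux-saga bootstrap) registers a listener. A minimal sketch, with hypothetical names (setOnBoxCreated and emitBoxCreated are mine, not from the post):

// sceneEvents.js — hypothetical bridge between the Babylon logic and React/redux
let onBoxCreated = null

export const setOnBoxCreated = cb => {
  onBoxCreated = cb
}

export const emitBoxCreated = position => {
  if (onBoxCreated) onBoxCreated(position)
}

// In createScene.js, inside the ExecuteCodeAction handler after creating genBox:
//   emitBoxCreated({ x: genBox.position.x, y: genBox.position.y, z: genBox.position.z })
//
// In React (or a saga), register the listener and dispatch to the store:
//   setOnBoxCreated(pos => store.dispatch({ type: 'BOX_CREATED', payload: pos }))
// Persisting that slice of state (e.g. with localStorage or redux-persist) would let
// the scene be rebuilt after a page refresh.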

Memory leak, CSG import, THREEJS

I was able to get this example working: https://sbcode.net/threejs/engraving/.
I am now looking to engrave a mesh previously imported from a GLB file in the scene.
Below is my code:
// Loader/geometry imports (CSG comes from the sbcode engraving example's helper):
import * as THREE from 'three'
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader'
import { FontLoader } from 'three/examples/jsm/loaders/FontLoader'
import { TextGeometry } from 'three/examples/jsm/geometries/TextGeometry'

const loader = new GLTFLoader();
let sword
let engravedMesh, engravedCSG // declared here so regenerateGeometry() can see them
loader.load("scene/glb/object.glb", function (gltf) {
  sword = gltf.scene; // sword 3D object is loaded
  sword.scale.set(1, 1, 1);
  sword.position.y = 0;
  sword.position.x = 0;
  sword.position.z = 0;
  engravedMesh = sword.children[0]
  engravedCSG = CSG.fromMesh(engravedMesh)
  scene.add(sword);
  engraving()
});

let font
function engraving() {
  const loaderFont = new FontLoader()
  loaderFont.load('fonts/helvetiker_regular.typeface.json', function (f) {
    font = f
    regenerateGeometry()
  })
}

function regenerateGeometry() {
  let newGeometry
  newGeometry = new TextGeometry("AAAAAAAAAAAAAAAAAAAAAAAA", {
    font: font,
    size: 3,
    height: 3,
    curveSegments: 2,
  })
  newGeometry.center()
  //bender.bend(newGeometry, 'y', Math.PI / 16)
  newGeometry.translate(0, 0, 0)
  //scene.add(newGeometry)
  const textCSG = CSG.fromGeometry(newGeometry)
  var engraved = engravedCSG.subtract(textCSG)
  engravedMesh.geometry.dispose()
  engravedMesh.geometry = CSG.toMesh(
    engraved,
    new THREE.Matrix4()
  ).geometry
}
When I tried to execute it, my screen froze.
Is there something I did wrong?
Finally, it works with another GLB file.
I guess I built the sphere in Blender with too high a polygon count.
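That diagnosis is plausible: CSG cost grows quickly with triangle count, so a very dense mesh can lock up the page. If re-exporting a lighter GLB isn't an option, one workaround (a sketch, assuming the SimplifyModifier that ships with the three.js examples) is to decimate the geometry before building the CSG:

import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier'

// Remove roughly half of the vertices before handing the mesh to CSG;
// tune the ratio until the engraving still looks acceptable.
const modifier = new SimplifyModifier()
const removeCount = Math.floor(engravedMesh.geometry.attributes.position.count * 0.5)
engravedMesh.geometry = modifier.modify(engravedMesh.geometry, removeCount)
engravedCSG = CSG.fromMesh(engravedMesh)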

ThreeJS WebXR modifies camera properties on re-entering VR the second/third/nth time

I'm facing a very weird experience with the WebXR API: it changes the VR camera properties when I re-enter VR. The camera somehow cuts off my objects (shown below) when I re-enter VR mode the second, third, or nth time.
It always works properly (shown below) when I enter VR the first time.
I would like to know why the objects are getting cut off on the second/third/nth VR attempt, and also how to debug the WebXR immersive-vr camera properties.
I'm using very basic WebXR API code, as follows:
window.onload = function () {
  init();
  animate();
};

function init() {
  canvas = document.getElementById('vw_canvas');
  canvas.width = canvas.clientWidth;
  canvas.height = canvas.clientHeight;
  canvasCssWidth = canvas.style.width;
  canvasCssHeight = canvas.style.height;
  group = new THREE.Object3D(); // we create this to make it a parent of the camera object
  camera = new THREE.PerspectiveCamera(75, canvas.width / canvas.height, 1, 10000);
  // Default rotation order is XYZ. With XYZ, rotating the object around its Y axis also
  // changes the X and Z axes, so we give higher priority to the Z axis by using XZY.
  group.rotation.order = 'XZY';
  scene = new THREE.Scene();
  group.add(camera);
  scene.add(group);
  // add more 3D objects to scene
  renderer = new THREE.WebGLRenderer({ antialias: true, powerPreference: "high-performance" });
  renderer.setPixelRatio(canvas.devicePixelRatio);
  renderer.setSize(canvas.width, canvas.height);
  renderer.xr.enabled = true;
  renderer.xr.setReferenceSpaceType('local');
  canvas.appendChild(renderer.domElement);
  var WEBVR = {
    createButton: function (renderer) {
      function showEnterXR(/* device */) {
        var currentSession = null;
        function onSessionStarted(session) {
          session.addEventListener('end', onSessionEnded);
          renderer.xr.setSession(session);
          vrButton.style.backgroundImage = "url('icons/noVR.svg')";
          document.getElementById("vr-button-tooltip").setAttribute("tooltip", "Exit VR");
          currentSession = session;
          isVRpresenting = true;
          openVRMenu();
        }
        function onSessionEnded(event) {
          currentSession.removeEventListener('end', onSessionEnded);
          renderer.xr.setSession(null);
          vrButton.style.backgroundImage = "url('icons/yesVR.svg')";
          document.getElementById("vr-button-tooltip").setAttribute("tooltip", "Enter VR");
          currentSession = null;
          isVRpresenting = false;
          closeVRMenu();
        }
        vrButton.style.backgroundImage = "url('icons/yesVR.svg')";
        document.getElementById("vr-button-tooltip").setAttribute("tooltip", "Enter VR");
        isVRpresenting = false;
        vrButton.onclick = function () {
          if (runOnlyOnce) {
            makeVRMenuItems();
            runOnlyOnce = false;
          }
          if (currentSession === null) {
            var sessionInit = { optionalFeatures: ['local-floor', 'bounded-floor'] };
            navigator.xr.requestSession('immersive-vr', sessionInit).then(onSessionStarted);
          } else {
            currentSession.end();
          }
        };
      }
      function showVRNotFound() {
        vrButton.onclick = function () {
          // open VR popup that shows devices to use in order to experience VR
        };
        isVRavailable = false;
        vrButton.style.backgroundImage = "url('icons/noVR.svg')";
        document.getElementById("vr-button-tooltip").setAttribute("tooltip", "VR is not supported on your device");
        vrButton.onclick = null;
        // renderer.xr.setDevice( null );
        isVRpresenting = false;
      }
      if ('xr' in navigator) {
        isVRavailable = true;
        navigator.xr.isSessionSupported('immersive-vr').then(function (supported) {
          supported ? showEnterXR() : showVRNotFound();
        });
      } else {
        vrButton.onclick = function () {
          // open VR popup that shows devices to use in order to experience VR
        };
        isVRavailable = false;
        vrButton.style.backgroundImage = "url('icons/noVR.svg')";
        document.getElementById("vr-button-tooltip").setAttribute("tooltip", "VR is not supported on your device");
        isVRpresenting = false;
      }
    },
  };
  WEBVR.createButton(renderer);
}

function animate() {
  renderer.setAnimationLoop(animate);
  update();
}

function update() {
  renderer.render(scene, camera);
}
It seems the WebXR session takes the scene camera's parameters only the first time it enters VR. On the second and subsequent visits, the WebXR session resets the camera to its default render state. Hence, to keep the camera properties in sync with the scene camera, we need to call session.updateRenderState({ depthFar: 10000 });. In my case the scene camera had depthFar = 10000, but WebXR reset depthFar to the default 1000 on the second and subsequent visits in VR, which was the reason for the frustum culling (image in question).
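For example, the clipping planes can be re-applied inside the session-start handler shown above (a sketch; depthNear/depthFar simply mirror the scene camera's near/far planes):

function onSessionStarted(session) {
  session.addEventListener('end', onSessionEnded);
  renderer.xr.setSession(session);
  // Re-apply the scene camera's clipping planes on every entry into VR,
  // so re-entering doesn't fall back to the WebXR defaults (depthFar: 1000).
  session.updateRenderState({ depthNear: camera.near, depthFar: camera.far });
  // ...rest of the original handler unchanged
}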

How to create events using React Native

I'm making an application using React VR. If you don't know React VR: it's based on React Native with some other components, and it includes Three.js and other things specific to WebVR.
I've made a component named NavigateButton. Below is my code:
import React from 'react';
import { AppRegistry, asset, StyleSheet, Pano, Text, View, VrButton, Sphere } from 'react-vr';

export class NavigateButton extends React.Component {
  render() {
    return (
      <VrButton onClick={() => this.onNavigating()}>
        <Sphere radius={0.5} widthSegments={10} heightSegments={10} style={{ color: "red" }} />
      </VrButton>
    );
  }
  onNavigating() { // This method must raise an event
    console.log(this.props.to);
  }
}
If the user clicks on the VrButton (this is like an HTML5 button tag, but for VR, with a sphere inside it), an event must be raised to the place where I use the NavigateButton component. That's in the code below:
import React from 'react';
import { AppRegistry, asset, StyleSheet, Pano, Text, View, VrButton, Sphere } from 'react-vr';
import { NavigateButton } from './components/nativateButton.js';

let room = asset('360 LR/inkom_hal.jpg');

export default class MainComp extends React.Component {
  render() {
    return (
      <View>
        <Pano source={asset('360 LR/inkom_hal.jpg')} />
        <View style={{ transform: [{ translate: [20, 0, 0] }] }}>
          <NavigateButton to="garage"></NavigateButton>
          {/* and must be caught here */}
        </View>
        <View style={{ transform: [{ translate: [-7, 0, -20] }] }}>
          <NavigateButton to="woonkamer"></NavigateButton>
          {/* or here */}
        </View>
      </View>
    );
  }
}
AppRegistry.registerComponent('MainComp', () => MainComp);
Is it possible to do that? I would like something like the code below to catch the event:
<NavigateButton to="woonkamer" onNavigate={() => this.change()}></NavigateButton>
I've searched the internet but found nothing that could help me.
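For reference, the usual React pattern here (a sketch using the onNavigate prop from the question, not from the answer below) is to have NavigateButton invoke a callback prop from its click handler:

// In NavigateButton — call the parent's callback, if one was provided:
onNavigating() {
  if (this.props.onNavigate) {
    this.props.onNavigate(this.props.to); // pass `to` so the parent knows the target
  }
}

// In MainComp — catch the event:
// <NavigateButton to="woonkamer" onNavigate={(to) => this.change(to)} />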
Here are the instructions for creating a sample VR app with React VR, prepared by me and my team:
Creating a VR tour for the web
The structure of future app’s directory is as follows:
+-node_modules
+-static_assets
+-vr
\-.gitignore
\-.watchmanconfig
\-index.vr.js
\-package.json
\-postinstall.js
\-rn-cli-config.js
The code of the web app lives in the index.vr.js file, while the static_assets directory hosts external resources (images, 3D models). You can learn more about getting started with a React VR project here. The index.vr.js file contains the following:
import React from 'react';
import {
  AppRegistry,
  asset,
  StyleSheet,
  Pano,
  Text,
  View,
} from 'react-vr';

class TMExample extends React.Component {
  render() {
    return (
      <View>
        <Pano source={asset('chess-world.jpg')} />
        <Text
          style={{
            backgroundColor: 'blue',
            padding: 0.02,
            textAlign: 'center',
            textAlignVertical: 'center',
            fontSize: 0.8,
            layoutOrigin: [0.5, 0.5],
            transform: [{ translate: [0, 0, -3] }],
          }}>
          hello
        </Text>
      </View>
    );
  }
}
AppRegistry.registerComponent('TMExample', () => TMExample);
VR components in use
We use the React Native packager for code pre-processing, compilation, bundling, and asset loading. In the render function there are View, Pano, and Text components. Each of these React VR components comes with a style attribute to help control the layout.
To wrap it up, check that the root component gets registered with AppRegistry.registerComponent, which bundles the application and readies it to run. The next step to highlight in our React VR project is composing the two main files.
Index.vr.js file
In constructor we’ve indicated the data for VR tour app. These are scene images, buttons to switch between scenes with X-Y-Z coordinates, values for animations. All the images we contain in static_assets folder.
constructor(props) {
  super(props);
  this.state = {
    scenes: [
      { scene_image: 'initial.jpg', step: 1, navigations: [{ step: 2, translate: [0.73, -0.15, 0.66], rotation: [0, 36, 0] }] },
      { scene_image: 'step1.jpg', step: 2, navigations: [{ step: 3, translate: [-0.43, -0.01, 0.9], rotation: [0, 140, 0] }] },
      { scene_image: 'step2.jpg', step: 3, navigations: [{ step: 4, translate: [-0.4, 0.05, -0.9], rotation: [0, 0, 0] }] },
      { scene_image: 'step3.jpg', step: 4, navigations: [{ step: 5, translate: [-0.55, -0.03, -0.8], rotation: [0, 32, 0] }] },
      { scene_image: 'step4.jpg', step: 5, navigations: [{ step: 1, translate: [0.2, -0.03, -1], rotation: [0, 20, 0] }] },
    ],
    current_scene: {},
    animationWidth: 0.05,
    animationRadius: 50,
  };
}
Then we’ve changed the output of images linking them to state, previously indicated in constructor.
<View>
  <Pano
    source={asset(this.state.current_scene['scene_image'])}
    style={{
      transform: [{ translate: [0, 0, 0] }],
    }} />
</View>
Navigational buttons
In each scene we've placed transition buttons for navigating within the tour, taking data from the state. We subscribe to the onInput event to handle switching between scenes, binding this to it as well.
<View>
  <Pano source={asset(this.state.current_scene['scene_image'])} onInput={this.onPanoInput.bind(this)}
    onLoad={this.sceneOnLoad} onLoadEnd={this.sceneOnLoadEnd}
    style={{ transform: [{ translate: [0, 0, 0] }] }} />
  {this.state.current_scene['navigations'].map(function (item, i) {
    // `that` is the component instance captured outside this callback (e.g. var that = this;)
    return <Mesh key={i}
      style={{
        layoutOrigin: [0.5, 0.5],
        transform: [{ translate: item['translate'] },
          { rotateX: item['rotation'][0] },
          { rotateY: item['rotation'][1] },
          { rotateZ: item['rotation'][2] }],
      }}
      onInput={e => that.onNavigationClick(item, e)}>
      <VrButton
        style={{
          width: 0.15,
          height: 0.15,
          borderRadius: 50,
          justifyContent: 'center',
          alignItems: 'center',
          borderStyle: 'solid',
          borderColor: '#FFFFFF80',
          borderWidth: 0.01,
        }}>
        <VrButton
          style={{
            width: that.state.animationWidth,
            height: that.state.animationWidth,
            borderRadius: that.state.animationRadius,
            backgroundColor: '#FFFFFFD9',
          }}>
        </VrButton>
      </VrButton>
    </Mesh>
  })}
</View>
onNavigationClick(item, e) {
  if (e.nativeEvent.inputEvent.eventType === "mousedown" && e.nativeEvent.inputEvent.button === 0) {
    var new_scene = this.state.scenes.find(i => i['step'] === item.step);
    this.setState({ current_scene: new_scene });
    postMessage({ type: "sceneChanged" })
  }
}

sceneOnLoad() {
  postMessage({ type: "sceneLoadStart" })
}

sceneOnLoadEnd() {
  postMessage({ type: "sceneLoadEnd" })
}

this.sceneOnLoad = this.sceneOnLoad.bind(this);
this.sceneOnLoadEnd = this.sceneOnLoadEnd.bind(this);
this.onNavigationClick = this.onNavigationClick.bind(this);
Button animation
Below, we’ll display the code for navigation button animations. We’ve built animations on button increase principle, applying conventional requestAnimationFrame.
this.animatePointer = this.animatePointer.bind(this);

animatePointer() {
  var delta = this.state.animationWidth + 0.002;
  var radius = this.state.animationRadius + 10;
  // Reset once the button reaches its maximum size
  if (delta >= 0.13) {
    delta = 0.05;
    radius = 50;
  }
  this.setState({ animationWidth: delta, animationRadius: radius })
  this.frameHandle = requestAnimationFrame(this.animatePointer);
}

componentDidMount() {
  this.animatePointer();
}

componentWillUnmount() {
  if (this.frameHandle) {
    cancelAnimationFrame(this.frameHandle);
    this.frameHandle = null;
  }
}
In the componentWillMount function we've set the current scene. We've also subscribed to the message event for data exchange with the main thread; we do it this way because the React VR component runs in a separate (worker) thread.
In the onMainWindowMessage function we only process one message, with the newCoordinates key; we'll elaborate later on why. Similarly, we've subscribed to the onInput event to handle arrow turns.
componentWillMount() {
  window.addEventListener('message', this.onMainWindowMessage);
  this.setState({ current_scene: this.state.scenes[0] })
}

onMainWindowMessage(e) {
  switch (e.data.type) {
    case 'newCoordinates':
      var scene_navigation = this.state.current_scene.navigations[0];
      this.state.current_scene.navigations[0]['translate'] = [e.data.coordinates.x, e.data.coordinates.y, e.data.coordinates.z]
      this.forceUpdate();
      break;
    default:
      return;
  }
}
<Pano source={asset(this.state.current_scene['scene_image'])} onInput={this.onPanoInput.bind(this)}
  style={{ transform: [{ translate: [0, 0, 0] }] }} />
rotatePointer(nativeEvent) {
  switch (nativeEvent.keyCode) {
    case 38: // ↑ arrow: rotate around Y
      this.state.current_scene.navigations[0]['rotation'][1] += 4;
      break;
    case 39: // → arrow: rotate around X
      this.state.current_scene.navigations[0]['rotation'][0] += 4;
      break;
    case 40: // ↓ arrow: rotate around Z
      this.state.current_scene.navigations[0]['rotation'][2] += 4;
      break;
    default:
      return;
  }
  this.forceUpdate();
}
Arrow turns are done with the ↑ → ↓ arrow keys, for the Y, X, and Z axes respectively.
See and download the whole index.vr.js file on Github HERE.
Client.js file
Moving further into our React VR example of a virtual reality web application, we've added the code below to the init function. The goal is to process the ondblclick, onmousewheel, and message events, where the latter serves message exchange with the rendering thread. We've also kept references to the vr and vr.player._camera objects.
window.playerCamera = vr.player._camera;
window.vr = vr;
window.ondblclick= onRendererDoubleClick;
window.onmousewheel = onRendererMouseWheel;
vr.rootView.context.worker.addEventListener('message', onVRMessage);
We’ve introduced the onVRMessage function for zoom returning to default when scenes change. Also, we have added the loader when scene change occurs.
function onVRMessage(e) {
  switch (e.data.type) {
    case 'sceneChanged':
      // Reset zoom to its default when the scene changes
      if (window.playerCamera.zoom != 1) {
        window.playerCamera.zoom = 1;
        window.playerCamera.updateProjectionMatrix();
      }
      break;
    case 'sceneLoadStart':
      document.getElementById('loader').style.display = 'block';
      break;
    case 'sceneLoadEnd':
      document.getElementById('loader').style.display = 'none';
      break;
    default:
      return;
  }
}
The onRendererDoubleClick function calculates 3D coordinates and sends a message to the vr component to change the arrow coordinates. It relies on get3DPoint, a function custom to our web VR application, shown further below:
function onRendererDoubleClick(event) {
  // Convert screen coordinates to normalized device coordinates (-1..1)
  var x = 2 * (event.x / window.innerWidth) - 1;
  var y = 1 - 2 * (event.y / window.innerHeight);
  var coordinates = get3DPoint(window.playerCamera, x, y);
  vr.rootView.context.worker.postMessage({ type: "newCoordinates", coordinates: coordinates });
}
Switch to mouse wheel
We’ve used the onRendererMouseWheel function for switching zoom to a mouse wheel.
function onRendererMouseWheel(event) {
  if (event.deltaY > 0) {
    if (window.playerCamera.zoom > 1) {
      window.playerCamera.zoom -= 0.1;
      window.playerCamera.updateProjectionMatrix();
    }
  } else {
    if (window.playerCamera.zoom < 3) {
      window.playerCamera.zoom += 0.1;
      window.playerCamera.updateProjectionMatrix();
    }
  }
}
Exporting coordinates
Then we’ve utilized Three.js to work with 3D-graphics. In this file we’ve only conveyed one function to export screen coordinated to world coordinates.
import * as THREE from 'three';

export function get3DPoint(camera, x, y) {
  var mousePosition = new THREE.Vector3(x, y, 0.5);
  mousePosition.unproject(camera);
  var dir = mousePosition.sub(camera.position).normalize();
  return dir;
}
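Note that, despite its name, get3DPoint returns a normalized direction rather than a point. If an actual world-space point were ever needed, it could be derived by walking along that direction (a hypothetical helper, not part of the original cameraHelper.js):

// Hypothetical: the world-space point `distance` units from the camera along `dir`.
export function pointAtDistance(camera, dir, distance) {
  return camera.position.clone().add(dir.clone().multiplyScalar(distance));
}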
See and download the whole client.js file on Github HERE. There's probably no need to explain how the cameraHelper.js file works, as it is quite simple, and you can download it as well.