THREEJS display an object3D in front of a 360 stereo video sphere - three.js

Hi, I'm working on a VR project with three.js and an Oculus Rift.
I made a 24-meter-wide 360° stereoscopic video sphere,
but when I try to display some text in front of it
I get a strange double-vision / eye-separation effect.
If anyone has an idea... thanks :/

The text must always be closer than anything it obstructs in the video for this to work. That is, if the 360° cameras captured nothing closer than 1.5 meters, the text should sit at around 1.2 meters.
Disabling the head-movement effect on the text will help too (keep the rotation, just disable the translation).
And 24 meters is a bit small; try a few hundred, maybe a kilometer. Remember that the video must look as if it's at infinite range, so both head movement and IPD must be completely negligible against it.
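A minimal sketch of that setup in three.js (everything here is illustrative: `videoTexture` is assumed to be a THREE.VideoTexture of the footage, and `textMesh` stands for whatever sprite or TextGeometry mesh holds the text):

```js
// Video sphere: a radius in the hundreds of metres makes IPD and head translation
// negligible against it, so it reads as "infinitely far".
const videoSphere = new THREE.Mesh(
  new THREE.SphereGeometry(500, 64, 64),
  new THREE.MeshBasicMaterial({ map: videoTexture, side: THREE.BackSide })
);
scene.add(videoSphere);

// Keep the text closer than anything it can occlude in the footage:
// if nothing was filmed closer than ~1.5 m, place it at ~1.2 m.
const textOffset = new THREE.Vector3(0, 0, -1.2);
const headPosition = new THREE.Vector3();

// Call this every frame: the text follows head translation (so it stays 1.2 m away)
// but keeps its world orientation, so head rotation still sweeps across it.
function updateText(camera, textMesh) {
  camera.getWorldPosition(headPosition);
  textMesh.position.copy(headPosition).add(textOffset);
}
```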

Related

Wrong clipping in glTF-model - AFrame

Video preview : https://i.imgur.com/VMhJV8v.mp4
I've been having clipping issues for quite some time now. I'm not sure what causes this, but so far the only solution has been to move the object closer to the camera and scale it down (video preview link above).
I tried messing around with the camera clipping settings (changed the far/near values), and played around with the 3D object transparency and container marker visibility (1): no dice. I also tried changing the renderOrder (2) and frustumCulled (3) values recursively, without any luck...
Using the latest AR.js + A-Frame with image-based markers and animated 3D models.
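For reference, the recursive renderOrder / frustumCulled change mentioned above looks roughly like this in A-Frame (the entity id is a placeholder); it did not help in this case, but it is the usual way to apply those flags to a whole model:

```js
// Hypothetical entity id; traverse the loaded glTF and apply the flags to every node.
const el = document.querySelector('#animated-model');
el.addEventListener('model-loaded', () => {
  el.object3D.traverse((node) => {
    node.frustumCulled = false; // never cull, even if the bounding sphere is judged off-screen
    node.renderOrder = 999;     // draw this subtree after everything else
  });
});
```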
The issue disappeared once I scaled down the image marker from 990x990 to 375x375 and reduced the a-entity scale value by the same ratio (nailing down the position is still a bit tricky for different-DPI devices).
¯\_(ツ)_/¯

Three.js: See through objects artefact on mobile

I recently tried my app on mobile and noticed some weird behavior: it looks like the camera near plane is clipping the geometry, yet other objects at the same distance aren't clipped... The materials are MeshStandardMaterials, and depthTest and depthWrite are set to true.
I should add that I can't reproduce this issue on my desktop, which makes it difficult to understand what's going on, since everything works perfectly at first sight.
Here are 2 gifs showing the problem:
You can see the same wall on the left in the next gif
Thanks!
EDIT:
It seems the transparent faces (on mobile) were due to logarithmicDepthBuffer = true (though I don't know why), and I also had additional artefacts caused by the camera near and far planes being too far apart, producing depth issues (see Flickering planes)...
EDIT 2:
Well I wasn't searching for the right terms... Just found this today: https://github.com/mrdoob/three.js/issues/13047#issuecomment-356072043
So logarithmicDepthBuffer uses EXT_frag_depth, which is only supported by 2% of mobiles according to WebGLStats. A workaround would be to tessellate the geometries or stay with a linear depth buffer...
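For context, that flag is passed to the renderer at construction time; a minimal sketch, not the original app's code:

```js
// logarithmicDepthBuffer needs per-fragment depth writes (EXT_frag_depth on WebGL 1);
// without the extension the log depth ends up interpolated per vertex, which breaks
// on large, sparsely tessellated polygons.
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  logarithmicDepthBuffer: true, // the setting behind the transparent faces on mobile
});
```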
You don't need a logarithmic depth buffer to fix this. You've succumbed to the classic temptation to bring your near clip REALLY close to the eye and the far clip very far away. This creates a very non-linear depth precision distribution and is easily mitigated by pushing the near clip plane out by a reasonable amount. Try to sandwich your 3D data as tightly as possible between your near and far clip planes and tolerate some near plane clipping.
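A hedged sketch of that advice (the numbers are purely illustrative): keep the near plane as far out as the content tolerates and the far plane just beyond the furthest visible object, so the default depth precision is spent where the geometry actually is.

```js
// Example values only: near pushed out to 0.5 instead of 0.001, far pulled in to 200
// instead of 100000. The tighter the near/far sandwich, the better the depth precision.
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.5, 200);

// If you adjust near/far later at runtime, rebuild the projection matrix:
camera.near = 1.0;
camera.far = 150;
camera.updateProjectionMatrix();
```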

How to render side by side videos in OculusRiftEffect or VREffect

I'm experimenting with videojs-vr, which uses THREE.OculusRiftEffect to render the video in an Oculus-friendly way.
I downloaded a side by side video from YouTube and played it within the videojs-vr example.html.
Now I'm looking for a way to show only the left part of the video to the left camera of OculusRiftEffect / VREffect and the right part to the right eye.
I think I have to find/use an event that draws the movie onto the mesh and identify which camera is currently being rendered, so I can copy only the left or the right part of the video.
If you're using three.js, I would make two spheres, one for the left eye and the other for the right eye. Then split the video using a shader on those spheres so that each one maps only half of the texture, and attach each sphere to one of the two cameras.
I'm not sure how to do it in three.js, as I come from Unity and I'm still a noob with three, but I did exactly that in Unity. Maybe the idea helps you.
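The answer above suggests a shader; a simpler route in plain three.js is to give each eye its own sphere and map half of the video frame onto it via texture repeat/offset. Restricting each sphere to one eye depends on your rendering path; the sketch below assumes the modern WebXR convention (layer 1 enabled on the left-eye camera, layer 2 on the right), which the old OculusRiftEffect / VREffect do not provide out of the box.

```js
// `video` is an HTMLVideoElement playing the side-by-side footage.
function makeEyeSphere(offsetX, eyeLayer) {
  const texture = new THREE.VideoTexture(video);
  texture.repeat.set(0.5, 1);      // sample only half of the frame horizontally
  texture.offset.set(offsetX, 0);  // 0 = left half, 0.5 = right half

  const mesh = new THREE.Mesh(
    new THREE.SphereGeometry(500, 64, 64),
    new THREE.MeshBasicMaterial({ map: texture, side: THREE.BackSide })
  );
  mesh.layers.set(eyeLayer);       // assumed convention: layer 1 = left eye, layer 2 = right eye
  return mesh;
}

scene.add(makeEyeSphere(0, 1));    // left half of the video, left eye only
scene.add(makeEyeSphere(0.5, 2));  // right half of the video, right eye only
```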

Programmatic correction of camera tilt in a positioning system

A quick introduction:
We're developing a positioning system that works the following way. Our camera is mounted on a robot and points upwards (looking at the ceiling). On the ceiling we have landmarks, thanks to which we can compute the position of the robot. It looks like this:
Our problem:
The camera is tilted a bit (0-4 degrees, I think) because the surface of the robot is not perfectly even. That means that when the robot turns around but stays at the same coordinates, the camera looks at a different position on the ceiling, and therefore our positioning program yields a different position for the robot, even though it only turned around and didn't move at all.
Our current (hardcoded) solution:
We've taken some test photos with the camera, turning it around the lens axis. From the pictures we've deduced that it's tilted ca. 4 degrees in the "up" direction of the picture. Using some simple geometric transformations we've managed to reduce the tilt effect and find the real camera position. In the following pictures the grey dot marks the center of the picture, and the black dot is the real place on the ceiling under which the camera is situated. The black dot was computed by correcting the position of the grey dot. As you can easily notice, the grey dots form a circle on the ceiling and the black dot is the center of this circle.
The problem with our solution:
Our approach is completely unportable. If we moved the camera to a new robot, the angle and direction of the tilt would have to be completely recalibrated. Therefore we wanted to leave the calibration phase to the user, which would require taking some pictures, estimating the tilt parameters, and then entering them in the program. My question to you is: can you think of a better (more automatic) way to compute the tilt parameters or correct the tilt in the pictures?
Nice work. Automatic calibration is a nice challenge.
An idea would be to use the parallel lines from the roof tiles:
If the camera is perfectly level, then all lines will be parallel in the picture too.
If the camera is tilted, then all lines will be secant (they intersect in the vanishing point).
Now, this is probably very hard to implement. With the camera you're using, distortion needs to be corrected first so that lines are indeed straight.
Your practical approach is probably simpler and more robust. As you describe it, it seems it can be automated to become user-friendly: make the robot turn on itself and identify programmatically which point remains at the same place in the picture.
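That "turn on itself" idea can be automated with a simple least-squares circle fit: record the uncorrected positions the system reports while the robot spins in place, fit a circle to them, and take its centre as the true position; the centre-to-point offset then gives the tilt direction and magnitude. A sketch in plain JavaScript (Kåsa fit; not part of the original system):

```js
// Least-squares circle fit (Kåsa method): the circle x^2 + y^2 + D*x + E*y + F = 0
// is linear in D, E, F, so the normal equations form a 3x3 linear system.
function fitCircle(points) {
  // Accumulate A^T A and A^T b for rows [x, y, 1] and b = -(x^2 + y^2).
  let sxx = 0, sxy = 0, sx = 0, syy = 0, sy = 0, n = points.length;
  let bx = 0, by = 0, b1 = 0;
  for (const { x, y } of points) {
    const z = -(x * x + y * y);
    sxx += x * x; sxy += x * y; sx += x;
    syy += y * y; sy += y;
    bx += x * z; by += y * z; b1 += z;
  }
  // Solve the symmetric 3x3 system by Gaussian elimination with partial pivoting.
  const M = [
    [sxx, sxy, sx, bx],
    [sxy, syy, sy, by],
    [sx,  sy,  n,  b1],
  ];
  for (let i = 0; i < 3; i++) {
    let p = i;
    for (let r = i + 1; r < 3; r++) if (Math.abs(M[r][i]) > Math.abs(M[p][i])) p = r;
    [M[i], M[p]] = [M[p], M[i]];
    for (let r = i + 1; r < 3; r++) {
      const f = M[r][i] / M[i][i];
      for (let c = i; c < 4; c++) M[r][c] -= f * M[i][c];
    }
  }
  const sol = [0, 0, 0];
  for (let i = 2; i >= 0; i--) {
    let s = M[i][3];
    for (let c = i + 1; c < 3; c++) s -= M[i][c] * sol[c];
    sol[i] = s / M[i][i];
  }
  const [D, E, F] = sol;
  return {
    center: { x: -D / 2, y: -E / 2 },            // the "black dot": true point under the camera
    radius: Math.sqrt(D * D / 4 + E * E / 4 - F) // offset magnitude caused by the tilt
  };
}
```

Calling fitCircle with the grey-dot positions collected during one full rotation returns the centre (the black dot in the pictures above) and the radius of the grey-dot circle.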

Implementing terrains in XNA similar to Battle Zone (1980)

I am developing a 3D game for Windows Phone that includes terrain and volcanoes at an infinite distance, similar to Battle Zone (1980) by Atari Inc. The player can never reach the terrain no matter how far they drive. Currently, to implement this, I am mapping a 2D texture onto the inside wall of a cylinder. The cylinder also moves with the player so that the player can never reach the terrain. I am not sure whether this is a good method of implementing terrain, as I am facing problems like distortion of the texture when mapping it onto the wall of the cylinder.
Can you suggest methods for implementing a Battle Zone-style terrain view in XNA?
Normally, instead of a cylinder, developers use a box (a so-called skybox).
It has fewer polygons and in general less distortion (there can be some at the edges).
To make it look more real, some devs like Valve use an off-screen render in a first pass that includes the skybox plus some distant low-detail models and moving cloud sprites or a textured ring with alpha. Both points of view (main camera and off-screen camera) are synchronised; then, without clearing the colour buffer, they render the final scene on top. Thanks to that, far buildings move a bit and the scene's surroundings look less flat. To avoid clearing the z-buffer between passes, they simply do the first pass under the floor (literally) of the main pass's scene.
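The question is about XNA, but the two-pass idea is framework-agnostic; expressed with three.js (this thread's main library) it looks roughly like the following sketch (all names are assumptions):

```js
// farScene holds the skybox / distant terrain and is kept centred on the viewer;
// mainScene holds the gameplay geometry the player can actually reach.
renderer.autoClear = false;

function renderFrame() {
  // Synchronise orientation only, never position, so the scenery stays unreachable.
  farCamera.quaternion.copy(mainCamera.quaternion);

  renderer.clear();                       // colour + depth, once per frame
  renderer.render(farScene, farCamera);   // pass 1: distant, low-detail surroundings
  renderer.clearDepth();                  // let the main scene always win the depth test
  renderer.render(mainScene, mainCamera); // pass 2: the real scene on top
}
```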
