Wrong clipping in glTF-model - AFrame - three.js

Video preview : https://i.imgur.com/VMhJV8v.mp4
I've been having clipping issues for quite some time now. I'm not sure what causes this, but so far the only solution is to move the object closer to the camera and scale it down (video preview link above).
I tried messing around with the camera clipping settings (changed the far/near values), played around with the 3D object's transparency and the container marker's visibility (1), and tried changing the renderOrder (2) and frustumCulled (3) values recursively, all without any luck...
I'm using the latest AR.js + A-Frame with image-based markers and animated 3D models.

The issue disappeared once I scaled the image marker down from 990x990 to 375x375 and reduced the a-entity scale value by the same ratio (nailing down the position is still a bit tricky on devices with different DPIs).
¯\_(ツ)_/¯
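For reference, the recursive renderOrder / frustumCulled tweak mentioned above can be sketched like this. In three.js you would normally call model.traverse(...) on the loaded glTF scene; the plain node tree below is a hypothetical stand-in so the snippet stays self-contained:

```javascript
// Sketch of the recursive tweak tried above. In three.js you would call
// gltf.scene.traverse(...) instead; a minimal { children: [...] } tree
// stands in for the loaded model here.
function disableFrustumCulling(node) {
  node.frustumCulled = false; // skip the per-object bounding-sphere test
  node.renderOrder = 999;     // draw late, after the rest of the scene
  for (const child of node.children || []) {
    disableFrustumCulling(child);
  }
}

// Hypothetical stand-in for a loaded glTF scene graph:
const model = { children: [{ children: [] }, { children: [] }] };
disableFrustumCulling(model);
```

As the question notes, this did not help here, which is consistent with the problem being marker/scale related rather than actual frustum culling.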

Related

Three.js texture loads in small resolution, same picture scaled up breaks the scene (small image about 600x600px, scaled up to about 1400px)

I have some pretty strange behaviour with three.js when I try to load different textures for an environment cube map. Everything works fine until I test the same scene with larger textures. My camera is pretty much static, so I only have to change one side of the env cube to a large-resolution texture, as that will be the visible background of the scene; the other 5 sides are small PNGs that are only visible in reflections.
There is no clear breaking point: what usually works as an image for the env cube is about 600x600px-ish, and going any higher results in the scene loading completely black.
To make the scene look nice on most devices, I have to go up to a resolution around 1500x1500px (so not insanely large) for the background, and I have no idea why it breaks with a bigger image.
What I already tried/did:
the image paths are fine; overwriting a working image with a larger version also breaks the scene.
I had no other idea what to try, maybe it has to do something with photoshop and its image encoding or something along those lines?
the scene contains:
a camera, a glTF model to test with, and the environment cube. Everything works perfectly with small textures.
I already looked at the texture documentation of three.js and found nothing about what could cause this behaviour; I'm completely stuck.
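One thing worth ruling out in a case like this is the device's maximum texture size: in three.js it is exposed as renderer.capabilities.maxTextureSize (ultimately gl.getParameter(gl.MAX_TEXTURE_SIZE)), and a texture larger than that limit can fail silently and leave the cube map black. A minimal sketch of the check, with hypothetical limits for illustration:

```javascript
// A texture wider or taller than the GPU's MAX_TEXTURE_SIZE can fail
// silently. In three.js, query the real limit for the current device via
// renderer.capabilities.maxTextureSize; the values below are examples.
function textureFitsDevice(width, height, maxTextureSize) {
  return width <= maxTextureSize && height <= maxTextureSize;
}

// A 1500x1500 env-map face is fine on a typical 4096 desktop GPU,
// but not on an older mobile GPU that reports 1024:
const fitsDesktop = textureFitsDevice(1500, 1500, 4096);   // true
const fitsOldMobile = textureFitsDevice(1500, 1500, 1024); // false
```

If the limit checks out, re-exporting the image (e.g. from Photoshop with a plain sRGB profile) is another cheap thing to try, since unusual encodings can also trip texture uploads.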

Set a Maximum Range for Face Rendering in AFrame scene

I've got a scene which loads a very large .obj file (a lot of faces), which results in low FPS...
I want to set a maximum distance from the camera beyond which faces are not rendered.
So far I've only tried the fog component, which doesn't do what I expected...
Anyone got an idea?
I believe you can achieve this with a THREE.PerspectiveCamera property called far, which determines the camera frustum's far plane.
You can check it out in the docs. It can easily be set like this:
let scene = document.querySelector("a-scene");
scene.camera.far = 3; // default is 1000 afaik
Check it out in this fiddle (move around a bit).
Here I threw it into an A-Frame component.
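As a rough sketch of what shrinking far does: anything farther from the camera than far falls outside the frustum and is clipped. (The real test uses eye-space depth against all six frustum planes; plain radial distance is only a fair approximation for points straight ahead.)

```javascript
// Approximate far-plane check: geometry whose distance from the camera
// exceeds `far` is clipped. Real frustum culling also involves the near
// plane and the four side planes; this only illustrates the far plane.
function beyondFarPlane(cameraPos, point, far) {
  const dx = point.x - cameraPos.x;
  const dy = point.y - cameraPos.y;
  const dz = point.z - cameraPos.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz) > far;
}

const cam = { x: 0, y: 0, z: 0 };
beyondFarPlane(cam, { x: 0, y: 0, z: -2 }, 3);  // false: still rendered
beyondFarPlane(cam, { x: 0, y: 0, z: -10 }, 3); // true: clipped
```

Note that this culls whole draw calls per object, not individual faces; for a single huge .obj you would also need to split the mesh into chunks for far to save much work.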

Three.js: See through objects artefact on mobile

I recently tried my app on mobile and noticed some weird behavior: it seems like the camera near plane is clipping the geometry, however other objects at the same distance aren't clipped... The materials are StandardMaterials, with depthTest and depthWrite set to true.
I must add that I can't reproduce this issue on my desktop, which makes it difficult to understand what's going on, since at first sight everything works perfectly.
Here are 2 gifs showing the problem:
You can see the same wall on the left in the next gif
Thanks!
EDIT:
It seems the transparent faces (on mobile) were due to logarithmicDepthBuffer = true (though I don't know why?), and I also had additional artefacts caused by the camera near and far planes being too far apart, producing depth issues (see Flickering planes)...
EDIT 2:
Well I wasn't searching for the right terms... Just found this today: https://github.com/mrdoob/three.js/issues/13047#issuecomment-356072043
So logarithmicDepthBuffer uses EXT_frag_depth, which is only supported by 2% of mobiles according to WebGLStats. A workaround would be to tessellate the geometries or stay with a linear depth buffer...
You don't need a logarithmic depth buffer to fix this. You've succumbed to the classic temptation to bring your near clip REALLY close to the eye and the far clip very far away. This creates a very non-linear depth precision distribution and is easily mitigated by pushing the near clip plane out by a reasonable amount. Try to sandwich your 3D data as tightly as possible between your near and far clip planes and tolerate some near plane clipping.
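That non-linear precision distribution is easy to quantify with the standard perspective depth mapping; the numbers below are illustrative, not from the original scene:

```javascript
// For a standard perspective projection, the normalized depth written to
// the buffer for an object at eye-space distance z is:
//   d(z) = (f / (f - n)) * (1 - n / z)
// Almost all of the [0, 1] range is consumed just past the near plane,
// which is why a tiny `near` starves distant objects of precision.
function depthValue(z, near, far) {
  return (far / (far - near)) * (1 - near / z);
}

// Two walls 100 units apart, with near = 0.01 vs near = 1.0:
const tight = depthValue(200, 0.01, 10000) - depthValue(100, 0.01, 10000);
const sane  = depthValue(200, 1.0, 10000) - depthValue(100, 1.0, 10000);
// `sane` is roughly 100x larger than `tight`: pushing the near plane out
// from 0.01 to 1 buys the two walls ~100x more depth separation, which is
// exactly the "sandwich your data between the planes" advice above.
```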

THREEJS display an object3D in front of a 360 stereo video sphere

Hi, I'm working on a VR project with three.js and an Oculus Rift.
I made a 24-meter-wide 360° stereoscopic video, but when I try to display some text in front of it I get a strange double-vision effect, a kind of eye-separation issue.
If anyone has an idea... thanks :/
The text must always be closer than anything it obstructs in the video for this to work. That is, if the 360 cameras took images with nothing closer than 1.5 meters, the text should be at around 1.2 meters.
Disabling the head movement effect on the text will help too (keep the rotation, just disable the translation).
And 24 meters is a bit low; try a few hundred, maybe a kilometer. Remember that the video must look like it's at infinite range, and both head movement and IPD must be completely ignorable.
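To put numbers on the "few hundred meters" advice: the stereo disparity for a point at distance d is the vergence angle 2 * atan((IPD / 2) / d). A small sketch, assuming a typical 64 mm IPD (the distances are illustrative):

```javascript
// Vergence angle in degrees for a point at `distanceMeters`, given an
// inter-pupillary distance (IPD). Larger angle = stronger stereo cue.
function vergenceDegrees(distanceMeters, ipdMeters = 0.064) {
  return (2 * Math.atan(ipdMeters / 2 / distanceMeters)) * 180 / Math.PI;
}

const atSphere = vergenceDegrees(24);  // ~0.15 degrees: still perceptible
const farAway  = vergenceDegrees(500); // ~0.007 degrees: effectively infinity
```

At 24 m the sphere still produces a measurable stereo cue that can conflict with the text's depth; at a few hundred meters the cue becomes negligible, which is the point of pushing the sphere out.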

getting sprites to work with three.js and different camera types

I've got a question about getting sprites to work with three.js using perspective and orthographic cameras.
I have a building being rendered in one scene. At one location in the scene, all of the levels are stacked on top of each other to give a 3D view of the building, and an orthographic camera is used to view it. In another part of the scene, just the selected level of the building is shown, using a perspective camera. The screen is divided between the two views: the user selects a level from the building view, and a more detailed map of that level is shown on the other part of the screen.
I played around with sprites for a little bit, and as far as I understand it: if the sprite is viewed with a perspective camera, the sprite's scale property is effectively its size property, and if the sprite is viewed with an orthographic camera, the scale property scales the sprite relative to the viewport.
I placed the sprite where both cameras can see it, and this seems to be the case. If I scale the sprite by 0.5, the sprite takes up half the orthographic camera's viewport, and I can't see it with the perspective camera (presumably because, for it, the sprite is 0.5px x 0.5px and is either rounded down to 0px, so not rendered, or to 1px, effectively invisible). If I scale the sprite by say 50, then the perspective camera can see it (presumably because it's a 50px x 50px square) and the orthographic camera's view is overtaken by the sprite (presumably because it's being scaled to 50 times the viewport).
Is my understanding correct?
I ask because in the scene I'm rendering, the building and detailed areas are ~1000 units apart on the x-axis. If I place a sprite somewhere on the detail map, I need it to be ~35x35 pixels; when I do this it works fine for the detail view, but the building view is overtaken. I played with the numbers, and it seems that once I scale the sprite up to about 4 it starts to show up in my building view, even though there's a 1000-unit distance between the views and the sprite isn't visible with the perspective camera.
So, if my understanding is correct, then I need to either use separate scenes, have a much bigger gap between the views, use the same camera type for both views, or not use sprites.
There are basically two different ways you can use sprites, either with 2D screen coordinates or 3D scene coordinates. Perhaps scene coordinates are what you need? For examples of both, check out the example at:
http://stemkoski.github.io/Three.js/Sprites.html
and in particular, when you zoom in and zoom out in that demo, notice that the sprites in-scene will change size, while the others do not.
Hope this helps!
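The asymmetry the question describes matches the math for the two camera types. A back-of-the-envelope sketch (the viewport, fov, and frustum numbers below are made up for illustration):

```javascript
// With a perspective camera, a sprite of world-space scale s at distance
// d covers roughly s * viewportHeight / (2 * d * tan(fov / 2)) pixels,
// so distance shrinks it. With an orthographic camera, distance drops
// out entirely: it covers s * viewportHeight / frustumHeight pixels.
function spritePixelsPerspective(scale, distance, fovDeg, viewportHeightPx) {
  const halfFovRad = (fovDeg * Math.PI) / 360;
  return (scale * viewportHeightPx) / (2 * distance * Math.tan(halfFovRad));
}

function spritePixelsOrtho(scale, frustumHeight, viewportHeightPx) {
  return (scale * viewportHeightPx) / frustumHeight;
}

// Hypothetical numbers: a scale-0.5 sprite 1000 units away, 50 deg fov,
// a 1080 px tall viewport, and an ortho frustum 1 unit tall:
spritePixelsPerspective(0.5, 1000, 50, 1080); // under 1 px: effectively invisible
spritePixelsOrtho(0.5, 1, 1080);              // 540 px: half the viewport
```

This is why no single scale works for both views; in-scene sprites (or separate scenes per camera) sidestep the conflict.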
