As an example, I tried to set up an AR scene in three.js.
I use aruco.js to do the marker detection. When I load OBJ (or any other) models, everything works great. However, when the marker is placed in front of the camera, it gets detected but the scene flickers/jumps violently. Is there any obvious reason why this is happening?
As a live demo would be hard to set up, I uploaded a video to YouTube to illustrate my point: https://youtu.be/9jMso7vmw1M
So my exact question is: what is the best way to make an AR scene stick to the marker without any flicker?
Code in jsfiddle: https://jsfiddle.net/6cw3ta57/
var markerObject3D = new THREE.Object3D();
scene.add(markerObject3D);
// hide the markerObject3D until the marker is detected
markerObject3D.visible = false;
//////////////////////////////////////////////////////////////////////////////////
// code in jsfiddle
//////////////////////////////////////////////////////////////////////////////////
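For context, a common mitigation for this kind of jitter is to low-pass filter the detected pose instead of applying it to markerObject3D directly. A minimal sketch (updateMarkerPose and its arguments are hypothetical placeholders, not part of the fiddle):
// hypothetical per-frame update: detectedPosition/detectedQuaternion stand in
// for the pose estimated by the aruco.js marker detection
function updateMarkerPose(detectedPosition, detectedQuaternion) {
    markerObject3D.visible = true;
    // blend towards the new pose instead of snapping to it; this smooths out
    // per-frame detection noise, which is the usual cause of visible jitter
    markerObject3D.position.lerp(detectedPosition, 0.3);
    markerObject3D.quaternion.slerp(detectedQuaternion, 0.3);
}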
NOTE: It appears I oversimplified my initial question. See below for the edit.
I'm trying to combine the technique shown in the clipping/stencil example of Three.js, which uses the stencil buffer to render 'caps' when clipping geometry, with an EffectComposer-based rendering pipeline, but I am running into some difficulties. A fiddle demonstrating the problem can be found at https://jsfiddle.net/2vc76ajd/1/.
The EffectComposer has two passes: a RenderPass and a ShaderPass using CopyShader (see code below).
composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
var shaderPass = new ShaderPass(CopyShader);
shaderPass.enabled = false;
composer.addPass(shaderPass);
The first renders the scene as usual; the second merely copies the render target onto a fullscreen quad. If I disable the ShaderPass, everything works as intended: the geometry is clipped, and the cutting planes are drawn in a different color:
When the ShaderPass is enabled by clicking the 'copy pass' checkbox in the upper right, however, the entire cutting plane gets rendered, rather than just the 'caps':
Presumably there is some interaction here between offscreen render targets and stencil buffers. However, I have so far been unable to find a way to have subsequent render passes look the same as the initial render. Can anyone tell me what I am missing?
EDIT: While WestLangley's answer solved my initial problem, it unfortunately doesn't work when you're using an SSAOPass, which is what I was doing before trying to simplify the problem for the question. I have posted an updated fiddle at https://jsfiddle.net/bavL98hf/1/, which includes the proposed fix and now toggles between a RenderPass or an SSAOPass. With SSAO turned on, the result is this:
I have tried setting stencilBuffer to true on all the render targets used in SSAOPass in addition to the ones in EffectComposer, but sadly that doesn't work this time. Can anyone tell me what else I am overlooking?
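For reference, the stencil-buffer change mentioned above boils down to constructing the EffectComposer with a render target that actually has a stencil buffer (the default one does not). Roughly like this (a sketch; size and options are placeholders):
var target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
    stencilBuffer: true // keep stencil contents available to the offscreen passes
});
composer = new EffectComposer(renderer, target);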
I need to render a single specific mesh from a scene into a texture using a THREE.WebGLRenderTarget. I have already got it to the point where, during the rendering of the scene, all meshes except this one are ignored, so I have basically achieved my goal. What bothers me is that a lot of unnecessary work is still done for the whole scene graph during the render process. Since I need to render this texture every frame, and the scene graph contains a lot of meshes, my current method causes extreme FPS drops.
What I found is the renderBufferImmediate function of THREE.WebGLRenderer (link to the renderer source code here). My pseudocode to achieve my goal looks like this:
var mesh = some_Mesh;
var renderer = some_WebGLRenderer;
var renderTarget = some_WebGLRenderTarget;
renderer.setRenderTarget(renderTarget);
var materialProperties = renderer.properties.get(mesh.material);
var program = materialProperties.program;
renderer.renderBufferImmediate(mesh, program, mesh.material);
var texture = renderTarget.texture;
The renderBufferImmediate function takes an instance of THREE.Object3D, a WebGLShaderProgram, and a THREE.Material. The problem I see here: the implementation of this function tries to look up properties of the Object3D that, as far as I know, don't exist (like "hasPositions" or "hasNormals"). In short: my approach doesn't work.
I would be grateful if someone could tell me whether I can use this function for my purpose (meaning I am currently using it wrong) or whether there is another solution to my problem.
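To illustrate the kind of alternative I have in mind, here is a minimal, untested sketch that renders just the one mesh by temporarily re-parenting it into a throwaway scene (all names are placeholders):
var rtScene = new THREE.Scene();
var renderTarget = new THREE.WebGLRenderTarget(512, 512);

function renderMeshToTexture(renderer, mesh, camera) {
    var originalParent = mesh.parent;
    rtScene.add(mesh);                             // re-parents the mesh temporarily
    renderer.setRenderTarget(renderTarget);
    renderer.render(rtScene, camera);
    renderer.setRenderTarget(null);                // back to the default framebuffer
    if (originalParent) originalParent.add(mesh);  // restore the original parent
    return renderTarget.texture;
}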
Thanks in advance.
So, the scene includes an earth spinning on its axis, a moon rotating around the earth, and a light source to the right that will help simulate the effect of an eclipse. I thought it would be easy because we've done shadows and transformations before, but I ran into a problem.
In our template we have the following at the top:
// For the assignment where a texture is required you should
// deactivate the Detector and use ONLY the CanvasRenderer. There are some
// issues in using what are called Cross Domain images for textures. You
// can get more details by looking up WebGL and CORS using Google search.
// if ( Detector.webgl )
// var renderer = new THREE.WebGLRenderer();
// else
var renderer = new THREE.CanvasRenderer();
My problem is that when I leave it like that, the spotlight doesn't appear in the scene. However, as was warned, if I activate the Detector, the textures won't work.
But I need both textures and the spotlight. How do I work around this?
You are confusing yourself. Detector.webgl only checks whether the browser supports WebGL. The code below uses the WebGLRenderer if the current browser supports WebGL and the CanvasRenderer if there is no WebGL support.
if ( Detector.webgl )
var renderer = new THREE.WebGLRenderer();
else
var renderer = new THREE.CanvasRenderer();
With WebGL, loading textures will run into cross-domain (CORS) issues. It is best to run the code on a web server or a local server such as http://www.wampserver.com/en/ for Windows or https://www.mamp.info/en/ for Mac, or an npm package like https://github.com/tapio/live-server.
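If the texture images are served from another origin that does send CORS headers, setting the loader's crossOrigin property can also help. A sketch, assuming the older THREE.ImageUtils API of that era (the URL is a placeholder):
// assumes an older three.js where THREE.ImageUtils.loadTexture is still available
THREE.ImageUtils.crossOrigin = 'anonymous';
var earthTexture = THREE.ImageUtils.loadTexture('http://example.com/textures/earth.jpg');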
As far as I know, shadows are not supported by the CanvasRenderer. I would ask whoever set the assignment to clarify.
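If the WebGLRenderer turns out to be allowed, a typical shadow setup looks roughly like this (a sketch; earth and moon are placeholders for your meshes, and older three.js builds spell the flag renderer.shadowMapEnabled):
var renderer = new THREE.WebGLRenderer();
renderer.shadowMap.enabled = true;    // turn on shadow mapping

var light = new THREE.SpotLight(0xffffff);
light.position.set(50, 0, 0);         // the light source to the right
light.castShadow = true;
scene.add(light);

earth.castShadow = true;              // the earth blocks the light...
moon.receiveShadow = true;            // ...so its shadow can fall on the moon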
I want to generate the top and perspective view of an object.
Input: A 3d object, maybe .obj or .dae file.
Output: image files showing the top and perspective views of the loaded object.
Here is some expected output:
The perspective view of a chair
The top view of a chair:
Can anyone give me some suggestions for solving this problem? A demo would be preferred.
You could create a small three.js scene with your OBJ or Collada file loaded using the appropriate loaders (see the examples for the specific loaders). Then create the cameras you want to have in the scene; see the orthographic and perspective camera examples that come with three.js, too.
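For example (a minimal sketch; width, height, positions and frustum sizes are placeholders):
var aspect = width / height;

// top view: orthographic camera looking straight down the y-axis
var topViewCamera = new THREE.OrthographicCamera(-5 * aspect, 5 * aspect, 5, -5, 0.1, 100);
topViewCamera.position.set(0, 10, 0);
topViewCamera.up.set(0, 0, -1); // avoid a degenerate 'up' when looking straight down
topViewCamera.lookAt(new THREE.Vector3(0, 0, 0));

// perspective view: a regular camera placed off to one side
var perspectiveCamera = new THREE.PerspectiveCamera(45, aspect, 0.1, 100);
perspectiveCamera.position.set(5, 5, 5);
perspectiveCamera.lookAt(new THREE.Vector3(0, 0, 0));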
To produce the images you want, you could use the canvas toDataURL function; see this thread (easy to find via Google):
Three.js and HTML5 Canvas toDataURL
In essence, after the objects are loaded, you could do something like:
// calling toDataURL right after render works here; otherwise the renderer
// would need to be created with preserveDrawingBuffer: true
renderer.render(scene, topViewCamera);
dataurl = canvas.toDataURL();
renderer.render(scene, perspectiveCamera);
dataurl2 = canvas.toDataURL();
I think you could also use two render targets and then use those for the output, but if you are new to three.js, maybe start with the toDataURL() method from HTML5.
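A rough sketch of that render-target variant (assuming a reasonably recent three.js with readRenderTargetPixels; width and height are placeholders):
var rt = new THREE.WebGLRenderTarget(width, height);

renderer.setRenderTarget(rt);
renderer.render(scene, topViewCamera);
renderer.setRenderTarget(null);

// read the rendered pixels back; they can then be written into a 2D canvas
// (via ImageData) and exported with toDataURL as before
var pixels = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(rt, 0, 0, width, height, pixels);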
I have a problem with my three.js 3D application, at least according to some people I know.
My application is hosted at http://176.9.149.205/planungstool/. Some people who supposedly have the most recent versions of Chrome and Firefox cannot see the textured areas. For example, they do not see the roof or the front of the 3D house. They do, however, see the non-textured things like the tree or the floor.
What's weird is that I don't have that problem, and most of the other people I asked don't have it either. Here is what it should look like, and does look like for me: http://176.9.149.205/planungstool/house.jpg
Does anyone have an idea what could cause this? Could it be some client-side settings? Or maybe some access control policy?
I'm loading the textures like this:
var myTexture = new THREE.ImageUtils.loadTexture('gfx/textures/texture.jpg');
And then I just create meshes with lambert material that have this texture as their map.
If you read this and do not know what could cause this error, it would be nice if you could at least tell me whether you see the textured areas or not, given that you have a recent version of Chrome or Firefox.
I can see the textures in current Chrome on Mac. I had a similar problem with the canvas renderer (anything textured was invisible). For me, switching from ImageUtils.loadTexture to a Texture plus an ImageLoader fixed it:
var texture = new THREE.Texture();
var texLoader = new THREE.ImageLoader();

// once the image has loaded, attach it to the texture and flag it for re-upload
texLoader.addEventListener( 'load', function ( event ) {
    texture.image = event.content;
    texture.needsUpdate = true;
} );

texLoader.load( 'texture.png' );
I do, however, still have problems with the canvas renderer in Safari, but you appear to be using only the WebGL renderer. Hope this helps.