First, let me explain the image above. The image is marked with 1, 2 and 3.
1 - This is a rectangle shape.
2 - This is a rectangle shape.
3 - This is a circle shape (drawn with the destination-in global composite operation).
Every shape is drawn using the HTML5 canvas.
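For reference, a rough sketch of the canvas drawing described above (the positions, sizes and colors are invented for illustration):
var ctx = document.getElementById('myCanvas').getContext('2d');
// shapes 1 and 2: two overlapping rectangles
ctx.fillStyle = 'teal';
ctx.fillRect(20, 20, 200, 120);
ctx.fillRect(120, 80, 200, 120);
// shape 3: the circle, drawn with a global composite operation
// ('destination-in' keeps only the overlap; 'destination-out' erases it)
ctx.globalCompositeOperation = 'destination-in';
ctx.beginPath();
ctx.arc(170, 110, 60, 0, Math.PI * 2);
ctx.fill();
ctx.globalCompositeOperation = 'source-over'; // restore the default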
Now I want to draw the same thing using three.js with the WebGLRenderer. Is that possible? If yes, then how?
The 3rd shape can be anything (for example a circle, rectangle, or polygon).
Any suggestions?
We can erase an area in three.js by setting the material's blending property. Several different blending modes are available in three.js, for example THREE.SubtractiveBlending, which subtracts the drawn area from what is behind it.
For details -
1) http://threejs.org/docs/#Reference/Constants/Materials
2) http://threejs.org/examples/#webgl_materials_blending
3) http://threejs.org/examples/#webgl_materials_blending_custom
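As a minimal sketch (the circle size and the scene setup are assumed to exist; adjust them to your scene):
// shape 3: a circle whose material subtracts from the rectangles behind it
var circleGeometry = new THREE.CircleGeometry(50, 32);
var eraseMaterial = new THREE.MeshBasicMaterial({
    color: 0xffffff, // white subtracts the full color range, leaving black
    blending: THREE.SubtractiveBlending
});
var circle = new THREE.Mesh(circleGeometry, eraseMaterial);
circle.position.z = 1; // render in front of the rectangle meshes
scene.add(circle);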
Drawing with the WebGLRenderer basically means switching the renderer that drives the canvas:
renderer = new THREE.WebGLRenderer();
Just beware that some methods of the 2D canvas have no equivalent in the WebGLRenderer.
If you show part of your code, it would be easier to give a more precise answer. In any case, just comment here!
I have an existing WebGL canvas that is being rendered without using ThreeJS, and is for all intents and purposes a black box to me, apart from two facts: (1) I have access to the underlying webgl canvas DOM element and can position and resize it on the screen, and (2) I know the properties of the camera for the scene, and get updates on every render cycle for that camera.
The problem I need to solve can be simplified to the following: I need to have my own separate ThreeJS canvas that displays both the black box canvas data, and then elements that I draw, like a cube for a simple example. I can already easily overlay the two canvases, set the transparency on my canvas for everything but the cube, and then align the two with the camera events from the black box library. This works quite well.
The issue with this is that when I draw my objects, like a cube, they don't respect the depth buffer of the black box canvas. So I might have a cube that is properly aligned with the backing scene and movements of the scene, but then it isn't properly masked when something in the black box canvas is closer to the camera than the cube. My thought is that I need to solve this in one of two ways: (1) I can have my renderer write to the other canvas with autoClear = false and preserveDrawingBuffer = true, or (2) I can somehow copy the depth buffer from the black box canvas into my canvas, and then set up my renderer so that it respects the new depth buffer.
I haven't been successful with either approach yet, so I'm wondering if this is possible, and if so which of the above approaches, or what other approach, can solve this problem?
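For reference, approach (1) as described above would look roughly like this (blackBoxCanvas stands for the underlying DOM element I have access to; this is the attempted setup, not a confirmed fix):
// build the overlay renderer on the existing canvas so both pipelines
// share one drawing (color + depth) buffer
const overlayRenderer = new THREE.WebGLRenderer({
    canvas: blackBoxCanvas,
    preserveDrawingBuffer: true
});
overlayRenderer.autoClear = false; // do not clear what the black box drew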
--Edit--
See https://jsfiddle.net/zdxyoajb/ for an Angular/TypeScript implementation of the above attempts. In the following animate loop, if I comment out the overlayRenderer lines, the sphere below is red and offset from the center (as it should be), but if I don't comment out those lines, I get the image below. I also get the following error:
WebGL: INVALID_OPERATION: uniformMatrix4fv: location is not from current program
animate() {
    requestAnimationFrame(() => this.animate());
    // keep the black-box camera in sync with the overlay camera
    this.blackBoxCamera.copy(this.overlayCamera);
    this.blackBoxRenderer.render(this.blackBoxScene, this.blackBoxCamera);
    // reset the cached GL state before the second renderer uses the context
    this.overlayRenderer.state.reset();
    this.overlayRenderer.render(this.overlayScene, this.overlayCamera);
}
I want to show an image of a donut on the screen, but I want it to be random how big the donut is and how big its hole is. Is there any easy way to do this?
I can't just scale an image of a donut up or down, because I want the hole to have different sizes as well.
Thanks!
I would do this by following these steps:
Create a game object that only contains a Sorting Group.
As child objects of that object create the following:
1. Make a Sprite Renderer of a donut without a hole. Scale it randomly. Set its Mask Interaction to Visible Outside Mask.
2. Make a Sprite Renderer of the hole of the donut. Scale it randomly. Make sure its Mask Interaction is set to None.
3. Make a Sprite Mask that has the same shape as the hole from step 2, and scale it to the same size as that hole.
Layer these pieces from top to bottom as: Donut Hole (step 2), Sprite Mask (step 3), Donut-without-hole (step 1).
At the end it should look like this:
- Object with SortingGroup
- Donut Hole Sprite (Mask Interaction: None)
- Sprite Mask
- Donut-without-Hole Sprite (Mask Interaction: Visible Outside Mask)
Putting the donut sprites together inside the SortingGroup parent allows you to have many donuts without the masks interfering with other donuts.
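If you prefer to do the random scaling from a script, here is a minimal sketch (the field names and scale ranges are made up; assign the three child transforms in the Inspector):
using UnityEngine;

public class RandomDonut : MonoBehaviour
{
    public Transform donutBody; // the donut-without-hole sprite
    public Transform donutHole; // the hole sprite
    public Transform holeMask;  // the sprite mask matching the hole shape

    void Start()
    {
        float bodyScale = Random.Range(0.5f, 2.0f);
        float holeScale = Random.Range(0.2f, 0.8f) * bodyScale;
        donutBody.localScale = Vector3.one * bodyScale;
        donutHole.localScale = Vector3.one * holeScale;
        holeMask.localScale = Vector3.one * holeScale; // mask tracks the hole
    }
}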
To realise a scrollable text container (using my own bitmap fonts, which are basically small sprite meshes) I am using local clipping planes.
When my text container moves, the clipping planes are updated according to the global boundaries of my container.
This works perfectly except for fast movements. In that case the clipping planes are slightly delayed behind the container, making the text shine through where it shouldn't.
My first thought was that the code necessary for updating the clipping planes might cause the delay, but even when I apply this order:
1. update the text box position
2. update the clipping planes
3. render()
the delay still exists.
Could the reason lie in how three.js itself applies the actual clipping?
Here's a small code snippet that shows how I compute my upper clipping plane using two helper meshes. One is a plane positioned orthogonally on my text object (the red plane in the picture). The other is a THREE.Object3D positioned in the middle of the upper edge, used for computing the right plane constant.
// get the world direction of a helper plane mesh that is located orthogonally on my text plane
var upperClippingPlaneRotationProxyMeshWorldDirection = _this.upperClippingPlaneRotationProxyMesh.getWorldDirection();
// get the world position of a helper 3d object that is located in the middle of the upper edge of my text plane
var upperClippingPlanePositionProxyObjPosition = _this.upperClippingPlanePositionProxyObj.getWorldPosition();
// a plane through the origin, which makes it easier to compute the plane constant
var upperPlaneInOrigin = new THREE.Plane(upperClippingPlaneRotationProxyMeshWorldDirection, 0);
var dist = upperPlaneInOrigin.distanceToPoint(upperClippingPlanePositionProxyObjPosition);
var upperClippingPlane = new THREE.Plane(upperClippingPlaneRotationProxyMeshWorldDirection, dist * -1);
// clipping plane update
_this.myUpperClippingPlane.copy(upperClippingPlane);
(picture showing the text object with clipping plane helpers)
I found the reason for the delay. In my matrix-updating code I only called updateMatrix() on the text object when it moved. To make sure that its child objects, including the helper meshes, update instantly, I had to call updateMatrixWorld(true); this makes sure that the clipping planes are computed from up-to-date world matrices.
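As a minimal sketch of the resulting update order (textObject, scrollY and updateClippingPlanes are illustrative names, not from my actual code):
// 1. move the container
textObject.position.y = scrollY;
// 2. force a recursive world-matrix update so the helper children
//    are not one frame behind the container
textObject.updateMatrixWorld(true);
// 3. recompute the clipping planes from the helpers (as shown above)
updateClippingPlanes();
// 4. render with up-to-date planes
renderer.render(scene, camera);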
Hello, I am new to three.js and texture mapping.
Let's say I have a 3D plane with a size of 1000x1000x1. When I apply a texture to it, it will be repeated or scaled so that it at least fills the full plane.
What I am trying to achieve is to change the scaling of the picture on the plane at runtime. I want the image to get smaller and stop filling the full plane.
I know there is a way to map each face to a part of a picture, but is it also possible to map it to coordinates outside the picture (negative numbers), so that the rest will be transparent?
My question is:
I UV-mapped a model in Blender and imported it with the UV coordinates into my three.js code. Now I need to scale the texture down, as described before. Do I have to remap the UV coordinates, or do I have to manipulate the image and add a transparent edge?
Furthermore, will I be able to move the image on the surface in the same way?
I have already achieved this kind of usage in Java 3D by manipulating BufferedImages and drawing them onto transparent ones. I am not sure this will be possible using JavaScript, so I want to know if it is possible via texture mapping.
Thank you for your time and your suggestions!
This can be done by mapping the 3D plane to a canvas on which the image is drawn (fabric.js can be used for the canvas drawing). In short, set the canvas as the texture of the 3D model:
var canvas = document.getElementById("yourCanvas");
yourModel.material.map = new THREE.CanvasTexture(canvas); // set map.needsUpdate = true after each redraw
Hope it helps :)
Yes. In three.js there are some controls on the texture object:
texture.repeat and texture.offset. They are both Vector2()s.
To repeat the texture twice you can do texture.repeat.set(2,2);
Now if you just want to scale but NOT repeat, there is also the "wrapping mode" of the texture:
texture.wrapS (the U axis) and texture.wrapT (the V axis), which can both be set like this:
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
This will make the edge pixels of the texture extend off to infinity when sampling, so you can position a single small texture anywhere on the surface of your UV-mapped object.
https://threejs.org/docs/#api/textures/Texture
Between those two options (including texture.rotation) you can position/repeat a texture pretty flexibly.
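For example, a minimal sketch that shrinks the image at runtime without tiling it (assuming texture is the plane material's map; the numbers are arbitrary):
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
texture.repeat.set(4, 4);       // the image now covers 1/4 of the plane per axis
texture.offset.set(-1.5, -1.5); // center the single smaller copy on the plane
texture.needsUpdate = true;     // required after changing the wrap mode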
If you need something even more complex, like warping the texture or changing its colors, you may want to change the UVs in your modeller, or draw your texture image into a canvas, modify the canvas, and use the canvas as your texture image, as described in ArUn's answer. Then you can modify it at runtime as well.
This is kind of frustrating me as I've been grizzling over it for a couple of hours now.
Basically I'm drawing 2D sprites through SpriteBatch and 3D orthographically-projected geometry using the BasicEffect class.
My problem is controlling what gets rendered on top of what. At first I thought it would simply be a matter of controlling the render order, i.e. if I do:
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
the 2D stuff would render over the 3D stuff. However, since I don't control when the device begins/ends renders, this isn't the result: the 3D always renders on top of the 2D elements, no matter the projection settings, the world translation, the z components of the 3D geometry's vertex definitions, or the layer depth of the 2D elements.
Is there something I'm not looking into here?
What's the correct way to handle the depth here?
OK, I figured it out two seconds after posting this question. I don't know if it was a coincidence or if Stack Overflow has a new feature granting the ability to see future answers.
The Z positions of SpriteBatch elements are between 0 and 1, so they're not directly comparable to the z positions of the orthographic geometry being rendered.
When you create an orthographic matrix, however, you define near and far clip planes, and the Z positions you set should be within them. I had a hunch that the SpriteBatch class effectively draws quads orthographically, so by extension 0 would represent its near clip and 1 its far clip, and the sprite depth was probably being written to the same depth buffer as the 3D geometry.
So, to make it work, I figured the near/far clips I defined for the orthographic render would be measured against the 0-1 layer depths of the sprites, so it was simply a matter of setting the right z value. For example:
If I have a near clip of 0 and a far clip of 10000, and I want geometry to correspond to a layer depth of 0.5f, rendering in front of sprites drawn at 0.6 and behind sprites drawn at 0.4, I do:
float zpos = 0.5f;
// XNA's MathHelper.Lerp maps the 0-1 layer depth into the 0-10000 depth range
float orthographicZPos = MathHelper.Lerp(0f, 10000f, zpos);
// or just zpos * 10000 :D
I guess it would make more sense to set your orthographic renderer's near/far clips to 0 and 1, so they compare directly with the sprites' layer depths.
Hopefully my reasoning for this solution was correct (more or less).
As an aside, since you mentioned you had a hunch about how the sprite batch draws its quads: you can see the source code for all the default/included shaders and the SpriteBatch class if you are curious, or need help solving a problem like this:
http://create.msdn.com/en-US/education/catalog/sample/stock_effects
The problem is that SpriteBatch changes some of the render states that are used when you draw your 3D objects. To fix this, you just have to reset them before rendering your 3D objects, like so:
// SpriteBatch leaves alpha blending, no depth testing, and clamped sampling
// active; restore the defaults that 3D rendering expects
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
Note that this is for XNA 4.0, which I am pretty sure you're using anyway. More info can be found on Shawn Hargreaves' blog here. This will reset the render states, draw the 3D objects, and then draw the 2D objects over them. Without resetting the render states you get weird effects like the ones you're seeing.