How to get UV coordinate area of rendered material - three.js

Is it possible to know what area of the material is rendered on the display? I think three.js must already compute this internally, right?
I'm trying to implement zoom levels: when someone zooms in, the texture should be redrawn at a higher resolution.
I'm building a cube from six planes, and I'm trying to find out which coordinates of each face's material are actually rendered on the display.
In the example below, only some areas of the TOP, RIGHT, and FRONT faces appear on the display.
If I know which coordinates of the material are rendered on the display, I can draw a higher-resolution image onto the canvas.
Thank you,
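
One way to probe this (not from the original post; a minimal sketch under assumptions) is to cast rays from a grid of screen positions and read the UV coordinate of each visible hit, then take the bounding box of those UVs as the region to redraw at higher resolution. `camera` and the per-face `mesh` are assumed to exist in your setup:

```javascript
// Sketch: sample the screen with a raycaster and collect the UV
// coordinates that are actually visible for one face mesh.
const raycaster = new THREE.Raycaster();
const ndc = new THREE.Vector2();

function visibleUVBounds(mesh, samplesPerAxis = 16) {
  let minU = Infinity, minV = Infinity, maxU = -Infinity, maxV = -Infinity;
  for (let i = 0; i <= samplesPerAxis; i++) {
    for (let j = 0; j <= samplesPerAxis; j++) {
      // Normalized device coordinates in [-1, 1] across the viewport.
      ndc.set((i / samplesPerAxis) * 2 - 1, (j / samplesPerAxis) * 2 - 1);
      raycaster.setFromCamera(ndc, camera);
      const hits = raycaster.intersectObject(mesh);
      if (hits.length > 0 && hits[0].uv) {
        const uv = hits[0].uv; // UV of the closest (i.e. visible) hit
        minU = Math.min(minU, uv.x); maxU = Math.max(maxU, uv.x);
        minV = Math.min(minV, uv.y); maxV = Math.max(maxV, uv.y);
      }
    }
  }
  // null means this face is not visible at all at the current view.
  return maxU < minU ? null : { minU, minV, maxU, maxV };
}
```

The returned bounds could then drive which tile of the high-resolution image gets drawn onto the canvas texture; the sampling density is a trade-off between accuracy and per-frame cost.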

Related

Three.js - DoubleSided Semi-Transparent Materials Only Work From Certain Angles

I am currently using three.js and trying to create a 3D experience with a semi-transparent material that can be viewed from all angles. I've noticed that, depending on the camera angle, only certain portions of the mesh are semi-transparent and show the content behind them. In the example below I've created two half cylinders and applied the same transparent material with the Stack Overflow logo. The half cylinder on the left properly shows the logo on the closest surface as well as on the surface behind it. The half cylinder on the right only shows the logo on the closest surface and fails to render the logo that wraps behind it. However, it does properly render the background image, so the material is still treated as transparent.
If I spin the orbital camera around 180 degrees, the side that originally failed to see through now works, and the other side exhibits the wrong behavior. This leads me to believe it's related to the camera position / depth sorting. The material in this case is a standard MeshPhongMaterial with transparent set to true, side set to DoubleSide, and a single map for the transparent Stack Overflow logo. The geometry is formed from an open-ended CylinderGeometry. Any help would be greatly appreciated!
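
For reference, a minimal sketch of a common workaround for this kind of self-transparency depth-sorting problem (a standard technique, not something confirmed by the post): split the double-sided draw into two single-sided meshes so the back faces always render before the front faces. `geometry` (the open-ended CylinderGeometry) and `texture` are assumed:

```javascript
// Two-pass workaround: draw BackSide first, FrontSide second, so the
// far half of the cylinder is rendered before the near half.
const backMaterial = new THREE.MeshPhongMaterial({
  map: texture, transparent: true, side: THREE.BackSide
});
const frontMaterial = new THREE.MeshPhongMaterial({
  map: texture, transparent: true, side: THREE.FrontSide
});

const backMesh = new THREE.Mesh(geometry, backMaterial);
const frontMesh = new THREE.Mesh(geometry, frontMaterial);
backMesh.renderOrder = 1;  // drawn before...
frontMesh.renderOrder = 2; // ...the front faces
scene.add(backMesh, frontMesh);
```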

Creating a magnifying-glass effect in three.js WebGL

I'm working with an orthographic view in three.js/WebGL renderer, and I want a magnifying glass that tracks with the user mouse. I'm looking for the best way of doing this that's efficient.
When working with raw html5 canvas commands, this was easy: I simply defined a circular clip region, zoomed my coordinates, and re-drew the whole scene. With 3D objects, it's less obvious how to do it.
The method I've found so far is to do the following (sketched in code after the list):
Define a second camera that looks into the zoomed region. Set the orthographic clip coordinates to be small so that it doesn't need to do much work
Create a THREE.WebGLRenderTarget
Tell the renderer and all my line textures that the resolution is about to change
Render the scene into the RenderTarget
Add a CircleGeometry as a Mesh at the mouse position (in world coordinates, but above the rest of the scene, close to the camera). Call this the lens.
Give the lens the WebGLRenderTarget as a texture.
Go back to my default camera, reset all my resolution parameters, and redraw the scene with the 'lens' object added.
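
A minimal sketch of those steps, assuming a recent three.js (with renderer.setRenderTarget); `renderer`, `scene`, `mainCamera`, and a `mouseWorld` position are placeholder names:

```javascript
const zoom = 4, lensRadius = 1;

// Steps 1-2: a second orthographic camera framing only the zoomed region.
const lensCamera = new THREE.OrthographicCamera(
  -lensRadius / zoom, lensRadius / zoom,
  lensRadius / zoom, -lensRadius / zoom, 0.1, 100);

// Step 3: the offscreen target the zoomed view is rendered into.
const target = new THREE.WebGLRenderTarget(512, 512);

// Steps 5-6: a circular "lens" mesh textured with the render target.
const lens = new THREE.Mesh(
  new THREE.CircleGeometry(lensRadius, 64),
  new THREE.MeshBasicMaterial({ map: target.texture })
);
scene.add(lens);

function renderFrame() {
  lensCamera.position.set(mouseWorld.x, mouseWorld.y, 10);
  lens.position.set(mouseWorld.x, mouseWorld.y, 5); // above the scene

  // Step 4: first pass renders the zoomed view into the target
  // (with the lens hidden so it doesn't see itself).
  lens.visible = false;
  renderer.setRenderTarget(target);
  renderer.render(scene, lensCamera);

  // Step 7: second pass renders the normal view with the lens overlaid.
  lens.visible = true;
  renderer.setRenderTarget(null);
  renderer.render(scene, mainCamera);
}
```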
This works (see image below) but I'm worried about parts of it:
I have to render twice per frame
Lines don't draw well because of the resolution problems. I have to keep track of all materials that need to know the screen resolution and update all of them twice per screen render.
Related problems:
I want to overlay some plot axes on top of this, and possibly gridlines. These would change as the view pans. I'm not sure if I should make these 3d objects, or do it in a 2d canvas context I lay overtop.
I want to overlay some plot lines, and have them show up sensibly in the zoomed view. "Sensible" here is hard to figure out: I don't want them too fat in the zoomed view, but I also don't want to scale them up as much as the image detail (which is being rendered as a texture onto Plane objects behind).
This is a long post, but I'm still new to three.js and looking for good ideas.

Tiledmap stays dark after world rotation in Phaser

I want to create a top-down game in which the "camera" rotates with the character (like in Tap Tap Dash). But Phaser does not implement camera rotation, so I followed this thread and created a world group, which is then rotated.
As you can see in the following screenshot, after rotating the Tilemaps (the road and the arrows) as well as the sprites (the coins), black areas appear. What is really strange is that the sprites are rendered correctly, as you can see at the bottom of the screenshot, but the Tilemaps are not fully rendered.
I have tried resizing the world and calling all kinds of methods on the camera, world, and layer objects, but I am out of ideas. Hopefully someone can give me a hint on how to approach this problem.
Thank you!
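
For what it's worth, the black regions are consistent with Phaser 2's TilemapLayer only rasterizing a camera-sized rectangle, so rotation exposes undrawn corners. A hedged sketch of one common fix: create the layer larger than the screen diagonal and rotate the group around the screen center. The asset keys ('level', 'tiles') and layer name ('Ground') are assumptions for illustration:

```javascript
// Sketch (Phaser 2 / CE API): size the tilemap layer to the screen
// diagonal so no unrendered corners appear when the world group rotates.
var worldGroup, layer;

function create() {
  worldGroup = game.add.group();

  var map = game.add.tilemap('level');
  map.addTilesetImage('tiles', 'tiles');

  // A layer as large as the screen diagonal stays covered at any angle.
  var diag = Math.ceil(Math.sqrt(
    game.width * game.width + game.height * game.height));
  layer = map.createLayer('Ground', diag, diag, worldGroup);

  // Rotate around the screen center rather than the group's origin.
  worldGroup.pivot.x = game.width / 2;
  worldGroup.pivot.y = game.height / 2;
  worldGroup.position.x = game.width / 2;
  worldGroup.position.y = game.height / 2;
}

function update() {
  worldGroup.rotation += 0.01; // or follow the character's heading
}
```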

getting sprites to work with three.js and different camera types

I've got a question about getting sprites to work in three.js with perspective and orthographic cameras.
I have a building being rendered in one scene. At one location in the scene, all of the levels are stacked on top of each other to give a 3D view of the building, and an orthographic camera is used to view it. In another part of the scene, just the selected level of the building is shown, and a perspective camera is used. The screen is divided between the two views, the idea being that the user selects a level from the building view and a more detailed map of that selected level is shown on the other part of the screen.
I played around with sprites for a little bit, and as far as I understand it: if the sprite is viewed with a perspective camera, the sprite's scale property effectively acts as its size property, and if it is viewed with an orthographic camera, the scale property scales the sprite relative to the viewport.
I placed the sprite where both cameras can see it, and this seems to be the case. If I scale the sprite by 0.5, the sprite takes up half the orthographic camera's viewport, and I can't see it with the perspective camera (presumably because, for it, the sprite is 0.5px x 0.5px and is either rounded down to 0px, i.e. not rendered, or to 1px, effectively invisible). If I scale the sprite by, say, 50, then the perspective camera can see it (presumably because it's a 50px x 50px square), and the orthographic camera is overtaken by the sprite (presumably because it's being scaled to 50 times the viewport).
Is my understanding correct?
I ask because, in the scene I'm rendering, the building and detailed areas are ~1000 units apart on the x-axis. If I place a sprite somewhere on the detail map, I need it to be ~35x35 pixels; when I do this, it works fine for the detail view, but the building view is overtaken. I played with the numbers, and it seems that if I scale the sprite by 4, it starts to show up in my building view, even though there's a 1000-unit distance between the views and the sprite isn't visible with the perspective camera.
So, if my understanding is correct, I need to either use separate scenes, have a much bigger gap between the views, use the same camera type for both views, or not use sprites.
There are basically two different ways you can use sprites: with 2D screen coordinates or with 3D scene coordinates. Perhaps scene coordinates are what you need? For examples of both, check out the demo at:
http://stemkoski.github.io/Three.js/Sprites.html
and in particular, when you zoom in and zoom out in that demo, notice that the sprites in-scene will change size, while the others do not.
Hope this helps!
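
To make the two modes concrete under the current three.js API (the old useScreenCoordinates flag from that demo's era has since been removed), here is a minimal sketch: one sprite lives in the 3D scene and is sized in world units, the other lives in an overlay scene rendered with a pixel-aligned orthographic camera. `renderer`, `scene`, `perspCamera`, and `texture` are assumed:

```javascript
const material = new THREE.SpriteMaterial({ map: texture });

// 1. In-scene: scale is in world units; apparent size changes with distance.
const worldSprite = new THREE.Sprite(material);
worldSprite.scale.set(35, 35, 1); // 35x35 world units, not pixels
scene.add(worldSprite);

// 2. Screen-space: a separate scene plus an orthographic camera whose
//    frustum matches the canvas, so units are pixels.
const hudScene = new THREE.Scene();
const hudCamera = new THREE.OrthographicCamera(
  0, window.innerWidth, window.innerHeight, 0, -1, 1);
const hudSprite = new THREE.Sprite(material.clone());
hudSprite.scale.set(35, 35, 1);      // 35x35 pixels, regardless of zoom
hudSprite.position.set(100, 100, 0); // pixel position from bottom-left
hudScene.add(hudSprite);

function render() {
  renderer.autoClear = true;
  renderer.render(scene, perspCamera);
  renderer.autoClear = false; // keep the 3D frame...
  renderer.clearDepth();      // ...but drop its depth so the HUD draws on top
  renderer.render(hudScene, hudCamera);
}
```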

How to draw 3D background in Games

I am trying to make a 3D car racing game on iPhone using OpenGL ES 1.x.
I do not know how to draw the background sky in my scene. I tried using planes for the background, but where should I place them? If I place a plane outside the whole track, the frustum is not big enough to show it in the scene.
Any suggestions will be of great help.
You can make a small sky sphere or box, as suggested by Davido and turbovonce's link, which is centered around the viewer and fits into the frustum. You draw this first, without writing into the depth buffer. Then you draw the rest of the scene; since the skybox has not written to the depth buffer, it is simply overdrawn, except in the parts where no scene objects are rendered, which are exactly the parts of the image where the sky should be visible.
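
In code, that draw order looks like this. The sketch uses WebGL calls, which correspond one-to-one with the glClear/glDepthMask calls in OpenGL ES 1.x; drawSky() and drawScene() are placeholders for your own draw calls:

```javascript
function renderFrame(gl) {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // 1. Sky first, with depth writes off: it paints color but leaves the
  //    depth buffer empty, so later geometry always draws over it.
  gl.depthMask(false);
  drawSky();

  // 2. Scene second, with depth writes back on.
  gl.depthMask(true);
  drawScene();
}
```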
You want a sky dome. Take a look at this website, it contains tons of references that should help you.
http://www.vterrain.org/Atmosphere/
Create a sphere in a 3D modeling app such as Maya or Blender and map a sky texture onto it. Export the model, then load it and its texture into your app and place it in the scene. You should now have a background sky rendering in your game.
