I am working on a configurator in Three.js for which I have a scene in which I load a glTF model. It contains various elements, like a real scene: for example, an office scene with a table, a notepad and pens on top of the table, and, apart from the table, some chairs and other objects as well.
Now I want to give the user a feature to change the length, breadth, and height of the table or of other objects.
For this I have tried changing the scale of the object:
textureMaterials.scale.x = 1.5;
textureMaterials.scale.z = 1.5;
But the issue is that changing the scale also shifts the object's position, and even if I then reposition the table correctly, the items placed on top of it stay at their original positions.
So there are a lot of interrelated things here.
Please let me know what a good way to do this would be?
Thanks
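A minimal sketch of one way to approach this (not from the original question; the node names Table and Notepad, and the gltf variable from the loader callback, are assumptions): resize the table, keep its base on the floor by compensating the position, then re-seat the items resting on top using bounding boxes.

const table = gltf.scene.getObjectByName('Table');     // assumed node name
const notepad = gltf.scene.getObjectByName('Notepad'); // assumed node name

// Remember the table's extents before resizing.
const before = new THREE.Box3().setFromObject(table);

// Resize: x = length, z = breadth, y = height.
table.scale.set(1.5, 1.2, 1.5);
table.updateMatrixWorld(true);

// Keep the base on the floor by compensating for how far the bottom moved.
const after = new THREE.Box3().setFromObject(table);
table.position.y += before.min.y - after.min.y;
table.updateMatrixWorld(true);

// Re-seat items resting on the table top (assumes the notepad's origin is at
// its base and its parent shares the scene's world orientation).
notepad.position.y = new THREE.Box3().setFromObject(table).max.y;

Making the notepad a child of the table would let it follow the table's movements automatically, but it would then be stretched by the table's scale as well, so re-seating it explicitly is usually safer.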
I want to render a room with a floor and a roof that is open to one side. The room contains a point light, and the outside is lit by an ambient light (the sun). There is one additional requirement: the user should be able to look inside the room to see what's going on. But I cannot simply remove the roof, because then the room would be fully lit by the ambient light.
I think my problem could be solved by having 3D objects that are transparent but still block light.
To give you an idea of my current scene, this is what it looks like:
The grey thing is the wall of my room. The black thing is the floor of the room. The green thing is the ground of the scene. The room contains a point light.
I am currently using two scenes (see Exclude Area from Directional/Ambient Lighting) because I wanted the inside of the room to be unaffected by the ambient light. But now my lights can only affect either the inside of my room (the point light) OR the outside (the ambient light) but not both.
A runnable sample of my scene can be found here:
https://codesandbox.io/s/confident-worker-64kg7m?file=/src/index.js
Again: I think that my problem could be solved by having transparent objects that still block light. If I had those, I would simply put a 3D plane on top of my room (as the roof) and make it transparent. It would block the light that is inside the room (but still let it out where the room is open), and it would also block the ambient light (partially, where the room is open)...
Maybe there is also another solution that I am not seeing.
Just use one scene instead of two, and enable shadows on the relevant meshes so a light doesn't cross from inside to outside. Once you're using only one scene, the steps to take in your demo are:
Disable AmbientLight, and use DirectionalLight only, since AmbientLight illuminates everything indiscriminately, and that's not what you want.
Place the directional light above your structure, so it shines from the top-down.
Enable shadow-casting on the walls.
Add a ceiling mesh with the material's side set to THREE.BackSide. This renders only the back side of the mesh, which means it won't be visible from above, but it will still cast shadows.
// Reuse the floor geometry for the ceiling; BackSide means it is invisible
// from above but still casts shadows.
const roomCeilMat = new MeshStandardMaterial({
  side: BackSide
});
const roomCeiling = new Mesh(roomFloorGeo, roomCeilMat);
roomCeiling.position.set(0, 0, 1); // place it at the top of the walls
roomCeiling.castShadow = true;
scene1.add(roomCeiling);
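For this to work, shadow mapping also has to be enabled on the renderer, the light, and the walls — a minimal sketch, with renderer, directionalLight, wallMesh, and groundMesh standing in for the demo's actual variable names:

renderer.shadowMap.enabled = true;   // turn shadow mapping on globally
directionalLight.castShadow = true;  // the light renders a shadow map
wallMesh.castShadow = true;          // repeat for each wall
groundMesh.receiveShadow = true;     // surfaces that should show the shadows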
See here for a working copy of your demo:
https://codesandbox.io/s/stupefied-williams-qd7jmi?file=/src/index.js
I would assign a flat, emissive material to the room, or a depth gradient if it becomes terrain, since ambient light doesn't cast shadows. That saves a light and extra geometry or groups, and web model viewers would probably render it better. If you're doing a reveal transition, use a clipping plane or a texture alpha mask.
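A sketch of that suggestion (roomMesh is an assumed name; a black base color with an emissive term gives a flat, self-lit look that ignores the scene's lights):

roomMesh.material = new THREE.MeshStandardMaterial({
  color: 0x000000,    // ignore incoming light
  emissive: 0x777777  // flat, self-lit grey
});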
It depends on the presentation versus the output format, and on the complexity of the final floorplan. If your process is simple, it will run Sims Lite on a Raspberry Voxel.
I need to get the camera's up direction and I've tried many ways with no luck. I'm not an expert in quaternions, so I doubt I did it right.
I've tried:
camera.up
camera.up.applyMatrix4(camera.matrixWorld);
new THREE.Vector3(0,1,0).applyMatrix4(camera.matrixWorld);
camera.up.normalize().applyMatrix4(camera.matrixWorld);
After this I create two planes passing through two points of interest, and add plane helpers to the scene, and I can see they are very far from where I was expecting them (I'm expecting two planes that look like the top and bottom of the camera frustum).
P.S. The camera is the shadow camera of a directional light, so an orthographic camera, and I manipulate the directional light's position and target before doing this operation. I've called updateMatrixWorld on the light, on its target, and on the camera, and on the camera I've also called updateProjectionMatrix... still no results.
I've made a sandbox to show what I've tried so far, and to better visualize what I want to achieve:
https://codesandbox.io/embed/throbbing-cache-j5yse
Once I manage to get the green arrow to point to the top of the blue triangle of the camera helper, I'm good to go.
In the normal render flow, shadow camera matrices are updated as part of rendering the shadow map (WebGLShadowMap.render).
However, if you want the updated matrix values before the render, then you'll need to update them manually (you already understand this part).
The shadow camera is a property of (not a child of) the DirectionalLight. As such, it doesn't follow the same rules as other scene objects when it comes to updating its matrices (because it's not really a child of the scene). Instead, you need to call the shadow property's updateMatrices method (inherited from LightShadow.updateMatrices).
const dl = new THREE.DirectionalLight(0xffffff, 1)
dl.shadow.updateMatrices(dl) // <<------------------------ Updates the shadow camera
This updates the shadow camera with information from the DirectionalLight's own matrix, and its target's matrix, to properly orient the shadow camera.
Finally, it looks like you're trying to get the "world up" of the camera. Personally, I'd use the convenience function localToWorld:
let up = new THREE.Vector3(0, 1, 0)
dl.shadow.camera.localToWorld(up) // destructively converts "up" from local-to-camera into world coordinates
Via trial and error I've figured out that what gave me the correct result was calling
directionalLight.shadow.updateMatrices(...)
and then
new THREE.Vector3(0,1,0).applyQuaternion(directionalLight.shadow.camera.quaternion)
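Put together as one self-contained snippet (the same two calls, with an assumed directionalLight variable):

// Sync the shadow camera with the light's current transform first...
directionalLight.shadow.updateMatrices(directionalLight);
// ...then rotate the local "up" axis into world space.
const up = new THREE.Vector3(0, 1, 0)
  .applyQuaternion(directionalLight.shadow.camera.quaternion);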
I've been struggling with this one for hours, and found nothing either in the docs or here on SO that would point me in the right direction to achieve what I'm aiming at.
I'm loading a scene containing several meshes. The first one is used as an actual mesh, rendered in the scene; the other ones are just used as morph targets (their geometries, properly speaking).
loader.load("scene.json", function (loadedScene) {
    camera.lookAt(scene.position);
    var basis = loadedScene.getObjectByName("Main").geometry;
    var firstTarget = loadedScene.getObjectByName("Target1").geometry;
    // and so on for the rest of the "target" meshes
    basis.morphTargets[0] = { name: 'fstTarget', vertices: firstTarget.vertices };
    var MAIN = new THREE.Mesh(basis);
    // ...
});
This works very well, and I can morph the MAIN mesh with no hassle by playing with the influence values. The differences between the basis mesh and the targets are not huge; basically they're just XY adjustments (2D shape variations).
Now, I'm using a textured material: the UVs are properly projected (Blender export) and the result is nice with the MAIN mesh as is.
The problem comes when the basis shape is morphed towards a target geometry: as expected, the texture (UVs) adapts automatically, but this is not what I need => I'd need the UVs to "morph" towards the morph target's UVs for the texture to look the same.
Here is an example of what I have now (left: basis mesh; right: morphTargetInfluences = 1 for the first morph target):
(screenshot: morph target and texture)
What I'd like to have is the exact same texture projection on the final, morphed mesh...
I can't figure out how to do that the right way. Should I reassign the target's UVs to the MAIN mesh (and how would I do that)?
The result would be like having a cloth below which a shape is morphed, with the cloth "shrink-wrapped" against that underlying shape at all times => you can actually see the shape change, but the cloth itself is not deformed, just wrapping itself properly and consistently around the shape...
Any help would be much appreciated! Thanks in advance :)
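For what it's worth, one possible direction (not from the thread, and using the current BufferGeometry API rather than the legacy Geometry shown above): blend the uv attribute toward the target's UVs with the same influence value that drives the position morph. Here basisUV and targetUV are assumed to be copies of the basis and target meshes' uv attributes with matching vertex counts, and influence mirrors mesh.morphTargetInfluences[0].

const uv = mesh.geometry.attributes.uv;
for (let i = 0; i < uv.count; i++) {
  uv.setXY(
    i,
    THREE.MathUtils.lerp(basisUV.getX(i), targetUV.getX(i), influence),
    THREE.MathUtils.lerp(basisUV.getY(i), targetUV.getY(i), influence)
  );
}
uv.needsUpdate = true; // re-upload the blended UVs to the GPU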
When I put an object into the scene and resize it with morph targets like in this example:
https://threejsdoc.appspot.com/doc/three.js/examples/webgl_morphtargets.html
when I move the camera away from the original bounds of the object, but am still looking at the resized parts, the object disappears.
That means that when I stretch a box 100 times, I still need to be looking at its original center; when I look only at its resized part, it becomes invisible.
Do you have any experience with this behavior?
How can I set the object to be visible along its full length?
Meshes are frustum-culled based on their "un-morphed" geometry.
You can prevent a mesh from being frustum-culled by setting:
mesh.frustumCulled = false;
three.js r.68
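Alternatively (a sketch, not part of the original answer): keep culling enabled but enlarge the geometry's bounding sphere, which is what the culling test uses, so it covers the largest morphed extent:

mesh.geometry.computeBoundingSphere();
mesh.geometry.boundingSphere.radius *= 100; // cover the fully stretched shape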
I just started learning OpenGL and cocos2d and I need some advice.
I'm writing a game in which the player is allowed to touch and move rectangles on the screen in a top-down view. Every time a rectangle is touched, it moves up (towards the screen) in the z direction and is scaled a bit so it looks closer than the rest. It drops back down to z = 0 after the touch ends.
I'd like the raised rectangles to drop a shadow under them, but I can't get it to work. What approach would you recommend for the best result?
Here's what I have so far.
During setup I turn on the depth buffer and then:
1. all the textures are generated with CCRenderTexture
2. the generated textures are used as an atlas to create a CCSpriteBatchNode
3. when a rectangle (tile) is touched:
static const float _raisedScale = 1.2;
static const float _raisedVertexZ = 30;
...
-(void)makeRaised
{
_state = TileStateRaised;
self.scale = _raisedScale;
self.vertexZ = _raisedVertexZ;
_glowOverlay.vertexZ = _raisedVertexZ;
_glowOverlay.opacity = 255;
}
The glow overlay is used to "light up" the rectangle.
After that I animate it using -(void)update:(ccTime)delta
Is there a way to make OpenGL cast the shadow for me using cocos, for example with shaders or OpenGL shadowing? Or do I have to use a texture overlay to simulate the shadow?
What do you recommend? How would you do it?
Sorry for a newbie question, but this is all really new to me and I really need your help.
EDIT 6th of March
I managed to get sprites with a shadow overlay to show under the tiles, and it looks OK until one tile has to drop its shadow onto another tile that has a non-zero vertexZ value. I tried creating additional shadow sprites that would be scaled and shown on top of the other tiles (usually while rising or falling), but I have problems with the animation (tile up, tile down).
Why complicate the problem?
Simply create a projection of how the shadow would look in your favourite graphics editing program and save it as a PNG. When the object is lifted, insert your shadowSprite behind the lifted object (you can shift it left or right depending on where you think your light source is).
When the user drops the object back down, the shadow can remain under the object and move with it, making itself visible when the item is lifted again.