I have a dynamic sphere. How do I set it in motion, so that it can then bounce off obstacles? Gravity is disabled.
I don't want to just move it by incrementing x and y each frame; it is a 3D sphere-shaped actor.
To set a dynamic actor in motion, use the addForce function of that actor.
To disable gravity for an actor, call raiseBodyFlag(NX_BF_DISABLE_GRAVITY) on that actor.
I'm using ThreeJS, but this is a general math question.
My end goal is to position an object in my scene using 2D screen space coordinates; however, I want a specific z position in the perspective projection.
As an example, I have a sphere that I want to place towards the bottom left of the screen while having the sphere be 5 units away from the camera. If the camera were to move, the sphere would maintain its perceived size and position.
I can't use an orthographic camera because the sphere needs to be able to move around in the perspective projection. At some point the sphere will be undocked from the screen and interact with the scene using physics.
I'm sure the solution lies somewhere in the camera's inverse matrix; however, that is beyond my abilities at the moment.
Any help is greatly appreciated.
Thanks!
Your post includes too many questions, which is out of scope for StackOverflow. But I’ll try to answer just the main one:
Create a plane Mesh using PlaneGeometry.
Rotate it to face the camera and place it 5 units away.
Add it as a child with camera.add(plane), so that whenever the camera moves, the plane moves with it. (Also add the camera itself to the scene with scene.add(camera), or its children won't be part of the scene graph.)
Use a Raycaster's .setFromCamera(coords, camera) and .intersectObject(plane) methods to convert x, y screen coordinates into the x, y, z world position where the ray intersects the plane. You can read about them in the docs.
Once it's working, make the plane invisible with plane.visible = false.
You can see the raycaster working in this official example: https://threejs.org/examples/#webgl_geometry_terrain_raycast
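Putting those steps together, a minimal sketch might look like this (the plane size, the pixel-to-NDC conversion, and the screenToWorld helper name are my own assumptions, not from the original answer):

    // Assumes an existing scene, camera, and full-window renderer.
    const plane = new THREE.Mesh(
      new THREE.PlaneGeometry(100, 100),
      new THREE.MeshBasicMaterial()
    );
    plane.position.set(0, 0, -5);  // 5 units in front of the camera
    camera.add(plane);             // the plane now follows the camera
    scene.add(camera);             // the camera must be in the scene graph
                                   // so its children get world matrices

    const raycaster = new THREE.Raycaster();

    // screenX, screenY: pixel coordinates of the click / placement point.
    function screenToWorld(screenX, screenY) {
      // Convert pixels to normalized device coordinates (-1 to +1).
      const coords = new THREE.Vector2(
        (screenX / window.innerWidth) * 2 - 1,
        -(screenY / window.innerHeight) * 2 + 1
      );
      raycaster.setFromCamera(coords, camera);
      const hits = raycaster.intersectObject(plane);
      return hits.length ? hits[0].point : null;  // world-space x, y, z
    }

    plane.visible = false;  // hide it once everything works; the raycaster
                            // still tests meshes with visible = false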
I am building an application in A-Frame and I want to constrain the viewer's movement, that is, I want to limit where the camera can go in the scene. For example, I have an a-plane that is the floor, and I want the camera to stop moving when it reaches 0 on the Z axis, to keep it from going through the floor, and to stop again if it reaches 20 on the Z axis. I also wish to limit the movement in the X and Y directions. There are no obstacles in the scene besides the a-plane. Is creating a navigation mesh my only option, or is there an easier way to constrain movement? Thanks!
I don't know of built-in tools to do this, but you could do it with programming (this sounds pretty easy). You could create a custom component, attached to the camera, with a tick handler that records the camera's position in world space and stores it in a variable (camPosPrevFrame). Then create a function that tests whether the current position is outside the bounds; if so, set the camera's coordinate on the axis that has exceeded its limit back to the previously recorded position (camPosPrevFrame). If you are simply testing whether the camera is on one side of an orthogonal plane (say the world-space XY plane), that is pretty simple math (camera.getWorldPosition(target).x > someAmount). If you have a more complex situation, there are ways to test which side of an arbitrary plane a point is on (it involves the dot product).
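A minimal sketch of the simple axis-aligned case. The component name (constrain-position) and the bounds are placeholders; this variant clamps the position directly every tick rather than restoring camPosPrevFrame, which amounts to the same thing for box-shaped bounds:

    // Hypothetical component; attach it to the camera entity, e.g.
    // <a-camera constrain-position="min: -10 0.5 0; max: 10 3 20"></a-camera>
    AFRAME.registerComponent('constrain-position', {
      schema: {
        min: { type: 'vec3', default: { x: -10, y: 0.5, z: 0 } },
        max: { type: 'vec3', default: { x: 10, y: 3, z: 20 } }
      },
      tick: function () {
        // object3D.position is the camera's world position as long as the
        // camera (or its rig) sits at the scene origin.
        const p = this.el.object3D.position;
        const min = this.data.min;
        const max = this.data.max;
        p.x = Math.min(Math.max(p.x, min.x), max.x);
        p.y = Math.min(Math.max(p.y, min.y), max.y);
        p.z = Math.min(Math.max(p.z, min.z), max.z);
      }
    });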
I want to create a 3D Snake-type game with Babylon.js or Three.js.
Here's a picture of how I want the snake to move when it turns left or right:
[Image: movement comparison]
That is, I want the snake to move more "smoothly", just like a car's movement when it turns left or right.
Do I need skeletal animation to achieve this goal? If so, could you give me a solution/suggestion for how to implement it?
Another potential (and likely simpler) way to do it would be to use a particle system. When the snake grows, you could just change the parameters of the particle system, and the engine would be responsible for handling the smoothing.
The particle system would just need to adhere to the following rules:
Particles shoot out in one direction from the head mesh.
Particle minimum_lifetime = maximum_lifetime
Particle minimum_speed = maximum_speed
Then, as the snake grows in size, you just increase the particles' lifetime. And then you just compare the head's position to its particle (tail) positions to check for collisions.
The default particle system in BJS doesn't, as far as I know, officially support collisions, but you could query the positions of the particles to check for collisions, or use the Solid Particle System, which allows for physics. (SPS is probably the best way to go in the end.)
Here is an example playground, though it only demonstrates the movement:
https://playground.babylonjs.com/#PLRKFW#1
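For reference, the rules above translate to roughly the following BJS settings (all values and the texture path are placeholders, not taken from the playground):

    // Sketch of the rules above; headMesh is your snake-head mesh.
    const ps = new BABYLON.ParticleSystem('tail', 2000, scene);
    ps.particleTexture = new BABYLON.Texture('textures/flare.png', scene);
    ps.emitter = headMesh;                 // particles shoot out of the head

    // Fixed lifetime and speed give the tail a deterministic length.
    ps.minLifeTime = ps.maxLifeTime = 2;   // increase this as the snake grows
    ps.minEmitPower = ps.maxEmitPower = 1;

    // A single emission direction (opposite the head's travel direction).
    ps.direction1 = ps.direction2 = new BABYLON.Vector3(0, 0, -1);

    ps.emitRate = 100;
    ps.start();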
So, I'm building an architectural visualization with Three.js, and one of the things the user should be able to do is click on objects and orbit around them. The problem is that the camera is able to clip through walls. I fixed that by assigning each clickable object its own limiting azimuth and polar angles. Now the problem is that azimuth angles go from -PI to +PI, and it's impossible to limit between, for example, 1.5 and -2.4, because it limits the "wrong" way. I hope this graphic explains that a little better:
Here's a link to the live version (you move around by clicking on the ground):
https://jim-fx.com/modern/
As you can see, on objects on the right side of the room the limiting works flawlessly, but on the cabinet and the vases the camera clips through the wall.
If anyone could help me, that would be amazing. Any other tips are welcome as well.
Greetings, Max
There are several solutions to your problem. One is to implement a kind of collision detection with real or virtual walls for your camera, which stops the rotation. However, I guess you are looking for something simpler to implement.
As I don't know Three.js very well, I will give you a generic solution, which should be easily adaptable to Three.js.
The first thing is to not use the built-in Three.js orbit controls, but to implement your own, where you control all the transformations. This is in fact very easy.
To create an orbitable camera, you simply have to create:
A "null" transformable object, which mean a simple transformable entity that does not embed any shape (is not rendered, is invisible, but exists). I hope Three.js provides such elementary thing.
A camera, which should be itself another transformable.
Once you have this, you simply parent the camera to the "null" object. Now, if you rotate the "null" object, the camera rotates with it. Then, to orbit, you move the camera back from the parent object:
Null                Camera
+ - - - - - - - - - |>
Like this, the "null" object becomes the camera's "look-at point": if you rotate the "null" object around Y (I believe Three.js uses Y up), you control the camera's azimuth, and if you rotate it around X or Z (depending on the coordinate system), you control the camera's altitude. You can even move the camera forward and backward along its local Z axis to close in on the "look-at point".
Well, you now have an orbit camera that is easy to control. But your problem is not yet solved: how do you make this -Pi/+Pi control possible for every initial camera orientation?
Simple: you create a second "null" transform object, call it "the socle", and parent the first one to it. This way, the rotation of the camera's "look-at point" is always local, and you can now rotate "the socle" to give your orbital-camera group an initial orientation in world space.
In fact, it is much like creating virtual gimbals. I hope I was clear; with pictures this would be easier to visualize...
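In Three.js terms, both "null" objects are plain Object3Ds. A rough sketch of the rig, assuming an existing scene and camera (initialAzimuth, maxAzimuth, and maxAltitude are hypothetical per-object values, not from the answer):

    // "Socle" -> look-at pivot -> camera, as described above.
    const socle = new THREE.Object3D();  // outer null: initial world orientation
    const pivot = new THREE.Object3D();  // inner null: the camera's look-at point
    socle.add(pivot);
    pivot.add(camera);
    scene.add(socle);

    camera.position.set(0, 0, 5);        // pull the camera back along local Z

    socle.rotation.y = initialAzimuth;   // per-object world orientation, set once

    // Because the pivot rotates locally, the azimuth limits can always be
    // expressed symmetrically around 0, whatever the socle's orientation is:
    function setOrbit(azimuth, altitude) {
      pivot.rotation.y = THREE.MathUtils.clamp(azimuth, -maxAzimuth, maxAzimuth);
      pivot.rotation.x = THREE.MathUtils.clamp(altitude, -maxAltitude, maxAltitude);
    }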
I have a situation that I'm not really sure how to handle. I have an OpenGL object of about 20k vertices, and I need to offer the user the possibility of selecting any one of these vertices (with the smallest margin of error possible). Here is what I want to do:
Next to the 3D canvas of the object, I also offer the user three 'slices' cut by the planes x = 0, y = 0, and z = 0. In the simplest example, a sphere, these would be three circles, each corresponding to 'cutting' out one of the dimensions. Now let's take z = 0 for the purposes of the example. When the user clicks on a point, say (x_circle, y_circle), I would like to get the actual point in the 3D representation where they clicked. The z would be 0, of course, but I can't figure out a way to get the x and y. I can easily translate (x_circle, y_circle) -> (x_screen, y_screen), which would have the same result as a click on the canvas at those coordinates, but I need a way to translate that into the (x, y, 0) coordinate in the 3D view.
The same thing would need to be done for x = 0 and y = 0, but I think if I can understand/implement a solution for z = 0, I can apply more or less the same approach with an added rotation. If anyone can help with examples/code, or even the math behind this, it would help a lot, because at the moment I'm not really sure how to proceed.
When the user clicks, you can render the vertices using GL_POINTS (with a certain point size, if you like) to an off-screen buffer, using a shader that renders each vertex's index into RGBA. Then you read back the pixel under the mouse position and see which index it is.
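The index-to-RGBA round trip is plain byte packing. A sketch in JavaScript/WebGL for consistency with the rest of this page (desktop GL is analogous; the helper names are mine):

    // Pack a vertex index into an RGBA color: 8 bits per channel gives
    // 2^24 distinct indices, plenty for 20k vertices. The result could be
    // fed to the shader as a per-vertex color attribute, for example.
    function indexToColor(i) {
      return [
        (i & 0xff) / 255,
        ((i >> 8) & 0xff) / 255,
        ((i >> 16) & 0xff) / 255,
        1.0  // opaque alpha, so the point actually gets written
      ];
    }

    // After rendering the points to the off-screen buffer, read back the
    // pixel under the mouse and decode. GL's y origin is at the bottom,
    // so pass y = bufferHeight - mouseY - 1.
    function pickedIndex(gl, x, y) {
      const px = new Uint8Array(4);
      gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, px);
      return px[0] | (px[1] << 8) | (px[2] << 16);
    }

As for the (x, y, 0) mapping the question asks about, a standard route is to unproject the click onto the near and far planes and intersect the resulting ray with z = 0. This sketch uses Three.js math helpers for brevity; the same matrix math works in plain OpenGL with any unproject routine:

    // xNdc, yNdc: the click in normalized device coordinates (-1 to +1).
    function clickToZ0(xNdc, yNdc, camera) {
      // Two points on the picking ray, on the near and far planes.
      const near = new THREE.Vector3(xNdc, yNdc, -1).unproject(camera);
      const far = new THREE.Vector3(xNdc, yNdc, 1).unproject(camera);
      const dir = far.sub(near);               // ray direction
      const t = -near.z / dir.z;               // where the ray crosses z = 0
      return near.add(dir.multiplyScalar(t));  // the (x, y, 0) world point
    }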