I am looking to make a coil in three.js. Something that can be used to move things, like in a vending machine. I'm not sure though how to go about doing this because it doesn't seem to fit any of the existing meshes or geometries.
It is possible to create a mesh outside three.js, for example in 3D Studio Max or Maya, and then import it into three.js.
I would try the following:
Create an array of points that describe the helix (check Wikipedia for the formulae).
Create a curve from the points ( THREE.CatmullRomCurve3(points) ).
If you want a circular cross section, use THREE.TubeGeometry and pass it your curve.
If you want a non-circular cross section, create a new THREE.Shape with the cross section.
Use THREE.ExtrudeGeometry and pass your shape and curve to it.
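A rough sketch of those steps with a circular cross section, assuming you already have a scene; the turn count, radius, and segment values are just examples:

```js
import * as THREE from 'three';

// 1. Points along a helix: x/z trace a circle while y rises linearly.
const turns = 8, radius = 1, height = 6, pointsPerTurn = 32;
const points = [];
for (let i = 0; i <= turns * pointsPerTurn; i++) {
  const t = i / pointsPerTurn;               // number of turns completed so far
  const angle = t * Math.PI * 2;
  points.push(new THREE.Vector3(
    Math.cos(angle) * radius,
    (t / turns) * height,
    Math.sin(angle) * radius
  ));
}

// 2. A smooth curve through the points.
const curve = new THREE.CatmullRomCurve3(points);

// 3. Circular cross section: sweep a tube along the curve.
const geometry = new THREE.TubeGeometry(curve, 400, 0.1, 12, false);
const coil = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0x888888 }));
scene.add(coil);

// For a non-circular cross section, build a THREE.Shape and extrude it along the same curve:
// const geometry = new THREE.ExtrudeGeometry(shape, { steps: 400, extrudePath: curve });
```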
Just a guess, hope it helps.
I compose multiple STLs for 3D printing / milling. For that I also use CSG and need some raytracing for detecting features of the models.
My scene is pretty much static; I just have to move the models around to arrange them. For this use case I'm not really sure which approach to moving / rotating the models is right.
Currently I manipulate the BufferGeometries directly, so everything in the geometry is as in the real world: each position, each normal, with no conversion from / to local or world coordinates.
On the other hand, I could achieve the same thing by changing the meshes, which means changing just a matrix.
To me, working with the mesh feels more suited to animation and the like, while working with the geometry manipulates the real object, which is my intention.
I'm wondering when one would translate / rotate the geometry and when the mesh. I know that manipulating the geometry is harder on the CPU, but that is not a problem for my use case.
Geometry can be translated so that subsequent transformations (such as scale or rotation) originate from a preferred point. Meshes can share a geometry. There are distinct use cases for each, if you care to memorize the list. Sometimes I integrate pre-existing code samples; sometimes the decision is made for me by some aspect of the process. Where the properties overlap, it comes down to which is more convenient. I like the pattern of modifying a dummy Object3D using those methods and then updating the geometry from its matrix. There's a whole book on normals, but I didn't write it, sadly...
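In code, the two options (plus the dummy-object pattern) look roughly like this; the values are only illustrative:

```js
import * as THREE from 'three';

const geometry = new THREE.BoxGeometry(1, 1, 1);
const mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());

// a) Bake the transform into the vertices: the geometry holds "real world" positions.
geometry.translate(10, 0, 0);
geometry.rotateY(Math.PI / 4);

// b) Leave the vertices alone and transform the mesh: only a matrix changes.
mesh.position.set(10, 0, 0);
mesh.rotation.y = Math.PI / 4;

// c) Dummy-object pattern: compose the transform on an Object3D,
//    then bake its matrix into the geometry in one step.
const dummy = new THREE.Object3D();
dummy.position.set(10, 0, 0);
dummy.rotation.y = Math.PI / 4;
dummy.updateMatrix();
geometry.applyMatrix4(dummy.matrix);
```

(In practice you would use only one of these on a given object; a), b) and c) are shown together just for comparison.)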
I'm trying to use the three.js physics cloth demo to create a tablecloth, but the resulting cloth is always a vertical banner. It uses the Ammo softBodyHelpers.CreatePatch function to make the soft body, but I could not find documentation on how it can be used. Does anyone know how this can be adapted to make a tablecloth? My initial mesh looks fine as a tablecloth, but as soon as the first physics loop updates the mesh, it immediately moves the vertices into a vertical banner.
According to the ammo.js GitHub page:
ammo.js is a direct port of the Bullet physics engine to JavaScript, using Emscripten. The source code is translated directly to JavaScript, without human rewriting, so functionality should be identical to the original Bullet.
Looking at the documentation for Bullet's SoftBodyHelpers, CreatePatch is indeed meant for rectangular geometry only.
Good news, though - there's a separate function called CreateEllipsoid.
Do you have any other static geometry in the scene for the cloth to collide with? Otherwise it will just be a sheet, falling through frictionless space.
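For reference, the CreatePatch call in the three.js ammo cloth demo looks roughly like the sketch below, assuming Ammo is initialised and a physicsWorld exists. The four corner vectors define the plane of the patch, so a horizontal tablecloth would use corners that all share the same y value; the variable names and numbers here are illustrative:

```js
const softBodyHelpers = new Ammo.btSoftBodyHelpers();

// Four corners at the same height => a horizontal patch (a tablecloth rather than a banner).
const y = 5;
const c00 = new Ammo.btVector3(-1, y, -1);
const c10 = new Ammo.btVector3( 1, y, -1);
const c01 = new Ammo.btVector3(-1, y,  1);
const c11 = new Ammo.btVector3( 1, y,  1);

const clothSoftBody = softBodyHelpers.CreatePatch(
  physicsWorld.getWorldInfo(),
  c00, c10, c01, c11,       // corner order follows Bullet's CreatePatch signature
  segmentsX + 1,            // resolution along one edge
  segmentsZ + 1,            // resolution along the other edge
  0,                        // bitmask of corners to pin (0 = none)
  true                      // generate diagonal links
);
```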
Does three.js have a function or capability for AI (artificial intelligence)? Specifically, let's say an FPS game: I want enemies to look for me and try to kill me. Is that possible in three.js? Does it have such functionality or a system for it?
WebGL
create buffer
bind buffer
allocate data
set up state
issue draw call
run GLSL shaders
three.js
create a 3d context using WebGL
create 3 dimensional objects
create a scene graph
create primitives like spheres, cubes, toruses
move objects around, rotate them, scale them
test for intersections between rays, triangles, planes, spheres, etc.
create 'materials' (rather than shaders)
JavaScript
write algorithms
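To make the contrast concrete, here is a minimal sketch of what that three.js layer looks like (no hand-written buffers or GLSL):

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// A primitive plus a 'material' instead of buffers, state setup and shaders.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

renderer.render(scene, camera);
```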
I want enemies to look for me and try to kill me
Yes, three.js is capable of doing this; you just have to write an algorithm using three's classes. Your enemies would be 3D objects, casting rays, intersecting with other objects, etc.
You would be building a game engine, and you could use three.js as the rendering framework within that engine. Rendering is just one part of it. Think of a 2D shooter: you could make it using a 2D context, but you could also enhance it and make it 2.5D by working with a 3D context. Everything else can stay the same.
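A minimal line-of-sight sketch of that idea, assuming you already have enemy and player objects and a list of obstacle meshes (the names and the speed value are illustrative):

```js
const raycaster = new THREE.Raycaster();
const enemySpeed = 2; // units per second, just an example

function enemyCanSeePlayer(enemy, player, obstacles) {
  const origin = enemy.position.clone();
  const direction = player.position.clone().sub(origin).normalize();
  raycaster.set(origin, direction);

  const hits = raycaster.intersectObjects(obstacles, true); // sorted nearest first
  const distanceToPlayer = origin.distanceTo(player.position);

  // Visible if nothing blocks the ray before it reaches the player.
  return hits.length === 0 || hits[0].distance > distanceToPlayer;
}

function updateEnemy(enemy, player, obstacles, delta) {
  if (enemyCanSeePlayer(enemy, player, obstacles)) {
    enemy.lookAt(player.position);        // turn toward the player
    enemy.translateZ(enemySpeed * delta); // and advance along the local forward axis
  }
}
```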
Is there any WebGL engine that might have it? Or is it just not a WebGL thing?
Unity probably has everything you can possibly think of. Unity is capable of outputting WebGL, so it could be considered a 'WebGL engine'.
Babylon.js is more engine-like.
Three.js is the best and most powerful WebGL 3D engine, with no equal on the market, and it's missing out on such an ability.
Three.js isn't exactly a 3d engine. Wikipedia says:
Three.js is a lightweight cross-browser JavaScript library/API used to create and display animated 3D computer graphics on a Web browser. Three.js uses WebGL.
So if I just need to draw a car, or a spinning logo, I don't need them to come looking for me or try to shoot me. I just need them to stay in one place and rotate.
For a graphics demo you don't even need this - with a few draw instructions, you could render a full screen quad with a very elaborate pixel shader. Three gives you a ton of options, especially if you consider all the featured examples.
It works both ways: while you can expand three.js any way you want, you can also strip it down for a very specific purpose.
If you need to build an app that does image processing and features no '3d' graphics, you could still leverage WebGL with three.js.
You don't need any vector, matrix, ray, or geometry classes.
If you don't have Vector3, you probably can't keep PlaneGeometry, but you could use BufferGeometry and manually construct a plane. No transformations need to happen, so there's no need for the matrix classes. You'd use shaders and textures, and perhaps something like the EffectComposer.
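A minimal sketch of that stripped-down usage, assuming you already have a renderer; the texture path, uniform names and the grayscale shader are just illustrative:

```js
// Hand-built full-screen triangle: positions are already in clip space, no matrices needed.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
  -1, -1, 0,   3, -1, 0,   -1, 3, 0   // one oversized triangle covers the screen
]), 3));
geometry.setAttribute('uv', new THREE.BufferAttribute(new Float32Array([
  0, 0,   2, 0,   0, 2
]), 2));

const material = new THREE.ShaderMaterial({
  uniforms: { uTexture: { value: new THREE.TextureLoader().load('input.png') } },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0);  // pass straight through
    }
  `,
  fragmentShader: `
    uniform sampler2D uTexture;
    varying vec2 vUv;
    void main() {
      vec4 c = texture2D(uTexture, vUv);
      float g = dot(c.rgb, vec3(0.299, 0.587, 0.114));  // example: grayscale
      gl_FragColor = vec4(vec3(g), 1.0);
    }
  `
});

const quad = new THREE.Mesh(geometry, material);
quad.frustumCulled = false;  // vertices are already in clip space, so skip culling
const scene = new THREE.Scene();
scene.add(quad);
renderer.render(scene, new THREE.Camera());  // a bare camera; the vertex shader ignores it
```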
I'm afraid not. Three.js is just an engine for displaying 3D content.
Using it to create games is only one possibility. A few engines do come with pre-coded features like AI (among other things) to attract game creators, but using them is more restrictive than writing the exact code you need.
Three.js itself doesn't; however, https://mugen87.github.io/yuka/ is a great AI engine that can work in collaboration with three to create AI.
It provides line-of-sight and shooting game logic, as well as car logic, which I've been playing around with recently; there's a React Three Fiber example here: https://codesandbox.io/s/loving-tdd-u1fs9o
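Very roughly, pairing three.js with Yuka looks like the sketch below, assuming an existing three.js scene, camera and renderer (based on Yuka's published examples; the mesh and behaviour choices are just illustrative):

```js
import * as YUKA from 'yuka';

const entityManager = new YUKA.EntityManager();
const time = new YUKA.Time();

// A steered "enemy" entity, rendered by an ordinary three.js mesh.
const enemyMesh = new THREE.Mesh(new THREE.ConeGeometry(0.2, 0.6), new THREE.MeshNormalMaterial());
enemyMesh.matrixAutoUpdate = false;          // Yuka will drive the matrix
scene.add(enemyMesh);

const enemy = new YUKA.Vehicle();
enemy.setRenderComponent(enemyMesh, (entity, renderComponent) => {
  renderComponent.matrix.copy(entity.worldMatrix);
});
enemy.steering.add(new YUKA.SeekBehavior(new YUKA.Vector3(0, 0, 0))); // e.g. seek the player's position
entityManager.add(enemy);

function animate() {
  requestAnimationFrame(animate);
  entityManager.update(time.update().getDelta()); // advance the AI simulation
  renderer.render(scene, camera);
}
animate();
```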
I'm struggling with a visualization I'm working on that involves a stream of repeated images. I have it working with a single sprite with a ParticleSystem, but I can only apply a single material to the system. Since I want to choose between textures I tried creating a pool of Particle objects so that I could choose the materials individually, but I can't get an individual Particle to show up with the WebGL renderer.
This is my first foray into WebGL/Three.js, so I'm probably doing something bone-headed, but I thought it would be worth asking what the proper way to go about this is. I'm seeing three possibilities:
I'm using Particle wrong (initializing with a mapped material, adding to the scene, setting position) and I need to fix what I'm doing.
I need a ParticleSystem for each sprite I want to display.
What I'm doing doesn't fit into particles at all and I really should be using another object type.
All the examples I see using the canvas renderer use Particle directly, but I can't find an example using the WebGL renderer that doesn't use ParticleSystem. Any hints?
Ok, I am going from what I have read elsewhere on this GitHub issues page. You should start by reading it. It seems that Particle is simply for the CanvasRenderer, and it will become Sprite in a future version of Three.js. ParticleSystem, however, is not going to fulfill your needs either, it seems. I don't think these classes are going to help you accomplish this in WebGL in 3D. Depending on what you are doing, you might be better off with the CanvasRenderer anyway. ParticleSystem will only allow you to apply a single material, which will serve as the material for each particle in the system, as you suggested.
Short answer:
You can render THREE.Particle using THREE.CanvasRenderer only.
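For what it's worth, the class that replaced Particle in later three.js versions is Sprite, and it does render with the WebGLRenderer with one material per sprite. A small sketch, assuming an existing scene; the texture path is illustrative:

```js
const texture = new THREE.TextureLoader().load('image-a.png');               // illustrative path
const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
sprite.position.set(0, 1, 0);
scene.add(sprite);   // each Sprite can carry its own SpriteMaterial / texture
```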
I've read the "Learning WebGL" tutorial, but it does not explain everything. Things like the Google WebGL experiments are amazing, but I've been wondering... how do you move a 3D object along a custom path to swing it into the scene or create a custom transition?
WebGL is essentially OpenGL on the web, so how do you do that in OpenGL?
What you're looking for is pretty common functionality, but it is hard to find concrete examples showing how to do it.
The easiest way I have found to do it is using Apple's J3DIMath.js WebGL library.
You basically want to define a "camera" perspective matrix, then move the camera along a predefined path of vertices through your 3D space. As you move along the "track" of vertices, at each draw frame you can call the function J3DIMatrix4.lookat(), passing it the position vector along the path, the direction to look at, and the "up" direction, and it will create the appearance of a moving camera.
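In three.js terms (rather than the J3DIMath API), the same idea looks roughly like this, assuming you already have a scene, camera and renderer; the control points and the speed are just examples:

```js
const path = new THREE.CatmullRomCurve3([
  new THREE.Vector3(-10, 2, 10),
  new THREE.Vector3(-5, 4, 5),
  new THREE.Vector3(0, 3, 0),
  new THREE.Vector3(5, 4, -5)
]);

let t = 0;
function animate() {
  requestAnimationFrame(animate);
  t = (t + 0.001) % 1;                       // progress along the path, 0..1
  camera.position.copy(path.getPointAt(t));  // move the camera along the track
  camera.lookAt(scene.position);             // keep looking at the scene center
  renderer.render(scene, camera);
}
animate();
```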
I hope this helps!