best geometry for kirigami in three.js?

I'm trying to model simple pop-up kirigami objects in three.js, like real-world pop-up paper cards.
At the moment I can build them using Shape objects, first drawn in 2D and then rotated and translated into place.
It works, but I wonder if there's a more appropriate kind of geometry I should be using to recreate the whole sheet of paper as a single object in three.js, instead of a collection of individual surfaces. Ultimately I'd like to be able to open and close the pop-up.
I've read and reread the docs and examples, but I can't figure this one out. I'm new to three.js and dazzled by its capabilities, but it's a steep learning curve. Thanks for any wisdom.
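Not an answer from the thread, but here is a minimal sketch of the hinged-panel approach described above: each fold is a 2D Shape turned into geometry and parented to a pivot group, and opening/closing the pop-up is just animating the pivot's rotation. The existing scene and the foldAngle variable are assumptions for illustration.

import * as THREE from 'three';

// Draw one panel of the pop-up as a 2D shape (units are arbitrary).
const panelShape = new THREE.Shape();
panelShape.moveTo(0, 0);
panelShape.lineTo(2, 0);
panelShape.lineTo(2, 1);
panelShape.lineTo(0, 1);

const panelGeometry = new THREE.ShapeGeometry(panelShape);
const paper = new THREE.MeshStandardMaterial({ color: 0xfffaf0, side: THREE.DoubleSide });
const panel = new THREE.Mesh(panelGeometry, paper);

// Parent the panel to a pivot placed on the fold line, so rotating the
// pivot acts like a hinge in the sheet of paper.
const hinge = new THREE.Group();
hinge.add(panel);
scene.add(hinge); // assumes an existing scene

// Opening and closing the pop-up is just animating the fold angle.
let foldAngle = 0;
function openPopup(delta) {
  foldAngle = Math.min(foldAngle + delta, Math.PI / 2);
  hinge.rotation.x = foldAngle;
}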

Related

ThreeJS - delineate between meshes

I am displaying a number of different models simultaneously in three.js.
Some models have the same texture, and it can make it hard to tell where one starts and another ends.
As an example, I compared a screenshot from my three.js viewer with one from Blender.
It is not obvious in three.js where the two objects intersect.
So far I've attempted to alter lighting and material settings, but without success on that front.
I also tried an outline post-processing effect, but due to what I think is a disorderly output from SketchUp (where the models were made), the outline effect is chaotic.
I am trying to find a good way to clearly delineate between models.
Raycaster from mouse position. De-emphasize the other models' opacity or something, as sketched below. Or, if you're serious, you could try clipping with the stencil buffer. There are some really good three-mesh-bvh examples that demonstrate this: https://gkjohnson.github.io/three-mesh-bvh/example/bundle/clippedEdges.html
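A minimal sketch of the raycaster-plus-opacity idea mentioned above, assuming the top-level models are stored in a models array, the camera already exists, and materials are not shared between models (clone them first if they are):

import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

window.addEventListener('pointermove', (event) => {
  // Convert mouse position to normalized device coordinates.
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

  raycaster.setFromCamera(pointer, camera);               // assumes an existing camera
  const hits = raycaster.intersectObjects(models, true);  // assumes a models array

  // Find which top-level model contains the mesh under the cursor.
  const hovered = hits.length > 0
    ? models.find((m) => m.getObjectById(hits[0].object.id) !== undefined)
    : null;

  // Fade every model except the one under the cursor.
  for (const model of models) {
    model.traverse((child) => {
      if (child.isMesh) {
        child.material.transparent = true;
        child.material.opacity = model === hovered ? 1.0 : 0.4;
      }
    });
  }
});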

Three.js rendering object without adding to scene?

I'm creating a game in three.js. It has a lot of objects, and I'm using hashtables and chunk data structures to increase performance.
However, every object (simple cubes/planes) is added to the scene with scene.add(mesh);
So three.js stores the objects in a data structure that might not be the best one for my case.
My questions are:
Does it matter how three.js stores the added objects?
And is there a way to render objects in the render loop manually, without having to add them to the scene via scene.add(...)?
How do other games, like voxel.js, solve this problem? It looks like voxel.js has great performance compared to my game, and a voxel game really consists of just simple planes. My game is almost the same as a Minecraft-like block world, where all blocks are made of planes. Only the visible planes are added to the scene. But I don't get the same good performance as a voxel.js game. I want to figure out what I can do to make it faster.
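No answer was posted in this thread, but for context: a common way voxel-style engines keep the scene small is to merge all visible faces of a chunk into one BufferGeometry, so the scene graph holds one mesh per chunk rather than one object per plane. A rough, hypothetical sketch of that idea (the face data structure here is made up; a real mesher would also pick face orientation and UVs):

import * as THREE from 'three';
// mergeGeometries lives in the BufferGeometryUtils addon
// (called mergeBufferGeometries in older releases).
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// `faces` is assumed to be an array of { x, y, z } positions of visible block faces.
function buildChunkMesh(faces, material) {
  const geometries = faces.map((face) => {
    const quad = new THREE.PlaneGeometry(1, 1);
    quad.translate(face.x, face.y, face.z);
    return quad;
  });
  const merged = mergeGeometries(geometries); // one geometry for the whole chunk
  return new THREE.Mesh(merged, material);    // one scene object per chunk
}

// Usage: scene.add(buildChunkMesh(visibleFaces, blockMaterial));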

Create floor with dynamic soft reflections

Using three.js, I am trying to create a floor that reflects the objects that sit upon it. Preferably the floor material should reflect not like a mirror but in a more 'matte' or diffused way.
To achieve this I looked to Jaume Sanchez Elias who has made a great example using a cube camera: Look for the "smooth material" example on this page:
http://www.clicktorelease.com/blog/making-of-cruciform
Here is my attempt using the same technique. But as you can see, the reflections are misplaced; they do not appear underneath the mountain objects as expected.
http://dev.udart.dk/stackoverflow_reflections/
I am looking to correct this or to use any other technique that will achieve a more correct diffused reflection.
There are three.js examples using the cube camera technique, but they all create mirror-like effects, not a soft reflection.
Vibber. Parallax-corrected cubemaps, the technique used in cru·ci·form, only work for closed volumes, like cubes. It works really well to simulate correct reflections inside a room, but not so much for outdoors or open/large scenes. They also can't reflect anything that is inside the cubemap; you'd have to split the volume into many sub-volumes.
I can think of a couple of solutions for what you want to achieve:
SSR: screen-space reflections. You can find more info in many places on the internet. It's not the most trivial of effects to implement, and you might have to change the way you render your scene.
Simpler post-processing approach: since you have a flat floor, render the mountains vertically flipped on a framebuffer object, blur it, and render the regular scene on top (a stripped-down sketch of this idea follows below). For extra effect, render the depth of the flipped mountains and use that value as the blur radius, to get diffuse reflections.
As always, there's a ton of ways to achieve the (un)expected result :)
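Not from the original answer, but here is the cheapest variant of the second idea: instead of a framebuffer and a blur pass, simply mirror the mountain meshes below the floor and let them show through a semi-transparent floor plane, which already reads as a soft reflection. The mountains group and the floor-at-y-equals-zero layout are assumptions.

import * as THREE from 'three';

// Mirror the reflected objects below the floor plane (floor assumed at y = 0).
const reflected = mountains.clone();  // `mountains` is an existing Group/Mesh
reflected.scale.y = -1;               // flipping inverts face winding; use
                                      // material.side = THREE.DoubleSide if faces vanish
scene.add(reflected);

// A semi-transparent floor on top softens and "diffuses" the mirrored copy.
const floor = new THREE.Mesh(
  new THREE.PlaneGeometry(200, 200),
  new THREE.MeshStandardMaterial({
    color: 0x222222,
    transparent: true,
    opacity: 0.8,   // lower opacity = stronger reflection
    roughness: 1.0,
  })
);
floor.rotation.x = -Math.PI / 2;
scene.add(floor);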

Artificial intelligence functionality in three.js

Does three.js have any AI (artificial intelligence) functionality or capability? Specifically, let's say an FPS game: I want enemies to look for me and try to kill me. Is that possible in three.js? Does it have such a system?
WebGL:
create buffer
bind buffer
allocate data
set up state
issue draw call
run GLSL shaders

three.js:
create a 3D context using WebGL
create 3-dimensional objects
create a scene graph
create primitives like spheres, cubes, toruses
move objects around, rotate them, scale them
test for intersections between rays, triangles, planes, spheres, etc.
create 'materials' (rather than shaders)

JavaScript:
write algorithms
I want enemies to look for me and try to kill me
Yes, three.js is capable of doing this; you just have to write an algorithm using three's classes. Your enemies would be 3D objects, casting rays, intersecting with other objects, etc.
You would be building a game engine, and you could use three.js as the rendering framework within that engine. Rendering is just one part of it. Think of a 2D shooter: you could make it using a 2D context, but you could also enhance it and make it 2.5D by working with a 3D context. Everything else can stay the same.
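A tiny illustration of what "write an algorithm using three's classes" can look like: an enemy mesh checks line of sight to the player with a Raycaster and walks toward it. The enemy, player, and obstacles objects are assumptions.

import * as THREE from 'three';

const raycaster = new THREE.Raycaster();

function updateEnemy(enemy, player, obstacles, delta) {
  const toPlayer = new THREE.Vector3().subVectors(player.position, enemy.position);
  const distance = toPlayer.length();

  // Line-of-sight test: does anything block the way to the player?
  raycaster.set(enemy.position, toPlayer.clone().normalize());
  const hits = raycaster.intersectObjects(obstacles, true);
  const canSeePlayer = hits.length === 0 || hits[0].distance > distance;

  if (canSeePlayer) {
    // Chase: move toward the player and face them.
    enemy.position.addScaledVector(toPlayer.normalize(), 2.0 * delta); // 2 units/second
    enemy.lookAt(player.position);
  }
}

// Call updateEnemy(enemy, player, obstacles, delta) from the render loop.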
Is there any WebGL engine that might have it? Or is it just not a WebGL thing?
Unity probably has everything you can possibly think of. Unity is capable of outputting WebGL, so it could be considered a 'WebGL engine'.
Babylon.js is more engine-like.
Three.js is the best and most powerful WebGL 3D engine, with no equal on the market, and it's missing out on such an ability.
Three.js isn't exactly a 3D engine. Wikipedia says:
Three.js is a lightweight cross-browser JavaScript library/API used to
create and display animated 3D computer graphics on a Web browser.
Three.js uses WebGL.
So if I need to just draw a car or a spinning logo, I don't need them to come looking for me or try to shoot me. I just need them to stay in one place and rotate.
For a graphics demo you don't even need this: with a few draw instructions, you could render a full-screen quad with a very elaborate pixel shader. Three gives you a ton of options, especially if you consider all the featured examples.
It works both ways: while you can expand three.js any way you want, you can also strip it down for a very specific purpose.
If you need to build an app that does image processing and features no '3D' graphics, you could still leverage WebGL with three.js.
You don't need any vector, matrix, ray, or geometry classes.
If you don't have Vector3, you probably can't keep PlaneGeometry, but you could use BufferGeometry and manually construct a plane. No transformations need to happen, so there's no need for matrix classes. You'd use shaders and textures, and perhaps something like the EffectComposer.
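As an illustration of that stripped-down setup: a full-screen triangle built by hand with BufferGeometry and a ShaderMaterial, with no transforms involved. The fragment shader is a placeholder for whatever image processing you actually need, and the existing scene/renderer/camera are assumptions.

import * as THREE from 'three';

// A single oversized triangle that covers the whole screen in clip space;
// no matrices or Vector3 needed.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
  -1, -1, 0,   3, -1, 0,   -1, 3, 0,
]), 3));

const material = new THREE.ShaderMaterial({
  vertexShader: `
    void main() {
      gl_Position = vec4(position, 1.0); // pass clip-space coords straight through
    }
  `,
  fragmentShader: `
    void main() {
      // Placeholder "image processing": a simple gradient from screen position.
      gl_FragColor = vec4(gl_FragCoord.xy / vec2(1024.0), 0.5, 1.0);
    }
  `,
});

const quad = new THREE.Mesh(geometry, material);
quad.frustumCulled = false; // the triangle is defined directly in clip space
scene.add(quad);            // render with any camera; the vertex shader ignores it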
I'm afraid not. Three.js is just an engine for displaying 3D content.
Using it to create games is only one possibility. A few projects do ship pre-coded features like AI (among other things) to attract game creators, but using them is more restrictive than writing the exact code you need.
Three.js itself doesn't, but https://mugen87.github.io/yuka/ is a great AI engine that can work in collaboration with three.js to add AI.
It covers line-of-sight and shooting-game logic, as well as vehicle logic, which I've been playing around with recently; there's a React Three Fiber example here: https://codesandbox.io/s/loving-tdd-u1fs9o
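A rough sketch of how Yuka and three.js typically pair up, based on Yuka's documented Vehicle/SeekBehavior pattern (check the Yuka docs for the exact current API; the scene, renderer, and camera here are assumptions):

import * as THREE from 'three';
import * as YUKA from 'yuka';

const entityManager = new YUKA.EntityManager();
const time = new YUKA.Time();

// The "enemy": Yuka drives the logic, three.js only renders it.
const enemyMesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
enemyMesh.matrixAutoUpdate = false; // Yuka writes the matrix each frame
scene.add(enemyMesh);

const enemy = new YUKA.Vehicle();
enemy.setRenderComponent(enemyMesh, (entity, renderComponent) => {
  renderComponent.matrix.copy(entity.worldMatrix); // sync Yuka -> three.js
});

// Steer toward a target position (e.g. the player).
const target = new YUKA.Vector3(10, 0, 5);
enemy.steering.add(new YUKA.SeekBehavior(target));
entityManager.add(enemy);

function animate() {
  requestAnimationFrame(animate);
  entityManager.update(time.update().getDelta()); // advance the AI
  renderer.render(scene, camera);
}
animate();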

Particles vs ParticleSystem in three.js

I'm struggling with a visualization I'm working on that involves a stream of repeated images. I have it working with a single sprite with a ParticleSystem, but I can only apply a single material to the system. Since I want to choose between textures I tried creating a pool of Particle objects so that I could choose the materials individually, but I can't get an individual Particle to show up with the WebGL renderer.
This is my first foray into WebGL/Three.js, so I'm probably doing something bone-headed, but I thought it would be worth asking what the proper way to go about this is. I'm seeing three possibilities:
I'm using Particle wrong (initializing with a mapped material, adding to the scene, setting position) and I need to fix what I'm doing.
I need a ParticleSystem for each sprite I want to display.
What I'm doing doesn't fit into particles at all and I really should be using another object type.
All the examples I see using the canvas renderer use Particle directly, but I can't find an example using the WebGL renderer that doesn't use ParticleSystem. Any hints?
OK, I am going from what I have read elsewhere on this GitHub issues page. You should start by reading it. It seems that Particle is simply for the CanvasRenderer, and it will become Sprite in a future edition of three.js. ParticleSystem, however, is not going to fulfill your needs either, it seems. I don't think these classes are going to help you accomplish this in WebGL in 3D. Depending on what you are doing, you might be better off with the CanvasRenderer anyway. ParticleSystem will only allow you to apply a single material, which serves as the material for every particle in the system, as you suggested.
Short answer:
You can render THREE.Particle using THREE.CanvasRenderer only.
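For what it's worth, in current three.js the Particle and ParticleSystem classes are gone; the direct descendant of Particle is Sprite, and each Sprite takes its own SpriteMaterial, which is exactly the per-image texture choice asked about. A minimal sketch (the texture paths and the existing scene are placeholders):

import * as THREE from 'three';

const loader = new THREE.TextureLoader();
const textures = [
  loader.load('image-a.png'), // placeholder paths
  loader.load('image-b.png'),
];

// One Sprite per image in the stream; each gets its own material/texture,
// unlike the old ParticleSystem, which shared one material for all particles.
for (let i = 0; i < 20; i++) {
  const material = new THREE.SpriteMaterial({ map: textures[i % textures.length] });
  const sprite = new THREE.Sprite(material);
  sprite.position.set(i * 1.5, 0, 0);
  scene.add(sprite);
}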
