I build a complex BufferGeometry from scratch and I am looking for a way to selectively hide or show sections of it. (I don't want to have to save the positions elsewhere, so moving vertices around is not an option.)
After reading the code a bit, I wondered if I could use BufferGeometry.groups, except that I don't want/need a different material per group. Maybe assigning different materials could work, and I could set the opacity to 0 for the parts I don't want? However, there will be many thousands of groups, so I doubt that would be effective.
I started a JSFiddle with a boiled-down example: http://jsfiddle.net/callum/x7y72k1e/
Is there a way to do it efficiently?
(As I'm typing this, I wonder if I just store the positions in a mirror attribute array and use that).
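For what it's worth, a minimal sketch of the groups idea, assuming an indexed geometry whose sections are contiguous index ranges (the offsets and counts here are made up). Only two materials are ever needed, one visible and one with visible: false, no matter how many groups there are, since the renderer skips groups whose material is invisible. The caveat is that each visible group is still its own draw call, so many thousands of groups hurt for that reason rather than because of the materials:

geometry.clearGroups();
geometry.addGroup(0, 3000, 0);    // section A -> material index 0 (shown)
geometry.addGroup(3000, 3000, 1); // section B -> material index 1 (hidden)

const shown = new THREE.MeshStandardMaterial();
const hidden = new THREE.MeshBasicMaterial({ visible: false }); // skipped at draw time

const mesh = new THREE.Mesh(geometry, [shown, hidden]);
// toggling a section = flipping its group's materialIndex; no vertex data changes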
Related
In the following codesandbox example:
there are different geometries merged together thanks to the drei library (2 Boxes and 1 Sphere).
I would like to:
- have a lot of instances of this new merged geometry with good performance. I think the solution would be to instance the merged geometry, but I do not know how to do it;
- still be able to control each part's properties (for example, make one cube bigger than the other in a particular instance).
Could you please help me?
A solution would be to:
Create an InstancedMesh for each part of the geometry
Assemble the parts by offsetting each part's geometry position
You can find this solution in two CodeSandboxes:
one using the drei library, which gets laggy at around 10k elements
one using InstancedMesh from R3F, which works well with 100k elements.
I do not know if the performance can be improved further, but it seems to work pretty well like this.
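For reference, a minimal plain-three.js sketch of the same idea (one InstancedMesh per part, with a shared base position per instance plus a per-part offset); scene is assumed to be your existing scene and the placements here are random:

import * as THREE from 'three';

const COUNT = 100000;
const material = new THREE.MeshNormalMaterial();

// one InstancedMesh per part of the assembled shape (2 boxes + 1 sphere)
const boxA = new THREE.InstancedMesh(new THREE.BoxGeometry(1, 1, 1), material, COUNT);
const boxB = new THREE.InstancedMesh(new THREE.BoxGeometry(1, 1, 1), material, COUNT);
const sphere = new THREE.InstancedMesh(new THREE.SphereGeometry(0.5, 16, 16), material, COUNT);

const m = new THREE.Matrix4();
for (let i = 0; i < COUNT; i++) {
  const x = Math.random() * 500, z = Math.random() * 500;
  // same base position for every part, plus a per-part local offset, so each
  // part of each instance stays individually controllable (scale, position, ...)
  boxA.setMatrixAt(i, m.makeTranslation(x, 0, z));
  boxB.setMatrixAt(i, m.makeTranslation(x, 1.5, z));
  sphere.setMatrixAt(i, m.makeTranslation(x, 3, z));
}
boxA.instanceMatrix.needsUpdate = true;
boxB.instanceMatrix.needsUpdate = true;
sphere.instanceMatrix.needsUpdate = true;

scene.add(boxA, boxB, sphere);

To make one cube bigger in a particular instance, compose that instance's matrix with a different scale (e.g. via Matrix4.compose) and setMatrixAt it on that part's mesh only.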
I have InstancedBufferGeometry working in my scene. However, some of the instances are mirrors of the source, so they have a negative scale to represent the mirrored geometry.
This flips the winding order of those instances, and they look wrong due to back-face culling (which I want to keep).
I'm fully aware of the limitations within this approach, but I was wondering if there was a way to tackle this that I may have not come across yet? Maybe some trick in the shader to specify which ones are front face and which are back face? I don't recall this being possible though...
Or should I be doing two separate loads? (Which will duplicate the draw calls)
I'm loading a lot of different geometries (which are all instanced) so trying to make sure I get the best performance possible.
Thanks!
Ant
[EDIT: Added a little more detail]
It would help if you provided an example. As far as I can understand your question, the simple answer is: no, you can't do that.
As far as I'm aware, back-facing primitives are rejected before they reach the fragment shader, meaning that it's not in your control. If you want to use negative scaling and make sure that surfaces are still visible, enable rendering of both faces (material.side = THREE.DoubleSide).
Alternatively, you might be okay with simply rotating objects and sticking to positive scale; but if you have to have mirroring, you're out of luck there.
One more idea: have two instanced objects, one with the normal geometry and one with the mirrored geometry; you can fix up the normals in the mirrored geometry.
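A sketch of that last idea, using InstancedMesh for brevity and assuming an indexed source geometry plus a material and instance counts from your own code: bake the mirror into a second geometry, flip the triangle winding so back-face culling keeps working, and give its instances positive scales.

// mirrored copy: bake a negative X scale into the vertex data
const mirrored = geometry.clone();
mirrored.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));

// the negative determinant reverses the winding, so swap two indices per triangle
const idx = mirrored.index.array;
for (let i = 0; i < idx.length; i += 3) {
  const tmp = idx[i + 1];
  idx[i + 1] = idx[i + 2];
  idx[i + 2] = tmp;
}
mirrored.index.needsUpdate = true;
mirrored.computeVertexNormals(); // fix up the normals for the mirrored surface

// two instanced draws instead of one; mirrored instances now use positive scales
const normalMesh = new THREE.InstancedMesh(geometry, material, normalCount);
const mirrorMesh = new THREE.InstancedMesh(mirrored, material, mirrorCount);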
I am placing icons with a fixed diameter/radius on a line using d3.scaleTime. This works well except for the case in which dates are close to one another, leading to an unwanted overlap.
In that specific case, I would want the icons to "relax" and not touch.
My code is rather complex, including animations etc., so I drew the problem here:
These are my attempts:
I looked at d3-force for collision prevention, but I was not quite sure how to merge such an approach with an existing time scale. Could this be helpful? http://jsbin.com/gist/fee5ce57c3fc3e94c3332577d1415df4 However, the icons may then no longer align on a straight horizontal line, which is a disadvantage, because I do not want them to spread vertically.
I also thought about calculating overlaps and then manually adjusting the data so that the overlap does not occur. That, however, seems a bit more complex, because I would have to somehow recursively find the best position for every icon.
Could interpolation help me? I thought there must be something like "snap to grid", but then two icons could snap to the same position, couldn't they?
Which d3 concept makes most sense to solve this problem?
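For what it's worth, a minimal sketch of the d3-force idea from the first attempt, with the vertical spread avoided by clamping y on every tick (data, xScale, radius and y0 are assumed names for your data array, time scale, icon radius and the line's y coordinate):

// pull each icon toward its true time position, while keeping icons apart
const nodes = data.map(d => ({ ...d, x: xScale(d.date), y: y0 }));

const simulation = d3.forceSimulation(nodes)
  .force('x', d3.forceX(d => xScale(d.date)).strength(1))
  .force('collide', d3.forceCollide(radius + 1))
  .stop();

// run synchronously and clamp y so icons stay on the horizontal line
for (let i = 0; i < 120; i++) {
  simulation.tick();
  nodes.forEach(n => { n.y = y0; n.vy = 0; });
}

// nodes[i].x is now the relaxed position; use it instead of xScale(d.date)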
I have written code to render a scene containing lights that work like projectors. But there are a lot of them (more than can be represented with a fixed number of lights in a single shader). So right now I render this by creating a custom shader and rendering the scene once for each light. On each pass, the fragment shader determines the contribution of that light, and the blender adds that contribution into the backbuffer.

I found this really awkward to set up in three.js. I couldn't find a way of doing multiple passes like this where different materials and different geometry are required for the different passes, so I had to do it with multiple scenes. The problem there is that I can't have an Object3D that is in multiple scenes (please correct me if I'm wrong), so I need to create duplicates of the objects, one for each scene it is in. This all starts looking really hacky quickly. It's all so special that it seems to be incompatible with various three.js framework features such as VR rendering.

Each light requires shadowing, but I don't have memory for a shadow buffer per light, so the code alternates between rendering the shadow buffer for a light and the accumulation pass for that light, then the shadow buffer for the next light, then the accumulation for the next light, and so on.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working, each time forgoing yet another three.js framework feature that doesn't work properly in conjunction with my multi-pass technique. But it doesn't seem like what I'm doing is so out of the ordinary.
My main surprise is that I can't figure out a way to set up a multi-pass scene that does this back-and-forth rendering and accumulating. And my second surprise is that the Object3Ds I create don't like being added to multiple scenes at the same time, so I have to create duplicates of each object for each scene it wants to be in, in order to keep their states from interfering with each other.
So is there a way of rendering this kind of multi-pass accumulative scene in a better way? Again, I would describe it as a scene with more than the maximum number of lights allowed in a single shader pass, so their contributions need to be alternately rendered (shadow buffers) and then additively accumulated over multiple passes. The lights work like typical movie projectors that project an image (as opposed to being a uniform solid-color light source).
How can I do multi-pass rendering like this and still take advantage of good framework stuff like stereo rendering for VR and automatic shadow buffer creation?
Here's a simplified snippet that demonstrates the scenes that are involved:
// base pass: environment lighting, rendered to the default framebuffer
renderer.render(this.environment.scene, camera, null);

// one shadow pass plus one additive accumulation pass per projector
for (let i = 0, ii = this.projectors.length; i < ii; ++i) {
  let projector = this.projectors[i];
  renderer.setClearColor(0x000000, 1);
  renderer.clearTarget(this.shadowRenderTarget, true, true, false);
  // render this projector's depth scene into the shared shadow target...
  renderer.render(projector.object3D.depthScene, projector.object3D.depthCamera, this.shadowRenderTarget);
  // ...then accumulate its light contribution into the backbuffer
  renderer.render(projector.object3D.scene, camera);
}

// final pass: overlays and UI
renderer.render(this.foreground.scene, camera, null);
There is a scene that renders lighting from the environment (done with normal lighting), then there is a scene per projector that computes the shadow map for that projector and adds in its light contribution, and then there is a "foreground" scene with overlays and UI stuff in it.
Is there a more "three.js" way?
Unfortunately, I think the answer is no.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working,
and welcome to the world of three.js development :)
scene graph
You cannot have a node belong to multiple parents. I believe three also does not allow you to do this:

const myPos = new THREE.Vector3()
myMesh_layer0.position = myPos // .position is read-only, so the assignment fails
myMesh_layer1.position = myPos // you would have to use .position.copy(myPos) instead

It won't work with Eulers, quaternions or a matrix either.
Managing the matrix updates in multiple scenes would be tricky as well.
the three.js way
There is no way around the "hack upon hack" unless you start hacking the core.
Notice that it's 2018, but the official way of including three.js in your web app is still through <script> tags.
This is a great example of where it would probably be a better idea not to do things the three.js way but the modern JavaScript way, i.e. use imports, npm installs, etc.
Three.js also does not have a robust core that allows you to be flexible with the code around it. It's quite obfuscated and conflated, with a limited number of hooks exposed that would allow you to write effects such as the one you want.
Three is often conflated with its examples: if you pick a random one, it will be written in a three.js way, but far from today's best JavaScript/coding practices.
You'll often find large monolithic files, that would benefit from being broken up.
I believe it's still impossible to import the examples as modules.
Look at the material extensions examples and consider if you would want to apply that pattern in your project.
You can probably encounter more pain points, but this is enough to illustrate that the three.js way may not always be desirable.
remedies
Are few and far between. I've spent more than a year trying to push for the onBeforeRender and onAfterRender hooks. They seem useful and allowed for some refactors, but another feature had to be nuked first.
The other feature was iterated on over the course of that year and only ever addressed a single example, until it was made obvious that onBeforeRender would cover both that example and allow for much more.
This, unfortunately, also seems to be the three.js way. Since the base is so big and includes so many examples, it's more likely that someone will try to optimize a single example than try to find a common pattern for refactoring a whole bunch of them.
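For context, a tiny sketch (with assumed names) of what those hooks enable: per-object state changes right before a draw, without forking the renderer.

// onBeforeRender fires per object, per render call
mesh.onBeforeRender = (renderer, scene, camera, geometry, material) => {
  material.uniforms.lightIndex.value = currentLightIndex; // e.g. a per-pass uniform
};
mesh.onAfterRender = () => { /* restore any swapped state here */ };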
You could go and file an issue on GitHub, but it would be very hard to argue for something as generic as this. You'd most likely have to write some code as a proposal.
This can become taxing quite quickly, because it can be rejected or ignored, or you could be asked to provide examples or refactor existing ones.
You mentioned your hacks failing to work with various three.js features, like VR. This, I think, is a problem with three: VR has been the focus of development for at least the past couple of years, but without the core issues ever being addressed.
The good news is that three is more modular than it has ever been, so you can fork the project and tweak the parts you need. The issues with three may then move to a higher level: if you find some coupling in the renderer, for example, that makes it hard to keep your fork in sync, that is easier to explain than the whole goal of your particular project.
What parameters, modes, tricks, etc. can be applied to get sharp text?
I'm going to draw a lot of it, so I can't use 3D text.
I'm using a canvas to write the text and some symbols. I'm creating something like label information.
Thanks
This is no simple matter, since you'll run into memory issues with 100k "font textures". Since you want 100k text elements, you'll have several difficulties to manage. I had a similar problem once and tossed together a few techniques in order to make it work. Simply put, you need some sort of LOD ("Level of Detail") to make that work. That setup might look like the following:
A THREE.ParticleSystem (THREE.Points in current three.js) built with BufferGeometry, where every position is one text position
One "highres" TextureAtlas with 256 images on it, which you allocate dynamically with the images that are around you (4096px × 4096px with 256×256px images)
At least one "lowres" TextureAtlas with 16×16px images. You prepare that one beforehand. Same size as the previous one, but it holds a preview image of every text, each 16×16px in size.
A kd-tree data structure, used with a nearest-neighbour algorithm to figure out which positions are near the camera (like http://threejs.org/examples/#webgl_nearestneighbour)
A sub-imaging module to continually replace highres textures directly on the GPU: https://github.com/mrdoob/three.js/pull/4661
An index for every position, telling it which slot on the TextureAtlas it should use for display
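To make that last item concrete, here is a minimal sketch of a point cloud where each point samples its own tile from a 16×16 atlas (positions, atlasIndices and atlasTexture are assumed to already exist):

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('atlasIndex', new THREE.BufferAttribute(atlasIndices, 1));

const material = new THREE.ShaderMaterial({
  uniforms: { atlas: { value: atlasTexture } },
  vertexShader: `
    attribute float atlasIndex;
    varying vec2 vOffset;
    void main() {
      // tile column/row in the 16x16 atlas, expressed as a UV offset
      vOffset = vec2(mod(atlasIndex, 16.0), floor(atlasIndex / 16.0)) / 16.0;
      gl_PointSize = 32.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D atlas;
    varying vec2 vOffset;
    void main() {
      // gl_PointCoord spans the point sprite; squeeze it into this point's tile
      gl_FragColor = texture2D(atlas, vOffset + gl_PointCoord / 16.0);
    }`
});

const points = new THREE.Points(geometry, material);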
You see where I'm going. Here's some docs on my experiences:
The Stack Overflow post: Display many thousand images in three.js
The blog where I began to explain what I was doing: http://blogs.fhnw.ch/threejs/
This way it will take quite some time until you have satisfying results. The only way to make this simpler is to get rid of the 16×16px preview images, but I wouldn't recommend that... Or, of course, something depending on your setup. Maybe you have levels? Towns? Or any other structure where it would make sense to only display a portion of these texts? That might be worth a thought before tackling the big thing.
If you plan to really work on this and make this happen the way I described I can help you with some already existing code and further explanations. Just tell me where you're heading :)