I want to make BokehPass work with transparent objects. I see that I could replace materialDepth with my own material, but I do not see how I would pass the proper texture there. Is it even possible? Is it possible with an engine hack?
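Here is a rough sketch of the replacement I have in mind, assuming the usual EffectComposer setup (scene and camera are defined elsewhere; the depth-packing choice mirrors what BokehPass uses internally, as far as I can tell):

import * as THREE from 'three';
import { BokehPass } from 'three/examples/jsm/postprocessing/BokehPass.js';

const bokehPass = new BokehPass(scene, camera, { focus: 10.0, aperture: 0.025, maxblur: 0.01 });

// BokehPass builds its depth texture by rendering the scene with this override
// material; my idea is to swap in my own material here, but I don't see how to
// feed it the proper texture so that transparent objects contribute depth.
bokehPass.materialDepth = new THREE.MeshDepthMaterial({ depthPacking: THREE.RGBADepthPacking });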
Open3D's easy draw_geometries utility makes it possible to copy & paste camera parameters to restore a certain viewpoint after it has been changed. It seems like this functionality should also be available when using the SceneWidget with its Open3DScene high-level scene. However, I have not figured out a way to mimic this behavior.
Copying and pasting a viewpoint from draw_geometries into Notepad reveals this information:
boundingbox_max, boundingbox_min, field_of_view, front, lookat, up, zoom
In order to get the same effect using the SceneWidget, I would have to somehow obtain this information from the scene's camera, keep a copy, and then load it later when it is needed. Nevertheless, I cannot access the above properties explicitly through the camera object, nor have I found a way to set them (assuming I already have them).
The next "obvious" solution would be the Camera class's copy_from method, which sounds great, except that I am unable to instantiate the Camera class in order to use it.
How can I achieve this save & restore viewpoint effect?
Thanks in advance
PIXI.js has Container#cacheAsBitmap, which causes the container to "render" itself to an image, save that, and render the image instead of its children; when a child is added, removed, or updated, the cache is refreshed.
What's the alternative for Three.js (but instead of an image it would be a mesh)?
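For context, the PIXI pattern I mean looks roughly like this (sprite setup omitted):

const container = new PIXI.Container();
container.addChild(sprite1);
container.addChild(sprite2);
container.cacheAsBitmap = true; // children are rendered once to an internal texture and reused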
I may not be understanding your question properly, but your reply to Sabee's answer was helpful. It sounds like you're looking to either merge multiple geometries into a single mesh or implement a form of model instancing, with the goal of reducing draw calls.
There is more than one way to accomplish this, depending on your requirements. You can merge multiple geometries into a single geometry object, and provide either one material or an array of materials (where each index corresponds to one of the merged geometries). You can also use GPU-accelerated instancing to achieve a similar effect with only a single copy of the geometry.
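For example, here is a minimal sketch of the instancing approach (the geometry, material, and count are placeholders), assuming a three.js version that provides THREE.InstancedMesh:

import * as THREE from 'three';

// One geometry and one material shared by every instance: a single draw call.
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: 0x888888 });
const count = 1000;
const instances = new THREE.InstancedMesh(geometry, material, count);

// Each instance gets its own transform matrix.
const dummy = new THREE.Object3D();
for (let i = 0; i < count; i++) {
  dummy.position.set(Math.random() * 10, 0, Math.random() * 10);
  dummy.updateMatrix();
  instances.setMatrixAt(i, dummy.matrix);
}
instances.instanceMatrix.needsUpdate = true;
scene.add(instances);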
I'll refer you to Dusan Bosnjak's excellent Medium series on instancing, which starts here: https://medium.com/@pailhead011/instancing-with-three-js-36b4b62bc127
As well, here are the three.js examples regarding instancing: https://threejs.org/examples/?q=instanc#webgl_buffergeometry_instancing_dynamic
Pixi.js is a 2D JavaScript library that uses WebGL to render images (frames) into an HTML5 canvas. Three.js allows the creation of GPU-accelerated 3D animations using WebGL.
The browser cannot store rendered 3D frames; that work is left to the GPU-accelerated render cache, which depends on the hardware it runs on. There is a helpful post for understanding what's going on behind the scenes.
But you can cache your assets in the browser, like images, JSON objects of 3D models, etc.
In Three.js, the Cache class is a global object used by asset loaders (TextureLoader, ImageLoader, AudioLoader, ...). It is disabled by default (false). To enable it you can set THREE.Cache.enabled = true;
I think the browser should cache the textures by default for performance reasons, but if you want to be sure, simply force the cache on in your Three.js code. Also, the creator of Three.js has answered this question.
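For example, a minimal sketch of forcing the cache on (the texture URL is just a placeholder):

THREE.Cache.enabled = true; // must be set before loading starts

const loader = new THREE.TextureLoader();
// The first load fetches the file over the network and stores it in THREE.Cache...
loader.load('textures/brick.jpg', (texture) => { /* use the texture */ });
// ...loading the same URL again is then served from the in-memory cache.
loader.load('textures/brick.jpg', (texture) => { /* use the texture */ });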
I am doing a bit of work with three.js, and I am wondering whether it is possible to name rotations or joints.
So it seems possible to write code like:
arm.rotateZ( 180 ).name="ARM_ANGLE";
But then how does one subsequently access and set the same rotation?
I know it is possible to do this in x3d, so I was thinking it would be possible in three.js as well. In x3d, one can define a reference as:
<Transform DEF="ArmAngle" rotation="0 0 1 3.19">
And then later define a route to reference it like:
<ROUTE fromNode='spinarm' fromField='value_changed' toNode='ArmAngle' toField='set_rotation'></ROUTE>
What you are describing sounds like animation keys or transform key frames.
You can define these in a modeller like Blender and export them, or generate them programmatically.
But generally, what you are describing from x3d would have to be a layer built on top of three.js if you really want that style of interface. Honestly, though, it's pretty straightforward to use the scene-graph style of manipulation, i.e. finding an object and setting its position and rotation, OR defining an animation in a modeller and then calling that animation. The advantage of using animations is that you can then blend between them.
You CAN name Objects in three... so for instance you could name your arm and then find it using scene.getObjectByName("arm"). getObjectByName is a method of all Object3Ds.
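For example, a small sketch of that pattern (the names here are my own):

// Name the object once when building the scene...
arm.name = 'ARM_ANGLE';
scene.add(arm);

// ...then look it up later and set its rotation directly.
const found = scene.getObjectByName('ARM_ANGLE');
found.rotation.z = Math.PI; // three.js angles are radians, so this is 180 degrees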
I'm using the same technique explained in this example.
I defined a JSON file that contains all objects, geometries and animations. However, I couldn't find a way to specify a change of an object's parent in the animation. Also, looking at the documentation of the Three.js library, I don't see any ObjectKeyframeTrack, which would be a nice way to set an object's parent.
My question is about doing this in the JSON file format, not through the API.
I know there is a similar question (Change Object parent in Three.js?) but it is not what I am looking for.
In any case, other approaches are welcome as well.
This was solved in the forum, but I'll crosspost the answer here: the three.js animation system does not provide a way to serialize changes in the scene graph as part of a JSON file. JSON keyframe animation can affect properties of objects like position, rotation, scale, and color. To change the parent of an object, you would need to use custom JS at the appropriate time in the animation playback.
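As a hypothetical sketch of that custom JS (the 2.0-second trigger time and the object names are invented for illustration):

const mixer = new THREE.AnimationMixer(root);
const action = mixer.clipAction(clip);
action.play();

let reparented = false;
function update(deltaSeconds) {
  mixer.update(deltaSeconds);
  // Once playback passes the chosen time, move the child under its new parent.
  if (!reparented && action.time >= 2.0) {
    newParent.attach(child); // attach() keeps the child's world transform intact
    reparented = true;
  }
}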
I want to add a simple black box (like this effect) on a texture (ID3D11ShaderResourceView). Is there a simple way to do it in DX11? I don't want to write a shader to do it.
Well, what you're trying to do is actually "initializing a texture programmatically". From the D3D point of view, textures are nothing more than pieces of memory with a clearly defined layout. Normally, you create a texture resource, read data from a texture file (like *.BMP, for example), put the data in the texture, and then feed it to the pipeline for sampling.
In your case, though, you need some additional steps:
1. Create the texture resource using either the D3D11_USAGE_DEFAULT or D3D11_USAGE_DYNAMIC usage, so you can update it from the CPU
2. Read the color map into your texture
3. Depending on the chosen usage, either add your data to the initial data, or Map/Unmap and add your data there (by "your data" I mean overwriting each edge of the image with black color)
This can also be done to "generate" textures, for example a checkerboard pattern or clouds.
All the information you need can be found here.