Multiple GLTF loading and Merging on server side - three.js

We are trying to merge multiple glTFs on the server side and export the merged glTF as the final result.
Things we have tried that worked and that didn't:
We have used the three.js GLTFLoader parse method to load multiple glTF files, merged the parents and children of the loaded objects, and exported the final model using GLTFExporter.
We have used jsdom to resolve the issues related to window, document etc.
The above works for glTFs without textures.
When loading a glTF with textures, the response gets stuck.
a) We tried hosting the files on the server and using the GLTFLoader load method with "localhost:******" as the loader path.
b) Internally, TextureLoader invokes ImageLoader, where the onLoad event was never triggered; perhaps jsdom was not firing it.
c) To work around this, we changed ImageLoader to use a canvas Image:
const { Image } = require("canvas");
const image = new Image();
image.onload = onImageLoad;
image.onerror = onImageError;
d) The load method resolved after the above change.
e) Next step, exporting the glTF: we got stuck on an "ImageData not found" error. We added ImageData from canvas and the glTF was exported (see the setup sketch below).
f) The exported glTF is not viewable due to corrupted data in its images:
"images": [
{
"mimeType": "image/png",
"uri": "data:,"
}
],
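For reference, here is a rough sketch of the jsdom/canvas globals we set up for steps (b) and (e); details are simplified and may differ from the actual project:
// Provide browser-like globals so GLTFLoader/GLTFExporter can run under Node.
const { JSDOM } = require("jsdom");
const { ImageData } = require("canvas");

const dom = new JSDOM("<!DOCTYPE html>");
global.window = dom.window;
global.document = dom.window.document;
global.ImageData = ImageData; // fixes the "ImageData not found" export error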
If anyone has loaded and merged glTFs with texture images purely server-side, please help!

As three.js is primarily a 3D rendering library for the web and relies on various web image and WebGL APIs, I'm not sure THREE.GLTFLoader is the most efficient way to merge glTF files on a Node.js server. I'd suggest this instead:
import { Document, NodeIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS } from '@gltf-transform/extensions';

const io = new NodeIO().registerExtensions(KHRONOS_EXTENSIONS);
const document = new Document();
const root = document.getRoot();

// Merge all files.
for (const path of filePaths) {
    document.merge(io.read(path));
}

// (Optional) Consolidate buffers.
const buffer = root.listBuffers()[0];
root.listAccessors().forEach((a) => a.setBuffer(buffer));
root.listBuffers().forEach((b, index) => (index > 0 ? b.dispose() : null));

io.write('./output.glb', document);
It's worth noting that this process will result in a glTF file containing multiple separate scenes. If you want to combine them into a single scene, arranged in some way, you'd need to use the scene API to do that. If the files are not on disk, there are other NodeIO APIs for processing binary or JSON files in memory.
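For example, a minimal sketch of combining the merged scenes into one with the scene API (untested; assumes a recent @gltf-transform/core release and that the inputs can share one coordinate space):
// Move every node into the first scene, then drop the now-empty scenes.
const scenes = root.listScenes();
const mainScene = scenes[0];
for (const scene of scenes.slice(1)) {
    for (const node of scene.listChildren()) {
        mainScene.addChild(node);
    }
    scene.dispose();
}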

Related

Collider made of geometry does not match mesh data (position, scale, etc.)

I wanted to load a spatial model using GLTFLoader and then create a collider from its geometry data.
However, I'm getting a different result than I expected and would like some help.
[problem image]
You can see that the collider is larger than the visible Mesh data.
Most of our spatial data shows this problem.
The example code used is as follows (TSX in React):
function Space(props) {
    const url = props.data?.spaceFileName ? `https://space-test.vrin.co.kr/files/${props.data.spaceFileName}` : null;
    const model = useGLTF(url);
    const clone = model.scene;

    // Build a collider from a mesh's geometry.
    const getCollide = (child) => {
        console.log(child);
        const geometry = new Geometry().fromBufferGeometry(child.geometry);
        const vertices = geometry.vertices.map((v) => new THREE.Vector3().copy(v));
        const faces = geometry.faces.map((f) => [f.a, f.b, f.c]);
        const normals = geometry.faces.map((f) => new THREE.Vector3().copy(f.normal));
        return <Object key={child.uuid} clone={clone} args={[vertices, faces, normals]} model={model} />;
    };

    // Recurse through groups; build a collider for each mesh.
    const traverseObject = (child) => child.children.map((c) => {
        if (c.type === 'Object3D' || c.type === 'Group') {
            return traverseObject(c);
        }
        if (c.type === 'Mesh') {
            return getCollide(c);
        }
    });

    const objects = traverseObject(model.scene);
    return <group dispose={null}>{objects}</group>;
}
Some spatial data generate collisions properly, but some spatial data do not generate collisions properly.
[sample image: the scene as I made it in Blender]
Looking at the file in Blender, it seems the top-level geometry group is not being loaded properly.
[Blender outliner screenshot]
If you look at the outliner in the image above, there is a group called "Sketchfab" at the top.
If I delete that group layer,
[Blender outliner screenshot, group removed]
the result looks very similar to the shape of the collider shown in the first picture.
I don’t know if I wrote the recursive function incorrectly or if there is an internal problem with the module.
Does anyone know the solution?
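One thing worth checking (a guess, not confirmed in this thread): the collider is built from each mesh's local geometry, so transforms on parent groups such as the Sketchfab root are never applied. A rough sketch of baking the world transform into the geometry before extracting vertices:
const getCollide = (child) => {
    // Include parent group transforms (scale/rotation/position) in the
    // geometry used for the collider.
    child.updateWorldMatrix(true, false);
    const geometry = child.geometry.clone();
    geometry.applyMatrix4(child.matrixWorld);
    // ...extract vertices, faces, and normals from `geometry` as before...
};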

THREE.WebGLProgram Shader Error because of defined v1 constant

I am using three.js and I am loading an HDR as an environment map for the scene. Upon loading, I receive a shader error from THREE.WebGLProgram:
[error message and flagged shader line not preserved]
I am assuming that the already defined v1 constant causes an issue in that line because it is not undefined.
I am loading the HDR map like this:
return new Promise((resolve, reject) => {
    new RGBELoader()
        .setDataType(THREE.HalfFloatType)
        .load(
            path, // <-- hdr file
            (texture) => {
                // I tried fromEquirectangular and fromCubemap
                // const envMap = this.pmremGenerator.fromEquirectangular(texture).texture;
                const envMap = this.pmremGenerator.fromCubemap(texture).texture;
                texture.needsUpdate = true;
                resolve({ envMap });
            },
            undefined,
            reject
        );
});
Does anyone know what is causing this issue in THREE?
const envMap = this.pmremGenerator.fromCubemap(texture).texture;
I doubt this method call is correct. RGBELoader cannot load textures in the cube map format. You probably want to use fromEquirectangular(). Before using this method, you need this line in your onLoad() callback:
texture.mapping = THREE.EquirectangularReflectionMapping;
Besides, please check whether the usage of PMREMGenerator is actually necessary in your app. In recent releases, three.js internally uses PMREMGenerator to prepare environment maps for use with PBR materials.
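A minimal sketch of that simpler path (assumes a recent three.js release; scene is your THREE.Scene):
new RGBELoader()
    .setDataType(THREE.HalfFloatType)
    .load(path, (texture) => {
        texture.mapping = THREE.EquirectangularReflectionMapping;
        scene.environment = texture; // PBR materials pick this up automatically
    });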

The function "getRenderProxy" to get a ThreeJS Mesh from an object's fragId in Forge Viewer doesn't work correctly

I'm trying to get a three.js Mesh from Autodesk Forge objects using the function
'viewer.impl.getRenderProxy(viewer.model, fragId)'.
The problem that I encounter is that if I put this function in a loop routine to get the Meshes of multiple objects, I get just a random Mesh.
To find out the problem's origin, I used a similar function that is :
'viewer.impl.getFragmentProxy(viewer.model, fragId)'
and it worked just fine.
Here is the routine code that I use, and the result:
for (let i = 0, len = nodNamee.length; i < len; i = i + 3) {
    var instanceTree = viewer.model.getData().instanceTree;
    var fragIds = [];
    instanceTree.enumNodeFragments(nodNamee[i + 1], function (fragId) {
        fragIds.push(fragId);
    });
    fragIds.forEach(function (fragId) {
        var renderProxy = viewer.impl.getRenderProxy(viewer.model, fragId);
        fragtoMesh.push(renderProxy);
        // var fragmentproxy = viewer.impl.getFragmentProxy(viewer.model, fragId);
        // fragtoProxy.push(fragmentproxy);
    });
}
Result:
[screenshot: array of fragtoMesh]
This is because the getRenderProxy method always returns the same instance of THREE.Mesh, just with different properties. Basically, the method works like this under the hood:
let cachedMesh = new THREE.Mesh();
// ...
getRenderProxy(model, fragId) {
    // Find the geometry, material, and other properties of the fragment
    cachedMesh.geometry = fragGeometry;
    cachedMesh.material = fragMaterial;
    // ...
    return cachedMesh;
}
// ...
Note that this is a performance optimization: if the getRenderProxy function (which is only meant for internal use) returned a new instance every time it is called by other parts of Forge Viewer, it would cause huge memory churn.
So in your case, if you really need to store all the THREE.Mesh instances in an array, you'll need to clone them or copy their individual properties into separate THREE.Mesh objects.
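For example, a rough sketch of copying each proxy into an independent mesh (based on common Forge Viewer usage; verify the property names against your viewer version):
fragIds.forEach(function (fragId) {
    var renderProxy = viewer.impl.getRenderProxy(viewer.model, fragId);
    // Copy the shared proxy's data into a fresh THREE.Mesh before storing it.
    var mesh = new THREE.Mesh(renderProxy.geometry, renderProxy.material);
    mesh.matrix.copy(renderProxy.matrixWorld);
    mesh.matrixAutoUpdate = false;
    fragtoMesh.push(mesh);
});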

THREE AnimationMixer.clipAction() throws cannot parse trackname at all

I am trying to load some simple keyframe animation (just positions) using the JSON loader.
Using the dev branch r80.
I use it to load the entire scene (made and animated in Softimage, exported to FBX, imported into Blender, and exported using the three.js JSON export script). The file looks good and loads OK.
But when I try to load the animation using:
object.traverse(function (child) {
    switch (child.name) {
        case "dae_scene:helium_balloon_model:helium_balloon_model":
            // do anim
            console.log(object.animations[0]);
            animationClips.balloon1 = object.animations[0];
            // animationClips.balloon1.weight = 1;
            animationMixer = new THREE.AnimationMixer(child);
            var sceneAnimation = animationMixer.clipAction(animationClips.balloon1);
            sceneAnimation.play();
            break;
    }
});
it produces:
three.min.js?ver=4.5.4:712 Uncaught Error: cannot parse trackName at all: dae_scene:helium_balloon_model:helium_balloon_model.position
Can anyone point me in the right direction?
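One workaround worth trying (a guess, not confirmed in this thread): the colons in the Softimage/Collada-style names appear to break PropertyBinding's track-name parser, so renaming the node and its tracks before creating the mixer may help:
// Strip the "dae_scene:...:" prefixes so track names become parseable,
// e.g. "helium_balloon_model.position".
child.name = child.name.split(':').pop();
object.animations[0].tracks.forEach(function (track) {
    var parts = track.name.split('.');
    var property = parts.pop(); // e.g. "position"
    var nodeName = parts.join('.').split(':').pop();
    track.name = nodeName + '.' + property;
});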

React renderToString() Performance and Caching React Components

I've noticed that the reactDOM.renderToString() method starts to slow down significantly when rendering a large component tree on the server.
Background
A bit of background. The system is a fully isomorphic stack. The highest-level App component renders templates, pages, DOM elements, and more components. Looking in the React code, I found it renders ~1500 components (this count includes any simple DOM tag that gets treated as a component, e.g. <p>this is a react component</p>).
In development, rendering ~1500 components takes ~200-300ms. By removing some components I was able to get ~1200 components to render in ~175-225ms.
In production, renderToString on ~1500 components takes around ~50-200ms.
The time does appear to be linear. No one component is slow, rather it is the sum of many.
Problem
This creates some problems on the server. The lengthy method results in long server response times. The TTFB is a lot higher than it should be. With API calls and business logic, the response should be 250ms, but with a 250ms renderToString it is doubled! Bad for SEO and users. Also, being a synchronous method, renderToString() can block the Node server and back up subsequent requests (this could be solved by using 2 separate Node servers: 1 as a web server, and 1 as a service solely to render React).
Attempts
Ideally, it would take 5-50ms to renderToString in production. I've been working on some ideas, but I'm not exactly sure what the best approach would be.
Idea 1: Caching components
Any component that is marked as 'static' could be cached. By keeping a cache of the rendered markup, renderToString() could check the cache before rendering. If it finds a component, it automatically grabs the string. Doing this for a high-level component would save mounting all of its nested children. You would have to replace the cached component markup's React rootID with the current rootID.
Idea 2: Marking components as simple/dumb
By defining a component as 'simple', react should be able to skip all the lifecycle methods when rendering. React already does this for the core react dom components (<p/>, <h1/>, etc). Would be nice to extend custom components to use the same optimization.
Idea 3: Skip components on server-side render
Components that do not need to be returned by the server (no SEO value) could simply be skipped on the server. Once the client loads, set a clientLoaded flag to true and pass it down to enforce a re-render.
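For instance, a minimal sketch of Idea 3 (the ClientOnly name is illustrative):
class ClientOnly extends React.Component {
    constructor(props) {
        super(props);
        this.state = { clientLoaded: false };
    }
    componentDidMount() {
        // Never runs during renderToString, so this only flips to true in
        // the browser, forcing a client-side re-render of the children.
        this.setState({ clientLoaded: true });
    }
    render() {
        return this.state.clientLoaded ? <div>{this.props.children}</div> : null;
    }
}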
Closing and other attempts
The only solution I've implemented thus far is to reduce the number of components that are rendered on the server.
Some projects we're looking at include:
React-dom-stream (still working on implementing this for a test)
Babel inline elements (seems like this is along the lines of Idea 2)
Has anybody faced similar issues? What have you been able to do?
Thanks.
Using react-router 1.0 and React 0.14, we were mistakenly serializing our flux object multiple times.
RoutingContext will call createElement for every template in your react-router routes. This allows you to inject whatever props you want. We also use flux, and we send down a serialized version of a large object. In our case, we were doing flux.serialize() within createElement. The serialization method could take ~20ms. With 4 templates, that would add an extra 80ms to your renderToString() call!
Old code:
function createElement(Component, props) {
    props = _.extend(props, {
        flux: flux,
        path: path,
        serializedFlux: flux.serialize()
    });
    return <Component {...props} />;
}

var start = Date.now();
markup = renderToString(<RoutingContext {...renderProps} createElement={createElement} />);
console.log(Date.now() - start);
Easily optimized to this:
var serializedFlux = flux.serialize(); // serialize one time only!

function createElement(Component, props) {
    props = _.extend(props, {
        flux: flux,
        path: path,
        serializedFlux: serializedFlux
    });
    return <Component {...props} />;
}

var start = Date.now();
markup = renderToString(<RoutingContext {...renderProps} createElement={createElement} />);
console.log(Date.now() - start);
In my case this helped reduce the renderToString() time from ~120ms to ~30ms. (You still need to add the one-time serialize()'s ~20ms to the total, which happens before renderToString().) It was a nice, quick improvement. It's important to remember to always do things correctly, even if you don't know the immediate impact!
Idea 1: Caching components
Update 1: I've added a complete working example at the bottom. It caches components in memory and updates data-reactid.
This can actually be done easily. You should monkey-patch ReactCompositeComponent and check for a cached version:
import ReactCompositeComponent from 'react/lib/ReactCompositeComponent';

const originalMountComponent = ReactCompositeComponent.Mixin.mountComponent;
ReactCompositeComponent.Mixin.mountComponent = function () {
    if (hasCachedVersion(this)) return cache;
    return originalMountComponent.apply(this, arguments);
};
You should do this before you require('react') anywhere in your app.
Webpack note: if you use something like new webpack.ProvidePlugin({'React': 'react'}), you should change it to new webpack.ProvidePlugin({'React': 'react-override'}), where react-override.js contains your modifications and re-exports React (i.e. module.exports = require('react')).
A complete example that caches in memory and updates the reactid attribute could look like this:
import ReactCompositeComponent from 'react/lib/ReactCompositeComponent';
import jsan from 'jsan';
import Logo from './logo.svg';

const cachable = [Logo];
const cache = {};

// Split markup into parts around each reactid attribute value.
function splitMarkup(markup) {
    var markupParts = [];
    var reactIdPos = -1;
    var endPos, startPos = 0;
    while ((reactIdPos = markup.indexOf('reactid="', reactIdPos + 1)) != -1) {
        endPos = reactIdPos + 9;
        markupParts.push(markup.substring(startPos, endPos));
        startPos = markup.indexOf('"', endPos);
    }
    markupParts.push(markup.substring(startPos));
    return markupParts;
}

// Rejoin cached markup parts with fresh reactid values for this render.
function refreshMarkup(markup, hostContainerInfo) {
    var refreshedMarkup = '';
    var reactid;
    var reactIdSlotCount = markup.length - 1;
    for (var i = 0; i <= reactIdSlotCount; i++) {
        reactid = i != reactIdSlotCount ? hostContainerInfo._idCounter++ : '';
        refreshedMarkup += markup[i] + reactid;
    }
    return refreshedMarkup;
}

const originalMountComponent = ReactCompositeComponent.Mixin.mountComponent;
ReactCompositeComponent.Mixin.mountComponent = function (renderedElement, hostParent, hostContainerInfo, transaction, context) {
    var el = this._currentElement;
    var elType = el.type;
    var markup;
    if (cachable.indexOf(elType) > -1) {
        var publicProps = el.props;
        var id = elType.name + ':' + jsan.stringify(publicProps);
        markup = cache[id];
        if (markup) {
            return refreshMarkup(markup, hostContainerInfo);
        } else {
            markup = originalMountComponent.apply(this, arguments);
            cache[id] = splitMarkup(markup);
        }
    } else {
        markup = originalMountComponent.apply(this, arguments);
    }
    return markup;
};

module.exports = require('react');
It's not a complete solution
I had the same issue with my React isomorphic app, and I used a couple of approaches:
Use Nginx in front of your Node.js server, and cache the rendered response for a short time (the same idea is sketched at the Node layer below).
In the case of showing a list of items, I use only a subset of the list. For example, I render only X items to fill up the viewport, and load the rest of the list on the client side using WebSocket or XHR.
Some of my components are empty in server-side rendering and only load from client-side code (componentDidMount).
These components are usually graphs or profile-related components, which usually have no benefit from an SEO point of view.
About SEO: from my experience of 6 months with an isomorphic app, Googlebot can read a client-side React web page easily, so I'm not sure why we bother with server-side rendering.
Keep the <Head> and <Footer> as static strings or use a template engine (Reactjs-handlebars), and render only the content of the page (it should save a few rendered components). In the case of a single-page app, you can update the title and description on each navigation inside Router.Run.
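The Nginx point above caches whole responses at the proxy layer; the same idea can be sketched at the Node layer as a hypothetical short-TTL Express middleware (names are illustrative, and this is only safe for anonymous GET pages):
var cache = {};
var TTL = 10 * 1000; // cache rendered HTML for 10 seconds

app.use(function (req, res, next) {
    var hit = cache[req.url];
    if (hit && Date.now() - hit.time < TTL) return res.send(hit.html);
    // Wrap res.send so the freshly rendered HTML is stored before sending.
    var send = res.send.bind(res);
    res.send = function (html) {
        cache[req.url] = { html: html, time: Date.now() };
        send(html);
    };
    next();
});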
I think fast-react-render can help you. It increases the performance of your server rendering three times.
To try it, you only need to install the package and replace ReactDOM.renderToString with ReactRender.elementToString:
var ReactRender = require('fast-react-render');
var element = React.createElement(Component, {property: 'value'});
console.log(ReactRender.elementToString(element, {context: {}}));
Also, you can use fast-react-server; in that case, rendering will be 14 times as fast as traditional React rendering. But each component you want to render must be declared with it (see the fast-react-seed example for how to do it with webpack).
