I've stumbled upon a problem where some browsers and devices render MeshStandardMaterial reflections poorly.
Consider the two comparison examples below:
Both comparisons were run simultaneously on the same computer, with the same graphics card and identical attributes, but in different browsers. As you can see, the reflections on the right are barely recognizable.
Additionally, I'm getting some triangulation issues at sharp angles that make it seem as if the reflection is being calculated in the vertex shader:
I understand that different browsers have different WebGL capabilities, as the results on http://webglreport.com/ illustrate:
Does anybody know which WebGL extension or feature the IE/Edge browsers are missing that I can check for? I want to add a sniffer that falls back to a different material if the browser doesn't meet the necessary requirements. Or, if anybody has a full solution, that would be even better. I've already tried playing with the envMap's minFilter attribute, but the reflections are still calculated differently.
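What I have in mind for the sniffer is roughly the sketch below; the extension names are placeholders until the real missing capability is identified.
// Sketch of a capability sniffer: fall back to a cheaper material when the
// required extensions are absent. The extension list is a placeholder.
function pickMaterial(renderer, envMap) {
  const gl = renderer.getContext();
  const required = ['OES_texture_float_linear', 'EXT_shader_texture_lod'];
  const supported = required.every(function(ext) { return gl.getExtension(ext) !== null; });
  return supported
    ? new THREE.MeshStandardMaterial({ envMap: envMap })
    : new THREE.MeshPhongMaterial({ envMap: envMap }); // simpler reflection model
}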
I don't know which extensions are needed, but you can easily test. Before you init THREE.js, put in some code like this:
const extensionsToDisable = [
  "OES_texture_float",
  "OES_texture_float_linear",
];
WebGLRenderingContext.prototype.getExtension = function(oldFn) {
  return function(extensionName) {
    if (extensionsToDisable.indexOf(extensionName) >= 0) {
      return null;
    }
    return oldFn.call(this, extensionName);
  };
}(WebGLRenderingContext.prototype.getExtension);
WebGLRenderingContext.prototype.getSupportedExtensions = function(oldFn) {
  return function() {
    const extensions = oldFn.call(this);
    return extensions.filter(e => extensionsToDisable.indexOf(e) < 0);
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);
Then just selectively disable extensions until Firefox/Chrome look the same as IE/Edge.
The first thing I'd test is disabling every extension that exists in Chrome/Firefox but not in IE/Edge, just to verify that turning them all off reproduces the IE/Edge behavior.
If it does reproduce the issue, I'd then do a binary search (turn half of the disabled extensions back on) and repeat until I found the required ones.
const extensionsToDisable = [
  "EXT_blend_minmax",
  "EXT_disjoint_timer_query",
  "EXT_shader_texture_lod",
  "EXT_sRGB",
  "OES_vertex_array_object",
  "WEBGL_compressed_texture_s3tc_srgb",
  "WEBGL_debug_shaders",
  "WEBKIT_WEBGL_depth_texture",
  "WEBGL_draw_buffers",
  "WEBGL_lose_context",
  "WEBKIT_WEBGL_lose_context",
];
WebGLRenderingContext.prototype.getExtension = function(oldFn) {
  return function(extensionName) {
    if (extensionsToDisable.indexOf(extensionName) >= 0) {
      return null;
    }
    return oldFn.call(this, extensionName);
  };
}(WebGLRenderingContext.prototype.getExtension);
WebGLRenderingContext.prototype.getSupportedExtensions = function(oldFn) {
  return function() {
    const extensions = oldFn.call(this);
    return extensions.filter(e => extensionsToDisable.indexOf(e) < 0);
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);
const gl = document.createElement("canvas").getContext("webgl");
console.log(gl.getSupportedExtensions().join('\n'));
console.log("WEBGL_draw_buffers:", gl.getExtension("WEBGL_draw_buffers"));
I want to load a spatial model using GLTFLoader and then create a collider from its geometry data. However, I'm getting a different result than I expected, so I'm looking for help.
Here is a sample image of the problem:
You can see that the collider is larger than the visible Mesh data.
Most of my spatial data exhibits this problem.
The example code is as follows (TSX in React):
// Imports assumed (the original snippet omitted them). `Geometry` is the
// legacy three.js Geometry class, now shipped separately, e.g. in three-stdlib.
import * as THREE from 'three';
import { useGLTF } from '@react-three/drei';
import { Geometry } from 'three-stdlib';

function Space(props) {
  const url = props.data?.spaceFileName ? `https://space-test.vrin.co.kr/files/${props.data.spaceFileName}` : null;
  const model = useGLTF(url);
  const clone = model.scene;
  const getCollide = (child) => {
    console.log(child);
    // Convert the BufferGeometry to the legacy Geometry to read vertices/faces.
    const geometry = new Geometry().fromBufferGeometry(child.geometry);
    const vertices = geometry.vertices.map((v) => new THREE.Vector3().copy(v));
    const faces = geometry.faces.map((f) => [f.a, f.b, f.c]);
    const normals = geometry.faces.map((f) => new THREE.Vector3().copy(f.normal));
    return <Object key={child.uuid} clone={clone} args={[vertices, faces, normals]} model={model} />;
  };
  const traverseObject = (child) => child.children.map((c) => {
    if (c.type === 'Object3D' || c.type === 'Group') {
      return traverseObject(c);
    }
    if (c.type === 'Mesh') {
      return getCollide(c);
    }
  });
  const objects = traverseObject(model.scene);
  return <group dispose={null}>{objects}</group>;
}
Some spatial data generates colliders properly, but some does not.
Here is a sample of what I made in Blender:
After inspecting the model in Blender, I suspect the top-level geometry layer is not being loaded properly.
[Blender screenshot]
If you look at the collection in the image above, there is a group called Sketchfab at the top. If I delete that group layer,
[Blender screenshot with the group removed]
the remaining shape is very similar to the collider shown in the first picture.
I don’t know if I wrote the recursive function incorrectly or if there is an internal problem with the module.
Does anyone know the solution?
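One thing I still plan to rule out (a sketch, on the assumption that the wrapper group carries its own transform) is baking each mesh's world matrix into its geometry before building the collider, so that group-level scale and rotation are accounted for:
// Bake world transforms into a copy of the geometry before extracting
// vertices/faces for the collider.
clone.updateMatrixWorld(true);
clone.traverse((child) => {
  if (child.isMesh) {
    const worldGeometry = child.geometry.clone();
    worldGeometry.applyMatrix4(child.matrixWorld); // apply parent group transforms
    // ...build the collider from worldGeometry instead of child.geometry
  }
});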
For the p5.js rendering engine: if I use WEBGL vs. P2D in the setup() function, how can I tell later in my code which rendering mode I'm in? I have written generic functions that work across the 2D and 3D modes, and I want the code to execute differently based on the rendering mode.
There are probably more straightforward and elegant ways of doing it, but in a pinch you can read the drawingContext of the renderer in use and check whether it's an instance of WebGLRenderingContext or CanvasRenderingContext2D:
const webglSketch = p => {
  p.setup = () => {
    p.createCanvas(100, 100, p.WEBGL)
    p.background('red')
    console.log('WEBGL?', p._renderer.drawingContext instanceof WebGLRenderingContext)
    console.log('2D?', p._renderer.drawingContext instanceof CanvasRenderingContext2D)
  }
}

const twoDSketch = p => {
  p.setup = () => {
    p.createCanvas(100, 100)
    p.background('blue')
    console.log('WEBGL?', p._renderer.drawingContext instanceof WebGLRenderingContext)
    console.log('2D?', p._renderer.drawingContext instanceof CanvasRenderingContext2D)
  }
}

new p5(webglSketch)
new p5(twoDSketch)
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.0.0/p5.min.js"></script>
If you're not using the instance mode, just check the _renderer global object.
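For example, in global mode the check could look like this (a sketch; _renderer is an internal, undocumented property that p5 happens to expose globally):
function setup() {
  createCanvas(100, 100, WEBGL);
  // drawingContext reveals which renderer backs the canvas.
  console.log('WEBGL?', _renderer.drawingContext instanceof WebGLRenderingContext);
}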
I've noticed that the ReactDOM.renderToString() method starts to slow down significantly when rendering a large component tree on the server.
Background
A bit of background. The system is a fully isomorphic stack. The highest-level App component renders templates, pages, DOM elements, and more components. Looking in the React code, I found it renders ~1500 components (this is inclusive of any simple DOM tag that gets treated as a component, e.g. <p>this is a react component</p>).
In development, rendering ~1500 components takes ~200-300ms. By removing some components I was able to get ~1200 components to render in ~175-225ms.
In production, renderToString on ~1500 components takes around ~50-200ms.
The time does appear to be linear. No one component is slow, rather it is the sum of many.
Problem
This creates some problems on the server. The lengthy method results in long server response times. The TTFB is a lot higher than it should be. With API calls and business logic the response should be 250ms, but with a 250ms renderToString it is doubled! Bad for SEO and users. Also, being a synchronous method, renderToString() can block the Node server and back up subsequent requests (this could be solved by using 2 separate Node servers: 1 as a web server, and 1 as a service solely to render React).
Attempts
Ideally, it would take 5-50ms to renderToString in production. I've been working on some ideas, but I'm not exactly sure what the best approach would be.
Idea 1: Caching components
Any component that is marked as 'static' could be cached. By keeping a cache of the rendered markup, renderToString() could check the cache before rendering; if it finds a component, it grabs the string automatically. Doing this at a high-level component would save mounting all of its nested children. You would have to replace the cached markup's React rootID with the current rootID.
Idea 2: Marking components as simple/dumb
By defining a component as 'simple', React should be able to skip all the lifecycle methods when rendering. React already does this for the core DOM components (<p/>, <h1/>, etc.). It would be nice to extend custom components to use the same optimization.
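For illustration, the closest thing available today is a stateless functional component (introduced in React 0.14): it is a pure function of its props with no instance and no lifecycle methods, although React does not yet give it a faster rendering path. The component name below is illustrative.
// A 'simple' component in the sense of Idea 2: no state, no lifecycle.
const Price = (props) => <span>{props.amount} {props.currency}</span>;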
Idea 3: Skip components on server-side render
Components that do not need to be returned by the server (no SEO value) could simply be skipped on the server. Once the client loads, set a clientLoaded flag to true and pass it down to enforce a re-render.
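A minimal sketch of this idea (Chart stands in for any client-only component):
// Render a placeholder on the server; mount the real component client-side.
class ChartContainer extends React.Component {
  constructor(props) {
    super(props);
    this.state = { clientLoaded: false };
  }
  componentDidMount() {
    // componentDidMount never runs during renderToString, so this only
    // flips to true in the browser, forcing a client-side re-render.
    this.setState({ clientLoaded: true });
  }
  render() {
    return this.state.clientLoaded ? <Chart {...this.props} /> : <div />;
  }
}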
Closing and other attempts
The only solution I've implemented thus far is to reduce the number of components that are rendered on the server.
Some projects we're looking at include:
React-dom-stream (still working on implementing this for a test)
Babel inline elements (seems like this is along the lines of Idea 2)
Has anybody faced similar issues? What have you been able to do?
Thanks.
Using react-router 1.0 and React 0.14, we were mistakenly serializing our flux object multiple times.
RoutingContext will call createElement for every template in your react-router routes, which allows you to inject whatever props you want. We also use flux, and we send down a serialized version of a large object. In our case, we were calling flux.serialize() within createElement. The serialization method could take ~20ms; with 4 templates, that adds an extra 80ms to your renderToString() call!
Old code:
function createElement(Component, props) {
  props = _.extend(props, {
    flux: flux,
    path: path,
    serializedFlux: flux.serialize() // serializes again on every template!
  });
  return <Component {...props} />;
}
var start = Date.now();
markup = renderToString(<RoutingContext {...renderProps} createElement={createElement} />);
console.log(Date.now() - start);
Easily optimized to this:
var serializedFlux = flux.serialize(); // serialize one time only!
function createElement(Component, props) {
  props = _.extend(props, {
    flux: flux,
    path: path,
    serializedFlux: serializedFlux
  });
  return <Component {...props} />;
}
var start = Date.now();
markup = renderToString(<RoutingContext {...renderProps} createElement={createElement} />);
console.log(Date.now() - start);
In my case this helped reduce the renderToString() time from ~120ms to ~30ms. (You still need to add the one-time serialize()'s ~20ms to the total, but that happens before renderToString().) It was a nice, quick improvement. It's important to remember to always do things correctly, even if you don't know the immediate impact!
Idea 1: Caching components
Update 1: I've added a complete working example at the bottom. It caches components in memory and updates data-reactid.
This can actually be done easily. You should monkey-patch ReactCompositeComponent and check for a cached version:
import ReactCompositeComponent from 'react/lib/ReactCompositeComponent';

const originalMountComponent = ReactCompositeComponent.Mixin.mountComponent;
ReactCompositeComponent.Mixin.mountComponent = function() {
  // hasCachedVersion/getCachedMarkup are placeholders; see the complete
  // example below for a working implementation.
  if (hasCachedVersion(this)) return getCachedMarkup(this);
  return originalMountComponent.apply(this, arguments);
};
You should do this before you require('react') anywhere in your app.
Webpack note: If you use something like new webpack.ProvidePlugin({'React': 'react'}) you should change it to new webpack.ProvidePlugin({'React': 'react-override'}) where you do your modifications in react-override.js and export react (i.e. module.exports = require('react'))
A complete example that caches in memory and updates reactid attribute could be this:
import ReactCompositeComponent from 'react/lib/ReactCompositeComponent';
import jsan from 'jsan';
import Logo from './logo.svg';

const cachable = [Logo];
const cache = {};

// Split the markup at every reactid value so the ids can be regenerated later.
function splitMarkup(markup) {
  var markupParts = [];
  var reactIdPos = -1;
  var endPos, startPos = 0;
  while ((reactIdPos = markup.indexOf('reactid="', reactIdPos + 1)) != -1) {
    endPos = reactIdPos + 9;
    markupParts.push(markup.substring(startPos, endPos));
    startPos = markup.indexOf('"', endPos);
  }
  markupParts.push(markup.substring(startPos));
  return markupParts;
}

// Rejoin the split markup, assigning fresh reactids from the current render.
function refreshMarkup(markupParts, hostContainerInfo) {
  var refreshedMarkup = '';
  var reactid;
  var reactIdSlotCount = markupParts.length - 1;
  for (var i = 0; i <= reactIdSlotCount; i++) {
    reactid = i != reactIdSlotCount ? hostContainerInfo._idCounter++ : '';
    refreshedMarkup += markupParts[i] + reactid;
  }
  return refreshedMarkup;
}

const originalMountComponent = ReactCompositeComponent.Mixin.mountComponent;
ReactCompositeComponent.Mixin.mountComponent = function (renderedElement, hostParent, hostContainerInfo, transaction, context) {
  var el = this._currentElement;
  var elType = el.type;
  var markup;
  if (cachable.indexOf(elType) > -1) {
    var publicProps = el.props;
    // Cache key: component name plus serialized props.
    var id = elType.name + ':' + jsan.stringify(publicProps);
    markup = cache[id];
    if (markup) {
      return refreshMarkup(markup, hostContainerInfo);
    } else {
      markup = originalMountComponent.apply(this, arguments);
      cache[id] = splitMarkup(markup);
    }
  } else {
    markup = originalMountComponent.apply(this, arguments);
  }
  return markup;
};

module.exports = require('react');
module.exports = require('react');
It's not a complete solution, but I had the same issue with my React isomorphic app, and I used a couple of things:
Use Nginx in front of your Node.js server, and cache the rendered response for a short time.
In case of showing a list of items, I use only a subset of the list. For example, I render only enough items to fill the viewport and load the rest on the client side using WebSocket or XHR (see the sketch after this list).
Some of my components are empty in server-side rendering and only load from client-side code (componentDidMount). These are usually graphs or profile-related components, which usually don't have any benefit from an SEO point of view anyway.
About SEO: in my experience of six months with an isomorphic app, Googlebot can read a client-side-rendered React page easily, so I'm not sure why we bother with server-side rendering at all.
Keep the <Head> and <Footer> as static strings or use a template engine (Reactjs-handlebars), and render only the content of the page; it should save a few rendered components. In the case of a single-page app, you can update the title and description on each navigation inside Router.Run.
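Here is a sketch of the partial-list approach mentioned above (PAGE_SIZE and the item shape are illustrative):
// The server renders only the first screenful of items; once mounted,
// the client flips clientLoaded and renders the full list.
var PAGE_SIZE = 20;
function ItemList(props) {
  var items = props.clientLoaded ? props.items : props.items.slice(0, PAGE_SIZE);
  return (
    <ul>
      {items.map(function(item) {
        return <li key={item.id}>{item.name}</li>;
      })}
    </ul>
  );
}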
I think fast-react-render can help you. It can speed up your server rendering by a factor of three.
To try it, you only need to install the package and replace ReactDOM.renderToString with FastReactRender.elementToString:
var ReactRender = require('fast-react-render');
var element = React.createElement(Component, {property: 'value'});
console.log(ReactRender.elementToString(element, {context: {}}));
You can also use fast-react-server, in which case rendering will be up to 14 times as fast as traditional React rendering. But for that, each component you want to render must be declared with it (see the example in fast-react-seed for how you can do it with webpack).
I have developed a watchapp with Pebble.js that fetches a remote file containing an integer, and emits that many "short" Vibe events.
The trouble is: Vibe events do not happen if one is already in progress. I have resorted to something like this to try to spread them out (where BUMP_COUNT_INT is the number of vibes to emit):
for (var i = 0; i < BUMP_COUNT_INT; i++) {
  setTimeout(function() {
    Vibe.vibrate('short');
  }, 900 * i);
}
However, even the 900ms spacing isn't consistent. There is sometimes more or less space between vibes, and they sometimes merge (causing fewer vibes than expected).
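One workaround I've been considering (a sketch; it still depends on timer accuracy, but avoids queuing every timer up front) is chaining the timeouts so each vibe schedules the next:
// Chain timeouts instead of scheduling them all at once, so drift in
// one timer can't make later vibes overlap and merge.
function vibrateN(count) {
  if (count <= 0) return;
  Vibe.vibrate('short');
  setTimeout(function() {
    vibrateN(count - 1);
  }, 900);
}
vibrateN(BUMP_COUNT_INT);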
It appears that the C SDK is capable of custom sequences.
I was hoping someone had come across a cleaner workaround, or a more stable way to pull this off using Pebble.js ... ?
Should I just accept that I'll have to spread the Vibes out even further, if I want to continue with Pebble.js?
What would you do?
Custom patterns are not available in Pebble.js, but you could easily add a new 'type' of vibe in Pebble.js and implement it as a custom pattern on the C side of Pebble.js.
The steps would be:
Clone the Pebble.js project on GitHub to get a local copy. You will need to download and install the Pebble SDK to compile it locally on your computer (this will not work on CloudPebble).
Declare a new type of vibe command in src/js/ui/simply-pebble.js (the Pebble.js JavaScript library):
var vibeTypes = [
  'short',
  'long',
  'double',
  'custom'
];
var VibeType = makeArrayType(vibeTypes);
Create a new type of Vibe in src/simply/simply_msg.c
enum VibeType {
  VibeShort = 0,
  VibeLong = 1,
  VibeDouble = 2,
  VibeCustom = 3,
};
And then extend the Vibe command handler to support this new type of vibe in src/simply/simply_msg.c
static void handle_vibe_packet(Simply *simply, Packet *data) {
  VibePacket *packet = (VibePacket*) data;
  switch (packet->type) {
    case VibeShort: vibes_short_pulse(); break;
    case VibeLong: vibes_long_pulse(); break;
    case VibeDouble: vibes_double_pulse(); break;
    case VibeCustom: {
      // Durations alternate between vibe and pause, in milliseconds.
      static const uint32_t segments[] = { 200, 100, 400 };
      VibePattern pat = {
        .durations = segments,
        .num_segments = ARRAY_LENGTH(segments),
      };
      vibes_enqueue_custom_pattern(pat);
      break;
    }
  }
}
An even better solution would be to suggest a patch so that any custom pattern could be designed on the JavaScript side and sent to the watch.
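For example, the JavaScript side of such a patch might accept the pattern directly (a purely hypothetical API, not part of Pebble.js today):
// Hypothetical: pass the millisecond pattern from JS to the C handler above.
Vibe.vibrate('custom', { durations: [200, 100, 400] });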
I'm trying to display a model created by SnappyTree/Proctree ( http://www.snappytree.com/ ).
Proctree is designed to work with GLGE, but I'm nearly there in getting the library-generated data into Three.js. Basically, I construct a custom JSON object, add the Proctree data to it, and use JSONLoader to generate the final geometry.
What I suspect is happening: the vertices (point cloud) are imported correctly, but the faces array refers to the wrong vertices or is otherwise interpreted incorrectly.
var tree = new Tree(json); // Proctree
// window.console.log(tree);
var model = {
  "metadata": {
    "formatVersion": 3.1,
    "generatedBy": "bb3d2proctree",
    "vertices": 0,
    "faces": 0,
    "description": "Autogenerated from proctree."
  },
  "materials": [{ // just testing...
    "diffuse": 20000
  }],
  "colors": [0xff00ff, 0xff0000] // just testing
};
model.vertices = Tree.flattenArray(tree.verts);
model.normals = Tree.flattenArray(tree.normals);
model.uvs = [Tree.flattenArray(tree.UV)];
model.faces = Tree.flattenArray(tree.faces);

var loader = new THREE.JSONLoader();
loader.createModel(model, function(geometry, materials) {
  // cut out for brevity... see jsfiddle
});
It's almost working (I haven't gotten to materials yet). The tree looks roughly correct, but the faces are a bit messed up. I'm sure there is some simple format difference, and it should be possible to modify the faces array so it works correctly with Three.js.
JSFiddle here: http://jsfiddle.net/nrZuS/
How could I import the data correctly into Three.js?
This is how it should look like: http://www.snappytree.com/#seed=861&segments=10&levels=5&vMultiplier=0.66&twigScale=0.47&initalBranchLength=0.5&lengthFalloffFactor=0.85&lengthFalloffPower=0.99&clumpMax=0.449&clumpMin=0.404&branchFactor=2.75&dropAmount=0.07&growAmount=-0.005&sweepAmount=0.01&maxRadius=0.269&climbRate=0.626&trunkKink=0.108&treeSteps=4&taperRate=0.876&radiusFalloffRate=0.66&twistRate=2.7&trunkLength=1.55&trunkMaterial=TrunkType2&twigMaterial=BranchType5 (except my code so far only tries to import the trunk without twigs, and without textures, so at this point I'm only worried about the trunk shape as can be seen in the jsfiddle)
Never mind, got it to work.
Instead of:
model.faces = Tree.flattenArray(tree.faces);
I do:
model.faces = [];
for (var i = 0; i < tree.faces.length; i++) {
  var face = tree.faces[i];
  // The three.js JSON format (3.x) expects each face to start with a type
  // bitmask; 0 means a plain triangle with vertex indices only.
  model.faces.push(0);
  model.faces.push(face[0]); // v1
  model.faces.push(face[1]); // v2
  model.faces.push(face[2]); // v3
}
Updated jsFiddle here: http://jsfiddle.net/KY7eq/