programmatically detect rendering mode in p5.js?

For the p5.js rendering engine, if I pass WEBGL vs. P2D in the setup() function, how can I tell later in my code which rendering mode I am in? I have written generic functions that work across 2D and 3D modes, and I want the code to execute in different ways based on the rendering mode.

There are probably more straightforward and elegant ways of doing it, but in a pinch you can read the drawingContext of the renderer in use and check whether it's an instance of WebGLRenderingContext or CanvasRenderingContext2D:
const webglSketch = p => {
  p.setup = () => {
    p.createCanvas(100, 100, p.WEBGL)
    p.background('red')
    console.log('WEBGL?', p._renderer.drawingContext instanceof WebGLRenderingContext)
    console.log('2D?', p._renderer.drawingContext instanceof CanvasRenderingContext2D)
  }
}

const twoDSketch = p => {
  p.setup = () => {
    p.createCanvas(100, 100)
    p.background('blue')
    console.log('WEBGL?', p._renderer.drawingContext instanceof WebGLRenderingContext)
    console.log('2D?', p._renderer.drawingContext instanceof CanvasRenderingContext2D)
  }
}

new p5(webglSketch)
new p5(twoDSketch)

<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.0.0/p5.min.js"></script>
If you're not using instance mode, just check the global _renderer object.
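In global mode, p5.js also exposes the renderer's raw context as the drawingContext global, so the same instanceof check works without touching _renderer. A minimal sketch (the isWebGL helper is my own name; the WebGL2RenderingContext check is included because some builds create a WebGL2 context):

function isWebGL() {
  // drawingContext is the underlying canvas context p5.js draws into
  return (typeof WebGLRenderingContext !== 'undefined' &&
          drawingContext instanceof WebGLRenderingContext) ||
         (typeof WebGL2RenderingContext !== 'undefined' &&
          drawingContext instanceof WebGL2RenderingContext);
}

function setup() {
  createCanvas(100, 100, WEBGL);
}

function draw() {
  background(220);
  if (isWebGL()) {
    // WEBGL mode puts the origin at the canvas centre, so shift it back
    translate(-width / 2, -height / 2);
  }
  circle(width / 2, height / 2, 50);
}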

Related

Collider made of geometry does not match mesh data(position, scale, ..etc)

I want to load a spatial model using GLTFLoader and then create a collider from its geometry data, but I'm getting a different result than I expected.
[problem image]
As the sample image shows, the collider is larger than the visible mesh data. Most of my spatial data has this problem.
The example code I used is as follows (TSX in React):
function Space(props) {
  const url = props.data?.spaceFileName ? `https://space-test.vrin.co.kr/files/${props.data.spaceFileName}` : null;
  const model = useGLTF(url);
  const clone = model.scene;

  const getCollide = (child) => {
    console.log(child);
    const geometry = new Geometry().fromBufferGeometry(child.geometry);
    const vertices = geometry.vertices.map((v) => new THREE.Vector3().copy(v));
    const faces = geometry.faces.map((f) => [f.a, f.b, f.c]);
    const normals = geometry.faces.map((f) => new THREE.Vector3().copy(f.normal));
    return <Object key={child.uuid} clone={clone} args={[vertices, faces, normals]} model={model} />;
  };

  const traverseObject = (child) => child.children.map((c) => {
    if (c.type === 'Object3D' || c.type === 'Group') {
      return traverseObject(c);
    }
    if (c.type === 'Mesh') {
      return getCollide(c);
    }
  });

  const objects = traverseObject(model.scene);
  return <group dispose={null}>{objects}</group>;
}
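For reference, three.js's built-in Object3D.traverse() visits an object and all of its descendants regardless of how deeply groups are nested, which is what the hand-written recursion above attempts. A minimal sketch in plain three.js (independent of the React helper components used above):

// Collect every mesh in the glTF scene, however deeply it is nested
const meshes = [];
model.scene.traverse((child) => {
  if (child.isMesh) meshes.push(child);
});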
Some spatial data generates colliders properly, but some does not.
[sample image: a scene I made in Blender]
After inspecting the files in Blender, it looks like the geometry layer at the top of the hierarchy is not being loaded properly.
[Blender screenshot]
If you look at the hierarchy in the image above, there is a group called Sketchfab at the top.
If I delete that group layer,
[Blender screenshot]
the result looks very similar to the shape of the collider shown in the first picture.
I don’t know if I wrote the recursive function incorrectly or if there is an internal problem with the module.
Does anyone know the solution?

Can loadImage() be used with a JavaScript Promise?

I'm writing an application that draws particles on screen in a chemical reaction.
I have written a Particle class and want to handle the async nature of loading images in P5.js in this class.
My thinking was that if I wrap the loadImage() function in a Promise, I should be able to load all of my particle sprites async and then have the draw code execute as soon as the P5 Image object resolves.
I had this code working fine using just the callback functionality of loadImage(), but I ultimately have to hand off the image object to a physics library to model particle motion, as well as to other constructors, so a Promise pattern seemed to be the right solution.
class Particle {
  constructor(name) {
    // Set properties on the Particle
    this.imageUrl = "THE_URL";
    this.imageObject = null;
  }

  loadParticleImage() {
    console.log("attempting to load image: " + this.imageUrl)
    return new Promise(function (resolve, reject) {
      loadImage(this.imageUrl, (result) => {
        // Sets the image object property to the result for other methods to access
        this.imageObject = result;
        // Resolves the Promise with the result
        resolve(result);
      }, reject(Error("Image not found")));
    })
  }

  drawParticle() {
    console.log("attempting to draw particle");
    this.loadParticleImage().then(function (imageObject) {
      // code to draw image and handoff to physics library
    }, function (error) {
      console.log("imageObject not defined", error);
    });
  }
}
And in the setup() function of the main sketch file, I would initialize a particle and draw it using something like:
theParticle = new Particle("Water");
theParticle.drawParticle();
I get an error and stack trace saying that the image could not be loaded, and I can't quite figure out why:
attempting to draw particle
particle.js: attempting to load image: img/svg/Water.svg
particle.js: imageObject not defined Error: Image not found
at particle.js
at new Promise (<anonymous>)
at Particle.loadParticleImage (particle.js)
at Particle.drawParticle (particle.js)
at <anonymous>:1:18
I can spot two mistakes in your code:
First, you are always immediately calling reject(), and passing its return value to loadImage. You probably wanted to pass a callback to loadImage that will reject the promise:
loadImage(this.imageUrl, (result) => {
  …
}, () => {
// ^^^^^
  reject(new Error("Image not found"));
});
Second, the this keyword in your callbacks is not the one you expect. Use an arrow function for the promise executor callback as well, so that this refers to the Particle instance that loadParticleImage was called on:
return new Promise((resolve, reject) => {
//                 ^^
  loadImage(this.imageUrl, result => {
    this.imageObject = result;
    …
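Putting both fixes together, the corrected method might look like this (a sketch; the rest of the class is unchanged):

loadParticleImage() {
  console.log("attempting to load image: " + this.imageUrl);
  // Arrow functions throughout, so `this` stays bound to the Particle instance
  return new Promise((resolve, reject) => {
    loadImage(this.imageUrl, (result) => {
      this.imageObject = result;
      resolve(result);
    }, () => {
      // Passed as a callback, so it only runs if the load actually fails
      reject(new Error("Image not found"));
    });
  });
}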

Global _navigation prop access - v1.x

I'm using react-navigation v1.x. I'd like to have global access to the navigation prop of a navigator. My hope is that if I do globalNavigation.addListener() or goBack() etc. with no arguments, it behaves as if I did that from the currently focused screen. I also want to be able to pass it an argument, so that it behaves as if I called it from a certain "key". I use this global from various places (like redux middleware etc.).
It would also be very useful to have "getCurrentRouteName" and "getCurrentRouteKey". In pre-v1.x I had done custom stuff in routers, and I was hoping to avoid that now.
I tried holding a ref to the navigator and using ref._navigation for things like navigate, goBack, etc. I also want to use it with addListener.
Here is how I get and hold the ref:
export const AppNavigatorUtils = {};

class AppContent extends Component<Props> {
  constructor(props: Props) {
    super(props);
    AppNavigatorUtils.getNavigation = this.getNavigation;
  }

  render() {
    return (
      <AppNavigator ref={this.refNavigator} />
    );
  }

  refNavigator = el => (this.navigator = el);

  getNavigation = () => this.navigator ? this.navigator._navigation : null;
}
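A minimal sketch of the kind of redux-middleware usage described above (the GO_BACK action type and the middleware itself are hypothetical):

// Hypothetical middleware that navigates back via the global accessor
const navigationMiddleware = store => next => action => {
  if (action.type === 'GO_BACK') {
    const navigation = AppNavigatorUtils.getNavigation();
    if (navigation) navigation.goBack();
  }
  return next(action);
};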
Doing AppNavigatorUtils.getNavigation().addListener('didBlur', (e) => console.log('blur from e:', e)) is not working. I expected this to add a listener to the currently focused screen.
Anyone have any ideas?

Threejs: mesh standard material reflection issues

I've stumbled upon the problem that some browsers and devices render MeshStandardMaterial reflections poorly.
Consider the two comparison examples below:
[comparison images]
Both comparisons are running simultaneously on the same computer, with the same graphics card and identical attributes, but in different browsers. As you can see, the reflections on the right are almost unidentifiable.
Additionally, I'm getting some triangulation issues at sharp angles that make it seem as if the reflection is being calculated in the vertex shader:
[image]
I understand that different browsers have different WebGL capabilities, as the results on http://webglreport.com/ illustrate:
[WebGL Report screenshots]
Does anybody know what WebGL extension or feature the IE/Edge browsers are missing that I can look for? I want to put in a sniffer that falls back to a different material if the browser doesn't meet the necessary requirements. Or, if anybody has a full solution, that would be even better. I've already tried playing with the envMap's minFilter attribute, but the reflections are still calculated differently.
I don't know which extensions are needed, but you can easily test. Before you init THREE.js, put in some code like this:
const extensionsToDisable = [
  "OES_texture_float",
  "OES_texture_float_linear",
];

WebGLRenderingContext.prototype.getExtension = function(oldFn) {
  return function(extensionName) {
    if (extensionsToDisable.indexOf(extensionName) >= 0) {
      return null;
    }
    return oldFn.call(this, extensionName);
  };
}(WebGLRenderingContext.prototype.getExtension);

WebGLRenderingContext.prototype.getSupportedExtensions = function(oldFn) {
  return function() {
    const extensions = oldFn.call(this);
    return extensions.filter(e => extensionsToDisable.indexOf(e) < 0);
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);
Then just selectively disable extensions until Firefox/Chrome look the same as IE/Edge.
The first thing I'd test is disabling every extension that's in Chrome/Firefox but not in IE/Edge, just to verify that turning them all off reproduces the IE/Edge behavior.
If it does reproduce the issue, I'd then do a binary search (turn half the disabled extensions back on) and repeat until I found the required ones.
const extensionsToDisable = [
  "EXT_blend_minmax",
  "EXT_disjoint_timer_query",
  "EXT_shader_texture_lod",
  "EXT_sRGB",
  "OES_vertex_array_object",
  "WEBGL_compressed_texture_s3tc_srgb",
  "WEBGL_debug_shaders",
  "WEBKIT_WEBGL_depth_texture",
  "WEBGL_draw_buffers",
  "WEBGL_lose_context",
  "WEBKIT_WEBGL_lose_context",
];

WebGLRenderingContext.prototype.getExtension = function(oldFn) {
  return function(extensionName) {
    if (extensionsToDisable.indexOf(extensionName) >= 0) {
      return null;
    }
    return oldFn.call(this, extensionName);
  };
}(WebGLRenderingContext.prototype.getExtension);

WebGLRenderingContext.prototype.getSupportedExtensions = function(oldFn) {
  return function() {
    const extensions = oldFn.call(this);
    return extensions.filter(e => extensionsToDisable.indexOf(e) < 0);
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);

const gl = document.createElement("canvas").getContext("webgl");
console.log(gl.getSupportedExtensions().join('\n'));
console.log("WEBGL_draw_buffers:", gl.getExtension("WEBGL_draw_buffers"));

how do I set quad buffering with jogl 2.0

I'm trying to create a 3D renderer for stereo vision with quad buffering in Processing/Java. The hardware I'm using supports this, so that's not the problem.
I had a stereo.jar library in JOGL 1.0 working for Processing 1.5, but now I have to use Processing 2.0 and JOGL 2.0, so I have to adapt the library.
Some things have changed in the source code of JOGL and Processing, and I'm having a hard time figuring out how to tell Processing I want to use quad buffering.
Here's the previous code:
public class Theatre extends PGraphicsOpenGL {
  protected void allocate()
  {
    if (context == null)
    {
      // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them
      GLCapabilities capabilities = new GLCapabilities();

      // Starting in release 0158, OpenGL smoothing is always enabled
      if (!hints[DISABLE_OPENGL_2X_SMOOTH])
      {
        capabilities.setSampleBuffers(true);
        capabilities.setNumSamples(2);
      }
      else if (hints[ENABLE_OPENGL_4X_SMOOTH])
      {
        capabilities.setSampleBuffers(true);
        capabilities.setNumSamples(4);
      }
      capabilities.setStereo(true);

      // get a rendering surface and a context for this canvas
      GLDrawableFactory factory = GLDrawableFactory.getFactory();
      drawable = factory.getGLDrawable(parent, capabilities, null);
      context = drawable.createContext(null);

      // need to get proper opengl context since will be needed below
      gl = context.getGL();

      // Flag defaults to be reset on the next trip into beginDraw().
      settingsInited = false;
    }
    else
    {
      // The following three lines are a fix for Bug #1176
      // http://dev.processing.org/bugs/show_bug.cgi?id=1176
      context.destroy();
      context = drawable.createContext(null);
      gl = context.getGL();
      reapplySettings();
    }
  }
}
This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre").
Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying:
PGraphicsOpenGL pg = ((PGraphicsOpenGL)g);
pgl = pg.beginPGL();
gl = pgl.gl;
glu = pg.pgl.glu;
gl2 = pgl.gl.getGL2();
GLProfile profile = GLProfile.get(GLProfile.GL2);
GLCapabilities capabilities = new GLCapabilities(profile);
capabilities.setSampleBuffers(true);
capabilities.setNumSamples(4);
capabilities.setStereo(true);
GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);
If I go on, I should do something like this:
drawable = factory.getGLDrawable(parent, capabilities, null);
but drawable isn't a field anymore and I can't find a way to do it.
How do I set quad buffering?
If I try this:
gl2.glDrawBuffer(GL.GL_BACK_RIGHT);
it obviously doesn't work :/
Thanks.
