'Texture unit out of range error' in three.js breaks app? - three.js

I am trying to use sparks.js in my app via the THREEx.Sparks module that Jerome made for his live sparks editor. However it only works with three.js up to and including r50 - after which the sparks trail never gets rendered.
I have put up a (somewhat) minimal fork of the editor for all to play with. The cylinder always gets drawn, but as we step up through the three.js revisions:
r46 works, but is of course now very outdated
r50 and r49 draw the particles as expected, but with this WebGL error: WebGL: INVALID_ENUM: activeTexture: texture unit out of range
starting at r51, the particle system of the sparks is not drawn
The error is quite likely to do with how THREE handles improperly initialised textures. I have chased it down to the 128x128 radial texture created in _buildDefaultTexture inside THREEx.Sparks.js.
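For reference, such a radial canvas texture is typically built roughly like this (an illustrative sketch, not the actual _buildDefaultTexture source):
var canvas = document.createElement('canvas');
canvas.width = 128;
canvas.height = 128;
var ctx = canvas.getContext('2d');
var gradient = ctx.createRadialGradient(64, 64, 0, 64, 64, 64); // centre to edge
gradient.addColorStop(0, 'rgba(255,255,255,1)');
gradient.addColorStop(1, 'rgba(0,0,0,0)');
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, 128, 128);
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true; // forgetting this is a classic "uninitialised texture" pitfall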
The reason this is a SO question (and not a GH bug report) is that THREE r51+ have evidently got the right error handling - they don't even attempt the bad behaviour because that makes WebGL a sad panda, whereas r50 and down just did it anyway and caused errors. My question is:
what exactly is THREEx.Sparks.js not doing properly here
and why did it kinda still work prior to r51
System info: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17 | WebGL 1.0 (OpenGL ES 2.0 Chromium) | WebKit | WebKit WebGL | WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium)
A screenshot of sparksjs-dev in action using r50 is available for your viewing pleasure at i.stack.imgur.com/p5EfN.png (if it hasn't been pulled yet); it shows the bungled uniform1i and activeTexture calls in the WebGL debugger.

I am not able to debug your code for you, but I can definitely point you in the right direction.
First, see the Migration Wiki for help upgrading to the current version.
For example, you will see that in THREEx.Sparks.js,
texture : { type: "t", texture: this._texture }
should now be
texture : { type: "t", value: this._texture }
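In case it helps, here's a rough sketch of a texture uniform declared the r51+ way and handed to a ShaderMaterial (the variable names and shader sources are placeholders, not the actual THREEx.Sparks.js internals):
var uniforms = {
    texture: { type: "t", value: radialTexture } // "value" now holds the texture itself
};
var material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: vertexShaderSource,
    fragmentShader: fragmentShaderSource,
    transparent: true
});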
There may be other issues.
I can't comment on old versions of three.js, only the current one.
three.js r.55

Related

Six hours to render one frame in Blender - what am I doing wrong?

I'm new to Blender. After creating an animation, I want to render it. With the help of a large number of guides on the Internet I think I arrived at reasonable render settings, but at the start of the animation render it takes 6 hours to render just 1 frame.
My computer:
RTX 3050 Ti 4gb
Ryzen 7 5800
16gb RAM
Settings:
I realize that the problem is in my settings, but I do not understand where.
Also, for some reason, when rendering a single image, the entire background I created changes to orange due to some technical error:
View in blender:
After render:
Thank you a lot for your time, I would really appreciate your help
(I'm using Blender 3.4)
To reduce the render time, go to Edit > Preferences > System and make sure CUDA is enabled under Cycles Render Devices. As for the change in background colour, that can be caused by a bug, so upgrade your Blender to at least 3.8, because that's more stable than its previous versions. Also make sure all the dependency files (.dll) are intact and your DirectX version is 11.

Why does a three.js point cloud not render the same on Mac and Windows (Intel HD Graphics 4000)?

I'm developing a huge point cloud viewer (over 10 million points) using Three.js.
But I got a strange result: the rendering is not the same on Mac and Windows.
The first figure below is on Mac, and the next figure is on Windows (7).
Both use Intel HD Graphics 4000.
What is happening in the Chrome browser?
Additional information: the same situation occurs on iPhone SE, iPhone X, iPad 4, MacBook, MacBook Air, and MacBook Pro. Those machines all display a very sparse point cloud (they all use the Intel HD Graphics series).
But only an iMac (2017) displays the huge point cloud successfully. It uses a Radeon Pro 555, not an Intel GPU.
I looked for any info and/or error messages, but there are no errors in "chrome_debug.log".
=== P.S. === Below is my code:
if (data.point.position.length > 0) {
    var geometry = new THREE.BufferGeometry();
    geometry.addAttribute('position', new THREE.BufferAttribute(data.point.position, 3));
    geometry.addAttribute('color', new THREE.BufferAttribute(data.point.color, 3));
    var material = new THREE.PointsMaterial({
        vertexColors: THREE.VertexColors,
        size: 0.8,
        sizeAttenuation: true
    });
    // build the point cloud and add it to the scene
    // (assumption: `scene` is the THREE.Scene created elsewhere in the viewer)
    var points = new THREE.Points(geometry, material);
    scene.add(points);
}
=== P.P.S. ===
For everyone:
After some trial and error, I may have found a way to work around it.
When pointMaterial.sizeAttenuation = false, the FAR perspective view on Mac looks like it does on Windows. However, the NEAR perspective view becomes a sparse point cloud.
But if I create the NEAR perspective view with pointMaterial.sizeAttenuation = true, I get a better result than before.
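In code, the workaround looks roughly like this (the distance threshold is arbitrary; points and camera are the objects from my viewer):
var distance = camera.position.distanceTo(points.position);
material.sizeAttenuation = distance < 50; // true for near views, false for far views
material.needsUpdate = true; // changing sizeAttenuation requires the material to be rebuilt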
Thanks a lot for your suggestion.
On both the working and the failing configuration, visit http://webglreport.com/?v=1 and check the value of the OES_element_index_uint extension. My assumption is that this extension is not supported on the failing machine/OS combination (driver support missing on macOS).
This extension is required by three.js to render more than 64k vertices from a single BufferGeometry. On machines that don't support this extension, you'll need to split up your geometry.
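A quick way to check is to run this in the browser console on any WebGL-capable page (a minimal sketch):
var gl = document.createElement('canvas').getContext('webgl');
console.log(gl.getExtension('OES_element_index_uint')); // null means the extension is unavailable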
Your problem is pixel density on Mac devices. Mac devices often have a pixel ratio of 2 or higher on retina displays. When you declare pointsMaterial.size, you're telling it how big each point is in pixels, so make sure you're taking window.devicePixelRatio into account when assigning their pixel size. Something like
pointsMaterial.size = mySize * window.devicePixelRatio;
should do the trick.
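Applied to the material from the question, that would look roughly like this (0.8 is the size the question uses):
var material = new THREE.PointsMaterial({
    vertexColors: THREE.VertexColors,
    size: 0.8 * window.devicePixelRatio,
    sizeAttenuation: true
});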
For Chrome users, disabling "Use hardware acceleration when available" could solve the sparse point cloud problem, but the fps may be very low.
I didn't dig into the underlying reason; if someone finds it, please let me know~

Camera texture in Unity with multithreaded rendering

I'm trying to do pretty much what TangoARScreen does but with multithreaded rendering on in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would be then blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about invalid display when I try to use it. Probably not even OnTangoCameraTextureAvailable is called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV2RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer? Render it with a custom shader, for instance? The documentation is pretty blank in these areas and every experiment means several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 Gankino release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.

Has anyone ported globe (the chrome experiment) to the latest threejs version?

Short story:
The globe code is based on (and contains) threejs v40, while the latest version on GitHub is threejs r55 at the moment. I was wondering if anybody (more knowledgeable than me in this area) has ported globe to a newer threejs version?
Long story:
I was fiddling around with Google's globe from http://www.chromeexperiments.com/globe.
I noticed that it is based on an old threejs (on github) version. Using the latest version (and getting the same results!) did not prove to be easy. Also see this question.
I changed around some function names and fumbled some parameters, no big deal. Then I turned to the shaders. That proved to be more challenging. The old version of threejs seems to have a bug when you do Mesh.flipSided = true;: the normal vectors seem to be different in the shaders between the old and new versions. But the shader code in globe was written around this bug, so I had to correct the shader code.
I now have something that sort of looks the same, but combining the atmosphere and the earth is not working at all. I suspect this threejs bug plays a part in it, but I am not sure. Again, this is a flipSided bug that the globe authors might have relied on.
Well, ehm, I am sort of stuck here. I can do what I wanted to do sticking with the old version, but that somehow feels bad.
Can anyone shed some light here?
According to the changelog at https://code.google.com/p/webgl-globe/source/browse/globe/globe.js :
"Nov 5, 2012. Updated to threejs r52 and tweenjs r7."
So, on the surface it sounds like it has been updated... which parts of the code specifically are giving you trouble?
Nowadays, instead of Mesh.flipSided = true, you set side: THREE.BackSide on the material of the mesh.
Can you post or send a link to your code for further investigation? An updated version of the globe project sounds like a most worthy endeavor.
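For reference, the change looks roughly like this (a minimal sketch, not the actual globe code):
// old (pre-r50) style:
// mesh.flipSided = true;
// newer versions: flip the rendered side on the material instead
var material = new THREE.MeshBasicMaterial({ side: THREE.BackSide });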

Antialiased lines using jogl on Windows 7 look horrible

My java application uses jogl to create surfaces with an overlaid grid. The lines of the grid are anti-aliased. However, on Windows the anti-aliasing is complete garbage, unless I add a small alpha value to the surface. Here are screen shots of the same scene on all three platforms:
Mac OS X:
Ubuntu 11.10:
Windows 7:
Windows 7 with alpha=0.01:
The basic procedure to create this is as follows:
gl.glPushAttrib( GL.GL_ALL_ATTRIB_BITS );
gl.glEnable(GL.GL_POLYGON_OFFSET_FILL);
gl.glPolygonOffset(1.0f, 1.0f);
drawSurface(gl);
gl.glDisable(GL.GL_POLYGON_OFFSET_FILL);
gl.glPopAttrib();
gl.glDisable( GL.GL_LIGHTING );
gl.glDepthFunc(GL.GL_LEQUAL);
float[] c = {0, 0, 0, 0.5f};
gl.glColor4fv(c, 0);//the alpha here gets overridden if one is specified for the surface (which is why the Windows-with-alpha produces a darker grid)
drawGrid(gl);
gl.glDepthFunc(GL.GL_LESS);
gl.glEnable( GL.GL_LIGHTING );
The drawSurface() method also does the following before creating the polygons for the surface (using GL_TRIANGLE_STRIP primitives):
gl.glPushAttrib(GL.GL_LIGHTING_BIT);
gl.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE);
gl.glEnable(GL.GL_COLOR_MATERIAL);
The drawGrid() method sets up anti-aliasing like so:
gl.glEnable(GL.GL_BLEND);
gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
gl.glEnable(GL.GL_LINE_SMOOTH);
gl.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST);
The grid itself is created using GL_LINE_STRIPs
I've read up on the OpenGL documentation regarding line anti-aliasing and also tried out the polygon offset example here.
As for hardware, I have a dual-boot IBM ThinkPad (64-bit quad core) with an integrated nVidia 3100M card and an Intel Core 2 Duo iMac which has an ATI Radeon. Since Ubuntu and Windows are running on the same hardware, I know it can't be a hardware issue.
I'm looking for suggestions for improving the anti-aliasing of the grid lines on Windows.
Turns out I had not tried (I thought I had, but I did not rebuild and test it correctly)
gl.glDepthMask( false );
Adding that in correctly greatly reduced the fragmentation of the anti-aliased grid lines.
That said, I'm still not 100% happy with the look of the lines, but, for now, this question is resolved. Here is a screen shot of the 'improved' grid lines:
While this is a great improvement, it's still not as good as Mac OS X or even Ubuntu.
