THREE.js BufferGeometry index size - three.js

Does THREE.BufferGeometry currently support 32-bit index buffers? WebGL 1.0 doesn't, unless the "OES_element_index_uint" extension is explicitly enabled. But is this somehow handled by THREE.js by default? I could not find information about this anywhere in the docs.

You could look at the source and see that, yes, three.js supports 32-bit index buffers.
Then you could look at webglstats.com and see that it claims 98% of devices support that extension. You could also reasonably conclude that any device that doesn't support the extension is probably old and underpowered and not worth worrying about.
TL;DR: Yes, three.js supports 32-bit index buffers.
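For reference, a minimal sketch of what that looks like in practice, assuming a THREE.WebGLRenderer instance named renderer already exists and using the setAttribute/setIndex API of recent three.js versions (the quad data is made up for illustration):

    // An indexed BufferGeometry whose index is a Uint32Array (32-bit indices).
    const geometry = new THREE.BufferGeometry();

    const positions = new Float32Array([
      -1, -1, 0,   1, -1, 0,   1, 1, 0,   -1, 1, 0,
    ]);
    geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

    // A Uint32Array index makes three.js upload an UNSIGNED_INT element array buffer;
    // on WebGL1 the renderer enables OES_element_index_uint when it is available.
    const indices = new Uint32Array([0, 1, 2, 0, 2, 3]);
    geometry.setIndex(new THREE.BufferAttribute(indices, 1));

    // You can also check the extension yourself on a WebGL1 context:
    const gl = renderer.getContext();
    console.log('32-bit indices supported:', !!gl.getExtension('OES_element_index_uint'));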

Related

In OpenGL ES what is an "external image"? Why do we need GL_OES_EGL_image_external?

I am reading through the spec for external images. It says:
This extension provides a mechanism for creating EGLImage texture targets
from EGLImages. This extension defines a new texture target,
TEXTURE_EXTERNAL_OES.
I have done my best but I can't find out what an "external image" is. This extension, and many of the related extension specs, reference "EGLImages" and similar things, but I can't figure out what they are.
Why do I need this?
Typically to create an image I load a file from disk. I believe that is "external".
This question basically says it is an image not created by the graphics driver, but wouldn't that mean virtually all images ever created would be EGLImages or "external images"? When using OpenGL I don't remember having to worry about whether my image was external or not.
Can somebody explain what an "External" image is, why it is needed (mainly I see this w/r/t OpenGL ES) and why these extensions are needed? Frankly I am not sure what an "EGL Image" is either, or why they make a distinction.
Thank you
This is a late answer.
An external image AKA external texture is typically used to supply frames from an image stream (e.g. camera preview, decoded video) as OpenGL textures. Such frames usually have special color encodings and memory layouts (e.g. multi-plane YUV). The extension mentioned above allows sampling such images as if they were regular OpenGL textures (with a few limitations).
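For a sense of what this looks like from the shader side, here is a minimal GLSL ES fragment shader sketch (the uniform and varying names are made up): apart from the #extension line, the samplerExternalOES type, and binding the texture to GL_TEXTURE_EXTERNAL_OES on the API side, it is sampled like any other texture.

    #extension GL_OES_EGL_image_external : require
    precision mediump float;

    // Bound to a GL_TEXTURE_EXTERNAL_OES target, e.g. a camera or decoded video frame.
    uniform samplerExternalOES u_frame;
    varying vec2 v_texCoord;

    void main() {
      // Sampled like an ordinary 2D texture; the driver hides the YUV layout/conversion.
      gl_FragColor = texture2D(u_frame, v_texCoord);
    }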

How to use gl_LastFragData in WEBGL?

I'm currently working on a THREE.JS project. I need to customize a fragment shader with programmable blending instead of using the predefined blending modes. To do this, I want to use gl_LastFragData in my fragment shader, but I get an error (screenshot of the error omitted).
How can I use gl_LastFragData in WEBGL or is there any other equivalent way?
gl_LastFragData is not present in the WebGL or OpenGL specs.
However, there is an extension mechanism in these APIs.
You can query for available extensions at program start and see whether the desired features are available. To use an available extension in a shader program, you have to activate it in the shader source code.
Your error message says that you are trying to use extension functionality when it is unavailable.
Speaking of your exact case: check the EXT_shader_framebuffer_fetch extension. Also worth checking are ARM_shader_framebuffer_fetch and NV_shader_framebuffer_fetch.
However, these extensions are written against OpenGL 2.0 / OpenGL ES 2.0. I'm not sure whether they exist as WebGL extensions.
Expect framebuffer-fetch functionality to be present on mobile devices and absent on desktop devices. As far as I understand, that comes from the difference between GPU architectures for mobile and desktop (tile-based vs. immediate-mode rasterizers). A tile-based GPU can use tile-local memory for an efficient lookup.
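As a rough sketch of that query-then-check flow in WebGL (whether this particular extension name is exposed at all is exactly the open question above; getExtension simply returns null when it is not):

    // Probe for a framebuffer-fetch style extension at startup.
    const canvas = document.createElement('canvas');
    const gl = canvas.getContext('webgl');

    console.log('Available extensions:', gl.getSupportedExtensions());

    // getExtension() returns null when the extension is not exposed by the browser.
    const fbFetch = gl.getExtension('EXT_shader_framebuffer_fetch');
    if (!fbFetch) {
      console.warn('Framebuffer fetch not available; fall back to ping-ponging textures.');
    }

    // If it were available, the fragment shader would also have to opt in, e.g.:
    // #extension GL_EXT_shader_framebuffer_fetch : require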
There is no gl_LastFragData in WebGL. WebGL is based on OpenGL ES 2.0 and WebGL2 is based on OpenGL ES 3.0; neither of those supports gl_LastFragData.
The traditional way of using a previous result is to pass it in as a texture when generating the next result:
someOperation1(A, B) -> TempTexture1
someOperation2(TempTexture1, C) -> TempTexture2
someOperation3(TempTexture2, D) -> TempTexture1
someOperation4(TempTexture1, E) -> TempTexture2
someOperation5(TempTexture2, F) -> resultTexture/fb/canvas/window
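A minimal WebGL1 sketch of that ping-pong pattern, assuming a gl context, width/height values, and a full-screen-quad shader program that reads the previous result from the bound texture are already set up (all of those names are illustrative):

    // Create a texture plus a framebuffer that renders into it.
    function createTarget(gl, width, height) {
      const tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, null);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      const fb = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                              gl.TEXTURE_2D, tex, 0);
      return { tex, fb };
    }

    let src = createTarget(gl, width, height);
    let dst = createTarget(gl, width, height);

    const passCount = 5;
    for (let pass = 0; pass < passCount; pass++) {
      const last = pass === passCount - 1;
      // The last pass renders to the canvas instead of a texture.
      gl.bindFramebuffer(gl.FRAMEBUFFER, last ? null : dst.fb);
      gl.bindTexture(gl.TEXTURE_2D, src.tex);   // previous result as input
      gl.drawArrays(gl.TRIANGLES, 0, 6);        // full-screen quad (program bound elsewhere)
      [src, dst] = [dst, src];                  // swap roles for the next pass
    }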

Can JPEG pictures be handled with GL_TEXTURE_EXTERNAL_OES the same way video is?

To save memory and improve performance, I want to use a special texture format to handle JPEG pictures. The format is handled via GL_TEXTURE_EXTERNAL_OES but is otherwise processed the same way as GL_TEXTURE_2D (the only differences are the glBindTexture target and the sampler declaration in the shader program).
I have this working in EGL hardware mode ('rasterizer_type': 'direct-gles'), but I run into problems in Skia hardware mode ('rasterizer_type': 'hardware'). I found that Skia hardware mode doesn't support it directly and instead calls render_image_fallback_function_ (HardwareRasterizer::Impl::RenderTextureEGL) to handle it the way 360 video is handled. The displayed result is very different from what EGL hardware mode shows; that code path seems intended only for 360 video. Is there a way to make Skia hardware mode support this format directly, or do I have to add a new path in TexturedMeshRenderer that handles pictures separately from 360 video?
Cobalt/Starboard supports letting the platform define custom (possibly accelerated) image decode functionality in starboard/image.h, are you using this to set GL_TEXTURE_EXTERNAL_OES, or are you modifying common Cobalt code?
If you are modifying Cobalt code, you may want to search through https://cobalt.googlesource.com/cobalt/+/master/src/cobalt/renderer/rasterizer/skia/hardware_image.cc for references to "GL_TEXTURE_2D" and make sure that they still make sense after your changes. In particular, you may need to adjust HardwareFrontendImage::CanRenderInSkia().

OES/EXT/ARB_framebuffer_object

What are the differences between the OES/EXT/ARB_framebuffer_object extensions? Can all of these extensions be used with OpenGL ES 1.1 or OpenGL ES 2.0 applications? Or are there any guidelines as to which extension should be used with which version of GLES x.x?
OK, after some googling I found the following:

1. GLES FBOs
   a. are core under GLES2
   b. under GLES1 they are exposed via the extension GL_OES_framebuffer_object, under which the API entry points are glFunctionNameOES

2. OpenGL 1.x/2.x with GL_EXT_framebuffer_object, under which the API entry points are glSomeFunctionEXT

3. OpenGL 3.x FBO / GL_ARB_framebuffer_object: under GL 3.x, FBOs are core and the API entry points are glSomeFunction. Also, there is a "backport" extension for GL 2.x, GL_ARB_framebuffer_object, whose API entry points are also glSomeFunction(). Note the lack of an EXT or ARB suffix.

Token naming:
1a. no suffix
1b. _OES
2. _EXT
3. no suffix
Fortunately, the token names map to the same values.

Additionally, their usage differs:
1a, 1b: Depth and stencil buffers are attached separately as renderbuffers; attaching both as one buffer may also be supported via the extension GL_OES_packed_depth_stencil. The depth buffer defaults to 16 bits!
2, 3: The spec allows attaching depth and stencil separately, but consumer-level desktop hardware does not support this; attaching both a stencil and a depth buffer calls for a depth-stencil texture.
2. extension GL_EXT_packed_depth_stencil, type is GL_DEPTH24_STENCIL8_EXT
3. part of the FBO spec, type is GL_DEPTH24_STENCIL8
Note: the values of the tokens GL_DEPTH24_STENCIL8 and GL_DEPTH24_STENCIL8_EXT are the same.

Issues with GL_EXT_framebuffer_object:
a) GL_EXT_framebuffer_object might not be listed in GL 3.x contexts because FBOs are core.
b) Also, with a GL 2.x context on newer hardware, it is possible that GL_EXT_framebuffer_object is not listed but GL_ARB_framebuffer_object is.

Differences in capabilities:
FBO support via GL 3.x / GL_ARB_framebuffer_object allows color buffer attachments to have different types and resolutions; additionally, MSAA and blit functionality are part of GL 3.x core and of GL_ARB_framebuffer_object.
With FBO support via GL_EXT_framebuffer_object, both blit and MSAA support are exposed as separate extensions.
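As a concrete illustration of the "core, no suffix" flavor together with packed depth-stencil, here is a minimal WebGL1 sketch (WebGL1 follows the GLES2 FBO model; gl, width and height are assumed to exist):

    // FBO with a color texture and a packed depth-stencil renderbuffer.
    const colorTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, colorTex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

    // One renderbuffer holds both depth and stencil (packed DEPTH_STENCIL format).
    const depthStencil = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, depthStencil);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_STENCIL, width, height);

    const fbo = gl.createFramebuffer();   // note: no EXT/ARB/OES suffix on the entry points
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, colorTex, 0);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT,
                               gl.RENDERBUFFER, depthStencil);

    if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
      console.warn('Framebuffer incomplete');
    }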

Setting up OpenGL/Cuda interop in Windows

I am writing a DLL that needs to do some work in CUDA 3.2 and some work in OpenGL. OpenGL will render some grayscale images that my CUDA code needs to read in and modify, and then give back to OpenGL as a texture. I believe I need to create PBOs to do that. I have done some basic OpenGL stuff before but never worked with extensions, and that's where my problem is - I've been searching for 2 days and so far haven't been able to find a working example, despite wading through pages and pages of code. None of the samples I've tried work (and I'm sure my video card will support it, being a GTX 470).
Some specific questions:
1. I installed the nvidia opengl sdk. Should I be using glew.h and wglew.h to access the extensions?
2. My DLL does not have any UI - do I need to create a hidden window or is there an easier way to create an off-screen rendering context?
3. Can I create a grayscale PBO by using GL_RED_8UI format? Will both cuda and gl be happy with that? I read the opengl interop section in the cuda programming manual and it said GL_RGBA_8UI was only usable by pixel shaders because it was an OpenGL 3.0 feature, but I didn't know if that applied to a 1-channel format. 1 channel float would also work for my purposes.
4. I thought this would be fairly easy to do - does it really require hundreds of lines of code?
Edit:
I have code to create an OpenGL context attached to an HBITMAP. Should I create a bitmap-rendering context and then try to attach a PBO to that? Or will that slow me down by also rendering to CPU memory? Is it better to create an invisible window and attach the PBO to that? Also, does the pixel format of my PBO have to match the window/bitmap? What about the dimensions?
Thanks,
Alex
There's actually an example of how to use OpenGL and CUDA together. Look at the SimpleGL example.
You may want to take a look at this example:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
