How to use gl_LastFragData in WebGL? - three.js

I'm currently working on a three.js project. I need to customize a fragment shader with programmable blending instead of using the predefined blending modes. To do this, I want to use gl_LastFragData in my fragment shader, but I get a shader compilation error.
How can I use gl_LastFragData in WebGL, or is there an equivalent alternative?

gl_LastFragData is not present in the WebGL or OpenGL specs.
However, these APIs have an extension mechanism.
You can query for available extensions at program start to see whether the desired feature is present. To use an available extension in a shader program, you must activate it in the shader source code (see the sketch at the end of this answer).
Your error message says that you are trying to use extension functionality while it is unavailable.
Speaking of your exact case: check the EXT_shader_framebuffer_fetch extension. ARM_shader_framebuffer_fetch and NV_shader_framebuffer_fetch are also worth a look.
However, these extensions are written against OpenGL 2.0 and OpenGL ES 2.0; I'm not sure whether they exist as WebGL extensions.
Expect framebuffer-fetch functionality to be present on mobile devices and absent on desktop devices. As far as I understand, that comes from the difference between mobile and desktop GPU architectures (tile-based versus immediate-mode rasterizers): a tile-based GPU can use tile-local memory for an efficient lookup.
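As a rough illustration of the query-then-activate pattern, here is a hedged C++ sketch in OpenGL ES 2.0 style (in WebGL you would call gl.getExtension from JavaScript instead); it assumes EXT_shader_framebuffer_fetch is the extension you end up using:

#include <GLES2/gl2.h>
#include <cstring>

// Check the extension string for framebuffer-fetch support (assumes a
// current GL ES 2.0 context).
bool hasExtension(const char* name) {
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr && std::strstr(exts, name) != nullptr;
}

// Fragment shader that activates the extension before touching
// gl_LastFragData; compiling this while the extension is unavailable is
// what produces an error like the one in the question.
static const char* kFragSrc =
    "#extension GL_EXT_shader_framebuffer_fetch : require\n"
    "precision mediump float;\n"
    "void main() {\n"
    "    // Programmable blend: mix the incoming color with the value\n"
    "    // last written to the framebuffer at this fragment.\n"
    "    gl_FragColor = 0.5 * vec4(1.0, 0.0, 0.0, 1.0) + 0.5 * gl_LastFragData[0];\n"
    "}\n";

Only compile kFragSrc after hasExtension("GL_EXT_shader_framebuffer_fetch") returns true; otherwise fall back to the texture ping-pong described in the next answer.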

There is no gl_LastFragData in WebGL. WebGL is based on OpenGL ES 2.0 and WebGL2 on OpenGL ES 3.0; neither supports gl_LastFragData.
The traditional way to use a previous result is to pass it in as a texture when generating the next result, ping-ponging between render targets (a sketch follows the list below):
someOperation1(A, B) -> TempTexture1
someOperation2(TempTexture1, C) -> TempTexture2
someOperation3(TempTexture2, D) -> TempTexture1
someOperation4(TempTexture1, E) -> TempTexture2
someOperation5(TempTexture2, F) -> resultTexture/fb/canvas/window
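A hedged C++/desktop-GL sketch of that ping-pong flow (the same structure applies in WebGL, or in three.js with two WebGLRenderTargets); runPass is a hypothetical helper that draws one full-screen operation:

#include <GL/glew.h>   // assumption: a context and function loader already exist
#include <utility>

// Hypothetical helper (assumed): binds `program`, binds `inputTexture` to
// texture unit 0, and draws a full-screen quad into the bound framebuffer.
void runPass(GLuint program, GLuint inputTexture);

// Ping-pong between two FBO-backed textures so that each pass can read the
// previous pass's output; the final pass targets the default framebuffer.
void pingPong(GLuint fbo[2], GLuint tex[2], GLuint passes[], int numPasses) {
    int src = 0, dst = 1;
    for (int i = 0; i < numPasses; ++i) {
        bool last = (i == numPasses - 1);
        glBindFramebuffer(GL_FRAMEBUFFER, last ? 0 : fbo[dst]);
        runPass(passes[i], tex[src]);  // read the previous result as a texture
        std::swap(src, dst);           // this output becomes the next input
    }
}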

Related

THREE.js BufferGeometry index size

Does THREE.BufferGeometry currently support 32-bit index buffers? WebGL 1.0 doesn't, unless the "OES_element_index_uint" extension is explicitly enabled. But is this somehow implemented in THREE.js by default? I could not find information about this anywhere in the docs.
You could look at the source and see that, yes, three.js supports 32-bit index buffers.
Then you could look at webglstats.com and see it claims 98% of devices support that extension. You could also reasonably conclude that any device that doesn't support the extension is probably old and underpowered and not worth worrying about.
TL;DR: Yes, three.js supports 32-bit index buffers.
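For reference, here is a hedged sketch of what that support rests on at the GL level for a WebGL1/GLES2-class context; three.js performs the equivalent gl.getExtension check internally:

#include <GLES2/gl2.h>
#include <cstring>

// 32-bit indices need GL_OES_element_index_uint on a GLES2/WebGL1-class
// context (WebGL2/GLES3 have them in core).
bool supports32BitIndices() {
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr && std::strstr(exts, "GL_OES_element_index_uint") != nullptr;
}

// With the extension present, glDrawElements may use GL_UNSIGNED_INT indices.
void drawIndexed(GLsizei indexCount) {
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}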

Easiest way to import image file to OpenGL ES 2.0 cross platform

I am learning to use OpenGL ES 2.0 by using MoSync to write cross-platform C code. I have already managed to draw basic shapes such as a triangle, square and circle, so the next stage is to draw some text to the screen. After reading various books, tutorials and forum posts, I realise I have to create a texture-atlas bitmap.
I have an image file with the text I want to use, i.e. 0-9 and a-z. Before I can bind it to a texture object, I first need to upload the image to OpenGL. Various tutorials use UIImage or BitmapFactory to load the image, but I cannot use these as MoSync does not contain their header files. Could anyone suggest a way to load my image file into OpenGL?
To use MoSync on the Android platform you will probably have to build a native library for MoSync and write your OpenGL ES code in C++. Most OpenGL ES projects on Android are done in native code for many reasons, which are detailed in this article:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1/
I ended up using maOpenGLTexImage(MAHandle image), which works exactly like glTexImage2D() but takes an image resource instead and figures out pixel formats etc.
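For illustration, a hedged sketch of that approach; R_FONT_ATLAS is a hypothetical image resource handle, and the maOpenGLTexImage signature is taken from the answer above rather than verified against current MoSync headers:

#include <ma.h>        // MoSync syscalls (assumed header layout)
#include <GLES/gl.h>

// Upload a MoSync image resource into a GL texture; call as, e.g.,
// createAtlasTexture(R_FONT_ATLAS), where R_FONT_ATLAS is a hypothetical
// handle declared in the project's resource file.
GLuint createAtlasTexture(MAHandle image) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Instead of decoding pixels yourself and calling glTexImage2D, hand
    // MoSync the image resource and let it pick the pixel format.
    maOpenGLTexImage(image);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}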

OES/EXT/ARB_framebuffer_object

What are the differences between the OES/EXT/ARB_framebuffer_object extensions? Can all of these extensions be used with OpenGL ES 1.1 or OpenGL ES 2.0 applications? Or are there any guidelines as to which extension should be used with which version of GLES x.x?
OK, after some googling I found the following:
1. GLES FBOs:
1a. are core under GLES2.
1b. under GLES1 they are exposed via the extension GL_OES_framebuffer_object, under which the API entry points are glSomeFunctionOES.
2. OpenGL 1.x/2.x with GL_EXT_framebuffer_object, under which the API entry points are glSomeFunctionEXT.
3. OpenGL 3.x FBOs/GL_ARB_framebuffer_object: under GL 3.x, FBOs are core and the API entry points are glSomeFunction. Also, there is a "backport" extension for GL 2.x, GL_ARB_framebuffer_object, whose API entry points are likewise glSomeFunction(); note the lack of an EXT or ARB suffix (see the sketch after the token list).
Token naming:
1a. no suffix
1b. _OES
2. _EXT
3. no suffix
Fortunately, the token names map to the same values.
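To make the naming concrete, here is a hedged sketch of the same FBO setup under each variant; the USE_* macros are hypothetical build-time switches, not part of any GL header:

#if defined(USE_GLES1_OES)     // 1b: GLES1 + GL_OES_framebuffer_object
  #include <GLES/gl.h>
  #include <GLES/glext.h>
#else                          // 1a/2/3: GLES2, GL+EXT, or GL3/ARB via a loader
  #include <GL/glew.h>
#endif

void createFramebuffer() {
    GLuint fbo = 0;
#if defined(USE_GLES1_OES)
    glGenFramebuffersOES(1, &fbo);                  // ...OES entry points, _OES tokens
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
#elif defined(USE_GL2_EXT)                          // 2: GL 1.x/2.x + EXT
    glGenFramebuffersEXT(1, &fbo);                  // ...EXT entry points, _EXT tokens
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
#else                                               // 1a/3: GLES2, GL3 core, or ARB backport
    glGenFramebuffers(1, &fbo);                     // unsuffixed entry points and tokens
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
#endif
}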
Additionally, their usage differs:
1a, 1b: Depth and stencil buffers are attached separately as renderbuffers; alternatively, where the extension GL_OES_packed_depth_stencil is supported, both may be attached as one buffer. The depth buffer defaults to 16 bits!
2, 3: The spec allows attaching depth and stencil separately, but consumer-level desktop hardware generally does not support this; attaching both a stencil and a depth buffer instead calls for a combined depth-stencil buffer (sketch below):
2. extension GL_EXT_packed_depth_stencil, type GL_DEPTH24_STENCIL8_EXT
3. part of the FBO spec, type GL_DEPTH24_STENCIL8
Note: the tokens GL_DEPTH24_STENCIL8 and GL_DEPTH24_STENCIL8_EXT have the same value.
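A hedged sketch of the combined attachment in the GL 3.x/ARB style; it assumes a loader such as GLEW and a framebuffer already bound:

#include <GL/glew.h>

// Create a packed depth-stencil renderbuffer and attach it to the
// currently bound framebuffer through the single combined attachment point.
void attachDepthStencil(int width, int height) {
    GLuint rbo;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, rbo);
    // Under GL_EXT_framebuffer_object / GL_EXT_packed_depth_stencil you would
    // instead attach the same GL_DEPTH24_STENCIL8_EXT renderbuffer to both
    // GL_DEPTH_ATTACHMENT_EXT and GL_STENCIL_ATTACHMENT_EXT.
}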
Issues with GL_EXT_framebuffer_object:
a) GL_EXT_framebuffer_object might not be listed in GL 3.x contexts because FBOs are core there.
b) also, if you have a GL 2.x context with newer hardware, it is possible that GL_EXT_framebuffer_object is not listed but GL_ARB_framebuffer_object is.
Differences in capabilities:
FBO support via GL 3.x/GL_ARB_framebuffer_object allows color buffer attachments to have different types and resolutions; additionally, MSAA and blit functionality are part of the GL 3.x core and of GL_ARB_framebuffer_object (sketch below).
With FBO support via GL_EXT_framebuffer_object, blit and MSAA support are exposed as the separate extensions GL_EXT_framebuffer_blit and GL_EXT_framebuffer_multisample.
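For completeness, a hedged sketch of the core-style resolve blit; the framebuffer names are illustrative and a loader such as GLEW is assumed:

#include <GL/glew.h>

// Resolve a multisampled FBO into a single-sample one; core in GL 3.x and
// in GL_ARB_framebuffer_object (the EXT path uses glBlitFramebufferEXT from
// GL_EXT_framebuffer_blit instead).
void resolveMsaa(GLuint msaaFbo, GLuint resolveFbo, int w, int h) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}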

Setting up OpenGL/Cuda interop in Windows

I am writing a DLL that needs to do some work in CUDA 3.2 and some work in OpenGL. OpenGL will render some grayscale images that my CUDA code needs to read in and modify, and then give back to OpenGL as a texture. I believe I need to create PBOs to do that. I have done some basic OpenGL work before but never worked with extensions, and that's where my problem is: I've been searching for two days and so far haven't been able to find a working example, despite wading through pages and pages of code. None of the samples I've tried work (and I'm sure my video card supports this, being a GTX 470).
Some specific questions:
1. I installed the NVIDIA OpenGL SDK. Should I be using glew.h and wglew.h to access the extensions?
2. My DLL does not have any UI - do I need to create a hidden window or is there an easier way to create an off-screen rendering context?
3. Can I create a grayscale PBO by using the GL_RED_8UI format? Will both CUDA and GL be happy with that? I read the OpenGL interop section in the CUDA programming manual, and it said GL_RGBA_8UI was only usable by pixel shaders because it is an OpenGL 3.0 feature, but I didn't know whether that applies to a one-channel format. A one-channel float would also work for my purposes.
4. I thought this would be fairly easy to do - does it really require hundreds of lines of code?
Edit:
I have code to create an OpenGL context attached to an HBITMAP. Should I create a bitmap-rendering context and then try to attach a PBO to that? Or will that slow me down by also rendering to CPU memory? Is it better to create an invisible window and attach the PBO to that? Also, does the pixel format of my PBO have to match the window/bitmap? What about the dimensions?
Thanks,
Alex
There's actually an example of how to use OpenGL and CUDA together: look at the SimpleGL example in the CUDA SDK.
You may want to take a look at this example:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
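For orientation, here is a hedged sketch of the usual PBO round trip with the cudaGraphics* interop API (available since CUDA 3.0); the names are illustrative, the processing kernel is elided, and a GL3-class context with a GL_R8 texture is assumed for the one-channel case:

#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Let a CUDA kernel fill a PBO, then copy the PBO into a GL texture.
void cudaWritesIntoTexture(GLuint pbo, GLuint tex, int w, int h) {
    // Register the PBO with CUDA (typically once at startup, not per frame).
    cudaGraphicsResource* res = nullptr;
    cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsMapFlagsWriteDiscard);

    // Map it so a kernel can write through the raw device pointer.
    cudaGraphicsMapResources(1, &res);
    void* devPtr = nullptr;
    size_t size = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &size, res);
    // ... launch your processing kernel on devPtr here ...
    cudaGraphicsUnmapResources(1, &res);

    // Hand the result back to GL: copy the PBO contents into the texture.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RED, GL_UNSIGNED_BYTE, nullptr);  // 1 byte/pixel grayscale
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    cudaGraphicsUnregisterResource(res);
}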

Can't create Direct2D DXGI Surface

I'm calling this method:
http://msdn.microsoft.com/en-us/library/dd371264(VS.85).aspx
The call fails with E_NOINTERFACE. The documentation is especially unhelpful as to why this may happen. I've enabled all of the DirectX 11 debug stuff and that's the best I got. I know that I have a valid IDXGISurface1* (also tried IDXGISurface) and the other parameters are set correctly. Any ideas as to why this call may fail?
Edit:
I also am having problems creating D3D11 devices. If I pass nullptr as the IDXGIAdapter* argument in D3D11CreateDeviceAndSwapChain, it works fine, but if I enumerate the adapters myself and pass in a pointer (the only one returned), it fails with invalid argument. The MSDN documentation explicitly says that if nullptr is passed, then the system uses the first return from EnumAdapters1. I am running a DX11 system.
Direct2D only works when you create a Direct3D 10.1 device, but it can share surfaces with Direct3D 11. All you need to do is create both devices and render all of your Direct2D content into a texture that you share between them. I use this technique in my own applications to combine Direct2D with Direct3D 11. It incurs a slight cost, but the cost is small and constant per frame.
A basic outline of the process you will need to use (a hedged code sketch follows the list):
Create your Direct3D 11 device like you do normally.
Create a texture with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX option in order to allow access to the IDXGIKeyedMutex interface.
Use the GetSharedHandle to get a handle to the texture that can be shared among devices.
Create a Direct3D 10.1 device, ensuring that it is created on the same adapter.
Use the OpenSharedResource function on the Direct3D 10.1 device to get a version of the texture for Direct3D 10.1.
Get access to the IDXGIKeyedMutex interface for the Direct3D 10.1 version of the texture.
Use the Direct3D 10.1 version of the texture to create the RenderTarget using Direct2D.
When you want to render with D2D, use the keyed mutex to lock the texture for the D3D10 device. Then, acquire it in D3D11 and render the texture like you were probably already trying to do.
It's not trivial, but it works well, and it is the way that they intended you to interoperate between them. Windows 8 looks like it will introduce full D3D11 compatibility, so it will be just as simple as you expect.
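A hedged C++ sketch of the outline above; error handling is elided, the variable names are illustrative, and both devices are assumed to have been created on the same adapter:

#include <d3d11.h>
#include <d3d10_1.h>
#include <dxgi.h>

void shareD2DTexture(ID3D11Device* dev11, ID3D10Device1* dev101,
                     UINT width, UINT height) {
    // 1-2. Shared keyed-mutex texture created on the D3D11 device.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;   // a Direct2D-compatible format
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
    ID3D11Texture2D* tex11 = nullptr;
    dev11->CreateTexture2D(&desc, nullptr, &tex11);

    // 3. Shared handle via IDXGIResource.
    IDXGIResource* dxgiRes = nullptr;
    tex11->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
    HANDLE shared = nullptr;
    dxgiRes->GetSharedHandle(&shared);
    dxgiRes->Release();

    // 4-5. Open the same texture on the D3D10.1 device.
    ID3D10Texture2D* tex10 = nullptr;
    dev101->OpenSharedResource(shared, __uuidof(ID3D10Texture2D), (void**)&tex10);

    // 6. Keyed mutexes on both sides serialize access.
    IDXGIKeyedMutex *mutex10 = nullptr, *mutex11 = nullptr;
    tex10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex10);
    tex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex11);

    // 7-8. Per frame: draw Direct2D content while D3D10.1 holds the key...
    mutex10->AcquireSync(0, INFINITE);
    // ... create/use the D2D render target on tex10's IDXGISurface here ...
    mutex10->ReleaseSync(1);

    // ...then acquire it in D3D11 and render with the texture as usual.
    mutex11->AcquireSync(1, INFINITE);
    // ... draw the D3D11 scene sampling tex11 ...
    mutex11->ReleaseSync(0);
}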
Direct2D uses D3D10 devices, not D3D11 devices. The D3D11 device is probably what is being reported as lacking an interface by that E_NOINTERFACE.
