I need to blend a few images together into a single one, pretty much like what's described here: OpenGL - mask with multiple textures.
I used the solution proposed there, but there's an issue with the glBlendFuncSeparate method.
It turns out that this function was introduced in later OpenGL versions, and according to my gl.h file the version I'm using is 1.
After much searching and reading I realized that this is what I have to work with and that I can't just upgrade my OpenGL version.
I went ahead and downloaded GLEW.
I added glew.h and glew.c to my VS10 project and defined GLEW_BUILD, and now it finally compiles without complaining about glBlendFuncSeparate. But when I run the program it crashes with an Access Violation when it tries to call the function; I guess the function pointer is NULL and the crash happens when it's invoked.
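For context, the usual GLEW setup order looks roughly like this (just a sketch, not my exact code; the key point I gathered is that glewInit() resolves the extension function pointers and therefore needs a current GL context, otherwise a call like glBlendFuncSeparate goes through a NULL pointer):
#include <GL/glew.h>   // must be included before gl.h
void initExtensions()  // call once, after the context has been created and made current
{
    if (glewInit() != GLEW_OK)
        return;  // glewGetErrorString() would describe the failure
    if (GLEW_VERSION_1_4)
        glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
    else if (GLEW_EXT_blend_func_separate)
        glBlendFuncSeparateEXT(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
}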
I continued reading and searching on this, and from what I understand, I need to use OpenGL extensions to make it work.
If what's written in Using OpenGL extensions On Windows is correct then I'm missing something.
Let's say I do everything it says: I "download and install the latest drivers and SDKs for your graphics card" and then compile it. Even if it runs on my machine, I see no guarantee that it won't crash on someone else's machine, since they might not have done the same.
I have two questions:
Am I missing something here? This whole process seems way too complicated and environment-dependent.
Is there an alternative to using glBlendFuncSeparate in this kind of scenario?
You don't need glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO); to use the trick described in OpenGL - mask with multiple textures. True, you can't add color directly to the alpha channel as described in that example, but you can be a little tricky.
While writing your mask, just disable writes to all color channels except alpha:
glColorMask(false, false, false, true);
and enable multiplying the mask's alpha into the background's alpha channel:
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
After writing the bitmask, don't forget to set glColorMask back:
glColorMask(true, true, true, true);
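Put together, the whole mask pass looks roughly like this (a sketch; drawMaskTexture() is a placeholder for however you draw your mask):
glEnable(GL_BLEND);
glColorMask(false, false, false, true);   // write only the alpha channel
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);       // dest.alpha *= mask.alpha
drawMaskTexture();                        // placeholder: draw the mask texture as usual
glColorMask(true, true, true, true);      // restore color writes for the rest of the frame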
//-----------------------------------------------------------------------------------------------------------------------
And yes, you need a mask with the information in its alpha channel:
1) It can be done in GIMP (very simple, but requires some GIMP knowledge).
2) You can write your own routine that pushes the color information into the alpha channel before creating the mask texture (it's very simple - just a few lines of code).
3) Or just use the GL_ALPHA format in glTexImage2D for the mask texture; it writes the bitmap's color into the texture's alpha channel (see the sketch below).
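For option 3, the texture upload might look roughly like this (a sketch; maskPixels, maskWidth and maskHeight stand in for your own single-channel bitmap data):
GLuint maskTexture = 0;
glGenTextures(1, &maskTexture);
glBindTexture(GL_TEXTURE_2D, maskTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// GL_ALPHA puts the single-channel data into the texture's alpha channel
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, maskWidth, maskHeight, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, maskPixels);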
We've encountered a strange problem on newer laptops with built-in graphics cards.
In order to draw TrueType fonts we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK).
Afterwards we parse the feedback buffer. This has worked for many years.
Now we have a problem with glyphs containing holes (only on platforms with built-in graphics cards):
wglUseFontOutlines works perfectly. If we just draw the returned display lists, everything is fine. However, the token stream generated with GL_FEEDBACK is corrupt. The debugger shows nothing unusual, all functions return success and the parsing itself works fine too. It is really the binary data generated in GL_FEEDBACK mode that is wrong.
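For reference, the call sequence is roughly this (a sketch with our own placeholder names - hdc, listBase, text and the buffer size - and error handling omitted):
// <windows.h>, <GL/gl.h>, <vector>, <cstring> assumed
GLYPHMETRICSFLOAT gmf[256];
wglUseFontOutlines(hdc, 0, 256, listBase, 0.0f, 0.0f, WGL_FONT_POLYGONS, gmf);

std::vector<GLfloat> feedback(64 * 1024);
glFeedbackBuffer((GLsizei)feedback.size(), GL_2D, feedback.data());
glRenderMode(GL_FEEDBACK);

glListBase(listBase);
glCallLists((GLsizei)strlen(text), GL_UNSIGNED_BYTE, text);   // draw the glyph display lists

GLint count = glRenderMode(GL_RENDER);   // number of values written to the feedback buffer
// ...parse GL_POLYGON_TOKEN / GL_LINE_TOKEN entries in feedback[0..count)...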
Has anyone else encountered this problem?
And is there an alternative way to obtain outlines and fills for TrueType fonts on Windows?
I'm just guessing into the blue here: the GL_SELECT and GL_FEEDBACK rendering modes were usually not supported by widespread GPU driver OpenGL implementations. Only a handful of graphics cards from the previous century actually supported these rendering modes, so you would almost always fall back to a software implementation when using them.
However, given modern GPUs' vastly more flexible feedback mechanisms, the latest drivers could actually try to implement those rendering modes using GPU features (somewhat weird, because those modes have been removed from modern OpenGL profiles). Anyway, this could be the reason why you're experiencing these problems.
In order to draw true-type fonts we obtain the glyph outlines using wglUseFontOutlines and then draw them with in glRenderMode(GL_FEEDBACK). Afterwards we parse the feedback buffer.
That's a cool Rube Goldberg machine. Why don't you simply cut out the middleman and obtain the glyph outlines directly using the appropriate Windows GDI function (GetGlyphOutline)? That is what wglUseFontOutlines uses internally anyway.
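A minimal sketch of that route, assuming the font is already selected into hdc (error handling mostly omitted):
// <windows.h> and <vector> assumed
MAT2 identity = { {0, 1}, {0, 0}, {0, 0}, {0, 1} };   // 16.16 fixed-point identity matrix
GLYPHMETRICS gm;
DWORD size = GetGlyphOutlineW(hdc, L'A', GGO_NATIVE, &gm, 0, NULL, &identity);
if (size != GDI_ERROR && size > 0)
{
    std::vector<BYTE> outline(size);
    GetGlyphOutlineW(hdc, L'A', GGO_NATIVE, &gm, size, outline.data(), &identity);
    // 'outline' now holds TTPOLYGONHEADER / TTPOLYCURVE records describing the glyph,
    // holes included; tessellate them (e.g. with the GLU tessellator) to get the fills.
}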
I followed this post to play with OpenGL (programmable pipeline) in Ruby.
Basically, I'm just trying to create a blue window, and here's the code.
Ray::GL.major_version = 3
Ray::GL.minor_version = 2
Ray::GL.core_profile = true # if you want/need one
window = Ray::Window.new("Test Window", [800, 600])
window.make_current
glClearColor(0, 0, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
Instead, I got a white window. This indicated that I was missing something, but I couldn't figure out what, as the resources for OpenGL in Ruby seemed limited. I have been searching all over the web, but all I found was fixed-pipeline OpenGL stuff for Ruby.
Yes, I could use Ray's built-in functions to set the background color and draw stuff, but I didn't want to do that. I just wanted to use Ray to set up the window and then call the OpenGL APIs directly. However, I couldn't figure out what I was missing in the code above.
I would greatly appreciate any hint or pointer on this (maybe I need to swap the buffers? but then I don't know how to do that with Ray). Is there anybody familiar with Ray who can give me some hints on this?
Or are there any other tools that would let me set up an OpenGL binding (for the programmable, non-fixed pipeline)?
It would appear that you set the clear color to blue and then cleared the back buffer to make it blue. But, as you said, you have not swapped the buffers to put the back buffer onto your screen. As far as swapping buffers goes, here's another answer from Stack Overflow:
"Swapping the front and back buffer of a double-buffered window is a function provided by the underlying graphics system, i.e. Win32 GDI, or X11 GLX. The functions you're looking for are wglSwapBuffers and/or glXSwapBuffers. On MacOS X NSOpenGLViews are automatically swapped.
However most likely you're using some framework, like GLUT, GLFW or Qt, which provide a portable wrapper around those functions. Read the framework's documentation."
I've never used Ray, so I'd say just keep rooting around in the documentation or look through example projects to see how buffer swapping is done.
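I can't speak for Ray's API specifically, but for illustration this is what the swap looks like at the platform level on Windows (a sketch; hdc is the device context the GL context was created on, and whatever Ray offers will be a wrapper around something equivalent):
// <windows.h> and <GL/gl.h> assumed; double-buffered pixel format, current WGL context
glClearColor(0, 0, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
SwapBuffers(hdc);   // presents the back buffer; without this you keep seeing the old front buffer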
I have an existing component that draws Direct2D content to an ID2D1RenderTarget, and I would like to save that drawing to an image file. The questions here, here and here, although they helped me, did not provide a clear answer as to how to do it.
My nullth idea was to try the official MSDN method. Unfortunately, it is not available in Win7.
My first idea was to modify the drawing routine to accept the render target as a parameter and use ID2D1Factory::CreateWicBitmapRenderTarget to draw directly into an IWICBitmap, but that turns out to be quite difficult for me: I would have to modify not only the drawing routine itself, but also the drawing callbacks of every user of that component (the code, written in Delphi, uses Embarcadero's TDirect2DCanvas and thus never had to manage Direct2D resources such as render targets or brushes).
My second idea was to create an ID2D1Bitmap, fill it with what has already been drawn using ID2D1Bitmap::CopyFromRenderTarget, and then draw that ID2D1Bitmap onto a WIC bitmap render target (roughly what was done here). I ran into the same kind of problems as those who asked the questions I link to: different resource affinities, as briefly explained by Kenny Kerr.
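In C++ terms, that second idea boils down to roughly this sketch (renderTarget is the existing ID2D1RenderTarget, width and height are placeholders) - and it is exactly the copied bitmap's affinity to its original render target that causes the trouble:
// <d2d1.h> / <d2d1helper.h> assumed
D2D1_SIZE_U size = D2D1::SizeU(width, height);
D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(renderTarget->GetPixelFormat());

ID2D1Bitmap *snapshot = NULL;
HRESULT hr = renderTarget->CreateBitmap(size, props, &snapshot);
if (SUCCEEDED(hr))
{
    // copy the current content of the render target into the bitmap
    hr = snapshot->CopyFromRenderTarget(NULL, renderTarget, NULL);
    // problem: 'snapshot' belongs to 'renderTarget' and can't simply be drawn
    // onto a render target created on a different back end (e.g. a WIC target)
    snapshot->Release();
}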
So is it possible under Win7 without having to implement my first idea, and how would you do it?
Direct2D 1.1 is supported on Windows 7 if you install the Platform Update. Unfortunately, that doesn't solve your problem without first creating two more of them: 1) it's still pre-release/beta, and 2) it adds another installation dependency for you to worry about.
I am writing a DLL that needs to do some work in CUDA 3.2 and some work in OpenGL. OpenGL will render some grayscale images that my CUDA code needs to read in and modify, and then give back to OpenGL as a texture. I believe I need to create PBOs to do that. I have done some basic OpenGL stuff before but never worked with extensions, and that's where my problem is - I've been searching for two days and so far haven't been able to find a working example, despite wading through pages and pages of code. None of the samples I've tried work (and I'm sure my video card supports it, being a GTX 470).
Some specific questions:
1. I installed the NVIDIA OpenGL SDK. Should I be using glew.h and wglew.h to access the extensions?
2. My DLL does not have any UI - do I need to create a hidden window or is there an easier way to create an off-screen rendering context?
3. Can I create a grayscale PBO using the GL_RED_8UI format? Will both CUDA and GL be happy with that? I read the OpenGL interop section in the CUDA programming manual and it said GL_RGBA_8UI was only usable by pixel shaders because it was an OpenGL 3.0 feature, but I didn't know if that applied to a one-channel format. A one-channel float would also work for my purposes.
4. I thought this would be fairly easy to do - does it really require hundreds of lines of code?
Edit:
I have code to create an OpenGL context attached to an HBITMAP. Should I create a bitmap-rendering context and then try to attach a PBO to that? Or will that slow me down by also rendering to CPU memory? Is it better to create an invisible window and attach the PBO to that? Also, does the pixel format of my PBO have to match the window/bitmap? What about the dimensions?
Thanks,
Alex
There's actually an example of how to use OpenGL and CUDA together. Look at the SimpleGL example.
You may want to take a look at this example:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
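For orientation, the core of the interop pattern those samples use looks roughly like this (a sketch against the CUDA 3.x runtime API; width, height, tex and the kernel are placeholders):
// <GL/glew.h>, <cuda_runtime.h>, <cuda_gl_interop.h> assumed
GLuint pbo = 0;
cudaGraphicsResource *pboResource = NULL;

// 1. create the PBO on the OpenGL side
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * sizeof(float), NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

// 2. register it with CUDA once
cudaGraphicsGLRegisterBuffer(&pboResource, pbo, cudaGraphicsMapFlagsNone);

// 3. each frame: map, run the kernel on the device pointer, unmap
float *devPtr = NULL;
size_t numBytes = 0;
cudaGraphicsMapResources(1, &pboResource, 0);
cudaGraphicsResourceGetMappedPointer((void **)&devPtr, &numBytes, pboResource);
// myKernel<<<grid, block>>>(devPtr, width, height);   // placeholder kernel launch
cudaGraphicsUnmapResources(1, &pboResource, 0);

// 4. hand the result back to OpenGL as a texture upload sourced from the PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_LUMINANCE, GL_FLOAT, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);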
I've been trying to get OpenGL-ES to do something roughly like the following, to see if glPushMatrix() and glPopMatrix() could be used to put things such as blending states back to how they were before glPushMatrix() was called.
It works for rotation/translation stuff - why doesn't it work for some other things such as blend states?
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); //<-first blend mode
glPushMatrix();
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA); //<-second blend mode
//...drawing and stuff here...
glPopMatrix();
//at this point it appears the second blend mode is still in effect - why?
Am I properly confused, or is there another push/pop pair of functions for states that aren't pushed/popped by glPushMatrix() and glPopMatrix()?
Is there another way to easily set everything back to a previous state? Thanks for any illumination!
A stack for attributes does not exist in OpenGL-ES, sorry.
You can write one yourself if you really want to. All attributes are gettable, so any stack data structure would do.
Imho a better way is to define a handful of useful blending presets and have a little state machine that allows you to switch from one blending mode to another with the fewest calls into OpenGL-ES. After all, how many different blend modes do you really need?
You can use glGet() to get all blending options. Then you can use them to restore the blending state.
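A minimal sketch of that save-and-restore approach, assuming OpenGL ES 1.1 where the blend factors are queryable:
GLint savedSrc, savedDst;
glGetIntegerv(GL_BLEND_SRC, &savedSrc);   // current source blend factor
glGetIntegerv(GL_BLEND_DST, &savedDst);   // current destination blend factor

glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);   // second blend mode
// ...drawing and stuff here...

glBlendFunc((GLenum)savedSrc, (GLenum)savedDst);     // first blend mode is back in effect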
As you know, OpenGL is a state machine and the various glPush and glPop functions control stacks. Now, there are multiple stacks. The matrix stack contains only the coordinate transformations. There is another stack, called the attribute stack, which does contain your blend function setting. Check out glPushAttrib.
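For completeness, on desktop OpenGL the attribute stack is used like this (glPushAttrib/glPopAttrib are not available in OpenGL ES and were removed from modern core profiles):
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);          // first blend mode

glPushAttrib(GL_COLOR_BUFFER_BIT);                    // saves blend func/enable, color mask, ...
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);    // second blend mode
// ...drawing and stuff here...
glPopAttrib();                                        // first blend mode is in effect again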