OpenGL 3.1+ with Ruby

I followed this post to play with OpenGL (the programmable pipeline) in Ruby.
Basically, I'm just trying to create a blue window, and here's the code:
Ray::GL.major_version = 3
Ray::GL.minor_version = 2
Ray::GL.core_profile = true # if you want/need one

window = Ray::Window.new("Test Window", [800, 600])
window.make_current

glClearColor(0, 0, 1, 1)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glEnable(GL_DEPTH_TEST)
Instead, I got a white window. This indicated that I was missing something, but I couldn't figure out what, as the resources for OpenGL on Ruby seem limited. I have been searching all over the web, but all I found was fixed-pipeline OpenGL material for Ruby.
Yes, I could use Ray's built-in functions to set the background color and draw stuff, but I don't want to do that. I just want to use Ray to set up the window and then call the OpenGL APIs directly. However, I can't figure out what's missing in the code above.
I would greatly appreciate any hint or pointer on this (maybe I need to swap the buffers? but then I don't know how to do that with Ray). Is anybody familiar with Ray who could give me some hints?
Or, is there any other tool that would let me set up an OpenGL binding (for the non-fixed pipeline)?

It would appear that you set the clear color to blue and then cleared the back buffer to make it blue. But, as you suspected, you have not swapped the buffers to put that back buffer onto your screen. As far as swapping buffers goes, here's another answer from Stack Overflow:
"Swapping the front and back buffer of a double-buffered window is a function provided by the underlying graphics system, i.e. Win32 GDI or X11 GLX. The functions you're looking for are wglSwapBuffers and/or glXSwapBuffers. On MacOS X, NSOpenGLViews are swapped automatically.
However, most likely you're using some framework, like GLUT, GLFW or Qt, which provides a portable wrapper around those functions. Read the framework's documentation."
I've never used Ray, so I'd say just keep rooting around in the documentation or look through example projects to see how buffer swapping is done.
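For reference, here is roughly what that swap looks like at the platform level on Win32, outside of Ray. This is only a sketch in C++; hdc is a hypothetical device context assumed to come from window/context setup done elsewhere:

// Hypothetical Win32/OpenGL frame. The key point: glClear() only writes
// to the back buffer; SwapBuffers() is what puts it on screen.
#include <windows.h>
#include <GL/gl.h>

void render_frame(HDC hdc)
{
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f); // blue
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // ... draw calls go here ...

    SwapBuffers(hdc); // without this, the window keeps its old contents
}

Whatever Ray's equivalent is called, it has to end up invoking SwapBuffers (or glXSwapBuffers) internally, so that is the kind of method to hunt for on its window class.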

Related

WinAPI: How do I draw a rectangle to a specific window handle?

I'm using a wrapper library in GMS2 (GameMaker) that was made back in the GM6 days. Someone managed to wrap the majority of the Win32 API for use in GM6-8. There is only one odd case where the WinAPI system seems to mess up: when drawing the controls to the main application window.
The desired goal is to draw an image to a child window and draw a grid over it that shows how it is split according to user input (e.g. 16x16), then have the user select squares by clicking and dragging the mouse over the boxes.
Unfortunately I have little to no experience with the Win32 API, so I'm a bit lost as to where to start.
Looking over the documentation, it looks like he kept the majority of the DLL's script names in the same format as the corresponding calls in C++ or C (just my assumption).
From his documentation he has things like a "Drawing System", which contains functions like "Move Item", "Add Line", "Add Graphic Buffer", etc., and then other Graphic Buffer functions. But then there are the "Draw" functions, with things like "Draw Fill Rect", "DrawSelectObj", etc. He doesn't really provide examples, so I'm unsure how to use these things together to get my desired result. What is the difference between a drawing system and a draw function? Do I have to use them in conjunction, along with the Graphics Buffer?
Can someone point me in the right direction on the necessary steps to get this done? An example without code, just the equivalent functions, will suffice; I just need to know which functions to use and how to later bind it to the child window.
Example code from his demo looks something like this:
GbGradient2 = API_GB_Create(105, 105);                 // graphics buffer
DcGradient2 = API_GB_GetDC(GbGradient2);               // its device context
API_Draw_Gradient(DcGradient2, 0, 0, 105, 105, 0, c_yellow, c_lime);
BrGradient2 = API_Draw_CreatePatternBrush(API_GB_GetBitmap(GbGradient2));
API_Draw_Gradient(DcGradient2, 0, 0, 105, 105, 0, c_red, 65535);
BrGradient3 = API_Draw_CreatePatternBrush(API_GB_GetBitmap(GbGradient2));
hRectangle = API_DS_AddRectangle(2, 5, 5, 105, 105);   // adds a rectangle (Drawing System)
hEllipse = API_DS_AddEllipse(2, 5, 5, 105, 105);
hNoPen = API_Draw_CreatePen(PS_NULL, 0, 0);
API_DS_SetItemBrush(hRectangle, BrGradient2);          // sets the brush
API_DS_SetItemBrush(hEllipse, BrGradient3);
API_DS_SetItemPen(hRectangle, hNoPen);                 // sets the pen
API_DS_SetItemPen(hEllipse, hNoPen);
API_Draw_Gradient(GbGradient2, 0, 0, 16, 16, 0, c_yellow, c_lime);
Looking at it a little more, it seems the draw functions are linked to GDI somehow.
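If so, the raw GDI underneath it probably looks something like this C++ sketch of the end goal (drawing a selection grid into a child window); hwndChild, the cell size, and the colors are all illustrative assumptions, not the wrapper's actual API:

// Hypothetical raw-GDI version: fill a child window and overlay a grid.
#include <windows.h>

void draw_grid(HWND hwndChild, int cols, int rows, int cell)
{
    HDC hdc = GetDC(hwndChild); // draw directly into the child window
    RECT rc = { 0, 0, cols * cell, rows * cell };

    HBRUSH brush = CreateSolidBrush(RGB(200, 200, 200));
    FillRect(hdc, &rc, brush); // background fill

    HPEN pen = CreatePen(PS_SOLID, 1, RGB(0, 0, 0));
    HGDIOBJ oldPen = SelectObject(hdc, pen);
    for (int x = 0; x <= cols; ++x) { // vertical grid lines
        MoveToEx(hdc, x * cell, 0, NULL);
        LineTo(hdc, x * cell, rows * cell);
    }
    for (int y = 0; y <= rows; ++y) { // horizontal grid lines
        MoveToEx(hdc, 0, y * cell, NULL);
        LineTo(hdc, cols * cell, y * cell);
    }

    SelectObject(hdc, oldPen); // release GDI objects
    DeleteObject(pen);
    DeleteObject(brush);
    ReleaseDC(hwndChild, hdc);
}

Judging by the names, the "Graphics Buffer" functions wrap off-screen memory DCs (CreateCompatibleDC plus a bitmap), the "Drawing System" seems to be a retained list of shapes that gets redrawn for you, and the plain "Draw" functions look like immediate GDI calls of the kind above; this is an inference from the documentation, not something it states.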
Since GMS2 is a cross-platform tool, its Windows-only functionality has been removed.
You can make a nice GUI for that purpose using GMS2 objects; as you have little experience with the Win32 API, this will be easier than all that heavyweight coding.
Here are some tips:
create a window object with a rectangle sprite
create UI objects in the Create event of the above object
add some code to the global mouse event

DirectX to OpenGL hot-swap, doesn't display on Win32 window

During the development of my engine, I'm trying to implement a feature that enables hot-swapping between OpenGL and DirectX. Currently I'm testing on the Win32 platform, and I came across the following problem:
I implemented both renderers (OpenGL 3.0 and Direct3D 11); both work fine on their own. The swapping mechanism is the following:
Destroy the current rendering context, then build up the new one. For example: release all DirectX objects, then create an OpenGL context via WGL. I'm trying to implement this using only one window (HWND).
Swapping from OpenGL 3.0 to DirectX 11 works (after destroying OpenGL, DirectX renders fine).
Destroying OpenGL and then recreating OpenGL again works. Same with DirectX.
When I try to swap from DirectX to OpenGL, however, the window stops displaying the newly drawn content and shows only the last DirectX frame.
To construct the OpenGL context I'm using WGL. The window's class was created with the CS_OWNDC style. I'm using SwapBuffers to flip the window buffers. Before setting up the context, I call SetPixelFormat with the value previously returned by ChoosePixelFormat. The created context is version 3.0, ensured via wglCreateContextAttribsARB.
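For comparison, a minimal sketch of that setup sequence in C++ (no error handling; the ARB tokens would normally come from wglext.h, and hwnd is assumed to exist already):

// Minimal WGL 3.0 context creation sketch (no error handling).
#include <windows.h>
#include <GL/gl.h>

// Tokens from WGL_ARB_create_context, normally provided by wglext.h:
#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092
typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*);

HGLRC create_gl30_context(HWND hwnd)
{
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    // Note: per the WGL docs, a window's pixel format can be set only once.
    int fmt = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, fmt, &pfd);

    // A legacy context is needed first, to query the ARB entry point.
    HGLRC tmp = wglCreateContext(hdc);
    wglMakeCurrent(hdc, tmp);
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
        wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
                            WGL_CONTEXT_MINOR_VERSION_ARB, 0,
                            0 };
    HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(tmp);
    wglMakeCurrent(hdc, ctx);
    return ctx;
}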
Additional information:
All of the DirectX references are released; this was checked by calling ReportLiveDeviceObjects and by checking that ID3D11Device1::Release returns 0. ID3D11DeviceContext1::ClearState and Flush were called to ensure object destruction.
None of the OpenGL calls report an error via glGetError; this is checked after every call. The same goes for all OS and WGL calls.
The OpenGL rendering calls execute as expected; for example:
OpenGL renders at 150 fps
Swap to DirectX, render at 60 fps (vsync)
Swap back to OpenGL, rendering again at 150 fps (not more)
There are other scenarios where OpenGL renders at more than 150 fps, so the rendering calls are executing properly.
My guess is that the flipping of the buffers doesn't work somehow; however, SwapBuffers returns TRUE anyway.
I tried using SaveDC and RestoreDC before and after using DirectX; this resulted in no solution.
Using wglSwapLayerBuffers instead of SwapBuffers makes no difference.
Can I somehow restore the HWND or HDC to its original state, or do you have any idea why this might happen?
Guess I posted my question too soon; anyway, this is how I solved it.
I dug around the documentation for DirectX, and for the function CreateSwapChainForHwnd I found the following:
Because you can associate only one flip presentation model swap chain at a time with an HWND, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain.
I was using DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL in my swap-chain descriptor, and this could mean that DirectX sets up a flip-model swap chain for the window, so when I then try to use the window with OpenGL, swapping the buffers somehow fails.
The solution is to not use a FLIP mode when creating the swap chain:
DXGI_SWAP_CHAIN_DESC1 scd = {};                  // zero-init; other fields filled in as before
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;       // anything but a FLIP_* effect
scd.Scaling = DXGI_SCALING_ASPECT_RATIO_STRETCH;
You have to set Scaling to something other than DXGI_SCALING_NONE, or the creation will fail.
The interesting part is that DirectX still does not properly remove the flip model from the window, although I did everything the documentation suggested (the ClearState and Flush calls).
See the Remarks section of the CreateSwapChainForHwnd documentation.
Edit: I found this question again after some time. If anybody still has an idea of how to revert to using GDI again instead of the DWM backbuffer, it would be greatly appreciated.

OpenGL and (the lack of) glBlendFuncSeparate

I need to blend a few images together into a single one, pretty much as described here: OpenGL - mask with multiple textures.
I used the solution proposed there, but there's an issue with the glBlendFuncSeparate method.
It turns out that this function was introduced in a later OpenGL version; according to my gl.h file, the version I'm using is 1.
After much searching and reading, I realized that this is what I have to work with and that I can't just upgrade my OpenGL version.
I went ahead and downloaded GLEW.
I added glew.h and glew.c to my VS10 project and defined GLEW_BUILD, and now it finally compiles without complaining about glBlendFuncSeparate. But when I run the program, it crashes when it calls the function, saying Access Violation; I guess the function pointer is NULL and the crash happens when it's invoked.
I continued reading and searching on this, and from what I understand, I need to use OpenGL extensions to make it work.
If what's written in Using OpenGL extensions On Windows is correct, then I'm missing something.
Say I do everything it says: I "download and install the latest drivers and SDKs for your graphics card" and then compile. Even if it runs on my machine, I see no guarantee that it won't crash on someone else's machine, since they might not have done the same.
I have two questions:
Am I missing something here? This whole process seems way too complicated and environment-dependent.
Is there an alternative to glBlendFuncSeparate in this kind of scenario?
You don't need glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO); to use the trick described in OpenGL - mask with multiple textures. Yes, you can't write color directly into the alpha channel as described in that example, but you can be a little tricky.
While writing your mask, just disable writing to all color channels except alpha:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
and enable multiplying the mask's alpha into the background's alpha channel:
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
After writing the bitmask, don't forget to set glColorMask back:
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
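Putting that whole mask pass together (a sketch; it assumes the mask texture and geometry are set up elsewhere):

// Multiply the framebuffer's alpha by the mask's alpha, leaving RGB intact.
glEnable(GL_BLEND);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE); // write alpha only
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);                 // dst.a = dst.a * src.a
// ... draw the mask geometry here ...
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);    // restore color writes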
And yes, you need a mask with the information in the alpha channel:
1) It can be done with GIMP (very simple, but requires some GIMP knowledge).
2) You can write your own routine to push the color information into the alpha channel before creating the mask texture (it's very simple, just a few lines of code).
3) Or just use the GL_ALPHA format in glTexImage2D for the mask texture, which stores the bitmap data in the texture's alpha channel; see the upload call below.
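For option 3, the upload would look something like this (maskPixels is a hypothetical buffer holding one byte per pixel):

// Upload an 8-bit mask as an alpha-only texture (legacy GL_ALPHA format).
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, maskPixels);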

OpenGL equivalent to GL_POINT_SIZE_ARRAY_OES?

I'm trying to draw point sprites in a small Mac app. I want each sprite to have its own size, and I know that OpenGL ES has the client state "GL_POINT_SIZE_ARRAY_OES".
I did some googling and discovered that there is a similar value, GL_POINT_SIZE_ARRAY_APPLE, which (you'd think) should do the same thing. For some reason, though, it doesn't seem to. Here's my drawing code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_POINT_SIZE_ARRAY_APPLE);
glVertexPointer(2, GL_FLOAT, sizeof(SpriteData), spriteVertices);
glPointSizePointerAPPLE(GL_FLOAT, sizeof(SpriteData), spriteVertices + sizeof(LocationF));
glDrawArrays(GL_POINTS, 0, spriteCount);
glDisableClientState(GL_POINT_SIZE_ARRAY_APPLE);
glDisableClientState(GL_VERTEX_ARRAY);
SpriteData is a struct containing each sprite's vertex and size data. spriteVertices is just an interleaved array of those structs.
The vertex pointer is working fine; the sprites are drawn, but their size values seem to be ignored. Instead, the size defaults to the value set by glPointSize().
Despite the fact that this code compiles with no warnings, it seems very suspicious that googling "GL_POINT_SIZE_ARRAY_APPLE" brings up almost no results. Is this a useless constant? If so, how else can I achieve what I want?
There is no official OpenGL extension that exposes GL_POINT_SIZE_ARRAY_APPLE. It may be some detritus in Apple's headers, but you shouldn't use it. Just use a generic vertex array and apply the value you pass as the point size, as sketched below.
If you want cross-platform code, you should avoid system-dependent headers. Instead, use a proper OpenGL loader, which comes with cross-platform headers that won't have system-dependent, non-standard detritus in them.
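In modern OpenGL, that approach looks roughly like the following C++ sketch; the attribute location, the size field inside SpriteData, and the VBO name are all illustrative assumptions:

// Vertex shader side (GLSL), feeding gl_PointSize from an attribute:
//   layout(location = 1) in float pointSize;
//   void main() { /* ... */ gl_PointSize = pointSize; }

// C++ side: point the attribute at the interleaved struct and enable
// shader-controlled point sizes (desktop GL).
#include <cstddef> // offsetof

glEnable(GL_PROGRAM_POINT_SIZE);
glBindBuffer(GL_ARRAY_BUFFER, spriteVbo); // hypothetical VBO of SpriteData
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(SpriteData),
                      (const void*)offsetof(SpriteData, size)); // assumed field
glDrawArrays(GL_POINTS, 0, spriteCount);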

glPushMatrix() / glPopMatrix() doesn't affect blending states. Why is this?

I've been trying to get OpenGL-ES to do something roughly like the following, to see whether glPushMatrix() and glPopMatrix() can be used to put things such as blending states back to how they were before glPushMatrix() was called.
It works for rotation/translation state - why doesn't it work for some other things, such as blend states?
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); //<-first blend mode
glPushMatrix();
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA); //<-second blend mode
//...drawing and stuff here...
glPopMatrix();
//at this point it appears the second blend mode is still in effect - why?
Am I properly confused, or is there another push/pop combination of functions for states that aren't pushed/popped by glPushMatrix() and glPopMatrix()?
Is there another way to easily set everything back to a previous state? Thanks for any illumination!
A stack for attributes does not exist in OpenGL-ES, sorry.
You can write one yourself if you really want to. All attributes are gettable, so any stack data structure would do.
IMHO a better way is to define a handful of useful blending presets and have a little state machine that lets you switch from one blending mode to another with the fewest calls into OpenGL-ES. After all, how many different blend modes do you really need?
You can use glGet() to retrieve all the blending options, then use them later to restore the blending state.
As you know, OpenGL is a state machine, and the various glPush and glPop functions control stacks. There are multiple stacks: the matrix stack contains only the coordinate transformations, while a separate one, the attribute stack, does contain your blend function setting. Check out glPushAttrib, as sketched below (note it exists in desktop OpenGL, not in ES).
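A sketch of both approaches; the desktop half uses the attribute stack, while the ES half hand-rolls the save/restore with glGet, since ES has no glPushAttrib:

// Desktop OpenGL: GL_COLOR_BUFFER_BIT covers the blend state.
glPushAttrib(GL_COLOR_BUFFER_BIT);
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
// ... drawing and stuff here ...
glPopAttrib(); // previous blend func (and enable flag) restored

// OpenGL-ES 1.x: no attribute stack, so query and restore manually.
GLint src, dst;
glGetIntegerv(GL_BLEND_SRC, &src);
glGetIntegerv(GL_BLEND_DST, &dst);
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
// ... drawing and stuff here ...
glBlendFunc((GLenum)src, (GLenum)dst);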
