Test Program Using GLEW Crashing on Launch - Windows

I am trying to work with OpenGL on Windows in Eclipse using MinGW.
So far I've been able to set up GLFW correctly and now I am trying to load OpenGL pointers with GLEW.
I downloaded the GLEW source and built it using the instructions at Building glew on windows with mingw and that didn't appear to cause any problems.
I have a test program that I am trying to run. The program runs and opens the window if I comment out the calls to GLEW (lines 32 to 40):
/*USE OUR OPENING LIBRARY GLEW TO LOAD OPENGL FUNCTIONS*/
glewExperimental = GL_TRUE;
glewInit();
/*Test that GLEW was loaded by calling the OpenGL function glGenBuffers*/
GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);
printf("%u\n", vertexBuffer);
However, when I make any call to GLEW - either setting glewExperimental or calling glewInit() - the program builds fine, but when I run it the program crashes with the error message "HelloGLEW.exe has stopped working". This is the full error message; there are no other error messages.
Can anyone explain/theorise as to the cause of the problem and suggest how to fix it? Or perhaps someone has had the same problem and can explain how they solved it.
This is the full program code:
//Opening Toolkit: GLEW (OpenGL Extension Wrangler)
#define GLEW_STATIC
#include <GL/glew.h>
//Window Toolkit: GLFW
#include <GLFW/glfw3.h>
#include <stdio.h>
int main(void)
{
    /* USE GLFW TO CREATE OUR CONTEXT AND WINDOW*/
    GLFWwindow* window;
    /* Initialize the library */
    if (!glfwInit())
        return -1;
    /* Create a windowed mode window and its OpenGL context */
    window = glfwCreateWindow(640, 480, "Hello GLEW", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }
    /* Make the window's context current */
    glfwMakeContextCurrent(window);
    /*USE OUR OPENING LIBRARY GLEW TO LOAD OPENGL FUNCTIONS*/
    glewExperimental = GL_TRUE;
    glewInit();
    /*Test that GLEW was loaded by calling the OpenGL function glGenBuffers*/
    GLuint vertexBuffer;
    glGenBuffers(1, &vertexBuffer);
    printf("%u\n", vertexBuffer);
    /* Loop until the user closes the window */
    while (!glfwWindowShouldClose(window))
    {
        /* Render here */
        /* Swap front and back buffers */
        glfwSwapBuffers(window);
        /* Poll for and process events */
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
I've linked it with glew32, glu32, glfw3, opengl32 and gdi32 in that order. Here are the compilation commands:
g++ "-IC:\\Libraries\\GLFW3\\include" "-IC:\\Libraries\\GLEW\\include" -O0 -g3 -Wall -c -fmessage-length=0 -o main.o "..\\main.cpp"
g++ "-LC:\\Libraries\\GLFW3\\i386\\lib-mingw" "-LC:\\Libraries\\GLEW\\lib" -o HelloGLEW.exe main.o -lglew32 -lglu32 -lglfw3 -lopengl32 -lgdi32
Like I said, can anyone explain/theorise as to the cause of the problem and suggest how to fix it? Or perhaps someone has had the same problem and can explain how they solved it.
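One thing worth noting (this check is not in the program above; it is just a diagnostic sketch) is that glewInit() returns a status code, so the call can be verified before any OpenGL functions are used:
/* After glfwMakeContextCurrent(window): */
glewExperimental = GL_TRUE;
GLenum glewStatus = glewInit();
if (glewStatus != GLEW_OK)
{
    /* glewGetErrorString() describes why initialization failed */
    printf("glewInit failed: %s\n", glewGetErrorString(glewStatus));
    glfwTerminate();
    return -1;
}
printf("Using GLEW %s\n", glewGetString(GLEW_VERSION));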

Related

SDL in Xcode on MacMini M1 - Window not showing

I know this is similar to some other posts but still a little different...
I'm using the newest version of Xcode with SDL. The following code should show me a window but nothing happens except that I get the following message: Metal API Validation Enabled
Program ended with exit code: 0
When I disable this validation nothing happens at all. Any ideas on what might be wrong?
#include <SDL2/SDL.h>
#include <iostream>
int main() {
    SDL_Init((SDL_INIT_VIDEO) <0);
    SDL_Window *window;
    window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600, SDL_WINDOW_SHOWN); //also tried different WINDOW_ input here
    if (window == NULL) {
        // In the case that the window could not be made...
        printf("Could not create window: %s\n", SDL_GetError());
        return 1;
    }
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);
    SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
    SDL_RenderClear(renderer);
    SDL_RenderPresent(renderer);
    SDL_Delay(3000);
}
I don't have enough reputation to comment, but for a start, int main() should be replaced with int main(int argc, char* argv[]). I'm also not sure about SDL_Init((SDL_INIT_VIDEO) <0); just try SDL_Init(SDL_INIT_VIDEO);. Finally, I'm not 100% sure about this since I don't use a Mac, but if the platform uses dynamic libraries, make sure you have the correct ones as well (check whether you're compiling 64-bit or 32-bit and use the corresponding libraries).
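For what it's worth, a minimal sketch of that corrected initialization might look like the following (the title, sizes, and delay are the ones from the question; the error checks and cleanup calls are my own additions, not the original poster's code):
#include <SDL2/SDL.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    (void)argc;
    (void)argv;
    // SDL_Init returns 0 on success and a negative value on failure.
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        printf("SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    SDL_Window *window = SDL_CreateWindow("Title",
                                          SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                          800, 600, SDL_WINDOW_SHOWN);
    if (window == NULL) {
        printf("Could not create window: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);
    SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
    SDL_RenderClear(renderer);
    SDL_RenderPresent(renderer);
    SDL_Delay(3000);
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}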

How to use CMake for OpenGL + Qt 5.8 in OS X Sierra?

I just tried to set up CMake for a project using Qt 5.8 to do OpenGL stuff in OS X Sierra. Here you can see my main CMakeLists:
cmake_minimum_required (VERSION 3.8)
set (PROJECT_NAME "FluidEngine")
project (${PROJECT_NAME})
set (CMAKE_PREFIX_PATH "/Users/BRabbit27/Qt/5.8/clang_64")
set (CMAKE_AUTOMOC ON)
find_package (Qt5Widgets)
find_package (Qt5Gui)
find_package (Qt5OpenGL)
set (CPP_SOURCES "")
set (HPP_SOURCES "")
set (INCLUDE_PATHS "")
add_subdirectory (src)
include_directories (${INCLUDE_PATHS} ${OPENGL_INCLUDE_DIRS})
add_executable (${PROJECT_NAME} ${CPP_SOURCES} ${HPP_SOURCES})
target_link_libraries (${PROJECT_NAME} Qt5::Widgets Qt5::Gui Qt5::OpenGL )
It configures the Xcode project perfectly, with no errors.
Then my code for rendering a basic triangle looks like:
GLWindow::GLWindow(QWidget* parent) : QOpenGLWidget(parent)
{}

void GLWindow::initializeGL()
{
    initializeOpenGLFunctions();
    GLfloat verts[] =
    {
         0.f,  1.f,
        -1.f, -1.f,
         1.f, -1.f
    };
    GLuint myBufferID;
    glGenBuffers(1, &myBufferID);
    glBindBuffer(GL_ARRAY_BUFFER, myBufferID);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
}

void GLWindow::resizeGL(int w, int h)
{
    glViewport(0, 0, w, h);
}

void GLWindow::paintGL()
{
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
And what I get is a black window. I got this code from this video tutorial.
Am I missing something in my CMake file, or something subtle in OS X needed to use OpenGL? Since OS X is now promoting Metal, perhaps something must be enabled, but I do not know what.
I already tried setting the version of OpenGL used in the main function:
int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QSurfaceFormat format;
    format.setVersion(4, 1);
    format.setProfile(QSurfaceFormat::CoreProfile);
    QSurfaceFormat::setDefaultFormat(format);

    GLWindow glwindow;
    glwindow.show();
    return app.exec();
}
Any idea?
UPDATE
I added the project to GitHub; you can clone it here to test on your machine.
The way I build the project is just cmake -GXcode .., assuming I'm in the /path/to/project/build_xcode directory.
Hopefully this helps you reproduce the issue and perhaps gives me a clue about what I could be doing wrong.
I checked all the error messages and worked my way from there to the solution. Fortunately, someone on SO had already answered a related problem. You can find the complete answer here: OpenGL: INVALID_OPERATION following glEnableVertexAttribArray
In short:
You're seeing this error on OS X because it only supports the OpenGL Core Profile for OpenGL 3.x or higher. Your code is not Core Profile compliant. You were most likely using the Compatibility Profile on Windows.
Specifically, the Core Profile requires a Vertex Array Object (VAO) to be bound for all vertex-related calls. So before calling glEnableVertexAttribArray(), or other similar functions, you will need to create and bind a VAO.
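A minimal sketch of what that could look like in the initializeGL() from the question, using Qt's QOpenGLVertexArrayObject wrapper (the vao member is an assumed addition that would have to be declared in the GLWindow class, e.g. as QOpenGLVertexArrayObject vao;, so it outlives this function):
#include <QOpenGLVertexArrayObject>

void GLWindow::initializeGL()
{
    initializeOpenGLFunctions();

    // Core Profile: a VAO must be bound before any vertex attribute calls.
    vao.create();
    vao.bind();

    GLfloat verts[] =
    {
         0.f,  1.f,
        -1.f, -1.f,
         1.f, -1.f
    };
    GLuint myBufferID;
    glGenBuffers(1, &myBufferID);
    glBindBuffer(GL_ARRAY_BUFFER, myBufferID);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
    // The VAO stays bound, so paintGL()'s glDrawArrays() call uses it.
}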

GLEW crashing in XCode

I'm trying to run a simple OpenGL program using GLFW (version 3.0.2) and GLEW (version 1.10.0) in XCode (version 4.6.3) on OS X 10.8.4. The entire code is shown below.
#include <GLFW/glfw3.h>
#include <OpenGL/OpenGL.h>
#include <iostream>
using namespace std;
void RenderScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}

void InitGL()
{
    glClearColor(1, 0, 0, 1);
}

void ErrorFunc(int code, const char *msg)
{
    cerr << "Error " << code << ": " << msg << endl;
}

int main(void)
{
    GLFWwindow* window;
    /* Report errors */
    glfwSetErrorCallback(ErrorFunc);
    /* Initialize the library */
    if (!glfwInit())
        return -1;
    /* Window hints */
    glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    /* Create a windowed mode window and its OpenGL context */
    window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }
    /* Make the window's context current */
    glfwMakeContextCurrent(window);
    /* Initialize OpenGL */
    InitGL();
    /* Loop until the user closes the window */
    while (!glfwWindowShouldClose(window))
    {
        /* Render here */
        RenderScene();
        /* Swap front and back buffers */
        glfwSwapBuffers(window);
        /* Poll for and process events */
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
Most of this came straight from GLFW's documentation; only the rendering function and GLEW initialization are mine. I have added frameworks for OpenGL, Cocoa and IOKit and linked against libGLEW.a and libglfw3.a. The program compiles successfully but appears to crash when attempting to execute functions GLEW was supposed to take care of. Here, the program crashes on glClearBufferfv. If I comment that out, I get a window with a black background. My guess is GLEW is secretly not working, since it reports no errors but doesn't seem to be doing its job at all.
The exact error message XCode throws at me is error: address doesn't contain a section that points to a section in a object file with an error code of EXC_BAD_ACCESS. If I replace glClearBufferfv with glClearColor the program doesn't crash, but still has a black background when it should actually be red. When queried, OpenGL returns the version string 2.1 NVIDIA-8.12.47 310.40.00.05f01, which explains why calls to newer functions aren't working, but shouldn't GLEW have set up the correct OpenGL context? Moreover, GLFW's documentation says that they've been creating OpenGL 3+ contexts since GLFW 2.7.2. I really don't know what to do.
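(For reference, the version query mentioned above is just a call like the following; this line is not in the posted code and is shown only for clarity: printf("GL version: %s\n", glGetString(GL_VERSION));)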
glClearBuffer (...) is an OpenGL 3.0 function, it is not implemented in all versions of OS X (some only implement OpenGL 2.1). Because OS X does not use runtime extensions, GLEW is not going to fix this problem for you.
You will have to resort to the traditional method for clearing buffers in older versions of OS X (10.6 or older). This means setting the "clear color" and then clearing the color buffer as a two-step process. Instead of a single function call that can clear a specific buffer to a specific value, use this:
#define USE_GL3 // This code requires OpenGL 3.0, comment out if unavailable

void RenderScene()
{
    GLfloat color[] = {1.0f, 0.0f, 0.0f, 1.0f};
#ifdef USE_GL3 // Any system that implements OpenGL 3.0+
    glClearBufferfv (GL_COLOR, 0, color);
#else          // Any other system
    glClearColor (color[0], color[1], color[2], color[3]);
    glClear      (GL_COLOR_BUFFER_BIT);
#endif
}
This is not ideal, however. There is no point in setting the clear color multiple times. You should set the clear color one time when you initialize the application and replace the ! USE_GL3 branch of the code with glClear (GL_COLOR_BUFFER_BIT);
Now, because you mentioned you are using Mac OS X 10.8, you can ignore a lot of what I wrote above. OS X 10.8 actually implements OpenGL 3.2 if you do things correctly.
You need two things for glClearBuffer (...) to work on OS X:
Mac OS X 10.7+ (which you have)
Tell glfw to create an OpenGL 3.2 core context
Before you create your window in glfw, add the following code:
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
Once you have an OpenGL 3.2 core context, you can also eliminate the whole ! USE_GL3 pre-processor branch from your code. This was a provision to allow your code to work on OS X implementations that do not support OpenGL 3.2.
GLEW doesn't really work on mac unless you enable the experimental option. Enable it after setting all your stuff in GLFW.
glewExperimental = GL_TRUE;
Edit:
And you also need to set it to use the OpenGL Core Profile with
glfwOpenWindowHint( GLFW_OPENGL_VERSION_MAJOR, 3 );
glfwOpenWindowHint( GLFW_OPENGL_VERSION_MINOR, 2 );
Slightly different from yours.
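Putting those two points together, a rough sketch of the initialization order would be something like this (assuming GL/glew.h is included before GLFW/glfw3.h and GLEW is actually linked, which the code in the question doesn't show):
/* ... window hints and glfwCreateWindow() as above ... */
glfwMakeContextCurrent(window);

/* GLEW must be initialized after a context is current, with the experimental flag on */
glewExperimental = GL_TRUE;
GLenum err = glewInit();
if (err != GLEW_OK)
{
    cerr << "glewInit failed: " << glewGetErrorString(err) << endl;
    glfwTerminate();
    return -1;
}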

What periodic event and source (X server generated?) is this GTK app catching every second or so?

The following consists of just a button that when clicked can produce output on the console. (The output is just the value of a local loop counter and a global variable.)
EDIT: The point of the code is to investigate how gtk_main_iteration() works, so I do not want to wrap that call in a gtk_events_pending() loop. The code is purely pedagogical in nature.
The strange part of the code is that inside the "clicked" event handler there's a loop that calls gtk_main_iteration(). gtk_main_iteration() is supposed to block if there are no events pending. Yet playing around with this little app shows that the GTK main loop is catching some event every second or so even if nothing is happening. To see this, try just clicking and releasing the button and then letting go of the mouse (without moving the cursor at all).
Presumably this event is being generated by the X server (or the GTK main loop) as some sort of timing thing. I don't know what this event is called and google searches are failing me.
#include <gtk/gtk.h>
#include <glib.h>
#include <gmp.h>
#include <unistd.h>

#define UNUSED(x) (void)(x)

typedef struct _Data {
    GtkWidget *window1,
              *button1;
} Data;

int g=0;

void on_button1_clicked(GtkWidget *widget, Data *data) {
    int l=0;
    UNUSED(widget);
    UNUSED(data);
    for(l=0;l<10;++l) {
        gtk_main_iteration();
        printf("l=%d g=%d|",l,g++);
        fflush(stdout);
    }
    printf("\n\n");
}

int main (int argc, char *argv[]) {
    Data *data;
    gtk_init(&argc, &argv);
    data=g_slice_new0(Data);
    /* add widgets and objects to our structure */
    data->window1=gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_default_size(GTK_WINDOW(data->window1),250,250);
    data->button1=gtk_button_new_with_label("Start");
    gtk_container_add(GTK_CONTAINER(data->window1),GTK_WIDGET(data->button1));
    gtk_signal_connect(GTK_OBJECT(data->window1), "delete-event",
                       gtk_main_quit, NULL);
    gtk_signal_connect(GTK_OBJECT(data->button1), "clicked",
                       G_CALLBACK(on_button1_clicked), NULL);
    gtk_widget_show_all(GTK_WIDGET(data->window1));
    gtk_main();
    /* Don't forget to free the memory! */
    g_slice_free(Data, data);
    return 0;
}
I'm compiling with
gcc -Wall -Wextra -Wconversion -pedantic `pkg-config --cflags --libs gtk+-2.0` events.c -o events
This code may be pedagogical, but it's utterly broken and not even remotely idiomatic:
The GtkButton::clicked signal is not emitted by the windowing system: it's completely synthesized by GtkButton itself. It depends on receiving a button-release-event after a button-press-event while the pointer is still within the GtkButton that originated the press; X11 has no concept of "clicked" (with or without these semantics).
This example demonstrates basically nothing about the event stream that you may or may not get under X11. If you want to see the event stream, you can compile GTK+ with debugging messages enabled (./configure --enable-debug=yes) and set the GDK_DEBUG environment variable before running an application. GTK+ will then print the details of each X event it receives, including the event type.
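As a lighter-weight alternative to rebuilding GTK+, here is a rough sketch (not part of the answer above) of how the question's program could watch the raw events a widget receives, by connecting to GtkWidget's generic "event" signal:
/* Prints the type of every GdkEvent delivered to the window. */
static gboolean on_any_event(GtkWidget *widget, GdkEvent *event, gpointer user_data) {
    (void)widget;
    (void)user_data;
    printf("event type: %d\n", (int)event->type);
    return FALSE; /* FALSE lets GTK+ continue normal event processing */
}

/* In main(), after creating window1: */
g_signal_connect(data->window1, "event", G_CALLBACK(on_any_event), NULL);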

CoreAudio: getting device count breaks when linking to Foundation on 10.6

I have a mixed C++/Objective-C project that uses AudioObjectGetPropertyDataSize to get the number of audio devices (such as USB headsets) plugged in. This API doesn't seem to work under certain conditions. For example, on 10.5 it will work but on 10.6 it won't detect when a new USB headset is plugged in.
I've pared down the problem to a small bit of code that reproduces the problem (it calls AudioObjectGetPropertyDataSize in a loop). The code will work on 10.6 (i.e., it will detect when devices are plugged/unplugged) when it's only linked against CoreAudio, but once you link against Foundation it will stop working.
I don't understand how linking to a framework can break code that otherwise works.
Here is the code (coreaudio-test.cpp):
#include <stdio.h>
#include <CoreAudio/AudioHardware.h>

int main(int argc, char **argv) {
    printf("Press <enter> to refresh device list> \n");
    while (1) {
        getchar();
        // get device count
        UInt32 dataSize = 0;
        AudioObjectPropertyAddress propertyAddress;
        propertyAddress.mSelector = kAudioHardwarePropertyDevices;
        propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
        propertyAddress.mElement = kAudioObjectPropertyElementMaster;
        OSStatus result =
            AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
        int count = -1;
        if (result == noErr) {
            count = dataSize / sizeof(AudioDeviceID);
        }
        printf("num devices= %d \n", count);
    }
    return 0;
}
And here is the Makefile:
LFLAGS= -framework CoreAudio

all: coreaudio-test coreaudio-test.broken

# create a test that works
coreaudio-test: coreaudio-test.cpp
	g++ -o $@ $^ $(LFLAGS)

# linking to foundation will break the test
coreaudio-test.broken: coreaudio-test.cpp
	g++ -o $@ $^ $(LFLAGS) -framework Foundation
Any thoughts on this bizarre behavior? (btw, I've also posted this question on the CoreAudio list.)
The CoreAudio List answered my question. We need to tell CoreAudio to allocate its own event-dispatching thread:
CFRunLoopRef theRunLoop = NULL;
AudioObjectPropertyAddress theAddress = { kAudioHardwarePropertyRunLoop, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
AudioObjectSetPropertyData(kAudioObjectSystemObject, &theAddress, 0, NULL, sizeof(CFRunLoopRef), &theRunLoop);
I suspect what's happening is that when the program is linked to Foundation, CoreAudio assumes the main thread is acting as an event dispatcher loop (very common since Objective-C is usually used for GUI programs). When not linking against Foundation, I guess it figures that it needs to allocate its own event thread.
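As a small usage note (my addition, not from the list's reply): AudioObjectSetPropertyData returns an OSStatus, so the call can be checked like this:
CFRunLoopRef theRunLoop = NULL; /* NULL asks CoreAudio to dispatch notifications on its own thread */
AudioObjectPropertyAddress theAddress = { kAudioHardwarePropertyRunLoop,
                                          kAudioObjectPropertyScopeGlobal,
                                          kAudioObjectPropertyElementMaster };
OSStatus err = AudioObjectSetPropertyData(kAudioObjectSystemObject, &theAddress,
                                          0, NULL, sizeof(CFRunLoopRef), &theRunLoop);
if (err != noErr) {
    printf("Failed to set kAudioHardwarePropertyRunLoop (OSStatus %d)\n", (int)err);
}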
Related behavior, which someone reported about nine years ago:
http://lists.apple.com/archives/coreaudio-api/2001/May/msg00021.html
