glMapBufferRange crashing on Android GLES app

I am trying to morph some vertices in a GLES application on Android, and glMapBufferRange keeps crashing with the following error:
SIGSEGV (signal SIGSEGV: address access protected (fault address: 0xef13d664))
I more or less followed the example on this website:
http://www.songho.ca/opengl/gl_vbo.html#update
but I am not sure if I am missing something.
I create my VBOs at initialization time, and I can draw the object with no issues. The creation code is:
void SubObject3D::CreateVBO(VBOInfo &vboInfoIn) {
    // vboIds[0] - used to store vertex attribute data
    // vboIds[1] - used to store element indices
    glGenBuffers(2, vboInfoIn.vboIds);

    // Keep the vertex buffer dynamic for morphing
    glBindBuffer(GL_ARRAY_BUFFER, vboInfoIn.vboIds[0]);
    glBufferData(GL_ARRAY_BUFFER,
                 (GLsizeiptr) (vboInfoIn.vertexStride * vboInfoIn.verticesCount),
                 vboInfoIn.pVertices, GL_DYNAMIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboInfoIn.vboIds[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 (GLsizeiptr) (sizeof(GLushort) * vboInfoIn.indicesCount),
                 vboInfoIn.pIndices, GL_STATIC_DRAW);
}
struct VBOInfo {
    VBOInfo() {
        memset(this, 0x00, sizeof(VBOInfo));
        vboIds[0] = 0xdeadbeef;
        vboIds[1] = 0xdeadbeef;
    }
    // VertexBufferObject ids
    GLuint vboIds[2];
    // Points to the source data
    GLfloat *pVertices;   // pointer to original vertex data
    GLuint verticesCount;
    GLushort *pIndices;   // pointer to original index data
    GLuint indicesCount;
    GLint vertexStride;
};
Then, later in the rendering loop, I try to get hold of my vertex pointer like this:
// I stored the information at creation time here:
VBOInfo mVBOGeometryInfo;

// Later I call this to get the pointer:
GLfloat *SubObject3D::MapVBO() {
    GLfloat *pVertices = nullptr;
    glBindBuffer(GL_ARRAY_BUFFER, mVBOGeometryInfo.vboIds[0]);
    GLsizeiptr length = (GLsizeiptr) (mVBOGeometryInfo.vertexStride *
                                      mVBOGeometryInfo.verticesCount);
    pVertices = (GLfloat *) glMapBufferRange(
            GL_ARRAY_BUFFER, 0,
            length,
            GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT
    );
    if (pVertices == nullptr) {
        LOGE("Could not map VBO");
    }
    return pVertices;
}
but it crashed right at glMapBufferRange.
This is an Android application that uses the NDK. The hardware is a Samsung Galaxy S6 phone.
Thanks!
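For reference, a successful map has to be paired with glUnmapBuffer before the VBO is drawn from again. A minimal sketch of the intended write-then-unmap cycle around MapVBO above (the morph step itself is a placeholder):

GLfloat *pVertices = MapVBO();
if (pVertices != nullptr) {
    // ... write morphed vertex positions into pVertices here ...
    glUnmapBuffer(GL_ARRAY_BUFFER);  // unmap before drawing from the VBO again
}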

This issue was quite painful to resolve, but there is no problem with the code above per se. It came down to the include. My code was based on the Google sample "more teapots" located here:
https://github.com/googlesamples/android-ndk/tree/master/teapots
I had to follow their pattern and change my GLES include from:
#include <GLES3/gl3.h>
to use their stubs:
#include "gl3stub.h"
Why? I don't know for certain, but the direct include was likely causing the linker to resolve the call to incorrect code.
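For context: the sample's stub header declares the GLES 3 entry points as function pointers and resolves them at runtime through eglGetProcAddress, so the app never links against libGLESv3 directly. A rough sketch of the pattern (simplified; the real gl3stub.h/gl3stub.c in the sample cover every GLES 3 function):

#include <EGL/egl.h>
#include <GLES2/gl2.h>

/* GLES 3 entry point as a runtime-resolved function pointer (simplified). */
void *(*glMapBufferRangePtr)(GLenum target, GLintptr offset,
                             GLsizeiptr length, GLbitfield access);

GLboolean gl3stubInit(void) {
    /* Resolve after the EGL context exists; NULL means GLES 3 is unavailable. */
    glMapBufferRangePtr = (void *(*)(GLenum, GLintptr, GLsizeiptr, GLbitfield))
            eglGetProcAddress("glMapBufferRange");
    return glMapBufferRangePtr != NULL;
}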

Related

How do I get startup code that I have edited to compile?

I am trying to edit the startup code for the MCUXpresso LPC51U68 board so that I can flash an enhanced image to the board. See the instructions in section 3.5.6 (p. 17) of the user manual (NXP_LPC51U68_UM). I changed the bolded lines from the user manual in the startup code that I downloaded from the SDK for the LPC51U68 board (on NXP's website). I also wrote the IMAGEHEADER_T function from the user manual into the startup code.
My edits to the startup code ended up looking like this:
__attribute__ ((used, section(".isr_vector")))
void (* const g_pfnVectors[])(void) = {
    // Core Level - CM0P
    &_vStackTop,                   // The initial stack pointer
    ResetISR,                      // The reset handler
    NMI_Handler,                   // The NMI handler
    HardFault_Handler,             // The hard fault handler
    0,                             // Reserved
    0,                             // Reserved
    0,                             // Reserved
    __valid_user_code_checksum,    // LPC MCU checksum
    0,                             // ECRP
    (void*) 0xEDDC9494,            // Enhanced image marker (this was added)
    imageHeader,                   // Pointer to enhanced image header (this was added)
    SVC_Handler,                   // SVCall handler
    0,                             // Reserved
    0,                             // Reserved
    PendSV_Handler,                // The PendSV handler
    SysTick_Handler,               // The SysTick handler
and this is the definition for the image header (it should be exactly as it appears in the user manual):
/* Image Header */
const IMAGEHEADER_T imageHeader = {
    IMAGE_ENH_BLOCK_MARKER, // Required marker for image header
    IMG_NO_CRC,             // No CRC, makes development easier
    0x00000000,             // crc32_len
    0x00000000,             // crc32_val
    0x00000000              // version
};
HOWEVER, after making these edits I found that the startup code won't compile. When I checked the memory at offset 0x24 to see if it received the enhanced image flag 0xEDDC9494, it wasn't there. I tried typing some garbage into the startup code and then building to see if I got an error, and there was none. How do I get my startup code to compile?
If I create a default LPC51U68 SDK project with MCUXpresso, there are no defines for the mentioned structs and values, so you need to define them yourself.
//*****************************************************************************
//*****************************************************************************
#define IMAGE_SINGLE_ENH_SIG   0xEDDC9494
#define IMAGE_ENH_BLOCK_MARKER 0xFEEDA5A5
#define IMG_NORMAL 0
#define IMG_NO_CRC 1

typedef struct IMAGEHEADER_STRUCT
{
    unsigned int header_marker;
    unsigned int img_type;
    unsigned int crc32_len;
    unsigned int crc32_val;
    unsigned int version;
} IMAGEHEADER_T;

/* Image header */
const IMAGEHEADER_T imageHeader = {
    IMAGE_ENH_BLOCK_MARKER, /* Required marker for image header */
    IMG_NO_CRC,             /* No CRC, makes development easier */
    0x00000000,             /* crc32_len */
    0x00000000,             /* crc32_val */
    0x00000000              /* version */
};
//*****************************************************************************
// The vector table.
// This relies on the linker script to place it at the correct location in memory.
//*****************************************************************************
extern void (* const g_pfnVectors[])(void);
extern void * __Vectors __attribute__ ((alias ("g_pfnVectors")));

__attribute__ ((used, section(".isr_vector")))
void (* const g_pfnVectors[])(void) = {
    // Core Level - CM0P
    &_vStackTop,                   // The initial stack pointer
    ResetISR,                      // The reset handler
    NMI_Handler,                   // The NMI handler
    HardFault_Handler,             // The hard fault handler
    0,                             // Reserved
    0,                             // Reserved
    0,                             // Reserved
    __valid_user_code_checksum,    // LPC MCU checksum
    0,                             // ECRP
    (void*) IMAGE_SINGLE_ENH_SIG,  // Enhanced image marker, offset 0x24
    (void*) &imageHeader,          // Pointer to enhanced image header, offset 0x28
    SVC_Handler,                   // SVCall handler
    0,                             // Reserved
    0,                             // Reserved
    PendSV_Handler,                // The PendSV handler
    SysTick_Handler,               // The SysTick handler
    // Chip Level - LPC51U68
    ...
}; /* End of g_pfnVectors */
This should compile. However, I don't know what the enhanced image is for.
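If you want to sanity-check that the marker really landed at offset 0x24 after flashing, one option is to read it back at runtime; a rough sketch, assuming the vector table sits at address 0x0:

/* Sketch: the marker sits at word index 9 (offset 0x24 = 9 * 4 bytes)
   from the start of the vector table. */
volatile const unsigned int *vectors = (volatile const unsigned int *)0x00000000U;
if (vectors[9] == IMAGE_SINGLE_ENH_SIG) {
    /* marker is present; vectors[10] (offset 0x28) points to imageHeader */
}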

GLEW crashing in Xcode

I'm trying to run a simple OpenGL program using GLFW (version 3.0.2) and GLEW (version 1.10.0) in Xcode (version 4.6.3) on OS X 10.8.4. The entire code is shown below.
#include <GLFW/glfw3.h>
#include <OpenGL/OpenGL.h>
#include <iostream>

using namespace std;

void RenderScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}

void InitGL()
{
    glClearColor(1, 0, 0, 1);
}

void ErrorFunc(int code, const char *msg)
{
    cerr << "Error " << code << ": " << msg << endl;
}

int main(void)
{
    GLFWwindow* window;

    /* Report errors */
    glfwSetErrorCallback(ErrorFunc);

    /* Initialize the library */
    if (!glfwInit())
        return -1;

    /* Window hints */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    /* Create a windowed mode window and its OpenGL context */
    window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }

    /* Make the window's context current */
    glfwMakeContextCurrent(window);

    /* Initialize OpenGL */
    InitGL();

    /* Loop until the user closes the window */
    while (!glfwWindowShouldClose(window))
    {
        /* Render here */
        RenderScene();

        /* Swap front and back buffers */
        glfwSwapBuffers(window);

        /* Poll for and process events */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
Most of this came straight from GLFW's documentation; only the rendering function and the GLEW initialization are mine. I have added frameworks for OpenGL, Cocoa, and IOKit, and linked against libGLEW.a and libglfw3.a. The program compiles successfully but appears to crash when attempting to execute functions GLEW was supposed to take care of. Here, the program crashes on glClearBufferfv. If I comment that out, I get a window with a black background. My guess is that GLEW is secretly not working, since it reports no errors but doesn't seem to be doing its job at all.
The exact error message Xcode throws at me is error: address doesn't contain a section that points to a section in a object file with an error code of EXC_BAD_ACCESS. If I replace glClearBufferfv with glClearColor, the program doesn't crash, but it still has a black background when it should actually be red. When queried, OpenGL returns the version string 2.1 NVIDIA-8.12.47 310.40.00.05f01, which explains why calls to newer functions aren't working, but shouldn't GLEW have set up the correct OpenGL context? Moreover, GLFW's documentation says that they've been creating OpenGL 3+ contexts since GLFW 2.7.2. I really don't know what to do.
glClearBuffer (...) is an OpenGL 3.0 function; it is not implemented in all versions of OS X (some only implement OpenGL 2.1). Because OS X does not use runtime extensions, GLEW is not going to fix this problem for you.
You will have to resort to the traditional method for clearing buffers on older versions of OS X (10.6 or older). This means setting the "clear color" and then clearing the color buffer as a two-step process. Instead of a single function call that can clear a specific buffer to a specific value, use this:
#define USE_GL3 // This code requires OpenGL 3.0, comment out if unavailable

void RenderScene()
{
    GLfloat color[] = {1.0f, 0.0f, 0.0f, 1.0f};
#ifdef USE_GL3 // Any system that implements OpenGL 3.0+
    glClearBufferfv(GL_COLOR, 0, color);
#else          // Any other system
    glClearColor(color[0], color[1], color[2], color[3]);
    glClear(GL_COLOR_BUFFER_BIT);
#endif
}
This is not ideal, however. There is no point in setting the clear color multiple times, so you should set the clear color once when you initialize the application and replace the !USE_GL3 branch of the code with just glClear (GL_COLOR_BUFFER_BIT);, as sketched below.
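In other words, with the InitGL/RenderScene split already in the question, the pre-3.0 path reduces to (a sketch):

void InitGL()        // run once at startup
{
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);   // set the clear color one time
}

void RenderScene()   // run every frame
{
    glClear(GL_COLOR_BUFFER_BIT);            // just clear; no color change needed
}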
Now, because you mentioned you are using Mac OS X 10.8, you can ignore a lot of what I wrote above. OS X 10.8 actually implements OpenGL 3.2 if you do things correctly.
You need two things for glClearBuffer (...) to work on OS X:
Mac OS X 10.7+ (which you have)
telling GLFW to create an OpenGL 3.2 core context
Before you create your window in GLFW, add the following code:
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
Once you have an OpenGL 3.2 core context, you can also eliminate the whole !USE_GL3 pre-processor branch from your code. That branch was a provision to allow your code to work on OS X implementations that do not support OpenGL 3.2.
GLEW doesn't really work on Mac unless you enable the experimental option. Enable it after setting everything up in GLFW, before calling glewInit():
glewExperimental = GL_TRUE;
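For what it's worth, the usual ordering is: make the context current first, set glewExperimental, then call glewInit() and check its return value. A minimal sketch, assuming <GL/glew.h> is included before the GLFW header and <stdio.h> is available (the error text is illustrative):

/* GLEW must be initialized after the context is current. */
glfwMakeContextCurrent(window);
glewExperimental = GL_TRUE;           /* set before glewInit() */
GLenum err = glewInit();
if (err != GLEW_OK) {
    fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
}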
Edit:
And you also need to set it to use an OpenGL core profile with
glfwOpenWindowHint( GLFW_OPENGL_VERSION_MAJOR, 3 );
glfwOpenWindowHint( GLFW_OPENGL_VERSION_MINOR, 2 );
Slightly different from yours.

Xcode executable cannot find glsl files

This is my first time trying to learn OpenGL; I'm following the examples in a book, working on OS X 10.8 with Xcode. The code is the following:
#include "Angel.h"
const int numPoints = 5000;
typedef vec2 point2;
void init(){
point2 points[numPoints];
point2 vertices[3] = {
point2(-1.0, -1.0), point2(0.0, 1.0), point2(1.0, -1.0)
};
points[0] = point2(0.25, 0.5);
for (int k = 1; k < numPoints; k++) {
int j = rand()%3;
points[k] = (points[k-1]+vertices[j])/2.0;
}
GLuint program = InitShader("vertex.glsl", "fragment.glsl");
glUseProgram(program);
GLuint abuffer;
glGenVertexArraysAPPLE(1, &abuffer);
glBindVertexArrayAPPLE(abuffer);
GLuint buffer;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);
GLuint location = glGetAttribLocation(program, "vPosition");
glEnableVertexAttribArray(location);
glVertexAttribPointer(location, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glClearColor(1.0, 1.0, 1.0, 1.0);
}
void display(){
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, 0, numPoints);
glFlush();
}
int main(int argc, char** argv){
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(640, 480);
glutCreateWindow("Sierpinski Gasket");
init();
glutDisplayFunc(display);
glutMainLoop();
return 0;
}
It compiles, but when I try to execute it the window does not appear. The problem arises when I call the init() function: without it, the window appears with a black background; with it, there's no window. The code can be found here.
UPDATE
Apparently the program is exiting at the line GLuint program = InitShader("vertex.glsl", "fragment.glsl"); because it's not finding the shader files. How can I tell the program where to find them? I have the .glsl files in the same folder as the .h and .cpp files, but when Xcode builds the project the executable does not end up in the same place as the .glsl files. How do I solve this within Xcode?
The GLSL files are loaded at runtime, so it's not Xcode that doesn't find the files, but your program. The most likely cause is that you used relative paths for the files (like in the code snippet you provided), but started your program with a working directory that doesn't match up with the hardcoded file locations. Usually your program binary is built into a dedicated build directory.
A quick fix is copying the GLSL files into the same directory as the binary. The proper solution would be to place the files in a well-known location. On Mac OS X you can use application bundles for this. See the Mac OS X developer docs for how to place application resources into the application bundle and how to access them. Xcode also provides tools to automatically copy files into the generated bundle.
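If you go the bundle route, a rough sketch of resolving a resource path with CoreFoundation looks like this (the helper name BundleResourcePath is made up for illustration):

#include <CoreFoundation/CoreFoundation.h>
#include <limits.h>
#include <string>

// Sketch: look up a file that Xcode copied into the app bundle's Resources.
std::string BundleResourcePath(const char *name, const char *ext) {
    std::string result;
    CFStringRef cfName = CFStringCreateWithCString(NULL, name, kCFStringEncodingUTF8);
    CFStringRef cfExt  = CFStringCreateWithCString(NULL, ext,  kCFStringEncodingUTF8);
    CFURLRef url = CFBundleCopyResourceURL(CFBundleGetMainBundle(), cfName, cfExt, NULL);
    if (url) {
        char path[PATH_MAX] = {0};
        if (CFURLGetFileSystemRepresentation(url, true, (UInt8 *)path, sizeof(path)))
            result = path;
        CFRelease(url);
    }
    CFRelease(cfName);
    CFRelease(cfExt);
    return result;
}

// Usage: GLuint program = InitShader(BundleResourcePath("vertex", "glsl").c_str(),
//                                    BundleResourcePath("fragment", "glsl").c_str());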
Follow the steps below:
Select the project in the left panel.
Select the target and then select Build Phases.
There you should find a button called Add Build Phase.
A box will appear where you have to select the files (there's a little + sign). Be sure you select Destination: Products Directory.
Build the project and run it; now it should work!
If Xcode isn't importing the files, check whether it's adding them to the resource folder: go to your project name in the file chooser, then Build Phases, then Copy Bundle Resources, and make sure your two files are in there.

My OpenGL shaders will not link outside of Eclipse?

So I am trying to build a simple spirograph generator for school, and everything went fine in Eclipse CDT on Windows 7. My program assigns a default shader to each spirograph generated (5 max). There are also 3 other shader programs the user can assign by choice to any spirograph. Inside Eclipse it works exactly as it should, but when run outside Eclipse the shaders fail to link. The program uses GLUT and GLEW, and I have included the necessary .dlls in the executable's directory. I've been trying to fix this for a good 4 hours and have no idea what would cause a failure to link outside of Eclipse that wouldn't fail all the time.
I'm not going to include all of the shaders, but here are the first two that fail to link and cause the application to terminate:
#version 330

layout (location = 0) in vec4 vPosition;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

out vec4 color;

void main()
{
    gl_Position = proj * view * model * vPosition;
    color = vec4(
        (4 - vPosition.z) * (4 - vPosition.z) / 16.0,
        (2.0 - abs(2.0 - vPosition.z)) / 2.0,
        vPosition.z * vPosition.z / 16.0,
        1.0
    );
}
and the fragment shader:
#version 330

in vec4 color;

void main()
{
    gl_FragColor = color;
}
and the print log:
Vertex shader was successfully compiled to run on hardware.
Fragment shader was successfully compiled to run on hardware.
Fragment shader(s) failed to link, vertex shader(s) failed to link.
ERROR: error(#280) Not all shaders have valid object code
ERROR: error(#280) Not all shaders have valid object code
The InitShader() function that I use to compile and link the shaders has worked for applications I have done in the past. The only thing I am doing differently is using it to produce a few different shader programs and assigning them to program[] rather than just compiling one and running it for the whole application.
program[0] = InitShader("shaders/vshader.glsl", "shaders/fshader.glsl");
program[1] = InitShader("shaders/vshader2.glsl", "shaders/fshader.glsl");
program[2] = InitShader("shaders/vshader3.glsl", "shaders/fshader.glsl");
program[3] = InitShader("shaders/vshaderw.glsl", "shaders/fshader.glsl");
But either way, here is the code for InitShader().
GLuint InitShader(const char* source, GLenum type)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, (const GLchar**) &source, NULL);
    glCompileShader(shader);
    printLog(shader);
    return shader;
}

GLuint InitShader(const char* vfile, const char *ffile) {
    GLuint program = glCreateProgram();
    GLuint shader;

    // stringify and attach vshader
    std::ifstream vertstream(vfile);
    std::string vert((std::istreambuf_iterator<char>(vertstream)), std::istreambuf_iterator<char>());
    shader = InitShader(vert.c_str(), GL_VERTEX_SHADER);
    glAttachShader(program, shader);

    // stringify and attach fshader
    std::ifstream fragstream(ffile);
    std::string frag((std::istreambuf_iterator<char>(fragstream)), std::istreambuf_iterator<char>());
    shader = InitShader(frag.c_str(), GL_FRAGMENT_SHADER);
    glAttachShader(program, shader);

    // link program
    glLinkProgram(program);
    printLog(program);

    // link and error check
    GLint linked;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (!linked) {
        fprintf(stderr, "Shaders failed to link!\n");
        exit(EXIT_FAILURE);
    }

    // use program object
    glUseProgram(program);
    return program;
}
It's 4 am here, so my grey cells are about spent, haha. And FYI, it's not really homework help: the executable is not required to run outside of Eclipse for the class. I just want to know how to create standalone programs for myself.
The cause of your problem lies here:
program[0] = InitShader("shaders/vshader.glsl", "shaders/fshader.glsl");
The paths to the shader source files are relative. Chances are that Eclipse runs your program from a different working directory (probably your project root) than the working directory used when the program is executed directly.
Solution: either
make sure the working directory on program startup matches the relative paths used internally (very unreliable),
use absolute paths within the program (very inflexible),
or, what I suggest,
determine the location of the shader files at runtime (command line option, location of the executable binary, etc.) and adjust the paths accordingly at runtime; see the sketch below.
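A sketch of the executable-location approach on Windows (where the program runs), using GetModuleFileName; the helper name ExeDir is made up for illustration:

#include <windows.h>
#include <string>

// Sketch: directory containing the running executable, so relative resources
// like "shaders/vshader.glsl" can be resolved against it.
std::string ExeDir() {
    char buf[MAX_PATH] = {0};
    GetModuleFileNameA(NULL, buf, MAX_PATH);
    std::string path(buf);
    std::string::size_type slash = path.find_last_of("\\/");
    return (slash == std::string::npos) ? std::string(".") : path.substr(0, slash);
}

// Usage: program[0] = InitShader((ExeDir() + "/shaders/vshader.glsl").c_str(),
//                                (ExeDir() + "/shaders/fshader.glsl").c_str());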

Multiple Windows OpenGL/Glut

I would like to know how to open multiple OpenGL/GLUT windows. And I mean multiple windows at the same time:
not subwindows, and
not updating the same window.
While I believe the other answer here is accurate, it is a little more complex than needed, and it might be difficult when later having to deal with moving between the windows (say, for example, when drawing into them). This is what we've just done in class:
GLint WindowID1, WindowID2;                   // window ID numbers

glutInitWindowSize(250, 250);                 // set a window size
glutInitWindowPosition(50, 50);               // set a window position
WindowID1 = glutCreateWindow("Window One");   // create window 1

glutInitWindowSize(500, 250);                 // set a window size
glutInitWindowPosition(500, 50);              // set a window position
WindowID2 = glutCreateWindow("Window Two");   // create window 2
You will notice I'm using the same create-window function but loading the result into a GLint. That is because when we create a window this way, the function actually returns a unique GLint that GLUT uses to identify windows.
We have to get and set windows to move between them and perform the appropriate drawing functions, as in the sketch below. You can find the calls here.
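For example, to direct calls at a particular window (a small sketch using the IDs from above):

// Sketch: direct subsequent GLUT/GL calls at a particular window by ID.
int previous = glutGetWindow();   // remember which window is current
glutSetWindow(WindowID2);         // make window 2 the current window
glutPostRedisplay();              // e.g. ask window 2 to redraw
glutSetWindow(previous);          // switch back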
The same way as you would create one window, except you do it multiple times:
#include <cstdlib>
#include <GL/glut.h>

// Display callback ------------------------------------------------------------
float clr = 0.2;

void display()
{
    // clear the draw buffer
    glClear(GL_COLOR_BUFFER_BIT);   // erase everything

    // set the color to use when drawing
    clr += 0.1;
    if (clr > 1.0)
    {
        clr = 0;
    }

    // create a polygon (define the vertices)
    glBegin(GL_POLYGON); {
        glColor3f(clr, clr, clr);
        glVertex2f(-0.5, -0.5);
        glVertex2f(-0.5,  0.5);
        glVertex2f( 0.5,  0.5);
        glVertex2f( 0.5, -0.5);
    } glEnd();

    glFlush();
}

// Main execution function
int main(int argc, char *argv[])
{
    glutInit(&argc, argv);       // initialize GLUT

    glutCreateWindow("win1");    // create window 1
    glutDisplayFunc(display);    // register display callback

    glutCreateWindow("win2");    // create window 2
    glutDisplayFunc(display);    // register display callback

    glutMainLoop();              // enter main event loop
}
This example shows how to set the same callback to render in both windows, but you can use different functions for different windows.
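For instance, a sketch with a separate callback per window (glutDisplayFunc always applies to the window that is current, i.e. the one just created):

#include <GL/glut.h>

// Sketch: one display callback per window.
void displayWin1()
{
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);  // window 1 clears to red
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

void displayWin2()
{
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f);  // window 2 clears to blue
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutCreateWindow("win1");
    glutDisplayFunc(displayWin1);   // callback for window 1
    glutCreateWindow("win2");
    glutDisplayFunc(displayWin2);   // callback for window 2
    glutMainLoop();
}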
