Link a program object again - opengl-es

I have a program object which renders successfully.
But sometimes at runtime, when I modify and recompile its vertex & fragment shader sources and re-link it with glLinkProgram(), the program no longer renders.
Note that the shaders and program were re-compiled/re-linked successfully.
I checked their status with
glGetShaderiv(fsId, GL_COMPILE_STATUS, &compileStatus);
and glGetProgramiv(progId, GL_LINK_STATUS, &linkStatus);
and the result is compileStatus = linkStatus = 1.
I'm wondering whether we can re-link a program object in OpenGL ES 2.0 or not.
My GPU info:
GL_RENDERER: PowerVR SGX 530
GL_VENDOR: Imagination Technologies
GL_VERSION: OpenGL ES 2.0

Can you? By the OpenGL ES specification, yes. Should you? No.
The general rule when doing anything in OpenGL, even ES versions, is this: don't do anything unless you know it's commonly done. The farther off the beaten path you go, the more likely you are to encounter driver bugs.
In general, the usage pattern for programs is to link them, then use them a bunch, then delete them when you're closing the application. You should stick to that. If you need a new program, you create a new program.
Re-linking is going to trash all your uniform state anyway. So it's not like you're preserving something by re-linking inside an old program instead of creating a new one. Indeed, it's better this way; if the new link fails, you still have the old program. Whereas if you re-link on a program and it fails, the old data is destroyed.
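The "create a new program, keep the old one on failure" pattern recommended above might look like the following sketch. It assumes a current OpenGL ES 2.0 context; compile_shader is a hypothetical helper, not a GL function.

```cpp
#include <GLES2/gl2.h>

// Hypothetical helper (not shown): compiles a source string,
// checks GL_COMPILE_STATUS, and returns the shader id.
GLuint compile_shader(GLenum type, const char* src);

// Replace a program instead of re-linking it in place.
GLuint rebuild_program(GLuint old_prog, const char* vs_src, const char* fs_src)
{
    GLuint vs = compile_shader(GL_VERTEX_SHADER, vs_src);
    GLuint fs = compile_shader(GL_FRAGMENT_SHADER, fs_src);

    GLuint new_prog = glCreateProgram();
    glAttachShader(new_prog, vs);
    glAttachShader(new_prog, fs);
    glLinkProgram(new_prog);

    GLint linked = GL_FALSE;
    glGetProgramiv(new_prog, GL_LINK_STATUS, &linked);

    glDeleteShader(vs);  // flagged for deletion; freed once detached
    glDeleteShader(fs);

    if (linked != GL_TRUE) {
        glDeleteProgram(new_prog);
        return old_prog;   // keep rendering with the old, working program
    }
    glDeleteProgram(old_prog);
    return new_prog;       // caller must re-query locations and re-set uniforms
}
```

Note that after the swap, every uniform and attribute location must be re-queried and re-set, exactly as the answer warns.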

Related

How to track/find out which userdata are GC-ed at certain time?

I've written an app in LuaJIT, using a third-party GUI framework (FFI-based) + some additional custom FFI calls. The app suddenly loses part of its functionality at some point soon after being run, and I'm quite confident it's because of some unpinned objects being GC-ed. I assume they're only referenced from the C world1, so the Lua GC thinks they're unreferenced and can free them. The problem is, I don't know which of the numerous userdata are unreferenced (unpinned) on the Lua side.
To confirm my theory, I've run the app with GC disabled, via:
collectgarbage 'stop'
and lo, with this line, the app works perfectly well long past the point where it got broken before. Obviously, it's an ugly workaround, and I'd much prefer to have the GC enabled, and the app still working correctly...
I want to find out which unpinned object (userdata, I assume) gets GCed, so I can pin it properly on Lua side, to prevent it being GCed prematurely. Thus, my question is:
(How) can I track which userdata objects got collected when my app loses functionality?
One problem is that, AFAIK, the LuaJIT FFI already assigns custom __gc handlers, so I cannot add my own, as there can be only one per object. And anyway, the framework is too big for me to try adding __gc in each and every imaginable place in it. Also, I've already eliminated the "most obviously suspected" places in the code by removing local from some variables, thus making them part of _G, so I assume not GC-able. (Or is that not enough?)
1 Specifically, WinAPI.
For now, I've added some ffi.gc() handlers to some of my objects (printing some easily visible ALL-CAPS messages), then added some eager collectgarbage() calls to try triggering the issue as soon as possible:
ffi.gc(foo, function()
print '\n\nGC FOO !!!\n\n'
end)
[...]
collectgarbage()
And indeed, this exposed some GCing I didn't expect. Specifically, it led me to discover a note in LuaJIT's FFI docs, which is most certainly relevant in my case:
Please note that [C] pointers [...] are not followed by the garbage collector. So e.g. if you assign a cdata array to a pointer, you must keep the cdata object holding the array alive [in Lua] as long as the pointer is still in use.
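That note is easy to trip over. A minimal sketch of the pattern it prescribes (the names here are hypothetical, not from the framework):

```lua
local ffi = require("ffi")

local anchors = {}  -- pins cdata objects so the GC cannot reclaim them

local function make_buffer(n)
  local buf = ffi.new("uint8_t[?]", n)
  anchors[#anchors + 1] = buf  -- keep a Lua reference alive for as long
                               -- as any C-side pointer to buf may be used
  return buf
end

-- Without the anchors table, passing make_buffer(256) straight to a C
-- function and dropping the return value would let the GC free the
-- array while C still holds the raw pointer.
```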

More specific OpenGL error information

Is there a way to retrieve more detailed error information when OpenGL has flagged an error? I know there isn't in core OpenGL, but is there perhaps some common extension or platform- or driver-dependent way or anything at all?
My basic problem is that I have a game (written in Java with JOGL), and when people have trouble with it, which they do on certain hardware/software configurations, it can be quite hard to trace down the root of the problem. For performance reasons, I can't call glGetError after each command, only at a few points in the program, so it's hard to even find which command flagged the error to begin with. Even if I could, the extremely general error codes that OpenGL has don't really tell me much about what happened (the manpages even describe how the various error codes are reused for sometimes quite many different actual error conditions).
It would be tremendously helpful if there were a way to find out what OpenGL command actually flagged the error, and also more details about the error that was flagged (like, if I get GL_INVALID_VALUE, what value to what argument was invalid and why?).
It seems a bit strange that drivers wouldn't provide this information, even if in a completely custom way, but look as I have, I sure haven't found any way to get it. If it really is that they don't, is there any good reason why that is so?
Actually, there is a feature in core OpenGL that will give you detailed debug information. But you are going to have to set your minimum version requirement pretty high to have this as a core feature.
Nevertheless, see this article: even though it only went core in OpenGL 4.3, it existed in extension form (ARB_debug_output) for quite some time, and it does not require any special hardware feature. So for the most part, all you really need is a recent driver from NV or AMD.
I have an example of how to use this extension in an answer I wrote a while back, complete with a few utility functions to make the output easier to read. It is written in C, so I do not know how helpful it will be, but you might find something useful.
Here is the sort of output you can expect from this extension (AMD Catalyst):
OpenGL Error:
=============
Object ID: 102
Severity: Medium
Type: Performance
Source: API
Message: glDrawElements uses element index type 'GL_UNSIGNED_BYTE' that is not
optimal for the current hardware configuration; consider using
'GL_UNSIGNED_SHORT' instead.
Not only will it give you error information, but it will even give you things like performance warnings for doing something silly like using 8-bit vertex indices (which desktop GPUs do not like).
To answer another one of your questions: if you set the debug output to synchronous and install a breakpoint in your debug callback, you can easily make any debugger break on an OpenGL error. If you examine the callstack, you should be able to quickly identify exactly which API call generated the error.
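A minimal setup of that synchronous debug callback might look like this sketch. It assumes an OpenGL 4.3 (or KHR_debug) context and a function loader already in place; put a breakpoint inside on_gl_debug to stop at the offending call.

```cpp
#include <cstdio>
// Assumes GL headers / a loader (e.g. glad) providing 4.3 entry points.

// With GL_DEBUG_OUTPUT_SYNCHRONOUS enabled, the callback runs on the
// same thread as the offending GL call, so a breakpoint here exposes
// the full callstack of the call that generated the message.
static void GLAPIENTRY on_gl_debug(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar* message, const void* user)
{
    (void)length; (void)user;
    fprintf(stderr, "GL debug [src=0x%X type=0x%X sev=0x%X id=%u]: %s\n",
            source, type, severity, id, message);
}

void install_debug_callback()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback(on_gl_debug, nullptr);
}
```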
Here are some suggestions.
According to the man pages, glGetError returns the value of the error flag and then resets it to GL_NO_ERROR. I would use this property to track down your bug - if nothing else you can switch up where you call it and do a binary search to find where the error occurs.
I doubt calling glGetError will give you a performance hit. All it does is read back an error flag.
If you don't have the ability to test this on the specific hardware/software configurations those people have, it may be tricky. OpenGL drivers are implemented for specific devices, after all.
glGetError is good for basically saying that the previous line screwed up. That should give you a good starting point - you can look up in the man pages why that function will throw the error, rather than trying to figure it out based on its enum name.
There are other specific status functions to call, such as glGetProgramiv and glCheckFramebufferStatus, that you may want to check, as glGetError doesn't catch every type of error, i.e. just because it reads clean doesn't mean another error didn't happen.
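The binary-search approach described above is easier with a small checking macro. GL_CHECK is our own helper name, not part of OpenGL, and it assumes a current context and GL headers:

```cpp
#include <cstdio>
// Sprinkle GL_CHECK("label") at suspect points and move the calls
// around to binary-search for the command that flags the error.
// glGetError can queue several errors, so drain the flag in a loop.
#define GL_CHECK(label)                                              \
    do {                                                             \
        GLenum err_;                                                 \
        while ((err_ = glGetError()) != GL_NO_ERROR)                 \
            fprintf(stderr, "GL error 0x%04X after %s (%s:%d)\n",    \
                    err_, (label), __FILE__, __LINE__);              \
    } while (0)

// Usage:
//   glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0);
//   GL_CHECK("glDrawElements");
```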

How did Turbo Pascal overlays work?

I'm implementing an assemblinker for the 16-bit DCPU from the game 0x10c.
One technique that somebody suggested to me was using "overlays, like in Turbo Pascal back in the day" in order to swap code around at run time.
I get the basic idea (link overlayed symbols to same memory, swap before ref), but what was their implementation?
Was it a function that the compiler inserted before references? Was it a trap? Was the data for the overlay stored at the location of the overlay, or in a big table somewhere? Did it work well, or did it break often? Was there an interface for assembly to link with overlayed Pascal (and vice versa), or was it incompatible?
Google is giving me basically no information (other than it being a no-go on modern Pascal compilers). And I'm just, like, five years too young to have ever needed them when they were current.
A jump table per unit, whose elements point to a trap (INT 3Fh) when the overlay is not loaded. But that is for older Turbo Pascal/Borland Pascal versions (5/6); newer ones also support (286) protected mode, and they might employ yet another scheme.
This scheme means that when an overlay is loaded, no trap overhead happens anymore.
I found this link in my references: The Slithy Tove. There are other nice details there, like how call chains are handled that span multiple overlays.

wglGetProcAddress for OpenGL 1.1 functions

This wiki page on the OpenGL website claims that OpenGL 1.1 functions should NOT be loaded via wglGetProcAddress, and the wording seems to imply that some systems will by design return NULL if you try:
http://www.opengl.org/wiki/Platform_specifics:_Windows#wglGetProcAddress
(The idea being that only 1.2+ functions deserve loading by way of wglGetProcAddress).
The page does not tell us who reported these failed wglGetProcAddress calls on 1.1 functions, which I've never personally seen. And Google searches turn up next to no information on the issue either.
Would wglGetProcAddress() actually return NULL for 1.1 functions for enough users such that I should actually care? Or does it just fail for a select few unlucky users with really broken GPU drivers (in which case I don't much care).
Has anybody else come across this?
The question you should be asking yourself is whether it matters to you at all and whether you should care.
Loading the OpenGL 1.1 functions manually would mean that you have to use different function names, or they will collide with the declarations in gl/gl.h. Or, you must define GL_NO_PROTOTYPES, but in this case you will also not have OpenGL 1.0 functionality.
So, in any case, doing this would mean extra trouble for no gains, you can simply use 1.1 functionality without doing anything.
Having said that, I've tried this once because I thought it would be an ingenious idea to load everything dynamically (when I sobered up, I wondered what gave me that idea), and I can confirm that it does not (or at least, did not, 2 years ago) work with nVidia drivers.
Though, thinking about it, it's entirely justifiable, and even a good thing, that something that doesn't make sense doesn't work.
I technically answered this on the discussion page of that Wiki article, but:
Would wglGetProcAddress() actually return NULL for 1.1 functions for enough users such that I should actually care?
It will return NULL for all users. I have tried it on NVIDIA and ATI platforms (recent drivers and DX10 hardware), and all of them do it.
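If you do need a single loader path for every function, the usual workaround (a sketch only, assuming Windows and an active WGL context; not taken from the answers above) is to fall back to the export table of opengl32.dll when wglGetProcAddress fails:

```cpp
#include <windows.h>

// wglGetProcAddress only resolves extension / 1.2+ entry points, so
// fall back to GetProcAddress on opengl32.dll for the 1.1 functions.
// Some drivers return small sentinel values instead of NULL.
void* get_gl_proc(const char* name)
{
    PROC p = wglGetProcAddress(name);
    if (p == nullptr || p == (PROC)1 || p == (PROC)2 ||
        p == (PROC)3 || p == (PROC)-1) {
        HMODULE gl = GetModuleHandleA("opengl32.dll");
        p = GetProcAddress(gl, name);
    }
    return reinterpret_cast<void*>(p);
}
```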

Question about g++ generated code

Dear g++ hackers, I have the following question.
When some data of an object is overwritten by a faulty program, why does the program eventually fail on destruction of that object with a double free error? How does it know if the data is corrupted or not? And why does it cause double free?
It's usually not that the object's memory is overwritten, but some part of the memory outside of the object. If this hits malloc's control structures, free will freak out once it accesses them and tries to do weird things based on the corrupted structure.
If you'd really only overwrite object memory with silly stuff, there's no way malloc/free would know. Your program might crash, but for other reasons.
Take a look at valgrind. It's a tool that emulates the CPU and watches every memory access for anomalies (like trying to overwrite malloc's control structures). It's really easy to use; most of the time you just start your program inside valgrind by prepending valgrind to your command on the shell, and it saves you a lot of pain.
Regarding C++: always make sure that you use new in conjunction with delete and, respectively, new[] in conjunction with delete[]. Never mix them up. Bad things will happen, often similar to what you are describing (but valgrind would warn you).
