Why does mingw give an undefined reference to `glUseProgram'? - windows

The program is written in C using SDL2 and OpenGL. As long as I comment out
//glUseProgram(0);
the program compiles, runs, and displays the cleared color. The program also includes the GL version checks:
const char* renderer = (const char*) glGetString(GL_RENDERER);
puts(renderer);
const char* version = (const char*) glGetString(GL_VERSION);
puts(version);
const char* glslVersion = (const char*) glGetString(GL_SHADING_LANGUAGE_VERSION);
puts(glslVersion);
Which print out:
ATI Radeon HD 5670
3.2.11927 Core Profile Context
4.20
And the gl error check:
GLenum error = glGetError();
switch(error){
case GL_NO_ERROR: puts("no error"); break;
case GL_INVALID_ENUM: puts("invalid enum"); break;
case GL_INVALID_VALUE: puts("invalid value"); break;
case GL_OUT_OF_MEMORY: puts("out of memory"); break;
case GL_INVALID_FRAMEBUFFER_OPERATION: puts("invalid framebuffer operation"); break;
default: break;
}
which prints:
no error.
But when glUseProgram(0) is uncommented I get the following error:
D:\TEMP\ccSTF4cr.o:App.c:(.text+0x320): undefined reference to `glUseProgram'
I also get:
App.c:54:2: warning: implicit declaration of function 'glUseProgram'
The included files are:
#include "SDL2/SDL.h"
#include "SDL2/SDL_opengl.h"
The program is executed using a .bat file on Windows XP. The .bat is:
del /F /Q bin\app.exe
set files=main.c App.c EventHub.c Game.c MainShader.c
set libs=-LD:\environments\minGW\mingw32\lib -lmingw32 -lopengl32 -lSDL2main -mwindows -lSDL2
set objs=bin\main.obj
gcc -Wall -o bin/app %files% %libs%
bin\app.exe
If you don't know .bat files: %files% and %libs% in the gcc command are simply substituted with the strings stored in those variables.
The issue would seem to be that the context does not support the later glUseProgram function, except that the context is version 3.2, which does support it. In that case the issue seems to be that the SDL_opengl.h include or the -lopengl32 being linked against is the wrong one. But frankly I don't really understand this linking stuff, which is why I am asking the question.

The OpenGL library that ships with Microsoft Windows only exports functions up to version 1.1. (Because they'd prefer you use Direct3D.)
To use modern OpenGL - heck, anything newer than the mid-1990s - on MS Windows, you have to load the newer entry points at runtime, either yourself or through an extension loader such as GLEW or GLee. See the OpenGL SuperBible, any edition, for details.
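For illustration, here is a minimal sketch of the runtime-loading approach using SDL2's SDL_GL_GetProcAddress, without pulling in a loader library. The pglUseProgram pointer and load_gl_functions helper are made-up names, and the sketch assumes SDL_opengl.h provides the PFNGLUSEPROGRAMPROC typedef via its bundled glext header and that a GL context already exists:
#include "SDL2/SDL.h"
#include "SDL2/SDL_opengl.h"

/* Function pointer for the post-1.1 entry point that opengl32.lib does not export. */
static PFNGLUSEPROGRAMPROC pglUseProgram = NULL;

static int load_gl_functions(void)
{
    /* Must be called after SDL_GL_CreateContext; asks the driver for the entry point. */
    pglUseProgram = (PFNGLUSEPROGRAMPROC) SDL_GL_GetProcAddress("glUseProgram");
    return pglUseProgram != NULL;
}

/* Once load_gl_functions() returns non-zero, call pglUseProgram(0) instead of glUseProgram(0). */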

Related

How does the scanf_s function work on MinGW GCC?

#include <stdio.h>
int main (void) {
char str[100];
scanf_s ("%[^\n]", str);
printf ("%s\n", str);
return 0;
}
This code compiles without any errors or warnings under MinGW GCC 11.3.0, and the exe runs properly without any exception. Unlike the function of the same name in MSVC, scanf_s in MinGW GCC does not seem to require a third argument when used to read a string. How does scanf_s work in MinGW GCC?
Both the ISO C (Annex K) version and the Microsoft version of scanf_s require that every %s, %[ or %c conversion specifier is followed by an additional argument that specifies the size of the corresponding buffer. The same probably also applies to the library that your version of MinGW is using.
By not providing the required number of arguments to scanf_s, your code has undefined behavior. The fact that your program may compile and even run properly does not change the fact that your code has undefined behavior, i.e. that anything may happen, including the possibility that it may work as intended. You cannot rely on this behavior.
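For reference, a sketch of the calling convention that both Annex K and MSVC document, with a buffer-size argument following the pointer for the %[ conversion. The exact integer type of the size argument differs between the two (rsize_t vs. unsigned), so the cast below follows the MSVC convention and is only illustrative:
#include <stdio.h>
int main (void) {
char str[100];
/* %[^\n] is a string-style conversion, so the buffer size must be passed after the pointer */
scanf_s ("%[^\n]", str, (unsigned) sizeof str);
printf ("%s\n", str);
return 0;
}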

"warning: invalid format string conversion" [printf-like-function] only in a CUDA source file

Using the ' flag character in a printf-like function leads to the warning.
class LogController
{
auto __attribute__((format(printf, 2, 3)))
insertLogEntry( const char * formatString, ... ) -> void;
};
...
LogController lc;
lc.insertLogEntry( "Some data %'d", int_value ); // warning in .cu file
// in .cpp file OK
I have the feeling this is an nvcc issue and that the only way to avoid the warning is to move the single lc.insertLogEntry() call into a .cpp file.
Even though it is in a .cu file, it is a host function and NOT a device function. Any idea how to get rid of the warning?
Update:
According to this site, the ' flag is an extension supported on all
POSIX.1-2008-conforming systems.
The question is whether cudafe has to support it, given that the underlying compiler is
gcc 4.9.3.
Update:
As talonmies suggested, I used --dryrun; this is the step that causes the warning:
cudafe --allow_managed --m64 --gnu_version=40903 --c++11 -tused
--no_remove_unneeded_entities --debug_mode --gen_c_file_name "/tmp/tmpxft_000026f9_00000000-4_CudaDevice.cudafe1.c"
--stub_file_name "/tmp/tmpxft_000026f9_00000000-4_CudaDevice.cudafe1.stub.c"
--gen_device_file_name "/tmp/tmpxft_000026f9_00000000-4_CudaDevice.cudafe1.gpu" --nv_arch
"compute_30" --gen_module_id_file --module_id_file_name
"/tmp/tmpxft_000026f9_00000000-3_CudaDevice.module_id"
--include_file_name "tmpxft_000026f9_00000000-2_CudaDevice.fatbin.c" "/tmp/tmpxft_000026f9_00000000-7_CudaDevice.cpp1.ii"
I have to admit that I have no clue what I can do next...
Update:
OS: SLES 11 SP3
NSight 7.5
gcc 4.9.3 with -Wall -Werror -Wextra
Update:
Using const char * for the format string is not an option because I want to keep the format check performed by the compiler.
Mark from NVIDIA confirmed this bug and it will be fixed in CUDA 8.
For the moment I extracted the log messages into a .cpp file to avoid the warnings each time I compile the project.
Any idea how to get rid of the warning?
Try this:
LogController lc;
const char * my_fmt = "Some data %'d";
lc.insertLogEntry( my_fmt, int_value );
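Moving the format string out of the literal argument position presumably stops nvcc's front end from parsing it, which is what triggers the warning; the trade-off is that it also tends to defeat the compile-time format check the question wanted to keep, so treat it as a stopgap until the fix in CUDA 8.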

strdup error on g++ with c++0x

I have some C++0x code; the problem is reproduced below. The code compiles fine without -std=c++0x, however I need that flag for my real code.
How do I get strdup declared under C++0x with gcc 4.5.2?
Note I am using MinGW. I tried including cstdlib, cstring, and string.h, and tried qualifying with std::. No luck.
>g++ -std=c++0x a.cpp
a.cpp: In function 'int main()':
a.cpp:4:11: error: 'strdup' was not declared in this scope
code:
#include <string.h>
int main()
{
strdup("");
return 0;
}
-std=gnu++0x (instead of -std=c++0x) does the trick for me; -D_GNU_SOURCE didn't work (I tried with a cross-compiler, but perhaps it works with other kinds of g++).
It appears that the default (no -std=... passed) is "GNU C++" and not "strict standard C++", so the flag for "don't change anything except for upgrading to C++11" is -std=gnu++0x, not -std=c++0x; the latter means "upgrade to C++11 and be stricter than by default".
strdup may not be included in the library you are linking against (you mentioned mingw). I'm not sure if it's in c++0x or not; I know it's not in earlier versions of C/C++ standards.
It's a very simple function, and you could just include it in your program (though it's not legal to call it simply "strdup" since all names beginning with "str" and a lowercase letter are reserved for implementation extensions.)
char *my_strdup(const char *str) {
size_t len = strlen(str);
char *x = (char *)malloc(len+1); /* 1 for the null terminator */
if(!x) return NULL; /* malloc could not allocate memory */
memcpy(x,str,len+1); /* copy the string into the new buffer */
return x;
}
This page explains that strdup is conforming, among others, to the POSIX and BSD standards, and that GNU extensions implement it. Maybe if you compile your code with "-D_GNU_SOURCE" it works?
EDIT: just to expand a bit, you probably do not need anything else than including cstring on a POSIX system. But you are using GCC on Windows, which is not POSIX, so you need the extra definition to enable strdup.
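If you want to try that route, here is a minimal sketch; it assumes a glibc-style C library that honours _GNU_SOURCE (as noted above, it did not help with every toolchain):
/* Define _GNU_SOURCE before any include so <string.h> exposes strdup even
   under a strict -std= mode; equivalent to passing -D_GNU_SOURCE. */
#define _GNU_SOURCE
#include <stdlib.h>
#include <string.h>
int main()
{
char *copy = strdup("");
if (copy) free(copy); /* strdup allocates with malloc */
return 0;
}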
Add the preprocessor symbol _CRT_NONSTDC_NO_DEPRECATE under Project Properties -> C/C++ Build -> GCC C++ Compiler -> Preprocessor -> Tool Settings.
Don't forget to check Preprocessor Only (-E).
This worked for me on Windows with mingw32.

How do you suppress GCC linker warnings?

I've been on a crusade lately to eliminate warnings from our code and have become more familiar with GCC warning flags (such as -Wall, -Wno-<warning to disable>, -fdiagnostics-show-option, etc.). However I haven't been able to figure out how to disable (or even control) linker warnings. The most common linker warning that I was getting is of the following form:
ld: warning: <some symbol> has different visibility (default) in
<path/to/library.a> and (hidden) in <path/to/my/class.o>
The reason I was getting this was because the library I was using was built using the default visibility while my application is built with hidden visibility. I've fixed this by rebuilding the library with hidden visibility.
My question though is: how would I suppress that warning if I wanted to? It's not something that I need to do now that I've figured out how to fix it but I'm still curious as to how you'd suppress that particular warning — or any linker warnings in general?
Using -fdiagnostics-show-option with any of the C/C++/linker flags doesn't report where that warning comes from, the way it does for other compiler warnings.
Actually, you can't disable a GCC linker warning, as it's stored in a specific section of the binary library you're linking with. (The section is called .gnu.warning.symbol)
You can however mute it, like this (this is extracted from libc-symbols.h):
Without it:
#include <sys/stat.h>
int main()
{
lchmod("/path/to/whatever", 0666);
return 0;
}
Gives:
$ gcc a.c
/tmp/cc0TGjC8.o: in function « main »:
a.c:(.text+0xf): WARNING: lchmod is not implemented and will always fail
With disabling:
#include <sys/stat.h>
/* We want the .gnu.warning.SYMBOL section to be unallocated. */
#define __make_section_unallocated(section_string) \
__asm__ (".section " section_string "\n\t.previous");
/* When a reference to SYMBOL is encountered, the linker will emit a
warning message MSG. */
#define silent_warning(symbol) \
__make_section_unallocated (".gnu.warning." #symbol)
silent_warning(lchmod)
int main()
{
lchmod("/path/to/whatever", 0666);
return 0;
}
gives:
$ gcc a.c
/tmp/cc195eKj.o: in function « main »:
a.c:(.text+0xf): WARNING:
With hiding:
#include <sys/stat.h>
#define __hide_section_warning(section_string) \
__asm__ (".section " section_string "\n.string \"\rHello world! \"\n\t.previous");
/* If you want to hide the linker's output */
#define hide_warning(symbol) \
__hide_section_warning (".gnu.warning." #symbol)
hide_warning(lchmod)
int main()
{
lchmod("/path/to/whatever", 0666);
return 0;
}
gives:
$ gcc a.c
/tmp/cc195eKj.o: in function « main »:
Hello world!
Obviously, in that case, replace Hello world! either with multiple spaces or with some advertisement for your wonderful project.
Unfortunately ld does not appear to have any intrinsic way of suppressing specific warnings. One thing that I found useful was limiting the number of duplicate warnings by passing -Wl,--warn-once to g++ (or you can pass --warn-once directly to ld).

sys_call_table in linux kernel 2.6.18

I am trying to save the sys_exit entry in a variable with
extern void *sys_call_table[];
real_sys_exit = sys_call_table[__NR_exit];
however, when I try to make, the console gives me the error
error: ‘__NR_exit’ undeclared (first use in this function)
Any tips would be appreciated :) Thank you
Since you are on kernel 2.6.x, sys_call_table isn't exported any more.
If you want to avoid the compilation error, try this include:
#include <linux/unistd.h>
However, it will not work on its own. The workaround to "play" with the sys_call_table is to find its address in System.map-XXXX (located in /boot) with this command:
grep sys_call System.map-2.6.X -i
This will give you the address; then code like this should allow you to modify the table:
unsigned long *sys_call_table;
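/* the hard-coded address comes from System.map; substitute the value that grep printed on your machine */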
sys_call_table = (unsigned long *) simple_strtoul("0xc0318500",NULL,16);
original_mkdir = sys_call_table[__NR_mkdir];
sys_call_table[__NR_mkdir] = mkdir_modificado;
Hope it works for you; I have just tested it under kernel 2.6.24, so it should work for 2.6.18 too.
Also check here, it's a very good reference:
http://commons.oreilly.com/wiki/index.php/Network_Security_Tools/Modifying_and_Hacking_Security_Tools/Fun_with_Linux_Kernel_Modules
If you haven't included the file syscall.h, you should do that ahead of the reference to __NR_exit. For example,
#include <syscall.h>
#include <stdio.h>
int main()
{
printf("%d\n", __NR_exit);
return 0;
}
which returns:
$ cc t.c
$ ./a.out
60
Some other observations:
If you've already included the file, the usual reasons __NR_exit wouldn't be defined are that the definition was being ignored due to conditional compilation (#ifdef or #ifndef at work somewhere) or because it's being removed elsewhere through a #undef.
If you're writing the code for kernel space, you have a completely different set of headers to use. LXR (http://lxr.linux.no/linux), a searchable, browsable archive of the kernel source, is a helpful resource.
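For the kernel-space case, a rough sketch of where __NR_exit comes from inside a module; the module name and build glue here are illustrative, not taken from the question:
/* Kernel-space: the __NR_* constants come in through the kernel's own
   unistd header, not the userspace <syscall.h>. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/unistd.h>

static int __init nr_demo_init(void)
{
printk(KERN_INFO "__NR_exit = %d\n", __NR_exit);
return 0;
}

static void __exit nr_demo_exit(void)
{
}

module_init(nr_demo_init);
module_exit(nr_demo_exit);
MODULE_LICENSE("GPL");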
