I am following a sysdev tutorial and did not build the compiler with the sysroot flag: ../binutils-x.y.z/configure --target=$TARGET --prefix="$PREFIX" --with-sysroot (I forgot the last option).
So the problem is: when I include the compiler-provided header <stdint.h> (where exactly in the source is not important for the error), this error arises:
/home/user/opt/cross/lib/gcc/i686-elf/8.3.0/include/stdint.h:9:16: fatal error: stdint.h: No such file or directory
# include_next <stdint.h>
I have looked at the file and found:
...
...
# include_next <stdint.h>
...
1) Why is the preprocessor directive #include_next used there, and how does it differ from a basic #include?
2) How do I actually solve the error? As the title of this question suggests, I have found this question: stdint.h "no such file or directory" error on yocto sdk. Someone there said that the sysroot is missing.
3) But I have no idea what the sysroot is (is it about root permissions?) or how to add it to my compilation (if that is even possible) with some flag. Or do I need to build the cross compiler once again (I would rather avoid that option)? Or does it require a special folder with the header, so it is visible to the compiler? I really do not know.
Edit:
When I tried to edit the file and change the #include_next to a plain #include, even more errors arose, so rather do not do that (I have changed it back). That directive is even more magic to me, as this was the first time I had ever seen it.
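For what it is worth, here is a minimal sketch of how a wrapper header typically uses #include_next (the file name and guard macro are made up; this only illustrates the mechanism, it is not a fix for the error above):

/* fixed-includes/stdint.h: placed in a directory that comes earlier on the
   include path than the directory holding the real <stdint.h>. */
#ifndef FIXED_STDINT_H
#define FIXED_STDINT_H

/* ... extra definitions or fixes would go here ... */

/* include_next resumes the search for <stdint.h> in the directories that
   come after the one this wrapper was found in, so the underlying header
   is still pulled in.  A plain #include <stdint.h> would restart the
   search from the beginning and find this wrapper again instead. */
#include_next <stdint.h>

#endif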
Example code:
#define PROT_NONE 99
#include <sys/mman.h>
Both gcc and clang permit the above code fragment to compile; the PROT_NONE macro is redefined from within sys/mman.h with no warning. Looking at the actual header file, there is no #undef which would permit a redefinition.
This seems like a problem -- although this case is obviously contrived to show the problem, it does seem that identifier collisions between my code and the system header files can be silently ignored. The system header definition of PROT_NONE overrides my definition and doesn't even warn me that there's a potential problem. This seems to be specific to the system header file somehow; if I try to do the redefinition myself, I get the proper error.
My question is basically twofold:
Does anybody know the motivation behind allowing this behavior?
Is there any command line switch that will cause this to fail at the compilation stage?
What's happening/motivation
In both GCC and Clang, warnings are suppressed in system headers.
The Clang user manual simply declares that this is so:
Warnings are suppressed when they occur in system headers.
...but the GNU C preprocessor manual gives the following justification:
The header files declaring interfaces to the operating system and runtime libraries often cannot be written in strictly conforming C. Therefore, GCC gives code found in system headers special treatment.
Mitigation on the command line
Is there any command line switch that will cause this to fail at the compilation stage?
Yes. Make your system-headers non-system-headers.
In Clang, you can do this merely with --no-system-header-prefix=x/y/z, where x/y/z is a prefix matched starting at all system directories. For example, in your case, you can use --no-system-header-prefix=sys; or you can cherry-pick further: --no-system-header-prefix=sys/mm (all files in a system directory included via the sys subdirectory and starting with mm; it's just a prefix pattern, not a directory spec).
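For example (a sketch; the source file name is made up, and the exact diagnostic wording depends on your Clang version):

clang -Wall --no-system-header-prefix=sys -c example.c

Since <sys/mman.h> is no longer treated as a system header, the usual macro-redefinition warning for PROT_NONE should fire.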
In GCC, this is a bit trickier. System headers by default are just headers in system directories, and there's no way to exclude a particular directory as a system directory. You can, however, ditch all system directories with -nostdinc, and add them back in as regular inclusion directories. For example:
gcc -nostdinc -I/usr/include -I/usr/lib/gcc/x86_64-pc-cygwin/5.4.0/include ...
You need -nostdinc; without it, -I paths pointing into your system inclusion paths just wind up being ignored.
GCC suppresses warnings in system headers by default. The reason is that the user usually cannot do anything about warnings generated by those headers because they cannot edit the code to fix those warnings. You can enable those warnings using -Wsystem-headers.
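For example, compiling the original snippet with system-header warnings enabled (a sketch; the file name is made up and the exact wording of the diagnostic varies between GCC versions):

gcc -Wall -Wsystem-headers -c example.c

This should now report the PROT_NONE redefinition coming from <sys/mman.h> instead of silently accepting it.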
For your specific example, where a system header redefines a macro that was not itself defined in a system header, GCC should probably warn even with -Wno-system-headers (it now has the infrastructure to do that). Someone has already filed an RFE:
-Wno-system-headers hides warning caused by user header vs system header conflict
This question isn't specifically about cross-compiling, but it has arisen because I hit a problem with architecture-specific headers while trying to cross-compile a library.
I am trying to cross-compile OpenCV, the target is an ARM processor and I am compiling on an x86_64 processor. The build fails because a header file cannot be located:
/usr/include/zlib.h:34:19: fatal error: zconf.h: No such file or directory
 #include "zconf.h"
Sure enough in zlib.h there is a reference to zconf.h:
#include "zconf.h"
However, when I look under <path_to_arm_filesys>/usr/include, I actually find zconf.h in the <path_to_arm_filesys>/usr/include/arm-linux-gnueabihf directory. So, as I understand it, the C preprocessor won't find zconf.h, since the reference to it does not mention the architecture-specific sub-directory.
To try to understand how zconf.h is actually found, I looked at where zconf.h is located on the host machine. Similarly, it is under /usr/include, but in the architecture-specific x86_64-linux-gnu directory.
So if there is no specific reference to the architecture in any #include in the source code (as is to be expected), how does the (GNU) C preprocessor know where to look? Is it the case that the preprocessor already knows its target architecture and can automatically append an architecture-specific directory to all the include directories it knows about? Or must I specifically point it at these directories with the -I flag?
There are three ways to tell the compiler where to find headers:
Set the --sysroot option (preferred for cross-compilation):
--sysroot=dir: Use dir as the logical root directory for headers and libraries. For example, if the compiler normally searches for headers in /usr/include and libraries in /usr/lib, it instead searches dir/usr/include and dir/usr/lib.
In this case you must have all libraries and headers under the sysroot folder (see the example after the links below).
Directly tell it where to find the headers with -I options.
Set the C_INCLUDE_PATH / CPLUS_INCLUDE_PATH environment variables.
For #include <filename>, the preprocessor searches the system directories and the directories provided to the compiler.
For #include "filename", the preprocessor first searches the folder containing the file that contains the include, and then falls back to the same directories as for #include <filename>.
https://gcc.gnu.org/onlinedocs/gcc/Directory-Options.html
https://gcc.gnu.org/onlinedocs/gcc/Environment-Variables.html
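As a concrete illustration of the --sysroot approach for the zconf.h case above (a sketch only: the compiler name and the source file are placeholders for whatever your cross toolchain and project actually use):

arm-linux-gnueabihf-gcc --sysroot=<path_to_arm_filesys> -c foo.c -o foo.o

With the sysroot set, #include <zlib.h> resolves against <path_to_arm_filesys>/usr/include, and a multiarch-aware cross compiler should also search <path_to_arm_filesys>/usr/include/arm-linux-gnueabihf, which is where zconf.h lives.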
I want to use frama-c for static C code analysis. It already took me some effort to install it (hopefully) properly. The files are located at C:\CodeAnalysis\frama-c. I want to apply it via Windows console, e.g.:
C:\CodeAnalysis\frama-c\bin\frama-c hello.c
hello.c is just a simple hello-world program (I am no C programmer, btw, and a newbie in programming):
#include <stdio.h>
main()
{
printf("Hello World \n");
}
So when running the above command there is the following output:
[kernel] preprocessing with "gcc -C -E -I. hello.c"
C:/Strawberry/c/x86_64-w64-mingw32/include/stdio.h:141:[kernel] user error: syntax error
[kernel] user error: skipping file "hello.c" that has errors.
[kernel] Frama-C aborted: invalid user input
Yes, I have Perl installed, but I have no idea why Frama-C uses it. To me it seems that there is somehow something wrong with stdio.h. Can this be? Yet I can compile my program successfully.
C:\Strawberry\c\bin\gcc hello.c produces a nicely working exe file.
When removing the include statement from the file, there is the following output:
[kernel] preprocessing with "gcc -C -E -I. hello.c"
hello.c:5:[kernel] warning: Calling undeclared function printf. Old style K&R code?
So frama itself does work and this is the kind of output I expected to have.
I also have MinGW installed and tried to make Frama use this for compiling. So I removed the Strawberry entries in my Windows Path. After that calling frama-c produces the same output.
When completely uninstalling Strawberry Perl, frama doesn't work (stating gcc is an unknown command), although C:\MinGW\mingw64\bin is also added to my Windows Path, even as very first entry.
C:\MinGW\mingw64\bin\gcc hello.c works, gcc hello.c doesn't.
When Perl is installed gcc hello.c works, even when I delete the Strawberry parts from the Windows Path variable. Wtf?
How can I make things work properly?
There are several issues here, and we have to isolate them in order to fix things.
Strawberry Perl installs its own gcc (based on MinGW), binutils, C headers, etc., by default in the directory C:\Strawberry\c\bin. It adds this directory (among others) to the Windows Path variable. Frama-C expects gcc to be in the path, and it is Windows that decides which gcc to choose if several directories in the path contain a gcc binary. This is why Frama-C seems to use it.
One common mistake (not Windows-specific, but which happens more often in Windows due to the nature of its graphical applications) is to modify environment variables and forget to restart the processes which still have old copies of them (such as Command Prompt). echo %path% should confirm which directories are present in the path for the current command prompt, if there are any doubts about its value.
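If there is any doubt about which gcc Windows will pick, the where command lists every match in path order; the first line is the one that actually runs:

where gcc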
In case echo %path% contains the expected value, this is what might have happened (unfortunately I cannot reproduce your configuration to test it thoroughly): during installation of Frama-C, it may use the settings present during installation time to choose which directory contains gcc (in your case, C:\Strawberry\c\bin) and later hardcode this directory in its scripts.
This could explain why, after uninstalling Strawberry Perl, even if another gcc was in the path, it was not considered by Frama-C. Ideally, reinstalling Frama-C with a single gcc in the path could allow it to find the right version this time. Note that this is just a hypothesis, I may be completely wrong here.
In any case, the major problem you're having is not with gcc itself, but with the headers included with Strawberry Perl, as explained in the next item.
Concerning the error message:
C:/Strawberry/c/x86_64-w64-mingw32/include/stdio.h:141:[kernel] user error: syntax error
[kernel] user error: skipping file "hello.c" that has errors.
It is indeed not extremely informative and might change in future versions, but it does point to the source line which causes the error (file stdio.h, line 141):
int __cdecl __mingw_vsscanf (const char * __restrict__ _Str,
const char * __restrict__ Format,va_list argp);
In particular, it seems that __restrict__ is the source of the error here (Frama-C Sodium accepts restrict and __restrict, but not __restrict__; this may change in future versions).
Unfortunately, even fixing this (by adding e.g. #define __restrict__ restrict before #include <stdio.h> in your file) does not guarantee that the rest of the file will be parsed, since it seems to be a Windows-specific, C++-prone header that likely contains other C definitions/extensions that are not in the C99 standard, and possibly not accepted by Frama-C.
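For reference, the workaround mentioned above would sit at the very top of hello.c (a sketch only; as noted, it may merely move the failure to the next non-C99 construct in the header):

#define __restrict__ restrict  /* map the GNU spelling to the C99 keyword Frama-C accepts */
#include <stdio.h>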
The best solution would be to ensure Frama-C uses its own stdio.h header, instead of Strawberry Perl's. It is usually installed in share/frama-c/libc (that is, it could be in C:\CodeAnalysis\frama-c\share\frama-c\libc in your installation), but depending on your configuration the headers might not have been found during execution, and Strawberry Perl's headers were included instead.
A quick hack for this specific case might be replacing:
#include <stdio.h>
with:
#include "C:\CodeAnalysis\frama-c\share\frama-c\libc\stdio.h"
But it is far from ideal and likely to lead to other errors.
If you manage to find out how to prevent Strawberry Perl's headers from being included, and ensure Frama-C's header files are included instead, you should be able to run Frama-C.
Note about Cygwin/MinGW path issues
I've had some issues when using a MinGW compiler and a Cygwin build (which is not necessarily a good idea), so here are some quick instructions on how to build Frama-C Sodium with a MinGW-based OCaml compiler using a Cygwin shell (but not a Cygwin-based OCaml compiler), in case it might help someone:
When running ./configure, you'll need to specify a --prefix using a Windows-based path instead of a Cygwin-based one, such as:
./configure --prefix="C:/CodeAnalysis/build"
If you don't, when running Frama-C (after make/make install) it will fail to find the libc/__fc_builtin_for_normalization.i file because it will try to use the Cygwin-based path, which will not work with the MinGW-based OCaml compiler.
Note that you cannot use backslashes (\) when specifying the prefix path, since they will not be correctly converted later.
I had to use the following command to ensure the makefile worked correctly:
make FRAMAC_TOP_SRCDIR="$(cygpath -a -m $PWD)"
Again, this is due to Cygwin paths not being recognized by the MinGW compiler (in particular, the absolute paths used by the plug-ins).
The previous steps are sufficient to compile and run Frama-C (plus the GUI, if you have lablgtk and other dependencies installed). However, there are still some issues, e.g. absolute Windows filenames are not always handled correctly. This can often be avoided by specifying the file names directly in the command line with relative paths (e.g. frama-c-gui -val hello.c), but in the general case, MinGW+Cygwin is not a very robust combination and other issues may arise.
Overall, mixing Cygwin and MinGW is not a good idea due to path issues, but it is nevertheless possible to compile and run Frama-C in such conditions.
I am trying to install hqp on OS X, but it seems the gcc compiler is quite different.
When running make, I first hit an error like malloc.h not found, so I wrap the #include like this:
#if !defined(__APPLE__)
#include <malloc.h>
#endif
In this way, the first problem is solved.
But when I continue to run make, I get things like:
g++ -shared -o libhqp.so Hqp_Init.o Hqp.o sprcm.o Meschach.o spBKP.o matBKP.o bdBKP.o Hqp_impl.o Hqp_Program.o Hqp_Solver.o Hqp_Client.o Hqp_IpsFranke.o Hqp_IpsMehrotra.o Hqp_IpMatrix.o Hqp_IpSpBKP.o Hqp_IpRedSpBKP.o Hqp_IpLQDOCP.o t_mesch.o Hqp_IpSpSC.o meschext_hl.o Hqp_SqpSolver.o Hqp_SqpPowell.o Hqp_SqpSchittkowski.o Hqp_HL.o Hqp_HL_Gerschgorin.o Hqp_HL_DScale.o Hqp_HL_BFGS.o Hqp_HL_SparseBFGS.o Hqp_SqpProgram.o Hqp_Docp.o hqp_solve.o \
../meschach/*.o ../iftcl/*.o -L"/sw/lib" -Wl,-rpath,"/sw/lib" -ltclstub8.5
i686-apple-darwin11-llvm-g++-4.2: ../meschach/*.o: No such file or directory
i686-apple-darwin11-llvm-g++-4.2: ../iftcl/*.o: No such file or directory
Does anyone know what component is different this time? I tried reinstalling the latest version of tcl, but that does not seem to be the problem. I find it really hard to google a solution...
Without actually testing the result, I got this to work using the following steps. I have to say that this set of makefiles does not work as it should, especially with regard to how the dependencies are set up.
First, edit meschach/machine.h and remove the #include <malloc.h>, or make it conditional like you did with the __APPLE__ ifdef. The only reason why malloc.h is included seems to be for malloc() and free() and those get included via stdlib.h anyway.
Then edit makedirs.in and append -I/usr/include/malloc to MES_INCDIR, leaving you with MES_INCDIR = -I.. -I/usr/include/malloc.
With these two steps in place, doing ./configure followed by make should already give you libhqp.so in the lib directory, which might be sufficient for you.
However, there is also an executable called docp in the directory hqp_docp which gets executed during the make process. It does not work because it cannot find the shared library libhqp.so. I resolved that by cd-ing into the lib directory and setting export DYLD_FALLBACK_LIBRARY_PATH=$PWD. I am not sure whether running docp is an essential part of the process, though.
Finally, the building of a library called omu breaks because the linker is not passed any reference to the required library libhqp.so. I did not figure out why this would work on other systems, and I do not know whether you need that libomu at all. I just did a quick fix by adding -L../lib -lhqp to the end of the linker command in omu/Makefile, that is, the command starting with $(LD).
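For illustration, the patched link line in omu/Makefile would end up looking roughly like this (a sketch; the variable names are whatever that Makefile actually uses, the point is only where the extra flags go):

$(LD) $(LDFLAGS) -o $@ $(OBJS) $(LIBS) -L../lib -lhqp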
I hope I did not forget any of the steps I took, let me know if it still breaks for you somewhere.
I have an application using PDCurses. It compiles fine under debug, but when I try to compile in release mode, I get the following error:
main.cpp(1): fatal error C1083: Cannot open include file: 'curses.h': No such file or directory
I don't know if I haven't set up the linker properly or what the cause may be. Any ideas?
This message means the compiler cannot find the header file curses.h; it is not related to the linker. There is probably an #ifdef which includes this header only when doing a release build and not when doing a debug build (which may or may not be an error). Most likely the path to the header file is not specified to your compiler. I am not sure which compiler/version you are using, but you can add additional paths to search for header files; this will be in the compiler options.
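For example, with the Visual C++ command-line tools the extra search path is passed with /I (the path below is just a placeholder for wherever the PDCurses headers actually live):

cl /I"C:\path\to\PDCurses" main.cpp

In the Visual Studio IDE the equivalent setting is the project's additional include directories; note that it is stored per build configuration, which would explain why the Debug build finds curses.h while the Release build does not.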
EDIT: Look in the file main.cpp and you should see a line which #includes the file 'curses.h'. However, it could also be one of the header files you include there which in turn includes this file.