How does one get gcc to stop compiling after the first error? Is there a compiler flag that will do this?
Basically I'm wanting to remove a class, but I'm not sure how much of an impact that will have, so I'm wanting to determine how many classes would have problems if I, say, remove the class from the makefile.
Is there a better way to determine this impact?
There's a GCC compiler option -Wfatal-errors to stop after the first error:
-Wfatal-errors
This option causes the compiler to abort compilation on the first error occurred rather than trying to keep going and printing further error messages.
You can also use -Werror if you want to treat warnings as errors so that you'll catch any warning that might be generated when you remove your class.
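For example (foo.cpp here is just a placeholder source file), the two can be combined in one invocation:
g++ -Wall -Werror -Wfatal-errors -c foo.cpp
With this, compilation halts at the very first diagnostic, and warnings are promoted to errors.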
Is there a better way to determine this impact?
Use the refactoring support built into many IDEs. For example, with NetBeans, you can choose to rename a class and preview all affected places.
Without an IDE, you can rename the class/method/field instead of deleting it and then gradually, over several compilation runs, change all usages of the old name wherever the compiler reports an error. Then grep for the new name, as sketched below.
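As a concrete sketch of that workflow (OldClass and NewClass are placeholder names):
# after renaming OldClass to NewClass, every failed build points at stale usages
make 2>&1 | grep OldClass
# once the build is clean again, enumerate all affected places
grep -rn NewClass src/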
Related
Recently I've been poking around and trying to figure out how C compilers work.
I've noticed that different computers seem to exhibit different behaviours for the following actions:
// a.c
// #include "oui.h"
int main() {
    oui();
}
// oui.h
void oui();
gcc -c a.c
On one computer, I am warned that the function oui in a.c is undeclared. This makes sense to me, and uncommenting the include fixes the problem.
On another computer, however, even with the include commented out, the compiler produces the object a.o without complaint. This does not make sense to me, because isn't oui undeclared without the header file?
Which is the normal behaviour? Are some settings on one of my computers messed up?
Don't bother with the following questions if you don't want to; they just popped into my mind, and I'll make another thread if need be :).
Side question: The -c flag produces object files but does not link, so is there a special "link" flag afterwards to put the object files together, or is it just gcc? Why not just gcc everything together at the start?
Side question #2: if I do gcc filea fileb filec, is filea inherently special for being the first argument? Or does the order of gcc not matter because the compiler finds whichever file has main by itself?
It's probably a version difference between the compilers on the two computers.
Which is the normal behaviour?
The compiler should warn if you use a function that isn't declared. In C89, calling an undeclared function was legal (the compiler implicitly declared it), so gcc versions defaulting to gnu89 mode stay silent; C99 removed implicit function declarations, so gcc warns by default in C99 and later modes.
Are some settings on one of my computers messed up?
I don't think so; more likely you simply have one newer gcc and one older one.
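You can often reproduce both behaviours on one machine just by switching the language standard (a sketch; the exact output depends on the gcc version):
gcc -std=c89 -c a.c   # implicit declarations are legal C89: typically silent
gcc -std=c99 -c a.c   # C99 removed them: "implicit declaration of function 'oui'"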
Side question #1:
If you use gcc without -c, it will try to invoke the linker. In this example, the linker will fail, as oui is undefined.
-c is often used in Makefiles and on bigger projects because it makes rebuilds faster: only the source files that changed have to be compiled again.
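For example, assuming a hypothetical oui.c that defines oui(), the usual compile-then-link sequence looks like this; there is no special "link" flag, because invoking gcc on object files skips compilation and just runs the linker:
gcc -c a.c              # compile only: produces a.o
gcc -c oui.c            # compile only: produces oui.o
gcc a.o oui.o -o prog   # no -c: gcc hands the objects to the linker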
Side question #2:
The order of the source files doesn't matter, as the linker finds main by itself.
For reasons best left unmentioned, I want to treat all warnings as errors, except for a single warning (deprecated), which I would like to keep as a warning.
Is there any more convenient way to do this than listing out all the warnings I want to treat as errors by hand?
You can do -Werror -Wno-error=deprecated.
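A full invocation might then look like this (foo.c is a placeholder; substitute deprecated-declarations if that is the warning your code actually triggers):
gcc -Wall -Werror -Wno-error=deprecated -c foo.c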
I get a weird error when I attempt to compile some code I'm writing. I have several Fortran modules that I use for linear algebra computations; I don't want to make an application have to use all of them, so I wrote a wrapper module around them:
module linear_algebra_mod
use sparse_matrix_mod
use csr_matrix_mod
(etc.)
so that the end user can write use linear_algebra_mod to get all of them. However, I get the following error when I compile the linear algebra module:
gfortran -c sparse_matrix_mod.f90
gfortran -c csr_matrix_mod.f90
gfortran -c linear_algebra_mod.f90
linear_algebra_mod.f90:5.8:
use csr_matrix_mod
1
Internal Error at (1):
free_pi_tree(): Unresolved fixup
This was brought up in bug reports here and here but I wasn't able to glean from those what I should do.
To muddy the waters even further, when I instead use the csr_matrix module first, like so:
module linear_algebra_mod
use csr_matrix_mod
use sparse_matrix_mod
the error disappears.
In case this background information is helpful: the sparse_matrix module defines an abstract data type which the csr_matrix module extends and actually implements.
Internal compiler errors are always an indication of a bug in the compiler. Check whether you have the latest version of the compiler; if you do, file a bug report. (You may have a look at the open bugs section to see whether someone has reported it already, but it is better to have a bug reported twice than not reported at all, so don't worry too much about possibly filing a duplicate.)
I'm trying out OpenCobol with a simple Hello World example.
IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO.
PROCEDURE DIVISION.
DISPLAY "Hello World".
STOP RUN.
I compile with
cobc -x -free -o hello hello.cbl
And get a workable executable, but also a lot of these warnings from gcc
warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
From a Google search all I can find is that I can apparently just ignore these without ill effect. But for various reasons I'd like to actually get rid of them, if nothing else then at least suppressing them somehow. How can I do this?
Use -O to tone down optimisation; the warning comes from -fstrict-aliasing, which gcc only enables at -O2 and above.
I would have expected this to be applied to the C code generation, and not passed through to gcc.
If you prefer absolute control over gcc, write a script to wrap your build:
(pass 1) produce the translated C code with cobc;
(pass 2) compile it (with lower optimisation) using gcc, as sketched below.
On a large project you can nest such scripts for a full build, à la make.
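A minimal sketch of such a wrapper, assuming your cobc supports -C (translate only) and that the cob-config helper is installed to supply the libcob flags:
#!/bin/sh
cobc -C -x -free hello.cbl                  # pass 1: emit hello.c, gcc not invoked
gcc -O -fno-strict-aliasing hello.c \
    -o hello $(cob-config --cflags --libs)  # pass 2: compile with tamer options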
Excuse the late note: for control over the C compiling phase, OpenCOBOL respects a few environment variables during the build chain.
See http://opencobol.add1tocobol.com/#does-opencobol-work-with-llvm for a list, including COB_CFLAGS and COB_LDFLAGS
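Per that page, something like the following should push the flag through to the gcc stage (hedged: whether COB_CFLAGS replaces or extends the default flags may depend on your OpenCOBOL version):
COB_CFLAGS="-fno-strict-aliasing" cobc -x -free -o hello hello.cbl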
From this OpenCobol forum thread it looks like you need to use the -fno-strict-aliasing option. Can't try it here because we don't use OpenCobol.
Somewhere under the covers you are invoking the gcc compiler. Try setting compiler options to turn the warning off as described here
The option should look something like: -Wno-strict-aliasing
I am trying to save space in my executable and I noticed that several functions are being added into my object files, even though I never call them (the code is from a library).
Is there a way to tell gcc to remove these functions automatically or do I need to remove them manually?
If you are compiling into object files (not executables), then a compiler will never remove any non-static functions, since it's always possible you will link the object file against another object file that will call that function. So your first step should be declaring as many functions as possible static.
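A minimal C sketch of the difference:
/* internal linkage: nothing outside this file can call helper, so the
   compiler may drop it when unused (and -Wall will warn about it) */
static int helper(int x) { return x * 2; }
/* external linkage: must stay in the .o, another object might call it */
int api_entry(int x) { return x + 1; }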
Secondly, the only way for a compiler to remove any unused functions would be to statically link your executable. In that case, there is at least the possibility that a program might come along and figure out what functions are used and which ones are not used.
The catch is, I don't believe that gcc actually does this type of cross-module optimization. Your best bet is the -Os flag to optimize for code size, but even then, if you have an object file abc.o with some unused non-static functions and you statically link it into some executable def.exe, I don't believe that gcc will strip out the code for the unused functions.
If you truly desperately need this to be done, I think you might have to actually #include the files together so that after the preprocessor pass, it results in a single .c file being compiled. With gcc compiling a single monstrous jumbo source file, you stand the best chance of unused functions being eliminated.
Have you looked into calling gcc with -Os (optimize for size)? I'm not sure if it strips unreachable code, but it would be simple enough to test. You could also run 'strip' on the executable after you get it back. I'm sure there's a gcc command-line arg to do the same thing; is it --dead_strip?
In addition to -Os to optimize for size, this link may be of help.
Since I asked this question, GCC 4.5 was released which includes an option to combine all files so it looks like it is just 1 gigantic source file. Using that option, it is possible to easily strip out the unused functions.
More details here
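The option being referred to is presumably link-time optimization, introduced in GCC 4.5; a hedged sketch of using it to drop dead code:
gcc -Os -flto -fwhole-program filea.c fileb.c filec.c -o prog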
IIRC the linker by default does what you want in some specific cases. The short of it is that library files (.a archives) contain a bunch of object files, and only the object files that are actually referenced get linked in. If you can figure out how to get GCC to emit each function into its own object file and then build those into a library, you should get what you are looking for.
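The closest gcc/ld equivalent of "one object file per function" is per-function sections plus linker garbage collection; a sketch (flag spellings are for GNU ld):
gcc -Os -ffunction-sections -fdata-sections -c lib.c main.c
gcc -Wl,--gc-sections lib.o main.o -o prog   # ld discards unreferenced sections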
I only know of one compiler that can actually do this: here (look at the -lib flag)