What are the practical implications of compilation warnings? [closed] - gcc

I was taught to compile with gcc -Wall as a general habit. Sometimes the warnings point me to logic errors in my code that I can then find and correct quickly (e.g. a variable declared but never used because I confused or mistyped variable names). But when I compile code from other sources, which seems to run fine for me, I frequently get warnings about deprecated or nonstandard language elements and the like.
Do I have to worry about these? Should I update or correct those sources? By this reasoning, wouldn't it be safer to print all compiler warnings by default and offer a flag like -Wnone to silence them instead?
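For a concrete instance of the kind of bug -Wall catches for me, here is a minimal made-up example (the file and variable names are invented):

    /* warn.c -- compile with: gcc -Wall -c warn.c */
    #include <stdio.h>

    int main(void)
    {
        int total = 0;   /* the accumulator actually used below */
        int tota1 = 0;   /* mistyped twin, never used again     */

        for (int i = 0; i < 10; i++)
            total += i;

        printf("%d\n", total);   /* prints 45 */
        return 0;
    }
    /* gcc -Wall reports an "unused variable" warning for tota1,
       pointing straight at the typo. */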

As it happens so often in software development, the answer is "it depends".
Personally, I find compiler warnings very useful. I try to write portable code and it is actually quite instructive to compile with different compilers under different OSes and then compare the warnings. More than once they helped me to pinpoint innocent-looking but real problems.
On the other hand, depending on your experience, you might find warnings annoying if you really know what you're doing. Most compilers let you suppress particular warnings; consult your compiler's manual for details.
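With gcc, for instance, you can turn a warning off for a whole build with the matching -Wno- flag, or only around a single spot with diagnostic pragmas. A minimal sketch (some_deprecated_function is a made-up name):

    /* For the whole translation unit:
     *   gcc -Wall -Wno-deprecated-declarations legacy.c
     *
     * Or only around the offending call: */
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
    some_deprecated_function();   /* hypothetical legacy call */
    #pragma GCC diagnostic pop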

Related

What is wrong with configure/make that we need new build tools? [closed]

Is make not general enough?
makedepend, autoconf, and automake all exist to generate Makefiles. Is there a flaw in make that causes this approach to break down for some languages?
How do Ant, Bazel, Maven, and other systems compile or build a project better than make does?
make comes from Unix and is generally good at "if you have X, you need to do Y to get Z" rules, one file at a time (like invoking the C compiler on a C source file). The autoconf/automake/configure tooling exists to adapt C programs to the actual platform, and it essentially does not overlap with make.
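That model is easy to see in a minimal hand-sketched Makefile (file names invented):

    # "If you have foo.c, you need to run the C compiler to get foo.o."
    foo.o: foo.c
    	gcc -Wall -c foo.c -o foo.o

    # "If you have the object files, you need to link them to get the program."
    prog: foo.o bar.o
    	gcc foo.o bar.o -o prog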
This per-file model did not work well for Java: the compiler is fast at compiling many files in one go, but the overhead of starting the JVM is much too high to pay for each individual file. So a different approach was needed: first plain javac, then Ant (which for all practical purposes is a scripting language), and then Maven (which deliberately isn't, because that turned out to be a bad idea).
So, the answer is that different tools arose for different needs.

gcc assembly vs direct to machine code [closed]

I recently started learning how to program, and I found this one thing curious:
Why does gcc go the extra mile of compiling the C code to assembly and only then to machine code?
Wouldn't it be just as reasonable to compile directly to machine code? I know that e.g. the Microsoft C compiler does this, so what is gcc's reason for behaving like this?
Because, for one thing, there is already an assembler that does a fairly good job of translating assembly to machine code -- there would be no point in gcc re-implementing that functionality. (Also keep in mind that assembly is still /somewhat/ symbolic.) Second, one doesn't /always/ want to compile straight down to machine code -- on embedded systems, there is a good chance the generated assembly undergoes a final by-hand optimization. Last but not least, it is very helpful for debugging the compiler itself when it misbehaves. Nobody likes to read raw machine code.
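You can watch gcc do each stage yourself: -S stops after generating assembly, and -save-temps keeps every intermediate file (hello.c is a placeholder name):

    gcc -S hello.c            # stop after compiling: produces readable hello.s
    as hello.s -o hello.o     # the separate assembler emits the machine code
    gcc -save-temps hello.c   # keeps hello.i (preprocessed), hello.s and hello.o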
GCC is very much a Unix tool, and the Unix way is to make separate tools that build on each other rather than integrating everything. You need an assembler and a linker anyway; the job of the assembler is to generate machine code for a target, and it makes no sense to repeat that. The job of the compiler is to boil a higher-level language down to a lower one. Generating assembly language allows for much easier debugging of the compiler's output, and it lets the assembler do its job rather than repeating the task in two places.
Of course it is not only Unix compilers that do this; it makes a lot of sense on all platforms and has been done this way forever. Going straight to machine code is the exception rather than the rule, usually done only when there is a specific reason for it.
I don't understand the fascination with this question and why it keeps getting asked over and over again. Please search for previous answers before asking.

Designing a makefile for installing / uninstalling software that I design [closed]

I'm writing a compiler, and there are certain things I simply can't do without knowing where the user will have my compiler and its libraries installed, such as locating and referencing the libraries that provide built-in functionality like standard I/O. Since this is my first venture into compilers, I feel it's appropriate to target only Linux distributions for the time being.
I notice that a lot of compilers (and software projects in general) include makefiles, or perhaps an install.py, that move parts of the application across the user's file system and ultimately leave the user with something like a new shell command to run the program. That command (as in Python's case, for instance) knows where the necessary libraries are and where the other files needed to run the program properly have been placed.
How does this work? Is there some sort of guideline to follow when designing these install files?
I think the best guideline I can give you at a high level would be:
Don't do this yourself. Just don't.
Use something like the autotools or any of the dozen or so other build systems out there that handle many of the details involved here for you.
That said, they also add a certain amount of complexity when you are just starting out, and that may or may not be worth the effort at first. But they will pay off in the end, assuming you use them appropriately and don't need anything so specialized that they can't provide it cleanly.
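For the curious, what those generated makefiles ultimately boil down to is an install target following the usual PREFIX/DESTDIR convention. A hand-written sketch, for illustration only (mycc and stdlib.myc are invented names):

    # Illustration only -- real projects should let a build system generate this.
    PREFIX ?= /usr/local

    install: mycc
    	install -d $(DESTDIR)$(PREFIX)/bin
    	install -m 755 mycc $(DESTDIR)$(PREFIX)/bin/mycc
    	install -d $(DESTDIR)$(PREFIX)/lib/mycc
    	install -m 644 stdlib.myc $(DESTDIR)$(PREFIX)/lib/mycc/

    uninstall:
    	rm -f $(DESTDIR)$(PREFIX)/bin/mycc
    	rm -rf $(DESTDIR)$(PREFIX)/lib/mycc

The installed compiler can then locate its libraries either via a path derived from PREFIX and baked in at build time, or via an environment variable checked at run time.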

Why Phing/Ant over Bash and Make? [closed]

I've been using Phing at work (it was set up when I got there), and I'm thinking of using it for some personal projects. One thing I haven't got my head around yet, though, is what the big appeal is.
What, if any, are the killer features of Phing or Ant? What are the big reasons people choose to use them instead of (for example) just a collection of bash scripts that execute their build actions? I'm sure I'm missing the obvious, hopefully someone can help me. While I understand that some people may prefer not to use phing/ant, I'm hoping to hear from people who do prefer them about why they prefer them. Just so I can make a more informed decision.
Thanks for any direction or links.
The main feature of Ant is to add frustration to your day, when you know you could achieve something in 30 seconds in a Makefile, but end up fighting with Ant for an hour :)
Ant was a fresh implementation that does not require a functioning shell and all the other standard commands you would expect to come with one. I think that's the real killer feature: you can use it on Windows.
Ant XML is far more structured and machine-readable - whereas Makefile+shell is essentially Turing complete and extremely generic. Your IDE has a hope of being able to understand Ant XML, the same can't be said in the general case for Makefiles.
Sadly, the reality after all this time seems to be that the IDEs don't make good use of this potential win. Case in point, opening build.xml in Eclipse just shows you XML.
Which I think just leaves the Windows rationale. If there were no Windows, probably there would be no Ant either.
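To make the "structured" point concrete, here is a minimal build.xml (the directory names are assumptions); every step is a declared task an IDE could in principle parse, rather than an opaque shell command:

    <project name="hello" default="jar">
        <target name="compile">
            <mkdir dir="build/classes"/>
            <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
        </target>
        <target name="jar" depends="compile">
            <jar destfile="build/hello.jar" basedir="build/classes"/>
        </target>
    </project>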

Anybody tried the Crystal Programming Language (machine-code compiled Ruby)? [closed]

Like many others, I have always held that "A pure compiler will never exist for Ruby because the language is far too dynamic for a static compiler to work."
But I recently stumbled upon these:
The Crystal programming language at GitHub
Statically compiled Ruby
Both projects seem very interesting. They could give us the speed of a natively compiled language (and the obfuscated binaries that commercial settings often require) while keeping all, or most, of the elegance and flexibility of Ruby. Add a good support library (or, more likely, the ability to call existing C++ libraries) and you can easily see the appeal.
Has anybody tried the Crystal language?
(I haven't yet, because of compilation problems with ruby-llvm.)
What was your impression of it?
Do you think that, given those design choices, it would actually be possible to develop a native-code (machine-code) compiler for Ruby with a reasonable effort and in a reasonable amount of time? Would it be meaningful?
I'm the developer of Crystal. Not everything from the feature list is implemented yet; in fact, support for classes has only just been started.
I really like the idea of it, though. But I need to think more about how to implement it. And I also need more time, hehe.
The second article takes a completely different approach, because it doesn't introduce a new language: it will just try to compile a subset of Ruby, or perhaps compile to native code while still allowing some dynamism at a performance cost (I talked to the author of that article some months ago).
My feeling toward both approaches: I really wish it could happen. We need a fast language with an elegant, readable, joy-to-use syntax and library (like what Ruby offers).
I'm the developer of Foundry; the second article is mine.
A more recent article on the same topic would be "A language for embedded developers"; or you could also track development progress by subscribing at foundry-lang.org.
Please note, however, that my project is commercial and (at least initially) not open-source, and it is primarily focused on embedded development. You could still use it on desktops or servers, of course.
I'm also one of ruby-llvm maintainers; please report the problems you've encountered as bugs on the project page.
