Static code analysis tools often watch the build process by inserting their own wrapper tool, which then runs the normal build process.
That way the compiler output in the console can be monitored by the analysis tool.
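A concrete example, if I understand it correctly, is Clang's static analyzer: you prefix the normal build command, e.g.

scan-build make

and the wrapper substitutes its own compiler driver so that every file is both compiled and analyzed.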
Is there a simple, general way to run the compilation process without actually compiling? I mean not a dedicated solution that depends on a specific compiler, but a general trick.
The idea would be to reduce compilation time, since only the output is relevant.
Remarks are welcome.
Monica
Related
As far as I know, the compiler compiles the code by converting it to a language that the computer can understand, namely machine language, and this is done before running the code.
So, does the compiler compile my code each time I write a character in the file?
And if so, does it check the whole code, or just the line that was updated?
An important part of this question is the type of programming language (PL) we are talking about. Generally speaking, I would categorize PLs into 3 groups:
Traditional PLs. Ex: C, C++, Rust
The compiler compiles the code into machine language when you hit the "build" button or the "run" button.
It doesn't compile every time you change the code, but a code linter does continuously observe your code and check it for errors.
Another note: when you change part of the code and compile it, the compiler doesn't recompile everything. It usually recompiles only the changed source file (or module, or whatever you call it).
It is also important to note that a lot of modern IDEs compile when you save a file.
There is also the hot reload feature. It is a smart compiler feature that can swap certain parts of the code while it is running.
Interpreted PLs. Ex: Python, JS, and PHP
These languages never get compiled ahead of time; rather, they are interpreted, or translated into native code on the fly and in memory, when you run them.
These languages usually employ a cache to accelerate subsequent code execution.
Intermediary-code PLs. Ex: Kotlin, Java, C#
These have 2 stages of compilation:
Build time compilation.
Just in time (run-time) compilation.
Build-time compilation converts the code into intermediate language (IL) code, which is specific to the runtime.
This code is only understood by that runtime, such as the Java runtime or the .NET runtime.
The second compilation happens when the program gets installed or run for the first time. This is called just-in-time (JIT) compilation.
The runtime converts the IL into native code specific to the OS it runs on.
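As a concrete illustration with Java (assuming a JDK and a trivial Hello.java):

javac Hello.java   -> build-time compilation, produces Hello.class (the IL/bytecode)
java Hello         -> the runtime loads the bytecode and JIT-compiles it to native code as it runs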
I'm currently using Haxe, specifically HaxeFlixel, for development. One thing that really bugs me is the build/compile time. I'm not even compiling to the C++ target; I decided to compile to the Neko VM as I thought it might be faster. However, the compile time for a Neko debug (or release) build is about 4 or 5 seconds. Having to wait this long every time I want to see a result makes it dreadful :).
I even tried to debug with the -v flag, and the parts that take the most time are:
Running command: BUILD
- Copying library file:
C:\HaxeToolkit\haxe\lib\lime/2,9,1/legacy/ndll/Windows/lime-legacy.ndll -> export/windows/neko/
bin/lime-legacy.ndll
- Running command: haxe export/windows/neko/haxe/release.hxml
From the above excerpt it seems like everything is behaving normally, which worries me because I do not want normal to be this slow.
Now 4 or 5 seconds might not seem like a lot to some people, but with Go, JavaScript, Java and other fast-compiling languages out there, I'm spoiled.
Is there another target I can compile to that I don't know about which would be faster than Neko VM compilation? Is there anything I can do to increase compile speed, or to further debug the cause of the slowness?
You can consider using the compilation server:
From a terminal, run haxe --wait 6000
In your hxml, add --connect 6000
This will make your build use the compilation server, which caches unchanged modules and recompiles only the modules that changed, speeding up your build.
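For example, a release.hxml might end up looking roughly like this (only the --connect line is the addition; the other lines stand in for whatever your build already generates):

# release.hxml (sketch)
--connect 6000
-main Main
-neko export/windows/neko/bin/Main.n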
Had a similar concern with running a large number of unit tests very quickly. Ended up building to JS and running the tests in node.
Pair that with gulp to build the code and process resources, and things can end up running pretty quickly.
Is there a tool which can do static analysis and find possible forward-null and possible null-dereference cases?
I know Coverity is widely used, and also Cppcheck.
But I don't find them useful when user-defined data types come into the picture.
Please suggest a solution which can also handle user-defined data types and works on C++ code.
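To make it concrete, here is a minimal sketch of the kind of case I mean (Order, findOrder and the rest are made up for illustration):

#include <iostream>
#include <string>

// A user-defined type, which is where most tools start missing defects.
struct Order {
    std::string id;
    int quantity = 0;
};

// May return nullptr when no order matches.
Order* findOrder(const std::string& id) {
    static Order stock{"A-1", 3};
    return id == stock.id ? &stock : nullptr;
}

void printOrder(const std::string& id) {
    Order* order = findOrder(id);
    if (order == nullptr) {
        std::cout << "order not found\n";
        // forgot to return here
    }
    // "Forward null": order was tested against nullptr above, yet this line
    // is still reached when it is null - a possible null dereference.
    std::cout << order->id << " x " << order->quantity << "\n";
}

int main() {
    printOrder("missing");  // takes the not-found path and then dereferences null
}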
You might try
Cppcheck - Cppcheck is a static analysis tool for C/C++ code. Unlike C/C++ compilers and many other analysis tools it does not detect syntax errors in the code. Cppcheck primarily detects the types of bugs that the compilers normally do not detect. The goal is to detect only real errors in the code (i.e. have zero false positives).
Coverity Scan - Static analysis: find and fix defects in your Java, C/C++, C# or JavaScript open source project for free. Test every line of code and potential execution path.
There are a lot of other tools available, both open source and commercial.
Good luck.
I have a DLL, call it core.dll, which I want to optimize using Visual Studio's excellent Profile Guided Optimization. Most of the code in the DLL actually compiles into a library called core.lib, which is then wrapped by core.dll.
To unit-test this code I also have a tester executable called test_core.exe. This executable links to core.lib and calls various functions from it. The DLL core.dll has very few exports, only enough to start its main functionality. It cannot be unit tested fully using these exports.
What I want is to do the PGO data collection by activating some of the tests in test_core.exe and then to use this PGO data to link and optimize core.dll.
It seems that the Visual Studio framework was designed so that the collecting executable and optimized executable are the same.
One option is to put the relevant tests inside core.dll and run them using a special export, but that would bloat core.dll with test code which is not used in any other circumstance.
It seems that the Visual Studio framework was designed so that the collecting executable and optimized executable are the same.
That was very, very intentional. Profile guided optimization can only work properly when it uses profile data that was collected from a realistic simulation of the way your users are going to run your program. That requires the actual executable as deployed to the user, processing realistic data that is a reasonable match for the data the program is going to see at the user's site.
Trying to spike it with unit-test profile results will achieve the opposite; your user isn't going to run the code the same way. There are significant odds that you'd end up with a less optimized program, if that were possible. The profile data you've gathered is only good enough to optimize the unit test, and that's not useful.
Don't try to cook the profile data; it can't work. This does mean that you can't necessarily easily measure the effectiveness of the optimization if you require a unit test to see a signal. In that case you just need to assume that PGO gets the job done.
It seems that the Visual Studio framework was designed so that the collecting executable and optimized executable are the same.
This is true, but in your case you want to optimize a DLL, not an executable. You can compile the static library and the DLL using the /GL switch and link the DLL using the /LTCG:PGINSTRUMENT switch. This creates a DLL that is instrumented. The test_core.exe image doesn't have to be instrumented, so you can just compile it normally (in Debug or Release mode). Then, by running test_core.exe, a PGC file will be generated containing a profile of the behavior of core.dll only. This profile can then be used to optimize core.dll by linking it again with the /LTCG:PGOPTIMIZE switch. As long as test_core.exe exercises core.dll for common usage scenarios, you'll certainly benefit from it. See this for more information.
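A rough sketch of that flow from a developer command prompt (file names taken from the question; the exact switches may differ between Visual Studio versions, and newer toolsets spell these /GENPROFILE and /USEPROFILE):

rem compile the library/DLL sources with whole-program optimization
cl /c /O2 /GL core.cpp
rem link the DLL instrumented for profile collection; this also creates core.pgd
link /DLL /LTCG:PGINSTRUMENT /OUT:core.dll core.obj
rem build test_core.exe normally, make sure it loads this instrumented core.dll,
rem and run the tests; that writes core!1.pgc and so on next to core.pgd
test_core.exe
rem merge the .pgc data into core.pgd if your toolset does not do it automatically
pgomgr /merge core.pgd
rem re-link the DLL using the collected profile
link /DLL /LTCG:PGOPTIMIZE /OUT:core.dll core.obj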
I am trying to debug an intermittent parallel-build issue in my CMake build system involving some generated files. It is, however, difficult to reliably test or reproduce the issue.
Does anyone know any way to exacerbate or sensitise such issues? Or other strategies for debugging them?
It is likely a missing add_dependencies to force one target to build completely before another begins, or an add_custom_command output that is used in more than one library.
If both libraries start building at the same time, and they both trigger running the custom command at the same time, then you'll get two competing custom commands running, and they may overwrite each other's results, or intermingle results.
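If that's the cause, the usual fix is to hang the generated file off a single custom target and make every consumer depend on that target, roughly like this (target and file names are made up):

# One custom command produces the file...
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp
    COMMAND generator_tool -o ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp
    DEPENDS generator_tool
    COMMENT "Generating generated.cpp"
)

# ...and exactly one target owns it, so it can only run once.
add_custom_target(generate_sources
    DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp
)

add_library(libA a.cpp ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp)
add_library(libB b.cpp)

# Both libraries wait for the generation step, even in a parallel build.
add_dependencies(libA generate_sources)
add_dependencies(libB generate_sources)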
Is your code public? Can you post it for others to inspect?
One good strategy is simply exposing it to other developers for "more eyes"...