What is the difference between .ilk and .iobj files? - visual-studio

I noticed that Visual Studio generates *.ilk files for debug builds and *.iobj files for release builds. As I understand it, both of these file types are used as input for the incremental linker. What is the difference between them? Can they be used together? How can I disable generation of these files in the project settings?

According to this answer, .iobj files are produced to support incremental link-time code generation (incremental LTCG; LTCG is what used to be called, I believe, 'whole program optimization'), and LTCG is normally only enabled for release builds.
One optimisation that LTCG can perform is inlining a function from another compilation unit (i.e. another source file). The compiler alone (of course) can't do this, since it only sees one compilation unit at a time. There are no doubt others.
.ilk files, on the other hand, support incremental linking for debug builds, to get fast link times. This is not the same as incremental LTCG, where the linker tries to make use of cross-compilation-unit optimisations that it has done before, again to speed things up, but in a different way.
It follows that to suppress generation of .iobj files, turn off 'incremental link time code generation' for your project, and to suppress generation of .ilk files, turn off 'incremental linking'. I believe that both of these are linker options. But why bother? - they speed up development. Instead, I delete these files when I archive [a version of] my project.
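For reference, a rough sketch of the command-line equivalents (the IDE settings live under Linker | General and Linker | Optimization; treat the exact option names as approximate):
link /INCREMENTAL:NO ...   (incremental linking off, so no .ilk file is produced)
link /LTCG ...             (plain, non-incremental LTCG; it is /LTCG:INCREMENTAL that produces the .iobj)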
Incremental linking is normally turned off for release builds, although I'm not sure why. Perhaps the two options are mutually incompatible, I've never tried enabling them both at once. Or maybe MS figured that we were fed up with them cluttering up our hard disks with build products, who knows?

Related

Why might it be necessary to force rebuild a program?

I am following the book "Beginning STM32" by Warren Gay (excellent so far, btw) which goes over how to get started with the Blue Pill.
A part that has confused me is, while we are putting our first program on our Blue Pill, the book advises to force rebuild the program before flashing it to the device. I use:
make clobber
make
make flash
My question is: why is this necessary? Why not just flash the program, since it is already built? My guess is that it is just to learn how to build a program from scratch... but I also wonder whether rebuilding before flashing to the device is best practice. The book does not say why.
You'd have to ask the author, but I would suggest it is "just in case", no more - perhaps a lack of trust that the makefile specifies all possible dependencies. If the makefile were hand-written without automatically generated dependencies, that is entirely possible. It is also easier simply to advise a rebuild than to explain all the situations where it might or might not be necessary, and such a list would not be exhaustive anyway.
From the author's point of view, it eliminates a number of possible build consistency errors that are beyond his control so it ensures you don't end up thinking the book is wrong, when it might be something you have done that the author has no control over.
Even with automatically generated dependencies, a project may have dependencies that the makefile or dependency generator does not catch, resource files used for code generation using custom tools for example.
For large projects developed over a long time, some seldom modified modules could well have been compiled with an older version of the tool chain, a clean build ensures everything is compiled and linked with the current tool.
make decides what to rebuild based on file timestamps; if you have build variants controlled by command-line macros, make cannot tell which objects depend on such a macro, so when building a different variant (switching from a "debug" to a "release" build, for example) it is a good idea to rebuild everything to ensure that every module is consistent and compatible.
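A minimal sketch of that pitfall, assuming a hypothetical makefile where the variant is chosen only by a command-line macro:
make CFLAGS=-DDEBUG                   (compiles everything for the debug variant)
make CFLAGS=-DNDEBUG                  (rebuilds nothing - the object files are still newer than the sources)
make clean && make CFLAGS=-DNDEBUG    (forces every module to be consistent with the new variant)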
I would suggest that during a build/debug cycle you use the incremental build for development speed, as intended, and perform a full rebuild for a release or when changing some build configuration (such as enabling/disabling floating-point hardware, or switching to a different processor variant).
If during debugging you get results that seem to make no sense, such as breakpoints and code stepping not aligning with the source code, or a crash or behaviour that seems unrelated to some small change you have made (perhaps that code has not even been executed), it may sometimes be down to a build inconsistency (for a variety of reasons, usually with complex builds). In such cases it is wise to at least eliminate build inconsistency as a cause by performing a rebuild-all.
Certainly if you are releasing code to a third party, such as a customer or for production of some product, you would probably want to perform a clean build just to ensure build consistency. You don't want users reporting bugs you cannot reproduce because the build they were given is not reproducible.
Rebuilding the complete software is good practice because it regenerates all the dependencies and symbol files with paths that match your local machine.
If you later need to debug the application with a debugger, you will probably need the symbol file and the paths to where your source code lives. If you flash the application directly without rebuilding, you may not be able to debug some code paths, because you won't know from which paths the application was compiled and you may be missing the symbol information.

Should the STM32 HAL be included as a precompiled library

I have a Keil STM32 project for an STM32L0. I sometimes (more often than I want) have to change the include paths or global defines. This triggers a complete recompile of all the code because it needs to 'check' for changed behaviour caused by these changes. The problem is: I didn't necessarily change parameters that are relevant to the HAL, so (as far as I understand) there is no need for those files to be completely recompiled. This recompilation takes quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done for every microcontroller separately as they have different HALs).
ps. the question is not necessarily only useful for this specific example but the example gives some scope to the question.
pps. for people who aren't familiar with the STM32 HAL. It is the standardized interface with which the program interfaces with the underlying hardware. It is supplied in .c and .h files instead of the precompiled form of the STD/STL.
update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx,USE_B_BOARD,USE_HAL_DRIVER, REGION_EU868,DEBUG,TRACE
Only STM32L072xx, and DEBUG are useful for configuring the HAL library and thus there shouldn't be a need for me to recompile the HAL when I change TRACE from defined to undefined. Therefore it seems to me that the HAL could be managed separately.
edit
Seeing as a close vote has been cast: I've read the don't ask section, and my question seeks to constructively add to the knowledge of building STM32 programs and to find a best practice for using the HAL libraries more effectively. I haven't found any questions on SO about building the HAL as a static library, so this question at least qualifies as unique. It is also meant to invite a rich answer which elaborates on the pros/cons of building the HAL as a separate static library.
The answer here is.. it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - having HAL sources directly in your project means rebuilding HAL every time anything in its (and underlying) headers changes, which you've already noticed. Downside of it is longer build times. Upside - you are sure that what you build is what you get.
Option #2 - having HAL as a precompiled static library. Upside - shorter build times, downside - you can no longer be absolutely certain that the HAL library you include actually works as you want it to. In particular, you'd need to make sure in some way that all the #defines are exactly the same as when the library has been built. This includes project-wide definitions (DEBUG, STM32L072xx etc.), as well as anything in HAL config files (stm32l0xx_hal_conf.h).
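One partial mitigation for option #2, sketched below with a hypothetical header name, is to ship a small check-header alongside the precompiled library that re-asserts the build-critical defines, so a mismatch fails at compile time instead of producing subtly different behaviour:
/* hal_build_check.h - distributed with the precompiled HAL library (hypothetical) */
#if !defined(STM32L072xx)
#error "This HAL library was built with STM32L072xx defined - define it in the consuming project too"
#endif
#if !defined(USE_HAL_DRIVER)
#error "This HAL library was built with USE_HAL_DRIVER defined"
#endif
This only covers the defines you remember to list, of course, and it does not protect against changes inside stm32l0xx_hal_conf.h.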
Seeing how you're a Keil user - maybe it's just a matter of enabling multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. HAL library isn't so large that build times should be a concern when it comes to rebuilding its source files.
If I were to express my opinion and experience - personally I wouldn't do it, as it may lead to lower reliability or side effects that are very hard to diagnose, and it will only get worse as you add more source files and more libraries like this. Not to mention adding more people to the project and explaining to them how they "need to remember to rebuild X library when they change a given set of header files or project-wide definitions".
In fact, we've run into the same dilemma in the code base I work on - it spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. However, because this configuration is done through headers, making a change to them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries, for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I were to give any general tips that help to avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's usually tempting to shove all configuration into one place and create pretty comments and sections for each software module in this one file. It's easier to manage this way (until the file becomes too big), but because that file is included so widely, any change made to it will cause a full rebuild. Split such files into separate headers corresponding to each module in your project.
Include header files only where you need them. I sometimes see an approach where header files are created that only "bundle" other header files, and that larger header is then included everywhere. In this case, making a change to any of those "smaller" headers forces recompilation of every source file that includes the larger one. If such a file didn't exist, only the sources explicitly including that one small header would have to be recompiled. Obviously there's a line to be drawn here - including headers that are too "low level" may not be the greatest idea either, e.g. they may be internal library files that aren't meant to be included directly and may change at any time.
Prefer including headers in source files over including them in other headers. If you have a pair of your own *.c (*.cpp) and *.h files - say temp_logger.c/.h - and you need the ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header in your temp_logger.c file. Later on, none of the files making use of the temp_logger functions will have to be recompiled when the HAL headers change.
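A minimal sketch of that last tip, reusing the temp_logger example (the HAL umbrella header is assumed to be the usual stm32l0xx_hal.h; adjust to your project):
/* temp_logger.h - public interface only, no HAL includes */
#ifndef TEMP_LOGGER_H
#define TEMP_LOGGER_H
#include <stdint.h>
void temp_logger_init(void);
int32_t temp_logger_read_celsius(void);
#endif

/* temp_logger.c - the ADC/HAL dependency lives here, invisible to other modules */
#include "temp_logger.h"
#include "stm32l0xx_hal.h"   /* only this translation unit recompiles when HAL headers change */
Files that include temp_logger.h no longer pull in any HAL headers transitively, so a HAL change only forces temp_logger.c (and other sources that include HAL headers directly) to rebuild.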
My opinion is yes, build the HAL into a library. The benefit of faster build time outweighs the risk of the library getting out of date. After some point early in the project it's unusual for me to change something that would affect the HAL. But the faster build time pays off many times.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.

Visual Studio: debug information in release build

I'm tempted to include debug information in my release builds that go out to customers. As far as I see the only down side is 25% increase in the binary file size. The advantage is that I can get an immediately usable crash dump, much easier to analyze.
I'm willing to live with the 25% increase. Are there any other disadvantages I'm missing?
This is a C project and all I want to do is set Linker/Debugging/Generate Debug Info.
The size of the executable should increase much less than 25%.
I'm actually a little surprised that it increases much at all, but some quick tests show that at least one large example project (ScummVM) increases the .exe from 10,205,184 bytes to 10,996,224 bytes just by adding the /DEBUG option to the link step (about an 8% increase). /DEBUG is specified using the "Linker | Debugging | Generate Debug Info" option in the IDE. Note that this setting should have no effect on the optimizations generated by the compiler.
I know that a pointer to the .pdb file is put in the executable, but there's not much to that. I experimented a bit and found that enabling the /OPT:REF linker option changed the size difference to 10,205,184 vs. 10,205,696. So the non /DEBUG build stayed the same size, but the /DEBUG build dropped to only 512 bytes larger (which could be accounted for by the pointer-to-.pdb - maybe the linker rounds to some multiple of 512 or something). Much less than 1% increase. Apparently, adding /DEBUG causes the linker to keep unreferenced objects unless you also specify /OPT:REF. ("Linker | Optimization | References" option in the IDE).
The program will run fine without the .pdb file - you can choose to send it to customers if you want to provide a better debugging experience at the customer site. If you just want to be able to get decent stack traces, you don't need to have the .pdb file on the customer machine - they (or some tool/functionality you provide) can send a dump file, which can be loaded into a debugger at your site with the .pdb file available, and you get the same stack trace information post-mortem.
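A sketch of that post-mortem workflow with WinDbg from the Debugging Tools for Windows (the dump name and symbol path are placeholders):
windbg -z customer_crash.dmp
.sympath C:\symbols\MyApp\1.2.0    (debugger command: point at the archived .pdb)
.reload
!analyze -v                        (or 'k' for a plain stack trace)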
Of course, one thing to be aware of is that you'll need to archive the .pdb files along with your releases. The "Debugging Tools for Windows" package (which is now distributed in the Windows SDK) provides a symbol server tool so you can archive .pdb files and easily retrieve them for debugging.
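For example, a sketch of archiving a release's symbols with the symstore tool from that package (the store path, product name and version are placeholders):
symstore add /f Release\MyApp.pdb /s \\buildserver\symbols /t "MyApp" /v "1.2.0"
A debugger whose symbol path includes \\buildserver\symbols will then find the matching .pdb automatically.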
The only drawback that I can think of to distributing .pdb files is that it can make reverse engineering your application easier, if that's a concern for you. Note that Microsoft distributes symbols for Windows (using a public symbol server - as well as packages of the full symbols sets for some specific releases). However, the symbols they distribute do get run through a sanitizing step that removes certain items they consider sensitive. You can do the same (or similar) using the linker's /PDBSTRIPPED option ("Linker | Debugging | Strip Private Symbols" in the IDE). See the MSDN documentation for details on what the option removes. If you're going to distribute symbols, it's probably appropriate to use that option.
According to the Visual Studio 2005 documentation at Visual Studio 2005 Retired documentation:
/DEBUG changes the defaults for the /OPT option from REF to NOREF and from ICF to NOICF (so, you will need to explicitly specify /OPT:REF or /OPT:ICF).
In my case it helped when I enabled both:
/O2 /DEBUG /OPT:REF /OPT:ICF
You don't mention what language you're in, and there might be different answers for C++ vs. C#.
I'm not 100% sure what change you're considering making. Are you going to tell Visual Studio to make its standard Debug compile, and ship that, or are you going to edit a couple settings in the Release compile? A careful modification of a couple of settings in the Release build strikes me as the best approach.
Whatever you end up with, I'd make sure that optimizations are turned on, as that can make a significant difference in the performance of the compiled code.
I always send out the debug build, never the release build. I can't think of any disadvantages, and the advantages are as you mention.

Why have separate Debug and Release folders in Visual Studio?

By default, of course, Visual Studio creates separate bin folders for Debug and Release builds. We are having some minor issues dealing with those from the perspective of external dependencies, where sometimes we want release binaries and sometimes debug. It would make life slightly easier to just have a single bin folder on all projects and make that the target for both Debug and Release. We could then point our external scripts, etc. at a single location.
A co-worker questioned why we couldn't just do that--change the VS project settings to go to the same bin folder? I confess I couldn't really think of a good reason to keep them, other than easily being able to see on my local filesystem which are Debug and which are Release. But so what; what does that gain?
My question(s):
How do you leverage having distinct Debug and Release folders? What processes does this enable in your development?
What bad thing could happen if you fail to retain this distinction?
Inversely, if you have gone the "single folder" route, how has this helped you?
I am NOT asking why have separate Debug and Release builds. I understand the difference, and the place of each. My question concerns placing them in separate folders.
Dave, if you compile Debug and Release to a single folder, you may run into situations where some DLLs are not recompiled after switching from Release to Debug (and vice versa), because the DLL files are newer than the source files. Yes, a "Rebuild" should help you, but if you forget it you can lose a few extra hours to debugging.
The way I see it, this is simply a convenience on the developer's machine allowing them to compile and run both Debug and Release builds simultaneously.
If you have scripts or tools running inside Visual Studio, the IDE allows you to use the ConfigurationName and other macros to obtain paths which are configuration-independent.
If you are running scripts and tools externally from the command-line (i.e. you are structuring some kind of release or deployment process around it), it is better to do this on a build server, where the distinction between Debug and Release goes away.
For example, when you invoke msbuild from the command-line (on the build server) you can specify the Configuration property for Debug or Release, and the OutputPath property to build to one location only (regardless of the Configuration).
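For example (a sketch only; depending on project type you may need to override OutDir rather than OutputPath when building a whole solution):
msbuild MySolution.sln /p:Configuration=Release /p:OutputPath=C:\build\bin
msbuild MySolution.sln /p:Configuration=Debug /p:OutputPath=C:\build\bin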
One reason I use separate folders is that it guarantees I only generate installers that use Release-build code. I use WiX, which allows me to specify the exact path to the files I want to include in the installer, so I end up specifying the path into the Release folder. (Of course, you can do the same using normal VS installers, so that isn't really the point.) If you forget to switch your project to Release before building, the installer doesn't build unless you have old code in the Release folder, in which case you end up with an old installer - so that's a bit of a pitfall. The way I get around it is to use a post-build event on the WiX installer project that clears out the Release folder after the WiX installer builds.
In a previous company we got round this problem by changing the names of the debug executable and dlls by appending a "D". So
MainProgram.exe & library.dll
become
MainProgramD.exe & libraryD.dll
This meant that they could co-exist in the same output folder and all scripts, paths etc. could just reference that. The intermediate files still need to go to separate folders as the names of these can't be changed.
Obviously you need to change all your references to point to the modified name for debug - which you will forget to do at some point.
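In recent Visual C++ project property pages the renaming itself doesn't need to be done by hand; a sketch (setting names from memory, so treat them as approximate):
Configuration Properties > General > Target Name
    Debug configuration:    $(ProjectName)D
    Release configuration:  $(ProjectName)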
I usually compile in Debug mode, but sometimes need to compile in Release mode. Unfortunately, they don't behave exactly the same in certain error situations. By having separate folders, I don't need to recompile everything just to change modes (and a full recompile of our stuff in Release mode will take a while).
I have experience from a somewhat bigger project. If there are a few solutions using file references to other solutions, you have to point each reference to ONE directory, and obviously it has to be the "release" one for the continuous/nightly build. Now you can imagine what happens when a developer wants to work with debug versions - all the references point to release binaries. If they pointed to the same directory, switching to debug would only be a matter of recompiling all the related stuff in debug mode, and the file references would automatically pick up the debug versions from then on.
On the other hand, I don't see why a developer would ever want to work with release versions (and keep switching back and forth) - release mode is only useful for full/nightly builds, so the solutions in VS can stay in debug mode by default, and the build script (anyway) always does a clean, release build.
Occasionally one may run into a particularly nasty uninitialized-memory problem that only occurs in a release build. If you are unable to maintain (as ChrisF suggests) separate names for your debug vs. release binaries, it's really easy to lose track of which binary you're currently using.
Additionally, you may find yourself tweaking the compiler settings (i.e. optimization level, release-with-debug symbols for easy profiling, etc.) and it's much easier to keep these in order with separate folders.
It's all a matter of personal preference though - which is why Visual Studio makes it easy to change the options.
IDEs like Visual Studio do what works best for the crowd: they create the default project structure and binary folders. You could map the binaries to a single folder, but then you would need to educate the other developers that Release and Debug files are stored in the same folder.
Developers would then ask you why you do it that way.
In VC++, different libraries are generated for each configuration and you need to link the appropriate versions; otherwise you will get linker errors.
Being consistent in your assemblies is a good thing. You don't want to deal with issues around conditional compilation/etc. where your release and debug dlls are incompatible, but you're trying to run them against each other.
What everyone else said about the technical aspects is important. Another aspect is that you may run into race conditions if one build relies on the single output location but there is no synchronization between the two builds. If the first build can be re-run (especially in a different mode) after the second build starts, you won't really know whether you're using a debug or a release build.
And don't forget the human aspect: it's far easier to know what you're working with (and fix broken builds) if the two builds output to different locations.

Visual Studio 2008 Unnecessary Project Building

I have a C# project which includes one exe and 11 library files. The exe references all the libraries, and lib1 may reference lib2, lib3, lib4, etc.
If I make a change to a class in lib1 and build the solution, I assumed that only lib1 and the exe would need to be rebuilt. However, all dll's and the exe are being built if I want to run the solution.
Is there a way that I can stop the dependencies from being built if they have not been changed?
Is the key this phrase? "However, all dll's and the exe are being built if I want to run the solution"
Visual Studio will always try to build everything when you run a single project, even if that project doesn't depend on everything. This choice can be changed, however. Go to Tools|Options|Projects and Solutions|Build and Run and check the box "Only build startup projects and dependencies on Run". Then when you hit F5, VS will only build your startup project and the DLLs it depends on.
I just "fixed" the same problem with my VS project. Visual Studio did always a rebuild, even if didn't change anything. My Solution: One cs-File had a future timestamp (Year 2015, this was my fault). I opened the file, saved it and my problem was solved!!!
I am not sure if there is a way to avoid dependencies from being built. You can find some info here like setting copylocal to false and putting the dlls in a common directory.
Optimizing Visual Studio solution build - where to put DLL files?
We had a similar problem at work. In post-build events we were manually embedding manifests into the outputs in the bin directory. Visual Studio was copying project references from the obj dir (which weren't modified). The timestamp difference triggered unnecessary rebuilds.
If your post-build events modify project outputs then either modify the outputs in the bin and obj dir OR copy the modified outputs in the bin dir on top of those in the obj dir.
You can uncheck the build option for specified projects in your Solution configuration:
(source: microsoft.com)
You can create your own solution configurations to build specific project configurations...
(source: microsoft.com)
We actually had this problem on my current project, in our scenario even running unit tests (without any code changes) was causing a recompile. Check your build configuration's "Platform".
If you are using "Any CPU" then for some reason it rebuilds all projects regardless of changes. Try using processor specific builds, i.e. x86 or x64 (use the platform which is specific to the machine architecture of your machine). Worked for us for x86 builds.
(source: episerver.com)
Now, after I say this, some propeller-head is going to come along and contradict me, but there is no way to do what you want to do from Visual Studio. There is a way of doing it outside of VS, but first, I have a question:
Why on earth would you want to do this? Maybe you're trying to save CPU cycles, or save compile time, but if you do what you're suggesting you will suddenly find yourself in a marvelous position to shoot yourself in the foot. If you have a library 1 that depends upon library 2, and only library 2 changes, you may think you're OK to build only the changed library, but one of these days you are going to make a change to library 2 that will break library 1, and without a build of library 1 you will not catch it in the compilation. So in my humble opinion, DON'T DO IT.
The reason this won't work in VS2005 and 2008 is because VS uses MSBuild. MSBuild runs against project files, and it will examine the project's references and build all referenced projects first, if their source has changed, before building the target project. You can test this yourself by running MSBuild from the command line against one project that has not changed but with a referenced project that has changed. Example:
msbuild ClassLibrary4.csproj
where ClassLibrary4 has not changed, but it references ClassLibrary5, which has changed. MSBuild will build lib 5 first, before it builds 4, even though you didn't mention 5.
The only way to get around all these failsafes is to use the compiler directly instead of going through MSBuild. Ugly, ugly, but that's it. You will basically be reduced to re-implementing MSBuild in some form in order to do what you want to do.
It isn't worth it.
Check out the following site for more detailed information on when a project is built as well as the differences between build and rebuild.
I had this problem too, and noticed these warning messages when building on Windows 7 x64, VS2008 SP1:
cl : Command line warning D9038 : /ZI is not supported on this platform; enabling /Zi instead
cl : Command line warning D9007 : '/Gm' requires '/Zi'; option ignored
I changed my project properties to:
C/C++ -> General -> Debug Information Format = /Zi
C/C++ -> Code Generation -> Enable Minimal Build = No
After rebuilding I switched them both back and dependencies work fine again. But prior to that no amount of cleaning, rebuilding, or completely deleting the output directory would fix it.
I don't think there's a way for you to do it out of the box in VS. You need this add-in:
http://workspacewhiz.com/
It's not free but you can evaluate it before you buy.
Yes, exclude the non-changing bits from the solution. I say this with a caveat: depending on how you compile, a change in the build number of the changed lib can cause the non-built pieces to break. This should not be the case as long as you do not break the interface, but it is quite common, because most devs do not understand interfaces in the .NET world. It comes from not having to write IDL. :-)
As for X projects in a solution: no, you can't stop them from building, as the system sees that a dependency has changed.
BTW, you should look at your project and figure out why your UI project (assume it is UI) references the same library as everything else. A good Dependency Model will show the class(es) that should be broken out as data objects or domain objects (I have made an assumption that the common dependency is some sort of data object or domain object, of course, but that is quite common). If the common dependency is not a domain/data object, then I would rethink my architecture in most cases. In general, you should be able to create a path from UI to data without common dependencies other than non-behavioral objects.
Not sure of an awesome way to handle this, but in the past if I had a project or two that kept getting rebuilt, and assuming I wouldn't be working in them, I would turn the build process off for them.
Right click on the sln, select configuration manager and uncheck the check boxes. Not perfect, but works when Visual Studio isn't behaving.
If you continue to experience this problem, it may be due to a missing or out of date calculated dependency (like a header) that is listed in your project, but does not exist.
This happens to me especially often after migrating to a new version (for example, from 2012 to 2013), because VS may have recalculated dependencies during the conversion, or you are migrating to a new location.
A quick check is to double-click every file in offending project from solution explorer. If you discover a file does not exist, that is your problem.
Failing a simple missing file: You may have a more complicated build date relationship between source and target. You can use a utility to find out what front-end test is triggering the build. To get that information you can enable verbose CPS logging. See: Andrew Arnott - Enable C++ and Javascript project system tracing (http://blogs.msdn.com/b/vsproject/archive/2009/07/21/enable-c-project-system-logging.aspx). I use the DebugView option. Invaluable tool when you need it.
(this is a C# specific question, but a different post was merged as identical)

Resources