In my project, I want to get rid of tons of empty and pointless cpp files for interfaces in IBM Rational Rhapsody.
Setting CPP_CG:File:Generate to Specification yields generation of only the header file of a class, which is almost what I want. But the makefile (gpj) still looks for the *Ifc.cpp file. Is there a straightforward way to exclude these cpp files from the makefile?
There is an option, CG::File::AddToMakefile, which only works for component files. I found some info that it used to work, but with Rhapsody 8 it stopped working.
You should be able to force the suppression of either the header or the implementation file of the interface using those properties. However!
Rhapsody expects to find the cpp file of the interface, and suppressing it will cause problems with the roundtrip function - roundtripping doesn't just occur explicitly, it is also triggered implicitly by default when you save the project or change focus from the code editor to the model browser.
During this roundtrip, Rhapsody will try to "fix" the model by recreating the missing cpp file. This will be followed by roundtrip error messages. Disregarding the errors and continuing with the roundtrip will probably cause duplicate elements and all sorts of mess.
In other words, what you're trying to do is not really supported and is a bad idea.
Related
I have a fairly large C++ project. I use the Microsoft C++ extension with the default settings.
Intellisense sometimes takes over a minute to catch up with suggestions.
In order to solve the problem, I went through the settings. I saw that it is possible to set the Intellisense engine to "Tag Parser", which solves all the performance problems as expected, but unfortunately it also deactivates Error Squiggles.
Is it possible to combine Tag Parser Intellisense with the default error checking? If the error checking took a minute, that wouldn't be such a problem.
Or do you have other ideas to solve the performance problem?
It depends on what compiler you're running. I'm using Visual Studio Build Tools. For my UE4 project, source file Intellisense is pretty snappy but header files are really slow. Sadly, I believe headers will always be slow on large projects using the default Intellisense. This is because VSCode recalculates Intellisense with every line of a header file you write.
Off topic info:
This is also why you should disable Auto PCH with “C_Cpp.intelliSenseCacheSize” set to 0. Let's say you have a class with a large AutoPCH cache file of 1 Gigabyte (not uncommon with Unreal Engine). VSCode is now deleting and writing this cache with every line you type in a header file. This won't help with Intellisense speed unfortunately.
Back to topic:
Make sure to custom tailor your "C_Cpp.default.includePath" and only include the paths you need. This is used for Default Intellisense but will also be used for Tag Parser if you don't specify any Tag Parser path with "C_Cpp.default.browse.path". Also note that any path in "C_Cpp.default.includePath" is NOT recursively searched for subfolders. You can override this behavior by adding /** to the end of the path. But use it wisely.
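As a rough sketch (the folder names below are placeholders, not paths from a real project), the two settings might look like this in settings.json:

"C_Cpp.default.includePath": [
    "${workspaceFolder}/Source",
    "${workspaceFolder}/ThirdParty/include/**"
],
"C_Cpp.default.browse.path": [
    "${workspaceFolder}/Source"
]

Here the first include path is not searched recursively, the second one is because of the trailing /**, and the browse path is set explicitly so the Tag Parser doesn't fall back to the include paths plus the whole workspace.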
For a large project, a better way than using the includePath settings is to use compile commands (a compilation database). "C_Cpp.default.compileCommands" lets you use a json file to custom tailor every include path for every header and source file you have. Default Intellisense then only uses the include files it needs for a particular file. You'll have to research how to create a compile commands json file; there are tools that can auto-create it. Unreal Engine actually moved to compile commands for 4.26, but there were no speedups because they still include everything for every file. I believe they have to do it this way based on how UE4 header includes are set up, but I'd have to research that.
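For reference, a compile commands file is just a json array with one entry per source file (the paths and flags below are invented for illustration), and the setting then points at that file:

[
    {
        "directory": "C:/Projects/MyGame",
        "command": "cl.exe /c /IC:/Projects/MyGame/Source /IC:/Projects/MyGame/ThirdParty/include Source/MyActor.cpp",
        "file": "Source/MyActor.cpp"
    }
]

"C_Cpp.default.compileCommands": "${workspaceFolder}/compile_commands.json"

With that in place, Intellisense for MyActor.cpp only considers the /I directories listed in its own entry.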
Another way is to limit the #includes in header files and use forward declarations, putting your #includes in source files instead. This is probably one of the main reasons Unreal Engine header Intellisense is so slow. When creating a class, it auto-includes header files in the class's header file. Those included headers also include other headers, and so on. So when VSCode recalculates header file Intellisense it's very slow. Source files don't seem to care though.
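A minimal sketch of the idea (the class names here are invented, not real engine types):

// Widget.h - a forward declaration is enough for pointers and references
class HeavyDependency;

class Widget
{
public:
    void Process(HeavyDependency& Dep);
};

// Widget.cpp - only the source file pulls in the heavy header
#include "Widget.h"
#include "HeavyDependency.h"

void Widget::Process(HeavyDependency& Dep)
{
    Dep.DoWork();   // the complete type is needed here, so the include lives here
}

Editing HeavyDependency.h then only forces Widget.cpp to be reparsed, not every file that includes Widget.h.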
More info:
I forgot to mention Tag Parser Intellisense. Tag Parser info relies on "C_Cpp.default.browse.path" for include files. If you don't specify anything in "C_Cpp.default.browse.path", then:
Anything in "C_Cpp.default.includePath" will be used for Tag Parser Intellisense.
Your project's path will automatically be added (see the note below on why this might be bad).
Note: Any path used for the browse path that doesn't have /* at the end will be searched recursively for any subfolders that also have header files to cache.
Note: For multi-folder workspaces, all of the workspace folders' directories (recursively) will also be added to the Tag Parser includes, resulting in one big Tag Parser cache file.
The Tag Parser is still used when using Default Intellisense. It's used for switching between header and source (Alt+O) and for finding symbols. So you should still tweak the Tag Parser paths to your liking.
For large projects the Tag Parser can take 20+ minutes to finish creating its cache file when you first open the project. You won't get Tag Parser Intellisense until it's finished. For Unreal Engine 4.26 this file is around 1.5 Gigabytes. The good thing, though, is that it doesn't get deleted and recreated like the Default Intellisense cache files.
You may need to tweak your "C_Cpp.default.browse.path" instead of not specifying one and letting VSCode do what it wants.
I will say that for source files you will miss having Default (context-aware) Intellisense. But you can try Tag Parser Intellisense if you can get it to work.
Note: You can see when Intellisense is working. Default Intellisense shows a fire symbol and Tag Parser Intellisense shows a cylinder, both found on the bottom bar.
I have a Keil STM32 project for an STM32L0. I sometimes (more often than I want) have to change the include paths or global defines. This triggers a complete recompile of all code, because the build needs to 'check' for behaviour changed by these modifications. The problem is: I didn't necessarily change parameters relevant to the HAL, so (as far as I understand) there is no need for these files to be completely recompiled. This recompilation takes up quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done for every microcontroller separately as they have different HALs).
ps. the question is not necessarily only useful for this specific example but the example gives some scope to the question.
pps. for people who aren't familiar with the STM32 HAL: it is the standardized interface through which the program talks to the underlying hardware. It is supplied as .c and .h files rather than in precompiled form like the STD/STL.
update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx,USE_B_BOARD,USE_HAL_DRIVER, REGION_EU868,DEBUG,TRACE
Only STM32L072xx and DEBUG are relevant to the configuration of the HAL library, so there shouldn't be a need for me to recompile the HAL when I change TRACE from defined to undefined. Therefore it seems to me that the HAL could be managed separately.
edit
Seeing as a close vote has been cast: I've read the "don't ask" section, and my question seeks to constructively add to the knowledge of building STM32 programs and to find a best practice for using the HAL libraries more effectively. I haven't found any questions on SO about building the HAL as a static library, so this question at least qualifies as unique. It is also meant to invite a rich answer which elaborates on the pros/cons of building the HAL as a separate static library.
The answer here is: it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - having HAL sources directly in your project means rebuilding HAL every time anything in its (and underlying) headers changes, which you've already noticed. Downside of it is longer build times. Upside - you are sure that what you build is what you get.
Option #2 - having HAL as a precompiled static library. Upside - shorter build times, downside - you can no longer be absolutely certain that the HAL library you include actually works as you want it to. In particular, you'd need to make sure in some way that all the #defines are exactly the same as when the library has been built. This includes project-wide definitions (DEBUG, STM32L072xx etc.), as well as anything in HAL config files (stm32l0xx_hal_conf.h).
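One possible way to reduce that risk (a sketch only - the header name is invented, and the macro list is taken from the defines in the question) is to keep a small check header next to the prebuilt library and include it from the application, so a mismatch fails at compile time:

/* hal_config_check.h - hypothetical header shipped alongside the prebuilt HAL.
   It lists the macros the library was built with; the application includes it
   so a mismatch is caught by the compiler instead of at run time. */
#ifndef HAL_CONFIG_CHECK_H
#define HAL_CONFIG_CHECK_H

#ifndef STM32L072xx
#error "The prebuilt HAL was compiled for STM32L072xx - define it in the application project as well"
#endif

#ifndef USE_HAL_DRIVER
#error "The prebuilt HAL was compiled with USE_HAL_DRIVER - define it in the application project as well"
#endif

#endif /* HAL_CONFIG_CHECK_H */

This only covers project-wide definitions; anything configured inside stm32l0xx_hal_conf.h still has to be kept identical by hand (or by sharing the exact same conf file between the library and the application).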
Seeing how you're a Keil user - maybe it's just a matter of enabling multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. HAL library isn't so large that build times should be a concern when it comes to rebuilding its source files.
If I were to express my opinion and experience: personally I wouldn't do it, as it may lead to lower reliability or side effects that will be very hard to diagnose, and it will only get worse as you add more source files and more libraries like this. Not to mention adding more people to the project and having to explain to them that they "need to remember to rebuild library X when they change a given set of header files or project-wide definitions".
In fact, we've run into the same dilemma in the code base I work on - it spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. However, because this configuration is done through headers, making a change in them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I were to give some general tips that help avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's usually tempting to shove all configuration into one place and create pretty comments and sections for each software module in this one file. It's easier to manage this way (until the file becomes too big), but because this file is so commonly included, any change made to it will cause a full rebuild. Split such files into separate headers corresponding to each module in your project.
Include header files only where you need them. I sometimes see an approach where header files are created that only "bundle" other header files, and that bundle header is then included everywhere. In that case, making a change to any of those "smaller" headers forces recompilation of every source file that includes the larger one. If such a file didn't exist, only the sources explicitly including that one small header would have to be recompiled. Obviously there's a line to be drawn here - including headers that are too "low level" may not be the greatest idea either, e.g. they may not be meant to be included directly, being internal library files which may change at any time.
Prefer including headers in source files rather than in other header files. If you have a pair of your own *.c (*.cpp) and *.h files - let's say temp_logger.c/.h - and you need the ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header file in your temp_logger.c file (see the sketch below). Later on, all files making use of the temp_logger functions won't have to be recompiled whenever the HAL gets rebuilt.
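A minimal sketch of that last tip, reusing the hypothetical temp_logger/ADC example (the HAL header and type names assume the STM32L0 HAL; adjust to your part):

/* temp_logger.h - deliberately includes no HAL/ADC headers */
#ifndef TEMP_LOGGER_H
#define TEMP_LOGGER_H

void temp_logger_init(void);
float temp_logger_read_celsius(void);

#endif /* TEMP_LOGGER_H */

/* temp_logger.c - the ADC/HAL header is included here, and only here */
#include "temp_logger.h"
#include "stm32l0xx_hal.h"          /* brings in ADC_HandleTypeDef and the ADC API */

static ADC_HandleTypeDef hadc;      /* HAL details stay private to this file */

void temp_logger_init(void)
{
    /* configure hadc and call HAL_ADC_Init(&hadc) here ... */
}

float temp_logger_read_celsius(void)
{
    /* HAL_ADC_Start(&hadc); HAL_ADC_PollForConversion(&hadc, 10); ... */
    return 0.0f;
}

Because only temp_logger.c sees the HAL headers, touching them (or rebuilding the HAL) no longer forces every user of temp_logger.h to recompile.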
My opinion is yes, build the HAL into a library. The benefit of faster build time outweighs the risk of the library getting out of date. After some point early in the project it's unusual for me to change something that would affect the HAL. But the faster build time pays off many times.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.
I have a 20+ year old .dll, written in C, that none of my colleagues want to touch. With good reason: it uses macros, macro constants and casting EVERYWHERE, making the symbol table quite lean.
Unfortunately, I have to sometimes debug this code and it drives me crazy that it doesn't use something as simple as enums which would put symbols in the .pdb file to make debugging just that little bit easier.
I would love to convert some of the #defines to enums, even if I don't change the variable types as yet, but there is a genuine fear that it could cause performance issues if it were to change the generated code.
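For illustration only (the names and values below are invented), the kind of change being considered is simply:

/* before */
#define STATUS_OK      0
#define STATUS_TIMEOUT 1
#define STATUS_ERROR   2

/* after - same integer values, but now the names survive into the .pdb */
enum Status {
    STATUS_OK      = 0,
    STATUS_TIMEOUT = 1,
    STATUS_ERROR   = 2
};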
I need to show definitively that no compiled code changes will occur, but it would seem that the .dll actually changes significantly in a 64-bit build. I looked at the disassembly of one of the functions and it appears to be unaffected, but I need to show what is and is not changing in the binary to alleviate my colleagues' fears as well as some of my own trepidation - plus the bewilderment as to why any changes would propagate to the .dll at all, even though the .dlls are the same size.
Does anyone have any idea how I could do this? I've tried to use dumpbin, but I'm not that familiar with it and am getting some mixed results, probably because I'm not understanding the output as well as I'd like.
The way I did this was as follows:
Turn on the /FAs switch for the project.
Compile that project.
Move the object file directory (Release => Release-without-enums)
Change #defines to enums
Compile that project again.
Move the object file directory (Release => Release-with-enums)
From a bash command line, use the following command from the parent of the Release directory:
for a in Release-without-enums/*.asm; do
    git diff --no-index --word-diff --color -U10000 "$a" "Release-with-enums/$(basename "$a")";
done | less -R
The -U10000 is just so that I can see the entire contents of each file. Remove it if you just want to see the changes.
This will list all of the modifications in the generated assembly code.
The changes found were as follows:
1) Symbol addresses were moved about for no apparent reason.
2) Referencing __FILE__ seems to result in not getting a full path when using enums. Why this would translate to removing the full path when using enums is a mystery, as the compiler flags have not changed.
3) Some symbols were renamed for no apparent reason.
Edit
Items 2 and 3 seem to have been caused by a corrupted .pdb. This might be due to the files being used in multiple projects in the same solution. Rebuilding the entire solution fixed both problems.
I'm currently trying to make splint available as an external tool in Visual Studio 2010.
It has problems finding all the includes for the file, since it seems that the INCLUDE variable is only set at build time, and I haven't found any other way to extract the include directories.
My question: would there be any way to extract the IncludeDir field from the current file's project's property pages, ideally together with VC++'s AdditionalIncludeDirectories?
Note also that AdditionalIncludeDirectories is per file, as it can be changed for individual source files as well as on the project level, and if it contains macros it can evaluate differently for each source file too!
I'm not familiar with driving the MSBuild objects via the API, but that's what the IDE uses. Whether that way or by simply running MSBuild.exe, you need to get it to figure out all the properties, conditions, etc. and then tell you the result. If everything is well behaved, you could create a target that uses the ClCompile item array and emits the %(AdditionalIncludeDirectories) metadata, for example by writing it to a file or passing it to your other tool. That's what's used to generate the /I parameters to CL, and you can get the same values.
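As a rough sketch of such a target (the target name is invented, and this assumes it is pasted into the .vcxproj or into a .targets file the project imports):

<Target Name="DumpIncludeDirs">
  <!-- Batches over the ClCompile items and prints each file's include directories -->
  <Message Importance="High"
           Text="%(ClCompile.Identity): %(ClCompile.AdditionalIncludeDirectories)" />
</Target>

Running something like msbuild MyProject.vcxproj /t:DumpIncludeDirs /p:Configuration=Debug then prints the per-file values; a WriteLinesToFile task could be used instead of Message to dump them to a file for splint to pick up.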
If things are not well behaved, in that necessary values are changed during the detailed build process, you would need to get the same preliminaries done just as the ClCompile target normally does. Or simply override ClCompile with your own target (the last definition of a target is the one used) so it certainly runs in the same context.
Either way, there are places where build script files can be automatically included into all projects, so you can add your stuff there or use a command argument (I think) to MSBuild to add another Include.
—John
I inherited a CXF/Hibernate/JBoss based project that includes a file named database.xsd. I combed the project to find out which subsystem/component in the system uses database.xsd, but that yielded only one reference, in a file used by the maven-war-plugin to create the WAR file (webapp-cache.xml).
To me this suggests that database.xsd is some standard filename expected by some framework or plugin. But which is it? Hibernate? CXF? Other?
Is there documentation that actually describes the role of database.xsd in the package that relies on it?
UPDATE: Temporarily removing database.xsd and trying to rebuild resulted in numerous compilation errors that led to a principal XML2SQL.java file using a package referenced by the *.hbm.xml DTO files. This tells me that the culprit is... Hibernate!
It's Hibernate, because temporarily removing database.xsd and trying to rebuild resulted in numerous compilation errors that led to a principal XML2SQL.java file using a package referenced only by the *.hbm.xml DTO files (see update above).
I would approach this the "old fashioned" way: good logging plus a binary search approach.
First in a test environment:
1) Make sure your code is set up to use log4j and has the most extensive logging level turned on.
2) With the database.xsd removed, determine roughly the "half way point" of failure. For example, say that with the system set up correctly it produces 1000 lines of logging, but with database.xsd removed it fails to load and halts with just 500 lines of logging. Looking at the logging, determine what classes/methods are being called. (Another way to work with this, instead of removing database.xsd, is to introduce a copy with syntax errors into your test environment.)
3) Using what you learn in step 2, add more logging and try/catches to capture more information. If this does not allow you to narrow down the target, repeat, focusing on the "250 logging line" point. Continue cutting the problem space in half each time until you have found the target classes.
Many Java programs I have seen just code for the "happy path" and rely on top-level exception handling to catch and log errors, but this can lead to very thick (huge numbers of lines) and hard-to-read log files.