How to make a MIB configuration file accessible to three different libraries? - snmp

My question is based on the following scenario:
I have three SNMP libraries that I built myself: one in Node.js, one in C++, and one in Java.
There is a project that responds to SNMP queries from all three of these libraries. (Within this project we can add new configurations to the MIB file.)
I have a MIB file with all the configurations and the mapping of each query, query name, and its OID.
The problem is that this MIB file needs to be accessed by all three libraries quite frequently.
How do I keep the file synchronized across the three libraries while the project keeps adding new configurations?
In summary: how do I make that file a single source of truth used by all three libraries, keep it synchronized, and still be able to update it?
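One pattern that would fit here - a sketch only, assuming all three consumers can read the same file path; the class and member names below are invented - is to treat the file itself as the single source of truth and have each library reload it only when its modification time changes, so readers always pick up the writer's newest configurations without re-parsing on every query. In C++ (one of the three languages involved):

    #include <filesystem>
    #include <fstream>
    #include <sstream>
    #include <string>

    namespace fs = std::filesystem;

    // Sketch: the MIB file on shared storage is the single source of truth.
    // Each library keeps a cached copy and re-reads it only when the file's
    // timestamp changes, i.e. when the project has added new configurations.
    class MibCache {
        fs::path path_;
        fs::file_time_type last_seen_{};
        std::string contents_;  // stands in for the parsed query/OID mapping

    public:
        explicit MibCache(fs::path p) : path_(std::move(p)) {}

        const std::string& get() {
            const auto stamp = fs::last_write_time(path_);
            if (stamp != last_seen_) {     // the writer updated the file
                std::ifstream in(path_);
                std::ostringstream buf;
                buf << in.rdbuf();
                contents_ = buf.str();     // a real consumer would parse here
                last_seen_ = stamp;
            }
            return contents_;
        }
    };

The same stat-and-reload check is easy to express with fs.statSync in Node.js and java.nio.file.Files.getLastModifiedTime in Java, so all three libraries can share the one file without a separate synchronization service. If the consumers run on different machines, the file would instead have to live behind a small service or shared store.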

Related

Should the STM32 HAL be included as a precompiled library

I have a Keil STM32 project for an STM32L0. I sometimes (more often than I want) have to change the include paths or global defines. This triggers a complete recompile of all code because the toolchain needs to check for behaviour changed by these modifications. The problem is that I didn't necessarily change anything relevant to the HAL, so (as far as I understand) there is no need for those files to be completely recompiled. This recompilation takes quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done separately for every microcontroller, as they have different HALs.)
P.S. The question is not necessarily only useful for this specific example, but the example gives some scope to the question.
P.P.S. For people who aren't familiar with the STM32 HAL: it is the standardized interface through which the program talks to the underlying hardware. It is supplied as .c and .h files rather than in the precompiled form of the STD/STL.
Update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx,USE_B_BOARD,USE_HAL_DRIVER, REGION_EU868,DEBUG,TRACE
Only STM32L072xx and DEBUG are relevant for configuring the HAL library, so there shouldn't be a need for me to recompile the HAL when I change TRACE from defined to undefined. It therefore seems to me that the HAL could be managed separately.
Edit
Seeing as a close vote has been cast: I've read the don't-ask section, and my question seeks to constructively add to the knowledge of building STM32 programs and to find a best practice for using the HAL libraries more effectively. I haven't found any questions on SO about building the HAL as a static library, so this question at least qualifies as unique. It is also meant to invite a rich answer that elaborates on the pros and cons of building the HAL as a separate static library.
The answer here is: it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - keeping the HAL sources directly in your project means rebuilding the HAL every time anything in its (and underlying) headers changes, which you've already noticed. The downside is longer build times; the upside is that you are sure that what you build is what you get.
Option #2 - using the HAL as a precompiled static library. The upside is shorter build times; the downside is that you can no longer be certain that the HAL library you link actually behaves the way you expect. In particular, you'd need some way of making sure that all the #defines are exactly the same as when the library was built. This includes project-wide definitions (DEBUG, STM32L072xx, etc.) as well as anything in the HAL config files (stm32l0xx_hal_conf.h).
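One way to mitigate that risk - a sketch of a link-time guard, not something the HAL provides; the function names are invented - is to compile a "configuration stamp" into the library whose symbol name encodes a critical define, and reference it from the application, so a mismatch becomes a link error instead of silent misbehaviour:

    /* hal_config_stamp.c - compiled into the static library */
    #ifdef DEBUG
    void hal_lib_built_with_DEBUG(void) {}
    #else
    void hal_lib_built_without_DEBUG(void) {}
    #endif

    /* app_config_check.c - compiled into the application; it references the
       stamp matching the application's own defines, so the link fails if the
       library was built with a different DEBUG setting */
    #ifdef DEBUG
    extern void hal_lib_built_with_DEBUG(void);
    void (*const hal_config_check)(void) = hal_lib_built_with_DEBUG;
    #else
    extern void hal_lib_built_without_DEBUG(void);
    void (*const hal_config_check)(void) = hal_lib_built_without_DEBUG;
    #endif

The same trick extends to the other critical defines (one stamp per define, or a single symbol whose name concatenates them); a linker configured to strip unreferenced sections may need to be told to keep the check symbol.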
Seeing how you're a Keil user - maybe it's just a matter of enabling the multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. The HAL library isn't so large that build times should be a concern when it comes to rebuilding its source files.
If I were to express my opinion and experience: personally I wouldn't do it, as it may lead to lower reliability or side effects that are very hard to diagnose and will only get worse as you add more source files and more libraries like this. Not to mention adding more people to the project and explaining to them how they "need to remember to rebuild library X when they change a given set of header files or project-wide definitions".
In fact, we ran into the same dilemma in the code base I work on. It spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. However, because this configuration is done through headers, making a change in them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I were to give some general tips that help to avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's usually tempting to shove all configuration into one place, with pretty comments and a section for each software module in this one file. It's easier to manage this way (until the file becomes too big), but because this file is included so widely, any change made to it will cause a full rebuild. Split such files into separate headers corresponding to each module in your project.
Include header files only where you need them. I sometimes see header files whose only job is to "bundle" other header files, and it is the bundle that gets included everywhere. In that case, a change to any of the smaller headers forces recompilation of every source file that includes the larger one. If the bundle didn't exist, only the sources explicitly including the one small header would have to be recompiled. Obviously there's a line to be drawn here - including headers that are too "low level" may not be the greatest idea either; for example, internal library headers may not be meant to be included directly and may change at any time.
Prioritize including headers in source files over including them in other headers. If you have a pair of your own .c (.cpp) and .h files - say temp_logger.c/.h - and you need the ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header in your temp_logger.c file. That way, the files making use of the temp_logger functions won't have to be recompiled whenever the HAL headers change.
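As a sketch of that last tip, using the temp_logger example (the HAL header name assumes an STM32L0 project; the function bodies are elided):

    /* temp_logger.h - the public interface needs no HAL types */
    #ifndef TEMP_LOGGER_H
    #define TEMP_LOGGER_H

    #include <stdint.h>   /* only what the interface itself needs */

    void temp_logger_init(void);
    int32_t temp_logger_read_celsius(void);

    #endif

    /* temp_logger.c - the ADC/HAL dependency stays private to this file */
    #include "temp_logger.h"
    #include "stm32l0xx_hal.h"   /* the HAL (and its ADC driver) is included here only */

    static ADC_HandleTypeDef hadc;

    void temp_logger_init(void)
    {
        /* ...fill hadc.Init and call HAL_ADC_Init(&hadc)... */
    }

    int32_t temp_logger_read_celsius(void)
    {
        /* ...HAL_ADC_Start / HAL_ADC_PollForConversion / HAL_ADC_GetValue... */
        return 0;
    }

Files that include temp_logger.h now recompile only when that small header changes, not when the HAL headers do.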
My opinion is yes, build the HAL into a library. The benefit of the faster build time outweighs the risk of the library getting out of date. Past some early point in the project, it's unusual for me to change something that would affect the HAL, and the faster build time pays off many times over.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.

CMake - Build custom build paths for different configurations

I have a project whose make process generates different build artifacts for each configuration. For example, if initiated with make a=a0 b=b0, it builds the object files into builds/a0.b0, generates a binary myproject.a0.b0, and finally updates a generic executable link to point to the most recently built binary (ln -s myproject.a0.b0 myproject). For this project this is a useful feature, mainly because:
It separates the object files for the different configurations (so when I rebuild under another configuration I don't have to recompile every single source with the new defines and settings, which is, unfortunately, a very common procedure).
It retains the binaries for each configuration (so using a different configuration doesn't require a rebuild if that configuration has already been built).
It makes a copy of (or link to) the last built binary, which is useful in testing.
Right now this is implemented in an ugly, decades-old, non-portable Makefile. I'd like to reproduce the same behaviour in CMake to allow easier building on other platforms, but I have not been able to reproduce it in any reasonable manner. Adding targets with add_library/add_executable seems like the way to go, but doing this for each possible permutation of input parameters seems wrong, and I'm not sure I could get the usage right, allowing make, make a=a0, and make b=b0 a=a0, as opposed to what specifying a CMake target would require: make myproject-a0.b0.
Is this possible to do in CMake? Namely:
Specify the build directory based on input parameters.
Accept the parameters as make arguments which can be left out (defaulted) to select the appropriate target at the level of the makefile (so a new configuration doesn't require rerunning cmake).
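One common approximation in CMake - a sketch, with cache variables A and B standing in for the Makefile's a and b arguments - is to give every configuration its own out-of-source build tree and derive the artifact name from the cached values:

    # CMakeLists.txt (sketch)
    cmake_minimum_required(VERSION 3.16)
    project(myproject C)

    # Configuration axes; cached, so each build tree remembers its own values.
    set(A "a0" CACHE STRING "first configuration axis")
    set(B "b0" CACHE STRING "second configuration axis")

    add_executable(myproject main.c)
    target_compile_definitions(myproject PRIVATE CONFIG_A_${A} CONFIG_B_${B})

    # Produces myproject.a0.b0, myproject.a1.b0, ... per build tree.
    set_target_properties(myproject PROPERTIES OUTPUT_NAME "myproject.${A}.${B}")

Configuring with cmake -S . -B builds/a0.b0 -DA=a0 -DB=b0 and building with cmake --build builds/a0.b0 covers the separate object directories and the retained binaries, and the convenience link could be recreated with an add_custom_command(TARGET myproject POST_BUILD ...) step. The make a=a0 argument style itself has no direct CMake equivalent, though; a thin wrapper Makefile that forwards to the right build tree is a common workaround.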

How to reference projects not in same root

Like most people, we use third-party libraries, many of which have source that we keep in our VCS.
Currently, if these libraries are updated, we need to pull the source manually and rebuild the binaries.
I am trying to find a way to instead reference them from the various solutions that use them, so that they are automatically pulled from source control when you pull a dependent project, and automatically built if they are out of date. It would also be nice to be able to debug into them with the provided source.
The first problem I am having is that the libraries are not in the same solution root as the dependent projects, e.g.
\Libraries
    \External
        \Lib1
            Lib1.sln
\Products
    \Product1
        Product1.sln
Attempting to add Lib1.csproj to my Product1 solution gives me the warning:
The project that you are attempting to add to source control may cause
other source control users to have difficulty opening this solution or
getting newer versions of it. To avoid this problem, add the project
from a location below the binding root (C:\depot\Products\Products1)
of the other source controlled projects in the solution.
If I ignore this then I can set up the build dependencies properly, but it still doesn't allow pulling the entire source tree in one go.
I was wondering how other people have their third-party libraries set up, particularly when there is source code. (We are using Perforce, but I guess the question is relevant for any VCS.)
One way to solve this in Perforce is to put all modules / third-party software that are meant to be reused into a separate location (depot), for example //shared or similar.
Products (trees in your SCM / Perforce) can "link" the required modules by mapping them into the workspace. In Perforce you can do that via client views.
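For example, a client view along these lines (the depot paths and workspace name here are invented) maps a shared module into a product workspace:

    View:
        //depot/Products/Product1/...   //product1-ws/Product1/...
        //shared/Lib1/...               //product1-ws/Product1/Libraries/Lib1/...

A single p4 sync then pulls the product and its shared libraries in one go.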
If you have many people working on many products, you'll need an easy mechanism to set up a personal workspace for a product properly (without requiring the developers to set up their client views manually).
One possibility is a small self-written tool/script that creates a workspace and prepares the personal client view based on a template located in the product root, which defines which modules from the //shared depot need to be mapped to which locations in the client workspace.
We have been using this practice for years and it works fine. The danger is that the client views can get very complex.

Techniques for creating custom wix installers for multiple clients

I'm creating an installer for a client at the moment, but I know that I'll have to create another in a couple of weeks for a different client. What techniques do people use to keep things tidy? The only differences will be whether certain DLLs are included with the installer and which initial config file is included.
I was thinking of creating a main wxs file which has most of the shared installation information in it, and a secondary file customised to the client which would control which components should be included.
Either that, or rewrite the main wxs file for each client - but that means maintaining a full wxs file for each client, with lots of duplicated information.
I assume many other people have come across this situation, and I would like to know if I'm on the right path or if there are other, much better solutions.
Thanks for any help, Neil.
The solution you choose will depend on how extensive the differences are going to be between the different client installers and how many different clients you'll have to support.
If you only have to maintain 2-3 client installers with a maximum of about 10 varying files between them, just create a single shared .wxs file that brings in a client-specific .wxi based on build-time parameters. It's easy enough to manually create and maintain 2-3 client-specific .wxi files.
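A sketch of that layout (the file names and the IncludeExtraDll define below are invented):

    <!-- ClientA.wxi - everything that varies per client -->
    <Include>
      <?define ProductName = "MyApp (Client A)" ?>
      <?define IncludeExtraDll = "yes" ?>
    </Include>

    <!-- Main.wxs fragment - compiled with: candle.exe -dClient=ClientA Main.wxs -->
    <?include $(var.Client).wxi ?>
    <?if $(var.IncludeExtraDll) = "yes" ?>
      <!-- component entries for the optional DLLs go here -->
    <?endif ?>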
If you have to maintain more clients, or there are 50 different possible DLL/config-file permutations, I'd make use of WiX's heat.exe "harvester" tool. You would create a staging directory for every client you had to support, containing the DLLs/config files required for that installer, use heat to harvest each directory into a separate .wxs file, and then compile a single shared .wxs file against each of the harvested .wxs files to create the client installers. This makes the build process a little more complicated, but it's easier than trying to maintain 20 different client-specific WiX files.
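The harvesting step for a hypothetical staging\ClientA directory might look like this (the component-group and directory-ref names are invented):

    heat.exe dir staging\ClientA -cg ClientAFiles -dr INSTALLFOLDER -srd -gg -sfrag -out ClientAFiles.wxs
    candle.exe Main.wxs ClientAFiles.wxs
    light.exe Main.wixobj ClientAFiles.wixobj -out Setup-ClientA.msi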

Visual Studio Solution Shared Data Source for Report Projects?

Simple question... I have a VS 2005 solution that encompasses several Reporting Services projects. Currently, each project has its own shared data source, which makes changing the database target very tedious.
Is there a way to share the data source across the entire solution (i.e. have all the projects in the solution use a data source defined in one place)?
I thought I could create a project that just held one data source item and then make all of the other projects dependent upon it; however, the shared data source in the new project does not appear in the other projects for me to select.
Help! I have looked around the web for info, but not much is available. There must be a simple solution to this.
Thanks
I am sorry - I somehow overlooked your question when I posted the same one.
Nonetheless, a technique I am using is described in an answer to it. It feels a little shady and underhanded, but it seems to be working so far:
Make a new report project to hold your shared data source. I called mine Data Source.
Copy your shared data source (let's pretend it's called My Shared Data Source) to that new project.
If necessary, copy My Shared Data Source to each actual report project and link things up the way you want. But probably you're already set up like this.
Close Visual Studio to make sure all changes are saved to the filesystem, and so that it doesn't end up clobbering our upcoming "backstage" edits.
In plain old Windows Explorer (or whatever), delete the My Shared Data Source.rds file from every project folder except Data Source's.
Using a text editor or XML-file editor, edit each project's .rptproj file to change the text of the Project.DataSources.ProjectItem.FullPath element from My Shared Data Source.rds to ..\Data Source\My Shared Data Source.rds.
Now each project still has its own reference to a data source, but all those references happen to point to the same underlying physical file, and thus they all share one data source specification.
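After that edit, the relevant fragment of each .rptproj would look roughly like this (a sketch built from the element names above; the surrounding elements may differ between project versions):

    <DataSources>
      <ProjectItem>
        <Name>My Shared Data Source.rds</Name>
        <FullPath>..\Data Source\My Shared Data Source.rds</FullPath>
      </ProjectItem>
    </DataSources>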
According to this post by Paul Turley, it appears as if this is not possible. You'll have to copy the data source into each project. The good news is that if you deploy them to the same location, only one data source should exist on the server.
This may not be what you're thinking of, but when I'm writing an app consisting of several distinct applications accessing the same data, I usually take one of two approaches:
Write all of my data access logic in a Class Library project and reference it from the other projects.
Write my data access logic into a Web Service library and add a web reference.
I usually go for option 2 if the data I am accessing is likely to be used in future development, such as accessing company-wide customer lists, etc.
