Only put .proto protocol buffer files in a repository? - protocol-buffers

I wonder what the best practice is for protocol buffers with regard to the source repository (e.g. git):
Should I put ONLY the .proto files in the repository and let anyone who uses the source code regenerate the classes with the protoc compiler? Or is it best practice to commit both the .proto files AND the source code generated by the protoc compiler?

You should never check in generated code if you can avoid it.
If you check in generated code, you take on multiple risks, such as:
You risk losing the knowledge of how to correctly regenerate that code. If it's not automated as part of the build, it's too easy to forget to document, or to have the documentation be wrong.
You risk the generated code getting out of sync with the schema. For example, someone could make a change to the .proto file but forget to update the generated code. Their changes won't actually "take effect" until someone else regenerates the generated code later on for some other reason, and then all of a sudden they see side effects they weren't expecting.
Your generated code might be for a different version of protocol buffers than what the builder has installed. In this case it won't work correctly, since it's necessary to use the exact same version of the compiler and runtime library.
If for some reason you absolutely have to check in generated code, I highly recommend creating an automated test that checks if the checked-in code matches what protoc would generate if run fresh. (For example, the protobuf repository itself contains checked-in copies of generated code for descriptor.proto because this code is needed to compile protoc, creating a circular dependency. But there is a unit test that checks that the checked-in code matches what protoc would generate.)
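For illustration, here is a minimal sketch of such a test in Go (the .proto name, the generated file name, and the protoc flags are assumptions; adapt them to your own layout and plugin):

package proto_test

import (
	"bytes"
	"os"
	"os/exec"
	"path/filepath"
	"testing"
)

// Fails when the checked-in generated file no longer matches
// what protoc would produce from the current schema.
func TestGeneratedCodeIsFresh(t *testing.T) {
	tmp := t.TempDir()
	// Hypothetical invocation; substitute your real plugin and flags.
	out, err := exec.Command("protoc", "--go_out=paths=source_relative:"+tmp, "descriptor.proto").CombinedOutput()
	if err != nil {
		t.Fatalf("protoc failed: %v\n%s", err, out)
	}
	fresh, err := os.ReadFile(filepath.Join(tmp, "descriptor.pb.go"))
	if err != nil {
		t.Fatal(err)
	}
	checkedIn, err := os.ReadFile("descriptor.pb.go")
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(fresh, checkedIn) {
		t.Error("checked-in generated code is stale; re-run protoc and commit the result")
	}
}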

If your project is commonly used in source form (e.g. a library, or a program every user is supposed to compile themselves), I would make release packages available that include the generated files.
But I wouldn't put the generated files into the repository directly. And if most users will use a compiled binary, it is not that important to provide easy-to-compile source packages either. The protobuf generator then becomes just another build dependency.

Related

What does it mean when a package is in the go/pkg/mod/cache dir but it has no source code extracted?

I'm trying to understand how the source code for third-party dependencies is or is not compiled into my Go binary. I'm building in a Docker container, so I can see precisely what's fetched for my build without interference from other builds.
After my go build completes I see source code files for several dependencies under go/pkg/mod/$module@$version directories. The Module cache documentation tells me that these directories contain the "extracted contents of a module .zip file. This serves as a module root directory for a downloaded module." My best guess is that the presence of extracted source code for these dependencies indicates that "yes, these dependencies are definitely compiled into your binary."
I also see many more dependencies pulled into go/pkg/mod/cache/download/$module directories. The Module cache documentation tells me that this directory contains "files downloaded from module proxies and files derived from version control systems," which I don't fully understand. As far as I can see, these files do not include any extracted source code, though there are several .zip files that I assume contain the source. For the most part these files seem to be .mod files that just contain text representing some sort of dependency graph.
My question is: if a third-party dependency has module files under go/pkg/mod/cache/download but no source code under go/pkg/mod/$module@$version, does that mean that dependency's code was NOT compiled into my Go binary?
I don't understand why the Go build pulls in all these module files but only has extracted source code for some of the third-party modules. Perhaps Go preemptively parses and pulls module information for the full transitive set of modules referenced from the modules my first-party code imports, but perhaps many of those modules don't end up being needed for my binary's compile + build process and therefore don't get extracted. If that's not true and the answer to my question is no, then I don't understand how or why my binary can link in those dependencies without go build fetching their source code.
As mentioned in "Compile and install packages and dependencies"
Compiled packages are cached automatically.
"GOPATH and Modules" includes:
When using modules, GOPATH is no longer used for resolving imports.
However, it is still used to store downloaded source code (in GOPATH/pkg/mod) and compiled commands (in GOPATH/bin).
So if you see sources in pkg/mod which are not in pkg/mod/cache, try a go mod tidy
add missing and remove unused modules
From there, you should have the same modules between the extracted sources (pkg/mod) and the download cache (pkg/mod/cache)
Based on the OP's comment
I need to know exactly what's included in the binary for compliance reasons.
I would recommend a completely different approach: dumping the list of symbols contained in the binary and then correlating it with whatever information is needed.
The command
go tool nm -type /path/to/the/executable/image/file
dumps the symbols (the names of the functions) whose code was taken from the standard library packages, third-party and/or vendored packages, and internal packages compiled and linked into the binary. It prints to its standard output stream a sequence of lines of the form
address type name
which you can then process programmatically.
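For instance, a small Go sketch (reading the nm output on standard input is an assumption; you could equally run the command via os/exec):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Usage: go tool nm -type /path/to/binary | go run parse_nm.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 3 {
			continue // undefined symbols may lack an address column
		}
		addr, typ, name := fields[0], fields[1], fields[2]
		fmt.Printf("addr=%s type=%s name=%s\n", addr, typ, name)
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}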
Another approach you might employ is to use go list with various flags to query which packages and/or modules will be used when building. Whatever that command outputs as the full dependency graph of the source code is what go build will use when building, provided the source code is not changed between the two calls.
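For example (both are standard go list invocations, run from the module root):
go list -m all        (prints the full module build list)
go list -deps ./...   (prints every package that would go into the build)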
Yet another possibility is to build the program using go build -x, save the debug trace it produces on its standard error stream, and parse it for the exact module names the command reported as used during the build.
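For example, a minimal sketch (the output path and log file name are arbitrary):
go build -x -o /tmp/app . 2> build-trace.txt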

Should the STM32 HAL be included as a precompiled library

I have a Keil STM32 project for an STM32L0. I sometimes (more often than I want) have to change the include paths or global defines. This triggers a complete recompile of all the code, because the toolchain needs to 'check' for behaviour changed by these modifications. The problem is: I didn't necessarily change parameters relevant to the HAL, so (as far as I understand) these files don't need to be completely recompiled. This recompilation takes up quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done for every microcontroller separately as they have different HALs).
ps. the question is not necessarily only useful for this specific example but the example gives some scope to the question.
pps. for people who aren't familiar with the STM32 HAL. It is the standardized interface with which the program interfaces with the underlying hardware. It is supplied in .c and .h files instead of the precompiled form of the STD/STL.
update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx,USE_B_BOARD,USE_HAL_DRIVER, REGION_EU868,DEBUG,TRACE
Only STM32L072xx and DEBUG are useful for configuring the HAL library, and thus there shouldn't be a need for me to recompile the HAL when I change TRACE from defined to undefined. Therefore it seems to me that the HAL could be managed separately.
edit
Seeing as a close vote has been cast: I've read the don't ask section, and my question seeks to constructively add to the knowledge of building STM32 programs and find a best practice for using the HAL libraries more effectively. I haven't found any questions on SO about building the HAL as a static library, and therefore this question at least qualifies as unique. This question is also meant to invite a rich answer which elaborates on the pros/cons of building the HAL as a separate static library.
The answer here is... it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - having HAL sources directly in your project means rebuilding HAL every time anything in its (and underlying) headers changes, which you've already noticed. Downside of it is longer build times. Upside - you are sure that what you build is what you get.
Option #2 - having HAL as a precompiled static library. Upside - shorter build times, downside - you can no longer be absolutely certain that the HAL library you include actually works as you want it to. In particular, you'd need to make sure in some way that all the #defines are exactly the same as when the library has been built. This includes project-wide definitions (DEBUG, STM32L072xx etc.), as well as anything in HAL config files (stm32l0xx_hal_conf.h).
Seeing how you're a Keil user - maybe it's just a matter of enabling multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. HAL library isn't so large that build times should be a concern when it comes to rebuilding its source files.
If I were to express my opinion and experience: personally I wouldn't do it, as it may lead to lower reliability or side effects that are very hard to diagnose, and it will only get worse as you add more source files and more libraries like this. Not to mention adding more people to the project and explaining to them how they "need to remember to rebuild library X when they change a given set of header files or project-wide definitions".
In fact, we've run into the same dilemma for the code base I work on - it spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. However, because this configuration is done through headers, making a change in them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I was to give any general tips that help to avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's usually tempting to shove all configuration into one place and create pretty comments and sections for each software module in this one file. It's easier to manage this way (until this file becomes too big), but because this file is so common, any change made to it will cause a full rebuild. Split such files into separate headers corresponding to each module in your project.
Include header files only where you need them. I sometimes see an approach where header files are created that only "bundle" other header files, and that larger file is then included everywhere. In this case, making a change to any of those smaller headers means recompiling all source files that include the larger one. If such a file didn't exist, only the sources explicitly including the one small header would have to be recompiled. Obviously there's a line to be drawn here: including headers that are too "low level" may not be the greatest idea either, e.g. internal library headers may not be meant to be included directly and may change at any time.
Prefer including headers in source files rather than in other headers. If you have a pair of your own *.c (*.cpp) and *.h files, say temp_logger.c/.h, and you need the ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header file in your temp_logger.c file. Later on, the files making use of the temp_logger functions won't have to be recompiled when the HAL headers change again.
My opinion is yes, build the HAL into a library. The benefit of faster build time outweighs the risk of the library getting out of date. After some point early in the project it's unusual for me to change something that would affect the HAL. But the faster build time pays off many times.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.

Bazel has a bug when handling protobuf files. How can I resolve this temporarily?

I'm having an issue building my code base with Bazel, Go and Protobuf. The Protobuf files aren't mapped right by Bazel Gazelle when generated. I think it's a known bug. I've opened up my own ticket. If this is the problem, I don't expect it to be resolved anytime soon. It was tagged as a P2 and has been open since October. I'm looking for a solution to the problem in the meantime.
We use a custom Protobuf plugin when building our protobuf files. My thought process is to generate the protobuf files by hand and have Bazel ignore them, just using the already generated code.
Does this sound like something that would work? How would I do this? Can I have Gazelle ignore the Protobuf files for me?
Thank you for your time
Copying my reply on bazelbuild/bazel-gazelle#209 for anyone running into the same issue.
Yep, bazelbuild/bazel#3867 is the issue.
Whenever you build a proto_library with Bazel (in any language, not just Go), Bazel treats the imports as relative to a repository root (either in your local repository or in any of your external repositories). I'm guessing brand.proto is importing github.com/xxx/jscode/jsge/pkg/paging/proto/page.proto. Since the file you want to import is actually jsge/pkg/paging/proto/page.proto, protoc won't find it when invoked by Bazel.
Since this is an issue with proto_library, there's not much Gazelle or rules_go can do about it. If bazelbuild/bazel#3867 is implemented, you'll be able to adjust proto_library import paths. Until then, you won't be able to build proto_library rules without modifying them.
Unfortunately, the best advice I can offer at the moment is to check in pre-generated .pb.go files and include those in your go_library rules. If you add the comment # gazelle:proto disable in your root BUILD file, Gazelle will ignore .proto files and will include .pb.go files instead.
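For example, at the top of the repository's root BUILD file (BUILD or BUILD.bazel, whichever your repository uses):
# gazelle:proto disable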

Working with digital signatures in Go

I would like to use signatures for a program that I am writing in Go, but I can't figure out the documentation for the crypto/rsa package. In particular, I would like to use the SignPKCS1v15 and VerifyPKCS1v15 functions, but I'm not sure exactly what I have to pass as arguments. I would greatly benefit from some example code for these two functions. Thanks.
Note: The message that I would like to send is a struct that I defined.
I think the src/pkg/crypto/rsa/pkcs1v15_test.go file in the Go source tree should be a good start.
An update striving to provide more context: the Go source contains many tests for the code in its standard library (and the crypto/rsa package is a part of it), so whenever you have no idea how to use a standard package (or, actually, any other Go package), a good place to start is to look at the tests involving that package, as testing code naturally uses the package! Tests are kept in files ending in _test.go, usually have meaningful names, and are located in the same directories as the actual code implementing a particular package.
So in your particular case you could do this:
Download the Go source package of the version matching your compiler (what go version shows) and unpack it somewhere.
Navigate to the directory matching the package of interest. Code for standard Go packages is located in the "pkg" directory under the "src" top-level directory, so if you're interested in the crypto/rsa package, you need the src/pkg/crypto/rsa directory.
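For reference, here is a minimal, self-contained sketch of both calls (the key size, hash choice, and message bytes are arbitrary; since your message is a struct, serialize it first, e.g. with encoding/json or encoding/gob, and sign the resulting bytes):

package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Generate a throwaway key pair just for this demonstration.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Both functions operate on a hash of the message, not the message itself.
	msg := []byte("the serialized bytes of your struct go here")
	hashed := sha256.Sum256(msg)

	// Sign the digest with the private key.
	sig, err := rsa.SignPKCS1v15(rand.Reader, priv, crypto.SHA256, hashed[:])
	if err != nil {
		panic(err)
	}

	// Verify the signature with the corresponding public key.
	if err := rsa.VerifyPKCS1v15(&priv.PublicKey, crypto.SHA256, hashed[:], sig); err != nil {
		fmt.Println("verification failed:", err)
	} else {
		fmt.Println("signature verified")
	}
}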

Ada95: I have 3 files (.ali, .adb and .o) - can I compile?

I've found some old college work with my final Ada95 project on it. Sadly, the disc was corrupted, and I have only managed to recover 3 files (the rest of the source and the executable couldn't be recovered):
project.adb, project.ali and project.o
Are these 3 files enough to compile a new exe? I'm downloading the GNAT compiler now, but I have to admit I have forgotten almost everything Ada-related...
Frank
[EDIT]
shucks... using GCC to compile project.adb throws an error about a missing .ads file, which I cannot recover.
Is it possible to extract this / compile just the ".o" or ".ali" files? Or, am I stuffed?
project.adb is a source file.
Since you say that gcc complains about a missing .ads file, that indicates that project.adb contains a package body. You can manually construct a corresponding package spec by putting the following into project.ads:
package Project is
end Project;
Now that's almost certainly not enough, because the project spec probably had some type and constant declarations in it, so you'd have to analyze your package body and identify what it references. Infer what those declarations should look like and add them. Oh, and if your package body "with's" any packages that are not part of the standard Ada library, you'll have to recover those as well.
If you do manage to get your reverse-engineered spec and the body to compile, you'll still have to create a "driver" program that "with's" the project package and calls whatever functions and/or procedures carried out the work of your project (and you'll have to pull the specs of those subprograms, matching their appearance in the package body, into the spec as well).
Frankly, if it were me, I'd spend more time on trying to use some disk recovery tools to pull whatever else I could off the disk.
In Ada95 (and 2005) one mostly works with .adb files (occasionally with .ads files); everything else is generated on the fly. In your case the .adb file is surely linked up to other .ads files.
However, .ads files are usually small (obviously, unless you are attempting really exotic things such as 'the dining philosophers') and pertain to the algorithmic/mathematical structure of the program. If you can dig out what you did in your project, it should not be impossible to restore them!
