what does shared library .so.5 mean? - gcc

I have compiled a shared library (libcurl), and I found that it generated "libcurl.so.5".
".so" means a shared library, but what does the number 5 mean?
How can I generate the library without the number 5, just "libcurl.so"?

Most fundamentally, it's simply a version number. If the version number increases from, say, 5 to 6, that is supposed to indicate that all programs linked against version 5 are binary-incompatible with version 6 and thus need to be recompiled. For example, if a function was removed in version 6, then any application that used it would no longer work, so it's clearly unsafe for the application to automatically switch to the newer library version. Bug fixes to an existing function, on the other hand, don't require that the application be recompiled or ported, so it's safe for the dynamic loader to pick up the .5 version even though the application was compiled against a previous build of it (which was, um, still 5).
In practice it's a bit messier, as different people use the version number in different ways (often increasing it when they didn't really need to).
The libtool project has a much stricter, and more helpful, guide about when you should update the library version number.
In the end, you should not generate a library without the version number: it's a promise to your users about whether the library will remain binary-compatible in the future or not.
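To make the mechanics concrete, here is a minimal sketch of how the number typically gets attached on Linux (the library name mylib and the exact commands are illustrative assumptions, not taken from libcurl's build system):

// mylib.cpp - a trivial shared library source
extern "C" int mylib_answer() { return 42; }

// Build with an embedded soname, then add the conventional symlinks:
//   g++ -fPIC -shared -Wl,-soname,libmylib.so.5 -o libmylib.so.5.0.0 mylib.cpp
//   ln -s libmylib.so.5.0.0 libmylib.so.5   # the name the dynamic loader resolves
//   ln -s libmylib.so.5 libmylib.so         # the name the link editor resolves (-lmylib)
//
// Omitting -Wl,-soname and the version suffix yields the unversioned file the
// question asks for, at the cost of the compatibility promise described above:
//   g++ -fPIC -shared -o libmylib.so mylib.cpp

You can confirm the embedded version afterwards with readelf -d libmylib.so.5.0.0, which lists the SONAME entry.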

Related

How do I compare the protobuf version used to compile the .proto files and the version of the Golang library to make sure they are compatible?

I have a Golang service which uses Protobuf. The version of the library used by Go and the version of the compiler need to be similar enough, as per the documentation:
Users should use generated code produced by a version of protoc-gen-go that is identical to the runtime version provided by the protobuf module. This project promises that the runtime remains compatible with code produced by a version of the generator that is no older than 1 year from the version of the runtime used, according to the release dates of the minor version. Generated code is expected to use a runtime version that is at least as new as the generator used to produce it. Generated code contains references to protoimpl.EnforceVersion to statically ensure that the generated code and runtime do not drift sufficiently far apart.
The last sentence seems to imply that there is already code to ensure compatibility.
Is that correct?
That being said, I would like to be able to display the version of the library and of the compiled protobuf files in my logs to at least be able to manually verify that the versions are indeed compatible. How do I get both of these versions?
Update: Here is the section I mention in the comments.
const (
    // Verify that this generated code is sufficiently up-to-date.
    _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
    // Verify that runtime/protoimpl is sufficiently up-to-date.
    _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

unresolved reference to boost::program_options::options_description::options_description

First off, I know there are several posts similar to this, but I am going to ask anyway. Is there a known problem with boost program_options::options_description in the Debian "Buster" and Boost 1.67 packages?
I have code that was developed back in Debian 7; the system was upgraded to 8.3, then 8.11, using Boost 1.55.
The code built and ran. After upgrading the system to Debian Buster with Boost 1.67, I get link errors: an unresolved reference to options_description(const std::string&, unsigned int, unsigned int), along with several other program_options functions. All of the unresolved references, except the options_description one, come from Boost calling another Boost function, so they are not even called directly from my code. boost_program_options IS in the link line.
I AM not a novice and understand link order and this has nothing to do with link order.
I am going to try getting the source code for Boost and building it to see if that works; if not, I will build a system from scratch and test against that.
Since this is all on a closed network, simply saying "try a newer version of Boost or Debian" is not an option: I am contractually obligated to use Debian "Buster" and Boost 1.67 as the newest revisions, so any package newer than what is available in Buster is out of the question without having a new contract drafted and approved, which could take months.
So to the point of this question, is there an issue with the out of the box version of Boost in Buster?
I don't think there's going to be an issue with the package in Buster.
My best guess is that either:
1. you're relinking old objects against the new libraries, and they don't match (did you do a full make clean, e.g., to eliminate this possibility?). Build systems often don't track header dependencies completely, so the build system might not notice that the Boost headers changed, requiring the objects to be rebuilt; or
2. if that doesn't explain it, there could be another version of Boost on the include path, leading to the same problem as under #1 even when rebuilding.
You could establish this by inspecting the compiler command lines (make -Bsn or compile_commands.json, e.g., depending on your tools). Another trick is to include boost/version.hpp and see what BOOST_VERSION evaluates to.
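A minimal sketch of that second trick (boost/version.hpp, BOOST_VERSION and BOOST_LIB_VERSION are real Boost facilities; the program around them is just an illustration):

#include <boost/version.hpp>
#include <iostream>

int main() {
    // BOOST_VERSION encodes the version as major * 100000 + minor * 100 + patch,
    // so Boost 1.67.0 reports 106700.
    std::cout << "BOOST_VERSION = " << BOOST_VERSION << '\n';
    std::cout << BOOST_VERSION / 100000 << "."
              << BOOST_VERSION / 100 % 1000 << "."
              << BOOST_VERSION % 100 << '\n';
    // BOOST_LIB_VERSION carries the same information as a string, e.g. "1_67".
    std::cout << "BOOST_LIB_VERSION = " << BOOST_LIB_VERSION << '\n';
}

If this prints 1.55 on the upgraded system, a stale copy of the old headers is still first on the include path.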
Lastly, there could be a problem where the library was built with a different compiler version or different compiler flags, leading to incompatible symbols (this is a QoI issue that you might want to report to the Boost developers).
All of this assumes an ABI/ODR issue, in case you want to validate that possibility.

Mac multiple dylibs

I know this question is not specific to crypto++, but I compiled crypto++ on Mac OS X using Qt. After building I see 4 files with the dylib extension:
libcryptopp.1.0.0.dylib
libcryptopp.1.0.dylib
libcryptopp.1.dylib
libcryptopp.dylib
What is the difference between them? Which one should I compile my application against?
The multiple files exist in case your application needs to link to a specific version of the library. Of course, you have just one version of the library, so it doesn't seem helpful, but consider these on my system:
libnetsnmp.15.1.2.dylib
libnetsnmp.15.dylib
libnetsnmp.25.dylib
libnetsnmp.5.2.1.dylib
libnetsnmp.5.dylib
libnetsnmp.dylib
Only .25, .15.1.2 and .5.2.1 are actual files; the rest are symbolic links to one of them. In your case, they're probably all symlinks except 1.0.0, so you can use any of them.
If you look carefully, there's just one actual dylib (libcryptopp.1.0.0.dylib) and 3 links to that one. These give you the version info for the library.
If you want to link to a specific version you can specify that; otherwise, if you always expect your app to work with the latest version, you can point to libcryptopp.dylib.
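A quick way to confirm this is ls -l in the library directory; hypothetically (output trimmed to the name column):

ls -l libcryptopp*
libcryptopp.1.0.0.dylib
libcryptopp.1.0.dylib -> libcryptopp.1.0.0.dylib
libcryptopp.1.dylib -> libcryptopp.1.0.0.dylib
libcryptopp.dylib -> libcryptopp.1.0.0.dylib

otool -D libcryptopp.dylib will additionally print the library's install name, i.e. the versioned name your application records when it links against the symlink.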

Multiple Boost.Thread Instances OK in a C++ application?

I have an application with a plug-in architecture that is using Boost.Threads as a DLL (specifically, a Mac OS X framework). I am trying to write a plug-in that uses Boost.Threads as well, and would like to link in the library statically. Everything builds fine but the application quickly crashes in my plug-in, deep within the Boost.Threads code. Linking to the DLL version of Boost.Threads seems to resolve the problem, but I'd like my plug-in to be self-contained.
Is it possible to have two instances of Boost.Threads with such a setup (one as a DLL, one statically linked in another DLL)? If so, what might I be missing to make the two instances get along?
My team once faced a similar problem. For reasons I will not mention at this time, we had to develop a system that used 2 different versions of Boost (threads, system, filesystem).
The idea we came up with, and executed, was to grab the source code of both versions of Boost we needed, and then tweak one of them to change the symbols and function names to avoid name clashing.
In other words, we replaced all references to the name boost with bubbles (or some other name) inside the sources, and also changed the build so it would produce libbubbles instead of libboost.
This procedure gave us 2 sets of libraries, each with its own binaries and header files.
If you looked at the source code of our application you would see something like:
#include <bubbles/thread.hpp>  // the renamed copy of Boost
#include <boost/thread.hpp>    // the stock Boost

bubbles::thread* thread_1;  // thread type from the renamed version
boost::thread* thread_2;    // thread type from the stock version
I imagine some people here have already faced a similar situation. There are probably better alternatives to the one I suggested above.

Project management and bundling dependencies

I've been looking for ways to learn about the right way to manage a software project, and I stumbled upon the following blog post. I've learned some of the things mentioned the hard way, others make sense, and yet others are still unclear to me.
To sum up, the author lists a bunch of features of a project and how much each contributes to a project's 'suckiness', for lack of a better term. You can find the full article here: http://spot.livejournal.com/308370.html
In particular, I don't understand the author's stance on bundling dependencies with your project. These are:
== Bundling ==
Your source only comes with other code projects that it depends on [ +20 points of FAIL ]
Why is this a problem? Especially given point 3 (that you have modified your project's dependencies to fit your project's needs), doesn't it make even more sense for your code to be distributed with its dependencies?
If your source code cannot be built without first building the bundled code bits [ +10 points of FAIL ]
Doesn't this necessarily have to be the case for software built against third-party libs? Your code needs that other code compiled into a library before the linker can do its job.
If you have modified those other bundled code bits [ +40 points of FAIL ]
If this is necessary for your project, then it naturally follows that you've bundled said code with yours. If you want a customized build of some lib, say wxWidgets, you'll have to edit that project's build scripts to build the library that you want. Subsequently, you'll have to publish those changes to people who wish to build your code, so why not ship a high-level make script with the parameters already written in? Furthermore (especially in a Windows environment), if your code base depends on a particular version of a lib that you also need to custom-compile for your project, wouldn't it be easier to give the user the code yourself, since in this case it is unlikely that the user will already have the correct version installed?
So how would you respond to these comments, and what points may I be failing to take into consideration? Would you agree or disagree with the author's take (or mine), and why?
Edited for clarification.
Your source only comes with other code projects that it depends on.
My project requires project X.
However, since my project depends on secret inner mysteries of X, or a previous release of X, then my project includes a copy of X. Specifically release n.m of X. And no other.
Users will try to install the latest and greatest X and see what breaks in my project. Since upgrading X broke my project, they'll forget my project. They will not struggle with something that spontaneously breaks after an update. They will find a better open-source component.
Hence a FAIL score.
If your source code cannot be built without first building the bundled code bits.
My project doesn't rely on the API to X. It relies on deep, inner linking to specific parts of X, bypassing the API.
If my project depended only on the API to X, then -- for some languages like C or C++ -- my project could compile with only the C or C++ headers, not the binaries (see the sketch below).
For Java this is less true, since there is no independent, non-binary header. And for dynamic languages (like Python) this makes no technical sense.
However, even Java and Python have ways to separate interface from implementation. If I rely on implementation (not interface), then I've still created the same essential problem.
If my project depends on C or C++ binaries, and they build things out of order, or upgrade another component without rebuilding mine, things may go badly for them. They may see weirdness, breakage, "instability". My product appears broken. They won't (and can't) debug it. They're done. They'll find something more stable.
Hence a FAIL score.
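As a sketch of the header-versus-binary point above (the names x_api.h, x_impl.cpp and x_frobnicate are hypothetical):

// x_api.h - the public API of component X
int x_frobnicate(int value);

// x_impl.cpp - X's implementation, built and shipped separately
int x_frobnicate(int value) { return value + 1; }

// my_project.cpp - compiles against the header alone; X's binary is only
// needed at link time. Reaching past this API into X's internals is what
// creates the fragile coupling described above.
#include "x_api.h"
int run() { return x_frobnicate(41); }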
If you have modified those other bundled code bits.
I have two choices when I modify X:
1. Get it accepted as part of X.
2. Fix my program to work with unmodified X.
If my project depends on a modified X, no one can install X simply, correctly and independently. They can't upgrade X, they can't maintain X. They probably can't apply bug fixes or security patches to X.
I've essentially made their job impossible by modifying X.
Hence the FAIL score.
Subsequently, you'll have to publish those changes to people who wish to build your code.
Actually, they'll hate me for that. They don't want to know about the mysterious changes to X. They want to build X according to the rules, then build my stuff according to the rules. They don't want to read, think or be sure that the mystery update patch kit was applied correctly.
Rather than joke around with that, they'll download a competing package. FAIL.
if your code base is dependent on a particular version of a lib (that you also need to custom compile for your project)
That's really shabby. If I depend on a version with custom compiles, they're done looking at my package. They'll find something without version-specific inner mysteries and custom compiles before they'll struggle. FAIL.
