How to manage vcpkg across many projects? - windows

I would love to use vcpkg to manage our third-party dependencies, but I'm not sure how it would work in our environment. We have hundreds of projects in our solution, and we release new versions of our software over time. This is a very common situation in software development, but I don't see how vcpkg can work effectively here.
Global installation will not work. We need to tie our releases to specific versions of packages, so we're left with using manifests. In a manifest, you specify a "builtin-baseline", which is a great way to pin the build of a project to a point in time, with some guarantees that the dependencies between the different packages are correct.
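For illustration, a minimal pinned manifest looks something like this (the hash below is a placeholder; a real one is a commit SHA from the vcpkg repository):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "builtin-baseline": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0",
  "dependencies": [ "boost-asio", "fmt" ]
}
```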
This is all great for a project. How does one manage hundreds of projects?
We would like to use the same baseline across all projects. Is there a way to specify the builtin-baseline in one place?
Some packages are common to all projects. Is there a way to specify this in one place?
We use project files for building. By default, vcpkg uses a project-local install directory, but we don't need hundreds of copies of Boost. There is a vcpkg project setting for "Installed Directory" which allows a global location, but again, this setting is per-project, and I don't see any integration with .props files. Is there a way to manage this?
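The closest mechanism I can think of is a Directory.Build.props at the repository root, which MSBuild automatically imports into every project beneath it. I'm assuming here that the "Installed Directory" setting maps to a VcpkgInstalledDir property; that name would need verifying against vcpkg's .targets files:

```xml
<!-- Directory.Build.props at the repo root: applies to all projects below it -->
<Project>
  <PropertyGroup>
    <!-- Assumed property name for vcpkg's "Installed Directory"; verify in vcpkg.targets -->
    <VcpkgInstalledDir>$(MSBuildThisFileDirectory)vcpkg_installed\</VcpkgInstalledDir>
  </PropertyGroup>
</Project>
```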
Is there a way to get a solution-wide listing of all packages?
Would using CMake make this all easier?

Related

Language/Platform/Build-Independent Dependency Manager

I'm in need of a dependency manager that is not tied to a particular language or build system. I've looked into several excellent tools (Gradle, Bazel, Hunter, Biicode, Conan, etc.), but none satisfy my requirements (see below). I've also used Git Submodules and Mercurial Subrepos.
My needs are well described in a presentation by Daniel Pfeifer at Meeting C++ 2014. To summarize the goals of this dependency tool (discussed at 18:55 of the linked video):
Not just a package manager
Supports pre-built or source dependencies
Can download or find locally - no unnecessary downloads
Fetches using a variety of methods (e.g. downloads, VCS clones, etc.)
Integrated with the system installer - can check if lib is installed
No need to adapt source code in any way
No need to adapt the build system
Cross-platform
Further requirements or clarifications I would add:
Suitable for third-party and/or versioned dependencies, but also capable of specifying non-versioned and/or co-developed dependencies (probably specified by a git/mercurial hash or tag).
Provides a mechanism to override the specified fetching behavior to use some alternate dependency version of my choosing.
No need to manually set up a dependency store. I'm not opposed to a central dependency location as a way to avoid redundant or circular dependencies. However, we need the simplicity of cloning a repo and executing some top-level build script that invokes the dependency manager and builds everything.
Despite the requirement that I should not have to modify my build system, obviously some top-level build must wield the dependency manager and then feed those dependencies to the individual builds. The requirement means that the individual builds should not be aware of the dependency manager. For example, if using CMake for a C++ package, I should not need to modify its CMakeLists.txt to make special function calls to locate dependencies. Rather, the top-level build manager should invoke the dependency manager to retrieve the dependencies and then provide arguments CMake can consume in traditional ways (e.g. find_package or add_subdirectory), as sketched below. In other words, I should always have the option of manually doing the work of the top-level build and dependency manager, and the individual build should not know the difference.
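To make this concrete, here is a sketch of the pattern I mean; fetch-deps is a placeholder for whatever dependency manager is in use, and only standard CMake mechanisms appear in the inner build:

```sh
# Hypothetical top-level driver: the dependency manager stages packages,
# then the unmodified CMake project finds them the traditional way.
fetch-deps --manifest deps.txt --out ./deps          # placeholder tool
cmake -S . -B build -DCMAKE_PREFIX_PATH="$PWD/deps"  # plain find_package() works
cmake --build build
```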
Nice-to-have:
A way to interrogate the dependency manager after-the-fact to find where a dependency was placed. This would allow me to create VCS hooks to automatically update the hash in dependency metadata of co-developed source repo dependencies. (Like submodules or subrepos do).
After thoroughly searching the available technologies, comparing against package managers in various languages (e.g. npm), and even having a run at my own dependency-manager tool, I settled on Conan. After diving deep into Conan, I find that it satisfies most of my requirements out of the box and is readily extensible.
Prior to looking into Conan, I saw BitBake as the model of what I was looking for. However, it is Linux-only and heavily geared toward embedded Linux distros. Conan has essentially the same recipe features as BitBake and is truly cross-platform.
Here are my requirements and what I found with Conan:
Not just a package manager
Supports pre-built or source dependencies
Conan supports classic release or dev dependencies and also allows you to package source. If binaries with particular configurations/settings do not exist in the registry (or "repository", in Conan parlance), a binary will be built from source.
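For example, the stock install command can be told to fall back to building from source when no matching binary exists:

```sh
# Build any dependency from source if no prebuilt binary matches the settings
conan install . --build=missing
```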
Can download or find locally - no unnecessary downloads
Integrated with the system installer - can check if lib is installed
Conan maintains a local registry as a cache. So independent projects that happen to share dependencies don't need to redo expensive downloads and builds.
Conan does not prevent you from finding system packages instead of the declared dependencies. If you write your build script to be passed prefix paths, you can change the path of individual dependencies on the fly.
Fetches using a variety of methods (e.g. downloads, VCS clones, etc.)
Implementing the source() function of a recipe gives full control over how a dependency is fetched. Conan supports recipes that download or clone the source themselves, or it can "snapshot" the source, packaging it with the recipe itself.
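A minimal recipe sketch (Conan 1.x style; the package name and URL are made up) showing the source() hook:

```python
from conans import ConanFile

class MyLibConan(ConanFile):
    name = "mylib"          # illustrative package name
    version = "1.2.0"
    settings = "os", "compiler", "build_type", "arch"

    def source(self):
        # Full control over fetching: clone, download a tarball, or copy a snapshot
        self.run("git clone --branch v1.2.0 https://example.com/mylib.git .")
```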
No need to adapt source code in any way
No need to adapt the build system
Conan supports a variety of generators to make dependencies consumable by your chosen build system. This agnosticism toward any particular build system is Conan's real win, and ultimately what makes dependency management with the likes of Bazel, Buckaroo, etc. cumbersome.
Cross-platform
Python. Check.
Suitable for third-party and/or versioned dependencies, but also capable of specifying non-versioned and/or co-developed dependencies (probably specified by a git/mercurial hash or tag).
Built with semver in mind, but any string identifier can be used as a version. It additionally has "user" and "channel" fields that act as namespaces for package versions.
Provides a mechanism to override the specified fetching behavior to use some alternate dependency version of my choosing.
You can prevent the fetch of a particular dependency by not including it in the install command. Or you can modify or override the generated prefix info to point to a different location on disk.
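For illustration (Conan 1.x syntax; the package names are made up), a consumer can pin namespaced versions and force an override in one place:

```python
from conans import ConanFile

class AppConan(ConanFile):
    requires = (
        "mylib/1.2.0@team/stable",                 # any string works as a version
        ("zlib/1.2.11@conan/stable", "override"),  # force this version across the graph
    )
```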
No need to manually set up a dependency store. I'm not opposed to a central dependency location as a way to avoid redundant or circular dependencies. However, we need the simplicity of cloning a repo and executing some top-level build script that invokes the dependency manager and builds everything.
Despite the requirement that I should not have to modify my build system, obviously some top-level build must wield the dependency manager and then feed those dependencies to the individual builds. The requirement means that the individual builds should not be aware of the dependency manager. For example, if using CMake for a C++ package, I should not need to modify its CMakeLists.txt to make special function calls to locate dependencies. Rather, the top-level build manager should invoke the dependency manager to retrieve the dependencies and then provide arguments CMake can consume in traditional ways (e.g. find_package or add_subdirectory). In other words, I should always have the option of manually doing the work of the top-level build and dependency manager, and the individual build should not know the difference.
Conan caches dependencies in a local registry. This is seamless. The canonical pattern you'll see in Conan's documentation is to add some Conan-specific calls in your build scripts, but this can be avoided. Once again, if you write your build scripts to consume prefix paths and/or input arguments, you can pass the info in and not use Conan at all. I think the Conan CMake generators could use a little work to make this more elegant; as a fallback, Conan lets me write my own generator.
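For example, with the stock cmake_paths generator (Conan 1.x), the project's CMakeLists.txt stays untouched and only the invocation changes:

```sh
# Generate conan_paths.cmake rather than adding Conan calls to the project
conan install . -g cmake_paths --install-folder build
cmake -S . -B build -DCMAKE_TOOLCHAIN_FILE="$PWD/build/conan_paths.cmake"
```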
A way to interrogate the dependency manager after-the-fact to find where a dependency was placed. This would allow me to create VCS hooks to automatically update the hash in dependency metadata of co-developed source repo dependencies. (Like submodules or subrepos do).
The generators point to these locations. And with the full capability of Python, you can customize this to your heart's content.
Currently, co-developing dependent projects is the biggest question mark for me. That is, I don't know whether Conan has something out of the box to make tracking commits easy, but I'm confident the hooks are there to add this customization.
Other things I found in Conan:
Conan provides the ability to download or build toolchains that I need during development. It uses Python virtualenv to make enabling/disabling these custom environments easy without polluting my system installations.
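For example (the tool reference is made up), the virtualenv generator emits activate/deactivate scripts:

```sh
# Stage a toolchain package and enter its environment without touching the system
conan install mytool/2.0@team/stable -g virtualenv
source activate.sh      # puts the tool on PATH; deactivate.sh restores it
```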

Does it still make sense to use Maven when dependent jars are checked in with source code?

We check all of our source code's dependent third-party JARs into source control along with our source code. When needed, we manually download updates to third party JARs and replace those JARs that are in source control with the newer versions. We haven't felt the need to use Maven yet as this process seems simple enough for us. But are we missing something of great value by not using Maven? Or does our scenario not warrant using Maven?
"JARs dont change much", I hear this all the time.....
Storing jars in the SCM is simple in the beginning of the project. Over time the number of jars gets larger and larger.... Wait 2 or 3 years and nobody remembers where the jars came from, what their licensing terms were and most commonly what versions are being used (important to know when analysing security vulnerabilities).....
The best article I've read recently making the case for a repository manager is:
http://www.sonatype.com/people/2012/07/wait-you-dont-have-a-repository-manager/
A little irreverent, but it makes a valid point about the kind of technical inertia one encounters all the time.
Switching a project team from Ant to Maven can be scary... Maven works quite differently, so I find it is best deployed with greenfield or adventurous project teams. For old-school Ant users, I recommend the Apache Ivy plugin. Ivy allows such teams to outsource the management of their dependencies while keeping the build technology they're comfortable with.
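For example, a dependency declaration in ivy.xml is about as small as it gets (the coordinates are just an example), while the Ant build itself stays untouched:

```xml
<!-- ivy.xml: Ivy fetches the JARs; Ant keeps doing the build -->
<dependencies>
    <dependency org="org.apache.commons" name="commons-lang3" rev="3.12.0"/>
</dependencies>
```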
Ultimately, the biggest benefit of using Maven is not dependency management; it's the standardized build process. I've seen several failed attempts to create a "standard" Ant build process; the problem is that every build engineer has his own opinion on what the standard should be... Maven's approach of forcing users to write build plugins may appear restrictive in the beginning, but, just like the iPhone, eventually developers discover "there's a Maven plugin for that" :-)
When it comes to dependency management Maven really can be quite valuable. As Mark O'Connor suggests, running a local repository manager would likely be better than checking the artifacts into source control.
There are many tools (like m2e in eclipse) that can help with dependency management and provide valuable feedback on which modules or dependencies require which other dependencies. Maven will also make sure to get the appropriate version of a dependency even if different modules depend on different versions of a given library. That will help prevent duplicate versions of the same jar showing up in your deployed project as long as they have the same group and artifact id.
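For example, a parent POM can pin a version once for every module (the coordinates are illustrative), so no two modules drift apart:

```xml
<!-- Parent pom.xml: each module inherits this version choice -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>31.1-jre</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```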
Even for a very simple project I don't think I would resort to checking dependencies into the source control system.
It's not only about third-party libraries; it matters mostly when you have multiple repositories. In our case, we had four repositories with lots of inter- and intra-dependencies.
Actually, I started writing this answer and then had to leave for 15 minutes to talk to a colleague about a problem that happened after someone forgot to update the .jar of one project in another project's lib directory.
And it looks more professional :)

Xcode, update project file automatically

I have many projects that share a bunch of the exact same classes.
Is there a way to add a script to Xcode so that each time I compile, it goes to a network folder and updates its files from there, if they're newer? (I do this step manually, but it would be great to automate it.)
Thanks
You could add a "Run Script" build phase to copy over files before compiling, if that's really what you want to do. That would catch updates for you, but I don't think it would help when new files are added (though copying them into a location your project points to with a folder reference, rather than a group, might work).
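A sketch of such a phase (the paths are examples):

```sh
# "Run Script" build phase: pull newer copies of the shared classes before compiling
rsync -a --update "/Volumes/Shared/CommonClasses/" "${SRCROOT}/CommonClasses/"
```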
That said, I think there's a better solution. It sounds like you're reinventing a process for managing project dependencies when you could use existing tools. I would publish those shared classes as a library and add it to each project using CocoaPods and a reference to the library's git repository. That way you just need to run pod install to get the latest version of your library. A good dependency manager gives you a clear understanding of which version of your dependencies you're currently using, control over when to update them, handles installing dependencies of your dependencies, and avoids link errors from multiple static libraries each attempting to include a copy of the same common dependency.
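A sketch of the Podfile entry (the library name and URL are made up):

```ruby
# Podfile: take the shared library straight from its git repository
target 'MyApp' do
  pod 'SharedClasses', :git => 'https://example.com/shared-classes.git', :tag => '1.0.0'
end
```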

Using source controlled libraries in source controlled projects [closed]

I have several projects that build reusable libraries. All these projects are under source control.
When I use these libraries in a project, I simply link to the same ONE version on my local drive. However, as you can imagine, this can cause problems when I commit back and a different developer tries to clone the repository.
What is the best practice when using components also under source control? Should I include the "library projects" in the "main project" source control? Will this cause problems?
NB: The libraries take quite a few compiler directives, so it's almost impossible to just compile a static version and link to that. Plus, I'm still developing them in parallel.
You have two main kinds of dependencies:
source dependencies (you need to include, within the sources of your project, source from another project),
binary dependencies (you need to include a packaged set of files, like the ones found in a shared library).
If, when you say "I use these libraries in a project", you mean you need the binaries in order for your project to compile, then you could store said binaries in an external repository (i.e. not a (D)VCS like Mercurial, but an artifact repository like Nexus)
But if you mean you need to include sources, because you are also making some evolutions to those libraries while using them to develop your project, then Mercurial subrepos are a better fit.
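Declaring a subrepo is a one-line entry in the parent repository's .hgsub file (the path and URL are examples):

```
libs/mylib = https://example.com/hg/mylib
```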
In my own experience, maintaining compatibility with libraries that you are writing simultaneously is drastically improved by using export maps to provide multiple versions of your interfaces to client programs. The best guide I know of is Ulrich Drepper's http://people.redhat.com/drepper/dsohowto.pdf
If the libraries are under your source control, life should be easy. What I tend to do is the same as I do for different versions of third party libraries: Have different folders for different versions.
The third party library folder structure looks like this:
- General
  - Delphi
    - Components
      - LibX
        - LibX 9.2.1.3890
        - LibX 10.1.0.7151
      - LibY
        - LibY 3.6
        - LibY 5.1
    - Plugins
Each and every project defines its dependencies on specific versions of each library. Reverting to an old version of a project thus also reverts the dependency to older versions of the library (or libraries).
Now, with third-party libraries you generally don't have as many different versions as you can with your own libraries, but the same principles apply. And to aid in "current development", where you don't have a particular version number yet, you could simply have a "head" version. Then, when you "release" a version of your library, just add that version's folder and adjust the project definitions that have up till now used the "head" because of parallel development, so that they depend on the new version number...

MSBuild - can it work out project dependencies in a solution file? If so how?

I have an MSBuild project which builds an SLN file from Visual Studio that holds all the projects (70+ of them), and a lot of the projects depend on each other, meaning they need to be built in order. Sometimes a developer forgets to set the build order manually in Visual Studio in the solution file, causing MSBuild on a clean solution to fail because something gets built out of order or can't find a DLL.
Is there a way for MSBuild to take all the projects, work out the dependencies, and build the projects in order? If there is, how do I do this? Using an MSBuild task? With my current attempts it seems to just build in the order it reads the projects in, if I pass in a list of project files and paths.
Currently the only way I can think to solve this is an external app which scans the proj files and references and then creates a solution each time... but this seems overkill for such a simple thing.
Anyone solved / seen this before?
How are you calling MSBuild? If you point MSBuild at the solution file, it should be able to work out the dependencies. If you point it at individual project files, it won't be able to resolve any project references.
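For example (the solution name is a placeholder):

```
rem Point MSBuild at the .sln so project references drive the build order
msbuild MySolution.sln /t:Build /p:Configuration=Release /m
```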
If you don't use project references you can still control the dependency order in a solution by using the "Project Dependencies" dialog to manually set the dependencies.
While Project Dependencies are hard to maintain and not shared across .sln files, Project References are honoured and do dictate the order consistently - see the ResolveReferences task in Microsoft.Common.targets.
ASIDE: A 'friend of mine' may, 'during a refactoring', have accidentally stubbed out their Build task and its DependsOnTargets linkage to the Microsoft.Common.targets ResolveReferences task, and ended up with ProjectReferences not being honoured in ways that sound like this question. If you read some of the posts, you might get the idea that it's all mad shaky; it's not. The shaky bits are the Project Dependencies, not the Project References.
See this excellent MSDN Blog article by Dan Moseley that really explains the topic, including some useful workaround strategies. (via this mildly related issue with building xUnit.net).
If all of your dependent projects are in the solution and you are using project references, Visual Studio should manage the dependencies for you and build in order of that dependency list.
It sounds like you are not using project references. I always recommend project references.
This is an old question, but the issue was most likely that projects in the solution used direct references to dependent DLLs (Add Reference > Browse tab > select the DLL) instead of project references (Add Reference > Projects tab > select the dependent project). With direct references, Visual Studio can't figure out the dependency chain; you must tell it by right-clicking the solution node and selecting Properties, then picking Common Properties > Project Dependencies to set the required projects. Mr. Klaus is correct, but I wanted to document how to fix this issue.
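For reference, this is what a project reference looks like inside the consuming project file (the path is illustrative); MSBuild can order the build from these:

```xml
<ItemGroup>
  <!-- Project reference: lets MSBuild infer the dependency chain -->
  <ProjectReference Include="..\CoreLib\CoreLib.csproj" />
</ItemGroup>
```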
While it is correct that MSBuild should observe the build order when you use project dependencies, there is one caveat: it doesn't at present observe the reverse build order when running the Clean target (as I have blogged about here). For a regular build, however, it works nicely, as described by others here.
I am using MSBuild 4, found at C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe. It seems to solve the problem.
There is no Microsoft tool that will examine all the dependencies of your 70+ projects and generate a solution file with dependencies clearly declared for you.
You have to do that on your own, using one of two methods:
Manually specify a dependency, for the solution, in visual studio.
Specify a project reference in the project file itself.
If you don't want to do that, then you will have to swallow the medicine and accept that you will have to use an external tool to do it for you. Yes, it's clunky, but it can be made to work. If you check your solution file into source control, you can mitigate these problems, as long as you have an active solution file to work with.
I at one point didn't, and I had 600+ projects in the build, so I wrote a tool (years ago) that automates 99% of this work. It uses the .NET MSBuild APIs to read the MSBuild files (no reinventing the wheel here with XML APIs). It then examines outputs and inputs and generates a dependency tree, which I can then do a few things with (see the sketch after this list):
Spit out a solution file.
Do a dependency sort (also known as a topological sort) and spit out the projects in the order they should be built (useful for a non-parallel build).
Print out all sorts of diagnostic information about the dependencies.
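A sketch of the reference-reading part, using the Microsoft.Build evaluation API (the project path is a placeholder):

```csharp
// Read ProjectReference items: each one is an edge in the dependency graph
using System;
using Microsoft.Build.Evaluation;

class DependencyDump
{
    static void Main()
    {
        var project = new Project(@"C:\src\App\App.csproj");  // placeholder path
        foreach (var reference in project.GetItems("ProjectReference"))
            Console.WriteLine(reference.EvaluatedInclude);    // one edge per line
    }
}
```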
The only limitation I have seen with the tool is with a few crazy COM dependencies, which are pretty sketchy anyway and for which I added a super simple workaround.

Resources