Building two versions of Maven artifacts

A dependency that we rely on (https://github.com/bytedeco/javacpp-presets/tree/master/ffmpeg) has recently split into an LGPL and a GPL version, depending on how the underlying ffmpeg is configured.
There are two different sets of artifacts that are released (e.g. https://mvnrepository.com/artifact/org.bytedeco/ffmpeg-platform-gpl/4.3.2-1.5.5 and https://mvnrepository.com/artifact/org.bytedeco/ffmpeg-platform/4.3.2-1.5.5).
The API is the same for our purposes - we don't need to change our code. What is supported is dynamic at runtime, but that is OK; it's already flexible given the different hardware support.
I'd like to build two sets of artifacts as parallel paths up through our tree (e.g. two versions of core, api, viewer, examples, etc.) as jars with different license dependencies, to allow the user to choose which one they prefer. The goal is that the user can choose a particular version of our code and the dependencies "just work", in that the right dependencies are either included in the uber-jar or get pulled in via Maven.
So I'd have a jmisb-api-lgpl-${version}.[pom, jar] (and maybe a jar-with-dependencies uber-jar) that depends on jmisb-core-lgpl-${version}.[pom, jar], which in turn depends on ffmpeg-platform-${other version}.[pom, jar]. And, built at the same time, a jmisb-api-gpl-${version}.[pom, jar] that depends on jmisb-core-gpl-${version}.[pom, jar], which in turn depends on ffmpeg-platform-gpl-${other version}.jar.
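For illustration, a minimal sketch of what the LGPL leg's core POM might look like (the org.jmisb group id and the literal version are assumptions, not a chosen solution; the GPL leg would be identical apart from the -gpl artifact ids and the ffmpeg-platform-gpl dependency):

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>org.jmisb</groupId>
      <artifactId>jmisb-core-lgpl</artifactId>
      <version>1.0.0</version>

      <dependencies>
        <!-- LGPL leg: plain ffmpeg-platform; the GPL leg swaps in ffmpeg-platform-gpl -->
        <dependency>
          <groupId>org.bytedeco</groupId>
          <artifactId>ffmpeg-platform</artifactId>
          <version>4.3.2-1.5.5</version>
        </dependency>
      </dependencies>
    </project>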
What is the preferred / recommended way to do that, or alternatively, what is a relatively clean way that builds both without needing to manually select which one to generate?

Related

How can Gradle plugin access information about included builds?

I know you can access different modules (included using include) in a project via org.gradle.api.Project#getSubprojects(), and I know you can get the names and directories of separate builds that have been included (using includeBuild) via org.gradle.api.invocation.Gradle#getIncludedBuilds().
But how can my plugin get information such as the locations of Java source files and class files for projects included using includeBuild?
My goal here is to determine which files have changed in the current git branch (which I can do), and then collect their corresponding class files into a jar that's used by our patching mechanism, which inserts patch jars at the front of the classpath rather than redeploying the whole application.
I don’t think it is a goal of Gradle to provide including builds with detailed information on included builds. Currently, the Gradle docs basically only state two goals for such composite builds:
combine builds that are usually developed independently, […]
decompose a large multi-project build into smaller, more isolated chunks […]
Actually, isolation between the involved builds seems to be an important theme in general:
Included builds do not share any configuration with the composite build, or the other included builds. Each included build is configured and executed in isolation.
For that reason, it also doesn’t seem to be possible or even desired to let an including build consume any build configurations (like task outputs) of an included build. That would only couple the builds and hence thwart the isolation goal.
Included builds interact with other builds only via dependency substitution:
If any build in the composite has a dependency that can be satisfied by the included build, then that dependency will be replaced by a project dependency on the included build.
So, if you’d like to consume specific parts of an included build from the including build, then you have to do multiple things:
Have a configuration in the included build which produces these “specific parts” as an artifact.
Have a configuration in the including build which consumes the artifact as a dependency.
Make sure that both configurations are compatible wrt. their capabilities so that dependency substitution works.
Let some task in the including build use the dependency artifact in whatever way you need.
Those things happen kind of automatically when you have a simple dependency between two Gradle projects, like a Java application depending on a Java library. But you can define your own kinds of dependencies, too.
The question is: would that really be worth the effort? Can’t you maybe solve your goal more easily or at least without relying on programmatically retrieved information on included builds? For example: if you know that your included build produces class files under build/classes/java/main, then maybe just take the classes of interest from there via org.gradle.api.initialization.IncludedBuild#getProjectDir().
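For the simpler route, a minimal sketch in the including build (the patchJar task name and the assumption that the included builds are standard Java projects are mine, not anything Gradle prescribes):

    // Collect class files from each included build's conventional output
    // directory into a patch jar. Assumes the included builds have already
    // been compiled (e.g. by depending on their build tasks).
    tasks.register('patchJar', Jar) {
        archiveClassifier = 'patch'
        gradle.includedBuilds.each { included ->
            // Standard Java projects put main classes under build/classes/java/main.
            from(new File(included.projectDir, 'build/classes/java/main')) {
                // filtering down to just the changed classes would go here
            }
        }
    }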
I know, this may not be the answer you had hoped to get. I still hope it’s useful.

Automatically force update pom to newer versions from inside a test

This is not about updating the pom dependencies using a Maven goal; I already have that sorted out.
So I am responsible for creating, packaging and maintaining common utilities. These common utilities are in turn used by all the teams in the org. The problem is that the teams using these utilities do not update the dependency unless it is a last resort. We would like them to use the latest release version of our common utilities, barring a very few exceptions.
Now I have come across the Maven Versions Plugin from MojoHaus, which I think serves my need via two goals, versions:update-properties and versions:use-latest-releases. It serves my purpose except for two things:
I do not see a way to exclude certain groupId:artifactId coordinates from the dependency/property update.
We really want this to be compulsory (maybe part of the test execution; this is mainly for test automation utilities) rather than a separate Maven goal, because a goal has to be invoked explicitly and therefore becomes optional for teams.
We know that forcefully updating to the latest version might cause some issues with defect reproducibility, but we are willing to take that risk. Our utilities are really test products.
Any direction/help on this is appreciated.
Edit: We already run our tests using the Maven goals clean install, so they use the existing pom. We want the dependency update to happen before the tests run. It would also be desirable to commit the changes to source control (Bitbucket) if possible.
We have our tests set up in Jenkins, but teams also run tests on local machines.
Edit: Found the answer to #1. The plugin lets you exclude group and artifact IDs by regex, using the excludes and excludesList tags.
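For point #2, one possibility is to bind the goal to an early lifecycle phase so it runs on every mvn clean install. This is a sketch only; the phase, plugin version and exclusion pattern are illustrative:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>versions-maven-plugin</artifactId>
      <version>2.8.1</version>
      <executions>
        <execution>
          <phase>validate</phase>
          <goals>
            <goal>use-latest-releases</goal>
          </goals>
          <configuration>
            <!-- illustrative exclusion: keep these coordinates pinned -->
            <excludes>
              <exclude>com.example.thirdparty:*</exclude>
            </excludes>
          </configuration>
        </execution>
      </executions>
    </plugin>

Note that Maven reads the project model before any plugins run, so a pom rewritten this way only takes effect on the next invocation; committing the change back to Bitbucket from the Jenkins job would cover that.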
Sorry for insisting. From the Maven point of view I see two main solutions:
Your projects and your utility jar are tightly coupled and every project always needs to use the latest version. Then you can bundle all projects and your utility in one multi-module project. This makes sure that everything is up to date all the time, but it requires that all projects and your utilities are always built together (not separately).
You distribute your utilities to different projects which build and release at different times. Then it is up to the projects to decide when to update. There is unfortunately no standard way to deprecate jars.
If I understand you correctly, you want something "in the middle". This may be hard to achieve.

Language/Platform/Build-Independent Dependency Manager

I'm in need of a dependency manager that is not tied to a particular language or build system. I've looked into several excellent tools (Gradle, Bazel, Hunter, Biicode, Conan, etc.), but none satisfy my requirements (see below). I've also used Git Submodules and Mercurial Subrepos.
My needs are well described in a presentation by Daniel Pfeifer at Meeting C++ 2014. To summarize the goals of this dependency tool (discussed at 18:55 in the linked video):
Not just a package manager
Supports pre-built or source dependencies
Can download or find locally - no unnecessary downloads
Fetches using a variety of methods (i.e. download, or VCS clones, etc.)
Integrated with the system installer - can check if lib is installed
No need to adapt source code in any way
No need to adapt the build system
Cross-platform
Further requirements or clarifications I would add:
Suitable for third-party and/or versioned dependencies, but also capable of specifying non-versioned and/or co-developed dependencies (probably specified by a git/mercurial hash or tag).
Provides a mechanism to override the specified fetching behavior to use some alternate dependency version of my choosing.
No need to manually set up a dependency store. I'm not opposed to a central dependency location as a way to avoid redundant or circular dependencies. However, we need the simplicity of cloning a repo and executing some top-level build script that invokes the dependency manager and builds everything.
Despite the requirement that I should not have to modify my build system, obviously some top-level build must wield the dependency manager and then feed those dependencies to the individual builds. The requirement means that the individual builds should not be aware of the dependency manager. For example, if using CMake for a C++ package, I should not need to modify its CMakeLists.txt to make special function calls to locate dependencies. Rather, the top-level build manager should invoke the dependency manager to retrieve the dependencies and then provide arguments CMake can consume in traditional ways (i.e. find_package or add_subdirectory). In other words, I should always have the option of manually doing the work of the top-level build and dependency manager and the individual build should not know the difference.
Nice-to-have:
A way to interrogate the dependency manager after-the-fact to find where a dependency was placed. This would allow me to create VCS hooks to automatically update the hash in dependency metadata of co-developed source repo dependencies. (Like submodules or subrepos do).
After thoroughly searching the available technologies, comparing against package managers in various languages (i.e. npm), and even having a run at my own dependency manager tool, I have settled on Conan. After diving deep into Conan, I find that it satisfies most of my requirements out of the box and is readily extensible.
Prior to looking into Conan, I saw BitBake as the model of what I was looking for. However, it is Linux-only and is heavily geared toward embedded Linux distros. Conan has essentially the same recipe features as BitBake and is truly cross-platform.
Here are my requirements and what I found with Conan:
Not just a package manager
Supports pre-built or source dependencies
Conan supports classic release or dev dependencies and also allows you to package source. If binaries with particular configurations/settings do not exist in the registry (or "repository", in Conan parlance), a binary will be built from source.
Can download or find locally - no unnecessary downloads
Integrated with the system installer - can check if lib is installed
Conan maintains a local registry as a cache. So independent projects that happen to share dependencies don't need to redo expensive downloads and builds.
Conan does not prevent you from finding system packages instead of the declared dependencies. If you write your build script to be passed prefix paths, you can change the path of individual dependencies on the fly.
Fetches using a variety of methods (i.e. download, or VCS clones, etc.)
Implementing the recipe's source function gives full control over how a dependency is fetched. Conan supports recipes that download or clone the source themselves, or it can "snapshot" the source, packaging it with the recipe itself.
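A minimal Conan 1.x-style recipe sketch (the name, URL and CMake build are placeholders) showing where that control lives:

    # Illustrative recipe: not a real package.
    from conans import ConanFile, CMake

    class SomeLibConan(ConanFile):
        name = "somelib"
        version = "1.2.3"
        settings = "os", "compiler", "build_type", "arch"
        generators = "cmake"

        def source(self):
            # Full control here: a plain download, a tarball, or a pinned VCS commit.
            self.run("git clone --branch v1.2.3 https://example.com/somelib.git .")

        def build(self):
            cmake = CMake(self)
            cmake.configure()
            cmake.build()

        def package(self):
            self.copy("*.h", dst="include", src="include")
            self.copy("*.lib", dst="lib", keep_path=False)
            self.copy("*.a", dst="lib", keep_path=False)

        def package_info(self):
            self.cpp_info.libs = ["somelib"]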
No need to adapt source code in any way
No need to adapt the build system
Conan supports a variety of generators to make dependencies consumable by your chosen build system. This agnosticism toward any particular build system is Conan's real win, and ultimately what makes dependency management with the likes of Bazel, Buckaroo, etc. feel cumbersome by comparison.
Cross-platform
Python. Check.
Suitable for third-party and/or versioned dependencies, but also capable of specifying non-versioned and/or co-developed dependencies (probably specified by a git/mercurial hash or tag).
Built with semver in mind, but can use any string identifier as version. Additionally has user and channel to act as namespaces for package versions.
Provides a mechanism to override the specified fetching behavior to use some alternate dependency version of my choosing.
You can prevent the fetch of a particular dependency by not including it in the install command. Or you can modify or override the generated prefix info to point to a different location on disk.
No need to manually set up a dependency store. I'm not opposed to a central dependency location as a way to avoid redundant or circular dependencies. However, we need the simplicity of cloning a repo and executing some top-level build script that invokes the dependency manager and builds everything.
Despite the requirement that I should not have to modify my build system, obviously some top-level build must wield the dependency manager and then feed those dependencies to the individual builds. The requirement means that the individual builds should not be aware of the dependency manager. For example, if using CMake for a C++ package, I should not need to modify its CMakeLists.txt to make special function calls to locate dependencies. Rather, the top-level build manager should invoke the dependency manager to retrieve the dependencies and then provide arguments CMake can consume in traditional ways (i.e. find_package or add_subdirectory). In other words, I should always have the option of manually doing the work of the top-level build and dependency manager and the individual build should not know the difference.
Conan caches dependencies in a local registry. This is seamless. The canonical pattern you'll see in Conan's documentation is to add some Conan-specific calls in your build scripts, but this can be avoided. Once again, if you write your build scripts to consume prefix paths and/or input arguments, you can pass the info in and not use Conan at all. I think the Conan CMake generators could use a little work to make this more elegant. As a fallback, Conan lets me write my own generator.
A way to interrogate the dependency manager after-the-fact to find where a dependency was placed. This would allow me to create VCS hooks to automatically update the hash in dependency metadata of co-developed source repo dependencies. (Like submodules or subrepos do).
The generators point to these locations. And with the full capability of Python, you can customize this to your heart's content.
Currently co-developing dependent projects is the biggest question mark for me. Meaning, I don't know if Conan has something out of the box to make tracking commits easy, but I'm confident the hooks are in there to add this customization.
Other things I found in Conan:
Conan provides the ability to download or build toolchains that I need during development. It uses Python virtualenv to make enabling/disabling these custom environments easy without polluting my system installations.

Best way to support different processor architectures in native code in Gradle build

I have a Gradle build that predominantly consists of Java code, which also contains some native code. The native components are published into an Ivy repository (Artifactory). They contain DLLs, LIBs, headers and so forth. These components are currently published using a manual process; I don't yet have a solution that uses Gradle to build the C++ code.
The native components exist in both 32-bit and 64-bit variants, for both release and debug builds. So far I've been publishing them using classifiers such as release-x86, release-x64, etc. (and putting artifacts marked with all classifiers in the same configuration).
I haven't been able to use the classifier to declare dependencies on these components (I asked about this here: Does Gradle support classifiers for Ivy repositories? but didn't get any answers; I think I failed the first 'S' in SSCCE).
The only way I've found to filter the artifacts is to depend on the configuration that delivers e.g. the DLLs and then filter the downloaded files by name to get e.g. just the x86 release DLLs (as the classifier is part of the file name), which seems a bit of a kludge.
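Roughly, the kludge looks something like this (the configuration name, variant string and output path are illustrative):

    // Depend on the whole runtime configuration, then pick out one variant
    // by matching the classifier embedded in the file names.
    task extractX86ReleaseDlls(type: Copy) {
        from(configurations.nativeRuntime.filter { it.name.contains('release-x86') })
        into "$buildDir/native/x86/release"
    }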
I've considered having separate configurations for each combination of x86/x64 and release/debug, but it doesn't feel like the right solution. That's four configurations just to encapsulate the DLLs for runtime dependencies; I'd need four more for the corresponding compile-time dependencies (LIBs, PDBs, headers).
Has anyone else achieved this in a way they're happy with?

Automating Maven artifact releasing

For a project with a large number of Maven artifacts (both internally generated and external ones), how does one go about automating the release of the internally controlled artifacts as part of an overall product release?
Things to be aware of about this question: we use Jenkins and the Maven Release Plugin, so the operation of releasing a single artifact is automated (albeit the operation to kick-start the process is manual). However, the process of releasing all the changed artifacts over the course of a release is not automated (i.e. one has to manually kick-start the release of each artifact). Part of the problem is that almost nothing is released until the end of the release; prior to that, everything remains a SNAPSHOT. We have a huge number of components as well as numerous applications/services (over 30) which rely on this plethora of components. So it is not just a case of picking a component and releasing it; there are release dependency hierarchies that must be followed (i.e. start at the bottom, releasing components that do not use other components, and then work your way up until all the applications/services are released).
It is also worth noting that we use two common parent poms which, for the most part, control the versions of the external artifact dependencies and the internal component dependencies. Some pom files for components and applications may override this, but this is (or should be) an exception and should be for a good, but temporary, reason. So when an internal artifact is released, the version in the corresponding parent dependency pom should also be updated.
The product has a release number (of course); however, the various pom files technically do not share this version number. While this is not strictly true, the idea is that when parts of the software are set to end-of-life, they will not be updated in the future; thus, while a limited number of artifact versions match the product's version at present, this will eventually not be the case.
Any thoughts on ways to get this process automated would be greatly appreciated. Also if you feel what I have described seems to be a crazy way to manage the software, then please provide a comment. Thank you.
You might be able to make use of the Maven Versions plugin which can help formalise versions for projects.
For example, the use-next-releases goal may allow you to release the lowest-level projects and then more rapidly bring those released versions into the projects that depend on them.
There may also be scope to use the use-next-versions goal if you fancy releasing components as necessary and simply bringing your projects up to the "latest" version that's been formally released.
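For example, after each round of low-level releases, a bottom-up pass in the dependent projects might look something like this (the com.mycompany pattern is a placeholder for your internal group id):

    mvn versions:use-next-releases -Dincludes="com.mycompany:*"
    mvn versions:commit

versions:commit simply removes the backup poms the plugin leaves behind; the edited poms would still need to be committed to source control as part of the release job.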

Resources