Gradle wrapper downloaded multiple times for different projects

I have got micro-services where each service is a separate project. In the gradle wrapper properties files, I have defined the same distribution URL:
distributionUrl=http://nexus-server.com/nexus/content/repositories/software/gradle/2.14.1/gradle-2.14.1-bin.zip
Our Nexus has been migrated to a different server, therefore I had to update the gradle wrapper properties file to use the new server address. I have got 2 questions:
1) Is there a way that I could tell gradle not to download the distribution from scratch, as (in principle) the distribution is the same and only the server address has changed?
2) Even if the gradle wrapper downloads the distribution from the new Nexus server, why does it repeat the download for each micro-service project instead of reusing the one downloaded for the first project (after it was built)?
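For context, the wrapper caches each distribution under $GRADLE_USER_HOME/wrapper/dists (by default ~/.gradle/wrapper/dists) in a directory named after a hash of the full distributionUrl, so changing only the host forces a fresh download under a new hash. A minimal sketch of the relevant gradle-wrapper.properties (the new-nexus-server.com host is a placeholder; the other keys show the defaults):

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=http://new-nexus-server.com/nexus/content/repositories/software/gradle/2.14.1/gradle-2.14.1-bin.zip

Because the cache lives under the user home rather than the project directory, projects sharing an identical distributionUrl normally share one download; a re-download per project usually indicates a different GRADLE_USER_HOME per build, as is common on CI agents.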

Related

GitLab CI: Spring Boot dependency on another project

I have a clustered application architecture, where 3 of my primary services make use of a dependency artifact (let's call it commons) that contains the model files and other utils used by those 3 services.
Presently, I have all 3 Spring Boot applications deployed on k8s through GitLab CI, via Artifactory for image management.
Now, each time I make changes to my commons service, I have to bump the version of commons in its pom.xml (so that it doesn't conflict with the previous artifact in Artifactory), change the pom versions of my other 3 services to depend on this new version, and push all 4 services (first commons, so that the new build is available in Artifactory, and then the other 3).
Is there a better way to manage this? I would have preferred it if my 3 services were able to fetch the latest commons version and use it in their pom versions.
This is currently supported in Reliza Hub (disclaimer: I'm developing the project).
The workflow to get the latest release is documented here (see workflow 2. Get Latest Release Of A Project Or Product).
The idea is the following:
You define a project for your shared library and configure GitLab CI to automatically stream build metadata to Reliza Hub on every build, using the Reliza Client.
Automatic versioning can also be maintained via Reliza Hub (meaning that the Hub would increment versions for you on every build, based on your chosen versioning schema) - you need to use the getversion command of the Reliza Client for that.
You can then use these automatic version increments to update the version in your pom.xml at build time, so this process will be fully automated.
Once that is done, in the CI pipelines of each of the 3 dependent services, you include a call to Reliza Hub using the getlatestrelease command of the Reliza Client for your shared library. This call returns all the metadata for the latest release of the shared library, including its version.
You can then plug this version into the pom files of your dependent services.
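As a sketch of that last step (the commons.version property, the com.example coordinates, and the COMMONS_VERSION variable are all assumptions for illustration; the versions-maven-plugin used here is a generic Maven tool, not part of Reliza), the dependent service could reference the library through a property in its pom.xml:

<properties>
    <commons.version>0.0.1</commons.version>
</properties>

<dependency>
    <groupId>com.example</groupId>
    <artifactId>commons</artifactId>
    <version>${commons.version}</version>
</dependency>

and the CI job could update that property with the returned version before building:

mvn versions:set-property -Dproperty=commons.version -DnewVersion=$COMMONS_VERSION
mvn -B package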
Hope this helps.

Regarding upload to Artifactory

We maintain an artifactory within our intranet which is used by the development team.
Whenever a new dependency is added to any project, we upload the new jars into Artifactory.
This is currently a tedious process and we are trying to find out if there is a simpler way.
The current process is: if a project defines a new dependency, we need to connect to the internet and build the project using gradle so that we get to know what the new dependencies are (we in fact track in the logs which dependent and transitive jars get downloaded fresh).
Then we create a zip of these new jars alone and upload it to Artifactory. This is time-consuming and error-prone as well.
Is there any better way to achieve this? When I build using gradle connected to the internet, is it possible to publish the new dependencies to the maven local repo or to some new folder, so that we can zip that folder alone and upload it to Artifactory?
Kindly reply if anybody has a simple solution for the above problem.
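As an aside on the "some new folder" idea, a minimal Gradle sketch (the task name and target directory are made up; pick the configuration that matches your project, e.g. compile or runtime on Gradle 2.x) that collects all jars a build resolved, including transitives, into one directory:

// copies every resolved dependency jar into build/dependency-jars
task collectDeps(type: Copy) {
    from configurations.runtime
    into "$buildDir/dependency-jars"
}

Run gradle collectDeps after a successful online build, then zip build/dependency-jars for the upload. Note this collects all dependencies, not only the new ones.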
This is a maven answer but the same will apply to gradle.
You should be able to define a virtual repository in your artifactory, which is a combination of the local (artifactory hosted) and the maven central repo (internet hosted).
Your maven/gradle users will configure the virtual repository (not the internet) in their settings.xml; then, when a dependency is requested, maven will look in repositories in the following order:
1) local user repo at ~/.m2/repository
2) artifactory local repository
3) maven central
Each time a new artifact is loaded from 3) (no one has ever asked for it before), it will be added to 2) and 1), so the next user who asks for that dependency will only ever go as far as 2).
See https://www.jfrog.com/confluence/display/RTF/Virtual+Repositories
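For the client side, a sketch of the settings.xml mirror entry (the id and URL are placeholders for your Artifactory virtual repository):

<settings>
    <mirrors>
        <mirror>
            <id>artifactory-virtual</id>
            <name>Artifactory virtual repository</name>
            <url>http://artifactory.example.com/artifactory/maven-virtual</url>
            <mirrorOf>*</mirrorOf>
        </mirror>
    </mirrors>
</settings>

With mirrorOf set to *, every request goes through the virtual repository, which in turn falls through to maven central and caches whatever it fetches.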

Reusing Mule connectors and validation flows

How do I reuse mule code (flows, exception strategies, database connectors, validators) across several projects? These are application-specific reusable artifacts, not an enterprise-wide reuse.
For example: I have some master code (validators, flows, and exception strategies) which should be reused in 15 different flows (i.e. 15 different mule projects). We are not using maven at the moment. One way I explored is that we could jar it and publish it to a local nexus repo, and re-use it via the pom. Is there any other way?
If possible, I would also like to make it dynamic, such that if I change the master code and deploy it, the change takes effect without having to redeploy the projects that are using it.
You can reuse flows etc. (everything which is in Mule xml files) and Java classes by placing them in a plain Java project, building a jar from it and placing the jar on the classpaths of the importing Mule projects.
To use the stuff in the xml files, import them with <spring:import>.
Your question sounds like you already know this part.
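For reference, a minimal sketch of such an import in a Mule 3 configuration file (the resource name master-flows.xml is a placeholder):

<spring:beans>
    <spring:import resource="classpath:master-flows.xml"/>
</spring:beans>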
I recommend building all the Mule projects and the so-called master project with Maven: the Mule projects with packaging mule, the master project with packaging jar.
Maven will pack the master code inside the using projects, so there is no dynamic update.
When you want this dynamic update, don't build with Maven, or set the scope to "provided". In that case the master jar is not packaged inside the other Mule projects. You have to make sure it is on the server classpath, e.g. in lib/user. Then you can change it there, restart the Mule server, and all projects get the update.
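A sketch of the corresponding entry in a using project's pom.xml (the coordinates are placeholders):

<dependency>
    <groupId>com.example</groupId>
    <artifactId>master-flows</artifactId>
    <version>1.0.0</version>
    <scope>provided</scope>
</dependency>

With scope "provided", Maven compiles against the jar but leaves it out of the packaged application, so the copy in lib/user is the one actually used at runtime.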
The same, with another level of indirection and a possibility for grouping, can be done with Mule domains.
All the dynamic options described so far only work for on-premise Mule servers, not for CloudHub.

Updating KIE Execution Server container with updated rules file

This is where I am at:
I am using Drools 6.2 and calling drools engine remotely via KIE Execution Server running on jboss.
I used workbench to create my initial drl file and fact objects and then used the Build & Deploy option of workbench to create and deploy the jar file. I then created the container using the jar file and got the endpoint that I am using to access the rule engine from my client application. At this point everything is working fine and I am able to fire the rules remotely.
My requirement is to modify the rules file (.drl) outside the workbench, let's say in notepad, and update the container with this new drl file. Is there an easy way to create the jar file programmatically that I can deploy to the central maven repository? I can then run the KIE scanner to look for the latest version of my jar file and automatically update my container. Or is there another recommended way to update the running container with an updated .drl file?
My client application is not in Java, so I am not looking for an integrated solution where I write Java code to create the knowledge base and use the KIE builder to build the drl file.
Is there an easy way to create the jar file programmatically that I can deploy to the central maven repository?
2 options that I can think of, one "easy" and one not so much:
Option 1
Use Maven and the drools maven plugin (you don't have to write Java code, just create your maven project and run mvn package to get a jar). See here: https://docs.jboss.org/drools/release/6.0.1.Final/drools-docs/html/KIEChapter.html#KIEModuleIntroductionBuildingIntroductionSection
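A sketch of the relevant pom.xml fragment for this option (the plugin version is matched to Drools 6.x here; your own groupId/artifactId/version go in the usual places):

<packaging>kjar</packaging>
<build>
    <plugins>
        <plugin>
            <groupId>org.kie</groupId>
            <artifactId>kie-maven-plugin</artifactId>
            <version>6.2.0.Final</version>
            <extensions>true</extensions>
        </plugin>
    </plugins>
</build>

mvn package then produces the kjar, and mvn deploy can push it to your repository for the KIE scanner to pick up.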
Option 2
A JAR file is simply a zip file with a specified structure. That means that you should be able to update your whatever.drl file, put it in the directory structure that the KIE server expects and deploy it.
For instance, create a directory structure like:
META-INF/kmodule.xml
com/site/project/drools/rules/myrule/SomeRule.drl
Zip those files into somefile.jar and deploy it.
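For illustration, the kmodule.xml can be as small as the following (an empty kmodule element is valid and gives you the default KIE base and session):

<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://www.drools.org/xsd/kmodule"/>

Any zip tool can then produce the archive, for example:

jar -cf somefile.jar META-INF com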

Is there any way to configure Archiva to download missing Maven project modules if they aren't in the local workspace?

I'm confused about how Archiva fully works. I understand that if we had a core set of dependencies, we could use Archiva as our local maven repo.
The thing I don't understand is how Archiva manages build artifacts from your own projects.
Say I have a multi-module maven project - we can even use the one from Sonatype as an example: http://www.sonatype.com/books/mvnex-book/reference/multimodule-sect-building-multimodule.html
What if I wanted to have one team working on the Simple Model app, while another works on the Simple Webapp, but I didn't want either team to have the project they AREN'T assigned to in their local workspace? The Webapp needs Model to build, but I don't want the Webapp team having direct access to Model.
Is there any way Maven can detect that the build artifact for Model isn't in a Webapp dev's workspace and pull it from our local Archiva repo, so they can still build the Webapp despite not having the Model (maven module) code in their workspace?
The Model project will be like any other third-party dependency and be downloaded by Archiva automatically, provided:
1) the Webapp project specifies the Model project as a dependency
2) the Model project is deployed to Archiva periodically (by a Continuous Integration system or other means)
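As a sketch of the second point (the ids and URLs are placeholders for your Archiva instance), the Model project's pom.xml would carry a distributionManagement section and the CI job would run mvn deploy:

<distributionManagement>
    <repository>
        <id>archiva.releases</id>
        <url>http://archiva.example.com/repository/internal</url>
    </repository>
    <snapshotRepository>
        <id>archiva.snapshots</id>
        <url>http://archiva.example.com/repository/snapshots</url>
    </snapshotRepository>
</distributionManagement>

The Webapp then declares Model as an ordinary dependency, and Maven fetches the jar from Archiva whenever it is not already in the developer's local repository.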
