OSGi Bundle Repositories (How to host them on a Server)?

I would like to ship a product that is merely a configured OSGi container. When the container is launched it should, based on the configuration of the container, download and install the application - the most recent bundle(s) which comprise all the functionality of the application. When a newer version of the application becomes available I would like the application to automatically download and update/install the newer versions of the bundles so it appears seamless to the end user.
Based on the Motivation section of the Apache Felix OSGi Bundle Repository documentation (which is currently outdated) I think this is the purpose of OBR. Is that correct?
If not then what solutions exist for this scenario?
If so:
What does an OBR repository look like? I envision it to look somewhat like a maven repository that is accessible via the Internet with an XML file that describes the available bundles in the repository.
How does one manage a repository that can be accessed from the Internet?
The aforementioned documentation states that:
An OBR "repository server" is not necessary, since all functionality may reside on the client side.
But I assume that there does exist some "OBR repository server" for those repositories that should be accessible from the Internet. In this example situation, I would rather that the repository does not sit on the client side, but on a server so clients can be easily updated. To accomplish this, would I just set up an HTTP server that hosts 1) some xml file that describes the bundles available in the repository, and 2) the bundles?
Lastly, is there a simple example somewhere that demonstrates how all this works together?

Your understanding is exactly right. The main thing to watch out for when using OBR is that the XML format changed a lot when the specification was finalised. Sometimes 'OBR support' means the old style, and sometimes the new style. As long as client and server are consistent, either should be fine. The new format is richer, but at the moment there's wider support for the old style.
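To give a feel for what the client downloads, a minimal old-style repository.xml might look roughly like this (a sketch only: the attributes shown are the common ones such as symbolicname, uri, and version; the URL and bundle name are made up, and real files also carry capability/requirement metadata generated by the indexing tools):
<repository name="Example OBR" lastmodified="20130101000000.000">
  <resource id="com.example.mybundle/1.0.0"
            symbolicname="com.example.mybundle"
            presentationname="Example Bundle"
            uri="http://obr.example.com/bundles/com.example.mybundle-1.0.0.jar"
            version="1.0.0">
    <description>An example bundle entry</description>
  </resource>
</repository>
The bundles themselves can be served by the same plain HTTP server; the uri attribute simply has to resolve to the jar.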
To set up your own http hosting, you can use repoindex (formerly known as bindex). Apache Aries also has a repository generation tool.
Using the 'index' goal with the maven bundle plugin will generate OBR metadata, and bnd can also be directly configured to generate the repository.xml.
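As a hedged sketch, invoking that goal directly looks like this (if I recall correctly it indexes your local Maven repository by default, so check the plugin documentation for your version before relying on it):
mvn org.apache.felix:maven-bundle-plugin:index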
Finally, if you don't want to manage the http server directly, Nexus and Karaf Cave provide OBR repository hosting.
You may also find this question helpful: OSGI OBR repository hosting?

This is how I did it once for one of my projects, which builds with Maven.
What you need:
a Maven repository (such as Sonatype Nexus) to deploy the jar files to
an FTP or WebDAV server to upload the repository.xml file to (Nexus Pro can do this, but you have to pay...)
In my POM file, I added the following
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <version>2.3.7</version>
  <extensions>true</extensions>
  <executions>
    <execution>
      <goals>
        <goal>deploy</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <remoteOBR>repository.xml</remoteOBR>
    <altDeploymentRepository>my.obr.name::default::ftp://obr.example.com</altDeploymentRepository>
    <prefixUrl>http://my.maven.repository.com/content/groups/public</prefixUrl>
    <ignoreLock />
  </configuration>
</plugin>
When you execute the mvn deploy command, this updates the ftp://obr.example.com/repository.xml file, which links to the jars in your Maven repository (hence the prefixUrl tag).
my.obr.name is the id of the server to declare in your settings.xml file for authentication.
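As a sketch, the matching settings.xml entry would look like this (credentials are placeholders; note also that deploying over ftp:// normally requires the wagon-ftp provider to be declared as a build extension, which is worth double-checking for your Maven version):
<settings>
  <servers>
    <server>
      <id>my.obr.name</id>
      <username>ftp-user</username>
      <password>ftp-password</password>
    </server>
  </servers>
</settings>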
To achieve this, I read the following sites:
http://www.flexthinker.com/2012/06/release-an-osgi-bundle-into-an-obr-with-maven/
http://books.sonatype.com/mcookbook/reference/ch01s09.html
In Felix, you just have to do:
g! obr:repos add ftp://obr.example.com/repository.xml
g! obr:deploy my-artifact
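To check that the repository was picked up, the bundlerepository commands can list what Felix sees (command names as I remember them from the Gogo shell; exact syntax may vary between versions):
g! obr:repos list
g! obr:list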
The only problem I still have is that the dependencies are not listed in the repository.xml file.
Maybe you can work with Eclipse Orbit to retrieve some dependencies:
http://download.eclipse.org/tools/orbit/downloads/
https://bugs.eclipse.org/bugs/show_bug.cgi?id=406259
But I did not try this yet.

Related

Maven toolchain and heroku/cloud deployment

This is going to be a broad topic, so please bear with me. I built a microservices app that started as a hobby, and over the few months I have put into it I have made something useful. So far I have used STS (Spring Tools) with Maven and a Eureka client.
I have independent microservices talking to each other and a UI microservice that presents the results to you. It's a single-page app. This is the structure (the GitHub repo is laid out the same way):
--my-app
--my-microservice-discovery
--my-microservice-domain (jdk12)
--my-microservice-searcher
--my-microservice-orchestrator
--my-microservice-ui
--my-transferobjects (common jar not microservice)
--pom.xml (my main module pom)
So this is what I am dealing with: on GitHub I made a single repository, my-app, containing all these Spring Boot projects. In the IDE everything works; now comes the deployment part on some cloud provider. I chose Heroku, as I had some experience with it in the past, but also because I could not make this work with Google (although they have an attractive starter scheme going on right now). Connecting to a private GitHub repo from Google Cloud Build is a pain, but Heroku does this with style.
I decided to go command line, because that's how I have to work on the cloud, and that's where all hell broke loose: I got lots of dependency issues between the JDK versions, which the IDE managed well but which were not yet defined correctly for Maven.
I still managed to make my local (command-line) build succeed, but I had to hard-code some configuration to force JDK 12 in the my-microservice-domain POM, like below, and a similar fix for my-transferobjects, where because of a Lombok issue I have to provide JDK 8 specifically (no idea why).
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>12</source>
        <target>12</target>
        <executable>/home/Downloads/jdk-12</executable>
      </configuration>
    </plugin>
  </plugins>
</build>
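For comparison, the toolchains approach I ask about below would, as far as I understand it, look roughly like this (the JDK path and plugin version are placeholders): a toolchains.xml in ~/.m2 declaring the JDK,
<toolchains>
  <toolchain>
    <type>jdk</type>
    <provides>
      <version>12</version>
    </provides>
    <configuration>
      <jdkHome>/home/Downloads/jdk-12</jdkHome>
    </configuration>
  </toolchain>
</toolchains>
plus the maven-toolchains-plugin in the POM, so the compiler resolves the JDK from there instead of from a hard-coded executable:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-toolchains-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <goals>
        <goal>toolchain</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <toolchains>
      <jdk>
        <version>12</version>
      </jdk>
    </toolchains>
  </configuration>
</plugin>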
My questions are as follows:
Is there a toolchain example for handling this multi-JDK compilation issue, and do we always have to provide the local JDK installation path?
How do I deploy this on Heroku? How will it know where my JDK 12 is, or will just defining the source/target versions in the POM do the trick on the cloud? Also, do Heroku or Google Cloud even support JDK 12?
Is it a good idea to have a multi-repo or a single-repo deployment? I can still change it, but I like it this way.
I get that I need to create Docker images for each of my microservices, but does someone have a tutorial for doing this locally first, or a GitHub repo with examples I can look at?
Once I add all those Dockerfiles to each microservice, is that sufficient to deploy at production level? I have read a lot about API gateways and load balancers; where do they fit into my architecture?
I also use localhost everywhere for the Eureka server and the Eureka/Feign client properties. Will that work the same way on the cloud, and will the Eureka server be able to find all my services as it does locally, with no config changes needed in the cloud?
Which is better, Google Cloud or Heroku? Google Cloud seems a bit of a hassle for now.
These are my worries; please advise.
OK, I am going to answer my own question. I did a bit of reading, followed what #cool suggested above, went the multiple-repository way, and achieved what was needed.
I also chose Heroku, simply because the ease of things there felt like my local environment, and such a simple app as mine on the latest binaries has no problems whatsoever.
I followed a few links for setting up my Eureka server and client and the Procfile, and set some environment variables directly from the app's settings page on the dashboard.
Needless to say, I also maintained multiple profiles (dev, prod, test), and since the UI uses Vaadin, some additional steps are needed in the UI app's pom.xml for production.
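Roughly what this looked like (the jar name and the EUREKA_URL variable below are placeholders, not the exact values from my repo). The Procfile of a service, with Heroku injecting the port via $PORT:
web: java -Dserver.port=$PORT -jar target/my-microservice-ui-0.0.1-SNAPSHOT.jar
and in the prod profile of each client, the Eureka address read from an environment variable instead of being hard-coded to localhost:
eureka.client.serviceUrl.defaultZone=${EUREKA_URL:http://localhost:8761/eureka/}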
I am a bit concerned about the way my services get terminated after periods of no activity on Heroku, plus there is currently an issue with service discovery among them: my Eureka server reports all instances, but they cannot contact each other.
Right now I am busy with other things [viz. fixing bugs] as the launch is a month away. I will soon post a question about this ;).

Create maven repository in web hosting

I need to host several jar files in a Maven repository. There are many free web hosting companies which provide free web space. Do I need some special configuration to create a simple Maven repository and upload Maven jars?
In essence, a Maven repository is simply a place to store and retrieve files from. Nowadays, this is mainly done via the HTTP protocol. In an over-simplified example (as it was in the early days of Maven), things were simply hosted in a web server - you would deploy them via an HTTP PUT and retrieve them over HTTP GET. As things evolved, Maven artifact repository managers evolved and they started keeping record of various kinds of metadata.
As an over-simplified answer: if you have proper <distributionManagement/> and <repositories/> sections in your pom.xml and you can issue HTTP PUT and HTTP GET operations against a web server, then you can store these artifacts in a web server, if you really don't want to use an artifact repository manager (which is not really advisable, but hey, who am I to stop you?!). Clearly, this example doesn't cover adding credentials (which are handled by mapping the Maven settings.xml <servers/> ids to <repository/> ids).
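A minimal sketch of those two sections, assuming a web server at https://repo.example.com/maven2 that accepts PUTs (the id and URL are placeholders):
<distributionManagement>
  <repository>
    <id>my-web-repo</id>
    <url>https://repo.example.com/maven2</url>
  </repository>
</distributionManagement>
<repositories>
  <repository>
    <id>my-web-repo</id>
    <url>https://repo.example.com/maven2</url>
  </repository>
</repositories>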
If your free hosting service allows you to install an artifact repository manager, you should consider picking one and installing it.

Mock Maven Release

We have a basic Maven parent POM for all our projects, which is tested with integration tests. However a big part of the customization is for the Maven release plug-in:
<plugin>
  <artifactId>maven-release-plugin</artifactId>
  <configuration>
    <tagBase>https://my-url</tagBase>
    <preparationGoals>clean verify org.acme:my-plugin:my-goal</preparationGoals>
    <completionGoals>org.acme:my-other-plugin:other-goal</completionGoals>
    <resume>false</resume>
  </configuration>
</plugin>
I tried testing it via "release:prepare" and got "Can't release project due to non released dependencies" for the parent POM, which can't even be suppressed via -DallowTimestampedSnapshots=true.
I could test via "release:prepare -DdryRun=true", but that doesn't even test the preparation goals. So the only other way I could think of was to release the POM and then try to release an arbitrary project. So now I'm at version 1.0.14 and have reverted about 50 times, and I don't think that's the right way anymore.
Is there any way to mock a Maven release? Maybe tell it to tag to a local path and have it commit changes there? It shouldn't deploy to our Nexus either, but I'm at the point where I'm not picky anymore.
I also had a need to do this, and like you I was not interested in actually doing SVN commits or deploys to a remote repo - in my mind that verification was part of other integration tests. I figured that the maven-release-plugin developers would also have a similar need, and indeed they did. They wrote mock SCM and wagon providers.
You can see the mocks used in the release plugin POM profile with id run-its. Note the config uses setupIncludes to be sure the mocks are built and installed in the local repo prior to running any actual tests.
The projects themselves need to use the mocks. Look at one of the integration tests to see how to define the scm element and add the dependency on the Wagon mock.
I used a log verification technique to verify that the appropriate executions were run during the tests.
Note: There are 3 mocks in the setup directory I linked. I found I only needed to use two of them, the ones with suffix "-dummy."
Modularize your process with profiles. Have a profile that triggers your 'prepare' actions, and a profile that triggers your 'perform' actions, and test those instead of or before running the release plugin. Configure the release plugin to do these things by activating the profile.
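As a rough sketch of that idea (the profile id and phase binding are my own choices, reusing the goal from the preparationGoals above), a profile like this lets mvn clean verify -Psimulate-prepare exercise the same steps as release:prepare without touching SCM or Nexus:
<profiles>
  <profile>
    <id>simulate-prepare</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.acme</groupId>
          <artifactId>my-plugin</artifactId>
          <executions>
            <execution>
              <phase>verify</phase>
              <goals>
                <goal>my-goal</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>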

How to get extra attributes in "resource" element of an OBR repository?

We are trying to get the Apache Felix Web Console OBR plugin working with OBR repos produced by the current Bndtools/Bindex.
The problem is that the generated repositories (for example, produced with "Release Bundles" from Eclipse) cannot be read by the Felix Web Console plugin.
By contrast, Felix's own OBR repository is properly understood by the plugin, since it seems to have additional attributes within the "resource" element which are missing in our repository:
...
<resource
id="org.apache.felix.bundlerepository/1.4.1"
symbolicname="org.apache.felix.bundlerepository"
presentationname="Apache Felix Bundle Repository"
uri="http://repo1.maven.org/maven2/org/apache/felix/org.apache.felix.bundlerepository/1.4.1/org.apache.felix.bundlerepository-1.4.1.jar"
version="1.4.1">
...
The schema specification in http://www.osgi.org/download/rfc-0112_BundleRepository.pdf leaves room for attributes within the "resource" XML element; however, they are defined as part of the Java API.
This GitHub fork seems to do the job: https://github.com/rkrzewski/bindex. But does anybody know what the status of this is? Will it be integrated into Bindex some day? UPDATE: here is the answer from Rafał, the fork's owner: https://github.com/rkrzewski/bindex/issues/3#issuecomment-27784279
So I have re-asked on the bndtools-users Google group: https://groups.google.com/forum/#!topic/bndtools-users/ZdY0ASnLNmc
Or are there any other ways to get OBR repositories generated with the missing resource attributes? Thanks.
The development of Bindex moved to https://github.com/bndtools/bindex.
(source: https://groups.google.com/d/msg/bndtools-users/R3U2SDazTjY/OyOVTK8DZHUJ)
Does this version create the proper format for you?

Adding artefacts to Archiva not through its interface

How can I insert an artefact into Archiva other than through its web interface?
It is possible to upload artifacts using Maven.
Please refer to the Archiva Users Guide, Section Deploying to Repository for the details.
The following methods are available:
upload via the user interface (I presume this is the one you refer to as the web interface)
connect via any WebDAV client at http://localhost:8080/archiva/repository/repo-name (adjust according to your configuration)
use HTTP PUT with basic authentication to the same location as the WebDAV URL (this is the method that other tools like Maven, Ivy, etc. would use; see the curl sketch after this list)
drop the file into the correct place in the file system and wait for Archiva's scanner to pick up the changed artefact
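For instance, the HTTP PUT route can be exercised with curl (the repository name, coordinates and credentials are placeholders following the standard Maven layout):
curl -u myuser:mypassword -T my-artifact-1.0.jar \
  "http://localhost:8080/archiva/repository/repo-name/com/example/my-artifact/1.0/my-artifact-1.0.jar"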
As Torsten's answer indicates, uploading using Maven's deploy phase or deploy:deploy-file goals (or equivalent from another build tool) is likely what you want since it will take care of constructing the correct path for the artefact and pushing any associated metadata, assuming you are using Archiva as a Maven artefact repository.
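A hedged example of the deploy-file variant (the repositoryId must match a <server> entry in your settings.xml; the id, coordinates and URL here are placeholders):
mvn deploy:deploy-file \
  -Dfile=my-artifact-1.0.jar \
  -DgroupId=com.example \
  -DartifactId=my-artifact \
  -Dversion=1.0 \
  -Dpackaging=jar \
  -DrepositoryId=archiva.internal \
  -Durl=http://localhost:8080/archiva/repository/repo-name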
You have an upload screen through the web UI.
See http://www.youtube.com/watch?v=LSXe26inf0Y
