I am developing an OSGi-based application that deploys to a Karaf container. Karaf has an auto-deployment feature whereby copying a bundle into its karaf/deploy directory should automatically deploy that bundle into the container. More often than not, however, I get errors similar to the one below when I copy bundles into the deploy directory:
org.osgi.framework.BundleException: Bundle symbolic name and version are not unique: legacy-services-impl:8.0.0.ALPHA-SPRINT9-SNAPSHOT
at org.apache.felix.framework.BundleImpl.createRevision(BundleImpl.java:1225)
at org.apache.felix.framework.BundleImpl.<init>(BundleImpl.java:95)
at org.apache.felix.framework.Felix.installBundle(Felix.java:2979)
at org.apache.felix.framework.BundleContextImpl.installBundle(BundleContextImpl.java:165)
at org.apache.felix.fileinstall.internal.DirectoryWatcher.installOrUpdateBundle(DirectoryWatcher.java:1030)[6:org.apache.felix.fileinstall:3.3.11.fuse-71-047]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.install(DirectoryWatcher.java:944)[6:org.apache.felix.fileinstall:3.3.11.fuse-71-047]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.install(DirectoryWatcher.java:857)[6:org.apache.felix.fileinstall:3.3.11.fuse-71-047]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.process(DirectoryWatcher.java:483)[6:org.apache.felix.fileinstall:3.3.11.fuse-71-047]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.run(DirectoryWatcher.java:291)[6:org.apache.felix.fileinstall:3.3.11.fuse-71-047]
Instead of redeploying an already deployed bundle, the container tells me that I am trying to deploy a duplicate bundle.
Karaf does indeed have that bundle deployed, but why won't it redeploy the bundle? What is causing this behavior, and how can I avoid such errors on auto-deploy?
Thank you,
Michael
I suspect that your bundle does not stop correctly. That may be why Karaf thinks it is still there. Do you have code in your activator that is executed on stopping? Perhaps you are also running some threads. You should make sure the stop method of your activator works, cleanly closes all resources, and stops all threads your bundle started.
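For illustration, a minimal activator along these lines is usually enough to let Karaf cleanly remove the old revision; the class name and the executor-based background work are assumptions for this sketch, not your actual code:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Hypothetical activator, only to illustrate a clean stop()
public class LegacyServicesActivator implements BundleActivator {

    private ExecutorService executor;

    @Override
    public void start(BundleContext context) throws Exception {
        // Start whatever background work the bundle needs
        executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> { /* background work */ });
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        // Shut down everything the bundle started so the framework can fully unload it
        if (executor != null) {
            executor.shutdownNow();
            executor.awaitTermination(5, TimeUnit.SECONDS);
            executor = null;
        }
        // Also close connections, deregister listeners and release any other resources here
    }
}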
I am new to Karaf and Camel, and I'm trying to deploy custom Camel routes (Java). I'm running into a lot of problems when deploying the Camel bundle (.jar) to the hot-deploy directory.
What I have so far:
Apache Karaf 4.3.1 running in docker container
Bundle .jar with the java defined route
My idea is to have a /deploy directory mapped into the Karaf container so that any .jar added to that directory gets deployed (or maybe I'll build a new Karaf image).
When I tried to add my current bundle to the directory, I got the following error message:
20:19:32.490 INFO [fileinstall-/opt/karaf/deploy] Installing bundle org.apache.karaf.examples.karaf-camel-example-java / 4.3.1
20:19:32.535 WARN [fileinstall-/opt/karaf/deploy] Error while starting bundle: file:/opt/karaf/deploy/karaf-camel-example-java-4.3.1.jar
org.osgi.framework.BundleException: Unable to resolve org.apache.karaf.examples.karaf-camel-example-java [111](R 111.0): missing requirement [org.apache.karaf.examples.karaf-camel-example-java [111](R 111.0)] osgi.wiring.package; (&(osgi.wiring.package=org.apache.camel)(version>=3.6.0)(!(version>=4.0.0))) Unresolved requirements: [[org.apache.karaf.examples.karaf-camel-example-java [111](R 111.0)] osgi.wiring.package; (&(osgi.wiring.package=org.apache.camel)(version>=3.6.0)(!(version>=4.0.0)))]
at org.apache.felix.framework.Felix.resolveBundleRevision(Felix.java:4368) ~[?:?]
at org.apache.felix.framework.Felix.startBundle(Felix.java:2281) ~[?:?]
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998) ~[?:?]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundle(DirectoryWatcher.java:1260) [!/:3.6.8]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundles(DirectoryWatcher.java:1233) [!/:3.6.8]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.doProcess(DirectoryWatcher.java:520) [!/:3.6.8]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.process(DirectoryWatcher.java:365) [!/:3.6.8]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.run(DirectoryWatcher.java:316) [!/:3.6.8]
I think this can be solved with a Maven bundle "wrap", but I'm not sure if this is correct, and if so, how should I wrap the bundle?
Thank you for reading :D
A bit late, but I hope this helps someone, as I've fiddled with this setup quite a bit over the past year while exploring OSGi, Karaf, Camel and Docker.
If you want to do local development with Karaf, you can map your local Maven repository into the container, which makes installing bundles and features quite a bit easier.
Example Docker compose for Karaf
Here's a docker-compose.yml for Karaf 4.2.11, but you can probably change it to 4.3.1 without any problems. (Add :z to the volumes if you are using SELinux.)
version: "2.4"
services:
  karaf-runtime:
    container_name: karaf
    image: apache/karaf:4.2.11
    ports:
      - 8101:8101
      - 8181:8181
      - 1098:1098
    volumes:
      - ./karaf/etc:/opt/apache-karaf/etc
      - ./karaf/deploy:/opt/apache-karaf/deploy
      - karaf-data:/opt/apache-karaf/data
      - ~/.m2:/root/.m2
      - karaf-history:/root/.karaf
    command: [ karaf, server ]
volumes:
  karaf-data:
  karaf-history:
Just save it as docker-compose.yml in an empty folder somewhere. Create a folder named karaf inside that folder, and then fetch the default configuration from Karaf using a couple of docker commands:
# Start detached karaf container with name karaf
docker run --name karaf -d apache/karaf:4.2.11
# copy files from container to host-system
docker cp karaf:/opt/apache-karaf/etc ./karaf/
# stop and remove the temporary container so the name is free for docker compose
docker stop karaf
docker rm karaf
Setting Karaf's etc folder as a shared volume makes it easy to tweak the configuration and share it with other developers through version control.
To start Apache Karaf with Docker Compose you can use the following commands:
# Start
docker compose up -d
docker-compose up -d
# Stop
docker compose down
docker-compose down
# note: docker compose = newer version of docker-compose command
Creating bundles
An easy way to create bundles is to use one of the official archetypes, karaf-bundle-archetype or karaf-blueprint-archetype, when creating the project.
For projects using Apache Camel it is generally easier to use karaf-blueprint-archetype. With it you configure the CamelContext in the XML blueprint file found in the project's resources/OSGI-INF/blueprint/ folder, referencing your RouteBuilder class as in the example below.
Example:
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="ExampleRoute" class="com.example.ExampleRouteBuilder" />
    <camelContext id="ExampleContext" xmlns="http://camel.apache.org/schema/blueprint">
        <routeBuilder ref="ExampleRoute" />
    </camelContext>
</blueprint>
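The bean class referenced above is just a plain Camel RouteBuilder. A minimal sketch of what it could look like (the timer route is only an assumed example):

package com.example;

import org.apache.camel.builder.RouteBuilder;

public class ExampleRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Trivial example route: fire a timer every five seconds and log a message
        from("timer:example?period=5000")
            .log("Example route triggered");
    }
}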
With the ~/.m2:/root/.m2 shared volume you can just package the project into your local Maven repository using mvn clean install, and then install the bundle in Karaf using bundle:install mvn:groupId/artifactId/version.
If you want to use the deploy folder, you can copy artifacts into the container using docker cp ./target/exampleBundle.jar karaf:/opt/apache-karaf/deploy.
Adding the Camel feature to Karaf
As for Camel, you can follow the official guide on how to add the Camel feature repository and the features you need.
But the steps are basically:
# Add camel feature repo
feature:repo-add camel <version>
# Install camel feature
feature:install camel
# List available camel features for install
feature:list | grep camel
# Install camel features you need
feature:install <feature-name>
Missing requirements
When installing bundles you will often encounter a missing requirement exception telling you that a package the bundle depends on is missing from Karaf, which means you'll have to install a bundle or feature that exports said package.
These messages are usually best read starting from the end:
(osgi.wiring.package=org.apache.camel)(version>=3.6.0)(!(version>=4.0.0))
The above tells you that the Karaf installation doesn't have Camel installed (the org.apache.camel package in a version at least 3.6.0 and below 4.0.0). OSGi bundles expect the OSGi framework/runtime to satisfy their dependencies, which is quite different from, say, standalone Spring Boot projects.
Shared volumes and new files
When it comes to sharing config files or the Karaf deploy folder, it's good to know that Docker has some issues related to new files in shared volumes. If a new file is added or created on the host file system, there is a chance that Karaf will not detect the file or changes made to it.
It's generally better to use docker cp path/to/file/on/host karaf:/path/to/folder/on/container to deploy new files to the container, even if it is a shared volume.
Otherwise you might have to shell into the container and make a copy of the file in question so that Karaf notices it.
Currently I have a situation where I develop a project, then run mvn install, and it gets put into my local Maven repository as a simple JAR file.
Then I have an "environment" crafted by some other guys, which includes a whole lot of bundles and other stuff, and is ultimately run via mvn pax:run, which takes something like 5 minutes.
I would like to be able to just run felix:update <bundle-name>, but I cannot fill the gap between a Maven JAR artifact in the local Maven repository and a ready-for-provisioning bundle that I could put somewhere and then just run felix:update or whatever, maybe uninstall/install.
When I try to run mvn pax:create-bundle with my project, it throws a "Containing project does not have packaging type 'pom'" exception.
Any help is highly appreciated
UPDATE: I've noticed that the problems with the re-installed bundle begin in its activator, with a ClassNotFoundException (although the class mentioned is, and always has been, present in the bundle, so it must be an issue with the classpath, ClassLoader setup or similar; see the stack trace and the sketch below):
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1574)
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at some.external.adapters.package.guice.SomeModule.configure(SomeModule.java:46)
at com.google.inject.AbstractModule.configure(AbstractModule.java:59)
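A workaround that is sometimes suggested for this kind of class loader mismatch is to load the class explicitly through the bundle's own class loader instead of relying on Class.forName or the thread context class loader. This is a rough sketch only, with made-up helper names, not code from the actual project:

import org.osgi.framework.BundleContext;

// Hypothetical helper, for illustration only
public class BundleClassLoading {

    // Bundle.loadClass uses the bundle's own wiring, which stays correct across updates,
    // whereas Class.forName may consult a stale or unrelated class loader
    static Class<?> loadFromBundle(BundleContext context, String className) throws ClassNotFoundException {
        return context.getBundle().loadClass(className);
    }
}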
If you have a path to a file which is the newly built bundle, you can update it from the Gogo shell as follows:
felix:update <bundleid> file:/path/to/file
refresh
Where <bundleid> is the numeric ID of the bundle that you want to update. The refresh command is needed in case any bundles depend on or import packages from the bundle you are updating; this command will cause the Framework to attempt to re-resolve them using the new dependency.
I am happy to accept @neil-bartlett's answer, though I have to add some more context:
1) One of the biggest issues I had initially is that a JAR file in the local Maven repo IS NOT THE SAME as an OSGi bundle. In order to create the bundle, I had to run mvn bundle:bundle AFTER mvn install, and the bundle got created in the target/ folder.
2) Afterwards, in a Gogo shell, I could indeed run felix:update <bundle-id> file:C:/Users/blablabla/bundle-SNAPSHOT-2.0.jar, and for some reason these days it just works. The exceptions mentioned in the update to the original post do still occur, but they do not prevent proper installation of the updated bundle.
I am trying to migrate an Eclipse plugin from Java 8 to Java 9. If I start a debug session (Run As > Eclipse Application...), everything works fine.
However, after installing my plugin I am not able to use it. If I use ss in the OSGi console I get the following status for my plugin:
1102 STARTING org.treez.core_1.0.0.201712191435
and if I manually try to start it I get
osgi> start 1102
gogo: BundleException: Error loading bundle activator.
I tried to start a remote debug session, as suggested here:
Debugging Eclipse plug-ins
I set a breakpoint in the constructor of my Activator, but that breakpoint is never reached.
=> How can I get additional information about why loading the bundle activator fails? Is there some log file? Can I set a logging level to TRACE somewhere?
I assume the issue might be that a resource can be found while debugging the Eclipse Application but not when using the bundled jar. More info, e.g. the name of the resource that could not be found, would be very helpful.
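One way to verify that suspicion, from any code in the plugin that does load, is to look the resource up through the bundle itself rather than through the class path. A minimal sketch, with a hypothetical helper and a placeholder resource path:

import java.net.URL;
import org.osgi.framework.Bundle;
import org.osgi.framework.FrameworkUtil;

// Hypothetical helper for checking whether a resource was actually packaged into the bundle
public final class ResourceCheck {

    static boolean isPackaged(Class<?> classInBundle, String path) {
        Bundle bundle = FrameworkUtil.getBundle(classInBundle);
        // getEntry looks only inside this bundle's jar, ignoring the class path used in a debug launch
        URL entry = bundle == null ? null : bundle.getEntry(path);
        return entry != null;
    }
}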
Related questions:
Debugging Eclipse plug-ins
CQ5 OSGi bundle does not start:- Activator cannot be found
When plugins fail to start there is normally a message in the .log file in the workspace .metadata directory.
On Linux, Unix and macOS this file and directory are hidden so you may need to do something special to see them.
I created a Maven project in Eclipse and installed the bundle in the OSGi console, but the bundle is in the Installed state. All the dependencies are resolved and there are no errors, but the status is still not Active.
How do I call the OSGi service from my AEM component page? Can I invoke the OSGi service from my component JSP page only if the bundle state is "active"? Do I need a ServiceID to be generated for my bundle in order to invoke the service?
Try this -
Tail your error logs
In the browser, go to /system/console/bundles
Search for your bundle and try activating it manually (use the play button on the right side of the bundle entry)
If the bundle activates successfully, then you probably need to fix your deploy script so it starts the bundle after installing it
If the bundle still doesn't start, look at the logs. There could be a wiring issue, or, if you have an activator class for the bundle, it may be throwing an exception while activating the bundle
Please go to the Felix console and activate the bundle using the play button.
If it still doesn't get activated, expand the bundle entry; there will be some error in the bundle (it will be shown in red).
You then need to resolve the class where the error is shown.
I hope this is helpful.
Thanks.
I have a build configuration that uses the TeamCity deployer plugin.
I'm using a container deploy to deploy the war file to Tomcat 7.0.63, installed as a service on a Windows Server 2012 R2 box.
The first time I run the build, the artifact (a war file) deploys successfully.
The second time, and on all subsequent runs, the deploy fails.
The error message:
Build failure message received: org.codehaus.cargo.container.ContainerException: Failed to undeploy
The log file error:
Caused by: org.codehaus.cargo.container.tomcat.internal.TomcatManagerException: FAIL - Unable to delete
When I go to the webapps folder on the remote server, the war file is deleted, but the expanded folder is only partially deleted. Most files are gone, except for a png file.
I am not able to manually delete the folder because Tomcat still has a lock on it.
If I restart tomcat, I'm then able to run the build successfully (war file deploys).
One thought I had was to restart Tomcat before or after each deploy.
How would I restart Tomcat from TeamCity?
Or perhaps, does anyone have suggestions on how to fix this problem?
You can configure the Tomcat Context using the antiResourceLocking option, as detailed further in the online documentation. This does come with some trade-offs, however; it's definitely worth reading the documentation in full and evaluating whether it's a suitable option for your application.