I'm using Hazelcast 3.6.2 and can't get the classloader to work in a multi-bundle environment. What is the recommended approach here? Setting the classloader in the config only works if the class to load is in the same bundle; in my case the class to load lives in a different bundle than the one creating the Hazelcast instance. I would like to use the HazelcastOsgiService and HazelcastOsgiInstance.
Any input is appreciated.
You have to provide your own classloader trick by writing a delegating classloader that keeps track of installed bundles. I did one of those "hacks" in the past to test it. You can find some code for the same issue, solved using a custom serializer, on GitHub (https://github.com/noctarius/hazelcast-mapreduce-demo/blob/master/musicdb-model/src/main/java/com/hazelcast/example/musicdb/server/ModelMapReduceActivator.java); in any case, Hazelcast does not yet officially support this out of the box.
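For illustration, a minimal sketch of such a delegating classloader (the class name and wiring are my own invention, not part of Hazelcast; it simply falls back to asking every installed bundle in turn):

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;

// Hypothetical helper: delegates to the parent first, then to every installed bundle.
public class BundleDelegatingClassLoader extends ClassLoader {

    private final BundleContext bundleContext;

    public BundleDelegatingClassLoader(BundleContext bundleContext, ClassLoader parent) {
        super(parent);
        this.bundleContext = bundleContext;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Only reached when the parent classloader could not load the class.
        for (Bundle bundle : bundleContext.getBundles()) {
            try {
                return bundle.loadClass(name);
            } catch (ClassNotFoundException ignored) {
                // keep trying the remaining bundles
            }
        }
        throw new ClassNotFoundException(name);
    }
}

You would then set it on the config before creating the instance, e.g. config.setClassLoader(new BundleDelegatingClassLoader(bundleContext, getClass().getClassLoader())). Resolution gets unpredictable as soon as several bundles contain the same package, which is part of why this remains a hack.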
I'm creating a Liferay 7.1 OSGi bundle which has some external dependencies. To save time, we opted to embed the external JAR in our OSGi bundle. I've managed to create a bnd file which includes all of the Elasticsearch dependencies and puts them on the bundle classpath. I've used the source code from GitHub (https://github.com/liferay/liferay-portal/blob/master/modules/apps/portal-search-elasticsearch6/portal-search-elasticsearch6-impl/build.gradle) and the bnd.bnd file to check what's imported.
When activating the bundle, an exception is thrown:
The activate method has thrown an exception
java.util.ServiceConfigurationError: org.elasticsearch.common.xcontent.XContentBuilderExtension: Provider org.elasticsearch.common.xcontent.XContentElasticsearchExtension not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.elasticsearch.common.xcontent.XContentBuilder.<clinit>(XContentBuilder.java:118)
at org.elasticsearch.common.settings.Setting.arrayToParsableString(Setting.java:1257)
The XContentBuilderExtension comes from elasticsearch-x-content-6.5.0.jar, while the XContentElasticsearchExtension class is included in elasticsearch-6.5.0.jar. Both are Included Resources and have been put on the bundle classpath.
The Activate method initializes a TransportClient in my other jar, hence it happens on activation ;).
Edit:
I've noticed that this error does NOT occur when installing the bundle for the first time, or when the portal restarts; it only occurs when I uninstall and reinstall the bundle. (And being able to do that is functionality I really prefer to have!) Maybe a stupid thought, but could it be that there is some 'hanging thread', that the bundle is not correctly installed, or that the TransportClient is still alive? I'm checking this out. Any hints are welcome!
Edit 2:
I fear this is an incompatibility between SPI and OSGi. I've checked: the High Level REST Client has the same issue (but with another extension). I'm going to try the Low-Level REST Client; that should work, as it has minimal dependencies, I'm guessing. I'm still very curious why the incompatibility is there. I'm certainly no expert on OSGi, nor on SPI. (Time to learn new stuff!)
This seems like a case where OSGi uses your bundle to resolve a dependency from another bundle, probably one that used your bundle to resolve a package when the system started.
Look at the symptoms: it does not occur on boot or restart, and the error is "not a subtype".
When OSGi uses your bundle to resolve a dependency, it keeps a copy of it around even after you remove it. When the bundle comes back, a package that was previously wired to another bundle may still be around, and you end up with two versions of the same class from different classloaders. They are then not the same class and therefore not a subtype.
To minimize these effects, expose only what is necessary and import only what needs importing. If you are using the Liferay Gradle configuration to include the dependency inside the bundle, stop: it is a poor way to embed it because it exposes a lot. If you are using the bnd file to include a resource and create an entry for the additional classpath location, do not expose it if that is not necessary. If several bundles use one bundle as a dependency, make sure they agree on the version, and check whether they exchange objects of the problematic class; if they do, extra care is required.
PS: you can add attributes when exporting and/or importing in order to be more specific and avoid wiring packages from the wrong origin.
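As an example, a hypothetical pair of bnd.bnd lines (the attribute name and value are made up): the providing bundle tags its export, and a consuming bundle requires that tag in its import, so the package cannot be wired to a copy from another origin:

Export-Package: org.elasticsearch.common.xcontent;version="6.5.0";provider=my-es-bundle
Import-Package: org.elasticsearch.common.xcontent;version="[6.5,7)";provider=my-es-bundle,*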
You can have two Elasticsearch connections inside one Java app, and Liferay by default does not expose the connection that it holds.
A way around this is to rebuild the Liferay ES connector. It's not a big deal, because you don't need to change the code, only the OSGi descriptor, in order to expose more services.
I did this in one POC project and it worked fine. The tricky part is rebuilding the Liferay jar, but that is explained by Pettry in his Google-like-search blog posts: https://community.liferay.com/blogs/-/blogs/creating-a-google-like-search (it is a series, but it's kind of hard to navigate in the new Liferay blogs; Google will probably help). Either way, it is all nicely documented here: https://github.com/peerkar/liferay-gsearch
The only thing that then needs to be done is to add org.elasticsearch.* to the export section of the bnd.bnd file. You will then be able to work with the native Elasticsearch API.
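For instance, something along these lines in the rebuilt connector's bnd.bnd (the version is just a guess; match it to the Elasticsearch jars actually embedded):

Export-Package: org.elasticsearch.*;version="6.5.0"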
I'm having a problem I simply can't get my head around.
I'm creating a JSF application where I (as administrator) can upload a jar file and that way update the application.
Through this tutorial: http://docs.oracle.com/javase/tutorial/deployment/jar/jarclassloader.html I have managed to class-load the jar file. The first issue is that I can only use the loaded class as an Object and not cast it to another class type; I get a ClassCastException.
JarClassLoader jcl = new JarClassLoader(url);
Class cl = jcl.retreiveClass(jcl.getMainClassName());
Object ob = cl.newInstance(); // I would like this to be a RouteBuilder object instead of Object
Concretely, I'm using Apache Camel, where I can add routes to an existing "core" (CamelContext). The core of Apache Camel runs in my web app, and I can then add routes at runtime. It's a route I want to package as a jar and load into the app at runtime (I want to develop and test the route in my local environment and afterwards upload it to the production application). A concrete route is just a simple Java class that extends RouteBuilder. I want to upload the jar, class-load (URLClassLoader) the main class (maybe the whole jar? does that make sense?), and cast it to a RouteBuilder. This seems to be impossible. I have chosen to upload the jar file to a folder outside my war, so that the routes do not get erased when I restart the webapp (is this smart, or should it be done in another way?). Is that a problem regarding namespaces? Furthermore, I haven't been able to find out whether I need to traverse my entire jar file and class-load every single class inside. Any comments on that?
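Roughly the shape of what I am trying to do (the route class name is made up; the parent classloader passed to URLClassLoader is, I suspect, the crucial part, since the cast only works when RouteBuilder comes from one and the same classloader):

import java.net.URL;
import java.net.URLClassLoader;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

public void deployRouteJar(CamelContext camelContext, URL jarUrl) throws Exception {
    // Use the classloader that already defines RouteBuilder (the webapp classloader) as parent;
    // otherwise the jar's class extends a different copy of RouteBuilder and the cast fails.
    URLClassLoader loader = new URLClassLoader(new URL[] { jarUrl }, RouteBuilder.class.getClassLoader());
    Class<?> cls = loader.loadClass("com.example.routes.MyRoute"); // hypothetical main class of the jar
    RouteBuilder route = (RouteBuilder) cls.getDeclaredConstructor().newInstance();
    camelContext.addRoutes(route); // register the new route with the running CamelContext
}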
To me it seems like I have some fundamental misconceptions about how extending/updating a Java application is done. I simply can't find any good examples or explanations of how to solve my problem, and therefore I conclude that I don't get this on a conceptual level.
Could someone elaborate on how to extend a running JSF application (I guess the same issues are relevant in native Java apps), explain how to upload and class-load a jar at runtime, and how to cast loaded classes to types other than Object?
Thanks
Lasse
If you are interested in loading Routes without stopping your application you could consider using an OSGi container like Karaf. Karaf provides support for Apache Camel routes out-of-the-box: http://camel.apache.org/karaf.html
All class loading is managed by the OSGi container, and you just need to run some commands to update things. I am not sure whether this would work with your JSF application, but it's worth taking a look.
I am trying to get an Apache Camel app using CXF working on WebSphere.
I noticed a number of errors:
Caused by: java.lang.IncompatibleClassChangeError: org.apache.neethi.AssertionBuilderFactory
at java.lang.ClassLoader.defineClassImpl(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:262)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:69)
This occurs because the org.apache.neethi classes are loaded from the WAS runtime instead of from neethi-3.0.2.jar in WEB-INF/lib.
Here is the info from the class loader:
class load: org.apache.neethi.builders.AssertionBuilder from: file:/D:/Tools/WebSphere/AppServer/plugins/org.apache.axis2.jar
The web application's class loader policy has been changed: it is set to Parent Last. Yet this class seems to be loaded Parent First. Is there anything in the CXF package that overrides this policy?
I noticed that for Axis2 on WAS, the documentation at
http://axis.apache.org/axis2/java/core/docs/app_server.html
("Avoiding conflicts with WebSphere's JAX-WS runtime") mentions some additional steps for Axis2. Is there something similar required to get this to work?
Thanks
Manglu
Not specific to Camel, but we are dealing with a similar issue.
1) You can use an isolated shared library if you want. This seems to keep the classloader from pulling in those other libraries via OSGi. This solution isn't for everyone.
2) When setting an app to PARENT_LAST, make sure you're doing it in the correct place. If you have jars inside your wars, then you'll have to set PARENT_LAST on the WAR modules, not on the application. In the WAS console: Applications > Application Type > WebSphere enterprise applications > MyEAR > Manage Modules > MyWar > change the "Class loader order" to "parent last".
3) After you make your changes, you can export the EAR file. In the EAR there is a file META-INF/ibmconfig/cells/defaultCell/applications/defaultApp/deployments/defaultApp/deployment.xml that holds this config. For Maven EARs, you can put this into src/main/application so it gets packaged up with the EAR.
Hope that this helps. Good luck.
We are discussing a similar topic on the Camel mailing lists. I suggest taking a look there: http://camel.465427.n5.nabble.com/camel-cxf-in-WebSphere-without-geronimo-jetty-depdendencies-possible-tp5726490.html
We run Restlet 2.1 in an OSGi environment (Equinox) as a bundle (i.e. not as a library within a bundle). The problem is that the Restlet engine does not detect helpers (like converters) that are provided by Restlet extensions. Specifically, the EngineClassLoader#getResources() call does not return any results. The extensions are also deployed as OSGi bundles in the target platform.
Is automatic converter registration actually supposed to work within OSGi environments?
In fact, Restlet supports such a feature thanks to a dedicated activator (see the Activator class in the package org.restlet.engine.internal).
This activator introspects bundles to find out the following things:
servers corresponding to registered servers
clients corresponding to registered clients
authenticators corresponding to registered clients
converters
Be aware that to use this feature, you must use the OSGi edition of Restlet, since it's the only one whose org.restlet bundle has the activator class specified in its MANIFEST file. Other than that, you don't have to care about the bundle loading order...
Hope it helps you.
Thierry
Unless the Restlet bundle explicitly imports the packages that contain the extensions (and I doubt it does, and it shouldn't), it won't be able to load them, because bundles have isolated class spaces.
A possible solution would be to provide the extensions as fragments attached to the Restlet bundle. Then, if you make Restlet use the bundle classloader (the documentation says this can be done by setting the Engine's classloader), it will be able to load classes from the fragments.
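As an illustration, a hypothetical manifest for such a fragment (the symbolic names are made up; Fragment-Host must match the actual symbolic name of the Restlet core bundle):

Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.restlet.ext.fragment
Bundle-Version: 1.0.0
Fragment-Host: org.restlet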
Indeed it doesn't quite work under OSGi, as it depends on the ability to see the entire class space.
The way to do this in OSGi would be to use the service registry for the extensions, but that only works for OSGi-aware libraries.
There is some help on the way: the recently released OSGi R5 specification includes the ServiceLoader Mediator, which provides support to 'bridge' META-INF/services (I don't know if Restlet uses those, though) onto OSGi services, so 'legacy' libraries should work well within OSGi.
There is an implementation in Apache Aries called Spi-Fly. I looked at it briefly a while back. It might do the trick for you, it might not.
As HBase is not yet available as an OSGi-fied bundle, I managed to create the bundle with the Maven Felix plugin (HBase 0.92 and the corresponding hadoop-core 1.0.0), and both bundles start up in OSGi :)
The hbase-default.xml is also added to the resulting bundle. When I open the resulting OSGi jar, the structure looks like this:
org/
META-INF
hbase-default.xml
This was achieved with <Include-Resource>#${pkgArtifactId}-${pkgVersion}.jar!/hbase-default.xml</Include-Resource>
The problem comes up when I actually want to connect to HBase: hbase-default.xml cannot be found, and thus I cannot create a configuration.
The HBase OSGi bundle is used from within another OSGi bundle that obtains the HBase connection and queries the database. That OSGi bundle is in turn used by an RCP application.
My question is: where do I have to put hbase-default.xml so that it is found when the bundle is started? Or why does it not see that the file exists?
Thank you for any hints.
-- edit
I found a decompiler so I could view the source where the configuration is loaded (hadoop-core does not provide any sources via Maven). I now see that the thread's context classloader is used (and, if that is not available, the classloader of the Configuration class itself). So it seems it cannot find the resource, although according to the description it should also check the parents (but who is the parent in an OSGi environment?).
I tested getting the resource from the OSGi bundle that should use HBase, where I added hbase-default.xml to the created jar file (see above), and there I do get the resource when I use the thread's context classloader. When I explored the code a bit more, I realized that there is no way to set the classloader for the HBaseConfiguration: it would be possible to set a classloader on a plain Hadoop Configuration, which HBaseConfiguration inherits from, but the creation procedure of HBaseConfiguration does not allow it, as it simply creates a new object within its create() method.
I really hope you have some idea how to get this up and running :)
Thread.currentThread().setContextClassLoader(HBaseConfiguration.class.getClassLoader());
Make sure the HBaseConfiguration class is loaded in your OSGi bundle. HBase uses the thread context classloader in order to load resources (hbase-default.xml and hbase-site.xml). Setting the TCCL will allow you to load the defaults and override them later.
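A minimal sketch of how that might look from the calling bundle; HBaseConfiguration.create() and the TCCL calls are standard API, the save/restore around them is just my suggestion:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

ClassLoader previous = Thread.currentThread().getContextClassLoader();
try {
    // Let Hadoop/HBase resolve hbase-default.xml and hbase-site.xml through the classloader
    // of the bundle that actually contains HBaseConfiguration and the XML resources.
    Thread.currentThread().setContextClassLoader(HBaseConfiguration.class.getClassLoader());
    Configuration conf = HBaseConfiguration.create();
    // ... create the HBase connection with conf here ...
} finally {
    // Restore the original context classloader so other code on this thread is unaffected.
    Thread.currentThread().setContextClassLoader(previous);
}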
If hbase-default.xml is in a jar file that is on the classpath, that file can normally be found by the Java program.
I have read the hbase mailing list.
Check your pom.xml: in the 'process-resources' phase, the '###VERSION###' placeholder in hbase-default.xml is replaced with the actual version string. However, if that plugin execution's configuration element is named 'target' instead of 'tasks', the replacement does not occur.
You could have a look at your pom.xml and correct the element name if so.
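For reference, a reconstructed maven-antrun-plugin execution (not copied from the actual HBase pom) showing where that element sits; the element name must match what your antrun plugin version expects (tasks for older versions, target for newer ones), otherwise the replacement simply never runs:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>process-resources</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <!-- must be <tasks> or <target> depending on the plugin version in use -->
        <tasks>
          <replace file="${project.build.outputDirectory}/hbase-default.xml"
                   token="###VERSION###" value="${project.version}"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>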
I faced this issue and actually fixed it by putting hbase-site.xml in the bundle from which I was calling HBase. I found the advice here:
Using this component in OSGi: This component is fully functional in an OSGi environment; however, it requires some actions from the user. Hadoop uses the thread context class loader in order to load resources. Usually, the thread context classloader will be the bundle class loader of the bundle that contains the routes. So the default configuration files need to be visible from the bundle class loader. A typical way to deal with this is to keep a copy of core-default.xml in your bundle root. That file can be found in hadoop-common.jar.
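In maven-bundle-plugin / bnd terms that boils down to something like the following Include-Resource instruction (the source paths are hypothetical; point them at wherever the XML files live in your project):

Include-Resource: core-default.xml=src/main/resources/core-default.xml, hbase-site.xml=src/main/resources/hbase-site.xml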