OSGi ClassNotFoundException while loading Database Driver

I know this is a common problem, but I am creating a new thread just in case someone is stuck like me even after trying some of the options already explained in the documentation.
I have deployed an OSGi bundle (say Bundle B) which has all the code related to data layer access. When a service method from this bundle is accessed, it creates a JDBC connection the very first time only, by loading the driver. The driver is deployed as another wrapped bundle of sqljdbc 4.0 (say Bundle C).
Now I have a TestApp bundle (say Bundle A) which calls the above-mentioned service method from Bundle B. So the flow here is: Bundle A code gets an instance of Bundle B's exported service, which in turn accesses Bundle C's Driver class.
Caused by: java.lang.ClassNotFoundException: com.microsoft.sqlserver.jdbc.SQLServerDriver not found by DataServices [417]
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1556)
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:77)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1993)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
The per-bundle classloader is causing the problem here, but how to overcome it is the question for me. I tried adding DynamicImport-Package to bundles A and B, and tried creating a fragment bundle as explained here: https://gist.github.com/rotty3000/1291842, but the same exception continues.
If anyone has come across such an issue, it would be a great help if you could give me a step-by-step approach to get rid of it.
Thank you in advance, and Happy New Year to all.

The Driver approach does not work well in OSGi. Instead, use a DataSource.
See the example at MSDN. You can simply create a DataSource using new, which avoids any classloading problems.
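A minimal sketch of that approach, assuming the wrapped sqljdbc bundle exports com.microsoft.sqlserver.jdbc and the consuming bundle imports it (host, database, and credentials are placeholders):

import java.sql.Connection;
import java.sql.SQLException;

import com.microsoft.sqlserver.jdbc.SQLServerDataSource;

public class ConnectionProvider {

    private final SQLServerDataSource dataSource;

    public ConnectionProvider() {
        // Created with new, so no Class.forName and no driver classloading issues
        dataSource = new SQLServerDataSource();
        dataSource.setServerName("dbhost");
        dataSource.setPortNumber(1433);
        dataSource.setDatabaseName("mydb");
        dataSource.setUser("appuser");
        dataSource.setPassword("secret");
    }

    public Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }
}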
If you want to decouple from the actual DB provider, you can use the OSGi JDBC spec, which defines that DB providers should expose a DataSourceFactory that can create a DataSource for you.
Pax-jdbc supports this spec for a wide range of DB providers. It also has a pax-jdbc-config module that allows you to create a DataSource as an OSGi service using just a config. So in your code you can simply inject the service and you are done. Additionally, pax-jdbc-config can take care of pooling and making the DataSource XA-aware, so it produces a fully production-ready DataSource.
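A minimal sketch of consuming such a DataSource service with Declarative Services (the service class and the dataSourceName filter value are placeholders and must match whatever you put in your pax-jdbc-config configuration):

import java.sql.Connection;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = DataAccessService.class)
public class DataAccessService {

    // pax-jdbc-config registers the DataSource as an OSGi service;
    // the target filter picks the one configured with dataSourceName=mydb
    @Reference(target = "(dataSourceName=mydb)")
    private DataSource dataSource;

    public void doWork() throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            // run JDBC statements as usual; no driver loading needed
        }
    }
}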

Related

java.util.ServiceConfigurationError Provider not a subtype while using OSGi bundle

I'm creating a Liferay 7.1 OSGi bundle, which has some external dependencies in it. In consideration of time, we opted to embed the external JAR in our OSGi Bundle. I've managed to create a bnd file, which includes all of the ElasticSearch dependencies, and put them on the bundle classpath. I've used the source-code from github (https://github.com/liferay/liferay-portal/blob/master/modules/apps/portal-search-elasticsearch6/portal-search-elasticsearch6-impl/build.gradle) and the bnd.bnd file, to check what's imported.
When activating the bundle, an exception is thrown:
The activate method has thrown an exception
java.util.ServiceConfigurationError: org.elasticsearch.common.xcontent.XContentBuilderExtension: Provider org.elasticsearch.common.xcontent.XContentElasticsearchExtension not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.elasticsearch.common.xcontent.XContentBuilder.<clinit>(XContentBuilder.java:118)
at org.elasticsearch.common.settings.Setting.arrayToParsableString(Setting.java:1257)
The XContentBuilderExtension is from elasticsearch-x-content-6.5.0.jar; the XContentElasticsearchExtension class is included in elasticsearch-6.5.0.jar. Both are Included Resources and have been put on the classpath.
The Activate-method initializes a TransportClient in my other jar, hence it happens on activation ;).
Edit:
I've noticed that this error does NOT occur when installing this for the first time, or when the portal restarts. It only occurs when I uninstall and reinstall the bundle. (And that is functionality I really prefer to have!) Maybe a stupid thought, but could it be that there is some 'hanging thread'? That the bundle is not correctly installed, or that the TransportClient is still alive? I'm checking this out. Any hints are welcome!
Edit 2:
I fear this is an incompatibility between SPI and OSGi. I've checked: the High Level REST Client has the same issue (but then with another extension). I'm going to try the Low-Level REST Client; this should work, as there are minimal dependencies, I'm guessing. I'm still very curious why the incompatibility is there. I'm certainly no expert on OSGi, nor on SPI. (Time to learn new stuff!)
This seems like a case where OSGi uses your bundle to resolve a dependency from another bundle, probably one that used your bundle to resolve a package when the system started.
Looking at the symptoms: it does not occur on boot or restart, and the error is "not a subtype".
When OSGi uses that bundle to resolve a dependency, it keeps a copy around, even when you remove the bundle. When the bundle comes back, a package that was previously used by another bundle may still be around, and you can end up with a class that exists in two versions from different classloaders, meaning they are not the same class and therefore not a subtype.
Expose only what is necessary to minimize the effects of this. Import only what needs importing. If you are using the Liferay Gradle configuration to embed the bundle, stop: it is a terrible way to include it, as it exposes a lot. If you are using the bnd file to include a resource and create an entry for the additional classpath location, do not expose it if that is not necessary. If you have several bundles using one as a dependency, be sure about the versions they use and whether they exchange objects of the problematic class; if they do, extra care is required.
PS: you can include attributes when exporting and/or importing in order to be more specific and avoid using packages from the wrong origin.
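For example, a hedged manifest sketch (the version range and bundle symbolic name are placeholders) that pins an import to a specific exporting bundle:

Import-Package: org.elasticsearch.common.xcontent;version="[6.5,7)";bundle-symbolic-name="my.elasticsearch.wrapper"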
You can have two Elasticsearch connections inside one Java app, and Liferay by default does not expose the connection that it holds.
A way around it is to rebuild the Liferay ES connector. It's not a big deal, because you don't need to change the code, only the OSGi descriptor, to expose more services.
I did it in one POC project and it worked fine. The tricky thing is to rebuild the Liferay jar, but that was explained by Pettry in his Google-like-search blog posts: https://community.liferay.com/blogs/-/blogs/creating-a-google-like-search (it is a series, but it's kind of hard to navigate in the new Liferay blogs; Google will probably help). Either way, it is all nicely documented here: https://github.com/peerkar/liferay-gsearch
The only thing that then needs to be done is to add org.elasticsearch.* to the export section of the bnd.bnd file. You will then be able to work with the native Elasticsearch API.
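In the bnd.bnd that would look roughly like this (a sketch only; wildcard exports are expanded by bnd, and you may want to narrow the list to the packages you actually need):

Export-Package: \
    org.elasticsearch.*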

WSO2 Identity Server - Custom JDBC User Store Manager - JDBC Pools

WSO2 Identity Server 5.0.0 (and some patches ;))
It does not appear that custom JDBC user store managers (children of JDBCUserStoreManager) use a JDBC pool. I'm noticing that I end up with session closed errors and SQL exceptions, whereas the Identity Server itself keeps operating OK with its separate database connection (a configured pool).
So I guess I have two questions about this:
Somewhere up the chain, is there a JDBC pool for the JDBCUserStoreManager? If so, is there a way to configure it more robustly?
Can I create another JDBC datasource in master-datasources.xml which my custom JDBC user store manager could reference?
Instead of using your own datasources/connections, you can import Carbon Datasources and use those (they come with inbuilt pooling, so there is no need to worry about any configuration). You can either access them programmatically by directly calling the ndatasource component or access them via JNDI.
To access them directly from ndatasource component:
The dependency:
<dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>org.wso2.carbon.ndatasource.core</artifactId>
    <version>add_correct_version_here</version>
</dependency>
(You can check repository/components/plugins to find out the correct version for above dependency)
You can inject DataSourceService as in the sketch below (the @scr.reference tag marks the service you need injected; the Maven SCR plugin parses these tags when building the bundle).
Note that when you follow this approach you'll have to build the jar as an OSGi bundle, as it uses declarative services (and place it in repository/components/dropins). Otherwise the dependencies won't be injected at runtime.
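A minimal sketch of such a component, using the legacy SCR javadoc tags (the component name, class name, and method names are illustrative, not from the original post):

import org.wso2.carbon.ndatasource.core.DataSourceService;

/**
 * @scr.component name="custom.userstore.datasource.component" immediate="true"
 * @scr.reference name="datasource.service"
 *                interface="org.wso2.carbon.ndatasource.core.DataSourceService"
 *                cardinality="1..1" policy="dynamic"
 *                bind="setDataSourceService" unbind="unsetDataSourceService"
 */
public class DataSourceServiceComponent {

    private static DataSourceService dataSourceService;

    protected void setDataSourceService(DataSourceService service) {
        dataSourceService = service;
    }

    protected void unsetDataSourceService(DataSourceService service) {
        dataSourceService = null;
    }

    public static DataSourceService getDataSourceService() {
        return dataSourceService;
    }
}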
Next, you can access all the data sources as:
List<CarbonDataSource> dataSources = dataSourceService.getAllDataSources();
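If you prefer the JNDI route instead, a plain lookup with javax.naming.InitialContext also works; the JNDI name below is a placeholder that would correspond to the jndiConfig name of the datasource in master-datasources.xml:

DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyCarbonDS");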
Rajeev's answer was really insightful and helped with investigating and evaluating what I should do. But, I didn't end up using that route. :)
I ended up looking through the Identity Server and Carbon source code and found out that the JDBCUserStoreManager does end up creating a JDBC pool configured by the properties you set for that manager. I had a class called CustomUserStoreConstants for my custom user store manager which had setMandatoryProperty called by default to set:
JDBCRealmConstants.DRIVER_NAME
JDBCRealmConstants.URL
JDBCRealmConstants.USER_NAME
JDBCRealmConstants.PASSWORD
So the pool was configured with these values, BUT that was it...nothing else. So no wonder it wasn't surviving the night!
It turned out that if the code setting this up found a value for JDBCRealmConstants.DATASOURCE in the config params, it would simply load that datasource and ignore any other params set. Seeing that, I got rid of the 4 params listed above and forced my custom user store to only allow a DATASOURCE property, which I set in code to the default JNDI name I would always give that datasource. With that, I was able to configure my JDBC pool for this datasource with all params such as testOnBorrow, validationQuery, validationInterval, etc. in master-datasources.xml. Now the only thing that would ever need to change is the datasource's configuration in that file.
The other reason I went with the datasource in master-datasources.xml is that I didn't have to decide in my custom user store's code which parameters to support, and could just manage it all in the xml file. This really has advantages for portability of configs and for IT involvement in deployments and debugging. I already have other datasources in this file for the IS deployment.
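For illustration, the datasource entry that ends up carrying the pool settings has roughly this shape (names, URL, and credentials are placeholders; element names follow the usual master-datasources.xml layout):

<datasource>
    <name>CUSTOM_USER_STORE_DB</name>
    <jndiConfig>
        <name>jdbc/CustomUserStoreDS</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:sqlserver://dbhost:1433;databaseName=users</url>
            <username>appuser</username>
            <password>secret</password>
            <driverClassName>com.microsoft.sqlserver.jdbc.SQLServerDriver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>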
All said, my user store is now living through the night and weekends. :)

Is it possible to integrate OSGi with Spring Data?

I'm currently working on an OSGi application running under apache Karaf that uses JPA and QueryDSL.
I was wondering if I could use Spring Data with QueryDSL instead of the current approach.
The reason for this is that I find Spring repositories to be quite useful and having a template for NoSQL database accesses might be useful in the future.
I have tried to start a normal Spring application without a web context under OSGi, but I get a ClassNotFoundException when it tries to load the applicationContext.xml or the ApplicationContext.class.
I don't want to use Spring DM since it is discontinued.
Basically the sole reason for wanting to try this integration is the Spring repositories, but if you think this is not necessary please tell me. Any information on how to achieve this, or on whether it is worth pursuing, would be more than welcome.
Thank you
Update
I've managed to make spring work by starting the application context with org.eclipse.gemini.blueprint.context.support.OsgiBundleXmlApplicationContext. The applicationContext is exported in OSGi as a service and I can get all the beans that I need by calling it.
The problem I'm having right now is that when I declare <jpa:repositories base-package="x.y.z" /> I get the following exception:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0' defined in URL [bundle://251.13:0/META-INF/spring/applicationContext.xml]: Initialization of bean failed; nested exception is java.lang.IllegalStateException: No persistence exception translators found in bean factory. Cannot perform exception translation.
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527)[185:org.springframework.beans:3.1.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)[185:org.springframework.beans:3.1.4.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)[185:org.springframework.beans:3.1.4.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)[185:org.springframework.beans:3.1.4.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)[185:org.springframework.beans:3.1.4.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)[185:org.springframework.beans:3.1.4.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1109)[187:org.springframework.context:3.1.4.RELEASE]
at org.eclipse.gemini.blueprint.context.support.AbstractDelegatedExecutionApplicationContext.registerBeanPostProcessors(AbstractDelegatedExecutionApplicationContext.java:502)[193:org.eclipse.gemini.blueprint.core:1.0.0.RELEASE]
at org.eclipse.gemini.blueprint.context.support.AbstractDelegatedExecutionApplicationContext.registerBeanPostProcessors(AbstractDelegatedExecutionApplicationContext.java:451)[193:org.eclipse.gemini.blueprint.core:1.0.0.RELEASE]
at org.eclipse.gemini.blueprint.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:306)[193:org.eclipse.gemini.blueprint.core:1.0.0.RELEASE]
at org.eclipse.gemini.blueprint.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85)[193:org.eclipse.gemini.blueprint.core:1.0.0.RELEASE]
at org.eclipse.gemini.blueprint.context.support.AbstractDelegatedExecutionApplicationContext.completeRefresh(AbstractDelegatedExecutionApplicationContext.java:290)[193:org.eclipse.gemini.blueprint.core:1.0.0.RELEASE]
at org.eclipse.gemini.blueprint.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor$CompleteRefreshTask.run(DependencyWaiterApplicationContextExecutor.java:137)[194:org.eclipse.gemini.blueprint.extender:1.0.0.RELEASE]
at java.lang.Thread.run(Thread.java:662)[:1.6.0_37]
Caused by: java.lang.IllegalStateException: No persistence exception translators found in bean factory. Cannot perform exception translation.
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.detectPersistenceExceptionTranslators(PersistenceExceptionTranslationInterceptor.java:142)[195:org.springframework.transaction:3.1.4.RELEASE]
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.<init>(PersistenceExceptionTranslationInterceptor.java:79)[195:org.springframework.transaction:3.1.4.RELEASE]
at org.springframework.dao.annotation.PersistenceExceptionTranslationAdvisor.<init>(PersistenceExceptionTranslationAdvisor.java:70)[195:org.springframework.transaction:3.1.4.RELEASE]
at org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor.setBeanFactory(PersistenceExceptionTranslationPostProcessor.java:103)[195:org.springframework.transaction:3.1.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeAwareMethods(AbstractAutowireCapableBeanFactory.java:1475)[185:org.springframework.beans:3.1.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1443)[185:org.springframework.beans:3.1.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)[185:org.springframework.beans:3.1.4.RELEASE]
As a JPA provider I'm using OpenJPA. The entityManagerFactory is a service which I can get via Blueprint. I think I need to reference it in <jpa:repositories base-package="x.y.z" />, but how do I do that, since the applicationContext.xml is read by Spring and not by Blueprint?
I would really appreciate any hint in the right direction.
Thank you
Use Querydsl-SQL directly in your code (see the sketch after this list) and
it will work well within OSGi, as it does not use classloading, weaving, enhancing, caching and other tricks that sound really good but cause chaos
your code will run much faster than with any of the "cache-enhanced" JPA engines
others will be able to understand your code (unlike JPA Criteria API queries)
you will know exactly what SQL commands run on the database server, which minimizes problem-solving time
your code will be as database independent as with any ORM tool
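To illustrate the direct Querydsl-SQL style (a sketch only, using the current com.querydsl API; the SQLServer2012Templates choice and the QEmployee query type are assumptions, with QEmployee generated by the Querydsl code generator from an existing table):

import java.util.List;

import javax.sql.DataSource;

import com.querydsl.sql.Configuration;
import com.querydsl.sql.SQLQueryFactory;
import com.querydsl.sql.SQLServer2012Templates;

public class EmployeeQueries {

    private final SQLQueryFactory queryFactory;

    public EmployeeQueries(DataSource dataSource) {
        // Plain JDBC underneath: no weaving, enhancing, or entity caching
        queryFactory = new SQLQueryFactory(new Configuration(new SQLServer2012Templates()), dataSource);
    }

    public List<String> namesOfWellPaid(int minSalary) {
        QEmployee e = QEmployee.employee; // generated query type
        return queryFactory.select(e.firstName)
                .from(e)
                .where(e.salary.gt(minSalary))
                .fetch();
    }
}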
Do not use Spring, Spring Data, JPA and other monolithic technologies together with OSGi, as
they were designed to work within monolithic systems where everything is in one application context, not in separate bundles
by using these technologies together with OSGi you will spend most of your time fixing bugs like this and looking for workarounds
People who argue with this have already spent lots of time finding such workarounds. They managed to implement some business logic. They hope that now they have truly found workarounds for every conceptual issue and they will not have to spend the same amount of work next time. They are in a bidding fee auction. Be honest, guys! Somewhere deep down you know I am right ;-).
I am saying this with the experience that I
tried the "perfect" stack based on Hibernate and IBM's "Don't repeat the DAO" article (well before the Spring Data hype began). Twice.
wrote a hibernate-osgi-adapter for Hibernate 4.1.x
re-implemented the complete JPA chapter of the OSGi Enterprise specification
Well, you have a couple of choices here. Try to get it to run with Blueprint (probably the hardest, since you need to call Spring beans, but I think it could still be done); use Karaf 3.0.0.RC1, which also supports Blueprint Gemini and has tighter support for Spring; or, last but not least, use Spring DM. Even though it is discontinued you can still use it, and probably the best approach is to use Spring DM for certain Spring-specific parts and standard Blueprint for the rest. Because you just use services through both frameworks, everything will work; just don't mix the Spring and Blueprint descriptors in one bundle.

Trying to get a Camel sample using CXF working on WebSphere Application Server

I am trying to get an Apache Camel app using CXF working on WebSphere.
I noticed a number of errors:
Caused by: java.lang.IncompatibleClassChangeError: org.apache.neethi.AssertionBuilderFactory
at java.lang.ClassLoader.defineClassImpl(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:262)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:69)
This occurs because the org.apache.neethi classes are loaded from the WAS runtime instead of the neethi-3.0.2.jar in WEB-INF/lib.
Here is the info from the class loader:
class load: org.apache.neethi.builders.AssertionBuilder from: file:/D:/Tools/WebSphere/AppServer/plugins/org.apache.axis2.jar
The web application's class loader policy has been changed and is set to Parent Last, yet this class seems to be loaded Parent First.
Is there anything in the CXF package that overrides this policy?
I noticed that for Axis2 on WAS,
http://axis.apache.org/axis2/java/core/docs/app_server.html
("Avoiding conflicts with WebSphere's JAX-WS runtime") mentions some additional steps. Is there something similar that is required to get this to work?
Thanks
Manglu
Not specific to Camel, but we are dealing with a similar issue.
1) You can use an isolated shared library if you want. This seems to stop the classloader from pulling in these other libraries via OSGi. This solution isn't for everyone.
2) When setting an app to PARENT_LAST, make sure you're doing it in the correct place. If you have jars inside your wars, then you'll have to set PARENT_LAST on the WAR modules, not the application. In the WAS console: Applications > Application Type > WebSphere enterprise applications > MyEAR > Manage Modules > MyWar > change the "Class loader order" and make it "parent last".
3) After you make your changes, you can export the EAR file. In the EAR there is a file META-INF/ibmconfig/cells/defaultCell/applications/defaultApp/deployments/defaultApp/deployment.xml that has this config. For Maven EARs, you can put this into src/main/application so that it is packaged up with the EAR.
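From memory, the relevant fragment of that deployment.xml looks roughly like the following; treat it as an illustration only, since ids, namespaces, and attribute names can vary between WAS versions:

<appdeployment:Deployment xmi:version="2.0" xmi:id="Deployment_1">
  <deployedObject xmi:type="appdeployment:ApplicationDeployment" xmi:id="ApplicationDeployment_1">
    <!-- PARENT_LAST goes on the WAR module, not the application -->
    <modules xmi:type="appdeployment:WebModuleDeployment" xmi:id="WebModuleDeployment_1"
             uri="MyWar.war" classloaderMode="PARENT_LAST"/>
    <classloader xmi:id="Classloader_1" mode="PARENT_FIRST"/>
  </deployedObject>
</appdeployment:Deployment>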
Hope that this helps. Good luck.
We are discussing a similar topic at the Camel mailing lists. I suggest to take a look there: http://camel.465427.n5.nabble.com/camel-cxf-in-WebSphere-without-geronimo-jetty-depdendencies-possible-tp5726490.html

Glassfish 2.1 EJB 3.0 Exposing local EJB to other applications running in the same domain/jvm

I have an existing project that I need to configure differently. This needs to happen without major code changes; I am actually hoping I can do it with configuration only. I have spent the past 2 to 3 days reading everything I can find on this issue. I understand the Glassfish classloaders and what is available to me.
I have a sample project that has an EJB which defines a @Local interface.
The EJB is deployed as an ejb-module into the Glassfish domain.
Now I am trying to find a way for another application, deployed as an ear into the same domain, to access that EJB via its local interface.
I have read documentation that says this is not possible.
Then I have seen posts here on StackOverflow, and others on the web, saying it is possible. But I cannot find the actual solution.
On investigation, I have realised that the @Local EJB does not register itself in JNDI (at least according to the logs); if I use the Glassfish JNDI browser, I also do not see it. So it makes sense to me that either it's not possible, or the deployment of the EJB project is at fault and I somehow need to expose it.
@Remote is a possibility, if it can be by reference with no performance overhead, but the preferred method allowing @Local EJB access is really the ultimate need.
Does anyone know what I would need to do to expose the @Local EJB to another application?
Or is this plainly not possible?
I am using Glassfish 2.1 with EJB 3.0.
If Glassfish 2.1 can handle EJB 3.1 I would be willing to move to it if it provided this capability, but I doubt it's that easy.
Please assist.
Thank you.
I am adding a bounty. To complete the bounty, it would be required to run two ear applications in the same domain, where A.ear contains an @Local EJB that is also used by the application in B.ear.
The link @Peter gave you almost solves your problem. (link)
One trick which needs to be done to solve @Xavier's problem is to provide common.jar to both ears in the same version (loaded by the same class loader). If you do that, the class cast exception will not be thrown and you will be able to use the EJB with a local interface.
To do it you need to put common.jar into the glassfish/domains/domain1/lib folder (substitute domain1 with your domain name). This way the jar will be loaded by Glassfish's shared class loader.
I made a quick test with Eclipse and Glassfish 3 with the following structure:
package com.example;
JarShared
- @Local interface Server
EarServer
- EjbServer
- @Stateless class ServerBean implements Server
EarClient
- EjbClient
- @Stateless @LocalBean class ClientBean
Lookup from ClientBean:
InitialContext ic = new InitialContext();
Server server = (Server) ic.lookup("java:global/EarServer/EjbServer/ServerBean!com.example.Server");
I did not get a ClassCastException and I could call a sample method on ServerBean.
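For completeness, a minimal sketch of the pieces above (the ping method and its body are illustrative; package declarations and one-class-per-file layout are omitted for brevity):

import javax.ejb.Local;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// JarShared (placed in the domain's lib folder), package com.example
@Local
public interface Server {
    String ping();
}

// EjbServer module inside EarServer
@Stateless
public class ServerBean implements Server {
    public String ping() {
        return "pong";
    }
}

// EjbClient module inside EarClient
@Stateless
@LocalBean
public class ClientBean {
    public String callServer() throws NamingException {
        InitialContext ic = new InitialContext();
        Server server = (Server) ic.lookup(
                "java:global/EarServer/EjbServer/ServerBean!com.example.Server");
        return server.ping();
    }
}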
Important notes:
both EarServer and EarClient should not contain JarShared in their lib folders; they should reuse the one in the domain's lib folder
remember to restart Glassfish after adding JarShared to it
to make both EJB projects compile you must add JarShared to their build paths, but nothing more
If you have questions, post a comment.
