Liberty: How to change default JPA provider? - websphere-liberty

In order to program against the JPA 2.1 API, I would like to use EclipseLink rather than the default OpenJPA provider. How can this be achieved in the WAS 8.5 Liberty profile?
I tried omitting the jpa-2.0 feature and setting up a shared library that is referenced by my webapp, but without success.
Here's my server.xml:
<server description="new server">

    <!-- Enable features -->
    <featureManager>
        <feature>jsp-2.2</feature>
        <feature>localConnector-1.0</feature>
    </featureManager>

    <httpEndpoint host="localhost" httpPort="9080" httpsPort="9443"
                  id="defaultHttpEndpoint" />

    <applicationMonitor updateTrigger="mbean" />

    <webApplication id="System" location="System.war" name="System">
        <classloader commonLibraryRef="mysql" />
        <classloader commonLibraryRef="eclipseLink" />
    </webApplication>

    <library id="mysql" name="mysql-jdbc-driver">
        <fileset dir="C:\Users\jacomac\.m2\repository\mysql\mysql-connector-java\5.1.26" includes="*.jar"/>
    </library>

    <library id="eclipseLink" name="eclipse-jpa-impl">
        <fileset dir="C:\Users\jacomac\.m2\repository\org\eclipse\persistence\eclipselink\2.5.2-M1" includes="*.jar"/>
        <fileset dir="C:\Users\jacomac\.m2\repository\org\eclipse\persistence\javax.persistence\2.1.0" includes="*.jar"/>
        <fileset dir="C:\Users\jacomac\.m2\repository\org\eclipse\persistence\commonj.sdo\2.1.1" includes="*.jar"/>
    </library>
</server>
This is the error I get:
java.lang.NoClassDefFoundError: javax/persistence/Persistence
One addition: I know it works if I bundle the EclipseLink libraries inside my webapp, but I would like to use them as a shared resource across multiple webapps.

Liberty 8.5.5.x only seems to support JPA 2.0, even though EclipseLink supports JPA 2.1.
To have Liberty use EclipseLink from a shared library, you need to set "parentLast" classloader delegation. My example uses an EAR, but you can also set a classloader for a WAR.
You should only include eclipselink.jar and let Liberty use its own implementation of javax.persistence.
server.xml:
<fileset dir="${shared.resource.dir}/EclipseLinkLibs" id="EclipseLinkFileset" includes="eclipselink.jar"/>
<library filesetRef="EclipseLinkFileset" id="EclipseLinkLib"/>

<enterpriseApplication id="myEAR" location="myEAR.ear" name="myEAR">
    <classloader delegation="parentLast" commonLibraryRef="EclipseLinkLib"/>
</enterpriseApplication>
You also need to set EclipseLink as the provider in persistence.xml:
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
Depending on your EclipseLink version and use of JTA you may also have to consider this bug and use the suggested workaround:
WebSphereTransactionController does not handle JTA on WebSphere 8.5
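Putting the answer together, a minimal persistence.xml might look like the following sketch (the unit name and data-source JNDI name are placeholders, not taken from the question):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="examplePU" transaction-type="JTA">
        <!-- select EclipseLink explicitly instead of the default OpenJPA -->
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <!-- placeholder data source name -->
        <jta-data-source>jdbc/exampleDS</jta-data-source>
    </persistence-unit>
</persistence>
```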

As of WebSphere Liberty 8.5.5.6, Java EE 7 support was introduced, which includes JPA 2.1. With the JPA 2.1 feature (jpa-2.1), the default JPA provider changed to EclipseLink 2.6.
OpenJPA is still available and remains the default provider with the jpa-2.0 feature. That feature is forward compatible with the other EE 7 features, to support those who do not want to move their JPA-enabled applications off OpenJPA and do not need the capabilities added by JPA 2.1.
Bear in mind that only one JPA feature can be enabled at a time, so using jpa-2.0 with EE 7 features requires enabling individual features rather than using the EE 7 convenience feature.
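As a sketch, the two alternatives in server.xml could look like this (webProfile-7.0 and servlet-3.1 are examples of a convenience feature and an individual feature, respectively):

```xml
<!-- Option 1: EE7 convenience feature; pulls in jpa-2.1, EclipseLink is the default -->
<featureManager>
    <feature>webProfile-7.0</feature>
</featureManager>

<!-- Option 2: keep OpenJPA by combining jpa-2.0 with individual EE7 features -->
<featureManager>
    <feature>servlet-3.1</feature>
    <feature>jpa-2.0</feature>
</featureManager>
```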

Related

WebLogic 12c (12.2.1.4) with Hibernate 5.4

I have an application deployed on WebLogic 12c (12.2.1.4) using Hibernate 5.2.18. The WebLogic 12c documentation states JPA 2.1 compatibility, while Hibernate 5.3+ requires JPA 2.2. Can I prepend the JPA 2.2 API to my startup classpath and use Hibernate 5.3+, or should I stick with Hibernate 5.2 for the time being?
Yes, this configuration is possible.
To avoid conflicts with WebLogic built-in JPA capabilities you should do the following:
According to this
In a full Java EE environment, consider obtaining your EntityManagerFactory from JNDI. Alternatively, specify a custom persistenceXmlLocation on your LocalContainerEntityManagerFactoryBean definition (for example, META-INF/my-persistence.xml) and include only a descriptor with that name in your application jar files. Because the Java EE server looks only for default META-INF/persistence.xml files, it ignores such custom persistence units and, hence, avoids conflicts with a Spring-driven JPA setup upfront.
You can use something like this in the Spring context config:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd">

    <!-- ... -->

    <jee:jndi-lookup id="DS" jndi-name="appDS" />

    <bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="persistenceXmlLocation" value="classpath:META-INF/app-persistence.xml" />
        <property name="dataSource" ref="DS" />
    </bean>

    <!-- ... -->
</beans>
According to this
To configure the FilteringClassLoader to specify that a certain package is loaded from an application, add a prefer-application-packages descriptor element to weblogic-application.xml which details the list of packages to be loaded from the application.
You should add the following snippet to your META-INF/weblogic-application.xml
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-application>
    <prefer-application-packages>
        <!-- ... -->
        <package-name>javax.persistence.*</package-name>
    </prefer-application-packages>
</weblogic-application>

Hazelcast with spring namespace - init the node when context is loaded

I have a Hazelcast instance defined using the Hazelcast namespace, with a map in it. I am also using the Spring cache abstraction to define a cacheManager.
<bean name="siteAdminPropertyPlaceHolderConfigurer"
      class="org.sample.SiteAdminPropertyPlaceHolderConfigurer">
    <property name="order" value="1000"/> <!-- last one -->
</bean>

<!-- hazelcast cache manager -->
<hz:hazelcast id="instance" lazy-init="true">
    <hz:config>
        <hz:group name="${HAZEL_GROUP_NAME}" password="${HAZEL_GROUP_PASSWORD}"/>
        <hz:network port="${HAZEL_NETWORK_PORT}" port-auto-increment="true">
            <hz:join>
                <hz:multicast enabled="${HAZEL_MULTICAST_ENABLED}"
                              multicast-group="224.2.2.3"
                              multicast-port="54327"/>
                <hz:tcp-ip enabled="${HAZEL_TCP_ENABLED}">
                    <hz:members>${HAZEL_TCP_MEMBERS}</hz:members>
                </hz:tcp-ip>
            </hz:join>
        </hz:network>
        <hz:map name="oauthClientDetailsCache"
                backup-count="1"
                max-size="0"
                eviction-percentage="30"
                read-backup-data="true"
                eviction-policy="NONE"
                merge-policy="com.hazelcast.map.merge.PassThroughMergePolicy"/>
    </hz:config>
</hz:hazelcast>

<bean id="hazelcastCacheManager" class="com.hazelcast.spring.cache.HazelcastCacheManager"
      lazy-init="true" depends-on="instance">
    <constructor-arg ref="instance"/>
</bean>
The problem is that this Spring context is also used by other tools we have besides the server; Hazelcast starts listening on its port and the tool never exits.
I tried disabling all network joins (enabled=false), intending to enable them programmatically only when the server starts, but that does not work: Hazelcast still starts.
I don't want to give up the Spring namespace, as it is very convenient for developers to define new maps (Spring caches), and I want as little Hazelcast code in there as possible.
Any idea how to achieve this?
Thanks,
Shlomi
I didn't find a way to do this except telling Hazelcast to shut down at the end of each tool run:
Hazelcast.shutdownAll();
I also moved the definition above to a separate XML context file so it would not be loaded by the tools (at least not all of them).
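A sketch of that split (the file names are hypothetical): keep the Hazelcast definitions in their own context file and import it only from the server's context, so tools that don't need caching never start Hazelcast:

```xml
<!-- server-context.xml: loaded by the server only -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- hazelcast-context.xml holds the <hz:hazelcast> and cache manager beans -->
    <import resource="classpath:hazelcast-context.xml"/>
</beans>
```

Tools that do end up loading the Hazelcast context should still call Hazelcast.shutdownAll() before exiting.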

Dependency version issue with Spring, Spring Neo4j and Neo4j

I am trying to set up a Java project which uses Spring Data Neo4j and Neo4j, but I am unable to get around the dependency issues. I am using Maven for dependency management and have tried several version combinations of Spring, Spring Data Neo4j, and Neo4j.
spring: 3.2.6.RELEASE
spring-data-neo4j: 3.0.0.RELEASE
neo4j: 2.0.1
application-context.xml file
<neo4j:config storeDirectory="data/graph.db" />
Error:
Caused by: org.neo4j.kernel.impl.storemigration.UpgradeNotAllowedByConfigurationException: Failed to start Neo4j with an older data store version. To enable automatic upgrade, please set configuration parameter "allow_store_upgrade=true"
at org.neo4j.kernel.impl.storemigration.ConfigMapUpgradeConfiguration.checkConfigurationAllowsAutomaticUpgrade(ConfigMapUpgradeConfiguration.java:39)
at org.neo4j.kernel.impl.storemigration.StoreUpgrader.attemptUpgrade(StoreUpgrader.java:71)
at org.neo4j.kernel.impl.nioneo.store.StoreFactory.tryToUpgradeStores(StoreFactory.java:144)
at org.neo4j.kernel.impl.nioneo.store.StoreFactory.newNeoStore(StoreFactory.java:119)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource.start(NeoStoreXaDataSource.java:323)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:503)
... 64 more
I have enabled allow_store_upgrade=true in my neo4j.properties file.
Your embedded Neo4j most likely doesn't pick up the neo4j.properties file (this documentation says you need to set the parameters manually).
Initialise your Neo4j like this:
<bean id="graphDatabaseService" class="org.neo4j.kernel.EmbeddedGraphDatabase"
      destroy-method="shutdown">
    <constructor-arg index="0" value="target/config-test"/>
    <!-- optionally pass in neo4j-config parameters to the graph database
    <constructor-arg index="1">
        <map>
            <entry key="allow_store_upgrade" value="true"/>
        </map>
    </constructor-arg>
    -->
</bean>

<neo4j:config graphDatabaseService="graphDatabaseService"/>
<neo4j:config graphDatabaseService="graphDatabaseService"/>
Source:
http://docs.spring.io/spring-data/data-neo4j/docs/3.0.1.RELEASE/reference/html/setup.html#d0e3597
I'm trying to use spring-data-neo4j 3.1.1.RELEASE and neo4j 2.1.2, and I think this is incomplete. At least with these versions, the map is not optional. Moreover, there is a third mandatory constructor argument of type Dependencies.
The problem is that I don't really know what this third parameter is, and moreover EmbeddedGraphDatabase and Dependencies are deprecated. Do you know the right way to start a webapp (with these versions) in embedded mode?

Unexpected problems with JBoss connection pooling

I've spent the last few days trying to locate the cause of a new problem that arose during development... and I've not found it yet. But I've found a workaround. Let's start with the problem itself.
We are using JBoss EAP 6.1.0.GA (AS 7.2.0.Final-redhat-8) as our application server for a quite large enterprise project. The JPA layer is handled by Hibernate Core {4.2.0.Final-redhat-1} using oracle.jdbc.OracleDriver (Version 11.2) connecting Oracle 11.2.0.3.0.
A few weeks ago everything worked as expected and we had no database related problems. We were using the following datasource:
<datasource jta="true" jndi-name="java:/myDS" pool-name="myDS" enabled="true" use-java-context="true" use-ccm="true">
    <connection-url>jdbc:oracle:thin:@192.168.0.93:1521:DEV</connection-url>
    <driver>oracle</driver>
    <transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
    <pool>
        <min-pool-size>1</min-pool-size>
        <max-pool-size>20</max-pool-size>
        <prefill>true</prefill>
        <use-strict-min>false</use-strict-min>
        <flush-strategy>FailingConnectionOnly</flush-strategy>
    </pool>
    <security>
        <user-name>MY_DB</user-name>
        <password>pass</password>
    </security>
</datasource>
Most of the time we had 5-10 open connections with 1-3 in use (single development environment)... the pool held that level and worked just fine.
But with some unknown changes to our code, that pool stopped working... it didn't release its connections anymore, and did not even re-use them at all! It took a few simple requests to fill the pool up to the maximum of 20 connections, and JPA refused any new database queries.
We've spent several days trying to find the relevant changes in our code... without success!
Today I discovered a workaround. We changed persistence.xml a little bit:
<persistence-unit name="myPU">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>java:/myDS</jta-data-source>
    <properties>
        <property name="jboss.entity.manager.factory.jndi.name" value="java:/myDSMF" />
        <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect" />
        <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup" />
        <property name="hibernate.default_batch_fetch_size" value="1000" />
        <property name="hibernate.jdbc.batch_size" value="0" />
        <property name="hibernate.connection.release_mode" value="after_statement" />
        <!-- <property name="hibernate.connection.release_mode" value="after_transaction" /> -->
        <property name="hibernate.connection.SetBigStringTryClob" value="true" />
    </properties>
</persistence-unit>
Changing hibernate.connection.release_mode from after_transaction to after_statement did the trick. But that setting has never been touched before. Now connections are released as expected and the pooling is usable again.
I don't get why after_transaction doesn't work anymore... because changes are committed. We see all these things in the database. And committing a transaction should end it - doesn't it?
Although I've found that simple workaround, I'd really like to understand the problem. I have no good feeling about delaying that knowledge until production time. So any feedback is very much appreciated! Thanks!
You are using JTA, and the after_transaction mode is never recommended for JTA transactions.
Here is the documentation from the JBoss site:
after_transaction - says to use ConnectionReleaseMode.AFTER_TRANSACTION. This setting should not be used in JTA environments. Also note that with ConnectionReleaseMode.AFTER_TRANSACTION, if a session is considered to be in auto-commit mode, connections will be released as if the release mode were AFTER_STATEMENT.
So you should either use auto or after_statement explicitly, to aggressively release the connection.
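Applied to the persistence.xml from the question, that means one of these two properties (only one at a time, of course):

```xml
<!-- let Hibernate choose the right mode for the (JTA) environment -->
<property name="hibernate.connection.release_mode" value="auto" />

<!-- or release connections aggressively after every statement -->
<property name="hibernate.connection.release_mode" value="after_statement" />
```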
References
Connection Release Modes.

Hibernate DDL database generation stopped when I use Maven

Previously, my Java web projects used the ordinary Eclipse structure, and at container startup (in this case, Tomcat) Hibernate generated the schema correctly.
Now I'm using the Maven infrastructure. I've relocated the needed files and configured everything (correctly, I think, because it all works: Spring starts, and Hibernate connects to the database when it was previously created and there is some data to fetch). I've tested all the CRUD operations and they work.
The problem is that Hibernate refuses to generate the schema (DDL) as it did with the ordinary Eclipse infrastructure.
Additional information:
My persistence.xml is almost empty (as always) because Spring's applicationContext.xml is starting it. I have not changed the file; it remains the same as before.
<!-- Location: src/main/resources/META-INF/persistence.xml -->
<persistence>
<persistence-unit name="jpa-persistence-unit" transaction-type="RESOURCE_LOCAL"/>
</persistence>
Part of the Spring configuration goes here (applicationContext.xml):
<!-- Location: src/main/webapp/WEB-INF/applicationContext.xml -->
<!-- ... -->
<bean id="jpaVendorAdapter" class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
    <property name="database" value="[DATABASE-NAME]" />
    <property name="showSql" value="true" />
    <property name="generateDdl" value="true" /> <!-- THIS CONFIGURATION WORKED PREVIOUSLY; NOW, WITH MAVEN, IT'S IGNORED -->
    <property name="databasePlatform" value="[DIALECT]" />
</bean>
<!-- ... -->
I'm not using any Maven Hibernate plugin, because I just want the default behavior that occurred earlier.
Did Maven invalidate this "generateDdl" property!? Why!? What should I do!? I can't find any solution.
I found the solution. Maven was not at fault at all.
Hibernate was not able to create my database because the "DIALECT" was wrong.
I remembered that I had changed the dialect from MySQL to MySQL-InnoDB. Hibernate was logging this problem, but I couldn't see it because the slf4j-simple dependency was not explicitly imported.
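For reference, the two fixes might look like this (the slf4j-simple version number is illustrative; pick a current release):

```xml
<!-- applicationContext.xml: a valid MySQL InnoDB dialect class -->
<property name="databasePlatform" value="org.hibernate.dialect.MySQL5InnoDBDialect" />

<!-- pom.xml: make Hibernate's log output visible -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.25</version>
</dependency>
```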
Thank you for your time, Shawn.
