Spring Integration FTP Inbound Channel Adapter in distributed mode

I am using the Spring Integration FTP Inbound Channel Adapter to read files from a remote FTP server. My problem is: will it be able to handle around 5 million files per day?
If I deploy my project war on 2 different servers in distributed mode, will that be a problem? The FTP Inbound Channel Adapter will be running on both servers, so both adapters will read the same files and process them twice.
Please help me set up this system in distributed mode.
EDIT:
I have deployed my Spring Integration project war on 2 servers. It uses the FTP Inbound Channel Adapter, and on both servers the adapter's remote location points to the same FTP directory. When I start both servers, both start transferring the same files and generate messages for them multiple times. I am using the Redis Metadata Store as per Gary's suggestion.
My FTP Inbound Channel Adapter configuration on both servers looks like this:
<bean id="redisMessageStore" class="org.springframework.integration.redis.store.RedisMessageStore">
<constructor-arg ref="redisConnectionFactory"/>
</bean>
<bean name="metadataStore" class="org.springframework.integration.redis.metadata.RedisMetadataStore">
<constructor-arg name="connectionFactory" ref="redisConnectionFactory"/>
</bean>
<bean id="fileSystemPersistantFilter" class="org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter">
<constructor-arg name="store" ref="metadataStore"/> <constructor-arg name="prefix" value="" />
</bean>
<bean id="ftpPersistantFilter" class="org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter">
<constructor-arg name="store" ref="metadataStore"/> <constructor-arg name="prefix" value="" />
</bean>
<int-ftp:inbound-channel-adapter id="ftpInboundAdapter"
session-factory="ftpClientFactory" channel="ftpChannel"
filter="ftpPersistantFilter"
local-filter="fileSystemPersistantFilter" delete-remote-files="false"
remote-directory="${ftp.remote_directory}/test/" local-directory="${ftp.local_directory}/test/"
temporary-file-suffix=".writing" auto-create-local-directory="true">
<int:poller fixed-rate="1000" max-messages-per-poll="-1" />
</int-ftp:inbound-channel-adapter>
The output log of the 1st server is:
19-Feb-2016 10:34:41.634 INFO [task-scheduler-1] org.springframework.integration.file.FileReadingMessageSource.receive Created message: [GenericMessage [payload=/home/harsh/test/test_input_file1.txt, headers={id=1793c207-2d8a-542c-c5a7-eac9165e4cc5, timestamp=1455858281634}]]
19-Feb-2016 10:34:42.886 INFO [task-scheduler-4] org.springframework.integration.file.FileReadingMessageSource.receive Created message: [GenericMessage [payload=/home/harsh/test/test_input_file1.txt, headers={id=c909b6cc-9f78-2f6f-2a27-036f0186b959, timestamp=1455858282886}]]
File /home/harsh/test/test_input_file1.txt transformed by 1st war 1793c207-2d8a-542c-c5a7-eac9165e4cc5
File /home/harsh/test/test_input_file1.txt transformed by 1st war c909b6cc-9f78-2f6f-2a27-036f0186b959
19-Feb-2016 10:34:47.892 INFO [task-scheduler-4] org.springframework.integration.file.FileReadingMessageSource.receive Created message: [GenericMessage [payload=/home/harsh/test/test_input_file1.txt, headers={id=8c5c8941-fbfd-91d8-9a25-75d46e450930, timestamp=1455858287892}]]
19-Feb-2016 10:34:49.325 INFO [task-scheduler-2] org.springframework.integration.file.FileReadingMessageSource.receive Created message: [GenericMessage [payload=/home/harsh/test/test_input_file1.txt, headers={id=dbdddd0f-1ac5-0753-8873-f0f9c77cb48b, timestamp=1455858289325}]]
Service Activator /home/harsh/test/test_input_file1.txt 1st war 24632436-d297-db0c-c9ea-ac596c57a91e
19-Feb-2016 10:34:50.372 INFO [task-scheduler-2] org.springframework.integration.file.FileReadingMessageSource.receive Created message: [GenericMessage [payload=/home/harsh/test/test_input_file1.txt, headers={id=5cc843ae-c1d7-814f-b9fd-a7c5c2515674, timestamp=1455858290372}]]
19-Feb-2016 10:34:51.759 INFO [task-scheduler-2] org.springframework.integration.file.FileReadingMessageSource.receive Created message: [GenericMessage [payload=/home/harsh/test/test_input_file1.txt, headers={id=428ba015-e2f3-6948-fc13-ca0df31ee9c0, timestamp=1455858291759}]]
19-Feb-2016 10:34:53.670 INFO [task-scheduler-2] org.springframework.integration.file.FileReadingMessageSource.receive Created message: [GenericMessage [payload=/home/harsh/test/test_input_file1.txt, headers={id=ac1fca37-838f-39fc-f9ed-cc373f8f8b12, timestamp=1455858293670}]]
19-Feb-2016 10:34:55.648 INFO [task-scheduler-8] org.springframework.integration.file.FileReadingMessageSource.receive Created message: [GenericMessage [payload=/home/harsh/test/test_input_file1.txt, headers={id=f9197ec2-e73a-19be-e94b-94bffe515569, timestamp=1455858295647}]]
File /home/harsh/test/test_input_file1.txt transformed by 1st war 45718961-2a99-d368-d88a-9bc2ceb955cd
The 2nd server generates the same log with different message ids.
Am I missing something here?
Do I need to write a custom filter for this?

My problem is: will it be able to handle around 5 million files per day?
That depends on the size of the files and the bandwidth of the network; the use of Spring Integration is unlikely to be a factor.
You should probably remove files locally after processing, though, to avoid large directory scans.
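For example, a minimal sketch of a downstream endpoint that cleans up the local copy (the FileProcessor class and its processing logic are hypothetical):

import java.io.File;
import org.springframework.integration.annotation.ServiceActivator;

public class FileProcessor {

    @ServiceActivator(inputChannel = "ftpChannel")
    public void handle(File file) {
        try {
            // ... process the file (hypothetical business logic) ...
        }
        finally {
            file.delete(); // remove the local copy so directory scans stay small
        }
    }
}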
To avoid duplicates in a cluster, you need to use an FtpPersistentAcceptOnceFileListFilter backed by a shared metadata store so that each instance will skip files handled by other instances.
See the documentation for more information.
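For reference, a minimal programmatic sketch of that wiring, equivalent to the XML configuration in the question (the FilterConfig class is illustrative; a RedisConnectionFactory bean is assumed):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter;
import org.springframework.integration.redis.metadata.RedisMetadataStore;

@Configuration
public class FilterConfig {

    @Bean
    public FtpPersistentAcceptOnceFileListFilter ftpPersistantFilter(RedisConnectionFactory cf) {
        // Same accept-once semantics as the XML: one Redis-backed store shared by all instances
        return new FtpPersistentAcceptOnceFileListFilter(new RedisMetadataStore(cf), "");
    }
}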
EDIT:
I just tested with your configuration and see no problems. Are you sure both instances are using the same Redis server?
If you run redis-cli and then monitor, you should see something like:
1459258131.934949 [0 127.0.0.1:55237] "HSETNX" "MetaData" "bar.txt" "1384837200000"
1459258131.935129 [0 127.0.0.1:55237] "HSETNX" "MetaData" "baz.txt" "1384837200000"
1459258131.940125 [0 127.0.0.1:55237] "HSETNX" "MetaData" "/tmp/test/bar.txt" "1459258131000"
1459258131.940353 [0 127.0.0.1:55237] "HSETNX" "MetaData" "/tmp/test/baz.txt" "1459258131000"
In this case, the remote directory had 2 files; the first two lines are from the remote filter, the last two are from the local filter (setting the initial values).
You should then see a bunch of
1459258142.073316 [0 127.0.0.1:55237] "HSETNX" "MetaData" "bar.txt" "1384837200000"
1459258142.073506 [0 127.0.0.1:55237] "HGET" "MetaData" "bar.txt"
(once per poll, checking whether the timestamp has changed; HSETNX only sets the hash field if it does not already exist, which is what makes the accept-once check safe across instances).

Related

In a 3-node NiFi cluster, files get duplicated when 1 node is restarted

Suppose I have 3 nodes in the cluster:
Node A
Node B
Node C
In the state-management.xml file I have the configuration below:
<cluster-provider>
<id>zk-provider</id>
<class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
<property name="Connect String">192.168.0.10:2181,192.168.0.11:2181,192.168.0.12:2181</property>
<property name="Root Node">/nifi</property>
<property name="Session Timeout">10 seconds</property>
<property name="Access Control">Open</property>
</cluster-provider>
I am using an external ZooKeeper with the zoo.cfg configuration below:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=./zookeeper
clientPort=2181
autopurge.snapRetainCount=30
autopurge.purgeInterval=2
quorumListenOnAllIPs=true
admin.serverPort=2515
server.1=192.168.0.10:2666:3666
server.2=192.168.0.11:2666:3666
server.3=192.168.0.12:2666:3666
All nodes are up and running. If we are processing 10000 files and any node in the cluster is restarted in the middle of processing, files get processed in duplicate: at the end we can see that more than 10000 files were processed, around 12000.
The configuration of the processor that generates the flowfile (Processor A):
[screenshots omitted]
I suspect that somewhere the state is not getting updated or managed properly: if one node has processed some files, the other nodes should not process those files. Maybe I am missing some configuration; can someone please help me sort this out?
Thanks in advance.

SocketException: Broken pipe received when the server is under load (Spring AMQP)

We are using Spring AMQP 2.8 with RabbitMQ 2.8.7. We build our connection factory as below:
<!-- RabbitMQ Local connectivity -->
<rabbit:connection-factory
id="localWhispirConnectionFactory"
addresses="${system.local.rabbitmq.host}"
username="${system.local.rabbitmq.username}"
password="${system.local.rabbitmq.password}"
connection-factory="rabbitWhispirLocalFactory"/>
<!-- Heartbeat configuration every 10sec -->
<bean id="rabbitWhispirLocalFactory" class="com.rabbitmq.client.ConnectionFactory">
<property name="requestedHeartbeat" value="10" />
</bean>
But when the server is under load, we receive the exceptions below. We have tried several ways to work around this, but would appreciate any comments on how to overcome the issue.
2015-04-20 12:01:00,174 INFO [SimpleMessageListenerContainer] Restarting Consumer: tag=[amq.ctag-wfazQuIuS-BM-CosxP_2GJ], channel=Cached Rabbit Channel: AMQChannel(amqp://whispir#10.50.50.128:5672/,62), acknowledgeMode=AUTO local queue size=0
2015-04-20 12:01:00,156 WARN [SimpleMessageListenerContainer] Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: java.net.SocketException: Broken pipe
2015-04-20 12:01:00,174 INFO [SimpleMessageListenerContainer] Restarting Consumer: tag=[amq.ctag-AjjxOJ2doe4yi2GtTHKumM], channel=Cached Rabbit Channel: AMQChannel(amqp://whispir#10.50.50.128:5672/,29), acknowledgeMode=AUTO local queue size=0
Thanks.
There is no such version (2.8) of Spring AMQP. Currently, the latest version is 1.4.4.
Check the server logs to see if there are any clues there.
That said, 2.8.7 is a very old broker; I am quite sure the RabbitMQ guys would recommend upgrading to a more recent version (currently the latest is 3.5.1).
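If in doubt which Spring AMQP version is actually on the classpath, one quick sketch (the VersionCheck class is illustrative; it prints null if the jar's manifest carries no Implementation-Version):

import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class VersionCheck {
    public static void main(String[] args) {
        // Reads the Implementation-Version from the spring-rabbit jar manifest
        System.out.println(RabbitTemplate.class.getPackage().getImplementationVersion());
    }
}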

OpenAM with OpenDJ - NameNotFoundException: ldap/idp/userDN - when starting up JBoss

I'm using OpenAM, with its embedded OpenDJ as the LDAP service, to protect my web application running on JBoss 7.
When I start my JBoss I get this error:
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ldapUserDN'
...
Caused by: javax.naming.NameNotFoundException: ldap/idp/userDN -- service jboss.naming.context.java.ldap.idp.userDN
So apparently Spring is looking for the JNDI node ldap/idp/userDN. But the JBoss configuration file that I got with the project has these entries:
<simple name="ldap/opendj/url" value="ldap://localhost:50389"/>
<simple name="ldap/opendj/userDN" value="cn=Directory Manager"/>
<simple name="ldap/opendj/password" value="mypassword"/>
<simple name="ldap/opendj/baseDN" value="dc=opensso,dc=java,dc=net"/>
And these properties are added to my JNDI tree on JBoss.
If I change these to "ldap/idp/userDN" and so on, the error goes away; but I was wondering whether there is somewhere that "ldap/opendj/userDN" should be mapped to "ldap/idp/userDN" that I've missed.
If you're using Spring LDAP, the actual configuration of the ldap-context-source goes in a Spring config file and might look like this:
<jee:jndi-lookup jndi-name="ldap/idp/url" id="ldapUrl"/>
<jee:jndi-lookup jndi-name="ldap/idp/userDN" id="ldapUserDN"/>
<jee:jndi-lookup jndi-name="ldap/idp/password" id="ldapPassword"/>
<jee:jndi-lookup jndi-name="ldap/idp/baseDN" id="ldapBaseDN"/>
<ldap:context-source url="#{ldapUrl}"
username="#{ldapUserDN}"
password="#{ldapPassword}"
base="#{ldapBaseDN}"
native-pooling="true"/>
So the JNDI entries in your JBoss config file should match the ones above.
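In other words, taking the entries from your question and renaming the prefix, the JBoss entries would look like:

<simple name="ldap/idp/url" value="ldap://localhost:50389"/>
<simple name="ldap/idp/userDN" value="cn=Directory Manager"/>
<simple name="ldap/idp/password" value="mypassword"/>
<simple name="ldap/idp/baseDN" value="dc=opensso,dc=java,dc=net"/>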

ActiveMQ clustering using HTTP auto-discovery with multicast on Amazon EC2

We are trying to set up an ActiveMQ cluster in our production environment on Amazon EC2, using auto-discovery and multicast.
I was able to configure auto-discovery with multicast successfully on my local ActiveMQ server, but on Amazon EC2 it is not working.
From the link I found that Amazon EC2 does not support multicast; hence we have to use the HTTP transport or a VPN for multicast. I tried the HTTP transport for multicast by downloading activemq-optional-5.6.jar (we are using ActiveMQ 5.6). It requires the httpcore and httpclient jars to be on the servlet's classpath.
We added the following to the broker configuration (activemq.xml):
<networkConnectors>
    <networkConnector name="default" uri="http://localhost:8161/activemq/DiscoveryRegistryServlet"/>
</networkConnectors>
<transportConnectors>
    <transportConnector name="activemq" uri="tcp://localhost:61616" discoveryUri="http://localhost:8161/activemq/DiscoveryRegistryServlet"/>
</transportConnectors>
But the broker is not identifying the DiscoveryRegistryServlet.
Any help is much appreciated.
Finally figured out how to set up ActiveMQ auto-discovery over HTTP.
ActiveMQ broker configuration:
In the $ACTIVEMQ_HOME/webapps folder, create a new folder with this layout:
activemq
|_ WEB-INF
   |_ classes
   |_ web.xml
Create the web.xml file with the following contents:
<web-app>
    <display-name>ActiveMQ Message Broker Web Application</display-name>
    <description>
        Provides an embedded ActiveMQ Message Broker embedded inside a web application
    </description>
    <!-- context config -->
    <context-param>
        <param-name>org.apache.activemq.brokerURL</param-name>
        <param-value>tcp://localhost:61617</param-value>
        <description>The URL that the embedded broker should listen on in addition to HTTP</description>
    </context-param>
    <!-- servlet mappings -->
    <servlet>
        <servlet-name>DiscoveryRegistryServlet</servlet-name>
        <servlet-class>org.apache.activemq.transport.discovery.http.DiscoveryRegistryServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>DiscoveryRegistryServlet</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
</web-app>
Place httpclient-4.0.3.jar, httpcore-4.3.jar, xstream-1.4.5.jar and activemq-optional-5.6.0.jar in the $ACTIVEMQ_HOME/lib directory.
In the $ACTIVEMQ_HOME/conf directory, modify the jetty.xml file to expose the activemq web app:
&ltbean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
...
&ltproperty name="handler">
&ltbean id="sec" class="org.eclipse.jetty.server.handler.HandlerCollection">
&ltproperty name="handlers">
...
...
&ltbean class="org.eclipse.jetty.webapp.WebAppContext">
&ltproperty name="contextPath" value="/activemq" />
&ltproperty name="resourceBase" value="${activemq.home}/webapps/activemq" />
&ltproperty name="logUrlOnStart" value="true" />
&ltproperty name="parentLoaderPriority" value="true" />
...
...
&lt/list>
&lt/property>
&lt/bean>
&lt/property>
&lt/bean>
Modify the activemq.xml file in the $ACTIVEMQ_HOME/conf directory to use the HTTP protocol:
<broker name="brokerName">
    ...
    <networkConnectors>
        <networkConnector name="default" uri="http://<loadbalancer_IP>:<loadbalancer_Port>/activemq/DiscoveryRegistryServlet?group=test"/>
        <!--<networkConnector name="default-nc" uri="multicast://default"/>-->
    </networkConnectors>
    <transportConnectors>
        <transportConnector name="http" uri="tcp://0.0.0.0:61618" discoveryUri="http://<loadbalancer_IP>:<loadbalancer_Port>/activemq/test"/>
    </transportConnectors>
    ...
</broker>
Make sure that the broker names are unique. "test" in the URL is the group name of the brokers.
Client configuration:
1. Put httpclient-4.0.3.jar, httpcore-4.3.jar, xstream-1.4.5.jar and activemq-optional-5.6.0.jar on the client's classpath.
2. Use this URL in the client:
discovery:(http://<loadbalancer_IP>:<loadbalancer_Port>/activemq/test)?connectionTimeout=10000
Here "test" is the group name.
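For example, a minimal JMS client sketch using that discovery URL (the DiscoveryClientDemo class is illustrative; the load-balancer host and port are placeholders, as above):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DiscoveryClientDemo {
    public static void main(String[] args) throws Exception {
        // "test" is the discovery group name configured on the brokers
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "discovery:(http://<loadbalancer_IP>:<loadbalancer_Port>/activemq/test)?connectionTimeout=10000");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual ...
        connection.close();
    }
}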

Spring transaction/entity manager doesn't compensate for stale connections (on WebSphere + OpenJPA)?

The goal is to have a J2EE application running on WebSphere Application Server 7 that accesses a JDBC data source (DB2) via OpenJPA 2.0. On most of our test servers, my code works fine; however, we have one test server where the EntityManager aborts / does not get initialized properly because of a stale connection ("java.net.SocketException: Broken pipe"):
<openjpa-2.1.2-SNAPSHOT-r422266:1384519 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: Failed to execute query "select count(x.profSurname) from Surname x where x.profUsrstate = 0". Check the query syntax for correctness. See nested exception for details.
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:872)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:794)
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:542)
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:315)
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:331)
at org.apache.openjpa.persistence.QueryImpl.getSingleResult(QueryImpl.java:359)
(...)
Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: [jcc][t4][2030][11211][4.13.127] A communication error occurred during operations on the connection's underlying socket, socket input stream,
or socket output stream. Error location: T4Agent.sendRequest() - flush (-1). Message: Broken pipe. ERRORCODE=-4499, SQLSTATE=08001 {prepstmnt 1826931080 SELECT COUNT(t0.PROF_SURNAME) FROM EMPINST.SURNAME t0 WHERE (t0.PROF_USRSTATE = CAST(? AS BIGINT)) optimize for 1 row [params=?]} [code=-4499, state=08001]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:281)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:265)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$700(LoggingConnectionDecorator.java:72)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeQuery(LoggingConnectionDecorator.java:1183)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:284)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement.executeQuery(JDBCStoreManager.java:1787)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:274)
at org.apache.openjpa.jdbc.sql.SelectImpl.executeQuery(SelectImpl.java:499)
at org.apache.openjpa.jdbc.sql.SelectImpl.execute(SelectImpl.java:424)
at com.ibm.ws.persistence.jdbc.sql.SelectImpl.execute(SelectImpl.java:89)
at org.apache.openjpa.jdbc.sql.SelectImpl.execute(SelectImpl.java:391)
at org.apache.openjpa.jdbc.sql.LogicalUnion$UnionSelect.execute(LogicalUnion.java:427)
at org.apache.openjpa.jdbc.sql.LogicalUnion.execute(LogicalUnion.java:230)
at org.apache.openjpa.jdbc.sql.LogicalUnion.execute(LogicalUnion.java:220)
at org.apache.openjpa.jdbc.kernel.SelectResultObjectProvider.open(SelectResultObjectProvider.java:94)
at org.apache.openjpa.kernel.QueryImpl$PackingResultObjectProvider.open(QueryImpl.java:2070)
at org.apache.openjpa.kernel.QueryImpl.singleResult(QueryImpl.java:1320)
at org.apache.openjpa.kernel.QueryImpl.toResult(QueryImpl.java:1242)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:1007)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:863)
... 113 more
---- Begin backtrace for Nested Throwables
com.ibm.websphere.ce.cm.StaleConnectionException: [jcc][t4][2030][11211][4.13.127] A communication error occurred during operations on the connection's underlying socket, socket input stream,
or socket output stream. Error location: T4Agent.sendRequest() - flush (-1). Message: Broken pipe. ERRORCODE=-4499, SQLSTATE=08001
at sun.reflect.GeneratedConstructorAccessor91.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:39)
at java.lang.reflect.Constructor.newInstance(Constructor.java:527)
at com.ibm.websphere.rsadapter.GenericDataStoreHelper.mapExceptionHelper(GenericDataStoreHelper.java:607)
at com.ibm.websphere.rsadapter.GenericDataStoreHelper.mapException(GenericDataStoreHelper.java:666)
at com.ibm.ws.rsadapter.AdapterUtil.mapException(AdapterUtil.java:2271)
at com.ibm.ws.rsadapter.jdbc.WSJdbcUtil.mapException(WSJdbcUtil.java:1185)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeQuery(WSJdbcPreparedStatement.java:726)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:286)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeQuery(LoggingConnectionDecorator.java:1181)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:284)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement.executeQuery(JDBCStoreManager.java:1787)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:274)
at org.apache.openjpa.jdbc.sql.SelectImpl.executeQuery(SelectImpl.java:499)
at org.apache.openjpa.jdbc.sql.SelectImpl.execute(SelectImpl.java:424)
at com.ibm.ws.persistence.jdbc.sql.SelectImpl.execute(SelectImpl.java:89)
at org.apache.openjpa.jdbc.sql.SelectImpl.execute(SelectImpl.java:391)
at org.apache.openjpa.jdbc.sql.LogicalUnion$UnionSelect.execute(LogicalUnion.java:427)
at org.apache.openjpa.jdbc.sql.LogicalUnion.execute(LogicalUnion.java:230)
at org.apache.openjpa.jdbc.sql.LogicalUnion.execute(LogicalUnion.java:220)
at org.apache.openjpa.jdbc.kernel.SelectResultObjectProvider.open(SelectResultObjectProvider.java:94)
(...)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:103)
at java.net.SocketOutputStream.write(SocketOutputStream.java:147)
at com.ibm.db2.jcc.t4.fb.b(fb.java:1685)
at com.ibm.db2.jcc.t4.fb.a(fb.java:1633)
at com.ibm.db2.jcc.t4.a.D(a.java:421)
... 138 more
I am working with OpenJPA, using the Spring Framework 3.0 JpaTransactionManager and LocalContainerEntityManagerFactoryBean to get my persistence context:
<tx:annotation-driven transaction-manager="lctxManager" />
<bean id="lctxManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="lcentityManagerFactory"></property>
</bean>
<bean id="lcentityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitName" value="activities"/>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.OpenJpaVendorAdapter">
<property name="showSql" value="false"></property>
</bean>
</property>
</bean>
The persistence.xml is as follows:
<persistence-unit name="activities" transaction-type="RESOURCE_LOCAL">
<non-jta-data-source>jdbc/activities</non-jta-data-source>
<!-- My classes -->
<exclude-unlisted-classes>true</exclude-unlisted-classes>
<properties>
<property name="openjpa.TransactionMode" value="local" />
</properties>
</persistence-unit>
I need to restart the server on which the application runs for the exception to vanish -- until it (randomly?) pops up again.
On googling the problem, I found a site that mentioned faulty code (no commit on transactions) as the cause: http://mikeschubert.com/2006/08/03/javanetsocketex/
However, I am under the impression that the JpaTransactionManager is supposed to take care of that.
Other websites mentioned that implementing a connection pool would help (when using Hibernate with a Tomcat server, e.g. elegantly handling stale database connections in Hibernate/Spring Transactions); however, WebSphere Application Server already manages a connection pool for the jdbc/activities data source (minSize: 1; maxSize: 10; connection timeout: 180 sec; reap time: 180 sec; unused timeout: 1800 sec; purge policy: FailingConnectionOnly).
Any hints on where I should start looking into the problem would be great.
There is a possibility that communication errors like this happen if the DB2COMM registry variable is not set on the database server.
DB2COMM should be set to TCPIP (for example via db2set DB2COMM=TCPIP, followed by an instance restart).
