Bitronix Transaction Manager is throwing this error - Oracle

We are running a Mule 4 application where we have introduced BTM (the Bitronix transaction manager) to manage transaction commits and rollbacks.
For the BTM XA data resource we used a default configuration like this (the service and user details are not provided here).
Here are a few observations:
The thread count is spiking sharply
We keep getting a constant warning (the second message below) continuously, every minute
What could be the issue?
We are using Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Thanks.
XADataSource in connector StandardXADataSource: connection count=<0> number of dead connection=<0> dead lock max wait=<300000> dead lock retry wait=<10000> driver name=<oracle.jdbc.driver.OracleDriver> number of free connections=<0> max con=<0> min con=<50> prepared stmt cache size=<16> transaction manager= xid connection size=<0> StandardConnectionPoolDataSource: master prepared stmt cache size=<0> prepared stmt cache size =<16> StandardDataSource: driver= url=<jdbc:oracle:thin:#//oracle service name> user= CoreDataSource : debug = description = login time out =<60> user = verbose = . A default pool will be created. To customize define a bti:xa-data-source-pool element in your config and assign it to the connector.
error running recovery on resource '270015947-default-xa-session', resource marked as failed (background recoverer will retry recovery)
java.lang.IllegalStateException: No TransactionContext associated with current thread: 914397
at com.hazelcast.transaction.impl.xa.XAResourceImpl.getTransactionContext(XAResourceImpl.java:305) ~[hazelcast-3.12.jar:3.12]
at com.mulesoft.mule.runtime.module.cluster.internal.vm.ClusterQueueSession.getXaTransactionContext(ClusterQueueSession.java:175) ~[mule-ee-distribution-standalone-4.3.0-20210622-patch.jar:4.3.0-20210622]
at com.mulesoft.mule.runtime.module.cluster.internal.vm.ClusterQueueSession.recover(ClusterQueueSession.java:136) ~[mule-ee-distribution-standalone-4.3.0-20210622-patch.jar:4.3.0-20210622]
at bitronix.tm.recovery.RecoveryHelper.recover(RecoveryHelper.java:103) ~[mule-btm-2.1.14.jar:2.1.14]
at bitronix.tm.recovery.RecoveryHelper.recover(RecoveryHelper.java:61) ~[mule-btm-2.1.14.jar:2.1.14]
at bitronix.tm.recovery.Recoverer.recover(Recoverer.java:276) [mule-btm-2.1.14.jar:2.1.14]
at bitronix.tm.recovery.Recoverer.recoverAllResources(Recoverer.java:233) [mule-btm-2.1.14.jar:2.1.14]
at bitronix.tm.recovery.Recoverer.run(Recoverer.java:146) [mule-btm-2.1.14.jar:2.1.14]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_241]
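For orientation, the pool settings the first warning refers to (a unique resource name, the XADataSource class, min/max pool size) are the same knobs that plain Bitronix exposes through its PoolingDataSource API. The sketch below is standalone Bitronix, not Mule configuration, and every connection detail is a placeholder; inside the Mule app the equivalent is the bti:xa-data-source-pool element that the warning mentions.

import bitronix.tm.resource.jdbc.PoolingDataSource;

// Standalone Bitronix sketch (not Mule XML) showing the usual Oracle XA pool knobs.
// All names, sizes and connection details below are placeholders.
public final class OracleXaPoolSketch {

    public static PoolingDataSource create() {
        PoolingDataSource ds = new PoolingDataSource();
        ds.setUniqueName("oracle-xa");                                // resource name used by the recoverer
        ds.setClassName("oracle.jdbc.xa.client.OracleXADataSource");  // Oracle's XADataSource implementation
        ds.setMinPoolSize(5);                                         // keep min <= max
        ds.setMaxPoolSize(20);
        ds.setAllowLocalTransactions(true);
        ds.getDriverProperties().setProperty("user", "app_user");     // placeholder
        ds.getDriverProperties().setProperty("password", "app_pass"); // placeholder
        ds.getDriverProperties().setProperty("URL",
                "jdbc:oracle:thin:@//dbhost:1521/SERVICE");           // placeholder service URL
        ds.init();                                                    // registers the pool with the transaction manager
        return ds;
    }
}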

Related

IBM App Connect Enterprise Could not locate JDBC Provider policy

I have created a JDBCProviders policy called CLINIC in IBM App Connect Enterprise (ACE v11) on Windows; CLINIC is also the name of the database.
I have a Mapping node where I'm trying to select from or insert into the Oracle database.
I then deployed the policy to the integration node, after setting credentials on the node.
JDBCProviders
CLINIC
connectionUrlFormat='jdbc:oracle:thin:[user]/[password]@[serverName]:[portNumber]:[connectionUrlFormatAttr1]'
connectionUrlFormatAttr1='XE'
connectionUrlFormatAttr2=''
connectionUrlFormatAttr3=''
connectionUrlFormatAttr4=''
connectionUrlFormatAttr5=''
databaseName='CLINIC'
databaseSchemaNames='useProvidedSchemaNames'
databaseType='Oracle'
databaseVersion='11.2'
description=''
environmentParms=''
jarsURL='C:\oraclexe\app\oracle\product\11.2.0\server\jdbc\lib'
jdbcProviderXASupport='TRUE'
maxConnectionPoolSize='0'
portNumber='1521'
securityIdentity='mySecIdentity'
serverName='localhost'
type4DatasourceClassName='oracle.jdbc.xa.client.OracleXADataSource'
type4DriverClassName='oracle.jdbc.OracleDriver'
useDeployedJars='FALSE'
Then, when I test the message flow, I always get this error:
Exception. BIP2230E: Error detected whilst processing a message in node 'MappSelect.Mapping'. : C:\ci\product-build\WMB\src\DataFlowEngine\PluginInterface\jlinklib\ImbJniNode.cpp: 433: ImbJniNode::evaluate: ComIbmMSLMappingNode: MappSelect#FCMComposite_1_3
BIP6253E: Error in node: 'Mapping'. Could not locate JDBC Provider policy ''XE'', which was given for the data source name property for this node. : JDBCCommon.java: 575: JDBCDatabaseManager::constructor: :
So what am I missing? Any help, please?
I don't know what you did in your Mapping node, but it should be one of the following:
You specified the wrong resource name. You have to reference CLINIC in your Mapping node as the source.
You did not restart the Integration Server after applying this configuration.
https://www.ibm.com/support/pages/ibm-app-connect-enteprise-bip6253e-error-node-java-compute-could-not-locate-jdbc-provider-policy-mypoliciesmypolicy

'java.sql.SQLRecoverableException: IO Error: Operation interrupted' after updating the Oracle Driver to 12.2.0.1.0

I recently updated our Oracle JDBC driver to 12.2.0.1.0.
After the update we get errors from the Oracle driver that we have not seen before, and I haven't found a discussion explaining how to solve this.
The application that we develop in our company has a dispatcher that manages the execution of different jobs.
The jobs can open connections to the database and perform SQL queries on it (and then, of course, close the connections).
The jobs are executed in parallel (using a fork mechanism).
Of course, there is a maximum number of jobs that can be executed in parallel.
If a job is not being executed at the moment, it waits to be executed.
The order in which jobs are executed is managed using a queue.
The error below occurs under the following circumstances: the dispatcher is executing the maximum number of jobs allowed to run simultaneously, and there are jobs waiting to be executed.
At the moment a waiting job is about to be started (that is, a running job has finished and a new one can be started), the following error occurs:
Caused by: de.fact.first.process.data.ProcessDataException: java.sql.SQLRecoverableException: IO Error: Operation interrupted
at JobDataFactoryImplJDBC.getByJobId(JobDataFactoryImplJDBC.java:210)
... 19 more
Caused by: java.sql.SQLRecoverableException: IO Error: Operation interrupted
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:761)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:904)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1082)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3780)
at oracle.jdbc.driver.T4CPreparedStatement.executeInternal(T4CPreparedStatement.java:1343)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3822)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1165)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at de.fact.first.process.data.JobDataFactoryImplJDBC.getByJobId(JobDataFactoryImplJDBC.java:205)
... 19 more
Caused by: java.io.InterruptedIOException: Operation interrupted
at oracle.net.nt.TimeoutSocketChannel.handleInterrupt(TimeoutSocketChannel.java:311)
at oracle.net.nt.TimeoutSocketChannel.write(TimeoutSocketChannel.java:221)
at oracle.net.ns.NIOPacket.writeToSocketChannel(NIOPacket.java:211)
at oracle.net.ns.NIONSDataChannel.writeDataToSocketChannel(NIONSDataChannel.java:181)
at oracle.net.ns.NIONSDataChannel.writeDataToSocketChannel(NIONSDataChannel.java:132)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForReading(T4CMAREngineNIO.java:96)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:534)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:485)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:226)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:59)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:747)
... 28 more
My first thought was that maybe the application exceeds the number of connections and therefore Oracle interrupts the connections.
This was not the problem: I increased the number of processes (and sessions) and additionally the distributed_lock_timeout.
Even after adjusting these options, the problem still occurs.
There are no connections kept open by the waiting jobs.
What I can say for sure is that the error occurs only with the new Oracle driver; the issue is not reproducible with the old one (12.1.0.1.0).
Please update the dependency and check:
<dependency>
<groupId>com.github.noraui</groupId>
<artifactId>ojdbc7</artifactId>
<version>12.1.0.2</version>
</dependency>
We fixed the problem by setting the configuration option testOnBorrow to true for the connection pool. Similarly, you need to set the same property to true in the Tomcat configuration if you are using Tomcat as a server:
<Context reloadable="true" >
<Resource name="jdbc/..."
auth="Container"
type="org.apache.commons.dbcp2.PoolingDataSource"
factory=""
scope="Shareable"
dataSource="oracle"
minIdle="0"
maxIdle="50"
maxActive="500"
minEvictableIdleTimeMillis="1800000"
numTestsPerEvictionRun="3"
validationQuery="SELECT COUNT(*) FROM LANGUAGES"
testOnBorrow="true"
testOnReturn="false"
testWhileIdle="true"
timeBetweenEvictionRunsMillis="300000"/>
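If the pool is built in code with Commons DBCP2 rather than through Tomcat's context.xml, the same validation settings can be applied through the BasicDataSource API. This is only a sketch mirroring the attributes above; the URL, credentials and validation query are placeholders.

import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;

// Programmatic DBCP2 equivalent of the <Resource> above; values are illustrative.
public final class ValidatedPoolSketch {

    public static DataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE");  // placeholder
        ds.setUsername("app_user");                            // placeholder
        ds.setPassword("app_pass");                            // placeholder
        ds.setMinIdle(0);
        ds.setMaxIdle(50);
        ds.setMaxTotal(500);                                   // DBCP2 name for maxActive
        ds.setTestOnBorrow(true);                              // validate before handing a connection out
        ds.setTestWhileIdle(true);
        ds.setTimeBetweenEvictionRunsMillis(300000);
        ds.setMinEvictableIdleTimeMillis(1800000);
        ds.setNumTestsPerEvictionRun(3);
        ds.setValidationQuery("SELECT 1 FROM DUAL");           // any cheap query works
        return ds;
    }
}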

MySQL server has gone away when run in queue

We have MariaDB 10.1, beanstalkd 1.10 and Laravel 4.2.
We have one query that runs successfully without the queue, but when we run it through beanstalkd nothing is affected and we get a 'MySQL server has gone away' error in the log file.
config:
wait_timeout = 120
max_allowed_packet = 1024M
Why is the behavior different with and without the queue?
We had similar issues; either the code was running in a different thread and the connection was being lost, or there was some strange garbage collection closing connections for long-running processes.
Anyway, what we implemented is:
- when a job is reserved and starts processing, we always reconnect the DB
- when we detect a connection that has gone away, we release the job (so it will be picked up again)
In case it happens in the middle of the processing flow, you may want to reconnect, although the work done so far on that job will be lost if the job is somehow transactional.

Configuration Issue for IBM FileNet 5.2

I installed IBM FileNet Content Engine 5.2 on my machine. I am getting a problem while configuring the GCD data sources for a new profile.
Let me first explain the steps I did, then I will mention the problem I am getting.
First, I created the GCD database in DB2, then I created the data sources required for configuring the profile in the WAS Admin Console. I created a J2C authentication alias for the user which has access to the GCD database and configured it with the data sources. The test database connection is successful, but when I run the task of configuring the GCD data sources, it fails with the following error:
Starting to run Configure GCD JDBC Data Sources
Configure GCD JDBC Data Sources ******
Finished running Configure GCD JDBC Data Sources
An error occurred while running Configure GCD JDBC Data Sources
Running the task failed with the following message: The data source configuration failed:
WASX7209I: Connected to process "server1" on node Poonam-PcNode01 using SOAP connector; The type of process is: UnManagedProcess
testing Database connection
DSRA8040I: Failed to connect to the DataSource. Encountered java.sql.SQLException: [jcc][t4][2013][11249][3.62.56] Connection authorization failure occurred. Reason: User ID or Password invalid. ERRORCODE=-4214, SQLSTATE=28000 DSRA0010E: SQL State = 28000, Error Code = -4,214.
It looks like a simple error of the user ID and password not being valid. I am using the same alias for other data sources as well and they are working fine, so I am not sure why I am getting this error. I have also tried changing the scope of the data sources, but with no success. Can somebody please help?
running "FileNet Configuration Manager" task of configuring GCD datasources will create all the needs things in WAS (including Alias), do not created it before manually.
I suspect it had an issue with exciting JDBC data sources/different names Alias
Seems from your message that you are running it from Filene configuration manager. Could you please double check from your database whether user id is authorised to execute query in GCD database. It is definitely do it with permission issue.
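Since the reported error (ERRORCODE=-4214) is an authorization failure, one quick sanity check is to try the same user ID and password with a plain JDBC connection outside WebSphere. A minimal sketch using the standard DB2 JCC driver; the host, port, database name and credentials below are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;

// Minimal credential check against the GCD database, outside WebSphere.
// Host, port, database name and credentials below are placeholders.
public final class GcdCredentialCheck {

    public static void main(String[] args) throws Exception {
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        try (Connection c = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/GCDDB", "gcd_user", "gcd_pass")) {
            System.out.println("Connected as " + c.getMetaData().getUserName());
        }
    }
}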

WebSphere: serializable messages in System.out

I administer several WebSphere 6.1 servers running the same application in a load balancing configuration. For one of the servers, the WebSphere System.out file is getting filled with these sorts of messages:
[6/5/14 20:20:35:602 EDT] 0000000f SessionContex E Miscellaneous
data: Attribute "rotatorFiles" is declared to be serializable but is
found to generate exception "java.io.NotSerializableException" with
message "com.company.storefront.vo.ImageRotatorItemVO". Fix the
application so that the attribute "rotatorFiles" is correctly
serializable at runtime.
The same code is not generating these messages in the other WebSphere servers' log files. I suspect there is some configuration setting that is causing these messages to be logged on one server but not the others. Does anyone out there know what setting that may be?
At least two come to mind:
You may have session replication enabled on that server; check in Application servers > server1 > Session management > Distributed environment settings.
You may have the PMI counter that monitors session size (Servlet Session Manager.SessionObjectSize) enabled; check in Application servers > server1 > Performance Monitoring Infrastructure (PMI).
The paths in the console are from v8, so they might be a bit different in v6.1, but you should get the idea.
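Whichever setting triggers the serialization check, the message itself only goes away if the session attribute's class really is serializable, as the log asks. A minimal sketch; the real class is com.company.storefront.vo.ImageRotatorItemVO, and the fields shown here are invented for illustration.

import java.io.Serializable;

// Hypothetical sketch of the class named in the log message.
// Every field it keeps must itself be Serializable, or be marked transient.
public class ImageRotatorItemVO implements Serializable {

    private static final long serialVersionUID = 1L;

    private String imageUrl;                 // invented field, for illustration
    private transient Object renderingCache; // non-serializable helpers must be transient

    // getters and setters omitted
}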
