Is there a maximum size for a QPID DerbyMessageStore file?

We had been running the Java QPID broker on CentOS for several years without issues when one of our production brokers failed; it has not worked properly since, despite multiple restarts. This error keeps occurring:
Caused by: org.apache.qpid.AMQStoreException: Error writing enqueued message with id 5836 for queue Ingest.instrument_particles with id 716760ac-c686-3454-b645-c86f15e29877 to database [error code 541: internal error]
at org.apache.qpid.server.store.derby.DerbyMessageStore.enqueueMessage(DerbyMessageStore.java:1242)
at org.apache.qpid.server.store.derby.DerbyMessageStore$DerbyTransaction.enqueueMessage(DerbyMessageStore.java:1989)
at org.apache.qpid.server.txn.AsyncAutoCommitTransaction.enqueue(AsyncAutoCommitTransaction.java:265)
... 12 more
Caused by: java.sql.SQLException: An unexpected exception was thrown
There's more to this stack trace, but the gist of it seems to be that QPID is failing to write to the message store. Is there a maximum size for the files within the store? One of them exceeds 410 GB. We are running JDK 1.8.0_332 on CentOS 7.9, if that makes a difference.
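For anyone digging into this: Derby reuses space freed by deleted rows but does not normally return it to the filesystem, so a very large store file is not necessarily close to any hard limit. One offline diagnostic, sketched below under the assumption that the broker is stopped and the store can be opened with Derby's `ij` tool (the store path and table name here are illustrative, not taken from the question), is Derby's built-in compress procedure:

```sql
-- Open the Derby store with ij while the broker is stopped
-- (the store path is a placeholder):
connect 'jdbc:derby:/var/lib/qpid/messageStore';

-- Reclaim unused space in a large table; the non-zero last argument
-- selects sequential mode, which uses less memory on big tables.
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'QPID_MESSAGE_CONTENT', 1);
```

If the file shrinks substantially afterwards, the size was reclaimable free space rather than live data.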

Related

Error while generating HTML report via cmd

Error generating the report: org.apache.jmeter.report.dashboard.GenerationException: Error while processing samples: Consumer failed with message :Consumer failed with message :Consumer failed with message :Consumer failed with message :Begin size 0 is not equal to fixed size 5
Sounds like a JMeter bug; I think you should report it via JMeter Bugzilla.
In the meantime, try downgrading to Java 8 (the minimum Java version you can run JMeter 5.4 with); that should resolve your issue.

'java.sql.SQLRecoverableException: IO Error: Operation interrupted' after updating the Oracle Driver to 12.2.0.1.0

I recently updated our Oracle JDBC driver to 12.2.0.1.0.
Since the update we have been getting errors from the Oracle driver that we had never seen before, and I have not found any discussion explaining how to solve this.
The application that we develop in our company has a dispatcher that manages the execution of different jobs.
The jobs can open connections to the database and run SQL queries on it (and then, of course, close the connections).
The jobs are executed in parallel (using a fork mechanism).
Of course, there is a maximum number of jobs that can be executed in parallel.
If a job is not currently executing, it waits to be executed.
The order in which jobs are executed is managed using a queue.
The error below occurs under the following circumstances: the dispatcher is running the maximum number of jobs allowed in parallel, and there are jobs waiting to be executed.
At the moment a waiting job is about to start (that is, a running job has finished and a new one can be started), the following error occurs:
Caused by: de.fact.first.process.data.ProcessDataException:
java.sql.SQLRecoverableException: IO Error: Operation interrupted
at
JobDataFactoryImplJDBC.getByJobId(JobDataFactoryImplJDBC.java:210)
... 19 more
Caused by: java.sql.SQLRecoverableException: IO Error: Operation interrupted
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:761)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:904)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1082)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3780)
at oracle.jdbc.driver.T4CPreparedStatement.executeInternal(T4CPreparedStatement.java:1343)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3822)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1165)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at de.fact.first.process.data.JobDataFactoryImplJDBC.getByJobId(JobDataFactoryImplJDBC.java:205)
... 19 more
Caused by: java.io.InterruptedIOException: Operation interrupted
at oracle.net.nt.TimeoutSocketChannel.handleInterrupt(TimeoutSocketChannel.java:311)
at oracle.net.nt.TimeoutSocketChannel.write(TimeoutSocketChannel.java:221)
at oracle.net.ns.NIOPacket.writeToSocketChannel(NIOPacket.java:211)
at oracle.net.ns.NIONSDataChannel.writeDataToSocketChannel(NIONSDataChannel.java:181)
at oracle.net.ns.NIONSDataChannel.writeDataToSocketChannel(NIONSDataChannel.java:132)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForReading(T4CMAREngineNIO.java:96)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:534)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:485)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:226)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:59)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:747)
... 28 more
My first thought was that maybe the application exceeds the number of connections and therefore Oracle interrupts the connections.
This was not the problem: I increased the number of processes (and sessions), and additionally distributed_lock_timeout.
Even after adjusting these options, the problem still occurs.
There are no connections kept open by the waiting jobs.
What I can say for sure is that the error occurs only with the new Oracle driver; the issue is not reproducible with the old one (12.1.0.1.0).
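The dispatcher pattern described in the question (a fixed number of jobs running in parallel, the rest waiting in a queue) can be sketched with a standard `ExecutorService`. This is only an illustration of the setup, not the actual application code; all names and the job count are invented:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the dispatcher: a fixed number of jobs run in parallel;
// the rest wait in the executor's internal queue until a slot frees up.
public class JobDispatcherSketch {
    static final int MAX_PARALLEL_JOBS = 4; // illustrative limit

    public static void main(String[] args) throws Exception {
        ExecutorService dispatcher = Executors.newFixedThreadPool(MAX_PARALLEL_JOBS);
        List<Future<String>> results = new ArrayList<>();
        for (int jobId = 1; jobId <= 10; jobId++) {
            final int id = jobId;
            // In the real application each job would open its own JDBC
            // connection, run its queries, and close the connection.
            results.add(dispatcher.submit(() -> "job-" + id + " done"));
        }
        for (Future<String> r : results) {
            System.out.println(r.get()); // blocks until that job finishes
        }
        dispatcher.shutdown();
    }
}
```

Note that the stack trace above bottoms out in `TimeoutSocketChannel.handleInterrupt`: the NIO-based 12.2 driver surfaces thread interrupts as `InterruptedIOException`, so it is worth checking whether the dispatcher interrupts worker threads (for example via `Future.cancel(true)`) while rotating jobs.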
Please update the dependency and check:
<dependency>
    <groupId>com.github.noraui</groupId>
    <artifactId>ojdbc7</artifactId>
    <version>12.1.0.2</version>
</dependency>
We fixed the problem by setting the connection-pool option testOnBorrow to true for the Oracle JDBC connections. Similarly, if you are using Tomcat as the server, you need to set the same property to true in the Tomcat configuration:
<Context reloadable="true">
    <Resource name="jdbc/..."
              auth="Container"
              type="org.apache.commons.dbcp2.PoolingDataSource"
              factory=""
              scope="Shareable"
              dataSource="oracle"
              minIdle="0"
              maxIdle="50"
              maxActive="500"
              minEvictableIdleTimeMillis="1800000"
              numTestsPerEvictionRun="3"
              validationQuery="SELECT COUNT(*) FROM LANGUAGES"
              testOnBorrow="true"
              testOnReturn="false"
              testWhileIdle="true"
              timeBetweenEvictionRunsMillis="300000"/>
</Context>

Cassandra start error - Exception encountered during startup: The seed provider lists no seeds

I have so far been unable to run Cassandra successfully, and I have reached the point where I believe it is more efficient to reach out for help.
Installation method: datastax-ddc-64bit-3.9.0.msi
OS: Windows 7
Symptoms:
cmd> net start DataStax-DDC-Server
results in cmd output 'service is starting' and 'service was started successfully'.
datastax_ddc_server-stdout.log has this subsequent output, which is likely relevant:
WARN 10:38:17 Seed provider couldn't lookup host *myIPaddress*
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: The seed provider lists no seeds.
ERROR 10:38:17 Exception encountered during startup: The seed provider lists no seeds.
cmd> nodetool status
results in the following error message:
Error: Failed to connect to '127.0.0.1:7199': Connection refused
I would also like to note that the Cassandra CQL Shell closes immediately after I open it. I think it quickly flashes an error similar to the one above.
Please be patient with me if I have included some useless information or am not approaching my issue from the correct perspective. I have not worked with apache cassandra before, nor have I configured a machine to facilitate an installation of any database engine.
Any help/insight is much appreciated,
Thanks!
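For reference, the "lists no seeds" error is usually driven by the seed_provider section of conf/cassandra.yaml: if the seeds entry is empty or resolves to no usable address (note the "couldn't lookup host" warning above), startup aborts. A minimal single-node sketch, where the address is a placeholder for your own:

```yaml
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # Comma-separated list; must resolve to at least one address
      - seeds: "127.0.0.1"
```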

[Error]: Failed to run command with error: Error Domain=Parse Code=428

I get this error sometimes when trying to save things to Parse or to fetch data from it.
It is not constant; it appears once in a while, making the operation fail.
I have contacted Parse for that. Here is their answer:
Starting on 4/28/2016, apps that have not migrated their database may see a "428" error code if the request cannot be handled by the remaining shared pool of resources. If you see this error in your logs, we highly recommend migrating the database for your app without delay.
This means that, starting on that date, all apps that have not begun their database migration are given low priority. So migrating the database should resolve it.

SQLException like archiver error. Connect internal only, until freed

Can anyone give me an idea about the error below?
2011-02-11 05:48:42,858
-[c=STATS_VITALS] Error running system monitor for connectionCloseTime:
java.sql.SQLException: ORA-00257: archiver error.
Connect internal only, until freed.
This usually occurs when the system encounters an error while trying to archive a redo log. Was this working previously and has it only just failed, or is this a new installation? Do you know where your logs are being archived? If so, check whether that location is out of space as a first step. Once you have more information, we might be able to help you more effectively.
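As a concrete first step along those lines, and assuming you can connect as a privileged user, the archive destinations and (if used) the fast recovery area can be inspected with standard Oracle views:

```sql
-- Where are redo logs being archived, and is any destination in error?
SELECT dest_id, destination, status, error
  FROM v$archive_dest
 WHERE status <> 'INACTIVE';

-- If the fast recovery area is the archive destination, check its usage:
SELECT name, space_limit, space_used FROM v$recovery_file_dest;
```

If the destination is full, freeing or backing up archived logs (or raising db_recovery_file_dest_size) normally clears ORA-00257.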
