I'm currently testing failure scenarios using a 3-node CockroachDB cluster.
Using this scenario:
1. Insert records in a loop
2. Shut down 2 of the 3 nodes (to simulate loss of quorum)
3. Wait long enough for the PostgreSQL JDBC driver to throw an IOException
4. Restart one node to restore quorum
5. Retry the previously failed statement
I then hit the following exception:
Cause: org.postgresql.util.PSQLException: ERROR: duplicate key value (messageid)=('71100358-aeae-41ac-a397-b79788097f74') violates unique constraint "primary"
This means the insert from the first attempt (the one that produced the IOException) succeeded once quorum became available again. The problem is that my application isn't aware of it.
I cannot assume that a "duplicate key value" exception is caused by an application logic issue. Is there any parameter I can tune so that the underlying statement rolls back before the IOException? Or is there a better approach?
Tests were conducted using:
- CockroachDB v1.1.5 (3 nodes)
- MyBatis 3.4.0
- PostgreSQL driver 42.2.1
- Java 8
There are a couple of things that could be happening here.
First, if one of the nodes you're killing is the gateway node (the one your
Java process is connecting to), it could just be that the data is being
committed, but the node is dying before it's able to send the confirmation back
to the client. In this case, there's not much that can be done by CockroachDB
or any other database.
The more subtle case is where the nodes you're killing are nodes other than
the gateway node: that is, where the node you were talking to sent you back an
error despite the data being committed successfully. The problem here is that
the data is committed as soon as it's written to Raft, but if the other nodes
have died (and could come back up later), there's no way for the gateway node
to know whether they committed the data it asked them to. In situations like
this, CockroachDB returns an "ambiguous result error".
I'm not sure how JDBC exposes the specifics of the errors returned to the
client in cases like this, but if you inspect the error itself it should say
something to that effect.
Ambiguous results are briefly discussed in CockroachDB's Jepsen analysis, and this page in the CockroachDB docs describes the kinds of errors that can be returned.
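For what it's worth, with the PostgreSQL JDBC driver you can check the SQLSTATE on the exception. CockroachDB's docs list SQLSTATE 40003 ("statement completion unknown") for ambiguous results, though that's worth verifying against your version. A minimal sketch:
import java.sql.SQLException;
// Ambiguous result: the write may or may not have been applied, so only
// retry in a way that tolerates both outcomes. 40003 is the SQLSTATE
// CockroachDB documents for "result is ambiguous" errors.
static boolean isAmbiguousResult(SQLException e) {
    return "40003".equals(e.getSQLState());
}
On an ambiguous result, a blind retry is exactly what produces the duplicate-key error above; a safe retry first checks whether the row already exists, or uses a statement that tolerates it.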
Related
In my application I use Oracle (OCI) bulk execution via the following function:
OCIStmtExecute
Under all normal conditions it works as expected. But once an Oracle node failover happens, commits are rejected with errors like ORA-25405:
ORA-25405: transaction status unknown
According to the available guidelines, "The user must determine the transaction's status manually."
My question: is there a scenario where my bulk insert/update succeeds partially while giving the above error?
From http://docs.oracle.com/cd/B10500_01/appdev.920/a96584/oci16m89.htm
With global transactions, it is possible that the transaction is now in-doubt, meaning that it is neither committed nor aborted.
This is exactly your case: the transaction is in doubt, neither committed nor rolled back.
OCITransCommit() attempts to retrieve the status of the transaction from the server. The status is returned.
The solution, then, is to attempt the commit again with OCITransCommit(), and then check its return value.
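OCI itself is C, but since the guideline boils down to "determine the transaction's status manually", here is a rough JDBC-flavoured sketch of that idea for comparison: after a commit with an unknown outcome, check the data itself before deciding whether to re-run the batch. The table and column names are hypothetical.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
// Hypothetical manual status check: did the batch's marker row make it in?
static boolean batchCommitted(Connection conn, long batchId) throws SQLException {
    try (PreparedStatement check = conn.prepareStatement(
            "SELECT COUNT(*) FROM batch_audit WHERE batch_id = ?")) {
        check.setLong(1, batchId);
        try (ResultSet rs = check.executeQuery()) {
            rs.next();
            return rs.getInt(1) > 0;  // committed if the marker row exists
        }
    }
}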
I am currently working on an application that continuously queries a database for real-time data to be displayed.
To minimize the impact on the systems writing to the database, which are essential to business operation, I am connecting directly to the read-only replica in the availability group (using the read-only replica's server name, as opposed to read-only routing via the Always On listener with applicationIntent=ReadOnly).
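For concreteness, with a JDBC client the two connection styles look roughly like this (host and database names are placeholders; other client libraries have equivalent settings):
// Direct connection to the secondary replica (what I am doing):
String direct = "jdbc:sqlserver://replica-host:1433;databaseName=MyDb";
// Read-only routing via the Always On listener (the alternative):
String routed = "jdbc:sqlserver://ag-listener:1433;databaseName=MyDb;applicationIntent=ReadOnly";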
Even so, we are noticing increased response times for inserts on the primary server.
To my understanding of secondary replicas, this should not be the case. I am also using NOLOCK hints in the query. I am very perplexed by this and do not quite understand what is causing the increase in response times. All I have thought of so far is that SQL Server, regardless of the NOLOCK hint, is locking the table I am reading from and holding up synchronous replication to the read-only replica, which in turn locks the table on the primary instance and delays the insert query.
Is this the case, or is there something I am not quite understanding about Always On read-only replicas?
I found this document, which I think best describes the possible causes of the increased response times on the primary server.
In general it's a good read for anyone looking to use their Always On availability group to distribute load between primary and secondary replicas.
For those who don't wish to read the whole document, it taught me the following (in my own rough words):
Although very unlikely, it is possible for workloads running on the secondary replica to delay the acknowledgement that a transaction has been replicated to the secondary. In synchronous-commit mode the primary waits for this acknowledgement before committing the transaction it is running (an insert, for example), so a slower acknowledgement from the secondary replica causes the primary replica to take longer on the insert.
It is explained much better in the document, under the 'Impact on Primary Workload' section. Again, if you want to know more, I really suggest you read it.
I am running into ORA-01555: snapshot too old errors with Oracle 9i, but I am not running any updates with this application at all.
The error occurs after the application has been connected for some hours without issuing any queries; then every query (which would otherwise be a subsecond query) comes back with ORA-01555: snapshot too old: rollback segment number 6 with name "_SYSSMU6$" too small.
Could this be caused by the transaction isolation being set to TRANSACTION_SERIALIZABLE, or by some other bug in the JDBC code? It could be a bug in the jdbc-go driver, but everything I've read about this error leads me to believe it should not occur in scenarios where no DML statements are issued.
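One cheap way to test the serializable theory is to log the isolation level when the connection is created, assuming `connection` is the JDBC Connection the application holds; a minimal sketch:
import java.sql.Connection;
// If the app really runs SERIALIZABLE, a transaction left open for hours
// reads from a correspondingly old snapshot, which can trigger ORA-01555.
if (connection.getTransactionIsolation() == Connection.TRANSACTION_SERIALIZABLE) {
    connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
}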
Read below for very good insight into this error from Tom Kyte. The problem in your case may come from what is called 'delayed block cleanout', a case where even selects create redo. The root cause, however, is almost surely improperly sized rollback segments (though Tom adds correlated causes: committing too frequently, a very large read after many updates, etc.).
snapshot too old error (Tom Kyte)
When you run a query on an Oracle database the result will be what Oracle calls a "Read consistent snapshot".
What it means is that all the data items in the result will be represented with their values as of the time the query started.
To achieve this the DBMS looks into the rollback segments to get the original value of items which have been updated since the start of the query.
The DBMS uses the rollback segment in a circular way and will eventually wrap around - overwriting the old data.
If your query needs data that is no longer available in the rollback segment you will get "snapshot too old".
This can happen if your query is running for a long time on data being concurrently updated.
You can prevent it by either extending your rollback segments or avoid running the query concurrently with heavy updaters.
I also believe newer versions of Oracle provide better dynamic management of rollback segments than Oracle 9i does.
I'm using the DataStax Cassandra Java driver 2.1.2 to have clients connect to one of three data centers, like so:
.withLoadBalancingPolicy(new TokenAwarePolicy(new DCAwareRoundRobinPolicy("DC1",1)))
This sets DC1 as the local data center, but also has the driver make one connection to the other two remote data centers.
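For context, the full cluster setup looks something like this (the contact point is a placeholder):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;
Cluster cluster = Cluster.builder()
    .addContactPoint("10.0.0.1")  // placeholder seed node
    .withLoadBalancingPolicy(
        new TokenAwarePolicy(new DCAwareRoundRobinPolicy("DC1", 1)))
    .build();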
Now if some of the nodes in the local data center are down, the client will fail to get a local quorum for an insert statement, and an UnavailableException will be thrown. But there are sufficient nodes available in the remote data centers for the insert to reach quorum there and succeed, so I would like the driver to retry the insert in the other data centers. How do I tell the driver to do this?
It looks like there is a way to set a RetryPolicy to retry at a lower consistency level, but I don't see anything about retrying against a remote data center.
If all the nodes in DC1 are down, the driver does try the insert at a remote data center, where it succeeds.
The way I ended up getting this to work is to first try the insert with these settings (note that I'm using the IF NOT EXISTS clause on the insert):
statement.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
statement.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);
This tells Cassandra to perform the insert only if it can get a local quorum for both the IF NOT EXISTS check and the write. If there aren't enough replicas alive for a local quorum, I catch the UnavailableException and NoHostAvailableException exceptions and change the consistency levels to:
statement.setConsistencyLevel(ConsistencyLevel.QUORUM);
statement.setSerialConsistencyLevel(ConsistencyLevel.SERIAL);
Then I try the insert again, and this time it will try to get a quorum across all the data centers and succeed. With this approach I get decent performance for most inserts by restricting the very expensive IF NOT EXISTS check to the local DC, while getting the reliability of not being dead in the water when some of the local replicas are down.
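Put together, the whole pattern looks roughly like this (session setup is assumed, and the keyspace, table, and column names are illustrative; driver 2.1 API):
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.exceptions.NoHostAvailableException;
import com.datastax.driver.core.exceptions.UnavailableException;
// Local-first insert with a cross-DC fallback.
void insertWithFallback(Session session, String id, String body) {
    SimpleStatement insert = new SimpleStatement(
        "INSERT INTO ks.messages (messageid, body) VALUES (?, ?) IF NOT EXISTS",
        id, body);
    insert.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
    insert.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);
    try {
        session.execute(insert);
    } catch (UnavailableException | NoHostAvailableException e) {
        // Not enough live local replicas: widen to a cross-DC quorum.
        insert.setConsistencyLevel(ConsistencyLevel.QUORUM);
        insert.setSerialConsistencyLevel(ConsistencyLevel.SERIAL);
        session.execute(insert);
    }
}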
We have a daily batch job executing an Oracle PL/SQL function. The Quartz scheduler invokes a Java program, which makes a call to the PL/SQL function. This function deletes data older than 6 months from 4 tables and then commits the transaction.
This batch job was running successfully in the test environment but started failing when new data was loaded into the tables two weeks ago (the code is supposed to go into production this week). Earlier the number of rows in each table was no more than 0.1 million; now it is 1 million in three of the tables and 2.4 million in the other.
After running for 3 hours, we get an error in Java (written in the log file): "...Connection reset; nested exception is java.sql.SQLException: Io exception: Connection reset....". When the row counts on the tables were checked, it was clear that no record had been deleted from any of them.
Is it possible in an Oracle database for the PL/SQL procedure/function to be automatically terminated/killed when the connection times out and the invoking session is no longer active?
Thanks in advance,
Pradeep.
The PL/SQL won't be terminated for being inactive, since by definition it isn't inactive: it is still doing something. It won't be generating any network traffic back to your client, though.
It appears something at the network level is causing the connection to be terminated. This could be a listener timeout, a firewall timeout, or something else. If it's consistently after three hours then it will almost certainly be a timeout configured somewhere rather than a network glitch, which would be more random (and possibly recoverable).
When the network connection is interrupted, Oracle will notice at some point and terminate the session. That will cause the PL/SQL call to be terminated, and that will cause any work it has done to be rolled back, which may take a while.
3 hours seems a long time for your deletes, though, even for a few million records. Perhaps you're deleting inefficiently, with row-by-row deletes inside your procedure. That doesn't really help you here, of course. It might be worth pointing out that your production environment might not have whatever setting is killing your connection, or might have a shorter timeout, so even reducing the runtime might not make it bullet-proof in live. You probably need to find the source of the timeout and check the equivalent in the live environment to try to pre-empt similar problems there.
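If it helps, a common way to make such purges cheaper is to delete in bounded chunks, committing between them, so no single statement runs for hours. A rough JDBC sketch follows; the table and column names are hypothetical, the same pattern works inside PL/SQL, and note that it changes the all-or-nothing semantics of the original single commit:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
// Hypothetical chunked purge: at most 10,000 rows per statement, with a
// commit after each chunk to keep undo small (assumes autocommit is off).
static void purgeOldRows(Connection conn) throws SQLException {
    String sql = "DELETE FROM audit_log"
            + " WHERE created_on < ADD_MONTHS(SYSDATE, -6) AND ROWNUM <= 10000";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        int deleted;
        do {
            deleted = ps.executeUpdate();
            conn.commit();
        } while (deleted > 0);
    }
}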