BigQuery Timeout Errors - google-api

I am trying to insert data into BigQuery tables and the request fails with this message:
Post URL: read tcp IP_ADD:22465->IP_ADD:443: read: connection timed out
Could someone explain what exactly is timing out? Retrying does not fix the problem.

Related

Duplicate key value when retrying SQL insert after quorum lost

I'm currently testing failure scenarios using 3 CockroachDB nodes.
Using this scenario:
Insert records in a loop
Shut down 2 nodes out of 3 (to simulate quorum loss)
Wait long enough for the PostgreSQL JDBC driver to throw an IOException
Restart one node to bring back quorum
Retry the previously failed statement
I then hit the following exception
Cause: org.postgresql.util.PSQLException: ERROR: duplicate key value (messageid)=('71100358-aeae-41ac-a397-b79788097f74') violates unique constraint "primary"
This means that the insert succeeded on the first attempt (the one from which I got the IOException) once the quorum became available again. The problem is that I'm not aware of it.
I cannot assume that a "duplicate key value" exception is caused by an application logic issue. Are there any parameters I can tune so the underlying statement rolls back before the IOException? Or is there a better approach?
Tests were conducted using
CockroachDB v1.1.5 (3 nodes)
MyBatis 3.4.0
PostgreSQL driver 42.2.1
Java 8
There are a couple of things that could be happening here.
First, if one of the nodes you're killing is the gateway node (the one your
Java process is connecting to), it could just be that the data is being
committed, but the node is dying before it's able to send the confirmation back
to the client. In this case, there's not much that can be done by CockroachDB
or any other database.
The more subtle case is where the nodes you're killing are nodes besides
the gateway node. That is, where the node you were talking to sent you back an
error, despite the data being committed successfully. The problem here is that
the data is committed as soon as it's written to raft, but it's possible that
if the other nodes have died (and could come back up later), there's no way for
the gateway node to know whether they have committed the data that it asked
them to. In situations like this, CockroachDB returns an "ambiguous result error".
I'm not sure how jdbc exposes the specifics of the errors returned to the
client in cases like this, but if you inspect the error itself it should say
something to that effect.
Ambiguous results in CockroachDB are briefly discussed in its Jepsen analysis; see also this page in the CockroachDB docs for information on the kinds of errors that can be returned.
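For reference, here is a minimal JDBC sketch of inspecting the error. It assumes the ambiguous result is surfaced with SQLSTATE 40003 (statement_completion_unknown), which is what the CockroachDB docs list for ambiguous results; the table name and the handling shown are placeholders for illustration, not a recommended recipe.

import java.sql.*;

public class AmbiguousResultCheck {
    // Illustration only: distinguish an ambiguous result from an ordinary failure
    // by looking at the SQLSTATE carried by the exception.
    static void insertMessage(Connection conn, String messageId) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO messages (messageid) VALUES (?)")) {
            ps.setString(1, messageId);
            ps.executeUpdate();
        } catch (SQLException e) {
            if ("40003".equals(e.getSQLState())) {
                // Ambiguous result: the row may or may not have been committed.
                // Check whether the row exists (or use an idempotent retry)
                // instead of blindly re-running the INSERT.
            } else {
                throw e;
            }
        }
    }
}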

How to increase the timeout for a ReQL query in RethinkDB

By now I have 1 million records in my table. When I try to add a new column/variable to the table, it shows a timeout error. I even tried to limit the data intake, but that didn't help. Can anyone tell me how to tackle it? Any help would be appreciated.
e: HTTP ReQL query timed out after 300 seconds in:
r.table("interestdata").update({"pick": 0});
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Thanks in advance!
From the RethinkDB documentation, we can increase the connection timeout using JavaScript, Python, etc.
r.connect({
    host: 'localhost',
    port: 28015,
    db: 'test',
    timeout: 600   // timeout in seconds
})
This worked for me.

BizTalk send port returns ORA-01013: user requested cancel of current operation

I have an application that inserts many rows into an Oracle database. The send port returns "ORA-01013: user requested cancel of current operation"
The send port is "WCF-Custom" using OracleDbBinding for the connection to the database.
I had the same issue in the past. My problem was using a "WCF-Custom" port with OracleDBBinding to invoke an Oracle PL/SQL procedure. That procedure was very slow to respond, and eventually I received the error "ORA-01013: user requested cancel of current operation".
My issue was resolved by changing the procedure. I think the error was produced by the "ReceiveTimeout" property of the send port. This property is documented as "Specifies the WCF message receive timeout. Essentially, this means the maximum amount of time the adapter waits for an inbound message." I suspect that when the ReceiveTimeout is reached, the WCF-Custom adapter cancels the operation and Oracle then reports the error.
What's happening:
When inserting large numbers of records, WCF makes several parallel requests to insert the data. The default 'UseAmbientTransaction' setting wraps all the inserts within a single transaction. If one of the inserted rows breaks a database constraint, it tries to roll back the transaction for all the inserts. The transactions all return the ORA-01013 exception and the real cause of the failure is lost.
Solution:
On the send port's 'Transport Advanced Options' tab, select the 'Ordered Delivery' check box. This prevents the parallel insert operations, and the real cause of the error will be logged.

DB2 Query Timeout issue - How to handle

This may have been asked numerous times, but none of the existing answers have helped me so far.
Here's some history:
QueryTimeOut: 120 secs
Database: DB2
App Server: JBoss
Framework: Struts 2
I have one query which fetches around a million records. Yes, we need to fetch them all at once for caching purposes; sadly, the design can't be changed.
Now, we have 2 servers, Primary and DR. On the DR server, the query executes within 30 secs, so there is no timeout issue there. But on the Primary server it is timing out for some unknown reason. Sometimes it times out in rs.next() and sometimes in pstmt.executeQuery().
All DB indexes, connection pools etc. are in place. The explain plan shows there are no full table scans either.
My Analysis:
Since the query itself is not the issue here, could the problem be network delay?
How can I find the root cause of this timeout? How can I make sure there is no connection leakage? (All connections are closed properly.)
Is there any way to recover from the timeout and re-execute the query with an increased timeout value, e.g. pstmt.setQueryTimeout(600)? <- Note that this has no effect whatsoever; I don't know why. (A rough sketch of what I mean is below.)
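Something along these lines is what I have in mind, as a simplified sketch only; the SQL text, timeout values and connection handling are placeholders:

import java.sql.*;

public class RetryWithLargerTimeout {
    // Sketch: run the query with the current timeout and, if it times out,
    // retry it once with a larger driver-side timeout.
    static void fetchAll(Connection conn) throws SQLException {
        String sql = "SELECT id, payload FROM big_table";   // placeholder query
        try {
            runQuery(conn, sql, 120);                        // current 120-second limit
        } catch (SQLTimeoutException e) {                    // some drivers throw a plain SQLException instead
            runQuery(conn, sql, 600);                        // retry once with a larger timeout
        }
    }

    static void runQuery(Connection conn, String sql, int timeoutSeconds) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setQueryTimeout(timeoutSeconds);              // timeout in seconds
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // ... put the row into the cache ...
                }
            }
        }
    }
}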
Appreciate any inputs.
Thank You!

Preventing the Oracle connection from being lost

How can I prevent the connection to the Oracle server from being lost if it is kept idle for some time?
If you use JDBC 4.0 or newer, there is an isValid() method available on a connection that lets you check whether the connection is still usable; if not, get a new connection (reconnect) and then execute your SQL.
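A minimal sketch of that check with plain JDBC; the URL and credentials are placeholders:

import java.sql.*;

public class KeepConnectionUsable {
    private Connection conn;

    // Return a usable connection, reconnecting if the current one is no longer valid.
    Connection getConnection() throws SQLException {
        // isValid(5) waits up to 5 seconds for the validity check to complete.
        if (conn == null || !conn.isValid(5)) {
            conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/mydb", "user", "password");   // placeholders
        }
        return conn;
    }
}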
One possible way that I know of to keep the database connection from being lost is to send a dummy query after the threshold time. By threshold I mean the time after which the connection to the database is expected to become idle or get dropped.
Something like (sketched in Java, keeping the same idea):
long pingIntervalMillis = 60 * 1000;   // Ping_time_to_DB = 60 seconds
if (System.currentTimeMillis() - lastPingTime > pingIntervalMillis) {
    // send a dummy query like "select 1 from dual"
    try (java.sql.Statement st = conn.createStatement()) {
        st.execute("select 1 from dual");
    }
    lastPingTime = System.currentTimeMillis();
}

Resources