UCanAccess: a weird exception I cannot handle

I started using UCanAccess recently for connecting to an Access database (obviously), and it all worked fine until now.
I'm inserting about 500,000 rows into the database; when I reach about 400,000, the program stops. The problem is I can't see the exception! All I see is this:
at com.healthmarketscience.jackcess.impl.UsageMap$ReferenceHandler.createNewUsageMapPage(UsageMap.java:763)
at com.healthmarketscience.jackcess.impl.UsageMap$ReferenceHandler.addOrRemovePageNumber(UsageMap.java:747)
at com.healthmarketscience.jackcess.impl.UsageMap.removePageNumber(UsageMap.java:337)
at com.healthmarketscience.jackcess.impl.PageChannel.allocateNewPage(PageChannel.java:354)
at com.healthmarketscience.jackcess.impl.TempPageHolder.setNewPage(TempPageHolder.java:115)
... (the same five frames repeat for the rest of the trace)
I can't see where the exception began!
It happened to me once already, but after running "Compact and Repair" it worked again...
Now it does not.
Does anyone know what's going on?

Check the following three things.
First, if you are using an IDE, increase the console/log buffer so that you can scroll back to the start of this exception. That may not help much by itself, but it is better than looking somewhere in the middle. (The endlessly repeating frames look like deep recursion, so the root may simply be a StackOverflowError whose first lines scrolled away.)
Second, how large is the DB file? Access databases are capped at 2 GB, and approaching that limit can produce unconventional errors like this one. Check that (see the sketch after this list).
Third, do not open or otherwise touch that particular DB while updating/inserting through UCanAccess. Doing so causes this kind of issue most of the time.
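To rule out the size limit and keep memory pressure down during a 500,000-row load, something like the following minimal sketch may help (the file path, table, and column names are placeholders; it assumes the UCanAccess jars are on the classpath; memory=false is the UCanAccess connection property that keeps its internal HSQLDB mirror on disk instead of in RAM):
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BulkInsertCheck {
    public static void main(String[] args) throws Exception {
        File dbFile = new File("C:/data/mydb.accdb");  // placeholder path
        // Access files are hard-capped at 2 GB; inserts near the cap fail in odd ways
        System.out.printf("DB file size: %.2f GB%n",
                dbFile.length() / (1024.0 * 1024 * 1024));

        String url = "jdbc:ucanaccess://" + dbFile.getAbsolutePath() + ";memory=false";
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO myTable (myColumn) VALUES (?)")) {  // placeholder table/column
                for (int i = 0; i < 500_000; i++) {
                    ps.setString(1, "row " + i);
                    ps.addBatch();
                    if ((i + 1) % 10_000 == 0) {  // flush and commit periodically
                        ps.executeBatch();
                        conn.commit();
                    }
                }
                ps.executeBatch();  // flush the remaining rows
                conn.commit();
            }
        }
    }
}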

Related

Why is stmt.Close slow?

We implemented a ClickHouse driver against sql/driver, with the ConnPrepareContext interface, which uses a Stmt to do the real query work.
For some queries, our driver spends most of its time in stmt.Close(). I can find each query's execution time in ClickHouse's system.query_log.
For 10 out of 4,000 queries in our production environment, I found they executed very fast in ClickHouse but took far too long in the driver.
Take one query as an example: it ran for 100 ms in ClickHouse and returned 0 rows, but the driver spent 10 s in the Close() method.
This issue is not easy to reproduce, so I want to ask for help. If this were a database/sql package issue, it should emerge not only in my driver but in other drivers as well. Has anyone met the same issue?
Can you give some suggestions on how to debug this?
I found some hints.
In one sql.DB.QueryContext() call, our driver will:
step 1: call sql.DB.PrepareContext() to prepare a sql.Stmt;
step 2: call sql.Stmt.QueryContext(); and
step 3: run sql.Stmt.finalClose() as a result of Rows.Close().
In both step 1 and step 2, the sql.Stmt will acquire a sql.driverConn and append it to sql.Stmt.css.
In step 3, the sql.Stmt must be removed from each sql.driverConn in sql.Stmt.css, which requires taking each sql.driverConn's lock. This causes contention.
In our case, we don't care much about the protection that prepared statements provide, so we will implement the QueryerContext interface from Go's sql/driver to avoid the unnecessary PrepareContext call.
See http://go-database-sql.org/prepared.html

ODBC: ERROR [HY000] [Simba][BigQuery] (115) Operation Timeout. PollJob

I've got a problem when trying to create a new visual for a query (the source is Google BigQuery).
I'm using a query similar to one that is already working, but with an extra filter, which makes the new query take a little longer.
It runs for about 10 minutes and then returns this error message:
Details: "ODBC: ERROR[HY000] [Simba][BigQuery] (115) Operation timeout.PollJob"
So I added a time-limit parameter, "connect timeout=10000", but that doesn't work either:
driver={Simba ODBC Driver for Google BigQuery};oauthmechanism=1;refreshtoken=TOKEN;catalog=mm-datamart-kd;connect timeout=10000
If someone has hit this error or has a solution for it, I would be grateful for your advice.
Thanks!

Loading bulk data into neo4j with Neography results in connection error

I'm trying to load a lot of bulk data (probably around 1-2 GB) into neo4j with a simple Ruby script and Neography. My code mostly consists of a long series of create_node and create_relationship calls.
It seems to work fine, but after around 5,000 create calls I hit this error:
/home/earlz/.gem/ruby/2.1.0/gems/excon-0.44.3/lib/excon/socket.rb:127:in `connect_nonblock': Cannot assign requested address - connect(2) for 127.0.0.1:7474 (Errno::EADDRNOTAVAIL) (Excon::Errors::SocketError)
How do I fix this? I've tried increasing the HTTP timeouts and such, but that didn't help.
It sounds like your script is opening so many connections that the OS ran out of ephemeral ports to pick from. Try widening the local port range (as root):
echo "32768 61000" >/proc/sys/net/ipv4/ip_local_port_range

Client application hangs when inserting into table on Oracle using ArrayBinding

Here is our environment:
.Net version: 4.5
Database: Oracle 12.1.0.2 (odp.net)
We are using the LLBLGen Pro "Adapter" paradigm, but I don't think that has anything to do with the issue.
LLBLGen Pro version: 4.1
Llbl Gen Pro Runtime: 4.1.13.1213
When we do an insert (always into different tables, which we use for a short period and then remove), we use the following code:
int numRecords = strings.Count();
var insertCmd = "insert into " + tableName + " (StringField) values (:StringField)";
var oracleCommand = new OracleCommand();
oracleCommand.CommandText = insertCmd;
oracleCommand.CommandType = CommandType.Text;
oracleCommand.BindByName = true;
oracleCommand.ArrayBindCount = numRecords;
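// array binding: the single parameter below carries an array of numRecords values,
// so the whole batch goes to the server in one execute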
oracleCommand.Parameters.Add(":StringField", OracleDbType.NVarchar2, strings.ToArray(), ParameterDirection.Input);
// this is an LLBL adapter. Like I said, I think the issue is below the LLBL layer.
this.adapter.ExecuteActionQuery(new ActionQuery(oracleCommand));
When the database is getting hit hard with several of these inserts in parallel, we get the following error, and the insert call never returns from the database:
WG_6.Index_586.TVD: An exception was caught during the execution of an action query: ORA-24381: error(s) in array DML
ORA-12592: TNS:bad packet
ORA-12592: TNS:bad packet
ORA-12592: TNS:bad packet
ORA-12592: TNS:bad packet
ORA-03111: break received on communication channel
ORA-03111: break received on communication channel
ORA-03111: break received on communication channel
On the database side, using Toad's session browser, I can see that the "Current Statement" is correct:
insert into schemaX.tableY(StringField) values(:Stringfield)
Under the Waits tab in Toad, there is the following message:
"Waiting for SQL*Net more data from client - waited X hundred seconds, so far", and X keeps incrementing until we hit our database timeout.
We tried batches of 1 million, which gave us the best performance for our scenario, but then this hanging issue arose. I then decreased the ArrayBindCount to 500K, 100K, 50K, 10K, and finally 5K. Only at 5K did it stop happening.
A couple of notes:
This happens more frequently when the database is on a different physical machine from the client; when using a local VM, it rarely happens. The network we are using is generally very reliable, with no other noted issues.
From the error message (ORA-12592: TNS:bad packet), it seems the issue might be on the client, perhaps related to code in the Oracle.DataAccess.Client (ODAC) DLL.
My next steps for troubleshooting are to use Reflector to debug into the ODAC code, and to get more reliable client-side tracing while forcing this error to occur.
I had the same situation when trying to insert into an Oracle table using array binding.
Using a smaller number for oracleCommand.ArrayBindCount reduced the frequency of the errors (the same ones as yours) but did not eliminate them.
The solution was to use the managed data access driver. I suggest you get the latest ODP.NET, add a reference to Oracle.ManagedDataAccess, and change to:
using Oracle.ManagedDataAccess.Client;
using Oracle.ManagedDataAccess.Types;
This fixed the problem in my case, with no need to change anything else in the code.

Hibernate - Getting exception: maximum number of processes (550) exceeded

I am using Hibernate 3 along with Spring. My Hibernate configuration is as follows:
hibernate.dialect=org.hibernate.dialect.Oracle8iDialect
hibernate.connection.release_mode=on_close
But after starting the application, even if only one user accesses it, I get this exception:
ORA-00020: maximum number of processes (550) exceeded
This is the stack trace:
Caused by: java.sql.SQLException: ORA-00020: maximum number of processes (550) exceeded
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:799)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1038)
at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:839)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1133)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3329)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:76)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1953)
at org.hibernate.loader.Loader.doQuery(Loader.java:802)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:274)
at org.hibernate.loader.Loader.loadEntity(Loader.java:2037)
I have set the connection pool timeout to 5000. I have also tried to find the cause, and learned that the release mode may affect how DB resources are closed, but I couldn't find an exact solution for it.
Please help.
Thanks in advance.
This is a database error, not an application error, so you need to go to the database to solve it. 550 processes is a lot more than it sounds, so either someone has gone insane or you have a lot of inactive processes running.
The best way to find out is to query the v$session view (or gv$session if you're using RAC) and look at the STATUS column.
Take careful note of where all these sessions are coming from; the OSUSER, TERMINAL, and PROGRAM columns will probably be the most useful. It might even be worth saving this information into a temporary table, as proof and a record afterwards. Then, after checking that you're not going to break anything, and with your DBAs if you have any, kill all the inactive sessions, simultaneously or one at a time.
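For example, here is one way to see which programs own the sessions, sketched as a quick JDBC one-off (the connection details are placeholders, and you could just as well run the same query from Toad or SQL*Plus):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SessionAudit {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");  // placeholders
             Statement st = conn.createStatement();
             // group the sessions so the worst offender floats to the top
             ResultSet rs = st.executeQuery(
                 "SELECT osuser, program, status, COUNT(*) AS sessions "
                 + "FROM v$session GROUP BY osuser, program, status "
                 + "ORDER BY COUNT(*) DESC")) {
            while (rs.next()) {
                System.out.printf("%-20s %-30s %-10s %d%n",
                        rs.getString("osuser"), rs.getString("program"),
                        rs.getString("status"), rs.getInt("sessions"));
            }
        }
    }
}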
That will remove the error, but if it has occurred once it can occur again, so you need to solve the underlying cause. Either:
You've got a lot of people using the database.
There is an application or program somewhere that is not closing its sessions after it's finished (see the sketch below).
Someone is connecting in the middle of a loop.
Whichever reason it is, you need to track it down and correct it. I'd start with the program or terminal from v$session that has the largest number of processes.
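If the culprit turns out to be the second cause, the usual Hibernate 3 fix is simply to guarantee that every opened session is closed. A minimal sketch (sessionFactory here stands for whatever your Spring configuration exposes):
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class ClosedSessionExample {
    private final SessionFactory sessionFactory;

    public ClosedSessionExample(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void doWork() {
        Session session = sessionFactory.openSession();
        try {
            // ... queries, saves, etc. ...
        } finally {
            // returns the underlying JDBC connection to the pool;
            // leaking these is what exhausts Oracle's process limit
            session.close();
        }
    }
}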
