I am running a long query (with a lot of subqueries) that uses ROWNUM from VB6, and it fails with ORA-03113 "end-of-file on communication channel" after approximately 1 minute. The query runs fine from Toad. When the same query is run from VB6 without ROWNUM, it works fine. It is also a parameterised query; if I remove the parameters and put the values directly into the query, it also runs fine from VB6.
This query was running fine a few days ago. Now it is not.
I tried increasing the connection timeout, but I still get the error after 1 minute. Could anyone suggest what the problem could be?
This often indicates that an ORA-00600 internal error has been thrown on the server. Check the alert log and trace files.
ORA-03113 on the client side is one of the Oracle catch-all errors. You need to see whether any additional error accompanies it to get an idea of the problem. The problem can be server side if an ORA-00600 accompanies the ORA-03113 with the same sid/serial as the session. Check the server logs for both the ORA-03113 and any additional errors. If there are no server-side errors, then the error is client side. Check for any network/connection-related issue, but since the query works without the ROWNUM, the network is probably not the cause. That means it's probably a client-side bug, so the next step is to enable client-side tracing of the connection, see whether you can reproduce the issue consistently, and capture a trace you can then use to raise a case with Oracle Support and find out whether there is a bug that can be patched/fixed on the client.
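To enable that client-side trace, the usual route is the sqlnet.ora file read by the Oracle client on the VB6 machine; a minimal sketch, with a placeholder trace directory (SUPPORT-level traces grow quickly, so turn the setting back off once you have captured the failure):

# client-side SQL*Net tracing -- the directory below is just a placeholder
TRACE_LEVEL_CLIENT = SUPPORT
TRACE_DIRECTORY_CLIENT = C:\oracle\net_trace
TRACE_UNIQUE_CLIENT = ON
TRACE_TIMESTAMP_CLIENT = ON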
I am connecting to Oracle 12 in Oracle Cloud from Power BI Desktop on Windows Server 2016.
The Oracle client is installed and the TNS file is configured.
Oracle is hosted by a vendor, so my only access is to query the database directly.
In Power BI, when using an Oracle connection, I get ORA-03113 errors about 50% of the time when refreshing data. There is no discernible pattern to when the error appears.
If I connect via a System ODBC connection set up in Windows, I don't get any issues or errors, although the data load is a bit slower.
I would appreciate ideas on what may be causing this issue or what to check to help get more information.
I'm afraid your issue needs some deeper analysis, as ORA-03113 can have various causes, but typically it means that the 'oracle' server process has terminated unexpectedly on an existing connection. You should try to isolate the SQL command that is executing when the error occurs. That can be done either by checking the trace files on the server or by using a SQL*Net trace if you don't have access to the server. If a statement can be isolated that consistently raises the ORA-03113 error, it can be analysed further (execution plan, triggers, etc.), or it may be best to raise an SR so Oracle Support can work on the issue. If you have access to Oracle Support, you can find more information on ORA-03113 troubleshooting in MOS Doc ID 1506805.1. Let me know if I can help you any further.
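If you can get onto the server (or ask someone who can), the alert log and trace file locations are easy to find with a short query against v$diag_info; a minimal sketch, assuming your account is allowed to read that view:

SELECT name, value
  FROM v$diag_info
 WHERE name IN ('Diag Trace', 'Diag Alert', 'Default Trace File');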
I use DataGrip to move some data from a MySQL installation to another PostgreSQL database.
That worked like a charm for 3 other tables. The next one, over 500,000 rows, could not be imported.
I use the function "Copy Table To... (F5)".
This is the log.
16:28 Connected
16:30 user#localhost: tmp_post imported to forum_post: 1999 rows (1m 58s 206ms)
16:30 Can't save current transaction state. Check connection and database settings and try again.
For other errors, like wrong data types or null data in NOT NULL columns, a very helpful log is created. But not this time.
The problem also occurs when using the database plugin for IntelliJ-based IDEs, not only DataGrip.
The simplest way to solve the issue is just to add "prepareThreshold=0" to your connection string as in this answer:
jdbc:postgresql://ip:port/db_name?prepareThreshold=0
Or, for example, if you are using several settings in the connection string:
jdbc:postgresql://hostmaster.com:6432,hostsecond.com:6432/dbName?targetServerType=master&prepareThreshold=0
It's a well-known problem when connecting to the PostgreSQL server via PgBouncer rather than a problem with IntelliJ itself. When loading massive amounts of data into the database, IntelliJ splits the data into chunks and loads them sequentially, each time executing the query and committing the data. By default, the PostgreSQL JDBC driver switches to server-side prepared statements after 5 executions of a query.
The driver uses server side prepared statements by default when
PreparedStatement API is used. In order to get to server-side prepare,
you need to execute the query 5 times (that can be configured via
prepareThreshold connection property). An internal counter keeps track
of how many times the statement has been executed and when it reaches
the threshold it will start to use server side prepared statements.
Your PgBouncer probably runs with transaction pooling, and the latest version of PgBouncer doesn't support prepared statements with transaction pooling.
How to use prepared statements with transaction pooling?
To make prepared statements work in this mode would need PgBouncer to
keep track of them internally, which it does not do. So the only way
to keep using PgBouncer in this mode is to disable prepared statements
in the client
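For context, transaction pooling is the pool_mode setting in pgbouncer.ini; a minimal sketch with hypothetical host and database names, shown only to illustrate the mode being discussed:

[databases]
db_name = host=hostmaster.com port=5432 dbname=db_name

[pgbouncer]
listen_addr = *
listen_port = 6432
; in transaction pooling each transaction may be served by a different
; server connection, which is why named server-side prepared statements break
pool_mode = transaction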
You can verify that the issue really is caused by prepared statements going through PgBouncer by looking at the IntelliJ log files. Go to Help -> Show Log in Explorer and search for the "org.postgresql.util.PSQLException: ERROR: prepared statement" exception.
2022-04-08 12:32:56,484 [693272684] WARN - j.database.dbimport.ImportHead - ERROR: prepared statement "S_3649" does not exist
java.sql.SQLException: ERROR: prepared statement "S_3649" does not exist
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgConnection.executeTransactionCommand(PgConnection.java:755)
at org.postgresql.jdbc.PgConnection.commit(PgConnection.java:777)
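If you hit the same error from your own code using the PostgreSQL JDBC driver (rather than from DataGrip), the property can also be set programmatically; a minimal Java sketch with placeholder host, database and credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class NoServerSidePrepare {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "db_user");         // placeholder credentials
        props.setProperty("password", "db_password");
        // prepareThreshold=0 disables server-side prepared statements, so
        // PgBouncer in transaction pooling mode never sees a named statement
        props.setProperty("prepareThreshold", "0");

        // placeholder host, port and database name
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:6432/db_name", props)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}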
I'm currently troubleshooting a VB6 application that sporadically comes up with the following error:
[Oracle][ODBC][Ora]ORA-01013: user requested cancel of current operation
All of the research I've done on this error states that it is either an actual request for cancellation by the user or a timeout. It can't be a request for cancellation because the input is coming from an automated source, so it must be a timeout. One thing I read online was to uncheck the query timeout checkbox in the DSN configuration box, but my program uses a DSN-less connection to the database, which is an Oracle 10g database.
There are several queries in this program, but it always fails on one query in particular. However, I can't reproduce the error in a test environment using all of the same input to the program that caused the error in the first place.
A co-worker of mine suggested doing a rollback after each query, even though the queries are read-only, because some kind of buffer might be getting filled up or something like that, but this didn't work. At this point I don't even know how to continue troubleshooting it because I can't reproduce the error. If someone could give me any idea of what is going on and how to fix the problem, I'd greatly appreciate it. Thanks in advance!
All of the options that you can choose when setting up a DSN can be specified in the connection string if you are using a DSN-less connection. If you want to disable query timeouts, you would add
QTO=F
to the connection string. So your new connection string would be something like
DRIVER={Oracle ODBC Driver};UID=Kotzwinkle;PWD=whatever;DBQ=instl_alias;QTO=F;
Do you know of any reason for getting the following error when I'm running a few Oracle Reports?
"FRM-40735: ON-ERROR trigger raised unhandled exception ORA-03114"
This happens sometimes to a few users.
ORA-03114: not connected to ORACLE is an error with several possible causes. As it suggests, it means your client (Forms in this case) has been disconnected from the database.
It's possible that the database or the listener has shut down. Or perhaps a problem with the client has caused the database to disconnect you. There may well be some messages in the database alert log.
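If you (or your DBA) can log on to the database server, the quickest way to look at the alert log is usually the adrci utility; a minimal sketch, where the homepath is just a placeholder for whatever SHOW HOMES lists for your instance:

adrci
adrci> show homes
adrci> set homepath diag/rdbms/orcl/orcl
adrci> show alert -tail 100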
ORA-03114 is "not connected to ORACLE" - I would begin by troubleshooting this issue first.
Either you got disconnected in some way (perhaps an idle connection timeout), or someone shut down the DB while you were connected. Either way, it means you are disconnected from the database.
Two related applications use a function in a package in several queries to return some data as CSV. The column being selected and concatenated is a CLOB field and can contain HTML, special characters, etc. The applications have few users and so are not heavily used. One is a Flex application that consumes Oracle HTTP services, and the other is an ASP.NET application that uses ODP.NET. The applications are really one integrated application with hyperlinks to each other.
Yesterday, I received several notifications of a strange error:
ORA-01031: insufficient privileges ORA-06512
The line number in the package in the error details indicates that the error was caused by the function being used in the select clause. It would occur when called by either application about 75% of the time.
Am I correct that an ORA-06512 occurred in the function that then caused an ORA-01031 insufficient privileges error? Unfortunately, ORA-06512 is a very generic error and doesn't tell me anything. And why would it cause an insufficient privileges error? The Oracle user accounts being used by both applications have the execute privilege on the package that contains the function.
Regarding the function... it has been used for about 2 years in production without any issue. Also, when I imported the data to QA yesterday and tested it, no error would occur, no matter how many times I hammered the server with requests. But in production, the error would occur about 75% of the time with exactly the same parameters.
The DBA tried to help me with a trace, but we could not find the error message in the trace files.
Today, everything is back to normal in production. Even if I hammer the server with requests the error will stubbornly refuse to occur.
What caused this very strange behaviour yesterday? Do any of the gurus here have any ideas?
EDIT: I just realized one important detail. The column in the table that is being selected and concatenated into CSV by the function is a CLOB.
If the client applications were running "SELECT clob_to_csv(clob_col) FROM ..." and it returned an insufficient-privileges error SOMETIMES, then it is probably something the function is trying to do, rather than the select statement not having sufficient privilege to execute the function.
It's not quite clear what it might do that would require a privilege. Does it use a file (UTL_FILE) or a network connection / web service?
It could be some sort of odd data (a very large CLOB, perhaps).
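If you want to double-check what the application accounts actually hold at the moment the error occurs, a quick look at the data dictionary can help; a minimal sketch, run while logged in as the application user:

-- system privileges currently available to this session (including via enabled roles)
SELECT * FROM session_privs ORDER BY privilege;

-- object privileges where this user is the grantee, grantor, or owner
SELECT grantee, owner, table_name, privilege
  FROM user_tab_privs
 ORDER BY owner, table_name;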