When using (Oracle) JDBC addBatch for an INSERT statement containing a BLOB, when should we free the blob: after the addBatch or after the executeBatch?
You should free them only after execution. The JDBC 4.3 specification (section 14.1.4) says:
The sets of parameter values together with their associated parameterized update commands can then be sent to the underlying data source engine for execution as a single unit.
In other words, the blob will be used on execution of the batch; freeing it earlier would mean it is no longer available to be used at the time of execution.
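To make the ordering concrete, here is a minimal sketch. The docs table, its columns, and the payloads parameter are illustrative, not from the question:

import java.sql.*;
import java.util.*;

public class BlobBatchInsert {
    // Free each Blob only after executeBatch(), per JDBC 4.3 section 14.1.4.
    static void insertAll(Connection conn, byte[][] payloads) throws SQLException {
        List<Blob> pending = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO docs (id, body) VALUES (?, ?)")) {
            for (int i = 0; i < payloads.length; i++) {
                Blob blob = conn.createBlob();
                blob.setBytes(1, payloads[i]);
                ps.setInt(1, i);
                ps.setBlob(2, blob);
                ps.addBatch();
                pending.add(blob);   // keep the blob alive until the batch runs
            }
            ps.executeBatch();       // the driver reads the blobs here
        } finally {
            for (Blob blob : pending) {
                blob.free();         // only now is it safe to release them
            }
        }
    }
}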
I am new to Oracle Database. Does a direct path load perform SQL processing such as syntax checks and semantic checks? If the direct path engine bypasses the SGA and uses the PGA to load the data in parallel, faster, above the high water mark of the object, where does the SQL processing happen? In the PGA?
Thanks in advance!
I'm writing simple database migration code in Clojure using the new seancorfield/next.jdbc library.
How do I execute several SQL statements at once? The use case is that I have a SQL file containing the query code for migrating from one version to the next. next.jdbc/execute! only executes one statement, as designed.
Whether you can execute multiple statements in a single JDBC operation is database-dependent. Some databases allow multiple statements in a single operation, separated by semicolons; if the JDBC driver supports it, next.jdbc will also support it.
If your JDBC driver does not support it, you'll need to make multiple execute! calls. Some databases allow you to wrap multiple DDL operations in a transaction (though some of them ignore the transaction and commit each DDL operation separately anyway), while others explicitly disallow a transaction around DDL operations.
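Since next.jdbc is a thin layer over plain JDBC, the fallback is the same at any level: split the migration file into individual statements and issue them one at a time. A rough sketch of that idea in plain JDBC terms (Java here; the naive semicolon split is an assumption and will break on semicolons inside string literals or stored-procedure bodies, so a real migration tool should parse properly):

import java.nio.file.*;
import java.sql.*;

public class Migrate {
    // Run each statement of a migration script separately, for drivers that
    // reject multiple statements in one execute call.
    public static void run(Connection conn, Path sqlFile) throws Exception {
        String script = Files.readString(sqlFile);
        try (Statement st = conn.createStatement()) {
            for (String sql : script.split(";")) {
                if (!sql.isBlank()) {
                    st.execute(sql.trim());
                }
            }
        }
    }
}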
This is what I see in the Sybase ASE documentation:
"It instructs Adaptive Server not to save a plan for this procedure. A new plan is created each time the procedure is executed."
So this means ASE does plan the stored procedure code in order to execute it, but what about Oracle? Does Oracle save plans at the stored procedure level?
Oracle does not save plans at the stored procedure level.
Oracle stores SQL plan information in a memory structure called the shared pool, which is part of the System Global Area (SGA). Plans are created and modified as needed, at the time the queries are executed. With the default settings, plans will automatically adjust to things like data changes and bind value changes. You don't need to worry about plans changing, or not changing, when compiling objects.
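You can observe the per-statement caching yourself: cursors and their plans live in the shared pool keyed by sql_id and plan_hash_value, regardless of which procedure issued the SQL. A hypothetical JDBC sketch (the LIKE filter is illustrative, and querying V$SQL requires the appropriate privilege):

import java.sql.*;

public class PlanPeek {
    // List cached cursors whose text matches a pattern; a procedure's
    // statements show up here individually, not as one per-procedure plan.
    static void showPlans(Connection conn) throws SQLException {
        String q = "SELECT sql_id, plan_hash_value, executions "
                 + "FROM v$sql WHERE sql_text LIKE ?";
        try (PreparedStatement ps = conn.prepareStatement(q)) {
            ps.setString(1, "%FROM orders%");   // illustrative filter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s  plan=%d  execs=%d%n",
                            rs.getString("sql_id"),
                            rs.getLong("plan_hash_value"),
                            rs.getLong("executions"));
                }
            }
        }
    }
}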
I just created a stored procedure in an MS SQL database using TOAD.
What it does is accept an ID that some records are associated with, then insert those records into a table.
The next part of the stored procedure uses the ID input to search the table where the items were inserted, and returns them as a result set to the user, just to confirm that the information was inserted.
In TOAD, it does what is expected: it inserts data and returns the information using just the stored procedure.
In Oracle SQL Developer, however, it does the insert and ends at that. It seems not to execute the second part of the stored procedure, which is a SELECT statement.
I have a feeling this is because of the JDBC adapter. I also ask because I'm using the reporting tool Pentaho Report Designer, and it would really make things easier if I could do the two things at the same time. Pentaho Report Designer also uses JDBC adapters; not a coincidence, maybe?
But if there are other things I can tweak, I'd really appreciate it.
This is a guess, but worth considering...
There are things called "batches", which are sets of SQL statements that are all sent to the server at once and executed by the server as one set of statements, within a single server-side session. Sending a set of SQL statements to the server as a batch will often produce different results than sending them one at a time, where each statement executes in its own session.
I haven't used TOAD (or Oracle) in a while, but as I recall, it handled batches differently than the other IDE I used. If the second statement in your set relies on being in the same session as the first, and in one IDE it runs in a separate session, that might explain what is happening.
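If you can control the JDBC call directly, the standard way to drain everything a procedure produces, update counts from the INSERT and result sets from the SELECT alike, is the execute()/getMoreResults() loop. A hedged sketch; the procedure name insert_and_select is made up:

import java.sql.*;

public class ProcResults {
    // Consume every result a stored procedure call produces, in order.
    static void callAndPrint(Connection conn, int id) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{call insert_and_select(?)}")) {
            cs.setInt(1, id);
            boolean isResultSet = cs.execute();
            while (true) {
                if (isResultSet) {
                    try (ResultSet rs = cs.getResultSet()) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                } else {
                    int count = cs.getUpdateCount();
                    if (count == -1) {
                        break;   // no more results of any kind
                    }
                    System.out.println("rows affected: " + count);
                }
                isResultSet = cs.getMoreResults();
            }
        }
    }
}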
I have two schemas, A and B (Oracle 9). Schema A has a dblink to B. Schema B contains a package that I call from A. The procedures in B's package can return varying numbers of results, so I think returning a collection is the best approach.
create type B.tr_rad as object (
name varchar2(64)
,code number
,vendor number
,val varchar2(255)
,num number
);
create type B.tt_rad as varray(256) of B.tr_rad;
But from schema A I cannot use the tt_rad type, because using SQL types over a dblink is not supported. DBMS_SQL does not support cursors for this either, and creating types with the same OID is impossible.
I am thinking of using temporary tables. But firstly, it is not that good (after the remote function returns the value, the calling side must select the collection from the remote table), and secondly, I fear that working with temporary tables will slow things down.
Does anyone know an alternative way for the two sides to interact?
I've had similar problems in the past, and I came to the conclusion that Oracle's db links are fundamentally "broken" for anything but simple SQL types (especially UDTs; CLOBs may have problems, and XMLType may as well). If you can get the OID solution working, then good luck to you.
The solution I resorted to was to use a Java Stored procedure, instead of the DB Link.
Characteristics of the Java Stored Procedure:
Can return a "rich set of types", just about all of the complex types (UDT's, tables/arrays/varrays) see Oracle online documentation for details. Oracle does a much better job of marshalling complex (or rich) types from java, than from a DBLink.
Stored Java can acquire the "default connection" (runs in the same session as the SQL connection to the db - no authentication issues).
Stored Java calls the PL/SQL proc on the remote DB, and the java JDBC layer does the marshaling from the remote DB.
Stored Java packages up the result and returns the results to the SQL or PL/SQL layer.
It's a bit of work, but if you know a bit of Java, you should be able to "cut and paste" a solution together from the Oracle documentation and samples; a rough sketch follows.
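Here is the rough shape of such a Java stored procedure, loaded into schema A (e.g. with loadjava) and published through a PL/SQL call spec. The remote call pkg_rad.get_rad, the connection details, and the rad_staging hand-off table are all assumptions for illustration:

import java.sql.*;

public class RemoteRad {
    static void copyRad() throws SQLException {
        // Oracle's server-side internal driver: this is the caller's own
        // session on A, so no extra authentication is needed.
        Connection local = DriverManager.getConnection("jdbc:default:connection:");
        // A plain thin connection to B replaces the dblink, so the complex
        // types are marshalled by JDBC instead of the dblink layer.
        try (Connection remote = DriverManager.getConnection(
                     "jdbc:oracle:thin:@hostB:1521/BDB", "b_user", "b_pass");
             CallableStatement cs = remote.prepareCall("{call pkg_rad.get_rad(?)}")) {
            cs.registerOutParameter(1, Types.ARRAY, "B.TT_RAD");
            cs.execute();
            Object[] rows = (Object[]) cs.getArray(1).getArray();
            // Hand the rows back to the local session; staging them in a table
            // is one simple option, rebuilding a local TT_RAD is another.
            try (PreparedStatement ins = local.prepareStatement(
                         "INSERT INTO rad_staging (name, code) VALUES (?, ?)")) {
                for (Object row : rows) {
                    Object[] attrs = ((Struct) row).getAttributes();
                    ins.setString(1, (String) attrs[0]);
                    ins.setBigDecimal(2, (java.math.BigDecimal) attrs[1]);
                    ins.addBatch();
                }
                ins.executeBatch();
            }
        }
    }
}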
I hope this helps.
See this existing discussion
referencing oracle user defined types over dblink
An alternative interaction is to have one database with schemas A and B instead of two databases with a database link.
My solution:
On side B, I create a temporary table shaped like the collection record. On side A, I have a DBMS_SQL wrapper that calls the procedure over the dblink; that procedure writes its result collection into the temporary table. After the remote procedure completes successfully, I select the results from the remote temporary table and transform them into the local collection type.
Limitations:
1. The objects must be kept permanently synchronized on both sides.
2. The A-side procedure (the one that calls the remote procedure) cannot be used in a SQL query.
3. It is complex to use.