I'm using the latest DBeaver with Oracle 12.
I need to run several inserts into different tables that are connected by a foreign key.
When I execute multiple Oracle inserts against several tables (Alt + X), the script fails on a foreign key violation that shouldn't occur if the statements ran sequentially.
Executing the same SQL in PL/SQL Developer doesn't produce any error (this is reproducible).
It seems that the inserts aren't executed in sequence.
Can this behavior be changed?
I found a DBeaver wiki page that warns about unexpected results:
NOTE: Be careful with this feature. If you execute a huge script with a large number of queries, it might cause unexpected problems.
I found a suggestion in the discussions to put the inserts in a PL/SQL block:
ShadelessFox
It's not possible from a DBeaver perspective, but you can use PL/SQL blocks
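For example, a minimal sketch of that workaround (table and column names here are hypothetical): wrapping the inserts in one anonymous PL/SQL block makes the client send them to Oracle as a single statement, so they execute strictly in order within one call.

BEGIN
  INSERT INTO parent_table (id, name) VALUES (1, 'Parent row');
  INSERT INTO child_table (id, parent_id, name) VALUES (10, 1, 'Child row');
  COMMIT;
END;
/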
Related
I am not sure whether, in the case of stored procedures, PostgreSQL treats static SQL any differently from a query submitted as a quoted string.
When I create a stored procedure in PostgreSQL using static SQL, there seems to be no validation of the table names, columns, or column types, but when I run the procedure I get a listing of the problems, if any.
open ref_cursor_variable for
select usr_name from usres_master;
-- This is a typing mistake. The table name should be users_master. But the stored procedure is created and the error is thrown only when I run the procedure.
When I run the procedure I (naturally) get an error like:
table usres_master - invalid table name
The above is a trivial version. The real procedures we use at work combine several tables and run to at least a few hundred lines. In a PostgreSQL stored procedure, is there no advantage to using static SQL over dynamic SQL, i.e. something like open ref_cursor_variable for EXECUTE select_query_string_variable?
Static SQL should be preferred almost all of the time; dynamic SQL should be used only when it is necessary:
for performance reasons (dynamic SQL doesn't reuse execution plans, and a one-shot plan can sometimes be better, or even necessary), or
when it can reduce a lot of code.
In all other cases, use static SQL. The benefits:
readability
reuse of execution plans
it is safe against SQL injection by default
static checking is available
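As a minimal sketch (reusing the hypothetical users_master table and usr_name column from the question), the static form is written directly in the function body, while the dynamic form hands the query to EXECUTE as a string:

CREATE OR REPLACE FUNCTION get_user_names()
RETURNS refcursor
LANGUAGE plpgsql
AS $$
DECLARE
    ref_cursor_variable refcursor := 'user_names_cur';
BEGIN
    -- Static SQL: parsed together with the function body, eligible for plan
    -- reuse, and visible to static checkers
    OPEN ref_cursor_variable FOR
        SELECT usr_name FROM users_master;

    -- Dynamic alternative: the query is only a string, re-planned on every
    -- call and invisible to static checks
    -- OPEN ref_cursor_variable FOR EXECUTE 'SELECT usr_name FROM users_master';

    RETURN ref_cursor_variable;
END;
$$;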
The source of a function is just a string to Postgres. The main reason for this is that Postgres (unlike other DBMSs) supports many, even installable, languages for functions and procedures. As the Postgres core can't possibly know the syntax of all of these languages, it cannot validate the "inner" part of a function. To my knowledge the "language API" does not contain any "validate" method (in theory this would probably be possible, though).
If you want to statically validate your PL/pgSQL functions (and procedures since Postgres 11) you could use e.g. https://github.com/okbob/plpgsql_check/
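If the extension is installed on the server, a typical check of the function sketched above looks roughly like this; it reports problems such as the misspelled usres_master without ever executing the function:

CREATE EXTENSION IF NOT EXISTS plpgsql_check;

SELECT * FROM plpgsql_check_function('get_user_names()');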
I am using Oracle 12c and need help understanding the following:
1) I am facing an issue where several sessions are stuck in a locked state (library cache). They show up as parallel queries running, but I have not set any PARALLEL clause in the DDL of the table object.
Only for the migration were the indexes created with a PARALLEL clause,
but that applies only to the CREATE; I don't understand why it also kicks in for DML.
2) Also, if I assume that is the case, then when running the DML from the SQL editor, its execution plan shows nothing as parallel.
I finally got the answer through:
http://blog.tanelpoder.com/2007/06/23/a-gotcha-with-parallel-index-builds-parallel-degree-and-query-plans/
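In short, and hedged since the index name below is a placeholder: an index created or rebuilt with a PARALLEL clause keeps that degree in the data dictionary, and the optimizer can then consider parallel execution for later statements that touch the table, even though the table itself was never declared parallel. Checking and resetting the degree looks roughly like this:

-- See which degree the index carries (MY_INDEX is a hypothetical name)
SELECT index_name, degree
  FROM user_indexes
 WHERE index_name = 'MY_INDEX';

-- Reset it so subsequent DML/queries are not considered for parallel execution
ALTER INDEX my_index NOPARALLEL;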
The problem I am trying to solve:
I have a SAS dataset work.testData (in the work library) that contains 8 columns and around 1 million rows. All columns are text (i.e. no numeric data). This SAS dataset is around 100 MB in file size. My objective is to have a step that pushes this entire SAS dataset into Oracle, i.e. sort of like a "copy and paste" of the SAS dataset from the SAS platform to the Oracle platform. The rationale behind this is that, on a daily basis, this table in Oracle gets "replaced" by the one in SAS, which enables downstream Oracle processes.
My approach to solve the problem:
One-off initial setup in Oracle:
In Oracle, I created a table called testData with a table structure pretty much identical to the SAS dataset testData. (i.e. Same table name, same number of columns, same column names, etc.).
On-going repeating process:
In SAS, do a SQL pass-through to truncate ora.testData (i.e. remove all rows whilst keeping the table structure). This ensures ora.testData is empty before inserting from SAS.
In SAS, use a LIBNAME statement to assign the Oracle database as a SAS library (called ora), so I can "see" what's in Oracle and perform reads/updates from SAS.
In SAS, use a PROC SQL step to "insert" the data from the SAS dataset work.testData into the Oracle table ora.testData.
Sample codes
One-off initial setup in Oracle:
Step 1: Run this Oracle SQL Script in Oracle SQL Developer (to create table structure for table testData. 0 rows of data to begin with.)
DROP TABLE testData;
CREATE TABLE testData
(
NODENAME VARCHAR2(64) NOT NULL,
STORAGE_NAME VARCHAR2(100) NOT NULL,
TS VARCHAR2(10) NOT NULL,
STORAGE_TYPE VARCHAR2(12) NOT NULL,
CAPACITY_MB VARCHAR2(11) NOT NULL,
MAX_UTIL_PCT VARCHAR2(12) NOT NULL,
AVG_UTIL_PCT VARCHAR2(12) NOT NULL,
JOBRUN_START_TIME VARCHAR2(19) NOT NULL
)
;
COMMIT;
On-going repeating process:
Step 2, 3 and 4: Run this SAS code in SAS
******************************************************;
******* On-going repeatable process starts here ******;
******************************************************;
*** Step 2: Truncate the temporary Oracle transaction dataset;
proc sql;
connect to oracle (user=XXX password=YYY path=ZZZ);
execute (
truncate table testData
) by oracle;
execute (
commit
) by oracle;
disconnect from oracle;
quit;
*** Step 3: Assign Oracle DB as a libname;
LIBNAME ora Oracle user=XXX password=YYY path=ZZZ dbcommit=100000;
*** Step 4: Insert data from SAS to Oracle;
PROC SQL;
insert into ora.testData
select NODENAME length=64,
STORAGE_NAME length=100,
TS length=10,
STORAGE_TYPE length=12,
CAPACITY_MB length=11,
MAX_UTIL_PCT length=12,
AVG_UTIL_PCT length=12,
JOBRUN_START_TIME length=19
from work.testData;
QUIT;
******************************************************;
**** On-going repeatable process ends here *****;
******************************************************;
The limitation / problem to my approach:
The PROC SQL step (which transfers 100 MB of data from SAS to Oracle) takes around 5 hours to complete - the job takes far too long to run!
The Question:
Is there a more sensible way to perform data transfer from SAS to Oracle? (i.e. updating an Oracle table from SAS).
First off, you can do the drop/recreate from SAS if that's a necessity. I wouldn't drop and recreate each time - a truncate seems an easier way to get the same result - but if you have other reasons then that's fine; either way you can use execute (truncate table xyz) by oracle, or similar for the drop, through a pass-through connection.
Second, assuming there are no constraints or indexes on the table - which seems likely given you are dropping and recreating it - you may not be able to improve this, because it may be down to network latency. However, there is one area you should look at in the connection settings (which you don't provide): how often SAS commits the data.
There are two ways to control this: the DBCOMMIT setting and the BULKLOAD setting. The former controls how frequently commits are executed (so if DBCOMMIT=100 then a commit is executed every 100 rows). More frequent commits mean less data is lost if a random failure occurs, but much slower execution. DBCOMMIT defaults to 0 for PROC SQL INSERT, which means just one commit (the fastest option assuming no errors), so this is less likely to be helpful unless you're overriding it.
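As a sketch (credentials are placeholders, as in the question), letting the insert commit only once at the end just means dropping the DBCOMMIT= override, or setting it to 0 explicitly:

LIBNAME ora Oracle user=XXX password=YYY path=ZZZ dbcommit=0;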
BULKLOAD is probably my recommendation; that uses SQL*Loader (SQLLDR) to load your data, i.e. it batches the whole lot over to Oracle and then says 'Load this please, thanks.' It only works with certain settings and certain kinds of queries, but it ought to work here (subject to other conditions - read the SAS/ACCESS documentation for BULKLOAD).
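A hedged sketch of that, using the BULKLOAD= data set option with PROC APPEND in place of the PROC SQL insert (credentials are placeholders again):

LIBNAME ora Oracle user=XXX password=YYY path=ZZZ;

proc append base=ora.testData (bulkload=yes)
            data=work.testData;
run;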
If you're already using BULKLOAD, then you may be up against network latency. 5 hours for 100 MB seems slow, but I've seen all sorts of things in my (relatively short) day. If BULKLOAD didn't help, I would probably bring in the Oracle DBAs and have them troubleshoot this, starting from a .csv file and a SQL*Loader control file (which should be basically identical to what SAS is doing with BULKLOAD); they should know how to troubleshoot that and at least be able to monitor the performance of the database itself. If there are constraints on other tables that are problematic here (i.e. other tables that too frequently recalculate themselves based on your inserts, or whatever), they should be able to find that out and recommend solutions.
You could look into PROC DBLOAD, which is sometimes faster than inserts in SQL (though all in all it shouldn't really be, and it is an 'older' procedure not used much anymore). You could also look into whether you can avoid doing a complete flush and fill (i.e. whether there's a way to transfer less data across the network), or even simply shrink the column sizes.
I just created a stored procedure in MS SQL DB using TOAD.
What it does is accept an ID that some records are associated with, then insert those records into a table.
The next part of the stored procedure uses the ID input to search the table the items were inserted into, and then returns them as a result set to the user, just to confirm that the information was inserted.
In TOAD, it does what is expected: it inserts the data and returns the information using just the stored procedure.
In Oracle SQL Developer, however, it does the insert and ends at that. It seems not to execute the second part of the stored procedure, which is a select statement.
I have a feeling this is because of the JDBC adapter. Another reason I'm asking is that I'm using the reporting tool Pentaho Report Designer, and it would really make things easier if I could do the two things at the same time. Pentaho Report Designer also uses JDBC adapters - maybe not a coincidence?
But if there are other things I can tweak, I'd really appreciate it.
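For reference, a minimal sketch of the kind of procedure described, assuming an Oracle-style procedure that hands the confirmation rows back through a ref cursor OUT parameter (all object names here are hypothetical):

CREATE OR REPLACE PROCEDURE copy_and_confirm (
    p_id      IN  NUMBER,
    p_results OUT SYS_REFCURSOR
) AS
BEGIN
    -- Insert the records associated with the given ID
    INSERT INTO target_table (id, payload)
        SELECT id, payload FROM source_table WHERE id = p_id;

    -- Return the rows that were just inserted so the caller can confirm them
    OPEN p_results FOR
        SELECT id, payload FROM target_table WHERE id = p_id;
END;
/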
This is a guess, but worth considering...
There are things called "batches", which are sets of SQL statements that are all sent to the server at once and executed by the server as one set of statements, within a single server-side session. Sending a set of SQL statements to the server as a batch will often produce different results than sending them one at a time, where each statement is executed in its own session.
I haven't used TOAD (or Oracle) in a while, but as I recall, it dealt with batches differently than the other IDE I used. If the second statement in your set relies on being in the same session as the first, and in one IDE it ends up in a separate session, then this might explain what is happening.
select max(Key) from MyTable with(nolock)
I have this T-SQL statement but need one statement that works for both SQL Server and Oracle; of course the "with (nolock)" hint is not recognised by Oracle. Is there a way to write this statement so that it will run on both databases - either by having Oracle ignore the "with (nolock)", by having only SQL Server use that part of the statement, or by coding it in a way both will understand?
The reason I am using NOLOCK is that multiple users are accessing the same table and I need to find the max value during the transactions.
Oracle does things a bit differently so I do not have to worry about table locks.
Your queries are doing different things. What are the semantics of the query supposed to be? Is it supposed to return max(Key) including the effect of uncommitted transactions, as your SQL Server version indicates? If so:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
select max(Key)
from MyTable;
That has the same semantics. The syntax should work fine on both, AFAIK.
If instead you want the last committed max(Key), you would need to change the SQL Server database to use read committed snapshot isolation by default, so that it behaves more like Oracle. Alternatively, you could achieve similar semantics with the ROWLOCK, READPAST hints, but then you are back to needing two different queries.
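Hedged sketches of those two options (the database name is a placeholder):

-- Make SQL Server's default READ COMMITTED use row versioning, so readers see
-- the last committed value instead of blocking (closer to Oracle's behaviour)
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Or keep a SQL Server-specific query that skips rows locked by other sessions
SELECT MAX([Key]) FROM MyTable WITH (ROWLOCK, READPAST);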