I am looking for a way to avoid a recurring error of mine when working with SQL scripts that sometimes contain PL/SQL blocks. As you may know, in SQL*Plus, if you do not add a slash / after a PL/SQL block (begin ... end;), the block is not executed.
But from my development tools, the lack of a slash is not detected until my work gets deployed to the testing environment, which adds confusion and stress to the process.
I have had this issue at many places I've worked, and with many tools. So I wonder whether it is possible to configure Oracle SQL*Plus either to
execute the content of the "buffer" before commits, even if no / is there, or
raise an error message if the "buffer" is not empty when exiting.
Another solution would be to change the behaviour of my development tools, I know, but I am also looking to extend my Oracle knowledge, with your permission.
To make this concrete, let's say I deliver a script a.sql, which is later wrapped in the company's deployment script dpl.sql.
dpl.sql, simplified (note the importance of the ., which terminates buffer input without executing it; that is what quietly hides the issue, it is the thing I cannot change, and it is THE thing to work around):
spool dpl.log
@a
.
spool off
a.sql, where I forgot the / after my PL/SQL block because of personal inconsistency by design (beauty of Nature):
drop table a ;
create table a(n number);
begin
dbms_output.put_line('coucou');
end;
insert into a select 1 from dual ;
commit ;
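For reference, the corrected a.sql would simply add a slash after the block:
drop table a ;
create table a(n number);
begin
dbms_output.put_line('coucou');
end;
/
insert into a select 1 from dual ;
commit ;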
Result of the buggy version, where you see that the insert and commit never happened, and no error was raised...
SQL> @dpl
Table dropped.
Table created.
SQL> select count(1) from a ;
COUNT(1)
----------
0
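For what it's worth, immediately after @dpl finishes (before any new statement replaces the buffer), the unexecuted block, with the insert and commit swallowed into it, is still sitting in the SQL*Plus buffer; the LIST command shows something like:
SQL> list
  1  begin
  2  dbms_output.put_line('coucou');
  3  end;
  4  insert into a select 1 from dual ;
  5* commit ;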
I went through these, to no avail:
When do I need to use a semicolon vs a slash in Oracle SQL?
oracle SQL plus how to end command in SQL file?
My procedure looks like this:
declare
  cur_1 sys_refcursor;
  cur_2 sys_refcursor;
  v_1 varchar2(30);
  v_2 varchar2(30);
  v_3 varchar2(30);
  v_4 varchar2(30);
begin
  open cur_1 for select * from tab1@dblink1;
  loop
    fetch cur_1 into v_1, v_2;
    exit when cur_1%notfound;
    open cur_2 for select * from tab2@dblink1 where col1 = v_1 and col2 = v_2;
    loop
      fetch cur_2 into v_3, v_4;
      exit when cur_2%notfound;
      insert into local.tab3 values (v_1, v_2, v_3, v_4);
    end loop;
    close cur_2;
  end loop;
  close cur_1;
end;
The above procedure compiles, but when I run it I get the following error:
No more data to read from socket
No more data to read from socket
No more data to read from socket
...(a few more 'No more data to read from socket')
IO Error: Connection reset by peer: socket write error
Process exited.
Interestingly, when I comment out the entire inner loop the procedure runs without error, so I know something is wrong with the inner loop (I tried commenting out only the insert statement inside the inner loop and got the same error).
Both my local db and the dblink1 database are the same version:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Generic advice for troubleshooting "No more data to read from socket" errors.
These errors are usually caused by another serious error, such as an ORA-600 error: a problem so serious that the server process crashed and could not even send a proper error message to the client. (Another common reason for these errors is a network disconnection caused by SQLNET.EXPIRE_TIME or some other process that kills old sessions.)
Look at the Alert Log to find out the original error message.
Look for the file alert_[name].log in the directory given by:
select value from v$parameter where name = 'background_dump_dest';
After you find the specific error message and details, go to support.oracle.com. Use the "ora-600 tool" and then look up the first number after the ORA-600 message.
There will usually be one or more articles for that specific type of ORA-600 error. Use the exact version and platform to narrow down the possible list of bugs. (But don't be surprised if the "Versions affected" in the article are wrong. Oracle's claims of "fixed in version x.y" are not always true.)
The articles typically explain in more detail how the problem happened, possible workarounds, and a solution that usually involves a patch or upgrade.
In practice you rarely want to solve these problems that way. The "typical" advice is to contact Oracle Support to verify you really have the same problem, get a patch, get permission to bring down the environment(s), and then apply the patch. And then probably realize the patch doesn't work. Congratulations, you just wasted a lot of time.
Instead, you can usually avoid the problem with a subtle change to the query or procedure. There are a lot of features in Oracle, there's almost always another way to do it. If the code ends up looking a bit weird, add a comment to warn future programmers: "This code looks weird to avoid bug X, which should be fixed in version Y."
Specific advice for this code
If that's really your entire procedure, you should replace it with something like this:
insert into local.tab3(col1, col2, col3, col4)
select tab1.col1, tab1.col2, tab2.col1, tab2.col2
from tab1@dblink1 tab1
join tab2@dblink1 tab2
on tab1.col1 = tab2.col1
and tab1.col2 = tab2.col2;
In general, you should always do things in SQL if possible. Especially if you can avoid opening many cursors. And especially if you can avoid opening many cursors to a remote database.
As jonearles mentioned, you should write this in one SQL statement.
If you insist on using PL/SQL: you are doing way too much work yourself, declaring variables, opening cursors, looping, and assigning variables. Consider this PL/SQL instead:
begin
for c1 in (select * from tab1@dblink1)
loop
for c2 in (select * from tab2@dblink1 where col1 = c1.col1 and col2 = c1.col2)
loop
insert into local.tab3 values (c1.col1,c1.col2,c2.col1,c2.col2);
end loop;
end loop;
end;
/
I have a SQL script which creates my application's tables, sequences, triggers, etc., and inserts around 10k rows of data.
I am on a slow network and when I run this script from my local machine it takes a long time to finish.
I am wondering if there is any support in SQL*Plus (or SQL Developer) for running this script on the server, so that the entire script is first transported to the server, executed there, and then returns, say, a log file of the execution.
No, there is not. There are some things you can do that might make the data load go faster, such as using SQL*Loader if you are doing individual inserts, and increasing your commit interval. However, I would have to see the code to really help very much.
If you have access to the remote server on which the database is hosted, and you can execute sqlplus on the said server, sure you can:
1. Log in or SSH (depending upon the OS - Windows or *nix) to the server.
2. Create your SQL script (myscript.sql) over there.
3. Log in to SQL*Plus and execute the script using the command @myscript.sql
There is rarely a need to run these kinds of scripts on the server. A few simple changes to batch commands can significantly improve performance. All of the changes below combine multiple statements, reducing the number of network round-trips. Some of them also decrease parsing time, which will significantly improve performance even if the scripts run on the server.
Combine INSERTs into a single statement
Replace individual inserts:
insert into some_table values(1);
insert into some_table values(2);
...
with combined inserts like this:
insert into some_table
select 1 from dual union all
select 2 from dual union all
...
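Another option on Oracle is a multi-table insert; this sketch does the same load with INSERT ALL, still in a single statement and a single round-trip:
insert all
into some_table values(1)
into some_table values(2)
select * from dual;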
Use PL/SQL blocks
Replace individual DDL:
create sequence sequence1;
create sequence sequence2;
with a PL/SQL block:
begin
execute immediate 'create sequence sequence1';
execute immediate 'create sequence sequence2';
end;
/
Use inline constraints
Combine DDL as much as possible. For example, use this statement:
create table some_table(a number not null);
Instead of this:
create table some_table(a number);
alter table some_table modify a not null;
With T-SQL I'm used to putting some repeatable tests in for my stored procs. Typically this may include putting the db in a particular state, running the sproc, validating the state, and rolling back. A contrived example might look something like this:
BEGIN TRAN
--input for test case
DECLARE @TestName VARCHAR(10) = 'bob'
--insert test row
INSERT INTO tbl (data) values (@TestName)
--display initial state of target row
SELECT * FROM tbl WHERE data = @TestName
--do some useful test
EXEC MyProc
--display the final state of the target row
SELECT * FROM tbl WHERE data = @TestName
--put the db back where it started
ROLLBACK TRAN
Now I'm working with Oracle and PL/SQL, and I'm trying to use a similar pattern to test my work, but I'm not finding it obvious how to do that. I believe there are a few different ways I might accomplish it, but I haven't gotten anything to actually work. Ideally I would have a single script in which I could run multiple test cases and inspect the results.
I am trying to work in PL/SQL Developer at this point, and I understand that might differ from how it would work in Oracle SQL Developer or elsewhere.
In Oracle, using tools like SQL*Plus and GUI tools like SQL Developer, you have many options:
To execute the statements and procedures in order within a single session, i.e. using the procedural method of PL/SQL, write an anonymous PL/SQL block and execute it as a script.
Most of the GUI-based tools have an option like Execute as script or a Test Window to execute your scripts individually or embedded in an anonymous block.
You could also achieve the same task using DBMS_SCHEDULER.
As you are interested in the PL/SQL Developer tool, a product of Allround Automations, you could simply use its Test window to test individual objects.
I have documented a few useful features of the PL/SQL Developer tool on my blog; please read http://lalitkumarb.wordpress.com/2014/08/14/plsql-developer-settings/
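For example, a rough PL/SQL translation of the T-SQL pattern above, runnable as a script in an anonymous block (tbl, data and MyProc are the names from your example; set serveroutput on first if you are in SQL*Plus), might look like this:
declare
  v_test_name varchar2(10) := 'bob';  -- input for the test case
begin
  -- insert test row
  insert into tbl (data) values (v_test_name);
  -- display the initial state of the target row
  for r in (select data from tbl where data = v_test_name) loop
    dbms_output.put_line('before: ' || r.data);
  end loop;
  -- do some useful test
  MyProc;
  -- display the final state of the target row
  for r in (select data from tbl where data = v_test_name) loop
    dbms_output.put_line('after: ' || r.data);
  end loop;
  -- put the db back where it started
  rollback;
end;
/
Note that the final rollback only restores the state if MyProc does not commit internally.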
Can anybody let me know if there is any way to find out the cost of a stored procedure in Oracle? If there is no direct way, I would like to know of any substitutes.
The way I find the cost now is by doing an autotrace of all the queries used in the stored procedure and then estimating the procedure's cost according to the frequency of each query's execution.
In addition to that, I would like suggestions to optimize my stored procedure, especially the query given below.
Logic of the procedure:
Below is the dynamic SQL query used as a cursor in my stored procedure. The cursor is opened and fetched inside a loop. I fetch the info, put it in a varray, count the data, and then insert it into a table.
My objective is to find out the cost of the procedure as well as to optimize it.
SELECT DISTINCT acct_no
FROM raw
WHERE 1=1
AND code = ''' || code ||
''' AND qty < 0
AND acct_no
IN (SELECT acct_no FROM ' || table_name || ' WHERE counter =
(SELECT MAX(counter) FROM ' || table_name || '))
One of the best tools for analyzing SQL and PL/SQL performance is the native SQL trace.
1. Enable tracing in your session:
SQL> alter session set SQL_TRACE=TRUE;
Session altered
2. Run your procedure.
3. Exit your session.
4. Navigate to your server's udump directory and find your trace file (usually the latest one).
5. Run tkprof on it.
This will produce a file containing a list of all statements with lots of information, including the number of times each was executed, its query plan and statistics. This is more detailed and precise than manually running the plan for each select.
If you want to optimize a procedure's performance, you would usually sort the trace file by execute elapsed time (with sort=EXEELA) or fetch time, and try to optimize the queries that do the most work.
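A minimal tkprof invocation might look like this (the trace file name here is hypothetical):
tkprof orcl_ora_12345.trc trace_report.txt sort=exeela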
You can also make the trace file log wait events by using the following command at step 1:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
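When you are done, switch the event off again in the same session:
ALTER SESSION SET EVENTS '10046 trace name context off';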
The way to find out the cost (in execution time) of a stored procedure is to employ a profiler. 11g introduced the Hierarchical Profiler, which is highly neat. Find out more.
Prior to 11g there was only DBMS_PROFILER, which is good enough, especially if your stored procedure doesn't use objects in other schemas. Find out more.
Trace is good for identifying poorly performing SQL. Profilers are good for identifying the cost of the PL/SQL elements of a stored proc. If your proc has some expensive computational elements which don't read or write to tables, those won't show up in SQL trace.
Likewise, if you have a well-tuned SQL statement but use it badly, a profiler run is likely to be more help than trace. An example of what I mean is repeatedly executing the same SELECT statement inside a cursor loop: I know that's not quite what you're doing, but it's close enough.
Apparently the hierarchical profiler, DBMS_HPROF, is installed by default in 11g, but a DBA has to grant some privileges to developers who want to use it. Find out more.
To install the DBMS_PROFILER in 10g (or earlier) a DBA has to run this script:
$ORACLE_HOME/rdbms/admin/proftab.sql
Be sure to get the reporting infrastructure as well:
$ORACLE_HOME/plsql/demo/profsum.sql
(The name or location of this script may vary in earlier versions).
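Once installed, a profiling run is just a pair of calls wrapped around the code you want to measure. A minimal sketch (my_stored_procedure is a placeholder for the procedure under test):
begin
  dbms_profiler.start_profiler('test run 1');
  my_stored_procedure;  -- placeholder: the procedure you want to profile
  dbms_profiler.stop_profiler;
end;
/
The results land in the plsql_profiler_runs, plsql_profiler_units and plsql_profiler_data tables created by proftab.sql.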
The easy way is to execute the procedure and then query v$sql.
If you want a little tip to make your life easier (not just for packages), add a distinctive comment to the query inside the procedure, something like:
select /* BIG DADDY */ * from dual;
and then query v$sql as follows
select * from v$sql where sql_text like '%BIG DADDY%';
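To see actual cost figures rather than just locate the statement, you can also pull in some of the statistics columns of v$sql, for example:
select sql_text, executions, elapsed_time, cpu_time, buffer_gets, disk_reads
from v$sql
where sql_text like '%BIG DADDY%';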
The best way is definitely the way @Vincent Malgrat suggested.
Good luck.
I have a view whose DDL definition is many thousands of lines long. Part of our CI process is to drop and recreate views from DDL using SQLPlus called from a command line script.
This works for hundreds of views in the database, but the very large view is never created in the target schema. I always have to paste the view creation script into Toad and run it manually after the automated process has completed. This is a drag.
There is no meaningful error message from SQL*Plus when the large-view portion of the DDL script is run, but I suspect that it fails because of its size.
Is there a "set" command that I can include at the top of my DDL to tell SQLPlus that it's ok to create large views or am I forever doomed to include a stoopid manual step in the otherwise automatic CI process?
Firstly, use the most recent version of SQL*Plus. It's been a long time since I had a piece of code that was too large to be executed through SQL*Plus. You can use the Instant Client.
I'd also look at refactoring the view. Look at the WITH clause, as that is relatively new and, if the view has evolved over a long period, there's a good chance it can be amended to make use of it.
Is there an empty line in the view SQL, or does any line have more than 2499 characters? Either one of these may cause SQL*Plus to behave unexpectedly but not actually fail.
If there is an empty line, Oracle will ignore everything before it and try to run everything after it. (This only applies to SQL, not PL/SQL.) For example, if you have an empty line right after the create view line, only the query part will run, as a plain select:
SQL> create or replace view newline_in_the_middle as
2
SQL> select * from dual;
D
-
X
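If you cannot strip the blank lines out of the generated DDL, SQL*Plus has a setting that makes it tolerate blank lines inside a SQL statement:
SQL> set sqlblanklines on
With that set, the statement above is parsed as a single create view.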
A line with more than 2499 characters will be ignored, but Oracle will still try to process the statement without it. This can cause problems but may still result in a valid statement:
SQL> create or replace view long_line as
2 select '...[enter 2500 characters]...' asdf from dual union all
SP2-0027: Input is too long (> 2499 characters) - line ignored
2 select '1' asdf from dual;
View created.
You may have to check the script output very carefully to find these issues.
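As a final sanity check in the CI script, you could verify that the view really was created, and at the expected size; a sketch (the view name is hypothetical):
select view_name, text_length
from user_views
where view_name = 'MY_BIG_VIEW';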