SQL*Plus - Spool into multiple files - oracle

I've got the following spool script that stores the DDL of the tables in USER_TABLES into a file:
set pagesize 0
set long 90000
spool C:\Users\personal\Desktop\MAIN_USR\test.txt
select DBMS_METADATA.GET_DDL('TABLE',table_name,'MAIN_USR')
FROM user_tables ut;
spool off
exit
It returns the DDL of all the tables into a single file, but I need something a little more dynamic that returns them in separate files, each named after its respective table. Something like this:
set pagesize 0
set long 90000
FOR tab_nam IN (SELECT table_name FROM user_tables) LOOP
spool C:\Users\personal\Desktop\MAIN_USR\test.txt
select DBMS_METADATA.GET_DDL('TABLE',table_name,'MAIN_USR')
FROM user_tables ut;
spool off
END LOOP;
exit
I know the one above won't work, but it gives an idea of what I want to do.
I appreciate any kind of help.

You need to read USER_TABLES to generate N calls to DBMS_METADATA.GET_DDL, each of which gets its own spool file. Spool the generated commands to a file named out.sql and run it after spooling off:
set pagesize 0
set long 90000
SET TERMOUT OFF
spool out.sql
select 'spool C:\Users\personal\Desktop\MAIN_USR\'||REPLACE(table_name, '$', '_')||'.txt'||chr(13)||chr(10)||
'SELECT DBMS_METADATA.GET_DDL(''TABLE'','''||table_name||''',''MAIN_USR'') FROM DUAL;'||chr(13)||chr(10)||
'spool off' as cmd
FROM user_tables ut;
spool off
@out.sql
exit
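For illustration, each row returned by the generating query produces a block like this in out.sql (using a hypothetical table named EMPLOYEES):
spool C:\Users\personal\Desktop\MAIN_USR\EMPLOYEES.txt
SELECT DBMS_METADATA.GET_DDL('TABLE','EMPLOYEES','MAIN_USR') FROM DUAL;
spool off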

Using Python's cx_Oracle module, which enables access to Oracle Database, might be an elegant way to handle your case:
import cx_Oracle

con = cx_Oracle.connect('uname/pwd@host:port/service_name')
cur = con.cursor()

# fetch CLOB columns as plain strings so no LOB handling is needed
def OutputTypeHandler(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.CLOB:
        return cursor.var(cx_Oracle.LONG_STRING, arraysize=cursor.arraysize)

cur.outputtypehandler = OutputTypeHandler
cur.execute("select table_name from user_tables order by 1")
rec = cur.fetchall()
for r in rec:
    cur.execute("select dbms_metadata.get_ddl('TABLE',:tableName) from dual", tableName=r[0])
    ddl, = cur.fetchone()
    # one file per table, named after the table
    file = 'C:\\Users\\personal\\Desktop\\MAIN_USR\\' + r[0] + '.txt'
    with open(file, "w") as f:
        f.write(ddl)
The table names of the whole schema are determined by the first cur.execute, the creation DDL of each table is fetched by the second, and the files are created with their respective table names in the last step. The important point is the use of OutputTypeHandler: it avoids having to process the CLOB result stemming from the DBMS_METADATA.GET_DDL function. Without it, the query would return a LOB object rather than a string, and converting the CLOB with TO_CHAR in the query would raise an error for any DDL longer than 4000 characters.

Related

Debug in PL/SQL script or display data of many tables

In T-SQL I used to debug scripts or stored procedures like below (a very simple example), to check the data in the tables:
declare @my_variable nvarchar(4) = 'test';
....
update t
set t.my_column = case when t.my_column_3 = 1
then 1 + 1
when t.my_column_4 = 2
then 2 + 2
...
end
from my_table t
where t.column_2 = 'test';
--check the updated data
select t.my_column, t.my_column_2, t.my_column_3, t.my_column_4
from my_table t
where t.column_2 = 'test';
update t
set t.my_column_5 = case when t.my_column_6 + t.my_column_7 = t.my_column_8
then 'OK'
else 'NOT OK'
end,
....
from my_table t
where t.column_2 = 'test_2';
--check the updated data
select t.my_column_5, t.my_column_6, t.my_column_7, t.my_column_8
from my_table t
where t.column_2 = 'test_2';
etc..
I know that Oracle is quite different, but is there a similar way to check the data of some tables in the middle of a PL/SQL script?
I did some research and apparently we can use dbms_output, but it's not as simple as a select * from my_table like in T-SQL.
Also, sometimes after running a PL/SQL script (manipulating the data, etc.), I need to display the data of two or more tables in my script (for control purposes). Is that possible?
All suggestions are welcome.
Thank you.
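For what it's worth, here is a minimal sketch of the DBMS_OUTPUT approach (using a hypothetical my_table with the columns from the example above; serveroutput must be enabled to see the output):
set serveroutput on
begin
  update my_table t
     set t.my_column = 2
   where t.column_2 = 'test';
  -- "check the updated data": print the rows instead of selecting them
  for r in (select t.my_column, t.my_column_3, t.my_column_4
              from my_table t
             where t.column_2 = 'test') loop
    dbms_output.put_line(r.my_column || ' | ' || r.my_column_3 || ' | ' || r.my_column_4);
  end loop;
end;
/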

SQL Developer script output to datagrid

In Oracle SQL Developer, I can get simple query results returned in the 'Query Results' grid, but if I need to use a variable in a script, I have to use the 'Run Script' option, and my results show up in the 'Script Output' window, which I can't export to CSV format. Here is my sample code:
var CatCode char(5) ;
exec :CatCode := 'ZK';
SELECT * FROM Products WHERE CategoryCode = :CatCode;
Any help would be appreciated.
Thanks.
Just add a /*csv*/ hint to your query and the tool will bring the output back in CSV automatically when it is executed as a script (F5).
Or use a substitution variable instead: &Var vs :Var. Run with F9 and SQL Developer will prompt you for the value.
VAR stcode CHAR(2);
EXEC :stcode := 'NC';
SELECT /*csv*/
*
FROM
untappd
WHERE
venue_state =:stcode;
Or go straight to the grid so you can use the Grid Export feature:
SELECT
*
FROM
untappd
WHERE
venue_state =:stcode2;
Execute with Ctrl+Enter or F9
Supply the input parameter in the pop-up dialog and click OK.
Shazaam.
Here you go, you can run this one to make sure it's working:
set colsep , -- separate columns with a comma
set pagesize 0 -- No header rows
set trimspool on -- remove trailing blanks
set headsep off -- this may or may not be useful...depends on your headings.
set linesize X -- X should be the sum of the column widths
set numw X -- X should be the length you want for numbers (avoid scientific notation on IDs)
spool C:\Users\**directory**\sql\Test1.csv; -- this is the file path to save the data
var CatCode char(5) ;
exec :CatCode := 'ZK';
SELECT * FROM Products WHERE CategoryCode = :CatCode;
spool off;
Thanks @thatjeffsmith and Paras, the spool option gave me a new direction and it worked. I slightly changed your code and it works great:
var CatCode char(5) ;
exec :CatCode := 'ZK';
set feedback off;
SET SQLFORMAT csv;
spool "c:\temp\spoolTest.csv"
SELECT * FROM Products WHERE CategoryCode = :CatCode;
spool off;
SET SQLFORMAT;
set feedback on;

Error with LONG data type in cursor

I am trying to generate a spool file through the anonymous block below in order to find the views that are based on a particular table.
declare
  cursor c1 is select view_name, text from user_views;
  rt c1%rowtype;
begin
  open c1;
  loop
    fetch c1 into rt;
    exit when c1%notfound;
    dbms_output.put_line(rt.view_name || '|' || rt.text);
  end loop;
  close c1;
end;
When I run it, I get an error like "numeric or value error"; however, if I remove the TEXT (LONG) column from the cursor definition the block goes through without any error.
I understand that we cannot use the LONG data type in a WHERE clause, but can it also not be fetched in a cursor? If so, what is the alternative in this case?
Not directly addressing the long issue, but if you want to find out which views refer to a particular table, rather than searching through the view source you can query the data dictionary:
select owner, type, name
from all_dependencies
where referenced_type = 'TABLE'
and referenced_owner = user -- or a specific schema
and referenced_name = '<my_table_name>';
This will also list any triggers on the table, etc., so if you are only interested in views you can add and type = 'VIEW'.
Of course, this may just give you a smaller list of views to examine in more detail to see how the table is used by each one, but it's easier than searching all 300 of your views manually... and it might mean you don't need to get the text of the large views with more than 32k characters which are causing the long problem in the first place.
In this case the error indicates that you've hit a buffer limit - dbms_output.put_line won't handle such a large amount of data. After looking at the problem more closely, though, it's not a dbms_output.put_line issue yet; as Alex Poole has pointed out in the comments to your question, it's a cursor problem. So I would suggest you use a simple SELECT statement (option 2 below). If you go for a workaround such as
create table <<name>> as
select view_name
     , to_lob(text) as text
  from user_views;
for example, you will be able to use the cursor, but then dbms_output.put_line will stop you.
To generate a spool file you have at least two options:
1. Use the UTL_FILE package to write the data to a file (a sketch follows at the end of this answer).
2. Let SQL*Plus do the job. For example:
set feedback off;
set termout off;
set colsep "|";
set long 20000; -- increase the size if it's not enough
set heading off;
set linesize 150; -- increase the size if it's not enough
spool <<file path\file name>>
select view_name
, text
from user_views;
spool off;
At the end you will have output similar to this in your <<file path\file name>> file:
ALL_APPLY_CONFLICT_COLUMNS |select c.object_owner,
| c.object_name,
| c.method_name,
| c.resolution_column, c.column_name,
| c.apply_database_link
| from all_tab_columns o,
| dba_apply_conflict_columns c
| where c.object_owner = o.owner
| and c.object_name = o.table_name
| and c.column_name = o.column_name
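For completeness, here is a minimal sketch of option 1, assuming a directory object named DUMP_DIR already exists on the server and that each view text fits in 32K (a LONG value of up to 32760 bytes can be fetched into a PL/SQL VARCHAR2):
declare
  f utl_file.file_type;
begin
  f := utl_file.fopen('DUMP_DIR', 'views.txt', 'w', 32767);
  for r in (select view_name, text from user_views) loop
    -- view name and view text separated by |, one view per line
    utl_file.put_line(f, r.view_name || '|' || r.text);
  end loop;
  utl_file.fclose(f);
end;
/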

SQLPlus - spooling to multiple files from PL/SQL blocks

I have a query that returns a lot of data into a CSV file. So much, in fact, that Excel can't open it - there are too many rows. Is there a way to control spool so it spools to a new file every time 65000 rows have been processed? Ideally, I'd like to have my output in files named in sequence, such as large_data_1.csv, large_data_2.csv, large_data_3.csv, etc...
I could use dbms_output in a PL/SQL block to control how many rows are output, but then how would I switch files, as spool does not seem to be accessible from PL/SQL blocks?
(Oracle 10g)
UPDATE:
I don't have access to the server, so writing files to the server would probably not work.
UPDATE 2:
Some of the fields contain free-form text, including linebreaks, so counting line breaks AFTER the file is written is not as easy as counting records WHILE the data is being returned...
Got a solution, don't know why I didn't think of this sooner...
The basic idea is that the master SQL*Plus script generates an intermediate script that will split the output into multiple files. Executing the intermediate script will execute multiple queries with different ranges imposed on rownum, and spool to a different file for each query.
set termout off
set serveroutput on
set echo off
set feedback off
variable v_err_count number;
spool intermediate_file.sql
declare
  i number := 0;
  v_fileNum number := 1;
  v_range_start number := 1;
  v_range_end number := 1;
  k_max_rows constant number := 65536;
begin
  dbms_output.enable(10000);
  select count(*)
    into :v_err_count
    from ...; /* You don't need to see the details of the query... */
  while i <= :v_err_count loop
    v_range_start := i+1;
    if v_range_start <= :v_err_count then
      i := i+k_max_rows;
      v_range_end := i;
      dbms_output.put_line('set colsep ,
set pagesize 0
set trimspool on
set headsep off
set feedback off
set echo off
set termout off
set linesize 4000
spool large_data_file_'||v_fileNum||'.csv
select data_string
from (select rownum rn, data_object
from
/* Details of query omitted */
)
where rn >= '||v_range_start||' and rn <= '||v_range_end||';
spool off');
      v_fileNum := v_fileNum + 1;
    end if;
  end loop;
end;
/
spool off
prompt executing intermediate file
@intermediate_file.sql
set serveroutput off
Try this for a pure SQL*Plus solution...
set pagesize 0
set trimspool on
set headsep off
set feedback off
set echo off
set verify off
set timing off
set linesize 4000
DEFINE rows_per_file = 50
-- Create an sql file that will create the individual result files
SET DEFINE OFF
SPOOL c:\temp\generate_one.sql
PROMPT COLUMN which_dynamic NEW_VALUE dynamic_filename
PROMPT
PROMPT SELECT 'c:\temp\run_#'||TO_CHAR( &1, 'fm000' )||'_result.txt' which_dynamic FROM dual
PROMPT /
PROMPT SPOOL &dynamic_filename
PROMPT SELECT *
PROMPT FROM ( SELECT a.*, rownum rnum
PROMPT FROM ( SELECT object_id FROM all_objects ORDER BY object_id ) a
PROMPT WHERE rownum <= ( &2 * 50 ) )
PROMPT WHERE rnum >= ( ( &3 - 1 ) * 50 ) + 1
PROMPT /
PROMPT SPOOL OFF
SPOOL OFF
SET DEFINE &
-- Define variable to hold number of rows
-- returned by the query
COLUMN num_rows NEW_VALUE v_num_rows
-- Find out how many rows there are to be
SELECT COUNT(*) num_rows
FROM ( SELECT LEVEL num_files FROM dual CONNECT BY LEVEL <= 120 );
-- Create a master file with the correct number of sql files
SPOOL c:\temp\run_all.sql
SELECT '@c:\temp\generate_one.sql '||TO_CHAR( num_files )
||' '||TO_CHAR( num_files )
||' '||TO_CHAR( num_files ) file_name
FROM ( SELECT LEVEL num_files
FROM dual
CONNECT BY LEVEL <= CEIL( &v_num_rows / &rows_per_file ) )
/
SPOOL OFF
-- Now run them all
@c:\temp\run_all.sql
Use the Unix split utility on the resulting file.
utl_file is the package you are looking for. You can write a cursor and loop over the rows (writing them out), and when mod(num_rows_written, num_per_file) = 0 it's time to start a new file. It works fine within PL/SQL blocks; a sketch follows below.
Here's the reference for utl_file:
http://www.adp-gmbh.ch/ora/plsql/utl_file.html
NOTE:
I'm assuming here that it's OK to write the files out to the server.
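A minimal sketch of that cursor loop, assuming a directory object DUMP_DIR exists on the server and a hypothetical source table big_table with a single text column col1:
declare
  f          utl_file.file_type;
  rows_out   pls_integer := 0;
  file_num   pls_integer := 0;
  k_per_file constant pls_integer := 65000;
begin
  for r in (select col1 from big_table) loop
    -- every k_per_file rows, close the current file and open the next one
    if mod(rows_out, k_per_file) = 0 then
      if utl_file.is_open(f) then
        utl_file.fclose(f);
      end if;
      file_num := file_num + 1;
      f := utl_file.fopen('DUMP_DIR', 'large_data_' || file_num || '.csv', 'w', 32767);
    end if;
    utl_file.put_line(f, r.col1);
    rows_out := rows_out + 1;
  end loop;
  if utl_file.is_open(f) then
    utl_file.fclose(f);
  end if;
end;
/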
Have you looked at setting up an external data connection in Excel (assuming that the CSV files are only being produced for use in Excel)? You could define an Oracle view that limits the rows returned and also add some parameters in the query to allow the user to further limit the result set. (I've never understood what someone does with 64K rows in Excel anyway).
I feel that this is somewhat of a hack, but you could also use UTL_MAIL and generate attachments to email to your user(s). There's a 32K size limit to the attachments, so you'd have to keep track of the size in the cursor loop and start a new attachment on this basis.
While your question asks how to break the great volume of data into chunks Excel can handle, I would ask whether there is any part of the Excel operation that could be moved into SQL (or PL/SQL) to reduce the volume of data. Ultimately the data has to be reduced to be made meaningful to anyone. The database is a great engine to do that work on.
When you have reduced the data to more presentable volumes or even final results, dump it for Excel to make the final presentation.
This is not the answer you were looking for but I think it is always good to ask if you are using the right tool when it is getting difficult to get the job done.

Table record count to unix log file

I need the count of records of a database table from Unix.
I am calling a SQL script from Unix and need the record count written to a log file.
You can put the following into a test.sql file:
SET HEADING OFF;
SELECT COUNT(*) FROM dual;
QUIT;
and call it via SQL*Plus from a shell script.
It will output:
1
since table dual has only one row. You should be able to write that into a log file.
You can append the output of your script to a named file by redirecting it like this:
$ sqlplus username/password@SID @your_script.sql >> /tmp/whatever.log
If you want more than a bare count in the output, you'll need to include the boilerplate in the projection:
SQL> select to_char(sysdate, 'YYYYMMDDHH24MISS')||'::Number of emps = '
2 , count(*)
3 from emp
4 group by to_char(sysdate, 'YYYYMMDDHH24MISS')||'::Number of emps = '
5 /
TO_CHAR(SYSDATE,'YYYYMMDDHH24MISS COUNT(*)
--------------------------------- ----------
20100210133747::Number of emps = 16
SQL>
