SQLPlus - spooling to multiple files from PL/SQL blocks - oracle

I have a query that returns a lot of data into a CSV file. So much, in fact, that Excel can't open it - there are too many rows. Is there a way to control spool to spool to a new file every time 65000 rows have been processed? Ideally, I'd like to have my output in files named in sequence, such as large_data_1.csv, large_data_2.csv, large_data_3.csv, etc...
I could use dbms_output in a PL/SQL block to control how many rows are output, but then how would I switch files, as spool does not seem to be accessible from PL/SQL blocks?
(Oracle 10g)
UPDATE:
I don't have access to the server, so writing files to the server would probably not work.
UPDATE 2:
Some of the fields contain free-form text, including linebreaks, so counting line breaks AFTER the file is written is not as easy as counting records WHILE the data is being returned...

Got a solution, don't know why I didn't think of this sooner...
The basic idea is that the master SQL*Plus script generates an intermediate script that will split the output into multiple files. Executing the intermediate script will execute multiple queries with different ranges imposed on rownum, and spool to a different file for each query.
set termout off
set serveroutput on
set echo off
set feedback off
variable v_err_count number;
spool intermediate_file.sql
declare
i number := 0;
v_fileNum number := 1;
v_range_start number := 1;
v_range_end number := 1;
k_max_rows constant number := 65536;
begin
dbms_output.enable(10000);
select count(*)
into :v_err_count
from ...
/* You don't need to see the details of the query... */
while i < :v_err_count loop
v_range_start := i+1;
if v_range_start <= :v_err_count then
i := i+k_max_rows;
v_range_end := i;
dbms_output.put_line('set colsep ,
set pagesize 0
set trimspool on
set headsep off
set feedback off
set echo off
set termout off
set linesize 4000
spool large_data_file_'||v_fileNum||'.csv
select data_string
from (select rownum rn, data_object
from
/* Details of query omitted */
)
where rn >= '||v_range_start||' and rn <= '||v_range_end||';
spool off');
v_fileNum := v_fileNum +1;
end if;
end loop;
end;
/
spool off
prompt executing intermediate file
@intermediate_file.sql
set serveroutput off
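For reference, executing the block above writes something like the following into intermediate_file.sql - one spool/query pair per 65,536-row range (only the first range is shown here; the inner query is still the one omitted above):
set colsep ,
set pagesize 0
set trimspool on
set headsep off
set feedback off
set echo off
set termout off
set linesize 4000
spool large_data_file_1.csv
select data_string
from (select rownum rn, data_object
from
/* Details of query omitted */
)
where rn >= 1 and rn <= 65536;
spool off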

Try this for a pure SQL*Plus solution...
set pagesize 0
set trimspool on
set headsep off
set feedback off
set echo off
set verify off
set timing off
set linesize 4000
DEFINE rows_per_file = 50
-- Create an sql file that will create the individual result files
SET DEFINE OFF
SPOOL c:\temp\generate_one.sql
PROMPT COLUMN which_dynamic NEW_VALUE dynamic_filename
PROMPT
PROMPT SELECT 'c:\temp\run_#'||TO_CHAR( &1, 'fm000' )||'_result.txt' which_dynamic FROM dual
PROMPT /
PROMPT SPOOL &dynamic_filename
PROMPT SELECT *
PROMPT FROM ( SELECT a.*, rownum rnum
PROMPT FROM ( SELECT object_id FROM all_objects ORDER BY object_id ) a
PROMPT WHERE rownum <= ( &2 * 50 ) )
PROMPT WHERE rnum >= ( ( &3 - 1 ) * 50 ) + 1
PROMPT /
PROMPT SPOOL OFF
SPOOL OFF
SET DEFINE &
-- Define variable to hold number of rows
-- returned by the query
COLUMN num_rows NEW_VALUE v_num_rows
-- Find out how many rows there are to be
SELECT COUNT(*) num_rows
FROM ( SELECT LEVEL num_files FROM dual CONNECT BY LEVEL <= 120 );
-- Create a master file with the correct number of sql files
SPOOL c:\temp\run_all.sql
SELECT '@c:\temp\generate_one.sql '||TO_CHAR( num_files )
||' '||TO_CHAR( num_files )
||' '||TO_CHAR( num_files ) file_name
FROM ( SELECT LEVEL num_files
FROM dual
CONNECT BY LEVEL <= CEIL( &v_num_rows / &rows_per_file ) )
/
SPOOL OFF
-- Now run them all
@c:\temp\run_all.sql

Use split on the resulting file.

utl_file is the package you are looking for. You can write a cursor, loop over the rows (writing them out), and when mod(num_rows_written, num_per_file) = 0 it's time to start a new file. It works fine within PL/SQL blocks.
Here's the reference for utl_file:
http://www.adp-gmbh.ch/ora/plsql/utl_file.html
NOTE:
I'm assuming here, that it's ok to write the files out to the server.
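A minimal sketch of that approach, assuming a directory object named DATA_DIR already exists on the server with write access granted, and using placeholder table and column names:
declare
  k_rows_per_file constant pls_integer := 65000;
  v_file      utl_file.file_type;
  v_file_num  pls_integer := 1;
  v_rows      pls_integer := 0;
begin
  v_file := utl_file.fopen('DATA_DIR', 'large_data_' || v_file_num || '.csv', 'w', 32767);
  for rec in (select col1, col2 from my_table) loop  -- placeholder query
    if v_rows > 0 and mod(v_rows, k_rows_per_file) = 0 then
      utl_file.fclose(v_file);                       -- current file is full
      v_file_num := v_file_num + 1;                  -- switch to the next file
      v_file := utl_file.fopen('DATA_DIR', 'large_data_' || v_file_num || '.csv', 'w', 32767);
    end if;
    utl_file.put_line(v_file, rec.col1 || ',' || rec.col2);
    v_rows := v_rows + 1;
  end loop;
  utl_file.fclose(v_file);
end;
/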

Have you looked at setting up an external data connection in Excel (assuming that the CSV files are only being produced for use in Excel)? You could define an Oracle view that limits the rows returned and also add some parameters in the query to allow the user to further limit the result set. (I've never understood what someone does with 64K rows in Excel anyway).
I feel that this is somewhat of a hack, but you could also use UTL_MAIL and generate attachments to email to your user(s). There's a 32K size limit to the attachments, so you'd have to keep track of the size in the cursor loop and start a new attachment on this basis.
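For the external data connection idea, a hedged sketch of a row-limiting view (the table and column names are made up; the 65000 cap mirrors the Excel limit from the question):
create or replace view large_data_excel_v as
select *
from   (select t.*, rownum rn
        from   large_data_table t)
where  rn <= 65000;
The query behind the Excel connection could then add further predicates against this view to narrow the result set.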

While your question asks how to break the great volume of data into chunks Excel can handle, I would ask whether any part of the Excel operation could be moved into SQL (or PL/SQL) to reduce the volume of data. Ultimately it has to be reduced to be made meaningful to anyone. The database is a great engine to do that work on.
When you have reduced the data to more presentable volumes or even final results, dump it for Excel to make the final presentation.
This is not the answer you were looking for but I think it is always good to ask if you are using the right tool when it is getting difficult to get the job done.

Related

SQL*Plus - Spool into multiple files

I've got the following spool that stores the DDL of the user tables into a file:
set pagesize 0
set long 90000
spool C:\Users\personal\Desktop\MAIN_USR\test.txt
select DBMS_METADATA.GET_DDL('TABLE',table_name,'MAIN_USR')
FROM user_tables ut;
spool off
exit
It returns the DDL of all user_tables into a single file, but I need it to be a little more dynamic and return them in separate files with the file name of their respective table. Something like this:
set pagesize 0
set long 90000
FOR tab_nam IN (SELECT table_name FROM user_tables) LOOP
spool C:\Users\personal\Desktop\MAIN_USR\test.txt
select DBMS_METADATA.GET_DDL('TABLE',table_name,'MAIN_USR')
FROM user_tables ut;
spool off
END LOOP;
exit
I know the one above won't work, but it's kind of an idea of what I want to do.
I appreciate any kind of help
You need to read USER_TABLES to create N calls to DBMS_METADATA.GET_DDL, each of which spools to its own file. Spool everything to a file named out.sql and run it after spooling it off:
set pagesize 0
set long 90000
SET TERMOUT OFF
spool out.sql
select 'spool C:\Users\personal\Desktop\MAIN_USR\'||REPLACE(table_name, '$', '_')||'.txt'||chr(13)||chr(10)||
'SELECT DBMS_METADATA.GET_DDL(''TABLE'','''||table_name||''',''MAIN_USR'') FROM DUAL;'||chr(13)||chr(10)||
'spool off' as cmd
FROM user_tables ut;
spool off
@out.sql
exit
Using Python's cx_Oracle module, which enables access to Oracle Database, might be an elegant way for your case:
import cx_Oracle

con = cx_Oracle.connect('uname/pwd@host:port/service_name')
cur = con.cursor()

# return CLOB results as plain strings so the DDL can be written straight to a file
def OutputTypeHandler(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.CLOB:
        return cursor.var(cx_Oracle.LONG_STRING, arraysize=cursor.arraysize)

cur.outputtypehandler = OutputTypeHandler
cur.execute("select table_name from user_tables order by 1")
rec = cur.fetchall()
for r in rec:
    cur.execute("select dbms_metadata.get_ddl('TABLE',:tableName) from dual", tableName=r[0])
    ddl, = cur.fetchone()
    file = 'C:\\Users\\personal\\Desktop\\MAIN_USR\\' + r[0] + '.txt'
    with open(file, "w") as f:
        f.write(ddl)
Here the table names of the whole schema are determined by the first cur.execute, their creation DDL is fetched by the second, and the files are created with the respective table names in the last step. The OutputTypeHandler matters because it avoids having to process the CLOB result coming from DBMS_METADATA.GET_DDL: without it you would either be handed a CLOB locator while writing the files, or the DDL would exceed 4000 characters and the implicit TO_CHAR conversion would raise an error.

how to show data in CSV in different columns using plsql?

I have written SQL to generate the output data in CSV format. I have used spool to generate the CSV.
SET LINESIZE 1000 TRIMSPOOL ON FEEDBACK OFF
SPOOL E:\oracle\extract\emp2.csv
SELECT emp_id,
emp_name
FROM offc.employee
ORDER BY emp_id;
SPOOL OFF
SET PAGESIZE 14
The output file is created in that location and the data is generated correctly, but when I open the CSV file, all the data appears in the same column A.
I want emp_id in column A and emp_name in column B. Why are they coming out in the same column? What is the problem here?
Here, the issue is that a space is being used as the separator for your column data, and Excel cannot split column values that are separated by spaces.
You can use the colsep setting along with the other settings as follows:
set colsep , -- defines column data separator
set pagesize 14 -- defines size of the page. Keep it large so that header is not repeated
set trimspool on -- remove trailing blanks
set lines 1000 -- linesize should be more than sum of width of the all columns
set FEEDBACK OFF -- removes the comment at the end of the data
SPOOL E:\oracle\extract\emp2.csv
SELECT emp_id,
emp_name
FROM offc.employee
ORDER BY emp_id;
SPOOL OFF
Cheers!!
What is the problem here?
Excel expects a Comma Separated Values file to have values in columns separated by commas. Your query outputs two columns of data but doesn't include an explicit separator. So when Excel reads the file it doesn't find any commas, and it renders a single column of data.
There are various ways of solving this. One is include your own explicit separator in the query:
SELECT emp_id
|| ', ' ||
emp_name
FROM offc.employee
ORDER BY emp_id;
Another is to use the SQL*Plus colsep command to format the output in the file. A third option is to use a tool like SQL Developer, whose Export feature handles all this for us.
Talking about my own experience: although Excel knows how to open a CSV file, it is kind of stupid and still puts everything into the first column. Therefore, I prefer creating a TXT file instead, using a column separator (so - yes, it basically is a comma-separated-file (or whichever separator you choose)).
For example:
SQL> set pagesize 100
SQL> set linesize 100
SQL> set colsep ";"
SQL>
SQL> spool emp.txt
SQL>
SQL> select * from emp;
EMPNO;ENAME ;JOB ; MGR;HIREDATE; SAL; COMM; DEPTNO
----------;----------;---------;----------;--------;----------;----------;----------
7369;SMITH ;CLERK ; 7902;17.12.80; 800; ; 20
7499;ALLEN ;SALESMAN ; 7698;20.02.81; 1600; 300; 30
7521;WARD ;SALESMAN ; 7698;22.02.81; 1250; 500; 30
7566;JONES ;MANAGER ; 7839;02.04.81; 2975; ; 20
7654;MARTIN ;SALESMAN ; 7698;28.09.81; 1250; 1400; 30
7698;BLAKE ;MANAGER ; 7839;01.05.81; 2850; ; 30
7782;CLARK ;MANAGER ; 7839;09.06.81; 2450; ; 10
7788;SCOTT ;ANALYST ; 7566;09.12.82; 3000; ; 20
7839;KING ;PRESIDENT; ;17.11.81; 5000; ; 10
7844;TURNER ;SALESMAN ; 7698;08.09.81; 1500; 0; 30
7876;ADAMS ;CLERK ; 7788;12.01.83; 1100; ; 20
7900;JAMES ;CLERK ; 7698;03.12.81; 950; ; 30
7902;FORD ;ANALYST ; 7566;03.12.81; 3000; ; 20
7934;MILLER ;CLERK ; 7782;23.01.82; 1300; ; 10
14 rows selected.
SQL> spool off;
Now, start Excel and go to Open; choose "All files" (i.e. not only Excel-type files) so that you'd see emp.txt listed. Excel then - in its "Text Import Wizard" - asks you which kind of a file it is (choose delimited):
set the separator (semi-colon in our example)
and - open the file:
Everything is now in its own column.

SQL Developer script output to datagrid

In Oracle SQL Developer, I can get simple query results returned in the 'Query Results' grid, but if I need to use a variable in a script, I have to use the 'Run Script' option and my results show up in the 'Script Output' window, and I can't export them to CSV format. Here is my sample code:
var CatCode char(5) ;
exec :CatCode := 'ZK';
SELECT * FROM Products WHERE CategoryCode = :CatCode;
Any help would be appreciated.
Thanks.
Just add a /*csv*/ to your query and the tool will bring back the output in CSV format automatically when it is executed as a script (F5).
Or use a substitution variable instead (&Var rather than :Var); run with F9 and SQL Developer will prompt you for the value.
VAR stcode CHAR(2);
EXEC :stcode := 'NC';
SELECT /*csv*/
*
FROM
untappd
WHERE
venue_state =:stcode;
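For the substitution-variable variant mentioned above, a sketch might look like this (run as a script so the /*csv*/ formatting applies; SQL Developer prompts for STCODE, and untappd/venue_state are the same sample names as above):
SELECT /*csv*/
    *
FROM
    untappd
WHERE
    venue_state = '&stcode';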
Or go straight to the grid so you can use the Grid Export feature.
SELECT
*
FROM
untappd
WHERE
venue_state =:stcode2;
Execute with Ctrl+Enter or F9
Supply the input parameter in the pop up dialog, click OK.
Shazaam.
Here you go, you can run this one to make sure it's working:
set colsep , -- separate columns with a comma
set pagesize 0 -- No header rows
set trimspool on -- remove trailing blanks
set headsep off -- this may or may not be useful...depends on your headings.
set linesize X -- X should be the sum of the column widths
set numw X -- X should be the length you want for numbers (avoid scientific notation on IDs)
spool C:\Users\**directory**\sql\Test1.csv; -- this is the file path to save the data to
var CatCode char(5) ;
exec :CatCode := 'ZK';
SELECT * FROM Products WHERE CategoryCode = :CatCode;
spool off;
Thanks @thatjeffsmith and Paras, the spool option gave me a new direction and it worked. I slightly changed your code and it works great.
var CatCode char(5) ;
exec :CatCode := 'ZK';
set feedback off;
SET SQLFORMAT csv;
spool "c:\temp\spoolTest.csv"
SELECT * FROM Products WHERE CategoryCode = :CatCode;
spool off;
SET SQLFORMAT;
set feedback on;

Error with LONG data type in cursor

I am trying to generate a spool file through the anonymous block below in order to find out which views reference a particular table.
declare
cursor c1 is select view_name, text from user_views;
rt c1%rowtype;
begin
open c1;
loop
fetch c1 into rt;
exit when c1%notfound;
dbms_output.put_line(rt.view_name || '|' || rt.text);
end loop;
end;
When I run it, I get an error like "numeric or value error"; however, if I remove the text (LONG) column from the cursor definition, the block goes through without any error.
I understand that we cannot use the LONG data type in a WHERE clause, but can it not be fetched in a cursor either? If so, what is the alternative in this case?
Not directly addressing the long issue, but if you want to find out which views refer to a particular table, rather than searching through the view source you can query the data dictionary:
select owner, type, name
from all_dependencies
where referenced_type = 'TABLE'
and referenced_owner = user -- or a specific schema
and referenced_name = '<my_table_name>';
This will also list any triggers on the table, etc., so if you are only interested in views you can add and type = 'VIEW'.
Of course, this may just give you a smaller list of views to examine in more detail to see how the table is used by each one, but it's easier than searching all 300 of your views manually... and it might mean you don't need to get the text of the large views with more than 32k characters which are causing the long problem in the first place.
In this case the error indicates that you've hit the buffer limit - dbms_output.put_line won't handle such a large amount of data.
After looking at the problem more closely, it's not the dbms_output.put_line issue, not yet: as Alex Poole has pointed out in the comment to your question, it's the cursor that is the problem. So I would suggest you use a simple SELECT statement (option #2 in the answer). If you go for a workaround
create table <<name>> as
select view_name
, to_lob(text) text
from user_views
for example, you will be able to use the cursor, but then dbms_output.put_line will stop you.
To generate a spool file you have at least two options:
Use UTL_FILE package to write data to a file.
Let SQL*Plus do the job. For example:
set feedback off;
set termout off;
set colsep "|";
set long 20000; -- increase the size if it's not enough
set heading off;
set linesize 150; -- increase the size if it's not enough
spool <<file path\file name>>
select view_name
, text
from user_views;
spool off;
At the end you will have a similar output in your <<file path\file name>> file:
ALL_APPLY_CONFLICT_COLUMNS |select c.object_owner,
| c.object_name,
| c.method_name,
| c.resolution_column, c.column_name,
| c.apply_database_link
| from all_tab_columns o,
| dba_apply_conflict_columns c
| where c.object_owner = o.owner
| and c.object_name = o.table_name
| and c.column_name = o.column_name

Header formatting while spooling a csv file in sqlplus

I am required to spool a csv from a table in Oracle, using sqlplus. Following is the format required:
"HOST_SITE_TX_ID","SITE_ID","SITETX_TX_ID","SITETX_HELP_ID"
"664436565","16","2195301","0"
"664700792","52","1099970","0"
Following is the relevant piece of the shell script I wrote:
sqlplus -s $sql_user/$sql_password@$sid << eof >> /dev/null
set feedback off
set term off
set linesize 1500
set pagesize 11000
--set colsep ,
--set colsep '","'
set trimspool on
set underline off
set heading on
--set headsep $
set newpage none
spool "$folder$filename$ext"
select '"'||PCL_CARRIER_NAME||'","'||SITETX_EQUIP_ID||'","'||SITETX_SITE_STAT||'","'||SITETX_CREATE_DATE||'","'||ADVTX_VEH_WT||'"'
from cvo_admin.MISSING_HOST_SITE_TX_IDS;
spool off
(I have left in some commented-out statements to show the things that I tried but couldn't get to work.)
The output I receive is:
'"'||PCL_CARRIER_NAME||'","'||SITETX_EQUIP_ID||'","'||SITETX_SITE_STAT||'","'||SITETX_CREATE_DATE||'","'||ADVTX_VEH_WT||'"'
"TRANSPORT INC","113","00000000","25-JAN-13 10.17.51 AM",""
"TRANSPORT INC","1905","00000000","25-JAN-13 05.06.44 PM","0"
Which shows that the header is messed up - it literally prints the whole expression instead of interpreting it as part of the SQL statement, as happens for the data rows below it.
Options I am considering:
1) Using colsep
set colsep '","'
spool
select * from TABLE
spool off
This introduces other problems: the data has leading and trailing spaces, and the first and last values on each line are not enclosed in quotes:
HOST_SITE_TX_ID"," SITE_ID"
" 12345"," 16"
" 12345"," 21
I concluded that this method gives me more heartburn than the one I described earlier.
2) Getting the file and using a regex to modify the header.
3) Dropping the header altogether and manually adding a header string at the beginning of the file, using a script.
Option 2 is more doable, but I was still interested in asking whether there might be a better way to format the header, so that it comes out in a regular CSV (comma delimited, double quote bounded) format.
I am looking to do as little hard coding as possible - the table I am exporting has around 40 columns and I am currently running the script for around 4 million records, breaking them into batches of around 10K each. I would really appreciate any suggestions, even ones totally different from my approach - I am a programmer still learning.
One easy way to have a csv with just one header is to do
set embedded on
set pagesize 0
set colsep '|'
set echo off
set feedback off
set linesize 1000
set trimspool on
set headsep off
embedded is a hidden option, but it is important for getting JUST one header
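A sketch of how those settings might be combined with a spool (the file name is a placeholder, and the column list is shortened from the question's table):
spool large_data_file.csv
select PCL_CARRIER_NAME, SITETX_EQUIP_ID, SITETX_SITE_STAT
from cvo_admin.MISSING_HOST_SITE_TX_IDS;
spool off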
This is how I created a header:
set heading off
/* header */
SELECT '"'||PCL_CARRIER_NAME||'","'||SITETX_EQUIP_ID||'","'||SITETX_SITE_STAT||'","'||SITETX_CREATE_DATE||'","'||ADVTX_VEH_WT||'"'
FROM
(
SELECT 'PCL_CARRIER_NAME' AS PCL_CARRIER_NAME
, 'SITETX_EQUIP_ID' AS SITETX_EQUIP_ID
, 'SITETX_SITE_STAT' AS SITETX_SITE_STAT
, 'SITETX_CREATE_DATE' AS SITETX_CREATE_DATE
, 'ADVTX_VEH_WT' AS ADVTX_VEH_WT
FROM DUAL
)
UNION ALL
SELECT '"'||PCL_CARRIER_NAME||'","'||SITETX_EQUIP_ID||'","'||SITETX_SITE_STAT||'","'||SITETX_CREATE_DATE||'","'||ADVTX_VEH_WT||'"'
FROM
(
/* first row */
SELECT to_char(123) AS PCL_CARRIER_NAME
, to_char(sysdate, 'yyyy-mm-dd') AS SITETX_EQUIP_ID
, 'value3' AS SITETX_SITE_STAT
, 'value4' AS SITETX_CREATE_DATE
, 'value5' AS ADVTX_VEH_WT
FROM DUAL
UNION ALL
/* second row */
SELECT to_char(456) AS PCL_CARRIER_NAME
, to_char(sysdate-1, 'yyyy-mm-dd') AS SITETX_EQUIP_ID
, 'value3' AS SITETX_SITE_STAT
, 'value4' AS SITETX_CREATE_DATE
, 'value5' AS ADVTX_VEH_WT
FROM DUAL
) MISSING_HOST_SITE_TX_IDS;
This is how you add a pipe-delimited header to SQL statements. Once you spool it out, that "something" column heading won't be there:
-- this creates the header
select 'header_column1|header_column2|header_column3' as something
From dual
Union all
-- this is where you run the actual sql statement with pipes in it
select
rev.value1 ||'|'||
rev.value2 ||'|'||
'related_Rel' as something
from
...
In Oracle 19 you can use set markup csv on to ensure that CSV output is created.
You can also set the delimiter and optional quote character, or even spool HTML if you prefer.
set markup csv on
spool "$folder$filename$ext"
select q'|wow, I can't believe he said "hello, how are you?", can you believe it!|' as text
from dual;
spool off
quit;
