.sql file not returning the column headers in csv file - oracle

The batch file script (.bat) below calls the SQL file:
del C:\SREE\csvfile.csv
sqlplus SERVERNAME/Test123@ldptstb @C:\SREE\sree.sql
set from_email="SENDER_EMAIL_ID"
set to_email="TO_EMAIL_ID"
set cc_email="CC_EMAIL_ID"
set email_message="Csv file from application server"
set body_email=C:\SREE\sree.txt
set sendmail=C:\Interface\sqlldr\common\SENDMAIL.VBS
set interface_log=C:\SREE\csvfile.csv
cscript %sendmail% -F %from_email% -T %to_email% -C %cc_email% -S %email_message% -B %body_email% -A %interface_log% -O "ATTACHFILE" -A %body_email% -O "FILEASTEXT"
exit
The .sql file below runs the query and spools the data into the CSV file:
set pagesize 0
set heading on
set feedback off
set trimspool on
set linesize 32767
set termout off
set verify off
set colsep ","
spool C:\SREE\csvfile.csv
SELECT Name, ID, Email, Role, Status FROM csvfile;
spool off
exit
The output is stored in the CSV file and I receive the file by email.
But the problem is that the column names are not included in the CSV file. I have tried many scenarios to get the names as column headings in the CSV file.
Can anyone help me with the code to get the column names into the CSV file? Thanks in advance.

When you set pagesize 0, headings are suppressed:
SET PAGES[IZE] {14 | n}
Sets the number of lines on each page of output. You can set PAGESIZE to zero to suppress all headings, page breaks, titles, the initial blank line, and other formatting information.
That's the case even if you then explicitly set heading on.
You can either set pagesize to something very large instead, or, since you probably don't want the separator line of dashes anyway, generate the header yourself with:
PROMPT Name,ID,Email,Role,Status
... before your select statement.
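Putting that together, a minimal sketch of the corrected sree.sql (the spool path and column list are taken from the question; verify them against your actual schema):

```sql
set pagesize 0
set feedback off
set trimspool on
set linesize 32767
set termout off
set verify off
set colsep ","

spool C:\SREE\csvfile.csv
-- PAGESIZE 0 suppresses the generated heading, so emit it manually
PROMPT Name,ID,Email,Role,Status
SELECT Name, ID, Email, Role, Status FROM csvfile;
spool off
exit
```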

Use the GENERATE_HEADER configuration setting and set it to Yes, like:
SET GENERATE_HEADER = 'Yes'
See this related thread: https://community.oracle.com/thread/2325171?start=0&tstart=0

Related

How to force header when creating hive .gz output

How do I make sure each .gz file is created with a header? I am setting the properties below, which give me multiple output files named 00000_0.gz, 00001_0.gz, 00002_0.gz, etc., but these have no header. What syntax do I need to force a header in each file?
BTW, my query is of the form
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/target_dir/' ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' SELECT ...
Properties now set:
set mapred.output.compress=true;
set hive.exec.compress.output=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
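The thread above does not include an answer. One workaround sometimes used with INSERT OVERWRITE DIRECTORY (an assumption on my part, not from the original thread) is to emit the header row as part of the query itself via UNION ALL. Two caveats: the column names below are hypothetical placeholders, and with multiple reducers the header row lands in only one of the output files and its position is not guaranteed, so this gives at best one header per output directory, not per .gz file:

```sql
-- Sketch only: 'id'/'name' are placeholder columns, source_table is hypothetical.
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/target_dir/'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
SELECT * FROM (
  SELECT 'id' AS id, 'name' AS name FROM source_table LIMIT 1  -- literal header row
  UNION ALL
  SELECT CAST(id AS STRING), name FROM source_table            -- actual data, cast to match
) t;
```

A more robust per-file header usually has to be added after the fact, e.g. by post-processing each file in a shell step.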

shell script to pass table name as a parameter in sqlplus?

I have a file called table.txt which stores the table names. I want the sql update query to take the table name one by one from my table.txt file. My code is as follows:
while read LINE1; do
`sqlplus username/pwd@tname <<END |sed '/^$/d'
set head off;
set feedback off;
update &LINE1 set enterprise_id = '1234567890' where enterprise_id is NULL;
update &LINE1 set sim_inventory_id ='1234567890';
COMMIT;
exit;
END`
done < table.txt
It gives an error "sqlplus not found". Can you please tell me what is wrong?
This is nothing to do with passing the table names. The "sqlplus not found" error means it cannot find that binary executable, so it isn't getting as far as trying to connect or run the SQL commands.
Your shell script can only see environment variables from the calling shell if they were exported. If you've modified your PATH to include the location of the sqlplus binary then you may not have exported it; add export PATH after you set it.
Or you can set the script up to not rely on the shell environment.
export ORACLE_HOME=/path/to/oracle/installation
export PATH=${ORACLE_HOME}/bin:$PATH
export LD_LIBRARY_PATH=${ORACLE_HOME}/lib:${LD_LIBRARY_PATH}
while read LINE1; do
sqlplus username/pwd@tname <<END |sed '/^$/d'
set head off;
set feedback off;
update &LINE1 set enterprise_id = '1234567890' where enterprise_id is NULL;
update &LINE1 set sim_inventory_id ='1234567890';
COMMIT;
exit;
END
done < table.txt
Incidentally, updating the same table twice isn't necessary; you could do:
update &LINE1 set enterprise_id = nvl(enterprise_id, '1234567890'),
sim_inventory_id = '1234567890';
It would also be quicker to create a list of all the update statements from your file contents and run them all in a single SQL*Plus session, so you aren't repeatedly creating and tearing down connections. But that's outside the scope of what you asked.
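The single-session idea can be sketched as follows: build one SQL script from table.txt with plain shell, then invoke sqlplus once at the end. The script path, credentials, and connect string are placeholders:

```shell
#!/bin/sh
# Build one SQL*Plus script containing a combined update per table name.
SCRIPT=/tmp/updates.sql

: > "$SCRIPT"                       # truncate/create the script
while read -r TABLE; do
  [ -z "$TABLE" ] && continue       # skip blank lines
  cat >> "$SCRIPT" <<EOF
update $TABLE set enterprise_id = nvl(enterprise_id, '1234567890'),
    sim_inventory_id = '1234567890';
EOF
done < table.txt

printf 'COMMIT;\nexit;\n' >> "$SCRIPT"

# One connection instead of one per table (placeholder credentials):
# sqlplus username/pwd@tname @"$SCRIPT"
```

Note that, unlike the original loop, nothing inside this loop reads from stdin, so the heredoc does not interfere with reading table.txt.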

how to use batch script to export data from oracle into excel file with column names

This is my SQL script (daily.sql), called from a .bat file:
set linesize 4000 pagesize 0 colsep ','
set heading off feedback off verify off trimspool on trimout on
spool &1
select *
from siebel.s_contact;
spool off
exit;
Then I try to run the command below. I can generate an Excel file with data, but the column names are missing.
C:\Users\jy70606\Documents\Script>sqlplus -s geneos/password@database @C:\Users\jy70606\Documents\Script\daily.sql daily.csv
My question is: how do I make the column names appear in my Excel file?
Your script contains the following command, which removes the column headings:
set heading off
You need to switch it on:
set heading on
Note that the script also contains set pagesize 0, which suppresses headings even when heading is on, so set pagesize to a non-zero value as well.
Reference: https://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12040.htm#i2699001
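A sketch of the corrected daily.sql with both settings adjusted (the table and linesize are from the question; the pagesize value is an arbitrary large choice):

```sql
set linesize 4000 pagesize 50000 colsep ','
set heading on feedback off verify off trimspool on trimout on
spool &1
select *
from siebel.s_contact;
spool off
exit;
```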

Oracle/SQL PLUS: How to spool a log and write intermittently throughout script

Figuring out how to spool to a file has been easy enough. I am hoping there is an option to write to the text file after each command completes. I need a way to communicate the status of a long script to the rest of my team; the plan was to write a log file to a network drive so they could follow along as the script executes.
However, spool seems to write output to the file only when the spool off; command at the end of the script is reached.
Is there any way to achieve what we're trying to do, either with spooling a log file or another method?
Here is the code I have so far.
set timing on;
set echo on;
column date_column new_value today_var
select to_char(current_timestamp, 'yyyymmdd_HH24_MI') as date_column
from dual
/
select current_timestamp from dual;
SPOOL 'Z:\log\KPI\secondary_reporting_&today_var..log'
... lots of stuff...
spool off;
As far as I know there's no way to control when spooled output is written to the file. One way around this, though, is to abandon spooling altogether and just redirect the output:
$ sqlplus @/path/to/script.sql >& /path/to/script.log
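A variation on that redirection, if you also want to watch the output locally while it is written to the network log, is to pipe through the standard tee utility (paths and credentials below are placeholders; depending on stdio buffering in sqlplus, lines may still arrive in chunks rather than strictly one at a time):

```shell
#!/bin/sh
# Mirror sqlplus output to the terminal and to a log file as it is produced.
# 2>&1 folds errors into the same stream; all paths are placeholders.
sqlplus user/pwd@db @/path/to/script.sql 2>&1 | tee /path/to/script.log
```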
Two methods come to mind, depending on what your 'stuff' is.
1) If your code has lots of SQL statements and PL/SQL blocks then you can repeatedly spool for a little while. Use the spool <filename> append statement for this.
SQL> help spool
SPOOL
-----
Stores query results in a file, or optionally sends the file to a printer.
In iSQL*Plus, use the Preferences screen to direct output to a file.
SPO[OL] [file_name[.ext] [CRE[ATE] | REP[LACE] | APP[END]] | OFF | OUT]
Not available in iSQL*Plus
2) If you have long running PL/SQL procedures use the UTL_FILE package. See https://docs.oracle.com/html/B14258_02/u_file.htm for more information. This does require some setup and administrative privileges in the database to set up a directory where writing is allowed.
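A minimal sketch of the append pattern from option 1 (the log path and step contents are placeholders): each section closes and reopens the spool file, so the file on the network drive grows after every step instead of only at the very end:

```sql
spool Z:\log\KPI\progress.log
prompt Step 1 starting...
-- ... first batch of statements ...
spool off

spool Z:\log\KPI\progress.log append
prompt Step 2 starting...
-- ... second batch of statements ...
spool off
```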

How to write execution result of script to a file?

I have a script a2create. I must save the execution result to a file. The instructions say to remember to put the SQL*Plus command set echo on at the front of the file. I tried this:
@a2create set echo on
or
set echo on @a2create
or
@a2create set echo on a2.lst
but none of these worked.
I want to write the result to a file called a2.lst. I'd appreciate any help.
Try the following,
define spool_file = 'a2.lst'
SET ECHO OFF
SET NEWPAGE 0
SET SPACE 0
SET PAGESIZE 0
SET FEEDBACK OFF
SET HEADING OFF
spool a2.lst;
@a2create
spool off;
