We are using SQL*Plus to export data to a CSV file. We have both decimal and text columns, and each field is delimited by a comma (,), but some text columns also contain commas. While importing the data into another database, we see that these commas inside the text are treated as column separators. Can anyone tell me how I can overcome this?
set tab off
SET head OFF
SET feedback OFF
SET pagesize 0
SET linesize 3000;
SET colsep ,
set trimspool on
set trimout on
set trims on
set null ""
set rowprefetch 2
set feedback off
set arraysize 1000
set PAGESIZE 50000
set STATEMENTCACHE 20
set numwidth 15
column columnName1 format 999999999.99
column columnName2 format 999999999.99
column columnName3 format 999999999.99
column columnName4 format 999999999.99
If you are able to upgrade to SQL*Plus 12.2, you can use set markup csv on:
SQL> set markup csv on
SQL> select * from departments;
"DEPARTMENT_ID","DEPARTMENT_NAME","MANAGER_ID","LOCATION_ID"
10,"Administration",200,1700
20,"Marketing",201,1800
30,"Purchasing",114,1700
40,"Human Resources",203,2400
50,"Shipping",121,1500
60,"IT",103,1400
70,"Public Relations",204,2700
80,"Sales",145,2500
90,"Executive",100,1700
100,"Finance",108,1700
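A minimal end-to-end sketch of that approach, spooling straight to a file (the credentials, connect string, and table name here are placeholders):

```shell
# Sketch only: requires SQL*Plus 12.2+ and a reachable database.
# user/password@db and departments are placeholder names.
sqlplus -s user/password@db <<'EOF'
set markup csv on
set feedback off
spool departments.csv
select * from departments;
spool off
EOF
```

Note that set markup csv on quotes text columns and doubles any embedded double quotes for you, which is exactly what the comma-in-data problem needs.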
Just concatenate a double quote character to the start and end of columns with a string data type:
SELECT '"' || stringColumnA || '"' AS stringColumnA,
       numberColumnB,
       '"' || stringColumnC || '"' AS stringColumnC
       -- ...
FROM   table_name;
If your columns may already contain double quotes, then escape them by doubling them up:
SELECT '"' || REPLACE( stringColumnA, '"', '""' ) || '"' AS stringColumnA,
       numberColumnB,
       '"' || REPLACE( stringColumnC, '"', '""' ) || '"' AS stringColumnC
       -- ...
FROM   table_name;
Don't set the column separator to a comma, but to something else (such as a pipe |, an exclamation mark !, a hash #, or some other character that doesn't exist in the data you're about to export). You'd, of course, use that same character as the separator while loading the data into the target database.
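For example, the export side of that approach might look like this (a sketch; the credentials, connect string, and table name are placeholders):

```shell
# Sketch only: spool with a pipe separator instead of a comma.
# user/password@db and my_table are placeholder names.
sqlplus -s user/password@db <<'EOF'
set colsep '|'
set pagesize 0
set feedback off
set heading off
set trimspool on
spool data.psv
select * from my_table;
spool off
EOF
```

You'd then declare | as the field terminator on the loading side, e.g. FIELDS TERMINATED BY '|' in a SQL*Loader control file.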
If you are in a Linux environment, you can use ~ as the separator and then sed to fix up the output. With this method, you don't have to know the table's contents in order to create the CSV file.
The script below wraps every field in double-quotes and drops the trailing delimiter:
ORD="01"
TABLE="MY_DATA_TBL"
CONN="127.0.0.1:31521/abc0008.world" # via ssh tunnel
sqlplus -L login/pswd@//${CONN} <<EOF >/dev/null
set pagesize 4000;
set verify off;
set feedback off;
set long 99999;
set linesize 32767;
set trimspool on;
col object_ddl format A32000;
set colsep ~;
set underline off;
set headsep off;
spool ${ORD}${TABLE}.tmp1;
select * from ${TABLE};
EOF
cat ${ORD}${TABLE}.tmp1 | sed -e "s/\"/'/" -e 's/ * / /g' -e "s/^ //" -e "s/ ~/~/g" -e "s/~ /~/g" | tail -n +11 | head -n -1 > ${ORD}${TABLE}.tmp2
head -n 1 ${ORD}${TABLE}.tmp2 | sed -e "s/$/~/" > ${ORD}${TABLE}.tmp3
tail -n +2 ${ORD}${TABLE}.tmp2 >> ${ORD}${TABLE}.tmp3
cat ${ORD}${TABLE}.tmp3 | sed -e "s/^/\"/" -e "s/~$/\"/" -e "s/~/\",\"/g" > ${ORD}${TABLE}.csv
Related
I have used the method of assigning the SQL output to a variable, as below.
dbRole=$(${SQLPLUSPGM} -s / as sysdba <<-EOF
set head off
set verify off
set feedback off
select trim(translate(database_role,' ','_')) from v\$database;
exit;
EOF
)
But the variable's output has a "\n" character prepended, i.e. \nPHYSICAL_STANDBY.
However, when I use the method below, it works fine:
${SQLPLUSPGM} -s / as sysdba <<-EOF | grep -v '^$' | read dbRole
set head off
set verify off
set feedback off
select trim(translate(database_role,' ','_')) from v\$database;
exit;
EOF
Any suggestion as to why it is appending the `\n`, and how I can get rid of it?
Appreciate your suggestions.
Your second method, with the grep -v, removes the additional line.
You can use a similar filter inside your first method, with additional parentheses:
dbRole=$( (cat | grep -v "^$") <<-EOF
1
2
3
5
EOF
)
Alternative filters with some differences are grep ., head -1, sed '$d'.
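The leading \n appears because the heredoc output begins with a blank line, and command substitution only strips trailing newlines, not leading ones. A small database-free sketch of how the grep -v '^$' filter fixes that:

```shell
#!/bin/sh
# Simulated query output: a blank line followed by the real value,
# like the SQL*Plus heredoc case above.
out="
PHYSICAL_STANDBY"

# $(...) strips trailing newlines but keeps the leading blank line...
unfiltered=$(printf '%s\n' "$out")

# ...so filter empty lines out, as the second method does with grep -v '^$'
filtered=$(printf '%s\n' "$out" | grep -v '^$')

echo "$filtered"   # PHYSICAL_STANDBY
```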
#cat file.txt
12354
13456
13498
#!/bin/bash
for i in `cat file.txt`
do
sqlplus XXXXX/XXXXX@DB_NAME << EOF
select *from TABLE_NAME where id="$i"
EOF
done
This is not working for me. Please help me solve it.
The solution given by @codeforester works. However, I was unable to use it because it creates as many DB connections as there are lines in the file, which is a potential performance concern.
To overcome this, I chose the solution below, which may not be ideal but does the job with just one DB connection.
Considering the same data in file.txt
12354
13456
13498
I used the sed command below to join those lines into a single variable, "12354,13456,13498":
myvariable=$(sed '$!s/$/,/' file.txt | tr -d '\n' | tr -d ' ')
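As an aside, if paste is available, the same comma-joined list can be built in one step (a sketch, assuming file.txt holds one id per line):

```shell
#!/bin/sh
# Hypothetical sample data matching file.txt above
printf '12354\n13456\n13498\n' > file.txt

# paste -s joins all lines into one; -d, uses a comma as the join character
myvariable=$(paste -sd, file.txt)
echo "$myvariable"   # 12354,13456,13498
```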
The script below will then pass this variable to the SQL query and spool the data into a text file:
#!/bin/bash
myvariable=$(sed '$!s/$/,/' file.txt | tr -d '\n' | tr -d ' ')
echo "$myvariable"
sqlplus /nolog << EOF
CONNECT user/dbpassword@dbname
SPOOL dboutput.txt
select column1 from table_name where id in ($myvariable);
SPOOL OFF
EOF
The output is stored in dboutput.txt (along with the SQL query)
cat dboutput.txt
SQL> select column1 from table_name where id in (12354,13456,13498);
NAME
---------------------------------------------------------------------------- ----
data1
data2
data3
SQL> spool off
Here is the right way to use the heredoc <<, along with the choice of while read instead of for to read the file:
#!/bin/bash
while read -r value; do
  # make sure the closing heredoc marker "EOF" is not indented
  sqlplus xxxxx/xxxxx@db_name << EOF
select * from table_name where id='$value';
EOF
done < file.txt
See also:
How can I write a here doc to a file in Bash script?
BashFAQ/001, to understand why a for loop is not the best way to read lines of text from a file.
When I try running the script, I am getting the error line 45: syntax error: unexpected end of file. I am relatively new to scripting. Please help me resolve it.
#!/bin/ksh
set -xv
export HOME=/home/mine
. $HOME/.env.ksh
BIS_SPOOL=/tmp/bis_table_mine.spl
BIS_REPORT_MINE=/tmp/bis_table_report_mine.txt
touch $BIS_SPOOL
rm $BIS_SPOOL
touch $BIS_SPOOL
exec 5< $BIS_REPORT_MINE
while read -u5 REC_MINE
do
TBLENAME=`echo "$REC_MINE" | awk '{print $3}' | tr '[:upper:]' '[:lower:]'`
sqlplus -s ${USER_ID}/${USER_PASS}@${ORACLE_SID} <<- EOF
set feedback off
set hea ON
set pagesize 9999
set linesize 9999
set trimspool ON
set termout off
spool $BIS_SPOOL append
Column C1 Heading 'Job Name' Format a30
Column C2 Heading 'Table Name' Format a30
SELECT job_name C1,
table_name C2,
FROM table_usage
WHERE table_name like 'TBLENAME%'
/
exit;
EOF
done
exec 5<& -
The <<- EOF is interpreted literally. Make it
sqlplus -s ${USER_ID}/${USER_PASS}@${ORACLE_SID} <<-EOF
without the space character.
Also make sure there is no space character before or after your closing EOF, but leading tab characters are allowed.
IHTH
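A quick database-free sketch of the closing-marker rule (the marker must sit alone on its line, with nothing before or after it):

```shell
#!/bin/sh
# The closing marker line must contain only "EOF": no trailing spaces,
# and no leading spaces (leading tabs are allowed only with <<-).
msg=$(cat << EOF
hello from the heredoc
EOF
)
echo "$msg"   # hello from the heredoc
```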
I have a text file in the format below, and I want to write a bash script that stores the column names (adastatus, type, bodycomponent, ...) in a variable, say x1.
# col_name data_type comment
adastatus string None
type string None
bodycomponent string None
bodytextlanguage string None
copyishchar string None
Then, for each of the column names in x1, I want to run a loop:
alter table tabelname change x1(i) x1(i) DOUBLE;
How about:
#!/bin/sh
for i in `cut -f1 yourfile.txt`
do
SQL="alter table tablename change $i $i DOUBLE"
sql_command "$SQL"
done
awk '$1 !~ /^#/ {if ($1) print $1}' in.txt | \
xargs -I % echo "alter table tabelname change % % DOUBLE"
Replace echo with the command needed to run the alter command (from @Severun's answer it sounds like sql_command).
Using awk, this matches only input lines that do not start with # (ignoring leading whitespace) and are non-empty, then prints the first whitespace-separated token, i.e., the first column value from each line.
xargs then invokes the target command once for each column name, substituting the column name for %. Note that % as the placeholder was arbitrarily chosen via the -I option.
Try:
#!/bin/bash
while read col1 _ _
do
[[ "$col1" =~ \#.* ]] && continue # skip comments
[[ -z "$col1" ]] && continue # skip empty lines
echo alter table tabelname change ${col1}\(i\) ${col1}\(i\)
done < input.txt
Output:
$ ./c.sh
alter table tabelname change adastatus(i) adastatus(i)
alter table tabelname change type(i) type(i)
alter table tabelname change bodycomponent(i) bodycomponent(i)
alter table tabelname change bodytextlanguage(i) bodytextlanguage(i)
alter table tabelname change copyishchar(i) copyishchar(i)
Change echo to a more appropriate command.
Say I have table A with columns
col1 col2 col3 col4
-------------------
sajal singh 28 IND
hello how are you
I want to export the data into a flat file without spaces or tabs between the columns.
So the output should be
cat dump
sajalsingh28IND
hellohowareyou
What I have tried: I have written a script
#!/usr/bin/bash
#the file where sql output will go
OUT=report.txt
>$OUT
DESC=desc.txt
>$DESC
sqlplus -s "xxx/xxx@xxx" << END_SQL > /dev/null
set pages 0
set feedback off
set heading off
set trimspool off
set termout off
set verify off
set wrap off
SPOOL $DESC
Select * from table_name;
SPOOL OFF
END_SQL
But I am getting the output of one row spread over multiple lines, and with tabs/spaces.
So the question is: how can I fix that? And:
I found some data pump utilities like expdp. Can I use that in Unix? If yes, how can I achieve the dump in that format?
Thank you
If you already have a CSV dump, then you can run the following command:
awk 'BEGIN{FS=",";OFS=""}{$1=$1}1' csv.dump > new.dump
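For instance, against a small CSV (this sketch assumes no quoting and no embedded commas in the data):

```shell
#!/bin/sh
# Hypothetical sample matching the question's data
printf 'sajal,singh,28,IND\nhello,how,are,you\n' > csv.dump

# FS="," splits each line on commas; OFS="" rejoins with nothing;
# the $1=$1 assignment forces awk to rebuild the record using OFS
awk 'BEGIN{FS=",";OFS=""}{$1=$1}1' csv.dump > new.dump
cat new.dump
```

This produces sajalsingh28IND and hellohowareyou, as the question asks.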
Untested:
SET HEADING OFF
SET FEEDBACK OFF
SPOOL $DESC
SELECT col1 ||''|| col2 ||''|| col3 FROM table_name;
SPOOL OFF;
From a "simplified oracle view" to "plain" characters with sed:
sed -n '3,$ s/\s//gp' file
$ cat file
col1 col2 col3 col4
-------------------
sajal singh 28 IND
hello how are you
$ sed -n '3,$ s/\s//gp' file
sajalsingh28IND
hellohowareyou
Explanation: replace all white space (not line breaks) from line 3 to EOF with "nothing".
If you want the columns padded out but no additional spaces between the columns you can do:
set colsep ""
The default is to have a single space between the double-quotes, which puts a single space between the columns. You might also want to do:
set tab off
... to ensure that multiple spaces in the padding aren't converted to tabs, which look fine on screen but would be a pain to parse in the file.
If you want no spaces at all, to do this within SQL*Plus you'd need to concatenate the columns:
select col1 || col2 || col3 || col4 from table_name;
This is useful if you're putting a delimiter between the columns (e.g. making it a CSV), but I don't know what you'd be able to do with the data in the file if you squashed everything together without delimiters.
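For example, a pipe-delimited variant of the same idea might look like this (a sketch; the credentials and column names are placeholders):

```shell
# Sketch only: requires SQL*Plus and a reachable database.
# user/password@db, table_name, and col1..col4 are placeholder names.
sqlplus -s user/password@db <<'EOF'
set pagesize 0
set feedback off
set heading off
set trimspool on
spool delimited.txt
select col1 ||'|'|| col2 ||'|'|| col3 ||'|'|| col4 from table_name;
spool off
EOF
```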
So this is what I came up with: it dumps Oracle data without any spaces between the columns, while preserving the spaces within the data. I thought I would share it with you.
#!/usr/bin/bash
#the file where sql output will go
OUT=report.txt
>$OUT
DESC=desc.txt
>$DESC
TABLE_NAME=$1
###GET DESCRIBE####
s=""
#######################
sqlplus -s "xxx/xxx@xxx" << END_SQL > /dev/null
set pages 0
set feedback off
set heading off
set trimspool off
set termout off
set verify off
set wrap off
SPOOL $DESC
desc $TABLE_NAME;
SPOOL OFF
END_SQL
#######################
for i in `cat $DESC | awk '{print $1}' | grep -v -i name | grep -v -- -`
do
s=$s"trim($i)||'|'||"
done
s=`echo "$s" | sed "s/||'|'||$//"`
echo $s
#######################
#sqlplus - silent mode
#redirect /dev/null so that output is not shown on terminal
sqlplus -s "xxx/xxx@xxx" << END_SQL > /dev/null
set pages 0
set feedback off
set heading off
set trimspool off
set termout off
set verify off
set colsep ""
set tab off
set lines 1500
SPOOL $OUT
select $s from $TABLE_NAME;
SPOOL OFF
END_SQL
#######################
cat $OUT|sed "s/|//g"|sed "s/ *$//g" >$OUT.new
mv $OUT.new $OUT
echo Finished writing report $OUT
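The string-building stage of the script above can be exercised on its own, without a database (the column names here are made up):

```shell
#!/bin/sh
# Build the trim(col)||'|'|| ... select list the same way the script does
s=""
for i in COL_A COL_B COL_C
do
  s=$s"trim($i)||'|'||"
done
# Strip the trailing ||'|'|| left over from the last loop iteration
s=`echo "$s" | sed "s/||'|'||$//"`
echo "$s"   # trim(COL_A)||'|'||trim(COL_B)||'|'||trim(COL_C)
```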