I have to generate a file.sql file in which I want to hold some DDL statements.
First it should create a database user and schema and grant some permissions. Then it should make copies of some tables into the newly created schema. The table names are held in a bash array variable, and the number of tables in the array may vary.
I cannot do all of this in one cat > file.sql << EOF ... EOF block, because the number of CREATE TABLE and ALTER TABLE commands I need to generate depends on the table count.
I made a generate.sh script with variables and three bash functions:
TABLES=( "table_one" "table_two" "table_three" "table_four" "table_five")
SOURCE_SCHEMA="src"
TARGET_SCHEMA="tgt"
FILE=file.sql
function create_prepare_sql_v1 {
cat > $FILE << EOF_SQL_V1
...here goes create user/schema and grant statements...
EOF_SQL_V1
}
function create_prepare_sql_v2 {
count=1
for table in "${!TABLES[@]}"; do
( echo -e "CREATE TABLE $SOURCE_SCHEMA.${TABLES[$table]} AS TABLE $TARGET_SCHEMA.${TABLES[$table]};" \
"\nALTER TABLE $SOURCE_SCHEMA.${TABLES[$table]} ADD PRIMARY KEY(id);" \
"\nINSERT INTO $SOURCE_SCHEMA.temp VALUES ($count, '$TARGET_SCHEMA', '${TABLES[$table]}');" ) >> $FILE
((count++))
done
}
function create_prepare_sql_v3 {
cat >> $FILE << 'EOF_SQL_V3'
...here I put my custom PL/pgSQL functions...
EOF_SQL_V3
}
When I execute the functions, I verify that file.sql still contains everything by adding and replacing step#_done marker strings; when all steps are done, I remove the step3_done string.
It works, but it seems like a very weak solution:
create_prepare_sql_v1 && echo "step1_done" >> $FILE || exit 1
create_prepare_sql_v2 && sed -i 's/step1_done/step2_done/' $FILE || exit 1
create_prepare_sql_v3 && sed -i 's/step2_done/step3_done/' $FILE || exit 1
if ! fgrep -q 'step3_done' $FILE ; then
echo "Something went wrong. Exiting.."
exit 1
fi
sed -i '/step3_done/d' $FILE
echo "All good."
After generation, file.sql should look something like this:
\connect dbname
CREATE USER u_user WITH PASSWORD 'blah';
CREATE SCHEMA IF NOT EXISTS src AUTHORIZATION u_user;
GRANT USAGE ON SCHEMA src TO u_user;
CREATE TABLE src.temp (
id INTEGER PRIMARY KEY,
schema_name VARCHAR(100),
table_name VARCHAR(100),
truncated BOOLEAN DEFAULT false
);
CREATE TABLE src.table_one AS TABLE tgt.table_one;
ALTER TABLE src.table_one ADD PRIMARY KEY(id);
INSERT INTO src.temp VALUES (1, 'tgt', 'table_one');
CREATE TABLE src.table_two AS TABLE tgt.table_two;
ALTER TABLE src.table_two ADD PRIMARY KEY(id);
INSERT INTO src.temp VALUES (2, 'tgt', 'table_two');
CREATE TABLE src.table_three AS TABLE tgt.table_three;
ALTER TABLE src.table_three ADD PRIMARY KEY(id);
INSERT INTO src.temp VALUES (3, 'tgt', 'table_three');
etc
CREATE OR REPLACE FUNCTION ..
CREATE OR REPLACE FUNCTION ..
CREATE OR REPLACE FUNCTION ..
If it were only SQL statements with a constant number of tables, there would be no problem doing it all in a single cat > $FILE << EOF <some_code> EOF block...
Can you please offer a better solution?
Thanks!
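A possible improvement, sketched below (my own suggestion, not a tested drop-in): let bash abort on the first failure instead of tracking progress with marker strings, build the output under a temporary name, and only rename it to file.sql once every step has succeeded. The temp-file handling is an assumption on my part; everything else mirrors the original script.
#!/usr/bin/env bash
set -euo pipefail                      # abort on the first failed command or unset variable
TABLES=( "table_one" "table_two" "table_three" "table_four" "table_five" )
SOURCE_SCHEMA="src"
TARGET_SCHEMA="tgt"
FILE=file.sql
TMP=$(mktemp "${FILE}.XXXXXX")         # build here, rename into place at the end
# step 1: user/schema/grant header (same heredoc as create_prepare_sql_v1)
cat > "$TMP" << EOF_SQL
...here goes create user/schema and grant statements...
EOF_SQL
# step 2: per-table statements; printf keeps the quoting readable
count=1
for table in "${TABLES[@]}"; do
printf 'CREATE TABLE %s.%s AS TABLE %s.%s;\n' "$SOURCE_SCHEMA" "$table" "$TARGET_SCHEMA" "$table" >> "$TMP"
printf 'ALTER TABLE %s.%s ADD PRIMARY KEY(id);\n' "$SOURCE_SCHEMA" "$table" >> "$TMP"
printf "INSERT INTO %s.temp VALUES (%d, '%s', '%s');\n" "$SOURCE_SCHEMA" "$count" "$TARGET_SCHEMA" "$table" >> "$TMP"
count=$((count + 1))                   # ((count++)) can trip set -e when the old value is 0
done
# step 3: custom PL/pgSQL functions (quoted delimiter keeps the body literal)
cat >> "$TMP" << 'EOF_SQL'
...here I put my custom PL/pgSQL functions...
EOF_SQL
mv "$TMP" "$FILE"                      # the final file only appears if everything above succeeded
echo "All good."
With set -euo pipefail doing the error handling, the step#_done markers and the sed edits are no longer needed: if any command fails, the script exits and the half-built temp file never replaces file.sql.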
Related
I have a requirement where I need to parameterize the generation of one extract file from multiple Oracle tables through a UNIX shell script.
Here is the script I have written to generate one tab-delimited file which fetches all the data from the EMPLOYEE table.
I need to parameterize the TABLE_NAME, OWNER_NAME, USERNAME, PASSWORD and HOST to generate extracts from 12 more tables.
So, I would like to have only one SQL script that dynamically generates the extract for the 12 tables by passing these parameter values when executing the script.
Could you please show me how to modify the below script and how to pass the parameters during script execution?
The second requirement is to generate the file incrementally based on a column, for example ETL_UPDATE_TS. Can you please show me this also?
Sample script:
#!/usr/bin/ksh
TD=/mz/mz01/TgtFiles
MD=/mz/mz01/Scripts
#CAQH_Server=sftp.org
#UN=user
#PWD=password
#RD=Incoming
#RD=/home/
cd $TD
FILE="EMPLOYEE.TXT"
sqlplus -s scott/tiger@db <<EOF
SET PAGES 999
SET COLSEP " "
SET LINES 999
SET FEEDBACK OFF
SPOOL $FILE
SELECT * FROM EMP;
SPOOL OFF
EXIT
EOF
Handle your parameters in a similar way to what you did for the $FILE variable, and pass them as options to the script:
#!/usr/bin/ksh
TD=/mz/mz01/TgtFiles
MD=/mz/mz01/Scripts
cd $TD
FILE="undefined"
TABLE="undefined"
while getopts :f:t: opt
do
case $opt in
f) FILE=${OPTARG} ;;
t) TABLE=${OPTARG} ;;
*) echo "invalid flag" ;;
esac
done
if [ "$TABLE" == "undefined" ]; then
echo "ERROR. TABLE is undefined, use -f option."
exit 1
fi
# More required variables checks here
# create more options to parameterize connection
sqlplus -s scott/tiger@db <<EOF
SET PAGES 999
SET COLSEP " "
SET LINES 999
SET FEEDBACK OFF
SPOOL $FILE
SELECT * FROM $TABLE;
SPOOL OFF
EXIT
EOF
And execute it as:
my_script.sh -f "EMPLOYEE.TXT" -t "EMP"
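The second requirement (incremental extraction based on ETL_UPDATE_TS) would need the script to remember the last run. A hedged sketch of one way to do it, assuming the table really has an ETL_UPDATE_TS column; the state file name and timestamp format are illustrative, not from the original script:
# where we remember the previous successful run (assumed location)
LAST_RUN_FILE=/mz/mz01/Scripts/last_run_${TABLE}.txt
LAST_RUN=$(cat "$LAST_RUN_FILE" 2>/dev/null)
: "${LAST_RUN:=1900-01-01 00:00:00}"   # first run: pull everything
THIS_RUN=$(date '+%Y-%m-%d %H:%M:%S')  # captured before the query runs
sqlplus -s scott/tiger@db <<EOF
SET PAGES 999
SET FEEDBACK OFF
SPOOL $FILE
SELECT * FROM $TABLE
WHERE ETL_UPDATE_TS > TO_TIMESTAMP('$LAST_RUN', 'YYYY-MM-DD HH24:MI:SS');
SPOOL OFF
EXIT
EOF
echo "$THIS_RUN" > "$LAST_RUN_FILE"    # advance the marker for the next incremental pull
Capturing THIS_RUN before the query and writing it afterwards means rows updated while the extract runs are picked up next time rather than skipped.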
I have a PostgreSQL query that I'd like to run for multiple geographic areas via a loop. I want to use the elements in the array to modify the query and the name of the csv file where I'm exporting the data to. So in essence, I want the query to run on ...cwa = 'MFR'... and export to hourly_MFR.csv, then run on ...cwa = 'PQR'... and export to hourly_PQR.csv, and so on.
Here's what I have so far. I thought maybe the EOF in the script might be causing problems, but I couldn't figure out how to get the loop to work while maintaining the general format of the script.
Also, the query/script, without the looping (excluding declare, for, do, done statements) works fine.
dbname="XXX"
username="XXXXX"
psql $dbname $username << EOF
declare -a arr=('MFR', 'PQR', 'REV')
for i in "${arr[#]}"
do
\COPY
(SELECT d.woyhh,
COALESCE(ct.ct, 0) AS total_count
FROM
(SELECT f_woyhh(d::TIMESTAMP) AS woyhh
FROM generate_series(TIMESTAMP '2018-01-01', TIMESTAMP '2018-12-31', interval '1 hour') d) d
LEFT JOIN
(SELECT f_woyhh((TIME)::TIMESTAMP) AS woyhh,
count(*) AS ct
FROM counties c
JOIN ltg_data d ON ST_contains(c.the_geom, d.ltg_geom)
WHERE cwa = $i
GROUP BY 1) ct USING (whh)
ORDER BY 1) TO /var/www/html/GIS/ltg_db/bigquery/hourly_$i.csv CSV HEADER;
done
EOF
Thanks for any help!
I think you are nearly there, you just have to reorder some lines. Try this:
dbname="XXX"
username="XXXXX"
declare -a arr=('MFR' 'PQR' 'REV')
for i in "${arr[@]}"
do
psql $dbname $username << EOF
\copy (SELECT d.woyhh, COALESCE(ct.ct, 0) AS total_count FROM (SELECT f_woyhh(d::TIMESTAMP) AS woyhh FROM generate_series(TIMESTAMP '2018-01-01', TIMESTAMP '2018-12-31', interval '1 hour') d) d LEFT JOIN (SELECT f_woyhh((TIME)::TIMESTAMP) AS woyhh, count(*) AS ct FROM counties c JOIN ltg_data d ON ST_contains(c.the_geom, d.ltg_geom) WHERE cwa = '$i' GROUP BY 1) ct USING (woyhh) ORDER BY 1) TO /var/www/html/GIS/ltg_db/bigquery/hourly_$i.csv CSV HEADER
EOF
done
The declare and the for loop are part of the bash script, while everything between <<EOF and EOF is sent to psql. Two details matter here: \copy is a psql meta-command that must be written on a single line, and $i has to be wrapped in single quotes so it reaches PostgreSQL as a string literal ('MFR'), not a bare identifier.
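If the quoting inside the heredoc gets fiddly, psql variables are worth knowing about; this is an aside of mine, not part of the answer. Passing the value with -v lets psql do the quoting, since :'cwa' expands to a correctly quoted SQL literal (counties and cwa are taken from the query above):
psql -v cwa="$i" $dbname $username << EOF
SELECT count(*) FROM counties WHERE cwa = :'cwa';
EOF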
In @Lienhart Woitok's answer above, the solution will definitely work. Note, however, that it has the side effect of opening a new psql session on every iteration of the loop: connection setup, authentication, the query response, and teardown are repeated each time.
In this case you are only running 3 iterations, so it may not be a significant issue. But if you expand the usage to many more iterations, you may want to optimize it to use a single DB connection and batch the queries.
To do that, a temporary working file for building up the SQL commands is a simple option. There are other ways, but this one is easy to use and debug:
QUERY_FILE=$(mktemp /tmp/query.XXXXXXX)
# note the use of an array isn't really necessary in this use
# case - and a simple set of values can be used equally as well
CWA="MFR PQR REV"
for i in $CWA
do
cat <<EOF >> $QUERY_FILE
<ADD_YOUR_QUERY_STATEMENTS_HERE>
EOF
done
psql --file=$QUERY_FILE $dbname $username
if (( $? ))
then
echo "query failed (QUERY_FILE: ($QUERY_FILE')"
exit 1
else
echo "query succeeded"
rm -f $QUERY_FILE
exit 0
fi
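One caveat worth adding (my note, not part of the answer above): by default psql keeps executing after a failed statement and still exits 0, so the $? check only catches connection-level failures. Asking psql to stop on the first error makes the exit status meaningful, and --single-transaction additionally rolls the whole batch back on failure:
psql -v ON_ERROR_STOP=1 --single-transaction --file=$QUERY_FILE $dbname $username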
I am trying to fetch a row with more than one column value into different shell variables. In fact, I found that all the column values can be stored in a single shell variable at a time, but how can I put those column values into separate shell variables? Below is an example I am trying for the time being:
function sqlQuery {
sqlplus -S shiyas/********* <<'EOF'
set heading OFF termout ON trimout ON feedback OFF
set pagesize 0
SELECT name,open_mode from v$database;
EOF
}
OUTPUT="$( sqlQuery )"
echo $OUTPUT
Here I am getting the output as
ORCL READ WRITE
But my requirement is column values ORCL, READ WRITE should get assigned to different shell variable.
I tried the below kind of parsing:
echo "$OUTPUT" | while read name open_mode
but it was throwing an 'unexpected end of file' error:
-bash-3.2$ sh call_sql_col_val_1.sh
ORCL READ WRITE
call_sql_col_val_1.sh: line 18: syntax error: unexpected end of file
Please let me know what concept I can use to fetch a single row's column values into different shell variables.
I do this via eval myself:
oracle@******:/*****> cat test.sh
#!/bin/bash
function sqlQuery {
sqlplus -S / as sysdba <<'EOF'
set heading OFF termout ON trimout ON feedback OFF
set pagesize 0
SELECT name,open_mode from v$database;
EOF
}
eval x=(`sqlQuery`)
NAME=${x[0]}
OPEN_MODE="${x[1]} ${x[2]}"
echo NAME IS $NAME
echo OPEN_MODE IS $OPEN_MODE
So we are running the same function you have above, passing its output through eval into x, which handles the word splitting. Then you have an array and can access the items as ${x[0]} for the first one, for example.
Output is:
oracle@******:/******> sh test.sh
NAME IS ******
OPEN_MODE IS READ WRITE
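An alternative that avoids eval (my suggestion, assuming bash is available): feed the function's output to read via a here-string, which splits on whitespace in the current shell without re-evaluating anything as code.
# the last variable soaks up the remaining words, so
# OPEN_MODE ends up as "READ WRITE"
read -r NAME OPEN_MODE <<< "$(sqlQuery)"
echo "NAME IS $NAME"
echo "OPEN_MODE IS $OPEN_MODE"
The asker's pipeline into while read failed with "unexpected end of file" because the loop was missing its do/done body; a here-string also keeps the variables in the current shell rather than in a pipeline subshell.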
I am trying to export a few columns from a table as encoded integers.
Basically I want to use a bash script to pass the SQL commands to mclient as command-line arguments. My bash script looks like:
#!/bin/bash
dir=`pwd`
for col in occupation native_country martial_status race sex
do
mclient -d adult -s \
"create function encode${col}(s varchar(200)) returns int begin return (select code from ${col}_dict where ${col}=s); end;"
mclient -d adult -s \
"COPY (select encode${col}($col) from adult) INTO '${dir}/${col}.txt' NULL AS '0'"
mclient -d adult -s \
"drop function encode${col}"
done
In each iteration, I want to create a SQL function on the fly, then use it to encode an attribute and export the result to a text file, and lastly drop the function.
However, the output strangely contains some garbled characters, as if mclient can't recognize the function name.
If I remove the second mclient command, the other operations are successful.
operation successful
Function 'X�..X�' not defined
operation successful
operation successful
Function '��"X�.X�' not defined
operation successful
operation successful
Function ' X�.PX�' not defined
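(No answer was recorded for this question. As a debugging aid only, not a known fix: building each statement in a shell variable and echoing it before sending it makes it easy to verify exactly what mclient receives.)
for col in occupation native_country martial_status race sex
do
stmt="create function encode${col}(s varchar(200)) returns int begin return (select code from ${col}_dict where ${col}=s); end;"
echo "about to run: $stmt"   # inspect the exact SQL text being sent
mclient -d adult -s "$stmt"
done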
I need to check if one of the columns in my db contains a specific value. If it doesn't, I want to create that row with the following values:
#!/bin/bash
#
MODEL=$1
if true (SELECT * FROM table.STATISTICS
WHERE MODEL = '$MODEL' )
do this (INSERT INTO table.STATISTICS('$MODEL',0,SYSDATE,0,SYSDATE,0); )
You could use a merge for this, run through SQL*Plus as a 'heredoc', so you don't have to do a separate count operation; the merge will do that for you effectively:
#!/bin/bash
MODEL=$1
sqlplus -s /nolog <<!EOF
connect user/pass
merge into statistics s
using (select '${MODEL}' as model, 0 as num1, sysdate as date1,
0 as num2, sysdate as date2 from dual) t
on (s.model = t.model)
when not matched then
insert (s.model, s.num1, s.date1, s.num2, s.date2)
values (t.model, t.num1, t.date1, t.num2, t.date2);
!EOF
But use your real column names, obviously. It's better to list them explicitly even for a plain insert.
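For example, with the placeholder names from the merge above (model, num1, date1, num2, date2 stand in for the real columns):
-- naming the columns keeps the insert valid even if the table
-- later gains extra columns with defaults
INSERT INTO statistics (model, num1, date1, num2, date2)
VALUES ('$MODEL', 0, SYSDATE, 0, SYSDATE);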
get_count () {
sqlplus -s username/pass <<!
set heading off
set feedback off
set pages 0
select count(model) from statistics
where model='$1';
!
}
count=$(get_count "$1")
if [ "${count:-0}" -eq 0 ]; then
echo "its zero"
sqlplus -S username/pass << EOF
whenever sqlerror exit 1;
set echo on
set verify off
INSERT INTO table.STATISTICS VALUES('$MODEL',0,SYSDATE,0,SYSDATE,0);
exit;
EOF
fi