Parsing through a CSV file - shell

I have a CSV file like this:
2015-12-10,22:45:00,205,5626,85
2015-12-10,23:00:01,79,5625,85
2015-12-13,13:00:01,4410,5629,85
2015-12-13,13:15:00,4244,5627,85
2015-12-13,13:30:00,4082,5627,85
I tried this script to generate an SQL statement:
#!/bin/bash
inputfile=${1}
echo $inputfile
OLDIFS=$IFS
IFS=,
while read date time current full cycle
do
echo --$date --$time --$current --$full --$cycle
echo insert into table values($date,$time,$current,$full,$cycle)
sleep 1
done < $inputfile
IFS=$OLDIFS
But on execution I get this error and it doesn't run as expected:
/Scripts/CreateSql.sh: line 10: syntax error near unexpected token `('
/Scripts/CreateSql.sh: line 10: `echo insert into table values(\$date,$time,$current,$full,$cycle)'
I need the statement generated like this:
insert into table values($date,$time,$current,$full,$cycle)
Please kindly suggest a fix for this.

Use double quotes: to the shell, unquoted parentheses are special syntax (they start a subshell), so the whole statement needs to be quoted.
echo "insert into table values($date,$time,$current,$full,$cycle)"

All,
I fixed it with this:
echo 'insert into table values ('$date','$time','$current','$full','$cycle')'

Related

Not able to pass parameters to hql from sh file

I have a .sh file from which I am passing values to a .hql file, but it's giving me errors.
sm=1
XXXXX=""
while read -r line
do
name="$line"
XXXXX="hive$name(${XXXX[$sm]%?})"
echo $XXXXX
hive -hiveconf var1=$XXXXX -hiveconf var2=/user/cloudera/project -hiveconf var3=$name -f test1.hql
sm=$((sm + 1))
done < "$filename"
CREATE EXTERNAL TABLE IF NOT EXISTS ${hiveconf:var1}
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
location ${hiveconf:var2/hiveconf:var3};
Please note that $XXXXX builds a table name with schema after reading from the file and applying some logic. When I echo it there is no problem, but the problem comes in the .hql file. The error is something like the below:
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: ParseException line 2:4 cannot recognize input near 'ROW' 'FORMAT' 'DELIMITED' in column type
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
Try quoting the variable:
hive -hiveconf var1="$XXXXX"
All such variables should be quoted.
Use this command inside the .hql script to check the value that was passed:
! echo "${hiveconf:var1}";
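For instance, a minimal sketch of the hive call with every variable quoted (same hypothetical names as in the question):
hive -hiveconf var1="$XXXXX" -hiveconf var2="/user/cloudera/project" -hiveconf var3="$name" -f test1.hql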
An alternative to the above is to use hivevar:
sm=1
XXXXX=""
while read -r line
do
name="$line"
XXXXX="hive$name(${XXXX[$sm]%?})"
echo "${XXXXX}"
hive -hivevar var1=${XXXXX} -hivevar var2="/user/cloudera/project" -hivevar var3=${name} -f test1.hql
sm=$((sm + 1))
done < "$filename"
CREATE EXTERNAL TABLE IF NOT EXISTS ${var1}
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
location ${var2/var3};
Also here is a link for a read on the difference between hiveconf and hivevar in case the curiosity bug bites :)
What is the difference between -hivevar and -hiveconf?

convert oracle refcursor to text in unix script

This is in continuation of the post below. I am able to return data from the Oracle stored procedure to the unix script.
Fetch data from Oracle SP Out Param SYS_REFCURSOR in Unix Korn Shell Script
But while looping through the records I don't get the expected result. Below is the code. Before the variable table is printed I am getting an error "cannot open".
weeknum=$1
#read ref cursor from proc
cur=`sqlplus -s $connection <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
var return_val refcursor
exec WEEKLYLOAD($weeknum, :return_val);
print return_val
EXIT
EOF`
print "done"
table=""
while read -r line
do
$table=$$table"\n"$line
done < $cur
You are trying to direct input from your cur variable, but the form you are using is looking for a file with the name of the first word in $cur - rather than the entire contents of that variable. The error you see will be the first word in the first column of the first row of the ref cursor opened by your procedure.
So if your ref cursor was opened for a query that, say, produced three rows of output with value A, B and C it would try to read input from a file called A, and report cannot open (unless a file called that happened to exist in the current working directory).
You can echo the variable and pipe it instead:
echo "$cur" | while read -r line
do
table=$table"\n"$line
done
I've removed the extra $ symbols from the assignment. But this doesn't look particularly useful; with the same three-row result as above, $table would end up as:
\nA\nB\nC
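If the loop really is needed, a here-string avoids the pipe altogether; a minimal sketch, assuming bash or ksh93:
while read -r line
do
table="$table\n$line"
done <<< "$cur"
With this form the loop runs in the current shell, so $table is still visible after the loop finishes.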
If you just want to print the contents of $cur to the console you can use one of these (or others):
echo "$cur"
printf "%s\n" "$cur"
which both produce
A
B
C

Pass values read from a file as input to an SQL query in Oracle

#cat file.txt
12354
13456
13498
#!/bin/bash
for i in `cat file.txt`
do
sqlplus XXXXX/XXXXX@DB_NAME << EOF
select *from TABLE_NAME where id="$i"
EOF
done
This is not working for me. Please help me figure out how to solve this.
The solution given by @codeforester works. However, I was unable to use it because it created as many DB connections as there are lines in the file, which is a potential performance impact.
To overcome this, I chose the below solution, which may not be ideal but does the job with just one DB connection.
Considering the same data in file.txt
12354
13456
13498
I used the below sed command to collapse the above into a single variable, "12354,13456,13498":
myvariable=$(echo "`cat file.txt | sed '$!s/$/,/g' | tr -d '\n' | tr -d ' '`")
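A shorter equivalent, assuming a standard paste utility is available, would be:
myvariable=$(paste -sd, file.txt)
Here -s joins all lines of the file into one and -d, uses a comma as the delimiter.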
Now below script will pass this variable to the SQL query and spool the data into a text file:
#!/bin/bash
myvariable=$(echo "`cat file.txt | sed '$!s/$/,/g' | tr -d '\n' | tr -d ' '`")
echo $myvariable
sqlplus /nolog << EOF
CONNECT user/dbpassword@dbname
SPOOL dboutput.txt
select column1 from table_name where id in ($myvariable);
SPOOL OFF
EOF
The output is stored in dboutput.txt (along with the SQL query)
cat dboutput.txt
SQL> select column1 from table_name where id in (12354,13456,13498);
NAME
---------------------------------------------------------------------------- ----
data1
data2
data3
SQL> spool off
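If the echoed query and the spool off line are not wanted in dboutput.txt, the usual SQL*Plus settings can be added; a sketch under the same placeholder connection details:
sqlplus -s /nolog << EOF
CONNECT user/dbpassword@dbname
SET ECHO OFF FEEDBACK OFF HEADING OFF PAGESIZE 0
SPOOL dboutput.txt
select column1 from table_name where id in ($myvariable);
SPOOL OFF
EOF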
Here is the right way to use the heredoc <<, along with the choice of while read instead of for to read the file:
#!/bin/bash
# make sure the heredoc terminator "EOF" below is not indented
while read -r value; do
sqlplus xxxxx/xxxxx@db_name << EOF
select * from table_name where id='$value';
EOF
done < file.txt
See also:
How can I write a here doc to a file in Bash script?
BashFAQ/001, to understand why a for loop is not the best way to read text lines from a file.
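A quick illustration of the difference, using a hypothetical demo.txt whose lines contain spaces:
$ printf 'first line\nsecond line\n' > demo.txt
$ for i in `cat demo.txt`; do echo "[$i]"; done
[first]
[line]
[second]
[line]
$ while read -r i; do echo "[$i]"; done < demo.txt
[first line]
[second line]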

delete rows from database using a shell script

The below script gives me an error. Basically I am trying to delete the records that I got from the first query. I have put them in a text file, formatted them, and used them in the delete operation.
After executing the script I am getting the below error:
: line 5: syntax error at line 27: `<<' unmatched
Can't tell because the code you dumped is unformatted, but my first guess would be you have leading spaces in front of the EOF in your here document.
This should work (note that there are no leading spaces in front of the EOF):
sqlplus -s $dbcreds << EOF > output.txt
SET SERVEROUTPUT OFF
select empname from emp where dept_no=123;
EOF
if [ -s "output.txt" ]
then
echo " Found the below employees....Deleting them from Database ..............!!!! \n"
cat output.txt
sed "s/(.*)/'\1'/" output.txt| tr '\n' ','|sed 's/.$//' >final_employees.txt
while read line
do
sqlplus -s $dbcreds <<EOF
SET SERVEROUTPUT OFF
Delete from emp where empname in ($line);
EOF
done < final_employees.txt
else
echo " No employees found....!!!"
fi
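For example, assuming output.txt ended up containing three hypothetical employee names, the sed/tr pipeline turns them into a single quoted IN list for the delete:
$ cat output.txt
ALICE
BOB
CAROL
$ sed "s/\(.*\)/'\1'/" output.txt | tr '\n' ',' | sed 's/.$//'
'ALICE','BOB','CAROL'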

How to extract the sybase sql query output in a shell script

I am trying to execute a SQL query on a Sybase database using a shell script.
A simple query to count the number of rows in a table.
#!/bin/sh
[ -f /etc/bash.bashrc.local ] && . /etc/bash.bashrc.local
. /gi/base_environ
. /usr/gi/bin/environ
. /usr/gi/bin/path
ISQL="isql <username> guest"
count() {
VAL=$( ${ISQL} <<EOSQL
set nocount on
go
set chained off
go
select count(*) from table_name
go
EOSQL
)
echo "VAL : $VAL"
echo $VAL | while read line
do
echo "line : $line"
done
}
count
The above code gives the output as follows
VAL : Password:
-----------
35
line : Password: ----------- 35
Is there a way to get only the value '35'? What am I missing here? Thanks in advance.
The "select count(*)" prints a result set as output, i.e. a column header (here, that's blank), a line of dashes for each column, and the column value for every row. Here you have only 1 column and 1 row.
If you want to get rid of the dashes, you can do various things:
- select the count(*) into a variable and just PRINT the variable; this will remove the dashes from the output
- perform some additional filtering with things like grep and awk on the $VAL variable before using it
As for the 'Password:' line: you are not specifying a password in the 'isql' command, so 'isql' will prompt for it (since it works, it looks like there is no password). Best specify a password flag to avoid this prompt -- or filter out that part as mentioned above.
Incidentally, it looks like you may be using the 'isql' from the Unix/Linux ODBC installation, rather than the 'isql' utility that comes with Sybase. Best use the latter (check with 'which isql').
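A minimal sketch of the second suggestion (filtering), assuming the standard Sybase isql -U/-P flags and the same placeholder table name; the grep keeps only the line that is purely the numeric count:
#!/bin/sh
ISQL="isql -U username -P password"
VAL=$( ${ISQL} <<EOSQL | grep -E '^[[:space:]]*[0-9]+[[:space:]]*$' | tr -d ' '
set nocount on
go
select count(*) from table_name
go
EOSQL
)
echo "VAL : $VAL"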
