Insert multiple lines containing ' and $variable using sed not working - shell

I am new to scripting and stuck at one place that may be really simple; still, I would be grateful if anyone can help.
Below is my issue in the simplest terms:
Input file new.txt
Hello team
Output file expected: new_2.txt
Select '/backup/path1_' from dual;
Select '/backup/path2_' from dual;
Hello team
Note: $var1=path1 and $var2=path2
sed command used:
sed '1i\
Select '/backup/"$var1"_' from dual;\
Select '/backup/"$var2"_' from dual;\
' new.txt > new_2.txt
Output received:
new_2.txt
Select /backup/path1_ from dual;
Select /backup/path2_ from dual;
Hello team
After trying various quote combinations as well, either the single quotes are not displayed in the output or the variable values are not inserted.

Would you please try the following:
var1=path1
var2=path2
sed "1i\\
Select '/backup/${var1}_' from dual;\\
Select '/backup/${var2}_' from dual;
" new.txt > new_2.txt
Result:
Select '/backup/path1_' from dual;
Select '/backup/path2_' from dual;
Hello team
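If the sed quoting keeps tripping you up, a simpler sketch (assuming var1 and var2 are set as above) builds the new file with printf and cat instead of sed:
{ printf "Select '/backup/%s_' from dual;\n" "$var1" "$var2"; cat new.txt; } > new_2.txt
Here printf repeats its format string once per argument, so the single quotes stay literal and each variable is expanded by the shell.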

You can also escape the quote marks with backslashes:
sed '1i\
Select '\'/backup/"$var1"_\'' from dual;\
Select '\'/backup/"$var2"_\'' from dual;
' new.txt > new_2.txt

Related

Bash HEREDOC Single Quotes around expanded variable

I need the $table variable to expand while keeping the single quotes that MSSQL requires around the table_name parameter. I don't know if this is possible, as I have been searching for a while. The common answer I see is that if there are any quotes then variables won't be expanded. Is it simply not possible to do what I need here?
Code
cat <<EOF | isql $host sa 'password' -d, | sed '-e 1,10d;$d' | sort > mssql_table_${table}_column_info
use $database;
select column_name from information_schema.columns where table_name = '$table';
EOF
Desired Output
select column_name from information_schema.columns where table_name = 'mytable_name';
Notice that the output has single quotes still around the table name. This is necessary for MSSQL to select the appropriate table.
Are you sure the variable expansion in the here document is the problem though? If you just inspect the output of the cat command (using bash):
$ database=database123 table=mytable_name cat <<EOF >/dev/stdout
use $database;
select column_name from information_schema.columns where table_name = '$table';
EOF
use database123;
select column_name from information_schema.columns where table_name = 'mytable_name';
You might want to break down what the other commands in the pipeline are doing to pin down where the error actually is.
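For example, one hedged way to do that (the /tmp/generated.sql path is only illustrative) is to tee off a copy of what the here document expands to before it reaches isql:
cat <<EOF | tee /tmp/generated.sql | isql $host sa 'password' -d,
use $database;
select column_name from information_schema.columns where table_name = '$table';
EOF
If /tmp/generated.sql already contains the quoted table name, the problem lies further down the pipeline.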
On a side note, your mention of turning off the variable expansion via quotation marks (' or ") apparently conflates the syntax of the here document with the unrelated syntax of other commands in the pipeline.
For example this is correct:
### works, prints 'hello hello'
MYVAR='hello' cat <<EOF | grep 'hello hello'
hello $MYVAR
EOF
As opposed to:
### WRONG, response empty
MYVAR='hello' cat <<'EOF' | grep 'hello hello'
hello $MYVAR
EOF
The second form does not perform the variable substitution, because the word 'EOF' is quoted in the here document, which turns off variable expansion. This is regardless of whether other commands in the pipeline, in this case grep 'hello hello', are quoted or not.
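For the original question this means an unquoted delimiter already does what is needed: $table expands while the surrounding single quotes pass through literally. A minimal sketch, assuming table is set beforehand:
table=mytable_name
cat <<EOF
select column_name from information_schema.columns where table_name = '$table';
EOF
This prints:
select column_name from information_schema.columns where table_name = 'mytable_name';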

bash: separate blocks of lines between pattern x and y

I have a question similar to Sed/Awk - pull lines between pattern x and y; however, in my case I want to output each block of lines to an individual file (named after the first pattern).
Input example:
-- filename: query1.sql
-- sql comments goes here or else where
select * from table1
where id=123;
-- eof
-- filename: query2.sql
insert into table1
(id, date) values (1, sysdate);
-- eof
I want the bash script to generate 2 files: query1.sql and query2.sql with the following content:
query1.sql:
-- sql comments goes here or else where
select * from table1
where id=123;
query2.sql:
insert into table1
(id, date) values (1, sysdate);
Thank you
awk '/-- filename/{if(f)close(f); f=$3;next} !/eof/&&/./{print $0 >> f}' input
Brief explanation:
/-- filename/{if(f)close(f); f=$3;next}: locate a record containing 'filename', close any previously opened file, and assign the third field (the file name) to f.
!/eof/&&/./{print $0 >> f}: for the lines that follow, if they neither contain 'eof' nor are empty, append them to the file named in f.
This might work for you (GNU sed):
sed -r '/-- filename: (\S+)/!d;s##/&/,/-- eof/{//d;w \1#p;s/.*/}/p;d' file |
sed -nf - file
Create a sed script from the input file and run it against the same input file.
N.B. Two lines are needed for each query as the program for the query must be surrounded by braces and the w command must end in a newline.
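For the sample input above, the first pass generates a script roughly like the following, which the second sed then executes via -nf -:
/-- filename: query1.sql/,/-- eof/{//d;w query1.sql
}
/-- filename: query2.sql/,/-- eof/{//d;w query2.sql
}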
Using GNU awk to handle multiple open files for you:
awk '/^-- eof/{f=0} f{print > out} /^-- filename/{out=$3; f=1}' file
or with any awk:
awk '/^-- eof/{f=0} f{print > out} /^-- filename/{close(out); out=$3; f=1}' file
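A quick way to check the result (a sketch, assuming the sample above is saved as input.txt) is to run the portable variant and inspect the files it creates:
awk '/^-- eof/{f=0} f{print > out} /^-- filename/{close(out); out=$3; f=1}' input.txt
head query1.sql query2.sql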

Pass values read from a file as input to an SQL query in Oracle

#cat file.txt
12354
13456
13498
#!/bin/bash
for i in `cat file.txt`
do
sqlplus XXXXX/XXXXX@DB_NAME << EOF
select *from TABLE_NAME where id="$i"
EOF
done
This is not working for me. Please help me work out how to solve this.
The solution given by @codeforester works. However, I was unable to use it because it creates as many DB connections as there are lines in the file, which has a potential performance impact.
To overcome this, I chose the solution below, which may not be ideal but does the job with just one DB connection.
Considering the same data in file.txt
12354
13456
13498
I used the sed command below to collapse the lines above into a single comma-separated variable, "12354,13456,13498":
myvariable=$(echo "`cat file.txt | sed '$!s/$/,/g' | tr -d '\n' | tr -d ' '`")
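As a side note, the same comma-joined list can be built with paste (assuming a paste that supports -s and -d, which both GNU and BSD versions do):
myvariable=$(paste -sd, file.txt | tr -d ' ')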
Now the script below passes this variable to the SQL query and spools the data into a text file:
#!/bin/bash
myvariable=$(echo "`cat file.txt | sed '$!s/$/,/g' | tr -d '\n' | tr -d ' '`")
echo $myvariable
sqlplus /nolog << EOF
CONNECT user@dbname/dbpassword
SPOOL dboutput.txt
select column1 from table_name where id in ($myvariable);
SPOOL OFF
EOF
The output is stored in dboutput.txt (along with the SQL query)
cat dboutput.txt
SQL> select column1 from table_name where id in (12354,13456,13498);
NAME
---------------------------------------------------------------------------- ----
data1
data2
data3
SQL> spool off
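If you would rather keep the echoed statement and column headings out of the spool file, a hedged variant of the script above runs sqlplus in silent mode and turns the usual display settings off before spooling (standard SQL*Plus SET options):
sqlplus -s /nolog << EOF
CONNECT user@dbname/dbpassword
SET PAGESIZE 0 FEEDBACK OFF HEADING OFF ECHO OFF
SPOOL dboutput.txt
select column1 from table_name where id in ($myvariable);
SPOOL OFF
EOF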
Here is the right way to use the heredoc <<, along with the choice of while read instead of for to read the file:
#!/bin/bash
while read -r value; do
  # make sure the heredoc marker "EOF" below is not indented
  sqlplus xxxxx/xxxxx@db_name << EOF
select * from table_name where id='$value';
EOF
done < file.txt
See also:
How can I write a here doc to a file in Bash script?
BashFAQ/001, to understand why a for loop is not the best way to read lines of text from a file.

bash / sed / awk Remove or gsub timestamp pattern from text file

I have a text file like this:
1/7/2017 12:53 DROP TABLE table1
1/7/2017 12:53 SELECT
1/7/2017 12:55 --UPDATE #dat_recency SET
Select * from table 2
into table 3;
I'd like to remove all of the timestamp patterns (M/D/YYYY HH:MM, M/DD/YYYY HH:MM, MM/D/YYYY HH:MM, MM/DD/YYYY HH:MM). I can find the patterns using grep but can't figure out how to use gsub. Any suggestions?
DESIRED OUTPUT:
DROP TABLE table1
SELECT
--UPDATE #dat_recency SET
Select * from table 2
into table 3;
You can use this sed command to remove the date/time stamps from the start of each line:
sed -i.bak -E 's~([0-9]{1,2}/){2}[0-9]{4} [0-9]{2}:[0-9]{2} *~~' file
cat file
DROP TABLE table1
SELECT
--UPDATE #dat_recency SET
Select * from table 2
into table 3;
Using the default space separator, set the first and second columns to empty strings and then print the whole line:
awk '/^[0-9]/{$1=$2="";gsub(/^[ \t]+|[ \t]+$/, "");print} !/^[0-9]/{print}' sample.csv
The command checks whether each line starts with a digit; if it does, it replaces the first two columns with empty strings, trims the leading whitespace and prints the result; otherwise it prints the original line unchanged.
output:
DROP TABLE table1
SELECT
--UPDATE #dat_recency SET
Select * from table 2
into table 3;
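Since the question mentions gsub specifically, a minimal awk sketch (assuming an awk whose regexes support {n,m} intervals, such as GNU awk) strips the leading timestamp with sub() and prints every line:
awk '{sub(/^[0-9]{1,2}\/[0-9]{1,2}\/[0-9]{4} [0-9]{2}:[0-9]{2} */, "")} 1' file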

Merge all the data within the (.....) in one line in shell script

I am new to shell scripting and need some help. I have an SQL file like this:
SELECT DISTINCT F1.COL1,
F1.COL5 ADDRESS ,
COALESCE(COL1,
COL2,
COL3,
COL4),
F1.COL7
FROM TABLE1 F1
I need the content inside the parentheses to be printed on one line, like this:
SELECT DISTINCT F1.COL1,
F1.COL5 ADDRESS ,
COALESCE(COL1,COL2,COL3,COL4),
F1.COL7
FROM TABLE1 F1
Thanks
With sed:
sed '/(/{:a;N;s/^ *//;s/\n *//;/)/!{ba}}' file
To edit the file in place, add the -i option:
sed -i '/(/{:a;N;s/^ *//;s/\n *//;/)/!{ba}}' file
Starting from a line containing (, lines are joined until the next line containing ).
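An awk alternative sketch (assuming the parenthesised list is never nested and always closes later in the file) buffers from a line containing ( until the closing ) and prints everything else unchanged:
awk '/\(/ && !/\)/ {
  buf = $0
  while ((getline line) > 0) {   # keep pulling lines until the list closes
    sub(/^ +/, "", line)         # drop any leading indentation before joining
    buf = buf line
    if (line ~ /\)/) break
  }
  print buf
  next
}
{ print }' file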
