Retrieve oracle sid from oratab file - shell

I'm creating this ksh shell script to compare the Oracle homes of two databases whose names the user inputs.
I tried using cat and also sed from various threads, but somehow I'm not able to put the Oracle home value into a variable to compare them.
Oratab:
db1:/oracle/app/oracle/product/11.2.0.3:Y
db2:/oracle/app/oracle/product/11.2.0.3:N
#db3:/oracle/app/oracle/product/11.2.0.4:Y
Runtime:
./compare_db db1 db2
#!/bin/ksh
sid1=$1;
sid2=$2;
file=/etc/oratab
function compare {
home1= sed -n "s#${sid1}.*/\(.*\)${sid1}.*#\1#p" $file
home2= sed -n "s#${sid2}.*/\(.*\)${sid2}.*#\1#p" $file
if $home1 = $home2; then
echo "Success"
else
echo "Failure"
fi
}
Output (I don't want to include the last "N/Y" part after the colon):
home1 = /oracle/app/oracle/product/11.2.0.3
home2 = /oracle/app/oracle/product/11.2.0.3
db1 = db2
success
Obviously the above is not working and is only test code. Can somebody comment on what's missing, or how this can be done in an elegant way?
Thanks,

awk works:
awk -F: "/^${mysid}/{printf \"%s\n\",\$2}" /etc/oratab

You can do it elegantly with this combination of programs:
home1=$(grep $sid1 $file | cut -d":" -f2) --> output /oracle/app/oracle/product/11.2.0.3
home2=$(grep $sid2 $file | cut -d":" -f2) --> output /oracle/app/oracle/product/11.2.0.3
grep --> finds the line that contains the specified SID
cut --> cuts out the second field (specified by -f2); fields are delimited by the character given with -d":"
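Putting the pieces together, a minimal corrected version of the whole script might look like this (a sketch only; note the $(...) command substitutions with no space after =, the quoted test, and the anchored ^${sid}: pattern, which skips commented-out entries and SIDs that share a prefix):
#!/bin/ksh
sid1=$1
sid2=$2
file=/etc/oratab
home1=$(grep "^${sid1}:" "$file" | cut -d: -f2)   # e.g. /oracle/app/oracle/product/11.2.0.3
home2=$(grep "^${sid2}:" "$file" | cut -d: -f2)
if [ "$home1" = "$home2" ]; then
    echo "Success"
else
    echo "Failure"
fi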

Related

How can I create and process an array of variables in shell script?

I have a shell script to read the data from a YAML file and then do some processing. This is what the YAML file looks like -
view:
schema1.view1:/some-path/view1.sql
schema2.view2:/some-path/view2.sql
tables:
schema1.table1:/some-path/table1.sql
schema2.table2:/some-path/table2.sql
end
I want the output as -
schema:schema1
object:view1
fileloc:/some-path/view1.sql
schema:schema2
object:view2
fileloc:/some-path/view2.sql
schema:schema1
object:table1
fileloc:/some-path/table1.sql
schema:schema2
object:table2
fileloc:/some-path/table2.sql
This is how I'm reading the YAML file using the shell script -
#!/bin/bash
input=./file.yaml
viewData=$(sed '/view/,/tables/!d;/tables/q' $input | sed '1d;$d')
tableData=$(sed '/tables/,/end/!d;/end/q' $input | sed '1d;$d')
so viewData will have this data -
schema1.view1:/some-path/view1.sql
schema2.view2:/some-path/view2.sql
and tableData will have this data -
schema1.table1:/some-path/table1.sql
schema2.table2:/some-path/table2.sql
And then I'm using a for loop to separate the schema, object and SQL file -
for line in $tableData; do
field=`echo $line | cut -d: -f1`
schema=`echo $field | cut -d. -f1`
object=`echo $field | cut -d. -f2`
fileLoc=`echo $line | cut -d: -f2`
echo "schema=$schema"
echo "object=$object"
echo "fileloc=$fileLoc"
done
But I'll have to do the same thing again for the view. Is there any way in shell script, like using an array or something else, so that I can use the same loop to get data for both views and tables?
Any help would be appreciated. Thanks!
Using (g)awk:
awk -F "[:.]" '/:$/{ s=$1 }{ gsub(" ",""); if($3!=""){ print "schema="$1; print "object="$2; print "fileloc="$3 }}' yaml
-F "[:.]" reads input, and separates this on : or . (But using the regular expression [:.].)
/:$/{ s=$1 } This will store the group (view or tables) you are currently reading. This is not used anymore, so can be ignored.
gsub(" ",""); This will delete all spaced in the input line.
if... When you have three fields, checked by a not empty third field, print the info.
output:
schema=schema1
object=view1
fileloc=/some-path/view1
schema=schema2
object=view2
fileloc=/some-path/view2
schema=schema1
object=table1
fileloc=/some-path/table1
schema=schema2
object=table2
fileloc=/some-path/table2
EDIT: Adding the objectType to the output:
awk -F "[:.]" '/:$/{ s=$1 }{ gsub(" ",""); if($3!=""){ print "objectType="$s; "schema="$1; print "object="$2; print "fileloc="$3 }}' yaml
But I do see that I made a mistake.... 🤔😉
I would have expected the regular expression /:$/ to find a line that ends with a :, but for some reason it does not. (I will have to do some more research to look into that.)
For a working workaround, it should be:
awk -F "[:.]" 'NF==2{ s=$1 }NF>2{ gsub(" ",""); if($3!=""){ print "objectType="s; print "schema="$1; print "object="$2; print "fileloc="$3 }}' yaml
The line with view: has two fields, which makes NF return the value 2, and view is stored in the variable s.
When we have more than two fields, the contents of the variables are printed.
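If you would rather keep the original sed approach but avoid duplicating the loop for views and tables, one option is to wrap the loop in a function and call it once per section. This is a sketch only: parse_section is a hypothetical helper name, and it assumes the markers view:, tables:, and end appear exactly as in the sample file.
#!/bin/bash
input=./file.yaml
# Extract one section (lines between the start and end markers, exclusive)
# and print the schema/object/fileloc fields for each entry.
parse_section () {
    local start=$1 end=$2
    sed "/$start/,/$end/!d;/$end/q" "$input" | sed '1d;$d' |
    while read -r line; do
        field=${line%%:*}              # e.g. schema1.view1
        echo "schema=${field%%.*}"
        echo "object=${field#*.}"
        echo "fileloc=${line#*:}"
    done
}
parse_section 'view:' 'tables:'
parse_section 'tables:' 'end'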

Assistance needed with bash script parsing data from a file

I would first like to thank everyone for taking the time to review this and provide some assistance.
I am stuck on this bash script project I have been working on. The script is supposed to pull data from a file, export it to a CSV, and then email it out. I was able to grab the required data and email it to myself, but the problem is that the groups in the file are marked with special characters. I need to have the lsgroup command executed on those groups in order to retrieve the users, and then have the result exported to the CSV file.
For example, below is sample data from the file and what it looks like:
[Skyrim]
comment = Elder Scrolls
path = /export/skyrim/elderscrolls
valid users = #dawnstar nords #riften
invalid users = #lakers
[SONY]
comment = PS4
path = /export/Sony/PS4
valid users = #insomniac #activision
invalid users = peterparker controller #pspro
The script is supposed to gather the name, comment, path, valid users, and invalid users, and export them to the CSV.
So far this is what I have that works,
out="/tmp/parsed-report.csv"
file="/tmp/file.conf"
echo "name,comment,path,valid_users,invalid_users" > $out;
scp -q server:/tmp/parse/file.conf $file
grep "^\[.*\]$" $file |grep -Ev 'PasswordPickup|global' | while read shr ; do
shr_regex=$(echo "$shr" | sed 's/[][]/\\&/g')
shr_print=$(echo "$shr"|sed 's/[][]//g')
com=$(grep -p "$shr_regex" $file|grep -v "#"| grep -w "comment"| awk -F'=' '{print $2}'|sed 's/,/ /g')
path=$(grep -p "$shr_regex" $file|grep -v "#"| grep -w "path"| awk -F'=' '{print $2}')
val=$(grep -p "$shr_regex" $file|grep -v "#"| grep -w "valid users"| awk -F'=' '{print $2}')
inv=$(grep -p "$shr_regex" $file|grep -v "#"| grep -w "invalid users"| awk -F'=' '{print$2}')
echo "$shr_print,$com,$path,$val,$inv" >> $out
done
exit 0
The entries with '#' are considered groups, so if an entry starts with '#' the lsgroup command should run and its output be exported to the CSV file under the correct category; otherwise the plain user names should be exported to the CSV file.
This is what I tried to come up with:
vars3="$val$inv"
Server="server_1"
for lists in $(echo "$vars3"); do
    if [[ $lists = \#* ]]; then
        ssh -q $Server "lsgroup -a users $(echo "$lists" | tr -d '#') | awk -F'=' '{print \$1}'" # print to csv file as valid or invalid users
    else
        echo "users without #" # to csv file as valid or invalid users
    fi
done
With the right commands, the output should look like this:
: skyrim
Comment: Elder Scrolls
Path: /export/skyrim/elderscrolls
Valid Users: dragonborn argonian kajit nords
Invalid Users : Shaq Kobe Phil Lebron
: SONY
Comment: PS4
Path: /export/Sony/PS4
Valid Users: spiderman ratchet&clank callofduty spyro
Invalid Users : peterparker controller 4k
Create a file file.sed with this content:
s/\[/: / # replace [ with : and one space
s/]// # remove ]
s/^ // # remove leading spaces
s/ = /: /
s/#lakers/Shaq Kobe Phil Lebron/
s/^comment/Comment/
# Can be completed by you here.
and then use
sed -f file.sed your_sample_data_file
Output:
: Skyrim
Comment: Elder Scrolls
path: /export/skyrim/elderscrolls
valid users: #dawnstar nords #riften
invalid users: Shaq Kobe Phil Lebron
: SONY
Comment: PS4
path: /export/Sony/PS4
valid users: #insomniac #activision
invalid users: peterparker controller #pspro
Parsing things is a hard problem and, in my opinion, writing your own parser is unproductive.
Instead, I highly advise you to take your time and learn about grammars and parsing generators. Then you can use some battle tested library such as textX to implement your parser.
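To come back to the lsgroup part of the question, which the sed answer leaves open: below is a rough, hedged sketch of a helper that expands #group tokens through lsgroup and passes plain user names through unchanged. expand_users is a made-up name, the lsgroup output format is assumed from the original attempt, and the helper would be called on $val and $inv inside the existing while loop before the final echo.
Server="server_1"
expand_users () {
    out=""
    for entry in $1; do
        case $entry in
        \#*)   # group token: ask the remote server for its members
            members=$(ssh -q "$Server" "lsgroup -a users ${entry#\#}" |
                      awk -F'=' '{print $2}' | tr ',' ' ')
            out="$out $members"
            ;;
        *)     # plain user name: keep as-is
            out="$out $entry"
            ;;
        esac
    done
    echo "$out"
}
val=$(expand_users "$val")
inv=$(expand_users "$inv")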

Alternating output in bash for loop from two grep

I'm trying to search through files and extract two pieces of relevant information every time they appear in the file. The code I currently have:
#!/bin/bash
echo "Utilized reads from ustacks output" > reads.txt
str1="utilized reads:"
str2="Parsing"
for file in /home/desaixmg/novogene/stacks/sample01/conda_ustacks.o*; do
reads=$(grep "$str1" "$file" | cut -d ':' -f 3)
samples=$(grep "$str2" "$file" | cut -d '/' -f 8)
echo $samples $reads >> reads.txt
done
It runs once per file (the files have varying numbers of instances of these phrases) and gives me one row of output per file:
PopA_15.fq 1081264
PopA_16.fq PopA_17.fq 1008416 554791
PopA_18.fq PopA_20.fq PopA_21.fq 604610 531227 595129
...
I want it to match up each instance (i.e. the 1st instance of both greps next to each other):
PopA_15.fq 1081264
PopA_16.fq 1008416
PopA_17.fq 554791
PopA_18.fq 604610
PopA_20.fq 531227
PopA_21.fq 595129
...
How do I do this? Thank you
Considering that your input file is the same as the sample shown, with an even number of columns on each line where the first half are PopA values and the second half are digit values, the following awk may help.
awk '{for(i=1;i<=(NF/2);i++){print $i,$((NF/2)+i)}}' Input_file
Output will be as follows.
PopA_15.fq 1081264
PopA_16.fq 1008416
PopA_17.fq 554791
PopA_18.fq 604610
PopA_20.fq 531227
PopA_21.fq 595129
In case you want to pass the output of a command to the awk command, you could do your command | awk command... there is no need to add Input_file to the above awk command.
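For example, with the variables from the question (left unquoted on purpose, so the newline-separated values flatten onto a single line):
echo $samples $reads | awk '{for(i=1;i<=(NF/2);i++){print $i,$((NF/2)+i)}}' >> reads.txt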
This is what ended up working for me... any tips for more efficient code are definitely welcome:
#!/bin/bash
echo "Utilized reads from ustacks output" > reads.txt
str1="utilized reads:"
str2="Parsing"
for file in /home/desaixmg/novogene/stacks/sample01/conda_ustacks.o*; do
reads=$(grep "$str1" "$file" | cut -d ':' -f 3)
samples=$(grep "$str2" "$file" | cut -d '/' -f 8)
paste <(echo "$samples" | column -t) <(echo "$reads" | column -t) >> reads.txt
done
This provides the desired output described above.
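One possible tip, offered as an untested sketch: a single awk pass per file can pair each sample with its count directly, assuming every Parsing line precedes its matching utilized reads line in the ustacks logs:
for file in /home/desaixmg/novogene/stacks/sample01/conda_ustacks.o*; do
    awk -F':' '/Parsing/         { split($0, p, "/"); sample = p[8] }  # 8th "/" field, like cut -d "/" -f 8
               /utilized reads:/ { print sample, $3 }                  # 3rd ":" field, like cut -d ":" -f 3
              ' "$file"
done >> reads.txt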

How do I print multiple variables on separate lines into a file using shell scripting

I need to select strings from one CSV file and write them into a properties file using shell.
project.csv - this is the file which contains data like below, and it may contain any number of lines:
PN549,projects.pn549
SaturnTV_SW,projects.saturntv_sw
I need to collect each string ("pn549", "saturntv_sw") into a properties file:
properties
[projects]
pn549_pt=pn549
saturntv_sw_pt=saturntv_sw
Below is the code I wrote to fetch the strings and print them:
cat "project.csv" | while IFS='' read -r line; do
Display_Name="$(echo "$line" | cut -d ',' -f 1 | tr -d '"')"
project_name="$(echo "$TEMP_Name" | cut -d '.' -f 2)"
echo "$project_name"
echo "$project_name"_pt="$project_name" > /opt/properties
How do I print multiple lines like I gave in the example (properties)?
I have got my answer: I simply redirected the output.
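For reference, a minimal sketch of the loop with the redirection fix applied, appending with >> so each line survives (which is presumably what "redirected the output" means; the /opt/properties path and the [projects] header are taken from the question):
echo "[projects]" > /opt/properties
while IFS=',' read -r display_name project_path; do
    project_name=${project_path#*.}                 # projects.pn549 -> pn549
    echo "${project_name}_pt=${project_name}" >> /opt/properties
done < project.csv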

UNIX - Replacing variables in sql with matching values from .profile file

I am trying to write a shell script which will take an SQL file as input. Example SQL file:
SELECT *
FROM %%DB.TBL_%%TBLEXT
WHERE CITY = '%%CITY'
Now the script should extract all variables, which in this case is everything starting with %%. So the output file will be something like below:
%%DB
%%TBLEXT
%%CITY
Now I should be able to extract the matching values from the user's .profile file for these variables and create the SQL file with the proper values.
SELECT *
FROM tempdb.TBL_abc
WHERE CITY = 'Chicago'
As of now I am trying to generate file1, which will contain all the variables. Below is a code sample -
sed "s/[(),']//g" "T:/work/shell/sqlfile1.sql" | awk '/%%/{print $NF}' | awk '/%%/{print $NF}' > sqltemp2.sql
takes me as far as:
%%DB.TBL_%%TBLEXT
%%CITY
Can someone help me get to file1 listing the variables?
You can use grep and sort to get a list of unique variables, as per the following transcript:
$ echo "SELECT *
FROM %%DB.TBL_%%TBLEXT
WHERE CITY = '%%CITY'" | grep -o '%%[A-Za-z0-9_]*' | sort -u
%%CITY
%%DB
%%TBLEXT
The -o flag to grep instructs it to only print the matching parts of lines rather than the entire line, and also outputs each matching part on a distinct line. Then sort -u just makes sure there are no duplicates.
In terms of the full process, here's a slight modification to a bash script I've used for similar purposes:
# Define all translations.
declare -A xlat
xlat['%%DB']='tempdb'
xlat['%%TBLEXT']='abc'
xlat['%%CITY']='Chicago'
# Check all variables in input file.
okay=1
for key in $(grep -o '%%[A-Za-z0-9_]*' input.sql | sort -u) ; do
if [[ "${xlat[$key]}" == "" ]] ; then
echo "Bad key ($key) in file:"
grep -n "${key}" input.sql | sed 's/^/ /'
okay=0
fi
done
if [[ ${okay} -eq 0 ]] ; then
exit 1
fi
# Process input file doing substitutions. Fairly
# primitive use of sed, must change to use sed -i
# at some point.
# Note we sort keys based on descending length so we
# correctly handle extensions like "NAME" and "NAMESPACE",
# doing the longer ones first makes it work properly.
cp input.sql output.sql
for key in $( (
for key in ${!xlat[@]} ; do
echo ${key}
done
) | awk '{print length($0)":"$0}' | sort -rnu | cut -d':' -f2) ; do
sed "s/${key}/${xlat[$key]}/g" output.sql >output2.sql
mv output2.sql output.sql
done
cat output.sql
It first checks that the input file doesn't contain any keys not found in the translation array. Then it applies sed substitutions to the input file, one per translation, to ensure all keys are substituted with their respective values.
This should be a good start, though there may be some edge cases such as if your keys or values contain characters sed would consider important (like / for example). If that is the case, you'll probably need to escape them such as changing:
xlat['%%UNDEFINED']='0/0'
into:
xlat['%%UNDEFINED']='0\/0'
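Finally, since the question mentions pulling the values from the user's .profile: here is one hedged way to populate the xlat array from simple NAME=value lines instead of hard-coding it. This assumes the .profile contains plain assignments such as DB=tempdb, TBLEXT=abc, CITY=Chicago; real profiles with exports, quoting, or logic would need more careful handling.
declare -A xlat
while IFS='=' read -r name value; do
    # accept only simple NAME=value lines; skip comments, exports, etc.
    [[ $name =~ ^[A-Za-z0-9_]+$ ]] || continue
    xlat["%%${name}"]=$value
done < ~/.profile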
