How do I write a for loop with 2 variables in bash? - bash

I sorted the conflicts between numbers that already exist in my DB and a new insert request into a file named output.
This is the output of the file.
cat output
DataBase: (9999999999) NewDB_insert: (999-999-9999)
DataBase: (1111111111) NewDB_insert: (111-111-1111)
DataBase: (2222222222) NewDB_insert: (222-222-2222)
DataBase: (3333333333) NewDB_insert: (333-333-3333)
For each conflicting line I want to run the same command that displays the information for that entry. The command just pulls the DB information.
showinfo -id 9999999999
ID:9999999999
Name: Mr.Brown
City: New York
Company: Acme
Client: 1245
showinfo -id 999-999-9999
ID:999-999-9999
Name: Mr.Brown
City: New York
Company: Acme
Client: 1245
I want to use 2 variables so that, for each line of my file named output, I can send the output for both IDs to the screen. So I would have the following information from my file:
**From DB:**
ID:9999999999
Name: Mr.Brown
City: New York
Company: Acme
Client: 1245
**From NewDB_insert:**
showinfo -id 999-999-9999
ID:999-999-9999
Name: Mr.Brown
City: New York
Company: Acme
Client: 1245
So basically I would cat the file in a for loop with one variable, like so:
for i in `cat output | awk '{print $2}' | tr -d ")("`;do echo == From DB:==;showinfo -id $i; echo ==========;done
Which gives me the first part of what I need. I looked online but was not able to find any examples I could use to create a for loop with 2 variables. Any help would be appreciated.

The short answer is probably: "you don't".
You're probably best off using a while loop with read.
sed -e 's/[^(]*(\([^)]*\))[^(]*(\([^)]*\)).*/\1 \2/' output |
while read a b
do
echo == From DB ==
showinfo -id $a
echo =============
echo == From NewDB_Insert ==
showinfo -id $b
echo =======================
done
If you think there might be nasty characters, especially backslashes, in your data, you can use read -r and you can quote "$a" and "$b" in your showinfo lines. If you're sure there won't be any such characters, you're (just about) OK with the code as written.
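For reference, the more defensive version of the same loop (just adding read -r and the quotes mentioned above; otherwise identical):
sed -e 's/[^(]*(\([^)]*\))[^(]*(\([^)]*\)).*/\1 \2/' output |
while read -r a b
do
    echo "== From DB =="
    showinfo -id "$a"
    echo "============="
    echo "== From NewDB_Insert =="
    showinfo -id "$b"
    echo "======================="
done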
Note that any variables set in the while loop as written won't be accessible when the loop completes. There are ways around that problem if it is, indeed, a problem.
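One common way around it (a sketch, not part of the original answer) is to feed the loop from process substitution instead of a pipe, so the while body runs in the current shell and anything it sets survives the loop:
count=0
while read -r a b
do
    showinfo -id "$a"
    showinfo -id "$b"
    count=$((count + 1))   # still visible after the loop ends
done < <(sed -e 's/[^(]*(\([^)]*\))[^(]*(\([^)]*\)).*/\1 \2/' output)
echo "processed $count conflicting lines"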
The sed script looks for:
zero or more non-open-parentheses
an open parenthesis
captures zero or more non-close-parentheses
stops capturing
a close parenthesis
zero or more non-open-parentheses
an open parenthesis
captures zero or more non-close-parentheses
stops capturing
a close parenthesis
zero or more other characters
and replaces them with the two captured strings (e.g. 9999999999 and 999-999-9999) separated by a space.
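Run against the sample file above, that sed command on its own should produce the ID pairs the loop reads:
$ sed -e 's/[^(]*(\([^)]*\))[^(]*(\([^)]*\)).*/\1 \2/' output
9999999999 999-999-9999
1111111111 111-111-1111
2222222222 222-222-2222
3333333333 333-333-3333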

Related

Assistance needed with bash script parsing data from a file

I would first like to thank everyone for taking the time to review this and provide some assistance.
I am stuck on this bash script project I have been working on. The script is supposed to pull data from a file, export it to a CSV, and then email it out. I was able to grab the required data and email it to myself, but the problem is that the groups in the file are marked with a special character. I need to have the lsgroup command executed on those groups in order to retrieve their users and then have them exported to the CSV file.
For example, below is sample data from the file and what it looks like:
[Skyrim]
comment = Elder Scrolls
path = /export/skyrim/elderscrolls
valid users = @dawnstar nords @riften
invalid users = @lakers
[SONY]
comment = PS4
path = /export/Sony/PS4
valid users = @insomniac @activision
invalid users = peterparker controller @pspro
The script is supposed to gather the name, comment, path, valid users, and invalid users, and export them to the CSV.
So far this is what I have that works,
out="/tmp/parsed-report.csv"
file="/tmp/file.conf"
echo "name,comment,path,valid_users,invalid_users" > $out;
scp -q server:/tmp/parse/file.conf $file
grep "^\[.*\]$" $file |grep -Ev 'PasswordPickup|global' | while read shr ; do
shr_regex=$(echo "$shr" | sed 's/[][]/\\&/g')
shr_print=$(echo "$shr"|sed 's/[][]//g')
com=$(grep -p "$shr_regex" $file|grep -v "#"| grep -w "comment"| awk -F'=' '{print $2}'|sed 's/,/ /g')
path=$(grep -p "$shr_regex" $file|grep -v "#"| grep -w "path"| awk -F'=' '{print $2}')
val=$(grep -p "$shr_regex" $file|grep -v "#"| grep -w "valid users"| awk -F'=' '{print $2}')
inv=$(grep -p "$shr_regex" $file|grep -v "#"| grep -w "invalid users"| awk -F'=' '{print$2}')
echo "$shr_print,$com,$path,$val,$inv" >> $out
done
exit 0
The entries marked with '@' are considered groups, so if an entry in $vars3 starts with '@', run the lsgroup command and export the returned users to the CSV file under the correct category; otherwise (an entry without '@') export the user name to the CSV file as it is.
This is what I tried to come up with:
vars3="$val$inv"
Server="server_1"
for lists in $(echo "$vars3"); do
if [[ $lists = *[!\@]* ]]; then
ssh -q $Server "lsgroup -a users $(echo "$lists"|tr -d /@/)|awk -F'=' '{print $1}'" > print to csv file as valid or invalid users
else [[ $lists != *[!\@]* ]]; then
echo "users without @" > to csv file as valid or invalid users
With the right commands, the output should look like this:
: skyrim
Comment: Elder Scrolls
Path: /export/skyrim/elderscrolls
Valid Users: dragonborn argonian kajit nords
Invalid Users : Shaq Kobe Phil Lebron
: SONY
Comment: PS4
Path: /export/Sony/PS4
Valid Users: spiderman ratchet&clank callofduty spyro
Invalid Users : peterparker controller 4k
Create a file file.sed with this content:
s/\[/: / # replace [ with : and one space
s/]// # remove ]
s/^ // # remove leading spaces
s/ = /: /
s/@lakers/Shaq Kobe Phil Lebron/
s/^comment/Comment/
# Can be completed by you here.
and then use
sed -f file.sed your_sample_data_file
Output:
: Skyrim
Comment: Elder Scrolls
path: /export/skyrim/elderscrolls
valid users: @dawnstar nords @riften
invalid users: Shaq Kobe Phil Lebron
: SONY
Comment: PS4
path: /export/Sony/PS4
valid users: @insomniac @activision
invalid users: peterparker controller @pspro
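If you also want the remaining headers to match the expected output, file.sed could be extended with rules along these lines (my own suggestion, not part of the original answer; expanding the @-prefixed groups into real user names still needs lsgroup in a separate step, which sed alone cannot do):
s/^path/Path/
s/^valid users/Valid Users/
s/^invalid users/Invalid Users/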
Parsing things is a hard problem and, in my opinion, writing your own parser is unproductive.
Instead, I highly advise you to take your time and learn about grammars and parser generators. Then you can use a battle-tested library such as textX to implement your parser.

Beginner bash scripting

I want to break down a file that has multiple lines that follow this style:
e-mail;year/month/date;groups;sharedFolder
An example line from file:
alan.turing@cam.ac.uk;1912/06/23;visitor;/visitorData
Essentially I want to break each line up into four arrays that can be accessed later on in a loop to create a new user for each line.
I have declared the arrays already and have a file saved in the variable 'filename'.
Usernames need to be the first three letters of the surname and the first three letters of the first name.
Passwords need to be the users birthdate as day/month/year.
So far this is what I have. Am I on the right track? Are there places I have gone wrong or could improve on?
#reads file and saves into appropriate arrays
while read -r line
do
IFS = $';' read -r -a array <<< "$line"
mailArray += "$(array[0])"
dateArray += "$(array[1])"
groupArray += "$(array[2])"
folderArray += "$(array[3])"
done < $filename
#create usernames from emails
for i in "$(mailArray[#])"
do
IFS=$'.' read -r -a array <<< "$i"
part1 = ${array[0]:0:3}
part2 = ${array[1]:0:3}
user = $part2
user .= $part1
userArray += ("$user")
done
#create passwords from birthdates
for i in "$(dateArray[#])"
do
IFS=$'/' read -r -a array <<< "$i"
password = $part3
password .= $part2
password .= $part1
passArray += ("$password")
done
Not sure if arrays are required here; if you just want to create the username and password from the lines in the desired format, please see below:
# Sample Input Data:
bash$> cat d
alan.turing@cam.ac.uk;1912/06/23;visitor;/visitorData
rob.zombie@cam.ac.uk;1966/06/23;metalhead;/metaldata
donald.trump@stupid.com;1900/00/00;idiot;/idiotique
bash$>
# Sample Output from script:
bash$> ./dank.sh "d"
After processing the line [alan.turing@cam.ac.uk;1912/06/23;visitor;/visitorData], we have the following parameters extracted:
Name: alan
Surname: turing
Birthdate: 1912/06/23
username: alatur
Password: 1912/06/23
After processing the line [rob.zombie@cam.ac.uk;1966/06/23;metalhead;/metaldata], we have the following parameters extracted:
Name: rob
Surname: zombie
Birthdate: 1966/06/23
username: robzom
Password: 1966/06/23
After processing the line [donald.trump@stupid.com;1900/00/00;idiot;/idiotique], we have the following parameters extracted:
Name: donald
Surname: trump
Birthdate: 1900/00/00
username: dontru
Password: 1900/00/00
Now the script, which does this operation.
# Script.
bash$> cat dank.sh
#!/bin/bash
cat "$1" | while read line
do
name=`echo $line | sed 's/^\(.*\)\..*@.*$/\1/g'`
surname=`echo $line | sed 's/^.*\.\(.*\)@.*$/\1/g'`
bdate=`echo $line | sed 's/^.*;\(.*\);.*;.*$/\1/g'`
temp1=`echo $name | sed 's/^\(...\).*$/\1/g'`
temp2=`echo $surname | sed 's/^\(...\).*$/\1/g'`
uname="${temp1}${temp2}"
echo "
After processing the line [$line], we have the following parameters extracted:
Name: $name
Surname: $surname
Birthdate: $bdate
username: $uname
Password: $bdate
"
done
bash$>
Basically, I am just running a couple of sed commands to extract what is useful and required, storing the results in variables; then one could use them any way they want. You could redirect them to a file if you want or print out a pipe-separated output... up to you.
Let me know.
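For comparison, the same fields can also be pulled apart with plain bash string splitting instead of sed; a minimal sketch of that idea (variable names are illustrative, and it assumes the same e-mail;date;group;folder layout):
#!/bin/bash
while IFS=';' read -r email bdate group folder
do
    localpart=${email%%@*}              # "alan.turing"
    name=${localpart%%.*}               # "alan"
    surname=${localpart#*.}             # "turing"
    uname="${name:0:3}${surname:0:3}"   # "alatur"
    IFS='/' read -r y m d <<< "$bdate"  # the file stores year/month/day
    password="$d/$m/$y"                 # day/month/year, as the question asks for
    echo "username: $uname  password: $password  group: $group  folder: $folder"
done < "$1"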

Read rest of while loop output

I want to run a while loop from output I get from MySQL, but my output is being cut off.
Example output I get from MySQL is:
123 nfs://192.168.1.100/full/Some.file.1.txt
124 nfs://192.168.1.100/full/A second file 2.txt
My loop looks like so:
mysql -uuser -ppass queue -ss -e 'select id,path from queue where status = 0' | while read a b c
do
echo $a
echo $b
done
The result for $b cuts off after nfs://192.168.1.100/full/A.
How can I have it output the whole sentence?
Your second filename contains spaces, so that is where the field is cut off.
Since it is the last field of the output, you can just skip field c:
mysql -uuser -ppass queue -ss -e 'select id,path from queue where status = 0' | while read a b
do
echo $a
echo $b
done
The last field in read will have all remaining fields.
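A quick way to see that behaviour on the problematic line (just an illustration):
$ echo '124 nfs://192.168.1.100/full/A second file 2.txt' | { read -r a b; echo "$b"; }
nfs://192.168.1.100/full/A second file 2.txt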
The problem is that you are reading each line into 3 variables using:
read a b c
And since your input line also contains whitespace, e.g.
124 nfs://192.168.1.100/full/A second file 2.txt
with the default IFS it is setting 3 variables as:
a=124
b=nfs://192.168.1.100/full/A
c=second file 2.txt
Since c is the last parameter in read, it reads the rest of the line into c.
To fix your script you can just do:
read a b

Output a record from an existing file based on a matching condition in bash scripting

I need to be able to output a record if a condition is true.
Suppose this is the existing file,
Record_ID,Name,Last Name,Phone Number
I am trying to output a record if the last name matches. I collect user input to get the last name and then perform the following operation.
read last_name
cat contact_records.txt | awk -F, '{if($3=='$last_name')print "match"; else print "no match";}'
This script outputs no match for every record within contact_records.txt
Your script has two problems:
First, $last_name is not quoted when it is spliced into the awk program. For example, if "John" is to be queried, awk ends up comparing $3 with an (unset) awk variable named John rather than the string "John". This can be fixed by adding two double quotes as below:
read last_name
cat contact_records.txt | awk -F, '{if($3=="'$last_name'")print "match"; else print "no match";}'
Second, it actually scans the whole contact_records.txt and prints match/no match for each line it compares. For example, say contact_records.txt has 100 lines and one of them contains "John". Then querying whether John is in it with this script yields 1 "match" and 99 "no match" lines. This might not be what you want. Here's a fix:
read last_name
if [ `cat contact_records.txt | cut -d, -f 3 | grep -c "$last_name"` -eq 0 ]; then
echo "no match"
else
echo "match"
fi
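Another option, not in either fix above, is to pass the shell variable into awk with -v, which avoids the quoting problem entirely and lets awk print a single match/no match result (a sketch):
read -r last_name
awk -F, -v ln="$last_name" '
    $3 == ln { found = 1; exit }
    END      { print (found ? "match" : "no match") }
' contact_records.txt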

How to concatenate stdin and a string?

How do I concatenate stdin to a string, like this?
echo "input" | COMMAND "string"
and get
inputstring
A bit hacky, but this might be the shortest way to do what you asked in the question (use a pipe to feed the stdout of echo "input" as stdin to another process/command):
echo "input" | awk '{print $1"string"}'
Output:
inputstring
What task are you exactly trying to accomplish? More context can get you more direction on a better solution.
Update - responding to comment:
@NoamRoss
The more idiomatic way of doing what you want is then:
echo 'http://dx.doi.org/'"$(pbpaste)"
The $(...) syntax is called command substitution. In short, it executes the commands enclosed in a new subshell and substitutes its stdout output where the $(...) was invoked in the parent shell. So you would get, in effect:
echo 'http://dx.doi.org/'"rsif.2012.0125"
Use cat - to read from stdin, and put it in $() to throw away the trailing newline:
echo input | COMMAND "$(cat -)string"
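As a concrete instance of that pattern, with echo standing in for COMMAND:
$ echo "input" | echo "$(cat -)string"
inputstring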
However why don't you drop the pipe and grab the output of the left side in a command substitution:
COMMAND "$(echo input)string"
I'm often using pipes, so this tends to be an easy way to prefix and suffix stdin:
echo -n "my standard in" | cat <(echo -n "prefix... ") - <(echo " ...suffix")
prefix... my standard in ...suffix
There are some ways of accomplishing this; I personally think the best is:
echo input | while read line; do echo $line string; done
Another is to substitute "$" (the end-of-line anchor) with "string" in a sed command:
echo input | sed "s/$/ string/g"
Why do I prefer the former? Because it concatenates a string to stdin instantly; for example, with the following command:
(echo input_one ;sleep 5; echo input_two ) | while read line; do echo $line string; done
you immediately get the first output:
input_one string
and then after 5 seconds you get the other echo:
input_two string
On the other hand, when using "sed", all the commands inside the parentheses are performed first and only then is the result given to "sed", so the command
(echo input_one ;sleep 5; echo input_two ) | sed "s/$/ string/g"
will output both the lines
input_one string
input_two string
after 5 seconds.
This can be very useful when you are calling functions that take a long time to complete and you want to be continuously updated about their output.
You can do it with sed:
seq 5 | sed '$a\6'
seq 5 | sed '$ s/.*/\0 6/'
In your example:
echo input | sed 's/.*/\0string/'
I know this is a few years late, but you can accomplish this with the xargs -J option (available in BSD xargs, e.g. on macOS; GNU xargs does not have -J):
echo "input" | xargs -J "%" echo "%" "string"
And since it is xargs, you can do this on multiple lines of a file at once. If the file 'names' has three lines, like:
Adam
Bob
Charlie
You could do:
cat names | xargs -n 1 -J "%" echo "I like" "%" "because he is nice"
Also works:
seq -w 0 100 | xargs -I {} echo "string "{}
Will generate strings like:
string 000
string 001
string 002
string 003
string 004
...
The command you posted would take the string "input" and use it as COMMAND's stdin stream, which would not produce the results you are looking for unless COMMAND first printed out the contents of its stdin and then printed out its command line arguments.
It seems like what you want to do is closer to command substitution.
http://www.gnu.org/software/bash/manual/html_node/Command-Substitution.html#Command-Substitution
With command substitution you can have a commandline like this:
echo input `COMMAND "string"`
This will first evaluate COMMAND with "string" as its argument, and then expand the result of that command's execution onto the line, replacing what's between the ` characters.
cat will be my choice: ls | cat - <(echo new line)
With perl
echo "input" | perl -ne 'print "prefix $_"'
Output:
prefix input
A solution using sd (basically a modern sed; much easier to use IMO):
# replace '$' (end of string marker) with 'Ipsum'
# the `e` flag disables multi-line matching (treats all lines as one)
$ echo "Lorem" | sd --flags e '$' 'Ipsum'
Lorem
Ipsum#no new line here
You might observe that Ipsum appears on a new line, and the output is missing a \n. The reason is echo's output ends in a \n, and you didn't tell sd to add a new \n. sd is technically correct because it's doing exactly what you are asking it to do and nothing else.
However this may not be what you want, so instead you can do this:
# replace '\n$' (new line, immediately followed by end of string) by 'Ipsum\n'
# don't forget to re-add the `\n` that you removed (if you want it)
$ echo "Lorem" | sd --flags e '\n$' 'Ipsum\n'
LoremIpsum
If you have a multi-line string, but you want to append to the end of each individual line:
$ ls
foo bar baz
$ ls | sd '\n' '/file\n'
bar/file
baz/file
foo/file
I want to prepend a "set" statement to my sql script before running it.
So I echo the "set" instruction, then pipe it to cat. The cat command takes two parameters: stdin, marked as "-", and my sql file; cat joins both of them into one output. Next I pass the result to the mysql command to run it as a script.
echo "set #ZERO_PRODUCTS_DISPLAY='$ZERO_PRODUCTS_DISPLAY';" | cat - sql/test_parameter.sql | mysql
p.s. mysql login and password stored in .my.cnf file
