How to set variables from each line stored in a file - bash

I want to read a file line by line and pass the first and second fields as arguments to a bash script, then iterate to the next line and do the same thing.
My file is pwd.out:
/path/dir/name1/date name1
/path/dir/name2/date name2
I have tried the following without success:
while read line; do dir=`awk '{print $1}'`; name=`awk '{print $2}'`; echo "./myprogram $dir somethingHere $name"; done < pwd.out
where it outputs:
./myprogram /path/dir/name1/date /path/dir/name2/date somethingHere
I think that somehow $dir is getting the values from all the lines and $name is not being set.
What I would like to have is:
./myprogram /path/dir/name1/date somethingHere name1
./myprogram /path/dir/name2/date somethingHere name2
Thanks in advance

You don't need awk for this. Just read the fields into variables in the order they come, like this:
while read dir name
do
./myprogram "$dir" somethingHere "$name"
done < pwd.out
Test
See an example in which I just echo dir=$dir, name=$name with your given file:
$ while read dir name; do echo "dir=$dir, name=$name"; done < pwd.out
dir=/path/dir/name1/date, name=name1
dir=/path/dir/name2/date, name=name2
Your awk commands were misbehaving because you never gave them any input, so each one read from the loop's redirected stdin: the first awk consumed all the remaining lines of pwd.out (printing each first field), and the second found stdin already empty, which is why $name was never set.
It could work if you fed each awk the current line, although it is unnecessary to launch an external command like awk for something bash handles perfectly well, as you can see above.
while read line
do
dir=$(awk '{print $1}' <<< "$line")
name=$(awk '{print $2}' <<< "$line")
echo "./myprogram $dir somethingHere $name"
done < pwd.out
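To see why the original one-liner misbehaves, here is a minimal sketch with a hypothetical two-line demo file (the file and names are invented for illustration): an awk given no file and no here-string reads from the same stdin the loop is redirected from, so it drains every remaining line.

```shell
# Hypothetical demo file, invented for illustration
printf '%s\n' 'a 1' 'b 2' > demo.txt
while read -r line; do
dir=$(awk '{print $1}')   # no input given: awk drains the loop's stdin
echo "line=$line dir=$dir"
done < demo.txt
```

The loop body runs only once: read took line 1, awk consumed line 2, so the whole output is the single line "line=a 1 dir=b".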


Related

How to iterate two variables in bash script?

I have these kind of files:
file6543_015.bam
subreadset_15.xml
file6543_024.bam
subreadset_24.xml
file6543_027.bam
subreadset_27.xml
I would like to run something like this:
for i in *bam && l in *xml
do
my_script $i $l > output_file
done
Because in my command the first bam file goes with the first xml file. For each bam/xml combination, that command will give a specific output file.
Like this, using bash arrays:
bam=( *.bam )
xml=( *.xml )
for ((i=0; i<${#bam[@]}; i++)); do
my_script "${bam[i]}" "${xml[i]}"
done
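If you would rather not trust the two globs to stay in lockstep, a variant is to derive each xml name from the bam file's numeric suffix. A sketch under the naming scheme shown in the question (the leading zero of the bam suffix is dropped; it only echoes the command it would run):

```shell
for bam in file*_0*.bam; do
n=${bam##*_0}        # "15.bam" after stripping through the last "_0"
n=${n%.bam}          # "15"
echo "my_script $bam subreadset_${n}.xml"
done
```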
Assuming you have a way to uniquely name your output_file for each specific output,
here is one way:
#!/bin/bash
ls file*.bam | while read -r i
do
xml=$(echo "$i" | sed -e 's/file.*_0/subreadset_/' -e 's/\.bam$/.xml/')
my_script "$i" "$xml" >> output_file
done

How to use variable with awk when being read from a file

I have a file with the following entries:
foop07_bar2_20190423152612.zip
foop07_bar1_20190423153115.zip
foop08_bar2_20190423152612.zip
foop08_bar1_20190423153115.zip
where
foop0* = host
bar* = fp
I would like to read the file and create 3 variables, the whole file name, host and fp (which stands for file_path_differentiator).
I am using read to take the first line and get my whole-file-name variable. I thought I could then feed this into awk to grab the other two variables; however, the first method of variable insertion creates an error and the second gives me all the values at once.
I would like to loop over each line, as I wish to use these variables to ssh to the host and grab the file.
#!/bin/bash
while read -r FILE
do
echo ${FILE}
host=`awk 'BEGIN { FS = "_" } ; { print $1 }'<<<<"$FILE"`
echo ${host}
path=`awk -v var="${FILE}" 'BEGIN { FS = "_" } ; { print $2 }'`
echo ${path}
done <zips_not_received.csv
Expected Result
foop07_bar2_20190423152612.zip
foop07
bar2
foop07_bar1_20190423153115.zip
foop07
bar1
Actual Result
foop07_bar2_20190423152612.zip
/ : No such file or directoryfoop07_bar2_20190423152612.zip
bar2 bar1 bar2 bar1
You can do this with bash alone, without using any external tool.
while read -r file; do
[[ $file =~ (.*)_(.*)_.*\.zip ]] || { echo "invalid file name"; exit 1; }
host="${BASH_REMATCH[1]}"
path="${BASH_REMATCH[2]}"
echo "$file"
echo "$host"
echo "$path"
done < zips_not_received.csv
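An equivalent bash-only sketch that splits on _ with read instead of a regex, assuming every line has the host_fp_timestamp.zip shape shown:

```shell
while IFS=_ read -r host path rest; do
file="${host}_${path}_${rest}"   # reassemble the original file name
echo "$file"
echo "$host"
echo "$path"
done < zips_not_received.csv
```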
typical...
Managed to work a solution after posting...
#!/bin/bash
while read -r FILE
do
echo ${FILE}
host=`echo "$FILE" | awk -F"_" '{print $1}'`
echo $host
path=`echo "$FILE" | awk -F"_" '{print $2}'`
echo ${path}
done <zips_not_received.csv
Not sure about the elegance or correctness, as I am using echo to create the variables... but I have it working.
Assuming there is no space or _ in your file names that belongs to the host or path parts,
you can pre-split each line with sed (turning every _ into the default space separator) and let read pick up the fields. The removal of lines containing blanks is a basic safety measure, given your sample.
sed 's/_/ /g;/[[:blank:]]\{1,\}/d' zips_not_received.csv \
| while read host path Ignored
do
echo "${host}"
echo "${path}"
done

Read multiple variables from file

I need to read a file that has lines like
user=username1
pass=password1
How can I read multiple lines like this into separate variables like username and password?
Would I use awk or grep? I have found ways to read lines into variables with grep but would I need to read the file for each individual item?
The end result is to use these variables to access a database via the command line. So I need to be able to read, store and use these values in other commands.
If the process which generates the file is trusted and the file already uses shell syntax, just source the file.
. ./file
Otherwise the file can be processed first to add quotes:
perl -ne 'if (/^([A-Za-z_]\w*)=(.*)/) {$k=$1;$v=$2;$v=~s/\x27/\x27\\\x27\x27/g;print "$k=\x27$v\x27\n";}' <file >file2
. ./file2
If you want to use awk then
Input
$ cat file
user=username1
pass=password1
Reading
$ user=$(awk -F= '$1=="user"{print $2;exit}' file)
$ pass=$(awk -F= '$1=="pass"{print $2;exit}' file)
Output
$ echo $user
username1
$ echo $pass
password1
You could use a loop for your file perhaps, but this is probably the functionality you're looking for.
$ echo 'user=username1' | awk -F= '{print $2}'
username1
Using the -F flag sets the delimiter to = and we select the 2nd item from the row.
file.txt:
user=username1
pass=password1
user=username2
pass=password2
user=username3
pass=password3
To avoid reading the file file.txt several times:
#!/usr/bin/env bash
func () {
echo "user:$1 pass:$2"
}
i=0
while IFS='' read -r line; do
if [ $i -eq 0 ]; then
i=1
user=$(echo ${line} | cut -f2 -d'=')
else
i=0
pass=$(echo ${line} | cut -f2 -d'=')
func "$user" "$pass"
fi
done < file.txt
Output:
user:username1 pass:password1
user:username2 pass:password2
user:username3 pass:password3
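The cut calls can be avoided too: parameter expansion strips the prefixes directly. A sketch that reads each user/pass pair of lines in one pass, assuming the strict two-lines-per-record layout above:

```shell
while IFS= read -r uline && IFS= read -r pline; do
user=${uline#user=}   # strip the "user=" prefix
pass=${pline#pass=}   # strip the "pass=" prefix
echo "user:$user pass:$pass"
done < file.txt
```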

Read words in a specific line in a text file using shell script

In my Bash shell script, I would like to read a specific line from a file that is delimited by :, and assign each field to a variable for processing later.
For example I want to read the words found on line 2. The text file:
abc:01APR91:1:50
Jim:02DEC99:2:3
banana:today:three:0
Once I have "read" line 2, I should be able to echo the values as something like this:
echo "$name";
echo "$date";
echo "$number";
echo "$age";
The output would be:
Jim
02DEC99
2
3
For echoing a single line of a file, I quite like sed:
$ IFS=: read name date number age < <(sed -n 2p data)
$ echo $name
Jim
$ echo $date
02DEC99
$ echo $number
2
$ echo $age
3
$
This uses process substitution to get the output of sed to the read command. The sed command uses the -n option so it does not print each line (as it does by default); the 2p means 'when it is line 2, print the line'; data is simply the name of the file.
You can use this:
read name date number age <<< $(awk -F: 'NR==2{printf("%s %s %s %s\n", $1, $2, $3, $4)}' inFile)
echo "$name"
echo "$date"
echo "$number"
echo "$age"
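For line 2 specifically, the external command can be dropped entirely: discard the first line with a throwaway read, then let a second read split on :. A sketch against the same data file:

```shell
{ read -r _; IFS=: read -r name date number age; } < data
echo "$name"     # Jim
echo "$date"     # 02DEC99
```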

How can I tokenize $PATH by using awk?

How can I tokenize $PATH by using awk?
I tried for 3 hours, but it totally screwed up.
#!/bin/bash
i=1
while true; do
token=$($echo $PATH | awk -F ':' '{print $"$i"}')
if [ -z "$token" ]; then
break
fi
((i++))
if [ -a "$TOKEN/$1" ]; then
echo "$TOKEN/$1"
break
fi
break
done
When I run this code, I got
/home/$USERID/bin/ff: line 6: /home/$USERID/bin:/usr/local/symlinks:/usr/local/scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/$USERID/bin: No such file or directory
How can I change my program?
What are you trying to do?
This will let you iterate against the individual paths:
echo $PATH | tr ':' '\n' | while read line; do echo $line; done
As @SiegeX notes, an even shorter version works:
echo $PATH | while read -d ':' line; do echo $line; done
Do the whole thing in awk
#!/bin/bash
awk -v addPath="$1" 'BEGIN{RS=":";ORS=addPath "\n"}{$1=$1}1' <<< $PATH
Proof of Concept
$ addPath="/foo"
$ awk -v addPath="$addPath" 'BEGIN{RS=":";ORS=addPath "\n"}{$1=$1}1' <<< $PATH
/usr/local/bin/foo
/usr/bin/foo
/bin/foo
/usr/games/foo
/usr/lib/java/bin/foo
/usr/lib/qt/bin/foo
/usr/share/texmf/bin/foo
./foo
/sbin/foo
/usr/sbin/foo
/usr/local/sbin/foo
I think a simple tr : '\n' would suffice. Pipe it through sed 's#$#blabla#g' to append something to each line, and that's it.
You don't need to use external tools such as awk or tr to tokenize the PATH. Bash is capable of doing so:
#!/bin/bash
IFS=:
for p in $PATH
do
if [ -a "$p/$1" ]; then
echo "$p/$1"
break
fi
done
IFS is a built-in shell variable that bash uses as the input field separator, hence the name.
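One caveat: assigning IFS like that changes it for the rest of the script. A sketch that confines the split to a single read by loading the components into an array instead:

```shell
IFS=: read -ra dirs <<< "$PATH"   # IFS=: applies to this read only
for p in "${dirs[@]}"; do
echo "$p"
done
```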
