assign bash array values to variables - bash

I have a file with a list of disks and their serial numbers on separate lines. The data is consistently formatted throughout the file, like this:
Disk hostname disk /proc/cds/cdd/disks/csd1
Disk hostname disk serial: NAGYNLGX
Disk hostname disk /proc/cds/cdd/disks/csd10
Disk hostname disk serial: NAGY85MX
I am trying to grab the path from the first of each pair of lines (/proc/cds/cdd/disks/csd1) and place the next line's serial number after it on the same line, so that the output is formatted this way:
/proc/cds/cdd/disks/csd1 NAGYNLGX
/proc/cds/cdd/disks/csd10 NAGY85MX
I tried reading all the file output into an array and then assigning the values to variables in a bash script.
#!/bin/bash
readarray a < rec20.txt
total=${#a[*]}
for (( i=0; i<=$(( $total -1 )); i++ ))
do
    let b=i+1
    # echo -n "${a[$i]} "|awk '{print $4}'; echo -n "${a[$b]} "|awk '{print $5}'
    # echo -e "${a[$i]} "|awk '{print $4}'\t; echo -e "${a[$b]} "|awk '{print $5}'\n
    # set var1= echo "${a[$i]} " |awk '{print $4}'
    # set var2= echo "${a[$b]} " |awk '{print $5}'
    # var1=printf '%s\t' "${a[$i]} "|awk '{print $4}'
    # var2=printf '%s\n' "${a[$b]} "|awk '{print $5}'
    echo -e "${a[$i]} "|awk '{print $4}'\t
    echo -e "${a[$b]} "|awk '{print $5}'\n
    echo "var1 is $var1 var2 is $var2"
    let i++
done

This can be done easily using awk:
awk -v OFS='\t' 'NR%2{s=$NF; next} {print s, $NF}' rec20.txt
/proc/cds/cdd/disks/csd1 NAGYNLGX
/proc/cds/cdd/disks/csd10 NAGY85MX
By the way, to read file data correctly into a Bash array (without the trailing newline on each element) you need to use the -t option, i.e.
readarray -t a < rec20.txt
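For comparison, the same pairing can be done in pure bash after the readarray call, with no awk at all. This is a minimal sketch assuming the rec20.txt format shown in the question (the sample file is recreated here so the snippet is self-contained):

```shell
#!/bin/bash
# Recreate the sample input from the question (file name from the question)
printf '%s\n' \
  'Disk hostname disk /proc/cds/cdd/disks/csd1' \
  'Disk hostname disk serial: NAGYNLGX' \
  'Disk hostname disk /proc/cds/cdd/disks/csd10' \
  'Disk hostname disk serial: NAGY85MX' > rec20.txt

readarray -t a < rec20.txt
pairs=()
for (( i=0; i<${#a[@]}; i+=2 )); do
  # ${var##* } keeps only the last space-separated word of each line
  pairs+=( "${a[i]##* }"$'\t'"${a[i+1]##* }" )
done
printf '%s\n' "${pairs[@]}"
```

The `##* ` parameter expansion replaces the `awk '{print $NF}'` step, so no external processes are spawned inside the loop.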

This will read the file (file.txt) and parse it the way you wanted:
readarray -t lines < <(sed -e "s/.* //" file.txt)  # read -a would stop at the first newline
for (( i=0; i<${#lines[*]}; i+=2 )); do
myArray+=("${lines[i]} ${lines[i+1]}")
done
The first line reads the file through "sed", which strips everything up to and including the last space on each line, leaving only the last field. So the file contents are transformed from what you posted into this:
/proc/cds/cdd/disks/csd1
NAGYNLGX
/proc/cds/cdd/disks/csd10
NAGY85MX
I then looped through the lines two at a time, appending the current line and the next line, joined together, to "myArray".

Related

How to grab fields in inverted commas

I have a text file which contains the following lines:
"user","password_last_changed","expires_in"
"jeffrey","2021-09-21 12:54:26","90 days"
"root","2021-09-21 11:06:57","0 days"
How can I grab the two quoted fields, jeffrey and 90 days, and save them in variables?
If awk is an option, you could save an array and then save the elements as individual variables.
$ IFS="\"" read -ra var <<< $(awk -F, '/jeffrey/{ print $1, $NF }' input_file)
$ var2="${var[3]}"
$ echo "$var2"
90 days
$ var1="${var[1]}"
$ echo "$var1"
jeffrey
while read -r line; do # read in line by line
name=$(echo $line | awk -F, ' { print $1} ' | sed 's/"//g')   # grab first column and strip quotes
expire=$(echo $line | awk -F, ' { print $3} '| sed 's/"//g')  # grab third column and strip quotes
echo "$name" "$expire" # do your business
done < yourfile.txt
IFS=","
arr=( $(cat txt | head -2 | tail -1 | cut -d, -f 1,3 | tr -d '"') )
echo "${arr[0]}"
echo "${arr[1]}"
The result is stored in an array; you can access the elements by index.
Maybe the method below, using the sed and awk commands, will help you:
#!/bin/sh
username=$(sed -n '/jeffrey/p' demo.txt | awk -F',' '{print $1}')
echo "$username"
expires_in=$(sed -n '/jeffrey/p' demo.txt | awk -F',' '{print $3}')
echo "$expires_in"
Output :
jeffrey
90 days
Note: the above method works only if the username appears just once in the file; as far as I know, usernames are not duplicated.
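The sed | awk pipeline above can also be collapsed into a single awk call. A sketch, recreating the demo.txt sample from the answer so it runs on its own:

```shell
# Recreate the sample file from the question (file name from the answer above)
printf '%s\n' \
  '"jeffrey","2021-09-21 12:54:26","90 days"' \
  '"root","2021-09-21 11:06:57","0 days"' > demo.txt

# gsub on $0 strips the quotes; assigning to $0 re-splits the fields on the comma
username=$(awk -F',' '/jeffrey/{gsub(/"/,""); print $1}' demo.txt)
expires_in=$(awk -F',' '/jeffrey/{gsub(/"/,""); print $3}' demo.txt)
echo "$username"
echo "$expires_in"
```

This keeps the same uniqueness caveat: it assumes /jeffrey/ matches exactly one line.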

Bash: read property file into Array

I'm trying to read a property file like this one into a set of arrays:
DATABASE="mysql57"
DB_DRIVER_XA="com.mysql.cj.jdbc.MysqlXADataSource"
DB_DRIVER_CLASS="com.mysql.cj.jdbc.Driver"
DATABASE="db2_111"
DB_DRIVER_XA="com.ibm.db2.jcc.DB2XADataSource"
DB_DRIVER_CLASS="com.ibm.db2.jcc.DB2Driver"
I've found the following grep to be useful to store each key into its array:
filename=conf.properties
dblist=($(grep "DATABASE" $filename))
xadriver=($(grep "DB_DRIVER_XA" $filename))
driver=($(grep "DB_DRIVER_CLASS" $filename))
The problem is that the above solution stores into the array KEY=VALUE:
printf '%s\n' "${dblist[@]}"
DATABASE="mysql57"
DATABASE="db2_111"
I'd like each array to hold only the values. Is there a simpler way to do it than looping over the array and using "cut" to remove the "KEY=" part?
Sure:
databases=()
xas=()
classes=()
while IFS="=" read -r var value; do
without_quotes=${value//\"/}
case $var in
DATABASE) databases+=( "$without_quotes" ) ;;
DB_DRIVER_XA) xas+=( "$without_quotes" ) ;;
DB_DRIVER_CLASS) classes+=( "$without_quotes" ) ;;
esac
done < file
declare -p databases xas classes
declare -a databases='([0]="mysql57" [1]="db2_111")'
declare -a xas='([0]="com.mysql.cj.jdbc.MysqlXADataSource" [1]="com.ibm.db2.jcc.DB2XADataSource")'
declare -a classes='([0]="com.mysql.cj.jdbc.Driver" [1]="com.ibm.db2.jcc.DB2Driver")'
The take-away is to use IFS with the read command to split the line into fields, and store the results in separate variables.
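That take-away can be demonstrated in isolation on a single KEY="value" line:

```shell
# Split one line on '=' and strip the quotes, as the loop above does
IFS="=" read -r var value <<< 'DATABASE="mysql57"'
without_quotes=${value//\"/}
echo "$var"            # DATABASE
echo "$without_quotes" # mysql57
```

Because IFS is set only for the read command, the shell's normal word splitting elsewhere is unaffected.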
Use awk -F= to split each line into key and value, and sed to strip out the quotes.
dblist=( $(awk -F= '$1=="DATABASE" {print $2}' "$filename" | sed 's/"//g'))
xadriver=($(awk -F= '$1=="DB_DRIVER_XA" {print $2}' "$filename" | sed 's/"//g'))
driver=( $(awk -F= '$1=="DB_DRIVER_CLASS" {print $2}' "$filename" | sed 's/"//g'))
Better yet, use readarray to populate the arrays: it prevents word splitting on spaces and glob expansion on * and ?.
readarray -t dblist < <(awk -F= '$1=="DATABASE" {print $2}' "$filename" | sed 's/"//g')
readarray -t xadriver < <(awk -F= '$1=="DB_DRIVER_XA" {print $2}' "$filename" | sed 's/"//g')
readarray -t driver < <(awk -F= '$1=="DB_DRIVER_CLASS" {print $2}' "$filename" | sed 's/"//g')

Get the third element of each line of a file in a shell script

I'm writing a shell script and I want to read data from a file. The file contains something like:
/path/to/file1 something 0
/path/to/file2 something2 1
/path/to/file3 something3 2
What I want is to get the third element of the line but I don't know how to do it.
In my code, I have:
while read line;
do
# must echo the third element of the line
done < file | sort -n -k 2 -t " "
I already tried with awk but it didn't work.
How should I do this?
This works if fields are separated by space:
$ echo 'foo bar baz' | cut --delimiter=' ' --fields=3
baz
This works for most whitespace separators:
$ echo 'foo bar baz' | awk '{print $3}'
baz
You can try something like this:
while read line;
do
path=$(echo $line | awk '{print $1}')
secondColumn=$(echo $line | awk '{print $2}')
thirdColumn=$(echo $line | awk '{print $3}')
echo $path
echo $secondColumn
echo $thirdColumn
done < test
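A variant of the same loop that avoids spawning awk three times per line by letting read do the splitting; the input file name test is taken from the answer above, and the sample data is recreated so the snippet runs on its own:

```shell
# Recreate the sample input from the question
printf '%s\n' \
  '/path/to/file1 something 0' \
  '/path/to/file2 something2 1' \
  '/path/to/file3 something3 2' > test

thirds=()
while read -r path second third; do
  thirds+=( "$third" )   # read has already split the line on whitespace
done < test
printf '%s\n' "${thirds[@]}"
```

Each loop iteration costs nothing beyond the builtin read, which matters on large files.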

Save output of awk to two different variables

Okay. I am kind of lost and google search isn't helping me much.
I have a command like:
filesize_filename=$(echo $line | awk ' ''{print $5":"$9}')
echo $filesize_filename
1024:/home/test
Now this saves the two awk'ed items into one variable. I'd like to achieve something like this:
filesize,filename=$(echo $line | awk ' ''{print $5":"$9}')
So I can access them individually like
echo $filesize
1024
echo $filename
/home/test
How do I achieve this?
Thanks.
Populate a shell array with the awk output and then do whatever you like with it:
$ fileInfo=( $(echo "foo 1024 bar /home/test" | awk '{print $2, $4}') )
$ echo "${fileInfo[0]}"
1024
$ echo "${fileInfo[1]}"
/home/test
If the file name can contain spaces then you'll have to adjust the FS and OFS in awk and the IFS in shell appropriately.
You may not need awk at all of course:
$ line="foo 1024 bar /home/test"
$ fileInfo=( $line )
$ echo "${fileInfo[1]}"
1024
$ echo "${fileInfo[3]}"
/home/test
but beware of globbing chars in $line matching on local file names in that last case. I expect there's a more robust way to populate a shell array from a shell variable but off the top of my head I can't think of it.
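One such robust way is read -ra with a here-string: it splits on IFS but performs no pathname expansion, so glob characters in the line are safe. A minimal sketch using the same sample line:

```shell
line="foo 1024 bar /home/test"
# -r: no backslash mangling; -a: fill the array; globbing chars in $line stay literal
read -ra fileInfo <<< "$line"
echo "${fileInfo[1]}"  # 1024
echo "${fileInfo[3]}"  # /home/test
```

Unlike `fileInfo=( $line )`, a line containing `*` would land in the array verbatim instead of matching local file names.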
Use bash's read for that:
read size name <<< "$(awk '{print $5, $9}' <<< "$line")"  # <<< (not <): the awk output is data, not a filename
# Now you can output them separately
echo "$size"
echo "$name"
You can use process substitution on awk's output:
read filesize filename < <(echo "$line" | awk '{print $5,$9}')
You can totally avoid awk by doing:
read _ _ _ _ filesize _ _ _ filename _ <<< "$line"

Setting multiple field to awk variables at once

I am trying to set an awk variable field to several field at once.
Right now I can only set the variables one by one.
for line in `cat file.txt`;do
var1=`echo $line | awk -F, '{print $1}'`
var2=`echo $line | awk -F, '{print $2}'`
var3=`echo $line | awk -F, '{print $3}'`
#Some complex code....
done
I think this is costly because it runs awk several times per line. Is there special syntax to set the variables all at once? I know that awk has BEGIN and END blocks, but the reason I am trying to avoid them is to avoid nested awk.
I plan to place another loop and awk code in the #Some complex code.... part.
for line in `cat file.txt`;do
var1=`echo $line | awk -F, '{print $1}'`
var2=`echo $line | awk -F, '{print $2}'`
var3=`echo $line | awk -F, '{print $3}'`
for line2 in `cat file_old.txt`;do
vara=`echo $line2 | awk -F, '{print $1}'`
varb=`echo $line2 | awk -F, '{print $2}'`
# Do comparison of $var1,var2 and $vara,$varb , then do something with either
done
done
You can use the IFS internal field separator to use a comma (instead of whitespace) and do the assignments in a while loop:
SAVEIFS=$IFS;
IFS=',';
while read line; do
set -- $line;
var1=$1;
var2=$2;
var3=$3;
...
done < file.txt
IFS=$SAVEIFS;
This saves a copy of your current IFS, changes it to a , character, and then iterates over each line of your file. The line set -- $line; splits each comma-separated word into the positional parameters ($1, $2, etc.). You can either use these variables directly, or assign them to other (more meaningful) variable names.
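The effect of set -- on a single comma-separated line can be seen in isolation:

```shell
SAVEIFS=$IFS
IFS=','
line='a,b,c'
set -- $line   # unquoted on purpose: the expansion is word-split on the comma
first=$1; second=$2; third=$3
echo "$first / $second / $third"
IFS=$SAVEIFS
```

Note that the unquoted $line is essential here; "$line" would stay a single word and land entirely in $1.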
Alternatively, you could use IFS with the answer provided by William:
IFS=',';
while read var1 var2 var3; do
...
done < file.txt
They are functionally identical and it just comes down to whether or not you want to explicitly set var1=$1 or have it defined in the while-loop's head.
Why are you using awk at all?
while IFS=, read var1 var2 var3; do
...
done < file.txt
#!/bin/bash
FILE="/tmp/values.txt"
> "$FILE"  # truncate once up front; truncating inside the function would keep only the last line's values
function parse_csv() {
local lines=$lines;
OLDIFS=$IFS;
IFS=","
i=0
for val in ${lines}
do
i=$((++i))
eval var${i}="${val}"
done
IFS=$OLDIFS;
for ((j=1;j<=i;++j))
do
name="var${j}"
echo ${!name} >> $FILE
done
}
for lines in `cat file_old.txt`;do
parse_csv;
done
The problem you described has only 3 values per line; is there a chance that count may vary and become 4, 5, or undefined?
If so, the above will parse the CSV line by line and output each value on a new line in a file called /tmp/values.txt.
Feel free to modify it to match your requirements; it's far more dynamic than hard-coding 3 values.
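For a variable number of fields, a bash array avoids the eval'd numbered variables entirely. A minimal sketch of the same idea:

```shell
line='a,b,c,d,e'
IFS=',' read -ra vals <<< "$line"   # one array element per CSV field, however many there are
printf '%s\n' "${vals[@]}"
echo "${#vals[@]} fields"
```

The field count is simply ${#vals[@]}, and no IFS save/restore is needed since IFS is set only for the read command.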
