How to assign a value to a variable whose name is provided from a file in a for loop - bash

I'm facing a problem assigning a value to a variable whose name is stored in another variable or in a file:
cat ids.txt
ID1
ID2
ID3
What I want to do is:
for i in `cat ids.txt`; do $i=`cat /proc/sys/kernel/random/uuid`; done
or
for i in ID1 ID2 ID3; do $i=`cat /proc/sys/kernel/random/uuid`; done
But it's not working.
What I would like to have is something like:
echo $ID1
5dcteeee-6abb-4agg-86bb-948593020451
echo $ID2
5dcteeee-6abb-4agg-46db-948593322990
echo $ID3
5dcteeee-6abb-4agg-86cb-948593abcd45

Use declare. https://linuxcommand.org/lc3_man_pages/declareh.html
# declare values
for i in ID1 ID2 ID3; do
    declare "${i}=$(cat /proc/sys/kernel/random/uuid)"
done
# read values (note the `!` in the expansion: ${!i} is indirect, giving the value of "$ID1" rather than the string "ID1")
for i in ID1 ID2 ID3; do echo "${!i}"; done
3f204128-bac6-481e-abd3-37bb6cb522da
ccddd0fb-1b6c-492e-bda3-f976ca62d946
ff5e04b9-2e51-4dac-be41-4c56cfbce22e
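If your Bash is 4.3 or newer, a nameref (declare -n) is an alternative to indirect expansion; a minimal sketch, assuming the same ID names:
for i in ID1 ID2 ID3; do
    declare -n ref="$i"                       # ref becomes an alias for the variable named in $i
    ref=$(cat /proc/sys/kernel/random/uuid)   # assigning to ref assigns to ID1, ID2, ...
    echo "$i=$ref"
    unset -n ref                              # drop the alias before re-pointing it next iteration
done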
Or better yet... Reading IDs from the file:
for i in $(cat ids.txt); do
    echo "ID from file: ${i}"
    declare "${i}=$(cat /proc/sys/kernel/random/uuid)"
    echo "${i}=${!i}"
done
Result:
$ cat ids.txt
ID1
ID2
ID3
$ for i in $(cat ids.txt); do echo "ID from file: ${i}"; declare ${i}=$(cat /proc/sys/kernel/random/uuid); echo "${i}=${!i}"; done
ID from file: ID1
ID1=d5c4a002-9039-498b-930f-0aab488eb6da
ID from file: ID2
ID2=a77f6c01-7170-4f4f-a924-1069e48e93db
ID from file: ID3
ID3=bafe8bb2-98e6-40fa-9fb2-0bcfd4b69fad

A one-liner using the . builtin, process and command substitution, and printf's implicit loop:
. <(printf '%s=$(cat /proc/sys/kernel/random/uuid)\n' $(<ids.txt))
echo "ID1=$ID1"; echo "ID2=$ID2"; echo "ID3=$ID3"
Note: the lines of ids.txt must consist only of valid variable names, and the file must come from a trusted source, since its contents end up being executed. Checking the file with grep -vq '^[[:alpha:]][[:alnum:]]*$' ids.txt (which succeeds if any line is not a valid name) before calling this command may be a safer approach.
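For example, a minimal guard built around that grep pattern might look like this:
# grep -vq succeeds when at least one line does NOT match the pattern
if grep -vq '^[[:alpha:]][[:alnum:]]*$' ids.txt; then
    echo "ids.txt contains lines that are not valid variable names" >&2
else
    . <(printf '%s=$(cat /proc/sys/kernel/random/uuid)\n' $(<ids.txt))
fi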

Method with an associative array:
#!/usr/bin/env bash
# Declare associative array to store ids and UUIDs
declare -A id_map=()
# Read ids.txt into an array
mapfile -t ids < ids.txt
# Iterate over ids
for id in "${ids[@]}"; do
    # Populate id_map with a uuid for each id
    # Prepend $id with an x because associative array keys must not be empty
    read -r id_map["x$id"] < /proc/sys/kernel/random/uuid
done
# Debug content of id_map
for x_id in "${!id_map[@]}"; do
    id="${x_id#?}" # Trim the leading x filler
    printf '%s=%s\n' "$id" "${id_map[$x_id]}"
done
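A single UUID can then be looked up directly, remembering the x prefix, e.g.:
echo "${id_map[xID1]}"    # value stored for ID1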

Related

Reading CSV file in Shell Scripting

I am trying to read values from a CSV file dynamically based on the header. Here's what my input files can look like.
File 1:
name,city,age
john,New York,20
jane,London,30
or
File 2:
name,age,city,country
john,20,New York,USA
jane,30,London,England
I may not be following the best way to accomplish this but I tried the following code.
#!/bin/bash
{
read -r line
line=`tr ',' ' ' <<< $line`
while IFS=, read -r `$line`
do
echo $name
echo $city
echo $age
done
} < file.txt
I am expecting the above code to read the values of the header as the variable names. I know that the order of columns can differ between input files, but I expect the files to have name, city and age columns. Is this the right approach? If so, what is the fix for the above code, which fails with the error "line 7: name: command not found"?
The issue is caused by the backticks. Bash will evaluate the contents and replace the backticks with the output from the command it just evaluated.
You can simply use the variable after the read command to achieve what you want:
#!/bin/bash
{
    read -r line
    line=$(tr ',' ' ' <<< "$line")
    echo "$line"
    while IFS=, read -r $line ; do
        echo "person: $name -- $city -- $age"
    done
} < file.txt
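With File 1 as input, this prints:
name city age
person: john -- New York -- 20
person: jane -- London -- 30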
Some notes on your code:
The backtick syntax is legacy; it is now preferred to use $(...) to evaluate commands. The new syntax is more flexible.
You can enable automatic script failure with set -euo pipefail (see here). This will make your script stop if it encounters an error.
Your code is currently very sensitive to invalid header data:
with a file like
n ame,age,city,country
john,20,New York,USA
jane,30,London,England
your script (or rather the version at the beginning of my answer) will run without errors but produce invalid output.
It is also good practice to quote variables to prevent unwanted splitting.
To make it much more robust, you can change it as follows:
#!/bin/bash
set -euo pipefail
# -e and -o pipefail will make the script exit
# in case of command failure (or piped command failure)
# -u will exit in case a variable is undefined
# (in your case, if the header is invalid)
{
    read -r line
    readarray -d, -t header < <(printf "%s" "$line")
    # using an array allows detecting whether one of the header entries
    # contains an invalid character
    # the printf is needed because bash would add a newline to the
    # command input if using a herestring (<<<)
    while IFS=, read -r "${header[@]}" ; do
        echo "$name"
        echo "$city"
        echo "$age"
    done
} < file.txt
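To fail fast on a malformed header instead of relying on set -u, a validation loop could be added right after the readarray; a minimal sketch:
# hypothetical check: every header entry must be a valid variable name
for h in "${header[@]}"; do
    if ! [[ $h =~ ^[[:alpha:]_][[:alnum:]_]*$ ]]; then
        echo "invalid column name: $h" >&2
        exit 1
    fi
done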
A slightly different approach lets awk handle the field separation and the ordering of the desired output for either input file. Below, awk stores the desired output order in the f[] (field) array, set in the BEGIN rule. Then on the first line of each file (FNR==1) the a[] array is deleted and refilled with the headings from the current file. After that, you just loop over the field names in order in the f[] array and output the corresponding field from the current line, e.g.
awk -F, '
BEGIN { f[1]="name"; f[2]="city"; f[3]="age" } # desired order
FNR==1 { # on first line read header
delete a # clear a array
for (i=1; i<=NF; i++) # loop over headings
a[$i] = i # index by heading, val is field no.
next # skip to next record
}
{
print "" # optional newline between outputs
for (i=1; i<=3; i++) # loop over desired field order
if (f[i] in a) # validate field in a array
print $a[f[i]] # output fields value
}
' file1 file2
Example Use/Output
In your case with the content you show in file1 and file2, you would have:
$ awk -F, '
> BEGIN { f[1]="name"; f[2]="city"; f[3]="age" } # desired order
> FNR==1 { # on first line read header
> delete a # clear a array
> for (i=1; i<=NF; i++) # loop over headings
> a[$i] = i # index by heading, val is field no.
> next # skip to next record
> }
> {
> print "" # optional newline between outputs
> for (i=1; i<=3; i++) # loop over desired field order
> if (f[i] in a) # validate field in a array
> print $a[f[i]] # output fields value
> }
> ' file1 file2
john
New York
20
jane
London
30
john
New York
20
jane
London
30
Where both files are read and handled identically despite having different field orderings. Let me know if you have further questions.
If using Bash version ≥ 4.2, it is possible to use an associative array to capture an arbitrary number of fields with their names as keys:
#!/usr/bin/env bash
# Associative array to store column names as keys and their values
declare -A fields
# Array to store columns names with index
declare -a column_name
# Array to store row's values
declare -a line
# Commands block consuming CSV input
{
    # Read first line to capture column names
    IFS=, read -r -a column_name
    # Process records
    while IFS=, read -r -a line; do
        # Store column values under the corresponding field name
        for ((i=0; i<${#column_name[@]}; i++)); do
            # Fill the fields associative array
            fields["${column_name[i]}"]="${line[i]}"
        done
        # Dump fields for debug|demo purposes
        # Processing of each captured value could go here instead
        declare -p fields
    done
} < file.txt
Sample output with file 2
declare -A fields=([country]="USA" [city]="New York" [age]="20" [name]="john" )
declare -A fields=([country]="England" [city]="London" [age]="30" [name]="jane" )
For older Bash versions, without associative arrays, use indexed column names instead:
#!/usr/bin/env bash
# Array to store columns names with index
declare -a column_name
# Array to store values for a line
declare -a value
# Commands block consuming CSV input
{
    # Read first line to capture column names
    IFS=, read -r -a column_name
    # Process records
    while IFS=, read -r -a value; do
        # Print record separator
        printf -- '--------------------------------------------------\n'
        # Print captured field names and values
        for ((i=0; i<${#column_name[@]}; i++)); do
            printf '%-18s: %s\n' "${column_name[i]}" "${value[i]}"
        done
    done
} < file.txt
Output:
--------------------------------------------------
name : john
age : 20
city : New York
country : USA
--------------------------------------------------
name : jane
age : 30
city : London
country : England

shell script declare associative variables from array

I have an array of key-value pairs separated by spaces:
array='Name:"John" ID:"3234" Designation:"Engineer" Age:"32" Phone:"+123 456 789"'
Now I want to convert the above array into associative variables like below:
declare -A newmap
newmap[Name]="John"
newmap[ID]="3234"
newmap[Designation]="Engineer"
newmap[Age]="32"
newmap[Phone]="+123 456 789"
echo ${newmap[Name]}
echo ${newmap[ID]}
echo ${newmap[Designation]}
echo ${newmap[Age]}
echo ${newmap[Phone]}
I'm able to get the value for a given key using a file:
declare -A arr
while IFS='=' read -r k v; do
    arr[$k]=$v
done < "file.txt"
echo "${arr[name]}"
But I want to implement same functionality using array instead of file.
You can just use sed to reformat the input data before calling declare -A:
s='array=Name:"John" ID:"3234" Designation:"Engineer" Age:"32" Phone:"+123 456 789"'
declare -A "newmap=(
$(sed -E 's/" ([[:alpha:]])/" [\1/g; s/:"/]="/g' <<< "[${s#*=}")
)"
Then check output:
declare -p newmap
declare -A newmap=([ID]="3234" [Designation]="Engineer" [Age]="32" [Phone]="+123 456 789" [Name]="John" )
A version without eval:
array='Name:"John" ID:"3234" Designation:"Engineer" Age:"32" Phone:"+123 456 789"'
declare -A "newmap=($(perl -pe 's/(\w+):"/[\1]="/g' <<< "$array"))"
echo ${newmap[Phone]}
# output : +123 456 789
Working with the variable array that's been defined as follows:
$ array='Name:"John" ID:"3234" Designation:"Engineer" Age:"32" Phone:"+123 456 789"'
NOTES:
assuming no white space between the attribute, ':' and value
assuming there may be variable amount of white space between attribute/value pairs
assuming all values are wrapped in a pair of double quotes
And assuming the desire is to parse this string and store in an array named newmap ...
We can use sed to break our string into separate lines as such:
$ sed 's/" /"\n/g;s/:/ /g' <<< ${array}
Name "John"
ID "3234"
Designation "Engineer"
Age "32"
Phone "+123 456 789"
We can then feed this to a while loop to populate our array:
$ unset newmap
$ typeset -A newmap
$ while read -r k v
do
newmap[${k}]=${v//\"} # strip off the double quote wrapper
done < <(sed 's/" /"\n/g;s/:/ /g' <<< ${array})
$ typeset -p newmap
declare -A newmap=([ID]="3234" [Name]="John" [Phone]="+123 456 789" [Age]="32" [Designation]="Engineer" )
And applying the proposed (and slightly modified) echo statements:
$ (
echo "Name - ${newmap[Name]}"
echo "ID - ${newmap[ID]}"
echo "Designation - ${newmap[Designation]}"
echo "Age - ${newmap[Age]}"
echo "Phone - ${newmap[Phone]}"
)
Name - John
ID - 3234
Designation - Engineer
Age - 32
Phone - +123 456 789
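A pure-Bash alternative (no sed or perl) is also possible: a minimal sketch using the =~ regex operator, under the same formatting assumptions (keys without spaces, values wrapped in double quotes):
declare -A newmap
re='^[[:space:]]*([[:alnum:]]+):"([^"]*)"(.*)$'
rest=$array
while [[ $rest =~ $re ]]; do
    newmap[${BASH_REMATCH[1]}]=${BASH_REMATCH[2]}   # key and unquoted value
    rest=${BASH_REMATCH[3]}                         # remainder of the string
done
typeset -p newmap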

bash to store unique values of array in variable

The bash script below goes to a folder and stores all of the .html file names in f1. It then removes all text after the _ to get $p. I added a for loop to get the unique ids in $p. The terminal output shows $p is correct, but only the last value is being stored in the new array ($sorted_unique_ids), and I am not sure why all three are not.
dir=/path/to
var=$(ls -td "$dir"/*/ | head -1)   ## store newest <run> in var
for f1 in "$var"/qc/*.html ; do
    # Grab file prefix
    bname=`basename $f1`            # strip off path
    p="$(echo $bname|cut -d_ -f1)"
    typeset -A myarray              ## define associative array
    myarray[${p}]=yes               ## store p in myarray
    for i in ${!myarray[@]}; do echo ${!myarray[@]} | tr ' ' '\n' | sort; done
done
output
id1
id1
id1
id2
id1
id2
id1
id2
id3
id1
id2
id3
desired sorted_unique_ids
id1
id2
id3
Maybe something like this:
dir=$(ls -td "$dir"/*/ | head -1)
find "$dir" -maxdepth 1 -type f -name '*_*.html' -printf "%f\n" |
cut -d_ -f1 | sort -u
For input directory structure created like:
dir=dir
mkdir -p dir/dir
touch dir/dir/id{1,2,3}_{a,b,c}.html
So it looks like this:
dir/dir/id2_b.html
dir/dir/id1_c.html
dir/dir/id2_c.html
dir/dir/id1_b.html
dir/dir/id3_b.html
dir/dir/id2_a.html
dir/dir/id3_a.html
dir/dir/id1_a.html
dir/dir/id3_c.html
The script will output:
id1
id2
id3
Tested on repl.
latest=$(ls -t "$dir" | head -1)   # or …|sed q if you're really jonesing for keystrokes
for f in "$latest"/qc/*_*.html; do f=${f##*/}; printf %s\\n "${f%_*}"; done | sort -u
Define an associative array:
typeset -A myarray
Use each p value as the index for an array element; assign any value you want to the array element (the value just acts as a placeholder):
myarray[${p}]=yes
If you run across the same p value more than once, each assignment to the array will overwrite the previous assignment; the net result is that you'll only ever have a single array element for each distinct value of p.
To obtain your unique list of p values, you can loop through the indexes for the array, eg:
for i in ${!myarray[@]}
do
    echo ${i}
done
If you need the array indexes generated in sorted order try:
echo ${!myarray[@]} | tr ' ' '\n' | sort
You can then use this sorted result set as needed (eg, dump to stdout, feed to a loop, etc).
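For instance, to land the sorted result in an array like the sorted_unique_ids the question asks for (a minimal sketch):
mapfile -t sorted_unique_ids < <(printf '%s\n' "${!myarray[@]}" | sort)
printf '%s\n' "${sorted_unique_ids[@]}"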
So, adding my code to the OP's original code gives us:
typeset -A myarray                  ## define associative array
dir=/path/to
var=$(ls -td "$dir"/*/ | head -1)   ## store newest <run> in var
for f1 in "$var"/qc/*.html ; do
    # Grab file prefix
    bname=`basename $f1`            # strip off path
    p="$(echo $bname|cut -d_ -f1)"
    myarray[${p}]=yes               ## store p in myarray
done
# display sorted, unique set of p values
echo ${!myarray[@]} | tr ' ' '\n' | sort

Generate a column for each file matching a glob

I'm having difficulties with something that sounds relatively simple. I have a few data files with single values in them as shown below:
data1.txt:
100
data2.txt:
200
data3.txt:
300
I have another file called header.txt; it's a template file that contains the header as shown below:
Data_1 Data2 Data3
- - -
I'm trying to add the data from the data*.txt files to the last line of Master.txt
The desired output would be something like this:
Data_1 Data2 Data3
- - -
100 200 300
I'm actively working on this so I'm not sure where to begin. This doesn't need to be implemented in pure shell -- use of standard UNIX tools such as awk or sed is entirely reasonable.
paste is the key tool:
#!/bin/bash
exec >>Master.txt
cat header.txt
paste -d '\n' data1.txt data2.txt data3.txt |
while read -r line1
do
    read -r line2
    read -r line3
    printf '%-10s %-10s %-10s\n' "$line1" "$line2" "$line3"
done
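If plain tab-separated columns are acceptable, paste's default delimiter does the whole job; a minimal sketch:
# paste joins corresponding lines of its inputs with tabs, so three
# one-line files become a single three-column line
cat header.txt >> Master.txt
paste data1.txt data2.txt data3.txt >> Master.txt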
As a native-bash implementation:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[123].*) echo "ERROR: Bash 4.0+ needed" >&2; exit 1;; esac
declare -A keys=( )           # define an associative array (a string->string map)
for f in data*.txt; do        # iterate over data*.txt files
    name=${f%.txt}            # for each, remove the ".txt" extension to get our name...
    keys[${name^}]=$(<"$f")   # capitalize the first letter, and read the file to get the value
done
{                                          # start a group so we can redirect output just once
    printf '%s\t' "${!keys[@]}"; echo      # first line: keys in our associative array
    printf '%s\t' "${keys[@]//*/-}"; echo  # second line: convert values to dashes
    printf '%s\t' "${keys[@]}"; echo       # third line: print the values unmodified
} >>Master.txt                             # all the above with output redirected to Master.txt
Most of the magic here is performed by parameter expansions:
${f%.txt} trims the .txt extension from the end of $f
${name^} capitalizes the first letter of $name
"${keys[#]}" expands to all values in the array named keys
"${keys[#]//*/-} replaces * (everything) in each key with the fixed string -.
"${!keys[#]}" expands to the names of entries in the associative array keys.

Create a loop for 3 different variables to output all possible combinations

So let's say I have these 3 strings:
ABC
123
!@#
How do I create a for loop to output all the ways to piece them together?
E.g. ABC123!@#, ABC!@#123, 123ABC!@#$
Here is my current code:
#!/bin/bash
alphabet='ABC' numbers='123' special='!@#'
for name in $alphabet$numbers$special
do
    echo $name
done
echo done
alphabet='ABC' numbers='123' special='!@#'
for name1 in $alphabet $numbers $special
# on the 1st iteration, name1's value will be ABC, on the 2nd 123 ...
do
    for name2 in $alphabet $numbers $special
    do
        for name3 in $alphabet $numbers $special
        do
            # skip combinations where the same string appears twice in a row
            if [ "$name1" != "$name2" ] && [ "$name2" != "$name3" ]
            then
                echo $name1$name2$name3
            fi
        done
    done
done
If you also want to print strings like 123123123 and ABCABCABC, remove the if condition.
You can also do it without a loop at all using brace expansion (but you lose the ability to exclude, e.g. ABCABCABC). For example:
#!/bin/bash
alpha='ABC'
num='123'
spec='!@#'
printf "%s\n" {$alpha,$num,$spec}{$alpha,$num,$spec}{$alpha,$num,$spec}
Example Use/Output
$ bash permute_brace_exp.sh
ABCABCABC
ABCABC123
ABCABC!@#
ABC123ABC
ABC123123
ABC123!@#
ABC!@#ABC
ABC!@#123
ABC!@#!@#
123ABCABC
123ABC123
123ABC!@#
123123ABC
123123123
123123!@#
123!@#ABC
123!@#123
123!@#!@#
!@#ABCABC
!@#ABC123
!@#ABC!@#
!@#123ABC
!@#123123
!@#123!@#
!@#!@#ABC
!@#!@#123
!@#!@#!@#
