Create a mapping file in bash [duplicate] - bash

I'm trying to read a structured file into an associative array using a Bash script. Each line of the file contains a person's name and their address, separated by a "|". For example:
person1|address of person1
person2|address of person2
...
personN|address of personN
I tried to do this using the script below. Within the WHILE loop, the information is printed; in the FOR loop, however, nothing is printed. It seems that the information is not being stored in the associative array outside of the WHILE loop.
What am I doing wrong? Why is this not working? Is there a more efficient way to do this?
#!/bin/bash
declare -A address
cat adresses.txt | while read line
do
name=`echo $line | cut -d '|' -f 1`
add=`echo $line | cut -d '|' -f 2`
address[$name]=$add
echo "$name - ${address[$name]}"
done
for name in ${!address[*]}
do
echo "$name - ${address[$name]}"
done

The use of cut here is wrong and unnecessary; read can split on the "|" delimiter itself:
#!/bin/bash
declare -A address
while IFS=\| read name add
do
address[$name]=$add
done < adresses.txt
for name in "${!address[@]}"
do
echo "$name - ${address[$name]}"
done

cat addresses.txt | while read line
do
...
done
Shell commands in a pipeline are executed in subshells. Variables set in subshells aren't visible to the parent shell.
You can fix this by replacing the pipeline with a redirection.
while read line
do
...
done < addresses.txt
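A minimal, self-contained way to see the difference (the counter and sample input are made up for illustration):
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))         # incremented in a subshell created by the pipe
done
echo "pipe: $count"              # prints 0 in the parent shell

count=0
while read -r line; do
    count=$((count + 1))         # incremented in the current shell
done < <(printf 'a\nb\nc\n')     # process substitution instead of a pipe
echo "redirection: $count"       # prints 3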

Extending the accepted answer to resolve the OP's comment:
#!/bin/bash
declare -A address
while IFS='|' read name add
do
address[$name]=$add
echo "$name - ${address[$name]}"
done < adresses.txt
for name in "${!address[#]}"
do
echo "$name - ${address[$name]}"
done

Related

(BASH) Passing a loop variable into command that will be stored in a variable

I am trying to iterate through a line-by-line list of file ID strings (testIDs.txt) and pass them to an AWS command. The command should utilize the loop variable, and the output of the command should be stored in the "folder" variable. Whenever I run the script, I get blank spaces as output in the terminal.
#!/bin/bash
while read p; do
folder=$(aws s3 ls s3://a-bucket/ --recursive | grep "${p}" | cut -c 32-)
echo "${folder}"
done < testIDs.txt
The output of the AWS command should be two strings, and I have checked that this is true by running the AWS line separately in the terminal and using a string instead of ${p}.
Note: Right now, I simply want to print folder, but later I will pass folder to another loop.
You may want to use two nested loops: (1) read the contents of the bucket into a temp file (optional; you could search it directly), (2) loop through the IDs, and (3) for each ID, loop through each line of the bucket listing and look for the ID.
Example:
Read the bucket listing into a local temp file, then for every ID check each line of that file for the test ID and print matches to a results file.
#!/bin/bash
TMP="/tmp/s3-log-${RANDOM}.dat"
RESULTS="/tmp/s3-log-results.txt"
ID_FILE="testIDs.txt"
> "${RESULTS}"
aws s3 ls s3://a-bucket/ --recursive > "${TMP}"
while read -r p; do
    while IFS= read -r line; do
        if [[ $line = *"${p}"* ]]; then
            echo "id: ${p} => $(echo "${line}" | cut -c 32-)" | tee -a "${RESULTS}"
        fi
    done < "${TMP}"
done < "${ID_FILE}"
This is probably what you're looking for; use the while loop if you don't know how many values you need to loop through:
while read -r -d $'\t' resourceName; do
echo "$resourceName"
done <<< "$(aws transfer list-servers --query='Servers[*].ServerId' --output text)"
In this case it's -d $'\t' that does the work: it tells read to stop at each tab instead of each newline, so each tab-separated server ID in the AWS text output is read as a separate value.
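A quick way to see the same splitting without calling AWS (the server IDs below are made up):
tabbed=$'srv-01\tsrv-02\tsrv-03'
while read -r -d $'\t' resourceName; do
    echo "$resourceName"
done <<< "$tabbed"$'\t'    # trailing tab so the last ID is terminated by the delimiter too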

Reading a file line by line from variable

I'm working on a script and it isn't clear to me how read -r line knows which variable to get the data from. I want to read line by line from the FILE variable.
Here is the script I'm working on:
#!/bin/bash
cd "/"
FILE="$(< /home/FileSystemCorruptionTest/*.chk)"
while read -r line
do
echo "$line" > /home/FileSystemCorruptionTest/`date +%Y_%m_%d_%H_%M_%S`_1.log
done
echo "" > /home/FileSystemCorruptionTest/Done
Since it looks like you want to combine multiple files, I guess that I would regard this as a legitimate usage of cat:
cat /home/FileSystemCorruptionTest/*.chk | while read -r line
do
echo "$line"
done > /home/FileSystemCorruptionTest/`date +%Y_%m_%d_%H_%M_%S`_1.log
Note that I moved the redirect out of the loop, to prevent overwriting the file once per line.
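To make the difference concrete (the file names are just placeholders):
# redirect inside the loop: out.log is truncated on every iteration
# and ends up holding only the last line
while read -r line; do echo "$line" > out.log; done < in.txt

# redirect after done: out.log is opened once and receives every line
while read -r line; do echo "$line"; done < in.txt > out.log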
Also note that your example could easily be written as:
cat /home/FileSystemCorruptionTest/*.chk > /home/FileSystemCorruptionTest/`date +%Y_%m_%d_%H_%M_%S`_1.log
If you only actually have one file (and want to store it inside a variable), then you can use <<< after the loop:
while read -r line
do
echo "$line"
done <<<"$FILE" > /home/FileSystemCorruptionTest/`date +%Y_%m_%d_%H_%M_%S`_1.log
<<< "$FILE" has the same effect as using echo "$FILE" | before the loop but it doesn't create any subshells.
What you are requesting:
echo "${FILE}" | while read -r line …
But I think Tom's solution is better.

Why can't bash find my variable from compgen in a loop?

I am trying to push some variables into a bash array. For some reason I can't understand, my script finds the variable templates_age directly but not in the loop.
You can try the code on BASH Shell Online.
script:
templates_age="42"
templates_name="foo"
echo "age=${templates_age}"
echo "name=${templates_name}"
readarray GREPPED < <($(compgen -A variable | grep "templates_"))
for item in "${GREPPED[#]}"
do
echo "${item}"
done
output:
age=42
name=foo
./main.sh: line 32: templates_age: command not found
I tried different kinds of echo "${item}" without success.
To convert from grep to array, I am using this logic.
To correctly populate the array from a command's output, use process substitution on its own, without wrapping the command in $(...) (command substitution). With the extra $(...), the output of compgen is itself executed as a command, which is exactly where the "templates_age: command not found" error comes from:
readarray -t grepped < <(compgen -A variable | grep "templates_")
Also note the use of -t, which strips the trailing newline from each element.
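To see what -t changes, here is a tiny illustration (no compgen involved):
readarray raw < <(printf 'a\nb\n')
readarray -t trimmed < <(printf 'a\nb\n')
declare -p raw trimmed
# declare -a raw=([0]=$'a\n' [1]=$'b\n')      <- each element keeps its trailing newline
# declare -a trimmed=([0]="a" [1]="b")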
Full script:
templates_age="42"
templates_name="foo"
echo "age=${templates_age}"
echo "name=${templates_name}"
readarray -t grepped < <(compgen -A variable | grep "templates_")
declare -p grepped
for item in "${grepped[#]}"
do
printf "%s=%s\n" "${item}" "${!item}"
done
I'm not sure why you want to use compgen and grep here. Wouldn't this be enough?
for item in "${!templates_#}"; do
printf '%s=%s\n' "$item" "${!item}"
done
If you really want to populate an array, it's as simple as:
grepped=( "${!templates_#}" )
See Shell Parameter Expansion in the reference manual.

Making bash script with command already containing '$1'

Somewhere I found this command that sorts the lines of an input file by number of characters (1st order) and alphabetically (2nd order):
while read -r l; do echo "${#l} $l"; done < input.txt | sort -n | cut -d " " -f 2- > output.txt
It works fine but I would like to use the command in a bash script where the name of the file to be sorted is an argument:
$ cat numbersort.sh
#!/bin/sh
while read -r l; do echo "${#l} $l"; done < $1 | sort -n | cut -d " " -f 2- > sorted-$1
Entering numbersort.sh input.txt doesn't give the desired result, probably because $1 is already in use as an argument for something else.
How do I make the command work in a shell script?
There's nothing wrong with your original script when used with simple arguments that don't involve quoting issues. That said, there are a few bugs addressed in the below version:
#!/bin/bash
while IFS= read -r line; do
printf '%d %s\n' "${#line}" "$line"
done <"$1" | sort -n | cut -d " " -f 2- >"sorted-$1"
Use #!/bin/bash if your goal is to write a bash script; #!/bin/sh is the shebang for POSIX sh scripts, not bash.
Clear IFS to avoid pruning leading and trailing whitespace from input and output lines (a short illustration follows these notes).
Use printf rather than echo to avoid ambiguities in the POSIX standard (see http://pubs.opengroup.org/onlinepubs/009604599/utilities/echo.html, particularly APPLICATION USAGE and RATIONALE sections).
Quote expansions ("$1" rather than $1) to prevent them from being word-split or glob-expanded
Note also that this creates a new file rather than operating in-place. If you want something that operates in-place, tack a && mv -- "sorted-$1" "$1" on the end.
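To see the IFS= point from the notes above in isolation, a minimal sketch (the sample line is made up):
line='   indented line   '
read -r stripped <<< "$line"     # default IFS: leading/trailing whitespace is pruned
IFS= read -r kept <<< "$line"    # empty IFS: whitespace is preserved
printf '[%s]\n[%s]\n' "$stripped" "$kept"
# [indented line]
# [   indented line   ]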

while read loop always false - bash

Probably quite an easy question for you, but I've been stuck on this issue for a while.
Basically I want to perform some operation on every csv file in a folder, and I put together this simple script:
#!/bin/bash
for file in *.csv ;
do
echo $file #check
OLDIFS=$IFS
IFS=","
while read var1
do
echo $var1
done < $file
IFS=$OLDIFS
done
The csv files all follow the same format, one line with 12 comma-separated values, like:
name,1,2,3,4,5,6,7,8,9,11
It seems to me that the while condition is always false and the loop never runs.
Could you tell me where I'm wrong?
To get just the first field, you need to supply two arguments to read; the first for the first field, the second to hold the remaining values after the split.
#!/bin/bash
for file in *.csv; do
echo "$file" #check
while IFS=, read name rest; do
echo "$name"
done < "$file"
done
Something like this should work. It reads the file into an array, splitting it at every ",". It then iterates over the array and prints out each element on a new line.
#!/bin/bash
for file in *.csv
do
echo "$file" #check
IFS=',' read -a vars < "$file"
for var1 in ${vars[#]}
do
echo "$var1"
done
done
