How to loop a nested loop in bash without repetition? - bash

I am trying to write two loops that together make up one function: for an Azure deployment I need both values at once, such as the storage account name and the container name, so that I can obtain their key and store it, but I am getting repeated results.
for storage in $(cat $TMP_FILE_STORAGE | sed 's/^[^"]*"\([^"]*\)".*/\1/')
do
    echo $storage
    for container in $(cat $TMP_FILE_CONTAINER | sed 's/^[^"]*"\([^"]*\)".*/\1/')
    do
        echo $container
        continue
    done
done
This is the container JSON file:
lama baba
This is the storage JSON file:
abdelvt33cpgsa abdelvt44cpgsa
This is the output I am getting:
abdelvt33cpgsa
lama
baba
abdelvt44cpgsa
lama
baba
and the expected result should be
abdelvt33cpgsa
lama
abdelvt44cpgsa
baba

See Bash FAQ 001; you shouldn't be using for loops in the first place.
Instead, use a while loop with two separate file descriptors.
while IFS= read -r storage <&3 && IFS= read -r container <&4; do
    echo "$storage"
    echo "$container"
done 3< <(sed 's/^[^"]*"\([^"]*\)".*/\1/' < "$TMP_FILE_STORAGE") 4< <(sed 's/^[^"]*"\([^"]*\)".*/\1/' < "$TMP_FILE_CONTAINER")
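Note that the && here pairs line N of the storage input with line N of the container input, and the loop stops as soon as the shorter of the two inputs runs out.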
It appears you might be able to get rid of sed as well by splitting each input line on the double-quote character:
while IFS=\" read -r _ storage _ <&3 &&
IFS=\" read -r _ container _ <&4; do
echo "$storage"
echo "$container"
done 3< "$TMP_FILE_STORAGE" 4< "$TMP_FILE_CONTAINER"

Assuming
your files don't contain tab characters, and
they have the same number of lines,
a quick way is to paste both files and then convert the separator character (tab by default) to newlines:
$ cat A-file
A
B
C
D D
E E
$ cat 1-file
1
2
3
4 4
5 5
$ paste A-file 1-file|tr '\t' '\n'
A
1
B
2
C
3
D D
4 4
E E
5 5
Look how Stackoverflow syntax coloring makes this example look cool!

The right solution to this was to run it over an array loop.
Potato=$(cat $TMP_FILE_STORAGE)
Potato1=$(cat $TMP_FILE_CONTAINER)
eval "array=($Potato)"
eval "array2=($Potato1)"
for ((i=0; i<${#array[@]}; i++)); do
    end=$(date -d "15 minutes" '+%Y-%m-%dT%H:%MZ')
    az storage container generate-sas -n ${array2[$i]} --account-name ${array[$i]} --https-only --permissions dlrw --expiry $end -otsv
    echo "First parameter : ${array[$i]} -- second parameter : ${array2[$i]}"
done
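For reference, the same index pairing also works without eval; here is a minimal sketch, assuming each temp file holds whitespace-separated names on a single line (as in the samples above) and reusing the az command from this answer:
# Read the whitespace-separated names into two plain arrays (no eval needed).
read -ra storage_accounts < "$TMP_FILE_STORAGE"
read -ra containers < "$TMP_FILE_CONTAINER"
for ((i = 0; i < ${#storage_accounts[@]}; i++)); do
    end=$(date -d "15 minutes" '+%Y-%m-%dT%H:%MZ')
    az storage container generate-sas -n "${containers[$i]}" --account-name "${storage_accounts[$i]}" --https-only --permissions dlrw --expiry "$end" -otsv
    echo "First parameter : ${storage_accounts[$i]} -- second parameter : ${containers[$i]}"
done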

Related

Loop through table and parse multiple arguments to scripts in Bash

I am in a situation similar to this one and am having difficulty implementing that kind of solution for my case.
I have file.tsv formatted as follows:
x y
dog woof
CAT meow
loud_goose honk-honk
duck quack
with a fixed number of columns (but a variable number of rows), and I need to loop over those pairs of values, skipping the first (header) row, in a script like the following (pseudocode):
for elements in list; do
./script1 elements[1] elements[2]
./script2 elements[1] elements[2]
done
so that script* can take the arguments from the pair and run with it.
Is there a way to do it in Bash?
I was thinking I could do something like this:
list1={`awk 'NR > 1{print $1}' file.tsv`}
list2={`awk 'NR > 1{print $2}' file.tsv`}
and then to call them in the loop based on their position, but I am not sure on how.
Thanks!
Shell arrays are not multi-dimensional, so an array element cannot store two arguments for your scripts. However, since you are processing lines from file.tsv, you can iterate over each line, reading both elements at once like this:
#!/usr/bin/env sh
# Populate tab with a literal tab character (printf '\t' emits no trailing
# newline, and command substitution would strip one anyway)
tab="$(printf '\t')"
{
    # Read first line into dummy variable _ to skip the header
    read -r _
    # Iterate reading tab-delimited x and y from each line
    while IFS="$tab" read -r x y || [ -n "$x" ]; do
        ./script1 "$x" "$y"
        ./script2 "$x" "$y"
    done
} < file.tsv # from this file
You could try just a while + read loop with the -a flag and IFS.
#!/usr/bin/env bash
while IFS=$' \t' read -ra line; do
    echo ./script1 "${line[0]}" "${line[1]}"
    echo ./script2 "${line[0]}" "${line[1]}"
done < <(tail -n +2 file.tsv)
Or, without the tail:
#!/usr/bin/env bash
skip=0 start=-1
while IFS=$' \t' read -ra line; do
    if ((start++ >= skip)); then
        echo ./script1 "${line[0]}" "${line[1]}"
        echo ./script2 "${line[0]}" "${line[1]}"
    fi
done < file.tsv
Remove the echo commands once you're satisfied with the output.
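For reference, with the sample file.tsv from the question, either dry run should print:
./script1 dog woof
./script2 dog woof
./script1 CAT meow
./script2 CAT meow
./script1 loud_goose honk-honk
./script2 loud_goose honk-honk
./script1 duck quack
./script2 duck quack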

mass replace specific lines in file using bash

I have a file with the name file.txt and contents
$cat file.txt
this one
juice apple
orange pen
apple
rice
mouse
and I have another file called word.txt
$cat word.txt
ganti
buah
bukan
I want to replace lines 1, 3 and 5 in file.txt with the lines of word.txt using bash, so that the final result in file.txt becomes:
$cat file.txt
ganti
juice apple
buah
apple
bukan
mouse
How can I do this operation in bash? Thank you.
Using ed to open file.txt, read word.txt into it, shuffle the inserted lines into place, and finally save the modified file.txt:
ed -s file.txt <<EOF
0r word.txt
4d
2m5
4d
2m6
5d
w
EOF
The 0r command inserts word.txt's three lines before line 1; the m commands move lines and the d commands delete lines. Concretely: the first 4d removes "this one", 2m5 drops "buah" below "orange pen" (which the following 4d then deletes), 2m6 drops "bukan" below "rice" (which 5d then deletes), and w writes the result back.
#!/bin/bash
# the 1st argument is the word.txt file
# remaining arguments are the line numbers to replace in ascending order
# assign the patterns file to file descriptor 3
exec 3< "$1"
shift
# read the first replacement pattern
read -ru 3 replacement
current_line_number=1
# lines will be read from standard in
while read -r line; do
    # for replacement lines with a non-empty replacement str...
    if [ "$current_line_number" == "$1" -a -n "$replacement" ]; then
        echo "$replacement"
        read -ru 3 replacement # get the next pattern
        shift                  # get the next replacement line number
    else
        echo "$line"
    fi
    (( current_line_number++ ))
done
To test
diff expect <(./replace-lines.sh word.txt 1 3 5 < file.txt) && echo ok
I'm having trouble understanding exactly what you're trying to do; is this the end result you're after?
# read the 'word.txt' lines into an array called "words"
IFS=$'\n' read -d '' -r -a words < word.txt
# create a 'counter'
iter=0
# loop through the line numbers that you want to change
for i in 1 3 5
do
    # the variable "from" holds the content of line $i of 'file.txt' (i.e. line 1, 3, or 5)
    from="$(sed -n "${i}p" file.txt)"
    # the variable "to" holds the word from 'word.txt' at the current index (ganti, buah, or bukan)
    to=${words[$iter]}
    # make the 'in place' substitution
    sed -i "s/$from/$to/g" file.txt
    # increase the 'counter' by 1
    ((iter++))
done
cat file.txt
ganti
juice apple
buah
apple
bukan
mouse
Edit
If you have the line numbers you want to change in a file called "line.txt", you can adjust the code to suit:
IFS=$'\n' read -d '' -r -a words < word.txt
iter=0
while read -r line_number
do
    from="$(sed -n "${line_number}p" file.txt)"
    to=${words[$iter]}
    sed -i "s/$from/$to/g" file.txt
    ((iter++))
done < line.txt
cat file.txt
ganti
juice apple
buah
apple
bukan
mouse
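A small variation on the same idea (my sketch, not part of the answer above): since the target line numbers are known, the substitution can be anchored to the line number itself rather than to the line's text, which avoids also rewriting other lines that happen to contain the same string (apple, for instance, appears twice in file.txt). It reuses the words array read from word.txt above:
iter=0
for i in 1 3 5
do
    # replace the whole of line $i with the next word (assumes the words
    # contain no sed-special characters such as / or &)
    sed -i "${i}s/.*/${words[$iter]}/" file.txt
    ((iter++))
done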
Another ed approach, with Process substitution.
#!/usr/bin/env bash
ed -s file.txt < <(
printf '%s\n' '1s|^|1s/.*/|' '2s|^|3s/.*/|' '3s|^|5s/.*/|' ',s|$|/|' '$a' ,p Q . ,p Q |
ed -s word.txt
)
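For comparison, the same replacement can also be sketched as a single awk pass (my sketch, not from any of the answers above): the first file read is word.txt, whose lines are stored in an array, and the next entry is swapped in whenever file.txt's line number is 1, 3 or 5:
awk 'NR==FNR { words[NR] = $0; next }                        # load word.txt into an array
     FNR==1 || FNR==3 || FNR==5 { print words[++i]; next }   # swap in the next word
     { print }                                               # other lines unchanged
' word.txt file.txt > file.tmp && mv file.tmp file.txt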

Read lines from a file and output with specific formatting with Bash

In A.csv, there are
1
2
3
4
How should I read this file and create variables $B and $C so that:
echo $B
echo $C
returns:
1 2 3 4
1,2,3,4
So far I am trying:
cat A.csv | while read A;
do
echo $A
done
It only returns
1
2
3
4
Assuming bash 4.x, the following is efficient, robust, and native:
# Read each line of A.csv into a separate element of the array lines
readarray -t lines <A.csv
# Generate a string B with a space after each item in the array
printf -v B '%s ' "${lines[@]}"
# Prune the trailing space from that string
B=${B% }
# Generate a string C with a comma after each item in the array
printf -v C '%s,' "${lines[@]}"
# Prune the trailing comma from that string
C=${C%,}
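With A.csv from the question, those strings then print as requested:
$ echo "$B"
1 2 3 4
$ echo "$C"
1,2,3,4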
As @Cyrus said,
B=$(cat A.csv)
echo $B
Will output:
1 2 3 4
This is because, when the variable is not wrapped in quotes, bash word-splits its value, so echo receives the items as separate arguments and joins them with single spaces. This is dangerous if A.csv contains any characters which might be affected by bash glob expansion, but should be fine if you are just reading simple strings.
If you are reading simple strings with no spaces in any of the elements, you can also get your desired result for $C by using:
echo $B | tr ' ' ','
This will output:
1,2,3,4
If lines in A.csv may contain bash special characters or spaces then we return to the loop.
For why I've formatted the file reading loop as I have, refer to: Looping through the content of a file in Bash?
B=''
C=''
while read -r -u 7 curr_line; do
    if [ "$B$C" == "" ]; then
        B="$curr_line"
        C="$curr_line"
    else
        B="$B $curr_line"
        C="$C,$curr_line"
    fi
done 7<A.csv
echo "$B"
echo "$C"
Will construct the two variables as you desire using a loop through the file contents and should prevent against unwanted globbing and splitting.
B=$(cat A.csv)
echo $B
Output:
1 2 3 4
With quotes:
echo "$B"
Output:
1
2
3
4
I would read the file into a bash array:
mapfile -t array < A.csv
Then, with various join characters
b="${array[*]}" # space is the default
echo "$b"
c=$( IFS=","; echo "${array[*]}" )
echo "$c"
Or, you can use paste to join all the lines with a specified separator:
b=$( paste -d" " -s < A.csv )
c=$( paste -d"," -s < A.csv )
Try this:
cat A.csv | while read -r A;
do
    printf '%s ' "$A"
done
Regards!
Try this (simpler one):
b=$(tr '\n' ' ' < file)
c=$(tr '\n' ',' < file)
You don't have to read the file line by line for that. If the file came from Windows, make sure you run dos2unix on it first (to remove the \r characters).
Note: dos2unix modifies the file in place, so keep a copy of the original.
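One more caveat worth noting (my addition, not part of the answer): the file's final newline is translated too, so c ends with a trailing comma (and b with a trailing space). They can be trimmed afterwards:
b=$(tr '\n' ' ' < file); b=${b% }    # drop the space produced by the final newline
c=$(tr '\n' ',' < file); c=${c%,}    # drop the comma produced by the final newline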

Name (and set) variables in current shell, based on line input data

I have a SQL*Plus output written into a text file in the following format:
3459906| |2|X1|WAS1| Output1
334596| |2|X1|WAS2| Output1
3495792| |1|X1|WAS1| Output1
687954| |1|X1|WAS2| Output1
I need a shell script to fetch the count at the beginning of each line, based on the text that follows it.
For example, if the text is |2|X1|WAS1|, then 3459906 should be assigned to a variable x1was12, and if the text is |2|X1|WAS2|, then 334596 should be assigned to a variable x1was22.
I tried writing a for loop and if condition to pass on the counts, but was unsuccessful:
export filename1='file1.dat'
while read -r line ; do
if [[ grep -i "*|2|X1|WAS1| Output1*" | wc -l -eq 0 ]] ; then
export xwas12=sed -n ${line}p $filename1 | \
sed 's/[^0-9]*//g' | sed 's/..$//'
elif [[ grep -i "*|2|X1|WAS2| Output1*" | wc -l -eq 0 ]] ; then
export x1was22=sed -n ${line}p $filename1 | \
sed 's/[^0-9]*//g' | sed 's/..$//'
elif [[ grep -i "*|1|X1|WAS1| Output1*" | wc -l -eq 0 ]] ; then
export x1was11=sed -n ${line}p $filename1 | \
sed 's/[^0-9]*//g' | sed 's/..$//'
elif [[ grep -i "*|1|X1|WAS2| Output1*" | wc -l -eq 0 ]]
export x1was21=sed -n ${line}p $filename1 | \
sed 's/[^0-9]*//g' | sed 's/..$//'
fi
done < "$filename1"
echo '$x1was12' > output.txt
echo '$x1was22' >> output.txt
echo '$x1was11' >> output.txt
echo '$x1was21' >> output.txt
What I was trying to do was:
Go to the first line in the file
-> Search for the text and if found then assign the sed output to the variable
Then go to the second line of the file
-> Search for the texts in the if commands and assign the sed output to another variable.
and the same goes for the remaining lines.
while IFS='|' read -r count _ n x was _; do
    # remove spaces from all variables
    count=${count// /}; n=${n// /}; x=${x// /}; was=${was// /}
    varname="${x}${was}${n}"
    printf -v "${varname,,}" %s "$count"
done <<'EOF'
3459906| |2|X1|WAS1| Output1
334596| |2|X1|WAS2| Output1
3495792| |1|X1|WAS1| Output1
687954| |1|X1|WAS2| Output1
EOF
With the above executed:
$ echo "$x1was12"
3459906
Of course, the redirection from a heredoc could be replaced with a redirection from a file as well.
How does this work? Let's break it down:
Every time IFS='|' read -r count _ n x was _ is run, it reads a single line, separating it by |s, putting the first column into count, discarding the second by assigning it to _, reading the third into n, the fourth into x, the fifth into was, and the sixth and all following content into _. This practice is discussed in detail in BashFAQ #1.
count=${count// /} is a parameter expansion which prunes spaces from the variable count, by replacing all such spaces with empty strings. See also BashFAQ #100.
"${varname,,}" is another parameter expansion, this one converting a variable's contents to all-lowercase. (This requires bash 4.0; in prior versions, consider "$(tr '[:upper:]' '[:lower:]' <<<"$varname") as a less-efficient alternative).
printf -v "$varname" %s "value" is a mechanism for doing an indirect assignment to the variable named in the variable varname.
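As a tiny standalone demo of that last idiom (the variable name here is just an example built from the sample data above):
varname="X1WAS12"
printf -v "${varname,,}" %s "3459906"   # indirect assignment to the lowercased name
echo "$x1was12"                         # prints 3459906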
If not for the variable names, the whole thing could be done with two commands:
cut -d '|' -f1 file1.dat | tr -d ' ' > output.txt
The variable names make it more interesting. Two bash methods follow, plus a POSIX method...
The following bash code ought to do what the OP's sample code was
meant to do:
declare $(while IFS='|' read a b c d e f ; do
echo $a 1>&2 ; echo x1${e,,}$c=${a/ /}
done < file1.dat 2> output.txt )
Notes:
The bash shell is needed for ${e,,} (which turns "WAS" into "was"), ${a/ /} (which removes a leading space that might be in $a), and declare.
The while loop parses file1.dat and outputs a bunch of variable assignments. Without the declare this code:
while IFS='|' read a b c d e f ; do
echo x1${e,,}$c=${a/ /} ;
done < file1.dat
Outputs:
x1was12=3459906
x1was22=334596
x1was11=3495792
x1was21=687954
The while loop outputs to two separate streams: stdout (for the declare), and stderr (using the 1>&2 and 2> redirects for
output.txt).
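As a minimal illustration of that stream split (my example, not from the answer), the assignments captured from stdout feed declare while the stderr lines land in the file:
declare $( { echo "goes to output.txt" 1>&2; echo demo_var=42; } 2> output.txt )
echo "$demo_var"   # prints 42
cat output.txt     # prints: goes to output.txt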
Using bash associative arrays:
declare -A x1was="( $(while IFS='|' read a b c d e f ; do
echo $a 1>&2 ; echo [${e/WAS/}$c]=${a/ /}
done < file1.dat 2> output.txt ) )"
In which case the variable names require brackets:
echo ${x1was[21]}
687954
POSIX shell code (tested using dash):
eval $(while IFS='|' read a b c d e f ; do
echo $a 1>&2; echo x1$(echo $e | tr '[A-Z]' '[a-z]')$c=$(echo $a)
done < file1.dat 2> output.txt )
eval should not be used if there's any doubt about what's in file1.dat. The above code assumes the data in file1.dat is
uniformly dependable.

omit commas from echo output in bash

Hi I am reading in a line from a .csv file and using
echo $line
to print the cell contents of that record to the screen, however the commas are also printed i.e.
1,2,3,a,b,c
where I actually want
1 2 3 a b c
Checking the echo man page, there isn't an option to omit commas, so does anyone have a nifty bash trick to do this?
Use bash replacement:
$ echo "${line//,/ }"
1 2 3 a b c
Note the importance of double slash:
$ echo "${line/,/ }"
1 2,3,a,b,c
That is, single one would just replace the first occurrence.
For completeness, check other ways to do it:
$ sed 's/,/ /g' <<< "$line"
1 2 3 a b c
$ tr ',' ' ' <<< "$line"
1 2 3 a b c
$ awk '{gsub(",", " ")}1' <<< "$line"
1 2 3 a b c
If you need something more POSIX-compliant due to portability concerns, echo "$line" | tr ',' ' ' works too.
If you need to use the field values separately, it can be useful to set the IFS built-in bash variable.
You can set it to "," to specify the field separator for the read command used to read from the .csv file.
ORIG_IFS="$IFS"
IFS=","
while read -r f1 f2 f3 f4 f5 f6
do
    echo "Fields of the record as separate variables:"
    echo "f1: $f1"
    echo "f2: $f2"
    echo "f3: $f3"
    echo "f4: $f4"
    echo "f5: $f5"
    echo "f6: $f6"
done < test.csv
IFS="$ORIG_IFS"
This way you get one variable for each field of the line/record, which you can use however you prefer.
NOTE: to avoid unexpected behaviour, remember to restore the original value of the IFS variable afterwards.
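As an alternative sketch (my addition, not part of the original answer), IFS can also be set only for the duration of the read command itself, which leaves the shell's global IFS untouched and makes the save/restore unnecessary:
while IFS=, read -r f1 f2 f3 f4 f5 f6
do
    echo "f1: $f1 -- f2: $f2 -- f3: $f3 -- f4: $f4 -- f5: $f5 -- f6: $f6"
done < test.csv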
