Bash - readarray contains only one element

I'm writing this script to count some variables from an input file. I can't figure out why it is not counting the elements in the array (should be 500) but only counts 1.
#initializing variables
timeout=5
headerFile="lab06.output"
dataFile="fortune500.tsv"
dataURL="http://www.tech.mtu.edu/~toarney/sat3310/lab09/"
dataPath="/home/pjvaglic/Documents/labs/lab06/data/"
curlOptions="--silent --fail --connect-timeout $timeout"
#creating the array
declare -a myWebsitearray #=('cut -d '\t' -f3 "dataPath$dataFile"')
#obtaining the data file
wget $dataURL$dataFile -O $dataPath$dataFile
#getting rid of the crap from dos
sed -e "s/^m//" $dataPath$dataFile | readarray -t $myWebsitesarray
readarray -t myWebsitesarray < <(cut -d, -f3 $dataPath$dataFile)
myWebsitesarray=("${#myWebsitesarray[@]:1}")
#printf '%s\n' "${myWebsitesarray2[@]}"
websitesCount=${#myWebsitesarray[*]}
echo $websitesCount

You are overwriting your array with the count of elements in this line:
myWebsitesarray=("${#myWebsitesarray[@]:1}")
Remove the hash sign:
myWebsitesarray=("${myWebsitesarray[@]:1}")
Also, @chepner's suggestions are good to follow.
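Putting that fix together with the count, a minimal sketch of just the affected lines (assuming the rest of the script is unchanged, and using a tab delimiter since the data file is a .tsv):
readarray -t myWebsitesarray < <(cut -d$'\t' -f3 "$dataPath$dataFile")
myWebsitesarray=("${myWebsitesarray[@]:1}")   # slice off the header row; no leading #
websitesCount=${#myWebsitesarray[@]}          # here the leading # is what produces the count
echo "$websitesCount"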

Related

Bash script with long command as a concatenated string

Here is a sample bash script:
#!/bin/bash
array[0]="google.com"
array[1]="yahoo.com"
array[2]="bing.com"
pasteCommand="/usr/bin/paste -d'|'"
for val in "${array[@]}"; do
    pasteCommand="${pasteCommand} <(echo \$(/usr/bin/dig -t A +short $val)) "
done
output=`$pasteCommand`
echo "$output"
Somehow it shows an error:
/usr/bin/paste: invalid option -- 't'
Try '/usr/bin/paste --help' for more information.
How can I fix it so that it works fine?
//EDIT:
The expected output is the result of the 3 dig executions in one string, delimited with the | character. I am mainly using paste this way because it lets the 3 dig commands run in parallel, and the delimiter lets me parse the output later while still knowing which domain each dig result belongs to (e.g. google.com for the first result).
First, you should read BashFAQ/050 to understand why your approach failed. In short, do not put complex commands inside variables.
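To see the failure mode in miniature (a hypothetical toy command, not the original script): once quotes or <( ) are stored in a string, expanding the variable only word-splits the text; nothing is re-parsed as shell syntax.
cmd="paste -d'|' <(echo a) <(echo b)"
$cmd
# paste receives the literal words:  -d'|'  <(echo  a)  <(echo  b)
# the quotes become data and <( ) is never treated as a process substitution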
A simple bash script to produce the intended output could be something like this:
#!/bin/bash
sites=(google.com yahoo.com bing.com)
iplist=
for site in "${sites[@]}"; do
    # Capture the command's output into the ips variable
    ips=$(/usr/bin/dig -t A +short "$site")
    # Prepend a '|' character, replace each newline character in ips
    # with a space character, and append the resulting string to iplist
    iplist+=\|${ips//$'\n'/' '}
done
iplist=${iplist:1} # Remove the leading '|' character
echo "$iplist"
outputs
172.217.18.14|98.137.246.7 72.30.35.9 98.138.219.231 98.137.246.8 72.30.35.10 98.138.219.232|13.107.21.200 204.79.197.200
It's easier to answer a question when you specify the input and desired output in your question, then show your attempt and explain why it doesn't work.
What I want is https://i.postimg.cc/13dsXvg7/required.png
$ array=("google.com" "yahoo.com" "bing.com")
$ printf "%s\n" "${array[@]}" | xargs -n1 sh -c '/usr/bin/dig -t A +short "$1" | paste -sd" "' _ | paste -sd '|'
172.217.16.14|72.30.35.9 98.138.219.231 98.137.246.7 98.137.246.8 72.30.35.10 98.138.219.232|204.79.197.200 13.107.21.200
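The same pipeline spread over several lines with comments (a restatement only, no change in behavior intended):
array=("google.com" "yahoo.com" "bing.com")
printf "%s\n" "${array[@]}" |                            # one domain per line
  xargs -n1 sh -c \
    '/usr/bin/dig -t A +short "$1" | paste -sd" "' _ |   # per domain: dig, then join its answers with spaces
  paste -sd '|'                                          # join the per-domain lines with |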
I might try a recursive function like the following instead.
array=(google.com yahoo.com bing.com)
paster () {
    dn=$1
    shift
    if [ "$#" -eq 0 ]; then
        dig -t A +short "$dn"
    else
        paster "$@" | paste -d "|" <(dig -t A +short "$dn") -
    fi
}
output=$(paster "${array[@]}")
echo "$output"
Now finally clear with expected output:
domains_arr=("google.com" "yahoo.com" "bing.com")
out_arr=()
for domain in "${domains_arr[@]}"
do
    mapfile -t ips < <(dig -tA +short "$domain")
    IFS=' '
    # Join the ips array into a string with space as delimiter
    # and add it to the out_arr
    out_arr+=("${ips[*]}")
done
IFS='|'
# Join the out_arr array into a string with | as delimiter
echo "${out_arr[*]}"
If the array is big (and not just 3 sites) you may benefit from parallelization:
array=("google.com" "yahoo.com" "bing.com")
parallel -k 'echo $(/usr/bin/dig -t A +short {})' ::: "${array[@]}" |
paste -sd '|'

BASH Iterating into an array from a file [duplicate]

This question already has answers here:
Creating an array from a text file in Bash
(7 answers)
Closed 2 years ago.
If I have a document and want to iterate the second column of the document into an array, is there a simple way to do this? At present I am trying:
cat file.txt | awk -F'\t' '{print $2}' | sort -u
This lists all the unique items in the second column to standard out.
The question is: how do I now add these items to an array, considering some of these items contain whitespace?
I have been trying to declare an array
arr=()
and then tried
${arr}<<cat file.txt | awk -F'\t' '{print $2}' | sort -u
Bash 4+ has mapfile, aka readarray, plus process substitution.
mapfile -t array < <(awk -F'\t' '{print $2}' file.txt | sort -u)
If you don't have bash4+
while IFS= read -r line; do
    array+=("$line")
done < <(awk -F'\t' '{print $2}' file.txt | sort -u)
To see the structure in the array
declare -p array
By default read strips leading and trailing whitespace, so to work around that you need to use IFS= to preserve the line as it was read.
The -t option of mapfile removes a trailing DELIM from each line read (newline by default).
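A quick way to see both effects, using a hypothetical two-line file (output shown in the comments):
printf '  foo\nbar\n' > /tmp/demo.txt           # hypothetical sample file
mapfile -t trimmed   < /tmp/demo.txt
mapfile    untrimmed < /tmp/demo.txt
declare -p trimmed untrimmed
# declare -a trimmed=([0]="  foo" [1]="bar")
# declare -a untrimmed=([0]=$'  foo\n' [1]=$'bar\n')
read -r stripped       < /tmp/demo.txt          # read strips the leading whitespace ...
IFS= read -r preserved < /tmp/demo.txt          # ... unless IFS is emptied for the call
declare -p stripped preserved
# declare -- stripped="foo"
# declare -- preserved="  foo"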
Bash 3 has read -a to read IFS delimited fields from a file stream to an array.
The -d '' switch tells read that the record delimiter is null, so it reads fields until it reaches the end of the file stream (EOF) or a null character.
declare -a my_array
IFS=$'\n' read -r -d '' -a my_array < <(cut -f2 < file.txt | sort -u)
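For example, with a hypothetical tab-separated file (output shown in the comment):
printf 'a\tx\nb\ty y\nc\tx\n' > /tmp/file.txt   # hypothetical sample: a duplicate value and a value with a space
declare -a my_array
IFS=$'\n' read -r -d '' -a my_array < <(cut -f2 < /tmp/file.txt | sort -u)
# read exits non-zero when it hits EOF without a null byte, but my_array is still filled
declare -p my_array
# declare -a my_array=([0]="x" [1]="y y")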

Bash: Split two strings directly into associative array

I have two strings of same number of substrings divided by a delimiter.
I need to create key-value pairs from substrings.
Short example:
Input:
firstString='00011010:00011101:00100001'
secondString='H:K:O'
delimiter=':'
Desired result:
${translateMap['00011010']} -> 'H'
${translateMap['00011101']} -> 'K'
${translateMap['00100001']} -> 'O'
So, I wrote:
IFS="$delimiter" read -ra fromArray <<< "$firstString"
IFS="$delimiter" read -ra toArray <<< "$secondString"
declare -A translateMap
curIndex=0
for from in "${fromArray[@]}"; do
    translateMap["$from"]="${toArray[curIndex]}"
    ((curIndex++))
done
Is there any way to create the associative array directly from 2 strings without the unneeded arrays and loop? Something like:
IFS="$delimiter" read -rA translateMap["$(read -ra <<< "$firstString")"] <<< "$secondString"
Is it possible?
A (somewhat convoluted) variation on @accdias's answer of assigning the values via the declare -A command, but it will need a bit of explanation for each step ...
First we need to break the 2 variables into separate lines for each item:
$ echo "${firstString}" | tr "${delimiter}" '\n'
00011010
00011101
00100001
$ echo "${secondString}" | tr "${delimiter}" '\n'
H
K
O
What's nice about this is that we can now process these 2 sets of key/value pairs as separate files.
NOTE: For the rest of this discussion I'm going to replace "${delimiter}" with ':' to make this a tad bit (but not much) less convoluted.
Next we make use of the paste command to merge our 2 'files' into a single file; we'll also designate ']' as the delimiter between key/value mappings:
$ paste -d ']' <(echo "${firstString}" | tr ':' '\n') <(echo "${secondString}" | tr ':' '\n')
00011010]H
00011101]K
00100001]O
We'll now run these results through a couple sed patterns to build our array assignments:
$ paste -d ']' <(echo "${firstString}" | tr ':' '\n') <(echo "${secondString}" | tr ':' '\n') | sed 's/^/[/g;s/]/]=/g'
[00011010]=H
[00011101]=K
[00100001]=O
What we'd like to do now is use this output in the typeset -A command but unfortunately we need to build the entire command and then eval it:
$ evalstring="typeset -A kv=( "$(paste -d ']' <(echo "${firstString}" | tr ':' '\n') <(echo "${secondString}" | tr ':' '\n') | sed 's/^/[/g;s/]/]=/g')" )"
$ echo "$evalstring"
typeset -A kv=( [00011010]=H
[00011101]=K
[00100001]=O )
If we want to remove the carriage returns and put on a single line we append another tr at the output from the sed command:
$ evalstring="typeset -A kv=( "$(paste -d ']' <(echo "${firstString}" | tr ':' '\n') <(echo "${secondString}" | tr ':' '\n') | sed 's/^/[/g;s/]/]=/g' | tr '\n' ' ')" )"
$ echo "${evalstring}"
typeset -A kv=( [00011010]=H [00011101]=K [00100001]=O )
At this point we can eval our auto-generated typeset -A command:
$ eval "${evalstring}"
And now loop through our array displaying the key/value pairs:
$ for i in ${!kv[@]}; do echo "kv[${i}] = ${kv[${i}]}"; done
kv[00011010] = H
kv[00100001] = O
kv[00011101] = K
Hey, I did say this would be a bit convoluted! :-)
It is probably not what you expect, but this works:
key_string="A:B:C:D"
val_string="1:2:3:4"
declare -A map
while [ -n "$key_string" ] && [ -n "$val_string" ]; do
    IFS=: read -r key key_string <<<"$key_string"
    IFS=: read -r val val_string <<<"$val_string"
    map[$key]="$val"
done
for key in "${!map[@]}"; do echo "$key => ${map[$key]}"; done
It works by letting read split off the first field and write the remainder back into the same string variable.
The downside of this method is that it consumes (destroys) the original strings. The while loop keeps running as long as both strings have a non-zero length.
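If the originals are still needed afterwards, one workaround (a sketch, assuming working copies are acceptable) is to loop over copies instead:
declare -A map
keys_rest=$key_string
vals_rest=$val_string
while [ -n "$keys_rest" ] && [ -n "$vals_rest" ]; do
    IFS=: read -r key keys_rest <<<"$keys_rest"
    IFS=: read -r val vals_rest <<<"$vals_rest"
    map[$key]=$val
done
# $key_string and $val_string are left untouched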
Besides the pure-bash approach above, you could use any command to generate the associative array. See How do I populate a bash associative array with command output?
This generally looks like:
declare -A map="( $( magic_command ) )"
where the magic_command generates an output like
[key1]=val1
[key2]=val2
[key3]=val3
In this case we use the command:
paste -d "" <(echo "[${key_string//:/]=$'\n'[}]=") \
<(echo "${val_string//:/$'\n'}")
where we use bash substitution to replace the delimiter with a newline. However, any other magic_command might do. For completion:
key_string="A:B:C:D"
val_string="1:2:3:4"
declare -A map="( $(paste -d "" <(echo "[${key_string//:/]=$'\n'[}]=") \
<(echo "${val_string//:/$'\n'}")) )"
for key in "${!map[@]}"; do echo "$key => ${map[$key]}"; done
Both examples generate the following output
D => 4
C => 3
B => 2
A => 1
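For reference, here is what the paste/substitution command prints on its own with these example strings, i.e. the [key]=value list that declare -A then evaluates:
key_string="A:B:C:D"
val_string="1:2:3:4"
paste -d "" <(echo "[${key_string//:/]=$'\n'[}]=") \
            <(echo "${val_string//:/$'\n'}")
# [A]=1
# [B]=2
# [C]=3
# [D]=4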
Not exactly the answer for what you asked but at least it is shorter:
key='00011010:00011101:00100001'
value='H:K:O'
ifs=':'
IFS="$ifs" read -ra keys <<< "$key"
IFS="$ifs" read -ra values <<< "$value"
declare -A kv
for ((i=0; i<${#keys[*]}; i++)); do
    kv[${keys[i]}]=${values[i]}
done
As a side note, you can initialize an associative array in one step with:
declare -A kv=([key1]=value1 [key2]=value2 [keyn]=valuen)
But I don't know how to use that in your case.
If the values in your strings won't contain spaces, I would suggest this approach:
firstString='00011010:00011101:00100001'
secondString='H:K:O'
delimiter=':'
declare -A translateMap
firstArray=( ${firstString//$delimiter/' '} )
secondArray=( ${secondString//$delimiter/' '} )
for i in ${!firstArray[@]}; {
    translateMap[${firstArray[$i]}]=${secondArray[$i]}
}
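A quick check of the result (the element order shown by declare -p may differ):
declare -p translateMap
# declare -A translateMap=([00100001]="O" [00011101]="K" [00011010]="H" )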

Bash, loop unexpected stop

I'm having problems with this last part of my bash script. It receives input from 500 web addresses and is supposed to fetch the server information from each. It works for a bit but then just stops at around the 45th element. Any thoughts on my loop at the end?
#initializing variables
timeout=5
headerFile="lab06.output"
dataFile="fortune500.tsv"
dataURL="http://www.tech.mtu.edu/~toarney/sat3310/lab09/"
dataPath="/home/pjvaglic/Documents/labs/lab06/data/"
curlOptions="--fail --connect-timeout $timeout"
#creating the array
declare -a myWebsitearray
#obtaining the data file
wget $dataURL$dataFile -O $dataPath$dataFile
#getting rid of the crap from dos
sed -n "s/^m//" $dataPath$dataFile
readarray -t myWebsitesarray < <(cut -f3 -d$'\t' $dataPath$dataFile)
myWebsitesarray=("${myWebsitesarray[#]:1}")
websitesCount=${#myWebsitesarray[*]}
echo "There are $websitesCount websites in $dataPath$dataFile"
#echo -e ${myWebsitesarray[200]}
#printing each line in the array
for line in ${myWebsitesarray[*]}
do
echo "$line"
done
#run each website URL and gather header information
for line in "${myWebsitearray[@]}"
do
((count++))
echo -e "\\rPlease wait... $count of $websitesCount"
curl --head "$curlOptions" "$line" | awk '/Server: / {print $2 }' >> $dataPath$headerFile
done
#display results
echo "Results: "
sort $dataPath$headerFile | uniq -c | sort -n
It would certainly help if you actually passed the --connect-timeout option to curl. As written, you are currently passing the single argument --fail --connect-timeout $timeout rather than 3 distinct arguments --fail, --connect-timeout, and $timeout. This is one instance where you should not quote the variable. IOW, use:
curl --head $curlOptions "$line"
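Alternatively (a sketch, not part of the original answer): store the options in an array so each one stays a separate word even when quoted:
curlOptions=(--fail --connect-timeout "$timeout")
curl --head "${curlOptions[@]}" "$line" | awk '/Server: / {print $2}' >> "$dataPath$headerFile"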

Unix file pattern issue: append changing value of variable pattern to copies of matching line

I have a file with contents:
abc|r=1,f=2,c=2
abc|r=1,f=2,c=2;r=3,f=4,c=8
I want a result like below:
abc|r=1,f=2,c=2|1
abc|r=1,f=2,c=2;r=3,f=4,c=8|1
abc|r=1,f=2,c=2;r=3,f=4,c=8|3
The third column value is the r value. A new line is inserted for each occurrence of r.
I have tried with:
for i in `cat $xxxx.txt`
do
    #echo $i
    live=$(echo $i | awk -F " " '{print $1}')
    home=$(echo $i | awk -F " " '{print $2}')
    echo $live
done
but it is not working properly. I am a beginner with sed/awk and not sure how I can use them. Can someone please help with this?
awk to the rescue!
$ awk -F'[,;|]' '{c=0;
for(i=2;i<=NF;i++)
if(match($i,/^r=/)) a[c++]=substr($i,RSTART+2);
delim=substr($0,length($0))=="|"?"":"|";
for(i=0;i<c;i++) print $0 delim a[i]}' file
abc|r=1,f=2,c=2|1
abc|r=1,f=2,c=2;r=3,f=4,c=8|1
abc|r=1,f=2,c=2;r=3,f=4,c=8|3
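The same awk program restated with comments (a readability sketch only, no change in logic intended):
awk -F'[,;|]' '{
    c = 0
    for (i = 2; i <= NF; i++)                # skip the first field, scan the rest
        if (match($i, /^r=/))                # fields that start with r= ...
            a[c++] = substr($i, RSTART+2)    # ... contribute the value after "r="
    delim = substr($0, length($0)) == "|" ? "" : "|"   # avoid doubling | if the line already ends with one
    for (i = 0; i < c; i++)
        print $0 delim a[i]                  # print one copy of the line per collected r value
}' file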
Use an inner routine (made up of GNU grep, sed, and tr) to compile a second more elaborate sed command, the output of which needs further cleanup with more sed. Call the input file "foo".
sed -n $(grep -no 'r=[0-9]*' foo | \
sed 's/^[0-9]*/&s#.*#\&/;s/:r=/|/;s/.*/&#p;/' | \
tr -d '\n') foo | \
sed 's/|[0-9|]*|/|/'
Output:
abc|r=1,f=2,c=2|1
abc|r=1,f=2,c=2;r=3,f=4,c=8|1
abc|r=1,f=2,c=2;r=3,f=4,c=8|3
Looking at the inner sed code:
grep -no 'r=[0-9]*' foo | \
sed 's/^[0-9]*/&s#.*#\&/;s/:r=/|/;s/.*/&#p;/' | \
tr -d '\n'
Its purpose is to parse foo on the fly (when foo changes, so will the output), and in this instance come up with:
1s#.*#&|1#p;2s#.*#&|1#p;2s#.*#&|3#p;
Which is almost perfect, but it leaves in old data on the last line:
sed -n '1s#.*#&|1#p;2s#.*#&|1#p;2s#.*#&|3#p;' foo
abc|r=1,f=2,c=2|1
abc|r=1,f=2,c=2;r=3,f=4,c=8|1
abc|r=1,f=2,c=2;r=3,f=4,c=8|1|3
...and that stale |1 is what the final sed 's/|[0-9|]*|/|/' removes.
Here is a pure bash solution. I wouldn't recommend actually using this, but it might help you understand better how to work with files in bash.
# Iterate over each line, splitting into three fields
# using | as the delimiter. (f3 is only there to make
# sure a trailing | is not included in the value of f2)
while IFS="|" read -r f1 f2 f3; do
# Create an array of variable groups from $f2, using ;
# as the delimiter
IFS=";" read -a groups <<< "$f2"
for group in "${groups[#]}"; do
# Get each variable from the group separately
# by splitting on ,
IFS=, read -a vars <<< "$group"
for var in "${vars[#]}"; do
# Split each assignment on =, create
# the variable for real, and quit once we
# have found r
IFS== read name value <<< "$var"
declare "$name=$value"
[[ $name == r ]] && break
done
# Output the desired line for the current value of r
printf '%s|%s|%s\n' "$f1" "$f2" "$r"
done
done < $xxxx.txt
Changes for ksh:
read -A instead of read -a.
typeset instead of declare.
If <<< is a problem, you can use a here document instead. For example:
IFS=";" read -A groups <<EOF
$f2
EOF
