Is this bash for loop statement correct? - bash

Here's the code:
totLines=$(wc -l < extractedips.txt)
for((a=0;a!=$totLines;a++))
{
head -n$a extractedips.txt | nslookup >> ip_extracted.txt
}
I'm not sure what I'm doing wrong.

Yes! Despite what people are saying, this is a valid bash for loop!
Note, however, that it's a bash extension. It's not a valid sh for loop. You cannot use this form of loop in e.g. a shell script declared with #!/bin/sh or run with sh yourscript.
The thing that doesn't work is your for loop contents.
You're trying to get the n'th line, but head -n 42 gets the first 42 lines, not line number 42.
You're using 0-based indexing, while head is 1-based.
You're piping to nslookup, but nslookup expects an argument and not stdin.
The shortest fix to your problem is:
totLines=$(wc -l < extractedips.txt)
for ((a=1; a<=totLines; a++)); do
nslookup "$(head -n "$a" extractedips.txt | tail -n 1)" >> ip_extracted.txt
done
However, the more efficient and canonical way of doing it is with a while read loop:
while IFS= read -r line
do
nslookup "$line"
done < extractedips.txt > ip_extracted.txt
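A quick way to sanity-check the while-read pattern without doing any DNS lookups is to substitute printf for nslookup (a sketch; the /tmp path is just for the demo):

```shell
# Demo of the while-read pattern, with printf standing in for nslookup.
printf '127.0.0.1\n192.168.0.1\n' > /tmp/extractedips_demo.txt
while IFS= read -r line; do
    printf 'lookup: %s\n' "$line"
done < /tmp/extractedips_demo.txt
# prints:
# lookup: 127.0.0.1
# lookup: 192.168.0.1
```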

You should use do and done instead of curly braces.
Like this:
totLines=$(wc -l < extractedips.txt)
for ((a=0; a!=totLines; a++)); do
head -n "$a" extractedips.txt | nslookup >> ip_extracted.txt
done
However, this code will do some weird stuff... Are you trying to pass the file line by line into nslookup?
What about this?
nslookup < extractedips.txt > ip_extracted.txt
Or you might want this:
while read -r line; do
nslookup "$line" >> ip_extracted.txt
done < extractedips.txt

Looks like you need something like this:
for i in $(cat extractedips.txt)
do
echo $i | nslookup >> ip_extracted.txt
done
You don't need your counter variable; have a look here for a better understanding.
Here's why I think your statement is wrong (or at least not what you want) and why my answer doesn't deserve a downvote :)
This statement will pipe the first $a lines of extractedips.txt into nslookup:
head -n$a extractedips.txt | nslookup >> ip_extracted.txt
First up, you have a file with IPs:
cat extractedips.txt
127.0.0.1
192.168.0.1
173.194.44.88
Now if you do
for a in 1 2 3; do
head -n "$a" extractedips.txt
done
the script will output
127.0.0.1
127.0.0.1
192.168.0.1
127.0.0.1
192.168.0.1
173.194.44.88
But I guess you need a list with all IPs only once. If you don't want duplicate IPs, you can remove them with sort -u, which gives you a sorted list with duplicates removed (uniq -u would instead drop every line that has a duplicate entirely):
for a in $(sort -u extractedips.txt)
do
echo $a | nslookup >> ip_extracted.txt
done
Update
As Aleks commented, this will not work if the text file contains a '*' character: the shell will expand it to all files in the current directory and echo the filenames. I didn't know that :/
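The pitfall can be reproduced in isolation (a sketch using a throwaway scratch directory):

```shell
# Unquoted expansion of a word containing '*' triggers filename globbing.
cd "$(mktemp -d)"   # empty scratch directory
touch file1 file2
line='*'
echo $line      # unquoted: the shell expands * into "file1 file2"
echo "$line"    # quoted: prints the literal *
```

This is why the while-read loops in the other answers, which quote "$line", are the safer pattern.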

Related

Bash, loop unexpected stop

I'm having problems with this last part of my bash script. It receives input from 500 web addresses and is supposed to fetch the server information from each. It works for a while but then just stops at around the 45th element. Any thoughts about my loop at the end?
#initializing variables
timeout=5
headerFile="lab06.output"
dataFile="fortune500.tsv"
dataURL="http://www.tech.mtu.edu/~toarney/sat3310/lab09/"
dataPath="/home/pjvaglic/Documents/labs/lab06/data/"
curlOptions="--fail --connect-timeout $timeout"
#creating the array
declare -a myWebsitearray
#obtaining the data file
wget $dataURL$dataFile -O $dataPath$dataFile
#getting rid of the crap from dos
sed -n "s/^m//" $dataPath$dataFile
readarray -t myWebsitesarray < <(cut -f3 -d$'\t' $dataPath$dataFile)
myWebsitesarray=("${myWebsitesarray[@]:1}")
websitesCount=${#myWebsitesarray[*]}
echo "There are $websitesCount websites in $dataPath$dataFile"
#echo -e ${myWebsitesarray[200]}
#printing each line in the array
for line in ${myWebsitesarray[*]}
do
echo "$line"
done
#run each website URL and gather header information
for line in "${myWebsitearray[@]}"
do
((count++))
echo -e "\\rPlease wait... $count of $websitesCount"
curl --head "$curlOptions" "$line" | awk '/Server: / {print $2 }' >> $dataPath$headerFile
done
#display results
echo "Results: "
sort $dataPath$headerFile | uniq -c | sort -n
It would certainly help if you actually passed the --connect-timeout option to curl. As written, you are currently passing the single argument --fail --connect-timeout $timeout rather than 3 distinct arguments --fail, --connect-timeout, and $timeout. This is one instance where you should not quote the variable. In other words, use:
curl --head $curlOptions "$line"
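If you want to keep the options in a variable and still quote everything safely, a bash array is the usual alternative (a sketch; the printf just makes the word splitting visible, in place of the real curl call):

```shell
# Each array element stays a separate argument, even when expanded in quotes.
timeout=5
curlOptions=(--fail --connect-timeout "$timeout")
# Real usage would be:  curl --head "${curlOptions[@]}" "$line"
printf '<%s>\n' "${curlOptions[@]}"
# prints:
# <--fail>
# <--connect-timeout>
# <5>
```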

process every line from command output in bash

From every line of nmap network scan output, I want to store the hosts and their IPs in variables (and, for further use, additionally the "Host is up" string).
The nmap output to be processed looks like:
Nmap scan report for samplehostname.mynetwork (192.168.1.45)
Host is up (0.00047s latency).
That's my script so far:
#!/bin/bash
while IFS='' read -r line
do
host=$(grep report|cut -f5 -d' ')
ip=$(grep report|sed 's/^.*(//;s/)$//')
printf "Host:$host - IP:$ip"
done < <(nmap -sP 192.168.1.1/24)
The output does something I do not understand: it puts the "Host:" at the very beginning, then puts "IP:" at the very end, and completely omits the output of $ip.
The generated output of my script is:
Host:samplehostname1.mynetwork
samplehostname2.mynetwork
samplehostname3.mynetwork
samplehostname4.mynetwork
samplehostname5.mynetwork - IP:
Separately, the extraction of $host and $ip basically works (although there might be a better solution, for sure). I can printf either $host or $ip alone.
What's wrong with my script? Thanks!
Your two grep commands are reading from standard input, which they inherit from the loop, so they also read from nmap. read gets one line, the first grep consumes the rest, and the second grep exits immediately because standard input is closed. I suspect you meant to grep the contents of $line:
while IFS='' read -r line
do
host=$(grep report <<< "$line" |cut -f5 -d' ')
ip=$(grep report <<< "$line" |sed 's/^.*(//;s/)$//')
printf "Host:$host - IP:$ip"
done < <(nmap -sP 192.168.1.1/24)
However, this is inefficient and unnecessary. You can use bash's built-in regular expression support to extract the fields you want.
regex='Nmap scan report for (.*) \((.*)\)'
while IFS='' read -r line
do
[[ $line =~ $regex ]] || continue
host=${BASH_REMATCH[1]}
ip=${BASH_REMATCH[2]}
printf "Host:%s - IP:%s\n" "$host" "$ip"
done < <(nmap -sP 192.168.1.1/24)
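The capture can be checked in isolation against the sample line from the question:

```shell
# Bash's =~ operator fills BASH_REMATCH with the capture groups.
regex='Nmap scan report for (.*) \((.*)\)'
line='Nmap scan report for samplehostname.mynetwork (192.168.1.45)'
if [[ $line =~ $regex ]]; then
    printf 'Host:%s - IP:%s\n' "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
fi
# prints: Host:samplehostname.mynetwork - IP:192.168.1.45
```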
Try this:
#!/bin/bash
while IFS='' read -r line
do
if [[ $(echo "$line" | grep report) ]]; then
host=$(echo "$line" | cut -f5 -d' ')
ip=$(echo "$line" | sed 's/^.*(//;s/)$//')
echo "Host:$host - IP:$ip"
fi
done < <(nmap -sP it-50)
Output:
Host:it-50 - IP:10.0.0.10
I added an if clause to skip unwanted lines.

take ping test average change output

Here is my script. I want to change its output to the second format shown below:
#!/bin/bash
declare -a arr=("8.8.8.8" "8.8.4.4" "192.168.1.28")
x=0
DATE=`date +%Y-%m-%d:%H:%M:%S`
echo $DATE > denemesh.txt
while [ $x -le 2 ]
do
echo " ${arr[x]}" >> denemesh.txt
ping -c 4 ${arr[x]} | tail -1| awk ' {print $4 }' | cut -d '/' -f 2 >> denemesh.txt
x=$(( $x + 1 ))
done
Currently, the output looks like this:
2014-12-22:20:22:37
8.8.8.8
18.431
8.8.4.4
17.758
192.168.1.28
0.058
Is it possible to change to output to look like this instead?
2014-12-22:20:22:37
8.8.8.8 18.431
8.8.4.4 17.758
192.168.1.28 0.058
You really just need to modify one line:
echo -n " ${arr[x]}" >> denemesh.txt
Using the -n flag suppresses the trailing newline, and so your next statement should append to the current line. You can then adjust the formatting as you please.
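As a minimal illustration of the -n behavior (writing to a scratch file rather than denemesh.txt):

```shell
out=/tmp/echo_n_demo.txt
echo -n "8.8.8.8 " > "$out"    # -n: no trailing newline is written
echo "18.431" >> "$out"         # so this continues on the same line
cat "$out"
# prints: 8.8.8.8 18.431
```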
Sure it is. Try something like this:
declare -a arr=("8.8.8.8" "8.8.4.4" "192.168.1.28")
d=$(date +%Y-%m-%d:%H:%M:%S)
echo "$d" > denemesh.txt
for ip in "${arr[@]}"
do
printf ' %-12s' "$ip"
ping -c 4 "$ip" | awk 'END{split($4,a,"/"); printf "%12s\n", a[2]}'
done >> denemesh.txt
I've used printf with format specifiers to align the output. The %-12s left-aligns the first column with a fixed width of 12 characters and the %12s in awk right-aligns the second column. Rather than use a while loop, I got rid of your variable x and have looped through the values in the array directly. I have also changed the old-fashioned backtick syntax in your script to use $( ) instead. awk is capable of obtaining the output directly by itself, so I removed your usage of tail and cut too. Finally, you can simply redirect the output of the loop rather than putting >> on the end of each line.
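To see what the awk does, you can feed it a canned summary line (an assumption: Linux-style ping output, whose last line looks like this; the exact format differs between systems):

```shell
# $4 of the summary line is "min/avg/max/mdev"; split on "/" and a[2] is the avg.
summary='rtt min/avg/max/mdev = 17.271/18.431/19.090/0.755 ms'
printf '%s\n' "$summary" | awk 'END{split($4,a,"/"); printf "%12s\n", a[2]}'
# prints the average, right-aligned in 12 characters: "      18.431"
```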

List IPs line by line with bash script

I need a list of IPs from the dig command. I'm using a bash script, but some domains like google.com have many IPs, and I need only one result.
#!/bin/bash
while read domain; do
ipaddr=$(dig +short $domain)
echo -e "$ipaddr" >> results.csv
done < domainlist.txt
output ( if we take google an example )
173.194.35.101
173.194.35.102
173.194.35.96
173.194.35.110
173.194.35.98
173.194.35.100
173.194.35.99
173.194.35.104
173.194.35.103
173.194.35.97
173.194.35.105
I need only the first line
#!/bin/bash
while read domain; do
ipaddr=$(dig +short $domain | head -1)
echo -e "$ipaddr" >> results.csv
done < domainlist.txt
Check if this is ok.
ipaddr=$(dig +short $domain | head -1)
Piping through head -1 should return the first IP from the list of IPs returned by the dig command.
Pipe it through head:
ipaddr=$(dig +short $domain | head -n 1)
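One caveat (an assumption about your domains): dig +short can print a CNAME target before the A records, in which case head -1 returns a hostname rather than an IP. Filtering for dotted-quad lines first avoids that; here the dig output is simulated so the sketch runs offline:

```shell
# Simulated `dig +short` output where a CNAME precedes the A records.
dig_output='www.l.google.com.
173.194.35.101
173.194.35.102'
# Keep only IPv4-looking lines, then take the first one.
ipaddr=$(printf '%s\n' "$dig_output" | grep -E '^[0-9]+(\.[0-9]+){3}$' | head -n 1)
echo "$ipaddr"
# prints: 173.194.35.101
```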

passing variable containing special chars to sed in bash

I need to remove subdomains from a file:
.domain.com
.sub.domain.com -- this must be removed
.domain.com.uk
.sub2.domain.com.uk -- this must be removed
so I have used sed:
sed '/\.domain.com$/d' file
sed '/\.domain.com.uk$/d' file
and this part was simple, but when I try to do it in a loop, problems appear:
while read line
do
sed '/\$line$/d' filename > filename
done < filename
I suppose it is a "." and "$" problem; I have tried escaping them in many ways, but I am out of ideas now.
A solution inspired by NeronLeVelu's idea:
#!/bin/bash
#set -x
domains=($(rev domains | sort))
for i in `seq 0 ${#domains[@]}` ;do
domain=${domains[$i]}
[ -z "$domain" ] && continue
for j in `seq $i ${#domains[@]}` ;do
[[ ${domains[$j]} =~ $domain.+ ]] && domains[$j]=
done
done
for i in `seq 0 ${#domains[@]}` ;do
[ -n "${domains[$i]}" ] && echo ${domains[$i]} | rev >> result.txt
done
For cat domains:
.domain.com
.sub.domain.com
.domain.co.uk
.sub2.domain.co.uk
sub.domain.co.uk
abc.yahoo.com
post.yahoo.com
yahoo.com
You get cat result.txt:
.domain.co.uk
.domain.com
yahoo.com
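The rev trick works because reversing each line turns the "ends with this domain" test into a "starts with" test, which sorting then groups together; for example:

```shell
# A reversed subdomain begins with its reversed parent domain.
echo '.domain.com' | rev       # prints: moc.niamod.
echo '.sub.domain.com' | rev   # prints: moc.niamod.bus.
```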
sed -n 's/.*/²&³/;H
$ {x;s/$/\
/
: again
s|\(\n\)²\([^³]*\)³\(.*\)\1²[^³]*\2³|\1\2\3|
t again
s/[²³]//g;s/.\(.*\)./\1/
p
}' YourFile
Load the file into the working buffer, then iteratively remove any line that ends with an earlier one, and finally print the result. The temporary edge delimiters (² and ³) are easier to manage than \n in the pattern.
Use --posix -e for GNU sed (tested on AIX).
Your loop is a bit confusing because you're trying to use sed to delete patterns from a file but you take the patterns from the same file.
If you really want to remove subdomains from filename then I suppose you need more something like the following:
#!/bin/bash
set -x
cp domains domains.tmp
while read domain
do
sed -r -e "/[[:alnum:]]+${domain//./\\.}$/d" domains.tmp > domains.tmp2
cp domains.tmp2 domains.tmp
done < dom.txt
Where cat domains is:
.domain.com
.sub.domain.com
.domain.co.uk
.sub2.domain.co.uk
sub.domain.co.uk
abc.yahoo.com
post.yahoo.com
and cat dom.txt is:
.domain.com
.domain.co.uk
.yahoo.com
Running the script on these inputs results in:
$ cat domains.tmp
.domain.com
.domain.co.uk
Each iteration removes subdomains of the domain currently read from dom.txt and stores the result in a temporary file, the contents of which are used in the next iteration for additional filtering.
It's good to try your scripts with set -x; you'll see some of the substitutions, etc.
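The key piece is the ${domain//./\\.} expansion, which escapes every dot so sed treats them literally; checked in isolation:

```shell
# ${var//pattern/replacement} replaces every match; here each "." becomes "\.".
domain='.domain.com'
escaped=${domain//./\\.}
printf '%s\n' "$escaped"
# prints: \.domain\.com
```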
