process every line from command output in bash

From every line of nmap network scan output I want to store the hosts and their IPs in variables (and, for further use, additionally the "Host is up" string):
The nmap output to be processed looks like this:
Nmap scan report for samplehostname.mynetwork (192.168.1.45)
Host is up (0.00047s latency).
That's my script so far:
#!/bin/bash
while IFS='' read -r line
do
    host=$(grep report|cut -f5 -d' ')
    ip=$(grep report|sed 's/^.*(//;s/)$//')
    printf "Host:$host - IP:$ip"
done < <(nmap -sP 192.168.1.1/24)
The output does something I do not understand: it puts "Host:" at the very beginning, then puts "IP:" at the very end, and completely omits the value of $ip.
The generated output of my script is:
Host:samplehostname1.mynetwork
samplehostname2.mynetwork
samplehostname3.mynetwork
samplehostname4.mynetwork
samplehostname5.mynetwork - IP:
Taken separately, the extraction of $host and $ip basically works (although there is surely a better solution). I can printf either $host or $ip alone.
What's wrong with my script? Thanks!

Your two grep commands are reading from standard input, which they inherit from the loop, so they also read from nmap. read gets one line, the first grep consumes the rest, and the second grep exits immediately because standard input is closed. I suspect you meant to grep the contents of $line:
while IFS='' read -r line
do
    host=$(grep report <<< "$line" | cut -f5 -d' ')
    ip=$(grep report <<< "$line" | sed 's/^.*(//;s/)$//')
    printf 'Host:%s - IP:%s\n' "$host" "$ip"
done < <(nmap -sP 192.168.1.1/24)
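The effect is easy to reproduce in isolation. In this sketch (hypothetical input, not from the question), the cat inside the loop silently consumes everything read has not yet seen, so the body runs only once:
while read -r line; do
    echo "read got: $line"
    cat > /dev/null    # inherits the loop's stdin and drains the remaining lines
done < <(printf '%s\n' one two three)
# prints only "read got: one"
Redirecting the inner commands' stdin, as the here-strings above do, is what restores one iteration per line.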
However, grepping each line like this is inefficient and unnecessary. You can use bash's built-in regular expression support to extract the fields you want:
regex='Nmap scan report for (.*) \((.*)\)'
while IFS='' read -r line
do
    [[ $line =~ $regex ]] || continue
    host=${BASH_REMATCH[1]}
    ip=${BASH_REMATCH[2]}
    printf "Host:%s - IP:%s\n" "$host" "$ip"
done < <(nmap -sP 192.168.1.1/24)
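Since the question mentions storing the values for further use, one hedged extension of the same loop (the array names hosts and ips are illustrative, not from the original) collects every match into parallel arrays:
regex='Nmap scan report for (.*) \((.*)\)'
hosts=() ips=()
while IFS='' read -r line
do
    [[ $line =~ $regex ]] || continue
    hosts+=( "${BASH_REMATCH[1]}" )    # captured hostname
    ips+=( "${BASH_REMATCH[2]}" )      # captured IP address
done < <(nmap -sP 192.168.1.1/24)
printf 'Host:%s - IP:%s\n' "${hosts[0]}" "${ips[0]}"    # first pair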

Try this:
#!/bin/bash
while IFS='' read -r line
do
    if [[ $(echo "$line" | grep report) ]]; then
        host=$(echo "$line" | cut -f5 -d' ')
        ip=$(echo "$line" | sed 's/^.*(//;s/)$//')
        echo "Host:$host - IP:$ip"
    fi
done < <(nmap -sP it-50)
Output:
Host:it-50 - IP:10.0.0.10
I added an if clause to skip unwanted lines.
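As an aside, the grep test can likely be replaced with bash's own pattern matching, which skips a subshell per line; a sketch of that variant:
while IFS='' read -r line
do
    if [[ $line == *report* ]]; then    # same filter as grep report, no subshell
        host=$(echo "$line" | cut -f5 -d' ')
        ip=$(echo "$line" | sed 's/^.*(//;s/)$//')
        echo "Host:$host - IP:$ip"
    fi
done < <(nmap -sP it-50)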

Related

how to open all links in a file and ignore comments using firefox?

So the file contains data like
# entertainment
youtube.com
twitch.tv
# research
google.com
wikipedia.com
...
and I would like to pass that file as an argument to a script that opens all lines that don't start with a #. Any clues on how to do this?
So far, what I have:
for Line in $Lines
do
case "# " in $Line start firefox $Line;; esac
done
some code that could be useful (?):
while read line; do chmod 755 "$line"; done < file.txt
grep -e '^[^#]' inputfile.txt | xargs -d '\n' firefox --new-tab
grep -e '^[^#]': prints every line that doesn't start with a hash (i.e., it skips the comments).
xargs -d '\n' firefox --new-tab: passes each remaining line as an argument to Firefox.
This removes both the lines that start with # and the empty lines.
#!/bin/bash
#
while read -r line
do
    if [[ $(echo "$line" | grep -Ev "^#|^$") ]]
    then
        firefox --new-tab "$line" &
    fi
done <file.txt
Skip the empty lines and the lines that start with a #:
#!/usr/bin/env bash
while IFS= read -r url; do
[[ "$url" == \#* || -z "$url" ]] && continue
firefox --new-tab "$url" &
done < file.txt
awk 'NF && $1!="#"{print "firefox --new-tab", $0, "&"}' file.txt | bash
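For completeness, a mapfile-based sketch (same assumptions about the file.txt format) collects the URLs into an array first, which is handy if the list is needed more than once:
# Keep non-comment, non-empty lines as array elements, then open all tabs at once
# (firefox accepts multiple URLs on one command line, as the xargs answer relies on).
mapfile -t urls < <(grep -Ev '^#|^$' file.txt)
(( ${#urls[@]} )) && firefox --new-tab "${urls[@]}"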

Intermittent piping failure in bash

I have a code snippet that looks like this
while grep "{{SECRETS}}" /tmp/kubernetes/$basefile | grep -v "#"; do
    grep -n "{{SECRETS}}" /tmp/kubernetes/$basefile | grep -v "#" | head -n1 | while read -r line ; do
        lineno=$(echo $line | cut -d':' -f1)
        spaces=$(sed "${lineno}!d" /tmp/kubernetes/$basefile | awk -F'[^ \t]' '{print length($1)}')
        spaces=$((spaces-1))
        # Delete line that had {{SECRETS}}
        sed -i -e "${lineno}d" /tmp/kubernetes/$basefile
        while IFS='' read -r secretline || [[ -n "$secretline" ]]; do
            newline=$(printf "%*s%s" $spaces "" "$secretline")
            sed -i "${lineno}i\ ${newline}" /tmp/kubernetes/$basefile
            lineno=$((lineno+1))
        done < "/tmp/secrets.yaml"
    done
done
in /tmp/kubernetes/$basefile, the string {{SECRETS}} appears twice 100% of the time.
Almost every single time, this completes fine. However, very infrequently, the script errors on its second loop through the file, like so, according to set -x:
...
+ IFS=
+ read -r secretline
+ [[ -n '' ]]
+ read -r line
exit code 1
When it works, the set -x output looks like this, and the script continues processing the file correctly:
...
+ IFS=
+ read -r secretline
+ [[ -n '' ]]
+ read -r line
+ grep '{{SECRETS}}' /tmp/kubernetes/deployment.yaml
+ grep -v '#'
I have no answer for how this can only happen occasionally, so I think there's something about bash piping's parallelism I don't understand. Is there something in grep -n "{{SECRETS}}" /tmp/kubernetes/$basefile | grep -v "#" | head -n1 | while read -r line ; do that could lead to out-of-order execution somehow? Based on the error, it seems like it's trying to read a line, but can't because previous commands didn't work. But there's no indication of that in the set -x output.
A likely cause of the problem is that the pipeline containing the inner loop both reads and writes the "basefile" at the same time. See How to make reading and writing the same file in the same pipeline always “fail”?.
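The hazard is easy to sketch in isolation (file.txt and pattern here are hypothetical): the reader and the writer below operate on the same file concurrently, so whether grep sees the old or the new contents depends on timing:
# Racy: grep streams from file.txt while the loop body rewrites it.
grep -n pattern file.txt | while read -r line; do
    sed -i 's/pattern/replaced/' file.txt    # rewrites the file grep is still reading
done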
One way to fix the problem is to do a full read of the file before trying to update it. Try:
basepath=/tmp/kubernetes/$basefile
secretspath=/tmp/secrets.yaml
while
    line=$(grep -n "{{SECRETS}}" "$basepath" | grep -v "#" | head -n1)
    [[ -n $line ]]
do
    lineno=$(echo "$line" | cut -d':' -f1)
    spaces=$(sed "${lineno}!d" "$basepath" \
                | awk -F'[^ \t]' '{print length($1)}')
    spaces=$((spaces-1))
    # Delete line that had {{SECRETS}}
    sed -i -e "${lineno}d" "$basepath"
    while IFS='' read -r secretline || [[ -n "$secretline" ]]; do
        newline=$(printf "%*s%s" $spaces "" "$secretline")
        sed -i "${lineno}i\ ${newline}" "$basepath"
        lineno=$((lineno+1))
    done < "$secretspath"
done
(I introduced the variables basepath and secretspath to make the code easier to test.)
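A throwaway harness (hypothetical sample data, names not from the question) for exercising the loop could look like this:
mkdir -p /tmp/kubernetes
printf '%s\n' 'spec:' '    {{SECRETS}}' 'other:' '    {{SECRETS}}' > /tmp/kubernetes/test.yaml
printf '%s\n' 'secretA: value1' 'secretB: value2' > /tmp/secrets.yaml
basefile=test.yaml
# ...run the loop above, then inspect the result:
cat /tmp/kubernetes/test.yaml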
As an aside, it's also possible to do this with pure Bash code:
basepath=/tmp/kubernetes/$basefile
secretspath=/tmp/secrets.yaml
updated_lines=()
is_updated=0
while IFS= read -r line || [[ -n $line ]] ; do
    if [[ $line == *'{{SECRETS}}'* && $line != *'#'* ]] ; then
        spaces=${line%%[^[:space:]]*}
        while IFS= read -r secretline || [[ -n $secretline ]]; do
            updated_lines+=( "${spaces}${secretline}" )
        done < "$secretspath"
        is_updated=1
    else
        updated_lines+=( "$line" )
    fi
done <"$basepath"
(( is_updated )) && printf '%s\n' "${updated_lines[@]}" >"$basepath"
The whole updated file is stored in memory (in the updated_lines array), but that shouldn't be a problem: any file that's too big to store in memory will almost certainly be too big to process line by line with Bash, which is generally extremely slow.
In this code, spaces holds the actual space characters used for indentation, not the number of them.
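That works because ${line%%[^[:space:]]*} removes the longest trailing run of non-whitespace characters, which for an indented {{SECRETS}} line leaves exactly the leading indentation. For example:
line='    {{SECRETS}}'
spaces=${line%%[^[:space:]]*}    # the four leading spaces
printf '[%s]\n' "$spaces"        # prints [    ]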

Issues with grep and get a count of a string in a loop

I have a set of search strings in a file (File1) and a content file (File2). I am trying to loop through all the search strings in File1, get a count of each search string within File2, and output it. I want to automate this and make it generic so I can search through multiple content files. However, I don't seem to get the exact count when I execute this loop: I get a count of 0 for each of the strings, although I have those strings in the file. I'm unable to figure out what I am doing wrong and could use some help!
Below is the script I came up with:
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
count=$(echo cat "$2" | grep -c "$line")
echo "$count - $line"
done < "$1"
Command I am using to run this script:
./scanscript.sh File1.log File2.log
I mention this because the command below works by itself and gives the right value when I run it separately, but I want to put it in a loop:
cat File2.log | grep -c "Search String"
Sample Data for File 1 (Search Strings):
/SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/
/SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/
Sample Data for File 2 (Content File):
./SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:29:
./SERVER_NAME2/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:100:
./SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:143:
./SERVER_NAME4/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:223:
./SERVER_NAME5/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:5589:
The problem is this line:
count=$(echo cat "$2" | grep -c "$line")
It should be changed to:
count=$(grep -Fc "$line" "$2")
Also note that -F makes grep search for a fixed string instead of a regex.
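A quick illustration of why -F matters when the search string contains regex metacharacters (the strings here are made up):
echo 'axb' | grep -c 'a.b'     # 1: '.' is a regex wildcard and matches 'x'
echo 'axb' | grep -Fc 'a.b'    # 0: with -F the '.' only matches a literal dot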
Full code:
while IFS='' read -r line || [[ -n "$line" ]]; do
    count=$(grep -Fc "$line" "$2")
    echo "$count - $line"
done < "$1"
Run it as:
./scanscript.sh File1.log File2.log
Output:
1 - /SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/
1 - /SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/

Creating a bash array, separated by new lines

I am reading in from a .txt file which looks something like this:
:DRIVES
name,server,share_1
other_name,other_server,share_2
new_name,new_server,share 3
:NAME
which is information for mounting drives. I want to load them into a bash array to cycle through and mount them; however, my current code breaks at the third line because the array is split on any whitespace. Instead of reading
new_name,new_server,share 3
as one line, it reads it as 2 lines
new_name,new_server,share
3
I have tried changing the value of IFS to
IFS=$'\n' #and
IFS='
'
however, neither works. How can I create an array from the above file, separated by newlines? My code is below.
file_formatted=$(cat ~/location/to/file/test.txt)
IFS='
' # also tried $'\n'
drives=($(sed 's/^.*:DRIVES //; s/:.*$//' <<< $file_formatted))
for line in "${drives[@]}"
do
    # separates lines into individual parts
    drive="$(echo $line | cut -d, -f2)"
    server="$(echo $line | cut -d, -f3)"
    share="$(echo $line | cut -d, -f4 | tr '\' '/' | tr '[:upper:]' '[:lower:]')"
    # mount //$server/$share using osascript
    # script breaks because it tries to mount /server/share instead of /server/share 3
done
EDIT: I tried this and got the same output as before:
drives=$(sed 's/^.*:DRIVES //; s/:.*$//' <<< $file_formatted)
while IFS= read -r line; do
    printf '%s\n' "$line"
done <<< "$drives"
This is the correct way to iterate over your file; no arrays needed.
{
    # Skip over lines until we read :DRIVES
    while IFS= read -r line; do
        [[ $line = :DRIVES ]] && break
    done
    # Split each comma-separated line into the desired variables,
    # until we read :NAME, at which point we break out of this loop
    while IFS=, read -r drive server share; do
        [[ $drive == :NAME ]] && break
        # Use $drive, $server, and $share
    done
    # No need to read the rest of the file, if any
} < ~/location/to/file/test.txt
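If an array is still wanted, as the question originally asked, bash's mapfile (also spelled readarray) splits on newlines only, so embedded spaces survive. A sketch under the same assumptions about the file format:
# One array element per data line between :DRIVES and the next ':' marker.
mapfile -t drives < <(sed -n '/^:DRIVES$/,/^:/{/^:/d;p;}' ~/location/to/file/test.txt)
for line in "${drives[@]}"; do
    IFS=, read -r drive server share <<< "$line"
    printf 'drive=%s server=%s share=%s\n' "$drive" "$server" "$share"
done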

bash read loop only reading first line of input variable

I have a read loop that is reading a variable but not behaving the way I expect. I want to read every line of my variable and process each one. Here is my loop:
while read -r line
do
    echo $line | sed 's/<\/td>/<\/td>$/g' | cut -d'$' -f2,3,4 >> file.txt
done <<< "$TABLE"
I expect it to process every line of the variable, but instead it just does the first one. If the loop body is simply echo $line >> file.txt, it works as expected. What's going on here? How do I get the behavior I want?
It seems your lines are delimited by \r instead of \n.
Use this while loop to iterate over the input, reading with read -d $'\r':
while read -rd $'\r' line; do
    echo "$line" | sed 's~</td>~</td>$~g' | cut -d'$' -f2,3,4 >> file.txt
done <<< "$TABLE"
If $TABLE contains a multi-line string, I recommend
printf '%s\n' "$TABLE" |
while read -r line; do
    echo "$line" | sed 's/<\/td>/<\/td>$/g' | cut -d'$' -f2,3,4 >> file.txt
done
This is also more portable since the '<<<' operator for here-strings is not POSIX.
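Another option (still assuming the delimiters really are carriage returns) is to normalize them with tr first and then read line by line as usual:
printf '%s' "$TABLE" | tr '\r' '\n' |
while read -r line; do
    echo "$line" | sed 's~</td>~</td>$~g' | cut -d'$' -f2,3,4 >> file.txt
done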
