I made a bash script to insert the results of an nmap command into an array. The script works on bash 4.3.30, but it does not work when I run it on bash 4.4.12: the array looks empty, or it only has the first value.
Here is my code:
#!/bin/bash
declare -a IP_ARRAY
NMAP_OUTPUT=`nmap -sL $1 | grep "Nmap scan report" | awk '{print $NF}'`
read -a IP_ARRAY <<< $NMAP_OUTPUT
printf '%s\n' "${IP_ARRAY[@]}"
With bash 4.3, the values of the string NMAP_OUTPUT are copied to the array IP_ARRAY correctly. With the other version they are not, and I can't find the error.
The string NMAP_OUTPUT looks like:
10.0.0.0 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 10.0.0.5 10.0.0.6 10.0.0.7 10.0.0.8 10.0.0.9 10.0.0.10
Instead of using my code above, this code works:
IP_ARRAY=(${NMAP_OUTPUT})
I would like to understand why my previous code works on one version and not on the other.
Thank you very much!!!
Your script has multiple fixable issues, and it can be simplified to far fewer steps.
You are storing the output in the plain variable NMAP_OUTPUT. The bash shell supports arrays, which are the right tool for storing a list. The individual entries in a variable's contents undergo word splitting by the shell, and the consequence is that if an entry contains spaces, there is no way to tell afterwards whether it was a separate word or part of a larger one.
Storing the command output in a variable and later parsing it into an array is a roundabout way; you can read the output directly into an array.
Using grep and awk together is not needed; awk can do everything grep can.
Always quote shell variable and array expansions. Never use an unquoted expansion (as in <<< $NMAP_OUTPUT); it can have adverse effects when words contain spaces.
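As a minimal sketch of that hazard (the file names here are made up), watch what word splitting does to entries containing spaces:
files="report one.txt report two.txt"   # two file names, each containing a space
for f in $files; do                     # unquoted: the shell splits on whitespace
  echo "word: <$f>"                     # prints four words, not two file names
done
printf '%s\n' "$files"                  # quoted: the string stays intact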
Always use lower-case names for user-defined variables, functions, and arrays.
Use the mapfile built-in.
Bash 4.0 onwards provides the mapfile/readarray built-ins to read directly from a file or from the output of a command.
All your script needs is
mapfile -t nmapOutput < <(nmap -sL "$1" | awk '/Nmap scan report/{print $NF}')
printf '%s\n' "${nmapOutput[@]}"
I could not infer anything about why your script stopped working between the versions of bash you've indicated; I was able to run your script on the given input on bash 4.4.12.
But the crux of the problem seems to be using variables and arrays interchangeably in the wrong way.
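A toy sketch of that distinction (the values here are arbitrary): a scalar variable holds one string, while an array holds separate elements:
list="a b c"        # one scalar string
arr=(a b c)         # three separate elements
echo "${#arr[@]}"   # prints 3
echo "${arr[1]}"    # prints b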
It seems you're trying to do this the hard way.
Why not simply:
IP_ARRAY=( $(nmap -sL 127.1/29 | grep "Nmap scan report" | awk '{print $NF}') )
There are many tens, maybe a hundred or more, previous questions here that seem "identical" to this, but after an extensive search I found nothing that even came close to working (though I did learn quite a lot), and so I decided to just RTFM and figure this out on my own.
The Problem
I wanted to search the output of a ps auxwww command to find processes of interest, and the issue was that I couldn't simply use cut to extract the exact data I wanted. ps, it turns out, columnates its output, adding extra spaces or tabs that get in the way of using cut to pull the correct fields.
So, since I'm not a master at bash, I did a search... The answers I found all focused either on variables (a "backup strategy" from my point of view that didn't itself solve the whole problem) or they trimmed only leading or trailing space, or all whitespace including newlines. Nope, that won't work for cut! And neither will removing trailing newlines and so forth.
So, restated, the question is: how do we efficiently reduce runs of whitespace to a single space between other characters, without eliminating newlines?
Below, I will give my answer, but I welcome others to give theirs - who knows, maybe someone has a better answer?!
Answer:
At least MY answer - please leave your own, too! - was to do this:
ps auxwww | grep <program> | tr -s [:blank:] | cut -d ' ' -f <field_of_interest>
This worked great!
Obviously, there are many ways to adapt this to other needs.
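For instance, a sketch that pulls just the PID (field 2); the bracketed [f] is a common trick so grep doesn't match its own command line, and quoting '[:blank:]' avoids any shell globbing (see the comment further down about needing quotes on some systems):
ps auxwww | grep '[f]irefox' | tr -s '[:blank:]' | cut -d ' ' -f 2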
As an alternative to all of the pipes and grep with cut, you could simply use awk. The benefit of using awk with the default field separator (FS), which breaks on whitespace, is that it treats any run of whitespace between fields as a single separator.
So using awk does away with needing tr -s to "squeeze" whitespace to define fields. Further, awk gives far greater control over field matching using regular expressions, rather than having to rely on grep of a full line and cut to locate pre-determined field numbers (though to some extent you will still have to tell awk which field of the ps output you are interested in).
Using bash, you can also eliminate the pipe | by using process substitution to send the output of ps auxwww to awk on stdin using redirection, e.g. awk ... < <(ps auxwww) for a single tidy command line.
To get your "program" and "field_of_interest" into awk you have two options. You can initialize awk variables using the -v var=value option (there can be multiple -v options given), or you can use the BEGIN rule to initialize the variables. The only difference is that with -v you can provide a shell variable for value and no whitespace is allowed around the = sign, while within BEGIN any whitespace is ignored.
So in your case a couple of examples to get the virtual memory size for firefox processes, you could use:
awk -v prog="firefox" -v fnum="5" '
$11 ~ prog {print $fnum}
' < <(ps auxwww)
(above if you had myprog=firefox as a shell variable, you could use -v prog="$myprog" to initialize the prog variable for awk)
or using the BEGIN rule, you could do:
awk 'BEGIN {prog = "firefox"; fnum = "5"}
$11 ~ prog {print $fnum }
' < <(ps auxwww)
In each command above, it locates the COMMAND field from ps (field 11) and checks whether it contains firefox; if so, it outputs field no. 5, the virtual memory size used by each process.
Both work fine as one-liners as well, e.g.
awk -v prog="firefox" -v fnum="5" '$11 ~ prog {print $fnum}' < <(ps auxwww)
Don't get me wrong, the pipeline is perfectly fine; it will just be slower. For short commands with limited output there won't be much difference, but when the output is large, awk will provide an orders-of-magnitude improvement over having tr, grep, and cut read over the same records three times.
The reason is that each pipe, and the process on each side of it, requires a separate process to be spawned by the shell. So minimizing their use improves the efficiency of what your script is doing. If the data and the processes are small, there isn't much of a difference. However, if you are reading a 3G file three times over, that is a difference of orders of magnitude: hours versus minutes or seconds.
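A rough, unscientific way to see this for yourself; big.log is a hypothetical large file, and the pattern and field number are placeholders:
time (tr -s '[:blank:]' < big.log | grep firefox | cut -d ' ' -f 5 > /dev/null)
time (awk '/firefox/{print $5}' < big.log > /dev/null)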
I had to use single quotes on CentOS Linux to get tr working as described above:
ps -o ppid= $$ | tr -d '[:space:]'
You can reduce the number of pipes using this Perl one-liner, which uses Perl regexes instead of a separate grep process. This combines grep, tr and cut in a single command, with an easy way to manipulate the output (@F is the array of fields, 0-indexed):
Examples:
# Start an example process to provide the input for `ps` in the next commands:
/Applications/Emacs.app/Contents/MacOS/Emacs-x86_64-10_14 --geometry 109x65 /tmp/foo &
# Print single space-delimited output of `ps` for all emacs processes:
ps auxwww | perl -lane 'print "@F" if $F[10] =~ /emacs/i'
# Prints:
# bar 72144 0.0 0.5 4610272 82320 s006 SN 11:15AM 0:01.31 /Applications/Emacs.app/Contents/MacOS/Emacs-x86_64-10_14 --geometry 109x65 /tmp/foo
# Print emacs PID and file name opened with emacs:
ps auxwww | perl -lane 'print join "\t", @F[1, -1] if $F[10] =~ /emacs/i'
# Prints:
# 72144 /tmp/foo
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in -F option.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlre: Perl regular expressions (regexes)
I'm trying to temporarily disable dhcp on all connections in a computer using bash, so I need the process to be reversible. My approach is to comment out lines that contain BOOTPROTO=dhcp, and then insert a line below it with BOOTPROTO=none. I'm not sure of the correct syntax to make sed understand the line number stored in the $insertLine variable.
fileList=$(ls /etc/sysconfig/network-scripts | grep ^ifcfg)
path="/etc/sysconfig/network-scripts/"
for file in $fileList
do
echo "looking for dhcp entry in $file"
if [ $(cat $path$file | grep ^BOOTPROTO=dhcp) ]; then
echo "disabling dhcp in $file"
editLine=$(grep -n ^BOOTPROTO=dhcp /$path$file | cut -d : -f 1 )
#comment out the original dhcp value
sed -i "s/BOOTPROTO=dhcp/#BOOTPROTO=dhcp/g" $path$file
#insert a line below it with value of none.
((insertLine=$editLine+1))
sed "$($insertLine)iBOOTPROTO=none" $path$file
fi
done
Any help using sed or other stream editor greatly appreciated. I'm using RHEL 6.
The sed editor should be able to do the job on its own, without having to combine bash, grep, cat, etc. That is easier to test and more reliable.
The whole script can be simplified to the below. It performs all operations (the substitution and the insert) in a single pass, using multiple sed scriptlets.
#! /bin/sh
for file in $(grep -l "^BOOTPROTO=dhcp" /etc/sysconfig/network-scripts/ifcfg*) ; do
sed -i -e "s/BOOTPROTO=dhcp/#BOOTPROTO=dhcp/g" -e "/BOOTPROTO=dhcp/i BOOTPROTO=none" "$file"
done
As a side note, consider NOT using path as a variable name, to avoid possible confusion with the PATH environment variable.
Writing it up, your attempt with the following fails:
sed "$($insertLine)iBOOTPROTO=none" $path$file
because:
$($insertLine) encloses $insertLine in a command substitution; when $insertLine is evaluated it yields a number, which is not a command, generating an error.
your call to sed does not include the -i option to edit the file $path$file in place.
You can correct the issues with:
sed -i "${insertLine}i BOOTPROTO=none" $path$file
Which is just sed -i (edit in place) and Ni, where N is the number of the line at which to insert, followed by the content to insert, and finally the file to insert it in. You add ${..} around insertLine to protect the variable name from the i that follows, and the expression is double-quoted to allow variable expansion.
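A tiny self-contained demo of the same idiom, using a throw-away sample file (the path and contents are made up):
line=3
printf 'a\nb\nc\nd\n' > /tmp/demo.txt
sed -i "${line}i INSERTED" /tmp/demo.txt   # inserts INSERTED before line 3
# without the braces, "$linei ..." would look up a variable named linei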
Let me know if you have any further questions.
(and see dash-o's answer for refactoring the whole thing to simply use sed to make the change without spawning 10 other subshells)
I'm getting the size of a file from a remote webserver and saving the result to a var called remote. I get this using:
remote=`curl -sI $FILE | grep -i Length | awk '/Content/{print $(NF-0)}'`
Once I've downloaded the file, I'm getting the local file's size with:
local=`stat --print="%s" $file`
If I echo remote and local, they contain the same value.
I'm trying to run an if statement on this:
if [ "$local" -ne "$remote" ]; then
But it always shows the error message and never reports that they match.
Can someone advise what I'm doing wrong?
Thanks
curl's output uses the network format for text, meaning that lines are terminated by a carriage return followed by linefeed; unix tools (like the shell) expect lines to end with just linefeed, so they treat the CR as part of the content of the line, and often get confused. In this case, what's happening is that the remote variable is getting the content length and a CR, which isn't valid in a numeric expression, hence errors. There are many ways to strip the CR, but in this case it's probably easiest to have awk do it along with the field extraction:
remote=$(curl -sI "$remotefile" | grep -i Length | awk '/Content/{sub("\r","",$NF); print $NF}')
BTW, I also took the liberty of replacing backticks with $( ) -- this is easier to read, and doesn't have some oddities with escapes that backticks have, so it's the preferred syntax for capturing command output. Oh, and (NF-0) is equivalent to just NF, so I simplified that. As @Jason pointed out in a comment, it's safest to use lower- or mixed-case for variable names, and put double-quotes around references to them, so I did that by changing $FILE to "$remotefile". You should do the same with the local filename variable.
You could also drop the grep command and have awk search for /^Content-Length:/ to simplify it even further.
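A sketch of that further simplification, dropping grep entirely (same assumptions about the header line as above):
remote=$(curl -sI "$remotefile" | awk '/^Content-Length:/{sub("\r","",$NF); print $NF}')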
I have a bash variable which has the following content:
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
I want to search the string starting with i- and then extract only that instance id. So, for the above input, I want to have output like below:
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
I am open to use grep, awk, sed.
I am trying to achieve my task by using the following command, but it gives me the whole line:
grep -oE 'i-.*'<<<$variable
Any help?
You can just change your grep command to:
grep -oP 'i-[^\s]*' <<<$variable
Tested on your input:
$ cat test
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
$ var=`cat test`
$ grep -oP 'i-[^\s]*' <<<$var
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
grep is exactly what you need for this task; sed would be more suitable if you had to reformat the input, and awk would be nice if you had to reformat a string or do some computation on fields in the rows and columns.
Explanation:
-P is to use Perl regexes (PCRE)
i-[^\s]* is a regex that matches a literal i- followed by zero or more non-space characters. You could change the * to a + if you want to require at least one character after the -, or use the {min,max} syntax to impose a range.
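For instance, a variant using the range syntax (the minimum of 3 here is arbitrary):
grep -oP 'i-[^\s]{3,}' <<<"$var"   # require at least 3 characters after i-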
Let me know if there is something unclear.
Bonus:
Following Sundeep's comment, you can use one of these improved versions of the regex I proposed (the first uses PCRE, the second POSIX character classes):
grep -oP 'i-\S*' <<<$var
or
grep -o 'i-[^[:blank:]]*' <<<$var
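And to illustrate the tool-choice point above: sed would need a substitution rather than a simple match, something like this sketch, which prints only the captured id from matching lines:
sed -n 's/.*\(i-[[:alnum:]]*\).*/\1/p' <<<"$var"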
You could use the following too (I tested it with GNU awk):
echo "$var" | awk -v RS='[ |\n]' '/^i-/'
You can also use this code (tested on Unix):
echo $test | grep -o "i-[0-z]*"
Here,
-o # Prints only the matching part of the lines
i-[0-z]* # This regular expression matches 'i-' followed by characters in the ASCII range from '0' to 'z', which covers digits and letters (plus a few punctuation characters that sit between them in the ASCII table).
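If you want to exclude those punctuation characters, a tighter sketch restricted to alphanumerics:
echo "$test" | grep -o "i-[[:alnum:]]*"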
Good day. I was reading another post regarding resolving hostnames to IPs and only using the first IP in the list.
I want to do the opposite and used the following script:
#!/bin/bash
IPLIST="/Users/mymac/Desktop/list2.txt"
for IP in 'cat $IPLIST'; do
domain=$(dig -x $IP +short | head -1)
echo -e "$domain" >> results.csv
done < domainlist.txt
I would like to give the script a list of 1000+ IP addresses collected from a firewall log, and resolve the list of destination IPs to domains. I only want one entry in the response file, since I will be adding this to the CSV I exported from the firewall as another "column" in Excel. I could even use multiple responses as semicolon-separated values on one line (or /, |, \, * etc.). The list2.txt is a standard ASCII file; I have tried Mac, Linux, and Windows line endings.
216.58.219.78
206.190.36.45
173.252.120.6
What I am getting now:
The domainlist.txt is getting an exact duplicate of list2.txt, while results.csv has nothing. No errors come up on the screen when I run the script either.
I am running Mac OS X with Macports.
Your script has a number of syntax and stylistic errors. The minimal fix is to change the quotes around the cat:
for IP in `cat $IPLIST`; do
Single quotes produce a literal string; backticks (or the much-preferred syntax $(cat $IPLIST)) perform a command substitution, i.e. run the command and insert its output. But you should fix your quoting, and preferably read the file line by line instead. We can also get rid of the useless echo:
#!/bin/bash
IPLIST="/Users/mymac/Desktop/list2.txt"
while read IP; do
dig -x "$IP" +short | head -1
done < "$IPLIST" >results.csv
It seems that in your /etc/resolv.conf you have configured a nameserver which does not support reverse lookups, and that's why the responses are empty.
You can pass the DNS server you want to use to the dig command. Let's say 8.8.8.8 (Google), for example:
dig #8.8.8.8 -x "$IP" +short | head -1
The command returns the domain with a trailing . appended. If you want to remove that, you can additionally pipe to sed:
... | sed 's/.$//'
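Putting it together, a sketch of the full pipeline; note that escaping the dot (\.) is slightly safer, since an unescaped . would strip whatever the final character happens to be:
dig @8.8.8.8 -x "$IP" +short | head -1 | sed 's/\.$//'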