Requirement
I have a txt file in which the last column has URLs.
Some of the URL entries have IPs instead of FQDNs.
So, for entries with IPs (e.g. url=https://174.37.243.85:443*), I need to do a reverse nslookup on the IP and replace the IP with the result (the FQDN).
Text File Input
httpMethod=SSL-SNI destinationIPAddress=174.37.243.85 url=https://174.37.243.85:443*
httpMethod=SSL-SNI destinationIPAddress=183.3.226.92 url=https://pingtas.qq.com:443/*
httpMethod=SSL-SNI destinationIPAddress=184.173.136.86 url=https://v.whatsapp.net:443/*
Expected Output
httpMethod=SSL-SNI destinationIPAddress=174.37.243.85 url=https://55.f3.25ae.ip4.static.sl-reverse.com:443/*
httpMethod=SSL-SNI destinationIPAddress=183.3.226.92 url=https://pingtas.qq.com:443/*
httpMethod=SSL-SNI destinationIPAddress=184.173.136.86 url=https://v.whatsapp.net:443/*
Here's a quick and dirty attempt in pure Awk.
awk '$3 ~ /^url=https?:\/\/[0-9.]*([:\/?*].*)?$/ {
# Parse out the hostname part
split($3, n, /[\/:?\*]+/);
cmd = "dig +short -x " n[2]
cmd | getline reverse;
sub(/\.$/, "", reverse);
close(cmd)
# Figure out the tail after the hostname part
match($3, /^url=https?:\/\/[0-9.]*/); # sets RSTART/RLENGTH for the scheme-plus-IP part
$3 = n[1] "://" reverse substr($3, RSTART+RLENGTH) } 1' file
If you don't have dig, you might need to resort to nslookup or host instead; but the only one of these which portably offers properly machine-readable output is dig, so you might want to install it for that feature alone.
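If you do fall back on host, a rough sketch of the same approach could look like this (it assumes the usual "domain name pointer" wording in host's output, which isn't guaranteed across implementations):
awk '$3 ~ /^url=https?:\/\/[0-9.]*([:\/?*].*)?$/ {
# Parse out the hostname part
split($3, n, /[\/:?\*]+/);
reverse = n[2]                                # fall back to the IP if no PTR is found
cmd = "host " n[2]
while ((cmd | getline line) > 0)
if (line ~ / domain name pointer /) {
reverse = line
sub(/.* domain name pointer /, "", reverse);  # keep only the PTR target
sub(/\.$/, "", reverse)                       # drop the trailing dot
}
close(cmd)
match($3, /^url=https?:\/\/[0-9.]*/);
$3 = n[1] "://" reverse substr($3, RSTART+RLENGTH) } 1' file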
Solution 1: A single-awk solution, added after the discussion in the comments:
awk '
match($0, /\/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){
  ip=substr($0,RSTART+1,RLENGTH-1);
  system("nslookup " ip " > temp");
  val=$0;
  while((getline line < "temp") > 0){
    if(line ~ /name/){
      num=split(line, array," ");
      sub(/\.$/,"",array[num]);      # strip the trailing dot from the FQDN
      sub(ip,array[num],val)}}
  close("temp");                     # so the next nslookup result is re-read
  print val;
  next
}
NF
' Input_file
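For comparison, here is a sketch of the same idea that skips the temporary file and reads nslookup's output straight through a pipe (nslookup's exact output format varies between implementations, so treat this as a starting point only):
awk '
match($0, /\/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){
  ip=substr($0,RSTART+1,RLENGTH-1);
  cmd="nslookup " ip;
  while((cmd | getline line) > 0)
    if(line ~ /name/){
      fqdn=line;
      sub(/.*[ \t]/,"",fqdn);   # keep the last field (the resolved name)
      sub(/\.$/,"",fqdn);       # drop the trailing dot
      sub(ip,fqdn,$0)};
  close(cmd)
}
NF
' Input_file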
Solution 2: My initial solution, with awk and shell.
The following simple script may help with the same:
cat script.ksh
CHECK_IP () {
fqdn=$(echo "$1" | awk '{if(match($0,/\/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/)){system("nslookup " substr($0,RSTART+1,RLENGTH-1))}}')
actual_fqdn=$(echo "$fqdn" | awk '/name/{sub(/\.$/,"",$NF);print $NF}')
echo "$actual_fqdn"
}
while read line
do
val=$(CHECK_IP "$line")
if [[ -n "$val" ]]
then
echo "$line" | awk -v var="$val" '{if(match($0,/\/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/)){ip_val=substr($0,RSTART+1,RLENGTH-1);sub(ip_val,var)}} 1'
else
echo "$line"
fi
done < "Input_file"
My example input is this:
192.168.1.1 xyz_user
192.168.5.1 ddd_user
192.168.1.1 abc_user
192.168.6.1 byd_user
What I need as output is this:
192.168.1.1 xyz_user,abc_user
192.168.5.1 ddd_user
192.168.6.1 byd_user
How should I approach this?
Use the awk tool for that:
awk '{ Usr[$1]= (Usr[$1]!=""?Usr[$1] "," $2: $2) }
END { for (x in Usr) print x, Usr[x] }' infile
192.168.5.1 ddd_user
192.168.6.1 byd_user
192.168.1.1 xyz_user,abc_user
Here we append the second-column value (the user) to the array entry for the IP it belongs to, where the IP is the key of that array.
The ternary condition (Usr[$1]!=""?Usr[$1] "," $2: $2) adds a comma between the users found for the same IP, and prevents a leading comma when the value for that IP is not yet set.
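If you want the output in the order the IPs first appear, rather than the arbitrary for (x in Usr) order, a small variant could record the first-seen order:
awk '!($1 in Usr) { order[++n]=$1 }                       # remember first-seen IP order
     { Usr[$1]= (Usr[$1]!=""?Usr[$1] "," $2: $2) }
     END { for (i=1; i<=n; i++) print order[i], Usr[order[i]] }' infile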
Let's say you have the data in the temporary file /tmp/bashTest.txt.
You can get your wanted result, sorted, like this:
for ipAddress in $( cat /tmp/bashTest.txt |awk '{print $1}' |sort -u ); do
    names="";
    for name in $( cat /tmp/bashTest.txt |grep -e "^$ipAddress " |awk '{print $2}' ); do
        names="$names,$name";
    done;
    echo -e "$ipAddress\t${names#,}";
done
Let me know if it is enough to solve your issue.
I have many hosts files. I collect them from all servers, put them together in host_files.txt, and then I must make one hosts file for all servers.
I run this command to make a unique file, but some rows share the same IP address or hostname.
awk '!a[$0]++' host_files.txt
Here is my host_files.txt
#backup server IPs
95.23.23.56
95.23.23.57
#ftp server IPs
45.89.67.5
45.89.67.3
#apache
12.56.35.36
12.56.35.35
#ftp server IPs
95.23.23.50
#apache
12.56.35.37
I want this as the output file, but I need to keep the comment lines:
#backup server IPs <= comment line, i need to keep them
95.23.23.56
95.23.23.57
#ftp server IPs <= comment line, i need to keep them
45.89.67.5
45.89.67.3
95.23.23.50
#apache <= comment line, i need to keep them
12.56.35.36
12.56.35.35
12.56.35.37
I already tried:
sort -ur host_files.txt
cat host_files.txt | uniq > ok_host.txt
I just need the IP addresses (deduplicated), grouped under their # comment lines. Please help me.
Thanks in advance
In GNU awk, using true multidimensional arrays:
$ awk '
/^#/ { k=$0; next } # group within identical comments, k is key to hash
/./ { a[k][$1]=$0 } # remove empty records and hash ips
END { for(k in a) { # after everything, output
print k
for(i in a[k])
print a[k][i]
}
}' file*
#apache
12.56.35.35 #apacheprivate
12.56.35.36 #apachepub
12.56.35.37 #apachepub
#ftp server IPs
45.89.67.3 #ftpssh
45.89.67.5 #ftpmain
95.23.23.50 #ftp
#backup server IPs
95.23.23.56 #masterbasckup
95.23.23.57 #agentbasckup
The output is in random order because of for(k in a), i.e. the comment groups and the IPs within each group come out in no particular order.
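If you want a deterministic order in GNU awk, you can set PROCINFO["sorted_in"] before those loops; for example, sorting both the groups and the IPs as strings (a sketch of the same program):
$ awk '
/^#/ { k=$0; next }
/./  { a[k][$1]=$0 }
END  {
    PROCINFO["sorted_in"] = "@ind_str_asc"   # iterate arrays in sorted index order
    for(k in a) {
        print k
        for(i in a[k])
            print a[k][i]
    }
}' file*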
This will work in any awk:
$ cat tst.awk
/^#/ { key = $0; next }
NF && !seen[$0]++ {
ips[key] = ips[key] $0 ORS
}
END {
for (key in ips) {
print key ORS ips[key]
}
}
$ awk -f tst.awk file
#apache
12.56.35.36 #apachepub
12.56.35.35 #apacheprivate
12.56.35.37 #apachepub
#ftp server IPs
45.89.67.5 #ftpmain
45.89.67.3 #ftpssh
95.23.23.50 #ftp
#backup server IPs
95.23.23.56 #masterbasckup
95.23.23.57 #agentbasckup
Output order will be random due to use of the in operator; if that's a problem, it's just a couple more lines of code to change.
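For example, one way to keep the comment groups in the order they are first seen (a sketch along the same lines, untested against your real files):
$ cat tst_ordered.awk
/^#/ { key = $0; if (!seenKey[key]++) keys[++numKeys] = key; next }
NF && !seen[$0]++ {
    ips[key] = ips[key] $0 ORS
}
END {
    for (i = 1; i <= numKeys; i++) {
        print keys[i] ORS ips[keys[i]]
    }
}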
If awk is not a requirement:
#!/bin/ksh
cat host_files.txt | while read line ; do
[[ $line =~ ^$ ]] && { continue; } # skip empty lines
[[ $line =~ ^# ]] && { group=$line; continue; } # remember the group name
print "$group|$line" # print with group name in front
done | sort \
| while read line ; do
if [[ ${line%\|*} != $last ]]; then # if the group name changed
print "\n${line%\|*}" # print the group name
last=${line%\|*} # remember the new group name
fi
print "${line#*\|}" # print the entry without the group name
done
The steps: put the group name in front of each line, sort, detect a change of group name and print it, then print the entry without the group name.
Using the same concept with awk (avoiding the while loop in shell).
awk '
/^#/ { k=$0; next }
/./ { print k "|" $0 }
' host_files.txt | sort | awk -F '|' '{
if ( k != $1 ) { print "\n" $1; k = $1; }
print $2
}' -
Because it does not use an array, it will not lose lines due to duplicate keys.
And, thinking a bit more, the second awk can be avoided by adding a sort key to each line: the header gets the key without an 'x' appended, so it sorts above the rest of its group, and in the output the added sort key is simply removed.
awk '
/^#/ { k=$0; print k "|" $0; next; }
/./ { print k "x|" $0 }
' host_files.txt | sort -u | cut -d '|' -f 2
I have a command which gives me output from telnet. The full output from telnet looks like this:
telnet myserver.com 1234
Server, Name=MyServer, Age=123, Ver=1.23, ..., ..., ...
This command should filter out just the number after Age= (the "Age=123" part), which is what I want:
echo "\n" | nc myserver.com 1234 | (awk -F "=" '{print $3}')
Instead of 123 it gives me this output:
123, Ver
Is there a way to get just the number after Age=?
It's probably just a bad awk field-splitting parameter; I tried some other approaches with awk, but this one gave me the closest result... Thank you for any help.
Edit: I forgot to mention that the number after Age= is dynamic, increasing by 1 every day...
I'm not sure about the echo "\n" part but I think that this should do what you want:
nc myserver.com 1234 | awk -F "," '{ split($3, a, /=/); print a[2] }'
Instead of splitting into fields on the =, I've done so on the ,. The third field is then split into the array a on the = and the second half is printed.
I also removed the ( ) around the invocation of awk, which was creating a subshell unnecessarily.
If you're confident that the response format never varies (i.e. no extra = or , appearing in earlier fields), you could simplify the awk expression further:
awk -F'[=,]' '{ print $5 }'
The bracket expression allows fields to be split on either = or ,, making the part you're interested in the fifth field.
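If the field positions might shift between responses, a sketch that matches on the key name rather than the position (assuming the same comma-separated Key=Value format) would be:
nc myserver.com 1234 | awk -F', *' '{
    for (i = 1; i <= NF; i++)                      # walk each comma-separated field
        if (split($i, kv, "=") == 2 && kv[1] == "Age")
            print kv[2]                            # the value after Age=
}'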
echo "\n" | nc myserver.com 1234 | awk -F "," '{print substr($3,6)}'
You can run the awk command twice.
awk -F "=" '{print $3}'|awk -F "," '{print $1}'
You can also use the cut command:
cut -d "=" -f 3|cut -d "," -f 1
I need a way (as portable as possible) in bash to search the ~/.netrc file for a particular machine, api.mydomain.com, and then pull the username value from the next line.
The format is:
machine a.mydomain.com
username foo
password bar
machine api.mydomain.com
username boo
password far
machine b.mydomain.com
username doo
password car
So, it should match api.mydomain.com and return exactly boo from this example.
awk '/api.mydomain.com/{getline; print}' ~/.netrc
That gets me the line I want, but how do I find the username value?
$ awk '/api.mydomain.com/{getline; print $2}' ~/.netrc
boo
To capture it in a variable:
$ name=$(awk '/api.mydomain.com/{getline; print $2}' ~/.netrc)
$ echo "$name"
boo
By default, awk splits records (lines) into fields based on whitespace. Thus, on the line username boo, username is assigned to field 1, denoted $1, and boo is assigned to field 2, denoted $2.
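For example:
$ echo "username boo" | awk '{print $2}'
boo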
If you'd like to avoid using the getline function, use this:
awk '/api.mydomain.com/ {f=NR} f&&f+1==NR {print $2}' ~/.netrc
boo
As Ed writes here: avoid using it.
http://awk.info/?tip/getline
This will find the line number of the pattern, and then, when the line number is one more than that, print field #2.
This can be shortened somewhat to:
awk '/api.mydomain.com/ {f=NR} f&&f+1==NR&&$0=$2' ~/.netrc
or
awk 'f&&!--f&&$0=$2; /api.mydomain.com/ {f=1}' ~/.netrc
This may be the most robust way to do it. If there are comment lines or blank lines after the domain, the other solutions fail.
awk '/api.mydomain.com/ {f=1} f && /username/ {print $2;f=0}' ~/.netrc
boo
If the domain is found, set flag f. If flag f is true and a later line contains username, print field #2.
This vanilla Bourne shell function (which works in Bash, KSH, and more) should parse any valid .netrc file: it handled everything I could think of. Invoke it as netrc-fetch MACHINE [FIELD]. For your question, that would be netrc-fetch api.mydomain.com username.
netrc-fetch () {
cat "$HOME/.netrc" | awk -v host="$1" -v field="${2:-password}" '
{
for (i=1; i <= NF; i += 2) {
j=i+1;
if ($i == "machine") {
found = ($j == host);
} else if (found && ($i == field)) {
print $j;
exit;
}
}
}
'
}
This sed is as portable as I can make it:
sed -n '
/machine[ ]\{1,\}api.mydomain.com/ {
# we have matched the machine
:a
# next line
n
# print username, if matched
s/^[ ]*username[ ]\{1,\}//p
# goto b if matched
tb
# else goto a
ba
:b
q
}
' ~/.netrc
The whitespace in the brackets is a space and a tab character.
Looking at this with fresh eyes, this is what I would write now:
awk -v machine=api.mydomain.com '
$1 == "machine" {
if (m)
# we have already seen the requested domain but did not find a username
exit 1
if ($2 == machine) m=1
}
m && $1 == "username" {print $2; exit}
' ~/.netrc
Or, if you like unreadable one-liners:
awk '$1=="machine"{if(m)exit 1;if($2==M)m=1} m&&$1=="username"{print $2;exit}' M=api.mydomain.com ~/.netrc
Here's what you could do:
awk '/api.mydomain.com/{getline;print $2}' ~/.netrc  # $2 prints the username
I recently stumbled upon this issue and couldn't find an answer which covers the one-line format (i.e. machine x login y password z) and inline or intra-line comments.
What I ended up with is flattening the file into one line with xargs (so only single spaces remain), then using grep with a lookbehind (for the keyword) and a lookahead (for the next whitespace or end of line):
xargs < ~/.netrc | grep -oP '(?<=machine api\.domain\.com ).*?(?=( machine)|$)' | grep -oP '(?<=login ).*?(?=\s|$)'
This could of course be developed into a function of sorts, escaping the dots in the remote host argument with backslashes, printing the password as well, etc.:
get_netrc_entry() {
machine_entry=$(xargs < ~/.netrc | grep -oP "(?<=machine ${1//./\\.} ).*?(?=( machine)|$)")
grep -oP '(?<=login ).*?(?=\s|$)' <<<"${machine_entry}"
grep -oP '(?<=password ).*?(?=\s|$)' <<<"${machine_entry}"
}
Another solution to get the user or password:
grep -A2 armdocker.rnd.ericsson.se ~/.config/artifactory.netrc | awk '/login/ {print $2}'
grep -A2 prints the 2 lines following the line for the requested machine.
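Adapted to the question's file, the same idea might look like this (assuming the username line falls within the two lines after the machine line):
grep -A2 "machine api.mydomain.com" ~/.netrc | awk '/username/ {print $2}'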
I am new to shell scripting. I have a file app.conf:
[MySql]
user = root
password = root123
domain = localhost
database = db_name
port = 3306
[Logs]
level = logging.DEBUG
[Server]
port = 8080
I want to parse this file in a shell script and extract the MySQL credentials from it. How can I achieve that?
I'd do this:
pw=$(awk '/^password/{print $3}' app.conf)
user=$(awk '/^user/{print $3}' app.conf)
echo $pw
root123
echo $user
root
The $() sets the variable pw to the output of the command inside. The command inside looks through your app.conf file for a line starting with password and then prints the 3rd field of that line.
EDITED
If you are going to parse a bunch of values out of your config file, I would make a variable for the config file name:
CONFIG=app.conf
pw=$(awk '/^password/{print $3}' "${CONFIG}")
user=$(awk '/^user/{print $3}' "${CONFIG}")
Here's how to do the two different ports... by setting a flag to 1 when you come to the right section and exiting when you find the port.
mport=$(awk '/^\[MySql\]/{f=1} f==1&&/^port/{print $3;exit}' "${CONFIG}")
sport=$(awk '/^\[Server\]/{f=1} f==1&&/^port/{print $3;exit}' "${CONFIG}")
You will want to search for "shell ini file parser". I would start with something like this:
ini_get () {
awk -v section="$2" -v variable="$3" '
$0 == "[" section "]" { in_section = 1; next }
in_section && $1 == variable {
$1=""
$2=""
sub(/^[[:space:]]+/, "")
print
exit
}
in_section && $1 == "" {
# we are at a blank line without finding the var in the section
print "not found" > "/dev/stderr"
exit 1
}
' "$1"
}
mysql_user=$( ini_get app.conf MySql user )
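The two port values can be fetched the same way; with the file shown above both calls happen to return the right value, since each port line is the first one after its own section header:
mysql_port=$( ini_get app.conf MySql port )
server_port=$( ini_get app.conf Server port )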
Using awk:
awk -F ' *= *' '$1=="user"||$1=="password"{print $2}' my.cnf
root
gogslab
I ran into a similar problem yesterday and thought the best solution might be to end up with an associative array of "key - value" pairs after parsing the file.
If you'd like to see a running example, have a look at https://github.com/philippkemmeter/set-resolution/blob/master/set-resolution.
Adapted to your problem, this might work:
function receive_assoc_declare_statement {
awk -F '=' 'BEGIN {ORS=" "}
{
gsub(/[ \t]+/, "", $1);
gsub(/[ \t]+/, "", $2);
print "[" $1 "]=" $2
}' app.conf
}
eval 'declare -A CONF=('`receive_assoc_declare_statement`')'
You then have access to, for instance, user via ${CONF[user]}.
The gsub calls trim the keys and values, so that you can use tabs etc. to format your config file.
It's lacking sections, but you could add this functionality using sed to create one config array per section:
sed -n '/\[MySql\]/, /\[/ {p}' app.conf | sed '1 d; $ d'
So, answering your question in full, this script might work:
MYSQL=`sed -n '/\[MySql\]/, /\[/ {p}' app.conf | sed '1 d; $ d' | awk -F '=' 'BEGIN {ORS=" "}
{
gsub(/[ \t]+/, "", $1);
gsub(/[ \t]+/, "", $2);
print "[" $1 "]=" $2
}' `
eval 'declare -A MYSQL=('$MYSQL')'
The other sections work correspondingly, except the last section in the file: since no [ line follows it, the range runs to the end of the file and the trailing $ d deletes a real value line. Appending a dummy section header first avoids that.
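For example, here is a sketch for the [Server] section with a placeholder [End] marker appended just for the extraction (the [End] name and the SERVER_KV scratch variable are only illustrative):
SERVER_KV=`{ cat app.conf; echo '[End]'; } | sed -n '/\[Server\]/, /\[/ {p}' | sed '1 d; $ d' | awk -F '=' 'BEGIN {ORS=" "}
{
gsub(/[ \t]+/, "", $1);
gsub(/[ \t]+/, "", $2);
print "[" $1 "]=" $2
}' `
eval 'declare -A SERVER=('$SERVER_KV')'
echo "${SERVER[port]}"    # prints 8080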