I'm a bash-scripting newbie and don't even know how to formulate my question. I've skimmed this tutorial, but couldn't find an appropriate code example. This is what I want...
I have a list of hostnames (a hostname being google.com and such), which looks like:
1,hostname_1
2,hostname_2
...
n,hostname_n
I want to remove the number at the front, which can easily be done with:
originList="originList.txt"
preparedList="preparedList.txt"
ipv6List="ipv6list.txt"
sed 's/[0-9]*,//' <$originList >$preparedList
But instead of redirecting the output to preparedList.txt, I'd like to use it in my dig command:
sed 's/[0-9]*,//' <$originList | dig **HERE** AAAA +short >> $ipv6List
Use this:
sed -e 's/^[[:digit:]]*,//' FILE | xargs -I {} dig {} AAAA +short
Using cut & GNU parallel:
cut -d',' -f2 hostnames | parallel 'dig {} AAAA +short'
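If you'd rather skip xargs and parallel altogether, here's a minimal plain-bash sketch reusing the question's file names, which also appends to the ipv6 list as the question intended:

#!/bin/bash
originList="originList.txt"
ipv6List="ipv6list.txt"

# Split each "n,hostname" line on the comma, discard the number,
# and append the AAAA lookup for the hostname to the list.
while IFS=',' read -r _ hostname; do
    dig "$hostname" AAAA +short >> "$ipv6List"
done < "$originList"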
UPDATE: Still open to solutions using nslookup, without parallel, dig, or drill.
I need to write a script that scans a file containing web page addresses, one per line, and adds to each line the IP address corresponding to that name, using the nslookup command. The script looks like this at the moment:
#!/bin/bash
while read ip
do
    nslookup "$ip" |
        awk '/Name:/{val=$NF;flag=1;next} /Address:/ && flag{print val,$NF;val=""}' |
        sed -n 'p;n'
done < is8.input
The input file contains the following websites:
www.edu.ro
vega.unitbv.ro
www.wikipedia.org
The final output should look like :
www.edu.ro 193.169.21.181
vega.unitbv.ro 193.254.231.35
www.wikipedia.org 91.198.174.192
The main problem I have with the current state of the script is that it takes the names from nslookup (which is good for www.edu.ro) instead of taking the aliases when those are available. My output looks like this:
www.edu.ro 193.169.21.181
etc.unitbv.ro 193.254.231.35
dyna.wikimedia.org 91.198.174.192
I was thinking about implementing an if-else for the aliases, but I don't know how to add one to the current command. Also, the script can be changed if anyone has a better understanding of how to format nslookup's output to look like the output given.
Minimalist workaround quasi-answer. Here's a one-liner replacement for the script using GNU parallel, host (less work to parse than nslookup), and sed:
parallel "host {} 2> /dev/null |
sed -n '/ has address /{s/.* /'{}' /p;q}'" < is8.input
...or using nslookup at the cost of added GNU sed complexity.
parallel "nslookup {} 2> /dev/null |
sed -n '/^A/{s/.* /'{}' /;T;p;q;}'" < is8.input
...or using xargs:
xargs -I '{}' sh -c \
"nslookup {} 2> /dev/null |
sed -n '/^A/{s/.* /'{}' /;T;p;q;}'" < is8.input
Output of any of those:
www.edu.ro 193.169.21.181
vega.unitbv.ro 193.254.231.35
www.wikipedia.org 208.80.154.224
Replace your complete nslookup line with:
echo "$IP $(dig +short "$IP" | grep -m 1 -E '^[0-9.]{7,15}$')"
This might work for you (GNU sed and host):
sed '/\S/{s#.*#host & | sed -n "/ has address/{s///p;q}"#e}' file
For all non-empty lines: invoke the host command on the supplied host name and pipe the result to another invocation of sed, which strips out the surrounding text and quits after the first match.
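With the question's input file, that becomes:

sed '/\S/{s#.*#host & | sed -n "/ has address/{s///p;q}"#e}' is8.input

which prints "hostname address" pairs in the same shape as the expected output above.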
I am trying to find all instances of "type":"FollowEvent", and then, within those instances, if the string "actor": is not followed by {, capture the string enclosed in quotes that comes immediately after "actor":. Otherwise, capture the string enclosed in quotes that comes immediately after "login":.
What I have so far:
zgrep -e '"type":"FollowEvent"' /path/to/dir/* | zgrep -o '"actor":(?!{)*' | cut -f2- -d: | cut -d',' -f1 > results_file.txt
What this does:
For all files in /path/to/dir, for all lines that contain "type":"FollowEvent", find "actor:" not followed by {. Then take everything after the :, and before the next ,. Put the results in results_file.txt.
A single line in the files that are being grep'd could look like this:
{"repo":{"url":"https://url","name":"/"},"type":"FollowEvent","public":true,"created_at":"2011-05-29","payload":{"target":{"gravatar_id":"73","id":64,"repos":35,"followers":58,"login":"username3"}},"actor":{"gravatar_id":"06","id":439,"url":"https://url","avatar_url":"https://.png","login":"username4"},"id":"14"}
or like this:
{"repo":{"url":"https://url/","name":"/"},"type":"FollowEvent","public":true,"created_at":"2011-04-01","payload":{"target":{"gravatar_id":"40","repos":2,"followers":1,"login":"username2"},"actor":"username1","actor_gravatar":"de4"},"actor":{"gravatar_id":"de4","id":716,"url":"https://url","avatar_url":"https://.png","login":"username2"},"id":"12"}
What I want:
a file containing only the usernames of actors. Here, I want, in results_file.txt:
username4
username1
Let's say:
JSON='{"repo":{"url":"https://url","name":"/"},"type":"FollowEvent","public":true,"created_at":"2011-05-29","payload":{"target":{"gravatar_id":"73","id":64,"repos":35,"followers":58,"login":"username3"}},"actor":{"gravatar_id":"06","id":439,"url":"https://url","avatar_url":"https://.png","login":"username4"},"id":"14"}'
For a simple answer, I suggest you use jq: https://stedolan.github.io/jq/
$ echo "$JSON" | jq -r '. | select(.type=="FollowEvent") | .actor.login'
username4
You can install it in most distros with the default package manager.
Anyway, if you need to do it with GNU tools:
$ echo "$JSON" | grep '"type":"FollowEvent"' | sed 's/.*"login":"\([^"]*\).*/\1/g'
username4
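Note that both one-liners above take the top-level actor, so the second sample line yields username2 rather than the wanted username1. Here is a jq sketch that follows the stated rule, assuming the bare "actor" string only ever appears under payload, as in the samples:

$ echo "$JSON" | jq -r 'select(.type=="FollowEvent") | if (.payload.actor? | type) == "string" then .payload.actor else .actor.login end'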
Looking for a way to pass the second column of output to geoiplookup, ideally on the same line, but not necessarily. This is the best I can muster. It's usable, but the geoiplookup results are unfortunately below the list of connections. I wanted more integrated results. If anyone can suggest improvements, they would be welcome.
ns () {
    echo ""
    while sleep 1; do
        lsof -Pi |
            grep ESTABLISHED |
            sed "s/[^:]*$//g" |
            sed "s/^[^:]*//g" |
            sed "s/://g" |
            sed "s/->/\t/g" |
            grep -v localdomain$ |
            tee >(for x in `grep -o "\S*$"`; do geoiplookup $x | sed "s/GeoIP.*: /\t/g"; done)
    done
}
The results currently look something like this:
<Port> <URL or IP if no reverse available #1>
<Port> <URL or IP if no reverse available #2>
<geoiplookup trimmed result #1>
<geoiplookup trimmed result #2>
I received an excellent answer here.
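For reference, one way to get the integrated, same-line results asked about here is to swap the tee/process substitution for a while read loop. This is only a sketch: it reuses the question's sed chain unchanged and assumes every line comes out as <Port><TAB><host>, as in the sample above:

ns () {
    while sleep 1; do
        lsof -Pi |
            grep ESTABLISHED |
            sed "s/[^:]*$//g;s/^[^:]*//g;s/://g;s/->/\t/g" |
            grep -v localdomain$ |
            while read -r port host; do
                # Append the trimmed geoiplookup result to the same line.
                printf '%s\t%s%s\n' "$port" "$host" \
                    "$(geoiplookup "$host" | sed "s/GeoIP.*: /\t/")"
            done
    done
}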
I've long been wondering about this question: say I first try to grep some lines from a file:
cat 101127_2.bam | grep 'TGATTACTTGCTTTATTTTAGTGTTTAATTTGTTCTTTTCTAATAA'
Then it'll pop out the whole line containing this string.
However, can we use some simple bash code to locate which line this string is on (the 100th? the 1000th? ...)?
grep -n 'TGATTACTTGCTTTATTTTAGTGTTTAATTTGTTCTTTTCTAATAA' 101127_2.bam
I found it using man grep and typing /line number to search.
EDIT: Thanks @Keith Thompson; I've edited the post from cat file | grep -n pattern to grep -n pattern file. I was in a hurry, sorry.
Try this:
cat 101127_2.bam | grep -n 'TGATTACTTGCTTTATTTTAGTGTTTAATTTGTTCTTTTCTAATAA'
This might work for you too:
sed '/TGATTACTTGCTTTATTTTAGTGTTTAATTTGTTCTTTTCTAATAA/=;d' 101127_2.bam
or
sed -n '/TGATTACTTGCTTTATTTTAGTGTTTAATTTGTTCTTTTCTAATAA/=' 101127_2.bam
The above solutions only output the matching line numbers; to see the matched lines too:
sed '/TGATTACTTGCTTTATTTTAGTGTTTAATTTGTTCTTTTCTAATAA/!d;=' 101127_2.bam
or
sed -n '/TGATTACTTGCTTTATTTTAGTGTTTAATTTGTTCTTTTCTAATAA/{=;p}' 101127_2.bam
I'm writing a small shell script that needs to reverse the lines of a text file. Is there a standard filter command to do this sort of thing?
My specific application is that I'm getting a list of Git commit identifiers, and I want to process them in reverse order:
git log --pretty=oneline work...master | grep -v DEBUG: | cut -d' ' -f1 | reverse
The best I've come up with is to implement reverse like this:
... | cat -b | sort -rn | cut -f2-
This uses cat to number every line, then sort to sort them in descending numeric order (which ends up reversing the whole file), then cut to remove the unneeded line number.
The above works for my application, but may fail in the general case because cat -b only numbers nonblank lines.
Is there a better, more general way to do this?
In GNU coreutils, there's tac(1).
There is a command for your purpose:
tail -r file.txt
Prints the lines of file.txt in reverse order!
The -r flag is non-standard and may not work on all systems; it works e.g. on macOS.
Beware: the number of lines it can handle is limited. It mostly works, but be careful and verify when working with huge files.
The answer is not 42 but tac.
Edit: Slower and more memory-consuming, using sed:
sed 'x;1!H;$!d;x'
and even longer:
perl -e'print reverse<>'
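For the curious, here is the same sed script spelled out with comments (GNU sed):

sed '
  # swap: the current line goes to hold space, the accumulated text to pattern space
  x
  # on every line but the first, append the accumulated text below the current line
  1!H
  # not at the last line yet: delete the pattern space and start the next cycle
  $!d
  # at the last line: swap the fully reversed text back in so it is auto-printed
  x
'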
Similar to the sed example above, using perl (maybe more memorable, depending on how your brain is wired):
perl -e 'print reverse <>'
"cat -b only numbers nonblank lines"
If that's the only issue you want to avoid, then why not use "cat -n" to number all the lines?
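That is, keeping the rest of the question's pipeline the same:

... | cat -n | sort -rn | cut -f2-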
: "#(#)$Id: reverse.sh,v 1.2 1997/06/02 21:45:00 johnl Exp $"
#
# Reverse the order of the lines in each file
awk ' { printf("%d:%s\n", NR, $0);}' $* |
sort -t: +0nr -1 |
sed 's/^[0-9][0-9]*://'
Works like a charm for me...
In this case, just use --reverse:
$ git log --reverse --pretty=oneline work...master | grep -v DEBUG: | cut -d' ' -f1
rev <name of your text file.txt>
You can even do this:
echo <whatever you want to type> | rev
Note, though, that rev reverses the characters within each line, not the order of the lines; to reverse the line order you need tac (or tail -r).
awk '{a[i++]=$0}END{for(;i-->0;)print a[i]}'
Faster than sed, and compatible with embedded devices like OpenWrt.
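For example:

$ printf '1\n2\n3\n' | awk '{a[i++]=$0}END{for(;i-->0;)print a[i]}'
3
2
1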