I want to parse the following file in a Makefile. I do not have scripting experience.
Input file format:
Disk "disk1" PORT "port1"
Disk "disk2" PORT "port2"
Disk "disk3" PORT "port3"
. . .
I want a list of all port numbers. I tried to parse it using foreach but had no success. Could you please suggest how I can parse it in a Makefile?
You could use grep:
Makefile
PORTSFILE := "DATA"
PORTS := $(shell grep -Po "(?<=PORT \")[^\"]+" $(PORTSFILE))
test:
	@echo $(PORTS)
Example
$ make test
port1 port2 port3
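If you then want to iterate over the ports inside make itself (the question mentions foreach), here is a minimal sketch; list-ports is a made-up target name, and the recipe line must be indented with a tab:
PORTSFILE := DATA
PORTS := $(shell grep -Po '(?<=PORT ")[^"]+' $(PORTSFILE))

list-ports:
	@$(foreach p,$(PORTS),echo "configuring $(p)";)
The foreach expands to one echo per port, joined into a single shell command.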
I have a small Bash backup script that uses rsync to grab a handful of computers on my LAN. It works very well for static devices using an Ethernet cable; the trouble comes with my even smaller number of laptop users who have docks. Every once in a while they do not connect to the dock and its Ethernet cable/statically assigned address, and end up on the WiFi with a DHCP-assigned address. I already have a list of known statically assigned targets in a file that is parsed to drive the actual backup. So I keep thinking I should fairly easily be able to create a second file with an nmap scan before each backup run, using other code I found - something like:
sudo nmap -n -sP 192.168.2.0/24 | awk '/Nmap scan report for/{printf $5;}/MAC Address:/{print " => "$3;}' | sort
which gives me a line like 192.168.2.101 => B4:FB:E4:FE:C6:6F for each found device on the LAN. I just removed the | sort and sent it to a file (> found.devices) instead.
So now I have a list of the found devices' IP and MAC addresses, and I'd like to compare the two files and create a new target list with any changed IP addresses (for those laptop users who forgot to connect to the dock and are now using DHCP). But I still want to keep my original targets file clean for the times that they do remember, and to continue backing up those devices that are wired all the time, while ignoring everything else on the LAN.
found.devices
192.168.2.190 => D4:XB:E4:FE:C6:6F
192.168.2.102 => B4:QB:Y4:FE:C6:6F
192.168.2.200 => B4:FB:P4:ZE:C6:6F
192.168.2.104 => B4:FB:E4:BE:P6:6F
known.targets
192.168.2.101 D4:XB:E4:FE:C6:6F domain source destination
192.168.2.102 B4:QB:Y4:FE:C6:6F domain source destination
192.168.2.103 B4:FB:P4:ZE:C6:6F domain source destination
192.168.2.104 B4:FB:E4:BE:P6:6F domain source destination
The current backup run should then get a list (or file) to use of:
192.168.2.190 domain source destination
192.168.2.102 domain source destination
192.168.2.200 domain source destination
192.168.2.104 domain source destination
Currently my bash script just reads the known.targets file one line at a time:
cat /known.targets | while read ip hostname source destination
do
    # ... this mounts and backs up the data I want ...
done
I really like the current system and have found it to be very reliable for my simple needs; I just need some way to catch those users who intermittently forget to dock. I expect it's a series of nested loops, but I cannot wrap my head around it - I've been away from actual coding for too long. Any suggestions would be greatly appreciated. I'd also really like to get rid of the => and just use comma- or space-separated data, but every time I mess with that awk statement I end up shifting the data and getting an oddly placed CR somewhere that I cannot figure out either!
Try this pure Bash code:
# map each found MAC address to its current IP (found.devices format: "ip => mac")
declare -A found_mac2ip
while read -r ip _ mac; do
    [[ -n $mac ]] && found_mac2ip[$mac]=$ip
done <'found.devices'

# walk the known targets; prefer the scanned IP when the MAC was found
while read -r ip mac domain source destination; do
    ip=${found_mac2ip[$mac]-$ip}
    # ... DO BACKUP FOR $ip ...
done <'known.targets'
It first sets up a Bash associative array mapping the found MAC addresses to IP addresses.
It then loops through the known.targets file; for each MAC address it uses the IP address from found.devices if the MAC address is listed there, otherwise it uses the IP address read from known.targets. The fallback is handled by the ${found_mac2ip[$mac]-$ip} expansion, which substitutes $ip whenever the array has no entry for $mac.
It's also possible to extract the "found" MAC and IP address information directly from the nmap output instead of from a found.devices file. This alternative version of the code does that:
declare -A found_mac2ip
nmap_output=$(sudo nmap -n -sP 192.168.2.0/24)
while IFS=$' \t\n()' read -r f1 f2 f3 f4 f5 _; do
    # "Nmap scan report for <ip>" lines carry the IP ...
    [[ "$f1 $f2 $f3 $f4" == 'Nmap scan report for' ]] && ip=$f5
    # ... and the "MAC Address: <mac> (<vendor>)" line that follows carries the MAC
    [[ "$f1 $f2" == 'MAC Address:' ]] && found_mac2ip[$f3]=$ip
done <<<"$nmap_output"

while read -r ip mac domain source destination; do
    ip=${found_mac2ip[$mac]-$ip}
    # ... DO BACKUP FOR $ip ...
done <'known.targets'
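To sanity-check either version before wiring in the real backup, you can replace the backup comment with echo "$ip $domain $source $destination"; with the sample files from the question, that prints:
192.168.2.190 domain source destination
192.168.2.102 domain source destination
192.168.2.200 domain source destination
192.168.2.104 domain source destination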
UPDATE: per a comment from the OP, dropping the assumption (and associated code) about keeping an IP address that shows up in found.devices without a match in known.targets (i.e., this cannot happen)
Assumptions:
start with a list of IP/MAC addresses from known.targets
if a MAC address also shows up in found.devices then the IP address from found.devices takes precedence
Adding a standalone entry to each file:
$ cat known.targets
192.168.2.101 D4:XB:E4:FE:C6:6F domain source destination
192.168.2.102 B4:QB:Y4:FE:C6:6F domain source destination
192.168.2.103 B4:FB:P4:ZE:C6:6F domain source destination
192.168.2.104 B4:FB:E4:BE:P6:6F domain source destination
111.111.111.111 AA:BB:CC:DD:EE:FF domain source destination
$ cat found.devices
192.168.2.190 => D4:XB:E4:FE:C6:6F
192.168.2.102 => B4:QB:Y4:FE:C6:6F
192.168.2.200 => B4:FB:P4:ZE:C6:6F
192.168.2.104 => B4:FB:E4:BE:P6:6F
222.222.222.222 => FF:EE:CC:BB:AA:11
One awk idea:
$ cat ip.awk
FNR==NR  { ip[$2]=$1; dsd[$2]=$3 FS $4 FS $5; next }   # known.targets: save IP and trailing fields, keyed by MAC
$3 in ip { ip[$3]=$1 }                                 # found.devices: a matching MAC overrides the stored IP
END      { for (mac in ip) print ip[mac], dsd[mac] }
Running against our files (note that for (mac in ip) visits the array in no particular order, which is why the output order is arbitrary):
$ awk -f ip.awk known.targets found.devices
192.168.2.200 domain source destination
192.168.2.190 domain source destination
192.168.2.104 domain source destination
111.111.111.111 domain source destination
192.168.2.102 domain source destination
Feeding this to a while loop:
while read -r ip hostname source destination
do
echo "${ip} : ${hostname} : ${source} : ${destination}"
done < <(awk -f ip.awk known.targets found.devices)
This generates:
192.168.2.200 : domain : source : destination
192.168.2.190 : domain : source : destination
192.168.2.104 : domain : source : destination
111.111.111.111 : domain : source : destination
192.168.2.102 : domain : source : destination
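As an aside on the OP's wish to drop the => separator: the odd line-joining in the original command comes from printf $5, which emits the IP without a newline so that the following print can complete the line; change either half carelessly and the fields shift onto separate lines. A sketch that instead buffers the IP in a variable and prints both fields space-separated:
sudo nmap -n -sP 192.168.2.0/24 |
awk '/Nmap scan report for/ {ip=$5} /MAC Address:/ {print ip, $3}' |
sort > found.devices
With that two-field format, note that the ip.awk script above keys on $3, so it would need $2 in ip { ip[$2]=$1 } instead.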
I have a server hosting my files, which I can list with the following command:
xrdfs servername ls path/to/file
Similarly, I can copy a file using the following command:
xrdcp server/path/to/file .
For some reason the server doesn't support copying an entire folder (even with the -r option). So I am trying to pipeline these two commands in such a way that xrdfs lists the files and xrdcp copies them to my destination. I tried the following line:
xrdfs servername ls path/to/file | xrdcp server/$() .
I get the following message:
Prepare: [ERROR] Invalid arguments
This is not very enlightening. Can somebody help with this?
OK, I found an answer and I am posting it here for reference:
xrdfs servername ls path/to/file | while read -r out; do xrdcp server$out .; done
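If the listed paths never contain whitespace, the same thing can be written with xargs (a sketch under that assumption, reusing the OP's placeholder server prefix):
xrdfs servername ls path/to/file | xargs -I{} xrdcp "server{}" .
xargs -I{} runs one xrdcp per line of output, substituting the listed path for {}.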
I am trying to feed some NetFlow data into Kafka. I have some netflow.pcap files which I read like
tcpdump -r netflow.pcap and get output such as:
14:48:40.823468 IP abts-kk-static-242.4.166.122.airtelbroadband.in.35467 > abts-kk-static-126.96.166.122.airtelbroadband.in.9500: UDP, length 1416
14:48:40.824216 IP abts-kk-static-242.4.166.122.airtelbroadband.in.35467 > abts-kk-static-126.96.166.122.airtelbroadband.in.9500: UDP, length 1416
...
In the official docs they mention the traditional way: start a Kafka producer, start a Kafka consumer, and in the producer's terminal type some data, which is then shown in the consumer. Good, that works.
Here they show how to feed a file to a Kafka producer. Mind you, just one single file, not multiple files.
The question is:
How can I feed the output of a shell script into a Kafka broker?
For example, the shell script is:
#!/bin/bash
FILES=/path/to/*
for f in $FILES
do
  tcpdump -r "$f"    # read each capture file in turn
done
I can't find any documentation or article where they mention how to do this. Any idea? Thanks!
Well, based on the link you gave on how to use the shell Kafka producer with an input file, you can do the same with your output: redirect it to a file and then point the producer at that file.
Note that I used >> in order to append to the file rather than overwrite it.
For example:
#!/bin/bash
FILES=/path/to/*
for f in $FILES
do
  tcpdump -r "$f" >> /tmp/tcpdump_output.txt
done
kafka-console-producer.sh --broker-list localhost:9092 --topic my_topic \
  --new-producer < /tmp/tcpdump_output.txt
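Alternatively, since the console producer reads messages from stdin, you can skip the temporary file and pipe the loop's output straight in (a sketch along the same lines as the script above):
#!/bin/bash
for f in /path/to/*
do
  tcpdump -r "$f"    # print each capture as text
done | kafka-console-producer.sh --broker-list localhost:9092 --topic my_topic
Each line of tcpdump output becomes one Kafka message.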
I have 10 IPs listed in a CSV or text file. I need to read one line at a time, get the IP, and set it on the eth0 interface of the server. I found the below script, which shows me more or less how to create new network settings, but I do not know how I can read one line from the CSV and put it in a variable to use with that script. I would be greatly thankful if you could give me some hints. Thanks.
https://wiki.gogrid.com/index.php/Customer:Automatically_convert_your_Linux_server_to_a_static_IP
Maybe something like this:
#!/bin/sh
lineNumber=1
# print the first $lineNumber lines, then keep only the last of them
ip=$(head -n "$lineNumber" test.csv | tail -n 1)
echo "$ip"
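To walk through all of the lines rather than a single fixed one, here is a minimal sketch (the eth0 device and the /24 prefix are assumptions; the ip command is commented out so the loop can be dry-run first):
#!/bin/sh
while IFS=, read -r ip _
do
    echo "would configure eth0 with: $ip"
    # ip addr add "$ip/24" dev eth0    # assumed /24 prefix; needs root
done < test.csv
Setting IFS=, makes this work whether the file holds one IP per line or the IP in the first CSV column.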
I have a text file with around 3 million URLs of sites I want to block.
I am trying to ping them one by one (yes, I know it is going to take some time).
I have a script (yes, I am a bit slow in Bash) which reads the lines one at a time from the text file.
Obviously I cannot print the text file here. It was created with Python some time ago.
The problem is that ping returns "unknown host" for every entry. If I make a smaller file by hand using the same entries, the script works. I thought it might be a whitespace or end-of-line issue, so I tried addressing that in the script. What could the issue possibly be?
#!/bin/bash
while IFS= read -r line
do
    li=$(echo "$line" | tr -d '\n')
    li2=$(echo "$li" | tr -d ' ')
    if [ ${#li2} -lt 2 ]
    then
        continue
    fi
    ping -c 2 -- "$li2" >/dev/null 2>&1
    if [ $? -gt 0 ]
    then
        echo 'bad'
    else
        echo 'good'
    fi
done <'temp_file.txt'
Does the file contain URLs or hostnames?
If it contains URLs, you must extract the hostname from each URL before pinging:
# for proto://host/path, the third /-separated field is the host
hostname=$(echo "$li2" | cut -d/ -f3)
ping -c 2 -- "$hostname"
Ping is used to ping hosts. If you have URLs of websites, it will not work. Check that your file contains hosts, for example www.google.com or an IP address, and not full website URLs. If you want to check actual URLs, use a tool like wget together with grep or awk to look for errors like 404. Last but not least, people who are security conscious will sometimes block pinging from the outside, so take note.
Check if the file contains Windows-style \r\n line endings: head file | od -c
If so, to fix it: dos2unix filename
I wouldn't use ping for this. It can easily be blocked, and it's not the best way to check either that a hostname resolves or that a server presents web pages.
If you just want to find the corresponding IP, use host:
$ host www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com has address 209.85.149.106
www.l.google.com has address 209.85.149.147
www.l.google.com has address 209.85.149.99
www.l.google.com has address 209.85.149.103
www.l.google.com has address 209.85.149.104
www.l.google.com has address 209.85.149.105
As you can see, you get all the IPs registered to a host. (Note that this requires you to parse the hostname from your URLs!)
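To pull just the first address out of that output in a script, a small sketch (it assumes the "has address" wording shown above):
ip=$(host "$hostname" | awk '/has address/ {print $4; exit}')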
If you want to see whether a URL points at a web server, use wget:
wget --spider "$url"
The --spider flag makes wget check that the page exists without saving it. You can look at the return code, or add the -S flag (which prints the HTTP headers returned).
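Putting that together with a read loop like the OP's, a minimal sketch (assuming one URL per line in temp_file.txt):
#!/bin/bash
while IFS= read -r url
do
    [ -z "$url" ] && continue       # skip blank lines
    if wget -q --spider "$url"
    then
        echo "good: $url"
    else
        echo "bad: $url"
    fi
done < temp_file.txt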