I have apps A and B communicating over a Unix socket. What I need is to tap this socket and listen in on (and possibly inject into) the communication for evaluation.
socat -t100 -x -v UNIX-LISTEN:/tmp/.sock,mode=777,reuseaddr,fork UNIX-CONNECT:/tmp/.sock_original
Works fine for dumping it to the console, but how can I add something like UDP-SENDTO? Is that possible at all?
Thanks.
OK, I found a way. It's not optimal, but it works:
socat -t100 -x -v UNIX-LISTEN:/tmp/.sock,mode=777,reuseaddr,fork UNIX-CONNECT:/tmp/.sock_original |
awk '{ if (lines > 0) { print; --lines; }} /^>/ { lines = 1}' | while read -r line; do echo $line > /dev/udp/localhost/6543; done
It filters out the header line and sends on only the packets going in one direction (those following a line matching /^>/).
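To see what the awk filter does, here is a toy run on a few lines shaped roughly like socat's -x -v dump (the sample lines are made up, not real socat output):

```shell
# The filter prints exactly one line after each line beginning with ">",
# i.e. the hex dump that follows each outgoing-packet header.
printf '%s\n' '> 2012/01/01 length=4' ' 74 65 73 74  test' '< reply' ' 6f 6b  ok' \
  | awk '{ if (lines > 0) { print; --lines; }} /^>/ { lines = 1 }'
# → " 74 65 73 74  test"
```

Only the hex line following the ">" header survives; traffic after "<" headers is dropped.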
I'm trying to create a bash function that automatically updates a cli tool. So far I've managed to get this:
update_cli_tool () {
# the following will automatically be redirected to .../releases/tag/vX.X.X
# there I get the location from the header, and remove it to get the full url
latest_release_url=$(curl -i https://github.com/.../releases/latest | grep location: | awk -F 'location: ' '{print $2}')
# to get the version, I get the 8th element from the url .../releases/tag/vX.X.X
latest_release_version=$(echo "$latest_release_url" | awk -F '/' '{print $8}')
# this is where it breaks
# the first part just replaces the "tag" with "download" in the url
full_url="${latest_release_url/tag/download}/.../${latest_release_version}.zip"
echo "$full_url" # or curl $full_url, also fails
}
Expected output: https://github.com/.../download/vX.X.X/vX.X.X.zip
Actual output: -.zip-.../.../releases/download/vX.X.X
When I just echo "latest_release_url: $latest_release_url" (same for version), it prints it correctly, but not when I use the above mentioned flow. When I hardcode the ..._url and ..._version, the full_url works fine. So my guess is I have to somehow capture the output and convert it to a string? Or perhaps concatenate it another way?
Note: I've also used ..._url=`curl -i ...` (with backticks instead of $(...)), but this gave me the same results.
The curl output will use \r\n line endings. The stray carriage return in the url variable is tripping you up. Observe it with printf '%q\n' "$latest_release_url"
Try this:
latest_release_url=$(
curl --silent -i https://github.com/.../releases/latest \
| awk -v RS='\r\n' '$1 == "location:" {print $2}'
)
Then the rest of the script should look right.
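To see why the stray carriage return breaks the concatenation, here is a minimal reproduction (the URL is a made-up placeholder):

```shell
# A trailing carriage return survives inside the variable and makes any
# string built from it look scrambled when printed.
url=$'https://example.com/releases/tag/v1.2.3\r'
clean=${url%$'\r'}            # strip a trailing CR with parameter expansion
echo "${clean}/v1.2.3.zip"
# → https://example.com/releases/tag/v1.2.3/v1.2.3.zip
```

The `${var%pattern}` expansion is a quick fix if you don't want to touch the awk invocation.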
I am searching for an event field in a file, but my script gives the wrong output. I am looking for the gpio-keys event among the input devices, and I have written a script for it, but I'm unable to print anything to the output file (in my case I am writing to a button device file, and it is always empty). Please help me figure out where I'm going wrong in the script.
Bash script:
#!/bin/bash
if grep -q "gpio-keys" /proc/bus/input/devices ; then
EVENT=$(cat /proc/bus/input/devices | grep "Handlers=kbd")
foo= `echo $EVENT | awk '{for(i=1;i<=NF;i++) if($i=="evbug")printf($(i-1))}'`
#foo=${EVENT:(-7)}
echo -n $foo > /home/ubuntu/Setups/buttonDevice
fi
I am still not able to get anything in buttonDevice.
That's no wonder, since the input line
H: Handlers=kbd event0
contains no evbug anywhere for your awk script to find.
In my case it is event0, but it may vary; it also depends on what the kernel assigns.
If it is event0 or similar, then it's nonsensical to look for evbug. Change the statement
if($i=="evbug")printf($(i-1))
to
if ($i~"event") print $i
(using regular expression match).
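As a quick sanity check, the regular-expression match can be tried on a sample handler line (a typical stand-in, not taken from your machine):

```shell
# $i ~ "event" matches any field containing "event" as a substring
echo 'H: Handlers=kbd event0' \
  | awk '{for(i=1;i<=NF;i++) if ($i ~ "event") print $i}'
# → event0
```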
I have rewritten my script as above, but through it I got two events (event0, event3). … My input devices are many, but I want only the gpio-keys event.
Aha - in order to take only the handler line from the gpio-keys section, you can use sed with an address range:
EVENT=`sed -n '/gpio-keys/,/Handlers=kbd/s/.*Handlers=kbd //p' </proc/bus/input/devices`
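Exercised on a shortened stand-in for /proc/bus/input/devices (a made-up sample; the real file has more fields), the address range picks out only the gpio-keys handler:

```shell
# The range /gpio-keys/,/Handlers=kbd/ is active only from the gpio-keys
# name line to its handler line, so the substitution prints just that event.
printf '%s\n' \
  'N: Name="gpio-keys"' \
  'H: Handlers=kbd event0' \
  'N: Name="usb-keyboard"' \
  'H: Handlers=kbd event3' \
  | sed -n '/gpio-keys/,/Handlers=kbd/s/.*Handlers=kbd //p'
# → event0
```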
Prakash, I don't have access to your Google Drive. But I just want to give you some suggestions:
foo= `echo $EVENT | awk '{for(i=1;i<=NF;i++) if($i=="evbug")printf($(i-1))}'`
This is the old style now. Better to use it like below:
foo=$(echo $EVENT | awk '{for(i=1;i<=NF;i++) if($i=="evbug")printf($(i-1))}')
Also, always use double quotes "" when echoing a variable. See below:
echo -n "$foo" > /home/ubuntu/Setups/buttonDevice
Try the code below; it should work for you:
#!/bin/bash
if grep "gpio-keys" /proc/bus/input/devices >/dev/null ; then
cat /proc/bus/input/devices | grep "Handlers=kbd" | awk '{for(i=1;i<=NF;i++){ if($i ~ /eve/){printf "%s \n", $i} } }' > /home/ubuntu/Setups/buttonDevice
fi
The output in buttonDevice would be
event0
event1
.
.
.
.
event100
I wrote the script below to parse a text file, effectively removing line returns. It takes input that looks like this:
TCP 0.0.0.0:135 SVR LISTENING 776
RpcSs
And returns this to a new text document:
TCP 0.0.0.0:135 SVR LISTENING 776 RpcSs
Some entries span more than two lines, so I was not able to write a script that simply removes the line return from every other line; instead I came up with the approach below. It worked fine for small collects, but a 7 MB collect resulted in my computer running out of memory, and it took quite a bit of time before it failed. I'm curious why it ran out of memory, and I'm hoping someone could educate me on a better way to do this.
#!/bin/bash
#
# VARS
writeOuput=""
#
while read line
do
curLine=$line #grab current line from document
varWord=$(echo $curLine | awk '{print $1}') #grab first word from each line
if [ "$varWord" == "TCP" ] || [ "$varWord" == "UDP" ]; then
#echo "$curLine" >> results.txt
unset writeOutput
writeOutput=$curLine
elif [ "$varWord" == "Active" ]; then #new session
printf "\n" >> results1.txt
printf "New Session" >> results1.txt
printf "\n" >> results1.txt
else
writeOutput+=" $curLine"
#echo "$writeOutput\n"
printf "$writeOutput\n" >> results1.txt
#sed -e '"$index"s/$/"$curLine"'
fi
done < $1
Consider replacing the line with the awk call with this line:
varWord=${curLine%% *} #grab first word from each line
This saves the fork that happens in each iteration by using Bash-internal functionality only and should make your program run several times faster. See also that other guy's comment linking to this answer for an explanation.
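For illustration, `${curLine%% *}` removes the longest suffix starting at the first space, leaving just the first word:

```shell
# Pure parameter expansion: no subshell, no awk fork per line
curLine='TCP 0.0.0.0:135 SVR LISTENING 776'
varWord=${curLine%% *}   # everything from the first space on is removed
echo "$varWord"
# → TCP
```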
As others have noted, the main bottleneck in your script is probably the forking where you pass each line through its own awk instance.
I have created an awk script which I hope does the same as your bash script, and I suspect it should run faster. Initially I just thought about replacing newlines with spaces, and manually adding newlines in front of every TCP or UDP, like this:
awk '
BEGIN {ORS=" "};
$1~/(TCP|UDP)/ {printf("\n")};
{print};
END {printf("\n")}
' <file>
But your script removes the 'Active' lines from the output and replaces them with a "New Session" marker surrounded by blank lines. You could, of course, pipe this through a second awk command:
awk '/Active/ {gsub(/Active /, ""); print("\nNew Session\n")}; {print}'
The following awk script is a bit closer to what you did with bash, but it should still be considerably faster:
$ cat join.awk
$1~/Active/ {print("\nNew Session\n"); next}
$1~/(TCP|UDP)/ {if (output) print output; output = ""}
{if (output) output = output " " $0; else output = $0}
END {print output}
$ awk -f join.awk <file>
First, it checks whether the line begins with the word "Active"; if it does, it prints a blank line, "New Session", and another blank line, then goes on to the next input line.
Otherwise it checks for TCP or UDP as the first word. If it finds one, it prints what has accumulated in output (provided there is something in the variable), and clears it.
It then appends whatever it finds on the line to output.
At the end, it prints what has accumulated since the last TCP or UDP.
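Running the script above (inlined here) on the two sample lines from the question produces the joined record:

```shell
# join.awk applied to the question's sample input
printf '%s\n' 'TCP 0.0.0.0:135 SVR LISTENING 776' 'RpcSs' \
  | awk '
      $1~/Active/ {print("\nNew Session\n"); next}
      $1~/(TCP|UDP)/ {if (output) print output; output = ""}
      {if (output) output = output " " $0; else output = $0}
      END {print output}'
# → TCP 0.0.0.0:135 SVR LISTENING 776 RpcSs
```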
I'm working on a disk space monitor script in OSX and am struggling to first generate a list of volumes. I need this list to be generated dynamically as it changes over time; having this work properly would also make the script portable.
I'm using the following script snippet:
#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/sbin export PATH
FS=$(df -l | grep -v Mounted| awk ' { print $6 } ')
while IFS= read -r line
do
echo $line
done < "$FS"
Which generates:
test.sh: line 9: /
/Volumes/One-TB
/Volumes/pfile-archive-offsite-three-CLONE
/Volumes/ERDF-Files-Offsite-Backup
/Volumes/ESXF-Files-Offsite-Backup
/Volumes/ACON-Files-Offsite-Backup
/Volumes/LRDF-Files-Offsite-Backup
/Volumes/EPLK-Files-Offsite-Backup: No such file or directory
I need the script to generate output like this:
/
/Volumes/One-TB
/Volumes/pfile-archive-offsite-three-CLONE
/Volumes/ERDF-Files-Offsite-Backup
/Volumes/ESXF-Files-Offsite-Backup
/Volumes/ACON-Files-Offsite-Backup
/Volumes/LRDF-Files-Offsite-Backup
/Volumes/EPLK-Files-Offsite-Backup
Ideas, suggestions? Alternate or better methods of generating a list of mounted volumes are also welcome.
Thanks!
Dan
< is for reading from a file. You are not reading from a file but from a bash variable. So try using <<< instead of < on the last line.
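A minimal illustration of the difference (the volume list is faked here, standing in for the df output):

```shell
FS=$(printf '%s\n' / /Volumes/One-TB)   # stand-in for the df|awk output
# done < "$FS" would treat the whole string as a *filename*;
# a here-string (<<<) feeds the string itself to the loop:
while IFS= read -r line; do
  echo "$line"
done <<< "$FS"
# → /
# → /Volumes/One-TB
```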
Alternatively, you don't need to store the results in a variable, then read from the variable; you can directly read from the output of the pipeline, like this (I have created a function for neatness):
get_data() {
df -l | grep -v Mounted| awk ' { print $6 } '
}
get_data | while IFS= read -r line
do
echo $line
done
Finally, the loop doesn't do anything useful, so you can just get rid of it:
df -l | grep -v Mounted| awk ' { print $6 } '
1. File
A file /etc/ssh/ipblock contains lines that look like this:
2012-01-01 12:00 192.0.2.201
2012-01-01 14:15 198.51.100.123
2012-02-15 09:45 192.0.2.15
2012-03-12 21:45 192.0.2.14
2012-04-25 00:15 203.0.113.243
2. Command
The output of the command iptables -nL somechain looks like this:
Chain somechain (2 references)
target prot opt source destination
DROP all -- 172.18.1.4 anywhere
DROP all -- 198.51.100.123 anywhere
DROP all -- 172.20.4.16 anywhere
DROP all -- 192.0.2.125 anywhere
DROP all -- 172.21.1.2 anywhere
3. The task at hand
First I would like to get a list A of IP addresses that are existent in the iptables chain (field 4) but not in the file.
Then I would like to get a list B of IP addresses that are existent in the file but not in the iptables chain.
IP addresses in list A should then be appended to the file in the same style (date, time, IP)
IP addresses in list B should then be added to the iptables chain with
iptables -A somechain -d IP -j DROP
4. Background
I was hoping to expand my awk-fu so I have been trying to get this to work with an awk script that can be executed without arguments. But I failed.
I know I can get the output from commands with the getline command so I was able to get the time and date that way. And I also know that one can read a file using getline foo < file. But I have only had many failed attempts to combine this all into a working awk script.
I realise that I could get this to work with an other programming language or a shell script. But can this be done with an awk script that can be ran without arguments?
I think this is almost exactly what you were looking for. It does the job, all in one file, and the code is, I think, pretty much self-explanatory...
Easily adaptable, extendable...
USAGE:
./foo.awk CHAIN ip.file
foo.awk:
#!/usr/bin/awk -f
BEGIN {
CHAIN= ARGV[1]
IPBLOCKFILE = ARGV[2]
while((getline < IPBLOCKFILE) > 0) {
IPBLOCK[$3] = 1
}
command = "iptables -nL " CHAIN
command |getline
command |getline
while((command |getline) > 0) {
IPTABLES[$4] = 1
}
close(command)
print "not in IPBLOCK (will be appended):"
command = "date +'%Y-%m-%d %H:%M'"
command |getline DATE
close(command)
for(ip in IPTABLES) {
if(!IPBLOCK[ip]) {
print ip
print DATE,ip >> IPBLOCKFILE
}
}
print "not in IPTABLES (will be appended):"
# command = "echo iptables -A " CHAIN " -s " //use for testing
command = "iptables -A " CHAIN " -s "
for(ip in IPBLOCK) {
if(!IPTABLES[ip]) {
print ip
system(command ip " -j DROP")
}
}
exit
}
Doing 1&3:
comm -13 <(awk '{print $3}' /etc/ssh/ipblock | sort) <(iptables -nL somechain | awk '/\./{print $4}' | sort) | xargs -n 1 echo `date '+%Y-%m-%d %H:%M'` >> /etc/ssh/ipblock
Doing 2&4:
comm -23 <(awk '{print $3}' /etc/ssh/ipblock | sort) <(iptables -nL somechain | awk '/\./{print $4}' | sort) | xargs -I IP iptables -A somechain -d IP -j DROP
The command is constructed of the following building blocks:
Bash's process substitution feature: it is somewhat similar to a pipe, but is typically used when a program expects two or more input files among its arguments. Bash creates a FIFO which basically "contains" the output of the given command; in our case that output is a list of IP addresses.
The output of both awk scripts is then passed to the comm program, and both scripts are pretty simple: they just print IP addresses. In the first case all IPs are in the third column (hence $3); in the second case they are in the fourth column, but the column header (the string "destination") has to be filtered out, so the simple regex /\./ is used: it discards any line that doesn't contain a dot.
comm requires both inputs to be sorted, so the awk output is passed through sort.
The comm program then receives both lists of IP addresses. When no options are given, it prints three columns: lines unique to FILE1, lines unique to FILE2, and lines common to both. Passing -23 yields only the lines unique to FILE1; similarly, passing -13 yields only the lines unique to FILE2.
xargs is basically a "foreach" loop: it executes the given command once per input line. In one command that is the desired iptables invocation; the other isn't complicated either: it just prefixes each address with date's output of the current time in the proper format before appending it to the file.
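A toy run of comm with process substitution, using made-up addresses, shows the column selection at work:

```shell
# -13 keeps lines unique to the second list; -23 would keep lines
# unique to the first. Both inputs must be sorted.
comm -13 <(printf '%s\n' 192.0.2.1 198.51.100.2 | sort) \
         <(printf '%s\n' 198.51.100.2 203.0.113.9 | sort)
# → 203.0.113.9
```

Here 198.51.100.2 is common to both lists and is suppressed, so only the address unique to the second list comes out.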