How to send a named pipe to all connected clients (Bash)

I have a named pipe to which I write log information from several scripts. I need to make a TCP server that sends this information to connected clients. For a single client, this worked fine:
tail -f name_of_pipe | nc -lk $tcp_port
Is there an effective way to send the content to multiple clients? I don't think netcat supports multiple clients. I have found a utility named tcpserver from ucspi-tcp, which spawns a new process for each client, but this won't do what I want: each line is read from the named pipe and delivered to just one, effectively random, client.
In fact, I don't need the named pipe to act like a FIFO; I can throw away everything from before a client connects and send only real-time data.

Something like this?
$ echo "Bye bye" | tee >( cat ) | tee >( cat ) | cat
Bye bye
Bye bye
Bye bye
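Applied to the original problem, a minimal sketch of the same tee idea (the three ports are placeholders and each client connects to its own port; depending on your nc variant, a listener with no connected client may block the whole pipeline):
tail -f name_of_pipe \
  | tee >(nc -lk 9001) >(nc -lk 9002) \
  | nc -lk 9003    # one listener per expected client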

Related

"WRITE" command works manually but not via script

My co-workers and I use the screen program on our Linux JUMP server to utilize as much screen space as possible. With that, we have multiple screens set up so that messages can go to one while we do work in another.
To that end, I have a script that is used to verify network device connectivity and that sends messages to my co-workers regardless of whether anything is down or not.
The script initially references a file with their usernames in it, then grabs the highest PTS number, which denotes the last screen session they activated, and puts it into the proper format in an external file, like so:
cat ./netops_techs | while read -r line; do
  temp=$(echo $line)
  temp2=$(who | grep $temp | sed 's/[^0-9]*//g' | sort -n -r | head -n1)
  if who | grep $temp; then
    echo "$temp pts/$temp2" >> ./tech_send
  fi
done
Once that is done, the script scans our network every 5 minutes and sends updates to the folks in the file "./tech_send", like so:
Techs=$(cat ./tech_send)
if [ ! -f ./Failed.log ]; then
  echo -e "\nNo network devices down at this time."
  for d in $Techs
  do
    cat ./no-down | write $d
  done
else
  # Writes downed buildings locally to my terminal
  echo -e "\nThe following devices are currently down:"
  echo ""
  echo "IP Hostname Model Building Room Rack Users Affected" > temp_down.log
  grep -f <(sed 's/.*/\^&\\>/' Failed.log) Asset-Location >> temp_down.log
  cat temp_down.log | column -t > Down.log
  cat Down.log
  # This will send the downed buildings to the rest of NetOps
  for d in $Techs
  do
    cat Down.log | write $d
  done
fi
The issue is that when they are working in their main screen section, the messages pop up in that active screen instead of the inactive one. If I send them a message manually, such as:
write jsmith pts/25
Test Test
and then press CTRL+D, it works fine even if they are in a different session. Via the script, though, it gives errors stating:
write: jsmith is logged in more than once; writing to pts/23
write: jsmith/pts/25 is not logged in
I have verified the "tech_send" file and it has the correct format for them:
jsmith pts/25
Would appreciate any insight on why this is happening.
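One likely culprit (an assumption, not something confirmed above): the unquoted $Techs is word-split, so write is invoked once with just the username and once with just the pts entry, which matches the two errors shown. A minimal sketch that reads the two columns of tech_send together and passes them as separate arguments:
while read -r user tty; do
  cat Down.log | write "$user" "$tty"   # pass user and tty as two arguments
done < ./tech_send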

How to delete a connection by type with nmcli?

I want to have a bash script that can delete all my NetworkManager connections of type gsm with nmcli.
What is the best approach for this?
This is actually a trickier question than it seems on the surface, because NetworkManager allows connection names with spaces in them, which makes programmatic parsing of the output of nmcli connection show for connection names a bit awkward. I think the best option for scripting is to rely on the UUID, since it is consistently a 36-character group of hexadecimal characters and dashes, which means we can pull it reliably with a regular expression. For example, you could get a list of the UUIDs for gsm connections with the following:
$ nmcli connection show | grep gsm | grep -E -o '[0-9a-f\-]{36}'
cc823da6-d4e1-4757-a37a-aaaaaaaaa
etc
So you could grab the UUIDs and then delete based on the UUID:
GSM_UUIDS=$(nmcli connection show | grep gsm | grep -E -o '[0-9a-f\-]{36}')
while IFS= read -r UUID; do echo nmcli connection delete $UUID; done <<< "$GSM_UUIDS"
Run it with the echo first to make sure you're getting the result you expect; then remove the echo and you should be in business. I ran it locally with some dummy GSM connections and it seemed to work the way you would want it to:
GSM_UUIDS=$(nmcli connection show | grep gsm | grep -E -o '[0-9a-f\-]{36}')
while IFS= read -r UUID; do nmcli connection delete $UUID; done <<< "$GSM_UUIDS"
Connection 'gsm' (cd311376-d7ab-4891-ba73-e4e8a3fc6614) successfully deleted.
Connection 'gsm-1' (54171181-5c37-4224-baf5-9eb36458f773) successfully deleted.
Alternatively, as a one-liner using nmcli's terse output:
nmcli con del $(nmcli -t -f UUID,TYPE con | awk -F":" '{if ($2 == "gsm") print $1}')
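The same terse-output idea written as a loop, so each deletion is reported on its own line (a sketch along the lines of the commands above, not a tested script):
nmcli -t -f UUID,TYPE connection show \
  | awk -F: '$2 == "gsm" {print $1}' \
  | while IFS= read -r uuid; do
      nmcli connection delete "$uuid"
    done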

Tail multiple remote files and pipe the result

I'm looking for a way to tail multiple log files on multiple remote servers and then pipe the merged result to another program.
Right now I'm using multitail, but it does not do exactly what I need, or maybe I'm doing something wrong!
I would like to be able to send the merge of all the log files to another program, for example jq. Right now, if I do:
multitail --mergeall -l 'ssh server1 "tail -f /path/to/log"' -l 'ssh server2 "tail -f /path/to/log"' -l 'ssh server3 "tail -f /path/to/log"' | jq .
I get this:
parse error: Invalid numeric literal at line 1, column 2
But more generally, I would like to feed the output of this to another program I use to parse and display logs :-)
Thanks everybody!
One way to accomplish this would be to direct all your outputs into a named pipe and then deal with the output of that named pipe.
First, create your named pipe: $ mknod MYFIFO p
For each location you want to consolidate lines from, run $ tail -f logfile > MYFIFO (note that the tail -f can be run through an ssh session).
Then have another process take the data out of the named pipe and handle it appropriately. An ugly solution could be:
$ tail -f MYFIFO | jq .
Season to taste.
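Putting the pieces together for the three servers in the question, a minimal sketch (mkfifo is equivalent to mknod MYFIFO p here; the log path is the placeholder from the question):
mkfifo MYFIFO
ssh server1 'tail -f /path/to/log' > MYFIFO &
ssh server2 'tail -f /path/to/log' > MYFIFO &
ssh server3 'tail -f /path/to/log' > MYFIFO &
jq . < MYFIFO    # or whatever program should consume the merged stream
Lines from the different servers are interleaved as they arrive, so treat this as a starting point rather than a guaranteed clean merge.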

Greping a tcpdump with tshark

I'm trying to program a little "dirty" website filter, e.g. for when a user wants to visit an erotic website (based on the domain name).
So basically, I have something like:
#!/bin/bash
sudo tshark -i any tcp port 80 or tcp port 443 -V | grep "Host.*keyword"
It works great, but now I need to take some action after I find something (iptables and DROPping packets...). The problem is that the capture is still running. If I had a complete file of data, what I'm trying to achieve would be easy to solve.
In pseudocode, I'd like to have something like:
if (tshark and grep found something)
    iptables - drop packets
    sleep 600 # a punishment for the user
    iptables - accept the packets I was dropping
else
    keep looking for a match in the capture that's still running
Thanks for your help.
Maybe you could try something like the following:
tshark OPTIONS 2>&1 | grep --line-buffered PATTERN | while read line; do
    # actions for when the pattern is found; the matched input is in $line
    break
done
The 2>&1 is important so that when PATTERN is matched and the while loop terminates, tshark has nowhere to write to and terminates because of the broken pipe.
If you want to keep tshark running and analyze future output, just remove the break. This way, the while loop never terminates and it keeps reading the filtered output from tshark.
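Applied to the filter above, a hedged sketch: the grep pattern is the one from the question, while the iptables rules are crude placeholders that drop all outbound web traffic rather than targeting a specific user, so adapt them before using this for real:
sudo tshark -i any tcp port 80 or tcp port 443 -V 2>&1 \
  | grep --line-buffered "Host.*keyword" \
  | while read -r line; do
      # a match is in $line: block web traffic, wait, then unblock
      sudo iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j DROP
      sleep 600    # the punishment
      sudo iptables -D OUTPUT -p tcp -m multiport --dports 80,443 -j DROP
    done
Without a break, the loop keeps reading, so the next match triggers the same block/unblock cycle.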

Send text file, line by line, with netcat

I'm trying to send a file, line by line, with the following commands:
nc host port < textfile
cat textfile | nc host port
I've tried with tail and head, but with the same result: the entire file is sent in one shot.
The server is listening with a specific daemon to receive log data.
I'd like to send and receive the lines one by one, not the whole file in a single shot.
How can I do that?
Do you HAVE TO use netcat?
cat textfile > /dev/tcp/HOST/PORT
can also serve your purpose, at least with bash.
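If you go the /dev/tcp route and still want to send line by line over a single connection, a minimal bash-only sketch (HOST and PORT are placeholders; the sleep simply paces the lines, much like the slowcat shown further down):
exec 3>/dev/tcp/HOST/PORT      # open one TCP connection on file descriptor 3
while IFS= read -r line; do
    printf '%s\n' "$line" >&3  # write one line at a time
    sleep 0.05
done < textfile
exec 3>&-                      # close the connection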
I'd like to send and receive the lines one by one, not the whole file in a single shot.
Try
while read x; do echo "$x" | nc host port; done < textfile
The OP was unclear on whether they needed a new connection for each line. But based on the OP's comment here, I think their need is different from mine. However, Google sends people with my need here, so this is where I will place this alternative.
I have a need to send a file line by line over a single connection. Basically, it's a "slow" cat. (This will be a common need for many "conversational" protocols.)
If I try to cat an email message to nc, I get an error because the server can't have a "conversation" with me.
$ cat email_msg.txt | nc localhost 25
554 SMTP synchronization error
Now if I insert a slowcat into the pipe, I get the email.
$ function slowcat(){ while read; do sleep .05; echo "$REPLY"; done; }
$ cat email_msg.txt | slowcat | nc localhost 25
220 et3 ESMTP Exim 4.89 Fri, 27 Oct 2017 06:18:14 +0000
250 et3 Hello localhost [::1]
250 OK
250 Accepted
354 Enter message, ending with "." on a line by itself
250 OK id=1e7xyA-0000m6-VR
221 et3 closing connection
The email_msg.txt looks like this:
$ cat email_msg.txt
HELO localhost
MAIL FROM:<system@example.com>
RCPT TO:<bbronosky@example.com>
DATA
From: [IES] <system@example.com>
To: <bbronosky@example.com>
Date: Fri, 27 Oct 2017 06:14:11 +0000
Subject: Test Message
Hi there! This is supposed to be a real email...
Have a good day!
-- System
.
QUIT
Use stdbuf -oL to adjust standard output stream buffering. If MODE is 'L' the corresponding stream will be line buffered:
stdbuf -oL cat textfile | nc host port
Just guessing here, but you probably need CRLF line endings:
sed $'s/$/\r/' textfile | nc host port
