I have a question about using socat in a special situation. I have a logfile on a system, e.g. /var/log/logfile.log, and I want to bind the logfile to a TCP (telnet) connection.
So when I telnet to the system, I will see new entries from the logfile.
I tried this:
sudo socat -v tcp-l:4712,reuseaddr,fork file:"/var/lser2net/ser2net.log",nonblock
That works, but whenever a new entry is written to the logfile, I get the whole logfile again via telnet.
I only need new lines, not the whole logfile.
Any ideas?
Using the tail command:
To see the last 10 lines and then all new ones until the connection is closed:
socat -v tcp-l:4712,reuseaddr,fork exec:"tail -f -n10 /var/lser2net/ser2net.log"
To see only the last 10 lines:
socat -v tcp-l:4712,reuseaddr,fork exec:"tail -n10 /var/lser2net/ser2net.log"
To see only new lines until the connection is closed:
socat -v tcp-l:4712,reuseaddr,fork exec:"tail -f -n0 /var/lser2net/ser2net.log"
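To check it from a client (assuming the server is reachable at 192.168.1.10, a placeholder address):
telnet 192.168.1.10 4712
Lines appended to ser2net.log should then show up in the telnet session as they are written.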
Good luck!
I have written the following shell script:
#! /bin/bash
# This script is designed to find hosts with MySQL installed
nmap -sT my_IP_address -p 3306 >/dev/null -oG MySQLscan
cat MySQLscan | grep open > MySQLscan2
cat MySQLscan2
According to the script, the output of nmap should be sent to /dev/null. On the other hand, the final output should be written to the file MySQLscan2 in my pwd.
Contrary to what I expected, two files are written to my pwd:
MySQLscan: Contains the output of the scan that I expected to be in MySQLscan2.
MySQLscan2: This file is empty.
Is there a mistake in my script? How can I solve the problem?
Earlier today, I managed to run the script with correct output. I am not sure if I have changed the script in some way. I checked it again and again, but cannot find what is wrong...
I am working with Kali Linux and Oracle VM Virtual Box.
> /dev/null causes the shell to redirect stdout (that is, the file with file descriptor 1) to /dev/null before the command starts; in other words, to discard it. When nmap runs with the -oG MySQLscan option, it opens a new file and gets a new file descriptor. You can check it with strace:
$ strace -f nmap -sT localhost -p 3306 -oG MySQLscan |& grep MySQLscan
execve("/usr/bin/nmap", ["nmap", "-sT", "localhost", "-p", "22", "-oG", "MySQLscan"], 0x7ffc88805198 /* 60 vars */) = 0
openat(AT_FDCWD, "MySQLscan", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 4
In this example openat() returned 4 as a new file descriptor (you can read more about this function with man 2 openat). Since file descriptor 4 hadn't been redirected before the command started, MySQLscan gets created. Also notice that even if the file descriptor that openat() returns to open MySQLscan is redirected to /dev/null:
nmap -sT localhost -p 3306 -oG MySQLscan 4>/dev/null
it doesn't prevent MySQLscan from being created, because openat() requests a new unused file descriptor from the kernel every time it's run.
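If the goal is to end up with only MySQLscan2, one way (a sketch, relying on nmap accepting - as a filename to send the grepable report to stdout) is to pipe that report straight into grep, so no intermediate file is ever created:
nmap -sT my_IP_address -p 3306 -oG - | grep open > MySQLscan2
cat MySQLscan2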
I need to ssh into memcached servers and execute commands to ensure connectivity.
I am supposed to reuse the same ssh connection and keep executing commands whose output will be stored in some log. This is supposed to be a scheduled job which runs at specific intervals.
Code 1 makes multiple ssh connections for each execution.
#!/bin/bash
USERNAME=ec2-user
HOSTS="10.243.107.xx 10.243.124.xx"
KEY=/home/xxx/xxx.pem
SCRIPT="echo stats | nc localhost 11211 | grep cmd_flush"
for HOSTNAME in ${HOSTS} ; do
    ssh -l ${USERNAME} -i ${KEY} ${HOSTNAME} "${SCRIPT}"
done
Code 2 hangs after ssh.
#!/bin/bash
USERNAME=ec2-user
KEY=/home/xxx/xxx.pem
ssh -l ${USERNAME} -i ${KEY} 10.243.xx.xx
while:
do
echo stats | nc localhost 11211 | grep cmd_flush
sleep 1
done
Is there any better way of doing this?
Since you want to have Code 2 run the infinite while loop on the remote host, you can pass that whole thing to the ssh command, after fixing the while statement:
#!/bin/bash
USERNAME=ec2-user
KEY=/home/xxx/xxx.pem
ssh -l ${USERNAME} -i ${KEY} 10.243.xx.xx '
while true
do
    echo stats | nc localhost 11211 | grep cmd_flush
    sleep 1
done
'
I have to warn that I think the whole approach is somewhat fragile, though. Long-standing ssh connections tend to die for various reasons, which don't always mean the connectivity is actually broken. Make sure the parent script that calls this notices dropped ssh connections and tries again. I guess you can put this script in a loop, and maybe log a warning each time the connection is dropped, and log an error if the connection cannot be re-established.
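A minimal sketch of such a wrapper, assuming the same host and key as above (the log path and retry delay are placeholders):
#!/bin/bash
USERNAME=ec2-user
KEY=/home/xxx/xxx.pem
HOST=10.243.xx.xx
while true ; do
    # Runs the remote polling loop; returns when the connection drops.
    ssh -l ${USERNAME} -i ${KEY} ${HOST} '
    while true
    do
        echo stats | nc localhost 11211 | grep cmd_flush
        sleep 1
    done
    '
    echo "$(date): ssh to ${HOST} dropped, retrying in 5s" >> /tmp/memcached-check.log
    sleep 5
done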
Basically, I have saved a command in a .command file that connects to a specific port in order to exchange messages in Terminal with another Mac.
$ nc -n -v -l (port)
$ nc -n -v (ip) (port) # --> runs directly when the .command file is opened
However, when the .command file is opened, the entire feed just stops. I would like it to send a message immediately and then close the connection. What would the code be?
If you want to keep the connection listening you would use -kl (keep listening).
nc -n -v -kl (port)
With the other connection, you can send data immediately and close the connection:
nc -n -v (ip) (port) < <(echo "hello, world!")
The listening connection should stay alive and wait for further connections...
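Equivalently, a plain pipe avoids the bash-specific process substitution and should behave the same way:
echo "hello, world!" | nc -n -v (ip) (port)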
I am trying to save the output of a grep filter to a file.
I want to run tcpdump for a long time, and filter a certain IP to a file.
tcpdump -i eth0 -n -s 0 port 5060 -vvv | grep "A.B.C."
This works fine. It shows me IPs from my network.
But when I add >> file.dump at the end, the file is always empty.
My script:
tcpdump -i eth0 -n -s 0 port 5060 -vvv | grep "A.B.C." >> file.dump
And yes, it must be grep. I don't want to use tcpdump filters because it gives me millions of lines and with grep I get only one line per IP.
How can I redirect (append) the full output of the grep command to a file?
The file is probably empty because of buffering: when grep's output goes to a file instead of a terminal, it is block-buffered, so nothing appears until several kilobytes have accumulated. Note also that tcpdump writes its status messages to stderr rather than stdout, so grep won't catch those unless you merge the two streams.
To merge stderr into stdout you can use |&:
tcpdump -i eth0 -n -s 0 port 5060 -vvv |& grep "A.B.C."
Then, because the output is a continuous stream, you have to tell grep to use line buffering so each matching line is flushed as soon as it is found. For this, grep has the --line-buffered option.
All together:
tcpdump ... |& grep --line-buffered "A.B.C" >> file.dump
I want to use nc as a simple TCP/IP server. So far I run it using:
$ nc -k -l 3000 > temp.tmp
This works, but it writes all the received data from all connections into one single file, whereas I would like an individual file per connection.
Basically I get this if I skip the -k switch, but then nc shuts down as soon as the first connection is gone.
How can I keep nc running, but use an individual file for each incoming request? (Or, if this is not possible, is there an alternative *nix tool that is able to do this?)
Please note that I do not want to restart nc, but I want it all with a single running instance.
PS: I know that SO doesn't allow questions on finding tools, but for me this is only the fallback in case nc isn't able to do that by itself. So, sorry for the part in parentheses…
On the receiver:
$ nc -p 3000 -l | tar -x
On the sender:
$ tar -c * | nc <ip_address> 3000
Omit the -k and run it in a loop:
n=0
while nc -l 3000 > "$n".txt ; do
    n=$((n+1))
done
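A variant (just a sketch) that names each capture by timestamp instead of a counter, so names stay unique across restarts of the loop:
while true ; do
    nc -l 3000 > "capture-$(date +%Y%m%d-%H%M%S).txt" || break
done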
I think cronolog can help in this case; with its -p option you can set the period after which it starts a new file.
Also, the file-watcher utility incron can be used: it watches the log being written to one file and can trigger splitting it out to other files.
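As a rough sketch of the incron idea (the watched path and handler script are placeholders; see man 5 incrontab for the table format):
# incrontab entry: run a handler each time the capture file is modified
/home/user/temp.tmp IN_MODIFY /usr/local/bin/split-capture.sh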