A very simple use of netcat is below:
#!/bin/sh
echo "hello world" > original.txt
base64 original.txt > encoded.txt
cat encoded.txt | nc -l -p 1234 -q 0
#!/bin/sh
nc localhost 1234 -q 0 > received.txt
base64 -d received.txt > decoded.txt
rm received.txt encoded.txt
echo "Sent:"
md5sum original.txt
echo "Received:"
md5sum decoded.txt
rm decoded.txt original.txt
This creates a file, encodes it as base64, sends it over netcat, decodes it on the other end, and finally compares the sent and received files with an MD5 checksum.
On the Kali Linux machine I was using earlier, the netcat connection closes as soon as the second script runs, but on my Ubuntu machine at home it does not: I have to close the connection manually with Ctrl+D.
How can I make it such that the connection closes after receiving this data?
I think including the -c flag for nc should do it. Also, are you sure you want to use the Bourne shell and not Bash? I'd suggest changing your shebang to #!/bin/bash.
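For what it's worth, here is roughly what the sending script would look like with both suggestions applied. Treat it as a sketch only: -c ("close connection on EOF from stdin") is GNU netcat's spelling, nc.traditional uses -q 0 and OpenBSD nc uses -N for the same idea, and which side actually needs the flag depends on which end isn't seeing EOF, so check man nc on each machine.
#!/bin/bash
echo "hello world" > original.txt
base64 original.txt > encoded.txt
# -c: close the listening connection as soon as stdin (encoded.txt) hits EOF
nc -l -p 1234 -c < encoded.txt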
Related
mkfifo /tmp/f ; cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f
I am new to bash and I am trying to understand this piece of "code".
Why is a while loop not needed? How can this work? Is it itself a loop? Why? How?
Also, cat filePipe by itself ONLY PRINTS ONE LINE and then exits (I just tested it), and to make cat not exit I do: while cat pipeFile ; do : ; done. So how does the line above work?
I don't get the order of execution... at the beginning /tmp/f is empty, so cat /tmp/f should "send" an empty stream to /bin/bash, which just sends it to nc, which opens a connection and "sends" the interactive bash to whoever connects... and the response of the client is sent to /tmp/f ... and then? What? How can it go back and do the same things again?
When bash parses the line mkfifo /tmp/f ; cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f, several things happen. First, the fifo is created. Then, in no particular order, three things happen: cat is started, bash is started, and nc is started with its output stream connected to /tmp/f.
cat is now going to block until some other process opens /tmp/f for writing; nc is about to do that (or already did, but we don't know whether cat starts before nc or nc before cat, nor in which order they open the fifo; whichever does it first will block until the other completes the operation). Once all three processes start, they just sit there waiting for data.
Eventually, some external process connects to port 1234 and sends some data into nc, which writes it into /tmp/f. cat (eventually) reads that data and sends it downstream to bash, which processes the input and (probably) writes some data into nc, which sends it back across the socket connection.
If you have a test case in which cat /tmp/f only writes one line of data, that is simply because whatever process you used to write into /tmp/f only wrote a single line. Try:
printf 'foo\nbar\nbaz\n' > /tmp/f & cat /tmp/f
or:
while sleep 1; do date; done > /tmp/f & cat /tmp/f
/tmp/f is NOT empty; it is a fifo (a named pipe) that links nc's output back to cat's input, closing the loop.
Someone connects to port 1234 and types something, which nc forwards into the fifo, which then feeds into bash.
bash runs the command and sends the results back to nc.
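To make the loop concrete, here is the same one-liner again, unchanged, with the data flow spelled out in comments:
# /tmp/f is the named pipe that closes the loop:
#   nc writes whatever the remote client sends into /tmp/f,
#   cat reads it back out of /tmp/f and feeds it to bash,
#   bash executes it, and its stdout+stderr flow into nc,
#   which sends them back to the client over the socket.
mkfifo /tmp/f ; cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f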
.1 You misunderstand what happens when you echo "string" >/path/fifo
.a) When you just echo something >/path/to/somewhere, you
(test accessibility, then) open the target somewhere for writing
write something into the opened file descriptor (fd)
close (release) the accessed file.
.b) A fifo (First In, First Out) is not a regular file.
Try this:
# Window 1:
mkfifo /tmp/fifotest
cat /tmp/fifotest
# Window 2:
exec {fd2fifo}>/tmp/fifotest
echo >&$fd2fifo Foo bar
You will see that cat does not terminate.
echo >&$fd2fifo Baz
exec {fd2fifo}>&-
Now cat will close.
So there is no need for any loop!
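If you would rather see the same thing without two windows, here is an equivalent single-terminal sketch (bash, since it relies on the same {fd}> syntax):
mkfifo /tmp/fifotest
cat /tmp/fifotest &             # reader: blocks until a writer opens the fifo
exec {fd2fifo}>/tmp/fifotest    # open the write end; cat unblocks and waits for data
echo >&$fd2fifo Foo bar         # cat prints "Foo bar" and keeps running
echo >&$fd2fifo Baz             # cat prints "Baz" and still keeps running
exec {fd2fifo}>&-               # close the write end: cat sees EOF and terminates
wait                            # reap the background cat
rm /tmp/fifotest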
.2 The command cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f
could be written (avoiding a useless use of cat):
bash -i 2>&1 </tmp/f | nc -l -p 1234 > /tmp/f
but you could do the same operation from a different point of view:
nc -l -p 1234 </tmp/f | bash -i >/tmp/f 2>&1
The goal is
to drive bash's STDIN from nc's STDOUT, and
to connect bash's STDOUT and STDERR back to nc's STDIN.
.3 Going further: bashisms
Under bash, you could avoid creating the fifo by using an unnamed fifo:
coproc nc -l -p 1234; bash -i >&${COPROC[1]} 2>&1 <&${COPROC[0]}
or
exec {ncin}<> <(:); nc -l -p 1234 <&$ncin | bash -i >&$ncin 2>&1
Say I write to a netcat connection:
tail -f ${file} | nc localhost 7050 | do_whatever | nc localhost 7050
What happens here is that we have two socket connections to do some request/response. But that's not ideal for a few reasons.
What I am looking to do is reuse the same connection, to read from and write to.
Does anyone know how I can reuse just one netcat connection?
The correct way to do this in UNIX is to make use of a back pipe. You can do so as follows:
First, create a pipe: mknod bkpipe p
This creates a file named bkpipe of type pipe.
Next, figure out what you need to do. Here are two useful scenarios. In these, replace the hosts/addresses and port numbers with the appropriate ports for your relay.
To forward data sent to a local port to a remote port on another machine:
nc -l -p 9999 0<bkpipe | nc remotehost 7000 | tee bkpipe
To connect to another machine and then relay data in that connection to another:
nc leftHost 6000 0<bkpipe | nc rightHost 6000 | tee bkpipe
If you simply need to handle basic IPC within a single host, however, you can do away with netcat completely and just use the FIFO pipe that mknod creates. If you stuff things into the FIFO with one process, they will hang out there until something else reads them out.
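A minimal sketch of that last point (single-host IPC through the FIFO alone, no netcat; the pipe name is just an example):
mknod bkpipe p                  # or: mkfifo bkpipe
# shell 1 (reader): blocks until a writer opens the pipe, then prints whatever arrives
cat bkpipe
# shell 2 (writer): blocks until the reader is there, then the data goes straight across
echo "stuff for the other process" > bkpipe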
Yeah, I think the simplest thing to do is use this method:
tail -f ${file} | nc localhost 7050 | do_whatever > ${file}
Just write back into the same file (${file} is a named pipe, i.e. a FIFO).
As long as your messages are smaller than PIPE_BUF (at least 512 bytes on POSIX systems), the writes are atomic and won't interleave.
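Putting that together in one place, a sketch of the full setup (do_whatever stands in for your own filter, and ${file} has to be created as a FIFO first):
file=/tmp/conversation
mkfifo "$file"                  # the 'file' is really a named pipe
# whatever do_whatever writes back into the pipe becomes the next thing
# tail feeds to nc, so the single connection to port 7050 carries both directions
tail -f "$file" | nc localhost 7050 | do_whatever > "$file"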
Using ncat as a one-liner is much easier and more understandable for beginners ;)
prompt$> ncat -lk 5087 -c ' while true; do read i && echo [You entered:] $i; done'
Connect with telnet (or nc) to localhost port 5087, and everything you type echoes back to you ;)
Use the -lk options to listen and keep/maintain (multiple) connections.
You can turn it into a bash script like this, using backslashes for line continuation, but it invokes multiple bash processes, which is not cheap on resource usage:
#!/bin/bash
# title : ncat-listener-bidirectional.sh
# description : This script will listen for text entered by a client
# like for instance telnet
# and echo back the key strokes
#
ncat -lk 5087 -c ' \
#!/bin/bash \
while true; do \
read i && echo [You entered:] $i; \
done'
I am trying to do something like this in a POSIX-compliant way using nc, following the second example given here; it must run on sh (it can be tested on dash or busybox's ash, but not on bash).
I have done the following:
# on operator machine
nc -lvp 9999
# on remote machine
out="/tmp/.$$.out"
in="/tmp/.$$.in"
mkfifo "$out" "$in"
trap 'rm "$out" "$in"' EXIT
# nc 192.168.0.10 9999 > "$out" 0< "$in" &
cat "$in" | nc 192.168.0.10 9999 | cat > "$out" &
bash -i > "$in" 2>&1 0< "$out"
I have two questions; please answer both:
As you can see, I have tried to do nc 192.168.0.10 9999 > "$out" 0< "$in", but I do not know why the heck it didn't work with file redirection: the command freezes and the process isn't even launched. I have tried with a pipe on the input only and also on the output only, but neither way worked. The downside of this cat solution is that when you exit the command on the operator machine after connecting, it still keeps the process alive on the remote machine, requiring it to be killed. I want to know why the redirection solution didn't work, and how to solve the exit issue. Even though the cat solution is not the most elegant one, if it can be made to work it would be acceptable.
Is there a way to do this two-way I/O redirection using an fd instead of mkfifo? For instance, using exec 3> dummy? Remember it must be POSIX compliant, so >(:) is unacceptable.
If the fd solution is possible, it is not required to make the mkfifo solution work, but I would still be glad to know why the file redirection didn't work.
You're trying to do a reverse-shell? I can't quite tell from the logic.
A few things about fifos: opening a fifo blocks until both sides are opened, i.e. if you open it for reading you will block until someone opens it for writing, and vice versa.
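You can see that blocking with nothing but a scratch fifo (the path is just an example):
mkfifo /tmp/blockdemo
cat /tmp/blockdemo              # hangs here: the read-open waits for a writer
# ...then, from another shell:
echo hello > /tmp/blockdemo     # the write-open releases both sides; cat prints "hello" and exits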
The reason it won't work with redirection is that the shell performs the opens for the redirections before launching nc, and opening one end of a fifo blocks until some other process opens the other end. In this case you've got a deadly embrace of blocking:
nc is blocked opening "$out" because nobody's opened it for reading yet.
bash is blocked opening "$in" because nobody's opened it for reading yet.
You can try rearranging the file descriptor opening order:
nc 127.0.0.1 9999 2>&1 > "$out" 0< "$in" &
bash -i 0< "$out" >"$in" 2>&1
i.e. we make sure that the opening of in and out happens in the same order for both commands, which should fix the fifo opening issue.
You don't need to do this with two fifos; you can accomplish it with one:
ncfifo=/tmp/.$$.nc
mkfifo $ncfifo
cat $ncfifo | bash -i 2>&1 | nc 127.0.0.1 9999 >$ncfifo
As for (2); I don't know - it's not something I've tried to accomplish myself. The reason for using the fifo is that it allows you to wire up two sides of the pipeline together. I can't see any way of accomplishing this without a fifo, although I'm open to being corrected on this.
The nc man page on Debian contains the following example, which is for the 'opposite' interpretation:
rm -f /tmp/f; mkfifo /tmp/f
cat /tmp/f | /bin/sh -i 2>&1 | nc -l 127.0.0.1 1234 > /tmp/f
which is what I've used for the simpler cat answer; although it seems to work just as well when the leading cat is omitted:
rm -f /tmp/f; mkfifo /tmp/f
/bin/sh -i </tmp/f 2>&1 | nc -l 127.0.0.1 1234 > /tmp/f
I'm just beginning to work with bash scripts and I've tried to get a simple pipe to work:
#!/bin/sh
mkfifo apipe
cat apipe | nc -l $1 | /home/matt/testprogram > apipe
Given that the port number works and the program works as I want it to, what could be making this script mess up?
My program is supposed to print some text as well as take in some user input using fgets. When I run my shell script, I want it to act as if I were just running the program normally. When I run it, it just blanks out and does nothing, and I have to break it with Ctrl+C.
I type into the terminal something like:
sh testnc.sh 2342
Thanks for any advice
You are using nc wrong. nc -l $1 listens for an external connection on that port. So you could run something like this:
host 1:
nc -l <port> | /home/matt/testprogram
host 2:
cat files | nc <host1> <port>
But the way you are using it makes no sense.
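As a sketch of the minimal sensible way to exercise your program with nc on a single machine (two terminals; 2342 is the port from your example, and whether it is spelled -l 2342 or -l -p 2342 depends on your netcat variant):
# terminal 1: listen on the port and hand whatever arrives to the program
nc -l 2342 | /home/matt/testprogram
# terminal 2: connect to the listener; what you type here becomes the program's stdin
nc localhost 2342
Note that with this one-way pipe the program's output shows up in terminal 1 rather than going back to the client; wiring it back over the same connection is exactly what the fifo tricks earlier on this page (cat fifo | program | nc > fifo) are for.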
I have a bash script with some scp commands inside.
It works very well, but if I try to redirect stdout with "./myscript.sh >log", only my explicit echoes are shown in the "log" file.
The scp output is missing.
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
Ok, what should I do now?
Thank you
scp uses the interactive terminal to print that fancy progress bar. Printing that output to a file does not make sense, so scp detects when its output is redirected to something other than a terminal and disables this output.
What makes sense, however, is to redirect its error output into the file in case there are errors. You can also discard standard output if you want.
There are two possible ways of doing this. The first is to invoke your script with both stdout and stderr redirected into the log file:
./myscript.sh >log 2>&1
The second is to tell bash to do this right in your script:
#!/bin/sh
exec 2>&1
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
...
If you need to check for errors, just verify that $? is 0 after the scp command is executed:
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
RET=$?
if [ $RET -ne 0 ]; then
echo SOS >&2
exit $RET
fi
fi
Another option is to use set -e in your script, which tells bash to abort the script as soon as one of its commands fails:
#!/bin/bash
set -e
...
Hope it helps. Good luck!
You can simply test your tty with:
[ ~]#echo "hello" >/dev/tty
hello
If that works, try:
[ ~]# scp <user>@<host>:<source> /dev/tty 2>/dev/null
This has worked for me...
Unfortunately, scp's output can't simply be redirected to stdout, it seems.
I wanted to get the average transfer speed of my SCP transfer, and the only way that I could manage to do that was to send stderr and stdout to a file, and then to echo the file to stdout again.
For example:
#!/bin/sh
echo "Starting with upload test at `date`:"
scp -v -i /root/.ssh/upload_test_rsa /root/uploadtest.tar.gz speedtest@myhost:/home/speedtest/uploadtest.tar.gz > /tmp/scp.log 2>&1
grep -i bytes /tmp/scp.log
rm -f /tmp/scp.log
echo "Done with upload test at `date`."
Which would result in the following output:
Starting with upload test at Thu Sep 20 13:04:44 SAST 2012:
Transferred: sent 10191920, received 5016 bytes, in 15.7 seconds
Bytes per second: sent 650371.2, received 320.1
Done with upload test at Thu Sep 20 13:05:04 SAST 2012.
I found a rough solution for scp:
$ scp -qv $USER@$HOST:$SRC $DEST
According to the scp man page, -q (quiet) disables the progress meter, as well as all other output. Add -v (verbose) as well and you get heaps of output... and the progress meter is still disabled! Disabling the progress meter is what allows you to redirect the output to a file.
If you don't need all the authentication debug output, redirect the output to stdout and grep out the bits you don't want:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug
Final output is something like this:
Executing: program /usr/bin/ssh host myhost, user (unspecified), command scp -v -f ~/file.txt
OpenSSH_6.0p1 Debian-4, OpenSSL 1.0.1e 11 Feb 2013
Warning: Permanently added 'myhost,10.0.0.1' (ECDSA) to the list of known hosts.
Authenticated to myhost ([10.0.0.1]:22).
Sending file modes: C0644 426 file.txt
Sink: C0644 426 file.txt
Transferred: sent 2744, received 2464 bytes, in 0.0 seconds
Bytes per second: sent 108772.7, received 97673.4
Plus, this can be redirected to a file:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug > scplog.txt