Spawning a background process under a different user in bash

I know I can run this command to spawn a background process and get the PID:
PID=`$SCRIPT > /dev/null 2>&1 & echo $!`
and to run a command under different user:
su - $USER -c "$COMMAND"
I don't want the script to run as root and I can't quite figure out how to combine the two and get the PID of the spawned process.
Thanks!

I think you want the runuser command. General syntax:
runuser -l userNameHere -c 'command'
I suspect that if you set your $SCRIPT variable to the above (with appropriate changes), your first command will do what you want.
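For example, a minimal sketch along those lines (the script path and user name here are placeholders; note that $! gives you the PID of the backgrounded runuser process, not of the script itself):
SCRIPT=/usr/local/bin/myscript.sh   # placeholder: path to your script
RUNAS=appuser                       # placeholder: target user
# Background the runuser invocation and capture its PID; -l gives the
# target user a login environment.
runuser -l "$RUNAS" -c "$SCRIPT" > /dev/null 2>&1 &
PID=$!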

To elaborate, see the following: stackoverflow.com/questions/9119885/…
See particularly the following quote from Chris Dodd:
Unfortunately there's no easy way to do this prior to bash version 4, when $BASHPID was
introduced. One thing you can do is to write a tiny program that prints its parent PID:...
If you have bash 4 and BASHPID, see $$ in a script vs $$ in a subshell
I don't have version 4, so I can't provide an example of its usage.
Or write a tiny C program which execv()s its arguments, and make it setuid to USER.
Or even make a setuid shell script (not generally recommended). Hopefully USER is fixed; if not, get the source for runuser, since this is essentially what runuser (not a POSIX command) does.
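As an untested sketch of the $BASHPID approach (it assumes the target user's login shell is bash 4 or later): the backgrounded group runs in a subshell, prints that subshell's PID, and exec then replaces the subshell with the script, so the printed PID really is the script's PID.
# Untested sketch; requires bash >= 4 as $USER's login shell.
PID=$(su - "$USER" -c '{ echo $BASHPID; exec '"$SCRIPT"' >/dev/null 2>&1; } &')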
PID=`su - $USER -c "$SCRIPT > /dev/null 2>&1 & echo $!"`
The problems with your use of su (above) include:
the $! is executed in the context of su's -c sub-shell, not the current shell where PID is assigned,
you're requesting that your SCRIPT be run in a login shell, so you don't even know whether USER's shell supports $!,
you have no control over the parent-child process chain that su (and the user's shell) create.
IOW, when you use
PID=`$SCRIPT > /dev/null 2>&1 & echo $!`
there's only one program involved, bash, and two (maybe three) processes that you pretty much have complete control over. When you throw su into the mix, that changes things much more than is apparent on the surface; bash and su may accept similar-looking arguments, but they behave very differently.
For obvious reasons, su does a lot of magic to protect itself and its children's environment from attacks; it doesn't even like being put in the background...

It's kind of late, but here is a two-liner that works. It seems to need to be two lines so that it doesn't wait for $SCRIPT to complete:
su $USER -c "$SCRIPT >> $LogOrNull 2>&1 & echo \$! > /some/writeable/path"
PID="$(cat /some/writeable/path)"
Note that $! is escaped so it is expanded by the shell su starts, not by the current shell.
/some/writeable/path will need to be writeable by $USER,
and the user running these commands will need read access to it.
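A variant sketch that avoids a hard-coded path, assuming the caller is root (so it can chown a temporary file over to $USER):
PIDFILE=$(mktemp)                # temp file to hold the child's PID
chown "$USER" "$PIDFILE"         # let the target user write to it
su "$USER" -c "$SCRIPT >> $LogOrNull 2>&1 & echo \$! > $PIDFILE"
PID=$(cat "$PIDFILE")
rm -f "$PIDFILE"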

Is there a way to redirect all stdout and stderr to systemd journal from within script?

I like the idea of using systemd's journal to view and manage the logs of my own scripts. I have become aware that I can log to the journal from my user scripts on a per-message basis:
echo 'hello' | systemd-cat -t myscript -p emerg
Is there a way to redirect all messages to journald, even those generated by other commands? Like:
exec &> systemd-cat
Update:
Some partial success.
I tried Inian's suggestion from a terminal:
~/scripts/myscript.sh 2>&1 | systemd-cat -t myscript.sh
and it worked, stdout and stderr were directed to systemd's journal.
Curiously,
~/scripts/myscript.sh &> | systemd-cat -t myscript.sh
didn't work in my Bash terminal.
I still need to find a way to do this inside my script for when other programs call my script.
I tried:
exec 2>&1 | systemd-cat -t myscript.sh
but it doesn't work.
Update 2:
From terminal
systemd-cat ~/scripts/myscript.sh
works. But I'm still looking for a way to do this from within the script.
A pipe to systemd-cat is a process which needs to run concurrently with your script. Bash offers a facility for this, though it's not portable to POSIX sh.
exec > >(systemd-cat -t myscript -p emerg) 2>&1
The >(command) process substitution starts another process and returns a pseudo-filename (something like /dev/fd/63) which you can redirect into. This is basically a wrapper for the mkfifo hacks you could do if you wanted to port this to POSIX sh.
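Putting that together inside a script, a minimal sketch (myscript is just a placeholder identifier):
#!/bin/bash
# From here on, everything the script writes goes to the journal;
# -t sets the identifier you can filter on later.
exec > >(systemd-cat -t myscript) 2>&1

echo "this line ends up in the journal"
ls /nonexistent    # stderr ends up in the journal too
You can then read the output back with journalctl -t myscript.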
If your script happens to not be a shell script, but some other programming language that allows loading extension modules linked to -lsystemd, there is another way. There is a library function sd_journal_stream_fd that quite precisely matches the task at hand. Calling it from bash itself (as opposed to some child) seems difficult at best. In Python for instance, it is available as systemd.journal.stream. What this function does in essence is connecting a unix domain stream socket and communicating what kind of data is being transmitted (e.g. priority). The difficult part with a shell here is making it connect a unix domain socket (as opposed to connecting in a child).
The key idea to this answer was given by Freenode/libera.chat user grawity.
Apparently, and for reasons that are beyond me, you can't really redirect all stdout and stderr to journald from within a script, because it has to be piped in. To work around that, I found a trick people were using with syslog's logger, which works similarly.
You can wrap all your code into a function and then pipe the function into systemd-cat.
#!/bin/bash
mycode() {
    echo "hello world"
    echor "echo typo producing error"
}
# 2>&1 folds stderr into the pipe so error output reaches the journal too
mycode 2>&1 | systemd-cat -t myscript.sh
exit 0
And then to search the journal logs:
journalctl -t myscript.sh --since yesterday
I'm disappointed there isn't a more direct way of doing this.

Effective Methods of changing Shells in UNIX

I used to work with UNIX a couple years ago, and I am just starting to get back into it again. I was wondering if anyone could help me with a question.
For example, if I am in bash and I run chsh --shell /bin/tcsh, I am prompted to enter my password. If I then run echo $SHELL, it does not show that I have changed shells; it still says I am in bash, not the C shell. So I have to log out and back in. Once I log back in, it tells me I am in the C shell.
Is there a more effective method to change shells? One that does not require me having to log in and out?
Thank you in advance.
chsh(1): change your login shell
Once you change your shell with chsh, you will automatically be logged into that shell every time you open a terminal.
If you want to use a different shell temporarily, just run that shell directly: tcsh, zsh, etc.
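For instance, to switch the current session over without logging out, you can even replace the running shell in place (note that with exec there is no bash to fall back to when tcsh exits):
exec tcsh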
If you want to use a particular shell for a script, use a shebang "#!".
Example -- The following on the first line of a shell script will ensure the script is run with sh (and you can do this for any shell available on your system):
#!/bin/sh
Always check your current shell by using:
echo $0
That way you get the exact process (your current shell) that you are running. If you print $SHELL, it returns the default shell that is opened when you log in to the server, which is not reliable unless that is exactly what you want to know.
ubuntu$ echo $SHELL
/bin/bash
ubuntu$ echo $0
-bash
ubuntu$ sh
\[\e[31m\]\u\[\e[m\]$ echo $SHELL
/bin/bash
\[\e[31m\]\u\[\e[m\]$ echo $0
sh
\[\e[31m\]\u\[\e[m\]$
Regards!

Simply forking and redirecting the output of a command to /dev/null

I frequently execute from a shell (in my case Bash) commands that I want to fork immediately and whose output I want to ignore. So frequently in fact that I created a script (silent) to do it:
#!/bin/bash
$@ &> /dev/null &
I can then run, e.g.
silent inkscape myfile.svg
and my terminal will not be polluted by the debug output of the process I just forked.
I have two questions:
Is there an "official" way of doing this?, i.e. something shorter but equivalent to &> /dev/null & ?
If not, is there a way I can make tab-completion work after my silent command as if it weren't there ? To give an example, after I've typed silent inksc, I'd like bash to auto-complete my command to silent inkscape when I press [tab].
aside: you probably want exec "$@" &> /dev/null & in your silent script, to cause it to discard the sub-shell, and the quotes around "$@" will keep spaces from getting in the way.
As for #2: complete -F _command silent should do something like what you want. (I call my version of that script launch and have complete -F _command launch in my .bash_profile)
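Putting the two suggestions together, a sketch of the full setup (the script name silent and its location are up to you):
#!/bin/bash
# silent: run the given command detached, discarding all output.
# exec replaces the forked subshell with the command itself.
exec "$@" &> /dev/null &
and in ~/.bash_profile:
complete -F _command silent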
It looks like nohup does more or less what you want. The tab-completion problem is because bash thinks that you are trying to complete a filename as an argument to the script, whereas its completion rules know that nohup takes a command as its first argument.
Nohup redirects stdout and stderr to nohup.out and will also leave the command running if your shell exits.
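For example, a rough one-liner equivalent of the silent script for a single command (inkscape is taken from the question; redirecting stdout and stderr keeps nohup from creating nohup.out):
nohup inkscape myfile.svg > /dev/null 2>&1 &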
Here's a little script I use for launching interactive (and chatty) X apps from e.g. an xterm
#!/bin/bash
exe="$1"
shift
"$exe" "$#" 2>/tmp/$$."$exe".err 1>&2 & disown $!
No output, won't die if the terminal exits, but in case something goes wrong there's a log of all output in /tmp
If you don't want the log just use /dev/null instead.
It will also work from a function if you're script-allergic.
Perhaps you could 'rebind' the tab key? An example on Super User shows this with the enter key. Is this the right idea?

starting remote script via ssh containing nohup

I want to start a script remotely via ssh like this:
ssh user@remote.org -t 'cd my/dir && ./myscript data my@email.com'
The script does various things which work fine until it comes to a line with nohup:
nohup time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 &
It is supposed to start the program myprog, redirect its output to my.log, and send an email with some data files created by myprog as attachments and the log as the body. However, when the script reaches this line, ssh outputs:
Connection to remote.org closed.
What is the problem here?
Thanks for any help
Your command runs a pipeline of processes in the background, so the calling script will exit straight away (or very soon afterwards). This will cause ssh to close the connection. That in turn will cause a SIGHUP to be sent to any process attached to the terminal that the -t option caused to be created.
Your time ./myprog process is protected by a nohup, so it should carry on running. But your mutt isn't, and that is likely to be the issue here. I suggest you change your command line to:
nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 " &
so the entire pipeline gets protected. (If that doesn't fix it, it may be necessary to do something with file descriptors; for instance, mutt may have other issues with the terminal not being around, or the quoting may need tweaking depending on the parameters. But give that a try for now...)
This answer may be helpful. In summary, to achieve the desired effect, you have to do the following things:
Redirect all I/O on the remote nohup'ed command
Tell your local SSH command to exit as soon as it's done starting the remote process(es).
Quoting the answer I already mentioned, in turn quoting wikipedia:
Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
UPDATE
I've just had success with this pattern:
ssh -f user@host 'sh -c "( (nohup command-to-nohup 2>&1 >output.file </dev/null) & )"'
Managed to solve this for a use case where I need to start backgrounded scripts remotely via ssh, using a technique similar to other answers here but in a way I feel is more simple and clean (at least, it makes my code shorter and, I believe, better-looking): explicitly closing all three streams using the stream-close redirection syntax, as discussed at the following locations:
https://unix.stackexchange.com/questions/131801/closing-a-file-descriptor-vs
https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21
http://www.tldp.org/LDP/abs/html/io-redirection.html#CFD
https://www.gnu.org/software/bash/manual/html_node/Redirections.html
Rather than the more widely used but (IMHO) hackier "redirect to/from /dev/null" approach, this results in the deceptively simple:
nohup script.sh >&- 2>&- <&-&
2>&1 works just as well as 2>&-, but I feel the latter is ever-so-slightly clearer. ;) Most people put a space before the final "background job" ampersand, but since it is not required (the ampersand itself functions like a semicolon in normal usage), I prefer to omit it. :)
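Applied to the remote-start case in this question, the same idea might look like this (untested sketch; user, host, and script path are placeholders). -f tells ssh to go into the background once the command has started, and with all three streams closed there is nothing left to hold the connection open:
ssh -f user@host 'nohup /path/to/script.sh >&- 2>&- <&- &'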

Tell if a user has SUed in a shell script?

I have a script which executes a git-pull when I log in. The problem is, if I su to a different user and preserve my environment with an su -lp, the script gets run again and usually gets messed up for various reasons because I'm the wrong user. Is there a way to determine in a shell script whether or not I'm currently SUing? I'm looking for a way that doesn't involve hard coding my username into the script, which is my current solution. I use Bash and ZSH as shells.
You could use the output of the who command with the id command:
WHO=`who am i | sed -e 's/ .*//'`
ID_WHO=`id -u $WHO`
ID=`id -u`
if [[ "$ID" = "$ID_WHO" ]]; then
    echo "Not su"
else
    echo "Is su"
fi
if test "$(id -u)" = "0"; then
    : # commands executed for root
else
    : # commands executed for non-root
fi
If you are changing user identities with a setuid executable, your real and effective user IDs will be different. But if you use su (or sudo), they'll both be set to the new user. This means that commands that call getuid() or geteuid() won't be useful.
A better method is to check who owns the terminal the script is being run on. This obviously won't work if the process has detached from its terminal, but unless the script is being run by a daemon, this is unlikely. Try stat -c %U $(tty). I believe who am i will do the same thing on most Unix-like OSes as well.
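A sketch of that check in script form (GNU stat syntax assumed; it compares the terminal's owner with the current user name, and will fail if there is no controlling terminal):
if [ "$(stat -c %U "$(tty)")" = "$(id -un)" ]; then
    echo "Not su"
else
    echo "Is su"
fi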
You can use "$UID" environment variable.
If its value is ZERO, then the user has SUDOed.. Bcos root as $UID==0
Well... on Linux, if I su to another user, the su process shows up in the new user's process list.
sudo doesn't leave such pleasant things for you.
I'm using zsh, but I don't think anything in this is shell-specific.
If:
% ps | grep " su$"
returns anything, then you're running in an su'd shell.
Note: there is a space before su$ to exclude commands simply ending in su. It doesn't guard against a custom program/script called su, though.
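As a script-friendly version of the same test (same caveats as above):
if ps | grep -q ' su$'; then
    echo "Is su"
else
    echo "Not su"
fi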
