Bash daemon named "sh" or "sleep", not the script filename - bash

I have created a simple bash script, made it executable with chmod +x, and am successfully running it as a background service.
But when I view a process list, the process is called "sh" or "sleep" or whatever command happens to be running at the time, not my script's name.
How do I name the process of my bash script so I can distinguish it? I want to be sure that I'm not running the script more than once.
I am very new to bash scripting... sorry if this is a dumb question.
I am using #!/bin/bash

Do you have a "shebang" in your script?
I just did a little test. I found that with no shebang, the test script showed up in ps as whatever command was executing at the time. However, if I, as I usually do, put:
#!/bin/bash
or
#!/bin/sh
(which on my system is symlinked to /bin/dash) as the first line of the script, then the script showed up under its own name in the output of ps.
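A quick way to reproduce the test, if you're curious (file name hypothetical; this is on Linux):

printf '%s\n' '#!/bin/bash' 'sleep 60' > myscript.sh
chmod +x myscript.sh
./myscript.sh &
ps -p $! -o pid,comm,args    # with the shebang, comm shows myscript.sh

Delete the shebang line and repeat, and ps should instead show the parent shell (and a sleep child) where the script's name used to be.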

Your parent shell will be running the whole time; that is the sh you're seeing. Any other processes spawned by that shell will also be running. Try pstree to show the parent-child relationships.
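For example, from inside the script (or any shell), something like:

pstree -p $$

shows the script's shell, its children, and their PIDs.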
BTW, if you use bash-specific features that aren't in the POSIX Bourne shell, you should use #!/bin/bash, not #!/bin/sh. Some systems have bash but ship a lighter-weight /bin/sh.
I am very new to bash scripting... sorry if this is a dumb question.
Not dumb. Basic, but only once you understand how Unix processes work (and how whatever you're using on OS X comes up with "service" names, since "service" isn't a word that would make sense in any Unix context in this situation). So you're dealing with a fair amount of complexity, and I don't blame you for asking.
Maybe OS X looks at process group leaders or something to come up with a "service name", if that's what it really calls it. I think that would be the process name of whatever is running in the foreground (i.e. whatever you didn't fork off with & at the end, so the shell waits for it before executing the next command).

Since that's what ps shows as well, I have a hunch you're out of luck. Sorry, but shell scripts can't change their apparent process name.
However, for the cases that show bash, you can create a symlink to bash under a name descriptive of your script and invoke your script via that symlink.
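A sketch of that trick, with the names hypothetical:

ln -s /bin/bash my-service          # descriptive name pointing at bash
./my-service /path/to/script.sh &
ps -p $! -o pid,comm,args           # args now starts with ./my-service

Note that invoking the script through the symlink bypasses its shebang line, since you're starting the interpreter yourself.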

Not sure how portable my solution will be, but it works on Linux.
If you really want to do this (maybe you want to be able to kill your process by looking it up by name), you can write a small C program that execs the shell under a different process name. For example...
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argv[1])
    {
        /* Replace argv[0] with the script path before exec'ing the shell;
         * ps will then display that name instead of "sh". */
        argv[0] = argv[1];
        return execvp("sh", argv);
    }
    else
    {
        fprintf(stderr, "usage: %s <script> [args]\n", argv[0]);
        return 1;
    }
}
Say that's called wrapper.c. You can compile with:
gcc -o wrapper wrapper.c
Then you can run:
./wrapper ./my-script
Then check top or ps; it should show the "forged" program name.
Now... whether you actually want to do this, I don't know. It's probably not worth it; most people don't bother with this sort of thing.

Related

Execute a shell command

I want to execute a shell command in Rust. In Python I can do this:
import os
cmd = r'echo "test" >> ~/test.txt'
os.system(cmd)
But Rust only has std::process::Command. How can I execute a shell command like cd xxx && touch abc.txt?
Everybody is looking for:
use std::process::Command;

fn main() {
    let output = Command::new("echo")
        .arg("Hello world")
        .output()
        .expect("Failed to execute command");

    assert_eq!(b"Hello world\n", output.stdout.as_slice());
}
For more information and examples, see the docs.
You wanted to simulate &&. std::process::Command has a status method that returns a Result, and Result implements and_then. You can use and_then like a && but in a safer, more Rust-like way :)
You should really avoid system. What it does depends on what shell is in use and what operating system you're on (your example almost certainly won't do what you expect on Windows).
If you really, desperately need to invoke some commands with a shell, you can do marginally better by just executing the shell directly (like using the -c switch for bash).
If, for some reason, the above isn't feasible and you can guarantee your program will only run on systems where the shell in question is available and users will not be running anything else...
...then you can just use the system call from libc just as you would from regular C. This counts as FFI, so you'll probably want to look at std::ffi::CStr.
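For the record, the kind of invocation described above, handing the whole command string to a shell with -c, looks like this at a prompt (the command string is the one from the question):

sh -c 'cd xxx && touch abc.txt'

Whatever process spawner you use (Rust's Command included) would run sh with exactly those two arguments.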
For anyone looking for a way to set the current directory for the subprocess running the command (i.e. run "ls" in some dir), there's Command::current_dir. Usage:
use std::process::Command;

Command::new("ls")
    .current_dir("/bin")
    .spawn()
    .expect("ls command failed to start");

Are shell scripts read in their entirety when invoked?

I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed roughly line by line, or command by command where commands are separated by semicolons, with the exception of compound commands such as if ... fi blocks, which are read as a whole before being executed:
A shell script is a text file containing shell commands. When such a file is used as the first non-option argument when invoking Bash, and neither the -c nor -s option is supplied (see Invoking Bash), Bash reads and executes commands from the file, then exits. This mode of operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block before executing anything by typing the block manually on the command line, as the transcript after the links shows.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
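For example, typed at an interactive bash prompt, nothing runs until the closing fi is read:

$ if true
> then
>     echo "runs only after fi"
> fi
runs only after fi

The secondary prompt (>) shows the shell collecting the whole compound command before executing any of it.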
It's funny that most OSes I know of do NOT read the entire content of a script into memory, but run it from disk. Reading it in whole would make it safe to change the script while it's running. I don't understand why it's done the other way, given that:
scripts are usually very small (and wouldn't take much memory anyway)
at some point, as shown in this thread, people will start making changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you've decided that a script is not running OK (because you are writing/changing/debugging it), do you care about the rest of that run? You can go ahead making the changes, save them, and ignore all output and actions done by the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, since the current/previous run is misbehaving. It would typically skip some steps, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem: it may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports the feature, it's best to let the current run finish and THEN save the updated script. You can make the changes already, just don't save them.
It's not like the old days of DOS, where you had only one screen in front of you (one DOS screen), so you couldn't wait for the run to complete before opening the file again.
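A hedged demonstration of why saving mid-run is risky (file name hypothetical; the exact behavior can vary between shells and versions):

cat > self-append.sh <<'EOF'
#!/bin/bash
sleep 5
EOF
chmod +x self-append.sh
./self-append.sh &
echo 'echo "this line was appended while the script was running"' >> self-append.sh

On shells that read the file incrementally, the appended echo runs once the sleep finishes, even though it did not exist when the script started.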
No, they are not, and there are many good reasons for that.
One of the things to keep in mind is that a shell is not an interpreter in the usual sense, even if there are some similarities. Shells are designed to work with a stream of commands, whether from a TTY, a pipe, a FIFO, or even a socket.
The shell reads from its input source line by line until the kernel returns EOF.
Most shells have no extra support for interpreting files; they work with a file the same way they would work with a terminal.
In fact this is considered a nice feature, because it lets you do interesting things like this: How do Linux binary installers (.bin, .sh) work?
You can prepend a shell script to a binary file. You can't do that with an interpreter that parses the whole file up front - it would try and fail on the binary part. A shell just interprets the file line by line and doesn't care about the garbage at the end; you only have to make sure the script's execution terminates before it reaches the binary part.
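A minimal sketch of that trick (marker name hypothetical; real installers like makeself are more careful):

#!/bin/sh
# everything after the marker line is raw tar.gz payload, not shell code
PAYLOAD_START=$(awk '/^__ARCHIVE_BELOW__$/ { print NR + 1; exit }' "$0")
tail -n +"$PAYLOAD_START" "$0" | tar xzf -
exit 0
__ARCHIVE_BELOW__

The exit 0 guarantees the shell never reads the binary part; the payload is appended afterwards with something like cat payload.tar.gz >> installer.sh.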

How to call bash commands from tcl script?

Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a tcl script that calls such commands directly (i.e. without exec).
How can I make tclsh recognize these commands while interpreting tcl script files, just as it does in an interactive session?
I guess there is some tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of external commands. How can I load it manually in tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <#stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls that isn't an existing Tcl command. By default, that handler checks whether the command was invoked from an interactive session at the top level (not indirectly inside a proc body), and whether the name exists as an executable somewhere on the PATH. You can get something like this in scripts by writing your own proc unknown.
For a good start on this, examine the output of
info body unknown
One thing you should know is that ls is not a Bash command; it's a standalone utility. The clue to how tclsh runs such utilities is right there in its name: sh means "shell", so tclsh is a rough equivalent of Bash in that both are shells. But Tcl != tclsh, so in a script you have to use exec.

Shell script that can check if it was backgrounded at invocation

I have written a script that relies on other server responses (uses wget to pull data), and I want it to always be run in the background unquestionably. I know one solution is to just write a wrapper script that will call my script with an & appended, but I want to avoid that clutter.
Is there a way for a bash (or zsh) script to determine if it was called with say ./foo.sh &, and if not, exit and re-launch itself as such?
The definition of a background process (I think) is that it has a controlling terminal but it is not part of that terminal's foreground process group. I don't think any shell, even zsh, gives you any access to that information through a builtin.
On Linux (and perhaps other unices), the STAT column of ps includes a + when the process is part of its terminal's foreground process group. So a literal answer to your question is that you could put your script's content in a main function and invoke it with:
case $(ps -o stat= -p $$) in
  *+*) main "$@" &;;
  *) main "$@";;
esac
But you might as well run main "$@" & anyway. On Unix, fork is cheap.
However, I strongly advise against doing what you propose. This makes it impossible for someone to run your script and do something else afterwards — one would expect to be able to write your_script; my_postprocessing or your_script && my_postprocessing, but forking the script's main task makes this impossible. Considering that the gain is occasionally saving one character when the script is invoked, it's not worth making your script markedly less useful in this way.
If you really mean for the script to run in the background so that the user can close his terminal, you'll need to do more work — you'll need to daemonize the script, which includes not just backgrounding but also closing all file descriptors that have the terminal open, making the process a session leader and more. I think that will require splitting your script into a daemonizing wrapper script and a main script. But daemonizing is normally done for programs that never terminate unless explicitly stopped, which is not the behavior you describe.
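A rough sketch of that wrapper logic, with the flag and log path hypothetical (assumes the setsid utility from util-linux is available):

if [ "$1" != "--daemonized" ]; then
    # re-exec ourselves in a new session, detached from the terminal
    setsid "$0" --daemonized "$@" </dev/null >>/tmp/myscript.log 2>&1 &
    exit 0
fi
shift    # drop the --daemonized marker; the real work goes below

This covers backgrounding, the terminal file descriptors, and session leadership, but as noted it only makes sense for long-running scripts.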
I don't know how to detect this directly, but you can set a variable when the script re-launches itself and check for it in the child:
if [[ -z "$_BACKGROUNDED" ]] ; then
    _BACKGROUNDED=1 exec "$0" "$@" & exit
fi
# Put code here
Works both in bash and zsh.
the "tty" command says "not a tty" if you're in the background, or gives the controlling terminal name (/dev/pts/1 for example) if you're in the foreground. A simple way to tell.
Remember that you can't (or at least shouldn't) edit a running script. This question and its answers give workarounds.
I haven't written shell scripts in a long time, but I can offer an idea: check the value of $$ (the PID of the current process) and compare it with the output of "jobs -l". That command lists the PIDs of all backgrounded jobs, and if the value of $$ is contained in the output of "jobs -l", the current script is running in the background.

chroot + execvp + bash

Update
Got it! See my solution (fifth comment)
Here is my problem:
I have created a small binary called "jail" and in /etc/passwd I have made it the default shell for a test user.
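(For reference, such a passwd entry looks something like this, with the UID/GID and paths hypothetical; the last field is the login shell, pointing at the jail binary:

testuser:x:1001:1001::/home/user:/usr/local/bin/jail

)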
Here is the -- simplified -- source code:
#define HOME  "/home/user"
#define SHELL "/bin/bash"
...
if (chdir(HOME) || chroot(HOME)) return -1;
...
char *shellargv[] = { SHELL, "-login", "-rcfile", "/bin/myscript", 0 };
execvp(SHELL, shellargv);
Well, no matter how hard I try, it seems that when my test user logs in, /bin/myscript is never sourced. Similarly, if I drop a .bashrc file in the user's home directory, it is ignored as well.
Why does bash snub these guys?
--
Some clarifications, not necessarily relevant, but to clear up some of the points made in the comments:
The 'jail' binary is actually setuid, thus allowing it to chroot() successfully.
I have used 'ln' to make the appropriate binaries available - my jail cell is nicely padded :)
The issue does not seem to be with chrooting the user... something else is amiss.
As Jason C says, the exec'ed shell isn't interactive.
His solution will force the shell to be interactive if it accepts -i to mean that (and bash does):
char *shellargv[] = { SHELL, "-i", "-login", ... };
execvp(SHELL, shellargv);
I want to add, though, that traditionally a shell will act as a login shell if ARGV[0] begins with a dash.
char *shellargv[] = {"-"SHELL, "-i", ...};
execvp(SHELL, shellargv);
Usually, though, Bash will autodetect whether it should run interactively or not. Its failure to do so in your case may be because of missing /dev/* nodes.
The shell isn't interactive. Try adding -i to the list of arguments.
I can identify with wanting to do this yourself, but if you haven't already, check out the jail chroot project and jailkit for some drop-in tools to create a jail shell.
By the time your user is logging in and their shell tries to source this file, it's running under their UID. The chroot() system call is only usable by root -- you'll need to be cleverer than this.
Also, chrooting to a user's home directory will make their shell useless, as (unless they have a lot of stuff in there) they won't have access to any binaries. Useful things like ls, for instance.
Thanks for your help, guys,
I figured it out:
I forgot to setuid()/setgid() to root, chroot(), setuid()/setgid() back to the test user, and then pass a proper environment using execve().
Oh, and if I pass no argument to bash, it will source ~/.bashrc.
If I pass "-l" it will source /etc/profile.
Cheers!
