I want to execute a shell command in Rust. In Python I can do this:
import os
cmd = r'echo "test" >> ~/test.txt'
os.system(cmd)
But Rust only has std::process::Command. How can I execute a shell command like cd xxx && touch abc.txt?
What everybody is looking for:
use std::process::Command;

fn main() {
    let output = Command::new("echo")
        .arg("Hello world")
        .output()
        .expect("Failed to execute command");

    assert_eq!(b"Hello world\n", output.stdout.as_slice());
}
For more information and examples, see the docs.
You wanted to simulate &&. std::process::Command has a status method that returns an io::Result<ExitStatus>, and Result implements and_then, so you can chain commands the way && does, but in a safer, more explicit Rust way :) A minimal sketch follows below.
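For example, a rough sketch (the directory and file names are just placeholders, and mkdir stands in for cd since cd is a shell builtin, not an executable):

use std::process::Command;

fn main() {
    // Run `mkdir xxx`, and only if it exits successfully run `touch xxx/abc.txt`.
    // `status()` returns io::Result<ExitStatus>, so `and_then` chains the two calls.
    let result = Command::new("mkdir")
        .arg("xxx")
        .status()
        .and_then(|status| {
            if status.success() {
                Command::new("touch").arg("xxx/abc.txt").status()
            } else {
                Ok(status)
            }
        })
        .expect("failed to run command");

    // ExitStatus implements Display, so this prints something like "exit status: 0".
    println!("{result}");
}

Note that and_then on the Result only fires when the first command could be spawned at all; the success() check inside the closure is what actually reproduces the "only continue if the previous command succeeded" behaviour of &&.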
You should really avoid system. What it does depends on what shell is in use and what operating system you're on (your example almost certainly won't do what you expect on Windows).
If you really, desperately need to invoke some commands with a shell, you can do marginally better by just executing the shell directly (like using the -c switch for bash).
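A minimal sketch of that approach, assuming a Unix-like system where sh is on the PATH (the command string itself is only an example):

use std::process::Command;

fn main() {
    // Hand the whole command line to the shell, which interprets `&&`, `>>`, globs, etc.
    let status = Command::new("sh")
        .arg("-c")
        .arg("cd /tmp && touch abc.txt")
        .status()
        .expect("failed to run sh");

    assert!(status.success());
}

On Windows you would target cmd /C or PowerShell instead, which is exactly why this approach is only "marginally better".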
If, for some reason, the above isn't feasible and you can guarantee your program will only run on systems where the shell in question is available and users will not be running anything else...
...then you can just use the system call from libc just as you would from regular C. This counts as FFI, so you'll probably want to look at std::ffi::CStr.
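If you do go that route, a rough sketch of what the FFI call might look like (declaring system by hand here instead of pulling in the libc crate; treat it as an illustration only, not a recommendation):

use std::ffi::CString;
use std::os::raw::{c_char, c_int};

extern "C" {
    // libc's system(3); what it actually does depends on the platform's shell.
    fn system(command: *const c_char) -> c_int;
}

fn main() {
    // CString guarantees a trailing NUL byte and rejects interior NULs.
    let cmd = CString::new(r#"echo "test" >> ~/test.txt"#).expect("no interior NUL");
    let status = unsafe { system(cmd.as_ptr()) };
    println!("system() returned {status}");
}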
For anyone looking for a way to set the working directory of the subprocess running the command (i.e. run "ls" in some directory), there's Command::current_dir. Usage:
use std::process::Command;

Command::new("ls")
    .current_dir("/bin")
    .spawn()
    .expect("ls command failed to start");
I'm using Ruby on Linux.
I'd like to test for the existence of a command on the Linux system.
I'd like to not get back the output of the command that I'm testing for.
I'd also like to not get back any output that results from the shell being unable to find the command.
I want to avoid using shell redirection from within the command that I send to the shell. So something like system("foo > /dev/null") would be unsuitable.
I'm ok with using redirection if there is a way to do it from Ruby.
The simplest thing would be just to use system. Let's say you're looking for ls.
irb(main):005:0> system("which ls")
/bin/ls
=> true
If that's off the table, you could peek into the directories in ENV["PATH"] for the executable you're looking for. ENV["PATH"].split(":") would give you an array of directory names to check for the desired command. If you find a file with the right name, you may want to ensure it's an executable.
I want to avoid using shell redirection from within the command that I send to the shell. So something like system("foo > /dev/null") would be unsuitable. I'm ok with using redirection if there is a way to do it from Ruby.
system("exec which cmd", out: "/dev/null")
puts "Command is available." if ($?).success?
The exec is to explicitly avoid unnecessary forking in the shell.
As a side note, type -P can be used instead of which, but it relies on Bash and may have surprising effects if the script is ported to an environment with a different default shell.
I'm learning about Docker at the moment, and going through the Dockerfile reference, specifically the RUN instruction. There are two forms of RUN - the shell form, which runs the command in a shell, and the exec form, which "does not invoke a command shell" (quoted from the Note section).
If I understood the documentation correctly, my question is: can Docker run a command without a shell, and if so, how?
Note that the answers to Can a command be executed without a shell? don't actually answer the question.
If I understand your question correctly, you're asking how something can be run (specifically in the context of docker) without invoking a command shell.
The way things are run on the Linux kernel is usually via the exec family of system calls.
You pass it the path of the executable you want to run and the arguments that need to be passed to it, via an execl call for example.
This is actually what your shell (sh, bash, ksh, zsh) does under the hood anyway. You can observe this yourself if you run something like strace -f bash -c "cat /tmp/foo"
In the output of that command you'll see something like this:
execve("/bin/cat", ["cat", "/tmp/foo"], [/* 66 vars */]) = 0
What's really going on is that bash looks up cat in $PATH and finds that cat is actually an executable binary available at /bin/cat. It then simply invokes it via execve with the correct arguments, as you can see above.
You can trivially write a C program that does the same thing as well.
This is what such a program would look like:
#include <unistd.h>

int main() {
    execl("/bin/cat", "/bin/cat", "/tmp/foo", (char *)NULL);
    return 0;
}
Every language provides its own way of interfacing with these system calls. C does, Python does, and Go, which is what Docker is mostly written in, does as well. A RUN instruction in a Dockerfile likely translates to one of these exec calls when you run docker build. You can run strace -f on docker build and then grep for exec calls in the log to see how the magic happens.
The only difference between running something via a shell and directly is that you lose out on all the fancy stuff your shell will do for you, such as variable expansion, executable searching etc.
A program can execute another program without a shell; you just create a new process and load an executable into it, so you don't need the shell for that. The shell is needed for a user to start a program because it is the user interface to the system. Also, a program is not able to run a shell built-in like cd without a shell, because there is no executable to be found (there are alternative ways, though, but not as simple).
Very generally, docker run will start a container with its default process, while docker exec allows you to run any process you want inside a running container.
For example, running the microsoft/iis container with docker run microsoft/iis will run the default process, which is PowerShell.
But you can run cmd by running docker exec -i my_container cmd
See this answer for more details.
I have GHC 6.12.3 and Ubuntu 11.04 installed on my laptop.
I would like to have a function which takes some shell commands and executes them as the superuser (like sudo update-manager, sudo iwlist ...) in Haskell. I know that the System.Process module has some functions like createProcess and runInteractiveCommand, but they are for a single raw command or a single shell command, not for compound commands like "sudo update-manager". All my experiments with those functions to execute "sudo ..." failed; the terminal I used to run my Haskell function had no response.
I also looked at HSH package. But it seems to me that functions exported there are not good for sudo commands either.
My guess is that executing commands like "sudo update-manager" requires two processes: one for "sudo" and the other for "update-manager". So I need to call functions like "createProcess" twice and somehow connect them so that the second process for "update-manager" gets superuser privileges from the first process for "sudo".
Thanks in advance for help!
Try readProcess from System.Process
readProcess :: FilePath  -- command to run
            -> [String]  -- any arguments
            -> String    -- standard input
            -> IO String -- stdout
readProcess forks an external process, reads its standard output strictly, blocking until the process terminates, and returns the output string.
Run it like this:
readProcess "/usr/bin/sudo" ("-S":someProgram) (passwort++"\n")
This executes sudo with the options -S and the program. -S is needed to read the password from stdin. The password must finish with a newline, so the program adds one.
Answering the last paragraph: sudo is a regular program, no magic whatsoever. It just happens to run other programs, and so does your Haskell program. Your program runs sudo and sudo runs update-manager. So no, you should not create two processes yourself.
Have you tried System.Process.system?
import System.Process
main = system "sudo update-manager"
This works for me (GHC 7.0.3). Also, for scripting in Haskell in general (sudo included), you can have a look at a presentation "Practical Haskell: scripting with types" by Don Stewart.
Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a Tcl script which calls bash commands directly (i.e. without exec).
How can I make tclsh to recognize bash commands while interpreting tcl script files, just like it does in an interactive session?
I guess there is some Tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of bash commands. How can I load it manually in Tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <@stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls, because that's not an existing Tcl command. By default, that proc checks whether the command was invoked from an interactive session at the top level (not indirectly inside a proc body), and if so it looks for an executable of that name somewhere on the PATH. You can get something like this in scripts by writing your own unknown proc.
For a good start on this, examine the output of
info body unknown
One thing you should know is that ls is not a Bash command. It's a standalone utility. The clue for how tclsh runs such utilities is right there in its name - sh means "shell". So it's the rough equivalent to Bash in that Bash is also a shell. Tcl != tclsh so you have to use exec.
I have created a simple bash script, made it executable with chmod +x, and am successfully running it as a background service.
But when I view the process list, the script shows up as "sh" or "sleep" or whatever command happens to be running at the time, not as my script's name.
How do I name the process of my bash script so I can distinguish it? I want to be sure that I'm not running the script more than once.
I am very new to bash scripting... sorry if this is a dumb question.
I am using #!/bin/bash
Do you have a "shebang" in your script?
I just did a little test. I found that with no shebang, the test script showed in ps as whatever command was executing at the time. However, if I, as I usually do, put:
#!/bin/bash
or
#!/bin/sh
(which on my system is symlinked to /bin/dash) as the first line in the script then the script showed up under its own name in the output of ps.
Your parent shell will be running the whole time. That will be sh. Any other processes spawned by that shell will also be running. Try pstree to show parent-child relationships.
BTW, if you use bash-specific features that aren't in POSIX Bourne shell, you should use #!/bin/bash, not #!/bin/sh. Some systems have bash, but have a lighter-weight /bin/sh.
I am very new to bash scripting... sorry if this is a dumb question.
Not dumb. Basic, but only once you understand how Unix processes work (and how whatever you're using in OS X comes up with "service" names, since that's not a word that would make sense in any Unix context in this situation). So you're dealing with a fair amount of complexity, and I don't blame you for asking.
Maybe OS X looks at process group leaders or something to come up with a "service name", if that's what it really calls it. I think that would be the process name of whatever process is running in the foreground (i.e. that you didn't fork off with & at the end of it, so the shell is waiting for it before executing the next command.)
Since that's what ps also shows, I have a hunch you're out of luck. Sorry, but shell scripts can't change their apparent process name.
However, for the cases that show bash, you can create a symlink to bash under a name descriptive of your script and invoke your script via that symlink.
Not sure how portable my solution will be, but it works on Linux.
If you really want to do this (maybe you want to be able to kill your process by looking it up by name), you can write a small C program that calls into the shell with a different process name. For example...
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argv[1])
    {
        argv[0] = argv[1];
        return execvp("sh", argv);
    }
    else
    {
        fprintf(stderr, "usage: %s <script> [args]\n", argv[0]);
        return 1;
    }
}
Say that's called wrapper.c. You can compile with:
gcc -o wrapper wrapper.c
Then you can run:
./wrapper ./my-script
And check top or ps. It should have a "forged" program name.
Now... Whether you actually want to do this? I don't know. It's probably not worth it. Most people don't bother with this sort of thing.