I have GHC 6.12.3 and Ubuntu 11.04 installed on my laptop.
I would like to have a function which takes some shell commands and executes them as the superuser (like sudo update-manager, sudo iwlist ...) in Haskell. I know that the System.Process module has functions like createProcess and runInteractiveCommand, but they are for a single raw command or a single shell command, not for compound commands like "sudo update-manager". All my experiments with those functions to execute "sudo ..." failed; the terminal I used to run my Haskell function had no response.
I also looked at the HSH package, but it seems to me that the functions exported there are not suitable for sudo commands either.
My guess is that executing commands like "sudo update-manager" requires two processes: one for "sudo" and the other for "update-manager". So I would need to call a function like "createProcess" twice and somehow connect the two, so that the second process, for "update-manager", gets superuser privileges from the first process, for "sudo".
Thanks in advance for help!
Try readProcess from System.Process
readProcess :: FilePath   -- command to run
            -> [String]   -- any arguments
            -> String     -- standard input
            -> IO String  -- stdout
readProcess forks an external process, reads its standard output
strictly, blocking until the process terminates, and returns the
output string.
Run it like this:
readProcess "/usr/bin/sudo" ("-S":someProgram) (passwort++"\n")
This executes sudo with the options -S and the program. -S is needed to read the password from stdin. The password must finish with a newline, so the program adds one.
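For example, a complete program along these lines might look like this; apt-get update is just a stand-in for whatever you want to run as root, and the hard-coded password is for illustration only:

import System.Process (readProcess)

-- A minimal sketch: run "apt-get update" through sudo, feeding the password on
-- stdin via -S. In real code, read the password from the user instead of
-- hard-coding it.
main :: IO ()
main = do
    let password = "secret"
    out <- readProcess "/usr/bin/sudo" ["-S", "apt-get", "update"] (password ++ "\n")
    putStrLn out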
Answering the last paragraph: sudo is a regular program, no magic whatsoever. It just happens to run other programs, and so does your Haskell program. Your program runs sudo, and sudo runs update-manager. So no, you do not need to create the two processes yourself.
Have you tried System.Process.system?
import System.Process
main = system "sudo update-manager"
This works for me (GHC 7.0.3). Also, for scripting in Haskell in general (sudo included), you can have a look at a presentation "Practical Haskell: scripting with types" by Don Stewart.
I'm learning about Docker at the moment, and going through the Dockerfile reference, specifically the RUN instruction. There are two forms of RUN - the shell form, which runs the command in a shell, and the exec form, which "does not invoke a command shell" (quoted from the Note section).
If I understood the documentation correctly, my question is - If, and how can, Docker run a command without a shell?
Note that the answers to "Can a command be executed without a shell?" don't actually answer the question.
If I understand your question correctly, you're asking how something can be run (specifically in the context of docker) without invoking a command shell.
On Linux, things are usually run using the exec family of system calls.
You pass the path of the executable you want to run and the arguments it should receive, via a call such as execl.
This is actually what your shell (sh, bash, ksh, zsh) does under the hood anyway. You can observe this yourself if you run something like strace -f bash -c "cat /tmp/foo"
In the output of that command you'll see something like this:
execve("/bin/cat", ["cat", "/tmp/foo"], [/* 66 vars */]) = 0
What's really going on is that bash looks up cat in $PATH, finds that cat is actually an executable binary available at /bin/cat, and then simply invokes it via execve with the correct arguments, as you can see above.
You can trivially write a C program that does the same thing as well.
This is what such a program would look like:
#include <unistd.h>

int main() {
    execl("/bin/cat", "/bin/cat", "/tmp/foo", (char *)NULL);
    return 0;
}
Every language provides its own way of interfacing with these system calls. C does, Python does, and Go, which is what Docker is mostly written in, does as well. A RUN instruction in the Dockerfile likely translates to one of these exec calls when you run docker build. You can run strace -f on docker build and then grep for exec calls in the log to see how the magic happens.
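For instance, here is a minimal Python sketch of the same no-shell exec as the C program above (/tmp/foo is just an example path):

import os

# Replace the current process with /bin/cat; no shell is involved.
os.execv("/bin/cat", ["cat", "/tmp/foo"])
# If execv succeeds, nothing after this line ever runs.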
The only difference between running something via a shell and directly is that you lose out on all the fancy stuff your shell will do for you, such as variable expansion, executable searching etc.
A program can execute another program without a shell; you just create a new process and load an executable into it, so you don't need the shell for that. The shell is needed for a user to start a program because it is the user interface to the system. Also, a program is not able to run a shell built-in like cd without a shell, because there's no executable to be found (there are alternative ways, though, but they are not as simple).
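As a rough sketch of that "create a new process and load an executable into it" idea in C (reusing the /bin/cat example from the other answer):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: fork a child, load /bin/cat into it, and wait for it to finish. */
int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        execl("/bin/cat", "cat", "/tmp/foo", (char *)NULL);
        perror("execl");   /* only reached if execl fails */
        return 1;
    }
    waitpid(pid, NULL, 0); /* parent waits for the child */
    return 0;
}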
In general, docker run will start a container with its default process, while docker exec allows you to run any process you want inside the container.
For example, running the microsoft/iis container with docker run microsoft/iis will run the default process, which is PowerShell.
But you can run cmd instead by running docker exec -i my_container cmd
See this answer for more details.
I'm trying to run bash.exe (Bash on Ubuntu for Windows) as a build command for Sublime Text. However, bash.exe has a bug and does not support outputting its stdout to any pipe.
The question is this: how can I run a command line (e.g. bash.exe -c ls) and capture the output without ever making bash.exe write its output to a pipe on Windows?
I'm open to using any languages or environment on Windows to make this tool.
Edit
I ran
bashTest = subprocess.Popen(["bash.exe", "-c", "ls"], stdout=subprocess.PIPE)
Which yielded:
bashTest.communicate()[0]
b'E\x00r\x00r\x00o\x00r\x00:\x00\x000\x00x\x008\x000\x000\x007\x000\x000\x005\x007\x00\r\x00\r\x00\n\x00'
This is currently not possible. There's a GitHub issue about it which was closed as a known limitation. If you want to increase awareness of it, I see two related UserVoice ideas: "Allow Windows programs to spawn Bash" and "Allow native Win32 applications to launch Linux tools/commands".
There are ways you could hack around it, however. One way would be to write a script which loops forever in a bash.exe console. When the script gets a signal, it runs the Linux commands with the output piped to a file and then signals that it is complete. Here's a rough sketch:

Linux (bash):

while true; do
    while [ ! -e /mnt/c/dobuild ]; do
        sleep 1
    done
    gcc foo.c > /mnt/c/build.log 2>&1
    rm /mnt/c/dobuild
done

Windows (batch):

type nul > C:\dobuild
:wait
if exist C:\dobuild (
    timeout /t 1 > nul
    goto wait
)
type C:\build.log
This does require keeping a bash.exe console always open with the script running, which is not ideal.
Another potential workaround, which was already mentioned, is to use ReadConsoleOutput.
You need to use the option shell=True in Popen() for pipes to work.
With shell=True, as in this example, you don't need to split the command:
>>> import subprocess as sp
>>> cmd = 'echo "test" | cat'
>>> process = sp.Popen(cmd,stdout=sp.PIPE,shell=True)
>>> output = process.communicate()[0]
>>> print(output.decode())
test
Your only realistic option, if you can't wait for a fix, would be to use ReadConsoleOutput and/or the related functions.
I want to execute a shell command in Rust. In Python I can do this:
import os
cmd = r'echo "test" >> ~/test.txt'
os.system(cmd)
But Rust only has std::process::Command. How can I execute a shell command like cd xxx && touch abc.txt?
Everybody is looking for:
use std::process::Command;

fn main() {
    let output = Command::new("echo")
        .arg("Hello world")
        .output()
        .expect("Failed to execute command");

    assert_eq!(b"Hello world\n", output.stdout.as_slice());
}
For more information and examples, see the docs.
You wanted to simulate &&. std::process::Command has a status method that returns an io::Result<ExitStatus>, and Result implements and_then. You can use and_then like && but in a safer, more Rust-like way :)
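For example, a sketch of chaining two commands the way && would (mkdir and touch here are just stand-ins for whatever you actually want to run):

use std::process::Command;

fn main() {
    // Run `mkdir xxx`, and only if it succeeds, run `touch xxx/abc.txt`.
    let result = Command::new("mkdir")
        .arg("xxx")
        .status()
        .and_then(|status| {
            if status.success() {
                Command::new("touch").arg("xxx/abc.txt").status()
            } else {
                Ok(status)
            }
        });
    println!("{:?}", result);
}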
You should really avoid system. What it does depends on what shell is in use and what operating system you're on (your example almost certainly won't do what you expect on Windows).
If you really, desperately need to invoke some commands with a shell, you can do marginally better by just executing the shell directly (like using the -c switch for bash).
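A sketch of that approach, reusing the command string from the question:

use std::process::Command;

fn main() {
    // Hand the whole command string to a specific shell via its -c switch.
    let status = Command::new("bash")
        .arg("-c")
        .arg("cd xxx && touch abc.txt")
        .status()
        .expect("failed to run bash");
    println!("shell exited with: {}", status);
}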
If, for some reason, the above isn't feasible and you can guarantee your program will only run on systems where the shell in question is available and users will not be running anything else...
...then you can just use the system call from libc just as you would from regular C. This counts as FFI, so you'll probably want to look at std::ffi::CStr.
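A sketch of that last resort, assuming the libc crate has been added as a dependency (CString is used here to build the NUL-terminated string that system expects):

use std::ffi::CString;

fn main() {
    // system() wants a NUL-terminated C string; CString builds one safely.
    let cmd = CString::new("cd xxx && touch abc.txt").expect("no interior NUL byte");
    // Calling into libc is unsafe; the return value encodes the shell's exit status.
    let status = unsafe { libc::system(cmd.as_ptr()) };
    println!("system() returned {}", status);
}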
For anyone looking for a way to set the current directory for the subprocess running the command (i.e. run "ls" in some directory), there's Command::current_dir. Usage:
use std::process::Command;

Command::new("ls")
    .current_dir("/bin")
    .spawn()
    .expect("ls command failed to start");
I want to send commands in the ADB shell itself, as if I had done the following in cmd:
>adb shell
shell#:/ <command>
I am using Python 3.4 on a Windows 7 64-bit machine. I can send one-line shell commands simply using subprocess.getoutput, such as:
subprocess.getoutput ('adb pull /storage/sdcard0/file.txt')
This works as long as the adb commands themselves are recognized by adb specifically, such as pull and push. However, there are other commands such as grep that need to be run IN the shell, like above, since they are not recognized by adb. For example, the following line will not work:
subprocess.getoutput ('adb shell ls -l | grep ...')
To enter the commands in the shell, I thought I needed some kind of expect library, as that is what 'everyone' suggests; however pexpect, wexpect, and winexpect all failed to work. They were written for Python 2, and after they were ported to Python 3 and I went through the .py files by hand (even those tweaked for Windows), nothing was working, each of them for different reasons.
How can I send the input I want to the adb shell directly?
If none of the already recommended shortcuts work for you, you can still go the 'regular' way and enter commands in the adb shell using subprocess.Popen:
import subprocess
import time
from subprocess import PIPE

cmd1 = 'adb shell'
cmd2 = 'ls -l | grep ...'
p = subprocess.Popen(cmd1.split(), stdin=PIPE)
time.sleep(1)
p.stdin.write(cmd2.encode('utf-8'))
p.stdin.write('\n'.encode('utf-8'))
p.stdin.flush()
time.sleep(3)
p.kill()
Some things to remember:
even though you import subprocess, you still need to invoke it as subprocess.Popen
sending cmd1 as a string or as items in a list should work too, but .split() does the trick and is easier on the eyes
since you only specified that you want to enter input to the shell, you only need stdin=PIPE; stdout=PIPE would only be necessary if you wanted to receive output from the shell
time.sleep(1) isn't really necessary; however, since many have complained about input timing issues differing between Python 2 and 3, consider using it. 'They' might have been using versions of 'expect' that need the shell's reply first. This code also worked when I tested it with time.sleep(0)
stdin.write will return an error if the input is not encoded properly. Python's default is Unicode. Passing a bytes literal like b'ls ...' did not work for me in my tests, but .encode() did. Don't forget the newline!
if you use .encode(), there is a worry that the line might not get sent properly, so to be safe it is good to include a flush()
time.sleep(3) is completely unnecessary, but if your command takes a long time to execute (e.g. a recursive search through the entire device piped out to a txt file on the memory card), maybe give it some extra time before killing anything
remember to kill. If you don't kill it, the pipe may remain open, and even after exiting the test app on the console, the next command still went to the shell even though the prompt appeared to be my regular cmd prompt
Amichai, I have to start by pointing out that your own "solution" is pretty awful, and your explanation makes it even worse. You are doing all those unnecessary things just because you do not understand how shell command parsing works (here I mean your PC's OS shell, not adb).
When all you needed was just this one command:
subprocess.check_output(['adb', 'shell', 'ls /storage/sdcard0 | grep ...']).decode('utf-8')
Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a Tcl script which calls bash commands directly (i.e. without exec).
How can I make tclsh recognize bash commands while interpreting Tcl script files, just like it does in an interactive session?
I guess there is some Tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of bash commands. How can I load it manually in Tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <#stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls, because that's not an existing Tcl command. By default, that proc checks whether the command was invoked from an interactive session at the top level (not indirectly in a proc body), and if so it looks to see whether the name exists as an executable somewhere on the path. You can get something like this in your scripts by writing your own proc unknown.
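For illustration, a rough sketch of such an override might look like the following. This is only a toy: it throws away everything else the standard unknown does (such as auto-loading), which is exactly why it's worth studying the real implementation first.

proc unknown {cmd args} {
    # Sketch only: fall back to an external program when a command is unknown.
    set path [auto_execok $cmd]
    if {$path ne ""} {
        return [exec {*}$path {*}$args <@stdin >@stdout 2>@stderr]
    }
    return -code error "invalid command name \"$cmd\""
}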
For a good start on this, examine the output of
info body unknown
One thing you should know is that ls is not a Bash command. It's a standalone utility. The clue for how tclsh runs such utilities is right there in its name - sh means "shell". So it's the rough equivalent to Bash in that Bash is also a shell. Tcl != tclsh so you have to use exec.