Python subprocess.Popen terminates when stdin=PIPE is specified - Windows

I'm new to subprocesses in Python. I need to spawn a number of independent subprocesses, keep them alive, and pass commands into them. At first sight, the subprocess library is what I'm looking for.
I've read the documentation for it, and as I understand it, to pass any command into the subprocess I have to specify its input.
I need to run commands via the Windows command line, so the toy example below is good enough: if I get it working, I'm pretty much done. Running the code below via IDLE opens a new cmd window and prints a list of the files in the cwd; however, I can't write to it because stdin is not specified (I would write to it using p.stdin.write('DIR'), with 'DIR' being an example command).
from subprocess import Popen, PIPE
p = Popen(['cmd', '/K', 'DIR'])
Therefore I specify stdin as PIPE, as per the documentation.
from subprocess import Popen, PIPE
p = Popen(['cmd', '/K', 'DIR'], stdin=PIPE)
However, running the second snippet instantly terminates the opened cmd window. Is that the expected behavior? As far as I could find in the documentation, only p.kill() or p.terminate() should end the child process. If it is expected, what are the possible workarounds? If not, what am I doing incorrectly, and what other libraries should I be using? Thanks!
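For reference, writing to such a pipe generally looks like the sketch below: commands must be sent as bytes, end with a newline, and be flushed. The DIR command and the /K flag are taken from the snippets above; the rest is illustrative and assumes the child stays alive.
from subprocess import Popen, PIPE

p = Popen(['cmd', '/K', 'DIR'], stdin=PIPE)
p.stdin.write(b'DIR\r\n')   # commands must be bytes, terminated with a newline
p.stdin.flush()             # push the buffered bytes through the pipe
# ... send further commands the same way ...
p.stdin.close()             # closing stdin sends EOF and lets cmd exit
p.wait()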

Related

Why won't an external executable run without manual input from the terminal command line?

I am currently writing a Python script that will pipe some RNA sequences (strings) into a UNIX executable, which, after processing them, will then send the output back into my Python script for further processing. I am doing this with the subprocess module.
However, in order for the executable to run, it must also have some additional arguments provided to it. Using the subprocess module, I have been trying to run:
import subprocess

seq = "acgtgagtag"
output = subprocess.Popen(["./DNAanalyzer", seq])
Despite my environment variables being set properly, the executables running without problem from the terminal command line, and the subprocess module functioning normally (e.g. subprocess.Popen(["ls"]) works just fine), the Unix executable always prints the same output:
Failed to open input file acgtgagtag.in
Requesting input manually.
There are a few other Unix executables in this package, and all of them behave the same way. I even tried to create a simple text file containing the sequence and specify it as the input in both the Python script as well as within the command line, but the executables only want manual input.
I have looked through the package's manual, but it does not mention why the executables can ostensibly be only run through the command line. Because I have limited experience with this module (and Python in general), can anybody indicate what the best approach to this problem would be?
Popen() is actually a constructor for an object representing the child process that runs the executable (no shell is involved unless shell=True). Because I didn't set standard input or output (stdin and stdout), they default to None, meaning the child simply inherits the parent's streams instead of being connected to my script.
What I should have done is pass subprocess.PIPE to signify to the Popen object that I want to pipe input and output between my program and the process.
Additionally, the environment variables of the script (in the main shell) were not the same as those of the subprocess, and these specific executables needed certain environment variables in order to function (in this case, the path to the parameter files in its package). That was done in the following fashion:
import subprocess as sb

seq = "acgtgagtag"
# env= replaces the child's entire environment, so include everything it needs
my_env = {"BIOPACKAGEPATH": "/Users/Bobmcbobson/Documents/Biopackage/"}
p = sb.Popen(['biopackage/bin/DNAanalyzer'],
             stdin=sb.PIPE, stdout=sb.PIPE, env=my_env)
strb = (seq + '\n').encode('utf-8')
data = p.communicate(input=strb)
After creating the Popen object, we send it the formatted input string using communicate(). The output can now be read and processed further in the script however we like.
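For completeness, communicate() returns a (stdout, stderr) tuple of bytes, so reading the result looks roughly like this (the variable names are mine; stderr comes back as None here because it was not redirected):
stdout_data, stderr_data = data       # stderr_data is None: stderr was not piped
result = stdout_data.decode('utf-8')  # decode back to text for further processing
print(result)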

How do I send commands to the ADB shell directly from my app?

I want to send commands to the ADB shell itself, as if I had done the following in cmd.
>adb shell
shell#:/ <command>
I am using Python 3.4 on a 64-bit Windows 7 machine. I can send one-line shell commands simply by using subprocess.getoutput, such as:
subprocess.getoutput('adb pull /storage/sdcard0/file.txt')
as long as the commands themselves are recognized by adb directly, such as pull and push. However, there are other commands, such as grep, that need to be run IN the shell, as above, since they are not recognized by adb itself. For example, the following line will not work:
subprocess.getoutput('adb shell ls -l | grep ...')
To enter commands in the shell, I thought I needed some kind of expect library, as that is what 'everyone' suggests; however pexpect, wexpect, and winexpect all failed to work. They were written for Python 2, and even after porting them to Python 3 and going through the .py files by hand, including the ones tweaked for Windows, nothing worked, each for a different reason.
How can I send the input I want to the adb shell directly?
If none of the already recommended shortcuts work for you, you can still go the 'regular' way and use subprocess.Popen to enter commands in the adb shell:
import time
import subprocess
from subprocess import PIPE

cmd1 = 'adb shell'
cmd2 = 'ls -l | grep ...'
p = subprocess.Popen(cmd1.split(), stdin=PIPE)  # open adb shell with a writable stdin
time.sleep(1)
p.stdin.write(cmd2.encode('utf-8'))  # commands must be written as bytes
p.stdin.write('\n'.encode('utf-8'))  # don't forget the newline
p.stdin.flush()                      # push the buffered bytes into the pipe
time.sleep(3)
p.kill()                             # close things down so the pipe doesn't linger
Some things to remember:
Even though you import subprocess, you still need to invoke it as subprocess.Popen.
Sending cmd1 as a string or as items in a list should work too, but .split() does the trick and is easier on the eyes.
Since you only specified that you want to send input to the shell, you only need stdin=PIPE; stdout=PIPE would only be necessary if you wanted to receive output from the shell (see the sketch after this list).
time.sleep(1) isn't really necessary; however, since many people complained about input timing behaving differently in Python 2 vs 3, consider using it. 'They' might have been using versions of 'expect' that need the shell's reply first. This code also worked when I tested it with time.sleep(0) instead.
stdin.write will raise an error if the input is not encoded properly; Python's default strings are Unicode. Writing a bytes literal such as b'ls ...' did not work for me in my tests, but .encode() did. Don't forget the newline!
Even if you use .encode(), the line might sit in the pipe's buffer rather than get sent right away, so to be sure it is good to include a flush().
time.sleep(3) is completely unnecessary, but if your command takes a long time to execute (e.g. a recursive search through the entire device piped out to a txt file on the memory card), give it some extra time before killing anything.
Remember to kill the process. If you don't, the pipe may remain open, and even after exiting the test app on the console, the next command still went to the shell even though the prompt appeared to be my regular cmd prompt.
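As mentioned in the stdin/stdout point above, here is a minimal sketch of the variant that also captures the shell's output. It assumes communicate() fits your use case, since it writes, flushes, closes stdin, and waits in one call; the grep pattern 'txt' is only a placeholder.
import subprocess
from subprocess import PIPE

p = subprocess.Popen(['adb', 'shell'], stdin=PIPE, stdout=PIPE)
# communicate() writes the bytes, flushes, closes stdin, and waits for adb to exit
out, _ = p.communicate(b'ls -l | grep txt\n')   # 'txt' is a placeholder pattern
print(out.decode('utf-8'))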
Amichai, I have to start by pointing out that your own "solution" is pretty awful, and your explanation makes it even worse. You are doing all those unnecessary things just because you do not understand how shell command parsing works (here I mean your PC's OS shell, not adb), when all you needed was this one command:
subprocess.check_output(['adb', 'shell', 'ls /storage/sdcard0 | grep ...']).decode('utf-8')
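For reference, a self-contained version of that one-liner: the whole pipeline is passed as a single argument, so it is the device's shell, not your PC's shell, that interprets the |. The grep pattern 'txt' is only a placeholder.
import subprocess

# The pipeline string is one argv element, so adb hands it to the remote shell intact.
out = subprocess.check_output(['adb', 'shell', 'ls /storage/sdcard0 | grep txt'])
print(out.decode('utf-8'))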

Execute Ruby subprocess which requires interactive input

I need to start a subprocess from Ruby that takes over and then returns control.
This subprocess needs interactive input from the user, so its I/O should be tied to stdin, stdout, and stderr. Furthermore, what it asks for as input changes depending on the circumstances.
An example of such a program is TeX, which I would start on a file; during processing, TeX may encounter a user error and has to ask the user how to fix it.
Essentially I am looking for a reentrant version of exec.
PS
For those who cannot read carefully let me reiterate.
This subprocess needs interactive input from the user
That means that if the Ruby program runs in a tty, the subprocess's output goes to the tty, not to the Ruby program, and its input comes from the tty, not from the Ruby program.
In other words:
Essentially I am looking for a reentrant version of exec.
Since I use TeX as an example, let me show one. I found a sample piece of TeX at Sample Tex. I intended to put an error in it, but it seems I don't have to, since it chokes on my system as-is. Save it as sample1.tex, sample2.tex, and sample3.tex.
Now I would like to run this bit of ruby code:
files=["sample1.tex","sample2.tex","sample3.tex"]
files.each{|file|
# It is really a latex command.
commmand_that_I_am_looking_for("latex #{file}")
}
When I run this code, I should see in the terminal, three times, a bunch of output:
Generic information about the latex program, progress in processing etc.
! LaTeX Error: File `html.sty' not found.
Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: sty)
Enter file name:
Whereupon, each of the three times, the program waits for the user to type something.
You can pair fork with exec:
Process.fork { exec('./somescript') }
Process.wait
The Process.wait ensures that you wait for the subprocess to complete.
The Ruby standard library has a module for exactly your needs. It is called Open3. Here is an example from its docs:
Open3.popen3("pwd", :chdir=>"/") {|stdin, stdout, stderr, thread|
p stdout.read.chomp #=> "/"
}

TCL hangs when trying to close TCL pipe

When launching tclsh and typing this:
close [open "|tclsh" w]
it works fine.
But when you have package require Tk in ~/.tclshrc, the same line makes tclsh HANG!
The same issue occurs with all GUI packages like Tk, Itk, Img, and Iwidgets; with non-GUI packages like Itcl it works fine.
How can I fix this issue? The point is to make tclsh not hang when typing close [open "|tclsh" w] with package require Tk in ~/.tclshrc.
The same issue occurs with wish: close [open "|wish" w] makes wish HANG (even with an empty ~/.wishrc file)!
I see this issue on both 32- and 64-bit CentOS.
I have the following versions of packages: tcl-8.5.8, tk-8.5.8, img-1.3, itcl-3.4.b1, itk-3.3, iwidgets-4.0.1.
Tcl applications mostly exit when they have finished their script, whether or not it is provided interactively. However the Tk package changes things around so that when the end of the script is reached, it instead goes into a loop handling events. If you're relying on an end-of-file causing things to exit, that's going to look a lot like a hang, but really it's just waiting properly for the GUI app to finish (so it can report the exit status of the subprocess).
The fix is to make a channel-readable event handler for stdin in the subprocess. There's a few ways to do this in detail, but here's a simple one that can go at the end of the bulk of code that you normally send:
proc ReadFromStdin {} {
    if {[gets stdin line] >= 0} {
        uplevel "#0" $line
    } elseif {[eof stdin]} {
        exit
    } else {
        # Partial read; try later when rest of data available
    }
}
fileevent stdin readable ReadFromStdin
This assumes that each line is a full executable command; that might not be true, of course, but writing the code to use info complete to compose lines is less clear and possibly unnecessary here. (You know what you're actually sending better than I…)
My thought would be that it's waiting for wish to finish running, as per the man page:
If channelId is a blocking channel for a command pipeline then close waits for the child processes to complete.
Since wish enters an infinite loop (the event loop) and never exits, the close command will hang. Along the same lines, [package require Tk] (I believe) starts the event loop, so will cause the same behavior.
I'll admit, though, that it surprises me that it's loading .tclshrc at all, since
If there exists a file .tclshrc (or tclshrc.tcl on the Windows platforms) in the home directory of the user, interactive tclsh evaluates the file as a Tcl script just before reading the first command from standard input.
It seems odd to me that [open "|tclsh" w] winds up in an interactive shell.
As a side note, [package require Tk] seems like a really strange thing to do in .tclshrc. In theory, you won't always want Tk (the window and event loop) when running Tcl (i.e., command-line-only apps)... and when you do want it, you know you do. To each their own, I suppose; it just seems odd to me.

Can Ruby access output from shell commands as it appears?

My Ruby script is running a shell command and parsing the output from it. However, it seems the command is executed first and its output saved in an array. I would like to be able to access the output lines in real time, just as they are printed. I've played around with threads, but haven't got it to work. Any suggestions?
You are looking for pipes. Here is an example:
# This example runs the netstat command via a pipe
# and processes the data in Ruby as it comes back
pipe = IO.popen("netstat 3")
while (line = pipe.gets)
  print line
  print "and"
end
When you call methods/functions to run system/shell commands, your interpreter spawns another process to run the command, waits for it to finish, and then gives you the output.
Even if you use threads, the only thing you would accomplish is keeping your program from hanging while the command runs; you still won't get the output until it's done.
I think you can accomplish that with pipes, but I am not sure how.
@Marcel got it.
