Subprocess in Python 2.7 -- issuing commands into an engine - bash

I'm new to Python, and I've been given an engine written in bash (it takes its own unique commands from a command-line prompt or a file and exercises a specific functionality). I want to drive it through Python's subprocess module.
I'm able to open the engine with subprocess.call(path, shell=True) and interact with it manually / input commands, but I cannot figure out how to script the commands to send into the engine from Python and see its output automatically. I've tried to work through the documentation, but it is so, so verbose.
Ideally I would like to script all of my input commands in Python, subprocess the engine, and see the output from my engine in the Python output.
Again, forgive me if this sounds confusing. For example, I've tried:
p = subprocess.Popen(path-to-engine, stdin = subprocess.PIPE, stdout = subprocess.PIPE, shell=True)
p.stdin.write("some commands")
p.stdout.readline()
p.kill()
but this just gives me exit code 0, and no output.

You should use .communicate() instead of .kill():
import subprocess

p = subprocess.Popen(
    path_to_engine,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True)
p.stdin.write("some commands\n")   # the trailing newline matters
stdout, stderr = p.communicate()   # closes stdin and waits for the engine to exit
print('stdout:')
print(stdout)
print('stderr:')
print(stderr)
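If you want to interleave commands and replies instead of collecting everything at the end, a minimal sketch along these lines can work, assuming the engine reads commands line by line and answers each command with exactly one line (otherwise readline() can block):
import subprocess

path_to_engine = "./engine"     # hypothetical placeholder for your engine's path
p = subprocess.Popen(
    path_to_engine,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    shell=True)

for cmd in ["first command", "second command"]:
    p.stdin.write(cmd + "\n")   # commands must be newline-terminated
    p.stdin.flush()             # make sure the engine actually receives them
    print(p.stdout.readline())  # read one line of the engine's reply

p.stdin.close()                 # signal end of input so the engine can exit
p.wait()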

Related

Python subprocess.Popen terminates when stdin=PIPE is specified

I'm new to subprocesses in Python. I need to spawn a number of independent subprocesses, keep them alive, and pass commands into them. At first sight, the subprocess library is what I'm looking for.
I've read the documentation for it, and as I understand it, to pass any command into the subprocess I have to specify its input.
I need to run commands via the Windows command line, so the toy example below is good enough; if I get it working, I'm pretty much done. Running the code below via IDLE opens a new cmd window that prints a list of the files in the cwd, but I can't write to it because stdin is not specified (I would write to it using p.stdin.write('DIR'), with 'DIR' being an example command).
from subprocess import Popen, PIPE
p = Popen(['cmd', '/K', 'DIR'])
Therefore I specify stdin as PIPE, as per the documentation.
from subprocess import Popen, PIPE
p = Popen(['cmd', '/K', 'DIR'], stdin=PIPE)
However, running the second snippet instantly terminates the opened cmd window. Is that the expected behavior? As far as I can find in the documentation, only p.kill() or p.terminate() end the child process. If so, what are the possible workarounds? If not, what am I doing incorrectly, and what other libraries should I be using? Thanks!
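No answer is quoted for this one, but for reference, one workaround sketch is to keep the pipe open, write newline-terminated commands to it, and only close stdin when you are done (Windows only, not verified under IDLE; on Python 3 the writes would need bytes or universal_newlines=True):
from subprocess import Popen, PIPE

p = Popen(['cmd', '/K'], stdin=PIPE)
p.stdin.write('DIR\n')   # each command needs a trailing newline
p.stdin.flush()          # push it through the pipe to cmd
p.stdin.write('exit\n')  # ask cmd to quit on its own
p.stdin.close()
p.wait()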

subprocess missing output file

I am completely new to Python, but I am trying to learn.
I would like to use subprocess to run a simulation program that can be called from the terminal in a bash environment. The syntax is quite simple:
command inputfile.in
where the command is a larger simulation script in a Tcl/Tk environment.
OK, I have read a lot of the Python literature and have decided to use the Popen functionality of the subprocess module.
So, from what I understand, I should be able to format the command as follows:
p = subprocess.Popen(['command', 'inputfile.in'], stdout=subprocess.PIPE)
print(p.communicate())
The output of this command is two files. When I run the command in the terminal, I get two files in the same directory as the original input file:
File1.fid and File2.spe.
When I use Popen, two things confuse me:
(1) I do not get any output files written to the directory of the input file. (2) The value of p.communicate() is present, indicating that the simulation was run.
What happened to the output files? Is there a particular way to call a command that produces files as a result?
I am running this in a Jupyter notebook cell inside a for loop. The for loop iteratively changes the input file, systematically varying the conditions of the simulations. My operating system is macOS.
The goal is to run the command on each iteration of the for loop and then store the output file data in a larger dictionary. Later I would like to compare the output file data to the experimental data iteratively, in an optimization process that minimizes the residuals.
I would appreciate any help, as well as any direction if Popen is not the correct Python function to do this.
Let's learn from something easy like this:
# This is equivalent with the command line `dir *.* /s /b` on Windows
import subprocess
sp = subprocess.Popen(['dir', '*.*', '/s', '/b'], stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
(std_out, std_err) = sp.communicate() # returns (stdout, stderr)
# print any error output
print('std_err: ', std_err) # expect ('std_err: ', '')
# print the captured output
print('std_out: ', std_out) # expect ('std_out: ', ... can be a long list ...)
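As for the missing files: if the simulation writes its outputs relative to the current working directory (most command-line tools do), they will end up in whatever directory the Jupyter notebook was started from, not next to the input file. A hedged sketch, reusing the question's hypothetical command and inputfile.in names, is to point Popen at the right directory with cwd=:
import os
import subprocess

# Sketch only: 'command' and input_path are placeholders from the question.
input_path = '/path/to/inputfile.in'
workdir = os.path.dirname(input_path)   # directory where File1.fid / File2.spe should appear

p = subprocess.Popen(
    ['command', os.path.basename(input_path)],
    cwd=workdir,                        # run the simulation in the input file's directory
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print(stdout)
print(stderr)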

Passing input to an executable using Python subprocess module

I have an input file called 0.in. To get the output I do ./a.out < 0.in in the Bash Shell.
Now, I have several such files (more than 500) and I want to automate this process using Python's subprocess module.
I tried doing this:
data=subprocess.Popen(['./a.out','< 0.in'],stdout=subprocess.PIPE,stdin=subprocess.PIPE).communicate()
Nothing was printed (data[0] was blank) when I ran this. What is the right method to do what I want to do?
Redirection using < is a shell feature, not a Python feature.
There are two choices:
Use shell=True and let the shell handle redirection:
data = subprocess.Popen(['./a.out < 0.in'], stdout=subprocess.PIPE, shell=True).communicate()
Let Python handle redirection:
with open('0.in') as f:
    data = subprocess.Popen(['./a.out'], stdout=subprocess.PIPE, stdin=f).communicate()
The second option is usually preferred because it avoids the vagaries of the shell.
If you want to capture stderr in data, then add stderr=subprocess.PIPE to the Popen command. Otherwise, stderr will appear on the terminal or wherever Python's error messages are being sent.
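Since the question mentions more than 500 input files, here is a small sketch of looping the second option over every *.in file in the current directory (the naming pattern and directory layout are assumptions):
import glob
import subprocess

# Run ./a.out once per input file (0.in, 1.in, ...) and collect the output.
results = {}
for path in sorted(glob.glob('*.in')):
    with open(path) as f:
        out, err = subprocess.Popen(
            ['./a.out'],
            stdin=f,                    # same as `./a.out < path` in the shell
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE).communicate()
    results[path] = out                 # keep each run's output keyed by input file

print(results.get('0.in'))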

Execute Ruby subprocess which requires interactive input

I need to start a subprocess from Ruby that takes over and then returns control.
This subprocess needs interactive input from the user, so its IO should be tied to stdin, stdout, and stderr. Furthermore, its requests for input change depending on the circumstances.
An example of a program like that is TeX, which I would start on a file; during the process, TeX may encounter a user error that it has to ask the user how to fix.
Essentially I am looking for a reentrant version of exec.
PS
For those who cannot read carefully, let me reiterate.
This subprocess needs interactive input from the user
That means that if the Ruby program runs in a tty, the subprocess's output goes to the tty, not the Ruby program, and its input comes from the tty, not the Ruby program.
In other words:
Essentially I am looking for a reentrant version of exec.
I use TeX as an example, so let me show you one. I found a sample piece of TeX at Sample Tex. I intended to put an error in, but it seems I don't have to; it chokes on my system as is. Save it as sample1.tex, sample2.tex, and sample3.tex.
Now I would like to run this bit of ruby code:
files=["sample1.tex","sample2.tex","sample3.tex"]
files.each{|file|
# It is really a latex command.
commmand_that_I_am_looking_for("latex #{file}")
}
When I run this code, I should see in the terminal, three times over, something like the following:
Generic information about the latex program, processing progress, etc.
! LaTeX Error: File `html.sty' not found.
Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: sty)
Enter file name:
Whereupon, each of the three times, the program waits for the user to type something.
You can pair fork with exec:
Process.fork { exec('./somescript') }
Process.wait
The Process.wait ensures that you wait for the subprocess to complete.
The Ruby standard library has a module for your needs. It is called Open3. Here is an example from its docs:
Open3.popen3("pwd", :chdir=>"/") {|stdin, stdout, stderr, thread|
  p stdout.read.chomp #=> "/"
}

Python input delegation for subprocesses

I am currently displaying the output of a subprocess in the Python shell (in my case IDLE on Windows) by using a pipe and printing each line.
I want to do this with a subprocess that takes user input, so that the prompt appears in the Python console, the user can enter a response, and the response is sent to the subprocess.
Is there a way to do this?
Use process.stdin.write.
Remember to set stdin = subprocess.PIPE when you call subprocess.Popen.
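A rough sketch of that relay loop, assuming the child writes one prompt per line, flushes it, and then waits for exactly one line of input (programs that buffer their output or prompt without a trailing newline would need a pty instead; 'child_script.py' is a hypothetical interactive program):
import subprocess
import sys

p = subprocess.Popen(
    ['python', 'child_script.py'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True)      # text mode, so plain strings go through the pipes

while True:
    line = p.stdout.readline()    # one line of the child's output / prompt
    if not line:                  # empty string: the child closed stdout and is done
        break
    sys.stdout.write(line)        # show it in the Python shell
    reply = raw_input()           # ask the user (use input() on Python 3)
    p.stdin.write(reply + '\n')   # forward the reply to the child
    p.stdin.flush()

p.wait()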
