Checking for a running Python process using Python - Windows

I have a Python script called speech.pyw. I don't want it showing up on the screen when it runs, so I used that extension.
How can I check from another Python script whether this script is running? If it isn't, the second script should launch it.

Off the top of my head, there are at least two ways to do this:
You could make the script create an empty file in a specific location, and the other script could check for that. Note that you might have to remove the file manually if the script exits uncleanly.
You could list all running processes and check whether the first script is among them. This is somewhat more brittle and platform-dependent.
An alternative hybrid strategy is for the script to create the file and write its PID (process ID) to it. The runner script could read that file, and if the recorded PID either isn't running or doesn't belong to the script, delete the file and relaunch. This is also somewhat platform-dependent; a sketch of this approach follows the psutil example below.

#!/usr/bin/env python2
import psutil
import sys

processName = "wastetime.py"

def check_if_script_is_running(script_name):
    script_name_lower = script_name.lower()
    for proc in psutil.process_iter():
        try:
            # The script's name shows up as an argument on the interpreter's command line.
            for element in proc.cmdline():
                if element.lower() == script_name_lower:
                    return True
        except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
            pass
    return False

print(check_if_script_is_running(processName))
sys.stdin.readline()  # keep the console window open until Enter is pressed
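For the hybrid PID-file strategy, a minimal sketch might look like the following (the file name, script name, and relaunch command are placeholders, and it assumes psutil is installed; here the runner records the child's PID itself rather than having the script write its own):

import os
import sys
import subprocess
import psutil

PID_FILE = "speech.pid"  # placeholder path for the PID file
SCRIPT = "speech.pyw"    # the script we want to keep running

def is_our_script(pid):
    # True only if the PID exists and its command line mentions our script.
    try:
        return any(SCRIPT in arg for arg in psutil.Process(pid).cmdline())
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        return False

def ensure_running():
    if os.path.exists(PID_FILE):
        with open(PID_FILE) as f:
            try:
                pid = int(f.read().strip())
            except ValueError:
                pid = -1
        if is_our_script(pid):
            return  # already running, nothing to do
        os.remove(PID_FILE)  # stale file: the PID is gone or was reused
    child = subprocess.Popen([sys.executable, SCRIPT])
    with open(PID_FILE, "w") as f:
        f.write(str(child.pid))

ensure_running()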

Related

Python subprocess.Popen terminates when stdin=PIPE is specified

I'm new to subprocesses in Python. I need to spawn a number of independent subprocesses, keep them alive, and pass commands into them. At first sight, the subprocess library is what I'm looking for.
I've read its documentation, and as I understand it, to pass any command into a subprocess I have to specify its input.
I need to run commands via the Windows command line, so the toy example below is good enough: if I get it working, I'm pretty much done. Running the code below via IDLE opens a new cmd window and prints a list of the files in the cwd, but I can't write to it because stdin is not specified (I would write to it using p.stdin.write('DIR'), with 'DIR' being an example command).
from subprocess import Popen, PIPE
p = Popen(['cmd', '/K', 'DIR'])
Therefore I specify stdin as PIPE, as per the documentation.
from subprocess import Popen, PIPE
p = Popen(['cmd', '/K', 'DIR'], stdin=PIPE)
However, running the second snippet instantly terminates the opened cmd window. Is that the expected behavior? As far as I can tell from the documentation, only p.kill() or p.terminate() should end the child process. If it is expected, what are the possible workarounds? If not, what am I doing incorrectly, and what other libraries should I be using? Thanks!
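For reference, one commonly suggested pattern for driving a console program over a pipe is to write newline-terminated bytes and flush after each command. A minimal sketch, assuming cmd stays interactive when driven this way (the /Q flag and the final EXIT are my additions):

from subprocess import Popen, PIPE

# /Q turns command echo off; both stdin and stdout are piped so we can drive cmd.
p = Popen(['cmd', '/Q'], stdin=PIPE, stdout=PIPE)

p.stdin.write(b'DIR\r\n')  # commands must be bytes and newline-terminated in Python 3
p.stdin.flush()            # without a flush the command may sit in the buffer

out, _ = p.communicate(b'EXIT\r\n')  # send a final command, then collect all output
print(out.decode(errors='replace'))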

Why won't an external executable run without manual input from the terminal command line?

I am currently writing a Python script that will pipe some RNA sequences (strings) into a UNIX executable, which, after processing them, will then send the output back into my Python script for further processing. I am doing this with the subprocess module.
However, in order for the executable to run, it must also be given some additional arguments. Using the subprocess module, I have been trying to run:
import subprocess
seq = "acgtgagtag"
output = subprocess.Popen(["./DNAanalyzer", seq])
Despite my environment variables being set properly, the executables running without problem from the terminal command line, and the subprocess module functioning normally (e.g. subprocess.Popen(["ls"]) works just fine), the Unix executable always prints the same output:
Failed to open input file acgtgagtag.in
Requesting input manually.
There are a few other Unix executables in this package, and all of them behave the same way. I even tried creating a simple text file containing the sequence and specifying it as the input both in the Python script and on the command line, but the executables still ask for manual input.
I have looked through the package's manual, but it does not mention why the executables can ostensibly only be run through the command line. Because I have limited experience with this module (and Python in general), can anybody indicate what the best approach to this problem would be?
Popen() is actually a constructor for an object that represents the child process running the executable. Because I didn't set standard input or output (stdin and stdout), they default to None, meaning the child inherits the parent's streams rather than being connected to my script.
What I should have done is pass subprocess.PIPE to signify to the Popen object that I want to pipe input and output between my program and the process.
Additionally, the environment variables of the script (in the main shell) were not the same as those of the child process, and these specific executables needed certain environment variables to function (in this case, the path to the parameter files in the package). This was done in the following fashion:
import subprocess as sb

seq = "acgtgagtag"
# Environment keys must be strings; the bare BIOPACKAGEPATH was a NameError.
my_env = {"BIOPACKAGEPATH": "/Users/Bobmcbobson/Documents/Biopackage/"}
p = sb.Popen(['biopackage/bin/DNAanalyzer'], stdin=sb.PIPE, stdout=sb.PIPE, env=my_env)
strb = (seq + '\n').encode('utf-8')
data = p.communicate(input=strb)
After creating the Popen object, we send it a formatted input string via communicate(). The output can then be read and processed further in the script.
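One caveat worth noting: passing env= replaces the child's entire environment, so inherited variables such as PATH are lost. A common variant (a sketch, reusing the BIOPACKAGEPATH example above) copies the parent's environment and extends it:

import os
import subprocess as sb

my_env = dict(os.environ)  # start from the parent's environment
my_env["BIOPACKAGEPATH"] = "/Users/Bobmcbobson/Documents/Biopackage/"

p = sb.Popen(['biopackage/bin/DNAanalyzer'], stdin=sb.PIPE, stdout=sb.PIPE, env=my_env)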

Ensuring Programs Run In Ordered Sequence

This is my situation:
I want to run several Python scripts one after another, starting with scriptA.py. When scriptA.py finishes, scriptB.py should run, followed by scriptC.py. After these scripts have run in order, I need to run an rsync command.
I plan to create a bash script like this:
#!/bin/sh
python scriptA.py
python scriptB.py
python scriptC.py
rsync blablabla
Is this the best solution for performance and stability?
To run a command only after the previous command has completed successfully, you can use a logical AND:
python scriptA.py && python scriptB.py && python scriptC.py && rsync blablabla
Because the whole statement will be true only if all are true, bash "short-circuits" and only starts the next statement when the preceding one has completed successfully; if one fails, it stops and doesn't start the next command.
Is that the behavior you're looking for?
If you have some experience with Python, it is almost certainly better to write a single Python script that imports and executes the relevant functions from the other scripts. That way you can use Python's exception handling. You can also run the rsync from within Python, as in the sketch below.
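A minimal sketch of that idea (it assumes each script exposes a main() function, which the question doesn't say, and the rsync arguments are placeholders):

import subprocess
import sys

import scriptA  # assumption: each script wraps its work in a main() function
import scriptB
import scriptC

try:
    scriptA.main()
    scriptB.main()
    scriptC.main()
    # check=True raises CalledProcessError if rsync exits with a non-zero status.
    subprocess.run(["rsync", "-a", "src/", "dest/"], check=True)
except Exception as err:
    sys.exit("step failed: %s" % err)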

Remotely manipulating the execution of a batch file

I have a batch file located on one machine that I invoke remotely from another machine. The batch file is pretty simple; all it does is set some environment variables and then execute an application, which creates a command window and runs inside it. The application will run forever unless someone types "quit" into the command window in which it is executing, at which point it does some final processing and exits cleanly. If I just close the command window, the exit is not clean, which is bad for a number of reasons related to the data this application produces. Is there a way to write another batch script that inserts the "quit" command into the first command window and then exits?
I'd write a little script using the subprocess module in Python, like so:
from subprocess import Popen, PIPE
import os.path
import time

# Start the app with stdin piped so we can type "quit" into it later.
app = Popen(['c:/path/to/app.exe', 'arg1', 'arg2'], stdin=PIPE, env={
    'YOUR_ENV_VAR_1': 'value1',
    'YOUR_ENV_VAR_2': 'value2',
    # etc. as needed to fill the environment
})

# Poll for a sentinel file; the remote side creates it to request shutdown.
while not os.path.exists('c:/temp/quit-app.tmp'):
    time.sleep(60)

app.communicate(b'quit\n')  # send "quit" on stdin and wait for a clean exit
print("app return code is %s" % app.returncode)
Then, when you want to shut down, you remotely invoke a batch script that creates c:/temp/quit-app.tmp, waits a couple of minutes, and then deletes the file.
Naturally, you need Python installed on the Windows machine for this to work.
It sounds like the type of job for which I'd use expect, though I've never used it under Windows.
You could use the < redirection operator to take the "quit" from a text file instead of the console... but that would quit your process as soon as it loads. Would that work?
Otherwise you could write a program to send keystrokes to the console... but I don't think that's a production-quality trick.
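For completeness, a hedged Python version of that keystroke trick, assuming the pywin32 package is installed and that the app's window title is known (the title below is a placeholder):

import win32com.client  # pywin32's COM bridge

shell = win32com.client.Dispatch("WScript.Shell")
# Bring the app's console window to the foreground by its (assumed) title...
if shell.AppActivate("MyApp Console"):
    # ...then type "quit" followed by Enter into it.
    shell.SendKeys("quit{ENTER}")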
Do you have access to the actual code of the application? If so, you can check for a batch file. Otherwise, you can do something like the following using PowerShell:
$Process = Get-Process | Where-Object {$_.ProcessName -eq "notepad"}
If (!($Process)) {
    "Process isn't running, do stuff"
} Else {
    # $myshell must be created before use; the original snippet omitted this.
    $myshell = New-Object -ComObject WScript.Shell
    $myshell.AppActivate("notepad")
    $myshell.SendKeys("Exit")
}
I am only suggesting PowerShell because it's easy for you to call the code. You could also put it in a loop and wait for the process to run.

run command in parent shell from ruby

I'm trying to change the directory of the shell I start the Ruby script from, via the Ruby script itself...
My goal is to build a little program to manage favorite directories and easily switch among them.
Here's what I did
#!/usr/bin/ruby
Dir.chdir("/Users/luca/mydir")
and then tried executing it in several ways...
my_script (this doesn't change the directory)
. my_script (this is interpreted as bash)
. $(ruby my_script) (this is interpreted as bash too!)
any idea?
Cannot be done. Child processes cannot modify their parent's environment (including the parent's current working directory). The . (also known as source) trick only works with shell scripts because you are telling the shell to run that code in the current process rather than spawning a subprocess to run it. Just for fun, try putting exit in a file you run this way (spoiler: you will get logged out).
If you wish to have the illusion of this working, you need to create a shell function that calls your Ruby script and does the actual cd. Since functions run in the current process, they can change the directory. For instance, given this Ruby script (named temp.rb):
#!/usr/bin/ruby
print "/tmp";
You could write this bash function (in, say, your ~/.profile):
function gotmp {
    cd "$(~/bin/temp.rb)"
}
And then you could say gotmp at the command line and have the directory be changed.
#!/usr/bin/env ruby
`../your_script`
Like this?
Or start your script in the directory you want it to operate in.
Maybe I don't get your question. Provide some more details.
