A Python GUI that I am developing executes an exe file located in the same directory. I need to allow the user to open multiple instances of the GUI. This results in the same exe being called simultaneously and raises the error "the process cannot access the file because it is being used by another process". I use a dedicated thread in the Python GUI to run the exe.
How can I allow the multiple GUIs to run the same exe simultaneously?
I would appreciate code examples.
The thread is shown below; its run method executes the exe, which was built from Fortran.
import subprocess
import sys
import threading

import psutil
import wx

# exe_linearise, LineariseEvent and tgssr_show are defined elsewhere in the GUI.
class LineariseThread(threading.Thread):
    def __init__(self, parent):
        threading.Thread.__init__(self)
        self._parent = parent

    def run(self):
        self.p = subprocess.Popen([exe_linearise], shell=True, stdout=subprocess.PIPE)
        print threading.current_thread()
        print "Subprocess started"
        while True:
            line = self.p.stdout.readline()
            if not line:
                break
            print line.strip()
            self._parent.status.SetStatusText(line.strip())
            # Publisher().sendMessage(('change_statusbar'), line.strip())
            sys.stdout.flush()
        if not self.p.poll():
            print "process done"
            evt_show = LineariseEvent(tgssr_show, -1)
            wx.PostEvent(self._parent, evt_show)

    def killtree(self, pid):
        print pid
        parent = psutil.Process(pid)
        print "in killtree sub: "
        for child in parent.get_children(recursive=True):
            child.kill()
        parent.kill()

    def abort(self):
        if self.isAlive():
            print "Linearisation thread is alive"
            # kill the respective subprocesses
            if not self.p.poll():
                # stop them all
                self.killtree(int(self.p.pid))
                self._Thread__stop()
                print str(self.getName()) + " could not be terminated"
        self._parent.LineariseThread_killed = True
I think I figured out a way to avoid the error. It was actually not the execution of the exe itself that raised the error; the error was raised when the exe accessed other files that were locked by another instance of the same exe. Therefore, I decided not to allow multiple instances of the exe to run. Instead, I allow multiple cases to be opened within a single instance, so that I can manage the process threads and avoid the issue mentioned above.
I should mention that the comments given to me helped me study the error messages in detail and figure out what was really going on.
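For example, a rough sketch of that single-instance idea (the helper name and the lock are illustrative, not my actual GUI code; exe_linearise is the variable from the snippet above):

import subprocess
import threading

# One process-wide lock: only one run of the exe may touch its
# working files at a time; the other cases simply queue behind it.
exe_lock = threading.Lock()

def run_case(exe_linearise, on_line):
    """Run the exe for one case, streaming stdout lines to on_line()."""
    with exe_lock:  # serialise access to the exe and the files it locks
        p = subprocess.Popen([exe_linearise], stdout=subprocess.PIPE)
        for line in iter(p.stdout.readline, b""):
            on_line(line.strip())
        p.wait()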
I wish to compare two Abaqus simulations running in parallel with a umat coded in Fortran. It seems that I am able to select the correct standard.exe associated with each run, but I won't always be this lucky. This prompted me to ask whether there is a way to call the Abaqus job and change the name of standard.exe to something like standard1.exe to differentiate between the runs. I checked the Abaqus help, but there doesn't seem to be an option for this on the command line.
There is a lot of room for improvement in job/analysis submission in Abaqus...
Anyway, feel free to have a look at my GitHub repo, where I am trying to fill in what's lacking in Abaqus when submitting jobs. Let me know if you have any questions.
Alternatively, you can use the code below to identify the right process identifier (PID) of the job you are running. You can then kill the process associated with this ID.
import psutil

processesList = psutil.pids()
jobname = ''
print('\n\nStart')
for proc in processesList:
    try:
        p = psutil.Process(proc)
        if p.name() in ('standard.exe', 'explicit.exe', 'pre.exe', 'explicit_dp.exe'):
            jobCpus = '1'
            jobGpus = '0'
            sameJob = False
            print('\nPID: %s' % proc)
            cmdline = p.cmdline()
            for i, line in enumerate(cmdline):
                if line == '-job':
                    # same job as the previous hit if the name repeats
                    sameJob = (jobname == cmdline[i + 1])
                    jobname = cmdline[i + 1]
                    print('Job Name: %s' % jobname)
                elif line == '-indir':
                    jobdir = cmdline[i + 1]
                    print('Job Dir: %s' % jobdir)
                elif line == '-cpus':
                    jobCpus = cmdline[i + 1]
                    print('Cpus number: %s' % jobCpus)
                elif line == '-gpus':
                    jobGpus = cmdline[i + 1]
                    print('Gpus number: %s' % jobGpus)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass  # the process ended meanwhile, or belongs to another user
print('\nEnd\n\n')
In order to kill a process, you can use this command:
import os, signal
os.kill(int(pid), signal.SIGTERM)
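Putting the two together, here is a possible helper (the function name is mine, and it assumes the -job argument convention visible in the command lines above) that terminates only the solver processes belonging to a given job:

import psutil

def kill_abaqus_job(jobname):
    """Terminate every Abaqus solver process started with -job <jobname>."""
    solvers = {'standard.exe', 'explicit.exe', 'pre.exe', 'explicit_dp.exe'}
    for pid in psutil.pids():
        try:
            p = psutil.Process(pid)
            cmd = p.cmdline()
            if p.name() not in solvers or '-job' not in cmd:
                continue
            if cmd[cmd.index('-job') + 1] == jobname:
                p.terminate()  # sends SIGTERM, like os.kill above
        except (psutil.NoSuchProcess, psutil.AccessDenied, IndexError):
            pass  # process vanished, is not ours, or has a malformed command line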
I am developing a long-running program in Ruby. I am writing some integration tests for this. These tests need to kill or stop the program after starting it; otherwise the tests hang.
For example, with a file bin/runner
#!/usr/bin/env ruby
while true do
  puts "Hello World"
  sleep 10
end
The (integration) test would be:
class RunReflectorTest < TestCase
  test "it prints a welcome message over and over" do
    out, err = capture_subprocess_io do
      system "bin/runner"
    end
    assert_empty err
    assert_includes out, "Hello World"
  end
end
Only, obviously, this will not work; the test starts and never stops, because the system call never ends.
How should I tackle this? Is the problem in system itself, and would Kernel#spawn provide a solution? If so, how? Somehow, the following leaves out empty:
class RunReflectorTest < TestCase
  test "it prints a welcome message over and over" do
    out, err = capture_subprocess_io do
      pid = spawn "bin/runner"
      sleep 2
      Process.kill pid
    end
    assert_empty err
    assert_includes out, "Hello World"
  end
end
This direction also seems like it will cause a lot of timing issues (and slow tests). Ideally, a reader would follow the stream of STDOUT and let the test pass as soon as the string is encountered and then immediately kill the subprocess. I cannot find how to do this with Process.
Test Behavior, Not Language Features
First, what you're doing is a TDD anti-pattern. Tests should focus on behaviors of methods or objects, not on language features like loops. If you must test a loop, construct a test that checks for a useful behavior like "entering an invalid response results in a re-prompt." There's almost no utility in checking that a loop loops forever.
However, you might decide to test a long-running process by checking to see:
If it's still running after t time (see the sketch just after this list).
If it's performed at least i iterations.
If a loop exits properly given certain input or upon reaching a boundary condition.
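For example, a minimal sketch of the first check, shown in Python for illustration (the same spawn-then-assert pattern maps onto Ruby's Process.spawn; bin/runner is the script from the question):

import subprocess
import time
import unittest

class RunnerStillRunningTest(unittest.TestCase):
    def test_still_running_after_two_seconds(self):
        # start the long-running script under test
        proc = subprocess.Popen(["bin/runner"])
        try:
            time.sleep(2)
            # poll() returns None while the process is still alive
            self.assertIsNone(proc.poll())
        finally:
            proc.terminate()  # always clean up the subprocess
            proc.wait()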
Use Timeouts or Signals to End Testing
Second, if you decide to do it anyway, you can just escape the block with Timeout::timeout. For example:
require 'timeout'
# Terminates block
Timeout::timeout(3) { `sleep 300` }
This is quick and easy. However, note that using timeout doesn't actually signal the process. If you run this a few times, you'll notice that sleep is still running multiple times as a system process.
It's better to signal the process when you want to exit, using Process::kill, ensuring that you clean up after yourself. For example:
pid = spawn 'sleep 300'
Process::kill 'TERM', pid
sleep 3
Process::wait pid
Aside from resource issues, this is a better approach when you're spawning something stateful and don't want to pollute the independence of your tests. You should almost always kill long-running (or infinite) processes in your test teardown whenever you can.
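As a sketch of that teardown habit, again in Python's unittest for illustration (Minitest's teardown hook plays the same role, and bin/runner is the script from the question):

import subprocess
import unittest

class DaemonTest(unittest.TestCase):
    def setUp(self):
        # each test gets a fresh instance of the long-running process
        self.proc = subprocess.Popen(["bin/runner"])

    def tearDown(self):
        # always signal and reap the child, even when the test failed
        self.proc.terminate()
        self.proc.wait()

    def test_process_stays_alive(self):
        self.assertIsNone(self.proc.poll())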
Ideally, a reader would follow the stream of STDOUT and let the test pass as soon as the string is encountered and then immediately kill the subprocess. I cannot find how to do this with Process.
You can redirect the stdout of a spawned process to any file descriptor by specifying the out option:
pid = spawn(command, :out=>"/dev/null") # write mode
Documentation
Example of redirection
With the answer from CodeGnome on how to use Timeout::timeout and the answer from andyconhin on how to redirect Process::spawn IO, I came up with two Minitest helpers that can be used as follows:
it "runs a deamon" do
wait_for(timeout: 2) do
wait_for_spawned_io(regexp: /Hello World/, command: ["bin/runner"])
end
end
The helpers are:
require 'shellwords'
require 'timeout'

def wait_for(timeout: 1, &block)
  Timeout::timeout(timeout) do
    yield
  end
rescue Timeout::Error
  flunk "Test did not pass within #{timeout} seconds"
end

def wait_for_spawned_io(regexp: //, command: [])
  buffer = ""
  begin
    read_pipe, write_pipe = IO.pipe
    pid = Process.spawn(command.shelljoin, out: write_pipe, err: write_pipe)
    loop do
      buffer << read_pipe.readpartial(1000)
      break if regexp =~ buffer
    end
  ensure
    read_pipe.close
    write_pipe.close
    Process.kill("INT", pid)
  end
  buffer
end
These can be used in a test that starts a subprocess and captures its STDOUT; as soon as the output matches the regular expression, the test passes, otherwise it waits until the timeout and flunks (fails the test).
The loop captures output and passes the test once it sees matching output. It uses an IO.pipe because that is the most transparent mechanism for subprocesses (and their children) to write to.
I doubt this will work on Windows, and wait_for_spawned_io needs some cleaning up, as it is doing slightly too much IMO. Another problem is that the Process.kill('INT') might not reach children that are orphaned but still running after this test has run. I need to find a way to ensure the entire subtree of processes is killed.
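One way to get that whole-subtree kill on POSIX systems is to start the child in its own process group and signal the group. A minimal Python sketch of the idea (Ruby's Process.spawn accepts a :pgroup option for the same purpose):

import os
import signal
import subprocess

# Start the child in a new session, so it (and any children it forks)
# lives in its own process group, separate from the test runner.
proc = subprocess.Popen(["bin/runner"], start_new_session=True)

# ... run assertions against the process output here ...

# Signal the entire group: every process in the subtree receives SIGINT.
os.killpg(os.getpgid(proc.pid), signal.SIGINT)
proc.wait()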
I am doing a demo command line project in Ruby. The structure is like this:
/ROOT_DIR
  init.rb
  /SCRIPT_DIR
    (other scripts and files)
I want users to enter the application only through init.rb, but as it stands, anyone can go into the sub-folder and call the other Ruby scripts directly.
Questions:
In what ways can the above scenario be prevented?
If I were to use directory permissions, would they be reset when moving the code from a Windows machine to a Linux machine?
Is there anything that can be included in the Ruby files themselves to prevent them from being called directly from the OS command line?
You can't do this with file permissions, since the user needs to be able to read the files; removing the read permission means you can't include them either. Removing the execute permission is useful to signal that these files aren't intended to be executed, but it won't prevent people from typing ruby incl.rb.
The easiest way is probably to set a global variable in the init.rb script:
#!/usr/bin/env ruby
FROM_INIT = true
require './incl.rb'
puts 'This is init!'
And then check if this variable is defined in the included incl.rb file:
unless defined? FROM_INIT
  puts 'Must be called from init.rb'
  exit 0
end
puts 'This is incl!'
A second method might be checking the value of $PROGRAM_NAME in incl.rb; this stores the current program name (like argv[0] in many other languages):
unless $PROGRAM_NAME.end_with? 'init.rb'
  puts 'Must be called from init.rb'
  exit 0
end
I don't recommend this though, as it's not very future-proof; what if you want to rename init.rb or make a second script?
I'm trying to set up a set of functions to be skipped by gdb (not stepped into) with commands like:
skip myfunction
But if I place them in ~/.gdbinit instead of typing them at the gdb prompt, I get the error:
No function found named myfunction.
Ignore function pending future shared library load? (y or [n]) [answered N; input not from terminal]
So I need GDB to get a Y answer. I've tried what was suggested for breakpoints, as well as the set confirm off suggested in a comment to this question, but these don't help with the skip command.
How can I set skip in a .gdbinit script and have it answer Y about the future shared library load?
You can use Python to wait for the execution to start, which is equivalent to making the skip pending:
import gdb

to_skip = []

def try_pending_skips(evt=None):
    for skip in list(to_skip):  # make a copy for safe removal
        try:
            # test if the function (aka symbol) is defined
            symb, _ = gdb.lookup_symbol(skip)
            if not symb:
                continue
        except gdb.error:
            # no frame?
            continue
        # yes, we can skip it
        gdb.execute("skip {}".format(skip))
        to_skip.remove(skip)

    if not to_skip:
        # no more functions to skip
        try:
            gdb.events.new_objfile.disconnect(try_pending_skips)  # event fired when the binary is loaded
        except ValueError:
            pass  # was not connected

class cmd_pending_skip(gdb.Command):
    self = None

    def __init__(self):
        gdb.Command.__init__(self, "pending_skip", gdb.COMMAND_OBSCURE)

    def invoke(self, args, from_tty):
        global to_skip
        if not args:
            if not to_skip:
                print("No pending skip.")
            else:
                print("Pending skips:")
                for skip in to_skip:
                    print("\t{}".format(skip))
            return
        new_skips = args.split()
        to_skip += new_skips
        for skip in new_skips:
            print("Pending skip for function '{}' registered.".format(skip))
        try:
            gdb.events.new_objfile.disconnect(try_pending_skips)
        except ValueError:
            pass  # was not connected
        # new_objfile event fired when the binary and libraries are loaded in memory
        gdb.events.new_objfile.connect(try_pending_skips)
        # try right away, just in case
        try_pending_skips()

cmd_pending_skip()
Save this code into a Python file pending_skip.py (or surround it with python ... end in your .gdbinit), then:
source pending_skip.py
pending_skip fct1
pending_skip fct2 fct3
pending_skip # to list pending skips
Documentation references:
GDB Python TOC
Basic Python
Events in Python
Symbols in Python
This feature has been proposed here:
https://sourceware.org/ml/gdb-prs/2015-q2/msg00417.html
https://sourceware.org/bugzilla/show_bug.cgi?id=18531
So far, though, there has been no activity on that issue for six months. As of this writing, the feature is not included in GDB 7.10.
Here's the code:
while 1
  input = gets
  puts input
end
Here's what I want to do but I have no idea how to do it:
I want to create multiple instances of the code to run in the background and be able to pass input to a specific instance.
Q1: How do I run multiple instances of the script in the background?
Q2: How do I refer to an individual instance of the script so I can pass input to the instance (Q3)?
Q3: The script uses gets to take input; how would I pass input into an individual script's gets?
e.g.
Let's say I'm running three instances of the code in the background, and I refer to the instances as #1, #2, and #3 respectively.
I pass "hello" to #1, and #1 puts "hello" to the screen.
Then I pass "world" to #3, and #3 puts "world" to the screen.
Thanks!
UPDATE:
Answered my own question. I found this awesome tutorial: http://rubylearning.com/satishtalim/ruby_threads.html and this resource: http://www.ruby-doc.org/core/classes/Thread.html#M000826.
puts Thread.main
x = Thread.new{loop{puts 'x'; puts gets; Thread.stop}}
y = Thread.new{loop{puts 'y'; puts gets; Thread.stop}}
z = Thread.new{loop{puts 'z'; puts gets; Thread.stop}}
while x.status != "sleep" and y.status != "sleep" and z.status != "sleep"
  sleep(1)
end
Thread.list.each {|thr| p thr }
x.run
x.join
Thank you for all the help, guys! It helped clarify my thinking.
I assume that you mean that you want multiple bits of Ruby code running concurrently. You can do it the hard way using Ruby threads (which have their own gotchas) or you can use the job control facilities of your OS. If you're using something UNIX-y, you can just put the code for each daemon in separate .rb files and run them at the same time.
E.g.,
# ruby daemon1.rb &
# ruby daemon2.rb &
There are many ways to "handle input and output" in a Ruby program. Pipes, sockets, etc. Since you asked about daemons, I assume that you mean network I/O. See Net::HTTP.
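For example, to address one specific instance (Q2/Q3), each daemon could read from its own named pipe instead of STDIN; a minimal POSIX-only sketch in Python for illustration (the pipe path is made up, and Ruby can open a FIFO the same way with File.open):

import os

# One FIFO per daemon instance; instance #1 reads /tmp/daemon1.fifo, etc.
FIFO_PATH = "/tmp/daemon1.fifo"

if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

# Inside daemon #1: read lines from its own pipe instead of STDIN.
with open(FIFO_PATH) as fifo:
    for line in fifo:
        print(line, end="")  # echo the input, like `puts gets`

From a shell, echo hello > /tmp/daemon1.fifo then reaches only instance #1.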
Ignoring what you think will happen with multiple daemons all fighting over STDIN at the same time:
(1..3).map{ Thread.new{ loop{ puts gets } } }.each(&:join)
This will create three threads that loop indefinitely, asking for input and then outputting it. Each thread is "joined", preventing the main program from exiting until each thread is complete (which it never will be).
You could try the multi_daemons gem, which can run multiple daemons and control them.
# this is server.rb
proc_code = Proc.new do
  loop do
    sleep 5
  end
end
scheduler = MultiDaemons::Daemon.new('scripts/scheduler', name: 'scheduler', type: :script, options: {})
looper = MultiDaemons::Daemon.new(proc_code, name: 'looper', type: :proc, options: {})
MultiDaemons.runner([scheduler, looper], { force_kill_timeout: 60 })
To start and stop
ruby server.rb start
ruby server.rb stop