python3: Run C file and python script in parallel - parallel-processing

I've got a snippet of my actual Python script in this post. Basically I want a C program and a pySerial function executed in parallel (the C program controls a motor, the pySerial part communicates with an Arduino). The program will run on an RPi 3B using Spyder3 and Raspbian.
What I've already figured out from the sources below is that if you want to run a terminal program from Python you should use the subprocess module, and if you want to execute something in parallel, the Process class from multiprocessing will do the job.
So I've mixed them together and tried to achieve my goal with the code below, unluckily without any success. The C file starts running immediately when p1 is created [ p1 = Process(target=run_c_file()) ] and the script blocks until the C file has finished. Can anyone out there help? Thank you very much!
BTW I'm using Python 3.5...
My sources:
https://docs.python.org/3.5/library/multiprocessing.html , https://docs.python.org/3.5/library/subprocess.html?highlight=subprocess
import serial_comm as ssf  # My own module. Tested and working when called on its own
import subprocess as sub
from multiprocessing import Process

def run_c_file():
    sub.run("./C_File")  # Call the C file in the same directory. Starts immediately when the script reaches p1 = Process(target=run_c_file())

def run_pyserial(ser_obj):
    ssf.command(ser_obj, "Command")  # Tell the Arduino to do something fancy (tested and working)

ser_obj = ssf.connect()
p1 = Process(target=run_c_file())
p2 = Process(target=run_pyserial(ser_obj))

try:
    p1.start()
    p2.start()
    p1.join()  # Process one should start here (as far as I understood)
    p2.join()  # Process two should start here (as far as I understood)
    '''The following part is still in progress'''
except KeyboardInterrupt:
    print("Aborting")
    p1.terminate()
    p2.terminate()

Try:

p1 = Process(target=run_c_file)
p2 = Process(target=run_pyserial, args=(ser_obj,))

Currently you are calling the functions instead of passing them: Process(target=run_c_file()) runs run_c_file right away and hands its return value to Process, which is why the C file executes immediately and the script blocks.
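For completeness, a minimal sketch of the whole thing with that fix applied (serial_comm/ssf and ./C_File are your own module and binary, so this is untested as written):

import serial_comm as ssf
import subprocess as sub
from multiprocessing import Process

def run_c_file():
    sub.run("./C_File")            # runs only once the child process has started

def run_pyserial(ser_obj):
    ssf.command(ser_obj, "Command")

if __name__ == "__main__":
    ser_obj = ssf.connect()
    # Pass the functions themselves; arguments go in args=()
    p1 = Process(target=run_c_file)
    p2 = Process(target=run_pyserial, args=(ser_obj,))
    try:
        p1.start()
        p2.start()
        p1.join()
        p2.join()
    except KeyboardInterrupt:
        print("Aborting")
        p1.terminate()
        p2.terminate()

On the Pi (fork start method) the child should inherit ser_obj; under spawn (e.g. on Windows) the serial object may not pickle and would have to be opened inside the child instead.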

Related

Python run bash script before exiting

I'm trying to call a bash script with Python, but I need it to be the last thing executed.
I thought just adding this at the end of my script would do it:
val = subprocess.check_call("/scripts/files.sh '%s'" % title, shell=True)
But it's being executed before the code above it. Why?
Last lines above it:
print(q_1)
print(q_2)
print(q_3)
cursor.execute(q_1)
cursor.execute(q_2)
cursor.execute(q_3)
mariadb_connection.commit()
cursor.close()
mariadb_connection.close()
I do use val = subprocess.check_call before all this code to run another bash script too, if that matters
How can I be sure my script will be the last thing executed?
Python is a scripting language, meaning the lines are executed from first to last.
All you have to do is place your command at the end of your Python script.
Let's assume your script looks something like
obj = MySqlSomething()
things
more things
val = subprocess.check_call(x)
If the class MySqlSomething has a destructor, it will be called after check_call, when you fall off the end of the script and obj goes out of scope. The fix is to make this happen earlier, trivially by moving the stuff into a function:
def main_main():
    obj = MySqlSomething()
    things
    more things

main_main()
val = subprocess.check_call(x)
With this arrangement, obj goes out of scope at the end of main_main.
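To see the ordering for yourself, here is a tiny self-contained sketch (MySqlSomething and its __del__ are just stand-ins for the real MariaDB cleanup); in CPython the destructor runs as soon as obj goes out of scope at the end of main_main, i.e. before check_call:

import subprocess

class MySqlSomething:
    def __del__(self):
        # stand-in for the connection cleanup that was running "too late"
        print("destructor: closing connection")

def main_main():
    obj = MySqlSomething()
    print("doing things")
    # obj goes out of scope when main_main returns, so __del__ runs here

main_main()
subprocess.check_call("echo 'bash script runs last'", shell=True)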

why pool.map in python doesn't work

import multiprocessing as mul

def f(x):
    return x**2

pool = mul.Pool(5)
rel = pool.map(f, [1,2,3,4,5,6,7,8,9,10])
print(rel)
When I run the program above, the application is stuck in a loop and can't stop.
I am using Python 3.5 on Windows, is there something wrong?
I am new to finance data analysis, and I am trying to find a way to solve the big-data problem with parallel computing.
It's not working because you are typing the commands in an interactive shell; try saving the code in a file and running it directly.
Don't forget to copy the code correctly: you were missing a very important if __name__ == '__main__' guard (see the documentation).
Save this to a file, for example example.py on the desktop:
import multiprocessing as mul

def f(x):
    return x**2

if __name__ == '__main__':
    pool = mul.Pool(5)
    rel = pool.map(f, [1,2,3,4,5,6,7,8,9,10])
    print(rel)
Then, open a command prompt and type:
python %USERPROFILE%\Desktop\example.py

Python multiprocessing stdin input

All code written and tested with Python 3.4 on Windows 7.
I was designing a console app and needed to use stdin from the command line (Windows) to issue commands and to change the operating mode of the program. The program depends on multiprocessing to spread CPU-bound loads across multiple processors.
I am using stdout to monitor the status and some basic return information, and stdin to issue commands that load different sub-processes based on the returned console information.
This is where I found a problem. I could not get the multiprocessing module to accept stdin input, but stdout was working just fine. I then found some help here on Stack Overflow, so I tested it and found that with the threading module this all works great, except that all output to stdout is paused until each time stdin is cycled, due to stdin blocking while holding the GIL.
I will say I have been successful with a workaround implemented with msvcrt.kbhit(). However, I can't help but wonder if there is some sort of bug in multiprocessing that is making stdin not read any data. I tried numerous ways and nothing worked when using multiprocessing. I even attempted to use Queues, but I did not try pools or any other methods from multiprocessing.
I also did not try this on my Linux machine since I was focusing on trying to get it to work.
Here is simplified test code that does not function as intended (reminder: this was written in Python 3.4 on Windows 7):
import sys
import time
from multiprocessing import Process

def function1():
    while True:
        print("Function 1")
        time.sleep(1.33)

def function2():
    while True:
        print("Function 2")
        c = sys.stdin.read(1)  # Does not appear to be waiting for read before continuing loop.
        sys.stdout.write(c)    # nothing in 'c'
        sys.stdout.write(".")  # checking to see if it works at all.
        print(str(c))          # trying something else, still nothing in 'c'
        time.sleep(1.66)

if __name__ == "__main__":
    p1 = Process(target=function1)
    p2 = Process(target=function2)
    p1.start()
    p2.start()
Hopefully someone can shed light on whether this is intended functionality, whether I implemented it incorrectly, or can offer some other useful bit of information.
Thanks.
When you take a look at Python's implementation of multiprocessing.Process._bootstrap(), you will see this:
if sys.stdin is not None:
    try:
        sys.stdin.close()
        sys.stdin = open(os.devnull)
    except (OSError, ValueError):
        pass
You can also confirm this by using:
>>> import sys
>>> import multiprocessing
>>> def func():
... print(sys.stdin)
...
>>> p = multiprocessing.Process(target=func)
>>> p.start()
>>> <_io.TextIOWrapper name='/dev/null' mode='r' encoding='UTF-8'>
And reading from os.devnull immediately returns an empty result:
>>> import os
>>> f = open(os.devnull)
>>> f.read(1)
''
You can work around this by using open(0):
file is either a string or bytes object giving the pathname (absolute or relative to the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed, unless closefd is set to False.)
And "0 file descriptor":
File descriptors are small integers corresponding to a file that has been opened by the current process. For example, standard input is usually file descriptor 0, standard output is 1, and standard error is 2:
>>> def func():
... sys.stdin = open(0)
... print(sys.stdin)
... c = sys.stdin.read(1)
... print('Got', c)
...
>>> multiprocessing.Process(target=func).start()
>>> <_io.TextIOWrapper name=0 mode='r' encoding='UTF-8'>
Got a
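For reference, here is a sketch of your original two-process example with the open(0) workaround applied inside the child; how reliably the child can read the terminal this way depends on the platform and start method, so treat it as a starting point rather than a guaranteed fix:

import sys
import time
from multiprocessing import Process

def function1():
    while True:
        print("Function 1")
        time.sleep(1.33)

def function2():
    sys.stdin = open(0)  # re-attach the child to file descriptor 0
    while True:
        print("Function 2")
        c = sys.stdin.read(1)
        print("Got", c)
        time.sleep(1.66)

if __name__ == "__main__":
    p1 = Process(target=function1)
    p2 = Process(target=function2)
    p1.start()
    p2.start()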

ipython notebook : how to parallelize external script

I'm trying to use parallel computing from the IPython parallel library. But I have little knowledge about it, and I find the docs difficult to read for someone who knows nothing about parallel computing.
Funnily, all the tutorials I found just re-use the example in the docs, with the same explanation, which, from my point of view, is useless.
Basically what I'd like to do is run a few scripts in the background so they are executed at the same time. In bash it would be something like:
for my_file in $(cat list_file); do
    python pgm.py my_file &
done
But the bash interpreter of the IPython notebook doesn't handle background mode.
It seems the solution was to use the parallel library from IPython.
I tried :
from IPython.parallel import Client
rc = Client()
rc.block = True
dview = rc[:2] # I take only 2 engines
But then I'm stuck. I don't know how to run the same script or program twice (or more) at the same time.
Thanks.
One year later, I eventually managed to get what I wanted.
1) Create a function with what you want to do on the different CPUs. Here it just calls a script from the shell with the ! IPython magic command. I guess it would also work with the call() function.
def my_func(my_file):
    !python pgm.py {my_file}
Don't forget the {} when using !
Note also that the path to my_file should be absolute, since the cluster engines run where you started the notebook (with jupyter notebook or ipython notebook), which is not necessarily where your files are.
2) Start your IPython notebook cluster with the number of CPUs you want.
Wait a couple of seconds, then execute the following cell:
from IPython import parallel
rc = parallel.Client()
view = rc.load_balanced_view()
3) Get a list of the files you want to process:
files = list_of_files
4) Asynchronously map your function over all your files on the view of the engines you just created (not sure of the wording).
r = view.map_async(my_func, files)
While it's running you can do something else in the notebook (it runs in the "background"!). You can also call r.wait_interactive(), which interactively reports the number of files processed, the time spent so far, and the number of files left. This will prevent you from running other cells (but you can interrupt it).
And if you have more files than engines, no worries, they will be processed as soon as an engine finishes with 1 file.
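Putting steps 1-4 together, a rough end-to-end sketch of a notebook cell might look like this (pgm.py and the file names are placeholders; adjust them to your own setup):

import os
from IPython import parallel

def my_func(my_file):
    # the ! shell magic works here because each engine is itself an IPython kernel
    !python pgm.py {my_file}

rc = parallel.Client()
view = rc.load_balanced_view()

# placeholder file names; made absolute so the engines can find them
files = [os.path.abspath(f) for f in ["run1.dat", "run2.dat", "run3.dat"]]

r = view.map_async(my_func, files)
r.wait_interactive()  # optional: shows progress, blocks other cells until done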
Hope this will help others!
This tutorial might be of some help:
http://nbviewer.ipython.org/github/minrk/IPython-parallel-tutorial/blob/master/Index.ipynb
Note also that I still have IPython 2.3.1; I don't know if it has changed since Jupyter.
Edit: it still works with Jupyter, see here for differences and potential issues you may encounter.
Note that if you use external libraries in your function, you need to import them on the different engines with:
%px import numpy as np
or
%%px
import numpy as np
import pandas as pd
Same with variables and other functions: you need to push them to the engines' namespace:
rc[:].push(dict(
    foo=foo,
    bar=bar))
If you're trying to execute some external scripts in parallel, you don't need to use IPython's parallel functionality. Replicating bash's parallel execution can be achieved with the subprocess module as follows:
import subprocess
procs = []
for i in range(10):
    procs.append(subprocess.Popen(['ls', '/Users/shad/tmp/'], stdout=subprocess.PIPE))

results = []
for proc in procs:
    stdout, _ = proc.communicate()
    results.append(stdout)
Be wary that if your subprocess generates a lot of output, the process will block. If you print the output (results) you get:
print results
['file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n']

Create background process in windows without visible console window

How do I create a background process with Haskell on Windows without a visible command window being created?
I wrote a Haskell program that runs backup processes periodically, but every time I run it, a command window opens up on top of all the other windows. I would like to get rid of this window. What is the simplest way to do this?
You should really tell us how you are trying to do this currently, but on my system (using Linux) the following snippet will run a command without opening a new terminal window. It should work the same way on Windows.
module Main where

import System
import System.Process
import Control.Monad

main :: IO ()
main = do
    putStrLn "Running command..."
    pid <- runCommand "mplayer song.mp3" -- or whatever you want
    replicateM_ 10 $ putStrLn "Doing other stuff"
    waitForProcess pid >>= exitWith
Thanks for the responses so far, but I've found my own solution. I did try a lot of different things, from writing a vbs script as suggested to a standalone program called hstart. hstart worked... but it creates a separate process which I didn't like very much, because then I can't kill it in the normal way. But I found a simpler solution that requires only Haskell code.
My code from before was a simple call to runCommand, which did pop up the window. An alternative function you can use is runProcess, which has more options. From peeking at the GHC source file runProcess.c, I found that the CREATE_NO_WINDOW flag is set when you supply redirects for all of STDIN, STDOUT, and STDERR. So that's what you need to do: supply redirects for those. My test program looks like:
import System.Process
import System.IO

main = do
    inH <- openFile "in" ReadMode
    outH <- openFile "out" WriteMode
    runProcess "rsync.bat" [] Nothing Nothing (Just inH) (Just outH) (Just outH)
This worked! No command window again! A caveat is that you need an empty file for inH to read as STDIN, even though in my situation it was not needed.
The simplest way I can think of is to run the rsync command from within a Windows Shell script (vbs or cmd).
I don't know anything about Haskell, but I had this problem in a C project a few months ago.
The best way to execute an external program without any windows popping up is to use the ShellExecuteEx() API function with the "open" verb. If ShellExecuteEx() is available to you in Haskell, then you should be able to achieve what you want.
The C code looks something like this:
SHELLEXECUTEINFO Info;
BOOL b;
// Execute it
memset (&Info, 0, sizeof (Info));
Info.cbSize = sizeof (Info);
Info.fMask = SEE_MASK_NOCLOSEPROCESS | SEE_MASK_FLAG_NO_UI;
Info.hwnd = NULL;
Info.lpVerb = "open";
Info.lpFile = "rsync.exe";
Info.lpParameters = "whatever parameters you like";
Info.lpDirectory = NULL;
Info.nShow = SW_HIDE;
b = ShellExecuteEx (&Info);
if (b)
{
    // Looks good; if there is an instance, wait for it
    if (Info.hProcess)
    {
        // Wait
        WaitForSingleObject (Info.hProcess, INFINITE);
    }
}
