Parallel processing not working using create_app in Julia

So I'm using the create_app() method from PackageCompiler to build an app from my Julia package. It works, except that the original package could run on multiple processes, and after it's turned into an app it doesn't anymore.
I put this at the beginning of my script (and in the app main function)
@info("using $(nworkers()) workers\n")
It outputs whatever I pass with the -p flag when running the script, indicating it is indeed running with multiple workers. After the package is turned into an app, however, it always prints "using 1 workers" regardless of the flags I pass via --julia-args -pX.
Is there something that I should enable to make this work, or is this inherently not possible?
cheers
jiq
UPDATE: it seems that using addprocs() does work (which gives me a workaround), but I'm still confused as to why the command-line argument -p is not picked up.
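For reference, a minimal sketch of that workaround inside the julia_main entry point that create_app compiles; the fixed worker count here is an assumption, and parsing it out of ARGS yourself would work just as well:

using Distributed

function julia_main()::Cint
    # Workaround: add workers explicitly instead of relying on -p,
    # which the compiled app does not pick up.
    addprocs(4)  # hypothetical fixed count; could be read from ARGS instead
    @info("using $(nworkers()) workers")
    return 0
end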

Related

Use non-built-in bash commands without modifying .bashrc

I'm working on a cluster and using custom toolkits (more specifically, the SRA Toolkit). In order to use it, I first had to download and unpack it to a specific folder in my directory.
Then I had to modify .bashrc to include the following segment:
# User specific aliases and functions
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
Now I can use the SRA Tools from the bash command line, e.g.
prefetch SR111111
My question is, can I use those tools without modifying my .bashrc?
The reason I want to do this is that I wrote a .sh script that takes a long time to run, and my cluster uses the Sun Grid Engine job management system. When I submitted my script to it, the job failed because an SRA Toolkit command I used was unrecognized.
EDIT (1):
I modified the location of my prefetch command, and it now lives at:
/MYNAME/APPS/SRA_TOOLS/bin
which is different from what's in .bashrc:
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
Then I ran what @Darkman suggested (an if/then/else/fi block, with the export under the else). The output shows that it didn't find the SRA Tools at first (because the path in .bashrc is different), but it found them via the else branch, and the script is running normally. Weird. It works on my job management system.
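For reference, a minimal sketch of what that check might look like; @Darkman's exact test isn't shown, but command -v is one standard way to probe for the tool, and the path is the one from the edit above:

# Add the SRA Toolkit to the PATH only if prefetch isn't already found.
if command -v prefetch >/dev/null 2>&1; then
    echo "SRA Tools already on PATH"
else
    export PATH="$PATH:/MYNAME/APPS/SRA_TOOLS/bin"
fi

prefetch SR111111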
Thanks everybody.

Why can't I redirect stdout of a python script to a file

I am starting my service running on a Raspberry Pi 2 (Raspbian) using a command in rc.local which looks like this:
python3.4 /home/pi/SwitchService/ServiceStart.py >/home/pi/SwitchService/log &
python3.4 /home/pi/test.py >/home/pi/log2 &
For some reason I don't see any text in the log file of my service although the script prints to stdout.
The two scripts look like this:
test.py
print("Test")
ServiceStart.py
from Server import Server
print("Test")
if __name__ == "__main__":
    server = Server()
Because I couldn't get the bash solution to work, I tried this other approach to see whether it works for me. It behaves exactly the same as the bash-based method: my service writes nothing to the log file, although the empty file is created.
First, make sure that your script is actually running. Many schedulers and startup routines don't have PATH set, so it may not be finding python3.4. Try modifying the command to include the full path (e.g. /full/path/python3.4).
Secondly, it's not recommended to include long-running scripts in rc.local without running them in the background (the documentation even states this). The Raspberry Pi waits for the commands to finish before continuing to boot, so if a script runs forever, your Raspberry Pi may never finish booting.
Lastly, assuming the previous two issues have been taken care of, make sure that your program isn't buffering output too aggressively. Python block-buffers stdout when it is redirected to a file rather than a terminal, so try flushing stdout to see if that helps.
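A minimal sketch of both fixes; -u is a standard CPython flag, and flush= has been a keyword argument of print() since Python 3.3:

# In rc.local: run the interpreter unbuffered so output reaches the log immediately
python3.4 -u /home/pi/SwitchService/ServiceStart.py >/home/pi/SwitchService/log &

# Or inside the script: flush each print explicitly
print("Test", flush=True)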

How can I make a local Git hook run a Windows executable and wait for it to return?

I'm working in a Windows environment. I have a Git repository and am writing a custom pre-commit hook. I am much more comfortable writing a quick and dirty console application in C# than trying to figure out Perl syntax so that's the route I'm going.
My .git/hooks/pre-commit file looks like this:
#!/bin/sh
start MyHelperApp.exe
And this works, somewhat. As you can see, I have a compiled helper application in the root of the repo directory (and it is .gitignore'd), and this does indeed launch my application successfully when I call git commit. However, it doesn't wait for the process to finish, nor does it seem to care what the return code of the process is. I assume this is because start is asynchronous and returns a 0 exit code every time.
I have reason to suspect that the start process which is getting called here is not the native Windows start command, because I tried changing it to start /wait MyHelperApp.exe but this had no effect. Also trying to call MyHelperApp.exe directly gives a "command not found" error, and so does changing start to call. I suspect that start is an emulated bash command and it's running the bash version instead of the Windows version?
Anyways, my helper app does return different exit codes depending on different conditions, so it'd be great if those could be used. (A pre-commit hook fails the commit if the hook script exits with a nonzero code.) How might I go about utilizing this?
Call the executable directly, don't use start.
Also trying to call MyHelperApp.exe directly gives a "command not found" error
If the PATH variable doesn't contain a . entry, bash won't look in the current directory to find executables. Call ./MyHelperApp.exe to make it explicit that it should be run from the current directory.
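A minimal version of the hook along those lines; sh already exits with the status of the last command, but the explicit exit documents that the helper's return code is what decides whether the commit proceeds:

#!/bin/sh
# Run the helper from the repo root and wait for it to finish;
# a nonzero exit code aborts the commit.
./MyHelperApp.exe
exit $?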

How Secure is using execFile for Bash Scripts?

I have a node.js app which is using the child_process.execFile command to run a command-line utility.
I'm worried that it would be possible for a user to run arbitrary commands on the server (an rm -rf / horror scenario comes to mind).
How secure is using execFile for Bash scripts? Any tips to ensure that flags I pass to execFile are escaped by the unix box hosting the server?
Edit
To be more precise, I'm more wondering if the arguments being sent to the file could be interpreted as a command and executed.
The other concern is inside the bash script itself, which is technically outside the scope of this question.
Using child_process.execFile by itself is perfectly safe as long as the user doesn't get to specify the command name.
It does not run the command in a shell (like child_process.exec does), so there is no need to escape anything.
child_process.execFile will execute commands with the user id of the node process, so it can do anything that user could do, which includes removing all the server files.
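A minimal sketch of why that holds; the ls command here is just an illustration, and the point is that the malicious-looking argument reaches it as a literal string instead of being parsed by a shell:

const { execFile } = require('child_process');

// The command name is hard-coded; only the argument comes from the user.
// execFile passes each array element verbatim, so the shell
// metacharacters below are never interpreted.
const userInput = 'some_file; rm -rf /';
execFile('/bin/ls', [userInput], (error, stdout, stderr) => {
  if (error) {
    // ls merely reports that no file with that literal name exists;
    // nothing after the semicolon was executed.
    console.error(stderr.trim());
    return;
  }
  console.log(stdout);
});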
It's not a good idea to let the user pass in the command name, as you seem to be implying by your question.
You could consider running the script in a sandbox by using chroot, limiting which commands are available and what resides on the accessible file system, but this could get complex in a hurry.
The command you pass will get executed directly via some flavor of exec, so unless what you're trying to execute is a script, it does not need to be escaped in any way.

Calling Rspec with syntax like ruby -I

I am trying to use https://github.com/rifraf/Vendorize which is run using a command like
D:\projects\SomeLibrary\lib>ruby -I..\..\Vendorize\lib -rvendorize some_lib.rb
It does something clever where it intercepts required files and logs them, but only the ones that actually get loaded when you run the command. On its documentation pages it says
You can run the program several times with different options if the
required files depend on the options.
Or just run your tests…
I want to run all the tests with the -I option from the command line above, so that all the different avenues of code are exercised and the libraries loaded (and logged). Given that I can run them like:
D:\projects\SomeLibrary\lib>rspec ..\spec\some_spec.rb
How do I do this? Thanks!
NB: I am (a) a Ruby newbie and (b) running Windows.
I would try writing something like this at the top of some_spec.rb:
require_relative '../../Vendorize/lib/vendorize'
You might need to adjust that path depending on where the spec file lives, since require_relative resolves against the spec file's own directory.
Then just run your specs with rspec as you normally do, without any extra options.
If that doesn't work, then locate the rspec.rb executable and run:
ruby -I..\..\Vendorize\lib -rvendorize path/to/rspec.rb ..\spec\some_spec.rb
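Alternatively, rspec's own command line accepts -I and -r flags that mirror ruby's, so (assuming your RSpec version supports them) something like this may do the same thing in one step:

D:\projects\SomeLibrary\lib>rspec -I..\..\Vendorize\lib -rvendorize ..\spec\some_spec.rb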
