Ruby: How do I get the number of running subprocesses (fork)?

I want to limit the subprocess count to 3. Once it hits 3, I want to wait until one of the processes stops and then start a new one. I'm using Kernel.fork to start the processes.
How do I get the number of running subprocesses? Or is there a better way to do this?

A good question, but I don't think there's such a method in Ruby, at least not in the standard library. There are lots of gems out there, though.
This problem, however, sounds like a job for the Mutex class. Look up the section on condition variables in the Ruby threading documentation for how to use Ruby's mutexes.
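For what it's worth, here is a minimal sketch of that idea, assuming a hypothetical run_task method and a placeholder tasks list: a counter guarded by a Mutex, plus a reaper thread that waits on children and signals a ConditionVariable whenever a slot frees up.
require 'thread'

MAX_CHILDREN = 3
mutex   = Mutex.new
freed   = ConditionVariable.new
running = 0

# Reaper thread: block until any child exits, then free up a slot.
Thread.new do
  loop do
    begin
      Process.wait
      mutex.synchronize do
        running -= 1
        freed.signal
      end
    rescue Errno::ECHILD
      sleep 0.1              # no children yet; check again shortly
    end
  end
end

tasks.each do |task|         # `tasks` stands in for your real work list
  mutex.synchronize do
    freed.wait(mutex) while running >= MAX_CHILDREN
    running += 1
  end
  fork { run_task(task) }    # hypothetical worker method
end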

I usually have a Queue of tasks to be done, and then have a couple of threads consuming tasks until they receive an item indicating the end of work. There's an example in "Programming Ruby" under the Thread library. (I'm not sure if I should copy and paste the example to Stack Overflow - sorry)
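A minimal sketch of that pattern (not the book's example), assuming a hypothetical process method; :done acts as the end-of-work marker:
require 'thread'

queue   = Queue.new
workers = 3.times.map do
  Thread.new do
    while (task = queue.pop) != :done
      process(task)           # hypothetical method doing the real work
    end
  end
end

tasks.each { |t| queue << t } # `tasks` stands in for your real work list
3.times { queue << :done }    # one end-of-work marker per worker
workers.each(&:join)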

My solution was to use trap("CLD") to catch SIGCLD whenever a child process ended, and decrement a global counter of running processes.
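A minimal sketch of that approach, assuming a hypothetical handle method; the handler loops with WNOHANG because several children can exit behind a single signal delivery:
$running = 0

trap("CLD") do
  begin
    while Process.wait(-1, Process::WNOHANG)
      $running -= 1          # a child exited; free up a slot
    end
  rescue Errno::ECHILD
    # no children left to reap
  end
end

jobs.each do |job|           # `jobs` stands in for your real work list
  sleep 0.1 while $running >= 3
  $running += 1
  fork { handle(job) }       # hypothetical worker method
end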

Related

Waiting for a previously started program to close

I would like to know how to wait for a specific program, started earlier, to close. Below is an example using a waitfor command, but unfortunately I don't know how to write it correctly, so I am asking for your help.
system('"C:\Program Files\Google\Chrome\Application\chrome.exe" &');
waitfor "closing of chrome.exe"
You should follow the advice in this other Q&A to find the PID of the new program right after you start it. Then, in a loop, check whether that PID is still running (using the same approach as before), calling pause(1) between checks to avoid polling too frequently.
I guess you can do something like:
time_in_seconds = 60
while ~isfile("output_folder/file.db")
    pause(time_in_seconds)   % poll for the output file once a minute
end
Note that this program, as is, requires the file to exist at some point; otherwise it loops forever. Make sure you put safeguards in place to end it in case the file never gets created (such as a time limit).
But I am not sure it makes sense to have MATLAB wait 40 minutes for a script to finish...

Correct use of the Bash wait command with an unknown number of processes

I'm writing a bash script that essentially fires off a Python script that takes roughly 10 hours to complete, followed by an R script that checks the outputs of the Python script for anything I need to be concerned about. Here is what I have:
ProdRun="python scripts/run_prod.py"
echo "Commencing Production Run"
$ProdRun #Runs python script
wait
DupCompare="R CMD BATCH --no-save ../dupCompareTD.R" #Runs R script
$DupCompare
Now my issue is that the Python script can often generate a whole heap of different processes on our Linux server depending on its input, with lots of different PIDs, and we have heaps of workers using the same server firing off scripts. As far as I can tell from reading, the wait command must wait for all processes to finish or for a specific PID to finish; but when I cannot tell what or how many PIDs will be assigned, or how many processes will run, how exactly do I use it?
EDIT: Thank you to all who helped; here is what caused my dilemma, for anyone searching for this. I broke the ProdRun Python script up into the individual scripts it was calling, but still had the issue. I then found that one of those scripts was calling another, smaller script with a "&" at the end, which made it ignore any commands to wait on it inside the Python script itself. Simply removing the "&" and inserting an os.system() call allowed all the code to run sequentially.
It sounds like you are trying to implement a job scheduler, possibly with complex dependencies between different tasks. I recommend using a dedicated job scheduler instead. It allows you to specify how to run those jobs while also benefiting from features like monitoring and the handling of exceptional cases and errors.
Examples are the open source Rundeck (https://github.com/rundeck/rundeck) or the commercial Control-M (http://www.bmcsoftware.uk/it-solutions/control-m.html).
Make your Python program wait on the children it spawns; that's the proper way to fix this scenario. Then you don't have to wait for anything else once the Python process itself finishes.
(Also, don't put your commands in variables.)

Go program launching several processes

I'm playing with Go to understand its features and syntax. I've written a simple producer-consumer program with concurrent go functions and a priority buffer in the middle. A single producer produces tasks with a certain priority and sends them to a buffer using a channel. A set of consumers ask for a task when idle, receive it, and consume it. The intermediate buffer stores a set of tasks in a priority queue, so the highest-priority tasks are served first. The program also prints the garbage collector activity (how many times it was invoked and how much time it took to collect the garbage).
I'm running it on a Raspberry Pi using Go 1.1.
The software seems to work fine, but I found that at the OS level, htop shows 4 running processes with the same memory use, and the sum of their CPU use is over 100% (the Raspberry Pi has only one core, so I suppose it has something to do with threads/processes). Also, the system load is about 7% of the CPU, I suppose because of constant context switching at the OS level. The GOMAXPROCS environment variable is set to 1 or unset.
Do you know why Go is using more than one OS process?
Code can be found here: http://pastebin.com/HJUq6sab
Thank you!
EDIT:
It seems that htop shows the system's lightweight processes. Go programs run several of these lightweight processes (they are not the same thing as goroutines), so htop shows several processes, while ps or top shows just one, as it should.
Please try killing all of the suspect processes and then running it again, only once. Also, do not use go run, at least for now; at a minimum, it blurs the number of running processes.
I suspect the other instances are simply leftovers from your previous development attempts (probably invoked through go run and not properly [indirectly] killed on SIGINT [hypothesis only]), especially because there's a 1-hour "timeout" at the end of main (instead of proper synchronization or select{}). A Go binary can spawn new threads, but it should never create new processes unless explicitly asked to, which is not the case in your code: it doesn't even import "os/exec" or "syscall" in the first place.
If my guess about the combination of go run and a long timeout really is the culprit, then there's possibly some difference in the Raspberry Pi kernel with respect to what the dev guys are using for testing.

Count number of executions of batch-script

This is my problem: I've got a batch script that I can't modify (let's call it foo), and I would like to count how many times a day this script is executed, to keep track of that data.
Preferably, I would like to write the number of executions, with date and exit code, to some kind of log file.
So my question is whether this is possible and, if so, how: can I create a batch script (or something else) that works in the background and writes every execution of foo to a log?
(I know this would be easy if I could modify foo, but I can't. Also, everything is running on WinXP machines.)
You could write a wrapper script that does the logging and then calls the existing script. Then use the wrapper in place of the original script.
Consider writing a program that interrogates the Task Manager.
See http://www.netomatix.com/ProcDiagnostics.aspx
You could, for example, write a simple console app which runs on a timer; every 5 seconds it checks whether your foo application process exists. If it finds that it does, it takes that moment as the start time of the application; if it no longer finds it, it assumes the application has closed and logs that information. It wouldn't be accurate to the second by any means, but it would give you a rough approximation of when the thing starts and stops.
You might be able to configure Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) to capture the information you require.

Scaling a ruby script by launching multiple processes instead of using threads

I want to increase the throughput of a script which does network I/O (a scraper). Instead of making it multithreaded in Ruby (I use the default 1.9.1 interpreter), I want to launch multiple processes. Is there a system for doing this where I can track when one process finishes, so I can launch another and keep X of them running at any time? Also, some will run with different command-line args. I was thinking of writing a bash script, but that sounds like a potentially bad idea if there already exists a method for doing something like this on Linux.
I would recommend not forking, but instead using EventMachine (and the excellent em-http-request if you're doing HTTP). Managing multiple processes can be a bit of a handful, even more so than handling multiple threads, but going down the evented path is, in comparison, much simpler. Since you want to do mostly network I/O, which consists mostly of waiting, I think an evented approach would scale as well as, or better than, forking or threading. And most importantly: it will require much less code, and it will be more readable.
Even if you decide on running separate processes for each task, EventMachine can help you write the code that manages the subprocesses using, for example, EventMachine.popen.
And finally, if you want to do it without EventMachine, read the docs for IO.popen, Open3.popen3, and Open4.popen4. All do more or less the same thing but give you access to the stdin, stdout, stderr (Open3, Open4), and PID (Open4) of the subprocess.
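For instance, a quick sketch of the IO.popen and Open3 variants (the command is just an example):
require 'open3'

# IO.popen gives you the child's stdout as an IO object.
IO.popen("ls -l") do |io|
  io.each_line { |line| puts line }
end

# Open3.popen3 also exposes stdin, stderr, and a thread holding the status.
Open3.popen3("ls -l") do |stdin, stdout, stderr, wait_thr|
  puts stdout.read
  puts "exit status: #{wait_thr.value.exitstatus}"
end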
You can try fork: http://ruby-doc.org/core/classes/Process.html#M003148
fork returns the PID of the child, so you can keep it and check whether that process is still running or not.
If you want to manage IO concurrency, I suggest you use EventMachine.
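A small sketch of that, assuming a hypothetical do_scrape method and args; Process.waitpid with WNOHANG lets you poll the child without blocking:
pid = fork { do_scrape(args) }  # hypothetical worker method and args

begin
  if Process.waitpid(pid, Process::WNOHANG)
    puts "#{pid} has finished"  # child reaped; slot is free again
  else
    puts "#{pid} is still running"
  end
rescue Errno::ECHILD
  puts "#{pid} was already reaped"
end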
You can either:
- implement a ThreadPool (a ProcessPool, in your case), or find an equivalent gem, or
- prepare an array of all, let's say, 1000 tasks to be processed, split it into, say, 10 chunks of 100 tasks (10 being the number of parallel processes you want to launch), and launch 10 processes, each of which right away receives 100 tasks to process (see the sketch below). That way you don't need to launch 1000 processes and ensure that no more than 10 of them work at the same time.
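A minimal sketch of that second option, assuming a hypothetical scrape method:
tasks  = (1..1000).to_a              # stand-in for your real task list
chunks = tasks.each_slice(100).to_a  # 10 chunks of 100 tasks

pids = chunks.map do |chunk|
  fork do
    chunk.each { |t| scrape(t) }     # hypothetical per-task method
  end
end

pids.each { |pid| Process.waitpid(pid) }  # wait for all 10 to finish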
