GNU Parallel resource limit per task? - parallel-processing

Is there any way to set resource limits for each job that GNU Parallel is executing?
I am interested in a memory limit or a time limit.

Look at --timeout (especially --timeout 1000% seems useful). For a memory limit, try:
ulimit -m 1000000
ulimit -v 1000000
to limit each job to roughly 1 GB (the values are in KB).
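A minimal sketch of combining the two per job (job1.sh, job2.sh and the 1 GB figure are placeholders); since each job runs in its own shell, the ulimit applies per job, and --timeout 1000% kills any job that runs longer than 10 times the median job runtime:
parallel --timeout 1000% 'ulimit -v 1000000; {}' ::: ./job1.sh ./job2.sh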
See also niceload and
https://unix.stackexchange.com/questions/173762/gnu-parallel-limit-memory-usage/173860?noredirect=1#comment287065_173860

Related

Monitoring RAM and CPU consumption of Snakemake

I want to get the CPU and RAM usage of a Snakemake pipeline over time.
I run my pipeline on a Slurm-managed cluster. I know that Snakemake
includes benchmarking functions, but they only report peak consumption.
Ideally, I would like an output file that looks like this:
t CPU RAM
1 103.00 32
2 ... ...
Is there any program to do so?
Thanks!
I don't know of a program that already does this, but you can monitor CPU and memory usage via native Unix commands; this post gives an answer that could fit your requirements.
Here is a summary of that answer, adapted for this context:
You can use this bash function
logsnakemake() { while sleep 1; do ps -p $1 -o pcpu= -o pmem= ; done; }
You can tweak the frequency of logging by modifying the value of sleep.
To log your snakemake process with pid=123 just type in the terminal:
$ logsnakemake 123 | tee /tmp/pid.log
I've also found Syrupy on GitHub: a ps parser written in Python with clear documentation.
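If you want the log to match the t/CPU/RAM layout above, a variant of the same idea (an untested sketch) could prepend an elapsed-seconds column:
logsnakemake() { t=0; while sleep 1; do t=$((t+1)); echo "$t $(ps -p "$1" -o pcpu= -o pmem=)"; done; }
Note that pmem is a percentage of total RAM; use -o rss= instead if you want resident memory in kilobytes.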

Is there a way to limit time and memory resources for running a bash command?

Basically I want to run my compiled C++ code and limit its execution time (to a second, for example) and memory (to 100k), like the online judges do. Is it possible by adding options to the command? This has to be done without modifying the source code, of course.
Try the ulimit command; it can set limits on CPU time and memory.
Try this example:
bash -c 'ulimit -St 1 ; while true; do true; done;'
The result you will get is:
CPU time limit exceeded (core dumped)
To limit wall-clock time you can use the timeout command:
timeout 15s command
Check this for more details: link
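Putting the two together, a hedged sketch (./my_program and the figures are placeholders): roughly 1 s of CPU time, about 100 MB of address space, and 5 s of wall-clock time. Running the limits inside bash -c keeps them from affecting your interactive shell.
bash -c 'ulimit -St 1; ulimit -Sv 102400; timeout 5s ./my_program'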

How to limit the allowed time for a script run as a sub process in a ksh script

I am trying to limit the allowed time of a sub process in a ksh script. I tried using ulimit (hard and soft values), but the sub process always breaks the limit (it takes longer than the allowed time).
# value for a test
Sc_Timeout=2
Sc_FileOutRun=MyScript.log.Running
Sc_Cmd=./AScriptToRunInSubShell.sh
(
ulimit -Ht ${Sc_Timeout}
ulimit -St ${Sc_Timeout}
time (
${Sc_Cmd} >> ${Sc_FileOutRun} 2>&1
) >> ${Sc_FileOutRun} 2>&1
# some other command not relevant for this
)
result:
1> ./MyScript.log.Running
ulimit -Ht 2
ulimit -St 2
1>> ./MyScript.log.Running 2>& 1
real 0m11.45s
user 0m3.33s
sys 0m4.12s
I expect a timeout error with a sys or user time of something like 0m2.00s
When I run the test directly from the command line, the hard ulimit does seem to limit the time effectively, but not inside the script.
The test/dev system is AIX 6.1, but this should also work on other versions and on Sun and Linux.
Each process has its own time limits, but time shows the cumulative time for the script. Each time you create a child process, that child has its own limits. So, for example, if you call cut and grep in the script, those processes consume their own CPU time; their usage is not decremented from the script's quota, although the limits themselves are inherited.
If you want a time limit, you might wish to investigate trap ALRM.
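Since ulimit only caps CPU time, one hedged alternative for a wall-clock limit is a background watchdog that kills the sub process after Sc_Timeout seconds (a sketch only; Sc_Pid and Sc_WatchPid are illustrative names):
${Sc_Cmd} >> ${Sc_FileOutRun} 2>&1 &                    # start the sub process in the background
Sc_Pid=$!
( sleep ${Sc_Timeout}; kill ${Sc_Pid} 2>/dev/null ) &   # watchdog: kill it after the timeout
Sc_WatchPid=$!
wait ${Sc_Pid}                                          # wait for the sub process to finish or be killed
kill ${Sc_WatchPid} 2>/dev/null                         # cancel the watchdog if the job finished in time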

Using maximum remote servers

I'm trying to distribute commands to 100 remote computers, but I noticed that the commands are only being sent to 16 of them. My local machine has 16 cores. Why is parallel only using 16 remote computers instead of 100?
parallel --eta --sshloginfile list_of_100_remote_computers.txt < list_of_commands.txt
I do believe you will need to specify the number of parallel jobs to be executed; see the example after the quoted documentation below.
According to the parallel man page:
--jobs N
-j N
--max-procs N
-P N
Number of jobslots. Run up to N jobs in parallel. 0 means as many as possible. Default is 100% which will run one job per CPU core.
And keep this in mind:
When you start more than one job with the -j option, it is reasonable
to assume that each job might not take exactly the same amount of time
to complete. If you care about seeing the output in the order that
file names were presented to Parallel (instead of when they
completed), use the --keeporder option.
Parallel Multicore at the Command Line with GNU Parallel, Admin Magazine
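For example (the slot count of 4 is illustrative), with remote execution -j sets the number of job slots per ssh login; you can also prefix each entry in the login file with its own slot count, e.g. 8/server.example.com:
parallel --jobs 4 --eta --sshloginfile list_of_100_remote_computers.txt < list_of_commands.txt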
If the remote machines have 32 cores each, then you run 16*32 jobs. By default GNU Parallel uses one file handle for STDOUT and one for STDERR per job, i.e. 16*32*2 = 1024 file handles in total.
If you have a default GNU/Linux system, you will be hitting the 1024 file handle limit.
If --ungroup runs more jobs, that is a clear indication that you have hit the file handle limit. Use ulimit -n to increase the limit.
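A minimal sketch of that fix (4096 is an arbitrary figure; raising it beyond the hard limit may require root or a limits.conf change):
ulimit -n 4096   # raise the open-file limit for the shell that starts parallel
parallel --eta --sshloginfile list_of_100_remote_computers.txt < list_of_commands.txt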

sh via Ruby: Running ulimit and a program in the same line

I am trying to run a computationally intensive program from Ruby via the following command:
%x(heavy_program)
However, I sometimes want to limit the running time of the program. So I tried doing
%x(ulimit -St #{max_time} & heavy_program)
But it seems to fail; the "&" trick does not work even when I try it in a running sh shell outside Ruby.
I'm sure there's a better way of doing this...
Use either && or ;:
%x(ulimit -St #{max_time} && heavy_program)
%x(ulimit -St #{max_time}; heavy_program)
However, using ulimit may not be what you really need; consider this code:
require 'timeout'
Timeout.timeout(max_time) { %x(heavy_program) }
ulimit limits CPU time, while timeout limits total (wall-clock) running time, as we humans usually count it.
So, for example, if you run the shell command sleep 999999 with ulimit -St 5, it will not stop after 5 seconds but will run for the full 999999, because sleep uses a negligible amount of CPU time.
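You can see the distinction from a shell (illustrative):
bash -c 'ulimit -St 5; sleep 10; echo "still running after 10 s of wall clock"'   # CPU limit never triggers
bash -c 'ulimit -St 5; while true; do true; done'                                 # killed: CPU time limit exceeded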
