Write a program that consumes a certain amount of memory - bash

In my recent experiment, I need a program that consumes a certain amount of memory. I want to implement it as a bash script; say, I want the script to run as a daemon and consume around 200 MB of physical memory. How should I design this script?
If possible, I hope it can run without special permissions.

Seems like this is what you're looking for:
mntroot rw   # remount the root filesystem read-write (device-specific command; not needed on most systems)
cd /dev      # /dev is normally a RAM-backed devtmpfs, so a file here occupies memory
while :
do
    dd if=/dev/zero of=myfile1 bs=1024 count=204800 > /dev/null 2>&1   # write ~200 MB into the RAM-backed filesystem
    usleep 1
    rm myfile1
done
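If writing into /dev is not possible (it normally requires root), a different sketch that runs without special permissions is to hold the data in a shell variable. This is an assumption on my part, not part of the answer above, and the resident size will land somewhat above 200 MB because of bash's own overhead:

#!/usr/bin/env bash
# Assumed sketch: keep roughly 200 MB resident inside the bash process itself.
# /dev/zero produces NUL bytes, which bash variables cannot hold, so tr turns them into 'x'.
blob=$(head -c $((200 * 1024 * 1024)) /dev/zero | tr '\0' 'x')
# Stay alive like a daemon while holding the memory.
while :; do
    sleep 3600
done

Run it in the background, e.g. nohup ./holdmem.sh & (the filename is just an example).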

Related

Monitoring RAM and CPU consumption of Snakemake

I want to get the CPU and RAM usage of a Snakemake pipeline over time.
I run my pipeline on a Slurm-managed cluster. I know that Snakemake
includes benchmarking functions, but they only report peak consumption.
Ideally, I would like an output file looking like this:
t CPU RAM
1 103.00 32
2 ... ...
Is there any program to do so?
Thanks!
I don't know of any program that already does this, but you can monitor the CPU and memory usage via native Unix commands; this post gives an answer that could fit your requirements.
Here is a summary of the answer modified for this context:
You can use this bash function
logsnakemake() { while sleep 1; do ps -p $1 -o pcpu= -o pmem= ; done; }
You can tweak the frequency of logging by modifying the value of sleep.
To log your snakemake process with pid=123 just type in the terminal:
$ logsnakemake 123 | tee /tmp/pid.log
I've found Syrupy on GitHub: a ps parser in Python with clear documentation.
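If you want the output to match the t / CPU / RAM columns from the question, a slightly extended sketch (my assumption, not part of the original answer; it relies on the standard ps -o pcpu=,rss= columns) could be:

# Assumed variant of logsnakemake: adds an elapsed-seconds column and RSS in kB,
# and stops once the monitored process has exited.
logsnakemake() {
    local pid=$1 t=0
    echo "t CPU RAM(kB)"
    while kill -0 "$pid" 2>/dev/null; do
        t=$((t + 1))
        ps -p "$pid" -o pcpu=,rss= | awk -v t="$t" '{print t, $1, $2}'
        sleep 1
    done
}

Usage is the same as above, e.g. logsnakemake 123 | tee /tmp/pid.log.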

How to free up memory after running a process in a shell

I have a script that runs a Java process that loads data into a database every 10 seconds using a loop. The script seems to work perfectly, but after a couple of days I start getting memory issues. If I stop the script, everything frees up; I can start it again and it will run happily for another couple of days.
RUNME=Y
PROPERTIES=someproprties.properties
CHECKFILE=somelockfile.lock
touch $CHECKFILE
while [ "$RUNME" = "Y" ]; do
    if [ -f "$CHECKFILE" ]
    then
        # Run the process
        $DR_HOME/bin/dr -cp $CP_PLUGIN -Xmx64g --engine parallelism=1 --runjson $HOME_DIR/workflows/some_dataflow.dr --overridefile $PROPERTIES 1> /dev/null 2>> $LOG_FILE
        # Give the process a little time to finish up before moving on
        sleep 10s
    else
        RUNME=N
    fi
done
I had assumed that once the process had run, it would make any memory it had allocated available again, so that the next iteration of the loop could use it. Given that this does not seem to be the case, is there a way I can force the release of memory after running the process? I appreciate that this may be something I need to address in the actual Java process rather than in a shell, but as this is the area I have more control over, I thought I would at least ask.
To check which processes are running and how much memory they use:
sid=$(ps -p $$ -o sid=)
while ....   # inside your existing loop
    ps --sid "$sid" -o pid,tty,%cpu,vsz,etime,command
vsz shows the virtual size used by the process.
Then, if it really is the bash process that is growing, it may be the environment that is growing, but from the script shown it can't be that.
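As a concrete sketch of how that check could be wired into the loop from the question (my assumption of the intended usage; memory_usage.log is just an example filename):

# Assumed logging wrapper: append one snapshot of the session's processes per
# iteration, so growth can be spotted over a few days.
MEM_LOG=memory_usage.log
sid=$(ps -p $$ -o sid=)
while [ "$RUNME" = "Y" ]; do
    date '+%F %T' >> "$MEM_LOG"
    ps --sid "$sid" -o pid,tty,%cpu,vsz,rss,etime,command >> "$MEM_LOG"
    # ... run the Java process and sleep 10s, as in the original loop ...
done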

What is causing Xorg high CPU usage?

I am running the feh image viewer on Debian, and after some hours of normal CPU usage (approx. 3%), Xorg suddenly starts using much more CPU (approx. 80%) and everything runs very slowly. I am not running anything else, so the bug should be either in feh or in the X server...
I am using the command "feh -z -q -D20 -R 1" (-z for random image, -q for quiet, -D20 to change the picture every 20 seconds and -R 1 to refresh the directory every second, as I erase and insert pictures pretty often)
When I run "free -m" with feh running, before the high CPU usage starts, I get
total used free shared buff/cache available
Mem: 923 117 474 19 331 735
Swap: 99 0 99
And after several hours I get the same for "mem" but the used amount of "swap" is 99.
The fact that your memory usage goes up (swap is full) points directly to a memory leak in some program on your system. Considering that feh is probably not designed for such a use case, I'd bet it's the cause of running out of memory.
The "everything runs slowly" part is caused by the kernel running out of memory and doing its best to keep the system running. If you insist on running feh, your choices are:
Triage the memory leak bug in feh and create a fix for it.
Try to get somebody else to do the same for you.
Periodically kill feh and rerun it. Basically you can do (in bash)
while true; do timeout 120m feh -z -q -D20 -R 1; sleep 2s; done
which will kill feh every 120 minutes and restart it after a 2-second delay (which gives you a chance to kill the while loop if needed). Another choice would be to use ulimit to set the maximum amount of memory you want to allow feh; the process then probably simply dies once it uses too much.
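A minimal sketch of that ulimit idea (my assumption, not part of the answer above; the 300 MB cap is an arbitrary example, and ulimit -v takes kilobytes):
while true; do
    # Assumed sketch: cap feh's virtual memory inside a subshell so the limit
    # does not affect your interactive shell; feh dies or fails to allocate
    # once it exceeds the cap, and the loop restarts it.
    ( ulimit -v $((300 * 1024)); exec feh -z -q -D20 -R 1 )
    sleep 2s
done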
I solved this problem too, but I don't know why it works.
You can try running this command to kill the process:
ps -a | grep Xorg | awk '{print $1}' | xargs kill -9
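As a side note, if procps pkill is available, a sketch that avoids grep matching unrelated processes (including the grep itself) is:
pkill -9 -x Xorg   # -x matches the process name exactly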

How to limit the allowed time for a script run as a sub-process in a ksh script

I am trying to limit the allowed time of a sub-process in a ksh script. I tried using ulimit (hard and soft values), but the sub-process always breaks the limit (if it takes longer than the allowed time).
# values for a test
Sc_Timeout=2
Sc_FileOutRun=MyScript.log.Running
Sc_Cmd=./AScriptToRunInSubShell.sh
(
    ulimit -Ht ${Sc_Timeout}
    ulimit -St ${Sc_Timeout}
    time (
        ${Sc_Cmd} >> ${Sc_FileOutRun} 2>&1
    ) >> ${Sc_FileOutRun} 2>&1
    # some other commands, not relevant here
)
result:
1> ./MyScript.log.Running
ulimit -Ht 2
ulimit -St 2
1>> ./MyScript.log.Running 2>& 1
real 0m11.45s
user 0m3.33s
sys 0m4.12s
I expect a timeout error with a sys or user time of something like 0m2.00s
When I run a test directly from the command line, the hard ulimit does seem to limit the time effectively, but not in the script.
The test/dev system is AIX 6.1, but this should also work on other versions and on Sun and Linux.
Each process has its own time limits, but time shows the cumulative time for the script. Each time you create a child process, that child will have its own limits. So, for example, if you call cut and grep in the script, those processes use their own CPU time; the quota is not decremented from the script's, although the limits themselves are inherited.
If you want a time limit, you might wish to investigate trap ALRM.
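A minimal sketch of that trap ALRM idea in ksh/bash (an assumed pattern, reusing the variable names from the question; it enforces a wall-clock limit by signalling the script from a background watchdog):

# Assumed sketch: kill the sub-process when a background watchdog sends SIGALRM.
Sc_Timeout=2
Sc_FileOutRun=MyScript.log.Running
Sc_Cmd=./AScriptToRunInSubShell.sh

${Sc_Cmd} >> ${Sc_FileOutRun} 2>&1 &      # run the sub-process in the background
cmd_pid=$!

trap 'kill ${cmd_pid} 2>/dev/null' ALRM   # on SIGALRM, stop the sub-process

( sleep ${Sc_Timeout}; kill -ALRM $$ ) &  # watchdog: signal this script after the timeout
watchdog_pid=$!

wait ${cmd_pid}                           # returns when the sub-process ends or is killed
kill ${watchdog_pid} 2>/dev/null          # stop the watchdog if we finished early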

sh via Ruby: Running ulimit and a program in the same line

I am trying to run a computationally intensive program from Ruby via the following command:
%x(heavy_program)
However, I sometimes want to limit the running time of the program. So I tried doing
%x(ulimit -St #{max_time} & heavy_program)
But it seems to fail; the "&" trick does not work even when I try it in a running sh shell outside Ruby.
I'm sure there's a better way of doing this...
use either && or ;:
%x(ulimit -St #{max_time} && heavy_program)
%x(ulimit -St #{max_time}; heavy_program)
However, using ulimit may not be what you really need; consider this code:
require 'timeout'
Timeout.timeout(max_time){ %x'heavy_program' }
ulimit limits CPU time, while Timeout limits total (wall-clock) running time, as we humans usually count it.
So, for example, if you run the sleep 999999 shell command with ulimit -St 5, it will run not for 5 seconds but for the full 999999, because sleep uses a negligible amount of CPU time.
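A quick shell illustration of that difference (an assumed demo; the numbers are arbitrary):
# sleep burns almost no CPU, so a 5-second CPU-time limit never triggers...
( ulimit -St 5; sleep 10; echo "sleep survived the CPU-time limit" )
# ...while a busy loop is killed by SIGXCPU after about 5 seconds of CPU time.
( ulimit -St 5; while :; do :; done; echo "never printed" )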
