How do I cause my computer to use a lot of RAM? - bash

I am looking for a command (or commands) that could cause my Linux computer to use a lot of RAM. Any pointers?
I want to run it in the background and then perform some other task that needs the RAM usage to already be high.
Thanks!

This one will make bash's memory usage grow:
while read f; do arr[$((i++))]=$f; done < /dev/urandom
A more aggressive variant (be careful):
while read f; do arr="$arr:$arr:$arr:$arr:$f"; done < /dev/urandom
Softer version:
while read f; do arr="$arr:$f"; done < /dev/urandom
To reclaim the memory afterwards, call:
unset arr
You could also mix it with a fork bomb... but I would avoid it
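As a more controlled variant of the above, here is a hedged sketch (assuming bash; the per-element overhead varies, so treat the target size as approximate): allocate roughly 100 MB in 1 KiB chunks, hold it until you press Enter, then release it.
chunk=$(printf 'x%.0s' {1..1024})        # a string of 1024 "x" characters, ~1 KiB
for ((i = 0; i < 100 * 1024; i++)); do   # ~100 MB worth of elements (plus bash overhead)
  hog[i]=$chunk
done
read -r -p "Memory allocated; press Enter to release it... "
unset hog                                # reclaim the memory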

The answer above will keep using more RAM until the system is simply out of it. For a more controlled approach, if you have basic GNU tools (head and tail) or BusyBox on Linux, you can do this to fill a certain amount of free memory:
</dev/zero head -c BYTES | tail
</dev/zero head -c 5000m | tail #~5GB, portable
</dev/zero head -c 5G | tail #5GiB on GNU (not busybox)
cat /dev/zero | head -c 5G | tail #Easier notation; does the same thing
This works because tail needs to keep the current line in memory, in case it turns out to be the last line. The line, read from /dev/zero which outputs only null bytes and no newlines, will be infinitely long, but is limited by head to BYTES bytes, thus tail will use only that much memory. For a more precise amount, you will need to check how much RAM head and tail themselves use on your system and subtract that.
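To check that the pipeline really holds roughly the requested amount, a hedged sketch (assuming a Linux system with procps ps):
</dev/zero head -c 1G | tail &
sleep 2                       # give head time to finish streaming the gigabyte
ps -C tail -o rss=,comm=      # resident set size in KiB; expect roughly 1 GiB
kill %1                       # release the memory (in a non-interactive script, kill the PID instead)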
To just quickly run out of RAM completely, you can remove the limiting head part:
tail /dev/zero
If you want to also add a duration, this can be done quite easily in bash (will not work in sh):
cat <( </dev/zero head -c 500m) <(sleep SECONDS) | tail
The <(command) thing seems to be little known but is often extremely useful, more info on it here: http://tldp.org/LDP/abs/html/process-sub.html
The cat command will wait for all of its inputs to complete before exiting, and by keeping one of the pipes open, it will keep tail alive.
If you have pv and want to slowly increase RAM use:
</dev/zero head -c TOTAL | pv -L BYTES_PER_SEC | tail
</dev/zero head -c 1000m | pv -L 10m | tail
The latter will use up to one gigabyte at a rate of ten megabytes per second. As an added bonus, pv will show the current rate of use and the total use so far. Of course this can also be done with previous variants:
</dev/zero head -c 500m | pv | tail
Just inserting the | pv | part will show you the current status (throughput and total by default).
If you do not have a /dev/zero device, the standard yes and tr tools might substitute: yes | tr \\n x | head -c BYTES | tail (yes outputs an infinite amount of "yes"es, tr substitutes the newline such that everything becomes one huge line and tail needs to keep all that in memory).
Another, simpler alternative is using dd: dd if=/dev/zero bs=1G of=/dev/null uses 1GB of memory on GNU and BusyBox, but also 100% CPU on one core.
Finally, if your head does not accept a suffix, you can calculate an amount of bytes inline, for example 50 megabytes: head -c $((1024*1024*50))
Cross-posted from my answer on the Unix StackExchange

ulimit -m can be used to impose an artificial memory limit on a process (note that the resident-set limit set by -m is not enforced by modern Linux kernels; ulimit -v, which limits virtual memory, is more reliable).
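A hedged sketch of using it with -v (the command name is just a placeholder): set the limit inside a subshell so the parent shell keeps its original limits.
(
  ulimit -v $((512 * 1024))   # soft limit of ~512 MiB, expressed in KiB
  ./memory-hungry-command     # hypothetical command; allocations beyond the cap will fail
)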

Use this at your own risk:
:(){ :|:& };:
=> Explosion in your RAM (it is a fork bomb: each invocation spawns two copies of itself until the system runs out of resources)

Related

Why is generating a larger amount of random data much slower?

I want to generate a large amount of random numbers. I wrote the following bash command (note that I am using cat here for demonstration purposes; in my real use case, I am piping the numbers into a process):
for i in {1..99999999}; do echo -e "$(cat /dev/urandom | tr -dc '0-9' | fold -w 5 | head -n 1)"; done | cat
The numbers are printed at a very low rate. However, if I generate a smaller amount, it is much faster:
for i in {1..9999}; do echo -e "$(cat /dev/urandom | tr -dc '0-9' | fold -w 5 | head -n 1)"; done | cat
Note that the only difference is 9999 instead of 99999999.
Why is this? Is the data buffered somewhere? Is there a way to optimize this, so that the random numbers are piped/streamed into cat immediately?
Why is this?
Expanding {1..99999999} generates 100000000 arguments, and parsing them requires a lot of memory allocation from bash. This significantly stalls the whole system.
Additionally, large chunks of data are read from /dev/urandom, and about 96% of that data is filtered out by tr -dc '0-9'. This significantly depletes the entropy pool and further stalls the whole system.
Is the data buffered somewhere?
Each process has its own buffer, so:
cat /dev/urandom is buffering
tr -dc '0-9' is buffering
fold -w 5 is buffering
head -n 1 is buffering
the left side of the pipeline (the shell) has its own buffer
and the right side (| cat) has its own buffer
That's 6 buffering places. Even ignoring input buffering from head -n1 and from the right side of the pipeline | cat, that's 4 output buffers.
Also, save animals and stop cat abuse. Use tr </dev/urandom instead of cat /dev/urandom | tr. Fun fact: tr can't take a filename as an argument.
Is there a way to optimize this, so that the random numbers are piped/streamed into cat immediately?
Remove the whole code.
Take only as few bytes from the random source as you need. To generate a 32-bit number you only need 32 bits - no more. To generate a 5-digit number, you only need 17 bits - rounded up to 8-bit bytes, that's only 3 bytes. The tr -dc '0-9' is a cool trick, but it definitely shouldn't be used in any real code.
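A hedged sketch of that idea (assuming bash and GNU od; the modulo step introduces a tiny bias, which is usually acceptable):
read -r b1 b2 b3 < <(od -An -N3 -t u1 /dev/urandom)        # three random bytes as decimal values
printf '%05d\n' $(( (b1 << 16 | b2 << 8 | b3) % 100000 ))   # reduce 24 bits to a 5-digit number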
I recently answered a similar question; copying the code from there, you could:
for ((i=0;i<100000000;++i)); do echo "$((0x$(dd if=/dev/urandom of=/dev/stdout bs=4 count=1 status=none | xxd -p)))"; done | cut -c-5
# cut to take first 5 digits
But that still would be unacceptably slow, as it runs 2 processes for each random number (and I think just taking the first 5 digits will have a bad distribution).
I suggest using $RANDOM, available in bash, or $SRANDOM if you really want /dev/urandom (and really know why you want it). Failing that, I suggest writing the random number generation from /dev/urandom in a real programming language, like C, C++, Python, Perl or Ruby. I believe one could even write it in awk.
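For example, a hedged sketch using $RANDOM (15 bits per call, so two calls give 30 bits; again the modulo adds a slight bias):
for ((i = 0; i < 10; i++)); do
  printf '%05d\n' $(( ((RANDOM << 15) | RANDOM) % 100000 ))
done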
The following looks nice, but converting binary data to hex just to convert it to decimal later is a workaround for the fact that the shell simply can't work with binary data:
count=10;
# take count*4 bytes from input
dd if=/dev/urandom of=/dev/stdout bs=4 count=$count status=none |
# Convert bytes to hex 4 bytes at a time
xxd -p -c 4 |
# Convert hex to decimal using GNU awk
awk --non-decimal-data '{printf "%d\n", "0x"$0}'
Why are you running this in a loop? You can just run a single set of these commands to generate everything, e.g.:
cat /dev/urandom | tr -dc '0-9' | fold -w 5 | head -n 100000000
I.e. just generate a single stream of numbers, rather than generate them individually.
I'd second the suggestion of using another language for this, it should be much more efficient. For example, in Python it would just be:
from random import randrange
for _ in range(100000000):
    print(randrange(100000))
@SamMason gave the best answer so far, as he completely did away with the loop:
cat /dev/urandom | tr -dc '0-9' | fold -w 5 | head -n 100000000
That still leaves a lot of room for improvement though. First, tr -dc '0-9' only uses about 4% of the stuff that's coming out of /dev/urandom :-) Second, depending on how those random numbers will be consumed in the end, some additional overhead may be incurred for getting rid of leading zeros -- so that some numbers will not be interpreted as octal. Let me suggest a better alternative, using the od command:
outputFile=/dev/null # For test. Replace with the real file.
count=100000000
od -An -t u2 -w2 /dev/urandom | head -n $count >$outputFile
A quick test with the time command showed this to be roughly four times faster than the tr version. And there is really no need for using "another language", as both od and head are highly optimized, and this whole thing runs at native speed.
NOTE: The above command will be generating 16-bit integers, ranging from 0 to 65535 inclusive. If you need a larger range, then you could go for 32 bit numbers, and that will give you a range from 0 to 4294967295:
od -An -t u4 -w4 /dev/urandom | head -n $count >$outputFile
If needed, the end user can scale those down to the desired size with a modulo division.
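A hedged sketch of that scaling step, reusing the $count and $outputFile variables from above (the modulo introduces a slight bias, as before):
od -An -t u4 -w4 /dev/urandom | head -n "$count" | awk '{ print $1 % 100000 }' > "$outputFile"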

Optimize bash command to count all lines in HDFS txt files

Summary:
I need to count all unique lines in all .txt files in an HDFS instance.
Total size of .txt files ~450GB.
I use this bash command:
hdfs dfs -cat /<top-level-dir>/<sub-dir>/*/*/.txt | cut -d , -f 1 | sort --parallel=<some-number> | uniq | wc -l
The problem is that this command takes all free ram and the HDFS instance exits with code 137 (out of memory).
Question:
Is there any way I can limit the ram usage of this entire command to let's say half of what's free in the hdfs OR somehow clean the memory while the command is still running?
Update:
I need to remove | sort | because it is a merge sort implementation and therefore has O(n) space complexity.
I can use only | uniq | without | sort |.
Some things you can try to limit sort's memory consumption (a combined sketch follows at the end of this answer):
Use sort -u instead of sort | uniq. That way sort has a chance to remove duplicates on the spot instead of having to keep them until the end. 🞵
Write the input to a file and sort the file instead of running sort in a pipe. Sorting pipes is slower than sorting files and I assume that sorting pipes requires more memory than sorting files:
hdfs ... | cut -d, -f1 > input && sort -u ... input | wc -l
Set the buffer size manually using -S 2G. The buffer is shared between all threads. The size specified here roughly equals the overall memory consumption when running sort.
Change the temporary directory using -T /some/dir/different/from/tmp. On many Linux systems /tmp is a ramdisk, so be sure to use an actual hard drive.
If the hard disk is not an option you could also try --compress-program=PROG to compress sort's temporary files. I'd recommend a fast compression algorithm like lz4.
Reduce parallelism using --parallel=N, as more threads need more memory. With a small buffer, too many threads are less efficient.
Merge at most two temporary files at once using --batch-size=2.
🞵 I assumed that sort was smart enough to immediately remove sequential duplicates in the unsorted input. However, from my experiments it seems that (at least) sort (GNU coreutils) 8.31 does not.
If you know that your input contains a lot of sequential duplicates as in the input generated by the following commands …
yes a | head -c 10m > input
yes b | head -c 10m >> input
yes a | head -c 10m >> input
yes b | head -c 10m >> input
… then you can drastically save resources on sort by using uniq first:
# takes 6 seconds and 2'010'212 kB of memory
sort -u input
# takes less than 1 second and 3'904 kB of memory
uniq input > preprocessed-input &&
sort -u preprocessed-input
Times and memory usage were measured using GNU time 1.9-2 (often installed in /usr/bin/time) and its -v option. My system has an Intel Core i5 M 520 (two cores + hyper-threading) and 8 GB memory.
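Putting several of the options above together, a hedged sketch of the whole pipeline could look like this (the buffer size, thread count and the temporary directory /data/sort-tmp are placeholders to adapt to your machine):
hdfs dfs -cat /<top-level-dir>/<sub-dir>/*/*/.txt |
  cut -d , -f 1 |
  sort -u -S 2G --parallel=2 -T /data/sort-tmp --compress-program=lz4 --batch-size=2 |
  wc -l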
Reduce number of sorts run in parallel.
From info sort:
--parallel=N: Set the number of sorts run in parallel to N. By default, N is set
to the number of available processors, but limited to 8, as there
are diminishing performance gains after that. Note also that using
N threads increases the memory usage by a factor of log N.
If it still runs out of memory, man sort lists more knobs:
--batch-size=NMERGE
merge at most NMERGE inputs at once; for more use temp files
--compress-program=PROG
compress temporaries with PROG; decompress them with PROG -d
-S, --buffer-size=SIZE
use SIZE for main memory buffer
-T, --temporary-directory=DIR
use DIR for temporaries, not $TMPDIR or /tmp; multiple options
specify multiple directories
These are the options you could be looking into. Specify a temporary directory on disk and a buffer size, e.g. 1 GB, like so: sort -u -T "$HOME"/tmp -S 1G.
Also as advised in other answers, use sort -u instead of sort | uniq.
Is there any way I can limit the ram usage of this entire command to let's say half of what's free in the hdfs
Kind of: use the -S option. You could run sort -S "$(free -t | awk '/Total/{print $4}')".
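A hedged sketch of that idea, aiming for roughly half of the available memory (assumes procps free; the "available" column position differs between free versions):
half_avail_mb=$(( $(free -m | awk '/^Mem:/ {print $7}') / 2 ))
hdfs dfs -cat /<top-level-dir>/<sub-dir>/*/*/.txt | cut -d , -f 1 | sort -u -S "${half_avail_mb}M" | wc -l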

How to get CPU utilization in % in terminal (mac)

I've seen the same question asked for Linux and Windows, but not for Mac (terminal). Can anyone tell me how to get the current processor utilization in %, so an example output would be 40%? Thanks
This works on a Mac (includes the %):
ps -A -o %cpu | awk '{s+=$1} END {print s "%"}'
To break this down a bit:
ps is the process status tool. Most *nix like operating systems support it. There are a few flags we want to pass to it:
-A means all processes, not just the ones running as you.
-o lets us specify the output we want. In this case, all we want is the %cpu column of ps's output.
This will get us a list of every process's CPU usage, like
0.0
1.3
27.0
0.0
We now need to add up this list to get a final number, so we pipe ps's output to awk. awk is a pretty powerful tool for parsing and operating on text. We simply add up the numbers, then print out the result with a "%" appended.
Adding up all those CPU % can give a number > 100% (probably multiple cores).
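If you prefer a figure in the 0-100% range, a hedged sketch that divides by the number of logical cores (using sysctl on macOS):
ps -A -o %cpu | awk -v cores="$(sysctl -n hw.ncpu)" '{ s += $1 } END { printf "%.1f%%\n", s / cores }'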
Here's a simpler method, although it comes with some problems:
top -l 2 | grep -E "^CPU"
This gives 2 samples, the first of which is nonsense (because it calculates CPU load between samples).
Also, you need to use RegEx like (\d+\.\d*)% or some string functions to extract values, and add "user" and "sys" values to get the total.
(From How to get CPU utilisation, RAM utilisation in MAC from commandline)
Building on previous answers from @Jon R. and @Rounak D, the following line prints the sum of user and system values, with the percent appended. I have tested this value and I like that it roughly tracks the percentages shown in the macOS Activity Monitor.
top -l 2 | grep -E "^CPU" | tail -1 | awk '{ print $3 + $5"%" }'
You can then capture that value in a variable in script like this:
cpu_percent=$(top -l 2 | grep -E "^CPU" | tail -1 | awk '{ print $3 + $5"%" }')
PS: You might also be interested in the output of uptime, which shows system load.
Building upon @Jon R's answer, we can pick up the user CPU utilization through some simple pattern matching:
top -l 1 | grep -E "^CPU" | grep -Eo '[^[:space:]]+%' | head -1
And if you want to get rid of the last % symbol as well,
top -l 1 | grep -E "^CPU" | grep -Eo '[^[:space:]]+%' | head -1 | sed 's/%//'
top -F -R -o cpu
-F Do not calculate statistics on shared libraries, also known as frameworks.
-R Do not traverse and report the memory object map for each process.
-o cpu Order by CPU usage
You can do this.
printf "$(ps axo %cpu | awk '{ sum+=$1 } END { printf "%.1f\n", sum }' | tail -n 1),"

How to generate a memory shortage using a bash script

I need to write a bash script that consumes as much RAM as possible on my ESXi host and potentially generates a memory shortage.
I already checked here and tried to run the given script several times so that I could consume more than 500 MB of RAM.
However, I get an "sh: out of memory" error (of course), and I'd like to know whether there is any way to configure the amount of memory allocated to my shell.
Note 1: Another requirement is that I cannot enter a VM and run a greedy task there.
Note 2: I tried to script the creation of greedy new VMs with huge RAM; however, I cannot get the ESXi host into a state where there is a shortage of memory.
Note 3: I cannot use a C compiler, and I only have a very limited Python library.
Thank you in advance for your help :)
From an earlier answer of mine: https://unix.stackexchange.com/a/254976/30731
If you have basic GNU tools (head and tail) or BusyBox on Linux, you can do this to fill a certain amount of free memory:
</dev/zero head -c BYTES | tail
# Protip: use $((1024**3*7)) to calculate 7GiB easily
</dev/zero head -c $((1024**3*7)) | tail
This works because tail needs to keep the current line in memory, in case it turns out to be the last line. The line, read from /dev/zero which outputs only null bytes and no newlines, will be infinitely long, but is limited by head to BYTES bytes, thus tail will use only that much memory. For a more precise amount, you will need to check how much RAM head and tail themselves use on your system and subtract that.
To just quickly run out of RAM completely, you can remove the limiting head part:
tail /dev/zero
If you want to also add a duration, this can be done quite easily in bash (will not work in sh):
cat <( </dev/zero head -c BYTES) <(sleep SECONDS) | tail
The <(command) thing seems to be little known but is often extremely useful, more info on it here: http://tldp.org/LDP/abs/html/process-sub.html
As for the use of cat: cat will wait for all of its inputs to complete before exiting, and by keeping one of the pipes open, it will keep tail alive.
If you have pv and want to slowly increase RAM use:
</dev/zero head -c BYTES | pv -L BYTES_PER_SEC | tail
For example:
</dev/zero head -c $((1024**3)) | pv -L $((1024**2)) | tail
This will use up to a gigabyte at a rate of a megabyte per second. As an added bonus, pv will show you the current rate of use and the total use so far. Of course this can also be done with the previous variants:
</dev/zero head -c BYTES | pv | tail
Just inserting the | pv | part will show you the current status (throughput and total by default).
Credits to falstaff for contributing a variant that is even simpler and more broadly compatible (like with BusyBox).
Bash has the ulimit command, which can be used to set the size of the virtual memory which may be used by a bash process and the processes started by it.
There are two limits, a hard limit and a soft limit. You can lower both limits, but only raise the soft limit up to the hard limit.
ulimit -S -v unlimited sets the virtual memory size to unlimited (if the hard limit allows this).
If there is a hard limit set (see ulimit -H -v), check any initialisation scripts for lines setting it.
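A hedged sketch putting that together (it reuses the filler pipeline from the answer above; on ESXi the shell may be BusyBox ash rather than bash, so ulimit and $(( )) support can differ):
ulimit -H -v                              # show the hard limit (KiB, or "unlimited")
ulimit -S -v                              # show the current soft limit
ulimit -S -v unlimited                    # raise the soft limit, if the hard limit permits
</dev/zero head -c $((1024**3)) | tail    # then fill ~1 GiB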

Grepping a huge file (80GB) any way to speed it up?

grep -i -A 5 -B 5 'db_pd.Clients' eightygigsfile.sql
This has been running for an hour on a fairly powerful linux server which is otherwise not overloaded.
Any alternative to grep? Anything about my syntax that can be improved (is egrep or fgrep better)?
The file is actually in a directory which is shared via a mount with another server, but the actual disk space is local, so that shouldn't make any difference?
The grep is using up to 93% CPU.
Here are a few options:
1) Prefix your grep command with LC_ALL=C to use the C locale instead of UTF-8.
2) Use fgrep because you're searching for a fixed string, not a regular expression.
3) Remove the -i option, if you don't need it.
So your command becomes:
LC_ALL=C fgrep -A 5 -B 5 'db_pd.Clients' eightygigsfile.sql
It will also be faster if you copy your file to a RAM disk.
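A hedged sketch of the RAM-disk idea (it only works if the machine actually has enough free memory to hold the file; /dev/shm is the usual tmpfs mount on Linux):
cp eightygigsfile.sql /dev/shm/
LC_ALL=C fgrep -A 5 -B 5 'db_pd.Clients' /dev/shm/eightygigsfile.sql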
If you have a multicore CPU, I would really recommend GNU parallel. To grep a big file in parallel use:
< eightygigsfile.sql parallel --pipe grep -i -C 5 'db_pd.Clients'
Depending on your disks and CPUs it may be faster to read larger blocks:
< eightygigsfile.sql parallel --pipe --block 10M grep -i -C 5 'db_pd.Clients'
It's not entirely clear from your question, but other options for grep include (see the combined sketch after this list):
Dropping the -i flag.
Using the -F flag for a fixed string
Disabling NLS with LANG=C
Setting a max number of matches with the -m flag.
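A hedged sketch combining those options (the -m value is an arbitrary example; drop -F if your pattern really is a regular expression):
LANG=C grep -F -m 100 -A 5 -B 5 'db_pd.Clients' eightygigsfile.sql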
Some trivial improvements:
Remove the -i option, if you can; case-insensitive matching is quite slow.
Replace the . with \.
A single dot is the regex symbol for matching any character, which is also slow.
Two lines of attack:
Are you sure you need the -i, or do you have a possibility to get rid of it?
Do you have more cores to play with? grep is single-threaded, so you might want to start several instances at different offsets.
< eightygigsfile.sql parallel -k -j120% -n10 -m grep -F -i -C 5 'db_pd.Clients'
If you need to search for multiple strings, grep -f strings.txt saves a ton of time. The above is a translation of something that I am currently testing. The -j and -n option values seemed to work best for my use case. The -F grep also made a big difference.
Try ripgrep
It gives much better performance than grep.
All the above answers were great. What really did help me on my 111 GB file was using LC_ALL=C fgrep -m <maxnum> fixed_string filename.
However, sometimes there may be 0 or more repeating patterns, in which case calculating the maxnum isn't possible. The workaround is to use the start and end patterns for the event(s) you are trying to process, and then work on the line numbers between them. Like so:
startline=$(grep -n -m 1 "$start_pattern" file|awk -F":" {'print $1'})
endline=$(grep -n -m 1 "$end_pattern" file |awk -F":" {'print $1'})
logs=$(tail -n +$startline file |head -n $(($endline - $startline + 1)))
Then work on this subset of logs!
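As a hedged alternative to the tail | head combination, sed can print the range directly, reusing the same variables:
logs=$(sed -n "${startline},${endline}p" file)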
Hmm... what speeds do you need? I created a synthetic 77.6 GB file with nearly 525 million rows with plenty of Unicode:
rows = 524759550. | UTF8 chars = 54008311367. | bytes = 83332269969.
and randomly selected rows at an average rate of 1 in every 3^5, using rand() rather than just NR % 243, to place the string db_pd.Clients at a random position in the middle of the existing text, totaling 2.16 million rows where the regex pattern hits:
rows = 2160088. | UTF8 chars = 42286394. | bytes = 42286394.
% dtp; pvE0 < testfile_gigantic_001.txt|
mawk2 '
_^(_<_)<NF { print (__=NR-(_+=(_^=_<_)+(++_)))<!_\
?_~_:__,++__+_+_ }' FS='db_pd[.]Clients' OFS=','
in0: 77.6GiB 0:00:59 [1.31GiB/s] [1.31GiB/s] [===>] 100%
out9: 40.3MiB 0:00:59 [ 699KiB/s] [ 699KiB/s] [ <=> ]
524755459,524755470
524756132,524756143
524756326,524756337
524756548,524756559
524756782,524756793
524756998,524757009
524757361,524757372
And mawk2 took just 59 seconds to extract a list of the row ranges it needs. From there it should be relatively trivial. Some overlapping may exist.
At a throughput of 1.3 GiB/s, as calculated above by pv, it might even be detrimental to use utilities like parallel to split the task.
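A hedged sketch of that follow-up step, assuming the start,end pairs above have been saved to a hypothetical ranges.csv (for many ranges, a single awk pass over the file would be faster than one sed invocation per range):
while IFS=, read -r start end; do
  sed -n "${start},${end}p; ${end}q" testfile_gigantic_001.txt   # ${end}q stops reading once the range is printed
done < ranges.csv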
