In my .bashrc file I appended the line psswd() { LC_ALL=C tr -dc 'a-zA-Z0-9-!"#/#$%^&*()_+~' < /dev/urandom | head -c "$1";echo ;} so that when I type psswd n in the terminal, it returns a string of n random characters. I would like to achieve the same thing using /dev/random instead of /dev/urandom. However, when I replace urandom with random, calling psswd does nothing (it cannot output even a single random character after an hour); it's as if it's frozen. I don't know why, and I know it's not a lack of entropy, because the command od -An -N1 -i /dev/random returns a random number.
Note that this last command returns a random number almost instantly if I type it, say, right after a fresh reboot. But if I have already invoked psswd n with /dev/random, the od command only returns a random number after about 15 seconds. So the call to /dev/random in the function seems to have some effect on /dev/random even though it produces no output.
Overall I'd like to know how I could create a function that uses /dev/random to generate a random string of n characters.
This happens because libc will buffer tr's output when it's not a terminal. On GNU/Linux, it's 4096 bytes. This means that tr has to produce 4096 bytes of output before head will see the first few bytes, even though it just asks for e.g. 8.
Since you only keep 78 out of 256 values, /dev/random has to produce on average 4096*256/78 = 13443 bytes of random output before you get your password.
/dev/random on my system, starting from an empty pool, took 26 seconds to generate 20 bytes. That means the 13443 bytes needed would take 13443*26/20 = 17475 seconds, or almost 5 hours, to generate a password.
At this point it would print the password, but tr would need another bufferful to realize that head doesn't want any more, so it would take another 5 hours before the command exited.
If you disabled buffering, you would only need to generate (8+1)*256/78 = 29 bytes, which would take a mere ~38 seconds. On GNU/Linux, you can do this with stdbuf -o0:
$ time { LC_ALL=C stdbuf -o0 tr -dc 'a-zA-Z0-9-!"#/#$%^&*()_+~' < /dev/random | head -c 8; echo; }
9D^MKbT)
real 0m36.172s
user 0m0.000s
sys 0m0.010s
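Putting the two together, a version of the original function that reads from /dev/random could look like this (a sketch, assuming GNU coreutils for stdbuf; it will still block whenever the kernel decides the entropy pool is empty):
psswd() { LC_ALL=C stdbuf -o0 tr -dc 'a-zA-Z0-9-!"#/#$%^&*()_+~' < /dev/random | head -c "$1"; echo; }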
…when I replace /urandom by /random, calling psswd does nothing…
That is expected behaviour. As Wikipedia explains:
In Unix-like operating systems, /dev/random is a special file that serves as a blocking pseudorandom number generator. […]
A counterpart to /dev/random is /dev/urandom ("unlimited"/non-blocking random source) [emphasis added]
"Blocking" means that when it thinks it has run out of entropy, it stops producing numbers. For example, on my system, the first command in your pipeline produces the following:
$ LC_ALL=C tr -dc 'a-zA-Z0-9-!"#/#$%^&*()_+~' < /dev/random
~Sk(+!h
And, then, it hangs, presumably while waiting for more entropy.
This issue is discussed further here where it is argued that /dev/urandom is good enough.
Speeding it up
tr appears to buffer its output, which delays the appearance of characters. The work-around is to use stdbuf and, for me, this results in a substantial speed-up:
LC_ALL=C stdbuf -o0 tr -dc 'a-zA-Z0-9-!"#/#$%^&*()_+~' < /dev/random
Related
I want to generate a high amount of random numbers. I wrote the following bash command (note that I am using cat here for demonstration purposes; in my real use case, I am piping the numbers into a process):
for i in {1..99999999}; do echo -e "$(cat /dev/urandom | tr -dc '0-9' | fold -w 5 | head -n 1)"; done | cat
The numbers are printed at a very low rate. However, if I generate a smaller amount, it is much faster:
for i in {1..9999}; do echo -e "$(cat /dev/urandom | tr -dc '0-9' | fold -w 5 | head -n 1)"; done | cat
Note that the only difference is 9999 instead of 99999999.
Why is this? Is the data buffered somewhere? Is there a way to optimize this, so that the random numbers are piped/streamed into cat immediately?
Why is this?
Expanding {1..99999999} into 100,000,000 arguments and then parsing them requires a lot of memory allocation from bash. This significantly stalls the whole system.
Additionally, large chunks of data are read from /dev/urandom, and about 96% of that data is filtered out by tr -dc '0-9'. This significantly depletes the entropy pool and additionally stalls the whole system.
Is the data buffered somewhere?
Each process has its own buffer, so:
cat /dev/urandom is buffering
tr -dc '0-9' is buffering
fold -w 5 is buffering
head -n 1 is buffering
the left side of the pipeline (the shell) has its own buffer
and the right side (| cat) has its own buffer
That's 6 places where buffering happens. Even ignoring the input buffering of head -n1 and of the right-hand | cat, that's 4 output buffers.
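If the goal is just to make output appear as soon as it is produced, GNU coreutils' stdbuf can switch off the output buffering of the middle stages (a sketch only; it does not remove the per-iteration cost of spawning all these processes, discussed above):
stdbuf -o0 tr -dc '0-9' </dev/urandom | stdbuf -o0 fold -w 5 | head -n 1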
Also, save animals and stop cat abuse: use tr </dev/urandom instead of cat /dev/urandom | tr. Fun fact: tr can't take a filename as an argument.
Is there a way to optimize this, so that the random numbers are piped/streamed into cat immediately?
Remove the whole code.
Take only as few bytes from the random source as you need. To generate a 32-bit number you only need 32 bits, no more. To generate a 5-digit number, you only need 17 bits; rounding up to 8-bit bytes, that's only 3 bytes. The tr -dc '0-9' is a cool trick, but it definitely shouldn't be used in any real code.
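As an illustration (a sketch with a hypothetical rand5 helper; the final modulo introduces a small bias, since 2^24 is not an exact multiple of 100000):
rand5() { printf '%05d\n' $(( 0x$(od -An -N3 -t x1 /dev/urandom | tr -d ' \n') % 100000 )); }   # reads only 3 bytes
rand5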
Strangely enough, I recently answered what I guess is a similar question; copying the code from there, you could:
for ((i=0;i<100000000;++i)); do echo "$((0x$(dd if=/dev/urandom of=/dev/stdout bs=4 count=1 status=none | xxd -p)))"; done | cut -c-5
# cut to take first 5 digits
But that would still be unacceptably slow, as it spawns 2 processes for each random number (and I think just taking the first 5 digits will give a bad distribution).
I suggest using $RANDOM, available in bash. If not that, use $SRANDOM if you really want /dev/urandom (and really know why you want it). Failing both, I suggest writing the random number generation from /dev/urandom in a real programming language, like C, C++, Python, Perl or Ruby. I believe one could even write it in awk.
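A rough sketch with bash builtins ($RANDOM only yields 0-32767, so two draws are combined, and the modulo again adds a small bias; $SRANDOM requires bash 5.1 or newer):
for ((i=0;i<100000000;++i)); do echo $(( (RANDOM<<15 | RANDOM) % 100000 )); done
# or, with bash 5.1+, backed by the kernel's random source:
for ((i=0;i<100000000;++i)); do echo $(( SRANDOM % 100000 )); done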
The following looks nicer, but converting binary data to hex just to convert it to decimal later is a workaround for the fact that the shell simply can't work with binary data:
count=10;
# take count*4 bytes from input
dd if=/dev/urandom of=/dev/stdout bs=4 count=$count status=none |
# Convert bytes to hex 4 bytes at a time
xxd -p -c 4 |
# Convert hex to decimal using GNU awk
awk --non-decimal-data '{printf "%d\n", "0x"$0}'
Why are you running this in a loop? You can just run a single set of these commands to generate everything, e.g.:
cat /dev/urandom | tr -dc '0-9' | fold -w 5 | head -n 100000000
I.e. just generate a single stream of numbers, rather than generate them individually.
I'd second the suggestion of using another language for this; it should be much more efficient. For example, in Python it would just be:
from random import randrange
for _ in range(100000000):
    print(randrange(100000))
@SamMason gave the best answer so far, as he completely did away with the loop:
cat /dev/urandom | tr -dc '0-9' | fold -w 5 | head -n 100000000
That still leaves a lot of room for improvement though. First, tr -dc '0-9' only uses about 4% of the stuff that's coming out of /dev/urandom :-) Second, depending on how those random numbers will be consumed in the end, some additional overhead may be incurred for getting rid of leading zeros -- so that some numbers will not be interpreted as octal. Let me suggest a better alternative, using the od command:
outputFile=/dev/null # For test. Replace with the real file.
count=100000000
od -An -t u2 -w2 /dev/urandom | head -n $count >$outputFile
A quick test with the time command showed this to be roughly four times faster than the tr version. And there is really no need for using "another language", as both od and head are highly optimized, and this whole thing runs at native speed.
NOTE: The above command will be generating 16-bit integers, ranging from 0 to 65535 inclusive. If you need a larger range, then you could go for 32 bit numbers, and that will give you a range from 0 to 4294967295:
od -An -t u4 -w4 /dev/urandom | head -n $count >$outputFile
If needed, the end user can scale those down to the desired size with a modulo division.
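For example, a sketch of that modulo step (keep in mind a plain modulo introduces a slight bias, since 2^32 is not an exact multiple of 100000):
od -An -t u4 -w4 /dev/urandom | head -n "$count" | awk '{ print $1 % 100000 }' > "$outputFile"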
My requirement is to chop off the header and trailer records from a large file. I'm using a file of size 2.5 GB with 1.8 million records. To do so, I'm executing:
head -n $((count-1)) largeFile | tail -n $((count-2)) > outputFile
Whenever I select count >= 725,000 records (size = 1,063,577,322), the command returns an error:
tail:unable to malloc memory
I assumed that the pipe buffer had filled up and tried:
head -n 1000000 largeFile | tail -n 720000 > outputFile
which should also fail since I'm passing a count > 725,000 to head, but it generated the output.
Why is this so? Since head generates the same amount of data (or more), both commands should fail, yet the result depends on the count given to tail. Isn't it the case that head first writes into the pipe and then tail uses the pipe as its input? If not, how is parallelism supported here, since tail works from the end, which isn't known until head completes execution? Please correct me; I've assumed a lot of things here.
PS: For the time being I've used grep to remove header and trailer. Also, ulimit on my machine returns:
pipe (512 byte) 64 {32 KB}
Thanks guys...
Just do this instead:
awk 'NR>2{print prev} {prev=$0}' largeFile > outputFile
it'll only store 1 line in memory at a time so no need to worry about memory issues.
Here's the result:
$ seq 5 | awk 'NR>2{print prev} {prev=$0}'
2
3
4
I did not test this with a large file, but it will avoid a pipe.
sed '1d;$d' largeFile > outputFile
Ed Morton and Walter A have already given workable alternatives; I'll take a stab at explaining why the original is failing. It's because of the way tail works: tail will read from the file (or pipe), starting at the beginning. It stores the last lines seen, and then when it reaches the end of the file, it outputs the stored lines. That means that when you use tail -n 725000, it needs to store the last 725,000 lines in memory, so it can print them when it reaches the end of the file. If 725,000 lines (most of a 2.5GB file) won't fit in memory, you get a malloc ("memory allocate") error.
Solution: use a process that doesn't have to buffer most of the file before outputting it, as both Ed and Walter's solutions do. As a bonus, they both trim the first line in the same process.
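If GNU head is available (it accepts negative line counts), another constant-memory variant streams from line 2 with tail -n +2, which needs no buffering, and drops the last line with head -n -1:
tail -n +2 largeFile | head -n -1 > outputFile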
Goal
Use GNU Parallel to split a large .gz file into children. Since the server has 16 CPUs, create 16 children. Each child should contain, at most, N lines. Here, N = 104,214,420 lines. Children should be in .gz format.
Input File
name: file1.fastq.gz
size: 39 GB
line count: 1,667,430,708 (uncompressed)
Hardware
36 GB Memory
16 CPUs
HPCC environment (I'm not admin)
Code
Version 1
zcat "${input_file}" | parallel --pipe -N 104214420 --joblog split_log.txt --resume-failed "gzip > ${input_file}_child_{#}.gz"
Three days later, the job was not finished. split_log.txt was empty. No children were visible in the output directory. Log files indicated that Parallel had increased the --block-size from 1 MB (the default) to over 2 GB. This inspired me to change my code to Version 2.
Version 2
# --block-size 3000000000 means a single record could be 3 GB long. Parallel will increase this value if needed.
zcat "${input_file}" | "${parallel}" --pipe -N 104214420 --block-size 3000000000 --joblog split_log.txt --resume-failed "gzip > ${input_file}_child_{#}.gz"
The job has been running for ~2 hours. split_log.txt is empty. No children are visible in the output directory yet. So far, log files show the following warning:
parallel: Warning: --blocksize >= 2G causes problems. Using 2G-1.
Questions
How can my code be improved ?
Is there a faster way to accomplish this goal ?
Let us assume that the file is a fastq file, and that the record size therefore is 4 lines.
You tell that to GNU Parallel with -L 4.
In a fastq file the order does not matter, so you want to pass blocks of n*4 lines to the children.
To do that efficiently you use --pipe-part, except --pipe-part does not work with compressed files and does not work with -L, so you have to settle for --pipe.
zcat file1.fastq.gz |
parallel -j16 --pipe -L 4 --joblog split_log.txt --resume-failed "gzip > ${input_file}_child_{#}.gz"
This will pass a block to 16 children, and a block defaults to 1 MB, which is chopped at a record boundary (i.e. 4 lines). It will run a job for each block. But what you really want is to have the input passed to only 16 jobs in total, and you can do that round robin. Unfortunately there is an element of randomness in --round-robin, so --resume-failed will not work:
zcat file1.fastq.gz |
parallel -j16 --pipe -L 4 --joblog split_log.txt --round-robin "gzip > ${input_file}_child_{#}.gz"
parallel will be struggling to keep up with the 16 gzips, but you should be able to compress 100-200 MB/s.
Now if you had the fastq file uncompressed we could do it even faster, but we would have to cheat a little: often in fastq files you will have a seqname that starts with the same string:
@EAS54_6_R1_2_1_413_324
CCCTTCTTGTCTTCAGCGTTTCTCC
+
;;3;;;;;;;;;;;;7;;;;;;;88
@EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
+
;;;;;;;;;;;7;;;;;-;;;3;83
@EAS54_6_R1_2_1_443_348
GTTGCTTCTGGCGTGGGTGGGGGGG
+EAS54_6_R1_2_1_443_348
;;;;;;;;;;;9;7;;.7;393333
Here it is @EAS54_6_R. Unfortunately this is also a valid string in the quality line (which is a really dumb design), but in practice we would be extremely surprised to see a quality line starting with @EAS54_6_R. It just does not happen.
We can use that to our advantage, because now you can use \n followed by @EAS54_6_R as a record separator, and then we can use --pipe-part. The added benefit is that the order will remain the same. Here you would have to set the block size to 1/16 of the size of file1.fastq:
parallel -a file1.fastq --block <<1/16th of the size of file1.fastq>> -j16 --pipe-part --recend '\n' --recstart '@EAS54_6_R' --joblog split_log.txt "gzip > ${input_file}_child_{#}.gz"
If you use GNU Parallel 20161222 then GNU Parallel can do that computation for you. --block -1 means: Choose a block-size so that you can give one block to each of the 16 jobslots.
parallel -a file1.fastq --block -1 -j16 --pipe-part --recend '\n' --recstart '@EAS54_6_R' --joblog split_log.txt "gzip > ${input_file}_child_{#}.gz"
Here GNU Parallel will not be the limiting factor: It can easily transfer 20 GB/s.
It is annoying having to open the file to see what the recstart value should be, so this will work in most cases:
parallel -a file1.fastq --pipe-part --block -1 -j16 \
  --regexp --recend '\n' --recstart '@.*\n[A-Za-z\n\.~]' \
  my_command
Here we assume that the lines will start like this:
@<anything>
[A-Za-z\n\.~]<anything>
<anything>
<anything>
Even if you have a few quality lines starting with '@', they will never be followed by a line starting with [A-Za-z\n.~], because a quality line is always followed by the seqname line, which starts with @.
You could also have a block size so big that it corresponded to 1/16 of the uncompressed file, but that would be a bad idea:
You would have to be able to keep the full uncompressed file in RAM.
The last gzip will only be started after the last byte has been read (and the first gzip will probably be done by then).
By setting the number of records to 104214420 (using -N) this is basically what you are doing, and your server is probably struggling with keeping the 150 GB of uncompressed data in its 36 GB of RAM.
Paired-end reads pose a restriction: the order does not matter, but it must be predictable across files. E.g. record n in file1.r1.fastq.gz must match record n in file1.r2.fastq.gz.
split -n r/16 is very efficient for doing simple round-robin. It does, however, not support multiline records. So we insert \0 as a record separator after every 4th line, which we remove after the splitting. --filter runs a command on the input, so we do not need to save the uncompressed data:
doit() { perl -pe 's/\0//' | gzip > $FILE.gz; }
export -f doit
zcat big.gz | perl -pe '($.-1)%4 or print "\0"' | split -t '\0' -n r/16 --filter doit - big.
The output files will be named big.aa.gz .. big.ap.gz.
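A quick sanity check that each child holds only whole 4-line records (every count should be a multiple of 4):
for f in big.*.gz; do printf '%s %s\n' "$f" "$(zcat "$f" | wc -l)"; done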
I need to write a bash script that would consume as much RAM as possible on my ESXi host and potentially generate a memory shortage.
I already checked here and tried running the given script several times so that I could consume more than 500 MB of RAM.
However I get a "sh: out of memory" error (of course), and I'd like to know if there is any way to configure the amount of memory allocated to my shell?
Note 1: Another requirement is that I cannot enter a VM and run a greedy task.
Note 2: I tried scripting the creation of greedy new VMs with huge RAM; however, I cannot get ESXi into a state where there is a shortage of memory.
Note 3: I cannot use a C compiler and I only have a very limited Python library.
Thank you in advance for your help :)
From an earlier answer of mine: https://unix.stackexchange.com/a/254976/30731
If you have basic GNU tools (head and tail) or BusyBox on Linux, you can do this to fill a certain amount of free memory:
</dev/zero head -c BYTES | tail
# Protip: use $((1024**3*7)) to calculate 7GiB easily
</dev/zero head -c $((1024**3*7)) | tail
This works because tail needs to keep the current line in memory, in case it turns out to be the last line. The line, read from /dev/zero which outputs only null bytes and no newlines, will be infinitely long, but is limited by head to BYTES bytes, thus tail will use only that much memory. For a more precise amount, you will need to check how much RAM head and tail itself use on your system and subtract that.
To just quickly run out of RAM completely, you can remove the limiting head part:
tail /dev/zero
If you want to also add a duration, this can be done quite easily in bash (will not work in sh):
cat <( </dev/zero head -c BYTES) <(sleep SECONDS) | tail
The <(command) construct seems to be little known but is often extremely useful; more info on it here: http://tldp.org/LDP/abs/html/process-sub.html
Then, regarding the use of cat: cat waits for all of its inputs to complete before exiting, and by keeping one of the pipes open, it keeps tail alive.
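This also gives an easy way to watch the memory actually being held, using free from procps (a rough sketch for an interactive shell; the 1 GiB figure and the sleep durations are arbitrary):
free -m                                                  # note the available memory before
cat <( </dev/zero head -c $((1024**3)) ) <(sleep 60) | tail > /dev/null &
sleep 5; free -m                                         # available memory should now be down by roughly 1 GiB
kill %1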
If you have pv and want to slowly increase RAM use:
</dev/zero head -c BYTES | pv -L BYTES_PER_SEC | tail
For example:
</dev/zero head -c $((1024**3)) | pv -L $((1024**2)) | tail
Will use up to a gigabyte at a rate of a megabyte per second. As an added bonus, pv will show you the current rate of use and the total use so far. Of course this can also be done with previous variants:
</dev/zero head -c BYTES | pv | tail
Just inserting the | pv | part will show you the current status (throughput and total by default).
Credits to falstaff for contributing a variant that is even simpler and more broadly compatible (like with BusyBox).
Bash has the ulimit command, which can be used to set the size of the virtual memory which may be used by a bash process and the processes started by it.
There are two limits, a hard limit and a soft limit. You can lower both limits, but only raise the soft limit up to the hard limit.
ulimit -S -v unlimited sets the virtual memory size to unlimited (if the hard limit allows this).
If there is a hard limit set (see ulimit -H -v), check any initialisation scripts for lines setting it.
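For example (the hard limit itself can only be lowered, not raised, by a non-root user):
ulimit -H -v              # show the hard limit on virtual memory (in kB)
ulimit -S -v              # show the current soft limit
ulimit -S -v unlimited    # raise the soft limit up to the hard limit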
Say I have some process my_proc that generates output. I use > in bash to redirect that output to a file, like so:
./my_proc > /some/file
What happens when the file system can't keep up with the output from my_proc (i.e. my_proc is generating output faster than it can be written to disk)? I assume the file system will do some buffering, but what if it never catches up?
Is there a way to configure the maximum buffer size?
The optimal solution for me would be to just start dropping output if the buffer overflows (start redirecting to /dev/null or something). Is there an easy way to do that with bash?
Your app's write calls will be delayed while the file system catches up. The most probable net effect is your app running slower, waiting on the filesystem.
Write calls are usually buffered by the OS I/O subsystem unless the destination file is opened with appropriate flags, but that is not the case with stdout. The file system can be mounted with options that disable buffering (i.e. sync mode), which would avoid it, but this is not usually done, for performance reasons.
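For example (a sketch with a hypothetical mount point; this needs root and will hurt write performance on that filesystem):
mount -o remount,sync /mnt/data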
To get what you want, you would need to program your app to buffer its output and discard the buffer if it detects that the filesystem is slowing it down. But that makes little sense: if you need the output, then you need to wait; if you don't need it, it's better not to write it in the first place.
I think @akostadinov's answer is right on the money. This can be easily illustrated with a simple example:
$ time seq 1 1000000
1
2
...
999999
1e+06
real 0m40.817s
user 0m0.600s
sys 0m0.510s
$ time seq 1 1000000 > file.txt
real 0m0.556s
user 0m0.540s
sys 0m0.020s
$ time seq 1 1000000 > /dev/null
real 0m0.546s
user 0m0.540s
sys 0m0.000s
$
We use the seq utility to output numbers 1 to 1000000, and redirect the output to various places:
With no redirection (output to stdout/terminal), seq runs many times slower
Redirection to /dev/null and to a real file are fairly close, but significantly, for the /dev/null version, the "sys" component of the time taken is zero.
There's no simple way of doing this in bash, but you can do it in C. But first, maybe you can get away with just writing every Nth line? To write only every 100th line to a file, you can do:
slowprogram | sed -n '1~100p' > file
Anyway, let's do true non-blocking output with a C snippet. Since it acts like a buffer but isn't really one, we can be funny and call it bluffer.c:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#define BUFFER_SIZE 4096

int main(void) {
    int out;
    char buffer[BUFFER_SIZE];
    ssize_t c;

    /* Reopen stdout in non-blocking mode: writes that would block fail instead. */
    out = open("/dev/stdout", O_NONBLOCK | O_APPEND | O_WRONLY);

    /* Copy stdin to the non-blocking stdout; data is silently dropped
       whenever the downstream pipe is full. */
    while ((c = read(0, buffer, BUFFER_SIZE)) > 0) {
        write(out, buffer, c);
    }
    return 0;
}
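It is run below as ./bluffer; it can be built with any C compiler, for example:
cc -O2 -o bluffer bluffer.c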
Now consider a command that quickly produces one million lines (~6.8MB) of data:
time printf "%s\n" {1..1000000} > /dev/null
real 0m1.278s
Now let's simulate slow IO by rate limiting it to 1MB/s with pv:
time printf "%s\n" {1..1000000} | pv -q -L 1M > slowfile
real 0m7.514s
As expected, it takes a lot longer, but slowfile contains all 1,000,000 lines.
Now let's insert our bluffer:
time printf "%s\n" {1..1000000} | ./bluffer | pv -q -L 1M > fastfile
real 0m1.972s
This time it finishes quickly again, and fastfile contains just 141,960 of the 1,000,000 lines. In the file, we see gaps like this:
52076
52077
188042
188043
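The exact number of surviving lines varies from run to run, since how much gets dropped depends on timing; it can be checked with:
wc -l fastfile slowfile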