How can I generate a file filled with random numbers or characters in a shell script? I also want to be able to specify the size of the file.
Use the dd command to read data from /dev/random:
dd if=/dev/random of=random.dat bs=1000000 count=5000
That reads 5000 blocks of 1 MB each: a whole 5 GB of random data! Note that /dev/random can block when the entropy pool runs low; use /dev/urandom if you don't want to wait.
Experiment with the block size argument to get optimal performance.
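If blocking is a concern, the same approach works against /dev/urandom; a minimal sketch (block size spelled out in bytes, since the bs=1M shorthand is a GNU dd extension):

```shell
# Create a 5 MB file of random bytes. /dev/urandom never blocks,
# unlike /dev/random on older Linux kernels.
dd if=/dev/urandom of=random.dat bs=1048576 count=5
```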
head -c 10 /dev/random > rand.txt
Change 10 to whatever size you need. See "man 4 random" for the differences between /dev/random and /dev/urandom.
Or, for only base64 characters
head -c 10 /dev/random | base64 | head -c 10 > rand.txt
The base64 output may include some characters you're not interested in (+, / and =), but I didn't have time to come up with a better one-liner character converter...
(Also, we're drawing more bytes from /dev/random than we keep. Sorry, entropy pool!)
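If you only want letters and digits, a common trick is to filter the random stream with tr instead of base64 (a sketch; LC_ALL=C guards tr against multibyte locales):

```shell
# Keep only alphanumeric bytes from the random stream, stop at 10
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 10 > rand.txt
```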
A good start would be:
http://linuxgazette.net/153/pfeiffer.html
#!/bin/bash
# Created by Ben Okopnik on Wed Jul 16 18:04:33 EDT 2008

######## User settings ############
MAXDIRS=5
MAXDEPTH=2
MAXFILES=10
MAXSIZE=1000
######## End of user settings ############

# How deep in the file system are we now?
TOP=`pwd|tr -cd '/'|wc -c`

populate() {
    cd "$1"
    curdir=$PWD

    files=$(($RANDOM*$MAXFILES/32767))
    for n in `seq $files`
    do
        f=`mktemp XXXXXX`
        size=$(($RANDOM*$MAXSIZE/32767))
        head -c $size /dev/urandom > "$f"
    done

    depth=`pwd|tr -cd '/'|wc -c`
    if [ $(($depth-$TOP)) -ge $MAXDEPTH ]
    then
        return
    fi

    unset dirlist
    dirs=$(($RANDOM*$MAXDIRS/32767))
    for n in `seq $dirs`
    do
        d=`mktemp -d XXXXXX`
        dirlist="$dirlist${dirlist:+ }$PWD/$d"
    done

    for dir in $dirlist
    do
        populate "$dir"
    done
}

populate "$PWD"
Create 100 randomly named files of 50MB in size each:
for i in `seq 1 100`; do echo $i; dd if=/dev/urandom of=file.$RANDOM bs=1024 count=50000; done
The RANDOM variable will give you a different number each time:
echo $RANDOM
Save as "script.sh", run as ./script.sh SIZE. The printf code was lifted from http://mywiki.wooledge.org/BashFAQ/071. Of course, you could initialize the mychars array with brute force, mychars=("0" "1" ... "A" ... "Z" "a" ... "z"), but that wouldn't be any fun, would it?
#!/bin/bash

declare -a mychars
for (( I=0; I<62; I++ )); do
    if [ $I -lt 10 ]; then
        mychars[I]=$I
    elif [ $I -lt 36 ]; then
        D=$((I+55))
        mychars[I]=$(printf \\$(($D/64*100+$D%64/8*10+$D%8)))
    else
        D=$((I+61))
        mychars[I]=$(printf \\$(($D/64*100+$D%64/8*10+$D%8)))
    fi
done

for (( I=$1; I>0; I-- )); do
    echo -n ${mychars[$((RANDOM%62))]}
done

echo
I need to generate a random port number between 2000-65000 from a shell script. The problem is $RANDOM is a 15-bit number, so I'm stuck!
PORT=$(($RANDOM%63000+2001)) would work nicely if it wasn't for the size limitation.
Does anyone have an example of how I can do this, maybe by extracting something from /dev/urandom and getting it within a range?
shuf -i 2000-65000 -n 1
Enjoy!
Edit: The range is inclusive.
On Mac OS X and FreeBSD you may also use jot:
jot -r 1 2000 65000
According to the bash man page, $RANDOM is distributed between 0 and 32767; that is, it is an unsigned 15-bit value. Assuming $RANDOM is uniformly distributed, you can create a uniformly-distributed unsigned 30-bit integer as follows:
$(((RANDOM<<15)|RANDOM))
Since your range is not a power of 2, a simple modulo operation will only almost give you a uniform distribution, but with a 30-bit input range and a less-than-16-bit output range, as you have in your case, this should really be close enough:
PORT=$(( ((RANDOM<<15)|RANDOM) % 63001 + 2000 ))
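A quick sanity check of that expression: generate a batch of ports and assert each one lands in bounds.

```shell
# Every generated port must fall within 2000..65000
for i in $(seq 1 1000); do
    port=$(( ((RANDOM<<15)|RANDOM) % 63001 + 2000 ))
    [ "$port" -ge 2000 ] && [ "$port" -le 65000 ] || exit 1
done
echo "all ports in range"
```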
and here's one with Python 3 (randrange's upper bound is exclusive, so 65001 covers 2000..65000):
randport=$(python3 -S -c "import random; print(random.randrange(2000, 65001))")
and one with awk (srand() seeds from the clock, so calls within the same second repeat):
awk 'BEGIN{srand();print int(rand()*63001)+2000}'
The simplest general way that comes to mind is a perl one-liner (rand's bound is exclusive, so 63001 yields 2000..65000):
perl -e 'print int(rand(63001)) + 2000'
You could always just use two numbers:
PORT=$(($RANDOM + ($RANDOM % 2) * 32768))
You still have to clip to your range. It's not a general n-bit random number method, but it'll work for your case, and it's all inside bash.
If you want to be really cute and read from /dev/urandom, you could do this:
od -A n -N 2 -t u2 /dev/urandom
That'll read two bytes and print them as an unsigned int; you still have to do your clipping.
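Putting the od approach together with the clipping (a sketch; the modulo introduces a negligible bias for a range this small):

```shell
# Two bytes -> unsigned 16-bit int -> fold into 2000..65000
n=$(od -A n -N 2 -t u2 /dev/urandom | tr -d ' ')
port=$(( n % 63001 + 2000 ))
echo "$port"
```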
If you're not a bash expert and were looking to get this into a variable in a Linux-based bash script, try this:
VAR=$(shuf -i 200-700 -n 1)
That gets you the range of 200 to 700 into $VAR, inclusive.
Here's another one. I thought it would work just about anywhere, but sort's random option isn't available on my CentOS box at work.
seq 2000 65000 | sort -R | head -n 1
Same with ruby:
echo $(ruby -e 'puts rand(20..65)') #=> 65 (inclusive ending)
echo $(ruby -e 'puts rand(20...65)') #=> 37 (exclusive ending)
Bash documentation says that every time $RANDOM is referenced, a random number between 0 and 32767 is returned. If we sum two consecutive references, we get values from 0 to 65534, which covers the desired range of 63001 possibilities for a random number between 2000 and 65000.
To adjust it to the exact range, we take the sum modulo 63001, which gives a value from 0 to 63000; adding 2000 then yields the desired random number between 2000 and 65000. (Note that the sum of two uniform values is not itself uniform: midrange sums are more likely, so this is only suitable where rough randomness is fine.) This can be summarized as follows:
port=$((((RANDOM + RANDOM) % 63001) + 2000))
Testing
# Generate random numbers and print the lowest and greatest found
test-random-max-min() {
max=2000
min=65000
for i in {1..10000}; do
port=$((((RANDOM + RANDOM) % 63001) + 2000))
echo -en "\r$port"
[[ "$port" -gt "$max" ]] && max="$port"
[[ "$port" -lt "$min" ]] && min="$port"
done
echo -e "\rMax: $max, min: $min"
}
# Sample output
# Max: 64990, min: 2002
# Max: 65000, min: 2004
# Max: 64970, min: 2000
Correctness of the calculation
Here is a full, brute-force test for the correctness of the calculation. This program just tries to generate all 63001 different possibilities randomly, using the calculation under test. The --jobs parameter should make it run faster, but it's not deterministic (total of possibilities generated may be lower than 63001).
test-all() {
start=$(date +%s)
find_start=$(date +%s)
total=0; ports=(); i=0
rm -f ports/ports.* ports.*
mkdir -p ports
while [[ "$total" -lt "$2" && "$all_found" != "yes" ]]; do
port=$((((RANDOM + RANDOM) % 63001) + 2000)); i=$((i+1))
if [[ -z "${ports[port]}" ]]; then
ports["$port"]="$port"
total=$((total + 1))
if [[ $((total % 1000)) == 0 ]]; then
echo -en "Elapsed time: $(($(date +%s) - find_start))s \t"
echo -e "Found: $port \t\t Total: $total\tIteration: $i"
find_start=$(date +%s)
fi
fi
done
all_found="yes"
echo "Job $1 finished after $i iterations in $(($(date +%s) - start))s."
out="ports.$1.txt"
[[ "$1" != "0" ]] && out="ports/$out"
echo "${ports[@]}" > "$out"
}
say-total() {
generated_ports=$(cat "$@" | tr ' ' '\n' | sed -E s/'^([0-9]{4})$'/'0\1'/)
echo "Total generated: $(echo "$generated_ports" | sort | uniq | wc -l)."
}
total-single() { say-total "ports.0.txt"; }
total-jobs() { say-total "ports/"*; }
all_found="no"
[[ "$1" != "--jobs" ]] && test-all 0 63001 && total-single && exit
for i in {1..1000}; do test-all "$i" 40000 & sleep 1; done && wait && total-jobs
For determining how many iterations are needed to get a given probability p/q of all 63001 possibilities having been generated, I believe we can use the expression below. For example, here is the calculation for a probability greater than 1/2, and here for greater than 9/10.
$RANDOM is a number between 0 and 32767. You want a port between 2000 and 65000. These are 63001 possible ports. If we stick to values of $RANDOM + 2000 between 2000 and 33500, we cover a range of 31501 ports. If we flip a coin and then conditionally add 31501 to the result, we can get more ports, from 33501 to 65001. Then if we just drop 65001, we get the exact coverage needed, with a uniform probability distribution for all ports, it seems.
random-port() {
while [[ not != found ]]; do
# 2000..33500
port=$((RANDOM + 2000))
while [[ $port -gt 33500 ]]; do
port=$((RANDOM + 2000))
done
# 2000..65001
[[ $((RANDOM % 2)) = 0 ]] && port=$((port + 31501))
# 2000..65000
[[ $port = 65001 ]] && continue
echo $port
break
done
}
Testing
i=0
while true; do
i=$((i + 1))
printf "\rIteration $i..."
printf "%05d\n" $(random-port) >> ports.txt
done
# Then later we check the distribution
sort ports.txt | uniq -c | sort -r
You can do this (scaling by the size of the range, r - f + 1, keeps the result within 2000..65000; multiplying by r alone could exceed it):
od -A n -N 2 -t u2 /dev/urandom | awk -v f=2000 -v r=65000 '{printf "%i\n", f + (r - f + 1) * $1 / 65536}'
If you need more details see Shell Script Random Number Generator.
PORT=$(($RANDOM%63000+2001)) is close to what you want I think.
PORT=$(($RANDOM$RANDOM$RANDOM%63000+2001)) gets around the size limitation that troubles you. Since bash makes no distinction between a number variable and a string variable, this works perfectly well. The "number" $RANDOM can be concatenated like a string and then used as a number in a calculation. Amazing! (Be aware that the resulting distribution is far from uniform, though.)
Or on OS X, gsort from GNU coreutils has the random-sort option:
$ seq 2000 65000 | gsort --random-sort | head -n 1
Generate a random number in the range [$floor, $ceil), with no dependencies:
$(( (RANDOM % (ceil - floor)) + floor ))
Generate 100 numbers from 2000 up to (but not including) 65000:
for i in $(seq 100); do echo $(( (RANDOM % 63000) + 2000 )); done
You can get the random number through urandom
head -200 /dev/urandom | cksum
Output:
3310670062 52870
To retrieve just the first part of the above number:
head -200 /dev/urandom | cksum | cut -f1 -d " "
Then the output is
3310670062
To meet your requirement (63001 values, 2000 through 65000):
head -200 /dev/urandom | cksum | cut -f1 -d " " | awk '{print $1%63001+2000}'
This is how I usually generate random numbers. Then I use "NUM_1" as the variable for the port number. Here is a short example script (note it validates the input before using it):
#!/bin/bash
clear
echo 'Choose how many digits you want for the port# (1-5)'
read PORT
if [ "$PORT" -gt "5" ] || [ "$PORT" -lt "1" ]
then
    clear
    echo -e "\x1b[31m Choose a number between 1 and 5! \x1b[0m"
    sleep 3
    clear
    exit 1
fi
NUM_1="$(tr -dc '0-9' </dev/urandom | head -c "$PORT")"
echo "$NUM_1"
This works for me:
export CUDA_VISIBLE_DEVICES=$((( RANDOM % 8 )))
You can add 1 if you want the range to start from 1 instead of 0.
Generating 50 numbers in Bash from the range 100000000000-999999999999 and saving them into the file filename.csv:
shuf -i 100000000000-999999999999 -n 50 -o filename.csv
If you need a range bigger than 15 bits, don't use the slow, insecure, and outdated 15-bit RANDOM; use the fast and secure 32-bit SRANDOM.
SRANDOM has been available since the Bash 5.1 rollout in late 2020.
"one interesting addition to note with Bash 5.1 is the new SRANDOM variable. The SRANDOM variable provides random data from the system's entropy engine and cannot be reseeded. In particular, the SRANDOM variable provides a 32-bit random number that relies upon getrandom/getentropy -- with fall-backs to /dev/urandom or arc4random or even another fallback after that if necessary."
Source: https://www.phoronix.com/news/GNU-Bash-5.1
See the differences between RANDOM and SRANDOM in bash:
Difference between RANDOM and SRANDOM in Bash
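A minimal sketch, assuming bash 5.1 or newer where SRANDOM exists:

```shell
# SRANDOM is a full 32-bit value, so no bit-stitching is needed
port=$(( SRANDOM % 63001 + 2000 ))
echo "$port"
```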
Feel free to improve this answer.
I realize it's very unlikely that the size of a single line in a text file would ever organically exceed 2048 bytes. But I still think it would be very valuable to know how to make sure that isn't the case.
Edit: Just wanted to say that the reason I asked this question is because I'm writing a script that verifies that a file is a text file as defined by POSIX. One of the requirements is that no line in a text file shall exceed {LINE_MAX} bytes in length (newline inclusive). On Ubuntu and FreeBSD this value is 2048.
On GNU Linux you need not worry about this limitation, as it allows a line length bound only by memory. FreeBSD, however, does impose this limitation, and I've recently made a serious effort to learn FreeBSD, so I think it's an important thing for me to be able to do.
Edit: I think I was wrong about FreeBSD. I'm able to process lines that exceed 2048 bytes in length with grep
This will match any line of more than 2048 bytes (LANG=C makes . match single bytes rather than multibyte characters):
LANG=C grep -E '^.{2049}' some.txt
For example:
$ printf é | LANG=C grep -E '^.{2}'
é
If you instead mean characters, use the relevant LANG value, or leave it unset to rely on your environment's default:
$ printf é | LANG=en_US.utf8 grep -E '^.{2}'
$ echo $?
1
If you mean graphemes, use this:
printf 🐚 | grep -Px '\X{2}'
$ echo $?
1
You can see how many lines are too long:
cut -b 2049- < inputfile | grep -c '.'
# When you want to count chars, not bytes, use "-c"
cut -c 2049- < inputfile | grep -c '.'
You can use this in a function
checkfile() {
    if [ $# -eq 2 ]; then
        maxlen="$2"
    else
        maxlen=2048
    fi
    # Return 0 (OK) only when no line exceeds maxlen bytes
    ! cut -b "$(( maxlen + 1 ))-" < "$1" | grep -q '.'
}
# Run test
testfile=/tmp/overflow.txt
echo "1234567890" > "${testfile}" # length 10, not counting '\n'
for boundary in 5 10 20; do
echo "Check with maxlen ${boundary}"
checkfile "${testfile}" ${boundary}
if [ $? -eq 0 ]; then
echo File OK
else
echo Overflow
fi
# Example of an inline check. Look out: the last ';' before '}' is needed.
checkfile "${testfile}" ${boundary} || { echo "Your comment"; echo "can call exit now"; }
# checkfile "${testfile}" ${boundary} || { echo "${testfile} has long lines"; exit 1; }
done
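An alternative sketch using awk: instead of flagging offending lines, report the longest line's length in bytes (under LANG=C, length() counts bytes in gawk and BSD awk; treat that as an assumption about your awk):

```shell
# Print the byte length of the longest line in the input
printf '12345\n1234567890\n123\n' |
    LANG=C awk '{ if (length($0) > max) max = length($0) } END { print max + 0 }'
# prints 10
```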
I am trying to create a bash script that is essentially like a magic 8 ball with 6 different responses (Yes, No, Maybe, Hard to tell, Unlikely, and Unknown). The key is that once a response is given, it should not be given again until all responses have been given.
Here is what I have so far:
#!/bin/bash
echo "Ask and you shall receive your fortune: "
n=$((RANDOM*6/32767))
while [`grep $n temp | wc awk '{print$3}'` -eq 0]; do
n=$((RANDOM*6/32767))
done
grep -v $n temp > temp2
mv temp2 temp
Basically I have the 6 responses all on different lines in the temp file, and I am trying to construct the loops so that once a response is given, it creates a new file without that response (temp2), then copies it back to temp. Then once the temp file is empty it will continue from the beginning.
I'm quite positive that my current inner loop is wrong, and that I need an outer loop, but I'm fairly new to this and I am stuck.
Any help will be greatly appreciated.
Try something like this:
#!/bin/bash

shuffle() {
    local i tmp size max rand

    # $RANDOM % (i+1) is biased because of the limited range of $RANDOM
    # Compensate by using a range which is a multiple of the array size.
    size=${#array[*]}
    max=$(( 32768 / size * size ))

    for ((i=size-1; i>0; i--)); do
        while (( (rand=$RANDOM) >= max )); do :; done
        rand=$(( rand % (i+1) ))
        tmp=${array[i]} array[i]=${array[rand]} array[rand]=$tmp
    done
}

array=( 'Yes' 'No' 'Maybe' 'Hard to tell' 'Unknown' 'Unlikely' )
shuffle
for var in "${array[@]}"
do
    echo -n "Ask a question: "
    read q
    echo "${var}"
done
I wrote a script that follows your initial approach (using temp files):
#!/bin/bash

# Make a copy of temp, so you don't have to recreate the file every time you run this script
TEMP_FILE=$(tempfile)
cp temp "$TEMP_FILE"

# You know this from the start: the file contains 6 possible answers. If you
# add more in the future, change this to the line count of the file.
TOTAL_LINES=6

echo "Ask and you shall receive your fortune: "
# Dummy read adds a pause to the script and involves the user
read

# Contrary to what you stated, you don't need an extra loop; one is enough.
# Just change the condition to track the line count of the temp file.
while [ $TOTAL_LINES -gt 0 ]; do
    # RANDOM % TOTAL_LINES ranges from 0 to TOTAL_LINES-1, so add 1
    N=$(( RANDOM % TOTAL_LINES + 1 ))

    # Print the answer (grab the first N lines with head, then keep only the Nth with tail)
    head -n $N < "$TEMP_FILE" | tail -n 1

    # Build a new file without line $N and store it in a temp2 file
    TEMP_FILE_2=$(tempfile)
    head -n $(( N - 1 )) < "$TEMP_FILE" > "$TEMP_FILE_2"
    tail -n $(( TOTAL_LINES - N )) < "$TEMP_FILE" >> "$TEMP_FILE_2"
    mv "$TEMP_FILE_2" "$TEMP_FILE"

    echo "Ask and you shall receive your fortune: "
    read

    # Get the line count of the temp file (cut drops the file name from the wc output)
    TOTAL_LINES=$(wc -l "$TEMP_FILE" | cut -d" " -f1)
done
$ man shuf
SHUF(1) User Commands
NAME
shuf - generate random permutations
SYNOPSIS
shuf [OPTION]... [FILE]
shuf -e [OPTION]... [ARG]...
shuf -i LO-HI [OPTION]...
DESCRIPTION
Write a random permutation of the input lines to standard output.
More stuff follows, you can read it on your own machine :)
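For the 8-ball use case specifically, shuf -e (GNU coreutils) permutes its arguments directly, which replaces all of the temp-file bookkeeping:

```shell
# Emit the six responses once each, in random order
shuf -e 'Yes' 'No' 'Maybe' 'Hard to tell' 'Unlikely' 'Unknown'
```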
I'd like to generate dummy files in bash. The content doesn't matter; if it were random that would be nice, but all the same byte is also acceptable.
My first attempt was the following command:
rm dummy.zip
touch dummy.zip
x=0
while [ $x -lt 100000 ]; do
    echo a >> dummy.zip
    x=`expr $x + 1`
done
The problem was its poor performance. I'm using GitBash on Windows, so it might be much faster under Linux but the script is obviously not optimal.
Could you suggest a quicker and nicer way to generate dummy (binary) files of a given size?
You can try head command:
$ head -c 100000 /dev/urandom >dummy
You may use dd for this purpose:
dd if=/dev/urandom bs=1024 count=5 of=dummy
if = input file
of = output file
bs = block size
Note, that
x=`expr $x + 1`;
isn't the most efficient way to do calculations in bash. Do integer arithmetic inside double parentheses:
x=$((x+1))
But for an incremented counter in a loop, the for loop was invented:
x=0;
while [ $x -lt 100000 ];
do echo a >> dummy.zip;
x=`expr $x + 1`;
done;
in contrast to:
for ((x=0; x<100000; ++x))
do
echo a
done >> dummy.zip
Here are 3 things to note:
unlike with the [ command, you don't need spaces inside the parens.
you may use prefix (or postfix) increment here: ++x
the redirection to the file is pulled out of the loop. Instead of 100,000 open-and-close steps, the file is only opened once.
But there is a still simpler form of the for loop (note that {1..100000} expands to exactly 100,000 values):
for x in {1..100000}
do
echo a
done >> dummy.zip
This will generate a text file 100,000 bytes large:
yes 123456789 | head -10000 > dummy.file
If your file system is ext4, btrfs, xfs, or ocfs2, and if you don't care about the content, you can use fallocate. It's the fastest method if you need big files.
fallocate -l 100KB dummy_100KB_file
See "Quickly create a large file on a Linux system?" for more details.
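If fallocate isn't available, GNU truncate can also create a file of an exact size on any filesystem (the file is sparse, so it reads as all zero bytes; 100K here means 100 x 1024 bytes):

```shell
# Create a 102400-byte sparse file without writing any data
truncate -s 100K dummy_100KB_file
```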
$ openssl rand -out random.tmp 1000000
Possibly
dd if=/dev/zero of=/dummy10MBfile bs=1M count=10
echo "To print the words in sequence from the file"
c=1
for w in $(cat file)
do
    echo "$c. $w"
    c=$(expr $c + 1)
done
Easy way:
Create a file named test containing a single line, then execute:
while :; do cat test test > test2; mv test2 test; done
(Note that cat test >> test won't work; GNU cat refuses when the input file is the output file.) Ctrl+C after a minute will result in plenty of gigabytes, since the file doubles on every iteration. :)