I want to generate a random number from a given list.
For example, if I give the numbers
1,22,33,400,400,23,12,53 etc.
I want to select a random number from the given numbers.
Couldn't find an exact duplicate of this, so here goes my attempt, exactly what 123 mentions in the comments. The solution needs only the shell itself (bash arrays and $RANDOM) and does not call any external binaries, which helps performance.
You can run the commands below directly in a terminal.
# Read the elements into a bash array, with IFS set to the delimiter for the input
IFS="," read -ra randomNos <<< "1,22,33,400,400,23,12,53"
# Print a random element, using the built-in $RANDOM variable modulo the
# array length.
printf "%s\n" "${randomNos[ $RANDOM % ${#randomNos[@]} ]}"
As per the comments below, if you want to pick from a range while ignoring a certain list of numbers, use the approach below:
#!/bin/bash
# Initializing the ignore list with the numbers you have mentioned
declare -A ignoreList='([21]="1" [25]="1" [53]="1" [80]="1" [143]="1" [587]="1" [990]="1" [993]="1")'
# Generating the random number
randomNumber="$(($RANDOM % 1023))"
# Printing the number if it is not in the ignore list
[[ -z "${ignoreList[$randomNumber]}" ]] && printf "%s\n" "$randomNumber"
You can save it in a bash variable like this:
randomPortNumber=$([[ -z "${ignoreList[$randomNumber]}" ]] && printf "%s\n" "$randomNumber")
Remember that associative arrays need bash version ≥ 4 to work.
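Note that the snippet above simply prints nothing when the drawn number happens to be in the ignore list. If you always need a value, one option is a small retry loop (a sketch, reusing the ignoreList array from above):
# Keep drawing until we get a number that is not in the ignore list
while :; do
    randomNumber="$(($RANDOM % 1023))"
    [[ -z "${ignoreList[$randomNumber]}" ]] && break
done
printf "%s\n" "$randomNumber"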
Related
I have 2 large arrays with hash values stored in them. I'm trying to find the best way to verify all of the hash values in array_a are also found in array_b. The best I've got so far is
Import the Hash files into an array
Sort each array
For loop through array_a
Inside of array_a's for loop, do another for loop for array_b (seems inefficient).
If found, unset the value in array_b
Set the "found" value to 1 and break the loop
If array_a doesn't have a match, output it to a file.
I have large images that I need to verify have been uploaded to the site and that the hash values match. I've created a file of hashes from the original files and scraped the website ones to create a second list of hash values. Trying to keep this as vanilla as possible, so only using typical bash functionality.
#!/bin/bash
array_a=($(< original_sha_values.txt))
array_b=($(< sha_values_after_downloaded.txt))

# Sort to speed up.
IFS=$'\n' array_a_sorted=($(sort <<<"${array_a[*]}"))
unset IFS
IFS=$'\n' array_b_sorted=($(sort <<<"${array_b[*]}"))
unset IFS

for item1 in "${array_a_sorted[@]}" ; do
    found=0
    for item2 in "${!array_b_sorted[@]}" ; do
        if [[ $item1 == ${array_b_sorted[$item2]} ]]; then
            unset 'array_b_sorted[item2]'
            found=1
            break
        fi
    done
    if [[ $found == 0 ]]; then
        echo "$item1" >> hash_is_missing_a_match.log
    fi
done
Sorting sped it up a lot:
IFS=$'\n' array_a_sorted=($(sort <<<"${array_a[*]}"))
unset IFS
IFS=$'\n' array_b_sorted=($(sort <<<"${array_b[*]}"))
unset IFS
Is this really the best way of doing this?
for item1 in "${array_a_sorted[@]}" ; do
    ...
    for item2 in "${!array_b_sorted[@]}" ; do
        if ...
            unset 'array_b_sorted[item2]'
            break
Both arrays have 12,000 lines of 64-bit hashes, and it takes 20+ minutes to compare them. Is there a way to improve the speed?
You're doing it the hard way.
If the task is to find the entries in file1 that are not in file2, here is a shorter approach:
$ comm -23 <(sort f1) <(sort f2)
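Applied to the files from the question, that could look like this (a sketch; it assumes one hash per line in each file):
comm -23 <(sort original_sha_values.txt) <(sort sha_values_after_downloaded.txt) > hash_is_missing_a_match.log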
I think karakfa's answer is probably the best approach if you just want to get it done and not worry about optimizing bash code.
However, if you still want to do it in bash, and you are willing to use some bash-specific features, you could shave off a lot of time using an associative array instead of two regular arrays:
# Read the original hash values into a bash associative array
declare -A original_hashes=()
while read -r hash; do
    original_hashes["$hash"]=1
done < original_sha_values.txt

# Then read the downloaded values and check each one to see if it exists
# in the associative array. Lookup time *should* be O(1)
while read -r hash; do
    if [[ -z "${original_hashes["$hash"]+x}" ]]; then
        echo "$hash" >> hash_is_missing_a_match.log
    fi
done < sha_values_after_downloaded.txt
This should be a lot faster than the nested loop implementation using regular arrays. Also, I didn't need any sorting, and all of the insertions and lookups on the associative array should be O(1), assuming bash implements associative arrays as hash tables. I couldn't find anything authoritative to back that up though, so take that with a grain of salt. Either way, it should still be faster than the nested loop method.
If you want the output sorted, you can just change the last line to:
done < <(sort sha_values_after_downloaded.txt)
in which case you're still only having to sort one file, not two.
I'm working on a script where I need a specific amount of unique numbers (the amount is given by a list of words), each number having a constant 5 digits.
My first try would be:
test=`cat amount.log`
for i in $test
do
    echo $i $((RANDOM%10000+20000)) >> random_numbers.log
done
The output of this script is exactly the one I am searching for:
word1 25439
word2 26134
word3 21741
But I don't trust the $RANDOM variable to give me a unique list where no number is written more than once.
To be sure the numbers are unique, my first attempt would be to use sort -u to get rid of duplicate entries, but this would mean I possibly end up with fewer numbers than words in the list, and for some words I would need to run the script again to get a unique number.
I'll appreciate any suggestions; it needs to be done in Unix/AIX ksh shell.
You could ensure that each number is really unique by looking for it back in the output file...
test=`cat amount.log`
touch random_numbers.log
for i in $test
do
    while true
    do
        num=$((RANDOM%10000+20000))
        grep $num random_numbers.log > /dev/null
        if [[ $? -gt 0 ]]
        then
            break
        fi
    done
    echo $i $num >> random_numbers.log
done
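If the word list is large, re-reading the output file with grep for every candidate number gets slow. A variation that keeps the already-used numbers in a shell variable instead (a sketch; should work in ksh and bash, and it assumes one word per line in amount.log):
used=","
while read -r word
do
    while :
    do
        num=$((RANDOM%10000+20000))
        # accept the number only if it has not been handed out yet
        case "$used" in
            *,"$num",*) ;;   # already used, draw again
            *) break ;;
        esac
    done
    used="$used$num,"
    echo "$word $num"
done < amount.log > random_numbers.log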
Have a look at the answers to How to generate a list of unique random strings? One of the ideas there should help you.
I'm trying to compare values of two variables both containing strings-as-numbers. For example:
var1="5.4.7.1"
var2="6.2.4.5"
var3="1-4"
var4="1-5"
var5="2.3-3"
var6="2.3.4"
Sadly, I don't even know where to start... Any help will be appreciated!
EDIT (better description of the problem): What I meant is, how would I go about comparing the value of $var5 to $var6 and determining which one of them is higher?
You can use a [[ ${str1} < ${str2} ]] style test. This should work:
function max()
{
[[ "$1" < "$2" ]] && echo $2 || echo $1
}
max=$(max ${var5} ${var6})
echo "max=${max}."
It depends on the required portability of the solution. If you don't care about that and you use a deb-based distribution, you can use the dpkg --compare-versions feature.
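For example (a sketch; dpkg --compare-versions exits with status 0 when the stated relation holds):
if dpkg --compare-versions "$var1" lt "$var2"; then
    echo "$var1 is lower than $var2"
fi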
However, if you need to run your script on distros without dpkg, I would use the following approach.
The value you need to compare consists of a first (the first element) and a rest (all the other elements). The first is usually called the head and the rest the tail, but I deliberately use the names first and rest so as not to confuse them with the head(1) and tail(1) tools available on Unix systems.
In case first($var1) is not equal to first($var2), you just compare those first elements. If the firsts are equal, recursively run the compare function on rest($var1) and rest($var2); a sketch of this is at the end of this answer. As a border case you need to decide what to do if the values are like:
var1 = "2.3.4"
var2 = "2.3"
and at some step you will compare an empty and a non-empty first.
Hint for implementing first and rest functions:
foo="2.3-4.5"
echo ${foo%%[^0-9][0-9]*}
echo ${foo#[0-9]*[^0-9]}
If those are unclear to you, read the man bash section titled Parameter Expansion. Searching the manual for the string ## will show you the exact section immediately.
Also, make sure you are comparing the elements numerically, not in lexical order. For example, compare the results of the following commands:
[[ 9 > 10 ]]; echo $?
[[ 9 -gt 10 ]]; echo $?
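Putting the pieces together, here is a minimal sketch of the recursive first/rest comparison described above (the function name compare_versions is only illustrative; it prints -1, 0 or 1 when the first value is lower than, equal to, or higher than the second):
compare_versions()
{
    local a=$1 b=$2
    # Both exhausted: the values are equal
    [[ -z $a && -z $b ]] && { echo 0; return; }
    # first: everything up to the first non-digit
    local fa=${a%%[^0-9]*} fb=${b%%[^0-9]*}
    # rest: drop the first element and the separator that follows it
    local ra=${a#"$fa"} rb=${b#"$fb"}
    ra=${ra#[^0-9]} rb=${rb#[^0-9]}
    # A missing element counts as 0; compare numerically, not lexically
    if   (( ${fa:-0} < ${fb:-0} )); then echo -1
    elif (( ${fa:-0} > ${fb:-0} )); then echo 1
    else compare_versions "$ra" "$rb"
    fi
}
For the example values, compare_versions "$var5" "$var6" prints -1, i.e. $var6 is the higher one, and compare_versions "2.3.4" "2.3" prints 1.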
I have written a little bash script that reads commands (one per line), in a text file.
At the moment, the script (shown below) is executing the commands in sequential order (i.e. in the same order they are entered in the file).
I would like help to modify the script below so that it reads the commands into an array, then randomizes that array (i.e. list) before iterating through the randomized list.
This is what I have so far:
while read -r -a array
do
    python make_move.py "${array[@]}"
done < game_commands.dat
I am running bash 4.1.5 on Ubuntu 10.04 LTS
[[Edit]]
I need to execute ALL of the commands in the list, with each command being executed ONLY ONCE.
You can shuffle the lines of a file using the shuf command.
Edit: Your code using shuf would look like this:
while read -r -a array
do
    python make_move.py "${array[@]}"
done < <(shuf game_commands.dat)
If you need to execute something like this on a system where shuf is not available (bash 4 only, easily adaptable for most modern shells):
unset max s i
readarray -t _cmd < game_commands.dat
while (( max < ${#_cmd[@]} )); do
    (( i = RANDOM % ${#_cmd[@]} ))
    [[ $s == *,$i,* ]] || {
        python make_move.py "${_cmd[i]}"
        (( max++ ))
    }
    s+=,$i,
done
Try sort -R. That will shuffle the lines randomly. EDIT: But identical lines will always end up grouped together, since sort -R orders by a hash of each line rather than shuffling truly at random...
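For example, mirroring the shuf version above (a sketch; the caveat about identical lines still applies):
while read -r -a array
do
    python make_move.py "${array[@]}"
done < <(sort -R game_commands.dat)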
I was given this text file, called stock.txt; the content of the text file is:
pepsi;drinks;3
fries;snacks;6
apple;fruits;9
baron;drinks;7
orange;fruits;2
chips;snacks;8
I will need to use a bash script to come up with this output:
Total amount for drinks: 10
Total amount for snacks: 14
Total amount for fruits: 11
Total of everything: 35
My gut tells me I will need to use sed, group, grep and something else.
Where should I start?
I would break the exercise down into steps
Step 1: Read the file one line at a time
while read -r line
do
    # do something with $line
done
Step 2: Pattern match (drinks, snacks, fruits) and do some simple arithmetic. This step requires that you tokenize each line, which I'll leave as an exercise for you to figure out (a small hint follows the snippet below).
if [[ "$line" =~ "drinks" ]]
then
echo "matched drinks"
.
.
.
fi
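As a hint for the tokenizing, one common way is to let read split the line on the separator (a sketch; the variable names name, cate and price are only illustrative):
IFS=';' read -r name cate price <<< "$line"
echo "category is $cate, price is $price"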
Pure Bash. A nice application for an associative array:
declare -A category            # associative array
IFS=';'
while read name cate price ; do
    ((category[$cate]+=price))
done < stock.txt

sum=0
for cate in "${!category[@]}"; do          # loop over the indices
    printf "Total amount of %s: %d\n" $cate ${category[$cate]}
    ((sum+=${category[$cate]}))
done
printf "Total amount of everything: %d\n" $sum
There is a short description about processing comma-separated files in bash here:
http://www.cyberciti.biz/faq/unix-linux-bash-read-comma-separated-cvsfile/
You could do something similar. Just change IFS from comma to semicolon.
Oh yeah, and a general hint for learning bash: man is your friend. Use this command to see the manual pages for all (or most) commands and utilities.
Example: man read shows the manual page for the read command. On most systems it will be opened in less, so you exit the manual by pressing q (may sound funny, but it took me a while to figure that out).
The easy way to do this is using a hash table, which is supported directly by bash 4.x and of course can be found in awk and perl. If you don't have a hash table then you need to loop twice: once to collect the unique values of the second column, once to total.
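As an illustration, here is a hash-table sketch in awk (field 2 is the category and field 3 the price, matching the stock.txt format above; the category order in the output is not guaranteed):
awk -F';' '{ total[$2] += $3; sum += $3 }
    END {
        for (c in total) printf "Total amount for %s: %d\n", c, total[c]
        printf "Total of everything: %d\n", sum
    }' stock.txt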
There are many ways to do this. Here's a fun one which doesn't use awk, sed or perl. The only external utilities I've used here are cut, sort and uniq. You could even replace cut with a little more effort. In fact lines 5-9 could have been written more easily with grep (grep $kind stock.txt), but I avoided that to show off the power of bash.
for kind in $(cut -d\; -f 2 stock.txt | sort | uniq) ; do
    total=0
    while read d ; do
        total=$(( total+d ))
    done < <(
        while read line ; do
            [[ $line =~ $kind ]] && echo $line
        done < stock.txt | cut -d\; -f3
    )
    echo "Total amount for $kind: $total"
done
We lose the strict ordering of your original output here. An exercise for you might be to find a way not to do that.
Discussion:
The first line describes a sub-shell with a simple pipeline using cut. We read the second field from the stock.txt file, with fields delimited by ;, written \; here so the shell does not interpret it. The result is a newline-separated list of values from stock.txt. This is piped to sort, then uniq. This performs our "grouping" step, since the pipeline will output an alphabetic list of items from the second column but will only list each item once no matter how many times it appeared in the input file.
Also on the first line is a typical for loop: For each item resulting from the sub-shell we loop once, storing the value of the item in the variable kind. This is the other half of the grouping step, making sure that each "Total" output line occurs once.
On the second line total is initialized to zero so that it always resets whenever a new group is started.
The third line begins the 'totaling' loop, in which for the current kind we find the sum of its occurrences. Here we declare that we will read the variable d from stdin on each iteration of the loop.
On the fourth line the totaling actually occurs: using shell arithmetic we add the value in d to the value in total.
Line five ends the while loop and then describes its input. We use shell input redirection via < to specify that the input to the loop, and thus to the read command, comes from a file. We then use process substitution to specify that the file will actually be the results of a command.
On the sixth line the command that will feed the while-read loop begins. It is itself another while-read loop, this time reading into the variable line. On the seventh line the test is performed via a conditional construct. Here we use [[ for its =~ operator, which is a pattern matching operator. We are testing to see whether $line matches our current $kind.
On the eighth line we end the inner while-read loop and specify that its input comes from the stock.txt file, then we pipe the output of the entire loop, which by now is simply all lines matching $kind, to cut and instruct it to show only the third field, which is the numeric field. On line nine we then end the process substitution command, the output of which is a newline-delineated list of numbers from lines which were of the group specified by kind.
Given that the total is now known and the kind is known it is a simple matter to print the results to the screen.
The answer below is the OP's. As it was edited into the question itself and the OP hasn't come back for 6 years, I am editing the answer out of the question and posting it as a wiki here.
My answer: to get the total price, I use this:
...
PRICE=0
IFS=";"    # new field separator
while read name cate price
do
    let PRICE=PRICE+$price
done < stock.txt
echo $PRICE
When I echo it, it's 35, which is correct. Now I will move on to using awk to get the sub-category result.
Whole Solution:
Thanks guys, I managed to do it myself. Here is my code:
#!/bin/bash

INPUT=stock.txt
PRICE=0
DRINKS=0
SNACKS=0
FRUITS=0

old_IFS=$IFS    # save the field separator
IFS=";"         # new field separator

while read name cate price
do
    if [ "$cate" = "drinks" ]; then
        let DRINKS=DRINKS+$price
    fi
    if [ "$cate" = "snacks" ]; then
        let SNACKS=SNACKS+$price
    fi
    if [ "$cate" = "fruits" ]; then
        let FRUITS=FRUITS+$price
    fi
    # Total
    let PRICE=PRICE+$price
done < "$INPUT"

echo -e "Drinks: " $DRINKS
echo -e "Snacks: " $SNACKS
echo -e "Fruits: " $FRUITS
echo -e "Price: " $PRICE

IFS=$old_IFS    # restore the field separator