I'm currently working on a maths project and have just run into a bit of a brick wall with programming in bash.
Currently I have a directory containing 800 text files, and what I want to do is run a loop to cat the first 80 files (_01 through to _80) into a new file and save it elsewhere, then the next 80 files (_81 to _160), etc.
All the files in the directory are named like so: ath_01, ath_02, ath_03, etc.
Can anyone help?
So far I have:
#!/bin/bash
for file in /dir/*
do
echo ${file}
done
which just simply lists my files. I know I need to use cat file1 file2 > newfile.txt somehow, but the numbered suffix of _01, _02, etc. is confusing me.
Would it help if I changed the file names to use something other than an underscore, like ath.01, etc.?
Cheers,
Since you know ahead of time how many files you have and how they are numbered, it may be easier to "unroll the loop", so to speak, and use copy-and-paste and a little hand-tweaking to write a script that uses brace expansion.
#!/bin/bash
cat ath_{001..080} > file1.txt
cat ath_{081..160} > file2.txt
cat ath_{161..240} > file3.txt
cat ath_{241..320} > file4.txt
cat ath_{321..400} > file5.txt
cat ath_{401..480} > file6.txt
cat ath_{481..560} > file7.txt
cat ath_{561..640} > file8.txt
cat ath_{641..720} > file9.txt
cat ath_{721..800} > file10.txt
Or else, use nested for-loops and the seq command
N=800
B=80
for n in $( seq 1 $B $N ); do
for i in $( seq $n $((n+B - 1)) ); do
cat ath_$i
done > file$((n/B + 1)).txt
done
The outer loop will iterate n through 1, 81, 161, etc. The inner loop will iterate i over 1 through 80, then 81 through 160, etc. The body of the inner loop just dumps the contents of the ith file to standard output, but the aggregated output of the loop is stored in file 1, then 2, etc.
You could try something like this:
cat "$file" >> "concat_$(( ${file#/dir/ath_} / 80 ))"
with ${file#/dir/ath_} you remove the prefix /dir/ath_ from the filename
with $(( ${file#/dir/ath_} / 80 )) you get that suffix divided by 80 (integer division)
Also change the loop to
for file in /dir/ath_*
So you only get the files you need
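Putting those pieces together, a minimal sketch of the whole loop might look like this (a variation on the snippet above: the 10# prefix is added so suffixes like 08 or 09 are not read as invalid octal numbers, the -1 shifts the grouping so files 1-80 land in group 1, and /elsewhere is a placeholder for wherever you want to save the results):
#!/bin/bash
for file in /dir/ath_*; do
    n=${file#/dir/ath_}                     # numeric suffix, e.g. 01, 080, 800
    group=$(( (10#$n - 1) / 80 + 1 ))       # 1-80 -> 1, 81-160 -> 2, ...
    cat "$file" >> "/elsewhere/newfile$group.txt"
done
Within each group the files are appended in the glob's lexical order, so keep the suffixes zero-padded to the same width if the concatenation order matters.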
If you want groups of 80 files, you'd do best to ensure that the names are sortable; that's why leading zeroes were often used. Assuming that you only have one underscore in the file names, and no newlines in the names, then:
SOURCE="/path/to/dir"
TARGET="/path/to/other/directory"
(
cd $SOURCE || exit 1
ls |
sort -t _ -k2,2n |
awk -v target="$TARGET" \
    '{  file[n++] = $1
        if (n >= 80)
        {
            printf "cat"
            for (i = 0; i < 80; i++)
                printf(" %s", file[i])
            printf(" >%s/%s.%.2d\n", target, "newfile", ++number)
            n = 0
        }
    }
    END {
        if (n > 0)
        {
            printf "cat"
            for (i = 0; i < n; i++)
                printf(" %s", file[i])
            printf(" >%s/%s.%.2d\n", target, "newfile", ++number)
        }
    }' |
sh -x
)
The two directories are specified (where the files are and where the summaries should go); the command changes directory to the source directory (where the 800 files are). It lists the names (you could specify a glob pattern if you needed to) and sorts them numerically. The output is fed into awk which generates a shell script on the fly. It collects 80 names at a time, then generates a cat command that will copy those files to a single destination file such as "newfile.01"; tweak the printf() command to suit your own naming/numbering conventions. The shell commands are then passed to a shell for execution.
While testing, replace the sh -x with nothing, or sh -vn or something similar. Only add an active shell when you're sure it will do what you want. Remember, the shell script is in the source directory as it is running.
Superficially, the xargs command would be nice to use; the difficulty is coordinating the output file number. There might be a way to do that with the -n 80 option to group 80 files at a time and some fancy way to generate the invocation number, but I'm not aware of it.
Another option is to use xargs -n to execute a shell script that can deduce the correct output file number by listing what's already in the target directory. This would be cleaner in many ways:
SOURCE="/path/to/dir"
TARGET="/path/to/other/directory"
(
cd $SOURCE || exit 1
ls |
sort -t _ -k2,2n |
xargs -n 80 cpfiles "$TARGET"
)
Where cpfiles looks like:
TARGET="$1"
shift
if [ $# -gt 0 ]
then
old=$(ls -r "$TARGET"/newfile.?? 2>/dev/null | sed -n -e 's/.*newfile\.//p' -e '1q')
new=$(printf "%.2d" $(( 10#${old:-0} + 1 )))
cat "$@" > "$TARGET/newfile.$new"
fi
The test for zero arguments avoids trouble with xargs executing the command once with zero arguments. On the whole, I prefer this solution to the one using awk.
Here's a macro for @chepner's first solution, using GNU Make as the templating language:
SHELL := /bin/bash
N = 800
B = 80
fileNums = $(shell seq 1 $$((${N}/${B})) )
files = ${fileNums:%=file%.txt}
all: ${files}
file%.txt : start = $(shell echo $$(( ($*-1)*${B}+1 )) )
file%.txt : end = $(shell echo $$(( $* * ${B} )) )
file%.txt:
cat ath_{${start}..${end}} > $@
To use:
$ make -n all
cat ath_{1..80} > file1.txt
cat ath_{81..160} > file2.txt
cat ath_{161..240} > file3.txt
cat ath_{241..320} > file4.txt
cat ath_{321..400} > file5.txt
cat ath_{401..480} > file6.txt
cat ath_{481..560} > file7.txt
cat ath_{561..640} > file8.txt
cat ath_{641..720} > file9.txt
cat ath_{721..800} > file10.txt
Related
I have implemented an rsync-based system to move files from different environments to others.
The problem I'm facing now is that sometimes there are files with the same name but different paths and content.
I want to make rsync rename duplicated files (if possible), because I need to use the --no-relative option.
Duplicated files can occur in two ways:
There was a file with same name in dest directory already.
In the same rsync execution, we are transferring file with same name in a different location. Ex: dir1/file.txt and dir2/file.txt
Adding the -b --suffix options lets me handle at least one repetition for the first type of duplicate mentioned above.
A minimum example (for Linux based systems):
mkdir sourceDir1 sourceDir2 sourceDir3 destDir;
echo "1" >> sourceDir1/file.txt;
echo "2" >> sourceDir2/file.txt;
echo "3" >> sourceDir3/file.txt;
rsync --no-relative sourceDir1/file.txt destDir
rsync --no-relative -b --suffix="_old" sourceDir2/file.txt sourceDir3/file.txt destDir
Is there any way to achieve my requirements?
I don't think that you can do it directly with rsync.
Here's a work-around in bash that does some preparation work with find and GNU awk and then calls rsync afterwards.
The idea is to categorize the input files by "copy number" (for example sourceDir1/file.txt would be the copy #1 of file.txt, sourceDir2/file.txt the copy #2 and sourceDir3/file.txt the copy #3) and generate a file per "copy number" containing the list of all the files in that category.
Then, you just have to launch an rsync with --from-file and a customized --suffix per category.
Pros
fast: incomparable to firing one rsync per file.
safe: it won't ever overwrite a file (see the step #3 below).
robust: handles any filename, even with newlines in it.
Cons
the destination directory has to be empty (or else it might overwrite a few files).
the code is a little long (and I made it longer by using a few process substitutions and by splitting the awk call into two).
Here are the steps:
0) Use a correct shebang for bash in your system.
#!/usr/bin/env bash
1) Create a directory for storing the temporary files.
tmpdir=$( mktemp -d ) || exit 1
2) Categorize the input files by "duplicate number", generate the files for rsync --from-file (one per dup category), and get the total number of categories.
read filesCount < <(
find sourceDir* -type f -print0 |
LANG=C gawk -F '/' '
BEGIN {
RS = ORS = "\0"
tmpdir = ARGV[2]
delete ARGV[2]
}
{
id = ++seen[$NF]
if ( ! (id in outFiles) ) {
outFilesCount++
outFiles[id] = tmpdir "/" id
}
print $0 > outFiles[id]
}
END {
printf "%d\n", outFilesCount
}
' - "$tmpdir"
)
3) Find a unique suffix for rsync --suffix, generated from a given set of characters; it will be appended to the _old<number>_ part in step 4.
note: You can skip this step if you know for sure that there's no existing filename that ends with _old+number.
(( filesCount > 0 )) && IFS='' read -r -d '' suffix < <(
LANG=C gawk -F '/' '
BEGIN {
RS = ORS = "\0"
charsCount = split( ARGV[2], chars)
delete ARGV[2]
for ( i = 1; i <= 255; i++ )
ord[ sprintf( "%c", i ) ] = i
}
{
l0 = length($NF)
l1 = length(suffix)
if ( substr( $NF, l0 - l1, l1) == suffix ) {
n = ord[ substr( $NF, l0 - l1 - 1, 1 ) ]
suffix = chars[ (n + 1) % charsCount ] suffix
}
}
END {
print suffix
}
' "$tmpdir/1" '0/1/2/3/4/5/6/7/8/9/a/b/c/d/e/f'
)
4) Run the rsync(s).
for (( i = filesCount; i > 0; i-- ))
do
fromFile=$tmpdir/$i
rsync --no-R -b --suffix="_old${i}_$suffix" -0 --files-from="$fromFile" ./ destDir/
done
5) Clean-up the temporary directory.
rm -rf "$tmpdir"
Guess it's not possible with only rsync. You have to make a list of files first and analyze it to work around dupes. Take a look at this command:
$ rsync --no-implied-dirs --relative --dry-run --verbose sourceDir*/* dst/
sourceDir1/file.txt
sourceDir2/file.txt
sourceDir3/file.txt
sent 167 bytes received 21 bytes 376.00 bytes/sec
total size is 6 speedup is 0.03 (DRY RUN)
Let's use it to create a list of source files:
mapfile -t list < <(rsync --no-implied-dirs --relative --dry-run --verbose sourceDir*/* dst/)
Now we can loop through this list with something like this:
declare -A count
for item in "${list[#]}"; {
[[ $item =~ ^sent.*bytes/sec$ ]] && break
[[ $item ]] || break
fname=$(basename $item)
echo "$item dst/$fname${count[$fname]}"
((count[$fname]++))
}
sourceDir1/file.txt dst/file.txt
sourceDir2/file.txt dst/file.txt1
sourceDir3/file.txt dst/file.txt2
Change echo to rsync and that is it.
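In loop form, with the echo swapped for a real transfer, a sketch could be (note this fires one rsync per file, so it is slower than a single batched run; dst/ and the variable names come from the example above):
declare -A count
for item in "${list[@]}"; do
    [[ $item =~ ^sent.*bytes/sec$ ]] && break    # stop at rsync's summary line
    [[ $item ]] || break                         # stop at the blank line before it
    fname=$(basename "$item")
    rsync --no-relative "$item" "dst/$fname${count[$fname]}"
    ((count[$fname]++))
done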
I am stuck on this. I have a while-read loop within my code that takes very long to run, and I would like to run it on many processors. I'd like to split the input file and run 14 loops (because I have 14 threads), one for each split file, in parallel. The thing is that I don't know how to tell each while loop which file to get and work with.
For example, in a regular while-read loop I would code:
while read line
do
<some code>
done < input file or variable...
But in this case I would like to split the above input file into 14 files and run 14 while loops in parallel, one for each split file.
I tried :
split -n 14 input_file
find . -name "xa*" | \
parallel -j 14 | \
while read line
do
<lot of stuff>
done
also tried
split -n 14 input_file
function loop {
while read line
do
<lot of stuff>
done
}
export -f loop
parallel -j 14 ::: loop
But with neither was I able to tell which file would be the input to the loop, so that parallel would understand "take each of those xa* files and place each one into an individual loop, in parallel".
An example of the input file (a list of strings)
AEYS01000010.10484.12283
CVJT01000011.50.2173
KF625180.1.1799
KT949922.1.1791
LOBZ01000025.54942.57580
EDIT
This is the code.
The output is a table (741100 lines) with some statistics regarding DNA sequence alignments already made.
The loop takes an input_file (no broken lines, varying from 500 to ~45000 lines, 800 Kb) with DNA sequence accessions, reads it line by line, and looks up the corresponding full taxonomy for each accession in a databank (~45000 lines). Then it does a few sums/divisions. The output is a .tsv and looks like this (an example for sequence "KF625180.1.1799"):
Rate of taxonomies for this sequence in %: KF625180.1.1799 D_6__Bacillus_atrophaeus
Taxonomy %aligned number_ocurrences_in_the_alignment num_ocurrences_in_databank %alingment/databank
D_6__Bacillus_atrophaeus 50% 1 20 5%
D_6__Bacillus_amyloliquefaciens 50% 1 154 0.649351%
$ head input_file
AEYS01000010.10484.12283
CVJT01000011.50.217
KF625180.1.1799
KT949922.1.1791
LOBZ01000025.54942.57580
Two additional files are also used inside the loop. They are not the loop input.
1) a file called alnout_file that only serves for finding how many hits (or alignments) a given sequence had against the databank. It was also previously made outside this loop. It can vary in the number of lines from hundreds to thousands. Only columns 1 and 2 matter here. Column 1 is the name of the sequence and column 2 is the name of each sequence it matched in the databank. It looks like this:
$ head alnout_file
KF625180.1.1799 KF625180.1.1799 100.0 431 0 0 1 431 1 431 -1 0
KF625180.1.1799 KP143082.1.1457 99.3 431 1 2 1 431 1 429 -1 0
KP143082.1.1457 KF625180.1.1799 99.3 431 1 2 1 429 1 431 -1 0
2) a databank .tsv file containing ~45000 taxonomies corresponding to the DNA sequences. Each taxonomy is on one line:
$ head taxonomy.file.tsv
KP143082.1.1457 D_0__Bacteria;D_1__Firmicutes;D_2__Bacilli;D_3__Bacillales;D_4__Bacillaceae;D_5__Bacillus;D_6__Bacillus_amyloliquefaciens
KF625180.1.1799 D_0__Bacteria;D_1__Firmicutes;D_2__Bacilli;D_3__Bacillales;D_4__Bacillaceae;D_5__Bacillus;D_6__Bacillus_atrophaeus
So, given sequence KF625180.1.1799: I previously aligned it against a databank containing ~45000 other DNA sequences and got an output which has all the accessions of the sequences that it matched. What the loop does is find the taxonomies for all those sequences and calculate the "statistics" I mentioned previously. The code does this for all the DNA sequence accessions I have.
TAXONOMY=path/taxonomy.file.tsv
while read line
do
#find hits
hits=$(grep $line alnout_file | cut -f 2)
completename=$(grep $line $TAXONOMY | sed 's/D_0.*D_4/D_4/g')
printf "\nRate of taxonomies for this sequence in %%:\t$completename\n"
printf "Taxonomy\t%aligned\tnumber_ocurrences_in_the_alignment\tnum_ocurrences_in_databank\t%alingment/databank\n"
#find hits and calculate the frequence (%) of the taxonomy in the alignment output
# ex.: Bacillus_subtilis 33
freqHits=$(grep "${hits[#]}" $TAXONOMY | \
cut -f 2 | \
awk '{a[$0]++} END {for (i in a) {print i, "\t", a[i]/NR*100, "\t", a[i]}}' | \
sed -e 's/D_0.*D_5/D_5/g' -e 's#\s\t\s#\t#g' | \
sort -k2 -hr)
# print frequence of each taxonomy in the databank
freqBank=$(while read line; do grep -c "$line" $TAXONOMY; done < <(echo "$freqHits" | cut -f 1))
#print cols with taxonomy and calculations
paste <(printf %s "$freqHits") <(printf %s "$freqBank") | awk '{print $1,"\t",$2"%","\t",$3,"\t",$4,"\t",$3/$4*100"%"}'
done < input_file
It is a lot of greps and parsing, so it takes about ~12 h running on one processor to do it for all 45000 DNA sequence accessions. So I would like to split input_file and do it on all the processors I have (14), because that would reduce the time spent on it.
Thank you all for being so patient with me =)
You are looking for --pipe. In this case you can even use the optimized --pipepart (version >20160621):
export TAXONOMY=path/taxonomy.file.tsv
doit() {
while read line
do
#find hits
hits=$(grep $line alnout_file | cut -f 2)
completename=$(grep $line $TAXONOMY | sed 's/D_0.*D_4/D_4/g')
printf "\nRate of taxonomies for this sequence in %%:\t$completename\n"
printf "Taxonomy\t%aligned\tnumber_ocurrences_in_the_alignment\tnum_ocurrences_in_databank\t%alingment/databank\n"
#find hits and calculate the frequence (%) of the taxonomy in the alignment output
# ex.: Bacillus_subtilis 33
freqHits=$(grep "${hits[#]}" $TAXONOMY | \
cut -f 2 | \
awk '{a[$0]++} END {for (i in a) {print i, "\t", a[i]/NR*100, "\t", a[i]}}' | \
sed -e 's/D_0.*D_5/D_5/g' -e 's#\s\t\s#\t#g' | \
sort -k2 -hr)
# print frequence of each taxonomy in the databank
freqBank=$(while read line; do grep -c "$line" $TAXONOMY; done < <(echo "$freqHits" | cut -f 1))
#print cols with taxonomy and calculations
paste <(printf %s "$freqHits") <(printf %s "$freqBank") | awk '{print $1,"\t",$2"%","\t",$3,"\t",$4,"\t",$3/$4*100"%"}'
done
}
export -f doit
parallel -a input_file --pipepart doit
This will chop input_file into 10*ncpu blocks (where ncpu is the number of CPU threads), pass each block to doit, run ncpu jobs in parallel.
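If you want to pin it to the 14 threads you mentioned, -j caps the number of parallel jobs; this is just a usage variant, not something the answer above requires:
parallel -j 14 -a input_file --pipepart doit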
That said, I think your real problem is spawning too many programs: if you rewrite doit in Perl or Python, I expect you will see a major speedup.
As an alternative I threw together a quick test.
#!/usr/bin/env bash
mkfifo PIPELINE # create a single queue
cat "$1" > PIPELINE & # supply it with records
{ declare -i cnt=0 max=14
while (( ++cnt <= max )) # spawn loop creates worker jobs
do printf -v fn "%02d" $cnt
while read -r line # each work loop reads common stdin...
do echo "$fn:[$line]"
sleep 1
done >$fn.log 2>&1 & # these run in background in parallel
done # this one exits
} < PIPELINE # *all* read from the same queue
wait
cat [0-9][0-9].log
Doesn't need split, but does need a mkfifo.
Obviously, change the code inside the internal loop.
This answers what you asked, namely how to process in parallel the 14 files you get from running split. However, I don't think it is the best way of doing whatever it is that you are trying to do - but we would need some answers from you for that.
So, let's make a million line file and split it into 14 parts:
seq 1000000 > 1M
split -n 14 1M part-
That gives me 14 files called part-aa through part-an. Now your question is how to process those 14 parts in parallel - (read the last line first):
#!/bin/bash
# This function will be called for each of the 14 files
DoOne(){
# Pick up parameters
job=$1
file=$2
# Count lines in specified file
lines=$(wc -l < "$file")
echo "Job No: $job, file: $file, lines: $lines"
}
# Make the function above known to processes spawned by GNU Parallel
export -f DoOne
# Run 14 parallel instances of "DoOne" passing job number and filename to each
parallel -k -j 14 DoOne {#} {} ::: part-??
Sample Output
Job No: 1, file: part-aa, lines: 83861
Job No: 2, file: part-ab, lines: 72600
Job No: 3, file: part-ac, lines: 70295
Job No: 4, file: part-ad, lines: 70295
Job No: 5, file: part-ae, lines: 70294
Job No: 6, file: part-af, lines: 70295
Job No: 7, file: part-ag, lines: 70295
Job No: 8, file: part-ah, lines: 70294
Job No: 9, file: part-ai, lines: 70295
Job No: 10, file: part-aj, lines: 70295
Job No: 11, file: part-ak, lines: 70295
Job No: 12, file: part-al, lines: 70294
Job No: 13, file: part-am, lines: 70295
Job No: 14, file: part-an, lines: 70297
You would omit the -k argument to GNU Parallel normally - I only added it so the output comes in order.
I think that using a bunch of grep and awk commands is the wrong approach here - you would be miles better off using Perl, or awk. As you have not provided any sample files I generated some using this code:
#!/bin/bash
for a in {A..Z} {0..9} ; do
for b in {A..Z} {0..9} ; do
for c in {A..Z} {0..9} ; do
echo "${a}${b}${c}"
done
done
done > a
# Now make file "b" which has the same stuff but shuffled into a different order
gshuf < a > b
Note that there are 26 letters in the alphabet, so if I add the digits 0..9 to the letters of the alphabet, I get 36 alphanumeric characters, and if I nest 3 loops of that I get 36^3 or 46,656 lines, which roughly matches your file sizes. File a now looks like this:
AAA
AAB
AAC
AAD
AAE
AAF
File b looks like this:
UKM
L50
AOC
79U
K6S
6PO
12I
XEV
WJN
Now I want to loop through a finding the corresponding line in b. First, I use your approach:
time while read thing ; do grep $thing b > /dev/null ; done < a
That takes 9 mins 35 seconds.
If I now exit grep on the first match, on average I will find it in the middle, which means the time will be halved since I won't continue to needlessly read b after I find what I want.
time while read thing ; do grep -m1 $thing b > /dev/null ; done < a
That improves the time down to 4 mins 30 seconds.
If I now use awk to read the contents of b into an associative array (a.k.a. hash) and then read the elements of a and find them in b like this:
time awk 'FNR==NR{a[$1]=$1; next} {print a[$1]}' b a > /dev/null
That now runs in 0.07 seconds. Hopefully you get the idea of what I am driving at. I expect Perl would do this in the same time and also provide more expressive facilities for the maths in the middle of your loop too.
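To make that concrete for your data, a minimal sketch of the hash-lookup idea (assuming taxonomy.file.tsv is tab-separated with accession then taxonomy, and input_file holds one accession per line, as in your samples) would load the taxonomy once and resolve every accession in a single pass:
awk -F '\t' 'FNR==NR { tax[$1] = $2; next }
             { print $1 "\t" ($1 in tax ? tax[$1] : "NOT_FOUND") }' taxonomy.file.tsv input_file
The per-sequence percentages would still need to be added, but the point is that each lookup becomes an array access instead of a grep over 45000 lines.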
I hope this small script helps you out:
function process {
while read line; do
echo "$line"
done < $1
}
function loop {
file=$1
chunks=$2
dir=$(mktemp -d)
cd "$dir"
split -n l/"$chunks" "$file"
for i in *; do
process "$i" &
done
wait            # let the background jobs finish before removing the chunks
rm -rf "$dir"
}
loop /tmp/foo 14
It runs the process loop on the specified file with the specified number of chunks (without splitting lines) in parallel (using & to put each invocation in the background). I hope it gets you started.
This can do the job for you. I am not familiar with parallel; instead I use native bash, spawning background processes with &:
function loop () {
while IFS= read -r -d $'\n'
do
# YOUR BIG STUFF
done < "${1}"
}
arr_files=(./xa*)
for i in "${arr_files[#]}"
do loop "${i}" &
done
wait
I have a reference file with device names in it, for example WABEL8499IPM101. I'm using this script to take the base name (without the last 3 digits), look at the reference file, and see what is already used. If 101 is used it will create names with 102 and 103 for me if I request 2 in total. I'm looking to use an input file to run it multiple times. I'm also trying to figure out how to start at 101 if no name is found when searching the reference file.
I would like to loop this using an input file instead of manually entering bash test.sh WABEL8499IPM 2 each time. I would like to be able to build an input file of all the names that need to be compared and then output the results. It would also be nice if, when there isn't a match, it started creating names at WABEL8499IPM101 instead of just WABEL8499IPM1.
Input file example:
ColumnA (BASE NAME) ColumnB (QUANTITY)
WABEL8499IPM 2
Script:
SRCFILE="~/Desktop/deviceinfo.csv"
LOGDIR="~/Desktop/"
LOGFILE="$LOGDIR/DeviceNames.csv"
# base name, such as "WABEL8499IPM"
device_name=$1
# quantity, such as "2"
quantityNum=$2
# the largest in sequence, such as "WABEL8499IPM108"
max_sequence_name=$(cat $SRCFILE | grep -o -e "$device_name[0-9]*" | sort --reverse | head -n 1)
# extract the last 3digit number (such as "108") from max_sequence_name
max_sequence_num=$(echo $max_sequence_name | rev | cut -c 1-3 | rev)
# create new sequence_name
# such as ["WABEL8499IPM109", "WABEL8499IPM110"]
array_new_sequence_name=()
for i in $(seq 1 $quantityNum);
do
cnum=$((max_sequence_num + i))
array_new_sequence_name+=($(echo $device_name$cnum))
done
#CODE FOR CREATING OUTPUT FILE HERE
#for fn in ${array_new_sequence_name[@]}; do touch $fn; done;
# write log
for sqn in ${array_new_sequence_name[@]};
do
echo $sqn >> $LOGFILE
done
Usage:
bash test.sh WABEL8499IPM 2
Result in the log file:
WABEL8499IPM109
WABEL8499IPM110
Just wrap a loop around your code instead of assuming the args come in on the command line.
SRCFILE="~/Desktop/deviceinfo.csv"
LOGDIR="~/Desktop/"
LOGFILE="$LOGDIR/DeviceNames.csv"
while read device_name quantityNum
do max_sequence_name=$( grep -o -e "$device_name[0-9]*" $SRCFILE |
sort --reverse | head -n 1)
max_sequence_num=${max_sequence_name: -3}
array_new_sequence_name=()
for i in $(seq 1 $quantityNum)
do cnum=$((max_sequence_num + i))
array_new_sequence_name+=("$device_name$cnum")
done
for sqn in ${array_new_sequence_name[@]};
do echo $sqn >> $LOGFILE
done
done < input.file
I'd maybe pass the input file as the parameter now.
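For example, something along these lines; this is only a sketch: the ${1:?...} makes the input file a required argument, the :-100 default is a guess at the "start at 101 when nothing matches" requirement, and $HOME replaces the quoted ~ (which would not expand):
SRCFILE="$HOME/Desktop/deviceinfo.csv"
LOGFILE="$HOME/Desktop/DeviceNames.csv"
INPUTFILE="${1:?usage: $0 input.file}"
while read -r device_name quantityNum
do  max_sequence_name=$( grep -o -e "$device_name[0-9]*" "$SRCFILE" |
        sort --reverse | head -n 1 )
    max_sequence_num=${max_sequence_name: -3}
    max_sequence_num=${max_sequence_num:-100}      # no match -> first new name ends in 101
    for i in $(seq 1 "$quantityNum")
    do  echo "$device_name$(( 10#$max_sequence_num + i ))" >> "$LOGFILE"
    done
done < "$INPUTFILE"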
I have a set of data files across a number of directories with format
ls lcp01/output/
> dst000.dat dst001.dat ... dst075.dat nn000.dat nn001.dat ... nn036.dat aa000.dat aa001.dat ... aa040.dat
That is to say, there are a set of directories lcp01 through lcp25 with a collection of different data files in their output folders. I want to know what the highest number dstXXX.dat file is in each directory (in the example shown the result would be 75).
I wrote a script which achieves this, but I'm not satisfied with the final step which feels a bit hacky:
#!/bin/bash
for i in `seq -f "%02g" 1 25`; #specify dir extensions 1 through 25
do
echo " "
echo $i
names=($(ls lcp$i/output | grep dst )) #dir containing dst files
NUMS=()
for j in "${names[#]}";
do
temp="$(echo $j | tr -dc '0-9' && printf " ")" # record suffixes for each dst file
NUMS+=("$((10#$temp))") #force base 10 interpretation of dst suffixes
done
numList="$(echo "${NUMS[*]}" | sort -nr | head -n1)"
echo ${numList:(-3)} #print out the last 3 characters of the sorted list - the largest file suffix
done
The final two steps organise the list of output indices, then I show the last 3 characters of that list which will be my largest file number (providing the file numbers are smaller than 100).
Is there a cleaner way of doing this? Ideally I would like more control over the output format, but mainly it's the step of reading the last 3 characters out. I would like to be able to just output the largest number, which should be the last element of the list but I cannot figure out how.
You could do something like the following:
for d in lcp[0-9][0-9]; do find $d -name 'dst*.dat' -print | sort -u | tail -n1; done
Above command will only work if the numbering has the same number of digits (dst001..999.dat), as it is sorted as a string; if that's not the case:
for d in lcp[0-9][0-9]; do echo -n $d: ; find $d -name 'dst*.dat' -print | grep -o '[0-9]*\.dat' | sort -n | tail -n1; done
using filename expansions
for d in lcp*/output; do
files=( $d/dst*.dat )
file=${files[-1]}
[[ -e $file ]] || continue
file=${file#*dst}
echo ${file%.dat}
done
or with the extglob option to restrict the pattern to numbers
shopt -s extglob
... lcp*([0-9])/output
... $d/dst*([0-9]).dat
...
file=${file##*dst*(0)}
...
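Put together, a sketch of the extglob variant of the loop might read as follows (nullglob avoids a literal pattern when a directory has no dst files, files[-1] needs a reasonably recent bash, and it still relies on the names being zero-padded to the same width so that the lexically last file is the numerically largest; a file named dst000.dat would print an empty number):
shopt -s extglob nullglob
for d in lcp*([0-9])/output; do
    files=( "$d"/dst*([0-9]).dat )
    (( ${#files[@]} )) || continue
    file=${files[-1]}
    file=${file##*dst*(0)}          # drop the path, "dst" and any leading zeros
    echo "${d%/output}: ${file%.dat}"
done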
What's an easy way to read random line from a file in a shell script?
You can use shuf:
shuf -n 1 $FILE
There is also a utility called rl. In Debian it's in the randomize-lines package that does exactly what you want, though not available in all distros. On its home page it actually recommends the use of shuf instead (which didn't exist when it was created, I believe). shuf is part of the GNU coreutils, rl is not.
rl -c 1 $FILE
Another alternative:
head -$((${RANDOM} % `wc -l < file` + 1)) file | tail -1
sort --random-sort $FILE | head -n 1
(I like the shuf approach above even better though - I didn't even know that existed and I would have never found that tool on my own)
This is simple.
cat file.txt | shuf -n 1
Granted this is just a tad slower than the "shuf -n 1 file.txt" on its own.
perlfaq5: How do I select a random line from a file? Here's a reservoir-sampling algorithm from the Camel Book:
perl -e 'srand; rand($.) < 1 && ($line = $_) while <>; print $line;' file
This has a significant advantage in space over reading the whole file in. You can find a proof of this method in The Art of Computer Programming, Volume 2, Section 3.4.2, by Donald E. Knuth.
using a bash script:
#!/bin/bash
# replace with file to read
FILE=tmp.txt
# count number of lines
NUM=$(wc -l < ${FILE})
# generate random number in range 1-NUM
let "X = RANDOM % NUM + 1"
# extract X-th line
sed -n ${X}p ${FILE}
Single bash line:
sed -n $((1+$RANDOM%`wc -l test.txt | cut -f 1 -d ' '`))p test.txt
Slight problem: duplicate filename.
Here's a simple Python script that will do the job:
import random, sys
lines = open(sys.argv[1]).readlines()
print(lines[random.randrange(len(lines))])
Usage:
python randline.py file_to_get_random_line_from
Another way using 'awk'
awk NR==$((${RANDOM} % `wc -l < file.name` + 1)) file.name
A solution that also works on macOS, and should also work on Linux(?):
N=5
awk 'NR==FNR {lineN[$1]; next}(FNR in lineN)' <(jot -r $N 1 $(wc -l < $file)) $file
Where:
N is the number of random lines you want
NR==FNR {lineN[$1]; next}(FNR in lineN) file1 file2
--> save line numbers written in file1 and then print corresponding line in file2
jot -r $N 1 $(wc -l < $file) --> draw N numbers randomly (-r) in range (1, number_of_line_in_file) with jot. The process substitution <() will make it look like a file for the interpreter, so file1 in previous example.
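On a GNU/Linux box without jot, shuf can supply the random line numbers instead (same awk part; shuf -i draws from a numeric range and -n limits the count, so the N numbers are distinct rather than possibly repeated):
N=5
awk 'NR==FNR {lineN[$1]; next} (FNR in lineN)' <(shuf -i 1-"$(wc -l < "$file")" -n "$N") "$file"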
#!/bin/bash
IFS=$'\n' wordsArray=($(<$1))
numWords=${#wordsArray[@]}
sizeOfNumWords=${#numWords}
while [ True ]
do
for ((i=0; i<$sizeOfNumWords; i++))
do
let ranNumArray[$i]=$(( ( $RANDOM % 10 ) + 1 ))-1
ranNumStr="$ranNumStr${ranNumArray[$i]}"
done
if [ $ranNumStr -le $numWords ]
then
break
fi
ranNumStr=""
done
noLeadZeroStr=$((10#$ranNumStr))
echo ${wordsArray[$noLeadZeroStr]}
Here is what I discovered, since my macOS doesn't have all the easy answers available. I used the jot command to generate a number, since the $RANDOM variable solutions did not seem very random in my testing. When I tested my solution, I got a wide variance in the lines returned.
RANDOM1=`jot -r 1 1 235886`
#range of jot ( 1 235886 ) found from earlier wc -w /usr/share/dict/web2
echo $RANDOM1
head -n $RANDOM1 /usr/share/dict/web2 | tail -n 1
The echo of the variable is to get a visual of the generated random number.
Using only vanilla sed and awk, and without using $RANDOM, a simple, space-efficient and reasonably fast "one-liner" for selecting a single line pseudo-randomly from a file named FILENAME is as follows:
sed -n $(awk 'END {srand(); r=rand()*NR; if (r<NR) {sub(/\..*/,"",r); r++;}; print r}' FILENAME)p FILENAME
(This works even if FILENAME is empty, in which case no line is emitted.)
One possible advantage of this approach is that it only calls rand() once.
As pointed out by @AdamKatz in the comments, another possibility would be to call rand() for each line:
awk 'rand() * NR < 1 { line = $0 } END { print line }' FILENAME
(A simple proof of correctness can be given by induction: line i replaces the previously kept line with probability 1/i, so after processing N lines each line remains selected with probability 1/N.)
Caveat about rand()
"In most awk implementations, including gawk, rand() starts generating numbers from the same starting number, or seed, each time you run awk."
-- https://www.gnu.org/software/gawk/manual/html_node/Numeric-Functions.html