I'm trying to replace a number in a file with a calculated floating-point variable in a bash script. So I'm trying to replace 1.1111 with the value of "km" and save it in the mesh.in file. I keep getting an error on the sed line; I think there may be an issue with the floating-point variable. echo "$km" does work, so I know that km itself is not the issue.
#!/bin/bash
read -p "Angle in degrees : " n1
read -p "bcsa : " n2
cd viv_example_se2d
sed s/^bcsa.*/"bcsa $n2"/ runfile.viv >temp
mv -f temp runfile.viv
cd ../
for i in $(seq 2 0.5 12)
do
    if [ ! -d U*_$i ]; then
        mkdir U*_$i
    fi
    printf -v "km" "%.4f\n" $(echo | bc | awk "BEGIN {print 4*3.14159265359*3.14159265359/($i*$i)}")
    echo "$km"
    cd viv_example_se2d
    sed s/1.1111/$km/g mesh_master.in > temp$i
    mv -f temp$i mesh.in
    cd ../
    echo $home/lustre/projects/p057_swin/ogoldman/Ellipse_$n1/U*_$i | xargs -n 1 cp viv_example_se2d/*
done
The problem is the newline in the value of $km. It is confusing sed.
That being said this script is also a bit of a mess.
You should quote your variables when you use them to protect against problems with whitespace and glob characters in the values.
You don't need xargs to cp multiple files that you can expand via a glob. cp will happily take multiple files to copy directly. (Oh, or is that copying multiple files to directories produced via that glob?)
You have a useless echo | bc | bit near the awk command.
Generally, using full or relative paths with sed etc. is better than cd-ing around.
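Putting those points together, a minimal sketch of the two offending lines (same file names as in the script; the -v option passes the loop variable into awk, and the printf format has no trailing \n, so $km no longer contains a newline):

printf -v km '%.4f' "$(awk -v i="$i" 'BEGIN {print 4*3.14159265359*3.14159265359/(i*i)}')"
sed "s/1.1111/$km/g" viv_example_se2d/mesh_master.in > viv_example_se2d/mesh.in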
In my bash, the whole script won't work... When I use backticks (`), my script is:
#!/bin/bash
yesterday=$(date --date "$c days ago" +%F)
while IFS= read -r line
do
    dir=$(echo $line | awk -F, '{print $1 }')
    country=$(echo $line | awk -F, '{print $2 }')
    cd path/$dir
    cat `ls -v | grep email.csv` > e.csv
done < "s.csv"
The above output is blank.
If I use "" instead, the output is No such file or directory,
but if I run just this one line in the terminal, it works:
cat `ls -v | grep email.csv` > e.csv
I also tried with /, but that didn't work either...
You should generally avoid ls in scripts.
Also, you should generally prefer the modern POSIX $(command substitution) syntax like you already do in several other places in your script; the obsolescent backtick `command substitution` syntax is clunky and somewhat more error-prone.
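For example, nesting one substitution inside another shows the difference (an illustrative snippet, not from your script):

parent=$(basename "$(dirname "$PWD")")    # $() nests cleanly
parent=`basename \`dirname "$PWD"\``      # backticks must be escaped to nest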
If this works in the current directory but fails in others, it means that you have a file matching the regex in the current directory, but not in the other directory.
Anyway, the idiomatic way to do what you appear to be attempting is simply
cat *email?csv* >e.csv
If you meant to match a literal dot, that's \. in a regular expression. The ? is a literal interpretation of what your grep actually did; but in the following, I will assume you actually meant to match *email.csv* (or in fact probably even *email.csv without a trailing wildcard).
If you want to check if there are any files, and avoid creating e.csv if not, that's slightly tricky; maybe try
for file in *email.csv*; do
    test -e "$file" || break
    cat *email.csv* >e.csv
    break
done
Alternatively, look into the nullglob feature of Bash. See also Check if a file exists with wildcard in shell script.
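For instance, a minimal sketch of the nullglob approach (Bash-specific):

shopt -s nullglob         # unmatched globs expand to nothing instead of themselves
files=(*email.csv*)
if [ "${#files[@]}" -gt 0 ]; then
    cat "${files[@]}" > e.csv
fi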
On the other hand, if you just want to check whether email.csv exists, without a wildcard, that's easy:
if [ -e email.csv ]; then
cat email.csv >e.csv
fi
In fact, that can even be abbreviated down to
test -e email.csv && cat email.csv >e.csv
As an aside, read can perfectly well split a line into tokens.
#!/bin/bash
yesterday=$(date --date "$c days ago" +%F)
while IFS=, read -r dir country _
do
    cd "path/$dir" # notice proper quoting, too
    cat *email.csv* > e.csv
    # probably don't forget to cd back
    cd ../..
done < "s.csv"
If this is in fact all your script does, probably do away with the silly and slightly error-prone cd;
while IFS=, read -r dir country _
do
    cat "path/$dir/"*email.csv* > "path/$dir/e.csv"
done < "s.csv"
See also When to wrap quotes around a shell variable.
I have a folder containing daily rainfall data in GeoTIFF format from 1981-2019, with the naming convention chirps-v2.0.yyyymmdd.1days.tif.
I would like to arrange all the files based on the MONTH information and move them into new folders, i.e. all files with month = January move to a Month01 folder.
Is there a one-liner solution for that? I am using the terminal on macOS.
This should do it:
for i in $(seq -f "%02g" 1 12); do mkdir -p "Month$i"; mv chirps-v2.0.????$i*.tif "Month$i"; done
Explanation:
For each number in the range 1, 12 (padded with 0 if necessary)...
Make the directories Month01, Month02, etc. If the directory already exists, continue.
Move all files that include the current month number in the relevant part of the filename to the appropriate folder. The question marks in chirps-v2.0.????$i*.tif represent single-character wildcards.
Note: filenames produced by glob expansion are not word-split, so spaces in the .tif filenames are not a problem here; only the variable would need quoting, e.g. chirps-v2.0.????"$i"*.tif.
I don't think there is a simple way to do this. You can, however, do a "one-liner" solution if you use pipes and for loops, things like that:
for file in $(ls *.tif); do sed -r 's/(.*\.[0-9]{4})([0-9]{2})(.*)/\1 \2 \3/' <<< "$file" | awk '{print "mkdir -p dstDir/Month" $2 "; cp", $1 $2 $3, "dstDir/Month" $2}' | bash ; done
Formatting this a bit:
for file in $(ls *.tif); do \
sed -r 's/(.*\.[0-9]{4})([0-9]{2})(.*)/\1 \2 \3/' <<< "$file" \
| awk '{print "mkdir -p dstDir/Month" $2 "; cp", $1 $2 $3, "dstDir/Month" $2}' \
| bash
done
This needs to be executed from the directory containing your files (see the ls *.tif part). You will also need to replace "dstDir" with the name of the parent directory where "Month01" should be created.
This may not be perfect, but you can edit it as required. Also, if you don't have bash, only zsh, replacing the "bash" bit with "zsh" should still work.
I have to merge files with the following naming pattern :
[SampleID]_[custom_ID01]_ID[RUN_ID]_L001_R1.fastq
[SampleID]_[custom_ID02]_ID[RUN_ID]_L002_R1.fastq
[SampleID]_[custom_ID03]_ID[RUN_ID]_L003_R1.fastq
[SampleID]_[custom_ID04]_ID[RUN_ID]_L004_R1.fastq
I need to merge all files with identical [SampleID] but different "Lanes" (L001-L004).
The following script works fine when directly run in the terminal:
custom_id="000"
RUN_ID="0025"
wd="/path/to/script/" # was missing/ incorrect
# get ALL sample identifiers
touch temp1.txt
for line in $wd/*.fastq ; do
    fastq_identifier=$(echo "$line" | cut -d"_" -f1);
    echo $fastq_identifier >> temp1.txt
done
# get all unique sample identifiers
cat temp1.txt | uniq > temp2.txt
input_var=$(cat temp2.txt)
# concatenate all fastq (different lanes) with identical identifier
for line in $input_var; do
    cat $line*fastq >> $line"_"$custom_id"_ID"$Run_ID"_L001_R1.fastq"
done
rm temp1.txt temp2.txt;
But if I create a script file (concatenate_fastq.sh) and make it executable
$ chmod +x concatenate_fastq.sh
and run it
$ ./concatenate_fastq.sh
I got the following error:
$ concatenate_fastq.sh: line 17: /*.fastq_000_ID_L001_R1.fastq: Keine Berechtigung # = Permission denied
Thanks to your hints below, I solved the problem by fixing
wd=/path/to/script/
The immediate problem seems to be that wd is unset. If your script really does contain exactly the line
wd="/path/to/script/"
then I would suspect invisible control characters in the script file (using a Windows editor is a common way to shoot yourself in the foot).
More generally, your script should cope correctly when the wildcard does not match any files. A common way to do that is to shopt -s nullglob but the subsequent script would still need adaptation then.
Refactoring the script to loop only over actual matches would help avoid trouble. Perhaps something like this:
shopt -s nullglob # bashism
printf '%s\n' "$wd"/*.fastq |
cut -d_ -f1 |
uniq |
while read -r line; do
    cat "$line"*fastq >> "${line}_${custom_id}_ID${RUN_ID}_L001_R1.fastq"
done
You'll notice that this simplifies the script tremendously, and avoids the pesky temporary files.
I solved it with:
if [ $# -ne 3 ] ; then
    echo -e "Usage: $0 {path_to_working_directory} {custom_ID:Z+} {run_ID:ZZZZ}\n"
    exit 1
fi
cwd=$(pwd)
wd=$1
custom_id=$2
RUN_ID=$3
folder=$(basename $wd)
input_var=$(ls *fastq | cut --fields 1 -d "_" | uniq)
for line in $input_var; do
    cat $line*fastq >> $line"_"$custom_id"_ID"$RUN_ID"_L001_R1.fastq"
done
I have a bash script which parses a file line by line, extracts the date using a cut command and then makes a folder using that date. However, it seems like my variables are not being populated properly. Do I have a syntax issue? Any help or direction to external resources is very appreciated.
#!/bin/bash
ls | grep .mp3 | cut -d '.' -f 1 > filestobemoved
cat filestobemoved | while read line
do
varYear= $line | cut -d '_' -f 3
varMonth= $line | cut -d '_' -f 4
varDay= $line | cut -d '_' -f 5
echo $varMonth
mkdir $varMonth'_'$varDay'_'$varYear
cp ./$line'.mp3' ./$varMonth'_'$varDay'_'$varYear/$line'.mp3'
done
You have many errors and non-recommended practices in your code. Try the following:
for f in *.mp3; do
    f=${f%%.*}
    IFS=_ read _ _ varYear varMonth varDay <<< "$f"
    echo $varMonth
    mkdir -p "${varMonth}_${varDay}_${varYear}"
    cp "$f.mp3" "${varMonth}_${varDay}_${varYear}/$f.mp3"
done
The actual error is that you need to use command substitution. For example, instead of
varYear= $line | cut -d '_' -f 3
you need to use
varYear=$(cut -d '_' -f 3 <<< "$line")
A secondary error there is that $foo | some_command on its own line does not mean that the contents of $foo gets piped to the next command as input, but is rather executed as a command, and the output of the command is passed to the next one.
Some best practices and tips to take into account:
Use a portable shebang line - #!/usr/bin/env bash (disclaimer: That's my answer).
Don't parse ls output.
Avoid useless uses of cat.
Use More Quotes™
Don't use files for temporary storage if you can use pipes. It is literally orders of magnitude faster, and generally makes for simpler code if you want to do it properly.
If you have to use files for temporary storage, put them in the directory created by mktemp -d. Preferably add a trap to remove the temporary directory cleanly (see the sketch after this list).
There's no need for a var prefix in variables.
grep searches for basic regular expressions by default, so .mp3 matches any single character followed by the literal string mp3. If you want to search for a dot, you need to either use grep -F to search for literal strings or escape the regular expression as \.mp3.
You generally want to use read -r (defined by POSIX) to treat backslashes in the input literally.
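As a sketch of that mktemp -d / trap pattern (the temporary file name here is just illustrative):

tmpdir=$(mktemp -d) || exit 1
trap 'rm -rf "$tmpdir"' EXIT      # remove the directory on any exit, clean or not
printf '%s\n' *.mp3 > "$tmpdir/filestobemoved"
# ... work with "$tmpdir/filestobemoved" ...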
What is the best way to choose a random file from a directory in a shell script?
Here is my solution in Bash but I would be very interested for a more portable (non-GNU) version for use on Unix proper.
dir='some/directory'
file=`/bin/ls -1 "$dir" | sort --random-sort | head -1`
path=`readlink --canonicalize "$dir/$file"` # Converts to full path
echo "The randomly-selected file is: $path"
Anybody have any other ideas?
Edit: lhunath makes a good point about parsing ls. I guess it comes down to whether you want to be portable or not. If you have the GNU findutils and coreutils then you can do:
find "$dir" -maxdepth 1 -mindepth 1 -type f -print0 \
| sort --zero-terminated --random-sort \
| sed 's/\d000.*//g'
Whew, that was fun! Also it matches my question better since I said "random file". Honestly though, these days it's hard to imagine a Unix system deployed out there having GNU installed but not Perl 5.
files=(/my/dir/*)
printf "%s\n" "${files[RANDOM % ${#files[#]}]}"
And don't parse ls. Read http://mywiki.wooledge.org/ParsingLs
Edit: Good luck finding a non-bash solution that's reliable. Most will break for certain types of filenames, such as filenames with spaces or newlines or dashes (it's pretty much impossible in pure sh). To do it right without bash, you'd need to fully migrate to awk/perl/python/... without piping that output for further processing or such.
Is "shuf" not portable?
shuf -n1 -e /path/to/files/*
or find if files are deeper than one directory:
find /path/to/files/ -type f | shuf -n1
it's part of coreutils, but you'll need 6.4 or newer to get it... so older RH/CentOS releases don't include it.
# ******************************************************************
# ******************************************************************
function randomFile {
    tmpFile=$(mktemp)
    find . -type f > "$tmpFile"
    total=$(wc -l < "$tmpFile")
    randomNumber=$((RANDOM % total))
    i=0
    while read -r line; do
        if [ "$i" -eq "$randomNumber" ]; then
            # Do stuff with file
            amarok "$line"
            break
        fi
        i=$((i + 1))
    done < "$tmpFile"
    rm "$tmpFile"
}
Something like:
let x="$RANDOM % ${#file[@]}"
echo "The randomly-selected file is ${file[$x]}"
$RANDOM in bash is a special variable that returns a random number, then I use modulus division to get a valid index, then reference that index in the array.
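For completeness, a minimal sketch of the whole pattern (the directory path is an assumption):

file=(some/directory/*)             # fill the array via a glob
x=$((RANDOM % ${#file[@]}))         # random valid index
echo "The randomly-selected file is ${file[x]}"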
This boils down to: How can I create a random number in a Unix script in a portable way?
Because if you have a random number between 1 and N, you can use head -$N | tail to cut somewhere in the middle. Unfortunately, I know no portable way to do this with the shell alone. If you have Python or Perl, you can easily use their random support but AFAIK, there is no standard rand(1) command.
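As a sketch, that selection step could look like this ($N is assumed to already hold the random line number; it inherits the ls-parsing caveats discussed elsewhere on this page):

file=$(ls -1 "$dir" | head -n "$N" | tail -n 1)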
I think Awk is a good tool to get a random number. According to the Advanced Bash Guide, Awk is a good random number replacement for $RANDOM.
Here's a version of your script that avoids Bash-isms and GNU tools.
#! /bin/sh
dir='some/directory'
n_files=`/bin/ls -1 "$dir" | wc -l | cut -f1`
rand_num=`awk "BEGIN{srand();print int($n_files * rand()) + 1;}"`
file=`/bin/ls -1 "$dir" | sed -ne "${rand_num}p"`
path=`cd "$dir" && echo "$PWD/$file"` # Converts to full path.
echo "The randomly-selected file is: $path"
It inherits the problems other answers have mentioned should files contain newlines.
Word-splitting problems with spaces in file names can be avoided by doing the following in Bash (it sets IFS to split on newlines only):
#!/bin/sh
OLDIFS=$IFS
IFS=$(echo -en "\n\b")
DIR="/home/user"
for file in $(ls -1 $DIR)
do
    echo $file
done
IFS=$OLDIFS
Here's a shell snippet that relies only on POSIX features and copes with arbitrary file names (but omits dot files from the selection). The random selection uses awk, because that's all you get in POSIX. It's a very poor random number generator, since awk's RNG is seeded with the current time in seconds (so it's easily predictable, and returns the same choice if you call it multiple times per second).
set -- *
n=$(echo $# | awk '{srand(); print int(rand()*$0) + 1}')
eval "file=\$$n"
echo "Processing $file"
If you don't want to ignore dot files, the file name generation code (set -- *) needs to be replaced by something more complicated.
set -- *; [ -e "$1" ] || shift
set .[!.]* "$#"; [ -e "$1" ] || shift
set ..?* "$#"; [ -e "$1" ] || shift
if [ $# -eq 0 ]; then echo 1>&2 "empty directory"; exit 1; fi
If you have OpenSSL available, you can use it to generate random bytes. If you don't but your system has /dev/urandom, replace the call to openssl by dd if=/dev/urandom bs=3 count=1 2>/dev/null. Here's a snippet that sets n to a random value between 1 and $#, taking care not to introduce a bias. This snippet assumes that $# is at most 2^23-1.
while
    n=$(($(openssl rand 3 | od -An -t u4) + 1))
    [ $n -gt $((16777216 / $# * $#)) ]
do :; done
n=$((n % $# + 1))
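For reference, here is the same rejection loop with the /dev/urandom substitution spelled out (only the byte source changes):

while
    n=$(($(dd if=/dev/urandom bs=3 count=1 2>/dev/null | od -An -t u4) + 1))
    [ $n -gt $((16777216 / $# * $#)) ]
do :; done
n=$((n % $# + 1))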
BusyBox (used on embedded devices) is usually configured to support $RANDOM but it doesn't have bash-style arrays or sort --random-sort or shuf. Hence the following:
#!/bin/sh
FILES="/usr/bin/*"
for f in $FILES; do echo "$RANDOM $f" ; done | sort -n | head -n1 | cut -d' ' -f2-
Note trailing "-" in cut -f2-; this is required to avoid truncating files that contain spaces (or whatever separator you want to use).
It won't handle filenames with embedded newlines correctly.
Put each line of output from the command 'ls' into an associative array named line and then choose one of those like so...
ls | awk '{ line[NR]=$0 } END { srand(); print line[int(rand()*NR+1)] }'
My 2 cents, with a version that should not break when filenames with special chars exist:
#!/bin/bash --
dir='some/directory'
let number_of_files=$(find "${dir}" -type f -print0 | grep -zc .)
let rand_index=$((1+(RANDOM % number_of_files)))
printf "the randomly-selected file is: "
find "${dir}" -type f -print0 | head -z -n "${rand_index}" | tail -z -n 1
printf "\n"