I'm not entirely new to programming, but I'm not exactly experienced. I want to write a small shell script for practice.
Here's what I have so far:
#!/bin/sh
name=$0
links=$3
owner=$4
if [ $# -ne 1 ]
then
echo "Usage: $0 <directory>"
exit 1
fi
if [ ! -e $1 ]
then
echo "$1 not found"
exit 1
elif [ -d $1 ]
then
echo "Name\t\tLinks\t\tOwner\t\tDate"
echo "$name\t$links\t$owner\t$date"
exit 0
fi
Basically what I'm trying to do is have the script go through all of the files in a specified directory and then display the name of each file along with the number of links it has, its owner, and the date it was created. What would be the syntax for displaying the date of creation, or at least the date of last modification, of the file?
Another thing is, what is the syntax for creating a for loop? From what I understand I would have to write something like for $1 in $1 ($1 being all of the files in the directory the user typed in, correct?) and then go through checking each file and displaying the information for each one. How would I start and end the for loop (what is the syntax for this)?
As you can see I'm not very familiar with Bourne shell programming. If you have any helpful websites or a better way of approaching this, please show me!
Syntax for a for loop:
for var in list
do
echo $var
done
for example:
for var in *
do
echo $var
done
What you might want to consider however is something like this:
ls -l | while read perms links owner group size date1 date2 time filename
do
echo $filename
done
which splits the output of ls -l into fields on-the-fly so you don't need to do any splitting yourself.
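Applied to your original problem, a minimal sketch along those lines might look like this (the ls -l field order and the tab layout here are assumptions on my part, not a polished implementation):
#!/bin/sh
dir=$1
printf 'Name\tLinks\tOwner\tDate\n'
# skip the "total ..." summary line, then print one row per directory entry
ls -l "$dir" | tail -n +2 | while read perms links owner group size month day timeoryear name
do
    printf '%s\t%s\t%s\t%s %s %s\n' "$name" "$links" "$owner" "$month" "$day" "$timeoryear"
done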
The field-splitting is controlled by the shell variable IFS, which by default contains a space, tab and newline. If you change this in a shell script, remember to change it back. By changing the value of IFS you can, for example, parse CSV files by setting it to a comma. This example reads three fields from a CSV and spits out the 2nd and 3rd only (it's effectively the shell equivalent of cut -d, -f2,3 inputfile.csv):
oldifs=$IFS
IFS=","
while read field1 field2 field3
do
echo $field2 $field3
done < inputfile.csv
IFS=$oldifs
(note: you don't need to revert IFS, but I generally do to make sure that further text processing in a script isn't affected after I'm done with it).
Plenty of documentation out there on both for and while loops; just google for it :-)
$1 is the first positional parameter, so $3 is the third and $4 is the fourth. They have nothing to do with the directory (or its files) the script was started from. If your script was started using this, for example:
./script.sh apple banana cherry date elderberry
then the variable $1 would equal "apple" and so on. The special parameter $# is the count of positional parameters, which in this case would be five.
The name of the script is contained in $0, and $* and $@ both expand to all of the positional parameters; they behave differently depending on whether they appear in quotes.
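A quick sketch of that quoting difference:
set -- "a a" b                        # two positional parameters, the first contains a space
for x in "$*"; do echo "$x"; done     # one word:  a a b
for x in "$@"; do echo "$x"; done     # two words: "a a", then "b"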
You can refer to the positional parameters using a substring-style index:
${@:2:1}
would give "banana" using the example above. And:
${@: -1}
or
${@:$#}
would give the last ("elderberry"). Note that the space before the minus sign is required in this context.
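For example, a small sketch (bash syntax) using the invocation above:
#!/bin/bash
# called as: ./script.sh apple banana cherry date elderberry
echo "$0"          # the script name
echo "$#"          # 5
echo "${@:2:1}"    # banana
echo "${@: -1}"    # elderberry (note the space before the minus sign)
echo "${@:$#}"     # elderberry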
You might want to look at Advanced Bash-Scripting Guide. It has a section that explains loops.
I suggest using find with the option -printf "%P\t%n\t%u\t%t"
for x in "$@"; do
echo "$x"
done
The "$#" protects any whitespace in supplied file names. Obviously, do your real work in place of "echo $x", which isn't doing much. But $# is all the junk supplied on the command line to your script.
But also, your script bails out if $# is not equal to 1, but you're apparently fully expecting up to 4 arguments (hence the $4 you reference in the early part of your script).
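Putting those two points together, a minimal sketch of the argument handling (illustrative only; your per-file work replaces the echo):
#!/bin/sh
if [ $# -ne 1 ]; then
    echo "Usage: $0 <directory>" >&2
    exit 1
fi
for f in "$1"/*; do
    echo "$f"        # gather links/owner/date for each file here
done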
Assuming you have GNU find on your system:
find /path -type f -printf "filename: %f | hardlinks: %n | owner: %u | time: %TH %Tb %TY\n"
Hello, I am trying to get all files with Jane's name into a separate file called oldFiles.txt. In a directory called "data" I am reading from a list of file names in a file called list.txt, from which I put all the file names containing the name Jane into the files variable. Then I'm trying to test the files variable against the files in list.txt to ensure they are in the file system, and then append all the files containing Jane to the oldFiles.txt file (which will be in the scripts directory), after it tests to make sure each item within the files variable passes.
#!/bin/bash
> oldFiles.txt
files= grep " jane " ../data/list.txt | cut -d' ' -f 3
if test -e ~data/$files; then
for file in $files; do
if test -e ~/scripts/$file; then
echo $file>> oldFiles.txt
else
echo "no files"
fi
done
fi
The above code gets the desired files and displays them correctly, as well as creating the oldFiles.txt file, but when I open the file after running the script I find that nothing was appended to it. I tried changing the file assignment to use command substitution instead, files= grep " jane " ../data/list.txt | cut -d' ' -f 3 ---> files=$(grep " jane " ../data/list.txt), to see if that would help by just capturing raw data to write to the file, but then the error "too many arguments on line 5" comes up, which is the first if test statement. The only way I get the script to work semi-properly is when I do ./findJane.sh > oldFiles.txt on the shell command line, which is essentially me manually creating the file. How would I go about this so that I create oldFiles.txt and append to it all within the script?
The biggest problem you have is matching names like "jane" or "Jane's", etc. while not matching "Janes". grep provides the options -i (case-insensitive match) and -w (whole-word match) which can tailor your search to what you appear to want without having to use the kludge (" jane ") of adding spaces before and after your search term (to do that properly you would use [[:space:]]jane[[:space:]]).
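For example, assuming list.txt contains lines mentioning "Jane's", "jane" and "Janes" (illustrative contents):
# matches "jane", "Jane" and "Jane's" (the apostrophe is a word boundary), but not "Janes"
grep -iw 'jane' ../data/list.txt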
You also have the problem of what your "script dir" is if you call your script from a directory other than the one containing it, such as calling it from your $HOME directory with bash script/findJane.sh. In that case your script will attempt to append to $HOME/oldFiles.txt. The parameter $0 contains the path used to invoke the script being run, so you can capture the script directory no matter where you call the script from with:
dirname "$0"
You are using bash, so store all the filenames resulting from your grep command in an array, not some general variable (especially since your use of " jane " suggests that your filenames contain whitespace)
You can make your script much more flexible if you take the information of your input file (e.g. list.txt), the term to search for (e.g. "jane"), the location where to check for existence of the files (e.g. $HOME/data) and the output filename to append the names to (e.g. "oldFiles.txt") as command line [positional] parameters. You can give each a default value so the script behaves as you currently desire without providing any arguments.
Even with the additional scripting flexibility of taking the command line arguments, the script actually has fewer lines, simply filling an array using mapfile (synonymous with readarray) and then looping over the contents of the array. You could also avoid the additional subshell for dirname with a simple parameter expansion and a test of whether the path component is empty (replacing it with '.'), but that's up to you.
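A minimal sketch of that dirname-free alternative, just to show the idea (the full script below still uses dirname):
script="${0%/*}"                      # strip everything after the last '/'
[ "$script" = "$0" ] && script="."    # no '/' at all: script was invoked by bare name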
If I've understood your goal correctly, you can put all the pieces together with:
#!/bin/bash
# positional parameters
src="${1:-../data/list.txt}" # 1st param - input (default: ../data/list.txt)
term="${2:-jane}" # 2nd param - search term (default: jane)
data="${3:-$HOME/data}" # 3rd param - file location (default: $HOME/data)
outfn="${4:-oldFiles.txt}" # 4th param - output (default: oldFiles.txt)
# save the path to the current script in script
script="$(dirname "$0")"
# if outfn not given, prepend path to script to outfn to output
# in script directory (if script called from elsewhere)
[ -z "$4" ] && outfn="$script/$outfn"
# split names w/term into array
# using the -iw option for case-insensitive whole-word match
mapfile -t files < <(grep -iw "$term" "$src" | cut -d' ' -f 3)
# loop over files array
for ((i=0; i<${#files[@]}; i++)); do
# test existence of file in data directory, redirect name to outfn
[ -e "$data/${files[i]}" ] && printf "%s\n" "${files[i]}" >> "$outfn"
done
(note: test expression and [ expression ] are synonymous, use what you like, though you may find [ expression ] a bit more readable)
(further note: "Janes" being plural is not considered the same as the singular -- adjust the grep expression as desired)
Example Use/Output
As was pointed out in the comment, without a sample of your input file, we cannot provide an exact test to confirm your desired behavior.
Let me know if you have questions.
As far as I can tell, this is what you're going for. This is totally a community effort based on the comments, catching your bugs. Obviously credit to Mark and Jetchisel for finding most of the issues. Notable changes:
Fixed $files to use command substitution
Fixed path to data/$file, assuming you have a directory at ~/data full of files
Fixed the test to not test for a string of files, but just the single file (also using -f to make sure it's a regular file)
Using double brackets — you could also use double quotes instead, but you explicitly have a Bash shebang so there's no harm in using Bash syntax
Adding a second message about not matching files, because there are two possible cases there; you may need to adapt depending on the output you're looking for
Removed the initial empty redirection — if you need to ensure that the file is clear before the rest of the script, then it should be added back, but if not, it's not doing any useful work
Changed the shebang to make sure you're using the user's preferred Bash, and added set -e because you should always add set -e
#!/usr/bin/env bash
set -e
files=$(grep " jane " ../data/list.txt | cut -d' ' -f 3)
for file in $files; do
if [[ -f $HOME/data/$file ]]; then
if [[ -f $HOME/scripts/$file ]]; then
echo "$file" >> oldFiles.txt
else
echo "no matching file"
fi
else
echo "no files"
fi
done
I need to verify that all images mentioned in a csv are present inside a folder. I wrote a small shell script for that
#!/bin/zsh
red='\033[0;31m'
color_Off='\033[0m'
csvfile=$1
imgpath=$2
cat $csvfile | while IFS=, read -r filename rurl
do
if [ -f "${imgpath}/${filename}" ]
then
echo -n
else
echo -e "$filename ${red}MISSING${color_Off}"
fi
done
My CSV looks something like
Image1.jpg,detail-1
Image2.jpg,detail-1
Image3.jpg,detail-1
The csv was created by excel.
Now all 3 images are present in imgpath but for some reason my output says
Image1.jpg MISSING
Upon using zsh -x to run the script, I found that my CSV file has a BOM at the very beginning, making the image name \ufeffImage1.jpg, which is causing the whole issue.
How can I ignore a BOM(byte-order marker) in a while read operation?
zsh provides a parameter expansion (also available in POSIX shells) to remove a prefix: ${var#prefix} will expand to $var with prefix removed from the front of the string.
zsh also, like ksh93 and bash, supports ANSI C-like string syntax: $'\ufeff' refers to the Unicode sequence for a BOM.
Combining these, one can refer to ${filename#$'\ufeff'} to refer to the content of $filename but with the Unicode sequence for a BOM removed if it's present at the front.
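For instance, a quick interactive sketch (illustrative file name):
filename=$'\ufeff'Image1.jpg
echo "${filename#$'\ufeff'}"     # prints: Image1.jpg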
The below also makes some changes for better performance, more reliable behavior with odd filenames, and compatibility with non-zsh shells.
#!/bin/zsh
red='\033[0;31m'
color_Off='\033[0m'
csvfile=$1
imgpath=$2
while IFS=, read -r filename rurl; do
filename=${filename#$'\ufeff'}
if ! [ -f "${imgpath}/${filename}" ]; then
printf '%s %bMISSING%b\n' "$filename" "$red" "$color_Off"
fi
done <"$csvfile"
Notes on changes unrelated to the specific fix:
Replacing echo -e with printf lets us pick which specific variables get escape sequences expanded: %s for filenames means backslashes and other escapes in them are unmodified, whereas %b for $red and $color_Off ensures that we do process highlighting for them.
Replacing cat $csvfile | with < "$csvfile" avoids the overhead of starting up a separate cat process, and ensures that your while read loop is run in the same shell as the rest of your script rather than a subshell (which may or may not be an issue for zsh, but is a problem with bash when run without the non-default lastpipe flag).
echo -n isn't reliable as a noop: some shells print -n as output, and the POSIX echo standard, by marking behavior when -n is present as undefined, permits this. If you need a noop, : or true is a better choice; but in this case we can just invert the test and move the else path into the truth path.
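If you do want to keep an explicit "do nothing" branch rather than inverting the test, a sketch:
if [ -f "${imgpath}/${filename}" ]; then
    :   # no-op: the file exists, nothing to report
else
    printf '%s %bMISSING%b\n' "$filename" "$red" "$color_Off"
fi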
I am trying to find the number of files in a directory. I am using the ls command. Based on the number of files, I need to display the count and a message. The script doesn't provide the desired output. Any assistance will be appreciated.
#!/bin/sh
FILECOUNT = $(ls /opt/report/ | grep *.ZIP_30 | wc -l);
if [ $FILECOUNT -gt "0" ]; then
echo "Statistic.filecount: $FILECOUNT";
echo "Message.filecount: Normal";
else
echo "Statistic.filecount: $FILECOUNT";
echo "Message.filecount: Warning";
fi;
exit 0;
You have a syntax error.
No spaces allowed around = in shell, so:
filecount=$(ls /opt/report/ | grep -c 'ZIP_30')
It's general advice not to parse the output of ls, therefore I would suggest the following command:
find /opt/report/ -maxdepth 1 -name "*.ZIP_30" | wc -l
Besides the syntax error around = that Gilles pointed out and the quoting issue that William Pursell commented on, there's a simpler way to count the number of files: use the shell!
shopt -s nullglob
set -- /opt/report/*.ZIP_30
filecount=$#
if [ "$filecount" -gt 0 ]; then
echo "Statistic.filecount: $filecount";
echo "Message.filecount: Normal";
else
echo "Statistic.filecount: $filecount";
echo "Message.filecount: Warning";
fi;
exit 0;
The basic idea is to use the shell's globbing (wildcard) expansion feature to set the positional parameters to the list of matching files. I've used a bash shell feature (nullglob) for the case where there are exactly no matching files. Normally, the shell would leave the /opt/report/*.ZIP_30 text as the result of the empty match, but we're trying to count the files, so we want that to disappear when there aren't any matching files. The $# variable picks up the number of positional parameters, which gives us the file count. I've also lowercased the shell variable name, just as a good habit to prevent clobbering built-in shell variable names.
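If you need to stay with plain /bin/sh (which has no nullglob), a sketch of the same counting idea that checks whether the glob actually matched anything:
#!/bin/sh
set -- /opt/report/*.ZIP_30
# with no match the pattern itself is left in $1 and won't exist as a file
[ -e "$1" ] || set --
filecount=$#
echo "Statistic.filecount: $filecount"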
Here is a small [but complete] part of my bash script that finds and outputs all files in mydir if they have a prefix from a stored array. The strange thing I notice is that this script works perfectly if I take out "-maxdepth 1 -name" from the script; otherwise it only gives me the files with the prefix of the first element in the array.
It would be of great help if someone explained this to me. Sorry in advance if there is some thing obviously silly that I'm doing. I'm relatively new to scripting.
#!/bin/sh
DIS_ARRAY=(A B C D)
echo "Array is : "
echo ${DIS_ARRAY[*]}
for dis in $DIS_ARRAY
do
IN_FILES=`find /mydir -maxdepth 1 -name "$dis*.xml"`
for file in $IN_FILES
do
echo $file
done
done
Output:
/mydir/Abc.xml
/mydir/Ab.xml
/mydir/Ac.xml
Expected Output:
/mydir/Abc.xml
/mydir/Ab.xml
/mydir/Ac.xml
/mydir/Bc.xml
/mydir/Cb.xml
/mydir/Dc.xml
The loop is broken either way. The reason why
IN_FILES=`find mydir -maxdepth 1 -name "$dis*.xml"`
works, whereas
IN_FILES=`find mydir "$dis*.xml"`
doesn't, is that in the first one you have specified -name. In the second one, find is listing all the files in mydir. If you change the second one to
IN_FILES=`find mydir -name "$dis*.xml"`
you will see that the loop isn't working.
As mentioned in the comments, the syntax that you are currently using, $DIS_ARRAY, will only give you the first element of the array.
Try changing your loop to this:
for dis in "${DIS_ARRAY[@]}"
The double quotes around the expansion aren't strictly necessary in your specific case, but they are required if the elements in your array contain spaces, as demonstrated in the following test:
#!/bin/bash
arr=("a a" "b b")
echo using '$arr'
for i in $arr; do echo $i; done
echo using '${arr[@]}'
for i in ${arr[@]}; do echo $i; done
echo using '"${arr[@]}"'
for i in "${arr[@]}"; do echo $i; done
output:
using $arr
a
a
using ${arr[@]}
a
a
b
b
using "${arr[#]}"
a a
b b
See this related question for further details.
@TomFenech's answer solves your problem, but let me suggest other improvements:
#!/usr/bin/env bash
DIS_ARRAY=(A B C D)
echo "Array is : "
echo ${DIS_ARRAY[*]}
for dis in "${DIS_ARRAY[#]}"
do
for file in "/mydir/$dis"*.xml
do
if [ -f "$file" ]; then
echo "$file"
fi
done
done
Your shebang line references sh, but your question is tagged bash - unless you need POSIX compliance, use a bash shebang line to take advantage of all that bash has to offer
To match files located directly in a given directory (i.e., if you don't need to traverse an entire subtree), use a glob (filename pattern) and rely on pathname expansion as in my code above - no need for find and command substitution.
Note that the wildcard char. * is UNquoted to ensure pathname expansion.
Caveat: if no matching files are found, the glob is left untouched (assuming the nullglob shell option is OFF, which it is by default), so the loop is entered once, with an invalid filename (the unexpanded glob) - hence the [ -f "$file" ] conditional to ensure that an actual match was found (as an aside: using bashisms, you could use [[ -f $file ]] instead).
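Alternatively, a sketch using nullglob so that the loop body is simply skipped when nothing matches (making the -f test optional):
#!/usr/bin/env bash
shopt -s nullglob            # unmatched globs expand to nothing instead of themselves
DIS_ARRAY=(A B C D)
for dis in "${DIS_ARRAY[@]}"; do
    for file in "/mydir/$dis"*.xml; do
        echo "$file"
    done
done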
I am creating a bash shell script that will rename a file extension without having to specify the old file extension name. If I enter "change foo *" in the terminal in Linux, it should change every file extension to foo.
So let's say I've got four files: "file1.txt", "file2.txt.txt", "file3.txt.txt.txt" and "file4."
When I run the command, the files should look like this: "file1.foo", "file2.txt.foo", "file3.txt.txt.foo" and "file4.foo"
Can someone look at my code and correct it? I would also appreciate it if someone could implement this for me.
#!/bin/bash
EXT=$1
shift
for FILE in "$@"
do
TEMP=`echo $FILE | sed -n '/^[a-Z0-9]*\./p'`
if test "${TEMP}X" == 'X'; then
NEW_FILE="$FILE.$EXT"
else
NEW_FILE=`echo $FILE | sed "s/\(.*\)\..*/\1.$EXT/"`
fi
mv $FILE $NEW_FILE
done
exit
Always use double quotes around variable substitutions, e.g. echo "$FILE" and not echo $FILE. Without double quotes, the shell splits the value of the variable on whitespace and expands glob characters (\[*?) in it. (There are cases where you don't need the quotes, and sometimes you do want word splitting, but that's for a future lesson.)
I'm not sure what you're trying to do with sed, but whatever it is, I'm sure it's doable in the shell.
To check if $FILE contains a dot: case "$FILE" in *.*) echo yes;; *) echo no;; esac
To strip the last extension from $FILE: ${FILE%.*}. For example, if $FILE is file1.txt.foo, this produces file1.txt. More generally, ${FILE%SOME_PATTERN} expands to $FILE with the shortest suffix matching SOME_PATTERN stripped off. If there is no matching suffix, it expands to $FILE unchanged. The variant ${FILE%%SOME_PATTERN} strips the longest suffix. Similarly, ${FILE#SOME_PATTERN} and ${FILE##SOME_PATTERN} strip a prefix.
test "${TEMP}X" == 'X' is weird. This looks like a misremembered trick from the old days. The normal way of writing this is [ "$TEMP" = "" ] or [ -z "$TEMP" ]. Using == instead of = is a bash extension. There used to be buggy shells that might parse the command incorrectly if $TEMP looked like an operator, but these have gone the way of the dinosaur, and even then, the X needs to be at the beginning, because the problematic operators begin with a -: [ "X$TEMP" == "X" ].
If a file name begins with a -, mv will think it's an option. Use -- to say “that's it, no more options, whatever follows is an operand”: mv -- "$FILE" "$NEW_FILE".
This is very minor, but a common (not universal) convention is to use capital letters for environment variables and lowercase letters for internal script variables.
Since you're using only standard shell features, you can start the script with #!/bin/sh (but #!/bin/bash works too, of course).
exit at the end of the script is useless.
Applying all of these, here's the resulting script.
#!/bin/sh
ext="$1"; shift
for file in "$#"; do
base="${file%.*}"
mv -- "$file" "$base.$ext"
done
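A hypothetical session with the files from the question (assuming the script is saved as change and made executable):
$ ./change foo file1.txt file2.txt.txt file3.txt.txt.txt file4.
$ ls
change  file1.foo  file2.txt.foo  file3.txt.txt.foo  file4.foo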
Not exactly what you are asking about, but have a look at the perl rename utility. Very powerful! man rename is a good start.
Use: for file in *.gif; do mv -- "$file" "${file%.gif}.jpg"; done
Or see How to rename multiple files
For me this worked
for FILE in *
do
NEW_FILE=${FILE%.*}            # the file name with its last extension stripped
NEW_FILE=${NEW_FILE}${EXT}     # append the new extension (assumes EXT is set, e.g. EXT=".foo")
done
I just want to tell you about NEW_FILE=${FILE%.*}.
Here NEW_FILE gets the file name with its last extension stripped. You can use it however you want.
I tested in bash with uname -a = "Linux 2.4.20-8smp #1 SMP Thu Mar 13 17:45:54 EST 2003 i686 i686 i386 GNU/Linux"