This is one of my homework exercises.
Write a shell program which takes a directory as an argument.
The script should then print all the regular files in that directory and in all
of its subdirectories, recursively, with the following information in the given
order for each file:
file name (full path from the specified directory), file size, owner
If the directory argument is not given, the script should assume the current
working directory.
I am confused about how to approach this problem. For the listing-of-files part I tried ls -R | awk ..., but I could not make it work because I was not able to find a suitable field separator for awk.
I know it's unfair to ask for a solution, but can you give me a hint on how to proceed with the problem? Thanks in advance.
You really don't want to use ls and awk for this. Instead you want to check the documentation for find to figure out what string to put in the following script:
find "${1:-.}" -type f -printf "format-string-to-be-determined-by-reader\n"
The problem is that parsing the output of ls is complicated at best and dangerous at worst.
What you'll want to do is use find to produce the list of files and a small shell script to produce the output you want. While there are many possible methods to accomplish this, I would use the following general form:
while IFS= read -r file ; do
    # retrieve information about "$file"
    # print that information on one line
done < <(find ...)
with a suitable find command to select the files. To retrieve the metadata about each file I would use stat inside the loop, probably multiple times.
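For instance, the loop body might look something like this; a minimal sketch assuming GNU stat, where %s prints the size in bytes and %U the owner's user name:
while IFS= read -r file ; do
    size=$(stat -c '%s' -- "$file")
    owner=$(stat -c '%U' -- "$file")
    printf '%s %s %s\n' "$file" "$size" "$owner"
done < <(find "${1:-.}" -type f)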
I hope that's enough of a hint, but if you want a complete answer I can provide one.
awk is fine; use " +" as the field separator.
Bah. Where's the challenge in using ls or find? May as well write a one-liner in perl to do all the work, and then just call the one-liner from a script. ;)
You can do your recursive directory traversal in the shell natively, and use stat to get the size and owner. Basically, you write a function that lists the directory (for elem in *), and have the function change into the directory and call itself if [[ -d $elem ]] is true. Something like
process_dir () {
    local elem
    for elem in *
    do
        do_print "$elem"
        if [[ -d "$elem" ]]
        then
            cd "$elem"
            process_dir
            cd ..
        fi
    done
}
or something akin to that.
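To kick things off from the argument directory (defaulting to the current working directory, per the assignment), the call might be as simple as this, with do_print left for you to write (presumably a stat call or two):
cd "${1:-.}" && process_dir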
Yeah, you'll have a zillion system calls to stat, but IMHO that's probably preferable to machine-parsing the output of a program whose output is intended to be human-readable. In this case, where performance is not an issue, it's worth it.
For bonus super happy fun times, change the value of IFS to a value that won't appear in a filename, so that word splitting won't mangle files with whitespace in their names. I'd suggest either a newline or a slash.
Or take the easy way out and just use find with printf. :)
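If you do take the find route, a format string along these lines covers all three fields; a sketch assuming GNU find, where %p is the file name, %s the size in bytes, and %u the owner's user name:
find "${1:-.}" -type f -printf '%p %s %u\n'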
Related
I am attempting to write a shell script that will take a file name with a wildcard, find all files matching that pattern in the current directory, and copy them. My problem is that every time I try to use a variable, only the first match echoes and that's it.
./copyfiles.ksh cust*.txt
#! /usr/bin/ksh
IN_FILE=${1}
for file in $IN_FILE
do
echo "$file"
done
cust1.txt
This seems to only match the first one, even though cust1.txt, cust2.txt, and cust3.txt all exist; when I run it with for file in cust*.txt it works.
The shell expands your argument cust*.txt to a list and then passes the list to your script, which only processes $1, which is cust1.txt.
You want to use "$@", which expands to all of the arguments passed:
#! /usr/bin/ksh
for file in "$@"
do
echo "$file"
done
I believe there is a limit (the kernel's ARG_MAX) on how much argument data can be passed this way, though. How many files are you having to process? Make sure your shell and system can handle the number of arguments you are likely to pass. If I recall correctly you may need a solution utilizing xargs, but I'm a tad rusty on that; see the sketch below.
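If you ever do hit that limit, a find | xargs pipeline avoids putting every name on one command line. A sketch, with the pattern and destination directory made up for illustration (the -print0/-0 pair needs GNU or BSD find and xargs):
find . -name 'cust*.txt' -print0 | xargs -0 -I{} cp {} /dest/dir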
In ./copyfiles.ksh cust*.txt, the pattern cust*.txt is expanded by the calling shell before your script ever runs.
If you do not want to change your copyfiles.ksh script, call it with
./copyfiles.ksh "cust*.txt"
You can also change your script, with something like
IN_FILES="$@" # IN_FILES is a better name than IN_FILE, since there can be several
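With the copy step added, the whole script might look like this sketch; the destination directory is made up, substitute your own:
#! /usr/bin/ksh
dest=/tmp/backup            # hypothetical destination directory
for file in "$@"
do
    cp -- "$file" "$dest"
done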
I have a script that one of my professors from college gave us to modify; however, I cannot seem to figure out exactly what it does. Can anyone help me understand it a bit better?
for i
do grep look $i; done
From what I can gather it greps the value of the variable i, which could be a file or directory. However, I am not familiar with how the look command comes into play. I would greatly appreciate any tips you could offer.
look isn't a command here; it's the first argument to grep, i.e. the pattern to search for. So it will search for the word look in the file named by $i. (grep will not search folders unless you pass in -R, as in grep -R look $i.)
The confusing bit is that for i usually comes with an in WORDS specified, so for i in one two three will run the commands between do and done three times: once with variable i = "one", once with i = "two", and once with i = "three". However, the bash manual explains what happens if in isn't specified:
If ‘in words’ is not present, the for command executes the commands once for each positional parameter that is set, as if ‘in "$@"’ had been specified [...].
So, in short, if your script is in a file named foo.sh, then calling foo.sh file1 file2 will look for the word look in files "file1" and "file2".
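You can see the implicit "$@" behavior for yourself with a couple of made-up arguments:
set -- file1 file2          # simulate two command-line arguments
for i
do
    echo "would run: grep look $i"
done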
I am using Automator on my Mac to set up a service that passes a selected folder to a bash shell script as arguments.
In the script I do:
for f in "$#"; do
printf "%q\n" "$f" | pbcopy
done
If I then do:
echo `pbpaste`
I get the path to my selected folder with spaces escaped (\). I then wanted to use this path to cd into that directory and do a bunch of other stuff (creating a blank directory structure). I hoped I could just do:
cd `pbpaste`
but this doesn't work.
If I type the path manually the cd works, so I assume there is some issue with data types or returns or something?
I'll admit I don't really know what this script is actually doing and I may be going about this all wrong, but if anyone can explain what's going on here and how to get it working, that would be great. Even better would be a pointer to a really good resource for a complete beginner to start learning about shell scripting.
I really like the idea of getting into this a bit more, but all the resources I have found are either total basics (cd, ls, pwd, etc.) or really high-level and assume a bunch of previous knowledge.
What I'd really like is a full language reference with some actual examples like you find for the languages I am more used to (HTML/CSS/JS/AS3), if such a thing exists.
Cheers for any help :)
I agree with @chepner's answer, but for the sake of Google results: to cd using pbpaste you simply do
cd "$(pbpaste)"
When you use the %q format, you are adding literal backslashes to the string, which the shell does not process as escape characters when you use it with cd.
The clipboard is useful for interprocess communication; inside a single script, it's easier to just use variables to hold information temporarily. f already has the path name in it, so just use it:
cd "$f"
Notice I've quoted the expansion of f, so that any spaces in the path name are passed as part of the single argument to cd.
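So the Automator script can do the work directly inside the loop, without the clipboard round-trip. A minimal sketch; the directory names created are made up for illustration:
for f in "$@"; do
    cd "$f" || continue           # skip anything we cannot enter
    mkdir -p assets src docs      # hypothetical blank directory structure
    cd - > /dev/null              # go back for the next folder
done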
I am reading in a list of file names:
*.txt *.xml
which are space delimited. I read this into a variable in my ksh script, and I want to be able to manipulate it before putting each of them into a find command. The problem is, as soon as I do anything with the variable (for instance, breaking it into an array), the * resolves into filenames that are in my script's directory. What I want is for the *.txt to remain unchanged, so I can put that into my find command.
How do I do this? Unfortunately, I'm at work and can't just use perl or some other language.
set -f
turns off globbing in ksh, so * and ? characters are not expanded (globbed).
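A sketch of how that fits this problem; the patterns variable stands in for however you read the list:
set -f                          # turn off globbing
patterns='*.txt *.xml'          # hypothetical input
for pat in $patterns            # word splitting still happens, globbing does not
do
    find . -name "$pat" -print
done
set +f                          # turn globbing back on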
What's wrong with quoting them?
'*.txt' '*.xml'
Otherwise, you will have to show us more of your problem. Maybe edit your post to include a small test case that illustrates it, plus the desired output or intermediate values.
I am calling a script as below
directory path : /user/local/script/print_path.sh
var_path=`pwd`
echo $var_path
The above script is called from another script, as below:
directory path : /user/local/callPscript/call.sh
`/user/local/script/print_path.sh`
I want the output to be:
/user/local/script/
But it gives this output:
/user/local/callPscript/
i.e. the location from which the script is called. How can I make it print the script's own directory path?
After some weeks of Bash programming, this has emerged as the standard solution:
directory=$(dirname -- "$(readlink -fn -- "$0")")
$0 is the relative path to the script, readlink -f resolves that into an absolute path, and dirname strips the script filename from the end of the path.
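Applied to print_path.sh above, a sketch (this assumes GNU readlink; readlink -f is not available everywhere):
#!/bin/sh
# print the directory this script lives in, not the caller's directory
var_path=$(dirname -- "$(readlink -fn -- "$0")")
echo "$var_path"                # /user/local/script wherever it is called from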
A safer variant that also survives trailing newlines in the path:
scriptx="$(readlink -fn -- "$0"; echo x)"
directoryx="$(dirname -- "${scriptx%x}"; echo x)"
directory="${directoryx%?x}"
This should be safe with any filename: $() structures remove newlines at the end of the string, which is why an x is appended inside each substitution and stripped off afterwards.
Maybe this can help you.
var_path=$PWD
echo "$var_path"
Try this one; it should come close to what you want:
var_path=`dirname "$0"`
Please see BashFAQ/028.
This topic comes up frequently. This answer covers not only the expression used above ("configuration files"), but also several variant situations. If you've been directed here, please read this entire answer before dismissing it.
This is a complex question because there's no single right answer to it. Even worse: it's not possible to find the location reliably in 100% of all cases. All ways of finding a script's location depend on the name of the script, as seen in the predefined variable $0. But providing the script name in $0 is only a (very common) convention, not a requirement.
. . .
Generally, storing data files in the same directory as their programs is bad practice. The Unix file system layout assumes that files in one place (e.g. /bin) are executable programs, while files in another place (e.g. /etc) are data files.
Read the complete page for lots more good information.