I have a large shell script that processes files on each of my Solaris systems.
At the beginning, the script creates a variable, FILENAME.
Sometimes people create directories/files that contain spaces.
e.g.
/users/ldap/Anukriti's System Backup/BACKUP/workspace/BP8/scripts/yui/editor/simpleeditor.js
Later in the script I run
cp $FILENAME $DESTDIR/
As you can imagine this always fails because the following is invalid.
cp /users/ldap/Anukriti's System Backup/BACKUP/workspace/BP8/scripts/yui/editor/simpleeditor.js $DESTDIR
I have tried putting the variable in quotes, but this is not working. I have used find with the -exec option before, but for this circumstance that is not really an option, especially since Solaris find does not support the -wholename or -path options.
What can I do here?
You just have to protect the variables with quotes:
cp "$FILENAME" "$DESTDIR"
NOTE
Don't use single quotes ('); variables are not expanded inside them.
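For instance (the path here is made up):
FILENAME="/users/ldap/dir with spaces/file.js"
echo "$FILENAME"    # double quotes: expands to /users/ldap/dir with spaces/file.js
echo '$FILENAME'    # single quotes: prints the literal text $FILENAME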
Looks like I need to use curly braces for variable expansion and double quotes:
cp "${FILENAME}" $DESTDIR
Make sure that
$DESTDIR exists
is a directory
and yes, use double quotes for both variables and get rid of the trailing /.
You might not believe it, but that is your problem. :-)
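If you want to be defensive about it, a small sanity check (a sketch, not part of the original script) makes the failure mode obvious:
if [ ! -d "$DESTDIR" ]; then
echo "DESTDIR is not a directory: $DESTDIR" >&2
exit 1
fi
cp "${FILENAME}" "$DESTDIR"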
Related
I am trying to copy a .nii file (Gabor3.nii) path to a variable but even though the file is found by the find command, I can't copy the path to the variable.
find . -type f -name "*.nii"
Data= '/$PWD/"*.nii"'
output:
./Gabor3.nii
./hello.sh: line 21: /$PWD/"*.nii": No such file or directory
What went wrong
You show that you're using:
Data= '/$PWD/"*.nii"'
The space means that the Data= part sets the variable Data to an empty string in the environment of the command that follows, and the shell then attempts to run '/$PWD/"*.nii"' as that command. The single quotes mean that what is between them is not expanded, and you don't have a directory /$PWD (that's a directory literally named $PWD in the root directory), so the script "*.nii" isn't found in it, hence the error message.
Using arrays
OK; that's what's wrong. What's right?
You have a couple of options. The most reliable is to use an array assignment and shell expansion:
Data=( "$PWD"/*.nii )
The parentheses (note the absence of a space before the open parenthesis; that's crucial) make it an array assignment. Using shell globbing gives a list of names, preserving spaces etc. in the names correctly. Using double quotes around "$PWD" ensures that the expansion is correct even if there are spaces in the current directory name.
You can find out how many files there are in the list with:
echo "${#Data[#]}"
You can iterate over the list of file names with:
for file in "${Data[#]}"
do
echo "File is [$file]"
ls -l "$file"
done
Note that variable references must be in double quotes for names with spaces to work correctly. The "${Data[@]}" notation has parallels with "$@", which also preserves spaces in the arguments to the command. There is a "${Data[*]}" variant which behaves analogously to "$*", and is of similarly limited value.
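A quick demonstration of the difference, with two made-up names containing spaces:
Data=( "a b.nii" "c d.nii" )
for file in "${Data[@]}"; do echo "$file"; done   # two lines: a b.nii, then c d.nii
for file in "${Data[*]}"; do echo "$file"; done   # one line: a b.nii c d.nii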
If you're worried that there might not be any files with the extension, then use shopt -s nullglob to expand the globbing expression into an empty list rather than the unexpanded expression which is the historical default. You can unset the option with shopt -u nullglob if necessary.
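For example, a sketch combining nullglob with the array assignment:
shopt -s nullglob
Data=( "$PWD"/*.nii )
shopt -u nullglob
if [ "${#Data[@]}" -eq 0 ]; then
echo "no .nii files found in $PWD" >&2
fi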
Alternatives
Alternatives involve things like command substitution, Data=$(ls "$PWD"/*.nii), but this is vastly inferior to using an array unless neither the path in $PWD nor the file names contain any spaces, tabs, or newlines. If there is no white space in the names, it works OK and you can iterate over it:
for file in $Data
do
echo "No white space [$file]"
ls -l "$file"
done
but this is altogether less satisfactory if there are (or might be) any white space characters around.
You can use command substitution:
Data=$(find . -type f -name "*.nii" -print -quit)
To prevent multi-line output, the -quit option stops the search after the first file is found (leave it off if you're sure only one file will be found, or if you want to process multiple files).
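Since -quit leaves the variable empty when nothing matches, a quick test (a sketch) distinguishes the two cases:
Data=$(find . -type f -name "*.nii" -print -quit)
if [ -n "$Data" ]; then
echo "found: $Data"
else
echo "no .nii file found" >&2
fi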
The syntax to do what you seem to be trying to do with:
Data= '/$PWD/"*.nii"'
would be:
Data="$(ls "$PWD"/*.nii)"
Not saying it's the best approach for whatever you want to do next of course, it's probably not...
I want to write a script that takes a name of a folder as a command line argument and produces a file that contains the names of all subfolders with size 0 (empty subfolder). This is what I got:
#!/bin/bash
echo "Name of a folder'
read FOLDER
for entry in "$search_dir"/*
do
echo "$entry"
done
Your script doesn't have the logic you intended. The find command has a feature for this:
$ find path/to/dir -type d -empty
will print empty directories starting from the given path/to/dir
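Wired into the script the question describes (folder name as a command-line argument, output written to a file; empty_dirs.txt is just an illustrative name), that could look like:
#!/bin/sh
find "$1" -type d -empty > empty_dirs.txt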
I would suggest you accept the answer which suggests to use find instead. But just to be complete, here is some feedback on your code.
You read the input directory into FOLDER but then never use this variable.
As an aside, don't use uppercase for your private variables; this is reserved for system variables.
You have unpaired quotes in the prompt string. If the opening quote is double, you need to close with a double quote, or vice versa for single quotes.
You loop over directory entries, but do nothing to isolate just the ones which are directories, let alone empty directories.
Finally, nothing in your script uses Bash-only facilities, so it would be safe and somewhat more portable to use #!/bin/sh
Now, looping over directories can be done by using search_dir/*/ instead of just search_dir/*; and finding out which ones are empty can be done by checking whether a wildcard within the directory returns just the directory itself. (This assumes default globbing behavior -- with nullglob you would make a wildcard with no matches expand to an empty list, but this is problematic in some scenarios so it's not the default.)
#!/bin/bash
# read -p is not POSIX
read -p "Name of a folder" search_dir
for dir in "$search_dir"/*/
do
# [[ is Bash only
if [[ "$dir"/* = "$dir/*" ]]; then # Notice tricky quoting
echo "$dir"
fi
done
Using the wildcard expansion with [ is problematic because [ is not prepared to deal with the result of the expansion -- you get "too many arguments" if the wildcard expands into more than one filename -- so I'm using the somewhat more mild-tempered Bash replacement [[, which copes just fine with this (it does not word-split the unquoted command substitution). Alternatively, you could use case, which I would actually prefer here; but I've stuck to if in order to make only minimal changes to your script.
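For completeness, the case variant mentioned above might look like this (a sketch, using the same command-substitution trick to force the glob):
case $(echo "$dir"/*) in
"$dir/*") echo "$dir" ;; # the glob came back unexpanded, so the directory is empty
esac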
I want to do something like the following:
#!/bin/bash
cmd="find . -name '*.sh'"
echo $($cmd)
What I expect is that it will show all the shell script files in the current directory, but nothing happened.
I know that I can solve the problem with eval according to this post
#!/bin/bash
cmd="find . -name '*.sh'"
eval $cmd
So my question is why command substitution doesn't work here and what's the difference between $(...) and eval in terms of the question?
Command substitution works here. It's just that your quoting is wrong: your script finds only one file name, the one with single quotes and an asterisk in it:
'*.sh'
You can create such an unusual file with this command and test it:
touch "'*.sh'"
Quoting in bash is different from quoting in other programming languages. Check out the details in this answer.
What you need is this quoting:
cmd="find . -name *.sh"
echo $($cmd)
Since you are already including the pattern *.sh inside double quotes, there's no need for the single quotes to protect the pattern, and as a result the single quotes become part of the pattern.
You can try using an array to keep *.sh quoted until it is passed to the command substitution:
cmd=(find . -name '*.sh')
echo $("${cmd[#]}")
I don't know if there is a simple way to convert your original string to an array without the pattern being expanded.
Update: This isn't too bad, but it's probably better to just create the array directly if you can.
cmd="find . -name *.sh"
set -f
cmd=($cmd)
set +f
echo $("${cmd[#]}")
When you use the echo $($cmd) syntax, it's basically equivalent to just putting $cmd on its own line. The problem is the way bash wants to interpolate the wildcard before the command runs. The way to protect against that is to put the variable containing the * character in quotes AGAIN when you dereference it in the script.
But if you put the whole command find . -name "*.sh" in a variable and then quote it when you run it, as in echo $("$cmd"), the shell will interpret the entire line as the name of a single program to execute, and you get a command-not-found error.
So it really depends on what you really need in the variable and what can be pulled out of it. If you need the program in the variable, this will work:
#!/bin/bash
cmd='/usr/bin/find'
$cmd . -name "*.sh" -maxdepth 1
This will find all the files in the current working directory that end in .sh without having the shell interpolate the wildcard.
If you need the pattern to be in a variable, you can use:
#!/bin/bash
pattern="*.sh"
/usr/bin/find . -name "$pattern" -maxdepth 1
But if you put the whole thing in a variable, you won't get what you expect. Hope this helps. If a bash guru knows something I'm missing I'd love to hear it.
I'm currently working on a small cute shell script to loop through a specific folder and only output the files inside it, excluding any eventual directories. Unfortunately I can't use find as I need to access the filename variables.
Here's my current snippet, which doesn't work:
for filename in "/var/myfolder/*"
do
if [ -f "$filename" ]; then
echo $filename # Is file!
fi
done;
What am I doing wrong?
You must not quote /var/myfolder/*; that is, you must remove the double quotes in order for the expression to be correctly expanded by the shell into the desired list of file names.
What you're doing wrong is not using find. The filename can be retrieved by using {}.
find /var/myfolder -maxdepth 1 -type f -exec echo {} \;
Try it without the double quotes around /var/myfolder/* (by putting double quotes there, you make all the files into one single string instead of a separate string per filename).
for filename in "/var/myfolder/*"
The quotes mean you get one giant string from that glob -- stick an echo "$filename" immediately before the if to discover that it only goes through the 'loop' once, with a value that isn't useful.
Remove the quotes and try again :)
You can use find and avoid all these hassles.
for i in $(find /var/myfolder -type f)
do
echo "$(basename "$i")"
done
Isn't this what you're trying to do in your situation? If you want to restrict the depth, use find's -maxdepth option.
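Be aware, though, that iterating over $(find ...) splits names on whitespace, which is exactly the problem this question is about. A sketch that survives spaces in file names (same find options assumed):
find /var/myfolder -maxdepth 1 -type f | while IFS= read -r i
do
echo "$(basename "$i")"
done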
I have a simple test bash script which looks like that:
#!/bin/bash
cmd="rsync -rv --exclude '*~' ./dir ./new"
$cmd # execute command
When I run the script, it also copies the files ending with a ~, even though I meant to exclude them. When I run the very same rsync command directly from the command line, it works! Does someone know why, and how to make the bash script work?
Btw, I know that I can also work with --exclude-from but I want to know how this works anyway.
Try eval:
#!/bin/bash
cmd="rsync -rv --exclude '*~' ./dir ./new"
eval $cmd # execute command
The problem isn't that you're running it in a script, it's that you put the command in a variable and then run the expanded variable. And since variable expansion happens after quote removal has already been done, the single quotes around your exclude pattern never get removed... and so rsync winds up excluding files with names starting with ' and ending with ~'. To fix this, just remove the quotes around the pattern (the whole thing is already in double-quotes, so they aren't needed):
#!/bin/bash
cmd="rsync -rv --exclude *~ ./dir ./new"
$cmd # execute command
...speaking of which, why are you putting the command in a variable before running it? In general, this is a good way to make code more confusing than it needs to be, and to trigger parsing oddities (some even weirder than this). So how about:
#!/bin/bash
rsync -rv --exclude '*~' ./dir ./new
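If you genuinely need to build the command up before running it, a bash array preserves the quoting where a plain string cannot; a sketch mirroring the command above:
cmd=(rsync -rv --exclude '*~' ./dir ./new)
"${cmd[@]}" # each array element is passed as exactly one argument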
You can use a simple --exclude '*~' because, according to the man page:
if the pattern starts with a / then it is anchored to a particular spot in the hierarchy of files, otherwise it is matched against the end of the pathname. This is similar to a leading ^ in regular expressions. Thus "/foo" would match a name of "foo" at either the "root of the transfer" (for a global rule) or in the merge-file's directory (for a per-directory rule). An unqualified "foo" would match a name of "foo" anywhere in the tree because the algorithm is applied recursively from the top down; it behaves as if each path component gets a turn at being the end of the filename. Even the unanchored "sub/foo" would match at any point in the hierarchy where a "foo" was found within a directory named "sub". See the section on ANCHORING INCLUDE/EXCLUDE PATTERNS for a full discussion of how to specify a pattern that matches at the root of the transfer.

if the pattern ends with a / then it will only match a directory, not a regular file, symlink, or device.

rsync chooses between doing a simple string match and wildcard matching by checking if the pattern contains one of these three wildcard characters: '*', '?', and '['.

a '*' matches any path component, but it stops at slashes.

use '**' to match anything, including slashes.
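To make those rules concrete, a few illustrative invocations (directory names made up):
rsync -rv --exclude '*~' ./dir ./new    # unanchored: matches backup files anywhere in the tree
rsync -rv --exclude '/foo' ./dir ./new  # leading /: only foo at the root of the transfer
rsync -rv --exclude 'foo/' ./dir ./new  # trailing /: only directories named foo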