csh doesn't recognize command with command-line options beginning with --

I have an rsync command in my csh script like this:
#! /bin/csh -f
set source_dir = "blahDir/blahBlahDir"
set dest_dir = "foo/anotherFoo"
rsync -av --exclude=*.csv ${source_dir} ${dest_dir}
When I run this I get the following error:
rsync: No match.
If I remove the --exclude option it works. I wrote the equivalent script in bash, and that works as expected:
#!/bin/bash
source_dir="blahDir/blahBlahDir"
dest_dir="foo/anotherFoo"
rsync -av --exclude=*.csv ${source_dir} ${dest_dir}
The problem is that this has to be done in csh only. Any ideas on how I can get this to work?

It's because csh is trying to expand --exclude=*.csv into a filename, and complaining because it cannot find a file matching that pattern.
You can get around this by enclosing the option in quotes:
rsync -rv '--exclude=*.csv' ...
or escaping the asterisk:
rsync -rv --exclude=\*.csv ...
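Putting that together, the script from the question could be written as follows (a minimal sketch, reusing the placeholder paths above):
#! /bin/csh -f
set source_dir = "blahDir/blahBlahDir"
set dest_dir = "foo/anotherFoo"
# quote the option so csh passes the pattern to rsync unexpanded
rsync -av '--exclude=*.csv' ${source_dir} ${dest_dir}
Here rsync itself matches *.csv against the files it transfers, which is exactly what --exclude is meant to do.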
This is a consequence of the way csh and bash differ in their default treatment of arguments with wildcards that don't match a file. csh will complain while bash will simply leave it alone.
You may think bash has chosen the better way but that's not necessarily so, as shown in the following transcript where you have a file matching the argument:
pax> touch -- '--file=xyzzy.csv' ; ls -- *.csv
--file=xyzzy.csv
pax> echo --file=*.csv
--file=xyzzy.csv
You can see there that the bash shell expands the argument rather than giving it to the program as is. Both sides have their pros and cons.
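For the non-matching case, the difference is just as easy to see (a quick sketch, assuming no .csv files exist in the current directory; the exact wording of the csh error varies between csh variants):
pax> csh -c 'echo --exclude=*.csv'
echo: No match.
pax> bash -c 'echo --exclude=*.csv'
--exclude=*.csv
csh aborts the command before echo ever runs, while bash hands the unmatched pattern to echo unchanged.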

Related

command substitution not working in alias?

I wanted to make an alias for launching a vim session with all the c/header/makefiles, etc loaded into the buffer.
shopt -s extglob
alias vimc="files=$(ls -A *.?(c|h|mk|[1-9]) .gitconfig [mM]akefile 2>/dev/null); [[ -z $files ]] || vim $files"
When I run the command enclosed within the quotes from the shell, it works, but when run via the alias itself, it does not. Running vimc causes vim to launch with only the first matched file (which happens to be the Makefile), and the other file names are executed as commands for some reason (unsuccessfully, of course). I tried fiddling around, and it seems that the command substitution introduces the problem, because running only the ls produces the expected output.
I cannot use xargs with vim because it breaks the terminal display.
Can anyone explain what might be causing this?
Here is some output:
$ ls
Makefile readme main.1 main.c header.h config.mk
$ vimc
main.1: command not found
main.c: command not found
.gitignore: command not found
header.h: command not found
config.mk: command not found
On a related note, would it be possible to do what I intend to do above in a "single line", i.e. without storing the file names in a variable files and checking whether it is empty, using only the output stream from ls?
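A quick way to confirm the suspicion about the command substitution (a sketch, not from the original post; it assumes the directory listing shown above): because the alias definition is double-quoted, $(ls ...) runs when the alias is defined, not when it is invoked, so the newline-separated file list is baked into the alias body.
shopt -s extglob
alias vimc="files=$(ls -A *.?(c|h|mk|[1-9]) .gitconfig [mM]akefile 2>/dev/null); [[ -z $files ]] || vim $files"
type vimc   # the alias body already contains the expanded file names, one per line
Defining the alias with single quotes (or using a shell function) defers the expansion until the alias is actually used.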

set -x and wildcard expansion

In shell scripts, our corporate coding standard requires using...
set -x
command
set +x
...for logging, rather than...
echo "doing command"
command
However, when a wildcard is part of the command, this can produce very verbose output.
For example...
for i in {1..10}; do touch $i.foo; done; # create 10 foo files
set -x # log command execution (stdout to be redirected to log file)
rm *.foo # delete foo files
set +x # end logging
...produces the output...
rm 10.foo 1.foo 2.foo 3.foo 4.foo 5.foo 6.foo 7.foo 8.foo 9.foo
Okay for 10 files, but not so great for 10,000.
The desired output is...
rm *.foo
My first thought was to put *.foo in quotes...
rm "*.foo"
However, that gives the error...
rm: cannot remove ‘*.foo’: No such file or directory
Is there a way, using set -x, to echo the command without expanding the wildcard?
For many cases where a simple '-x' or '-v' does not work (as per the comments above), and staying within your coding standard (no separate echo), consider:
VAR=/tmp/123
$SHELL -cv "ls $VAR/*"
which will execute the command and log it WITH variable and command substitution applied, but WITHOUT wildcard expansion.
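Applied to the rm example from the question (a sketch; it assumes the same *.foo files exist and that $SHELL is bash), the child shell's -v echoes the command before any globbing happens:
$ $SHELL -cv "rm *.foo"
rm *.foo
The rm *.foo line is the -v echo (written to stderr); the files are then removed as usual.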

Loop over directories and act on files in bash script

I have a script, /home/user/me/my_script.sh that is supposed to loop over multiple directories and process files. My current working directory is /home/user/me. A call to ls -R yields:
./projects:
dir1 dir2 dir3
./projects/dir1:
image1.ntf points2.csv image1.img image1.hdr
./projects/dir2:
image2.ntf points2.csv image2.img image2.hdr
./projects/dir3:
image3.ntf points3.csv image3.img image3.hdr
I have this script:
#! /bin/bash -f
for $dir in $1*
do
echo $dir
set cmd = `/home/tools/tool.sh -i $dir/*.ntf -flag1 -flag2 -flag3 opt3`
$cmd
done
This is how it is run (from cwd /home/user/me) and the result:
bash-4.1$ ./myscript.sh projects/
projects/*
bash-4.1$
This is not the expected output. The expected output is:
bash-4.1$ ./myscript.sh projects/
projects/dir1
[output from tool.sh]
projects/dir2
[output from tool.sh]
projects/dir3
[output from tool.sh]
bash-4.1$
What should happen is the script should go into the first directory, find the *.ntf file and pass it to tool.sh. At that point I would start seeing output from that tool. I have run the tool on a single file:
bash-4.1$ /home/tools/tool.sh -i /home/user/me/projects/dir1/image1.ntf -flag1 -flag2 -flag3 opt3
[expected output from tool. lengthy.]
bash-4.1$
I have tried syntax found here: How to loop over directories in Linux? and here: Looping over directories in Bash
for $dir in /$1*/
do ...
Result:
bash-4.1$ ./myscript.sh projects/
/projects/*/
And:
for $dir in $1/*
do ...
Result:
bash-4.1$ ./myscript.sh projects
projects/*
I'm not sure how many other iterations of wildcard and slash I can come up with. What is the correct syntax?
First, you should remove the -f flag from your shebang, because for bash it means:
$ man bash
[…]
-f Disable pathname expansion.
Second, there are some typical bug patterns: quotes are missing around variable expansions (write "$dir" to cope with directory names containing spaces); there is a spurious $ in your for line (write for dir in "$1"* instead); the set line is incorrect (in bash, set is a builtin used to change shell options or positional parameters, e.g. set -x, not to assign variables); and, according to your answer to #ghoti's question, the $cmd line seems unnecessary. Also, the backquote syntax is deprecated and could have been replaced with cmd=$(/home/tools/tool.sh -i "$dir"/*.ntf -flag1 -flag2 -flag3 opt3).
This would lead to the following script:
#!/bin/bash
for dir in "$1"*
do
[[ -d "$dir" ]] || continue # only consider existing folders
printf "%s=%q\n" dir "$dir"
/home/tools/tool.sh -i "$dir"/*.ntf -flag1 -flag2 -flag3 opt3
done
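With the directory layout from the question, a run should then look something like this (a sketch; the tool output is abbreviated):
bash-4.1$ ./myscript.sh projects/
dir=projects/dir1
[output from tool.sh]
dir=projects/dir2
[output from tool.sh]
dir=projects/dir3
[output from tool.sh]
bash-4.1$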
As an aside, I would recommend always running the ShellCheck static analyzer on your Bash scripts, in order to detect typical bugs and get feedback on good practices. If you use a Linux distribution, it should be installable with your standard package manager.

Execute command with backquote in bash shell script

I wrote a little shell script in bash that allows me to execute commands in sub-directories. Here is the script
bat.sh:
#!/bin/sh
for d in */; do
echo "Executing \"$#\" in $d"
cd $d
`$@`
cd ..
done
With my following directory structures
/home/user
--a/
----x.txt
----y.txt
--b/
----u.txt
----v.txt
I expect the following command to list out the content of directories a and b when it is executed in the home directory
bat.sh ls
The result is
Executing "ls" in a/
/home/user/bin/bat.sh: line 6: x.txt: command not found
Executing "ls" in b/
/home/user/bin/bat.sh: line 6: u.txt: command not found
Any idea on what is going wrong here?
You don't want the back quotes; you want double quotes.
#!/bin/sh
for d in */
do
echo "Executing \"$*\" in $d"
(cd "$d" && "$#")
done
You are trying to execute the output of the command you pass, whereas you simply want to execute the command.
The use of an explicit subshell (the ( … ) notation) may avoid some problems with symlinks that jump to other directories. It is, in my (perhaps archaic) view, a safer way to switch directories for the purposes of executing commands.
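With the directory structure shown in the question, the corrected script should then behave as intended (a sketch of the expected session):
$ bat.sh ls
Executing "ls" in a/
x.txt  y.txt
Executing "ls" in b/
u.txt  v.txt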

bash pattern with asterisk

I have the following simple bash script:
#!/bin/bash -fx
ls *sh
The problem is that bash adds quotes around the pattern and I get the wrong output.
+ ls '*sh'
ls: cannot access *sh: No such file or directory
How can I change this behavior?
The output of ls *sh from the terminal is:
$ls *sh
a.bash a.sh b.sh
I tried to add quotes according to this post - "Bash variable containing file wildcard" - but without success.
That's because you're disabling pathname expansion with the -f option.
#!/bin/bash -fx
From man:
-f
Disable filename expansion (globbing).
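A minimal fix (a sketch; it assumes -f is not needed for anything else in the script) is either to drop -f from the shebang or to re-enable globbing with set +f before the ls:
#!/bin/bash -x
ls *sh
or, keeping the original shebang:
#!/bin/bash -fx
set +f   # re-enable pathname expansion for the rest of the script
ls *sh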
