Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I'd like to create the directory "Dir (A/B)" in "test" folder in one go with the following command:
$ mkdir -vp "test/dir (A/B)"
test
test/dir (A
test/dir (A/B)
Unfortunately it's creating 'dir (A' in 'test'.
I've tried to escape it, but without success, e.g. mkdir -vp "test/dir (A\/B)".
When creating manually in Finder, it works.
How should I escape the arguments? Thanks.
I'm using bash shell.
Do:
$ mkdir -vp "test/dir (A:B)"
The directory will appear as dir (A/B) in Finder and file open dialogs, but dir (A:B) in shell and other Unix applications.
Note that this is very Mac-specific, it won't work on other flavors of Unix.
Although I would not recommend this, you can create a file name like this:
mkdir 'test:dir (A:B)'
# when creating missing folders
mkdir -pv 'test/dir (A:B)'
In the Finder it will show as "test/dir (A/B)", but if you look in the bash shell (ls -al), you will see "test:dir (A:B)".
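If you want to check the round trip from the shell, a quick sketch (the Finder rendering itself can only be verified in the GUI, and only on macOS):

```shell
# Create the nested directory using ':' where Finder will show '/'
mkdir -p 'test/dir (A:B)'

# The shell sees the literal colon:
ls test    # -> dir (A:B)
```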
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Bash, and probably other shells too, ignores any errors by default and just continues with the next command. I wonder why the shell was designed that way. After all, one would normally want to abort a script in which every command needs the state produced by the previous one.
I don't know the exact reasons, but probably something like:
Every check takes extra time. For better performance there is no additional check after every command, and no popup asking "Are you sure? [Yes] [No] [Ignore]".
You are afraid of code like
cd /
ls
cd $HOME/temp;
rm -rf *
Terrible when you do not have a temp dir: the cd fails, you are still in /, and rm -rf * wipes the root (imagine a script made by a normal user and executed by root)!
Anybody who has root access must be aware of the responsibility and the dangers that come with it. That is why you shouldn't execute scripts you don't trust (and shouldn't have the current dir in your PATH). And the person who wrote that script is at fault as well: without checks on $?, the script should be changed into something like
cd / && ls
cd "${HOME}"/temp && rm -rf *
or
cd / && ls
if [ "${#HOME}" -gt 1 ]; then
    rm -rf "${HOME}"/temp/*
fi
Are these examples not proof that exit-on-error would be better? I do not think so. Even if the shell did exit on errors, things can still go terribly wrong when you don't check everything:
cd /
ls
rm -rf /$HOME/temp/*
When HOME is set to / or to a string with a space (or ending with a space), that last command may well succeed without any error, while deleting something else entirely. Always triple-check your scripts; you are working with power.
Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
Closed 8 years ago.
I want my bash script to copy the files inside the foo directory into the baz directory.
When I run this command in the terminal, it achieves what I expect:
cp -r /foo/. /baz
But when I save it as a bash script:
#!/bin/bash
cp -r /foo/. /baz
And run:
./script.sh
Then it unexpectedly copies the foo directory itself into baz (rather than only the files in foo).
What am I doing wrong? Why is this happening? How do I fix the bash script?
Edit - bad question. I ran an old version of my script without noticing. Everything does work as expected. The answers still helped me with alternative solutions.
Use rsync instead. With a trailing slash on the source, it copies the directory's contents rather than the directory itself:
rsync -r /foo/ /baz
Change the content of the script to:
#!/bin/bash
cp -r /foo/* /baz
To be honest, I'm not sure why you ran into this issue; it works fine for me. Still, the asterisk seems more explicit. Which OS are you running?
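Note, though, that the two forms are not quite equivalent: /foo/. copies everything including hidden files, while /foo/* expands only to non-hidden entries. A small sketch (run in a scratch directory; foo and baz are just the question's example names):

```shell
# Run in a scratch directory; foo/baz are the question's example names.
mkdir -p foo baz1 baz2
touch foo/visible foo/.hidden

cp -r foo/. baz1    # copies the contents of foo, dot files included
cp -r foo/* baz2    # also copies contents, but the glob skips dot files

ls -A baz1          # shows both .hidden and visible
ls -A baz2          # shows only visible
```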
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Suppose I execute the following commands:
$ mkdir -p a/b
$ ln -s a/b c
$ cd c
Then, in directory c, why does ls .. display the contents of directory a, but cd .. returns to the original directory?
The shell distinguishes between two types of paths: physical paths, which reflect the actual layout of folders on disk, and logical paths, which take into account symbolic links. When you changed your working directory to c (instead of a/b), the shell knows that the logical path to the current directory is ~/c (assuming a is in your home directory), and that the physical path is ~/a/b.
In your example, ls shows the contents of a because .. is an actual file system entry for the physical parent directory of c. The working directory, on the other hand, is a shell concept, and cd is a shell built-in command. The shell knows that although c is just another name for a/b, the working directory is specifically c, not a/b. Therefore, it parses .. logically instead of physically.
The POSIX standard specifies -L and -P options to the cd command to let you say explicitly which kind of path to follow. In your example, cd c; cd -P .. should put you in ~/a instead of ~.
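You can watch the logical/physical distinction directly with pwd, which takes the same -L/-P options (a sketch using the directories from the question):

```shell
mkdir -p a/b      # the layout from the question
ln -s a/b c
cd c
pwd               # logical path, ends in .../c
pwd -P            # physical path, ends in .../a/b
cd -P ..          # follow the physical parent
pwd               # now ends in .../a
```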
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a program, called carmel, which I can run from the command line via:
carmel -h
or whichever suffix I chose. When loading a file, I can say:
carmel fsa1.fst, where fsa1.fst is located in my home folder, /Users/adam/.
I would prefer to have the default file location be, e.g., /Users/adam/carmel/files, and would prefer to not type that in every time. Is there a way to let UNIX know, when I type carmel to then look in that location?
There is no standard Unix shortcut for this behaviour. Some applications will check an environment variable to see where their files are, but looking at carmel/src/carmel.cc on GitHub, I'd say you'd have to write a wrapper script, like this:
#!/usr/bin/env bash
# Save as ${HOME}/bin/carmel and ensure ${HOME}/bin is before
# ${carmel_bin_dir} in your ${PATH}. Also ensure this script
# has the executable bit set.
carmel_bin_dir=/usr/local/bin # TODO change this?
working_directory=${CARMEL_HOME:-${HOME}/carmel/files}
if [[ ! -d "${working_directory}" ]]; then
    echo "${working_directory} does not exist. Creating."
    mkdir -p "${working_directory}" || { echo "Failed to create ${working_directory}" >&2; exit 1; }
fi
pushd "${working_directory}" > /dev/null
echo "Launching ${carmel_bin_dir}/carmel $* from $(pwd)..."
"${carmel_bin_dir}/carmel" "$@"
popd > /dev/null
Alternatively, since the source is freely available, you could add some code to read ${CARMEL_HOME} (or similar) and submit this as a pull request.
Good luck!
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Have I found a bug in bash?
I have created a folder named Test
cd Test/
rm -rf ../Test (deleted the working directory while I was still inside it)
Not a bug, and not related to bash either. Your current working directory (and every environment variable in your shell that holds path info) is simply pointing to a filesystem node that has been orphaned. Listing it gives you what's in the node, which is nothing, because . and .. are gone (because it's orphaned). Note that rm removes everything in the directory before orphaning the node; thus, ls gives you nothing.
Also note that when you try to create a file while inside the deleted directory with something like touch blah or mkdir blah, it'll give you a "No such file or directory" error.
"orphaned" may not be the correct term, I'm simply using it to mean that it has no parent node.