bash ls command giving hidden folders in output [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 7 years ago.
I am not sure when this behavior started, but when I type the ls command I get the following output:
$ls
./ ../ .DS_Store Books/
I am not sure what the first three items are, and they show up in every folder. Can anyone explain how to get rid of them? I am using OS X Yosemite.

The first two, ./ and ../, are the current directory and the parent directory respectively. Every directory contains them, so you can't get rid of them. The last one, .DS_Store, is a metadata file that the macOS Finder creates in folders it has opened. You can remove it with:
rm -f .DS_Store # use -r if it's a directory
But be sure to check what it's for first!
The behaviour of ls itself is not the reason for the "extra" output. You probably have an alias, something like:
alias ls='ls -a'
in your shell. To find out exactly what it's aliased to, do:
type ls
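If an alias turns out to be the culprit, a quick way to confirm and work around it is sketched below. The `ls -a` alias here is a hypothetical example; yours may differ:

```shell
# Suppose this alias (hypothetical) is what produces the extra entries:
alias ls='ls -a'
type ls                 # reports the alias if one is defined
# Bypass the alias for a single invocation:
command ls >/dev/null   # or: \ls
# Remove it for the session, or soften it so hidden files still show
# but ./ and ../ do not (-A = "almost all", skips . and ..):
unalias ls
alias ls='ls -A'
```

The `-A` variant is usually what people actually want from an `ls -a` alias.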

Related

Bash - wait for a process (here gcc) [closed]

Closed 7 years ago.
On my Ubuntu machine I want to create a custom command for compiling a C file.
At the moment I have something like this, which does not work the way I want:
#compile the file
gcc $1 -o ~/.compile-c-output
#run the program
./~/.compile-c-output
#delete the output file
rm ~/.compile-c-output
The problem is that the run command seems to be executed before gcc is ready, so the file does not exist. How can I wait until gcc is done so that I can run the file normally?
Btw, how can I add a random number to the output file name so this script also works if I run it in two different terminals?
./~/.compile-c-output
Get rid of the leading ./. Tilde expansion only happens at the start of a word, so ./~ refers to a literal directory named ~, which doesn't exist. That's why the file isn't found. (The commands in a script run sequentially, so gcc has already finished before the next line starts; there is nothing to wait for.) The path should just be:
~/.compile-c-output
To get a random file name, use mktemp. mktemp guarantees not to overwrite existing files.
file=$(mktemp) # unspecified file name in /tmp
gcc "$1" -o "$file" && "$file"
rm "$file"
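Putting the pieces together, a minimal version of the script might look like this. It's written as a function (the name `compile_and_run` is made up for illustration), and the explicit cleanup-on-failure is an addition, not part of the original answer:

```shell
#!/bin/bash
# Compile the C file given as the first argument, run it, clean up.
compile_and_run() {
    local out status
    out=$(mktemp)           # unique temp file, safe across terminals
    gcc "$1" -o "$out" || { rm -f "$out"; return 1; }
    "$out"; status=$?
    rm -f "$out"            # remove the binary afterwards
    return "$status"
}
```

Because mktemp generates a fresh name each time, two terminals running this at once never clobber each other's output file.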

Copy only the contents of a directory into another directory [closed]

Closed 8 years ago.
I want my bash script to copy the files inside the foo directory into the baz directory.
When I run this command in the terminal, it achieves what I expect:
cp -r /foo/. /baz
But when I save it as a bash script:
#!/bin/bash
cp -r /foo/. /baz
And run:
./script.sh
Then it unexpectedly copies the foo directory itself into baz (rather than only the files in foo).
What am I doing wrong? Why is this happening? How do I fix the bash script?
Edit - bad question. I ran an old version of my script without noticing. Everything does work as expected. The answers still helped me with alternative solutions.
Use rsync with a trailing slash on the source; that copies only the contents, not the directory itself:
rsync -r /foo/ /baz
(Without the trailing slash, rsync would copy the foo directory itself into baz.)
Change the content of the script to:
#!/bin/bash
cp -r /foo/* /baz
To be honest, I'm not sure why you ran into this issue; it works fine for me. Note that the asterisk skips hidden files (dotfiles), whereas /foo/. copies them as well. Which OS are you running?
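The practical difference between the variants is what happens to hidden files and to the directory itself. A small demonstration, using throwaway temp directories rather than the /foo and /baz from the question:

```shell
# Set up a sample source tree with a regular file and a dotfile:
src=$(mktemp -d); touch "$src/file" "$src/.hidden"
dst1=$(mktemp -d); dst2=$(mktemp -d)

cp -r "$src/." "$dst1"   # copies the contents, dotfiles included
cp -r "$src"/* "$dst2"   # copies the contents, but * skips dotfiles

# rsync with a trailing slash on the source behaves like `cp -r src/.`;
# without the slash it copies the directory itself into the target:
#   rsync -r "$src/" "$dst"
```

So if the dotfiles matter, prefer the /foo/. form (or rsync with a trailing slash) over the glob.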

[-d: command not found [closed]

Closed 8 years ago.
As per this answer: Unix Bash Shell Programming if directory exists, I'm trying to check if a directory exists. However, when I run this, I get line 1: [-d: command not found. What am I doing wrong here?
if [-d "~/.ssl"]; then
echo '~/.ssl directory already exists'
else
sudo mkdir ~/.ssl/
fi
[-d
is not a command.
[ -d
is the test command with the -d option.
Space matters.
(Also, the [ command needs to end with a ] parameter, which likewise has to be separated from other arguments by whitespace.)
That's the crux of the matter. There is another issue, though: if you quote the tilde, it doesn't expand. (This is one of the rare places where you may want to avoid quotes.) Quotes are great, though, so why not write "$HOME/.ssl"? (There's a subtle difference between ~ and "$HOME", but it doesn't matter for most uses.)
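With both fixes applied (spaces around the brackets, and "$HOME" instead of a quoted tilde), the snippet becomes something like the sketch below. The sudo is dropped here on the assumption that creating a directory in your own home doesn't need it:

```shell
# Spaces around [ and ] are required: [ is a command, ] its last argument.
if [ -d "$HOME/.ssl" ]; then
    echo "$HOME/.ssl directory already exists"
else
    mkdir "$HOME/.ssl"
fi
```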
Honestly, all you really need is probably:
if mkdir -p ~/.ssl; then
# Do stuff with new directory
else
# Handle failure (but keep in mind `mkdir` will have its own error output)
fi

Is this a bug in the Bash shell [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 10 years ago.
Have I found a bug in bash?
I have created a folder named Test
cd Test/
rm -rf ../Test (deleted the working directory while I was still inside it)
Not a bug, and not related to bash either. Your current working directory (and every shell variable that holds path info) simply points to a filesystem node that has been orphaned. Listing it gives you what's in the node, which is nothing: rm removes everything in the directory, including the . and .. entries, before orphaning the node. Thus, ls prints nothing.
Also note that if you try to create a file inside the deleted directory, with something like touch blah or mkdir blah, you'll get a "No such file or directory" error.
"orphaned" may not be the correct term, I'm simply using it to mean that it has no parent node.
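The situation is easy to reproduce safely in a throwaway temp directory; a minimal sketch of the behavior described above (Linux/macOS semantics assumed):

```shell
start=$PWD
dir=$(mktemp -d)/doomed
mkdir "$dir"
cd "$dir"
rm -rf "$dir"                  # delete the directory we are standing in
listing=$(ls 2>/dev/null)      # empty: the orphaned node has no entries
touch blah 2>/dev/null \
    && created=yes || created=no   # fails: the cwd no longer exists
cd "$start"                    # escape back to a directory that exists
```

The shell survives because it holds an open reference to the inode, but nothing new can be created there.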

What's the difference between directory contents and directory entries? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 years ago.
The manpage for the ls command says that:
-d, --directory
list directory entries instead of contents.
So, what's the difference between directory contents and entries? The ls -d command in my home directory only shows:
.
What's the purpose of the -d option of ls command?
In my experience, the -d flag is most useful when you run ls with a wildcard.
In a command such as ls -l B* (the pattern must be unquoted so the shell expands it), if a directory is matched, ls will list the contents of that directory. This is obnoxious if you don't care about the contents of the subdirectories.
For example, suppose your directory structure is as follows:
/tmp/Foo
|-Bar
|---FooBar
|-Buzz
|-FizzBuzz
ls /tmp/Foo/B* will produce the following:
/tmp/Foo/Buzz
/tmp/Foo/Bar:
FooBar
ls -d /tmp/Foo/B* will produce the following:
/tmp/Foo/Bar
/tmp/Foo/Buzz
Notice that the second case is almost certainly what was intended.
If you do ls -l, you get the info for all the items (the contents) of the current directory. But what if you want to see the info for the directory itself? That is when you use ls -ld.
Of course, since plain ls just prints names, ls -d prints . (or the directory name, if given a path) and seems useless on its own.
You can also do ls -d */ to list only the directories.
-d, --directory
list directory entries instead of contents.
It simply means that whatever information is available about the directory entry itself is displayed, and not its contents.
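All of the above can be seen in one small demo; the names mirror the example tree from the earlier answer, built in a throwaway temp directory:

```shell
top=$(mktemp -d)
mkdir "$top/Bar"
touch "$top/Bar/FooBar" "$top/Buzz" "$top/FizzBuzz"
cd "$top"

ls B*        # descends into Bar: prints Buzz, then "Bar:" and FooBar
ls -d B*     # prints just the matched names: Bar and Buzz
ls -ld .     # one line of info about the current directory itself
ls -d */     # only the subdirectories: Bar/
cd - >/dev/null
```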
