What is the difference between ls -la and ls -l in Windows?

ls -la and ls -l both provide more information than the ls command. However, as their outputs are very similar, I'm not clear what the difference between the two commands is.
What is the difference between ls -la and ls -l in git bash?

The two are not the same: in the first case (ls -l) hidden files/folders will not be listed, while in the second case (ls -la) hidden files/folders will be shown. Here is the meaning of the flags, from the ls man page:
-l          use a long listing format
-a, --all   do not ignore entries starting with .
In other words, -a means show hidden files/directories as well.
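A rough illustration (the file names, sizes and dates here are made up): in a directory containing a hidden file .profile and a regular file notes.txt, the two commands would print something like
$ ls -l
total 4
-rw-r--r-- 1 user user   12 Jan  1 10:00 notes.txt
$ ls -la
total 12
drwxr-xr-x 2 user user 4096 Jan  1 10:00 .
drwxr-xr-x 9 user user 4096 Jan  1 09:00 ..
-rw-r--r-- 1 user user   42 Jan  1 10:00 .profile
-rw-r--r-- 1 user user   12 Jan  1 10:00 notes.txt
Only ls -la shows the dot entries (., .. and .profile).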

Related

Why don't wildcards pass through xargs?

The problem I'm facing is that, for some reason, a wildcard doesn't pass through xargs to the command properly.
Assume that we have a file file1 with the following content:
-l
-a
f*
We need to pass these arguments to ls via xargs:
cat file1 | xargs -n3 ls
The output is equivalent to that of the ls -la command, plus an additional message from the terminal:
ls: cannot access f*: No such file or directory
But the file is in the directory (ls -la f* returns the expected output).
If we substitute file1 for f*, for example, we get the right output too.
Can you explain why this happens? Thanks.
EDIT1:
It seems worth adding how we can pass the arguments from file file1 through the shell interpreter to the ls command. Below is an example:
ls `xargs -n3 < file1`
Now shell expansion is performed before the ls invocation, resulting in the same output as for ls -la f*.
The f* expression is also known as a shell glob, and is supposed to be interpreted by the shell. You can try it out independently e.g. by running echo f*.
When you run ls -la f* from the shell, the shell interprets it according to your directory contents, and calls ls with the expanded version, like: ls -la file1 file2 file3. You can get some commands confused about this if it matches no file and the command expects one.
But when you pass that argument to ls through xargs, the shell doesn't get a chance to expand it, and ls feels like it's invoked exactly as ls -la f*. It recognizes -la as options, and f* as a filename to list. That file happens not to exist, so it fails with the error message you see.
You could get it to fail on a more usual filename:
$ ls non_existing_file
ls: cannot access non_existing_file: No such file or directory.
Or you could get it to succeed by actually having a file of that name:
$ touch 'f*'
$ xargs ls -la <<< 'f*'
-rw-rw-r-- 1 jb jb 0 2013-11-13 23:08 f*
$ rm 'f*'
Notice how I had to use single quotes so that the shell would not interpret f* as a glob when creating and deleting it.
Unfortunately, you can't get it expanded when it is passed directly from xargs to ls, since there's no shell involved there.
The contents of file are passed to xargs via standard input, so the shell never sees them to process the glob. xargs then passes them to ls, again without the shell ever seeing them, so f* is treated as a literal 2-character file name.
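If you do want the glob expanded, one workaround (a sketch, not the only way) is to re-introduce a shell on the xargs side, so that expansion happens again after xargs hands the arguments over:
$ xargs sh -c 'ls $*' sh < file1
Here sh receives -l, -a and f* as its positional parameters, and the unquoted $* is field-split and glob-expanded by that inner shell before ls runs, so the effect is the same as ls -la file1. This is essentially the same idea as the backtick version in the question's edit, and it shares the usual caveat that file names containing spaces would be split.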

ssh to another server and list only the directories inside a directory

dev1.ab.com is the current server. I have to ssh to tes1.ab.com, move to the path /spfs/tomcat/dir and list all the directories (no files) present there.
For this I am using the below command from the command line of dev1.ab.com:
ssh tes1.ab.com " ls -1d / /spfs/tomcat/dir "
but in the output I am getting the directories of the home directory, which is /spfs/tomcat. I have also tried
ssh tes1.ab.com " ls -lrt /spfs/tomcat/dir | ls -1d /"
ssh tes1.ab.com " cd /spfs/tomcat/dir | ls -1d /"
and ended up with nearly the same result.
Can someone help me write a command on the command line of dev1.ab.com that will ssh to tes1.ab.com and output the list of directories present in the /spfs/tomcat/dir directory?
Use find instead of ls.
ssh tes1.ab.com "find /my/folder -type d"
Add -maxdepth 1 to list only the first level.
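Putting the two together for the path from the question (just a sketch):
ssh tes1.ab.com "find /spfs/tomcat/dir -maxdepth 1 -type d"
Note that find also prints /spfs/tomcat/dir itself; if your find supports it, add -mindepth 1 to suppress that.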
You could use the following:
ssh tes1.ab.com 'ls -ld /spfs/tomcat/dir/*/'
to list all the directories alone inside /spfs/tomcat/dir

How to get the total number of files in 3 directories?

How to write a single-line command line invocation that counts the total number of files in the directories /usr/bin, /bin and /usr/doc ?
So far, what I can think of is to use
cd /usr/bin&&ls -l | wc -l
but I don't know how to add them together, something like:
(cd /usr/bin&&ls -l | wc -l) + (cd /bin&&ls -l | wc -l)
Maybe there is a better way to do it, like getting all the stdout of each directory and then piping it to wc -l.
Any ideas?
How about using the find command + wc -l?
find /usr/bin /bin /usr/doc -type f |wc -l
Using ls for multiple directories in conjunction with wc is a little more succinct:
ls /usr/bin /bin /usr/doc | wc -l
Assuming bash or similarly capable shell, you can use an array:
files=(/usr/bin/* /bin/* /usr/doc/*)
num=${#files[@]}
This technique will correctly handle filenames that contain newlines.
As Kent points out, find may be preferred as it will ignore directory entries. Tweak it if you want symbolic links.
A -maxdepth, if your find supports it, is needed unless you want to recurse into any unexpected subdirectories. Also throw away stderr in case one of the directories is not present for some odd reason:
find /usr/bin /bin /usr/doc -maxdepth 1 -type f 2>/dev/null | wc -l

How can I add certain commands to .bash_history permanently?

I use bash and I sometimes have to type some very long commands for certain tasks. These long commands are not regularly used and generally get overwritten in the .bash_history file by the time I need them again. How can I add certain commands to .bash_history permanently?
Thanks.
As others have mentioned, the usual way to do that would be to store your long commands as either aliases or bash functions. A nice way to organise them would be to put them all in a file (say $HOME/.custom_funcs) and then source it from .bashrc.
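A minimal sketch of that setup (the file name comes from the suggestion above; the function itself is just a made-up example):
# $HOME/.custom_funcs -- hypothetical example content
count_big_logs() {
    # count regular files larger than 10 MB directly under /var/log
    find /var/log -maxdepth 1 -type f -size +10M | wc -l
}
# and in $HOME/.bashrc:
source "$HOME/.custom_funcs"
After opening a new shell (or running source ~/.bashrc), count_big_logs works like any other command.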
If you really really want to be able to load commands into your bash history, you can use the -r option of the history command.
From the man page:
-r Read the current history file and append its contents to the history list.
Just store all your entries in a file, and whenever you need your custom history loaded simply run history -r <your_file>.
Here's a demo:
[me@home]$ history | tail # see current history
1006 history | tail
1007 rm x
1008 vi .custom_history
1009 ls
1010 history | tail
1011 cd /var/log
1012 tail -f messages
1013 cd
1014 ls -al
1015 history | tail # see current history
[me@home]$ cat $HOME/.custom_history # content of custom history file
echo "hello world"
ls -al /home/stack/overflow
(cd /var/log/messages; wc -l *; cd -)
[me@home]$ history -r $HOME/.custom_history # load custom history
[me@home]$ history | tail # see updated history
1012 tail -f messages
1013 cd
1014 ls -al
1015 history | tail # see current history
1016 cat .custom_history
1017 history -r $HOME/.custom_history
1018 echo "hello world"
1019 ls -al /home/stack/overflow
1020 (cd /var/log/messages; wc -l *; cd -)
1021 history | tail # see updated history
Note how entries 1018-1020 weren't actually run but instead were loaded from the file.
At this point you can access them as you normally would, using the history or ! commands, or the Ctrl+r shortcut and the like.
How about just extending the size of your bash history file with the shell variable HISTFILESIZE? Instead of the default 500, make it something like 2000.
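For example, in your ~/.bashrc (2000 is just an illustrative value; HISTFILESIZE limits the history file on disk, and you will usually want to raise HISTSIZE, the in-memory list, as well):
HISTSIZE=2000
HISTFILESIZE=2000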
The canonical answer is to create scripts containing these commands.
Edit: Supposing you have the following history entries:
find /var/www -name '*.html' -exec fgrep '<title>' {} /dev/null \;
find ~/public_html -name '*.php' -exec fgrep include {} /dev/null \;
... you can try to isolate the parameters into a function, something like this:
r () {
    find "$1" -name "*.$2" -exec fgrep "$3" {} /dev/null \;
}
... which you could use like this, to repeat the history entries from above:
r /var/www html '<title>'
r ~/public_html php include
Obviously, it is then not a big step to create a proper script with defaults, parameter validation, etc. (Hint: you could usefully default to the current directory for the path, and no extension for the file name, and add options like --path and --ext to override the defaults when you want to; then there will be only one mandatory argument to the script.)
Typically, you would store the script in $HOME/bin and make sure this directory is added to your PATH from your .profile or similar. For functions, these are usually defined in .profile, or a separate file which is sourced from this file.
Having it in a central place also helps develop it further; for example, for precision and perhaps some minor added efficiency, you might want to add -type f to the find command; now there is only one place to remember to edit, and you will have it fixed for good.
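A rough sketch of what such a script might look like, following the suggestions above (the option names and defaults are the ones proposed, but the implementation is deliberately minimal; treat it as a starting point rather than the author's actual script):
#!/bin/sh
# r: fgrep a pattern in files under a path, optionally filtered by extension
path=.        # default: current directory
ext=          # default: no extension filter
while [ $# -gt 0 ]; do
    case "$1" in
        --path) path=$2; shift 2 ;;
        --ext)  ext=$2;  shift 2 ;;
        *)      break ;;
    esac
done
if [ $# -ne 1 ]; then
    echo "usage: $0 [--path DIR] [--ext EXT] PATTERN" >&2
    exit 1
fi
find "$path" -type f -name "*${ext:+.$ext}" -exec fgrep "$1" {} /dev/null \;
Saved as, say, $HOME/bin/r and made executable, the two history entries above become r --path /var/www --ext html '<title>' and r --path ~/public_html --ext php include.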
You can create an alias for your command (and put it in your .bashrc so it survives new sessions):
alias myalias='...a very long command here...'

Pipe ls output to get path of all directories

I want to list all directories (ls -d *) in the current directory and print all their full paths. I know I need to pipe the output to something, but I'm just not sure what. I don't know if I can pipe the output to pwd or something.
The desired result would be the following.
$ cd /home/
$ ls -d *|<unknown>
/home/Directory 1
/home/Directory 2
/home/Directory 3
<unknown> being the part which needs to pipe to pwd or something.
My overall goal is to create a script which will allow me to construct a command for each full path supplied to it. I'll type build and internally it will run the following command for each:
cd <full directory path>; JAVA_HOME=jdk/Contents/Home "/maven/bin/mvn" clean install
Try simply:
$ ls -d $PWD/*/
Or
$ ls -d /your/path/*/
find `pwd` -maxdepth 1 -type d -name '[^.]*'
Note: the above command works in bash or sh, not in csh. (Bash is the default shell on Linux and Mac OS X.)
ls -d $PWD/* | xargs -I{} echo 'cd {}; JAVA_HOME=jdk/Contents/Home "/maven/bin/mvn" clean install' >> /foo/bar/buildscript.sh
will generate the script for you.
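An alternative sketch that loops over the directories directly instead of generating a script file (the mvn invocation is copied verbatim from the question):
for d in "$PWD"/*/; do
    (cd "$d" && JAVA_HOME=jdk/Contents/Home "/maven/bin/mvn" clean install)
done
The subshell around cd keeps the loop's own working directory unchanged.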
Might I also suggest using -- within your ls construct, so that ls -d $PWD/*/ becomes ls -d -- $PWD/*/ (with an extra -- inserted)? This will help with those instances where a directory or filename starts with the - character:
/home/-dir_with_leading_hyphen/
/home/normal_dir/
In this instance, ls -d */ results in:
ls: illegal option -- -
usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]
However, ls -d -- */ will result in:
-dir_with_leading_hyphen/ normal_dir/
And then, of course, you can use the script indicated above (so long as you include the -- any time you call ls).
No piping necessary:
find $(pwd) -maxdepth 1 -type d -printf "%H/%f\n"
To my surprise, a command substitution in the printf format works too:
find -maxdepth 1 -type d -printf "$(pwd)/%f\n"
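Both variants also emit an entry for the starting directory itself; with GNU find you can add -mindepth 1 to skip it:
find "$PWD" -mindepth 1 -maxdepth 1 -type d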
