I have an assignment where, using only bash one-liners, I must ls the directories in my home directory that do not follow a specific naming schema. In my home directory, some directories have names consisting of 3 lowercase alphabetical letters followed by 3 decimal digits. However, there are other directories that don't follow this format. I must list those directories and send the output to a txt file. Here are some commands I have written so far and am experimenting with:
ls /home -1 | sed [^a-z][^a-z][^a-z].[^0-9][^0-9][^0-9]
ls /home -1 "[^[a-z][a-z][a-z][0-9][0-9][0-9]]"
ls /home -1 *{[^a-z][^a-z][^a-z].[^0-9][^0-9][^0-9]}*
Also before anyone asks, I know formatting and searching through the output of the ls command is not as effective as the find command. But the assignment that I am working on dictates that I may only use these commands: ls, ps, sed, cut, paste, sort, tr, grep, awk, cat, uniq
If you can use shopt -s extglob first, then
ls -1d /home/!([a-z][a-z][a-z][0-9][0-9][0-9])
If not,
ls -1d /home/* | grep -v '/home/[a-z][a-z][a-z][0-9][0-9][0-9]$'
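Since the assignment also wants the result written to a txt file, either command can simply be redirected; for example (the output filename is just a placeholder):
ls -1 /home | grep -v '^[a-z][a-z][a-z][0-9][0-9][0-9]$' > nonconforming.txt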
PLEASE read the manual pages for ls, grep, and shopt.
If you don't understand why these work, you haven't learned anything, and we're just doing your work for you...
If I do
$ ls -l --color=always
I get a list of files inside the directory with some nice colouring for different file types etc..
Now, I want to be able to pipe the coloured output of ls through grep to filter out some files I don't need. The key is that I still want to preserve the colouring after the grep filter.
$ ls -l --color=always | grep -E some_regex
^ I lose the colouring after grep
EDIT: I'm using headless-server Ubuntu 8.10, Bash 3.2.39, pretty much a stock install with no fancy configs
Your grep is probably removing ls's color codes because it has its own coloring turned on.
You "could" do this:
ls -l --color=always | grep --color=never pattern
However, it is very important that you understand what exactly you're grepping here. Not only is grepping ls unnecessary (use a glob instead); in this particular case you are grepping not just through filenames and file stats, but also through the color codes added by ls!
The real answer to your question is: don't grep it. There is never a need to pipe ls into anything or capture its output. ls is only intended for human interpretation (e.g. to look at in an interactive shell, where it is extremely handy, of course). As mentioned before, you can filter which files ls enumerates by using globs:
ls -l *.txt # Show all files with filenames ending with `.txt'.
ls -l !(foo).txt # Show all files with filenames that end on `.txt' but aren't `foo.txt'. (This requires `shopt -s extglob` to be on, you can put it in ~/.bashrc)
I highly recommend you read these two excellent documents on the matter:
Explanation of the badness of parsing ls: http://mywiki.wooledge.org/ParsingLs
The power of globs: http://mywiki.wooledge.org/glob
You should check if you are really using the "real" ls, just by directly calling the binary:
/bin/ls ....
Because: The code you described really should work, unless ls ignores --color=always for some weird reason or bug.
I suspect some alias or function that adds (directly or through a variable) some options. Double-check that this isn't the case.
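A quick way to check, assuming you're in bash, is to ask the shell what ls resolves to and to bypass any alias explicitly:
type ls    # reports whether ls is an alias, a function, or /bin/ls
command ls -l --color=always | grep --color=never pattern    # `command` skips aliases and functions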
New to UNIX; currently learning UNIX via secure shell in a class. We've been given a few basic assignments such as creating loops and finding files. Our last assignment asked us to
write code that will estimate the number of shell scripts in the current directory and then print out that total number as "Estimated number of shell script files in this directory:"
Unlike in our previous assignments, we are not allowed to use conditional loops; we are encouraged to use grep and wc statements.
On a basic level I know I can enter
ls *.sh
to find all shell scripts in the current directory. Unfortunately, this doesn't estimate the total number or use grep. Hence my question: I imagine he wants us to go
grep -f .sh (or something)
but I'm not exactly sure if I am on the right path and would greatly appreciate any help.
Thank You
You can do it like:
echo "Estimated number of shell script files in this directory:" `ls *.sh | wc -l`
I'd do it this way:
find . -executable -execdir file {} + | egrep '\.sh: | Bourne| bash' | wc -l
Find all files in the current directory (.) which are executable.
For each file, run the file(1) command, which tries to guess what type of file it is (not perfect).
Grep for known patterns: filenames ending with .sh, or file types containing "Bourne" or "bash".
Count lines.
Huhu, there's a trap: .sh files are not always shell scripts, as the extension is not mandatory.
What tells you a file is a shell script is the shebang #!/bin/*sh (I put a * as it could be bash, csh, tcsh, or zsh, which are all shells) on the first line, hence the hint to use grep. So the best answer would be:
grep '^#!/bin/.*sh' * | wc -l
This gives output like:
sensible-pager:#!/bin/sh
service:#!/bin/sh
shelltest:#!/bin/bash
smbtar:#!/bin/sh
grep works with regular expressions by default, so the pattern ^#!/bin/.*sh matches lines starting (the ^) with #!/bin/, followed by zero or more characters (.*), followed by sh.
You can test regexes and get explanations of them at http://regex101.com
Piping the result to wc -l gives the number of files containing such a line.
To display the result, backticks or $() in an echo line is ok.
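For example, combining the pieces above into one line (using grep -l so each matching file is counted once; warnings about subdirectories go to stderr and don't affect the count):
echo "Estimated number of shell script files in this directory: $(grep -l '^#!/bin/.*sh' * | wc -l)"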
grep -l <string> *
will return a list of all files in the current directory that contain <string>. Pipe that output into wc -l and you have your answer.
Easiest way:
ls | grep .sh > tmp
wc tmp
That will print the number of lines, words and bytes of the 'tmp' file. But in 'tmp' there's a line for each *.sh file in your working directory, so the number of lines gives an estimate of the number of shell scripts you have.
wc tmp | awk '{print $1}' # Use awk to pick out just the line count...
wc -l tmp # ...or use wc -l, which returns the number of lines followed by the file name
But as many people say, the only certain way to know whether a file is a shell script is to look at its first line and see whether there is a shebang like #!/bin/bash. If you want to develop it that way, keep in mind:
cat possible_script.x | head -n1 # That will give you the first line.
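If you go down that route, a minimal sketch of the per-file test (possible_script.x is just a placeholder name):
head -n1 possible_script.x | grep -c '^#!/bin/.*sh'    # prints 1 if the first line is a shell shebang, 0 otherwise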
Possible Duplicate:
How can I use inverse or negative wildcards when pattern matching in a unix/linux shell?
I've read the man page for ls, and I can't find an option to list everything that does not match a given file selector. Do you know how to perform this operation?
For example:
Let's say my directory looks like this:
> ls
a.txt b.mkv c.txt d.mp3 e.flv
Now I would like to do something that does the following
> ls -[SOME_OPTION] *.txt
b.mkv d.mp3 e.flv
Is there such an option?
If not, is there a way to pipe the output of ls to another function (possibly sed) that shows only the ones that I would like?
I don't know exactly how to do this, but I'm imagining it would be something like:
> ls | sed [SOMETHING]
I really should learn how to use sed, awk, and grep, but I keep getting stuck at understanding how to write the regexes. I understand the concept of regular expressions clearly, but I get confused between regexes that use different syntaxes.
Any help would be much appreciated!
EDIT:
I forgot to mention that I am running Mac OS X, so the tools may behave slightly differently from the ones discussed in other answers for the unix/linux shell (hence some of my confusion with sed, awk, and grep).
This may help you:
ls --ignore='*.txt'
It will not display the .txt files in your directory.
Maybe this command will help you:
find ./ -maxdepth 1 ! -path "*txt"
ls | grep -v '\.txt'
Does this help?
ls just lists what arguments it is presented with. *.txt gets expanded to a.txt c.txt before ls sees it, try echo *.txt.
To do what you ask with sed you can delete the pattern from its input, for example:
ls | sed '/\.txt$/d'
Would delete all lines ending with .txt.
With bash and zsh you can have the shell do the inverted expansion. With bash it would be:
ls !(*.txt)
zsh:
ls *~*.txt
Note that both shells need the extended glob option to be enabled, shopt -s extglob with bash and setopt extendedglob with zsh.
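Putting it together (a sketch; note that in a bash script the option has to be enabled before the line using the pattern is parsed, while interactively this is not an issue):
shopt -s extglob     # bash
ls !(*.txt)
setopt extendedglob  # zsh
ls *~*.txt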
One way using find:
find . -maxdepth 1 -type f -not -name "*.txt" -printf "%f\n"
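Note that -printf is a GNU find extension; since the question mentions Mac OS X, a rough BSD-compatible sketch is:
find . -maxdepth 1 -type f ! -name '*.txt' -exec basename {} \;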
I know you can do it with a find, but is there a way to send the output of ls to mv in the unix command line?
ls is a tool used to DISPLAY some statistics about filenames in a directory.
It is not a tool you should use to enumerate filenames and pass them to another tool for use there. Parsing ls is almost always the wrong thing to do, and it is broken in many ways.
For a detailed document on the badness of parsing ls, which you should really go read, check out: http://mywiki.wooledge.org/ParsingLs
Instead, you should use either globs or find, depending on what exactly you're trying to achieve:
mv * /foo
find . -exec mv {} /foo \;
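If you only want the entries directly inside the current directory (and want to avoid find trying to move . itself), a slightly tighter sketch, assuming GNU mv for the -t option, is:
find . -mindepth 1 -maxdepth 1 -exec mv -t /foo {} +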
The main source of badness of parsing ls is that ls dumps all filenames into a single string of output, and there is no way to tell the filenames apart from there. For all you know, the entire ls output could be one single filename!
The secondary source of badness of parsing ls comes from the broken way in which half the world uses bash. They think for magically does what they would like it to do when they do something like:
for file in `ls` # Never do this!
for file in $(ls) # Exactly the same thing.
for is a bash builtin that iterates over arguments. And $(ls) takes the output of ls and cuts it apart into arguments wherever there are spaces, newlines or tabs. Which basically means, you're iterating over words, not over filenames. Even worse, you're asking bash to take each of those mutilated filename words and then treat them as globs that may match filenames in the current directory. So if you have a filename which contains a word which happens to be a glob that matches other filenames in the current directory, that word will disappear and all those matching filenames will appear in its stead!
mv `ls` /foo # Exact same badness as the ''for'' thing.
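You can see the breakage for yourself in an empty scratch directory (the filenames below are made up purely for the demonstration):
touch 'two words.txt' '*.txt'
for file in $(ls); do echo "[$file]"; done
# prints [*.txt] [two words.txt] [two] [words.txt] instead of the two real names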
One way is with backticks:
mv `ls *.boo` subdir
Edit: however, this is fragile and not recommended -- see @lhunath's answer for detailed explanations and recommendations.
None of the answers so far are safe for filenames with spaces in them. Try this:
for i in *; do mv "$i" some_dir/; done
You can of course use any glob pattern you like in place of *.
Not exactly sure what you're trying to achieve here, but here's one possibility:
The "xargs" part is the important piece everything else is just setup. The effect of this is to take everything that "ls" outputs and add a ".txt" extension to it.
$ mkdir xxx #
$ cd xxx
$ touch a b c x y z
$ ls
a b c x y z
$ ls | xargs -Ifile mv file file.txt
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
$
Something like this could also be achieved by:
$ touch a b c x y z
$ for i in `ls`;do mv $i ${i}.txt; done
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
$
I sort of like the second way better. I can NEVER remember how xargs works without reading the man page or going to my "cute tricks" file.
Hope this helps.
Check out find -exec {} as it might be a better option than ls but it depends on what you're trying to achieve.
/bin/ls | tr '\n' '\0' | xargs -0 -i% mv % /path/to/destdir/
"Useless use of ls", but should work. By specifying the full path to ls(1) you avoid clashes with aliasing of ls(1) mentioned in some of the previous posts. The tr(1) command together with "xargs -0" makes the command work with filenames containing (ugh) whitespace. It won't work with filenames containing newlines, but having filenames like that in the file system is to ask for trouble, so it probably won't be a big problem. But filenames with newlines could exist, so a better solution would be to use "find -print0":
find /path/to/srcdir -type f -print0 | xargs -0 -i% mv % dest/
You shouldn't use the output of ls as the input of another command. Files with spaces in their names are difficult, as is the inclusion of ANSI escape sequences, if you have:
alias ls='ls --color=always'
for example.
Always use find or xargs (with -0) or globbing.
Also, you didn't say whether you want to move files or rename them. Each would be handled differently.
edit: added -0 to xargs (thanks for the reminder)
Backticks work well, as others have suggested. See xargs, too. And for really complicated stuff, pipe it into sed, make the list of commands you want, then run it again with the output of sed piped into sh.
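As a sketch of that pattern (illustrative only; the .log/.bak names are made up, and it inherits all the filename fragility discussed in the other answers):
ls *.log | sed 's/.*/mv "&" "&.bak"/' | sh    # renames every .log file to .log.bak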
Here's an example with find, but it works fine with ls, too:
http://github.com/DonBranson/scripts/blob/f09d24629ab6eb3ce509d4d3078818430306b063/jarfinder.sh
#!/bin/bash
for i in $( ls * );
do
  mv "$i" /backup/"$i"
done
Otherwise, it's the find solution by sybreon, and as suggested NOT the mv `ls` solution.
Just use find or your shell's globbing!
find . -mindepth 1 -maxdepth 1 -exec mv {} /tmp/blah/ \;
..or..
mv * /tmp/blah/
You don't have to worry about colour in the ls output, or other piping strangeness - Linux allows basically any character in a filename except the null byte. For example:
$ touch "blah\new|
> "
$ ls | xargs file
blahnew|: cannot open `blahnew|' (No such file or directory)
..but find works perfectly:
$ find . -exec file {} \;
./blah\new|
: empty
So this answer doesn't send the output of ls to mv, but as @lhunath explained, using ls is almost always the wrong tool for the job. Use shell globs or a find command.
For more complicated cases (often in a script), using bash arrays to build up the argument list from shell globs or find commands can be very useful. One can create an array and push to it with the appropriate conditional logic. This also handles spaces in filenames properly.
For example:
myargs=()
# don't push if the glob does not match anything
shopt -s nullglob
myargs+=(myfiles*)
To push files matching a find to the array: https://stackoverflow.com/a/23357277/430128.
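One common shape for that, sketched here with a made-up pattern (read -r -d '' together with find -print0 keeps names containing spaces or newlines intact):
while IFS= read -r -d '' f; do
    myargs+=("$f")
done < <(find . -name '*.log' -print0)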
The last argument should be the target location:
myargs+=("Some target directory")
Use myargs in the invocation of a command like mv:
mv "${myargs[#]}"
Note the quoting of the array myargs to pass array elements with spaces correctly.
You surround the ls with back quotes and put it after the mv, so like this...
mv `ls` somewhere/
But keep in mind that if any of your file names have spaces in them it won't work very well.
Also it would be simpler to just do something like this: mv filepattern* somewhere/