SunOS Sorting By Date - sorting

I'm currently running the following on a directory:
find `pwd` -type f -exec grep -s -i -l '$string' {} + | xargs -L 1 ls -lrthg | sort -k7 -k6
It arranges all the returned files by date (year). I'm close to what I want, but I can't get it to sort the months together in sequential day order.
I'm running on an old version of SunOS (5.10).
I can't upgrade or install anything so this has been a challenge. I also can't use perl. Help would be greatly appreciated.

Here is what you might do to achieve that with the standard Solaris commands:
find . -type f -exec grep -sil "$string" {} + | xargs -L 1 ls -Erth | sort -k 6,7
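For context, a rough sketch of why this sorts chronologically (assuming Solaris 10's /usr/bin/ls; the exact layout may differ): the Solaris-specific -E option prints a full numeric ISO-8601 timestamp, so fields 6 and 7 are the date and the time, and a plain textual sort on them is already in chronological order:
$ ls -E somefile
-rw-r--r--   1 user   staff    1234 2013-05-14 10:02:15.000000000 +0000 somefile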

Related

Bash command: head

I am trying to find all files matching dummy* in the folder named dummy. Then I need to sort them according to time of creation and get the first 10 files. The command I am trying is:
find -L /home/myname/dummy/dummy* -maxdepth 0 -type f -printf '%T# %p\n' | sort -n | cut -d' ' -f 2- | head -n 10 -exec readlink -f {} \;
But this doesn't seem to work with the following error:
head: invalid option -- 'e'
Try 'head --help' for more information.
How do I make bash not read -exec as part of the head command?
UPDATE1:
Tried the following:
find -L /home/myname/dummy/dummy* -maxdepth 0 -type f -exec readlink -f {} \; -printf '%T# %p\n' | sort -n | cut -d' ' -f 2- | head -n 10
But this is not sorted by timestamp, because both the -exec readlink output and the -printf output are printed, and sort sorts them all together.
Files in dummy are as follows:
dummy1, dummy2, dummy3 etc. This is the order in which they are created.
How do I make bash not read -exec as part of the head command?
The -exec and its subsequent arguments appear to be intended for find. The find command ends at the first |, so you would need to move those arguments ahead of it:
find -L /home/myname/dummy/dummy* -maxdepth 0 -type f -printf '%T# %p\n' -exec readlink -f {} \; | sort -n | cut -d' ' -f 2- | head -n 10
However, it doesn't make much sense to both -printf file details and -exec readlink the results. Possibly you wanted to run readlink on each filename that makes it past head. In that case, you might want to look into the xargs command, which serves exactly the purpose of converting data read from the standard input into arguments to a command. For example:
find -L /home/myname/dummy/dummy* -maxdepth 0 -type f -printf '%T# %p\n' |
sort -n |
cut -d' ' -f 2- |
head -n 10 |
xargs -rd '\n' readlink -f
I think you are over-complicating things here. Using just ls and head should get you the results you want:
ls -lt /home/myname/dummy/dummy* | head -10
To sort by ctime specifically, use the -c flag for ls:
ls -ltc /home/myname/dummy/dummy* | head -10
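If you also want the resolved paths that the original readlink -f call was meant to produce, one possible combination (assuming GNU xargs and readlink, and filenames without newlines) is:
ls -t /home/myname/dummy/dummy* | head -10 | xargs -rd '\n' readlink -f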

How do you grep results from 'find'?

Trying to find a word/pattern contained within the files returned by the find command.
For instance, I have this command:
find . -name Gruntfile.js
which returns several file names.
How do I grep within these for a word pattern?
Was thinking something along the lines of:
find . -name Gruntfile.js | grep -rnw -e 'purifycss'
However, this doesn't work.
Use the -exec {} + option to pass the list of filenames that are found as arguments to grep:
find -name Gruntfile.js -exec grep -nw 'purifycss' {} +
This is the safest and most efficient approach, as it doesn't break when the path to the file isn't "well-behaved" (e.g. contains a space). Like an approach using xargs, it also minimises the number of calls to grep by passing multiple filenames at once.
I have removed the -e and -r switches, as I don't think that they're useful to you here.
An excerpt from man find:
-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files.
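To make the difference concrete, a small sketch (same search as above):
# one grep invocation per matching file
find . -name Gruntfile.js -exec grep -nw 'purifycss' {} \;
# file names batched into as few grep invocations as possible
find . -name Gruntfile.js -exec grep -nw 'purifycss' {} +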
While this doesn't strictly answer your question, provided you have globstar turned on (shopt -s globstar), you could filter the results in bash like this:
grep something **/Gruntfile.js
I religiously used the approach described by Tom Fenech until I switched to zsh, which handles such things much better. Now all I do is:
grep text **/*(.)
which greps for the text in all regular files under the current directory.
I believe this to be much cleaner syntax especially for day-to-day work in shell.
When too many files exist for the * expansion to run:
$ grep -o 'xxmaj\|xxbos\|xxfld' train/* | wc -l
-bash: /bin/grep: Argument list too long
0
Then this code fixes the “too long” problem:
$ find junk -maxdepth 1 -type f | xargs grep -o 'TVDetails\|xxmaj\|xxbos\|xxfld'
junk/gum-.doc.out:TVDetails
junk/Zv0n.doc.out:TVDetails
$ find junk -maxdepth 1 -type f | xargs grep -o 'TVDetails\|xxmaj\|xxbos\|xxfld' | wc -l
2
It runs faster on my system, and maybe yours, when using the -P 0 option:
$ /usr/bin/time -f "%E Elapsed Real Time" find train -maxdepth 1 -type f | xargs -P 0 grep -o 'TVDetails\|xxmaj\|xxbos\|xxfld' | wc -l
0:02.45 Elapsed Real Time
358
$ /usr/bin/time -f "%E Elapsed Real Time" find train -maxdepth 1 -type f | xargs grep -o 'TVDetails\|xxmaj\|xxbos\|xxfld' | wc -l
0:11.96 Elapsed Real Time
358
Hope this helps.

Script to count number of files in each directory

I need to count the number of files in a large number of directories. Is there an easy way to do this with a shell script (using find, wc, sed, awk or similar)? Just to avoid having to write a proper script in Python.
The output would be something like this:
$ <magic_command>
dir1 2
dir2 12
dir3 5
The number after the dir name would be the number of files. A plus would be able to turn counting of dot/hidden files on and off.
Thanks!
Try the below one:
du -a | cut -d/ -f2 | sort | uniq -c | sort -nr
from http://www.linuxquestions.org/questions/linux-newbie-8/how-to-find-the-total-number-of-files-in-a-folder-510009/#post3466477
find <dir> -type f | wc -l
find <dir> -type f lists all files under the specified directory, one per line; wc -l counts the number of newlines it reads from stdin.
Also for future reference: answers like this are a google away.
More or less what I was looking for:
find . -type d -exec sh -c 'echo "{}" `ls "{}" |wc -l`' \;
Try ls | wc -l: ls lists the files in your directory, one per line, and wc -l counts those lines.
One way like this:
$ for dir in $(find . -type d )
> do
> echo $dir $(ls -A $dir | wc -l )
> done
Just remove the -A option if you do not want hidden files counted.
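Note that looping over $(find ...) splits on whitespace, so it breaks on directory names containing spaces. A rough whitespace-safe sketch (assuming GNU find and bash; it counts regular files, including hidden ones):
find . -type d -print0 | while IFS= read -r -d '' dir; do
    printf '%s %s\n' "$dir" "$(find "$dir" -mindepth 1 -maxdepth 1 -type f | wc -l)"
done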
find . -type d | xargs ls -1 | perl -lne 'if(/^\./ || eof){print $a." ".$count;$a=$_;$count=-1}else{$count++}'
Below is a test run:
> find . -type d
.
./SunWS_cache
./wicked
./wicked/segvhandler
./test
./test/test2
./test/tempdir.
./signal_handlers
./signal_handlers/part2
> find . -type d | xargs ls -1 | perl -lne 'if(/^\./ || eof){print $a." ".$count;$a=$_;$count=-1}else{$count++}'
.: 79
./SunWS_cache: 4
./signal_handlers: 6
./signal_handlers/part2: 5
./test: 6
./test/tempdir.: 0
./test/test2: 0
./wicked: 4
./wicked/segvhandler: 9
A generic version of Mehdi Karamosly's solution that lists the folders of any directory without changing the current directory:
DIR=~/test/ sh -c 'cd $DIR; du -a | cut -d/ -f2 | sort | uniq -c | sort -nr'
Explanation:
Extract the directory into a variable
Start a new shell
Change directory in that shell, so that the current shell's directory stays the same
Process
I use these functions:
nf()(for d;do echo $(ls -A -- "$d"|wc -l) "$d";done)
nfr()(for d;do echo $(find "$d" -mindepth 1|wc -l) "$d";done)
Both assume that filenames don't contain newlines.
Here are bash-only versions:
nf()(shopt -s nullglob dotglob;for d;do a=("$d"/*);echo "${#a[@]} $d";done)
nfr()(shopt -s nullglob dotglob globstar;for d;do a=("$d"/**);echo "${#a[@]} $d";done)
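Hypothetical usage, once the functions above are defined in the current shell:
nf . /tmp        # non-recursive entry counts for two directories
nfr ~/projects   # recursive count under ~/projects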
I liked the output from the du based answer, but when I was looking at a large filesystem it was taking ages, so I put together a small ls based script which gives the same output, but much quicker:
for dir in `ls -1A ~/test/`;
do
echo "$dir `ls -R1Ap ~/test/$dir | grep -Ev "[/:]|^\s*$" | wc -l`"
done
You can try copying the output of the ls command into a text file and then counting the number of lines in that file.
ls "$LOCATION" > outText.txt; NUM_FILES=$(wc -l < outText.txt); echo $NUM_FILES
find -type f -printf '%h\n' | sort | uniq -c | sort -n

Find Files having multiple links in shell script

I want to find the files which have multiple links.
I am using ubuntu 10.10.
find -type l
It shows all the symbolic links, but I want to count the links to a particular file.
Thanks.
With this command, you will get a summary of linked files:
find . -type l -exec readlink -f {} \; | sort | uniq -c | sort -n
or
find . -type l -print0 | xargs -n1 -0 readlink -f | sort | uniq -c | sort -n
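If by "multiple links" you mean hard links (the link count that ls -l shows) rather than symlinks, a sketch using find's standard -links test:
# list regular files whose hard-link count is greater than 1
find . -type f -links +1 -exec ls -li {} +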

How can I count all the lines of code in a directory recursively?

We've got a PHP application and want to count all the lines of code under a specific directory and its subdirectories.
We don't need to ignore comments, as we're just trying to get a rough idea.
wc -l *.php
That command works great for a given directory, but it ignores subdirectories. I was thinking the following command might work, but it is returning 74, which is definitely not the case...
find . -name '*.php' | wc -l
What's the correct syntax to feed in all the files from a directory recursively?
Try:
find . -name '*.php' | xargs wc -l
or (when file names include special characters such as spaces)
find . -name '*.php' | sed 's/.*/"&"/' | xargs wc -l
The SLOCCount tool may help as well.
It will give an accurate source lines of code count for whatever
hierarchy you point it at, as well as some additional stats.
Sorted output:
find . -name '*.php' | xargs wc -l | sort -nr
For another one-liner:
( find ./ -name '*.php' -print0 | xargs -0 cat ) | wc -l
It works on names with spaces and only outputs one number.
You can use the cloc utility, which is built for this exact purpose. It reports the number of lines in each language, together with how many of them are comments, etc. cloc is available on Linux, Mac and Windows.
Usage and output example:
$ cloc --exclude-lang=DTD,Lua,make,Python .
2570 text files.
2200 unique files.
8654 files ignored.
http://cloc.sourceforge.net v 1.53 T=8.0 s (202.4 files/s, 99198.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
JavaScript                    1506          77848         212000         366495
CSS                             56           9671          20147          87695
HTML                            51           1409            151           7480
XML                              6           3088           1383           6222
-------------------------------------------------------------------------------
SUM:                          1619          92016         233681         467892
-------------------------------------------------------------------------------
If using a decently recent version of Bash (or ZSH), it's much simpler:
wc -l **/*.php
In the Bash shell this requires the globstar option to be set, otherwise the ** glob-operator is not recursive. To enable this setting, issue
shopt -s globstar
To make this permanent, add it to one of the initialization files (~/.bashrc, ~/.bash_profile etc.).
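For example, assuming ~/.bashrc is the startup file your interactive shells read:
echo 'shopt -s globstar' >> ~/.bashrc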
On Unix-like systems, there is a tool called cloc which provides code statistics.
I ran it on a random directory in our code base; it says:
59 text files.
56 unique files.
5 files ignored.
http://cloc.sourceforge.net v 1.53 T=0.5 s (108.0 files/s, 50180.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                               36           3060           1431          16359
C/C++ Header                    16            689            393           3032
make                             1             17              9             54
Teamcenter def                   1             10              0             36
-------------------------------------------------------------------------------
SUM:                            54           3776           1833          19481
-------------------------------------------------------------------------------
You didn't specify how many files there are or what the desired output is.
This may be what you are looking for:
find . -name '*.php' | xargs wc -l
Yet another variation :)
$ find . -name '*.php' | xargs cat | wc -l
This will give the total sum, instead of file-by-file.
On some systems you need to add the explicit . after find for the command to work.
Use find's -exec and awk. Here we go:
find . -type f -exec wc -l {} \; | awk '{ SUM += $0} END { print SUM }'
This snippet runs on all files (-type f). To filter by file extension, use -name:
find . -name '*.py' -exec wc -l '{}' \; | awk '{ SUM += $0; } END { print SUM; }'
More generally, suppose you need to count files with several different extensions (say, C/C++ sources as well):
wc $(find . -type f | egrep "\.(h|c|cpp|php|cc)$")
The tool Tokei displays statistics about code in a directory. Tokei will show the number of files, total lines within those files and code, comments, and blanks grouped by language. Tokei is also available on Mac, Linux, and Windows.
An example of the output of Tokei is as follows:
$ tokei
-------------------------------------------------------------------------------
Language             Files        Lines         Code     Comments       Blanks
-------------------------------------------------------------------------------
CSS                      2           12           12            0            0
JavaScript               1          435          404            0           31
JSON                     3          178          178            0            0
Markdown                 1            9            9            0            0
Rust                    10          408          259           84           65
TOML                     3           69           41           17           11
YAML                     1           30           25            0            5
-------------------------------------------------------------------------------
Total                   21         1141          928          101          112
-------------------------------------------------------------------------------
Tokei can be installed by following the instructions on the README file in the repository.
POSIX
Unlike most other answers here, these work on any POSIX system, for any number of files, and with any file names (except where noted).
Lines in each file:
find . -name '*.php' -type f -exec wc -l {} \;
# faster, but includes total at end if there are multiple files
find . -name '*.php' -type f -exec wc -l {} +
Lines in each file, sorted by file path
find . -name '*.php' -type f | sort | xargs -L1 wc -l
# for files with spaces or newlines, use the non-standard sort -z
find . -name '*.php' -type f -print0 | sort -z | xargs -0 -L1 wc -l
Lines in each file, sorted by number of lines, descending
find . -name '*.php' -type f -exec wc -l {} \; | sort -nr
# faster, but includes total at end if there are multiple files
find . -name '*.php' -type f -exec wc -l {} + | sort -nr
Total lines in all files
find . -name '*.php' -type f -exec cat {} + | wc -l
There is a little tool called sloccount to count the lines of code in a directory.
It should be noted that it does more than you want as it ignores empty lines/comments, groups the results per programming language and calculates some statistics.
You want a simple for loop:
total_count=0
for file in $(find . -name '*.php' -print)
do
count=$(wc -l < "$file")
let total_count+=count
done
echo "$total_count"
For sources only:
wc `find`
To filter, just use grep:
wc `find | grep '\.php$'`
A straightforward one that is fast, uses all the search/filtering power of find, doesn't fail when there are too many files (argument list overflow), works fine with files that have odd symbols in their names, doesn't use xargs, and doesn't launch a uselessly high number of external commands (thanks to + in find's -exec). Here you go:
find . -name '*.php' -type f -exec cat -- {} + | wc -l
None of the answers so far gets at the problem of filenames with spaces.
Additionally, all of the answers that use xargs are subject to failure if the total length of the paths in the tree exceeds the shell environment size limit (which defaults to a few megabytes on Linux).
Here is one that fixes these problems in a pretty direct manner. The subshell takes care of files with spaces. The awk totals the stream of individual file wc outputs, so it ought never to run out of space. It also restricts the exec to files only (skipping directories):
find . -type f -name '*.php' -exec bash -c 'wc -l "$0"' {} \; | awk '{s+=$1} END {print s}'
I know the question is tagged as bash, but it seems that the problem you're trying to solve is also PHP related.
Sebastian Bergmann wrote a tool called PHPLOC that does what you want and on top of that provides you with an overview of a project's complexity. This is an example of its report:
Size
  Lines of Code (LOC)                              29047
  Comment Lines of Code (CLOC)                     14022 (48.27%)
  Non-Comment Lines of Code (NCLOC)                15025 (51.73%)
  Logical Lines of Code (LLOC)                      3484 (11.99%)
    Classes                                         3314 (95.12%)
      Average Class Length                            29
      Average Method Length                            4
    Functions                                         153 (4.39%)
      Average Function Length                           1
    Not in classes or functions                        17 (0.49%)

Complexity
  Cyclomatic Complexity / LLOC                      0.51
  Cyclomatic Complexity / Number of Methods         3.37
As you can see, the information provided is a lot more useful from the perspective of a developer, because it can roughly tell you how complex a project is before you start working with it.
If you want to keep it simple, cut out the middleman and just call wc with all the filenames:
wc -l `find . -name "*.php"`
Or in the modern syntax:
wc -l $(find . -name "*.php")
This works as long as there are no spaces in any of the directory names or filenames. And as long as you don't have tens of thousands of files (modern shells support really long command lines). Your project has 74 files, so you've got plenty of room to grow.
wc -l? Better use grep -c ^
wc -l? Wrong!
The wc command counts newline characters, not lines! When the last line in the file does not end with a newline character, it will not be counted!
If you still want to count lines, use grep -c ^ instead. Full example:
# This example prints the line count for every file found
total=0
while read -r FILE; do
    # use 'grep' instead of 'wc' for a correct count
    count=$(grep -c ^ < "$FILE")
    echo "$FILE has $count lines"
    let total=total+count   # bash-specific; adapt for other shells
done < <(find /path -type f -name "*.php")
echo "TOTAL LINES COUNTED: $total"
Finally, watch out for the wc -l trap (it counts newlines, not lines!).
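A minimal demonstration of the trap, using a hypothetical throwaway file whose last line has no trailing newline:
printf 'one\ntwo\nthree' > /tmp/no_newline.txt
wc -l < /tmp/no_newline.txt        # prints 2
grep -c ^ /tmp/no_newline.txt      # prints 3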
Giving the longest files first (i.e. maybe these long files need some refactoring love?), and excluding some vendor directories:
find . -name '*.php' | xargs wc -l | sort -nr | egrep -v "libs|tmp|tests|vendor" | less
For Windows, an easy-and-quick tool is LocMetrics.
You can use a utility called codel (link). It's a simple Python module to count lines with colorful formatting.
Installation
pip install codel
Usage
To count lines of C++ files (with .cpp and .h extensions), use:
codel count -e .cpp .h
You can also ignore some files/folders using the .gitignore format:
codel count -e .py -i tests/**
It will ignore all the files in the tests/ folder.
The output is a colorized table with a row for each file. You can also shorten the output with the -s flag: it hides the per-file information and shows only a summary for each extension.
If you want your results sorted by number of lines, you can just add | sort or | sort -r (-r for descending order) to the first answer, like so:
find . -name '*.php' | xargs wc -l | sort -r
If there are too many files, it's better to just look at the total line count.
find . -name '*.php' | xargs wc -l | grep -i ' total' | awk '{print $1}'
Very simply:
find /path -type f -name "*.php" | while read FILE
do
count=$(wc -l < "$FILE")
echo "$FILE has $count lines"
done
Something different:
wc -l `tree -if --noreport | grep -e'\.php$'`
This works out fine, but you need to have at least one *.php file in the current folder or one of its subfolders, or else wc stalls waiting for input on stdin.
It’s very easy with Z shell (zsh) globs:
wc -l ./**/*.php
If you are using Bash, you just need to upgrade. There is absolutely no reason to use Bash.
On OS X at least, the find+xargs+wc commands listed in some of the other answers print "total" several times on large listings, and there is no complete total given. I was able to get a single total for .c files using the following command:
find . -name '*.c' -print0 |xargs -0 wc -l|grep -v total|awk '{ sum += $1; } END { print "SUM: " sum; }'
If you need just the total number of lines in, let's say, your PHP files, you can use a very simple one-line command, even under Windows if you have GnuWin32 installed. Like this:
cat `/gnuwin32/bin/find.exe . -name *.php` | wc -l
You need to specify exactly where find.exe is, otherwise the Windows-provided FIND.EXE (from the old DOS-like commands) will be executed instead, since it probably comes before GnuWin32 in the PATH environment variable and has different parameters and results.
Please note that in the command above you should use backquotes, not single quotes.
While I like the scripts, I prefer this one as it also shows a per-file summary as well as a total:
wc -l `find . -name "*.php"`
