Bash script: list all files including subdirectories and sort them by date

I have a bash script:
for entry in "/home/pictures"/*
do
echo "ls -larth $entry"
done
I also want to list the files in subfolders and include their paths, and I want to sort the results by date. It must be a bash script, because other software (Jenkins) will call it.

Try find.
find /home/pictures -type f -exec ls -l --full-time {} \; | sort -k 6

If there are no newlines in the file names, use:
find /home/pictures -type f -printf '%T@ %p\n' | sort -n
If you cannot tolerate timestamps in the output, use:
find /home/pictures -type f -printf '%28T@ %p\n' | sort -n | cut -c30-
If there is a possibility of newlines in file names, and you can make the program that consumes the output accept null-terminated records, you can use:
find /home/pictures -type f -printf '%T@,%p\0' | sort -nz
For no timestamps in the output, use:
find /home/pictures -type f -printf '%28T@ %p\0' | sort -nz | cut -zc30-
P.S.
I have assumed that you want to sort by last modification time.
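If that assumption is wrong, the same pattern sorts by whichever timestamp you pick: with GNU find, %T@ is mtime, %A@ is atime and %C@ is ctime. A throwaway sketch (the /tmp/sortdemo path is purely illustrative):

```shell
# Demo directory with two files whose mtimes differ (GNU touch -d assumed).
mkdir -p /tmp/sortdemo
touch -d '2020-01-01' /tmp/sortdemo/old.txt
touch -d '2021-01-01' /tmp/sortdemo/new.txt

# Sort by mtime, oldest first; swap %T@ for %A@ or %C@ as needed.
find /tmp/sortdemo -type f -printf '%T@ %p\n' | sort -n
```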

I found the solution to my question (note the quotes around the pattern, so the shell does not expand it before find runs):
find . -name '*' -exec ls -larth {} +

Related

I want to grep a pattern inside a file and list that file based on the current date

ls -l | grep "Feb 22" | grep -l "good" *
This is the command I am using. I have 4 files, one of which contains the word "good", and that file's creation date is the current date. Based on both criteria I want to list that file.
Try this :
find . -type f -newermt 2018-02-21 ! -newermt 2018-02-22 -exec grep -l good {} \;
or
find . -type f -newermt 2018-02-21 ! -newermt 2018-02-22 | xargs grep -l good
And please, don't parse ls output
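A minimal demonstration of why (the /tmp/lsdemo path is just for illustration): a space in a filename gets split across fields as soon as you post-process ls output.

```shell
# A filename containing a space.
mkdir -p /tmp/lsdemo
touch '/tmp/lsdemo/my file.txt'

# Parsing ls fields truncates the name to its last word...
ls -l /tmp/lsdemo | awk 'END {print $NF}'    # prints: file.txt

# ...while find reports the full path intact.
find /tmp/lsdemo -type f
```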
Try the command below. How does it work? find with the parameter -mtime -1 will search for files modified within the last day, in the current directory as well as its subdirectories. Each file found is passed to grep one at a time, and grep checks for the string in that file.
find . -mtime -1 -type f | xargs grep -i "good"
The above command lists all files modified within the last day. To restrict it to a specific kind of file, use the command below; here I am listing only .txt files.
find . -name "*.txt" -mtime -1 -type f | xargs grep -i "good"
find . runs it from the current directory (the dot means the current directory). To run it from a specific directory path, modify it like this:
find /yourpath/ -name "*.txt" -mtime -1 -type f | xargs grep -i "good"
Also, grep -i means "ignore case". For a case-sensitive match, just use grep "good".

What is the correct Linux command of find, grep and sort?

I am writing a command using find, grep and sort to display a sorted list of all files that contain 'some-text'.
I was unable to figure out the command.
Here is my attempt:
$ find . -type f | grep -l "some-text" | sort
but it didn't work.
You need to use something like xargs so that the file names coming through the pipe | become arguments to grep; otherwise grep searches the list of names itself rather than the files.
xargs: converts input from standard input into arguments to a command.
In my case, I have files 1, 2 and 3, and they contain the word test. This will do it:
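As a minimal illustration of that conversion (echo stands in for any command):

```shell
# Three lines on stdin become three arguments to a single echo call.
printf '%s\n' one two three | xargs echo
# prints: one two three
```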
za:tmp za$ find . -type f | xargs grep -l "test" | sort
./file1.txt
./file2.txt
./file3.txt
or
za:tmp za$ find . -type f | xargs grep -i "test" | sort
./file1.txt:some test string
./file2.txt:some test string
./file3.txt:some test string
You can use it in any unix:
find . -type f -exec sh -c 'grep "some-text" {} /dev/null > /dev/null 2>&1' \; -a -print 2> /dev/null | sort
A more optimized solution that works only with GNU-grep:
find . -type f -exec grep -Hq "some-text" {} \; -a -print 2> /dev/null|sort

How to tell find command to escape space characters in file names?

I have a one-line find command which recursively checks and prints the size, owner and name of files of a specific type that were created in a specific time frame. But in the result, the filename column is cut off at the first space character in the directory or file name.
Any idea how to fix this problem within this single command, without writing a loop in bash? Thanks!
here is the command:
find /path/to/dist -type f -iname "*.png" -newermt 2015-01-01 ! -newermt 2016-12-31 -ls | sort -k5 | sort -k5 | awk '{print $5"\t"$7"\t"$11}'
Try changing your awk command to this :
awk '{$1=$2=$3=$4=$6=$8=$9=$10="" ; print$11}'
So that the whole command becomes this :
find /path/to/dist -type f -iname "*.png" -newermt 2015-01-01 ! -newermt 2016-12-31 -ls | sort -k5 | awk '{$1=$2=$3=$4=$6=$8=$9=$10="" ; print$0}'
This leaves some extra spaces at the beginning of the line, hopefully it works for your purpose.
I have removed the second instance of sort, as it sorts on the same key as the first, which does not seem likely to do anything.
Thanks to Arno's input, the following line does the job. I used -exec ls -lh {} \; to make the size human-readable:
find /Path/To/Dest/ -type f -iname "*.pdf" -newermt 2015-01-01 ! -newermt 2016-12-31 -exec ls -lh {} \; |sed 's/\\ /!!!/g' | sort -k5 | awk '{gsub("!!!"," ",$11);print $3"\t"$5"\t"$9}'
I found the following solution: hide the space in the filename. I did it with sed, using the unlikely string "!!!" to replace "\ ", then restoring the space in the awk command. Here is the command I used for my tests:
find . -type f -iname "*.pdf" -newermt 2015-01-01 -ls |sed 's/\\ /!!!/g' | sort -k5 | awk '{gsub("!!!"," ",$11);print $5"\t"$7"\t"$11}'
The -print0 action of find is probably the starting point. From find's manual page:
-print0
    True; print the full file name on the standard output,
    followed by a null character (instead of the newline
    character that -print uses). This allows file names that
    contain newlines or other types of white space to be
    correctly interpreted by programs that process the find
    output. This option corresponds to the -0 option of xargs.
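In practice, the pairing the manpage suggests might look like this sketch (the /tmp/p0demo path is illustrative):

```shell
# A file whose directory name contains a space.
mkdir -p '/tmp/p0demo/has space'
touch '/tmp/p0demo/has space/pic.png'

# Null-terminated names survive the pipe to xargs -0 intact.
find /tmp/p0demo -type f -name '*.png' -print0 | xargs -0 ls -l
```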
But find has the nice -printf action that is even better:
find /path/to/dist -type f -iname "*.png" -newermt 2015-01-01 ! -newermt 2016-12-31 -printf "%u\t%s\t%p\n" | sort
probably does the job.

Terminal find, directories last instead of first

I have a makefile that concatenates JavaScript files together and then runs the file through uglify-js to create a .min.js version.
I'm currently using this command to find and concat my files
find src/js -type f -name "*.js" -exec cat {} >> ${jsbuild}$# \;
But it lists the files in subdirectories first. That makes heaps of sense, but I'd like the .js files directly in src/js to come before those in subdirectories, to avoid getting my undefined JS error.
Is there any way to do this? I've had a google around and seen the sort command and the -s flag for find, but it's a bit above my understanding at this point!
[EDIT]
The final solution is slightly different to the accepted answer but it is marked as accepted as it brought me to the answer. Here is the command I used
cat `find src/js -type f -name "*.js" -print0 | xargs -0 stat -f "%z %N" | sort -n | sed -e "s|[0-9]*\ \ ||"` > public/js/myCleverScript.js
Possible solution:
use find to get filenames and directory depth, i.e. find ... -printf "%d\t%p\n"
sort list by directory depth with sort -n
remove directory depth from output to use filenames only
test:
without sorting:
$ find folder1/ -depth -type f -printf "%d\t%p\n"
2 folder1/f2/f3
1 folder1/file0
with sorting:
$ find folder1/ -type f -printf "%d\t%p\n" | sort -n | sed -e "s|[0-9]*\t||"
folder1/file0
folder1/f2/f3
the command you need looks like
cat $(find src/js -type f -name "*.js" -printf "%d\t%p\n" | sort -n | sed -e "s|[0-9]*\t||")>min.js
Mmmmm...
find src/js -type f
shouldn't find ANY directories at all, and doubly so as your directory names will probably not end in ".js". (The quotes around your -name pattern only guard against the shell expanding *.js in the current directory; they are harmless to keep.)
find src/js -type f -name "*.js" -exec cat {} >> ${jsbuild}$# \;
find could be given the first directory level already expanded on the command line, which enforces the order of directory tree traversal. This solves the problem only for the top directory (unlike the already accepted solution by Sergey Fedorov), but it should answer your question too, and more options are always welcome.
Using GNU coreutils ls, you can sort directories before regular files with the --group-directories-first option. From reading the Mac OS X ls manpage, it seems directories are always grouped there, so you could just drop the option.
ls -A --group-directories-first -r | tac | xargs -I'%' find '%' -type f -name '*.js' -exec cat '{}' + > ${jsbuild}$#
If you do not have the tac command, you can easily implement it using sed; it reverses the order of lines. See info sed tac in the GNU sed documentation.
tac(){
    sed -n '1!G;$p;h'
}
You could do something like this...
First create a variable holding the name of our output file:
OUT="$(pwd)/theLot.js"
Then, get all "*.js" in top directory into that file:
cat *.js > $OUT
Then have "find" grab all other "*.js" files below current directory:
find . -type d ! -name . -exec sh -c "cd {} ; cat *.js >> $OUT" \;
Just to explain the "find" command, it says:
find
. = starting at current directory
-type d = all directories, not files
! -name . = except the current one
-exec sh -c = and for each one you find execute the following
"..." = go to that directory and concatenate all "*.js" files there onto end of $OUT
\; = and that's all for today, thank you!
I'd get the list of all the files:
$ find src/js -type f -name "*.js" > list.txt
Sort them by depth, i.e. by the number of '/' in them, using the following ruby script:
sort.rb:
files=[]; while gets; files<<$_; end
files.sort! {|a,b| a.count('/') <=> b.count('/')}
files.each {|f| puts f}
Like so:
$ ruby sort.rb < list.txt > sorted.txt
Concatenate them:
$ cat sorted.txt | while read FILE; do cat "$FILE" >> output.txt; done
(All this assumes that your file names don't contain newline characters.)
EDIT:
I was aiming for clarity. If you want conciseness, you can absolutely condense it to something like:
find src/js -name '*.js'| ruby -ne 'BEGIN{f=[];}; f<<$_; END{f.sort!{|a,b| a.count("/") <=> b.count("/")}; f.each{|e| puts e}}' | xargs cat >> concatenated

Use find, wc, and sed to count lines

I was trying to use sed to count all the lines in files with a particular extension.
find -name '*.m' -exec wc -l {} \; | sed ...
I was trying to do the following; how would I include sed in this line to get the total?
You may also get the nice formatting from wc with:
wc `find -name '*.m'`
Most of the answers here won't work well for a large number of files. Some will break if the list of file names is too long for a single command line call, others are inefficient because -exec starts a new process for every file. I believe a robust and efficient solution would be:
find . -type f -name "*.m" -print0 | xargs -0 cat | wc -l
Using cat in this way is fine, as its output is piped straight into wc so only a small amount of the files' content is kept in memory at once. If there are too many files for a single invocation of cat, cat will be called multiple times, but all the output will still be piped into a single wc process.
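A tiny check of that pipeline, with throwaway files under an illustrative /tmp/wcdemo path:

```shell
# Two .m files totaling three lines.
mkdir -p /tmp/wcdemo
printf 'a\nb\n' > /tmp/wcdemo/x.m
printf 'c\n'    > /tmp/wcdemo/y.m

# wc sees one concatenated stream, so it prints a single total.
find /tmp/wcdemo -type f -name '*.m' -print0 | xargs -0 cat | wc -l
# prints: 3
```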
You can cat all files through a single wc instance to get the total number of lines:
find . -name '*.m' -exec cat {} \; | wc -l
On modern GNU platforms, find's -print0 and wc's --files0-from options can be combined into a command that counts lines in files, with a total at the end. Example:
find . -name '*.c' -type f -print0 | wc -l --files0-from=-
you could use sed also for counting lines in place of wc:
find . -name '*.m' -exec sed -n '$=' {} \;
where '$=' makes sed print the line number of the last line ($ addresses the last line, = prints its line number), i.e. the line count
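A quick sanity check of that sed idiom on a throwaway file:

```shell
# Three lines in; sed prints the last line's number, i.e. the count.
printf 'a\nb\nc\n' > /tmp/three.txt
sed -n '$=' /tmp/three.txt
# prints: 3
```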
EDIT
you could also try something like sloccount
Hm, the solution with cat may be problematic if you have many files, especially big ones.
The second solution doesn't give a total, just lines per file, as I tested.
I'll prefer something like this:
find . -name '*.m' | xargs wc -l | tail -1
This will do the job fast, no matter how many and how big files you have.
sed is not the proper tool for counting. Use awk instead:
find . -name '*.m' -exec awk 'END {print NR}' {} +
Using + instead of \; makes find pass many files to a single awk invocation (like xargs does), so NR accumulates across the whole batch instead of printing one count per file.
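One way to see the batching effect, under an illustrative /tmp/batchdemo path:

```shell
# Three one-line files.
mkdir -p /tmp/batchdemo
for f in a b c; do printf 'line\n' > "/tmp/batchdemo/$f.m"; done

# \; runs awk once per file: three separate "1" lines.
find /tmp/batchdemo -name '*.m' -exec awk 'END {print NR}' {} \;

# + batches the files into one awk call: a single "3".
find /tmp/batchdemo -name '*.m' -exec awk 'END {print NR}' {} +
```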
For big directories we should use:
find . -type f -name '*.m' -exec sed -n '$=' '{}' + 2>/dev/null | awk '{ total+=$1 }END{print total}'
# alternative using awk twice
find . -type f -name '*.m' -exec awk 'END {print NR}' '{}' + 2>/dev/null | awk '{ total+=$1 }END{print total}'
