I want to find the files which have multiple links.
I am using Ubuntu 10.10.
find -type l
It shows all the links, but I want to count the links for a particular file.
Thanks.
With this command, you will get a summary of linked files:
find . -type l -exec readlink -f {} \; | sort | uniq -c | sort -n
or
find . -type l -print0 | xargs -n1 -0 readlink -f | sort | uniq -c | sort -n
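Note that -type l matches symbolic links. If you are actually after hard links, find can test the link count directly, and stat can report it for a single file. A minimal sketch, assuming GNU findutils and coreutils (standard on Ubuntu); /path/to/file is a placeholder:
find . -type f -links +1     # regular files with more than one hard link
stat -c %h /path/to/file     # print the hard-link count of one particular file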
I am trying to find all files matching dummy* in the folder named dummy. Then I need to sort them according to creation time and take the first 10 files. The command I am trying is:
find -L /home/myname/dummy/dummy* -maxdepth 0 -type f -printf '%T# %p\n' | sort -n | cut -d' ' -f 2- | head -n 10 -exec readlink -f {} \;
But this doesn't seem to work with the following error:
head: invalid option -- 'e'
Try 'head --help' for more information.
How do I make bash not read -exec as part of the head command?
UPDATE1:
Tried the following:
find -L /home/myname/dummy/dummy* -maxdepth 0 -type f -exec readlink -f {} \; -printf '%T# %p\n' | sort -n | cut -d' ' -f 2- | head -n 10
But this is not sorted by timestamp, because both -exec readlink and -printf print a line for each file, and sort sorts all of those lines together.
The files in dummy are named dummy1, dummy2, dummy3, and so on, and this is the order in which they were created.
How do I make bash not read -exec as part of the head command?
The -exec and the arguments that follow it appear intended for find, but the find command ends at the first |, so you need to move those arguments ahead of the pipe:
find -L /home/myname/dummy/dummy* -maxdepth 0 -type f -printf '%T# %p\n' -exec readlink -f {} \; | sort -n | cut -d' ' -f 2- | head -n 10
However, it doesn't make much sense to both -printf the file details and -exec readlink on the same results. Possibly you wanted to run readlink on each filename that makes it past head. In that case, look into the xargs command, which exists precisely to convert data read from standard input into arguments for another command. For example:
find -L /home/myname/dummy/dummy* -maxdepth 0 -type f -printf '%T# %p\n' |
sort -n |
cut -d' ' -f 2- |
head -n 10 |
xargs -rd '\n' readlink -f
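Here, -r tells GNU xargs not to run the command at all if there is no input, and -d '\n' treats each input line as exactly one argument, so paths containing spaces survive the pipe. A quick illustration with made-up names, assuming GNU xargs:
printf '%s\n' 'a file' 'another file' | xargs -rd '\n' printf '[%s]\n'
This prints [a file] and [another file], showing that each whole line became a single argument.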
I think you are over-complicating things here. Using just ls and head should get you the results you want:
ls -lt /home/myname/dummy/dummy* | head -10
To sort by ctime (status-change time; note this is not creation time) specifically, use the -c flag for ls:
ls -ltc /home/myname/dummy/dummy* | head -10
I am writing a command using find, grep and sort to display a sorted list of all files that contain 'some-text'.
I was unable to figure out the command.
Here is my attempt:
$find . -type f |grep -l "some-text" | sort
but it didn't work.
You need to use something like xargs so that the file names coming through the pipe | are passed to grep as arguments; grep then reads each file's contents itself.
xargs: converts input from standard input into arguments to a command.
In my case, I have file1, file2, and file3, and they all contain the word test. This will do it:
za:tmp za$ find . -type f | xargs grep -l "test" | sort
./file1.txt
./file2.txt
./file3.txt
or
za:tmp za$ find . -type f | xargs grep -i "test" | sort
./file1.txt:some test string
./file2.txt:some test string
./file3.txt:some test string
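Note that a plain find | xargs pipeline breaks on file names containing spaces or quotes. Assuming GNU find and xargs, null delimiters avoid that:
find . -type f -print0 | xargs -0 grep -l "test" | sort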
This one works on any Unix:
find . -type f -exec sh -c 'grep "some-text" {} /dev/null > /dev/null 2>&1' \; -a -print 2> /dev/null | sort
A more efficient solution that only works with GNU grep:
find . -type f -exec grep -Hq "some-text" {} \; -a -print 2> /dev/null|sort
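If GNU grep is available anyway, a simpler equivalent is to let grep recurse by itself:
grep -rl "some-text" . | sort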
I have a directory that contains files and other directories, and I have one specific file that I know has duplicates somewhere in the given directory tree.
How can I find these duplicates using Bash on macOS?
Basically, I'm looking for something like this (pseudo-code):
$ find-duplicates --of foo.txt --in ~/some/dir --recursive
I have seen that there are tools such as fdupes, but I'm neither interested in any duplicate files (only duplicates of a specific file) nor am I interested in duplicates anywhere on disk (only within the given directory or its subdirectories).
How do I do this?
For a solution compatible with macOS built-in shell utilities, try this instead:
find DIR -type f -print0 | xargs -0 md5 -r | grep "$(md5 -q FILE)"
where:
DIR is the directory you are interested in;
FILE is the file (path) you are searching for duplicates of.
If you only need the duplicate files' paths, then pipe through this as well:
cut -d' ' -f2
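Put together with the names from the question, the whole pipeline might look like this (a sketch using the macOS md5 tool; the final cut still assumes paths without spaces):
find ~/some/dir -type f -print0 | xargs -0 md5 -r | grep "$(md5 -q foo.txt)" | cut -d' ' -f2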
If you're looking for a specific filename, you could do:
find ~/some/dir -name foo.txt
which would return a list of all files with the name foo.txt in the directory. If you want to check whether there are multiple files in the directory with the same name, you could do:
find ~/some/dir -exec basename {} \; | sort | uniq -d
This will give you a list of files with duplicate names (you can then use find again to figure out where those live).
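As a sketch, you can feed the duplicated names straight back into find (restricting to regular files with -type f, which the command above arguably should do as well):
find ~/some/dir -type f -exec basename {} \; | sort | uniq -d |
while read -r name; do
    find ~/some/dir -name "$name"
done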
---- EDIT -----
If you're looking for identical files (with the same md5 sum), you could also do:
find . -type f -exec md5sum {} \; | sort | uniq -d --check-chars=32
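uniq -d prints only one line per duplicate group. With GNU uniq you can print every member of each group instead (-w is the short form of --check-chars, and -D means --all-repeated):
find . -type f -exec md5sum {} \; | sort | uniq -w32 -D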
--- EDIT 2 ----
If your md5sum doesn't output the filename, you can use:
find . -type f -exec echo -n "{} " \; -exec md5sum {} \; | awk '{print $2, $1}' | sort | uniq -d --check-chars=32
--- EDIT 3 ----
If you're looking for files with a specific md5 sum:
sum=$(md5sum foo.txt | cut -f1 -d' ')
find ~/some/dir -type f -exec md5sum {} \; | grep "$sum"
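A small refinement: anchoring the pattern at the start of the line avoids accidental matches against file names that happen to contain hex strings:
find ~/some/dir -type f -exec md5sum {} \; | grep "^$sum "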
I'm using a command from this topic to view all file extensions in a directory and all its subdirectories.
find . -type f -name '*.*' | sed 's|.*\.||' | sort -u
How can I count the number of appearances of each extension?
Like:
png: 140
Like this, using uniq with the -c, --count flag:
find . -type f -name '*.*' | sed 's|.*\.||' | sort | uniq -c
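uniq -c prints the count first, e.g. 140 png. If you want the png: 140 format from the question, a small awk step reshapes it:
find . -type f -name '*.*' | sed 's|.*\.||' | sort | uniq -c | awk '{print $2": "$1}'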
I have tried this, but it is not working.
find . -mtime +10 -print| grep -H -r "test" | cut -d: -f1
You can make use of xargs to process the files found by find, but find alone can do it:
find . -mtime +10 -exec grep -l "test" {} \+
find ... -exec XXX {} \; (or \+, thanks Kevin) runs the XXX command on each file found by find.
grep -l just prints the names of the matching files, which is what I think you were trying to get with cut -d: -f1.
You may also need to add -type f to just find files, no directories.
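Putting those pieces together, the full command might look like:
find . -mtime +10 -type f -exec grep -l "test" {} +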
You have to execute it using xargs, like this:
find . -mtime +10 -print0 | xargs -0 grep -H -r "test" | cut -d: -f1
Edit
I inserted options so that you won't have problems with spaces in the filenames.