Is there a built-in way for find to not display the initial path in the results? Ideally a built-in option or a simple -printf argument.
For example, find initial/path returns:
initial/path/file1
initial/path/dir1/file1
initial/path/dir1/file2
I would like to just have:
file1
dir1/file1
dir1/file2
Things I've tried:
1) cd to the initial path first.
cd initial/path
find
But that gives ./file1; it still includes an extra ./ prefix.
2) find initial/path | sed 's,^initial/path/,,'
This works, but it seems like unnecessary extra processing if there is a better way.
With GNU find:
find some/dir -printf '%P\n'
More info
%P File's name with the name of the starting-point under
which it was found removed.
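A quick check against the tree from the question (GNU find; -mindepth 1 suppresses the blank line %P would otherwise print for the starting point itself, and entry order may vary):
find initial/path -mindepth 1 -printf '%P\n'
file1
dir1
dir1/file1
dir1/file2
Note that find lists the directory dir1 too; add -type f if you want regular files only.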
Related
I have a bunch of files with different names in different subdirectories. I created a txt file with those names, but I cannot make find work using that file. I have seen posts about problems creating the list, and about not using find at all (though I don't understand the reason). Any suggestions? It is difficult for me to come up with an example because I don't know how to reproduce the directory structure.
The following are the names of the files (just in case there is a formatting problem)
AO-169
AO-170
AO-171
The best that I came up with is:
cat ExtendedList.txt | xargs -I {} find . -name {}
It obviously dies in the first directory that it finds.
I also tried
ta="AO-169 AO-170 AO-171"
find . -name $ta
but it complains find: AO-170: unknown primary or operator
If you are trying to ask "how can I find files with any of these names in subdirectories of the current directory", the answer to that would look something like
xargs printf -- '-o\0-name\0%s\0' <ExtendedList.txt |
xargs -r0 find . -false
The -false is just a cute way to let the list of actual predicates start with "... or".
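With the three names from the question, the generated argument list makes the second xargs effectively run:
find . -false -o -name AO-169 -o -name AO-170 -o -name AO-171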
If the list of names in ExtendedList.txt is large, this could fail if the second xargs decides to break it up between -o and -name.
The option -0 is not portable, but should work e.g. on Linux or wherever you have GNU xargs.
If you can guarantee that the list of strings in ExtendedList.txt does not contain any characters which are problematic to the shell (like single quotes), you could simply say
sed "s/.*/-o -name '&'/" ExtendedList.txt |
xargs -r find . -false
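With the sample names, the sed stage emits one ready-made predicate per line for xargs to split into words:
-o -name 'AO-169'
-o -name 'AO-170'
-o -name 'AO-171'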
I am working on AIX and expecting to receive a file named e.g. "afilename0729". However, the file is uploaded from a Windows machine and I don't have control over that; sometimes it arrives with the name "AFILENAME0729". Originally I wrote a naive
ls afilename*
to locate it within a script, but that misses the uppercase variant. Instead of writing something like
find . -name "[aA][fF][iI][lL][eE][nN][aA][mM][eE][01][0-9][0-3][0-9]"
is there a better way to do this?
Use -iname to ignore case, e.g.:
find . -iname 'afilename*'
(Quote the pattern so the shell doesn't expand the * before find sees it.)
I also found this unix.stackexchange answer for the same question
You can use -iname, or -iregex for regex support, to get case-insensitive matching in find:
find . -iregex '.*afilename[01][0-9][0-3][0-9]'
You started with ls (which looks in the current directory only).
The simple form, without checking the digits, would be
ls | grep -i afilename
You can also use regular expressions with
ls | egrep -i "^afilename[01][0-9][0-3][0-9]"
When you use regex, the * is not a normal wildcard:
Looking for files starting with an "a" and ending with [0-9], do not use
ls | egrep -i "^a*[0-9]$"
but use
ls | egrep -i "^a.*[0-9]$"
If you're using GNU find, you can use the case-insensitive -iname instead of -name.
From the manpage:
-iname pattern:
Like -name, but the match is case insensitive. For example, the patterns fo* and F?? match the file names Foo, FOO, foo, fOo, etc. The pattern *foo* will also match a file called .foobar.
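A quick sanity check, using the uppercase name from the question:
touch AFILENAME0729
find . -iname 'afilename*'     # prints ./AFILENAME0729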
I know the directory that contains a file whose name I don't know, but I know how the name ends (for example, .txt). I also know there is exactly one file with that ending in the directory.
I've tried the following code: file=grep -w $1/*.txt (where $1 is the directory path), which didn't help at all; neither did file=$1/*.txt.
Is there any way to achieve what I'm looking for?
If you know the precise directory and a wildcard which will not match any other files, you could run a loop which loops only once.
for file in "$1"/*.txt; do
: "$file" refers to your file here
done
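Since the goal was to land the name in a variable, a minimal sketch building on that loop (assumes, as stated, exactly one match):
file=
for f in "$1"/*.txt; do
  file=$f    # with exactly one match, this runs once
done
printf '%s\n' "$file"
Note that if nothing matches, the glob stays unexpanded and $file ends up holding the literal pattern (unless you enable bash's nullglob).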
Of course, in a lot of situations, you don't really need to know the precise file name. If you want to run grep on the file which matches your wildcard, just do that:
grep "xyzzy" "$1"/*.txt
What you're looking for is the find command.
You can use it like this (make sure you're in the directory you would like to search):
find . -iname '*.txt'
Run man find to learn more about the command.
/usr/bin/find directory_name -iname "*.txt"
If you want to operate on the matched files, you can also use find with -exec.
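For example, a hypothetical combination of the two (directory name and grep string are placeholders):
find directory_name -iname '*.txt' -exec grep "xyzzy" {} +
The trailing + batches the matched file names into as few grep invocations as possible.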
I'd like to run the following:
ls /path/to/files/pattern*
and get
/path/to/files/pattern1
/path/to/files/pattern2
/path/to/files/pattern3
However, there are too many files matching the pattern in that directory, and I get
bash: /bin/ls: Argument list too long
What's a better way to do this? Maybe using the find command? I need to print out the full paths to the files.
This is where find in combination with xargs will help.
find /path/to/files -name "pattern*" -print0 | xargs -0 ls
Note from the comments: xargs helps if you wish to do something with the list once you have obtained it from find. If you only intend to list the files, then find alone should suffice. However, if you wish to copy, delete, or perform any other action on the matches, then xargs or -exec is the way to go.
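For instance, to move the matches instead of listing them (the destination directory is just a placeholder):
find /path/to/files -name 'pattern*' -print0 | xargs -0 -I{} mv {} /some/dest/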
I'm using the find command in a ksh script, and I'm trying to retrieve just the filenames rather than the full path. As in, I want it to return text.exe, not //servername/dir1/dir2/text.exe.
How would I go about getting that? To clarify, I know the directory the files are in, I am just grabbing the ones created before a certain date, so the pathname doesn't matter.
If you're using GNU find, then
find path -printf "%f\n"
will just print the file name and exclude the path.
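Combined with a date test like the one the question mentions (the 30-day cutoff is illustrative, and -mtime checks modification time, usually the closest available thing to a creation date):
find //servername/dir1/dir2 -type f -mtime +30 -printf '%f\n'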
find ... -exec basename {} \;
will also do the trick, but as @Kent asks: why do you want this?
You can do it with:
find ... | sed 's#.*/##'
However, does it really make sense? If there are two files with the same name in different directories, how can you distinguish them?
e.g.
you are in /foo
/foo/a.txt
/foo/bar/a.txt
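With that layout, both files collapse to the same output line and the origin is lost:
find /foo -name a.txt | sed 's#.*/##'
a.txt
a.txt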
EDIT
As you described in the comments, you want to:
1) find some files,
2) copy them to a dir,
3) gzip them to an archive, say a.gz,
4) remove the copied files only if step 2 was successful.
This could be done in one shot:
find ... | xargs tar -czf /path/to/your/target/a.gz
This finds the files and writes them straight into a gzipped tar (a.gz) in your target dir, so there are no intermediate copies to remove.
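If the file names may contain spaces, a null-delimited variant is safer (GNU find/xargs assumed):
find ... -print0 | xargs -0 tar -czf /path/to/your/target/a.gz
One caveat with either form: if the list is long enough for xargs to run tar more than once, later runs would overwrite the archive.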
Here's another answer.
find | awk -F/ '{print $NF}'
GNU find natively supports this via -printf, so all you need to do is
find ... -printf '%f\n'
Update: Oops... this was already covered in the answer by @glenn-jackman, which I somehow missed.