I'm using the find command in a ksh script, and I'm trying to retrieve just the filenames rather than the full path. That is, I want it to return text.exe, not //servername/dir1/dir2/text.exe.
How would I go about getting that? To clarify, I know the directory the files are in, I am just grabbing the ones created before a certain date, so the pathname doesn't matter.
If you're using GNU find, then
find path -printf "%f\n"
will just print the file name and exclude the path.
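Since you mentioned selecting by date, you can combine this with a time test; a sketch assuming GNU find, with an illustrative path and -mtime value (note that -mtime tests modification time, since Unix generally does not record creation time):
find /path/to/dir -type f -mtime +30 -printf '%f\n'
This prints only the base name of each regular file modified more than 30 days ago.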
find ... -exec basename {} \;
will also do the trick, but as @Kent asks, why do you want this?
You can do it with:
find ... | sed 's#.*/##'
However, does it really make sense? If there are two files with the same filename but located in different directories, how can you distinguish them?
For example, you are in /foo and have:
/foo/a.txt
/foo/bar/a.txt
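For illustration, running the pipeline on that layout yields two identical lines, with no way to tell the files apart:
find /foo -type f | sed 's#.*/##'
a.txt
a.txt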
EDIT: reformatted the answer for better readability.
As you described in the comments, you want to:
find some files,
copy them to a dir,
gzip them into an archive, say a.gz,
remove the copied files only if step 2 was successful.
This could be done in one shot:
find ... | xargs tar -czf /path/to/your/target/a.gz
This will find the files and create a tar archive (a.gz) in your target directory.
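Note that if xargs splits a long file list across several tar invocations, later ones will overwrite the archive. A more robust sketch, assuming GNU find and GNU tar (the path and the -mtime test are illustrative): pass the names NUL-delimited and let tar remove each file only after it has been archived:
find /path/to/dir -type f -mtime +30 -print0 |
  tar --null -T - -czf /path/to/your/target/a.gz --remove-files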
Here's another answer.
find | awk -F/ '{print $NF}'
GNU find natively supports this using -printf so all that you need to do is
find ... -printf '%f\n'
Update: Oops... this was already covered in the answer by @glenn-jackman, which I somehow missed.
Related
I have a bunch of files with different names in different subdirectories. I created a txt file with those names, but I cannot make find work using the file. I have seen posts about problems creating the list, and about not using find at all (though I do not understand the reason). Suggestions? It is difficult for me to come up with an example because I do not know how to reproduce the directory structure.
The following are the names of the files (just in case there is a formatting problem)
AO-169
AO-170
AO-171
The best that I came up with is:
cat ExtendedList.txt | xargs -I {} find . -name {}
It obviously dies in the first directory that it finds.
I also tried
ta="AO-169 AO-170 AO-171"
find . -name $ta
but it complains: find: AO-170: unknown primary or operator
If you are trying to ask "how can I find files with any of these names in subdirectories of the current directory", the answer to that would look something like
xargs printf -- '-o\0-name\0%s\0' <ExtendedList.txt |
xargs -r0 find . -false
The -false is just a cute way to let the list of actual predicates start with "... or".
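With the three example names from the question, the pipeline effectively runs the following command (shown here purely for illustration):
find . -false -o -name AO-169 -o -name AO-170 -o -name AO-171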
If the list of names in ExtendedList.txt is large, this could fail if the second xargs decides to break it up between -o and -name.
The option -0 is not portable, but should work e.g. on Linux or wherever you have GNU xargs.
If you can guarantee that the list of strings in ExtendedList.txt does not contain any characters which are problematic to the shell (like single quotes), you could simply say
sed "s/.*/-o -name '&'/" ExtendedList.txt |
xargs -r find . -false
I need to create a script that will go through and add underscores to all files in multiple directories, ignoring the files that already have prefixes. For example, _file1, _file2, file3, file4 needs to look like _file1, _file2, _file3, _file4
I've got little to no knowledge of Unix scripting, so a simple explanation would be greatly appreciated!
You could use a one-liner like this:
find dir_with_files -regextype posix-extended -type f -regex '^.*\/[^_][^\/]*$' -exec rename -v 's/^(.*\/)([^_][^\/]*)$/$1_$2/' '{}' \;
where dir_with_files is the top-level directory in which to search for your files. It finds files whose names do not start with _ and renames each of them (this uses the Perl-based rename command).
Before making any changes, you can run rename with the -n -v flags, which show what operations would take place without actually executing them:
find dir_with_files -regextype posix-extended -type f -regex '^.*\/[^_][^\/]*$' -exec rename -v -n 's/^(.*\/)([^_][^\/]*)$/$1_$2/' '{}' \;
From the best Bash resource out there:
Create a glob which matches all of the relevant files.
Loop through all of the matching files.
Remove the underscore from the file name and save the result to a variable.
Prepend an underscore to the variable.
echo the original file name followed by the changed file name using proper quotes to check that they look sane (the quotes will not be printed by echo since they are syntax).
Use mv instead of echo to actually rename the files.
In addition:
If your mv supports -n/--no-clobber, use it to avoid the possibility of data loss in case you mess up; the sketch below uses it.
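A minimal sketch of those steps, assuming bash and a hypothetical directory dir; keep the echo until the printed commands look sane, then remove it to actually rename:
for file in dir/*; do
    name=${file##*/}                      # bare file name, directory part removed
    new=_${name#_}                        # strip any leading underscore, then prepend one
    [ "$name" = "$new" ] && continue      # already prefixed; skip
    echo mv -n -- "$file" "dir/$new"
done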
I know the directory which contains a file whose name I don't know, but I know what the name ends with (for example, .txt), and I know there's exactly one file with such an ending in the directory.
I've tried the following code: file=grep -w $1/*.txt (where $1 is the directory path), which didn't help at all; neither did file=$1/*.txt.
Is there any way to achieve what I'm looking for?
If you know the precise directory and a wildcard which will not match any other files, you could run a loop which loops only once.
for file in "$1"/*.txt; do
: "$file" refers to your file here
done
Of course, in a lot of situations, you don't really need to know the precise file name. If you want to run grep on the file which matches your wildcard, just do that:
grep "xyzzy" "$1"/*.txt
What you're looking for is the find command.
You can use it like this (make sure you're in the directory you would like to search):
find . -iname '*.txt'
Use the command man find to learn more about it.
/usr/bin/find directory_name -iname "*.txt"
If you want to operate on the files found, you can combine find with -exec; a sketch follows.
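For example, to list the names of the matching files that contain a given string (the pattern here is illustrative):
find directory_name -iname '*.txt' -exec grep -l 'pattern' {} +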
Is there a built-in way for find to not display the initial path in the results? Ideally a built-in option or a simple -printf argument.
For example, find initial/path returns:
initial/path/file1
initial/path/dir1/file1
initial/path/dir1/file2
I would like to just have:
file1
dir1/file1
dir1/file2
Things I've tried:
1) cd to the initial path first.
cd initial/path
find
But that gives ./file1; it still includes an extra ./ prefix.
2) find initial/path | sed 's,^initial/path/,,'
This works, but it just seems to be unnecessary/extra processing if there is a better way.
With GNU find:
find some/dir -printf '%P\n'
More info
%P    File's name with the name of the starting-point under which it was found removed.
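Applied to the question's example (adding -type f so that directories, including the starting point itself, whose %P expansion is empty, are skipped):
find initial/path -type f -printf '%P\n'
This prints exactly file1, dir1/file1 and dir1/file2.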
I have a directory containing a bunch of header files from a library. I would like to see how "Uint32" is defined.
So, I need a way to scan over all those header files and print out lines with "Uint32".
I guess grep could help, but I'm new to shell scripts.
What should I do?
There are a couple of ways.
grep -r --include="*.h" Uint32
is one way.
Another is:
find . -name "*.h" | xargs grep Uint32
If you have spaces in the file names, the latter can be problematic.
find . -name "*.h" -print0 | xargs -0 grep Uint32
will typically solve that.
Just simple grep will be fine:
grep "Uint32" *.h*
This will search both *.h and *.hpp header files.
While using grep is fine, for navigating code you may also want to investigate ack (a source-code-aware grep variant) and/or ctags (which integrates with vi or emacs and allows navigation through code in your editor).
ack in particular is very nice, since it'll navigate through directory hierarchies, and only work on specific types of files (so for C it'll interrogate .c and .h files, but ignore SCM revision directories, backup files etc.)
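For instance, assuming ack is installed, a search restricted to C source and header files would look like this (the --cc type flag is how recent ack versions select C files; check ack --help-types for your version):
ack --cc Uint32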
Of course, you really need some form of IDE to give you complete navigation over the codebase.