I am trying to list all file names that contain a certain string but do not contain another string. For this particular case, I want all filenames containing "*.java" but not "*Test.java". To find and save the first set, I was using:
find -name "*.java" > sources.txt
But I don't know how to exclude the files that contain "*Test.java" in the file name.
I'm new to bash, so sorry if I've missed something obvious.
You can use:
find -name "*.java" ! -name "*Test.java"
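For example, to reproduce the original redirection with the exclusion added (the paths under /tmp/javademo are made up for this demo):

```shell
# Create a small sample tree (illustrative names only)
mkdir -p /tmp/javademo/src
touch /tmp/javademo/src/Main.java /tmp/javademo/src/MainTest.java

# List .java files, excluding *Test.java, and save the result
find /tmp/javademo -name '*.java' ! -name '*Test.java' > /tmp/javademo/sources.txt
cat /tmp/javademo/sources.txt
```

Only Main.java survives the filter; MainTest.java is excluded by the second test.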
I have a bunch of files with different names in different subdirectories. I created a txt file with those names, but I cannot make find work using that file. I have seen posts about problems creating the list, and about not using find at all (though I do not understand the reason). Any suggestions? It is difficult for me to come up with an example because I do not know how to reproduce the directory structure.
The following are the names of the files (just in case there is a formatting problem)
AO-169
AO-170
AO-171
The best that I came up with is:
cat ExtendedList.txt | xargs -I {} find . -name {}
It obviously dies in the first directory that it finds.
I also tried
ta="AO-169 AO-170 AO-171"
find . -name $ta
but it complains find: AO-170: unknown primary or operator
If you are trying to ask "how can I find files with any of these names in subdirectories of the current directory", the answer to that would look something like
xargs printf -- '-o\0-name\0%s\0' <ExtendedList.txt |
xargs -r0 find . -false
The -false is just a cute way to let the list of actual predicates start with "... or".
If the list of names in ExtendedList.txt is large, this could fail if the second xargs decides to break it up between -o and -name.
The option -0 is not portable, but should work e.g. on Linux or wherever you have GNU xargs.
If you can guarantee that the list of strings in ExtendedList.txt does not contain any characters which are problematic to the shell (like single quotes), you could simply say
sed "s/.*/-o -name '&'/" ExtendedList.txt |
xargs -r find . -false
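To see what the sed stage feeds to xargs, you can run it on a small sample list (the /tmp/ExtendedList.txt path is just for the demo; the names are borrowed from the question):

```shell
# Build a sample list
printf '%s\n' AO-169 AO-170 AO-171 > /tmp/ExtendedList.txt

# Each line becomes: -o -name 'X', so the final command line works out to
#   find . -false -o -name 'AO-169' -o -name 'AO-170' -o -name 'AO-171'
sed "s/.*/-o -name '&'/" /tmp/ExtendedList.txt
```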
I would like to search for a simple text query (inside of a directory named "textfiles") and based on the matches, assign the results to a variable in bash (as an array or list). This query should be case-insensitive, and the context is inside of a bash script (.sh file). The names I'd hope to see in the array are simply the filenames, not the full paths.
What I am trying:
myfiles=./textfiles/*text*.txt
This matches all files that have the word text in them, but not the word TEXT.
I've also tried
myfiles=(find textfiles -iname *text*)
...and...
myfiles=find textfiles -iname *text*
Is there a solution to this?
myfiles=$(find textfiles -iname '*text*' -exec basename '{}' \; 2>/dev/null)
Note how -exec allows you to perform powerful operations on the files find finds. Maybe you do not even need the array after all, and can do what you need to do right there in the -exec argument.
And be aware that the -exec argument may be a script or other executable of your own making...
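If you do still want the array, here is a sketch that also survives spaces in file names, assuming bash and a find that supports -print0 (GNU and BSD find both do); the demo directory and names are invented:

```shell
# Demo directory and files, purely for illustration
mkdir -p /tmp/textdemo
touch /tmp/textdemo/'my text file.txt' /tmp/textdemo/other.txt

myfiles=()
while IFS= read -r -d '' f; do
    myfiles+=("$(basename "$f")")   # keep only the file name, not the path
done < <(find /tmp/textdemo -type f -iname '*text*' -print0)

printf '%s\n' "${myfiles[@]}"
```

NUL-delimited output plus read -d '' is what makes embedded spaces (and even newlines) in names safe.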
# plain
myfiles=($(find textfiles -iname '*text*'))
# if you write it like below, you get the result in myfiles as a single string
myfiles=$(find textfiles -iname '*text*')
# if you want to assign the strings as an array, you write it the following way
myfiles=(abc def ijk)
But this poses a problem: if there is a space in a file name or directory name, it will give you incorrect results. A better solution would be
myfiles=()
while read -r fname; do
    myfiles+=("$fname")
done < <(find . -type f)
As @Roadowl suggested, xargs can be a better alternative.
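One such alternative is mapfile (readarray); a sketch assuming bash 4.4+ (for mapfile -d '') and a find with -print0, with made-up demo paths:

```shell
mkdir -p /tmp/mapdemo
touch /tmp/mapdemo/'a file.txt' /tmp/mapdemo/b.txt

# mapfile slurps NUL-delimited find results straight into an array,
# one element per file, spaces and all
mapfile -d '' myfiles < <(find /tmp/mapdemo -type f -print0)

printf '%s\n' "${myfiles[@]}"
```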
There is more than one way to solve a problem.
Since you said explicitly in your posting that you want to have files containing text but not TEXT, you cannot do a case-insensitive search; you have to be case-sensitive:
myfiles=($(find -name '*text*' 2>/dev/null))
However, this would also return a file named x.text.y.TEXT.z. If you want to exclude this file (since you consider exclusion of TEXT more important than inclusion of text), you can do a
myfiles=($(find -name '*text*' '!' -name '*TEXT*' 2>/dev/null))
Terminal noob needs a little help :)
I have a 98 row long filename list in a .csv file. For example:
name01; name03, etc.
I have an external hard drive with a lot of files in chaotic file
structure. BUT the file names are consistent, something like:
name01_xy; name01_zq; name02_xyz etc.
I would like to copy every file and directory from the external hard
drive which begins with the filename stored in the .csv file to my
computer.
So basically it's a search and copy based on a text file from an eHDD to my computer. I guess the easiest way to do is a Terminal command. Do you have any advice? Thanks in advance!
The task can be split into three: read search criteria from file; find files by criteria; copy found files. We discuss each one separately and combine them in a one-liner step-by-step:
Read search criteria from .csv file
Since your .csv file is pretty much just a text file with one criterion per line, it's pretty easy: just cat the file.
$ cat file.csv
bea001f001
bea003n001
bea007f005
bea008f006
bea009n003
Find files
We will use find. Example: you have a directory /Users/me/where/to/search and want to find all files in there whose names start with bea001f001:
$ find /Users/me/where/to/search -type f -name "bea001f001*"
If you want to find all files that end with bea001f001, move the star wildcard (zero-or-more) to the beginning of the search criterion:
$ find /Users/me/where/to/search -type f -name "*bea001f001"
Now you can already guess what the search criterion for all files containing the name bea001f001 would look like: "*bea001f001*".
We use -type f to tell find that we are interested only in finding files and not directories.
Combine reading and finding
We use xargs to pass the file contents to find as -name arguments:
$ cat file.csv | xargs -I [] find /Users/me/where/to/search -type f -name "[]*"
/Users/me/where/to/search/bea001f001_xy
/Users/me/where/to/search/bea001f001_xyz
/Users/me/where/to/search/bea009n003_zq
Copy files
We use cp. It is pretty straightforward: cp file target will copy file into target if target is a directory, or create/replace a file named target otherwise.
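A minimal illustration of both cases (the paths under /tmp/cpdemo are invented for the demo):

```shell
mkdir -p /tmp/cpdemo/target
echo hello > /tmp/cpdemo/file

cp /tmp/cpdemo/file /tmp/cpdemo/target   # target is a directory: copy into it
cp /tmp/cpdemo/file /tmp/cpdemo/copy     # otherwise: create/replace a file named "copy"
```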
Complete one-liner
We pass results from find to cp not by piping, but by using the -exec argument passed to find:
$ cat file.csv | xargs -I [] find /Users/me/where/to/search -type f -name "[]*" -exec cp {} /Users/me/where/to/copy \;
Sorry, this is my first post here. In response to the comments above, only the last file is selected likely because the other lines end with a carriage return \r. If you first prepend the directory to each filename in the csv, you can perform the copy with the following command, which strips the \r.
cp `tr -d '\r' < file.csv` /your/target/directory
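Note that the backtick expansion word-splits names containing spaces. A per-line loop that strips the \r is more robust; here is a self-contained sketch with made-up demo paths (combine with find, as shown earlier, if you need prefix matching rather than exact names):

```shell
src=/tmp/csvdemo/src dst=/tmp/csvdemo/dst
mkdir -p "$src" "$dst"
touch "$src/name01_xy" "$src/name02 x"
printf 'name01_xy\r\nname02 x\r\n' > /tmp/csvdemo/file.csv

while IFS= read -r name; do
    name=${name%$'\r'}          # drop the trailing carriage return, if any
    cp "$src/$name" "$dst"
done < /tmp/csvdemo/file.csv
```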
I need to create a script that will go through and add underscores to all files in multiple directories, ignoring the files that already have prefixes. For example, _file1, _file2, file3, file4 needs to look like _file1, _file2, _file3, _file4
I've got little to no knowledge of Unix scripting, so a simple explanation would be greatly appreciated!
You could use a one-liner like this:
find dir_with_files -regextype posix-extended -type f -regex '^.*\/[^_][^\/]*$' -exec rename -v 's/^(.*\/)([^_][^\/]*)$/$1_$2/' '{}' \;
where dir_with_files is the top directory in which you search for your files. It finds files whose names do not start with _, and renames each of them.
Before making any changes, you can run rename with the -n -v flags to show which operations would take place, without actually executing them:
find dir_with_files -regextype posix-extended -type f -regex '^.*\/[^_][^\/]*$' -exec rename -v -n 's/^(.*\/)([^_][^\/]*)$/$1_$2/' '{}' \;
From the best Bash resource out there:
Create a glob which matches all of the relevant files.
Loop through all of the matching files.
Remove the underscore from the file name and save the result to a variable.
Prepend an underscore to the variable.
echo the original file name followed by the changed file name using proper quotes to check that they look sane (the quotes will not be printed by echo since they are syntax).
Use mv instead of echo to actually rename the files.
In addition:
If your mv supports -n/--no-clobber, use it to avoid the possibility of data loss in case you mess up
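Put together, those steps might look like the following sketch (the directory name and glob are assumptions; skipping already-prefixed names is equivalent here to stripping and re-adding the underscore, and you can swap mv for echo to dry-run first):

```shell
# Demo directory with one already-prefixed and one unprefixed file
mkdir -p /tmp/renamedemo
touch /tmp/renamedemo/_file1 /tmp/renamedemo/file3

for f in /tmp/renamedemo/*; do
    base=${f##*/}                  # file name without the leading path
    [[ $base == _* ]] && continue  # already prefixed: leave it alone
    mv -n -- "$f" "${f%/*}/_$base" # -n avoids clobbering an existing _name
done

ls /tmp/renamedemo
```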
I have a directory /folder1/folder2 containing two type of files:
file.txt
file.txt0* (* means any number)
I wrote a script to list all files matching pattern "file.txt0*" occurrencies in folder "/folder1/folder2":
find -wholename /folder1/folder2/file.txt0*
But it always returns nothing.
Any suggestion?
-name searches for the filename and not for the path. You would need to write the search like this:
find /folder1/folder2/ -name 'file.txt0*'
Make sure you are in the proper relative directory. The below should work if folder1/folder2 is present in / (root):
find /folder1/folder2 -iname 'file.txt0*'
-iname does a case-insensitive search.