I just tried to use find / d <directory> on our university server to locate a package I want to use, and it ended up listing every directory on the entire server as it searched, from the root down through everyone's world-readable directories. Is there any way to search for directories without this "verbose" flood of thousands upon thousands of directory names filling my terminal?
To search the entire filesystem for a directory named mydirectory, use:
find / -type d -name 'mydirectory'
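On a multi-user server, much of the remaining noise is usually "Permission denied" errors for directories you cannot read; you can discard those by redirecting stderr:
find / -type d -name 'mydirectory' 2>/dev/null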
That is the slow way, though. On a well-configured Unix system, there will generally be a locate command installed. locate does not have all the fancy features of find but, because it works from a database, it will be much faster. To find, for example, all files in any directory called mydirectory, try:
locate /mydirectory/
Usually, locate's database is updated only once a day, so if the files or directories you are looking for were installed today, you may need to use find.
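On most Linux systems you can also refresh the database by hand (the exact command name and schedule vary by distribution, so treat this as an assumption):
sudo updatedb    # rebuild locate's database now instead of waiting for the daily job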
I would like to find the location of a Git repository I made on my Mac. Is there a way to find, for example, albatrocity/git-version-control.markdown on macOS using the Terminal? I installed everything with default settings. I guess it must be somewhere in my user directory, but I can't find anything related to GitHub there.
Once I find it, I would like to remove it completely so I can do a "proper" install.
EDIT: sudo find / -wholename "*albatrocity/git-version-control.markdown" showed me nothing in Terminal, so I also tried sudo find / -name "parsing.py" -print with a file that I know the folder contains.
You can use find's -wholename option to find a file based on its name and folder:
find <directory> -wholename "*albatrocity/git-version-control.markdown"
Example, if you want to search in the /Users/ directory:
find /Users/ -wholename "*albatrocity/git-version-control.markdown"
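Note that -wholename is a GNU extension; if the BSD find that ships with macOS doesn't accept it, -path is the portable spelling of the same test:
find /Users/ -path "*albatrocity/git-version-control.markdown"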
If you have locate on your Mac, and a regularly running updatedb, locate might be much faster:
locate albatrocity | grep git-version-control.markdown
It uses a prebuilt database for fast filename lookups, but that database can be out of date if updatedb isn't run regularly or the file is too new (typically less than a day old).
If that turns up nothing, I would fall back to a full search with find, but restrict it to a plausible, narrowed-down path.
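On macOS the locate database isn't built by default; as far as I recall, you enable the daily indexing once with (treat the exact plist path as an assumption):
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist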
I've got 218GB of assorted files recovered from a failing hard drive using PhotoRec. The files do not have their original file names and they're not sorted in any manner.
How can I go about sorting the files into separate folders by file type? I've tried searching for .jpg, for example, and I can copy those results into a new folder. But when I search for something like .txt, I get 16GB of text files as the result and there's no way I've found to select them all and copy them into their own folder. The system just hangs.
This is all being done on Windows 10.
Open PowerShell. Change to the recovered data folder:
cd c:\...\recovered_files
Make a directory for the text files:
mkdir text_files
Do the move:
mv *.txt text_files
You really want to move/cut the files like this rather than copy them: a move within the same volume is just a rename (very fast), whereas a copy has to duplicate all of the data (quite slow).
If your files are distributed among many directories, you need something recursive. On Linux this would be quite simple with find; on Windows I have never tried anything like it. On MSDN there is an article about PowerShell featuring an example that seems reminiscent of what you want to do: MSDN Documentation
The gist of it is that you would use the command:
cd <your recovered files directory containing the recup_dir folders>
Get-ChildItem -Path . -Filter *.txt -Recurse | Move-Item -Verbose -Destination "Z:\stock_recovered\TXT"
Note that the destination is outside of the search path, which might be important!
Since I have never tried this before, there is NO WARRANTY. Supposing it works, I would be curious to know.
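For comparison, if you ever do this from Linux or Cygwin instead, a one-liner sketch with GNU findutils/coreutils (the destination path here is hypothetical, and should again sit outside the search path):
# move every .txt found below the current directory into a single folder
find . -type f -name '*.txt' -exec mv -t /path/to/text_files {} +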
I need a script that will find and fetch all matching files in all subdirectories (leaving them in the folder structure they are in now). I know how to find and print those files:
find . -name "something.extension"
The point is, those directories contain lots of files that were used before, and I don't want those; the script should only find files whose paths match a pattern like this:
xxx/trunk/xxx/src/main/resources
xxx is different every time, and after resources there are more levels of folders whose names also vary with xxx.
Every top-level xxx folder contains a folder named tags (at the same level as trunk) that stores previous releases of the module (every release has files with the name I am looking for, but I don't want those outdated files).
So I want to find all those files in subdirectories matching the path pattern I specified, and copy them to a new location while keeping the folder structure as it is right now.
I am using Windows and Cygwin.
Update
I combined the commands from the answer 'that other guy' posted below, and it works. Just to be clear, I have something like this:
find */trunk/*/src/main/resources -name "something.extension" -exec mkdir -p /absolute/target/path/{} \; -exec cp {} /absolute/target/path/{} \;
Thanks.
Instead of searching under the entire current directory (.), just search under the directories you care about:
find */trunk/*/src/main/resources -name "something.extension"
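If you also want the copy step to preserve the directory structure in one pass, GNU cp's --parents flag (included with Cygwin's coreutils) can do it; a sketch, assuming the target directory already exists:
# re-create each file's relative path under the target while copying
find */trunk/*/src/main/resources -name "something.extension" -exec cp --parents {} /absolute/target/path \;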
I have a folder containing many files and subfolders multiple levels deep. I'm looking for a command or script that will zip any subfolder called "fonts", resulting in a fonts.zip file at the same level as the fonts folder.
The fonts folders should remain after creation of their zip files (no delete).
If there is a fonts folder inside another fonts folder (unlikely case), only the top-level fonts folder should result in a fonts.zip file (ideal, but not mandatory).
If there is already a fonts.zip file at the same level as the fonts folder, it should be replaced.
I'll admit, I'm a Mac newbie. Hopefully there is a simple Terminal command that can do this, but I'm open to other approaches.
Thanks,
-Matt
Using Kevin Grant's suggestion as a starting point, I was able to put together a terminal command that works the way I needed:
find . -type d -iname '*fonts' -execdir ditto -c -k -X --rsrc {} fonts.zip \;
I referred to the man pages for the find and ditto commands and their switches. I chose ditto over zip because it is supposed to be more compatible with HFS and resource forks under Mac OS X. Next I'll find out whether StuffIt has a command-line tool; if so, I'll use it instead of ditto.
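For reference, if resource forks don't matter to you, the stock zip utility can do the same job; a sketch (my pattern matches folders named exactly "fonts", and the rm ensures an existing fonts.zip is replaced rather than updated in place):
# for each fonts directory: delete any stale archive beside it, then zip it
find . -type d -iname 'fonts' -execdir rm -f fonts.zip \; -execdir zip -rq fonts.zip {} \;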
In editors/IDEs such as Eclipse and TextMate, there are shortcuts to quickly find a particular file in a project directory.
Is there a similar tool to do full path completion on filenames within a directory (recursively), in bash or other shell?
I have projects with a lot of directories, and deep ones at that (sigh, Java).
Hitting Tab in the shell only cycles through files in the immediate directory, and that's not enough.
find /root/directory/to/search -name 'filename.*'
# Directory is optional (defaults to cwd)
Standard UNIX globbing is supported. See man find for more information.
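For example, -iname makes the pattern case-insensitive (the ~/projects path here is just a placeholder):
find ~/projects -iname 'readme*'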
If you're using Vim, you can use:
:e **/filename.cpp
Or :tabnew, or any Vim command which accepts a filename.
If you're looking to do something with a list of files, you can use find combined with the bash $() construct (better than backticks since it's allowed to nest).
For example, say you're at the top level of your project directory and you want a list of all C files starting with "btree". The command:
find . -type f -name 'btree*.c'
will return a list of them. But this doesn't really help with doing something with them.
So, let's further assume you want to search all those files for the string "ERROR", or edit them all. You can execute one of:
grep ERROR $(find . -type f -name 'btree*.c')
vi $(find . -type f -name 'btree*.c')
to do this.
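One caveat: the $() form splits on whitespace, so it breaks on filenames containing spaces. If that can happen, it's safer to let find invoke grep itself:
# find passes the file list to grep directly, so spaces in names are safe
find . -type f -name 'btree*.c' -exec grep -n ERROR {} +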
When I was in the UNIX world (using tcsh (sigh...)), I had all sorts of "find" aliases/scripts set up for searching for files. I think the default "find" syntax is a little clunky, so I used to have aliases/scripts that pipe "find . -print" into grep, which lets you use regular expressions for searching:
# finds all .java files starting in current directory
find . -print | grep '\.java'
#finds all .java files whose name contains "Message"
find . -print | grep '.*Message.*\.java'
Of course, the above examples can be done with plain old find, but for a more specific search grep can help quite a bit. This works pretty well, unless "find . -print" has too many directories to recurse through... then it gets pretty slow. (For example, you wouldn't want to do this starting at the root, "/".)
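When it does get slow, pruning a bulky subtree helps; a sketch, where build stands in for whatever directory you want find to skip:
# skip the ./build subtree entirely and print everything else
find . -path ./build -prune -o -print | grep '\.java$'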
I use ls -R, piped to grep like this:
$ ls -R | grep -i "pattern"
where -R means recursively list all the files, and -i makes the match case-insensitive. Finally, the pattern could be something like "std.*\.h" or "^io" (anything that starts with "io" in the file name).
I use this script to quickly find files across directories in a project. I have found it works great, and it takes advantage of Vim's autocomplete by opening and closing a new buffer for the search. It also smartly completes as much as possible for you, so you can usually type just a character or two and open the file from any directory in your project. I started using it specifically because of a Java project, and it has saved me a lot of time. You build the cache once when you start your editing session by typing :FC (directory names); you can also just use . to get the current directory and all subdirectories. After that, you type :FF (or :FS to open a new split) and it opens a new buffer to select the file you want. Once you select the file, the temp buffer closes and you are inside the requested file, ready to edit.
The Linux/UNIX find command: http://content.hccfl.edu/pollock/Unix/FindCmd.htm
Yes, bash has filename completion mechanisms. I don't use them myself (too lazy to learn, and I don't find it necessary often enough to make it urgent), but the basic mechanism is to type the first few characters, and then a tab; this will extend the name as far as it can (perhaps not at all) as long as the name is unambiguous. There are a boatload of Emacs-style commands related to completion in the good ol' man page.
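A related trick, assuming bash 4 or later: the globstar option makes ** match recursively, which gets you most of the way to project-wide path expansion:
# enable recursive ** globbing, then expand paths across all subdirectories
shopt -s globstar
ls **/filename.java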
locate <file_pattern>
find will certainly work, and it can target specific directories. However, it is slower than locate. On a Linux OS, a database of all directories and files is rebuilt each morning, and the locate command searches that database efficiently, so if you are looking for files that weren't created today, this is the fastest way to accomplish the task.