I have a directory containing a bunch of header files from a library. I would like to see how "Uint32" is defined.
So, I need a way to scan over all those header files and print out lines with "Uint32".
I guess grep could help, but I'm new to shell scripts.
What should I do?
There are a couple of ways.
grep -r --include="*.h" Uint32 .
is one way.
Another is:
find . -name "*.c" | xargs grep Unit32
If you have spaces in the file names, the latter can be problematic.
find . -name "*.c" -print0 | xargs -0 grep Unit32
will solve that typically.
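Another spaces-safe option, as a minimal sketch using standard find features, is to let find run grep directly:
find . -name "*.h" -exec grep -Hn "Uint32" {} +
With {} + find passes many file names to each grep invocation; -H forces the file-name prefix even if only one file is passed, and -n adds line numbers.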
Just simple grep will be fine:
grep "Uint32" *.h*
This will search both *.h and *.hpp header files in the current directory (note that it does not recurse into subdirectories).
Whilst using grep is fine, for navigating code you may also want to investigate ack (a source-code aware grep variant) and/or ctags (which integrates with vi or emacs and allows navigation through code in your editor).
ack in particular is very nice, since it'll navigate through directory hierarchies, and only work on specific types of files (so for C it'll interrogate .c and .h files, but ignore SCM revision directories, backup files etc.)
Of course, you really need some form of IDE to give you complete navigation over the codebase.
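For instance, a minimal ack invocation from the top of the source tree (ack searches recursively by default and, as noted above, skips VCS directories and backup files):
ack Uint32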
I have a bunch of files with different names in different subdirectories. I created a txt file with those names, but I cannot make find work using that file. I have seen posts about problems creating the list, and about not using find at all (though I do not understand the reason). Suggestions? It is difficult for me to come up with an example because I do not know how to reproduce the directory structure.
The following are the names of the files (just in case there is a formatting problem):
AO-169
AO-170
AO-171
The best that I came up with is:
cat ExtendedList.txt | xargs -I {} find . -name {}
It obviously dies in the first directory that it finds.
I also tried
ta="AO-169 AO-170 AO-171"
find . -name $ta
but it complains find: AO-170: unknown primary or operator
If you are trying to ask "how can I find files with any of these names in subdirectories of the current directory", the answer to that would look something like
xargs printf -- '-o\0-name\0%s\0' <ExtendedList.txt |
xargs -r0 find . -false
The -false is just a cute way to let the list of actual predicates start with "... or".
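For the three example names above, the command that ends up running is effectively:
find . -false -o -name 'AO-169' -o -name 'AO-170' -o -name 'AO-171'
which prints the path of every file matching any of the names.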
If the list of names in ExtendedList.txt is large, this could fail if the second xargs decides to break it up between -o and -name.
The option -0 is not portable, but should work e.g. on Linux or wherever you have GNU xargs.
If you can guarantee that the list of strings in ExtendedList.txt does not contain any characters which are problematic to the shell (like single quotes), you could simply say
sed "s/.*/-o -name '&'/" ExtendedList.txt |
xargs -r find . -false
I have a root directory that I need to run a find and/or grep command on to return a list of files that contain a specific string.
Here's an example of the file and directory set up. In reality, this root directory contains a lot of subdirectories that each have a lot of subdirectories and files, but this example, I hope, gets my point across.
From root, I need to go through each of the children directories, specifically into subdir/ and look through file.html for the string "example:". If a result is found, I'd like it to print out the full path to file.html, such as website_two/subdir/file.html.
I figured limiting the search to subdir/file.html will greatly increase the speed of this operation.
I'm not too knowledgeable about find and grep, but I have tried the following with no luck, and I honestly don't know how to troubleshoot it.
find . -name "file.html" -exec grep -HI "example:" {} \;
EDIT: I understand this may be marked as a duplicate, but I think my question is more along the lines of: how can I tell the command to only search a specific file in a specific path, looping through all root-level directories?
find ./ -type f -iname file.html -exec grep -l "example:" {} +
or
grep -Rl "example:" ./ | grep -iE "file.htm(l)*$" will do the trick.
Quote from GNU Grep 2.25 man page:
-R, --dereference-recursive
Read all files under each directory, recursively. Follow all symbolic links, unlike -r.
-l, --files-with-matches
Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning will stop on the first match.
-i, --ignore-case
Ignore case distinctions in both the PATTERN and the input files.
-E, --extended-regexp
Interpret PATTERN as an extended regular expression.
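To restrict the search to the subdir/file.html layout described in the question, you can also match on the path itself. A hedged sketch (assuming every file of interest really does live at */subdir/file.html):
find . -path "*/subdir/file.html" -exec grep -l "example:" {} +
This only greps files named file.html that sit directly inside a subdir/ directory.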
On Mac/Linux there is a command to merge MP3 files together, which is
cat file1.mp3 file2.mp3 > newfile.mp3
I was wondering if there is a simpler way, or a command, to select multiple MP3s in a folder and output them as a single file?
The find command would work. In this example I produce a sorted list of *.mp3 files in the current directory, cat the files, and append them to the output file called out:
find . -maxdepth 1 -type f -name '*.mp3' -print0 |
sort -z |
xargs -0 cat -- >>out
I should warn you, though. If your MP3 files have ID3 headers in them, then simply appending the files is not a good way to go, because the headers are going to wind up littered throughout the file. There are some tools that manage this much better, http://mp3wrap.sourceforge.net/ for example.
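A minimal sketch of how mp3wrap is typically invoked (the file names are placeholders; check mp3wrap's own help for the exact options on your system):
mp3wrap merged.mp3 file1.mp3 file2.mp3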
Simply concatenating the files won't work. Don't forget that modern MP3 files have metadata at the head of the file. Even if you don't care about the artist name, album name, etc., you should at least make the "end of file" marker correct.
Better to use a tool like http://mulholland.xyz/dev/mp3cat/.
You can use mp3cat by Darren Mulholland, available at https://darrenmulholland.com/dev/mp3cat.html
Source is available at https://github.com/dmulholland/mp3cat
I'm using the find command in a ksh script, and I'm trying to retrieve just the filenames, rather than the full path. As in, I want it to return text.exe, not //servername/dir1/dir2/text.exe.
How would I go about getting that? To clarify, I know the directory the files are in, I am just grabbing the ones created before a certain date, so the pathname doesn't matter.
If you're using GNU find, then
find path -printf "%f\n"
will just print the file name and exclude the path.
find ... -exec basename {} \;
will also do the trick ... but as @Kent asks, why do you want this?
You can do it with:
find ..... |sed 's#.*/##'
However, does it really make sense? If there are two files with the same filename located in different directories, how can you distinguish them?
e.g.
you are in /foo
/foo/a.txt
/foo/bar/a.txt
EDIT: edited the answer for better text formatting.
As you described in the comments, you want to:
find some files,
copy them to a dir,
gzip them to an archive say a.gz
remove copied files only if step 2 was successful
This could be done in one shot:
find ...|xargs tar -czf /path/to/your/target/a.gz
This will find the files and create a gzipped tar archive (a.gz) in your target dir.
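If the file names may contain spaces, a hedged variant (assuming GNU find and GNU tar; the search path and name pattern are placeholders) would be:
# null-separated names survive spaces; --remove-files (GNU tar) deletes each
# original only after it has been added to the archive
find /path/to/search -name '*.log' -print0 |
tar --null -czf /path/to/your/target/a.gz --files-from=- --remove-files
Note that this archives the originals directly instead of copying them first, matching the one-shot approach above.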
Here's another answer.
find | awk -F/ '{print $NF}'
GNU find natively supports this using -printf so all that you need to do is
find ... -printf '%f\n'
Update: Oops... this was already covered in the answer by @glenn-jackman, which I somehow missed.
My best shot so far is (for looking for strings in a directory containing a large C program)
find ~/example_directory -type f \( -name "*.mk" -or -name "*.[sch]" \) -print0 | xargs -0 -e grep "example_string"
Which works pretty well, but it relies on all the interesting things being in .mk makefiles, .c or .h source files, and .s assembler files.
I was thinking of adding things like 'all files called Makefile' or 'all *.py python scripts', but it occurs to me that it would be way easier if there were some way to tell find to only find text files.
If you just run grep on all files, it takes ages, and you get lots of uninteresting hits on object files.
GNU grep supports the -I option, which makes it treat binary files (as determined by looking at the first few bytes) as if they don't match, so they are essentially skipped.
grep -rI <pattern> <path>
The '-r' switch makes grep recurse, and '-I' makes it ignore binary files.
There are additional switches to exclude certain files and directories (I frequently do this to exclude svn metadata, for example).
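For example, a sketch assuming GNU grep (which provides --exclude-dir) and the ~/example_directory path from the question:
grep -rI --exclude-dir=.svn "example_string" ~/example_directory
This recurses, skips binary files, and skips Subversion's .svn metadata directories.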
Have you looked at ack?
From the top 10 reasons to use ack:
ack ignores most of the crap you do not want to search
...
binary files, core dumps, etc
You can use grep -I to ignore binary files. Using GNU Parallel instead of xargs will allow you to break up the work into multiple processes, exploiting some parallelism for speedup.
There is an example of how to perform a parallel grep available in the documentation:
http://www.gnu.org/s/parallel/man.html#example__parallel_grep
find -type f | parallel -k -j150% -n 1000 -m grep -I "example_string"