Searching in source code - bash

Often I work on different projects and sometimes there is a lack of documentation.
So I decided to use open-source code to see how people have solved different problems.
The idea is: if I run into a function I don't know how to use, I look at how other developers have used that function before.
Approach:
I downloaded a few pretty decent projects done by other people and put them into one folder.
Now, if I don't know how a function is used (e.g. main()), I do:
find . -name \*.py | xargs cat | grep -n "main()"
Consequently I get examples of its use. But there is a problem: I don't know which file each example comes from. It would be perfect if I could get the name of the file as well as the line number.
This seems to be a limitation of piping through cat, because cat concatenates all the files, so any line numbers refer to positions in cat's combined output rather than in the original files. So I feel this approach is flawed at the root.
i.e.
I want to be able to search for functions/symbols across a large collection of source code
and get the file name and line number where a given pattern occurs.
I prefer a console-based approach.
Any advice?

Try this:
find . -name \*.py -exec grep -nH "main()" {} \;
Explanation:
The "-exec" option says to execute the following command, up until \; for each file it finds.
The "-H" option to grep causes it to print the name of the file in which the string was found.
The "-n" option causes grep to print the line numbers.
The {} is a placeholder that expands to the name of the file that "find" just found.
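If the source tree is large, running one grep process per file can be slow. A variant worth knowing (a sketch, assuming your find supports the POSIX + terminator) passes many files to a single grep invocation instead:
find . -name '*.py' -exec grep -nH "main()" {} +
The output is the same file:line:match format, just produced with far fewer process launches.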

You only need the grep command:
$ grep -nr 'main()' /path/to/projects/folder/* | grep '\.py:'
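If your grep supports it (GNU grep and the BSD grep shipped with macOS both do, but this is an assumption about your system), the --include option does the file-name filtering in the same pass, so the second grep is not needed:
grep -rn --include='*.py' 'main()' /path/to/projects/folder/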

Want to search source files? Why not try http://beyondgrep.com/ ?

I won't answer from the bash point of view.
I don't know which editor/IDE you are using, but for dissecting code there is no better tool for me than:
Vim in combination with Ctags
the CtrlP and ctrlp-funky plugins plus an MRU plugin
proper search and regex usage
a good Vim debugger
With these there is no part of the code that can't be examined. Sorry if you are using other tools; I am just suggesting what I find works best for code analysis.
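For completeness, a minimal sketch of the Ctags workflow mentioned above, assuming Exuberant or Universal Ctags is installed as the ctags command:
ctags -R .     # index all source files under the current directory into a tags file
vim -t main    # open Vim directly at the definition of 'main'; Ctrl-] jumps to the tag under the cursor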

Related

How do I get the 'head' of all files in a specified directory?

I am a beginner at UNIX. I'm trying to create a bash script that lists the 'head' of every file in a specified directory, but I've tried everything and it doesn't seem to work. How would I do it? Below is the code I currently have in my script. I intend to add more to the script later on, but I need this part to work first.
numberOfLines=$1
directoryName=$2
head $numberOfLines $directoryName
Try this:
head $directoryName/* -n $numberOfLines
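One caveat, hedged: on BSD/macOS head the options must come before the file operands, so the more portable ordering would be:
head -n "$numberOfLines" "$directoryName"/*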
You are calling the head command the wrong way.
Compare your code to the manual page.
I would use the find command:
find "$directory" -maxdepth 1 -type f -exec head -n "$numberOfLines" {} \;
This ensures that head will be executed only on files and not directories.
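Put into the shape of the original script, that would look something like this (a sketch; the variable names are taken from the question):
#!/bin/bash
numberOfLines=$1
directoryName=$2
# head only the regular files directly inside the directory, skipping subdirectories
find "$directoryName" -maxdepth 1 -type f -exec head -n "$numberOfLines" {} \;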
head works on a file (or a group of files), not on a directory, so you need to adjust your directoryName argument so that the shell expands it to "every file in this directory" instead of passing the directory itself.
The easiest way would be to add "/*" to the directoryName, changing your third line to this:
head $numberOfLines ${directoryName}/*
Example:
myshell:tmp gdalton$ ./script.sh -2 hello
==> hello/file1 <==
file 1
==> hello/file2 <==
file 2
file 2
Note that you will need to pass the first parameter with a leading dash, as I did in the example, because of the syntax head expects. You could easily fix this in your script, using the change above as a starting point. I'd strongly advise checking the man page for head so you can see how to structure your shell commands; man pages often document a wealth of options.
man head
Good luck.

Fuzzy search Shell command?

The following situation:
I am on a different Mac (no command history), using the Terminal (bash), and I remember only part of a command, e.g. I was searching for a command with util in it and did not remember that it was mdutil.
How can I fuzzy-search for a command efficiently, entirely in the terminal and without creating new files?
Typical ways I do it now:
To find the command I could google it; that is not always efficient and requires an internet connection and a browser.
Or press Tab Tab to list all commands and scroll through them until I recognize the right one.
Or output all commands to a text file and search in that.
I guess you could do something like this:
oldIFS="$IFS"
IFS=:
for dir in $PATH; do
ls "$dir"/*util* 2> /dev/null
done
IFS="$oldIFS"
That would loop through all the directories in your $PATH looking for a command that contains util.
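If you do this often, a hypothetical helper function (the name findcmd is made up here) keeps it to one line at the prompt:
findcmd() {
    local IFS=: dir    # local IFS scopes the change to the function, so no manual save/restore is needed
    for dir in $PATH; do
        ls "$dir"/*"$1"* 2> /dev/null
    done
}
# usage: findcmd util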
How about starting with man -k and refining, like this:
man -k util | grep -i meta
Moose::Util::MetaRole(3pm) - Apply roles to any metaclass, as well as the object base class
mdutil(1) - manage the metadata stores used by Spotlight
compgen -ca | grep util
did the job best for me. Instead of util you can search for any part of a command name.
Like gniourf_gniourf said, a better solution would be
compgen -caX '!*util*'
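A hypothetical wrapper (the name cmdgrep is my own) if you want to reuse it:
cmdgrep() { compgen -caX "!*$1*"; }    # lists every command or alias whose name contains $1
# usage: cmdgrep util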

Using find and grep to Mimic findstr Command [closed]

I recently switched from a Windows development environment to an Apple development environment. The move has been a challenging process, but I'm struggling with picking up UNIX-based commands in the terminal to make up for the commands I used on a daily basis in the Windows command prompt. Any help would be greatly appreciated; explanations at a basic level of what's going on in the commands provided are a huge bonus for me, as I'm trying to get a grasp on the UNIX commands, but reading the manual is like reading a foreign language to me most of the time.
Here's my question: Is there a single line command, preferably short enough to memorize, that I can execute to mimic or produce very similar output to the following Windows CMD command:
findstr /s /c:"this piece of text" *.code
I use this command on Windows often to produce a result set that shows me where the text between the quotes resides in any of the files matching the *.code pattern in any subdirectories. This can be used to check version numbers of numerous files pulled back from servers to looking for where a variable was declared in a large project. The output comes in this form:
file1.code: other text this piece of text other text
file2.code: other text this piece of text other text
file3.code: other text this piece of text other text
file4.code: other text this piece of text other text
file5.code: other text this piece of text other text
Where other text is any other text found on the same line as my search string in the given file. I have searched through the questions here and found several people using find . -name *.code to build a list of files in the subdirectories. They then use the -exec flag from the find command paired with a grep sequence to search text. I tried this in several of the mentioned ways and was failing, I think due to escape sequences or missed characters. It would be awesome if there was a way to just give the command a string in between quotes that it just searched for as is.
I tried the following and wasn't getting any results... Maybe a syntax error?
find . -exec grep -H .getItemToChange().getItemAttributes()
UPDATE
The correct code is provided below with a great explanation by John. If this helps you like it helped me, give his answer an upvote!
I was hoping to find the .java file containing this function call in a large project. The command above wasn't giving me any results, and it also didn't offer a way to filter to only *.java files.
Can anyone help me out here? Explanations to your commands are GREATLY appreciated.
find . -name '*.code' -exec grep -H 'pattern' {} +
Make sure to quote '*.code' so the shell doesn't expand the * wildcard. Usually we do want the shell to do the expansion, but in this case we want the literal string *.code to be passed to find so it can do the wildcard expansion itself.
When you use -exec you need to put {} somewhere; it's the placeholder for the file names that are found. You also need either \; or + at the end of the command. That's how you signal to find where the end of -exec's arguments is (it's possible to have other actions following -exec). \; will cause grep to be run once for each file, while + runs a single grep on all of the files.
find . -name '*.code' -print0 | xargs -0 grep -H 'pattern'
Another common way to do this is by chaining together find and xargs. I like using -exec better, but find+xargs works just as well. The idea here is that xargs takes file names passed in on stdin and runs the named command with those file names appended. To get a suitable list of file names passed in we pipe the output of a find command into xargs.
The -print0 option tells find to print each file it finds along with a NUL character (\0). This goes hand in hand with xargs's -0 option. Using -print0 and -0 ensures that we can handle file names with unusual characters like whitespace and quotes correctly.
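Applied to the .java example from the question, either form would look something like this (a sketch; the -F flag makes grep treat the call as a literal string so the dots and parentheses are not interpreted as regex syntax):
find . -name '*.java' -exec grep -HnF '.getItemToChange().getItemAttributes()' {} +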

Loop through a directory with Grep (newbie)

I'm trying to loop through the directory the script resides in, which has a bunch of files that end with _list.txt. I would like to grab each file name, assign it to a variable, execute some additional commands, and then move on to the next file until there are no more _list.txt files left to process.
I assume I want something like:
while file_name=`grep "*_list.txt" *`
do
Some more code
done
But this doesn't work as expected. Any suggestions of how to accomplish this newbie task?
Thanks in advance.
If I understand your problem correctly, you don't need grep. You can just do:
for file in *_list.txt
do
# use $file, like echo $file
done
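One detail worth hedging: if no *_list.txt files exist, bash leaves the literal pattern in $file. A small sketch that guards against that, assuming bash's nullglob option:
shopt -s nullglob      # an unmatched *_list.txt now expands to nothing instead of itself
for file in *_list.txt
do
    echo "processing $file"    # replace with your real commands
done
shopt -u nullglob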
grep is one of the most useful Unix commands and is well worth learning thoroughly; see some useful examples here. As for your current requirement, I think the following code will be useful:
for file in *.*
do
echo "Happy Programming"
done
In place of *.* you can use any other glob pattern (for example *_list.txt). For more such examples, see First Time Linux, or read all the grep options in your terminal using man grep.

How can you open files which contain the word "exam" in terminal?

I want to open many PDF files whose names contain the word exam.
My Mac's terminal uses Bash.
The word exam appears at a random position in the name: sometimes at the beginning, sometimes in the middle, and sometimes at the end.
How can I open these files from the terminal?
find . -name "*exam*" -exec <name of pdf reader executable> {} \;
Don't parse ls. Its output is not reliable; it's meant for humans to look at, not for programs to parse. See http://mywiki.wooledge.org/ParsingLs.
Don't use xargs. It mangles your data while trying to be clever. Got spaces or quotes in your filenames? You can be sure it'll blow up.
To make xargs behave, you'd have to go to great lengths:
printf '%s\0' *Exam* | xargs -0 open
Yes, that's rather convoluted. Read on.
The find solution, while accurate, is recursive (might be what you want), but also a bit much to type in a prompt.
find . -name '*Exam*' -exec open {} +
You can make all of that a lot easier by remembering that open on Mac OS X happily takes multiple arguments (which is exactly what xargs provides), so this is all you need to open every document in the current directory whose name contains the word Exam:
open *Exam*
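If the casing varies (the question mentions both exam and Exam), a sketch assuming bash's nocaseglob option:
shopt -s nocaseglob    # globbing ignores case for this shell session
open *exam*.pdf
shopt -u nocaseglob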
Something like
acroread *Exam*.pdf
should work. This matches any name that has "Exam" in it and ends with ".pdf". It also assumes that you have a command called "acroread" that knows how to read PDFs, which may or may not be true on Mac OS X.
I would use:
ls *Exam* | xargs open
